diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmcsj" "b/data_all_eng_slimpj/shuffled/split2/finalzzmcsj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmcsj" @@ -0,0 +1,5 @@ +{"text":"\\section{Preliminaries}\n\\subsection{Forcing}\nFirst we review some essential facts about forcing. We refer the reader to \\cite{jechbook} and \\cite{kunen} for background and details.\n\nA partial order $\\mathbb{P}$ is said to be \\emph{separative} when $p \\nleq q \\Rightarrow (\\exists r \\leq p) r \\perp q$. Every partial order $\\mathbb{P}$ has a canonically associated equivalence relation $\\sim_s$ and a separative quotient $\\mathbb{P}_s$, which is isomorphic to $\\mathbb{P}$ if $\\mathbb{P}$ is already separative. In most cases we will assume our partial orders are separative. For every separative partial order $\\mathbb{P}$, there is a canonical complete boolean algebra $\\mathcal{B}(\\mathbb{P})$ with a dense set isomorphic to $\\mathbb{P}$.\n\nA map $e: \\mathbb{P} \\to \\mathbb{Q}$ is an \\emph{embedding} when it preserves order and incompatibility. An embedding is said to be \\emph{regular} when it preserves the maximality of antichains. A order-preserving map $\\pi : \\mathbb{Q} \\to \\mathbb{P}$ is called a \\emph{projection} when $\\pi(1_\\mathbb{Q}) = 1_\\mathbb{P}$, and $p \\leq \\pi(q) \\Rightarrow (\\exists q' \\leq q) \\pi(q') \\leq p$.\n\n\n\\begin{lemma}Suppose $\\mathbb{P}$ and $\\mathbb{Q}$ are partial orders. \n\n\\begin{enumerate}[(1)]\n\\item $G$ is a generic filter for $\\mathbb{P}$ iff $\\{ [p]_s : p \\in G \\}$ is a generic filter for $\\mathbb{P}_s$.\n\n\\item $e : \\mathbb{P} \\to \\mathbb{Q}$ is a regular embedding iff for all $q \\in \\mathbb{Q}$, there is $p \\in \\mathbb{P}$ such that for all $r \\leq p$, $e(r)$ is compatible with $q$.\n\n\\item The following are equivalent:\n\\begin{enumerate}[(a)]\n\\item There is a regular embedding $e : \\mathbb{P}_s \\to \\mathcal{B}(\\mathbb{Q}_s)$.\n\\item There is a projection $\\pi : \\mathbb{Q}_s \\to \\mathcal{B}(\\mathbb{P}_s)$.\n\\item There is a $\\mathbb{Q}$-name $\\dot{g}$ for a $\\mathbb{P}$-generic filter such that for all $p \\in \\mathbb{P}$, there is $q \\in \\mathbb{Q}$ such that $q \\Vdash p \\in \\dot{g}$.\n\\end{enumerate}\n\n\\item Suppose $e : \\mathbb{P} \\to \\mathbb{Q}$ is a regular embedding. If $G$ is a filter on $\\mathbb{P}$ let $\\mathbb{Q}\/G = \\{ q : \\neg \\exists p \\in G( e(p) \\perp q) \\}$. The following are equivalent:\n\\begin{enumerate}[(a)]\n\\item $H$ is $\\mathbb{Q}$-generic over $V$.\n\\item $G = e^{-1}[H]$ is $\\mathbb{P}$-generic over $V$, and $H$ is $\\mathbb{Q} \/ G$-generic over $V[G]$.\n\\end{enumerate}\n\n\\end{enumerate}\n\\end{lemma}\n\n\n\\begin{lemma}Suppose $\\mathbb{P}$ and $\\mathbb{Q}$ are partial orders. $\\mathcal{B}(\\mathbb{P}_s) \\cong \\mathcal{B}(\\mathbb{Q}_s)$ iff the following holds. 
Letting $\\dot{G}, \\dot{H}$ be the canonical names for the generic filters for $\\mathbb{P},\\mathbb{Q}$ respectively, there is a $\\mathbb{P}$-name for a function $\\dot{f}_0$ and a $\\mathbb{Q}$-name for a function $\\dot{f}_1$ such that:\n\\begin{enumerate}[(1)]\n\\item $\\Vdash_\\mathbb{P} \\dot{f}_0(\\dot{G})$ is a $\\mathbb{Q}$-generic filter,\n\\item $\\Vdash_\\mathbb{Q} \\dot{f}_1(\\dot{H})$ is a $\\mathbb{P}$-generic filter,\n\\item $\\Vdash_\\mathbb{P} \\dot{G} = \\dot{f}_1^{\\dot{f}_0(\\dot{G})} (\\dot{f}_0(\\dot{G}))$, and $\\Vdash_\\mathbb{Q} \\dot{H} = \\dot{f}_0^{\\dot{f}_1(\\dot{H})} (\\dot{f}_1(\\dot{H}))$.\n\\end{enumerate}\nAn isomorphism is given by $p \\mapsto || \\check{p} \\in \\dot{f}_1(\\dot{H}) ||_{\\mathcal{B}(\\mathbb{Q}_s)}$.\n\\end{lemma}\n\nFor a broader notion of ``forcing equivalence,'' the best that can be said in general is the following:\n\n\\begin{lemma}Suppose $\\mathbb{P}$ and $\\mathbb{Q}$ are partial orders.\n\\begin{enumerate}[(1)]\n\\item If $e : \\mathbb{P} \\to \\mathbb{Q}$ is a regular embedding, and any $\\mathbb{Q}$-generic $H$ yields $V[H] = V[e^{-1}[H]]$, then there is a predense set $A \\subseteq \\mathcal{B}(\\mathbb{Q}_s)$ such that $\\mathcal{B}(\\mathbb{P}_s) \\cong \\mathcal{B}(\\mathbb{Q}_s) \\restriction a$ for all $a \\in A$.\n\n\\item $\\mathbb{P}$ and $\\mathbb{Q}$ yield the same generic extensions iff for a dense set of $p \\in \\mathbb{P}$, there is $q \\in \\mathcal{B}(\\mathbb{Q}_s )$ such that $\\mathcal{B}(\\mathbb{P}_s) \\restriction p \\cong \\mathcal{B}(\\mathbb{Q}_s )\\restriction q$.\n\\end{enumerate}\n\\end{lemma}\n\nA partial order $\\mathbb{P}$ is said to be \\emph{$\\kappa$-distributive} if for any collection of maximal antichains in $\\mathbb{P}$, $\\{ A_\\alpha : \\alpha < \\beta < \\kappa \\}$, there is a maximal antichain $A$ such that $A$ refines $A_\\alpha$ for all $\\alpha < \\beta$. $\\mathbb{P}$ is called \\emph{$(\\kappa,\\lambda)$-distributive} if the same holds restricted to antichains of size $\\leq \\lambda$. Forcing with $\\mathbb{P}$ adds adds no new functions from any $\\alpha<\\kappa$ to $\\lambda$ iff $\\mathcal{B}(\\mathbb{P})$ is $(\\kappa,\\lambda)$-distributive.\n\nA strictly stronger property than distributivity is strategic closure. For a partial order $\\mathbb{P}$ and an ordinal $\\alpha$, we define a game $G_\\alpha(\\mathbb{P})$ with two players \\emph{Even} and \\emph{Odd}. \\emph{Even} starts by playing some element $p_0 \\in \\mathbb{P}$. At successor stages $\\beta+1$, the next player must play some element $p_{\\beta+1} \\leq p_\\beta$. \\emph{Even} plays at limit stages $\\beta$ if possible, by playing a $p_\\beta$ that is $\\leq p_\\gamma$ for all $\\gamma < \\beta$. If \\emph{Even} cannot play at some stage below $\\alpha$, the game is over and \\emph{Odd} wins; otherwise \\emph{Even} wins. We say that $\\mathbb{P}$ is \\emph{$\\alpha$-strategically closed} if for every $p \\in \\mathbb{P}$, \\emph{Even} has a winning strategy with first move $p$. Note that under this definition, every partial order is trivially $\\omega$-strategically closed.\n\nA stronger property that $\\kappa$-strategic closure is $\\kappa$-closure. $\\mathbb{P}$ is \\emph{$\\kappa$-closed} when any descending chain of length less than $\\kappa$ has a lower bound. 
$\\mathbb{P}$ is \\emph{$\\kappa$-directed closed} when any directed set of size ${<} \\kappa$ has a lower bound.\n\nFor any partial order $\\mathbb{P}$, the \\emph{saturation} of $\\mathbb{P}$, $\\sat(\\mathbb{P})$, is the least cardinal $\\kappa$ such that every antichain in $\\mathbb{P}$ has size less than $\\kappa$. Erd\\H{o}s and Tarski~\\cite{ET} proved that $\\sat(\\mathbb{P})$ is always regular. The \\emph{density} of $\\mathbb{P}$, $\\den(\\mathbb{P})$, is the least cardinality of a dense subset of $\\mathbb{P}$. Clearly $\\sat(\\mathbb{P}) \\leq \\den(\\mathbb{P})^+$ for any $\\mathbb{P}$. We say $\\mathbb{P}$ is \\emph{$\\kappa$-saturated} if $\\sat(\\mathbb{P}) \\leq \\kappa$, and $\\mathbb{P}$ is \\emph{$\\kappa$-dense} if $\\den(\\mathbb{P}) \\leq \\kappa$. A synonym for $\\kappa$-saturation is the \\emph{$\\kappa$-chain condition ($\\kappa$-c.c.).}\n\nThe properties of distributivity, strategic closure, saturation, and density are robust in the sense that they are absolute between $\\mathbb{P}$ and $\\mathcal{B}(\\mathbb{P})$ for any separative partial order $\\mathbb{P}$, and often inherited by intermediate forcings:\n\\begin{lemma}Suppose $e : \\mathbb{P} \\to \\mathbb{Q}$ is a regular embedding and $\\kappa$ is a cardinal.\n\\begin{enumerate}[(1)]\n\\item If $\\mathbb{Q}$ is $\\kappa$-strategically closed, then so is $\\mathbb{P}$.\n\\item $\\mathbb{Q}$ is $\\kappa$-distributive iff $\\mathbb{P}$ is $\\kappa$-distributive and $\\Vdash_{\\mathbb{P}} \\mathbb{Q} \/ \\dot{G}$ is $\\kappa$-distributive.\n\\item $\\mathbb{Q}$ is $\\kappa$-saturated iff $\\mathbb{P}$ is $\\kappa$-saturated and $\\Vdash_{\\mathbb{P}} \\mathbb{Q} \/ \\dot{G}$ is $\\kappa$-saturated.\n\\item $\\mathbb{Q}$ is $\\kappa$-dense iff $\\mathbb{P}$ is $\\kappa$-dense and $\\Vdash_{\\mathbb{P}} \\mathbb{Q} \/ \\dot{G}$ is $\\kappa$-dense.\n\\end{enumerate}\n\\end{lemma}\n\n\nFor any forcing $\\mathbb{P}$ and any $\\mathbb{P}$-name $\\dot{X}$ for a set of ordinals, there is a canonically associated complete subalgebra $\\mathcal{A}_{\\dot{X}} \\subseteq \\mathcal{B}(\\mathbb{P})$ that captures $\\dot{X}$. It is the smallest complete subalgebra containing all elements of the form $|| \\check{\\alpha} \\in \\dot{X} ||$ for $\\alpha$ an ordinal. $\\mathcal{A}_{\\dot{X}}$ has the property that whenever $G \\subseteq \\mathbb{P}$ is generic, $\\dot{X}^G$ and $G \\cap \\mathcal{A}_{\\dot{X}}$ are definable from each other using the parameters $\\mathcal{B}(\\mathbb{P})$ and its powerset, as computed in the ground model. In this case, we have $V[\\dot{X}^G] = V[G \\cap \\mathcal{A}_{\\dot{X}}]$. See \\cite[p. 247]{jechbook} for details. \n\n\n\n\\subsection{Ideals}\n\nLet $Z$ be any set. An \\emph{ideal} $I$ on $Z$ is a collection of subsets of $Z$ closed under taking subsets and pairwise unions. If $\\kappa$ is a cardinal, $I$ is called \\emph{$\\kappa$-complete} if it is also closed under unions of size less than $\\kappa$. \\emph{``Countably complete''} is taken as synonymous with ``$\\omega_1$-complete.'' $I$ is called \\emph{nonprincipal} if $\\{z \\} \\in I$ for all $z \\in Z$, and \\emph{proper} if $Z \\notin I$. Hereafter we will assume all our ideals are nonprincipal and proper.\n\nLet $X = \\bigcup Z$. $I$ is called \\emph{fine} if for all $x \\in X$, $\\{ z : x \\notin z \\} \\in I$. $I$ is called \\emph{normal} if for any sequence $\\langle A_x : x \\in X \\rangle \\subseteq I$, the \\emph{``diagonal union''} $\\{ z : \\exists x(x \\in z \\in A_x) \\}$ is in $I$. 
It is well-known that $I$ is normal iff for any $A \\in \\p(Z) \\setminus I$ and any function $f$ on $A$ such that $f(z) \\in z$ for all $z \\in A$, there is an $x$ such that $f^{-1}(x) \\notin I$.\n\nTo fix notation, let $I^* = \\{ Z \\setminus A : A \\in I \\}$ (the \\emph{$I$-measure one sets}), $I^+ = \\p(Z) \\setminus I$ (the \\emph{$I$-positive sets}), $\\hat{x} = \\{ z : x \\in z \\}$, and denote diagonal unions by $\\nabla_{x \\in X} A_x$. Note that $\\nabla_{x \\in X} A_x = \\bigcup_{x \\in X} \\hat{x} \\cap A_x$.\n\nThe following basic fact seems to have been previously overlooked--see, for example, the hypotheses of several theorems in \\cite{foremanhandbook} and \\cite{foremanduality}.\n\n\\begin{proposition}All normal and fine ideals are countably complete.\n\\end{proposition}\n\\begin{proof}Let $I$ be a normal and fine ideal on $Z \\subseteq \\p(X)$. If $\\{ x_\\alpha : \\alpha < \\kappa \\}$ is an enumeration of distinct elements of $X$, and $\\bigcap_{\\alpha<\\kappa} \\hat{x}_\\alpha \\in I^*$, then $I$ is $\\kappa^+$-complete. For suppose that $\\{ A_\\alpha : \\alpha<\\kappa \\} \\subseteq I$, but $A = \\bigcup_{\\alpha<\\kappa} A_\\alpha \\in I^+$. Then by hypothesis, $A \\cap (\\bigcap_{\\alpha < \\kappa} \\hat{x}_\\alpha) \\in I^+$. Let $f : A \\to X$ be defined by $f(z) = x_\\alpha$, where $\\alpha$ is the least ordinal such that $z \\in A_\\alpha$. By normality, there is some $A_\\alpha \\in I^+$, a contradiction. So it suffices to find an infinite set $\\{x_n : n < \\omega \\} \\subseteq X$ such that $\\bigcap_{n<\\omega} \\hat{x}_n \\in I^*$. Since we assume $I$ is proper and nonprincipal, $X$ is infinite. We show that any infinite set of distinct elements of $X$ suffices.\n\nLet $\\{x_n : n < \\omega \\}$ be distinct elements of $X$, and suppose the contrary, that $B = \\{ z : \\{x_n : n < \\omega \\} \\nsubseteq z \\} \\in I^+$. By fineness, $B \\cap \\hat{x}_0 \\in I^+$. For each $z \\in B \\cap \\hat{x}_0$, let $n_z$ be the largest integer such that $\\{x_0,...,x_{n_z} \\} \\subseteq z$. Let $f : B \\cap \\hat{x}_0 \\to X$ be defined by $f(z) = x_{n_z}$. By normality, there is an $n$ such that $f^{-1}(x_n) \\in I^+$. Then for all $z \\in f^{-1}(x_n)$, $x_{n+1} \\notin z$. This contradicts fineness. \n\\end{proof}\n\n\\begin{proposition}If $I$ is a normal, fine, $\\kappa$-complete ideal on $Z \\subseteq \\p(\\kappa)$, then $\\kappa \\in I^*$.\n\\end{proposition}\n\\begin{proof}Suppose $A = \\{ z \\in Z : z$ is not an ordinal$\\} \\in I^+$. Let $f : A \\to \\kappa$ be such that $f(z)$ is the least $\\alpha \\in z$ such that $\\alpha \\nsubseteq z$. Then for some $\\alpha$, $f^{-1}(\\alpha) \\in I^+$. However, $\\{ z : \\alpha \\subseteq z \\} \\in I^*$ by fineness and $\\kappa$-completeness. \n\\end{proof}\n\nProofs of the following facts can be found in \\cite{foremanhandbook}. If $I$ is an ideal on $Z$, say $A \\sim_I B$ if the symmetric difference $A \\Delta B$ is in $I$. Let $[A]_I$ denote the equivalence class of $A$ mod $\\sim_I$. The equivalence classes form a boolean algebra under the obvious operations, which we denote by $\\p(Z) \/ I$. Normality ensures a certain amount of completeness of the algebra:\n\n\\begin{proposition}\n\\label{sums}\nSuppose $I$ is a normal and fine ideal on $Z \\subseteq \\p(X)$. If $\\{ A_x : x \\in X \\} \\subseteq \\p(Z)$, then $\\nabla A_x$ is the least upper bound of $\\{ [A_x]_I : x \\in X \\}$ in $\\p(Z)\/I$.\n\\end{proposition}\n\nIf we force with this algebra, we get a generic ultrafilter $G$ on $Z$ extending $I^*$. 
We can form the ultrapower $V^Z \/ G$. If this ultrapower is well-founded for every generic $G$, then $I$ is called precipitous. A combinatorial characterization of precipitousness is given by the following:\n\n\\begin{theorem}[Jech-Prikry]\n\\label{prec}\n$I$ is a precipitous ideal on $Z$ iff the following holds: For any sequence $\\langle A_n : n < \\omega \\rangle\\subseteq \\p(I^+)$, such that for each $n$, \n\\begin{enumerate}[(1)]\n\\item $B_n = \\{ [a]_I : a \\in A_n \\}$ is a maximal antichain in $\\p(Z) \/ I$,\n\\item $B_{n+1}$ refines $B_n$,\n\\end{enumerate}\nthere is a function $f$ with domain $\\omega$ such that for all $n$, $f(n) \\in A_n$, and $\\bigcap_{n<\\omega} f(n) \\not= \\emptyset$.\n\\end{theorem}\n\nFor an ideal $I$, the saturation, density, distributivity, and strategic closure of $I$ refers to that of the corresponding boolean algebra. The next proposition is immediate from Theorem~\\ref{prec}:\n\n\\begin{proposition}\nIf $I$ is an $\\omega_1$-complete, $\\omega_1$-distributive ideal, then $I$ is precipitous.\n\\end{proposition}\n\n\\begin{proposition}Suppose $I$ is a $\\kappa$-complete precipitous ideal on $Z$, and there is no $A \\in I^+$ such that $I {\\restriction} A$ is $\\kappa^+$-complete. Let $G$ be $\\p(Z)\/I$-generic, and let $j : V \\to M$ be the associated elementary embedding, where $M$ is the transitive collapse of $V^Z\/G$. Then the critical point of $j$ is $\\kappa$.\n\\end{proposition}\n\n\\begin{proposition}\nLet $I$ be an ideal $Z \\subseteq \\p(X)$. Then $I$ is normal and fine iff $1 \\Vdash_{\\p(Z)\/I} [\\id]_{\\dot G} = j[X]$.\n\\end{proposition}\n\n\\begin{proposition}\nSuppose $I$ is an ideal on $Z \\subseteq \\p(X)$. If $I$ is $\\kappa$-complete and $\\kappa^+$-saturated, or if $I$ is normal, fine, and $|X|^+$-saturated, then every antichain in $\\p(Z)\/I$ has a system of pairwise disjoint representatives. \n\\end{proposition}\n\n\\begin{proof}\nIf $I$ is $\\kappa$-complete, and $\\{ A_\\alpha : \\alpha < \\kappa \\}$ is an antichain, replace each $A_\\alpha$ with $A_\\alpha \\setminus \\bigcup_{\\beta<\\alpha} A_\\beta$. If $I$ is normal and fine, and $\\{ A_x : x \\in X \\}$ is an antichain, replace $A_x$ by $A_x \\cap \\hat{x} \\setminus \\bigcup_{y \\not= x} A_y \\cap \\hat{y}$. \n\\end{proof}\n\n\\begin{theorem}\n\\label{disj}\nSuppose $I$ is a countably complete ideal on $Z$, and every antichain in $\\p(Z)\/I$ has a system of pairwise disjoint representatives. Then:\n\\begin{enumerate}[(1)]\n\\item $I$ is precipitous.\n\\item $\\p(Z)\/I$ is a complete boolean algebra.\n\\item If $G$ is generic over $\\p(Z)\/I$, $j : V \\to M$ is the associated embedding, and $ j[\\lambda] \\in M$, then $M$ is closed under $\\lambda$-sequences from $V[G]$.\n\\end{enumerate}\n\\end{theorem}\n\nIf $\\kappa=\\mu^+$ and $I$ is a normal, fine, $\\kappa$-complete ideal on $\\p_\\kappa(\\lambda)$, then $I$ is not $\\lambda$-saturated. For otherwise, let $j : V \\to M \\subseteq V[G]$ be a generic embedding arising from $I$. Then $M \\models |[\\id]| = \\mu$, and $[\\id] = j[\\lambda]$, so $\\lambda$ has cardinality $\\mu$ in $V[G]$. So the smallest possible density of such an ideal is $\\lambda$.\n\n\\subsection{Elementary embeddings}\nProofs of the following can be found in \\cite{kanamori}.\n\n\\begin{lemma}Suppose $M$ and $N$ are models of $ZF^-$, $j : M \\to N$ is an elementary embedding, $\\mathbb{P} \\in M$ is a partial order, $G$ is $\\mathbb{P}$-generic over $M$, and $H$ is $j(\\mathbb{P})$-generic over $N$. 
Then $j$ has a unique extension $\\hat{j} : M[G] \\to N[H]$ with $\\hat{j}(G) = H$ iff $j[G] \\subseteq H$.\n\\end{lemma}\n\n\\begin{lemma}Suppose $M$, $N$ are transitive models of ZFC with the same ordinals, and $j : M \\to N$ is an elementary embedding. Then either $j$ has a critical point, or $j$ is the identity and $M = N$.\n\\end{lemma}\n\n\\section{Dense ideals from large cardinals}\n\nHere we show that it is consistent relative to an almost-huge cardinal that there is a normal, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$, where $\\kappa$ is the successor of a regular cardinal $\\mu$, and $\\lambda \\geq \\kappa$ is regular, for many particular choices for $\\mu,\\lambda$. We also show that relative to a super-almost-huge cardinal, there can exist a successor cardinal $\\kappa$ such that for every regular $\\lambda \\geq \\kappa$, there is a normal, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$. This generalizes a theorem of Woodin about the relative consistency of an $\\aleph_1$-dense ideal on $\\aleph_1$, and has the following additional advantages: (1) An explicit forcing extension is taken, rather than an inner model of an extension. (2) Careful constructions within a model where the axiom of choice fails, as presented in~\\cite{foremanhandbook}, are avoided.\n\nLet us first recall the essential facts about almost-huge cardinals (see~\\cite{kanamori}, 24.11). A cardinal $\\kappa$ is almost-huge if there is an elementary embedding $j : V \\to M$ with critical point $\\kappa$, such that $M^{ \\beta$, so $j(\\eta) > \\gamma$, and thus $j[\\delta]$ is cofinal in $j(\\delta)$. \n\\end{proof}\n\nA \\emph{super-almost-huge} cardinal is a cardinal $\\kappa$ such that for all $\\lambda \\geq \\kappa$, there is an almost huge tower of height $\\geq \\lambda$. The next result follows from considering the set of closure points under witnesses to property (c) in the tower characterization.\n\n\\begin{corollary}\nIf $\\kappa$ has an almost-huge tower of Mahlo height $\\delta$, then for stationary many $\\alpha < \\delta$, $V_\\alpha \\models ZFC + \\kappa$ is super-almost-huge.\n\\end{corollary}\n\nThere is a vast gap in strength between almost-huge and huge:\n\n\\begin{theorem}\n\\label{tourney}\nIf $\\kappa$ is a huge cardinal, then there is a stationary set $S \\subseteq \\kappa$ such that for all $\\alpha < \\beta$ in $S$, $\\alpha$ has an almost-huge tower of height $\\beta$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $j : V \\to M$ is an elementary embedding with critical point $\\kappa$, $j(\\kappa) = \\delta$, and $M^\\delta \\subseteq M$. Then $\\kappa$ carries an almost-huge tower $\\vec{U}$ of length $\\delta$, and $\\vec{U} \\in M$. Let $F$ be the ultrafilter on $\\kappa$ defined by $F = \\{ X \\subseteq \\kappa: \\kappa \\in j(X) \\}$. Let $A = \\{ \\alpha < \\kappa : \\alpha$ carries an almost-huge tower of height $\\kappa \\}$. Since $\\kappa \\in j(A)$, $A \\in F$. Now let $c: \\kappa^2 \\to 2$ be defined by $c(\\alpha,\\beta) = 1$ if $\\alpha$ carries an almost-huge tower of height $\\beta$, and $c(\\alpha,\\beta) = 0$ otherwise. By Rowbottom's theorem, let $H \\in F$ be homogeneous for $c$. We claim $c$ takes constant value 1 on $H$. For if $\\alpha \\in A \\cap H$, then $\\{ \\alpha,\\kappa \\} \\in [j(A \\cap H)]^2$, and $j(c)(\\alpha,\\kappa) = 1$. 
\n\\end{proof}\n\n\\subsection{Layering and absorption}\n\n\\begin{definition}\nWe will call a partial order $\\mathbb{P}$ \\emph{$(\\mu,\\kappa)$-nicely layered} when there is a collection $\\mathcal{L}$ of atomless regular suborders of $\\mathbb{P}$ such that:\n\\begin{enumerate}[(1)]\n\\item for all $\\mathbb{Q} \\in \\mathcal{L}$, $\\mathbb{Q}$ is $\\mu$-closed and has size $< \\kappa$,\n\\item for all $\\mathbb{Q}_0, \\mathbb{Q}_1 \\in \\mathcal{L}$, if $\\mathbb{Q}_0 \\subseteq \\mathbb{Q}_1$, then $\\Vdash_{\\mathbb{Q}_0} \\mathbb{Q}_1 \/ \\dot{G}$ is $\\mu$-closed, and\n\\item for all $\\mathbb{P}$-names $\\dot{f}$ for a function from $\\mu$ to the ordinals, and all $\\mathbb{Q}_0 \\in \\mathcal{L}$, there is an $\\mathbb{Q}_1 \\in \\mathcal{L}$ and an $\\mathbb{Q}_1$-name $\\dot{g}$ such that $\\mathbb{Q}_0 \\subseteq \\mathbb{Q}_1$, and $\\Vdash_\\mathbb{P} \\dot{f} = \\dot{g}$.\n\\end{enumerate}\n\nWe will say $\\mathbb{P}$ is \\emph{$(\\mu,\\kappa)$-nicely layered with collapses,} $(\\mu,\\kappa)$-NLC, when additionally for all $\\alpha < \\kappa$ and all $\\mathbb{Q}_0 \\in \\mathcal{L}$, there is $\\mathbb{Q}_1 \\in \\mathcal{L}$ such that $\\mathbb{Q}_0 \\subseteq \\mathbb{Q}_1$ and $\\Vdash_{\\mathbb{Q}_1} |\\mathbb{Q}_1| = \\mu$.\n\\end{definition}\n\n\\begin{proposition}\nIf $\\mathcal{L}$ witnesses that $\\mathbb{P}$ is $(\\mu,\\kappa)$-nicely layered, then $\\mathbb{P}$ is $\\kappa$-c.c.\\ and $\\bigcup \\mathcal{L}$ is dense in $\\mathbb{P}$.\n\\end{proposition}\n\n\\begin{proof}\nSuppose that $\\{ p_\\alpha : \\alpha < \\eta \\} \\subseteq \\mathbb{P}$, $\\eta \\geq \\kappa$, is a maximal antichain. Let $\\dot{f}$ be a name of a function with domain $\\{ 0 \\}$ such that $f(0) = \\alpha$ iff $p_\\alpha \\in G$. There cannot be a regular suborder $\\mathbb{Q}$ of size $< \\kappa$ and a $\\mathbb{Q}$-name $\\dot{g}$ that is forced to be equal to $\\dot{f}$, since such a $\\dot{g}$ would have $<\\kappa$ possible values for its range.\n\nSimilarly, let $p \\in \\mathbb{P}$ be arbitrary, and let $\\{ p_\\alpha : \\alpha < \\delta \\}$ be a maximal antichain with $p = p_0$. Let $\\dot{f}$ be a name of a function with domain $\\{ 0 \\}$ such that $f(0) = \\alpha$ iff $p_\\alpha \\in G$. If $\\mathbb{Q}$ is a regular suborder and $\\dot{g}$ is a $\\mathbb{Q}$-name such that $\\Vdash_\\mathbb{P} \\dot{f} = \\dot{g}$, then there is some $q \\in \\mathbb{Q}$ forcing $\\dot{g}(0) = 0$, so $q \\leq p$.\n\\end{proof}\n\n\\begin{proposition}If there exists a $(\\mu,\\kappa)$-NLC poset, then $\\alpha^{<\\mu} < \\kappa$ for all $\\mu < \\kappa$.\n\\end{proposition}\n\\begin{proof}\nLet $\\mathbb P$ be $(\\mu,\\kappa)$-NLC with layering $\\mathcal L$, and let $\\alpha < \\kappa$. If $\\mathbb Q \\in \\mathcal L$ collapses $\\alpha$ to $\\mu$, then we can build a $\\mu$-closed tree $T \\subseteq \\mathbb Q$ of height $\\mu$ such that each level is an antichain of size $\\geq \\alpha$. $\\alpha^{<\\mu} \\leq |T| < \\kappa$. \n\\end{proof}\n\n\\begin{lemma}[Folklore]\n\\label{folk}\nIf $\\mathbb{P}$ is a $\\mu$-closed partial order such that $\\Vdash_\\mathbb{P} |\\mathbb{P}| = \\mu$, then $\\mathcal{B}(\\mathbb{P}) \\cong \\mathcal{B}(\\col(\\mu,|\\mathbb{P}|))$.\n\\end{lemma}\n\n\\begin{proof}\nPick a $\\mathbb{P}$-name $\\dot{f}$ for a bijection from $\\mu$ to $\\dot{G}$. We build a tree $T \\subseteq \\mathbb{P}$ that is isomorphic to a dense subset of $\\col(\\mu,|\\mathbb{P}|)$, and show that it is dense in $\\mathbb{P}$. Each level will be a maximal antichain in $\\mathbb{P}$. 
Let the first level $T_0 = \\{ 1_\\mathbb{P} \\}$. If levels $\\{ T_\\beta : \\beta < \\alpha + 1 \\}$ are defined, below each $p \\in T_\\alpha$, pick a $|\\mathbb{P}|$-sized maximal antichain of conditions deciding $\\dot{f}(\\alpha)$, and let $T_{\\alpha+1}$ be the union of these antichains. If $\\{ T_\\beta : \\beta < \\lambda \\}$ is defined up to a limit $\\lambda$, pick for each descending chain $b$ through the previous levels, a $|\\mathbb{P}|$-sized maximal antichain of lower bounds to $b$, and set $T_\\lambda$ equal to the union of these anithchains. It is easy to check that $T_\\lambda$ is a maximal antichain. Let $T = \\bigcup_{\\alpha<\\mu} T_\\alpha$. To show $T$ is dense, let $p \\in \\mathbb{P}$. Let $q \\leq p$ be such that for some $\\alpha < \\mu$, $q \\Vdash \\dot{f}(\\alpha) = p$. $q$ is compatible with some $r \\in T_{\\alpha+1}$. Since $r$ decides $\\dot{f}(\\alpha)$ and forces it in $\\dot{G}$, $r \\leq p$. \n\\end{proof}\n\n\n\\begin{lemma}\n\\label{rearrange}\nSuppose $\\mu < \\kappa$ are regular, and $\\mathbb{P}$ is $(\\mu,\\kappa)$-NLC. If $G$ is $\\mathbb{P}$-generic over $V$, then there is a forcing $\\mathbb{R} \\in V[G]$ such that $\\mathbb{R}$ adds a filter $H \\subseteq \\break \\col(\\mu,{<}\\kappa)$ which is generic over $V$ and such that $(\\ord^\\mu)^{V[G]} = (\\ord^\\mu)^{V[H]}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\mathcal{L}$ witness the $(\\mu,\\kappa)$-NLC property. In $V[G]$, let $\\mathbb{R}$ be the collection of filters $h \\subseteq \\col(\\mu,{<}\\alpha)$ for $\\alpha< \\kappa$ which are generic over $V$, such that for some $\\mathbb{Q} \\in \\mathcal{L}$, $V[h] = V[G \\cap \\mathbb{Q}]$. The ordering is end-extension. \n\n\nLet $h \\in \\mathbb{R}$ with $\\mathbb{Q}_0 \\in \\mathcal{L}$ a witness, and let and $\\alpha < \\kappa$ be arbitrary. Let $\\alpha < \\beta < \\kappa$ and $\\mathbb{Q}_1 \\supseteq \\mathbb{Q}_0$ in $\\mathcal{L}$ be such that in $V[h]$, $|\\mathbb{Q}_1 \/ (G \\cap \\mathbb{Q}_0 ) | = |\\beta|$, and $\\mathbb{Q}_1$ collapses $\\beta$ to $\\mu$. By the definition and Lemma~\\ref{folk}, $\\mathbb{Q}_1 \/ (G \\cap \\mathbb{Q}_0 )$ is equivalent in $V[h]$ to $\\col(\\mu,\\beta)$, which is equivalent to the ${<}\\mu$-support product of $\\col(\\mu,\\gamma)$ for $\\alpha \\leq \\gamma \\leq \\beta$. The filter $G \\cap \\mathbb{Q}_1$ therefore gives a filter $h' \\supseteq h$ on $\\col(\\mu,<\\beta+1)$ that is generic over $V$, with $V[h'] = V[\\mathbb{Q}_1 \\cap G]$. \n\nLet $h \\in \\mathbb{R}$ with $\\mathbb{Q}_0 \\in \\mathcal{L}$ a witness, and let $f: \\mu \\to \\ord$ in $V[G]$ be arbitrary. By the definition of $(\\mu,\\kappa)$-NLC, we can find some $\\mathbb{Q}_1 \\supseteq \\mathbb{Q}_0$ in $\\mathcal{L}$ such that $f \\in V[G \\cap \\mathbb{Q}_1]$. By the previous paragraph, we may find $\\mathbb{Q}_2 \\supseteq \\mathbb{Q}_1$ in $\\mathcal{L}$ equivalent to some $\\col(\\mu,<\\alpha)$, and some filter $h' \\subseteq \\col(\\mu,{<}\\alpha)$ generic over $V$, extending $h$, and such that $V[G \\cap \\mathbb{Q}_2] = V[h']$.\n\nIf $F$ is generic over $\\mathbb{R}$, let $H = \\bigcup_{h \\in F} h$. Since $\\col(\\mu,{<}\\kappa)$ is $\\kappa$-c.c., $H$ is generic, since any maximal antichain from $V$ intersects some $h \\in F$. By the above arguments, any $f : \\mu \\to \\ord$ in $V[G]$ is in $V[H]$. Conversely, any $f : \\mu \\to \\ord$ in $V[H]$ lives in some $V[h]$ with $h \\in \\mathbb{R}$, so is in $V[G]$. 
\n\\end{proof}\n\n\\subsubsection{The anonymous collapse}\n\nLet $\\kappa$ be a regular cardinal whose regularity is preserved by a forcing $\\mathbb{P}$. Let $A(\\mathbb{P})$ be the complete subalgebra of $\\mathcal{B} ( \\mathbb{P} * \\add(\\kappa))$ generated by the canonical name for the $\\add(\\kappa)$-generic set. More precisely, if $e : \\mathbb{P}*\\add(\\kappa) \\to \\mathcal{B} ( \\mathbb{P} * \\add(\\kappa))$ is the canonical dense embedding, $A(\\mathbb{P})$ is completely generated by the elements of the form $e(\\langle 1, \\dot{\\{ \\langle \\alpha,1 \\rangle \\} } \\rangle)$. By \\cite[p. 247]{jechbook}, we have a canonical correspondence between such $\\add(\\kappa)$-generic sets $X$ which come after forcing with $\\kappa$-preserving posets $\\mathbb P$, and $A(\\mathbb P)$-generic filters $H$. We will move between the two by writing, for example, $X_H$ and $H_X$.\n\nIn the case that $\\alpha^{<\\mu} < \\kappa$ for all $\\alpha < \\kappa$ and $\\mathbb{P} = \\col(\\mu,{<}\\kappa)$, denote $A(\\mathbb{P})$ by $A(\\mu,\\kappa)$, and write $B(\\mu,\\kappa)$ for $\\mathcal{B}(\\col(\\mu,{<}\\kappa)*\\add(\\kappa))$.\n\n\\begin{lemma}\n\\label{quotdist}\nIf $\\mathbb{P}$ is $(\\mu,\\kappa)$-NLC, and $H \\subseteq A(\\mathbb{P})$ is generic over $V$, then \\break $\\mathcal{B}(\\mathbb{P} * \\add(\\kappa))\/H$ is $\\kappa$-distributive in $V[H]$.\n\\end{lemma}\n\n\\begin{proof}\n$V[H] = V[X_H]$ for the canonically associated $X_H \\subseteq \\kappa$, and by forcing with $\\mathcal{B}(\\mathbb{P} * \\add(\\kappa))\/H$ over $V[H]$, we recover a filter $G * X_H$ for $\\mathbb{P} * \\add(\\kappa)$, generic over $V$.\n \nIf $G * X$ is $\\mathbb{P} * \\add(\\kappa)$-generic over $V$, then $X$ codes all subsets of $\\mu$ that live in $V[G]$. By the definition of $(\\mu,\\kappa)$-NLC, every $z \\in (\\ord^\\mu)^{V[G]}$ occurs in some submodel of the form $V[G \\cap \\mathbb{Q}]$, where $\\mathbb{Q}$ is isomorphic to $\\col(\\mu,\\alpha)$ for some $\\alpha < \\kappa$. Thus $z \\in V[y]$ for some $y \\subseteq \\mu$ in $V[G]$, so $(\\ord^\\mu)^{V[X]} \\supseteq (\\ord^\\mu)^{V[G]}$. Since $\\add(\\kappa)$ adds no $\\mu$-sized sets of ordinals, $(\\ord^\\mu)^{V[G]} = (\\ord^\\mu)^{V[G*X]} \\supseteq (\\ord^\\mu)^{V[X]}$. Thus $\\mathcal{B}(\\mathbb{P} * \\add(\\kappa))\/H$ is $\\kappa$-distributive. \n\\end{proof}\n\n\\begin{lemma}\n\\label{cheat}\nLet $V$ be a countable transitive model of ZFC (or just assume generic extensions are always available), and assume $\\Vdash^V_\\mathbb{P} \\kappa$ is regular. If $X \\subseteq \\kappa$, the following are equivalent:\n\\begin{enumerate}[(1)]\n\\item $X$ is $A(\\mathbb{P})$-generic over $V$.\n\\item There is $G \\subseteq \\mathbb{P}$ such that $G$ is generic over $V$, and $X$ is $\\add(\\kappa)$-generic over $V(P_0)$, where $P_0 = \\p_\\kappa(\\kappa)^{V[G]}$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nIf $X$ is $A(\\mathbb{P})$-generic then force with $\\mathcal{B}(\\mathbb{P} * \\add(\\kappa))\/H_X$ over $V[X]$, obtaining $G$ such that $G*X$ is $\\mathbb{P}*\\add(\\kappa)$-generic over $V$. Then $X$ is $\\add(\\kappa)$-generic over $V[G]$, and since $\\add(\\kappa)^{V[G]} = \\add(\\kappa)^{V(P_0)}$, $X$ is $\\add(\\kappa)$-generic over $V(P_0)$.\n\nSuppose $G \\subseteq \\mathbb{P}$ is generic over $V$, and $X$ is $\\add(\\kappa)$-generic over $V(P_0)$, but not $A(\\mathbb{P})$-generic over $V$. Then some $p \\in \\add(\\kappa)^{V(P_0)}$ forces this with $\\dom(p) = \\alpha < \\kappa$, and $X \\restriction \\alpha = p$. 
Take $Y \\subseteq \\kappa$ such that $Y \\restriction \\alpha = p$ that is $\\add(\\kappa)$-generic over the larger model $V[G]$. Then $Y$ is $A(\\mathbb{P})$-generic over $V$, and $V(P_0)[Y]$ can see this, but this contradicts the property of $p$. So $X$ was $A(\\mathbb{P})$-generic over $V$. \n\\end{proof}\n\n\n\\begin{theorem}\n\\label{forget}\nFor any $\\mathbb{P}$ that is is $(\\mu,\\kappa)$-NLC, there is an isomorphism \\break $\\iota : A(\\mathbb{P}) \\to A(\\mu,\\kappa)$ such that $\\iota(|| \\alpha \\in \\dot{X} ||_{A(\\mathbb{P})}) = || \\alpha \\in \\dot{X} ||_{A(\\mu,\\kappa)}$ for all $\\alpha < \\kappa$.\n\\end{theorem}\n\n\\begin{proof}\nLet $X$ be $A(\\mathbb{P})$-generic over $V$. There is a $\\kappa$-distributive forcing over $V[X]$ to get $G$ such that $G * X$ is $\\mathbb{P} * \\add(\\kappa)$-generic over $V$. By Lemma~\\ref{rearrange}, we can do further forcing to obtain $H \\subseteq \\col(\\mu,{<}\\kappa)$ generic over $V$ such that $(\\ord^\\mu)^{V[H]} = (\\ord^\\mu)^{V[G]}$. By Lemma~\\ref{cheat}, $X$ is also $A(\\mu,\\kappa)$-generic over $V$.\n\nConversely, every $A(\\mu,\\kappa)$-generic $X$ is $A(\\mathbb{P})$-generic. For suppose $X$ is a counterexample. Then there is some $(p,\\dot{q}) \\in \\col(\\mu,{<}\\kappa) * \\add(\\kappa)$ such that $(p,\\dot{q}) \\Vdash \\dot{X}$ is not $A(\\mathbb{P})$-generic over $V$. Let $Y$ be any $A(\\mathbb{P})$-generic set, and let $P_0 = \\p(\\mu)^{V[Y]}$. By the above, $Y$ is $A(\\mu,\\kappa)$-generic over $V$. Thus we can force over $V[Y]$ to get $H \\subseteq \\col(\\mu,{<}\\kappa)$ such that $H *Y$ is $B(\\mu,\\kappa)$-generic over $V$. By the homogeneity of the Levy collapse, there is some automorphism $\\pi \\in V$ such that $p \\in \\pi[H] = H^\\prime$. By the homogeneity of Cohen forcing, there is some automorphism $\\sigma$ in $V(P_0)$ such that $\\sigma[Y]$ is a generic $Y^\\prime$ such that $Y^\\prime \\restriction \\dom(\\dot{q}^{H^\\prime}) = \\dot{q}^{H^\\prime}$. $Y^\\prime$ is also $A(\\mathbb{P})$-generic over $V$. However, $(p,\\dot{q}) \\in H^\\prime * Y^\\prime$, so we have a contradiction.\n\n\nThis implies that we have a canonical correspondence between $A(\\mathbb{P})$- and $A(\\mu,\\kappa)$-generic filters, i.e. definable functions $f,g$ such that for any generic $H$ for $A(\\mathbb{P})$, $f(H)$ is the generic for $A(\\mu,\\kappa)$ computed from $X_H$, and vice versa, and $g(f(H)) = H$. For $p \\in A(\\mathbb{P})$, put $\\iota(p) = || p \\in g(\\dot{H}) ||_{A(\\mu,\\kappa)}$. It is easy to see that $\\iota$ is a complete embedding. For any $q \\in A(\\mu,\\kappa)$, there is $p \\in A(\\mathbb{P})$ forces that $q \\in f(\\dot{H})$. Thus if $H$ is generic for $A(\\mu,\\kappa)$ and $\\iota(p) \\in H$, then $p \\in g(H)$, so $q \\in f(g(H)) = H$, hence $\\iota(p) \\leq q$. The range of $\\iota$ is dense, so it is an isomorphism. By the way we construct $f$ and $g$, $\\iota(|| \\alpha \\in \\dot{X} ||_{A(\\mathbb{P})}) = \\break || \\alpha \\in \\dot{X} ||_{A(\\mu,\\kappa)}$. \n\\end{proof}\n\nThis machinery has some interesting applications to the absoluteness of some properties of a given powerset. First, it is easy to see for regular $\\mu < \\kappa$ such that $\\alpha^{<\\mu} < \\kappa$ for all $\\alpha < \\kappa$, $\\col(\\mu,{<}\\kappa) \\times \\add(\\mu,\\lambda)$ is $(\\mu,\\kappa)$-NLC for every $\\lambda$. 
Thus if $X$ is $A(\\mu,\\kappa)$-generic, then for any $\\lambda$, we may further force to obtain a model which is a $(\\col(\\mu,{<}\\kappa) \\times \\add(\\mu,\\lambda)) * \\add(\\kappa)$-generic extension with the same $\\ord^\\mu$. Taking inner models given by such $\\col(\\mu,{<}\\kappa) \\times \\add(\\mu,\\lambda)$-generic sets, we produce many models with the same cardinals and same $\\p(\\mu)$, each assigning a different cardinal value for $2^\\mu$. For example, if we add $\\omega_1$ Cohen reals to any model of $M$ of ZFC, this is the same as forcing with $\\col(\\omega,{<}\\omega_1)$. There is for each uncountable ordinal $\\alpha \\in M$, a generic extension with the same reals and same cardinals, in which it appears we have added $\\alpha$ many Cohen reals.\n\nBy using weakly compact cardinals, we can get even more dramatic examples. If $\\kappa$ is weakly compact, every $\\kappa$-c.c.\\ partial order captures small sets in small factors. To show this, first consider a partial order $\\mathbb{P}$ of size $\\kappa$. We can code $\\mathbb{P}$ as $A \\subseteq \\kappa$, and by weak compactness, there is some transitive elementary extension $(V_\\kappa,\\in,A) \\prec (M,\\in,B)$. If $\\mu < \\kappa$, then any $\\mathbb{P}$-name for function $f : \\mu \\to \\ord$ has an equivalent name $\\tau \\in V_\\kappa$ by the $\\kappa$-c.c. Since $A \\in M$ and $M$ sees $A$ as a regular suborder of $B$, $M$ thinks that $\\tau$ is a $\\mathbb{Q}$-name for some regular suborder $\\mathbb{Q}$ of $B$. By elementarity, $V_\\kappa$ thinks that $\\tau$ is a $\\mathbb{Q}$-name for some regular $\\mathbb{Q}$ of $A$. For $\\mathbb{P}$ of arbitrary size, let $\\tau$ be a $\\mathbb{P}$-name of size $<\\kappa$, take some regular $\\theta$ such that $\\mathbb{P},\\tau \\in H_\\theta$, and take an elementary $M \\prec H_\\theta$ with $\\mathbb{P},\\tau \\in M$ such that $|M| = \\kappa$ and $M^{<\\kappa} \\subseteq M$. It is easy to see that $M \\cap \\mathbb{P}$ is a regular suborder of $\\mathbb{P}$, and so the above considerations apply to show that there is some regular $\\mathbb{Q} \\subseteq \\mathbb{P} \\cap M \\subseteq \\mathbb{P}$ of size $<\\kappa$ such that $\\tau$ is a $\\mathbb{Q}$-name.\n\nTherefore, if $\\kappa$ is weakly compact and $\\mathbb{P}$ is $\\kappa$-c.c.,\\ the collection $\\mathcal{L}$ of all regular suborders of $\\mathbb{P}$ of size $<\\kappa$ witnesses that $\\mathbb{P}$ is $(\\omega,\\kappa)$-nicely layered. If $\\mathbb{P}$ also forces $\\kappa = \\aleph_1$, then this collection also witnesses that $\\mathbb{P}$ is $(\\omega,\\kappa)$-NLC. To check this, take any $\\mathbb{Q}_0 \\in \\mathcal{L}$, any $\\mathbb{P}$-name $\\tau$ of size $<\\kappa$, and $\\alpha < \\kappa$. Let $H \\subseteq \\mathbb{Q}_0$ be generic. Since $\\kappa$ is still weakly compact in $V[H]$, there is some regular $\\mathbb{Q}_1 \\subseteq \\mathbb{P}\/H$ of size $<\\kappa$ in $V[H]$ such that the $(\\mathbb{P}\/H)$-name associated to $\\tau$ is a $\\mathbb{Q}_1$-name. Let $\\beta \\geq \\max \\{ \\alpha, | \\mathbb{Q}_0 * \\dot{\\mathbb{Q}}_1| \\}$. Since $\\mathbb{P} \/ (\\mathbb{Q}_0 * \\dot{\\mathbb{Q}_1})$ adds a generic for $\\col(\\omega,\\beta)$, we have $\\mathbb{Q}_2 \\in \\mathcal{L}$ extending $\\mathbb{Q}_0 * \\dot{\\mathbb{Q}}_1$ such that $\\mathbb{Q}_2 \\sim \\col(\\omega,\\beta)$.\n\nIn particular, if $\\kappa$ is weakly compact, then $\\col(\\omega,<\\kappa) * \\dot{\\mathbb{Q}}$, where $\\dot{\\mathbb{Q}}$ is forced to be c.c.c.,\\ is $(\\omega,\\kappa)$-NLC. 
Thus an extremely wide variety of forcing extensions with very different theories can be obtained, each sharing the same reals and same cardinals.\n\n\n\n\\subsubsection{An unfortunate reality}\n\nDespite the universality of $A(\\mu,\\kappa)$, it is difficult to characterize its combinatorial structure. While it absorbs all of the small sets added by a $(\\mu,\\kappa)$-NLC forcing, no such forcing completely embeds into it. The reader may opt to skip this section, as later results will not depend it.\n\nTo show this, we first isolate two properties of a forcing extension that depend on two regular cardinals $\\mu <\\kappa$. The author is grateful to Mohammad Golshani for bringing these properties to his attention.\n\\begin{enumerate}[(1)]\n\\item \\emph{Levy($\\mu,\\kappa$):} $(\\exists A \\in [\\kappa]^\\kappa ) ( \\forall y \\in [ \\kappa ]^\\mu \\cap V ) y \\nsubseteq A$.\n\\item \\emph{Silver($\\mu,\\kappa$):} $(\\exists A \\in [\\kappa]^\\kappa) (\\forall X \\in [\\kappa]^\\kappa \\cap V)(\\exists y \\in [X]^\\mu \\cap V) y \\cap A = \\emptyset$.\n\\end{enumerate}\n\nNote that these are both $\\Sigma_1$ properties of the parameters $([ \\kappa ]^\\mu)^V$ and $([ \\kappa ]^\\kappa)^V$. For any partial order $\\mathbb{P}$, and collection of dense subsets $\\mathcal{D} \\subseteq \\p(\\mathbb{P})$ the statement, ``There is a filter $G \\subseteq \\mathbb{P}$ that is $\\mathcal{D}$-generic,'' is also a $\\Sigma_1$ property of $\\mathbb{P}$ and $\\mathcal{D}$. Now the following proposition either holds or fails for a given partial order $\\mathbb{P}$ and cardinals $\\mu < \\kappa$:\n\\[\n(*)_{\\mu,\\kappa} : (\\forall X \\in [\\mathbb{P} ]^\\kappa ) ( \\exists y \\in [X]^\\mu ) y \\mbox{ has a lower bound in }\\mathbb{P}.\n\\]\n\n\n\\begin{lemma}\nIf $\\mathbb{P}$ is a separative partial order that satisfies $(*)_{\\mu,\\kappa}$, preserves the regularity of $\\kappa$, and such that $\\den(\\mathbb{P} \\restriction p) = \\kappa$ for all $p \\in \\mathbb{P}$, then $\\mathbb{P}$ forces $Silver(\\mu,\\kappa)$.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\{ p_\\alpha : \\alpha < \\kappa \\}$ be a dense subset of $\\mathbb{P}$. Inductively build a dense $D \\subseteq \\{ p_\\alpha : \\alpha < \\kappa \\}$, putting $p_\\alpha \\in D$ just in case there is no $\\beta < \\alpha$ such that $p_\\beta \\in D$ and $p_\\beta \\leq p_\\alpha$. $D$ has the property that for all $p \\in D$, $| \\{ q \\in D : p \\leq q \\} | < \\kappa$. Fixing a bijection $f : D \\to \\kappa$, we claim that if $G \\subseteq \\mathbb{P}$ is generic, $A = f[G]$ witnesses $Silver(\\mu,\\kappa)$. Note that since $\\mathbb{P}$ is nowhere $<\\kappa$-dense, $A$ is an unbounded subset of $\\kappa$. Now let $p \\in D$ and $X = \\{ q_\\alpha : \\alpha < \\kappa \\} \\in [D]^\\kappa$ be arbitrary. There is some $B \\in [\\kappa]^\\kappa$ such that for all $\\alpha \\in B$, $p \\nleq q_\\alpha$. For each $\\alpha \\in B$, choose $r_\\alpha \\leq p$ such that $r_\\alpha \\perp q_\\alpha$. By $(*)$, there is some $y \\in [B]^\\mu$ such that $\\{ r_\\alpha : \\alpha \\in y \\}$ has a lower bound $r$. We have $r \\Vdash \\{ q_\\alpha : \\alpha \\in \\check{y} \\} \\cap \\dot{G} = \\emptyset$. 
As $p$ and $X$ were arbitrary, $Silver(\\mu,\\kappa)$ is forced.\n\\end{proof}\n\n\n\n\\begin{lemma}\nIf $\\mathbb{P}$ is a $\\kappa$-c.c.\\ separative partial order of size $\\kappa$ satisfying $\\neg (*)_{\\mu,\\kappa}$, then some $p \\in \\mathbb{P}$ forces $Levy(\\mu,\\kappa)$.\n\\end{lemma}\n\n\\begin{proof}\nSuppose $X \\in [\\mathbb{P}]^\\kappa$ witnesses $\\neg (*)_{\\mu,\\kappa}$. By the $\\kappa$-c.c., there is some $p$ such that $p \\Vdash | \\check{X} \\cap \\dot{G} | = \\kappa$. If $y \\in [X]^\\mu$, then $1 \\Vdash \\check{y} \\nsubseteq \\dot{G}$, since otherwise some $q$ is a lower bound to $y$. Hence $p$ forces that $X \\cap G$ witnesses $Levy(\\mu,\\kappa)$. \n\\end{proof}\n\n\\begin{lemma}Suppose $\\mu < \\kappa$, $\\mu$ is regular for all $\\alpha < \\kappa$, $\\alpha^\\mu < \\kappa$. There are two $(\\mu,\\kappa)$-NLC partial orders $\\mathbb{P}_0$ and $\\mathbb{P}_1$ such that $\\mathbb{P}_0$ forces $Levy \\wedge \\neg Silver$, and $\\mathbb{P}_1$ forces $\\neg Levy \\wedge Silver$.\n\\end{lemma}\n\n\\begin{proof}Let $\\mathbb{P}_0$ be the Levy collapse $\\col(\\mu, {<}\\kappa)$, and let $\\mathbb{P}_1$ be the Silver collapse,\n\\[ \\{ p : (\\exists \\alpha < \\mu)(\\exists x \\in [\\kappa]^\\mu) p : x \\times \\alpha \\to \\kappa, \\text{ and } p(\\beta,\\gamma) < \\beta \\text{ for all } (\\beta,\\gamma) \\in \\dom p \\} \\]\nIt is easy to see that $\\mathbb{P}_1$ satisfies $(*)_{\\mu,\\kappa}$, while $\\mathbb{P}_0$ fails this property, as witnessed by $X = \\mathbb{P}_0$. Hence by the previous lemmas, $\\mathbb{P}_0$ forces $Levy(\\mu,\\kappa)$, and $\\mathbb{P}_1$ forces $Silver(\\mu,\\kappa)$. We must show that the respective negations are also forced.\n\nLet $\\dot{A}$ be a $\\mathbb{P}_0$-name such that $1 \\Vdash \\dot{A} \\in [\\kappa]^\\kappa$. Let $p \\in \\mathbb{P}$ be arbitrary, and let $\\gamma< \\kappa$ be such that $\\supp(p) \\subseteq \\gamma$. Let $X_0 = \\{ \\alpha < \\kappa : p \\nVdash \\alpha \\notin \\dot{A} \\}$. For each $\\alpha \\in X_0$, pick some $q_\\alpha \\leq p$ such that $q_\\alpha \\Vdash \\alpha \\in \\dot{A}$. By a delta-system argument, let $X_1 \\in [X_0]^\\kappa$ be such that there is $r \\leq p$ such that for all $\\alpha \\in X_1$, $q_\\alpha \\restriction \\gamma = r$, and for $\\alpha \\not= \\beta$ in $X_1$, $(\\supp(q_\\alpha) \\setminus \\gamma) \\cap (\\supp(q_\\beta) \\setminus \\gamma) = \\emptyset$. For any $q \\leq r$ and $y \\in [X_1]^\\mu$, $q \\nVdash \\check{y} \\cap \\dot{A} = \\emptyset$. This is because for such $q$, there is some $\\alpha \\in y$ such that $(\\supp(q_\\alpha) \\setminus \\gamma) \\cap \\supp(q) = \\emptyset$, so $q$ is compatible with $q_\\alpha$. Hence $r \\Vdash (\\exists X \\in [\\kappa]^\\kappa \\cap V)(\\forall y \\in [X]^\\mu \\cap V) y \\cap \\dot{A} \\not= \\emptyset$. As $\\dot{A}$ and $p$ were arbitrary, $\\neg Silver(\\mu,\\kappa)$ is forced.\n\nNow let $\\dot{A}$ be a $\\mathbb{P}_1$-name such that $1 \\Vdash \\dot{A} \\in [\\kappa]^\\kappa$, and let $p \\in \\mathbb{P}_1$ be arbitrary. Form $X_0$, $\\{q_\\alpha : \\alpha \\in X_0 \\}$, and $X_1$ like above. We can take a $y \\in [X_1]^\\mu$ such that $\\bigcup_{\\alpha \\in y} q_\\alpha = q \\in \\mathbb{P}_1$. Then $q \\Vdash \\check{y} \\subseteq \\dot{A}$, so $q$ forces $\\neg Levy(\\mu,\\kappa)$. \n\\end{proof}\n\n\n\\begin{corollary}Suppose $\\mu$, $\\kappa$, $\\mathbb{P}_0$, and $\\mathbb{P}_1$ are as above. Let $G$ be $\\mathbb{P}_0$-generic and $H$ be $\\mathbb{P}_1$-generic over $V$. 
Let $\\mathbb{Q} \\in V$ be a partial order. If $\\mathbb{Q}$ forces $Levy(\\mu,\\kappa)$, then $V[H]$ has no $\\mathbb{Q}$-generic, and if $\\mathbb{Q}$ forces $Silver(\\mu,\\kappa)$, then $V[G]$ has no $\\mathbb{Q}$-generic. If $\\mathbb{Q}$ is $\\kappa$-c.c.\\ and of size $\\kappa$, then no $\\kappa$-closed forcing extension of $V[G]$ or $V[H]$ can introduce a generic for $\\mathbb{Q}$.\n\\end{corollary}\n\n\\begin{proof}\nSince $V[H]$ satisfies $\\neg Levy$, and $Levy$ is a $\\Sigma_1$ property with parameters in $V$, no inner model of $V[H]$ containing $V$ can satisfy $Levy$. Likewise, no inner model of $V[G]$ containing $V$ can satisfy $Silver$. To see that the non-existence of $\\mathbb{Q}$-generics is preserved by $\\kappa$-closed forcing, suppose that for some such forcing $\\mathbb{R} \\in V[G]$, $r \\Vdash^{V[G]}_\\mathbb{R} \\dot{K}$ is $\\mathbb{Q}$-generic over $V$. Since $\\mathbb{Q}$ has size $\\kappa$, we can build a descending sequence $\\{ r_\\alpha : \\alpha < \\kappa \\}$ below $r$ such that for all $q \\in \\mathbb{Q}$, there is $r_\\alpha$ deciding whether $q \\in K$. Let $K' = \\{ q : (\\exists \\alpha < \\kappa) r_\\alpha \\Vdash q \\in \\dot{K} \\}$. Any maximal antichain $A \\in V$ contained in $\\mathbb{Q}$ has size $< \\kappa$, thus some $r_\\alpha$ completely decides $A \\cap K$. Since $r_\\alpha \\Vdash \\check{A} \\cap \\dot{K} \\not= \\emptyset$, we must have $K' \\cap A \\not= \\emptyset$, so $K'$ is $\\mathbb{Q}$-generic over $V$. The argument for $\\kappa$-closed forcing over $V[H]$ is the same. \n\\end{proof}\n\n\\begin{theorem}Suppose $\\mu < \\kappa$ are regular and $\\alpha^\\mu<\\kappa$ for all $\\alpha<\\kappa$. No $(\\mu,\\kappa)$-NLC forcing regularly embeds into $A(\\mu,\\kappa)$. Further, a generic extension by $A(\\mu,\\kappa)$ has no generic filters for any $\\kappa$-c.c.\\ forcing $\\mathbb{Q}$ such that $\\den(\\mathbb{Q} \\restriction q) \\geq \\kappa$ for all $q \\in \\mathbb{Q}$.\n\\end{theorem}\n\\begin{proof}\nFirst note that we only need to consider $\\mathbb{Q}$ such that $\\den(\\mathbb{Q} \\restriction q) = \\kappa$ for all $q \\in \\mathbb{Q}$. For if $p \\in A(\\mu,\\kappa)$ is such that $p \\Vdash \\dot{K}$ is $\\mathbb{Q}$-generic, then there would be some $q \\in \\mathcal{B}(\\mathbb{Q})$ and some $p' \\leq p$ such that $\\mathcal{B}(\\mathbb{Q}) \\restriction q$ completely embeds into $A(\\mu,\\kappa) \\restriction p'$. Since $d(A(\\mu,\\kappa) = \\kappa$, this implies $\\mathcal{B}(\\mathbb{Q}) \\restriction q \\leq \\kappa$.\n\nLet $\\mathbb{Q}$ be any $\\kappa$-c.c.\\ forcing such that $\\den(\\mathbb{Q} \\restriction q) = \\kappa$ for all $q \\in \\mathbb{Q}$. For any $p \\in \\mathbb{Q}$, if $(*)$ holds for $\\mathbb{Q} \\restriction p$, then $p \\Vdash Silver$, and otherwise for some $q \\leq p$, $q \\Vdash Levy$. Thus $\\Vdash_\\mathbb{Q} Levy \\vee Silver$. Suppose $K$ is $\\mathbb{Q}$-generic over $V$, and $X$ is $A(\\mu,\\kappa)$-generic over $V$. There are two further forcings $\\mathbb{R}_0,\\mathbb{R}_1$ over $V[X]$ that respectively get filters $G,H$ such that $V[G][X]$ is $\\mathbb{P}_0 * \\add(\\kappa)$-generic, and $V[H][X]$ is $\\mathbb{P}_1 * \\add(\\kappa)$-generic. If $V[K] \\models Levy$, then $K \\not \\in V[H][X]$, and if $V[K] \\models Silver$, then $K \\notin V[G][X]$. 
Thus $V[X]$ has no $\\mathbb{Q}$-generics.\n\\end{proof}\n\n\n\\subsection{Construction of a dense ideal}\n\nFirst we will define a useful strengthening of ``nicely layered.''\n\n\\begin{definition}\n$\\mathbb{P}$ is \\emph{$(\\mu,\\kappa)$-very nicely layered (with collapses)} when there is a sequence $\\langle \\mathbb{Q}_\\alpha : \\alpha < \\kappa \\rangle = \\mathcal{L}$ such that:\n\\begin{enumerate}[(1)]\n\\item $\\mathcal{L}$ witnesses that $\\mathbb{P}$ is $(\\mu,\\kappa)$-nicely layered (with collapses),\n\\item $\\mathcal{L}$ is $\\subseteq$-increasing,\n\\item every subset of $\\mathbb{P}$ of size $<\\mu$ with a lower bound has an infimum, and\n\\item there is a system of continuous projection maps $\\pi_\\alpha : \\mathbb{P} \\to \\mathbb{Q}_\\alpha$ such that for each $\\alpha$, $\\pi_\\alpha \\restriction \\mathbb{Q}_\\alpha = \\id$, and for $\\beta < \\alpha < \\kappa$, $\\pi_\\beta = \\pi_\\beta \\circ \\pi_\\alpha$.\n\n(By continuous, we mean that for any $X \\subseteq \\mathbb P$, if $\\inf(X)$ exists, then for all $\\alpha<\\kappa$, $\\inf(\\pi_\\alpha[X]) = \\pi_\\alpha(\\inf(X))$.)\n\\end{enumerate}\n\\end{definition}\n\nA typical example is the Levy collapse $\\col(\\mu,{<}\\kappa)$. In the general case, we will usually abbreviate the action of the projection maps $\\pi_\\alpha(q)$ by $q {\\restriction} \\alpha$. In applying clause (3), we will use the next proposition, proof of which is left to the reader.\n\n\\begin{proposition}\nIf $\\mathbb{P}$ is a partial order such that every descending chain of length $< \\mu$ has an infimum, then every directed subset of size $<\\mu$ has an infimum.\n\\end{proposition}\n\n\n\\begin{theorem}\n\\label{maindense}\nAssume $\\kappa$ carries an almost-huge tower of height $\\delta$, and let $j : V \\to M$ be given by the tower. Let $\\mu,\\lambda$ be regular such that $\\mu < \\kappa \\leq \\lambda < \\delta$. Suppose $\\Vdash_{A(\\mu,\\kappa)}$ `` $\\dot{ \\mathbb{P} }$ is $(\\kappa,\\delta)$-very nicely layered and forces $\\delta = \\lambda^+$.'' If $X * H$ is $A(\\mu,\\kappa) * \\dot{\\mathbb{P}}$-generic, then in $V[X][H]$, there is a normal, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$.\n\\end{theorem}\n\n\n\\begin{proof}\nLet $H_X$ be the $A(\\mu,\\kappa)$-generic filter computed from $X$. Let $K \\times C$ be $B(\\mu,\\kappa)\/H_X \\times \\col(\\mu,\\lambda)$-generic over $V[X][H]$, and for brevity let $W = V[X][H][K][C]$. Note that $V[X][K] = V[G][X]$, where $G * X$ is some \\break $\\col(\\mu,{<}\\kappa) * \\add(\\kappa)$-generic filter over $V$. Let $\\langle \\mathbb{Q}_\\alpha : \\alpha < \\delta \\rangle$ witness that $\\mathbb{P}$ is $(\\kappa,\\delta)$-nicely layered. By the distributivity of $B(\\mu,\\kappa)\/H_X$ in $V[X]$, $\\mathbb{P}$ and its layers $\\mathbb{Q}_\\alpha$ are still $\\kappa$-closed in $V[G][X]$. For $\\alpha < \\beta$, the relation $\\Vdash_{\\mathbb{Q}_\\alpha}$ ``$\\mathbb{Q}_\\beta \/ \\mathbb{Q}_\\alpha$ is $\\kappa$-closed'' holds in $V[G][X]$ because in $V[X]$, $B(\\mu,\\kappa)\/H_X \\times \\mathbb{Q}_\\alpha$ is $\\kappa$-distributive. Furthermore, since no sequences of length $<\\mu$ are added, the forcing given by the definition of $\\col(\\mu,\\lambda)$ is the same between $V$, $W$, and intermediate models.\n\n\nThe forcing to get from $V[G]$ to $W$ is equivalent to $(\\add(\\kappa) \\times \\col(\\mu,\\lambda)) * \\mathbb{P}$. 
Let $\\mathcal{L}$ be the collection of subforcings of the form $(\\add(\\kappa) \\times \\col(\\mu,\\lambda)) * \\mathbb{Q}_\\alpha$ for $\\alpha < \\delta$. This sequence then witnesses the $(\\mu,\\delta)$-NLC property in $V[G]$. The closure properties are evident, and since the whole forcing has the $\\delta$-c.c., functions from $\\mu$ to ordinals are indeed captured by these factors.\n\n\n\nLet $P_0 = \\p(\\mu)^W$, and consider the submodel $M(P_0)$. In $W$, $Q_0 = \\break \\p(\\add(\\delta))^{M(P_0)}$ has cardinality $\\delta$. To show this, let $Y \\subseteq \\delta$ be $\\add(\\delta)$-generic over $W$. By Theorem~\\ref{forget}, $Y$ is $A(\\mu,\\delta)$-generic over $V$, and hence over $M$ since $(\\col(\\mu,{<}\\delta)*\\add(\\delta))^M = (\\col(\\mu,{<}\\delta)*\\add(\\delta))^V$ by the closure of $M$. Since $M[Y]$ thinks $j(\\delta)$ is inaccessible, $M[Y] \\models |Q_0| < j(\\delta)$, so $W[Y] \\models |Q_0| = \\delta$ since $j(\\delta) < (\\delta^+)^V$. Since $W \\models 2^\\mu = \\delta$, $W$ and $W[Y]$ have the same cardinals, so $W \\models |Q_0| = \\delta$. Therefore, working in $W$, we can inductively build a set $\\hat{X} \\subseteq \\delta$ that is $\\add(\\delta)$-generic over $M(P_0)$ with $\\hat{X} \\cap \\kappa = X$. By Lemma~\\ref{cheat}, $\\hat{X}$ is $A(\\mu,\\delta)$-generic over $M[G]$. A further forcing produces $G^\\prime \\supseteq G$, such that $G^\\prime * \\hat{X}$ is $\\col(\\mu,{<}\\delta)*\\add(\\delta)$-generic over $M$, so we have an elementary $\\hat{j} : V[G][X] \\to M[G^\\prime][\\hat{X}]$ extending $j$. By elementarity, for the corresponding filters $H_X$ and $H_{\\hat{X}}$ on the respective algebras $A(\\mu,\\kappa)^V$ and $A(\\mu,\\delta)^M$, we have $j[H_X] \\subseteq H_{\\hat{X}}$. Hence we can define in $W$ the restricted elementary embedding $\\hat{j} : V[X] \\to M[\\hat{X}]$.\n\n\n\n\nNow we wish to extend $\\hat{j}$ to have domain $V[X][H]$. As in the argument for Lemma~\\ref{quotdist}, every element of $(\\ord^\\mu)^W$ is coded by some element of $M$ and some $y \\subseteq \\mu$ coded in $\\hat{X}$, so $M[\\hat{X}]$ is closed under ${<}\\delta$-sequences from $W$. Consequently, $H \\cap \\mathbb{Q}_\\alpha$ and $\\hat{j}[ H \\cap \\mathbb{Q}_\\alpha]$ are in $M[\\hat{X}]$ for all $\\alpha < \\delta$. Also, $M[\\hat{X}] \\vDash$ ``$\\hat{j}(\\mathbb{P})$ is $(\\delta,j(\\delta))$-very nicely layered.'' Each $\\hat{j}[ H \\cap \\mathbb{Q}_\\alpha ]$ is a directed set of size $\\mu$ in $M[\\hat{X}]$, so it has an infimum $m_\\alpha \\in \\hat{j}(\\mathbb{Q}_\\alpha)$.\n\nLet $\\langle A_\\alpha : \\alpha < \\delta \\rangle \\in W$ enumerate the maximal antichains of $\\hat{j}(\\mathbb{P})$ from $M[\\hat{X}]$. (There are only $\\delta$ many because $M[\\hat{X}]$ thinks this partial order has inaccessible size $j(\\delta)$ and is $j(\\delta)$-c.c.) Inductively define an increasing sequence of ordinals $\\langle \\alpha_i \\rangle_{i < \\delta} \\subseteq \\delta$, and a corresponding decreasing sequence of conditions $\\langle p_i \\rangle_{i < \\delta} \\subseteq \\hat{j}(\\mathbb{P})$ as follows.\n\nAssume as the induction hypothesis that we have defined the sequences up to $i$, and for all $\\xi < i$ and all $\\alpha < \\delta$, $p_\\xi$ is compatible with $m_\\alpha$, and for all $\\xi < i$, there is some $a \\in A_\\xi$ such that $p_\\xi \\leq a$. Let $q_i = \\inf_{\\xi \\alpha_i$, $m_\\alpha \\restriction j(\\alpha_i) = m_{\\alpha_i}$. 
This is because for any $\\alpha < \\beta < \\delta$, \n\\begin{align*}\nm_\\beta \\restriction j(\\alpha) = & (\\inf \\{ j(p) : p \\in H {\\restriction} \\beta \\}) \\restriction j(\\alpha) = \\inf \\{ j(p) {\\restriction} j(\\alpha) : p \\in H {\\restriction} \\beta \\} \\\\\n= & \\inf \\{ j(p {\\restriction} \\alpha) : p \\in H {\\restriction} \\beta \\} = \\inf \\{ j(p) : p \\in H {\\restriction} \\alpha \\} = m_\\alpha. \n\\end{align*}\n\nThe upward closure of the sequence $\\langle p_i \\rangle_{i < \\delta}$ is a filter $\\hat{H}$ which is $\\hat{j}(\\mathbb{P})$-generic over $M[\\hat{X}]$. For all $p \\in H$, $\\hat{j}(p) \\in \\hat{H}$ since there is some $m_\\alpha \\leq \\hat{j}(p)$. Thus we get an extended elementary embedding $\\hat{j} : V[X][H] \\to M[\\hat{X}][\\hat{H}]$. In $W$, we define an ultrafilter $U$ over $(\\p(\\p_\\kappa \\lambda ))^{V[X][H]}$: let $A \\in U$ iff $j[\\lambda] \\in \\hat{j}(A)$. Note that $j[\\lambda] \\in \\p_{j(\\kappa)}(j(\\lambda))^{M[\\hat{X}][\\hat{H}]}$. $U$ is $\\kappa$-complete and normal with respect to functions in $V[X][H]$. If $f : \\p_\\kappa(\\lambda) \\to \\lambda$ is a regressive function in $V[X][H]$ on a set $A \\in U$, then $\\hat{j}(f)(j[\\lambda]) = j(\\alpha)$ for some $\\alpha < \\lambda$, so $\\{ z \\in A : f(z) = \\alpha \\} \\in U$.\n\nNow the forcing to obtain $U$ was $\\mathbb{Q} = B(\\mu,\\kappa)\/H_X \\times \\col(\\mu,\\lambda)$, the product of a $\\kappa$-dense and a $\\lambda$-dense partial order. In $V[X][H]$, let $e : \\p(\\p_\\kappa \\lambda) \\to \\mathcal{B}(\\mathbb{Q})$ be defined by $e(A) = || \\check{A} \\in \\dot{U} ||$. Let $I$ be the kernel of $e$. $I$ is clearly a normal, $\\kappa$-complete ideal. $e$ lifts to a boolean embedding of $\\p(\\p_\\kappa \\lambda) \/ I$ into $\\mathcal{B}(\\mathbb{Q})$. Since $\\mathbb{Q}$ is $\\lambda^+$-c.c., $I$ is $\\lambda^+$-saturated. If $\\langle [A_\\alpha] : \\alpha < \\lambda \\rangle$ is a maximal antichain in $\\p_\\kappa(\\lambda) \/ I$, then $\\nabla A_\\alpha$ is the least upper bound and is in the dual filter to $I$. $e(\\nabla A_\\alpha) = || \\nabla A_\\alpha \\in \\dot{U} || = 1$, and this is the least upper bound in $\\mathcal{B}(\\mathbb{Q})$ to $\\{ e(A_\\alpha) : \\alpha <\\lambda \\}$. This is because if there were a generic extension in which all $A_\\alpha \\notin U$, then $\\nabla A_\\alpha \\notin U$ as well since $U$ is normal with respect to sequences from $V[X][H]$. Therefore $e$ is a complete embedding, and thus $I$ is $\\lambda$-dense. \n\\end{proof}\n\nWe can also characterize the exact structure of $\\p(\\p_\\kappa \\lambda) \/ I$. First note the following about the ground model embedding $j : V \\to M$. $M$ is the direct limit of the coherent system of $\\alpha$-supercompactness embeddings $j_\\alpha : V \\to M_\\alpha$ for $\\alpha < \\delta$. Every member of $M_\\alpha$ is represented as $j_\\alpha(f)(j_\\alpha[\\alpha])$ for some function $f \\in V$ with domain $\\p_\\kappa(\\alpha)$. If $k_\\alpha : M_\\alpha \\to M$ is the factor map such that $j = k_\\alpha \\circ j_\\alpha$, then the critical point of $k_\\alpha$ is above $\\alpha$, so $k_\\alpha(x) = k_\\alpha[x]$ when $M_\\alpha \\vDash |x| \\leq |\\alpha|$. 
Since $M$ is the direct limit, for any $x \\in M$, there is some $\\alpha < \\delta$ and some $f \\in V$ such that\n\\[ x = k_\\alpha([f]) = k_\\alpha(j_\\alpha(f)(j_\\alpha[\\alpha])) = j(f)(k_\\alpha(j_\\alpha[\\alpha])) = j(f)(j[\\alpha]).\n\\]\n\nLet $U \\subseteq \\p(\\p_\\kappa \\lambda) \/ I$ be generic over $V[X][H]$, and let $j_U : V \\to N$ be the generic ultrapower embedding. Since $e : \\p(\\p_\\kappa \\lambda) \/ I \\to \\mathcal{B}(\\mathbb{Q})$ is a complete embedding, forcing with $\\mathcal{B}(\\mathbb{Q}) \/ e[U]$ over $V[X][H][U]$ produces a model $W$ as above. Notice that the definition of $e$ and $U$ makes $A \\in U$ iff $j[\\lambda] \\in \\hat{j}(A)$. Hence we can define an elementary embedding $k : N \\to M[\\hat{X}][\\hat{H}]$ by $k([f]) = \\hat{j}(f)(j[\\lambda])$, and we have $\\hat{j} = k \\circ j_U$.\n\nWhat is the critical point of $k$? Since $N \\vDash \\mu^+ = \\delta$, certainly it must be at least $\\delta$. Let $\\beta$ be any ordinal. There is some $\\alpha$ such that $\\lambda \\leq \\alpha < \\delta$ and some $f \\in V$ such that $\\beta = j(f)(j[\\alpha])$. Let $b : \\lambda \\to \\alpha$ be a bijection in $V[X][H]$. Then $\\beta = j(f)(\\hat{j}(b)[j[\\lambda]])$. Furthermore, $j[\\lambda] = k(j_U[\\lambda])$. Therefore, $\\beta = k( j_U(f)(j_U(b)[j_U[\\lambda]]))$. Thus $\\beta \\in \\ran(k)$, and so $k$ does not have a critical point.\n\nTherefore, $N = M[\\hat{X}][\\hat{H}]$. By the closure of $M[\\hat{X}][\\hat{H}]$, the generic $K \\times C$ for $\\mathbb{Q}$ is in $M[\\hat{X}][\\hat{H}] = N \\subseteq V[X][H][U]$. So the quotient $\\mathcal{B}(\\mathbb{Q}) \/ e[U]$ is trivial and $\\p(\\p_\\kappa \\lambda) \/ I \\cong \\mathcal{B}(\\mathbb{Q}) \\restriction q$ for some $q$. \n\nThe generic embeddings coming from $I$ extend the original almost-hugeness embedding. In particular, $j[\\delta]$ is cofinal in $j(\\delta)$. This can also be deduced from the assumption that there is some $A \\in I^*$ of size $\\lambda$, which of course follows from $\\lambda^{<\\kappa} = \\lambda$. In contrast, Burke and Matsubara~\\cite{BM} proved that if there is a normal, fine, $\\kappa$-complete, $\\lambda^+$-saturated ideal on $\\p_\\kappa(\\lambda)$ and $\\cf(\\lambda)<\\kappa$, then it is forced that $\\sup (j[\\lambda^+]) < j(\\lambda^+)$. It seems to be unknown whether it is consistent to have saturated ideals on $\\p_\\kappa(\\lambda)$ for successor $\\kappa$ and singular $\\lambda$, and this result suggests that quite different methods will be needed for an answer.\n\n\\subsubsection{Minimal generic supercompactness}\nGeneralizing supercompactness, we will say a cardinal $\\kappa$ is \\emph{generically supercompact} when for every $\\lambda \\geq \\kappa$, there is a forcing $\\mathbb{P}$ such that whenever $G \\subseteq \\mathbb{P}$ is generic, there is an elementary embedding $j : V \\to M$, where $M$ is a transitive class in $V[G]$, $\\crit(j) = \\kappa$, $j(\\kappa) > \\lambda$, and $M^\\lambda \\cap V[G] \\subseteq M$. We note that unlike in the case of non-generic supercompactness, the condition that $j[\\lambda] \\in M$ does not imply that $M$ is closed under $\\lambda$-sequences from $V[G]$. Whenever a supercompact $\\kappa$ is turned into a successor cardinal by a $\\kappa$-c.c.\\ forcing, we will have that for all $\\lambda \\geq \\kappa$, there is a normal, fine, precipitous ideal on $\\p_\\kappa(\\lambda)$ whose generic embeddings always extend the original supercompactness embedding. 
But if $j : V \\to M$ is an embedding coming from a normal ultrafilter on $\\p_\\kappa(\\lambda)$, then $2^{\\lambda^{<\\kappa}} < j(\\kappa) < (2^{\\lambda^{<\\kappa}})^+$. If $\\kappa = \\mu^+$ in a generic extension $V[H]$, and a further extension gives $\\hat{j} : V[H] \\to M[\\hat{H}] \\subseteq V[H][G]$ extending $j$, then $M[\\hat{H}]$ is not closed under $\\lambda$-sequences from $V[H][G]$. This is because $| \\lambda | = | j(\\kappa)| = \\mu$ in $V[H][G]$, while $M[\\hat{H}]$ thinks $j(\\kappa)$ is a cardinal.\n\nStronger properties of ideals on $\\p_\\kappa(\\lambda)$ are needed to give genuine generic supercompactness. One such property is $\\lambda^+$-saturation, which is implied by $\\lambda$-density. We now sketch how to get a model in which there is a successor cardinal $\\kappa$ such that for all regular $\\lambda \\geq \\kappa$, there is a normal, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$. Start with a super-almost-huge cardinal $\\kappa$ and a regular $\\mu < \\kappa$. The first part of the forcing is $A(\\mu,\\kappa)$. Then we do a proper class iteration, which we prefer to describe instead as an iteration up to an inaccessible $\\delta > \\kappa$ such that $V_\\delta \\vDash \\kappa$ is super-almost-huge.\n\nLet $T = \\{ \\alpha < \\delta : \\kappa$ carries an almost-huge tower of height $\\alpha \\}$. Let $C$ be the closure of $T$, and let $\\langle \\alpha_\\beta \\rangle_{\\beta < \\delta}$ be its continuous increasing enumeration. Over $V^{A(\\mu,\\kappa)}$, let $\\mathbb{P}_\\delta$ be the Easton-support limit of the following:\n\\begin{itemize}\n\\item Let $\\mathbb{P}_0 = \\col(\\kappa,<\\alpha_0)$.\n\\item If $\\beta$ is zero or a successor ordinal, let $\\mathbb{P}_{\\beta+1} = \\mathbb{P}_{\\beta} * \\col(\\alpha_\\beta,<\\alpha_{\\beta+1})$.\n\\item If $\\beta$ is a limit ordinal such that $\\alpha_\\beta$ is singular, let $\\mathbb{P}_{\\beta+1} = \\mathbb{P}_{\\beta} * \\col(\\alpha_\\beta^+,<\\alpha_{\\beta+1})$.\n\\item If $\\beta$ is a limit ordinal such that $\\alpha_\\beta$ is regular, let $\\mathbb{P}_{\\beta+1} = \\mathbb{P}_{\\beta} * \\col(\\alpha_\\beta,<\\alpha_{\\beta+1})$.\n\\end{itemize}\n\nIt is routine to verify that this iteration preserves the regularity of the members of $T$, the successors of the singular limit points of $T$, and the regular limit points of $T$. Further, the set of non-limit-points of $T$ becomes the set of successors of regular cardinals between $\\kappa$ and $\\delta$.\n\nLet $X \\subseteq \\kappa$ be $A(\\mu,\\kappa)$-generic over $V$, and let $H \\subseteq \\mathbb{P}_\\delta$ be generic over $V[X]$. Suppose $\\kappa \\leq \\lambda < \\delta$, and $\\lambda$ is regular in $V[X][H]$. Then there is some successor ordinal $\\beta<\\delta$ such that $\\alpha_\\beta \\in T$ and $\\alpha_\\beta = \\lambda^+$. Consider the subforcing $A(\\mu,\\kappa) * \\mathbb{P}_\\beta = (A(\\mu,\\kappa) * \\mathbb{P}_{\\beta-1}) * \\col(\\lambda,<\\alpha_\\beta)$. The forcing $\\mathbb{P}_\\beta$ is $(\\kappa,\\alpha_\\beta)$-very nicely layered in $V[X]$.\n\nIf $j : V \\to M_\\beta$ is an almost-huge embedding with critical point $\\kappa$ and $j(\\kappa) = \\alpha_\\beta$, then by Theorem~\\ref{maindense}, there is a normal, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$ in $V[X][H_\\beta]$. Now note that the tail-end forcing $\\mathbb{P}_{\\beta,\\delta}$ is $\\alpha_\\beta$-closed. Since $\\lambda^{<\\kappa} = \\lambda$ in $V[X][H_\\beta]$, no new subsets of $\\p_\\kappa(\\lambda)$ are added by the tail. 
The collection $\\{ A_\\alpha : \\alpha < \\lambda \\}$ witnessing the $\\lambda$-density of $I$ retains this property, as this is a local property of the boolean algebra $\\p(\\p_\\kappa(\\lambda)) \/ I$ and $\\{ A_\\alpha : \\alpha < \\lambda \\}$. Normality and completeness of $I$ are likewise preserved. \n\nThis method is quite flexible, and can be done by iterating collapsing posets other than the Levy collapse, or by using products rather than iterations.\n\n\\subsubsection{Dense ideals on successive cardinals?}\n\nAt the time of this writing, it is unknown whether there can exist simultaneously a normal $\\kappa$-dense ideal on $\\kappa$ and a normal $\\kappa^+$-dense ideal on $\\kappa^+$. The following is the current best approximation.\n\nSuppose $\\langle \\kappa_n : n < \\omega \\rangle$ is a sequence of cardinals such that for all $n$, $\\kappa_n$ carries an almost-huge tower of height $\\kappa_{n+1}$. Such a sequence will be called an \\emph{almost-huge chain}. Obviously, extending this to sequences of length longer than $\\omega$ requires an extra idea; perhaps we just stack one $\\omega$-chain above another, or maybe postulate some relationship between the $\\omega$-chains. By Theorem~\\ref{tourney}, such chains occur quite often below a huge cardinal.\n\n\nSuppose $\\langle \\kappa_n : 0 < n < \\omega \\rangle$ is an almost-huge chain, and $\\mu < \\kappa_1$ is regular. Consider the full-support iteration $\\mathbb{P}$ of $\\langle \\mathbb{P}_n : n < \\omega \\rangle$, where $\\mathbb{P}_0 = A(\\mu,\\kappa_1)$, and for all $n < \\omega$, $\\mathbb{P}_{n+1} = \\mathbb{P}_n * A(\\kappa_{n+1},\\kappa_{n+2})$. The stage $\\mathbb{P}_1 = A(\\mu,\\kappa_1) * A(\\kappa_1,\\kappa_2)$ regularly embeds into $A(\\mu,\\kappa_1) * (\\col(\\kappa_1,< \\! \\kappa_2) * \\add(\\kappa_2))$. The first two stages here add a normal $\\kappa_1$-dense ideal on $\\kappa_1$ and make $\\kappa_1=\\mu^+$, $\\kappa_2 = \\mu^{++}$. The third stage preserves this since it adds no subsets of $\\kappa_1$. By Lemma~\\ref{quotdist}, the quotient forcing $\\mathbb{Q}$ to get from $V^{\\mathbb{P}_1}$ to this three-stage extension is $\\kappa_2$-distributive. Now the tail-end forcing $\\mathbb{P} \/ \\mathbb{P}_1$ is $\\kappa_2$-strategically closed. Since $\\mathbb{Q}$ does not add any plays of the relevant game of length $< \\! \\kappa_2$, $\\mathbb{P} \/ \\mathbb{P}_1$ remains $\\kappa_2$-strategically closed in $V^{\\mathbb{P}_1 * \\mathbb{Q}}$, so forcing with it preserves the $\\kappa_1$-dense ideal on $\\kappa_1$. Also, $\\mathbb{Q}$ remains $\\kappa_2$-distributive in $V^{\\mathbb{P}}$, since $\\mathbb{Q} \\times (\\mathbb{P} \/ \\mathbb{P}_1)$ is $\\kappa_2$-distributive in $V^{\\mathbb{P}_1}$. It thus remains the case in $V^\\mathbb{P}$ that there is a $\\kappa_2$-distributive forcing adding a normal $\\kappa_1$-dense ideal on $\\kappa_1$.\n\n\nSimilarly, consider $V^{\\mathbb{P}_n}$ for $n >1$. $\\mathbb{P}_n = \\mathbb{P}_{n-2} * (A(\\kappa_{n-1}, \\kappa_n) * A(\\kappa_n,\\kappa_{n+1}))$. Since $|\\mathbb{P}_{n-2}| = \\kappa_{n-1}$ (or $\\mu$ for $n=2$), $\\kappa_{n}$ retains an almost-huge tower of height $\\kappa_{n+1}$ in $V^{\\mathbb{P}_{n-2}}$. Thus the same argument applies: In $V^{\\mathbb{P}_n}$, there is a $\\kappa_{n+1}$-distributive forcing adding a normal $\\kappa_n$-dense ideal on $\\kappa_n$, and this remains true in $V^\\mathbb{P}$. Therefore, we obtain a model in which for all $n >0$, there is a $\\mu^{+n+1}$-distributive forcing adding a normal $\\mu^{+n}$-dense ideal on $\\mu^{+n}$. 
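For instance, taking $\\mu = \\omega$, this gives a model in which for every $0 < n < \\omega$ there is an $\\aleph_{n+1}$-distributive forcing adding a normal $\\aleph_n$-dense ideal on $\\aleph_n$. 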
By repeating this with a tall enough stack of almost-huge chains, we obtain the consistency of ZFC with the statement, ``For all regular cardinals $\\kappa$, there is a $\\kappa^{++}$-distributive forcing adding a normal $\\kappa^+$-dense ideal on $\\kappa^+$.''\n\n\\section{Structural constraints}\n\nSaturated ideals have a strong influence over the combinatorial structure of the universe in their vicinity. Phenomena of this type may also be viewed as the universe imposing constraints on the structural properties of ideals. Below are some of the most interesting known results to this effect. Proofs can be found in~\\cite{foremanhandbook}.\n\n\\begin{enumerate}[(1)]\n\\item (Tarski) If $I$ is a nowhere-prime ideal which is $\\kappa$-complete and $\\mu$-saturated for some $\\mu < \\kappa$, then $2^{<\\mu} \\geq \\kappa$.\n\\item (Jech-Prikry) If $\\kappa = \\mu^+$, $2^\\mu = \\kappa$, and there is a $\\kappa$-complete, $\\kappa^+$-saturated ideal on $\\kappa$, then $2^\\kappa = \\kappa^+$.\n\\item (Jech-Prikry) If $\\kappa = \\mu^+$, and there is a $\\kappa$-complete, $\\kappa^+$-saturated ideal on $\\kappa$, then there are no $\\kappa$-Kurepa trees.\n\\item (Woodin) If there is a countably complete, $\\omega_1$-dense ideal on $\\omega_1$, then there is a Suslin tree.\n\\item (Woodin) If there is a countably complete, uniform, $\\omega_1$-dense ideal on $\\omega_2$, then $2^\\omega = \\omega_1$. (Uniform means that all sets of size $<\\omega_2$ are in the ideal; this is equivalent to fineness.)\n\\item (Shelah) If $2^\\omega < 2^{\\omega_1}$, then $NS_{\\omega_1}$ is not $\\omega_1$-dense.\n\\item (Gitik-Shelah) If $I$ is a $\\kappa$-complete, nowhere-prime ideal, then $\\den(I) \\geq \\kappa$.\n\\end{enumerate}\n\nWe note that result (2) easily generalizes to the following: If $\\kappa = \\mu^+$, $2^\\mu = \\kappa$, and there is a normal, fine, $\\kappa$-complete, $\\lambda^+$-saturated ideal on $\\p_\\kappa(\\lambda)$, then $2^\\lambda = \\lambda^+$.\n\nIf no requirements are made for the ideal $I$ and the set $Z$ on which it lives, almost no structural constraints on quotient algebras remain. The following strengthens a folklore result, probably known to Sikorski. The argument was supplied by Don Monk in personal correspondence.\n\n\\begin{proposition}\nLet $\\mathbb{B}$ be a complete boolean algebra, and let $\\kappa$ be a cardinal such that $2^\\kappa \\geq |\\mathbb{B}|$. There is a uniform ideal $I$ on $\\kappa$ such that $\\mathbb{B} \\cong \\p(\\kappa)\/I$.\n\\end{proposition}\n\n\\begin{proof}\nLet $\\kappa$, $\\mathbb{B}$ be as hypothesized. By the theorem of Fichtenholz-Kantorovich and Hausdorff (see \\cite[Lemma 7.7]{jechbook}), there exists a family $F$ of $2^\\kappa$ many subsets of $\\kappa$ such that for any distinct $x_1,...,x_n,y_1,...,y_m \\in F$, $x_1 \\cap ... \\cap x_n \\cap (\\kappa \\setminus y_1) \\cap ... \\cap (\\kappa \\setminus y_m)$ has size $\\kappa$. $F$ generates a free algebra: closing $F$ under finitary set operations gives a family of sets $G$ such that any equation holding between elements of $G$ expressed as boolean combinations of elements of $F$ holds in all boolean algebras. If we pick any surjection $h_0 : F \\to \\mathbb{B}$ and extend it to $h_1 : G \\to \\mathbb{B}$ in the obvious way, then $h_1$ will be a well-defined homomorphism.
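\n\nFor a concrete example of such an independent family (one standard construction, going back to Hausdorff), identify $\\kappa$ with the set $Z = \\{ (s, A) : s \\in [\\kappa]^{<\\omega}, A \\subseteq \\p(s) \\}$, which has size $\\kappa$, and for each $X \\subseteq \\kappa$ put\n\\[ B_X = \\{ (s,A) \\in Z : X \\cap s \\in A \\}. \\]\nGiven distinct $X_1,...,X_n,Y_1,...,Y_m \\subseteq \\kappa$, choose a finite $s$ such that the sets $X_i \\cap s$ are pairwise distinct and distinct from each $Y_j \\cap s$; then for every finite $s' \\supseteq s$, the pair $(s', \\{ X_1 \\cap s', ..., X_n \\cap s' \\})$ lies in every $B_{X_i}$ and in no $B_{Y_j}$, so the corresponding boolean combination has size $\\kappa$.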
\n\nLet $I_{bd}$ be the ideal of bounded subsets of $\\kappa$. Since all elements of $G$ are either empty or have cardinality $\\kappa$, $G \\cong G\/I_{bd}$, so $h_1$ has an extension $h_2$ from the algebra generated by $G \\cup I_{bd}$ to $\\mathbb{B}$, where $h_2(x) = 0$ for all $x \\in I_{bd}$. Finally, by Sikorski's extension theorem, there is a further extension to a homomorphism $h_3 : \\p(\\kappa) \\to \\mathbb{B}$. The kernel of $h_3$ is an ideal $I$ such that $\\p(\\kappa)\/I \\cong \\mathbb{B}$. \n\\end{proof}\n\n\n\\subsection{Cardinal arithmetic and ideal structure}\n\nA careful examination of the proof of Woodin's theorem (5) shows that $\\omega_2$ can be replaced by any $\\omega_n$, $2 \\leq n < \\omega$. Aside from that, Woodin's argument is rather specific to the cardinals involved. In \\cite{foremanhandbook}, Foreman asked (Open Question 27) whether the analogous statement holds one level up:\n\n\\begin{question}[Foreman]Does the existence of an $\\omega_2$-complete, $\\omega_2$-dense, uniform ideal on $\\omega_3$ imply that $2^{\\omega_1} = \\omega_2$?\n\\end{question}\n\nTo answer this, we invoke an easy preservation lemma about ideals under small forcing. If $I$ is an ideal, $\\mathbb{P}$ is a partial order, and $G \\subseteq \\mathbb{P}$ is generic, then $\\bar{I}$ denotes the ideal generated by $I$ in $V[G]$, i.e. $\\{ X : (\\exists Y \\in I) X \\subseteq Y \\}$.\n\n\n\\begin{lemma}\n\\label{pres}\nSuppose $I$ is a $\\kappa$-complete ideal on $Z \\subseteq \\p(X)$, $\\mathbb{P}$ is a partial order, and $G$ is $\\mathbb{P}$-generic.\n\\begin{enumerate}[(1)]\n\\item If $\\sat(\\mathbb{P}) \\leq \\kappa$, then $\\bar{I}$ is $\\kappa$-complete in $V[G]$.\n\\item If $\\den(\\mathbb{P}) < \\kappa$, then $\\den(\\bar{I})^{V[G]} \\leq \\den(I)^V$. \n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFor (1), let $\\dot{s}$ be a $\\mathbb{P}$-name for a sequence of elements of $\\bar{I}$ of length less than $\\kappa$. By $\\kappa$-saturation, let $\\beta < \\kappa$ be such that $1 \\Vdash dom(\\dot{s}) \\leq \\beta$. For each $\\alpha < \\beta$, let $A_\\alpha$ be a maximal antichain such that for $p \\in A_\\alpha$, $p \\Vdash \\dot{s}(\\alpha) \\subseteq \\check{b}^p_\\alpha$, where ${b}^p_\\alpha \\in I$. Then $B = \\bigcup_{p, \\alpha} b^p_{\\alpha} \\in I$, and $1 \\Vdash \\bigcup \\dot{s} \\subseteq \\check{B}$.\n\n\nFor (2), let $D \\subseteq \\mathbb{P}$ be a dense set of size less than $\\kappa$, let $A \\in \\bar{I}^+$, and let $\\dot{A}$ be a $\\mathbb{P}$-name for $A$. Then $A = \\bigcup_{d \\in D \\cap G} \\{ z : d \\Vdash z \\in \\dot{A} \\}$. By (1), there is some $d \\in D$ such that $\\{ z : d \\Vdash z \\in \\dot{A} \\} \\notin I$. This shows that $(\\p(Z)\/I)^V$ is dense in $(\\p(Z)\/\\bar{I})^{V[G]}$, and the conclusion follows. \n\\end{proof} \n\n\\begin{corollary}\n\\label{wg}\nIf there is a $\\kappa^+$-complete, $\\kappa^+$-dense, uniform ideal on $\\kappa^{++}$, then $2^\\kappa = \\kappa^+$.\n\\end{corollary}\n\n\\begin{proof}\nSuppose for a contradiction that $f: \\p(\\kappa) \\to \\kappa^{++}$ is a surjection. Let $\\mathbb{P} = \\col(\\omega,\\kappa)$, and let $G$ be $\\mathbb{P}$-generic. Since $\\den(\\mathbb{P}) = \\kappa$, Lemma~\\ref{pres} implies that $\\bar{I}$ is $\\kappa^+$-complete and $\\kappa^+$-dense in $V[G]$. Furthermore, in $V[G]$, $\\kappa^+ = \\omega_1$ and $\\kappa^{++}= \\omega_2$. Thus Woodin's theorem implies that $V[G] \\vDash$ CH. However, $f$ witnesses the failure of CH: in $V[G]$ there is a bijection between $\\kappa$ and $\\omega$, so $f$ induces a surjection from a subset of $\\p(\\omega)^{V[G]}$ onto $\\kappa^{++} = \\omega_2^{V[G]}$, giving $2^\\omega \\geq \\omega_2$ in $V[G]$, a contradiction. 
\n\\end{proof}\n\n\nAnother interesting constraint can be derived from the following:\n\n\\begin{theorem}[Shelah~\\cite{shelahproper}]\n\\label{shelahcof}\nSuppose $V \\subseteq W$ are models of ZFC. If $\\kappa$ is a regular cardinal in $V$, and $\\cf(\\kappa) \\not= \\cf(|\\kappa|)$ in $W$, then $(\\kappa^+)^V$ is not a cardinal in $W$.\n\\end{theorem}\n\n\\begin{corollary}[Burke-Matsubara \\cite{bmclub}]\n\\label{cofcon}\nIf $\\kappa = \\mu^+$, $\\lambda \\geq \\kappa$ is regular, and $I$ is a normal, fine, $\\kappa$-complete, $\\lambda^+$-saturated ideal on $\\p_\\kappa(\\lambda)$, then $\\{ z : \\cf(z) = \\cf(\\mu) \\} \\in I^*$.\n\\end{corollary}\n\\begin{proof}\nLet $G$ be a generic ultrafilter extending $I^*$, and let $j : V \\to M$ be the generic ultrapower embedding. Since $\\crit(j) = \\kappa$ and $\\lambda^+$ is preserved, $j(\\kappa) = \\lambda^+$, and $|\\lambda| = \\mu$ in $V[G]$. By Shelah's theorem, $\\cf(\\lambda) = \\cf(\\mu)$ in $V[G]$ and in the ultrapower $M$ since $M^\\mu \\cap V[G] \\subseteq M$. Since $1 \\Vdash [\\id] = j[\\lambda]$, \\L o\\'{s}'s theorem gives $\\{ z : \\cf(z) = \\cf(\\mu) \\} \\in I^*$. \n\\end{proof}\n\n\\begin{theorem}\n\\label{dist}\nSuppose $\\kappa = \\mu^+$, and $I$ is a normal, fine, $\\kappa^+$-saturated ideal on $\\kappa$. Then $\\p(\\kappa) \/ I$ is $\\cf(\\mu)$-distributive iff $\\mu^{<\\cf(\\mu)} = \\mu$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $\\p(\\kappa) \/ I$ is $\\cf(\\mu)$-distributive, and let $\\{ f_\\alpha : \\alpha < \\delta \\}$ be an enumeration of $[\\mu]^{<\\cf(\\mu)}$, where $\\delta$ is a cardinal. If $\\mu < \\delta$, then for any $\\p(\\kappa) \/ I$-generic $G$, $([\\mu]^{<\\cf(\\mu)})^V$ is a proper subset of $([\\mu]^{<\\cf(\\mu)})^{V[G]}$, since $j[\\delta] \\not= j(\\delta)$. This contradicts the distributivity of $\\p(\\kappa) \/ I$.\n\nSince $\\p(\\kappa)\/I$ is $\\kappa^+$-saturated, it is $\\cf(\\mu)$-distributive iff it is $(\\cf(\\mu), \\kappa)$-distributive. Let $G$ be $\\p(\\kappa) \/ I$-generic and let $M$ be the generic ultrapower. Let $\\beta < \\cf(\\mu)$, and suppose $f \\in V[G]$ is a function from $\\beta$ to $\\kappa$. By Theorem~\\ref{disj}, $f \\in M$. By Corollary~\\ref{cofcon}, $M \\vDash \\cf(\\kappa) = \\cf([\\id]) = \\cf(\\mu)$. Thus there is a $\\gamma < \\kappa$ such that $\\ran(f) \\subseteq \\gamma$. Observe that $j(^\\beta \\gamma) = (^\\beta \\gamma)^M = (^\\beta \\gamma)^V$, since $\\mu^\\beta < \\kappa$. Hence $f \\in V$. \n\\end{proof}\n\n\n\\subsection{Stationary reflection}\nA stationary subset $S$ of a regular cardinal $\\kappa$ is said to reflect if there is some $\\alpha < \\kappa$ of uncountable cofinality such that $S \\cap \\alpha$ is stationary in $\\alpha$. A collection of stationary subsets $\\{ S_i : i < \\delta \\}$ of $\\kappa$ is said to reflect simultaneously if there is some $\\alpha < \\kappa$ such that $S_i \\cap \\alpha$ is stationary for all $i < \\delta$. It is well known that if $\\kappa = \\mu^+$ and $X$ is a set of regular cardinals below $\\mu$, then the statement that every stationary subset of $\\{ \\alpha < \\kappa : \\cf(\\alpha) \\in X \\}$ reflects contradicts $\\square_\\mu$, and the statement that every pair of stationary subsets of $\\{ \\alpha < \\kappa : \\cf(\\alpha) \\in X \\}$ reflects simultaneously contradicts the weaker principle $\\square(\\kappa)$.\n\n\\begin{theorem}\nSuppose there is a $\\kappa^{+}$-complete, $\\kappa^{++}$-saturated, uniform ideal on $\\kappa^{+n}$ for some $n \\geq 2$. 
Then for $2 \\leq m \\leq n$, every collection of $\\kappa$ many stationary subsets of $\\kappa^{+m}$ contained in $\\cof(\\leq \\kappa)$ reflects simultaneously. \n\\end{theorem}\n\n\\begin{proof}\nSuppose $I$ is such an ideal and $j : V \\to M \\subseteq V[G]$ is a generic embedding arising from the ideal. The critical point of $j$ is $\\kappa^+$, and all cardinals above $\\kappa^{+}$ are preserved. Since $I$ is uniform, and there is a family of $\\kappa^{+n+1}$ many almost-disjoint functions from $\\kappa^{+n}$ to $\\kappa^{+n}$, $j(\\kappa^{+n}) \\geq (\\kappa^{+n+1})^V$. The first $n-1$ cardinals in $V$ above $\\kappa$ must map onto the first $n-1$ cardinals in $M$ above $\\kappa$. But in $M$, there are at least $n-1$ cardinals in the interval $(\\kappa,(\\kappa^{+n+1})^V)$ since all cardinals above $\\kappa^+$ are preserved. Thus if $j(\\kappa^{+n}) > (\\kappa^{+n+1})^V$, then $\\kappa^{+n+1}$ would be collapsed. So for $1 \\leq m \\leq n$, $j(\\kappa^{+m}) = (\\kappa^{+m+1})^V$.\n\nLet $\\{ S_\\alpha : \\alpha < \\kappa \\}$ be stationary subsets of $\\kappa^{+m}$ concentrating on $\\cof(\\leq \\kappa)$, where $2 \\leq m \\leq n$. By the $\\kappa^{++}$-chain condition, these sets remain stationary in $V[G]$. By the above remarks, $\\gamma = \\sup (j[\\kappa^{+m}]) < j(\\kappa^{+m})$. For each $\\alpha$, $j \\restriction S_\\alpha$ is continuous since $\\kappa < \\crit(j)$. For each $\\alpha$, let $C_\\alpha$ be the closure of $S_\\alpha$. In $V[G]$, we can define a continuous increasing function $f : C_\\alpha \\to \\gamma$ extending $j \\restriction S_\\alpha$ by sending $\\sup (S_\\alpha \\cap \\beta)$ to $\\sup (j[S_\\alpha \\cap \\beta])$ when $\\beta$ is a limit point of $S_\\alpha$. This shows that $j[S_\\alpha]$ is stationary in $\\gamma$. Now $M$ may not have $j[S_\\alpha]$ as an element, but it satisfies that $j(S_\\alpha) \\cap \\gamma$ is stationary in $\\gamma$. Furthermore, $j( \\{ S_\\alpha : \\alpha < \\kappa \\}) = \\{ j(S_\\alpha) : \\alpha < \\kappa \\}$, and $M$ sees that these all reflect at $\\gamma$. By elementarity, the $S_\\alpha$ have a common reflection point. \n\\end{proof}\n\n\\begin{proposition}\n\\label{Z-refl}\nSuppose $\\mu,\\kappa,\\lambda$ are regular cardinals such that $\\omega < \\mu < \\kappa = \\mu^+ < \\lambda$, and $I$ is an ideal on $\\p_\\kappa(\\lambda)$ as in Theorem~\\ref{maindense}. Then every collection $\\{ S_i : i < \\mu \\}$ of stationary subsets of $\\lambda \\cap \\cof( \\omega)$ reflects simultaneously.\n\\end{proposition}\n\n\\begin{proof}The algebra $\\p(\\p_\\kappa(\\lambda)) \/ I$ is isomorphic to $\\mathcal{B}(\\mathbb{P} \\times \\mathbb{Q})$, where $\\mathbb{P}$ is $\\kappa$-dense and $\\Vdash_\\mathbb{P} ``\\mathbb{Q}$ is $\\mu$-closed.'' Forcing with $\\mathbb{P} \\times \\mathbb{Q}$ thus preserves the stationarity of any subset of $\\lambda \\cap \\cof(\\omega)$. If $j : V \\to M \\subseteq V[G]$ is a generic embedding arising from the ideal, then since $j[\\lambda] \\in M$ and $M$ thinks $j(\\lambda)$ is regular, $\\gamma = \\sup(j[\\lambda])< j(\\lambda)$. The restriction of $j$ to each $S_i$ is continuous, and as above we may define in $V[G]$ a continuous increasing function from the closure of $S_i$ into $\\gamma$, showing $j[S_i]$ is stationary in $\\gamma$ for each $i$. Thus $M \\models (\\forall i < \\mu) j(S_i) \\cap \\gamma$ is stationary, so by elementarity, the collection reflects simultaneously. 
\n\\end{proof}\n\n\n\\subsection{Nonregular ultrafilters}\nThe computation of the cardinality of ultrapowers is an old problem of model theory. Originally, it was conjectured that if $\\mu,\\kappa$ are infinite cardinals, and $U$ is a countably incomplete uniform ultrafilter on $\\kappa$, then $| \\mu^\\kappa \/ U | = \\mu^\\kappa$ \\cite{ck}. It was shown by Donder~\\cite{donder} that this conjecture holds in the core model below a measurable cardinal. A key tool in computing the size of ultrapowers is the notion of regularity:\n\n\\begin{definition}\nAn ultrafilter $U$ on $Z$ is called \\emph{$(\\mu, \\kappa)$-regular} if there is a sequence $\\langle A_\\alpha : \\alpha < \\kappa \\rangle \\subseteq U$ such that for any $Y \\subseteq \\kappa$ of order type $\\mu$, $\\bigcap_{\\alpha \\in Y} A_\\alpha = \\emptyset$.\n\\end{definition}\n\n\\begin{theorem}[Keisler \\cite{keisler}]\n\\label{keisler}\nSuppose $U$ is a $(\\mu, \\kappa)$-regular ultrafilter on $Z$, witnessed by $\\langle A_\\alpha : \\alpha < \\kappa \\rangle$. For each $z \\in Z$, let $\\beta_z = \\ot( \\{ \\alpha : z \\in A_\\alpha \\}) < \\mu$. Then for any sequence of ordinals $\\langle \\gamma_z : z \\in Z \\rangle$, we have $| \\prod \\gamma_z^{\\beta_z} \/ U | \\geq | \\prod \\gamma_z \/ U |^\\kappa$.\n\\end{theorem}\n\nObviously any uniform ultrafilter on a cardinal $\\kappa$ is $(\\kappa,\\kappa)$-regular. Also, any fine ultrafilter on $\\p_\\kappa(\\lambda)$ is $(\\kappa,\\lambda)$-regular, as witnessed by $\\langle \\hat{\\alpha} : \\alpha < \\lambda \\rangle$. Much can be seen by exploiting a connection between dense ideals and nonregular ultrafilters.\n\n\\begin{lemma}[Huberich \\cite{huberich}]\n\\label{huberich}\nSuppose $\\mathbb{B}$ is a complete boolean algebra of density $\\kappa$, where $\\kappa$ is regular. Then there is an ultrafilter $U$ on $\\mathbb{B}$ such that whenever $X \\subseteq \\mathbb{B}$ and $\\sum X \\in U$, then there is $Y \\subseteq X$ such that $|Y| < \\kappa$ and $\\sum Y \\in U$.\n\\end{lemma}\n\n\\begin{proof}\nLet $D = \\{ d_\\alpha : \\alpha < \\kappa \\}$ be dense in $\\mathbb{B}$. For any maximal antichain $A \\subseteq \\mathbb{B}$, let $\\gamma_A > 0$ be least such that for all $\\alpha < \\gamma_A$, there are $\\beta < \\gamma_A$ and $a \\in A$ such that $d_\\beta \\leq d_\\alpha \\wedge a$. Let $C_A = \\{ d \\in D \\restriction \\gamma_A : (\\exists a \\in A) d \\leq a \\}$. Let $F = \\{ \\sum C_A : A$ is a maximal antichain$\\}$.\n\nWe claim $F$ has the finite intersection property. Let $A_1,...,A_n$ be maximal antichains. We may assume $\\gamma_{A_1} \\leq ... \\leq \\gamma_{A_n}$. Let $d_{\\alpha_1} \\leq d_0 \\wedge a_1$ for some $a_1 \\in A_1$, where $\\alpha_1 < \\gamma_{A_1}$. Let $d_{ \\alpha_2} \\leq d_{\\alpha_1} \\wedge a_2$ for some $a_2 \\in A_2$, where $\\alpha_2 < \\gamma_{A_2}$. Proceeding inductively, we get a descending chain $d_{\\alpha_1} \\geq ... \\geq d_{\\alpha_n}$, where each $d_{\\alpha_i} \\in C_{A_i}$. Thus $d_{\\alpha_n} \\leq \\sum C_{A_1} \\wedge ... \\wedge \\sum C_{A_n}$.\n\nLet $U \\supseteq F$ be any ultrafilter. If $\\sum X \\in U$, then we can find an antichain $A$ that is maximal below $\\sum X$ such that $(\\forall a \\in A)(\\exists x \\in X) a \\leq x$. Extending $A$ to a maximal antichain $A'$, we have $\\sum C_{A'} \\in F$. Every $a' \\in A' \\setminus A$ is incompatible with $\\sum X$, since otherwise $a' \\wedge \\sum X$ would be a nonzero element incompatible with every member of $A$, contradicting the maximality of $A$ below $\\sum X$. Hence every $d \\in C_{A'}$ compatible with $\\sum X$ lies below some $a \\in A$, and so below some $x_d \\in X$. Since $|C_{A'}| < \\kappa$, the set $Y = \\{ x_d : d \\in C_{A'} \\}$ has size less than $\\kappa$, and $\\sum Y \\geq \\sum C_{A'} \\wedge \\sum X \\in U$. 
\n\\end{proof}\n\nIf $I$ is an ideal on $Z$ and $U'$ is an ultrafilter on $\\p(Z)\/I$, then $U'$ generates an ultrafilter $U \\supseteq I^*$ on $Z$ by taking $U = \\{ X : [X]_I \\in U' \\}$.\n\n\n\\begin{lemma}\n\\label{singreg}\nSuppose $\\kappa=\\mu^+$, $\\lambda$ is regular, and $I$ is a normal and fine, $\\kappa$-complete, $\\lambda$-dense ideal on $Z \\subseteq \\p_\\kappa(\\lambda)$. Then any ultrafilter $U \\supseteq I^*$ given by Lemma \\ref{huberich} is $(\\cf(\\mu)+1,\\lambda)$-regular.\n\\end{lemma}\n\n\\begin{proof}\nLet $U'$ be an ultrafilter on $\\p(Z)\/I$ given by Lemma \\ref{huberich} and let $U$ be the corresponding ultrafilter on $Z$. By Corollary~\\ref{cofcon}, $\\{ z : \\cf(z) = \\cf(\\mu) \\} \\in I^*$. For such $z$, choose $A_z \\subseteq z$ of order type $\\cf(\\mu)$ that is cofinal in $z$. We will inductively build a sequence of intervals $\\{ (x_\\alpha,y_\\alpha) : \\alpha < \\lambda \\}$, each contained in $\\lambda$, such that $y_\\alpha < x_\\beta$ when $\\alpha < \\beta$, and such that for all $\\alpha$, $\\{ z : A_z \\cap (x_\\alpha,y_\\alpha) \\not= \\emptyset \\} \\in U$.\n\nSuppose we have constructed the intervals up to $\\beta$. Let $\\lambda > x_\\beta > \\sup \\{ y_\\alpha : \\alpha < \\beta \\}$. For $z \\in \\hat{x}_\\beta$, let $y_\\beta(z) \\in z$ be such that $A_z \\cap (x_\\beta,y_\\beta(z)) \\not= \\emptyset$. Since $I$ is normal, there is a maximal antichain $A$ of $I$-positive sets such that for all $a \\in A$, $y_\\beta(z)$ is the same for all $z \\in a$. There is some $A' \\subseteq A$ of size $< \\lambda$ such that $\\sum A' \\in U'$. Let $y_\\beta > x_\\beta$ be such that for $z \\in a \\in A'$, $y_\\beta(z) < y_\\beta$.\n\nFor $\\alpha < \\lambda$, let $X_\\alpha = \\{ z : A_z \\cap (x_\\alpha,y_\\alpha) \\not= \\emptyset \\}$. Since each $A_z$ has order type $\\cf(\\mu)$ and the intervals $(x_\\alpha,y_\\alpha)$ are disjoint and increasing, each $A_z$ cannot have nonempty intersection with all intervals in some sequence of length greater than $\\cf(\\mu)$. Thus if $s \\subseteq \\lambda$ and $z \\in \\bigcap_{\\alpha \\in s} X_\\alpha$, then $\\ot(s) \\leq \\cf(\\mu)$. \n\\end{proof}\n\n\\begin{lemma}\n\\label{ultsize}\nSuppose $I$ is a $\\lambda$-dense, $\\kappa$-complete ideal on $Z$, and $\\p(Z)\/I$ is a complete boolean algebra. Then any $U \\supseteq I^*$ given by Lemma \\ref{huberich} has the property that for all $\\alpha < \\kappa$, $| \\alpha^Z \/ U | \\leq 2^{<\\lambda}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $U'$ be an ultrafilter on $\\p(Z)\/I$ given by Lemma \\ref{huberich} and let $U$ be the corresponding ultrafilter on $Z$.\nTo compute a bound on $|\\alpha^Z\/U|$ for $\\alpha < \\kappa$, we identify a small subset of $\\alpha^Z$ and show that it contains a representative of every equivalence class modulo $U$. Let $D$ witness $\\lambda$-density. Choose an antichain $A \\subseteq D$ of size $< \\! \\lambda$, and choose $f : A \\to \\alpha$. There are $\\sum_{\\gamma < \\lambda} \\lambda^\\gamma \\cdot \\alpha^\\gamma = 2^{<\\lambda}$ many choices. Using $\\kappa$-completeness, let $\\{ B_\\beta : \\beta < \\alpha \\}$ be pairwise disjoint and such that each $[B_\\beta]_I = \\sum f^{-1}(\\beta)$. Let $g_f : Z \\to \\alpha$ be defined by $g_f(z) = \\beta$ if $z \\in B_\\beta$ and $g_f(z) = 0$ if $z \\notin \\bigcup_{\\beta<\\alpha} B_\\beta$.\n\nNow let $g : Z \\to \\alpha$ be arbitrary. By $\\kappa$-completeness, $A = \\{ g^{-1}(\\beta) :\\beta < \\alpha$ and $g^{-1}(\\beta) \\in I^+ \\}$ forms a maximal antichain. (Maximality holds because, again by $\\kappa$-completeness, any $B \\in I^+$ must have $B \\cap g^{-1}(\\beta) \\in I^+$ for some $\\beta < \\alpha$.) 
Let $A' \\subseteq D$ be a maximal antichain refining $A$. There is some $A'' \\subseteq A'$ of size $< \\! \\lambda$ such that $\\sum A'' \\in U'$. Let $f : A'' \\to \\alpha$ be defined by $f(a) = \\beta$ iff $a \\leq_I g^{-1}(\\beta)$. If $[B]_I = \\sum A''$, then $\\{ z \\in B : g(z) \\not= g_f(z) \\} \\in I$, so $g =_U g_f$.\n\\end{proof}\n\nThe following contrasts with the consistency results of Section 2:\n\n\\begin{corollary}Suppose $\\mu$ is a singular cardinal such that $2^{\\cf(\\mu)} < \\mu$, $\\lambda$ is regular, and $2^{<\\lambda} < 2^\\lambda$. Then there is no normal and fine, $\\lambda$-dense ideal on $\\p_{\\mu^+}(\\lambda)$. Furthermore, there is a proper class of such $\\lambda$.\n\\end{corollary}\n\n\\begin{proof}\nSuppose such an ideal exists, and let $U \\supseteq I^*$ be given by Lemma \\ref{huberich}. Then $| \\prod \\mu \/U| \\leq 2^{<\\lambda}$, and $U$ is $(\\cf(\\mu)+1,\\lambda)$-regular. Theorem~\\ref{keisler} implies that $| \\prod 2^{\\cf(\\mu)} \/ U | \\geq 2^\\lambda$, a contradiction. \n\n\nAssume for a contradiction that $\\alpha$ is such that $2^{<\\lambda} = 2^\\lambda$ for all regular $\\lambda \\geq \\alpha$. Let $\\kappa = 2^\\alpha$. We will show by induction the impossible conclusion that $2^\\beta = \\kappa$ for all $\\beta \\geq \\alpha$. Suppose that this holds for all $\\gamma < \\beta$. If $\\beta$ is regular, $2^{<\\beta} = 2^\\beta$ by assumption, so $2^\\beta = \\kappa$. If $\\beta$ is singular, then by \\cite[Theorem 5.16]{jechbook} $2^\\beta = (2^{<\\beta})^{\\cf(\\beta)} = \\kappa^{\\cf(\\beta)}$. If $\\cf(\\beta) < \\gamma < \\beta$, then $\\kappa = 2^\\gamma = (2^\\gamma)^{\\cf(\\beta)} = \\kappa^{\\cf(\\beta)}$. \n\\end{proof}\n\n\\begin{corollary}\n\\label{incon1}\nIf $\\kappa$ is singular such that $2^{\\cf(\\kappa)} < \\kappa$, then there is no uniform, $\\kappa^+$-complete, $\\kappa^+$-dense ideal on $\\kappa^{+n}$ for $n \\geq 2$.\n\\end{corollary}\n\n\\begin{proof}Assume $I$ is a uniform, $\\kappa^+$-complete, $\\kappa^+$-dense ideal on $\\kappa^{+n}$ for some $n \\geq 2$. Define $\\phi : \\p(\\kappa^+) \\to {\\p(\\kappa^{+n})\/I}$ by $X \\mapsto || \\kappa^+ \\in j(X) ||_{\\p(\\kappa^{+n})\/I}$. Let $J = \\ker \\phi$. $\\phi$ lifts to an embedding of $\\p(\\kappa^+)\/J$ into $\\p(\\kappa^{+n})\/I$. Since $J$ is clearly normal and $\\kappa^{++}$-saturated, the embedding is regular, since for a maximal antichain $\\{ A_\\alpha : \\alpha < \\kappa^+ \\}$, $\\Vdash \\kappa^+ \\in j(\\nabla_{\\alpha < \\kappa^+} A_\\alpha)$, so it is forced that for some $\\alpha<\\kappa^+$, $\\phi(A_\\alpha)$ is in the generic filter. Thus $J$ is a normal $\\kappa^+$-dense ideal on $\\kappa^+$. We have $2^{<\\kappa^+} < 2^{\\kappa^+}$ by Corollary~\\ref{wg}, so $\\kappa$ cannot be singular such that $2^{\\cf(\\kappa)} < \\kappa$. \n\\end{proof}\n\nThese methods can also be used to deduce more cardinal arithmetic consequences of dense ideals. First we need a few more lemmas:\n\n\\begin{theorem}[Kunen-Prikry \\cite{kp}]\n\\label{kp}\nIf $\\kappa$ is regular and $U$ is a $(\\kappa^+,\\kappa^+)$-regular ultrafilter, then $U$ is $(\\kappa,\\kappa)$-regular.\n\\end{theorem}\n\n\\begin{lemma}\n\\label{linear}\nSuppose $( L, < )$ is a linear order such that for all $x \\in L$, \\break $| \\{ y \\in L : y < x \\} | \\leq \\kappa$. Then $|L| \\leq \\kappa^+$.\n\\end{lemma}\n\n\\begin{corollary}\n\\label{wg2}\nSuppose there is a $\\kappa^+$-complete, $\\kappa^+$-dense ideal on $\\kappa^{+n}$, where $n \\geq 2$. 
Then for $0 \\leq m \\leq n$, $2^{\\kappa^{+m}} = \\kappa^{+m+1}$.\n\\end{corollary}\n\n\\begin{proof}\nLet $I$ be such an ideal, and let $U \\supseteq I^*$ be given by Lemma~\\ref{huberich}. By Lemma~\\ref{ultsize}, $|\\kappa^{\\kappa^{+n}} \/ U | \\leq 2^\\kappa$, which is $\\kappa^+$ by Corollary~\\ref{wg}. Note that for any cardinal $\\mu$, any ultrafilter $V$ on a set $Z$, and any $g : Z \\to \\mu^+$, $\\{ [f]_V : f <_V g \\}$ has cardinality at most $| \\mu^Z \/ V |$. Thus, applying Lemma~\\ref{linear} inductively, we get that $| (\\kappa^{+m})^{\\kappa^{+n}} \/ U | \\leq \\kappa^{+m+1}$ for all $m < \\omega$.\n\n$U$ is $(\\kappa^{+n},\\kappa^{+n})$-regular, so by Theorem~\\ref{kp}, it is $(\\kappa^{+m},\\kappa^{+m})$-regular for $m \\leq n$. Assume for induction that $2^{\\kappa^{+r}} = \\kappa^{+r+1}$ for $r < m \\leq n$; note the base case $m = 1$ holds. Let $\\{ X_\\alpha : \\alpha < \\kappa^{+m} \\}$ witness $(\\kappa^{+m},\\kappa^{+m})$-regularity, and let $\\beta_z = \\ot(\\{ \\alpha : z \\in X_\\alpha \\})$. By Theorem~\\ref{keisler} and the above observations, we have:\n\\[ 2^{\\kappa^{+m}} \\leq |\\prod 2^{\\beta_z} \/ U| \\leq |\\prod 2^{\\kappa^{+m-1}} \/ U| = |\\prod \\kappa^{+m} \/ U| \\leq \\kappa^{+m+1} \\mbox{. } \\] \\end{proof}\n\n\n\nWe note that if the hypothesis of Corollary~\\ref{wg2} is consistent, then no cardinal arithmetic above $\\kappa^{+n}$ can be deduced from it, since any forcing which adds no subsets of $\\kappa^{+n}$ will preserve the relevant properties of the ideal.\n\n\n\nBy combining this technique with the results of Section 2, we can answer the following, which was Open Question 16 from \\cite{foremanhandbook}:\n\n\n\\begin{question}[Foreman]\nIs it consistent that there is a uniform ultrafilter $U$ on $\\omega_3$ such that $\\omega^{\\omega_3} \/U$ has cardinality $\\omega_3$? Is it consistent that there is a uniform ultrafilter $U$ on $\\aleph_{\\omega+1}$ such that $\\omega^{\\aleph_{\\omega+1}} \/U$ has cardinality $\\aleph_{\\omega+1}$? Give a characterization of the possible cardinalities of ultrapowers.\n\\end{question}\n\n\\begin{theorem}Assume ZFC is consistent with a super-almost-huge cardinal. Then it is consistent that every regular uncountable cardinal $\\kappa$ carries a uniform ultrafilter $U$ such that $|\\omega^\\kappa \/ U| = \\kappa$. \n\\end{theorem}\n\nThis follows from Section 2 and the next result.\n\n\\begin{lemma}Suppose $\\kappa = \\mu^+$, GCH holds at cardinals $\\geq \\mu$, and for all regular $\\lambda \\geq \\kappa$, there is a normal and fine, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$. Then for every regular $\\lambda$, there is a uniform ultrafilter $U$ on $\\lambda$ such that $|\\mu^\\lambda \/ U| = \\lambda$.\n\\end{lemma}\n\n\\begin{proof}\nLet $I$ be a normal and fine, $\\kappa$-complete, $\\lambda$-dense ideal on $Z = \\p_\\kappa(\\lambda)$, where $\\kappa= \\mu^+$ and $\\lambda$ is regular. Note that $|Z| = \\lambda$, and every $Y \\subseteq Z$ of size $<\\lambda$ is in $I$. Let $U \\supseteq I^*$ be given by Lemma~\\ref{ultsize}, so that $| \\mu^Z \/ U | \\leq 2^{<\\lambda}$. If $2^{<\\lambda} = \\lambda$, then $| \\mu^Z \/ U | \\leq \\lambda$. Since $2^\\mu = \\kappa$ and any ultrafilter extending $I^*$ is $(\\kappa,\\lambda)$-regular, Theorem~\\ref{keisler} implies that $| \\kappa^Z \/ U | > \\lambda$, and Lemma~\\ref{linear} implies that $| \\kappa^Z \/ U | \\leq | \\mu^Z \/ U |^+$. Thus $| \\mu^Z \/ U | = \\lambda$. 
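In detail: the estimates combine to\n\\[ \\lambda < | \\kappa^Z \/ U | \\leq | \\mu^Z \/ U |^+ \\quad \\mbox{and} \\quad | \\mu^Z \/ U | \\leq 2^{<\\lambda} = \\lambda, \\]\nand the only cardinal value compatible with both is $| \\mu^Z \/ U | = \\lambda$. 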
\n\\end{proof}\n\nThe following extra conclusion can be immediately deduced in the case of $\\mu < \\aleph_\\omega$ and $\\lambda = \\rho^+$, where $\\cf(\\rho) = \\omega$. Suppose $\\mu = \\omega_n$. Since $| \\omega_{n+1}^Z \/ U | > \\lambda$, we cannot have $| \\omega_m^Z \/ U | < \\rho$ for any $m$, since by Lemma~\\ref{linear}, we would have $| \\omega_r^Z \/ U | < \\rho$ for all $r < \\omega$. Also, $U$ is $(\\omega,\\omega)$-regular, so Theorem~\\ref{keisler} implies that $|\\omega^Z \/U| \\geq |\\omega^Z \/U|^\\omega$. Thus $| \\omega_m^Z \/ U | = \\lambda$ for all $m \\leq n$.\n\n\\section{Compatibility with square}\nSolovay~\\cite{solovay} showed that $\\square_\\delta$ fails when $\\delta \\geq \\kappa$ and $\\kappa$ is strongly compact. In contrast, we will show $(\\forall \\delta \\geq \\kappa)\\square_\\delta$ is consistent with the kind of generically supercompact $\\kappa$ constructed in Section 2. The key difference is that nontrivial forcings may be absorbed into the quotient algebras of the ideals in the generic case.\n\nWe start with a model given by Section 2, force $\\square$, and show that dense ideals still exist. We will use the following variation on Foreman's duality theorem~\\cite{foremanduality}.\n\n\\begin{lemma}\n\\label{dualabsorb}\nSuppose $I$ is a precipitous ideal on $Z \\subseteq \\p(X)$ and $e : \\mathbb{P} \\to \\mathcal{B}(\\p(Z)\/I)$ is a regular embedding. Suppose that for all generic $G \\subseteq \\mathcal{B}(\\p(Z)\/I)$, if $j : V \\to M \\subseteq V[G]$ is the associated embedding and $H = e^{-1}[G]$, there is a filter $\\hat{H} \\in V[G]$ that is $j(\\mathbb{P})$-generic over $M$ and such that $j[H] \\subseteq \\hat{H}$. Then there is a $\\mathbb{P}$-name for a precipitous ideal $J$ on $Z$ such that $\\mathcal{B}( \\mathbb{P} * \\dot{\\p(Z)\/J}) \\cong \\mathcal{B}(\\p(Z)\/I)$. Furthermore, $J$ has the same completeness and normality properties as $I$.\n\\end{lemma}\n\n\\begin{proof}\nLet $H$ be $\\mathbb{P}$-generic over $V$, and let $G$ be $\\mathcal{B}(\\mathcal{P}(Z)\/I)$-generic over $V$ with $e[H] \\subseteq G$. Let $j : V \\to M$ be the generic ultrapower embedding, and let $\\hat{H}$ be as hypothesized. Then $j$ is uniquely extended to $\\hat{j} : V[H] \\to M[\\hat{H}]$. In $V[H]$, let $\\mathbb{Q} = \\mathcal{B}(\\mathcal{P}(Z)\/I) \/ e[H]$, and let $J = \\{ X \\subseteq Z : 1 \\Vdash_{\\mathbb{Q}} [\\id]_M \\notin \\hat{j}(X) \\}$. Let $\\iota : \\mathbb{P} * \\dot{\\p(Z)\/J} \\to \\mathcal{B}(\\p(Z)\/I)$ be defined by $\\iota(p,\\dot{X}) = e(p) \\wedge || [\\id] \\in \\hat{j}(\\dot{X}) ||$.\n\nIt is easy to see that $\\iota$ is order and antichain preserving. To see that the range of $\\iota$ is dense, let $A \\in I^+$ be arbitrary. Take $G$ with $A \\in G$, so that $[\\id] \\in j_G(A)$. If $H = e^{-1}[G]$, then some $p \\in H$ forces $A \\in J^+$. Let $\\dot{X}$ be a $\\mathbb{P}$-name such that $p \\Vdash \\dot{X} = \\check{A}$ and $q \\Vdash \\dot{X} = Z$ whenever $q \\perp p$; this makes sure $\\dot{X}$ is forced to be in $J^+$. Then $\\iota(p,\\dot{X}) \\leq A$, since any $G$ with $e(p) \\wedge || [\\id] \\in \\hat{j}(\\dot{X}) || \\in G$ must have $[\\id] \\in j(A)$ and thus $A \\in G$.\n\nSuppose $H * \\bar{G} \\subseteq \\mathbb{P} * \\dot{\\p(Z)\/J}$ is generic, and let $G * \\hat{H} = \\iota[H * \\bar{G}]$. For $A \\in J^+$, $A \\in \\bar{G}$ iff $[\\id]_M \\in \\hat{j}(A)$. 
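This equivalence, combined with \\L o\\'{s}'s theorem, is what makes the factor map defined below elementary: for any formula $\\varphi$ and any $f \\in V[H]$ with domain $Z$,\n\\[ V[H]^Z \/ \\bar{G} \\vDash \\varphi([f]) \\iff \\{ z : \\varphi(f(z)) \\} \\in \\bar{G} \\iff [\\id]_M \\in \\hat{j}(\\{ z : \\varphi(f(z)) \\}) \\iff M[\\hat{H}] \\vDash \\varphi(\\hat{j}(f)([\\id]_M)). \\]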
If $i : V[H] \\to N = V[H]^Z \/ \\bar{G}$ is the canonical ultrapower embedding, then there is an elementary embedding $k : N \\to M[\\hat{H}]$ given by $k([f]_N) = \\hat{j}(f)([\\id]_M)$, and $\\hat{j} = k \\circ i$. Thus $N$ is well-founded, so $J$ is precipitous. If $f : Z \\to \\ord$ is a function in $V$, then $k([f]_N) = j(f)([\\id]_M) = [f]_M$. Thus $k$ is surjective on ordinals, so it must be the identity, and $N = M[\\hat{H}]$. Since $i = \\hat{j}$ and $\\hat{j}$ extends $j$, $i$ and $j$ have the same critical point, so the completeness of $J$ is the same as that of $I$. Finally, since $[\\id]_N = [\\id]_M$, $I$ is normal in $V$ iff $J$ is normal in $V[H]$, because $j \\restriction X = \\hat{j} \\restriction X$, and normality is equivalent to $[\\id] = j[X]$.\n\\end{proof}\n\n\n\nFor a cardinal $\\delta$, let $\\mathbb{S}_\\delta$ be the collection of bounded approximations to a $\\square_\\delta$ sequence. That is, a condition is a sequence $\\langle C_\\alpha : \\alpha \\in \\eta \\cap \\mathrm{Lim} \\rangle$ such that $\\eta < \\delta^+$ is a successor ordinal, each $C_\\alpha$ is a club subset of $\\alpha$ of order type $\\leq \\delta$, and whenever $\\beta$ is a limit point of $C_\\alpha$, $C_\\alpha \\cap \\beta = C_\\beta$. For proof of the following lemma, we refer the reader to \\cite{sssr}.\n\n\\begin{lemma}For every cardinal $\\delta$, $\\mathbb{S}_\\delta$ is countably closed and $(\\delta+1)$-strategically closed and adds a $\\square_\\delta$ sequence $\\langle C_\\alpha : \\alpha \\in \\delta^+ \\cap \\mathrm{Lim} \\rangle = \\bigcup G$, where $G \\subseteq \\mathbb{S}_\\delta$ is the generic filter. For every regular $\\lambda \\leq \\delta$, there is a $\\mathbb{S}_\\delta$-name for a ``threading'' partial order $\\mathbb{T}_\\delta^\\lambda$ that adds a club $C \\subseteq (\\delta^+)^V$ of order type $\\lambda$ and such that whenever $\\alpha$ is a limit point of $C$, $C \\cap \\alpha = C_\\alpha$. Furthermore, $\\mathbb{S}_\\delta * \\mathbb{T}_\\delta^\\lambda$ has a $\\lambda$-closed dense subset of size $2^\\delta$.\n\\end{lemma}\n\n\\begin{theorem}\nSuppose $\\kappa$ is super-almost-huge and $\\mu < \\kappa$ is regular. Then there is a $\\mu$-distributive forcing extension in which $\\kappa = \\mu^+$, $\\square_\\lambda$ holds for all cardinals $\\lambda \\geq \\kappa$, and for all regular $\\lambda \\geq \\kappa$ there is a normal, fine, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$.\n\\end{theorem}\n\n\\begin{proof}\nBy Section 2, we may pass to a $\\mu$-distributive forcing extension in which $\\kappa = \\mu^+$ and for all regular $\\lambda \\geq \\kappa$ there is a normal, fine, $\\kappa$-complete, $\\lambda$-dense ideal on $\\p_\\kappa(\\lambda)$, and GCH holds above $\\mu$. Over this model, force with $\\mathbb{P}$, the Easton support product of $\\mathbb{S}_\\lambda$ where $\\lambda$ ranges over all cardinals $\\geq \\kappa$. For every cardinal $\\lambda$, $\\mathbb{P}$ naturally factors into $\\mathbb{P}_{<\\lambda} \\times \\mathbb{P}_{\\geq \\lambda}$. Note that if $\\lambda \\geq \\kappa$, $\\mathbb{P}_{\\geq \\lambda}$ is $(\\lambda + 1)$-strategically closed.\n\nFirst we show that for each regular $\\lambda \\geq \\kappa$, $\\mathbb{P}_{\\geq \\lambda}$ is $\\lambda^+$-distributive in $V^{\\mathbb{P}_{<\\lambda}}$. Suppose that $H_0 \\times H_1$ is $(\\mathbb{P}_{<\\lambda} \\times \\mathbb{P}_{\\geq \\lambda})$-generic, and $f : \\lambda \\to \\ord$ is in $V[H_0][H_1]$. 
Then in $V[H_1]$, there is a $\\mathbb{P}_{<\\lambda}$-name $\\tau$ for $f$. By GCH and the fact that we take Easton support, $|\\mathbb{P}_{<\\lambda}| = \\lambda$, so it is $\\lambda^+$-c.c.\\ in $V[H_1]$. Thus $\\tau$ may be assumed to be a subset of $V$ of size $\\lambda$. By the strategic closure of $\\mathbb{P}_{\\geq \\lambda}$, $\\tau \\in V$. Thus $f = \\tau^{H_0} \\in V[H_0]$, establishing the claim.\n\nNext we show that $\\mathbb{P}$ preserves all regular cardinals. First note that since $\\mathbb{P}$ is $(\\kappa+1)$-strategically closed, $\\mathbb{P}$ cannot change the cofinality of any regular $\\delta$ to some $\\lambda \\leq \\kappa$. If $\\mathbb{P}$ does not preserve regular cardinals, then in some generic extension $V[G]$, there are $\\lambda < \\delta$ which are regular in $V$ with $\\kappa < \\lambda$, such that $V[G] \\models \\cf(\\delta) = \\lambda$. Write $G = H_0 \\times H_1$, where $H_0 \\subseteq \\mathbb{P}_{<\\lambda}$ and $H_1 \\subseteq \\mathbb{P}_{\\geq \\lambda}$. By the $\\lambda^+$-c.c.\\ of $\\mathbb{P}_{<\\lambda}$, $V[H_0] \\models \\cf(\\delta) > \\lambda$, and by the $\\lambda^+$-distributivity of $\\mathbb{P}_{\\geq \\lambda}$ in $V[H_0]$, $V[G] \\models \\cf(\\delta) > \\lambda$, a contradiction. Since a square sequence is upwards absolute to models with the same cardinals and $\\mathbb{S}_\\lambda$ regularly embeds into $\\mathbb{P}$ for all $\\lambda \\geq \\kappa$, $\\mathbb{P}$ forces $(\\forall \\lambda \\geq \\kappa) \\square_\\lambda$.\n\n\nFor each regular $\\lambda \\geq \\kappa$, let $Z_\\lambda = \\p_\\kappa(\\lambda)$. We want to show that in $V^{\\mathbb{P}}$, for each regular $\\lambda \\geq \\kappa$, there is a normal, fine, $\\lambda$-dense ideal on $Z_\\lambda$. It suffices to show that such an ideal exists in $V^{\\mathbb{P}_{<\\lambda}}$, since $\\mathbb{P}_{\\geq \\lambda}$ adds no subsets of $\\lambda$, and $|Z_\\lambda | = \\lambda$. First note that by the strategic closure of $\\mathbb{P}$, the dense ideal on $\\kappa$ is unaffected.\n\nLet $\\mathbb{Q}$ be the Easton support product of $\\mathbb{S}_\\lambda * \\mathbb{T}_\\lambda^\\mu$, where $\\lambda$ ranges over all cardinals $\\geq \\kappa$. There is a coordinate-wise regular embedding of $\\mathbb{P}$ into $\\mathbb{Q}$. When $\\lambda$ is regular, $\\mathbb{Q}_{<\\lambda}$ has a dense $\\mu$-closed subset of size $\\lambda$. Hence it regularly embeds into $\\mathcal{B}(\\col(\\mu,\\lambda))$. The dense ideal $I_\\lambda$ on $Z_\\lambda$ in $V$ has quotient algebra isomorphic to $\\mathcal{B}(\\mathbb{R} \\times \\col(\\mu,\\lambda))$ for some small $\\mathbb{R}$, and so $\\mathbb{Q}_{<\\lambda}$ regularly embeds into this forcing.\n\nIf $G \\subseteq \\p(Z_\\lambda)\/I_\\lambda$ is generic, let $H$ be the induced generic for $\\mathbb{Q}_{<\\lambda}$, and let $j : V \\to M \\subseteq V[G]$ be the ultrapower embedding. Recall that $\\crit(j) = \\kappa$, $j(\\kappa) = \\lambda^+$, $\\lambda^{++}$ is a fixed point of $j$, and $j[\\lambda] \\in M$. First note that $j[\\lambda] \\setminus j(\\kappa)$ is an Easton set in $M$. 
If $j(\\kappa) \\leq \\delta \\leq j(\\lambda)$ and $\\delta$ is regular in $M$, then since $\\ot(j[\\lambda] \\cap \\delta) \\leq \\lambda < \\delta$, $\\sup(j[\\lambda] \\cap \\delta) < \\delta$.\n\nFor each cardinal $\\delta$ such that $\\kappa \\leq \\delta < \\lambda$, let $\\langle C_\\alpha^\\delta : \\alpha < \\delta^+ \\rangle$ be the $\\square_\\delta$ sequence and let $t_\\delta$ be the ``thread'' of order type $\\mu$, both given by $H \\restriction ( \\mathbb{S}_\\delta * \\mathbb{T}^\\mu_\\delta)$. By the $\\mu$-distributivity of $\\mathbb{S}_\\delta * \\mathbb{T}_\\delta^\\mu$, all initial segments of $t_\\delta$ are in $V$, and since they are small, $j(t_\\delta \\cap \\alpha) = j[t_\\delta \\cap \\alpha]$ for $\\alpha < \\delta^+$, and $j$ is continuous at all limit points of $t_\\delta$. Let $\\gamma_\\delta = \\sup(j[\\delta^+]) < j(\\delta^+)$, and in $M$ consider $m_\\delta = \\bigcup_{\\alpha < \\delta^+} j(\\langle C_\\beta^\\delta : \\beta < \\alpha \\rangle) \\cup \\{ (\\gamma_\\delta, j[t_\\delta] ) \\} $. Each $m_\\delta$ is a condition in $(\\mathbb{S}_{j(\\delta)})^M$, and the sequence $m = \\langle m_\\delta : \\delta \\in j[\\lambda] \\setminus \\kappa \\cap \\card^M \\rangle$ is a condition in $(\\mathbb{P}_{<j(\\lambda)})^M = j(\\mathbb{P}_{<\\lambda})$, since $j[\\lambda] \\setminus j(\\kappa)$ is an Easton set in $M$. Arguing below $m$ as in Section 2 produces a filter $\\hat{H} \\in V[G]$ that is $j(\\mathbb{P}_{<\\lambda})$-generic over $M$ and contains the pointwise image of the induced generic for $\\mathbb{P}_{<\\lambda}$, so Lemma~\\ref{dualabsorb} applies and yields a $\\mathbb{P}_{<\\lambda}$-name for a normal, fine, $\\lambda$-dense ideal on $Z_\\lambda$, as desired.\n\\end{proof}
[Diagrammatic equation: $0 = \\frac 1 2 (\\,\\cdots) + \\frac 1 4 (\\,\\cdots)$, the second term involving the Bethe-Salpeter kernel $B$; heavy arrows denote $x^\\mu - y^\\mu$.]\n\nThe equation for the inverse propagator is actually equivalent to an equation for the 3-point function of the stress energy tensor \\cite{MackSymanzik}.\n\\subsection{Skeleton graph expansions}\nThe equations for $n\\geq 4$ point functions can be solved iteratively, producing skeleton graph expansions for Green functions and also for the Bethe-Salpeter kernel \\cite{MackSymanzik}:\n\n[Diagram: skeleton graph expansion of the Bethe-Salpeter kernel $B$ into a Born term, a crossed Born term, a box graph, and higher graphs.]\n\nIn $\\phi^4$-theory, Born terms involve $:\\phi^2:$ propagators.\n\\subsection{$\\epsilon$-expansions}\nIn $D=6+\\epsilon$ dimensions for $\\varphi^3$-theory (and in $D=4-\\epsilon$ dimensions for $\\phi^4$-theory), the dressed vertices are of order $\\epsilon$. Therefore only a finite number of skeleton graphs contribute to any order in $\\epsilon$. Inserting them into the equations for the dressed vertex and propagator, these can be solved by power series expansion in $\\epsilon$. For $\\phi^3$-theory in $6+\\epsilon$ dimensions one finds to leading orders \\cite{Mack3}: The fundamental field $\\phi$ has dimension\n$$ d= \\frac{D}{2} -1 + \\Delta,\\qquad \\qquad \\Delta= \\frac {1}{18}\\epsilon + ...$$ \nand a trajectory of traceless symmetric tensor fields of even rank $s\\geq 2$ with dimension $d_s= D-2 + s +\\sigma_s$,\n$$ \\half \\sigma_s= \\left[ \\frac {1}{18} - \\frac{2}{3(s+2)(s+1)}\\right]\\epsilon + ...\n$$
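As a consistency check on these leading-order values, note that at $s=2$ the bracket vanishes,\n$$ \\half \\sigma_2 = \\left[ \\frac{1}{18} - \\frac{2}{3\\cdot 4\\cdot 3}\\right]\\epsilon + ... = 0 + ... ,$$\nso the rank-2 tensor retains its canonical dimension $d_2 = D-2+2 = D$ to this order, as expected for the conserved stress energy tensor.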
$}\n\\rput(-.0,0){\n\\rput(-1.2,0){$3$}\n\\rput(-1.2,2.3){$1$}\n\\rput(3.4,0){$4$}\n\\rput(3.4,2.3){$2$}\n\\rput(0.0,1.25){\\PartialWave}\n\\rput(1.3,1.7){$\\chi$}\n}\n}\n\\rput(1.2,1.25){\\FourPointFct}\n\\rput(11,1.25){$=\\ ...$\n\\endpspicture\n\\caption{Euklidean partial wave expansion for the 4-point function \\newline\n $<\\phi(x_4)...\\phi(x_1)>$.\nThe dots stand for expansions in the two other channels $(12)\\mapsto (34) $ and $(13)\\mapsto (24)$ of the same amplitude, implying the equality shown in section 4.2\n \\label{fig:EuklideanPartialWave.tex}}\n\\end{figure} \n\n\n\n\nThis formula {\\em is quite trivial, simply expressing the orthogonality} (and completeness!) {\\em of the partial waves} say Simmons-Duffin, Stanford, and Witten \\cite{SimmonsDuffinStanfordWitten}.\n\n indeed: Physical positivity implies constraints on the partial waves $g(\\chi)$!\nBut also\n completeness is not trivial, because presence of a Born term requires contributions from the {\\em complementary} series in addition to the \n{\\em principal} series.\n\n They can\n be included by choosing a proper path of the $c$-integration\n$$\\int d\\chi .... = \\sum_l\\int_C dc \\rho_l(c)...,\\qquad \\chi=[l,\\half D + c]$$\n\\setlength{\\unitlength}{0.65mm}\n\\begin{picture}(100,60)(-2,0)\n\\thinlines\n\\put(0,10){\n\\put(38,25){\\line(1,0){4}}\n\\thicklines\n\\put(40,0){\\vector(0,1){50}}\n\\put(32,25){\\circle*{3}}\n\\put(30,15){$c_f$}\n\\put(48,25){\\circle{3}}\n\\put(32,25){\\circle{9}}\n\\put(36.4,24.5){\\vector(0,-1){1}}\n\\put(48,25){\\circle{9}}\n\\put(43.6,24.5){\\vector(0,-1){1}}\n\n\\put(62,25){\\circle*{3}}\n\\put(74,25){\\circle*{3}}\n\\put(84,25){\\circle*{3}}\n\n\\put(18,25){\\circle{3}}\n\\put(6,25){\\circle{3}}\n\\put(-4,25){\\circle{3}}\n}\n\\put(0,3){a) Path $C$ of the c-integration for $l=0$}\n\\end{picture}\n\n\n{\\bf Positivity constraints} were examined already in the 7o's. 
\nThey restrict the position of the\n poles and fix the sign of the residues of the partial wave amplitudes $g(\chi)$ defined by\n\def\FourPointFctLeftOnly{\n\pscircle(0,0){0.9}\n\psline(-1.25,-1.25)(-0.60,-0.60)\n\psline(-1.25,1.25)(-0.60,0.60)\n}\n\def\rhspw{\n \FourPointFctLeftOnly\n \rput(2,0){\rotatebox{-90}{\CGKern}}\n \psline(.6,.6)(2.0,0)\n \psline(.6,-.6)(2.0,0)\n \rput(3.5,0){$\chi$}\n}\n\n\n\pspicture(0,0)(11,2.5)\n\rput(0.5,1){\n \rput(0.1,0){$g(\chi)$}\n \rput(1.3,0){\rotatebox{-90}{\n \CGKern\n \CircleFeet\n } \n }\n \rput(2.8,0){$\chi =$}\n \rput(4.5,0){\rhspw}\n}\n\endpspicture\n\n\n\subsection{Crossing relations}\n\n
{\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\\def\\zigzagOne{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0,0.4)\n}\n\\def\\zigzagTwo{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0.1,0.5)(0,0.6)\n}\n\\def\\CGKern{\n\\pscircle*(0,0){.5}\n\\rput(0,0.5){\\zigzagTwo}\n}\n\n\\def\\PartialWave{\n\\rput{-90}(0,0){\n\\CGKern\n\\CircleFeet\n }\n\\rput{90}(2.2,0){\n\\CGKern\n\\CircleFeet\n }\n}\n\n\\pspicture(-1,0)(11,3.5)\n\\rput(1,0.5){\n\n\\rput(0.5 ,1.25){ $ \\int d\\chi \\quad g(\\chi) $}\n\\rput(3,0){\n\\rput(-1.2,0){$3$}\n\\rput(-1.2,2.3){$1$}\n\\rput(3.4,0){$4$}\n\\rput(3.4,2.3){$2$}\n\\rput(0,1.25){\\PartialWave}\n\\rput(1.3,1.7){$\\chi$}\n}\n\\rput(2,0){\n\\rput(6,1.25){$\\quad =\\int d\\chi^\\prime g(\\chi^\\prime)$}\n\\rput{90}(9,0.5){\\PartialWave}\n\\rput(9.75,1.75){$\\chi^\\prime$}\n\\rput(7.8,3.8){$1$}\n\\rput(10.2,3.8){$2$}\n\\rput(10.2,-0.4){$4$}\n\\rput(7.8,-0.4){$3$}\n }\n}\n\\endpspicture\n\n\n\n\\noindent The crossing relation is a linear relation for the partial wave, \nof the form $$g(\\chi)= \\int d\\chi^\\prime C(\\chi^\\prime, \\chi)g(\\chi^\\prime).$$ Using orhogonality of the CG-kernels, the crossing kernel \n$C(\\chi,\\chi^\\prime)$ is defined by\n\\psset{unit=1cm}\n\\psset{dotsize=0.2 0}\n\\psset{linewidth=.05}\n\\def\\CircleLeftFoot{\n \\psline(-1.,-1.)(-0.3,-0.3)\n}\n\\def\\CircleLongLeftFoot{\n \\psline(-1.,-1.)(1.,1.)\n}\n\\def\\CircleRightFoot{\n\\psline(1.,-1.)(0.3,-0.3)\n}\n\\def\\CircleLongRightFoot{\n\\psline(1.,-1.)(-1.,1.)\n}\n\\def\\CircleFeet{\n\\psline(-1.,-1.)(-0.3,-0.3)\n\\psline(1.,-1.)(0.3,-0.3)\n}\n\\def\\OvalShort { \n\\psline(-.5,-.5)(.5, -.5)\n\\psline(-.5,.5)(.5,.5)\n\\psarc(-.5,0){.5}{90}{270}\n\\psarc(.5,0){.5}{270}{90}\n}\n\\def\\OvalLong { \n\\psline(-.75,-.5)(.75, -.5)\n\\psline(-.75,.5)(.75,.5)\n\\psarc(-.75,0){.5}{90}{270}\n\\psarc(.75,0){.5}{270}{90}\n}\n\\def\\BSKern {\n\\psline(-.5,-.5)(.5, -.5)\n\\psline(-.5,.5)(.5,.5)\n\\psarc(-.5,0){.5}{90}{270}\n\\psarc(.5,0){.5}{270}{90}\n\\psline[linestyle=dotted](-1.,0)(1.,0)\n}\n\n\\def\\CircleUp{\n\\pscircle(0,0){.5}\n\\psline(0,.5)(0,1.5)\n}\n\\def\\OvalShortTwoFeet {\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\n\\def\\OvalUpFeet {\n\\psline(-1.,1.)(-.5,.5)\n\\psline(1,1)(.5,.5)\n}\n\n\\def\\OvalShorFteet {\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\\def\\zigzagOne{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0,0.4)\n}\n\\def\\zigzagTwo{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0.1,0.5)(0,0.6)\n}\n\\def\\CGKern{\n\\pscircle*(0,0){.5}\n\\rput(0,0.5){\\zigzagTwo}\n}\n\n\\def\\PartialWave{\n\\rput{-90}(0,0){\n\\CGKern\n\\CircleFeet\n }\n\\rput{90}(2.2,0){\n\\CGKern\n\\CircleFeet\n }\n}\n\\def\\PartialWaveFeetDown{\n\\rput{-90}(0,0){\n\\CGKern\n }\n \\CircleLongLeftFoot\n\\rput{90}(2.2,0){\n \\CGKern\n }\n \\rput(2.0,0){\\CircleLongRightFoot}\n}\n\\def\\PartialWaveUp{\n\\rput{90}(0,0){\\PartialWave}\n}\n\\def\\FourPointFct{\n\\pscircle(0,0){0.9}\n\\psline(-1.25,-1.25)(-0.60,-0.60)\n\\psline(0.60,0.60)(1.25,1.25)\n\\psline(1.25,-1.25)(0.60,-0.60)\n\\psline(-0.60,0.60)(-1.25,1.25)\n}\n\n\\def\\FourPointMellin{\n\\rput(-2.9,0){$=\\int d\\delta \\ M(\\{ \\delta_{ij}\\})$}\n\\psline(-1.25,-1.25)(-1.25,1.25)\n\\psline(-1.25,-1.25)(1.25,-1.25)\n\\psline(-1.25,-1.25)(1.25,1.25)\n\\psline(1.25,1.25)(-1.25,1.25)\n\\psline(1.25,1.25)(1.25,-1.25)\n\\psline(1.25,-1.25)(-1.25,1.25)\n}\n\n\n\\pspicture(-1,0)(5,3.4)\n\n\\rput(7,0){\n \\rput(-.5,0){\n \\rput(0,1.25){\\PartialWaveFeetDown} \n \\rput(1.2,0.7){$\\chi$}\n \\rput(1.6,3.4){$\\chi^\\prime$\n 
}\n\\rput(0.3,2.5){\\rotatebox{90}(\\CGKern}\n}\n\\rput(0.5,2.0){\\CGKern\n\\CircleFeet\n\\rput(0.5,1.2){$\\chi^\\prime$}\n}\n\\rput(3.5,2.){$C(\\chi,\\chi^\\prime)\\ = \\ \\frac {1}{2}$}\n\\endpspicture\n\n\n\n\\subsection{\\large Derivation of Operator product expansions \n from Euklidean partial wave expansions}\n\\subsubsection{Split of the partial waves}\nOperator product expansions (OPE) have the structure \\\\\n$$ \\phi^1(-\\half x_1)\\phi^2(\\half x_1)=\\sum_n C_n(x_1)O^n(0)$$\nwhere $O^n$ are fields or derivatives of fields.\nThe derivatives of each nonderivative field can be summed up. The resulting \nterms are known as {\\bf confornal blocks}.\n To obtain this expansion, \\\\ \n{\\bf the Clebsch Gordan coefficients need to be split.}\n\n\n\\psset{unit=1cm}\n\\psset{dotsize=0.2 0}\n\\psset{linewidth=.05}\n\\def\\CircleFeet{\n\\psline(-1.,-1.)(-0.3,-0.3)\n\\psline(1.,-1.)(0.3,-0.3)\n}\n\\def\\OvalShort { \n\\psline(-.5,-.5)(.5, -.5)\n\\psline(-.5,.5)(.5,.5)\n\\psarc(-.5,0){.5}{90}{270}\n\\psarc(.5,0){.5}{270}{90}\n}\n\\def\\OvalLong { \n\\psline(-.75,-.5)(.75, -.5)\n\\psline(-.75,.5)(.75,.5)\n\\psarc(-.75,0){.5}{90}{270}\n\\psarc(.75,0){.5}{270}{90}\n}\n\\def\\BSKern {\n\\psline(-.5,-.5)(.5, -.5)\n\\psline(-.5,.5)(.5,.5)\n\\psarc(-.5,0){.5}{90}{270}\n\\psarc(.5,0){.5}{270}{90}\n\\psline[linestyle=dotted](-1.,0)(1.,0)\n}\n\n\\def\\CircleUp{\n\\pscircle(0,0){.5}\n\\psline(0,.5)(0,1.5)\n}\n\\def\\OvalShortTwoFeet {\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\n\\def\\OvalUpFeet {\n\\psline(-1.,1.)(-.5,.5)\n\\psline(1,1)(.5,.5)\n}\n\n\\def\\OvalShorFteet {\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\\def\\zigzagOne{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0,0.4)\n}\n\\def\\zigzagTwo{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0.1,0.5)(0,0.6)\n}\n\n\\def\\zigzagTwoHor{\n\\psline(0,0)(0.1,0.1)(0.3,-0.1)(0.5,0.1)(0.6,0)\n}\n\\def\\CGKern{\n\\pscircle*(0,0){.5}\n\\rput(0,0.5){\\zigzagTwo}\n}\n\n\\def\\PartialWave{\n\\rput{-90}(0,0){\n\\CGKern\n\\CircleFeet\n }\n\\rput{90}(2.2,0){\n\\CGKern\n\\CircleFeet\n }\n}\n\\def\\CGhalf{\n\\psarc*(0,0){.5}{180}{0}\n\\psarc(0,0){.5}{0}{180}\n}\n\\def\\CGhalfHor{\n\\psarc*(0,0){.5}{90}{-90}\n\\psarc(0,0){.5}{-90}{90}\n}\n\\def\\QKern{\n\\psarc*(0,0){.5}{90}{-90}\n\\psarc(0,0){.5}{-90}{90}\n\\psline(-1.,-1)(-.3,-.3)\n\\psline(-1,1)(-.3,.3)\n\\rput(0.5,0){\\zigzagTwoHor}\n\\rput(0.7,0.5){$\\chi$}\n}\n\n\n\\def\\QKerntilde{\n\\psarc*(0,0){.5}{90}{-90}\n\\psarc(0,0){.5}{-90}{90}\n\\psline(-1.,-1)(-.3,-.3)\n\\psline(-1,1)(-.3,.3)\n\\rput(0.5,0){\\zigzagTwoHor}\n\\rput(0.7,0.5){$\\tilde{\\chi}$}\n\\rput(1.2,0.0){\\zigzaglongHor}\n}\n\\def\\zigzaglongHor{\n\\psline(0,0)(0.1,0.1)(0.3,-0.1)(0.5,0.1)(0.7,-0.1)(0.8,0)\n}\n\\def\\SplitCGrhs{\n \\QKern \n \\rput(1.5,0){$+$}\n \\rput(2.7,0){\\QKerntilde}\n}\n\\def\\SplitCGlhs{\n \\rotatebox{-90}{\\CGKern}\n \\psline(-1.,-1)(-.3,-.3)\n \\psline(-1,1)(-.3,.3)\n}\n\\pspicture(0,0)(11,2.)\n\\rput(1,1){\n \\rput(0,0){\\SplitCGlhs}\n \\rput(1.7,0.0){$=$\n \\rput(3.0,0){\\SplitCGrhs}\n}\n\\endpspicture\n\n\nThis split exploits the equivalence of representations $\\chi=[l,h+c]$ and \n$\\tilde{\\chi}=[l,h-c]$ $(h=\\half D)$ of the Euklidean conformal group to split\n the three point function into Clebsch-Gordan kernels of the second kind in a \nway modeled after the split of Legendre functions $P_l$ into Legendre\n functions of the second kind, $Q_l$,\n\\ba\n&& \\mathfrak{V}(\\x_0,\\tilde{\\chi}, \\x_2 , \\chi_2, \\x_1, \\chi_1)\n= \\ \\label{kernel2ndKind}\\nn \\\\ && \\frac 
{\\pi}{\\sin(l+c)}\n[\\mathfrak{Q}(\\x,\\tilde{\\chi} ,\\x_2,\\chi_2,\\x_1, \\chi_1)- \\nn \\\\\n&& \\qquad \\qquad - \\int d^D\\y \\Delta^{\\tilde{\\chi}}(\\x,\\y )\\mathfrak{Q}(\\y,\\chi,\\x_2,\\chi_1,\\x_1, \\chi_2)] \n\\nn\n\\ea\nsuch that the partial Fourier transform \\\\\n$\\mathfrak{Q}(p,\\tilde{\\chi} ,\\x_2,\\chi_2,\\x_1, \\chi_1)$\\\\\n{\\bf is an entire holomorphic function of $p$.}\n\\subsection{Continuation to Minkowski apace}\nAfter the split, one can\ndeform the path of the $c$-integration in the partial wave expansion involving\n$g(\\chi)$, $\\chi=[l,h+c])$, $h=D\/2$\\\\picking up a sum over residues as shown in the following picture\\\\[4mm]\n\\setlength{\\unitlength}{0.65mm}\n\\begin{picture}(100,95)(-2.6,0)\n\\thinlines\n\\put(0,50){\n\\put(38,25){\\line(1,0){4}}\n\\thicklines\n\\put(40,0){\\vector(0,1){50}}\n\\put(32,25){\\circle*{3}}\n\\put(30,15){$c_f$}\n\\put(48,25){\\circle{3}}\n\\put(32,25){\\circle{9}}\n\\put(36.4,24.5){\\vector(0,-1){1}}\n\\put(48,25){\\circle{9}}\n\\put(43.6,24.5){\\vector(0,-1){1}}\n\n\\put(62,25){\\circle*{3}}\n\\put(74,25){\\circle*{3}}\n\\put(84,25){\\circle*{3}}\n\n\\put(18,25){\\circle{3}}\n\\put(6,25){\\circle{3}}\n\\put(-4,25){\\circle{3}}\n\n\\put(0,2.5){a) Original path of c-integration} \n}\n\n\\put(-100,0){\n\\thinlines\n\\put(138,25){\\line(1,0){4}}\n\\put(140,23){\\line(0,1){4}}\n\\thicklines\n\\put(132,25){\\circle*{3}}\n\\put(130,15){$c_f$}\n\\put(139,15){$0$}\n\n\\put(132,25){\\circle{9}}\n\\put(136.4,24.5){\\vector(0,-1){1}}\n\\put(148,25){\\circle{3}}\n\n\\put(162,25){\\circle*{3}}\n\\put(162,25){\\circle{9}}\n\\put(166,24.5){\\vector(0,-1){1}}\n\n\\put(174,25){\\circle*{3}}\n\\put(174,25){\\circle{9}}\n\\put(178,24.5){\\vector(0,-1){1}}\n\n\\put(184,25){\\circle*{3}}\n\\put(184,25){\\circle{9}}\n\\put(188,24.5){\\vector(0,-1){1}}\n\n\\put(118,25){\\circle{3}}\n\\put(106,25){\\circle{3}}\n\\put(96,25){\\circle{3}}\n\n\\put(100,3){b) Path after closure} \n}\n\\end{picture}\n\n\n\nThis results in a discrete sum of conformal blocks\nas shown before, i.e. 
\begin{figure}[h!]\n\psset{unit=1.2cm}\n\def\PartialWaveQ{\n \rput{-90}(0,0){\n \QKern\n \CircleFeet\n }\n\rput{90}(2.2,0){\n\CGKern\n\CircleFeet\n}\n\n}\n\pspicture(0,0)(3,7.0)\n\rput(0,5){\n \rput(1,0){\FourPointFct}\n \rput(4.1,0){$=\ \sum_{\mathrm{poles\ in\ }\chi} \mathrm{res}_\chi\, g(\chi)$} \n \rput(8,0){\QKern}\n \rput(10,0){\rotatebox{90}{\CGKern \CircleFeet} }\n}\n\rput(6.5,1.0){\n \rotatebox{90}{\QKern}\n \rput(0,2){\n {\rotatebox{180}{\CGKern \CircleFeet}}\n }\n}\n\rput(5.7,1.5){$=\ \sum_{\mathrm{poles\ in\ }\chi} \mathrm{res}_\chi\, g(\chi) \qquad \qquad \qquad \qquad = \ \dots
.$}\n\\endpspicture\n\\caption{\ndiscrete version of the crossing relation for partial wave $g(\\chi)$\n}\n\\end{figure}\n\n\n\n\n\\section{\\large Recent progress}\n\\subsection{Conformal Regge theory}\nFamilies of poles of the partial wave $g(\\chi=[l, \\delta])$ labelled by \n(even or odd) $l$ are analogues of Regge trajectories \\cite{Mack6,CostaGoncalvesPenedones,SimmonsDuffin}. But in contrast to conventional Regge theory the pole ``approximation'' is exact!\n\\subsection{Analysis of the conformal blocks}\nDolan and Osborn \\cite{DolanOsborn} determined the conformal blocks for even dimension $D$. Recursion relations were studied by Penedones, Trevisani and Yamazaki \\cite{PenedonesTrevisaniYamazaki}\n\nEl-Showk, Paulos, Poland, Rychkov, Simmons-Duffin, Vichi \n\\cite{ElShowkPaulosPolandRychkovSimmonsDuffinVichi}\\\\\ngeneralized the result to arbitrary $D$ and use numerical evaluation to \nbound and find critical indices for the {\\bf Ising model}\n\n\\subsection{Mellin representation}\nI proposed to \n consider bootstrap in Mellin space in 2009 \\cite{Mack6}. The Mellin representation was advocated as a natural language by Fitzpatrick, Kaplan, Penedones, Raju and van Rees\n\\cite{FitzpatrickKaplanPenedones} \n and further studied by Dey, Ghosh and Sinha \\cite{DeyGhoshSinha}(2018)\\\\\n\\begin{figure}[t]\n\\psset{unit=0.6cm}\n\\psset{dotsize=0.2 0}\n\\psset{linewidth=.05}\n\\def\\CircleFeet{\n\\psline(-1.,-1.)(-0.3,-0.3)\n\\psline(1.,-1.)(0.3,-0.3)\n}\n\\def\\OvalShort { \n\\psline(-.5,-.5)(.5, -.5)\n\\psline(-.5,.5)(.5,.5)\n\\psarc(-.5,0){.5}{90}{270}\n\\psarc(.5,0){.5}{270}{90}\n}\n\\def\\OvalLong { \n\\psline(-.75,-.5)(.75, -.5)\n\\psline(-.75,.5)(.75,.5)\n\\psarc(-.75,0){.5}{90}{270}\n\\psarc(.75,0){.5}{270}{90}\n}\n\\def\\BSKern {\n\\psline(-.5,-.5)(.5, -.5)\n\\psline(-.5,.5)(.5,.5)\n\\psarc(-.5,0){.5}{90}{270}\n\\psarc(.5,0){.5}{270}{90}\n\\psline[linestyle=dotted](-1.,0)(1.,0)\n}\n\n\\def\\CircleUp{\n\\pscircle(0,0){.5}\n\\psline(0,.5)(0,1.5)\n}\n\\def\\OvalShortTwoFeet {\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\n\\def\\OvalUpFeet {\n\\psline(-1.,1.)(-.5,.5)\n\\psline(1,1)(.5,.5)\n}\n\n\\def\\OvalShorFteet {\n\\psline(-1.,-1.)(-0.5,-0.5)\n\\psline(1.,-1.)(0.5,-0.5)\n}\n\\def\\zigzagOne{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0,0.4)\n}\n\\def\\zigzagTwo{\n\\psline(0,0)(0.1,0.1)(-0.1,0.3)(0.1,0.5)(0,0.6)\n}\n\\def\\CGKern{\n\\pscircle*(0,0){.5}\n\\rput(0,0.5){\\zigzagTwo}\n}\n\n\\def\\PartialWave{\n\\rput{-90}(0,0){\n\\CGKern\n\\CircleFeet\n }\n\\rput{90}(2.2,0){\n\\CGKern\n\\CircleFeet\n }\n}\n\\def\\FourPointFct{\n\\pscircle(0,0){0.9}\n\\psline(-1.25,-1.25)(-0.60,-0.60)\n\\psline(0.60,0.60)(1.25,1.25)\n\\psline(1.25,-1.25)(0.60,-0.60)\n\\psline(-0.60,0.60)(-1.25,1.25)\n}\n\n\\def\\FourPointMellin{\n\\rput(-6,0){$=\\int d\\delta \\ M(\\{ \\delta_{ij}\\})$}\n\\psline(-1.25,-1.25)(-1.25,1.25)\n\\psline(-1.25,-1.25)(1.25,-1.25)\n\\psline(-1.25,-1.25)(1.25,1.25)\n\\psline(1.25,1.25)(-1.25,1.25)\n\\psline(1.25,1.25)(1.25,-1.25)\n\\psline(1.25,-1.25)(-1.25,1.25)\n}\n\n\n\\pspicture(-1,0)(15,4.5\n\n\n\\rput(0,4){\n \\rput(0,1.25){\\FourPointFct}\n \\rput(4,1.25){ $\\ = \\int d\\chi \\ g(\\chi) $}\n \\rput(8.5,1.25){\\PartialWave}\n \\rput(9.5,1.7){$\\chi$}\n}\n\\rput(12,1.25){\\FourPointMellin}\n\\endpspicture\n\\caption{ Partial wave expansion and Mellin representation for the 4-point function.$<\\phi(x_4)...\\phi(x_1)>$. 
The lines in the Mellin representation represent propagators \n$\Gamma(\delta_{ij})(\xi_i\xi_j)^{-\delta_{ij}}$ in covariant language.\nIntegration is over $\delta_{ij}=\delta_{ji}$ such that $\sum_{j}\delta_{ij}=d_i$ when the scalar field $\phi(x_i)$ has dimension $d_i$.\n \label{fig:partialWaveAndMellin.tex}}\n\end{figure} \n\n The lines in the Mellin representation represent propagators \n$\Gamma(\delta_{ij})(\xi_i\xi_j)^{-\delta_{ij}}$ \n in Dirac's covariant language.\nIntegration is over $\delta_{ij}=\delta_{ji}$ such that\n $\sum_{j}\delta_{ij}=d_i$ when the scalar field $\phi(x_i)$\n has dimension $d_i$.\n\subsection{Pismak's simplified version of the crossing relation}\nPismak \cite{Pismak} proposed to study a simplified version of the crossing relation for the partial wave $g(\chi)$ as shown in Figure 5.\n\begin{figure}[h]\n\psset{unit=1cm}\n\def\CircleWLeftFootLong\n{\n\pscircle(0,0){.5}\n\psline(1.7,-1.7)(0.3,-0.3)\n}\n\def\CircleWRightFootLong\n{\n\pscircle(0,0){.5}\n\psline(-1.7,-1.7)(-0.3,-0.3)\n}\n\n\def\CircleWFeetLong\n{\n\pscircle(0,0){.5}\n\psline(-1.7,-1.7)(-0.3,-0.3)\n\psline(1.7,-1.7)(0.3,-0.3)\n}\n\def\PartialWaveHorizontalBottomLong{\n\rput(1.15,.5){$\chi$}\n \rput{-90}(0,0){\n \CGKern\n\n\CircleWLeftFootLong \n }\n \rput{90}(2.2,0){\n \CGKern\n\CircleWRightFootLong\n }\n\rput(-1.7,-2.1){$3$}\n\rput(3.9,-2.1){$4$}\n}\n\def\PartialWaveUp{\n\rput{90}(0,0){\n \rput{-90}(0,0){\n \CGKern\n \CircleWFeetLong\n }\n \rput{90}(2.2,0){\n \CGKern\n\CircleRightFoot\n }\n }\n\rput(-0.9,3.55){$p_1=0$}\n\psline[linearc=.25](1.7,-1.65)(1.0,3.0)(.3,2.55)\n\rput(1.5,2.0){$m$}\n\rput(1.5,1.5){$\alpha$}\n\rput(0.65,1.0){$\chi^\prime$}\n\psline(1.4,1.75)(1.1,1.75)\n\rput(-2,0.5){$= \int g(\chi^\prime)$}\n}\n\n\n\def\PartialWaveHorizontalBottomLongP10{\n\PartialWaveHorizontalBottomLong\n\psline(-.3,.3)(-1.,1)\n\rput(-1.3,1.3){$p_1=0$}\n\rput(2.2,0){\n\psline[linearc=.25](.3,.3)(1.,1.)(1.7,-1.7)\n\psline(1.1,0.0)(1.4,0.0)\n\rput(1.6,.3){$m$}\n\rput(1.6,-.4){$\alpha$}\n }\n}\n\pspicture(-1,0)(15,5.0)\n\rput(10,1.5){\n\PartialWaveUp\n}\n\rput(2.2,0.5){\n\rput(-1.5,0){$\int g(\chi)$}\n\PartialWaveHorizontalBottomLongP10\n}\n\endpspicture\n\caption{ Pismak's version of the crossing symmetry of the partial wave $g(\chi)$. 
In the Ising model with fields $\phi(x)$ and $\phi^2(x)$, $g(\chi)$\nis a $2\times 2$ matrix.\n \label{fig:crossingKernelPismak}\n}\n\end{figure}\n\n\subsection{Spacetime derivation of the Lorentzian OPE inversion formula}\nCaron-Huot \cite{CaronHuot} and Simmons-Duffin, Stanford and Witten \cite{SimmonsDuffinStanfordWitten}\n derive a formula for the partial wave \n$g(\chi=[\Delta, J] ) = n_{\Delta,J}I_{\Delta,J}$ expressed in terms of vacuum expectation values of commutators of the four scalar fields\n $\phi_i(x_i)$ of dimension $d_i$ in Minkowski space; $n_{\Delta,J}$\nis an explicitly given normalization factor:\n\ba\nI_{\Delta,J} &=& -\hat{C}_J(1)\n\left[ \int_{3>1,2>4} \frac {d^D x_{3}d^Dx_{4}}{vol(SO(D-1))}\right.\nn \\\n&&\n \frac{ <\Omega|[\phi_4(x_4),\phi_2(x_2)][\phi_1(x_1),\phi_3(x_3)]|\Omega>}{ |x_{34}|^{J+2D-d_3-d_4-\Delta}}\n\nn \\\n&&\n\left.\n(m\cdot x_{34})^J \theta(m\cdot x_{34}) \right.\nn \\\n &+& (-1)^J \int _{4>1,2>3}\n \frac{d^D x_3 d^D x_4}{vol(SO(D-1))}\nn \\\n&&\n \frac{<\Omega|[\phi_3(x_3),\phi_2(x_2)][\phi_1(x_1),\phi_4(x_4)]|\Omega>}{|x_{34}|^{J+2D-d_3-d_4-\Delta}}\n\nn \\\n&& \left.\n (-m\cdot x_{34})^J \theta(-m\cdot x_{34}) \n\right]\n\nn \n\ea\nwhere $m$ is the null vector $m^\mu=(1,1,0,...,0)$, whose second component is the time direction, and the coordinates $x_1,x_2$ have been fixed to $x_1=(1,0,0,...)$, $x_2=0$. The notation $i>j$ means that $x_i$ is in the future light cone of $x_j$.\n\nAn explicit expression for\n $\hat{C}_J(1)$ is given.\n\nThe formula has the following advantages: It can be {\bf analytically continued in the spin $J$}, and for real dimension and spin, the integrand satisfies {\bf positivity conditions}.\n\n\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n
\section{Introduction}\nAccurate predictions of patient risk in critical care units can aid clinicians in making more effective decisions. Specifically, early identification of patients at high risk for in-hospital mortality is critical to assess patient disease acuity and inform life-saving interventions \cite{saps-ii, escobar2020}. To predict in-hospital mortality risk, researchers have developed algorithms ranging from simple score-cards \cite{saps-ii, apache} to statistical machine learning (ML) models. Recent advances in ML have led to the development of models with vast improvements in predictive accuracy for patient in-hospital mortality risk \cite{ awad2017early, gp-timeseries-acuity, ng-deep-learning-ehr, benchmarking-deep}. \n\nDespite these improvements, however, ML models are still prone to critical errors, often failing to generalize across different care settings or institutions \cite{futoma2020myth}. An emerging line of research in interpretability, defined by \cite{fdv-rigorous} as ``the ability to explain or to present in understandable terms to a human,'' provides an alternative way to ensure systems preserve properties such as safety or nondiscrimination. If a model is interpretable to stakeholders, then clinical experts can inspect the model and verify that its reasoning is sound. 
This ability to audit and validate is especially important when the models are used to inform critical decisions affecting patient health. \n\nIn this work, we present a novel ML method to learn clinical timeseries summaries that are interpretable and predictive. We introduce functions to compute summaries that align with simple and intuitive concepts, such as whether the timeseries is decreasing or spikes above a critical threshold. In contrast to prior work, our method learns how much of the timeseries should be used to calculate these summaries, discarding earlier timesteps that may be irrelevant for a specific prediction task. Importantly, we introduce relaxations of our summary definitions to enable differentiable optimization, allowing summary parameters to be learned jointly with those of a downstream model. \nWe show that with our method, we can achieve accuracies comparable with state-of-the-art baselines without sacrificing interpretability.\n\n\\section{Related Work}\n\nPrior work on explaining clinical timeseries models fall into two categories: learning simple models that are inherently interpretable, and generating explanations of complex black box models. We summarize a few key examples below.\n\n\\textbf{Interpreting Deep Models.} \nOne popular strategy for explaining ML models is learning a second post-hoc explanation model to explain the first black box model \\cite{ribeiro2016should}. Many post-hoc explanation techniques for clinical timeseries models train explanation models that quantify the relative importance of each clinical variable \\cite{NEURIPS2020_08fa4358, lundberg2018explainable}. However, several works argue against the use of post-hoc explanation techniques, as explanation models are not always faithful or representative of the true underlying black boxes \\cite{rudin2019stop, lakkaraju2020fooling}. \nOur method avoids these problems by design, instead explicitly optimizing for interpretability so that a second explanation model is not needed.\n\nAnother line of research proposes attention mechanism models specifically designed for timeseries. \\cite{choi2017retain, sha2017} present neural attention architectures for clinical timeseries and argue that attention scores measure feature importance. However, attention methods are often highly complex and nonlinear. Furthermore, \\cite{serrano2019} shows attention scores do not always reflect true importance. Instead of approximating importance, our study uses linear models over richer features, where importance does not need to be approximated but can be directly read off model coefficients.\n\n\\textbf{Expert Systems and Expert Features.} Our work extends a long tradition of clinical experts hand-crafting features to create interpretable clinical decision-support algorithms. Two expert systems widely used in ICUs are SAPS-II \\cite{saps-ii} and APACHE \\cite{apache}, which use simple score-card algorithms to evaluate patient acuity. These systems use input features such as the patient's average or worst lab or vital values over time to compute mortality risk. While SAPS-II and APACHE are simple and simulable to clinicians, their predictiveness is limited, as they cannot capture how labs or vitals change across a patient's stay.\n\nA similar line of research proposes manual construction of expert features from clinical data, which are then used as input to ML models \\cite{sun2012, roe2020}. One limitation of this approach is that expert feature derivation is expensive and requires clinical expertise. 
Rather than relying on expert knowledge to identify which summary features will be the most predictive for a given task, our work instead uses optimization with a sparsity constraint to automatically learn which summary features are the most predictive.\n\n\textbf{Summarizing Clinical Timeseries.} A number of works have proposed a wide range of summary statistics for patient timeseries data. Many works such as \cite{awad2017early, harutyunyan2019} train clinical models using the minimum value, maximum value, first measured value, or skew of clinical timeseries data. \cite{DBLP:journals\/midm\/GuoLC20} proposes a more comprehensive set of 14 summary statistics to characterize the central tendency, dispersion tendency, and distribution shape of clinical timeseries data. Our work extends this research, and is the first to our knowledge to use the slope of the timeseries or proportion of time above or below critical thresholds. Our work is also novel in that we explicitly model and optimize for the duration over which we compute each summary feature.\n\n\section{Cohort and Problem Set-Up}\nOur goal is to learn summaries of patient timeseries data that are both human-interpretable and predictive for a downstream classification task. Our approach will define how to calculate these human-interpretable summaries, and describe how both summary and classification model parameters are learned through optimization. In what follows, we detail each of these processes.\n\n\textbf{Prediction Task.} Our work examines early prediction of in-hospital mortality. We use the patient's first 24 hours of data to predict if they would later expire over the course of the remainder of their admission. Patients who expired in the first 24 hours of their stay were excluded from our cohort.\n\n\textbf{Cohort Selection.}\nWe use data from the MIMIC-III critical care database \cite{mimic-iii}, which contains deidentified health data from patients in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012. All data was extracted from MIMIC-III PhysioNet version 1.4, which contains 30,232 patients. We exclude patients under 18 years of age and patients whose weight is not measured. We include data from each patient's first hospitalization only, and only patients with stays between 24 and 72 hours \cite{awad2017early}. After applying these criteria, our final cohort contained 11,035 patients, 15.23\% of whom died in-hospital. Cohort characteristics and demographics are summarized in Table \ref{table:mimic-summary-stats}. \n\begin{table}[h!]\n\centering\n\resizebox{\textwidth}{!}{%\n\begin{tabular}{|c|cc|ccc|cccc|} \n \hline\n Cohort & Age & \% Female & \% Urgent & \% Emergency & \% Elective & \% MICU & \% SICU & \% CCU & \% CSRU \\ \n \hline\n All & 64.7 & 43.8 & 1.12 & 84.46 & 14.43 & 42 & 18 & 12 & 16 \\ \hline\n + & 70.9 & 46.5 & 0.77 & 96.25 & 2.97 & 53 & 18 & 13 & 4 \\ \hline\n - & 64.1 & 43.6 & 1.14 & 83.22 & 15.64 & 41 & 18 & 12 & 17 \\ \hline\n\end{tabular}}\n\caption{Mean statistics for the population cohort, and for cohorts of positive versus negative patients for in-hospital mortality. 
Abbreviations: MICU, medical intensive care unit; SICU, surgical intensive care unit; CCU, cardiac care unit; CSRU, cardiac-surgery recovery\nunit.}\n\label{table:mimic-summary-stats}\n\end{table}\n\n\subsection{Extracting Inputs and Outputs}\nFor each patient $n$ in our $N$ patient cohort, we extracted static observations and physiological data including labs and vital signs sampled hourly. All clinical variables were separately normalized to have zero mean and unit variance. Figure \ref{figure:feature-extraction} shows how features are extracted for patients that are positive versus negative for in-hospital mortality. \n\begin{figure}[h]\n\centering\n \includegraphics[width=0.6\textwidth]{images\/feature_extraction_mortality.png}\n\caption{Example positive and negative time-series to illustrate feature extraction. The two trajectories have input data extracted from time 0 to time of prediction $T = 24$. Figure inspired by Sherman et al. \cite{sherman}.}\n\label{figure:feature-extraction}\n\end{figure}\n\n\textbf{Static observations $\bm{S}$.} Matrix $\bm{S}: (N \times 8)$ contains $8$ demographic variables for each patient $n$: their age at admission, gender, and other information about their ICU stay (their first ICU service type, and whether their admission was urgent, emergency, or elective).\n\n\textbf{Per-timestep clinical observations $\bm{X}$.} The clinical variable tensor $\bm{X}: (N \times D \times T)$ contains $D = 28$ measurements of clinical variables for each of the $N$ patients at time $t$, discretized by hour. These 28 measurements consist of vital signs and labs: diastolic blood pressure, fio2, GCS score, heartrate, mean arterial blood pressure, systolic blood pressure, SRR, oxygen saturation, body temperature, urine output, blood urea nitrogen, magnesium, platelets, sodium, ALT and AST, hematocrit, po2, white blood cell count, bicarbonate, creatinine, lactate, pco2, glucose, INR, hemoglobin, and bilirubin. Missing values at timestep $t$ were imputed using either the most recent measurement of the variable, or the population median if the variable had not yet been measured during the patient's stay. We use the patient's first $T = 24$ hours of data to predict in-hospital mortality. We use subscripts to index into the tensor: for example, $\bm{X}_t$ indicates the $(N \times D)$ matrix of measurements taken at time $t$.\n\n\n\textbf{Per-timestep measurement indicators $\bm{M}$.}\nThe measurement indicator tensor $\bm{M}: (N \times D \times T)$ contains indicator elements $\bm{M}_{n, d, t}$ which are 1 if their corresponding clinical variable in $\bm{X}_{n, d, t}$ was measured at time $t$, 0 otherwise.\n\n\textbf{Outcome labels $\bm{y}$.} Label vector $\bm{y}$ contains indicators $\bm{y}_n$ which are 1 if patient $n$ expired in-hospital, 0 otherwise.\n \n
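\nThe imputation rule described above is simple enough to state in a few lines of NumPy. The following is an illustrative sketch (not our extraction code; array names are ours):\n\begin{verbatim}\nimport numpy as np\n\ndef impute(X_raw, M, pop_median):\n    # X_raw, M: (T, D) hourly values and measurement indicators\n    # for one patient; pop_median: (D,) population medians.\n    # Carry the most recent measurement forward; fall back to the\n    # population median before a variable's first measurement.\n    T, D = X_raw.shape\n    X = np.empty_like(X_raw, dtype=float)\n    last = pop_median.astype(float).copy()\n    for t in range(T):\n        last = np.where(M[t] == 1, X_raw[t], last)\n        X[t] = last\n    return X\n\end{verbatim}\n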
In Section \\ref{section:relaxations}, we give continuous relaxations of summary feature definitions to enable efficient inference of summary parameters. In Section \\ref{section:learning-process}, we discuss how predictive summary features can be jointly learned with downstream classification model parameters.\n\n\\begin{figure}[]\n\\centering\n \\includegraphics[width=0.7\\textwidth]{images\/model_diagram.png}\n\\caption{Illustrates summary extraction for prediction from timeseries data. First, non-static clinical variables $\\{\\bm{X}, \\bm{M}\\} $ are used to compute interpretable summary features $\\bm{H}$. Then, summary features $\\bm{H}$, static features $\\bm{S}$, and the non-static variables $\\{\\bm{X}, \\bm{M}\\}$ at the time of prediction are given as input to a Logistic Regression model $g$, which predicts output labels $\\bm{\\hat{y}}$. Figure inspired by Ghassemi, Szolovits et al \\cite{hughes-vaso}.}\n\\label{figure:model-diagram}\n\\end{figure}\n\n\\subsection{Model: Defining human-interpretable summaries}\\label{section:model-def}\nMeasures of central tendency and dispersion have commonly been used to summarize timeseries (see survey in \\cite{DBLP:journals\/midm\/GuoLC20}). Our key modelling innovations include adding additional features that correspond to how clinicians themselves describe timeseries. Our novel contributions include explicitly modelling the overall trend of a lab\/vital and the number of hours that a lab\/vital dips above or below a threshold, as well as allowing different features to be computed over different periods of time (e.g. the most recent 6 hours vs. the most recent 24 hours).\n\nThe $I = 13$ summary statistics used in this study are listed in Table \\ref{table:summaries-weighted}. Each of the summary statistics takes into account measurement indicators $\\bm{M}$ so that clinical variable summaries are computed only using timesteps where the variable is measured. Each summary statistic is applied to each of the $D$ clinical variables to create the summary feature tensor $\\bm{H}: (N \\times D \\times I)$.\n\nBelow, we expand on the parameterization of our summary features. Next, we describe how we enable efficient, automated search over summary feature parameters. Importantly, our approach automates many processes associated with summary design, enabling optimization over summary parameters. \n\n\\textbf{Incorporating Duration.}\nMany prior works in clinical timeseries modelling do not use all timesteps for the patient, but instead only the most recent available data, such as the six or twelve hours before the time of prediction. In contrast to prior works, we explicitly model how much of each timeseries should be used to compute each of the $I$ summaries in $\\bm{H}$. For example, we may wish to exclude earlier measurements of a particular clinical variable if only the variable's recent measurements before time of prediction $T$ are significant for a prediction task. Specifically, for each variable $d$ and for each summary function $i$, we define a duration time $\\bm{C}_{d, i}$. Only the variable's timeseries data that occurred in the immediately previous $\\bm{C}_{d, i}$ hours before the time of prediction is used to calculate summary $i$. 
We organize all the duration time parameters $\bm{C}_{d,i}$ into a $(D \times I)$ matrix $\bm{C}$.\n\nTo exclude data that occurs before time $(T -\bm{C}_{d, i})$ when computing summary features, we multiply each of the original timeseries variables by indicator variables for whether the measurements occurred within $\bm{C}_{d, i}$ hours before time of prediction $T$. For example, a mean summary statistic would be computed using indicator variables $\mathbbm{1}(\cdot)$ as:\n\begin{equation}\label{non_diff_cutoff_indicators} \n\bm{H}_{i = mean}: (N \times D) = \left( \textstyle\sum_{t=1}^{T} \mathbbm{1}(t > T - \bm{C}_{i = mean}) \odot \bm{X}_{ t} \odot \bm{M}_{ t} \right) \/ \left( \textstyle\sum_{t = 1}^{T} \bm{M}_{ t} \odot \mathbbm{1}(t > T - \bm{C}_{i = mean}) \right)\n\end{equation}\n\nwhere $\odot$ is the element-wise multiplication operator and division is performed element-wise. Our objective is to learn duration parameters $\bm{C}$ that maximize the predictiveness of their corresponding summary features $\bm{H}$.\n\n\textbf{Threshold Parameters.}\nSome of the summary functions $f_i$ in Table \ref{table:summaries-weighted} have additional parameters such as thresholds. For example, one of the summaries is the proportion of the patient's measured timeseries where their measured clinical variables are above some $D$-dimensional critical threshold parameter vector $\bm{\phi}^{+}$ for each variable:\n\begin{equation}\label{thresholds-non-diff}\n \left( \textstyle\sum_{t = 1}^{T} \bm{M}_t \odot \mathbbm{1} (\bm{X}_t > \bm{\phi}^{+}) \right) \/ \left( \textstyle\sum_{t = 1}^{T} \bm{M}_t \right)\n\end{equation}\nThese summaries correspond to clinically-intuitive ideas, such as whether the patient has been mostly well or sick. As with durations, we learn threshold parameters automatically to avoid burdening experts and to assist in prediction.\n\n\subsection{Continuous Relaxations for Efficient Inference}\label{section:relaxations}\n\textbf{Learning Duration Parameters.}\nIn Equation \ref{non_diff_cutoff_indicators}, we showed how summary functions $f_i$ that only use the most recent $\bm{C}$ hours of data can be calculated using indicator variables $\mathbbm{1}(t > T - \bm{C}_{d, i}) $. These indicator variables, however, do not have informative gradients and are not differentiable. To enable differentiable optimization for duration time parameters $\bm{C}$, we introduce weight parameters $\bm{W}$ by relaxing the indicator random variables using the sigmoid function $\sigma(x) = \frac{1}{1+e^{-x}}$. Using the duration parameter matrix $\bm{C}$, we define $D$-dimensional vectors $\bm{w}_{t, i}$ that compose weight tensor $\bm{W}: (T \times I \times D)$: \n\n\begin{equation}\label{w_def}\n \bm{w}_{t, i} = \sigma \left((t - T + \bm{C}_i)\/ \tau \right) \n\end{equation}\n\nFor each feature $d$ and summary $i$, clinical observations $\bm{X}_{ d, t}$ where $t > T - \bm{C}_{d, i}$ will have corresponding weights $\bm{w}_{t, i, d}$ near 1. Timesteps $t$ where $t < T - \bm{C}_{d, i}$ will have corresponding weights $\bm{w}_{t, i, d}$ near 0. Temperature parameter $\tau$ controls the harshness of the weight matrix: small temperatures push the sigmoid function towards its edges, learning weights that are closer to exactly 0 for timesteps before $T -\bm{C}_{d, i}$ and 1 for timesteps after $T - \bm{C}_{d, i}$, effectively functioning as the indicator variables in Equation \ref{non_diff_cutoff_indicators}.\n
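\nFor concreteness, Equations \ref{w_def} and \ref{non_diff_cutoff_indicators} can be sketched in a few lines of NumPy. This is an illustrative sketch rather than our training code; the function names and the toy values $C=6$ and $\tau=0.1$ are ours:\n\begin{verbatim}\nimport numpy as np\n\ndef duration_weights(T, C, tau=0.1):\n    # w_t = sigmoid((t - T + C) / tau) for t = 1..T   (Eq. 3)\n    t = np.arange(1, T + 1)\n    return 1.0 / (1.0 + np.exp(-(t - T + C) / tau))\n\ndef weighted_mean(x, m, w):\n    # duration-weighted mean over measured timesteps   (Eq. 1)\n    return np.sum(w * m * x) / (np.sum(w * m) + 1e-8)\n\nT = 24\nw = duration_weights(T, C=6.0)   # only the last ~6 hours get weight ~1\nx = np.random.randn(T)           # normalized hourly values\nm = (np.random.rand(T) < 0.7).astype(float)  # measurement indicators\nprint(weighted_mean(x, m, w))\n\end{verbatim}\nAs $\tau \to 0$ the weights approach the hard indicators, so nothing is lost relative to Equation \ref{non_diff_cutoff_indicators}; for $\tau > 0$ the summary is differentiable in $\bm{C}$.\n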
\nWeighted summary functions $f_i$ used to derive human-interpretable summaries $\bm{H}$ can be found in Table \ref{table:summaries-weighted}. Duration parameters $\bm{C}$ that determine weight tensor $\bm{W}$ are included in $\beta_{\bm{H}}$, the set of all parameters necessary to compute the summaries.\n\begin{table}[h!]\n\centering\n\n\begin{tabular}{|c| c|} \n \hline\n Description & Function \\ [0.5ex] \n \hline\n \n Mean of the time-series & $\left(\sum_{t=1}^{T} \left( \bm{w}_{t, i} \odot \bm{X}_t \odot \bm{M}_t \right)\right) \/ \left(\sum_{t = 1}^{T} \bm{M}_t \odot \bm{w}_{t, i} \right) $\\ \hline\n \n Variance of the time-series & $ \frac{(\sum_{t = 1}^{T} \bm{M}_t \odot \bm{w}_{t, i})^2}{(\sum_{t=1}^{T} \bm{M}_t \odot \bm{w}_{t, i})^2 - \sum_{t=1}^{T} \bm{M}_t \odot \bm{w}^2_{t, i}} \odot \sum_{t = 1}^{T} \bm{M}_t \odot \bm{w}_{t, i} \odot \left( \bm{X}_t {-} \bar{\bm{X}} \right)^2$ \\ \hline\n \n Indicator if feature was ever measured & $\sigma \left( \left( \sum_{t = 1}^T \bm{w}_{t, i} \odot \bm{M}_t \right) \/ \left(\tau \odot \sum_{t = 1}^T \bm{w}_{t, i} \right)\right)$\\ \hline\n \n Mean of the indicator sequence & $ \left(\sum_{t = 1}^{T} \bm{w}_{t, i} \odot \bm{M}_t \right) \/ \left(\sum_{t = 1}^{T} \bm{w}_{t, i} \right)$\\ \hline\n \nVariance of the indicator sequence & $ \left( \frac{(\sum_{t=1}^{T} \bm{w}_{t, i})^2}{(\sum_{t=1}^{T} \bm{w}_{t, i})^2 - \sum_{t=1}^{T} \bm{w}^2_{t, i}} \right) \sum_{t = 1}^{T} \bm{w}_{t, i} \odot ( \bm{M}_t - \bar{\bm{M}})^2$\\ \hline\n \n \# switches from missing to measured & $\left(\sum_{t = 1}^{T - 1} \bm{w}_{t, i} \odot | \bm{M}_{t + 1} - \bm{M}_t|\right) \/ \left(\sum_{t=1}^{T} \bm{w}_{t, i} \right) $\\ \hline\n \n First time the feature is measured & $ \min t$ s.t. $\bm{M}_t = 1$\\ \hline\n \n Last time the feature is measured & $ \max t$ s.t. 
$\bm{M}_t = 1$\\ \hline\n \n Proportion of time above threshold $\bm{\phi}^{+}$ & $ \left( \sum_{t = 1}^{T} \bm{w}_{t, i} \odot \bm{M}_t \odot \sigma( \frac{\bm{X}_t- \bm{\phi}^{+}}{\tau}) \right) \/ \left( \sum_{t = 1}^{T} \bm{M}_t \odot \bm{w}_{t, i} \right)$ \\ \hline\n \n Proportion of time below threshold $\bm{\phi}^{-}$ & $ \left( \sum_{t = 1}^{T} \bm{w}_{t, i} \odot \bm{M}_t \odot \sigma( \frac{\bm{\phi}^{-} - \bm{X}_t }{\tau}) \right) \/ \left( \sum_{t = 1}^{T} \bm{M}_t \odot \bm{w}_{t, i} \right)$\\ \hline\n \n Slope of an L2 line & $\frac{\sum_{t= 1}^T \bm{w}_{t, i} (t - \bar{t}_{\bm{w}}) (\bm{X}_t - \bar{\bm{X}}_{\bm{w}})}{\sum_{t = 1}^T \bm{w}_{t, i} (t - \bar{t}_{\bm{w}})^2}$, where $\bar{t}_{\bm{w}} = \frac{\sum_t \bm{w}_{t, i} \cdot t}{\sum_t \bm{w}_{t, i}}$ and $\bar{\bm{X}}_{\bm{w}} = \frac{\sum_t \bm{w}_{t, i} \odot \bm{X}_t}{\sum_t \bm{w}_{t, i}}$ \\ \hline\n \n Standard error of the L2 line slope & $1 \/ \left( \sum_t \bm{w}_{t, i} \odot (t - \bar{t}_{\bm{w}})^2 \right)$ \\ \hline\n\n \hline\n\end{tabular}\n\caption{Table of functions $f_i$ used to calculate human-interpretable summaries $\bm{H}$. For each of the $D$ clinical variables, all $I$ of the above functions are applied to each of the $N$ patients. Each of the $I$ summary features $i$ is defined with respect to $D$-dimensional weight vectors $\bm{w}_{t, i}$ defined in Section \ref{section:relaxations}. Parameter $\tau$ is a temperature parameter for the sigmoid function. $\mathbbm{1}(\cdot)$ denotes indicator variables for events inside the parentheses, $\odot$ indicates element-wise matrix multiplication and division is done element-wise. Additionally, $\bar{\bm{X}} = \sum_t^T \bm{M}_t \odot \bm{X}_t \/\sum_t^T \bm{M}_t$, and $\bar{\bm{M}} = \frac{1}{T} \sum_t^T \bm{M}_t$.}\n\label{table:summaries-weighted}\n\end{table}\n\n \textbf{Learning Threshold Parameters.}\nOur work relaxes summary definitions to enable differentiable optimization to learn summary parameters.\nThe indicator variables used to define the proportion of hours that a patient's timeseries is above thresholds $\bm{\phi}^{+}$ in Equation \ref{thresholds-non-diff} are non-differentiable with respect to $\bm{\phi^{+}}$. To enable differentiable optimization, our work defines our threshold summary features using the sigmoid function $\sigma$ with temperature parameter $\tau$:\n\begin{equation}\n f_{threshold}(\bm{X}, \bm{M}, \bm{W}) = \left(\textstyle\sum_{t = 1}^{T} \bm{w}_{t, i = threshold} \odot \bm{M}_t \odot \sigma\left( \frac{\bm{X}_t - \bm{\phi}^{+}}{\tau}\right)\right) \/ \left( \textstyle\sum_{t = 1}^{T} \bm{M}_t \cdot \bm{w}_{t, i = threshold} \right) \n\end{equation}\n\nThreshold parameters $\{\bm{\phi}^{+}, \bm{\phi}^{-}\}$ are included in $\beta_{\bm{H}}$, the set of all parameters necessary to compute the summaries.\n
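\nA matching sketch of the relaxed threshold summary (again illustrative only; \texttt{x}, \texttt{m}, and \texttt{w} are one variable's hourly values, measurement indicators, and duration weights from the previous sketch):\n\begin{verbatim}\ndef soft_fraction_above(x, m, w, phi, tau=0.1):\n    # differentiable 'proportion of measured time above phi':\n    # the indicator 1(x_t > phi) is relaxed to sigmoid((x_t - phi)/tau)\n    s = 1.0 / (1.0 + np.exp(-(x - phi) / tau))\n    return np.sum(w * m * s) / (np.sum(w * m) + 1e-8)\n\end{verbatim}\nBecause the sigmoid is smooth in $\bm{\phi}^{+}$, gradients flow to the threshold itself, which is what allows critical values to be learned rather than fixed by hand.\n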
\n\subsection{Learning Process}\label{section:learning-process}\nOur study uses summary features $\bm{H}$, along with static variables $\bm{S}$ and timeseries variables $\{\bm{X}, \bm{M}\}$ at prediction time $T = 24$ as input to a Logistic Regression model $g$ with coefficients $\bm{\beta}_g$. Logistic Regression model $g$ outputs predicted probabilities $\hat{\bm{y}} = p(\bm{y} = \bm{1} | \bm{X},\bm{S}, \bm{M})$. Our objective is to learn optimal summary and model parameters $\bm{\beta} = \{\bm{\beta}_{\bm{H}}, \bm{\beta}_g\}$. We jointly learn parameters $\bm{\beta}$ by minimizing the loss function\n\begin{equation}\n \mathcal{L}(\bm{\beta}; \bm{X}, \bm{M}, \bm{S}, \bm{y}) = - \frac{1}{N} \sum_{n = 1}^N \omega_n \left( \bm{y}_n \cdot \log[g(\bm{X}_n, \bm{M}_n, \bm{S}_n, \bm{\beta})] + (1 - \bm{y}_n) \log[1 - g(\bm{X}_n, \bm{M}_n, \bm{S}_n, \bm{\beta})] \right) + \Omega(\bm{\beta}_g)\n\end{equation} \nOur loss function is the sum of the weighted binary cross-entropy loss using predictive model $g$ and the regularization penalty $\Omega(\bm{\beta}_g)$. To account for class imbalance, we reweight each training example $n$'s loss contribution by $\omega_n$, the inverse of its class frequency in the training dataset.\n\n\textbf{Horseshoe Regularization on Coefficients.}\nWe explicitly optimize for sparsity in model parameters $\bm{\beta}_g$ via our regularization penalty in our training objective. We use a Horseshoe regularization penalty $\Omega(\bm{\beta}_g)$ with shrinkage parameter $1$ to encourage sparsity in the learned regression coefficients \cite{horseshoe-bhadra}. \n
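\nThe joint objective can be sketched in PyTorch as follows. The sketch is schematic: the feature matrix is assumed to already stack the differentiable summaries $\bm{H}$ with the static features (so that gradients reach $\bm{C}$ and $\bm{\phi}^{\pm}$ through it), and the closed-form term we write for $\Omega$ is only a standard approximation to the negative log Horseshoe density, not necessarily the exact penalty used in our experiments:\n\begin{verbatim}\nimport torch\n\ndef joint_loss(feats, y, beta, omega, lam=1.0):\n    # feats: (N, F) summaries H stacked with static features S;\n    # beta: (F+1,) [bias, coefficients]; omega: per-example weights.\n    logits = feats @ beta[1:] + beta[0]\n    bce = torch.nn.functional.binary_cross_entropy_with_logits(\n        logits, y, weight=omega, reduction="mean")\n    # Horseshoe-like sparsity penalty: -log log(1 + 2 / beta^2)\n    pen = -torch.log(torch.log1p(2.0 / (beta[1:] ** 2 + 1e-8))).sum()\n    return bce + lam * pen\n\end{verbatim}\nA single Adam loop on this loss updates durations, thresholds, and regression coefficients simultaneously, which is what allows the summaries and the classifier to be learned jointly.\n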
\\section{Experiments}
We compare models learned using our summaries to other interpretable models as well as deep learning baselines. We show that our models outperform other traditional human-interpretable model classes and achieve performance comparable to deep models on the in-hospital mortality task.

\\textbf{Model configurations.} Our baseline models are: Ridge Logistic Regression models that take as input only the patient timeseries measured at the time of prediction $T$, and Ridge Logistic Regression and LSTM models that take as input all of the patient timeseries. We used an LSTM as our deep baseline architecture as prior works document their superior performance at mortality prediction from clinical timeseries data \\cite{Harutyunyan19}.

Our LSTM models were trained on the sequential timeseries data $\\{\\bm{X}, \\bm{M}\\}$ with a step size of 1 hour. The LSTM hidden states at each timestep were used both to predict the next timestep of the patient timeseries $\\bm{X}$ and to predict the outcome labels $\\bm{y}$. All $T$ output hidden states were input to a fully-connected layer, which output predictions of the next timestep $\\bm{X}_{t + 1}$. The last output hidden state at time $T$ was also input, together with the static features $\\bm{S}$, to a fully-connected layer to predict the outcome labels $\\bm{y}$. The LSTM models were trained to minimize both the mean-squared error of the next state prediction, and the binary cross entropy loss of the classification prediction. ReLU activation functions were applied to both fully-connected layers.

\\textbf{Training Details.}
For training and testing, we split the cohort of $N$ patients into train and test sets, where all data associated with each patient is either in train or test. All performance metrics are averaged across five train-test splits. Ridge baseline models were implemented using RidgeCV from \\texttt{scikit-learn} \\cite{scikit-learn}. LSTMs as well as our summary-based Logistic Regression models were implemented with PyTorch, and trained with the Adam optimizer \\cite{adam} at a batch size of 256. We trained all of our Logistic Regression models for 30,000 epochs and LSTM models for 10,000 epochs using early stopping. All hyperparameters (including the LSTM hidden state and layer dimensions, optimizer learning rate, and regularization parameters) were selected via random hyperparameter search \\cite{bergstra2012random}. All temperature parameters $\\tau$ were set to $0.1$.

Our final learned LSTM models have hidden state dimension 32, 1024 nodes in the layer that predicts the next timestep, and 64 nodes in the layer that predicts the labels $\\bm{y}$. They are trained using a learning rate of $10^{-5}$. Our final learned models with summaries use $\\alpha = 10^{-5}$, Horseshoe shrinkage parameter $1.0$, and learning rate $10^{-5}$.

\\section{Results}
\\textbf{Our learned models achieve performance comparable to state-of-the-art baselines.} Table \\ref{table:mortality-auc} compares the performance of our learned models with linear and deep baselines for the in-hospital mortality prediction task. Applying a linear model to the learned summary features $\\bm{H}$ consistently improves AUCs in comparison to using a linear model on clinical timeseries and static data alone. Our models achieve AUC performance comparable to that of state-of-the-art LSTM models, whose test AUC is $0.9000 \\pm 0.0223$. Notably, our models that allow differentiable optimization to learn duration times outperform models that compute all summary features using the entire duration of the patient timeseries. This implies that there is predictive value in explicitly modelling how much of each variable's clinical timeseries should be considered for a specific prediction task.
\\begin{table}[h!]
\\centering
\\begin{tabular}{|c | c| c |}
 \\hline
 \\textbf{Model} & Train set AUC & Test set AUC\\\\ \\hline
 LR trained on $\\{\\bm{S}, \\bm{M}\\}$ and $\\bm{X}$ at time $T$ only & $0.8653 \\pm 0.0013$ & $0.8626 \\pm 0.0079$ \\\\ \\hline
 LR trained on $\\{\\bm{S}, \\bm{M}, \\bm{X}\\}$ & $0.8931 \\pm 0.0015$ & $0.8668 \\pm 0.0122$ \\\\ \\hline
 LSTM trained on $\\{\\bm{S}, \\bm{M}, \\bm{X}\\}$ & $\\bf{0.9101 \\pm 0.0018}$ & $\\bf{0.9000 \\pm 0.0223}$ \\\\ \\hline
 Our model, trained on $\\{\\bm{S}, \\bm{M}, \\bm{X}\\}$, non-differentiable durations $\\bm{C}$ & $\\bf{0.9065 \\pm 0.0019}$ & $\\bf{0.8818 \\pm 0.0063}$ \\\\ \\hline
 Our model, trained on $\\{\\bm{S}, \\bm{M}, \\bm{X}\\}$, differentiable durations $\\bm{C}$ & $\\bf{0.9074 \\pm 0.0016}$ & $\\bf{0.8867 \\pm 0.0061}$ \\\\ \\hline
\\end{tabular}

\\caption{Performance of learned models on the in-hospital mortality prediction task. AUCs are averaged over five train-test splits with their standard error. Abbreviations: LR, Logistic Regression; LSTM, long short-term memory. $\\bm{C}$ refers to the duration parameters defined in Section \\ref{section:relaxations}.}

\\label{table:mortality-auc}
\\end{table}

\\textbf{Our learned models use fewer features to achieve higher accuracy in comparison to other interpretable baselines.} To evaluate the sparsity of each model, we performed a set of ablation experiments where we zeroed all but the $N$ coefficients with the largest magnitudes for each of the learned Logistic Regression models. In Figure \\ref{figure:ablation-sparsity}, we show the average test set AUC for the Logistic Regression baseline versus our models when only $N$ coefficients are retained. Our models consistently outperform the baseline models when all but $N$ coefficients are zeroed, suggesting that our models learn a smaller and more predictive set of important features.
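The ablation described above amounts to hard-thresholding the learned coefficients and re-scoring; a minimal sketch (with hypothetical variable names) is:
\\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def topk_ablation_auc(coef, intercept, X_test, y_test, k):
    # Zero all but the k largest-magnitude coefficients, then re-evaluate.
    keep = np.argsort(np.abs(coef))[-k:]
    sparse = np.zeros_like(coef)
    sparse[keep] = coef[keep]
    logits = X_test @ sparse + intercept
    return roc_auc_score(y_test, 1.0 / (1.0 + np.exp(-logits)))
\\end{verbatim}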
\n\\begin{figure}[]
\\centering
 \\includegraphics[width=0.5\\textwidth]{images\/ablation_mortality.png}
\\caption{Prediction quality (mean test AUC) vs. model complexity (number of non-zero features) for the baseline Ridge Regression trained on all timesteps of the patient timeseries, versus our models. The dark blue horizontal line shows our model's average test set AUC using all $401$ derived features, and the orange horizontal line shows the average test set AUC of the Logistic Regression model trained only on the $65$ features extracted at the time of prediction.}
\\label{figure:ablation-sparsity}
\\end{figure}

\\textbf{Our learned models are interpretable.}
Table \\ref{table:key-summary-features} shows the key summary features that consistently have the largest learned Logistic Regression coefficients across train-test splits. The corresponding coefficient for each feature can be interpreted as a measure of the feature's contribution to the final classification label. For example, the mean of the patient's GCS has a large negative coefficient, meaning that patients with higher mean GCS scores are assigned lower predicted probabilities of in-hospital mortality. Our models are therefore \\emph{decomposable} \\cite{lipton-mythos}, as each of the model's features and coefficients has an intuitive clinical explanation.


\\textbf{Our learned summary features are clinically sensible.}
The vast majority of the key summary features learned by our models, shown in Table \\ref{table:key-summary-features}, are supported by studies in the medical literature. For instance, it is widely accepted that patients who are older tend to have lower chances of survival in ICU settings \\cite{nielsen2019survival,fuchs2012icu}. Similarly, patients with GCS scores below 6 tend to have severe injuries and higher chances of mortality \\cite{bastos1993glasgow}. Notably, a lower GCS score in the later hours of a patient's hospitalisation significantly reduces the patient's chances of survival \\cite{10.1001\/archneur.1990.00530110035013,settervall2011hospital}. Finally, the normal range of features such as the blood oxygen saturation (SPO$_2$) is between 95\\% and 100\\%. An SPO$_2$ consistently below 90\\% indicates hypoxaemia or potential respiratory distress.
These patients have to be mechanically ventilated in the ICU and frequently have lower chances of survival \\cite{lazzerini2015hypoxaemia,vold2015low}.
\\begin{table}[h!]
\\centering
\\begin{tabular}{|c | l| l | r | }
 \\hline
 \\textbf{Feature} & \\textbf{Aggregation} & \\textbf{Time} & \\textbf{Coefficient} \\\\ \\hline
 Age & static value & - & 14.9 \\\\ \\hline
 BUN & value at & hour 24 & 6.2 \\\\ \\hline
 GCS & mean over & hours 5 - 24 & -5.59 \\\\ \\hline
 HR & value at & hour 24 & 4.69 \\\\ \\hline
 FiO$_2$ & value at & hour 24 & 4.31 \\\\\\hline
 Hct & times measured over & hours 2 - 24 & -3.81 \\\\ \\hline
 HR & mean over & hours 2 - 24 & 3.78 \\\\ \\hline
 GCS & value at & hour 24 & -3.62 \\\\ \\hline
 GCS & hours below 6.08 & hours 7 - 24 & 3.37 \\\\\\hline
 Creatinine & hours below 0.35 mg\/dL & hours 5 - 24 & 2.76 \\\\ \\hline
 FiO$_2$ & hours above 62.96\\% & hours 16 - 24 & 2.59 \\\\ \\hline
 SpontaneousRR & mean over & hours 2 - 24 & 2.29 \\\\ \\hline
 GCS & times measured over & hours 5 - 24 & 2.23 \\\\ \\hline
 Sodium & hours below 131.57 mEq\/L & hours 1 - 24 & 2.16 \\\\\\hline
 WBC & hours below 0.78 cells\/mL & hours 6 - 24 & 2.10 \\\\\\hline
 SPO$_2$ & hours below 92.36\\% & hours 10 - 24 & 2.04 \\\\ \\hline
\\end{tabular}

\\caption{ Key summary features, sorted from largest to smallest coefficient magnitude, from the learned models. }

\\label{table:key-summary-features}
\\end{table}

\\textbf{Initialization sensitivity.}
In general, we observed that our optimization procedure is stable, learning the same key summary features across different stochastic parameter initializations and train-test splits. However, there are cases where we observed that the learned duration parameters $\\bm{C}$ varied depending on their initialization. As such, we recommend that practitioners incorporate prior knowledge about the clinical prediction task when initializing the duration time parameters. For example, if examining the entire duration of the patient's timeseries is necessary for a prediction task, then the duration parameters should be initialized to include the entire timeseries by default.

\\section{Discussion \\& Conclusion}
In this work, we defined functions to compute interpretable, parameterizable summaries of clinical timeseries, and developed relaxations so that our summary parameters could be jointly learned with a downstream predictive model. In our experiments, we used Logistic Regression to make predictions because its coefficients are easily decomposable \\cite{lipton-mythos}. However, because our learned summaries are inherently interpretable, any other interpretable architecture could be used instead. Our methodology is generalizable and enables the efficient learning of intuitive and predictive timeseries summaries without placing any assumptions on the downstream model architecture.

\\textbf{Future work.} Our study suggests many interesting directions for future work. One avenue would be to conduct a user study to validate the human-interpretability and decomposability of our proposed summary features. Another would be to evaluate whether the summary features learned for particular critical care prediction tasks remain predictive for a wider set of critical care prediction tasks.
\nFinally, we could also develop additional summary statistic functions, or expand our framework to share duration parameters across features or across summaries to better model dependencies between clinical labs and vitals, since many physiological events are characterized by simultaneous changes to several labs and vitals \\cite{hotchkiss2017septic}.

\\textbf{Conclusion.} In this paper, we propose a new method to learn interpretable and predictive summary features from clinical timeseries data. In addition to introducing novel summary statistics, including slope and threshold features, our work differs from prior work by learning the duration of timeseries data that should be used to compute each summary. We demonstrate that our learned timeseries summaries achieve performance comparable to state-of-the-art deep models when trained to predict early patient mortality risk on real patient data. We also qualitatively validate our models to confirm their interpretability and clinical sensibility. Our work is an important step towards optimizing for representations of clinical timeseries data that are both highly predictive and interpretable.

\\textbf{Acknowledgements:} NJ and FDV acknowledge support from NIH R01 MH123804-01A1. SP acknowledges support from the Miami Foundation and SNSF P2BSP2-184359.

\\bibliographystyle{unsrt}
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}

Neutrino oscillation measurements have become more and more accurate. The latest combined results \\cite{Lisi} read $\\D m^2_{\\odot} = (7.9 \\pm 0.7) \\times 10^{-5} \\ {\\rm eV}^2$ and $\\D m^2_{\\rm atm} = (2.4^{+0.5}_{-0.6})\\times 10^{-3} \\ {\\rm eV}^2$, for the mass squared differences of the large mixing angle solution (LMA) to the solar neutrino problem \\cite{solar1,solar2}, and for atmospheric neutrinos \\cite{atm1,atm2,atm3}, respectively. Both mass differences are sub-eV, but the neutrino mass scale is not yet certain.

Neutrino data also hint at the possibility of more than three massive, mostly active neutrinos. The Liquid Scintillator Neutrino Detector (LSND) Collaboration \\cite{LSND1} has reported evidence for a $\\bar{\\nu}_e$ flux $30$ meters away from a source of $\\bar{\\nu}_\\mu$, produced in $\\pi^+ \\rightarrow \\mu^+ \\nu_\\mu$ with subsequent $\\mu^+ \\rightarrow \\bar{\\nu}_\\mu + e^+ + \\nu_e$ decay. The unexpected flux can be explained if there is a small probability $P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)=(0.26 \\pm 0.08)\\%$ for a neutrino produced as a $\\bar{\\nu}_\\mu$ to be detected as a $\\bar{\\nu}_e$ \\cite{LSND2}.
This LSND anomaly has yet to be confirmed, and the MiniBooNE experiment at Fermilab \\cite{MiniBooNE} should very soon give a definitive confirmation or refutation. Numerous attempts to solve the LSND puzzle, however, have already been proposed \\cite{Strumia}. It has been shown that oscillations with extra sterile neutrinos can fit the LSND anomaly \\cite{sterile}. But it has also been pointed out that extra sterile neutrinos could be in conflict with Big Bang Nucleosynthesis (BBN) \\cite{Cirelli}, as well as with the SN1987A supernova neutrino events \\cite{CPT}.

In a recent work \\cite{Gouvea}, it has been argued that, if the right-handed neutrino Majorana scale $m_R$ is of ${\\cal O}({\\rm eV})$, adequate fits to the LSND data can be obtained. This ``eV seesaw\" scenario runs against theoretical arguments in favor of a very large $m_R$.
To name a few such arguments: the canonical\nseesaw mechanism \\cite{seesaw1,seesaw2,seesaw3} with $m_R \\sim\n10^{14}\\ {\\rm GeV}$ can elegantly explain why neutrino masses are\nso small, even with lepton Yukawa couplings that are of order one;\nthermal leptogenesis \\cite{leptogenesis1} points to $m_R \\gtrsim\n10^{10}\\ {\\rm GeV}$ \\cite{leptogenesis2}. However, as stressed in\nRef. \\cite{Gouvea}, nothing {\\it experimental} is really known\nabout the magnitude of $m_R$, except perhaps the LSND result,\nwhich is at eV scale.\n\nThe purpose of this letter is to show that the eV seesaw proposed\nin Ref.~\\cite{Gouvea} can be straightforwardly incorporated in a\nfour generation scenario.\n\nThe Standard Model (SM) with a sequential fourth generation (SM4)\nis not ruled out by electroweak precision measurements, if one\nallows the extra active neutrino to have mass close to $50$ GeV\n\\cite{Maltoni,HPS}. To avoid bounds from direct search at LEP II\n\\cite{LEPII}, mixing of the fourth heavy neutrino with the three\nlight neutrinos should be small $(\\lesssim 10^{-6})$. It is clear\nthat in standard seesaw with $m_R \\sim 10^{15}$ GeV, an extra\ngeneration is hard to accommodate (A different approach to predict\na light sterile neutrino in the presence of a fourth generation is\nthe so called ``flipped seesaw\" \\cite{GZ}). All four (mostly)\nactive neutrinos will then be light, contradicting the invisible\n$Z$ width which measures only three light neutrinos. But taking\n$m_R$ at scale ${\\cal O}(\\rm{eV})$, one can now have a\nsufficiently heavy fourth neutrino.\n\nIn the following we will show that, by taking $m_R \\sim {\\cal\nO}(\\rm{eV})$, the fourth neutrino is pseudo-Dirac and heavy. It\nwill not affect the invisible $Z$ width, and largely decouples\nfrom lower generations. Aside from three mostly active light\nneutrinos, three sterile neutrinos with mass $\\gtrsim {\\rm eV}$ is\npredicted. A numerical analysis gives results consistent with the\nLSND as well as solar and atmospheric data. It seems that\n$\\sin^2\\theta_{13}$ cannot be large.\n\n\n\\section{Pseudo-Dirac Fourth Neutrino}\n\nFollowing Ref. \\cite{Gouvea} but allowing for a possible 4th\ngeneration, the $8 \\times 8$ neutrino mass matrix $M$ is given by\n\\begin{equation}\nM = M_D + \\Delta M_R + \\delta M_D,\n \\label{massmatrix}\n\\end{equation}\nin a form suggestive of mass hierarchies.\nIn the basis where the $4 \\times 4$ Dirac mass matrix is diagonal,\nthe dominant Dirac mass for the 4th generation arises from\n\\begin{equation}\nM_D = m_D\\left(\n\\begin{array}{cc}\n0 & I_4 \\\\\nI_4 & 0\n\\end{array} \\right),\n\\end{equation}\nwhere $m_D \\sim 50$ GeV, $0$ and $I_4$ are ${4\\times4}$ matrices\nwith zero elements, except 1 in 44 element of $I_4$.\nThe right-handed Majorana mass matrix is given by\n\\begin{equation}\n\\Delta M_R=m_R \\left(\n\\begin{array}{cc}\n0 & 0 \\\\\n0 & r \\\\\n\\end{array} \\right),\n\\label{MR}\n\\end{equation}\nwhere $m_R \\sim$ eV \\cite{Gouvea}, and $r$ is a ${4\\times4}$\nsymmetric matrix with elements $r_{ij} \\sim 1$.\nThe third matrix is\n\\begin{equation}\n\\delta M_D = m_R \\left(\n\\begin{array}{cc}\n0 & \\varepsilon \\\\\n\\varepsilon & 0\n\\end{array} \\right),\n\\end{equation}\nwhich is pinned more to the $m_R$ scale, with\n\\begin{equation}\n{ \\varepsilon =\\left(\n\\begin{array}{cccc}\n\\ep_1 & 0 & 0 & 0 \\\\\n0 & \\ep_2 & 0 & 0 \\\\\n0 & 0 & \\ep_3 & 0 \\\\\n0 & 0 & 0 & 0 \\\\\n\\end{array} \\right)\n},\n\\end{equation}\nwhere $\\ep_4$ has been absorbed into $m_D$. 
Clearly, with $m_R\/m_D \\equiv x \\sim 10^{-10}$ and $\\ep_i$ considerably less than 1, $\\Delta M_R$ and $\\delta M_D$ can be treated as perturbations to $M_D$.

We note that the neutrino mass matrix of Eq.~(\\ref{massmatrix}) could arise from very small deviations from a democratic structure for the Dirac contribution \\cite{Silva}, and lepton number is assumed to be only slightly violated. The latter requires $m_R \\sim 0$ \\cite{Gouvea}, i.e. the symmetry is enhanced in the limit $\\Delta M_R \\rightarrow 0$. Having smaller elements in $\\delta M_D$ compared to $m_R$ is purely phenomenological \\cite{Gouvea}.

The dominant $M_D$ has six null eigenvalues, plus two eigenvalues $\\pm m_D$. For the eigenvectors associated with zero eigenvalues, one can choose $e_i^{(0)} = (0,\\ldots,0,1,0,\\ldots,0)$ with $1$ in the $i^{th}$ position for $i=1,2,3$ and $i=5,6,7$. For the eigenvalues $-m_D$ and $m_D$, the corresponding eigenvectors are $e_{4,8}^{(0)}=(0,0,0,\\mp1\/\\sqrt{2},0,0,0,1\/\\sqrt{2})$.
At zeroth order in $x$, the states $e_4^{(0)}$ and $e_8^{(0)}$ combine into a pure Dirac state of mass $m_D$. When linear corrections in $x$ are considered, the perturbed states $e_4$ and $e_8$, with $e_i= e_i^{(0)} + x e_i^{(1)}$, have masses which differ by ${\\cal O}(x)$, and they now correspond to a pseudo-Dirac neutrino with mass $\\sim m_D$, which we denote as $N$ (the charged partner is denoted $E$, with $m_E \\gtrsim 100$ GeV \\cite{PDG2004}).

For the six null eigenvalues of $M_D$, one can apply degenerate perturbation theory. A linear combination of the degenerate unperturbed states $e_i^{(0)}$ diagonalizes the $\\Delta M_R + \\delta M_D$ perturbation. That is, by diagonalizing the $6\\times6$ perturbation matrix
\\begin{equation}
{\\scriptsize M^{(3)}=m_R\\left(
\\begin{array}{cccccc}
0 & 0 & 0 & \\ep_1 & 0 & 0 \\\\
0 & 0 & 0 & 0 & \\ep_2 & 0 \\\\
0 & 0 & 0 & 0 & 0 & \\ep_3 \\\\
\\ep_1 & 0 & 0 & r_{11} & r_{12} & r_{13} \\\\
0 & \\ep_2 & 0 & r_{21} & r_{22} & r_{23} \\\\
0 & 0 & \\ep_3 & r_{31} & r_{32} & r_{33} \\\\
\\end{array} \\right)},
\\label{M3x3}
\\end{equation}
one obtains the corrections of ${\\cal O}(x)$ to the mass eigenvalues, and the correct eigenstates at order zero. For the effect of the 4th generation neutrino through the right-handed sector, $r_{ij}$, one has linear corrections in $x$ proportional to $r_{44}$ to the eigenvalues $\\pm m_D$ for $e_4$ and $e_8$, as already stated, and ${\\cal O}(x^2)$ corrections to all the other eigenvalues. The big hierarchy between the matrix elements $M_{48}$, $M_{84} \\cong m_D$ and all the others allows the fourth generation to largely decouple from the other three.
As stated in the Introduction, this is also required by direct search limits, which demand very small mixings between $N$ and the light neutrino flavors.

Having reduced the problem to a $6\\times 6$ case, the analysis performed in \\cite{Gouvea} suggests that one could find a solution to the LSND puzzle. Our main goal here is to confirm that such a solution exists, and to gain some insight on what could be a plausible scenario.
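The block structure described in this section is easy to verify numerically. The sketch below builds Eq.~(\\ref{massmatrix}) for illustrative (hypothetical) values of $\\ep_i$ and $r_{ij}$ and inspects the spectrum; note that the seesaw-suppressed light masses sit near the edge of double precision when $m_D \\sim 50$ GeV, so the check is qualitative:
\\begin{verbatim}
import numpy as np

m_D, m_R = 50e9, 1.0                 # eV: m_D ~ 50 GeV, m_R ~ eV
eps = np.array([0.05, 0.005, 0.03])  # illustrative epsilon_i << 1
r = np.ones((4, 4))                  # illustrative r_ij ~ 1

M = np.zeros((8, 8))
M[3, 7] = M[7, 3] = m_D              # dominant 4th-generation Dirac entry
M[4:, 4:] = m_R * r                  # Delta M_R
for i in range(3):                   # delta M_D
    M[i, 4 + i] = M[4 + i, i] = m_R * eps[i]

ev = np.sort(np.abs(np.linalg.eigvalsh(M)))
# ev[-2:]: the pseudo-Dirac pair near m_D, split by O(m_R r_44);
# ev[:6]: the eV-scale (and seesaw-suppressed) light sector.
\\end{verbatim}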
\\section{3+3 Neutrino Model}

We set to zero all phases for simplicity, since there are already too many parameters. We define $U^{\\prime}$ to be the rotation matrix which diagonalizes $M^{(3)}$. Having started in the basis where the Dirac neutrino mass matrix $M_D + \\delta M_D$ is diagonal, one still has the freedom to perform a rotation $U^{\\prime \\prime}$ in the left sector
\\begin{equation}
{\\scriptsize U^{\\prime \\prime}=\\left(
\\begin{array}{cccccc}
c_1 c_3 & s_1 c_3 & s_3 & 0 & 0 & 0 \\\\
-s_1 c_2 - c_1 s_2 s_3 & c_1 c_2 - s_1 s_2 s_3 & s_2 c_3 & 0 & 0 & 0 \\\\
s_1 s_2 - c_1 c_2 s_3 & -c_1 s_2 - s_1 c_2 s_3 & c_2 c_3 & 0 & 0 & 0 \\\\
0 & 0 & 0 & 1 & 0 & 0 \\\\
0 & 0 & 0 & 0 & 1 & 0 \\\\
0 & 0 & 0 & 0 & 0 & 1 \\\\
\\end{array} \\right)}.
\\label{Uprpr}
\\end{equation}
We assume no mixing between the fourth and the first three generations of charged leptons, as already discussed.
For the right sector, a rotation will just change $r_{ij}$ to $r_{ij}^{\\prime}$, resulting in no change to our numerical analysis.

The probability for a neutrino, produced with flavor $\\alpha$ and energy $E$, to be detected as a neutrino of flavor $\\beta$ after travelling a distance $L$ is \\cite{PDG2004}
\\begin{equation}
P(\\nu_\\a \\rightarrow \\nu_\\b) = \\delta_{\\a \\b}-4\\sum_{j>i}^n U_{\\a j} U_{\\b j}U_{\\a i}U_{\\b i} \\, \\sin^2{x_{ji}},
 \\label{Palphabeta}
\\end{equation}
where $\\a=e, \\mu, \\tau, s_i$, with $s_i$ the sterile neutrino flavors, $U=U^{\\prime \\prime} U^{\\prime}$, and $x_{ji}=1.27\\D m_{ji}^2 \\, L\/E$ with $\\D m_{ji}^2 \\equiv m_j^2-m_i^2$.
Applying Eq.~(\\ref{Palphabeta}) \\cite{SCS} in the ``3 active plus 3 sterile neutrino\" $(3+3)$ case, using the approximations $x_{21}=x_{31}=x_{32}=0$ and $x_{i1}=x_{i2}=x_{i3}$ for $i=4,5,6$, one obtains
\\begin{widetext}
\\begin{eqnarray}
P(\\nu_\\a \\rightarrow \\nu_\\b) &= & \\d_{\\a\\b} +
 4[ U_{\\a 4}^2(U_{\\b 4}^2-\\d_{\\a\\b})\\sin^2{x_{41}} + U_{\\a 5}^2(U_{\\b 5}^2-\\d_{\\a\\b}) \\sin^2{x_{51}}
 + U_{\\a 6}^2(U_{\\b 6}^2 -\\d_{\\a\\b}) \\sin^2{x_{61}} \\nonumber \\\\
&& + U_{\\a 4}U_{\\b 4}U_{\\a 5}U_{\\b 5}(\\sin^2{x_{41}}+\\sin^2{x_{51}} -\\sin^2{x_{54}})
 + U_{\\a 4}U_{\\b 4}U_{\\a 6}U_{\\b 6}(\\sin^2{x_{41}}+\\sin^2{x_{61}} -\\sin^2{x_{64}})\\nonumber \\\\
 &&+ U_{\\a 5}U_{\\b 5}U_{\\a 6}U_{\\b 6}(\\sin^2{x_{51}}+\\sin^2{x_{61}} -\\sin^2{x_{65}})]\\, ,
 \\label{Pmue}
\\end{eqnarray}
\\end{widetext}
where the orthogonality of $U$ has been used.
Expressions for the mixing angles are given by \\cite{Gouvea2}
\\begin{eqnarray}
&&\\tan^2{\\theta_{12}}\\equiv\\frac{|U_{e2}|^2}{|U_{e1}|^2}\\, , \\,\\,
\\tan^2{\\theta_{23}}\\equiv\\frac{|U_{\\mu 3}|^2}{|U_{\\tau 3}|^2} \\, ,\\nonumber \\\\
&& \\ \\sin^2{\\theta_{13}}\\equiv{|U_{e3}|^2}\\, . \\label{mixingangle}
\\end{eqnarray}

\\begin{table}
\\caption{Best fit values, $2\\s$ and $3\\s$ intervals for three-flavor neutrino oscillation parameters from global data, including solar, atmospheric, reactor (KamLAND and CHOOZ) and accelerator (K2K) experiments, taken from Ref.~\\cite{Maltoni2}.
}\n\\begin{center}\n\\begin{ruledtabular}\n\\begin{tabular}{cccc}\n& BEST FIT & $2\\s$ & $3\\s$\n\\\\ \\hline \\\\\n$\\D m_{21}^2\\,(10^{-5} \\; {\\rm eV}^2)$ & $8.1$ & $7.5-8.7$ & $7.2-9.1$ \\\\\n$\\D m_{31}^2\\,(10^{-3} \\; {\\rm eV}^2)$ & $2.2$ & $1.7-2.9$ & $1.4-3.3$ \\\\\n$\\sin^2{\\theta_{12}}$ & $0.30$ & $0.25-0.34$ & $0.23-0.38$ \\\\\n$\\sin^2{\\theta_{23}}$ & $0.50$ & $0.38-0.64$ & $0.34-0.68$ \\\\\n$\\sin^2{\\theta_{13}}$ & $0.0$ & $\\leq 0.028$ & $\\leq 0.047$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{center}\n\\end{table}\n\nTo perform our numerical analysis, we build the $\\chi^2$ by using\nthe three-flavor neutrino oscillation parameters $\\D m_{ji}^2$ and\n$\\sin^2{\\theta_{ij}}$ taken from Ref.~\\cite{Maltoni2}, which is\ncompiled from global data including solar, atmospheric, reactor\n(KamLAND and CHOOZ) and accelerator (K2K) experiments. These are\ngiven in Table 1. We also include the LSND result of\n$P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)=(0.26\\pm0.08)\\%$, and\nrequire $\\D m^2_{41} \\sim 1 \\, {\\rm eV}^2$, for a total of seven\ninputs.\nAs one clearly has too many parameters to make a proper fit, we\njust minimize the $\\c^2$ built with these quantities by letting\nthe twelve parameters of the model vary.\n\nAs an illustration, we find $s_1=-0.57,\\,s_2=0.98,\\, s_3=0.80$,\nwhich give\n\\begin{equation}\n{\\scriptsize\nU^{\\prime \\prime}=\\left(\n\\begin{array}{cccccc}\n0.49 & -0.34 & 0.80 & 0 & 0 & 0 \\\\\n-0.54 & 0.60 & 0.58 & 0 & 0 & 0 \\\\\n-0.69 & -0.72 & 0.11 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 \\\\\n\\end{array} \\right) \\, ,\n} \\label{Udoubleprime}\n\\end{equation}\nand\n\\begin{equation}\n{\\scriptsize M^{(3)}=5\\,{\\rm eV} \\left(\n\\begin{array}{cccccc}\n0 & 0 & 0 & 0.051 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0.004 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0.031 \\\\\n0.051 & 0 & 0 & 1.195 & 0.861 & 1.038 \\\\\n0 & 0.004 & 0 & 0.861 & 0.968 & 0.878 \\\\\n0 & 0 & 0.031 & 1.038 & 0.878 & 1.264 \\\\\n\\end{array} \\right),\n} \\label{M3x3num}\n\\end{equation}\nfrom which, together with Eq.~(\\ref{M3x3}), one can read the\nvalues for the rest of the parameters coming from the minimization\nprocess. The eigenvalues of $M^{(3)}$ are\n\\begin{eqnarray}\n(m_1,\\,m_2,\\,m_3) &=& -(7 \\times 10^{-5}, \\, 9\\times 10^{-3}, \\,\n0.048) \\;{\\rm eV},\n\\nonumber \\\\\n(m_4, \\, m_5, \\, m_6) &=& (1, \\, 1.2, \\, 15) \\; {\\rm eV},\n\\label{masses}\n\\end{eqnarray}\nand the associated rotation matrix is\n\\begin{equation}\n{\\scriptsize U^{\\prime}=\\left(\n\\begin{array}{cccccc}\n-0.06 & -0.99 & -0.11 & 0 & 0 & 0 \\\\\n-0.38 & 0.12 & -0.91 & 0.01 & -0.06 & 0.05 \\\\\n0.90 & -0.02 & -0.37 & -0.17 & 0.05 & 0.12 \\\\\n-0.19 & 0 & 0.09 & -0.75 & 0.16 & 0.60 \\\\\n0.05 & -0.01 & 0.07 & 0.22 & -0.83 & 0.49 \\\\\n0.01 & 0 & 0 & 0.60 & 0.52 & 0.61 \\\\\n\\end{array} \\right).\n} \\label{Uprime}\n\\end{equation}\n\nFrom Eq.~(\\ref{masses}) one gets $\\D m_{21}^2=8.1\\times\n10^{-5}\\,{\\rm eV}^2$ and $\\D m_{31}^2=2.3\\times 10^{-3}\\,{\\rm\neV}^2$, which of course is in good agreement with data. The\nrequirements for the neutrino mass splitting applied in\nRef.~\\cite{SCS} for the $3+2$ case, $0.1\\, {\\rm eV}^2 \\leq \\D\nm_{41}^2 \\leq \\D m_{51}^2\\leq 100 \\, {\\rm eV}^2$, are also\nsatisfied. 
This guarantees that the approximations used to derive Eq.~(\\ref{Pmue}) are valid.

From Eqs.~(\\ref{Udoubleprime}) and (\\ref{Uprime}), we obtain the full rotation matrix $U = U''U'$, i.e.
\\begin{equation}
{\\scriptsize
U=\\left(
\\begin{array}{cccccc}
0.82 & -0.54 & -0.04 & -0.14 & 0.06 & 0.08 \\\\
0.33 & 0.60 & -0.71 & -0.09 & -0.01 & 0.10 \\\\
0.42 & 0.59 & 0.69 & -0.03 & 0.05 & -0.03 \\\\
-0.19 & 0 & 0.09 & -0.75 & 0.16 & 0.60 \\\\
0.05 & -0.01 & 0.07 & 0.22 & -0.84 & 0.49 \\\\
0.01 & 0 & 0.01 & 0.60 & 0.52 & 0.61 \\\\
\\end{array} \\right).
} \\label{U}
\\end{equation}
Using $x_{ji}=1.27\\D m_{ji}^2({\\rm eV}^2) L({\\rm m})\/E({\\rm MeV})$ with $L\/E \\sim 1$, together with Eq.~(\\ref{Pmue}) for the $\\mu \\rightarrow e$ case, Eqs.~(\\ref{masses}) and~(\\ref{U}), one obtains $P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)=0.15\\%$, which is within $2\\sigma$ of the LSND central value.


$M^{(3)}$ in Eq.~(\\ref{M3x3num}) can be viewed as deviating from
\\begin{equation}
{\\scriptsize M^{(3)} = m_R \\left(
\\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\\\
0 & 0 & 0 & 0 & 0 & 0 \\\\
0 & 0 & 0 & 0 & 0 & 0 \\\\
0 & 0 & 0 & 1 & 1 & 1 \\\\
0 & 0 & 0 & 1 & 1 & 1 \\\\
0 & 0 & 0 & 1 & 1 & 1 \\\\
\\end{array} \\right) },
\\label{M3x3app}
\\end{equation}
which has five zero eigenvalues and one nonzero eigenvalue equal to $3m_R \\sim 15$ eV, and is diagonalized by
\\begin{equation}
{\\scriptsize U^{\\prime }=\\left(
\\begin{array}{cccccc}
1 & 0 & 0 & 0 & 0 & 0 \\\\
0 & 1 & 0 & 0 & 0 & 0 \\\\
0 & 0 & 1 & 0 & 0 & 0 \\\\
0 & 0 & 0 & -\\frac{1}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} & 0 \\\\
0 & 0 & 0 & -\\frac{1}{\\sqrt{2}} & 0 & \\frac{1}{\\sqrt{2}} \\\\
0 & 0 & 0 & \\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} \\\\
\\end{array} \\right) }.
\\end{equation}
Together with $U''$ of Eq.~(\\ref{Uprpr}), one has
\\begin{equation}
{\\scriptsize U=\\left(
\\begin{array}{cccccc}
c_1 c_3 & s_1 c_3 & s_3 & 0 & 0 & 0 \\\\
-s_1 c_2 - c_1 s_2 s_3 & c_1 c_2 - s_1 s_2 s_3 & s_2 c_3 & 0 & 0 & 0 \\\\
s_1 s_2 - c_1 c_2 s_3 & -c_1 s_2 - s_1 c_2 s_3 & c_2 c_3 & 0 & 0 & 0 \\\\
0 & 0 & 0 & -\\frac{1}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} & 0 \\\\
0 & 0 & 0 & -\\frac{1}{\\sqrt{2}} & 0 & \\frac{1}{\\sqrt{2}} \\\\
0 & 0 & 0 & \\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} \\\\
\\end{array} \\right) }.
\\label{Uapp}
\\end{equation}

\\begin{figure}[t]
 \\centering
 \\includegraphics[width=0.3\\textwidth]{chiSQ_s2fixed.eps}
 \\vskip 0.5cm
 \\caption{
 Contour plot of $\\chi^2$ vs the mixing angles $s_1$ and
 $s_3$, with $\\ep_i$ and $r_{ij}$ as in
 Eq.~(\\ref{M3x3num}), and $s_2 = 0.98$ held fixed.
 The regions in different shades are only indicative, and should
 not be interpreted as the $1\\s$, $2\\s$ and $3\\s$ regions,
 as the rest of the parameters are fixed at the best fit values.
 }
 \\label{fig:chiSQ23}
\\end{figure}

\\begin{figure}[b]
 \\centering
 \\includegraphics[width=0.45\\textwidth,height=0.45\\textwidth]{plotall_s2fixed.eps}
 \\caption{
 $P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)$, $\\sin^2{\\theta_{12}}$,
 $\\sin^2{\\theta_{23}}$ and $\\sin^2{\\theta_{13}}$ vs $s_1$ and $s_3$,
 corresponding to the lower right solution in Fig.
1, with
 $\\ep_i$ and $r_{ij}$ fixed as in Eq.~(\\ref{M3x3num}), and $s_2=0.98$.}
 \\label{fig:allplot}
\\end{figure}

\\begin{figure}[b]
 \\centering
 \\includegraphics[width=0.45\\textwidth,height=0.45\\textwidth]{plotall_s2fixed2nd.eps}
 \\caption{
 Same as Fig. 2, but corresponding to the upper left solution in Fig. 1.}
 \\label{fig:allplot2}
\\end{figure}

Applying this to Eq.~(\\ref{Pmue}) for the $\\mu \\rightarrow e$ case, one sees that the transition probability $P(\\nu_\\mu \\rightarrow \\nu_e)$ would vanish. Deviations from Eq.~(\\ref{M3x3app}), as realized in Eq.~(\\ref{M3x3num}), not only produce the needed neutrino mass spectrum but also yield a finite transition probability $P(\\nu_\\mu \\rightarrow \\nu_e)$.

In our analysis we did not take into account the null short-baseline experiments (NSBL). Let us compare our results with the ones obtained from a full analysis in Ref.~\\cite{SCS} for the (3+2) case. Using the rotation matrix elements $U_{e4}=-0.14$, $U_{\\mu4}=-0.09$, $U_{e5}=0.06$, $U_{\\mu5}=-0.01$, $U_{e6}=0.08$ and $U_{\\mu6}=0.10$, as one can read from Eq.~(\\ref{U}), together with Eq.~(\\ref{Pmue}) for $\\mu \\rightarrow e$, $e \\rightarrow e$ and $\\mu \\rightarrow \\mu$ respectively, we obtain $P(\\nu_\\mu \\rightarrow \\nu_e)=0.0015$, $P(\\nu_e \\rightarrow \\nu_e)=0.89$ and $P(\\nu_\\mu \\rightarrow \\nu_\\mu)=0.93$ in the $L\/E \\sim 1$ approximation. Our calculated values for the oscillation appearance and disappearance probabilities can now be compared with the ones obtained by using the best fit values for the rotation matrix elements in the (3+2) case of Ref.~\\cite{SCS}, $U_{e4}=0.121$, $U_{\\mu4}=0.204$, $U_{e5}=0.036$ and $U_{\\mu5}=0.224$. In the $L\/E \\sim 1$ approximation these give $P(\\nu_\\mu \\rightarrow \\nu_e)=0.0021$, $P(\\nu_e \\rightarrow \\nu_e)=0.95$ and $P(\\nu_\\mu \\rightarrow \\nu_\\mu)=0.84$. We remark that, although a full analysis is needed to tell whether our model is able to accommodate both the NSBL and LSND data, our predictions seem not too far from the results of Ref.~\\cite{SCS}.

From Eqs.~(\\ref{mixingangle}) and~(\\ref{U}), we find $\\sin^2{\\theta_{12}}=0.30$, $\\sin^2{\\theta_{23}}=0.52$, and
\\begin{equation}
\\sin^2{\\theta_{13}}=0.0018. \\label{theta13}
\\end{equation}
As expected, the values for $\\sin^2{\\theta_{12}}$ and $\\sin^2{\\theta_{23}}$ are in good agreement with data, but the as yet unmeasured $\\sin^2{\\theta_{13}}$ turns out to be rather small.
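The numbers quoted in this section can be cross-checked directly: diagonalizing Eq.~(\\ref{M3x3num}) reproduces the spectrum of Eq.~(\\ref{masses}), and inserting $U=U''U'$ into Eq.~(\\ref{Palphabeta}) gives the short-baseline probabilities. A numpy sketch of this check is given below, taking $c_i=\\sqrt{1-s_i^2}>0$; because the inputs are rounded and the $\\sin^2$ terms oscillate rapidly at $L\/E \\sim 1$, the comparison is indicative rather than exact:
\\begin{verbatim}
import numpy as np

s1, s2, s3 = -0.57, 0.98, 0.80
c1, c2, c3 = (np.sqrt(1 - s**2) for s in (s1, s2, s3))
Upp = np.eye(6)                              # U'' of Eq. (7)
Upp[:3, :3] = [[c1*c3, s1*c3, s3],
               [-s1*c2 - c1*s2*s3, c1*c2 - s1*s2*s3, s2*c3],
               [s1*s2 - c1*c2*s3, -c1*s2 - s1*c2*s3, c2*c3]]

eps = np.array([0.051, 0.004, 0.031])        # Eq. (12), in units of 5 eV
r = np.array([[1.195, 0.861, 1.038],
              [0.861, 0.968, 0.878],
              [1.038, 0.878, 1.264]])
M3 = np.zeros((6, 6))
M3[:3, 3:] = np.diag(eps); M3[3:, :3] = np.diag(eps); M3[3:, 3:] = r
m, Up = np.linalg.eigh(5.0 * M3)             # masses (eV), U' columns
U = Upp @ Up                                 # full mixing matrix

def prob(a, b, L_over_E=1.0):                # Eq. (8) for real U
    p = 1.0 if a == b else 0.0
    for j in range(6):
        for i in range(j):
            x = 1.27 * (m[j]**2 - m[i]**2) * L_over_E
            p -= 4 * U[a, j]*U[b, j]*U[a, i]*U[b, i] * np.sin(x)**2
    return p

print(prob(1, 0), prob(0, 0), prob(1, 1))    # cf. 0.0015, 0.89, 0.93
\\end{verbatim}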
\\section{In Search of Sizable \\boldmath $\\sin^2{\\theta_{13}}$}

To investigate the possibility of a larger value of $\\sin^2{\\theta_{13}}$, we restrict the $\\chi^2$ to the four inputs of $\\sin^2{\\theta_{ij}}$ and $P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)$. The mass spectrum is not affected by the rotation of Eq.~(\\ref{Uprpr}). With $\\ep_i$ and $r_{ij}$ given as in Eq.~(\\ref{M3x3num}), we first fix $s_1=-0.57$ and perform a $\\chi^2$ fit vs $s_2$ and $s_3$. We iterate, fixing $s_2 = 0.98$ ($s_3 = 0.8$) and minimizing $\\chi^2$ vs $s_1$ and $s_3$ ($s_1$ and $s_2$). We find that, in both cases, with $s_1$ and $s_3$ fixed to the values found in the previous section, $\\sin^2{\\theta_{23}}$ depends quite strongly on $s_2$, and values around 0.98 are preferred. We thus illustrate the results with $s_2 = 0.98$ held fixed.

In Fig. 1 we show the contour plot of $\\chi^2$ vs $s_1, s_3$. The three different shaded regions should not be interpreted as the $1\\s$, $2\\s$ and $3\\s$ regions, since we have fixed the rest of the parameters to the best fit values. But they still give an indication of the variations around the best fit region under the above assumptions.

In Fig. 2 we plot the four quantities $P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)$, $\\sin^2{\\theta_{12}}$, $\\sin^2{\\theta_{23}}$ and $\\sin^2{\\theta_{13}}$ vs $s_1$ and $s_3$, for the solution on the lower right of Fig. 1. The same is plotted in Fig.~3 for the upper left solution. Again, $\\ep_i$ and $r_{ij}$ are fixed as in Eq.~(\\ref{M3x3num}), and $s_2$ is held fixed at 0.98. We see that $P(\\bar{\\nu}_\\mu \\rightarrow \\bar{\\nu}_e)$ can reach the $1\\sigma$ region and $\\sin^2{\\theta_{12}}$ is well within range. However, to push $\\sin^2{\\theta_{13}}$ beyond 0.01, $\\sin^2{\\theta_{23}}$ seems to wander away from the maximal mixing value of 0.5, and values of $\\sim 0.4$ or 0.6 have to be tolerated. We note further that $\\sin^2{\\theta_{23}}$ is sensitive to $s_1$, rather than to $s_3$.

We conclude that $\\sin^2{\\theta_{13}}$ greater than 0.01 is possible, but seemingly not preferred. It is not clear whether this is an artefact of not being able to do a real fit.
Note that we have not checked explicitly whether the constraints from short-baseline disappearance experiments are fully satisfied.

\\section{Discussion and Conclusion}

It is tempting to consider whether mixing between the fourth and the first three light charged lepton generations could modify the situation with $\\sin^2{\\theta_{13}}$. But as already mentioned in the Introduction, one needs to satisfy both the bounds from direct search at LEP II and the electroweak precision measurements. We have pursued a numerical study, but find that, if we wish to keep the mixings sufficiently small so that the fourth active heavy neutrino is semi-stable, no important change with respect to the no-mixing case is observed. We note that the heavy neutrino could be heavier than 50 GeV and still have suppressed mixing to the lower generations, but then one would have to face the electroweak precision constraints. We note in passing that semi-stable heavy neutrinos are still of interest~\\cite{Rybka} to dark matter search experiments, as a fourth heavy lepton was once a leading dark matter candidate.

In summary, we have extended the eV seesaw scenario to four lepton generations. Taking the LSND scale as the right-handed seesaw scale $m_R \\sim$ eV, one has a heavy pseudo-Dirac neutrino with mass $m_N \\sim 50$ GeV, which largely decouples from the other generations and is relatively stable. One effectively has a $3+3$ solution to the LSND anomaly, which we illustrate with numerical solutions. As a possible outcome, our numerical study indicates that the third mixing angle, $\\sin^2{\\theta_{13}}$, seems to be less than 0.01.


\\vskip 0.3cm \\noindent{\\bf Acknowledgement}.\\ \\ This work is supported in part by NSC-94-2112-M-002-035 and HPRN-CT-2002-00292. We thank G. Raz, T. Volansky and R. Volkas for useful discussions.


","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}

\\IEEEPARstart{A}{} traditional approach for transmitting digital information over a communication link with high spectral efficiency ($\\eta\\geq2$ bit\/s\/Hz) is to combine a good error-correcting code with high-order modulation.
Over the last two decades, several capacity-approaching coded modulation techniques have been proposed, including low-density parity check (LDPC) coded modulation {[}1, 2{]}, turbo-coded modulation and turbo trellis-coded modulation {[}3{]}. In this paper, we propose an alternative bandwidth-efficient modulation technique, which does not rely on any error correction coding in the traditional sense, while still achieving error rates close to those of the best known coded modulation schemes. Our method is based on the serial concatenation of an orthogonal linear transform, such as the DFT or WHT, with a memoryless nonlinearity. We demonstrate that such a simple signal construction may exhibit properties of a random code ensemble, as a result approaching channel capacity if decoded using a maximum-likelihood (ML) decoder.
Thus, $f(z)$ has certain\nproperties of chaotic iterated functions, namely, \\emph{sensitive\ndependence on initial conditions} {[}6{]}. Under such assumptions,\nand if $N\\rightarrow\\infty$ and $l\\rightarrow\\infty$, the ensemble\nof waveforms generated by (\\ref{eq: main_modulator}) possesses all\nmajor properties of a random code ensemble, because even small modifications\nin the message sequence $\\ensuremath{\\left\\{ x_{k}\\right\\} }$ (e.g.,\nin a single bit) lead to: \n\\begin{description}\n\\item [{a)}] small modifications, at least, for all samples of the intermediate\nsignal $z_{n}$ due to the spreading properties of the orthogonal\ntransform operation, and \n\\item [{b)}] large (and pseudo-random) modifications for all samples of\nwaveform $s_{n}$ due to the properties of the nonlinear function\n$f(z)$. \n\\end{description}\nTherefore, we conjecture that where $f(z)$ has the above-mentioned\nproperties if the waveforms generated by (\\ref{eq: main_modulator}) are\ndemodulated using a ML decoder, system performance may approach channel\ncapacity. Unfortunately, brute-force, maximum-likelihood decoding\nof (\\ref{eq: main_modulator}) is prohibitively complex, generally\nof the order $O(M^{N})$, where $M$ is the modulation order. However,\nas in many practical applications, we can rely on belief-propagation\ntechniques to approximate ML-decoding. A perfect candidate for decoding\nthe signals generated by (\\ref{eq: main_modulator}) is the generalized\napproximate message passing (GAMP) algorithm {[}7{]}. Having already\nbeen applied to the decoding of clipped OFDM signals in {[}5{]}, it\nhas demonstrated good performance and reasonably good convergence\nbehavior. \n\n\\begin{figure}[tbh]\n\\includegraphics[scale=1.1]{modulator}\\caption{\\label{fig:direct_modulator}Proposed modulation scheme}\n\\end{figure}\n\n\n\\subsection{GAMP algorithm and its modifications}\n\nIn the additive white Gaussian noise (AWGN) channel, the model of\nthe received signal can be expressed as\n\n\\begin{equation}\n\\ensuremath{y_{n}=f\\left(z_{n}\\right)+w_{n},}\\quad n=0,1,...,N-1\\label{eq:gamp_model}\n\\end{equation}\nwhere $w_{n}$ is the AWGN term with zero mean and variance $\\mu^{w}$.\nModel (\\ref{eq:gamp_model}) is equivalent to a general problem formulation\nfor the GAMP algorithm {[}7{]}, which belongs to a class of Gaussian\napproximations of loopy belief propagation for dense graphs. In our\ndecoder implementation, we use the sum-product variant of the GAMP\nalgorithm, which approximates minimum mean-squared error estimates\nof $\\mathbf{x}$ and $\\mathbf{z}$. \n\nDuring simulation study, we discovered that the convergence behavior\nand bit and frame error rate (FER) performance of the conventional\nGAMP algorithm applied to the decoding of (\\ref{eq:gamp_model}) may\nbe improved by several simple algorithm modifications. In particular,\nwe used damped version of the GAMP algorithm, which is known to improve\nconvergence for certain mixing matrices {[}8{]}, and introduced scaling\nfactor for AWGN term variance $\\mu^{w}\\rightarrow\\alpha\\mu^{w}$.\nIn addition, we implemented the final decision selection and post-processing\nscheme as will be explained in the next section that helped us overcome\ncertain deficiencies of a conventional GAMP algorithm. To reduce computational\ncomplexity we opted to use a version of GAMP algorithm with scalar\nstep-sizes {[}7{]}. 
Note that the scalar step-size version of the\nGAMP algorithm is optimal for WHT and complex DFT since in both cases\n$\\left|F_{n,k}\\right|^{2}=1$. Moreover, our experimental study revealed\nthat the performance loss for real DFT case ($\\left|F_{n,k}\\right|^{2}\\neq1$)\nwas also marginal. \n\nThe GAMP algorithm adapted to our problem is summarized below. The\nalgorithm generates a sequence of estimates ${\\bf \\hat{x}}\\left(t\\right)$,\n${\\bf \\hat{z}}\\left(t\\right)$, for $t=1,2,...$ through the following\nrecursions:\n\n\\emph{Parameters:}\n\n$t_{max}$ the maximum number of iterations, \n\n$\\alpha$ the noise scaling factor, \n\n$\\beta$ the damping factor.\n\n\\emph{Step 1) Initialization:}\n\n$t=1$, ${\\bf \\hat{x}}\\left(1\\right)=\\boldsymbol{0}$, $\\tilde{\\mathbf{x}}\\left(1\\right)=\\boldsymbol{0}$,\n${\\bf \\boldsymbol{\\mu}}^{x}\\left(1\\right)=\\boldsymbol{1}$, ${\\bf \\hat{s}}\\left(0\\right)=\\boldsymbol{0}$\n\n\\emph{Step 2) Estimation of output nodes:}\n\n\\begin{equation}\n\\mu_{n}^{p}\\left(t\\right)=\\frac{1}{N}\\sum\\limits _{k=0}^{N-1}\\mu_{k}^{x}\\left(t\\right),\\forall n\\label{eq:gamp_algo_start}\n\\end{equation}\n\n\\begin{equation}\n\\hat{p}_{n}\\left(t\\right)=\\sum\\limits _{k=0}^{N-1}F_{n,k}\\hat{x}_{k}\\left(t\\right)-\\mu_{n}^{p}\\left(t\\right)\\hat{s}_{n}\\left(t-1\\right),\\forall n\n\\end{equation}\n\n\\begin{equation}\n\\hat{z}_{n}\\left(t\\right)=\\frac{1}{C}\\int\\limits _{-\\infty}^{\\infty}ze^{-\\frac{\\left(y_{n}-f\\left(z\\right)\\right)^{2}}{2\\alpha\\mu^{w}}-\\frac{\\left(\\hat{p}_{n}\\left(t\\right)-z\\right)^{2}}{2\\mu_{n}^{p}\\left(t\\right)}}dz,\\forall n\\label{eq:Integral_start}\n\\end{equation}\n\n\\begin{equation}\n\\mu_{n}^{z}\\left(t\\right)=\\frac{1}{C}\\int\\limits _{-\\infty}^{\\infty}z^{2}e^{-\\frac{\\left(y_{n}-f\\left(z\\right)\\right)^{2}}{2\\alpha\\mu^{w}}-\\frac{\\left(\\hat{p}_{n}\\left(t\\right)-z\\right)^{2}}{2\\mu_{n}^{p}\\left(t\\right)}}dz-\\left(\\hat{z}_{n}\\left(t\\right)\\right)^{2},\\forall n\n\\end{equation}\nwhere \n\n\\begin{equation}\nC=\\int\\limits _{-\\infty}^{\\infty}e^{-\\frac{\\left(y_{n}-f\\left(z\\right)\\right)^{2}}{2\\alpha\\mu^{w}}-\\frac{\\left(\\hat{p}_{n}\\left(t\\right)-z\\right)^{2}}{2\\mu_{n}^{p}\\left(t\\right)}}dz,\\forall n\\label{eq:Integral_end}\n\\end{equation}\n\n\\begin{equation}\n\\hat{s}_{n}\\left(t\\right)=\\left(1-\\beta\\right)\\hat{s}_{n}\\left(t-1\\right)+\\beta\\dfrac{\\hat{z}_{n}\\left(t\\right)-\\hat{p}_{n}\\left(t\\right)}{\\mu_{n}^{p}\\left(t\\right)},\\forall n\n\\end{equation}\n\n\\begin{equation}\n\\mu_{n}^{s}\\left(t\\right)=\\left(1-\\beta\\right)\\mu_{n}^{s}\\left(t-1\\right)+\\beta\\frac{1-\\frac{\\mu_{n}^{z}\\left(t\\right)}{\\mu_{n}^{p}\\left(t\\right)}}{\\mu_{n}^{p}\\left(t\\right)},\\forall n\n\\end{equation}\n\n\\emph{Step 3) Estimation of input nodes}: \n\n\\begin{equation}\n\\tilde{x}_{k}\\left(t\\right)=\\left(1-\\beta\\right)\\tilde{x}_{k}\\left(t-1\\right)-\\beta\\hat{x}_{k}\\left(t\\right)\n\\end{equation}\n\n\\begin{equation}\n\\mu_{k}^{r}\\left(t\\right)=\\left(\\frac{1}{N}\\sum\\limits _{n=0}^{N-1}\\mu_{n}^{s}\\left(t\\right)\\right)^{-1},\\forall k\n\\end{equation}\n\n\\begin{equation}\n\\hat{r}_{k}\\left(t\\right)=\\tilde{x}_{k}\\left(t\\right)+\\mu_{k}^{r}\\left(t\\right)\\sum\\limits _{n=0}^{N-1}F_{n,k}^{*}\\hat{s}_{k}\\left(t\\right),\\forall k\n\\end{equation}\n\n\\begin{equation}\n\\hat{x}_{k}\\left(t+1\\right)=\\sum\\limits _{m=1}^{M}d_{m}P_{m,k},\\forall k\n\\end{equation}\n\n\\begin{equation}\n\\mu_{k}^{x}\\left(t+1\\right)=\\sum\\limits 
_{m=1}^{M}\\left(d_{m}-\\hat{x}_{k}\\left(t+1\\right)\\right)^{2}P_{m,k},\\forall k\\label{eq:gamp_algo_end}\n\\end{equation}\nwhere\n\n\\begin{equation}\nP_{m,k}=\\frac{e^{-\\frac{\\left(d_{m}-\\hat{r}_{k}\\left(t\\right)\\right)^{2}}{2\\mu_{k}^{r}\\left(t\\right)}}}{\\sum\\limits _{l=1}^{M}e^{-\\frac{\\left(d_{l}-\\hat{r}_{k}\\left(t\\right)\\right)^{2}}{2\\mu_{k}^{r}\\left(t\\right)}}},\n\\end{equation}\n$M$ is the number of points in the signal constellation, and $\\left\\{ d_{m}\\right\\} $\nis the vector of constellation points. For example, for 2-PAM modulation,\n$\\left\\{ d_{m}\\right\\} =\\left[-1{\\rm \\quad+}1\\right]$, and for 4-PAM\nmodulation $\\left\\{ d_{m}\\right\\} =\\left[-3\\mathord{\\left\/\\vphantom{-3{\\sqrt{5}}}\\right.\\kern -\\nulldelimiterspace}\\sqrt{5}\\quad-1\\mathord{\\left\/\\vphantom{-1{\\sqrt{5}}}\\right.\\kern -\\nulldelimiterspace}\\sqrt{5}{\\rm \\quad+}1\\mathord{\\left\/\\vphantom{1{\\sqrt{5}}}\\right.\\kern -\\nulldelimiterspace}\\sqrt{5}{\\rm \\quad+}{\\rm 3}\\mathord{\\left\/\\vphantom{{\\rm 3}{\\sqrt{5}}}\\right.\\kern -\\nulldelimiterspace}\\sqrt{5}\\right]$. \n\nSteps (\\ref{eq:gamp_algo_start})\\textendash (\\ref{eq:gamp_algo_end})\nare repeated with $t\\rightarrow t+1$ until $t_{max}$ iterations\nhave been performed. \n\nWe presented here the GAMP algorithm version for the real-valued modulation\nand real orthogonal transform (e.g. $M$-PAM and WHT or real-DFT).\nNonetheless, the extension to the complex case is straightforward.\nMoreover, in case of Cartesian-type nonlinearity $f(z)$ the complexity\nof the decoding algorithm with $N$ complex input symbols is essentially\nthe same as the complexity of the real-valued algorithm with $2N$\ninput symbols.\n\nMore details on the GAMP algorithm and its thorough analysis can be\nfound in {[}7{]}. \n\n\\subsection{Choosing optimal nonlinearity $f(z)$}\n\nChoosing the optimal shape of nonlinearity $f(z)$ is not easy, requiring\na balance between two conflicting requirements. Firstly, the nonlinear\nfunction should be reasonably \\textquotedbl{}chaotic\\textquotedbl{}\nto guarantee sensitivity to initial conditions and, therefore, random-like\nproperties of the coded waveforms. On the other hand, our experimental\nstudy implies that a \\textquotedbl{}truly chaotic\\textquotedbl{} shape\nof nonlinearity $f(z)$ precludes GAMP algorithm from converging to\nthe ML-solution. Therefore, we adopted an \\emph{ad hoc} procedure\nto select and optimize the shape of the memoryless nonlinearity $f(z)$.\nOur choice of nonlinearity $f(z)$ relies on two key ideas: \n\\begin{itemize}\n\\item $f(z)$ should contain a linear or almost linear region around $f(0)$\nto allow the message passing decoder to converge. \n\\item $f(z)$ should \\emph{resemble} a chaotic iterated function in other\nregions to guarantee sensitivity to initial conditions. 
\n\\end{itemize}\nFor a general representation of $f(z)$, we suggest using the flexible,\npiece-wise linear model:\n\n\\begin{equation}\n\\ensuremath{f\\left(z\\right)=\\left\\{ \\begin{array}{l}\n{\\mathop{\\rm sgn}}\\left(z\\right)\\left(a_{0}\\left|z\\right|+b_{0}\\right),{\\rm \\quad if}\\;0\\le\\left|z\\right|{\\raggedright}p{0.85\\columnwidth}|}\n\\hline \n\\emph{No.} & \\emph{Parameters}\\tabularnewline\n\\hline \n1 & $\\boldsymbol{a}G_{0}^{-1}$= \\{1 2 2 -2 -2 2 2 -2 -2 -0.5\\}\\newline$\\boldsymbol{b}$\n= \\{0 -2 -2.5 4 4.5 -4 -4.5 6 6.5 2.5\\}\\newline$\\boldsymbol{T}G_{0}$=\n\\{0 1 1.25 1.5 1.75 2 2.25 2.5 2.75 3\\}; $G_{0}=0.53$\\tabularnewline\n\\hline \n2 & $\\boldsymbol{a}G_{0}^{-1}$= \\{1 2 2 -2 -2 2 2 -2 -2 -0.5\\}\\newline$\\boldsymbol{b}$\n= \\{0 -2 -3.5 4 3.5 -4 -4.5 6 6.5 2.5\\}\\newline$\\boldsymbol{T}G_{0}$=\n\\{0 1 1.25 1.5 1.75 2 2.25 2.5 2.75 3\\}; $G_{0}=0.5125$\\tabularnewline\n\\hline \n3 & $\\boldsymbol{a}G_{0}^{-1}$= \\{1.25 2 2 -2 -2 2 2 -2 -2 -0.5\\}\\newline$\\boldsymbol{b}$\n= \\{0 -1.6 -3.1 3.6 3.1 -3.6 -4.1 5.6 6.1 2.4\\}\\newline$\\boldsymbol{T}G_{0}$=\n\\{0 0.8 1.05 1.3 1.55 1.8 2.05 2.3 2.55 2.8\\}; $G_{0}=0.415$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[tbh]\n\\includegraphics[scale=0.61]{nonlinearity2}\n\n\\caption{\\label{fig:Nonlinearities}Three examples of nonlinear functions $f(z)$:\nnonlinearity 1 (top), nonlinearity 2 (middle), and nonlinearity 3\n(bottom).}\n\\end{figure}\n\n\n\\section{Simulation results}\n\nThe performance of the proposed modulation scheme with the GAMP-based\ndecoder was studied by means of Monte-Carlo simulation. Our initial\nsimulation-based study revealed that the conventional GAMP algorithm\n(with $\\alpha=1$, and $\\beta=1$) exhibits several deficiencies,\nin particular, relatively high error-floor, slow convergence and occasional\ninstability problems (divergence with the increase of number of iterations).\nIt turns out that the error-floor issue was, at least, in part due\nto the decoder algorithm. To alleviate these deficiencies we implemented\nseveral enhancement techniques. Firstly, we appended the cyclic-redundancy\ncheck (CRC) block to each data frame and used it for the early stopping\ncriterion. Secondly, after each decoding iteration we calculated the\nEuclidean distance $E(t)$ between the received vector $\\{y_{n}\\}$\nand the reconstructed waveform $f\\left(\\sum\\limits _{k=0}^{N-1}F_{n,k}\\hat{x}_{k}\\left(t\\right)\\right)$,\nand if a CRC error was detected after $t_{max}$ iterations the final\ndecision was based on the vector $\\left\\{ \\hat{x}_{k}\\left(t\\right)\\right\\} $\nthat corresponded to the minimum Euclidean distance $E(t)$. Although\nthis procedure did not improve FER, it minimized the BER for incorrectly\ndecoded frames. Thirdly, we experimentally discovered that the decoder\nconvergence speed can be significantly improved by using noise scaling\nfactor $\\alpha<1$ simultaneously with damping ($\\beta<1$). 
However,\nsuch a technique could result in occasional algorithm divergence,\ntherefore to achieve better performance we implemented the following\nprocedure: $t_{max}\/2$ iterations were performed using the modified\nGAMP algorithm ($\\alpha<1$, $\\beta<1$), and if the decoder could\nnot converge within the first $t_{max}\/2$ iterations, all internal\nvariables were reset and the subsequent $t_{max}\/2$ iterations were\nperformed with the conventional GAMP settings ($\\alpha=1$, $\\beta=1$).\nThe overall decoding algorithm flowchart is depicted in Figure \\ref{fig:Decoding-algorithm-flowchart}.\n\n\\begin{figure}[tbh]\n\\includegraphics[scale=0.53]{algorithm}\\caption{\\label{fig:Decoding-algorithm-flowchart}Decoding algorithm flowchart}\n\\end{figure}\n\nOur simulation results indicate that the performance of the proposed\nscheme with WHT, real DFT or complex DFT with Cartesian-type nonlinearity\nis almost identical. Therefore, from complexity point of view it seems\npreferable to use WHT, and, hence, most of our simulation results\nreported here were obtained for WHT-based scheme.\n\nFigure \\ref{fig:BER-vs-EbNo-capacity} illustrates the BER vs $E_{b}\/N_{0}$\ncurves for 2-PAM input modulation with three nonlinearities (Table\n\\ref{tab:Nonlinearities}\/Figure \\ref{fig:Nonlinearities}) and different\nvalues of $N$. In all simulations, the variance of $\\{z_{n}\\}$ was\nnormalized to 1, the maximum number of iterations ($t_{max}$) was\nset to 100, and integrals (\\ref{eq:Integral_start})\\textendash (\\ref{eq:Integral_end})\nwere approximated using numerical summation. At the initialization\nstep the parameters $\\alpha$, $\\beta$ were set to $\\alpha=0.71$,\n$\\beta=0.875$. \n\nRemarkably, for the selected nonlinearity parameters $G_{0}$, $\\boldsymbol{a}$,\n$\\boldsymbol{b}$, $\\mathbf{T}$ the proposed modulation scheme exhibits\nbehavior similar to random-like codes with the presence of a waterfall\nregion and error-floor, and, as expected, performance improves for\nlarger frame sizes. Moreover, the proposed modulation scheme with\nnonlinearity 3 and $N$=16384 achieves target BER=$10^{-5}$ at $E_{b}\/N_{0}=3.3$\ndB, which represents 6.3 dB gain over uncoded 2-PAM or 4-QAM modulation\nand is only about 1.5 dB away from the \\emph{unconstrained} AWGN channel\ncapacity. These results are better or within $0.1\\div0.3$ dB of the\nperformance of the capacity-approaching, bandwidth-efficient modulation\nschemes with $\\eta=2$ bit\/s\/Hz reported in {[}1-3, 9, 10{]}. BER\ncurves for some of these advanced coded modulation schemes with comparable\nframe sizes are reproduced in Figure \\ref{fig:BER-vs-EbNo} for comparison. \n\n\\begin{figure}[tbh]\n\\includegraphics[scale=0.57]{n1K_16K_capacity}\\caption{\\label{fig:BER-vs-EbNo-capacity}BER vs $E_{b}\/N_{0}$ for the proposed\nmodulation scheme with different nonlinearities and frame sizes in\nAWGN channel ($\\eta=2$ bit\/s\/Hz)}\n\\end{figure}\n\nAlthough we set $t_{max}=100$, the average number of decoder iterations\nwas significantly lower. 
For example, at $BER=10^{-5}$ the average\nnumber of decoder iterations was about 26 for the frame size $N$=16384,\nand about 17 for the frame size $N$=4096, and only 7 iterations for\nthe frame size $N$=1024.\n\nA considerable performance improvement over uncoded modulation was\nalso observed for 4-PAM or 16-QAM input modulation ($\\eta=4$ bit\/s\/Hz).\nHowever, in case of 4-PAM\/16-QAM, the GAMP algorithm is sub-optimal\nin terms of BER performance, and the example nonlinearities that we\nuse for 4-QAM\/2-PAM (Table \\ref{tab:Nonlinearities}) are not optimal\neither. Therefore, application of the proposed technique to high-order\nmodulation formats ($\\eta\\geq4$ bit\/s\/Hz) is an open research topic. \n\n\\begin{figure}[tbh]\n\\includegraphics[scale=0.46]{7a_n16384_compare_w_papers_light}\n\n\\caption{\\label{fig:BER-vs-EbNo}Performance comparison of the proposed technique\n(WHT, Nonlinearity 3, $N$=16384) with state-of-the-art coded modulation\nschemes ($2$ bit\/s\/Hz):\\protect \\\\\n1) ARJ4A LDPC, $N$=16384, 16-APSK {[}9{]}, \\protect \\\\\n2) ARJ4A LDPC, $N$=16384, 8-PSK {[}9{]}, \\protect \\\\\n3) Regular LDPC, BICM, $d_{v}$=3, $n$=20000, 4-PAM {[}1{]}\\protect \\\\\n4) Regular LDPC, BICM, $d_{v}$=3, $n$=10000, 4-PAM {[}1{]}\\protect \\\\\n5) Irregular LDPC, BICM, $d_{v}$=15, $n$=20000, 4-PAM {[}1{]}\\protect \\\\\n6) Irregular LDPC, BICM, $d_{v}$=15, $n$=10000, 4-PAM {[}1{]}\\protect \\\\\n7) eIRA LDPC, BICM, $n$=10000, 4-PAM {[}10{]}\\protect \\\\\n8) eIRA LDPC, BICM, $n$=9000, 8-PSK {[}10{]}\\protect \\\\\n9) Turbo TCM, $N$=10000, 8-PSK {[}3{]}}\n\\end{figure}\n\n\n\\section{Discussion}\n\nWe believe that the proposed joint coding-modulation technique might\nbe useful in some wired and wireless communication applications, since\nit has several interesting properties:\n\\begin{itemize}\n\\item Good BER performance: As illustrated in the previous section the BER\nperformance is on par with that of the best known coded modulation\nschemes.\n\\item Relatively low decoder complexity: The GAMP-based decoder complexity\nper iteration is dominated by two orthogonal transform operations,\nwhich, in case of fast WHT, require only $N\\log_{2}(N)$ real additions\nor subtractions. The input\/output nonlinear step is generally a scalar\noperation with complexity of order $O(N)$. Although, in our simulation\nmodel, we used numerical integration to approximate integrals (\\ref{eq:Integral_start})\\textendash (\\ref{eq:Integral_end}),\nthese can be expressed in closed-form using tabulated Gauss error\nfunction, and, consequently, can be simplified or implemented via\nlookup tables. It is also possible to use simpler max-sum version\nof the GAMP algorithm to trade-off performance and complexity {[}7{]}. \n\\item The choice of nonlinearity $f(z)$ may be tailored to the requirements\nof the transmission system in order to improve overall system efficiency.\nFor example, $f(z)$ may be selected to improve power conversion efficiency,\nor illumination-to-communication conversion efficiency in visible\nlight communication applications {[}11{]}.\n\\item An interesting application of the proposed modulation scheme is to\nuse it in combination with a conventional, linear OFDM transmitter,\nas illustrated in Figure \\ref{fig:precoder_for_ofdm}. Such a system\narrangement may result in the OFDM signal with reduced PAPR. 
It should be noted that the proposed method has several limitations
and open issues. Firstly, in this paper we focused primarily on the
case $\eta=2$ bit/s/Hz. One possible way to apply this technique
to systems with higher spectral efficiency is to puncture the output
waveform. This approach (i.e., random puncturing) seems to work well
up to $\eta=2.5\div3$ bit/s/Hz. It is also possible to use higher-order
input modulations (16-QAM/4-PAM); however, as we mentioned earlier,
the extension of this technique to spectral efficiencies $\eta\geq4$ bit/s/Hz
is still an open research topic. Secondly, the GAMP algorithm is based
on Gaussian and quadratic approximations and is therefore sub-optimal,
especially for small frame sizes. We were able to improve the performance
of the GAMP-based decoder using several \emph{ad hoc} techniques;
however, it is still unclear how close the GAMP decoder performance
is to the theoretical ML performance. Finally, our choice of nonlinearity
$f(z)$ is largely based on a heuristic and experimental approach. The
problem here is that the performance of the proposed scheme is limited
not only by the distance distribution of the encoded waveforms but also
by non-idealities of the decoding algorithm, and this fact greatly
complicates the optimization strategy. 

\begin{figure}[tbh]
\includegraphics[scale=0.8]{ofdm_precoder}

\caption{\label{fig:precoder_for_ofdm}Proposed modulation scheme as a pre-coder
for a conventional OFDM transmitter}
\end{figure}
\begin{figure}[tbh]
\includegraphics[scale=0.5]{ccdf_papr}

\caption{\label{fig:CCDF}CCDF of the PAPR for the proposed pre-coded OFDM (Figure
\ref{fig:precoder_for_ofdm}) with $N$=1024 and nonlinearity \#1:\protect \\
1) Conventional OFDM (Nyquist sampling), \protect \\
2) Conventional OFDM (four-times oversampling), \protect \\
3) OFDM with the proposed modulation (Nyquist sampling), \protect \\
4) OFDM with the proposed modulation (four-times oversampling)}
\end{figure}

\balance
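As a concrete outline of the PAPR CCDF measurement reported in Figure
\ref{fig:CCDF}, the following Monte-Carlo sketch estimates
$\Pr(\mathrm{PAPR}>\gamma)$, with oversampling realized by zero-padding
the IFFT. The modulator is abstracted as a callable, and the subcarrier
mapping, trial count, and threshold grid are illustrative assumptions,
not our simulation parameters.

\begin{verbatim}
import numpy as np

def papr_db(x):
    # PAPR of one OFDM symbol, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def papr_ccdf(make_symbol, n_sub=1024, oversample=4, trials=10000):
    # Estimate Pr(PAPR > gamma). `make_symbol(n)` returns n frequency-
    # domain samples (the proposed modulator's output, or plain QAM for
    # conventional OFDM). Zero-padding the IFFT approximates the
    # continuous-time (oversampled) waveform.
    paprs = np.empty(trials)
    for t in range(trials):
        s = make_symbol(n_sub)
        padded = np.zeros(n_sub * oversample, dtype=complex)
        padded[:n_sub // 2] = s[:n_sub // 2]     # positive frequencies
        padded[-(n_sub // 2):] = s[n_sub // 2:]  # negative frequencies
        paprs[t] = papr_db(np.fft.ifft(padded))
    gammas = np.linspace(4.0, 13.0, 91)
    return gammas, np.array([(paprs > g).mean() for g in gammas])

# Conventional-OFDM baseline: i.i.d. 4-QAM subcarriers (illustrative).
qam4 = lambda n: (np.random.choice([-1.0, 1.0], n)
                  + 1j * np.random.choice([-1.0, 1.0], n)) / np.sqrt(2)
\end{verbatim}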
\section{Conclusions}

In this paper, we proposed a novel joint coding-modulation technique
based on the serial concatenation of an orthogonal linear transformation
(e.g., the WHT or the DFT) with a memoryless nonlinearity. We demonstrated
that such a simple signal construction may exhibit the properties of a
random code ensemble and, as a result, approach channel capacity. Our
computer simulations confirmed that, with a decoder based on the
generalized approximate message passing algorithm, the proposed modulation
technique exhibits performance on par with that of state-of-the-art coded
modulation schemes that use capacity-approaching component codes. The
proposed technique could be extended to modulation formats with higher
spectral efficiency ($\eta\geq4$ bit/s/Hz) and to other types of orthogonal
transformations, offering one possible direction for our future research.