{"text":"\\section{Introduction}\n\n\n\nOne of the central concepts of linear algebra is the notion of a basis for a vector space: a subset of a vector space is called a basis if every vector can be uniquely written as a finite linear combination of basis elements. Part of the importance of bases stems from the convenient consequences that follow from their existence. For example, linear transformations between vector spaces admit matrix representations relative to pairs of bases \\cite{lang2004algebra}, which can be used for efficient numerical calculations. The idea of a basis, however, is not restricted to the theory of vector spaces: other algebraic theories have analogous notions of bases -- sometimes by waiving the uniqueness constraint -- for instance modules, semi-lattices, Boolean algebras, convex sets, and many more. In fact, the theory of bases for vector spaces differs from the others only in the sense that every vector space admits a basis, which is not the case for e.g. modules. In this paper we seek to give a compact definition of a basis that subsumes the well-known cases and, as a consequence, allows us to lift results from one theory to the others. For example, one may wonder if there exists a matrix representation theory for convex sets analogous to the one for vector spaces.\n\nIn the category theoretic approach to universal algebra, algebraic structures are typically captured as algebras over a monad \\cite{eilenberg1965adjoint, linton1966some}. 
Intuitively, a monad may be seen as a generalisation of closure operators on partially ordered sets, and an algebra over a monad may be viewed as a set with an operation that allows the interpretation of formal linear combinations in a way that is coherent with the monad structure. For instance, a vector space over a field $k$, that is, an algebra for the free $k$-vector space monad, is given by a set $X$ with a function $h$ that coherently interprets a finitely supported $X$-indexed $k$-sequence $\\lambda$ as an actual linear combination $h(\\lambda) = \\sum_x \\lambda_x \\cdot x$ in $X$ \\cite{coumans2010scalars}. \nIt is straightforward to see that under this perspective a basis for a vector space consists of a subset $Y$ of $X$ and a function $d$ that assigns to a vector $x$ in $X$ a $Y$-indexed $k$-sequence $d(x)$ such that $h(d(x)) = x$ for all $x$ in $X$ and $d(h(\\lambda)) = \\lambda$ for all $Y$-indexed $k$-sequences $\\lambda$. In other words, the restriction of $h$ to $Y$-indexed $k$-sequences is an isomorphism with inverse $d$: surjectivity corresponds to the fact that the subset $Y$ generates the vector space, while injectivity captures that $Y$ does so uniquely. As demonstrated in \\Cref{basisdefinition}, the concept generalises easily to arbitrary monads on arbitrary categories by making the subset relation explicit in the form of a function.\n\n\nMonads, however, not only occur in the context of universal algebra, but also play a role in algebraic topology \\cite{godementtheorie} and theoretical computer science \\cite{moggi1988computational, moggi1990abstract, moggi1991notions}. Among others, they are a convenient tool for capturing side-effects of coalgebraic systems \\cite{rutten1998automata}: popular examples include the powerset monad (non-determinism), the distribution monad (probability), and the neighbourhood monad (alternation). 
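To make the round trip between $h$ and $d$ concrete, here is a minimal sketch (ours, not part of the formal development; all names hypothetical) for $X = \\mathbb{Q}^2$ with the standard basis, where a finitely supported $Y$-indexed $k$-sequence is modelled as a dictionary:

```python
from fractions import Fraction as Q

# Hypothetical illustration: X = Q^2, basis names Y = {"e1", "e2"}.
i = {"e1": (Q(1), Q(0)), "e2": (Q(0), Q(1))}

def h(lam):
    """Interpret a formal combination sum_y lam_y . i(y) as a vector in Q^2."""
    x = y = Q(0)
    for name, c in lam.items():
        vx, vy = i[name]
        x, y = x + c * vx, y + c * vy
    return (x, y)

def d(v):
    """Assign to a vector its unique coordinates over the basis."""
    return {"e1": v[0], "e2": v[1]}

v = (Q(3), Q(-5))
lam = {"e1": Q(2), "e2": Q(7)}
assert h(d(v)) == v        # the basis generates every vector ...
assert d(h(lam)) == lam    # ... and does so uniquely
```

The two assertions are exactly the equations $h(d(x)) = x$ and $d(h(\\lambda)) = \\lambda$ discussed above.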
While coalgebraic systems with side-effects can be more compact than their deterministic counterparts, they often lack a unique minimal acceptor for a given language. For instance, every regular language admits a unique deterministic automaton with minimal state space, which can be computed via the Myhill-Nerode construction. On the other hand, for some regular languages there exist multiple non-isomorphic non-deterministic automata with minimal state space. The problem has been independently approached for different variants of side-effects, often with the common idea of restricting to a particular subclass \\cite{esposito2002learning, berndt2017learning}. For example, for non-deterministic automata, the subclass of so-called residual finite state automata has been identified as suitable.\n Moreover, it has turned out that in order to construct a unique minimal realisation in one of the subclasses it is often sufficient to derive an equivalent system with free state space from a particular given system \\cite{van2017learning}. As Arbib and Manes realised \\cite{arbib1975fuzzy}, instrumental to the former is what they call a \\textit{scoop}, or what we call a generator in \\Cref{generatordefinition}, a slight generalisation of bases. In other words, our definition of a basis for an algebra over a monad has its origin in a context that is not purely concerned with universal algebra. Throughout the paper we will value these roots by lifting results of Arbib and Manes from scoops to bases. More importantly, we believe that our treatment allows us to uncover hidden ramifications between certain areas of universal algebra and the theory of coalgebras.\n\nThe paper is structured as follows. In \\Cref{preliminariessec} we recall the basic categorical notions of a monad, algebras over a monad, coalgebras, distributive laws, and bialgebras. In \\Cref{generatorssec} we introduce generators for algebras over monads and exemplify their relation with free bialgebras. 
The definition of bases for algebras over monads, basic results, and their relation with free bialgebras is covered in \\Cref{basesection}. Questions about the existence and the uniqueness of bases are answered in \\Cref{existencebasesec} and \\Cref{uniquebasesec}, respectively. In \\Cref{representationtheorysec} we generalise the representation theory of linear maps between vector spaces to a representation theory of homomorphisms between algebras over a monad.\nThe intuition that bases for an algebra over a monad coincide with free isomorphic algebras is clarified in \\Cref{basesfreealgebrasec}. In \\Cref{basesforbialgebrasec} we look into bases for bialgebras, which are algebras for a particular monad. In \\Cref{examplesec} we instantiate the theory for a variety of monads. Related work and future work are discussed in \\Cref{relatedworksec} and \\Cref{discussionsec}, respectively. Further details can be found in \\Cref{basesascoaglebrassec}.\n\n\n\n\n\\section{Preliminaries}\n\n\\label{preliminariessec}\n\nWe only assume a basic knowledge of category theory, e.g. an understanding of categories, functors, natural transformations, and adjunctions. All other relevant definitions can be found in the paper. In this section, we recall the notions of a monad, algebras for a monad, coalgebras, distributive laws, and bialgebras.\n\nThe concept of a monad can be traced back both to algebraic topology \\cite{godementtheorie} and to an alternative to Lawvere theory as a category theoretic formulation of universal algebra \\cite{eilenberg1965adjoint, linton1966some}. For an extended historical overview we refer to the survey of Hyland and Power \\cite{hyland2007category}. In the context of computer science, monads have been introduced by Moggi as a general perspective on exceptions, side-effects, non-determinism, and continuations \\cite{moggi1988computational, moggi1990abstract,\t\tmoggi1991notions}. 
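Read as equations, the diagrams in the definition that follows state $\\mu \\circ T\\mu = \\mu \\circ \\mu_T$ and $\\mu \\circ \\eta_T = \\textnormal{id} = \\mu \\circ T\\eta$. For the list monad these laws can be spot-checked on sample data; the following Python sketch (names ours) is merely illustrative:

```python
# The list monad on Set, sketched concretely (all names are ours).

def unit(x):           # eta_X : X -> TX, the singleton list
    return [x]

def mult(xss):         # mu_X : T^2 X -> TX, flattening one layer
    return [x for xs in xss for x in xs]

def fmap(f, xs):       # the action of T on morphisms
    return [f(x) for x in xs]

xs = [1, 2, 3]
xsss = [[[1], [2, 3]], [[], [4]]]

# associativity square: mu . T(mu) = mu . mu_T
assert mult(fmap(mult, xsss)) == mult(mult(xsss))
# unit triangles: mu . eta_T = id = mu . T(eta)
assert mult(unit(xs)) == xs == mult(fmap(unit, xs))
```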
\n\n\\begin{definition}[Monad]\n\tA \\textit{monad} on $\\mathscr{C}$ is a tuple $\\mathbb{T} = (T, \\mu, \\eta)$ consisting of an endofunctor $T: \\mathscr{C} \\rightarrow \\mathscr{C}$ and natural transformations $\\mu: T^2 \\Rightarrow T$ and $\\eta: \\textnormal{id}_{\\mathscr{C}} \\Rightarrow T$ satisfying the commutative diagrams\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tT^3X \\arrow{r}{T\\mu_X} \\arrow{d}[left]{\\mu_{TX}} & T^2X \\arrow{d}{\\mu_X}\\\\\n\t\t\tT^2X \\arrow{r}{\\mu_X} & TX \n\t\t\\end{tikzcd}\t\n\t\t\\qquad\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{\\eta_{TX}} \\arrow{d}[left]{T\\eta_X} \\arrow{dr}{\\textnormal{id}_{TX}} & T^2X \\arrow{d}{\\mu_X}\\\\\n\t\t\tT^2X \\arrow{r}{\\mu_X} & TX \n\t\t\\end{tikzcd}\t\n\t\\end{equation*}\t\n\t\t\tfor all objects $X$ in $\\mathscr{C}$.\t\t\n\\end{definition}\n\nMany examples of monads arise as the result of a free-forgetful adjunction, for instance the free group monad or the free vector space monad. Below we provide some details for the latter case. More monads are covered in \\Cref{monoidmonad} and \\Cref{examplesec}.\n\n\\begin{example}[Vector spaces]\n\\label{freevectorspacemonadexample}\n\tThe free $k$-vector space monad is an instance of the so-called multiset monad over a semiring $S$, obtained when $S$ is the field $k$. The underlying set endofunctor $T$ assigns to a set $X$ the set of finitely-supported $X$-indexed sequences $\\varphi$ in $S$, typically written as formal sums $\\sum_i s_i \\cdot x_i$ for $s_i = \\varphi(x_i)$; the unit $\\eta_X$ maps an element $x$ in $X$ to the singleton multiset $1 \\cdot x$; and the multiplication $\\mu_X$ satisfies $\\mu_X(\\sum_i s_i \\cdot \\varphi_i)(x) = \\sum_i s_i \\cdot \\varphi_i(x)$ \\cite{coumans2010scalars}. 
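A minimal computational sketch of this monad, over the semiring of integers, with finitely-supported maps represented as dictionaries that omit zero entries and a formal sum $\\sum_i s_i \\cdot \\varphi_i$ represented as a list of pairs (all names are ours):

```python
# Sketch (names ours) of the multiset monad over the integer semiring.

def unit(x):
    # eta_X: x  |->  the singleton multiset 1.x
    return {x: 1}

def mult(formal_sum):
    # mu_X applied to sum_i s_i . phi_i, given as [(s_1, phi_1), ...]:
    # mu_X(sum_i s_i . phi_i)(x) = sum_i s_i . phi_i(x)
    out = {}
    for s, phi in formal_sum:
        for x, t in phi.items():
            out[x] = out.get(x, 0) + s * t
    return {x: v for x, v in out.items() if v != 0}

# 2.(1.a + 3.b) + 1.(1.b)  =  2.a + 7.b
assert mult([(2, {"a": 1, "b": 3}), (1, {"b": 1})]) == {"a": 2, "b": 7}
```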
\n\\end{example}\n\nIf a monad results from a free-forgetful adjunction induced by some algebraic structure, the latter may be recovered in the following sense:\n\n\\begin{definition}[$\\mathbb{T}$-algebra]\n\tAn \\textit{algebra} for a monad $\\mathbb{T} = (T, \\mu, \\eta)$ on $\\mathscr{C}$ is a tuple $(X, h)$ consisting of an object $X$ and a morphism $h: TX \\rightarrow X$ such that the diagrams on the left and in the middle below commute\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tT^2X \\arrow{r}{\\mu_X} \\arrow{d}[left]{Th} & TX \\arrow{d}{h}\\\\\n\t\t\tTX \\arrow{r}{h} & X \n\t\t\\end{tikzcd}\t\n\t\t\\qquad \n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{r}[above]{\\textnormal{id}_X} \\arrow{d}[left]{\\eta_X} & X \\\\\n\t\t\t TX \\arrow{ur}[right]{h} & \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{Tf} \\arrow{d}[left]{h_X} & TY \\arrow{d}{h_Y}\\\\\n\t\t\tX \\arrow{r}{f} & Y\n\t\t\\end{tikzcd}\t.\n\t\\end{equation*}\nA \\textit{homomorphism} $f: (X, h_X) \\rightarrow (Y,h_Y)$ between $\\mathbb{T}$-algebras is a morphism $f: X \\rightarrow Y$ such that the diagram on the right above commutes. The category of $\\mathbb{T}$-algebras and homomorphisms is denoted by $\\mathscr{C}^{\\mathbb{T}}$.\n\\end{definition}\n\nThe canonical example for an algebra over a monad is the free $\\mathbb{T}$-algebra $(TX, \\mu_X)$ for any object $X$ in $\\mathscr{C}$. \nBelow we give some more details on how to recognise algebras over the free vector space monad as vector spaces.\n\n\\begin{example}[Vector spaces]\n\\label{vectorspacemonadalgebras}\n\tLet $\\mathbb{T}$ be the free vector space monad defined in \\Cref{freevectorspacemonadexample}. 
Every $\\mathbb{T}$-algebra $(X,h)$ induces a vector space structure on its underlying set by interpreting a finite formal linear combination $\\sum_i \\lambda_i \\cdot x_i$ as an element $h(\\varphi) \\in X$ for $\\varphi(x) := \\lambda_i$, if $x = x_i$, and $\\varphi(x) := 0$ otherwise. Conversely, every vector space with underlying set $X$ induces an algebra $(X,h)$ over $\\mathbb{T}$ by defining $h(\\varphi) := \\sum_{x} \\lambda_x \\cdot x$ with $\\lambda_x := \\varphi(x)$ for a finitely-supported sequence $\\varphi$.\n\t\\end{example}\n \n\n\n\nWe now turn our attention to the dual of algebras: coalgebras \\cite{rutten1998automata}. While algebras have been used in the context of computer science to model finite data types, coalgebras deal with infinite data types and have turned out to be suited as an abstraction for a variety of state-based systems \\cite{rutten2000universal}. \n\n\\begin{definition}[$F$-coalgebra]\n\tA \\textit{coalgebra} for an endofunctor $F: \\mathscr{C} \\rightarrow \\mathscr{C}$ is a tuple $(X, k)$ consisting of an object $X$ and a morphism $k: X \\rightarrow FX$. A \\textit{homomorphism} $f: (X, k_X) \\rightarrow (Y,k_Y)$ between $F$-coalgebras is a morphism $f: X \\rightarrow Y$ satisfying the commutative diagram\n\t\\begin{equation*}\n\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{r}{f} \\arrow{d}[left]{k_X} & Y \\arrow{d}{k_Y}\\\\\n\t\t\tFX \\arrow{r}{Ff} & FY \n\t\t\\end{tikzcd}\t.\t\n\\end{equation*}\nThe category of coalgebras and homomorphisms is denoted by $\\textsf{Coalg}(F)$. \nA $F$-coalgebra $(\\Theta, k_{\\Theta})$ is \\textit{final}, if it is final in $\\textnormal{\\textsf{Coalg}}(F)$, that is, for every $F$-coalgebra $(X,k_X)$ there exists a unique homomorphism \\[ !_{(X,k_X)}: (X,k_X) \\longrightarrow (\\Theta, k_{\\Theta}). 
\\]\n\\end{definition}\n\nFor example, coalgebras for the set endofunctor $FX = X^A \\times B$ are unpointed Moore automata with input $A$ and output $B$, and the final $F$-coalgebra homomorphism assigns to a state $x$ of an unpointed Moore automaton the language in $A^* \\rightarrow B$ accepted by the automaton with initial state $x$.\n\nWe are particularly interested in systems with side-effects, for instance non-deterministic automata. Often such systems can be realised as coalgebras for an endofunctor composed of a monad and an endofunctor similar to $F$ above. In these cases the compatibility between the dynamics of the system and its side-effects can be captured by a so-called distributive law. Distributive laws originally arose as a way to compose monads \\cite{beck1969distributive}, but now also exist in a wide range of other forms \\cite{Street2009}. For our particular case it is sufficient to consider distributive laws between a monad and an endofunctor.\n\n\\begin{definition}[Distributive law]\n\tLet $\\mathbb{T} = (T, \\mu, \\eta)$ be a monad on $\\mathscr{C}$ and $F: \\mathscr{C} \\rightarrow \\mathscr{C}$ an endofunctor. A natural transformation $\\lambda: TF \\Rightarrow FT$ is called a \\textit{distributive law} if it satisfies \n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tFX \\arrow{r}[above]{F\\eta_X} \\arrow{d}[left]{\\eta_{FX}} & FTX \\\\\n\t\t\t TFX \\arrow{ur}[right]{\\lambda_X} & \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tT^2FX \\arrow{r}{T\\lambda_X} \\arrow{d}[left]{\\mu_{FX}} & TFTX \\arrow{r}{\\lambda_{TX}} & FT^2X \\arrow{d}{F\\mu_X} \\\\\n\t\t\tTFX \\arrow{rr}{\\lambda_X} && FTX \n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\t\\end{definition}\n\nGiven a distributive law, it is straightforward to model the determinisation of a system with side-effects. 
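For the powerset monad and the deterministic-automata endofunctor, this determinisation is the classical subset construction; a small sketch in Python (automaton and all names ours) before the formal account:

```python
# NFA with states {0, 1} and alphabet {"a", "b"} (a hypothetical example):
# delta : X -> (P X)^A as a dict, accept : the accepting subset of X.
delta = {
    (0, "a"): {0, 1}, (0, "b"): {0},
    (1, "a"): set(),  (1, "b"): {1},
}
accept = {1}

def step(states, letter):
    # lifted transition P(X) -> P(X): the union of individual successor sets
    return frozenset(q for s in states for q in delta[(s, letter)])

def accepts(initial, word):
    # run the determinised automaton on the state set `initial`
    states = frozenset(initial)
    for letter in word:
        states = step(states, letter)
    # the 2-valued output is disjunctive: accept iff some state accepts
    return bool(states & accept)

assert accepts({0}, "ab")    # {0} -a-> {0,1} -b-> {0,1}, and 1 accepts
assert not accepts({0}, "")  # {0} contains no accepting state
```

Formally, this is an instance of the lifting described next.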
Indeed, for any morphism $f: Y \\rightarrow X$ and $\\mathbb{T}$-algebra $(X,h)$, the natural isomorphism underlying the free-algebra adjunction yields a $\\mathbb{T}$-algebra homomorphism \n\\begin{equation}\n\\label{algebrainducedliftinggeneral}\n\tf^{\\sharp} := h \\circ Tf: (TY, \\mu_Y) \\longrightarrow (X, h).\n\\end{equation}\nThus, in particular, any $FT$-coalgebra $k: X \\rightarrow FTX$ lifts to a $F$-coalgebra\n\\begin{equation}\n\\label{ftlifting}\n\tk^{\\sharp} := (F \\mu_X \\circ \\lambda_{TX}) \\circ Tk: (TX, \\mu_X) \\longrightarrow (FTX, F \\mu_X \\circ \\lambda_{TX}).\n\\end{equation}\n\nFor instance, if $\\mathbb{P}$ is the powerset monad and $F$ is the set endofunctor for deterministic automata satisfying $FX = X^A \\times 2$, the disjunctive $\\mathbb{P}$-algebra structure on the set $2$ induces a canonical distributive law \\cite{jacobs2012trace}, such that the lifting \\eqref{ftlifting} is given by the classical determinisation procedure for non-deterministic automata \\cite{rutten2013generalizing}. \n\nOne can show that the state spaces of $F$-coalgebras obtained by the lifting \\eqref{ftlifting} can canonically be equipped with a $\\mathbb{T}$-algebra structure that is compatible with the $F$-coalgebra structure: they are $\\lambda$-bialgebras.\n\n\\begin{definition}[$\\lambda$-bialgebra]\n\\label{defbialgebras}\n\tLet $\\lambda$ be a distributive law between a monad $\\mathbb{T}$ and an endofunctor $F$. 
A $\\lambda$\\textit{-bialgebra} is a tuple $(X, h, k)$ consisting of a $\\mathbb{T}$-algebra $(X, h)$ and a $F$-coalgebra $(X, k)$, satisfying\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tTX \\arrow{r}{Tk} \\arrow{dd}[left]{h} & TFX \\arrow{d}{\\lambda_X} \\\\\n\t\t& FTX \\arrow{d}{Fh} \\\\\n\t\tX \\arrow{r}{k} & FX\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\tA \\textit{homomorphism} $f: (X, h_X, k_X) \\rightarrow (Y, h_Y, k_Y)$ between $\\lambda$-bialgebras is a morphism $f: X \\rightarrow Y$ that is simultaneously a $\\mathbb{T}$-algebra homomorphism and a $F$-coalgebra homomorphism. The category of $\\lambda$-bialgebras and homomorphisms is denoted by $\\textsf{Bialg}(\\lambda)$.\n\t\\end{definition}\n\t\nIt is well-known that a distributive law $\\lambda$ between a monad $\\mathbb{T}$ and an endofunctor $F$ induces simultaneously\n\\begin{itemize}\n\t\\item a monad $\\mathbb{T}_{\\lambda} = (T_{\\lambda}, \\mu, \\eta)$ on $\\textsf{Coalg}(F)$ by $T_{\\lambda}(X,k) = (TX, \\lambda_X \\circ Tk)$ and $T_{\\lambda}f = Tf$; and\n\t\\item an endofunctor $F_{\\lambda}$ on $\\mathscr{C}^{\\mathbb{T}}$ by $F_{\\lambda}(X,h) = (FX, Fh \\circ \\lambda_X)$ and $F_{\\lambda}f = Ff$,\n\\end{itemize}\nsuch that the algebras over $\\mathbb{T}_{\\lambda}$, the coalgebras of $F_{\\lambda}$, and $\\lambda$-bialgebras coincide \\cite{turi1997towards}. 
In light of the latter we will not distinguish between the different categories, and instead use the notation of $\\lambda$-bialgebras for all three cases.\n\nOne can further show that, if it exists, the final $F$-coalgebra $(\\Theta, k_{\\Theta})$ induces a final $F_{\\lambda}$-coalgebra $(\\Theta, h_{\\Theta}, k_{\\Theta})$ for $h_{\\Theta} :=\\ !_{(T\\Theta, \\lambda_{\\Theta} \\circ T k_{\\Theta})}$ the unique $F$-coalgebra homomorphism below:\n\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tT\\Theta \\arrow{d}[left]{\\lambda_{\\Theta} \\circ T k_{\\Theta}} \\arrow[dashed]{r}{h_{\\Theta}} & \\Theta \\arrow{d}{k_{\\Theta}} \\\\\n\t\t\t\tFT\\Theta \\arrow[dashed]{r}{Fh_{\\Theta}} & F\\Theta\n\t\t\t\\end{tikzcd}.\n\\end{equation*}\nFor instance, for the canonical distributive law between the powerset monad $\\mathbb{P}$ and the set endofunctor $F$ with $FX = X^A \\times 2$ as before, one verifies that the underlying state space $A^* \\rightarrow 2$ of the final $F$-coalgebra will in this way be enriched with a $\\mathbb{P}$-algebra structure that takes the union of languages \\cite{jacobs2012trace}.\n\n\n\\section{Generators for algebras}\n\n\\label{generatorssec}\n\nIn this section we define what it means to be a generator for an algebra over a monad. Our notion coincides with what is called a \\textit{scoop} by Arbib and Manes \\cite{arbib1975fuzzy}. One may argue that the morphism $i$ in the definition below should be mono, but we choose to continue without this requirement. 
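Before the formal definition, a concrete sketch (in Python; names ours) of the situation it captures: for the powerset monad $\\mathbb{P}$ and the free algebra $(\\mathcal{P}(Z), \\bigcup)$, the elements of $Z$ generate every subset via the singleton embedding $i(z) = \\lbrace z \\rbrace$.

```python
# Sketch for the powerset monad P (all names ours): the P-algebra
# (P(Z), union) with Z = {0, 1, 2}; every subset is a union of singletons.
Z = {0, 1, 2}

def i(z):                 # i : Y -> X embeds a generating element
    return frozenset({z})

def d(S):                 # d : X -> P(Y) picks the generators needed for S
    return frozenset(S)

def h(family):            # the algebra structure: union of a set of subsets
    return frozenset().union(*family) if family else frozenset()

def i_sharp(A):           # i#  =  h o P(i)
    return h({i(z) for z in A})

for S in [frozenset(), frozenset({0, 2}), frozenset(Z)]:
    assert i_sharp(d(S)) == S    # i# o d = id: the generators span X
```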
\n\n\n \n\n\n\\begin{definition}[Generator \\cite{arbib1975fuzzy}]\n\\label{generatordefinition}\n\tA \\textit{generator} for a $\\mathbb{T}$-algebra $(X, h)$ is a tuple $(Y, i, d)$ consisting of an object $Y$, a morphism $i: Y \\rightarrow X$, and a morphism $d: X \\rightarrow TY$, such that $i^{\\sharp} \\circ d = \\textnormal{id}_{X}$, that is, the diagram on the left below commutes\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{d}{h}\\\\\n\t\t\tX \\arrow{u}{d} \\arrow{r}{\\textnormal{id}_X} & X \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=2em, column sep = 3em]\n\t\t\t\t\t&TY_{\\alpha} \\arrow{dd}[right]{Tf} & \\\\\n\t\t\t\t\t X \\arrow{ur}{d_{\\alpha}} \\arrow{dr}[below]{d_{\\beta}} \\\\\n\t\t\t\t\t& TY_{\\beta} \n\t\t\t\t\\end{tikzcd}\n\t\t\t\t\t\t\t\t\\qquad\t\t\t\t\n\t\t\\begin{tikzcd}[row sep=2em, column sep = 3em]\n\t\t\t\t\tY_{\\alpha} \\arrow{dd}[left]{f} \\arrow{dr}{i_{\\alpha}} & \\\\\n\t\t\t\t\t& X \\\\\n\t\t\t\t\tY_{\\beta} \\arrow{ur}[below]{i_{\\beta}}\n\t\t\t\t\\end{tikzcd}.\n\t\t\\end{equation*}\n\t\tA morphism $f: (Y_{\\alpha},i_{\\alpha},d_{\\alpha}) \\rightarrow (Y_{\\beta},i_{\\beta},d_{\\beta})$ between generators for $(X,h)$ is a morphism $f: Y_{\\alpha} \\rightarrow Y_{\\beta}$ satisfying the two commutative diagrams on the right above.\n\\end{definition}\n\nWe give an example that slightly generalises the vector space situation mentioned in the introduction.\n\n\\begin{example}[Semimodules]\n\\label{semimoduleexample}\n\tLet $\\mathbb{T} = (T, \\mu, \\eta)$ be the multiset monad over some semiring $S$ defined in \\Cref{freevectorspacemonadexample}.\n\tFollowing the lines of \\Cref{vectorspacemonadalgebras}, one can show that $\\mathbb{T}$-algebras correspond to semimodules over $S$, such that a function $i: Y \\rightarrow X$ is part of a generator $(Y, i, d)$ for the former if and only if for all $x$ in $X$ there exists a finitely-supported $Y$-indexed 
$S$-sequence $d(x) = (s_y)_{y \\in Y}$, such that $x = \\sum_{y \\in Y} s_y \\cdot i(y)$.\n\\end{example}\n\nOne might like to adapt \\Cref{generatordefinition} by replacing the existence of a morphism $d$ with the requirement that $i^{\\sharp}$ be an epimorphism. Indeed, in every category a morphism with a right-inverse is an epimorphism. Conversely, however, not every epimorphism admits a right-inverse, and even if one exists, it need not be unique. For this reason we treat the morphism $d$ as explicit data.\n\nIt is well-known that every algebra over a monad admits a generator \\cite{arbib1975fuzzy}.\n\n\\begin{lemma}\n\\label{canonicalgenerator}\n$(X, \\textnormal{id}_X, \\eta_X)$ is a generator for any $\\mathbb{T}$-algebra $(X,h)$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the equality $h \\circ \\eta_X = \\textnormal{id}_X$.\n\\end{proof}\n\nThe following result is a slight generalisation of a statement by Arbib and Manes \\cite{arbib1975fuzzy}. We are particularly interested in using the construction of an equivalent free bialgebra for a unified view on the theory of residual finite state automata and variations of it \\cite{van2016master, van2020phd, myers2015coalgebraic, denis2002residual}; more details are given in \\Cref{RFSAexample}.\n\n\\begin{proposition}\n\\label{forgenerator-isharp-is-bialgebra-hom}\n\tLet $(X, h, k)$ be a $\\lambda$-bialgebra and let $(Y, i, d)$ be a generator for the $\\mathbb{T}$-algebra $(X,h)$. 
Then $i^{\\sharp} := h \\circ Ti : TY \\rightarrow X$ is a $\\lambda$-bialgebra homomorphism $i^{\\sharp}: (TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp}) \\rightarrow (X, h, k) $ for $(Fd \\circ k \\circ i)^{\\sharp} := F\\mu_Y \\circ \\lambda_{TY}\\circ T(Fd \\circ k \\circ i)$.\n\\end{proposition}\n\\begin{proof}\nThe commuting diagram below shows that for a $FT$-coalgebra $f: Y \\rightarrow FTY$ and $f^{\\sharp}: TY \\rightarrow FTY$ the lifting in \\eqref{ftlifting}, the tuple $(TY, \\mu_Y, f^{\\sharp})$ constitutes a $\\lambda$-bialgebra\n\\begin{equation}\n\\label{freebialgebra}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tT^2Y \\arrow{r}{T^2f} \\arrow{dd}[left]{\\mu_Y} & T^2 FTY \\arrow{dd}{\\mu_{FTY}} \\arrow{r}{T\\lambda_{TY}} & TFT^2Y \\arrow{d}{\\lambda_{T^2Y}} \\arrow{r}{TF\\mu_Y} & TFTY \\arrow{d}{\\lambda_{TY}} \\\\\n\t\t& & FT^3Y \\arrow{r}{FT{\\mu_Y}} \\arrow{d}[right]{F \\mu_{TY}} & FT^2Y \\arrow{d}[right]{F\\mu_Y} \\\\\n\t\tTY \\arrow{r}{Tf} & TFTY \\arrow{r}{\\lambda_{TY}} & FT^2Y \\arrow{r}{F\\mu_Y} & FTY\n\t\\end{tikzcd}.\n\\end{equation}\nIn particular, the tuple $(TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ thus is a $\\lambda$-bialgebra. The lifting in \\eqref{algebrainducedliftinggeneral} turns $i^{\\sharp}$ into a $\\mathbb{T}$-algebra homomorphism. It thus remains to show that $i^{\\sharp}$ is a $F$-coalgebra homomorphism. 
The latter follows from the commutativity of the following diagram\n\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tTY \\arrow{r}{Ti} \\arrow{d}[left]{Ti} & TX \\arrow{dd}{Tk} \\arrow{rr}{h} & & X \\arrow{ddddd}{k} \\\\\n\t\tTX \\arrow{d}[left]{Tk} \\\\\n\t\tTFX \\arrow{d}[left]{TFd} \\arrow{r}{\\textnormal{id}_{TFX}} & TFX \\arrow{ddr}{\\lambda_X} \\\\\n\t\tTFTY \\arrow{d}[left]{\\lambda_{TY}} \\arrow{r}{TFTi} & TFTX \\arrow{u}{TFh} \\\\\n\t\tFT^2Y \\arrow{r}{FT^2i} \\arrow{d}[left]{F\\mu_Y} & FT^2X \\arrow{r}{FTh} \\arrow{d}{F \\mu_X} & FTX \\arrow{d}{Fh} \\\\\n\t\tFTY \\arrow{r}{FTi} & FTX \\arrow{r}{Fh} & FX \\arrow{r}{\\textnormal{id}_{FX}} & FX\n\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\nWe show next that the above construction extends to morphisms $f: (Y_{\\alpha}, i_{\\alpha}, d_{\\alpha}) \\rightarrow (Y_{\\beta}, i_{\\beta}, d_{\\beta})$ between generators for the underlying algebra of a bialgebra. For readability we abbreviate $k_{\\gamma} := (Fd_{\\gamma} \\circ k \\circ i_{\\gamma})^{\\sharp}$ for $\\gamma \\in \\lbrace \\alpha , \\beta \\rbrace$.\n\n\\begin{lemma}\n\tThe morphism $Tf: TY_{\\alpha} \\rightarrow TY_{\\beta}$ is a $\\lambda$-bialgebra homomorphism $Tf: (TY_{\\alpha}, \\mu_{Y_{\\alpha}}, k_{\\alpha}) \\rightarrow (TY_{\\beta}, \\mu_{Y_{\\beta}}, k_{\\beta})$ satisfying $(i_{\\beta})^{\\sharp} \\circ Tf = (i_{\\alpha})^{\\sharp}$.\n\\end{lemma}\n\\begin{proof}\nThe identity $(i_{\\beta})^{\\sharp} \\circ Tf = (i_{\\alpha})^{\\sharp}$ follows from the equality $i_{\\beta} \\circ f = i_{\\alpha}$, as shown below\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tTY_{\\alpha} \\arrow{r}{Tf} \\arrow{d}[left]{Ti_{\\alpha}} & TY_{\\beta} \\arrow{d}{Ti_{\\beta}} \\arrow{r}{Ti_{\\beta}} & TX \\arrow{d}{h} \\\\\n\t\tTX \\arrow{r}{\\textnormal{id}_{TX}} & TX \\arrow{r}{h} & X\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\tOne easily verifies that the lifting \\eqref{ftlifting} extends to a 
functor from the category of $FT$-coalgebras to the category of $\\lambda$-bialgebras; see e.g. \\eqref{freebialgebra}. It thus remains to show that $f$ is a $FT$-coalgebra homomorphism $\n\tf: (Y_{\\alpha}, k_{\\alpha}) \\rightarrow (Y_{\\beta}, k_{\\beta})$. The latter follows from the commutativity of the following diagram\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY_{\\beta} \\arrow{r}{i_{\\beta}} & X \\arrow{r}{k} & FX \\arrow{r}{Fd_{\\beta}} & FTY_{\\beta} \\\\\n\t\t\tY_{\\alpha} \\arrow{u}{f} \\arrow{r}{i_{\\alpha}} & X \\arrow{u}{\\textnormal{id}_X} \\arrow{r}{k} & FX \\arrow{u}{\\textnormal{id}_{FX}} \\arrow{r}{Fd_{\\alpha}} & FTY_{\\alpha} \\arrow{u}[right]{FTf}\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nIn the following example we instantiate the previous results to recover the so-called canonical residual finite state automata \\cite{denis2002residual}.\n\n\\begin{example}[Canonical RFSA]\n\\label{RFSAexample}\t\nAs before, let $\\mathbb{P}$ be the powerset monad and $F$ the set endofunctor for deterministic automata over the alphabet $A$ satisfying $FX = X^A \\times 2$. One verifies that the disjunctive $\\mathbb{P}$-algebra structure on the set $2$ induces a canonical distributive law $\\lambda$ between $\\mathbb{P}$ and $F$, such that $\\lambda$-bialgebras are deterministic unpointed automata in the category of complete lattices and join-preserving functions \\cite{jacobs2012trace}; for more details see \\Cref{powersetmonadsec}.\nWe are particularly interested in the $\\lambda$-bialgebra that is typically called the minimal $\\mathbb{P}$-automaton $M_{\\mathbb{P}}(L)$ for a regular language $L$ \\cite{van2020phd, van2017learning}. \nOn a very high-level, $M_{\\mathbb{P}}(L)$ may be recognised as some algebraic closure of the well-known minimal automaton $M(L)$ for $L$ in the category of sets and functions. 
More concretely, it consists of the inclusion-ordered free complete lattice of unions of residuals of $L$, equipped with the usual transition and output functions for languages inherited from the final coalgebra for $F$. \nUsing well-known lattice-theoretic arguments one can show that the tuple $(\\mathcal{J}(M_{\\mathbb{P}}(L)),i,d)$, with $i$ the subset-embedding of join-irreducibles and $d$ the function assigning to a language the join-irreducible languages below it, is a generator for $M_{\\mathbb{P}}(L)$. Writing $k$ for the $F$-coalgebra structure of $M_{\\mathbb{P}}(L)$, it is easy to verify that the $F\\mathbb{P}$-coalgebra structure $Fd \\circ k \\circ i$ on $\\mathcal{J}(M_{\\mathbb{P}}(L))$ mentioned in \\Cref{forgenerator-isharp-is-bialgebra-hom} corresponds precisely to the so-called canonical residual finite state automaton for $L$ \\cite{denis2002residual}.\n\\end{example}\n\n\nWe close this section with a more compact characterisation of generators for free algebras.\n\n\\begin{lemma}\n\\label{setgenerator}\n\tLet $i: Y \\rightarrow X$ and $d: X \\rightarrow TY$ be morphisms such that the following diagram commutes:\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tT^2Y \\arrow{r}{T^2i} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tTX \\arrow{u}{Td} \\arrow{r}{\\textnormal{id}_{TX}} & TX\n\t\t\t\t\\end{tikzcd}\t.\n\t\t\t\t\\end{equation*}\t\n\tThen $(Y, \\eta_X \\circ i, \\mu_Y \\circ Td)$ is a generator for the $\\mathbb{T}$-algebra $(TX, \\mu_X)$.\n\\end{lemma}\n\\begin{proof}\nFollows from the commutativity of the diagram below\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{r}{T\\eta_X} \\arrow{dr}{\\textnormal{id}_{TX}} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tT^2Y \\arrow{u}{\\mu_Y} \\arrow{r}{T^2i} & T^2 X \\arrow{u}{\\mu_X} \\arrow{r}{\\mu_X} & TX \\arrow{d}{\\textnormal{id}_{TX}} \\\\\n\t\t\t\t\tTX \\arrow{u}{Td} 
\\arrow{rr}{\\textnormal{id}_{TX}} & & TX\n\t\t\t\t\\end{tikzcd}\t.\n\t\t\t\t\\end{equation*}\n\\end{proof}\n\n\\section{Bases for algebras}\n\n\\label{basesection}\n\nIn the last section we adopted the notion of a scoop by Arbib and Manes \\cite{arbib1975fuzzy} by introducing generators for algebras over a monad. In this section we extend the former to the definition of a basis for an algebra over a monad by adding a uniqueness constraint. While scoops have mainly occurred in the context of state-based systems, our extension allows us to emphasise their ramifications with universal algebra.\n\n\\begin{definition}[Basis]\n\\label{basisdefinition}\n\tA \\textit{basis} for a $\\mathbb{T}$-algebra $(X, h)$ is a tuple $(Y, i, d)$ consisting of an object $Y$, a morphism $i: Y \\rightarrow X$, and a morphism $d: X \\rightarrow TY$, such that $i^{\\sharp} \\circ d = \\textnormal{id}_{X}$ and $d \\circ i^{\\sharp} = \\textnormal{id}_{TY}$, that is, the following two diagrams commute:\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{d}{h}\\\\\n\t\t\tX \\arrow{u}{d} \\arrow{r}{\\textnormal{id}_X} & X \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{h} & X \\arrow{d}{d}\\\\\n\t\t\tTY \\arrow{u}{Ti} \\arrow{r}{\\textnormal{id}_{TY}} & TY \n\t\t\\end{tikzcd}.\t\n\\end{equation*} \nA \\textit{homomorphism} between two bases for $(X,h)$ is a morphism between the underlying generators. 
The category consisting of bases for a $\\mathbb{T}$-algebra $(X,h)$ and homomorphisms between them is denoted by $\\textnormal{\\textsf{Bases}}(X,h)$.\n\\end{definition}\n\n We begin with an example of a basis in the above sense from the theory of monoids.\n\n\\begin{example}[Monoids]\n\\label{monoidmonad}\n\tLet $\\mathbb{T} = (T, \\mu, \\eta)$ be the set monad whose underlying endofunctor $T$ assigns to a set $X$ the set of all finite words over the alphabet $X$; whose unit $\\eta_X$ assigns to a character in $X$ the corresponding word of length one; and whose multiplication $\\mu_X$ syntactically flattens words over words over the alphabet $X$ in the usual way. The monad $\\mathbb{T}$ is also known as the list monad. One verifies that the constraints for its algebras correspond to the unitality and associativity laws of monoids. A function $i: Y \\rightarrow X$ is thus part of a basis $(Y,i,d)$ for a $\\mathbb{T}$-algebra with underlying set $X$ if and only if for all $x \\in X$ there exists a unique word $d(x) = \\lbrack y_1, \\ldots, y_n \\rbrack$ over the alphabet $Y$ satisfying $x = i(y_1) \\cdot \\ldots \\cdot i(y_n)$. \n\t\n\t\\end{example}\n\nThe next result establishes that the morphism $d$ is in fact an algebra homomorphism, and, intuitively, that elements of a basis are uniquely generated by their image under the monad unit, that is, typically by themselves.\n\n\n\\begin{lemma}\n\\label{forbasis-d-isalgebrahom}\n\tLet $(Y, i, d)$ be a basis for a $\\mathbb{T}$-algebra $(X, h)$. 
Then the following two diagrams commute:\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{d}[left]{h} \\arrow{r}{Td} & T^2Y \\arrow{d}{\\mu_Y} \\\\\n\t\t\tX \\arrow{r}{d} & TY\n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY \\arrow{r}[above]{i} \\arrow{d}[left]{\\eta_{Y}} & X \\arrow{dl}[right]{d} \\\\\n\t\t\t TY & \n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nFollows from the commutativity of the following two diagrams\n\t\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tTX \\arrow{rr}{Td} \\arrow{d}[left]{\\textnormal{id}_{TX}} & & T^2Y \\arrow{dl}{T^2i} \\arrow{dd}{\\mu_Y} \\\\\n\t\t\t\tTX \\arrow{dd}[left]{h} & T^2X \\arrow{l}{Th} \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t& TX \\arrow{dl}{h} & TY \\arrow{l}{Ti} \\arrow{d}{\\textnormal{id}_{TY}} \\\\\n\t\t\t\tX \\arrow{rr}{d} & & TY\n\t\t\t\\end{tikzcd}\n\t\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY \\arrow{r}{i} \\arrow{d}[left]{\\eta_Y} & X \\arrow{r}{\\textnormal{id}_X} \\arrow{d}[left]{\\eta_X} & X \\arrow{dddll}{d} \\\\\n\t\t\tTY \\arrow{dd}[left]{\\textnormal{id}_{TY}} \\arrow{r}{Ti} &TX \\arrow{ur}{h} \\\\ \n\t\t\t\\\\\n\t\t\tTY\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nIn consequence we can derive the following three corollaries. First, every algebra with a basis is isomorphic to a free algebra.\n\n\\begin{corollary}\n\\label{basisimpliesfree}\n\tLet $(Y,i,d)$ be a basis for a $\\mathbb{T}$-algebra $(X,h)$. Then $d: X \\rightarrow TY$ is a $\\mathbb{T}$-algebra isomorphism $d: (X,h) \\rightarrow (TY, \\mu_Y)$.\n\\end{corollary}\n\\begin{proof}\n\tBy \\Cref{forbasis-d-isalgebrahom} the morphism $d$ is a $\\mathbb{T}$-algebra homomorphism. From general arguments it follows that the lifting $i^{\\sharp} = h \\circ Ti$ is a $\\mathbb{T}$-algebra homomorphism in the reverse direction. 
Since $(Y,i,d)$ is a basis, $d$ and $i^{\\sharp}$ are mutually inverse.\n\\end{proof}\n\nSecondly, an algebra with a basis embeds into the free algebra it spans. The monomorphism is fundamental to an alternative approach to bases \\cite{jacobs2011bases}; for more details see \\Cref{basesascoaglebrassec}. \n \n\\begin{corollary}\n\\label{Tid-algebrahom}\n\tLet $(Y,i,d)$ be a basis for a $\\mathbb{T}$-algebra $(X,h)$. Then $Ti \\circ d: X \\rightarrow TX$ is a $\\mathbb{T}$-algebra monomorphism $Ti \\circ d: (X,h) \\rightarrow (TX, \\mu_X)$ with left-inverse $h: (TX, \\mu_X) \\rightarrow (X, h)$.\n\\end{corollary}\n\\begin{proof}\n\tThe morphism $Ti \\circ d$ is a $\\mathbb{T}$-algebra homomorphism since by \\Cref{forbasis-d-isalgebrahom} the following diagram commutes\n\t\\begin{equation*}\n\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tTX \\arrow{d}[left]{h} \\arrow{r}{Td} & T^2Y \\arrow{d}{\\mu_Y} \\arrow{r}{T^2i} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tX \\arrow{r}{d} & TY \\arrow{r}{Ti} & TX\n\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\tThe morphism $h$ is a $\\mathbb{T}$-algebra homomorphism since the equality $h \\circ \\mu_X = h \\circ Th$ holds for all algebras over a monad. By the definition of a generator $h$ is a left-inverse to $Ti \\circ d$. The morphism $Ti \\circ d$ is mono since every morphism with a left-inverse is mono.\t\n\t \\end{proof}\n\nThirdly, every morphism defined on a basis extends to an algebra homomorphism.\n\n\n\\begin{corollary}\n\t\tLet $(Y, i, d)$ be a basis for a $\\mathbb{T}$-algebra $(X,h_X)$, and let $(Z,h_Z)$ be another $\\mathbb{T}$-algebra. For every morphism $f: Y \\rightarrow Z$ there exists a $\\mathbb{T}$-algebra homomorphism $f^{\\sharp}: (X,h_X) \\rightarrow (Z,h_Z)$ satisfying $f^{\\sharp} \\circ i = f$.\n\\end{corollary}\n\t\\begin{proof}\n\t\tWe define a candidate by $f^{\\sharp} := h_Z \\circ Tf \\circ d$. 
Using \\Cref{forbasis-d-isalgebrahom} we establish the commutativity of the following two diagrams\n\t\t\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{r}{d} & TY \\arrow{r}{Tf} & TZ \\arrow{r}{\\textnormal{id}_{TZ}} & TZ \\arrow{d}{h_Z} \\\\\n\t\t\t& Y \\arrow{ul}{i} \\arrow{r}{f} \\arrow{u}{\\eta_Y} &Z \\arrow{u}{\\eta_{Z}} \\arrow{r}{\\textnormal{id}_{Z}} & Z\n\t\t\\end{tikzcd}\n\t\t\\end{equation*}\n\t\t\t\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{Td} \\arrow{d}[left]{h_X} & T^2Y \\arrow{d}{\\mu_{Y}} \\arrow{r}{T^2f}& T^2 Z \\arrow{d}{\\mu_Z} \\arrow{r}{Th_Z} & TZ \\arrow{d}{h_Z} \\\\\n\t\t\tX \\arrow{r}{d} & TY \\arrow{r}{Tf} & TZ \\arrow{r}{h_Z} &Z\n\t\t\\end{tikzcd}.\t\n\t\t\\end{equation*}\n\t\tThe first diagram proves the identity $f^{\\sharp} \\circ i = f$, and the second diagram shows that $f^{\\sharp}$ is a $\\mathbb{T}$-algebra homomorphism.\n\t\\end{proof}\n\nIn \\Cref{basisimpliesfree} it was proven that every algebra with a basis is isomorphic to a free algebra. We show next that the statement can be strengthened to bialgebras.\nIn more detail, in \\Cref{forgenerator-isharp-is-bialgebra-hom} we have seen that a generator for the underlying algebra of a bialgebra allows us to construct a bialgebra with free state space, such that $i^{\\sharp}$ extends to a bialgebra homomorphism from the latter to the former. As it turns out, for a basis, the homomorphism $i^{\\sharp}$ is in fact an isomorphism.\n\n\\begin{proposition}\n\\label{forbasis-d-is-bi-algebrahom}\n\tLet $(X, h, k)$ be a $\\lambda$-bialgebra and let $(Y, i, d)$ be a basis for the $\\mathbb{T}$-algebra $(X,h)$. 
Then $d: X \\rightarrow TY$ is a $\\lambda$-bialgebra homomorphism $d: (X, h, k) \\rightarrow (TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ for $(Fd \\circ k \\circ i)^{\\sharp} := F\\mu_Y \\circ \\lambda_{TY}\\circ T(Fd \\circ k \\circ i)$.\n\\end{proposition}\n\\begin{proof}\nAs before, it follows by general arguments that $(TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ is a $\\lambda$-bialgebra. The morphism $d$ is a $\\mathbb{T}$-algebra homomorphism by \\Cref{forbasis-d-isalgebrahom}. It thus remains to show that $d$ is an $F$-coalgebra homomorphism. This is again established with \\Cref{forbasis-d-isalgebrahom}, as shown below\n\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{rrrrr}{k} \\arrow{dd}[left]{d} \\arrow{rd}{\\textnormal{id}_X} & & & & & FX \\arrow{dd}{Fd} \\\\\n\t\t\t& X \\arrow{rrrru}{k} & & & FTX \\arrow{d}{FTd} \\arrow{ur}{Fh} & \\\\\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{u}{h} \\arrow{r}{Tk} & TFX \\arrow{r}{TFd} \\arrow{rru}{\\lambda_X} & TFTY \\arrow{r}{\\lambda_{TY}} & FT^2Y \\arrow{r}{F\\mu_Y} & FTY\n\t\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\n\n\\begin{corollary}\n\\label{forbasis-bialgebra-areisomorphic}\n\tLet $(X, h, k)$ be a $\\lambda$-bialgebra and let $(Y, i, d)$ be a basis for the $\\mathbb{T}$-algebra $(X,h)$. Then the $\\lambda$-bialgebras $(X, h, k)$ and $(TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ are isomorphic.\n\\end{corollary}\n\\begin{proof}\n\tBy \\Cref{forbasis-d-is-bi-algebrahom} the morphism $d$ is a $\\lambda$-bialgebra homomorphism of the right type. By \\Cref{forgenerator-isharp-is-bialgebra-hom} the morphism $i^{\\sharp}$ is a $\\lambda$-bialgebra homomorphism in the reverse direction to $d$. From the definition of a basis it follows that $d$ and $i^\\sharp$ are mutually inverse.\n\\end{proof}\n\n\\subsection{Existence of bases}\n\n\\label{existencebasesec}\n\nWhile every algebra over a monad admits a generator, cf. 
\\Cref{canonicalgenerator}, the same is not necessarily true for bases. In this section we show, however, that every \\textit{free} algebra admits a basis. We begin with a characterisation of bases for free algebras that is slightly more compact than the one derived directly from the definition.\n\n\\begin{lemma}\n\\label{basisfreealgebraeasier}\n\tLet $i: Y \\rightarrow X$ and $d: X \\rightarrow TY$ be morphisms such that the following two diagrams commute:\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tT^2Y \\arrow{r}{T^2i} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tTX \\arrow{u}{Td} \\arrow{r}{\\textnormal{id}_{TX}} & TX\n\t\t\t\t\\end{tikzcd}\t\n\t\t\t\t\\qquad\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tTX \\arrow{r}{Td} & T^2Y \\arrow{d}{\\mu_Y} \\\\\n\t\t\t\tTY \\arrow{u}{Ti} \\arrow{r}{\\textnormal{id}_{TY}} & TY\n\t\t\t\\end{tikzcd}.\n\t\t\t\t\\end{equation*}\t\n\tThen $(Y, \\eta_X \\circ i, \\mu_Y \\circ Td)$ is a basis for the $\\mathbb{T}$-algebra $(TX, \\mu_X)$.\n\\end{lemma}\n\\begin{proof}\nOne part of the claim follows from \\Cref{setgenerator}. 
The other part follows from the commutativity of the following diagram\n\t\\begin{equation*}\n\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\t\tT^2X \\arrow{rr}{\\mu_X} & & TX \\arrow{d}{Td} \\\\\n\t\t\t\t\t\tTX \\arrow{u}{T\\eta_X} \\arrow{urr}{\\textnormal{id}_{TX}} \\arrow{rr}{Td} & & T^2Y \\arrow{d}{\\mu_Y} \\\\\n\t\t\t\t\t\tTY \\arrow{u}{Ti} \\arrow{rr}{\\textnormal{id}_{TY}} & & TY\n\t\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\n\\begin{corollary}\n\t\\label{canonicalbasisforfreealgbera}\n\t$(X,\\eta_X, \\textnormal{id}_X)$ is a basis for the $\\mathbb{T}$-algebra $(TX, \\mu_X)$.\n\\end{corollary}\n\\begin{proof}\n\tUsing the equality $\t\\mu_X \\circ T\\eta_X = \\textnormal{id}_{TX}$, the claim follows from \\Cref{basisfreealgebraeasier} with $i= \\textnormal{id}_X$ and $d= \\eta_X$.\n\\end{proof}\n\n\\subsection{Uniqueness of bases}\n\n\\label{uniquebasesec}\n\nIn this section we investigate the uniqueness of bases for algebras over a monad.\nTo begin with, assume $(X,h)$ is an algebra over a monad $\\mathbb{T}$ and we are given a fixed morphism $i: Y \\rightarrow X$. Then any two morphisms $d_{\\alpha}$ and $d_{\\beta}$ turning $(Y,i,d_{\\alpha})$ and $(Y,i,d_{\\beta})$ into bases for $(X,h)$, respectively, are in fact identical: \\[ d_{\\alpha} = d_{\\alpha} \\circ i^{\\sharp} \\circ d_{\\beta} = d_{\\beta}. \\] If the morphism $i$ is not fixed, we have the following slightly weaker result about the uniqueness of bases:\n\n\\begin{lemma}\n\tLet $(Y_{\\alpha},i_{\\alpha},d_{\\alpha})$ and $(Y_{\\beta}, i_{\\beta}, d_{\\beta})$ be bases for a $\\mathbb{T}$-algebra $(X,h)$. 
Then the $\\mathbb{T}$-algebras $(TY_{\\alpha}, \\mu_{Y_{\\alpha}})$ and $(TY_{\\beta}, \\mu_{Y_{\\beta}})$ are isomorphic.\n\\end{lemma}\n\\begin{proof}\n\tWe have a $\\mathbb{T}$-algebra homomorphism \\[ d_{\\beta} \\circ (i_{\\alpha})^{\\sharp}: (TY_{\\alpha}, \\mu_{Y_{\\alpha}}) \\longrightarrow (TY_{\\beta}, \\mu_{Y_{\\beta}}) \\] since the components $(i_{\\alpha})^{\\sharp}$ and $d_{\\beta}$ are $\\mathbb{T}$-algebra homomorphisms by general arguments and \\Cref{forbasis-d-isalgebrahom}, respectively.\n\tAnalogously, by symmetry, the morphism $d_{\\alpha} \\circ (i_{\\beta})^{\\sharp}$ is a $\\mathbb{T}$-algebra homomorphism in the reverse direction.\n\tIt is easy to verify that the definition of a basis implies that both morphisms are mutually inverse.\n\\end{proof}\n\nIf a set monad preserves the set cardinality relation in the sense that \n\\[ \\vert Y_{\\alpha} \\vert \\not = \\vert Y_{\\beta} \\vert \\quad \\textnormal{implies} \\quad \\vert TY_{\\alpha} \\vert \\not = \\vert TY_{\\beta} \\vert, \\] the above result in particular shows that any two bases for a fixed algebra have the same cardinality.\n\n\\subsection{Representation theory}\n\n\\label{representationtheorysec}\n\nIn this section we use our general definition of a basis to derive a representation theory for homomorphisms between algebras over monads that is analogous to the representation theory for linear transformations between vector spaces.\n\nIn more detail, recall that a linear transformation $L: V \\rightarrow W$ between $k$-vector spaces with finite bases $\\alpha = \\lbrace v_1, ... , v_n \\rbrace$ and $\\beta = \\lbrace w_1, ..., w_m \\rbrace$, respectively, admits a matrix representation $L_{\\alpha \\beta} \\in \\textnormal{Mat}_{k}(m, n)$ with \\[ L(v_j) = \\sum_i (L_{\\alpha \\beta})_{i,j} w_i, \\] such that for any vector $v$ in $V$ the coordinate vectors $L(v)_{\\beta} \\in k^m$ and $v_{\\alpha} \\in k^n$ \nsatisfy the equality \\[ L(v)_{\\beta} = L_{\\alpha \\beta} v_{\\alpha}. 
\\] A great amount of linear algebra is concerned with finding bases such that the corresponding matrix representation has a convenient shape, for instance diagonal. The following definitions generalise the situation by substituting Kleisli morphisms for matrices.\n\n \n\\begin{definition}\n\tLet $\\alpha = (Y_{\\alpha}, i_{\\alpha}, d_{\\alpha})$ and $\\beta = (Y_{\\beta}, i_{\\beta}, d_{\\beta})$ be bases for $\\mathbb{T}$-algebras $(X_{\\alpha},h_{\\alpha})$ and $(X_{\\beta},h_{\\beta})$, respectively. The \\textit{basis representation} of a $\\mathbb{T}$-algebra homomorphism $f: (X_{\\alpha},h_{\\alpha}) \\rightarrow (X_{\\beta},h_{\\beta})$ with respect to $\\alpha$ and $\\beta$ is the composition \n\t\\begin{equation}\n\t\\label{basisrepresentation}\n\t\tf_{\\alpha \\beta} := Y_{\\alpha} \\overset{i_{\\alpha}}{\\longrightarrow} X_{\\alpha} \\overset{f}{\\longrightarrow} X_{\\beta} \\overset{d_{\\beta}}{\\longrightarrow} TY_{\\beta}.\n\t\\end{equation}\n\t Conversely, the morphism \\textit{associated} with a Kleisli morphism $p: Y_{\\alpha} \\rightarrow TY_{\\beta}$ with respect to $\\alpha$ and $\\beta$ is the composition \\begin{equation}\n\t\t\\label{associatedmorph}\n\t\tp^{\\alpha \\beta} := X_{\\alpha} \\overset{d_{\\alpha}}{\\longrightarrow} TY_{\\alpha} \\overset{Tp}{\\longrightarrow} T^2Y_{\\beta} \\overset{\\mu_{Y_{\\beta}}}{\\longrightarrow} TY_{\\beta} \\overset{Ti_{\\beta}}{\\longrightarrow} TX_{\\beta} \\overset{h_{\\beta}}{\\longrightarrow} X_{\\beta}.\n\t\t\t\\end{equation}\n\t\\end{definition}\n\nThe morphism associated with a Kleisli morphism should be understood as the analogue of the linear transformation between vector spaces induced by a matrix of the right type. 
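To make the two constructions above concrete, the following minimal Python sketch instantiates them for the list monad of \Cref{monoidmonad}, where the free monoid on an alphabet carries the canonical basis given by the unit; the helper names `mu`, `T`, `basis_representation`, and `associated` are hypothetical and not part of the text.

```python
# Basis representations for the list (free-monoid) monad: T Y = lists over Y,
# mu = flattening by concatenation, eta(y) = [y]. The free monoid on Y is
# (T Y, mu_Y) with canonical basis (Y, eta_Y, id), so d is the identity
# and i is the unit.

def mu(word_of_words):            # monad multiplication: flatten
    return [y for w in word_of_words for y in w]

def T(g):                         # functor action on morphisms: map
    return lambda word: [g(y) for y in word]

# Basis representation f_{alpha beta} = d_beta . f . i_alpha of a monoid
# homomorphism f between free monoids; here it is f restricted to letters.
def basis_representation(f):
    return lambda y: f([y])

# Morphism associated with a Kleisli morphism p: Y_alpha -> T Y_beta,
# p^{alpha beta} = h_beta . T i_beta . mu . T p . d_alpha; for free monoids
# this collapses to "substitute p for each letter and flatten".
def associated(p):
    return lambda word: mu(T(p)(word))

# Hypothetical example: the homomorphism determined by a -> [0, 1], b -> [1].
p = {'a': [0, 1], 'b': [1]}.__getitem__
f = associated(p)
assert f(['a', 'b', 'a']) == [0, 1, 1, 0, 1]
assert basis_representation(f)('a') == [0, 1]   # round trip recovers p
```

The two assertions illustrate, for this instance, that a homomorphism is determined by its values on the basis and that the two operations invert each other.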
The following result confirms this intuition.\n\t\n\\begin{lemma}\n\t\\eqref{associatedmorph} is a $\\mathbb{T}$-algebra homomorphism $p^{\\alpha \\beta}: (X_{\\alpha},h_{\\alpha}) \\rightarrow (X_{\\beta},h_{\\beta})$.\n\\end{lemma}\t\n\\begin{proof}\nUsing \\Cref{forbasis-d-isalgebrahom} we deduce the commutativity of the following diagram\n\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\tTX_{\\alpha} \\arrow{d}[left]{h_{\\alpha}} \\arrow{r}{Td_{\\alpha}} & T^2Y_{\\alpha} \\arrow{r}{T^2p} \\arrow{d}{\\mu_{Y_{\\alpha}}} & T^3Y_{\\beta} \\arrow{r}{T\\mu_{Y_{\\beta}}} \\arrow{d}{\\mu_{TY_{\\beta}}} & T^2Y_{\\beta} \\arrow{d}{\\mu_{Y_{\\beta}}} \\arrow{r}{T^2i_{\\beta}} & T^2X_{\\beta} \\arrow{r}{Th_{\\beta}} \\arrow{d}{\\mu_{X_{\\beta}}} & TX_{\\beta} \\arrow{d}{h_{\\beta}} \\\\\n\tX_{\\alpha} \\arrow{r}{d_{\\alpha}} & TY_{\\alpha} \\arrow{r}{Tp} & T^2Y_{\\beta} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{r}{h_{\\beta}} & X_{\\beta}\t\n\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\nThe following result establishes a generalisation of the observation that for fixed bases, constructing a matrix representation of a linear transformation on the one hand, and associating a linear transformation to a matrix of the right type on the other hand, are mutually inverse operations.\n\\begin{lemma} \n\tThe operations $\\eqref{basisrepresentation}$ and $\\eqref{associatedmorph}$ are mutually inverse.\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply \n\t\\begin{align*}\n\t\t(p^{\\alpha \\beta})_{\\alpha \\beta} &= d_{\\beta} \\circ (h_{\\beta} \\circ Ti_{\\beta} \\circ \\mu_{Y_{\\beta}} \\circ Tp \\circ d_{\\alpha}) \\circ i_{\\alpha} \\\\\n\t\t(f_{\\alpha \\beta})^{\\alpha \\beta} &= h_{\\beta} \\circ Ti_{\\beta} \\circ \\mu_{Y_{\\beta}} \\circ T(d_{\\beta} \\circ f \\circ i_{\\alpha}) \\circ d_{\\alpha}.\n\t\\end{align*}\nUsing \\Cref{forbasis-d-isalgebrahom} we deduce the commutativity of the 
diagrams below\n\t\\[\n\t\\begin{gathered}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY_{\\alpha} \\arrow{d}[left]{i_{\\alpha}} \\arrow{rrr}{p} \\arrow{dr}{\\eta_{Y_{\\alpha}}} & & & TY_{\\beta} \\arrow{d}{\\textnormal{id}_{TY_{\\beta}}} \\arrow{rr}{\\textnormal{id}_{TY_{\\beta}}} \\arrow{dl}{\\eta_{TY_{\\beta}}} & & TY_{\\beta} \\\\\n\t\t\tX_{\\alpha} \\arrow{r}{d_{\\alpha}} & TY_{\\alpha} \\arrow{r}{Tp} & T^2Y_{\\beta} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{r}{h_{\\beta}} & X_{\\beta} \\arrow{u}[right]{d_{\\beta}}\n\t\t\t\\end{tikzcd} \\\\\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tX_{\\alpha} \\arrow{r}{\\textnormal{id}_{X_{\\alpha}}} \\arrow{d}[left]{Ti_{\\alpha} \\circ d_{\\alpha}} & X_{\\alpha} \\arrow{rr}{f} & & X_{\\beta} \\arrow{d}{d_{\\beta}} \\arrow{r}{\\textnormal{id}_{X_{\\beta}}} & X_{\\beta} \\\\\n\t\t\t\tTX_{\\alpha} \\arrow{ur}{h_{\\alpha}} \\arrow{r}{Tf} & TX_{\\beta} \\arrow{r}{Td_{\\beta}} \\arrow{urr}{h_{\\beta}} & T^2Y_{\\beta} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{u}[right]{h_{\\beta}}\n\t\t\t\t\\end{tikzcd}\t.\t\t\n\t\\end{gathered}\n\t\\]\t\t\t\t\t\n\\end{proof}\n\n\nThe next result establishes the compositionality of basis representations: the matrix representation of the composition of two linear transformations is given by the multiplication of the matrix representations of the individual linear transformations. 
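The Kleisli composition used for basis representations can be made concrete; for the powerset monad it amounts to $(q \cdot p)(x) = \bigcup_{y \in p(x)} q(y)$. A minimal sketch, assuming Python frozensets as finite subsets and hypothetical Kleisli morphisms `p` and `q`:

```python
# Kleisli composition for the powerset monad:
# (q . p)(x) = mu(P(q)(p(x))) = union of q(y) over all y in p(x).
def kleisli(q, p):
    return lambda x: frozenset(z for y in p(x) for z in q(y))

p = lambda n: frozenset({n, n + 1})   # hypothetical Kleisli morphisms
q = lambda n: frozenset({2 * n})
assert kleisli(q, p)(1) == frozenset({2, 4})
```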
On the left side of the following equation we use the usual Kleisli composition.\n\\begin{lemma}\n$g_{\\beta \\gamma} \\cdot f_{\\alpha \\beta} = (g \\circ f)_{\\alpha \\gamma}$.\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply\n\t\\begin{align*}\n\t\tg_{\\beta \\gamma} \\cdot f_{\\alpha \\beta} &= \\mu_{Y_{\\gamma}} \\circ T(d_{\\gamma} \\circ g \\circ i_{\\beta}) \\circ d_{\\beta} \\circ f \\circ i_{\\alpha} \\\\\n\t\t(g \\circ f)_{\\alpha \\gamma} &= d_{\\gamma} \\circ (g \\circ f) \\circ i_{\\alpha}.\n\t\\end{align*}\n\tWe delete common terms and use \\Cref{forbasis-d-isalgebrahom} to deduce the commutativity of the diagram below\n\\begin{equation*}\n\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\t\t X_{\\beta} \\arrow{d}[left]{\\textnormal{id}_{X_{\\beta}}} \\arrow{r}{d_{\\beta}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{r}{Tg} \\arrow{dll}{h_{\\beta}} & TX_{\\gamma} \\arrow{r}{Td_{\\gamma}} \\arrow{dll}{h_{\\gamma}} & T^2 Y_{\\gamma} \\arrow{d}{\\mu_{Y_{\\gamma}}} \\\\\n\t\t\t\t\t\t X_{\\beta} \\arrow{r}[below]{g} & X_{\\gamma} \\arrow{rrr}[below]{d_{\\gamma}} & & & TY_{\\gamma}\n\t\t\t\t\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\nSimilarly to the previous result, the next observation captures the compositionality of the operation that assigns to a Kleisli morphism its associated homomorphism.\n\\begin{lemma}\n\t$q^{\\beta \\gamma} \\circ p^{\\alpha \\beta} = (q \\cdot p)^{\\alpha \\gamma}$. 
\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply \n\t\\begin{align*}\n\t\tq^{\\beta \\gamma} \\circ p^{\\alpha \\beta} &= (h_{\\gamma} \\circ Ti_{\\gamma} \\circ \\mu_{Y_{\\gamma}} \\circ Tq \\circ d_{\\beta}) \\circ (h_{\\beta} \\circ Ti_{\\beta} \\circ \\mu_{Y_{\\beta}} \\circ Tp \\circ d_{\\alpha}) \\\\\n\t\t(q \\cdot p)^{\\alpha \\gamma} &= h_{\\gamma} \\circ Ti_{\\gamma} \\circ \\mu_{Y_{\\gamma}} \\circ T\\mu_{Y_{\\gamma}} \\circ T^2q \\circ Tp \\circ d_{\\alpha}.\n\t\\end{align*}\n\tBy deleting common terms and using the equality $d_{\\beta} \\circ h_{\\beta} \\circ Ti_{\\beta} = \\textnormal{id}_{TY_{\\beta}}$ it is thus sufficient to show\n\t\\[\n\t\\mu_{Y_{\\gamma}} \\circ Tq \\circ \\mu_{Y_{\\beta}} = \\mu_{Y_{\\gamma}} \\circ T\\mu_{Y_{\\gamma}} \\circ T^2q.\n\t\\]\n\tThe above equation follows from the commutativity of the diagram below\n\t\\begin{equation*}\n\t\t\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\t\t\t\tT^2Y_{\\beta} \\arrow{d}[left]{T^2q} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Tq} & T^2Y_{\\gamma} \\arrow{d}{\\mu_{Y_{\\gamma}}} \\\\\n\t\t\t\t\t\t\t\tT^3Y_{\\gamma} \\arrow{urr}{\\mu_{TY_{\\gamma}}} \\arrow{r}[below]{T\\mu_{Y_{\\gamma}}} & T^2 Y_{\\gamma} \\arrow{r}[below]{\\mu_{Y_{\\gamma}}} & TY_{\\gamma}\n\t\t\t\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nAt the beginning of this section we recalled the soundness identity \\[ L(v)_{\\beta} = L_{\\alpha \\beta} v_{\\alpha} \\] for the matrix representation $L_{\\alpha \\beta}$ of a linear transformation $L$. The next result is a natural generalisation of this statement. 
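The classical soundness identity can be checked on a small example; the following sketch uses a hypothetical linear map $L(x,y) = (x+y, x)$ on $\mathbb{R}^2$ and the basis $\alpha = \lbrace (1,1), (1,-1) \rbrace$ on both sides.

```python
# Numerical check of L(v)_alpha = L_{alpha alpha} v_alpha for a hypothetical
# linear map on R^2 and the basis alpha = {(1,1), (1,-1)}.

def L(v):                       # L(x, y) = (x + y, x)
    x, y = v
    return (x + y, x)

def coords(v):                  # alpha-coordinates: v = a*(1,1) + b*(1,-1)
    x, y = v
    return ((x + y) / 2, (x - y) / 2)

alpha = [(1, 1), (1, -1)]
# Matrix representation, stored column-wise: column j holds the
# alpha-coordinates of L applied to the j-th basis vector.
M = [coords(L(b)) for b in alpha]

def matvec(cols, c):            # sum_j c_j * column_j
    return tuple(sum(col[i] * cj for col, cj in zip(cols, c)) for i in range(2))

v = (3, 1)
assert coords(L(v)) == matvec(M, coords(v))   # soundness identity
```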
\n\n\\begin{lemma}\n\t$d_{\\beta} \\circ f = f_{\\alpha \\beta} \\cdot d_{\\alpha}$.\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply\n\t\\[\n\t f_{\\alpha \\beta} \\cdot d_{\\alpha} = \\mu_{Y_{\\beta}} \\circ T(d_{\\beta} \\circ f \\circ i_{\\alpha}) \\circ d_{\\alpha}.\n\t\\]\n\tUsing \\Cref{forbasis-d-isalgebrahom} we deduce the commutativity of the diagram below\n\t\\begin{equation*}\n\t\t\t\t\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tX_{\\alpha} \\arrow{d}[left]{\\textnormal{id}_{X_{\\alpha}}} \\arrow{r}{d_{\\alpha}} & TY_{\\alpha} \\arrow{r}{Ti_{\\alpha}} & TX_{\\alpha} \\arrow{dll}{h_{\\alpha}} \\arrow{r}{Tf} & TX_{\\beta} \\arrow{dll}{h_{\\beta}} \\arrow{r}{Td_{\\beta}} & T^2Y_{\\beta} \\arrow{d}{\\mu_{Y_{\\beta}}} \\\\\n\t\t\t\t\tX_{\\alpha} \\arrow{r}[below]{f} & X_{\\beta} \\arrow{rrr}[below]{d_{\\beta}} & & & TY_{\\beta}\t\t\t\t\n\t\t\t\t\t\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nAssume we are given bases $\\alpha, \\alpha'$ and $\\beta, \\beta'$ for $\\mathbb{T}$-algebras $(X_{\\alpha}, h_{\\alpha})$ and $(X_{\\beta}, h_{\\beta})$, respectively. 
\nThe following result makes clear how the two basis representations $f_{\\alpha \\beta}$ and $f_{\\alpha' \\beta'}$ are related.\n\n\\begin{proposition}\n\\label{similaritygeneral}\n\tThere exist Kleisli isomorphisms $p$ and $q$ such that $f_{\\alpha' \\beta'} = q \\cdot f_{\\alpha \\beta} \\cdot p$.\n\\end{proposition}\n\\begin{proof}\n\tThe Kleisli morphisms $p$ and $q$ and their respective candidates for inverses $p^{-1}$ and $q^{-1}$ are defined below\n\t\\begin{alignat*}{6}\n\t\t& p &&:= d_{\\alpha} \\circ i_{\\alpha'}: Y_{\\alpha'} &&\\longrightarrow TY_{\\alpha} \\qquad \\qquad && q &&:= d_{\\beta'} \\circ i_{\\beta}: Y_{\\beta} && \\longrightarrow TY_{\\beta'} \\\\\n\t\t& p^{-1} &&:= d_{\\alpha'} \\circ i_{\\alpha}: Y_{\\alpha} && \\longrightarrow TY_{\\alpha'} \\qquad \\qquad && q^{-1} &&:= d_{\\beta} \\circ i_{\\beta'} : Y_{\\beta'} && \\longrightarrow TY_{\\beta}.\n\t\\end{alignat*}\nFrom \\Cref{forbasis-d-isalgebrahom} it follows that the diagram below commutes \\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY_{\\alpha} \\arrow{r}{i_{\\alpha}} \\arrow{dd}[left]{\\eta_{Y_{\\alpha}}} & X_{\\alpha} \\arrow{d}{\\textnormal{id}_{X_{\\alpha}}} \\arrow{r}{d_{\\alpha'}} & TY_{\\alpha'} \\arrow{dd}{Ti_{\\alpha'}} \\\\\n\t\t\t& X_{\\alpha} \\arrow{dl}{d_{\\alpha}} & \\\\\n\t\t\tTY_{\\alpha} & T^2Y_{\\alpha} \\arrow{l}{\\mu_{Y_{\\alpha}}} & TX_{\\alpha} \\arrow{ul}{h_{\\alpha}} \\arrow{l}{Td_{\\alpha}}\n\t\t\\end{tikzcd}.\n\t\t\\end{equation*}\n\t\tThis shows that $p^{-1}$ is a Kleisli right-inverse of $p$. A symmetric version of the above diagram shows that $p^{-1}$ is also a Kleisli left-inverse of $p$. 
Analogously it follows that $q^{-1}$ is a Kleisli inverse of $q$.\n\t\t\n\t\tThe definitions further imply the equalities\n\t\t\\begin{align*}\n\t\t\tq \\cdot f_{\\alpha \\beta} \\cdot p &= \\mu_{Y_{\\beta'}} \\circ T(d_{\\beta'} \\circ i_{\\beta}) \\circ \\mu_{Y_{\\beta}} \\circ T(d_{\\beta} \\circ f \\circ i_{\\alpha}) \\circ d_{\\alpha} \\circ i_{\\alpha'} \\\\\n\t\t\tf_{\\alpha' \\beta'} &= d_{\\beta'} \\circ f \\circ i_{\\alpha'}.\n\t\t\\end{align*}\n\t\tWe delete common terms and use \\Cref{forbasis-d-isalgebrahom} to establish the commutativity of the diagram below\n\t\t\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX_{\\alpha} \\arrow{dr}{\\textnormal{id}_{X_{\\alpha}}} \\arrow{r}{d_{\\alpha}} \\arrow{dd}[left]{f} & TY_{\\alpha} \\arrow{r}{Ti_{\\alpha}} & TX_{\\alpha} \\arrow{dl}[above]{h_{\\alpha}} \\arrow{r}{Tf} & TX_{\\beta} \\arrow{ddll}{h_{\\beta}} \\arrow{d}{Td_{\\beta}} \\\\\n\t\t\t & X_{\\alpha} \\arrow{d}{f} & & T^2Y_{\\beta} \\arrow{dd}{\\mu_{Y_{\\beta}}} \\\\\n\tX_{\\beta} \\arrow{d}[left]{d_{\\beta'}} & X_{\\beta} \\arrow{l}[above]{\\textnormal{id}_{X_{\\beta}}} \\arrow{drr}{d_{\\beta}} & & \\\\\n\t\t\tTY_{\\beta'} & T^2 Y_{\\beta'} \\arrow{l}{\\mu_{Y_{\\beta'}}} & TX_{\\beta} \\arrow{llu}{h_{\\beta}} \\arrow{l}{Td_{\\beta'}} & TY_{\\beta} \\arrow{l}{Ti_{\\beta}}\n\t\t\\end{tikzcd}.\n\t\t\\end{equation*}\t\n\\end{proof}\n\nThe above result simplifies when one restricts to an endomorphism: the respective basis representations are similar.\n\n\\begin{corollary}\n\t\\label{similarity}\nThere exists a Kleisli isomorphism $p$ with Kleisli inverse $p^{-1}$ such that $f_{\\alpha' \\alpha'} = p^{-1} \\cdot f_{\\alpha \\alpha} \\cdot p$.\n\\end{corollary}\n\\begin{proof}\n\tIn \\Cref{similaritygeneral} let $\\beta = \\alpha$ and $\\beta' = \\alpha'$. 
One verifies that in the corresponding proof the definitions of the morphisms $p^{-1}$ and $q$ coincide.\n\\end{proof}\n\n\\subsection{Bases as free algebras}\n\n\\label{basesfreealgebrasec}\n\nIn \\Cref{basisimpliesfree} it was shown that an algebra over a monad with a basis is isomorphic to a free algebra. Conversely, in \\Cref{canonicalbasisforfreealgbera} it was proven that a free algebra over a monad admits a basis. Intuitively, one may thus think that bases for an algebra coincide with free isomorphic algebras.\nIn this section we make this idea precise on the level of categories.\n\nFormally, given an algebra $(X,h)$ over a monad $\\mathbb{T}$, let $\\textsf{Free}(X,h)$ denote the category defined as follows: \n\\begin{itemize}\n\t\\item objects are given by pairs $(Y, \\varphi)$ consisting of an object $Y$ and an isomorphism $\\varphi: (TY, \\mu_Y) \\rightarrow (X,h)$; and\n\t\\item a morphism $f: (Y_{\\alpha}, \\varphi_{\\alpha}) \\rightarrow (Y_{\\beta}, \\varphi_{\\beta})$ between objects consists of a morphism $f: Y_{\\alpha} \\rightarrow Y_{\\beta}$ such that $\\varphi_{\\alpha} = \\varphi_{\\beta} \\circ Tf$.\n\\end{itemize}\n\nThe next result shows that for a fixed algebra, the natural isomorphism \nunderlying the free-algebra adjunction restricts to an equivalence between the category of bases defined in \\Cref{basisdefinition}, and the category of free isomorphic algebras given above.\n\\begin{proposition}\n$\\textnormal{\\textsf{Bases}}(X,h) \\simeq \\textnormal{\\textsf{Free}}(X,h)$\n\\end{proposition}\n\\begin{proof}\n\tWe define functors $F$ and $G$ between the respective categories as follows\n\t\\begin{alignat*}{7}\n\t\t&F&&: \\textnormal{\\textsf{Bases}}(X,h) &&\\longrightarrow \\textnormal{\\textsf{Free}}(X,h) \\qquad && F(Y,i,d)&& = (Y, i^{\\sharp}) \\qquad && Ff &&= f \\\\\n\t\t&G&&: \\textnormal{\\textsf{Free}}(X,h) && \\longrightarrow \\textnormal{\\textsf{Bases}}(X,h) \\qquad && G(Y, \\varphi) && = (Y, \\varphi \\circ \\eta_Y, \\varphi^{-1}) \\qquad 
&&Gf &&= f.\n\t\\end{alignat*} \n\t\t\n\tThe functor $F$ is well-defined on objects since by \\Cref{basisimpliesfree} the morphism $i^{\\sharp}$ is an isomorphism with inverse $d$. Its well-definedness on morphisms is an immediate consequence of the constraint $i_{\\alpha} = i_{\\beta} \\circ f$ for morphisms $f: (Y_{\\alpha},i_{\\alpha},d_{\\alpha}) \\rightarrow (Y_{\\beta}, i_{\\beta}, d_{\\beta})$ between bases,\n\t\\[\n\t(i_{\\alpha})^{\\sharp} = h \\circ Ti_{\\alpha} = h \\circ Ti_{\\beta} \\circ Tf = (i_{\\beta})^{\\sharp} \\circ Tf.\n\t\\]\t\n\t\n\tThe functor $G$ is well-defined on objects since $(\\varphi \\circ \\eta_Y)^{\\sharp} = \\varphi$ and $\\varphi^{-1}$ are mutually inverse. Its well-definedness on morphisms $f: (Y_{\\alpha}, \\varphi_{\\alpha}) \\rightarrow (Y_{\\beta}, \\varphi_{\\beta})$ follows from the equality $\\varphi_{\\beta} \\circ Tf = \\varphi_{\\alpha}$,\n\t\\begin{align*}\n\t\tTf \\circ (\\varphi_{\\alpha})^{-1} &= (\\varphi_{\\beta})^{-1} \\circ \\varphi_{\\beta} \\circ Tf \\circ (\\varphi_{\\alpha})^{-1} = (\\varphi_{\\beta})^{-1} \\circ \\varphi_{\\alpha} \\circ (\\varphi_{\\alpha})^{-1} = (\\varphi_{\\beta})^{-1}\n\t\\end{align*}\n\tand the naturality of $\\eta$,\n\t\\[\n\t\\varphi_{\\alpha} \\circ \\eta_{Y_{\\alpha}} = \\varphi_{\\beta} \\circ Tf \\circ \\eta_{Y_{\\alpha}} = \\varphi_{\\beta} \\circ \\eta_{Y_{\\beta}} \\circ f.\n\t\\]\n\t\n\tThe functors are clearly mutually inverse on morphisms. 
For objects the statement follows from\n\t\\begin{align*}\n\t\tF \\circ G(Y, \\varphi) &= (Y, (\\varphi \\circ \\eta_Y)^{\\sharp}) = (Y, \\varphi) \\\\\n\t\tG \\circ F(Y, i,d) &= (Y, i^{\\sharp} \\circ \\eta_Y, (i^{\\sharp})^{-1}) = (Y,i,d).\n\t\\end{align*}\n\\end{proof}\n\n\n\\subsection{Bases for bialgebras}\n\n\\label{basesforbialgebrasec}\n\nIt is well-known that a distributive law $\\lambda$ between a monad $\\mathbb{T}$ and an endofunctor $F$ induces a monad \n$\\mathbb{T}_{\\lambda}$ on the category of $F$-coalgebras such that the algebras over $\\mathbb{T}_{\\lambda}$ coincide with the $\\lambda$-bialgebras of \\Cref{defbialgebras}.\n This section is concerned with generators and bases for $\\mathbb{T}_{\\lambda}$-algebras, or equivalently, $\\lambda$-bialgebras.\n\nBy definition, a generator for a $\\lambda$-bialgebra $(X,h,k)$ consists of an $F$-coalgebra $(Y,k_Y)$ and morphisms $i: Y \\rightarrow X$ and $d: X \\rightarrow TY$, such that the three diagrams on the left below commute\n\\begin{equation}\n\\label{bialgebrageneratorequations}\t\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tY \\arrow{r}{i} \\arrow{d}[left]{k_Y} & X \\arrow{d}{k} \\\\\n\t\tFY \\arrow{r}{Fi} &FX\n\t\\end{tikzcd}\n\t\\quad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tX \\arrow{r}{d} \\arrow{d}[left]{k} & TY \\arrow{d}[left]{\\lambda_Y \\circ Tk_Y} \\\\\n\t\tFX \\arrow{r}{Fd} &FTY\n\t\\end{tikzcd}\n\t\\quad\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{d}{h}\\\\\n\t\t\tX \\arrow{u}{d} \\arrow{r}{\\textnormal{id}_X} & X \n\t\t\\end{tikzcd}\n\t\t\\quad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{h} & X \\arrow{d}{d}\\\\\n\t\t\tTY \\arrow{u}{Ti} \\arrow{r}{\\textnormal{id}_{TY}} & TY \n\t\t\\end{tikzcd}.\n\\end{equation}\n\nA basis for a $\\lambda$-bialgebra is moreover given by a generator, such that in addition the diagram on the right above commutes.\n\nIt is easy to verify that by 
forgetting the $F$-coalgebra structure, every generator for a bialgebra in particular provides a generator for the underlying algebra of the bialgebra. By \\Cref{forgenerator-isharp-is-bialgebra-hom} it thus follows that there exists a $\\lambda$-bialgebra homomorphism \n\\[ i^{\\sharp} = h \\circ Ti : (TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp}) \\longrightarrow (X,h,k). \\]\nThe next result establishes that there exists a second equivalent free bialgebra with a different coalgebra structure.\n\n\\begin{lemma}\n\\label{generatorbialgebraisharp}\n\tLet $(Y,k_Y, i,d)$ be a generator for $(X,h,k)$. Then $i^{\\sharp}: TY \\rightarrow X$ is a $\\lambda$-bialgebra homomorphism $i^{\\sharp} : (TY, \\mu_Y, \\lambda_Y \\circ Tk_Y) \\rightarrow (X,h,k)$.\n\\end{lemma}\n\\begin{proof}\n\tClearly $i^{\\sharp}$ is a $\\mathbb{T}$-algebra homomorphism. It is an $F$-coalgebra homomorphism since the diagram below commutes\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} \\arrow{d}[left]{Tk_Y} & TX \\arrow{r}{h} \\arrow{d}{Tk} & X \\arrow{dd}{k} \\\\\n\t\t\tTFY \\arrow{r}{TFi} \\arrow{d}[left]{\\lambda_Y} & TFX \\arrow{d}{\\lambda_X} & \\\\\n\t\t\tFTY \\arrow{r}{FTi} & FTX \\arrow{r}{Fh} & FX\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\t\n\\end{proof}\n\nIf one moves from generators for bialgebras to bases for bialgebras, both coalgebra structures coincide.\n\n\\begin{lemma}\n\\label{samecoalgebrastructures}\n\tLet $(Y,k_Y, i, d)$ be a basis for $(X,h,k)$. Then $\\lambda_Y \\circ Tk_Y = (Fd \\circ k \\circ i)^{\\sharp}$.\n\\end{lemma}\n\\begin{proof}\n\tUsing \\Cref{forbasis-d-isalgebrahom} we establish the commutativity of the diagram below\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\tTY \\arrow{dd}[left]{\\textnormal{id}_{TY}} \\arrow{rr}{Tk_Y} & & TFY \\arrow{rr}{\\lambda_Y} \\arrow{dd}{TFi} & & FTY \\arrow{d}{FTi} \\arrow{rr}{\\textnormal{id}_{FTY}} && FTY \\arrow{dd}{\\textnormal{id}_{FTY}} \\\\\n\t& & & 
& FTX \\arrow{d}{FTd} \\arrow{r}{Fh} & FX \\arrow{dr}{Fd} & \\\\\n\tTY \\arrow{r}{Ti} & TX \\arrow{r}{Tk} & TFX \\arrow{r}{TFd} & TFTY \\arrow{r}{\\lambda_{TY}} & FT^2Y \\arrow{rr}{F\\mu_{Y}} & & FTY\t\n\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nWe close this section by observing that a basis for the underlying algebra of a bialgebra is sufficient for constructing a generator for the full bialgebra.\n\n\\begin{lemma}\n\tLet $(X,h,k)$ be a $\\lambda$-bialgebra and $(Y,i,d)$ a basis for the $\\mathbb{T}$-algebra $(X,h)$. Then $(TY,(Fd \\circ k \\circ i)^{\\sharp},i^{\\sharp},\\eta_{TY} \\circ d)$ is a generator for $(X,h,k)$.\n\\end{lemma}\n\\begin{proof}\n\tIn the following we abbreviate $k_{TY} := (Fd \\circ k \\circ i)^{\\sharp}$. By \\Cref{forgenerator-isharp-is-bialgebra-hom} the morphism $i^{\\sharp}$ is a $F$-coalgebra homomorphism $ i^{\\sharp}: (TY, k_{TY}) \\rightarrow (X, k)$. This shows the commutativity of the diagram on the left of \\eqref{bialgebrageneratorequations}. \n\tBy \\Cref{forbasis-d-is-bi-algebrahom} the morphism $d$ is a $F$-coalgebra homomorphism in the reverse direction. 
Together with the commutativity of the diagram on the left below\n\t\\[\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\tTY \\arrow{r}{\\eta_{TY}} \\arrow{dd}[left]{k_{TY}} & T^2Y \\arrow{d}{Tk_{TY}} \\\\\n\t& TFTY \\arrow{d}{\\lambda_{TY}} \\\\\n\tFTY \\arrow{ur}{\\eta_{FTY}} \\arrow{r}{F\\eta_{TY}} & FT^2Y\t\n\t\\end{tikzcd}\n\t\\qquad\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tT^2Y \\arrow{r}{T^2i} & T^2X \\arrow{rr}{Th} \\arrow{dr}{\\mu_X} & & TX \\arrow{dd}{h} \\\\\n\t\t\t\tTY \\arrow{u}{\\eta_{TY}} \\arrow{r}{Ti} & TX \\arrow{u}{\\eta_{TX}} \\arrow{r}{\\textnormal{id}_{TX}} & TX \\arrow{dr}{h} \\\\\n\t\t\t\tX \\arrow{u}{d} \\arrow{rrr}{\\textnormal{id}_X} & & & X.\n\t\t\t\\end{tikzcd}\n\t\\] this implies the commutativity of the second diagram to the left of \\eqref{bialgebrageneratorequations}.\t\t\n\tSimilarly, the commutativity of the third diagram to the left of \\eqref{bialgebrageneratorequations} follows from the commutativity of the diagram on the right above.\n\\end{proof}\n\n\\section{Examples}\n\n\\label{examplesec}\n\nIn this section we give examples of generators and bases for algebras over the powerset-, downset-, distribution-, and neighbourhood monad.\n\n\\subsection{Powerset monad}\n\n\\label{powersetmonadsec}\n\nThe powerset monad $\\mathbb{P} = (\\mathcal{P}, \\mu, \\eta)$ on the category of sets is arguably the best-known monad. Its underlying set endofunctor $\\mathcal{P}$ assigns to a set its powerset, and to a function the map that sends a subset to its direct image under that function. The unit and multiplication transformations are given by \n\\[ \\eta_X(x) = \\lbrace x \\rbrace \\qquad \\mu_X(U) = \\bigcup_{A \\in U} A. \\]\n\nThe category of algebras for the powerset monad is famously isomorphic to the category of complete lattices and join-preserving functions. 
Indeed, since the existence of all joins is equivalent to the existence of all meets, one can define, given a $\\mathbb{P}$-algebra $(X,h)$, a complete lattice $(X, \\leq)$ with $x \\leq y :\\Leftrightarrow h(\\lbrace x, y \\rbrace) = y$ and supremum $\\bigvee A := h(A)$. Conversely, given a complete lattice $(X, \\leq)$ with supremum $\\bigvee$ one defines a $\\mathbb{P}$-algebra $(X,h)$ by $h(A) := \\bigvee A$. \n\nLet $F$ be the set endofunctor satisfying $FX = X^A \\times 2$. Coalgebras for $F$ are easily recognised as deterministic unpointed automata over the alphabet $A$, and coalgebras for the composition $F\\mathcal{P}$ coincide with unpointed non-deterministic automata over the same alphabet. The output set $2$ can be equipped with a disjunctive $\\mathbb{P}$-algebra structure, such that bialgebras for the induced canonical distributive law between $\\mathbb{P}$ and $F$ consist of deterministic automata with a complete lattice as state space and supremum-preserving transition and output functions \\cite{jacobs2012trace}. For instance, every deterministic automaton derived from a non-deterministic automaton via the classical subset construction is of such a form. \n\n The following observation relates generators for algebras over $\\mathbb{P}$ with the induced complete lattice structure of the latter.\n\n\\begin{lemma}\n\\label{generatorpowerset}\n\t$(Y, i, d)$ is a generator for a $\\mathbb{P}$-algebra $(X,h)$ iff \n\\[\nx = \\bigvee_{y \\in d(x)} i(y)\n\\]\nfor all $x \\in X$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the equality\n\\begin{align*}\n\th \\circ \\mathcal{P}i \\circ d(x) &= h(\\lbrace i(y) \\mid y \\in d(x) \\rbrace) = \\bigvee_{y \\in d(x)} i(y).\n\\end{align*}\t\n\\end{proof}\n\nRecall that a non-zero element $x \\in X$ of a lattice $L = (X, \\leq)$ is called join-irreducible if $x = y \\vee z$ implies $x = y$ or $x = z$. 
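The generator of join-irreducibles described by \\Cref{generatorpowerset} can be checked mechanically on a small finite lattice. The following Python sketch (all names are illustrative, not part of the development above) takes the divisors of 12 ordered by divisibility, with join given by the least common multiple:

```python
from math import lcm
from functools import reduce

# Finite lattice: divisors of 12 ordered by divisibility, with join given by
# lcm and bottom element 1. Names are illustrative only.
L = [1, 2, 3, 4, 6, 12]

def join(A):
    return reduce(lcm, A, 1)  # the algebra structure h: P(L) -> L; empty join = bottom

def join_irreducibles(L):
    # x is join-irreducible if x is not the bottom and x = y v z forces x in {y, z}
    return [x for x in L if x != 1 and
            all(x in (y, z) for y in L for z in L if join([y, z]) == x)]

J = join_irreducibles(L)                          # here: [2, 3, 4]
d = {x: [a for a in J if x % a == 0] for x in L}  # d(x): the irreducibles below x
assert all(join(d[x]) == x for x in L)            # every x is the join of d(x)
```

Here the join-irreducibles are 2, 3 and 4, and every element, including the bottom via the empty join, is recovered as the join of the join-irreducibles below it.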
\n It is well-known that in a finite lattice, or more generally in a lattice satisfying the descending chain condition, any element is the join of the join-irreducible elements below it. In other words, by \\Cref{generatorpowerset}, if $i: \\mathcal{J}(L) \\rightarrow L$ is the subset embedding of join-irreducibles and $d: L \\rightarrow \\mathcal{P}(\\mathcal{J}(L))$ satisfies \\[ d(x) = \\lbrace a \\in \\mathcal{J}(L) \\mid a \\leq x \\rbrace, \\] then $(\\mathcal{J}(L), i, d)$ is a generator for the $\\mathbb{P}$-algebra $L$.\n\n\\subsection{Downset monad}\n\nIn the previous section we have seen that the category of algebras for the powerset monad is equivalent to the category of complete lattices and join-preserving functions. It is probably slightly less well-known that there exists a second monad with the same category of algebras: the downset monad $\\mathbb{P}_{\\downarrow} = (\\mathcal{P}_{\\downarrow}, \\mu, \\eta)$ on the category of posets. \n\nFor a subset $Y$ of a poset let $Y_{\\downarrow}$ be its so-called downward closure, that is, the set of those poset elements that lie below at least one element of $Y$. A subset is called a downset, or downclosed, if it coincides with its downward closure. The endofunctor $\\mathcal{P}_{\\downarrow}$ underlying the downset monad assigns to a poset the inclusion-ordered poset of its downclosed subsets, and to a monotone function the monotone function mapping a downclosed subset to the downclosure of its direct image. \nThe natural transformations $\\eta$ and $\\mu$ are further given by\n\\[\n\\eta_P(x) = \\lbrace x \\rbrace_{\\downarrow} \\qquad \\mu_P(U) = \\bigcup_{A \\in U} A.\n\\]\n\nGiven an algebra $(P,h)$ over $\\mathbb{P}_{\\downarrow}$, one verifies that the poset $P$ is a complete lattice with supremum $\\bigvee A := h(A_{\\downarrow})$. Conversely, given a complete lattice $P$ with supremum $\\bigvee$, one defines an algebra $(P,h)$ over $\\mathbb{P}_{\\downarrow}$ by $h(A) := \\bigvee A$. 
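The downward closure and the monad operations just defined can be sketched concretely for a small poset. The following Python fragment (illustrative names only) again uses the divisors of 12 ordered by divisibility:

```python
# Downward closure in a finite poset: divisors of 12 ordered by divisibility.
# Names are illustrative; this only sketches the unit and multiplication.
P = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0  # a <= b iff a divides b

def down(Y):
    return frozenset(x for x in P if any(leq(x, y) for y in Y))  # downward closure

def eta(x):
    return down({x})  # unit: the principal downset of x

def mu(U):
    return frozenset().union(*U)  # multiplication: union of a set of downsets

assert eta(4) == {1, 2, 4}
assert down({6}) == {1, 2, 3, 6}
# unions of downsets are again downclosed, so mu lands in downsets:
assert down(mu({eta(3), eta(4)})) == mu({eta(3), eta(4)})
```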
The following observation relates generators for algebras over $\\mathbb{P}_{\\downarrow}$ with the induced complete lattice structure of the latter.\n\n\\begin{lemma}\n\t$(Y,i,d)$ is a generator for a $\\mathbb{P}_{\\downarrow}$-algebra $(P,h)$ iff\n\t\\[\n\tx = \\bigvee_{y \\in d(x)} i(y)\n\t\\]\n\tfor all $x \\in P$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the equality\n\t\\[\n\th \\circ \\mathcal{P}_{\\downarrow}i \\circ d(x) = h(\\lbrace i(y) \\mid y \\in d(x) \\rbrace_{\\downarrow}) = \\bigvee_{y \\in d(x)} i(y).\n\t\\]\n\\end{proof}\n\nIt is due to Birkhoff's Representation Theorem that if $L$ is a finite (hence complete) distributive lattice, the monotone function $d: L \\rightarrow \\mathcal{P}_{\\downarrow}(\\mathcal{J}(L))$, assigning to an element the downward closed set of join-irreducibles below it, is an isomorphism with an inverse satisfying $A \\mapsto \\bigvee A$. In other words, in light of the previous result, if $i: \\mathcal{J}(L) \\rightarrow L$ is the subset embedding of join-irreducibles, then $(\\mathcal{J}(L), i, d)$ constitutes a basis for the $\\mathbb{P}_{\\downarrow}$-algebra $L$.\n\n\n\n\\subsection{Distribution monad}\n\nThe distribution monad $\\mathbb{D} = (\\mathcal{D}, \\mu, \\eta)$ on the category of sets is given as follows. The underlying set endofunctor $\\mathcal{D}$ assigns to a set $X$ its set of distributions with finite support,\n\\[\n\\mathcal{D}X = \\lbrace p: X \\rightarrow \\lbrack 0, 1 \\rbrack \\mid \\textnormal{supp}(p) \\textnormal{ finite, and } \\sum_{x \\in X} p(x) = 1 \\rbrace\n\\]\n\tand to a function $f: X \\rightarrow Y$ the direct image $\\mathcal{D}f: \\mathcal{D}X \\rightarrow \\mathcal{D}Y$ satisfying the equality \n\t\\[ \\mathcal{D}(f)(p)(y) = \\sum_{x \\in f^{-1}(y)} p(x). \\]\nThe natural transformations $\\eta$ and $\\mu$ are further given by \n\\[ \\eta_X(x)(y) = \\lbrack x = y \\rbrack \\qquad \\mu_X(\\Phi)(x) = \\sum_{p \\in \\mathcal{D}X} p(x) \\Phi(p). 
\\]\n\nIt is well-known that the category of algebras for the distribution monad is isomorphic to the category of convex sets and affine functions \\cite{jacobs2010convexity}. Indeed, any $\\mathbb{D}$-algebra $(X,h)$ can be turned into a convex set in a unique way, with finite convex combinations defined by $\\sum_{i} r_i x_i := h(p)$ for $p(x) = r_i$ if $x=x_i$, and $p(x) = 0$ otherwise. \n\nLet $F$ be the set endofunctor satisfying $FX = X^A \\times \\lbrack 0, 1 \\rbrack$. Coalgebras for the composed endofunctor $F\\mathcal{D}$ are known as unpointed Rabin probabilistic automata over the alphabet $A$ \\cite{rabin1963probabilistic, rutten2013generalizing}. The unit interval can be equipped with a $\\mathbb{D}$-algebra structure $h$ which satisfies $h(p) = \\sum_{x \\in \\lbrack 0, 1 \\rbrack} x p(x)$ and induces a canonical distributive law between $\\mathbb{D}$ and $F$ \\cite{jacobs2012trace}. The respective bialgebras consist of unpointed Moore automata with input $A$ and output $\\lbrack 0, 1 \\rbrack$, a convex set as state space, and affine transition and output functions. For instance, every Moore automaton derived from a probabilistic automaton by assigning to the state space of the latter all its distributions with finite support constitutes such a bialgebra.\n\n\n The next observation relates generators of algebras over $\\mathbb{D}$ with the induced convex set structure on the latter. For simplicity we assume that in the following statement the function $i: Y \\rightarrow X$ is an injection.\n\n\\begin{lemma}\n\t$(Y,i,d)$ is a generator for a $\\mathbb{D}$-algebra $(X,h)$ iff\n\t\\[\n\tx = \\sum_{y \\in Y} d(x)(y) i(y)\t\\]\n\tfor all $x \\in X$.\n\\end{lemma}\n\\begin{proof}\nFor $x \\in X$ let $p_{x} \\in \\mathcal{D}X$ be the distribution satisfying the equality $p_{x}(\\overline{x}) = \\sum_{y \\in i^{-1}(\\overline{x})} d(x)(y)$. Since $i$ is injective we find $p_{x}(\\overline{x}) = d(x)(y)$ if $\\overline{x} = i(y)$, and $p_{x}(\\overline{x}) = 0$ otherwise. 
Thus we can deduce\t\\[\t\th \\circ \\mathcal{D}i \\circ d(x) = h(p_{x}) = \\sum_{y \\in Y} d(x)(y) i(y). \\]\n\t\n\\end{proof}\n\n\\subsection{Neighbourhood monad}\n\nIt is well-known that the contravariant set functor assigning to a set its powerset, and to a function the function that precomposes a characteristic function with the former, is dually self-adjoint. The monad $\\mathbb{H} = (N, \\mu, \\eta)$ induced by the adjunction is known as the neighbourhood monad, since its coalgebras are related to neighbourhood frames in modal logic \\cite{jacobs2017recipe}. Its underlying set endofunctor $N$ is given by\n\\begin{gather*}\n\tNX = 2^{2^X} \\qquad Nf(\\Phi)(\\varphi) = \\Phi(\\varphi \\circ f), \n\\end{gather*}\nand its unit $\\eta$ and multiplication $\\mu$ satisfy\n\\begin{gather*}\n\t\\eta_X(x)(\\varphi) = \\varphi(x) \\qquad \\mu_X(\\Phi)(\\varphi) = \\Phi(\\eta_{2^X}(\\varphi)).\n\\end{gather*}\n\nRecall that a non-zero element of a lattice is called an atom if there exists no non-zero element strictly below it, and a lattice is called atomic if each element can be written as the join of the atoms below it. \nIt is well-known that (i) the category of algebras over $\\mathbb{H}$ is equivalent to the category of complete atomic Boolean algebras; and (ii) the category of complete atomic Boolean algebras is dually equivalent to the category of sets \\cite{taylor2002subspaces}. 
\n\nIn more detail \\cite{bezhanishvili2020minimisation}, the equivalence (i) assigns to an $\\mathbb{H}$-algebra $(X,h)$ the complete atomic Boolean algebra on $X$ with pointwise induced operations\n\\begin{equation}\n\t\\label{inducedcaba}\n\t\\begin{gathered}\n\t0 = h(\\emptyset) \\qquad 1 = h(2^X) \\qquad \\neg x = h(\\sim \\eta_X(x)) \\\\\n\t \\bigvee A = h(\\bigcup_{x \\in A} \\eta_X(x)) \\qquad \\bigwedge A = h(\\bigcap_{x \\in A} \\eta_X(x)),\n\\end{gathered}\n\\end{equation}\nwhile the equivalence (ii) assigns to a complete atomic Boolean algebra $B$ its set of atoms $At(B)$, and to a set $X$ the complete atomic Boolean powerset algebra $\\mathcal{P}X$. The mutual invertibility of the assignments in the latter equivalence is witnessed by the Boolean algebra isomorphism between $B$ and $\\mathcal{P}(At(B))$ that assigns to an element the set of atoms below it.\nFor $K$ the Eilenberg-Moore comparison functor induced by the monadic self-dual powerset adjunction, and $J$ the equivalence in (i), one may recover the representation in (ii) as the composition of $J$ with $K$ \\cite{taylor2002subspaces}.\n\nAs before, let $F$ be the set endofunctor satisfying $FX = X^A \\times 2$. One verifies that coalgebras for the composed endofunctor $F N$ can be recognised as unpointed alternating automata \\cite{bezhanishvili2020minimisation}. The set $2$ can be equipped with an $\\mathbb{H}$-algebra structure $h$ which satisfies $h(\\varphi) = \\varphi(\\textnormal{id}_2)$ and induces a canonical distributive law between $\\mathbb{H}$ and $F$ \\cite{jacobs2012trace}. The corresponding bialgebras consist of deterministic automata with a complete atomic Boolean algebra as state space and join-preserving transition and output Boolean algebra homomorphisms. For instance, every deterministic automaton derived from an alternating automaton via a double subset construction is of such a form. 
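The proof of the lemma below rests on the fact that, after identifying subsets with characteristic functions, an element of NY is determined by the family of subsets it accepts. On a three-element set this can be checked exhaustively; the Python sketch below uses an illustrative encoding of such an element as the set of subsets it maps to 1:

```python
from itertools import combinations

Y = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(Y) + 1) for c in combinations(Y, r)]

# An element Phi of NY = 2^(2^Y), encoded as the family of subsets it maps to 1.
Phi = {frozenset(), frozenset({0, 2})}

def dnf(Phi, psi):
    # the disjunctive normal form from the proof: one complete conjunction per A in Phi
    return any(all((y in psi) == (y in A) for y in Y) for A in Phi)

# Phi(psi) holds iff psi, viewed as a subset, belongs to Phi:
assert all(dnf(Phi, psi) == (psi in Phi) for psi in subsets)
```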
\n\nThe next observation relates generators of $\\mathbb{H}$-algebras with the induced complete atomic Boolean algebra structure on the latter.\n\n\\begin{lemma}\n\t$(Y, i, d)$ is a generator for an $\\mathbb{H}$-algebra $(X,h)$ iff \n\t\\[\n\tx = \\bigvee_{A \\in d(x)} ( \\bigwedge_{y \\in A} i(y) \\wedge \\bigwedge_{y \\not \\in A} \\neg i(y) )\n\t\\]\n\tfor all $x \\in X$. \n\\end{lemma}\n\\begin{proof}\nOne verifies that after identifying a subset with its characteristic function, any $\\Phi \\in NY$ satisfies \\[ \\Phi(\\psi) = \\bigvee_{A \\in \\Phi} ( \\bigwedge_{y \\in A} \\psi(y) \\wedge \\bigwedge_{y \\not \\in A} \\neg \\psi(y)). \\] In particular we can deduce the following equality,\n\t\\begin{align*}\n\t\tx &= h \\circ Ni \\circ d(x) = h(\\lambda \\varphi.d(x)(\\varphi \\circ i)) \\\\\n\t\t&= h(\\bigcup_{A \\in d(x)} (\\bigcap_{y \\in A} \\eta_X(i(y)) \\cap \\bigcap_{y \\not \\in A} \\sim \\eta_X(i(y)))).\n\t\\end{align*}\n\tThe statement follows from the latter by \\eqref{inducedcaba}. \n\t\\end{proof}\n\n\n\\section{Related work}\n\n\\label{relatedworksec}\n\nOne of the main motivations for the present paper has been our broad interest in active learning algorithms for state-based models \\cite{angluin1987learning}, in particular automata for NetKAT \\cite{anderson2014netkat}, a formal system for the verification of networks based on Kleene Algebra with Tests \\cite{kozen1996kleene}. \nOne of the main challenges in learning non-deterministic models such as NetKAT automata is the common lack of a unique minimal acceptor for a given language \\cite{denis2002residual}. The problem has been independently approached for different variants of non-determinism, often with the common idea of finding a subclass admitting a unique representative \\cite{esposito2002learning, berndt2017learning}. A more general and unifying perspective has been given in \\cite{van2017learning} by van Heerdt; see also \\cite{van2016master, van2020phd}. 
\n\nOne of the central notions in the work of van Heerdt is the concept of a scoop, originally introduced by Arbib and Manes \\cite{arbib1975fuzzy}.\nIn the present paper the notion coincides with what we call a generator in \\Cref{generatordefinition}. Scoops have primarily been used as a tool for constructing minimal realisations of automata, similarly to \\Cref{forgenerator-isharp-is-bialgebra-hom}. Strengthening the definition of Arbib and Manes to the notion of a basis in \\Cref{basisdefinition} allows us to further extend such automata-theoretical results, e.g. \\Cref{forbasis-bialgebra-areisomorphic}, but also uncovers connections with universal algebra, leading for instance to a representation theory of algebra homomorphisms in the same framework.\n\nA generalisation of the notion of a basis to algebras of arbitrary monads has been approached before. For instance, in \\cite{jacobs2011bases} Jacobs defines a basis for an algebra as a coalgebra for the comonad on the category of algebras induced by the free algebra adjunction. One can show that a basis in the sense of \\Cref{basisdefinition} always induces a basis in the sense of \\cite{jacobs2011bases}. Conversely, given certain assumptions about the existence and preservation of equalisers, it is possible to recover a basis in the sense of \\Cref{basisdefinition} from a basis in the sense of \\cite{jacobs2011bases}. Starting with a basis in the sense of \\Cref{basisdefinition}, the composition of both translations yields a basis that is not less compact than the basis one began with; in certain cases they coincide. As equalisers do not necessarily exist and are not necessarily preserved, our approach carries additional data and thus can be seen as finer. 
\n\n\n\\section{Discussion and future work}\n\n\\label{discussionsec}\n\nWe have presented a notion of a basis for an algebra over a monad on an arbitrary category that subsumes the familiar notion for algebraic theories.\nWe have covered questions about the existence and uniqueness of bases, and established a representation theory for homomorphisms between algebras over a monad in the spirit of linear algebra by substituting Kleisli morphisms for matrices. \nBuilding on foundations in the work of Arbib and Manes \\cite{arbib1975fuzzy}, we further have established that a basis for the underlying algebra of a bialgebra yields an isomorphic bialgebra with free state space.\nMoreover, we have established an equivalence between the category of bases for an algebra and the category of its isomorphic free algebras, and looked into bases for bialgebras.\nFinally we gave characterisations of bases for algebras over the powerset, downset, distribution, and neighbourhood monad.\n\nFor the future we are particularly interested in using the present work for a unified view on the theory of residual finite state automata \\cite{denis2002residual} (RFSA) and variations of it, for instance the theories of residual probabilistic automata \\cite{esposito2002learning} and residual alternating automata \\cite{berndt2017learning}. RFSA are non-deterministic automata that share with deterministic automata two important properties: for any regular language they admit a unique minimal acceptor, and the language of each state is a residual of the language of its initial state. In \\Cref{RFSAexample} we have demonstrated that the so-called canonical RFSA can be recovered as the bialgebra with free state space induced by a generator of join-irreducibles for the underlying algebra of a particular bialgebra. We believe we can uncover similar correspondences for other variations of non-determinism. 
Similar ideas have already been served as motivation in the work of Arbib and Manes \\cite{arbib1975fuzzy} and have recently come up again in \\cite{myers2015coalgebraic}. We are also interested in insights into the formulation of active learning algorithms along the lines of \\cite{angluin1987learning} for different classes of residual automata, as sketched in the related work section.\n\n\\section{Acknowledgements}\nThis research has been supported by GCHQ via the VeTSS grant \"Automated black-box verification of networking systems\" (4207703\/RFA 15845).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/teaser.png}\n \\caption{Fusing a single-view depth probability distribution predicted by a DNN with standard photometric error terms helps to resolve ambiguities in the photometric error due to occlusions or lack of texture. The above projected keyframe depth map was created by our system.}\n \\label{fig:teaser}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\nThere has been continued research interest in using structure-for-motion (SfM) and visual simultaneous localisation and mapping (SLAM) for the incremental creation of dense 3D scene geometry due to its potential applications in safe robotic navigation, augmented reality, and manipulation.\nUntil recently, dense monocular reconstruction systems typically worked by minimising the photometric error over several frames.\nAs this minimisation problem is not well-constrained due to occlusion boundaries or regions of low texture, most reconstruction systems employ regularisers based on smoothness (\\cite{Newcombe:etal:ICCV2011}, \\cite{Pizzoli:etal:ICRA2014}) or planar (\\cite{Concha:etal:RSS2014, Concha:Civera:IROS2015, Concha:etal:ICRA2016}) assumptions.\n\nWith the continued success of deep learning in computer vision, there have been many suggestions for data-driven approaches to the monocular 
reconstruction problem.\nSeveral of these approaches propose a completely end-to-end framework, predicting the scene geometry from either a single image (\\cite{Eigen:etal:NIPS2014, Laina:etal:3DV2016, Godard:etal:CVPR2017, Fu:etal:CVPR2018}) or several consecutive frames (\\cite{Ummenhofer:etal:CVPR2017, Zhou:etal:CVPR2017, Mahjourian:etal:CVPR2018, Zhou:etal:ECCV2018, Yao:etal:ECCV2018, Chang:Chen:CVPR2018}).\nMost promising, however, are those systems that combine deep learning with standard geometric constraints (\\cite{Weerasekera:etal:ICRA2017, Tateno:etal:CVPR2017, Yang:etal:ECCV2018, Bloesch:etal:CVPR2018, Wang:etal:CVPR2018, Laidlow:etal:ICRA2019, Tang:Tan:2019}).\nIt was shown in \\cite{Facil:etal:RAL2017} that learning-based and geometry-based approaches have a complementary nature as learning-based systems tend to perform better on the interior points of objects but blur edges, whereas geometry-based systems typically do well on areas with a high image gradient but perform poorly on interior points that may lack texture.\n\nThe optimal way to combine these two approaches, however, is not clear.\nThe best current results seem to come from systems that take the output of traditional geometry-based systems and feed these into a deep neural network (DNN). 
A particularly impressive example of this type of system is DeepTAM \\cite{Zhou:etal:ECCV2018}, which passes a photometric cost volume through a network to extract a depth map.\n\nIt may be desirable, however, to use learning-based systems as an additional component that is fused into the pipeline of a traditional system.\nThis approach would keep the probabilistic framework of the reconstruction system, a requirement for many robotic applications.\nPossible benefits of such a framework include avoiding the necessity of having to perform an expensive neural network pass every time the geometric information is updated, and, as DNNs perform best on images close to the training dataset, it might be possible to switch the network component on or off or switch between different networks depending on the environment being reconstructed.\nThe difficulty of this approach, however, is that to probabilistically fuse the network outputs into a 3D reconstruction system, some measure of the uncertainty associated with each prediction is required.\n\nIn this paper, we propose a 3D reconstruction system that fuses together the output of a DNN with a standard photometric cost volume to create dense depth maps for a set of keyframes.\nWe train a network to predict a discrete, nonparametric probability distribution for the depth of each pixel over a given range from a single image.\nLike \\cite{Liu:etal:CVPR2019}, we refer to this collection of probability distributions for each pixel in the keyframe as a ``probability volume''.\nThen, with each subsequent frame, we create a probability volume based on the photometric consistency between the current frame and the keyframe image and fuse this into the keyframe volume.\nThe main contribution of this paper is to demonstrate that combining the probability volumes from these two sources often results in a better conditioned probability volume.\nWe extract depth maps from the probability volume by optimising a cost function that includes 
a regularisation term based on network-predicted surface normals and occlusion boundaries.\nPlease see Figure \\ref{fig:teaser} for an example keyframe reconstruction created by our system.\n\n\n\n\\section{RELATED WORK}\n\nIn general, uncertainty can be classified into two categories: model or epistemic uncertainty, and statistical or aleatoric uncertainty. In \\cite{Gal:Ghahramani:ICML2016}, the authors suggest using a Monte Carlo dropout technique to estimate the model uncertainty of a network, but this requires multiple expensive network passes.\n\nLike \\cite{Bishop:MDN}, the authors of \\cite{Kendall:Gal:NIPS2017} propose having the network predict its own aleatoric uncertainty and using a Gaussian or Laplacian likelihood as the loss function during training, which was used by \\cite{Laidlow:etal:ICRA2019} for 3D reconstruction.\nThe problem with this approach is that it forces the network to predict a parametric and unimodal distribution.\nAs shown in \\cite{Campbell:etal:ECCV2008}, this type of distribution may be particularly ill-suited to dense reconstruction where there is a clear need for a multi-hypothesis prediction.\n\nOne proposal has been to use a multi-headed network (\\cite{Zhou:etal:ECCV2018}, \\cite{Peretroukhin:etal:CVPRW2019}) with each head making a separate prediction.\nFrom these many predictions, one can calculate the mean and covariance to use in a probabilistic fusion algorithm.\nThe drawbacks of this approach are that it increases the size of the network and requires a careful balancing of the relative size of the network body and heads. 
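The mean-and-covariance reduction over head predictions mentioned above is a simple per-pixel moment computation. The following Python sketch is illustrative only; the number of heads and the map resolution are assumptions, not values from the cited systems:

```python
import numpy as np

# M hypothetical depth-prediction heads, each outputting an (H, W) depth map.
M, H, W = 8, 192, 256
rng = np.random.default_rng(0)
head_depths = 2.0 + 0.1 * rng.standard_normal((M, H, W))

# Reduce the M samples to per-pixel moments usable by a probabilistic fusion step.
mean = head_depths.mean(axis=0)   # fused per-pixel depth estimate
var = head_depths.var(axis=0)     # per-pixel predictive uncertainty

assert mean.shape == (H, W) and var.shape == (H, W)
assert (var >= 0).all()
```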
\n\nRecently, both \\cite{Fu:etal:CVPR2018} and \\cite{Liu:etal:CVPR2019} achieved impressive results by having their networks predict discrete, nonparametric probability distributions.\nWhile \\cite{Liu:etal:CVPR2019} uses these distributions to fuse the output with other network predictions, to the best of our knowledge, no one has used this method to fuse the predictions of networks with the output of standard reconstruction pipelines, which is what we aim to do in this paper.\n\n\n\n\\section{METHOD}\n\nIn this section, we describe our method for fusing predictions from DNNs into a standard 3D reconstruction pipeline to produce dense depth maps.\n\nOur system represents the observed geometry as a collection of keyframe-based ``probability volumes''.\nThat is, instead of representing the surface as a depth map with a single depth estimate per pixel, the depth is represented with a per-pixel discrete probability distribution over a given depth range.\nThese probability volumes are initialised with the output of a monocular depth prediction network.\nWith each additional RGB image, the system computes a cost volume based on the photometric consistency.\nThis cost volume is then converted to a probability volume and fused into the volume of the current keyframe.\nOnce the number of inliers drops below a given threshold, a new keyframe is created. 
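The fusion step in this pipeline amounts to a bin-wise product of per-pixel distributions followed by a renormalisation along each ray. The following Python sketch illustrates this; the array shapes and the uniform prior are assumptions for illustration:

```python
import numpy as np

def fuse(p_kf, p_rf, eps=1e-12):
    """Fuse two (H, W, K) probability volumes bin-wise, renormalising each ray."""
    p = p_kf * p_rf
    return p / (p.sum(axis=-1, keepdims=True) + eps)

H, W, K = 192, 256, 64
rng = np.random.default_rng(0)
p_net = np.full((H, W, K), 1.0 / K)               # uninformative single-view prior
p_photo = rng.dirichlet(np.ones(K), size=(H, W))  # per-pixel photometric distribution

fused = fuse(p_net, p_photo)
assert np.allclose(fused.sum(axis=-1), 1.0)
# fusing with a uniform prior leaves the photometric distribution unchanged:
assert np.allclose(fused, p_photo, atol=1e-6)
```

Fusing with a uniform prior leaving the other distribution unchanged matches the intuition that an uninformative network prediction should not bias the recovered geometry.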
To propagate information from one keyframe to another, we warp the previous distribution and fuse it into the new one.\n\nWhen we want to extract a depth map from the probability volume, we could take the maximum probability depth values, but in featureless regions where there is also high network uncertainty this would be susceptible to false minima and cause local inconsistencies in the prediction.\nAlso, as the probability distribution is discrete, taking the maximum would result in a quantisation of the final depth prediction.\nTo overcome these shortcomings, we first construct a smooth probability density function (PDF) from the volume using a kernel density estimation (KDE) technique.\nWe then minimise the negative log probability of this PDF along with a regularisation term.\nWhile many dense systems propose using regularisers based on smoothness (\\cite{Newcombe:etal:ICCV2011, Pizzoli:etal:ICRA2014}) or planar (\\cite{Concha:etal:RSS2014, Concha:etal:ICRA2014, Concha:Civera:IROS2015}) assumptions, we follow the examples of \\cite{Weerasekera:etal:ICRA2017} and \\cite{Laidlow:etal:ICRA2019} and penalise our reconstruction for deviating from the surface normals predicted by a DNN.\n\n\\subsection{Multi-Hypothesis Monocular Depth Prediction}\n\nRather than predict a single depth value for each pixel, our network predicts a discrete depth probability distribution over a given range, similar to \\cite{Fu:etal:CVPR2018} and \\cite{Liu:etal:CVPR2019}.\nNot only does this allow the network to express uncertainty about its prediction, but it also allows the network to make a multi-hypothesis depth prediction.\nAs discussed in \\cite{Fu:etal:CVPR2018}, the prediction of the depth probability distribution can be improved by having a variable resolution over the depth range.\nWe choose a log-depth parameterisation, following the examples of \\cite{Weerasekera:etal:ICRA2018} and \\cite{Eigen:etal:NIPS2014}.\nBy uniformly dividing the depth range in log-space, we achieve 
the desired result of having higher resolution in the areas close to the camera and lower resolution farther away.\n\nFor our network architecture (see Figure \\ref{fig:network}), we use a ResNet-50 encoder \\cite{He:etal:CVPR2016} followed by three upsample blocks, each consisting of a bilinear upsampling layer, a concatenation with the input image, and then two convolutional layers to bring the output back up to the input resolution.\nAll inputs and outputs have a resolution of 256$\\times$192.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/network_diagram_arrows.png}\n \\caption{Our network consists of a ResNet-50 encoder with an output stride size of 8 and no global pooling layer. We then pass the output of the encoder through three upsample blocks consisting of a bilinear resize, concatenation with the input image, and then two convolutional layers to match the output resolution to the input. The probability distribution that the network outputs is discretised over 64 channels.}\n \\label{fig:network}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\nAs we are having the network predict a discrete distribution rather than a depth map, we cannot use a standard loss function based on the sum of squared errors.\nA cross-entropy loss would not be ideal either, as we would like to penalise the network less for predicting high probabilities in incorrect bins that are close to the true bin than in bins farther away.\nInstead, we choose to use the ordinal loss function proposed in \\cite{Fu:etal:CVPR2018}:\n\\begin{equation}\n\\begin{split}\n \\mathcal{L}(\\mbs \\theta) ={} & -\\sum_i {} \\Biggl[ {} \\sum_{k = 0}^{k_i^*} \\log(p_{\\mbs \\theta,i}(k_i^* \\geq k)) \\\\\n &+ \\sum_{k = k_i^* + 1}^{K-1} \\log(1 - p_{\\mbs \\theta,i}(k_i^* \\geq k)) \\Biggr],\n\\end{split}\n\\end{equation}\n\\noindent\nwhere\n\\begin{equation}\n p_{\\mbs \\theta,i}(k_i^* \\geq k) = \\sum_{j = k}^{K-1} p_{\\mbs \\theta,i}(k_i^* = 
j),\n\\end{equation}\n\\noindent\n$\\mbs \\theta$ is the set of network weights, $K$ is the number of bins over which the depth range is discretised, $k_i^*$ is the index of the bin containing the ground truth depth for pixel $i$, and $p_{\\mbs \\theta,i}(k_i^* = j)$ is the network prediction of the probability that the ground truth depth is in bin $j$.\n\nLike \\cite{Liu:etal:CVPR2019}, we train our network on the ScanNet RGB-D dataset \\cite{Dai:etal:CVPR2017}.\nNo fine-tuning was done on our evaluation dataset, the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012}.\nWe set the depth range to be between 10cm and 12m and group the log-depth values uniformly into 64 bins.\n\nEach keyframe created by our system is initialised with this network output.\n\n\\subsection{Fusion with Photometric Error Terms}\n\nFor each additional reference frame, we construct a DTAM-style cost volume \\cite{Newcombe:etal:ICCV2011}.\nFirst, we normalise both the keyframe and reference frame images by subtracting their means and dividing by their standard deviations.\nWe then calculate the photometric error by warping the normalised keyframe image into the reference frame for each depth value in the cost volume and taking the sum of squared differences on 3$\\times$3 patches.\nTo simplify the later fusion, we use the midpoint of each of the depth bins used for the network prediction as the depth values in the cost volume.\nPoses are obtained from an oracle, such as a separate tracking system like ORB-SLAM2 \\cite{Mur-Artal:etal:TRO2017}.\n\nTo convert to a probability volume, we separately scale the negative of the squared photometric error for each pixel such that it sums to one over the ray.\nWe then fuse this new probability volume, $p_{\\text{RF}}$, into the current keyframe volume, $p_{\\text{KF}}$:\n\\begin{equation}\n p_i(k_i^* = k) = p_{\\text{KF},i}(k_i^* = k) p_{\\text{RF},i}(k_i^* = k),\n\\end{equation}\n\\noindent\nfor each pixel $i$ and depth $k$, which is then scaled to sum to 
one over the ray.\n\n\\subsection{Kernel Density Estimation}\n\nTo avoid quantisation of the final depth prediction and to have a smooth function to use in the optimisation step, we construct a PDF for the depth of each pixel using a KDE technique with Gaussian basis functions:\n\\begin{equation}\n f_i(d) = \\sum_{k = 0}^{K-1} p_i(k_i^* = k) \\phi\\left(d; d(k), \\sigma\\right),\n\\end{equation}\n\\noindent\nwhere $\\phi\\left(d; \\mu, \\sigma\\right)$ is the probability density, evaluated at $d$, of the Gaussian distribution with mean $\\mu$ and standard deviation $\\sigma$, $d(k)$ is the depth value at the midpoint of bin $k$, and $\\sigma$ is a constant smoothing parameter across all pixels and depth values.\nThe value of $\\sigma$ is a hyperparameter that needs to be tuned empirically; we found that $\\sigma = 0.1$ works well in our setting.\n\nAn example of a discrete PDF produced by our system and the smoothed result after applying the KDE technique is shown in Figure \\ref{fig:smoothing}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/smoothed.png}\n \\caption{Our fusion algorithm produces a discrete probability distribution for each pixel in the keyframe. 
To reduce discretisation errors and to have a continuous cost function for the optimiser, we convert the probability values along each ray into a smooth probability density function using a kernel density estimation technique.}\n \\label{fig:smoothing}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\n\\subsection{Regularisation}\n\nAlthough the fused probability volume has more local consistency than the photoconsistency terms alone, the result can still be improved by adding a regularisation term to the optimisation used to extract the depth map.\nWhile most dense reconstruction systems base their regularisers on smoothness or planarity assumptions, we propose using the surface normals predicted by a DNN, as was done in \\cite{Weerasekera:etal:ICRA2017} and \\cite{Laidlow:etal:ICRA2019}, since this may allow for better preservation of fine-grained local geometry.\nTo predict the surface normals from the keyframe image, we use the state-of-the-art network SharpNet \\cite{Ramamonjisoa:Lepetit:ICCVW2019}.\nAs we determine the local surface orientation of our depth estimate from neighbouring pixels and do not wish to incur high costs at depth discontinuities, we mask the regularisation term at occlusion boundaries, which are also predicted by SharpNet.\nSince SharpNet predicts a probability of each pixel belonging to an occlusion boundary, we include all pixels with a probability higher than 0.4 in the mask.\nExample predictions of surface normals and occlusion boundaries made by SharpNet on the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012} are shown in Figure \\ref{fig:sharpnet}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/sharpnet.png}\n \\caption{To regularise our depth estimate, we use the surface normals and occlusion boundaries predicted by SharpNet \\cite{Ramamonjisoa:Lepetit:ICCVW2019}. Some examples of the predictions made by SharpNet on the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012} are shown above. 
From left to right: input RGB images, predicted normals, predicted occlusion boundaries with a probability greater than 0.4.}\n \\label{fig:sharpnet}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\n\\subsection{Optimisation}\n\nTo extract a depth map from the probability volume, we minimise a cost function consisting of two terms:\n\\begin{equation}\n c(\\mbf{d}) = c_f(\\mbf{d}) + \\lambda c_{\\mbf{\\hat n}}(\\mbf{d}),\n\\end{equation}\n\\noindent\nwhere $\\mbf{d}$ is the set of depth values to be estimated, and $\\lambda$ is a hyperparameter used to adjust the strength of the regularisation term. Empirically, we found $\\lambda = 1.0 \\cdot 10^7$ to work well.\n\nThe first term, $c_f$, imposes a unary constraint on each of the pixels:\n\\begin{equation}\n c_{f}(\\mbf{d}) = -\\sum_i \\log \\left( f_i(d_i) \\right)\n\\end{equation}\n\\noindent\nwhere $f_i(d_i)$ is the smoothed PDF of pixel $i$ evaluated at depth $d_i$.\n\nThe second term, $c_{\\mbf{\\hat n}}$, is a regularisation term that combines two pairwise constraints:\n\\begin{equation}\n\\begin{split}\n c_{\\mbf{\\hat n}}(\\mbf{d}) = \\sum_i & b_i \\left( \\langle \\mbf{\\hat n}_i, d_i \\mbf{K}^{-1} \\mbf{\\tilde u}_i - d_{i+1} \\mbf{K}^{-1} \\mbf{\\tilde u}_{i+1} \\rangle \\right)^2 \\\\\n &+ b_i \\left( \\langle \\mbf{\\hat n}_i, d_i \\mbf{K}^{-1} \\mbf{\\tilde u}_i - d_{i+\\text{W}} \\mbf{K}^{-1} \\mbf{\\tilde u}_{i+\\text{W}} \\rangle \\right)^2\n\\end{split}\n\\end{equation}\n\\noindent\nwhere $b_i \\in \\{0, 1\\}$ is the value of the occlusion boundary mask for pixel $i$, $\\langle \\cdot, \\cdot \\rangle$ is the dot product operator, $\\mbf{\\hat n}_i$ is the normal vector predicted by SharpNet, $\\mbf{K}$ is the camera intrinsics matrix, $\\mbf{\\tilde u}_i$ is the homogeneous pixel coordinates for pixel $i$, and $\\text{W}$ is the width of the image in pixels.\n\nWe minimise the cost function by applying a maximum of 100 iterations of gradient descent with a step size of 0.2, and initialise the optimisation 
with the maximum probability depth values from the fused probability volume.\nAs the focus of this paper is on the benefits of fusing learning-based and geometry-based approaches, the system was not implemented to achieve real-time performance.\nCurrently, the forward pass of the network is not a major bottleneck (it takes approximately 53ms), but the process of going from a fused probability volume through the smoothing and optimisation to an extracted depth map can take up to 1.2s, depending on how many iterations are required before the stopping criterion is met.\nThis could be improved significantly by using Newton's method or the primal-dual algorithm instead of gradient descent; however, we leave this for future work.\n\n\n\\subsection{Keyframe Warping}\n\nTo avoid discarding information when each new keyframe is created, we warp the probability volume of the current keyframe and use it to initialise the new one.\nAs the probability volume is a distribution over the depth values of a pixel, however, warping the probability volume is not trivial.\nTo do this, we propose using a discrete variation of the method described in \\cite{Loop::etal::3DV2016}, where we first convert the depth probability distribution to an occupancy-based probability volume, in which, for each depth bin along the ray, there is a probability that the associated point in space is occupied. 
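For illustration, the two conversions used in this warping scheme (defined precisely by the equations below) can be sketched in NumPy as follows; this is a minimal sketch of the idea, not our actual implementation, and the function names are illustrative:

```python
import numpy as np

def depth_to_occupancy(p_depth):
    # p_depth: (K,) depth-bin probabilities for one pixel ray.
    # Marginalise the conditional p(S_k = 1 | k* = j), which is
    # 0 for k < j, 1 for k = j, and 1/2 for k > j.
    K = p_depth.shape[0]
    k = np.arange(K)[:, None]  # voxel (occupancy) index
    j = np.arange(K)[None, :]  # depth-bin index
    cond = np.where(k < j, 0.0, np.where(k == j, 1.0, 0.5))
    return cond @ p_depth      # occ[k] = sum_j cond[k, j] * p_depth[j]

def occupancy_to_depth(occ):
    # Invert after warping: p(k* = k) is proportional to
    # occ[k] * prod_{j < k} (1 - occ[j]); renormalise along the ray.
    free = np.concatenate(([1.0], np.cumprod(1.0 - occ[:-1])))
    p = free * occ
    return p / p.sum()
```

Any unknown voxels created by the warp would be filled with the default occupancy probability (0.01 in our system) before converting back.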
\nWe then warp this occupancy grid into the new frame and convert back to a depth probability distribution.\n\nWe start by defining the probability that the voxel $S_{k,i}$ (corresponding to depth bin $k$ along the ray of pixel $i$) is occupied, conditioned on the depth belonging to bin $j$:\n\\begin{equation}\n p(S_{k,i} = 1 \\lvert k_i^* = j) =\n \\begin{cases}\n 0 & \\text{if}\\ k < j \\\\\n 1 & \\text{if}\\ k = j \\\\\n \\frac{1}{2} & \\text{if}\\ k > j\n \\end{cases}\n\\end{equation}\n\nTo convert a depth probability distribution into an occupancy probability, we marginalise out the conditional:\n\\begin{equation}\n p(S_{k,i} = 1) = \\sum_{j=0}^{K-1} p_i(k_i^* = j) p(S_{k,i} = 1 \\lvert k_i^* = j)\n\\end{equation}\n\\noindent\nwhere $p_i(k_i^* = j)$ is the value of the depth probability volume in bin $j$ for pixel $i$.\n\nAs the occupancy grid represents probabilities for locations in 3D space, we can directly warp this into the new keyframe, filling in any unknown values with a default occupancy probability.\nIn our work, we use a default probability of 0.01.\n\nAfter warping, the occupancy grid can be converted back into a depth probability distribution:\n\\begin{equation}\n p_i(k_i^* = k) = \\prod_{j < k} \\left[1 - p(S_{j,i} = 1)\\right] p(S_{k,i} = 1),\n\\end{equation}\n\\noindent\nand scaled so that the distribution sums to one along the ray.\n\n\\section{EXPERIMENTAL RESULTS}\n\nWe evaluate our system on the Freiburg 1 sequences of the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012}.\nPlease note that only the RGB images are processed by our system; the depth channel is used only as a ``ground truth'' against which to validate our results.\n\n\\subsection{Qualitative Results}\n\nFigure \\ref{fig:fusion} shows the various PDFs for a sample of four pixels taken from a keyframe in the TUM RGB-D sequence \\textit{fr1\/desk}.\nThe PDFs in the first row are those predicted by the DNN. 
Note that the network is able to make multi-hypothesis predictions and can have varying degrees of certainty.\nThe PDFs in the second row are those that result from the photometric cost volume.\nFor some of the pixels (such as pixels A and C), the photometric error results in a clear peak.\nThis situation is most often found on corners and edges in the image, where there are large intensity gradients.\nFor pixels in textureless regions, on occlusion boundaries, or in areas with repeating patterns, the photometric PDF may have many peaks (such as pixel B) or no peak at all (such as pixel D).\nThe final row of the figure shows the fused PDF for each of the pixels.\nBy fusing the two PDFs together, uncertainty can be reduced and ambiguous photometric data can be resolved.\n\nAn example reconstruction for a single keyframe with various ablations is shown in Figure \\ref{fig:qualex}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/mosaic_small.png}\n \\caption{This figure shows a grid of probability densities for a sample of four pixels from a keyframe (left). The first row, in red, shows the probability densities predicted by the network. The second row, in green, shows the probability densities estimated from the photometric error after the addition of 25 reference frames. The final row, in blue, shows the fused probability densities that result from our algorithm. Note that both the network and the photometric error are capable of producing multiple peaks. In some cases (such as pixel C), both the network and the photometric methods produce good estimates. In others (such as pixel A), both the network and photometric error are relatively uncertain, but together produce a strong peak. In pixels B and D, the network helps resolve ambiguous photometric peaks caused by either repeated texture or a lack of texture. 
The vertical black bars show the location of the ground truth depth.}\n \\label{fig:fusion}\n \\vspace{2mm}\\hrule\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/qualex_labels.png}\n \\caption{Qualitative results from an example keyframe and 6 additional reference frames in the TUM RGB-D \\textit{fr1\/360} sequence. The top left image is the keyframe image, and the bottom left is the ground truth depth. The remaining images on the top row are the depth estimates obtained by taking the maximum probability depth from each corresponding probability volume. The bands of colour show the quantisation that results from using this method. The remaining images in the bottom row are the depth estimates that result after performing the optimisation step. Note that the photometric error is only capable of estimating the depth at pixels with a high image gradient (the repeated edges are the result of pose error). While using only the network prediction results in a good reconstruction, the best reconstruction is obtained by fusing the network and photometric volumes together.}\n \\label{fig:qualex}\n \\vspace{2mm}\\hrule\n\\end{figure*}\n\n\\subsection{Quantitative Evaluation}\n\nWe demonstrate the value of fusion on the reconstruction pipeline by comparing the performance of the system on each of the Freiburg 1 TUM RGB-D sequences under three different scenarios: using only the network probability volume, using only the photometric probability volume, and using the fused probability volume.\nTo isolate the performance of our reconstruction system, we use the ground truth poses provided in the dataset.\nWe evaluate the performance using three metrics defined in \\cite{Eigen:etal:NIPS2014}: the absolute relative difference (L1-rel), the squared relative difference (L2-rel) and the root mean squared error (RMSE).\nNote that since the photometric probability volume has extremely noisy results on textureless surfaces, we found 
that the results were improved by initialising the optimisation with the expected value of the depth from the probability volume rather than the highest probability depth.\n\nThe results are presented in Table \\ref{tab:fusion_ablation}.\nIn six of the sequences, the best result is achieved by fusing together the network and photometric probability volumes.\nWhile there is a large performance gain in using the network over the photometric probability volume, the best outcome is achieved by fusing the two together.\nIn one of the sequences (\\textit{fr1\/floor}), the best result is achieved by using only the photometric probability volume.\nFor this entire sequence, the camera is aimed at a bare wooden floor, and, being well outside the training distribution, the network produces particularly bad priors.\nIn the remaining two sequences, the best result is achieved using only the network probability volume.\nIn one of these sequences (\\textit{fr1\/rpy}), the camera motion is purely rotational and the photoconsistency-based subsystem is not able to produce meaningful depth estimates.\n\n\n\\begin{table}[H]\n\\centering\n\\def0.9{0.9}\n\\begin{tabular}{l l c c c}\n \\toprule\n \\textbf{Sequence} & \\textbf{System} & \\textbf{L1-rel} & \\textbf{L2-rel} & \\textbf{RMSE} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/360} & Network-Only & 0.193 & 0.147 & \\textbf{0.555} \\\\\n & Photometric-Only & 0.633 & 1.008 & 1.514 \\\\\n & Fused & \\textbf{0.191} & \\textbf{0.143} & \\textbf{0.555} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.194 & 0.174 & 0.559 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk} & Network-Only & 0.295 & 0.201 & 0.447 \\\\\n & Photometric-Only & 0.541 & 0.503 & 0.859 \\\\\n & Fused & \\textbf{0.278} & \\textbf{0.177} & \\textbf{0.427} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.089 & 0.031 & 0.213 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk2} & Network-Only & 0.237 & 0.139 & \\textbf{0.423} \\\\\n & Photometric-Only & 0.522 & 0.494 & 0.890 \\\\\n & Fused & 
\\textbf{0.236} & \\textbf{0.138} & 0.424 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.111 & 0.049 & 0.270 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/floor} & Network-Only & 0.806 & 0.727 & 0.821 \\\\\n & Photometric-Only & \\textbf{0.488} & \\textbf{0.303} & \\textbf{0.562} \\\\\n & Fused & 0.785 & 0.691 & 0.796 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.131 & 0.034 & 0.156 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/plant} & Network-Only & 0.426 & 0.502 & \\textbf{0.816} \\\\\n & Photometric-Only & 0.726 & 1.422 & 1.983 \\\\\n & Fused & \\textbf{0.416} & \\textbf{0.485} & 0.833 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.167 & 0.143 & 0.602 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/room} & Network-Only & 0.231 & 0.155 & 0.493 \\\\\n & Photometric-Only & 0.605 & 0.762 & 1.187 \\\\\n & Fused & \\textbf{0.226} & \\textbf{0.147} & \\textbf{0.488} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.132 & 0.079 & 0.367 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/rpy} & Network-Only & \\textbf{0.242} & \\textbf{0.199} & \\textbf{0.577} \\\\\n & Photometric-Only & 0.514 & 0.577 & 1.047 \\\\\n & Fused & 0.255 & 0.212 & 0.614 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.154 & 0.101 & 0.427 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/teddy} & Network-Only & \\textbf{0.294} & \\textbf{0.271} & \\textbf{0.773} \\\\\n & Photometric-Only & 0.748 & 1.569 & 2.108 \\\\\n & Fused & 0.297 & 0.277 & 0.792 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.173 & 0.157 & 0.626 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/xyz} & Network-Only & 0.241 & 0.162 & 0.432 \\\\\n & Photometric-Only & 0.517 & 0.403 & 0.764 \\\\\n & Fused & \\textbf{0.225} & \\textbf{0.137} & \\textbf{0.401} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.065 & 0.017 & 0.164 \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Comparison of reconstruction errors on Freiburg 1 TUM RGB-D \\cite{Sturm:etal:IROS2012} sequences showing the relative performance of using only the network-predicted probability volume, only the photometric probability volume, and the 
fused probability volume. *Despite more accurate\nresults, DeepTAM does not maintain a probabilistic formulation.}\n\\label{tab:fusion_ablation}\n\\vspace{2mm}\\hrule\n\\end{table}\n\n\\begin{table}[h]\n\\centering\n\\def0.9{0.9}\n\\begin{tabular}{l l c c c}\n \\toprule\n \\textbf{Sequence} & \\textbf{System} & \\textbf{L1-rel} & \\textbf{L2-rel} & \\textbf{RMSE} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/360} & No Optimisation & 0.210 & 0.184 & 0.608 \\\\\n & Smoothing-Only & 0.207 & 0.179 & 0.601 \\\\\n & Total Variation & 0.194 & 0.152 & 0.565 \\\\\n & Normals + Occlusions & \\textbf{0.191} & \\textbf{0.143} & \\textbf{0.555} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk} & No Optimisation & 0.324 & 0.292 & 0.537 \\\\\n & Smoothing-Only & 0.323 & 0.289 & 0.533 \\\\\n & Total Variation & 0.296 & 0.226 & 0.470 \\\\\n & Normals + Occlusions & \\textbf{0.278} & \\textbf{0.177} & \\textbf{0.427} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk2} & No Optimisation & 0.283 & 0.240 & 0.537 \\\\\n & Smoothing-Only & 0.280 & 0.235 & 0.532 \\\\\n & Total Variation & 0.254 & 0.181 & 0.470 \\\\\n & Normals + Occlusions & \\textbf{0.236} & \\textbf{0.138} & \\textbf{0.424} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/floor} & No Optimisation & 0.806 & 0.772 & 0.861 \\\\\n & Smoothing-Only & 0.807 & 0.771 & 0.860 \\\\\n & Total Variation & 0.801 & 0.738 & 0.836 \\\\\n & Normals + Occlusions & \\textbf{0.785} & \\textbf{0.691} & \\textbf{0.796} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/plant} & No Optimisation & 0.436 & 0.558 & 0.863 \\\\\n & Smoothing-Only & 0.435 & 0.555 & 0.857 \\\\\n & Total Variation & 0.425 & 0.520 & \\textbf{0.830} \\\\\n & Normals + Occlusions & \\textbf{0.416} & \\textbf{0.485} & 0.833 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/room} & No Optimisation & 0.267 & 0.227 & 0.583 \\\\\n & Smoothing-Only & 0.265 & 0.223 & 0.577 \\\\\n & Total Variation & 0.243 & 0.179 & 0.522 \\\\\n & Normals + Occlusions & \\textbf{0.226} & \\textbf{0.147} & \\textbf{0.488} \\\\\n 
\\midrule\n \\multirow{4}{*}{fr1\/rpy} & No Optimisation & 0.327 & 0.434 & 0.781 \\\\\n & Smoothing-Only & 0.320 & 0.390 & 0.755 \\\\\n & Total Variation & 0.265 & 0.231 & 0.621 \\\\\n & Normals + Occlusions & \\textbf{0.255} & \\textbf{0.212} & \\textbf{0.614} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/teddy} & No Optimisation & 0.296 & 0.300 & 0.799 \\\\\n & Smoothing-Only & 0.295 & 0.296 & 0.792 \\\\\n & Total Variation & \\textbf{0.290} & \\textbf{0.276} & \\textbf{0.771} \\\\\n & Normals + Occlusions & 0.297 & 0.277 & 0.792 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/xyz} & No Optimisation & 0.299 & 0.303 & 0.595 \\\\\n & Smoothing-Only & 0.296 & 0.298 & 0.590 \\\\\n & Total Variation & 0.255 & 0.212 & 0.493 \\\\\n & Normals + Occlusions & \\textbf{0.225} & \\textbf{0.137} & \\textbf{0.401} \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Comparison of reconstruction errors on Freiburg 1 TUM RGB-D \\cite{Sturm:etal:IROS2012} sequences showing the relative performance of different regularisation schemes. No Optimisation: results from taking the depth value with the maximum probability in the probability volume. Smoothing-Only: results from minimising the smoothed negative log probability density function without including a regularisation term. Total Variation: results from using the total variation of the depth as a regulariser. 
Normals + Occlusions: the pipeline as described in this paper.}\n\\label{tab:reg_ablation}\n\\vspace{2mm}\\hrule\n\\end{table}\n\nAs discussed in the introduction, the best results (in terms of the accuracy of the final depth maps) seem to come from systems that take classic, photometric-based approaches and feed the results into a DNN for regularisation.\nDeepTAM \\cite{Zhou:etal:ECCV2018} is a state-of-the-art example of such a system.\nWe argue, however, that a probabilistic formulation is necessary for many applications of depth estimation in robotics and that it is important to investigate methods of fusing the output of learning-based systems into standard reconstruction pipelines that maintain this formulation.\nWe therefore do not expect or claim to be able to achieve more accurate depth reconstructions than those produced by DeepTAM.\nInstead, we claim that we are able to improve the performance of standard SLAM systems by fusing in the outputs of a deep neural network while maintaining a probabilistic formulation.\nFor the sake of transparency, however, we have also run the DeepTAM mapping system with ground truth poses on the same sequences and have included the results in Table \\ref{tab:fusion_ablation}.\n\nTo show the benefit of our method of regularisation, we compare the performance of the full system against three other regularisation schemes: using no optimisation at all (taking the depth values that maximise the discrete probability distribution), optimising without any regularisation (this will allow for the smoothing of the depth maps based on the continuous PDF, but provide no regularisation), and regularising using the total variation.\n\nFor the total variation, we tuned the hyperparameters of our system for the best performance ($\\lambda = 1.0 \\cdot 10^2$ and a step size of 0.05).\n\nThe results are presented in Table \\ref{tab:reg_ablation}. 
In all sequences except \\textit{fr1\/teddy}, the best performance is achieved when using the surface normals and occlusion masks predicted by SharpNet.\n\nFinally, to evaluate our method for warping probability volumes between keyframes, we compare our system against a version without warping, where each keyframe is initialised only with the network output and does not receive any information from other keyframes.\n\nThe results are presented in Table \\ref{tab:warp_ablation}.\nUsing our warping method improves the performance of the system in all sequences except \\textit{fr1\/floor}.\n\n\\begin{table}[t]\n\\centering\n\\def0.9{0.9}\n\\begin{tabular}{l l c c c}\n \\toprule\n \\textbf{Sequence} & \\textbf{System} & \\textbf{L1-rel} & \\textbf{L2-rel} & \\textbf{RMSE} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/360} & No Keyframe Warping & 0.202 & 0.157 & 0.575 \\\\\n & Keyframe Warping & \\textbf{0.191} & \\textbf{0.143} & \\textbf{0.555} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/desk} & No Keyframe Warping & 0.316 & 0.236 & 0.474 \\\\\n & Keyframe Warping & \\textbf{0.278} & \\textbf{0.177} & \\textbf{0.427} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/desk2} & No Keyframe Warping & 0.283 & 0.195 & 0.480 \\\\\n & Keyframe Warping & \\textbf{0.236} & \\textbf{0.138} & \\textbf{0.424} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/floor} & No Keyframe Warping & \\textbf{0.776} & \\textbf{0.684} & \\textbf{0.787} \\\\\n & Keyframe Warping & 0.785 & 0.691 & 0.796 \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/plant} & No Keyframe Warping & 0.420 & 0.490 & 0.845 \\\\\n & Keyframe Warping & \\textbf{0.416} & \\textbf{0.485} & \\textbf{0.833} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/room} & No Keyframe Warping & 0.256 & 0.189 & 0.528 \\\\\n & Keyframe Warping & \\textbf{0.226} & \\textbf{0.147} & \\textbf{0.488} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/rpy} & No Keyframe Warping & 0.297 & 0.263 & 0.654 \\\\\n & Keyframe Warping & \\textbf{0.255} & \\textbf{0.212} & \\textbf{0.614} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/teddy} & No Keyframe Warping & 
0.302 & 0.286 & \\textbf{0.791} \\\\\n & Keyframe Warping & \\textbf{0.297} & \\textbf{0.277} & 0.792 \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/xyz} & No Keyframe Warping & 0.315 & 0.247 & 0.521 \\\\\n & Keyframe Warping & \\textbf{0.225} & \\textbf{0.137} & \\textbf{0.401} \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Comparison of reconstruction errors on Freiburg 1 TUM RGB-D \\cite{Sturm:etal:IROS2012} sequences showing the performance gain from using our method to warp keyframe probability volumes.}\n\\label{tab:warp_ablation}\n\\vspace{2mm}\\hrule\n\\end{table}\n\n\\section{CONCLUSION}\n\nWe have presented a method for fusing learned monocular depth priors into a standard pipeline for 3D reconstruction.\nBy training a DNN to predict nonparametric probability distributions, we allow the network to express uncertainty and make multi-hypothesis depth predictions.\n\nThrough a series of experiments, we demonstrated that by fusing the discrete probability volume predicted by the network with a probability volume computed from the photometric error, we often achieve better performance than either on its own.\nFurther experiments showed the value of our regularisation scheme and warping method.\n\n\n\\bibliographystyle{IEEEtran}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nVideo understanding is a central task of artificial intelligence that requires complex grounding and reasoning over multiple modalities. Among many tasks, multiple-choice question answering has been seen as a top-level task \\citep{richardson_mctest_2013} toward this goal due to its flexibility and ease of evaluation. A line of research towards constructing Video QA datasets have been completed \\cite{Tapaswi2016MovieQAUS,Lei2018TVQALC,Zadeh2019SocialIQAQ}. Ideally, a model for this task should understand each modality well and have a good way to aggregate information from different modalities. 
To this end, it is a natural choice for researchers to use the state-of-the-art models for each subtask and modality. Recently in the Natural Language domain, BERT \\citep{devlin-etal-2019-bert} and other transformer-based models have become baselines in many research works. However, it is a known phenomenon that complex multimodal models tend to overfit to a single strong-performing modality \\cite{cirik_visual_2018,mudrakarta_did_2018, thomason_shifting_2019}. To caution against such undesirable modality collapsing, we study how well RoBERTa \\citep{Liu2019RoBERTaAR}, a better trained version of BERT, can perform on the QA-only task.\n\n\\paragraph{Our main contributions include:}\n(1) We show that RoBERTa baselines exceed all previously published QA-only baselines on two popular video QA datasets. (2) The strong QA-only results indicate the existence of non-trivial biases in the datasets that may not be obvious to human eyes but can be exploited by modern language models like RoBERTa. We provide analyses and ablations to root-cause these QA biases, recommend best practices for dataset splits and share insights on subjectivity vs. 
objectivity for question answering.\n\n\n\\section{Model}\n\\label{sec:model}\n\n\n\\begin{table*}[h]\n\\centering\n\\begin{small}\n\\begin{tabular}{@{\\hspace{1pt}}c@{\\hspace{20pt}}l@{\\hspace{20pt}}l@{\\hspace{20pt}}l@{\\hspace{5pt}}c}\n \\bf Dataset & \\bf Model Name\/Source & \\bf Modality & \\bf QA Model & \\bf Val Acc (\\%) \\\\\n \\toprule\n \\multirow{4}{*}{\\shortstack{MovieQA \\\\(A5)}} & \\bf Our Answer-only & \\bf A & \\bf RoBERTa (fine-tune) & \\bf 34.16\\\\\n & \\bf Our QA-only & \\bf Q+A & \\bf RoBERTa (fine-tune) & \\bf 37.33\\\\\n & \\bf Our QA-only& \\bf Q+A & \\bf RoBERTa (freeze) & \\bf 22.52\\\\\n & SOTA \\cite{Jasani2019AreWA} &V+S+Q+A & w2v & 48.87\\\\\n & Random Guess & - & - & 20.00 \\\\\n \n \\cmidrule{1-5}\n \\multirow{6}{*}{\\shortstack{TVQA \\\\(A5)}} & \\bf Our Answer-only & \\bf A & \\bf RoBERTa (fine-tune) & \\bf 46.58\\\\\n & \\bf Our QA-only& \\bf Q+A & \\bf RoBERTa (fine-tune) & \\bf 48.91\\\\\n & \\bf Our QA-only& \\bf Q+A & \\bf RoBERTa (freeze) & \\bf 30.75\\\\\n & QA-only with Glove \\cite{Jasani2019AreWA} & Q+A & GloVe + LSTM & 42.77\\\\\n & SOTA's QA-only \\cite{yang2020bert}& Q+A & BERT (fine-tune) & 46.88 \\\\\n & SOTA\\cite{yang2020bert} &V+S+Q+A& BERT (fine-tune) & 72.41 \\\\\n & Random Guess & - & - & 20.00 \\\\\n\n\n\t\\bottomrule\n\\end{tabular}\n\\end{small}\n\\caption{\nComparison with State-of-the-art Performance.\n}\n\\label{tab:baselines}\n\\end{table*}\n\n\\begin{table*}[]\n\\centering\n\\begin{small}\n\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\hline\n & \\multicolumn{1}{l|}{\\textbf{\\# of questions}} & \\multicolumn{1}{l|}{\\textbf{\\# of annoators}} & \\multicolumn{1}{l|}{\\textbf{\\% of why\/how }} & \\multicolumn{1}{l|}{\\textbf{\\% of other type }} & \\multicolumn{1}{l|}{\\textbf{avg len of Q}} & \\multicolumn{1}{l|}{\\textbf{avg len of A}} \\\\ \\hline\n\\textbf{Movie QA} & 14,944 & --- & 20.9\\% & 79.1\\% & 5.2 & 5.29 \\\\ \\hline\n\\textbf{TVQA} & 152,545 & 1,413 & 14.5\\% & 85.5\\% & 13.5 & 4.72 \\\\ 
\\hline\n\\end{tabular}\n\\end{small}\n\\caption{Dataset Statistics.}\n\\end{table}\n\n\nWe fine-tune pretrained RoBERTa from \\citet{Liu2019RoBERTaAR} to solve the question answering task. Specifically, for one multiple-choice question with five answers (1 correct and 4 incorrect), we concatenate the tokenized question with each of the five tokenized answers and feed each of these five q-a pairs into RoBERTa. RoBERTa is connected to a 4-layer MLP (Multi-Layer Perceptron) head that produces a scalar score for each q-a pair. These five scores are then passed through Softmax to output five probabilities indicating how likely the model thinks each q-a pair is to be correct. During training, the probabilities are trained with a Cross Entropy loss; during testing, the q-a pair with the highest probability is selected as the model's prediction.\n\n\n\n\\section{Datasets}\n\\label{sec:datasets}\n\nWe evaluate our baseline model against two popular multimodal QA datasets: MovieQA and TVQA.\\\\\n\n\\paragraph{MovieQA:}\nMovieQA \\citep{Tapaswi2016MovieQAUS} was created from 408 subtitled movies. Each movie has a set of questions with 5 multiple choice answers, only one of which is correct. The dataset also contains plot synopses collected from Wikipedia.\n\n\\paragraph{TVQA:} \nTVQA \\citep{Lei2018TVQALC} was collected from 6 long-running TV shows from 3 genres. There are 21,793 video clips in total for QA collection, accompanied by subtitles and aligned with transcripts to add character names. Depending on the type of TV show, a video clip is 60 or 90 seconds long. Each video clip has a set of questions with 5 multiple choice answers, only one of which is correct.\n\n\n\n\\paragraph{Notation:}\nIn this paper, we use A5 to denote the tasks on the datasets. 
A5 means the multiple choice question consists of 1 correct answer and 4 incorrect answers.\n\n\\section{Bias Analysis}\n\n\\subsection{QA Bias and Inability to Generalize}\n\n\n\\begin{table}[h!]\n\\centering\n\\begin{small}\n\\begin{tabular}{@{\\hspace{5pt}}l@{\\hspace{10pt}}r@{\\hspace{10pt}}r@{\\hspace{10pt}}}\n \\bf Train Set & \\multicolumn{2}{c}{{\\bf Validation Accuracy (\\%)}} \\\\\n \\toprule\n & \\bf MovieQA & \\bf TVQA \\\\\n \\cmidrule{2-3}\n MovieQA & \\bf 37.33 & 31.18 \\\\\n\tTVQA & 33.45 & \\bf 48.91 \\\\\n\n\t\\bottomrule\n\\end{tabular}\n\\end{small}\n\\caption{\nAcross-dataset generalization accuracy. Both models are trained and evaluated on the A5 task: multiple-choice questions with 1 correct answer and 4 incorrect answers (random guess yields 20\\% accuracy). \\textbf{Bold} numbers mark the highest value in each column.\n}\n\\label{tab:interdataset_generalization}\n\\end{table}\n\n\n\n\n\nFor the two datasets introduced in Section \\ref{sec:datasets}, we run QA-only baselines using a pretrained language model as described in Section \\ref{sec:model}. Table \\ref{tab:baselines} shows how our QA-only model's performance compares to random guessing, state-of-the-art full-modality performance, and the associated QA-only ablations.\n\nFrom Table \\ref{tab:baselines}, looking at the numbers in bold font, we discover that a language model like RoBERTa is able to answer a significant portion of the questions correctly, even though these questions are supposed to be unanswerable without looking at the video. This result indicates that the model exploits the biases in these datasets. In addition, we also find that answer-only performance is quite close to QA-only performance, indicating the answer alone gives the model a pretty good hint on whether it is likely to be a correct answer.\n\nKnowing there are biases in the datasets, we are then curious whether these learned biases are transferable between datasets. 
This investigation is important because if the biases are transferable, then perhaps they are not necessarily bad, since one could argue the model has captured some common sense in these questions and answers; but if the biases are not transferable, then they are merely patterns tied to one particular dataset, which we do not want the model to learn. To verify this experimentally, we train a model on each dataset's train split and evaluate both models on each dataset's validation split. The results are shown in Table \\ref{tab:interdataset_generalization}.\n\nLooking at each row in Table \\ref{tab:interdataset_generalization}, we see that performance decreases in every cross-dataset evaluation compared to same-dataset evaluation. This means that although the model learns some tricks to answer the questions without context, tricks learned from one dataset no longer work when applied to a different dataset. In other words, the model learns biases in the dataset, and these biases are not transferable. This undesirable behavior is what motivates our analysis in the next sections.\n\n\n\n\n\n\n\\subsection{Source of Bias: Annotator}\n\n\n\n\\newcommand{\\decreased}[1] {\\textcolor{red}{#1$\\downarrow$}}\n\\newcommand{\\increased}[1] {\\textcolor{green}{#1$\\uparrow$}}\n\n\n\\begin{table*}[h!]\n\\centering\n\\begin{small}\n\\begin{tabular}{c c c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}}}\n & \\bf Overlap Acc (\\%) & \\multicolumn{10}{c}{\\textbf{Non-overlap Acc Shift (\\%) vs. 
Dropped annotator}} \\\\\n \\toprule\n \\multirow{3}{*}{\\shortstack{TVQA\\\\ (A5)}} & & w17 & w366 & w24 & w297 & w118 & w313 & w14 & w19 & w2 & w254 \\\\\n \\cmidrule{3-12}\n & \\bf 50.59 & \\decreased{-5.59} & \\decreased{-11.28} & \\decreased{-20.14} & \\decreased{-10.55} & \\increased{+23.22} & \\decreased{-20.59} & \\decreased{-1.69} & \\decreased{-5.96} & \\decreased{-12.23} & \\decreased{-17.28}\\\\\n\n\t\\bottomrule\n\\end{tabular}\n\\end{small}\n\\caption{\nNon-overlapping dataset re-split results on the top-10-annotator subset. The ``Overlap Acc\" column is the validation accuracy where the train and validation set both contain questions from all 10 annotators. The ``Non-overlap Acc Shift vs. Dropped annotator\" columns report the change in validation accuracy when the train set contains questions from 9 annotators and the validation set only contains questions from the dropped annotator.\n}\n\\label{tab:non-overlap_resplit}\n\\end{table*}\n\n\n\nWe hypothesize that one source of bias is the annotators. To verify this hypothesis, we obtain the annotator IDs corresponding to the questions in TVQA\\footnote{We thank the authors of TVQA for sharing this information. Annotator information for MovieQA is unfortunately not available to us.} and construct a confusion matrix between the top-10 annotators. For each of these annotators, we construct a mini-train and mini-valid set; for TVQA, each mini-train and mini-valid set contains 1980 and 220 A5 questions, respectively. The results are shown in Figure \\ref{fig:inter_annotator}.\n\nFigure \\ref{fig:inter_annotator} reveals a pattern where most cells except those on the diagonal are light colored, which means the accuracy decreases when the train set's and validation set's questions are not from the same annotator. This indicates that the model learns to guess one specific annotator's questions, but such a guessing strategy does not transfer to other annotators' questions. 
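The accuracy-shift computation behind the confusion matrix in Figure \ref{fig:inter_annotator} can be sketched as follows (illustrative Python; the $3\times 3$ grid of numbers is made up, while the real matrix uses the mini-train/mini-valid accuracies described above):

```python
# Made-up accuracy grid for illustration: acc[i][j] is the validation
# accuracy of a model trained on annotator i's mini-train set and
# evaluated on annotator j's mini-valid set.
acc = [
    [0.62, 0.31, 0.28],
    [0.25, 0.55, 0.30],
    [0.27, 0.33, 0.58],
]

def accuracy_shift(acc):
    # Shift of each cell relative to the same-row diagonal cell;
    # negative values mean a train/valid annotator mismatch hurts.
    n = len(acc)
    return [[acc[i][j] - acc[i][i] for j in range(n)] for i in range(n)]

shift = accuracy_shift(acc)
assert all(shift[i][i] == 0.0 for i in range(3))
```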
This reveals that RoBERTa has the capacity to overfit to the annotators' QA style in the train set.\n\nLooking at the bottom number in each \\emph{diagonal} cell from Figure \\ref{fig:inter_annotator}, we see that our model performs quite differently on different annotators. Some annotators, such as w118 and w14, have a very high performance (90.0\\% and 64.5\\%, respectively), while some annotators, such as w24 and w313, have a relatively low performance (31.4\\% and 24.6\\%). This shows that different annotators' questions contain different levels of bias.\n\nWe also discover that all annotators seem to transfer well to w118. We hypothesize that w118 may have asked many questions that are similar to other annotators' questions, which the model has already learned to answer during training.\n\n\n\\begin{figure}[t]\n\n \\centering\n \n \\includegraphics[width=\\linewidth]{figures\/tvqa_inter_annotator_confusion_matrix.pdf} \n \n\\caption{TVQA Inter-annotator accuracy shift confusion matrix. Each $w_i$ represents an annotator ID and each cell represents a train-test combination between annotators. The cells are colored based on accuracy shift (the top number in each cell): lighter color means more negative accuracy shift and darker color means more positive accuracy shift. Accuracy shift is defined as the difference between each cell's accuracy (the bottom number) and the same-row diagonal cell's accuracy (again, the bottom number).}\n\\label{fig:inter_annotator}\n\\end{figure}\n\n\\paragraph{Dataset Re-split} The observation above motivates further investigation: what if we construct a re-split of the dataset in which the validation set shares no annotators with the train set? We conduct this experiment with the limited scope of the top-10 annotators used in Figure \\ref{fig:inter_annotator} for clearer comparison. 
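The non-overlapping re-splits used here are leave-one-annotator-out splits; the construction can be sketched as follows (a minimal sketch with a hypothetical `questions` list of (annotator, question) pairs):

```python
from collections import defaultdict

def annotator_resplits(questions):
    # Build leave-one-annotator-out re-splits: for each annotator, the
    # validation set is that annotator's questions and the train set is
    # everyone else's.  `questions` is a hypothetical list of
    # (annotator_id, question) pairs.
    by_annotator = defaultdict(list)
    for annotator, q in questions:
        by_annotator[annotator].append(q)
    splits = {}
    for held_out in by_annotator:
        valid = by_annotator[held_out]
        train = [q for a, qs in by_annotator.items()
                 if a != held_out for q in qs]
        splits[held_out] = (train, valid)
    return splits

questions = [("w17", "q1"), ("w17", "q2"), ("w366", "q3"), ("w24", "q4")]
splits = annotator_resplits(questions)
train, valid = splits["w17"]
assert valid == ["q1", "q2"] and sorted(train) == ["q3", "q4"]
```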
We create 11 re-splits of the dataset: 1 with an annotator-overlapping train and validation set and 10 with annotator-non-overlapping train and validation sets (using 9 annotators for the train set and the remaining annotator for the validation set). The results are shown in Table \\ref{tab:non-overlap_resplit}. We find that 9 out of the 10 TVQA non-overlapping re-splits incur a decrease in performance (less bias). Interestingly, the re-split with an increase in performance, w118, matches the column in Figure \\ref{fig:inter_annotator} whose cells are darker than average. This further supports our explanation that w118 asks questions similar to those of other annotators. Nonetheless, the overall trend of decreasing performance after re-splitting suggests that for pretrained language models, the annotator-non-overlapping re-split is a harder task than the annotator-overlapping split, and that such a re-split can help alleviate the QA bias. Based on this observation, we recommend that future work create and use annotator-non-overlapping splits for train, validation and test sets whenever possible. The performance reported under such a setting will contain fewer annotator biases and is thus a more accurate indicator of progress.\n\n\n\n\\subsection{Source of Bias: Question Type}\n\n\\begin{figure}[h!]\n \\centering\n \n \\includegraphics[width=.95\\linewidth]{figures\/movieqa_question_type_analysis.pdf} \n \\caption{MovieQA (A5) Accuracy by Question Type}\n \\label{fig:movieqa_q_type}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centering\n \n \\includegraphics[width=.95\\linewidth]{figures\/tvqa_question_type_analysis.pdf} \n \\caption{TVQA (A5) Accuracy by Question Type}\n \\label{fig:tvqa_q_type}\n\\end{figure}\n\n\nWe also hypothesize that the type of question, e.g. reasoning questions (why\/how questions) vs. factual questions (where\/who questions), can be a source of bias. To verify this, we ablate the model's accuracy based on the question's prefix. 
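This per-prefix ablation amounts to grouping validation predictions by the question's first word; a minimal sketch with made-up records (the questions and correctness flags below are purely illustrative):

```python
from collections import defaultdict

def accuracy_by_prefix(records):
    # Group (question, was_prediction_correct) records by the
    # question's first word and report per-prefix accuracy.
    totals = defaultdict(int)
    hits = defaultdict(int)
    for question, is_correct in records:
        prefix = question.split()[0].lower()
        totals[prefix] += 1
        hits[prefix] += int(is_correct)
    return {p: hits[p] / totals[p] for p in totals}

# Made-up validation records for illustration.
records = [
    ("Why did Ross leave the room?", True),
    ("Why was Monica upset?", True),
    ("Who opened the door?", False),
    ("Who called Joey?", True),
]
acc = accuracy_by_prefix(records)
assert acc["why"] == 1.0 and acc["who"] == 0.5
```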
The results are shown in Figure \\ref{fig:movieqa_q_type} and \\ref{fig:tvqa_q_type}. These ablations are done on the A5 version of each dataset; recall that the random guess baseline in this case is 20\\%.\n\nIn Figure \\ref{fig:movieqa_q_type}, we see that MovieQA shows a clear distinction ($>10\\%$) between ``why\", ``how\" questions and ``what\", ``who\", ``where\" questions. The model fits significantly better to the former than to the latter. \n\nIn Figure \\ref{fig:tvqa_q_type} for TVQA, the model can guess ``why\" questions better than other question categories, while guessing ``who\" remains difficult.\n\nIn general, we observe a trend that questions such as ``why\" and ``how\", which are reasoning and abstract questions whose answers are more complex, incur more biases that a language model can exploit; whereas ``what\", ``who\" and ``where\" questions, which are factual and direct and whose answers are simple, are less bias-prone.\n\n\n\n\n\n\\section{Related Work}\nAlthough more analyses \\citep{goyal_making_2017, leibe_revisiting_2016} have been done on Visual Question Answering (VQA) \\citep{Agrawal2015VQAVQ}, there are few works analysing biases in Video Question Answering datasets. \\citet{Jasani2019AreWA} suggest that MovieQA contains biases by showing that about half of the questions can be answered correctly under the QA-only setting. However, their word embeddings are trained from plot synopses of movies in the dataset, and thus they actually introduce context information into their model, making it no longer QA-only.\n\\citet{goyal2017making} propose that language provides a strong prior that can result in good superficial performance and therefore prevents the model from focusing on the visual content. 
They attempt to fight against these language biases by creating a balanced dataset to force the model to focus on the visual information.\nSimilarly, \\citet{cadene2019rubi} design a training strategy named RUBi that reduces the amount of bias learned by VQA models, countering the strong biases in the language modality.\n\\citet{manjunatha2019explicit} provide a method that can capture macroscopic rules that a VQA model ostensibly utilizes to answer questions.\nHowever, those works do not explain clearly where the bias in the dataset comes from, which is the main topic of our work.\n\n\n\n\\section{Conclusion}\nIn this work, we fine-tune pretrained language model baselines for two popular Video QA datasets and discover that our simple baselines exceed previously published QA-only baselines. These strong baselines reveal the existence of non-trivial biases in the datasets. Our ablation study demonstrates that these biases can come from annotator splits and question types. Based on our analysis, we recommend that researchers and dataset creators use annotator-non-overlapping splits for train, validation and test sets; we also caution the community that reasoning questions are likely to contain more biases than factual questions.\n\nThis paper is a post-hoc analysis of the datasets. However, the tools used in this paper could potentially also be extended to aid dataset creation. For example, a dataset creator could have a RoBERTa trained \\emph{online} as annotators add more data. The annotators can use this language model's prediction to self-check whether they are injecting any QA bias while coming up with the questions and answers. 
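Such a self-check could be as simple as flagging a newly written question whenever the QA-only model is already confident in the correct answer (an illustrative sketch; the probabilities would come from the online model, and the 0.5 threshold is an arbitrary choice):

```python
def flag_biased_question(answer_probs, correct_idx, threshold=0.5):
    # Flag a newly written question if the QA-only model is already
    # confident in the correct answer without any video context.
    # `answer_probs` stands in for the online model's outputs; the
    # 0.5 threshold is an arbitrary choice for illustration.
    return answer_probs[correct_idx] >= threshold

# A question whose correct answer (index 2) is guessable from text alone:
assert flag_biased_question([0.05, 0.10, 0.70, 0.10, 0.05], 2)
# A question the QA-only model cannot guess:
assert not flag_biased_question([0.2, 0.2, 0.2, 0.2, 0.2], 0)
```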
The dataset creator can also use a confusion matrix like Figure \\ref{fig:inter_annotator} to monitor and identify low-quality annotators and decide the best strategy to reduce biases during the dataset creation process.\n\n\\section*{Acknowledgments}\n\nWe thank the anonymous reviewers for their helpful feedback.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and preliminaries}\nThe maximal green sequence was originally defined as a particular sequence of mutations of framed cluster quivers, and\n was first introduced by Keller in \\cite{Kel}. Maximal green sequences are not only an important subject in cluster\n algebras, but also have important applications in many other areas, such as counting BPS states\n in string theory, producing quantum dilogarithm identities and computing refined Donaldson-Thomas\n invariants.\n\nCluster algebras are closely related to representation theory via categorification; it follows that maximal\n green sequences can be interpreted from the viewpoint of tilting theory and silting theory. For example,\n a maximal green sequence for a cluster quiver corresponds to a sequence of forward mutations of a specific\n heart to its shift in a particular triangulated category. We refer to \\cite{BDP} for more details. Inspired\n by $\\tau$-tilting theory, Br$\\mathrm{\\ddot{u}}$stle, Smith and Treffinger defined a maximal green sequence as\n a particular finite chain of torsion classes for a finite dimensional algebra in \\cite{BST0}, which can\n also be naturally defined in arbitrary abelian categories in \\cite{BST1}.\n\nThroughout this paper we always assume $\\mathcal{A}$ is a small abelian category.\n\nLet us first recall some basic concepts. Suppose $X$ is an object in $\\mathcal{A}$.\n We say that $X$ has finite length if there exists a finite filtration\n \\[0=X_0 \\subset X_1 \\subset X_2 \\subset \\dots \\subset X_m =X\\]\n such that $X_i\/X_{i-1}$ is simple for all $i$. 
Such a filtration is called a {\\bf{\\em Jordan-H$\\ddot{o}$lder}\n series} of $X$. It is well-known that if $X$ has finite length, then the length of the\n {\\em Jordan-H$\\ddot{o}$lder} series of $X$ is uniquely determined by $X$, which will be denoted\n by $l(X)$. Recall that an {\\bf abelian length category} is an abelian category such that every object has\n finite length. Throughout this article, we always assume that $\\mathcal{A}$ is an abelian length category.\n\n\nLet $\\mathcal{A}$\n be an abelian length category, and let $\\mathcal{T}$ and $\\mathcal{F}$ be full subcategories of $\\mathcal{A}$\n which are closed under isomorphisms. The pair $(\\mathcal{T}, \\mathcal{F})$ is called a {\\bf torsion pair} if it satisfies\n the following conditions.\n \\begin{enumerate}\n \\item[(i)] $Hom(X, Y) = 0$ for any objects $X\\in \\mathcal{T}$ and $Y\\in \\mathcal{F}$;\n \\item[(ii)] An object $X$ belongs to $\\mathcal{T}$ if and only if $Hom(X, Y)=0$ for any object $Y\\in \\mathcal{F}$;\n \\item[(iii)] An object $Y$ belongs to $\\mathcal{F}$ if and only if $Hom(X, Y)=0$ for any object $X\\in \\mathcal{T}$.\n \\end{enumerate}\n\n For a torsion pair\n $(\\mathcal{T}, \\mathcal{F})$, the full subcategories $\\mathcal{T}$ and $\\mathcal{F}$ are called a {\\bf torsion class}\n and a {\\bf torsion-free class}, respectively. It is well-known that a full subcategory in $\\mathcal{A}$ is a torsion class\n if and only if it is closed under extensions and factors, and a full subcategory in $\\mathcal{A}$ is a torsion-free\n class if and only if it is closed under extensions and subobjects. 
One of the important properties of a torsion pair is that for any\n object $X$ in $\\mathcal{A}$, there is a unique exact sequence $0 \\rightarrow X_1 \\rightarrow X\n \\rightarrow X_2 \\rightarrow 0$ with $X_1\\in \\mathcal{T}$ and $X_2\\in \\mathcal{F}$ up to isomorphism,\n which is called the {\\em canonical sequence} for $X$ with respect to the torsion pair $(\\mathcal{T}, \\mathcal{F})$.\n\nLet $\\mathcal{T}$ and $\\mathcal{T}'$ be two torsion classes in $\\mathcal{A}$. We say the torsion\n class $\\mathcal{T}'$ {\\em covers} $\\mathcal{T}$ if $\\mathcal{T} \\subsetneq \\mathcal{T}'$ and $\\mathcal{X} = \\mathcal{T}$ or $\\mathcal{X} = \\mathcal{T}'$ for any torsion class $\\mathcal{X}$ satisfying $\\mathcal{T} \\subset \\mathcal{X} \\subset \\mathcal{T}'$.\n In this case, we write $\\mathcal{T} \\lessdot \\mathcal{T}'$.\n\n\\begin{Definition}[\\cite{BST1}] A {\\bf maximal green sequence} in an abelian length category $\\mathcal{A}$ is a finite\n sequence of torsion classes with covering relations\n \\[\\,\\mathcal{T}_0\\,\\lessdot\\, \\mathcal{T}_1\\, \\lessdot\\, \\mathcal{T}_2\\, \\lessdot \\,\\dots\\,\n \\lessdot\\, \\mathcal{T}_m\\]\n such that $\\mathcal{T}_0 = 0$ and $\\mathcal{T}_m = \\mathcal{A}$.\n\\end{Definition}\n\nStability conditions and Harder-Narasimhan filtrations have been widely studied by many authors\n and remain a very active subject. They were introduced in various contexts. 
For example, King introduced stability\n functions on quiver representations in \\cite{King}, and Rudakov extended them to abelian categories in \\cite{Rud}.\n Let us recall basic definitions on stability functions and the important Harder-Narasimhan property for abelian\n length categories from \\cite{Rud}.\n\n\n\n\\begin{Definition}[\\cite{Rud, BST1}]\nLet $\\mathcal{P}$ be a totally ordered set and $\\phi : \\mathcal{A}^*\\rightarrow \\mathcal{P}$ a\n function on $\\mathcal{A}^*=\\mathcal{A}\\backslash \\{0\\}$ which is constant on isomorphism classes.\n The map $\\phi$ is called a {\\bf stability function} if for each short exact sequence\n $0\\rightarrow L\\rightarrow M\\rightarrow N\\rightarrow 0$\n of nonzero objects in $\\mathcal{A}$ one has the so-called {\\bf see-saw property}:\n\n\n either $\\phi(L) = \\phi(M) = \\phi(N)$,\n\n or $\\phi(L) > \\phi(M) > \\phi(N)$,\n\n or $\\phi(L) < \\phi(M) < \\phi(N)$.\n\nMoreover, a nonzero object\n $M$ in $\\mathcal{A}$ is said to be {\\bf $\\phi$-stable} (or {\\bf $\\phi$-semistable}) if every nontrivial subobject\n $L \\subset M$ satisfies $\\phi(L) < \\phi(M)$ (or $\\phi(L) \\leq \\phi(M)$, respectively).\n\\end{Definition}\n\nLet $\\phi$ be a stability function on $\\mathcal{A}$. For any nonzero object $X$ in $\\mathcal{A}$, we call $\\phi(X)$ the {\\bf phase} of $X$.\n When there is no confusion, we will simply call an object {\\bf semistable} (respectively, {\\bf stable})\n instead of $\\phi$-semistable (respectively, $\\phi$-stable). Rudakov proved the Harder-Narasimhan property as follows.\n\n\\begin{Theorem}[\\cite{Rud}]\\label{ruda}\nLet $\\phi : \\mathcal{A}\\rightarrow \\mathcal{P}$ be a stability function, and let $X$ be a\n nonzero object in $\\mathcal{A}$. 
Then up to isomorphism, $X$ admits a unique Harder-Narasimhan filtration,\n that is a filtration\n \\[0=X_0\\subsetneq X_1\\subsetneq X_2 \\subsetneq \\dots \\subsetneq X_l =X\\]\n such that the quotients $F_i = X_i\/X_{i-1}$ are semistable,\n and $\\phi(F_1) > \\phi(F_2) > \\dots > \\phi(F_l)$.\n\nOn the other hand, if $Y$ is a semistable object in $\\mathcal{A}$, then there exists a filtration of $Y$\n \\[0=Y_0\\subsetneq Y_1\\subsetneq Y_2 \\subsetneq \\dots \\subsetneq Y_m =Y\\]\nsuch that the quotients $G_i = Y_i\/Y_{i-1}$ are stable, and $\\phi(Y) = \\phi(G_m) = \\dots =\\phi(G_1)$.\n\\end{Theorem}\n\nThe second part of Theorem \\ref{ruda} claims that any semistable object admits a stable subobject and a\n stable quotient with the same phase as the semistable object.\n Following from \\cite{BST1}, we call $F_1 = X_1$ the {\\bf maximally destabilizing subobject} of $X$ and\n $F_l = X_l\/X_{l-1}$ the {\\bf maximally destabilizing quotient} of $X$. They are unique up to isomorphism.\n\nFor a stability function $\\phi : \\mathcal{A}\\rightarrow \\mathcal{P}$, T. Br$\\mathrm{\\ddot{u}}$stle, D. Smith, and H. Treffinger proved in \\cite{BST1} that it can induce a torsion pair $(\\mathcal{T}_{\\geq p}, \\mathcal{F}_{< p})$ in $\\mathcal{A}$ for every $p\\in \\mathcal{P}$ which\n is given as follows.\n \\[ \\mathcal{T}_{\\geq p}\\, =\\, \\{X\\,\\in\\,Obj(\\mathcal{A}): \\phi(X')\\,\\geq\\, p\\,\\, \\text{for the maximally destabilizing\n quotient $X'$ of $X$}\\} \\cup \\{0\\},\\]\n \\[ \\mathcal{F}_{< p}\\, =\\, \\{X\\,\\in\\,Obj(\\mathcal{A}): \\phi(X'')\\,<\\, p\\,\\, \\text{for the maximally destabilizing\n subobject $X''$ of $X$}\\} \\cup \\{0\\}.\\]\n In \\cite{BST1}, Br$\\mathrm{\\ddot{u}}$stle, Smith and Treffinger proved that\n under some conditions on the stability function, the chain of torsion classes induced by the stability function is\n a maximal green sequence in $\\mathcal{A}$.\n\n On the other hand, important examples of stability functions are given by central charges. Let $\\mathcal{A}$ be\n an abelian length category with exactly $n$ nonisomorphic simple objects $S_1, S_2, \\dots, S_n$. We know that the Grothendieck\n group $K_0(\\mathcal{A})$ of $\\mathcal{A}$ is isomorphic to $\\mathbb{Z}^n$.\n \\begin{Definition}\nA {\\bf central charge} $Z$ on $\\mathcal{A}$ is an additive map $Z: K_0(\\mathcal{A}) \\rightarrow \\mathbb{C}$\nwhich is given by \\[Z(X) = \\langle\\alpha, [X]\\rangle + \\mathrm{i}\\langle\\beta, [X]\\rangle\\]\n for $X\\in Obj(\\mathcal{A})$. Here $\\alpha\\in \\mathbb{R}^n$ and $\\beta\\in \\mathbb{R}_{>0}^n$ are fixed, and $\\langle\\cdot \\,, \\cdot\\rangle$ is the canonical inner product on $\\mathbb{R}^n$ and $\\mathrm{i}= \\sqrt{-1}$.\n\\end{Definition}\n\nSince $\\langle\\beta, [X]\\rangle > 0$ for any nonzero object $X$ in $\\mathcal{A}$, the value $Z(X)$ lies in the strict upper half plane of $\\mathbb{C}$.\n It is well-known that every central charge $Z$ on $\\mathcal{A}$ determines a stability function $\\phi_{Z}$ (see also the proof of Theorem \\ref{main}), which is given by\n\\[\\phi_Z(X) = \\frac{\\arg Z(X)}{\\pi}.\\]\n\nWe say that a maximal green sequence can be induced by a central charge if the stability function determined by the central charge induces this maximal green sequence.\\\\\n\nThis article is organized as follows.\n In Section \\ref{2}, we study relations between maximal green sequences and complete forward hom-orthogonal sequences.\n In Section \\ref{3.1}, we study properties of maximal green sequences induced\n by stability functions. 
In Section \\ref{3.2}, we define {\\bf crossing inequalities} for maximal green sequences (see Definition \\ref{as}),\n and then prove the following main result.\n\n\n{\\bf Theorem \\ref{main}}\\;\n{\\em A maximal green sequence $\\mathcal{T}: 0 = \\mathcal{T}_0\\, \\lessdot\\, \\mathcal{T}_1\\, \\lessdot\\, \\mathcal{T}_2\\, \\lessdot \\,\\dots\\, \\lessdot\\, \\mathcal{T}_m = \\mathcal{A} $ in an abelian length category $\\mathcal{A}$ is induced by some central charge $Z : K_0(\\mathcal{A}) \\rightarrow \\mathbb{C}$ if and only if\n $\\mathcal{T}$ satisfies the crossing inequalities. }\n\nIn Section \\ref{4.1}, for finite dimensional algebras, we formulate relations between maximal green sequences of torsion classes and maximal green sequences of $\\tau$-tilting pairs, which are defined via $c$-vectors. In Section \\ref{4.2}, we prove the Rotation Lemma for finite dimensional Jacobian algebras and apply Theorem \\ref{main} to relate the crossing inequalities of a Jacobian algebra to those of its mutations.\n\n\n\\section{Correspondence between maximal green sequences and complete forward hom-orthogonal sequences}\\label{2}\n\n\\subsection{Complete forward hom-orthogonal sequences}\nWe recall the concept of complete forward hom-orthogonal sequences from \\cite{Ig1, Ig2}. Let us introduce some notation. 
Let $\\mathcal{A}$\n be an abelian length category, $\\mathcal{C}$ be a subcategory of $\\mathcal{A}$ and $N$ be an object in $\\mathcal{A}$.\n A {\\bf wide subcategory} of $\\mathcal{A}$ is an abelian subcategory closed under extensions.\n The full subcategory $N^{\\bot}$ is defined to be $N^{\\bot} : = \\{X\\in \\mathcal{A} | Hom(N,X)=0\\}$\n and the full subcategory $\\mathcal{C}^{\\bot}$ is defined to be $\\mathcal{C}^{\\bot} : = \\{X\\in \\mathcal{A} | Hom(Y,X)=0, \\forall Y\\in \\mathcal{C}\\}$.\n The full subcategories $^{\\bot}N$ and $^{\\bot}\\mathcal{C}$ are defined similarly.\n We also write $\\mathcal{F}(N):= N^{\\bot}$ and $\\mathcal{G}(N):=\n {}^\\bot{\\mathcal{F}(N)}$ for every object $N\\in Obj(\\mathcal{A})$.\n\n Then it is clear that $\\mathcal{F}(N) = \\mathcal{G}(N)^{\\bot}$ and\n $(\\mathcal{G}(N),\\, \\mathcal{F}(N))$ is a torsion pair in $\\mathcal{A}$.\n\n\\begin{Proposition}[\\cite{Ig2}]\\label{gx}\n If $Hom(X, Y) = 0$ and $\\mathcal{C} = X^{\\bot} \\cap {}^{\\bot}Y$, then $\\mathcal{G}(X) =\n {}^{\\bot}\\mathcal{C} \\cap {}^{\\bot}Y$.\n\\end{Proposition}\n\n\\begin{Definition}\nAn object $X$ in $\\mathcal{A}$ is called a {\\bf brick} if $End X$ is a division ring.\n\\end{Definition}\n\nIt is obvious that any brick is indecomposable. Let $\\mathcal{S}$ be a subset of $Obj(\\mathcal{A})$. We use $Filt(\\mathcal{S})$ to denote\nthe full subcategory of $\\mathcal{A}$ consisting of objects having a finite filtration whose subquotients\nare isomorphic to indecomposable objects in $\\mathcal{S}$, i.e., $X\\in Filt(\\mathcal{S})$ if and only if there exists a finite filtration of\n$X$: \\[0=X_0 \\subset X_1 \\subset X_2 \\subset \\dots \\subset X_m =X\\]\nsuch that $X_i\/X_{i-1}\\in Ind(\\mathcal{S})$ for all $i$. 
For an indecomposable object $X$, we will denote $Filt(\\{X\\})$ by $Filt(X)$.\n\n The following lemma is well-known.\n\n\\begin{Lemma}[\\cite{Rin}]\nIf $X$ is a brick in $\\mathcal{A}$, then $Filt(X)$ is a wide subcategory of $\\mathcal{A}$.\n\\end{Lemma}\n\n\n\\begin{Definition}[\\cite{Ig1, Ig2}]\\label{cfho}\nA {\\bf complete forward hom-orthogonal sequence} (briefly, CFHO sequence) in $\\mathcal{A}$ is a finite sequence of bricks $N_1, N_2, \\dots , N_m$\n such that\n \\begin{enumerate}\n \\item[(i)] $Hom(N_i, N_j) = 0$ for all $1\\leq i \\lneqq j \\leq m$;\n \\item[(ii)] The sequence is maximal in $\\mathcal{G}(N)$, where $N = N_1 \\oplus N_2 \\oplus \\dots \\oplus N_m$.\n By maximal we mean that no other bricks can be inserted into $N_1, N_2, \\dots , N_m$ preserving (i);\n \\item[(iii)] $\\mathcal{G}(N) = \\mathcal{A}$.\n \\end{enumerate}\n\\end{Definition}\n\nNote that \\cite{Ig2} (page 4) claims that if the sequence $N_1, N_2, \\dots , N_m$ satisfies Definition \\ref{cfho} (i), then the condition (ii) in this definition is equivalent to the fact that for all $k$, $$\\mathcal{G}(N) \\cap (N_1 \\oplus \\dots \\oplus N_k)^{\\bot} \\cap {}^{\\bot}(N_{k+1} \\oplus \\dots \\oplus N_m) = 0.$$\n\n\n\\begin{Corollary}\\label{cgx}\nLet $M_1, M_2, \\dots , M_m$ be a complete forward hom-orthogonal sequence in $\\mathcal{A}$, and let\n $M_0 = 0 = M_{m+1}$, $X_i = M_0 \\oplus M_1 \\oplus \\dots \\oplus M_i$ and $Y_i = M_{i+1} \\oplus \\dots\n \\oplus M_m \\oplus M_{m+1}$ for $0 \\leq i \\leq m$. Then $\\mathcal{G}(X_i) = {}^{\\bot}Y_i$ for every $0 \\leq i \\leq m$.\n\\end{Corollary}\n\\begin{proof}\nSince $M_1, M_2, \\dots , M_m$ is a complete forward hom-orthogonal sequence, we have $Hom(X_i, Y_i) = 0$\n and $\\mathcal{C}_i = X^{\\bot}_i \\cap {}^{\\bot}Y_i = \\mathcal{A} \\cap X^{\\bot}_i \\cap {}^{\\bot}Y_i =\n \\mathcal{G}(M) \\cap X^{\\bot}_i \\cap {}^{\\bot}Y_i =0$. 
It follows from Proposition \\ref{gx} that\n $\\mathcal{G}(X_i) = {}^{\\bot}Y_i$.\n\\end{proof}\n\n In \\cite{Ig2}, Igusa also proved the following property of complete forward hom-orthogonal sequences, which\n shows that simple objects are important ingredients in a complete forward hom-orthogonal sequence.\n\n\\begin{Lemma}[\\cite{Ig2}]\\label{sim}\n Let $N_1, N_2, \\dots , N_m$ be a complete forward hom-orthogonal sequence in $\\mathcal{A}$.\n Then the sequence contains all simple objects (up to isomorphism) in $\\mathcal{A}$. Moreover $N_1$ and $N_m$\n are simple objects.\n\\end{Lemma}\n\n\\begin{Corollary}\nIf $\\mathcal{A}$ admits a complete forward hom-orthogonal sequence, then there are only\n finitely many simple objects in $\\mathcal{A}$ up to isomorphism.\n\\end{Corollary}\n\n\\subsection{Maximal green sequences and CFHO sequences}\nMinimal extending objects for a torsion class were introduced by Barnard, Carroll, and Zhu in \\cite{BCZ}\n to study covers of the torsion class.\n\\begin{Definition}[\\cite{BCZ}]\\label{MEO}\nSuppose $\\mathcal{T}$ is a torsion class in $\\mathcal{A}$. An object $M$ in $\\mathcal{A}$ is called\n a {\\bf minimal extending object} for $\\mathcal{T}$ provided the following conditions hold:\n \\begin{enumerate}\n \\item[(i)] Every proper factor of $M$ is in $\\mathcal{T}$;\n \\item[(ii)] If $0 \\rightarrow M \\rightarrow X \\rightarrow T \\rightarrow 0$ is a non-split exact sequence with\n $T\\in \\mathcal{T}$, then $X\\in \\mathcal{T}$;\n \\item[(iii)] $\\mathrm{Hom}(\\mathcal{T}, M) = 0$.\n \\end{enumerate}\n\\end{Definition}\n\nNote that if $M$ is a minimal extending object for a torsion class $\\mathcal{T}$, then $M$ is indecomposable\n by Definition \\ref{MEO} (i). Moreover, assuming (i), condition (iii) is equivalent to the fact that\n $M\\notin \\mathcal{T}$. 
We write $[M]$ for the isoclass of the object $M$, $ME(\\mathcal{T})$ for the set\n of isoclasses $[M]$ such that $M$ is a minimal extending object for $\\mathcal{T}$, and $Filt(\\mathcal{T}\n \\cup \\{M\\})$ for the iterative extension closure of $Filt(\\mathcal{T})\\cup \\{M\\}$. The following results\n were proved for the category of finitely generated modules over a finite-dimensional algebra in \\cite{BCZ}.\n The results in Section 2 of \\cite{BCZ} also hold for abelian length categories.\n\n\\begin{Proposition}[\\cite{BCZ}]\\label{pop} Suppose $\\mathcal{T}$ is a torsion class in $\\mathcal{A}$ and\n $M$ is an indecomposable object such that every proper factor of $M$ lies in $\\mathcal{T}$. Then\n $Filt(\\mathcal{T}\\cup \\{M\\})$ is a torsion class and $M$ is a brick.\n\\end{Proposition}\n\nThe following result was proved for finite dimensional algebras in \\cite{BCZ}. We give a new proof for abelian length categories.\n\\begin{Lemma}[\\cite{BCZ}]\\label{lem}\nLet $\\mathcal{T}$ be a torsion class in $\\mathcal{A}$ and $M\\notin \\mathcal{T}$ be an indecomposable object in $\\mathcal A$ such\nthat each proper factor of $M$ is in $\\mathcal{T}$. Let $N\\in Filt(\\mathcal{T}\\cup \\{M\\})\\backslash \\mathcal{T}$ such that each proper\nfactor of $N$ lies in $\\mathcal{T}$. If $Filt(\\mathcal{T}\\cup \\{M\\}) \\gtrdot \\mathcal{T}$, then $M \\cong N$.\n\\end{Lemma}\n\\begin{proof}\nIt is clear that $N$ is indecomposable. By Proposition \\ref{pop}, the full subcategory $Filt(\\mathcal{T}\\cup \\{N\\})$\nis a torsion class satisfying that $\\mathcal{T} \\subsetneq Filt(\\mathcal{T}\\cup \\{N\\}) \\subset Filt(\\mathcal{T}\\cup \\{M\\})$,\nwhich implies that $Filt(\\mathcal{T}\\cup \\{N\\}) = Filt(\\mathcal{T}\\cup \\{M\\})$ since $Filt(\\mathcal{T}\\cup \\{M\\}) \\gtrdot \\mathcal{T}$.\n\nWe claim that $Hom(M, N) \\neq 0$ and $Hom(N, M) \\neq 0$. Note that $Hom(\\mathcal{T}, M) = 0$,\n since each proper factor of $M$ is in $\\mathcal{T}$ and $M\\notin \\mathcal{T}$. 
If $Hom(N, M) = 0$,\n then it is easy to see that $Hom(Filt(\\mathcal{T}\\cup \\{N\\}),\\,\\, M)=0$. This contradicts the fact\n that $M\\in Filt(\\mathcal{T}\\cup \\{M\\}) = Filt(\\mathcal{T}\\cup \\{N\\})$. Hence $Hom(N, M) \\neq 0$. Similarly,\n we have that $Hom(M, N) \\neq 0$.\n\n Suppose that $M \\ncong N$. Let $f: M\\rightarrow N$ and $g: N\\rightarrow M$ be two nonzero morphisms.\n Then $f$ and $g$ are not epimorphisms. Otherwise, one of $M$ and $N$ would be a proper factor of the other,\n which contradicts the fact that $M\\notin \\mathcal{T}$ and $N\\notin \\mathcal{T}$. Thus $\\mathrm{coker}f$ is a proper factor of $N$ and therefore belongs to $\\mathcal{T}$.\n\n If $f$ is not a monomorphism, then $\\mathrm{Im}f$ is a proper factor of $M$. Then $\\mathrm{Im}f$ and $\\mathrm{coker}f$ belong to $\\mathcal{T}$,\n which implies that $M\\in \\mathcal{T}$, contradicting $M\\not\\in \\mathcal{T}$. Hence $f$ is a monomorphism and similarly $g$ is also a monomorphism.\n\n Note that $gf \\neq 0$, since $f\\not=0$ and $g$ is a monomorphism. Therefore $gf: M\\rightarrow M$ is\n an isomorphism since $M$ is a brick. This implies $g$ is an epimorphism, which is a\n contradiction.\n\n Thus $M\\cong N$.\n\\end{proof}\n\\begin{Theorem}[\\cite{BCZ}]\\label{mini} Suppose $\\mathcal{T}$ is a torsion class in $\\mathcal{A}$. Then the map\n $\\eta_{\\mathcal{T}}\\,:\\, [M]\\,\\mapsto\\,Filt(\\mathcal{T}\\cup \\{M\\})$\n is a bijection from the set $ME(\\mathcal{T})$ to the set of $\\mathcal{T}'$\n such that $\\mathcal{T} \\lessdot \\mathcal{T}'$. 
Moreover, for each\n such $\\mathcal{T}'$, there exists a unique indecomposable\n object $M$ such that $\\mathcal{T}' = Filt(\\mathcal{T}\\cup \\{M\\})$, and in this case,\n $M$ is a minimal extending object for $\\mathcal{T}$.\n Furthermore, the map $Filt(\\mathcal{T}\\cup \\{M\\}) \\mapsto [M]$ is the\n inverse to $\\eta_{\\mathcal{T}}$.\n\\end{Theorem}\nIn \\cite{BCZ}, the statement that $M$ is a minimal extending object for $\\mathcal{T}$ in this case was given in the proof of this theorem.\n\nThe following results are the main tools for us to construct\n a stability function for a given maximal green sequence.\n\\begin{Theorem}\\label{m2}\nSuppose that the sequence $N_1, N_2, \\dots , N_m$ is a complete forward hom-orthogonal sequence in\n $\\mathcal{A}$. Let $\\mathcal{G}_i = \\mathcal{G}(N_0\\oplus N_1 \\oplus \\dots \\oplus N_{i})$ for\n each $0 \\leq i \\leq m$, where $N_0 = 0$. Then,\n\n (i)\\; $\\mathcal{G}_i = Filt(N_0, N_1, \\dots, N_i)$;\n\n(ii)\\; $N_i$ is a minimal extending object of $\\mathcal{G}_{i-1}$ satisfying that $\\mathcal{G}_i = Filt(\\mathcal{G}_{i-1}\\cup \\{N_i\\})$;\n\n(iii)\\; The sequence\n $0 = \\mathcal{G}_0\\, \\lessdot\\, \\mathcal{G}_1\\, \\lessdot\\, \\mathcal{G}_2\\, \\lessdot \\,\\dots\\,\n \\lessdot\\, \\mathcal{G}_m = \\mathcal{A} $ is a maximal green sequence in $\\mathcal{A}$.\n\\end{Theorem}\n\n\n\\begin{proof}\nBy Corollary \\ref{cgx}, we have $\\mathcal{G}_i = \\mathcal{G}(N_0\\oplus N_1 \\oplus \\dots \\oplus N_{i}) = {}^{\\bot}(N_{i+1}\\oplus\\dots\\oplus N_m \\oplus N_{m+1})$,\n where $N_{m+1}=0$. We will prove (i) and (ii) by induction. It is obvious that $\\mathcal{G}_0 = Filt(N_0) = 0$ and $N_1 \\in ME(\\mathcal{G}_0)$\n since $N_1$ is a simple object.\n\nSuppose that $\\mathcal{G}_i = Filt(N_0, N_1, \\dots, N_i)$ and $N_i\\in ME(\\mathcal{G}_{i-1})$ for $1\\leq i \\leq j$.\n We claim that $N_{j+1}\\in ME(\\mathcal{G}_j)$. 
First note that $Hom(\mathcal{G}_j, N_k) = 0$ for $k>j$ since $\mathcal{G}_j^{\bot} = (N_0\oplus \dots \oplus N_j)^{\bot}$.\n In particular, $Hom(\mathcal{G}_j, N_{j+1}) = 0$. Second, suppose that $N$ is a proper quotient of $N_{j+1}$. Then\n it is clear that $N \in {}^{\bot}(N_{j+2}\oplus \dots \oplus N_m)$. If $f\in Hom(N, N_{j+1})$ is nonzero, then the composition of the quotient map $N_{j+1}\rightarrow N$ with $f$\n is a nonzero endomorphism of the brick $N_{j+1}$, hence an isomorphism; this forces $N\cong N_{j+1}$, which is a contradiction. Thus $N\in \mathcal{G}_j =\n {}^{\bot}(N_{j+1}\oplus \dots \oplus N_m)$. Let $0 \rightarrow N_{j+1} \stackrel{a}{\longrightarrow} X \stackrel{b}{\longrightarrow} T \rightarrow 0$\n be a nonsplit exact sequence with $T\in \mathcal{G}_j$. Then it is enough to prove that $Hom(X, N_{j+1}) = 0$,\n since $Hom(X, N_k) = 0$ for $k > j+1$ already follows from $Hom(N_{j+1}, N_k) = 0$ and $Hom(T, N_k) = 0$.\n Let $f\in Hom(X, N_{j+1})$. If $fa\neq 0$, then $fa$ is an isomorphism since $N_{j+1}$ is a brick, so $a$ is a section, which is a contradiction.\n Thus $fa=0$, and $f$ factors through $b$. Since $Hom(T, N_{j+1})=0$, we get $f=0$. Then $X\in \mathcal{G}_j$ and\n $N_{j+1}\in ME(\mathcal{G}_j)$. Thus $\mathcal{G}_{j+1} = Filt(\mathcal{G}_j \cup \{N_{j+1}\}) = Filt(Filt(N_0, N_1, \dots , N_{j}) \cup \{N_{j+1}\}) = Filt(N_0, N_1, \dots , N_{j+1})$. Then by induction, (i) and (ii) hold.\n\nClearly, $\mathcal{G}_0=0$ and $\mathcal{G}_m=\mathcal{A}$. By Theorem \ref{mini} and (ii), we have $\mathcal{G}_i\lessdot \mathcal{G}_{i+1}$ for each $i$. Then (iii) holds.\n\end{proof}\n\n\n\n\begin{Theorem}\label{m1}\nLet $0 = \mathcal{T}_0\, \lessdot\, \mathcal{T}_1\, \lessdot\, \mathcal{T}_2\, \lessdot \,\dots\,\n \lessdot\, \mathcal{T}_m = \mathcal{A} $ be a maximal green sequence in $\mathcal{A}$. 
Then there\n exists a sequence of bricks $N_1, N_2, \dots , N_m$ such that\n \begin{enumerate}\n \item[(i)] $N_i$ is a minimal extending object of $\mathcal{T}_{i-1}$ and $\mathcal{T}_i =\n Filt(\mathcal{T}_{i-1} \cup \{N_i\})$ for each $1 \leq i \leq m$.\n \item[(ii)] $\mathcal{T}_i = Filt(N_0, N_1, \dots, N_i) = \mathcal{G}(N_0 \oplus N_1\oplus \dots \oplus N_i)\n = {}^{\bot}(N_{i+1}\oplus \dots \oplus N_m \oplus N_{m+1})$ for each $0 \leq i \leq m$,\n where $N_0 = 0 = N_{m+1}$.\n \item[(iii)] The sequence $N_1, N_2, \dots , N_m$ is a complete forward hom-orthogonal sequence in $\mathcal{A}$.\n \item[(iv)] $Filt(N_i) = \mathcal{T}_i \cap \mathcal{F}_{i-1}$ for each $1 \leq i \leq m$.\n \item[(v)] Up to isomorphism, each object $X$ in $\mathcal{A}$ admits a unique filtration\n \[0=X_0\subset X_1\subset X_2 \subset \dots \subset X_m =X\]\n such that $X_i\/X_{i-1} \in Filt(N_i)$ for each $1 \leq i \leq m$.\n \end{enumerate}\n\end{Theorem}\n\begin{proof}\n(i) By Theorem \ref{mini}, there exist indecomposable objects $N_1, N_2, \dots, N_m$\nsuch that $N_i$ is a minimal\n extending object of $\mathcal{T}_{i-1}$ and $\mathcal{T}_i = Filt(\mathcal{T}_{i-1} \cup \{N_i\})$\n for each $1 \leq i \leq m$. Then, by Proposition \ref{pop} and the definition of minimal extending objects, $N_1, N_2, \dots, N_m$ are bricks.\n\n(ii) It is obvious that $\mathcal{T}_0 = Filt(N_0)$ and $\mathcal{T}_1 = Filt(N_0, N_1)$. Then we can\n prove that $\mathcal{T}_i = Filt(N_0, N_1, \dots, N_i)$ by induction. To prove that $\mathcal{T}_i =\n \mathcal{G}(N_0 \oplus N_1\oplus \dots \oplus N_i)$, it is enough to prove that $\mathcal{F}_i =\n (N_0 \oplus N_1\oplus \dots \oplus N_i)^{\bot}$ for each $0 \leq i \leq m$. Assume that $Y \in\n (N_0 \oplus N_1\oplus \dots \oplus N_i)^{\bot}$. Then it is clear that $Hom(\mathcal{T}_i, Y) = 0$\n since $\mathcal{T}_i = Filt(N_0, N_1, \dots, N_i)$. 
Thus $Y \\in \\mathcal{F}_i$. Conversely, if\n $X\\in \\mathcal{F}_i$, then we have that $Hom(\\mathcal{T}_i, X) = 0$ and hence $X \\in\n (N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot}$. Therefore $\\mathcal{T}_i = \\mathcal{G}(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)$.\n\n The statement $\\mathcal{T}_i = {}^{\\bot}(N_{i+1}\\oplus \\dots \\oplus N_m \\oplus N_{m+1})$\n will follow from (iii) by Corollary \\ref{cgx}.\n\n(iii) At first, by (i), $N_1, N_2, \\dots, N_m$ are bricks.\n\nBy the above proof of (ii), if $i\\leq j-1$, then $N_i \\in \\mathcal{T}_{j-1} = Filt(N_0,N_1, \\dots, N_{j-1})$ for all $i < j$. And by (i), we have $Hom(\\mathcal{T}_{j-1}, N_j) = 0$. Hence, $Hom(N_i, N_j) = 0$ for all $i < j$. By (ii), we\n have shown that $\\mathcal{G}(N_1\\oplus \\dots \\oplus N_m) = \\mathcal{T}_m = \\mathcal{A}$. Now it is enough\n to prove that for all $i$, $$(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot} \\cap {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})=0.$$\n If $0\\not=X\\in (N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot} \\cap {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})$, then $X\\in \\mathcal{F}_i$ and thus $X\\notin \\mathcal{T}_i$. Hence there exists $k$\n such that $k > i$ and $X\\in T_k\\backslash T_{k-1}$. We have that $N_k$ is a factor of $X$, i.e., there\n is an epimorphism $X\\rightarrow N_k$. Note that $X\\in {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})$\n and $k>i$, therefore $Hom(X, N_k) = 0$, which is a contradiction. Thus\n $(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot} \\cap {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})=0$\n and the sequence $N_1, N_2, \\dots , N_m$ is a complete forward hom-orthogonal sequence in $\\mathcal{A}$. 
We have also\n proved that $\mathcal{T}_i = {}^{\bot}(N_{i+1}\oplus \dots \oplus N_m \oplus N_{m+1})$ for all $i$.\n\n(iv) Suppose that $X$ is a nonzero object in $\mathcal{T}_i \cap \mathcal{F}_{i-1}$. Then $X\in \mathcal{F}_{i-1}\n = (N_0 \oplus N_1\oplus \dots \oplus N_{i-1})^{\bot}$ and thus $N_i$ is a subobject of $X$. Therefore there is\n an exact sequence $0\rightarrow N_i \rightarrow X \rightarrow Y \rightarrow 0$. It is clear that $Y\in \mathcal{T}_i$.\n We claim that $Y\in \mathcal{F}_{i-1} = (N_0 \oplus N_1\oplus \dots \oplus N_{i-1})^{\bot}$. Otherwise, assume that\n there is a nonzero morphism $h: N_k \rightarrow Y$ with $k < i$.\n\n Suppose that $p > q$ and $\mathcal{A}_{\geq p} \subsetneq \mathcal{A}_{\geq q}$.\n Let $X\in \mathcal{A}_{\geq q}\backslash \mathcal{A}_{\geq p}$ with phase $\phi(X) = r$.\n Note that $X$ admits a quotient $N$ such that $N$ is stable and $\phi(X) = \phi(N) = r$.\n It is obvious that $q \leq r < p$ and hence $\mathcal{T}_{\geq p} \subset \mathcal{T}_{\geq r} \subset \mathcal{T}_{\geq q}$.\n Since $X$ is semistable, the maximal destabilizing quotient of $X$ is itself. Then $X\in \mathcal{T}_{\geq r}$ and\n $X\notin \mathcal{T}_{\geq p}$. Therefore $\mathcal{T}_{\geq p} \subsetneq \mathcal{T}_{\geq r}$ and hence\n $\mathcal{T}_{\geq r} = \mathcal{T}_{\geq q}$, which implies $\phi(N)=r\in [q]$.\n\nSuppose there are two stable objects $N$ and $N'$ with phases $\phi(N)=r$ and $\phi(N')=r'$ such that $r, r' \in [q]$.\n If $r=r'$, then $N \cong N'$ since $\phi$ is discrete at $r\in [q]$ by Proposition \ref{cov}. Otherwise, we may assume that\n $r < r'$. Then we have that $\mathcal{T}_{\geq q} = \mathcal{T}_{\geq r} = \mathcal{T}_{\geq r'}$ and thus\n $N\in \mathcal{T}_{\geq r} = \mathcal{T}_{\geq r'}$. By the definition of $\mathcal{T}_{\geq r'}$, we have\n that $r=\phi(N) \geq r'$, which is a contradiction. 
The uniqueness follows.\n\nWe shall prove that every proper factor of $N$ is in $\mathcal{T}_{\geq p}$. Indeed, let $N'$ be a nonzero proper factor of $N$,\nlet $N''$ be the maximal destabilizing quotient of $N'$, and let $N'''$ be a stable quotient of $N''$ with phase $\phi(N''') = \phi(N'') = s$.\nNote that $N'''\in \mathcal{T}_{\geq q} = \mathcal{T}_{\geq r}$ implies that $s \geq r$. On the other hand, if $s = r$, then $N''' \cong N$ since $\phi$ is discrete at $r$;\nhence the composition $N\twoheadrightarrow N' \twoheadrightarrow N''' \cong N$ is an epimorphism, so $N' \cong N$, which is\na contradiction. Therefore we have $s > r$.\nIf $s \geq p$, then $N' \in \mathcal{T}_{\geq p}$.\nIf $s < p$, we claim that $\mathcal{T}_{\geq s} = \mathcal{T}_{\geq p}$, and then $N' \in \mathcal{T}_{\geq p}$ also follows.\nIndeed, since $r < s < p$, we have that $\mathcal{T}_{\geq p} \subset \mathcal{T}_{\geq s} \subset \mathcal{T}_{\geq r}$.\nIf $\mathcal{T}_{\geq p} \subsetneq \mathcal{T}_{\geq s}$, then $\mathcal{T}_{\geq s} = \mathcal{T}_{\geq r}$, which implies\nthat $s\in [q]$. This contradicts the uniqueness of $N$. As a consequence, every proper factor of $N$ is in $\mathcal{T}_{\geq p}$.\nIt is easy to see that $N$ is a minimal extending object for $\mathcal{T}_{\geq p}$ by Lemma \ref{lem} and Theorem \ref{mini}.\n\n\nIn particular, if $r_1\in\mathcal{P}$ satisfies $\phi(N)= r < r_1 \leq p$,\n then $\mathcal{T}_{\geq p} \subset \mathcal{T}_{\geq r_1} \subset \mathcal{T}_{\geq q}$.\n If $\mathcal{T}_{\geq r_1} = \mathcal{T}_{\geq q}$, then $\mathcal{T}_{\geq p} \lessdot \mathcal{T}_{\geq r_1}$.\n Thus there exists a stable object $M$ with $\phi(M) \in [r_1]$, and in particular $r < r_1 \leq \phi(M) < p$.\n This contradicts the uniqueness of $N$. 
Hence $\\mathcal{T}_{\\geq r_1} = \\mathcal{T}_{\\geq p}$.\n\\end{proof}\n\n\nFor the stability function $\\phi$, suppose that there exist $r_0 > r_1 > \\dots > r_m$ in $\\mathcal{P}$ such that\n\\[0 = \\mathcal{T}_{\\geq r_0} \\lessdot \\mathcal{T}_{\\geq r_1} \\lessdot \\dots \\lessdot \\mathcal{T}_{\\geq r_m} = \\mathcal{A}\\]\nforms a maximal green sequence.\nAssume that $N_i$ is the minimal extending object of $\\mathcal{T}_{\\geq r_{i-1}}$ such that\n$\\mathcal{T}_{\\geq r_i} = Filt(\\mathcal{T}_{\\geq r_{i-1}}\\cup \\{N_i\\})$. By Theorem \\ref{cor}, we know that $N_i$ is stable,\nand we may, without loss of generality, assume that $\\phi(N_i) = r_i$.\nRecall that for $p\\in \\mathcal{P}$, the full subcategory $\\mathcal{A}_p$ is given by\n\\[\\mathcal{A}_p = \\{0\\} \\cup \\{M\\in \\mathcal{A}\\,\\, |\\,\\, M\\,\\, \\text{is semistable and}\\,\\,\\phi(M)=p \\}.\\]\nIt is shown in \\cite{BST1} that $\\mathcal{A}_p$ is a wide subcategory for each $p\\in \\mathcal{P}$. In particular, we have the following\nresult.\n\\begin{Proposition}\nWith the assumptions and notations above, we have that $\\mathcal{A}_{r_i} = Filt(N_i)$ for each $i$ with $1 \\leq i \\leq m$.\n\\end{Proposition}\n\\begin{proof}\nNote that $N_i$ is stable with phase $r_i$, then $N_i \\in \\mathcal{A}_p$.\nSince $\\mathcal{A}_p$ is a wide subcategory, it is closed under extensions.\nThen we have that $Filt(N_i) \\subset \\mathcal{A}_p$. On the other hand, if $N\\in \\mathcal{A}_p$ is nonzero,\nthen $N$ admits a stable factor $N'$ with phase $\\phi(N') = \\phi(N) =r_i$. Since $\\phi$ is discrete at $r_i$,\nthen $N' \\cong N_i$. Therefore we have a short exact sequence $0\\rightarrow L \\rightarrow N \\rightarrow N_i \\rightarrow 0$.\nSince $N, N_i\\in \\mathcal{A}_p$ and $\\mathcal{A}_p$ is closed under kernels, then $L\\in \\mathcal{A}_p$. Note that\n$l(L)