diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpjzu" "b/data_all_eng_slimpj/shuffled/split2/finalzzpjzu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpjzu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\label{sec:intro} Quantum phases of matter are generally characterized\nby the symmetries broken by their ground or finite temperature states,\nthe topological properties of the former, or both. These measures\nfaithfully describe the physics of quantum systems at and near equilibrium.\nHowever, they are not suitable for understanding systems far away\nfrom equilibrium. In this realm, quantum many body systems are better\nunderstood in terms of qualitative features of their dynamics. Classical\nmeasures such as mass transport and quantum measures such as entanglement\nentropy (EE) and out-of-time-correlators (OTOCs) \\cite{Larkin1969,CalabreseCardy,Maldacena2016}\nhave been successful in identifying the dynamics in such systems from\na theoretical viewpoint. Seminal examples include the growth of EE after a quench from a high energy state in both chaotic~\\cite{KimHuse13}\nand non-chaotic systems, particularly many-body localized (MBL) systems~\\cite{Znidaric08,Bardarson12}.\nThe OTOC has also proven very useful for distinguishing between chaotic~\\cite{swingle16},\nand localized systems~\\cite{otoc1,otoc2,otoc3,otoc4}.\n\nMost experiments, however, are designed to measure time-ordered correlation\nfunctions, including classical features such as mass transport. Proposals\nfor adapting them to measure the aforementioned quantum dynamical\nprobes exist; however, they are unfeasible even for modest system\nsizes, thereby strongly limiting their potential. In particular, current\nprotocols are rooted in tomography and involve measuring every observable\nin the region of interest to reconstruct the density matrix \\cite{Tomography16},\nor entail measuring the overlap of two wavefunctions by creating two identical copies of the same quantum system \\cite{daleyzollareegrowth12,EEMeasurement1,EEMeasurement2,EEMeasurement3,OTOCMeasurement1,OTOCMeasurement2}, interferometrically using a control qubit that couples to the entire system \\cite{swingle16} or choosing a special initial state whose corresponding projection operator is a simple observable \\cite{swingle16,Meier17}. \nDue to unfavorable scalability, these approaches appear limiting for\na large number of qubits.\n\nIn this work, inspired from the OTOCs, we propose an alternative diagnostic,\nan out-of-time-ordered measurement (OTOM), to probe and distinguish\nthree general classes of quantum statistical phases, namely (i) chaotic\n(ii) MBL and (iii) Anderson localized (AL).\nWe also comment on (iv) delocalized integrable systems without disorder\nsuch as freely propagating particles and Bethe integrable systems\n(however, we do not study them in detail in this work). All but the\nfirst class are integrable. 
The second and third classes possess local\nconserved quantities that make them integrable, while integrability\nin the last class is enforced by non-local conserved quantities.\n\nWith regards to experimental accessibility, the OTOM offers several\nadvantages as compared to OTOCs (since it requires only one copy of\nthe system) and tomographic approaches (as it involves only one measurement).\nLike the OTOCs, it involves reversing the sign of the Hamiltonian\nto simulate time reversal \\cite{Guanyu2016}, which has been done\nin nuclear spin setups \\cite{OTOCMeasurement1} and cold atoms experiments\n\\cite{OTOCMeasurement2} with high precision. Within the scope of\nthis work, we are not aiming to be able to understand all the detailed\nquantum dynamics of the above classes but rather to show that we can\nreveal them with the new diagnostic. We comment on the experimental\nfeasibility of such a protocol towards the end of the work, as well\nas on other possible uses of OTOMs.\n\nThe plan of the paper is as follows. In Sec.~\\ref{sec:def}, we define\nthe new measure and discuss its relationship with OTOCs. Our analysis\nof the OTOM will be performed on spin chain models defined in Sec.~\\ref{sec:model},\nwhere details of the numerics are also described. Sec.~\\ref{sec:num}\ncontains the results on the different dynamical behaviors of the OTOM\nin the classes of quantum systems investigated. Finally, Sec.~\\ref{sec:conc}\noffers a discussion of these results and comments on the experimental\nprotocols to be used to measure the OTOM.\n\n\\section{Definitions of dynamical measures}\n\n\\label{sec:def}\n\nThe OTOM is defined as \n\\begin{equation}\n\\mathcal{M}(t)=\\left\\langle F^{\\dagger}(t)MF(t)\\right\\rangle \n\\end{equation}\nwhere $F(t)=W^{\\dagger}(t)V^{\\dagger}(0)W(t)V(0)$ (with $W(t)=\\exp(iHt)W\\exp(-iHt)$)\nis an out-of-time-ordered (OTO) operator, $M$ is a simple observable,\nand $V$ and $W$ are local \\textit{unitary }operators that commute\nat $t=0$ \\footnote{Unitarity of $W$ and $V$ is not essential; rather, it simplifies\nthe analysis by allowing us to discard normalization factors associated\nwith their non-unitarity and is thus, usually assumed.}. Thus, $F(0)=\\mathbbm{1}$ and $\\mathcal{M}(0)=\\left\\langle M\\right\\rangle $.\nThe expectation value is taken in a suitable state $\\rho$ for the\nproblem of interest, such as a short-range entangled state, an eigenstate\nof the Hamiltonian under investigation, or a thermal density matrix.\nIn this work, we choose $\\rho$ to be a short-range entangled pure state, $\\rho=\\left|\\psi\\rangle\\langle\\psi\\right|$.\n\nIn contrast, the OTOC is defined as: \n\\begin{equation}\n\\mathcal{C}(t)=\\left\\langle F(t)\\right\\rangle \n\\end{equation}\nBoth the OTOC and the OTOM involve an OTO operator $F(t)$ and therefore\nhave many similar properties. For instance, they have many of the\nsame qualitative features in their time-dependences for\ndynamics far from equilibrium in the above phases. However, they have\nimportant differences, that we will exploit in this work to demonstrate\npotential advantages of OTOMs in experiments.\n\nGenerally, OTOCs can be measured experimentally by applying $F(t)$\non $|\\psi\\rangle$ to obtain $|\\phi(t)\\rangle=F(t)|\\psi\\rangle$.\n$F(t)$ contains both forward and backward time-propagation operators,\n$e^{\\pm iHt}$; in practice, the latter is implemented in experiments\nby negating the Hamiltonian. 
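Written out using the definitions above, the state prepared by this protocol is\n\begin{equation}\n|\phi(t)\rangle=F(t)|\psi\rangle=e^{iHt}W^{\dagger}e^{-iHt}\,V^{\dagger}\,e^{iHt}We^{-iHt}\,V|\psi\rangle ,\n\end{equation}\nwhich, read from right to left, corresponds to applying $V$, evolving forward for a time $t$, applying $W$, evolving backward (i.e., under $-H$) for a time $t$, applying $V^{\dagger}$, evolving forward again, applying $W^{\dagger}$, and finally evolving backward; a single run therefore contains two forward and two backward evolution segments. 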
One way to measure a generic OTOC, $\\mathcal{C}(t)$,\nis by measuring the overlap of $|\\phi(t)\\rangle$ with a second copy\nof $|\\psi\\rangle$. Creating two copies of the system, however, becomes\nprohibitively difficult for large systems. In contrast, OTOM is simply\nthe expectation value of a local observable $M$ in $|\\phi(t)\\rangle$,\nand does not require any qubit besides the original system. Note that\nthis procedure is distinct from measuring the dynamical time evolution\nof a local observable $M$ after a quench; the latter is known to\nnot reveal the dynamics captured by EE\/OTOCs.\n\n\\paragraph{{OTOM vs OTOC:}}\n\nTo see how $\\mathcal{M}(t)$ is related to $\\mathcal{C}(t)$, let\nus first consider a trace of the form $I=\\mbox{tr}(ABCD)$ for some\noperators $A\\dots D$. Now, any operator can be thought of as a state\nin a doubled Hilbert space: \n\\begin{eqnarray}\nA & \\equiv & \\sum_{ij}a_{ij}\\left|i\\rangle\\langle j\\right|\\to\\sum_{ij}a_{ij}|i\\rangle|j\\rangle\\equiv|A\\rangle\\\\\n\\langle A| & \\equiv & \\left(\\sum_{ij}a_{ij}|i\\rangle|j\\rangle\\right)^{\\dagger}=\\sum_{ij}a_{ij}^{*}\\langle i|\\langle j|\n\\end{eqnarray}\nIf the Hamiltonian of the original system is $H$, then the doubled system is governed by $\\tilde{H}=H\\otimes-H$, since kets\nin the second copy map to bras in the original system. Then, we can\nthink of $A$ and $C$ as states on the two copies, and think of $D\\otimes\\mathbbm{1}$\nand $\\mathbbm{1}\\otimes B$ as operators acting on the first and second\ncopy, respectively. Note that this doubling is a purely theoretical\nconstruct and is unrelated to the duplication of the experimental\nsystem needed for measuring $\\mathcal{C}(t)$. Thus, we have \n\\begin{equation}\nI=\\mbox{tr}(ABCD)=\\left\\langle A^{*}\\left|D^{T}\\otimes B\\right|C^{T}\\right\\rangle .\n\\end{equation}\nChoosing $A=\\rho^{*}$,\n$B=F^{T}(t)$, $C=M^{T}$ and $D=F^{*}(t)$, \n\\begin{eqnarray}\n\\mathcal{M}(t)=\\mathcal{M}^{*}(t) & =\\mbox{tr}\\left[\\rho^{*}F^{T}(t)M^{*}F^{*}(t)\\right]\\\\\n & =\\left\\langle \\rho\\left|F^{\\dagger}(t)\\otimes F^{T}(t)\\right|M\\right\\rangle \n\\end{eqnarray}\nEquivalently, \n\\begin{equation}\n\\mathcal{M}(t)=\\left\\langle \\rho\\left|\\tilde{V}^{\\dagger}(0)\\tilde{W}^{\\dagger}(t)\\tilde{V}(0)\\tilde{W}(t)\\right|M\\right\\rangle \n\\end{equation}\nwhere $\\tilde{V}=V^{T}\\otimes V^{*}$, $\\tilde{W}=W^{T}\\otimes W^{*}$\nand time evolution is given by $\\tilde{U}(t)=e^{iHt}\\otimes e^{-iHt}$,\ni.e., $\\tilde{W}(t)=\\tilde{U}(t)\\tilde{W}(0)\\tilde{U}^{\\dagger}(t)$.\nThus, $\\mathcal{M}(t)$ is similar to an OTOC constructed from local\noperators $\\tilde{V}$ and $\\tilde{W}$ and the Hamiltonian $\\tilde{H}$,\nalbeit with two differences: (i) it is defined on two copies of the\nsame system (it is important to note that the doubled system has the\nsame localization properties as the single copy), and (ii) the matrix\nelement of the OTO operator is evaluated between two different states\n$|\\rho\\rangle$ and $|M\\rangle$, whereas standard OTOCs are an expectation\nvalue of an OTO operator in a given state.\n\nIt is not known generally how off-diagonal matrix elements of OTO\noperators behave in various classes of systems. However, if the two\nstates have a large overlap, we can expect them to qualitatively mimic\ntraditional OTOCs which are expectation values. 
Thus, we need: \n\\begin{equation}\n\\left|\\left\\langle \\rho|M\\right\\rangle \\right|\\equiv\\frac{\\left|\\text{tr}(\\rho M)\\right|}{\\sqrt{\\text{tr}\\rho^{2}}\\sqrt{\\text{tr}M^{2}}}\n\\end{equation}\nIn a Hilbert space of dimension $d$, the typical magnitude of the\noverlap between two vectors is $1\/\\sqrt{d}$. In this paper, we work\nwith a spin-1\/2 chain of length $L$ for all our numerical calculations,\nand choose $M=\\frac{1}{L}\\sum_{i}(-1)^{i}\\sigma_{i}^{z}$ and $|\\psi\\rangle=|\\uparrow\\downarrow\\uparrow\\downarrow\\dots\\rangle$. Then $\\left\\langle \\rho|M\\right\\rangle =1\/(\\sqrt{L}2^{L\/2})\\gg1\/\\sqrt{d}$\nwhere $d=4^{L}$ for the doubled system. Thus, $\\left\\langle \\rho|M\\right\\rangle $\nis ``large'' and $\\mathcal{M}(t)$ is expected to behave like a\ntraditional OTOC in many ways. Indeed, when $M=\\rho=|\\psi\\rangle\\langle\\psi|$ (so that $\\langle\\rho |M\\rangle=1$), then $\\mathcal M(t) = |\\mathcal C(t)|^2$, and we recover the protocol for measuring OTOCs proposed theoretically ~\\cite{swingle16} and implemented experimentally for a small system ~\\cite{Meier17}.\n\nWe note that an exception occurs when $M$ is a global conserved quantity.\nIn the doubled system, this corresponds to $|M\\rangle$ being an eigenstate\nof $\\tilde{H}$. Then, $M$ is a sum of an extensive number of local\nterms, and satisfies $[M,H]=0$, while $[M,W]$ and $[M,V]$ contain\n$O(1)$ terms each. Thus, $W$ and $V$ can be moved through $M$\nin $F(t)$ at the cost of $O(1)$ extra terms, resulting in \n\\begin{equation}\n\\frac{\\mathcal{M}(t)}{\\mathcal{M}(0)}\\to1+O(1\/\\text{volume})\\,\\,\\,\\forall t\n\\end{equation}\nThus, the time-dependence of $\\mathcal{M}(t)$ is suppressed by the\ninverse volume of the system when $M$ is a global conserved quantity,\nso $\\mathcal{M}(t)$ stays pinned at its $t=0$ value for all times\nin the thermodynamic limit \\footnote{Without loss of generality, we can make $\\mathcal{M}(0)\\neq0$ by\nshifting $M$ by a term proportional to identity, which has the same\nscaling with volume as a typical eigenvalue of $M$.}. This behaviour is strikingly different from that of OTOCs in chaotic,\nMBL and AL phases, where, as we shall see in the following sections,\nthe OTOCs decay exponentially, as a power law, and undergo a short-time\nevolution before saturating, respectively. However, we will not be\nconcerned with such special choices of $M$ in this work, since global\nconserved quantities can invariably be either factored out trivially\nfrom the analysis, or are too complex to measure experimentally.\n\nClearly, our specific choices of $M$, $V$, $W$ and $|\\psi\\rangle$\nare not necessary for the above qualitative statements as long as\n$V$ and $W$ are local and unitary, $M$ is the sum of an extensive\nnumber of local terms and $\\left\\langle \\psi\\left|M\\right|\\psi\\right\\rangle $\nis not exponentially small itself. 
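To make these scales concrete for the chain length used in our numerics below ($L=16$), the expressions above give\n\begin{equation}\n\left|\left\langle \rho|M\right\rangle \right|=\frac{1}{\sqrt{16}\,2^{8}}=\frac{1}{1024}\approx10^{-3},\qquad\frac{1}{\sqrt{d}}=\frac{1}{2^{16}}\approx1.5\times10^{-5},\n\end{equation}\nso the overlap between $|\rho\rangle$ and $|M\rangle$ exceeds that of a typical pair of vectors in the doubled Hilbert space by a factor of $2^{L\/2}\/\sqrt{L}=64$. 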
Moreover, we expect these results\nto generalize to mixed states with short-range entanglement, such\nas the infinite temperature state, instead of the pure state $|\\psi\\rangle$,\nand to extend to other choices of operators and states in generic\nquantum systems.\n\n\\section{Model and methods}\n\n\\label{sec:model}\n\nTo probe various dynamical behaviors, we consider the spin-1\/2 XXZ\nchain in a random magnetic field: \n\\[\nH(\\Delta,h)=J\\sum_{i=1}^{L}\\sigma_{i}^{x}\\sigma_{i+1}^{x}+\\sigma_{i}^{y}\\sigma_{i+1}^{y}+\\Delta\\sigma_{i}^{z}\\sigma_{i+1}^{z}-\\sum_{i}h_{i}\\sigma_{i}^{z}\n\\]\nWe use open boundary conditions and set $J=1$. The random fields\n$h_{i}$ are taken from a box distribution $[-h,h]$. This model offers\nthe advantage of testing different regimes based on the values\nof its parameters. The special value $\\Delta=0$ corresponds\nto a free-fermion model. In presence of disorder, this model is celebrated\nto host a MBL phase at large enough disorder $h>h_{c}$ for $\\Delta\\neq0$,\nwhile the low-disorder phase $h\\lesssim h_{c}$ is an ergodic, non-integrable,\nsystem. In the Heisenberg case ($\\Delta=1$), the critical disorder\nis estimated to be $h_{c}\\simeq7.5$ (for eigenstates near the middle of the spectrum)~\\cite{Luitz15}. When $\\Delta=0$, the model\nmaps to an Anderson-model which is in a localized phase for\nany $h\\neq0$.\n\nWe choose the OTOM operator $M=\\frac{1}{L}\\sum_{i}(-1)^{i}\\sigma_{i}^{z}$\n(staggered magnetization), and the initial state $|\\psi\\rangle$ as\n$|\\uparrow\\downarrow\\uparrow\\downarrow\\dots\\rangle$ for all the numerical\nsimulations. We use Pauli spin operators for the local unitary operators,\ntypically $V=\\sigma_{i}^{x},W=\\sigma_{j}^{x}$ or $V=\\sigma_{i}^{z},W=\\sigma_{j}^{z}$.\n\nIn the numerics, we use full diagonalization in the $\\sigma^{z}=\\sum_{i}\\sigma_{i}^{z}=0$\nsector (as $\\sigma^{z}$ commutes with $H(\\Delta,h)$) on chains of\nlength up to $L=16$, to probe very long time scales. In some cases,\nwe also checked our results on larger systems using a Krylov propagation\ntechnique~\\cite{Nauts}, albeit on smaller time-scales. We average\nquantities over 200 to 1000 disorder realizations for each parameter\nset. All times are measured in units of $1\/J$, which is set to $1$.\n\nThe OTOM $\\mathcal{M}(t)$, being an expectation value of an observable,\nis real, while the OTOC is in general a complex number. 
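The latter statement follows directly from the definitions: $F(t)$ is unitary but in general not Hermitian, so that\n\begin{equation}\n\mathcal{C}^{*}(t)=\left\langle F^{\dagger}(t)\right\rangle =\left\langle V^{\dagger}W^{\dagger}(t)VW(t)\right\rangle \n\end{equation}\nneed not coincide with $\mathcal{C}(t)$ itself. 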
There are\ncases however where its imaginary part vanishes (for our choice of\nthe initial state, this happens for $V=\sigma_{i}^{z},W=\sigma_{j}^{z}$).\nWhen the OTOC is complex, we average its modulus $|\mathcal{C}(t)|$\nover disorder, while we average its real part when it is purely real.\n\nBesides these out-of-time-ordered measurements, to highlight the difference\nbetween quantum and classical measures, we study two properties\nof the time-evolved state $|\psi(t)\rangle=\exp(-iHt)|\psi\rangle$: the time-ordered spin imbalance $\mathcal{I}(t)=\langle\psi(t)|M|\psi(t)\rangle$\nand the half-system EE, $S(t)=-\mbox{tr}(\rho_{A}(t)\ln\rho_{A}(t))$,\nwhere $\rho_{A}(t)=\mbox{tr}_{\bar{A}}|\psi(t)\rangle\langle\psi(t)|$\nis the reduced density matrix obtained by tracing out the degrees\nof freedom in the other half $\bar{A}$ of the system.\n\n\section{Numerical Results}\n\n\label{sec:num}\n\n\subsection{Local mass transport (imbalance)}\n\nIn ergodic systems, we expect local densities to relax exponentially\nfast, as a spin density was shown to do when starting from a classical\nstaggered state~\cite{Barmettler09}. In contrast, this relaxation\nis incomplete in localized systems, and the spin density settles to\na non-zero value both in the presence and the absence of interactions.\nFig. \ref{fig:imbalance} shows results for the dynamics\nof the imbalance $\mathcal{I}(t)$ in three different regimes of\nthe XXZ chain. Mass transport is measured experimentally in a number of systems, and is often taken as a probe of local thermalization~\cite{Trotzky12,Pertot14,Schreiber15}.\n\n\begin{figure}\n\centering \includegraphics[width=1\linewidth]{FigImbalance_v5}\n\caption{\textbf{Local mass transport:} Imbalance $\mathcal{I}$ (with respect\nto N\u00e9el state) after a quench versus time (log scale), for a $L=16$\nspin chain in three different regimes: AL (for $\Delta=0,h=5$),\nMBL ($\Delta=0.2,h=5$) and ergodic ($\Delta=1,h=0.5$).}\n\label{fig:imbalance} \n\end{figure}\n\n\n\subsection{Quantum phase dynamics}\n\nThis section contains numerical results for the time evolution\nof the above measures for a non-integrable system with chaotic transport\nand for MBL\/AL systems with absent particle transport. These examples\nare chosen to illustrate the important features of both classical\nand quantum dynamics. In each case, it appears that the\nOTOM can reveal the underlying quantum dynamics.\n\n\subsubsection{Ergodic systems}\n\nIn a non-integrable system with mass transport (spin transport in\nthe spin chains considered), we expect time-ordered local observables\nto quickly relax to their long-time expectation values, which is\nzero in the case of the imbalance $\mathcal{I}$. At the same time,\nthe OTOCs are also expected to relax exponentially quickly to zero~\cite{Maldacena2016},\nand we anticipate the same for the OTOM. This is indeed what we find,\nas shown in Fig.~\ref{fig:chaotic}: all\noperators relax rapidly to zero within a few spin-flip times, and\nthis holds for different choices of the local operators $V$ and $W$\nfor the OTOC\/OTOM. 
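For reference, the curves in these figures are to be read against their $t=0$ values, which follow directly from the definitions given above (with the staggering phase of $M$ chosen such that the N\u00e9el state has positive imbalance):\n\begin{equation}\n\mathcal{I}(0)=\mathcal{M}(0)=\left\langle \psi\right|M\left|\psi\right\rangle =1,\qquad\mathcal{C}(0)=\left\langle F(0)\right\rangle =1,\qquad S(0)=0,\n\end{equation}\nsince $F(0)=\mathbbm{1}$ and the initial state is a product state. 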
We note that in such a system, the EE grows very fast (ballistically) and saturates to a volume\nlaw in the subsystem size~\\cite{KimHuse13}.\n\n\\begin{figure}\n\\centering \\includegraphics[width=1\\linewidth]{FigErgodic_v5} \\caption{\\textbf{Dynamics in a chaotic system:} The OTOC $|\\mathcal{C}(t)|$\nas well as the OTOM $|\\mathcal{M}(t)|$ relax to zero within a couple\nof tunneling times, the time-ordered imbalance $\\mathcal{I}(t)$ vanishes\neven faster, and the EE $S(t)$ very rapidly saturates\nto an extensive value. For the OTOC\/OTOMs, two sets of results are\ndisplayed using different local unitary operators: the OTOM\/OTOC denoted\nXX (ZZ) correspond to $V=\\sigma_{5}^{x}$, $W=\\sigma_{12}^{x}$ ($V=\\sigma_{5}^{z}$,\n$W=\\sigma_{12}^{z}$). Other parameters are $L=16$, $h=0.5$, $\\Delta=1$.\nNote the linear time scale, as well as the scaling factor for the\nEE.}\n\\label{fig:chaotic} \n\\end{figure}\n\n\n\\subsubsection{Localized systems}\n\nPerhaps the clearest distinction of quantum phase dynamics can be\nobserved in dynamics of localized systems. Localization prohibits\nparticle\/spin transport completely~\\cite{Anderson58,Basko06}. Upon\nperforming a quench of the Hamiltonian, it results in a local memory\nof initial density whether or not interactions are present in the\nsystem (see Fig.~\\ref{fig:imbalance}). However, non-local quantities\nsuch as EE show distinct behaviors in the presence\nor absence of interactions due to the involved decoherence (entanglement)\nof the particle phases between nearby sites~\\cite{Znidaric08,Bardarson12,Serbyn13}.\nOTOCs readily capture this nature of the quantum dynamics as well~\\cite{otoc1,otoc2,otoc3,otoc4,otoc5}.\n\nFor non-interacting AL systems, particle phases do\nnot entangle with one another. The time evolution of the various quantities\nis shown in Fig.~\\ref{fig:al}. The EE shows no\nfurther time dynamics after an initial increase (due to a finite localization\nlength). Such quickly saturating behavior and absence of dynamics\nat longer times can also be captured by OTOCs~\\cite{otoc1,otoc2,otoc3,otoc4}.\nThey equal unity in the absence of any time evolution (by definition),\nand deviate only very slightly from this value with time. More precisely,\nwhile their exact time dependence depends on the local spins perturbed\nby the operators $V$ and $W$ and the strength of the disorder, importantly,\nthey show no time dynamics after an initial resettling (of the order\nof $~50$ tunneling times for the parameters of Fig.~\\ref{fig:al}),\nstrongly resembling the dynamics of EE. The time\nevolution of OTOM for an AL system also displays the\nsame qualitative features: a tiny reduction from unity and no\nmarked time dependence, even up to very long times. This, thus, also\nprovides evidence for the absence of further quantum phase dynamics\nin such systems, similar to the analysis of the dynamics of entanglement\n(a non-local probe). The absence of decay is a generic feature, as\ncan be observed for different choices of local operators $V$ and\n$W$ in Fig.~\\ref{fig:al}. However, one may also argue that the\nsame kind of information is carried by the (absence of) dynamics in\na time-ordered quantity such as the normal imbalance (also displayed\nin Fig.~\\ref{fig:al}), a very local `classical' probe. 
Hence, to clearly show that the two observables capture qualitatively\ndifferent information, it is crucial to look at the interacting generalization\nof AL systems.\n\n\\begin{figure}\n\\centering \\includegraphics[width=1\\linewidth]{FigAL_v5} \\caption{\\textbf{Dynamics in an Anderson localized system:} The time-ordered\nimbalance $\\mathcal{I}(t)$ relaxes to non-zero value due to absence\nof transport (same data as Fig.~\\ref{fig:imbalance}) \\textit{and}\nthe OTOC\/OTOM are both close to unity and overlapping. At the same\ntime, the EE $S(t)$ quickly saturates to a finite\nsmall value. In such a well localized system, there is almost no intermediate\ndynamics. Parameters for simulations: $L=16$, $\\Delta=0$, $h=5$.\nFor the OTOC\/OTOM, we consider operators $V=\\sigma_{5}^{x}$, $W=\\sigma_{12}^{x}$\n(denoted as XX) as well as $V=\\sigma_{5}^{z}$, $W=\\sigma_{12}^{z}$\n(denoted as ZZ) \\textendash{} data for these two different choices\nare not distinguishable on this scale. }\n\\label{fig:al} \n\\end{figure}\n\nThe interacting case of MBL systems offers quite a different situation,\ndue to the slow entangling of quantum phases even in the absence of\ntransport. The numerical calculations for this case are shown in Fig.~\\ref{fig:mbl}.\nIn such systems, the EE indeed shows time dynamics\ndespite the complete absence of transport, featuring a slow logarithmic\ngrowth~\\cite{Znidaric08,Bardarson12,Serbyn13} to reach an extensive\nvalue (which is smaller than the one reached in the ergodic phase~\\cite{Serbyn13}).\nAt the same time, recent results have shown that time dynamics in\nMBL systems can also be captured by OTOCs, which have been shown to\nexhibit a slow power-law like relaxation~\\cite{otoc1,otoc2,otoc3,otoc4}.\nWe also find that the OTOM is capable of capturing the time dynamics\nin the MBL phase: it shows a slow relaxation similar to the OTOC behavior,\nalbeit it does not reach the same long-time value. This can be understood\nas the long-time limit of the OTOC depends on the overlap of $V$\nand $W$ with the local integral of motions inherent to the MBL phase~\\cite{otoc4}\nand we consequently expect that the long-time limit of the OTOM depends\non analogous overlaps between $V$, $W$ and $M$, resulting in different\nvalues.\n\nThis observation of slow relaxation at intermediate times is in strong\ncontrast with the behavior of the time-ordered imbalance, which again\nshows no time dynamics at these timescales of slow dephasing. \n\\begin{figure}\n\\centering \\includegraphics[width=1\\linewidth]{FigMBL_v5} \\caption{\\textbf{Dynamics in a many-body localized system:} Similar to AL,\nthe imbalance $\\mathcal{I}$ relaxes to a non-zero value on a relatively\nshort time scale (same data as Fig.~\\ref{fig:imbalance}). On the\nother hand, the EE $S(t)$ shows a slow logarithmic\ngrowth and OTOCs show an apparent slow power-law decay, revealing the quantum phase dynamics at\ntime much longer than the density relaxation time. This slow dynamics\nis also revealed in OTOMs, which display the same qualitative behavior\nas OTOCs. Note the lin-log scale for the main panel, the log-log scale\nfor the inset, and the scaling factor for the EE.\nParameters for simulations: $L=16$, $\\Delta=0.2$, $h=5$. For the\nOTOC\/OTOM, we consider operators $V=\\sigma_{5}^{x}$, $W=\\sigma_{12}^{x}$\n(denoted as XX) and $V=\\sigma_{5}^{z}$, $W=\\sigma_{12}^{z}$ (denoted\nas ZZ) . 
The exact long-time limit of OTOC\/OTOM depends on the operators\nchosen for $V$ and $W$.}\n\\label{fig:mbl} \n\\end{figure}\n\n\n\\section{{Discussions and conclusions}}\n\n\\label{sec:conc}\n\nWe have demonstrated that out-of-time ordered measurements OTOMs carry\nthe same information as their correlator counterpart OTOC for three\ndifferent physical situations: ergodic, AL and MBL systems. We propose a concrete measure which we believe\nwill advance the use of genuinely interesting quantum probes\nin experiments. Thanks to its scalability, one main application of\nour probe is to allow experimental access to higher dimensional systems\nand in particular, to two dimensional systems debated to have an MBL\nphase~\\cite{2DMBLLocal}. While reversing the sign of the Hamiltonian\nis a hard task, each term of the Hamiltonian has\nbeen sign-reversed in several experiments, including hopping (with\na drive), interactions (using Feshbach resonance) and disorder (using\na phase shift of the quasi-periodic term). Additionally, this opens\nthe door to measuring quantum dynamics in systems where the initial\nstate cannot be duplicated, such as superconducting qubits, trapped\nions and NV-centers.\n\nOn the theoretical front, our work naturally opens up several avenues\nfor future studies. Firstly, given that the OTOM is an off-diagonal\nOTOC in a doubled system, one can ask, ``what features of quantum\ndynamics are captured by off-diagonal OTOCs that are missed by the\nusual diagonal OTOCs'' and vice versa? Secondly, one can try to exploit\nthe freedom in choosing the OTOM operator $M$ to study aspects of\nquantum dynamics not studied here. One direct application would be\nto explore the SYK model \\cite{KitaevTalk,Sachdev1993SYK,MaldacenaStanfordSYK,Polchinski2016}\nand the debated Griffiths-phase before the ergodic-MBL transition~\\cite{agarwal_rare-region_2017,luitz_ergodic_2017,prelovsek_density_2017}.\nAnother possible continuation is to consider delocalized systems which\nare integrable, such as systems of free particles or systems soluble\nby the Bethe ansatz, as they necessarily have non-trivial global conserved\nquantities (i.e., conserved quantities in addition to trivial ones\nsuch as total charge or spin). We noted earlier that OTOMs behave\ndifferently from OTOCs when $M$ is chosen to be a global conserved\nquantity. It would be an interesting extension of our work to compare and contrast\nOTOCs and OTOMs in these phases in detail. We expect that OTOCs, as\nwell as OTOMs derived from non-conserved quantities would exhibit\nchaotic (oscillatory) time dependence in Bethe-ansatz soluble (free particle)\nsystems, while OTOMs based on global conserved quantities would show\na suppression of any time-independence at all with system size. Finally,\nunlike ordinary expectation values, OTOMs are able to distinguish\nbetween local and global conserved quantities, since only OTOMs based\non the latter are expected to remain pinned at their $t=0$ values\nin the thermodynamic limit. Naively, local conserved quantities are\nexpected to reduce chaos in a system more than global ones are; OTOMs\nmight be ideally suited for capturing this difference. We leave these\nopen questions for future investigations.\n\n\n\\begin{acknowledgments}\nThis work benefited from the support of the project THERMOLOC ANR-16-CE30-0023-02\nof the French National Research Agency (ANR) and by the French Programme\nInvestissements d'Avenir under the program ANR-11-IDEX-0002-02, reference\nANR-10-LABX-0037-NEXT. 
FA acknowledges PRACE for awarding access to\nHLRS's Hazel Hen computer based in Stuttgart, Germany under grant\nnumber 2016153659, as well as the use of HPC resources from GENCI\n(grant x2017050225) and CALMIP (grant 2017-P0677). PH was supported\nby the Division of Research, the Department of Physics and the College\nof Natural Sciences and Mathematics at the University of Houston. \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThis paper presents a vibration energy harvester, employing a Lorentz-force energy converter, that is suitable for powering miniaturized autonomous IoT sensors \\cite{Arnold}\\cite{Y.Tan}. The harvester comprises a DRIE-etched silicon suspension, and pick-and-place N42 NdBFe magnets and copper coils, all enclosed in a 3D-printed plastic package. The harvester has an active volume of 1.79~cm$^3$, and an output power $P_{\\rm Out}$ of 2.2~mW at $1.1\\ g$ and 76 Hz under matched load, yielding a power density (PD) of 1.23~mW\/cm$^3$ and a normalized power density (NPD) of 1.02~mW\/cm$^3\/g^2$, the highest reported PD and NPD among silicon-based MEMS harvesters reported to date \\cite{Y.Tan}. The high $P_{\\rm Out}$ follows the use of a four-bar-linkage suspension that lowers beam stress compared to our earlier accordion suspension \\cite{Abraham}, enabling mm-range strokes and hence mW-level $P_{\\rm Out}$. The key contributions here are: (i) large-stroke (2 mm) silicon suspensions with stress analysis, (ii) harvester implementation yielding $P_{\\rm Out}=2.2$ mW, and (iii) optimized design guidelines and scaling to further reduce harvester size while preserving $P_{\\rm Out}$.\n\n\\section{Design and Optimization}\nAs described below, harvester design is based on optimizations over mechanical, magnetic and electrical domains. To begin, a mechanical optimization of the harvester spring-mass-damper system is executed following \\cite {Abraham}. To do so, the mechanical power $P_{\\rm M}$ converted by the harvester is expressed at resonance in sinusoidal steady state in the absence of mechanical loss as\n\\begin{equation} \nP_{\\rm M} = B \\omega^2 X^2 \/ 2 = \\omega X M A \/ 2 = \\rho \\omega X A (L_1 - 2S) L_2 L_3 \/ 2\n\\label{OPT1}\n\\end{equation}\nwhere damping coefficient $B$ is a proxy for energy conversion through the Lorentz-force energy converter, $\\omega$ is the resonance frequency, $X$ is the stroke amplitude, $M$ is the proof-mass mass, $A$ is vibration acceleration, $\\rho$ is the density of the proof mass, $L_1$, $L_2$ and $L_3$ are the dimensions of the harvester with $L_1$ in the stroke direction, and $S \\ge X$ is the single-sided space within $L_1$ allocated for stroke; the assumption of negligible mechanical loss is supported by experimental observation. Following \\cite{Abraham}, $P_{\\rm M}$ is maximized when $X=S=L_1\/4$, yielding\n\\begin{equation} \nP_{\\rm M,Max} = \\rho \\omega S A (L_1 - 2S) L_2 L_3 \/ 2 = \\rho \\omega A L_1^2 L_2 L_3 \/ 16\n\\quad .\n\\label{OPT2}\n\\end{equation}\nThus, $L_1$, is allocated equally to the mass and bi-directional stroke $2X$. This assumes the suspension\/springs permit (nearly) all of $S$ to be used for $X$, which was true for the accordion suspension \\cite{Abraham}. Finally, following (\\ref{OPT2}), the longest harvester dimension is used for the stroke direction $L_1$ for maximum $P_{\\rm M}$. 
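The optimal allocation behind (\ref{OPT2}) follows from elementary calculus: with $X=S$, (\ref{OPT1}) is proportional to $S(L_1-2S)$, and\n\begin{equation} \n\frac{d}{dS}\left[S(L_1-2S)\right]=L_1-4S=0\;\Rightarrow\;S=X=L_1\/4 ,\n\end{equation}\nso that half of $L_1$ is occupied by the proof mass and one quarter is reserved for the stroke on each side. 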
Note that to achieve the maximized power in (\\ref{OPT2}), the Lorentz-force energy converter must provide the damping $B$ required to satisfy (\\ref{OPT1}) and $X=L_1\/4$.\n\nTo achieve $P_{\\rm Out}$ in the mW range, the stroke $X$ must be in the mm range. Our previous accordion suspension \\cite{Abraham} did not yield large $X$ as the springs fractured at $X=0.6$~mm due to high stress caused by the side-bar used to raise the resonant frequencies of the higher-order vibration modes. A four-bar-linkage suspension is chosen here to overcome the spring fracture at large strokes while raising the resonance frequencies of the higher-order modes. Figures \\ref{device}(a)-(b) show a cross-sectional side-view of the fabricated harvester and a top-view of the silicon four-bar-linkage suspension.\n\\begin{figure}[tb]\\hspace{-0.4 cm}\n\\includegraphics[width=39pc, clip=true, trim=0mm 0mm 0mm 0mm]{.\/figures\/figure1.pdf}\n\\caption{ (a) Cross-sectional side view of the harvester. (b) Top view of the suspension and magnets glued together. (c) Individual 3D-printed packaging parts to be combined. (d) Harvester at its maximum deflection showing the bending of the beams.\n}\\label{device}\n\\vspace{-1.2mm}\n\\vspace{-1.2mm}\n\\end{figure}\n\\vspace{-1.2mm}\nThe four-bar linkage is not as space efficient as the accordion suspension as it requires $X\\le S\/2$. $P_{\\rm M}$ in (\\ref{OPT1}) is then maximized with $X=S\/2=L_1\/8$, yielding\n\\begin{equation} \nP_{\\rm M,Max} = \\rho \\omega S A (L_1 - 2S) L_2 L_3 \/ 4 = \\rho \\omega A L_1^2 L_2 L_3 \/ 32\n\\quad .\n\\label{OPT3}\n\\end{equation}\nHowever, the springs in the new suspension should experience lower stress than in \\cite{Abraham} because they are not torqued by the side bar. This is evident from the spring bending at maximum stroke shown in Figure \\ref{device}(d). Second, the spring widths are tapered via narrowing towards their middle to create a more uniform stress profile. Third, the joints are designed to have symmetric fillets with radius of five spring widths to further reduce stress. Fourth, the spring length is increased to reduce strain. These precautions yield the desired stroke of $X=S\/2=2$ mm as shown in Figure \\ref{device}(d), the largest stroke reported in a Si-MEMS harvester suspension \\cite{Y.Tan}.\n\nThe suspension is dimensioned to achieve a\nnear-50-Hz resonance subject to the constraints that the spring width exceeds 25 $\\mu$m as limited by etch resolution, and beam stress is less than 130~MPa. Figure \\ref{resonance}(a) shows the resonant modal analysis for the optimized suspension highlighting a fundamental resonance translational mode $f_{\\rm Res}$ close to 50 Hz and a 5-fold separation between $f_{\\rm Res}$ and higher-order resonance frequencies.\n\\begin{figure}[tb]\n\\vspace{-1.9 cm}\n\\hspace{0.9 cm}\\includegraphics[width=34pc, clip=true, trim=0mm 0mm 0mm 0mm]{.\/figures\/figure2.pdf}\\vspace{-0.3 cm}\n\\caption{\\label{resonance} (a) Resonant modal analysis of the 4-bar-linkage suspension showing good modal separation between higher resonant modes and the desired fundamental mode. (b) Stress analysis showing that the suspension under maximum stroke has three times lower stress than does the previous accordion spring suspension \\cite{Abraham}. 
(c) 2D magnetic simulation to compute $G$.\n}\n\\vspace{-1.2mm}\n\\vspace{-1.2mm}\n\\end{figure}\nSolidworks simulations show the lowest resonant modes are: translational along $L_1$ at $76\\ Hz$; translational along the $L_3$ (magnetic pole) direction at 270 Hz; and rotational about $L_3$ at 575 Hz. The stress analysis in Figure \\ref{resonance}(b) also shows that the maximum beam stress at full stroke is 130 MPa compared to 460 MPa in the accordion suspension. Thus, the mechanical optimization achieves optimum resource allocation for mass and stroke while the suspension exhibits the desired $f_{\\rm Res}$ and $X$. \n\nTwo anti-parallel N42 NdBFe permanent magnets are the magnetic flux source through which $B$ is implemented, and the proof mass because they offer higher mass density than loose windings. Figure \\ref{device}(a) shows the magnetic energy converter cross section with moving magnets, and two stationary coils on the harvester frame. The design parameters of magnet height $T$, coil height $\\Delta$, and airgap between magnet and coil $\\delta$ are optimized for maximum time-average electrical output power $P_{\\rm E}$. Assuming the magnetic thickness of $T+2\\Delta+2\\delta$ is small compared to the magnet widths and lengths, the maximum magnetic flux density, and hence the maximum induced coil voltage $V$ at a given $\\omega$ and $X$, the coil resistance $R_{\\rm Coil}$, and the maximum output power $P_{\\rm E} = V^2\/8R_{\\rm Coil}$ into a matched load, are given by\n\\begin{equation} \nV \\propto NT\/(T+2\\Delta+2\\delta)\\quad ;\\quad\nR_{\\rm coil} \\propto N^2\/\\Delta\\quad ;\\quad\nP_{\\rm E} \\propto \\Delta T^2\/(T+2\\Delta+2\\delta)^2\n\\end{equation}\nwith $N$ coil turns. For a given magnet size and hence $T$, $P_{\\rm E}$ is maximized with the smallest permissible $\\delta$, and $\\Delta = T\/2 + \\delta$. A harvester so optimized produces a maximum induced coil voltage in sinusoidal steady state given by Faraday's Law according to\n\\begin{equation} \nV = d\\phi_B\/dt= B L_2 \\omega X = G \\omega X \\quad .\n\\end{equation}\n2D-magnetic simulation computes the flux distribution and output voltage as described in \\cite{Abraham}; a magnetization of 1.3 T is chosen for the permanent magnets. The magnetic flux distribution of $B_x$ and $B_y$ is shown for zero stroke in Figure \\ref{resonance}(c). Together with the velocity $u(t)$, the distribution is used to compute the time-dependent voltage across each coil turn according to\n\\begin{equation} \n V_{\\rm Turn}(t)= \\omega L_3 (A(x_1(t))-A(x_2(t)) = u(t) L_3 (B_y(x_1(t))-B_y(x_2(t)) = u(t) G(t) \\quad .\n\\end{equation}\nFinally, an optimized coil configuration maximizes the mechanical-to-electrical transduction coefficient $G$ while minimizing the coil resistance $R_{\\rm Coil}$. The total number of layers $N_{\\rm Layers}$ of 50-$\\mu$m-diameter copper-wire coils within $T$, and the number of turns $N_{\\rm Turns}$ in each such layer, are determined based on this optimization. While thick coils reduce $R_{\\rm Coil}$, they also result in fewer $N_{\\rm Layers}$ within a given $T$ and hence a lower $G$. 
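Combining the two relations above, the matched-load output power can be written as\n\begin{equation} \nP_{\rm E}=\frac{V^2}{8R_{\rm Coil}}=\frac{G^2\omega^2X^2}{8R_{\rm Coil}} ,\n\end{equation}\nso that, for a given resonance frequency and stroke, the coil design reduces to a trade-off between $G$ and $R_{\rm Coil}$. 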
The electrical optimization considers this trade-off using the metric $G^2\/R_{\\rm Coil}$ as the optimization goal to maximize power output.\n\n\\section{Fabrication and Packaging}\nSilicon-spring suspensions are fabricated using a single deep-reactive-ion etch through a 525-$\\mu$m-thick wafer, with the mask haloed \\cite{Abraham}-\\cite{blowout} using 40-$\\mu$m-wide trenches to reduce etch loading, resulting in essentially vertically-etched side walls as shown in Figure \\ref{sem}. Additionally, the mask is biased to accommodate a 5-$\\mu$m blow-out per side wall. This fabrication process allows a minimum feature size of 25 $\\mu$m, smaller than the minimum spring width of 30 $\\mu$m used in the suspension. SEM images in Figure \\ref{sem} show the tapering of the beams along their length, and vertical smooth surfaces after etching that prevent damping. The magnets are glued inside the Si-wrapper linked to the suspension using a pick-and-place process with a 3D-printed platform. The spring-mass system is housed within another plastic assembly shown in Figure \\ref{device}(c) along with parts that hold the two coils that are wound from 44 AWG wire on a Mandrell using a lathe. Each coil has 400 turns resulting in an individual $R_{\\rm Coil}=123\\ \\Omega$. The full assembly with the plan- and side-views is shown in Figure \\ref{device}.\n\\begin{figure}[bt]\n\\vspace{-0.5 cm}\\includegraphics[width=38pc, clip=true, trim=0mm 0mm 0mm 0mm]{.\/figures\/figure3.pdf}\n\\caption{\\label{sem} SEM images. (a) Tapered spring profile showing the spring width variation. (b) Angled view of a joint. (c) Side view of an inner suspension wall showing the etch profile. (d) An image of the anchor with the fillet profile.}\n\\vspace{-1.2mm}\n\\end{figure}\n\n\\section{Experimental Performance} \nHarvester parameters are extracted from the measured open-circuit voltage $V_{\\rm OC}=G \\omega X$ as a function of $\\omega$ and $g$; see Figure \\ref{opencircuit}(a). Various optimizations involved in harvester design result in the high $V_{OC}=1.75$ V. Figure \\ref{opencircuit}(a) shows that the suspension springs harden at higher $g$, exhibiting two voltage branches together with a slight increase in $\\omega_{\\rm Res}=2\\pi f_{\\rm Res}$ with $g$. The Duffing dynamics \\cite{duffing} used to fit the data and extract $X$ are shown in Figure \\ref{opencircuit}(b) while the harvester equivalent circuit model and its parameters are listed in Figure \\ref{opencircuit}(c). The parameters are extracted from the two lower-$g$ data and the model fits well against the data for $g=0.095$.\n\\begin{figure}[tb]\n\\hspace{-0.4 cm}\\includegraphics[width=40pc, clip=true, trim=0mm 0.3mm 0mm 1mm]{.\/figures\/figure4.pdf}\n\\caption{\\label{opencircuit} (a) Measured open-circuit voltage compared against simulations for different $\\omega$ and $g$. (b) Harvester dynamic model with the Duffing spring hardening. (c) Harvester equivalent circuit model with extracted mechanical and electromagnetic parameters.\n\\vspace{-1.2mm}\n\\vspace{-1.2mm}\n\\vspace{-1.2mm}\n}\n\\end{figure}\n\nThe measured loaded output power $P_{\\rm Out}$, output voltage $V_{\\rm Load}$, resonance frequency $f_{\\rm Res}=\\omega_{\\rm Res}\/2\\pi$, and stroke X are plotted against $g$ for different loads $R_{\\rm Load}$. The measurements, all performed at resonance, are compared against model simulations in Figure~\\ref{power} showing a good match. 
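The headline density figures quoted in the introduction follow directly from these measurements and the 1.79~cm$^3$ active volume:\n\begin{equation} \n{\rm PD}=\frac{2.2\;{\rm mW}}{1.79\;{\rm cm^{3}}}\approx1.23\;{\rm mW\/cm^{3}},\qquad{\rm NPD}={\rm PD}\/(1.1)^{2}\approx1.02\;{\rm mW\/cm^{3}\/g^{2}} .\n\end{equation}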
The maximum $P_{\\rm Out}$ of 2.2 mW is obtained with the matched load of $R_{\\rm Load}=R_{\\rm Coil}=245\\ \\Omega$ at 1.1 $g$, the highest reported value to date for Si-based MEMS harvesters, at least at sub-kHz frequencies. The device is currently stroke-limited with the side-bar hitting the external frame. If $R_{\\rm Load}>R_{\\rm Coil}$, reduced electrical damping causes the stroke to reach its maximum of $X = S\/2$ at a lower $g$ with a higher $V_{\\rm Load}$. The opposite holds for $R_{\\rm Load}7$\\,kV\/cm), and electrons\ntherefore scatter preferentially into high-energy states. This yields high \nsteady-state\nelectron temperatures of 184 and 189\\,K for emission at 3 and 4\\,THz\nrespectively at a lattice temperature of 4\\,K. The electric fields are lower in\n(111) Si\/SiGe and (001) Ge\/GeSi devices, owing to the greater lengths of the\nactive regions. This leads to correspondingly lower electron temperatures of 127\nand 129\\,K for (111) Si\/SiGe devices, and 93 and 100\\,K for Ge\/GeSi\ndevices emitting at 3 and 4\\,THz respectively.\nThe effect of thermal excitation upon device\nperformance is illustrated in Fig.~\\ref{fig:Gpk_vs_Te-combined}. It can be seen\nthat the gain decreases monotonically as electron temperature increases, owing\nto the thermal backfilling of the lower laser level. Ge\/GeSi devices are able to\noperate with the lowest electron temperatures, and hence achieve the\nhighest peak gains.\n\n\\begin{figure}\n\\includegraphics*[width=8.6cm]{Gpk_vs_Te_combined}\n\\caption{(Color online) Relationship between peak gain and electron temperature for\ndevices emitting near 3\\,THz (red lines) and 4\\,THz (black lines).}\n\\label{fig:Gpk_vs_Te-combined}\n\\end{figure}\n\nThe current density was calculated at the design bias for each of the devices.\nCurrent densities of 270 and 380\\,A\\,cm$^{-2}$ were predicted at the design bias \nfor Ge\/GeSi devices operating at 3 and 4\\,THz respectively. In Si\/SiGe, current densities were \ncalculated as 430 and 460\\,A\\,cm$^{-2}$ for the (111)-oriented devices and\n210 and 240\\,A\\,cm$^{-2}$ for the (001)-oriented devices at 3 and 4\\,THz respectively.\nThe low operating currents in (001) Si\/SiGe devices were due to the very low \nscattering rates, which result from the high $\\Delta_2$ valley effective mass.\nThe ratio of peak gain to current density was calculated as a figure of merit\nfor each device at its design bias. Ge\/GeSi devices were found to have the\nhighest values (240 and 210\\,cm\/kA) followed by (111) Si\/SiGe (57 and\n84\\,cm\/kA), and (001) Si\/SiGe (25 and 14\\,cm\/kA) at 3 and 4\\,THz\nrespectively. We should note that our simulations of\n(001) Si\/SiGe QCLs do not include $\\Delta_2\\to\\Delta_4$ intervalley scattering\nevents, which would further degrade the predicted performance. However, as these\nstructures already appear to be poor candidates for laser design, a more\ncomprehensive transport model was considered unnecessary.\nThreshold current densities were calculated at $T=4\\,K$\nas 440, 210, and 330\\,A\\,cm$^{-2}$ for the 4\\,THz (111)-Si\/SiGe, 3\\,THz Ge\/GeSi and\n4\\,THz Ge\/GeSi devices respectively. \n\n\\section{Conclusion}\nWe have presented a comparison between the simulated performance of Si-based \nQCLs using the (001) Ge\/GeSi, (111) Si\/SiGe, and (001) Si\/SiGe material\nconfigurations. A semi-automated design optimization algorithm was used, in\norder to provide a fair comparison between equivalent designs. 
Our results\nshow that (001) Ge\/GeSi is the most promising system for development of a\nSi-based QCL. Firstly, the \nbandstructure calculations in section~\\ref{scn:bandstructure} show that the\n(001) Ge\/GeSi and (111) Si\/SiGe systems offer a $\\gtrsim$90\\,meV energy range\nfor QCL design, compared with only $\\sim$5\\,meV in (001) Si\/SiGe systems, owing \nto the large energy separation between conduction band minima. This reduces the\nprobability of current-leakage via intervalley scattering, and allows a wider\nrange of emission frequencies to be targeted.\nSecondly, the low $L$ valley effective mass was found to yield a relatively long\nperiod length for the QCL active region. This reduces the operating electric\nfield, and hence the current density and the temperature of the electron\ndistribution. Net gain was predicted for both of the Ge\/GeSi devices, but only \none of the four optimized Si\/SiGe devices. Ge\/GeSi bound--to--continuum QCLs\nwere predicted to operate up to temperatures of 179 and 184\\,K at 3 and 4\\,THz\nrespectively, while the 4\\,THz (111) Si\/SiGe device was predicted to operate\nup to 127\\,K. These figures may potentially be improved via waveguide design\noptimization to minimize losses, or through the use of a resonant-phonon active \nregion design.\nNevertheless, the predicted values exceed the highest-recorded operating\ntemperature of 116\\,K for a 3.66\\,THz seven-well III--V BTC\ndevice.\\cite{APLScalari2005}\n\n\n\\begin{acknowledgments}\nThis work was supported by EPSRC Doctoral Training Allowance funding. The\nauthors are grateful to Jonathan Cooper, University of Leeds and Douglas Paul, \nUniversity of Glasgow for useful discussions.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nTime dependent data, arising when measurements are taken on a set of units at different time occasions, are pervasive in a plethora of different fields. Examples include, but are not limited to, time evolution of asset prices and volatility in finance, growth of countries as measured by economic indices, heart or brain activities as monitored by medical instruments, disease evolution evaluated by suitable bio-markers in epidemiology. When analyzing such data, we need strategies modelling typical time courses by accounting for the individual correlation over time.\nIn fact, while nomenclature and taxonomy in this setting are not always consistent, we might highlight some relevant differences, subsequently implying different challenges in the modelling process, in time-dependent data structures. On opposite poles, we may distinguish functional from longitudinal data analysis. In the former case the quantity of interest is supposed to vary over a continuum and usually a large number of regularly sampled observations is available, allowing to treat each element of the sample as a function. On the other hand, in longitudinal studies, time observations are often shorter with sparse and irregular measurements. Readers may refer to \\citet{rice2004functional} for a thorough comparison and discussion about differences and similarities between functional and longitudinal data analysis.\n\nEarly development in these areas mainly aimed to describe individual-specific curves by properly accounting for the correlation between measurements for each subject \\citep[see e.g.][and references therein]{diggle2002analysis,ramsey2005functional} with the subjects themselves often considered to be independent. 
\nNonetheless this is not always the case. Therefore, more recently, there has been an increased attention towards clustering methodologies aimed at describing heterogeneity among time-dependent observed trajectories;\nsee \\citet{erosheva2014breaking} for a recent review of related methods used in criminology and developmental psychology. \nFrom a functional standpoint, different approaches have been studied and readers may refer to the works by \\citet{bouveyron2011model}, \\citet{bouveyron2015discriminative} and \\citet{bouveyron2020co} or to \\citet{jacques2014functional} for a review. On the other hand, from a longitudinal point of view, model-based clustering approaches have been proposed by \\citet{de2008model}, \\citet{mcnicholas2010model}. Lastly a review from a more general time-series data perspective may be found in \\citet{liao2005clustering} and \\citet{frauhwirth2011model}.\n\nThe methods cited so far usually deal with situations where a single feature is measured over time for a number of subjects, where therefore the data are represented by a $n \\times T$ matrix, with $n$ and $T$ being the number of subjects and of observed time occasions. Nonetheless it is increasingly common to encounter multivariate time-dependent data, where several variables are measured over time for different individuals. These data may be represented according to three-way $n \\times d \\times T$ matrices, with $d$ being the number of time-dependent features; a graphical illustration of such structure is displayed in Figure \\ref{fig:raw_curves_multiple}. The introduction of an additional layer entails new challenges that have to be faced by clustering tools. In fact, as noted by \\citet{anderlucci2015covariance}, models have to account for three different aspects, being the correlation across different time observations, the relationships between the variables and the heterogeneity among the units, each one of them arising from a different layer of the three-way data structure.\n\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width = 9cm, height = 8cm]{Figures\/raw_multicurves.pdf}\n\t\\caption{Example of multivariate time-dependent data: $d=4$ variables are measured for $n=8$ individuals over $T=15$ time instants, giving rise to the displayed curves.}\n\t\\label{fig:raw_curves_multiple}\n\\end{figure}\n\nTo extract useful information and unveil patterns from such complex structured and high-dimensional data, standard clustering strategies would require specification and estimation of severely parameterized models. In this situation, parsimony has often been induced by somehow neglecting the correlation structure among variables. A possible clever workaround, specifically proposed in a parametric setting, is represented by the contributions of \\citet{viroli2011finite,viroli2011model} where, in order to handle three-way data, mixtures of Gaussian matrix-variate distributions are exploited. \n\nIn the present work, a different direction is taken, and a co-clustering strategy is pursued to address the mentioned issues. The term co-clustering refers to those methods aimed to find simultaneously row and column clusters of a data matrix. These techniques are particularly useful in high-dimensional settings where standard clustering methods may fall short in uncovering meaningful and interpretable row groups because of the high number of variables. 
By searching for homogeneous blocks in large matrices, co-clustering tools provide parsimonious summaries possibly useful both as dimensionality reduction and as exploratory steps. These techniques are particularly appropriate when relations among the observed variables are of interest. Note that, even in the co-clustering context, the usual dualism between \\emph{distance-based} and \\emph{density-based} strategies can be found. We pursue the latter approach which embeds co-clustering in probabilistic framework, builds a common framework to handle different types of data and reflects the idea of a density resulting from a mixture model. \nSpecifically, we propose a parametric model for time-dependent data and a new estimation strategy to handle the distinctive characteristics of the model. Parametric co-clustering of time-dependent data has been pursued by \\citet{slimen2018model} and \\citet{bouveyron2018functional} in the functional setting, by mapping the original curves to the space spanned by the coefficients of a basis expansion. Unlike these contributions, we build on the idea that individual curves within a cluster arise as transformations of a common shape function which is modeled in such a way as to handle both functional and longitudinal data. \nOur co-clustering framework allows for easy\ninterpretation and cluster description but also \nfor specification of different notions of clusters which, depending on subject matter application, may be more desirable and interpretable by subject matter experts. \n\nThe rest of the paper is organized as follows. In Section \\ref{sec:chcharles_buildingblocks}, we provide the background needed for the specification of the proposed model which is described in Section \\ref{sec:chcharles_timedepLBM}, along with the estimation procedure. In Section \\ref{sec:chcharles_numexample}, the model performances are illustrated both on simulated and real examples. We conclude the paper by summarizing our contributions and pointing to some future research directions.\n\n\n\n\\section{Modelling time-dependent data}\\label{sec:chcharles_buildingblocks}\nWhen dealing with the heterogeneous time dependent data landscape, outlined in the previous section, a variety of modelling approaches are sensible to be pursued. \nThe route we follow in this work borrows its rationale from the \\emph{curve registration} framework \\citep{ramsay1998curve}, according to which observed curves often exhibit common patterns but with some variations. Methods for curve registration, also known as \\emph{curve alignment} or \\emph{time warping}, are based on the idea of aligning prominent features in a set of curves via either an \\emph{amplitude variation}, a \\emph{phase variation} or a combination of the two. The first one is concerned with vertical variations while the latter regards horizontal, hence time related, ones. As an example it is possible to think about modelling the evolution of a specific disease. Here the observable heterogeneity of the raw curves can often be disentangled in two distinct sources: on the one hand, it could depend on differences in the intensities of the disease among subjects whereas, on the other hand, there could be different ages of onset, i.e. the age at which an individual experiences the first symptoms. 
After properly taking into account these causes of variation, the curves result to be more homogeneously behaving, with a so called \\emph{warping function}, which synchronizes the observed curves and allows for visualization and estimation of a common mean shape curve. \n\nCoherently with the aforementioned rationale, in this work time dependency is accounted for via a \\emph{self-modelling regression} approach \\citep{lawton1972self} and, more specifically, via an extension to the so called \\emph{Shape Invariant Model} \\citep[SIM,][]{lindstrom1995self}, based on the idea that an individual curve arises as a simple transformation of a common shape function. \\\\\nLet $\\mathcal{X}=\\{x_i({\\bf t_i})\\}_{1\\le i \\le n}$ be the set of curves, observed on $n$ individuals, with $x_i(t)$ being the level of the \\emph{i-th} curve at time $t$ and $t \\in {\\bf t_i} = (t_1, \\dots, T_{n_i})$, hence with the number of observed measurements allowed to be subject-specific. Stemming from the SIM, $x_i(t)$ is modelled as \n\\begin{eqnarray}\\label{eq:shapeinvariantmodel_nocluster}\nx_i(t) = \\alpha_{i,1} + \\text{e}^{\\alpha_{i,2}}m(t-\\alpha_{i,3}) + \\epsilon_{i}(t) \\; \\end{eqnarray} \nwhere \n\\begin{itemize}\n\\item $m(\\cdot)$ denotes a general common shape function whose specification is arbitrary. In the following we consider B-spline basis functions \\citep{de1978practical}, i.e. giving $m(t)=m(t;\\beta)= \\mathcal{B}(t)\\beta,$ where $\\mathcal{B}(t)$ and $\\beta$ are respectively a vector of B-spline basis evaluated at time $t$ and a vector of basis coefficients whose dimensions allow for different degrees of flexibility;\n\\item $\\alpha_{i}=(\\alpha_{i,1},\\alpha_{i,2},\\alpha_{i,3}) \\sim \\mathcal{N}_3(\\mu^\\alpha,\\Sigma^{\\alpha})$ for $i=1,\\dots,n$ is a vector of subject-specific normally distributed random effects. These random effects are responsible for the individual specific transformations of the mean shape curve $m(\\cdot)$ assumed to generate the observed ones. In particular $\\alpha_{i,1}$ and $\\alpha_{i,3}$ govern respectively amplitude and phase variations while $\\alpha_{i,2}$ describes possible scale transformations. Random effects also allow accounting for the correlation among observations on the same subject measured at different time points. Following \\citet{lindstrom1995self}, the parameter $\\alpha_{i,2}$ is here optimized in the log-scale to avoid identifiability issues;\n\\item $\\epsilon_{i}(t) \\sim \\mathcal{N}(0,\\sigma^2_{\\epsilon})$ is a Gaussian distributed error term.\n\\end{itemize}\n\nDue to its flexibility, the SIM has already been considered as a stepping stone to model functional as well as longitudinal time-dependent data\n\\citep{telesca2008bayesian,telesca2012modeling}. Indeed, on the one hand, the smoothing involved in the specification of $m(\\cdot;\\beta)$ allows to handle function-like data. On the other hand, random effects, which borrow information across curves, make this approach fruitful even with short, irregular and sparsely sampled time series; readers may refer to \\citet{erosheva2014breaking} for an illustration of this capabilities in the context of behavioral trajectories. Therefore, we find model (\\ref{eq:shapeinvariantmodel_nocluster}) particularly appealing and suitable for our aims, potentially able to handle time-dependent data in a comprehensive way. 
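To make the hierarchical structure of the model explicit, it is convenient to restate (\ref{eq:shapeinvariantmodel_nocluster}) in vector form: denoting by $x_i=(x_i(t_1),\dots,x_i(T_{n_i}))'$ the measurements on the \emph{i-th} subject and, with a slight abuse of notation, by $\mathcal{B}({\bf t_i}-\alpha_{i,3})$ the matrix whose rows are the basis functions evaluated at the shifted time points, the model can be written as\n\begin{eqnarray}\nx_i\,|\,\alpha_i & \sim & \mathcal{N}_{n_i}\left(\alpha_{i,1}{\bf 1}_{n_i}+\text{e}^{\alpha_{i,2}}\mathcal{B}({\bf t_i}-\alpha_{i,3})\beta,\;\sigma^2_{\epsilon}I_{n_i}\right) \nonumber\\\n\alpha_i & \sim & \mathcal{N}_3(\mu^\alpha,\Sigma^{\alpha}) \; ,\n\end{eqnarray}\nthat is, a Gaussian measurement layer conditionally on the subject-specific effects, combined with a Gaussian layer for the effects themselves. 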
\section{Time-dependent Latent Block Model}\label{sec:chcharles_timedepLBM}
\subsection{Latent Block Model}\label{sec:chcharles_lbmgeneral}
In the parametric, or model-based, co-clustering framework, the \emph{Latent Block Model} \citep[LBM,][]{govaert2013co} is the most popular approach. Data are represented in matrix form, $\mathcal{X}=\{ x_{ij} \}_{1\le i \le n, 1 \le j \le d}$, where for now $x_{ij}$ should be intended as a generic random variable. To aid the definition of the model, and in accordance with the parametric approach to clustering \citep{fraley2002model,bouveyron2019model}, two latent random vectors $\mathbf{z} = \{ z_{i}\}_{1 \le i \le n}$ and $\mathbf{w}=\{ w_{j} \}_{1 \le j \le d}$, with $z_i = (z_{i1},\dots,z_{iK})$ and $w_j=(w_{j1},\dots,w_{jL})$, are introduced, indicating respectively the row and column memberships, with $K$ and $L$ the number of row and column clusters. A standard binary notation is used for the latent variables, i.e. $z_{ik}=1$ if the \emph{i-th} observation belongs to the \emph{k-th} row cluster and 0 otherwise and, likewise, $w_{jl}=1$ if the \emph{j-th} variable belongs to the \emph{l-th} column cluster and 0 otherwise. The model formulation relies on a local independence assumption, i.e. the $n \times d$ random variables $\{ x_{ij} \}_{1\le i \le n, 1 \le j \le d}$ are assumed to be independent conditionally on $\mathbf{z}$ and $\mathbf{w}$, which are in turn assumed to be independent. The LBM can thus be written as
\begin{eqnarray}\label{eq:LBM}
p(\mathcal{X}; \Theta) = \sum_{z \in Z}\sum_{w \in W}p(\mathbf{z};\Theta)p(\mathbf{w};\Theta)p(\mathcal{X}|\mathbf{z},\mathbf{w};\Theta) \; ,
\end{eqnarray}
where
\begin{itemize}
 \item $Z$ and $W$ are the sets of all possible partitions of the rows and the columns into $K$ and $L$ groups respectively;
 \item the latent vectors $\mathbf{z}, \mathbf{w}$ are assumed to be multinomial, with $p(\mathbf{z};\Theta)=\prod_{ik}\pi_k^{z_{ik}}$ and $p(\mathbf{w};\Theta)=\prod_{jl} \rho_l^{w_{jl}}$, where $\pi_k, \rho_l > 0$ are the row and column mixture proportions, with $\sum_k \pi_k = \sum_l \rho_l = 1$;
 \item as a consequence of the local independence assumption, $p(\mathcal{X}|\mathbf{z},\mathbf{w};\Theta) = \prod_{ijkl} p(x_{ij};\theta_{kl})^{z_{ik}w_{jl}}$, where $\theta_{kl}$ is the vector of parameters specific to block $(k,l)$;
 \item $\Theta = (\pi_k,\rho_l,\theta_{kl})_{1\le k \le K, 1 \le l \le L}$ is the full parameter vector of the model.
\end{itemize}

The LBM is particularly flexible in modelling different data types, handled via a proper specification of the marginal density $p(x_{ij};\theta_{kl})$: binary \citep{govaert2003clustering}, count \citep{govaert2010latent}, continuous \citep{lomet2012selection}, categorical \citep{keribin2015estimation}, ordinal \citep{jacques2018model,corneli2019co}, and even mixed-type data \citep{selosse2020model}.

\subsection{Model specification}\label{sec:chcharles_modspec}
Once the LBM structure has been properly defined, extending its rationale to handle time-dependent data in a co-clustering framework boils down to a suitable specification of $p(x_{ij};\theta_{kl})$. This reveals one of the main advantages of such a highly structured model: patterns in multivariate and complex data can be searched for by specifying only the model for the single variable $x_{ij}$.
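To illustrate the latent structure just described, the toy R sketch below draws the row and column memberships from multinomial distributions and then, conditionally on them, generates each entry independently from its block-specific density; here a scalar Gaussian stands in for $p(x_{ij};\theta_{kl})$, whereas in our setting each entry will be a whole curve, and all numerical values are arbitrary.
\begin{verbatim}
set.seed(1)
n <- 100; d <- 20; K <- 2; L <- 3
pi_k  <- c(0.4, 0.6)                 # row mixing proportions (assumed)
rho_l <- c(0.3, 0.3, 0.4)            # column mixing proportions (assumed)
mu_kl <- matrix(c(0, 2, -2, 1, -1, 3), nrow = K, ncol = L)  # block means

z <- sample(1:K, n, replace = TRUE, prob = pi_k)   # row memberships
w <- sample(1:L, d, replace = TRUE, prob = rho_l)  # column memberships

## conditionally on (z, w), the n x d entries are independent,
## each drawn from the density of its block (unit-variance Gaussian here)
block_mean <- mu_kl[cbind(rep(z, times = d), rep(w, each = n))]
X <- matrix(rnorm(n * d, mean = block_mean), nrow = n, ncol = d)
\end{verbatim}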
As introduced in Section \ref{sec:intro}, multidimensional time-dependent data may be represented according to a three-way structure where the third \emph{mode} accounts for the time evolution. The observed data take an array configuration $\mathcal{X}= \{ x_{ij}({\bf t_i}) \}_{1\le i \le n, 1\le j \le d}$, with ${\bf t_i}=(t_1,\dots,t_{n_i})$ as outlined in Section \ref{sec:chcharles_buildingblocks}; different observational lengths can be handled by a suitable use of missing entries. Consistently with (\ref{eq:shapeinvariantmodel_nocluster}), we consider the following generative model for the curve in the $(i,j)$\emph{-th} entry, belonging to the generic block $(k,l)$:
\begin{eqnarray}\label{eq:simmodel_cluster}
x_{ij}(t)|_{z_{ik}=1,w_{jl}=1} = \alpha_{ij,1}^{kl} + \text{e}^{\alpha_{ij,2}^{kl}}m(t-\alpha_{ij,3}^{kl}; \beta_{kl}) + \epsilon_{ij}(t) \;
\end{eqnarray}
with $t \in {\bf t_i}$ a generic time instant.
A relevant difference with respect to the original SIM consists, coherently with the co-clustering setting, in the parameters being block-specific, since the generative model is specified conditionally on the block membership of the cell. As a consequence:
\begin{itemize}
\item $m(t;\beta_{kl})= \mathcal{B}(t)\beta_{kl}$, where the quantities are defined as in Section \ref{sec:chcharles_buildingblocks}, with the only difference that $\beta_{kl}$ is a vector of block-specific basis coefficients, hence allowing different mean shape curves across different blocks;
\item $\alpha_{ij}^{kl}=(\alpha_{ij,1}^{kl},\alpha_{ij,2}^{kl},\alpha_{ij,3}^{kl}) \sim \mathcal{N}_3(\mu_{kl}^\alpha,\Sigma_{kl}^{\alpha})$ is a vector of cell-specific random effects distributed according to a block-specific Gaussian distribution;
\item $\epsilon_{ij}(t) \sim \mathcal{N}(0,\sigma^2_{\epsilon,kl})$ is the error term, distributed as a block-specific Gaussian;
\item $\theta_{kl}=(\mu_{kl}^\alpha,\Sigma_{kl}^{\alpha},\sigma^2_{\epsilon,kl},\beta_{kl})$.
\end{itemize}

Note that the ideas borrowed from the \emph{curve registration} framework are here embedded in a clustering setting. Therefore, while \emph{curve alignment} aims to synchronize the curves to estimate a common mean shape, in our setting the SIM works as a suitable tool to model the heterogeneity inside a block and to introduce a flexible notion of cluster. The rationale behind considering the SIM in a co-clustering framework consists in looking for blocks characterized by different mean shape functions $m(\cdot;\beta_{kl})$. Curves within the same block arise as random shifts and scale transformations of $m(\cdot;\beta_{kl})$, driven by the block-specifically distributed random effects. Consider the small panels on the left side of Figure \ref{fig:curve_coclust}, displaying a number of curves which arise as transformations induced by non-zero values of $\alpha_{ij,1}$, $\alpha_{ij,2}$ or $\alpha_{ij,3}$. Beyond the sample variability, the curves differ by a random (phase) shift on the $x$-axis, an amplitude shift on the $y$-axis, and a scale factor. According to model (\ref{eq:simmodel_cluster}), all these curves belong to the same cluster, since they share the same mean shape function (Figure \ref{fig:curve_coclust}, right panel).
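As an illustration of this notion of cluster, the sketch below generates an $n \times d$ array of curves according to a mechanism of the type of (\ref{eq:simmodel_cluster}), given row and column memberships; the two analytic shape functions stand in for the block-specific $m(\cdot;\beta_{kl})$ and, together with all numerical values, are assumptions made purely for illustration.
\begin{verbatim}
set.seed(1)
n <- 20; d <- 6; K <- 2; L <- 2
tt <- seq(0, 1, length.out = 15)

z <- sample(1:K, n, replace = TRUE)   # row memberships
w <- sample(1:L, d, replace = TRUE)   # column memberships

## toy block-specific mean shapes standing in for m(. ; beta_kl)
shapes <- list(function(t) sin(2 * pi * t), function(t) 4 * (t - 0.5)^2)
shape_of_block <- matrix(c(1, 2, 2, 1), K, L)   # which shape each block uses
sigma_eps <- 0.1

gen_cell <- function(k, l) {
  a <- rnorm(3, mean = 0, sd = sqrt(c(0.5, 0.05, 0.01)))  # cell random effects
  m <- shapes[[shape_of_block[k, l]]]
  a[1] + exp(a[2]) * m(tt - a[3]) + rnorm(length(tt), 0, sigma_eps)
}

## n x d x T array: the (i, j) cell holds a curve from block (z_i, w_j)
X <- array(NA_real_, dim = c(n, d, length(tt)))
for (i in seq_len(n)) for (j in seq_len(d)) X[i, j, ] <- gen_cell(z[i], w[j])
\end{verbatim}
Curves sharing a block differ only through their cell-specific random effects, while curves in different blocks are built around different mean shapes.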
\n\n\\begin{figure}[t]\n\\center\n\\includegraphics[width=.62\\textwidth]{Figures\/prova.pdf}\n\\raisebox{2.5ex}{\n\\includegraphics[width=.34\\textwidth]{Figures\/TTT.pdf}}\n\\caption{In the left panels, curves in dotted line arise as random fluctuations of the superimposed red curve, but they are all time, amplitude or scale transformations of the same mean-shape function on the right panel.}\n\\label{fig:curve_coclust}\n\\end{figure}\n\nIn fact, further flexibility can be naturally introduced within the model by ``switching off'' one or more random effects, depending on subject-matter considerations and on the concept of cluster one has in mind. If there are reasons to support that similar time evolutions associated to different intensities are, in fact, expression of different clusters, it makes sense to switch off the random intercept $\\alpha_{ij,1}$. In the example illustrated in Figure \\ref{fig:curve_coclust} this ideally leads to a two-clusters structure (Figure \\ref{fig:FT}, left panels). Similarly, switching off the random effect $\\alpha_{ij,3}$ would lead to blocks characterized by a shifted time evolution (Figure \\ref{fig:FT}, right panels), while removing $\\alpha_{ij,2}$ determines different blocks varying for a scale factor (Figure \\ref{fig:FT}, middle panels). \n\n\\begin{figure}[t]\n\\center\n\\begin{tabular}{p{3cm}p{4cm}p{4cm}p{4cm}}\n&\\texttt{FTT}&\\hspace{-0.35cm}\\texttt{TFT}&\\hspace{-0.65cm}\\texttt{TTF}\\\\\n\\end{tabular}\\\\\n\\includegraphics[width=.25\\textwidth]{Figures\/FTT.pdf}\n\\includegraphics[width=.25\\textwidth]{Figures\/TFT.pdf}\n\\includegraphics[width=.25\\textwidth]{Figures\/TTF.pdf}\n\\caption{Pairs of plots in each column represent the two-cluster configurations arising from switching off, from left to right $\\alpha_{ij,1}$, $\\alpha_{ij,2},$ $\\alpha_{ij,3}$. In the names of the models, as used in the rest of the paper, \\texttt{T} indicates a switched on random effect while \\texttt{F} a switched off one.}\n\\label{fig:FT}\n\\end{figure}\n\n\n\\subsection{Model estimation}\\label{sec:chcharles_modelest}\nTo estimate the LBM several approaches have been proposed, as for example Bayesian \\citep{wyse2012block}, greedy search \\citep{wyse2017inferring} and likelihood-based ones \\citep{govaert2008block}. In this work we focus on the latter class of methods. In principle, the estimation strategy would aim to maximize the log-likelihood $\\ell(\\Theta) = \\log p(\\mathcal{X}; \\Theta)$ with $p(\\mathcal{X}; \\theta)$ defined as in (\\ref{eq:LBM}); nonetheless, the missing structure of the data makes this maximization impractical. For this reason the \\emph{complete data log-likelihood} is usually considered as the objective function to optimize, being defined as\n\\begin{equation}\\label{eq:compl_loglik_LBM}\n\\ell_c(\\Theta,\\mathbf{z},\\mathbf{w}) = \\sum_{ik} z_{ik}\\log\\pi_k + \\sum_{jl}w_{jl}\\log\\rho_l + \\sum_{ijkl}z_{ik}w_{jl}\\log p(x_{ij}; \\theta_{kl})\n\\end{equation}\nwhere the first two terms account for the proportions of row and column clusters while the third one depends on the probability density function of each block.\n\nAs a general solution, to maximize (\\ref{eq:compl_loglik_LBM}) and obtain an estimate of $\\hat{\\Theta}$ when dealing with situations where latent variables are involved, one would instinctively resort to the Expectation-Maximization algorithm \\citep[EM,][]{dempster1977maximum}. 
The basic idea underlying the EM algorithm consists in finding a lower bound of the log-likelihood and optimizing it via an iterative scheme, so as to produce a converging sequence of estimates $\hat{\Theta}^{(h)}$ \citep[see][for more details about the convergence properties of the algorithm]{wu1983convergence}. In the co-clustering framework, this lower bound can be exhibited by rewriting the log-likelihood as
$$
\ell(\Theta) = \mathcal{L}(q;\Theta) + \zeta
$$
where $\mathcal{L}(q; \Theta) = \sum_{{\bf z},{\bf w}} q({\bf z},{\bf w})\log(p(\mathcal{X},{\bf z},{\bf w}; \Theta)/q({\bf z},{\bf w}))$, with $q({\bf z},{\bf w})$ a generic probability mass function on the support of $({\bf z},{\bf w})$, and $\zeta$ is the nonnegative Kullback--Leibler divergence between $q$ and the posterior $p({\bf z},{\bf w}|\mathcal{X};\Theta)$, which vanishes when the two coincide.

The E step of the algorithm maximizes the lower bound $\mathcal{L}$ over $q$ for a given value of $\Theta$. Straightforward calculations show that $\mathcal{L}$ is maximized for $q^{*}({\bf z},{\bf w})=p({\bf z},{\bf w}|\mathcal{X};\Theta)$. Unfortunately, in a co-clustering scenario, the joint posterior distribution $p({\bf z},{\bf w}|\mathcal{X};\Theta)$ is not tractable, as it involves terms that cannot be factorized, unlike in a standard mixture model framework. As a consequence, several modifications have been explored, searching for viable solutions when performing the E step \citep[see][for a more detailed treatment]{govaert2013co}; examples are the \emph{Classification EM} (CEM) and the \emph{Variational EM} (VEM). Here we propose to use a Gibbs sampler within the E step to approximate the posterior distribution $p({\bf z},{\bf w}|\mathcal{X};\Theta)$. This results in a stochastic version of the EM algorithm, referred to as SEM-Gibbs in the following. Given an initial column partition ${\bf w^{(0)}}$ and an initial value for the parameters $\Theta^{(0)}$, at the $h$\emph{-th} iteration the algorithm proceeds as follows:
\begin{itemize}
	\item SE step: $q^{*}({\bf z},{\bf w})\simeq p({\bf z},{\bf w}|\mathcal{X};\Theta^{(h-1)})$ is approximated with a Gibbs sampler, which alternately samples ${\bf z}$ and ${\bf w}$ from their conditional distributions a certain number of times before retaining the new values ${\bf z}^{(h)}$ and ${\bf w}^{(h)}$;
	\item M step: $\mathcal{L}(q^{*}({\bf z},{\bf w}),\Theta^{(h-1)})$ is then maximized over $\Theta$, where
	\begin{align*}
	\mathcal{L}(q^{*}({\bf z},{\bf w}),\Theta^{(h-1)}) & \simeq \sum_{z,w}p({\bf z},{\bf w}|\mathcal{X},\Theta^{(h-1)})\log(p(\mathcal{X},{\bf z},{\bf w}|\Theta)/p({\bf z},{\bf w}|\mathcal{X},\Theta^{(h-1)}))\\
		& \simeq E[\ell_c(\Theta, {\bf z}^{(h)}, {\bf w}^{(h)})|\Theta^{(h-1)}]+\xi,
	\end{align*}
	with $\xi$ not depending on $\Theta$. This step therefore reduces to the maximization of the conditional expectation of the \emph{complete data log-likelihood} (\ref{eq:compl_loglik_LBM}) given ${\bf z}^{(h)}$ and ${\bf w}^{(h)}$.
\end{itemize}

In the proposed framework, due to the presence of the random effects, some additional challenges have to be faced.
In fact, the maximization of the conditional expectation of (\ref{eq:compl_loglik_LBM}) associated to model (\ref{eq:simmodel_cluster}) requires a cumbersome multidimensional integration in order to compute the marginal density
\begin{eqnarray}\label{eq:marglikelihood_raneff}
p(x_{ij};\theta_{kl}) = \int p(x_{ij}|\alpha_{ij}^{kl};\theta_{kl})p(\alpha_{ij}^{kl};\theta_{kl})\, d\alpha_{ij}^{kl} \; .
\end{eqnarray}
Note that, with a slight abuse of notation, we suppress here the dependence on time $t$, i.e. $x_{ij}$ has to be intended as $x_{ij}({\bf t_i})$. In the SE step, on the other hand, the evaluation of (\ref{eq:marglikelihood_raneff}) is needed for all possible configurations of $\{z_i\}_{i=1,\dots,n}$ and $\{w_{j}\}_{j=1,\dots,d}$. These quantities are straightforwardly obtained when the SEM-Gibbs is used to estimate models without random effects, while their computation is more troublesome in our scenario.

For these reasons, we propose a modification of the SEM-Gibbs algorithm, called \emph{Marginalized SEM-Gibbs} (M-SEM), where an additional \emph{Marginalization step} is introduced to properly account for the random effects. Given an initial value for the parameters $\Theta^{(0)}$ and an initial column partition $\mathbf{w}^{(0)}$, the $h$\emph{-th} iteration of the M-SEM algorithm alternates the following steps:
\begin{itemize}
 \item \textbf{Marginalization step}: the single-cell contributions (\ref{eq:marglikelihood_raneff}) to the \emph{complete data log-likelihood} are computed by means of a Monte Carlo integration scheme,
 \begin{eqnarray}
 p(x_{ij};\theta_{kl}^{(h-1)}) \simeq \frac{1}{M} \sum_{m=1}^M p(x_{ij} | \alpha_{ij}^{kl,(m)}; \theta_{kl}^{(h-1)}) \;
 \end{eqnarray}
 for $i=1,\dots,n$, $j=1,\dots,d$, $k=1,\dots,K$ and $l=1,\dots,L$, with $M$ the number of Monte Carlo samples. The vectors $\alpha_{ij}^{kl,(1)},\dots,\alpha_{ij}^{kl,(M)}$ are drawn from a Gaussian distribution $\mathcal{N}_3(\mu_{kl}^{\alpha,(h-1)},\Sigma_{kl}^{\alpha,(h-1)})$; note that this choice amounts to a random version of the \emph{Gaussian quadrature rule} \citep[see e.g.][]{pinheiro2006mixed};

 \item \textbf{SE step}: $p({\bf z},{\bf w}|\mathcal{X};\Theta^{(h-1)})$ is approximated by repeating, for a number of iterations, the following Gibbs sampling steps:
 \begin{enumerate}
 \item generate the row partition $z_i^{(h)}=(z_{i1}^{(h)},\dots,z_{iK}^{(h)})$, $i=1,\dots,n$, according to a multinomial distribution $z_i^{(h)}\sim \mathcal{M}(1,\tilde{z}_{i1},\dots,\tilde{z}_{iK})$, with
 \begin{eqnarray*}
 \tilde{z}_{ik} &=& p(z_{ik}=1 | \mathcal{X},\mathbf{w}^{(h-1)};\Theta^{(h-1)}) \\
 &=& \frac{\pi_k^{(h-1)}p_k(\mathbf{x}_i | \mathbf{w}^{(h-1)}; \Theta^{(h-1)})}{\sum_{k'}\pi_{k'}^{(h-1)}p_{k'}(\mathbf{x}_i | \mathbf{w}^{(h-1)}; \Theta^{(h-1)})} \; ,
 \end{eqnarray*}
 for $k=1,\dots,K$, where $\mathbf{x}_i = \{x_{ij}\}_{1\le j\le d}$ and $p_k(\mathbf{x}_i | \mathbf{w}^{(h-1)}; \Theta^{(h-1)}) = \prod_{jl} p(x_{ij}; \theta_{kl}^{(h-1)})^{w_{jl}^{(h-1)}}$;
\n \\item generate the column partition $w_j^{(h)}=(w_{j1}^{(h)},\\dots,w_{jL}^{(h)}), \\; j=1,\\dots,d$ according to a multinomial distribution $w_j^{(h)}\\sim \\mathcal{M}(1,\\tilde{w}_{j1},\\dots,\\tilde{w}_{jL})$, with\n \\begin{eqnarray*}\n \\tilde{w}_{jl} &=& p(w_{jl}=1 | \\mathcal{X}, \\mathbf{z}^{(h)}; \\Theta^{(h-1)}) \\\\\n &=& \\frac{\\rho_l^{(q)}p_l(\\mathbf{x}_j | \\mathbf{z}^{(h)}; \\Theta^{(h-1)})}{\\sum_{l'}\\rho_{l'}^{(h-1)}p_{l'}(\\mathbf{x}_j | \\mathbf{z}^{(h)}; \\Theta^{(h-1)})} \\; , \n \\end{eqnarray*}\n for $l=1,\\dots,L$, $\\mathbf{x}_j = \\{x_{ij}\\}_{1 \\le i\\le n}$ and $p_l(\\mathbf{x}_j | \\mathbf{z}^{(h)}; \\Theta^{(h-1)}) = \\prod_{ik} p(x_{ij}; \\Theta_{kl}^{(h-1)})^{z_{ik}^{(h)}}$.\n \\end{enumerate}\n \\item \\textbf{M step}: Estimate $\\Theta^{(h)}$ \n \n by maximizing $E[\\ell_c(\\Theta, {\\bf z}^{(h)}, {\\bf w}^{(h)})|\\Theta^{(h-1)}]$.\\\\ Mixture proportions are updated as $\\pi_k^{(h)} = \\frac{1}{n}\\sum_{i}z_{ik}^{(h)}$ and $\\rho_l^{(h)}=\\frac{1}{d}\\sum_j w_{jl}^{(h)}$. The estimate of $\\theta_{kl}=(\\mu_{kl}^\\alpha,\\Sigma_{kl}^{\\alpha},\\sigma^2_{\\epsilon,kl},\\beta_{kl})$ is obtained by exploiting the \\emph{non-linear mixed effect model} specification in (\\ref{eq:simmodel_cluster}) and considering the approximate maximum likelihood formulation proposed in \\citet{lindstrom1990nonlinear}; the variance and the mean components are estimated by approximating and maximizing the marginal density of the latter near the mode of the posterior distribution of the random effects. Conditional or shrinkage estimates are then used for the estimation of the random effects. \n\\end{itemize}\n\nThe M-SEM algorithm is run for a certain number of iterations until a convergence criterion on the \\emph{complete data log-likelihood} is met. Since a burn-in period is considered, the final estimate for $\\Theta$, denoted as $\\hat{\\Theta}$, is given by the mean of the sample distribution. A sample of $(\\mathbf{z},\\mathbf{w})$ is then generated according to the SE step as illustrated above with $\\Theta=\\hat{\\Theta}$. The final block-partition $(\\hat{\\mathbf{z}},\\hat{\\mathbf{w}})$ is then obtained as the mode of their sample distribution. \n\n\\subsection{Model selection}\n\nThe choice of the number of groups is considered here\nas a model selection problem. Operationally several models, corresponding to different combinations of $K$ and $L$ and, in our case, to different configurations of the random effects, are estimated and the best one is selected according to an information criterion. Note that the model selection step is more troublesome in this setting with respect to a standard clustering one, since we need to select not only the number of row clusters $K$ but also the number of column ones $L$. Standard choices, such as the AIC and the BIC, are not directly available in the co-clustering framework where, as noted by \\citet{keribin2015estimation}, the computation of the likelihood of the LBM is challenging, even when the parameters are properly estimated. 
\subsection{Model selection}

The choice of the number of groups is considered here as a model selection problem. Operationally, several models, corresponding to different combinations of $K$ and $L$ and, in our case, to different configurations of the random effects, are estimated, and the best one is selected according to an information criterion. Note that the model selection step is more troublesome in this setting than in standard clustering, since we need to select not only the number of row clusters $K$ but also the number of column clusters $L$. Standard choices, such as the AIC and the BIC, are not directly available in the co-clustering framework where, as noted by \citet{keribin2015estimation}, the computation of the likelihood of the LBM is challenging even when the parameters are properly estimated. Therefore, as a viable alternative, we consider an approximated version of the ICL \citep{biernacki2000assessing} which, relying on the \emph{complete data log-likelihood}, does not suffer from the same issues, and reads as follows:
\begin{eqnarray}\label{eq:ICL_cocl}
\text{ICL} = \ell_c(\hat{\Theta}, \hat{z}, \hat{w}) - \frac{K-1}{2}\log n - \frac{L-1}{2}\log d - \frac{KL\nu}{2}\log nd \; ,
\end{eqnarray}
where $\nu$ denotes the number of block-specific parameters, while $\ell_c(\hat{\Theta}, \hat{z}, \hat{w})$ is defined as in \eqref{eq:compl_loglik_LBM} with $\Theta$, $z$ and $w$ replaced by their estimates. The model associated with the highest value of the ICL is then selected.

Even if the use of this criterion is well-established practice in co-clustering applications, \citet{keribin2015estimation} noted that its consistency in estimating the number of blocks of an LBM has not been proved yet. Additionally, \citet{nagin2009group} and \citet{corneli2020bayesian} point out a bias of the ICL towards overestimation of the number of clusters in the longitudinal context. The validity of the ICL could be further undermined by the presence of random effects. As noted by \citet{delattre2014note}, standard information criteria have unclear definitions in a mixed effect model framework, since the definition of the actual sample size is not trivial; as a consequence, the usual asymptotic approximations are no longer valid. Even if a proper exploration of the problem from a co-clustering perspective is still missing, we believe that the mentioned issues might also affect the derivation of the criterion in (\ref{eq:ICL_cocl}). The development of valid model selection tools for the LBM in the presence of random effects is beyond the scope of this work; therefore, operationally, we consider the ICL. Nonetheless, the analyses in Section \ref{sec:chcharles_numexample} have to be interpreted with full awareness of the limitations described above.
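For reference, once the complete data log-likelihood of a fitted model is available, criterion (\ref{eq:ICL_cocl}) amounts to a one-line computation; the sketch below is a direct transcription of the formula, where the way $\ell_c$ and $\nu$ are obtained depends on the fitted model and is not shown.
\begin{verbatim}
## ell_c : complete data log-likelihood at the estimates
## nu    : number of block-specific parameters
icl <- function(ell_c, K, L, n, d, nu) {
  ell_c - (K - 1) / 2 * log(n) - (L - 1) / 2 * log(d) -
    K * L * nu / 2 * log(n * d)
}
\end{verbatim}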
\subsection{Remarks}\label{sec:chcharles_discussion}
The model introduced so far inherits the advantages of both the building ingredients it embeds, namely the LBM and the SIM. Thanks to the local independence assumption of the LBM, it allows handling multivariate, possibly high-dimensional complex data structures in a relatively parsimonious way. Differences among the subjects are captured by the random effects, while curve summaries can be expressed as functionals of the mean shape curve. Additionally, by resorting to a smoother when modeling the mean shape function, it allows for a flexible handling of functional data, whereas the presence of random effects makes the model effective also in a longitudinal setting. Finally, clustering is pursued directly on the observed curves, without resorting to intermediate transformation steps, as is done in \citet{bouveyron2018functional}, where clustering is performed on the basis expansion coefficients used to transform the original data.

The attractive features of the model go hand in hand with some difficulties that require caution, namely likelihood multimodality, the associated convergence issues, and the curse of flexibility; we discuss them below in turn.
\begin{itemize}
\item \emph{Initialization}. The M-SEM algorithm encloses different numerical steps which require a suitable specification of starting values. First, the convergence of EM-type algorithms towards a global maximum is not guaranteed; as a consequence, they are known to be sensitive to the initialization, and a proper choice of starting values is crucial to avoid local solutions. Assuming $K$ and $L$ to be known, the M-SEM algorithm requires starting values for $\mathbf{z}$ and $\mathbf{w}$ in order to implement the first M step. A standard strategy resorts to multiple random initializations: the row and column partitions are sampled independently from multinomial distributions with uniform weights, and the run eventually leading to the highest value of the \emph{complete data log-likelihood} is retained. An alternative approach, possibly accelerating convergence, is given by a k-means initialization, where two k-means algorithms are run independently on the rows and the columns of $\mathcal{X}$ and the M-SEM algorithm is initialized with the obtained partitions. It has been pointed out \citep[see e.g.][]{govaert2013co} that the SEM-Gibbs, being a stochastic algorithm, can in practice attenuate the impact of the initialization on the resulting estimates. Finally, note that a further initialization is required to estimate the nonlinear mean shape function within the M step.

\item \emph{Convergence and other numerical problems}. Although the benefits of including random effects in the considered framework are undeniable, parameter estimation is known not to be straightforward in mixed effect models, especially in the nonlinear setting \citep[see, e.g.][]{harring2016comparison}. As noted above, the nonlinear dependence of the conditional mean of the response on the random effects requires multidimensional integration to derive the marginal distribution of the data. While several methods have been proposed to compute the integral, convergence issues are often encountered. In such situations, some strategies can be employed to help the convergence of the estimation algorithm, such as trying different sets of starting values, scaling the data prior to the modeling step, or simplifying the structure of the model (e.g. by reducing the number of knots of the B-splines). Addressing these issues often results in considerable computational times, even when convergence is eventually achieved. Depending on the specific data at hand, it is also possible to consider alternative mean shape formulations, such as polynomial functions, which result in easier estimation procedures. Lastly, note that, if available, prior knowledge about the time evolution of the observed phenomenon may be incorporated in the model, as it could introduce constraints possibly simplifying the estimation process \citep[see e.g.][]{telesca2012modeling}.

\item \emph{Curse of flexibility}. Including random effects for phase and amplitude shifts as well as scale transformations might allow for a virtually excellent fit of arbitrarily shaped curves. This flexibility, albeit desirable, can become excessive, possibly leading to estimation troubles. This is especially true in a clustering framework, where data are expected to exhibit a remarkable heterogeneity.
From a practical point of view, our experience suggests that the estimation of the parameters $\alpha_{ij,2}$ turns out to be the most troublesome, sometimes leading to convergence issues and instability in the resulting estimates.
\end{itemize}


\section{Numerical experiments}\label{sec:chcharles_numexample}

Before moving on to the interest of the proposed approach in real-world situations, this section presents some numerical experiments illustrating its main features on simulated data.

\subsection{Synthetic data}\label{sec:chcharles_simulation}

The aim of the simulation study on synthetic data is twofold. The first and primary goal consists in exploring the capability of the proposed method to properly partition the data into blocks, also in comparison with some competitors, such as the method proposed by \citet{bouveyron2018functional} (\texttt{funLBM} in the following) and a double k-means approach, where row and column partitions are obtained separately and subsequently merged to produce blocks. As for the second aim, we evaluate the performance of the ICL in the proposed framework in selecting both the number of blocks $(K,L)$ and the random effect configuration.

All the analyses have been conducted in the R environment \citep{rcore} with the aid of the \texttt{nlme} package \citep{nlmepack}, used to estimate the parameters in the M step, and the \texttt{splines} package, used to handle the B-splines involved in the specification of the common shape function. The code implementing the proposed procedure is available upon request.

\begin{figure}[!t]
 \centering
 \includegraphics[width=14cm,height=8.5cm]{Figures/meancurves_simulated.pdf}
 \caption{Subsample of simulated curves (black dashed lines) with superimposed block-specific mean shape curves (colored continuous lines) employed in the numerical study.}
 \label{fig:meancurves}
\end{figure}

The simulation setup is defined as follows. We generated $B=100$ Monte Carlo samples of curves according to the general specification \eqref{eq:simmodel_cluster}, with block-specific mean shape functions $m_{kl}(\cdot)$, and with both the parameters of the error term and those describing the random effects distribution constant across blocks. In fact, in the light of the considerations made in Section \ref{sec:chcharles_discussion}, the random scale parameter is switched off in the data generative mechanism, i.e. $\alpha_{ij,2}$ is constrained to be degenerate at zero. The number of row and column clusters has been fixed to $K_{\text{true}} = 4$ and $L_{\text{true}} = 3$. The mean shape functions $m_{kl}(\cdot)$ are chosen among four different curves, namely $m_{11}=m_{13}=m_{33}=m_1$, $m_{12}=m_{32}=m_{31}=m_{41}=m_2$, $m_{21}=m_{32}=m_{42}=m_3$ and $m_{22}=m_{43}=m_4$, as illustrated in Figure \ref{fig:meancurves} with different color lines, and specified as follows:
\begin{eqnarray*}
m_1(t) &\propto& 6t^2 - 7t + 1 \hspace{3cm} m_2(t) \propto \phi(t; 0.2,0.008) \\
m_3(t) &\propto& 0.75-0.8\mathbbm{1}_{\{t \in (0.4,0.6)\}} \hspace{1.4cm} m_4(t) \propto \frac{1}{(1 + \exp(-10t + 5))}
\end{eqnarray*}
The other parameters have been set to $\sigma_{\epsilon,kl}=0.3$, $\mu_{kl}^\alpha = (0,0,0)$ and $\Sigma_{kl}^\alpha = \text{diag}(1,0,0.1)$ for all $k=1,\dots,K_{\text{true}}$, $l=1,\dots,L_{\text{true}}$.
Three different scenarios are considered, with generated curves consisting of $T = 15$ equally spaced observations in $[0,1]$. In the first, baseline scenario, we set the number of rows to $n = 100$ and the number of columns to $d = 20$. The other scenarios are considered in order to gain insights into the performance of the proposed method when dealing with larger matrices: in the second scenario $n = 500$ and $d = 20$, while in the third one $n = 100$ and $d = 50$, thus increasing respectively the number of samples and of features.

As for the first goal of the simulation study, we explore the performance of our proposal in terms of the Co-clustering Adjusted Rand Index \citep[CARI,][]{robert2020comparing}. This criterion generalizes the Adjusted Rand Index \citep{hubert1985comparing} to the co-clustering framework, and takes the value 1 when the block partitions perfectly agree up to a permutation. In order to have a fair comparison with the double \emph{k-means} approach, for which selecting the number of blocks does not admit an obvious solution, and to separate the uncertainty due to model selection from that due to cluster detection, models are compared by considering the number of blocks as known and equal to $(K_{\text{true}}, L_{\text{true}})$. Consistently, we estimate our model only for the \texttt{TFT} random effects configuration, the one generating the data. This is made possible by constraining the mean and variance estimates of the scale random effect to be equal to zero.

Results are reported in Table \ref{tab:charles_simulation1}. The proposed method achieves excellent performances in all the considered settings, with results characterized by a very limited variability. No clear-cut indications arise from the comparison with \texttt{funLBM}, as the two methodologies lead to comparable results. Conversely, the use of an approach which is not specifically conceived for co-clustering, like the double \emph{k-means}, leads to a strong degradation of the quality of the partitions. Unlike our method and \texttt{funLBM}, whose results are insensitive to changes in the dimensions of the data, the double \emph{k-means} behaves better with an increased number of variables or observations.
\begin{table}[t]
\centering
\caption{Mean (and standard error) of the CARI computed over the simulated samples in the three scenarios.
Partitions are obtained using the proposed approach (\texttt{tdLBM}), \texttt{funLBM} and a double k-means approach.}
\vspace*{0.2cm}
\begin{tabular}{cccc}
 \hline
 & $n = 100, d = 20$ & $n = 100, d = 50$ & $n = 500, d = 20$ \\
 \hline
CARI$_{\text{\texttt{tdLBM}}}$ & 0.972 (0.044) & 0.988 (0.051) & 0.981 (0.020) \\
CARI$_{\text{\texttt{funLBM}}}$ & 0.981 (0.066) & 0.986 (0.053) & 0.986 (0.060) \\
CARI$_{\text{\texttt{kmeans}}}$ & 0.761 (0.158) & 0.842 (0.182) & 0.809 (0.169) \\
 \hline
\end{tabular}
\label{tab:charles_simulation1}
\end{table}


\begin{table}[b]
\centering
 \caption{Rate of selection of the $(K,L)$ configurations in the different scenarios considered, obtained using the proposed approach.}
 \vspace*{0.3cm}
\begin{tabular}{p{4cm}p{4cm}p{4cm}p{2cm}}
\hline
\hspace{1.15cm} $n=100, d=20$ & \hspace{0.85cm} $n=100, d=50$ & \hspace{0.7cm} $n=500, d=20$&\\
\hline
\multicolumn{3}{l}{\multirow{9}{*}{\includegraphics[height = 4.6cm, width = 13cm]{Figures/table2bis.pdf}}} & {\multirow{9}{*}{\includegraphics[scale=0.43]{Figures/output.pdf}}}\\
&&&\\
&&&\\
&&&\\
&&&\\
&&&\\
&&&\\
&&&\\
&&&\\
&&&\\
\hline
\end{tabular}
\label{tab:charles_simulation2}
\end{table}

As for the performance of the ICL, Table \ref{tab:charles_simulation2} shows the fraction of samples for which the criterion led to the selection of each of the considered configurations of $(K, L)$, with $K,L = 2,\dots,5$, for models estimated with the proposed method. In all the considered settings, the actual number of co-clusters is the one most frequently selected by the ICL. Nonetheless, a non-negligible tendency to favor overparameterized models, especially for larger sample sizes, is observed, consistently with the comments in \citet{corneli2019co}.

Similar considerations can be drawn from the exploration of the performance of the ICL when used to select the random effect configuration (Table \ref{tab:charles_simulation3}). Here, the performance seems to be less influenced by the data dimensionality: the ICL selects the true configuration for the majority of the samples in two scenarios while, in the third one, the true model is selected in approximately one out of two samples. Nonetheless, also in this case, a tendency to overestimation is visible, with the \texttt{TTT} configuration frequently selected in all the scenarios. In general, the penalization term in (\ref{eq:ICL_cocl}) seems to be too weak and overall not completely able to account for the presence of random effects. These results, along with the remarks at the end of Section \ref{sec:chcharles_modelest}, suggest a possibly fruitful research direction, consisting in proposing suitable adjustments to the criterion.

It is worth noting that, when the selection of the number of clusters is the aim, the observed behavior is overall preferable to underestimation, since it does not undermine the within-block homogeneity, which is the final aim of cluster analysis; this has been confirmed by further analyses suggesting that the additional groups are usually small and arise because of the presence of outliers. As for the random effect configuration, we believe that, since this choice impacts the notion of cluster one aims to identify, it should be driven by subject-matter knowledge rather than by automatic criteria.
Additionally, the reported analyses are exploratory in nature, aiming to provide general insights into the characteristics of the proposed approach. To limit the computational time required to run the large number of models involved in Tables \ref{tab:charles_simulation2} and \ref{tab:charles_simulation3}, we did not use multiple initializations and we pre-selected the number of knots for the block-specific mean functions. In practice, we recommend, at a minimum, using multiple starting values and carrying out sensitivity analyses on the number of knots to ensure that substantive conclusions are not affected.


\begin{table}[t]
\centering
 \caption{Rate of selection of each random effects configuration in the considered scenarios. \texttt{T} means that the corresponding random effect is switched on, while \texttt{F} means that it is switched off. As an example, \texttt{FTT} represents a model where $\alpha_{ij,1}$ is constrained to be a random variable with a degenerate distribution at zero. Bold cells represent the true data generative model (\texttt{TFT}); blank cells represent percentages equal to zero.}
 \vspace*{0.3cm}
\begin{tabular}{cc|cccccccc}
 \hline
 & & \texttt{FFF} & \texttt{TFF} & \texttt{FTF} & \texttt{FFT} & \texttt{TTF} & \texttt{TFT} & \texttt{FTT} & \texttt{TTT} \\
 \hline
 & $n = 100, d=20$ & & & & & 1\% & {\bf 58\%} & & 41\%\\
 \% of selection
 & $n = 100, d=50$ & & 2\% & & & 1\% & {\bf 62\%} & & 35\% \\
 & $n = 500, d=20$ & & 1\% & & & 5\% & {\bf 47\%} & & 47\%\\
 \hline
\end{tabular}
\label{tab:charles_simulation3}
\end{table}


\subsection{Applications to real-world problems}
\subsubsection{French Pollen Data}\label{sec:chcharles_realdata}
The data considered in this section are provided by the \emph{R\'eseau National de Surveillance A\'erobiologique} (RNSA), the French institute which analyzes the content of biological particles in the air and studies their impact on human health. RNSA collects data on the concentration of pollens and moulds in the air, along with some clinical data, in more than 70 municipalities in France.

The analyzed dataset contains daily observations of the concentration of 21 pollens in 71 French cities in 2016. Concentration is measured as the number of pollen grains detected per cubic meter of air; measurements are carried out by means of pollen traps located in central urban positions on the roofs of buildings, so as to be representative of the overall air quality.

The aim of the analysis is to identify homogeneous trends in the pollen concentration over the year and across different geographic areas. For this reason, we focus on finding groups of pollens differing from one another either in the period of maximum presence or in the time span over which they are present. Consistently with this choice, only models with the $y$-axis shift parameter $\alpha_{ij,1}$ are estimated (i.e. $\alpha_{ij,2}$ and $\alpha_{ij,3}$ are switched off), for varying numbers of row and column clusters, and the best one is selected via the ICL. We consider monthly data, obtained by averaging the observed daily concentrations over each month. The resulting dataset may be represented as a matrix with $n=71$ rows (cities) and $d=21$ columns (pollens), where each entry is a sequence of $T=12$ time-indexed measurements. Moreover, to practically apply the proposed procedure to the data, a preprocessing step has been carried out.
We work on a logarithmic scale and, in order to improve the stability of the estimation procedure, the data have been standardized.

Results are graphically displayed in Figure \ref{fig:meancurv_pollini}. The ICL selects a model with $K=3$ row clusters and $L=5$ column clusters. A first visual inspection of the pollen time evolutions reveals that the procedure is able to discriminate the pollens according to their seasonality. Pollens in the first two column groups are mainly present during the summer, with a difference in the intensity of the concentration. In the remaining three groups, pollens are mostly active, roughly speaking, during the winter and spring months, but with a different time persistence and evolution.

A deeper substantive interpretation of the obtained cluster configuration is beyond the scope of this work and would benefit from the insights of experts in botany and geography. Nevertheless, it stands out that the column clusters roughly group together tree pollens, distinguishing them from weed and grass ones (right panel of Table \ref{fig:mappa_francia}). The results align with the typical seasons usually considered, with groups of tree pollens mostly present in winter and spring, while those from grasses spread in the air mainly during the summer months. With respect to the row partition, displayed in the left panel of Table \ref{fig:mappa_francia}, three clusters have been detected, one of which roughly corresponds to the Mediterranean region. The situation for the other two clusters appears to be more heterogeneous: one of them tends to gather cities in the northern regions and on the Atlantic coast, while the other covers mainly the central and continental part of the country. Again, the results appear promising, but a cross-analysis with climate scientists would be needed to get a more informative and substantiated point of view.

\begin{figure}[t]
 \includegraphics[scale=0.2]{Figures/meancurves_k3l5.pdf}
 \caption{French Pollen Data results. Curves belonging to each block, with the corresponding block-specific mean curve superimposed (in light blue).}
 \label{fig:meancurv_pollini}
\end{figure}

\begin{table}[tb]
\caption{French map with superimposed points indicating the cities, colored according to their row cluster memberships (left), and pollens organized by column cluster membership (right).}\label{fig:mappa_francia}
\begin{center}
\begin{tabular}{c|cl}
 Row groups (Cities) & \multicolumn{2}{c}{Column groups (Pollens)} \\
\hline
\multirow{10}{*}{\vspace{-0.1cm} \includegraphics[scale=0.13]{Figures/mappa_k3l5.pdf}} & & \\
& & \\
& 1 & Gramineae, Urticaceae \\
& 2 & Chestnut, Plantain \\
& 3 & Cypress \\
& 4 & Ragweed, Mugwort, Birch, Beech, \\
& & Morus, Olive, Platanus, Oak, Sorrel, \\
& & Linden \\
& 5 & Alder, Hornbeam, Hazel, Ash, \\
& & Poplar, Willow \\
& & \\
\end{tabular}
\end{center}
\end{table}

\subsubsection{COVID-19 evolution across countries}\label{sec:covid}

At the time of writing, an outbreak of infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has severely harmed the whole world. Countries all over the world have undertaken measures to reduce the spread of the virus: quarantine and social distancing practices have been implemented, collective events have been canceled or postponed, and business and educational activities have been either interrupted or moved online.
While the outbreak has led to a global social and economic disruption, its spreading and evolution, also in relation to the aforementioned non-pharmaceutical interventions, have not been the same all over the world \citep[see][for an account of this in the first months of the pandemic]{flaxman2020estimating, brauner2021inferring}. In this regard, the goal of the analysis is to evaluate differences and similarities among countries with respect to different aspects of the pandemic.

Since the overall situation is still evolving, and given that testing strategies have significantly changed across waves, we refer to the first wave of infection, considering data from the 1st of March to the 4th of July 2020, in order to guarantee the consistency of the disease metrics used in the co-clustering. Moreover, we restrict the analysis to the European countries. Data have been collected by the Oxford COVID-19 Government Response Tracker \citep[OxCGRT,][]{covid} and originally refer to daily observations of the number of confirmed cases and deaths for COVID-19 in each country. We also select two indicators tracking each country's interventions in response to the pandemic: the Stringency index and the Government response index. Both indicators are recorded on a 0-100 ordinal scale that represents the level of strictness of the policy and accounts for containment and closure policies. The latter indicator also reflects health system policies such as information campaigns, testing strategies and contact tracing.

Data have been pre-processed as follows: daily values have been converted into weekly averages in order to reduce the impact of short-term fluctuations and the number of time observations. Rates per 1000 inhabitants have been computed from the numbers of confirmed cases and deaths, and logarithms applied to reduce the skewness of the data. All the variables have been standardized.

The resulting dataset is a matrix with $n=38$ rows (countries) and $d=4$ columns (variables describing the pandemic evolution and containment), observed over a period of $T=18$ weeks. Unlike the French Pollen data, here there is no strong reason to favour one random effect configuration over the others. Conversely, different configurations of random effects entail different ideas of similarity in the virus evolution. Thus, while the presence of random effects allows clustering together similar trends associated with different intensities, speeds of evolution and times of onset, switching the random effects off could result in enhancing such differences via the separation of the trends.

Models have been run for $K = 1, \ldots, 6$ row clusters, $L = 1,2,3$ column clusters, and all the 8 possible configurations of random effects. The behaviour of the resulting ICL values supports the remark in Section \ref{sec:chcharles_simulation}, as the criterion favours highly parameterized models. This holds particularly true with regard to the random effects configuration: the larger the number of random effects switched on, the higher the corresponding ICL. Thus, models with all the random effects switched on stand out among the others, with a preference for $K=2$ and $L=3$, whose results are displayed in Figure \ref{fig:best_covid}.
The obtained partition is easily interpretable. In the column partition, reported in the right panel of Table \ref{fig:best_covid_mappa}, the containment indexes are grouped together into the same cluster, whereas the log-rates of positiveness and death form singleton clusters. Consistently with the random effect configuration, the row clusters exhibit a different evolution in terms of cases, deaths and undertaken containment measures: one cluster gathers countries where the virus spread earlier and caused more losses; here, more severe control measures have been adopted, whose effect is likely seen in a general decrease of cases and deaths after a peak is reached. The second row cluster collects countries for which the death toll of the pandemic seems to be more contained. The virus outbreak generally shows a delayed onset and a slower growth, without a steep decline after reaching the peak, although the containment policies remain strict for a long period. Notably, the row partition is also geographical, with the countries with higher mortality all belonging to Western Europe (see the left panel in Table \ref{fig:best_covid_mappa}).

To properly show the benefits of considering different random effects configurations in terms of the notion and interpretation of the clusters, we also illustrate the partition produced by another model, estimated with the three random effects switched off (Figure \ref{fig:covid2}). Here we consider $K = L = 3$: the column partition remains unchanged with respect to the best model, and the row partition still separates countries by the severity of the impact, with a third, additional cluster having intermediate characteristics. According to this model, two row clusters feature countries with a similar right-skewed, bell-shaped trend of cases and similar containment policies, yet with a notable difference in the lethality of the virus. Indeed, the effect of switching $\alpha_{ij,2}$ off is clearly visible in the fit of the log-rate of death, with two mean curves having similar shapes but different scales. The additional intermediate cluster, less impacted in terms of death rate, is populated by countries from central-eastern Europe. The apparently smaller impact of the first wave of the pandemic on the eastern European countries could be explained by several factors, ranging from demographic characteristics and more timely closure policies to different international mobility patterns. Additionally, other factors, such as the general economic and health conditions, might have prevented accurate testing and tracking policies, so that the actual spreading of the pandemic might have been underestimated.

\begin{figure}[bt]
\begin{center}
 \includegraphics[width=.9\textwidth]{Figures/mod23TRUETRUETRUE.pdf}
 \end{center}
 \caption{COVID-19 outbreak results for the best model, with $K=2$, $L=3$ and the three random effects switched on.
Curves belonging to each block, with the associated block-specific mean curve superimposed (in light blue).}
 \label{fig:best_covid}
\end{figure}

\begin{table}[tb]
\caption{Europe map with countries colored according to their row cluster memberships (left) and variables organized by column cluster membership (right) for the best ICL model.}
\label{fig:best_covid_mappa}
\begin{center}
\begin{tabular}{p{.56\textwidth}|c p{.34\textwidth}}
 Row groups (Countries) & \multicolumn{2}{p{.4\textwidth}}{Column groups (COVID-19 spreading and containment)} \\
\hline
\multirow{7}{*}{\vspace*{-3cm} \includegraphics[scale=0.43]{Figures/mapmod23TRUETRUETRUE.pdf}} & & \\
& & \\
& 1 & log \% of cases per 1000 inhabitants \\
& 2 & log \% of deaths per 1000 inhabitants \\
& 3 & Stringency index, Government response index \\
& & \\
& & \\
& & \\
\end{tabular}
\end{center}
\end{table}

\begin{figure}[h]
 \includegraphics[width=.9\textwidth]{Figures/mod33FALSEFALSEFALSE.pdf}
 \caption{COVID-19 outbreak results for the model with $K=3$, $L=3$ and the three random effects switched off.}
 \label{fig:covid2}
\end{figure}

\section{Conclusions}\label{sec:chcharles_conclusions}


Modeling multivariate time-dependent data requires accounting for heterogeneity among subjects, capturing similarities and differences among variables, as well as correlations between repeated measures. Our work has tackled these challenges by proposing a new parametric co-clustering methodology, recasting the widely known Latent Block Model in a time-dependent fashion. The co-clustering model, by simultaneously searching for row and column clusters, partitions three-way matrices into blocks of homogeneous curves. Such an approach takes into account the mentioned features of the data while building parsimonious and meaningful summaries. As the data generative mechanism for a single curve, we have considered the \emph{Shape Invariant Model}, which has turned out to be particularly flexible when embedded in a co-clustering context. The model is able to describe arbitrary time evolution patterns while adequately capturing the dependencies among different temporal instants. The proposed method compares favorably with the few existing competitors for the simultaneous clustering of subjects and variables with time-dependent data. Our proposal, while producing co-partitions of comparable quality as measured by objective criteria, applies to both functional and longitudinal data, and has relevant advantages in terms of interpretability. The option of ``switching off'' some of the random effects, although in principle simplifying the model structure, increases its flexibility, as it allows encompassing different concepts of cluster, depending on the specific application and on subject-matter considerations.

While further analyses are required to increase our understanding of the general performance of the proposed model, its application to both simulated and real data has provided overall satisfactory results and highlighted some aspects worth further investigation. One interesting direction for future research is the study of possible alternatives to the ICL to be used as selection tools when the model specification in the LBM framework involves random effects. Moreover, alternative choices, for example for specifying the block mean curves, could be considered and compared with those adopted here.
A further direction for future work would be the exploration of a fully Bayesian approach to model specification and estimation, possibly allowing an easier handling of the random parameters in the model.


\newpage
\bibliographystyle{plainnat}
The LBM can be thus written as\n\\begin{eqnarray}\\label{eq:LBM}\np(\\mathcal{X}; \\Theta) = \\sum_{z \\in Z}\\sum_{w \\in W}p(\\mathbf{z};\\Theta)p(\\mathbf{w};\\Theta)p(\\mathcal{X}|\\mathbf{z},\\mathbf{w};\\Theta) \\; , \n\\end{eqnarray}\nwhere\n\\begin{itemize}\n \\item $Z$ and $W$ are the sets of all the possible partitions of rows and columns respectively in $K$ and $L$ groups; \n \\item the latent vectors $\\mathbf{z}, \\mathbf{w}$ are assumed to be multinomial, with $p(\\mathbf{z};\\Theta)=\\prod_{ik}\\pi_k^{z_{ik}}$ and $p(\\mathbf{w};\\Theta)=\\prod_{jl} \\rho_l^{w_{jl}}$ where $\\pi_k, \\rho_l > 0$ are the row and column mixture proportions, $\\sum_k \\pi_k = \\sum_l \\rho_l = 1$; \n \\item as a consequence of the local independence assumption, $p(\\mathcal{X}|\\mathbf{z},\\mathbf{w};\\Theta) = \\prod_{ijkl} p(x_{ij};\\theta_{kl})^{z_{ik}w_{jl}}$ where $\\theta_{kl}$ is the vector of parameters specific to block $(k,l)$; \\item $\\Theta = (\\pi_k,\\rho_l,\\theta_{kl})_{1\\le k \\le K, 1 \\le l \\le L}$ is the full parameter vector of the model. \n\\end{itemize}\n\nThe LBM is particularly flexible in modelling different data types, as handled by a proper specification of the marginal density $p(x_{ij};\\theta_{kl})$ for binary \\citep{govaert2003clustering}, count \\citep{govaert2010latent}, continuous \\citep{lomet2012selection}, categorical \\citep{keribin2015estimation}, ordinal \\citep{jacques2018model,corneli2019co}, and even mixed-type data \\citep{selosse2020model}\n\n\\subsection{Model specification}\\label{sec:chcharles_modspec}\nOnce the LBM structure has been properly defined, extending its rationale to handle time-dependent data in a co-clustering framework boils down to a suitable specification of $p(x_{ij};\\theta_{kl})$. Note that this reveals one of the main advantage of such an highly-structured model, consisting in the chance to search for patterns in multivariate and complex data by specifying only the model for the variable $x_{ij}$. \nAs introduced in Section \\ref{sec:intro}, multidimensional time-dependent data may be represented according to a three-way structure where the third \\emph{mode} accounts for the time evolution. The observed data assume an array configuration $\\mathcal{X}= \\{ x_{ij}({\\bf t_i}) \\}_{1\\le i \\le n, 1\\le j \\le d}$ with ${\\bf t_i}=(t_1,\\dots,T_{n_i})$ as outlined in Section \\ref{sec:chcharles_buildingblocks}; different observational lengths can be handled by a suitable use of missing entries. Consistently with (\\ref{eq:shapeinvariantmodel_nocluster}), we consider as a generative model for the curve in the $(i,j)$\\emph{-th} entry, belonging to the generic block $(k,l)$, the following \n\\begin{eqnarray}\\label{eq:simmodel_cluster}\nx_{ij}(t)|_{z_{ik}=1,w_{jl}=1} = \\alpha_{ij,1}^{kl} + \\text{e}^{\\alpha_{ij,2}^{kl}}m(t-\\alpha_{ij,3}^{kl}; \\beta_{kl}) + \\epsilon_{ij}(t) \\; \n\\end{eqnarray}\nwith $t \\in {\\bf t_i}$ a generic time instant.\nA relevant difference with respect to the original SIM consists, coherently with the co-clustering setting, in the parameters being block-specific since the generative model is specified conditionally to the block membership of the cell. 
As a consequence:\n\\begin{itemize}\n\\item $m(t;\\beta_{kl})= \\mathcal{B}(t)\\beta_{kl}$ where the quantities are defined as in Section \\ref{sec:chcharles_buildingblocks}, with the only difference that $\\beta_{kl}$ is a vector of block-specific basis coefficients, hence allowing different mean shape curves across different blocks; \n\\item $\\alpha_{ij}^{kl}=(\\alpha_{ij,1}^{kl},\\alpha_{ij,2}^{kl},\\alpha_{ij,3}^{kl}) \\sim \\mathcal{N}_3(\\mu_{kl}^\\alpha,\\Sigma_{kl}^{\\alpha})$ is a vector of cell-specific random effects distributed according to a block-specific Gaussian distribution;\n\\item $\\epsilon_{ij}(t) \\sim \\mathcal{N}(0,\\sigma^2_{\\epsilon,kl})$ is the error term distributed as a block-specific Gaussian; \n\\item $\\theta_{kl}=(\\mu_{kl}^\\alpha,\\Sigma_{kl}^{\\alpha},\\sigma^2_{\\epsilon,kl},\\beta_{kl})$.\n\\end{itemize}\n \nNote that the ideas borrowed from the \\emph{curve registration} framework are here embedded in a clustering setting. Therefore, while \\emph{curve alignment} aims to synchronize the curves to estimate a common mean shape, in our setting the SIM works as a suitable tool to model the heterogeneity inside a block and to introduce a flexible notion of cluster. The rationale behind considering the SIM in a co-clustering framework consists in looking for blocks characterized by a different mean shape function $m(\\cdot;\\beta_{kl})$. Curves within the same block arise as random shifts and scale transformations of $m(\\cdot;\\beta_{kl}),$ driven by the block-specifically distributed random effects. Let consider the small panels on the left side of Figure \\ref{fig:curve_coclust}, displaying a number of curves which arise as transformations induced by non-zero values of $\\alpha_{ij,1},$ $\\alpha_{ij,2}, $ or $\\alpha_{ij,3}$. Beyond the sample variability, the curves differ for a (phase) random shift on the $x-$axes, an amplitude shift on the $y-$ axes, and a scale factor. According to model (\\ref{eq:simmodel_cluster}), all those curves belong to the same cluster, since they share the same mean shape function (Figure \\ref{fig:curve_coclust}, right panel). \n\n\\begin{figure}[t]\n\\center\n\\includegraphics[width=.62\\textwidth]{Figures\/prova.pdf}\n\\raisebox{2.5ex}{\n\\includegraphics[width=.34\\textwidth]{Figures\/TTT.pdf}}\n\\caption{In the left panels, curves in dotted line arise as random fluctuations of the superimposed red curve, but they are all time, amplitude or scale transformations of the same mean-shape function on the right panel.}\n\\label{fig:curve_coclust}\n\\end{figure}\n\nIn fact, further flexibility can be naturally introduced within the model by ``switching off'' one or more random effects, depending on subject-matter considerations and on the concept of cluster one has in mind. If there are reasons to support that similar time evolutions associated to different intensities are, in fact, expression of different clusters, it makes sense to switch off the random intercept $\\alpha_{ij,1}$. In the example illustrated in Figure \\ref{fig:curve_coclust} this ideally leads to a two-clusters structure (Figure \\ref{fig:FT}, left panels). Similarly, switching off the random effect $\\alpha_{ij,3}$ would lead to blocks characterized by a shifted time evolution (Figure \\ref{fig:FT}, right panels), while removing $\\alpha_{ij,2}$ determines different blocks varying for a scale factor (Figure \\ref{fig:FT}, middle panels). 
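\n\nTo make the role of the random effects and of the switching-off mechanism concrete, the following R sketch (our own illustration, not part of the estimation procedure) simulates curves from a single block of model (\\ref{eq:simmodel_cluster}) under a user-specified configuration; the B-spline coefficients and the variance values are arbitrary choices made for the example.\n\\begin{verbatim}\nlibrary(splines)\n\nsimulate_block <- function(n_curves = 10, t_grid = seq(0, 1, length.out = 15),\n                           beta = c(0, 0.5, 1.5, 0.5, 0),    # block-specific spline coefficients\n                           config = c(TRUE, FALSE, TRUE),    # (alpha_1, alpha_2, alpha_3): TFT\n                           sd_alpha = c(1, 0.1, 0.1), sd_eps = 0.3) {\n  # block-specific mean shape m(t; beta) built on a B-spline basis\n  m <- function(t) as.numeric(bs(t, df = length(beta), Boundary.knots = c(-0.5, 1.5)) %*% beta)\n  t(sapply(seq_len(n_curves), function(i) {\n    alpha <- rnorm(3, mean = 0, sd = sd_alpha) * config      # switched-off effects are fixed at 0\n    alpha[1] + exp(alpha[2]) * m(t_grid - alpha[3]) + rnorm(length(t_grid), sd = sd_eps)\n  }))\n}\n\ncurves <- simulate_block(config = c(TRUE, FALSE, TRUE))      # shared shape, random amplitude and phase\nmatplot(seq(0, 1, length.out = 15), t(curves), type = 'l')   # quick visual check\n\\end{verbatim}\nSetting an entry of \\texttt{config} to \\texttt{FALSE} makes the corresponding random effect degenerate at zero, mimicking the switched-off configurations discussed above.\n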
\n\n\\begin{figure}[t]\n\\center\n\\begin{tabular}{p{3cm}p{4cm}p{4cm}p{4cm}}\n&\\texttt{FTT}&\\hspace{-0.35cm}\\texttt{TFT}&\\hspace{-0.65cm}\\texttt{TTF}\\\\\n\\end{tabular}\\\\\n\\includegraphics[width=.25\\textwidth]{Figures\/FTT.pdf}\n\\includegraphics[width=.25\\textwidth]{Figures\/TFT.pdf}\n\\includegraphics[width=.25\\textwidth]{Figures\/TTF.pdf}\n\\caption{Pairs of plots in each column represent the two-cluster configurations arising from switching off, from left to right $\\alpha_{ij,1}$, $\\alpha_{ij,2},$ $\\alpha_{ij,3}$. In the names of the models, as used in the rest of the paper, \\texttt{T} indicates a switched on random effect while \\texttt{F} a switched off one.}\n\\label{fig:FT}\n\\end{figure}\n\n\n\\subsection{Model estimation}\\label{sec:chcharles_modelest}\nTo estimate the LBM several approaches have been proposed, as for example Bayesian \\citep{wyse2012block}, greedy search \\citep{wyse2017inferring} and likelihood-based ones \\citep{govaert2008block}. In this work we focus on the latter class of methods. In principle, the estimation strategy would aim to maximize the log-likelihood $\\ell(\\Theta) = \\log p(\\mathcal{X}; \\Theta)$ with $p(\\mathcal{X}; \\theta)$ defined as in (\\ref{eq:LBM}); nonetheless, the missing structure of the data makes this maximization impractical. For this reason the \\emph{complete data log-likelihood} is usually considered as the objective function to optimize, being defined as\n\\begin{equation}\\label{eq:compl_loglik_LBM}\n\\ell_c(\\Theta,\\mathbf{z},\\mathbf{w}) = \\sum_{ik} z_{ik}\\log\\pi_k + \\sum_{jl}w_{jl}\\log\\rho_l + \\sum_{ijkl}z_{ik}w_{jl}\\log p(x_{ij}; \\theta_{kl})\n\\end{equation}\nwhere the first two terms account for the proportions of row and column clusters while the third one depends on the probability density function of each block.\n\nAs a general solution, to maximize (\\ref{eq:compl_loglik_LBM}) and obtain an estimate of $\\hat{\\Theta}$ when dealing with situations where latent variables are involved, one would instinctively resort to the Expectation-Maximization algorithm \\citep[EM,][]{dempster1977maximum}. The basic idea underlying the EM algorithm consists in finding a lower bound of the log-likelihood and optimizing it via an iterative scheme in order to create a converging series of $\\hat{\\Theta}^{(h)}$ \\citep[see][for more details about the convergence properties of the algorithm]{wu1983convergence}. In the co-clustering framework, this lower bound can be easily exhibited by rewriting the log-likelihood as follows\n$$\n\\ell(\\Theta) = \\mathcal{L}(q;\\Theta) + \\zeta \n$$\nwhere $\\mathcal{L}(q; \\Theta) = \\sum_{{\\bf z},{\\bf w}} q({\\bf z},{\\bf w})\\log(p(\\mathcal{X},{\\bf z},{\\bf w}| \\theta)\/q({\\bf z},{\\bf w}))$ with $q({\\bf z},{\\bf w})$ being a generic probability mass function on the support of $({\\bf z},{\\bf w})$ while $\\zeta$ is a positive constant not depending on $\\Theta$. \n\nThe E step of the algorithm maximizes the lower bound $\\mathcal{L}$ over $q$ for a given value of $\\Theta$. Straightforward calculations show that $\\mathcal{L}$ is maximized for $q^{*}({\\bf z},{\\bf w})=p({\\bf z},{\\bf w}|\\mathcal{X},\\theta)$. Unfortunately, in a co-clustering scenario, the joint posterior distribution $p({\\bf z},{\\bf w}|\\mathcal{X},\\Theta)$ is not tractable, as it involves terms that cannot be factorized as it conversely happens in a standard mixture model framework. 
As a consequence, several modifications have been explored, searching for viable solutions when performing the E step \\citep[see][for a more detailed tractation]{govaert2013co}; examples are the \\emph{Classification EM} (CEM) and the \\emph{Variational EM} (VEM). Here we propose to make use of a Gibbs sampler within the E step to approximate the posterior distribution $p({\\bf z},{\\bf w}|\\mathcal{X},\\Theta)$. This results in a stochastic version of the EM algorithm, which will be called SEM-Gibbs in the following. Given an initial column partition ${\\bf w^{(0)}}$ and an initial value for the parameters $\\Theta^{(0)}$, at the $h$\\emph{-th} iteration the algorithm proceeds as follows \n\\begin{itemize}\n\t\\item SE step: $q^{*}({\\bf z},{\\bf w})\\simeq p({\\bf z},{\\bf w}|\\mathcal{X},\\Theta^{(h-1)})$ is approximated with a Gibbs sampler. The Gibbs sampler consists in sampling alternatively ${\\bf z}$ and ${\\bf w}$ from their conditional distributions a certain number of times before to retain new values for ${\\bf z}^{(h)}$ and ${\\bf w}^{(h)}$,\n\t\\item M step: $\\mathcal{L}(q^{*}({\\bf z},{\\bf w}),\\Theta^{(h-1)})$ is then maximized over $\\Theta$, where\n\t\\begin{align*}\n\t\\mathcal{L}(q^{*}({\\bf z},{\\bf w}),\\Theta^{(h-1)}) & \\simeq \\sum_{z,w}p({\\bf z},{\\bf w}|\\mathcal{X},\\Theta^{(h-1)})\\log(p(\\mathcal{X},{\\bf z},{\\bf w}|\\Theta)\/p({\\bf z},{\\bf w}|\\mathcal{X},\\Theta^{(h-1)}))\\\\\n\t\t& \\simeq E[\\ell_c(\\Theta, {\\bf z}^{(h)}, {\\bf w}^{(h)})|\\Theta^{(h-1)}]+\\xi,\n\t\\end{align*}\n\t$\\xi$ not depending on $\\Theta$. This step therefore reduces to the maximization of the conditional expectation of the \\emph{complete data log-likelihood} (\\ref{eq:compl_loglik_LBM}) given $\\bf{z}^{(h)}$ and ${\\bf w}^{(h)}$. \n\\end{itemize}\n\n\nIn the proposed framework, due to the presence of the random effects, some additional challenges have to be faced. In fact, the maximization of the conditional expectation of (\\ref{eq:compl_loglik_LBM}) associated to model (\\ref{eq:simmodel_cluster}) requires a cumbersome multidimensional integration in order to compute the marginal density defined as \n\\begin{eqnarray}\\label{eq:marglikelihood_raneff}\np(x_{ij};\\theta_{kl}) = \\int p(x_{ij}|\\alpha_{ij}^{kl};\\theta_{kl})p(\\alpha_{ij}^{kl};\\theta_{kl})\\, d\\alpha_{ij}^{kl} \\; . \n\\end{eqnarray} \nNote that, with a slight abuse of notation, we suppress here the dependency on the time $t$, i.e. $x_{ij}$ has to be intended as $x_{ij}({\\bf t_i})$. In the SE step, on the other hand, the evaluation of (\\ref{eq:marglikelihood_raneff}) is needed for all the possible configurations of $\\{z_i\\}_{i=1,\\dots,n}$ and $\\{w_{j}\\}_{j=1,\\dots,d}$. These quantities are straightforwardly obtained when considering the SEM-Gibbs to estimate models without any random effect involved, while their computation is more troublesome in our scenario. 
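\n\nTo fix ideas, and anticipating the Marginalization step introduced below, a quantity such as (\\ref{eq:marglikelihood_raneff}) can be approximated by plain Monte Carlo integration, averaging the conditional density of the curve over draws of the random effects from their block-specific Gaussian distribution. The following R sketch is our own illustration, with the shape function passed as a placeholder argument.\n\\begin{verbatim}\nlibrary(mvtnorm)\n\nmarginal_density_mc <- function(x, t, m, mu_alpha, Sigma_alpha, sigma_eps, M = 500) {\n  A <- rmvnorm(M, mean = mu_alpha, sigma = Sigma_alpha)  # M draws of (alpha_1, alpha_2, alpha_3)\n  dens <- apply(A, 1, function(a) {\n    mu <- a[1] + exp(a[2]) * m(t - a[3])                 # conditional mean given the random effects\n    prod(dnorm(x, mean = mu, sd = sigma_eps))            # p(x_ij | alpha; theta_kl)\n  })\n  mean(dens)                                             # Monte Carlo estimate of p(x_ij; theta_kl)\n}\n# in practice the computation is better carried out on the log scale for numerical stability\n\\end{verbatim}\n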
\n\nFor these reasons, we propose a modification of the SEM-Gibbs algorithm, called \\emph{Marginalized SEM-Gibbs} (M-SEM), where an additional \\emph{Marginalization step} is introduced to properly account for the random effects.\nGiven an initial value for the parameters $\\Theta^{(0)}$ and an initial column partition $\\mathbf{w}^{(0)}$, the $h$\\emph{-th} iteration of the M-SEM algorithm alternates the following steps: \n\\begin{itemize}\n \\item \\textbf{Marginalization step}: The single cell contributions in (\\ref{eq:marglikelihood_raneff}) to the \\emph{complete data log-likelihood} are computed by means of a Monte Carlo integration scheme as follows \n \n \\begin{eqnarray}\n p(x_{ij};\\theta_{kl}^{(q)}) \\simeq \\frac{1}{M} \\sum_{m=1}^M p(x_{ij} ; \\alpha_{ij}^{kl,(m)}, \\theta_{kl}^{(h-1)}) \\; \n \\end{eqnarray}\n for $i=1,\\dots,n$, $j=1,\\dots,d$, $k=1,\\dots,K$ and $l=1,\\dots,L$ and being $M$ the number of Monte Carlo samples. The values of the vectors $\\alpha_{ij}^{kl,(1)},\\dots,\\alpha_{ij}^{kl,(M)}$ are drawn from a Gaussian distribution $\\mathcal{N}_3(\\mu_{kl}^{\\alpha,(h-1)},\\Sigma_{kl}^{\\alpha,(h-1)})$; note that this choice amounts to a random version of the \\emph{Gaussian quadrature rule} \\cite[see e.g.][]{pinheiro2006mixed};\n\n \n \\item \\textbf{SE step}: $p({\\bf z},{\\bf w}|\\mathcal{X},\\Theta^{(h-1)})$ is approximated by repeating, for a number of iterations, the following Gibbs sampling steps\n \\begin{enumerate}\n \\item generate the row partition $z_i^{(h)}=(z_{i1}^{(h)},\\dots,z_{iK}^{(h)}), \\; i=1,\\dots,n$ according to a multinomial distribution $z_i^{(h)}\\sim \\mathcal{M}(1,\\tilde{z}_{i1},\\dots,\\tilde{z}_{iK})$, with\n \\begin{eqnarray*}\n \\tilde{z}_{ik} &=& p(z_{ik}=1 | \\mathcal{X},\\mathbf{w}^{(h-1)};\\Theta^{(h-1)}) \\\\\n &=& \\frac{\\pi_k^{(h-1)}p_k(\\mathbf{x}_i | \\*w^{(h-1)}; \\Theta^{(h-1)})}{\\sum_{k'}\\pi_{k'}^{(h-1)}p_{k'}(\\mathbf{x}_i | \\*w^{(h-1)}; \\Theta^{(h-1)})} \\; ,\n \\end{eqnarray*}\n for $k=1,\\dots,K$, $\\mathbf{x}_i = \\{x_{ij}\\}_{1\\le j\\le d}$ and $p_k(\\mathbf{x}_i | \\mathbf{w}^{(h-1)}; \\Theta^{(h-1)}) = \\prod_{jl} p(x_{ij}; \\theta_{kl}^{(h-1)})^{w_{jl}^{(h-1)}}$. \n \\item generate the column partition $w_j^{(h)}=(w_{j1}^{(h)},\\dots,w_{jL}^{(h)}), \\; j=1,\\dots,d$ according to a multinomial distribution $w_j^{(h)}\\sim \\mathcal{M}(1,\\tilde{w}_{j1},\\dots,\\tilde{w}_{jL})$, with\n \\begin{eqnarray*}\n \\tilde{w}_{jl} &=& p(w_{jl}=1 | \\mathcal{X}, \\mathbf{z}^{(h)}; \\Theta^{(h-1)}) \\\\\n &=& \\frac{\\rho_l^{(q)}p_l(\\mathbf{x}_j | \\mathbf{z}^{(h)}; \\Theta^{(h-1)})}{\\sum_{l'}\\rho_{l'}^{(h-1)}p_{l'}(\\mathbf{x}_j | \\mathbf{z}^{(h)}; \\Theta^{(h-1)})} \\; , \n \\end{eqnarray*}\n for $l=1,\\dots,L$, $\\mathbf{x}_j = \\{x_{ij}\\}_{1 \\le i\\le n}$ and $p_l(\\mathbf{x}_j | \\mathbf{z}^{(h)}; \\Theta^{(h-1)}) = \\prod_{ik} p(x_{ij}; \\Theta_{kl}^{(h-1)})^{z_{ik}^{(h)}}$.\n \\end{enumerate}\n \\item \\textbf{M step}: Estimate $\\Theta^{(h)}$ \n \n by maximizing $E[\\ell_c(\\Theta, {\\bf z}^{(h)}, {\\bf w}^{(h)})|\\Theta^{(h-1)}]$.\\\\ Mixture proportions are updated as $\\pi_k^{(h)} = \\frac{1}{n}\\sum_{i}z_{ik}^{(h)}$ and $\\rho_l^{(h)}=\\frac{1}{d}\\sum_j w_{jl}^{(h)}$. 
The estimate of $\\theta_{kl}=(\\mu_{kl}^\\alpha,\\Sigma_{kl}^{\\alpha},\\sigma^2_{\\epsilon,kl},\\beta_{kl})$ is obtained by exploiting the \\emph{non-linear mixed effect model} specification in (\\ref{eq:simmodel_cluster}) and considering the approximate maximum likelihood formulation proposed in \\citet{lindstrom1990nonlinear}; the variance and the mean components are estimated by approximating and maximizing the marginal density of the latter near the mode of the posterior distribution of the random effects. Conditional or shrinkage estimates are then used for the estimation of the random effects. \n\\end{itemize}\n\nThe M-SEM algorithm is run for a certain number of iterations until a convergence criterion on the \\emph{complete data log-likelihood} is met. Since a burn-in period is considered, the final estimate for $\\Theta$, denoted as $\\hat{\\Theta}$, is given by the mean of the sample distribution. A sample of $(\\mathbf{z},\\mathbf{w})$ is then generated according to the SE step as illustrated above with $\\Theta=\\hat{\\Theta}$. The final block-partition $(\\hat{\\mathbf{z}},\\hat{\\mathbf{w}})$ is then obtained as the mode of their sample distribution. \n\n\\subsection{Model selection}\n\nThe choice of the number of groups is considered here\nas a model selection problem. Operationally several models, corresponding to different combinations of $K$ and $L$ and, in our case, to different configurations of the random effects, are estimated and the best one is selected according to an information criterion. Note that the model selection step is more troublesome in this setting with respect to a standard clustering one, since we need to select not only the number of row clusters $K$ but also the number of column ones $L$. Standard choices, such as the AIC and the BIC, are not directly available in the co-clustering framework where, as noted by \\citet{keribin2015estimation}, the computation of the likelihood of the LBM is challenging, even when the parameters are properly estimated. Therefore, as a viable alternative, we consider an approximated version of the ICL \\citep{biernacki2000assessing} that, relying on the \\emph{complete data log-likelihood}, does not suffer from the same issues, and which reads as follows\n\\begin{eqnarray}\\label{eq:ICL_cocl}\n\\text{ICL} = \\ell_c(\\hat{\\Theta}, \\hat{z}, \\hat{w}) - \\frac{K-1}{2}\\log n - \\frac{L-1}{2}\\log d - \\frac{KL\\nu}{2}\\log nd \\; , \n\\end{eqnarray}\nwhere $\\nu$ denotes number of specific parameters for each block while $\\ell_c(\\hat{\\Theta}, \\hat{z}, \\hat{w})$ is defined as in \\eqref{eq:compl_loglik_LBM} with $\\Theta$, $z$ and $w$ being replaced by their estimates.\nThe model associated with the highest value of the ICL is then selected. \n\nEven if the use of this criterion is a well-established practice in co-clustering applications, \\citet{keribin2015estimation} noted that its consistency has not been proved yet to estimate the number of blocks of a LBM. Additionally, \\citet{nagin2009group} and \\citet{corneli2020bayesian} point out a bias of the ICL towards overestimation of the number of clusters in the longitudinal context. The validity of the ICL could be additionally undermined by the presence of random effects. As noted by \\citet{delattre2014note}, standard information criteria have unclear definitions in a mixed effect model framework, since the definition of the actual sample size is not trivial. Given that, common asymptotic approximations are not valid anymore. 
Even if a proper exploration of the problem from a co-clustering perspective is still missing, we believe that the mentioned issues might have an impact also on the derivation of the criterion in (\\ref{eq:ICL_cocl}). The development of valid model selection tools for LBM when random effects are involved is out of scope of this work, therefore, operationally, we consider the ICL. Nonetheless, the analyses in Section \\ref{sec:chcharles_numexample} have to be interpreted with full awareness of the limitations described above. \n\n\n\n\\subsection{Remarks}\\label{sec:chcharles_discussion}\nThe model introduced so far inherits the advantages of both the building ingredients, namely the LBM and the SIM, it embeds. \nThanks to the local independence assumption of the LBM, it allows handling multivariate, possibly high-dimensional complex data structures in a relatively parsimonious way. Differences among the subjects are captured by the random effects, while curve summaries can be expressed as a functional of the mean shape curve. Additionally, resorting to a smoother when modeling the mean shape function, it allows for a flexible handling of functional data whereas the presence of random effects make the model effective also in a longitudinal setting. Finally, clustering is pursued directly on the observed curves, without resorting to intermediate transformation steps, as it is done in \\citet{bouveyron2018functional}, where clustering is performed on the basis expansion coefficients used to transform the original data. \n\nThe attractive features of the model go hand in hand with some difficulties that require caution --namely, likelihood multimodality, the associated convergence issues, and the curse of flexibility-- that we discuss below in turn. \n\\begin{itemize}\n\\item \\emph{Initialization} The M-SEM algorithm encloses different numerical steps which require the suitable specification of starting values. \nFirst, the convergence of EM-type algorithms towards a global maximum is not guaranteed; as a consequence they are known to be sensitive to the initialization with a proper one being crucial to avoid local solutions. Assuming $K$ and $L$ to be known, the M-SEM algorithm requires starting values for $\\mathbf{z}$ and $\\mathbf{w}$ in order to implement the first M step. A standard strategy resorts to multiple random initializations: the row and column partitions are sampled independently from multinomial distributions with uniform weights and the one eventually leading to the highest value of the \\emph{complete data log-likelihood} is retained. An alternative approach, possibly accelerating the convergence, is given by a k-means initialization, where two k-means algorithms are independently run for the rows and the columns of $\\mathcal{X}$ and the M-SEM algorithm is initialized with the obtained partitions. It has been pointed out \\citep[see e.g.][]{govaert2013co} that the SEM-Gibbs, being a stochastic algorithm, can attenuate in practice the impact of the initialization on the resulting estimates. Finally, note that a further initialization is required, to estimate the nonlinear mean shape function within the M step. \n\n\\item \\emph{Convergence and other numerical problems}. Although the benefits of including random effects in the considered framework \nare undeniable, parameters estimation is known not to be straightforward in mixed effect models, especially in the nonlinear setting \\citep[see, e.g.][]{harring2016comparison}. 
As noted above, the nonlinear dependence of the conditional mean of the response on the random effects requires \nmultidimensional integration to derive the\nmarginal distribution of the data. While several methods have been proposed to compute the integral, convergence issues are often encountered. \nIn such situations, some strategies can be employed to help with the convergence of the estimation algorithm. Examples are to try different sets of starting values, to scale the data prior to the modeling step, or to simplify the structure of the model (e.g. by reducing the number of knots of the B-splines). Addressing these issues often results in considerable computational times even when convergence is eventually achieved. Depending on the specific data at hand, it is also possible to consider alternative mean shape formulations, such as polynomial functions, which result in easier estimation procedures. Lastly, note that prior knowledge about the time evolution of the observed phenomenon, if available, may be incorporated in the model, as it could introduce constraints that simplify the estimation process \\citep[see e.g.][]{telesca2012modeling}.\n\n\\item \\emph{Curse of flexibility}. Including random effects for both phase and amplitude shifts and scale transformations might allow for a virtually excellent fit of arbitrarily shaped curves. This flexibility, albeit desirable, sometimes reaches excessive extents, possibly leading to estimation troubles. This is especially true in a clustering framework, where data are expected to exhibit a remarkable heterogeneity. \nFrom a practical point of view, our experience suggests that the estimation of the parameters $\\alpha_{ij,2}$ turns out to be the most troublesome, sometimes leading to convergence issues and instability in the resulting estimates.\n\n\n\\end{itemize}\n\n\n\\section{Numerical experiments}\\label{sec:chcharles_numexample}\n\nBefore turning to real-world applications of the proposed approach, this section presents some numerical experiments illustrating its main features on simulated data.\n\n\\subsection{Synthetic data}\\label{sec:chcharles_simulation}\n\nThe aim of the simulation study is twofold. The first and primary goal is to explore the capability of the proposed method to properly partition the data into blocks, also in comparison with some competitors such as the one proposed by \\citet{bouveyron2018functional} (\\texttt{funLBM} in the following) and a double k-means approach, where row and column partitions are obtained separately and subsequently merged to produce blocks. As for the second aim of the simulations, we evaluate the performance of the ICL in the developed framework to select both the number $(K,L)$ of blocks and the random effect configuration.\n\nAll the analyses have been conducted in the R environment \\citep{rcore} with the aid of the \\texttt{nlme} package \\citep{nlmepack}, used to estimate the parameters in the M-step, and the \\texttt{splines} package, used to handle the B-splines involved in the specification of the common shape function. 
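\n\nFor concreteness, the kind of \\texttt{nlme} call used to update the parameters of a single block could resemble the following sketch, which is our own simplified illustration rather than the actual implementation: the mean shape is held fixed to a logistic curve, whereas the procedure also estimates the block-specific B-spline coefficients, and the long-format data frame is simulated on the spot.\n\\begin{verbatim}\nlibrary(nlme)\nset.seed(1)\n\nmshape <- function(t) plogis((t - 0.5) / 0.1)           # placeholder mean shape m(.)\nt_grid <- seq(0, 1, length.out = 15)\nblock_df <- do.call(rbind, lapply(1:40, function(id) {  # 40 cells belonging to the same block\n  a <- c(rnorm(1, 0, 1), rnorm(1, 0, 0.1), rnorm(1, 0, 0.1))\n  data.frame(cell = factor(id), t = t_grid,\n             x = a[1] + exp(a[2]) * mshape(t_grid - a[3]) + rnorm(length(t_grid), 0, 0.3))\n}))\n\nfit_block <- nlme(x ~ a1 + exp(a2) * mshape(t - a3),\n                  data   = block_df,\n                  fixed  = a1 + a2 + a3 ~ 1,            # block-level means\n                  random = a1 + a2 + a3 ~ 1 | cell,     # cell-specific random effects\n                  start  = c(a1 = 0, a2 = 0, a3 = 0))\nfixef(fit_block)                                        # estimated means of the random effects\nVarCorr(fit_block)                                      # estimated variance components\n\\end{verbatim}\nConsistently with the remark on the scale effect above, dropping \\texttt{a2} from the random part is often the simplest way to stabilize such a fit.\n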
The code implementing the proposed procedure is available upon request.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=14cm,height=8.5cm]{Figures\/meancurves_simulated.pdf}\n \\caption{Subsample of simulated curves (black dashed lines) with over-imposed block specific mean shape curves (colored continue lines) employed in the numerical study}\n \\label{fig:meancurves}\n\\end{figure}\n\nThe examined simulation setup is defined as follows. We generated $B=100$ Monte Carlo samples of curves according to the general specification \\eqref{eq:simmodel_cluster}, with block-specific mean shape function $m_{kl}(\\cdot)$ and both the parameters involved in the error term and the ones describing the random effects distribution being constant across the blocks. In fact, in the light of the considerations made in Section \\ref{sec:chcharles_discussion}, the random scale parameter is switched off in the data generative mechanism, i.e. $\\alpha_{ij,2}$ is constrained to be degenerate in zero. The number of row and column clusters has been fixed to $K_{\\text{true}} = 4$ and $L_{\\text{true}} = 3$. The mean shape functions $m_{kl}(\\cdot)$ are chosen among four different curves, namely $m_{11}=m_{13}=m_{33}=m_1$, $m_{12}=m_{32}=m_{31}=m_{41}=m_2$, $m_{21}=m_{32}=m_{42}=m_3$ and $m_{22}=m_{43}=m_4$, as illustrated in Figure \\ref{fig:meancurves} with different color lines, and specified as follows: \n\\begin{eqnarray*}\nm_1(t) &\\propto& 6t^2 - 7t + 1 \\hspace{3cm} m_2(t) \\propto \\phi(t; 0.2,0.008) \\\\\nm_3(t) &\\propto& 0.75-0.8\\mathbbm{1}_{\\{t \\in (0.4,0.6)\\}} \\hspace{1.4cm} m_4(t) \\propto \\frac{1}{(1 + \\exp(-10t + 5))} \n\\end{eqnarray*} \nThe other parameters involved have been set to $\\sigma_{\\epsilon,kl}=0.3$, $\\mu_{kl}^\\alpha = (0,0,0)$ and $\\Sigma_{kl}^\\alpha = \\text{diag}(1,0,0.1)$ $\\forall k=1,\\dots,K_{\\text{true}}, l=1,\\dots,L_{\\text{true}}$. Three different scenarios are considered with generated curves consisting of $T = 15$ equi-spaced observations ranging in $[0,1]$. As a first baseline scenario, we set the number of rows to $n = 100$ and the number of columns to $d = 20$. The other scenarios are considered in order to obtain insights and indications on the performances of the proposed method when dealing with larger matrices. Coherently in the second scenario $n = 500$ and $d = 20$ while in the third one $n = 100$ and $d = 50$ thus increasing respectively the number of samples and features.\n\nAs for the first goal of the simulation study, we explore the performances of our proposal in terms of the Co-clustering Adjusted Rand Index \\citep[CARI,][]{robert2020comparing}. This criterion generalizes the Adjusted Rand Index \\citep{hubert1985comparing} to the co-clustering framework, and takes the value 1 when the blocks partitions perfectly agree up to a permutation. \nIn order to have a fair comparison with the double \\emph{k-means} approach, for which selecting the number of blocks does not admit an obvious solution, and to separate the uncertainty due to model selection from the one due to cluster detection, models are compared by considering the number of blocks as known and equal to $(K_{\\text{true}}, L_{\\text{true}})$. Consistently, we estimate our model only for the \\texttt{TFT} random effects configuration, being the one generating the data. This is made possible by constraining mean and variance estimates of the scale-random effect to be equal to zero. \n\nResults are reported in Table \\ref{tab:charles_simulation1}. 
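\n\nBefore turning to the results, and for concreteness, one replicate of the baseline scenario can be generated along the following lines. The sketch is our own illustration; the shape functions are implemented up to the proportionality constants left unspecified in the text and, since the block-to-shape assignment above mentions $m_{32}$ twice and leaves $m_{23}$ unspecified, the corresponding entries of the assignment matrix are assumptions.\n\\begin{verbatim}\nset.seed(123)\nn <- 100; d <- 20; K <- 4; L <- 3\nt_grid <- seq(0, 1, length.out = 15)\n\nm_list <- list(function(t) 6 * t^2 - 7 * t + 1,                    # m_1\n               function(t) dnorm(t, mean = 0.2, sd = sqrt(0.008)), # m_2 (0.008 taken as variance)\n               function(t) 0.75 - 0.8 * (t > 0.4 & t < 0.6),       # m_3\n               function(t) 1 / (1 + exp(-10 * t + 5)))             # m_4\nshape_id <- matrix(c(1, 2, 1,   # row cluster 1\n                     3, 4, 3,   # row cluster 2 (m_23 assumed equal to m_3)\n                     2, 2, 1,   # row cluster 3 (m_32 assumed equal to m_2)\n                     2, 3, 4),  # row cluster 4\n                   nrow = K, byrow = TRUE)\n\nz <- sample(1:K, n, replace = TRUE)   # row (subject) memberships\nw <- sample(1:L, d, replace = TRUE)   # column (variable) memberships\nX <- array(NA, dim = c(n, d, length(t_grid)))\nfor (i in 1:n) for (j in 1:d) {\n  m <- m_list[[shape_id[z[i], w[j]]]]\n  a <- c(rnorm(1, 0, 1), 0, rnorm(1, 0, sqrt(0.1)))  # Sigma_alpha = diag(1, 0, 0.1), alpha_2 off\n  X[i, j, ] <- a[1] + exp(a[2]) * m(t_grid - a[3]) + rnorm(length(t_grid), sd = 0.3)\n}\n\\end{verbatim}\n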
\nThe proposed method claims excellent performances in all the considered settings, with results notably featured by a very limited variability. No clear-cut indications arise from the comparison with \\texttt{funLBM} as the two methodologies lead to comparable results. Conversely, the use of an approach which is not specifically conceived for co-clustering, like the the double \\emph{k-means}, leads to a strong degradation of the quality of the partitions. Unlike the results obtained with our method and with \\texttt{funLBM}, being insensitive to changes in the dimensions of the data, the double \\emph{k-means} behaves better with an increased number of variables or observations. \n\\begin{table}[t]\n\\centering\n\\caption{Mean (and std error) of the CARI computed over the simulated samples in the three scenarios. Partitions are obtained using the proposed approach (\\texttt{tdLBM}), \\texttt{funLBM} and a double k-means approach.}\n\\vspace*{0.2cm}\n\\begin{tabular}{cccc}\n \\hline\n & $n = 100, d = 20$ & $n = 100, d = 50$ & $n = 500, d = 20$ \\\\ \n \\hline\nCARI$_{\\text{\\texttt{tdLBM}}}$ & 0.972 (0.044) & 0.988 (0.051) & 0.981 (0.020) \\\\ \nCARI$_{\\text{\\texttt{funLBM}}}$ & 0.981 (0.066) & 0.986 (0.053) & 0.986 (0.060) \\\\ \nCARI$_{\\text{\\texttt{kmeans}}}$ & 0.761 (0.158) & 0.842 (0.182) & 0.809 (0.169) \\\\ \n \\hline\n\\end{tabular}\n\\label{tab:charles_simulation1}\n\\end{table}\n\n\n\n\\begin{table}[b]\n\\centering\n \\caption{Rate of selection of $(K,L)$ configurations for the different scenarios considered obtained using the proposed approach.}\n \\vspace*{0.3cm}\n\\begin{tabular}{p{4cm}p{4cm}p{4cm}p{2cm}}\n\\hline\n\\hspace{1.15cm} $n=100, d=20$ & \\hspace{0.85cm} $n=100, d=50$ & \\hspace{0.7cm} $n=500, d=20$&\\\\\n\\hline\n\\multicolumn{3}{l}{\\multirow{9}{*}{\\includegraphics[height = 4.6cm, width = 13cm]{Figures\/table2bis.pdf}}} & {\\multirow{9}{*}{\\includegraphics[scale=0.43]{Figures\/output.pdf}}}\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n&&&\\\\\n\\hline\n\\end{tabular}\n\\label{tab:charles_simulation2}\n\\end{table}\n\nAs for the performances of the ICL, Table \\ref{tab:charles_simulation2} shows the fractions of samples where the criterion has led to the selection of each of the considered configurations of $(K, L)$, with $K,L = 2,\\dots,5$, for models estimated with the proposed method. In all the considered settings, the actual number of co-clusters is the most frequently selected by the ICL criterion. In fact, a non-negligible tendency to favor overparameterized models, especially for larger sample size, is witnessed, consistently with the comments in \\citet{corneli2019co}.\n\nSimilar considerations may be drawn from the exploration of the performances of the ICL when used to select the random effect configuration (Table \\ref{tab:charles_simulation3}). Here, the performances seem to be less influenced by the data dimensionality, and ICL selects the true configuration for the majority of the samples in two scenarios while, in the third one, the true model is selected approximately one out of two samples. Nonetheless, also in this case, a tendency to overestimation is visible, with the \\texttt{TTT} configuration frequently selected in all the scenarios. In general, the penalization term in (\\ref{eq:ICL_cocl}) seems to be too weak and overall not completely able to account for the presence of random effects. 
These results, along with the remarks at the end of Section \\ref{sec:chcharles_modelest}, provide a suggestion about a possibly fruitful research direction which consists in proposing some suitable adjustments. \n\nIn fact, it is worth noting that when the selection of the number of clusters is the aim, the observed behavior is overall preferable with respect to underestimation since it does not undermine the homogeneity within a block, being the final aim of cluster analysis; this has been confirmed by further analyses suggesting that the additional groups are usually small and arising because of the presence of outliers. As for the random effect configuration, we believe that since the choice impacts the notion of cluster one aims to identify, it should be driven by subject-matter knowledge rather than by automatic criteria. Additionally, the reported analyses are exploratory in nature, aiming to provide general insights on the characteristics of the proposed approach. To limit computational time required to run the large number of models involved in Tables \\ref{tab:charles_simulation2} and \\ref{tab:charles_simulation3}, for demonstration of our method, we did not use multiple initializations and we have pre-selected the number of knots for the block-specific mean functions. In practice, we recommend, at a minimum, using multiple starting values and carrying out sensitivity analyses on the number of knots to ensure that substantive conclusions are not affected.\n\n\n\n\n\\begin{table}[t]\n\\centering\n \\caption{Rate of selection for each random effects configuration in the considered scenarios. \\texttt{T} means that the corresponding random effect is switched on while \\texttt{F} means that is switched off. As an example \\texttt{FTT} represent a model where $\\alpha_{ij,1}$ is constrained to be a random variable with degenerate distribution in zero. Bold cells represents the true data generative model (TFT), blank ones represent percentages equal to zero.}\n \\vspace*{0.3cm}\n\\begin{tabular}{cc|cccccccc}\n \\hline\n & & \\texttt{FFF} & \\texttt{TFF} & \\texttt{FTF} & \\texttt{FFT} & \\texttt{TTF} & \\texttt{TFT} & \\texttt{FTT} & \\texttt{TTT} \\\\\n \\hline \n & $n = 100, d=20$ & & & & & 1\\% & {\\bf 58\\%} & & 41\\%\\\\\n \\% of selection \n & $n = 100, d=50$ & & 2\\% & & & 1\\% & {\\bf 62\\%} & & 35\\% \\\\\n & $n = 500, d=20$ & & 1\\% & & & 5\\% & {\\bf 47\\%} & & 47\\%\\\\\n \\hline\n\\end{tabular}\n\\label{tab:charles_simulation3}\n\\end{table}\n\n\n\\subsection{Applications to real world problems}\n\\subsubsection{French Pollen Data}\\label{sec:chcharles_realdata}\nThe data we consider in this section are provided by the \\emph{R\\'eseau National de Surveillance A\u00e9robio-\\\\logique} (RNSA), the French institute which analyzes the biological particles content of the air and studies their impact on the human health.\nRNSA collects data on concentration of pollens and moulds in the air, along with some clinical data, in more than 70 municipalities in France. \n\nThe analyzed dataset contains daily observations of the concentration of 21 pollens for 71 cities in France in 2016. Concentration is measured as the number of pollens detected over a cubic meter of air and carried on by means of some pollen traps located in central urban positions over the roof of buildings, in order to be representative of the trend air quality. \n\nThe aim of the analysis is to identify homogeneous trends in the pollen concentration over the year and across different geographic areas. 
\nFor this reason, we focus on finding groups of pollens that differ from one another in either the period of maximum presence or the time span over which they are present. Consistently with this choice, only models with the y-axis shift parameter $\\alpha_{ij,1}$ are estimated (i.e. $\\alpha_{ij,2}$ and $\\alpha_{ij,3}$ are switched off), for varying numbers of row and column clusters, with the best one selected via the ICL. \nWe consider monthly data obtained by averaging the observed daily concentrations over each month. The resulting dataset may be represented as a matrix with $n=71$ rows (cities) and $p=21$ columns (pollens), where each entry is a sequence of $T=12$ time-indexed measurements. Moreover, to practically apply the proposed procedure to the data, a preprocessing step has been carried out: we work on a logarithmic scale and, in order to improve the stability of the estimation procedure, the data have been standardized.\n\nResults are graphically displayed in Figure \\ref{fig:meancurv_pollini}. The ICL selects a model with $K=3$ row clusters and $L=5$ column ones.\nA first visual inspection of the pollen time evolutions reveals that the procedure is able to discriminate the pollens according to their seasonality. Pollens in the first two column groups are mainly present during the summer, with a difference in the intensity of the concentration. In the remaining three groups, pollens are active, roughly speaking, during the winter and spring months, but with different time persistence and evolution.\n\nA substantially deeper investigation of the obtained cluster configuration is beyond the scope of this work and would benefit from the insights of experts in botany and geography. Nonetheless, it stands out that the column clusters roughly group together tree pollens, distinguishing them from weed and grass ones (right panel of Table \\ref{fig:mappa_francia}). Results align with the usually considered typical seasons, with groups of pollens from trees mostly present in winter and spring, while those from grasses spread in the air mainly during the summer months. With respect to the row partition, displayed in the left panel of Table \\ref{fig:mappa_francia}, three clusters have been detected, with one roughly corresponding to the Mediterranean region. The situation for the other two clusters appears to be more heterogeneous: one of these groups tends to gather cities in the northern region and on the Atlantic coast, while the other covers mainly the central and continental part of the country. Again, the results appear promising, but a cross analysis with climate scientists would be beneficial in order to get a more informative and substantiated point of view.\n\n\\begin{figure}[t]\n \\includegraphics[scale=0.2]{Figures\/meancurves_k3l5.pdf}\n \\caption{French Pollen Data results. 
Curves belonging to each single block with superimposed the corresponding block specific mean curve (in light blue).}\n \\label{fig:meancurv_pollini}\n\\end{figure}\n\n\\begin{table}[tb]\n\\caption{French map with overimposed the points indicating the cities colored according to their row cluster memberships (left) and Pollens organized by the column cluster memberships (right).}\\label{fig:mappa_francia}\n\\begin{center}\n\\begin{tabular}{c|cl}\n Row groups (Cities) & \\multicolumn{2}{c}{Column groups (Pollens)} \\\\\n\\hline\n\\multirow{10}{*}{\\vspace{-0.1cm} \\includegraphics[scale=0.13]{Figures\/mappa_k3l5.pdf}} & & \\\\\n& & \\\\\n& 1 & Gramineae, Urticaceae \\\\\n& 2 & Chestnut, Plantain \\\\\n& 3 & Cypress \\\\\n& 4 & Ragweed, Mugwort, Birch, Beech, \\\\ \n& & Morus, Olive, Platanus, Oak, Sorrel \\\\\n& & Linden \\\\\n& 5 & Alder, Hornbeam, Hazel, Ash, \\\\\n& & Poplar, Willow \\\\\n& & \\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsubsection{COVID-19 evolution across countries}\\label{sec:covid}\n\nAt the time of writing this paper, an outbreak of infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has severely harmed the whole world. Countries all over the world have undertaken measures to reduce the spread of the virus: quarantine and social distancing practices have been implemented, collective events have been canceled or postponed, business and educational activities have been either interrupted or moved online. \n\nWhile the outbreak has led to a global social and economic disruption, its spreading and evolution, also in relation to the aforementioned non pharmaceutical interventions, have not been the same all over the world \\citep[see][for an account of this in the first months of the pandemic]{flaxman2020estimating, brauner2021inferring}. With this regard, the goal of the analysis is to evaluate differences and similarities among the countries and for different aspects of the pandemic. \n\nBeing the overall situation still evolving, and given that testing strategies have significantly changed across waves, we refer to the first wave of infection, considering the data from the 1st of March to the 4th of July 2020, in order to guarantee the consistency of the disease metrics used in the co-clustering. Moreover we restrict the analysis to the European countries. Data have been collected by the Oxford COVID-19 Government Response Tracker \\citep[OxCGRT, ][]{covid} and originally refer to daily observations \nof the number of confirmed cases and deaths for COVID-19 in each country. We also select two indicators tracking the individual country intervention in response to the pandemic: the Stringency index and the Government response index. Both indicators are recorded on a 0-100 ordinal scale that represents the level of strictness of the policy and accounts for containment and closure policies. The latter indicator also reflects Health system policies such as information campaigns, testing strategies and contact tracing. \n\nData have been pre-processed as follows: daily values have been converted into weekly averages in order to reduce the impact of short term fluctuations and the number of time observations. Rates per 1000 inhabitants have been evaluated from the number of confirmed cases and deaths, and the logarithms applied to reduce the data skewness. All the variables have been standardized. 
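\n\nThese preprocessing steps can be summarized in a few lines of R. The sketch below is our own illustration, operating on a small artificial input data frame whose column names are hypothetical rather than the original OxCGRT ones; a small offset may be needed before taking logarithms if some weekly counts are zero.\n\\begin{verbatim}\nset.seed(1)\ncovid_daily <- data.frame(                     # artificial stand-in for the raw daily data\n  country = rep(c('A', 'B'), each = 126),\n  date = rep(seq(as.Date('2020-03-01'), by = 'day', length.out = 126), 2),\n  cases = rpois(252, 50), deaths = rpois(252, 5),\n  stringency = runif(252, 0, 100), gov_response = runif(252, 0, 100),\n  population = rep(c(5e6, 8e6), each = 126))\n\ncovid_daily$week <- cut(covid_daily$date, breaks = 'week')   # daily values -> weekly averages\nweekly <- aggregate(cbind(cases, deaths, stringency, gov_response, population)\n                    ~ country + week, data = covid_daily, FUN = mean)\nweekly$log_case_rate  <- log(1000 * weekly$cases  / weekly$population)   # rates per 1000, logged\nweekly$log_death_rate <- log(1000 * weekly$deaths / weekly$population)\nvars <- c('log_case_rate', 'log_death_rate', 'stringency', 'gov_response')\nweekly[vars] <- scale(weekly[vars])                                      # standardization\n\\end{verbatim}\n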
\n\nThe resulting dataset is a matrix with $n= 38$ rows (countries), $p$= 4 columns (variables describing the pandemic evolution and containment), observed over a period of $T= 18$ weeks.\nUnlike the French Pollen data, here there is no strong reason to favour one random effect configuration over the others. Conversely, different configurations of random effects would entail different ideas of similarity of virus evolution. Thus, while the presence of random effects would allow to cluster together similar trends yet associated to different intensities, speed of evolution and time of onset, switching the random effects off could result in enhancing such differences via the separation of the trends. \n\nModels have been run for $K = 1, \\ldots, 6$ row clusters and $L = 1,2,3$ column clusters, and all the 8 possible configurations of random effects. \nThe behaviour of the resulting ICL values supports the remark in Section \\ref{sec:chcharles_simulation}, as the criterion favours highly parameterized models. This holds particularly true with regard to the random effects configuration where the larger the number of random effects switched on, the highest the corresponding ICL. \nThus, models with all the random effects switched on stand out among the others, with a preference for $K=2$ and $L=3$ whose results are displayed in Figure \\ref{fig:best_covid}. The obtained partition is easily interpretable: in the column partition, reported on the right panel of Table \\ref{fig:best_covid_mappa}, the containment indexes are grouped together into the same cluster whereas the log-rate of positiveness and death are singleton clusters. Consistently with the random effect configuration, row clusters exhibit a different evolution in terms of cases, deaths and undertaken containment measures: \none cluster gathers countries where the virus has spread earlier and caused more losses; here, more severe control measures have been adopted, whose effect is likely seen in a general decreasing of cases and deaths after achieving a peak. The second row cluster collects countries for which the death toll of the pandemic seems to be more contained. The virus outbreak generally shows a delayed onset and a slower growth, which does not show a steep decline after reaching the peak, although the containment policies remain high for a long period. \nNotably, the row partition is also geographical, with the countries with higher mortality all belonging to the West Europe (see the left panel in Table \\ref{fig:best_covid_mappa}). \n\nTo properly show the benefits of considering different random effects configurations in terms of notion and interpretation of the clusters, we also illustrate the partition produced by another model estimated having the three random effects switched off (Figure \\ref{fig:covid2}). Here we consider $K = L = 3$: the column partition remains unchanged with respect to the best model, with the row partition still separating countries by the severity of the impact, with the third additional cluster having intermediate characteristics. According to this model, two row clusters feature countries with a similar right-skewed bell-shaped trend of cases and similar policies of containment, yet with a notable difference in the virus lethality. Indeed, the effect of switching $\\alpha_2$ off is clearly noted in the log-rate of death fitting, with two mean curves having similar shapes but different scales. 
The additional intermediate cluster, less impacted in terms of death rate, is populated by countries from the central-east Europe.\nThe apparent smaller impact of the first wave of the pandemic on the eastern European countries could be explained by several factors ranging from demographic characteristic and more timely closure policies to a different international mobility pattern. Additionally, other factors such as the general economic and health conditions might have prevented accurate testing and tracking policies, so that the actual spreading of the pandemic might have been underestimated. \n\n\\begin{figure}[bt]\n\\begin{center}\n \\includegraphics[width=.9\\textwidth]{Figures\/mod23TRUETRUETRUE.pdf}\n \\end{center}\n \\caption{COVID-19 outgrowth results of the best model, with $K =2,$ $L=3$ and the three random effects on. Curves belonging to each single block with superimposed the associated block specific mean curve (in light blue).}\n \\label{fig:best_covid}\n\\end{figure} \n\n\\begin{table}[tb]\n\\caption{Europe map with countries colored according to their row cluster memberships (left) and variables organized by the column cluster membership (right) for the best ICL model.}\n\\label{fig:best_covid_mappa}\n\\begin{center}\n\\begin{tabular}{p{.56\\textwidth}|c p{.34\\textwidth}}\n Row groups (Countries) & \\multicolumn{2}{p{.4\\textwidth}}{Column groups (COVID-19 spreading and containment)} \\\\\n\\hline\n\\multirow{7}{*}{\\vspace*{-3cm} \\includegraphics[scale=0.43]{Figures\/mapmod23TRUETRUETRUE.pdf}} & & \\\\\n& & \\\\\n& 1 & log \\% of cases per 1000 inhabitants \\\\\n& 2 & log \\% of deaths per 1000 inhabitants \\\\\n& 3 & Stringency index, Government response index \\\\\n& & \\\\\n& & \\\\\n& & \\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[h]\n \\includegraphics[width=.9\\textwidth]{Figures\/mod33FALSEFALSEFALSE.pdf}\n \\caption{COVID-19 outgrowth results of the model with $K =3,$ $L=3$ and the three random effects off.}\n \\label{fig:covid2}\n\\end{figure}\n\n\\section{Conclusions}\\label{sec:chcharles_conclusions}\n\n\nModeling multivariate time-dependent data requires accounting for heterogeneity among subjects, capturing similarities and differences among variables, as well as correlations between repeated measures. \nOur work has tackled \nthese challenges by proposing a new parametric co-clustering methodology, recasting the widely known Latent Block Model in a time-dependent fashion. The co-clustering model, by simultaneously searching for row and column clusters, partitions three-way matrices in blocks of homogeneous curves. Such approach\ntakes into account the mentioned features of the data while building parsimonious and meaningful summaries. \nAs a data generative mechanism for a single curve, we have considered the \\emph{Shap Invariant Model} that has turned out to be particularly flexible when embedded in a co-clustering context. The model allows to describe arbitrary time evolution patterns while adequately capturing dependencies among different temporal instants. \nThe proposed method compares favorably with the few existing competitors for the simultaneous clustering of subjects and variables with time-dependent data. Our proposal, while producing co-partitions with comparable quality as measured by objective criteria, applies to both functional and longitudinal data, and has relevant advantages in terms of interpretability. 
The option of ``switching off'' some of the random effects, although in principle simplifying the model structure, actually increases its flexibility, as it allows the model to encompass different concepts of cluster, possibly depending on the specific application and on subject-matter considerations. \n\nWhile further analyses are required to increase our understanding of the general performance of the proposed model, its application to both simulated and real data has provided overall satisfactory results and highlighted some aspects that are worth further investigation. \nOne interesting direction for future research is studying possible alternatives to the ICL to be used as selection tools when the model specification in the LBM framework involves random effects. Moreover, alternative choices, for example for specifying the block mean curves, could be considered and compared with the choices adopted here. A further direction for future work would be exploring a fully Bayesian approach to model specification and estimation, possibly handling the random parameters in the model more easily.\n\n\n\n\n\\newpage\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Abstract}\nThis paper proposes a general system for computation-intensive graph mining tasks that find, from a big graph, all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which requires distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for computation-intensive graph mining tasks since computation over any data is coupled with the data's access, which involves network transmission. We propose a distributed graph mining framework, called \\oursys, which is designed for computation-intensive graph mining workloads. \\oursys provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that \\oursys is orders of magnitude faster than existing solutions, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.\n\n\\section{Introduction}\\label{sec:intro}\nWe focus on a class of graph mining problems, namely \\prob problems, which aim to find all subgraphs in a graph that satisfy certain requirements. Such a problem may enumerate (or count) all of the subgraphs, find only those subgraphs with the top-$k$ highest scores, or simply output the largest subgraph. Examples of \\prob problems include \\emph{graph matching}~\\cite{match3}, \\emph{maximum clique finding}~\\cite{clique03}, \\emph{maximal clique enumeration}~\\cite{clique73}, \\emph{quasi-clique enumeration}~\\cite{quasiclique}, \\emph{triangle listing and counting}~\\cite{yufei_triangle}, and \\emph{densest subgraph finding}~\\cite{densegraph}. 
These problems have a wide range of applications including social network analysis~\\cite{PattilloYB13eor,socialnet}, searching knowledge bases~\\cite{KasneciSIRW08icde,WuLWZ12sigmod} and biological network investigation~\\cite{HeS08sigmod,ZouCO09pvldb}. Although many serial algorithms have been proposed to solve these problems, they cannot scale to big graphs such as social networks and knowledge graphs. Moreover, it is non-trivial to extend these algorithms for parallel processing, because (1)~the input graph itself may be too big to fit in the memory of one machine, and (2)~a serial algorithm checks subgraphs using backtracking, where only one candidate subgraph is constructed (incrementally from the previous candidate subgraph) and examined at a time; a parallel algorithm that checks many subgraphs simultaneously in memory may cause memory overflow. The issue is further complicated in the distributed setting due to the high cost of accessing large amounts of remote data.\n\nA plethora of distributed systems have been developed recently for processing big graphs~\\cite{dbs}, but most of them adopt a think-like-a-vertex style (or, vertex-centric) computation model~\\cite{giraph,graphx,gps,da_www}, which was pioneered by Google's Pregel~\\cite{pregel}. These systems only require a programmer to specify the behavior of one generic vertex (e.g., sending messages to other vertices) when developing distributed graph algorithms, but the resulting programs are usually data-intensive. Specifically, the processing of each vertex is triggered by incoming messages sent (mostly) from other machines, and the CPU cost of vertex processing is negligible compared with the communication cost of message transmission. Moreover, subgraph finding problems operate on subgraphs rather than individual vertices, and it is unnatural to translate a subgraph finding problem into a vertex-centric program.\n\nAs a result, big graph analytics research mostly focuses on problems that naturally have a vertex-program implementation, such as PageRank and shortest paths, and little work has been devoted to large-scale subgraph finding. To our knowledge, only two existing systems, NScale~\\cite{nscale} and Arabesque~\\cite{arab}, attempted to attack large-scale subgraph finding with a subgraph-centric programming model. Unfortunately, their execution engines still mine subgraphs in a data-intensive manner. Specifically, they construct all candidate subgraphs in a synchronous manner (i.e., large subgraphs are constructed from small ones), before the actual computation that examines these subgraphs. Materializing candidate subgraphs incurs large network and storage overhead (while the CPU is under-utilized), and the actual computation-intensive mining process is delayed until the costly subgraph materialization is completed.\n\nAnother critical drawback of existing distributed frameworks is that they process each individual subgraph as an independent task, which loses many optimization opportunities. For example, if multiple subgraphs on a machine contain a common vertex $v$, their tasks may share $v$'s information (e.g., adjacent edges). But in existing systems, each subgraph will maintain its own copy of $v$'s information (likely received from another machine), leading to redundant communication and storage. 
The synchronous execution model of existing systems is also prone to the straggler problem, due to imbalanced workload distribution among different machines.\n\nIn this paper, we identify the following five requirements that a distributed system for \\prob should satisfy in order to be efficient and user-friendly:\n\\begin{itemize}\n\\item The programming interface should be {\\bf subgraph-centric}.\n\\item {\\bf Computation-intensive} processing should be native to the programming model. For example, a programmer should be able to backtrack a portion of the graph to examine candidate subgraphs like in a serial algorithm, without materializing and transmitting any subgraph.\n\\item There should exist {\\bf no global synchronization} among the machines, i.e., the processing of different portions of a graph should not block each other.\n\\item Since a \\prob algorithm checks many (possibly overlapping) subgraphs whose cumulative volume can be much larger than the input graph itself, it is important to schedule the subgraph-tasks properly to keep the {\\bf memory usage bounded} at any point in time.\n\n Obviously, each machine should stream and process its subgraphs on its local disk (if memory is not sufficient), to minimize network and disk IO overhead.\n\\item Subgraphs on a machine that contain a common vertex $v$ should be able to share $v$'s information (e.g., adjacent edges), to {\\bf avoid redundant data transmission and storage}.\n\\end{itemize}\n\nBased on these criteria, we designed a novel subgraph-centric system, called \\oursys, as a unified framework for developing scalable algorithms for various \\prob problems. To write a \\oursys program, a user only needs to specify how to grow a portion of the input graph $g$ by pulling $g$'s surrounding vertices, and how to process $g$ (e.g., by backtracking). Communication and execution details in \\oursys are transparent to end users. In \\oursys, each machine only keeps and processes a small batch of tasks in memory at any time (to achieve high throughput through task batching while keeping memory usage bounded). Subgraphs that are waiting to be processed (e.g., to grow their frontiers by pulling remote vertices) are buffered in a disk-based priority queue. The priority queue is organized by a min-hashing based task scheduling strategy, in order to maximize the opportunity that the subgraphs being processed share common vertices (including their adjacent edges) that are cached in the local machine.\n\nWe have used \\oursys to develop significantly more efficient and scalable solutions for a number of \\prob problems, including triangle counting, maximum clique finding, and graph matching. Compared with existing systems, \\oursys is up to hundreds of times faster and scales to graphs that are two orders of magnitude larger given the same hardware resources.\n\nThe rest of this paper is organized as follows. We motivate the need for a computation-intensive subgraph finding framework with the problem of maximal clique enumeration in Section~\\ref{sec:example}, and then explain why existing systems are inefficient in Section~\\ref{sec:related}. Section~\\ref{sec:overview} provides an overview of the design of \\oursys, Section~\\ref{sec:api} introduces the programming interface of \\oursys, and Section~\\ref{sec:app} illustrates how to write application programs in \\oursys. We present the implementation of \\oursys in Section~\\ref{sec:system}, and report experimental results in Section~\\ref{sec:results}. 
Finally, we conclude the paper in Section~\\ref{sec:conclude}.\n\n\\section{A Motivating Example}\\label{sec:example}\nA common feature of \\prob problems is that the computation over a graph $G$ can be decomposed into that over subgraphs of $G$ that are often much smaller (called {\\bf decomposed subgraphs}), such that each result subgraph is found in exactly one decomposed subgraph. In other words, the decomposed subgraphs partition the search space and there is no redundant computation. We illustrate by considering maximal clique enumeration, which serves as our running example. Table~\\ref{gnote} summarizes the notations used throughout this paper.\n\n\\begin{table}[t]\n\\caption{Notation Table}\\label{gnote}\n\\vspace{1mm}\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{gnote}\n\\vspace{-4mm}\n\\end{table}\n\n\\vspace{1mm}\n\n\\noindent{\\bf Example: Maximal Clique Enumeration.} We decompose a graph $G=(V, E)$ into a set of $G$'s subgraphs $\\{G_1, G_2, \\ldots, G_n\\}$, where $G_i$ is constructed by expanding from a vertex $v_i\\in V$. Let us denote the neighbors of a vertex $v$ by $\\Gamma(v)$. If we construct $G_i$ as the subgraph induced by $\\{v_i\\}\\cup\\Gamma(v_i)$ ($G_i$ is called $v_i$'s 1-ego network), then we can find all cliques from these 1-ego networks since any two vertices in a clique must be neighbors of each other. However, a clique could be double-counted.\n\nLet us define $\\Gamma_{gt}(v)=\\{u\\in\\Gamma(v)\\,|\\,u>v\\}$, where vertices are compared according to their IDs. To avoid redundant computation, we redefine $G_i$ as induced by $\\{v_i\\}\\cup\\Gamma_{gt}(v_i)$, i.e., $G_i$ does not contain any neighbor $v_j < v_i$.\n\n\\section{Programming Interface}\\label{sec:api}\nThe user-defined classes of \\oursys are templates with arguments $<$$I$$>$, $<$$C$$>$, $<$$V$$>$ and $<$$E$$>$. Among them, $<$$I$$>$, $<$$V$$>$ and $<$$E$$>$ specify the data types of vertices and edges: (1)~$<$$I$$>$: the type of vertex ID; (2)~$<$$V$$>$: the type of vertex attribute; (3)~$<$$E$$>$: the type of the attribute of an adjacency list item. Other system-defined types (e.g., those for subgraph, vertex, and adjacency list) are automatically derived by {\\em G-thinker} from them, and can be directly used in the UDFs once a user specifies these three template arguments.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.78\\columnwidth]{api_example}\n \\caption{Data Types in \\oursys}\\label{api_example}\n \\vspace{-4mm}\n\\end{figure}\n\nFigure~\\ref{api_example} illustrates the inferred system-defined types. Specifically, a subgraph is shown on the left, which consists of a table of vertices (stored with their adjacency lists). The structure of Vertex~3 is shown on the right, where the vertex is stored with its ID (of type $<$$I$$>$) and a vertex label ``c'' (of type $<$$V$$>$), and an adjacency list. Each item in the adjacency list is stored with a neighbor ID (of type $<$$I$$>$) and an attribute (of type $<$$E$$>$) indicating the label of the neighbor and the edge label. For example, the first item corresponds to Vertex~1 with label ``a'', and the edge label of $(3, 1)$ is ``B''. Attributes (i.e., $<$$V$$>$ and $<$$E$$>$) are optional and are not needed for finding subgraphs with only topology constraints (e.g., triangles, cliques, and quasi-cliques).\n\n\\vspace{1mm}\n\n\\noindent{\\bf The Task class.} The {\\em Task} class has another template argument $<$$C$$>$ that specifies the type of context information for a task $t$, which can be, for example, $t$'s iteration number (a task in \\oursys proceeds its computation in iterations). 
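\n\nTo make the roles of these template arguments concrete, the following standalone C++ sketch mirrors the vertex layout of Figure~\\ref{api_example} for a labeled graph. The type and field names here are hypothetical illustrations, not the actual \\oursys definitions.\n\\begin{verbatim}\n#include <vector>\n\ntypedef int VertexID;        // <I>: type of vertex ID\n\nstruct EdgeAttr {            // <E>: attribute of an adjacency list item,\n  char nb_label;             //      e.g., the label of the neighbor\n  char edge_label;           //      and the label of the edge\n};\n\nstruct AdjItem {             // one item of an adjacency list\n  VertexID nb;               // neighbor ID (of type <I>)\n  EdgeAttr attr;             // attribute (of type <E>)\n};\n\nstruct LabeledVertex {       // a vertex as sketched in the figure\n  VertexID id;               // vertex ID (of type <I>)\n  char label;                // vertex attribute (of type <V>)\n  std::vector<AdjItem> adj;  // adjacency list\n};\n\\end{verbatim}\nFor purely topological problems such as triangle or clique finding, the attribute fields above would simply be dropped.\n\n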
Each {\\em Task} object $t$ maintains a subgraph $g$ and the user-specified {\\em context} object (of type $<$$C$$>$). The {\\em Task} class has only one UDF, $t$.{\\em compute}({\\em frontier}), where the input {\\em frontier} keeps the set of vertices requested by $t$ in its previous iteration. Each element of {\\em frontier} is actually a pointer to a vertex object in $T_{local}$ or $T_{cache}$. Of course, users may also access $t$'s subgraph and context object in {\\em compute}({\\em frontier}).\n\nUDF {\\em compute}({\\em frontier}) specifies how a task computes for one iteration. If $t$.{\\em compute}(.) returns {\\em true}, $t$ needs to be processed by more iterations; otherwise, $t$'s computation is finished after the current iteration. In \\oursys, when $t$ is fetched from the task queue for processing, $t$.{\\em compute}(.) is executed repeatedly until either $t$ is complete, or $t$ needs a non-local vertex $v$ that is not cached in $T_{cache}$, in which case $t$ is added to the task queue waiting for all $t$'s requested vertices to be pulled. When a task is completed or queued to disk, \\oursys automatically garbage collects the memory space of the task to make room for the processing of other tasks.\n\nInside $t$.{\\em compute}(.), a user may access and update $g$ and {\\em context}, and call {\\em pull}($u$) to request vertex $u$ for use in $t$'s next iteration. Here, $u$ is usually in the adjacency list of a previously pulled vertex, and {\\em pull}($u$) expands the frontier of $g$. To improve network utilization, $g$ is usually expanded in a breadth-first manner, so that each call of {\\em compute}(.) generates pull-requests for all relevant vertices adjacent to $g$'s growing frontier. A user may also call {\\em add\\_task}({\\em task}) in $t$.{\\em compute}(.) to add a newly-created task to the task queue.\n\n\\vspace{1mm}\n\n\\noindent{\\bf The Worker Class.} Each object of the {\\em Worker} class corresponds to a worker that processes its assigned tasks in serial. Figure~\\ref{api_simple} shows the key functions of the {\\em Worker} class, including two important UDFs.\n\nUDF {\\em seedTask\\_gene}($v$) specifies how to create tasks according to a seed vertex $v\\in T_{local}$. A worker of \\oursys starts by calling {\\em seedTask\\_gene}($v$) on every $v\\in T_{local}$, to generate seed tasks and to add them to the disk-based task queue. Inside {\\em seedTask\\_gene}($v$), users may examine the adjacency list of $v$, create tasks accordingly (and may let each task pull neighbors of $v$), and add these tasks to the task queue by calling {\\em add\\_task(.)}.\n\n\nUDF {\\em respond}($v$) is used to prune $\\Gamma(v)$ before sending it back to requesting workers. By default, {\\em respond}($v$) returns {\\em NULL} and \\oursys directly uses the vertex object of $v$ in $T_{local}$ to respond. Users may overload {\\em respond}($v$) to return a newly created copy of $v$, with items in $\\Gamma(v)$ properly pruned to save communication (e.g., $\\Gamma_{gt}(v)$ for clique enumeration). 
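\n\nFor instance, for clique enumeration such an overloaded {\\em respond}($v$) would essentially return a copy of $v$ whose adjacency list is pruned to $\\Gamma_{gt}(v)$. A standalone C++ sketch of that pruning logic is given below; the simplified vertex type is hypothetical and not the actual \\oursys API.\n\\begin{verbatim}\n#include <cstddef>\n#include <vector>\n\ntypedef int VertexID;\n\nstruct SimpleVertex {\n  VertexID id;\n  std::vector<VertexID> adj;   // neighbor IDs, assumed sorted by ID\n};\n\n// Build a pruned copy of v that keeps only the neighbors larger than v,\n// i.e., Gamma_gt(v); an overloaded respond(v) would return such a copy.\nSimpleVertex prune_for_clique(const SimpleVertex& v) {\n  SimpleVertex copy;\n  copy.id = v.id;\n  for (std::size_t i = 0; i < v.adj.size(); ++i)\n    if (v.adj[i] > v.id) copy.adj.push_back(v.adj[i]);\n  return copy;\n}\n\\end{verbatim}\n\n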
In this case (i.e., when {\\em respond}($v$) returns a newly created copy), \\oursys will respond by sending the new object and then garbage-collect it.\n\nThe {\\em Worker} class also contains formatting UDFs, e.g., for users to define how to parse a line in the input file on HDFS into a vertex object in $T_{local}$, which will be used during graph loading.\n\nTo run a \\oursys program, one may subclass {\\em Worker} with all UDFs properly implemented, and then call {\\em run}({\\em config\\_info}) to start the job, where {\\em config\\_info} contains job configuration parameters such as the HDFS file path of the input graph.\n\n\\vspace{1mm}\n\n\\noindent{\\bf The Aggregator Class.} The {\\em Worker} class optionally admits a second template argument $<${\\em aggT}$>$, which needs to be specified if an aggregator is used to collect some statistics such as the triangle count or the maximum clique size. Each task can aggregate a value to its worker's local aggregator when it finishes. These locally aggregated values can either be globally aggregated at the end when all workers finish computing their tasks (which is the default setting), or be periodically synchronized (e.g., every 10 seconds) to make the globally aggregated value available to all workers (and thus all tasks) in a timely manner for use (e.g., in {\\em compute}(.) to prune the search space). In the latter case, users need to provide a frequency parameter.\n\n\\section{Applications}\\label{sec:app}\nWe consider two categories of applications: (1)~finding dense subgraph structures such as triangles, cliques and quasi-cliques, which is useful in social network analysis and community detection; (2)~graph matching, which is useful in applications such as querying semantic networks and pattern recognition.\n\nFor simplicity, we only consider top-level task decomposition, i.e., we grow each vertex $v_i\\in V$ into {\\em exactly one} decomposed subgraph $G_i$, and every qualified subgraph will be found in {\\em exactly one} decomposed subgraph.\n\n\\vspace{1mm}\n\n\\noindent{\\bf Triangle Counting.} Assume that for any vertex $v$, neighbors in $\\Gamma(v)$ are already sorted in increasing order of vertex ID (e.g., during graph loading). We also denote the largest (i.e., last) vertex in $\\Gamma(v)$ by $\\max_\\Gamma(v)$.\n\nWe want each triangle $\\triangle v_1v_2v_3$ (w.l.o.g., $v_1 < v_2 < v_3$) to be counted exactly once, by the task seeded from its smallest vertex $v_1$. Since that task only ever checks vertices larger than a pulled vertex $v_2$ against $\\Gamma(v_2)$, $v_2$ only needs to respond $\\Gamma_{gt}(v_2)$ to $v_1$.\n\nAccording to the above discussion, among the UDFs of {\\em Worker}, {\\em respond}($v_2$) creates a copy of $v_2$ with adjacency list $\\Gamma_{gt}(v_2)$ for responding; if $|\\Gamma_{gt}(v_1)|\\geq 2$, {\\em seedTask\\_gene}($v_1$) creates a task $t$ for $v_1$ and lets $t$ pull every vertex in $(\\Gamma_{gt}(v_1)-\\{\\max_\\Gamma(v_1)\\})$. The context of $t$ keeps $\\max_\\Gamma(v_1)$ and a triangle counter {\\em count} (initialized to 0).\n\nIn $t$.{\\em compute}({\\em frontier}), {\\em frontier} contains all the pulled vertices (i.e., $v_2$) in increasing order of their IDs as they were requested in {\\em seedTask\\_gene}($v_1$). We check every $v_2\\in$ {\\em frontier} as follows. For each $v_2$, we loop through all vertices $v_3>v_2$ in $\\Gamma_{gt}(v_1)$ ($\\Gamma_{gt}(v_1)$ is obtained by appending $\\max_\\Gamma(v_1)$ in $t$'s context to {\\em frontier}), and increment $t$'s counter if $v_3\\in\\Gamma(v_2)$. Finally, {\\em compute}(.) returns {\\em false} since we have checked all $(v_2, v_3)$ pairs and the task is finished.\n\nWhenever a task $t$ is finished, its counter (in $t$'s context) is added to the locally aggregated value, and when all workers finish computation, these local counts are sent to the master to get the total triangle count.
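\n\nThe per-task logic just described is small enough to be shown as a standalone C++ sketch. The function below counts the triangles whose smallest vertex is a given seed $v_1$, assuming that the $\\Gamma_{gt}$ lists of $v_1$ and of its pulled neighbors are available in a local map; in the actual \\oursys program these lists arrive via {\\em pull} and the loops live inside {\\em compute}({\\em frontier}), so this helper is only an illustration.\n\\begin{verbatim}\n#include <algorithm>\n#include <cstddef>\n#include <unordered_map>\n#include <vector>\n\ntypedef int VertexID;\n// gamma_gt[v] holds Gamma_gt(v): neighbors of v larger than v, sorted by ID\ntypedef std::unordered_map<VertexID, std::vector<VertexID> > GtLists;\n\n// Count triangles v1-v2-v3 with v1 < v2 < v3, i.e., those owned by seed v1.\nlong long count_triangles_of_seed(VertexID v1, const GtLists& gamma_gt) {\n  long long count = 0;\n  const std::vector<VertexID>& nbs = gamma_gt.at(v1);        // Gamma_gt(v1)\n  for (std::size_t i = 0; i < nbs.size(); ++i) {             // v2 in Gamma_gt(v1)\n    const std::vector<VertexID>& adj2 = gamma_gt.at(nbs[i]); // Gamma_gt(v2)\n    for (std::size_t j = i + 1; j < nbs.size(); ++j)         // v3 > v2\n      if (std::binary_search(adj2.begin(), adj2.end(), nbs[j]))\n        ++count;                                             // edge (v2, v3) exists\n  }\n  return count;\n}\n\\end{verbatim}\n\n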
\\vspace{1mm}\n\n\\noindent{\\bf Maximum Clique.} We adapt the serial backtracking algorithm of~\\cite{clique03} to \\oursys. The original algorithm maintains the size of the maximum clique currently found, denoted by $|Q_{max}|$, to prune the search space.\n\nTo allow timely pruning, each worker in our \\oursys program maintains $|Q_{max}|$ and keeps it relatively up to date by periodic aggregator synchronization, so that if a worker discovers a larger clique and updates $|Q_{max}|$, the value can be synchronized to other workers in a timely manner to improve their pruning effectiveness. In {\\em seedTask\\_gene}($v_i$), we create a task $t$ whose graph $g$ contains $v_i$, and we let $t$ pull all vertices in $\\Gamma_{gt}(v_i)$. Then in $t$.{\\em compute}({\\em frontier}), we collect the vertices in {\\em frontier} (i.e., $\\Gamma_{gt}(v_i)$), add them to $g$ while filtering out those adjacency list items that are not in $\\{v_i\\}\\cup\\Gamma_{gt}(v_i)$, to form the decomposed subgraph $G_i$, and then run the algorithm of~\\cite{clique03} on $G_i$.\n\nThis solution can be easily extended to find quasi-cliques, in which every vertex is adjacent to at least a $\\gamma$ ($\\geq0.5$) fraction of the other vertices. In such a quasi-clique, two vertices are at most 2 hops away~\\cite{quasiclique}. The \\oursys algorithm is similar to that for finding the maximum clique, except that (1)~for each local seeding vertex $v_i$, {\\em compute}({\\em frontier}) runs for 2 iterations to pull the vertices (larger than $v_i$) within 2 hops of $v_i$; (2)~{\\em compute}({\\em frontier}) then constructs $G_i$ as the 2-hop ego-network of $v_i$ and runs the quasi-clique algorithm of~\\cite{quasiclique} on $G_i$ to compute the quasi-cliques.\n\n\\vspace{1mm}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.65\\columnwidth]{gmatch}\n \\caption{An Example of Graph Matching}\\label{gmatch}\n \\vspace{-4mm}\n\\end{figure}\n\n\\noindent{\\bf Graph Matching.} Graph matching finds all subgraph instances in a data graph that match the query graph. Consider the problem of finding all occurrences of the query graph pattern given by Figure~\\ref{gmatch}(a) in the data graph shown in Figure~\\ref{gmatch}(b). In this example, each vertex in the query graph (and the data graph) has a unique integer ID and a label. We define $k_1k_2k_3k_4k_5$ as a mapping where the vertex with ID $k_i$ in the data graph is mapped to vertex $\\textcircled{i}$ in the query graph. A mapping is a matching if vertex $k_i$ and vertex $\\textcircled{i}$ have the same label (for every $i$), and for every edge $(\\textcircled{i}, \\textcircled{j})$ in the query graph, the corresponding edge $(k_i, k_j)$ exists in the data graph. For example, 25478 is a matching, while 25178 is not, since the data graph does not have edge $(1, 5)$ that corresponds to $(\\textcircled{3}, \\textcircled{2})$.
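\n\nThe matching condition can be stated as a small standalone C++ check, shown below. The encodings are hypothetical simplifications: the query graph is represented by the required label of each query vertex and by its edge list, and the data graph by a label array and a set of undirected edges.\n\\begin{verbatim}\n#include <cstddef>\n#include <set>\n#include <utility>\n#include <vector>\n\ntypedef int VertexID;\n\n// mapping[i] is the data vertex mapped to query vertex i+1 (k_1 ... k_5).\n// labels[v] is the label of data vertex v; edges holds the undirected data edges.\n// q_labels[i] and q_edges encode the query graph over indices 0..4.\nbool is_matching(const std::vector<VertexID>& mapping,\n                 const std::vector<char>& labels,\n                 const std::set<std::pair<VertexID, VertexID> >& edges,\n                 const std::vector<char>& q_labels,\n                 const std::vector<std::pair<int, int> >& q_edges) {\n  for (std::size_t i = 0; i < mapping.size(); ++i)      // label condition\n    if (labels[mapping[i]] != q_labels[i]) return false;\n  for (std::size_t e = 0; e < q_edges.size(); ++e) {    // edge condition\n    VertexID a = mapping[q_edges[e].first];\n    VertexID b = mapping[q_edges[e].second];\n    if (!edges.count(std::make_pair(a, b)) &&\n        !edges.count(std::make_pair(b, a))) return false;\n  }\n  return true;\n}\n\\end{verbatim}\n\n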
Existing works on distributed graph matching combine vertex-centric graph exploration with subgraph join. Note that when a query graph contains cycles, vertex-centric graph exploration alone is not sufficient. For example, in Figure~\\ref{gmatch}(b), suppose that we perform vertex-centric exploration on the data graph along the query graph path $\\textcircled{3}$-$\\textcircled{1}$-$\\textcircled{2}$: we will explore from Vertex~1 (or~4) to~2 and then to~5 simply according to the neighbors' labels. Then, we need to check all $b$-labeled neighbors of Vertex~5 to find Vertex~1 (or~4), which is essentially an equi-join on the ID of $k_3$ rather than a simple label-based exploration. \\cite{match1} and~\\cite{match2} first decompose a query graph into small acyclic subgraphs called twigs (see Figure~\\ref{gmatch}(c) for an example), then use graph exploration to find the subgraph instances that match those twigs, and finally join the twigs on joint vertices (e.g., $k_2$ and $k_3$ for Figure~\\ref{gmatch}(c)) to obtain the subgraphs that match the query graph.\n\nOur algorithm avoids materializing matched subgraphs and performing distributed subgraph joins as required by existing solutions. Instead, we pull the required vertices to construct each decomposed subgraph $G_i$, and then simply enumerate the matched subgraph instances in the decomposed graph using backtracking, without generating any communication.\n\nWe illustrate how to write a \\oursys program for the query graph of Figure~\\ref{gmatch}(a). Assume that each adjacency list item contains the vertex label\\footnote{If this is not the case, one may use the Pregel algorithm of~\\cite{da_www} for attribute broadcast to preprocess the graph data at linear cost.}. We start the matching from vertex \\textcircled{1} with label ``$a$'', and grow $G_i$ from each vertex $v_i$ in the data graph with label ``$a$''. Note that every matched subgraph instance will be found since it must contain an $a$-labeled vertex $v_i$, and it will only be found in $v_i$'s decomposed subgraph $G_i$.\n\nWe now present our algorithm, which may be safely skipped by readers not interested in the details.\n\n\\vspace{1mm}\n\n\n\\noindent{\\bf The Algorithm:} In {\\em seedTask\\_gene}($v$), we only create a task $t$ for $v$ if $v$'s label is ``$a$'', and $\\Gamma(v)$ contains neighbors with both labels ``$b$'' and ``$c$''. If this is the case, we add vertex $v$ to $g$, and pull all vertices in $\\Gamma(v)$ with labels ``$b$'' and ``$c$''.\n\nThen, in iteration~1 of $t$.{\\em compute}({\\em frontier}), we split {\\em frontier} into two vertex sets: $V_b$ (resp.\\ $V_c$) consists of the vertices with label ``$b$'' (resp.\\ ``$c$''). However, while a vertex in $V_c$ definitely matches vertex \\textcircled{2} in Figure~\\ref{gmatch}(a), a vertex in $V_b$ may match either Vertex~\\textcircled{3} or Vertex~\\textcircled{4}. For each vertex $v_c\\in V_c$, we split all vertices in $\\Gamma(v_c)$ with label ``$b$'' into two sets: $U_1$ consisting of those vertices that are also in $V_b$ (i.e., they can match Vertex~\\textcircled{3} or Vertex~\\textcircled{4}), and $U_2$ consisting of the rest (i.e., they can only match Vertex~\\textcircled{4} since they are not neighbors of $v_c$). We prune $v_c$ (i)~if $U_1=\\emptyset$, since $v_c$ does not have a neighbor matching Vertex~\\textcircled{3}, or (ii)~if $|U_1|=1$ and $U_2=\\emptyset$, since $v_c$ does not have two neighbors with label ``$b$''. 
Otherwise, (iii)~if $|U_1|=1$ and $U_2\\neq\\emptyset$, then the vertex in $U_1$ has to match Vertex~\\textcircled{3}, and the vertex matching Vertex~\\textcircled{4} has to be from $U_2$, and thus we pull all vertices of $U_2$; while (iv)~if $|U_1|>1$, the vertex matching Vertex~\\textcircled{4} can be from either $U_1$ or $U_2$, and thus we pull all vertices from both $U_1$ and $U_2$. Let the only vertex (with label ``$a$'') currently in $g$ be $v_a$; then in both Cases~(iii) and~(iv), we add $v_c$ and edge $(v_a, v_c)$ to $g$, and for each vertex $v_b\\in U_1$ (i.e., $v_b$ can match Vertex~\\textcircled{3}), we add $v_b$ and edge $(v_a, v_b)$ to $g$.\n\nThen in iteration~2 of $t$.{\\em compute}({\\em frontier}), {\\em frontier} contains all pulled vertices with label ``$b$'' that can match Vertex~\\textcircled{4}. Let the set of all vertices with label ``$c$'' in $g$ (i.e., matching Vertex~\\textcircled{2}) be $V_c$. Then, for each vertex $v_b\\in$ {\\em frontier}, we denote the set of all vertices of $\\Gamma(v_b)$ with label ``$d$'' (i.e., matching Vertex~\\textcircled{5}) by $V_d$; if $V_d\\neq\\emptyset$, (1)~we add $v_b$ to $g$, (2)~for every vertex $v_c\\in V_c\\cap\\Gamma(v_b)$, we add edge $(v_c, v_b)$ (i.e., matching $(\\textcircled{2}, \\textcircled{4})$) to $g$, and (3)~for every vertex $v_d\\in V_d$, we add $v_d$ and edge $(v_b, v_d)$ to $g$. Finally, we run a backtracking algorithm on $g$ to enumerate all subgraphs that match the query graph.\n\nLastly, we can let UDF {\\em respond}($v$) return a copy of $v$ by pruning the items in $\\Gamma(v)$ whose labels do not fall into $\\{a, b, c, d\\}$, to save communication. \\qed\n\n\n\n\\vspace{1mm}\n\nWe remark that only top-level subgraphs decomposed by Vertex~\\textcircled{1} in the query graph have been considered. If a resulting decomposed subgraph $G_i$ is still too big, one may continue to decompose $G_i$ by looking at one more vertex in the query graph (given that Vertex~\\textcircled{1} is already matched to $v_i$).\n\n\\section{System Implementation}\\label{sec:system}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{frame}\n \\caption{Computation Framework of a Worker}\\label{frame}\n \\vspace{-4mm}\n\\end{figure}\n\n\\noindent{\\bf Task Queue \\& Task Buffers.} Since tasks contain subgraphs that may overlap with each other, it is impractical to keep all tasks in memory. Thus, each worker of \\oursys maintains a disk-based queue $Q$ to keep those tasks waiting to be processed. Figure~\\ref{frame} shows the procedure of task computation on a worker of \\oursys. Specifically, tasks are fetched from the task queue $Q$ one at a time and added to a task buffer $B^T_{in}$ for batch processing, while the processed tasks (and those newly created by {\\em add\\_task}(.)) are appended to buffer $B^T_{out}$ and then merged into $Q$ in batches.\n\nOne baseline approach to organizing $Q$ is to treat it as a local-disk stream of tasks, which allows tasks to be sequentially read from (and appended to) $Q$. To support two-sided streaming, we organize the tasks in $Q$ into files each containing $C$ tasks, where $C$ is a user-defined parameter (100 by default) that amortizes the random IO cost of reading (and writing) a task file. We call this queue organization {\\bf\\em stream-queue}; it does not consider whether the tasks fetched into $B^T_{in}$ share common vertices to pull. 
To increase the probability that the tasks in $B^T_{in}$ share common vertices to pull, we designed another queue organization, called {\\bf\\em LSH-queue}, based on min-hashing.\n\nSpecifically, assume that a task $t$ has called {\\em compute}(.), and let us denote the set of vertices that $t$ needs to pull from remote machines by $\\mathbf{P(t)}$. Before adding $t$ to $Q$, we append a key $k(t)$ to $t$, which consists of a sequence of $\\ell$ ($=4$ by default) MinHash signatures~\\cite{minhash} of $P(t)$. Due to the locality sensitivity of min-hashing, for two tasks $t_1$ and $t_2$, the more similar $k(t_1)$ and $k(t_2)$ are, the more likely $P(t_1)$ and $P(t_2)$ overlap~\\cite{minhash}. Therefore, we keep the tasks in $Q$ ordered by their keys (in lexicographic order), so that tasks fetched from the head of $Q$ have similar keys (e.g., see $B^Q_{out}$ in Figure~\\ref{frame}) and are likely to share more common vertices to pull. Reducing redundancy by ordering data according to min-hashing keys has been shown to be effective by previous works~\\cite{nscale,eagr}.\n\nTo avoid random disk IOs, we organize $Q$ as depicted in Figure~\\ref{frame}. We maintain an in-memory buffer $B^Q_{in}$ to receive incoming tasks (from $B^T_{out}$), and an in-memory buffer $B^Q_{out}$ to buffer the ordered tasks to be fetched. The waiting tasks on local disk are grouped into files, where each file contains $[C\/2, C]$ tasks ordered by their keys, and $C$ is a user-defined parameter. The ranges of keys in different files are disjoint (except at boundaries where keys may be equal), and all task files are linked in the order of their key ranges by an in-memory doubly-linked list $L^Q$. Each element in $L^Q$ points to a task file and records its key range. Here, $L^Q$ is like the leaf level of a B$^+$-tree, but it is small enough to be memory-resident since only the metadata of the files are kept. When $B^Q_{in}$ overflows, we merge all its tasks into the list of task files efficiently by utilizing $L^Q$, while guaranteeing that each file still contains $[C\/2, C]$ tasks after merging, using a B$^+$-tree style algorithm.\n\nA computing thread fetches tasks one by one from $B^Q_{out}$ for computation, and when $B^Q_{out}$ becomes empty, we load into $B^Q_{out}$ the tasks in the first file of $L^Q$ if $L^Q$ is not empty; otherwise, we fill $B^Q_{out}$ with tasks obtained from the head of $B^Q_{in}$. Since tasks in a file are clustered by their keys, tasks in $B^Q_{out}$ tend to share common vertices to pull.
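\n\nTo make the construction of $k(t)$ concrete, the following standalone C++ sketch computes $\\ell$ MinHash signatures for a set of vertex IDs using simple multiplicative hash functions; the hash family shown here is only illustrative, and the one actually used by \\oursys is an implementation detail that may differ.\n\\begin{verbatim}\n#include <cstddef>\n#include <limits>\n#include <vector>\n\ntypedef int VertexID;\ntypedef unsigned long long u64;\n\n// Compute ell MinHash signatures of P(t): for the i-th hash function h_i,\n// the i-th signature is the minimum of h_i(v) over v in P(t). Tasks with\n// similar pull sets tend to get similar keys, so sorting by key clusters them.\nstd::vector<u64> minhash_key(const std::vector<VertexID>& pulled, int ell = 4) {\n  static const u64 mult[4] = {0x9E3779B97F4A7C15ULL, 0xC2B2AE3D27D4EB4FULL,\n                              0x165667B19E3779F9ULL, 0x27D4EB2F165667C5ULL};\n  std::vector<u64> key(ell, std::numeric_limits<u64>::max());\n  for (std::size_t j = 0; j < pulled.size(); ++j)\n    for (int i = 0; i < ell; ++i) {\n      u64 h = (u64)(pulled[j] + 1) * mult[i % 4];   // illustrative h_i(v)\n      if (h < key[i]) key[i] = h;\n    }\n  return key;   // tasks in Q are kept ordered by this key\n}\n\\end{verbatim}\n\n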
\\vspace{1mm}\n\n\\noindent{\\bf Task Computation.} A worker of \\oursys processes its tasks in rounds, where each round consists of three steps: (1)~\\emph{task fetching}, (2)~\\emph{vertex pulling}, and (3)~\\emph{task computing}. Figure~\\ref{frame} illustrates these three steps.\n\nThe first step fetches tasks from $Q$ into $B^T_{in}$ until either (i)~$B^T_{in}$ becomes full, or (ii)~there is no more room in $T_{cache}$ to accommodate more vertices to pull. The second step then pulls all requested vertices that are not already hit in $T_{cache}$. Note that the pulling frequency is influenced by the capacities of $B^T_{in}$ and $T_{cache}$. Now, for every task $t\\in B^T_{in}$, its requested vertices in {\\em frontier} are either in $T_{local}$ or $T_{cache}$, and thus we start the third step to process the tasks in $B^T_{in}$. We compute each task $t$ iteratively until either $t$ is complete, or there exists a newly-requested vertex that is neither in $T_{local}$ nor in $T_{cache}$. In the latter case, we compute $t$'s key using $P(t)$, and then add $t$ to $B^T_{out}$. If new tasks are created by $t$, they are also added to $B^T_{out}$. Whenever $B^T_{out}$ is full, it merges its tasks into $Q$.\n\nSince a machine runs multiple workers, the independence of their execution allows computation and communication to overlap.\n\n\\vspace{1mm}\n\n\\noindent{\\bf Other Issues.} Real graphs may contain some high-degree vertex $v$, and the task $t$ seeded from $v$ may have $|P(t)|$ larger than the capacity of $T_{cache}$. To allow such a task to proceed, we treat $t$ as a singleton task batch when performing vertex pulling, by temporarily increasing the capacity of $T_{cache}$. After $t$.{\\em compute}(.) returns, we recover the original capacity of $T_{cache}$ by evicting the overflowed vertices, before starting the next round.\n\nA worker initially seeds the tasks from all vertices in $T_{local}$ into $Q$, to maximize the opportunity of finding tasks that share common vertices to pull. The seeded tasks are merge-sorted by their min-hashing keys to efficiently create the file list of $Q$.\n\nThere exists some work that uses heuristics to estimate the computation cost of a task $t$ from its decomposed subgraph~\\cite{clique13}, and an online regressor may also be trained to improve the cost estimation after each task is finished. Since estimating the cost of $t$ from a partially grown subgraph is difficult, we only estimate $t$'s cost when its decomposed subgraph is fully constructed. To allow task prefetching, each worker buffers a small set of tasks with estimated costs, and if all other tasks are exhausted, the worker requests tasks from a coordinating master while continuing to process the buffered tasks. The master collects task summaries from all workers to decide which tasks should be redistributed when some worker requests more tasks. We remark that task stealing strategies are still under development and are thus not reported in this paper.\n\n\\section{Experiments}\\label{sec:results}\nWe evaluate the performance of \\oursys, which is implemented in C++ and communicates with HDFS (Hadoop 2.6.0) using libhdfs. All our experiments were run on a cluster of 15 machines, each with 12 cores (two Intel Xeon E5-2620 CPUs) and 48GB RAM. The connectivity between any pair of nodes in the cluster is 1Gbps. All system and application code is released on \\oursys's website~\\footnote{http:\/\/yanda.cis.uab.edu\/gthinker\/}. We report the end-to-end processing time, from graph loading to when the slowest worker finishes its processing, for all the following experiments.\n\n\\begin{table}[t]\n\\caption{Graph Datasets (M = 1,000,000)}\\label{data}\n\\centering\n\\includegraphics[width=0.96\\columnwidth]{data}\n\\vspace{-4mm}\n\\end{table}\n\nTable~\\ref{data} shows the four real-world graph datasets used in our experiments. We chose these graphs to be undirected since our applications described in Section~\\ref{sec:app} are for undirected graphs, while \\oursys can also handle directed graphs. 
These graphs are also chosen to have different sizes: {\\em Youtube}\\footnote{https:\/\/snap.stanford.edu\/data\/com-Youtube.html}, {\\em Skitter}\\footnote{http:\/\/konect.uni-koblenz.de\/networks\/as-skitter}, {\\em Orkut}\\footnote{http:\/\/konect.uni-koblenz.de\/networks\/orkut-links}, and {\\em Friendster}\\footnote{http:\/\/snap.stanford.edu\/data\/com-Friendster.html} have 2.99 M, 11.1 M, 117 M and 1,806 M undirected edges, respectively.\n\nWe ran the algorithms described in Section~\\ref{sec:app}, and list the triangle count, the maximum clique size (denoted by $|Q_{max}|$), and the number of matched subgraph instances in Table~\\ref{data}. For graph matching, we used the query graph of Figure~\\ref{gmatch}(a) and randomly generated a label for each vertex in the data graph among $\\{a,b,c,d,e,f,g\\}$ (following a uniform distribution). We can see that the job is highly computation-intensive; e.g., the number of matched subgraphs is on the order of $10^{11}$ for {\\em Orkut} and {\\em Friendster}.\n\nRecall from Figure~\\ref{frame} that each worker in \\oursys maintains four task buffers, $B^T_{in}$, $B^T_{out}$, $B^Q_{in}$ and $B^Q_{out}$. We set the capacities of $B^T_{in}$, $B^T_{out}$ and $B^Q_{in}$ to be the same, which is the maximum number of tasks that are processed in each round. We call this capacity the {\\bf\\em buffer capacity}. We also set the capacity of $B^Q_{out}$ to be the {\\bf\\em file capacity} $C$, since it loads a file of tasks from disk each time.\n\nUnless otherwise stated, the default setting is as follows. Each machine runs 8 workers (i.e., 120 workers in total). The buffer capacity is set to 1000 tasks, and the file capacity $C$ is set to 100 tasks. Moreover, we set $T_{cache}$ to accommodate up to 1 M non-local vertices.\n\n\\vspace{1mm}\n\n\\begin{table}[t]\n\\caption{Comparison with Serial Algorithms}\\label{serial}\n\\centering\n\\includegraphics[width=\\columnwidth]{serial}\n\\vspace{-6mm}\n\\caption{System Comparison (Triangle Counting)}\\label{compare}\n\\includegraphics[width=0.86\\columnwidth]{compare}\n\\vspace{-4mm}\n\\end{table}\n\n\\begin{table*}[htbp]\n\\begin{minipage}[t]{0.6\\linewidth}\n\\caption{Scalability}\\label{scale}\n\\centering\n\\includegraphics[width=\\columnwidth]{scale}\n\\vspace{-4mm}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.38\\linewidth}\n\\caption{Effect of System Parameters}\\label{param}\n\\centering\n\\includegraphics[width=\\columnwidth]{param}\n\\vspace{-6mm}\n\\caption{\\# of Random IO by LSH-Queue}\\label{io}\n\\centering\n\\includegraphics[width=\\columnwidth]{io}\n\\vspace{-6mm}\n\\caption{Results with Stream-Queue}\\label{stream_queue}\n\\centering\n\\includegraphics[width=\\columnwidth]{stream_queue}\n\\end{minipage}\n\\vspace{-4mm}\n\\end{table*}\n\n\\noindent{\\bf Comparison with Serial Algorithms.} We first compare the performance of the serial algorithms for subgraph finding with their distributed \\oursys counterparts. Since the serial algorithms need to hold the entire input graph in memory, we ran them on a high-end machine with 1TB DDR3 RAM and 2.2GHz CPUs. Table~\\ref{serial} reports the comparison results for triangle counting (with relatively low computation intensity) and graph matching (which is highly computation-intensive). We see that \\oursys is orders of magnitude faster than the serial algorithm for graph matching, but only several times faster for triangle counting. 
This is because the light computation workload of triangle counting cannot offset the communication cost of vertex-pulling.\n\n\\vspace{1mm}\n\n\\noindent{\\bf Comparison with Other Systems.} We also compare \\oursys with Arabesque~\\cite{arab} and Pregelix (version 0.2.12)~\\cite{pregelix}. Both Arabesque and Pregelix have already implemented triangle counting and maximal clique enumeration, and we used these programs directly for comparison. Unfortunately, neither Arabesque nor Pregelix could successfully finish maximal clique enumeration on even the smallest graph, {\\em Youtube}, in our cluster. Arabesque failed after running for 1.5 hours due to memory overflow, while Pregelix reported a frame size error since the 1-ego network of some vertex could not fit in a frame as required by Pregelix. In contrast, \\oursys found the maximum clique (of size 17) in {\\em Youtube} in just around 20 seconds.\n\nTable~\\ref{compare} reports the results for triangle counting. We can see that \\oursys is orders of magnitude faster than both Arabesque and Pregelix, while memory-based Arabesque is a few times faster than disk-based Pregelix. However, Arabesque failed to process {\\em Orkut} and {\\em Friendster} due to insufficient memory space. Pregelix failed to process {\\em Friendster} because it used up the disk space, probably due to its space-consuming B-tree structure for storing vertex data.\n\nFinally, although NScale~\\cite{nscale} is not public, \\cite{nscale} reported that it takes 1986 seconds to count the triangles of {\\em Orkut} (not including the expensive subgraph construction \\& packing) on a 16-node cluster, while \\oursys takes only 78 seconds on our 15-node cluster.\n\n\\vspace{1mm}\n\n\\noindent{\\bf Scalability.} We tested the vertical scalability of \\oursys by running various applications with all 15 machines, where each machine runs 1, 2, 4 and 8 workers, respectively. We also tested the horizontal scalability of \\oursys by running various applications with 5, 10 and 15 machines, where each machine runs 4 workers. The results are reported in Table~\\ref{scale}, where we can see that \\oursys scales well with the number of workers per machine and with the total number of machines, especially for highly computation-intensive problems like graph matching.\n\n\\vspace{1mm}\n\n\\noindent{\\bf Effect of System Parameters.} We conducted extensive experiments to study the impact of various parameters on system performance, and found that the performance is mainly sensitive to the {\\em buffer capacity} and the capacity of $T_{cache}$; the impact of other parameters such as the file capacity $C$ is minor. Due to space limits, we only report the results for triangle counting over {\\em Friendster} here. Table~\\ref{param}(a) shows the results when we change the buffer capacity while keeping all other parameters at their defaults. We can see that the runtime decreases as the capacity of the task buffers increases, but increasing the capacity beyond $10^4$ does not lead to much improvement. Table~\\ref{param}(b) shows the results when we change the capacity of $T_{cache}$ while keeping all other parameters at their defaults. We can see an obvious reduction in runtime as the capacity of the vertex cache increases. 
Overall, the impact of these system parameters on performance is moderate (e.g., the runtime varies by less than a factor of two), and thus \\oursys is expected to perform well even when the memory space is limited.\n\n\\vspace{1mm}\n\n\\noindent{\\bf Stream-Queue vs.\\ LSH-Queue.} Due to space limits, we only report the results for triangle counting; the results for maximum clique and graph matching are similar. Table~\\ref{io} reports the total number of random disk reads and writes incurred by LSH-queue during the whole period of job execution, for the vertical scalability experiments reported in Table~\\ref{scale}(a). We can see that the number of random IOs is small, which demonstrates that LSH-queue exhibits near-sequential disk IO. We also repeated the experiments using stream-queue instead of LSH-queue, and Table~\\ref{stream_queue} reports the results. Comparing Table~\\ref{scale}(a) with Table~\\ref{stream_queue}, we can see that LSH-queue improves the performance of most jobs, especially those with heavy computation workloads. For some jobs with relatively light workloads, the overhead incurred by LSH-queue (e.g., key computation and random IO) stands out, and stream-queue is more efficient.\n\n\\vspace{-1mm}\n\n\\section{Conclusions and Future Work}\\label{sec:conclude}\n\\vspace{-1mm}\n\nWe presented a new framework called \\oursys for scalable subgraph finding, whose computation-intensive execution engine beats existing data-intensive systems by orders of magnitude, and scales to graphs two orders of magnitude larger given the same hardware resources. Future work on \\oursys includes designing an effective task stealing strategy for load balancing, and acceleration through new hardware (e.g., SSDs and GPUs).\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}