diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhhhk" "b/data_all_eng_slimpj/shuffled/split2/finalzzhhhk" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhhhk" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nModeling 3D human pose and motion is a fundamental problem towards understanding human behaviour, with a wide range of applications in medical prognosis, 3D content production, autonomous driving and human-robot interaction. One of the most studied computer vision tasks is pose estimation either from single images~\\cite{toshev2014deeppose,moreno20173d,pavlakos2017coarse,guo2021pi}, monocular videos~\\cite{pfister2015flowing,pfister2015flowing,pavllo20193d}, or from multi-camera settings~\\cite{amin2013multi,rhodin2018learning,tu2020voxelpose}. This development is certainly due to the availability of large-scale datasets such as Human3.6M~\\cite{h36m_pami}. Beyond extracting 3D human pose information from various types of visual data, a number of works have been proposed around the development of machine learning methods allowing to forecast future motion~\\cite{fragkiadaki2015recurrent,jain2016structural,martinez2017human,liu2019towards,chiu2019action,butepage2017deep,butepage2018anticipating,holden2015learning,hernandez2019human,li2018convolutional,gui2018adversarial,mao2019learning,kipf2016semi,elickovic2018graph,mao2020history,li2021rain,Dang_2021_ICCV,Sofianos_2021_ICCV,adeli2020socially,Adeli_2021_ICCV,guo2021multi} and more recently on generating plausible future sequences of realistic human 3D pose data~\\cite{lin2018human,kundu2019bihmp,barsoum2018hp,mao2021gsps,mao2019learning,walker2017pose,yan2018mt,aliakbarian2020stochastic,yuan2020dlow,aliakbarian2021contextually,petrovich2021action,cai2021unified,guo2020action2motion}. \n\n\\input{fig_teaser}\n\n\nHuman motion generation has several challenges:\ni) diversity: different from deterministic prediction, the generation methods should not just learn average patterns, but need to faithfully reflect the intrinsic intra-class variability. \nii) dynamics: the generation process must inherently model the dynamics of the 3D human pose data, so that they can be transferred to the generated data and avoid collapsing to a stable motion. \niii) smoothness: the generated data should be smooth.\n\n\n\n\nSome of these challenges have been partially addressed in past studies, although up to our knowledge, there is no existing methodology designed to face all three challenges.\nFor instance, MT-VAE~\\cite{yan2018mt} combined the RNN-based motion prediction model with conditional variational autoencoders (VAE), where the difference between the observed and future poses was encoded into the latent variable, which was then concatenated with the RNN's hidden state to account for the dynamics.\nDLow~\\cite{yuan2020dlow} proposed to generate a diverse set of motion sequences, by training fifty different encoders (and a single decoder), and obtaining fifty different instances of the latent variable, and thus fifty different generated motions. \nGSPS~\\cite{mao2021gsps} inherited the diversity loss from DLow, but utilized a more powerful motion prediction framework, based on graph convolutional network (GCN) rather than the RNN. Diversity was obtained by concatenating random noise to the observed sequences. Finally, ACTOR, a Transformer-VAE method, was proposed in~\\cite{petrovich2021action} to perform sequential generation, as opposed to previous methods that were designed for a fixed output length. 
\nACTOR~\\cite{petrovich2021action} learned an action class informed token as input to the encoder and decoder, thus conditioning the generation with manually annotated labels. \nAll the above methods encode the observed human poses into sequence-level embedding, meaning that the entire observed sequence is encoded into a single time-independent embedding.\n\n \n\nThis motivates us to propose HiT-DVAE{} inspired by the recent literature on dynamical variational autoencoders (DVAE)~\\cite{girin2020dynamical}. \nOur method belongs to the very general family of variational autoencoders (VAE) and is therefore a probabilistic method inherently able to generate stochastic, and hence diverse output. \nMore precisely, using DVAE, the sequence of observations is encoded into a sequence of latent variables instead of a single latent variable, thus offering larger representation power to learn and exploit the motion dynamics, see Figure~\\ref{fig:data_space}. \nBesides, we model the generative process with auto-regressive dependencies, meaning that the generation of each frame depends on the previous ones and can be done sequentially. We can then train this model to generate sequences of arbitrary length. \nFinally, we implement these auto-regressive dependencies with a transformer-like (attention-based) encoder-decoder architecture, thus learning to automatically select which are the best frames to inform the generation of the next human 3D pose. \nOverall, HiT-DVAE{} implements auto-regressive probabilistic dependencies with transformer-like attention mechanisms enabling the learning of pose sequence dynamics as well as stochastic motion generation. Hence, the proposed solution deals with all the three challenges mentioned above.\n\n\nConsidering the evaluation of generated data, recent works either evaluate the generations directly on the joint location of poses~\\cite{yuan2020dlow,mao2021gsps}, or using the feature extracted from a pretrained feature extractor~\\cite{petrovich2021action,guo2020action2motion}. Both protocols have clear shortcomings: the former just evaluates the best generated sample and the diversity of all generations, ignoring the performance of the generations except the best one; while the latter depends on the quality of the feature extractor. To thoroughly evaluate the generation quality, we use both evaluation methods, and broaden the first one to take performance stability into consideration.\n\nWe thoroughly evaluate HiT-DVAE{} on HumanEva-I and Human3.6M datasets, using both explicit and implicit metrics to measure the quality of generated data.\nExperimental results show that our method achieves state-of-the-art on most of the metrics for both datasets, proving that the generation of HiT-DVAE{} has high quality (smaller errors, better features, correct action) with better performance stability. \n\n\n \n\n\n\n\n\n\n\n\\section{Related Work}\n\n\n\\subsection{Modeling Future Human motion}\n\\label{sec:realated_work}\nThe forecasting of human motion has been addressed under two paradigms so far: deterministic motion prediction and stochastic human motion generation. 
\nThe former uses deterministic approaches to regress, from the past observation, a single future motion that is as close as possible to the ground truth; the latter focuses on generating multiple plausible futures, so as to model the multi-modal nature of human motion.\\\\\n\n\\subsubsection{Deterministic human motion prediction}\nDue to the inherent sequential structure of human motion, 3D human motion prediction has been mostly addressed with recurrent models~\\cite{fragkiadaki2015recurrent,jain2016structural,martinez2017human}. \nHowever, although RNNs can achieve great success in motion prediction, they represent the entire past motion history with a fixed-size hidden state and tend to converge to a static pose. Some works alleviate this problem by using RNN variants~\\cite{liu2019towards,chiu2019action}, sliding windows~\\cite{butepage2017deep,butepage2018anticipating}, convolutional models~\\cite{holden2015learning,hernandez2019human,li2018convolutional} or adversarial training~\\cite{gui2018adversarial}.\nSince human body pose data are structured, directly encoding the whole body into a compact latent embedding neglects the spatial connectivity of human joints. To this end, recent works tend to leverage feed-forward graph convolutional networks (GCN)~\\cite{kipf2016semi,elickovic2018graph} with a predefined or learnable adjacency matrix~\\cite{mao2019learning,mao2020history,Dang_2021_ICCV,Sofianos_2021_ICCV,li2021rain,adeli2020socially,Adeli_2021_ICCV}. While deterministic methods have achieved promising results on accurate predictions, they exhibit strong limitations when it comes to modeling the diversity of plausible human motion forecasts. Stochastic methods are promising tools to overcome these limitations.\n\n\n\\subsubsection{Stochastic human motion generation} To generate multiple future outcomes given a sequence of past observations, two types of approaches have been studied in the recent past: (i) the enhancement of deterministic methods with stochastic variations, e.g., incorporating noise, and (ii) leveraging conditional variational architectures that learn a probability distribution. In the first category, early works include combining random noise with hidden states either by concatenation~\\cite{lin2018human,kundu2019bihmp} or addition~\\cite{barsoum2018hp}. More recently, Mao~\\textit{et al.}~\\cite{mao2021gsps} further investigated this paradigm with a GCN-based motion prediction model~\\cite{mao2019learning}, and showed promising results with dedicated, carefully designed losses. In the second category, past observations are encoded to learn a posterior latent space; a random variable is then sampled and combined with the observations to predict the future~\\cite{walker2017pose,yan2018mt,aliakbarian2020stochastic,cai2021unified,aliakbarian2021contextually}. Recently, DLow~\\cite{yuan2020dlow} proposed to explicitly generate a large number of samples during training, and to use an energy function to promote diverse generations. ACTOR~\\cite{petrovich2021action} first introduced a Transformer-VAE to capture long-term attention. Rather than encoding the whole observation into a single embedding, Action2Motion~\\cite{guo2020action2motion} and HuMor~\\cite{rempe2021humor} exploit an auto-regressive generative model in which the current generation depends on past predictions. 
However, they do not condition on the entire past sequence, but only on the last frame, which can result in non-smooth motion generation.\n\n\\subsection{Deep generative modeling}\nStochastic human motion generation methods are mostly based on the general paradigm of variational inference and, in particular, on variational autoencoders (VAE). VAEs model the joint distribution of an observation $\\mbf{x}$ and a latent variable $\\mbf{z}$. In stochastic human motion generation, the observation $\\mbf{x}$ often corresponds to a sequence of poses, rather than a single pose. However, up to our knowledge, most of the previous methods use a single latent variable $\\mbf{z}$ to encode the entire observed sequence. Alternatively, one could consider a sequence of latent variables and of observations, and use a VAE to model the relationship between $\\mbf{x}_t$ and $\\mbf{z}_t$ without any time dependencies. However, the dynamics and any temporal relationships cannot be modeled in this case, which is obviously not desirable. Dynamical variational autoencoders (DVAEs)~\\cite{girin2020dynamical} offer the possibility to model data sequences within the general paradigm of variational inference. DVAE is a general class of models, and different instances are obtained when considering various dependencies between the variables, e.g., variational recurrent neural networks~\\cite{chung2015recurrent} or stochastic recurrent neural networks~\\cite{fraccaro2016sequential}. However, current DVAE models have a major limitation: the probabilistic dependencies between variables are always implemented with recurrent neural networks (or variants), thus precluding the possibility of selecting which past frames are used to inform the generation of the current frame. In addition, up to our knowledge, the use of DVAEs for human motion forecasting has not been investigated so far. This motivates us to explore the use of attention mechanisms within the DVAE paradigm with applications to human motion forecasting, as explained in the following.\n\n\n\\section{Method}\n\n \nWe address the problem of 3D human motion generation, which we formalise as follows. \nGiven a sequence of $O$ observed 3D poses of a person $\\mbf{x}_{1:O} = [\\mbf{x}_1, \\ldots, \\mbf{x}_O]$, our aim is to generate a sequence of $G$ 3D poses $\\mbf{x}_{O+1:O+G} = [\\mbf{x}_{O+1}, \\ldots, \\mbf{x}_{O+G}]$ that follows the observations $\\mbf{x}_{1:O}$. Each pose vector $\\mbf{x}_t \\in \\mathbb{R}^{J\\times3}$ encodes the location of the $J$ joints of a person at time $t$ in Cartesian coordinates. Different from deterministic human motion prediction, we intend to generate multiple plausible future motion sequences of arbitrary length. To this end, we propose a new method named Hierarchical Transformer Dynamical Variational AutoEncoder or HiT-DVAE{}. Our method is based on the recently reviewed family of dynamical variational autoencoders~\\cite{girin2020dynamical}, which formulates the generative process of time series from an autoregressive and time-dependent perspective.\nUp to our knowledge, this general methodology has never been combined with attention-based mechanisms. On the one hand, existing variants of DVAEs are always implemented with recurrent networks~\\cite{girin2020dynamical} (or standard variants such as LSTM and GRU). 
On the other hand, even if self-attention has been proven useful when combined with a Conditional VAE~\\cite{petrovich2021action}, the architectures proposed so far encode the entire sequence into a single latent variable $\\mbf{z}$, therefore potentially limiting the representation capabilities of temporal dynamics. We propose HiT-DVAE{} to get the best of both worlds, enabling stochastic motion generation together with dynamic sequence modeling via transformer-like attention mechanisms.\n\n\n\n\\subsection{HiT-DVAE{}}\n\n\n\nThe proposed method is based on the very general DVAE methodology (see~\\cite{girin2020dynamical} for an exhaustive presentation on the topic). The basic principle of DVAEs is that for every observation $\\mbf{x}_t$ there is a corresponding latent variable $\\mbf{z}_t$, as opposed to VAEs which would encode the entire observed sequence $\\mbf{x}_{1:O}$ into a single latent variable $\\mbf{z}$.\nThe sequence of observations and corresponding latent variables will be denoted by $\\mbf{x}_{1:T} = [\\mbf{x}_t]_{t=1}^{T}$ and $\\mbf{z}_{1:T} = [\\mbf{z}_t]_{t=1}^{T}$, respectively. For the time being, we will assume that $T=O+G$, as if all 3D poses were observed even if this is not our setting. We will discuss the impact of having hybrid half-observed half-generated sequences later on.\n\nIn addition to the time-dependent latent variable $\\mbf{z}_{1:T}$, and inspired by~\\cite{petrovich2021action,yingzhen2018disentangled}, we add a time-independent latent variable $\\mbf{w}$. Very differently from~\\cite{petrovich2021action}, $\\mbf{w}$ will be learned in an unsupervised manner within the DVAE methodology, see~\\cite{yingzhen2018disentangled}, thus without requiring action class labels. Formally, the proposed generative model writes:\n\\begin{align}\n p_{\\bs{\\theta}}(\\mbf{x}_{1:T}, \\mbf{z}_{1:T}, \\mbf{w}) &= \\prod_{t=1}^{T} p_{\\bs{\\theta}}(\\mbf{x}_{t}, \\mbf{z}_{t}, \\mbf{w} | \\mbf{x}_{1:t-1},\\mbf{z}_{1:t-1}) \\\\\n &= p_{\\bs{\\theta}_{\\mbf{w}}}(\\mbf{w}) \\prod_{t=1}^{T} p_{\\bs{\\theta}_{\\mbf{x}}}(\\mbf{x}_{t} | \\mbf{x}_{1:t-1},\\mbf{z}_t, \\mbf{w}) p_{\\bs{\\theta}_{\\mbf{z}}}(\\mbf{z}_{t} | \\mbf{x}_{t-1},\\mbf{z}_{1:t-1}, \\mbf{w}),\n\\label{eq:dvae_generation}\n\\end{align}\nmeaning that the generative processes of both the observed and latent variables are auto-regressive, with cross-dependencies. We set $\\bs{\\theta} = \\bs{\\theta}_\\mbf{w} \\cup \\bs{\\theta}_\\mbf{z} \\cup \\bs{\\theta}_\\mbf{x}$. In order to learn this generative model, we introduce an inference model ($\\bs{\\phi}=\\bs{\\phi}_\\mbf{w}\\cup\\bs{\\phi}_\\mbf{z}$):\n\\begin{equation}\n q_{\\bs{\\phi}}(\\mbf{z}_{1:T}, \\mbf{w} | \\mbf{x}_{1:T}) = q_{\\bs{\\phi}_{\\mbf{w}}}(\\mbf{w} | \\mbf{x}_{1:T}) \\prod_{t=1}^T q_{\\bs{\\phi}_{\\mbf{z}}}(\\mbf{z}_t | \\mbf{x}_{1:T}, \\mbf{w}).\n\\label{eq:dvae_inference}\n\\end{equation}\nThe training objective is to maximize the evidence lower bound (ELBO):\n\\begin{equation}\n \\mcal{L}(\\bs{\\theta},\\bs{\\phi}; \\mbf{x}_{1:T}) = \\mbb{E}_{q_{\\bs{\\phi}}(\\mbf{z}_{1:T},\\mbf{w} | \\mbf{x}_{1:T})} \\left[ \\ln p_{\\bs{\\theta}}(\\mbf{x}_{1:T},\\mbf{z}_{1:T},\\mbf{w} ) - \\ln q_{\\bs{\\phi}}(\\mbf{z}_{1:T},\\mbf{w} | \\mbf{x}_{1:T}) \\right].\n\\label{eq:dvae_elbo}\n\\end{equation}\n\nAlthough the above equations define the probabilistic dependencies between the different random variables, there are plenty of ways of implementing these dependencies. In this paper, we propose to use a hierarchical transformer-based architecture. 
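\n\nFor concreteness, the following is a minimal, schematic sketch (in PyTorch-like Python, not our actual implementation) of how the ELBO in~(\ref{eq:dvae_elbo}) decomposes under the factorizations~(\ref{eq:dvae_generation}) and~(\ref{eq:dvae_inference}) when all distributions are diagonal Gaussians; the \texttt{enc} and \texttt{dec} objects stand for the inference and generative networks described below and are placeholders:\n\\begin{verbatim}\nimport torch\n\ndef gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):\n    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians\n    return 0.5 * (logvar_p - logvar_q\n                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()\n                  - 1.0).sum(-1)\n\ndef neg_elbo(x, enc, dec):\n    # x: observed pose sequence, shape (T, J*3)\n    # Inference: q(w | x_{1:T}) and q(z_t | x_{1:T}, w)\n    mu_w, logvar_w = enc.infer_w(x)\n    w = mu_w + torch.randn_like(mu_w) * (0.5 * logvar_w).exp()\n    mu_z, logvar_z = enc.infer_z(x, w)              # shapes (T, dim_z)\n    z = mu_z + torch.randn_like(mu_z) * (0.5 * logvar_z).exp()\n    # Generation: p(x_t | x_{1:t-1}, z_t, w) and prior p(z_t | x_{t-1}, z_{1:t-1}, w)\n    x_hat = dec.generate_x(x, z, w)                 # causal masking inside\n    mu_zp, logvar_zp = dec.prior_z(x, z, w)\n    rec  = ((x_hat - x) ** 2).sum()                 # identity covariance on x\n    kl_z = gaussian_kl(mu_z, logvar_z, mu_zp, logvar_zp).sum()\n    kl_w = gaussian_kl(mu_w, logvar_w,\n                       torch.zeros_like(mu_w), torch.zeros_like(logvar_w)).sum()\n    return rec + kl_z + kl_w                        # minimising this maximises the ELBO\n\\end{verbatim}\nIn practice the individual terms are weighted and complemented with the additional losses described below.\n\n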
In our ablation study, we discuss other --perhaps more conventional-- ways of implementing such dependencies, that exhibit lower performance and demonstrate the interest of having both attention and a hierarchical structure, and thus justify the proposed HiT-DVAE{}. Both the encoder and the decoder of the proposed method exploit a spatial graph convolutional network (SGCN) to extract pose features from the raw poses $\\mbf{x}_t$. We will denote this pose feature extraction operation as $f$, and we will let the encoder and decoder fine-tune their pose extractor leading to $f_\\textsc{E}$ and $f_\\textsc{d}$, see below for more details. Figure~\\ref{fig:HIT-DVAE} shows an overview of our proposed model. Specifically, we employ the transformer architecture~\\cite{vaswani2017attention} jointly with GCN-based feature extractors to formulate the inference and generation on the sequential human motion data. \n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.99\\linewidth]{pipeline_v2.4.png}\n \\caption{Overview of HiT-DVAE{}. The Encoder (left) inputs the observed sequence $\\mbf{x}_{1:T}$ to estimate the posterior distribution of the time-dependent $\\mbf{z}_{1:T}$ and time-independent $\\mbf{w}$ latent variables. Then the Decoder (right) reconstructs the data and the prior of $\\mbf{z}$.\n \\label{fig:HIT-DVAE}\n\\end{figure}\n\n\\subsubsection{Generative Model (HiT-DVAE{} Decoder)} The generation of both $\\mbf{x}_{1:T}$ and $\\mbf{z}_{1:T}$ is performed via the attention mechanisms proposed in the original transformer architecture~\\cite{vaswani2017attention}. Specifically, the generative model will use multi-head cross-attention, after the GCN-based feature extractor $f_\\textsc{d}$. The generative processes of $\\mbf{x}$ and $\\mbf{z}$ differ on what variables are used as queries, keys and values in the attention mechanism. The output of the two decoders will be the parameters of the respective probability distributions, defined in~(\\ref{eq:dvae_generation}). Both distributions are considered to be Gaussian. While we learn both the mean and covariance matrix of the generative distribution of the latent variable $\\mbf{z}_t$, the covariance matrix of the observations $\\mbf{x}_t$ is considered to be the identity as in previous works~\\cite{aliakbarian2021contextually,rempe2021humor,guo2020action2motion,petrovich2021action,yuan2020dlow}. 
In particular we have $p_{\\bs{\\theta}_{\\mbf{x}}}(\\mbf{x}_{t} | \\mbf{x}_{1:t-1},\\mbf{z}_t, \\mbf{w}) = \\mathcal{N} (\\mbf{x}_t;\\; \\bs{\\mu}_{\\bs{\\bs{\\theta}}_{\\mbf{x}}, t}, \\mathbf{I})$ and $p_{\\bs{\\theta}_{\\mbf{z}}}(\\mbf{z}_{t} | \\mbf{x}_{t-1},\\mbf{z}_{1:t-1}, \\mbf{w}) = \\mathcal{N} (\\mbf{z}_t;\\; \\bs{\\mu}_{\\bs{\\bs{\\theta}}_{\\mbf{z}}, t}, \\bs{\\Sigma}_{\\bs{\\theta}_{\\mbf{z}}, t})$ with \n\\begin{align}\n\\bs{\\mu}_{\\bs{\\bs{\\theta}}_{\\mbf{x}}, t} &= \\textrm{MaskedMultiHead} \\left(Q_{\\bs{\\bs{\\theta}_{\\mbf{x}}},t}, K_{\\bs{\\bs{\\theta}_{\\mbf{x}}}}, V_{\\bs{\\bs{\\theta}_{\\mbf{x}}}}\\right), \\\\\nQ_{\\bs{\\bs{\\theta}_{\\mbf{x}}},t} &= \\left[ \\begin{tabular}{c} $\\mbf{z}_t$ \\\\ $\\mbf{w}$ \\end{tabular} \\right], K_{\\bs{\\bs{\\theta}_{\\mbf{x}}}} = V_{\\bs{\\bs{\\theta}_{\\mbf{x}}}} = [f_\\textsc{d}(\\mbf{x}_1), \\ldots, f_\\textsc{d}(\\mbf{x}_{T})],\\\\\n\\left[ \\begin{tabular}{c} \n$\\bs{\\mu}_{\\bs{\\theta}_{\\mbf{z}}, t}$ \\\\\n$\\bs{\\Sigma}_{\\bs{\\theta}_{\\mbf{z}}, t}$ \\end{tabular} \\right] &= \\textrm{MaskedMultiHead} \\left(Q_{\\bs{\\theta}_{\\mbf{z}}, t}, K_{\\bs{\\bs{\\theta}_{\\mbf{z}}}}, V_{\\bs{\\theta}_{\\mbf{z}}}\\right), \\\\\nQ_{\\bs{\\theta}_{\\mbf{z}}, t} &= \\left[ \\begin{tabular}{c} $f_\\textsc{d}(\\mbf{x}_{t-1})$ \\\\ $\\mbf{w}$ \\end{tabular} \\right], K_{\\bs{\\bs{\\theta}_{\\mbf{z}}}} = V_{\\bs{\\theta}_{\\mbf{z}}} = [\\mbf{z}_1, \\ldots, \\mbf{z}_{T}],\n\\end{align}\nwhere a mask is used to prevent $\\mbf{z}_{t}$ and $\\mbf{x}_{t}$ from being generated from future latent and observed variables \\cite{vaswani2017attention}. Finally, we have $p_{\\bs{\\theta}_{\\mbf{w}}}(\\mbf{w}) = \\mathcal{N}(\\mbf{w};\\mathbf{0},\\mathbf{I})$, where $\\mathbf{0}$ and $\\mathbf{I}$ are the zero vector and the identity matrix of appropriate dimensions. \n\n\n\n\\subsubsection{Inference Model (HiT-DVAE{} Encoder)} The inference of the latent variables $\\mbf{w}$ and $\\mbf{z}_{1:T}$ from $\\mbf{x}_{1:T}$ is performed via a temporal GCN and the multi-head self-attention mechanism of the transformer encoder (TE). The extracted pose features are fed into the temporal GCN with $T$ nodes, where each node indicates a time frame, and then into a fully connected (FC) layer to output the mean and variance of $\\mbf{w}$, namely $\\bs{\\mu}_{\\bs{\\bs{\\phi}_{\\mbf{w}}}}$ and $\\bs{\\Sigma}_{\\bs{\\bs{\\phi}_{\\mbf{w}}}}$. 
Samples are drawn from the corresponding posterior $q_{\\bs{\\phi}_{\\mbf{w}}}(\\mbf{w} | \\mbf{x}_{1:T}) = \\mathcal{N}(\\mbf{w};\\; \\bs{\\mu}_{\\bs{\\bs{\\phi}_{\\mbf{w}}}}, \\bs{\\Sigma}_{\\bs{\\bs{\\phi}_{\\mbf{w}}}})$, concatenated to all pose features and then fed into the transformer encoder such that $q_{\\bs{\\phi}_\\mbf{z}}(\\mbf{z}_t | \\mbf{x}_{1:T}, \\mbf{w}) = \\mathcal{N} (\\mbf{z}_t;\\; \\bs{\\mu}_{\\bs{\\phi}_{\\mbf{z}}, t}, \\bs{\\Sigma}_{\\bs{\\phi}_{\\mbf{z}}, t})$ with\n\\begin{align}\n\\left[ \\begin{tabular}{c} \n$\\bs{\\mu}_{\\bs{\\phi}_{\\mbf{z}}, t}$ \\\\\n$\\bs{\\Sigma}_{\\bs{\\phi}_{\\mbf{z}}, t}$ \\end{tabular} \\right] &= \\textrm{MultiHead}\\left(Q_{\\bs{\\phi}_{\\mbf{z}}, t}, K_{\\bs{\\bs{\\phi}_{\\mbf{z}}}}, V_{\\bs{\\phi}_{\\mbf{z}}}\\right), \\\\\nQ_{\\bs{\\phi}_{\\mbf{z}}, t} &= \\left[ \\begin{tabular}{ccc} \n $f_\\textsc{E}(\\mbf{x}_t)$ \\\\\n $\\mbf{w},$ \\end{tabular} \\right] , K_{\\bs{\\bs{\\phi}_{\\mbf{z}}}} = V_{\\bs{\\phi}_{\\mbf{z}}} = \\left[ \\begin{tabular}{ccc} \n $f_\\textsc{E}(\\mbf{x}_1),$ & $\\ldots,$ & $f_\\textsc{E}(\\mbf{x}_T)$ \\\\\n $\\mbf{w}$ & $\\ldots,$ & $\\mbf{w}$ \\end{tabular} \\right].\n\\end{align}\n\n\n\n\n\\subsubsection{Feature Extractor on Human Poses} As mentioned above, we extract pose features via a spatial GCN $f$. This strategy was suggested in~\\cite{chung2015recurrent}. In HiT-DVAE{}, the spatial GCN is composed of $J$ nodes in the graph, each of them representing a joint in the pose skeleton. While the architecture of the spatial GCN is the same as that of the generative and inference models, these two spatial GCNs are trained separately (i.e., the weights are not shared).\n\n\n\n\n\n\n\n\n\\subsection{Training losses}\n\n\\subsubsection{ELBO.} Optimising the evidence lower bound~(\\ref{eq:dvae_elbo}) in the case of the proposed HiT-DVAE{}, boils down to (i) minimising the $L_2$ loss on the reconstructed poses while (ii) minimising the KL divergence between the posterior and prior distributions over the latent variables.\nBecause directly optimising the ELBO would not encourage diversity in our generative model, we inspire from~\\cite{mao2021gsps}, and we explicitly generate $K$ motion sequences $\\{\\hat{\\mbf{x}}_{1:T}^k\\}_{k=1}^K$ and compute the ground-truth reconstruction loss and multi-modal reconstruction loss:\n\\begin{equation}\n \\mathcal{L}_{\\textsc{r}} = \\min_{k} || \\hat{\\mbf{x}}_{1:T}^k - \\mbf{x}_{1:T}||^2, \\qquad \\mathcal{L}_{\\textsc{mm}} = \\frac{1}{M}\\sum_{m=1}^M \\min_{k} || \\hat{\\mbf{x}}_{1:T}^k - \\mbf{x}_{1:T}^m||^2,\n \\label{eq:recon}\n\\end{equation}\nwhere $\\mbf{x}_{1:T}$ is the ground-truth, and $\\mbf{x}_{1:T}^m$ are the pseudo-ground truth sequences, which are selected from the training set following the same protocol as in~\\cite{mao2021gsps}. 
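\n\nAs an illustration, a possible implementation of these best-of-$K$ losses (a sketch with placeholder tensor shapes, not our actual training code) is:\n\\begin{verbatim}\nimport torch\n\ndef recon_losses(x_hat, x_gt, x_mm):\n    # x_hat: K generated sequences,  shape (K, T, J*3)\n    # x_gt : ground-truth sequence,  shape (T, J*3)\n    # x_mm : M pseudo-ground truths, shape (M, T, J*3)\n    err_gt = ((x_hat - x_gt.unsqueeze(0)) ** 2).sum(dim=(1, 2))     # (K,)\n    loss_r = err_gt.min()              # ground-truth reconstruction loss\n    err_mm = ((x_hat.unsqueeze(0) - x_mm.unsqueeze(1)) ** 2).sum(dim=(2, 3))\n    loss_mm = err_mm.min(dim=1).values.mean()   # multi-modal reconstruction loss\n    return loss_r, loss_mm\n\\end{verbatim}\n\n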
In addition to the reconstruction losses, we need to minimise the KL divergence:\n\\begin{align}\n    \\mathcal{L}_{\\textsc{kl-z}} &= \\frac{1}{T}\\sum_{t=1}^T D_{KL} ( q_{\\bs{\\phi}_{\\mbf{z}}}(\\mbf{z}_t | \\mbf{x}_{1:T}, \\mbf{w}) || p_{\\bs{\\theta}_{\\mbf{z}}}(\\mbf{z}_{t} | \\mbf{x}_{t-1}, \\mbf{z}_{1:t-1}, \\mbf{w}))\\\\\n    \\mathcal{L}_{\\textsc{kl-w}} &= D_{KL} ( q_{\\bs{\\phi}_{\\mbf{w}}}(\\mbf{w} | \\mbf{x}_{1:T}) || p_{\\bs{\\theta}_{\\mbf{w}}}(\\mbf{w})).\n\\label{eq:loss_kl}\n\\end{align}\nThe final evidence lower bound (ELBO) writes:\n\\begin{equation}\n    \\mathcal{L}_{\\textsc{elbo}} = \\lambda_{\\textsc{r}}\\mathcal{L}_{\\textsc{r}} + \\lambda_{\\textsc{mm}}\\mathcal{L}_{\\textsc{mm}} + \\lambda_{\\textsc{kl-z}}\\mathcal{L}_{\\textsc{kl-z}} + \\lambda_{\\textsc{kl-w}}\\mathcal{L}_{\\textsc{kl-w}}.\n\\end{equation}\n\n\n\n\\subsubsection{Diversity loss.} As suggested by~\\cite{yuan2020dlow,mao2021gsps}, we add diversity-promoting losses on the upper and lower body:\n\\begin{equation}\n    \\mathcal{L}_{\\textsc{div}} = \\sum_{p\\in\\{l,u\\}}\\lambda_{\\textsc{div-}p}\\frac{2}{K(K-1)} \\sum_{k=1}^K \\sum_{k'=k+1}^K \\exp\\left(-\\frac{|| \\hat{\\mbf{x}}_{1:T}^{k,p} - \\hat{\\mbf{x}}_{1:T}^{k',p} ||_1}{\\alpha_p}\\right),\n\\label{eq:loss_div}\n\\end{equation}\nwhere $l$ ($u$) indicates the lower (upper) body part and $\\alpha_p$ is a normalizing factor.\n\n\\subsubsection{Realistic pose loss.} Following~\\cite{mao2021gsps}, we employ three extra losses to penalize unrealistic poses: $\\mathcal{L}_{\\textsc{l}}$ for shifting limb lengths, $\\mathcal{L}_{\\textsc{a}}$ for aberrant joint angles and $\\mathcal{L}_{\\textsc{nf}}$ for negative prior pose probability from a pre-trained pose prior model based on normalizing flows~\\cite{rezende2015variational,dinh2016density}. \nAltogether, our final training loss writes:\n\\begin{equation}\n    \\mathcal{L} = \\mathcal{L}_{\\textsc{elbo}} + \\mathcal{L}_{\\textsc{div}} + \\lambda_{\\textsc{l}}\\mathcal{L}_{\\textsc{l}} + \\lambda_{\\textsc{a}}\\mathcal{L}_{\\textsc{a}} + \\lambda_{\\textsc{nf}}\\mathcal{L}_{\\textsc{nf}}.\n\\label{eq:loss_tot}\n\\end{equation}\n\n\\subsection{HiT-DVAE{} for diverse human motion generation}\nThe losses above allow us to train the proposed HiT-DVAE{} to reconstruct full sequences. In practice, as stated in the problem formulation, HiT-DVAE{} must input $O$ observed frames $\\mbf{x}_{1:O}$ and generate the following $G$ frames $\\mbf{x}_{O+1:O+G}$. However, if we use the model trained with ground-truth input over the entire sequence $\\mbf{x}_{1:T}$ ($T=O+G$) we encounter severe difficulties: after $O$ frames the input distribution changes from the ground truth to the generated data, and the generation fails. One alternative could be to train with ground-truth input up to frame $O$ (i.e., $\\mbf{x}_{1:O}$) and then to complete the sequence with generated data $\\hat{\\mbf{x}}_{O+1:O+G}$. Unfortunately, at the beginning of the training the generated data is pattern-less, and the training diverges. In order to overcome this issue, we use scheduled sampling~\\cite{bengio2015scheduled}: we start training only with ground-truth data, and we progressively add more generated frames (chosen randomly), with a proportion starting at $0\\%$ and going up to $100\\%$.\n\nOnce our model is trained, we can use it to generate various future motion sequences of arbitrary length. Given $O$ observations in our setting, we obtain the posterior of $\\mbf{z}_{1:O}$ and $\\mbf{w}$ from the inference model. 
Then, we can generate the next $G$ frames $\\hat{\\mbf{x}}_{O+1:O+G}$ by recursively applying the generative functions $p_{\\bs{\\theta}_{\\mbf{x}}}(\\mbf{x}_{t} | \\mbf{x}_{1:t-1},\\mbf{z}_t, \\mbf{w})$ and $p_{\\bs{\\theta}_{\\mbf{z}}}(\\mbf{z}_{t} | \\mbf{x}_{t-1},\\mbf{z}_{1:t-1}, \\mbf{w})$. The diversity comes simply from the different samples of $\\mbf{z}_{O+1:O+G}$ and $\\mbf{w}$.\n\n\n\n\n\n\\section{Experiments}\n\n\n\\subsection{Evaluation Protocols}\n\\label{sec:eval_metric}\n\\input{fig_vis}\n\\subsubsection{Datasets}\nFollowing~\\cite{mao2021gsps,yuan2020dlow}, we train and evaluate our method on the Human3.6M~\\cite{ionescu2013human3} and HumanEva-I~\\cite{sigal2006humaneva} datasets: \\textbf{Human3.6M} is the most commonly used dataset for motion tasks. It contains 7 actors (S1,5,6,7,8,9,11) performing 15 annotated actions recorded at 50 Hz. Each human pose is represented by a 32-joint skeleton; following~\\cite{mao2021gsps}, we use 17 of these joints for training and for all evaluations, and we use S1,5,6,7,8 as training set and the other two subjects as test set.\n\\textbf{HumanEva} contains 5 actions (Box, Gesture, Jog, ThrowCatch, Walking) performed by 3 actors, recorded at 60 Hz. Each pose is represented by 15 joints. \nFor both datasets, we remove the global translation and set the root joint to zero.\n\n\\input{tab_gen_eva}\n\n\\subsubsection{Explicit evaluation metrics}\nFollowing \\cite{mao2021gsps,yuan2020dlow}, we evaluate the error and diversity of our results with the following metrics, computed directly on the joint locations of the poses:\ni) Average Pairwise Distance (APD): average $L_2$ distance over all pairs of generated sequences: $\\frac{1}{K(K-1)}\\sum_{i=1}^K \\sum_{j\\neq i}^K \\|\\hat{\\mathbf{x}}^i_{O+1:O+G} - \\hat{\\mathbf{x}}^j_{O+1:O+G}\\|_2$. APD measures the capacity of the model to generate diverse samples without considering their quality.\nii) Average Displacement Error (ADE): $L_2$ distance between the ground truth and the 'best' generated sample, averaged over all frames of the sequence: $\\frac{1}{G}\\min_k \\|\\hat{\\mathbf{x}}^k_{O+1:O+G} - \\mathbf{x}_{O+1:O+G}\\|$. Here 'best' means the sample that is closest to the ground truth. ADE evaluates the upper bound of the generation quality of the model among all the generated results.\niii) Final Displacement Error (FDE): similar to ADE, FDE evaluates the distance between the ground truth and the best sample, but only on the final frame instead of the whole sequence: $\\min_k \\|\\hat{\\mathbf{x}}^k_{O+G} - \\mathbf{x}_{O+G}\\|$. \niv) Multi-Modal ADE (MMADE) and Multi-Modal FDE (MMFDE): multi-modal versions of ADE and FDE, in which the generated samples are compared with the pseudo-ground-truth futures rather than with the single ground truth. \n\nThese MPJPE-based metrics are widely used for evaluating the quality of generated motion, as they account for the diversity of the generated data and for the accuracy of the best generated sample. However, their shortcomings are also clear. On the one hand, at generation time we do not always have the ground truth needed to decide which sample is the best one; on the other hand, as described in Sec~\\ref{sec:realated_work}, producing a single best sample is what deterministic motion prediction aims at, whereas stochastic methods should generate many good samples. For example, a batch of generated motions in which only one sample is accurate while all the others are implausible will yield a large APD and small errors, which looks perfect numerically but is certainly not the kind of generation we want. 
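\n\nFor reference, the explicit metrics above can be computed directly from the generated joint locations; the following is a schematic NumPy sketch (ours, not the official benchmark code) of how these sample-based metrics can be computed:\n\\begin{verbatim}\nimport numpy as np\n\ndef explicit_metrics(pred, gt):\n    # pred: K generated futures, shape (K, G, J*3); gt: ground truth, (G, J*3)\n    K = pred.shape[0]\n    # APD: average pairwise distance between the K generated sequences\n    diffs = (pred[:, None] - pred[None, :]).reshape(K, K, -1)\n    apd = np.linalg.norm(diffs, axis=-1).sum() / (K * (K - 1))\n    # Displacement errors of each sample with respect to the ground truth\n    ade_all = np.linalg.norm(pred - gt[None], axis=-1).mean(axis=1)   # (K,)\n    fde_all = np.linalg.norm(pred[:, -1] - gt[-1], axis=-1)           # (K,)\n    # 'best' sample (standard protocol) and 'medium' sample (median error)\n    ade_b, fde_b = ade_all.min(), fde_all.min()\n    ade_m, fde_m = np.sort(ade_all)[K // 2], np.sort(fde_all)[K // 2]\n    return apd, ade_b, fde_b, ade_m, fde_m\n\\end{verbatim}\nMMADE and MMFDE are obtained in the same way, replacing the single ground truth by the pseudo-ground-truth futures.\n\n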
Considering only the above metrics is therefore neither comprehensive nor appropriate.\n\nThus, we adopt two solutions: 1) instead of evaluating ii)-iv) only on the best generated sample, we also evaluate these criteria on the 'medium' sample, i.e., the one whose distance to the ground truth is the median among all the generated samples; 2) besides this explicit measurement based on poses, we also consider implicit measurements based on a pre-trained action classifier, as described below.\n\n\\subsubsection{Implicit evaluation metrics} \nFollowing \\cite{petrovich2021action,guo2020action2motion}, we use a GRU-based action classifier pre-trained on real data to evaluate the quality of generated data by: \\\\\ni) computing the Recognition Accuracy (Acc) of the classifier on generated data, to evaluate whether the generated data can be recognized as the correct action class;\\\\\nii) extracting features from the generated data and the real data with the action classifier, and computing the Fr\\'echet Inception Distance (FID) between these two feature distributions, to evaluate the overall quality of the generated data.\nFor each of the two datasets, we train a classifier on its training split.\n\n\n\\subsection{Implementation details} \nWe set the dimension of $\\mbf{z}_t$ to 16 and that of $\\mbf{w}$ to 32, and employ the same GCN architecture described in~\\cite{mao2019learning}. We use 1 GCN block with a hidden size of 8 for the spatial GCN and 4 GCN blocks with a hidden size of 64 for the temporal GCN. For the Transformer encoder and the decoder generating $\\mbf{z}_t$, we set the input feature dimension to 64, use 4 attention heads, and follow them with an FC layer of dimension 256, whereas for the Transformer decoder generating $\\mbf{x}_t$ we set those parameters to 256, 4 and 1024, respectively. \n\nWe generate $K=50$ samples for each observation. We train the model for 500 epochs with 1000 training samples per epoch, using the Adam optimizer with a learning rate of 0.001, and a batch size of 64 for HumanEva and 32 for H3.6M. We apply linear KL annealing~\\cite{sonderby2016ladder} for the first 20 epochs to warm up the latent space, and then take 80 epochs to increase the probability of scheduled sampling from 0 to 1. For HumanEva, we train with a sequence length of 75, where the inference of $\\mbf{w}$ only takes 15 frames with a random start point. The weights of the different loss terms $(\\lambda_r, \\lambda_{mm}, \\lambda_{d, l}, \\lambda_{d, u}, \\lambda_{l}, \\lambda_{a}, \\lambda_{nf}, \\lambda_z , \\lambda_w)$ and the normalizing factors $(\\alpha_l, \\alpha_u)$ are set to (10, 5, 0.1, 0.2, 100, 1, 0.001, 0.5, 0.1) and (15, 50). For H3.6M, we train with a sequence length of 125, where $\\mbf{w}$ is inferred from 25 frames. The weights of the different loss terms and the normalizing factors are set to (20, 10, 0.1, 0.2, 100, 1, 0.01, 0.5, 0.1) and (100, 300), respectively.\n\n\n\\input{tab_gen_h36m}\n\\subsection{Quantitative results}\nWe evaluate the generated motions on the HumanEva-I and Human3.6M datasets with the explicit and implicit metrics described in Sec~\\ref{sec:eval_metric}, and find that our method outperforms state-of-the-art methods on most of the evaluation metrics.\n\n\\subsubsection{HumanEva-I}\nAs shown in Tab~\\ref{table:tab_res_eva}, our method achieves results comparable with the state of the art on the explicit evaluation of diversity (APD) and on the errors of the 'best' sample (ADE$_b$, FDE$_b$, MMADE$_b$, MMFDE$_b$). 
As discussed in Sec~\\ref{sec:eval_metric}, considering only these errors and the diversity is not reliable, because the errors only evaluate the best sample among all generations, and APD only evaluates diversity without considering the generation quality. We should also note that a larger APD is not always better: although we want the generated data to be diverse, an excessively large diversity indicates that some of the generated samples may fail completely, so the quality of the generation is not guaranteed.\nTherefore, in order to comprehensively measure the performance of the generated data, we also report the errors of the 'medium' samples, the action-recognition accuracy (Acc) and the feature-based FID scores, both for our method and for other state-of-the-art methods with released code~\\cite{mao2021gsps,yuan2020dlow}.\n\nWe find that our method achieves significantly better results than other state-of-the-art methods on the implicit metrics Acc and FID, which means that our generated data better matches the real feature distribution and is more often recognized as the correct action. Our method is also clearly better on the errors of the medium sample (ADE$_m$, FDE$_m$, MMADE$_m$, MMFDE$_m$), which indicates a higher stability of our overall generation quality.\\\\\n\n\\subsubsection{Human3.6M}\nSimilar conclusions can be drawn for the Human3.6M dataset, as shown in Tab~\\ref{table:tab_res_h36m}. \nWhen training the action classifier for the Human3.6M dataset, \ninstead of training on all 15 action labels, we group the 15 actions into 5 groups. This is because some actions in the Human3.6M dataset differ very little from each other, which makes them unsuitable for training the action classifier: for example, 'eating' and 'smoking' can hardly be distinguished from the skeleton of the person alone. With this grouping, the average classification accuracy of our classifier on real data increases from 48.1\\% to 85.5\\%. Note that even with the 15-action classifier and its low accuracy on real data, our method still achieves a higher Acc and a lower FID than other state-of-the-art methods. We report the 5-group classifier here because we believe that a better classifier is more reliable for computing the accuracy and for extracting the features used in the FID. More details about the action classifier can be found in the supplementary material.\n\n\n\n\n\\input{tab_abla}\n\\subsection{Ablation study}\nTab~\\ref{table:tab_abla} shows ablation studies of our method with different architecture designs. We bold the best results and underline the second best ones. We find that, without scheduled sampling, our method tends to generate more diverse results, but of worse quality on both the explicit and the implicit metrics. \nLooking at the results on Human3.6M, the use of the attention mechanism brings higher generation quality on the explicit measurements. The global time-independent variable $\\mbf{w}$ brings more diversity on both datasets (see the results w\/o $\\mbf{w}$). When we consider a vanilla DVAE model (w\/o Att. \\& $\\mbf{w}$), without the hierarchical transformer (HIT) architecture, it is very likely to collapse to a static state in the sequential latent space, which leads to moderate generation quality and much worse diversity. 
The final setting of HiT-DVAE{} performs well on almost all the metrics and strikes a balance between the different evaluations.\n\n\n\n\n\\subsection{Qualitative results}\nTo qualitatively evaluate our generated results, we visualize several generated samples of our method in Fig~\\ref{fig:vis} and compare them with other state-of-the-art methods. We can see that the other methods either generate very similar samples across all generations, or produce some implausible actions, whereas our method performs well on all generations, with diverse but always plausible results. More visualisations can be found in the supplementary material.\n\n\n\n\n\\section{Probabilistic Dependencies via Masked MHA}\nThe temporal dependencies are implemented via the mask of the attention modules of the transformer decoder and encoder. The attention in a Transformer layer is computed as follows:\n\\begin{equation}\n    \\textrm{Att}(\\bs{Q}, \\bs{K}, \\bs{V}) = \\textrm{Softmax}\\left( \\bs{\\mathcal{M}} \\circ \\frac{\\bs{Q}\\bs{K}^T}{\\sqrt{d_k}} \\right) \\bs{V},\n\\end{equation}\nwhere $\\bs{Q}, \\bs{K}, \\bs{V}$ represent the query, key and value, and $d_k$ is the feature dimension of the query and key. $\\bs{\\mathcal{M}}$ is the attention mask and $\\circ$ denotes element-wise multiplication. An upper-triangular mask excluding the diagonal prevents the model from seeing future inputs. In this case, we can generate the entire sequence ${\\bf x}_{1:T}$ or ${\\bf z}_{1:T}$ simultaneously. In practice, given an observed sequence of length $T$, we only generate ${\\bf x}_{2:T}$ and ${\\bf z}_{2:T}$ to bypass the estimation of the initial states ${\\bf x}_0$ and ${\\bf z}_0$. \n\n\\input{fig_mask}\n\nFig.~\\ref{fig:mask} shows three cases of probabilistic dependencies obtained with different masks in the Transformer layer. Note that Fig.~\\ref{fig:mask} (a) is a non-causal situation, thus we cannot generate future motion with these dependencies. The mask in Fig.~\\ref{fig:mask} (c) restricts the attention to a single element, so the attention mask is meaningless in this case. 
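\n\nA minimal sketch of this masked attention (here written with the usual additive $-\\infty$ masking, which has the same effect as the element-wise mask $\\bs{\\mathcal{M}}$ above) is:\n\\begin{verbatim}\nimport torch\n\ndef masked_attention(Q, K, V):\n    # Q, K, V: (T, d_k). Position t attends only to positions strictly before t\n    # (causal mask). Row t=1 then has no valid key, which is why only x_{2:T}\n    # and z_{2:T} are generated in practice.\n    T, d_k = Q.shape\n    scores = Q @ K.transpose(0, 1) / d_k ** 0.5               # (T, T)\n    past = torch.tril(torch.ones(T, T, dtype=torch.bool), diagonal=-1)\n    scores = scores.masked_fill(~past, float('-inf'))\n    return torch.softmax(scores, dim=-1) @ V\n\\end{verbatim}\n\n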
We choose the mask shown in Fig.~\\ref{fig:mask} (b) in our proposed HiT-DVAE.\n\n\\section{Pseudo-code for HiT-DVAE}\nHere, we provide the pseudo-code for HiT-DVAE in training and generation:\n\\begin{algorithm}[h]\n\\caption{HiT-DVAE in training}\n\\begin{algorithmic}\n\\Inputs{}\\vspace{-4mm}\n\\State{$\\triangleright$ Observation on human sequence $\\mbf{x}_{1:T}$}\n\\For{epo in epochs}\n\\State{\\textbf{Inference:}}\n\\State{$\\triangleright$ Compute posterior of $\\mbf{w}$ and sample $ \\mbf{w} \\sim q_{\\bs{\\phi}_{\\mbf{w}}}(\\mbf{w} | \\mbf{x}_{1:T}) = \\mathcal{N}(\\mbf{w};\\; \\bs{\\mu}_{\\bs{\\bs{\\phi}_{\\mbf{w}}}}, \\bs{\\Sigma}_{\\bs{\\bs{\\phi}_{\\mbf{w}}}})$ }\n\\State{$\\triangleright$ Compute posterior $\\mbf{z}_{1:T}$ and sample $\\mbf{z}_t \\sim q_{\\bs{\\phi}_\\mbf{z}}(\\mbf{z}_t | \\mbf{x}_{1:T}, \\mbf{w}) = \\mathcal{N} (\\mbf{z}_t;\\; \\bs{\\mu}_{\\bs{\\phi}_{\\mbf{z}}, t}, \\bs{\\Sigma}_{\\bs{\\phi}_{\\mbf{z}}, t})$ for $t=1,...,T$ }\n\\State{\\textbf{Generation:}}\n\\State{$\\triangleright$ Compute the distribution of $\\mbf{x}_{2:T}$ via $p_{\\bs{\\theta}_{\\mbf{x}}}(\\mbf{x}_{t} | \\mbf{x}_{1:t-1},\\mbf{z}_t, \\mbf{w}) = \\mathcal{N} (\\mbf{x}_t;\\; \\bs{\\mu}_{\\bs{\\bs{\\theta}}_{\\mbf{x}}, t}, \\mathbf{I})$ for $t=2, ..., T$}\n\\State{$\\triangleright$ Compute the prior of $\\mbf{z}_{2:T}$ via $p_{\\bs{\\theta}_{\\mbf{z}}}(\\mbf{z}_{t} | \\mbf{x}_{t-1},\\mbf{z}_{1:t-1}, \\mbf{w}) = \\mathcal{N} (\\mbf{z}_t;\\; \\bs{\\mu}_{\\bs{\\bs{\\theta}}_{\\mbf{z}}, t}, \\bs{\\Sigma}_{\\bs{\\theta}_{\\mbf{z}}, t})$ for $t=2, ..., T$}\n\\State{\\textbf{Compute loss and optimize via Adam}}\n\\EndFor\n\\end{algorithmic}\n\\label{algo:vem-enhancement}\n\\end{algorithm}\n\n\\vspace{9mm}\n\n\\begin{algorithm}[h]\n\\caption{HiT-DVAE in generation}\n\\begin{algorithmic}\n\\Inputs{}\\vspace{-4mm}\n\\State{$\\triangleright$ Observation on human sequence $\\mbf{x}_{1:O}$}\n\\Init{}\\vspace{-4mm}\n\\State{$\\triangleright$ Compute posterior of $\\mbf{w}$ and $\\mbf{z}_{1:O}$}\n\\For{t in range(O+1, O+G)}\n\\State{$\\triangleright$ Generate $\\hat{\\mbf{z}}_t$ via $\\mbf{z}_t \\sim p_{\\bs{\\theta}_{\\mbf{z}}}(\\mbf{z}_{t} | \\hat{\\mbf{x}}_{t-1},\\mbf{z}_{1:t-1}, \\mbf{w})$}\n\\State{$\\triangleright$ Generate $\\hat{\\mbf{x}}_t$ via $\\mbf{x}_t \\sim p_{\\bs{\\theta}_{\\mbf{x}}}(\\mbf{x}_{t} | \\mbf{x}_{1:O}, \\hat{\\mbf{x}}_{O+1:t-1},\\hat{\\mbf{z}}_t, \\mbf{w}) $}\n\\EndFor\n\\Outputs{}\\vspace{-4mm}\n\\State{$\\triangleright$ Generated human motion sequence $\\hat{\\mbf{x}}_{O+1:O+G}$}\n\\end{algorithmic}\n\\label{algo:vem-enhancement}\n\\end{algorithm}\n\\vspace{-9mm}\n\n\\section{Action-classifier}\nAs explained in Sec.~4.1 and Sec.~4.3 of the main paper, we trained a RNN-based classifier to calculate the implicit evaluation metrics ACC and FID following~\\cite{guo2020action2motion,petrovich2021action}.\nThe classifier we used is build upon 2 simple GRU layers with hidden size of 128. \nWhen training on Human3.6M dataset, we found that some action classes do not differ much from each other, which makes it difficult to train a good classifier. As our goal is to have a classifier which offers good feature, we believe the classifier with low accuracy on real data is not reliable enough, so we group the 5 similar actions, \nand trained the classifier on these 5 groups instead of the 15 original classes. 
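\n\nFor completeness, a minimal sketch of such a classifier (our illustration; only the layer sizes are as stated above, the remaining choices being unspecified implementation details) is:\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass ActionClassifier(nn.Module):\n    # Two GRU layers with hidden size 128, followed by a linear head.\n    def __init__(self, pose_dim, n_classes, hidden=128):\n        super().__init__()\n        self.gru = nn.GRU(pose_dim, hidden, num_layers=2, batch_first=True)\n        self.head = nn.Linear(hidden, n_classes)\n\n    def forward(self, x):\n        # x: (batch, T, pose_dim) motion sequences\n        _, h = self.gru(x)      # h: (num_layers, batch, hidden)\n        feat = h[-1]            # sequence-level feature (also usable for FID)\n        return self.head(feat), feat\n\\end{verbatim}\n\n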
The groups of actions are detailed in Tab~\\ref{table:group_15_class}.\\\\\nNote that even on the classifier trained on the 15 original classes, our method still performs better than others, as shown in Tab.~\\ref{table:res_15_class}.\n\n\\input{tab_supp_act_class_h36m}\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nIn this paper we have investigated the use of attention combined with temporal probabilistic models for human motion generation. In particular, we proposed HiT-DVAE{}, a variational method modeling temporal dependencies between the observations and the latent variables, and exploiting attention to select which observations will be used to inform the generation of the current frame. Up to our knowledge, this is the first time that models with temporal latent variables and the use of attention are proposed to handle the human motion generation task. \nWe exhaustively evaluated our method on two widely used datasets, HumanEva and Human3.6M, and reported state-of-the-art results\n\\bibliographystyle{splncs04}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{SecIntro}\n\nA defining prediction of hierarchically clustering models is that the Universe must be teeming with low-mass systems left over from the collapse of the early stages of the hierarchy \\citep{White1978}. The $\\Lambda$ cold dark matter ($\\Lambda$CDM) paradigm is no exception; indeed, the abundance of $\\Lambda$CDM halos massive enough, in principle, to host a galaxy is so high that they outnumber faint galaxies by a large factor \\citep[see, e.g.,][]{Klypin1999,Moore1999}. For example, more than $1,000$ halos with virial\\footnote{We define virial quantities as those calculated within a radius where the mean inner density equals 200 times the critical density of the Universe, $\\rho_{\\rm crit}=3H^2\/8\\pi G$. Virial quantities are identified by a ``200'' subscript.} mass exceeding $10^8 \\rm \\ M_\\odot$ are expected within $\\sim 2 \\rm \\ Mpc$ from the barycentre of the Local Group (LG), a region that contains fewer than $100$ galaxies with baryonic masses exceeding $10^5 \\rm \\ M_\\odot$ \\citep[][and references therein]{Sawala2016}.\n\nThis discrepancy is usually explained by assuming that galaxies fail to form in halos below a certain halo mass, leaving a large number of systems essentially ``dark'', or free of stars. The main culprit is cosmic reionization, which heats most baryons to $\\sim 10^4 \\rm \\ K$ at relatively high redshift and prevents them from settling and condensing into galaxies in the shallow potential wells of low-mass halos\\cite[e.g.,][]{Bullock2000}\n\nThe existence of these ``dark'' minihalos\\footnote{Throughout this paper we shall refer to halos in the mass range $10^8 10^{-2} \\rm \\ cm^{-2}$, and temperatures below $\\sim 10^4 \\rm \\ K$~\\citep[see, e.g,][]{Schaye2001,Rahmati2013}\\footnote{Our study focuses on systems with gas densities $n_{\\rm H} \\le 10^{-1} \\rm \\ cm^{-2}$ and temperatures $T \\ge 10^4 \\rm \\ K$. Self-shielding could in principle change the properties of some of these systems since the temperature would be slightly reduced, thus increasing the \\ion{H}{I} mass predicted in Sec.~\\ref{SecHI}.}.\nReionization of the Universe is modelled by switching on the HM01 background radiation field at redshift $z_{\\rm reion} = 11.5$. The photoheating and photoionizing rates are kept fixed in the redshift range $z=9-11.5$. 
For redshift $z \\le 9$, the UV background is allowed to evolve, and reaches a maximum at redshift $z \\sim 2$. In addition, an extra heating of 2eV per proton mass is injected to the gas particles at $z_{\\rm reion}$, which accounts for a boost in the photoheating rates during reionization relative to the optically thin rates assumed here, ensuring that the photoionized gas is rapidly heated to a temperature of $\\sim 10^4 \\rm \\ K$. This is done instantaneously for \\ion{H}, but for \\ion{He}{II} the extra heat is distributed in redshift with a Gaussian of width $0.5$, centred at $z=3.5$. \n\nFor redshift $z>11.5$, the net cooling\\footnote{We refer to the net cooling of the gas as the difference between the radiative heating and cooling processes.} of the gas is computed by exposing it to the CMB and the photodissociating background obtained by cutting the HM01 spectrum at 1 Ryd. Note that the presence of photodissociating radiation and the finite resolution of our simulations imply that we cannot model the formation of Pop III stars via $\\rm H_{2}$ cooling in minihalos, which could play a role for isolated halos of mass $\\sim 10^5 \\rm \\ M_{\\odot}$.\n\n\n\\subsection{Halo finding}\n\nHalos are identified in the simulations using the group finder {\\tt SUBFIND}~\\citep{Springel2001, Dolag2009}, which identifies self-bound substructures within a catalogue of friends-of-friends (FoF) halos built with a linking length of 0.2 times the mean interparticle separation. {\\tt SUBFIND} provides a list of self-bound subhalos within each FoF halo, organized as a ``central'' halo and its respective ``satellites''. \n\n\nMost of our analysis is based on central halos identified at redshift $z=0$ within a spherical volume of radius $3.5 \\rm \\ Mpc$ centred at the barycentre of each simulated ``Local Group''. \nWe keep for analysis all central halos with $M_{200} \\ge 10^8 \\ \\rm M_{\\odot}$ (i.e., with typically more than $3000$ dark matter particles). These limits in volume and mass ensure that all selected halos are far enough from the boundaries of the high-resolution zoom-in region and that we are able to resolve them confidently. \n\n\\section{Results}\n\\label{SecRes}\n\n\\subsection{Baryonic content of {\\small APOSTLE} halos}\n\\label{SecMgasM200}\n\n\n\\begin{figure}\t\n\\includegraphics[width=\\columnwidth]{FigRhoT}\n\\caption{Temperature-density diagram for all gas particles bound to {\\small RELHICs}. All particles of all {\\small RELHICs} are shown, and compared with two curves indicating (i) the loci of particles where the photoheating timescale equals the age of the Universe, $t_{H} \\approx 13.76 \\rm \\ Gyr$ (green dashed curve); and (ii) the loci where photoheating and radiative cooling are in equilibrium (thick magenta curve). Gas in {\\small RELHICs} has been photoheated to a density-dependent temperature that matches one of those two regimes. The tight relation between density and temperature that results defines a temperature-density relation for gas in {\\small RELHICs} (red dashed line) that can be used to derive density and temperature profiles assuming that the gas is in hydrostatic equilibrium to the potential well of the dark matter.}\n \\label{FigRhoT}\n\\end{figure}\n\nWe begin by analysing the baryonic content within the virial boundaries of the simulated halos. This is presented in the top panel of Figure~\\ref{FigMasses}, where we show the relation between virial mass and the mass of various baryonic components for the three ``high-resolution'' volumes. 
The oblique dashed line indicates, for reference, the theoretical maximum baryonic mass within the virial radius, $M_{\\rm bar}=f_{\\rm bar}M_{200}$, where $f_{\\rm bar}=\\Omega_{\\rm b}\/(\\Omega_{\\rm 0}+\\Omega_{\\rm b}=0.167$ is the universal baryon fraction.\n\nThe blue solid line indicates the median stellar mass \"bound\" to the central galaxies. Note that we only consider ``central'' galaxies in this figure; i.e., the most massive subhalo of each FoF halo. \n\nThe median stellar mass plummets below a virial mass of $\\sim 10^{10} \\rm \\ M_\\odot$, mainly because not all those low-mass halos harbour luminous galaxies. This may be seen from the thick black dashed line in the bottom panel of Fig.~\\ref{FigMasses}, which shows the fraction of central galaxies that do not have stars. All halos above $10^{10} \\rm \\ M_\\odot$ have luminous galaxies, but the fraction dips to $50\\%$ at $M_{200} \\sim 5 \\times 10^9 \\rm \\ M_\\odot$. Below $10^9 \\rm \\ M_\\odot$ essentially all halos are ``dark''\\footnote{Strictly speaking, these halos have galaxies less massive than a few $10^3 \\rm \\ M_\\odot$ in stars, the mass of one baryon particle at this resolution level.}. \n\nThe median total baryon mass bound to {\\it luminous} halos, measured within $r_{200}$, is shown by the green curve in Fig.~\\ref{FigMasses}, with a shading that indicates $\\pm1\\sigma$ dispersion. The total gas mass within $r_{200}$ bound\\footnote{We note that {\\tt SUBFIND} can at times underestimate these masses, especially in regions where the mean ambient gas density is comparable to the density in the outer regions of the minihalo or when a minihalo is embedded in hotter gas; the masses shown here have been carefully recomputed to take those issues into account.} to ``dark'' (i.e., star free) halos is shown with open circles. Two populations are clearly apparent: one where the bound gas mass is so small ($<10^5 \\rm \\ M_\\odot$, shown in grey) that it can barely be measured (the gas particle mass is $\\sim 10^4 \\rm \\ M_\\odot$ at resolution level L1); and another where the gas mass correlates tightly with virial mass (shown in red). We shall hereafter refer to the former as ``{\\small COSWEBs}'' (short for cosmic web-stripped halos) and to the latter as ``{\\small RELHICs}'' (for Reionization-Limited \\ion{H}{I} Clouds). \n\n\nBefore discussing the origin of these two populations, we show in the bottom panel of Fig.~\\ref{FigMasses} their relative fractions as a function of mass, as well as the dependence on numerical resolution. The thick solid red curve indicates the fraction of {\\small RELHICs} in the highest-resolution (L1 level) runs: {\\small RELHICs} inhabit halos spanning a small range in virial mass, $3\\times 10^{8} 10^{-4.8}$cm$^{-3}$ radiative cooling induced by collisional effects becomes more important and the gas settles at the ``equilibrium'' temperatures where radiative cooling effects balance photoheating from the ionizing background, shown by the thick purple line in Fig.~\\ref{FigRhoT}~\\citep[see e.g.,][]{Haehnelt1996, Theuns1998}. As is clear from this figure, these two regimes describe very well the temperature-density relation of gas in {\\small RELHICs} (shown by the red dashed curve).\n\n\n\\subsection{Gas masses}\n\\label{SecGasM}\n\nThe $n_{\\rm H}$-$T$ relation followed by gas particles in {\\small RELHICs} effectively defines a pressure-density relation, $P=P(\\rho)$, that enables us to estimate the gas mass bound to a halo of given virial mass. 
This may be done by assuming that the gas is in hydrostatic equilibrium within the potential of the dark halo and solving: \n\\begin{equation}\n\\displaystyle\\frac{1}{\\rho}\\frac{dP}{dr} = -\\frac{GM(r)}{r^2},\n\\label{EqHydroEquil}\n\\end{equation}\nto give a density profile that may be integrated to compute the total gas mass within $r_{200}$, once a boundary condition (e.g., an external pressure) is chosen. \n\nThe simplest choice is to assume that far from the virial boundary the gas reaches the mean baryon density of the Universe, at the appropriate temperature set by the ionizing background: this specifies the external pressure that closes the set of equations, enabling a simple estimate of {\\small RELHIC} gas masses. \n\nWe present details of the calculation in Appendix~\\ref{SecApp} and show the main result by the thick purple line in Fig.~\\ref{FigMasses}. Despite its simplicity, the model accurately predicts the gas mass of {\\small RELHICs} for halos not exceeding virial masses of order $5\\times 10^9 \\rm \\ M_\\odot$. At higher masses gravitational heating becomes important and, in addition, the central densities become high enough for self-shielding and cooling processes to play a role; in those halos the gas would not be able to stay in hydrostatic equilibrium, but will collapse into a rotationally supported disk where it may form stars. Indeed, very few, if any, halos above $5\\times 10^9 \\rm \\ M_\\odot$ remain ``dark'', as shown in the bottom panel of Fig.~\\ref{FigMasses}.\n\n\n\n\\begin{figure*}\n\t\\includegraphics[width=\\textwidth]{FigHI}\n    \\caption{Properties of {\\small RELHICs} (open circles), compared with those of the model presented in Appendix~\\ref{SecApp}. Left panels show, as a function of \\ion{H}{I} mass within $r_{200}$, the total gas mass (top) and the central \\ion{H}{I} column density $\\rm N_{\\ion{H}{I},0}$. Right panels show the total gas mass vs the velocity dispersion in bulk motions of the gas (top); and the central \\ion{H}{I} column density $\\rm N_{\\ion{H}{I},0}$ vs \\ion{H}{I} size, $R_{\\rm HI}$. Several characteristic radii for the latter are shown; from top to bottom the dashed lines indicate the radius of the iso column density contour of $10^{20}$, $10^{19}$, etc, in units of cm$^{-2}$. Each {\\small RELHIC} is shown at the column density immediately below its central value; for example, the radius of the $10^{18}$ cm$^{-2}$ contour is shown for those {\\small RELHICs} with central column densities in the range $10^{18}$-$10^{19}$ cm$^{-2}$, and so on.}\n \\label{FigHI}\n\\end{figure*}\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{FigPhoto}\n    \\caption{\\ion{H}{I} column density maps of four relatively massive {\\small RELHICs}. The \\ion{H}{I} distribution is nearly round, with axis ratios of the $10^{18}$ cm$^{-2}$ iso-column density contour (thick black line) $>0.8$. This is consistent with the idea that {\\small RELHICs} are in hydrostatic equilibrium in the potential of mildly triaxial dark matter halos.}\n \\label{FigPhoto}\n\\end{figure}\n\n\n\\subsection{HI masses and radial profiles}\n\\label{SecHI}\n\nThe blue solid lines in Fig.~\\ref{FigRadProf} show the density profiles of neutral hydrogen, derived using the fitting formula given in appendix A1 of~\\cite{Rahmati2013}. This model uses a simple but accurate fit to the photoionization rates, obtained from radiative transfer simulations, where the scaling of the characteristic self-shielding density is taken from the analytic model of~\\cite{Schaye2001}, and computes neutral fractions as a function of density and temperature assuming ionization equilibrium. In the inner regions of the example {\\small RELHIC} shown in Fig.~\\ref{FigRadProf} the gas is dense and cold enough to be $\\sim 100\\%$ neutral; the neutral fraction drops rapidly from the center outwards. 
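\n\nTo make the hydrostatic model above concrete, the following schematic sketch integrates equation~(\\ref{EqHydroEquil}) inwards for an NFW halo. It is not the code used for our figures: for simplicity the gas is assumed isothermal at $10^4$ K (rather than following the full temperature-density relation of Fig.~\\ref{FigRhoT}), the neutral fractions of~\\cite{Rahmati2013} are not applied, and the halo concentration, mean molecular weight and outer boundary density are placeholder values:\n\\begin{verbatim}\nimport numpy as np\n\nG, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24   # cgs\nrho_crit = 9.2e-30                             # critical density today (approx.)\nmu = 0.59                                      # ionized primordial gas (assumed)\nkpc, Msun = 3.086e21, 1.989e33\n\ndef m_nfw(r, m200, r200, c):\n    # Enclosed NFW mass within radius r\n    x = r / (r200 / c)\n    return m200 * (np.log(1 + x) - x / (1 + x)) / (np.log(1 + c) - c / (1 + c))\n\ndef hydrostatic_gas(m200=3e9 * Msun, c=15.0, rho_out=4.2e-31, T=1e4, n=4000):\n    # Integrate d(ln rho)/dr = -(mu m_p / k_B T) G M(<r) / r^2 inwards,\n    # starting from roughly the mean cosmic baryon density at 2 r200.\n    r200 = (3 * m200 / (800 * np.pi * rho_crit)) ** (1.0 / 3.0)\n    r = np.linspace(2 * r200, 0.01 * r200, n)\n    lnrho = np.empty(n)\n    lnrho[0] = np.log(rho_out)\n    for i in range(1, n):\n        rm = 0.5 * (r[i] + r[i - 1])\n        g = G * m_nfw(rm, m200, r200, c) / rm**2\n        lnrho[i] = lnrho[i - 1] - (mu * m_p / (k_B * T)) * g * (r[i] - r[i - 1])\n    rho = np.exp(lnrho)\n    inside = r <= r200\n    mgas = -np.trapz(4 * np.pi * r[inside]**2 * rho[inside], r[inside])\n    return r[::-1] / kpc, rho[::-1], mgas / Msun   # [kpc], [g/cm^3], [Msun]\n\\end{verbatim}\nThe full calculation of Appendix~\\ref{SecApp} follows the same logic, but uses the temperature-density relation discussed above and converts the resulting profiles into neutral hydrogen using the fitting formula of~\\cite{Rahmati2013}.\n\n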
The \\ion{H}{I} column density profile is shown in the bottom right panel of Fig.~\\ref{FigRadProf}: the profile is quite steep, in part due to the onset of self-shielding in the model, and it drops from a well-defined central value of $10^{20}$ cm$^{-2}$ to $10^{18}$ cm$^{-2}$ at $\\sim 1.25$ kpc from the centre\\footnote{We compute the column density profiles by integrating the density profile along the line-of-sight within a sphere of radius $2\\times r_{200}$.}. \n\nWe show the gas density and \\ion{H}{I} column density profiles in Fig.~\\ref{FigModProf}, as a function of halo virial mass or, equivalently, as a function of the total gas mass. {\\small RELHICs} have density profiles that vary in shape as the halo mass decreases, and central densities that correlate strongly with mass. Although the most massive {\\small RELHICs} may reach central \\ion{H}{I} column densities $\\rm N_{\\ion{H}{I},0}$ of $10^{21}$ cm$^{-2}$ these drop steeply with decreasing mass, dipping below $10^{15}$ cm$^{-2}$ for halos below $2.5 \\times 10^9 \\rm \\ M_\\odot$. This suggests that only the most massive {\\small RELHICs} might be detectable in 21 cm surveys such as ALFALFA~\\citep[e.g.,][]{Haynes2011}, which only reaches column densities exceeding $\\sim 10^{18}$ cm$^{-2}$.\n\nWe provide further structural properties of the \\ion{H}{I} component of {\\small RELHICs} in Fig.~\\ref{FigHI}, where we show the total \\ion{H}{I} mass within $r_{200}$ as a function of central \\ion{H}{I} column density and as a function of the total gas mass (left panels). The dashed lines show the results of the model described in Appendix~\\ref{SecApp} ( Fig.~\\ref{FigModProf}), which agree very well with the simulation results. Clearly, neutral hydrogen makes up a very small fraction of the gaseous content of minihalos, confirming the expectations of the analytic models of \\citet{Sternberg2002}: minihalos are essentially spheres of ionized gas in hydrostatic equilibrium and they have a small core of neutral hydrogen.\n\nThe bottom right-hand panel of Fig.~\\ref{FigHI} shows several characteristic radii, where the {\\small RELHIC} \\ion{H}{I} column density drops from its central value to $10^{20}$, $10^{19}$,...,$10^{12}$ cm$^{-2}$, respectively (top to bottom). The dashed lines indicate the results of the model profiles shown in Fig.~\\ref{FigModProf}, whereas the open circles correspond to simulated {\\small RELHICs}, grouped so that each is plotted at the radius where the column density drops by about one decade (or less) from the center. For example, the radii shown for {\\small RELHICs} with central column densities between $10^{18}$ and $10^{19}$ cm$^{-2}$ is that where the column density drops to $10^{18}$ cm$^{-2}$, and so on. Note that, defined this way, most {\\small RELHICs} within reach of current \\ion{H}{I} surveys (i.e., $M_{\\rm HI}> 10^4 \\rm \\ M_\\odot$; $N_{\\rm HI}>10^{18}$ cm$^{-2}$) are expected to be compact, sub-kpc systems \\citep{Sternberg2002}.\n\n{\\small RELHICs} are near hydrostatic equilibrium, so the random bulk motions of the gas are quite small compared with the characteristic velocity dispersion of the halos they inhabit. 
This may be seen in the top right panel of Fig.~\\ref{FigHI}, where we show, as a function of the total gas mass within $r_{200}$, the velocity dispersion of {\\small RELHIC} gas particles, compared with the characteristic rms velocities of dark matter particles, $\\sigma_{200}\\approx V_{200}\/\\sqrt{2}$.\nTypical bulk motions in {\\small RELHICs} are below $5$ km\/s in essentially all cases, well below $\\sigma_{200}$, implying that the broadening of the 21 cm line should be mostly thermal.\n\nFinally, we examine the morphologies of {\\small RELHICs} in Fig.~\\ref{FigPhoto}, which shows \\ion{H}{I} column density maps for $4$ relatively massive {\\small RELHICs} (see masses in figure legends). Drawing attention to the $10^{18}$ cm$^{-2}$ contour (black thick inner line), which corresponds to the sensitivity limit of surveys such as ALFALFA, we see that {\\small RELHICs} would appear essentially round in such surveys. This is a direct consequence of the fact that the gas is in hydrostatic equilibrium in the dark halo potential. Indeed, although $\\Lambda$CDM halos are intrinsically triaxial, the axis ratios of the potential are much less aspherical than those of the mass distribution \\citep[see, e.g.,][]{Hayashi2007}.\n\n\n\n\\subsection{UCHVCs as Local Group RELHICs}\n\\label{SecObs}\n\nWe explore now the possibility that {\\small RELHICs} might have been detected already in existing \\ion{H}{I} surveys. Given their sizes and low \\ion{H}{I} masses, we compare {\\small RELHICs} with the population of Ultra Compact High Velocity Clouds (UCHVCs) first discussed by \\citet{Giovanelli2010} in the context of the ALFALFA survey. These are identified as high signal-to-noise \\ion{H}{I} sources with sizes less than $30'$, and velocities well outside the range expected for Galactic rotation. Note that $30'$ corresponds to $\\sim 2$ kpc at a distance of $250$ kpc, so these sources might include sub-kpc {\\small RELHICs} in the Local Group.\n\nWe begin by noting that, as shown in Fig.~\\ref{FigSpatialDistribution}, {\\small RELHICs} shun the region close to the Local Group barycentre and mainly populate the underdense regions of its outskirts. Indeed, we find no {\\small RELHIC} within $500$ kpc of any of the two main LG galaxies in any of the three \"high-resolution\" volumes we have analysed. This has two important consequences; one is that, coupled with the low \\ion{H}{I} masses expected of {\\small RELHICs}, their \\ion{H}{I} fluxes will be quite low, and another is that few {\\small RELHICs} will have negative Galactocentric radial velocities, as most will still be expanding away outside the LG turnaround radius. \n\nWe show this in Fig.~\\ref{FigHVCs}, where we show, as a function of the \\ion{H}{I} flux\\footnote{We compute \\ion{H}{I} fluxes using the total \\ion{H}{I} mass within the virial radius of a {\\small RELHIC}, $M_{\\ion{H}{I}}$, and its distance to the LG primary galaxies expressed in Mpc, $d_{\\rm Mpc}$: $M_{\\rm HI}\/{\\rm M_\\odot}=2.36 \\times 10^5 \\, S_{21}\\, \\left (d\/\\rm Mpc \\right )^2$, with $S_{21}$ given in units of Jy km\/s. 
Note that as we consider the two main galaxies of each simulated LG, every {\\small RELHIC} is shown twice in Fig.~\\ref{FigHVCs}.}, $S_{21}$, in units of $\\rm Jy \\ km \\ s^{-1}$, the \\ion{H}{I} size of the {\\small RELHIC}, defined as the mean radius $(\\sqrt{ab})$, where $a$ and $b$ are the semiaxes of the best fitting ellipse to its $10^{18}$ cm$^{-2}$ isodensity contour (top panel), Galactocentric radial velocity, $V_{\\rm gsr}$ (second from top), the FWHM line broadening parameter\\footnote{$W_{50}$ is computed by adding in quadrature the broadening due to the gas temperature and its bulk velocity dispersion. The former dominates, and is given by $T\/$K$=21.8\\, W_{50}^2$, with $W_{50}$ given in km\/s.}, $W_{50}$ (third from top), and the axis ratio of the limiting \\ion{H}{I} column density isocontour, $b\/a$ (bottom panel).\n\n\n\\begin{figure}\n\t\\includegraphics[scale=0.45]{FigHVCs}\n \\caption{Simulated Local Group {\\small RELHICs} (red open circles), and simulated LG dwarfs (magenta points) compared with the ALFALFA Ultra Compact High Velocity Clouds (UCHVCs, black crosses) from the compilation of \\citet{Adams2013}. {\\small RELHICs} are ``observed'' from the centre of the two primary galaxies, so that each {\\small RELHIC} is shown twice. Only those with fluxes $S_{21}>0.1$ Jy km\/s are shown. From top to bottom we show, as a function of $S_{21}$, the size of the $10^{18}$ cm$^{-2}$ \\ion{H}{I} column density contour in arcmin; the Galactocentric radial velocity $V_{\\rm gsr}$; the linewidth $W_{50}$; and the axis ratio $b\/a$. Note that, in general, Local Group {\\small RELHICs} are smaller in size, fainter, rounder, and more homogeneous in their linewidths than currently observed UCHVCs. {\\small RELHICs} also have predominantly positive Galactocentric radial velocities, consistent with their large distances to the primaries ($d>500$ kpc). Magenta points indicate simulated galaxies with a non-zero stellar component ($M_{\\rm str}< 10^6 \\rm \\ M_{\\odot}$). These systems, in contrast, have properties resembling those of UCHVCs. Some are bigger in size, have higher fluxes, and exhibit a wider range of morphologies. Vertical dashed lines show the flux limit of the UCHVC compilation.\n}\n \\label{FigHVCs}\n\\end{figure}\n\n\nFig.~\\ref{FigHVCs} compares simulated {\\small RELHICs} (open red circles) and luminous simulated dwarfs ($M_{\\rm str} < 10^6 \\rm \\ M_{\\odot}$; magenta circles), with the $59$ UCHVCs catalogued by \\citet{Adams2013} from ALFALFA data (see black crosses). The stellar mass limit roughly corresponds to that of Leo~P, which was discovered after follow-up imaging of an UCHVC ~\\citep{Giovanelli2013}. Note that {\\it all} {\\small APOSTLE} {\\small RELHICs} are just below the flux limit of the ALFALFA search, which is of $3$ Jy km\/s (shown by the vertical dashed line). UCHVCs are also much more numerous and heterogeneous as a population than expected for {\\small RELHICs}, which are rather small; fairly round ($b\/a>0.8$); have a narrow dispersion of linewidths about $W_{50}\\sim 20$ km\/s; and are almost exclusively moving away from the Galaxy. UCHVCs are, however, comparable to simulated dwarfs, which reach higher fluxes, exhibit a wider range of morphologies, and are bigger than {\\small RELHICs}. We do not analyse the distribution of $\\rm W_{50}$ for our simulated dwarfs as their temperature is set by a an effective equation of state imposed to model the ISM, which is set to $10^4 \\rm \\ K$. 
Because of this, the HI fluxes estimated for dwarfs must be regarded as lower limits. \n\nWe conclude that the \\citet{Adams2013} UCHVC catalogue does not contain the star-free ``dark'' minihalos we associate with {\\small RELHICs}, and that the properties of some UCHVCs might be consistent with very faint dwarf galaxies that have so far escaped detection in optical surveys. Indeed, as discussed in Sec.~\\ref{SecIntro}, some UCHVCs have already been identified as low surface brightness galaxies, some in the Local Volume \\citep{Sand2015} and some as far away as the Virgo Cluster \\citep{Bellazzini2015a}.\n\n\\section{Summary and Conclusions}\n\\label{SecConc}\n\nWe have used the {\\small APOSTLE} suite of cosmological hydrodynamical simulations of the Local Group to examine the gas content of $\\Lambda$CDM minihalos. We focussed our analysis on systems that are free of stars in our highest-resolution runs, since in such systems the bound gas content at $z=0$ should only depend on the effects of the UV ionizing background and on the ram pressure stripping that affects minihalos as they travel through the cosmic web. \n\n``Dark'' minihalos (or, more precisely, systems with stellar mass $M_{\\rm str}<10^4 \\rm \\ M_\\odot$, the mass resolution limit of our simulations) split into two well-defined groupings: one where the mass of bound gas is set by the ionizing background and correlates tightly with the minihalo virial mass ({\\small RELHICs}, for REionization Limited \\ion{H}{I} Clouds), and another where there is little or no bound gas left within the halo after stripping by the cosmic web ({\\small COSWEBs}, for COSmic WEb Stripped systems). The differentiation is thus mainly environmental; gas-free {\\small COSWEBs} populate the high-density regions near the luminous galaxies of the Local Group, where gas densities are high and cosmic web stripping is important, whereas the relatively gas-rich {\\small RELHICs} inhabit the underdense outskirts. Few {\\small RELHICs} are found within $500$ kpc of either the Milky Way or the M31 analogues in the simulations.\n\nIn terms of halo virial mass, the transition between luminous galaxies and dark systems like {\\small RELHICs} and {\\small COSWEBs} happens relatively quickly. Dark minihalos have masses that do not exceed $M_{200} \\sim 10^{10} \\rm \\ M_\\odot$; their fraction increase rapidly with decreasing mass, and they make up essentially all halos below $10^9 \\rm \\ M_\\odot$. {\\small RELHICs} make up most of the more massive dark minihalos; their abundance peaks at roughly $50\\%$ for $M_{200}\\sim 2\\times 10^9 \\rm \\ M_\\odot$. The {\\small RELHIC} bound gas mass fraction decreases with decreasing mass; from $20\\%$ of the universal baryon fraction at $M_{200} \\sim 5 \\times 10^9 \\rm \\ M_\\odot$ to $0.3\\%$ ($10^5 \\rm \\ M_\\odot$, or ten particles in our highest-resolution runs) in $\\sim 3\\times 10^8 \\rm \\ M_\\odot$ minihalos.\n\nThe gas component in {\\small RELHICs} is in approximate hydrostatic equilibrium with the dark matter potential and in thermal equilibrium with the ionizing UV background. Their thermodynamic properties are therefore well understood, and their gas density and temperature profiles are in excellent agreement with a simple model where UV-heated gas is in thermal and hydrostatic equilibrium within NFW halos. Gas in {\\small RELHICs} is nearly pristine in composition and nearly fully ionized, with small (sub-kpc) neutral hydrogen cores that span a large range of \\ion{H}{I} masses and column densities. 
These cores have negligible Doppler broadening and nearly round morphologies. \n\nThe most massive {\\small RELHICs} have properties comparable to those of some Ultra Compact High Velocity Clouds (UCHVCs), but the bulk of the Local Group {\\small RELHIC} population should have \\ion{H}{I} fluxes just below $\\sim 3$ Jy km\/s, the limit of the ALFALFA UCHVC detection. \n\nOther differences between {\\small RELHICs} and UCHVCs are the following: (i) the sheer number of UCHVCs implies that most UCHVCs are not {\\small RELHICs} (we expect fewer than 10 Local Group {\\small RELHICs} with $S_{21}>0.1$ Jy km\/s over the whole sky); (ii) {\\small RELHICs} should mostly reside beyond $\\sim 500$ kpc from the Milky Way, leading to low \\ion{H}{I} fluxes ($<3$ Jy km\/s), very small angular sizes ($<3'$), and predominantly positive Galactocentric radial velocities; (iii) {\\small RELHICs} should be nearly round on the sky ($b\/a>0.8$ at $10^{18}$ cm$^{-2}$) and (iv) have a very narrow distribution of thermally broadened line widths ($W_{50}\\sim 20$ km\/s). \n\nThe small overlap in properties between UCHVCs and {\\small RELHICs} suggests that the former are not part of the abundant dark minihalo population expected in the $\\Lambda$CDM models. UCHVCs are either \\ion{H}{I} ``debris'' in the Galactic halo, or else the \\ion{H}{I} component of more massive halos, most of which are expected to host a luminous stellar component as well. Further work is underway that aims to clarify the overall abundance of {\\small RELHICs} in cosmological volumes; their contribution to the low-mass end of the \\ion{H}{I} mass function; their relation to ultra-faint galaxies; and the best strategies to detect them. Although {\\small RELHICs} seem too faint to be a dominant source of \\ion{H}{I} detections in extant or planned surveys, they may be easier to detect and study in absorption against the light of luminous background objects at moderate redshifts. {\\small RELHICs} are a robust prediction of the $\\Lambda$CDM paradigm, so their detection and characterization would offer a unique opportunity to shed light on the ``dark'' side of a cold dark matter-dominated universe.\n\n\n\\section{Acknowledgements}\n\nWe acknowledge useful discussions with John Cannon, Luke Leisman, Manolis Papastergis, Antonino Marasco and Tom Osterloo. We also thank the anonymous referee for valuable comments that helped to improve the paper. We have benefited from the following public Python packages: {\\tt numpy} \\citep{van2011numpy}, {\\tt scipy} \\citep{Jones2001}, {\\tt matplotlib} \\citep{Hunter2007}, {\\tt IPython} \\citep{Perez2007} and {\\tt py-sphviewer} \\citep{Benitez-Llambay2015b}. RCA is a Royal Society University Research Fellow. This work was supported by the Science and Technology Facilities Council (grant number ST\/L00075X\/1) and the European Research Council (grant number GA 267291 ``Cosmiway''). It was also partially supported by the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy Office ([AP P7\/08 CHARM]), and by the European Research Council under the European Union's Seventh Framework Programme (FP7\/2007-2013)\/ERC grant agreement 278594-GasAround Galaxies and by the Netherlands Organisation for Scientific Research (NWO) through VICI grant 639.043.409.\nThis work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). 
This equipment was funded by\nBIS National E-infrastructure capital grant ST\/K00042X\/1, STFC capital grants ST\/H008519\/1 and ST\/K00087X\/1, STFC DiRAC Operations grant ST\/K003267\/1 and Durham University. DiRAC is part of the National E-Infrastructure.\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStaggered fermions are an inexpensive and popular way of\ndiscretizing quarks on a space-time lattice. They preserve a chiral\nsymmetry with a local action at the expense of containing extra modes\nthat quadruple the number of quark species being simulated. The\nquadrupled species, called {\\it tastes}, are mixed at nonzero lattice\nspacing, but are expected to produce four independent quark flavors in\nthe continuum limit. Evidence of this behavior can be seen in the low\neigenvalues of the staggered Dirac matrix. At small lattice spacings\nthe eigenvalues tend to cluster into distinct quartets which represent\nthe four tastes. As the lattice spacing is decreased, the eigenvalues\nwithin a quartet will become more degenerate \\cite{latev}.\n\nIn \\cite{srmt} a Random Matrix Theory (RMT) for staggered\nlattice fermions was introduced to describe the low eigenvalues of the\nstaggered Dirac operator. The staggered RMT (SRMT) adds additional\nterms to the standard chiral Random Matrix Theory that have the\nappropriate symmetries of the lattice operator. These additional\nterms in the SRMT reproduce the known $O(a^2)$ (ignoring any extra\nfactors of $\\alpha_s$ which we drop for convenience) taste breaking\nterms that appear in the staggered chiral Lagrangian.\n\nThe SRMT is equivalent to the staggered chiral Lagrangian at zero\nmomentum. This equivalence has been demonstrated for the fermionic\npartition function, but is only conjectured for the partially quenched\ncase, which is related to the eigenvalues of the Dirac operator. Here\nwe will directly compare estimates of the size of the taste breaking\nin lattice simulations determined from the staggered chiral Lagrangian\n(in the $p$-regime) with estimates obtained from comparing low\neigenvalues to the SRMT predictions (in the $\\epsilon$-regime).\nOur initial tests show good agreement with the predictions of SRMT\nin support of the conjectured equivalence of the partially quenched\ntheories.\n\n\n\\section{Staggered Chiral Lagrangian}\n\nThe effective chiral Lagrangian for staggered fermions at order $a^2$\n is given by \\cite{Lee:1999zxa,Aubin:2003mg}\n\\begin{eqnarray}\n\\label{cV}\n{\\cal L} = \\frac{F^2}{8} \\tr{\\partial_\\mu U \\partial_\\mu U^\\dagger} \n-\\frac{1}{2} \\Sigma_0 m \\tr{U + U^\\dagger}\n+ a^2 {\\cal V}\n\\end{eqnarray}\nwhere $\\tr{X}$ stands for the trace of $X$, and\n$F$ and $\\Sigma_0$ are the low energy constants (LECs) related to the\npion decay constant (with the convention that the physical value for\n $F \\approx 131$ MeV)\nand the magnitude of the chiral condensate, respectively.\nThe taste breaking terms can be divided into two parts ${\\cal V} =\n{\\cal V}_{1t} + {\\cal V}_{2t}$. The first part contains the single-trace terms\n(with $\\xi_\\mu = \\gamma_\\mu^*$)\n\\begin{eqnarray}\n\\label{sL}\n-{\\cal V}_{1t} &=&\n C_1 \\tr{ \\xi_5 U \\xi_5 U^\\dagger }\n+ C_3 \\frac{1}{2} \\sum_\\mu \\left[ \\tr{ \\xi_\\mu U \\xi_\\mu U } + h.c. \\right] \\nonumber\\\\\n&+& C_4 \\frac{1}{2} \\sum_\\mu \\left[ \\tr{ \\xi_{\\mu 5} U \\xi_{5 \\mu} U } + h.c. 
\\right]\n+ C_6 \\sum_{\\mu<\\nu} \\tr{ \\xi_{\\mu\\nu} U \\xi_{\\nu\\mu} U^\\dagger }\n\\end{eqnarray}\nand the second part has the two-trace terms\n\\begin{eqnarray}\n\\label{c25tr2}\n-{\\cal V}_{2t} &=& \n C_{2V} \\frac{1}{4} \\sum_\\mu \\left[ \\tr{ \\xi_\\mu U } \\tr{ \\xi_\\mu U } + h.c. \\right]\n+ C_{2A} \\frac{1}{4} \\sum_\\mu \\left[ \\tr{ \\xi_{\\mu5} U } \\tr{ \\xi_{5\\mu} U } + h.c. \\right] \\nonumber\\\\\n&+& C_{5V} \\frac{1}{2} \\sum_\\mu \\left[ \\tr{ \\xi_\\mu U } \\tr{ \\xi_\\mu U^\\dagger } \\right]\n+ C_{5A} \\frac{1}{2} \\sum_\\mu \\left[ \\tr{ \\xi_{\\mu5} U } \\tr{ \\xi_{5\\mu} U^\\dagger } \\right]\n.\n\\end{eqnarray}\n\n\n\\section{Staggered Chiral Random Matrix Theory}\n\nThe (fermionic) staggered chiral random matrix theory partition\nfunction can be written as \\cite{srmt}\n\\begin{eqnarray}\n\\label{SRMT}\nZ_{SRMT} =\n\\int dW p_0(W) p_T(T) \\prod_{f=1}^{N_f} \\det(D+m_f)\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\nD = \\left( \\begin{array}{cc}\n0 & i W \\\\\ni W^\\dagger & 0\n\\end{array} \\right) \n\\otimes \\mathbb{I}_4 + a T ~.\n\\end{eqnarray}\nwhere $W$ is a $(N+\\nu) \\times N$ complex matrix with $\\nu$ the absolute\n value of the topological charge and $T$ incorporates the taste breaking terms.\nThe matrix potential for $W$ is conveniently a Gaussian,\n\\begin{eqnarray}\np_0(W) = \\exp(-\\alpha N \\tr{W^\\dagger W})\n\\end{eqnarray}\nwith $\\sqrt{\\alpha} = \\Sigma_0 V\/ 2N$ ($V$ is the four volume).\n\nThe taste breaking contribution to the SRMT ($T$) has eight terms that correspond directly\nwith the eight taste breaking terms of the chiral Lagrangian.\nIts complete form was given in \\cite{srmt}.\nAs an example, the $C_4$ term corresponds to\n\\begin{eqnarray}\n\\label{t4}\nT_4 = \\sum_{\\mu} \\left( \\begin{array}{cc} A_\\mu & 0 \\\\ 0 & B_\\mu \\\\ \\end{array} \\right)\n\\otimes \\xi_{\\mu 5}\n\\end{eqnarray}\nwhere $A_\\mu$ and $B_\\mu$ are Hermitian matrices of size $(N+\\nu) \\times (N+\\nu)$ and \n$N\\times N$, respectively.\nFor convenience we can choose a Gaussian weight function for these matrices,\n\\begin{eqnarray}\np_{T_4} = \\exp\\left(-\\frac{\\alpha N^2}{2 V C_4} \\sum_\\mu \\tr{A_\\mu^2} + \\tr{B_\\mu^2} \\right) ~.\n\\end{eqnarray}\nOne can then show that the (fermionic) RMT with this extra matrix term is equivalent\nto the zero-momentum staggered chiral Lagrangian with just a $C_4$ taste breaking term.\n\nNote that while the correction to the RMT enters at order $a$, this still reproduces\na term of order $a^2$ in the chiral Lagrangian. This is due to the taste breaking terms\nin the SRMT being traceless. As demonstrated in \\cite{srmt}, when expanding the\ndeterminant of the SRMT Dirac matrix, the $O(a)$ term vanishes for this reason, and results\nin a partition function that has a leading corrections at $O(a^2)$ even\n though the SRMT Dirac operator has terms of $O(a)$.\n\nThe two-trace terms can be incorporated in the SRMT in a couple of ways.\nOne is to add terms such as\n\\begin{eqnarray}\n \\left( \\begin{array}{cc} b_\\mu \\mathbb{I}_{N+\\nu} & 0 \\\\\n 0 & \\pm b_\\mu \\mathbb{I}_{N} \\\\\n \\end{array} \\right) \\otimes \\xi_{\\mu 5}\n\\end{eqnarray}\nwith a Gaussian weight for the scalar $b_\\mu$.\nThis will give a contribution to the $C_{2A}$ and $C_{5A}$ terms.\nWhile this term will reproduce the correct term in the chiral Lagrangian,\nit has no analogue in the lattice Dirac matrix. 
This term is of the\nform of a fluctuating taste-dependent mass, which is not present on\nthe lattice.\n\nAn alternative form for the two-trace terms is to simply modify the potentials\nfor the matrix terms corresponding to the one-trace terms. By replacing the simple\nGaussian weight with one that also includes $\\tr{A_\\mu}^2$, $\\tr{B_\\mu}^2$ and\n$\\tr{A_\\mu}\\tr{B_\\mu}$ in the exponential, the coefficients can be tuned to give\nthe correct two-trace terms in the chiral Lagrangian \\cite{srmt}.\n\n\n\\section{Generalized Staggered Random Matrix Theory}\n\nIf we look at the final form of the SRMT Dirac matrix, we see that it\nis the most general matrix that is consistent with the symmetries of\nthe staggered Dirac operator; it is anti-Hermitian and anticommutes\nwith the staggered chiral symmetry matrix $\\gamma_5 \\otimes \\xi_5$.\nAs mentioned in the previous section, the single-trace terms from the\nchiral Lagrangian are reproduced by considering independent Gaussian\nweights for the remaining matrix elements. Meanwhile the two-trace\nterms can be reproduced by adding two-trace terms to the RMT weights.\n\nIn this manner one could consider generalizing the SRMT to include\nmore terms in the weight function in an attempt to reproduce higher\norder terms in the chiral Lagrangian (at zero momentum). One can then\ndraw an analogy between the formulation of a RMT and of an effective\nLagrangian. For the Lagrangian one includes all terms, up to some\norder, that are consistent with the symmetries. Likewise for the RMT\none can consider a matrix containing all elements consistent with the\nsymmetries of the Dirac operator, with a generalized weight function\nfor these matrix elements up to some level of complexity. Of course the\nmapping from the RMT potential to the chiral Lagrangian at higher\norder may not be as simple as in the SRMT presented here, but one\ncould speculate that a generalized RMT could reproduce all higher\norder terms of the chiral Lagrangian. Again this equivalence\nonly holds for the zero momentum Lagrangian, but it is also possible to\nreduce the full Lagrangian to an effective zero momentum Lagrangian\n(for a recent example see \\cite{Lehner:2010mv}),\nwhich then might be representable as a RMT.\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=0.8\\textwidth]{nv3232.eps}\n \\caption{\\label{nv}\n Number variance of the lowest eigenvalues of the \n $32^4$ lattice ensemble and the SRMT.\n }\n \\end{center}\n\\end{figure}\n\n\n\\section{Dominant form of taste breaking}\n\nMost of the taste breaking coefficients can be measured in lattice\nsimulations by comparing to staggered chiral perturbation theory\n(S$\\chi$PT) results. The four single-trace parameters are uniquely\ndetermined from the splittings of the pion masses at leading order\n\\cite{Aubin:2003mg}.\nTypically the $C_4$ term is found to be the dominant contribution to\nthe pion spectrum \\cite{Lee:1999zxa}. The two-trace terms enter in\nS$\\chi$PT{} formulas in the combinations\n$C_{V,A}^{\\pm} = (C_{2\\{V,A\\}} \\pm C_{5\\{V,A\\}})\/2$.\nThey don't contribute to the pion splittings at leading order,\nbut the $C_A^-$ and $C_V^-$ terms do appear in one-loop expressions,\nwhile the $C_A^+$ and $C_V^+$ terms do not \\cite{Aubin:2003mg}.\nIn lattice simulations $C_A^-$ has been found to be larger than $C_V^-$\n\\cite{Aubin:2004fs}.\n\nBoth of the dominant terms, $C_4$ and $C_A^-$, come from same term in\nthe SRMT with the more general form for the weight function. 
This\nsupports the idea of constructing the SRMT from a single matrix with\nthe correct symmetries and with a generalized weight function. Based\non the lattice measurements, the leading contribution to taste\nbreaking in the staggered Dirac matrix has an axial-vector taste\nstructure. If one imagined rotating the staggered Dirac matrix into a\ntaste basis and then expanding in powers of $a$, the dominant\ncorrection at order $a$ would then have the same form as in\n(\\ref{t4}). Of course the exact potential for the lattice Dirac\nmatrix would be much more complicated than in the SRMT, but the\nleading effects at low energy can be captured by the SRMT weight\nfunction considered here. Among the terms in the SRMT potential\ncorresponding to the axial-vector matrix, we have no reason to favor\none over the other, so we would naively expect them to be of\nsimilar magnitude. We would then expect the corresponding terms\nin the chiral Lagrangian to be of similar order, and also\ndominant over the corresponding terms with other taste symmetries,\nwhich is consistent with lattice measurements.\n\n\n\\section{Extracting LECs from RMT}\n\nThe RMT predictions for the eigenvalue correlations can be used to\nextract LECs from lattice simulations.\nFor example $\\Sigma_0$ can be obtained by fitting to the eigenvalue\ndensity.\nAdditionally $F$ can be obtained from the correlations of eigenvalues\nwith an imaginary chemical potential \\cite{rmtf}.\n\nIf the taste breaking is small enough then these methods can apply\ndirectly to staggered eigenvalues by replacing each quartet of\neigenvalues with its average \\cite{latev}. In this case one could also try\nto extract the taste breaking parameters from the splittings of the\neigenvalues within the quartet. In practice, it would likely be too\ndifficult to extract all the parameters, but if one assumes that the\n$C_4$ term is dominant, then it should be possible to estimate it from\ncomparison to the SRMT.\n\nHowever, if the taste breaking is large, then the higher order taste\nbreaking terms can become important. In this case the effective\nchiral Lagrangian reduces to a single flavor for the remaining\nstaggered chiral symmetry with a new set of LECs that are in principle\nunrelated to the original ones \\cite{srmt}. Thus extracting the LECs\nfrom low eigenvalues when there aren't clear quartets present may not\nyield the continuum LECs in chiral Lagrangian.\n\n\\begin{figure}\n \\vspace{-7mm}\n \\begin{center}\n \\begin{minipage}[t]{.49\\textwidth}\n \\includegraphics[clip,width=\\textwidth]{idq3232.eps}\n \\end{minipage}\n \\begin{minipage}[t]{.49\\textwidth}\n \\includegraphics[clip,width=\\textwidth]{idq23232.eps}\n \\end{minipage}\n \\caption{\\label{id}\n Individual integrated densities of the lowest four eigenvalues\n from the $32^4$ lattice ensemble and the SRMT. 
The dominant taste breaking\n parameter in the SRMT is set to $a^2VC_4 = 0.3$ (left) and $0.2$ (right).\n }\n \\end{center}\n \\vspace{-7mm}\n\\end{figure}\n\n\n\\section{Comparison to lattice simulations}\n\\vspace{-2mm}\n\nAs an initial test of the SRMT, we will compare the leading order taste breaking\nterm obtained from fitting the Dirac operator eigenvalues to the SRMT with that\nobtained from fitting the pion spectrum to the staggered chiral Lagrangian.\nSince it is difficult to get a single lattice ensemble that we could use for\nboth measurements we will use two ensembles with all parameters identical except\nfor the volume.\n\nFor the pion masses we use an ensemble from the MILC collaboration\n2+1+1 flavor HISQ runs \\cite{Bazavov:2010pi}. The ensemble has a\nvolume of $32^3\\times 96$ with a lattice spacing of $a\\approx 0.09$ fm and\nwith a light quark mass $m_l = m_s\/5$. From the pion mass splittings\nwe can get the single-trace taste breaking terms in the chiral\nLagrangian. Using a value of $F=131$ MeV, this gives\n\\begin{eqnarray}\na^2 V C_1 = 0.03(8),~~\na^2 V C_3 = -0.03(4),~~\na^2 V C_4 = 0.84(4),~~\na^2 V C_6 = 0.03(3) ~.\n\\end{eqnarray}\nWe can see here that $C_4$ is clearly dominant, as expected, and that\nthe other coefficients are all consistent with zero.\n\nWe generated a new ensemble of 430 lattices of size $32^4$ with all\nother parameters the same as the previous ensemble. From the volume\nscaling we expect to find\n\\begin{eqnarray}\n\\label{c4}\na^2 V C_4 = 0.28(1)\n\\end{eqnarray}\non this new ensemble.\nWe then compare lattice eigenvalues with numerical simulations of the SRMT\nwith only the $C_4$ taste breaking term. For this comparison we choose to\nuse the number variance statistic. This shows the fluctuations\n(variance) in the number of eigenvalues in an interval starting at\nzero versus the average number in the interval. We evaluated it\nnumerically from simulations of the SRMT with $N=400$.\n\nIn Figure \\ref{nv} we plot the number variance of the lattice\neigenvalues against the SRMT at different values of $a^2 V C_4$. We\ncan see that the best agreement for small intervals (up to around\n2 to 4 eigenvalues) is at $a^2 V C_4 \\approx 0.3$.\nThis is in good agreement with our prediction (\\ref{c4})\nobtained from the pion mass splittings.\nAs the length of the interval grows,\nthe lattice results start to move away from the SRMT result. This is\nlikely due to higher momentum modes entering on the lattice, that aren't\ncaptured in the RMT. 
This happens at the QCD equivalent of the Thouless energy \\cite{te},\nwhich for this ensemble, in units of the average eigenvalue spacing,\nis $F^2 \\sqrt{V} \\approx 3.5$.\nThis is consistent with our observations from the number variance.\n\nAs a further check that the SRMT describes the low eigenvalues of the\nstaggered Dirac operator, we look at the individual integrated\ndensities of the lowest four eigenvalues.\nIn Figure \\ref{id} we see the integrated density from the $32^4$\nlattice along with that from the numerical simulations of the SRMT at\ntwo different values of the taste breaking parameter\n$a^2VC_4 = 0.2,0.3$.\nWe can see that the value of $0.3$ fits the lattice data much better\nthan at $0.2$, again confirming that the predictions of the SRMT\nfit the lattice data well and give estimates of the taste breaking\nparameters that are consistent with the staggered chiral Lagrangian.\n\n\n\\section{Summary}\n\nWe have shown a chiral RMT that incorporates all leading order taste\n breaking terms from staggered chiral Lagrangian.\nThe SRMT can be constructed by considering a RMT\n with same symmetries as the staggered Dirac operator and a generalized weight function.\nInitial tests show that the predictions of the SRMT are in good agreement with\n lattice simulations within the range of validity of the SRMT.\nAdditionally, using the predictions of the SRMT,\n the dominant taste breaking parameter can be extracted from\n the low eigenvalues of the staggered Dirac operator and gives consistent\n results with that obtained from the pion mass splittings.\n\n\n\\begin{acknowledgments}\n We thank Doug Toussaint for providing the pion splitting data for\n the MILC ensemble. This research used resources of the Argonne\n Leadership Computing Facility at Argonne National Laboratory, and\n was supported by the U.S. DOE under contract DE-AC02-06CH11357.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConsider the set of natural numbers ${\\bf \\mathbb {N}^+} = \\{1,2,3,4,...\\} $, the Collatz mapping function $f$ maps each integer $n \\in {\\bf \\mathbb N^+}$ to another positive integer also in ${\\bf \\mathbb N^+}$ by the configuration\n\\begin{equation}\nf(n)= \\frac{3^{b(n)}n + b(n)}{2}\n\\end{equation}\nwhere the binary flag function $b(n)=1$ is fixed if n is odd, otherwise $b(n)=0$ \\cite{Ter76}. \n\nNote that this representation is equivalent to the Collatz mapping function \n\\begin{equation}\nf(n) = \n\\begin{cases}\n\\frac{3n+1}{2} &\\text{if $n \\equiv 1 $ (mod 2)}, i.e., ~[1]_2 \\\\\n\\frac{n}{2} &\\text{if $n \\equiv 0$ (mod 2)}, i.e., ~ [0]_2 \\\\\n\\end{cases}\n\\end{equation}\noften used in literature. The iterative execution of $f(n)$ always produces a sequence that terminates at 1 for any integer input $n$, according to the mathematician Lothar Collatz. This conjecture is known as the Collatz conjecture (CC) \\cite{Lag85}, \\cite{And98}, \\cite{Van05} and was first proposed in 1937. It is commonly known as the \\textit{3n+1} or \\textit{3x+1} conjecture \\cite{Lag85}, \\cite{Marc96}, \\cite{Gar81}, \\cite{Sim99}, Ulam conjecture, Thwaites conjecture, the Kakutani's problem, the Hasse's algorithm, and the Syracuse problem \\cite{Mad97}, \\cite{Syr01}. This problem is easy to state but extremely difficult to prove. Many mathematicians have investigated and written articles about the conjetcure \\cite{Ste77}, \\cite{Bel06}, \\cite{Bel98}, \\cite{Sim05}, \\cite{Sin10}. 
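For illustration, starting from $n=6$ the map $f$ above gives $6 \\to 3 \\to 5 \\to 8 \\to 4 \\to 2 \\to 1$, so the sequence reaches 1 after six iterations; the conjecture asserts that this happens for every starting value $n$. 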
We recommend the work of Lagarias \\cite{Lag85}, \\cite{Lag11}, \\cite{Lag12} for comprehensive and annotated bibliographies on the subject. \n\nThe principal aim of this paper is to document, collate and present the results of a newly proposed number system formalised for investigating the truth of the CC. \n\n\n\\subsection{A new covering system and congruence classes modulo 18}\nHere we seek to formulate a simple covering system for the odd integers such that every odd integer may be represented by exactly one of the residue classes $d_i \\equiv r_i \\mod 18$ and each residue class is a finite (or infinite) collection (i.e., reordered set) of integers congruent to any member of the set $\\{u_{i1}(\\mathrm{mod}\\ {2^13^2}),\\ \\ldots,\\ u_{im}(\\mathrm{mod}\\ {2^m3^2})\\}$. \n\nThe covering system sought for the proposed Collatz number systems must be a 1-cover, i.e., it must cover every odd integer exactly once. One of the objectives of this proposition is to develop a complete recategorisation of odd numbers based on both their residue classes and Collatz profiles (i.e., according to their $2^m$ divisibility properties after multiplying them by 3 and adding 1).\n\n\n\\subsection{Definitions}\nLet $\\sigma_{\\infty}(n)$ represent the total stopping time of an integer $n$ under the Collatz system, where the symbol ``$\\sigma_{\\infty}$'' refers to the ``number of iterations it takes to get to 1'' starting from the input $n$; this is often referred to as the total stopping time of $n$.\n\nDefine the term \\textit{Collatz profile} $f(d_i,m)$ as a representation of odd numbers $d_i$ congruent to $r_i \\mod 18$ whose Collatz result $3d_i+1$ is divisible by $2^m$ such that the optimised result $\\frac{3d_i+1}{2^m}$ is odd. Hence, under the Collatz system the total stopping time relation $\\sigma_{\\infty}(d_i)=\\sigma_{\\infty}(\\frac{3d_i+1}{2^m})+(m+1)$ holds, i.e., the total stopping time $\\sigma_{\\infty}(d_i)$ is $(m+1)$ iterates more than that of the next odd number in the Collatz sequence: $\\sigma_{\\infty}(\\frac{3d_i+1}{2^m})=\\sigma_{\\infty}(d_i)-(m+1)$. \n\nFor example, $f(13,3) = \\frac{3(13)+1}{2^{\\bf 3}} = 5$ implies that if the total stopping time of number {\\bf 13} was $\\sigma_{\\infty}(13)$ under the Collatz system, then the total stopping time of number {\\bf 5} \\textit{would be} equal to $\\sigma_{\\infty}(5)=\\sigma_{\\infty}(13)-({\\bf 3}+1)=\\sigma_{\\infty}(13)-4$, i.e., $\\sigma_{\\infty}(13) = 9$ and $\\sigma_{\\infty}(5) = 5$. Such a simple inference (concept) may be used to prove the CC by \\textit{reverse engineering}, i.e., the relative inference of total stopping times.\n\nDefine $r$ to be an odd number such that the following conditions are satisfied:\n\\begin{enumerate}\n\\item $1 \\le r <18$;\n\\item $d_i=18V_{r_i, m}(n)+r_i$, $n \\ge 0$; and\n\\item $\\frac{3d_i+1}{2^m}$ is odd.\n \\end{enumerate}\n$V_{r, m}(n)$ is defined as the multiplicative set (function) that uniquely identifies numbers of the same residue class that share a similar Collatz-like pattern (profile); i.e., $V_{r, m}(n)$ is regarded as a parameter set that depends on the actual values of $d_i$ and $r$, where $d_i$ is odd and congruent to $r$ (mod 18) and $r \\in S$, with $S=\\{1, 5, 3, 13, 17, 15, 7, 11, 9\\}$ a reordered set of the residues $r$. This set $S$ establishes the connection between sets of integers in one residue class of the number system and sets of integers in other residue classes. 
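For instance, $4\\cdot 1+1=5$, $4\\cdot 5+1=21 \\equiv 3$ (mod 18), $4\\cdot 3+1=13$, and so on, up to $4\\cdot 9+1=37 \\equiv 1$ (mod 18); that is, repeated application of $d \\mapsto 4d+1$ cycles through the residues of $S$ in precisely the order in which they are listed. 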
The rationale behind this \\textit{reordering} of the residue classes is of utmost importance (as demonstrated and explained in the next section). \n\nIf we further stratify or reclassify every congruence class $d_i$ according to the result $ \\frac{3d_i+1}{2^m}$, i.e., so that m is optimised and $ \\frac{3d_i+1}{2^m}$ yields an odd number result, how does $d_i$ relate to the variable $m$, the exponent of $2^m$? Are there scientific or theoretical evidences to support the claim that the metatheory behind such relations may be completely deterministic? Ultimately, what is the schema for the proposed generalised Collatz based number system? These are the theoretical questions considered in this paper. \n\nFrom a theoretical point of view, the task of constructing a generalised schema of the Collatz based number system seems difficult, time consuming, and daunting. In practical terms, the difficulty associated with such tasks often seem more discouraging, because there is almost no guarantee of little or no recompense for the time invested. This is no longer the case. Here we present fundamental results that facilitate new theoretical perspectives on the Collatz conjecture and the generalised Collatz based number system. \n\nThe next section (Section \\ref{4np1}) introduces a modified (optimised) Collatz function and briefly gives a gentle introduction about a new discovery that inspires better understanding and new perspectives on residue classes modulo 18. \n\nThe proposed metatheories of the Collatz based number system and its general proof are presented in section \\ref{Reorder}, including the proposed schemata comprising of a map of generalised Collatz based number system and corresponding map of (distinct and fundamental) total stopping time functions.\n\n\n\\section{The fundamental relations between certain odd numbers} \\label{4np1}\nIt is important to understand the fundamental relations between integers $d_i$ congruent to $r \\mod 18$ and these relations define the inferred collatz properties, i.e., the divisibility quantities: $2^m | (3d_i+1)$ analogous to the result $[p]_{54}$ where $1 \\le p \\le 54$ and p must be odd. In other words, how can new covering subsystems be formulated for the residue classes of odd integers, classifying each subsystem according to the prescribed Collatz properties? This question requires a thorough understanding of: a) the mechanisms of Collatz sequence transformations from odd to odd integers; b) the formulation of a new covering subsystems for the entire set of odd integers; and c) the determination of (total) stopping time for every odd integer for the production of an irrefutable proof of the Collatz conjecture. The last point, which is not addressed completely in this foundational paper, requires a thorough analysis of the propose theoretical schema of the Collatz based number system, i.e., a complete understanding of the covering system of all odd integers. 
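In the notation introduced above, the covering property we seek may be written compactly (this is a sketch of the target statement, to be justified by the explicit constructions that follow):\n\\begin{equation}\n\\bigcup_{i=1}^{9}\\;\\bigcup_{m\\ge 1}\\;\\left\\{\\,18V_{r_i,m}(n)+r_i \\;:\\; n\\ge 0\\,\\right\\}=\\{1,3,5,7,\\ldots\\},\n\\end{equation}\nwith the classes pairwise disjoint, since every odd integer $d$ fixes a unique residue $r_i\\equiv d$ (mod 18) and a unique exponent $m$ with $2^m \\mid (3d+1)$ and $2^{m+1}\\nmid (3d+1)$. 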
\n\n\n\\subsection{The modified Collatz function}Introduce the modified Collatz function\n\\begin{equation}\\label{fdm}\nf(d,m)= \\frac{3^{b(d)}d + b(d)}{2^m} = \\frac{3d+1}{2^m},\n\\end{equation}\nwhere $m$ is the maximum exponent such that the odd number $d \\in {\\bf N^+}$ and the result $f(n,m)$ is an odd integer, i.e., $2^m \\mid (3d+1)$ but $2^{m+1} \\ndiv (3d+1)$ \\cite{Ter76}.\n\n\\subsection{Residue classes modulo 18}\nGiven that $d$ is an odd number which belongs to only one of the following residue classes: \n\n\\begin{equation}\nd = \n\\begin{cases}\nd_1 &\\text{if $ d\\equiv 1 $ (mod 18), e.g. 1, 349525, \\dots}\\\\\nd_2 &\\text{if $ d \\equiv 5$ (mod 18), e.g. 5}\\\\\n\\textcolor{red}{d_3 }&\\text{if $ d \\equiv 3$ (mod 18), e.g. \\textcolor{red}{21}}\\\\\nd_4 &\\text{if $ d \\equiv 13$ (mod 18), e.g. 85}\\\\\nd_5 &\\text{if $ d \\equiv 17$ (mod 18), e.g. 341}\\\\\n\\textcolor{red}{d_6 }&\\text{if $ d \\equiv 15$ (mod 18), e.g. \\textcolor{red}{1365}}\\\\\nd_7 &\\text{if $ d \\equiv 7$ (mod 18), e.g. 5461}\\\\\nd_8 &\\text{if $ d \\equiv 11$ (mod 18), e.g. 21845}\\\\\n\\textcolor{red}{d_9 }&\\text{if $ d \\equiv 9$ (mod 18), e.g. \\textcolor{red}{87381}}\\\\\n\\hline\n\\textcolor{gray}{d_{10} }&\\textcolor{gray}{\\text{if $ d \\equiv 1$ (mod 18), e.g. \\textcolor{gray}{349525}}: \\text{$ d_{10} = [1]_{18} = d_1$}} \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\nwhere $\\{d_1 ~\\cup ~d_2 ~\\cup ~d_4 ~\\cup ~d_5 \\cup ~d_7 ~\\cup ~d_8\\}$ is the set of all odd numbers indivisible by 3, \n$[1]_{18} \\in d_1$, \n$[5]_{18} \\in d_2$, \n$[3]_{18} \\in d_3$, \n$[13]_{18} \\in d_4$, \n$[17]_{18} \\in d_5$, \n$[15]_{18} \\in d_6$, \n$[7]_{18} \\in d_7$, \n$[11]_{18} \\in d_8$, \n$[9]_{18} \\in d_9$, and of course, \n$[0]_3 \\in \\{\\{d_3~ \\cup~d_6~ \\cup~d_9\\}$. This rearrangement enables one to capture cyclic recurrence relations between the odd integers using the relation $4d_i+1$, where $d_i \\in d$ as demonstrated in the next subsection.\n\n\\subsection{Cyclic recurrence relations between odd numbers}\nLet $r_i$ be a member of the finite set $S=\\{1,5,3,13,17,15,7,11, 9\\}$; $1 \\le i \\le 9$. This rearrangement of $[r_i]_{18}$ gives new insights into cyclic recurrence relations between the odd numbers:\n\n\\begin{equation}\nd =\n\\begin{cases}\n[r_i]_{18} &\\text{if $ 4([r_{i+8}]_{18})+1 \\equiv r_{i} ~(mod ~ 18) $ \\dots}\\\\\n[r_{i+1}]_{18} &\\text{if $ 4([r_i]_{18})+1 \\equiv r_{i+1} ~ (mod ~ 18) $ \\dots}\\\\\n\\vdots \\\\\n[r_{i+8}]_{18} &\\text{if $ 4([r_{i+7}]_{18})+1 \\equiv r_{i+8} ~ (mod ~ 18) $ }\n\\end{cases};\n\\end{equation}\n e.g. Let $ 19 \\in d_i = [r_i]_{18} = [1]_{18} \\equiv 1$ (mod 18), then $ 77 = 4d_i+1 = [r_{i+1}]_{18} = [5]_{18} \\equiv 5$ (mod 18), $ 309 = 4^2d_i+5 = [r_{i+2}]_{18} = 3$ (mod 18), $ 4^3d_i+21 = [r_{i+3}]_{18} $, $\\dots$ , $ 1267029 = 4^8d_i+21845 = [r_{i+8}]_{18}$, $ 5068117 = 4^9d_i+87381 = [r_{i}]_{18}$, and so on. \n \n\\section{The metatheory and properties of odd Integers }\\label{Reorder}\nWe formulate and show how the arrays of $d_i$ that correspond to certain integers congruent to $r_i$ modulo 18 are defined, i.e., the $d_i=18V_{r_i, m}(n)+r_i$, $n \\ge 0$ and $1 \\le i \\le 9$ that satisfy the required optimisation condition $\\frac{3d_i+1}{2^m}=odd$. 
\n\n\\subsection{New theories about certain odd numbers}\n\\begin{theorem}\\label{mainthm}\nLet the odd number transformation under the Collatz sequence system be defined as the function $f(d_i,m)= \\frac{3d_i+1}{2^m} = d_{{i,m}_{next}}$ where $d_{{i,m}_{next}}$ represents the next odd number after the odd number $d_i$. The fundamental rules governing the relations between sets of $d_i$ and corresponding (i.e. mapped) sets of $V_{r_i,m}(n)$ are found to be completely deterministic (i.e. non-chaotic) on the following conditions enumerated:\n\\begin{enumerate}\n\\item\n$d_i \\equiv S(i) \\mod 18$, i.e., $r_i = S(i)$;\n\\item\n$d_i = 18V_{r_i,m}(n)+S(i)$; and\n\\item\nthe Collatz result $d_{{i,m}_{next}} = \\frac{3(18~ V_{r_i,m}(n)+1)+S(i)}{2^m}$ must be odd.\n\\end{enumerate}\n\\end{theorem}\n\nThe proof of this major theorem requires enlisting all the possible $d_i$ and $V_{i,m}(n)$ values and demonstrating that the result $\\frac{3(18V_{i,m}(n)+1)+S(i)}{2^m}$ is always odd. This theorem alone requires 162 explicit proofs - one for every statement. For the purpose of brevity, the trivialities involved in the explicit proofs are all avoided. \n\n\\begin{proof}\nWhen $ i =1, r_1 = 1$:\n\n{\\small\n\\begin{equation}\nV_{1, m}(n) = \n\\begin{cases}\n\\text{$V_{1,1} \\in $ \\{1,3,5,7,...\\}} = & 2n + 1 \\\\\n\\text{$V_{1,2} \\in $ \\{0,4,8,12,...\\}} = & 4n \\\\\n\\text{$V_{1,3} \\in $ \\{6,14,22,30,...\\}} = & 8n + 6 \\\\\n\\text{$V_{1,4} \\in $ \\{2,18,34,50,...\\}} = & 16n + 2 \\\\\n\\text{$V_{1,5} \\in $ \\{10,42,74,106,...\\}} = & 32n + 10 \\\\\n\\text{$V_{1,6} \\in $ \\{58,122,186,250,...\\}} = & 64n + 58 \\\\\n\\text{$V_{1,7} \\in $ \\{26,154,282,410,...\\}} = & 128n + 26 \\\\\n\\text{$V_{1,8} \\in $ \\{90,346,602,858,...\\}} = & 256n + 90 \\\\\n\\text{$V_{1,9} \\in $ \\{218,730,1242,1754,...\\}} = & 512n + 218 \\\\\n\\text{$V_{1,10} \\in $ \\{474,1498,2522,3546,...\\}} = & 1024n + 474 \\\\\n\\text{$V_{1,11} \\in $ \\{2010,4058,6106,8154,...\\}} = & 2048n + 2010 \\\\\n\\text{$V_{1,12} \\in $ \\{986,5082,9178,13274,...\\}} = & 4096n + 986 \\\\\n\\text{$V_{1,13} \\in $ \\{7130,15322,23514,31706,...\\}} = & 8192n + 7130 \\\\\n\\text{$V_{1,14} \\in $ \\{11226,27610,43994,60378,...\\}} = & 16384n + 11226 \\\\\n\\text{$V_{1,15} \\in $ \\{3034,35802,68570,101338,...\\}} = & 32768n + 3034 \\\\\n\\text{$V_{1,16} \\in $ \\{52186,117722,183258,248794,...\\}} = & 65536n + 52186 \\\\\n\\text{$V_{1,17} \\in $ \\{84954,216026,347098,478170,...\\}} = & 131072n + 84954 \\\\\n\\text{$V_{1,18} \\in $ \\{150490,412634,674778,936922,...\\}} = & 262144n + 150490 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_1 = 18V_{1, m}(n)+1 =\n\\begin{cases}\n\\text{$d_{1,1} \\in $ \\{19,55,91,127,...\\}} = & 36n + 19 \\\\\n\\text{$d_{1,2} \\in $ \\{1,73,145,217,...\\}} = & 72n + 1 \\\\\n\\text{$d_{1,3} \\in $ \\{109,253,397,541,...\\}} = & 144n + 109 \\\\\n\\text{$d_{1,4} \\in $ \\{37,325,613,901,...\\}} = & 288n + 37 \\\\\n\\text{$d_{1,5} \\in $ \\{181,757,1333,1909,...\\}} = & 576n + 181 \\\\\n\\text{$d_{1,6} \\in $ \\{1045,2197,3349,4501,...\\}} = & 1152n + 1045 \\\\\n\\text{$d_{1,7} \\in $ \\{469,2773,5077,7381,...\\}} = & 2304n + 469 \\\\\n\\text{$d_{1,8} \\in $ \\{1621,6229,10837,15445,...\\}} = & 4608n + 1621 \\\\\n\\text{$d_{1,9} \\in $ \\{3925,13141,22357,31573,...\\}} = & 9216n + 3925 \\\\\n\\text{$d_{1,10} \\in $ \\{8533,26965,45397,63829,...\\}} = & 18432n + 8533 \\\\\n\\text{$d_{1,11} \\in $ \\{36181,73045,109909,146773,...\\}} = & 36864n + 36181 \\\\\n\\text{$d_{1,12} \\in $ 
\\{17749,91477,165205,238933,...\\}} = & 73728n + 17749 \\\\\n\\text{$d_{1,13} \\in $ \\{128341,275797,423253,570709,...\\}} = & 147456n + 128341 \\\\\n\\text{$d_{1,14} \\in $ \\{202069,496981,791893,1086805,...\\}} = & 294912n + 202069 \\\\\n\\text{$d_{1,15} \\in $ \\{54613,644437,1234261,1824085,...\\}} = & 589824n + 54613 \\\\\n\\text{$d_{1,16} \\in $ \\{939349,2118997,3298645,4478293,...\\}} = & 1179648n + 939349 \\\\\n\\text{$d_{1,17} \\in $ \\{1529173,3888469,6247765,8607061,...\\}} = & 2359296n + 1529173 \\\\\n\\text{$d_{1,18} \\in $ \\{2708821,7427413,12146005,16864597,...\\}} = & 4718592n + 2708821 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_i+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =2, r_2 = S(2) = 5$ (note that the index of V below is 5):\n{\\small\n\\begin{equation}\nV_{5, m}(n) = \n\\begin{cases}\n\\text{$V_{5,1} \\in $ \\{1,3,5,7,...\\}} = & 2n + 1 \\\\\n\\text{$V_{5,2} \\in $ \\{2,6,10,14,...\\}} = & 4n + 2 \\\\\n\\text{$V_{5,3} \\in $ \\{4,12,20,28,...\\}} = & 8n + 4 \\\\\n\\text{$V_{5,4} \\in $ \\{0,16,32,48,...\\}} = & 16n \\\\\n\\text{$V_{5,5} \\in $ \\{24,56,88,120,...\\}} = & 32n + 24 \\\\\n\\text{$V_{5,6} \\in $ \\{8,72,136,200,...\\}} = & 64n + 8 \\\\\n\\text{$V_{5,7} \\in $ \\{40,168,296,424,...\\}} = & 128n + 40 \\\\\n\\text{$V_{5,8} \\in $ \\{232,488,744,1000,...\\}} = & 256n + 232 \\\\\n\\text{$V_{5,9} \\in $ \\{104,616,1128,1640,...\\}} = & 512n + 104 \\\\\n\\text{$V_{5,10} \\in $ \\{360,1384,2408,3432,...\\}} = & 1024n + 360 \\\\\n\\text{$V_{5,11} \\in $ \\{872,2920,4968,7016,...\\}} = & 2048n + 872 \\\\\n\\text{$V_{5,12} \\in $ \\{1896,5992,10088,14184,...\\}} = & 4096n + 1896 \\\\\n\\text{$V_{5,13} \\in $ \\{8040,16232,24424,32616,...\\}} = & 8192n + 8040 \\\\\n\\text{$V_{5,14} \\in $ \\{3944,20328,36712,53096,...\\}} = & 16384n + 3944 \\\\\n\\text{$V_{5,15} \\in $ \\{28520,61288,94056,126824,...\\}} = & 32768n + 28520 \\\\\n\\text{$V_{5,16} \\in $ \\{44904,110440,175976,241512,...\\}} = & 65536n + 44904 \\\\\n\\text{$V_{5,17} \\in $ \\{12136,143208,274280,405352,...\\}} = & 131072n + 12136 \\\\\n\\text{$V_{5,18} \\in $ \\{208744,470888,733032,995176,...\\}} = & 262144n + 208744 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_2 = 18V_{5, m}(n)+5 = \n\\begin{cases}\n\\text{$d_{2,1} \\in $ \\{23,59,95,131,...\\}} = & 36n + 23 \\\\\n\\text{$d_{2,2} \\in $ \\{41,113,185,257,...\\}} = & 72n + 41 \\\\\n\\text{$d_{2,3} \\in $ \\{77,221,365,509,...\\}} = & 144n + 77 \\\\\n\\text{$d_{2,4} \\in $ \\{5,293,581,869,...\\}} = & 288n + 5 \\\\\n\\text{$d_{2,5} \\in $ \\{437,1013,1589,2165,...\\}} = & 576n + 437 \\\\\n\\text{$d_{2,6} \\in $ \\{149,1301,2453,3605,...\\}} = & 1152n + 149 \\\\\n\\text{$d_{2,7} \\in $ \\{725,3029,5333,7637,...\\}} = & 2304n + 725 \\\\\n\\text{$d_{2,8} \\in $ \\{4181,8789,13397,18005,...\\}} = & 4608n + 4181 \\\\\n\\text{$d_{2,9} \\in $ \\{1877,11093,20309,29525,...\\}} = & 9216n + 1877 \\\\\n\\text{$d_{2,10} \\in $ \\{6485,24917,43349,61781,...\\}} = & 18432n + 6485 \\\\\n\\text{$d_{2,11} \\in $ \\{15701,52565,89429,126293,...\\}} = & 36864n + 15701 \\\\\n\\text{$d_{2,12} \\in $ \\{34133,107861,181589,255317,...\\}} = & 73728n + 34133 \\\\\n\\text{$d_{2,13} \\in $ \\{144725,292181,439637,587093,...\\}} = & 147456n + 144725 \\\\\n\\text{$d_{2,14} \\in $ \\{70997,365909,660821,955733,...\\}} = & 294912n + 70997 \\\\\n\\text{$d_{2,15} \\in $ \\{513365,1103189,1693013,2282837,...\\}} = & 589824n + 513365 \\\\\n\\text{$d_{2,16} \\in $ \\{808277,1987925,3167573,4347221,...\\}} = & 1179648n + 808277 
\\\\\n\\text{$d_{2,17} \\in $ \\{218453,2577749,4937045,7296341,...\\}} = & 2359296n + 218453 \\\\\n\\text{$d_{2,18} \\in $ \\{3757397,8475989,13194581,17913173,...\\}} = & 4718592n + 3757397 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_2+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =3, r_3 = S(3) = 3$:\n{\\small\n\\begin{equation}\nV_{3, m}(n) = \n\\begin{cases}\n\\text{$V_{3,1} \\in $ \\{0,2,4,6,...\\}} = & 2n \\\\\n\\text{$V_{3,2} \\in $ \\{3,7,11,15,...\\}} = & 4n + 3 \\\\\n\\text{$V_{3,3} \\in $ \\{5,13,21,29,...\\}} = & 8n + 5 \\\\\n\\text{$V_{3,4} \\in $ \\{9,25,41,57,...\\}} = & 16n + 9 \\\\\n\\text{$V_{3,5} \\in $ \\{17,49,81,113,...\\}} = & 32n + 17 \\\\\n\\text{$V_{3,6} \\in $ \\{1,65,129,193,...\\}} = & 64n + 1 \\\\\n\\text{$V_{3,7} \\in $ \\{97,225,353,481,...\\}} = & 128n + 97 \\\\\n\\text{$V_{3,8} \\in $ \\{33,289,545,801,...\\}} = & 256n + 33 \\\\\n\\text{$V_{3,9} \\in $ \\{161,673,1185,1697,...\\}} = & 512n + 161 \\\\\n\\text{$V_{3,10} \\in $ \\{929,1953,2977,4001,...\\}} = & 1024n + 929 \\\\\n\\text{$V_{3,11} \\in $ \\{417,2465,4513,6561,...\\}} = & 2048n + 417 \\\\\n\\text{$V_{3,12} \\in $ \\{1441,5537,9633,13729,...\\}} = & 4096n + 1441 \\\\\n\\text{$V_{3,13} \\in $ \\{3489,11681,19873,28065,...\\}} = & 8192n + 3489 \\\\\n\\text{$V_{3,14} \\in $ \\{7585,23969,40353,56737,...\\}} = & 16384n + 7585 \\\\\n\\text{$V_{3,15} \\in $ \\{32161,64929,97697,130465,...\\}} = & 32768n + 32161 \\\\\n\\text{$V_{3,16} \\in $ \\{15777,81313,146849,212385,...\\}} = & 65536n + 15777 \\\\\n\\text{$V_{3,17} \\in $ \\{114081,245153,376225,507297,...\\}} = & 131072n + 114081 \\\\\n\\text{$V_{3,18} \\in $ \\{179617,441761,703905,966049,...\\}} = & 262144n + 179617 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_3 = 18V_{3, m}(n)+3 =\n\\begin{cases}\n\\text{$d_{3,1} \\in $ \\{3,39,75,111,...\\}} = & 36n + 3 \\\\\n\\text{$d_{3,2} \\in $ \\{57,129,201,273,...\\}} = & 72n + 57 \\\\\n\\text{$d_{3,3} \\in $ \\{93,237,381,525,...\\}} = & 144n + 93 \\\\\n\\text{$d_{3,4} \\in $ \\{165,453,741,1029,...\\}} = & 288n + 165 \\\\\n\\text{$d_{3,5} \\in $ \\{309,885,1461,2037,...\\}} = & 576n + 309 \\\\\n\\text{$d_{3,6} \\in $ \\{21,1173,2325,3477,...\\}} = & 1152n + 21 \\\\\n\\text{$d_{3,7} \\in $ \\{1749,4053,6357,8661,...\\}} = & 2304n + 1749 \\\\\n\\text{$d_{3,8} \\in $ \\{597,5205,9813,14421,...\\}} = & 4608n + 597 \\\\\n\\text{$d_{3,9} \\in $ \\{2901,12117,21333,30549,...\\}} = & 9216n + 2901 \\\\\n\\text{$d_{3,10} \\in $ \\{16725,35157,53589,72021,...\\}} = & 18432n + 16725 \\\\\n\\text{$d_{3,11} \\in $ \\{7509,44373,81237,118101,...\\}} = & 36864n + 7509 \\\\\n\\text{$d_{3,12} \\in $ \\{25941,99669,173397,247125,...\\}} = & 73728n + 25941 \\\\\n\\text{$d_{3,13} \\in $ \\{62805,210261,357717,505173,...\\}} = & 147456n + 62805 \\\\\n\\text{$d_{3,14} \\in $ \\{136533,431445,726357,1021269,...\\}} = & 294912n + 136533 \\\\\n\\text{$d_{3,15} \\in $ \\{578901,1168725,1758549,2348373,...\\}} = & 589824n + 578901 \\\\\n\\text{$d_{3,16} \\in $ \\{283989,1463637,2643285,3822933,...\\}} = & 1179648n + 283989 \\\\\n\\text{$d_{3,17} \\in $ \\{2053461,4412757,6772053,9131349,...\\}} = & 2359296n + 2053461 \\\\\n\\text{$d_{3,18} \\in $ \\{3233109,7951701,12670293,17388885,...\\}} = & 4718592n + 3233109 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_3+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $ i =4, r_4 = S(4) = 13$ (note that the index of V below is 13):\n{\\small\n\\begin{equation}\nV_{13, m}(n) = \n\\begin{cases}\n\\text{$V_{13,1} \\in $ \\{1,3,5,7,...\\}} = & 2n + 1 
\\\\\n\\text{$V_{13,2} \\in $ \\{2,6,10,14,...\\}} = & 4n + 2 \\\\\n\\text{$V_{13,3} \\in $ \\{0,8,16,24,...\\}} = & 8n \\\\\n\\text{$V_{13,4} \\in $ \\{12,28,44,60,...\\}} = & 16n + 12 \\\\\n\\text{$V_{13,5} \\in $ \\{20,52,84,116,...\\}} = & 32n + 20 \\\\\n\\text{$V_{13,6} \\in $ \\{36,100,164,228,...\\}} = & 64n + 36 \\\\\n\\text{$V_{13,7} \\in $ \\{68,196,324,452,...\\}} = & 128n + 68 \\\\\n\\text{$V_{13,8} \\in $ \\{4,260,516,772,...\\}} = & 256n + 4 \\\\\n\\text{$V_{13,9} \\in $ \\{388,900,1412,1924,...\\}} = & 512n + 388 \\\\\n\\text{$V_{13,10} \\in $ \\{132,1156,2180,3204,...\\}} = & 1024n + 132 \\\\\n\\text{$V_{13,11} \\in $ \\{644,2692,4740,6788,...\\}} = & 2048n + 644 \\\\\n\\text{$V_{13,12} \\in $ \\{3716,7812,11908,16004,...\\}} = & 4096n + 3716 \\\\\n\\text{$V_{13,13} \\in $ \\{1668,9860,18052,26244,...\\}} = & 8192n + 1668 \\\\\n\\text{$V_{13,14} \\in $ \\{5764,22148,38532,54916,...\\}} = & 16384n + 5764 \\\\\n\\text{$V_{13,15} \\in $ \\{13956,46724,79492,112260,...\\}} = & 32768n + 13956 \\\\\n\\text{$V_{13,16} \\in $ \\{30340,95876,161412,226948,...\\}} = & 65536n + 30340 \\\\\n\\text{$V_{13,17} \\in $ \\{128644,259716,390788,521860,...\\}} = & 131072n + 128644 \\\\\n\\text{$V_{13,18} \\in $ \\{63108,325252,587396,849540,...\\}} = & 262144n + 63108 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_4 = 18V_{13, m}(n)+13 =\n\\begin{cases}\n\\text{$d_{4,1} \\in $ \\{31,67,103,139,...\\}} = & 36n + 31 \\\\\n\\text{$d_{4,2} \\in $ \\{49,121,193,265,...\\}} = & 72n + 49 \\\\\n\\text{$d_{4,3} \\in $ \\{13,157,301,445,...\\}} = & 144n + 13 \\\\\n\\text{$d_{4,4} \\in $ \\{229,517,805,1093,...\\}} = & 288n + 229 \\\\\n\\text{$d_{4,5} \\in $ \\{373,949,1525,2101,...\\}} = & 576n + 373 \\\\\n\\text{$d_{4,6} \\in $ \\{661,1813,2965,4117,...\\}} = & 1152n + 661 \\\\\n\\text{$d_{4,7} \\in $ \\{1237,3541,5845,8149,...\\}} = & 2304n + 1237 \\\\\n\\text{$d_{4,8} \\in $ \\{85,4693,9301,13909,...\\}} = & 4608n + 85 \\\\\n\\text{$d_{4,9} \\in $ \\{6997,16213,25429,34645,...\\}} = & 9216n + 6997 \\\\\n\\text{$d_{4,10} \\in $ \\{2389,20821,39253,57685,...\\}} = & 18432n + 2389 \\\\\n\\text{$d_{4,11} \\in $ \\{11605,48469,85333,122197,...\\}} = & 36864n + 11605 \\\\\n\\text{$d_{4,12} \\in $ \\{66901,140629,214357,288085,...\\}} = & 73728n + 66901 \\\\\n\\text{$d_{4,13} \\in $ \\{30037,177493,324949,472405,...\\}} = & 147456n + 30037 \\\\\n\\text{$d_{4,14} \\in $ \\{103765,398677,693589,988501,...\\}} = & 294912n + 103765 \\\\\n\\text{$d_{4,15} \\in $ \\{251221,841045,1430869,2020693,...\\}} = & 589824n + 251221 \\\\\n\\text{$d_{4,16} \\in $ \\{546133,1725781,2905429,4085077,...\\}} = & 1179648n + 546133 \\\\\n\\text{$d_{4,17} \\in $ \\{2315605,4674901,7034197,9393493,...\\}} = & 2359296n + 2315605 \\\\\n\\text{$d_{4,18} \\in $ \\{1135957,5854549,10573141,15291733,...\\}} = & 4718592n + 1135957 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_4+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =5, r_5 = S(5) = 17$ (note that the index of V below is 17):\n{\\small\n\\begin{equation}\nV_{17, m}(n) = \n\\begin{cases}\n\\text{$V_{17,1} \\in $ \\{1,3,5,7,...\\}} = & 2n + 1 \\\\\n\\text{$V_{17,2} \\in $ \\{0,4,8,12,...\\}} = & 4n \\\\\n\\text{$V_{17,3} \\in $ \\{6,14,22,30,...\\}} = & 8n + 6 \\\\\n\\text{$V_{17,4} \\in $ \\{10,26,42,58,...\\}} = & 16n + 10 \\\\\n\\text{$V_{17,5} \\in $ \\{2,34,66,98,...\\}} = & 32n + 2 \\\\\n\\text{$V_{17,6} \\in $ \\{50,114,178,242,...\\}} = & 64n + 50 \\\\\n\\text{$V_{17,7} \\in $ \\{82,210,338,466,...\\}} = & 128n + 82 \\\\\n\\text{$V_{17,8} \\in $ 
\\{146,402,658,914,...\\}} = & 256n + 146 \\\\\n\\text{$V_{17,9} \\in $ \\{274,786,1298,1810,...\\}} = & 512n + 274 \\\\\n\\text{$V_{17,10} \\in $ \\{18,1042,2066,3090,...\\}} = & 1024n + 18 \\\\\n\\text{$V_{17,11} \\in $ \\{1554,3602,5650,7698,...\\}} = & 2048n + 1554 \\\\\n\\text{$V_{17,12} \\in $ \\{530,4626,8722,12818,...\\}} = & 4096n + 530 \\\\\n\\text{$V_{17,13} \\in $ \\{2578,10770,18962,27154,...\\}} = & 8192n + 2578 \\\\\n\\text{$V_{17,14} \\in $ \\{14866,31250,47634,64018,...\\}} = & 16384n + 14866 \\\\\n\\text{$V_{17,15} \\in $ \\{6674,39442,72210,104978,...\\}} = & 32768n + 6674 \\\\\n\\text{$V_{17,16} \\in $ \\{23058,88594,154130,219666,...\\}} = & 65536n + 23058 \\\\\n\\text{$V_{17,17} \\in $ \\{55826,186898,317970,449042,...\\}} = & 131072n + 55826 \\\\\n\\text{$V_{17,18} \\in $ \\{121362,383506,645650,907794,...\\}} = & 262144n + 121362 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_5 = 18V_{17, m}(n) +17 =\n\\begin{cases}\n\\text{$d_{5,1} \\in $ \\{35,71,107,143,...\\}} = & 36n + 35 \\\\\n\\text{$d_{5,2} \\in $ \\{17,89,161,233,...\\}} = & 72n + 17 \\\\\n\\text{$d_{5,3} \\in $ \\{125,269,413,557,...\\}} = & 144n + 125 \\\\\n\\text{$d_{5,4} \\in $ \\{197,485,773,1061,...\\}} = & 288n + 197 \\\\\n\\text{$d_{5,5} \\in $ \\{53,629,1205,1781,...\\}} = & 576n + 53 \\\\\n\\text{$d_{5,6} \\in $ \\{917,2069,3221,4373,...\\}} = & 1152n + 917 \\\\\n\\text{$d_{5,7} \\in $ \\{1493,3797,6101,8405,...\\}} = & 2304n + 1493 \\\\\n\\text{$d_{5,8} \\in $ \\{2645,7253,11861,16469,...\\}} = & 4608n + 2645 \\\\\n\\text{$d_{5,9} \\in $ \\{4949,14165,23381,32597,...\\}} = & 9216n + 4949 \\\\\n\\text{$d_{5,10} \\in $ \\{341,18773,37205,55637,...\\}} = & 18432n + 341 \\\\\n\\text{$d_{5,11} \\in $ \\{27989,64853,101717,138581,...\\}} = & 36864n + 27989 \\\\\n\\text{$d_{5,12} \\in $ \\{9557,83285,157013,230741,...\\}} = & 73728n + 9557 \\\\\n\\text{$d_{5,13} \\in $ \\{46421,193877,341333,488789,...\\}} = & 147456n + 46421 \\\\\n\\text{$d_{5,14} \\in $ \\{267605,562517,857429,1152341,...\\}} = & 294912n + 267605 \\\\\n\\text{$d_{5,15} \\in $ \\{120149,709973,1299797,1889621,...\\}} = & 589824n + 120149 \\\\\n\\text{$d_{5,16} \\in $ \\{415061,1594709,2774357,3954005,...\\}} = & 1179648n + 415061 \\\\\n\\text{$d_{5,17} \\in $ \\{1004885,3364181,5723477,8082773,...\\}} = & 2359296n + 1004885 \\\\\n\\text{$d_{5,18} \\in $ \\{2184533,6903125,11621717,16340309,...\\}} = & 4718592n + 2184533 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_5+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =6, r_6 = S(6) = 15$ (note that the index of V below is 15):\n{\\small\n\\begin{equation}\nV_{15, m}(n) = \n\\begin{cases}\n\\text{$V_{15,1} \\in $ \\{0,2,4,6,...\\}} = & 2n \\\\\n\\text{$V_{15,2} \\in $ \\{1,5,9,13,...\\}} = & 4n + 1 \\\\\n\\text{$V_{15,3} \\in $ \\{7,15,23,31,...\\}} = & 8n + 7 \\\\\n\\text{$V_{15,4} \\in $ \\{3,19,35,51,...\\}} = & 16n + 3 \\\\\n\\text{$V_{15,5} \\in $ \\{27,59,91,123,...\\}} = & 32n + 27 \\\\\n\\text{$V_{15,6} \\in $ \\{43,107,171,235,...\\}} = & 64n + 43 \\\\\n\\text{$V_{15,7} \\in $ \\{11,139,267,395,...\\}} = & 128n + 11 \\\\\n\\text{$V_{15,8} \\in $ \\{203,459,715,971,...\\}} = & 256n + 203 \\\\\n\\text{$V_{15,9} \\in $ \\{331,843,1355,1867,...\\}} = & 512n + 331 \\\\\n\\text{$V_{15,10} \\in $ \\{587,1611,2635,3659,...\\}} = & 1024n + 587 \\\\\n\\text{$V_{15,11} \\in $ \\{1099,3147,5195,7243,...\\}} = & 2048n + 1099 \\\\\n\\text{$V_{15,12} \\in $ \\{75,4171,8267,12363,...\\}} = & 4096n + 75 \\\\\n\\text{$V_{15,13} \\in $ \\{6219,14411,22603,30795,...\\}} = & 
8192n + 6219 \\\\\n\\text{$V_{15,14} \\in $ \\{2123,18507,34891,51275,...\\}} = & 16384n + 2123 \\\\\n\\text{$V_{15,15} \\in $ \\{10315,43083,75851,108619,...\\}} = & 32768n + 10315 \\\\\n\\text{$V_{15,16} \\in $ \\{59467,125003,190539,256075,...\\}} = & 65536n + 59467 \\\\\n\\text{$V_{15,17} \\in $ \\{26699,157771,288843,419915,...\\}} = & 131072n + 26699 \\\\\n\\text{$V_{15,18} \\in $ \\{92235,354379,616523,878667,...\\}} = & 262144n + 92235 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_6 = 18V_{15, m}(n) +15 = \n\\begin{cases}\n\\text{$d_{6,1} \\in $ \\{15,51,87,123,...\\}} = & 36n + 15 \\\\\n\\text{$d_{6,2} \\in $ \\{33,105,177,249,...\\}} = & 72n + 33 \\\\\n\\text{$d_{6,3} \\in $ \\{141,285,429,573,...\\}} = & 144n + 141 \\\\\n\\text{$d_{6,4} \\in $ \\{69,357,645,933,...\\}} = & 288n + 69 \\\\\n\\text{$d_{6,5} \\in $ \\{501,1077,1653,2229,...\\}} = & 576n + 501 \\\\\n\\text{$d_{6,6} \\in $ \\{789,1941,3093,4245,...\\}} = & 1152n + 789 \\\\\n\\text{$d_{6,7} \\in $ \\{213,2517,4821,7125,...\\}} = & 2304n + 213 \\\\\n\\text{$d_{6,8} \\in $ \\{3669,8277,12885,17493,...\\}} = & 4608n + 3669 \\\\\n\\text{$d_{6,9} \\in $ \\{5973,15189,24405,33621,...\\}} = & 9216n + 5973 \\\\\n\\text{$d_{6,10} \\in $ \\{10581,29013,47445,65877,...\\}} = & 18432n + 10581 \\\\\n\\text{$d_{6,11} \\in $ \\{19797,56661,93525,130389,...\\}} = & 36864n + 19797 \\\\\n\\text{$d_{6,12} \\in $ \\{1365,75093,148821,222549,...\\}} = & 73728n + 1365 \\\\\n\\text{$d_{6,13} \\in $ \\{111957,259413,406869,554325,...\\}} = & 147456n + 111957 \\\\\n\\text{$d_{6,14} \\in $ \\{38229,333141,628053,922965,...\\}} = & 294912n + 38229 \\\\\n\\text{$d_{6,15} \\in $ \\{185685,775509,1365333,1955157,...\\}} = & 589824n + 185685 \\\\\n\\text{$d_{6,16} \\in $ \\{1070421,2250069,3429717,4609365,...\\}} = & 1179648n + 1070421 \\\\\n\\text{$d_{6,17} \\in $ \\{480597,2839893,5199189,7558485,...\\}} = & 2359296n + 480597 \\\\\n\\text{$d_{6,18} \\in $ \\{1660245,6378837,11097429,15816021,...\\}} = & 4718592n + 1660245 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_6+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =7, r_7 = S(7) = 7$:\n{\\small\n\\begin{equation}\nV_{7, m}(n) = \n\\begin{cases}\n\\text{$V_{7,1} \\in $ \\{0,2,4,6,...\\}} = & 2n \\\\\n\\text{$V_{7,2} \\in $ \\{1,5,9,13,...\\}} = & 4n + 1 \\\\\n\\text{$V_{7,3} \\in $ \\{3,11,19,27,...\\}} = & 8n + 3 \\\\\n\\text{$V_{7,4} \\in $ \\{7,23,39,55,...\\}} = & 16n + 7 \\\\\n\\text{$V_{7,5} \\in $ \\{31,63,95,127,...\\}} = & 32n + 31 \\\\\n\\text{$V_{7,6} \\in $ \\{15,79,143,207,...\\}} = & 64n + 15 \\\\\n\\text{$V_{7,7} \\in $ \\{111,239,367,495,...\\}} = & 128n + 111 \\\\\n\\text{$V_{7,8} \\in $ \\{175,431,687,943,...\\}} = & 256n + 175 \\\\\n\\text{$V_{7,9} \\in $ \\{47,559,1071,1583,...\\}} = & 512n + 47 \\\\\n\\text{$V_{7,10} \\in $ \\{815,1839,2863,3887,...\\}} = & 1024n + 815 \\\\\n\\text{$V_{7,11} \\in $ \\{1327,3375,5423,7471,...\\}} = & 2048n + 1327 \\\\\n\\text{$V_{7,12} \\in $ \\{2351,6447,10543,14639,...\\}} = & 4096n + 2351 \\\\\n\\text{$V_{7,13} \\in $ \\{4399,12591,20783,28975,...\\}} = & 8192n + 4399 \\\\\n\\text{$V_{7,14} \\in $ \\{303,16687,33071,49455,...\\}} = & 16384n + 303 \\\\\n\\text{$V_{7,15} \\in $ \\{24879,57647,90415,123183,...\\}} = & 32768n + 24879 \\\\\n\\text{$V_{7,16} \\in $ \\{8495,74031,139567,205103,...\\}} = & 65536n + 8495 \\\\\n\\text{$V_{7,17} \\in $ \\{41263,172335,303407,434479,...\\}} = & 131072n + 41263 \\\\\n\\text{$V_{7,18} \\in $ \\{237871,500015,762159,1024303,...\\}} = & 262144n + 237871 
\\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_7 = V_{7, m}(n) +7 =\n\\begin{cases}\n\\text{$d_{7,1} \\in $ \\{7,43,79,115,...\\}} = & 36n + 7 \\\\\n\\text{$d_{7,2} \\in $ \\{25,97,169,241,...\\}} = & 72n + 25 \\\\\n\\text{$d_{7,3} \\in $ \\{61,205,349,493,...\\}} = & 144n + 61 \\\\\n\\text{$d_{7,4} \\in $ \\{133,421,709,997,...\\}} = & 288n + 133 \\\\\n\\text{$d_{7,5} \\in $ \\{565,1141,1717,2293,...\\}} = & 576n + 565 \\\\\n\\text{$d_{7,6} \\in $ \\{277,1429,2581,3733,...\\}} = & 1152n + 277 \\\\\n\\text{$d_{7,7} \\in $ \\{2005,4309,6613,8917,...\\}} = & 2304n + 2005 \\\\\n\\text{$d_{7,8} \\in $ \\{3157,7765,12373,16981,...\\}} = & 4608n + 3157 \\\\\n\\text{$d_{7,9} \\in $ \\{853,10069,19285,28501,...\\}} = & 9216n + 853 \\\\\n\\text{$d_{7,10} \\in $ \\{14677,33109,51541,69973,...\\}} = & 18432n + 14677 \\\\\n\\text{$d_{7,11} \\in $ \\{23893,60757,97621,134485,...\\}} = & 36864n + 23893 \\\\\n\\text{$d_{7,12} \\in $ \\{42325,116053,189781,263509,...\\}} = & 73728n + 42325 \\\\\n\\text{$d_{7,13} \\in $ \\{79189,226645,374101,521557,...\\}} = & 147456n + 79189 \\\\\n\\text{$d_{7,14} \\in $ \\{5461,300373,595285,890197,...\\}} = & 294912n + 5461 \\\\\n\\text{$d_{7,15} \\in $ \\{447829,1037653,1627477,2217301,...\\}} = & 589824n + 447829 \\\\\n\\text{$d_{7,16} \\in $ \\{152917,1332565,2512213,3691861,...\\}} = & 1179648n + 152917 \\\\\n\\text{$d_{7,17} \\in $ \\{742741,3102037,5461333,7820629,...\\}} = & 2359296n + 742741 \\\\\n\\text{$d_{7,18} \\in $ \\{4281685,9000277,13718869,18437461,...\\}} = & 4718592n + 4281685 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_7+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =8, r_8 = S(8) = 11$ (note that the index of V below is 11):\n{\\small\n\\begin{equation}\nV_{11, m}(n) = \n\\begin{cases}\n\\text{$V_{11,1} \\in $ \\{0,2,4,6,...\\}} = & 2n \\\\\n\\text{$V_{11,2} \\in $ \\{3,7,11,15,...\\}} = & 4n + 3 \\\\\n\\text{$V_{11,3} \\in $ \\{1,9,17,25,...\\}} = & 8n + 1 \\\\\n\\text{$V_{11,4} \\in $ \\{5,21,37,53,...\\}} = & 16n + 5 \\\\\n\\text{$V_{11,5} \\in $ \\{13,45,77,109,...\\}} = & 32n + 13 \\\\\n\\text{$V_{11,6} \\in $ \\{29,93,157,221,...\\}} = & 64n + 29 \\\\\n\\text{$V_{11,7} \\in $ \\{125,253,381,509,...\\}} = & 128n + 125 \\\\\n\\text{$V_{11,8} \\in $ \\{61,317,573,829,...\\}} = & 256n + 61 \\\\\n\\text{$V_{11,9} \\in $ \\{445,957,1469,1981,...\\}} = & 512n + 445 \\\\\n\\text{$V_{11,10} \\in $ \\{701,1725,2749,3773,...\\}} = & 1024n + 701 \\\\\n\\text{$V_{11,11} \\in $ \\{189,2237,4285,6333,...\\}} = & 2048n + 189 \\\\\n\\text{$V_{11,12} \\in $ \\{3261,7357,11453,15549,...\\}} = & 4096n + 3261 \\\\\n\\text{$V_{11,13} \\in $ \\{5309,13501,21693,29885,...\\}} = & 8192n + 5309 \\\\\n\\text{$V_{11,14} \\in $ \\{9405,25789,42173,58557,...\\}} = & 16384n + 9405 \\\\\n\\text{$V_{11,15} \\in $ \\{17597,50365,83133,115901,...\\}} = & 32768n + 17597 \\\\\n\\text{$V_{11,16} \\in $ \\{1213,66749,132285,197821,...\\}} = & 65536n + 1213 \\\\\n\\text{$V_{11,17} \\in $ \\{99517,230589,361661,492733,...\\}} = & 131072n + 99517 \\\\\n\\text{$V_{11,18} \\in $ \\{33981,296125,558269,820413,...\\}} = & 262144n + 33981 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_8 = 18 V_{11, m}(n) +11\n\\begin{cases}\n\\text{$d_{8,1} \\in $ \\{11,47,83,119,...\\}} = & 36n + 11 \\\\\n\\text{$d_{8,2} \\in $ \\{65,137,209,281,...\\}} = & 72n + 65 \\\\\n\\text{$d_{8,3} \\in $ \\{29,173,317,461,...\\}} = & 144n + 29 \\\\\n\\text{$d_{8,4} \\in $ \\{101,389,677,965,...\\}} = & 288n + 101 \\\\\n\\text{$d_{8,5} \\in $ 
\\{245,821,1397,1973,...\\}} = & 576n + 245 \\\\\n\\text{$d_{8,6} \\in $ \\{533,1685,2837,3989,...\\}} = & 1152n + 533 \\\\\n\\text{$d_{8,7} \\in $ \\{2261,4565,6869,9173,...\\}} = & 2304n + 2261 \\\\\n\\text{$d_{8,8} \\in $ \\{1109,5717,10325,14933,...\\}} = & 4608n + 1109 \\\\\n\\text{$d_{8,9} \\in $ \\{8021,17237,26453,35669,...\\}} = & 9216n + 8021 \\\\\n\\text{$d_{8,10} \\in $ \\{12629,31061,49493,67925,...\\}} = & 18432n + 12629 \\\\\n\\text{$d_{8,11} \\in $ \\{3413,40277,77141,114005,...\\}} = & 36864n + 3413 \\\\\n\\text{$d_{8,12} \\in $ \\{58709,132437,206165,279893,...\\}} = & 73728n + 58709 \\\\\n\\text{$d_{8,13} \\in $ \\{95573,243029,390485,537941,...\\}} = & 147456n + 95573 \\\\\n\\text{$d_{8,14} \\in $ \\{169301,464213,759125,1054037,...\\}} = & 294912n + 169301 \\\\\n\\text{$d_{8,15} \\in $ \\{316757,906581,1496405,2086229,...\\}} = & 589824n + 316757 \\\\\n\\text{$d_{8,16} \\in $ \\{21845,1201493,2381141,3560789,...\\}} = & 1179648n + 21845 \\\\\n\\text{$d_{8,17} \\in $ \\{1791317,4150613,6509909,8869205,...\\}} = & 2359296n + 1791317 \\\\\n\\text{$d_{8,18} \\in $ \\{611669,5330261,10048853,14767445,...\\}} = & 4718592n + 611669 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_8+1}{2^m} = odd\n\\end{equation}\n}\n\nWhen $i =9, r_9 = S(9) = 9$:\n{\\small\n\\begin{equation}\nV_{9, m}(n) = \n\\begin{cases}\n\\text{$V_{9,1} \\in $ \\{1,3,5,7,...\\}} = & 2n + 1 \\\\\n\\text{$V_{9,2} \\in $ \\{0,4,8,12,...\\}} = & 4n \\\\\n\\text{$V_{9,3} \\in $ \\{2,10,18,26,...\\}} = & 8n + 2 \\\\\n\\text{$V_{9,4} \\in $ \\{14,30,46,62,...\\}} = & 16n + 14 \\\\\n\\text{$V_{9,5} \\in $ \\{6,38,70,102,...\\}} = & 32n + 6 \\\\\n\\text{$V_{9,6} \\in $ \\{22,86,150,214,...\\}} = & 64n + 22 \\\\\n\\text{$V_{9,7} \\in $ \\{54,182,310,438,...\\}} = & 128n + 54 \\\\\n\\text{$V_{9,8} \\in $ \\{118,374,630,886,...\\}} = & 256n + 118 \\\\\n\\text{$V_{9,9} \\in $ \\{502,1014,1526,2038,...\\}} = & 512n + 502 \\\\\n\\text{$V_{9,10} \\in $ \\{246,1270,2294,3318,...\\}} = & 1024n + 246 \\\\\n\\text{$V_{9,11} \\in $ \\{1782,3830,5878,7926,...\\}} = & 2048n + 1782 \\\\\n\\text{$V_{9,12} \\in $ \\{2806,6902,10998,15094,...\\}} = & 4096n + 2806 \\\\\n\\text{$V_{9,13} \\in $ \\{758,8950,17142,25334,...\\}} = & 8192n + 758 \\\\\n\\text{$V_{9,14} \\in $ \\{13046,29430,45814,62198,...\\}} = & 16384n + 13046 \\\\\n\\text{$V_{9,15} \\in $ \\{21238,54006,86774,119542,...\\}} = & 32768n + 21238 \\\\\n\\text{$V_{9,16} \\in $ \\{37622,103158,168694,234230,...\\}} = & 65536n + 37622 \\\\\n\\text{$V_{9,17} \\in $ \\{70390,201462,332534,463606,...\\}} = & 131072n + 70390 \\\\\n\\text{$V_{9,18} \\in $ \\{4854,266998,529142,791286,...\\}} = & 262144n + 4854 \\\\\n\\vdots\n\\end{cases}\n\\end{equation}\n\\begin{equation}\nd_9 = 18V_{9, m}(n)+9 =\n\\begin{cases}\n\\text{$d_{9,1} \\in $ \\{27,63,99,135,...\\}} = & 36n + 27 \\\\\n\\text{$d_{9,2} \\in $ \\{9,81,153,225,...\\}} = & 72n + 9 \\\\\n\\text{$d_{9,3} \\in $ \\{45,189,333,477,...\\}} = & 144n + 45 \\\\\n\\text{$d_{9,4} \\in $ \\{261,549,837,1125,...\\}} = & 288n + 261 \\\\\n\\text{$d_{9,5} \\in $ \\{117,693,1269,1845,...\\}} = & 576n + 117 \\\\\n\\text{$d_{9,6} \\in $ \\{405,1557,2709,3861,...\\}} = & 1152n + 405 \\\\\n\\text{$d_{9,7} \\in $ \\{981,3285,5589,7893,...\\}} = & 2304n + 981 \\\\\n\\text{$d_{9,8} \\in $ \\{2133,6741,11349,15957,...\\}} = & 4608n + 2133 \\\\\n\\text{$d_{9,9} \\in $ \\{9045,18261,27477,36693,...\\}} = & 9216n + 9045 \\\\\n\\text{$d_{9,10} \\in $ \\{4437,22869,41301,59733,...\\}} = & 18432n + 4437 \\\\\n\\text{$d_{9,11} \\in $ 
\\{32085,68949,105813,142677,...\\}} = & 36864n + 32085 \\\\\n\\text{$d_{9,12} \\in $ \\{50517,124245,197973,271701,...\\}} = & 73728n + 50517 \\\\\n\\text{$d_{9,13} \\in $ \\{13653,161109,308565,456021,...\\}} = & 147456n + 13653 \\\\\n\\text{$d_{9,14} \\in $ \\{234837,529749,824661,1119573,...\\}} = & 294912n + 234837 \\\\\n\\text{$d_{9,15} \\in $ \\{382293,972117,1561941,2151765,...\\}} = & 589824n + 382293 \\\\\n\\text{$d_{9,16} \\in $ \\{677205,1856853,3036501,4216149,...\\}} = & 1179648n + 677205 \\\\\n\\text{$d_{9,17} \\in $ \\{1267029,3626325,5985621,8344917,...\\}} = & 2359296n + 1267029 \\\\\n\\text{$d_{9,18} \\in $ \\{87381,4805973,9524565,14243157,...\\}} = & 4718592n + 87381 \\\\\n\\vdots\n\\end{cases}\n\\rightarrow \\frac{3d_9+1}{2^m} = odd\n\\end{equation}\n}\n\nThe proofs of these $d_i$ statement is not difficult, for example \n\\[\n\\begin{split}\n\\frac{3(36n+27)+1}{2^1}&=54n+41 \\\\\n \\frac{3(72n+9)+1}{2^2}&=54n+7 \\\\ \n&\\vdots \\\\\n\\frac{3(4718592n+87381)+1}{2^{18}}&=54n+1.\n \\end{split}\n \\]\n\\end{proof}\n\n\\begin{conjecture} \\label{conj1} On the boundedness of $\\frac{3d_i+1}{2^m}$.\nOne of the implications of theorem \\ref{mainthm} is that the next odd number $d_{i_{next}}=\\frac{3d_{i,m}+1}{2^m}$ is bounded between $54n$ and $54n+54$, i.e., $54n < d_{i_{next}} < 54(n+1)$ where $n$ is a variable dependent on $d_i$ (as also prescribed in the proposed Collatz map in \\ref{CollMachinery1}). \n\\end{conjecture} \n\nThe implication of conj \\ref{conj1} is $d_{i_{next}}$ must be congruent to residue $a$ modulo $54n$ where $ 1 \\le a \\le 53$ and $a$ is odd. \n\n\\begin{conjecture} \\label{conj2}\nThese $d_i$ are values are the fundamental (principal) sets for all odd integers because they solely represent the sets from which all every odd number could be derived. \n\\end{conjecture}\n\nThe proof of this last conjecture is beyond the scope of the objectives of this foundational paper.\n\nAs a result of the proposed conjecture in \\ref{conj2} the map presented in \\ref{CollMachinery1} is constructed from the $d_i$ formulations and proposed as the generalised Collatz based number system. 
This map \\ref{CollMachinery1} consists of 3 major compartments: the top; middle; and the bottom sections representing the $d_i$, $3d_i+1$ and optimised $\\frac{3d_i+1}{2^m}$ results, respectively.\n\nLiewise, the map \\ref{CollHeightMap1} consists of 3 major compartments: the top; middle; and the bottom sections representing the corresponding total stopping time functions of $d_i$, $3d_i+1$ and optimsed $\\frac{3d_i+1}{2^m}$, respectively.\n\n\n\n\n\\begin{landscape}\n{\\fontsize{2.5}{4.0}\\selectfont\n\\begin{equation}\\label{CollMachinery1}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_1\\downarrow}} \\\\\n36n + 19*\\\\\n\\textcolor{blue}{ 72n + 1} \\\\\n\\textcolor{red}{ 144n + 109} \\\\\n288n + 37\\\\\n576n + 181\\\\\n1152n + 1045\\\\\n2304n + 469\\\\\n\\textcolor{blue}{ 4608n + 1621} \\\\\n9216n + 3925\\\\\n\\textcolor{blue}{ 18432n + 8533} \\\\\n36864n + 36181\\\\\n73728n + 17749\\\\\n\\textcolor{blue}{ 147456n + 128341} \\\\\n294912n + 202069\\\\\n\\textcolor{blue}{ 589824n + 54613} \\\\\n1179648n + 939349\\\\\n2359296n + 1529173\\\\\n4718592n + 2708821\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_1 \\downarrow}} \\\\\n108n + 58*\\\\\n\\textcolor{blue}{ 216n + 4} \\\\\n\\textcolor{red}{ 432n + 328} \\\\\n864n + 112\\\\\n1728n + 544\\\\\n3456n + 3136\\\\\n6912n + 1408\\\\\n\\textcolor{blue}{ 13824n + 4864} \\\\\n27648n + 11776\\\\\n\\textcolor{blue}{ 55296n + 25600} \\\\\n110592n + 108544\\\\\n221184n + 53248\\\\\n\\textcolor{blue}{ 442368n + 385024} \\\\\n884736n + 606208\\\\\n\\textcolor{blue}{ 1769472n + 163840} \\\\\n3538944n + 2818048\\\\\n7077888n + 4587520\\\\\n14155776n + 8126464\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{1_{next}} \\downarrow}} \\\\\n54n + 29*\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_2\\downarrow}} \\\\\n36n + 23*\\\\\n72n + 41\\\\\n144n + 77\\\\\n\\textcolor{blue}{ 288n + 5} \\\\\n\\textcolor{red}{ 576n + 437} \\\\\n1152n + 149\\\\\n2304n + 725\\\\\n4608n + 4181\\\\\n9216n + 1877\\\\\n\\textcolor{blue}{ 18432n + 6485} \\\\\n36864n + 15701\\\\\n\\textcolor{blue}{ 73728n + 34133} \\\\\n147456n + 144725\\\\\n294912n + 70997\\\\\n\\textcolor{blue}{ 589824n + 513365} \\\\\n1179648n + 808277\\\\\n\\textcolor{blue}{ 2359296n + 218453} \\\\\n4718592n + 3757397\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_2 \\downarrow}} \\\\\n108n + 70*\\\\\n216n + 124\\\\\n432n + 232\\\\\n\\textcolor{blue}{ 864n + 16} \\\\\n\\textcolor{red}{ 1728n + 1312} \\\\\n3456n + 448\\\\\n6912n + 2176\\\\\n13824n + 12544\\\\\n27648n + 5632\\\\\n\\textcolor{blue}{ 55296n + 19456} \\\\\n110592n + 47104\\\\\n\\textcolor{blue}{ 221184n + 102400} \\\\\n442368n + 434176\\\\\n884736n + 212992\\\\\n\\textcolor{blue}{ 1769472n + 1540096} \\\\\n3538944n + 2424832\\\\\n\\textcolor{blue}{ 7077888n + 655360} \\\\\n14155776n + 11272192\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{2_{next}} \\downarrow}} \\\\\n54n + 35*\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 
13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_3\\downarrow}} \\\\\n\\textcolor{blue}{ 36n + 3}* \\\\\n72n + 57\\\\\n144n + 93\\\\\n288n + 165\\\\\n576n + 309\\\\\n\\textcolor{blue}{ 1152n + 21} \\\\\n\\textcolor{red}{ 2304n + 1749} \\\\\n4608n + 597\\\\\n9216n + 2901\\\\\n18432n + 16725\\\\\n36864n + 7509\\\\\n\\textcolor{blue}{ 73728n + 25941} \\\\\n147456n + 62805\\\\\n\\textcolor{blue}{ 294912n + 136533} \\\\\n589824n + 578901\\\\\n1179648n + 283989\\\\\n\\textcolor{blue}{ 2359296n + 2053461} \\\\\n4718592n + 3233109\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_3 \\downarrow}} \\\\\n\\textcolor{blue}{ 108n + 10}* \\\\\n216n + 172\\\\\n432n + 280\\\\\n864n + 496\\\\\n1728n + 928\\\\\n\\textcolor{blue}{ 3456n + 64} \\\\\n\\textcolor{red}{ 6912n + 5248} \\\\\n13824n + 1792\\\\\n27648n + 8704\\\\\n55296n + 50176\\\\\n110592n + 22528\\\\\n\\textcolor{blue}{ 221184n + 77824} \\\\\n442368n + 188416\\\\\n\\textcolor{blue}{ 884736n + 409600} \\\\\n1769472n + 1736704\\\\\n3538944n + 851968\\\\\n\\textcolor{blue}{ 7077888n + 6160384} \\\\\n14155776n + 9699328\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{3_{next}} \\downarrow}} \\\\\n\\textcolor{blue}{ 54n + 5}* \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_4\\downarrow}} \\\\\n\\textcolor{blue}{ 36n + 31}* \\\\\n72n + 49\\\\\n\\textcolor{blue}{ 144n + 13} \\\\\n288n + 229\\\\\n576n + 373\\\\\n1152n + 661\\\\\n2304n + 1237\\\\\n\\textcolor{blue}{ 4608n + 85} \\\\\n\\textcolor{red}{ 9216n + 6997} \\\\\n18432n + 2389\\\\\n36864n + 11605\\\\\n73728n + 66901\\\\\n147456n + 30037\\\\\n\\textcolor{blue}{ 294912n + 103765} \\\\\n589824n + 251221\\\\\n\\textcolor{blue}{ 1179648n + 546133} \\\\\n2359296n + 2315605\\\\\n4718592n + 1135957\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_4 \\downarrow}} \\\\\n\\textcolor{blue}{ 108n + 94}* \\\\\n216n + 148\\\\\n\\textcolor{blue}{ 432n + 40} \\\\\n864n + 688\\\\\n1728n + 1120\\\\\n3456n + 1984\\\\\n6912n + 3712\\\\\n\\textcolor{blue}{ 13824n + 256} \\\\\n\\textcolor{red}{ 27648n + 20992} \\\\\n55296n + 7168\\\\\n110592n + 34816\\\\\n221184n + 200704\\\\\n442368n + 90112\\\\\n\\textcolor{blue}{ 884736n + 311296} \\\\\n1769472n + 753664\\\\\n\\textcolor{blue}{ 3538944n + 1638400} \\\\\n7077888n + 6946816\\\\\n14155776n + 3407872\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{4_{next}} \\downarrow}} \\\\\n\\textcolor{blue}{ 54n + 47}* \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_5\\downarrow}} \\\\\n36n + 35*\\\\\n72n + 17\\\\\n\\textcolor{blue}{ 144n + 125} \\\\\n288n + 197\\\\\n\\textcolor{blue}{ 576n + 53} \\\\\n1152n + 917\\\\\n2304n + 1493\\\\\n4608n + 2645\\\\\n9216n + 4949\\\\\n\\textcolor{blue}{ 18432n + 341} 
\\\\\n\\textcolor{red}{ 36864n + 27989} \\\\\n73728n + 9557\\\\\n147456n + 46421\\\\\n294912n + 267605\\\\\n589824n + 120149\\\\\n\\textcolor{blue}{ 1179648n + 415061} \\\\\n2359296n + 1004885\\\\\n4718592n + 2184533\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_5 \\downarrow}} \\\\\n108n + 106*\\\\\n216n + 52\\\\\n\\textcolor{blue}{ 432n + 376} \\\\\n864n + 592\\\\\n\\textcolor{blue}{ 1728n + 160} \\\\\n3456n + 2752\\\\\n6912n + 4480\\\\\n13824n + 7936\\\\\n27648n + 14848\\\\\n\\textcolor{blue}{ 55296n + 1024} \\\\\n\\textcolor{red}{ 110592n + 83968} \\\\\n221184n + 28672\\\\\n442368n + 139264\\\\\n884736n + 802816\\\\\n1769472n + 360448\\\\\n\\textcolor{blue}{ 3538944n + 1245184} \\\\\n7077888n + 3014656\\\\\n14155776n + 6553600\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{5_{next}} \\downarrow}} \\\\\n54n + 53*\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n54n + 25\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_6\\downarrow}} \\\\\n36n + 15*\\\\\n\\textcolor{blue}{ 72n + 33} \\\\\n144n + 141\\\\\n288n + 69\\\\\n\\textcolor{blue}{ 576n + 501} \\\\\n1152n + 789\\\\\n\\textcolor{blue}{ 2304n + 213} \\\\\n4608n + 3669\\\\\n9216n + 5973\\\\\n18432n + 10581\\\\\n36864n + 19797\\\\\n\\textcolor{blue}{ 73728n + 1365} \\\\\n\\textcolor{red}{ 147456n + 111957} \\\\\n294912n + 38229\\\\\n589824n + 185685\\\\\n1179648n + 1070421\\\\\n2359296n + 480597\\\\\n4718592n + 1660245\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_6 \\downarrow}} \\\\\n108n + 46*\\\\\n\\textcolor{blue}{ 216n + 100} \\\\\n432n + 424\\\\\n864n + 208\\\\\n\\textcolor{blue}{ 1728n + 1504} \\\\\n3456n + 2368\\\\\n\\textcolor{blue}{ 6912n + 640} \\\\\n13824n + 11008\\\\\n27648n + 17920\\\\\n55296n + 31744\\\\\n110592n + 59392\\\\\n\\textcolor{blue}{ 221184n + 4096} \\\\\n\\textcolor{red}{ 442368n + 335872} \\\\\n884736n + 114688\\\\\n1769472n + 557056\\\\\n3538944n + 3211264\\\\\n7077888n + 1441792\\\\\n14155776n + 4980736\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{6_{next}} \\downarrow}} \\\\\n54n + 23*\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n54n + 19\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_7\\downarrow}} \\\\\n36n + 7*\\\\\n\\textcolor{blue}{ 72n + 25} \\\\\n144n + 61\\\\\n\\textcolor{blue}{ 288n + 133} \\\\\n576n + 565\\\\\n1152n + 277\\\\\n\\textcolor{blue}{ 2304n + 2005} \\\\\n4608n + 3157\\\\\n\\textcolor{blue}{ 9216n + 853} \\\\\n18432n + 14677\\\\\n36864n + 23893\\\\\n73728n + 42325\\\\\n147456n + 79189\\\\\n\\textcolor{blue}{ 294912n + 5461} \\\\\n\\textcolor{red}{ 589824n + 447829} \\\\\n1179648n + 152917\\\\\n2359296n + 742741\\\\\n4718592n + 4281685\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_7 \\downarrow}} \\\\\n108n + 22*\\\\\n\\textcolor{blue}{ 216n + 76} \\\\\n432n + 184\\\\\n\\textcolor{blue}{ 864n + 400} \\\\\n1728n + 1696\\\\\n3456n + 832\\\\\n\\textcolor{blue}{ 6912n + 6016} \\\\\n13824n + 9472\\\\\n\\textcolor{blue}{ 27648n + 2560} \\\\\n55296n + 
44032\\\\\n110592n + 71680\\\\\n221184n + 126976\\\\\n442368n + 237568\\\\\n\\textcolor{blue}{ 884736n + 16384} \\\\\n\\textcolor{red}{ 1769472n + 1343488} \\\\\n3538944n + 458752\\\\\n7077888n + 2228224\\\\\n14155776n + 12845056\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{7_{next}} \\downarrow}} \\\\\n54n + 11*\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_8\\downarrow}} \\\\\n36n + 11*\\\\\n72n + 65\\\\\n144n + 29\\\\\n\\textcolor{blue}{ 288n + 101} \\\\\n576n + 245\\\\\n\\textcolor{blue}{ 1152n + 533} \\\\\n2304n + 2261\\\\\n4608n + 1109\\\\\n\\textcolor{blue}{ 9216n + 8021} \\\\\n18432n + 12629\\\\\n\\textcolor{blue}{ 36864n + 3413} \\\\\n73728n + 58709\\\\\n147456n + 95573\\\\\n294912n + 169301\\\\\n589824n + 316757\\\\\n\\textcolor{blue}{ 1179648n + 21845} \\\\\n\\textcolor{red}{ 2359296n + 1791317} \\\\\n4718592n + 611669\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_8 \\downarrow}} \\\\\n108n + 34*\\\\\n216n + 196\\\\\n432n + 88\\\\\n\\textcolor{blue}{ 864n + 304} \\\\\n1728n + 736\\\\\n\\textcolor{blue}{ 3456n + 1600} \\\\\n6912n + 6784\\\\\n13824n + 3328\\\\\n\\textcolor{blue}{ 27648n + 24064} \\\\\n55296n + 37888\\\\\n\\textcolor{blue}{ 110592n + 10240} \\\\\n221184n + 176128\\\\\n442368n + 286720\\\\\n884736n + 507904\\\\\n1769472n + 950272\\\\\n\\textcolor{blue}{ 3538944n + 65536} \\\\\n\\textcolor{red}{ 7077888n + 5373952} \\\\\n14155776n + 1835008\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{8_{next}} \\downarrow}} \\\\\n54n + 17*\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} \\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\textcolor{red}{ 54n + 41} \\\\\n54n + 7\\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_9\\downarrow}} \\\\\n\\textcolor{red}{ 36n + 27}* \\\\\n72n + 9\\\\\n144n + 45\\\\\n288n + 261\\\\\n576n + 117\\\\\n\\textcolor{blue}{ 1152n + 405} \\\\\n2304n + 981\\\\\n\\textcolor{blue}{ 4608n + 2133} \\\\\n9216n + 9045\\\\\n18432n + 4437\\\\\n\\textcolor{blue}{ 36864n + 32085} \\\\\n73728n + 50517\\\\\n\\textcolor{blue}{ 147456n + 13653} \\\\\n294912n + 234837\\\\\n589824n + 382293\\\\\n1179648n + 677205\\\\\n2359296n + 1267029\\\\\n\\textcolor{blue}{ 4718592n + 87381} \\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even~_9 \\downarrow}} \\\\\n\\textcolor{red}{ 108n + 82}* \\\\\n216n + 28\\\\\n432n + 136\\\\\n864n + 784\\\\\n1728n + 352\\\\\n\\textcolor{blue}{ 3456n + 1216} \\\\\n6912n + 2944\\\\\n\\textcolor{blue}{ 13824n + 6400} \\\\\n27648n + 27136\\\\\n55296n + 13312\\\\\n\\textcolor{blue}{ 110592n + 96256} \\\\\n221184n + 151552\\\\\n\\textcolor{blue}{ 442368n + 40960} \\\\\n884736n + 704512\\\\\n1769472n + 1146880\\\\\n3538944n + 2031616\\\\\n7077888n + 3801088\\\\\n\\textcolor{blue}{ 14155776n + 262144} \\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{9_{next}} \\downarrow}} \\\\\n\\textcolor{red}{ 54n + 41}* \\\\\n54n + 7\\\\\n54n + 17\\\\\n54n + 49\\\\\n54n + 11\\\\\n\\textcolor{blue}{ 54n + 19} 
\\\\\n54n + 23\\\\\n\\textcolor{blue}{ 54n + 25} \\\\\n54n + 53\\\\\n54n + 13\\\\\n\\textcolor{blue}{ 54n + 47} \\\\\n54n + 37\\\\\n\\textcolor{blue}{ 54n + 5} \\\\\n54n + 43\\\\\n54n + 35\\\\\n54n + 31\\\\\n54n + 29\\\\\n\\textcolor{blue}{ 54n + 1} \\\\\n\\vdots \\\\\n\\end{cases}\n\\footnote{All rights reserved. $\\copyright$ Michael A. Idowu, 2014.}\n\\end{equation}\n}\n\\end{landscape}\n\n\\begin{landscape}\n{\\fontsize{2.5}{4.0}\\selectfont\n\\begin{equation}\\label{CollHeightMap1}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_1\\downarrow}} \\\\\n\\sigma_{\\infty}(54n+29)+2\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+3} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+4} \\\\\n\\sigma_{\\infty}(54n+7)+5\\\\\n\\sigma_{\\infty}(54n+17)+6\\\\\n\\sigma_{\\infty}(54n+49)+7\\\\\n\\sigma_{\\infty}(54n+11)+8\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+9} \\\\\n\\sigma_{\\infty}(54n+23)+10\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+11} \\\\\n\\sigma_{\\infty}(54n+53)+12\\\\\n\\sigma_{\\infty}(54n+13)+13\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+14} \\\\\n\\sigma_{\\infty}(54n+37)+15\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+16} \\\\\n\\sigma_{\\infty}(54n+43)+17\\\\\n\\sigma_{\\infty}(54n+35)+18\\\\\n\\sigma_{\\infty}(54n+31)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_1 \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+29)+1\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+2} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+3} \\\\\n\\sigma_{\\infty}(54n+7)+4\\\\\n\\sigma_{\\infty}(54n+17)+5\\\\\n\\sigma_{\\infty}(54n+49)+6\\\\\n\\sigma_{\\infty}(54n+11)+7\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+8} \\\\\n\\sigma_{\\infty}(54n+23)+9\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+10} \\\\\n\\sigma_{\\infty}(54n+53)+11\\\\\n\\sigma_{\\infty}(54n+13)+12\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+13} \\\\\n\\sigma_{\\infty}(54n+37)+14\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+15} \\\\\n\\sigma_{\\infty}(54n+43)+16\\\\\n\\sigma_{\\infty}(54n+35)+17\\\\\n\\sigma_{\\infty}(54n+31)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{1_{next}} \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_2\\downarrow}} \\\\\n\\sigma_{\\infty}(54n+35)+2\\\\\n\\sigma_{\\infty}(54n+31)+3\\\\\n\\sigma_{\\infty}(54n+29)+4\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+5} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+6} \\\\\n\\sigma_{\\infty}(54n+7)+7\\\\\n\\sigma_{\\infty}(54n+17)+8\\\\\n\\sigma_{\\infty}(54n+49)+9\\\\\n\\sigma_{\\infty}(54n+11)+10\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+11} \\\\\n\\sigma_{\\infty}(54n+23)+12\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+13} \\\\\n\\sigma_{\\infty}(54n+53)+14\\\\\n\\sigma_{\\infty}(54n+13)+15\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+16} 
\\\\\n\\sigma_{\\infty}(54n+37)+17\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+18} \\\\\n\\sigma_{\\infty}(54n+43)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_2 \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+35)+1\\\\\n\\sigma_{\\infty}(54n+31)+2\\\\\n\\sigma_{\\infty}(54n+29)+3\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+4} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+5} \\\\\n\\sigma_{\\infty}(54n+7)+6\\\\\n\\sigma_{\\infty}(54n+17)+7\\\\\n\\sigma_{\\infty}(54n+49)+8\\\\\n\\sigma_{\\infty}(54n+11)+9\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+10} \\\\\n\\sigma_{\\infty}(54n+23)+11\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+12} \\\\\n\\sigma_{\\infty}(54n+53)+13\\\\\n\\sigma_{\\infty}(54n+13)+14\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+15} \\\\\n\\sigma_{\\infty}(54n+37)+16\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+17} \\\\\n\\sigma_{\\infty}(54n+43)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{2_{next}} \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_3\\downarrow}} \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+2} \\\\\n\\sigma_{\\infty}(54n+43)+3\\\\\n\\sigma_{\\infty}(54n+35)+4\\\\\n\\sigma_{\\infty}(54n+31)+5\\\\\n\\sigma_{\\infty}(54n+29)+6\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+7} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+8} \\\\\n\\sigma_{\\infty}(54n+7)+9\\\\\n\\sigma_{\\infty}(54n+17)+10\\\\\n\\sigma_{\\infty}(54n+49)+11\\\\\n\\sigma_{\\infty}(54n+11)+12\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+13} \\\\\n\\sigma_{\\infty}(54n+23)+14\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+15} \\\\\n\\sigma_{\\infty}(54n+53)+16\\\\\n\\sigma_{\\infty}(54n+13)+17\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+18} \\\\\n\\sigma_{\\infty}(54n+37)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_3 \\downarrow}} \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+1} \\\\\n\\sigma_{\\infty}(54n+43)+2\\\\\n\\sigma_{\\infty}(54n+35)+3\\\\\n\\sigma_{\\infty}(54n+31)+4\\\\\n\\sigma_{\\infty}(54n+29)+5\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+6} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+7} \\\\\n\\sigma_{\\infty}(54n+7)+8\\\\\n\\sigma_{\\infty}(54n+17)+9\\\\\n\\sigma_{\\infty}(54n+49)+10\\\\\n\\sigma_{\\infty}(54n+11)+11\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+12} \\\\\n\\sigma_{\\infty}(54n+23)+13\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+14} \\\\\n\\sigma_{\\infty}(54n+53)+15\\\\\n\\sigma_{\\infty}(54n+13)+16\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+17} \\\\\n\\sigma_{\\infty}(54n+37)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{3_{next}} \\downarrow}} \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) 
\\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_4\\downarrow}} \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+2} \\\\\n\\sigma_{\\infty}(54n+37)+3\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+4} \\\\\n\\sigma_{\\infty}(54n+43)+5\\\\\n\\sigma_{\\infty}(54n+35)+6\\\\\n\\sigma_{\\infty}(54n+31)+7\\\\\n\\sigma_{\\infty}(54n+29)+8\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+9} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+10} \\\\\n\\sigma_{\\infty}(54n+7)+11\\\\\n\\sigma_{\\infty}(54n+17)+12\\\\\n\\sigma_{\\infty}(54n+49)+13\\\\\n\\sigma_{\\infty}(54n+11)+14\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+15} \\\\\n\\sigma_{\\infty}(54n+23)+16\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+17} \\\\\n\\sigma_{\\infty}(54n+53)+18\\\\\n\\sigma_{\\infty}(54n+13)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_4 \\downarrow}} \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+1} \\\\\n\\sigma_{\\infty}(54n+37)+2\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+3} \\\\\n\\sigma_{\\infty}(54n+43)+4\\\\\n\\sigma_{\\infty}(54n+35)+5\\\\\n\\sigma_{\\infty}(54n+31)+6\\\\\n\\sigma_{\\infty}(54n+29)+7\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+8} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+9} \\\\\n\\sigma_{\\infty}(54n+7)+10\\\\\n\\sigma_{\\infty}(54n+17)+11\\\\\n\\sigma_{\\infty}(54n+49)+12\\\\\n\\sigma_{\\infty}(54n+11)+13\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+14} \\\\\n\\sigma_{\\infty}(54n+23)+15\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+16} \\\\\n\\sigma_{\\infty}(54n+53)+17\\\\\n\\sigma_{\\infty}(54n+13)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{4_{next}} \\downarrow}} \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_5\\downarrow}} \\\\\n\\sigma_{\\infty}(54n+53)+2\\\\\n\\sigma_{\\infty}(54n+13)+3\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+4} \\\\\n\\sigma_{\\infty}(54n+37)+5\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+6} \\\\\n\\sigma_{\\infty}(54n+43)+7\\\\\n\\sigma_{\\infty}(54n+35)+8\\\\\n\\sigma_{\\infty}(54n+31)+9\\\\\n\\sigma_{\\infty}(54n+29)+10\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+11} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+12} 
\\\\\n\\sigma_{\\infty}(54n+7)+13\\\\\n\\sigma_{\\infty}(54n+17)+14\\\\\n\\sigma_{\\infty}(54n+49)+15\\\\\n\\sigma_{\\infty}(54n+11)+16\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+17} \\\\\n\\sigma_{\\infty}(54n+23)+18\\\\\n\\sigma_{\\infty}(54n+25)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_5 \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+53)+1\\\\\n\\sigma_{\\infty}(54n+13)+2\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+3} \\\\\n\\sigma_{\\infty}(54n+37)+4\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+5} \\\\\n\\sigma_{\\infty}(54n+43)+6\\\\\n\\sigma_{\\infty}(54n+35)+7\\\\\n\\sigma_{\\infty}(54n+31)+8\\\\\n\\sigma_{\\infty}(54n+29)+9\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+10} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+11} \\\\\n\\sigma_{\\infty}(54n+7)+12\\\\\n\\sigma_{\\infty}(54n+17)+13\\\\\n\\sigma_{\\infty}(54n+49)+14\\\\\n\\sigma_{\\infty}(54n+11)+15\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+16} \\\\\n\\sigma_{\\infty}(54n+23)+17\\\\\n\\sigma_{\\infty}(54n+25)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{5_{next}} \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\sigma_{\\infty}(54n+25) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_6\\downarrow}} \\\\\n\\sigma_{\\infty}(54n+23)+2\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+3} \\\\\n\\sigma_{\\infty}(54n+53)+4\\\\\n\\sigma_{\\infty}(54n+13)+5\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+6} \\\\\n\\sigma_{\\infty}(54n+37)+7\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+8} \\\\\n\\sigma_{\\infty}(54n+43)+9\\\\\n\\sigma_{\\infty}(54n+35)+10\\\\\n\\sigma_{\\infty}(54n+31)+11\\\\\n\\sigma_{\\infty}(54n+29)+12\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+13} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+14} \\\\\n\\sigma_{\\infty}(54n+7)+15\\\\\n\\sigma_{\\infty}(54n+17)+16\\\\\n\\sigma_{\\infty}(54n+49)+17\\\\\n\\sigma_{\\infty}(54n+11)+18\\\\\n\\sigma_{\\infty}(54n+19)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_6 \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+23)+1\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+2} \\\\\n\\sigma_{\\infty}(54n+53)+3\\\\\n\\sigma_{\\infty}(54n+13)+4\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+5} \\\\\n\\sigma_{\\infty}(54n+37)+6\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+7} \\\\\n\\sigma_{\\infty}(54n+43)+8\\\\\n\\sigma_{\\infty}(54n+35)+9\\\\\n\\sigma_{\\infty}(54n+31)+10\\\\\n\\sigma_{\\infty}(54n+29)+11\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+12} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+13} \\\\\n\\sigma_{\\infty}(54n+7)+14\\\\\n\\sigma_{\\infty}(54n+17)+15\\\\\n\\sigma_{\\infty}(54n+49)+16\\\\\n\\sigma_{\\infty}(54n+11)+17\\\\\n\\sigma_{\\infty}(54n+19)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{6_{next}} \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) 
\\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\sigma_{\\infty}(54n+19) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_7\\downarrow}} \\\\\n\\sigma_{\\infty}(54n+11)+2\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+3} \\\\\n\\sigma_{\\infty}(54n+23)+4\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+5} \\\\\n\\sigma_{\\infty}(54n+53)+6\\\\\n\\sigma_{\\infty}(54n+13)+7\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+8} \\\\\n\\sigma_{\\infty}(54n+37)+9\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+10} \\\\\n\\sigma_{\\infty}(54n+43)+11\\\\\n\\sigma_{\\infty}(54n+35)+12\\\\\n\\sigma_{\\infty}(54n+31)+13\\\\\n\\sigma_{\\infty}(54n+29)+14\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+15} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+16} \\\\\n\\sigma_{\\infty}(54n+7)+17\\\\\n\\sigma_{\\infty}(54n+17)+18\\\\\n\\sigma_{\\infty}(54n+49)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_7 \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+11)+1\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+2} \\\\\n\\sigma_{\\infty}(54n+23)+3\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+4} \\\\\n\\sigma_{\\infty}(54n+53)+5\\\\\n\\sigma_{\\infty}(54n+13)+6\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+7} \\\\\n\\sigma_{\\infty}(54n+37)+8\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+9} \\\\\n\\sigma_{\\infty}(54n+43)+10\\\\\n\\sigma_{\\infty}(54n+35)+11\\\\\n\\sigma_{\\infty}(54n+31)+12\\\\\n\\sigma_{\\infty}(54n+29)+13\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+14} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+15} \\\\\n\\sigma_{\\infty}(54n+7)+16\\\\\n\\sigma_{\\infty}(54n+17)+17\\\\\n\\sigma_{\\infty}(54n+49)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{7_{next}} \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_8\\downarrow}} \\\\\n\\sigma_{\\infty}(54n+17)+2\\\\\n\\sigma_{\\infty}(54n+49)+3\\\\\n\\sigma_{\\infty}(54n+11)+4\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+5} \\\\\n\\sigma_{\\infty}(54n+23)+6\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+7} \\\\\n\\sigma_{\\infty}(54n+53)+8\\\\\n\\sigma_{\\infty}(54n+13)+9\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+10} \\\\\n\\sigma_{\\infty}(54n+37)+11\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+12} 
\\\\\n\\sigma_{\\infty}(54n+43)+13\\\\\n\\sigma_{\\infty}(54n+35)+14\\\\\n\\sigma_{\\infty}(54n+31)+15\\\\\n\\sigma_{\\infty}(54n+29)+16\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+17} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+18} \\\\\n\\sigma_{\\infty}(54n+7)+19\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_8 \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+17)+1\\\\\n\\sigma_{\\infty}(54n+49)+2\\\\\n\\sigma_{\\infty}(54n+11)+3\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+4} \\\\\n\\sigma_{\\infty}(54n+23)+5\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+6} \\\\\n\\sigma_{\\infty}(54n+53)+7\\\\\n\\sigma_{\\infty}(54n+13)+8\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+9} \\\\\n\\sigma_{\\infty}(54n+37)+10\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+11} \\\\\n\\sigma_{\\infty}(54n+43)+12\\\\\n\\sigma_{\\infty}(54n+35)+13\\\\\n\\sigma_{\\infty}(54n+31)+14\\\\\n\\sigma_{\\infty}(54n+29)+15\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+16} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+17} \\\\\n\\sigma_{\\infty}(54n+7)+18\\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{8_{next}} \\downarrow}} \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\vdots \\\\\n\\end{cases}\n\\begin{cases}\n\\textcolor{black}{{\\bf Odd~d_9\\downarrow}} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+2} \\\\\n\\sigma_{\\infty}(54n+7)+3\\\\\n\\sigma_{\\infty}(54n+17)+4\\\\\n\\sigma_{\\infty}(54n+49)+5\\\\\n\\sigma_{\\infty}(54n+11)+6\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+7} \\\\\n\\sigma_{\\infty}(54n+23)+8\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+9} \\\\\n\\sigma_{\\infty}(54n+53)+10\\\\\n\\sigma_{\\infty}(54n+13)+11\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+12} \\\\\n\\sigma_{\\infty}(54n+37)+13\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+14} \\\\\n\\sigma_{\\infty}(54n+43)+15\\\\\n\\sigma_{\\infty}(54n+35)+16\\\\\n\\sigma_{\\infty}(54n+31)+17\\\\\n\\sigma_{\\infty}(54n+29)+18\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+19} \\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Even_9 \\downarrow}} \\\\\n\\textcolor{red}{ \\sigma_{\\infty}(54n+41)+1} \\\\\n\\sigma_{\\infty}(54n+7)+2\\\\\n\\sigma_{\\infty}(54n+17)+3\\\\\n\\sigma_{\\infty}(54n+49)+4\\\\\n\\sigma_{\\infty}(54n+11)+5\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19)+6} \\\\\n\\sigma_{\\infty}(54n+23)+7\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25)+8} \\\\\n\\sigma_{\\infty}(54n+53)+9\\\\\n\\sigma_{\\infty}(54n+13)+10\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47)+11} \\\\\n\\sigma_{\\infty}(54n+37)+12\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5)+13} \\\\\n\\sigma_{\\infty}(54n+43)+14\\\\\n\\sigma_{\\infty}(54n+35)+15\\\\\n\\sigma_{\\infty}(54n+31)+16\\\\\n\\sigma_{\\infty}(54n+29)+17\\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1)+18} \\\\\n\\vdots \\\\\n\\textcolor{black}{{\\bf Odd~d_{9_{next}} \\downarrow}} \\\\\n\\textcolor{red}{ 
\\sigma_{\\infty}(54n+41) } \\\\\n\\sigma_{\\infty}(54n+7) \\\\\n\\sigma_{\\infty}(54n+17) \\\\\n\\sigma_{\\infty}(54n+49) \\\\\n\\sigma_{\\infty}(54n+11) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+19) } \\\\\n\\sigma_{\\infty}(54n+23) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+25) } \\\\\n\\sigma_{\\infty}(54n+53) \\\\\n\\sigma_{\\infty}(54n+13) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+47) } \\\\\n\\sigma_{\\infty}(54n+37) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+5) } \\\\\n\\sigma_{\\infty}(54n+43) \\\\\n\\sigma_{\\infty}(54n+35) \\\\\n\\sigma_{\\infty}(54n+31) \\\\\n\\sigma_{\\infty}(54n+29) \\\\\n\\textcolor{blue}{ \\sigma_{\\infty}(54n+1) } \\\\\n\\vdots \\\\\n\\end{cases}\n\\end{equation}\n}\n\\footnote{All rights reserved. $\\copyright$ Michael A. Idowu, 2014.}\n\\end{landscape}\n\n\\begin{conjecture} \\label{conj3}\nAn irrefutable proof of the Collatz conjecture essentially requires 18 fundamental formulae that represent the total stopping time functions of the fundamental covering system of odd integers.\n\\end{conjecture}\n\n\\section{Conclusions}\\label{Concl}\nA proposed proof of the CC essentially requires deriving the formulae for fundamental total stopping time functions for all odd integers or proving that all the $[3]_{36n+4x}$ numbers eventually converge below these start points using the proposed Collatz based number system, which is both visually demonstrable and theoretically evident.\n\nThe proposed covering system of the generalised Collatz based number system requires about 162 distinct sets of odd numbers, from which any other integers could be derived. \nEach fundamental set of odd numbers corresponds to a single fundamental total stopping time function in the proposed schemata.\n\nAn irrefutable proof of the Collatz conjecture essentially requires 18 fundamental formulae that represent the total stopping time functions of the fundamental covering system of odd integers.\n\nThe Collatz map \\ref{CollMachinery1} has many applications. For example, a visual and ingenous method to classify odd numbers to appropriate residue classes modulo 18 is easy. For example, $\\{ 349525,1 \\} \\in [1]_{18}$: $3+4+9+5+2+5 = 28 \\equiv 2+8 \\equiv 1+0 \\in [1]_{18}$; $\\{341, 17\\} \\in [17]_{18}$: $341 \\equiv 3+4+1 \\equiv 1+7 \\in [17]_{18}$. The reader is encouraged to try out this simple technique.\n\nThis foundational paper may be regarded as a proposed ``new mathematics'' of the Collatz based number system. \n\nThe whole idea may be used as a new theoretical framework for teaching and understanding elementary number system. \n\n\nThis novel theoretical framework is anticipated to open up new research and further development opportunities in number theory, dynamical systems, and discrete mathematics, including deterministic modelling, metamathematical and optimised integer factorisation, ergodic theory, dynamical systems, covering systems, cryptosystems and cryptography.\n\nOne of our aspirations is to further exploit and innovate the main results in visualisable algorithm development. 
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Main Results\\label{Section: Introduction and Main Results}}\n\n\n\\input{ch1sec1.tex}\n\n\\section{A Handlebody-Theoretic Reverse to the Plus Construction\\label{Section: A Handlebody-Theoretic Reverse to the Plus Construction}}\n\\input{ch3sec1.tex}\n\n\\section{Some Preliminaries to Creating Pseudo-Collarable High-Dimensional Manifolds\\label{Section: Some Preliminaries to Creating Pseudo-Collarable High-Dimensional Manifolds}}\n\\input{ch4sec1.tex}\n \n\\section{Some Algebraic Lemmas, Part 1\\label{Section: Some Algebraic Lemmas, Part 1}}\n\\input{ch4sec2.tex}\n \n\\section{Some Algebraic Lemmas, Part 2\\label{Section: Some Algebraic Lemmas, Part 2}}\n\\input{ch4sec3.tex}\n \n\\section{Some Algebraic Lemmas, Part 3\\label{Section: Some Algebraic Lemmas, Part 3}}\n\\input{ch4sec4.tex}\n \n\\section{Manifold Topology\\label{Section: Manifold Topology}}\n\\input{ch4sec5.tex}\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}