diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfxad" "b/data_all_eng_slimpj/shuffled/split2/finalzzfxad" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfxad" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\vspace{-0.05in}\nSemantic segmentation aims at assigning semantic labels to every pixel of an image. Leveraging on CNNs~\\cite{he2016deep,Hu_2018_CVPR,ILSVRC15,simonyan2014very,szegedy2015going}, significant progress has been reported for this fundamental task~\\cite{chen2016deeplab,Chen_2018_ECCV,long2015fully,Peng_2017_CVPR}. One drawback of the existing approaches, nevertheless, is the requirement of large quantities of pixel-level annotations, such as in VOC \\cite{everingham2010pascal}, COCO \\cite{lin2014microsoft} and Cityscapes \\cite{Cordts2016Cityscapes} datasets, for model training. Labeling of semantics at pixel-level is cost expensive and time consuming.\nFor example, the Cityscapes dataset is composed of 5,000 high-quality pixel-wise annotated images, and the annotation on a single image is reported to take more than 1.5 hours.\n\nAn alternative is by utilizing synthetic data, which is largely available in 3D engines (e.g., SYNTHIA \\cite{ros2016synthia}) and 3D computer games (e.g., GTA5 \\cite{GTA5_richter2016playing}). The ground-truth semantics of these data can be automatically generated without manual labeling. Nevertheless, in the case where the synthetic data is different from the real images, the domain gap might be difficult to bridge. Unsupervised domain adaptation is generally regarded as an appealing way to address the problem of domain gap. The existing approaches include narrowing the gap by transferring images across domains \\cite{dundar2018domain,murez2018image,wu2018dcan} and learning domain-invariant representation via adversarial mechanism \\cite{Du_2019_ICCV,luo2019taking,Vu_2019_CVPR}.\n\n\\begin{figure*}[!tb]\n\\vspace{-0.05in}\n \\centering {\\includegraphics[width=0.98\\textwidth]{intro.pdf}}\n \\vspace{-0.1in}\n \\caption{\\small The examples of (a) predictions on two domains by fully convolutional networks trained on synthetic data; (b)$\\sim$(d) the three evaluation criteria we studied, i.e., patch-based consistency, cluster-based consistency and spatial logic.}\n \\label{fig:intro}\n \\vspace{-0.25in}\n\\end{figure*}\n\nIn this paper, we consider model overfitting in source domain as the major cause of domain mismatch. As shown in Figure \\ref{fig:intro}(a), although Fully Convolutional Networks (FCN) perfectly segment the synthetic image by correct labeling of pixels, directly deploying this model for real image yields poor results. Instead of leveraging training samples in the target domain for model fine-tuning, this paper explores label-free constraints to alleviate the problem of model overfitting. These constraints are intrinsic and generic in the context of semantic segmentation. Figure \\ref{fig:intro}(b)$\\sim$(d) illustrate three label-free constraints being investigated. The first two constraints, namely patch-based and cluster-based consistencies guide the segmentation based on the prediction consistency among the pixels in an image patch and among the clusters of patches sharing similar visual properties, respectively. The last criterion, namely spatial logic, contextualizes the prediction of labels based on spatial relation between image patches. 
Based on these criteria, we propose a novel Regularizer of Prediction Transfer (RPT) for transferring the model trained on synthetic data for semantic segmentation of real images.\n\nThe main contribution of this paper is on the exploration of label-free data-driven constraints for transferring of model to bridge domain gap. These constraints are imposed as regularizers during training to transfer an overfitted source model for proper labeling of pixels in the target domain. Specifically, at the lowest level of regularization, majority voting is performed to derive a dominative category for each image patch. The dominative category serves as a local cue for pixels with low prediction confidence to adjust their label prediction during training. The patch-level regularization is then extended to a higher level of regularization to explore cluster-level and context-level prediction consistency.\nDespite its simplicity, the three regularizers, when jointed optimized in a fully convolutional network with adversarial learning, show impressive performances by outperforming several state-of-the-art methods, when transferring the models trained on GTA5 and SYNTHIA for semantic segmentation on the Cityscapes dataset.\n\n\\vspace{-0.05in}\n\\section{Related Work}\n\\vspace{-0.05in}\n\\textbf{CNN Based Semantic Segmentation.} As one of the most challenging computer vision task, semantic segmentation has received intensive research attention. With the surge of deep learning and convolutional neural networks (CNNs), Fully Convolutional Network (FCN)~\\cite{long2015fully} successfully serves as an effective approach that employs CNNs to perform dense semantic prediction. Following FCN, various schemes, ranging from multi-path feature aggregation and refinement~\\cite{ghiasi2016laplacian,Lin:2017:RefineNet,Peng_2017_CVPR,Pohlen_2017_CVPR,zhang2019customizable,Zhao_2018_ECCV} to multi-scale context extraction and integration~\\cite{chen2018searching,chen2016deeplab,He_2019_ICCV,qiu2017learning,yang2018denseaspp,Zhang_2018_CVPR_context,zhao2017pspnet}, have been developed and achieved great success in leveraging contextual information for semantic segmentation. Post-processing techniques, such as CRF~\\cite{chen2016deeplab} and MRF~\\cite{liu2018deep}, could further be applied to take the spatial consistency of labels into account and improve the predictions from FCNs. Considering that such methods typically rely on the datasets with pixel-level annotations which are extremely expensive and laborious to collect, researchers have also strived to utilize a weaker form of annotation, such as image-level tags \\cite{papandreou2015weakly,pinheiro2015image}, bounding boxes~\\cite{dai2015boxsup}, scribbles~\\cite{bearman2016s} and statistics \\cite{pathak2015constrained}, for semantic segmentation. The development of computer graphics techniques provides an alternative approach that exploits synthetic data with free annotations. This work aims to study the methods of applying the semantic segmentation model learnt on the computer-generated synthetic data to unlabeled real data.\n\n\\textbf{Domain Adaptation of Semantic Segmentation.}\nTo alleviate the issues of expensive labeling efforts in collecting pixel-level annotations, domain adaptation is studied for semantic segmentation. FCNWild~\\cite{BDDS_hoffman2016fcns}, which is one of the early works, attempts to align the features in different domains from both global and local aspects by adversarial training. 
Curriculum~\\cite{zhang2017curriculum} proposes a curriculum-style learning approach to bridge the domain gap between synthetic and real data. Later on, similar to domain adaptation in image recognition and object detection \\cite{cai2019exploring,pan2019transferrable,yao2015semi}, visual appearance-level and\/or representation-level adaptation are exploited in~\\cite{dundar2018domain,murez2018image,Tsai_2018_CVPR,Zhang_2018_CVPR} for this task. \\cite{dundar2018domain,murez2018image} perform an image-to-image translation that transfers the synthetic images to the real domain in the appearance-level. From the perspective of the representation-level adaptation, AdaSegNet~\\cite{Tsai_2018_CVPR} proposes to apply adversarial learning on segmentation maps for adapting structured output space.\nFCAN~\\cite{Zhang_2018_CVPR} employs the two levels of adaptation simultaneously, in which the appearance gap between synthetic and real images is minimized and the network is encouraged to learn domain-invariant representations.\nThere have been several other strategies~\\cite{chang2019all,chen2019learning,chen2018road,pmlr-v80-hoffman18a,iqbal2019mlsl,li2019bidirectional,zou2018unsupervised}, being performed for cross-domain semantic segmentation.\nFor example, ROAD~\\cite{chen2018road} devises a target guided distillation module and a spatial-aware adaptation module for real style and distribution orientation. Labels from the source domain are transferred to the target domain as the additional supervision in CyCADA~\\cite{pmlr-v80-hoffman18a}. Depth maps which are available in virtual 3D environments are utilized as geometric information to reduce domain shift in ~\\cite{chen2019learning}. \\cite{iqbal2019mlsl,li2019bidirectional,zou2018unsupervised} treat target predictions as the guide for learning a model applicable to the images in target domain by self-supervised learning. \\cite{chang2019all} proposes a domain invariant structure extraction framework that decouples the structure and texture representations of images and improves the performance of segmentation.\n\n\\textbf{Summary.} Most of the aforementioned approaches mainly investigate the problem of domain adaptation for semantic segmentation through bridging the domain gap during training. Our work is different in the way that we seek the additional regularization for the prediction in target domain based on the intrinsic and generic properties of semantic segmentation task. Such solution formulates an innovative and promising research direction for this task.\n\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.478\\textwidth]{patch.pdf}}\n \\caption{\\small Example of pixels to be unpunished (a) or punished (b) in optimization. (a) For the unpunished cases, some pixels are very confident in the class differed from the dominative category. (b) For the punished cases, most pixels inside the region predict relatively high probabilities for the dominative category.}\n \\label{fig:patch}\n \\vspace{-0.15in}\n\\end{figure}\n\\section{Regularizer of Prediction Transfer}\nWe start by introducing the Regularizer of Prediction Transfer (RPT) for semantic segmentation.\nThree criteria are defined to assess the quality of segmentation. The result of assessment is leveraged to guide the transfer of a learnt model in the source domain for semantic segmentation in the target domain.\n\n\\subsection{Patch-based Consistency}\nThe idea is to enforce all pixels in a patch to be consistent in the prediction of semantic labels. 
Here, a patch is defined as a superpixel that groups neighboring pixels with similar visual appearance. We employ Simple Linear Iterative Clustering (SLIC)~\\cite{achanta2012slic}, which is both speed- and memory-efficient in generating superpixels by adopting a k-means algorithm.\nGiven one image from the target domain $x_t$, SLIC splits the image into $N$ superpixels $\\{S_i|i=1,...,N\\}$. Each superpixel $S_i=\\{p^j_i|j=1,...,M_i\\}$ is composed of $M_i$ adjacent pixels with similar appearance.\nWe assume that all or the majority of pixels in a superpixel should be annotated with the same semantic label. Here, the dominative category $\\hat{y}_i$ of a superpixel is defined as the most frequently predicted label among all the pixels in this superpixel.\n\nAs SLIC considers only visual cues, a superpixel usually contains multiple regions of different semantic labels. Simply involving all pixels in network optimization runs the risk of skewing the optimization. To address this problem, a subset of pixels is masked out from patch-based regularization.\nSpecifically, in superpixel $S_i$, pixels $p_i^j\\in S_i$ are clustered into two groups depending on the predicted probability of the dominative category $\\hat{y}_i$: (a) $P_{seg}(\\hat{y}_i| p^j_i)\\le\\lambda_{pc}$ means that the probability is less than or equal to a pre-defined threshold $\\lambda_{pc}$. In other words, the pixel $p_i^j$ is predicted with labels different from the dominative category with relatively high probability. This group of pixels should be exempted from regularization. (b) $P_{seg}(\\hat{y}_i| p^j_i)>\\lambda_{pc}$ represents that $p_i^j$ is predicted as the dominative category with relatively high confidence. In this case, the dominative category $\\hat{y}_i$ is leveraged as a cue to guide the prediction of these pixels. To this end, the loss term for patch-based consistency regularization of a target image $x_t$ is formulated as:\n\\begin{equation}\\label{eq:pc}\n\\begin{aligned}\n\\mathcal{L}_{pc}(x_t)=- \\sum_{i, j} I_{(P_{seg}(\\hat{y}_i| p^j_i)>\\lambda_{pc})} log P_{seg}(\\hat{y}_i| p^j_i)\n\\end{aligned}~~,\n\\end{equation}\nwhere $I_{(\\cdot)}$ is an indicator function to selectively mask out pixels from optimization by thresholding. Figure \\ref{fig:patch} shows examples of superpixels that are masked out (i.e., unpunished) and involved (i.e., punished) in optimization.\n\n\n\\subsection{Cluster-based Consistency}\n\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.40\\textwidth]{cluster.pdf}}\n \\caption{\\small Feature space visualization of seven superpixel clusters using t-SNE. The dominative category is given for each cluster.}\n \\label{fig:cluster}\n \\vspace{-0.15in}\n\\end{figure}\nIn addition to patches, we also enforce the consistency of label prediction among clusters of patches that are visually similar. Specifically, cluster-level regularization imposes the constraint that superpixels with similar visual properties should predict the cluster dominative category as their label. To this end, superpixels are further grouped into clusters. The feature representation of a superpixel is extracted through ResNet-101~\\cite{he2016deep}, which is pre-trained on the ImageNet dataset~\\cite{ILSVRC15}. The feature vector utilized for clustering is generated by average pooling the feature maps of the superpixel region from the $res5c$ layer.\nAll the superpixels from target domain images are grouped into $K=2048$ clusters by the k-means algorithm. 
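As an illustration of this clustering step, a minimal Python sketch is given below (the original implementation is in Caffe; here we assume off-the-shelf scikit-learn k-means, and \\texttt{feat\\_maps} and \\texttt{sp\\_maps} are hypothetical lists holding the upsampled $res5c$ feature maps and the SLIC label maps of the target images):
\\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def superpixel_descriptors(feat_map, sp_labels):
    # Average-pool the (H x W x C) per-pixel features over each superpixel.
    n_sp = int(sp_labels.max()) + 1
    descs = np.zeros((n_sp, feat_map.shape[-1]), dtype=np.float32)
    for s in range(n_sp):
        descs[s] = feat_map[sp_labels == s].mean(axis=0)
    return descs

# Collect descriptors over all target-domain images, then cluster into K groups.
all_descs = np.concatenate([superpixel_descriptors(f, s)
                            for f, s in zip(feat_maps, sp_maps)])
cluster_ids = KMeans(n_clusters=2048).fit_predict(all_descs)
\\end{verbatim}
Each superpixel thereby receives a cluster index, which is used next to derive the cluster-level dominative category.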
The cluster-level dominative category $\\tilde{y}_k$ is determined by majority voting among the superpixels within a cluster. Figure \\ref{fig:cluster} visualizes seven examples of clusters and the corresponding dominative categories by t-SNE \\cite{maaten:JMLR08}. As clustering is imperfect, it is expected that some superpixels will be incorrectly grouped. Denote $P_{seg}(\\tilde{y}_k| p^j_i)$, where $p^j_i \\in S_i \\in C_k$, as the probability of predicting cluster-level dominative category as label for pixel $p^j_i$. Similar to patch-based consistency regularization, pixels with low confidence on the cluster-level category will not be punished during network optimization. Thus, the loss item of cluster-based consistency regularization for a target image $x_t$ is defined as:\n\\begin{equation}\\label{eq:pc}\n\\begin{aligned}\n\\mathcal{L}_{cc}(x_t)=- \\sum_{i, j, S_i \\in C_k} I_{(P_{seg}(\\tilde{y}_k| p^j_i)>\\lambda_{cc})} log P_{seg}(\\tilde{y}_k| p^j_i)\n\\end{aligned}~~,\n\\end{equation}\nwhere $\\lambda_{cc}$ is a pre-defined threshold to gate whether a pixel should be masked out from regularization.\n\n\n\\subsection{Spatial Logic}\nA useful cue to leverage for target-domain segmentation is the spatial relation between semantic labels. For instance, a superpixel of category \\emph{sky} is likely on the top of another superpixel labeled with \\emph{building} or \\emph{road}, and not vice versa. These relations are expected to be invariant across the source and target domains. The supportive hypothesis behind is introduced in \\cite{chang2019all} that the high-level structure information of an image is informative for semantic segmentation and can be readily shared across domains. As such, the motivation of spatial logic is to preserve the spatial relations learnt in source domain to target domain.\n\nFormally, we exploit the LSTM encoder-decoder architecture to learn the vertical relation between superpixels, as shown in Figure \\ref{fig:spatial}. The main goal of this architecture is to speculate the category of the masked segment in the sequence according to context information. Then, the produced probability can be used to evaluate the logical validity of the predicted category in the masked segment. Suppose we have a prediction sequence $\\mathcal{Y}$, where $\\mathcal{Y}=\\{\\mathbf{y}_1,\\mathbf{y}_2,...,\\mathbf{y}_{T-1},\\mathbf{y}_{T}\\}$ including $T$ superpixel predictions sliced from one column of prediction map. Let $\\mathbf{y}_{t} \\in \\mathbb{R}^{C+1}$ denote the one-hot vector of the $t$-th prediction in the sequence, and the dimension of $\\mathbf{y}_{t}$, i.e., $C+1$, is the number of semantic categories plus one symbol as an identification of masked prediction. The masked prediction sequence $\\hat{\\mathcal{Y}}$, which is fed into the LSTM encoder, is generated by masking a segment of consecutive predictions with the identical semantic category in the original sequence $\\mathcal{Y}$. The LSTM encoder embeds the masked prediction sequence $\\hat{\\mathcal{Y}}$ into a sequence representation. The LSTM decoder, which is attached on the top of the encoder, then speculates the categories of the masked segment and reconstructs the original sequence $\\mathcal{Y}$. To learn the aforementioned spatial logic, the encoder-decoder architecture is optimized with the cross-entropy loss supervised by the label from source domain.\n\nNext, the optimized model can be utilized to estimate the validity of each prediction from the view of spatial logic. 
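For concreteness, the run-masking used to build the encoder inputs can be sketched as follows (an illustrative Python snippet, not the original implementation; category labels are assumed to be integer ids and \\texttt{mask\\_id} denotes the extra masking symbol):
\\begin{verbatim}
import random

def mask_one_run(seq, mask_id):
    # Split the column sequence into runs of identical categories.
    runs, start = [], 0
    for i in range(1, len(seq) + 1):
        if i == len(seq) or seq[i] != seq[start]:
            runs.append((start, i))
            start = i
    # Mask one randomly chosen run; its category is the reconstruction target.
    s, e = random.choice(runs)
    masked = list(seq)
    masked[s:e] = [mask_id] * (e - s)
    return masked, (s, e), seq[s]
\\end{verbatim}
The decoder is trained to recover the category of the masked run from its context, and the same masking is reused at inference time to score each superpixel prediction, as detailed next.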
For the target image $x_t$, we first slice the prediction map into several columns consisting of vertically neighboring superpixels. The patch-level dominative categories of the superpixels in a column are organized into a prediction sequence. For the superpixel $S_i$ in the column, the spatial logical probability $P_{logic}(\\hat{y}_i| S_i)$ is measured by the LSTM encoder-decoder only when the prediction of this superpixel is masked in the input sequence. Once this probability is lower than the threshold $\\lambda_{sl}$, we consider this prediction to be illogical and punish the prediction of $\\hat{y}_i$ by the segmentation network. The loss of spatial logic regularization is computed as:\n\\begin{equation}\\label{eq:sl}\n\\begin{aligned}\n\\mathcal{L}_{sl}(x_t)=\\sum_{i, j} I_{(P_{logic}(\\hat{y}_i| S_i)<\\lambda_{sl})} log P_{seg}(\\hat{y}_i| p^j_i)\n\\end{aligned}~~,\n\\end{equation}\nwhere $P_{logic}(\\cdot)$ denotes the prediction from the LSTM encoder-decoder architecture.\n\n\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.45\\textwidth]{spatial.pdf}}\n \\caption{\\small The LSTM encoder-decoder architecture to learn the spatial logic in the prediction map.}\n \\label{fig:spatial}\n \\vspace{-0.15in}\n\\end{figure}\n\n\n\\section{Semantic Segmentation with RPT}\nThe proposed Regularizer of Prediction Transfer (RPT) can be easily integrated into most of the existing frameworks for domain adaptation of semantic segmentation. Here, we choose the widely adopted framework based on adversarial learning, as shown in Figure \\ref{fig:framework}. The principle of this framework is to guide the semantic segmentation in both domains by fooling a domain discriminator $D$ with the learnt source and target representations. Formally, given the training set $\\mathcal{X}_{s}=\\{x_s^{i}|i=1,\\dots,N_s\\}$ in the source domain and $\\mathcal{X}_{t}=\\{x_t^{i}|i=1,\\dots,N_t\\}$ in the target domain, the adversarial loss $\\mathcal{L}_{adv}$ is the average classification loss, which is formulated as:\n\\begin{equation}\n \\label{eq:adv}\n \\begin{aligned}\n \\mathcal{L}_{adv}(\\mathcal{X}_{s},\\mathcal{X}_{t})= \\mathop{-E}\\limits_{x_t \\sim \\mathcal{X}_t}[log(D(x_t))]\\mathop{-E}\\limits_{x_s \\sim \\mathcal{X}_s}[log(1 - D(x_s))]\n \\end{aligned}~~,\n\\end{equation}\nwhere $\\mathop{E}$ denotes the expectation over the image set. The discriminator $D$ attempts to minimize this loss by differentiating between source and target representations, while the shared Fully Convolutional Network (FCN) is learnt to fool the domain discriminator.\nConsidering that the image region corresponding to the receptive field of each spatial unit in the final feature map is treated as an individual instance during semantic segmentation, the representations of such instances are expected to be invariant across domains.\nThus, we employ a fully convolutional domain discriminator whose outputs are the domain predictions of the image regions corresponding to the spatial units in the feature map.\n\nSince training labels are available in the source domain, the loss function is based on the pixel-level classification loss $\\mathcal{L}_{seg}$. 
In contrast, due to the absence of training labels, the loss function in the target domain is defined based upon the following three regularizers:\n\\begin{equation}\n \\label{eq:rpt}\n \\small\n \\begin{aligned}\n \\mathcal{L}_{rpt}(\\mathcal{X}_{t})= \\mathop{E}\\limits_{x_t \\sim \\mathcal{X}_t}[\\mathcal{L}_{cc}(x_t)+\\mathcal{L}_{pc}(x_t)+\\mathcal{L}_{sl}(x_t)]\n \\end{aligned}~~.\n\\end{equation}\nHere, we empirically treat each loss in RPT equally. Thus, the overall objective of the segmentation framework integrates $\\mathcal{L}_{adv}$, $\\mathcal{L}_{seg}$ and $\\mathcal{L}_{rpt}$ as:\n\\begin{equation}\n \\label{eq:all}\n \\small\n \\begin{aligned}\n \\mathop{\\min}_{FCN}\\{-\\varepsilon \\mathop{\\min}_{D}\\mathcal{L}_{adv}(\\mathcal{X}_s, \\mathcal{X}_t) + \\mathcal{L}_{seg}(\\mathcal{X}_s) + \\mathcal{L}_{rpt}(\\mathcal{X}_t)\\}\n \\end{aligned}~~,\n\\end{equation}\nwhere $\\varepsilon=0.1$ is the trade-off parameter to align the scale of different losses.\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.45\\textwidth]{framework.pdf}}\n \\vspace{-0.05in}\n \\caption{\\small The adversarial-based semantic segmentation adaptation framework with RPT. The shared FCN is learnt with adversarial loss for domain-invariant representations across two domains. The predictions on source domain are optimized by supervised label, while the target domain predictions are regularized by RPT loss.}\n \\label{fig:framework}\n \\vspace{-0.15in}\n\\end{figure}\n\n\\section{Implementation} \\label{sec:imp}\n\\textbf{Training strategy.} Our proposed network is implemented in Caffe~\\cite{jia2014caffe} framework and the weights are trained by SGD optimizer. We employ dilated FCN~\\cite{chen2016deeplab} originated from the ImageNet pre-trained ResNet-101 as our backbone followed by a PSP module~\\cite{zhao2017pspnet}, unless otherwise stated. The domain discriminator for adversarial learning is borrowed from FCAN~\\cite{Zhang_2018_CVPR}. During the training stage, images are randomly cropped to $713\\times713$ due to the limitation of GPU memory. Both random horizontal flipping and image resizing are utilized for data augmentation. To make the training process stable, we pre-train the FCN on data from the source domain with annotations. At the stage of pre-training, the ``poly'' policy whose power is fixed to 0.9 is adopted with the initial learning rate 0.001. Momentum and weight decay are 0.9 and 0.0005 respectively. Each mini-batch has 8 samples and maximum training iterations is set as 30K. With the source domain pre-trained weights, we perform the domain adaptation by finetuning the whole adaptation framework which is equipped with our proposed RPT. The initial learning rate is 0.0001 and the total training iteration is 10K. Other training hyper-parameters remain unchanged.\nFollowing \\cite{lian2019constructing}, we randomly selected 500 images from the official training set of Cityscapes as a general validation set. The hyper-parameters ($\\lambda_{pc}=\\lambda_{cc}=\\lambda_{sl}=0.25$, $\\varepsilon=0.1$) are all determined on this set.\n\n\\textbf{Complexity of superpixel.}\nRPT highly relies on the quality of superpixel extraction. For robustness, superpixels with complex content ideally should be excluded from model training. The term ``complex'' refers to the distribution of semantic labels in a superpixel. In our case, we measure complexity based on the proportion of pixels being predicted with the dominative category over the number of pixels in a superpixel. 
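As a sketch of how this complexity measure could be computed (illustrative Python only; \\texttt{pred} and \\texttt{sp} are assumed to hold the per-pixel predicted labels and the SLIC superpixel ids of one image):
\\begin{verbatim}
import numpy as np

def superpixel_purity(pred, sp):
    # Fraction of pixels agreeing with the dominative (most frequent) category;
    # a low value marks a "complex" superpixel.
    n_sp = int(sp.max()) + 1
    purity = np.zeros(n_sp, dtype=np.float32)
    for s in range(n_sp):
        labels = pred[sp == s]
        purity[s] = np.bincount(labels).max() / labels.size
    return purity
\\end{verbatim}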
A larger value implies consistency in prediction and hence safer to involve the corresponding superpixel in regularizations. Empirically, RPT only regularizes the top-50\\% of superpixels. The empirical choice will be further validated in the next section.\n\n\\textbf{State update of RPT.}\nDuring network optimization, the segmentation prediction $P_{seg}$, superpixel dominative category $\\hat{y}_i$ and cluster dominative category $\\tilde{y}_k$ change gradually. Iteratively updating these ``states'' is computationally expensive because reassigning the categories to superpixel and cluster (e.g., $\\hat{y}_i$ and $\\tilde{y}_k$) requires the semantic predictions collected from the whole training set of the target domain. Considering these predictions only change slightly during training, we first calculate these states before the optimization (without regularization) and fix these states at the beginning of iterations. Then, we will update the predictions or states for $N_{su}$ times evenly during training.\n\n\\vspace{-0.1in}\n\\section{Experiments}\n\\vspace{-0.05in}\n\\subsection{Datasets}\n\\vspace{-0.05in}\nThe experiments are conducted on GTA5~\\cite{GTA5_richter2016playing}, SYNTHIA~\\cite{ros2016synthia} and Cityscapes~\\cite{Cordts2016Cityscapes} datasets. The proposed RPT is trained on GTA5 and SYNTHIA (source domain) and Cityscapes (target domain). GTA5 is composed of 24,966 synthetic images of size $1914 \\times 1052$. These images are generated by Grand Theft Auto V (GTA5), a modern computer game, to render city scenes. The pixels of these images are annotated with 19 classes that are compatible with the labels in Cityscapes. Similarly, SYNTHIA consists of synthetic images of urban scenes with resolutions of $1280 \\times 760$. Following~\\cite{chang2019all,chen2019learning,hong2018conditional,li2019bidirectional,Tsai_2018_CVPR}, we use the subset, SYNTHIA-RAND-CITYSCAPES, which has 9,400 images being annotated with labels consistent with Cityscapes for experiments. Cityscapes is composed of 5,000 images of resolution $2048 \\times 1024$. These images are split into three subsets of sizes 2,975, 500 and 1,525 for training, validation and testing, respectively. The pixels of these images are annotated with 19 classes. In the experiments, the training subset is treated as the target-domain training data, where the pixel-level annotation is assumed unknown to RPT. On the other hand, the target-domain testing data is from validation subset. The same setting is also exploited in~\\cite{chang2019all,li2019bidirectional,Tsai_2018_CVPR}.\n\nTo this end, the performance of RPT is assessed by treating GTA5 as source domain and Cityscapes as target domain (i.e., GTA5~$\\to$~Cityscapes), and similarly, SYNTHIA~$\\to$~Cityscapes. 
The metrics are per class Intersection over Union (IoU) and mean IoU over all the classes.\n\n\\begin{table}\n \\centering\n \\small\n \\caption{\\small RPT performances in terms of mean IoU for domain adaptation of semantic segmentation on GTA5~$\\to$~Cityscapes.}\n \\begin{tabular}{l|c@{~~}c@{~~}c|c@{~~}c@{~~}c} \\hline\n \\multirow{2}{*}{\\textbf{Method}} & \\multicolumn{3}{c|}{\\textbf{ResNet-50}}& \\multicolumn{3}{c}{\\textbf{ResNet-101}}\\\\\n & FCN & +ABN & +ADV & FCN & +ABN & +ADV \\\\ \\hline\n baseline & 30.1 & 35.7 & 45.7 & 32.3 & 39.1 & 47.2 \\\\ \\hline\n \\textbf{RPT$^{1}$} & 33.0 & 39.3 & 48.7 & 36.1 & 42.9 & 50.4 \\\\\n \\textbf{RPT$^{2}$} & 33.4 & 39.9 & 50.0 & 37.9 & 44.2 & 51.7\\\\\n \\textbf{RPT$^{3}$} & 33.5 & 40.0 & 50.0 & 39.1 & 44.6 & 52.6\\\\ \\hline\n \\end{tabular}\n \\label{tab:effectiveness}\n \\vspace{-0.15in}\n\\end{table}\n\n\\begin{figure}[!tb]\n \\centering\n \\subfigure[State updating]{\n \\label{fig:curve:a}\n \\includegraphics[width=0.22\\textwidth]{exp_iter.pdf}}\n \\subfigure[Filtering complex superpixels]{\n \\label{fig:curve:b}\n \\includegraphics[width=0.23\\textwidth]{exp_complex.pdf}}\n \\caption{\\small Two analysis experiments of (a) the effectiveness of state updating during training of RPT$^{3}$; (b) the percentage of filtered complex superpixels of RPT$^{1}$.}\n \\label{fig:curve}\n \\vspace{-0.15in}\n\\end{figure}\n\\subsection{Evaluation of RPT}\nRPT is experimented on top of six different network architectures derived from \\textbf{FCN} which leverages on either \\textbf{ResNet-50} or \\textbf{ResNet-101} as the backbone network. Specially, we adopt Adaptive Batch Normalization (\\textbf{ABN}) to replace the mean and variance of BN in the original version of FCN, resulting in a variant of network named FCN+ABN. Note that the BN layer is first learnt in source domain and then replaced by ABN when being applied to the target domain. In addition, leveraging on the adversarial training (\\textbf{ADV}), another variant, FCN+ABN+ADV, is trained to learn domain-invariant representations.\n\nWe first verify the impact of $N_{su}$, the number of state updating, in RPT. Table~\\ref{tab:effectiveness} summarizes the impact on six variants of network for domain adaptation on GTA5~$\\to$~Cityscapes.\nAll the networks are pre-trained on ImageNet dataset and then injected with RPT.\nThe superscript, RPT$^{n}$, refers to the number of times for state updating (see Table~\\ref{tab:effectiveness} for exact number).\nThe baselines are obtained by performing domain adaptation of semantic segmentation on the use of the corresponding network architectures, but without RPT.\nOverall, RPT improves the baseline without regularization. The improvement is consistently observed across the variants of networks, and proportional to the number of state updating at the expense of computation cost. RPT$^{3}$ achieves the best performance (mIoU = 52.6\\%) and with 5.4\\% improvement over the baseline of the same network (FCN+ABN+ADV). Figure \\ref{fig:curve:a} shows the performance changes in terms of mIoU during training over different times of state updating. The training starts with model learning in source domain. 
State updating, such as the assignment of dominative categories at superpixel and cluster levels, is then performed three times evenly during the training process in the target domain.\nDespite dropping in performance at the start of training after each state updating, mIoU gradually improves and eventually converges to a higher value than the previous round.\nFigure \\ref{fig:curve:b} shows the performance trend when the percentage of complex superpixels being excluded from learning gradually increases. As shown, the value mIoU constantly increases till reaching the level when 50\\% of superpixels are filtered. In the remaining experiments, we fix the setting of RPT to involve 50\\% of superpixels in regularization.\n\n\\begin{table}\n \\centering\n \\small\n \\caption{\\small Contribution of each design in RPT for domain adaptation of semantic segmentation on GTA5~$\\to$~Cityscapes.}\n \\begin{tabular}{l|c@{~}c@{~}c@{~}c@{~}c@{~}c|c} \\hline\n \\textbf{Method} & \\textbf{ABN} & \\textbf{ADV} & \\textbf{PCR} & \\textbf{CCR} & \\textbf{SLR} & \\textbf{SU} & \\textbf{mIoU} \\\\\\hline\n FCN & & & & & & & 32.3 \\\\\n +ABN & $\\surd$ & & & & & & 39.1 \\\\\n FCN$_{adv}$ (+ADV) & $\\surd$ & $\\surd$ & & & & & 47.2 \\\\ \\hline\n +{PCR} & $\\surd$ & $\\surd$ & $\\surd$ & & & & 49.0 \\\\\n +{CCR} & $\\surd$ & $\\surd$ & $\\surd$ & $\\surd$ & & & 49.6 \\\\\n \\textbf{RPT$^{1}$} (+{SLR}) & $\\surd$ & $\\surd$ & $\\surd$ & $\\surd$ & $\\surd$ & & 50.4 \\\\\n \\textbf{RPT$^{3}$} & $\\surd$ & $\\surd$ & $\\surd$ & $\\surd$ & $\\surd$ & $\\surd$ & 52.6 \\\\\\hline\n \\end{tabular}\n \\label{tab:contribution}\n \\vspace{-0.15in}\n\\end{table}\n\n\\begin{table*}\n \\centering\n \\footnotesize\n \\caption{\\small Comparisons with the state-of-the-art unsupervised domain adaptation methods on GTA5~$\\to$~Cityscapes adaptation. 
Please note that the baseline methods are divided into five groups: (1) representation-level domain adaptation by adversarial learning \\cite{chen2018road,Du_2019_ICCV,pmlr-v80-hoffman18a,hong2018conditional,luo2019taking,sankaranarayanan2018learning,Tsai_2018_CVPR}; (2) appearance-level domain adaptation by image translation \\cite{dundar2018domain,murez2018image}; (3) appearance-level + representation-level adaptation~\\cite{chang2019all,wu2018dcan,Zhang_2018_CVPR}; (4) self-learning \\cite{iqbal2019mlsl,lian2019constructing,zhang2018fully,zou2018unsupervised}; (5) others \\cite{Chen_2019_ICCV,li2019bidirectional,saleh2018effective,zhang2017curriculum,zhu2018penalizing}.}\n \\begin{tabular}{l@{~}|@{~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~}|@{~}c@{~~}} \\hline\n Method & road & sdwlk & bldng & wall & fence & pole & light & sign & vgttn & trrn & sky & person & rider & car & truck & bus & train & mcycl & bcycl & mIoU \\\\ \\hline\n FCNWild~\\cite{BDDS_hoffman2016fcns} & 70.4 & 32.4 & 62.1 & 14.9 & 5.4 & 10.9 & 14.2 & 2.7 & 79.2 & 21.3 & 64.6 & 44.1 & 4.2 & 70.4 & 8.0 & 7.3 & 0.0 & 3.5 & 0.0 & 27.1 \\\\\n Learning~\\cite{sankaranarayanan2018learning} & 88.0 & 30.5 & 78.6 & 25.2 & 23.5 & 16.7 & 23.5 & 11.6 & 78.7 & 27.2 & 71.9 & 51.3 & 19.5 & 80.4 & 19.8 & 18.3 & 0.9 & 20.8 & 18.4 & 37.1 \\\\\n ROAD~\\cite{chen2018road} & 76.3 & 36.1 & 69.6 & 28.6 & 22.4 & 28.6 & 29.3 & 14.8 & 82.3 & 35.3 & 72.9 & 54.4 & 17.8 & 78.9 & 27.7 & 30.3 & 4.0 & 24.9 & 12.6 & 39.4 \\\\\n CyCADA~\\cite{pmlr-v80-hoffman18a} & 79.1 & 33.1 & 77.9 & 23.4 & 17.3 & 32.1 & 33.3 & 31.8 & 81.5 & 26.7 & 69.0 & 62.8 & 14.7 & 74.5 & 20.9 & 25.6 & 6.9 & 18.8 & 20.4 & 39.5 \\\\\n AdaptSegNet~\\cite{Tsai_2018_CVPR} & 86.5 & 36.0 & 79.9 & 23.4 & 23.3 & 23.9 & 35.2 & 14.8 & 83.4 & 33.3 & 75.6 & 58.5 & 27.6 & 73.7 & 32.5 & 35.4 & 3.9 & 30.1 & 28.1 & 42.4 \\\\\n CLAN~\\cite{luo2019taking} & 87.0 & 27.1 & 79.6 & 27.3 & 23.3 & 28.3 & 35.5 & 24.2 & 83.6 & 27.4 & 74.2 & 58.6 & 28.0 & 76.2 & 33.1 & 36.7 & 6.7 & 31.9 & 31.4 & 43.2 \\\\\n Conditional~\\cite{hong2018conditional} & 89.2 & 49.0 & 70.7 & 13.5 & 10.9 & 38.5 & 29.4 & 33.7 & 77.9 & 37.6 & 65.8 & \\textbf{75.1} & \\textbf{32.4} & 77.8 & \\textbf{39.2} & 45.2 & 0.0 & 25.5 & 35.4 & 44.5 \\\\\n SSF-DAN~\\cite{Du_2019_ICCV} & 90.3 & 38.9 & 81.7 & 24.8 & 22.9 & 30.5 & 37.0 & 21.2 & 84.8 & 38.8 & 76.9 & 58.8 & 30.7 & 85.7 & 30.6 & 38.1 & 5.9 & 28.3 & 36.9 & 45.4 \\\\\n ADVENT~\\cite{Vu_2019_CVPR} & 89.4 & 33.1 & 81.0 & 26.6 & 26.8 & 27.2 & 33.5 & 24.7 & 83.9 & 36.7 & 78.8 & 58.7 & 30.5 & 84.8 & 38.5 & 44.5 & 1.7 & 31.6 & 32.4 & 45.5 \\\\ \\hline\n I2I Adapt~\\cite{murez2018image} & 85.8 & 37.5 & 80.2 & 23.3 & 16.1 & 23.0 & 14.5 & 9.8 & 79.2 & 36.5 & 76.4 & 53.4 & 7.4 & 82.8 & 19.1 & 15.7 & 2.8 & 13.4 & 1.7 & 35.7 \\\\\n Stylization~\\cite{dundar2018domain} & 86.9 & 44.5 & 84.7 & 38.8 & 26.6 & 32.1 & 42.3 & 22.5 & 84.7 & 30.9 & 85.9 & 67.0 & 28.1 & 85.7 & 38.3 & 31.8 & 21.5 & 31.3 & 24.6 & 47.8 \\\\ \\hline\n DCAN~\\cite{wu2018dcan} & 85.0 & 30.8 & 81.3 & 25.8 & 21.2 & 22.2 & 25.4 & 26.6 & 83.4 & 36.7 & 76.2 & 58.9 & 24.9 & 80.7 & 29.5 & 42.9 & 2.5 & 26.9 & 11.6 & 41.7 \\\\\n DISE~\\cite{chang2019all} & 91.5 & 47.5 & 82.5 & 31.3 & 25.6 & 33.0 & 33.7 & 25.8 & 82.7 & 28.8 & 82.7 & 62.4 & 30.8 & 85.2 & 27.7 & 34.5 & 6.4 & 25.2 & 24.4 & 45.4 \\\\\n FCAN~\\cite{Zhang_2018_CVPR} & 88.9 & 37.9 & 82.9 & 33.2 & 26.1 & \\textbf{42.8} & 43.2 & 28.4 & 86.5 & 35.2 & 78.0 & 65.9 & 22.8 & 86.7 & 23.7 & 34.9 & 2.7 & 24.0 & 41.9 & 46.6 
\\\\ \\hline\n FCTN~\\cite{zhang2018fully} & 72.2 & 28.4 & 74.9 & 18.3 & 10.8 & 24.0 & 25.3 & 17.9 & 80.1 & 36.7 & 61.1 & 44.7 & 0.0 & 74.5 & 8.9 & 1.5 & 0.0 & 0.0 & 0.0 & 30.5 \\\\\n CBST~\\cite{zou2018unsupervised} & 89.6 & \\textbf{58.9} & 78.5 & 33.0 & 22.3 & 41.4 & 48.2 & 39.2 & 83.6 & 24.3 & 65.4 & 49.3 & 20.2 & 83.3 & 39.0 & 48.6 & 12.5 & 20.3 & 35.3 & 47.0 \\\\\n PyCDA~\\cite{lian2019constructing} & \\textbf{92.3} & 49.2 & 84.4 & 33.4 & 30.2 & 33.3 & 37.1 & 35.2 & 86.5 & 36.9 & 77.3 & 63.3 & 30.5 & 86.6 & 34.5 & 40.7 & 7.9 & 17.6 & 35.5 & 48.0 \\\\\n MLSL~\\cite{iqbal2019mlsl} & 89.0 & 45.2 & 78.2 & 22.9 & 27.3 & 37.4 & 46.1 & 43.8 & 82.9 & 18.6 & 61.2 & 60.4 & 26.7 & 85.4 & 35.9 & 44.9 & \\textbf{36.4} & 37.2 & 49.3 & 49.0 \\\\ \\hline\n Curriculum~\\cite{zhang2017curriculum} & 72.9 & 30 & 74.9 & 12.1 & 13.2 & 15.3 & 16.8 & 14.1 & 79.3 & 14.5 & 75.5 & 35.7 & 10 & 62.1 & 20.6 & 19 & 0 & 19.3 & 12 & 31.4 \\\\\n Penalizing~\\cite{zhu2018penalizing} & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 38.1 \\\\\n Effective~\\cite{saleh2018effective} & 79.8 & 29.3 & 77.8 & 24.2 & 21.6 & 6.9 & 23.5 & \\textbf{44.2} & 80.5 & 38.0 & 76.2 & 52.7 & 22.2 & 83.0 & 32.3 & 41.3 & 27.0 & 19.3 & 27.7 & 42.5 \\\\\n MaxSquare~\\cite{Chen_2019_ICCV} & 89.3 & 40.5 & 81.2 & 29.0 & 20.4 & 25.6 & 34.4 & 19.0 & 83.6 & 34.4 & 76.5 & 59.2 & 27.4 & 83.8 & 38.4 & 43.6 & 7.1 & 32.2 & 32.5 & 45.2 \\\\\n Bidirectional~\\cite{li2019bidirectional} & 91.0 & 44.7 & 84.2 & 34.6 & 27.6 & 30.2 & 36.0 & 36.0 & 85.0 & \\textbf{43.6} & 83.0 & 58.6 & 31.6 & 83.3 & 35.3 & \\textbf{49.7} & 3.3 & 28.8 & 35.6 & 48.5 \\\\ \\hline\\hline\n \\textbf{FCN$_{adv}$+RPT$^{1}$} & 88.7 & 37.0 & 85.2 & 36.6 & 27.7 & 42.6 & 49.1 & 30.0 & 86.9 & 37.6 & 80.7 & 66.8 & 27.5 & 88.1 & 30.3 & 39.5 & 22.5 & 28.0 & 53.0 & 50.4 \\\\\n \\textbf{FCN$_{adv}$+RPT$^{3}$} & 89.2 & 43.3 & 86.1 & 39.5 & 29.9 & 40.2 & 49.6 & 33.1 & 87.4 & 38.5 & 86.0 & 64.4 & 25.1 & 88.5 & 36.6 & 45.8 & 23.9 & 36.5 & 56.8 & 52.6 \\\\\n \\textbf{FCN$_{adv}$+RPT$^{3}$}+MS & 89.7 & 44.8 & \\textbf{86.4} & \\textbf{44.2} & \\textbf{30.6} & 41.4 & \\textbf{51.7} & 33.0 & \\textbf{87.8} & 39.4 & \\textbf{86.3} & 65.6 & 24.5 & \\textbf{89.0} & 36.2 & 46.8 & 17.6 & \\textbf{39.1} & \\textbf{58.3} & \\textbf{53.2} \\\\ \\hline\n \\end{tabular}\n \\label{tab:GTA5}\n \\vspace{-0.15in}\n\\end{table*}\n\n\\subsection{An Ablation Study}\nNext, we conduct an ablation study to assess the performance impacts of different design components. We separately assess the three regularizations in RPT: patch-based consistency regularization (\\textbf{PCR}), cluster-based consistency regularization (\\textbf{CCR}) and spatial logic regularization (\\textbf{SLR}). Table \\ref{tab:contribution} details the contribution of each component towards the overall performance. FCN$_{adv}$, by considering adaptive batch normalization and adversarial learning (ABN+ADV), successfully boosts mIoU from 32.3\\% to 47.2\\%. The result indicates the importance of narrowing the domain gap between synthetic data and real images. The three regularizations in target domain introduce 1.8\\%, 0.6\\% and 0.8\\% of improvement, respectively. Furthermore, by increasing the number of state updating during network optimization, additional 2.2\\% of improvement is observed from RPT$^{1}$ to RPT$^{3}$. 
Figure \\ref{fig:comparison} shows the gradual improvement on semantic segmentation of five images, when different design components are incrementally integrated.\n\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.48\\textwidth]{comparison.pdf}}\n \\caption{\\small Examples of semantic segmentation results on GTA5-Cityscapes adaptation. The original images, their ground truth and comparative results at different stages of FCN$_{adv}$+RPT$^{3}$ are given.}\n \\label{fig:comparison}\n \\vspace{-0.15in}\n\\end{figure}\n\n\n\\subsection{Comparisons with State-of-the-Art}\nWe compare with several state-of-the-art techniques for unsupervised domain adaptation on GTA5~$\\to$~Cityscapes. Broadly, we can categorize the baseline methods into five categories: (1) representation-level domain adaptation by adversarial learning \\cite{chen2018road,Du_2019_ICCV,pmlr-v80-hoffman18a,hong2018conditional,luo2019taking,sankaranarayanan2018learning,Tsai_2018_CVPR}; (2) appearance-level domain adaptation by image translation~\\cite{dundar2018domain,murez2018image}; (3) appearance-level + representation-level adaptation \\cite{chang2019all,wu2018dcan,Zhang_2018_CVPR}; (4) self-learning \\cite{iqbal2019mlsl,lian2019constructing,zhang2018fully,zou2018unsupervised}; (5) others \\cite{Chen_2019_ICCV,li2019bidirectional,saleh2018effective,zhang2017curriculum,zhu2018penalizing}. The performance comparisons on GTA5~$\\to$~Cityscapes adaptation are summarized in Table~\\ref{tab:GTA5}.\nFCN$_{adv}$+RPT$^{3}$ achieves new state-of-the-art performance with mIoU of 52.6\\%. Benefiting from the proposed regularizations, FCN$_{adv}$+RPT$^{3}$ outperforms SSF-DAN~\\cite{Du_2019_ICCV} and ADVENT~\\cite{Vu_2019_CVPR}, which also adopt a similar adversarial mechanism, by additional improvement of 7.2\\% and 7.1\\%, respectively. The performance is also better than the most recently proposed FCAN~\\cite{Zhang_2018_CVPR} and Stylization~\\cite{dundar2018domain}, which exploit a novel appearance transferring module that is not considered in RPT. Comparing to the best reported result to-date by MLSL~\\cite{iqbal2019mlsl}, our proposed model still leads the performance by 3.6\\%. By further integrating with the multi-scale (MS) scheme, i.e, FCN$_{adv}$+RPT$^{3}$+MS, the mIoU boosts to 53.2\\% with 9 out of the 19 categories reach to-date the best reported performances.\n\n\nTo verify the generalization of RPT, we also test the performance on SYNTHIA~$\\to$~Cityscapes using the same settings. Following previous works~\\cite{iqbal2019mlsl,lian2019constructing,Vu_2019_CVPR,zou2018unsupervised}, the performances are reported in terms of mIoU@16 and mIoU@13 by not considering the different number of categories. The performance comparisons are summarized in Table \\ref{tab:SYNTHIA}. Similarly, FCN$_{adv}$+RPT$^{3}$+MS achieves the best performance with mIoU@16 = 51.7\\% and mIoU@13 = 59.5\\%. 
The performances are better than PyCDA, which reports the best known results, by 5\\% and 6.2\\% respectively.\n\\begin{table*}[]\n \\centering\n \\footnotesize\n \\caption{\\small Comparisons with the state-of-the-art unsupervised domain adaptation methods on SYNTHIA~$\\to$~Cityscapes transfer.}\n \\begin{tabular}{l@{~}|@{~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~}|@{~}c@{~~}c@{~}}\\hline\n & road & sdwlk & bldng & wall & fence & pole & light & sign & vgttn & sky & person & rider & car & bus & mcycl & bcycl & mIoU@16 & mIoU@13 \\\\ \\hline\n Learning~\\cite{sankaranarayanan2018learning} & 80.1 & 29.1 & 77.5 & 2.8 & 0.4 & 26.8 & 11.1 & 18.0 & 78.1 & 76.7 & 48.2 & 15.2 & 70.5 & 17.4 & 8.7 & 16.7 & 36.1 & - \\\\\n ROAD~\\cite{chen2018road} & 77.7 & 30.0 & 77.5 & 9.6 & 0.3 & 25.8 & 10.3 & 15.6 & 77.6 & 79.8 & 44.5 & 16.6 & 67.8 & 14.5 & 7.0 & 23.8 & 36.2 & - \\\\\n AdaptSegNet~\\cite{Tsai_2018_CVPR} & 84.3 & 42.7 & 77.5 & - & - & - & 4.7 & 7.0 & 77.9 & 82.5 & 54.3 & 21.0 & 72.3 & 32.2 & 18.9 & 32.3 & - & 46.7 \\\\\n CLAN~\\cite{luo2019taking} & 81.3 & 37.0 & 80.1 & - & - & - & 16.1 & 13.7 & 78.2 & 81.5 & 53.4 & 21.2 & 73.0 & 32.9 & 22.6 & 30.7 & - & 47.8 \\\\\n Conditional~\\cite{hong2018conditional} & 85.0 & 25.8 & 73.5 & 3.4 & \\textbf{3.0} & 31.5 & 19.5 & 21.3 & 67.4 & 69.4 & 68.5 & 25.0 & 76.5 & 41.6 & 17.9 & 29.5 & 41.2 & - \\\\\n SSF-DAN~\\cite{Du_2019_ICCV} & 84.6 & 41.7 & 80.8 & - & - & - & 11.5 & 14.7 & 80.8 & 85.3 & 57.5 & 21.6 & 82.0 & 36.0 & 19.3 & 34.5 & - & 50.0 \\\\\n ADVENT~\\cite{Vu_2019_CVPR} & 85.6 & 42.2 & 79.7 & 8.7 & 0.4 & 25.9 & 5.4 & 8.1 & 80.4 & 84.1 & 57.9 & 23.8 & 73.3 & 36.4 & 14.2 & 33.0 & 41.2 & 48.0 \\\\ \\hline\n DCAN~\\cite{wu2018dcan} & 82.8 & 36.4 & 75.7 & 5.1 & 0.1 & 25.8 & 8.0 & 18.7 & 74.7 & 76.9 & 51.1 & 15.9 & 77.7 & 24.8 & 4.1 & 37.3 & 38.4 & - \\\\\n DISE~\\cite{chang2019all} & \\textbf{91.7} & \\textbf{53.5} & 77.1 & 2.5 & 0.2 & 27.1 & 6.2 & 7.6 & 78.4 & 81.2 & 55.8 & 19.2 & 82.3 & 30.3 & 17.1 & 34.3 & 41.5 & - \\\\\\hline\n CBST~\\cite{zou2018unsupervised} & 53.6 & 23.7 & 75.0 & 12.5 & 0.3 & 36.4 & 23.5 & 26.3 & 84.8 & 74.7 & 67.2 & 17.5 & 84.5 & 28.4 & 15.2 & 55.8 & 42.5 & 48.4 \\\\\n PyCDA~\\cite{lian2019constructing} & 75.5 & 30.9 & 83.3 & 20.8 & 0.7 & 32.7 & 27.3 & \\textbf{33.5} & 84.7 & 85.0 & 64.1 & 25.4 & 85.0 & 45.2 & 21.2 & 32.0 & 46.7 & 53.3 \\\\\n MLSL~\\cite{iqbal2019mlsl} & 59.2 & 30.2 & 68.5 & \\textbf{22.9} & 1.0 & 36.2 & 32.7 & 28.3 & \\textbf{86.2} & 75.4 & \\textbf{68.6} & 27.7 & 82.7 & 26.3 & 24.3 & 52.7 & 45.2 & 51.0 \\\\ \\hline\n Curriculum~\\cite{zhang2017curriculum} & 57.4 & 23.1 & 74.7 & 0.5 & 0.6 & 14.0 & 5.3 & 4.3 & 77.8 & 73.7 & 45.0 & 11.0 & 44.8 & 21.2 & 1.9 & 20.3 & 29.7 & - \\\\\n Penalizing~\\cite{zhu2018penalizing} & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 34.2 & 40.3 \\\\\n MaxSquare~\\cite{Chen_2019_ICCV} & 82.9 & 40.7 & 80.3 & 10.2 & 0.8 & 25.8 & 12.8 & 18.2 & 82.5 & 82.2 & 53.1 & 18.0 & 79.0 & 31.4 & 10.4 & 35.6 & 41.4 & 48.2 \\\\\n Bidirectional~\\cite{li2019bidirectional} & 86.0 & 46.7 & 80.3 & - & - & - & 14.1 & 11.6 & 79.2 & 81.3 & 54.1 & \\textbf{27.9} & 73.7 & 42.2 & 25.7 & 45.3 & - & 51.4 \\\\ \\hline\\hline\n \\textbf{FCN$_{adv}$+RPT$^{1}$} & 87.7 & 43.1 & 84.0 & 10.5 & 0.5 & \\textbf{42.2} & \\textbf{40.5} & 33.1 & 86.0 & 81.9 & 56.0 & 26.1 & 85.9 & 35.8 & 24.8 & 56.2 & 49.6 & 57.0 \\\\\n \\textbf{FCN$_{adv}$+RPT$^{3}$} & 88.9 & 46.5 & 84.5 & 15.1 & 0.5 & 38.5 & 39.5 & 30.1 & 85.9 & 85.8 & 59.8 & 26.1 & 88.1 & 46.8 & 27.7 & 
56.1 & 51.2 & 58.9 \\\\\n \\textbf{FCN$_{adv}$+RPT$^{3}$}+MS & 89.1 & 47.3 & \\textbf{84.6} & 14.5 & 0.4 & 39.4 & 39.9 & 30.3 & 86.1 & \\textbf{86.3} & 60.8 & 25.7 & \\textbf{88.7} & \\textbf{49.0} & \\textbf{28.4} & \\textbf{57.5} & \\textbf{51.7} & \\textbf{59.5}\\\\ \\hline\n \\end{tabular}\n \\vspace{-0.15in}\n \\label{tab:SYNTHIA}\n\\end{table*}\n\\subsection{Examples of Regularization}\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.478\\textwidth]{case_loss.pdf}}\n \\caption{\\small Examples showing the effectiveness of patch-based consistency and cluster-based consistency in RPT.}\n \\label{fig:case_loss}\n \\vspace{-0.15in}\n\\end{figure}\nFigure~\\ref{fig:case_loss} shows examples to demonstrate the effectiveness of patch-based and cluster-based consistency regularizations. Here, we crop some highlighted regions of input image, ground truth, prediction by FCN$_{adv}$ and prediction by FCN$_{adv}$+RPT$^{3}$, respectively.\nOn one hand, as shown in Figure~\\ref{fig:case_loss}(a), patch-based consistency encourages the pixels to be predicted as the dominative category of the superpixel. On the other hand, cluster-based consistency is able to correct the predictions with the cue of visual similarity across superpixels as illustrated in Figure~\\ref{fig:case_loss}(b).\nThese examples validate our motivation of enforcing label consistency within superpixel and cluster, where most semantic labels are correctly predicted in the target domain. Figure~\\ref{fig:case_loss_logic} further visualizes the merit of modeling spatial context by spatial logic regularization.\nGiven the segmentation results from FCN$_{adv}$, our proposed LSTM encoder-decoder outputs the logical probability of assigning current semantic labels to each region. The darkness indicates that the region is predicted with low logical probability.\nBetter results are achieved by penalizing the illogical predictions, such as \\textit{road} on the top of \\textit{vegetation} (1$st$ row) or \\textit{car} (2$nd$ row), \\textit{sky} below \\textit{building} (3$rd$ row), \\textit{fence} above \\textit{building} (4$th$ row).\n\n\\section{Conclusion}\n\\begin{figure}[!tb]\n \\centering {\\includegraphics[width=0.45\\textwidth]{case_loss_logic.pdf}}\n \\caption{\\small The examples of punished patches by spatial logic.}\n \\label{fig:case_loss_logic}\n \\vspace{-0.15in}\n\\end{figure}\nWe have presented Regularizer of Prediction Transfer (RPT) for unsupervised domain adaptation of semantic segmentation. RPT gives light to a novel research direction, by directly exploring the three intrinsic criteria of semantic segmentation to restrict the label prediction on the target domain. These criteria, when imposed as regularizers during training, are found to be effective in alleviating the problem of model overfitting.\nThe patch-based consistency attempts to unify the prediction inside each region by introducing its dominative category to the unconfident pixels. The cluster-based consistency further amends the prediction according to other visually similar regions which belong to the same cluster. In pursuit of suppressing illogical predictions, spatial logic is involved to regularize the spatial relation which is shared across domains.\nExperiments conducted on the transfer from GTA5 to Cityscapes show that the injection of RPT can consistently improve the domain adaptation across different network architectures. More remarkably, the setting of FCN$_{adv}$+RPT$^{3}$ achieves new state-of-the-art performance. 
A similar conclusion is also drawn from the adaptation from SYNTHIA to Cityscapes, which demonstrates the generalization ability of RPT.\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nRecent work on parton distribution functions (PDF's)\nin the nucleon has focussed\non probing the sea and gluon distributions at small $x$. The valence \nquark distributions have been thought to be relatively well understood.\nHowever, precise knowledge of the $u$ and $d$ quark distributions\nat high $x$ is very important at collider energies in searches for signals\nof new physics at high $Q^2$.\nIn addition, the value of $d\/u$ as $x \\rightarrow 1$ is of theoretical\ninterest.\nRecently, a proposed CTEQ toy model~\\cite{toy} included\n the possibility of an additional contribution to the $u$ quark distribution\n(beyond $x>0.75$) as an explanation for both the initial HERA high\n $Q^2$ anomaly~\\cite{highQ2},\nand for the jet excess at high-$P_t$ at CDF~\\cite{CDFjet}. In this\nLetter we conclude that a re-analysis of data from\nNMC and SLAC leads to a great improvement in our knowledge\nof PDF's at high $x$, and rules out such toy models.\n\nInformation about valence quarks originates from the proton and neutron\nstructure function data. The $u$ valence quark distribution at high $x$\nis relatively well\nconstrained by the proton structure function $F_2^p$.\nHowever, the neutron structure function $F_2^n$, which is sensitive \nto the $d$ valence quark at high $x$,\nis actually extracted from deuteron data.\n Therefore, there is an uncertainty in the $d$ valence quark \ndistribution from the corrections for nuclear binding effects in the deuteron. \nIn past extractions of $F_2^n$ from deuteron data,\nonly Fermi motion corrections\nwere considered, and other \nbinding effects were assumed to be negligible.\nRecently, the corrections for nuclear binding effects in the deuteron,\n$F_2^d\/F_2^{n+p}$, have been extracted empirically from\nfits to the nuclear dependence\nof electron scattering data from SLAC experiments\nE139\/140~\\cite{GOMEZ}.\nThe empirical extraction uses a\nmodel proposed by Frankfurt and Strikman~\\cite{Frankfurt}, \nin which all binding effects \nin the deuteron and heavy nuclear targets\nare assumed to scale with the nuclear density.\nThe total correction for nuclear binding effects in the deuteron\n(shown in Fig. \\ref{fig:f2dp}(a)) \nis in a direction which is opposite\nto what is expected from the previous models which included only \nthe Fermi motion effects. \nThe surprisingly large correction extracted in this empirical way\nmay be controversial, but is smaller than the recent theoretical prediction~\\cite{duSLAC} (dashed line in Fig. \\ref{fig:f2dp}(a)).\n\nThe ratio $F_2^d\/F_2^p$ is directly related to $d\/u$. 
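For orientation, recall the leading-order parton-model counting behind this statement (keeping only valence quarks, a reasonable approximation at high $x$, and treating $F_2^d$ as the per-nucleon average before nuclear corrections):
\\begin{equation*}
F_2^p \\simeq x\\left[\\tfrac{4}{9}\\,u_v + \\tfrac{1}{9}\\,d_v\\right],\\qquad
F_2^n \\simeq x\\left[\\tfrac{4}{9}\\,d_v + \\tfrac{1}{9}\\,u_v\\right],\\qquad
F_2^d \\simeq \\tfrac{1}{2}\\left(F_2^p + F_2^n\\right),
\\end{equation*}
so that the combination $2F_2^d\/F_2^p-1 \\simeq F_2^n\/F_2^p$ depends only on the ratio $d\/u$.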
In leading order QCD,\n $2F_2^d\/F_2^p -1 \\simeq (1+4d\/u)\/(4+d\/u)$ at high $x$.\n We perform a next-to-leading order (NLO) analysis on the precise\n NMC $F_2^d\/F_2^p$ data~\\cite{NMCf2dp} to extract $d\/u$\n as a function of $x$.\n We extract the ratio $F_2^{p+n}\/F_2^p$\n by applying the nuclear binding correction\n $F_2^d\/F_2^{n+p}$ to the $F_2^d\/F_2^p$ data.\n\\begin{figure}[t]\n\\centerline{\\psfig{figure=f2dp_mor98_v2.ps,width=3.0in,height=3.0in}}\n\\caption{(a) The total correction for nuclear \neffects (binding and Fermi motion) in the deuteron,\n $F_2^d\/F_2^{n+p}$, as a function of $x$, extracted from fits to\nthe nuclear dependence of SLAC $F_2$ electron scattering\ndata (compared to theoretical model~[6]). ~(b) Comparison of NMC $F_2^{n+p}\/F_2^p$ (corrected for nuclear\neffects) and the prediction in NLO using the MRS(R2) \nPDF\nwith and without our proposed modification to the $d\/u$ ratio.}\n\\label{fig:f2dp}\n\\end{figure}\n As shown in Fig. \\ref{fig:f2dp}(b), the\n standard PDF's~\\cite{MRSR2,CTEQ3M} do\n not describe the extracted $F_2^{p+n}\/F_2^p$.\n Since the $u$ distribution is relatively well constrained,\n we find a correction term to $d\/u$ in the standard PDF's\n (as a function of $x$), by varying only the $d$ distribution to fit the data.\n The correction term is parametrized \n as a simple quadratic form, $\\delta (d\/u) = (0.1\\pm0.01)(x+1)x$\n for the MRS(R2) PDF,\n where the corrected $d\/u$ ratio\n is $(d\/u)' = (d\/u) + \\delta (d\/u)$.\n Based on this correction,\n we obtain a MRS(R2)-modified PDF as shown in Fig \\ref{fig:dou}(a).\n The correction to other PDF's such as CTEQ3M\/4M is similar.\n Note that since the $d$ quark level is small at large $x$,\n all the sum rules are easily satisfied with a very minute change at low $x$.\n The NMC data, when corrected for nuclear binding effects\nin the deuteron, clearly indicate that $d\/u$ in the\n standard PDF's is significantly underestimated \n at high $x$ as shown in Fig. \\ref{fig:dou}(a).\n It also shows that the modified $d\/u$ ratio \n approaches\n $0.2\\pm0.02$ as $x \\rightarrow 1$, in agreement with a QCD\n prediction~\\cite{Farrar}. 
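As a quick arithmetic check of this limiting value against the leading-order relation quoted above,
\\begin{equation*}
\\frac{d}{u} \\rightarrow \\frac{1}{5}
\\quad \\Longrightarrow \\quad
\\frac{F_2^n}{F_2^p} \\simeq \\frac{1+4\\,(1\/5)}{4+1\/5}=\\frac{9\/5}{21\/5}=\\frac{3}{7}\\approx 0.43 ,
\\end{equation*}
i.e., the familiar $3\/7$ limit, to be contrasted with $F_2^n\/F_2^p \\rightarrow 1\/4$ in the case $d\/u \\rightarrow 0$.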
In contrast, if the\ndeuteron data is only corrected for Fermi motion effects (as\nwas mistakenly done in the past), both the $d\/u$ extracted from data and the $d\/u$\nin the standard PDF fits\napproach $0$ as $x \\rightarrow 1$.\n Figure~\\ref{fig:dou}(a) shows that $d\/u$ values \nextracted from CDHSW~\\cite{du_cdhsw} $\\nu p$\/$\\overline{\\nu} p$ data (which are free from nuclear effects) \nalso favor the modified PDF's at high $x$.\n\nInformation on $d\/u$ (which is not affected by the corrections\nfor nuclear effects in the deuteron)\ncan also be extracted from $W$ production data in hadron\ncolliders.\nFigure~\\ref{fig:dou}(b) shows that the predicted $W$ asymmetry calculated\nwith the DYRAD NLO QCD program\nusing our modified PDF is\nin much better agreement with recent CDF data~\\cite{Wasym} at large\nrapidity than the standard PDF's.\nWhen the modified PDF at $Q^2$=$16$ GeV$^2$ is evolved to $Q^2$=$10^4$ GeV$^2$\nusing the NLO QCD evolution, we find that\nthe modified $d$ distribution at $x=0.5$ is increased by about 40 \\% \nin comparison to the standard $d$ distribution.\nThe modified PDF's have a significant impact\non the \ncharged current cross sections~\\cite{zeushighq2}\nin the HERA high $Q^2$ region, shown in \nFig.~\\ref{fig:highq2}(a), because \ncharged current scattering with positrons probes the $d$ quark only.\nFigure~\\ref{fig:highq2}(b) shows that\nthe modified PDF's\nalso lead to an increase of 10\\% in the\nproduction rate\nof very high $P_T$ jets~\\cite{jet} in hadron colliders. \n\n\\begin{figure}[ht]\n\\centerline{\\psfig{figure=dou_mor98_v2.ps,width=3.0in,height=3.0in}}\n\\caption{(a) The $d\/u$ distributions at $Q^2$=$16$ GeV$^2$ \nas a function of $x$ for the standard and modified MRS(R2) PDF\ncompared to the CDHSW data. \n(b) Comparison of the CDF $W$ asymmetry data with NLO standard\nCTEQ3M, MRS(R2), and modified MRS(R2) as a function of the lepton rapidity.\nThe standard CTEQ3M with \na resummation calculation is also shown \nfor comparison.}\n\\label{fig:dou}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\psfig{figure=zeus_highq2_diffx.ps,width=3.0in,height=1.5in}}\n\\centerline{\\psfig{figure=cdf-d0jet_cteq4mbase_v2.ps,width=3.0in,height=1.5in}}\n\n\\caption{ (a) The HERA charged current cross section data and \n(b) the CDF and D0 \ninclusive jet cross section data are compared \nwith both standard and modified PDF's.}\n\\label{fig:highq2}\n\\end{figure}\n\n\nSince all the standard PDF's, including our modified versions, are\nfit to data with $x$ less than 0.75, we now\ninvestigate the validity of the modified MRS(R2) at very high\n$x$ by comparing to $F_2^p$ data at SLAC.\nAlthough the SLAC data at very high $x$ are at reasonable values\nof $Q^2$ $(7<Q^2<31$ GeV$^2$), the data for $x>0.75$ is in the DIS region,\nand the data for $x>0.9$ is in the resonance region.\nIt is also worthwhile to investigate the resonance region because\nfrom duality arguments~\\cite{Bloom} it is expected \nthat the average behavior of the resonances and the elastic peak\nshould follow the DIS scaling limit curve.\nFigure~\\ref{fig:highx_highq2} shows the ratio of the SLAC data to the predictions\nof the modified MRS(R2) at relatively large $Q^2$ ($21<Q^2<31$ GeV$^2$). The toy model (with an additional $u$ quark contribution beyond $x>0.75$) overestimates the SLAC data \nby a factor of three at $x = 0.9$ (DIS region).\nFrom these comparisons, we find that the SLAC $F_2$ data do not support\nthe CTEQ toy model,\nwhich proposed an additional $u$ quark contribution\nat high $x$ as an explanation of the initial HERA high $Q^2$ anomaly\nand the CDF high-$P_t$ jet excess. 
As indicated in\nFig.~\\ref{fig:highx_highq2}(c), the uncertainties in the PDF's at\nhigh $x$ are small. The difference between\nCTEQ4M and MRS(R2) (with our $d\/u$ modifications) is\nan estimate of the errors.\n\n\nIn conclusion, we find that nuclear binding effects in the deuteron\nplay a significant role in our understanding of $d\/u$\nat high $x$.\nWith the inclusion of target mass\nand higher twist corrections, the modified PDF's\nalso describe all DIS data up to $x = 0.98$ and down to\n$Q^2 = 1$ GeV$^2$.\nThe modified PDF's with our $d\/u$ correction\nare in good agreement with the prediction of QCD at $x=1$, and with\nthe CDHSW $\\nu p$\nand $\\overline{\\nu} p$\ndata, the HERA CC cross section data,\nthe collider high-$P_t$ jet data, and with the\nCDF $W$ asymmetry data.\nA next-to-next leading order (NNLO) analysis~\\cite{note} of $R$ indicates that\nthe higher twist effects extracted in the NLO fit \nat low $Q^2$ may originate from the missing NNLO terms.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIt is well known that the usual theory of $\\beta^{-}$ decay presumes that the decay of a neutron to proton is accompanied by the creation of an electron and an anti-neutrino in continuum states. However, in a stellar plasma where atoms get partially or fully ionized, this continuum decay is not the sole option. Nuclear $\\beta ^{-}$ decay to the bound states of the ionized atom is another probable channel. Also bare atoms have been produced terrestrially and $\\beta ^{-}$ decays have been studied in storage ring experiments. In 1947 Daudel \\!{\\em et al.\\ } \\cite{daudel} first proposed the concept of bound state $\\beta$ decay. This suggests that a nucleus has a possibility to undergo $\\beta ^{-}$ decay by creating an electron in a previously unoccupied atomic orbital instead of the continuum decay. It is important to understand that the bound state decay process does not occur subsequently from the $\\beta^{-}$ decay of an electron previously created in the continuum state, it is rather the direct creation of an electron in an atomic bound state accompanied by a mono-energetic anti-neutrino created in the free state carrying away the total decay energy. This process has been studied both theoretically as well as experimentally over the past seven decades. \n\n\nIn case of a neutral atom, available phase space for the creation of an electron in a vacant atomic orbital is very small and therefore the bound state decay is almost negligible compared to the contribution of the continuum decay. Contrarily, ionization of atoms may lead to drastic enhancement of bound state $\\beta$ decay probability due to the availability of more unoccupied atomic levels. In some previous theoretical works from 60's to 80's, various groups have studied the continuum and bound state $\\beta$ decay for neutron, tritium \\cite{bahcall} and fully ionized (bare) heavy atoms \\cite{takahashi, takahashi1, takahashi2}. However, in most cases, previous theoretical works were based on very old data and\/or informatically incomplete. Simultaneously, the development of experimental techniques has served fruitfully to detect bound and continuum state $\\beta$ decay channels of fully ionized atoms. In 1992, Jung \\!{\\em et al.\\ } first observed the bound state $\\beta^{-}$ decay for the bare $^{163}$Dy atom \\cite{jung} by storing the fully ionized parent atom in a heavy-ion storage ring. 
In the same decade, Bosch \\!{\\em et al.\\ } studied the bound state $\\beta ^{-} $ decay for fully ionized $^{187}$Re \\cite{bosch} which was helpful for the calibration of $^{187}$Re - $^{187}$Os galactic chronometer \\cite{yokoi}. Further experiments with bare $^{207}$Tl \\cite{ohtsubo} showed the simultaneous measurement of bound and continuum state $\\beta^{-}$ decay. However, the authors have mentioned this decay as a single $\\beta^{-}$ transition process to a particular daughter level with 100 \\% branching \\cite{ohtsubo} whereas, the present data \\cite{nndc} suggest three available levels among which the total $\\beta^{-}$ decay is distributed. \n \nIn earlier studies, Takahashi and Yokoi \\cite{takahashi, takahashi2} had investigated $\\beta$ transition (bound state $\\beta^{-}$ decay and orbital electron capture) processes of some selected heavy nuclei suitable for s-process studies. However, in their work, they had not given separately the bound state decay rate of bare atoms. Further, in another work, Takahashi \\!{\\em et al.\\ } \\cite{takahashi1} had studied the $\\beta^{-}$ decay of some bare atoms for which bound state $\\beta^{-}$ decays produce significant enhancement in decay rates and proposed measurement in storage ring experiment.\nHowever, they did not take into account the contribution of transitions to all possible energy levels of the daughter nucleus in total $\\beta^{-}$ decay rate enhancement. As an example, according to the present $\\beta^{-}$ decay data \\cite{nndc}, there are six possible $\\beta^{-}$ transitions from the [117.59 keV, $6^{+}$] state of $^{110}$Ag to various states of $^{110}$Cd, but they had mentioned the contribution of only one transition.\n\n\n\nWith the availability of modern day experimental $\\beta$ decay half-lives in terrestrial condition for the neutral atom, experimental Q-values for $\\beta ^{-} $ decays and atomic physics inputs, it becomes inevitable to re-visit some of the earlier works. Moreover, in a previous work, Takahashi and Yokoi \\cite{takahashi} addressed a few nuclei in their `case studies', undergoing $\\beta ^{-}$ transitions, as some of the essential turnabouts in $s$-process nucleosynthesis, where contributions from atoms with different states of ionization were considered. However, the explicit study of bound and continuum state $\\beta ^{-}$ transitions of bare atoms for most of these nuclei remained unevaluated till date both experimentally as well as theoretically.\n \n\nIn the present work, our aim is to study the $\\beta ^{-} $ decay of some elements, in the mass range (A $\\approx$ 60 - 240) which might be of interest for future experimental evaluations using storage ring. In particular, calculations of $\\beta^{-}$ decay rates to the continuum as well as bound state of these fully ionized atoms, where information for neutral atom experimental half-life and $\\beta^{-}$ decay branchings are terrestrially available, have been performed. Most importantly the study of effective half-lives for bare atoms will be helpful to set a limit for the maximum enhancement in $\\beta^{-}$ decay rate due to the effect of bound state decay channels. \nMoreover, we have also discussed the effect of different nuclear structure and decay inputs (Q value, radius etc.) over the bound to continuum decay rate ratio. In addition, some interesting phenomena of changes in $\\beta^{-}$ decay branching for a number of bare atoms along with some notable change in branching (branching-flip) for a few of them, have been obtained. 
The branching-flip is obtained for the first time.\n\n\n\n \nThe paper is organized as follows: section \\ref{2} contains the methodology of our entire calculation for bound and continuum state $\\beta^{-}$ decay rates for bare atom, as well as comparative half-life ($Log ft$) for neutral atom. In section \\ref{3A} we have discussed our results for the neutral atoms, whereas in section \\ref{3B} results for the bare atoms have been discussed. The phenomenon of change in $\\beta^{-}$ decay branching for bare atom compared to that in neutral atom is also discussed explicitly in the section \\ref{3B}. Conclusion of our work has been described in section \\ref{4}. Finally, we present a table for the calculated $\\beta^-$ decay rates in Appendix A followed by a discussion on the choice of spin-parity for unconfirmed states of neutral atom in Appendix B.\n \n\n\n\n\n\n\n\\section{{Methodology} \\label{2}}\n\n In this work, we have dealt with the allowed and first-forbidden $\\beta ^{-} $ transitions for neutral and fully ionized atoms. The contributions of higher-order forbidden transitions are negligible in the determination of the final $\\beta ^{-} $ decay rate and thus we have not tabulated the contributions for the same. \n\nThe transition rates (in $sec^{-1}$) for allowed (a), non-unique first-forbidden(nu) and unique first-forbidden(u) transitions are given by \\cite{takahashi, takahashi1, takahashi2}\n\n\\begin{eqnarray}\n\\lambda = [(ln 2)\/f_0t](f^{*}_{m}) ~~~~~~ \\text {for m= a, nu} \\\\ \\nonumber\n=[(ln 2)\/f_1t](f^{*}_{m})~~~~~~ \\text{for m= u ~~ }.\n\\end{eqnarray}\n\nHere $t$ is the partial half-life of the specific parent-daughter energy level combination for which transition rate has to be calculated and $f^{*}_{m}$ is the lepton phase volume part described in detail, below in this section. For allowed and non-unique first-forbidden $\\beta ^{-} $ decay, the expression for the decay rate function $f_0(Z,W_0)$ can be simplified to \\cite{gove, konopinski} \n\n\\begin{eqnarray}\nf_0(Z,W_0) =\\int^{W_0}_1\\sqrt{(W^2-1)} W (W_0-W)^2\\\\ \\nonumber\n\\times F_0(Z,W)L_0dW .\n\\end{eqnarray}\n\nThe certain combinations of electron radial wave functions evaluated at nuclear radius R ( in the unit of $\\hslash\/m_ec$) were first introduced by Konopinski and Uhlenbeck \\cite{konopinski} as $L_k$'s. The value for $k=0$ can be approximated as\n \n\\begin{equation}\nL_0 = \\dfrac{1+\\sqrt{1-\\alpha^2 Z^2}}{2}.\n\\end{equation}\nHere, $\\alpha$ is the fine structure constant. In the work of Behrens and J\\\"anecke \\cite{behrens}, the authors had taken a different form of $L_0$, which includes a slight dependence on the momentum. However, we find that the $L_0$ approximation, adopted in our calculation, is in good agreement with that from the Ref. \\cite{behrens} within the considered energy window. \n\nIn Eq.(2), $W$ is the total energy of the $\\beta^{-}$ particle for a $Z-1\\rightarrow Z $ transition and $W_0 = Q_n\/m_ec^2+1 $ is the maximum energy available for the $\\beta^{-}$ particle. Here the mass difference between initial (parent) and final (daughter) states of neutral atoms are expressed as the decay $Q$ value ($Q_n$ in keV). 
The term $F_0(Z,W)$ is the Fermi function for allowed and non-unique transition, given by \\cite{konopinski}\n\n\\begin{eqnarray}\nF_0(Z,W)=\\dfrac{4}{ \\left|\\Gamma \\left( {1+2\\sqrt{1-\\alpha^2 Z^2 }}\\right)\\right|^2}\\\\ \\nonumber\n\\left(2R\\sqrt{W^2-1}\\right)^{2\\left(\\sqrt{1-\\alpha^2 Z^2} -1\\right)}exp\\left[\\dfrac{\\pi \\alpha Z W}{\\sqrt{W^2-1}}\\right] \\\\ \\nonumber\n\\times \\left|{\\Gamma{\\left(\\sqrt{1-\\alpha^2 Z^2} + i \\dfrac{\\alpha Z W}{\\sqrt{W^{2}-1}} \\right)}}\\right |^2.\n\\end{eqnarray}\n\nSimilarly, for the unique first-forbidden transition the decay rate function $f_1(Z,W_0)$ has the form reduced from Refs. \\cite{gove, konopinski} is given by,\n\n\\begin{eqnarray}\nf_1(Z,W_0) = \\int^{W_0}_1\\sqrt{(W^2-1)} W (W_0-W)^2 F_0(Z,W) \\\\ \\nonumber\n\\times\\left[(W_0-W)^2 L_0 + 9 L_1 \\right]dW ,\n\\end{eqnarray}\n\nwith $L_1$ given by,\n\n\\begin{equation}\nL_1 = \\dfrac{F_1(Z,W)}{F_0(Z,W)} \\left(\\dfrac{W^2-1}{9}\\right) \\dfrac{2+\\sqrt{4-\\alpha^2 Z^2}}{4}.\n\\end{equation}\n \nThe term $F_1(Z,W)$ for unique $\\beta ^{-} $ transition is given by \\cite{konopinski},\n\n\\begin{eqnarray}\nF_1(Z,W)=\\dfrac{(4!)^2}{ \\left|\\Gamma \\left( {1+2\\sqrt{4-\\alpha^2 Z^2 }} \\right) \\right|^2} \\\\ \\nonumber\n\\left(2R\\sqrt{W^2-1}\\right)^{2\\left(\\sqrt{4-\\alpha^2 Z^2} -2\\right)}exp\\left[\\dfrac{\\pi \\alpha Z W}{\\sqrt{W^2-1}}\\right] \\\\ \\nonumber\n\\times \\left|{\\Gamma{\\left(\\sqrt{4-\\alpha^2 Z^2} + i \\dfrac{\\alpha Z W}{\\sqrt{W^{2}-1}} \\right)}}\\right |^2.\n\\end{eqnarray}\n\n\n \nEqs. (2) and (5) are general forms of $f_0(Z,W_0)$ and $f_1(Z,W_0)$. For more precise calculation of f-factor, one should in principle, include various corrections in the integrand of Eqs. (2) and (5). Corrections due to atomic physics effects, radiative correction and finite nuclear size effects might be important for such studies. For fully ionized atoms, corrections due to atomic physics effects, such as, imperfect overlap of initial and final atomic wave functions, exchange effects that comes from the anti-symmetrisation of the emitted electron with the atomic electrons \\cite{bahcall2}, screening of the nuclear charge due to the coulomb field of the atomic electronic cloud are not needed. For neutral atom, the decay to the atomic bound state should be negligible \\cite{bahcall2}. Also, the screening and exchange corrections together cancel a large part of the overlap correction \\cite{budick}. Further the non-orthogonality effect becomes rapidly smaller as Z increases \\cite{takahashi1}. Some of the corrections have positive sign and some of them have negative sign. So unless all the corrections are taken together, the treatment for corrections to f- factor will not be consistent. Therefore we have neglected these contributions both for bare and neutral atoms. We have included the correction due to the extended charge distribution of the nucleus on the $\\beta^{-}$ spectrum. This correction is $\\Lambda_k(Z,W) \\rightarrow \\Lambda_k(1+\\Delta\\Lambda_k(Z,W)$), where the term $\\Lambda_k$ can be written in terms of $L_k$ and $F_0(Z,W)$ as \\cite{gove, konopinski}\n\n\\begin{equation}\n\\Lambda_k(Z,W)= F_0(Z,W)L_{k-1}\\left[ \\dfrac {(2k-1)!!}{(\\sqrt{W^2-1})^{k-1}}\\right]^2 ,\n\\end{equation}\n \nin such a way that it reduces to $\\left[ F_0(Z,W)L_0\\right]$ and $\\left[ 9F_0(Z,W)L_1\/(W^2-1)\\right]$ for $k=1$ and $2$, respectively. 
The correction term is given by \\cite{gove},\n\n\\begin{eqnarray}\n\\Delta\\Lambda_k (Z,W) =(Z-50) \\times \\\\ \\nonumber\n\\left[ -25\\times 10^{-4} - 4\\times10^{-6} W \\times (Z-50)\\right] \\\\ \\nonumber\n\\text { for } k = 1, Z > 50 , \\\\ \\nonumber\n= 0 ~~~~~~~~~~~~~~~~~~~~~ \\text { for } k = 1 , Z \\le 50, \\\\ \\nonumber\n= 0 ~~~~~~~~~~~~~~~~~~~~~~ \\text {for } k > 1~~~~~~~~~~~.\n\\end{eqnarray}\n\nThe screened energy of the emitted electron $(W')$ enters through $\\Delta\\Lambda_k(Z,W')$, where $W'=W-V_0(Z)$. We calculated $V_0(Z)$, following Gove and Martin \\cite{gove}, using expression from W. R. Garrett and C. P. Bhalla \\cite{bhalla}. This correction to the integrand in Eqs. (2) and (5) has effect in the fourth decimal place of the f-factor and this is consistent with Ref. \\cite{hayen} discussed for the allowed $\\beta^{-}$ decay. So we have dropped $W'$ and used $W$ in the integrand.\n\nIt is to be noted that in the present work we have used experimental quantities, such as Q - value, half-life, branching, which have uncertainties even up to the first decimal place \\cite{nndc, nist}. So, in our treatment we have neglected the screening effect too for neutral atom. Therefore, by using Eqs. (8) and (9) in the integrand of Eq. (2) and Eq. (5) one can calculate the values for $f_0(Z,W_0)$ and $f_1(Z,W_0)$ incorporating only finite size correction.\n\n In the work of A. Hayes \\!{\\em et al.\\ } \\cite{hayes}, the authors have taken a different form of the finite-size correction involving the charge density, which has a complicated radial dependency. However, we find that the results from the present calculation are in agreement with the available experimental data \\cite{nndc}. \n\n Further, from the above expressions (Eqs.(4) and (7)), it is evident that the factors $F_0(Z,W)$ and $F_1(Z,W)$ depend on the radius, thereby making the terms $f_0$ and $f_1$ (Eqs.(2) and (5)), radius dependent. Thus, in our present study, we have used various radius values from different phenomenological models and experiments to study their effects on the final $ft$ values. In order to calculate $ft$ values for a nucleus, we have extracted the half-life $t$ for individual transition to daughter levels using the latest $\\beta$ decay\nbranching information available in the literature \\cite{nndc}. \n\n\nThe lepton phase volume $f^{*}_{m}$ \\cite{takahashi2} for the continuum state $\\beta^{-}$ decay can thus be expressed as,\n\n\n\n\\begin{eqnarray}\nf^{*}_{m=a,nu}(Continuum) = \\int^{W_c}_1\\sqrt{(W^2-1)} \\\\ \\nonumber\n W (W_c-W)^2 F_0(Z,W) L_0 dW,\n\\end{eqnarray}\n \nand\n \n\\begin{eqnarray}\nf^{*}_{m=u}(Continuum) = \\int^{W_c}_1\\sqrt{(W^2-1)} \\\\ \\nonumber\nW (W_c-W)^2 F_0(Z,W) \\times \\\\ \\nonumber\n \\left[(W_c-W)^2 L_0 + 9L_1\\right] dW,\n\\end{eqnarray}\n\nHere $W_c = Q_c\/m_ec^2 +1 $ is the maximum energy available to the emitted $\\beta^{-}$ particle, and $Q_c$ is given by,\n\n\\begin{eqnarray}\nQ_c = Q_n - \\left[ B_n(Z+1) - B_n(Z)\\right].\n\\end{eqnarray}\n \n The term $\\left[ B_n(Z+1) - B_n(Z)\\right]$ denotes the difference of binding energies for bound electrons of the daughter and the parent atom. The experimental values for all the atomic data (binding energies\/ionization potential) are availed from Ref. \\cite{nist}. 
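For concreteness, the continuum-state ingredients above can be assembled into a short numerical routine. The sketch below is illustrative only: the function and variable names are ours, $W$ and $R$ are in units of $m_ec^2$ and $\\hslash\/m_ec$ respectively, and the binding energies entering $Q_c$ are assumed to be taken from Ref.~\\cite{nist} as stated in the text.
\\begin{verbatim}
import numpy as np
from scipy.special import loggamma
from scipy.integrate import quad

ALPHA = 1.0 / 137.035999   # fine-structure constant
MEC2  = 510.998950         # m_e c^2 in keV

def fermi_F0(Z, W, R):
    # Eq. (4): Fermi function for allowed / non-unique transitions;
    # W in units of m_e c^2, R in units of hbar/(m_e c).
    g0 = np.sqrt(1.0 - (ALPHA * Z) ** 2)
    p  = np.sqrt(W * W - 1.0)
    y  = ALPHA * Z * W / p
    num = 2.0 * np.real(loggamma(g0 + 1j * y))      # |Gamma(g0 + i y)|^2
    den = 2.0 * np.real(loggamma(1.0 + 2.0 * g0))   # |Gamma(1 + 2 g0)|^2
    return 4.0 * (2.0 * p * R) ** (2.0 * (g0 - 1.0)) \
           * np.exp(np.pi * y + num - den)

def f0(Z, W0, R):
    # Eq. (2) with the finite-size correction of Eqs. (8)-(9);
    # screening is neglected, as discussed in the text.
    L0 = 0.5 * (1.0 + np.sqrt(1.0 - (ALPHA * Z) ** 2))   # Eq. (3)
    def dLambda1(W):                                      # Eq. (9), k = 1
        if Z <= 50:
            return 0.0
        return (Z - 50.0) * (-25e-4 - 4e-6 * W * (Z - 50.0))
    def integrand(W):
        return np.sqrt(W * W - 1.0) * W * (W0 - W) ** 2 \
               * fermi_F0(Z, W, R) * L0 * (1.0 + dLambda1(W))
    value, _ = quad(integrand, 1.0, W0)
    return value

def Q_continuum(Q_n, B_parent, B_daughter):
    # Eq. (12): Q_c = Q_n - [B_n(Z+1) - B_n(Z)], all energies in keV.
    return Q_n - (B_daughter - B_parent)
\\end{verbatim}
The same integrand, with the upper limit $W_c=Q_c\/m_ec^2+1$, gives the continuum lepton phase volume of Eq.~(10).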
\n\nFurther, for the bound state $\\beta^{-}$ decay of the bare atom $f^{*}_{m}$ takes the form \\cite{takahashi2}\n\n\\begin{eqnarray}\nf^{*}_{m=a,nu}(Bound) = \\sum_x \\sigma_x \\left(\\pi\/2\\right) \\left[ f_x\\text{ or }g_x \\right]^2 b^2 \\\\ \\nonumber\n\\left(\\text {for } x=ns_{1\/2},np_{1\/2}\\right),\n\\end{eqnarray}\n \nand\n \n\\begin{eqnarray}\nf^{*}_{m=u}(Bound) = \\sum_x \\sigma_x \\left(\\pi\/2\\right) \\left[ f_x\\text{ or }g_x \\right]^2 b^4 \\\\ \\nonumber\n\\left(\\text {for } x=ns_{1\/2},np_{1\/2}\\right), \\\\ \\nonumber\n~~~~~~~~~~~~~~~~~~~~\\\\ \\nonumber\n= \\sum_x \\sigma_x \\left(\\pi\/2\\right) \\left[ f_x \\text{ or } g_x \\right]^2 b^2 \\left(9\/R^2\\right) \\\\ \\nonumber\n\\left(\\text{for } x=np_{3\/2},nd_{3\/2}\\right).\n\\end{eqnarray}\n\nHere $\\left[ f_x\\text{ or }g_x \\right]$ is the larger component of electron radial wave function evaluated at the nuclear radius $R$ of the daughter for the orbit $x$. The $\\left[ f_x\\text{ or }g_x \\right]$ is obtained by solving Dirac radial wave equations using the Fortran subroutine RADIAL by Salvat \\!{\\em et al.\\ } \\cite{cpc}. In our case, $\\sigma_x$ is the vacancy of the orbit, chosen as unity and $b$ is equal to $Q_b\/m_ec^2$ where,\n\n\\begin{eqnarray}\nQ_b = Q_n - \\left[ B_n(Z+1) - B_n(Z)\\right]-B_{shell}(Z+1).\n\\end{eqnarray}\n \nFor example, in case of a bare atom, if the emitted $\\beta^{-}$ particle gets absorbed in the atomic K shell, then the last term of Eq.(15) will be the ionization potential for the K electron denoted by $B_K(Z+1)$. \n\n\n\n\\section{Results and Discussion}\n\n\nIn this work, we have calculated \n$\\beta^-$ decay transition rates to bound and continuum states, for a number of fully ionized atoms in the mass range A $\\approx$ 60-240. One of the motivations is that there are some evidences where earlier works were not equipped enough to address the entire $\\beta^-$ decay scenario.\n This might be due to the unavailability of information about all the energy levels participating in transition processes.\n\nAs an example, Takahashi \\!{\\em et al.\\ } \\cite{takahashi1} have considered transitions for allowed(a), first-forbidden non-unique(nu) and first-forbidden unique(u) decay of parent nuclei to a few energy levels of daughter nuclei. For instance, in the case of $^{228}$Ra nucleus, the authors have tabulated the decay from the ground state of the parent $[E$(keV), $J^\\pi] = [0.0, 0^+]$ nucleus to $[6.3, 1^-]$ and $[33.1, 1^+]$ states of the daughter nucleus $^{228}$Ac. However, these two transitions cover only the 40\\% of the total $\\beta^{-}$ decay branching of neutral $^{228}$Ra atom from the ground state. With the latest experimental data \\cite{nndc}, we find that there are two more available states of $^{228}$Ac where the rest amount of $\\beta^{-}$ decay from the ground state of $^{228}$Ra occur. In this section, it will be shown that the contributions of all these four states are extremely important in the determination of effective enhancement of $\\beta^{-}$ transition rates of bare $^{228}$Ra as well as to understand the phenomenon of branching-flip, discussed in section \\ref{3B}.\n\n For simplicity, this section is subdivided into two parts. The first subsection involves the calculation of $Log~ ft$ for the neutral atom, a necessary ingredient for the calculation of $\\beta^-$ decay rate of the bare atom. In the next subsection, the $\\beta^{-}$ decay transition rates of bare atoms have been discussed with a detailed explanation of TABLE {\\red A.I}. 
The dependence of these decay rates on different parameters is also examined in the same subsection. Finally, we have shown and discussed the change in individual level branchings in fully ionized atoms.\n\n\n\\subsection{{ $Log~ ft$ calculation for neutral atoms}\\label{3A}}\n\nIt is evident from Eqs.(1-9) that the calculation for $ft=f_0t (\/f_1t)$ is one of the essential components in the determination of the transition rate $\\lambda$, which in turn depends on radius R of the daughter nucleus. However, $Log~ ft$ data obtained from Ref. \\cite{nndc} can not provide the information of the $R$ dependence of $Log~ ft$.\nAs the present theoretical modelling for bare atom depends on radius (see section \\ref{2}), we find it more accurate to calculate $Log~ ft$ for neutral atom for different choices of radii.\n\nIn Appendix \\ref{lognu}, we present a table for bound and continuum state $\\beta^-$ decay rates for bare atoms along with the values of $Log~ ft$ for corresponding neutral atoms at different radii and compare our calculations with existing theoretical as well as experimental results (see the supplemental material \\cite{supl} for details).\nAs explained in section \\ref{2}, we have tabulated $Log~ ft$ values only for allowed (a), first-forbidden non-unique (nu) and first-forbidden unique (u) transitions.\n\n \nHere, in TABLE {\\red A.I}, $R_1$ is the phenomenological radius evaluated as $R_1=1.2 A^{1\/3}$ fm, whereas $R_2$ is the nuclear charge radius in fm \\cite{angeli} and $R_3$ is the half-density radius given by \\cite{gove} $R_3=(1.123A^{1\/3} - 0.941A^{-1\/3})$ fm. We have calculated $Log~ ft$ values for $R_1$, $R_2$ and $R_3$ and compared them with the existing data \\cite{nndc}. Besides, we have tabulated the available values from previous calculations of Takahashi \\!{\\em et al.\\ } \\cite{takahashi1} in the same table. \n\nOne can see that the change in radius may cause a change in the $Log ~ft$ value mostly in the second decimal place. In the next subsection, we will show the effect of these variations on the transition rates for bare atoms.\n\nFurther, from TABLE {\\red A.I} and the supplemental material \\cite{supl}, it can be noted that our calculation matches with the experimental $Log~ ft$ data \\cite{nndc} in most cases up to the first decimal place. The agreement of our result with experimental data \\cite{nndc} confirms the applicability of the methodology adopted in the present study.\n\n\n\n\n\\subsection {{ Bound and Continuum decay rates of bare atoms}\\label{3B}}\n\n\n In the ninth and the eleventh column of TABLE {\\red A.I} of Appendix \\ref{lognu}, bound and continuum $\\beta^-$ decay rates of bare atoms are presented, respectively.\n\n It is observed that the dependence on radius affect the bound ($\\lambda_B$) and the continuum state ($\\lambda_C$) decay rates in first or second decimal places, and the ratio $\\lambda_B\/\\lambda_C$ remains almost unaffected up to the first decimal place for most of the examined cases. \n\nFurther, from TABLE {\\red A.I} (also see the supplemental material \\cite{supl}), we find that the values for $\\lambda_B$ and $\\lambda_C$ from our calculation agree with those of the existing theoretical results \\cite{takahashi1} quite well. 
The possible reasons for the slight mismatch between our calculation and that from Takahashi \\!{\\em et al.\\ } \\cite{takahashi1} \nare mainly due to (i) the effect of the nuclear radius, (ii) the adoption of present day Q values (for all $Q_n , Q_c$ and $Q_b$), (iii) availability of present day $\\beta^-$ decay branching of neutral atoms and (iv) the choice of significant digits. Despite that, the overall success of our calculation in reproducing available $\\lambda_B$ and $\\lambda_C$ for bare atoms once again justify the extension of the present calculation to previously unevaluated cases. \n\n\\begin{figure*}\n\n{\\includegraphics[width=15cm,height=11cm]{lblc_4}}\n\\caption{(Color online) Ratio of $\\lambda_B\/\\lambda_C$ Vs the neutral atom Q-value $Q_n$ (in keV) for various $\\beta^{-}$ transitions (for the radius $R_{1}$). The dotted curves are obtained from fitting to Eq.(16). See text for details. \n\\label{lblc}}\n\\end{figure*}\n \n\n\nIt can again be shown from TABLE {\\red A.I} that in a transition from the parent nucleus $^AX_{Z-1}$ to different energy levels of the daughter nucleus $^AX_{Z}$, the ratio $\\lambda_B\/\\lambda_C$ for all transitions are not same, rather it decreases with increasing $Q_n$ value. It can be understood from the expressions in Eqs.(10-15) where the factors $f^{*}_{Continuum}$ and $f^{*}_{Bound}$ depend on $Q_c$ and $Q_b$, respectively, which are again derived from the neutral atom Q value $Q_n$. Due to different $Q_n$ values for different transitions, $\\lambda_B\/\\lambda_C$ can be identified as a function of $Q_n$. For the sake of understanding, in FIG. \\ref{lblc}, we have plotted the ratio $\\lambda_B\/\\lambda_C$ versus $ Q_n$ for the nuclei $^{115}$Cd, $^{123}$Sn, $^{136}$Cs and $^{152}$Eu. In each case, dependence on $Q_n$ is observed which can be fitted to the form\n \n\\begin{eqnarray}\n\\dfrac{\\lambda_{B}}{\\lambda_{C}}=a\\times({Q_n})^b\n\\end{eqnarray}\n\nwhere a and b are the nucleus dependent constants given in TABLE \\ref{ab}.\n\n\\begin{table}[H]\n\n\\caption{Parameters a and b for Eq.(16) for the radius $R_1$.}\n\n\\vspace*{0.3 cm}\n\t\\centering\n\\resizebox{!}{!}{\n \\begin{tabular}{|c|c|c|}\n\t\t\\hline\n Parent $\\rightarrow$ Daughter & Parameter a & Parameter b \\\\ \\hline\n \\rule{0pt}{0.5 cm}\n $^{115}Cd$ $\\rightarrow$ $^{115}In$ & 3093.12 $\\pm$ 317.17 & -1.48 $\\pm$ 0.02 \\\\ \\hline\n\n \\rule{0pt}{0.5 cm}\n $^{123}Sn$ $\\rightarrow$ $^{123}Sb$ & 12657.22 $\\pm$ 1515.52 & -1.73 $\\pm$ 0.03 \\\\ \\hline\n\n \\rule{0pt}{0.5 cm}\n $^{136}Cs$ $\\rightarrow$ $^{136}Ba$ & 5178.76 $\\pm$ 654.04 & -1.52 $\\pm$ 0.02 \\\\ \\hline\n\n \\rule{0pt}{0.5 cm}\n $^{152}Eu$ $\\rightarrow$ $^{152}Gd$ & 18851.81 $\\pm$ 1065.69 & -1.68 $\\pm$ 0.01 \\\\ \\hline\n\n\\end{tabular}}\n\n\\label{ab}\n\\end{table}\n\n\n The TABLE \\ref{ab} confirms that Eq.(16) is a characteristic feature of $\\lambda_{B}\/ \\lambda_{C}$ ratio of the bare atom with particular Z and A values. If there is a mistake in the calculation of $f^{*}$ for $\\lambda_{B}$ or $\/$ and $\\lambda_{C}$, then the ratio point will not fit to such a power law.\n\n \n\n\n In the fourteenth column of TABLE {\\red A.I}, the ratio of $\\lambda_{Bare}(=\\lambda_{B} + \\lambda_{C})\/\\lambda_{Neutral}$ (called here rate enhancement factor) has been tabulated. It is evident from these values that there must be an enhancement in the decay rate for each transitions (i.e. $\\lambda_{Bare}\/\\lambda_{Neutral} > 1$) because of the additional bound state decay channel. \n\nIn FIG. 
\\ref{enh}, the ratio of $\\lambda_{Bare}\/\\lambda_{Neutral}$ for $^{110}$Ag, $^{155}$Eu and $^{227}$Ac have been shown. From the figure, it can be noted that rate enhancements (a) are different for different transitions of a particular nucleus, (b) are dependent on $Q_n$ values : lower the $Q_n$, larger the enhancement. Moreover, this rate enhancement factor (c) also depends on Z and A of the atom; larger the value of Z and\/or A, larger the enhancement. \n\nFurther, in TABLE {\\red A.I}, we have tabulated effective $\\beta^-$ decay half-lives for bare atoms and compared to those of neutral atoms. It should be noted that the neutral atom half-life given in the fifteenth column of the table is the total half-life corresponding to a, nu and u types of $\\beta^{-}$ transitions only.\n\n\\begin{figure}\n\n{\\includegraphics[width=85mm,height=65mm]{Figure_2b}}\n\\caption{(Color online) Ratio of $\\lambda_{Bare}\/\\lambda_{Neutral}$ Vs the neutral atom Q-value $Q_n$ (in keV) for various $\\beta^{-}$ transitions (for the radius $R_{1}$). See text for details. \n\\label{enh}}\n\\end{figure}\n \n\\vspace{0.2cm}\n\n\n\\underline {\\textbf{Transition details: case studies}} \n\\vspace{0.2cm}\n\n\\begin{figure*}[t]\n\\centering\n{\\includegraphics[width=19.5cm,height=7.5cm]{Figure_3}}\n\\caption{(Color online) Comparison of the $\\beta^{-}$ decay branchings for neutral and bare $^{136}$Cs isotope (for the radius $R_{1}$). \n\\label{noflip}}\n\\end{figure*}\n\n\n\nThe dependence of the rate enhancement factor on $Q_n$ causes a change in $\\beta^-$ branching for the bare atom. In bare atom, branchings similar to the neutral atom can only be achieved if the factor $\\lambda_{Bare}\/\\lambda_{Neutral}$ remains constant with $Q_n$, which is obviously not the case (FIG. \\ref{enh}). In other words, this change can be understood to be an outcome of the non-uniformity of the $\\lambda_B \/ \\lambda_C $ ratio with $Q_n$. It is observed that the continuum decay rate for bare atom decreases with respect to that for the neutral atom (i.e. $\\lambda_{C} < \\lambda_{Neutral}$) due to the reduction of continuum Q value ($Q_c < Q_n$, Eq. (12)). Further from FIG. \\ref{lblc}, it is clear that with the decrease in the $Q_n$ value, $\\lambda_B$ dominates more over $\\lambda_C$ and hence the effective decay rate of the bare atom $\\lambda_{Bare}=\\lambda_B + \\lambda_C$ does not follow the same branching as that of the neutral atom.\n\n\n\n Note: For the $\\beta^{-}$ transition having very low $Q_n$ value, bound state decay may be the only path of $\\beta^{-}$ decay. As an example, in the transition of $^{227}$Ac $[0.0,3\/2^-]$ to $^{227}$Th $[37.9,3\/2^-]$ with $Q_n = 6.9$ keV, $Q_c$ for continuum decay of the bare atom becomes $-13.1$ keV. As evident from Eqs.(10-12), due to the negative value of $Q_c$, the corresponding decay channel gets closed. On the other hand, as $(Q_b-Q_n) > 0$ for this transition, the total decay is governed by the bound state channel only.\n\n\n \nAs an example, in FIG. \\ref{noflip}, we have compared branchings for neutral (left panel) and bare (right panel) $^{136}$Cs atom. It can be seen from FIG. \\ref{noflip} that the branchings for all $\\beta^-$ transitions of the bare atom have been changed from that of the neutral atom. However, the ordering of each branch remains unaltered in both cases, i.e. 
the $[0.0, 5^+] \\rightarrow [2207.1, 6^+]$ branch gets the maximum feeding followed by the $[0.0, 5^+] \\rightarrow [1866.6, 4^+]$ and $[0.0, 5^+] \\rightarrow [2140.2, 5^-]$ branches, whereas the minimum feed goes to $[0.0, 5^+] \\rightarrow [2030.5, 7^-]$ channel for both the neutral and bare atoms.\n\nFurther, some notable observations and comments for some nuclei are given below. \n\n $\\bullet$ In case of neutral $^{207}$Tl atom in terrestrial condition, the $[0.0, 1\/2^+]$ state of $^{207}$Tl decays to $[0.0, 1\/2^-]$ state of $^{207}$Pb with 99.729\\% branching, whereas to $[569.6, 5\/2^-]$ state of the daughter has the branching $>$0.00004\\% (in some places of Ref. \\cite{nndc} this value is given as $<$0.00008\\%) and to $[897.8, 3\/2^-]$ state has 0.271\\% branching \\cite{nndc} (see supplemental material \\cite{supl} for details). For bare atom, Ohtsubo \\!{\\em et al.\\ } \\cite{ohtsubo} had observed bound state decay rate $\\lambda_B= 4.29(29) \\times 10^{-4}$ sec$^{-1}$ and continuum state decay rate $\\lambda_C=2.29 (012) \\times 10^{-3}$ sec$^{-1}$, by considering the transition to $[0.0, 1\/2^-]$ state of $^{207}$Pb with 100\\% branching. In our calculation for bare atom, we have got bound state decay rate $\\lambda_B = 4.15 \\times 10^{-4}$ sec$^{-1}$ and continuum state decay rate $\\lambda_C= 2.54 \\times 10^{-3}$ sec$^{-1}$. The calculated branchings of bare $^{207}$Tl : $\\sim$ 99.6 \\% to $[0.0, 1\/2^-]$, $\\sim$ 0.00005\\% - 0.0001\\% to $[569.6, 5\/2^-]$ and $\\sim$ 0.4 \\% to $[897.8, 3\/2^-]$ states of the daughter $^{207}$Pb.\n\n In our study, we found some special cases where effective branchings for the bare atom do not follow the same ordering as that of the neutral atom. This indicates a very interesting phenomenon of branching-flip, obtained for the first time in this work. Sometimes the additive contribution of $\\lambda_B$ and $\\lambda_C$ and the effect of these two competing channels can lead to this branching-flip. This can be understood from FIG. \\ref{all}. In FIG. \\ref{all}, decay rates (sec$^{-1}$) for neutral ($\\lambda_{Neutral}$) and bare ($\\lambda_{Bare}$) atom along with all decay components ($\\lambda_B$ and $\\lambda_C$) of the bare atom versus $Q_n$ are shown for the ground state decay of $^{134}$Cs and $^{228}$Ra nuclei. One can see from FIG. \\ref{all} that the highest point corresponding to $\\lambda_{Neutral}$ (i.e. maximum $\\beta^{-}$ branching in neutral atom) and the highest point corresponding to $\\lambda_{Bare}$ (i.e. maximum $\\beta^{-}$ branching in bare atom) are coming from different transitions to the daughter nuclei (different $Q_n$ values), which clearly indicates the phenomenon of flip in the branching sequence.\n\n$\\bullet$ In the case of $^{134}$Cs, $\\lambda_{Neutral}$ is maximum at $Q_{n} = 658.1$ keV, which is due to the maximum branching to the 1400.6 keV level (see supplemental material \\cite{supl} for details) of $^{134}$Ba \\cite{nndc}. In contrary, for the same nucleus, $\\lambda_{Bare}$ is maximum at $Q_n = 88.8$ keV which therefore indicates the maximum branching to the 1969.9 keV level (see TABLE {\\red A.I}) of the daughter $^{134}$Ba for bare atom. \n\n$\\bullet$ Similarly for $^{228}$Ra, the maximum branching for the bare atom ($(\\lambda_{Bare})_{max}$ at $Q_n=12.7$ keV) shifts from that of the neutral atom ($(\\lambda_n)_{max}$ at $Q_n=39.1$ keV). In FIG. \\ref{flip}, we have shown the change and alteration of transition branchings for the $\\beta^-$ decay of $^{228}$Ra. 
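The mechanism behind such a flip can be illustrated with a deliberately simple two-branch example (the numbers below are hypothetical and are chosen only to mimic the situation described above): if the low-$Q_n$ branch receives a much larger bound-state enhancement than the high-$Q_n$ branch, the ordering of the normalized branchings reverses.
\\begin{verbatim}
import numpy as np

# Hypothetical two-branch example of a branching-flip.
lam_neutral = np.array([4.0, 6.0])     # neutral-atom rates (arbitrary units)
enhancement = np.array([3.0, 1.1])     # lambda_bare / lambda_neutral per branch
lam_bare    = lam_neutral * enhancement

print(lam_neutral / lam_neutral.sum()) # [0.40 0.60]   -> second branch dominates
print(lam_bare / lam_bare.sum())       # [0.645 0.355] -> first branch dominates
\\end{verbatim}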
\nOne can see the branching-flips of the participating levels of the $^{228}$Ac atom in FIGS. \\ref{all} and \\ref{flip}. In case of the neutral $^{228}$Ra atom, maximum branching is 40\\% for the $[6.7, 1^+]$ level of the daughter \\cite{nndc}. After complete ionization, the major contribution of the total decay rate comes due to the bound state enhancement of $Q_n=$12.7 keV channel which has $\\sim$ 84.07\\% decay to the $[33.1, 1^+]$ level (30\\% in neutral atom) of the daughter atom, whereas only $\\sim$ 5.81\\% of the total decay branching is observed for the level $[6.7, 1^+]$. \n\nThere are a few more cases, where the branching-flips are observed. However, not necessarily, all the transition branches face the phenomenon of flip. It may also happen that only two or three branches change their sequence, whereas other branches remain in the same order as that of the neutral atom.\n\n$\\bullet$ In the $\\beta^-$ decay of $^{152}$Eu $[45.5998, 0^{-}]$ (see Table 1 of the Ref. \\cite{supl} for branching details), we find that in both cases (neutral and bare) the branching to $[0.0, 0^+]$ branch of the daughter dominate over the rest, whereas a branching-flip is observed between $[344.3,2^+]$ and $[1314.6, 1^-]$ states. \n\n$\\bullet$ Similarly for $^{227}$Ac, we find that there is a branching-flip between two transitions from $[0.0, 3\/2^-]$ state of the parent to $[0.0, 1\/2^+]$ and $[24.5, 3\/2^+]$ states of the daughter atom.\nThe ratio of branching for these two levels is 5.4:1 for neutral atom, which changes to 1:1.38 for bare atom. \n\n\nIt should be noted that the ultimate fate of individual branchings in the bare atom is decided by two factors: the initial branching (required to calculate $Log~t$ for each transition: a part of $Log~ft$ calculation) and Q value of the neutral atom. The competition between these two factors determines whether the branching-flip will occur or not.\n \n \n\\textbf{Effect of uncertainties: } Furthermore, in order to get the complete picture of $\\beta^-$ decay for bare atom, effects due to uncertainties in $\\beta^-$ decay half-life and Q value need to be considered. The effect of uncertainty is appreciable depending on the numerical value of the half-life and Q value. In case of atoms with the $\\beta^-$ decay half-life of the order of seconds\/minutes and having high Q value, no significant change is observed in the calculation of $Log~ ft$ due to the inclusion of experimental uncertainties. The contributions peek out for long lived nuclei with large uncertainty or for transitions of high Q value having large uncertainty. \n For example, in case of $^{93}$Zr atom, where the neutral atom half-life is equal to $1.61 \\times 10^6(5)$ years, $Log~ ft$ for the transition $[0.0,5\/2^+ \\rightarrow 30.8,1\/2^-]$ with the radius $R_1$ is given by ${10.234}^{+0.014}_{-0.013}$. Therefore, the final values for continuum and bare state $\\beta^-$ transitions including the uncertainties can be written as $\\lambda_C={6.87}_{-0.21}^{+0.22} \\times 10^{-15}$ $sec^{-1}$ and $\\lambda_B={6.13}_{-0.19}^{+0.20} \\times 10^{-15}$ $sec^{-1}$, respectively. \n\n\n\n\n\\begin{figure*}[t]\n\\centering\n{\\includegraphics[width=17cm,height=6cm]{Figure_4}}\n\n\\caption{(Color online) Decay rates (in sec$^{-1}$) for neutral ($\\lambda_{Neutral}$) and bare($\\lambda_{Bare}$) atoms along with all the decay components ($\\lambda_B$ and $\\lambda_C$) of the bare atom (for the radius $R_{1}$) with the neutral atom Q-value $Q_n$ (in keV). See text for details. 
\n\\label{all}}\n\\end{figure*} \n\n\\begin{figure*}[t]\n\\centering\n{\\includegraphics[width=19cm,height=7cm]{Figure_5}}\n\\caption{(Color online) Comparison of level branchings on neutral and bare $^{228}$Ra isotope (for the radius $R_{1}$). Left Panel: neutral atom, Right Panel: bare atom. \n\\label{flip}}\n\\end{figure*}\n\n\n\n\\section{{Conclusion}\\label{4}}\nTo summarize, in this work we have calculated individual contributions of bound and continuum state $\\beta^-$ decays to the effective $\\beta^-$ decay rate of the bare atom in the A $\\approx$ 60 to 240 mass range where earlier information were partial and\/or old.\n\n Additionally, the dependence of transition rates over the nuclear radius and the Q value is illustrated clearly in the present study. We found a power law dependence of $\\lambda_{B}\/ \\lambda_{C}$ of a bare atom on $Q_n$ for each value of Z and A. Along with the effective enhancement of transition rates, we have found that transition branchings for the bare atom differs from that of the neutral atom for all Z and A, which is an outcome of non-uniform enhancement amongst the participating branches. Most interestingly, we have found few nuclei, viz. $^{134}$Cs, $^{228}$Ra etc., where some flip in the branching pattern is found for their bare configuration. It will be interesting to see how these results help planning new experiments involving bare atoms. The calculations will be extended to partially ionized atoms which will provide decay rate as function of density and temperature of the stellar plasma and will be useful for calculation of nucleosynthesis processes. \n\n\n\n\n\\section*{ACKNOWLEDGEMENT}\nAG is grateful to DST-INSPIRE Fellowship (IF160297) for providing financial support. CL acknowledges the grant from DST-NPDF (No. PDF\/2016\/001348) Fellowship.\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\\label{sec:introduction}}\n\\IEEEPARstart{C}{reating} dynamic general clothes or garments on animated characters has been a long-standing problem in computer graphics (CG).\nIn the CG industry, physics-based simulations (PBS) are used to achieve realistic and detailed folding patterns for garment animations. \nHowever, it is time-consuming and requires expertise to synthesize fine geometric details since high-resolution meshes with tens of thousands or more vertices are often required.\nFor example, 10 seconds are required for physics-based simulation of a frame for detailed skirt animation shown in Fig.~\\ref{fig:lrhrsim1}.\nNot surprisingly, garment animation remains a bottleneck in many applications.\nRecently, data-driven methods provide alternative solutions to fast and effective wrinkling behaviors for garments.\nDepending on human body poses, some data-driven methods~\\cite{wang10example,Feng2010transfer,deAguiar10Stable,santesteban2019learning, wang2019learning} are capable of generating tight cloth animations successfully.\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{tabular}{ccc}\n\t\\multicolumn{3}{c}{\n\t\\includegraphics[width=1.0\\linewidth]{pictures\/wireframe2_1.pdf}} \\\\\n\t(a) coarse skirt & (b) tracked skirt & (c) fine skirt\n\t\\end{tabular}\n\t\\caption{\\small \\cl{One frame of \\YL{skirt in different representations.} (a) \\YL{coarse mesh} (207 triangles), (b) \\YL{tracked mesh} (13,248 triangles) and (c) \\YL{fine mesh} (13,248 triangles). \\YL{Both coarse and fine meshes are obtained by simulating the skirt using a physics-based method \\cl{\\cite{Narain2012AAR}}. 
The tracked mesh is obtained with physics-based simulation involving additional constraints to track the coarse mesh.} The tracked mesh exhibits stiff folds while the wrinkles in the fine simulated mesh are more realistic.}%\n\t}\n\t\\label{fig:lrhrsim1} \n\\end{figure}\nUnfortunately, they are not suitable for loose garments, such as skirts, since the deformation of wrinkles cannot be defined by a static mapping from a character's pose.\nInstead of human poses, wrinkle augmentation on coarse simulations provides another alternative. \nIt utilizes coarse simulations with fast speed to cover a high-level deformation and leverages learning-based methods to add realistic wrinkles.\nPrevious methods~\\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} commonly require dense correspondences between coarse and fine meshes, so that local details can be added without affecting global deformation. \n\\YL{Such methods also require coarse meshes to be sufficiently close to fine meshes, as they only add details to coarse meshes.}\nTo maintain the correspondences for training data and ensure closeness between coarse and fine meshes, weak-form constraints such as various test functions~\\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} are applied to make fine meshes track the coarse meshes, \n\\YL{but as a result, the obtained high-resolution meshes do not fully follow physical behavior, leading to animations that lack realism. An example is shown in Fig.~\\ref{fig:lrhrsim1} where the tracked skirt (b) loses a large amount of wrinkles which should appear when simulating on fine meshes (c).}\n\n \nWithout requiring the constraints between coarse and fine meshes, we propose \n\\gl{the DeformTransformer network\nto synthesize detailed thin shell animations from coarse ones, based on deformation transfer.}\nThis is inspired by the similarity observed between pairs of coarse and fine meshes generated by PBS. %\nAlthough the positions of vertices from two meshes are not aligned, the overall deformation is similar, so it is possible to predict fine-scale deformation with coarse simulation results.\nMost previous works~\\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} use explicit vertex coordinates to represent 3D meshes, which are sensitive to translations and rotations,\nso they require good alignments between low- and high-resolution meshes. \nIn our work, we regard the cloth animations as non-rigid deformation and propose a novel representation for mesh sequences, called TS-ACAP (Temporal and Spatial As-Consistent-As-Possible) representation. 
\nTS-ACAP is a local deformation representation, capable of representing and solving large-scale deformation problems, while maintaining the details of meshes.\nCompared to the original ACAP representation~\\cite{gao2019sparse}, TS-ACAP is fundamentally designed to ensure the temporal consistency of the extracted feature sequences, \\YL{and meanwhile} it can maintain the original features of ACAP \\YL{to cope with large-scale deformations}.\nWith \\YL{TS-ACAP} representations for both coarse and fine meshes, we leverage a sequence transduction network to map the deformation from coarse to fine level to assure the temporal coherence of generated sequences.\nUnlike existing works using recurrent neural networks (RNN)~\\cite{santesteban2019learning}, we utilize the Transformer network~\\cite{vaswani2017attention}, an architecture consisting of frame-level attention mechanisms for our mesh sequence transduction task.\nIt is based entirely on attention without recursion modules so can be trained significantly faster than architectures based on recurrent %\nlayers.\nWith \\YL{temporally consistent features and the Transformer network, \\YL{our method achieves} stable general cloth synthesis with fine details in an efficient manner.}\n\nIn summary, the main contributions of our work are as follows:\n\\begin{itemize}\n\t\\item \\YL{We propose a novel framework for the synthesis of cloth dynamics, by learning temporally consistent deformation from low-resolution meshes to high-resolution meshes \\gl{with realistic dynamic}, which is $10 \\sim 35$ times faster than PBS \\cite{Narain2012AAR}.}\n\t\\item \\YL{To achieve this, we propose a \\cl{temporally and spatially as-consistent-as-possible deformation representation (TS-ACAP)} to represent the cloth mesh sequences. It is able to deal with large-scale deformation, essential for mapping between coarse and fine meshes, while ensuring temporal coherence.} \n \\item \\gl{Based on the TS-ACAP, We further design an effective neural network architecture (named DeformTransformer) by improving Transformer network, which successfully enables high-quality synthesis of dynamic wrinkles with rich details on thin shells and maintains temporal consistency on the generated high-resolution mesh sequences.}\n \n \n\\end{itemize}\n\nWe qualitatively and quantitatively evaluate our method for various cloth types (T-shirts, pants, skirts, square and disk tablecloth) with different motion sequences. \nIn Sec.~\\ref{sec:related_work}, we review the work most related to ours. We then give the detailed description of our method in Sec.~\\ref{sec:approach}. \nImplementation details are presented in Sec.~\\ref{sec:implementation}. We present experimental results, including extensive\ncomparisons with state-of-the-art methods in Sec.~\\ref{sec:results}, and finally, we draw conclusions and \\YL{discuss future work} in Sec.~\\ref{sec:conclusion}.\n\n\n\\section{Related work} \\label{sec:related_work}\n\\subsection{Cloth Animation}\nPhysics-based techniques for realistic cloth simulation have been widely studied in computer graphics, \\YL{using methods such as} implicit Euler integrator \\cite{BW98,Harmon09asynchronous}, iterative optimization \\cite{terzopoulos87elastically,bridson03wrinkles,Grinspun03shell}, collision detection and response \\cite{provot97collision,volino95collision}, etc. 
\n\\YL{Although such techniques can generate realistic cloth dynamics, }they are time consuming for detailed cloth synthesis, and the robustness and efficiency of simulation systems are also of concern.\n\\YL{To address these, alternative methods have been developed to generate} the dynamic details of cloth animation via adaptive techniques \\cite{lee2010multi,muller2010wrinkle,Narain2012AAR}, data-driven approaches \\cite{deAguiar10Stable, Guan12DRAPE, wang10example, kavan11physics,zurdo2013wrinkles} and deep learning-based methods \\cite{chen2018synthesizing,gundogdu2018garnet,laehner2018deepwrinkles,zhang2020deep}, etc.\n\n Adaptive techniques \\cite{lee2010multi, muller2010wrinkle} usually simulate a coarse model by simplifying the smooth regions and \\YL{applying interpolation} to reconstruct the wrinkles, \\YL{taking normal or tangential degrees of freedom into consideration.} \nDifferent from simulating a reduced model with postprocessing detail augmentation, Narain {\\itshape et al.} \\cite{Narain2012AAR} directly generate dynamic meshes in \\YL{the} simulation phase through adaptive remeshing, at the expense of increasing \\YL{computation time}. \n\nData-driven methods have drawn much attention since they offer faster cloth animations than physical models.\nWith \\YL{a} constructed database of \\YL{high-resolution} meshes, researchers have proposed many techniques depending on the motions of human bodies with linear conditional models\\cite{deAguiar10Stable, Guan12DRAPE} or secondary motion graphs \\cite{Kim2013near, Kim2008drivenshape}.\nHowever, these methods are limited to tight garments and not suitable for skirts or cloth with more freedom.\nAn alternative line \\YL{of research} is to augment details on coarse simulations \\YL{by exploiting knowledge from a} database of paired meshes, to generalize the performance to complicated testing scenes.\nIn this line, in addition to wrinkle synthesis methods \\YL{based on} bone clusters \\cite{Feng2010transfer} or human poses \\cite{wang10example} for fitted clothes, there are some approaches \\YL{that investigate how to} learn a mapping from a coarse garment shape to a detailed one for general \\YL{cases} of free-flowing cloth simulation.\nKavan {\\itshape et al.} \\cite{kavan11physics} present linear upsampling operators to \\YL{efficiently} augment \\YL{medium-scale} details on coarse meshes.\nZurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} define wrinkles as local displacements and use \\YL{an} example-based algorithm to enhance low-resolution simulations.\n\\YL{Their approaches mean the} high-resolution cloth \\YL{is} required to track \\YL{the} low-resolution cloth, \\YL{and thus cannot} exhibit full high-resolution dynamics.\n\nRecently deep learning-based methods have been successfully applied for 3D animations of human \\YL{faces}~\\cite{cao2016real, jiang20183d}, hair \\cite{zhang2018modeling, yang2019dynamic} and garments \\cite{liu2019neuroskinning, wang2019learning}.\nAs for garment synthesis, some approaches \\cite{laehner2018deepwrinkles, santesteban2019learning, patel2020tailornet} are proposed to utilize a two-stream strategy consisting of global garment fit and local \\YL{wrinkle} enhancement.\nL{\\\" a}hner {\\itshape et al.} \\cite{laehner2018deepwrinkles} present DeepWrinkles, \\YL{which recovers} the global deformation from \\YL{a} 3D scan system and \\YL{uses a} conditional \\YL{generative adversarial network} to enhance a low-resolution normal map.\nZhang {\\itshape et al.} \\cite{zhang2020deep} further 
generalize the augmentation method with normal maps to complex garment types as well as various motion sequences.\n\\YL{These approaches add wrinkles on normal maps \\YL{rather than geometry}, and thus their effectiveness is restricted to adding fine-scale visual details, not large-scale dynamics.}\nBased on \\YL{the} skinning representation, some algorithms \\cite{gundogdu2018garnet,santesteban2019learning} use neural networks to generalize garment synthesis algorithms to multiple body shapes. \n\\YL{In addition, other works are} devoted to \\YL{generalizing neural networks} to various cloth styles \\cite{patel2020tailornet} or cloth materials \\cite{wang2019learning}.\nBeyond tight garments dressed on characters, some deep learning-based methods \\cite{chen2018synthesizing, oh2018hierarchical} are %\n\\YL{demonstrated to work for cloth animation with higher degrees} of freedom.\nChen {\\itshape et al.} \\cite{chen2018synthesizing} represent coarse and fine meshes via geometry images and use \\YL{a} super-resolution network to learn the mapping.\nOh {\\itshape et al.} \\cite{oh2018hierarchical} propose a multi-resolution cloth representation with \\YL{fully} connected networks to add details hierarchically.\nSince the \\YL{free-flowing cloth dynamics are harder for networks to learn} than tight garments, the results of these methods have not reached the realism of PBS. \\YL{Our method based on a novel deformation representation and network architecture has superior capabilities of learning the mapping from coarse to fine meshes, generating realistic cloth dynamics, while being much faster than PBS methods.}\n \n \\begin{figure*}[ht]\n \t\\centering\n \t\\includegraphics[width=1.0\\linewidth, trim=20 250 20 50,clip]{pictures\/mainpicture2.pdf} \n \t\\caption{\\small The overall architecture of our detail synthesis network. At the data preparation stage, we generate low- and high-resolution \\gl{thin shell} animations via coarse and fine \\gl{meshes} and various motion sequences.\n \t Then we encode the coarse meshes and the detailed meshes into a deformation representation, TS-ACAP, respectively.\n \t\\YL{Our algorithm then} learns to map the coarse features to fine features %\n \t\\YL{by designing a DeformTransformer network that consists of temporal-aware encoders and decoders, and finally reconstructs the detailed animations.}\n \t}\n \t\\label{fig:pipeline}\n \\end{figure*}\n\n\\subsection{Representation for 3D Meshes}\nUnlike 2D images with a regular grid of pixels, \\YL{3D meshes have irregular connectivity which makes learning more difficult. To address this, existing deep learning based methods turn 3D meshes into a wide range of representations to facilitate processing~\\cite{xiao2020survey},} such as voxels, images \\YL{(such as depth images and multi-view images)}, point clouds, meshes, etc.\n\\YL{The volumetric representation has a regular structure, but it} often suffers from \\YL{the problem of extremely high space and time consumption.}\nThus Wang {\\itshape et al.} \\cite{wang2017cnn} propose an octree-based convolutional neural network and encode the voxels sparsely. \nImage-based representations including \\YL{depth images} \\cite{eigen2014depth,gupta2014learning} and multi-view images \\cite{Su2015mvcnn,li20193d} are proposed to encode 3D models in a 2D domain. 
\nIt is unavoidable that both volumetric and image-based representations lose some geometric details.\nAlternatively, geometry images are used in \\cite{sinha2016deep,Sinha2017surfnet,chen2018synthesizing} for mesh classification or generation\\YL{, which are obtained through cutting a 3D mesh to a topological disk, parameterizing it to a rectangular domain and regularly sampling the 3D coordinates in the 2D domain~\\cite{gu2002geometry}.}\n\\YL{However, this representation} may suffer from parameterization distortion and seam line problems.\n\nInstead of representing 3D meshes into other formats, recently there are methods \\cite{tan2017autoencoder, tan2017variational, hanocka2019meshcnn} applying neural networks directly to triangle meshes with various features.\nGao {\\itshape et al.} \\cite{gao2016efficient} propose a deformation-based representation, called the rotation-invariant mesh difference (RIMD) which is translation and rotation invariant.\nBased on the RIMD feature, Tan {\\itshape et al.} \\cite{tan2017variational} propose a fully connected variational autoencoder network to analyze and generate meshes.\nWu {\\itshape et al.} \\cite{wu2018alive} use the RIMD to generate\na 3D caricature model from a 2D caricature image. \nHowever, it is expensive to reconstruct vertex coordinates from the RIMD feature due to the requirement of solving a very complicated optimization.\nThus it is not suitable for fast mesh generation tasks.\nA faster deformation representation based on an as-consistent-as-possible (ACAP) formulation \\cite{gao2019sparse} is further used to reconstruct meshes \\cite{tan2017autoencoder}, which is able to cope with large rotations and efficient for reconstruction.\nJiang {\\itshape et al.} \\cite{jiang2019disentangled} use ACAP to disentangle the identity and expression of 3D \\YL{faces}. \nThey further apply ACAP to learn and reconstruct 3D human body models using a coarse-to-fine pipeline \\cite{jiang2020disentangled}. \n\\YL{However, the ACAP feature is represented based on individual 3D meshes. When applied to a dynamic mesh sequence, it does not guarantee temporal consistency.}\nWe propose a \\cl{temporally and spatially as-consistent-as-possible (TS-ACAP)} representation, to ensure both spatial and temporal consistency of mesh deformation.\nCompared to ACAP, our TS-ACAP can also accelerate the computation of features thanks to the sequential constraints. \n\n\\subsection{Sequence Generation with \\YL{DNNs (Deep Neural Networks)}}\nTemporal information is crucial for stable and \\gl{vivid} sequence generation. Previously, recurrent neural networks (RNN) have been successfully applied in many sequence generation tasks \\cite{mikolov2010recurrent, mikolov2011extensions}. However, it is difficult to train \\YL{RNNs} to capture long-term dependencies since \\YL{RNNs} suffer from the vanishing gradient problem \\cite{bengio1994learning}. To deal with this problem, previous works proposed some variations of RNN, including long short-term memory (LSTM) \\cite{hochreiter1997long} and gated recurrent unit (GRU) \\cite{cho2014properties}. These variations of RNN rely on the gating mechanisms to control the flow of information, thus performing well in the tasks that require capturing long-term dependencies, such as speech recognition \\cite{graves2013speech} and machine translation \\cite{bahdanau2014neural, sutskever2014sequence}. 
Recently, based on attention mechanisms, the Transformer network \\cite{vaswani2017attention} has been verified to outperform \\YL{many typical sequential models} for long sequences. This structure is able to inject the global context information into each input. Based on Transformer, impressive results have been achieved in tasks with regard to audio, video and text, \\textit{e.g. } speech synthesis \\cite{li2019neural, okamoto2020transformer}, action recognition \\cite{girdhar2019video} and machine translation \\cite{vaswani2017attention}.\nWe utilize the Transformer network to learn the frame-level attention which improves the temporal stability of the generated animation sequences.\n\n\\section{Approach} \\label{sec:approach}\nWith a simulated sequence of coarse meshes $\\mathcal{C} = \\{\\mathcal{C}_1, \\dots, \\mathcal{C}_n\\}$ as input, our goal is to produce a sequence of fine ones $\\mathcal{D} = \\{\\mathcal{D}_1, \\dots, \\mathcal{D}_n\\}$ which have similar non-rigid deformation as the PBS. Given two simulation sets of paired coarse and fine garments, we extract the TS-ACAP representations respectively, \\YL{and} then use our proposed DeformTransformer network to learn the \\YL{transform} \\YL{from the low-resolution space to the high-resolution space}. \\YL{As illustrated previously in Fig.~\\ref{fig:lrhrsim1}, such a mapping involves deformations beyond adding fine details.}\nOnce the network is trained by the paired examples, a consistent and detailed animation $\\mathcal{D}$ can be synthesized for each input sequence $\\mathcal{C}$. \n\n\\subsection{Overview}\nThe overall architecture of our detail synthesis network is illustrated in Fig. \\ref{fig:pipeline}.\nTo synthesize realistic \\gl{cloth animations}, we propose a method to simulate coarse meshes first and learn a \\YL{temporally-coherent} mapping to the fine meshes. \nTo realize our goal, we construct datasets including low- and high-resolution cloth animations, \\textit{e.g. } coarse and fine garments dressed on a human body of various motion sequences. \nTo efficiently extract localized features with temporal consistency, we propose a new deformation representation, called TS-ACAP (temporal \\YL{and spatial} as-consistent-as-possible), which is able to cope with both large rotations and unstable sequences. 
It also has significant advantages: it is efficient to compute for \\YL{mesh} sequences and its derivatives have closed-form solutions.\nSince the fine models typically contain more than ten thousand vertices to capture realistic wrinkles, it is hard for the network to directly map the coarse features to the high-dimensional fine ones.\nTherefore, \\YL{convolutional encoder networks are} applied to encode \\YL{coarse and fine meshes in the TS-ACAP representation} to \\YL{their latent spaces}, respectively.\nThe TS-ACAP generates local rotation and scaling\/shearing parts on vertices, so we perform convolution \\YL{operations} on vertices \\YL{to learn to extract useful features using shared local convolutional kernels.}\nWith the encoded feature sequences, a sequence transduction network is proposed to learn the mapping from coarse to fine TS-ACAP sequences.\nUnlike existing works using recurrent neural networks \\YL{(RNNs)}~\\cite{santesteban2019learning}, we use the Transformer \\cite{vaswani2017attention}, a sequence-to-sequence network architecture based on frame-level attention mechanisms, for our detail synthesis task, \\YL{which is more efficient to learn and leads to superior results.}\n\n\\subsection{Deformation Representation}\n\\YL{As discussed before, large-scale deformations are essential to represent \\gl{thin shell dynamics such as }cloth animations, because folding and wrinkle patterns during animation can often be complicated. Moreover, cloth animations are in the form of sequences, hence temporal coherence is very important for realism. Using 3D coordinates directly cannot cope with large-scale deformations well, while existing deformation representations are generally designed for static meshes, and applying them to cloth animation sequences on a frame-by-frame basis does not take temporal consistency into account. }\nTo cope with this problem, we propose a mesh deformation feature with spatio-temporal consistency, called TS-ACAP, to represent the coarse and fine deformed shapes, which exploits localized information effectively and reconstructs \\YL{meshes} accurately.\nTake the \\YL{coarse meshes} $\\mathcal{C}$ for instance; the \\YL{fine meshes $\\mathcal{D}$ are processed in the same way.} \\YL{Assume that a sequence} of coarse meshes contains $n$ models with the same topology, each denoted as $\\mathcal{C}_{t}$ \\YL{($1\\leq t \\leq n$)}. \n\\YL{A mesh with the same topology is chosen as the reference model, denoted as $\\mathcal{C}_{0}$. For example, for garment animation, this can be the garment mesh worn by a character in the T pose.}
$\\mathbf{p}_{t,i} \\in \\mathbb{R}^{3}$ is the $i^{\\rm th}$ vertex on the $t^{\\rm th}$ mesh.\nTo represent the local shape deformation, the deformation gradient $\\mathbf{T}_{t,i} \\in \\mathbb{R}^{3 \\times 3}$ can be obtained by minimizing the following energy:\n\\begin{equation}\n\t\\mathop{\\arg\\min}_{\\mathbf{T}_{t,i}} \\ \\ \\mathop{\\sum}_{j \\in \\mathcal{N}_i} c_{ij} \\| (\\mathbf{p}_{t,i} - \\mathbf{p}_{t,j}) - \\mathbf{T}_{t,i} (\\mathbf{p}_{0,i} - \\mathbf{p}_{0,j}) \\|_2^2 \\label{con:computeDG}\n\\end{equation}\nwhere $\\mathcal{N}_i$ is the set of one-ring neighbors of the $i^{\\rm th}$ vertex, and $c_{ij}$ is the cotangent weight $c_{ij} = \\cot \\alpha_{ij} + \\cot \\beta_{ij} $ \\cite{sorkine2007rigid,levi2014smooth}, where $\\alpha_{ij}$ and $\\beta_{ij}$ are the angles opposite to the edge connecting the $i^{\\rm th}$ and $j^{\\rm th}$ vertices.\n\nThe main drawback of the deformation gradient representation is that it cannot handle large-scale rotations, which often \\YL{happen} in cloth animation. \nUsing polar decomposition, the deformation gradient $\\mathbf{T}_{t,i} $ can be decomposed into a rotation part and a scaling\/shearing part $\\mathbf{T}_{t,i} = \\mathbf{R}_{t,i}\\mathbf{S}_{t,i}$.\nThe scaling\/shearing transformation $\\mathbf{S}_{t,i}$ is uniquely defined, while the rotation $\\mathbf{R}_{t,i}$ \\YL{corresponds to infinitely many possible rotation angles (differing by multiples of $2\\pi$, along with a possible opposite orientation of the rotation axis)}. Typical formulations often constrain the rotation angle to be within $[0, \\pi]$, which is unsuitable for smooth large-scale animations. 
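For concreteness, the following minimal Python sketch illustrates how the per-vertex deformation gradients of Eq.~(\\ref{con:computeDG}) and their polar decompositions can be computed. The function and variable names are illustrative and this is a simplified stand-in for our implementation (e.g., uniform weights may be substituted for the cotangent weights):
\\begin{verbatim}
import numpy as np

def deformation_gradients(p_ref, p_def, neighbors, weights):
    # p_ref, p_def: (V, 3) reference and deformed vertex positions.
    # neighbors[i]: indices of the one-ring neighbors of vertex i.
    # weights[i]:   per-edge weights c_ij (cotangent or uniform).
    V = p_ref.shape[0]
    T = np.zeros((V, 3, 3))
    for i in range(V):
        d0 = p_ref[i] - p_ref[neighbors[i]]   # reference edges, (k, 3)
        d1 = p_def[i] - p_def[neighbors[i]]   # deformed edges,  (k, 3)
        w = np.asarray(weights[i])[:, None]
        A = (w * d1).T @ d0                   # sum_j c_ij d1_j d0_j^T
        B = (w * d0).T @ d0                   # sum_j c_ij d0_j d0_j^T
        T[i] = A @ np.linalg.pinv(B)          # least-squares minimizer
    return T

def polar_decompose(T_i):
    # Split T_i = R_i S_i into a rotation and a scaling/shearing part.
    U, sigma, Vt = np.linalg.svd(T_i)
    R = U @ Vt
    if np.linalg.det(R) < 0:                  # keep R a proper rotation
        U[:, -1] = -U[:, -1]
        R = U @ Vt
    S = R.T @ T_i                             # symmetric up to numerics
    return R, S
\\end{verbatim}
The rotations obtained this way are only determined up to the axis and angle ambiguity discussed above, which the following optimizations resolve.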
In order to handle large-scale rotations, we first require the orientations of rotation axes and rotation angles of \\YL{spatially} adjacent vertices \\YL{on the same mesh} to be as consistent as possible. \nEspecially for our sequence data, we further add constraints for adjacent frames to ensure the temporal consistency of the orientations of rotation axes and rotation angles on each vertex.\n\nWe first consider consistent orientation for axes.\n\\begin{flalign}\\label{eqn:axis}\n\t\\arg\\max_{{o}_{t,i}} \\sum_{(i,j) \\in \\mathcal{E} } {o}_{t,i}{o}_{t,j} \\cdot s(\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}, \\theta_{t,i}, \\theta_{t,j}) \\nonumber\\\\\n\t+ \\sum_{i \\in \\mathcal{V} } {o}_{t,i} \\cdot s(\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t-1,i}, \\theta_{t,i}, \\theta_{t-1,i}) \\nonumber\\\\\n\t{\\rm s.t.} \\quad\n\t{o}_{t,1} = 1, {o}_{t,i} = \\pm 1 (i \\neq 1) \\quad \n\\end{flalign}\nwhere $t$ is the \\YL{index} of \\YL{the} frame, $\\mathcal{E}$ is the edge set, and $\\mathcal{V}$ is the vertex set. \\YL{Denote by $(\\boldsymbol{\\omega}_{t,i}, \\theta_{t,i})$ one possible choice for the rotation axis and rotation angle that match $\\mathbf{R}_{t,i}$. $o_{t,i} \\in \\{+1, -1\\}$ specifies whether the rotation axis is flipped ($o_{t,i} = 1$ if the rotation axis is unchanged, and $-1$ if its opposite is used instead).}\n\\YL{The first term promotes spatial consistency while the second term promotes temporal consistency.} \n$s(\\cdot)$ is a function measuring orientation consistency, which is defined as follows:\n\\begin{equation}\n\ts(\\cdot)=\\left\\{\n\t\\begin{aligned}\n\t\t0 & , & |\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}|\\leq\\epsilon_1 \\; {\\rm or} \\;\n\t\t\\theta_{t,i}<\\epsilon_2 \\; {\\rm or} \\; \\theta_{t,j}<\\epsilon_2 \\\\\n\t\t1 & , & {\\rm Otherwise~if}~\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}>\\epsilon_1 \\\\\n\t\t-1 & , & {\\rm Otherwise~if}~ \\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}<-\\epsilon_1 \\\\\n\t\\end{aligned}\n\t\\right.\n\\end{equation}\n\\YL{The first case here is to ignore cases where the rotation angle is near zero, as the rotation axis is not well defined in such cases.}\nAs for rotation angles, \\YL{we optimize the following:}\n\\begin{flalign}\\label{eqn:angle}\n\\arg\\min_{r_{t,i}} &\\sum_{(i,j) \\in \\mathcal{E} } \\| (r_{t,i} \\cdot 2\\pi+{o}_{t,i}\\theta_{t,i}) - (r_{t,j}\\cdot 2\\pi+{o}_{t,j}\\theta_{t,j}) \\|_2^{2} &\\nonumber\\\\\n+ &\\sum_{i \\in \\mathcal{V} } \\| (r_{t,i} \\cdot 2\\pi+{o}_{t,i}\\theta_{t,i}) - (r_{t-1,i}\\cdot 2\\pi+{o}_{t-1,i}\\theta_{t-1,i}) \\|_2^{2} \\nonumber\\\\ \n{\\rm s.t.}& \\quad r_{t,i} \\in \\mathbb{Z},~~r_{t,1} = 0.\n\\end{flalign}\nwhere $r_{t,i} \\in \\mathbb{Z}$ specifies how many $2\\pi$ rotations should be added to the rotation angle.\n\\YL{The two terms here promote spatial and temporal consistencies of rotation angles, respectively. \nThese optimizations can be solved using integer programming, and we use the efficient mixed-integer solver CoMISo~\\cite{comiso2009}. See~\\cite{gao2019sparse} for more details.}\nA similar process is used to compute the TS-ACAP representation of the fine meshes. \n\n\\cl{Compared to the ACAP representation, our TS-ACAP representation considers temporal constraints in the optimization of axes and angles to represent nonlinear deformation, which is more suitable for consecutive large-scale deformation \\YL{sequences}.\nWe compare ACAP~\\cite{gao2019sparse} and our TS-ACAP using a simple example of a simulated disk-shaped cloth animation sequence. Once we obtain deformation representations of the meshes in the sequence, \nwe interpolate two meshes, the initial state mesh and a randomly selected frame, using linear interpolation of \\YL{shape representations}.\n\\YL{In Fig. \\ref{fig:interpolation}, we demonstrate the interpolation results with the ACAP representation, which shows that it cannot handle such challenging cases with complex large-scale deformations. In contrast, with our temporally and spatially as-consistent-as-possible optimization, our TS-ACAP representation is able to produce consistent interpolation results.}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{pictures\/acap_tacap1_1.pdf}%\n\t\\caption{\\small Comparison of shape interpolation results with different deformation representations, ACAP and TS-ACAP. \n\t(a) and (b) are the source (t = 0) and target (t = 1) models with large-scale deformation to be interpolated. \n\tThe first row shows the interpolation results by ACAP, and the second row shows the results with our TS-ACAP. \n\t\\gl{The interpolated models with the ACAP feature are plausible in each frame while they are not consistent in the temporal domain.}\n\t}\n\t\\label{fig:interpolation}\n\\end{figure}\n}
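With the consistent axes and angles, every vertex of every frame is packed into the per-vertex feature vector that the networks in the next subsection operate on. The following sketch shows one plausible packing into 9 values (3 for the scaled rotation axis and 6 for the symmetric scaling\/shearing part); the exact coefficient layout of our implementation follows~\\cite{gao2019sparse}:
\\begin{verbatim}
import numpy as np

def ts_acap_feature(R, S, o=1, r=0):
    # R, S: rotation and scaling/shearing from the polar decomposition.
    # o, r: orientation flag and 2*pi multiplier obtained from the
    #       consistency optimizations above.
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)              # principal angle in [0, pi]
    if np.isclose(np.sin(theta), 0.0):
        axis = np.zeros(3)                # axis undefined for tiny angles
    else:
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    angle = o * theta + 2.0 * np.pi * r   # consistent large-scale angle
    return np.concatenate([(o * axis) * angle, S[np.triu_indices(3)]])
\\end{verbatim}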
\n\n\\subsection{DeformTransformer Networks}\nUnlike \\cite{tan2017variational, wang2019learning} which use fully connected layers for the mesh encoder, we perform convolutions \\YL{on meshes to learn to extract useful features using compact shared convolutional kernels.} \nAs illustrated in Fig. \\ref{fig:pointconv}, we use a convolution operator on vertices \\cite{duvenaud2015convolutional, tan2017autoencoder} where the output at a vertex is obtained as a linear combination of the input features at the vertex and its one-ring neighbors, along with a bias. \n\\YL{The input to our network is the TS-ACAP representation: for the $i^{\\rm th}$ vertex of the $t^{\\rm th}$ mesh, we collect the non-trivial coefficients of the rotation $\\mathbf{R}_{t, i}$ and scaling\/shearing $\\mathbf{S}_{t,i}$, which form a 9-dimensional feature vector (see~\\cite{gao2019sparse} for more details). Denote by $\\mathbf{f}_i^{(k-1)}$ and $\\mathbf{f}_i^{(k)}$ the features of the $i^{\\rm th}$ vertex at the $(k-1)^{\\rm th}$ and $k^{\\rm th}$ layers, respectively. The convolution operator is defined as follows:\n\\begin{equation}\n\t\\mathbf{f}_i^{(k)} =\n\t\\mathbf{W}_{point}^{(k)} \\cdot \\mathbf{f}_{i}^{(k-1)} + \n\t\\mathbf{W}_{neighbor}^{(k)} \\cdot \\frac{1}{D_i} \\mathop{\\sum}_{j=1}^{D_i} \\mathbf{f}_{n_{ij}}^{(k-1)}\n\t+ \\mathbf{b}^{(k)} \n\\end{equation}\nwhere $\\mathbf{W}_{point}^{(k)}$, $\\mathbf{W}_{neighbor}^{(k)}$ and $\\mathbf{b}^{(k)}$ are learnable parameters of the $k^{\\rm th}$ convolutional layer, $D_i$ is the degree of the $i^{\\rm th}$ vertex, and $n_{ij}$ ($1 \\leq j \\leq D_i$) is the $j^{\\rm th}$ neighbor of the $i^{\\rm th}$ vertex.\n}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.48\\linewidth]{pictures\/pointconv.pdf} \n\t\\caption{\\small Illustration of the convolutional operator on meshes. \n\t\tThe result of the convolution at each vertex is obtained by a linear combination of the input features of the vertex and its 1-ring neighbors, along with a bias.\n\t}\n\t\\label{fig:pointconv}\n\\end{figure}\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\linewidth, trim=0 50 0 150,clip]{pictures\/transformer.pdf} %\n\t\\caption{\\small The architecture of our DeformTransformer network.\n\t\tThe coarse and fine mesh sequences are embedded into feature vectors using the TS-ACAP representation which \\YL{is} defined \\YL{at} each vertex as a 9-dimensional vector. 
\n\t\tThen two convolutional \\YL{encoders} map coarse and fine features to \\YL{their latent spaces}, respectively.\n\t\tThese latent vectors are fed into the DeformTransformer network, \\cl{which consists of the encoder and decoder, each including a stack of $N=2$ identical blocks with 8-head attention,} to recover \\YL{temporally-coherent} deformations.\n\t\tNotice that in \\YL{the} training phase the input high-resolution TS-ACAP \\YL{features are those from the ground truth}, \n\t\t\\YL{but during testing, these features are initialized to zeros, and once a new high-resolution frame is generated, its TS-ACAP feature is added.}\n\t\tWith predicted feature vectors, realistic and stable cloth animations are generated.\n\t}\n\t\\label{fig:Transformer}\n\\end{figure}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.4\\linewidth, trim=18 33 18 3,clip]{pictures\/tshirt06_08_poseswithhuman_collision\/temp0270keyshot_unsolve.png} \n\t\\includegraphics[width=0.4\\linewidth, trim=18 33 18 3,clip]{pictures\/tshirt06_08_poseswithhuman_collision\/temp0270keyshot_solve.png} \n\t\\caption{\\small For tight clothing, data-driven cloth deformations may suffer from apparent collisions with the body (left). We apply a simple postprocessing step to push \n\t\\YL{the collided} T-shirt vertices outside the body (right).\n\t}\n\t\\label{fig:collisionrefinement}\n\\end{figure}\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth, trim=50 150 100 150,clip]{pictures\/dataset.pdf} \n\t\\caption{\\small \n\t\tWe test our algorithm on 5 datasets including TSHIRT, PANTS, SKIRT, SHEET and DISK.\t\t \n\t\tThe former three are garments (T-shirts, skirts, and pants) dressed on a template body and simulated with various motion sequences.\n\t\tThe SHEET dataset is a square sheet interacting with various obstacles.\n\t\tThe DISK dataset is a round tablecloth draping on a cylinder in the wind of various velocities. \n\t\tEach cloth shape has a coarse resolution (top) and a fine resolution (bottom). \n\t} \n\t\\label{fig:dataset}\n\\end{figure*}\nLet $\\mathcal{F}_\\mathcal{C} = \\{\\mathbf{f}_{\\mathcal{C}_1}, \\dots, \\mathbf{f}_{\\mathcal{C}_n}\\}$ be the sequence of coarse mesh features, and $\\mathcal{F}_\\mathcal{D} = \\{\\mathbf{f}_{\\mathcal{D}_1}, \\dots, \\mathbf{f}_{\\mathcal{D}_n}\\}$ be its counterpart, the sequence of detailed mesh features.\nTo synthesize $\\mathcal{F}_\\mathcal{D}$ from $\\mathcal{F}_\\mathcal{C}$, the DeformTransformer framework is proposed to solve this sequence-to-sequence problem.\nThe DeformTransformer network consists of several stacked encoder-decoder layers, \\YL{denoted} as $Enc(\\cdot)$ and $Dec(\\cdot)$. To take the order of the sequence into consideration, triangle positional embeddings \\cite{vaswani2017attention} are injected into frames of $\\mathcal{F}_\\mathcal{C}$ and $\\mathcal{F}_\\mathcal{D}$, respectively.\nThe encoder takes coarse mesh features as input and encodes it to a \\YL{temporally-dependent} hidden space.\nIt is composed of identical blocks \\YL{each} with two sub-modules, one is the multi-head self-attention mechanism, the other is the frame-wise fully connected feed-forward network. \nWe also employ a residual connection around these two sub-modules, followed \\YL{by} the layer normalization.\nThe multi-head attention is able to build the dependence between any frames, thus ensuring that each input can consider global context of the whole sequence. 
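For concreteness, a simplified PyTorch-style sketch of this transduction module is given below. The class and variable names are ours, the hyper-parameters follow the settings reported in Section~\\ref{sec:implementation} (16-dimensional latent vectors, 2 blocks, 8 attention heads), and the decoder masking described next is already included; it is an illustrative stand-in rather than our exact implementation:
\\begin{verbatim}
import torch
import torch.nn as nn

class VertexConv(nn.Module):
    # Convolution on mesh vertices: one weight matrix for the vertex
    # itself and one for the mean feature of its one-ring neighbors.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_point = nn.Linear(in_dim, out_dim, bias=True)
        self.w_neigh = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        # x: (batch, verts, in_dim); adj_norm: (verts, verts) row-normalized
        # adjacency, so adj_norm @ x averages one-ring neighbor features.
        return self.w_point(x) + self.w_neigh(torch.matmul(adj_norm, x))

class DeformTransformer(nn.Module):
    # Coarse-to-fine transduction over per-frame latent vectors.
    def __init__(self, d_model=16, n_heads=8, n_layers=2):
        super().__init__()
        self.net = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            dim_feedforward=64, batch_first=True)

    @staticmethod
    def positional(n, d, device):
        # Sinusoidal ("triangle") positional embeddings.
        pos = torch.arange(n, device=device).float().unsqueeze(1)
        idx = torch.arange(0, d, 2, device=device).float()
        ang = pos / (10000.0 ** (idx / d))
        pe = torch.zeros(n, d, device=device)
        pe[:, 0::2] = torch.sin(ang)
        pe[:, 1::2] = torch.cos(ang)
        return pe

    def forward(self, z_coarse, z_fine_shifted):
        # z_coarse:       (batch, frames, d) latent codes of coarse frames.
        # z_fine_shifted: (batch, frames, d) already generated fine codes
        #                 (zeros before any frame exists at test time).
        n, d = z_coarse.shape[1], z_coarse.shape[2]
        pe = self.positional(n, d, z_coarse.device)
        mask = self.net.generate_square_subsequent_mask(n).to(z_coarse.device)
        return self.net(z_coarse + pe, z_fine_shifted + pe, tgt_mask=mask)
\\end{verbatim}
The VertexConv layers correspond to the convolutional encoders above, and at test time the fine code predicted for frame $t$ is appended to the decoder input before frame $t+1$ is synthesized, matching the procedure in Fig.~\\ref{fig:Transformer}.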
Compared with other sequence models, the multi-head mechanism also splits attention into several subspaces so that the frame \\YL{relationships} can be modeled in multiple aspects.\nWith the encoded latent vector $Enc(\\mathcal{F}_\\mathcal{C})$, the decoder network attempts to reconstruct a sequence of fine mesh features.\nThe decoder has two parts: \nthe first part takes the fine mesh sequence $\\mathcal{F}_\\mathcal{D}$ as \\YL{input} and encodes it similarly to the encoder. \n\\YL{Unlike the encoder, detailed meshes are generated sequentially, and when predicting frame $t$, the decoder should not attend to subsequent frames (positions after frame $t$). To achieve this, we utilize a masking process for the self-attention module.} The second part performs multi-head attention over the output of the encoder, thus capturing the long-term dependence between the coarse mesh features $\\mathcal{F}_\\mathcal{C}$ and the fine mesh features $\\mathcal{F}_\\mathcal{D}$.\nWe train the Transformer network by minimizing the mean squared error between the predicted detailed features and the ground truth.\nWith the predicted TS-ACAP feature vectors, we reconstruct the vertex coordinates of \\YL{the} target mesh\\YL{, in the same way as reconstruction from ACAP features} (please refer to \\cite{gao2019sparse} for details). \nOur training data is generated by PBS \\YL{and is collision-free}.\nSince human body \\YL{(or other obstacle)} information is not seen by our algorithm, the predicted cloth is not guaranteed to \\YL{be free from any penetration}.\nEspecially for tight garments like T-shirts, collisions \\YL{between the garment and the human body} are clearly visible when they happen.\nWe use a fast refinement method \\cite{wang2019learning} to push the cloth vertices colliding with the body outside \\YL{while} preserving the local wrinkle details (see Fig.~\\ref{fig:collisionrefinement}). \nFor each vertex detected inside the body, we find its closest point on the body surface, together with its normal and position.\nThen the cloth mesh is deformed by minimizing an energy which penalizes the Euclidean distance and the Laplacian difference between the updated mesh and the initial one (please refer to \\cite{wang2019learning} for details).\nThe collision solving process usually takes less than 3 iterations to converge to a collision-free state.
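A simplified sketch of this refinement step is given below. It uses a uniform graph Laplacian and assumes the penetrating vertices and their target positions (closest body surface points pushed slightly along the normal) have already been found, so it is an illustrative stand-in for the exact formulation of~\\cite{wang2019learning}:
\\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_collisions(x0, edges, colliding, targets, w=10.0):
    # x0:        (V, 3) predicted cloth vertex positions.
    # edges:     (E, 2) vertex index pairs of the cloth mesh.
    # colliding: (K,) indices of vertices detected inside the body.
    # targets:   (K, 3) positions just outside the body for them.
    # Minimizes ||L x - L x0||^2 + w * sum_c ||x_c - target_c||^2,
    # preserving local wrinkles while resolving the penetrations.
    V = x0.shape[0]
    ones = np.ones(len(edges))
    A = sp.coo_matrix((ones, (edges[:, 0], edges[:, 1])), shape=(V, V))
    A = (A + A.T).tocsr()                       # symmetric adjacency
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
    P = sp.coo_matrix((np.ones(len(colliding)),
                       (np.arange(len(colliding)), colliding)),
                      shape=(len(colliding), V)).tocsr()
    lhs = (L.T @ L + w * (P.T @ P)).tocsc()
    rhs = L.T @ (L @ x0) + w * (P.T @ targets)
    return np.column_stack([spsolve(lhs, rhs[:, c]) for c in range(3)])
\\end{verbatim}
In practice, the penetration test and this solve are simply iterated until no vertex remains inside the body.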
\n\n\\section{Implementation}\\label{sec:implementation}\nWe describe the details of the dataset construction and the network architecture in this section.\n\n\\textbf{\\YL{Datasets}.}\nTo test our method, we construct 5 datasets, called TSHIRT, PANTS, SKIRT, SHEET and DISK respectively.\nThe former three datasets are different types of garments, \\textit{i.e. }, T-shirts, skirts and pants worn on human bodies.\nEach type of garment \\YL{is represented by both low-resolution and high-resolution meshes}, \\YL{containing} 246 and 14,190 vertices for the T-shirts, 219 and 12,336 vertices for the skirts, and 200 and 11,967 vertices for the pants.\nGarments of the same type and resolution are simulated from a template mesh, which means \\YL{the meshes obtained through cloth animations have the same number of vertices and the same connectivity}.\nThese garments are dressed on animated characters, which are obtained via driving a body \\YL{in the SMPL (Skinned Multi-Person Linear) model} \\cite{loper2015smpl} with publicly available motion capture data from CMU \\cite{hodgins2015cmu}.\nSince the motion data is captured, it contains some \\YL{self-collisions} and long repeated sequences.\n\\YL{After removing poor quality data}, we select various motions, such as dancing, walking, running and jumping, including 20 sequences (\\YL{9031, 6134 and 7680 frames in total} for TSHIRT, PANTS and SKIRT respectively).\nAmong these motions, 18 sequences are randomly selected for training and the remaining 2 sequences for testing.\nThe SHEET dataset consists of a pole or a sphere of three different sizes crashing into a piece of \\YL{cloth}.\nThe coarse mesh has 81 vertices and the fine mesh has 4,225 vertices.\nThere are \\YL{4,000} frames in the SHEET dataset, of which 3,200 frames are used for training and \\YL{the remaining} 800 frames for testing.\nWe construct the DISK dataset by draping a round tablecloth over a cylinder in the wind, with 148 and 7,729 vertices for the coarse and fine meshes respectively.\nWe adjust the velocity of the wind to get various animation sequences, of which 1,600 frames are used for training and 400 frames for testing. \n\n\\begin{table*}[ht]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{ Statistics and timing (sec\/\\YL{frame}) of the testing examples including five types of \\YL{thin shell animations}.\n\t}\n\t\\label{table:runtime}\n\t\\centering\n\t\\begin{tabular}{cccccccccc}\n\t\t\\toprule[1.2pt] \n\t\tBenchmark & \\#verts & \\#verts & PBS & ours & speedup & \\multicolumn{4}{c}{our components} \\\\ \\cline{7-10} \n\t\t& LR & HR & HR & & & coarse & TS-ACAP & synthesizing & refinement \\\\\n\t\t& & & & & & sim. & extraction & (GPU) & \\\\ \\hline \\hline\n\t\tTSHIRT & 246 & 14,190 & 8.72 & 0.867 & \\textbf{10} & 0.73 & 0.11 & 0.012 & 0.015\\\\\n\t\tPANTS & 200 & 11,967 & 10.92 &0.904 & \\textbf{12} & 0.80 & 0.078 & 0.013 & 0.013\\\\\n\t\tSKIRT & 127 & 6,812 & 6.84 & 0.207 & \\textbf{33} & 0.081 & 0.10 & 0.014 & 0.012 \\\\ \n\t\tSHEET & 81 & 4,225 & 2.48 & 0.157 & \\textbf{16} & 0.035 & 0.10 & 0.011 & 0.011 \\\\ \n\t\tDISK & 148 & 7,729 & 4.93 & 0.139 & \\textbf{35} & 0.078 & 0.041 & 0.012 & 0.008 \\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table*} \nTo prepare the above datasets, we generate both \\YL{low-resolution (LR)} and \\YL{high-resolution (HR)} cloth \\YL{animations} by PBS.\nThe initial state of the HR mesh is obtained by applying the Loop subdivision scheme \\cite{Thesis:Loop} to the coarse mesh and simulating for several seconds until it becomes stable.\nPrevious works \\cite{kavan11physics, zurdo2013wrinkles, chen2018synthesizing} usually constrain the high-resolution meshes by various tracking mechanisms to ensure that the coarse cloth \\YL{can be seen as} a low-resolution version of the fine cloth during the complete animation sequences.\nHowever, fine-scale wrinkle dynamics cannot be captured by this model, as wrinkles are defined quasistatically and limited to a \\YL{constrained} subspace.\nThus we \\YL{instead perform} PBS for the two resolutions \\emph{separately}, without any constraints between them.\nWe use the cloth simulation engine ARCSim \\cite{Narain2012AAR} to produce all animation sequences of low- and high-resolution meshes with the same parameter setting. \nIn our experiments, we choose the Gray Interlock from a library of measured cloth materials \\cite{Wang2011DEM} as the material parameters for the ARCSim simulation.\nEspecially for garments interacting with characters, to ensure a collision-free initial state, we manually put the coarse and fine garments on a template human body (in the T pose) and run the simulation to let the \\YL{clothing} relax. 
To this end, we define the initial state for all subsequent simulations.\nWe interpolate 15 frames between the T pose and the initial pose of each motion sequence, before applying the motion sequence, which is smoothed using a convolution operation.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfloat{\n\t\t\\includegraphics[width=0.5\\linewidth]{pictures\/hyper_inputframes-eps-converted-to.pdf} \n\t}\n\t\\subfloat{\n\t\t\\includegraphics[width=0.5\\linewidth]{pictures\/hyper_hiddensize-eps-converted-to.pdf} \n\t}\n\t\\caption{\\small Evaluation of hyperparameters in the Transformer network\\YL{, using the SKIRT dataset. }\n\t\t(Left) average error for the reconstructed results as a function of the number of input frames.\n\t\t(Right) error for the synthesized results under the condition of various dimensions of the latent layer.\n\t}\n\t\\label{fig:hyperpara}\n\\end{figure}\n\\textbf{Network architecture.}\nAs shown in Fig.~\\ref{fig:Transformer}, our transduction network consists of two components, namely convolutional \\YL{encoders} to map coarse and fine mesh sequences into latent spaces for improved generalization capability, and the Transformer network for \\YL{spatio-temporally} coherent deformation transduction.\nThe feature encoder module takes the 9-dimensional TS-ACAP features defined on vertices as input, followed by two convolutional layers with $tanh$ as the activation function. \nIn the last convolutional layer we abandon the activation function, similar to \\cite{tan2017autoencoder}.\nA fully connected layer is used to map the output of the convolutional layers into a 16-dimensional latent space.\nWe train one encoder for coarse \\YL{meshes} and another for fine \\YL{meshes} separately.\nFor the DeformTransformer network, its input includes the embedded latent vectors from both \\YL{the} coarse and fine domains.\nThe DeformTransformer network consists of sequential encoders and decoders, \neach \\YL{including} a stack of 2 identical blocks with 8-head attention.\nDifferent from variable length sequences used in natural language processing, we \\YL{fix} the number of input frames \\YL{(to 3 in our experiments)} since a motion sequence may include a thousand frames.\n\\YL{We perform experiments to evaluate the performance of our method with different settings.}\nAs shown in Fig.~\\ref{fig:hyperpara} \\YL{(left)}, using 3 input frames is found to perform well in our experiments.\nWe also evaluate the results generated with various dimensions of latent space shown in Fig. \\ref{fig:hyperpara} \\YL{(right)}.\nWhen the dimension of latent space is larger than 16, the network can \\YL{easily overfit}.\nThus we set the dimension of the latent space %\nto 16, which is sufficient for all the examples in the paper.\n\\begin{table}[tb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{Quantitative comparison of reconstruction errors for unseen \\YL{cloth animations} in several datasets. We compare our results with Chen {\\itshape et al.} \\cite{chen2018synthesizing} and Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} with LR meshes as a reference. \\YL{Three metrics, namely RMSE (Root Mean Squared Error), Hausdorff distance and STED (Spatio-Temporal Edge Difference)~\\cite{Vasa2011perception} are used. 
Since LR meshes have different number of vertices from the ground truth HR mesh, we only calculate its Hausdorff distance.}}\n \t\\label{table:compare_zurdo_chen2}\n\t\\centering \n\t\\begin{tabular}{ccccc} \n\t\t\\toprule[1.2pt]\n\t\t\\multirow{3}{*}{Dataset} & \\multirow{3}{*}{Methods} & \\multicolumn{3}{c}{Metrics} \\\\ \\cline{3-5}\n\t\t& & RMSE & Hausdorff & STED \\\\ \n\t\t& & $\\times 10^\\YL{-2}$ $\\downarrow$ & $\\times 10^\\YL{-2}$ $\\downarrow$ & $\\downarrow$ \\\\\n\t\t\\hline \\hline\n\t\t\\multirow{4}{*}{TSHIRT} & LR & - & 0.59 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 0.76 & 0.506 & 0.277 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 1.04 & 0.480 & 0.281 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.546} & \\textbf{0.416} & \\textbf{0.0776} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{PANTS} & LR & - & 0.761 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 1.82 & 1.09 & 0.176 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 1.89 & 0.983& 0.151 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.663} & \\textbf{0.414} & \\textbf{0.0420} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{SKIRT} & LR & - & 2.09 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 1.93 & 1.31 & 0.562 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 2.19 & 1.52 & 0.178 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.685} & \\textbf{0.681} & \\textbf{0.0241} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{SHEET} \n\t\t& LR & - & 2.61 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 4.37 & 2.60 & 0.155 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 3.02 & 2.34 & 0.0672 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.585} & \\textbf{0.417} & \\textbf{0.0262} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{DISK} & LR & - & 3.12 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 7.03 & 2.27 & 0.244 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 11.40 & 2.23 & 0.502 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{2.16} & \\textbf{1.30} & \\textbf{0.0557 } \\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table}\n\n\\section{Results}\\label{sec:results}\n\\subsection{Runtime Performance}\nWe implement our method on a \\YL{computer with a} 2.50GHz \\YL{4-Core} Intel CPU for coarse simulation and TS-ACAP extraction,\nand \\YL{an} NVIDIA GeForce\\textsuperscript{\\textregistered}~GTX 1080Ti GPU for fine TS-ACAP generation by the network and mesh coordinate reconstruction.\nTable~\\ref{table:runtime} shows average per-frame execution time of our method for various cloth datasets.\nThe execution time contains four parts: coarse simulation, TS-ACAP extraction, high-resolution TS-ACAP synthesis, and collision refinement. \nFor reference, we also \\YL{measure} the time of a CPU-based implementation of high-resolution PBS using ARCSim \\cite{Narain2012AAR}.\nOur algorithm is $10\\sim35$ times faster than the \\YL{PBS} HR simulation.\nThe low computational cost of our method makes it suitable for the interactive applications. 
\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.01} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/0crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/1crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/2crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/3crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/4crop0090down.png} \\\\\n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/0crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/1crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/2crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/3crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/4crop0300down.png} \\\\\n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/0crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/1crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/2crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/3crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/4crop0110down.png} \\\\\n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/0crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/1crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/2crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/3crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/4crop0260down.png} \\\\ \n \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular}\n\t\\caption{Comparison of the reconstruction results for unseen data \\YL{on the TSHIRT} dataset.\n\t\t(a) coarse simulation,\n\t\t(b) results of \\cite{chen2018synthesizing},\n\t\t(c) results of \\cite{zurdo2013wrinkles},\n\t\t(d) our results,\n\t\t(e) ground truth generated by PBS.\n\t\tOur method produces the detailed shapes of higher quality than Chen {\\itshape et al.} and Zurdo {\\itshape et al.}, see the folds and wrinkles in the close-ups. Chen {\\itshape et al.} results suffer from seam line problems. 
The results of Zurdo {\\itshape et al.} exhibit clearly noticeable artifacts.}\n\t\\label{fig:comparetoothers_tshirt}\n\\end{figure}\n \\begin{figure}[!htb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.01} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n \\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0010down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0060down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0140down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0160down.png} \\\\ \n\t \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular} \n\t\\caption{Comparison of the reconstruction results for unseen data in the PANTS dataset.\n\t\t(a) coarse simulation results,\n\t\t(b) results of \\cite{chen2018synthesizing}, mainly smooth the coarse meshes and barely exhibit any wrinkles.\n\t\t(c) results of \\cite{zurdo2013wrinkles}, have clear artifacts on examples where LR and HR meshes are not aligned well, \\textit{e.g. 
} the trouser legs.\n\t\t(d) our results, ensures physically-reliable results.\n\t\t(e) ground truth generated by PBS.\n\t}\n\t\\label{fig:comparetoothers_pants}\n\\end{figure} \n\\begin{figure*}[htb]\n\t\\centering\n\t\\subfloat[Input]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0080_00_skirtlrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0110_00_skirtlrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0140_00_skirtlrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0160_00_skirtlrkeyshot.png} \n\t\\end{minipage}} \n\t\\subfloat[Chen {\\itshape et al.}]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0080keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0110keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0140keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0160keyshot.png} \n\t\\end{minipage}} \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0080_00_skirtlr_result.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0110_00_skirtlr_result.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0140_00_skirtlr_result.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0160_00_skirtlr_result.png} \n\t\\end{minipage}\n\t\\subfloat[Zurdo {\\itshape et al.}]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0080keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0110keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0140keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0160keyshot.png} \n\t\\end{minipage}} \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0080_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0110_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0140_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0160_00_skirthr.png} \n\t\\end{minipage}\n\t\\subfloat[Ours]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 
45,clip]{pictures\/skirt09_06_poses\/3\/frm0080_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/3\/frm0110_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/3\/frm0140_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/3\/frm0160_00_skirthrkeyshot.png} \n\t\\end{minipage}}\n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0080_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0110_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0140_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0160_00_skirthr.png} \n\t\\end{minipage} \n\t\\subfloat[GT]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0080_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0110_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0140_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0160_00_skirthrkeyshot.png} \n\t\\end{minipage}}\n\t\\begin{minipage}[b]{0.08\\linewidth} \n\t\t\\includegraphics[width=1.000000\\linewidth, trim=0 0 0 0,clip]{pictures\/bar.png}\n\t\\end{minipage}\n\t\n\t\\caption{Comparison of the reconstruction results for unseen data in the SKIRT dataset.\n\t\t(a) the coarse simulation,\n\t\t(b) the results of \\cite{chen2018synthesizing},\n\t\t(c) the results of \\cite{zurdo2013wrinkles},\n\t\t(d) our results,\n\t\t(e) the ground truth generated by PBS.\n\tThe reconstruction accuracy is qualitatively showed as a difference map. \n\tReconstruction errors are color-coded and warmer colors indicate larger errors. Our method leads to significantly lower reconstruction errors. }\n\t\\label{fig:comparetoothers_skirt}\n\\end{figure*}\n\n\\subsection{\\YL{Fine Detail} Synthesis Results and Comparisons}\nWe now demonstrate our method using various \\YL{detail enhancement}\nexamples \\YL{both} quantitatively and qualitatively, \\YL{including added wrinkles and rich dynamics.}\nUsing detailed meshes generated by PBS as ground truth, we compare our results with physics-based coarse simulations, our implementation of a deep learning-based method \\cite{chen2018synthesizing} and a conventional machine learning-based method \\cite{zurdo2013wrinkles}.\n\nFor quantitative comparison, we use \\YL{three} metrics: Root Mean Squared Error (RMSE), Hausdorff distance as well as spatio-temporal edge difference (STED) \\cite{Vasa2011perception} designed for motion sequences with a focus on `perceptual' error of models.\nThe results are shown in Table~\\ref{table:compare_zurdo_chen2}.\nNote that \\YL{for the datasets from the top to bottom in the table,} the Hausdorff \\YL{distances} between LR meshes and the ground truth are increasing. 
\\YL{This} tendency is in accordance with the deformation range from tighter T-shirts and pants to skirts and square\/disk tablecloth with higher degrees \\YL{of freedom}.\nSince the vertex position representation cannot handle rotations well, the larger scale the models deform, the more artifacts Chen {\\itshape et al.} \\cite{chen2018synthesizing} and Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} would \\YL{bring in} in the reconstructed models, \\YL{leading to increased} RMSE and Hausdorff distances. \nThe results indicate that our method has better reconstruction results \\YL{quantitatively} than the compared methods \\YL{on} the 5 datasets with \\YL{all the three} metrics.\nEspecially \\YL{for} the SKIRT, SHEET and DISK \\YL{datasets} which \\YL{contain} loose cloth \\YL{and hence larger and richer deformation}, our \\YL{method} outperforms \\YL{existing methods significantly} since tracking between coarse and fine meshes \\YL{is} not required in our algorithm.\n\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.01} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n \\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/2crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0130down.png}\\\\ \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/2crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0180down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/2crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0260down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 
6,clip]{pictures\/crashballpole0.3withhuman\/2crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0320down.png} \\\\ \n\t \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular}\n\t\\caption{Comparison of the reconstruction results for unseen data in the SHEET dataset.\n\t\t(a) the coarse simulation,\n\t\t(b) the results of \\cite{chen2018synthesizing}, with inaccurate and\nrough wrinkles different from the GT.\n\t\t(c) the results of \\cite{zurdo2013wrinkles}, show similar global shapes to coarse meshes with some wrinkles and unexpected sharp corner.\n\t\t(d) our results, show mid-scale wrinkles and similar global deformation as GT.\n\t\t(e) the ground truth generated by PBS.}\n\t\\label{fig:comparetoothers_crashball}\n\t\\vspace{-0.2cm}\n\\end{figure} \n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.001} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/0crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/1crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/2crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/3crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/4crop0050down.png} \\\\\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/0crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/1crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/2crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/3crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/4crop0090down.png} \\\\\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/0crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/1crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/2crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/3crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/4crop0160down.png} \\\\\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/0crop0360down.png} &\n\t\t\t 
\\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/1crop0360down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/2crop0360down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/3crop0360down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/4crop0360down.png} \\\\ \n\t\t\t \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular}\n\t\\caption{Comparison of the reconstruction results for unseen data in the DISK dataset.\n\t\t(a) the coarse simulation,\n\t\t(b) the results of \\cite{chen2018synthesizing}, cannot reconstruct credible shapes. \n\t\t(c) the results of \\cite{zurdo2013wrinkles}, show apparent artifacts near the flying tails since no tracking constraints applied.\n\t\t(d) our results, reproduce large-scale deformations, see the tail of the disk flies like a fan in the wind.\n\t\t(e) the ground truth generated by PBS.}\n\t\\label{fig:comparetoothers_disk}\n\\end{figure} \n\n\\YL{We further make qualitative comparisons on the 5 datasets.}\nFig. \\ref{fig:comparetoothers_tshirt} shows \\YL{detail synthesis results} on the TSHIRT dataset.\nThe first and second \nrows \nare from \\YL{sequence} 06\\_08, a woman dribbling the basketball sideways and the \\YL{last two rows} are from \\YL{sequence} 08\\_11, a walking woman.\nIn this dataset of tight t-shirts on human bodies, Chen {\\itshape et al.} \\cite{chen2018synthesizing}, Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} and our method are able to reconstruct the garment model completely with mid-scale wrinkles.\nHowever, Chen {\\itshape et al.} \\cite{chen2018synthesizing} suffer from the seam line problems due to \\YL{the use of geometry image representation}. \nA geometry image is a parametric sampling of the shape, which is \\YL{made a topological disk by cutting through some seams.} \nThe boundary of the disk needs to be fused so that the reconstructed mesh has the original topology.\n\\YL{The super-resolved geometry image corresponding to high-resolution cloth animations are not entirely accurate, and as a result the fused boundaries no longer match exactly, }\n\\textit{e.g. } clear seam lines on the shoulder and crooked boundaries on the left side of the waist \\YL{for the examples} in Fig.~\\ref{fig:comparetoothers_tshirt} (b)),\n\\YL{while} our method \\YL{produces} better results than \\cite{chen2018synthesizing} and \\cite{zurdo2013wrinkles} which have \\YL{artifacts of unsmooth surfaces}.\n\nFig. \\ref{fig:comparetoothers_pants} shows comparative results of the animations of pants on a fixed body shape while changing the body pose over time. 
\nThe results of \\cite{chen2018synthesizing} \\YL{mainly} smooth the coarse meshes and barely exhibit \\YL{any} wrinkles.\nZurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} utilize tracking algorithms to ensure the %\n\\YL{close alignment}\nbetween coarse and fine meshes, and thus the fine meshes are constrained \\YL{and do not exhibit the behavior of full physics-based simulation.}\n\\YL{So on the PANTS dataset,} the results of \\cite{zurdo2013wrinkles} have clear artifacts on examples \\YL{where} LR and HR meshes are not aligned well, \\textit{e.g. } the trouser legs.\nDifferent from the two compared methods \\YL{that reconstruct displacements} or local coordinates, \nour method \\YL{uses} deformation-based features in both encoding and decoding \\YL{phases} which \\YL{does not suffer from such restrictions and ensures physically-reliable results.}\n\nFor looser garments like \\YL{skirts}, we show comparison results in Fig. \\ref{fig:comparetoothers_skirt}, with color coding to highlight the differences between synthesized results and the ground truth.\nOur method successfully reconstructs the swinging skirt \\YL{caused by} the body motion (see the small wrinkles on the waist and the \\YL{medium-level} folds on the skirt \\YL{hem}).\nChen {\\itshape et al.} are able to reconstruct the overall shape of the skirt, however there are many small unsmooth \\YL{triangles leading to noisy shapes}\ndue to the 3D coordinate representation with untracked fine meshes with abundant wrinkles.\nThis leads to unstable animation, please see the accompanying video.\nThe results of \\cite{zurdo2013wrinkles} have some problems of the global deformation, see the directions of the skirt hem and the large highlighted area in the color map.\nOur learned \\YL{detail} synthesis model provides better visual quality for shape generation \\YL{and the generated results look} closer to the ground truth.\n \nInstead of garments dressed on human bodies, we additionally show some results of free-flying tablecloth. \nThe comparison of the testing results \\YL{on} the SHEET dataset are shown in Fig.~\\ref{fig:comparetoothers_crashball}.\nThe results of \\cite{chen2018synthesizing} show inaccurate and rough wrinkles different from the ground truth. \nFor hanging sheets in the results of \\cite{zurdo2013wrinkles}, the global shapes are more like coarse \\YL{meshes} with some wrinkles and unexpected sharp corners, \\textit{e.g. } the left side in the last row of Fig. \\ref{fig:comparetoothers_crashball} (c),\nwhile ours show \\YL{mid-scale} wrinkles and similar global deformation \\YL{as} the high-resolution meshes. \n\nAs for the DISK dataset, from the visual results in Fig.~\\ref{fig:comparetoothers_disk}, we can see that Chen {\\itshape et al.} \\cite{chen2018synthesizing} and Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} cannot handle large-scale rotations well and cannot reconstruct credible shapes in such cases. \n\\gl{Especially for Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles}, the impact of tracking is significant for their algorithm.}\nThey can reconstruct the top\nand part of tablecloth near the cylinder, but the flying tails have apparent artifacts. \nOur algorithm does not have such drawbacks.\nNotice how our method successfully reproduces ground-truth deformations, including the overall drape (\\textit{i.e. 
}, how the tail of the disk flies like a fan in the wind) and mid-scale wrinkles.\n\n\\begin{table}[!htb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{User study results on cloth \\YL{detail} synthesis. We show the average ranking score of the three methods: Chen {\\itshape et al.} \\cite{chen2018synthesizing}, Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles}, and ours. The\n\t\tranking ranges from 1 (the best) to 3 (the worst). The results are calculated\n\t\tbased on 320 trials. We see that our method achieves the best scores in terms of\n\t\twrinkles, temporal stability \\YL{and overall quality}.}\n\t\\label{table:userstudy}\n\t\\centering \n\t\\begin{tabular}{cccc}\n\t\t\\toprule[1.2pt] \n\t\tMethod & Wrinkles & Temporal stability & Overall \\\\ \\hline \n\t\tChen {\\itshape et al.} & 2.184 & 2.1258 & 2.1319\\\\ \\hline \n\t\tZurdo {\\itshape et al.} & 2.3742 & 2.5215 & 2.4877\\\\ \\hline \n\t\tOurs & \\textbf{1.4417} & \\textbf{1.3528} & \\textbf{1.3804} \\\\\n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table}\n\\gl{We further conduct a user study to evaluate the stability and realism of the synthesized dense mesh dynamics. 32 volunteers are involved in this user study.}\nFor every question, we give one sequence and 5 images of coarse meshes as references, \\YL{and} then let the user rank the corresponding outputs from Chen {\\itshape et al.} \\cite{chen2018synthesizing}, Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} and ours according to three different criteria (wrinkles, temporal stability and overall quality). \nWe shuffle the order of the algorithms each time we present the question and show the shapes from the three methods in random order \\YL{to avoid bias}. \nWe show the results of the user study in Table \\ref{table:userstudy}, where we observe that our generated \\YL{shapes} perform the best on all three criteria. 
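The ablation studies below report the per-vertex error (RMSE) between synthesized and ground-truth meshes (e.g., Table~\\ref{table:feature_compare}). As a purely illustrative sketch (not part of our pipeline), assuming each mesh sequence is stored as an array of vertex positions, the metric can be computed as follows:
\\begin{verbatim}
import numpy as np

def per_vertex_rmse(pred_seq, gt_seq):
    # pred_seq, gt_seq: (num_frames, num_vertices, 3) arrays of
    # synthesized and ground-truth vertex positions with the same
    # connectivity. Returns one scalar averaged over all frames and
    # vertices; the exact averaging used in our tables may differ.
    pred = np.asarray(pred_seq, dtype=np.float64)
    gt = np.asarray(gt_seq, dtype=np.float64)
    sq_dist = np.sum((pred - gt) ** 2, axis=-1)  # squared distance per vertex
    return float(np.sqrt(sq_dist.mean()))
\\end{verbatim}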
\n\n\\begin{table}[tb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{Per-vertex error (RMSE) on synthesized shapes with different feature representations: 3D coordinates, ACAP and TS-ACAP.}\n\t\\label{table:feature_compare}\n\t\\centering\n\t\\begin{tabular}{cccccc}\n\t\t\\toprule[1.2pt]\n\t\tDataset & TSHIRT & PANTS & \tSKIRT & SHEET & DISK \\\\ \\hline\n\t\t3D coordinates & 0.0101 & 0.0193 & 0.00941 & 0.00860 & 0.185 \\\\ \\hline\n\t\tACAP & 0.00614 & 0.00785 & 0.00693 & 0.00606 & 0.0351 \\\\ \\hline\n\t\tTS-ACAP & \\textbf{0.00546} & \\textbf{0.00663} & \\textbf{0.00685} & \\textbf{0.00585} & \\textbf{0.0216}\\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table}\n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.001}\n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.25\\linewidth}>{\\centering\\arraybackslash}m{0.25\\linewidth}>{\\centering\\arraybackslash}m{0.25\\linewidth}>{\\centering\\arraybackslash}m{0.25\\linewidth}}\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/0\/crop0040.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/1\/crop0040.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/2\/crop0040.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/3\/crop0040.png} \\\\\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/0\/crop0075.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/1\/crop0075.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/2\/crop0075.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/3\/crop0075.png} \\\\\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/0\/crop0110.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/1\/crop0110.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/2\/crop0110.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/3\/crop0110.png} \\\\ \n \t\\vspace{0.3cm} \\small (a) Input & \\vspace{0.3cm}\\small (b) Coordinates & \\vspace{0.3cm}\\small (c) Ours & \\vspace{0.3cm}\\small (d) GT\n \\end{tabular} \n\t\\caption{The evaluation of the TS-ACAP feature in our detail synthesis method. 
\n\t\t(a) input coarse \\YL{shapes},\n\t\t(b) the results using 3D coordinates, which clearly show a rough appearance, unnatural deformations and some artifacts, especially in the highlighted regions with details shown in the close-ups.\n\t\t(c) our results, which show smooth surfaces and details closer to the GT.\n\t\t(d)\tground truth.\n\t\t }\n\t\\label{fig:ablationstudy_coordiniates_skirt}\n\\end{figure}\n\\begin{figure}[htb]\n\t\\centering\n\t\\setlength{\\tabcolsep}{0.05cm} \n \\renewcommand\\arraystretch{0.001}\n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.02\\linewidth}>{\\centering\\arraybackslash}m{0.31\\linewidth}>{\\centering\\arraybackslash}m{0.31\\linewidth}>{\\centering\\arraybackslash}m{0.31\\linewidth}}\n \t \\rotatebox{90}{\\small ACAP} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/0\/crop0103.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/0\/crop0104.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/0\/crop0105.png} \\\\\n \t\\rotatebox{90}{\\small TS-ACAP} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/1\/crop0103.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/1\/crop0104.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/1\/crop0105.png} \\\\ \n \\vspace{0.3cm} & \\vspace{0.3cm} \\small $t = 103$ & \\vspace{0.3cm} \\small $t = 104$ & \\vspace{0.3cm} \\small $t = 105$ \n\t\\end{tabular} \n\t\\caption{\n\t\t Three consecutive frames from a testing sequence in the DISK dataset. First row: the results of ACAP. As shown in the second column, the enlarged wrinkles are different from those in the previous and the next frames.\n\t\t This causes jumping in the animation.\n\t\t Second row: the consistent results obtained via the TS-ACAP feature, demonstrating that our TS-ACAP representation ensures temporal coherence. \n\t}\n\t\\label{fig:jump_acap}\n\\end{figure}\n\\begin{table}[tb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\fontsize{7.5}{9}\\selectfont\n\t\\caption{Comparison of RMSE between synthesized shapes and ground truth with different networks, \\textit{i.e. 
} without temporal modules, with RNN, with LSTM and ours with the Transformer network.}\n\t\\label{table:transformer_compare}\n\t\\centering\n\t\\begin{tabular}{cccccc}\n\t\t\\toprule[1.2pt]\n\t\tDataset & TSHIRT & PANTS & \tSKIRT & SHEET & DISK \\\\ \\hline\n\t\tWO Transformer & 0.00909 & 0.01142 & 0.00831 & 0.00739 & 0.0427 \\\\ \\hline\n\t\tWith RNN & 0.0435 & 0.0357 & 0.0558 & 0.0273 & 0.157 \\\\ \\hline\n\t\tWith LSTM & 0.0351 & 0.0218 & 0.0451 & 0.0114 & 0.102 \\\\ \\hline\n\t\tWith Transformer & \\textbf{0.00546} & \\textbf{0.00663} & \\textbf{0.00685} & \\textbf{0.00585} & \\textbf{0.0216} \\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table} \n\\begin{figure}[tb]\n \t\\centering\n \\setlength{\\tabcolsep}{0.0cm} \n \\renewcommand\\arraystretch{-1.9}\n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.08\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}}\n \t\t\\rotatebox{90}{\\small (a) Input}& \n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0008.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0016.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0022.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0094.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0200.png} \n \t\t\\\\\n \t\t \\rotatebox{90}{\\small (b) EncDec} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0008.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0016.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0022.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0094.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0200.png} \n \t\t\\\\\n \t\t \\rotatebox{90}{\\small (c) RNN} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0008.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0016.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0022.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0094.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0200.png} \n\t \t\\\\\n\t \t\\rotatebox{90}{\\small (d) LSTM}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0008.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0016.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0022.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0094.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0200.png} \n \t\t\\\\\n \t\t \\rotatebox{90}{\\small (e) Ours}& \n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 
5,clip]{pictures\/tshirt06_08_poses\/3\/0008.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0016.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0022.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0094.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0200.png} \n \t\t\\\\ \n \t\t \\rotatebox{90}{\\small (f) GT}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0008.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0016.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0022.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0094.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0200.png} \n \t\\end{tabular} \n \t\\caption{The evaluation of the Transformer network in our model for wrinkle synthesis.\n \t\tFrom top to bottom we show (a) %\n \t\t\\gl{the input coarse mesh with physical simulation},\n \t\t(b) the results with an encoder-decoder \\YL{dropping out temporal modules}, (c) the results with RNN \\cite{chung2014empirical}, (d) the results with LSTM \\cite{hochreiter1997long}, (e) ours, and (f) the ground truth generated by PBS.}\n \t\\label{fig:transformer_w_o_tshirt}\n \\end{figure} \n\n\\subsection{\\YL{Evaluation of} Network Components}\nWe evaluate the effectiveness of our network components in two aspects: the \\YL{capability} of the TS-ACAP feature and the \\YL{capability} of the Transformer network. \nWe evaluate our method qualitatively and quantitatively on different datasets.\n\n\\textbf{Feature Representation Evaluation}.\nTo verify the effectiveness of our TS-ACAP feature, we compare per-vertex position errors against other features to quantitatively evaluate the generated shapes on different datasets. \nWe compare our method using the TS-ACAP feature with variants of our transduction method using 3D vertex coordinates and ACAP, with network layers and parameters adjusted accordingly to optimize the performance of each variant.\nThe details of the numerical comparison are shown in Table \\ref{table:feature_compare}.\nACAP and TS-ACAP show quantitative improvements over 3D coordinates. \nIn Fig. \\ref{fig:ablationstudy_coordiniates_skirt}, we show several examples of animated skirts comparing 3D coordinates and TS-ACAP. \n\\YL{The results using coordinates show a rough appearance, unnatural deformations and some artifacts, especially in the highlighted regions with details shown in the close-ups.} Our results with TS-ACAP are more similar to the ground truth than the ones with coordinates. \nACAP has the problem of temporal inconsistency; thus the results frequently shake or jump. \n\\YL{Although the use of the Transformer network can somewhat mitigate this issue, such artifacts can appear even with the Transformer.}\n\\YL{Fig.~\\ref{fig:jump_acap} shows} three consecutive frames from a testing sequence in the DISK dataset.\nResults with TS-ACAP show more consistent wrinkles than the ones with ACAP thanks to the temporal constraints.\n\n\\textbf{Transformer Network Evaluation}.\nWe also evaluate the impact of the Transformer network in our pipeline. 
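The component exchanged between the compared variants is the temporal module that operates on the per-frame latent codes produced by the convolutional encoders. The following PyTorch-style sketch is purely illustrative; the module configurations are our own assumptions, with the 16-dimensional latent following Sec.~\\ref{sec:implementation}:
\\begin{verbatim}
import torch
import torch.nn as nn

latent_dim = 16  # per-frame latent size (see Implementation section)

# Interchangeable temporal modules over latents of shape (seq, batch, 16)
temporal_variants = {
    "none": nn.Identity(),   # "WO Transformer": frames mapped independently
    "rnn": nn.RNN(latent_dim, latent_dim),
    "lstm": nn.LSTM(latent_dim, latent_dim),
    "transformer": nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8),
        num_layers=2),
}

z = torch.randn(3, 4, latent_dim)   # 3 input frames, batch of 4 sequences
for name, module in temporal_variants.items():
    out = module(z)
    out = out[0] if isinstance(out, tuple) else out  # RNN/LSTM return (out, state)
    print(name, tuple(out.shape))
\\end{verbatim}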
\nWe compare our method to an encoder-decoder network dropping out the temporal modules, our pipeline with the recurrent neural network (RNN) and with the long short-term memory (LSTM) \\YL{module}.\nAn example on T-shirts is given in Fig. \\ref{fig:transformer_w_o_tshirt}, \\YL{showing} 5 frames in order.\nThe results without any temporal modules show artifacts on the sleeves and neckline since these regions undergo strong \\YL{forces}. %\nThe models using RNN and LSTM stabilize the sequence by eliminating dynamic and detailed deformations, but all the results keep the wrinkles on the chest from the initial state\\YL{, lacking rich dynamics.}\nMoreover, they are not able to generate stable and realistic garment animations \\YL{that look similar to} the ground truth,\n\\YL{while} \\YL{our} method with the Transformer network \\YL{clearly} improves the temporal stability, \\YL{producing results close to the ground truth.}\nWe also quantitatively evaluate the performance of the Transformer network \\YL{in our method} via the per-vertex error. \nAs shown in Table \\ref{table:transformer_compare}, the RMSE of our model \\YL{is} smaller than that of the other models.\n\n\\section{Conclusion and Future Work}\\label{sec:conclusion}\nIn this paper, we introduce a novel algorithm for synthesizing robust and realistic cloth animations via deep learning.\nTo achieve this, we propose a geometric deformation representation named TS-ACAP, which embeds the details well and ensures temporal consistency.\n\\YL{Benefiting} from \\YL{the} deformation-based feature, there is no explicit requirement of tracking between coarse and fine meshes in our algorithm. \nWe also use the Transformer network based on attention mechanisms to map the coarse TS-ACAP to the fine TS-ACAP, maintaining the stability of the generated animations.\nQuantitative and qualitative results reveal that our method can synthesize realistic-looking wrinkles on various datasets, such as draping tablecloths and tight or \\YL{loose} garments dressed on human bodies. \n \nSince our algorithm synthesizes \\YL{details} based on the coarse meshes, the time for coarse simulation is unavoidable.\nEspecially for tight garments like T-shirts and pants, the collision solving phase is time-consuming.\nIn the future, we intend to generate coarse sequences for tight cloth via skinning-based methods in order to reduce the computation in our pipeline.\nAnother limitation is that our current network is not able to deal with all kinds of garments with different topologies.\n\\newpage\n\\bibliographystyle{IEEEtran}\n\n\n\\section{Introduction}\\label{sec:introduction}\n\\IEEEPARstart{C}{reating} dynamic general clothes or garments on animated characters has been a long-standing problem in computer graphics (CG).\nIn the CG industry, physics-based simulations (PBS) are used to achieve realistic and detailed folding patterns for garment animations. 
\nHowever, it is time-consuming and requires expertise to synthesize fine geometric details since high-resolution meshes with tens of thousands or more vertices are often required.\nFor example, 10 seconds are required for physics-based simulation of a frame for detailed skirt animation shown in Fig.~\\ref{fig:lrhrsim1}.\nNot surprisingly, garment animation remains a bottleneck in many applications.\nRecently, data-driven methods provide alternative solutions to fast and effective wrinkling behaviors for garments.\nDepending on human body poses, some data-driven methods~\\cite{wang10example,Feng2010transfer,deAguiar10Stable,santesteban2019learning, wang2019learning} are capable of generating tight cloth animations successfully.\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{tabular}{ccc}\n\t\\multicolumn{3}{c}{\n\t\\includegraphics[width=1.0\\linewidth]{pictures\/wireframe2_1.pdf}} \\\\\n\t(a) coarse skirt & (b) tracked skirt & (c) fine skirt\n\t\\end{tabular}\n\t\\caption{\\small \\cl{One frame of \\YL{skirt in different representations.} (a) \\YL{coarse mesh} (207 triangles), (b) \\YL{tracked mesh} (13,248 triangles) and (c) \\YL{fine mesh} (13,248 triangles). \\YL{Both coarse and fine meshes are obtained by simulating the skirt using a physics-based method \\cl{\\cite{Narain2012AAR}}. The tracked mesh is obtained with physics-based simulation involving additional constraints to track the coarse mesh.} The tracked mesh exhibits stiff folds while the wrinkles in the fine simulated mesh are more realistic.}%\n\t}\n\t\\label{fig:lrhrsim1} \n\\end{figure}\nUnfortunately, they are not suitable for loose garments, such as skirts, since the deformation of wrinkles cannot be defined by a static mapping from a character's pose.\nInstead of human poses, wrinkle augmentation on coarse simulations provides another alternative. \nIt utilizes coarse simulations with fast speed to cover a high-level deformation and leverages learning-based methods to add realistic wrinkles.\nPrevious methods~\\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} commonly require dense correspondences between coarse and fine meshes, so that local details can be added without affecting global deformation. \n\\YL{Such methods also require coarse meshes to be sufficiently close to fine meshes, as they only add details to coarse meshes.}\nTo maintain the correspondences for training data and ensure closeness between coarse and fine meshes, weak-form constraints such as various test functions~\\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} are applied to make fine meshes track the coarse meshes, \n\\YL{but as a result, the obtained high-resolution meshes do not fully follow physical behavior, leading to animations that lack realism. An example is shown in Fig.~\\ref{fig:lrhrsim1} where the tracked skirt (b) loses a large amount of wrinkles which should appear when simulating on fine meshes (c).}\n\n \nWithout requiring the constraints between coarse and fine meshes, we propose \n\\gl{the DeformTransformer network\nto synthesize detailed thin shell animations from coarse ones, based on deformation transfer.}\nThis is inspired by the similarity observed between pairs of coarse and fine meshes generated by PBS. 
%\nAlthough the positions of vertices from the two meshes are not aligned, the overall deformation is similar, so it is possible to predict fine-scale deformation from coarse simulation results.\nMost previous works~\\cite{kavan11physics,zurdo2013wrinkles,chen2018synthesizing} use explicit vertex coordinates to represent 3D meshes, which are sensitive to translations and rotations,\nso they require good alignments between low- and high-resolution meshes. \nIn our work, we regard the cloth animations as non-rigid deformations and propose a novel representation for mesh sequences, called the TS-ACAP (Temporal and Spatial As-Consistent-As-Possible) representation. \nTS-ACAP is a local deformation representation, capable of representing and solving large-scale deformation problems, while maintaining the details of meshes.\nCompared to the original ACAP representation~\\cite{gao2019sparse}, TS-ACAP is fundamentally designed to ensure the temporal consistency of the extracted feature sequences, \\YL{and meanwhile} it maintains the original features of ACAP \\YL{to cope with large-scale deformations}.\nWith \\YL{TS-ACAP} representations for both coarse and fine meshes, we leverage a sequence transduction network to map the deformation from the coarse to the fine level and to ensure the temporal coherence of the generated sequences.\nUnlike existing works using recurrent neural networks (RNN)~\\cite{santesteban2019learning}, we utilize the Transformer network~\\cite{vaswani2017attention}, an architecture consisting of frame-level attention mechanisms, for our mesh sequence transduction task.\nIt is based entirely on attention without recurrence, so it can be trained significantly faster than architectures based on recurrent %\nlayers.\nWith \\YL{temporally consistent features and the Transformer network, \\YL{our method achieves} stable general cloth synthesis with fine details in an efficient manner.}\n\nIn summary, the main contributions of our work are as follows:\n\\begin{itemize}\n\t\\item \\YL{We propose a novel framework for the synthesis of cloth dynamics, by learning temporally consistent deformation from low-resolution meshes to high-resolution meshes \\gl{with realistic dynamics}, which is $10 \\sim 35$ times faster than PBS \\cite{Narain2012AAR}.}\n\t\\item \\YL{To achieve this, we propose a \\cl{temporally and spatially as-consistent-as-possible deformation representation (TS-ACAP)} to represent the cloth mesh sequences. It is able to deal with large-scale deformations, essential for mapping between coarse and fine meshes, while ensuring temporal coherence.} \n \\item \\gl{Based on the TS-ACAP, we further design an effective neural network architecture (named DeformTransformer) by improving the Transformer network, which successfully enables high-quality synthesis of dynamic wrinkles with rich details on thin shells and maintains temporal consistency of the generated high-resolution mesh sequences.}\n \n \n\\end{itemize}\n\nWe qualitatively and quantitatively evaluate our method for various cloth types (T-shirts, pants, skirts, square and disk tablecloths) with different motion sequences. \nIn Sec.~\\ref{sec:related_work}, we review the work most related to ours. We then give a detailed description of our method in Sec.~\\ref{sec:approach}. \nImplementation details are presented in Sec.~\\ref{sec:implementation}. 
We present experimental results, including extensive\ncomparisons with state-of-the-art methods in Sec.~\\ref{sec:results}, and finally, we draw conclusions and \\YL{discuss future work} in Sec.~\\ref{sec:conclusion}.\n\n\n\\section{Related work} \\label{sec:related_work}\n\\subsection{Cloth Animation}\nPhysics-based techniques for realistic cloth simulation have been widely studied in computer graphics, \\YL{using methods such as} implicit Euler integrator \\cite{BW98,Harmon09asynchronous}, iterative optimization \\cite{terzopoulos87elastically,bridson03wrinkles,Grinspun03shell}, collision detection and response \\cite{provot97collision,volino95collision}, etc. \n\\YL{Although such techniques can generate realistic cloth dynamics, }they are time consuming for detailed cloth synthesis, and the robustness and efficiency of simulation systems are also of concern.\n\\YL{To address these, alternative methods have been developed to generate} the dynamic details of cloth animation via adaptive techniques \\cite{lee2010multi,muller2010wrinkle,Narain2012AAR}, data-driven approaches \\cite{deAguiar10Stable, Guan12DRAPE, wang10example, kavan11physics,zurdo2013wrinkles} and deep learning-based methods \\cite{chen2018synthesizing,gundogdu2018garnet,laehner2018deepwrinkles,zhang2020deep}, etc.\n\n Adaptive techniques \\cite{lee2010multi, muller2010wrinkle} usually simulate a coarse model by simplifying the smooth regions and \\YL{applying interpolation} to reconstruct the wrinkles, \\YL{taking normal or tangential degrees of freedom into consideration.} \nDifferent from simulating a reduced model with postprocessing detail augmentation, Narain {\\itshape et al.} \\cite{Narain2012AAR} directly generate dynamic meshes in \\YL{the} simulation phase through adaptive remeshing, at the expense of increasing \\YL{computation time}. 
\n\nData-driven methods have drawn much attention since they offer faster cloth animations than physical models.\nWith \\YL{a} constructed database of \\YL{high-resolution} meshes, researchers have proposed many techniques depending on the motions of human bodies with linear conditional models\\cite{deAguiar10Stable, Guan12DRAPE} or secondary motion graphs \\cite{Kim2013near, Kim2008drivenshape}.\nHowever, these methods are limited to tight garments and not suitable for skirts or cloth with more freedom.\nAn alternative line \\YL{of research} is to augment details on coarse simulations \\YL{by exploiting knowledge from a} database of paired meshes, to generalize the performance to complicated testing scenes.\nIn this line, in addition to wrinkle synthesis methods \\YL{based on} bone clusters \\cite{Feng2010transfer} or human poses \\cite{wang10example} for fitted clothes, there are some approaches \\YL{that investigate how to} learn a mapping from a coarse garment shape to a detailed one for general \\YL{cases} of free-flowing cloth simulation.\nKavan {\\itshape et al.} \\cite{kavan11physics} present linear upsampling operators to \\YL{efficiently} augment \\YL{medium-scale} details on coarse meshes.\nZurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} define wrinkles as local displacements and use \\YL{an} example-based algorithm to enhance low-resolution simulations.\n\\YL{Their approaches mean the} high-resolution cloth \\YL{is} required to track \\YL{the} low-resolution cloth, \\YL{and thus cannot} exhibit full high-resolution dynamics.\n\nRecently deep learning-based methods have been successfully applied for 3D animations of human \\YL{faces}~\\cite{cao2016real, jiang20183d}, hair \\cite{zhang2018modeling, yang2019dynamic} and garments \\cite{liu2019neuroskinning, wang2019learning}.\nAs for garment synthesis, some approaches \\cite{laehner2018deepwrinkles, santesteban2019learning, patel2020tailornet} are proposed to utilize a two-stream strategy consisting of global garment fit and local \\YL{wrinkle} enhancement.\nL{\\\" a}hner {\\itshape et al.} \\cite{laehner2018deepwrinkles} present DeepWrinkles, \\YL{which recovers} the global deformation from \\YL{a} 3D scan system and \\YL{uses a} conditional \\YL{generative adversarial network} to enhance a low-resolution normal map.\nZhang {\\itshape et al.} \\cite{zhang2020deep} further generalize the augmentation method with normal maps to complex garment types as well as various motion sequences.\n\\YL{These approaches add wrinkles on normal maps \\YL{rather than geometry}, and thus their effectiveness is restricted to adding fine-scale visual details, not large-scale dynamics.}\nBased on \\YL{the} skinning representation, some algorithms \\cite{gundogdu2018garnet,santesteban2019learning} use neural networks to generalize garment synthesis algorithms to multiple body shapes. 
\n\\YL{In addition, other works are} devoted to \\YL{generalizing neural networks} to various cloth styles \\cite{patel2020tailornet} or cloth materials \\cite{wang2019learning}.\nBeyond tight garments dressed on characters, some deep learning-based methods \\cite{chen2018synthesizing, oh2018hierarchical} have been %\n\\YL{demonstrated to work for cloth animation with higher degrees} of freedom.\nChen {\\itshape et al.} \\cite{chen2018synthesizing} represent coarse and fine meshes via geometry images and use \\YL{a} super-resolution network to learn the mapping.\nOh {\\itshape et al.} \\cite{oh2018hierarchical} propose a multi-resolution cloth representation with \\YL{fully} connected networks to add details hierarchically.\nSince the \\YL{free-flowing cloth dynamics are harder for networks to learn} than tight garments, the results of these methods have not reached the realism of PBS. \\YL{Our method, based on a novel deformation representation and network architecture, has superior capabilities of learning the mapping from coarse to fine meshes, generating realistic cloth dynamics, while being much faster than PBS methods.}\n \n \\begin{figure*}[ht]\n \t\\centering\n \t\\includegraphics[width=1.0\\linewidth, trim=20 250 20 50,clip]{pictures\/mainpicture2.pdf} \n \t\\caption{\\small The overall architecture of our detail synthesis network. At the data preparation stage, we generate low- and high-resolution \\gl{thin shell} animations via coarse and fine \\gl{meshes} and various motion sequences.\n \t Then we encode the coarse meshes and the detailed meshes into the deformation representation TS-ACAP, respectively.\n \t\\YL{Our algorithm then} learns to map the coarse features to fine features %\n \t\\YL{by designing a DeformTransformer network that consists of temporal-aware encoders and decoders, and finally reconstructs the detailed animations.}\n \t}\n \t\\label{fig:pipeline}\n \\end{figure*}\n\n\\subsection{Representation for 3D Meshes}\nUnlike 2D images with a regular grid of pixels, \\YL{3D meshes have irregular connectivity which makes learning more difficult. To address this, existing deep learning based methods turn 3D meshes into a wide range of representations to facilitate processing~\\cite{xiao2020survey},} such as voxels, images \\YL{(such as depth images and multi-view images)}, point clouds, meshes, etc.\n\\YL{The volumetric representation has a regular structure, but it} often suffers from \\YL{the problem of extremely high space and time consumption.}\nThus Wang {\\itshape et al.} \\cite{wang2017cnn} propose an octree-based convolutional neural network and encode the voxels sparsely. \nImage-based representations including \\YL{depth images} \\cite{eigen2014depth,gupta2014learning} and multi-view images \\cite{Su2015mvcnn,li20193d} are proposed to encode 3D models in a 2D domain. 
\nIt is unavoidable that both volumetric and image-based representations lose some geometric details.\nAlternatively, geometry images are used in \\cite{sinha2016deep,Sinha2017surfnet,chen2018synthesizing} for mesh classification or generation\\YL{, which are obtained through cutting a 3D mesh to a topological disk, parameterizing it to a rectangular domain and regularly sampling the 3D coordinates in the 2D domain~\\cite{gu2002geometry}.}\n\\YL{However, this representation} may suffer from parameterization distortion and seam line problems.\n\nInstead of representing 3D meshes into other formats, recently there are methods \\cite{tan2017autoencoder, tan2017variational, hanocka2019meshcnn} applying neural networks directly to triangle meshes with various features.\nGao {\\itshape et al.} \\cite{gao2016efficient} propose a deformation-based representation, called the rotation-invariant mesh difference (RIMD) which is translation and rotation invariant.\nBased on the RIMD feature, Tan {\\itshape et al.} \\cite{tan2017variational} propose a fully connected variational autoencoder network to analyze and generate meshes.\nWu {\\itshape et al.} \\cite{wu2018alive} use the RIMD to generate\na 3D caricature model from a 2D caricature image. \nHowever, it is expensive to reconstruct vertex coordinates from the RIMD feature due to the requirement of solving a very complicated optimization.\nThus it is not suitable for fast mesh generation tasks.\nA faster deformation representation based on an as-consistent-as-possible (ACAP) formulation \\cite{gao2019sparse} is further used to reconstruct meshes \\cite{tan2017autoencoder}, which is able to cope with large rotations and efficient for reconstruction.\nJiang {\\itshape et al.} \\cite{jiang2019disentangled} use ACAP to disentangle the identity and expression of 3D \\YL{faces}. \nThey further apply ACAP to learn and reconstruct 3D human body models using a coarse-to-fine pipeline \\cite{jiang2020disentangled}. \n\\YL{However, the ACAP feature is represented based on individual 3D meshes. When applied to a dynamic mesh sequence, it does not guarantee temporal consistency.}\nWe propose a \\cl{temporally and spatially as-consistent-as-possible (TS-ACAP)} representation, to ensure both spatial and temporal consistency of mesh deformation.\nCompared to ACAP, our TS-ACAP can also accelerate the computation of features thanks to the sequential constraints. \n\n\\subsection{Sequence Generation with \\YL{DNNs (Deep Neural Networks)}}\nTemporal information is crucial for stable and \\gl{vivid} sequence generation. Previously, recurrent neural networks (RNN) have been successfully applied in many sequence generation tasks \\cite{mikolov2010recurrent, mikolov2011extensions}. However, it is difficult to train \\YL{RNNs} to capture long-term dependencies since \\YL{RNNs} suffer from the vanishing gradient problem \\cite{bengio1994learning}. To deal with this problem, previous works proposed some variations of RNN, including long short-term memory (LSTM) \\cite{hochreiter1997long} and gated recurrent unit (GRU) \\cite{cho2014properties}. These variations of RNN rely on the gating mechanisms to control the flow of information, thus performing well in the tasks that require capturing long-term dependencies, such as speech recognition \\cite{graves2013speech} and machine translation \\cite{bahdanau2014neural, sutskever2014sequence}. 
Recently, based on attention mechanisms, the Transformer network \\cite{vaswani2017attention} has been verified to outperform \\YL{many typical sequential models} for long sequences. This structure is able to inject the global context information into each input. Based on Transformer, impressive results have been achieved in tasks with regard to audio, video and text, \\textit{e.g. } speech synthesis \\cite{li2019neural, okamoto2020transformer}, action recognition \\cite{girdhar2019video} and machine translation \\cite{vaswani2017attention}.\nWe utilize the Transformer network to learn the frame-level attention which improves the temporal stability of the generated animation sequences.\n\n\\section{Approach} \\label{sec:approach}\nWith a simulated sequence of coarse meshes $\\mathcal{C} = \\{\\mathcal{C}_1, \\dots, \\mathcal{C}_n\\}$ as input, our goal is to produce a sequence of fine ones $\\mathcal{D} = \\{\\mathcal{D}_1, \\dots, \\mathcal{D}_n\\}$ which have similar non-rigid deformation as the PBS. Given two simulation sets of paired coarse and fine garments, we extract the TS-ACAP representations respectively, \\YL{and} then use our proposed DeformTransformer network to learn the \\YL{transform} \\YL{from the low-resolution space to the high-resolution space}. \\YL{As illustrated previously in Fig.~\\ref{fig:lrhrsim1}, such a mapping involves deformations beyond adding fine details.}\nOnce the network is trained by the paired examples, a consistent and detailed animation $\\mathcal{D}$ can be synthesized for each input sequence $\\mathcal{C}$. \n\n\\subsection{Overview}\nThe overall architecture of our detail synthesis network is illustrated in Fig. \\ref{fig:pipeline}.\nTo synthesize realistic \\gl{cloth animations}, we propose a method to simulate coarse meshes first and learn a \\YL{temporally-coherent} mapping to the fine meshes. \nTo realize our goal, we construct datasets including low- and high-resolution cloth animations, \\textit{e.g. } coarse and fine garments dressed on a human body of various motion sequences. \nTo efficiently extract localized features with temporal consistency, we propose a new deformation representation, called TS-ACAP (temporal \\YL{and spatial} as-consistent-as-possible), which is able to cope with both large rotations and unstable sequences. 
It also has significant advantages: it is efficient to compute for \\YL{mesh} sequences and its derivatives have closed form solutions.\nSince the fine models typically contain more than ten thousand vertices to simulate realistic wrinkles, it is hard for the network to directly map the coarse features to the high-dimensional fine ones.\nTherefore, \\YL{convolutional encoder networks are} \napplied to encode \\YL{coarse and fine meshes in the TS-ACAP representation} to \\YL{their latent spaces}, respectively.\nThe TS-ACAP generates local rotation and scaling\/shearing parts on vertices, so we perform convolution \\YL{operations} on vertices %\n\\YL{to learn to extract useful features using shared local convolutional kernels.}\nWith the encoded feature sequences, a sequence transduction network is proposed to learn the mapping from coarse to fine TS-ACAP sequences.\nUnlike existing works using recurrent neural networks \\YL{(RNNs)}~\\cite{santesteban2019learning}, we use the Transformer \\cite{vaswani2017attention}, a sequence-to-sequence network architecture based on frame-level attention mechanisms, for our detail synthesis task, \\YL{which is more efficient to learn and leads to superior results.}\n\n\\subsection{Deformation Representation}\n\\YL{As discussed before, large-scale deformations are essential to represent \\gl{thin shell dynamics such as }cloth animations, because folding and wrinkle patterns during animation can often be complicated. Moreover, cloth animations are in the form of sequences, hence temporal coherence is very important for realism. Using 3D coordinates directly cannot cope with large-scale deformations well, and existing deformation representations are generally designed for static meshes, and directly applying them to cloth animation sequences on a frame-by-frame basis does not take temporal consistency into account. }\nTo cope with this problem, we propose a mesh deformation feature with spatio-temporal consistency, called TS-ACAP, to represent the coarse and fine deformed shapes, which exploits the localized information effectively and reconstructs \\YL{meshes} accurately.\nTake \\YL{coarse meshes} $\\mathcal{C}$ for instance; \\YL{fine meshes $\\mathcal{D}$ are processed in the same way.} \\YL{Assume that a sequence} of coarse meshes contains $n$ models with the same topology, each denoted as $\\mathcal{C}_{t}$ \\YL{($1\\leq t \\leq n$)}. \n\\YL{A mesh with the same topology is chosen as the reference model, denoted as $\\mathcal{C}_{0}$. 
For example, for garment animation, this can be the garment mesh worn by a character in the T pose.}\n$\\mathbf{p}_{t,i} \\in \\mathbb{R}^{3}$ is the $i^{\\rm th}$ vertex on\nthe $t^{\\rm th}$ mesh.\nTo represent the local shape deformation, the deformation gradient $\\mathbf{T}_{t,i} \\in \\mathbb{R}^{3 \\times 3}$ can be obtained by minimizing the following energy:\n\\begin{equation}\n\t\\mathop{\\arg\\min}_{\\mathbf{T}_{t,i}} \\ \\ \\mathop{\\sum}_{j \\in \\mathcal{N}_i} c_{ij} \\| (\\mathbf{p}_{t,i} - \\mathbf{p}_{t,j}) - \\mathbf{T}_{t,i} (\\mathbf{p}_{0,i} - \\mathbf{p}_{0,j}) \\|_2^2 \\label{con:computeDG}\n\\end{equation}\nwhere $\\mathcal{N}_i$ is the one-ring neighbors of the $i^{\\rm th}$ vertex, and $c_{ij}$ is the cotangent weight $c_{ij} = \\cot \\alpha_{ij} + \\cot \\beta_{ij} $ \\cite{sorkine2007rigid,levi2014smooth}, where $\\alpha_{ij}$\nand $\\beta_{ij}$ are angles opposite to the edge connecting the $i^{\\rm th}$ and $j^{\\rm th}$ vertices.\n\nThe main drawback of the deformation gradient representation is that it cannot handle large-scale rotations, which often \\YL{happen} in cloth animation. \nUsing polar decomposition, the deformation gradient $\\mathbf{T}_{t,i} $ can be decomposed into a rotation part and a scaling\/shearing part $\\mathbf{T}_{t,i} = \\mathbf{R}_{t,i}\\mathbf{S}_{t,i}$.\nThe scaling\/shearing transformation $\\mathbf{S}_{t,i}$ is uniquely defined, while the rotation $\\mathbf{R}_{t,i}$ \\YL{corresponds to infinite possible rotation angles (differed by multiples of $2\\pi$, along with possible opposite orientation of the rotation axis)}. Typical formulation often constrain the rotation angle to be within $[0, \\pi]$ which is unsuitable for smooth large-scale animations. \n\nIn order to handle large-scale rotations, we first require the orientations of rotation axes and rotation angles of \\YL{spatially} adjacent vertices \\YL{on the same mesh} to be as consistent as possible. \nEspecially for our sequence data, we further add constraints for adjacent frames to ensure the temporal consistency of the orientations of rotation axes and rotation angles on each vertex.\n\nWe first consider consistent orientation for axes.\n\\begin{flalign}\\label{eqn:axis}\n\t\\arg\\max_{{o}_{t,i}} \\sum_{(i,j) \\in \\mathcal{E} } {o}_{t,i}{o}_{t,j} \\cdot s(\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}, \\theta_{t,i}, \\theta_{t,j}) \\nonumber\\\\\n\t+ \\sum_{i \\in \\mathcal{V} } {o}_{t,i} \\cdot s(\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t-1,i}, \\theta_{t,i}, \\theta_{t-1,i}) \\nonumber\\\\\n\t{\\rm s.t.} \\quad\n\t{o}_{t,1} = 1, {o}_{t,i} = \\pm 1 (i \\neq 1) \\quad \n\\end{flalign}\nwhere $t$ is the \\YL{index} of \\YL{the} frame, $\\mathcal{E}$ is the edge set, and $\\mathcal{V}$ is the vertex set. \\YL{Denote by $(\\boldsymbol{\\omega}_{t,i}, \\theta_{t,i})$ one possible choice for the rotation axis and rotation angle that match $\\mathbf{R}_{t,i}$. $o_{t,i} \\in \\{+1, -1\\}$ specifies whether the rotation axis is flipped ($o_{t,i} = 1$ if the rotation axis is unchanged, and $-1$ if its opposite is used instead). 
}\\YL{The first term promotes spatial consistency while the second term promotes temporal consistency.} \n$s(\\cdot)$ is a function measuring orientation consistency, which is defined as follows:\n\\begin{equation}\n\ts(\\cdot)=\\left\\{\n\t\\begin{aligned}\n\t\t0 & , & |\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}|\\leq\\epsilon_1 \\; {\\rm or} \\;\n\t\t\\theta_{t,i}<\\varepsilon_2 \\; {\\rm or} \\; \\theta_{t,j}<\\varepsilon_2 \\\\\n\t\t1 & , & {\\rm Otherwise~if}~\\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}>\\epsilon_1 \\\\\n\t\t-1 & , & {\\rm Otherwise~if}~ \\boldsymbol{\\omega}_{t,i} \\cdot \\boldsymbol{\\omega}_{t,j}<-\\epsilon_1 \\\\\n\t\\end{aligned}\n\t\\right.\n\\end{equation}\n\\YL{The first case here is to ignore cases where the rotation angle is near zero, as the rotation axis is not well defined in such cases.}\nAs for rotation angles, \\YL{we optimize the following}\n\\begin{flalign}\\label{eqn:angle}\n\\arg\\min_{r_{t,i}} &\\sum_{(i,j) \\in \\mathcal{E} } \\| (r_{t,i} \\cdot 2\\pi+{o}_{t,i}\\theta_{t,i}) - (r_{t,j}\\cdot 2\\pi+{o}_{t,j}\\theta_{t,j}) \\|_2^{2} &\\nonumber\\\\\n+ &\\sum_{i \\in \\mathcal{V} } \\| (r_{t,i} \\cdot 2\\pi+{o}_{t,i}\\theta_{t,i}) - (r_{t-1,i}\\cdot 2\\pi+{o}_{t-1,i}\\theta_{t-1,i}) \\|_2^{2} \\nonumber\\\\ \n{\\rm s.t.}& \\quad r_{t,i} \\in \\mathbb{Z},~~r_{t,1} = 0.\n\\end{flalign}\nwhere $r_{t,i} \\in \\mathbb{Z}$ specifies how many $2\\pi$ rotations should be added to the rotation angle.\n\\YL{The two terms here promote spatial and temporal consistencies of rotation angles, respectively. \nThese optimizations can be solved using integer programming, and we use the efficient mixed integer solver CoMISo~\\cite{comiso2009}. See~\\cite{gao2019sparse} for more details.}\nA similar process is used to compute the TS-ACAP representation of the fine meshes. \n\n\n\\cl{Compared to the ACAP representation, our TS-ACAP representation considers temporal constraints to represent nonlinear deformation in the optimization of axes and angles, which is more suitable for consecutive large-scale deformation \\YL{sequences}.\nWe compare ACAP~\\cite{gao2019sparse} and our TS-ACAP using a simple example of a simulated disk-shaped cloth animation sequence. Once we obtain deformation representations of the meshes in the sequence, \nwe interpolate two meshes, the initial state mesh and a randomly selected frame, using linear interpolation of \\YL{shape representations}.\n\\YL{In Fig. \\ref{fig:interpolation}, we demonstrate the interpolation results with the ACAP representation, which shows that it cannot handle such challenging cases with complex large-scale deformations. In contrast, with our temporally and spatially as-consistent-as-possible optimization, our TS-ACAP representation is able to produce consistent interpolation results.}\n\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{pictures\/acap_tacap1_1.pdf}%\n\t\\caption{\\small Comparison of shape interpolation results with different deformation representations, ACAP and TS-ACAP. %\n\t(a) and (b) are the source (t = 0) and target (t = 1) models with large-scale deformation to be interpolated. \n\tThe first row shows the interpolation results by ACAP, and the second row shows the results with our TS-ACAP. 
\n\t\\gl{The interpolated models with the ACAP feature are plausible in each frame, but they are not consistent in the temporal domain.}\n\t}\n\t\\label{fig:interpolation}\n\\end{figure}\n}\n\n\\subsection{DeformTransformer Networks}\nUnlike \\cite{tan2017variational, wang2019learning} which use fully connected layers for the mesh encoder, we perform convolutions \\YL{on meshes to learn to extract useful features using compact shared convolutional kernels.} \nAs illustrated in Fig. \\ref{fig:pointconv}, we use a convolution operator on vertices \\cite{duvenaud2015convolutional, tan2017autoencoder} where the output at a vertex is obtained as a linear combination of the input in its one-ring neighbors along with a bias. \n\\YL{The input to our network is the TS-ACAP representation, for which, for the $i^{\\rm th}$ vertex of the $t^{\\rm th}$ mesh, we collect the non-trivial coefficients from the rotation $\\mathbf{R}_{t, i}$ and scaling\/shearing $\\mathbf{S}_{t,i}$, forming a 9-dimensional feature vector (see~\\cite{gao2019sparse} for more details). Denote by $\\mathbf{f}_i^{(k-1)}$ and $\\mathbf{f}_i^{(k)}$ the features of the $i^{\\rm th}$ vertex at the $(k-1)^{\\rm th}$ and $k^{\\rm th}$ layers, respectively. The convolution operator is defined as follows:\n\\begin{equation}\n\t\\mathbf{f}_i^{(k)} =\n\t\\mathbf{W}_{point}^{(k)} \\cdot \\mathbf{f}_{i}^{(k-1)} + \n\t\\mathbf{W}_{neighbor}^{(k)} \\cdot \\frac{1}{D_i} \\mathop{\\sum}_{j=1}^{D_i} \\mathbf{f}_{n_{ij}}^{(k-1)}\n\t+ \\mathbf{b}^{(k)} \n\\end{equation}\nwhere $\\mathbf{W}_{point}^{(k)}$, $\\mathbf{W}_{neighbor}^{(k)}$ and $\\mathbf{b}^{(k)}$ are learnable parameters for the $k^{\\rm th}$ convolutional layer, $D_i$ is the degree of the $i^{\\rm th}$ vertex, and $n_{ij}$ $(1 \\leq j \\leq D_i)$ is the $j^{\\rm th}$ neighbor of the $i^{\\rm th}$ vertex.\n}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.48\\linewidth]{pictures\/pointconv.pdf} \n\t\\caption{\\small Illustration of the convolutional operator on meshes. \n\t\tThe result of the convolution for each vertex is obtained by a linear combination of the input in the 1-ring neighbors of the vertex, along with a bias.\n\t}\n\t\\label{fig:pointconv}\n\\end{figure}\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\linewidth, trim=0 50 0 150,clip]{pictures\/transformer.pdf} %\n\t\\caption{\\small The architecture of our DeformTransformer network.\n\t\tThe coarse and fine mesh sequences are embedded into feature vectors using the TS-ACAP representation, which \\YL{is} defined \\YL{at} each vertex as a 9-dimensional vector. 
\n\t\tThen two convolutional \\YL{encoders} map coarse and fine features to \\YL{their latent spaces}, respectively.\n\t\tThese latent vectors are fed into the DeformTransformer network, \\cl{which consists of the encoder and decoder, each including a stack of $N=2$ identical blocks with 8-head attention,} to recover \\YL{temporally-coherent} deformations.\n\t\tNotice that in \\YL{the} training phase the input high-resolution TS-ACAP \\YL{features are those from the ground truth}, \n\t\t\\YL{but during testing, these features are initialized to zeros, and once a new high-resolution frame is generated, its TS-ACAP feature is added.}\n\t\tWith predicted feature vectors, realistic and stable cloth animations are generated.\n\t}\n\t\\label{fig:Transformer}\n\\end{figure}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.4\\linewidth, trim=18 33 18 3,clip]{pictures\/tshirt06_08_poseswithhuman_collision\/temp0270keyshot_unsolve.png} \n\t\\includegraphics[width=0.4\\linewidth, trim=18 33 18 3,clip]{pictures\/tshirt06_08_poseswithhuman_collision\/temp0270keyshot_solve.png} \n\t\\caption{\\small For tight clothing, data-driven cloth deformations may suffer from apparent collisions with the body (left). We apply a simple postprocessing step to push \n\t\\YL{the collided} T-shirt vertices outside the body (right).\n\t}\n\t\\label{fig:collisionrefinement}\n\\end{figure}\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth, trim=50 150 100 150,clip]{pictures\/dataset.pdf} \n\t\\caption{\\small \n\t\tWe test our algorithm on 5 datasets including TSHIRT, PANTS, SKIRT, SHEET and DISK.\t\t \n\t\tThe former three are garments (T-shirts, skirts, and pants) dressed on a template body and simulated with various motion sequences.\n\t\tThe SHEET dataset is a square sheet interacting with various obstacles.\n\t\tThe DISK dataset is a round tablecloth draping on a cylinder in the wind of various velocities. \n\t\tEach cloth shape has a coarse resolution (top) and a fine resolution (bottom). \n\t} \n\t\\label{fig:dataset}\n\\end{figure*}\nLet $\\mathcal{F}_\\mathcal{C} = \\{\\mathbf{f}_{\\mathcal{C}_1}, \\dots, \\mathbf{f}_{\\mathcal{C}_n}\\}$ be the sequence of coarse mesh features, and $\\mathcal{F}_\\mathcal{D} = \\{\\mathbf{f}_{\\mathcal{D}_1}, \\dots, \\mathbf{f}_{\\mathcal{D}_n}\\}$ be its counterpart, the sequence of detailed mesh features.\nTo synthesize $\\mathcal{F}_\\mathcal{D}$ from $\\mathcal{F}_\\mathcal{C}$, the DeformTransformer framework is proposed to solve this sequence-to-sequence problem.\nThe DeformTransformer network consists of several stacked encoder-decoder layers, \\YL{denoted} as $Enc(\\cdot)$ and $Dec(\\cdot)$. To take the order of the sequence into consideration, triangle positional embeddings \\cite{vaswani2017attention} are injected into frames of $\\mathcal{F}_\\mathcal{C}$ and $\\mathcal{F}_\\mathcal{D}$, respectively.\nThe encoder takes coarse mesh features as input and encodes it to a \\YL{temporally-dependent} hidden space.\nIt is composed of identical blocks \\YL{each} with two sub-modules, one is the multi-head self-attention mechanism, the other is the frame-wise fully connected feed-forward network. \nWe also employ a residual connection around these two sub-modules, followed \\YL{by} the layer normalization.\nThe multi-head attention is able to build the dependence between any frames, thus ensuring that each input can consider global context of the whole sequence. 
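For reference, the per-vertex convolution defined above admits a compact implementation; the following PyTorch sketch is illustrative only, and the padded one-ring neighbor layout is an assumption rather than a description of our actual implementation:
\\begin{verbatim}
import torch
import torch.nn as nn

class VertexConv(nn.Module):
    # One point term plus the average over the one-ring neighbors,
    # each with its own learnable weight, plus a bias (illustrative).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_point = nn.Linear(in_dim, out_dim, bias=False)
        self.w_neighbor = nn.Linear(in_dim, out_dim, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, f, neighbors, mask):
        # f:         (batch, V, in_dim), e.g. 9-dim TS-ACAP per vertex
        # neighbors: (V, max_degree) padded long indices of 1-ring neighbors
        # mask:      (V, max_degree) 1.0 for valid neighbors, 0.0 for padding
        nb = f[:, neighbors]                          # (batch, V, max_degree, in_dim)
        nb = (nb * mask[None, ..., None]).sum(dim=2)  # sum over valid neighbors
        degree = mask.sum(dim=1).clamp(min=1.0)       # D_i, avoid division by zero
        nb = nb / degree[None, :, None]               # average over the one-ring
        return self.w_point(f) + self.w_neighbor(nb) + self.bias
\\end{verbatim}
In our implementation, two such layers, followed by a fully connected layer, map each frame to the 16-dimensional latent code on which the attention blocks operate (see Sec.~\\ref{sec:implementation}).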
Compared with other sequence models, the multi-head mechanism also splits \\YL{the} attention into several subspaces so that it can model the frame \\YL{relationships} in multiple aspects.\nWith the encoded latent vector $Enc(\\mathcal{F}_\\mathcal{C})$, the decoder network attempts to reconstruct a sequence of fine mesh features.\nThe decoder has two parts: \nthe first part takes the fine mesh sequence $\\mathcal{F}_\\mathcal{D}$ as \\YL{input} and \nencodes it similarly to the encoder. \n\\YL{Unlike the encoder, detailed meshes are generated sequentially, and when predicting frame $t$, the decoder should not attend to subsequent frames (positions after frame $t$). To achieve this, we utilize a masking process\nfor the self-attention module.} The second part performs multi-head attention over the output of the encoder, thus capturing the long-term dependence between the coarse mesh features $\\mathcal{F}_\\mathcal{C}$ and the fine mesh features $\\mathcal{F}_\\mathcal{D}$.\nWe train the Transformer network by minimizing the mean squared error between the predicted detailed features and the ground truth.\nWith the predicted TS-ACAP feature vectors, we reconstruct the vertex coordinates of \\YL{the} target mesh\\YL{, in the same way as reconstruction from ACAP features} (please refer to \\cite{gao2019sparse} for details). \nOur training data is generated by PBS \\YL{and is collision-free}.\nSince human body \\YL{(or other obstacles)} information is unseen in our algorithm, it does not guarantee that the predicted cloth \\YL{is free from any penetration}.\nEspecially for tight garments like T-shirts, it will be apparent if collisions \\YL{between the garment and the human body} happen.\nWe use a fast refinement method \\cite{wang2019learning} to push the cloth vertices colliding with the body outside \\YL{while} preserving the local wrinkle details (see Fig.~\\ref{fig:collisionrefinement}). \nFor each vertex detected inside the body, we find its closest point on the body surface, together with its normal and position.\nThen the cloth mesh is deformed to update the vertices by minimizing an energy which penalizes the Euclidean distance and the Laplacian difference between the updated mesh and the initial one (please refer to \\cite{wang2019learning} for details).\nThe collision solving process usually takes less than 3 iterations to converge to a collision-free state.\n\n\\section{Implementation}\\label{sec:implementation}\nWe describe the details of the dataset construction and the network architecture in this section.\n\n\\textbf{\\YL{Datasets}.}\nTo test our method, we construct 5 datasets, called TSHIRT, PANTS, SKIRT, SHEET and DISK, respectively.\nThe former three datasets are different types of garments, \\textit{i.e. }, T-shirts, skirts and pants worn on human bodies.\nEach type of garment \\YL{is represented by both low-resolution and high-resolution meshes}, \\YL{containing} 246 and 14,190 vertices for the T-shirts, 219 and 12,336 vertices for the skirts, and 200 and 11,967 vertices for the pants.\nGarments of the same type and resolution are simulated from a template mesh, which means \\YL{such meshes obtained through cloth animations have the same number of vertices and the same connectivity}.\nThese garments are dressed on animated characters, which are obtained via driving a body \\YL{in the SMPL (Skinned Multi-Person Linear) model} \\cite{loper2015smpl} with publicly available motion capture data from CMU \\cite{hodgins2015cmu}.\nSince the motion data is captured, there are some \\YL{self-collisions} or long repeated sequences. 
\n\\YL{After removing poor quality data}, we select various motions, such as dancing, walking, running, jumping etc., including 20 sequences (\\YL{9031, 6134, 7680 frames in total} for TSHIRT, PANTS and SKIRT respectively).\nIn these motions, 18 sequences are randomly selected for training and the remaining 2 sequences for testing.\nThe SHEET dataset consists of a pole or a sphere of three different sizes crashing to a piece of \\YL{cloth sheet}.\nThe coarse mesh has 81 vertices and the fine mesh has 4,225 vertices.\nThere are \\YL{4,000} frames in the SHEET dataset, in which 3200 frames for training and \\YL{the remaining} 800 frames for testing.\nWe construct the DISK dataset by draping a round tablecloth to a cylinder in the wind, with 148 and 7,729 vertices for coarse and fine meshes respectively.\nWe adjust the velocity of the wind to get various animation sequences, in which 1600 frames for training and 400 frames for testing. \n\n\\begin{table*}[ht]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{ Statistics and timing (sec\/\\YL{frame}) of the testing examples including five types of \\YL{thin shell animations}.\n\t}\n\t\\label{table:runtime}\n\t\\centering\n\t\\begin{tabular}{cccccccccc}\n\t\t\\toprule[1.2pt] \n\t\tBenchmark & \\#verts & \\#verts & PBS & ours & speedup & \\multicolumn{4}{c}{our components} \\\\ \\cline{7-10} \n\t\t& LR & HR & HR & & & coarse & TS-ACAP & synthesizing & refinement \\\\\n\t\t& & & & & & sim. & extraction & (GPU) & \\\\ \\hline \\hline\n\t\tTSHIRT & 246 & 14,190 & 8.72 & 0.867 & \\textbf{10} & 0.73 & 0.11 & 0.012 & 0.015\\\\\n\t\tPANTS & 200 & 11,967 & 10.92 &0.904 & \\textbf{12} & 0.80 & 0.078 & 0.013 & 0.013\\\\\n\t\tSKIRT & 127 & 6,812 & 6.84 & 0.207 & \\textbf{33} & 0.081 & 0.10 & 0.014 & 0.012 \\\\ \n\t\tSHEET & 81 & 4,225 & 2.48 & 0.157 & \\textbf{16} & 0.035 & 0.10 & 0.011 & 0.011 \\\\ \n\t\tDISK & 148 & 7,729 & 4.93 & 0.139 & \\textbf{35} & 0.078 & 0.041 & 0.012 & 0.008 \\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table*} \nTo prepare the above datasets, we generate both \\YL{low-resolution (LR)} and \\YL{high-resolution (HR)} cloth \\YL{animations} by PBS.\nThe initial state of the HR mesh is obtained by applying the Loop subdivision scheme \\cite{Thesis:Loop} to the coarse mesh and waiting for several seconds till stable.\nPrevious works \\cite{kavan11physics, zurdo2013wrinkles, chen2018synthesizing} usually constrain the high-resolution meshes by various tracking mechanisms to ensure that the coarse cloth \\YL{can be seen as} a low-resolution version of the fine cloth during the complete animation sequences.\nHowever, fine-scale wrinkle dynamics cannot be captured by this model, as wrinkles are defined quasistatically and limited to a \\YL{constrained} subspace.\nThus we \\YL{instead perform} PBS for the two resolution meshes \\emph{separately}, without any constraints between them.\nWe use a cloth simulation engine called ARCSim \\cite{Narain2012AAR} to produce all animation sequences of low- and high-resolution meshes with the same parameter setting. \nIn our experiment, we choose the Gray Interlock from a library of measured cloth materials \\cite{Wang2011DEM} as the material parameters for ARCSim simulation.\nSpecially for garments interacting with characters, to ensure collision-free, we manually put the coarse and fine garments on a template human body (in the T pose) and run the simulation to let the \\YL{clothing} relax. 
To this end, we define the initial state for all subsequent simulations.\nWe interpolate 15 frames between the T pose and the initial pose of each motion sequence, before applying the motion sequence, which is smoothed using a convolution operation.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfloat{\n\t\t\\includegraphics[width=0.5\\linewidth]{pictures\/hyper_inputframes-eps-converted-to.pdf} \n\t}\n\t\\subfloat{\n\t\t\\includegraphics[width=0.5\\linewidth]{pictures\/hyper_hiddensize-eps-converted-to.pdf} \n\t}\n\t\\caption{\\small Evaluation of hyperparameters in the Transformer network\\YL{, using the SKIRT dataset. }\n\t\t(Left) average error for the reconstructed results as a function of the number of input frames.\n\t\t(Right) error for the synthesized results under the condition of various dimensions of the latent layer.\n\t}\n\t\\label{fig:hyperpara}\n\\end{figure}\n\\textbf{Network architecture.}\nAs shown in Fig.~\\ref{fig:Transformer}, our transduction network consists of two components, namely convolutional \\YL{encoders} to map coarse and fine mesh sequences into latent spaces for improved generalization capability, and the Transformer network for \\YL{spatio-temporally} coherent deformation transduction.\nThe feature encoder module takes the 9-dimensional TS-ACAP features defined on vertices as input, followed by two convolutional layers with $tanh$ as the activation function. \nIn the last convolutional layer we abandon the activation function, similar to \\cite{tan2017autoencoder}.\nA fully connected layer is used to map the output of the convolutional layers into a 16-dimensional latent space.\nWe train one encoder for coarse \\YL{meshes} and another for fine \\YL{meshes} separately.\nFor the DeformTransformer network, its input includes the embedded latent vectors from both \\YL{the} coarse and fine domains.\nThe DeformTransformer network consists of sequential encoders and decoders, \neach \\YL{including} a stack of 2 identical blocks with 8-head attention.\nDifferent from variable length sequences used in natural language processing, we \\YL{fix} the number of input frames \\YL{(to 3 in our experiments)} since a motion sequence may include a thousand frames.\n\\YL{We perform experiments to evaluate the performance of our method with different settings.}\nAs shown in Fig.~\\ref{fig:hyperpara} \\YL{(left)}, using 3 input frames is found to perform well in our experiments.\nWe also evaluate the results generated with various dimensions of latent space shown in Fig. \\ref{fig:hyperpara} \\YL{(right)}.\nWhen the dimension of latent space is larger than 16, the network can \\YL{easily overfit}.\nThus we set the dimension of the latent space %\nto 16, which is sufficient for all the examples in the paper.\n\\begin{table}[tb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{Quantitative comparison of reconstruction errors for unseen \\YL{cloth animations} in several datasets. We compare our results with Chen {\\itshape et al.} \\cite{chen2018synthesizing} and Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} with LR meshes as a reference. \\YL{Three metrics, namely RMSE (Root Mean Squared Error), Hausdorff distance and STED (Spatio-Temporal Edge Difference)~\\cite{Vasa2011perception} are used. 
Since LR meshes have different number of vertices from the ground truth HR mesh, we only calculate its Hausdorff distance.}}\n \t\\label{table:compare_zurdo_chen2}\n\t\\centering \n\t\\begin{tabular}{ccccc} \n\t\t\\toprule[1.2pt]\n\t\t\\multirow{3}{*}{Dataset} & \\multirow{3}{*}{Methods} & \\multicolumn{3}{c}{Metrics} \\\\ \\cline{3-5}\n\t\t& & RMSE & Hausdorff & STED \\\\ \n\t\t& & $\\times 10^\\YL{-2}$ $\\downarrow$ & $\\times 10^\\YL{-2}$ $\\downarrow$ & $\\downarrow$ \\\\\n\t\t\\hline \\hline\n\t\t\\multirow{4}{*}{TSHIRT} & LR & - & 0.59 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 0.76 & 0.506 & 0.277 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 1.04 & 0.480 & 0.281 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.546} & \\textbf{0.416} & \\textbf{0.0776} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{PANTS} & LR & - & 0.761 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 1.82 & 1.09 & 0.176 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 1.89 & 0.983& 0.151 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.663} & \\textbf{0.414} & \\textbf{0.0420} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{SKIRT} & LR & - & 2.09 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 1.93 & 1.31 & 0.562 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 2.19 & 1.52 & 0.178 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.685} & \\textbf{0.681} & \\textbf{0.0241} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{SHEET} \n\t\t& LR & - & 2.61 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 4.37 & 2.60 & 0.155 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 3.02 & 2.34 & 0.0672 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{0.585} & \\textbf{0.417} & \\textbf{0.0262} \\\\ \\hline \\hline\n\t\t\\multirow{4}{*}{DISK} & LR & - & 3.12 & - \\\\ \\cline{2-5}\n\t\t& Chen {\\itshape et al.} & 7.03 & 2.27 & 0.244 \\\\ \\cline{2-5}\n\t\t& Zurdo {\\itshape et al.} & 11.40 & 2.23 & 0.502 \\\\ \\cline{2-5}\n\t\t& Our & \\textbf{2.16} & \\textbf{1.30} & \\textbf{0.0557 } \\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table}\n\n\\section{Results}\\label{sec:results}\n\\subsection{Runtime Performance}\nWe implement our method on a \\YL{computer with a} 2.50GHz \\YL{4-Core} Intel CPU for coarse simulation and TS-ACAP extraction,\nand \\YL{an} NVIDIA GeForce\\textsuperscript{\\textregistered}~GTX 1080Ti GPU for fine TS-ACAP generation by the network and mesh coordinate reconstruction.\nTable~\\ref{table:runtime} shows average per-frame execution time of our method for various cloth datasets.\nThe execution time contains four parts: coarse simulation, TS-ACAP extraction, high-resolution TS-ACAP synthesis, and collision refinement. \nFor reference, we also \\YL{measure} the time of a CPU-based implementation of high-resolution PBS using ARCSim \\cite{Narain2012AAR}.\nOur algorithm is $10\\sim35$ times faster than the \\YL{PBS} HR simulation.\nThe low computational cost of our method makes it suitable for the interactive applications. 
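\n\nFor reference, the quantitative comparisons in Table~\\ref{table:compare_zurdo_chen2} report the RMSE and the Hausdorff distance between synthesized and ground-truth fine meshes. The following minimal Python sketch (an illustration rather than our actual evaluation code) shows how such per-frame errors can be computed with \\texttt{numpy} and \\texttt{scipy}, assuming the synthesized and ground-truth meshes share a one-to-one vertex correspondence for the RMSE, and approximating the surface Hausdorff distance by the distance between the two vertex sets.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial.distance import directed_hausdorff\n\ndef rmse(pred, gt):\n    # pred, gt: (N, 3) vertex arrays in one-to-one correspondence\n    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1)))\n\ndef hausdorff(pred, gt):\n    # symmetric Hausdorff distance between the two vertex sets\n    return max(directed_hausdorff(pred, gt)[0],\n               directed_hausdorff(gt, pred)[0])\n\n# per-sequence numbers are averages over all frames, e.g.:\n# mean_rmse = np.mean([rmse(p, g) for p, g in zip(pred_seq, gt_seq)])\n\\end{verbatim}\n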
\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.01} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/0crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/1crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/2crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/3crop0090down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/4crop0090down.png} \\\\\n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/0crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/1crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/2crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/3crop0300down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt06_08_poses\/4crop0300down.png} \\\\\n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/0crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/1crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/2crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/3crop0110down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/4crop0110down.png} \\\\\n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/0crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/1crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/2crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/3crop0260down.png} & \n \\includegraphics[width=\\linewidth, trim=17 0 37 0,clip]{pictures\/tshirt08_11_poses\/4crop0260down.png} \\\\ \n \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular}\n\t\\caption{Comparison of the reconstruction results for unseen data \\YL{on the TSHIRT} dataset.\n\t\t(a) coarse simulation,\n\t\t(b) results of \\cite{chen2018synthesizing},\n\t\t(c) results of \\cite{zurdo2013wrinkles},\n\t\t(d) our results,\n\t\t(e) ground truth generated by PBS.\n\t\tOur method produces the detailed shapes of higher quality than Chen {\\itshape et al.} and Zurdo {\\itshape et al.}, see the folds and wrinkles in the close-ups. Chen {\\itshape et al.} results suffer from seam line problems. 
The results of Zurdo {\\itshape et al.} exhibit clearly noticeable artifacts.}\n\t\\label{fig:comparetoothers_tshirt}\n\\end{figure}\n \\begin{figure}[!htb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.01} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n \\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0010down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0010down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0060down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0060down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0140down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0140down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/0crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/1crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/2crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/3crop0160down.png} & \n \t\\includegraphics[width=\\linewidth, trim=28 0 28 5,clip]{pictures\/pants09_07_poses\/4crop0160down.png} \\\\ \n\t \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular} \n\t\\caption{Comparison of the reconstruction results for unseen data in the PANTS dataset.\n\t\t(a) coarse simulation results,\n\t\t(b) results of \\cite{chen2018synthesizing}, mainly smooth the coarse meshes and barely exhibit any wrinkles.\n\t\t(c) results of \\cite{zurdo2013wrinkles}, have clear artifacts on examples where LR and HR meshes are not aligned well, \\textit{e.g. 
} the trouser legs.\n\t\t(d) our results, ensures physically-reliable results.\n\t\t(e) ground truth generated by PBS.\n\t}\n\t\\label{fig:comparetoothers_pants}\n\\end{figure} \n\\begin{figure*}[htb]\n\t\\centering\n\t\\subfloat[Input]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0080_00_skirtlrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0110_00_skirtlrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0140_00_skirtlrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/0\/frm0160_00_skirtlrkeyshot.png} \n\t\\end{minipage}} \n\t\\subfloat[Chen {\\itshape et al.}]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0080keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0110keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0140keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/1\/temp0160keyshot.png} \n\t\\end{minipage}} \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0080_00_skirtlr_result.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0110_00_skirtlr_result.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0140_00_skirtlr_result.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45 ,clip]{pictures\/skirt09_06_posescolormap\/1\/09_06_posesfrm0160_00_skirtlr_result.png} \n\t\\end{minipage}\n\t\\subfloat[Zurdo {\\itshape et al.}]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0080keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0110keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0140keyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/2\/temp0160keyshot.png} \n\t\\end{minipage}} \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0080_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0110_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0140_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/2\/frm0160_00_skirthr.png} \n\t\\end{minipage}\n\t\\subfloat[Ours]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 
45,clip]{pictures\/skirt09_06_poses\/3\/frm0080_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/3\/frm0110_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/3\/frm0140_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/3\/frm0160_00_skirthrkeyshot.png} \n\t\\end{minipage}}\n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0080_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0110_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0140_00_skirthr.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_posescolormap\/3\/frm0160_00_skirthr.png} \n\t\\end{minipage} \n\t\\subfloat[GT]{ \n\t\t\\begin{minipage}[b]{0.11\\linewidth} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0080_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0110_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0140_00_skirthrkeyshot.png} \n\t\t\t\\includegraphics[width=1.000000\\linewidth, trim=45 45 45 45,clip]{pictures\/skirt09_06_poses\/4\/frm0160_00_skirthrkeyshot.png} \n\t\\end{minipage}}\n\t\\begin{minipage}[b]{0.08\\linewidth} \n\t\t\\includegraphics[width=1.000000\\linewidth, trim=0 0 0 0,clip]{pictures\/bar.png}\n\t\\end{minipage}\n\t\n\t\\caption{Comparison of the reconstruction results for unseen data in the SKIRT dataset.\n\t\t(a) the coarse simulation,\n\t\t(b) the results of \\cite{chen2018synthesizing},\n\t\t(c) the results of \\cite{zurdo2013wrinkles},\n\t\t(d) our results,\n\t\t(e) the ground truth generated by PBS.\n\tThe reconstruction accuracy is qualitatively showed as a difference map. \n\tReconstruction errors are color-coded and warmer colors indicate larger errors. Our method leads to significantly lower reconstruction errors. }\n\t\\label{fig:comparetoothers_skirt}\n\\end{figure*}\n\n\\subsection{\\YL{Fine Detail} Synthesis Results and Comparisons}\nWe now demonstrate our method using various \\YL{detail enhancement}\nexamples \\YL{both} quantitatively and qualitatively, \\YL{including added wrinkles and rich dynamics.}\nUsing detailed meshes generated by PBS as ground truth, we compare our results with physics-based coarse simulations, our implementation of a deep learning-based method \\cite{chen2018synthesizing} and a conventional machine learning-based method \\cite{zurdo2013wrinkles}.\n\nFor quantitative comparison, we use \\YL{three} metrics: Root Mean Squared Error (RMSE), Hausdorff distance as well as spatio-temporal edge difference (STED) \\cite{Vasa2011perception} designed for motion sequences with a focus on `perceptual' error of models.\nThe results are shown in Table~\\ref{table:compare_zurdo_chen2}.\nNote that \\YL{for the datasets from the top to bottom in the table,} the Hausdorff \\YL{distances} between LR meshes and the ground truth are increasing. 
\\YL{This} tendency is in accordance with the deformation range from tighter T-shirts and pants to skirts and square\/disk tablecloth with higher degrees \\YL{of freedom}.\nSince the vertex position representation cannot handle rotations well, the larger scale the models deform, the more artifacts Chen {\\itshape et al.} \\cite{chen2018synthesizing} and Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} would \\YL{bring in} in the reconstructed models, \\YL{leading to increased} RMSE and Hausdorff distances. \nThe results indicate that our method has better reconstruction results \\YL{quantitatively} than the compared methods \\YL{on} the 5 datasets with \\YL{all the three} metrics.\nEspecially \\YL{for} the SKIRT, SHEET and DISK \\YL{datasets} which \\YL{contain} loose cloth \\YL{and hence larger and richer deformation}, our \\YL{method} outperforms \\YL{existing methods significantly} since tracking between coarse and fine meshes \\YL{is} not required in our algorithm.\n\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.01} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n \\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/2crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0130down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0130down.png}\\\\ \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/2crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0180down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0180down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/2crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0260down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0260down.png} \\\\ \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/0crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/1crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 
6,clip]{pictures\/crashballpole0.3withhuman\/2crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/3crop0320down.png} & \n \t\\includegraphics[width=\\linewidth, trim=25 0 50 6,clip]{pictures\/crashballpole0.3withhuman\/4crop0320down.png} \\\\ \n\t \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular}\n\t\\caption{Comparison of the reconstruction results for unseen data in the SHEET dataset.\n\t\t(a) the coarse simulation,\n\t\t(b) the results of \\cite{chen2018synthesizing}, with inaccurate and\nrough wrinkles different from the GT.\n\t\t(c) the results of \\cite{zurdo2013wrinkles}, show similar global shapes to coarse meshes with some wrinkles and unexpected sharp corner.\n\t\t(d) our results, show mid-scale wrinkles and similar global deformation as GT.\n\t\t(e) the ground truth generated by PBS.}\n\t\\label{fig:comparetoothers_crashball}\n\t\\vspace{-0.2cm}\n\\end{figure} \n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.001} \n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}>{\\centering\\arraybackslash}m{0.2\\linewidth}} \n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/0crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/1crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/2crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/3crop0050down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=42 0 30 30,clip]{pictures\/disk4.300withhuman\/4crop0050down.png} \\\\\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/0crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/1crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/2crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/3crop0090down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/4crop0090down.png} \\\\\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/0crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/1crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/2crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/3crop0160down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/4crop0160down.png} \\\\\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/0crop0360down.png} &\n\t\t\t 
\\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/1crop0360down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/2crop0360down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/3crop0360down.png} &\n\t\t\t \\includegraphics[width=\\linewidth, trim=54 0 18 30,clip]{pictures\/disk4.300withhuman\/4crop0360down.png} \\\\ \n\t\t\t \\vspace{0.3cm} \\footnotesize (a) Input & \\vspace{0.3cm} \\hspace{-0.3cm} \\footnotesize (b) Chen {\\itshape et al.} & \\vspace{0.3cm} \\hspace{-0.2cm} \\footnotesize (c) Zurdo {\\itshape et al.} & \\vspace{0.3cm} \\footnotesize (d) Ours & \\vspace{0.3cm} \\footnotesize (e) GT \n\t\\end{tabular}\n\t\\caption{Comparison of the reconstruction results for unseen data in the DISK dataset.\n\t\t(a) the coarse simulation,\n\t\t(b) the results of \\cite{chen2018synthesizing}, cannot reconstruct credible shapes. \n\t\t(c) the results of \\cite{zurdo2013wrinkles}, show apparent artifacts near the flying tails since no tracking constraints applied.\n\t\t(d) our results, reproduce large-scale deformations, see the tail of the disk flies like a fan in the wind.\n\t\t(e) the ground truth generated by PBS.}\n\t\\label{fig:comparetoothers_disk}\n\\end{figure} \n\n\\YL{We further make qualitative comparisons on the 5 datasets.}\nFig. \\ref{fig:comparetoothers_tshirt} shows \\YL{detail synthesis results} on the TSHIRT dataset.\nThe first and second \nrows \nare from \\YL{sequence} 06\\_08, a woman dribbling the basketball sideways and the \\YL{last two rows} are from \\YL{sequence} 08\\_11, a walking woman.\nIn this dataset of tight t-shirts on human bodies, Chen {\\itshape et al.} \\cite{chen2018synthesizing}, Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} and our method are able to reconstruct the garment model completely with mid-scale wrinkles.\nHowever, Chen {\\itshape et al.} \\cite{chen2018synthesizing} suffer from the seam line problems due to \\YL{the use of geometry image representation}. \nA geometry image is a parametric sampling of the shape, which is \\YL{made a topological disk by cutting through some seams.} \nThe boundary of the disk needs to be fused so that the reconstructed mesh has the original topology.\n\\YL{The super-resolved geometry image corresponding to high-resolution cloth animations are not entirely accurate, and as a result the fused boundaries no longer match exactly, }\n\\textit{e.g. } clear seam lines on the shoulder and crooked boundaries on the left side of the waist \\YL{for the examples} in Fig.~\\ref{fig:comparetoothers_tshirt} (b)),\n\\YL{while} our method \\YL{produces} better results than \\cite{chen2018synthesizing} and \\cite{zurdo2013wrinkles} which have \\YL{artifacts of unsmooth surfaces}.\n\nFig. \\ref{fig:comparetoothers_pants} shows comparative results of the animations of pants on a fixed body shape while changing the body pose over time. 
\nThe results of \\cite{chen2018synthesizing} \\YL{mainly} smooth the coarse meshes and barely exhibit \\YL{any} wrinkles.\nZurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} utilize tracking algorithms to ensure the \\YL{close alignment} between coarse and fine meshes, and thus the fine meshes are constrained \\YL{and do not exhibit the behavior of full physics-based simulation.}\n\\YL{So on the PANTS dataset,} the results of \\cite{zurdo2013wrinkles} have clear artifacts on examples \\YL{where} the LR and HR meshes are not aligned well, \\textit{e.g. } the trouser legs.\nDifferent from the two compared methods \\YL{that reconstruct displacements} or local coordinates, our method \\YL{uses} deformation-based features in both the encoding and decoding \\YL{phases}, which \\YL{do not suffer from such restrictions and ensure physically-reliable results.}\n\nFor looser garments like \\YL{skirts}, we show comparison results in Fig. \\ref{fig:comparetoothers_skirt}, with color coding to highlight the differences between the synthesized results and the ground truth.\nOur method successfully reconstructs the swinging skirt \\YL{caused by} the body motion (see the small wrinkles on the waist and the \\YL{medium-level} folds on the skirt \\YL{hem}).\nChen {\\itshape et al.} are able to reconstruct the overall shape of the skirt; however, there are many small unsmooth \\YL{triangles leading to noisy shapes}, due to the 3D coordinate representation with untracked fine meshes with abundant wrinkles.\nThis leads to unstable animation; please see the accompanying video.\nThe results of \\cite{zurdo2013wrinkles} have some problems with the global deformation; see the directions of the skirt hem and the large highlighted area in the color map.\nOur learned \\YL{detail} synthesis model provides better visual quality for shape generation, \\YL{and the generated results look} closer to the ground truth.\n \nInstead of garments dressed on human bodies, we additionally show some results of a free-flying tablecloth. \nThe comparison of the testing results \\YL{on} the SHEET dataset is shown in Fig.~\\ref{fig:comparetoothers_crashball}.\nThe results of \\cite{chen2018synthesizing} show inaccurate and rough wrinkles different from the ground truth. \nFor hanging sheets in the results of \\cite{zurdo2013wrinkles}, the global shapes are more like the coarse \\YL{meshes}, with some wrinkles and unexpected sharp corners, \\textit{e.g. } the left side in the last row of Fig. \\ref{fig:comparetoothers_crashball} (c),\nwhile ours show \\YL{mid-scale} wrinkles and global deformation similar to the high-resolution meshes. \n\nAs for the DISK dataset, from the visual results in Fig.~\\ref{fig:comparetoothers_disk}, we can see that Chen {\\itshape et al.} \\cite{chen2018synthesizing} and Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} cannot handle large-scale rotations well and cannot reconstruct credible shapes in such cases. \n\\gl{Especially for Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles}, the impact of tracking is significant for their algorithm.}\nThey can reconstruct the top and the part of the tablecloth near the cylinder, but the flying tails have apparent artifacts. \nOur algorithm does not have such drawbacks.\nNotice how our method successfully reproduces ground-truth deformations, including the overall drape (\\textit{i.e. 
}, how the tail of the disk flies like a fan in the wind) and mid-scale wrinkles.\n\n\\begin{table}[!htb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{User study results on cloth \\YL{detail} synthesis. We show the average ranking score of the three methods: Chen {\\itshape et al.} \\cite{chen2018synthesizing}, Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles}, and ours. The\n\t\tranking ranges from 1 (the best) to 3 (the worst). The results are calculated\n\t\tbased on 320 trials. We see that our method achieves the best scores in terms of\n\t\twrinkles, temporal stability \\YL{and overall quality}.}\n\t\\label{table:userstudy}\n\t\\centering \n\t\\begin{tabular}{cccc}\n\t\t\\toprule[1.2pt] \n\t\tMethod & Wrinkles & Temporal stability & Overall \\\\ \\hline \n\t\tChen {\\itshape et al.} & 2.184 & 2.1258 & 2.1319\\\\ \\hline \n\t\tZurdo {\\itshape et al.} & 2.3742 & 2.5215 & 2.4877\\\\ \\hline \n\t\tOurs & \\textbf{1.4417} & \\textbf{1.3528} & \\textbf{1.3804} \\\\\n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table}\n\\gl{We further conduct a user study to evaluate the stability and realism of the synthesized dense mesh dynamics. 32 volunteers participate in this user study.}\nFor every question, we give one sequence and 5 images of coarse meshes as references, \\YL{and} then let the user rank the corresponding outputs from Chen {\\itshape et al.} \\cite{chen2018synthesizing}, Zurdo {\\itshape et al.} \\cite{zurdo2013wrinkles} and ours according to three different criteria (wrinkles, temporal stability and overall quality). \nWe shuffle the order of the algorithms each time we present a question and show the shapes from the three methods in random order \\YL{to avoid bias}. \nWe show the results of the user study in Table \\ref{table:userstudy}, where we observe that our generated \\YL{shapes} perform the best on all three criteria. 
\n\n\\begin{table}[tb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\caption{Per-vertex error (RMSE) on synthesized shapes with different feature representations: 3D coordinates, ACAP and TS-ACAP.}\n\t\\label{table:feature_compare}\n\t\\centering\n\t\\begin{tabular}{cccccc}\n\t\t\\toprule[1.2pt]\n\t\tDataset & TSHIRT & PANTS & \tSKIRT & SHEET & DISK \\\\ \\hline\n\t\t3D coordinates & 0.0101 & 0.0193 & 0.00941 & 0.00860 & 0.185 \\\\ \\hline\n\t\tACAP & 0.00614 & 0.00785 & 0.00693 & 0.00606 & 0.0351 \\\\ \\hline\n\t\tTS-ACAP & \\textbf{0.00546} & \\textbf{0.00663} & \\textbf{0.00685} & \\textbf{0.00585} & \\textbf{0.0216}\\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table}\n\\begin{figure}[tb]\n\t\\centering\n\t\\setlength{\\fboxrule}{0.5pt}\n \\setlength{\\fboxsep}{-0.01cm}\n\t\\setlength{\\tabcolsep}{0.00cm} \n \\renewcommand\\arraystretch{0.001}\n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.25\\linewidth}>{\\centering\\arraybackslash}m{0.25\\linewidth}>{\\centering\\arraybackslash}m{0.25\\linewidth}>{\\centering\\arraybackslash}m{0.25\\linewidth}}\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/0\/crop0040.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/1\/crop0040.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/2\/crop0040.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/3\/crop0040.png} \\\\\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/0\/crop0075.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/1\/crop0075.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/2\/crop0075.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/3\/crop0075.png} \\\\\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/0\/crop0110.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/1\/crop0110.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/2\/crop0110.png} &\n \t\\includegraphics[width=1.000000\\linewidth, trim=63 0 0 0,clip]{pictures\/skirt09_07_poseswithhuman\/3\/crop0110.png} \\\\ \n \t\\vspace{0.3cm} \\small (a) Input & \\vspace{0.3cm}\\small (b) Coordinates & \\vspace{0.3cm}\\small (c) Ours & \\vspace{0.3cm}\\small (d) GT\n \\end{tabular} \n\t\\caption{The evaluation of the TS-ACAP feature in our detail synthesis method. 
\n\t\t(a) input coarse \\YL{shapes},\n\t\t(b) the results using 3D coordinates, which can be clearly seen the rough appearance, unnatural deformation and some artifacts, especially in the highlighted regions with details shown in the close-ups.\n\t\t(c) our results, which show smooth looks and the details are more similar to the GT.\n\t\t(d)\tground truth.\n\t\t }\n\t\\label{fig:ablationstudy_coordiniates_skirt}\n\\end{figure}\n\\begin{figure}[htb]\n\t\\centering\n\t\\setlength{\\tabcolsep}{0.05cm} \n \\renewcommand\\arraystretch{0.001}\n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.02\\linewidth}>{\\centering\\arraybackslash}m{0.31\\linewidth}>{\\centering\\arraybackslash}m{0.31\\linewidth}>{\\centering\\arraybackslash}m{0.31\\linewidth}}\n \t \\rotatebox{90}{\\small ACAP} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/0\/crop0103.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/0\/crop0104.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/0\/crop0105.png} \\\\\n \t\\rotatebox{90}{\\small TS-ACAP} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/1\/crop0103.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/1\/crop0104.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=90 0 0 60,clip]{pictures\/tacap_acap\/1\/crop0105.png} \\\\ \n \\vspace{0.3cm} & \\vspace{0.3cm} \\small $t = 103$ & \\vspace{0.3cm} \\small $t = 104$ & \\vspace{0.3cm} \\small $t = 105$ \n\t\\end{tabular} \n\t\\caption{\n\t\t Three consecutive frames from a testing sequence in the DISK dataset. First row: the results of ACAP. As shown in the second column, the enlarged wrinkles are different from the previous and the next frames.\n\t\t This causes jumping in the animation.\n\t\t Second row: the consistent results obtained via TS-ACAP feature, demonstrating that our TS-ACAP representation ensures the temporal coherence. \n\t}\n\t\\label{fig:jump_acap}\n\\end{figure}\n\\begin{table}[tb]\n\t\\renewcommand\\arraystretch{1.5}\n\t\\fontsize{7.5}{9}\\selectfont\n\t\\caption{Comparison of RMSE between synthesized shapes and ground truth with different networks, \\textit{i.e. 
} without temporal modules, with RNN, with LSTM and ours with the Transformer network.}\n\t\\label{table:transformer_compare}\n\t\\centering\n\t\\begin{tabular}{cccccc}\n\t\t\\toprule[1.2pt]\n\t\tDataset & TSHIRT & PANTS & \tSKIRT & SHEET & DISK \\\\ \\hline\n\t\tWO Transformer & 0.00909 & 0.01142 & 0.00831 & 0.00739 & 0.0427 \\\\ \\hline\n\t\tWith RNN & 0.0435 & 0.0357 & 0.0558 & 0.0273 & 0.157 \\\\ \\hline\n\t\tWith LSTM & 0.0351 & 0.0218 & 0.0451 & 0.0114 & 0.102 \\\\ \\hline\n\t\tWith Transformer & \\textbf{0.00546} & \\textbf{0.00663} & \\textbf{0.00685} & \\textbf{0.00585} & \\textbf{0.0216} \\\\ \n\t\t\\bottomrule[1.2pt]\n\t\\end{tabular}\n\\end{table} \n\\begin{figure}[tb]\n \t\\centering\n \\setlength{\\tabcolsep}{0.0cm} \n \\renewcommand\\arraystretch{-1.9}\n \t\\begin{tabular}{>{\\centering\\arraybackslash}m{0.08\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}>{\\centering\\arraybackslash}m{0.18\\linewidth}}\n \t\t\\rotatebox{90}{\\small (a) Input}& \n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0008.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0016.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0022.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0094.png} &\n\t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/0\/0200.png} \n \t\t\\\\\n \t\t \\rotatebox{90}{\\small (b) EncDec} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0008.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0016.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0022.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0094.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/5\/0200.png} \n \t\t\\\\\n \t\t \\rotatebox{90}{\\small (c) RNN} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0008.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0016.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0022.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0094.png} &\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/rnn\/0200.png} \n\t \t\\\\\n\t \t\\rotatebox{90}{\\small (d) LSTM}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0008.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0016.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0022.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0094.png}&\n\t \t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/lstm\/0200.png} \n \t\t\\\\\n \t\t \\rotatebox{90}{\\small (e) Ours}& \n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 
5,clip]{pictures\/tshirt06_08_poses\/3\/0008.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0016.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0022.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0094.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/3\/0200.png} \n \t\t\\\\ \n \t\t \\rotatebox{90}{\\small (f) GT}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0008.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0016.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0022.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0094.png}&\n \t\t\\includegraphics[width=\\linewidth, trim=5 5 5 5,clip]{pictures\/tshirt06_08_poses\/4\/0200.png} \n \t\\end{tabular} \n \t\\caption{The evaluation of the Transformer network in our model for wrinkle synthesis.\n \t\tFrom top to bottom we show (a) \\gl{the input coarse meshes from physical simulation}, (b) the results with an encoder-decoder \\YL{dropping out temporal modules}, (c) the results with RNN \\cite{chung2014empirical}, (d) the results with LSTM \\cite{hochreiter1997long}, (e) ours, and (f) the ground truth generated by PBS.}\n \t\\label{fig:transformer_w_o_tshirt}\n \\end{figure} \n\n\\subsection{\\YL{Evaluation of} Network Components}\nWe evaluate the effectiveness of our network components in two aspects: the \\YL{capability} of the TS-ACAP feature and the \\YL{capability} of the Transformer network. \nWe evaluate our method qualitatively and quantitatively on different datasets.\n\n\\textbf{Feature Representation Evaluation}.\nTo verify the effectiveness of our TS-ACAP feature, we compare per-vertex position errors against other feature representations to evaluate the generated shapes on different datasets quantitatively. \nWe compare our method using the TS-ACAP feature with our transduction methods using 3D vertex coordinates and ACAP, with network layers and parameters adjusted accordingly to optimize performance for each representation.\nThe details of the numerical comparison are shown in Table \\ref{table:feature_compare}.\nACAP and TS-ACAP show quantitative improvements over 3D coordinates. \nIn Fig. \\ref{fig:ablationstudy_coordiniates_skirt}, we show several comparative examples of animated skirts using coordinates and TS-ACAP. \n\\YL{The results using coordinates show rough appearance, unnatural deformation and some artifacts, especially in the highlighted regions with details shown in the close-ups.} Our results with TS-ACAP are more similar to the ground truth than the ones with coordinates. \nACAP has the problem of temporal inconsistency; thus the results shake or jump frequently. \n\\YL{Although the use of the Transformer network can somewhat mitigate this issue, such artifacts can appear even with the Transformer.}\n\\YL{Fig.~\\ref{fig:jump_acap} shows} three consecutive frames from a testing sequence in the DISK dataset.\nResults with TS-ACAP show more consistent wrinkles than the ones with ACAP thanks to the temporal constraints.\n\n\\textbf{Transformer Network Evaluation}.\nWe also evaluate the impact of the Transformer network in our pipeline. 
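\nAs a reference for this evaluation, the following PyTorch-style sketch illustrates the kind of attention-based transduction module considered here, using the hyper-parameters reported in Section~\\ref{sec:implementation} (2 encoder and 2 decoder blocks, 8 attention heads, 16-dimensional latent features, and a causal mask over the fine-feature frames); it is a simplified illustration rather than our exact implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass DeformTransformerSketch(nn.Module):\n    # illustrative coarse-to-fine transduction module (hypothetical\n    # re-implementation) mapping encoded coarse TS-ACAP features to\n    # fine TS-ACAP features\n    def __init__(self, latent_dim=16, n_heads=8, n_blocks=2):\n        super().__init__()\n        self.net = nn.Transformer(d_model=latent_dim, nhead=n_heads,\n                                  num_encoder_layers=n_blocks,\n                                  num_decoder_layers=n_blocks,\n                                  batch_first=True)\n\n    def forward(self, coarse_latent, fine_latent_prev):\n        # inputs: (batch, n_frames, latent_dim) latent sequences\n        n = fine_latent_prev.size(1)\n        # causal mask: frame t must not attend to later frames\n        mask = self.net.generate_square_subsequent_mask(n).to(\n            fine_latent_prev.device)\n        return self.net(coarse_latent, fine_latent_prev, tgt_mask=mask)\n\n# trained with the mean squared error between predicted and\n# ground-truth fine features\nloss_fn = nn.MSELoss()\n\\end{verbatim}\n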
\nWe compare our method with an encoder-decoder network that drops the temporal modules, and with our pipeline using the recurrent neural network (RNN) and the long short-term memory (LSTM) \\YL{module}.\nAn example from the TSHIRT dataset is given in Fig. \\ref{fig:transformer_w_o_tshirt}, \\YL{showing} 5 frames in order.\nThe results without any temporal modules show artifacts on the sleeves and neckline, since these regions undergo strong \\YL{forces}. %\nThe models using RNN and LSTM stabilize the sequence by eliminating dynamic and detailed deformations, but all the results keep wrinkles on the chest from the initial state\\YL{, lacking rich dynamics.}\nBesides, they are not able to generate stable and realistic garment animations \\YL{that look similar to} the ground truth,\n\\YL{while} \\YL{our} method with the Transformer network \\YL{noticeably} improves the temporal stability, \\YL{producing results close to the ground truth.}\nWe also quantitatively evaluate the performance of the Transformer network \\YL{in our method} via the per-vertex error. \nAs shown in Table \\ref{table:transformer_compare}, the RMSE of our model \\YL{is} smaller than that of the other models.\n\n\\section{Conclusion and Future Work}\\label{sec:conclusion}\nIn this paper, we introduce a novel algorithm for synthesizing robust and realistic cloth animations via deep learning.\nTo achieve this, we propose a geometric deformation representation named TS-ACAP which embeds the details well and ensures temporal consistency.\n\\YL{Benefiting} from \\YL{the} deformation-based feature, our algorithm has no explicit requirement of tracking between coarse and fine meshes. \nWe also use the Transformer network based on attention mechanisms to map the coarse TS-ACAP features to fine TS-ACAP features, maintaining the stability of our generation.\nQuantitative and qualitative results reveal that our method can synthesize realistic-looking wrinkles on various datasets, such as draping tablecloths and tight or \\YL{loose} garments dressed on human bodies. \n \nSince our algorithm synthesizes \\YL{details} based on the coarse meshes, the time for coarse simulation is unavoidable.\nEspecially for tight garments like T-shirts and pants, the collision solving phase is time-consuming.\nIn the future, we intend to generate coarse sequences for tight cloth via skinning-based methods in order to reduce the computation in our pipeline.\nAnother limitation is that our current network is not able to deal with all kinds of garments with different topologies.\n\\newpage\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nHurricanes (also known as tropical cyclones or typhoons) are low-pressure weather systems with well-organized convection, which are among the most destructive disasters on Earth because they can induce strong winds, a rapid rise in local sea level, and heavy precipitation (Anthes \\cite{Anthes}, Emanuel \\cite{Emanuelb}). Hurricanes can also enhance the vertical mixing of heat and nutrients in the ocean, increase horizontal oceanic heat transport, and, subsequently, influence global climate (Emanuel \\cite{Emanuela}, Jansen \\& Ferrari \\cite{Jansen}, Li \\& Sriver \\cite{LiH}). For instance, the curl of strong hurricane winds can cause divergence and convergence in the upper ocean, producing regions of upwelling and downwelling, enhancing the exchanges between the surface and subsurface oceans. 
Therefore, it is an important and interesting issue to consider whether hurricanes can form on other potentially habitable planets beyond Earth.\n\nIn this study, we focus on tidally locked terrestrial planets around M dwarfs due to their relatively large planet-to-star ratios and frequent transits based on observations. These planets differ from Earth in three major aspects: the uneven distribution of stellar energy between the permanent day and night sides, the slow rotation rate due to strong tidal force, and the redder stellar spectrum than the Sun. Several atmospheric general circulation models (AGCMs) have been employed and modified to simulate and understand the atmospheric and climatic dynamics of the planets (Joshi et al. \\cite{Joshi}, Merlis \\& Schneider \\cite{Merlis}, Edson et al. \\cite{Edson}, Pierrehumbert \\cite{Pierrehumberta}, Wordsworth et al. \\cite{Wordsworthetal}, Leconte et al. \\cite{Lecontea}, Menou \\cite{Menou}, Showman et al. \\cite{Showmanb}, Carone et al. \\cite{Carone}, Wordsworth \\cite{Wordsworth}, Shields et al. \\cite{Shields}, Turbet et al. \\cite{Turbetetal}, Boutle et al. \\cite{Boutle}, Checlair et al. \\cite{Checlair}, Haqq-Misra et al. \\cite{Haqq-Misra}, Kopparapu et al. \\cite{Kopparapu}, Noda et al. \\cite{Noda}, Wolf \\cite{Wolf2017}, Turbet et al. \\cite{Turbet}, Del Genio et al. \\cite{Del Genio}, Pierrehumbert \\& Hammond \\cite{Pierrehumbertb}, Yang et al. \\cite{Yang}). \n\nThese AGCMs have horizontal resolutions that are always equal to or larger than 300 km, so that most of their studies focus on planetary-scale phenomena, such as global-scale Walker circulation, equatorial superrotation, and forced Rossby and Kelvin waves, and their models are not able to properly simulate the characteristic features of hurricanes, such as the warm core, the eye-eyewall structure, and the spiral rain bands. No work has investigated synoptic phenomena apart from Bin et al. (\\cite{Bin}). Based on the output data of an AGCM, the authors estimated genesis potential index of hurricanes and showed that the probability of hurricane formation is low for planets in the middle range of the habitable zone of M dwarfs. However, the model resolution they used was not directly capable of simulating hurricanes and the question of whether the empirical index could be applied to exoplanets could not be answered. Moreover, they considered planets in the middle range of the habitable zone only. Here, we show that the possibility of hurricane formation increases with temperature and for planets with higher temperatures (closer to the inner edge of the habitable zone), the possibility is greater.\n\nIn this study, we explicitly simulate hurricane formation on tidally locked terrestrial planets with a high-resolution ($\\approx$50~km) AGCM. The structure of the paper is as follows. Section~2 describes our methods, Section~3 presents our results, and Section~4 gives our conclusions. \n\n\n\n\n\\section{Model description and experimental design}\nFor our model, we used the global Community Atmosphere Model version 4 (CAM4) with a dynamical core of finite volume (Neale et al. \\cite{Neale}). Deep convection was parameterized using the updated mass flux scheme of Zhang and McFarlane (\\cite{ZhangG}). Subgrid-scale momentum transport associated with convection was included (Richter \\& Rasch \\cite{Richter}). The parameterization of shallow moist convection is based on Hack (\\cite{Hack}). Condensation, evaporation, and precipitation parameterization is based on Zhang et al. 
(\\cite{ZhangM}) and Rasch and Kristjansson (\\cite{Rasch}). Cloud fraction is diagnosed from atmospheric stratification, convective mass flux, and relative humidity (Slingo \\cite{Slingo}, Hack et al. \\cite{Hack1993}, Kiehl et al. \\cite{Kiehl}, Rasch \\& Kristjansson \\cite{Rasch}). The realistic radiative transfer of water vapor, clouds, greenhouse gases, and aerosols are included as well (Ramanathan \\& Downey \\cite{Ramanathan}, Briegleb \\cite{Briegleb}, Collins et al. \\cite{Collins}, Neale et al. \\cite{Neale} ). \n\n\n\nThe horizontal resolution we employed is 0.47$^{\\circ}$\\,$\\times$\\,0.63$^{\\circ}$ in latitude and longitude, respectively. The number of vertical levels is 26. The planetary surface is covered by seawater throughout (namley, an aquaplanet). Because of the high resolution and limited computational power, we specify surface temperature (T$_S$) in the simulations. With a fixed T$_S$, the atmosphere reaches an equilibrium state within several years. If the model were coupled to a 50-m slab ocean, the surface and atmosphere would require tens of years to reach the equilibrium state, which is about one order of magnitude longer than that in the simulations with fixed T$_S$. The thermal inertia of the slab ocean is much larger than that of the atmosphere. If the model were coupled to a fully dynamical ocean with a depth of, for example, 3000 m, the model will require thousands of years to reach the equilibrium state due to the high thermal inertia and the slow motion of the deep ocean. For simulating hurricanes, a fixed surface temperature experiment is a good start and a useful method for understanding the formation and the properties of hurricanes, as found in hurricane simulations on Earth, carried out, for example, in the studies of Held and Zhao (\\cite{Held}) and Khairoutdinov and Emanuel (\\cite{Khairoutdinov}) and the recent review papers of Emanuel (\\cite{Emanuelb}) and Merlis and Held (\\cite{Merlis19}). The fixed-temperature surface acts as a boundary condition for the atmosphere system. Under a fixed T$_S$, the surface and the top of the atmosphere are not in energy balance, but the atmosphere itself is in energy balance; this is because the energy deficit or excess at the surface is approximately equal to that at the top of the atmosphere. In the coupled slab ocean experiment (shown in Sect. 4 below), the surface, the top of the atmosphere, and the atmosphere are all in energy balance.\n\n\nThe surface temperature is set according to previous simulations of lower resolution AGCMs coupled to a 50-m slab ocean (Yang et al. \\cite{Yang13}, Wolf et al. \\cite{Wolf}). On the day side, the surface temperature is a function of latitude and longitude: $(T_{max}-T_{min})cos(\\varphi)cos(\\lambda)+T_{min}$, or $(T_{max}-T_{min}){cos}^{1\/4}(\\varphi){cos}^{1\/4}(\\lambda)+T_{min}$, where $T_{max}$ is the maximum surface temperature, $T_{min}$ is the minimum surface temperature, $\\varphi$ is the latitude, and $\\lambda$ is the longitude. On the night side, the surface temperature is uniform with a value of $T_{min}$. Three groups of $T_{max}$ and $T_{min}$ are used. One is for planets near the inner edge of the habitable zone, 315 K \\& 310 K. 
\n\n\nThe planetary rotation period is set equal to the orbital period. Four rotation periods are examined: 6, 10, 20, and 40 Earth days. For other types of spin-orbit resonance, such as the 3:2 resonance of Mercury, the climate lies between that of synchronous rotation and that of the rapid rotation of Earth (Yang et al. \\cite{Yang14}); we have not carried out such experiments to date. Planetary radius and gravity are set to be the same as those of Earth, but both obliquity and eccentricity are set to zero. Stellar temperature is set to 2,600 or 3,700 K. The stellar radiation at the substellar point is set to 1,300 or 1,800 W m$^{-2}$. By default, the mean surface pressure is 1.0 bar with $\\approx$79\\% N$_2$ and $\\approx$21\\% O$_2$. For greenhouse gases, we set the CO$_2$ concentration to 367 parts per million by volume (ppmv), N$_2$O to 316 parts per billion by volume (ppbv), and CH$_4$ to 1760 ppbv. The ozone concentration is set to be the same as on present-day Earth, which may influence the outflow temperature of hurricanes and the overshooting of extremely strong convection.\n\n\nIn order to briefly test the effect of atmospheric composition on hurricane formation, we performed several idealized experiments in which the background gas is set to H$_2$, He, N$_2$, O$_2$, and CO$_2$, respectively. The corresponding mean molecular weights are 2.02, 4.00, 28.01, 31.99, and 44.00 g mole$^{-1}$, and the corresponding specific heats (Zhang \\& Showman \\cite{ZhangX}) are 28.9, 20.8, 29.1, 29.5, and 37.2 J mole$^{-1}$ K$^{-1}$. We modify these two constants only. The model we employed is incapable of calculating the radiative transfer of dense H$_2$, He, O$_2$, or CO$_2$; meanwhile, surface temperatures under background gases that differ from Earth's have not been carefully examined. Thus, we chose to use a globally uniform surface temperature (301 K) and stellar radiation (340 W m$^{-2}$), with neither a seasonal nor a diurnal cycle. This idealized thermal boundary condition is unrealistic, but it avoids the effects of strong wind shear, baroclinic zones, or other features that may inhibit hurricane formation or propagation (Merlis \\& Held \\cite{Merlis19}). We used two planetary rotation periods: one and three Earth days.\n\n\n\nThe initial states of the experiments were based on long-term (40 Earth years) simulations using a lower resolution of 4$^{\\circ}$$\\times$5$^{\\circ}$ or 1.9$^{\\circ}$$\\times$2.5$^{\\circ}$ under the same experimental designs and parameterization schemes. Each experiment was then run for five Earth years at high resolution, and the last four years were used for the analysis presented below.\n\n\nHurricane detection and tracking are based on six-hourly model output variables using the Geophysical Fluid Dynamics Laboratory tracking algorithm (Zhao et al. \\cite{Zhao}). Candidate hurricanes are identified by finding regions that satisfy the following criteria: 1) the local 850-hPa relative vorticity maximum exceeds 3.5$\\times$10$^{-5}$ s$^{-1}$; 2) the 850-hPa warm-core temperature is at least 0.5 K warmer than the surrounding local mean; 3) the local sea level pressure minimum lies within 2$^{\\circ}$ latitude or longitude of the vorticity maximum, and the warm-core center lies within the same distance of the pressure minimum; 4) the maximum 850-hPa wind speed exceeds $\\approx$33 m s$^{-1}$ at some point during the lifetime. The exact threshold values affect the number of detected hurricanes but do not affect the main conclusions of this study.
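\n\nAs an illustration only (this is not the actual GFDL tracker code), the following Python sketch applies the four threshold checks above to quantities that are assumed to have already been extracted for each tracked low; the variable names and the simple distance inputs are assumptions.\n\\begin{verbatim}\n# Illustrative thresholds taken from the detection criteria above.\nVORT_MIN  = 3.5e-5   # s^-1, 850-hPa relative vorticity maximum\nWARM_CORE = 0.5      # K, warm-core anomaly relative to the local mean\nMAX_DIST  = 2.0      # degrees latitude/longitude, collocation tolerance\nWIND_MIN  = 33.0     # m s^-1, maximum 850-hPa wind speed over the lifetime\n\ndef is_hurricane(vort850_max, warm_core_anom, dist_slp_vort, dist_slp_core, wind850_max):\n    # Return True if a tracked low satisfies all four criteria.\n    return (vort850_max > VORT_MIN and\n            warm_core_anom >= WARM_CORE and\n            dist_slp_vort <= MAX_DIST and\n            dist_slp_core <= MAX_DIST and\n            wind850_max > WIND_MIN)\n\n# Example: a vortex with a 0.8 K warm core, collocated centers, and 41 m/s peak winds.\nprint(is_hurricane(5.2e-5, 0.8, 0.7, 1.1, 41.0))   # True\n\\end{verbatim}\n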
\n\n\\begin{figure*}\n\\centering\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\includegraphics[width=0.9\\textwidth]{fig1.PNG}\n\\caption{Snapshots of hurricanes on a tidally locked aqua-planet near the inner edge of the habitable zone in the control experiment. From left to right, the variables are: instantaneous wind speed at 850 hPa, surface air pressure, precipitation, and the vertical component of relative vorticity at 850 hPa, respectively. From top to bottom, the rows correspond to three different moments. The four hurricanes are marked with black circles over the wind speed panels. The black cross is the substellar point in this figure and hereafter. See the supplementary video online for a visualisation of the evolution of the hurricanes.}\n\\label{fig1}\n\\end{figure*}\n\nFor Earth, a useful method for estimating the possibility of hurricane formation is the genesis potential index (GPI; Emanuel \\& Nolan \\cite{Emanuelc}), which is written as:\n\\begin{equation}\nGPI = |10^5 (\\zeta + f)|^{3\/2}(RH\/50)^3(V_{pot}\/70)^3(1.0+0.1V_{shear})^{-2},\n\\end{equation}\nwhere $\\zeta$ is the vertical component of relative vorticity, $f$ is the planetary vorticity, $RH$ is the relative humidity in the middle troposphere (600 hPa), $V_{shear}$ is the shear of the horizontal winds between the upper and lower troposphere (300 minus 850 hPa; also called the vertical wind shear), and $V_{pot}$ is the potential intensity. The $V_{pot}$ is a measure of the maximum near-surface wind that can be maintained by a hurricane under given environmental conditions. We note that these parameters are not entirely independent; for instance, vertical wind shear can influence relative humidity. The value of $V_{pot}$ is calculated based on a local balance between thermal energy import and mechanical energy dissipation (Emanuel \\cite{Emanuel}, Bister \\& Emanuel \\cite{Bister}), written as\n\n\\begin{equation}\n V_{pot}^2 = \\frac{C_k}{C_D}\\frac{T_s}{T_o}\\,[CAPE^* - CAPE^b]|_m,\n\\end{equation}\nwhere $C_k$ is the exchange coefficient for enthalpy, $C_D$ is the drag coefficient, $T_s$ is the surface temperature, $T_o$ is the mean outflow temperature, ${CAPE}^\\ast$ is the convective available potential energy of air lifted from saturation at the sea surface, and ${CAPE}^b$ is that of the boundary layer air. Both ${CAPE}^\\ast$ and ${CAPE}^b$ are computed at the radius of maximum surface wind.
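\n\nFor orientation, the sketch below evaluates the GPI expression above for a single grid column in Python; the input values are made-up illustrative numbers, and the routine assumes that $V_{pot}$ has already been obtained from the potential-intensity relation (the computation of $CAPE^*$ and $CAPE^b$ is not shown).\n\\begin{verbatim}\ndef genesis_potential_index(zeta, f, rh, v_pot, v_shear):\n    # Genesis potential index as defined above.\n    #   zeta    : 850-hPa relative vorticity (s^-1)\n    #   f       : planetary vorticity (s^-1)\n    #   rh      : 600-hPa relative humidity (%)\n    #   v_pot   : potential intensity (m s^-1)\n    #   v_shear : 300-minus-850-hPa wind shear (m s^-1)\n    return (abs(1.0e5 * (zeta + f)) ** 1.5\n            * (rh / 50.0) ** 3\n            * (v_pot / 70.0) ** 3\n            * (1.0 + 0.1 * v_shear) ** -2)\n\n# Illustrative values: weak planetary rotation, moist substellar air, weak shear.\nprint(genesis_potential_index(zeta=2.0e-5, f=1.0e-5, rh=80.0, v_pot=80.0, v_shear=5.0))\n\\end{verbatim}\n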
\n\nIn the discussion of the size of hurricanes, the Rossby deformation radius ($L_R$) is used, as described in Section 3.3 below. Here, $L_R$ is the length scale at which rotational effects become as important as the effects of gravity waves or buoyancy in the evolution of the flow in a disturbance. The $L_R$ is equal to $\\frac{NH}{f}$, where $N$ is the Brunt-Vaisala frequency and $f$ is the Coriolis parameter. Furthermore, $H$ is the scale height, equal to $\\frac{R^\\ast\\bar{T}}{M_d g}$, where $R^\\ast$ is the universal gas constant, $\\bar{T}$ is the mean air temperature, $M_d$ is the molar weight of the atmosphere, and $g$ is the surface gravity (Wallace \\& Hobbs \\cite{Wallace}). The $L_R$ decreases as $M_d$ increases due to the reduction of $H$. In idealized experiments with uniform surface temperature or uniform rotation, it is one of the rough scales that can be used for understanding hurricane size (Held \\& Zhao \\cite{Held}); however, under more realistic conditions such as on Earth, $L_R$ is not a good scaling (Chavas et al. \\cite{Chavas16}, Chavas \\& Reed \\cite{Chavas19}). \n\n\n\n\n\n\\begin{figure*}\n\\centering\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\includegraphics[width=0.8\\textwidth]{fig2.PNG}\n\\caption{Azimuthal-mean height-radius cross-section of a typical, mature hurricane on the day side of a tidally locked aqua-planet in the control experiment. (a) tangential wind speed ($v_\\theta$), (b) radial wind speed ($v_r$), (c) vertical velocity ($\\omega$), (d) relative humidity, (e) temperature anomaly from the environmental value ($\\Delta$T), and (f) equivalent potential temperature anomaly ($\\Delta \\theta_e$).}\n\\label{fig2}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig3.PNG}\n \\caption{Mechanisms for hurricane formation in the control experiment. (a): Location of hurricane formation (dots) and surface air temperature (color shading). The number of hurricanes during the four-year integration is 154. (b): Long-term mean surface air pressure (shading) and winds at 850 hPa (vectors). (c): Same as (b) but for an instantaneous snapshot. The life cycle of one hurricane on the day side: (d): Maximum surface wind speed (blue) and minimum surface pressure (red), (e): Relative vorticity (blue) and divergence (red) at 850 hPa, (f): Precipitation (blue) and vertical velocity at 850 hPa (red), and (g): Surface latent heat flux (blue). For (e)-(g), the variables are calculated as the area mean over 500$\\times$500 km$^{2}$ around the low-pressure center.}\n \\label{fig3}\n \\end{figure*}\n\n\n\n\n\\section{Results}\n\\subsection{Hurricanes on tidally locked planets}\n\nFigures 1 and 2 show the results of the control experiment for an aquaplanet orbiting close to the inner edge of the habitable zone around a late M dwarf of 2600 K. The rotation period is set to six Earth days. Both the day and night sides are hot (310--315~K), and the day-to-night surface temperature contrast is small (see Fig. 3a). This experimental design reflects the fact that the day-to-night atmospheric latent heat transport is very efficient for planets near the inner edge (Haqq-Misra et al. \\cite{Haqq-Misra}, Yang et al. \\cite{Yang}); oceanic heat transport can act to further reduce the day-to-night temperature contrast (Yang et al. \\cite{Yang}). The high surface temperature, weak day-to-night contrast, and relatively fast rotation rate (compared to planets in the middle range of the habitable zone or planets orbiting hotter stars) benefit hurricane formation. The temperatures of 310--315 K are close to the conditions of a runaway greenhouse state: tidally locked planets start to enter a runaway greenhouse state when the maximum surface temperature is close to or higher than these values (Wolf et al. 
\\cite{Wolf}).\n\n\n\n\nClearly, there are hurricanes in the control experiment. In the mature stage of the hurricanes, the maximum wind speed reaches $\\approx$30--50 m s$^{-1}$ (Fig.~1a), the surface air pressure at the center is $\\approx$950--980 hPa (Fig.~1b), precipitation reaches as high as 200--500 mm per day due to strong convection, especially near the eyewall (Fig.~1c), and the relative vorticity near the surface is on the order of 10$^{-4}$ s$^{-1}$ (Fig.~1d), values that are close to those on Earth (Anthes \\cite{Anthes}, Emanuel \\cite{Emanuelb}). The surface wind speed increases as the eyewall is approached from outside, but inside the eyewall the winds as well as the precipitation weaken rapidly. The winds rotate counter-clockwise in the northern hemisphere and clockwise in the southern hemisphere due to the Coriolis force, although in this experiment the Coriolis force is much weaker than on Earth. The precipitation exhibits well-defined spiral bands rather than being uniformly distributed throughout the region of the hurricane. The clear eye-eyewall patterns and spiral rain bands suggest that the model resolution of 50 km is adequate for resolving the hurricanes.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig4.PNG}\n \\caption{Environmental conditions for hurricane formation. (a): Genesis potential index (GPI); (b): Planetary vorticity; (c): Long-term mean relative vorticity at 850 hPa; (d): Relative humidity at 600 hPa; (e): Potential intensity; (f): Vertical shear of the horizontal winds between 300 and 850 hPa. We note that the environmental vorticity in panel (c) is one order of magnitude smaller than the vorticity of the hurricanes shown in Fig.~1d.\n }\n \\label{fig4}\n \\end{figure*}\n\n\n\n\n\nIn the vertical cross-section (Fig.~2), the tangential wind component dominates the flow through the system, although the radial wind is also significant. In the boundary layer, the winds flow towards the low-pressure center. Within the hurricane, upward motion is robust and tilts radially outward. The maximum near-surface ascent is located $\\approx$250 km outward from the hurricane center rather than at the center itself. Indeed, weak downward motion occupies the center. Due to the ascent and deep convection, relative humidity is high in the hurricane. Latent heat release from the convection and adiabatic warming by compression from the subsidence in the eye produce a warmer region of air with temperatures $\\approx$4~K above the environmental value. This can be called a 'warm core,' which is one of the most characteristic features of a hurricane. The warm core can also be seen in the equivalent potential temperature anomaly. Hurricanes on the night side (Fig.~S1) are smaller in horizontal size compared to those on the day side, $\\approx$500 versus 1500 km, but the vertical structures are similar.\n\n\nStatistical analysis shows that in the control experiment there are four preferred regions for hurricane genesis: the northern and southern tropics of the day side near the substellar point and the middle-to-high latitudes of the night side in each hemisphere (black dots in Fig.~3a). Hurricane formation is largely determined by small-scale convection, large-scale environmental conditions, and the interactions between them. 
Below, we explore the underlying mechanisms in two ways.\n\n\nOne way is the positive feedback between cumulus convection and a larger-scale disturbance, known as the conditional instability of the second kind (CISK; Charney \\& Eliassen \\cite{Charney}, Smith \\cite{Smith}, Yamasaki \\cite{Yamasaki}, Wang \\cite{Wang}). On tidally locked planets, the long-term mean atmospheric circulation is characterized by large-scale Rossby waves to the west of and poleward of the substellar point and Kelvin waves to the east of the substellar point (Fig.~3b), excited by the uneven distribution of stellar radiation (Showman \\& Polvani \\cite{Showmana}, Showman et al. \\cite{Showmanb}). The wave pattern is similar to the tropical Matsuno-Gill pattern on Earth (Matsuno \\cite{Matsuno}, Gill \\cite{Gill}), but the meridional (south-north) scale is larger, $\\approx$10,000 versus 3,000 km. The Rossby waves have one low-pressure center in each hemisphere, in which the environmental vertical motion is upward and the relative vorticity is positive (negative) in the northern (southern) hemisphere. This low-pressure system favors the onset of the CISK feedback: surface winds spiral into the low-pressure center and create horizontal convergence; this low-level convergence enhances the relative vorticity through vortex stretching, increases upward motion following the conservation of mass, and, in the meantime, brings water vapor into the center, amplifying cumulus convection and the release of latent heat. The latent heat release warms the air and lowers the air density by forcing more upper-level air to move outward away from the center, subsequently reducing the surface pressure; the lower surface pressure further enhances the low-level convergence and increases the growth rate of the relative vorticity through vortex stretching (Fig.~3d-f). This feedback is key to promoting the growth of small-scale disturbances into hurricanes in the background low-pressure regions. It is similar to that on Earth: hurricanes generally form in monsoon troughs and confluence zones where the surface pressure is relatively low, collocated with high cyclonic vorticity, convergent surface winds, and divergent winds aloft (Anthes \\cite{Anthes}, Emanuel \\cite{Emanuelb}). Moreover, during the formation phase, the latent heat flux from the surface to the boundary layer increases strongly (Fig.~3g), which also contributes to intensifying the hurricane through the feedback of wind-induced surface heat exchange (WISHE; Emanuel \\cite{Emanuelb}, Wang \\cite{Wang}).\n\n\n The lifetime of the hurricanes on the day side is $\\approx$40--50 Earth days, longer than that on Earth. This is mainly due to the absence of continents and the warm surface everywhere in the experiment. On the night side, the lifetime is shorter, $\\approx$10--20 Earth days.\n\n \\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig5.PNG}\n \\caption{Effects of planetary rotation and surface temperature on hurricane formation (dots) and the GPI (color shading). Experiments are for varying rotation period (a-c), varying surface temperature (d-e), and varying values of both (f). 
Experimental designs are the same as in the control experiment of Fig.~1 except that the rotation period is set to (a): 10, (b): 20, and (c): 40 Earth days; (d): the maximum surface temperature is reduced from 315 to 308 K and the night-side surface temperature is reduced from 310 to 275 K (see Fig.~6a); (e): same as (d) but for 301 K and 268 K, respectively (see Fig.~6b); and (f): same as (e) but for a rotation period of 40 Earth days. The number of hurricanes is 88, 34, 21, 10, 2, and 0, respectively. See Fig.~7 for snapshots of typical hurricanes. The southern hemisphere always has more hurricanes than the northern hemisphere; this may be due to some asymmetry in the initial state or some stochastic process in the model.\n }\n \\label{fig5}\n \\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\includegraphics[width=0.8\\textwidth]{fig6.PNG}\n\\caption{Surface air temperatures specified in the simulations of planets in the middle range of the habitable zone. (a): Maximum surface temperature is 308~K and the night-side surface temperature is uniform with a value of 275~K. (b): Same as (a) but the temperature is 7~K lower throughout.}\n\\label{fig6}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\includegraphics[width=0.8\\textwidth]{fig7.PNG}\n\\caption{Snapshots of instantaneous surface wind speed (m s$^{-1}$) of typical hurricanes in the experiments of varying rotation period (a-c), varying surface temperature (d-e), and variations of both (f). The six hurricanes are marked with black circles. There is no hurricane in panel (f). These experiments are the same as those shown in Fig.~5.}\n\\label{fig7}\n\\end{figure*}\n\n\nA second, more quantitative way is the empirical genesis potential index (GPI; Emanuel \\& Nolan \\cite{Emanuelc}, Camargo et al. \\cite{Camargo}, Bin et al. \\cite{Bin}). The index combines five environmental factors to predict the potential of hurricane formation: planetary vorticity, relative vorticity, relative humidity, potential intensity, and wind shear. A comparison between Fig.~3a and~4a reveals a positive correlation between the location of hurricane genesis and large values of the GPI. In the four hurricane formation regions, GPI values are large because the relative vorticity is large, the relative humidity is high over the substellar region, the potential intensity is large, and the vertical wind shear is weak (Fig.~4b-f). These properties favor hurricane formation in these regions. For example, when the shear is strong, an initial disturbance is ventilated by cooler or drier air, so temperature and moisture anomalies are hard to maintain (Tang \\& Emanuel \\cite{Tang}). In this experiment, the vertical wind shear is strong, especially in the tropics of the night side, associated with atmospheric superrotation (Showman et al. \\cite{Showmanb}, Pierrehumbert \\& Hammond \\cite{Pierrehumbertb}), and in the extratropics of the day side (Fig.~4f), so that there is nearly no hurricane formation there. The applicability of the empirical Earth-based GPI to the tidally locked planets is mainly due to the fact that we employed an Earth-like atmosphere here; when the atmospheric composition is quite different from that of Earth, the GPI does not serve as a good index, as addressed below. \n\nThe formation of hurricanes on the night side is surprising because the night side has no stellar radiation and the long-term mean vertical motion is downwelling rather than upwelling. 
However, in this experiment, which is based on a planet close to the inner edge of the habitable zone, the night-side surface is warm and the surface temperature gradient is small (Fig.~3a). In addition, there are a few transient, small low-pressure regions (Fig.~3c), the planetary vorticity is relatively high (Fig.~4b), and the vertical wind shear is weak (Fig.~4f) at the middle-to-high latitudes. These factors promote hurricane genesis there. However, when the surface temperature is decreased or the rotation rate is slowed down, there are fewer or altogether no hurricanes on the night side (see below).\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig8.PNG}\n \\caption{Effects of background gases on hurricane formation. From (a) to (e), these are snapshots of instantaneous surface air pressure (hPa) under background gases of H$_2$, He, N$_2$, O$_2$, and CO$_2$, respectively. In all these experiments, the planetary rotation period is one Earth day and the surface temperature is uniform (301 K). For experiments with a rotation period of three Earth days, the results are the same except that the hurricanes are larger in size but fewer in number in the latter three experiments.\n }\n \\label{fig8}\n \\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig9.PNG}\n \\caption{Snapshots of (a) surface temperature, (b) surface air pressure, (c) near-surface wind strength, (d) precipitation, and (e) relative vorticity in one experiment coupled to a slab ocean. Panels (c)-(e): only the hurricane region is shown in order to more clearly exhibit the structure of the hurricane. In this experiment, the rotation period (=orbital period) is ten Earth days, the CO$_2$ concentration is 300 ppmv, the stellar flux is 1450 W m$^{-2}$, and the star temperature is 2600 K. No oceanic heat transport is involved in this run.}\n \\label{fig9}\n \\end{figure*}\n\n\n\\subsection{Effects of planetary rotation rate and surface temperature} \nIn order to test the effect of the rotation rate, we performed three experiments in which the rotation period is increased (i.e., the rotation rate is decreased) while the other aspects of the experimental design are kept the same as in the control experiment. In the case of ten days, the hurricane frequency does not change much on the day side but decreases significantly on the night side (Fig.~5a). In the case of 20 days, there is nearly no evidence of hurricanes on the night side, and the number of hurricanes on the day side also decreases substantially (Fig.~5b). For the case of 40 days, hurricanes form only in regions very close to the substellar point (Fig.~5c). This trend as a function of rotation period is mainly attributed to three factors: the direct weakening of the planetary vorticity (quantified in the sketch below), the reduction of relative vorticity because the atmosphere becomes so steady that waves and disturbances become less active, and the decrease of relative humidity on the night side (with an increase on the day side) due to the strengthening of the thermally driven global Walker circulation (Fig.~S2).
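\n\nTo put a number on the first factor, the short Python sketch below evaluates the Coriolis parameter $f=2\\Omega\\sin(\\varphi)$ at 30$^{\\circ}$ latitude for the rotation periods considered here; it is a back-of-the-envelope illustration only.\n\\begin{verbatim}\nimport math\n\ndef coriolis(period_days, lat_deg=30.0):\n    # f = 2 * Omega * sin(latitude), with Omega = 2*pi / rotation period.\n    omega = 2.0 * math.pi / (period_days * 86400.0)   # rotation rate (s^-1)\n    return 2.0 * omega * math.sin(math.radians(lat_deg))\n\n# Earth-like spin (1 day) versus the four tidally locked cases.\nfor p in (1.0, 6.0, 10.0, 20.0, 40.0):\n    print('P = %4.0f d  ->  f(30 deg) = %.2e s^-1' % (p, coriolis(p)))\n\\end{verbatim}\nEven in the fastest-rotating case (six Earth days), $f$ is several times smaller than its terrestrial value, and it drops by another factor of about seven between the 6-day and 40-day cases.\n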
\n\nWhen the surface temperature is decreased, it is harder for a hurricane to form. When the maximum surface temperature is set to 308 K and the night-side surface temperature is set to 275 K (Fig.~6a), fewer hurricanes form in the vicinity of the substellar point and none form on the night side (Fig.~5d). When the maximum surface temperature is set to 301 K and the night-side surface temperature is set to 268 K (Fig.~6b), there are only two hurricane events during the integration of four Earth years (Fig.~5e), which is consistent with the GPI prediction in Bin et al. (\\cite{Bin}). The temperature of 301 K is close to the tropical surface temperatures on Earth. This suggests that hurricane formation on tidally locked planets requires a warmer surface because of their slower rotation rates and stronger wind shears. For planets with both slow rotation and low temperature, no hurricane can form (Fig.~5f). The decreasing trend of hurricane formation as a function of reduced surface temperature is due to two main processes: the relative humidity and potential intensity decrease because of the cooler surface and the weaker upwelling and convection, and the vertical wind shear becomes much stronger due to the enhanced temperature gradients between the day and night sides (Fig.~S3). Moreover, when the night-side surface temperature is low, air convergence from the night side to the day side brings cool air rather than warm air into the substellar region, suppressing hurricane formation there.\n\n\n\\subsection{Effect of bulk atmospheric composition}\nAtmospheric compositions on terrestrial exoplanets are as yet unknown. Here, we carry out a preliminary investigation of how the atmospheric molecular weight influences hurricane formation under a uniform surface temperature of 301 K. When the background atmosphere is set to H$_2$ or He, there is no hurricane, in contrast to the experiments with N$_2$, O$_2$, and CO$_2$ (Fig.~8), although the GPI value is comparable to or even larger than that shown above. This is due to the fact that the condensate, H$_2$O, is heavier than H$_2$ and He, so that any disturbance that brings water vapor upward will cause the density of a moist parcel to be larger than that of its surrounding environment, similar to the conditions in Saturn's atmosphere (Guillot \\cite{Guillot}, Li \\& Ingersoll \\cite{LiC}, Leconte et al. \\cite{Leconteb}). This process induces a negative buoyancy and stabilizes the atmosphere against convection. It can simply be understood by using the ideal gas equation, $p=\\rho R_d T_v$, and the virtual temperature ($T_v$),\n\\begin{equation}\n T_v=\\frac{p}{p+\\left(\\epsilon-1\\right)e}T,\n\\end{equation}\nwhere $p$ is the total air pressure, $e$ is the partial pressure of the condensate, $\\rho$ is the air density, $R_d$ is the gas constant of dry air, $\\epsilon$ is the ratio of the molecular weight of water vapor to that of the dry air, and $T$ is the air temperature. For an H$_2$-dominated (or He-dominated) atmosphere, $\\epsilon$ is equal to 9 (or 4.5), so that $T_v$ is smaller than $T$. Therefore, a moist parcel is heavier than a dry parcel under the same $p$ and $T$, and moist convection is inhibited, which is opposite to the conditions on Earth. Moreover, in the experiments with N$_2$, O$_2$, and CO$_2$, a clear trend is revealed, namely, that the size of the hurricanes decreases as the mean molecular weight is increased. This is due to the fact that the atmospheric scale height is inversely proportional to the mean molecular weight and, subsequently, the Rossby deformation radius (see the last paragraph of Section 2 above) becomes smaller, as illustrated in the sketch below.
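\n\nThe Python sketch below illustrates both effects, using the virtual-temperature expression above and the scale-height relation from Sect. 2 for $L_R$; the sample pressure, vapor pressure, temperature, stability, and rotation values are illustrative numbers rather than model output.\n\\begin{verbatim}\nimport math\n\nR_UNIV = 8.314      # J mol^-1 K^-1, universal gas constant\nG      = 9.81       # m s^-2, surface gravity (Earth value, as in the simulations)\nM_H2O  = 18.0e-3    # kg mol^-1, molar weight of water vapor\n\ndef virtual_temperature(T, p, e, M_dry):\n    # T_v = p / (p + (eps - 1) * e) * T, with eps = M_H2O / M_dry.\n    eps = M_H2O / M_dry\n    return p / (p + (eps - 1.0) * e) * T\n\ndef deformation_radius(M_dry, T_mean=280.0, N=0.01, period_days=1.0, lat_deg=30.0):\n    # L_R = N * H / f, with scale height H = R* T / (M_d * g).\n    H = R_UNIV * T_mean / (M_dry * G)\n    f = 2.0 * (2.0 * math.pi / (period_days * 86400.0)) * math.sin(math.radians(lat_deg))\n    return N * H / f\n\n# Moist air with T = 301 K, p = 1000 hPa, and a vapor pressure of 30 hPa:\nfor name, M in (('H2', 2.02e-3), ('He', 4.00e-3), ('N2', 28.01e-3), ('CO2', 44.00e-3)):\n    Tv = virtual_temperature(301.0, 1.0e5, 3.0e3, M)\n    LR = deformation_radius(M) / 1.0e3\n    print('%3s: T_v = %6.1f K, L_R ~ %7.0f km' % (name, Tv, LR))\n\\end{verbatim}\nFor H$_2$ and He the virtual temperature of the moist parcel falls below 301 K (negative buoyancy), whereas for N$_2$ and CO$_2$ it rises above 301 K, and $L_R$ shrinks monotonically as the molar weight increases.\n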
\n\n\nIn the experiments of Fig.~8, the value of the Rossby deformation radius is $\\approx$500--1500 km, comparable to the hurricane size. However, the Rossby deformation radius is strongly latitude-dependent, because $f$ is equal to $2\\Omega\\sin(\\varphi)$, where $\\Omega$ is the rotation rate and $\\varphi$ is the latitude, but the hurricane size in the experiments does not exhibit the same dependency. A better scaling was not found because of the nonlinear dynamics of hurricanes and the complex interactions between hurricanes and diabatic heating, environmental relative humidity, mesoscale convective systems, and other features (Emanuel \\cite{Emanuelb}; Merlis \\& Held \\cite{Merlis19}). Under more realistic conditions, such as on Earth and in the simulations in Sections 3.1 and 3.2, the Coriolis effect is not constant between latitudes and there are strong interactions between hurricanes and the mean circulation, so that the Rossby deformation radius is not a good scale for hurricane size (e.g., Chavas et al. \\cite{Chavas16}).\n\n\n\n\n\\section{Conclusions and discussions}\n\nWe find that hurricanes can form on tidally locked planets, especially on those orbiting near the inner edge of the habitable zone of late M dwarfs. For planets in the middle range of the habitable zone, hurricanes are relatively rare. Storm theories developed for Earth and Saturn can be used to understand hurricane formation on tidally locked planets. Hurricanes can enhance ocean mixing and oceanic heat transport from warmer to cooler regions in both the horizontal and vertical directions. Hurricanes can also influence the transmission spectra of tidally locked planets. For instance, if a hurricane moves to the terminator, the water vapor concentration there would increase (Fig.~S4), which can influence the transmission signals. Unfortunately, present-day telescopes are not capable of observing this feature (Morley et al. \\cite{Morley}, de Wit et al. \\cite{de Wit}), mainly due to the small scale height of the atmosphere and the relatively small size of a hurricane compared to the planetary radius. Resolving such signals requires the large space telescopes or ground-based extremely large telescopes of the future.\n\n\n\nFurthermore, future studies require the use of AGCMs coupled to a slab ocean or fully coupled atmosphere--ocean models. The result of a test with the atmosphere coupled to a slab ocean is shown in Fig.~9. We still find hurricanes in this experiment; more results for different rotation periods, different stellar fluxes, and different CO$_2$ concentrations will be presented in a separate paper in the near future. Moreover, future work is required to examine how continents influence these results. Hurricanes always decay quickly when they move over land because of the dramatic reduction in evaporation and the increase in surface roughness. Global climate models with more realistic cloud schemes and regional cloud-resolving models with more accurate radiative transfer are required to simulate hurricane genesis, especially for planets whose atmospheric compositions or air masses differ substantially from Earth's. Another weakness of this study is the convection scheme, which was developed based on knowledge of convection on Earth. Future studies using high-resolution models with explicit convection (e.g., Sergeev et al. \\cite{Sergeev}) are required. Moreover, convective self-aggregation (such as Bony et al. \\cite{Bony}, Pendergrass et al. \\cite{Pendergrass}, Wing et al. \\cite{Wing}) may have occurred in our simulations, particularly in the 310--315~K experiment. 
Future work is required to analyze this feature.\n\nRecently, using Earth-based metrics for hurricane genesis, Komacek et al. (\\cite{Komacek}) found that hurricane genesis is most favorable on tidally locked terrestrial exoplanets with intermediate rotation periods of about 8--10 days in the habitable zones of late-type M dwarf stars, and that hurricane genesis is unfavorable on slowly rotating planets. The latter result is consistent with Bin et al. (\\cite{Bin}) and with our results. Future simulations using hurricane-resolving models are required to verify the intermediate-rotation-period conclusion of Komacek et al. (\\cite{Komacek}).\n\n\n\n\n\\begin{acknowledgements}\n We thank the National Center for Atmospheric Research (NCAR) groups for developing the model CAM4 and making it available to the public. We are grateful for discussions with Hao Fu, Gan Zhang, Cheng Li, Weixin Xu, Yongyun Hu, Zhiyong Meng, Dorian S. Abbot, Thaddeus D. Komacek, and Fengyi Xie.\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}