diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhwjr" "b/data_all_eng_slimpj/shuffled/split2/finalzzhwjr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhwjr" @@ -0,0 +1,5 @@ +{"text":"\\section*{Acknowledgements}\nThis research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-018). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of National Research Foundation, Singapore. The authors thank Jianxin Wu, Chaoyang He, Shixuan Sun, Yaqi Xie and Yuhang Chen for their feedback. The authors also thank Yuzhi Zhao, Wei Wang, and Mo Sha for their supports of computing resources.\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n\n\\section{More Details of the Datasets}\nThe statistics of each dataset are shown in Table \\ref{tbl:data} when setting $\\beta=0.5$. All datasets provide a training dataset and test dataset. All the reported accuracies are computed on the test dataset.\n\n\\begin{table}[h]\n\\centering\n\\caption{The statistics of datasets.}\n\\label{tbl:data}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{dataset} & \\multicolumn{2}{c|}{\\#training samples\/party} & \\multirow{2}{*}{\\#test samples} \\\\ \\cline{2-3}\n & mean & std & \\\\ \\hline \\hline\nCIFAR10 & 5,000 & 1,165 & 10,000 \\\\ \\hline\nCIFAR100 & 5,000 & 181 & 10,000 \\\\ \\hline\nTiny-Imagenet & 10,000 & 99 & 10,000 \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\nFigure \\ref{fig:datadis_beta01} and Figure \\ref{fig:datadis_beta5} show the data distribution of $\\beta=0.1$ and $\\beta=5$ (used in Section 4.6 of the main paper), respectively.\n\n\n\n\\begin{figure*}\n\\centering\n\\subfloat[CIFAR-10]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar10_data_beta01.pdf}%\n}\n\\hfill\n\\subfloat[CIFAR-100]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar100_data_beta01.pdf}%\n}\n\\hfill\n\\subfloat[Tiny-Imagenet]{\\includegraphics[width=0.32\\textwidth]{figures\/tinyimg_data_beta01.pdf}}\n\\caption{The data distribution of each party using non-IID data partition with $\\beta=0.1$. }\n\\label{fig:datadis_beta01}\n\\vspace{-10pt}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\subfloat[CIFAR-10]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar10_data_beta5.pdf}%\n}\n\\hfill\n\\subfloat[CIFAR-100]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar100_data_beta5.pdf}%\n}\n\\hfill\n\\subfloat[Tiny-Imagenet]{\\includegraphics[width=0.32\\textwidth]{figures\/tinyimg_data_beta5.pdf}}\n\\caption{The data distribution of each party using non-IID data partition with $\\beta=5$.}\n\\label{fig:datadis_beta5}\n\\vspace{-10pt}\n\\end{figure*}\n\n\n\\section{Projection Head}\nWe use a projection head to map the representation like \\cite{simclr}. Here we study the effect of the projection head. We remove the projection head and conduct experiments on CIFAR-10 and CIFAR-100 (Note that the network architecture changes for all approaches). The results are shown in Table \\ref{tbl:projhead}. We can observe that MOON can benefit a lot from the projection head. 
The accuracy of MOON can be improved by about 2\\% on average with a projection head.\n\n\n\\begin{table}[h]\n\\centering\n\\caption{The top1-accuracy with\/without projection head.}\n\\label{tbl:projhead}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Method} & CIFAR-10 & CIFAR-100 \\\\ \\hline\\hline\n\\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}without\\\\ projection \\\\ head\\end{tabular}} & MOON & 66.8\\% & 66.1\\% \\\\ \\cline{2-4} \n & FedAvg & 66.7\\% & 65.0\\% \\\\ \\cline{2-4} \n & FedProx & 67.5\\% & 65.4\\% \\\\ \\cline{2-4} \n & SCAFFOLD & 67.1\\% & 49.5\\% \\\\ \\cline{2-4} \n & SOLO & 39.8\\%$\\pm$3.9\\% & 22.5\\%$\\pm$1.1\\% \\\\ \\hline\n\\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}with\\\\ projection \\\\ head\\end{tabular}} & MOON & \\tb{69.1\\%} & \\tb{67.5\\%} \\\\ \\cline{2-4} \n & FedAvg & 66.3\\% & 64.5\\% \\\\ \\cline{2-4} \n & FedProx & 66.9\\% & 64.6\\% \\\\ \\cline{2-4} \n & SCAFFOLD & 66.6\\% & 52.5\\% \\\\ \\cline{2-4} \n & SOLO & 46.3\\%$\\pm$5.1\\% & 22.3\\%$\\pm$1.0\\% \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\\section{IID Partition}\nTo further show the effect of our model-contrastive loss, we compare MOON and FedAvg when there is no heterogeneity among local datasets. The dataset is randomly and equally partitioned into the parties. The results are shown in Table \\ref{tbl:homo}. We can observe that the model-contrastive loss has little influence on the training when the local datasets are IID. The accuracy of MOON is still very close to FedAvg even though with a large $\\mu$. MOON is still applicable when there is no heterogeneity issue in data distributions across parties.\n\n\n\\begin{table}[h]\n\\centering\n\\caption{The top-1 accuracy of MOON and FedAvg with IID data partition on CIFAR-10.}\n\\label{tbl:homo}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Method} & Top-1 accuracy\\\\ \\hline\\hline\n\\multirow{4}{*}{MOON} & $\\mu=0.1$ & 73.6\\% \\\\ \\cline{2-3} \n & $\\mu=1$ & 73.6\\% \\\\ \\cline{2-3} \n & $\\mu=5$ & 73.0\\% \\\\ \\cline{2-3} \n & $\\mu=10$ & 72.8\\% \\\\ \\hline\n\\multicolumn{2}{|c|}{FedAvg ($\\mu=0$)} & 73.4\\% \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\n\\section{Hyper-Parameters Study}\n\n\\subsection{Effect of $\\mu$}\nWe show the accuracy of MOON with different $\\mu$ in Table \\ref{tbl:mu}. The best $\\mu$ for CIFAR-10, CIFAR-100, and Tiny-Imagenet are 5, 1, and 1, respectively. When $\\mu$ is set to a small value (i.e., $\\mu=0.1$), the accuracy of MOON is very close to FedAvg (i.e., $\\mu=0$) since the impact of model-contrastive loss is small. As long as we set $\\mu \\geq 1$, MOON can benefit a lot from the model-contrastive loss. Overall, we find that $\\mu=1$ is a reasonable good choice if they do not want to tune the parameter, where MOON achieves at least 2\\% higher accuracy than FedAvg.\n\n\\begin{table}[!]\n\\centering\n\\caption{The test accuracy of MOON with $\\mu$ from \\{0, 0.1, 1, 5, 10\\}. 
Note that MOON is actually FedAvg when $\\mu=0$.}\n\\label{tbl:mu}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n$\\mu$ & CIFAR-10 & CIFAR-100 & Tiny-Imagenet \\\\ \\hline\\hline\n0 & 66.3\\% & 64.5\\% & 23.0\\% \\\\ \\hline\n0.1 & 66.5\\% & 65.1\\% & 23.4\\% \\\\ \\hline\n1 & 68.4\\% & \\tb{67.5\\%} & \\tb{25.1\\%} \\\\ \\hline\n5 & \\tb{69.1\\%} & 67.1\\% & 24.4\\% \\\\ \\hline\n10 & 68.3\\% & 67.3\\% & 25.0\\% \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Effect of temperature and output dimension} We tune $\\tau$ from \\{0.1, 0.5, 1.0\\} and tune the output dimension of projection head from \\{64, 128, 256\\}. The results are shown in Figure \\ref{fig:tau_outdim}. The best $\\tau$ for CIFAR-10, CIFAR-100, and Tiny-Imagenet are 0.5, 1.0, and 0.5, respectively. The best output dimension for CIFAR-10, CIFAR-100, and Tiny-Imagenet are 128, 256, and 128, respectively. Generally, MOON is stable regarding the change of temperature and output dimension. As we have shown in the main paper, MOON already improves FedAvg a lot with a default setting of temperature (i.e., 0.5) and output dimension (i.e., 256). Users may tune these two hyper-parameters to achieve even better accuracy.\n\n\\begin{figure}[h]\n\\centering\n\\subfloat[The effect of $\\tau$\\label{fig:tau}]{\\includegraphics[width=0.48\\linewidth]{figures\/tunetempe.pdf}\n}\n\\subfloat[The effect of output dimension\\label{fig:outdim}]{\\includegraphics[width=0.48\\linewidth]{figures\/tunedim.pdf}}\n\\caption{The top-1 accuracy of MOON trained with different temperatures and output dimensions.}\n\\label{fig:tau_outdim}\n\\vspace{-10pt}\n\\end{figure}\n\n\n\n\n\n\\section{Combining with FedAvgM}\nAs we have mentioned in the fourth paragraph of Section 2.1, MOON can be combined with the approaches working on improving the aggregation phase. Here we combine MOON and FedAvgM \\cite{hsu2019measuring}. We tune the server momentum $\\beta \\in \\{0.1, 0.7, 0.9\\}$. With the default experimental setting in Section 4.1, the results are shown in Table \\ref{tbl:fedavgm}. While FedAvgM is better than FedAvg, MOON can further improve FedAvgM by 2-3\\%.\n\n\\begin{table}[!]\n\\centering\n\\caption{The combining of MOON and FedAvgM.}\n\\label{tbl:fedavgm}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nDatasets & FedAvg & MOON & FedAvgM & MOON+FedAvgM \\\\ \\hline\\hline\nCIFAR-10 & 66.3\\% & 69.1\\% & 67.1\\% & 69.6\\% \\\\ \\hline\nCIFAR-100 & 64.5\\% & 67.5\\% &65.1\\% &67.8\\% \\\\ \\hline\nTiny-Imagenet & 23.0\\% & 25.1\\% &23.4\\% &25.5\\% \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\\section{Computation Cost}\nSince MOON introduces an additional loss term in the local training phase, the training of MOON will be slower than FedAvg. For the experiments in Table 1, the average training time per round with a NVIDIA Tesla V100 GPU and four Intel Xeon E5-2684 20-core CPUs are shown in Table \\ref{tbl:local_time}. 
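The overhead mainly comes from two extra forward passes per mini-batch, through the frozen global model and the frozen previous local model; since neither requires gradients, only the forward cost is added. A minimal sketch is given below (the \texttt{features} accessor, standing for $R_w(\cdot)$, is a hypothetical name used for illustration).
\begin{verbatim}
import torch

@torch.no_grad()
def frozen_representations(x, global_model, prev_model):
    # The extra per-batch cost of MOON: two forward passes through
    # frozen models, with no backward pass needed.
    z_glob = global_model.features(x)   # representation from w^t
    z_prev = prev_model.features(x)     # representation from w_i^{t-1}
    return z_glob, z_prev
\end{verbatim}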
Compared with FedAvg, the computation overhead of MOON is acceptable especially on CIFAR-10 and CIFAR-100.\n\n\\begin{table}[!]\n\\centering\n\\caption{The average training time per round.}\n\\label{tbl:local_time}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nMethod & CIFAR-10 & CIFAR-100 & Tiny-Imagenet \\\\ \\hline\\hline\nFedAvg & 330s & 20min & 103min \\\\ \\hline\nFedProx & 340s & 24min & 135min \\\\ \\hline\nSCAFFOLD & 332s & 20min & 112min \\\\ \\hline\nMOON & 337s & 31min & 197min \\\\ \\hline\n\\end{tabular}\n}\n\\vspace{-10pt}\n\\end{table}\n\n\\section{Number of Negative Pairs}\nIn typical contrastive learning, the performance usually can be improved by increasing the number of negative pairs (i.e., views of different images). In MOON, the negative pair is the local model being updated and the local model from the previous round. We consider using a single negative pair during training in the main paper. It is possible to consider multiple negative pairs if we include multiple local models from the previous rounds. Suppose the current round is $t$. Let $k$ denotes the maximum number of negative pairs. Let $z_{prev}^i = R_{w_i^{t-i}}(x)$ (i.e., $z_{prev}^i$ is the representation learned by the local model from $(t-i)$ round). Then, our local objective is\n\n{\\small\n\\begin{equation}\n \\ell_{con} = -\\log{\\frac{\\exp(\\textrm{sim}(z, z_{glob})\/\\tau)}{\\exp(\\textrm{sim}(z, z_{glob})\/\\tau) + \\sum_{i=1}^k\\exp(\\textrm{sim}(z, z_{prev}^i)\/\\tau)}}\n\\end{equation}\n}\n\nIf $k=1$, then the objective is the same as MOON presented in the main paper. If $k>t$, since there are at most $t$ local models from previous rounds, we only consider the previous $t$ local models (i.e., only $t$ negative pairs). There is no model-contrastive loss if $t=0$ (i.e., the first round). Here we study the effect of the maximum number of negative pairs on CIFAR-10. The results are shown in Table \\ref{tbl:n_pair}. Unlike typical contrastive learning, the accuracy of MOON cannot be increased by increasing the number of negative pairs. MOON can achieve the best accuracy when $k=1$, which is presented in our main paper.\n\n\\begin{table}[h]\n\\centering\n\\caption{The effect of maximum number of negative pairs. We tune $\\mu$ from \\{0.1, 1, 5, 10\\} for all approaches and report the best accuracy.}\n\\label{tbl:n_pair}\n\\begin{tabular}{|c|c|}\n\\hline\nmaximum number of negative pairs & top-1 accuracy \\\\ \\hline \\hline\n$k=1$ & \\tb{69.1\\%} \\\\ \\hline\n$k=2$ & 67.2\\% \\\\ \\hline\n$k=5$ & 67.7\\% \\\\ \\hline\n$k=100$ & 63.5\\% \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\section{Background and Related Work}\n\n\\subsection{Federated Learning}\nFedAvg \\cite{mcmahan2016communication} has been a de facto approach for federated learning. The framework of FedAvg is shown in Figure \\ref{fig:fedavg_fram}. There are four steps in each round of FedAvg. First, the server sends a global model to the parties. Second, the parties perform stochastic gradient descent (SGD) to update their models locally. Third, the local models are sent to a central server. Last, the server averages the model weights to produce a global model for the training of the next round.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/fl_fram_print.pdf}\n \\caption{The FedAvg framework. 
In this paper, we focus on the second step, i.e., the local training phase.}\n \\label{fig:fedavg_fram}\n \\vspace{-5pt}\n\\end{figure}\n\n\nThere have been quite some studies trying to improve FedAvg on non-IID data. Those studies can be divided into two categories: improvement on local training (i.e., step 2 of Figure \\ref{fig:fedavg_fram}) and on aggregation (i.e., step 4 of Figure \\ref{fig:fedavg_fram}). This study belongs to the first category.\n\nAs for studies on improving local training, FedProx \\cite{lifedprox} introduces a proximal term into the objective during local training. The proximal term is computed based on the $\\ell_2$-norm distance between the current global model and the local model. Thus, the local model update is limited by the proximal term during the local training. SCAFFOLD \\cite{karimireddy2019scaffold} corrects the local updates by introducing control variates. Like the training model, the control variates are also updated by each party during local training. The difference between the local control variate and the global control variate is used to correct the gradients in local training. However, FedProx shows experiments on MNIST and EMNIST only with multinomial logistic regression, while SCAFFOLD only shows experiments on EMNIST with logistic regression and 2-layer fully connected layer. The effectiveness of FedProx and SCAFFOLD on image datasets with deep learning models has not been well explored. As shown in our experiments, those studies have little or even no advantage over FedAvg, which motivates this study for a new approach of handling non-IID image datasets with deep learning models. We also notice that there are other related contemporary work \\cite{acar2021federated,li2021fedbn,wang2020addressing} when preparing this paper. We leave the comparison between MOON and these contemporary work as future studies.\n\nAs for studies on improving the aggregation phase, FedMA \\cite{Wang2020Federated} utilizes Bayesian non-parametric methods to match and average weights in a layer-wise manner. FedAvgM \\cite{hsu2019measuring} applies momentum when updating the global model on the server. Another recent study, FedNova \\cite{wang2020tackling}, normalizes the local updates before averaging.\nOur study is orthogonal to them and potentially can be combined with these techniques as we work on the local training phase.\n\nAnother research direction is personalized federated learning \\cite{fallah2020personalized,dinh2020personalized,hanzely2020lower,zhang2021personalized,huang2021personalized}, which tries to learn personalized local models for each party. In this paper, we study the typical federated learning, which tries to learn a single global model for all parties.\n\n\n\\subsection{Contrastive Learning}\n\nSelf-supervised learning \\cite{jing2020self,grill2020bootstrap,simclr,simclr2,he2020momentum,misra2020self} is a recent hot research direction, which tries to learn good data representations from unlabeled data. Among those studies, contrastive learning approaches \\cite{simclr,simclr2,he2020momentum,misra2020self} achieve state-of-the-art results on learning visual representations. The key idea of contrastive learning is to reduce the distance between the representations of different augmented views of the same image (i.e., \\emph{positive pairs}), and increase the distance between the representations of augmented views of different images (i.e., \\emph{negative pairs}).\n\nA typical contrastive learning framework is SimCLR \\cite{simclr}. 
Given an image $x$, SimCLR first creates two correlated views of this image using different data augmentation operators, denoted $x_i$ and $x_j$. A base encoder $f(\\cdot)$ and a projection head $g(\\cdot)$ are trained to extract the representation vectors and map the representations to a latent space, respectively. Then, a contrastive loss (i.e., NT-Xent \\cite{sohn2016improved}) is applied on the projected vector $g(f(\\cdot))$, which tries to maximize agreement between differently augmented views of the same image. Specifically, given $2N$ augmented views and a pair of view $x_i$ and $x_j$ of same image, the contrastive loss for this pair is defined as \n\n\\begin{equation}\nl_{i, j} = -\\log{\\frac{\\exp(\\textrm{sim}(x_i, x_j)\/\\tau)}{\\sum_{k=1}^{2N} \\mathbb{I}_{[k\\neq i]}\\exp(\\textrm{sim}(x_i, x_k)\/\\tau)}}\n\\end{equation}\n\nwhere $\\textrm{sim}(\\cdot,\\cdot)$ is a cosine similarity function and $\\tau$ denotes a temperature parameter. The final loss is computed by summing the contrastive loss of all pairs of the same image in a mini-batch. \n\nBesides SimCLR, there are also other contrastive learning frameworks such as CPC \\cite{oord2018representation}, CMC \\cite{tian2019contrastive} and MoCo \\cite{he2020momentum}. We choose SimCLR for its simplicity and effectiveness in many computer vision tasks. Still, the basic idea of contrastive learning is similar among these studies: the representations obtained from different images should be far from each other and the representations obtained from the same image should be related to each other. The idea is intuitive and has been shown to be effective. \n\nThere is one recent study \\cite{zhang2020federated} that combines federated learning with contrastive learning. They focus on the unsupervised learning setting. Like SimCLR, they use contrastive loss to compare the representations of different images. In this paper, we focus on the supervised learning setting and propose model-contrastive learning to compare representations learned by different models.\n\\section{Conclusion}\nFederated learning has become a promising approach to resolve the pain of data silos in many domains such as medical imaging, object detection, and landmark classification. Non-IID is a key challenge for the effectiveness of federated learning. To improve the performance of federated deep learning models on non-IID datasets, we propose model-contrastive learning (MOON), a simple and effective approach for federated learning. MOON introduces a new learning concept, i.e., contrastive learning in model-level. Our extensive experiments show that MOON achieves significant improvement over state-of-the-art approaches on various image classification tasks. As MOON does not require the inputs to be images, it potentially can be applied to non-vision problems.\n\n\\section{Experiments}\n\\label{sec:exp}\n\\subsection{Experimental Setup}\n\\label{sec:exp_setup}\nWe compare MOON with three state-of-the-art approaches including (1) FedAvg \\cite{mcmahan2016communication}, (2) FedProx \\cite{lifedprox}, and (3) SCAFFOLD \\cite{karimireddy2019scaffold}. We also compare a baseline approach named SOLO, where each party trains a model with its local data without federated learning. We conduct experiments on three datasets including CIFAR-10, CIFAR-100, and Tiny-Imagenet\\footnote{\\url{https:\/\/www.kaggle.com\/c\/tiny-imagenet}} (100,000 images with 200 classes). Moreover, we try two different network architectures. 
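For concreteness, a minimal PyTorch sketch of the smaller of the two architectures (the CIFAR-10 network described next) is shown below. It is an illustration rather than the released implementation; the layer sizes of the base encoder and the 256-dimensional projection output follow the description in the text, while the hidden width and activation inside the projection head are assumptions in the style of \cite{simclr}.
\begin{verbatim}
import torch.nn as nn

class MoonNetCIFAR10(nn.Module):
    def __init__(self, proj_dim=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(             # base encoder
            nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
        )
        self.projection = nn.Sequential(           # 2-layer MLP head
            nn.Linear(84, proj_dim), nn.ReLU(),
            nn.Linear(proj_dim, proj_dim),
        )
        self.classifier = nn.Linear(proj_dim, n_classes)  # output layer

    def forward(self, x):
        z = self.projection(self.encoder(x))       # R_w(x)
        return z, self.classifier(z)               # (representation, logits)
\end{verbatim}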
For CIFAR-10, we use a CNN network as the base encoder, which has two 5x5 convolution layers followed by 2x2 max pooling (the first with 6 channels and the second with 16 channels) and two fully connected layers with ReLU activation (the first with 120 units and the second with 84 units). For CIFAR-100 and Tiny-Imagenet, we use ResNet-50 \\cite{he2016deep} as the base encoder. For all datasets, like \\cite{simclr}, we use a 2-layer MLP as the projection head. The output dimension of the projection head is set to 256 by default. Note that all baselines use the same network architecture as MOON (including the projection head) for fair comparison.\n\nWe use PyTorch \\cite{paszke2019pytorch} to implement MOON and the other baselines. The code is publicly available\\footnote{\\url{https:\/\/github.com\/QinbinLi\/MOON}}. We use the SGD optimizer with a learning rate 0.01 for all approaches. The SGD weight decay is set to 0.00001 and the SGD momentum is set to 0.9. The batch size is set to 64. The number of local epochs is set to 300 for SOLO. The number of local epochs is set to 10 for all federated learning approaches unless explicitly specified. The number of communication rounds is set to 100 for CIFAR-10\/100 and 20 for Tiny-ImageNet, where all federated learning approaches have little or no accuracy gain with more communications. For MOON, we set the temperature parameter to 0.5 by default like \\cite{simclr}.\n\n\n\nLike previous studies \\cite{pmlr-v97-yurochkin19a,Wang2020Federated}, we use Dirichlet distribution to generate the non-IID data partition among parties. Specifically, we sample $p_k \\sim Dir_N(\\beta)$ and allocate a $p_{k,j}$ proportion of the instances of class $k$ to party $j$, where $Dir(\\beta)$ is the Dirichlet distribution with a concentration parameter $\\beta$ (0.5 by default). With the above partitioning strategy, each party can have relatively few (even no) data samples in some classes. We set the number of parties to 10 by default. The data distributions among parties in default settings are shown in Figure \\ref{fig:datadis}. For more experimental results, please refer to Appendix.\n\n\\begin{figure*}[!]\n\\centering\n\\subfloat[CIFAR-10]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar10_data.pdf}%\n}\n\\hfill\n\\subfloat[CIFAR-100]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar100_data.pdf}%\n}\n\\hfill\n\\subfloat[Tiny-Imagenet]{\\includegraphics[width=0.32\\textwidth]{figures\/tinyimg_data.pdf}}\n\\caption{The data distribution of each party using non-IID data partition. The color bar denotes the number of data samples. Each rectangle represents the number of data samples of a specific class in a party. }\n\\label{fig:datadis}\n\\end{figure*}\n\n\n\\subsection{Accuracy Comparison}\n\\label{sec:overall_acc}\nFor MOON, we tune $\\mu$ from $\\{0.1, 1, 5, 10\\}$ and report the best result. The best $\\mu$ of MOON for CIFAR-10, CIFAR-100, and Tiny-Imagenet are 5, 1, and 1, respectively. Note that FedProx also has a hyper-parameter $\\mu$ to control the weight of its proximal term (i.e., $L_{FedProx} = \\ell_{FedAvg} + \\mu \\ell_{prox}$). For FedProx, we tune $\\mu$ from $\\{0.001, 0.01, 0.1, 1\\}$ (the range is also used in the previous paper \\cite{lifedprox}) and report the best result. The best $\\mu$ of FedProx for CIFAR-10, CIFAR-100, and Tiny-Imagenet are $0.01$, $0.001$, and $0.001$, respectively. 
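As an aside, the Dirichlet-based non-IID partition of Section \ref{sec:exp_setup} can be sketched as follows. This is a minimal NumPy sketch under our reading of the procedure; \texttt{labels} denotes the array of training labels, and details such as rounding or re-sampling of empty parties may differ in the released code.
\begin{verbatim}
import numpy as np

def dirichlet_partition(labels, n_parties=10, beta=0.5, seed=0):
    # For each class k, draw proportions p_k ~ Dir(beta) and allocate
    # a p_{k,j} fraction of the class-k samples to party j.
    rng = np.random.default_rng(seed)
    party_idx = [[] for _ in range(n_parties)]
    for k in np.unique(labels):
        idx_k = np.where(labels == k)[0]
        rng.shuffle(idx_k)
        p = rng.dirichlet([beta] * n_parties)
        cuts = (np.cumsum(p)[:-1] * len(idx_k)).astype(int)
        for j, part in enumerate(np.split(idx_k, cuts)):
            party_idx[j].extend(part.tolist())
    return party_idx
\end{verbatim}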
Unless explicitly specified, we use these $\\mu$ settings for all the remaining experiments.\n\nTable \\ref{tbl:allacc} shows the top-1 test accuracy of all approaches with the above default setting. Under non-IID settings, SOLO demonstrates much worse accuracy than other federated learning approaches. This demonstrates the benefits of federated learning. Comparing different federated learning approaches, we can observe that MOON is consistently the best approach among all tasks. It can outperform FedAvg by 2.6\\% accuracy on average of all tasks. For FedProx, its accuracy is very close to FedAvg. The proximal term in FedProx has little influence in the training since $\\mu$ is small. However, when $\\mu$ is not set to a very small value, the convergence of FedProx is quite slow (see Section \\ref{sec:comm_effi}) and the accuracy of FedProx is bad. For SCAFFOLD, it has much worse accuracy on CIFAR-100 and Tiny-Imagenet than other federated learning approaches.\n\n\n\\begin{table}[]\n\\centering\n\\caption{The top-1 accuracy of MOON and the other baselines on test datasets. For MOON, FedAvg, FedProx, and SCAFFOLD, we run three trials and report the mean and standard derivation. For SOLO, we report the mean and standard derivation among all parties.}\n\\label{tbl:allacc}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nMethod & CIFAR-10 & CIFAR-100 & Tiny-Imagenet \\\\ \\hline \\hline\nMOON & \\tb{69.1\\%}$\\pm$0.4\\% & \\tb{67.5\\%}$\\pm$0.4\\% & \\tb{25.1\\%}$\\pm$0.1\\% \\\\ \\hline\nFedAvg & 66.3\\%$\\pm$0.5\\% & 64.5\\% $\\pm$0.4\\% & 23.0\\%$\\pm$0.1\\% \\\\ \\hline\nFedProx & 66.9\\%$\\pm$0.2\\% & 64.6\\%$\\pm$0.2\\% & 23.2\\%$\\pm$0.2\\% \\\\ \\hline\nSCAFFOLD & 66.6\\%$\\pm$0.2\\% & 52.5\\% $\\pm$0.3\\% & 16.0\\%$\\pm$0.2\\% \\\\ \\hline\nSOLO &46.3\\% $\\pm$5.1\\% &22.3\\%$\\pm$1.0\\% &8.6\\%$\\pm$0.4\\%\\\\\\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\\subsection{Communication Efficiency}\n\\label{sec:comm_effi}\nFigure \\ref{fig:comm} shows the accuracy in each round during training. As we can see, the model-contrastive loss term has little influence on the convergence rate with best $\\mu$. The speed of accuracy improvement in MOON is almost the same as FedAvg at the beginning, while it can achieve a better accuracy later benefit from the model-contrastive loss. Since the best $\\mu$ values are generally small in FedProx, FedProx with best $\\mu$ is very close to FedAvg, especially on CIFAR-10 and CIFAR-100. However, when setting $\\mu=1$, FedProx becomes very slow due to the additional proximal term. This implies that limiting the $\\ell_2$-norm distance between the local model and the global model is not an effective solution. Our model-contrastive loss can effectively increase the accuracy without slowing down the convergence.\n\n\n\\begin{figure*}[!]\n\\centering\n\\subfloat[CIFAR-10]{\\includegraphics[width=0.32\\textwidth]{figures\/newcomm_cifar10.pdf}%\n}\n\\hfill\n\\subfloat[CIFAR-100]{\\includegraphics[width=0.32\\textwidth]{figures\/newcomm_cifar100.pdf}%\n}\n\\hfill\n\\subfloat[Tiny-Imagenet]{\\includegraphics[width=0.32\\textwidth]{figures\/newcomm_tinyimg.pdf}}\n\\caption{The top-1 test accuracy in different number of communication rounds. 
For FedProx, we report both the accuracy with best $\\mu$ and the accuracy with $\\mu=1$.}\n\\label{fig:comm}\n\\vspace{-15pt}\n\\end{figure*}\n\nWe show the number of communication rounds to achieve the same accuracy as running FedAvg for 100 rounds on CIFAR-10\/100 or 20 rounds on Tiny-Imagenet in Table \\ref{tbl:comm}. We can observe that the number of communication rounds is significantly reduced in MOON. MOON needs about half the number of communication rounds on CIFAR-100 and Tiny-Imagenet compared with FedAvg. On CIFAR-10, the speedup of MOON is even close to 4. MOON is much more communication-efficient than the other approaches.\n\n\\begin{table}[]\n\\centering\n\\caption{The number of rounds of different approaches to achieve the same accuracy as running FedAvg for 100 rounds (CIFAR-10\/100) or 20 rounds (Tiny-Imagenet). The speedup of an approach is computed against FedAvg.}\n\\label{tbl:comm}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n \\multirow{2}{*}{Method} & \\multicolumn{2}{c|}{CIFAR-10} & \\multicolumn{2}{c|}{CIFAR-100} & \\multicolumn{2}{c|}{Tiny-Imagenet} \\\\ \\cline{2-7} \n & \\#rounds & speedup & \\#rounds & speedup & \\#rounds & speedup \\\\ \\hline \\hline\nFedAvg & 100 & 1$\\times$ & 100 & 1$\\times$ &20 & 1$\\times$ \\\\ \\hline\nFedProx & 52 & 1.9$\\times$ & 75 & 1.3$\\times$ & 17 & 1.2$\\times$ \\\\ \\hline\nSCAFFOLD & 80 & 1.3$\\times$ & \\backslashbox{}{} & \\textless{}1$\\times$ & \\backslashbox{}{} & \\textless{}1$\\times$ \\\\ \\hline\nMOON & \\tb{27} & \\tb{3.7}$\\times$ & \\tb{43} & \\tb{2.3}$\\times$ & \\tb{11} & \\tb{1.8}$\\times$ \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\\begin{figure*}\n\\centering\n\\subfloat[CIFAR-10]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar10tunep.pdf}%\n}\n\\hfill\n\\subfloat[CIFAR-100]{\\includegraphics[width=0.32\\textwidth]{figures\/cifar100tunep.pdf}%\n}\n\\hfill\n\\subfloat[Tiny-Imagenet]{\\includegraphics[width=0.32\\textwidth]{figures\/tinyimgtunep.pdf}}\n\\caption{The top-1 test accuracy with different number of local epochs. For MOON and FedProx, $\\mu$ is set to the best $\\mu$ from Section \\ref{sec:overall_acc} for all numbers of local epochs. The accuracy of SCAFFOLD is quite bad when number of local epochs is set to 1 (45.3\\% on CIFAR10, 20.4\\% on CIFAR-100, 2.6\\% on Tiny-Imagenet). The accuracy of FedProx on Tiny-Imagenet with one local epoch is 1.2\\%.}\n\\label{fig:epoch}\n\\vspace{-5pt}\n\\end{figure*}\n\n\\subsection{Number of Local Epochs}\nWe study the effect of number of local epochs on the accuracy of final model. The results are shown in Figure \\ref{fig:epoch}. When the number of local epochs is 1, the local update is very small. Thus, the training is slow and the accuracy is relatively low given the same number of communication rounds. All approaches have a close accuracy (MOON is still the best). When the number of local epochs becomes too large, the accuracy of all approaches drops, which is due to the drift of local updates, i.e., the local optima are not consistent with the global optima. Nevertheless, MOON clearly outperforms the other approaches. This further verifies that MOON can effectively mitigate the negative effects of the drift by too many local updates.\n\n\n\n\n\\subsection{Scalability}\n\nTo show the scalability of MOON, we try a larger number of parties on CIFAR-100. Specifically, we try two settings: (1) We partition the dataset into 50 parties and all parties participate in federated learning in each round. 
(2) We partition the dataset into 100 parties and randomly sample 20 parties to participate in federated learning in each round (client sampling technique introduced in FedAvg \\cite{mcmahan2016communication}). The results are shown in Table \\ref{tbl:scala} and Figure \\ref{fig:scala}. For MOON, we show the results with $\\mu=1$ (best $\\mu$ from Section \\ref{sec:overall_acc}) and $\\mu=10$. For MOON ($\\mu=1$), it outperforms the FedAvg and FedProx over 2\\% accuracy at 200 rounds with 50 parties and 3\\% accuracy at 500 rounds with 100 parties. Moreover, for MOON ($\\mu=10$), although the large model-contrastive loss slows down the training at the beginning as shown in Figure \\ref{fig:scala}, MOON can outperform the other approaches a lot with more communication rounds. Compared with FedAvg and FedProx, MOON achieves about about 7\\% higher accuracy at 200 rounds with 50 parties and at 500 rounds with 100 parties. SCAFFOLD has a low accuracy with a relatively large number of parties.\n\n\n\n\\begin{table}[]\n\\centering\n\\caption{The accuracy with 50 parties and 100 parties (sample fraction=0.2) on CIFAR-100.}\n\\label{tbl:scala}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c|}{\\#parties=50} & \\multicolumn{2}{c|}{\\#parties=100} \\\\ \\cline{2-5} \n & 100 rounds & 200 rounds & 250 rounds & 500 rounds \\\\ \\hline \\hline\nMOON ($\\mu$=1) & 54.7\\% &58.8\\% & 54.5\\% & 58.2\\% \\\\ \\hline\nMOON ($\\mu$=10)& \\tb{58.2\\%} &\\tb{63.2\\%} & \\tb{56.9\\%} & \\tb{61.8\\%} \\\\ \\hline\nFedAvg & 51.9\\% &56.4\\% & 51.0\\% & 55.0\\% \\\\ \\hline\nFedProx & 52.7\\% & 56.6\\% & 51.3\\% & 54.6\\% \\\\ \\hline\nSCAFFOLD & 35.8\\% & 44.9\\% & 37.4\\% & 44.5\\% \\\\ \\hline\nSOLO & \\multicolumn{2}{c|}{10\\%$\\pm$0.9\\%} & \\multicolumn{2}{c|}{7.3\\%$\\pm$0.6\\%} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\\begin{figure*}\n\\centering\n\\subfloat[50 parties]{\\includegraphics[width=0.45\\linewidth]{figures\/200rounds_comm_cifar100_p50.pdf}\n}\n\\hfil\n\\subfloat[100 parties (sample fraction=0.2)]{\\includegraphics[width=0.45\\linewidth]{figures\/500rounds_newcomm_cifar100_p100.pdf}%\n}\n\\hfil\n\\caption{The top-1 test accuracy on CIFAR-100 with 50\/100 parties.}\n\\label{fig:scala}\n\\end{figure*}\n\n\\subsection{Heterogeneity}\nWe study the effect of data heterogeneity by varying the concentration parameter $\\beta$ of Dirichlet distribution on CIFAR-100. For a smaller $\\beta$, the partition will be more unbalanced. The results are shown in Table \\ref{tbl:beta}. MOON always achieves the best accuracy among three unbalanced levels. When the unbalanced level decreases (i.e., $\\beta=5$), FedProx is worse than FedAvg, while MOON still outperforms FedAvg with more than 2\\% accuracy. The experiments demonstrate the effectiveness and robustness of MOON. \n\n\n\\begin{table}[]\n\\centering\n\\caption{The test accuracy with $\\beta$ from \\{0.1, 0.5, 5\\}. 
}\n\\label{tbl:beta}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nMethod & $\\beta=0.1$ & $\\beta=0.5$ & $\\beta=5$ \\\\ \\hline \\hline\nMOON & \\tb{64.0\\%} & \\tb{67.5\\%} & \\tb{68.0\\%} \\\\ \\hline\nFedAvg & 62.5\\% & 64.5\\% & 65.7\\% \\\\ \\hline\nFedProx & 62.9\\% & 64.6\\% & 64.9\\% \\\\ \\hline\nSCAFFOLD & 47.3\\% & 52.5\\% & 55.0\\% \\\\ \\hline\nSOLO & 15.9\\%$\\pm$1.5\\% & 22.3\\%$\\pm$1\\% & 26.6\\%$\\pm$1.4\\% \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\n\\subsection{Loss Function}\n\nTo maximize the agreement between the representation learned by the global model and the representation learned by the local model, our model-contrastive loss $\\ell_{con}$ is proposed inspired by NT-Xent loss \\cite{simclr}. Another intuitive option is to use $\\ell_2$ regularization, and the local loss is\n\\begin{equation}\n \\ell = \\ell_{sup} + \\mu \\norm{z-z_{glob}}_2\n\\end{equation}\n\nHere we compare the approaches using different kinds of loss functions to limit the representation: no additional term (i.e., FedAvg: $L = \\ell_{sup}$), $\\ell_2$ norm, and our model-contrastive loss. The results are shown in Table \\ref{tbl:loss}. We can observe that simply using $\\ell_2$ norm even cannot improve the accuracy compared with FedAvg on CIFAR-10. While using $\\ell_2$ norm can improve the accuracy on CIFAR-100 and Tiny-Imagenet, the accuracy is still lower than MOON. Our model-contrastive loss is an effective way to constrain the representations. \n\nOur model-contrastive loss influences the local model from two aspects. Firstly, the local model learns a close representation to the global model. Secondly, the local model also learns a better representation than the previous one until the local model is good enough (i.e., $z=z_{glob}$ and $\\ell_{con}$ becomes a constant).\n\n\n\\begin{table}[]\n\\centering\n\\caption{The top-1 accuracy with different kinds of loss for the second term of local objective. We tune $\\mu$ from \\{0.001, 0.01 , 0.1, 1, 5, 10\\} for the $\\ell_2$ norm approach and report the best accuracy.}\n\\label{tbl:loss}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nsecond term & CIFAR-10 & CIFAR-100 & Tiny-Imagenet \\\\ \\hline\\hline\nnone (FedAvg) & 66.3\\% & 64.5\\% & 23.0\\% \\\\ \\hline\n$\\ell_2$ norm & 65.8\\% & 66.9\\% & 24.0\\% \\\\ \\hline\nMOON & \\tb{69.1\\%} & \\tb{67.5\\%} & \\tb{25.1\\%} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\\section{Introduction}\nDeep learning is data hungry. Model training can benefit a lot from a large and representative dataset (\\eg, ImageNet \\cite{deng2009imagenet} and COCO~\\cite{lin2014microsoft}). However, data are usually dispersed among different parties in practice (\\eg, mobile devices and companies). Due to the increasing privacy concerns and data protection regulations \\cite{voigt2017eu}, parties cannot send their private data to a centralized server to train a model. \n\nTo address the above challenge, federated learning \\cite{kairouz2019advances,yang2019federated,litian2019survey,li2019flsurvey} enables multiple parties to jointly learn a machine learning model without exchanging their local data. A popular federated learning algorithm is FedAvg~\\cite{mcmahan2016communication}. In each round of FedAvg, the updated local models of the parties are transferred to the server, which further aggregates the local models to update the global model. The raw data is not exchanged during the learning process. 
Federated learning has emerged as an important machine learning area and attracted many research interests \\cite{mcmahan2016communication,lifedprox,karimireddy2019scaffold,li2020practical,Wang2020Federated,dai2020federated,hu2020oarf,caldas2018leaf,chaoyanghe2020fedml}. Moreover, it has been applied in many applications such as medical imaging \\cite{kaissis2020secure,kumar2020blockchain}, object detection \\cite{liu2020fedvision}, and landmark classification \\cite{hsu2020federated}.\n\nA key challenge in federated learning is the heterogeneity of data distribution on different parties \\cite{kairouz2019advances}. The data can be non-identically distributed among the parties in many real-world applications, which can degrade the performance of federated learning~\\cite{karimireddy2019scaffold,Li2020On,niidbench}. When each party updates its local model, its local objective may be far from the global objective. Thus, the averaged global model is away from the global optima. There have been some studies trying to address the non-IID issue in the local training phase~\\cite{lifedprox,karimireddy2019scaffold}. FedProx \\cite{lifedprox} directly limits the local updates by $\\ell_2$-norm distance, while SCAFFOLD \\cite{karimireddy2019scaffold} corrects the local updates via variance reduction \\cite{johnson2013accelerating}. However, as we show in the experiments (see Section \\ref{sec:exp}), these approaches fail to achieve good performance on image datasets with deep learning models, which can be as bad as FedAvg.\n\nIn this work, we address the non-IID issue from a novel perspective based on an intuitive observation: \\emph{the global model trained on a whole dataset is able to learn a better representation than the local model trained on a skewed subset}. Specifically, we propose \\tb{mo}del-c\\tb{on}trastive learning (MOON), which corrects the local updates by maximizing the agreement of representation learned by the current local model and the representation learned by the global model. Unlike the traditional contrastive learning approaches \\cite{simclr,simclr2,he2020momentum,misra2020self}, which achieve state-of-the-art results on learning visual representations by comparing the representations of different images, MOON conducts contrastive learning in model-level by comparing the representations learned by different models. Overall, MOON is a simple and effective federated learning framework, and addresses the non-IID data issue with the novel design of model-based contrastive learning.\n\n\n\nWe conduct extensive experiments to evaluate the effectiveness of MOON. MOON significantly outperforms the other state-of-the-art federated learning algorithms~\\cite{mcmahan2016communication,lifedprox,karimireddy2019scaffold} on various image classification datasets including CIFAR-10, CIFAR-100, and Tiny-Imagenet. With only lightweight modifications to FedAvg, MOON outperforms existing approaches by at least 2\\% accuracy in most cases. Moreover, the improvement of MOON is very significant on some settings. For example, on CIFAR-100 dataset with 100 parties, MOON achieves 61.8\\% top-1 accuracy, while the best top-1 accuracy of existing studies is 55\\%. \n\n\\section{Model-Contrastive Federated Learning}\n\n\n\\subsection{Problem Statement}\nSuppose there are $N$ parties, denoted $P_1, ..., P_N$. Party $P_i$ has a local dataset $\\mathcal{D}^i$. 
Our goal is to learn a machine learning model $w$ over the dataset $\\mathcal{D}\\triangleq \\bigcup_{i\\in[N]}\\mathcal{D}^i$ with the help of a central server, while the raw data are not exchanged. The objective is to solve \n\\begin{equation}\n \\argmin_{w} \\mathcal{L}(w) = \\sum_{i=1}^N \\frac{|\\mathcal{D}^i|}{|\\mathcal{D}|}L_i(w),\n\\end{equation}\nwhere $L_i(w) = \\mathbb{E}_{(x,y)\\sim \\mathcal{D}^i} [\\ell_i(w; (x, y))]$ is the empirical loss of $P_i$.\n\n\n\\subsection{Motivation}\n\nMOON is based on an intuitive idea: the model trained on the whole dataset is able to extract a better feature representation than the model trained on a skewed subset. For example, given a model trained on dog and cat images, we cannot expect the features learned by the model to distinguish birds and frogs which never exist during training. \n\nTo further verify this intuition, we conduct a simple experiment on CIFAR-10. Specifically, we first train a CNN model (see Section \\ref{sec:exp_setup} for the detailed structure) on CIFAR-10. We use t-SNE \\cite{maaten2008visualizing} to visualize the hidden vectors of images from the test dataset as shown in Figure \\ref{fig:global}. Then, we partition the dataset into 10 subsets in an unbalanced way (see Section \\ref{sec:exp_setup} for the partition strategy) and train a CNN model on each subset. Figure \\ref{fig:local} shows the t-SNE visualization of a randomly selected model. Apparently, the model trained on the subset learns poor features. The feature representations of most classes are even mixed and cannot be distinguished. Then, we run FedAvg algorithm on 10 subsets and show the representation learned by the global model in Figure \\ref{fig:fedavg_global} and the representation learned by a selected local model (trained based on the global model) in Figure \\ref{fig:fedavg_local}. We can observe that the points with the same class are more divergent in Figure \\ref{fig:fedavg_local} compared with Figure \\ref{fig:fedavg_global} (\\eg, see class 9). The local training phase even leads the model to learn a worse representation due to the skewed local data distribution. This further verifies that the global model should be able to learn a better feature representation than the local model, and there is a \\emph{drift} in the local updates. Therefore, under non-IID data scenarios, we should control the drift and bridge the gap between the representations learned by the local model and the global model.\n\n\n\\begin{figure}\n\\centering\n\\subfloat[global model\\label{fig:global}]{\\includegraphics[width=0.5\\linewidth]{figures\/cifar_allin_print.pdf}%\n}\n\\subfloat[local model\\label{fig:local}]{\\includegraphics[width=0.5\\linewidth]{figures\/cifar_solo_print.pdf}%\n}\n\\hfil\n\\subfloat[FedAvg global model\\label{fig:fedavg_global}]{\\includegraphics[width=0.5\\linewidth]{figures\/cifar_global_35_print.pdf}%\n}\n\\subfloat[FedAvg local model\\label{fig:fedavg_local}]{\\includegraphics[width=0.5\\linewidth]{figures\/cifar_local_35_print.pdf}%\n}\n\\caption{T-SNE visualizations of hidden vectors on CIFAR-10.}\n\\label{fig:nparty}\n\\vspace{-5pt}\n\\end{figure}\n\n\\subsection{Method}\nBased on the above intuition, we propose MOON. MOON is designed as a simple and effective approach based on FedAvg, only introducing lightweight but novel modifications in the local training phase. 
Since there is always drift in local training and the global model learns a better representation than the local model, MOON aims to decrease the distance between the representation learned by the local model and the representation learned by the global model, and increase the distance between the representation learned by the local model and the representation learned by the previous local model. We achieve this from the inspiration of contrastive learning, which is now mainly used to learn visual representations. In the following, we present the network architecture, the local learning objective and the learning procedure. At last, we discuss the relation to contrastive learning.\n\n\n\\subsubsection{Network Architecture}\nThe network has three components: a base encoder, a projection head, and an output layer. The base encoder is used to extract representation vectors from inputs. Like \\cite{simclr}, an additional projection head is introduced to map the representation to a space with a fixed dimension. Last, as we study on the supervised setting, the output layer is used to produce predicted values for each class. For ease of presentation, with model weight $w$, we use $F_w(\\cdot)$ to denote the whole network and $R_w(\\cdot)$ to denote the network before the output layer (i.e., $R_w(x)$ is the mapped representation vector of input $x$).\n\n\\subsubsection{Local Objective}\nAs shown in Figure \\ref{fig:local_fram}, our local loss consists two parts. The first part is a typical loss term (\\eg, cross-entropy loss) in supervised learning denoted as $\\ell_{sup}$. The second part is our proposed model-contrastive loss term denoted as $\\ell_{con}$.\n\nSuppose party $P_i$ is conducting the local training. It receives the global model $w^t$ from the server and updates the model to $w_i^t$ in the local training phase. For every input $x$, we extract the representation of $x$ from the global model $w^t$ (i.e., $z_{glob} = R_{w^t}(x)$), the representation of $x$ from the local model of last round $w_i^{t-1}$ (i.e., $z_{prev} = R_{w_i^{t-1}}(x)$), and the representation of $x$ from the local model being updated $w_i^t$ (i.e., $z = R_{w_i^t}(x)$). Since the global model should be able to extract better representations, our goal is to decrease the distance between $z$ and $z_{glob}$, and increase the distance between $z$ and $z_{prev}$. Similar to NT-Xent loss \\cite{sohn2016improved}, we define model-contrastive loss as\n\n\\begin{equation}\n\\begin{aligned}\n\\small\n &\\ell_{con} = -\\log{\\frac{\\exp(\\textrm{sim}(z, z_{glob})\/\\tau)}{\\exp(\\textrm{sim}(z, z_{glob})\/\\tau) + \\exp(\\textrm{sim}(z, z_{prev})\/\\tau)}} \n\\end{aligned}\n\\end{equation}\n\nwhere $\\tau$ denotes a temperature parameter. The loss of an input $(x,y)$ is computed by \n\n\\begin{equation}\\label{eq:L}\n \\ell = \\ell_{sup}(w_i^t; (x,y)) + \\mu\\ell_{con}(w_i^t; w_i^{t-1}; w^t; x),\n\\end{equation}\n\nwhere $\\mu$ is a hyper-parameter to control the weight of model-contrastive loss. The local objective is to minimize \n\\begin{equation}\\label{eq:obj}\n\\min_{w_i^t} \\mathbb{E}_{(x,y)\\sim D^i} [\\ell_{sup}(w_i^t; (x,y)) + \\mu \\ell_{con}(w_i^t; w_i^{t-1}; w^t; x)].\n\\end{equation}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/local_fram_print.pdf}\n \\caption{The local loss in MOON.}\n \\label{fig:local_fram}\n \n\\end{figure}\n\nThe overall federated learning algorithm is shown in Algorithm \\ref{alg:moon}. 
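Before walking through Algorithm \ref{alg:moon}, we give a minimal PyTorch-style sketch of the per-batch local loss defined above. It is an illustration rather than the released implementation; \texttt{model}, \texttt{global\_model} and \texttt{prev\_model} are assumed to return the pair $(R_w(x), F_w(x))$, i.e., the mapped representation together with the logits.
\begin{verbatim}
import torch
import torch.nn.functional as F

def moon_local_loss(model, global_model, prev_model,
                    x, y, mu=1.0, tau=0.5):
    z, logits = model(x)
    with torch.no_grad():               # frozen reference models
        z_glob, _ = global_model(x)     # from w^t
        z_prev, _ = prev_model(x)       # from w_i^{t-1}
    l_sup = F.cross_entropy(logits, y)
    sim_glob = F.cosine_similarity(z, z_glob, dim=-1) / tau
    sim_prev = F.cosine_similarity(z, z_prev, dim=-1) / tau
    # l_con = -log exp(sim_glob) / (exp(sim_glob) + exp(sim_prev))
    l_con = -torch.log(torch.exp(sim_glob) /
                       (torch.exp(sim_glob) + torch.exp(sim_prev))).mean()
    return l_sup + mu * l_con
\end{verbatim}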
In each round, the server sends the global model to the parties, receives the local model from the parties, and updates the global model using weighted averaging. In local training, each party uses stochastic gradient descent to update the global model with its local data, while the objective is defined in Eq.~\\eqref{eq:obj}.\n\n\\begin{algorithm}[t]\n\\SetNoFillComment\n\\LinesNumbered\n\\SetArgSty{textnormal}\n\\KwIn{number of communication rounds $T$, number of parties $N$, number of local epochs $E$, temperature $\\tau$, learning rate $\\eta$, hyper-parameter $\\mu$}\n\\KwOut{The final model $w^T$}\n\n\\BlankLine\n\\tb{Server executes}:\n\ninitialize $w^0$\\\\\n\\For {$t=0, 1, ..., T-1$}{\n\n \\For {$i=1, 2, ..., N$ \\tb{in parallel}}{\n \n send the global model $w^t$ to $P_i$\n \n $w_i^t \\leftarrow$ \\tb{PartyLocalTraining}($i$, $w^t$)\n }\n $w^{t+1} \\leftarrow \\sum_{k=1}^N \\frac{|\\mathcal{D}^i|}{|\\mathcal{D}|} w_k^{t}$\n}\n\nreturn $w^T$\n\n\\BlankLine\n\\tb{PartyLocalTraining}($i$, $w^t$):\n\n$w_i^t \\leftarrow w^t$ \n\n\\For{epoch $i = 1, 2, ..., E$}{\n \\For{each batch $\\tb{b} = \\{x, y\\}$ of $\\mathcal{D}^i$}{\n $\\ell_{sup} \\leftarrow CrossEntropyLoss(F_{w_i^t}(x), y)$\n \n $z \\leftarrow R_{w_i^t}(x)$\n \n $z_{glob} \\leftarrow R_{w^t}(x)$\n \n $z_{prev} \\leftarrow R_{w_i^{t-1}}(x)$\n \n $\\ell_{con} \\leftarrow -\\log{\\frac{\\exp(\\textrm{sim}(z, z_{glob})\/\\tau)}{\\exp(\\textrm{sim}(z, z_{glob})\/\\tau) + \\exp(\\textrm{sim}(z, z_{prev})\/\\tau)}}$\n \n $\\ell \\leftarrow \\ell_{sup} + \\mu\\ell_{con}$\n \n $w_i^t \\leftarrow w_i^t - \\eta \\nabla \\ell$\n }\n}\n\nreturn $w_i^t$ to server\n\\caption{The MOON framework}\n\\label{alg:moon}\n\\end{algorithm}\n\nFor simplicity, we describe MOON without applying sampling technique in Algorithm \\ref{alg:moon}. MOON is still applicable when only a sample set of parties participate in federated learning each round. Like FedAvg, each party maintains its local model, which will be replaced\nby the global model and updated only if the party is selected\nto participate in a round. MOON only needs the latest local\nmodel that the party has, even though it may not be updated\nin round $(t-1)$ (e.g., $w_i^{t-1} = w_i^{t-2}$).\n\nAn notable thing is that considering an ideal case where the local model is good enough and learns (almost) the same representation as the global model (i.e., $z_{glob} = z_{prev}$), the model-contrastive loss will be a constant (i.e., $-\\log\\frac{1}{2}$). Thus, MOON will produce the same result as FedAvg, since there is no heterogeneity issue. In this sense, our approach is robust regardless of different amount of drifts.\n\n\n\\subsection{Comparisons with Contrastive Learning}\nA comparison between MOON and SimCLR is shown in Figure \\ref{fig:comp}. The model-contrastive loss compares representations learned by different models, while the contrastive loss compares representations of different images. We also highlight the key difference between MOON and traditional contrastive learning: MOON is currently for supervised learning in a federated setting while contrastive learning is for unsupervised learning in a centralized setting. Drawing the inspirations from contrastive learning, MOON is a new learning methodology in handling non-IID data distributions among parties in federated learning. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/compare_loss_print.pdf}\n\\caption{The comparison between SimCLR and MOON. 
Here $x$ denotes an image, $w$ denotes a model, and $R$ denotes the function to compute representation. SimCLR maximizes the agreement between representations of different views of the same image, while MOON maximizes the agreement between representations of the local model and the global model {on the mini-batches.}}\n\\label{fig:comp}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe theory of fault-tolerant quantum error correction (FT-QEC) is one of the\npillars that the field of quantum information rests on. Starting with the\ndiscovery of quantum error correcting codes \\cite{Shor:95,Steane:96a}, and\nthe subsequent introduction of fault tolerance \\cite{Shor:96}, this theory\nhas been the subject of many improvements and important progress \\cite%\n{Knill:96,Knill:98,Aharonov:96b,Aharonov:99,Zalka:96,Gottesman:97b,Gottesman:99,Gottesman:99a,Preskill:97a,Steane:99a,Steane:03,Knill:04,Reichardt:04,Knill:05}%\n, leading to the well-known error correction threshold condition. Most\nrecently, work by Steane \\cite{Steane:03} and Knill \\cite{Knill:05} (see\nalso Reichardt \\cite{Reichardt:04}) has pushed the threshold down to values\nthat are claimed to be very close to being within experimental reach. A\nnotable feature of much of the work on FT-QEC is that the error models are \n\\emph{phenomenological}. By this we mean that the underlying models often do\nnot start from a Hamiltonian, microscopic description of the system-bath\ninteraction, but rather from a higher level, effective description, most\nnotably that of Markovian dynamics. E.g., Knill writes: \\textquotedblleft We\nassume that a gate's error consists of \\emph{random, independent}\napplications of products of Pauli operators with probabilities determined by\nthe gate\\textquotedblright\\ (our emphasis) \\cite{Knill:05}. This approach is\nnatural given the considerable difficulty of obtaining error thresholds\nstarting from a purely Hamiltonian description. Nevertheless, Hamiltonian\napproaches to decoherence management in a fault-tolerant setting have been\npursued, e.g., a mixed phenomenological-Hamiltonian treatment of FT-QEC \\cite%\n{MohseniLidar:05,Terhal:04,Aliferis:05,Aharonov:05}, and a fully Hamiltonian study of\nfault tolerance in dynamical decoupling \\cite{KhodjastehLidar:04}. Also\nnoteworthy are recent mixed phenomenological-Hamiltonian \\emph{continuous\ntime} treatments of QEC \\cite{Ahn:01,Sarovar:04,Sarovar:05}.\n\nHere we are concerned with a critical re-evaluation of the physical\nassumptions entering the theory of FT-QEC. We scrutinize, in particular, the\nconsistency of the assumption of Markovian dynamics within the larger\nframework of FT-QEC. We point out that there may be an inherent\ninconsistency in the theory of Markovian FT-QEC, when viewed from the\nperspective of the validity of the Markovian approximation. We begin by\nbriefly reviewing, in Section~\\ref{FTQEC-review}, a set of minimal and\nstandard, universally agreed upon assumptions made in Markovian FT-QEC theory. We then\nreview, in Section~\\ref{MME-review}, the derivation of Markovian master\nequations, emphasizing the physical assumptions entering the Markovian\napproximation, in particular the requirement of consistency with\nthermodynamics. Having delineated the set of assumptions entering FT-QEC and\nthe quantum Markov approximation, we discuss in Section~\\ref{consistency}\nthe internal consistency of Markovian FT-QEC theory. 
We point out where\naccording to our analysis there is an inconsistency, and discuss possible\nobjections. In Section~\\ref{alternatives} we then discuss how one may\novercome the inconsistency using a variety of alternative approaches,\nincluding adiabatic quantum computing (QC), holonomic QC, topological QC,\nand recent work on FT-QEC in a non-Markovian setting \\cite%\n{Terhal:04,Aliferis:05,Aharonov:05}. We conclude in Section~\\ref{conc}.\n\n\\section{Review of Standard Assumptions of FT-QEC}\n\n\\label{FTQEC-review}\n\nThe following are a set of minimal assumptions made in the theory of FT-QEC \n\\cite%\n{Shor:96,Knill:96,Knill:98,Aharonov:96b,Aharonov:99,Zalka:96,Gottesman:97b,Gottesman:99,Gottesman:99a,Preskill:97a,Steane:99a,Steane:03,Knill:04,Reichardt:04,Knill:05}%\n:\n\n\\begin{enumerate}\n\\item \\textbf{A1} --- \\emph{Gates can be executed in a time} $\\tau _{g}$ \n\\emph{such that} $\\tau _{g}\\omega =O(\\pi )$, \\emph{where} $\\omega $ \\emph{is\na typical Bohr or Rabi frequency}.\\footnote{%\nOne might object that slower (even adiabatic) gates could be used instead.\nWe analyze this possibility in detail in Section~\\ref{adiabatic-gates}, and\nshow that it does not lead to an improvement.}\n\n\\item \\textbf{A2} --- \\emph{A constant supply of fresh and nearly pure\nancillas}: at every time step we are given a supply of many qubits in the\nstate $|0\\rangle $, each of which can be faulty with some error parameter $%\n\\eta \\ll 1$.\n\n\\item \\textbf{A3} --- \\emph{Error correlations decay exponentially in time\nand space}.\n\\end{enumerate}\n\nSome remarks:\n\n\\noindent (i) \\textbf{A1} is not typically stated explicitly in the\nFT-QEC\\ literature, but can be understood as resulting from the definition\nof a quantum gate, which is a unitary transformation $U=\\exp (iA)$; when $%\nA=\\tau _{g}H$, where $H$ is a Hamiltonian generating the gate, \\textbf{A1}\nfollows from the absence of a free parameter:\\ when $\\tau _{g}$ is scaled up \n$H$ (and hence its eigenvalues) must be scaled down, and \\textit{vice versa}.\n\n\\noindent (ii) The distinction between Bohr and Rabi frequencies in \\textbf{%\nA1} is related to the application of constant \\textit{vs} periodic control\nfields, respectively. In the case of a constant control field \\textbf{A1}\ncan be understood as the condition that saturates the \\textquotedblleft\nMargolus-Levitin theorem\\textquotedblright\\ \\cite{Margolus:98}, which states\nthat the time required to transform an initial state $|\\psi \\rangle $ to an\northogonal state $|\\psi ^{\\bot }\\rangle $ using a constant Hamiltonian $H$\nis lower-bounded by $\\tau _{\\min }=\\pi \\hbar \/(2E)$, where $E=\\langle \\psi\n|H|\\psi \\rangle $; when $|\\psi \\rangle $ is an eigenstate of $H$ we have $%\n\\tau _{g}\\geq \\pi \/(2\\omega )$, where $\\omega =E\/\\hbar $ is the Bohr\nfrequency. See also \\cite{Andrecut:04} for the adiabatic version of the\nMargolus-Levitin theorem, and \\cite{Gea:02} for a lower bound on the amount\nof energy needed to carry out an elementary logical operation on a quantum\ncomputer, with a given accuracy and in a given time. 
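As a simple numerical illustration (the frequency value is chosen here purely for concreteness and is not tied to any particular device): for a Bohr frequency $\omega /2\pi =1\,$GHz, the bound gives $\tau _{g}\geq \pi \/(2\omega )=1\/(4\times 10^{9}\,\mathrm{s}^{-1})=0.25\,$ns.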
In the case of periodic\ncontrol fields one can understand \\textbf{A1} as the result of the standard\nsolution to the driven two-level atom problem, where the probability of a\ntransition between ground and excited state is given by $(\\Omega _{\\mathrm{R}%\n}\/\\Omega _{\\mathrm{R}}^{\\prime })\\sin ^{2}(\\Omega _{\\mathrm{R}}^{\\prime\n}t\/2) $, where $\\Omega _{\\mathrm{R}}$ is the Rabi frequency and $\\Omega _{%\n\\mathrm{R}}^{\\prime }=(\\Omega _{\\mathrm{R}}+\\delta ^{2})^{1\/2}$, where $%\n\\delta $ is the detuning. This expression for the transition probability\nyields $\\tau _{g}\\Omega _{\\mathrm{R}}^{\\prime }=O(\\pi )$.\n\n\\noindent (iii) \\textbf{A2} is shown to be necessary in \\cite%\n{Aharonov:96b}. \\textbf{A3} is stated clearly in \\cite{Aharonov:99} (see the\ndiscussion in Sections 2.10 and 10 there). These, and additional assumptions\n[such as constant fault rate (independent of number of qubits) and\nparallelism (to correct errors in all blocks simultaneously)] are explicitly\nlisted, e.g., also in \\cite{Dennis:02}, Section II.\n\n\\noindent (iv) \\textbf{A3} is usually related to the\nMarkovian assumption, however both notions, the space-time correlations of\nerrors and the Markovian property, need some comments and explanations.\nUsing the convolutionless formalism in the theory of open systems (see, e.g.,\n\\cite{Breuer:02}) it is always possible to resolve the total superoperator $%\n\\Lambda (t)$ as \n\\begin{equation}\n\\Lambda (t)=\\prod_{i=1}^{n}\\Lambda _{i}U_{i} \\label{eq:Lambda}\n\\end{equation}%\nwhere $U_{i}$ are ideal unitary superoperators (corresponding to quantum\nlogic gates), and $\\Lambda _{i}$ are linear maps, not necessarily completely\npositive (CP) or even positive. If $\\Lambda _{i}$ are CP then we can always\nrealize them by coupling to an evironment which is \\textquotedblleft renewed\neach time step\\textquotedblright . This is the \\textquotedblleft Markovian\ncondition\\textquotedblright\\ as formulated in \\cite{Aharonov:99} (section\n2.10). However, complete positivity is not a necessary condition for QEC,\nwhich only requires a linear structure \\cite{Knill:97b,ShabaniLidar:06}. To obtain the\nThreshold Theorem one needs the following bound on the probability \\cite%\n{Aharonov:99}[Eq.~(2.6)]: \n\\begin{equation}\n\\text{Pr}(\\text{fault\\ path\\ with\\ }k\\text{\\ errors})\\leq c\\eta ^{k}(1-\\eta\n)^{v-k}, \\label{Pr}\n\\end{equation}%\nwhere $\\eta $ is the probability of a single error, $c$ is a certain\nconstant independent of $\\eta $, and $v$ is the number of error locations in\nthe circuit. This bound implies that the $k$-qubit errors should scale as $%\n\\sim \\eta ^{k}$, i.e., that in the decomposition of $\\Lambda _{j}$ into $k$%\n-qubit superoperators $L_{j}(k)$ \n\\begin{equation}\n\\Vert L_{j}(k)\\Vert \\leq c\\eta ^{k}\\ . \\label{k-qub}\n\\end{equation}%\nAs discussed in \\cite{Alicki:02} (within the Born approximation), the\ncondition (\\ref{k-qub}) can strictly be satisfied only for temporally \\emph{%\nexponentially}{\\ decaying reservoir correlation functions,} while for\nrealistic reservoir models the temporal decay is generically powerlike. The\ndecay of reservoir correlation functions (i.e., localization in time)\ntranslates into localization of errors in space due to the finite speed of\nerror propagation. 
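To illustrate the scaling demanded by the bound (\ref{Pr}) numerically (the values below are ours and purely illustrative): for $\eta =10^{-3}$ and $v=100$ error locations, the right-hand side evaluates to roughly $9\times 10^{-4}$, $9\times 10^{-7}$ and $9\times 10^{-10}$ for $k=1,2,3$, i.e., each additional error must be suppressed by a factor of order $\eta $.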
On the other hand it is widely believed that the\nMarkovian model can be understood as arising, to an excellent approximation,\nfrom coupling to a reservoir which is not only renewed at each time step,\nbut whose influence is independent of the actual Hamiltonian dynamics of the\nopen system, and is localized in space (independent errors model) \\cite%\n{Giulini:book}. A large part of the present paper is devoted to a critical\ndiscussion of this claim.\n\n\\noindent (v) We note that the recent papers on FT-QEC theory \\cite%\n{Terhal:04,Aliferis:05,Aharonov:05} relax the (Markovian) assumption \\textbf{A3}, but do\nmake \\textbf{A1} (implicitly) and \\textbf{A2}. We comment on these\npapers in Section~\\ref{nonM-FTQEC}.\n\n\\section{Review of Markovian Master Equations}\n\n\\label{MME-review}\n\nThe field of derivations of the quantum Markovian master equation (MME) is\nstrewn with pitfalls: it is in fact non-trivial to derive the MME in a fully\nconsistent manner. There are essentially two types of fully rigorous\napproaches, known as the \\emph{singular coupling limit} (SCL) and the \\emph{%\nweak coupling limit} (WCL), both of which we consider below. See, e.g., the books \n\\cite{Alicki:87,Breuer:book} for more details, as well as the derivation in \n\\cite{Alicki:89}.\n\nConsider a system and a reservoir (bath), with self Hamiltonians $H_{S}^{0}$\nand $H_{R}$, interacting via the Hamiltonian $H_{SR}=\\lambda S\\otimes R$,\nwhere $S$ ($R$) is a Hermitian system (reservoir) operator and $\\lambda $ is\nthe coupling strength. A more general model of the form $H_{SR}=\\sum_{\\alpha\n}\\lambda _{\\alpha }S_{\\alpha }\\otimes R_{\\alpha }$ can of course also be\nconsidered and results in the same qualitative conclusions. Thus the total\nHamiltonian is \n\\begin{equation}\nH=(H_{S}^{0}+H_{C}(t))\\otimes I_{R}+I_{S}\\otimes H_{R}+H_{SR}, \\label{ham}\n\\end{equation}%\nwhere $H_{C}(t)$ describes control over the quantum device (system), and $I$\nis the identity operator.\n\nThe SCL and WCL derivations start from the expansion of the propagator $%\n\\Lambda $ for the reduced, system-only dynamics, \n\\begin{equation}\n\\rho _{S}(t)=\\Lambda (t,0)\\rho _{S}(0),\n\\end{equation}%\ncomputed in the interaction picture with respect to the renormalized, \\emph{%\nphysical}, time-dependent Hamiltonian $H_{S}(t)=H_{S}+H_{C}(t)$, where \n\\begin{equation}\nH_{S}=H_{S}^{0}+\\lambda ^{2}H_{1}^{\\mathrm{corr}}(t)+\\cdots . \\label{eq:H_S}\n\\end{equation}%\nThe renormalizing terms containing powers of $\\lambda $ are\n\\textquotedblleft Lamb-shift\\textquotedblright\\ corrections due to the\ninteraction with the bath (see, e.g., \\cite{Lidar:CP01}). The lowest order\n(Born) approximation with respect to the coupling constant $\\lambda $ yields \n$H_{1}^{\\mathrm{corr}}$, while the higher order terms ($\\cdots $) require\ngoing beyond the Born approximation. Introducing a cumulant expansion for\nthe propagator, \n\\begin{equation}\n\\Lambda (t,0)=\\exp \\sum_{n=1}^{\\infty }[\\lambda ^{n}K^{(n)}(t)],\n\\end{equation}%\none finds that $K^{(1)}=0$. 
The Born approximation consists of terminating\nthe cumulant expansion at $n=2$, whence we denote $K^{(2)}\\equiv K$: \n\\begin{equation}\n\\Lambda (t,0)=\\exp [\\lambda ^{2}K(t)+O(\\lambda ^{3})].\n\\end{equation}%\nOne obtains%\n\\begin{eqnarray}\nK(t)\\rho _{S}=\\int_{0}^{t}ds\\int_{0}^{t}duF(s-u)S(s)\\rho\n_{S}S(u)^{\\dag } \\nonumber \\\\\n+(\\mathrm{similar \\ terms})\n\\label{eq:K(t)}\n\\end{eqnarray}\nas the first term in a cumulant expansion \\cite{Alicki:89}. Here $F(s)=%\n\\mathrm{Tr}(\\rho _{R}R(s)R)$ is the autocorrelation function, where $\\rho\n_{R}$ is the reservoir state and $R(s)$ is $R$ in the $H_{R}$-interaction\npicture, and $S(u)$ is $S$ in the interaction picture with respect to the\nphysical Hamiltonian $H_{S}(t)$. The \\textquotedblleft similar\nterms\\textquotedblright\\ in Eq.~(\\ref{eq:K(t)}) are of the form $\\rho\n_{S}S(s)S(u)^{\\dag }$ and $S(s)S(u)^{\\dag }\\rho _{S}$.\n\nAt first sight $K(t)\\sim t^{2}$, and this is true for small times (Zeno\neffect \\cite{Facchi:PRL02}). The Markov approximation means that we can\nreplace $K(t)$ by an expression that is linear in $t$, i.e. \n\\begin{equation}\nK(t)\\simeq \\int_{0}^{t}\\mathcal{L}(s)ds \\label{eq:L}\n\\end{equation}%\nwhere $\\mathcal{L}(t)$ is a time-dependent Lindblad generator. That the\nLindblad generator can be time-dependent even after transforming back to the\nSchr\\\"{o}dinger picture is important for our considerations below.\n\n\\subsection{Singular Coupling Limit}\n\n\\label{SCL}\n\nThe SCL approach we present in this subsection underlies the standard\nderivation of the MME that can be found in almost any text concerning the\nMarkov approximation, though not always under the heading \\textquotedblleft\nSCL\\textquotedblright\\ (e.g., \\cite{Carmichael:book}, p.8, Eq.~(1.36)). The\nrigorous derivation of the SCL is briefly discussed (with references) in \n\\cite{Alicki:87}, pp.36-38. It is based on a rescaling of the bath and\nsystem-bath Hamiltonians, which physically makes sense in the\nhigh-temperature limit only. We will shortly see the emergence of this limit.\n\nIn essence, the \\textquotedblleft naive SCL-Markov\napproximation\\textquotedblright\\ is obtained by the ansatz $F(s)=a\\delta (s)$\nfor the autocorrelation function, whence \n\\begin{equation}\nL(s)\\rho _{S}=aS(s)\\rho _{S}S(s)^{\\dag }+(\\mathrm{similar\\ terms}).\n\\end{equation}%\n\\textbf{\\ }As a consequence, return to the Schr\\\"{o}dinger picture gives a\nMME with the dissipative part independent of the Hamiltonian: \n\\begin{eqnarray}\n\\frac{d\\rho _{S}}{dt} &=&-i[H_{S}(t),\\rho _{S}]+\\mathcal{L}\\rho _{S}, \\notag\n\\\\\n\\mathcal{L}\\rho _{S} &\\equiv &-\\frac{1}{2}\\lambda ^{2}a[S,[S,\\rho _{S}]]\n\\label{eq:SCL}\n\\end{eqnarray}\n\nMore precisely, we must consider the multi-time bath correlation functions $%\nF(t_{1},...,t_{n}):=\\mathrm{Tr}[\\rho _{R}R(t_{1})...R(t_{n})]:=\\langle\nR(t_{1})...R(t_{n})\\rangle $. Here $R(t):=\\exp (iH_{R}t)R\\exp (-iH_{R}t)$\nare the bath operators in the interaction picture, $\\rho _{R}=\\exp (-\\beta\nH_{R})\/Z$ (where $\\beta =1\/kT$, $Z=\\mathrm{Tr}[\\exp (-H_{R}\/kT)]$) is\nthe bath thermal equilibrium state at temperature $T$, which is a stationary\nstate of the reservoir, i.e., $[H_{R},\\rho _{R}]=0$. The influence of the\nenvironment on the system is entirely encoded into the $\\{F(t_{1},...,t_{n})%\n\\}_{n=2}^{\\infty }$.\\footnote{$F(t_{1})$ is constant by stationarity. 
We\nreserve the notation $F(t)$ for $F(t_{1},t_{2})\\equiv F(t_{2}-t_{1})$ below.}\nHeuristically, the Markov approximation can be justified under the following\nconditions:\n\n\\begin{enumerate}\n\\item The lowest order correlation function, \n\\begin{equation}\nF(t)=\\langle R(s+t)R(s)\\rangle =\\int_{-\\infty }^{\\infty }G(\\omega\n)e^{-i\\omega t}d\\omega , \\label{2corr}\n\\end{equation}%\ncan be approximated by a Dirac delta function:\\footnote{%\nNote that stationarity implies that $F(t)$ does not depend on $s$.} \n\\begin{equation}\nF(t)\\simeq \\left( \\int_{-\\infty }^{\\infty }F(s)ds\\right) \\delta\n(t)=G(0)\\delta (t) \\label{F-delta}\n\\end{equation}%\n(white-noise approximation). Eq.~(\\ref{2corr}) defines the \\emph{spectral\ndensity} $G(\\omega )$, which is a key object in the theory.\n\n\\item Higher order correlation functions exhibit a Gaussian-type behavior,\ni.e., can be estimated by sums of products of the lowest order ones, and\nthen, by condition (\\ref{F-delta}), decay sufficiently rapidly.\n\\end{enumerate}\n\nLet us now comment on the physical relevance of the white-noise\napproximation.\n\nFirst, the condition (\\ref{F-delta}) cannot be satisfied in general. For\nexample, in the important case of linear coupling to a bosonic field (e.g.,\nelectromagnetic field, phonons in solid state), we have $G(0)=0$, which\nmeans (by inverse Fourier transform) that $\\int_{-\\infty }^{+\\infty\n}F(t)dt=0 $, and therefore \\emph{$F(t)$ cannot be well approximated by} $%\n\\delta (t)$.\n\nSecond, even for models with $G(0)>0$ there exists a universal relation, the\nso-called Kubo-Martin-Schwinger (KMS) condition, $\\langle R(t)R(0)\\rangle\n=\\langle R(0)R(t+i\\beta )\\rangle $, which is valid for all quantum systems\nat thermal equilibrium. This implies: \n\\begin{equation}\nG(-\\omega )=e^{-\\beta \\omega }G(\\omega )\\ . \\label{KMS}\n\\end{equation}%\n(See, e.g., \\cite{Alicki:87}[pp.90-91], \\cite{Thirring:book}[pp.176-177], or \n\\cite{Breuer:book}[p.137].) The fundamental importance of the KMS condition\nis captured by the fact that it is necessary in order for thermodynamics to\nhold. The KMS condition implies a strong asymmetry of the spectral density $%\nG(\\omega )$ for low $T$, where $T$ is measured relative to the presence of $%\nkT$ energy scales in the bath, i.e., relative to the range where $G(\\omega )$\nis non-vanishing. The KMS condition is relevant to our discussion since we\nmake the reasonably minimalistic assumption that the reservoir (not the QC)\nis in thermal equilibrium.\\footnote{%\nOne may challenge the notion that the bath must always be in thermal\nequilibrium. E.g., consider an atom in a microwave cavity, with the cavity\nelectromagnetic field initially in thermal equilibrium. Now suppose the atom\nis driven and is coupled to the cavity electromagnetic field, which\ntherefore is no longer in equilibrium. However, is the internal\nelectromagnetic field the relevant environment, or is it the external one?\nClearly, the electromagnetic field inside the cavity is not a reservoir but\nitself a part of the system. This is because: a) its spectrum is discrete,\nb) its coupling to the atom (close to resonance) is enhanced. 
The reason these considerations matter is that b) implies the strong coupling regime, hence failure of the initial state tensor product structure assumption, hence difficulties with the separation of the system from the reservoir (dressed atom picture); a) implies that $F(t)$ is (quasi)-periodic with short Poincar\'{e} recurrences, hence a strongly non-Markovian regime, and thus associated difficulties for Markovian FT-QEC. On the other hand the external electromagnetic field has a continuous spectrum and the state product structure is easily satisfied, hence it qualifies as a reservoir. This example merely serves to illustrate accepted notions regarding the division into a well defined system and bath; for most practical purposes thermal equilibrium is the simplest and most relevant model of an environment, and FT-QEC theory must be applicable to this setting.}

Third, $G(\omega )$ need not be flat even at high $T$ (indeed, the KMS condition only implies that $G(\omega )$ is symmetric at high $T$). For example, this is the case for the electromagnetic field and for phonons, for which at $T>0$ one has $G(\omega )\sim \omega ^{3}/(1-e^{-\hbar \omega /kT})$ for $|\omega |\leq \omega _{\mathrm{cut}}$, and $G(\omega )=0$ for $|\omega |>\omega _{\mathrm{cut}}$. One can see that for high $T$ ($kT\gg \hbar \omega _{\mathrm{cut}}$), $G(\omega )\sim kT\omega ^{2}$ is symmetric. Here $\omega _{\mathrm{cut}}$ is the Debye frequency in the case of phonons, while for the electromagnetic field $\omega _{\mathrm{cut}}$ should tend to infinity in the renormalization procedure. A flat $G(\omega )$ means a structureless bath, while physical systems always have a nontrivial structure depending on relevant energy scales.\footnote{It is interesting to note that even if we try to enforce a flat $G(\omega )$ by, e.g., choosing an appropriate form factor for the spin-boson system, the obtained model -- the so-called \textquotedblleft Ohmic case\textquotedblright\ -- is mathematically and physically ill defined (see \cite{Alicki:02a}).}

Now let us return to the implications of the SCL assumptions for the problem of FT-QEC. In order to derive the SCL from first principles, one rescales $H_{R}\rightarrow H_{R}/\epsilon ^{2}$, rescales $H_{SR}\rightarrow H_{SR}/\epsilon $, but keeps $H_{S}$ and $\rho _{R}$ fixed.\footnote{Note that because different Hamiltonians are rescaled differently, this rescaling procedure is \emph{not} equivalent to a direct rescaling of the time variable (which is what is done in the WCL, below).} The idea of this rescaling is that it accelerates the reservoir's evolution (via $H_{R}\rightarrow H_{R}/\epsilon ^{2}$) and hence produces faster decay of the reservoir correlations, $F(t)$. To see this, note that the rescaling $H_{SR}\rightarrow H_{SR}/\epsilon $ increases the amplitude $F(0)$ to $F(0)/\epsilon ^{2}$ (proportional to $H_{SR}^{2}$), while keeping the strength of the noise $\int_{-\infty }^{+\infty }F(t)dt=G(0)$ fixed (as can be seen via a change of variables $t\rightarrow t/\epsilon ^{2}$ in the integral). This implies a faster decay of $F(t)$. The rescaling procedure is specifically designed to yield the delta correlation [Eq.~(\ref{F-delta})] in the limit as $\epsilon \rightarrow 0$.
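The effect of this rescaling can be made concrete with a small numerical sketch. Purely for illustration we take an exponentially decaying correlation function $F(t)=e^{-|t|/\tau _{c}}$ (not tied to any specific bath model used in this paper) and apply the SCL rescaling; the amplitude $F(0)$ grows as $1/\epsilon ^{2}$ while $\int F(t)dt$ stays fixed, i.e., $F(t)$ approaches a delta function:
\begin{verbatim}
import numpy as np

tau_c = 1.0  # assumed bare correlation time (arbitrary units)

def F_rescaled(t, eps):
    # H_SR -> H_SR/eps   boosts the amplitude F(0) by 1/eps^2;
    # H_R  -> H_R/eps^2  compresses the correlation time by eps^2.
    return (1.0 / eps**2) * np.exp(-np.abs(t) / (eps**2 * tau_c))

t = np.linspace(-50.0, 50.0, 2_000_001)
dt = t[1] - t[0]
for eps in (1.0, 0.5, 0.2):
    F = F_rescaled(t, eps)
    print(f"eps={eps}: F(0)={F[t.size // 2]:.1f}, "
          f"integral={F.sum() * dt:.3f}")   # integral stays ~ 2*tau_c
\end{verbatim}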
Note that if $\\rho _{R}$ is at\nthermal equilibrium at temperature $T$ with respect to $H_{R}$, then, since $%\n\\rho _{R}=\\exp (-\\beta H_{R})\/Z$ is fixed, it must be at thermal equilibrium\nwith respect to $H_{R}\/\\epsilon ^{2}$ at the temperature $T\/\\epsilon\n^{2}\\rightarrow \\infty $, whence our mention of the high temperature limit,\nabove. Further note that $H_{S}$ is not rescaled since the SCL is\n(artificially) designed to produce \\textquotedblleft white\nnoise\\textquotedblright\\ on the natural time scale of system's evolution,\nwhich is given by $H_{S}$.\n\nAnother, equivalent way to understand the emergence of the high-$T$ limit is\nthe following: For the Markovian condition $F(t)\\simeq a\\delta (t)$ to hold\nthe spectral density must be flat: $G(\\omega )=\\mathrm{const}$. However,\nthis is possible only in the limit $T\\rightarrow \\infty $ of the KMS\ncondition. More precisely, \\emph{the Markovian condition can hold only if }$%\nkT\\gg \\omega $ \\emph{over the entire spectrum of the system's Bohr\nfrequencies}. Strictly speaking, $G(\\omega )$ is never constant. The\nvariation of $G(\\omega )$ happens over the \\textquotedblleft thermal\nmemory\\textquotedblright\\ time $\\tau _{\\mathrm{th}}:=1\/kT$. In the infinite $%\nT$ limit we then recover the case of zero memory-time, i.e., Markovian\ndynamics. Physically, it is enough to assume that $G(\\omega )$ is\nessentially constant over the interval $[-\\omega _{0},\\omega _{0}]$ where $%\nkT>\\omega _{0}\\gg $ system's Bohr frequencies. I.e., system energy scales\nmust be compared to $1\/\\tau _{\\mathrm{th}}$ and this leads to the important\nrealization that \\emph{the Markovian approximation can be consistent with\nthe KMS condition only in the high temperature regime }$kT\\gg E$, \\emph{where%\n} $E$ \\emph{is the system energy scale}. As we argue below, this fact\npresents a serious difficulty in the context of FT-QEC, the issue being\nessentially that the requirement of a constant supply of nearly pure and\ncold ancillas contradicts the high-$T$ limit needed for the Markov\napproximation to hold.\n\n\\subsection{Weak Coupling Limit}\n\n\\label{WCL}\n\nIn the SCL approach above there was no restriction on the time-dependence of\nthe system Hamiltonian. However, the price paid is the high-$T$ limit.\nMoreover, while mathematically the SCL is rigorous in the scaling limit, it\nis inconsistent with thermodynamics except in the $T\\rightarrow \\infty $\nlimit. On the other hand, the derivation by Davies, in his seminal 1974\npaper \\cite{Davies:74}, is perhaps the only derivation of the MME that is\nentirely consistent from \\emph{both the mathematical and physical} points of\nview. The Davies approach is based not on the high-$T$ limit, but rather on\nthe physically plausible idea of weak coupling. This is natural and\nconsistent with thermodynamics at all temperatures.\n\nMore specifically, Davies' derivation does not invoke a flatness condition\non $G(\\omega )$ but is, of course, still subject to the KMS condition. In\nthe Davies approach the Markov approximation is a consequence of weak\ncoupling (and hence slow dynamics of the system in the interaction picture),\nand time coarse-graining, which leads to cancellation of the non-Markovian\noscillating terms. The price we pay is the invalidity of this approach for\ntime-dependent Hamiltonians, except in the adiabatic case. We explain this\nimportant comment below. 
Hence, while the Davies approach does not require\nthe high-$T$ limit, it imposes severe restrictions on the speed of quantum\ngates.\n\nIn his rigorous derivation Davies replaced the heuristic condition (\\ref%\n{F-delta}) by the weaker \n\\begin{equation}\n\\int |F(t)|dt<\\infty . \\label{F-cond}\n\\end{equation}%\nThis condition avoids the difficulties originating from the singularity of\nthe SCL condition (\\ref{F-delta}), and preempts the corresponding problems\nwith the high-$T$ limit.\\footnote{%\nIn some sense the weak coupling limit is similar to the Central Limit\nTheorem (CLT) in probability, and condition~(\\ref{F-cond}) is analogous to a\nrough upper bound on the second moment in the CLT. If it is not satisfied\nthen the noise may be not Gaussian in the weak coupling limit. The value of $%\n\\int |F(t)|dt$ itself does not provide any meaningful physical parameter and\ncan depend on some regularization\/cut-off parameters.}$^{,}$\\footnote{%\nOne can go further and ask how generic the Markovian case is, in the sense\nthat Eq.~(\\ref{F-cond}) is satisfied. In fact, typically $F(t)$ decays as $%\n1\/t^{\\alpha }$ (e.g., for the vacuum bath $\\alpha =4$ \\cite{Alicki:02}%\n\\textbf{)}, which means that in some cases ($\\alpha \\leq 1$) Eq.~(\\ref%\n{F-cond}) can be violated. For a systematic treatment of these non-Markovian\neffects see, e.g., \\cite{Breuer:02}.} We now consider the cases of a\nconstant, periodic, and arbitrarily time-dependent control Hamiltonian. The\nconstant case is the one originally treated by Davies \\cite{Davies:74}, and\nextended in \\cite{Davies:78} to time-dependent Hamiltonians assuming a slow\n(\\textquotedblleft adiabatic\\textquotedblright ) change on the dissipation\ntime scale $\\lambda ^{2}t$. The non-constant cases we study here have, as\nfar as we know, not been published before in the general scientific\nliterature.\n\n\\subsubsection{WCL for Constant $H_{C}$: Summary of the Original Davies\nDerivation}\n\nWe present a simplified version of the discussion of the Markov\napproximation in \\cite{Alicki:89}. Denote by ${E_{k}}$ the Bohr energies\n(eigenvalues of $H_{S}$), let $\\omega \\in \\{\\omega _{kl}=E_{k}-E_{l}\\}_{k,l}$%\n, and let $S_{\\omega }$ be the discrete Fourier components of the\ninteraction picture $S$, i.e., \n\\begin{equation}\nS(t)=\\exp (iH_{S}t)S\\exp (-iH_{S}t)=\\sum_{\\omega }S_{\\omega }\\exp (i\\omega\nt), \\label{eq:S}\n\\end{equation}%\nwhere $H_{S}$ is the renormalized (physical) system Hamiltonian: the sum of\nthe \\textquotedblleft bare\\textquotedblright\\ $H_{S}^{0}$ and a Lamb shift\nterm (bath induced), as in Eq.~(\\ref{eq:H_S}). Equivalently, \n\\begin{equation}\n\\lbrack H_{S},S_{\\omega }]=\\omega S_{\\omega }. \\label{Dav1}\n\\end{equation}%\nWe remark that in the original Davies paper the Bohr energies and Eq.~(\\ref%\n{Dav1}) are computed with respect to the bare Hamiltonian $H_{S}^{0}$. Here\nwe use the physical Hamiltonian $H_{S}$ in order to take into account the\nfact that the Lamb shift term, although formally proportional to $\\lambda\n^{2}$, can be large or even infinite after cut-off removal.\n\nThen, it follows from Eq.~(\\ref{eq:K(t)}) that \n\\begin{eqnarray}\nK(t)\\rho _{S}&=&\\sum_{\\omega ,\\omega ^{\\prime }}S_{\\omega }\\rho _{S}S_{\\omega\n^{\\prime }}^{\\dag }\\int_{0}^{t}e^{i(\\omega -\\omega ^{\\prime\n })u}du\\int_{-u}^{t-u}F(\\tau )e^{i\\omega \\tau }d\\tau \\notag \\\\\n&+&(\\mathrm{similar}\\text{ \n}\\mathrm{terms}). 
\\label{eq:K2}\n\\end{eqnarray}\nThe weak coupling limit is next formally introduced by rescaling the time $t$\nto $t\/\\lambda ^{2}$ (van Hove limit). This enables two crucial\napproximations, which are valid in the resulting large-$t$ limit:\n\n\\begin{enumerate}\n\\item We replace\\footnote{%\nIn a more rigorous treatment the Cauchy principal value must be used, but\nthe result is essentially the same \\cite{Alicki:89}.}%\n\\begin{equation}\n\\int_{0}^{t}e^{i(\\omega -\\omega ^{\\prime })u}du\\approx t\\delta _{\\omega\n\\omega ^{\\prime }}. \\label{eq:rep1}\n\\end{equation}%\nThis makes sense for \n\\begin{equation}\nt\\gg \\max \\{1\/(\\omega -\\omega ^{\\prime })\\}. \\label{eq:tbig}\n\\end{equation}%\n\\emph{This violates \\textbf{A1}, expressed in terms of the Bohr frequencies}%\n. We see here already the emergence of an adiabatic criterion for the\nvalidity of the Markov approximation.\n\n\\item We replace $\\int_{-u}^{t-u}F(\\tau )e^{i\\omega \\tau }d\\tau $ by the\nFourier transform:%\n\\begin{equation}\n\\int_{-u}^{t-u}F(\\tau )e^{i\\omega \\tau }d\\tau \\approx {G}(\\omega\n)=\\int_{-\\infty }^{\\infty }F(\\tau )e^{i\\omega \\tau }d\\tau . \\label{eq:rep2}\n\\end{equation}%\n\\end{enumerate}\n\nThe physical validity of the last approximation is usually ignored, though one\ncan make the following argument: On the LHS of Eq.~(\\ref{eq:rep2}), for a\ngiven Bohr frequency $\\omega $ the Fourier-like integral must sample the\nfunction $F(\\tau )$ with sufficiently high accuracy so that the Fourier\ntransform approximation will be valid. To this end one needs a time $t$ such\nthat: (i) $t\\gg 1\/\\omega $. This is a weaker condition than the previous one\n[$t\\gg \\max \\{1\/(\\omega -\\omega ^{\\prime })\\}$] which involves differences\nof Bohr frequencies. (ii) The time $t$ must be also much longer than the\ntime scale of the wildest variations of $F(\\tau )$, which is typically [as\nmay be checked for simple models of spectral densities $G(\\omega )$] given\nby $1\/\\omega _{\\mathrm{cut}}$, where $\\omega _{\\mathrm{cut}}$ is a\nhigh-frequency cutoff. When $\\omega <\\omega _{\\mathrm{cut}}$ (i) implies\n(ii). Therefore typically Eq.~(\\ref{eq:rep1}) is a stronger assumption than\nEq.~(\\ref{eq:rep2}).\n\n\nApplying the approximations (\\ref{eq:rep1}) and (\\ref{eq:rep2}), we obtain $K(t)\\rho _{S}=t\\sum_{\\omega\n}S_{\\omega }\\rho _{S}S_{\\omega }^{\\dag }{G}(\\omega )+(\\mathrm{similar}$ $%\n\\mathrm{terms})$, and hence it follows from Eq.~(\\ref{eq:L}) that $\\mathcal{L%\n}(s)=\\mathcal{L}$ is the Davies generator in the familiar Lindblad form: \n\\begin{eqnarray}\n\\frac{d\\rho _{S}}{dt} &=&-i[H_{S},\\rho _{S}]+\\mathcal{L}\\rho _{S}, \\notag \\\\\n\\mathcal{L}\\rho _{S} &\\equiv &\\frac{1}{2}\\lambda ^{2}\\sum_{\\omega }G(\\omega\n)([S_{\\omega },\\rho S_{\\omega }^{\\dagger }]+[S_{\\omega }\\rho ,S_{\\omega\n}^{\\dagger }]) \\label{Dav}\n\\end{eqnarray}%\nSeveral remarks are in order:\n\n\\noindent (i) The absence of off-diagonal terms in Eq.~(\\ref{Dav}), compared\nto Eq.~(\\ref{eq:K2}), is the hallmark of the Markovian limit. Namely, the\nDavies derivation relies on the cancellation of the non-Markovian\noff-diagonal terms $\\sum_{\\omega \\neq \\omega ^{\\prime }}S_{\\omega }\\rho\n_{S}S_{\\omega ^{\\prime }}^{\\dag }\\int_{0}^{t}e^{i(\\omega -\\omega ^{\\prime\n})u}du$. 
This time coarse-graining is possible due to integration over the\nfast-oscillating $\\int_{0}^{t}e^{i(\\omega -\\omega ^{\\prime })u}$ terms over\na long timescale, i.e., over $t\\gg \\max \\{1\/(\\omega -\\omega ^{\\prime })\\}$\n(see also \\cite{Lidar:CP01}). As remarked above, this violates \\textbf{A1},\nexpressed in terms of the Bohr frequencies.\n\n\\noindent (ii) It follows from Bochner's theorem applied to the Fourier\ntransform definition of ${G}(\\omega )$ that $G(\\omega )\\geq 0$ \\cite%\n{Alicki:87}[p.90], \\cite{Breuer:book}[p.136]; this result is essential for\nthe complete positivity of the Markovian master equation in the WCL.\n\n\\noindent (iii) Davies' derivation showed implicitly that the notion of\n\\textquotedblleft bath's correlation time\\textquotedblright\\ is not\nwell-defined -- Markovian behavior involves a rather complicated cooperation\nbetween system and bath dynamics. More specifically, the relations~(\\ref{Dav}%\n) and~(\\ref{Dav1}) together imply that the noise and $H_{S}$ are strongly\ncorrelated. In other words, contrary to what is often done in\nphenomenological treatments, \\emph{one cannot combine arbitrary }$H_{S}$%\n\\emph{'s with given }$S_{\\omega }$\\emph{'s.} This point is particularly\nrelevant in the context of FT-QEC, where it is common to assume Markovian\ndynamics and apply arbitrary control Hamiltonians.\n\nDavies did not consider time-dependent system Hamiltonians in \\cite%\n{Davies:74}, but it is possible to generalize his derivation to allow for\nslowly varying system Hamiltonians \\cite{Davies:78,Alicki:79,Alicki:89}.\nThat is, whenever the time scale of the variation of $H_{C}(t)$ is much\nlonger than the inverse of the typical Bohr frequency (of $H_{S}$), it is\npossible to add $H_{C}(t)$ to the system Hamiltonian in Eq.~(\\ref{Dav}),\nnecessitating at the same time this change also in Eq.~(\\ref{Dav1}). This is\na type of adiabatic limit (indeed, the $S_{\\omega }$ in Eq.~(\\ref{Dav1}) can\nbe interpreted, with $H_{S}$ replaced by $H_{S}+H_{C}(t)$, as being\nadiabatic eigenvectors of the superoperator $[H_{S}+H_{C}(t),\\cdot\n]$). We note that an alternative approach to adiabaticity in open\nquantum systems was recently developed in\nRef.~\\cite{SarandyLidar:04}. This approach, while being very general, is more\nphenomenological in that it postulates a\nconvolutionless master equation, and then derives corresponding\nadiabaticity conditions. Closer in spirit to the Davies derivation is\nanother recent approach to adiabaticity in open systems, which assumes\nslow system variation together with weak system-bath coupling \\cite{Thunstrom:05}.\n\n\\subsubsection{WCL for Periodic Driving:\\ Floquet Analysis}\n\n\\label{Floquet}\n\nBefore considering the case of periodic $H_{C}$ let us consider briefly once\nmore the case of a constant Hamiltonian in the so-called covariant\ndissipation setting. Covariance is the commutation condition $\\mathcal{H}%\n\\mathcal{L}=\\mathcal{L}\\mathcal{H}$ where $\\mathcal{H}=[H_{S},\\cdot ]$ is\nthe super-operator constant Hamiltonian, and $\\mathcal{L}$ is the Davies\ngenerator. Covariance is an abstract property which is automatically\nfulfilled for the Davies generator.\\footnote{%\nThis can be verified by directly computing $\\mathcal{HL}$ and making use of\nEq.~(\\ref{Dav1}) and the relation $[A,BC]=[A,B]C+B[A,C]$ (for operators $A,B$\nand $C$). 
A more elegant way to see this is to consider $\\mathcal{L}(t)=\\exp\n(-it\\mathcal{H})\\mathcal{L}\\exp (it\\mathcal{H})$ and note that Eq.~(\\ref%\n{Dav1}) implies that $S(t)$ and $S^{\\dag }(t)$ rotate in opposite\ndirections. Hence $\\mathcal{L}(t)=\\mathcal{L}$, whence $d\\mathcal{L}(t)\/dt=0$\ngives the result.} It is convenient since it implies factorization of the\nfull propagator into Hamiltonian and dissipative parts. Markovian dynamics\nobtained in the WCL as discussed above takes the form\n\n\\begin{equation}\n{\\frac{d\\rho }{dt}}=(-i\\mathcal{H}+\\mathcal{L})\\rho \\ ,\\ t\\geq 0, \\label{1}\n\\end{equation}%\nwhere the most general form of the Lindblad (or Davies) $\\mathcal{L}$\nsatisfying Eq.~(\\ref{1}) is \n\\begin{equation}\n\\mathcal{L}\\rho ={\\frac{1}{2}}\\sum_{\\{\\omega \\},j}\\left( [V_{j}(\\omega\n),\\rho V_{j}(\\omega )^{\\dagger }]+[V_{j}(\\omega )\\rho ,V_{j}(\\omega\n)^{\\dagger }]\\right) . \\label{2}\n\\end{equation}%\nHere $\\{\\omega \\}\\equiv \\mathrm{Spectrum}(\\mathcal{H})$, i.e., the Bohr\nfrequencies (differences of eigenvalues of $H$), and \n\\begin{equation}\n\\mathcal{H}V_{j}(\\omega )=\\omega V_{j}(\\omega ) \\label{3}\n\\end{equation}%\n[i.e., Eq.~(\\ref{Dav1})]. The solution, i.e., the dynamical semigroup is%\n\\begin{equation}\n\\rho (t)=\\Lambda (t)\\rho (0),\\ \\Lambda (t)=e^{-it\\mathcal{H}}e^{t\\mathcal{L}%\n}. \\label{4}\n\\end{equation}%\nNow consider a periodic control Hamiltonian with period $\\Theta $ \n\\begin{equation}\nH_{C}(t)=H_{C}(t+\\Theta ),\\ \\Omega =2\\pi \/\\Theta . \\label{5}\n\\end{equation}%\n(Note that $\\Omega $ is \\emph{not} the Rabi frequency, which throughout this\npaper we denote by $\\Omega _{\\mathrm{R}}$.) The situation is then very\nsimilar to the standard (time-independent $H_{C}$) WCL, but the set of\n\\textquotedblleft effective Bohr frequencies\\textquotedblright\\ (Floquet\nspectrum)\\ ${\\omega }$ is now larger and is of the form $\\{\\omega +q\\Omega\n\\} $, $q=0,\\pm 1,...$. Here $\\omega $ are Bohr frequencies for the Floquet\nunitary [defined in Eq.~(\\ref{eq:floquet}) below], i.e., differences of\neigenvalues $\\epsilon _{\\alpha }$ of the Floquet unitary, rather than $%\n\\{\\omega \\}=\\mathrm{Spectrum}(\\mathcal{H})$ as above. As this set of\n\\textquotedblleft effective Bohr frequencies\\textquotedblright\\ is discrete\nthe WCL still works, but the final Davies generator is more complicated, as\nwe now show.\n\nDefine the time-ordered unitary propagator \n\\begin{equation}\nU(t,s)\\equiv \\mathcal{T}\\exp \\left( -i\\int_{s}^{t}H_{S}(u)du\\right) ,\\ t\\geq\ns \\label{6}\n\\end{equation}%\nwhich satisfies the properties $U(s,t)\\equiv U(t,s)^{-1}=U(t,s)^{\\dagger }$, \n$U(t,s)U(s,u)=U(t,u)$, $U(t+\\Theta ,s+\\Theta )=U(t,s)$, and ${\\frac{d}{dt}}%\nU(t,s)=-iH_{S}(t)U(t,s)$, ${\\frac{d}{dt}}U(t,s)^{\\dagger }=iU(t,s)^{\\dagger\n}H_{S}(t)$. The \\emph{Floquet unitary operator} is%\n\\begin{equation}\nF(s)\\equiv U(s+\\Theta ,s)\\bigskip , \\label{eq:floquet}\n\\end{equation}%\nwith corresponding super-operator action%\n\\begin{equation}\n\\mathcal{F}(s)\\rho \\equiv F(s)\\rho F(s)^{\\dagger },\n\\end{equation}%\nand Floquet eigenvectors $|\\phi _{\\alpha }\\rangle $ and eigenvalues\n(quasi-energies) $\\epsilon _{\\alpha }$ satisfying\\footnote{%\nNote that the Floquet Hamiltonian $H_{S}(t)-id\/dt$ operates on a different\nHilbert space than $F(0)$ (the space of periodic functions with values in\nthe system's Hilbert space). 
But its eigenvalues coincide with $\epsilon _{\alpha }$ from Eq.~(\ref{19}).}
\begin{equation}
F(0)|\phi _{\alpha }\rangle =e^{-i\epsilon _{\alpha }\Theta }|\phi _{\alpha }\rangle . \label{19}
\end{equation}
It follows from standard Floquet theory that
\begin{equation}
U(t,0)|\phi _{\alpha }\rangle =e^{-it\epsilon _{\alpha }}\sum_{q\in \mathbf{Z}}e^{-itq\Omega }|\phi _{\alpha }(q)\rangle , \label{20}
\end{equation}
i.e., the set $\{|\phi _{\alpha }(q)\rangle \}$ is a complete basis. Therefore we have at most as many $q$'s as the dimension of the Hilbert space. That the number of $q$'s is finite is important for our considerations below.

We call a Lindblad generator $\mathcal{L}$ a \textquotedblleft covariant dissipative perturbation of $H_{S}(t)$\textquotedblright\ if
\begin{equation}
\mathcal{F}(0)\mathcal{L}=\mathcal{L}\mathcal{F}(0). \label{9}
\end{equation}

We will assume this property, similarly to the case of a constant Hamiltonian described above. In fact, covariance holds for a periodic $H_{S}(t)$ and also for the corresponding WCL Davies generator. One can then derive the \emph{covariant master equation} (we sketch the derivation below):
\begin{equation}
{\frac{d\rho }{dt}}=\left( -i\mathcal{H}(t)+\mathcal{L}(t)\right) \rho \ ,\ t\geq 0, \label{10}
\end{equation}
[compare to Eq.~(\ref{1})] where
\begin{eqnarray}
\mathcal{L}(t) &=&\mathcal{U}(t,0)\mathcal{L}\mathcal{U}(t,0)^{\dagger },
\label{11} \\
{\frac{d}{dt}}\mathcal{U}(t,s) &=&-i\mathcal{H}(t)\mathcal{U}(t,s),
\end{eqnarray}
and where the general form of $\mathcal{L}$ appearing in Eq.~(\ref{11}) is given by Eq.~(\ref{2}), with $V_{j}(\omega )$ now being eigenvectors of $\mathcal{F}(0)$,
\begin{equation}
\mathcal{F}(0)V_{j}(\omega )=e^{-i\omega \Theta }V_{j}(\omega ), \label{14}
\end{equation}
rather than of $\mathcal{H}$, as in Eq.~(\ref{3}). Moreover, here $\{\omega \}\equiv \{\epsilon _{\alpha }-\epsilon _{\beta }\}$, where $\epsilon _{\alpha }$ are quasi-energies (effective Bohr frequencies) of the Floquet operator, rather than $\{\omega \}\equiv \mathrm{Spectrum}(\mathcal{H})$ as we saw in the case of constant $H_{C}$.

The solution replacing Eq.~(\ref{4}) is
\begin{eqnarray}
\rho (t) &=&\Lambda (t,s)\rho (s),\text{ \ }t\geq s \notag \\
\Lambda (t,s) &=&\mathcal{T}\exp \left\{ \int_{s}^{t}\left( -i\mathcal{H}(u)+\mathcal{L}(u)\right) du\right\} \label{12}
\end{eqnarray}
By direct computation one can prove the following properties:
\begin{eqnarray}
\mathcal{L}(t+\Theta ) &=&\mathcal{L}(t), \\
\mathcal{F}(s)\mathcal{L}(s)\mathcal{F}(s)^{\dagger } &=&\mathcal{L}(s), \\
\Lambda (t,s)\Lambda (s,u) &=&\Lambda (t,u)\text{ for }t\geq s\geq u, \\
\Lambda (t+\Theta ,s+\Theta ) &=&\Lambda (t,s), \\
\Lambda (t,s) &=&\mathcal{U}(t,s)e^{-(t-s)\mathcal{L}(s)}.
\\label{13}\n\\end{eqnarray}\n\nTo derive the covariant master equation (\\ref{10}) one considers the\nstandard picture of an open system $S+R$ with the total Hamiltonian \n\\begin{equation}\nH_{SR}(t)=H_{S}^{0}(t)+H_{R}+\\sum_{k}S_{k}\\otimes R_{k}, \\label{15}\n\\end{equation}%\n(we neglect the Lamb shift correction here; it can be included, changing $%\nH_{S}^{0}(t)$ into the physical Hamiltonian $H_{S}(t)$, by a suitable\nrenormalization procedure), stationary reservoir state $\\rho _{R}$, $%\n[H_{R},\\rho _{R}]=0$, $\\mathrm{Tr}(\\rho _{R}\\cdot )\\equiv \\langle \\cdot\n\\rangle _{R}$, $\\langle R_{k}\\rangle _{R}=0$. Then, exactly following a\nDavies-like calculation using a Fourier decomposition of $S(t)$, now\ngoverned by a periodic Hamiltonian, and making in particular again the\ncrucial assumption Eq.~(\\ref{eq:tbig}), which now reads \n\\begin{equation}\nt\\gg \\max \\{1\/(\\omega -\\omega ^{\\prime }+m\\Omega )\\},~m=0,\\pm 1,\\pm 2,...\n\\label{MA-per}\n\\end{equation}%\nwith $|m|$ upper-bounded by the dimension of the Hilbert space [see remark\nafter Eq.~(\\ref{20})], one obtains Eq.~(\\ref{10}) in the Davies WCL. The\nexplicit form of the generator is: \n\\begin{eqnarray}\n\\mathcal{L}\\rho &=& {\\frac{1}{2}}\\sum_{k,l}\\sum_{q\\in \\mathbf{Z}}\\sum_{\\{\\omega\n\\}}{\\hat{R}}_{kl}(\\omega +q\\Omega )\\{ [S_{l}(q,\\omega )\\rho\n ,S_{k}(q,\\omega )^{\\dagger }] \\nonumber \\\\\n&+& [S_{l}(q,\\omega ),\\rho S_{k}(q,\\omega\n)^{\\dagger }] \\} . \\label{16}\n\\end{eqnarray}\nHere $\\{\\omega \\}\\equiv \\{\\epsilon _{\\alpha }-\\epsilon _{\\beta }\\}$, the\nFloquet spectrum, and \n\\begin{equation}\n{\\hat{R}}_{kl}(x)=\\int_{-\\infty }^{\\infty }e^{-itx}\\langle\nR_{k}(t)R_{l}\\rangle _{R}dt \\label{17}\n\\end{equation}%\n\\begin{equation}\nS_{k}(q,\\omega )=\\sum_{p\\in \\mathbf{Z}}\\sum_{\\{\\epsilon _{\\alpha }-\\epsilon\n_{\\alpha ^{\\prime }}=\\omega \\}}\\langle \\phi _{\\alpha }(p+q)|S_{k}|\\phi\n_{\\alpha ^{\\prime }}(p)\\rangle |\\phi _{\\alpha }\\rangle \\langle \\phi _{\\alpha\n^{\\prime }}|. \\label{18}\n\\end{equation}%\n$S_{k}(q,\\omega )$ is the part of $S(t)$ which rotates with frequency $%\n\\omega +q\\Omega $ and can be computed using Eq.~(\\ref{20}). Note that by\ndiagonalizing the matrices ${\\hat{R}}_{kl}$ one can transform the generator $%\n\\mathcal{L}$ of Eq.~(\\ref{16}) into the form of Eq.~(\\ref{2}), which allows\none to read off the operators $V_{j}(\\omega )\\ $appearing there.\n\nNow to some important comments:\n\n\\noindent -- \\emph{Timescale analysis}: Note that for the periodic case the\ndifferences of \\textquotedblleft Bohr frequencies\\textquotedblright\\ may be\nof the order of $1\/\\Theta $. Hence we conclude from Eq.~(\\ref{MA-per})\\ that\none must average over many periods $\\Theta $, i.e., require $t\\gg \\Theta $.\nThis can be interpreted as a condition that \\textquotedblleft the\nenvironment must learn that the Hamiltonian is periodic\\textquotedblright .\nThis is exactly analogous to the adiabaticity condition in the adiabatic\ncase: $H(t)$ must be constant over many inverse Bohr frequencies to\n\\textquotedblleft be recognised\\textquotedblright\\ by the environment. 
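For concreteness, the quasi-energies $\epsilon _{\alpha }$ and the enlarged set of effective Bohr frequencies $\{\omega +q\Omega \}$ entering condition (\ref{MA-per}) can be obtained numerically by diagonalizing the one-period propagator $F(0)=U(\Theta ,0)$. The following sketch does this for a driven two-level system with an assumed, purely illustrative Hamiltonian $H_{S}(t)=\frac{1}{2}\omega _{0}\sigma _{z}+g\cos (\Omega t)\sigma _{x}$ (none of the parameter values are taken from the text), and prints the averaging time required by Eq.~(\ref{MA-per}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

omega0, g, Omega = 1.0, 0.2, 0.3   # assumed illustrative parameters (hbar = 1)
Theta = 2 * np.pi / Omega          # driving period
N = 4000                           # time slices for the ordered product

def H(t):
    return 0.5 * omega0 * sz + g * np.cos(Omega * t) * sx

# One-period propagator F(0) = U(Theta, 0) as a time-ordered product.
U = np.eye(2, dtype=complex)
dt = Theta / N
for n in range(N):
    U = expm(-1j * H((n + 0.5) * dt) * dt) @ U

# Quasi-energies from the eigenphases: F(0)|phi> = exp(-i eps Theta)|phi>.
eps = np.sort((-np.angle(np.linalg.eigvals(U)) / Theta) % Omega)

# Effective Bohr frequencies eps_a - eps_b + q*Omega (finite set of q's).
freqs = sorted({round(e1 - e2 + q * Omega, 9)
                for e1 in eps for e2 in eps for q in range(-2, 3)})
gaps = [b - a for a, b in zip(freqs, freqs[1:]) if b - a > 1e-9]
print("quasi-energies (mod Omega):", eps)
print("averaging time required by (MA-per): t >>", 1.0 / min(gaps))
\end{verbatim}
Shrinking the gaps between these effective frequencies lengthens the required averaging time accordingly, which is the smearing effect discussed for arbitrary pulses below.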
The periodic WCL is also a coarse-grained time description with the additional time scale $\Theta $. \emph{Note that arbitrarily fast periodic driving (small $\Theta $) is incompatible even with the kind of generalized, finitely localized MME derived here, since then differences of Bohr frequencies matter in Eq.~(\ref{MA-per})} (recall that $\max |m|$ is bounded by the -- typically small -- dimension of the system Hilbert space).\\

\noindent -- \emph{Where is the Rabi frequency?} Note the dependence of the operators $S_{k}(q,\omega )$ on the Floquet eigenvalue differences $\epsilon _{\alpha }-\epsilon _{\alpha ^{\prime }}$. The usual Rabi frequency, $\Omega _{\mathrm{R}}=2dE/\hbar $ ($d$ is the dipole moment, $E$ is the electric field amplitude), arises in the dipole approximation, which we have not made here. The usual Rabi frequency is replaced in our non-perturbative treatment (in the sense of no multipole expansion) by the difference of Floquet eigenvalues $\epsilon _{\alpha }-\epsilon _{\alpha ^{\prime }}$ in Eq.~(\ref{18}).\footnote{One can see that such a term also arises in the usual dipole approximation by considering, e.g., Eq. (2.94) in \cite{Carmichael:book}. The interaction picture raising and lowering operators $\sigma _{\pm }(t)$ (for a two-level atom driven by a classical field) there oscillate with three \textquotedblleft Bohr frequencies\textquotedblright\ $\omega _{A},\omega _{A}\pm \Omega $\thinspace , where $\Omega =2dE/\hbar $ denotes the usual Rabi frequency. Hence the Rabi frequency is a difference of two Bohr frequencies.}

\noindent -- \emph{More on the Rabi frequency}:\ As we saw, the non-Markovian terms vanish because of the time coarse-grained description. To attain this, we must average over times $t\gg \max_{\omega ,\omega ^{\prime }}\{1/(\omega -\omega ^{\prime })\}$, but must also keep in mind that the longest relevant scale for coarse-graining is given by the exponential decay time $\tau $ (a \emph{derived} quantity), i.e., we must have $t<\tau $. The Rabi frequency $\Omega _{\mathrm{R}}$ is a difference of two Bohr frequencies $\omega ,\omega ^{\prime }$:\ $\Omega _{\mathrm{R}}=\omega -\omega ^{\prime }$. This implies that coarse-graining does not make sense if $\Omega _{\mathrm{R}}\tau \ll 1$ [since then $t<\tau \ll 1/\Omega _{\mathrm{R}}=1/(\omega -\omega ^{\prime })$, in contradiction to the fundamental requirement on $t$]. In physical terms this means that the width of the spectral line ($\gamma =1/\tau $) is larger than the level splitting $\Omega _{\mathrm{R}}$ (see, e.g., Fig. 2.5 (i),(ii) in \cite{Carmichael:book} for an illustration in the case of the incoherent fluorescence spectrum) and therefore \textquotedblleft the environment has no time to recognize the details of the spectrum\textquotedblright . On the other hand, when $\Omega _{\mathrm{R}}\tau \gg 1$ (not inconsistent with the WCL), $\Omega _{\mathrm{R}}$ must appear in the generator, as is apparent from our treatment of the case of periodic driving in the WCL, above.
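As a back-of-the-envelope check, using representative orders of magnitude of the kind quoted below for resonance fluorescence (assumed here purely for illustration), one can see on which side of this condition a typical quantum-optical system falls:
\begin{verbatim}
# Representative orders of magnitude (illustration only).
Omega_R = 1e10    # Rabi frequency [1/s]
tau     = 1e-8    # radiative decay time [s]

print("Omega_R * tau =", Omega_R * tau)  # ~100 >> 1
print("coarse-graining window: 1/Omega_R =", 1 / Omega_R,
      "s  <<  t  <  tau =", tau, "s")
\end{verbatim}
Since $\Omega _{\mathrm{R}}\tau \sim 10^{2}\gg 1$ in this regime, the Rabi frequency cannot be dropped from the generator.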
Unfortunately there are examples in the\nliterature where an MME\\ is written down subject to $\\Omega _{\\mathrm{R}%\n}\\tau \\gg 1$ but $\\Omega _{\\mathrm{R}}$ does not appear in the generator\n[e.g., Eq.~(2.96) in \\cite{Carmichael:book}, where $\\Omega _{\\mathrm{R}}\\sim\n10^{10}\\mathrm{Hz}$ and $\\tau \\sim 10^{-8}\\mathrm{s}$].\n\n\\noindent -- \\emph{Quantum optics considerations}:\\ The Markov approximation is\ncommonly accepted as an excellent approximation in quantum optics; see,\ne.g., the discussion of resonance fluorescence in \\cite{Carmichael:book}%\n[Ch.2]. This is also the basis for substantial confidence in the possibility\nof FT-QEC in quantum optical systems, such as trapped ions \\cite{Cirac:95}\nand atoms trapped in microwave cavities \\cite{Turchette:95}. Such arguments\nare based on the relative flatness of the damping constants $\\gamma (\\omega\n) $ as a function of frequency. This argument is closely related to the\nnotion of the flatness of the spectral density $G(\\omega )$ in the SCL,\nsince the damping constants are proportional to $G(\\omega )$ [see Eq.~(\\ref%\n{Dav})]. For example, below Eq. (2.95) in \\cite{Carmichael:book} the author\nargues that one can write down a Rabi frequency-independent MME for\nresonance fluorescence since $\\gamma (\\omega _{A})$ and $\\gamma (\\omega\n_{A}\\pm \\Omega _{\\mathrm{R}})$ (where $\\omega _{A}$ is the Bohr frequency)\ndiffer by less than 0.01$\\%$ at optical frequencies and reasonable laser\nintensities. However, this ignores the corrections due to the Rabi frequency\nto the operators $S_{k}(q,\\omega )$ [Eq.~(\\ref{18})]. This disagreement can\nbe traced to the question of at which point in the derivation it is safe to\nneglect $\\Omega _{\\mathrm{R}}$; in \\cite{Carmichael:book} this is done on\nthe basis of the flatness of $\\gamma (\\omega )$ before \\textquotedblleft a\nlot of tedious algebra\\textquotedblright\\ \\cite{Carmichael:book}[p.48], but\nour Floquet analysis shows that, in fact, one cannot neglect the Rabi\nfrequency relative to the Bohr frequency. This is relevant for our general\ndiscussion since the \\textquotedblleft \\emph{finitely localized}\nMME\\textquotedblright\\ which is the outcome of the Floquet analysis (see\nnext comment) actually exhibits a weak non-Markovian character. Such\ndeviations are, of course, important for FT-QEC, even if the effects are\nsmall. We revisit this point in Section \\ref{objections} below.\n\n\\noindent -- \\emph{Are there any non-Markovian effects at work here?} It seems that\none should accept the \\emph{generalized notion} of a quantum Markovian\nmaster equation as the one given by Eqs.~(\\ref{10}), (\\ref{11}) and (\\ref{2}%\n), i.e., a master equation with a possibly time-dependent Lindblad\ngenerator. In Davies' generalization to the time-dependent case \\cite%\n{Davies:78} (\\textquotedblleft adiabatic WCL\\textquotedblright ) the\ndissipative generator $\\mathcal{L}(t)$ depends on the Hamiltonian at the \n\\emph{same time} $t$. This is a type of \\textquotedblleft \\emph{local}\ngeneralized MME\\textquotedblright . On the other hand, in the periodic WCL\ntreated here, the dissipative generator $\\mathcal{L}(t)$ depends on the\nHamiltonians $H_{S}(u)$ from an interval, say $[0,t]$ ($t<\\Theta $), as can\nbe seen from Eq.~(\\ref{11}), which involves $\\mathcal{U}(t,0)$. 
This is therefore a type of \textquotedblleft \emph{finitely localized} MME\textquotedblright , though one could argue that it exhibits a weakly non-Markovian character because of this dependence of the dissipative generator on the past. On the other hand, a non-Markovian master equation (in the convolutionless formalism \cite{Breuer:book}) is also given by Eq.~(\ref{10}), but the generator is \emph{not} of Lindblad form [in particular, it is not of the form (\ref{19})], and may depend on the Hamiltonian in the \emph{distant} past. The weight of distant past contributions depends on the decay properties of $F(t)$ which are, generically, not exponential but rather powerlike. In the WCL the non-Lindbladian terms vanish due to the oscillating character of the $e^{i(\omega -\omega ^{\prime })u}$ terms in Eq.~(\ref{eq:K2}).

\noindent -- \emph{The original Davies derivation}:\ We note that the Davies result is a limit theorem which states that for a sufficiently small coupling constant the WCL semigroup is a good approximation to the real dynamics. However, Davies' theorem itself does not provide the conditions under which a given physical coupling is \textquotedblleft small enough\textquotedblright . In particular, one cannot extract from Davies' theorem under what conditions the fast oscillating terms vanish. This can, however, be done by a more heuristic analysis, as was done above.

\subsubsection{WCL for an Arbitrary Pulse}

We now consider the case
\begin{equation}
H_{C}(t)=H_{0}+f(t)H_{1},
\end{equation}
i.e., an arbitrary driving field. This is, of course, the case of most interest in FT-QEC. It follows from Fourier analysis that this case can be treated qualitatively as a \textquotedblleft superposition\textquotedblright\ of periodic perturbations discussed above. For a single frequency $\Omega $ the validity of the Markovian approximation is restricted by the condition (\ref{MA-per}):\ $t\gg \max \{1/(\omega -\omega ^{\prime }+m\Omega )\}$. The discreteness of the frequencies $\{\omega \}$ and $\{m\Omega \}$ is key:\ it allows for condition (\ref{MA-per}) to be satisfied with finite $t$. A pulse $f(t)$ has a continuous band of frequencies of width $\Gamma \simeq 1/\tau _{g}$ (where $\tau _{g}$ is the gate duration), with amplitudes (Fourier transform) $\hat{f}(\Omega )$, which add to and smear the effective Bohr spectrum $\{\omega \}$. If the pulse is long (a slow gate) then only a narrow band appears, and the smearing effect is unimportant. More precisely, if $1/\tau _{g}$ is much smaller than the typical difference of the Bohr frequencies, the \textquotedblleft energy quanta\textquotedblright\ $m\Omega $ [with $|m|$ restricted by the (typically small) dimension of the system Hilbert space] cannot fill the gap between $\omega $ and $\omega ^{\prime }$ and the condition (\ref{MA-per}) can be satisfied. This is our adiabatic approximation. For fast pulses, when $1/\tau _{g}$ is comparable to $|\omega -\omega ^{\prime }|$, the condition (\ref{MA-per}) cannot be fulfilled: the effective Bohr spectrum becomes quasi-continuous and the denominator in condition (\ref{MA-per}) becomes arbitrarily small.
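This bandwidth argument can be made quantitative with a small sketch (all parameters below are assumed for illustration only and are expressed in arbitrary units): a Gaussian pulse of duration $\tau _{g}$ has a frequency band of half-width $\sim 1/\tau _{g}$, which is compared with an assumed minimal spacing of the effective Bohr frequencies:
\begin{verbatim}
import numpy as np

delta_bohr = 1.0   # assumed minimal spacing of effective Bohr frequencies

for label, tau_g in [("slow gate", 100.0), ("fast gate", 0.5)]:
    # Gaussian pulse f(t) = exp(-t^2 / (2 tau_g^2)) and its spectrum.
    t = np.linspace(-10 * tau_g, 10 * tau_g, 4096)
    f = np.exp(-t**2 / (2 * tau_g**2))
    spec = np.abs(np.fft.fftshift(np.fft.fft(f)))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
    half_width = w[spec > spec.max() / 2].max()   # ~ 1/tau_g
    print(f"{label}: band half-width ~ {half_width:.3g}, "
          f"Bohr spacing = {delta_bohr}, "
          f"narrow band: {half_width < 0.1 * delta_bohr}")
\end{verbatim}
The slow gate leaves the effective Bohr spectrum essentially untouched, while the fast gate smears it over a band comparable to the level spacing.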
The result is that the WCL analysis breaks down and non-Markovian effects dominate.

Thus, the condition for the adiabatic limit (Markov approximation valid) is: \textquotedblleft the width of the band is much smaller than the minimal difference of the effective Bohr frequencies\textquotedblright . This is in contradiction with the fast gate assumption, \textbf{A1}.

\subsection{Section Summary}

The main advantage of the MME (\ref{Dav}) is its consistency with thermodynamics. Namely, as a consequence of the KMS condition (\ref{KMS}) and the condition (\ref{Dav1}), for a generic initial state the system tends to its thermal equilibrium (Gibbs) state at the temperature of the heat bath \cite{Alicki:87}. (An important exception to this rule is provided by states within a decoherence-free subspace \cite{Zanardi:97c,LidarWhaley:03}, but these states are not generic due to required symmetry properties of the system-bath interaction.) Therefore the dissipative part of the generator must depend strongly on the Hamiltonian dynamics. This is consistent with the notion of a coarse-grained description familiar from the study of MMEs: the bath needs a time much longer than $\max_{\omega _{kl}}1/\omega _{kl}$ to \textquotedblleft learn\textquotedblright\ the system's Hamiltonian in order to drive it to a proper Gibbs state. In other words, \emph{the Markov approximation is, equivalently, a long-time limit} (compared to $\max_{\omega _{kl}}1/\omega _{kl}$ -- the inverse of the system's Bohr frequencies), and one cannot expect this approximation to be valid at short times. However, FT-QEC assumes operations on a time-scale that is short on the scale set by $\max_{\omega _{kl}}1/\omega _{kl}$.

Strictly speaking the MME (\ref{Dav}) is valid only when $H_{S}$ is not time dependent. As we have shown, one can relax this by assuming a slowly varying $H_{S}$, giving rise to an \textquotedblleft adiabatic MME\textquotedblright , Eqs.~(\ref{10}), (\ref{11}) and (\ref{2}). However, to accept Eqs.~(\ref{10}), (\ref{11}) and (\ref{2}) as a genuine Markovian description is somewhat of a stretch, since the real question is not whether one obtains the Lindblad form, but rather \emph{how $\mathcal{L}(t)$ depends on the Hamiltonians $H_{S}(u)$, locally (i.e., $u\simeq t$) or nonlocally. For fast gates and generic environments the dependence is non-local, involving memory effects.} In any case, the crucial condition that must be satisfied for a (generalized) MME is Eq.~(\ref{MA-per}), which implies that the average Bohr spectrum must be discrete. In essence, as long as the applied control does not spoil this discreteness a (generalized) MME can be derived. On the other hand, \emph{this means that fast gates are incompatible with the MME}, in violation of \textbf{A1} of FT-QEC theory. The corollary: \emph{finite speed of gates implies non-Markovian effects}.

\section{Are the Standard FT-QEC Assumptions Internally Consistent?}

\label{consistency}

We now briefly summarize our examination of the assumptions of FT-QEC, in light of the considerations above, and highlight where there may be internal inconsistencies in FT-QEC.
As discussed above, there are essentially two\nrigorous approaches to the derivation of the MME: (i) the SCL, which is\ncompatible with arbitrarily fast Hamiltonian manipulations, but requires the\nhigh-$T$ limit; (ii) the WCL, which is compatible with thermodynamics at\narbitrary $T$, but requires adiabatic Hamiltonian manipulations.\n\nThe standard theory of FT-QEC (excluding Refs.~\\cite%\n{Terhal:04,Aliferis:05,Aharonov:05}) requires a quantum computer (QC)\nundergoing Markovian dynamics, supplemented with a constant supply of cold\nand fresh ancillas. These assumptions are contradictory under the SCL, since\nthe QC would have to be at high-$T$, while the ancillas require low-$T$ on\nthe same energy scale $E$ (set by the Bohr energies of the system = computer\n+ ancillas). Specifically, if we were to assume that for the ancillas too $%\nkT\\gg E$, they would quickly become highly mixed. If we insist that $kT\\ll E$\nfor the ancillas, then by coupling them to the QC we can no longer assume,\nin the SCL, that the total system = QC + ancillas is described by Markovian\ndynamics.\n\nIf, on the other hand, we approach the problem from the (physically more\nconsistent) WCL, then \\textbf{A3} is incompatible with \\textbf{A1} (the\nassumption of fast gates). Namely, in the WCL only adiabatic Hamiltonian\nmanipulations are allowed. Specifically, the Markov approximation in the WCL\nrequires a \\emph{discrete} system (effective) Bohr frequency spectrum, such\nthat the condition$\\ \\tau _{g}\\gg \\max_{\\omega _{kl}}1\/\\omega _{kl}$ can be\nsatisfied, hence violating the $\\tau _{g}\\omega _{B}=O(\\pi )$ condition of \n\\textbf{A1}. These conclusions are unavoidable if one accepts\nthermodynamics, since they follow from seeking a Markovian master equation\nthat satisfies the KMS condition -- a necessary condition for return to\nthermodynamic equilibrium in the absence of external driving. We take here\nthe reasonable position that a fault tolerant QC cannot be in violation of\nthermodynamics.\n\n\\section{Possible objections to the inconsistency}\n\n\\label{objections}\n\nIn this section we analyze a list of possible objections to the\ninconsistency we have pointed out.\n\n\\subsection{Is thermodynamics relevant?}\n\nWith respect to the SCL: \\textquotedblleft \\emph{Thermodynamics is\nirrelevant (since a QC need not ever be in thermal equilibrium)}%\n.\\textquotedblright\\ \n\nNote that we never claim that the QC is in thermal equilibrium; only the\nbath is. This assumption is a simplification which allows us to use a single\nparameter $T$ and therefore a single \\textquotedblleft thermal memory\ntime\\textquotedblright\\ $\\hbar \/kT$. There is no reason to use a nonthermal\nbath or many heat baths with different temperatures:\\ this does not make the\nspectral density flat and can only introduce more parameters.\n\n\\subsection{Doesn't the interaction picture save the day?}\n\nWith respect to the WCL: \\textquotedblleft \\emph{Suppose we have the\nfollowing Hamiltonian in the Schrodinger picture: }$%\nH=H_{S}+H_{C}(t)+H_{SR}+H_{R}$\\emph{\\ where }$||H_{S}||\\gg ||H_{C}||=$\\emph{%\ncontrol Hamiltonian }$\\gg ||H_{SR}||$\\emph{. Then in the interaction picture\nwith respect to} $H_{S}$\\emph{\\ the term }$H_{C}$\\emph{\\ is dominant and hence can\nimplement fast gates. However, in the original Schr\\\"{o}dinger picture }$%\nH_{C}$ \\emph{is small and hence the adiabatic limit for the derivation of\nthe MME is satisfied. 
Thus we have an example where we can have fast gates (in the interaction picture) and still the WCL can be satisfied so that the Markovian limit can be reached. Moreover, this is the limit relevant for quantum optics, e.g., trapped ions.}\textquotedblright

There are a number of problems with this argument. First, one should be more careful about the formulation of the condition for adiabaticity. It can be stated as $|d\omega (t)/dt|\ll \omega (t)^{2}$, where $\omega (t)$ is a \textquotedblleft relevant\textquotedblright\ Bohr frequency. Merely comparing norms as above does not guarantee adiabaticity. Second, in the quantum optics context we note the following. For three-level trapped ions we have two Bohr frequencies: a large, time-independent $\omega _{1}$, and a small, time-dependent $\omega _{2}(t)$ (degenerate level splitting). Only $\omega _{2}$ is \textquotedblleft relevant\textquotedblright\ because it is related to gates, and then the adiabatic condition implies that $|d\omega _{2}(t)/dt|$ is correspondingly small, which contradicts the fast gate condition \textbf{A1}. Third, the inequality $||H_{C}||\gg ||H_{SR}||$ is in fact not satisfied in the Markovian WCL, where $||H_{SR}||$ diverges (one should not confuse the small system-reservoir coupling parameter involved in the van Hove limit with the operator norm, which can be infinite).

\subsection{Doesn't quantum optics provide a counterexample?}

With respect to the WCL: \textquotedblleft \emph{Trapped ions and other quantum optics systems provide a counter-example: a system experimentally satisfying Markovian dynamics and allowing fast Rabi operations}.\textquotedblright\ 

We have already addressed quantum optical systems in Section \ref{Floquet}. Let us add a few comments. We do not know of any quantum optics experiment testing the Markov approximation with the accuracy relevant for FT-QEC (for quantum dots, on the other hand, non-Markovian effects are very visible). We know that for constant, and also for strictly periodic Hamiltonians (which corresponds in quantum optics to a constant external laser field), the Davies derivation can be applied (or extended, as in Section \ref{WCL}) and the Markov approximation is applicable. The problem appears for fast gates. It would be difficult to test the Markov approximation in this case with the required accuracy, because, e.g., the results depend on the shape of the pulse. A relevant example is resonance fluorescence, as described in \cite{Carmichael:book}[pp.43-61], and as discussed in Section \ref{Floquet}. The damping effects are only present in the widths of spectral lines -- see \cite{Carmichael:book}[p.61, Fig. 2.5]. The Markov approximation gives Lorentzians while non-Markovian dynamics may give rise to more complicated lineshapes. Consider a 2-level atom as in \cite{Carmichael:book}, Section 2.3.2, and in particular the final formula, Eq.~(2.96), which describes resonance fluorescence via an MME. The author claims that for typical parameters in quantum optics the dissipative part does not depend on the Rabi frequency $\Omega _{\mathrm{R}}$ [recall our discussion in Section \ref{Floquet}]. Hence, as the gates are entirely related to $\Omega _{\mathrm{R}}$, it appears that either fast or slow gates are possible. The argument is based on the small ratio $\Omega _{\mathrm{R}}/\omega _{A}<10^{10}/10^{15}$ (where $\omega _{A}$ is the Bohr frequency).
This is fine for replacing the\nspectral density at $\\omega _{A}\\pm \\Omega _{\\mathrm{R}}$ by the density at $%\n\\omega _{A}$, but the subsequent argument that we can replace [in Eq.\n(2.94)] $\\Omega _{\\mathrm{R}}$ by $0$ is inaccurate. This would be correct\nonly if the decay time $\\tau =1\/\\gamma $ is short enough such that $\\Omega _{%\n\\mathrm{R}}\\tau \\ll 1$. However, as explained in Section \\ref{Floquet}, in\nthis case the Davies type averaging makes no sense physically. In fact,\ntypically for radiation damping $\\tau =10^{-8}$s, and then $\\Omega _{\\mathrm{%\nR}}\\tau <100$ only. Hence for a fixed $\\Omega _{\\mathrm{R}}$ we do in fact\nnot have a simple Lindblad generator (of the type (2.96) in\n\\cite{Carmichael:book}), but rather a more complicated generator with\nLindblad \noperators depending on the Rabi frequency, as in Eq.~(\\ref{16}). Again, in\nthe derivation of a proper generator an averaging over terms of the form $%\n\\exp (-i\\Omega _{\\mathrm{R}}t)$ must be performed. Therefore the condition\nfor the adiabatic approximation involves the Rabi frequency $\\Omega _{%\n\\mathrm{R}}$ and cannot be satisfied for fast gates. For experiments based\non \\emph{spectral measurements} the difference between the two types of\ngenerators we have just discussed is probably irrelevant for many reasons;\nhowever, the quantum state of the atom at a given moment is sensitive to a\nsmall change in the Lindblad operators, and this is important in a fault\ntolerant implementation of quantum logic gates.\n\n\\subsection{Is \\textbf{A1} truly an assumption of FT-QEC?}\n\n\\label{adiabatic-gates}\n\nWith respect to the WCL: ``\\emph{Doesn't \\textbf{A1} impose an unnecessary\nconstraint on FT-QEC, in that gates are not required to satisfy the\ncondition }$\\tau _{g}\\omega =O(\\pi )$\\emph{?}''\n\nIn other words, one might argue in favor of slow gates, where instead the\ncondition is $\\tau _{g}\\omega \\gg O(\\pi )$. Such gates are certainly\nrelevant in the context of the adiabatic quantum computing (AQC) paradigm \n\\cite{Farhi:01}, holonomic QC \\cite{ZanardiRasetti:99,ZanardiRasetti:2000},\nor topological quantum computing (TQC) \\cite%\n{Kitaev:97,Freedman:01,DasSarma:05}. We comment in more detail on AQC, HQC,\\\nand TQC in Section \\ref{alternatives}. The question of interest to us is\nwhether an adiabatic gate satisfying $\\tau _{g}\\omega \\gg O(\\pi )$ is\napplicable to the standard FT-QEC paradigm we are considering here, and\nwhich is very different from AQC, HQC,\\ and TQC.\n\nFirst, let us clarify that by gates we mean one and two-qubit unitaries\npicked from well-known discrete and small sets of universal gates\\ \\cite%\n{Nielsen:book}. An algorithm is constructed via a sequence of such gates,\nand computational complexity is measured in terms of the minimal number of\nrequired gates. Of course one can instead join all gates used in a given\nalgorithm into a single unitary and call this a gate, but then one runs into\nthe problem of finding a relevant (physical) Hamiltonian and quantifying\ncomputational complexity. For a given gate there are infinitely many\nHamiltonian realizations. Among these are fast ones (optimal) which satisfy $%\n\\tau _{g}\\omega =O(\\pi )$ and slow ones (adiabatic) satisfying $\\tau\n_{g}\\omega \\gg O(\\pi )$ (all inequalities here are in the sense of orders of\nmagnitude). For example, consider a $\\pi $-rotation. 
The fast (optimal)\nrealization satisfies $\\tau _{g}\\omega =\\pi $ (compatible with \\textbf{A1}),\nwhile the slow (adiabatic) one satisfies $\\tau _{g}\\omega =\\pi +2\\pi n$ with \n$n\\gg 1$ (contradicts \\textbf{A1}).\n\nNow, one may ask whether a slow realization of gates can prevent the\ninconsistency with the WCL. We argue, based on computational complexity\nconsiderations, that the answer to this question is negative. To see this,\nnote first that non-Markovian errors are uncorrectable in standard FT-QEC.\nTherefore such non-Markovian, uncorrectable errors accumulate during the\ncomputation (by definition, they are not corrected by \\textquotedblleft\nMarkovian FT-QEC\\textquotedblright ), and in order to keep them under\ncontrol, the probability of such errors per gate, $p_{\\mathrm{non-M}}$,\nshould scale as \n\\begin{eqnarray}\n p_{\\mathrm{non-M}} &\\sim& O[1\/(\\text{volume of algorithm})]\n \\nonumber \\\\\n &=& O[1\/(\\text{input\nsize})^{\\alpha }],\n\\end{eqnarray}\nwhere $\\alpha $ is some fixed power. Now, it follows from our discussion in\nSection \\ref{WCL} that the more adiabatic the evolution, the smaller is the\nprobability of the non-Markovian errors per gate. Therefore, if one writes\nthe adiabaticity condition as $\\tau _{g}\\omega >M$, where $M\\gg 1$ is the\n\\textquotedblleft adiabatic slowness parameter\\textquotedblright , then the\nprobability of non-Markovian errors should satisfy \n\\begin{equation}\np_{\\mathrm{non-M}}\\sim O(1\/M^{\\beta }),\n\\end{equation}%\nwhere $\\beta $ is another fixed power [$\\omega $ (the Bohr or Rabi\nfrequency) is limited essentially by the choice of physical system].\nComparing the two expressions for $p_{\\mathrm{non-M}}$, we see that $M$ must\ngrow with input size. This means that if one works with adiabatic gates in\norder to keep the dynamics (approximately) Markovian, the result is that one\nmust slow the gates in proportion to the input size (to some power). This,\nhowever, violates the threshold condition of FT-QEC, in which the input size\nand gate times are independent parameters (see, e.g., Theorem 12 in \\cite%\n{Aharonov:99}).\n\n\\subsection{Measurements}\n\nWith respect to both the WCL and the SCL:\\ \\textquotedblleft \\emph{Recent\nresults on fault-tolerant QC using measurements only (e.g., \\cite%\n{Nielsen:04,Raussendorf:05}) render all the claimed problems irrelevant}%\n.\\textquotedblright\\ \n\nIndeed, we have so far discussed only the problems with quantum logic gates.\nMoreover, measurements are an integral part of FT-QEC theory as well, in\nparticular to reset and disentangle ancillas before they are introduced into\nan error-correction circuit. Therefore some remarks on the use of\nmeasurements are in order.\n\nIn the most advanced FT-QEC scheme of \\cite{Aharonov:99}, measurements are\nperformed at the end of the computation. However, this approach demands a\nhigh resource overhead, which may make it impractical. Therefore, more\nrecent proposals (e.g., \\cite{Knill:05,Steane:04}) rely on feedback\nmechanisms employing the results of quantum measurements. Those\n\\textquotedblleft measurements in the middle of\ncomputation\\textquotedblright\\ are treated for simplicity as certain\nvon-Neumann projective measurements (but with efficiency $\\ll 1$) satisfying\na \\emph{repeatability condition}. The latter implies that the subsequent\nmeasurements reduce the measurement error exponentially as their number\nincreases. 
This assumption should be carefully scrutinized, within realistic\nHamiltonian models of quantum measurement treated as a dynamical process.\nHere, again one can expect that the tacit assumption of statistical\nindependence of repeated measurements is in conflict with the non-Markovian\ncharacter of the dynamics of open quantum systems.\n\nAs all proposed measurement schemes are based on electromagnetic\ninteractions, it should be possible to construct a rather general\nHamiltonian framework and apply it to various particular implementations.\nIndeed, this has been done, e.g., for a single-electron tunneling (SET)\ntransistor coupled capacitively to a Josephson junction qubit \\cite%\n{Shnirman:98}. Rather than assuming that the measurement apparatus is\ncoupled to the system whenever measurements must be performed -- an option\nwhich is hard to achieve in mesoscopic systems -- Ref.~\\cite{Shnirman:98}\nmakes the reasonable assumption that the measurement apparatus is always\ncoupled to the system, but is in a state of equilibrium when it is not\nneeded. A measurement is then performed by driving the measuring device out\nof equilibrium, in a manner that dephases the qubit to be measured. Generic\nfeatures emerging from this analysis are the existence of three different\ntime-scales characterizing the measurement: the dephasing time, the\nmeasurement time (which may be longer than the dephasing time), and the\nmixing time (the time after which all the information about the initial\nquantum state is lost due to the transitions induced by the measurement).\nRef.~\\cite{Shnirman:98} thus arrives at a criterion for a ``good'' quantum\nmeasurement: the mixing time should be longer than the measurement time. A\ntime-scale analysis of measurements in optical systems, accounting for\nspontaneous emission, can be found, e.g., in Ref.~\\cite{Teich:89}. A fully\nconsistent analysis of FT-QEC should account for the existence of such\ntime-scales in a dynamic description of the measurement process. In\nparticular, it is important to set appropriate bounds on these time-scales,\nso that they may be taken into account in a threshold calculation (an\nanalysis based on a stochastic error model was reported in Ref.~\\cite%\n{Steane:03}).\n\n\\subsection{Degenerate Qubits}\n\nWith respect to the SCL: \\textquotedblleft \\emph{Degenerate qubits\nautomatically satisfy the high }$T$\\emph{\\ limit since their intrinsic\nenergy scale vanishes}.\\textquotedblright\\ \n\nExamples of degenerate qubits are common, e.g., in trapped ion quantum\ncomputing implementations where a pair of degenerate hyperfine states can\nserve as a qubit, with an auxiliary third level used to implement quantum\nlogic gates via Raman transitions \\cite{Wineland:98}. The case of degenerate\nqubits is somewhat more subtle to analyze within the context we have\nexplained above. Naively, in such a case the high-$T$ limit is indeed\nautomatically satisfied, since the system energy scale is zero. Therefore it\nappears that one could claim that the SCL version of the Markov\napproximation is attainable. However, upon closer examination this still\nseems problematic. Indeed, the vanishing of an energy scale for degenerate\nqubits holds, strictly speaking, only for fully adiabatic techniques, e.g.,\nHQC \\cite{ZanardiRasetti:99,ZanardiRasetti:2000}. Otherwise transformations\nbetween logical states are achieved by resorting to effective Hamiltonians\nwhich involve \\emph{virtual} transitions. 
For instance, if $|0\\rangle $ and $%\n|1\\rangle $ denote degenerate qubit levels (e.g., hyperfine levels of an\nion), one can introduce far-detuned (e.g., laser) couplings of $|0\\rangle $\nand $|1\\rangle $ with a third auxiliary level. Second order perturbation\ntheory then yields the effective Hamiltonian $H_{\\mathrm{eff}}=-(\\Omega _{%\n\\mathrm{R}}^{2}\/\\Delta )|1\\rangle \\langle 0|+\\mathrm{h.c.}$, where $\\Omega _{%\n\\mathrm{R}}$ and $\\Delta $ are the laser Rabi coupling and detuning,\nrespectively. Therefore we see that an effective, small but non-vanishing,\nenergy scale $E_{1}:=\\Omega _{\\mathrm{R}}^{2}\/\\Delta $ is introduced. (Note\nthat in order for perturbation theory to be valid one must have $\\Omega _{%\n\\mathrm{R}}\\ll \\Delta $, which in turn implies $E_{1}\\ll \\Delta $.) Yet\nanother energy scale is provided by the spectral width $E_{2}$ of the laser\npulse shape $\\Omega _{\\mathrm{R}}(t)$; in order to suppress unwanted \\emph{%\nreal} transitions, one must impose in addition that $E_{2}\\ll \\Delta $. At\nany rate, the appearance of these new system-energy scales implies that once\nagain the SCL-type contradiction applies. On the other hand, we can make\nboth $E_{1}$ and $E_{2}$ small at the price of lengthening the gating time ($%\n\\tau _{g}\\simeq \\max \\{1\/E_{1},\\,1\/E_{2}\\}$). This implies, once again, an\nadiabatic limit and the applicability of the WCL. Therefore it appears that\nas long as one restricts manipulations to adiabatic ones (thus contradicting \n\\textbf{A1}), quantum computing with degenerate qubits is possible even in\nthe Markovian limit. We expand on this viewpoint below.\n\n\\subsection{Impure Ancillas}\n\nWith respect to the SCL:\\ \\textquotedblleft \\emph{Do ancillas really need to\nbe pure?}\\textquotedblright\\ \n\nWhat precisely is the role of the ancillas in QEC? A popular answer is that\nthey serve as an \\textquotedblleft entropy sink\\textquotedblright\\ for the\nerrors accumulated during the quantum computation. This entropy in the\nsystem arises from the entanglement between system and bath, and the role of\nthe ancillas is to remove this entanglement. I.e., in a perfect quantum\nerror correction step the entanglement between system and bath is\ntransferred to the ancillas and bath. A natural objection to our SCL-based\ninconsistency is to claim that, in fact, ancillas need not be pure, or could\nperhaps even be highly mixed. However, this is not supported by the\n(current) standard theory of FT-QEC. Consider, e.g., an error correction\ncircuit based on the Steane 7-qubit code. It takes as input ancillas\nprepared in the $|\\psi \\rangle _{a}=(|0_{L}\\rangle +|1_{L}\\rangle )\/\\sqrt{2}$\nstate, where $|0_{L}\\rangle $ and $|1_{L}\\rangle $ are codewords. The\nphysical qubits which comprise such ancillas, are coupled bitwise via CNOT\ngates to the physical qubits making up the encoded data qubits in the\ncircuit. If instead we input an ancilla in a mixed state, this is equivalent\nto inputting a classical mixture with erred codewords, e.g., $(1-p)|\\psi\n\\rangle _{a}\\langle \\psi |+p|\\phi \\rangle _{a}\\langle \\phi |$, where $|\\phi\n\\rangle _{a}$ is an erred codeword. If one of these errors is a phase-flip,\nit feeds back (via the CNOT gates) into the data qubits, producing an error \n\\cite{Gottesman:97a}. Without fault-tolerance this means that there are now\ntwo errors (in the ancillas block and the data block), which may be more\nthan the code can handle. 
In FT-QEC theory such errors are accounted for, but their magnitude is bounded from above (e.g., $p$ in the above example must be small). We note that an ancilla which is initially entangled with the data qubits (violating the assumption of being introduced into the circuit in a tensor-product state) is essentially equivalent to the case of an impure ancilla just described (tracing over the data qubits yields an impure ancilla state).\n\nA more general approach showing the importance of the assumption of pure ancillas is the following (fairly standard) account of QEC.\n\n\\noindent i) \\textit{Preparation}.--\n\nLet the initial state of system + reservoir + ancillas, with respective Hilbert spaces $\\mathcal{H}_{S},\\mathcal{H}_{R},\\mathcal{H}_{A}$, be: $\\rho _{SRA}^{0}=|\\psi _{S}\\rangle \\langle \\psi _{S}|\\otimes |0_{R}\\rangle \\langle 0_{R}|\\otimes \\rho _{A}$, where we have allowed for ancillas in a mixed state $\\rho _{A}$.\n\n\\noindent ii) \\textit{System-reservoir interaction (decoherence)}.--\n\\begin{equation}\n\\rho _{SRA}^{0}\\overset{U_{SR}}{\\longrightarrow }\\rho _{SRA}^{1}=\\sum_{e,e^{\\prime }\\in \\mathcal{E}}U_{e}|\\psi _{S}\\rangle \\langle \\psi _{S}|U_{e^{\\prime }}^{\\dag }\\otimes |e_{R}\\rangle \\langle e_{R}^{\\prime }|\\otimes \\rho _{A},\n\\end{equation}\nwhere the $e$'s denote the \\emph{errors} belonging to the set $\\mathcal{E}$ that the code $\\mathcal{C}$ can correct, and where $|e_{R}\\rangle $ are the corresponding states of the reservoir. The error operators $U_{e}$ are assumed to be unitary and to have a linear span of dimension $|\\mathcal{E}|$.\n\n\\noindent iii) \\textit{System-ancilla interaction (syndrome extraction)}.--\n\nThis interaction takes the form $U_{SA}=\\sum_{e\\in \\mathcal{E}}\\Pi _{e}\\otimes T_{e}$, where the $T_{e}$'s are unitaries over $\\mathcal{H}_{A}$ such that $T_{e}|0_{A}\\rangle =|e_{A}\\rangle $ and $\\Pi _{e}\\cong I_{\\mathcal{C}}\\otimes |e\\rangle \\langle e|$ \\footnote{We know that $\\mathcal{H}_{S}\\cong \\mathcal{C}\\otimes \\mathcal{S}\\oplus \\mathcal{D}$ [$\\mathcal{S}$=syndrome subsystem, dim$\\mathcal{S}=|\\mathcal{E}|$; $\\mathcal{D}$=remainder (=$0$ for subspace-based codes)] \\cite{Knill:97b,Knill:99a,Zanardi:99d}.}:\n\\begin{eqnarray}\n\\rho _{SRA}^{1} &\\overset{U_{SA}}{\\longrightarrow }&\\rho _{SRA}^{2} \\notag \\\\\n&=&\\sum_{e,e^{\\prime }\\in \\mathcal{E}}U_{e}|\\psi _{S}\\rangle \\langle \\psi _{S}|U_{e^{\\prime }}^{\\dagger }\\otimes |e_{R}\\rangle \\langle e_{R}^{\\prime }|\\otimes T_{e}\\rho _{A}T_{e^{\\prime }}^{\\dagger }. \\notag \\\\\n&&\n\\end{eqnarray}\n\n\\noindent iv) \\textit{Error recovery}.--\n\nUnitary recovery is implemented via $\\tilde{U}_{SA}=|\\mathcal{E}|^{-1\/2}\\sum_{e\\in \\mathcal{E}}U_{e}^{\\dagger }\\otimes I_{R}\\otimes |e_{A}\\rangle \\langle e_{A}|$, where for unitarity we need $\\langle e_{A}|e_{A}^{\\prime }\\rangle =\\delta _{e,e^{\\prime }}$. 
By applying $\\tilde{U}_{SA}$ and tracing over both $R$ and $A$ (assuming the $|e_{R}\\rangle $'s too are orthonormal) one obtains\n\\begin{equation}\n\\rho _{S}^{\\mathrm{out}}=\\frac{1}{|\\mathcal{E}|}\\sum_{e,f\\in \\mathcal{E}}U_{f}^{\\dagger }U_{e}|\\psi _{S}\\rangle \\langle \\psi _{S}|U_{e}^{\\dagger }U_{f}\\,\\langle f_{A}|T_{e}\\rho _{A}T_{e}^{\\dagger }|f_{A}\\rangle .\n\\end{equation}\nIn the case of pure ancillas, $\\rho _{A}=|0_{A}\\rangle \\langle 0_{A}|$, one has $\\langle f_{A}|T_{e}\\rho _{A}T_{e}^{\\dagger }|f_{A}\\rangle =|\\langle f_{A}|e_{A}\\rangle |^{2}=\\delta _{f,e}$ and therefore the ideal case $\\rho _{S}^{\\mathrm{out}}=|\\psi _{S}\\rangle \\langle \\psi _{S}|$ is recovered. One can also consider the fidelity\n\\begin{eqnarray}\nF &:=&\\langle \\psi _{S}|\\rho _{S}^{\\mathrm{out}}|\\psi _{S}\\rangle \\notag \\\\\n&=&|\\mathcal{E}|^{-1}\\sum_{e,f\\in \\mathcal{E}}|\\langle \\psi _{S}|U_{f}^{\\dagger }U_{e}|\\psi _{S}\\rangle |^{2}\\,\\langle f_{A}|T_{e}\\rho _{A}T_{e}^{\\dagger }|f_{A}\\rangle . \\notag \\\\\n&&\n\\end{eqnarray}\nProvided the error operators $U_{f}$ satisfy the condition for a non-degenerate code, $\\langle \\psi _{S}|U_{f}^{\\dagger }U_{e}|\\psi _{S}\\rangle =\\delta _{f,e}$ \\cite{Knill:97b}, one obtains $F=|\\mathcal{E}|^{-1}\\sum_{e\\in \\mathcal{E}}\\langle e_{A}|T_{e}\\rho _{A}T_{e}^{\\dagger }|e_{A}\\rangle =\\langle 0_{A}|\\rho _{A}|0_{A}\\rangle .$ Clearly, $F=1$ iff $\\rho _{A}=|0_{A}\\rangle \\langle 0_{A}|,$ i.e., \\emph{the ancillas are pure}. One can also consider non-unitary recovery via ancilla measurements and conditional unitaries, with Kraus operators given by $A_{e}=|\\mathcal{E}|^{-1\/2}U_{e}^{\\dagger }\\otimes I_{R}\\otimes |e_{A}\\rangle \\langle e_{A}|$. The conclusion that the ancillas' state must be pure is unchanged.\n\nWe note that FT is obtained by adding concatenation and, in steps iii) and iv), preparing and coupling encoded ancillas with the system in a suitable way, e.g., as in the Steane-code example above. In this case it is permissible to allow slightly impure ancillas, and to relax the assumptions that in step ii) the environment couples only to the system, and that in steps iii) and iv) the environment does not act. This formulation, however, does not allow arbitrarily mixed-state ancillas, as argued in the Steane-code example. While such a formulation of FT-QEC theory might still emerge (for example, by using algorithmic cooling techniques \\cite{Schulman:98,Schulman:05}, which, however, at present assume perfect gates), it does not appear possible at present to relax the assumption of cold ancillas.\n\n\\subsection{Hot QC, cold ancillas, and fast QC-ancilla interactions in the SCL}\n\n\\label{Strong}\n\nWith respect to the SCL: \\emph{\\textquotedblleft One can keep the ancillas coupled to a separate cold bath and then couple them for only a short time to the QC: what matters then is the }$\\emph{T}_{1}$\\emph{\\ timescale and that one can be very long compared to the required ancilla-QC coupling time\\textquotedblright }.\n\nLet us paraphrase this objection. If one can make $H_{SA}$ (system-ancillas) very large then one could beat the rate of ancilla heating by strongly coupling the QC and ancillas. I.e., suppose one would like to bring the ancillas in from their cold reservoir to couple to the system, which is coupled to a hot reservoir as required for the SCL. 
The ancillas then heat\nup fast, but there is a timescale associated with this heating\n(\\textquotedblleft $T_{1}$\\textquotedblright ), which one wishes to beat.\nNow if one could make the system-ancilla coupling very strong then one\ncould, presumably, use the ancillas (e.g. for syndrome extraction) faster\nthan their heating rate, while they are still sufficiently pure for fault\ntolerance purposes.\n\nThe simplest argument against this objection is the following. In the\nsetting of the objection, the QC is described by the SCL (high $T$) while\nthe ancillas are described by the WCL (low $T$). Strong and fast coupling\nbetween the QC and the ancillas is unacceptable according to the WCL because\nit is fast\n(only adiabatic manipulations are allowed),\nand according to the SCL because it is strong\n(``strong'' refers to the system's Hamiltonian part, while in the SCL this Hamiltonian is weak\nin comparison with the system-bath coupling).\n\nHowever, one could go on to argue that the ancillas are a different species\nthan the QC qubits, and in particular have a different intrinsic\n(less\ndense) energy scale, so that they are at low $T$ on the scale set by the QC\\\nqubits. In this case both ancillas and QC are described by the SCL. Then the\nproblem with the objection is the following: recall that in the SCL (see\nSection \\ref{SCL}) one must rescale $H_{SR}$ and $H_{AR}$ as $%\nH_{SR}\/\\epsilon $ and $H_{AR}\/\\epsilon $ respectively, where here $R$\ndenotes the common reservoir the system and the ancillas are coupled to.\nThe heating rate is proportional to the square of the coupling\nstrength to the reservoir, i.e., to $1\/\\epsilon ^{2}$, and hence diverges in\nthe SCL. Therefore to beat the ancilla heating process via fast manipulation\nof the system-ancilla coupling one would have to rescale $H_{SA}$ at least\nby $1\/\\epsilon ^{2}$, but this contradicts the SCL derivation, where in fact\none must keep $H_{SA}$ fixed while rescaling $H_{SR}$. The reason for this\nis that, in the SCL derivation, it is the system (now including the\nancillas) that sets the timescale against which reservoir correlations must\nbe accelerated.\\footnote{%\nLet us also consider the issue from the perspective of thermodynamics. This\nis not really necessary, since the arguments above about the SCL are\nrigorous, but is interesting in its own right. \nFirst, we remark that error correction should really be made to work at the\ncommon lower (initial ancillas') temperature. Heating a part of a QC only to\nbe closer to the Markovian limit is a suboptimal strategy, because it increases\nthe strength of the noise and stimulates entropy production. \nSecond, in standard FT-QEC heat (entropy) flows from the QC to the ancillas\nonly, while in reality one should expect a flow in both directions and\nadditionally an entropy production. To see this let us ignore for the moment\nthe coupling of the QC to the bath, and consider ancillas coupled to a heat\nbath at temperature $T$. The ancillas can be kept pure by maintaining an\nenergy gap $\\gg $ $kT$. Assume that the initial state of QC ($C$) and\nancillas ($A$) is a product state $|\\psi _{C}\\rangle \\otimes |\\psi\n_{A}\\rangle $. Switching on the interaction $H_{CA}$ we induce an\nequilibration process (because the dynamics is Markovian) of $C+A$ towards\nthe Gibbs state $\\rho _{CA}=\\exp (-H_{CA}\/kT)\/Z$, which is \\emph{entangled}\n(here for simplicity $H_{CA}$ contains not only the interaction but is the\ntotal Hamiltonian of $C+A$). 
After a single step of error correction the\ntotal state of $C+A$ can be modeled by $(1-p)U|\\psi _{C}\\rangle \\otimes\n|\\psi _{A}\\rangle U^{\\dag }+p\\rho _{CA}$, where $U$ is unitary and $010^{25}$ and the small\nsearch volumes, all detected FR II's are likely to cluster near the\nsurvey-limits in optical luminosity (near $M^*$). It should be noted,\nhowever, that it is difficult to separate out the effects of redshift\nevolution with a limited search volume.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics*[width=10cm]{lcfig1.eps}\n\\end{center}\n\\caption{FR I\/II diagram in the optical\/radio luminosity plane. The dotted\n lines show the 75th percentile values of $M_{24.5}$ and $P_{20cm}$\n for the various samples. The symbols are: FR I=1's, FR II=2's, and\n Fat-Double=F. The solid-line indicates the fiducial FR I\/II break.}\n\\label{selectioneffects}\n\\end{figure}\n\n\\section{Rich Cluster vs. Non Rich-Cluster Radio Galaxies}\n\nIn order to compare our rich cluster radio sources to these other\nsamples, we have therefore applied the B2 selection function (both\n$M_{opt}$ and $P_{20cm}$ constraints) to our rich cluster survey,\nincluding an upper-redshift cutoff of z=0.24 to better match the two\nsamples. We have compared a number of parameters from both the\noptical analysis as well as the radio properties between these two\nsamples. See \\citeasnoun{ledl2000} for a complete presentation of\nthese results.\n\nFor the optical properties, we compared the distribution of\nellipticities, surface-brightness profile shape (via a power-law\nexponent and $r^{1\/4}$-law fits), departure from elliptical isophotes\n(A4,B4 parameters), the Kormendy relation ($\\mu_e$ vs. $r_e$ slope),\nand the luminosity-size ($M_{24.5}$ vs. $r_{24.5}$) relationships for\nboth the cluster and non-cluster samples. Measured at the same\noptical luminosity ($M_{24.5}$), we find no discernable differences\nbetween the two samples.\n\nWe also compared the optical properties of the FR I and FR II host\ngalaxies (combining cluster and non-cluster sources together). From\nthe median of the optical magnitude distributions, we find:\n$M_{24.5}(FR I)=-22.85\\pm0.04$ and $M_{24.5}(FR II)=-22.58 \\pm0.10$.\nThus, the FR I's are found in only slightly brighter galaxies, and\nsignificantly less luminous on average than BCG's ($M_{24.5} <\n-23.5$). Additionally, we found no differences in the host galaxy\nproperties listed above between the FR I and FR II hosts, at least in\ntheir broadband photometric properties.\n\nOur radio comparison between cluster and non-cluster sources included\nthe size-distributions (at the same radio power), the distribution in\nthe radio\/optical luminosity plane, and the FR I\/II division. We find\nno evidence for differences in either of these properties between the\ntwo samples. We do however note the following observations: 1) The\nsize-distribution is somewhat broader outside of clusters, and 2) We\ndo not see a population of FR II's with $P_{20cm} \\gtrsim 25.5$ in the\nrich cluster sample. However, these differences are minor, and argue\nfor very similar envirnoments (at least the aspect of the environment\nwhich most influences the evolution of radio sources) inside\/outside\nthe cores of rich clusters. All three samples are plotted together in\nFig.\\ref{selectioneffects} and Fig. \\ref{pdplots}.\n\n\\section{The (P,D) Diagram} \n\nGiven that we find no significant differences in our\ncluster\/non-cluster samples, we have combined them in order to improve\nthe statistics. 
In Fig. \\ref{pdplots}, we show the radio-power\/size\n($P,D$) distributions in several ways. In the bottom diagram we\nseparate the three samples by point-type to show their intrinsic\ndistributions. In the middle plot we show the sample divided into FR\nI, FR II, and Fat-double classes. For the FR I's, we find a fit\n(log-log) of the form: $Size (FR I) \\propto P_{20cm}^{0.31\\pm0.03}$\n(ignoring unresolved sources with only size upper-limits). For the FR\nII's, we find essentially no dependence between power and size ($Size\n(FR II) \\propto P_{20cm}^{0.08\\pm0.09}$). Note, however, that at the\nsame radio power, both FR I and FR II sources have the same\ndistribution in source-sizes.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics*[height=12cm]{lcfig2.eps}\n\\end{center}\n\\caption{(middle) ($P,D$) diagram for combined cluster\/non-cluster samples with a \n cutoff of z=0.24. The solid-lines show the fit for the FR I's with\n and without the size upper- limits for the unresolved sources. The\n dashed-line is the fit for the FR II's. (bottom) Same as middle,\n where the point types reflect the sample from which the sources were\n taken (see legend on plot). (top) Median values of the radio size in\n radio power bins as a function of optical magnitude ($\\Delta M$=0.75\n mag); centered on 1=-21.38, 2=-22.13, 3=-22.88, 4=-23.63, 5=-24.38.}\n\\label{pdplots}\n\\end{figure}\n\nThe top plot in Fig. \\ref{pdplots} shows the median size as a function\nof radio power and optical luminosity (lower numbers represent bins of\nlower optical luminosity). One sees from the plot that the trend is\nfor optically fainter galaxies to have larger radio sources. In bins\nof constant optical magnitude, we find that for fits of the form:\n$Size \\propto P_{20cm}^x$, x decreases with increasing $L_{opt}$ (from\n0.36-0.29). We also find that the size distributions for $M\\gtrsim\nM^*$ and $M\\lesssim -23$ are different at the 98\\% level. It thus\nappears that optically brighter galaxies tend to have smaller sizes\nwhen measured at the same radio power. Is this result a consequence\nof environment (the brighter galaxies being in a higher-density\nenvironment or near the bottom of the local gravitational potential)\nor initial conditions (the physics near the black hole and the jet\ninitial conditions)? We plan to address these issues by comparing our\nobserved distributions in ($M_{opt},P_{20cm},Size$) with model\npredictions. It is clear, however, that we may need a\nnew\/more-detailed model for FR I sources (see \\citeasnoun{eileklc}\nthese proceedings) in order to understand these results.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe study of cold quark matter has been an interesting topic of research in recent years. In an astrophysical\ncontext, quark matter may be used to explain the measureable properties of pulsar-like compact stars.\n\nIt has recently been proposed that realistic quark matter in compact stars could exist in a solid state\n~\\cite{xu03,Horvath05}, either as a super-solid or as a normal solid~\\cite{xu09}. The basic conjecture of normal\nsolid quark matter is that de-confined quarks tend to form quark-clusters when the temperature and density are\nrelatively low. Below some critical temperature, these clusters could be in periodic lattices immersed in a\ndegenerate, extremely relativistic, electron gas. 
Note that even though quark matter is usually described as weakly coupled, the interaction between quarks and gluons in a quark-gluon plasma is still very strong~\\cite{Shuryak}. It is this strong coupling that could cause the quarks to cluster and form a solid-like material.\n\nIn this paper, we argue that contemporary observations of thermal X-ray emitting pulsars are consistent with the assumption that these sources are in fact Solid Quark Stars (SQSs). In the next section, we present the thermal X-ray observations of isolated pulsars collated in this work. In Sect. 3, the SQS pulsar model is used to interpret these observations. Conclusions and the corresponding discussions are presented in Sect. 4.\n\n\\section{The observations}\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=f5.eps,height=4.8in} \\caption{Functions $L_{\\rm bol}^{\\infty}(\\dot{E})$ ({\\it top} panel) and $t(\\dot{E})$ ({\\it bottom} panel) of active pulsar candidates. In the {\\it top} panel, the fits are carried out both for Group A (including the top 17 pulsars listed in Table 1 in \\cite{yx09}; they are marked by dark points and their numbers) and Group B (including the members of Group A and the other 10 sources whose upper limits on the bolometric luminosities could be defined observationally; the 10 sources are marked by grey points and letters). Red lines are the fitted results for Group A, while blue lines are those for Group B. In both groups, solid lines provide the best fits to the data, while the dashed lines give the fits obtained by freezing $p1$ at 1. We note that the 10 sources with defined upper limits on their bolometric luminosities are taken from \\cite{BA02}, and they are a. B1509-58, b. B1951+32, c. B1046-58, d. B1259-63, e. B1800-21, f. B1929+10, g. B0540-69, h. B0950+08, i. B0355+54, j. B0823+26.}\\label{fig:edot}\n\\end{center}\n\\end{figure}\n\\begin{table}\n{\\scriptsize\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\hline \\\\\nGroup & $p1$$^a$ & $p2$ & Corr. Coef.$^b$ & $\\chi_r^2$ (d.o.f)$^c$ \\\\\n\\hline \\\\\nA. & $0.4561\\pm0.2315$ & $16.20\\pm8.30$ & -- & 0.3277(14) \\\\\n & (1) & $-3.280^{+0.495}_{-0.494}$ & 0.7487 & 0.8607(15) \\\\\nB. & $0.5918^{+0.2045}_{-0.2046}$ & $11.66^{+7.3100}_{-7.2990}$ & -- & 0.6081(24) \\\\\n & (1) & $-2.896\\pm0.403$ & 0.7730 & 0.9964(25) \\\\\n\\hline \\\\\n\\end{tabular}\n\\end{center}\n\\begin{flushleft}\n$^a$ Values in parentheses are frozen during the fits. The errors are at the 95\\% confidence level.\n\\newline\n$^b$ Correlation coefficient of the data between the bolometric luminosity and $\\dot{E}$.\\newline\n$^c$ Reduced $\\chi^2$, or $\\chi^2$ per degree of freedom (d.o.f).\n\\caption{Fitting parameters for the $L_{\\rm bol}^{\\infty}$---$\\dot{E}$ data shown in Fig. \\ref{fig:edot} ({\\it top} panel).}\\label{tab:fit}\n\\end{flushleft}}\n\\end{table}\n\nWe collate the thermal X-ray observations of 29 isolated pulsars (see Table 1 in~\\cite{yx09}). The summarized black body parameters comprise the pulsar surface temperature components $T_{\\rm s,1\/2}^{\\infty}$\\footnote{``$\\infty$''---values measured at infinity, i.e., by a distant observer.}, the emission size components $R_{1\/2}^{\\infty}$, the ages, and the bolometric X-ray luminosities $L_{\\rm bol}^{\\infty}$. Spin is expected to be a significant source of energy for pulsars, so a relation between the pulsar X-ray bolometric luminosity $L_{\\rm bol}^{\\infty}$ and the spin energy loss rate $\\dot{E}$ could exist. Such a study has been carried out, and the results are exhibited in Fig.
\\ref{fig:edot} and Table \\ref{tab:fit}. Using the fitting function\n\\begin{equation}\n{\\rm log}L_{\\rm bol}^{\\infty}=p1\\cdot {\\rm log}\\dot{E}+p2, \\label{eq:fit}\n\\end{equation}\nthe $L_{\\rm bol}^{\\infty}-\\dot{E}$ data were fitted, and the best fit suggests a possible 1\/2-law between these two quantities,\n\\begin{equation}\nL_{\\rm bol}^{\\infty}(\\dot{E})=C\\dot{E}^{1\/2}, \\label{eq:onesecond}\n\\end{equation}\nwhere the coefficient $C=10^{p2}$ is in units of ${\\rm erg^{1\/2}\\,s^{-1\/2}}$. However, when the parameter $p1$ is frozen at 1, a linear-law,\n\\begin{equation}\nL_{\\rm bol}^{\\infty}(\\dot{E})=\\eta\\cdot\\dot{E}, \\label{eq:linear}\n\\end{equation}\nis obtained, with the coefficient $\\eta\\approx 10^{-3}$.\n\n\\section{The SQS pulsar model}\n\\subsection{X-ray cooling and accreting pulsars}\nThe state of a pulsar is manifested in multiple wave bands and by different photon components. The top 17 pulsars have significant observable non-thermal X-ray components, which could originate from energetic magnetospheres. Such pulsars are considered to be undergoing the cooling process proposed for SQSs. The remaining sources are observationally dominated by thermal X-ray components and show little magnetospheric activity. If they are in fact SQSs, these pulsars may be accreting from the surrounding interstellar medium to generate the observed X-ray luminosities.\n\\subsection{Cooling of SQSs}\n\\begin{figure}[htb]\n\\begin{center}\n\\epsfig{file=f3.eps,height=2.0in} \\caption{{\\it Left} panel: Cooling curves for SQSs, if the 1\/2-law between the bolometric luminosity and the spin energy loss rate holds. {\\it Right} panel: Corresponding temperature differences between the hot and warm components of SQSs, or $\\Delta T=T^{\\infty}_{\\rm p}-T^{\\infty}_{\\rm s}$. The parameters in both panels: $M=0.1{\\rm M_\\odot}$, $C=10^{16}$ (solid lines); $M=1.0{\\rm M_\\odot}$, $C=10^{16}$ (dashed lines); $M=1.0{\\rm M_\\odot}$, $C=10^{15}$ (dotted lines); $M=0.01{\\rm M_\\odot}$, $C=10^{15}$ (dash-dot lines) ($C$ is in units of erg$^{1\/2}$ s$^{-1\/2}$). For two curves with the same $M$ and $C$, the upper one corresponds to an initial spin period of 10 ms, while the lower one corresponds to 100 ms. Since the errors on the surface temperatures of PSRs J0205+6449 (No. 3) and J2043+2740 (No. 17) are not provided by the authors of the references, we conservatively adopt errors deviating from the central values by a factor of 2.}\\label{fig:ccos}\n\\end{center}\n\\end{figure}\n\\begin{figure}[htb]\n\\begin{center}\n\\epsfig{file=f4.eps,height=2.0in} \\caption{{\\it Left} panel: Cooling curves for SQSs, if the linear-law holds. {\\it Right} panel: Temperature differences for this case. The parameters: $M=1.0{\\rm M_\\odot}$, $\\eta=0.01$ (solid lines); $M=1.0{\\rm M_\\odot}$, $\\eta=0.001$ (dashed lines); $M=0.1{\\rm M_\\odot}$, $\\eta=0.1$ (dash-dot lines); $M=0.01{\\rm M_\\odot}$, $\\eta=0.1$ (dotted lines). As in Fig. \\ref{fig:ccos}, for two curves with the same $M$ and $\\eta$, the upper one corresponds to an initial spin period of 10 ms, while the lower one corresponds to 100 ms.}\\label{fig:cclr}\n\\end{center}\n\\end{figure}\nCooling of a quark star, starting from its birth, can be quite complicated to model, for two reasons.
Firstly, recent experiments on collisions of heavy relativistic ions (e.g., at RHIC) have shown that the quark-gluon plasma in the fireball would be strongly coupled even at temperatures of a few times the critical temperature $\\sim$170 MeV~\\cite{Shuryak}. Secondly, however, the initial temperature at the birth of a pulsar could only be a few times $\\sim$10 MeV. Hence, if the natal pulsar is a quark star, the quark-gluon plasma in the stellar interior would be strongly coupled, so that current physics cannot model it either analytically or numerically.\n\nNevertheless, a phenomenological study of this problem can be carried out, and Fig. 1 in~\\cite{yx09} illustrates the proposed cooling process of a quark star. It is quite uncertain at which temperatures the phase transitions would occur, and how long the star would stay in Stages 1 and 2. However, as a consequence of the strong coupling in the quark-gluon plasma, Stages 1 and 2 would not last long, and the observed pulsars (if they are quark stars) would be in the solid phase (Stage 3), because of their measured low temperatures. The calculations below concentrate on the late Stage 3, where the surface temperature is in the range of the observed $10^7$---$10^5$ K. The loss of thermal energy of an SQS in the late 3rd stage would be dominated by the emission of photons rather than neutrinos, as a result of the low neutrino emissivities at such low temperatures.\n\nIn general, the cooling process would be described by the equation\n\\begin{equation}\n-C_v\\frac{{\\rm d}T_{\\rm s}}{{\\rm d}t}+L_{\\rm SH}= L_{\\rm bol}, \\label{eq:coolinggeneral}\n\\end{equation}\nassuming that the stellar volume is invariant and that the internal energy of the star is a function of the stellar temperature $T_{\\rm s}$ only. Here $C_v$ is the stellar heat capacity, $L_{\\rm SH}$ is the luminosity of the probable heating process, and $L_{\\rm bol}$ is the photon luminosity of the star. The stellar heat capacity was found to be tiny, so that the internal energy is far from sufficient to sustain long-term X-ray emission: a $1.4{\\rm M_{\\odot}}$ SQS would cool down from $10^{11}$ K to 1 K in $\\sim$0.2 s. The detailed calculation of this stellar residual internal energy, as well as of the heat capacity, can be found in Appendix A.\n\nHeating mechanisms therefore play an important role during the cooling of SQSs. In this sense, the thermal evolution of an SQS would not be a self-sustaining cooling process, but an externally powered heating process. The bombardment by backflowing particles at the poles could be the prime heating mechanism for SQSs, which would intrinsically be spin-powered. Combining the negligible residual internal energy of an SQS with the observed $L_{\\rm bol}^{\\infty}$---$\\dot{E}$ relations, the heating luminosity $L_{\\rm SH}$ should also follow the 1\/2-law or the linear-law. Equation (\\ref{eq:coolinggeneral}) hence yields\n\\begin{equation}\nC\\dot{E}^{1\/2}=4\\pi R^2\\sigma T_{\\rm s}^{4}+4\\pi r_{\\rm p}^2\\sigma T_{\\rm p}^4, \\label{eq:coolingactiveos}\n\\end{equation}\nor\n\\begin{equation}\n\\eta\\dot{E}=4\\pi R^2\\sigma T_{\\rm s}^{4}+4\\pi r_{\\rm p}^2\\sigma T_{\\rm p}^4. \\label{eq:coolingactivelr}\n\\end{equation}\nThe two terms on the right-hand side are the primary stellar black body emission component and that of a probable hot polar-cap component, respectively.
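\nAs a rough consistency check of the 1\/2-law case, one can estimate the surface temperature from equation (\\ref{eq:coolingactiveos}) alone. This is only a sketch: we neglect the polar-cap term and adopt $C\\sim10^{16}~{\\rm erg^{1\/2}\\,s^{-1\/2}}$ as in Fig. \\ref{fig:ccos}, together with illustrative (not fitted) values $\\dot{E}\\sim10^{36}~{\\rm erg\\,s^{-1}}$ and $R\\sim10$ km. Then\n\\begin{equation}\nT_{\\rm s}\\simeq\\left(\\frac{C\\dot{E}^{1\/2}}{4\\pi R^2\\sigma}\\right)^{1\/4}\\sim 2\\times10^{6}~{\\rm K},\n\\end{equation}\nwhich falls within the observed range of surface temperatures quoted above.\n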
The relation between $T_{\\rm p}$ and $T_{\\rm s}$ can be found from\n\\begin{equation}\nH\\sim \\frac{L_{\\rm SH}-4\\pi r_{\\rm p}^{2}\\sigma T_{\\rm p}^{4}}{\\pi r_{\\rm p}^{2}}\\sim \\kappa_{\\rm e}\\frac{T_{\\rm p}-T_{\\rm s}}{R}, \\label{eq:H}\n\\end{equation}\nwhere $H$ is the heat current flowing from the poles to the bulk of the star. For the stellar heat conductivity $\\kappa_{\\rm e}$, which is mainly contributed by free electrons, we adopt the corresponding results for solid metals~\\cite{FI81}. Figs. \\ref{fig:ccos} and \\ref{fig:cclr} exhibit the computed cooling curves for the above two cases, respectively. In these calculations, SQSs were assumed to slow down as orthogonal rotators that lose their spin energy by magnetic dipole radiation. The average densities of the SQSs were adopted to be 3 times the nuclear saturation density. Note that the results do not vary significantly when the density values are in the range of 3--5 times the nuclear saturation density, which is thought to be a representative range for solid quark matter. The spin-powered thermal emission of SQSs enables an estimate of pulsar moments of inertia~\\cite{yx09}.\n\\subsection{Accreting SQSs}\nDim X-ray isolated neutron stars (XDINs) and compact central objects (CCOs) (Sources No. 18-29 and 4) are thought to be inactive pulsar candidates with inert magnetospheric manifestations. These sources may have X-ray luminosities in excess of their spin energy loss rates. The origin of their energy remains disputed, but accretion is a candidate. The relatively high magnetic fields of these sources may drive their surrounding shells of matter (at large radii, usually larger than the corotation radii) to rotate together with them. Under this {\\it propeller} regime, nearly all of the matter would be expelled outward, but a small fraction may diffuse starward and eventually fall onto the stellar surface. The release of gravitational energy, as well as the latent heat released by the burning of baryonic matter into the three-flavor quark matter phase, could sustain the long-term soft X-ray radiation of such pulsars. In this scenario, the stellar luminosity would then be\n\\begin{equation}\nL_{\\rm bol}=\\frac{GM\\cdot \\eta_{\\rm acc}\\cdot\\dot{M}}{R}+\\Delta\\varepsilon\\frac{\\eta_{\\rm acc}\\cdot\\dot{M}}{m_{\\rm p}}. \\label{eq:coolingdead}\n\\end{equation}\nFor the calculated accretion parameters of SQSs in this {\\it propeller} regime and the resulting luminosities, please see \\cite{yx09}. Note that source No. 4, RX J0822-4300, a CCO, is treated both as a cooling SQS and as an accreting SQS, because of the disputed structure around it~\\cite{ZTP99,HB06}.\n\n\\section{Conclusions}\nWe collate the thermal observations of 29 isolated X-ray pulsars and interpret them in the SQS regime: for the magnetospherically active pulsar candidates we establish their cooling processes (Figs. \\ref{fig:ccos} and \\ref{fig:cclr}), while for the magnetospherically inactive or dead pulsar candidates we interpret the X-ray luminosities under the accretion scenario. SQSs, because they can be of low mass, could provide an approach to understanding the small black body emission sizes.
We note that for SQSs with masses of $0.01{\\rm M_\\odot}$, $0.1{\\rm M_\\odot}$ and $1{\\rm M_\\odot}$, the radii are $\\sim$1.8, $\\sim$3.8 and $\\sim$8.3 km, respectively. On the other hand, a linkage between pulsar rotational kinetic energy loss rates and bolometric X-ray luminosities is naturally provided in the SQS picture.\n\nWe hence conclude that the phenomenological SQS pulsar model cannot be ruled out by the thermal observations of isolated X-ray pulsars, though a full depiction of the formation and thermal evolution of a quark star through all stages can hardly be given at present, owing to the lack of physics under some extreme conditions (as discussed in Sect. 3.2). SQSs have interiors that are clearly distinguishable from those of neutron stars, but the magnetospheric structures of these two pulsar models could be similar. Therefore, SQSs and neutron stars would have similar heating mechanisms, such as the bombardment by backflowing particles and accretion from the surrounding medium. However, a full comparison between SQSs and neutron stars, including their cooling processes as well as heating mechanisms, is beyond the scope of this paper, and would be an interesting and necessary future study.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAlthough General Relativity (GR) provides very successful solutions, observations, and predictions at intermediate regimes, it fails to be a complete theory at both large (IR) and small (UV) scales. In the IR regime, GR does not explain the accelerating expansion of the universe and the rotational speeds of galaxies without assuming a tremendous amount of dark energy and dark matter compared to ordinary matter. At small distances, at the quantum level, it is a non-renormalizable theory from the perturbative quantum field theory perspective because of the infinities appearing in the renormalization procedure. These infinities, which come from the self-interactions of gravitons (in the pure gravity case), cannot be removed by a redefinition of a finite number of parameters. GR also has black hole and cosmological singularities at the classical level. GR is therefore expected to be modified in both regimes in order to obtain a complete theory. Here, the main question is what kind of modification in the UV will provide a complete model which may also solve the cosmological or black hole singularity problems. In this respect, a possible way out of this problem was to add higher order curvature scalars to Einstein's theory, as in the quadratic theory\n\\begin{equation}\nI=\\int d^4x(\\sigma R+\\alpha R^2+\\beta R_{\\mu\\nu}^2),\n\\end{equation}\nwhich describes massive and massless spin-$2$ gravitons together with a massless spin-$0$ particle \\cite{stelle}. By adding higher curvature terms, renormalizability is gained, but the unitarity (ghost- and tachyon-freedom) of the theory is lost due to a conflict between the massless and massive spin-$2$ excitations. In other words, the theory has Ostrogradsky-type instabilities at the classical level, which become ghosts in the quantum theory: the Hamiltonian density is unbounded from below. That is to say, the addition of higher powers of curvature causes a conflict between unitarity and renormalizability.
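\nTo see where this conflict comes from, it is useful to look at the spin-$2$ sector of the propagator of the quadratic theory. The expression below is only a schematic sketch: it suppresses the tensor structure as well as numerical factors, and the signs depend on conventions, but it illustrates the origin of the ghost:\n\\begin{equation}\n\\Pi_{{\\rm spin}-2}(k)\\sim\\frac{1}{\\sigma k^{2}+\\beta k^{4}}=\\frac{1}{\\sigma}\\left(\\frac{1}{k^{2}}-\\frac{1}{k^{2}+\\sigma\/\\beta}\\right).\n\\end{equation}\nThe improved $1\/k^{4}$ ultraviolet falloff responsible for renormalizability is produced precisely by the relative minus sign of the massive spin-$2$ pole, i.e., by the ghost.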
\n\nOn the other hand, it has been recently demonstrated that infinite derivative gravity (IDG) \\cite{Biswas1,Biswas2} has the potential to provide a complete theory at the UV scale\\footnote{For recent developments on IDG, see \\cite{Tomboulis,Biswas4, Modesto,Biswas5,Biswas6,Buoninfante:2018xiw,Modesto1,Modesto2,Modesto22,Modesto3, Talaganis:2016ovm,Edholm:2016hbt,Modesto4,Boos:2018bxf,Moffat1,Moffat2,Briscese:2012ys}.}. IDG is described by an action constructed from the non-local functions $F_i(\\Box)$ [given in Eq.(\\ref{idfunc})], where $\\Box$ is the d'Alembertian operator ($\\Box=g^{\\mu\\nu}\\nabla_\\mu\\nabla_\\nu$). The propagator of IDG in a flat background in $3+1$ dimensions,\n\\begin{equation}\n\\Pi_{IDG}=\\frac{P^2}{a(k^2)}-\\frac{P_s^0}{2a(k^2)}=\\frac{\\Pi_{GR}}{a(k^2)},\n\\end{equation}\nis given in terms of the Barnes-Rivers spin projection operators ($P^2,P_s^0$) \\cite{Biswas1}. Here $a$ is given in terms of the $F_i(\\Box)$ [see Eq.(\\ref{relations})] and $\\Pi_{GR}$ is the pure GR graviton propagator. One of the important points is to avoid introducing ghost-like instabilities and additional scalar degrees of freedom other than the massless spin-$2$ graviton. To do this, $a(k^2)$ can be chosen to be the exponential of an entire function, $a(k^2)=e^{\\gamma(\\frac{k^2}{M^2})}$, where $\\gamma(\\frac{k^2}{M^2})$ is an entire function. This choice guarantees that the propagator has no additional poles other than the massless graviton; in other words, $a(k^2)$ has no roots. In the $k\\ll M$ limit, where $a(k^2)\\to 1$, the propagator takes the usual Einsteinian form. Furthermore, since the propagator does not contain any extra degrees of freedom, the modified theory is free from ghost-like instabilities, and the Hamiltonian density is bounded from below. Moreover, in \\cite{Talaganis}, it has recently been shown that loop divergences beyond one loop may be handled by introducing some form factors. Furthermore, the infinite derivative extension of GR may resolve the problem of singularities in black holes and cosmology \\cite{Biswas1, Biswas2,Tomboulis,Biswas4, Modesto,Biswas5,Biswas6,Buoninfante:2018xiw}.\n\nIn this work, we would like to explore the weak field limit of IDG and compare it with the result of GR. In \\cite{Biswas1}, the Newtonian potential of a point source was calculated in IDG; here we extend this discussion to include the spins, velocities and orbital motion of the sources. By spin, we mean the rotation of the sources about their own axes. We therefore calculate the spin-spin and spin-orbit interactions between two massive sources in IDG and show that the mass-mass, spin-spin and spin-orbit interaction parts become non-singular as $r\\to 0$. These non-singular results show that IDG has improved small-scale behavior compared to GR.\n\nThe layout of the paper is as follows: In Sec. II, we investigate the spin-spin interactions of localized point-like spinning massive objects in IDG and consider the large and small distance limits of the potential energy. Section III is devoted to extending the calculations of the previous section to the case in which the massive spinning sources are also moving. In that section, in addition to the mass-mass and spin-spin interactions, we study the spin-orbit interactions in IDG. In the conclusions and further discussions, we give the final result for a gravitational memory effect in IDG and discuss the effect of the mass scale of non-locality on the memory.
In the Appendix, we give some of the details of calculations for Sec. III. \n\n\\section{Scattering Amplitude in IDG}\nThe matter coupled Lagrangian density of IDG is \\cite{Biswas1}\n\\begin{equation}\n\\mathcal{L}= \\sqrt{-g}\\bigg[\\frac{M^2_P}{2} R\\ +\\frac{1}{2} R F_1 (\\Box) R + \\frac{1}{2} R_{\\mu\\nu} F_2(\\Box) R^{\\mu\\nu}\n + \\frac{1}{2}C_{\\mu\\nu\\rho\\sigma} F_3(\\Box) C^{\\mu\\nu\\rho\\sigma}+\\mathcal{L}_{matter}\\bigg],\n\\end{equation}\nwhere $M_P$ is the Planck mass, $R$ is the scalar curvature, $R_{\\mu\\nu}$ is the Ricci tensor and $C_{\\mu\\nu\\rho\\sigma}$ is the Weyl tensor. The infinite derivative functions $F_i(\\Box)$ are given as\n\\begin{equation}\nF_i(\\Box)=\\sum_{n=1}^{\\infty}f_{i_n}\\frac{\\Box^n}{M^{2n}}, \n\\label{idfunc}\n\\end{equation} \nwhich are functions of the d'Alembartian operator. Here, $f_{i_n}$ are dimensionless coefficients and $M$ is the mass scale of non-locality. The linearized field equations around a Minkowski background of $g_{\\mu\\nu}=\\eta_{\\mu\\nu}+h_{\\mu\\nu}$ reads\\footnote{We will work with the mostly plus signature $\\eta_{\\mu\\nu}={\\it{diag}}(-1,1,1,1)$.} \\cite{Biswas1}\n\\begin{equation}\na(\\Box)R^L_{\\mu\\nu}-\\frac{1}{2}\\eta_{\\mu\\nu}c(\\Box)R^L-\\frac{1}{2}f(\\Box)\\partial_\\mu\\partial_\\nu R^L=\\kappa T_{\\mu\\nu},\n\\label{FeqIDG}\n\\end{equation}\nwhere $L$ refers to linearization and non-linear functions are defined as\n\\begin{equation}\n\\begin{aligned}\n &a(\\Box) =1 + M^{-2}_P \\left(F_2(\\Box) \n + 2 F_3(\\Box)\\right) \\Box, \\\\\n& c(\\Box) = 1 - M^{-2}_P\\left(4 F_1(\\Box) + F_2(\\Box) \n - \\frac{2}{3}F_3(\\Box)\\right)\\Box,\\\\\n& f(\\Box) =M^{-2}_P \\left(4F_1(\\Box) + 2F_2(\\Box) +\\frac{4}{3} F_3(\\Box)\\right),\n \\end{aligned}\n \\label{relations}\n\\end{equation}\nwhich give the constraint $a(\\Box)-c(\\Box) = f(\\Box)\\Box$. After plugging the relevant linearized curvature tensors \\cite{deser_tekin} into (\\ref{FeqIDG}), one arrives at the linearized field equations\n\\begin{equation}\n\\begin{aligned}\n &\\frac{1}{2} \\bigg[a(\\Box)\\left(\\Box h_{\\mu\\nu} \n -\\partial_\\sigma\n \\left(\\partial_\\mu h^\\sigma_\\nu + \\partial_\\nu h^\\sigma_\\mu \\right)\\right)\n + c(\\Box) \\left(\\partial_\\mu \\partial_\\nu h + \\eta_{\\mu\\nu} \\partial_\\sigma \\partial_\\rho \n h^{\\sigma\\rho}-\\eta_{\\mu\\nu} \\Box h\\right)\\\\&+ f(\\Box) \\partial_\\mu \\partial_\\nu \n \\partial_\\sigma \\partial_\\rho \nh^{\\sigma\\rho}\\bigg]=-\\kappa T_{\\mu\\nu} .\n\\label{IDGfeq1}\n\\end{aligned}\n\\end{equation}\n If we set $a(\\Box)=c(\\Box)$, we recover the pure GR propagator in the large distance limit without introducing additional degrees of freedom. Then, in the de Donder gauge $\\partial_\\mu h^{\\mu\\nu} = \\frac{1}{2} \\partial^\\nu h$, the linearized field equations (\\ref{IDGfeq1}) take the following compact form\n\\begin{equation}\n a(\\Box){\\cal{G}}^L_{\\mu\\nu}=\\kappa T_{\\mu\\nu} , \n \\label{IDGfeq2}\n\\end{equation}\nwhere ${\\cal{G}}^L_{\\mu\\nu}$ is the linearized Einstein tensor defined as ${\\cal{G}}^L_{\\mu\\nu}=-\\frac{1}{2}(\\Box h_{\\mu\\nu}-\\frac{1}{2}\\eta_{\\mu\\nu}\\Box h)$. Manipulation of (\\ref{IDGfeq2}) yields\n \\begin{equation}\n a(\\Box) \\Box h_{\\mu\\nu}=-2\\kappa(T_{\\mu\\nu}-\\frac{1}{2}\\eta_{\\mu\\nu}T) ,\n \\label{IDGfeq3}\n\\end{equation}\nwhich is the equation that we shall work with.\n\nFrom now on, we consider the tree-level scattering amplitude between two spinning conserved point-like sources and find the corresponding weak field potential energy. 
To do that, one needs to first eliminate the non-physical degrees of freedom from the theory. For this purpose, let us consider the following decomposition of the spin-$2$ field\n\\begin{equation}\n h_{\\mu\\nu} \\equiv h^{TT}_{\\mu\\nu}+\\bar{\\nabla}_{(\\mu}V_{\\nu)}+\\bar{\\nabla}_\\mu \\bar{\\nabla}_\\nu \\phi+\\bar{g}_{\\mu\\nu} \\psi,\n \\label{dectth}\n \\end{equation}\n where $ h^{TT}_{\\mu\\nu} $ is the transverse-traceless part of the field, $ V_\\mu $ is the transverse helicity-$1$ mode and $ \\phi $ and $ \\psi $ are scalar helicity-$0$ components of the field. To obtain $\\psi $ in terms of field $h$, one needs to take the trace and double divergence of (\\ref{dectth}) to arrive at\n\\begin{equation}\n h=\\partial^2\\phi+4 \\psi, \\hskip 1 cm \\frac{1}{2}\\partial^2h=\\partial^4\\phi+\\partial^2\\psi,\n \\label{scaleinsstre}\n\\end{equation}\nwhere we used $\\partial^\\mu\\partial^\\nu h_{\\mu\\nu}=\\frac{1}{2}\\partial^2h$. Then, by using (\\ref{scaleinsstre}) and (\\ref{IDGfeq2}), one obtains\n \\begin{equation}\n \\psi = \\frac{\\kappa}{3}(a(\\Box)\\partial^2)^{-1}T.\n \\label{psittt}\n \\end{equation}\nOn the other hand, inserting (\\ref{dectth}) into (\\ref{IDGfeq2}) yields the wave-type equation\n \\begin{equation}\n h^{TT}_{\\rho\\nu}= -2\\kappa\\,{\\cal O}^{-1} T^{TT}_{\\rho\\nu},\n \\label{htt}\n \\end{equation}\nwhere the corresponding scalar Green's function is\n\\begin{equation}\n G ({\\bf x},{\\bf x}^{'},t,t^{'})={\\cal O}^{-1} \\equiv (a(\\Box)\\partial^2)^{-1}.\n\\end{equation}\nAccordingly, the tensor decomposition of energy momentum tensor $T_{\\rho\\nu} $ can be given as \\cite{Gullu:2009vy}\n\\begin{equation}\n T^{TT}_{\\rho\\nu}=T_{\\rho\\nu}-\\frac{1}{3} \\bar{g}_{\\rho\\nu}T+\\frac{1}{3} \\Big (\\bar{\\nabla}_\\rho \\bar{\\nabla}_\\nu \\Big ) \\times (\\bar{\\square} )^{-1} T.\n\\label{ttstrener}\n \\end{equation}\nRecall that the tree-level scattering amplitude between two sources via one graviton exchange is given by\n\\begin{equation}\n\\begin{aligned}\n {\\cal A}&=\\frac{1}{4} \\int d^4 x \\sqrt{-\\bar{g}} T^{'}_{\\rho\\nu}(x) h^{\\rho\\nu}(x) \\\\\n &=\\frac{1}{4} \\int d^4 x \\sqrt{-\\bar{g}} (T^{'}_{\\rho\\nu} h^{TT\\rho\\nu}+T^{'} \\psi ).\n \\label{scatdef}\n\\end{aligned}\n \\end{equation}\nConsequently, by plugging (\\ref{psittt}),(\\ref{htt}) and (\\ref{ttstrener}) into (\\ref{scatdef}), the scattering amplitude in a flat background can be obtained as follows\n\\begin{equation}\n \\begin{aligned}\n 4{\\cal A}=-2\\kappa T^{'}_{\\rho\\nu} {\\cal O}^{-1} T^{\\rho\\nu}+\\kappa T^{'}{\\cal O}^{-1}T,\n\\label{mainressct}\n\\end{aligned}\n\\end{equation}\nwhere the integral signs are suppressed for notational simplicity. Now, we are ready to compute the tree-level scattering amplitude for IDG between two covariantly conserved point-like spinning sources. For this purpose, let us consider the following localized spinning energy-momentum tensors \n\\begin{equation}\nT_{00}= m_a \\delta^{(3)}({\\bf x}-{\\bf x}_a),\\qquad T^{i}{_0}=-\\frac{1}{2} J^k_a \\epsilon^{ikj} \\partial_j \\delta^{(3)}({\\bf x}-{\\bf x}_a),\n\\label{sources}\n\\end{equation}\nwhere $m_a $ are the mass and $ J_a $ are the spin of the sources which have no dimension in our limits; here $a=1,2$.\nIn this respect, we want to solve the linearized IDG equations for the sources given in (\\ref{sources}). 
The scattering amplitude (\\ref{mainressct}) can be explicitly recast as\n\\begin{equation}\n4A \n =-2\\kappa T{}_{00}^{\\prime}\\left\\{ \\frac{1}{a(\\Box)\\partial^2}\\right\\} T^{00}\n+\\kappa T^{\\prime}\\left\\{ \\frac{1}{a(\\Box)\\partial^2}\\right\\} T \n+4\\kappa T{}_{0i}^{\\prime}\\left\\{ \\frac{1}{a(\\Box)\\partial^2}\\right\\} T^i\\,_{0}.\n\\label{scatflat2}\n\\end{equation}\nOn the other hand one must keep in mind that, to avoid ghosts, $a(\\Box)$ must be an entire function. For simplicity, let us choose $a(\\Box)=e^{-\\frac{\\Box}{M^2}}$ with which the main propagator can be computed as \n\\begin{equation}\nG({\\bf x}, {\\bf x}^{'}, t, t^{'})=\\frac{1}{4\\pi r} \\mbox{erf} (\\frac{Mr}{2})\\delta({\\bf x}-{\\bf x}'-(t-t')),\n\\label{decgreen1}\n \\end{equation}\nwhere $ r = \\lvert {\\bf x}_1-{\\bf x}_2 \\rvert$ and $\\mbox{erf} (r)$ is the error function defined by the integral\n\\begin{equation}\n\\mbox{erf} (r)=\\frac{2}{\\sqrt{\\pi}}\\int_0^r e^{-k^2}dk.\n\\end{equation}\nThus, by substituting (\\ref{decgreen1}) into (\\ref{scatflat2}) and carrying out the time integrals, one gets\n\\begin{equation}\n \\begin{aligned}\n4\\,{\\cal U}=&-2\\kappa m_1 m_2\\int d^3 x \\int d^3 x^{'}\\,\\,\\,\\delta^{(3)}({\\bf x}^{'}-{\\bf x}_2) \\hat{G}({\\bf x}, {\\bf x}^{'}) \\delta^{(3)}({\\bf x}-{\\bf x}_1)\\\\& + \\kappa m_1 m_2 \\int d^3 x \\int d^3 x^{'}\\,\\,\\, \\delta^{(3)}({\\bf x}^{'}-{\\bf x}_2) \\hat{G}({\\bf x}, {\\bf x}^{'}) \\delta^{(3)}({\\bf x}-{\\bf x}_1) \\\\&\n+\\kappa\\int d^3 x \\int d^3 x^{'}\\,\\,J_1^k\\,\\epsilon^{ikj} \\partial'_j \\delta^{(3)}({\\bf x}^{'}-{\\bf x}_2)\\hat{G}({\\bf x}, {\\bf x}^{'}) \\,J_2^l\\,\\epsilon^{ilm}\\partial_m\\delta^{(3)}({\\bf x}-{\\bf x}_1).\n \\end{aligned}\n\\end{equation}\nHere, the potential energy is $ {\\cal U}={\\cal A}\/t $ \\cite{Gullu-Tekin, Dengiz:2013hka} and $\\hat{G}({\\bf x}, {\\bf x}^{'})$ denotes the time-integrated scalar Green's function defined as \n\\begin{equation}\n \\hat{G}({\\bf x},{\\bf x}^{'})=\\int d t^{'} \\, G ({\\bf x},{\\bf x}^{'},t,t^{'})=\\frac{1}{4\\pi r} \\mbox{erf} (\\frac{Mr}{2}).\n\\end{equation}\nFinally, the Newtonian potential energy can be obtained as\n\\begin{equation}\n \\begin{aligned}\n{\\cal U}=&-\\frac{Gm_1m_2}{r}\\mbox{erf} (\\frac{Mr}{2})+\\frac{M^3}{2\\sqrt{\\pi}}e^{-\\frac{M^2r^2}{4}}G[J_1.J_2-(J_1.\\hat{r})(J_2.\\hat{r})]\\\\&-G[J_1.J_2-3(J_1.\\hat{r})(J_2.\\hat{r})]\\times\\bigg[\\frac{1}{r^3}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r^2}e^{-\\frac{M^2r^2}{4}}\\bigg].\n \\label{newtpot1}\n \\end{aligned}\n\\end{equation}\nObserve that the first term is the ordinary potential energy in IDG which was found in \\cite{Biswas1}, and the last two terms are the spin-spin part which could be attractive or repulsive depending on the choice of spin alignments. Let us now turn our attention to the small and large distance behaviors of potential energy. For the large separations as $r\\to \\infty$, $\\mbox{erf}(r)\\to 1$, $e^{-r^2}\\to 0$, then potential energy takes the form\n \\begin{equation}\n \\begin{aligned}\n{\\cal U}=-\\frac{Gm_1m_2}{r}-\\frac{G}{r^3}\\bigg( J_1.J_2-3(J_1.\\hat{r})(J_2.\\hat{r})\\bigg),\n \\label{newtpot11}\n \\end{aligned}\n\\end{equation}\nwhich reproduces the pure GR result \\cite{Gullu-Tekin} as expected. That is, the first term is the usual Newtonian potential energy, and the second one is the spin-spin interactions in GR. 
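\nAs a consistency check of the closed form (\\ref{decgreen1}) used in these limits (a sketch only, with the overall sign and normalization depending on the conventions chosen for the Green's function), note that for a static source, with $a(\\Box)=e^{-\\frac{\\Box}{M^2}}$ and $\\Box\\to -k^2$, the kernel is the Fourier integral\n\\begin{equation}\n\\int \\frac{d^3 k}{(2\\pi)^3}\\, \\frac{e^{i\\vec{k}\\cdot\\vec{r}}}{k^2}\\,e^{-k^2\/M^2}=\\frac{1}{2\\pi^2 r}\\int_0^\\infty \\frac{\\sin(kr)}{k}\\,e^{-k^2\/M^2}\\,dk=\\frac{1}{4\\pi r}\\,\\mbox{erf}\\Big(\\frac{Mr}{2}\\Big),\n\\end{equation}\nwhich reduces to the usual $1\/(4\\pi r)$ kernel for $Mr\\gg 1$, consistent with the recovery of the GR results above, while it remains finite as $r\\to 0$.\n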
Then the potential energy reads\n\\begin{equation}\n \\begin{aligned}\n{\\cal U}=&-\\frac{ G m_1m_2 M}{\\sqrt{\\pi}}+\\frac{GM^3}{3\\sqrt{\\pi}}J_1.J_2+{\\cal{O}}(r^2).\n \\label{newtpot12}\n \\end{aligned}\n\\end{equation}\nHere, the ordinary Newtonian potential term and the spin-spin interaction term in (\\ref{newtpot12}) are constant, and hence the potential is not singular at the origin. In GR, the spin-spin part diverges as $\\sim -\\frac{1}{r^3}$ \\cite{Gullu-Tekin}, whereas in the IDG, this part is non-singular. Although the potential energy is generated by matter sources with Dirac delta function singularities, it is regular due to the non-locality. Thus, in the IDG, not only the usual Newtonian potential but also the spin-spin part remains regular as $r\\to 0$. Therefore, the theory has improved small-scale behavior.\n\n\\section{Further gravitomagnetic effects in IDG}\nIn the previous section, we have shown that both the usual Newtonian potential and the spin-spin terms are finite at the origin. This is a remarkable result, but one can ask whether further gravitomagnetic effects such as spin-orbit interactions also have non-singular behavior. To answer this question, let us turn our attention to the tree-level scattering amplitude between two spinning sources that also have velocities and orbital motion. For this purpose, let us consider the following energy-momentum tensors \\cite{Weinberg}:\n\\begin{eqnarray} \n\tT_{00}=T^{\\left(0\\right)}_{00}+T^{\\left(2\\right)}_{00}, \\hspace{10 mm}\n\tT_{i0}=T^{\\left(1\\right)}_{i0}, \\hspace{10 mm}\n\tT_{ij}=T^{\\left(2\\right)}_{ij} , \n\t\\label{Source_mom}\n\\end{eqnarray}\nwhere the relevant tensors are\n\\begin{eqnarray} \n\tT^{\\left(0\\right)}_{00}&=&m_a\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{a}\\right),\\nonumber \\\\\n\tT^{\\left(2\\right)}_{00}&=&\\frac{1}{2}m_a\\vec{v}_a^{2} \\delta^{(3)} \\left(\\vec{x}-\\vec{x}_{a}\\right)-\\frac{1}{2}J_a^{k}\\,v_a^{i}\\epsilon^{ikj}\\partial_{j}\\delta^{(3)} \\left(\\vec{x}\n\t-\\vec{x}_{a}\\right),\\nonumber \\\\\n\tT^{\\left(1\\right)}_{i0}&=&-m_a v_a^{i} \\delta^{(3)} \\left(\\vec{x}-\\vec{x}_{a}\\right)+\\frac{1}{2}J_a^{k}\\,\\epsilon^{ikj}\\partial_{j}\\delta^{(3)}\\left(\\vec{x}\n\t-\\vec{x}_{a}\\right),\\nonumber \\\\\n\tT^{\\left(2\\right)}_{ij}&=&m_av_a^{i}v_a^{j}\\delta^{(3)} \\left(\\vec{x}-\\vec{x}_{a}\\right)+J_a^{l}v_a^{(i}\\epsilon^{j)kl}\\partial_{k}\\delta^{(3)} \\left(\\vec{x}\n\t-\\vec{x}_{a}\\right). \n\t\\label{en_mom}\n\\end{eqnarray}\nHere, $\\vec{v}_{a}$ are the velocities of the particles as defined in the rest frame, and the parentheses in $v_a^{(i}\\epsilon^{j)kl}$ denote symmetrization over the indices $i$ and $j$. We shall work in the small-velocity and small-spin limits, in other words up to $O(v^{2})$ and $O(vJ)$.
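For later use, let us also record the trace of (\\ref{en_mom}) to this order; it is not displayed separately in the text, but it follows from $T=\\eta^{\\mu\\nu}T_{\\mu\\nu}=-T_{00}+T_{ii}$ with the signature conventions implied by the Appendix, and it is equivalent to the bracket employed there:\n\\begin{equation}\nT_a=-m_a\\Big(1-\\frac{\\vec{v}_a^{2}}{2}\\Big)\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{a}\\right)-\\frac{1}{2}J_a^{k}\\,v_a^{i}\\epsilon^{ikj}\\partial_{j}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{a}\\right)+{\\cal{O}}(v^{3}).\n\\end{equation}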
In this respect, the scattering amplitude (\\ref{mainressct}) turns into\n\n\\begin{equation}\n\t4{\\cal A}=-2\\kappa T{}_{00}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{00} -4\\kappa T{}_{0i}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{0i} \n\t-2\\kappa T{}_{ij}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{ij}+\\kappa T^{\\prime}( a(\\Box)\\partial^{2})^{-1}T,\n\t\\label{sctIDG}\n\\end{equation}\nwhere the integral signs are suppressed and $( a(\\Box)\\partial^{2})^{-1}$ denotes the scalar Green's function given in (\\ref{decgreen1}). To find the weak-field potential energy for the sources given in (\\ref{Source_mom}), let us calculate the amplitude by working out each term in (\\ref{sctIDG}) separately. After evaluating the relevant integrals, the energy density interaction term takes the form\n\\begin{equation}\n\\begin{aligned}\n\t-2\\kappa T{}_{00}( a(\\Box)\\partial^{2})^{-1}T^{\\prime 00}&=-2\\kappa \\bigg [ \\frac{m_{1}m_{2}}{4\\pi r}\\bigg (1+\\frac{\\vec{v}^{2}_{1}+\\vec{v}^{2}_{2}}{2}\\bigg )\\mbox{erf} (\\frac{Mr}{2})\\\\&+\\frac{1}{4\\pi}\\bigg(\\frac{1}{ r^{2}}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi} r}e^{-\\frac{M^2r^2}{4}}\\bigg)\n\t\\bigg (\\frac{m_{1}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{2}}}{2}-\\frac{m_{2}(\\hat{r}\\times \\vec{v}_{1} )\\cdot\\vec{J_{1}}}{2} \\bigg )\\bigg ]t.\n\\end{aligned}\n\\end{equation}\n\nHere, we have dropped the term which includes higher-order contributions of $O(J^{2}v^{2})$. On the other hand, the trace-trace interaction term yields \n\\begin{equation}\n\\begin{aligned}\n\t\\kappa T^{\\prime}( a(\\Box)\\partial^{2})^{-1}T=&\t\\kappa\\bigg [\\frac{m_{1}m_{2}}{4\\pi r}\\bigg(1+\\frac{-\\vec{v}^{2}_{1}-\\vec{v}^{2}_{2}}{2}\\bigg)\\mbox{erf} (\\frac{Mr}{2})\\\\&\n\t+\\frac{1}{4\\pi}\\bigg(\\frac{1}{ r^{2}}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi} r}e^{-\\frac{M^2r^2}{4}}\\bigg)\\bigg (\n\t-\\frac{m_{1}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{2}}}{2}+\n\t\\frac{m_{2}(\\hat{r}\\times \\vec{v}_{1})\\cdot\\vec{J_{1}}}{2} \\bigg )\\bigg ] t.\n\\end{aligned}\n\\end{equation}\nSimilarly, the $T{}_{0i}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{0i}$ term becomes \n\\begin{equation}\n\\begin{aligned}\n-4\\kappa T{}_{0i}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{0i}&=-4\\kappa \\bigg [ -\\dfrac{m_{1}m_{2}\\vec{v}_{1}\\cdot\\vec{v}_{2} }{4\\pi r}\\mbox{erf} (\\frac{Mr}{2}) \\\\&+\\frac{1}{8\\pi}\\bigg(\\dfrac{1}{r^{2}} \\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r}\ne^{-\\frac{M^2r^2}{4}}\\bigg) \\bigg ( -m_{1}(\\hat{r}\\times \\vec{v}_{1} )\\cdot \\vec{J}_{2}+m_{2}(\\hat{r}\\times \\vec{v}_{2} )\\cdot \\vec{J}_{1}\\bigg)\\\\& - \\dfrac{1}{16 \\pi }\\bigg( \\frac{M^3}{2\\sqrt{\\pi}}e^{-\\frac{M^2r^2}{4}}[J_1.J_2-(J_1.\\hat{r})(J_2.\\hat{r})]\\\\&-[J_1.J_2-3(J_1.\\hat{r})(J_2.\\hat{r})]\\times[\\frac{1}{r^3}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r^2}e^{-\\frac{M^2r^2}{4}}]\\bigg)\\bigg]t.\n\\end{aligned}\n\\end{equation}\nNote that, as the $T{}_{ij}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T{}^{ij}$ term in (\\ref{sctIDG}) contributes only at higher order, it has been dropped.\nConsequently, by using all these results, the potential energy in IDG takes the form\n\n\\begin{equation}\n\\begin{aligned}\nU_{IDG} =& -\\frac{G}{r} m_{1}m_{2} \\left [ 1+\\frac{3}{2}\\vec{v}^{2}_{1}+\\frac{3}{2}\\vec{v}^{2}_{2}-4\\vec{v}_{1}\\cdot \\vec{v}_{2} \\right ] \\mbox{erf} (\\frac{Mr}{2})\n+\\frac{M^3}{2\\sqrt{\\pi}}e^{-\\frac{M^2r^2}{4}}G[J_1.J_2-(J_1.\\hat{r})(J_2.\\hat{r})]\\\\&-G[J_1.J_2-3(J_1.\\hat{r})(J_2.\\hat{r})]\\times\\bigg[\\frac{1}{r^3}\\mbox{erf} 
(\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r^2}e^{-\\frac{M^2r^2}{4}}\\bigg]\\\\\n&-G\\bigg(\\frac{1}{r^{2}}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r} e^{-\\frac{M^2r^2}{4}}\\bigg) \\bigg [ \\frac{3m_{1}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{2}}}{2}-\\frac{3m_{2}(\\hat{r}\\times \\vec{v}_{1})\\cdot\\vec{J_{1}}}{2}\\\\&-2m_{1}(\\hat{r}\\times \\vec{v}_{1})\\cdot\\vec{J_{2}}+2m_{2}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{1}}\n\t\\bigg ]. \\label{IDGgrpe}\n\t\\end{aligned}\n\\end{equation}\nObserve that the potential energy contains the ordinary Newtonian part together with the spin-spin and spin-orbit interactions. \nFor large separations, $r\\to \\infty$, the potential energy becomes\n \\begin{equation}\n \\begin{aligned}\nU =& -\\frac{G}{r} m_{1}m_{2} \\left [ 1+\\frac{3}{2}\\vec{v}^{2}_{1}+\\frac{3}{2}\\vec{v}^{2}_{2}-4\\vec{v}_{1}\\cdot \\vec{v}_{2} \\right ]\n-\\frac{G}{r^{3}}\\left[\\vec{J_{1}}\\centerdot\\vec{J_{2}}-3\\vec{J_{1}}\\centerdot\\hat{r}\\,\n\\vec{J_{2}}\\centerdot\\hat{r}\\right] \\\\\n&-\\frac{G}{r^{2}} \\left [ \\frac{3m_{1}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{2}}}{2}-\\frac{3m_{2}(\\hat{r}\\times \\vec{v}_{1})\\cdot\\vec{J_{1}}}{2}-2m_{1}(\\hat{r}\\times \\vec{v}_{1})\\cdot\\vec{J_{2}}+2m_{2}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{1}}\n\t\\right ], \n\\end{aligned}\t\n \\end{equation}\nwhich matches the pure GR result \\cite{Tasseten}, as expected. That is, the potential energy contains the usual Newtonian potential energy and relativistic corrections. On the other hand, for small distances, the potential energy reduces to\n\\begin{equation}\n \\begin{aligned}\n{\\cal U}=&-\\frac{ G m_1m_2 M}{\\sqrt{\\pi}}\\bigg[ 1+\\frac{3}{2}\\vec{v}^{2}_{1}+\\frac{3}{2}\\vec{v}^{2}_{2}-4\\vec{v}_{1}\\cdot \\vec{v}_{2} \\bigg]+\\frac{GM^3}{3\\sqrt{\\pi}}J_1.J_2+{\\cal{O}}(r).\n \\label{newtpot121}\n \\end{aligned}\n\\end{equation}\nHere, the ordinary Newtonian potential term and the spin-spin interaction term in (\\ref{newtpot121}) are constant, while the spin-orbit interaction terms contribute at order ${\\cal{O}}(r)$. Therefore, the potential is regular at the origin. Thus, in the IDG, not only the usual Newtonian potential but also the spin-spin and spin-orbit interactions become regular as one approaches $r\\to 0$. These non-singular results in IDG show that the theory is very well-behaved in the UV region compared to GR.\n\n\\section{Conclusions and further discussions}\nWe have considered IDG in a $3+1$-dimensional flat background. We computed the tree-level scattering amplitude in IDG and accordingly found the weak-field potential energy between two point-like spinning sources interacting via one-graviton exchange. We have demonstrated that at large distances the potential energy is the same as the GR result, whereas at small distances it is distinctly different from GR. We have also shown that both the ordinary Newtonian potential energy and the spin-spin term remain finite in the small-distance limit ($r\\to 0$). Furthermore, in addition to the spin-spin interactions, we studied the spin-orbit interactions in IDG by allowing the sources to move. We found that not only mass-mass but also spin-spin and spin-orbit interactions are non-singular and finite at the origin. That is, the gravitational potential energy of moving, spinning sources is non-singular in IDG. Consequently, the theory is very well behaved in the UV regime as compared to GR. 
\n\nNow, we would like to discuss the effects of the mass scale of non-locality ($M$) on the gravitational memory effect. Gravitational waves, induced by mergers of neutron stars or black holes, create a permanent effect on a system composed of inertial test particles. In other words, a pulse of gravitational waves changes the relative displacements of the test particles. This effect is called the gravitational memory effect and comes in two forms: ordinary (or linear) \\cite{zeldovich} and null (or\nnon-linear) \\cite{Christodoulou}. Studies of the gravitational memory effect have recently received increasing attention in various aspects \\cite{Garfinkle,Satishchandran:2017pek,Tolish1,Tolish2,Bieri2,Gibbons,Kilicarslan}, since there is hope that it could be measured by Advanced LIGO. To calculate the gravitational memory effect in IDG in a flat spacetime, we can follow the method of \\cite{Satishchandran:2017pek, Garfinkle}: we first solve the geodesic deviation equation and then integrate it twice to find the relative separation of the test particles. Without giving the details, we quote the final result:\n\\begin{equation}\n\\begin{aligned}\n\\Delta\\xi^{i}&\n=\\frac{1}{r} \\mbox{erf}(\\frac{Mr}{2})\\Delta_j^i\\Theta(U)\\xi^{j},\n\\label{memoryeffect}\n\\end{aligned}\n\\end{equation} \nwhere $\\Theta$ is the step function, $\\xi$ is a spatial separation vector, and $\\Delta_j^i$ are the spatial components of the memory tensor (see Eq. (45) in \\cite{Garfinkle} for the memory tensor). This result shows that the test particles undergo a non-trivial change in their separations, which is described by the memory tensor. Observe that the memory depends on the mass scale of non-locality and differs from GR. In the large-distance limit, the memory takes the usual Einsteinian form, as expected. Furthermore, for the lower bound on the mass scale of non-locality ($M>4\\,{\\rm keV}$) \\cite{Edholm:2018qkc}, the memory reduces to the GR prediction above except at very small distances.\n\n\\section{\\label{ackno} Acknowledgements}\nWe would like to thank B. Tekin for useful discussions, suggestions and comments. We would also like to thank S. Dengiz and J. Edholm for suggestions and critical readings of the manuscript.\n\n\\section{Appendix: Details of the Calculations}\nIn this part, we would like to give the details of the scattering amplitude calculations of Sec. III. Before going into further details, let us give the following identities:\n\\begin{align}\n\\partial_{k}r= \\dfrac{(x^{k}-x^{\\prime k})}{r}=\\hat{r}^{k}, \\hspace{1cm} \\partial_{k}\\dfrac{1}{r}=\\dfrac{-(x^{k}-x^{\\prime k})}{r^{3}}=\\dfrac{-\\hat{r}^{k}}{r^{2}}, \\nonumber \\\\\n\\partial_{k^{\\prime}}r= \\dfrac{-(x^{k}-x^{\\prime k})}{r}=-\\hat{r}^{k}, \\hspace{1cm} \\partial_{k^{\\prime}}\\dfrac{1}{r}=\\dfrac{(x^{k}-x^{\\prime k})}{r^{3}}=\\dfrac{\\hat{r}^{k}}{r^{2}}, \\nonumber \\\\\n\\partial_{k}\\partial_{n^{\\prime}}r=\\dfrac{1}{r}\\left( -\\delta^{kn}+\\hat{r}^{k}\\hat{r}^{n}\\right) , \\hspace{1cm}\n\\partial_{k}\\partial_{n^{\\prime}}\\dfrac{1}{r}= \\dfrac{1}{r^{3}}\\left( \\delta^{kn}-3\\hat{r}^{k}\\hat{r}^{n}\\right),\\nonumber \\\\\n\\partial_{k}\\mbox{erf} (r)=\\frac{2}{\\sqrt{\\pi}}e^{-r^2}\\hat{r}^{k} , \\hspace{1cm}\n\\partial_{k^{\\prime}}\\mbox{erf} (r)=-\\frac{2}{\\sqrt{\\pi}}e^{-r^2}\\hat{r}^{k},\n\\end{align}\nwhich are needed for the computations. Let us now calculate the amplitude by working out each term in (\\ref{sctIDG}) separately. 
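In addition, since the same radial combination appears in all of the spin-dependent cross terms below, it is useful to record the gradient of the time-integrated Green's function; this identity is not listed above, but it follows from the chain rule together with the identities just given:\n\\begin{equation}\n\\partial_{k}\\hat{G}({\\bf x},{\\bf x}^{'})=\\partial_{k}\\bigg[\\frac{1}{4\\pi r}\\mbox{erf} (\\frac{Mr}{2})\\bigg]=-\\frac{\\hat{r}^{k}}{4\\pi}\\bigg(\\frac{1}{r^{2}}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r}e^{-\\frac{M^2r^2}{4}}\\bigg)=-\\partial_{k^{\\prime}}\\hat{G}({\\bf x},{\\bf x}^{'}).\n\\end{equation}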
The energy density interaction term becomes\n\n\\begin{equation}\n\\begin{aligned}\n\tT{}_{00}(a(\\Box)\\partial^{2})^{-1}T^{\\prime 00}=&\\bigg [ m_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)+\\frac{1}{2}m_{1}\\vec{v}^{2}_{1} \\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right) -\\frac{1}{2}J_{1}^{l}\\,v^{i}_{1}\\epsilon^{ilk}\\partial_{k}\\delta^{(3)}\\left(\\vec{x}\n\t-\\vec{x}_{1}\\right)\\bigg ] (a(\\Box)\\partial^{2})^{-1} \\\\\n\t&\n\t\\bigg [ m_{2}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)+\\frac{1}{2}m_{2}\\vec{v}^{2}_{2} \\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right) -\\frac{1}{2}J_{2}^{m}\\,v^{j}_{2}\\epsilon^{jmn}\\partial_{n}^{\\prime}\\delta^{(3)}\\left(\\vec{x^{\\prime}}\n\t-\\vec{x}_{2}\\right)\\bigg ],\n\t\\end{aligned}\n\\end{equation}\nwhere each distinct term reads\n\\begin{equation}\n\tm_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)( a(\\Box)\\partial^{2})^{-1}m_{2}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)=\\frac{m_{1}m_{2}}{4\\pi r}\\mbox{erf} (\\frac{Mr}{2})t,\n\\end{equation}\n\n\n\\begin{equation}\n\tm_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)( a(\\Box)\\partial^{2})^{-1}\\frac{1}{2}m_{2}\\vec{v}^{2}_{2} \\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)=\\frac{1}{2}\\frac{m_{1}m_{2}\\vec{v}^{2}_{2}}{4\\pi r}\\mbox{erf} (\\frac{Mr}{2})t,\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\t-\\frac{1}{2}m_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)( a(\\Box)\\partial^{2})^{-1}J_{2}^{m}\\,v^{j}_{2}\\epsilon^{jmn}\\partial_{n}^{\\prime}\\delta^{(3)}\\left(\\vec{x^{\\prime}}\n\t-\\vec{x}_{2}\\right) \n\t=&\\frac{1}{2}\n\t\\frac{m_{1}(\\hat{r}\\times \\vec{v}_{2}).\\vec{J_{2}}}{4\\pi r^{2}}\\mbox{erf} (\\frac{Mr}{2})t\\\\&-\\frac{M}{2\\sqrt{\\pi}}e^{-\\frac{M^2r^2}{4}}\n\t\\frac{m_{1}(\\hat{r}\\times \\vec{v}_{2}).\\vec{J_{2}}}{4\\pi r}t,\t\n\t\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\frac{1}{2}m_{1}\\vec{v}^{2}_{1} \\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)( a(\\Box)\\partial^{2})^{-1}m_{2}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)=\\frac{1}{2}\\frac{m_{1}m_{2}\\vec{v}^{2}_{1}}{4\\pi r}\\mbox{erf} (\\frac{Mr}{2})t,\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\t-\\frac{1}{2}J_{1}^{l}\\,v^{i}_{1}\\epsilon^{ilk}\\partial_{k}\\delta^{(3)}\\left(\\vec{x}\n\t-\\vec{x}_{1}\\right) ( a(\\Box)\\partial^{2})^{-1} m_{2}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)\n\t=&-\\frac{1}{2}\n\t\\frac{m_{2}(\\hat{r}\\times \\vec{v}_{1}).\\vec{J_{1}}}{4\\pi r^{2}}\\mbox{erf} (\\frac{Mr}{2})t\\\\&+\\frac{M}{2\\sqrt{\\pi}}\n\t\\frac{m_{2}(\\hat{r}\\times \\vec{v}_{1}).\\vec{J_{1}}}{4\\pi r}e^{-\\frac{M^2r^2}{4}}t,\n\t\\end{aligned}\n\\end{equation}\nCombining these terms, one ultimately gets\n\\begin{equation}\n\\begin{aligned}\n\t-2\\kappa T{}_{00}( a(\\Box)\\partial^{2})^{-1}T^{\\prime 00}&=-2\\kappa \\bigg [ \\frac{m_{1}m_{2}}{4\\pi r}\\mbox{erf} (\\frac{Mr}{2})\\bigg (1+\\frac{\\vec{v}^{2}_{1}+\\vec{v}^{2}_{2}}{2}\\bigg )\\\\&+\\frac{1}{4\\pi}\\bigg(\\frac{1}{r^{2}}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi} r}e^{-\\frac{M^2r^2}{4}}\\bigg)\n\t\\bigg (\\frac{m_{1}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{2}}}{2}-\\frac{m_{2}(\\hat{r}\\times \\vec{v}_{1} )\\cdot\\vec{J_{1}}}{2} \\bigg )\\bigg ]t.\n\\end{aligned}\n\\end{equation}\n
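For clarity, we note how the spin-velocity cross terms above (and the analogous ones below) are obtained: the derivative acting on the delta function is moved onto $\\hat{G}$ by an integration by parts, after which the gradient identity for $\\hat{G}$ is used. For instance, the third distinct term above can be written as the intermediate step\n\\begin{equation}\n-\\frac{1}{2}m_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)( a(\\Box)\\partial^{2})^{-1}J_{2}^{m}\\,v^{j}_{2}\\epsilon^{jmn}\\partial_{n}^{\\prime}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)=\\frac{1}{2}m_{1}J_{2}^{m}\\,v^{j}_{2}\\epsilon^{jmn}\\,\\partial_{n}^{\\prime}\\hat{G}({\\bf x}_{1},{\\bf x}_{2})\\,t,\n\\end{equation}\nand then $\\epsilon^{jmn}v^{j}_{2}J_{2}^{m}\\hat{r}^{n}=(\\hat{r}\\times \\vec{v}_{2}).\\vec{J_{2}}$ together with the gradient identity above reproduces the result quoted in the text. This intermediate step is not written out in the original computation, but it is straightforward.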
\nOn the other hand, the trace-trace interaction term yields \n\\begin{equation}\n\\begin{aligned}\n\tT^{\\prime}( a(\\Box)\\partial^{2})^{-1}T= &\\bigg [ -m_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)+\\frac{1}{2}m_{1}\\vec{v}^{2}_{1} \\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right) -\\frac{1}{2}J_{1}^{l}\\,v^{i}_{1}\\epsilon^{ilk}\\partial_{k}\\delta^{(3)}\\left(\\vec{x}\n\t-\\vec{x}_{1}\\right)\\bigg ] ( a(\\Box)\\partial^{2})^{-1} \\\\\n\t&\\bigg [ -m_{2}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)+\\frac{1}{2}m_{2}\\vec{v}^{2}_{2} \\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right) -\\frac{1}{2}J_{2}^{m}\\,v^{j}_{2}\\epsilon^{jmn}\\partial_{n}^{\\prime}\\delta^{(3)}\\left(\\vec{x^{\\prime}}\n\t-\\vec{x}_{2}\\right)\\bigg ].\n\\end{aligned}\n\\end{equation}\nThen, by evaluating the relevant integrals, one eventually obtains \n\\begin{equation}\n\\begin{aligned}\n\t\\kappa T^{\\prime}( a(\\Box)\\partial^{2})^{-1}T=&\t\\kappa\\bigg [\\frac{m_{1}m_{2}}{4\\pi r}\\bigg(1+\\frac{-\\vec{v}^{2}_{1}-\\vec{v}^{2}_{2}}{2}\\bigg)\\mbox{erf} (\\frac{Mr}{2})\\\\&\n\t+\\bigg(\\frac{1}{4\\pi r^{2}}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{4\\pi^{\\frac{3}{2}} r}e^{-\\frac{M^2r^2}{4}}\\bigg)\\bigg (\n\t-\\frac{m_{1}(\\hat{r}\\times \\vec{v}_{2})\\cdot\\vec{J_{2}}}{2}+\n\t\\frac{m_{2}(\\hat{r}\\times \\vec{v}_{1})\\cdot\\vec{J_{1}}}{2} \\bigg )\\bigg ] t.\n\\end{aligned}\n\\end{equation}\nSimilarly, the $T{}_{0i}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{0i}$ term can be written as\n\\begin{equation}\n\\begin{aligned}\n\tT{}_{0i}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{0i}=&\\bigg [ -m_{1}v^{i}_{1}\\delta^{(3)}\\left(\\vec{x}-\\vec{x}_{1}\\right)+\\frac{1}{2}J_{1}^{k}\\,\\epsilon^{ikj}\\partial_{j}\\delta^{(3)}\\left(\\vec{x}\n\t-\\vec{x}_{1}\\right)\\bigg ](a(\\Box)\\partial^{2})^{-1}\\\\&\\times\\bigg [ m_{2}v^{i}_{2}\\delta^{(3)}\\left(\\vec{x^{\\prime}}-\\vec{x}_{2}\\right)-\\frac{1}{2}J_{2}^{l}\\,\\epsilon^{ilm}\\partial^{\\prime}_{m}\\delta^{(3)}\\left(\\vec{x^{\\prime}}\n\t-\\vec{x}_{2}\\right)\\bigg ],\n\\end{aligned}\n\\end{equation}\nwhich after lengthy and tedious calculations becomes\n\\begin{equation}\n\\begin{aligned}\n-4\\kappa T{}_{0i}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T^{0i}&=-4\\kappa \\bigg [ -\\dfrac{m_{1}m_{2}\\vec{v}_{1}\\cdot\\vec{v}_{2} }{4\\pi r}\\mbox{erf} (\\frac{Mr}{2}) \\\\&+\\frac{1}{8\\pi}\\bigg(\\dfrac{1}{ r^{2}} \\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi} r}\ne^{-\\frac{M^2r^2}{4}}\\bigg) \\bigg ( -m_{1}(\\hat{r}\\times \\vec{v}_{1} )\\cdot \\vec{J}_{2}+m_{2}(\\hat{r}\\times \\vec{v}_{2} )\\cdot \\vec{J}_{1}\\bigg)\\\\& - \\dfrac{1}{16 \\pi }\\bigg( \\frac{M^3}{2\\sqrt{\\pi}}e^{-\\frac{M^2r^2}{4}}[J_1.J_2-(J_1.\\hat{r})(J_2.\\hat{r})]\\\\&-[J_1.J_2-3(J_1.\\hat{r})(J_2.\\hat{r})]\\times(\\frac{1}{r^3}\\mbox{erf} (\\frac{Mr}{2})-\\frac{M}{\\sqrt{\\pi}r^2}e^{-\\frac{M^2r^2}{4}})\\bigg)\\bigg]t.\n\\end{aligned}\n\\end{equation}\nRecall that the $T{}_{ij}^{\\prime}( a(\\Box)\\partial^{2})^{-1}T{}^{ij}$ term contributes only at higher order.\nConsequently, by combining the results obtained above, the potential energy in IDG takes the form given in (\\ref{IDGgrpe}).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}