diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzqidr" "b/data_all_eng_slimpj/shuffled/split2/finalzzqidr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzqidr" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\\subsection{Notation}\nFor the response generation task,\nlet $M$ denote the input word sequence (message) $M=\\{m_1,m_2,...,m_I\\}$. %\n$R$ denotes the word sequence in response to $M$, where $R=\\{r_1,r_2,...,r_J,$ {\\it EOS}\\xspace{}\\} and $J$ is the length of the response (terminated by an {\\it EOS}\\xspace token). $r_t$ denotes a word token that is associated with a $K$ dimensional distinct word embedding $e_t$. $V$ is the vocabulary size. \n\n\n\\subsection{Speaker Model}\nOur first model is the Speaker Model, which \nmodels the respondent alone.\nThis model represents each individual speaker as a vector or embedding, which encodes \nspeaker-specific information (e.g., dialect, register, age, gender, personal information) that influences the content and style of her responses. Note that these attributes are not explicitly annotated, which would be tremendously expensive for our datasets. Instead, our model manages to cluster users along some of these traits (e.g., age, country of residence) based on the responses alone.\n\nFigure \\ref{fig1} gives a brief illustration of the Speaker Model. \nEach speaker $i\\in [1,N]$ is associated with a user-level representation $v_i\\in\\mathbb{R}^{K\\times 1}$. \n As in standard {{\\textsc{Seq2Seq}}}\\xspace models, we first encode message $S$ into a vector representation $h_S$ using the source LSTM. \nThen for each step in the target side, hidden units are obtained by \ncombining the representation produced by the target LSTM at the previous time step, the word representations at the current time step, and the speaker embedding $v_i$:\n\\begin{equation}\n\\left[\n\\begin{array}{lr}\ni_t\\\\\nf_t\\\\\no_t\\\\\nl_t\\\\\n\\end{array}\n\\right]=\n\\left[\n\\begin{array}{c}\n\\sigma\\\\\n\\sigma\\\\\n\\sigma\\\\\n\\text{tanh}\\\\\n\\end{array}\n\\right]\nW\\cdot\n\\left[\n\\begin{array}{c}\nh_{t-1}\\\\\ne_{t}^s\\\\\nv_i\\\\\n\\end{array}\n\\right]\n\\end{equation}\n\\begin{equation}\nc_t=f_t\\cdot c_{t-1}+i_t\\cdot l_t\n\\end{equation}\n\\begin{equation}\nh_{t}^s=o_t\\cdot \\text{tanh}(c_t)\n\\end{equation}\nwhere $W\\in \\mathbb{R}^{4K\\times 3K}$. \nIn this way, speaker information is encoded and \ninjected into the hidden layer at each time step and thus helps predict personalized responses throughout the generation process.\nThe Speaker embedding $\\{v_i\\}$ is shared across all conversations that involve speaker $i$. $\\{v_i\\}$ are learned by back propagating word prediction errors to each neural component during training. \n\nAnother useful property of this model is that it \nhelps {\\it infer} answers to questions even if the evidence is not readily present in the training set.\nThis is important as\nthe training data does not contain explicit \ninformation about every\nattribute of each user\n(e.g., gender, age, country of residence).\nThe model learns speaker representations based on conversational content produced by different speakers, and speakers producing similar responses tend to have similar embeddings, occupying nearby positions in the vector space. \nThis way, the training data of speakers nearby in vector space help increase the generalization capability of the\nspeaker model. 
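To make the gate computation in the equations above concrete, the following is a minimal NumPy sketch of a single decoder step (biases are omitted, as in the equations; the function and variable names are purely illustrative):
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def speaker_lstm_step(h_prev, c_prev, e_t, v_i, W):
    # h_prev, c_prev: previous hidden/cell states, shape (K,)
    # e_t: current word embedding, shape (K,)
    # v_i: embedding of speaker i, shape (K,)
    # W:   gate parameters, shape (4K, 3K)
    K = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, e_t, v_i])  # shape (4K,)
    i_t = sigmoid(z[0*K:1*K])   # input gate
    f_t = sigmoid(z[1*K:2*K])   # forget gate
    o_t = sigmoid(z[2*K:3*K])   # output gate
    l_t = np.tanh(z[3*K:4*K])   # candidate update
    c_t = f_t * c_prev + i_t * l_t
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
\end{verbatim}
The only change relative to a standard LSTM step is the extra block $v_i$ in the concatenated input, which is why $W$ has $3K$ columns rather than $2K$; the Speaker-Addressee Model below simply swaps $v_i$ for the interaction vector $V_{i,j}$.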
As an example of this generalization, consider two speakers $i$ and $j$ who sound distinctly British, and who are therefore close in speaker embedding space. Now, suppose that, in the training data, speaker $i$ was asked {\it Where do you live?} and responded {\it in the UK}. Even if speaker $j$ was never asked the same question, this answer can help shape a good response from speaker $j$, without any explicitly labeled geo-location information.

\subsection{Speaker-Addressee Model}
A natural extension of the Speaker Model is a model that is sensitive to speaker-addressee interaction patterns within the conversation. Indeed, speaking style, register, and content do not vary only with the identity of the speaker, but also with that of the addressee.
For example, in scripts for the TV series {\it Friends} used in some of our experiments, the character Ross often talks differently to his sister Monica than to Rachel, with whom he is engaged in an on-again off-again relationship throughout the series.

The proposed Speaker-Addressee Model operates as follows:
We wish to predict how speaker $i$ would respond to a message produced by speaker $j$. Similarly to the Speaker Model, we associate each speaker with a $K$-dimensional speaker-level representation, namely $v_i$ for user $i$ and $v_j$ for user $j$.
We obtain an interactive representation $V_{i,j}\in \mathbb{R}^{K\times 1}$ by linearly combining user vectors $v_i$ and $v_j$, in an attempt to model the interactive style of user $i$ towards user $j$:
\begin{equation}
V_{i,j}=\text{tanh}(W_1\cdot v_i+W_2\cdot v_j)
\end{equation}
where $W_1, W_2\in \mathbb{R}^{K\times K}$.
$V_{i,j}$ is then linearly incorporated into the LSTM model at each step on the target side:
\begin{equation}
\left[
\begin{array}{lr}
i_t\\
f_t\\
o_t\\
l_t\\
\end{array}
\right]=
\left[
\begin{array}{c}
\sigma\\
\sigma\\
\sigma\\
\text{tanh}\\
\end{array}
\right]
W\cdot
\left[
\begin{array}{c}
h_{t-1}\\
e_{t}^s\\
V_{i,j}\\
\end{array}
\right]
\end{equation}
\begin{equation}
c_t=f_t\cdot c_{t-1}+i_t\cdot l_t
\end{equation}
\begin{equation}
h_{t}^s=o_t\cdot \text{tanh}(c_t)
\end{equation}
$V_{i,j}$ depends on both speaker and addressee, and the same speaker will thus respond differently to a message from different interlocutors.
One potential issue with Speaker-Addressee modeling is the difficulty involved in collecting a large-scale training dataset in which each speaker is involved in conversation with a wide variety of people.
Like the Speaker Model, however, the Speaker-Addressee Model derives generalization capabilities from speaker embeddings.
Even if the two speakers at test time ($i$ and $j$) were never involved in the same conversation in the training data, two speakers $i'$ and $j'$ who are respectively close to them in embedding space may have been, and this can help model how $i$ should respond to $j$.

\subsection{Decoding and Reranking}
For decoding, the N-best lists are generated using the decoder with beam size \mbox{$B=200$}.
We set a maximum length of 20 for the generated candidates.
Decoding operates as follows: at each time step, we first examine all \mbox{$B\times B$} possible next-word candidates, and add all hypotheses ending with an {\it EOS}\xspace token to the N-best list. We then preserve the top-$B$ unfinished hypotheses and move to the next word position.

To deal with the issue that {{\textsc{Seq2Seq}}}\xspace models tend to generate generic and commonplace responses such as {\it I don't know}, we follow \newcite{li2015diversity} by reranking the generated N-best list using a scoring function that linearly combines a length penalty and the log likelihood of the source given the target:
\begin{equation}
\log p(R|M,v)+\lambda\log p(M|R)+\gamma |R|
\end{equation}
where $p(R|M,v)$ denotes the probability of the generated response given the message $M$ and the respondent's speaker ID.
$|R|$ denotes the length of the target and $\gamma$ denotes the associated penalty weight. We optimize $\gamma$ and $\lambda$ on N-best lists of response candidates generated from the development set with MERT \cite{mert}, optimizing {{\sc Bleu}}\xspace.
To compute $p(M|R)$, we train an inverse {{\textsc{Seq2Seq}}}\xspace model by swapping messages and responses; this inverse model is a standard {{\textsc{Seq2Seq}}}\xspace model with no speaker information.
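To make the reranking step concrete, here is a minimal Python sketch; the candidate representation and the two log-probability tables are assumptions made for the illustration, with \texttt{lam} and \texttt{gamma} corresponding to $\lambda$ and $\gamma$:
\begin{verbatim}
def rerank(nbest, lam, gamma, logp_fwd, logp_bwd):
    # nbest:    candidate responses, each a tuple of tokens
    # logp_fwd: logp_fwd[r] = log p(R|M,v), persona decoder
    # logp_bwd: logp_bwd[r] = log p(M|R), inverse Seq2Seq model
    # lam, gamma: weights tuned with MERT on the dev set
    def score(r):
        return logp_fwd[r] + lam * logp_bwd[r] + gamma * len(r)
    return sorted(nbest, key=score, reverse=True)  # best first
\end{verbatim}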
\subsection{Twitter Persona Dataset}
\paragraph{Data Collection}
Training data for the {Speaker Model} was extracted from the Twitter FireHose for the six-month period beginning January 1, 2012.
We limited the sequences to those where the responders had engaged in at least 60 (and at most 300) 3-turn conversational interactions during the period, in other words, users who reasonably frequently engaged in conversation. This yielded a set of 74,003 users who took part in a minimum of 60 and a maximum of 164 conversational turns (average: 92.24, median: 90).
The dataset extracted using responses by these ``conversationalists'' contained 24,725,711 3-turn sliding-window (context-message-response) conversational sequences.

In addition, we sampled 12,000 3-turn conversations from the same user set from the Twitter FireHose for the three-month period beginning July 1, 2012, and set these aside as development, validation, and test sets (4,000 conversational sequences each). Note that the development, validation, and test sets for this data are single-reference, which is by design: multiple reference responses would typically require acquiring responses from different people, which would confound different personas.

\paragraph{Training Protocols} We trained four-layer {{\textsc{Seq2Seq}}}\xspace models on the Twitter corpus following the approach of \cite{sutskever2014sequence}.
Details are as follows:
\begin{itemize}
\reduceVerticalSpace
\item 4-layer LSTM models with 1,000 hidden cells for each layer.
\item Batch size is set to 128.
\item Learning rate is set to 1.0.
\item Parameters are initialized by sampling from the uniform distribution $[-0.1,0.1]$.
\item Gradients are clipped, with a threshold of 5, to avoid gradient explosion.
\item Vocabulary size is limited to 50,000.
\item Dropout rate is set to 0.2.
\end{itemize}
Source and target LSTMs use different sets of parameters.
We ran 14 epochs, and training took roughly a month to finish on a Tesla K40 GPU machine.

As only speaker IDs of responses were specified when compiling the Twitter dataset, experiments on this dataset were limited to the {Speaker Model}.

\subsection{Twitter Sordoni Dataset}

The Twitter Persona Dataset was collected for this paper for experiments with speaker ID information.
To obtain a point of comparison with prior state-of-the-art work \cite{Sordoni2015,li2015diversity}, we measure our baseline (non-persona) LSTM model against prior work on the dataset of \cite{Sordoni2015}, which we call the Twitter Sordoni Dataset.
We only use its test-set portion, which contains responses for 2,114 context-message pairs.
It is important to note that the Sordoni dataset offers up to 10 references per message, while the Twitter Persona dataset has only 1 reference per message. Thus {{\sc Bleu}}\xspace scores cannot be compared across the two Twitter datasets ({{\sc Bleu}}\xspace scores on 10 references are generally much higher than with 1 reference).
Details of this dataset are in \cite{Sordoni2015}.

\subsection{Television Series Transcripts}
\paragraph{Data Collection} For the dyadic Speaker-Addressee Model we used scripts from the American television comedies {\it Friends}\footnote{\url{https://en.wikipedia.org/wiki/Friends}} and {\it The Big Bang Theory},\footnote{\url{https://en.wikipedia.org/wiki/The_Big_Bang_Theory}} available from the Internet Movie Script Database (IMSDb).\footnote{\url{http://www.imsdb.com}}
We collected the turns of 13 main characters from the two series, yielding a corpus of 69,565 turns.
We split the corpus into training/development/testing sets, with development and testing sets each of about 2,000 turns.

\paragraph{Training}
Since the relatively small size of the dataset does not allow for training an open-domain dialog model, we adopted a domain adaptation strategy: we first trained a standard {{\textsc{Seq2Seq}}}\xspace model on the much larger OpenSubtitles (OSDb) dataset \cite{tiedemann2009news}, and then adapted the pre-trained model to the TV series dataset.

The OSDb dataset is a large, noisy, open-domain dataset containing roughly 60M--70M scripted lines spoken by movie characters.
This dataset does not specify which character speaks each subtitle line, which prevents us from inferring speaker turns.
Following Vinyals et al.\ (2015), we make the simplifying assumption that each line of subtitles constitutes a full speaker turn.\footnote{This introduces a degree of noise, as consecutive lines are not necessarily from the same scene or from two different speakers.}
We trained standard {{\textsc{Seq2Seq}}}\xspace models on the OSDb dataset, following the protocols already described in Section 5.1, running 10 epochs over the training set.

We initialize word embeddings and LSTM parameters in the Speaker Model and the Speaker-Addressee Model using parameters learned from the OpenSubtitles dataset.
User embeddings are randomly initialized from $[-0.1,0.1]$.
We then ran 5 additional epochs until the perplexity on the development set stabilized.

\subsection{Evaluation}
Following \cite{Sordoni2015,li2015diversity}, we used {{\sc Bleu}}\xspace \cite{Papineni2002BLEU} for parameter tuning and evaluation.
{{\sc Bleu}}\xspace has been shown to correlate well with human judgment on the response generation task, as demonstrated in \cite{galley2015deltableu}.
Besides {{\sc Bleu}}\xspace scores, we also report perplexity as an indicator of model capability.

\begin{table}
\centering
\begin{tabular}{lc}
System & {{\sc Bleu}}\xspace \\ \hline
MT baseline \cite{ritter2011data} & 3.60\% \\ \hline
Standard LSTM MMI \cite{li2015diversity} & 5.26\% \\
Standard LSTM MMI (our system) & 5.82\% \\ \hline
{\it Human} & {\it 6.08\%}\\
\end{tabular}
\caption{{{\sc Bleu}}\xspace on the Twitter Sordoni dataset (10 references). We contrast our baseline against an SMT baseline \cite{ritter2011data}, and the best result \cite{li2015diversity} on the established dataset of \cite{Sordoni2015}.
The last result is for a human oracle, but it is not directly comparable, as the oracle {{\sc Bleu}}\xspace is computed in a leave-one-out fashion, having one less reference available.
We nevertheless provide this result to give a sense that these {{\sc Bleu}}\xspace scores of 5--6\% are not unreasonable.}
\label{twitter-baselines}
\end{table}

\subsection{Baseline}
Since our main experiments are with a new dataset (the Twitter Persona Dataset), we first show that our LSTM baseline is competitive with the state of the art \cite{li2015diversity} on an established dataset, the Twitter Sordoni Dataset \cite{Sordoni2015}.
Our baseline is simply our implementation of the LSTM-MMI of \cite{li2015diversity}, so results should be relatively close to their reported results.
Table~\ref{twitter-baselines} summarizes our results against prior work.
We see that our system actually does better than \cite{li2015diversity}; we attribute the improvement to a larger training corpus, the use of dropout during training, and possibly to the ``conversationalist'' nature of our corpus.

\begin{table}
\centering
\begin{tabular}{ccc}
Model&Standard LSTM&Speaker Model \\ \hline
Perplexity&47.2&42.2 ($-10.6\%$) \\
\end{tabular}
\caption{Perplexity for standard {{\textsc{Seq2Seq}}}\xspace and the Speaker model on the Twitter Persona development set.}
\label{twitter-per}
\end{table}
\begin{table}
\centering
\begin{tabular}{lll}
Model&Objective& {{\sc Bleu}}\xspace \\ \hline
Standard LSTM &MLE& 0.92\% \\
Speaker Model & MLE&1.12\% (+21.7$\%$) \\ \hline
Standard LSTM &MMI& 1.41\% \\
Speaker Model & MMI&1.66\% (+11.7$\%$) \\
\end{tabular}
\caption{{{\sc Bleu}}\xspace on the Twitter Persona dataset (1 reference), for the standard {{\textsc{Seq2Seq}}}\xspace model and the Speaker model using as objective either maximum likelihood (MLE) or maximum mutual information (MMI).}
\label{twitter-bleu}
\end{table}

\begin{table*}
\centering
\begin{tabular}{cccc}
Model&Standard LSTM&Speaker Model& Speaker-Addressee Model \\ \hline
Perplexity&27.3&25.4 ($-7.0\%$)& 25.0 ($-8.4\%$) \\
\end{tabular}
\caption{Perplexity for standard {{\textsc{Seq2Seq}}}\xspace and persona models on the TV series dataset.}
\label{tv-per}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{cccc}
Model&Standard LSTM&Speaker Model& Speaker-Addressee Model \\ \hline
MLE&1.60\%& 1.82\% ($+13.7\%$)& 1.83\% ($+14.3\%$) \\
MMI&1.70\%& 1.90\% ($+10.6\%$) &1.88\% ($+10.9\%$) \\ \hline
\end{tabular}
\caption{{{\sc Bleu}}\xspace on the TV series dataset (1 reference), for the standard {{\textsc{Seq2Seq}}}\xspace and persona models.}
\label{tv-bleu}
\end{table*}

\subsection{Results}
We first report performance on the Twitter Persona dataset.
Perplexity is reported in Table \ref{twitter-per}. We observe about a $10\%$ decrease in perplexity for the Speaker model over the standard {{\textsc{Seq2Seq}}}\xspace model.
In terms of {{\sc Bleu}}\xspace scores (Table~\ref{twitter-bleu}), a significant performance boost is observed for the Speaker model over the standard {{\textsc{Seq2Seq}}}\xspace model, yielding an increase of $21.7\%$ in the maximum likelihood (MLE) setting and $11.7\%$ in the mutual information (MMI) setting.
In line with findings in \cite{li2015diversity}, we observe a consistent performance boost introduced by the MMI objective function over a standard {{\textsc{Seq2Seq}}}\xspace model based on the MLE objective function.
It is worth noting that our persona models are more beneficial to the MLE models than to the MMI models. This result is intuitive, as the persona models help make standard LSTM MLE outputs more informative and less bland, and thus make the use of MMI less critical.

For the TV Series dataset, perplexity and {{\sc Bleu}}\xspace scores are reported in Table \ref{tv-per} and Table \ref{tv-bleu}, respectively.
As can be seen, the Speaker and Speaker-Addressee models achieve perplexity values of 25.4 and 25.0 on the TV series dataset, respectively $7.0\%$ and $8.4\%$ lower than the corresponding standard {{\textsc{Seq2Seq}}}\xspace model.
In terms of {{\sc Bleu}}\xspace score, we observe a performance boost similar to that on the Twitter dataset: the Speaker and Speaker-Addressee models outperform the standard {{\textsc{Seq2Seq}}}\xspace model by $13.7\%$ and $14.3\%$ under the MLE objective, and by $10.6\%$ and $10.9\%$ under MMI.
Comparing the Speaker-Addressee model against the Speaker model on the TV Series dataset, we do not observe a significant difference.
We suspect that this is primarily due to the relatively small size of the dataset, where the interactive patterns might not be fully captured.
Perplexity values on the Television Series dataset are smaller than on the Twitter dataset (where perplexity is over 40), presumably due to the noisier nature of Twitter dialogues.
\subsection{Qualitative Analysis}

\begin{table}
\setlength{\tabcolsep}{4pt}
\center
{\small
\begin{tabularx}{\linewidth}{rl} \hline
{{\it message}}\xspace & Where is your hometown?\\ \hline
{\it baseline} & I was born in Canada.\\ \hline
\User{1} & I'm from England.\\
\User{2} & I'm from Manchester.\\
\User{3} & I'm from Liverpool.\\
\User{4} & England. You?\\
\User{5} & My hometown.\\
\User{6} & I'm from Texas.\\
\User{7} & I'm from LA.\\
\User{8} & England.\\
\User{9} & I was born here.\\
\User{10} & I was born in the us.\\ \hline\hline
{{\it message}}\xspace & What company do you work for?\\ \hline
{\it baseline} & I work for a company.\\ \hline
\User{1} & I don't have a job.\\
\User{2} & I'm a manager.\\
\User{3} & I'm working. {\color{red} $\#$}\\
\User{4} & I work for a company. \\
\User{5} & I'm working. {\color{red} $\#$}\\
{\\color{red} $\\#$}\\\\\n\\User{6} & Customer service.\\\\\n\\User{7} & I work in retail.\\\\\n\\User{8} & I work in a cafe.\\\\\n\\User{9} & I work at a gas station.\\\\\n\\User{10} & I don't know {\\color{red} $\\#$} \\\\\\hline\\hline\n{{\\it message}}\\xspace & What did you have for dinner?\\\\\\hline\n{\\it baseline} & I had fish and chips.\\\\\\hline\n\\User{1} & I had spag bol.\\\\\n\\User{2} & Chicken and chips.\\\\\n\\User{3} & Chicken and rice.\\\\\n\\User{4} & Fish and chips.\\\\\n\\User{5} & I had spag bol.\\\\\n\\User{6} & I had Mexican food.\\\\\n\\User{7} & Salad...\\\\\n\\User{8} & I had chicken and chips.\\\\\n\\User{9} & I had spag bol.\\\\\n\\User{10} & Pizza.\\\\\\hline\n\\end{tabularx}\n}\n\\caption{Responses generated by the baseline (LSTM-MMI) and the Speaker Model for ten randomly selected users, without cherry picking. {\\color{red} $\\#$} indicates poor-quality responses produced by the system.}\n\\label{example1}\n\\end{table}\n\n\n\\paragraph{Diverse Responses by Different Speakers}\nTable \\ref{example1} represents responses generated by persona models in response to three different input questions. We randomly selected 10 speakers (without cherry-picking) from the original Twitter dataset. We collected their user level representations from a speaker look-up table and integrated them into the decoding models. The model tends to generate specific responses for different people in response to the factual questions.\\footnote{There appears to be a population bias in the training set that favors British users.} \n\nTable \\ref{addressees} shows responses generated from the {\\it Speaker-Addressee Model} using the TV-series dataset. Interestingly, we regularly observe \nthat this model is sensitive to the identity of the addressee, generating words specifically targeted at that addressee (e.g., her name).\nFor example, the model produces {\\it Of course, I love you, Emily} in response to an input from {\\it Emily}. Also, the model generates {\\it Of course I love you. ( kisses him)}, where the pronoun ``him'' accurately identifies the gender of the addressee. \n\n\n\\paragraph{Human Evaluation} We conducted a human evaluation of outputs from the Speaker Model, using \na crowdsourcing service. \nSince we cannot expect crowdsourced human judges to know or attempt to learn the ground truth of Twitter users who are not well-known public figures, we designed our experiment to evaluate the consistency of outputs associated with the speaker IDs. To this end, we collected 24 pairs of questions for which we would expect responses to be consistent if the persona model is coherent. For example, responses to the questions {\\it What country do you live in?} and {\\it What city do you live in?} would be considered consistent if the answers were {\\it England} and {\\it London} respectively, but not if they were {\\it UK} and {\\it Chicago}. Similarly, the responses to {\\it Are you vegan or vegetarian?} and {\\it Do you eat beef?} are consistent if the answers generated are {\\it vegan} and {\\it absolutely not}, but not if they are {\\it vegan} and {\\it I love beef}. We collected 20 pairs of outputs for randomly-selected personas provided by the Speaker Model for each question pair (480 response pairs total). We also obtained the corresponding outputs from the baseline MMI-enhanced {{\\textsc{Seq2Seq}}}\\xspace system. 
\n\n\\begin{table}\n\\setlength{\\tabcolsep}{4pt}\n\\center\n{\\small\n\\begin{tabularx}{\\linewidth}{rl} \\hline\n{{\\it message}}\\xspace & Do you love me?\\\\\\hline\n{\\it baseline} & Of course, I love you. \\\\\\hline\n& {\\it addressee: {\\it Emily}, speaker: Sheldon}\\\\\n{\\it response} & Of course, I love you, Emily.\\\\\\hline\n& {\\it addressee: {\\it Emily}, speaker: Penny}\\\\\n{\\it response} & Of course, I love you, Emily.\\\\\\hline\n& {\\it addressee: {\\it Leonard}, speaker: Penny}\\\\\n{\\it response} & Of course, I love you, Leonard.\\\\\\hline\n& {\\it addressee: {\\it Monica}, speaker: Chandler}\\\\\n{\\it response} & Of course I love you. I want to marry you.\\\\\\hline\n& {\\it addressee: {\\it Rachel}, speaker: Chandler}\\\\\n{\\it response} & Of course I love you.\\\\\\hline\n& {\\it addressee: {\\it Ross}, speaker: Chandler}\\\\\n{\\it response} & Of course I love you.\\\\\\hline\n& {\\it addressee: {\\it Ross}, speaker: Rachel}\\\\\n{\\it response} & Of course I love you. (kisses him) \\\\\\hline\n& {\\it addressee: {\\it Emily}, speaker: Rachel}\\\\\n{\\it response} & Of course I love you. \\\\\\hline\n\\end{tabularx}\n}\n\\caption{Responses to {\\it Do you love me?} from the {\\it Speaker-Addressee} model on the TV-series dataset using different addressees and speakers.}\n\\label{addressees}\n\\end{table}\n\n\\begin{table}\n\\setlength{\\tabcolsep}{4pt}\n\\center\n{\\small\n\\begin{tabularx}{\\linewidth}{rl} \\hline\n{\\bf User1} &\\\\\\hline\n{{\\it message}}\\xspace & Where is your hometown?\\\\\n{{\\it response}}\\xspace & I'm from England.\\\\\n{{\\it message}}\\xspace & Where are you from?\\\\\n{{\\it response}}\\xspace & I'm from England.\\\\\n{{\\it message}}\\xspace & In which city do you live now?\\\\\n{{\\it response}}\\xspace & I live in London.\\\\\n{{\\it message}}\\xspace & In which country do you live now?\\\\\n{{\\it response}}\\xspace & I live in England.\\\\\\hline\n{{\\it message}}\\xspace & What is your major?\\\\\n{{\\it response}}\\xspace & Business. You?\\\\\n{{\\it message}}\\xspace & What did you study in college?\\\\\n{{\\it response}}\\xspace & I did business studies.\\\\\\hline\n{{\\it message}}\\xspace & How old are you?\\\\\n{{\\it response}}\\xspace & I'm 18.\\\\\n{{\\it message}}\\xspace & What is your age?\\\\\n{{\\it response}}\\xspace & I'm 18.\\\\\\hline\\hline\n{\\bf User2} \\\\\\hline\n{{\\it message}}\\xspace & Where is your hometown? 
\\\\\n{{\\it response}}\\xspace & I live in Indonesia.\\\\\n{{\\it message}}\\xspace & Where are you from?\\\\\n{{\\it response}}\\xspace & England, you?\\\\\n{{\\it message}}\\xspace & In which city do you live now?\\\\\n{{\\it response}}\\xspace & I live in Jakarta.\\\\\n{{\\it message}}\\xspace & In which country do you live now?\\\\\n{{\\it response}}\\xspace & I live in Indonesia.\\\\\\hline\n{{\\it message}}\\xspace & What is your major?\\\\\n{{\\it response}}\\xspace & Business, you?\\\\\n{{\\it message}}\\xspace & What did you study in college?\\\\\n{{\\it response}}\\xspace & Psychology, you?\\\\\\hline\n{{\\it message}}\\xspace & How old are you?\\\\\n{{\\it response}}\\xspace & I'm 18.\\\\\n{{\\it message}}\\xspace & What is your age?\\\\\n{{\\it response}}\\xspace & I'm 16.\\\\\\hline%\n\\end{tabularx}\n}\n\\caption{Examples of speaker consistency and inconsistency generated by the Speaker Model}\n\\label{example2}\n\\end{table}\n\n\nSince our purpose is to measure the gain in consistency over the baseline system, we presented the pairs of answers system-pairwise, i.e., 4 responses, 2 from each system, displayed on the screen, and asked judges to decide which of the two systems was more consistent. The position in which the system pairs were presented on the screen was randomized. \nThe two systems were judged on 5-point zero-sum scale, assigning a score of 2 (-2) if one system was judged more (less) consistent than the other, and 1 (-1) if one was rated ``somewhat'' more (less) consistent. Ties were assigned a score of zero. Five judges rated each pair and their scores were averaged and remapped into 5 equal-width bins. After discarding ties, we found the persona model was judged either ``more consistent'' or ``somewhat more consistent'' in 56.7\\% of cases. If we ignore the ``somewhat more consistent'' judgments, the persona model wins in 6.1\\% of cases, compared with only 1.6\\% for the baseline model. \nIt should be emphasized that the baseline model is a strong baseline, \n\\begin{comment}\nThe two systems were judged on a 5-point scale, assigning a score of 2 (respectively -2) if the persona system was judged much more (respectively less) consistent than the baseline, 1 (respectively -1) if ``mostly'' more (respectively less) consistent, and 0 otherwise. \nFive judges rated each pair, and scores were averaged across judges.\\footnote{To turn this average back into a 5-point scale, we mapped average scores into 5 equal-size bins.}\nAfter removing ties\nwe found that\nthe persona model (respectively baseline) was judged either ``more consistent'' or ``somewhat more consistent'' than the baseline (respectively persona model) in 56.7\\% (respectively 43.3\\%) of the cases. Ignoring ``somewhat more consistent\" judgments, the persona model is more consistent in 6.0\\% of the cases, while the baseline only 1.6\\% of the time.\nIt should be stressed that the latter is a strong baseline, \n\\end{comment}\nsince it represents the consensus of all 70K Twitter users in the dataset\\footnote{{\\it I'm not pregnant} is an excellent consensus answer to the question {\\it Are you pregnant?}, while {\\it I'm pregnant} is consistent as a response only in the case of someone who also answers the question {\\it Are you a guy or a girl?} with something in the vein of {\\it I'm a girl}.}.\n\nTable \\ref{example2} illustrates how consistency is an emergent property of two arbitrarily selected users. 
Table \ref{example2} illustrates how consistency emerges for two arbitrarily selected users.
The model is capable of discovering the relations between different categories of location, such as London and the UK, or Jakarta and Indonesia. However, the model also makes inconsistent response decisions, generating different answers in the second example in response to questions asking about age or major.
Our proposed persona models integrate user embeddings into the LSTM, and can thus be viewed as encapsulating a trade-off between a persona-specific generation model and a general conversational model.
\n\\end{comment}\n\n\n\\section{Introduction} \n\n\n\\label{sec:intro}\n\\input{01-intro-clean.tex}\n\n\\section{Related Work}\n\\label{sec:related}\n\\input{02-relatedwork-clean.tex}\n\n\\section{Sequence-to-Sequence Models}\n\\label{sec:seq2seq}\n\\input{03-seq2seq-clean.tex}\n\n\\section{Personalized Response Generation}\n\\label{sec:models}\n\\input{04-models-clean.tex}\n\n\\section{Datasets}\n\\label{sec:data}\n\\input{05-datasets-new-clean.tex}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\input{06-experiments-clean.tex}\n\n\\section{Conclusions}\n\\label{sec:conclusion}\n\\input{07-conclusion-clean.tex}\n\n\\section*{Acknowledgments}\n\nWe with to thank Stephanie Lukin, Pushmeet Kohli, Chris Quirk, Alan Ritter, and Dan Jurafsky for helpful discussions.\n\n\\bibliographystyle{acl2016}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Other Related Work}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Preliminary Results}\n\\label{app:prelim}\n\n\n\\begin{lemma}[Young's inequality]\n\\label{lem:Young}\nGiven two same-dimensional vectors $\\mathbf u, \\mathbf v \\in \\mathbb R^d$, the Euclidean inner product can be bounded as follows:\n$$\\left\\langle \\mathbf u, \\mathbf v \\right\\rangle \\leq \\frac{\\norm{\\mathbf u}^2}{2 \\gamma} + \\frac{\\gamma \\norm{\\mathbf v}^2}{2}$$\nfor every constant $\\gamma > 0$.\n\\end{lemma}\n\n\\begin{lemma}[Strong Concavity]\nA function $g: \\mathcal X \\times \\mathcal Y$ is strongly concave in ${\\mathbf y}$, if there exists a constant $\\mu > 0$, such that for all ${\\mathbf x} \\in \\mathcal X$, and for all ${\\mathbf y}, {\\mathbf y}' \\in \\mathcal Y$, the following inequality holds.\n$$g({\\mathbf x}, {\\mathbf y}) \\leq g({\\mathbf x}, {\\mathbf y}') + \\left\\langle \\nabla_{\\by} g({\\mathbf x}, {\\mathbf y}'), {\\mathbf y}' - {\\mathbf y} \\right\\rangle - \\frac{\\mu}{2} \\norm{{\\mathbf y} - {\\mathbf y}'}^2.$$\n\\end{lemma}\n\n\\begin{lemma}[Jensen's inequality]\n\\label{lem:jensens}\nGiven a convex function $f$ and a random variable $X$, the following holds.\n$$f \\left( \\mathbb E [X] \\right) \\leq \\mathbb E \\left[ f(X) \\right].$$\n\\end{lemma}\n\n\\begin{lemma}[Sum of squares]\n\\label{lem:sum_of_squares}\nFor a positive integer $K$, and a set of vectors $x_1, \\hdots, x_K$, the following holds:\n\\begin{align*}\n \\norm{\\sum_{k=1}^K x_k}^2 \\leq K \\sum_{k=1}^K \\norm{x_k}^2.\n\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}[Quadratic growth condition \\cite{schmidt16lin_conv_PL_kdd}]\n\\label{lem:quad_growth}\nIf function $g$ satisfies Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y}, then for all $x$, the following conditions holds\n\\begin{align*}\n g(x) - \\min_{z} g(z) & \\geq \\frac{\\mu}{2} \\norm{x_p - x}^2, \\\\\n \\norm{\\nabla g(x)}^2 & \\geq 2 \\mu \\left( g(x) - \\min_z g(z) \\right).\n\\end{align*}\n\\end{lemma}\n\n\\subsection{Local SGD}\n\\label{app:local_SGD}\nLocal SGD is the algorithm which forms the basis of numerous Federated Learning algorithms \\cite{konevcny16federated, fedavg17aistats}.\nEach client running Local SGD (\\cref{alg_local_SGD}), runs a few SGD iterations locally and only then communicates with the server, which in turn computes the average and returns to the clients. 
This approach saves the limited communication resources of the clients without sacrificing the convergence guarantees.

The algorithm has been analyzed for both convex and nonconvex minimization problems.
With identically distributed client data, Local SGD has been analyzed in \cite{stich18localSGD_iclr, stich20error_fb_jmlr, khaled20localSGD_aistats, spiridonoff21comm_eff_SGD_neurips} for (strongly) convex objectives, and in \cite{wang21coopSGD_jmlr, zhou18localSGD_ijcai} for nonconvex objectives.
With heterogeneous client data, Local SGD has been analyzed in \cite{khaled20localSGD_aistats, koloskova20unified_localSGD_icml} for (strongly) convex objectives, and in \cite{jiang18linear_neurips, haddadpour19conv_FL_arxiv, koloskova20unified_localSGD_icml} for nonconvex objectives.

\begin{algorithm}[ht]
\caption{Local SGD}
\label{alg_local_SGD}
\begin{algorithmic}[1]
	\STATE{\textbf{Input: }{\small${\mathbf x}_0^i = {\mathbf x}_0$}, for all $i \in [n]$, step-size $\eta$, $\tau$, $T$}
	\FOR[At all clients $i=1,\hdots, n$]{$t=0$ to $T-1$}
	    \STATE{Sample minibatch ${\xi^i_{t}}$ from local data}
        \STATE{${\mathbf x^i_{t+1}} = {\mathbf x^i_t} - \eta \nabla g_i ({\mathbf x^i_t}; {\xi^i_{t}})$}
        \IF{$t+1$ mod $\tau = 0$}
            \STATE{Clients send $\{ {\mathbf x^i_{t+1}} \}$ to the server}
            \STATE{Server computes the average ${\mathbf x_{t+1}} \triangleq \frac{1}{n} \sum_{i=1}^n {\mathbf x^i_{t+1}}$ and sends it to all the clients}
            \STATE{${\mathbf x^i_{t+1}} = {\mathbf x_{t+1}}$, for all $i \in [n]$}
        \ENDIF
	\ENDFOR
	\STATE{\textbf{Return: }${\bar{\bx}_T}$ drawn uniformly at random from $\{ {\mathbf x_t} \}$, where ${\mathbf x_t} \triangleq \frac{1}{n} \sum_{i=1}^n {\mathbf x^i_t}$}
\end{algorithmic}
\end{algorithm}

\begin{lemma}[Local SGD for Convex Function Minimization \cite{khaled20localSGD_aistats}]
\label{lem:local_SGD_khaled}
Suppose that the local functions $\{ g_i \}$ satisfy Assumptions \ref{assum:smoothness}, \ref{assum:bdd_var}, \ref{assum:bdd_hetero}, and are all convex.\footnote{The result actually holds under slightly weaker assumptions on the noise and heterogeneity.}
Suppose the step-size $\eta$ is chosen such that $\eta \leq \min \left\{ \frac{1}{4 L_f}, \frac{1}{8 L_f (\tau - 1)} \right\}$.
Then, the iterates generated by the Local SGD algorithm (\cref{alg_local_SGD}) satisfy
\begin{align*}
    \mathbb E \left[ g({\bar{\bx}_T}) \right] - g({\mathbf x}^*) \leq \frac{1}{T} \sumtT \mathbb E \left[ g({\mathbf x_t}) - g({\mathbf x}^*) \right] \leq \frac{4 \norm{{\mathbf x}_0 - {\mathbf x}^*}^2}{\eta T} + \frac{20 \eta \sigma^2}{n} + 16 \eta^2 L_f (\tau-1)^2 \left( \sigma^2 + \varsigma_x^2 \right),
\end{align*}
where ${\bar{\bx}_T} \triangleq \frac{1}{T} \sumtT {\mathbf x_t}$.
\end{lemma}

\newpage
\section{Nonconvex-PL (NC-PL) Functions: Local SGDA (\texorpdfstring{\cref{thm:NC_PL}}{Theorem 1})} \label{app:ncpl}
In this section, we prove the convergence of \cref{alg_local_SGDA} for Nonconvex-PL functions, and provide the complexity and communication guarantees.

We organize this section as follows. First, in \cref{sec:NC_PL_int_results} we present some intermediate results, which we use to prove the main theorem.
Next, in \\cref{sec:NC_PL_thm_proof}, we present the proof of \\cref{thm:NC_PL}, which is followed by the proofs of the intermediate results in \\cref{sec:NC_PL_int_results_proofs}.\nWe utilize some of the proof techniques of \\cite{mahdavi21localSGDA_aistats}.\nHowever, the algorithm we analyze for NC-PL functions is different. Also, we provide an improved analysis, resulting in better convergence guarantees.\n\nThe problem we solve is\n\\begin{align*}\n \\min_{{\\mathbf x}} \\max_{{\\mathbf y}} \\left\\{ f({\\mathbf x}, {\\mathbf y}) \\triangleq \\frac{1}{n} \\sum_{i=1}^n f_i({\\mathbf x}, {\\mathbf y}) \\right\\}.\n\\end{align*}\nWe define\n\\begin{align}\n \\Phi ({\\mathbf x}) \\triangleq \\max_{{\\mathbf y}} f({\\mathbf x}, {\\mathbf y}) \\quad \\text{and} \\quad {\\mathbf y}^* ({\\mathbf x}) \\in \\operatornamewithlimits{arg\\,max}_{{\\mathbf y}} f({\\mathbf x}, {\\mathbf y}).\n \n\\end{align}\nSince $f({\\mathbf x}, \\cdot)$ is $\\mu$-PL, ${\\mathbf y}^*({\\mathbf x})$ \\textit{need not} be unique.\n\nFor the sake of analysis, we define \\textit{virtual} sequences of average iterates:\n\\begin{align*}\n & {\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}; \\quad {\\mathbf y_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_t}.\n\\end{align*}\nNote that these sequences are constructed only for the sake of analysis. During an actual run of the algorithm, these sequences exist only at the time instants when the clients communicate with the server.\nWe next write the update expressions for these virtual sequences, using the updates in Algorithm \\ref{alg_local_SGDA}.\n\\begin{equation}\n \\begin{aligned}\n {\\mathbf x_{t+1}} &= {\\mathbf x_t} - \\eta_x \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}}) \\\\\n {\\mathbf y_{t+1}} &= {\\mathbf y_t} + \\eta_y \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})\n \\end{aligned}\n \\label{eq:NC_PL_update_avg}\n\\end{equation}\nNext, we present some intermediate results which we use in the proof of \\cref{thm:NC_PL}. 
To keep the presentation concise, the proofs of these intermediate results are relegated to \cref{sec:NC_PL_int_results_proofs}.

\subsection{Intermediate Lemmas} \label{sec:NC_PL_int_results}

We use the following result from \cite{nouiehed19minimax_neurips19} about the smoothness of $\Phi(\cdot)$.

\begin{lemma}
\label{lem:Phi_smooth_nouiehed}
If the function $f({\mathbf x}, \cdot)$ satisfies Assumptions \ref{assum:smoothness}, \ref{assum:PL_y} ($L_f$-smoothness and $\mu$-PL condition in ${\mathbf y}$), then $\Phi ({\mathbf x})$ is $L_{\Phi}$-smooth with $L_{\Phi} = \kappa L_f/2 + L_f$, where $\kappa = L_f/\mu$ is the condition number.
\end{lemma}

\begin{lemma}
\label{lem:NC_PL_Phi_decay_one_iter}
Suppose the local client loss functions $\{ f_i \}$ satisfy Assumptions \ref{assum:smoothness}, \ref{assum:PL_y}, and the stochastic oracles for the local functions satisfy \cref{assum:bdd_var}.
Then the iterates generated by \cref{alg_local_SGDA} satisfy
\begin{equation}
    \begin{aligned}
    \mathbb E \left[ \Phi ({\mathbf x_{t+1}}) \right] & \leq \mathbb E \left[ \Phi ({\mathbf x_t}) \right] - \frac{\eta_x}{2} \mathbb E \norm{\nabla \Phi ({\mathbf x_t})}^2 - \frac{\eta_x}{2} \left( 1 - L_{\Phi} \eta_x \right) \mathbb E \norm{\frac{1}{n} \sum_{i=1}^n \nabla_{\bx} f_i({\mathbf x^i_t}, {\mathbf y^i_t})}^2 \\
    & \quad + \frac{2 \eta_x L_f^2}{\mu} \mathbb E \left[ \Phi ({\mathbf x_t}) - f({\mathbf x_t}, {\mathbf y_t}) \right] + 2 \eta_x L_f^2 \Delta_{t}^{\bx,\by} + \frac{L_{\Phi} \eta_x^2 \sigma^2}{2 n},
    \end{aligned}
\end{equation}
where we define $\Delta_{t}^{\bx,\by} \triangleq \frac{1}{n} \sum_{i=1}^n \mathbb E \left( \left\| {\mathbf x^i_t} - {\mathbf x_t} \right\|^2 + \left\| {\mathbf y^i_t} - {\mathbf y_t} \right\|^2 \right)$, the synchronization error.
\end{lemma}

\begin{lemma}
\label{lem:NC_PL_phi_error}
Suppose the local loss functions $\{ f_i \}$ satisfy Assumptions \ref{assum:smoothness}, \ref{assum:bdd_hetero}, and the stochastic oracles for the local functions satisfy \cref{assum:bdd_var}.
Further, in \cref{alg_local_SGDA}, we choose step-sizes $\eta_x, \eta_y$ satisfying $\eta_y \leq 1/\mu$, $\frac{\eta_x}{\eta_y} \leq \frac{1}{8 \kappa^2}$.
Then the following inequality holds:
\begin{equation}
    \begin{aligned}
    & \frac{1}{T} \sum_{t=0}^{T-1} \mathbb E \left( \Phi ({\mathbf x_t}) - f({\mathbf x_t}, {\mathbf y_t}) \right) \\
    & \leq \frac{2 \left( \Phi ({\mathbf x}_0) - f({\mathbf x}_0, {\mathbf y}_0) \right)}{\eta_y \mu T} + \frac{2 L_f^2}{\mu \eta_y} \left( 2 \eta_x (1 - \eta_y \mu) + \eta_y \right) \frac{1}{T} \sum_{t=0}^{T-1} \Delta_{t}^{\bx,\by} + (1 - \eta_y \mu) \frac{\eta_x}{\eta_y \mu} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb E \norm{\nabla \Phi({\mathbf x_t})}^2 \\
    & \quad + \left[ (1 - \eta_y \mu) \frac{\eta_x^2}{2} \left( L_f + L_{\Phi} \right) + \eta_y L_f^2 \eta_x^2 \right] \frac{2}{\eta_y \mu T} \sum_{t=0}^{T-1} \mathbb E \norm{\frac{1}{n} \sum_{i=1}^n \nabla_{\bx} f_i({\mathbf x^i_t}, {\mathbf y^i_t})}^2 \\
    & \quad + \frac{\sigma^2}{\mu n} \left( \eta_y L_f + 2 L_f^2 \eta_x^2 \right) + \frac{(1 - \eta_y \mu)}{\mu \eta_y} \frac{\eta_x^2 \sigma^2}{n} \left( L_f + L_{\Phi} \right).
    \end{aligned}
\end{equation}
\end{lemma}

\begin{remark}[Comparison with 
\\cite{mahdavi21localSGDA_aistats}]\nNote that to derive a result similar to \\cref{lem:NC_PL_phi_error}, the analysis in \\cite{mahdavi21localSGDA_aistats} requires the additional assumption of $G_x$-Lipschitz continuity of $f(\\cdot, {\\mathbf y})$.\nAlso, the algorithm we analyze (Local SGDA) is simpler than the algorithm analyzed in \\cite{mahdavi21localSGDA_aistats} for NC-PL functions.\n\\end{remark}\n\n\n\\begin{lemma}\n\\label{lem:NC_PL_consensus_error}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_hetero},\nand the stochastic oracles for the local\nfunctions satisfy \\cref{assum:bdd_var}.\nFurther, in \\cref{alg_local_SGDA}, we choose step-sizes $\\eta_x, \\eta_y \\leq \\frac{1}{8 \\tau L_f}$.\nThen, the iterates $\\{ {\\mathbf x^i_t}, {\\mathbf y^i_t} \\}$ generated by \\cref{alg_local_SGDA} satisfy\n\\begin{equation}\n \\begin{aligned}\n \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} & \\triangleq \\frac{1}{T} \\sum_{t=0}^{T-1} \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 + \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2 \\right) \\nonumber \\\\\n & \\leq 2 (\\tau-1)^2 \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 6 (\\tau-1)^2 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right).\n \\end{aligned}\n \\label{eq:lem:NC_PL_consensus_error}\n\\end{equation}\n\\end{lemma}\n\n\n\n\n\n\n\n\n\n\\subsection{Proof of \\texorpdfstring{\\cref{thm:NC_PL}}{Theorem 1}}\n\\label{sec:NC_PL_thm_proof}\nFor the sake of completeness, we first state the full statement of \\cref{thm:NC_PL} here.\n\n\\begin{theorem*}\nSuppose the local loss functions $\\{ f_i \\}_i$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, and the global function $f$ satisfies \\cref{assum:PL_y}.\nSuppose the step-sizes $\\eta_x, \\eta_y$ are chosen\nsuch that $\\eta_y \\leq \\frac{1}{8 L_f \\tau}$, $\\frac{\\eta_x}{\\eta_y} = \\frac{1}{8 \\kappa^2}$, where $\\kappa = \\frac{L_f}{\\mu}$ is the condition number.\nThen for the output ${\\bar{\\bx}_T}$ of \\cref{alg_local_SGDA}, the following holds.\n\\begin{align}\n \\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2 = & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi ({\\mathbf x_t})}^2 \\nonumber \\\\\n & \\leq \\underbrace{\\mathcal O \\left( \\kappa^2 \\left[ \\frac{\\Delta_{\\Phi}}{\\eta_y T} + \\frac{L_f \\eta_y \\sigma^2}{n} \\right] \\right)}_{\\text{Error with full synchronization}} + \\underbrace{\\mathcal O \\left( L_f^2 \\kappa^2 (\\tau-1)^2 \\left[ \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\eta_x^2 \\varsigma_x^2 \\right] \\right)}_{\\text{Error due to local updates}},\n \\label{eq_proof:thm_NC_PL}\n\\end{align}\nwhere $\\Phi({\\mathbf x}) \\triangleq \\max_{\\mathbf y} f({\\mathbf x}, {\\mathbf y})$ is the envelope function, $\\Delta_{\\Phi} \\triangleq \\Phi ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi ({\\mathbf x})$.\nUsing $\\eta_y = \\sqrt{\\frac{n}{L_f T}}$ and $\\eta_x = \\frac{1}{8 \\kappa^{2}} \\sqrt{\\frac{n}{L_f T}}$, we get\n\\begin{align}\n & \\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2 \\leq \\mathcal O \\left( \\frac{\\kappa^2 \\left( \\sigma^2 + \\Delta_{\\Phi} \\right)}{\\sqrt{n T}} + \\kappa^2 (\\tau-1)^2 \\frac{n \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right)}{T} \\right). 
\\nonumber\n \n\\end{align}\n\\end{theorem*}\n\n\\begin{proof}\nWe start by summing the expression in \\cref{lem:NC_PL_Phi_decay_one_iter} over $t = 0, \\hdots, T-1$.\n\\begin{align}\n \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\Phi ({\\mathbf x_{t+1}}) - \\Phi ({\\mathbf x_t}) \\right] & \\leq - \\frac{\\eta_x}{2} \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi ({\\mathbf x_t})}^2 - \\frac{\\eta_x}{2} \\left( 1 - L_{\\Phi} \\eta_x \\right) \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\quad + \\frac{2 \\eta_x L_f^2}{\\mu} \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - F({\\mathbf x_t}, {\\mathbf y_t}) \\right] + 2 \\eta_x L_f^2 \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{2 n}. \\label{eq_proof:thm:NC_PL_1}\n\\end{align}\nSubstituting the bound on $\\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by}$ from \\cref{lem:NC_PL_consensus_error}, and the bound on $\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - F({\\mathbf x_t}, {\\mathbf y_t}) \\right]$ from \\cref{lem:NC_PL_phi_error}, and rearranging the terms in \\eqref{eq_proof:thm:NC_PL_1}, we get\n\\begin{align}\n & \\frac{\\mathbb E \\Phi ({\\mathbf x}_T) - \\Phi ({\\mathbf x}_0)}{T} \\nonumber \\\\\n & \\leq - \\underbrace{\\left( \\frac{\\eta_x}{2} - (1 - \\eta_y \\mu) \\frac{2 \\eta_x^2 L_f^2}{\\eta_y \\mu^2} \\right)}_{\\geq \\eta_x\/4} \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi ({\\mathbf x_t})}^2 \\nonumber \\\\\n & \\quad - \\underbrace{\\frac{\\eta_x}{2} \\left( 1 - L_{\\Phi} \\eta_x - \\frac{8 L_f^2}{\\mu^2 \\eta_y} \\left[ (1 - \\eta_y \\mu) \\frac{\\eta_x^2}{2} \\left( L + L_{\\Phi} \\right) + \\eta_y L_f^2 \\eta_x^2 \\right] \\right)}_{\\geq 0} \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\quad + \\left[ \\frac{2 \\eta_x L_f^2}{\\mu} \\left( \\frac{2 L_f^2}{\\mu} + \\frac{4 \\eta_x L_f^2 (1 - \\eta_y \\mu)}{\\mu \\eta_y} \\right) + 2 \\eta_x L_f^2 \\right] \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + \\frac{2 \\eta_x L_f^2}{\\mu} \\left[ \\frac{2 \\left( \\Phi ({\\mathbf x}_0) - f({\\mathbf x}_0, {\\mathbf y}_0) \\right)}{\\eta_y \\mu T} + \\frac{\\sigma^2}{\\mu n} \\left( \\eta_y L_f + 2 L_f^2 \\eta_x^2 \\right) + \\frac{(1 - \\eta_y \\mu)}{\\mu \\eta_y} \\frac{\\eta_x^2 \\sigma^2}{n} \\left( L_f + L_{\\Phi} \\right) \\right] + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{2 n}. 
\\label{eq_proof:thm:NC_PL_2}\n\\end{align}\nHere, $\\frac{\\eta_x}{2} - \\frac{2 \\eta_x^2 (1-\\mu \\eta_y)L_f^2}{\\mu^2 \\eta_y} \\geq \\frac{\\eta_x}{4}$ holds since $\\frac{\\eta_x}{\\eta_y} \\leq \\frac{1}{8 \\kappa^2}$.\nAlso, $1 - L_{\\Phi} \\eta_x - \\frac{8 L_f^2}{\\mu^2 \\eta_y} \\left[ (1 - \\eta_y \\mu) \\frac{\\eta_x^2}{2} \\left( L + L_{\\Phi} \\right) + \\eta_y L_f^2 \\eta_x^2 \\right] \\geq 0$ follows from the bounds on $\\eta_x, \\eta_y$.\nRearranging the terms in \\eqref{eq_proof:thm:NC_PL_2} and using \\cref{lem:NC_PL_consensus_error}, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi ({\\mathbf x_t})}^2 \\leq \\frac{4 \\left( \\Phi ({\\mathbf x}_0) - \\mathbb E \\Phi ({\\mathbf x}_T) \\right)}{\\eta_x T} \\nonumber \\\\\n & \\quad + \\frac{4}{\\eta_x} 2 \\eta_x L_f^2 \\left[ 1 + 2 \\kappa^2 + 4 \\kappa^2 \\frac{\\eta_x}{\\eta_y} \\right] 2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right] \\nonumber \\\\\n & \\quad + \\frac{4}{\\eta_x} \\left[ \\frac{4 \\eta_x \\kappa^2}{\\eta_y} \\frac{\\left( \\Phi ({\\mathbf x}_0) - f({\\mathbf x}_0, {\\mathbf y}_0) \\right)}{T} + \\frac{2 \\eta_x \\kappa^2 \\sigma^2}{n} \\left( \\eta_y L_f + 2 L_f^2 \\eta_x^2 \\right) + \\frac{2 \\eta_x \\kappa^2}{\\eta_y} \\frac{\\eta_x^2 \\sigma^2}{n} \\left( L_f + L_{\\Phi} \\right) \\right] + \\frac{4}{\\eta_x} \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{2 n} \\nonumber \\\\\n & \\overset{(a)}{\\leq} \\frac{4 \\Delta_{\\Phi}}{\\eta_x T} + 8 L_f^2 \\left[ 2 + 2 \\kappa^2 \\right] 2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right] \\nonumber \\\\\n & \\quad + \\frac{16 \\kappa^2 \\Delta_{\\Phi}}{\\eta_y T} + \\frac{8 \\kappa^2 \\sigma^2}{n} \\left( \\eta_y L_f + 2 L_f^2 \\eta_x^2 \\right) + \\frac{8 \\kappa^2 \\eta_x}{\\eta_y} \\frac{\\eta_x \\sigma^2}{n} \\left( L_f + L_{\\Phi} \\right) + \\frac{2 L_{\\Phi} \\eta_x \\sigma^2}{n} \\nonumber \\\\\n & \\overset{(b)}{\\leq} \\frac{4 \\Delta_{\\Phi}}{\\eta_x T} + 192 L_f^2 \\kappa^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 + \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right] + \\frac{16 \\kappa^2 \\Delta_{\\Phi}}{\\eta_y T} + \\frac{8 \\kappa^2 \\sigma^2}{n} \\left( \\eta_y L_f + 2 L_f^2 \\eta_x^2 \\right) + \\frac{4 L_{\\Phi} \\eta_x \\sigma^2}{n} \\nonumber \\\\\n & = \\mathcal O \\left( \\frac{\\Delta_{\\Phi}}{\\eta_x T} + \\frac{L_{\\Phi} \\eta_x \\sigma^2}{n} + \\kappa^2 \\left[ \\frac{\\Delta_{\\Phi}}{\\eta_y T} + \\frac{L_f \\eta_y \\sigma^2}{n} \\right] + L_f^2 \\kappa^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 + \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right] \\right). \\nonumber \\\\\n & = \\underbrace{\\mathcal O \\left( \\kappa^2 \\left[ \\frac{\\Delta_{\\Phi}}{\\eta_y T} + \\frac{L_f \\eta_y \\sigma^2}{n} \\right] \\right)}_{\\text{Error with full synchronization}} + \\underbrace{\\mathcal O \\left( L_f^2 \\kappa^2 (\\tau-1)^2 \\left[ \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\eta_x^2 \\varsigma_x^2 \\right] \\right)}_{\\text{Error due to local updates}}. 
\\tag{$\\because \\kappa \\geq 1$}\n \n\\end{align}\nwhere, we denote $\\Delta_{\\Phi} \\triangleq \\Phi ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi ({\\mathbf x})$.\n$(a)$ follows from $\\frac{\\eta_x}{\\eta_y} \\leq \\frac{1}{8 \\kappa^2}$;\n$(b)$ follows since $\\kappa \\geq 1$ and $L_{\\Phi} \\geq L_f$.\nTherefore, $\\frac{8 \\kappa^2 \\eta_x}{\\eta_y} \\frac{\\eta_x \\sigma^2}{n} (L_f + L_{\\Phi}) \\leq \\frac{\\eta_x \\sigma^2}{n} (L_f + L_{\\Phi}) \\leq \\frac{2 L_{\\Phi} \\eta_x \\sigma^2}{n}$, which results in \\eqref{eq_proof:thm_NC_PL}.\n\nUsing $\\eta_y = \\sqrt{\\frac{n}{L_f T}}$ and $\\eta_x = \\frac{1}{8 \\kappa^{2}} \\sqrt{\\frac{n}{L_f T}} \\leq \\frac{\\eta_y}{8 \\kappa^2}$, and since $\\kappa \\geq 1$, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left\\| \\nabla \\Phi ({\\mathbf x_t}) \\right\\|^2 \\leq \\mathcal O \\left( \\frac{\\kappa^2 \\left( \\sigma^2 + \\Delta_{\\Phi} \\right)}{\\sqrt{n T}} + \\kappa^2 (\\tau-1)^2 \\frac{n}{T} \\left[ \\sigma^2 + \\frac{\\varsigma_x^2}{\\kappa^4} + \\varsigma_y^2 \\right] \\right). \\nonumber\n \n\\end{align}\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{cor:NC_PL_comm_cost}]\nWe assume $T \\geq n^3$.\nTo reach an $\\epsilon$-accurate point, i.e., ${\\mathbf x}$ such that $\\mathbb E \\left\\| \\nabla \\Phi ({\\mathbf x}) \\right\\| \\leq \\epsilon$, we need\n\\begin{align*}\n \\mathbb E \\left\\| \\nabla \\Phi ({\\bar{\\bx}_T}) \\right\\| = \\left[ \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left\\| \\nabla \\Phi ({\\mathbf x_t}) \\right\\|^2 \\right]^{1\/2} \\leq \\mathcal O \\left( \\frac{\\kappa \\sqrt{\\sigma^2 + \\Delta_{\\Phi}}}{(nT)^{1\/4}} + \\kappa (\\tau-1) \\sqrt{\\frac{n \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right)}{T}} \\right).\n\\end{align*}\nIf we choose $\\tau = \\mathcal O \\left( \\frac{T^{1\/4}}{n^{3\/4}} \\right)$, we need $T = \\mathcal O \\left( \\kappa^4\/(n \\epsilon^4) \\right)$ iterations, to reach an $\\epsilon$-accurate point.\nThe number of communication rounds is $\\mathcal O \\left( \\frac{T}{\\tau} \\right) = \\mathcal O \\left( (n T)^{3\/4} \\right) = \\mathcal O \\left( \\kappa^3\/\\epsilon^3 \\right)$. 
\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Proofs of the Intermediate Lemmas}\n\\label{sec:NC_PL_int_results_proofs}\n\n\n\\begin{proof}[Proof of Lemma \\ref{lem:NC_PL_Phi_decay_one_iter}]\n\\label{proof:lem:NC_PL_Phi_decay_one_iter}\nIn the proof, we use the quadratic growth property of the $\\mu$-PL function $f({\\mathbf x}, \\cdot)$ (\\cref{lem:quad_growth}), i.e.,\n\\begin{align}\n \\frac{\\mu}{2} \\norm{{\\mathbf y} - {\\mathbf y}^*({\\mathbf x})}^2 \\leq \\max_{{\\mathbf y}'} f({\\mathbf x}, {\\mathbf y}') - f({\\mathbf x}, {\\mathbf y}), \\quad \\forall \\ {\\mathbf x},{\\mathbf y} \\label{eq:quad_growth_PL}\n\\end{align}\nwhere ${\\mathbf y}^*({\\mathbf x}) \\in \\operatornamewithlimits{arg\\,max}_{{\\mathbf y}'} f({\\mathbf x}, {\\mathbf y}')$.\nSee \\cite{mahdavi21localSGDA_aistats} for the entire proof.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Lemma \\ref{lem:NC_PL_consensus_error}]\nWe define the separate synchronization errors for ${\\mathbf x}$ and ${\\mathbf y}$:\n\\begin{align*}\n \\Delta_{t}^{\\bx} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2, \\qquad \\Delta_{t}^{\\by} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2,\n\\end{align*}\nsuch that $\\Delta_{t}^{\\bx,\\by} = \\Delta_{t}^{\\bx} + \\Delta_{t}^{\\by}$.\nWe first bound the ${\\mathbf x}$-synchronization error $\\Delta_{t}^{\\bx}$.\nDefine $s = \\lfloor t\/\\tau \\rfloor$, such that $s \\tau + 1 \\leq t \\leq (s+1) \\tau - 1$.\nThen,\n\\begin{align}\n \\Delta_{t}^{\\bx} & \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 \\nonumber \\\\\n &= \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\norm{\\Big( {\\mathbf x}^i_{s \\tau} - \\eta_x \\sum_{k=s\\sync}^{t-1} \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i; \\xi^i_k) \\Big) - \\Big( {\\mathbf x}_{s \\tau} - \\eta_x \\frac{1}{n} \\sum_{j=1}^n \\sum_{k=s\\sync}^{t-1} \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j; \\xi_k^j) \\Big)}^2 \\tag{see \\eqref{eq:NC_PL_update_avg}} \\\\\n &= \\eta_x^2 \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\norm{\\sum_{k=s\\sync}^{t-1} \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i; \\xi^i_k) - \\frac{1}{n} \\sum_{j=1}^n \\sum_{k=s\\sync}^{t-1} \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j; \\xi_k^j)}^2 \\tag{$\\because {\\mathbf x}^i_{s \\tau} = {\\mathbf x}_{s \\tau}, \\forall \\ i \\in [n]$} \\\\\n & \\overset{(a)}{\\leq} \\eta_x^2 \\frac{1}{n} (t-s\\tau) \\sum_{k=s\\sync}^{t-1} \\sum_{i=1}^n \\mathbb E \\Big\\| \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i; \\xi^i_k) - \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i) + \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i) - \\nabla_{\\bx} f_i ({\\mathbf x}_k, {\\mathbf y}_k) + \\nabla_{\\bx} f_i ({\\mathbf x}_k, {\\mathbf y}_k) \\nonumber \\\\\n & \\qquad - \\nabla_{\\bx} f ({\\mathbf x}_k, {\\mathbf y}_k) - \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j; \\xi_k^j) - \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j) + \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j) - \\nabla_{\\bx} f_j ({\\mathbf x}_k, {\\mathbf y}_k) \\right) \\Big\\|^2 \\nonumber \\\\\n & \\overset{(b)}{=} \\frac{\\eta_x^2 (t-s\\tau)}{n} \\sum_{k=s\\sync}^{t-1} \\sum_{i=1}^n \\mathbb E \\Bigg[ \\norm{\\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i; \\xi^i_k) - \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i)}^2 + \\Big\\|
\\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j; \\xi_k^j) - \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j) \\right) \\Big\\|^2 \\nonumber \\\\\n & + \\Big\\| \\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i) - \\nabla_{\\bx} f_i ({\\mathbf x}_k, {\\mathbf y}_k) + \\nabla_{\\bx} f_i ({\\mathbf x}_k, {\\mathbf y}_k) - \\nabla_{\\bx} f ({\\mathbf x}_k, {\\mathbf y}_k) - \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j) - \\nabla_{\\bx} f_j ({\\mathbf x}_k, {\\mathbf y}_k) \\right) \\Big\\|^2 \\Bigg] \\nonumber \\\\\n & \\overset{(c)}{\\leq} \\frac{\\eta_x^2 (\\tau-1)}{n} \\sum_{k=s\\sync}^{t-1} \\sum_{i=1}^n \\mathbb E \\Bigg[ \\sigma^2 + \\frac{\\sigma^2}{n} + 3 \\norm{\\nabla_{\\bx} f_i ({\\mathbf x}_k^i, {\\mathbf y}_k^i) - \\nabla_{\\bx} f_i ({\\mathbf x}_k, {\\mathbf y}_k)}^2 + 3 \\norm{\\nabla_{\\bx} f_i ({\\mathbf x}_k, {\\mathbf y}_k) - \\nabla_{\\bx} f ({\\mathbf x}_k, {\\mathbf y}_k)}^2 \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + 3 \\Big\\| \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x}_k^j, {\\mathbf y}_k^j) - \\nabla_{\\bx} f_j ({\\mathbf x}_k, {\\mathbf y}_k) \\right) \\Big\\|^2 \\Bigg] \\nonumber \\\\\n & \\overset{(d)}{\\leq} \\frac{\\eta_x^2 (\\tau-1)}{n} \\sum_{k=s\\sync}^{t-1} \\sum_{i=1}^n \\mathbb E \\Bigg[ \\sigma^2 + \\frac{\\sigma^2}{n} + 3 L_f^2 \\left[ \\norm{{\\mathbf x}_k^i - {\\mathbf x}_k}^2 + \\norm{{\\mathbf y}_k^i - {\\mathbf y}_k}^2 \\right] + 3 \\varsigma_x^2 \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\frac{3}{n} \\sum_{j=1}^n L_f^2 \\left[ \\norm{{\\mathbf x}_k^j - {\\mathbf x}_k}^2 + \\norm{{\\mathbf y}_k^j - {\\mathbf y}_k}^2 \\right] \\Bigg] \\nonumber \\\\\n \n &= \\eta_x^2 (\\tau-1) \\sum_{k=s\\sync}^{t-1} \\left[ \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\varsigma_x^2 + 6 L_f^2 \\left( \\Delta_{k}^{\\bx} + \\Delta_{k}^{\\by} \\right) \\right], \\nonumber\n \n\\end{align}\nwhere $(a)$ follows from \\cref{lem:sum_of_squares};\n$(b)$ follows from \\cref{assum:bdd_var} (unbiasedness of stochastic gradients);\n$(c)$ follows from \\cref{assum:bdd_var} (bounded variance of stochastic gradients);\n$(d)$ follows from \\cref{assum:smoothness}, \\ref{assum:bdd_hetero}, and Jensen's inequality (\\cref{lem:jensens}) for $\\| \\cdot \\|^2$.\n\nFurthermore, $\\Delta_{t}^{\\bx} = 0$ for $t = s \\tau$. Therefore,\n\\begin{align}\n \\sum_{t=s\\tau}^{(s+1)\\tau-1} \\Delta_{t}^{\\bx} = \\sum_{t=s\\tau+1}^{(s+1)\\tau-1} \\Delta_{t}^{\\bx} & \\leq \\eta_x^2 (\\tau-1) \\sum_{t=s\\tau+1}^{(s+1)\\tau-1} \\sum_{k=s\\sync}^{t-1} \\left[ \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\varsigma_x^2 + 6 L_f^2 \\left( \\Delta_{k}^{\\bx} + \\Delta_{k}^{\\by} \\right) \\right] \\nonumber \\\\\n & \\leq \\eta_x^2 (\\tau-1)^2 \\sum_{t=s\\tau+1}^{(s+1)\\tau-1} \\left[ \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\varsigma_x^2 + 6 L_f^2 \\Delta_{t}^{\\bx,\\by} \\right]. \\label{eq_proof:lem:NC_PL_x_consensus_error}\n\\end{align}\nThe ${\\mathbf y}$-synchronization error $\\Delta_{t}^{\\by}$ can be bounded following a similar analysis, and we get\n\\begin{align}\n \\sum_{t=s\\tau}^{(s+1)\\tau-1} \\Delta_{t}^{\\by} & \\leq \\eta_y^2 (\\tau-1)^2 \\sum_{t=s\\tau+1}^{(s+1)\\tau-1} \\left[ \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\varsigma_y^2 + 6 L_f^2 \\Delta_{t}^{\\bx,\\by} \\right].
\\label{eq_proof:lem:NC_PL_y_consensus_error}\n\\end{align}\nCombining \\eqref{eq_proof:lem:NC_PL_x_consensus_error} and \\eqref{eq_proof:lem:NC_PL_y_consensus_error}, we get\n\\begin{align}\n \\sum_{t=s\\tau}^{(s+1)\\tau-1} \\Delta_{t}^{\\bx,\\by} & \\leq (\\tau-1)^2 \\left[ \\tau \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\tau \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) + 6 L_f^2 \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sum_{t=s\\tau+1}^{(s+1)\\tau-1} \\Delta_{t}^{\\bx,\\by} \\right]. \\nonumber\n \n\\end{align}\nSince our choice of $\\eta_x, \\eta_y$ ensures $6 L_f^2 \\left( \\eta_x^2 + \\eta_y^2 \\right) (\\tau - 1)^2 \\leq 1\/2$, we get\n\\begin{align}\n \\sum_{t=s\\tau}^{(s+1)\\tau-1} \\Delta_{t}^{\\bx,\\by} & \\leq 2 \\tau (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right] \\nonumber \\\\\n \\Rightarrow \\frac{1}{T} \\sum_{s=0}^{T\/\\tau - 1} \\sum_{t=s\\tau}^{(s+1)\\tau-1} \\Delta_{t}^{\\bx,\\by} = \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} & \\leq 2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right]. \\nonumber\n \n\\end{align}\n\\end{proof}\n\n\n\\begin{proof}[Proof of Lemma \\ref{lem:NC_PL_phi_error}]\nUsing $L_f$-smoothness of $f({\\mathbf x}, \\cdot)$,\n\\begin{align}\n f({\\mathbf x_{t+1}}, {\\mathbf y_t}) &+ \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), {\\mathbf y_{t+1}} - {\\mathbf y_t} \\right\\rangle - \\frac{L_f}{2} \\norm{{\\mathbf y_{t+1}} - {\\mathbf y_t}}^2 \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\nonumber \\\\\n \\Rightarrow f({\\mathbf x_{t+1}}, {\\mathbf y_t}) & \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\eta_y \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}}) \\right\\rangle + \\frac{\\eta_y^2 L_f}{2} \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})}^2 \\tag{using \\eqref{eq:NC_PL_update_avg}} \\\\\n \\Rightarrow \\mathbb E f({\\mathbf x_{t+1}}, {\\mathbf y_t}) & \\leq \\mathbb E f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\eta_y \\mathbb E \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}) \\right\\rangle \\nonumber \\\\\n & \\qquad + \\frac{\\eta_y^2 L_f}{2} \\left[ \\frac{\\sigma^2}{n} + \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\right] \\tag{\\cref{assum:bdd_var}} \\\\\n &= \\mathbb E f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\frac{\\eta_y}{2} \\mathbb E \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t})}^2 - \\frac{\\eta_y}{2} \\left( 1 - \\eta_y L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\qquad + \\frac{\\eta_y}{2} \\mathbb E \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}) - \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) + \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_y^2 L_f \\sigma^2}{2n} \\nonumber \\\\\n & \\leq \\mathbb E
f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\frac{\\eta_y}{2} \\mathbb E \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t})}^2 - \\frac{\\eta_y}{2} \\left( 1 - \\eta_y L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\qquad + \\eta_y L_f^2 \\mathbb E \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 + \\eta_y L_f^2 \\Delta_{t}^{\\bx,\\by} + \\frac{\\eta_y^2 L_f \\sigma^2}{2n}, \\label{eq_proof:lem:NC_PL_phi_error_1}\n\\end{align}\nwhere \\eqref{eq_proof:lem:NC_PL_phi_error_1} follows from Jensen's inequality (\\cref{lem:jensens}) for $\\norm{\\cdot}^2$, \\cref{assum:smoothness}, and Young's inequality (\\cref{lem:Young}) with $\\gamma = 1$, i.e., $\\left\\langle \\mathbf a, \\mathbf b \\right\\rangle \\leq \\frac{1}{2} \\norm{\\mathbf a}^2 + \\frac{1}{2} \\norm{\\mathbf b}^2$.\nNext, note that using \\cref{assum:bdd_var}\n\\begin{align}\n \\mathbb E \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 &= \\eta_x^2 \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})}^2 \\leq \\eta_x^2 \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_x^2 \\sigma^2}{n}. \\label{eq_proof:lem:NC_PL_phi_error_2a}\n\\end{align}\nAlso, using \\cref{assum:PL_y},\n\\begin{align}\n \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t})}^2 \\geq 2 \\mu \\left( \\max_{\\mathbf y} f({\\mathbf x_{t+1}}, {\\mathbf y}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right) = 2 \\mu \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right). \\label{eq_proof:lem:NC_PL_phi_error_2b}\n\\end{align}\nSubstituting \\eqref{eq_proof:lem:NC_PL_phi_error_2a}, \\eqref{eq_proof:lem:NC_PL_phi_error_2b} in \\eqref{eq_proof:lem:NC_PL_phi_error_1}, and rearranging the terms, we get\n\\begin{align}\n & \\eta_y \\mu \\mathbb E \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right) \\nonumber \\\\\n & \\leq \\mathbb E f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\mathbb E f({\\mathbf x_{t+1}}, {\\mathbf y_t}) - \\frac{\\eta_y}{2} \\left( 1 - \\eta_y L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_y^2 L_f \\sigma^2}{2n} \\nonumber \\\\\n & \\quad + \\eta_y L_f^2 \\left[ \\eta_x^2 \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_x^2 \\sigma^2}{n} \\right] + \\eta_y L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n \\Rightarrow & \\mathbb E \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right) \\nonumber \\\\\n & \\leq (1 - \\eta_y \\mu) \\mathbb E \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right) - \\frac{\\eta_y}{2} \\left( 1 - \\eta_y L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_y^2 L_f \\sigma^2}{2n} \\nonumber \\\\\n & \\quad + \\eta_y L_f^2 \\left[ \\eta_x^2 \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_x^2 \\sigma^2}{n} \\right] + \\eta_y L_f^2 \\Delta_{t}^{\\bx,\\by}.
\\label{eq_proof:lem:NC_PL_phi_error_3}\n\\end{align}\nNext, we bound $\\mathbb E \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right)$.\n\\begin{align}\n & \\mathbb E \\left[ \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n &= \\underbrace{\\mathbb E \\left[ \\Phi ({\\mathbf x_{t+1}}) - \\Phi ({\\mathbf x_t}) \\right]}_{I_1} + \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\underbrace{\\mathbb E \\left[ f({\\mathbf x_t}, {\\mathbf y_t}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right]}_{I_2}\n\\end{align}\n$I_1$ is bounded in \\cref{lem:NC_PL_Phi_decay_one_iter}.\nWe next bound $I_2$. Using $L_f$-smoothness of $f(\\cdot, {\\mathbf y_t})$,\n\\begin{align}\n & f({\\mathbf x_t}, {\\mathbf y_t}) + \\left\\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), {\\mathbf x_{t+1}} - {\\mathbf x_t} \\right\\rangle - \\frac{L_f}{2} \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\nonumber \\\\\n \\Rightarrow I_2 &= \\mathbb E \\left[ f({\\mathbf x_t}, {\\mathbf y_t}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n & \\leq \\eta_x \\mathbb E \\left\\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}}) \\right\\rangle + \\frac{\\eta_x^2 L_f}{2} \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})}^2 \\nonumber \\\\\n & \\leq \\eta_x \\mathbb E \\left\\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}) \\right\\rangle + \\frac{\\eta_x^2 L_f}{2} \\left[ \\frac{\\sigma^2}{n} + \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\right] \\tag{\\cref{assum:bdd_var}} \\\\\n & \\leq \\frac{\\eta_x}{2} \\mathbb E \\left[ \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t})}^2 + \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\right] + \\frac{\\eta_x^2 L_f}{2} \\left[ \\frac{\\sigma^2}{n} + \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\right] \\nonumber \\\\\n & \\leq \\eta_x \\mathbb E \\left[ \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 + \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - \\nabla \\Phi({\\mathbf x_t})}^2 \\right] + \\frac{\\eta_x^2 L_f \\sigma^2}{2 n} + \\frac{\\eta_x}{2} \\left( 1 + \\eta_x L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\overset{(a)}{\\leq} \\eta_x \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 + \\eta_x L_f^2 \\mathbb E \\norm{{\\mathbf y_t} - {\\mathbf y}^*({\\mathbf x_t})}^2 + \\frac{\\eta_x^2 L_f \\sigma^2}{2 n} + \\frac{\\eta_x}{2} \\left( 1 + \\eta_x L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\leq \\eta_x \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 + \\frac{2 \\eta_x L_f^2}{\\mu} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{\\eta_x^2 L_f \\sigma^2}{2 n} + \\frac{\\eta_x}{2} \\left( 1 + \\eta_x L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2. 
\\label{eq_proof:lem:NC_PL_phi_error_4}\n\\end{align}\nwhere $(a)$ follows from \\cref{assum:smoothness} and \\cref{lem:Phi_smooth_nouiehed}. \nAlso, recall that ${\\mathbf y}^*({\\mathbf x}) \\in \\operatornamewithlimits{arg\\,max}_{{\\mathbf y}'} f({\\mathbf x}, {\\mathbf y}')$.\n\\eqref{eq_proof:lem:NC_PL_phi_error_4} follows from the quadratic growth property of $\\mu$-PL functions (\\cref{lem:quad_growth}).\nSubstituting the bounds on $I_1, I_2$ from \\cref{lem:NC_PL_Phi_decay_one_iter} and \\eqref{eq_proof:lem:NC_PL_phi_error_4} respectively, in \\eqref{eq_proof:lem:NC_PL_phi_error_3}, we get\n\\begin{align}\n & \\mathbb E \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right) \\nonumber \\\\\n & \\leq (1 - \\eta_y \\mu) \\left( 1 + \\frac{4 \\eta_x L_f^2}{\\mu} \\right) \\mathbb E \\left( \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right) \\nonumber \\\\\n & \\quad + (1 - \\eta_y \\mu) \\left[ - \\frac{\\eta_x}{2} \\mathbb E \\norm{\\nabla \\Phi ({\\mathbf x_t})}^2 - \\frac{\\eta_x}{2} \\left( 1 - L_{\\Phi} \\eta_x \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + 2 \\eta_x L_f^2 \\Delta_{t}^{\\bx,\\by} + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{2 n} \\right] \\nonumber \\\\\n & \\quad + (1 - \\eta_y \\mu) \\left[ \\eta_x \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 + \\frac{\\eta_x^2 L_f \\sigma^2}{2 n} + \\frac{\\eta_x}{2} \\left( 1 + \\eta_x L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\right] \\nonumber \\\\\n & \\quad - \\frac{\\eta_y}{2} \\left( 1 - \\eta_y L_f \\right) \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_y^2 L_f \\sigma^2}{2n} \\nonumber \\\\\n & \\quad + \\eta_y L_f^2 \\left[ \\eta_x^2 \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_x^2 \\sigma^2}{n} \\right] + \\eta_y L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\leq \\left( 1 - \\frac{\\eta_y \\mu}{2} \\right) \\mathbb E \\left( \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right) + \\frac{\\eta_y^2 L_f \\sigma^2}{2n} + \\frac{\\eta_y L_f^2 \\eta_x^2 \\sigma^2}{n} + \\eta_y L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + \\left[ (1 - \\eta_y \\mu) \\frac{\\eta_x^2}{2} \\left( L_f + L_{\\Phi} \\right) + \\eta_y L_f^2 \\eta_x^2 \\right] \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\nonumber \\\\\n & \\quad + (1 - \\eta_y \\mu) \\left[ \\frac{\\eta_x}{2} \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 + \\frac{\\eta_x^2 L_f \\sigma^2}{2 n} + 2 \\eta_x L_f^2 \\Delta_{t}^{\\bx,\\by} + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{2 n} \\right], \\label{eq_proof:lem:NC_PL_phi_error_5}\n\\end{align}\nwhere we choose $\\eta_x$ such that $(1 - \\eta_y \\mu) \\left( 1 + \\frac{4 \\eta_x L_f^2}{\\mu} \\right) \\leq \\left( 1 - \\frac{\\eta_y \\mu}{2} \\right)$. 
\nThis holds if $\\frac{4 \\eta_x L_f^2}{\\mu} \\leq \\frac{\\eta_y \\mu}{2} \\Rightarrow \\eta_x \\leq \\frac{\\eta_y}{8 \\kappa^2}$.\nSumming \\eqref{eq_proof:lem:NC_PL_phi_error_5} over $t=0, \\hdots, T-1$, and rearranging the terms, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right) \\nonumber \\\\\n & \\leq \\left( 1 - \\frac{\\eta_y \\mu}{2} \\right) \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left( \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right) + L_f^2 \\left( 2 \\eta_x (1 - \\eta_y \\mu) + \\eta_y \\right) \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + \\left[ (1 - \\eta_y \\mu) \\frac{\\eta_x^2}{2} \\left( L_f + L_{\\Phi} \\right) + \\eta_y L_f^2 \\eta_x^2 \\right] \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + (1 - \\eta_y \\mu) \\frac{\\eta_x}{2} \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 \\nonumber \\\\\n & \\quad + \\frac{\\eta_y^2 L_f \\sigma^2}{2n} + \\frac{\\eta_y L_f^2 \\eta_x^2 \\sigma^2}{n} + (1 - \\eta_y \\mu) \\left[ \\frac{\\eta_x^2 L_f \\sigma^2}{2 n} + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{2 n} \\right]. \\nonumber\n \n\\end{align}\nFurther rearranging, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left( \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right) \\nonumber \\\\\n & \\leq \\frac{2}{\\eta_y \\mu} \\left[ \\frac{\\Phi ({\\mathbf x}_0) - f({\\mathbf x}_0, {\\mathbf y}_0)}{ T} - \\frac{\\mathbb E \\left( \\Phi ({\\mathbf x}_T) - f({\\mathbf x}_T, {\\mathbf y}_T) \\right)}{ T} \\right] + \\frac{2 L_f^2}{\\mu \\eta_y} \\left( 2 \\eta_x (1 - \\eta_y \\mu) + \\eta_y \\right) \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + \\left[ (1 - \\eta_y \\mu) \\frac{\\eta_x^2}{2} \\left( L_f + L_{\\Phi} \\right) + \\eta_y L_f^2 \\eta_x^2 \\right] \\frac{2}{\\eta_y \\mu T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + (1 - \\eta_y \\mu) \\frac{\\eta_x}{\\eta_y \\mu T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 \\nonumber \\\\\n & \\quad + \\frac{\\eta_y L_f \\sigma^2}{\\mu n} + \\frac{2 L_f^2 \\eta_x^2 \\sigma^2}{\\mu n} + \\frac{(1 - \\eta_y \\mu)}{\\mu \\eta_y} \\left[ \\frac{\\eta_x^2 L_f \\sigma^2}{n} + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{n} \\right] \\nonumber \\\\\n & \\leq \\frac{2 \\left( \\Phi ({\\mathbf x}_0) - f({\\mathbf x}_0, {\\mathbf y}_0) \\right)}{\\eta_y \\mu T} + \\frac{2 L_f^2}{\\mu \\eta_y} \\left( 2 \\eta_x (1 - \\eta_y \\mu) + \\eta_y \\right) \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx,\\by} \\tag{$\\because \\Phi ({\\mathbf x}_T) \\triangleq \\max_{\\mathbf y} f({\\mathbf x}_T, {\\mathbf y}) \\geq f({\\mathbf x}_T, {\\mathbf y}_T)$} \\\\\n & \\quad + \\left[ (1 - \\eta_y \\mu) \\frac{\\eta_x^2}{2} \\left( L_f + L_{\\Phi} \\right) + \\eta_y L_f^2 \\eta_x^2 \\right] \\frac{2}{\\eta_y \\mu T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + (1 - \\eta_y \\mu) \\frac{\\eta_x}{\\eta_y \\mu T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 \\nonumber \\\\\n & \\quad + \\frac{\\eta_y L_f \\sigma^2}{\\mu n} + \\frac{2 L_f^2 \\eta_x^2 \\sigma^2}{\\mu n} + \\frac{(1 - \\eta_y \\mu)}{\\mu \\eta_y} \\left[
\\frac{\\eta_x^2 L_f \\sigma^2}{n} + \\frac{L_{\\Phi} \\eta_x^2 \\sigma^2}{n} \\right], \\nonumber\n \n\\end{align}\nwhich concludes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\newpage\n\\section{Nonconvex-PL (NC-PL) Functions: Momentum Local SGDA (\\texorpdfstring{\\cref{thm:NC_PL_mom}}{Theorem 2})} \\label{app:NC_PL_mom}\nIn this section we prove the convergence of \\cref{alg_NC_momentum} for Nonconvex-PL functions, and provide the complexity and\ncommunication guarantees.\n\nWe organize this section as follows. First, in \\cref{sec:NC_PL_mom_int_results} we present some intermediate results. \nNext, in \\cref{sec:NC_PL_mom_thm_proof}, we present the proof of \\cref{thm:NC_PL_mom}, which is followed by the proofs of the intermediate results in \\cref{sec:NC_PL_mom_int_results_proofs}.\n\nAgain, the problem we solve is\n\\begin{align*}\n \\min_{{\\mathbf x}} \\max_{{\\mathbf y}} \\left\\{ f({\\mathbf x}, {\\mathbf y}) \\triangleq \\frac{1}{n} \\sum_{i=1}^n f_i({\\mathbf x}, {\\mathbf y}) \\right\\}.\n\\end{align*}\nWe define\n\\begin{align}\n \\Phi ({\\mathbf x}) \\triangleq \\max_{{\\mathbf y}} f({\\mathbf x}, {\\mathbf y}) \\quad \\text{and} \\quad {\\mathbf y}^* ({\\mathbf x}) \\in \\operatornamewithlimits{arg\\,max}_{{\\mathbf y}} f({\\mathbf x}, {\\mathbf y}). \\label{eq:Phi_defn}\n\\end{align}\nSince $f({\\mathbf x}, \\cdot)$ is $\\mu$-PL (\\cref{assum:PL_y}), ${\\mathbf y}^*({\\mathbf x})$ is not necessarily unique.\n\nFor the sake of analysis, we define \\textit{virtual} sequences of average iterates and average direction estimates:\n\\begin{align*}\n & {\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}; \\quad {\\mathbf y_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_t}; \\\\\n & \\Tbx_{t+\\frac{1}{2}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\Tbx^i_{t+\\frac{1}{2}}; \\quad \\Tby_{t+\\frac{1}{2}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\Tby^i_{t+\\frac{1}{2}}; \\\\\n & {\\mathbf d_{x,t}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf d^i_{x,t}}; \\quad {\\mathbf d_{y,t}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf d^i_{y,t}}.\n\\end{align*}\nNote that these sequences are constructed only for the sake of analysis. During an actual run of the algorithm, these sequences exist only at the time instants when the clients communicate with the server.\nWe next write the update expressions for these virtual sequences, using the updates in Algorithm \\ref{alg_NC_momentum}.\n\\begin{equation}\n \\begin{aligned}\n & \\Tbx_{t+\\frac{1}{2}} = {\\mathbf x_t} - \\eta_x {\\mathbf d_{x,t}}, \\qquad {\\mathbf x_{t+1}} = {\\mathbf x_t} + \\alpha_t \\left( \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right) \\\\\n & \\Tby_{t+\\frac{1}{2}} = {\\mathbf y_t} + \\eta_y{\\mathbf d_{y,t}}, \\qquad {\\mathbf y_{t+1}} = {\\mathbf y_t} + \\alpha_t \\left( \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right) \\\\\n & {\\mathbf d_{x,t+1}} = (1 - \\beta_x \\alpha_t) {\\mathbf d_{x,t}} + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) \\\\\n & {\\mathbf d_{y,t+1}} = (1 - \\beta_y \\alpha_t) {\\mathbf d_{y,t}} + \\beta_y \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\by} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}).\n \\end{aligned}\n \\label{eq:NC_mom_update_avg}\n\\end{equation}\nNext, we present some intermediate results which we use in the proof of \\cref{thm:NC_PL_mom}. 
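Before doing so, and purely as an illustration of the update expressions in \\eqref{eq:NC_mom_update_avg}, we include a minimal Python sketch that simulates the client-level form of these updates on a toy problem. The quadratic local losses, the noise model, all parameter values, and the choice to also average the direction estimates at synchronization are assumptions made only for this sketch; they are not part of \\cref{alg_NC_momentum} or of the analysis.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy sketch of momentum-based local updates with periodic averaging.\n# Assumed local losses: f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*y^2,\n# so each f_i is smooth and f(x, .) is strongly concave (hence PL) in y.\nrng = np.random.default_rng(0)\nn, tau, T = 4, 5, 200                 # clients, local steps, iterations\nbeta = 3.0                            # beta_x = beta_y = beta\nalpha = 1.0 \/ (48 * tau)              # alpha_t = alpha <= 1\/(48*tau)\neta_x, eta_y = 0.005, 0.1             # step sizes, with eta_x << eta_y\na = rng.uniform(0.5, 1.5, n)          # client heterogeneity\nb = rng.uniform(0.5, 1.5, n)\nsigma = 0.1                           # gradient-noise level\n\nx = np.zeros(n); y = np.zeros(n)      # client iterates x_t^i, y_t^i\ndx = np.zeros(n); dy = np.zeros(n)    # direction estimates d_{x,t}^i, d_{y,t}^i\n\nfor t in range(T):\n    x_half = x - eta_x * dx           # half-step iterates (x-tilde)\n    y_half = y + eta_y * dy           # half-step iterates (y-tilde)\n    x = x + alpha * (x_half - x)      # interpolated iterate updates\n    y = y + alpha * (y_half - y)\n    # stochastic partial gradients at the new iterates\n    gx = a * x + b * y + sigma * rng.standard_normal(n)\n    gy = b * x - y + sigma * rng.standard_normal(n)\n    # momentum direction updates\n    dx = (1 - beta * alpha) * dx + beta * alpha * gx\n    dy = (1 - beta * alpha) * dy + beta * alpha * gy\n    if (t + 1) % tau == 0:            # synchronize every tau steps;\n        x[:] = x.mean(); y[:] = y.mean()      # averaging the direction\n        dx[:] = dx.mean(); dy[:] = dy.mean()  # estimates is an assumption\n\n# The virtual averages of the analysis correspond to x.mean(), y.mean().\nprint(x.mean(), y.mean())\n\\end{verbatim}\nBetween synchronizations the client iterates drift apart; this drift is precisely the consensus error bounded in the lemmas below.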
To make the proof concise, the proofs of these intermediate results are relegated to \\cref{sec:NC_PL_mom_int_results_proofs}.\n\n\\subsection{Intermediate Lemmas} \\label{sec:NC_PL_mom_int_results}\n\nWe use the following result from \\cite{nouiehed19minimax_neurips19} about the smoothness of $\\Phi(\\cdot)$.\n\n\\begin{lemma}\n\\label{lem:Phi_PL_smooth_nouiehed}\nIf the function $f({\\mathbf x}, \\cdot)$ satisfies Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y} ($L_f$-smoothness and $\\mu$-PL condition in ${\\mathbf y}$), then $\\Phi ({\\mathbf x})$ is $L_{\\Phi}$-smooth with $L_{\\Phi} = \\kappa L_f\/2 + L_f$, where $\\kappa = L_f\/\\mu$, and \n$$\\nabla \\Phi(\\cdot) = \\nabla_{\\bx} f(\\cdot, {\\mathbf y}^*(\\cdot)),$$\nwhere ${\\mathbf y}^*(\\cdot) \\in \\operatornamewithlimits{arg\\,max}_{\\mathbf y} f(\\cdot, {\\mathbf y})$.\n\\end{lemma}\n\n\n\\begin{lemma}\n\\label{lem:NC_PL_mom_Phi_1_step_decay}\nSuppose the loss function $f$ satisfies Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y}, and the step-sizes $\\eta_x$ and $\\alpha_t$ satisfy $0 < \\alpha_t \\eta_x \\leq \\frac{\\mu}{4 L_f^2}$.\nThen the iterates generated by Algorithm \\ref{alg_NC_momentum} satisfy\n\\begin{align}\n \\Phi ({\\mathbf x_{t+1}}) - \\Phi ({\\mathbf x_t}) & \\leq - \\frac{\\alpha_t}{2 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + 2 \\eta_x \\alpha_t \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2, \\nonumber\n \n\\end{align}\nwhere $\\Phi (\\cdot)$ is defined in \\eqref{eq:Phi_defn}.\n\\end{lemma}\n\n\nNext, we bound the difference $\\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t})$.\n\n\n\\begin{lemma}\n\\label{lem:NC_PL_mom_phi_error}\nSuppose the loss function $f$ satisfies Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y}, and the step-sizes $\\eta_x, \\eta_y$, and $\\alpha_t$ satisfy $0 < \\alpha_t \\eta_y \\leq \\frac{1}{2 L_f}$, $0 < \\alpha_t \\eta_x \\leq \\frac{\\mu}{8 L_f^2}$, and $\\eta_x \\leq \\frac{\\eta_y}{8 \\kappa^2}$.\nThen the iterates generated by Algorithm \\ref{alg_NC_momentum} satisfy\n\\begin{align}\n \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) & \\leq \\left( 1 - \\frac{\\alpha_t \\eta_y \\mu}{2} \\right) \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] - \\frac{\\alpha_t}{4 \\eta_y} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 \\nonumber \\\\\n & \\quad + \\frac{\\alpha_t}{2 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\alpha_t \\eta_y \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2. \\nonumber\n \n\\end{align}\n\\end{lemma}\n\n\nThe next result bounds the variance in the average direction estimates ${\\mathbf d_{x,t}}, {\\mathbf d_{y,t}}$ \\eqref{eq:NC_mom_update_avg} w.r.t.
the partial gradients of the global loss function $\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t})$, respectively.\n\n\\begin{lemma}\n\\label{lem:NC_PL_mom_grad_var_bound}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy \\cref{assum:smoothness}, and the stochastic oracles for the local functions $\\{ f_i \\}$ satisfy \\cref{assum:bdd_var}.\nFurther, in \\cref{alg_NC_momentum}, we choose $\\beta_x = \\beta_y = \\beta$, and $\\alpha_t$ such that $0 < \\alpha_t < 1\/\\beta$.\nThen the following holds.\n\\begin{equation}\n \\begin{aligned}\n & \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t+1}} \\right\\|^2 \\leq \\left( 1 - \\frac{\\beta \\alpha_t}{2} \\right) \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\frac{\\beta^2 \\alpha_t^2 \\sigma^2}{n} \\\\\n & \\quad + \\frac{2 L_f^2 \\alpha_t}{\\beta} \\mathbb E \\left( \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\left\\| \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\|^2 \\right) + \\beta \\alpha_t \\frac{1}{n} \\sum_{i=1}^n L_f^2 \\mathbb E \\left( \\left\\| {\\mathbf x^i_{t+1}} - {\\mathbf x_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y^i_{t+1}} - {\\mathbf y_{t+1}} \\right\\|^2 \\right),\n \\end{aligned}\n \\label{eq:lem:NC_PL_mom_grad_var_bound_x}\n\\end{equation}\n\\begin{equation}\n \\begin{aligned}\n & \\mathbb E \\left\\| \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{y,t+1}} \\right\\|^2 \\leq \\left( 1 - \\frac{\\beta \\alpha_t}{2} \\right) \\mathbb E \\left\\| \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}} \\right\\|^2 + \\frac{\\beta^2 \\alpha_t^2 \\sigma^2}{n} \\\\\n & \\quad + \\frac{2 L_f^2 \\alpha_t}{\\beta} \\mathbb E \\left( \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\left\\| \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\|^2 \\right) + \\beta \\alpha_t \\frac{1}{n} \\sum_{i=1}^n L_f^2 \\mathbb E \\left( \\left\\| {\\mathbf x^i_{t+1}} - {\\mathbf x_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y^i_{t+1}} - {\\mathbf y_{t+1}} \\right\\|^2 \\right).\n \\end{aligned}\n \\label{eq:lem:NC_PL_mom_grad_var_bound_y}\n\\end{equation}\n\\end{lemma}\nNotice that the bound depends on the disagreement of the individual iterates with the \\textit{virtual} global average: $\\mathbb E \\left\\| {\\mathbf x^i_{t+1}} - {\\mathbf x_{t+1}} \\right\\|^2$, $\\mathbb E \\left\\| {\\mathbf y^i_{t+1}} - {\\mathbf y_{t+1}} \\right\\|^2$, which is nonzero since $\\tau > 1$, and the clients carry out multiple local updates between successive rounds of communication with the server.\nNext, we bound these synchronization errors.\nHenceforth, for the sake of brevity, we use the following notations:\n\\begin{align*}\n \\Delta_{t}^{\\bx,\\by} & \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 + \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2 \\right), \\\\\n \\Delta_{t}^{\\bdx} & \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf d^i_{x,t}} - {\\mathbf d_{x,t}} \\right\\|^2, \\\\\n \\Delta_{t}^{\\bdy} & \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf d^i_{y,t}} - {\\mathbf d_{y,t}} \\right\\|^2.\n\\end{align*}\n\n\n\\begin{lemma}\n\\label{lem:NC_PL_mom_cons_errs_recursion}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_hetero}, and 
the stochastic oracles for the local functions $\\{ f_i \\}$ satisfy \\cref{assum:bdd_var}.\nFurther, in \\cref{alg_NC_momentum}, we choose $\\beta_x = \\beta_y = \\beta$, and $\\alpha_t$ such that $0 < \\alpha_t < 1\/\\beta$.\nThen, the iterates $\\{ {\\mathbf x^i_t}, {\\mathbf y^i_t} \\}$ and direction estimates $\\{ {\\mathbf d^i_{x,t}}, {\\mathbf d^i_{y,t}} \\}$ generated by Algorithm \\ref{alg_NC_momentum} satisfy\n\\begin{align}\n \\Delta_{t+1}^{\\bx,\\by} & \\leq (1+c_1) \\Delta_{t}^{\\bx,\\by} + \\left( 1 + \\mfrac{1}{c_1} \\right) \\alpha_t^2 \\left( \\eta_x^2 \\Delta_{t}^{\\bdx} + \\eta_y^2 \\Delta_{t}^{\\bdy} \\right), \\qquad \\text{ for any constant } c_1 > 0\n \\label{eq:lem:NC_PL_mom_xy_cons_errs_recursion} \n \\\\\n \\Delta_{t+1}^{\\bdx} & \\leq (1-\\beta \\alpha_t) \\Delta_{t}^{\\bdx} + 6 L_f^2 \\beta \\alpha_t \\Delta_{t+1}^{\\bx,\\by} + \\beta \\alpha_t \\left[ \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\varsigma_x^2 \\right],\n \\label{eq:lem:NC_PL_mom_p_cons_errs_recursion}\n \\\\\n \\Delta_{t+1}^{\\bdy} & \\leq (1-\\beta \\alpha_t) \\Delta_{t}^{\\bdy} + 6 L_f^2 \\beta \\alpha_t \\Delta_{t+1}^{\\bx,\\by} + \\beta \\alpha_t \\left[ \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\varsigma_y^2 \\right].\n \\label{eq:lem:NC_PL_mom_q_cons_errs_recursion}\n\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}\n\\label{lem:NC_PL_mom_induct_bd_cons_error_xy}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_hetero}, and the stochastic oracles for the local functions $\\{ f_i \\}$ satisfy \\cref{assum:bdd_var}.\nFurther, in \\cref{alg_NC_momentum}, we choose $\\beta_x = \\beta_y = \\beta$, and step-sizes $\\eta_x, \\eta_y, \\alpha_t$ such that \n$\\alpha_t \\equiv \\alpha \\leq \\min \\left\\{ \\frac{\\beta}{6 L_f^2 (\\eta_y^2 + \\eta_x^2)}, \\frac{1}{16 \\beta \\tau} \\right\\}$ for all $t$, and $L_f^2 (\\eta_y^2 + \\eta_x^2) \\leq \\frac{\\beta^2}{6}$.\nSuppose $s \\tau + 1 \\leq t \\leq (s+1) \\tau -1$ for some nonnegative integer $s$ (i.e., $t$ is between two consecutive synchronizations).\nAlso, let $1 \\leq k < \\tau$ such that $t - k \\geq s \\tau + 1$.\nThen, the consensus error satisfies\n\\begin{align}\n \\Delta_{t}^{\\bx,\\by} \\leq (1 + 2 k \\theta) \\Delta_{t-k}^{\\bx,\\by} + 2 k \\mfrac{\\alpha}{\\beta} (1-\\beta \\alpha) \\left( \\eta_x^2 \\Delta_{t-k-1}^{\\bdx} + \\eta_y^2 \\Delta_{t-k-1}^{\\bdy} \\right) + k^2 (1+\\theta) \\Upsilon,\n \\label{eq:lem:NC_PL_mom_induct_bd_cons_error_xy}\n\\end{align}\nwhere $\\theta = c_1 + 6 L_f^2 \\alpha^2 (\\eta_y^2 + \\eta_x^2)$, $c_1 = \\frac{\\beta \\alpha}{1 - \\beta \\alpha}$, and $\\Upsilon = \\alpha^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\eta_x^2 \\varsigma_x^2 + 3 \\eta_y^2 \\varsigma_y^2 \\right]$.\n\\end{lemma}\n\n\n\\begin{cor}\n\\label{cor:NC_PL_mom_induct_bd_cons_error_xy}\nSince the clients in Algorithm \\ref{alg_NC_momentum} communicate with the server every $\\tau$ iterations, under the conditions of \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy}, the iterate consensus error is bounded, for all $t = 0, \\hdots, T-1$, as follows.\n\\begin{align*}\n \\Delta_{t}^{\\bx,\\by} \\leq \\Theta \\left( (\\tau - 1)^2 \\alpha^2 \\left( \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 + \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right).\n\\end{align*}\n\\end{cor}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Proof of \\texorpdfstring{\\cref{thm:NC_PL_mom}}{Theorem 
2}\n\\label{sec:NC_PL_mom_thm_proof}\n\nFor the sake of completeness, we first state the full statement of \\cref{thm:NC_PL_mom} in a slightly more general form.\n\n\\begin{theorem*}\nSuppose the local loss functions $\\{ f_i \\}_i$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, and the global function $f$ satisfies \\cref{assum:PL_y}.\nSuppose in \\cref{alg_NC_momentum}, \n$\\beta_x = \\beta_y = \\beta = 3$, $\\alpha_t \\equiv \\alpha \\leq \\min \\big\\{ \\frac{\\beta}{6 L_f^2 (\\eta_y^2 + \\eta_x^2)}, \\frac{1}{48 \\tau} \\big\\}$, for all $t$, and the step-sizes $\\eta_x, \\eta_y$ are chosen such that $\\eta_y \\leq \\frac{\\mu}{8 L_f^2}$, and $\\frac{\\eta_x}{\\eta_y} \\leq \\frac{1}{20 \\kappa^2}$, where $\\kappa = L_f\/\\mu$ is the condition number.\nThen the iterates generated by \\cref{alg_NC_momentum} satisfy\n\\begin{equation}\n \\begin{aligned}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\left[ \\frac{1}{\\eta_x^2} \\mathbb E \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{2 L_f^2}{\\mu} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right] \\\\\n & \\leq \\underbrace{\\mathcal O \\left( \\frac{\\kappa^2}{\\eta_y \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\right)}_{\\text{Error with full synchronization}} + \\underbrace{\\mathcal O \\Big( (\\tau - 1)^2 \\alpha^2 \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right) \\Big)}_{\\text{Error due to local updates}}.\n \\end{aligned}\n \\label{eq_proof:thm_NC_PL_mom_conv_rate}\n\\end{equation}\nRecall that $\\sigma^2$ is the variance of the stochastic gradient oracle (\\cref{assum:bdd_var}), and $\\varsigma_x, \\varsigma_y$ quantify the heterogeneity of the local functions (\\cref{assum:bdd_hetero}).\nWith $\\alpha = \\sqrt{\\frac{n}{T}}$ in \\eqref{eq_proof:thm_NC_PL_mom_conv_rate}, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\frac{1}{\\eta_x^2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{2 L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right] \\nonumber \\\\\n & \\qquad \\leq \\mathcal O \\left( \\frac{\\kappa^2 + \\sigma^2}{\\sqrt{nT}} \\right) + \\mathcal O \\left( \\frac{n (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right)}{T} \\right).
\\nonumber\n\\end{align}\n\\end{theorem*}\n\n\\begin{remark}[Convergence results in terms of $\\norm{\\nabla \\Phi (\\cdot)}$]\nThe inequality \\eqref{eq:thm:NC_PL_mom} results from the following reasoning.\n\\begin{align}\n \\norm{\\nabla \\Phi ({\\mathbf x_t})} &= \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y}^*({\\mathbf x_t}))} \\tag{\\cref{lem:Phi_PL_smooth_nouiehed}} \\\\\n & \\leq \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y}^*({\\mathbf x_t})) - \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t})} + \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t})} \\tag{Triangle inequality} \\\\\n & \\leq L_f \\norm{{\\mathbf y}^*({\\mathbf x_t}) - {\\mathbf y_t}} + \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}}} + \\norm{{\\mathbf d_{x,t}}} \\tag{\\cref{assum:smoothness}} \\\\\n &\\leq L_f \\sqrt{\\frac{2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right]} + \\norm{\\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}}} + \\frac{1}{\\eta_x} \\norm{\\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t}}. \\tag{quadratic growth of $\\mu$-PL functions (\\cref{lem:quad_growth})}\n \\label{eq:NC_PL_mom_compare_metrics}\n \\\\\n \\Rightarrow \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\norm{\\nabla \\Phi({\\mathbf x_t})}^2 & \\leq \\frac{3}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left( \\frac{1}{\\eta_x^2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{2 L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right). \\nonumber\n\\end{align}\n\\end{remark}\n\n\\begin{proof}[Proof of \\cref{thm:NC_PL_mom}]\n\n\nMultiplying both sides of \\cref{lem:NC_PL_mom_phi_error} by $10 L_f^2 \\eta_x \/ (\\mu^2 \\eta_y)$, we get\n\n\\begin{align}\n & \\frac{10 L_f^2 \\eta_x}{\\mu^2 \\eta_y} \\Big[ \\left[ \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right] - \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\Big] \\nonumber \\\\\n & \\leq - \\frac{5 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] - \\frac{5 \\kappa^2 \\alpha_t \\eta_x}{2 \\eta_y^2} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 \\nonumber \\\\\n & \\quad + \\frac{5 L_f^2 \\alpha_t}{\\mu^2 \\eta_y} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + 10 \\kappa^2 \\eta_x \\alpha_t \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2. \n \\label{eq_proof:thm:NC_PL_mom_1}\n\\end{align}\n\n\nDefine\n\\begin{align}\n \\mc E_{t} \\triangleq \\Phi({\\mathbf x_t}) - \\Phi^* + \\frac{10 L_f^2 \\eta_x}{\\mu^2 \\eta_y} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right].
\\nonumber\n \n\\end{align}\nThen, using \\cref{lem:NC_PL_mom_Phi_1_step_decay} and \\eqref{eq_proof:thm:NC_PL_mom_1}, we get\n\\begin{align}\n \\mc E_{t+1} - \\mc E_{t} & \\leq - \\left( \\frac{\\alpha_t}{2 \\eta_x} - \\frac{5 L_f^2 \\alpha_t}{\\mu^2 \\eta_y} \\right) \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 - \\frac{\\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] - \\frac{5 \\kappa^2 \\alpha_t \\eta_x}{2 \\eta_y^2} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 \\nonumber \\\\\n & \\quad + 2 \\eta_x \\alpha_t \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + 10 \\kappa^2 \\eta_x \\alpha_t \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2 \\nonumber \\\\\n & \\leq - \\frac{\\alpha_t}{4 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 - \\frac{\\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] - \\frac{5 \\kappa^2 \\alpha_t \\eta_x}{2 \\eta_y^2} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 \\nonumber \\\\\n & \\quad + 2 \\eta_x \\alpha_t \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + 2 \\alpha_t \\eta_y \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2, \\label{eq_proof:thm:NC_PL_mom_2}\n\\end{align}\nwhere $- \\frac{\\alpha_t}{2 \\eta_x} + \\frac{5 \\kappa^2 \\alpha_t}{\\eta_y} \\leq - \\frac{\\alpha_t}{4 \\eta_x}$ since $\\eta_x \\leq \\frac{\\eta_y}{20 \\kappa^2}$. \nNext, we choose $\\beta_x = \\beta_y = \\beta = 3$, and define\n\\begin{align}\n \\mathfrak{E}_{t} \\triangleq \\mc E_{t} + \\frac{2 \\eta_x}{\\mu \\eta_y} \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\frac{2 \\eta_x}{\\mu \\eta_y} \\left\\| \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}} \\right\\|^2, \\quad t \\geq 0.
\\nonumber\n \n\\end{align}\nThen, using the bounds in \\cref{lem:NC_PL_mom_grad_var_bound} and \\eqref{eq_proof:thm:NC_PL_mom_2}, we get\n\\begin{align}\n \\mathbb E \\left[ \\mathfrak{E}_{t+1} - \\mathfrak{E}_{t} \\right] & \\leq - \\left( \\frac{\\alpha_t}{2 \\eta_x} - 2 \\frac{2 \\eta_x}{\\mu \\eta_y} \\frac{2 L_f^2 \\alpha_t}{3} \\right) \\mathbb E \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 - \\frac{\\eta_x \\alpha_t L_f^2}{\\mu} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n & \\quad - \\left( \\frac{2 \\eta_x}{\\mu \\eta_y} \\frac{3 \\alpha_t}{2} - 2 \\alpha_t \\eta_x \\right) \\mathbb E \\left[ \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\left\\| \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}} \\right\\|^2 \\right] \\nonumber \\\\\n & \\quad - \\left( \\frac{5 \\alpha_t \\kappa^2 \\eta_x}{2 \\eta_y^2} - 2 \\frac{2 \\eta_x}{\\mu \\eta_y} \\frac{2 L_f^2 \\alpha_t}{3} \\right) \\mathbb E \\left\\| \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\|^2 + 2 \\frac{2 \\eta_x}{\\mu \\eta_y} 3 \\alpha_t L_f^2 \\Delta_{t+1}^{\\bx,\\by} + 2 \\frac{2 \\eta_x}{\\mu \\eta_y} \\frac{9 \\alpha_t^2 \\sigma^2}{n} \\nonumber \\\\\n & \\leq - \\frac{\\alpha_t}{4 \\eta_x} \\mathbb E \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 - \\frac{\\eta_x \\alpha_t L_f^2}{\\mu} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] -\\frac{\\alpha_t \\kappa^2 \\eta_x}{\\eta_y^2} \\mathbb E \\left\\| \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\|^2 \\nonumber \\\\\n & \\quad - \\frac{2 \\alpha_t \\eta_x}{\\mu \\eta_y} \\mathbb E \\left[ \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\left\\| \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}} \\right\\|^2 \\right] + \\frac{4 \\eta_x}{\\mu \\eta_y} \\left[ 3 \\alpha_t L_f^2 \\Delta_{t+1}^{\\bx,\\by} + \\frac{9 \\alpha_t^2 \\sigma^2}{n} \\right] \\label{eq_proof:thm:NC_PL_mom_3}\n\\end{align}\n\nHere, using $\\eta_y\\leq 1\/(8 L_f) \\leq 1\/(8 \\mu)$ and $\\eta_y\\geq 20 \\eta_x \\kappa^2$, we simplify the coefficients in \\eqref{eq_proof:thm:NC_PL_mom_3} as follows\n\\begin{align*}\n & - \\frac{\\alpha_t}{2 \\eta_x} \\left( 1 - \\frac{16 \\eta_x^2 L_f^2}{3 \\mu \\eta_y} \\right) = - \\frac{\\alpha_t}{2 \\eta_x} + \\frac{\\alpha_t}{2 \\eta_x} \\frac{16 \\mu \\eta_y \\kappa^2}{3} \\frac{\\eta_x^2}{\\eta_y^2} \\leq - \\frac{\\alpha_t}{2 \\eta_x} + \\frac{\\alpha_t}{2 \\eta_x} \\frac{16}{3} \\frac{1}{8} \\frac{1}{400 \\kappa^2} \\leq -\\frac{\\alpha_t}{4 \\eta_x} \\tag{$\\because \\kappa \\geq 1$} \\\\\n & -\\left( \\frac{2 \\eta_x}{\\mu \\eta_y} \\frac{3 \\alpha_t}{2} - 2 \\eta_x \\alpha_t \\right) \\leq - \\frac{3 \\eta_x \\alpha_t}{\\mu \\eta_y} + \\frac{2 \\eta_x \\alpha_t}{8 \\mu \\eta_y} \\leq - \\frac{2 \\eta_x \\alpha_t}{\\mu \\eta_y}, \\tag{$\\because 1 \\leq 1\/(8 \\mu \\eta_y)$} \\nonumber \\\\\n & - \\left( \\frac{5 \\alpha_t \\kappa^2 \\eta_x}{2 \\eta_y^2} - \\frac{4 \\eta_x}{\\mu \\eta_y} \\frac{2 L_f^2 \\alpha_t}{3} \\right) = \\frac{\\alpha_t \\kappa^2 \\eta_x}{\\eta_y^2} \\left( -\\frac{5}{2} + \\frac{8}{3} \\eta_y \\mu \\right) \\leq \\frac{\\alpha_t \\kappa^2 \\eta_x}{\\eta_y^2} \\left( -\\frac{5}{2} + \\frac{1}{3} \\right) \\leq -\\frac{\\alpha_t \\kappa^2 \\eta_x}{\\eta_y^2}. 
\\tag{$\\because 1 \\leq 1\/(8 \\mu \\eta_y)$}\n\\end{align*}\nSumming \\eqref{eq_proof:thm:NC_PL_mom_3} over $t=0, \\hdots, T-1$ and rearranging the terms, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\frac{\\alpha_t \\eta_x}{4} \\left[ \\frac{1}{\\eta_x^2} \\mathbb E \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{4 L_f^2}{\\mu} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{8}{\\mu \\eta_y} \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right] \\nonumber \\\\\n & \\qquad \\leq \\frac{1}{T} \\sum_{t=0}^{T-1} \\frac{4 \\eta_x}{\\mu \\eta_y} \\left[ 9 \\alpha_t^2 \\frac{\\sigma^2}{n} + 3 \\alpha_t L_f^2 \\Delta_{t+1}^{\\bx,\\by} \\right] + \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\mathfrak{E}_{t} - \\mathfrak{E}_{t+1} \\right]. \\nonumber\n\\end{align}\n\nWe choose $\\alpha_t = \\alpha$ for all $t$, and recall that $\\frac{1}{8 \\mu \\eta_y} \\geq 1$ and $\\mathfrak{E}_{t} \\geq 0$ for all $t$. Therefore,\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\left[ \\frac{1}{\\eta_x^2} \\mathbb E \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{2 L_f^2}{\\mu} \\mathbb E \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right] \\nonumber \\\\\n & \\qquad \\leq \\frac{4 \\mathfrak{E}_0}{\\eta_x \\alpha T} + \\frac{1}{T} \\sum_{t=0}^{T-1} \\frac{16}{\\mu \\eta_y} \\left[ 9 \\alpha \\frac{\\sigma^2}{n} + 3 L_f^2 \\Delta_{t+1}^{\\bx,\\by} \\right] \\tag{$\\because \\mathfrak{E}_{t} \\geq 0$ for all $t$} \\\\\n \n & \\qquad \\leq \\mathcal O \\left( \\frac{\\mathfrak{E}_0}{\\eta_x \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\right) + \\mathcal O \\left( \\frac{L_f^2}{\\mu \\eta_y} (\\tau - 1)^2 \\alpha^2 \\left( \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 + \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right) \\tag{\\cref{cor:NC_PL_mom_induct_bd_cons_error_xy}} \\\\\n & \\qquad = \\mathcal O \\left( \\frac{\\kappa^2}{\\eta_y \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\right) + \\mathcal O \\left( \\kappa^2 \\mu (\\tau - 1)^2 \\alpha^2 \\left( \\eta_y \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\frac{\\eta_x^2}{\\eta_y} \\left( \\sigma^2 + \\varsigma_x^2 \\right) \\right) \\right) \\nonumber \\\\\n & \\qquad = \\mathcal O \\left( \\frac{\\kappa^2}{\\eta_y \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\right) + \\mathcal O \\left( (\\tau - 1)^2 \\alpha^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\mu (\\tau - 1)^2 \\alpha^2 \\left( \\eta_x \\left( \\sigma^2 + \\varsigma_x^2 \\right) \\right) \\right) \\tag{$\\because \\eta_y \\leq \\frac{\\mu}{8 L_f^2}, \\frac{\\eta_x}{\\eta_y} \\leq \\frac{1}{20 \\kappa^2}$} \\\\\n & \\qquad \\leq \\underbrace{\\mathcal O \\left( \\frac{\\kappa^2}{\\eta_y \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\right)}_{\\substack{\\text{Single client} \\\\\n \\text{convergence error}}} + \\underbrace{\\mathcal O \\Big( (\\tau - 1)^2 \\alpha^2 \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right) \\Big)}_{\\text{Error due to local updates}}.
\\tag{$\\because \\mu \\eta_x \\leq 1$}\n \n\\end{align}\nFinally, since $\\mathfrak{E}_0$ is a constant and $\\eta_y \\geq 20 \\eta_x \\kappa^2$, we get \\eqref{eq_proof:thm_NC_PL_mom_conv_rate}.\n\nFurther, with $\\alpha = \\sqrt{\\frac{n}{T}}$ in \\eqref{eq_proof:thm_NC_PL_mom_conv_rate}, we get\n\\begin{align}\n & \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\frac{1}{\\eta_x^2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{2 L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right] \\nonumber \\\\\n & \\qquad \\leq \\mathcal O \\left( \\frac{\\kappa^2 + \\sigma^2}{\\sqrt{nT}} \\right) + \\mathcal O \\left( \\frac{n (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right)}{T} \\right). \\nonumber\n\\end{align}\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of \\cref{cor:NC_PL_mom_comm_cost}]\nWe assume $T \\geq n^3$.\nTo reach an $\\epsilon$-accurate point, we note that, using Jensen's inequality,\n\\begin{align}\n & \\min_{t \\in [T-1]} \\mathbb E \\left[ \\frac{1}{\\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\| + L_f \\sqrt{\\frac{2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right]} + \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\| \\right] \\nonumber \\\\\n & \\leq \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left[ \\frac{1}{\\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\| + L_f \\sqrt{\\frac{2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right]} + \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\| \\right] \\nonumber \\\\\n & \\leq \\left[ \\frac{3}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left( \\frac{1}{\\eta_x^2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{2 L_f^2}{\\mu} \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 \\right) \\right]^{1\/2} \\nonumber \\\\\n & \\leq \\mathcal O \\left( \\frac{\\kappa + \\sigma}{(n T)^{1\/4}} \\right) + \\mathcal O \\left( \\tau \\sqrt{\\frac{n \\left( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 \\right)}{T}} \\right), \\nonumber\n\\end{align}\nwhere we use $\\sqrt{a+b} \\leq \\sqrt{a} + \\sqrt{b}$.\nHence, we need $T = \\mathcal O \\left( \\kappa^4\/(n \\epsilon^4) \\right)$ iterations to reach an $\\epsilon$-accurate point.\nWe can choose $\\tau \\leq \\mathcal O \\left( \\frac{T^{1\/4}}{n^{3\/4}} \\right)$ without affecting the convergence rate.\nHence, the number of communication rounds is $\\mathcal O \\left( \\frac{T}{\\tau} \\right) = \\mathcal O \\left( (n T)^{3\/4} \\right) = \\mathcal O \\left( \\kappa^3\/\\epsilon^3 \\right).
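For completeness, spelling out the last equality above (with $\\sigma, \\varsigma_x, \\varsigma_y$ treated as $\\mathcal O(1)$ constants), substituting $T = \\mathcal O \\left( \\kappa^4\/(n \\epsilon^4) \\right)$ gives\n\\begin{align*}\n \\mathcal O \\left( (n T)^{3\/4} \\right) = \\mathcal O \\left( \\Big( n \\cdot \\frac{\\kappa^4}{n \\epsilon^4} \\Big)^{3\/4} \\right) = \\mathcal O \\left( \\frac{\\kappa^3}{\\epsilon^3} \\right),\n\\end{align*}\nwhich is the communication cost stated above.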
\n\\end{proof}\n\n\n\\subsection{Proofs of the Intermediate Lemmas}\n\\label{sec:NC_PL_mom_int_results_proofs}\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_PL_mom_Phi_1_step_decay}]\nUsing $L_{\\Phi}$-smoothness of $\\Phi(\\cdot)$ (\\cref{lem:Phi_PL_smooth_nouiehed}),\n\\begin{align}\n & \\Phi ({\\mathbf x_{t+1}}) - \\Phi ({\\mathbf x_t}) \\leq \\langle \\nabla \\Phi({\\mathbf x_t}), {\\mathbf x_{t+1}} - {\\mathbf x_t} \\rangle + \\frac{L_{\\Phi}}{2} \\left\\| {\\mathbf x_{t+1}} - {\\mathbf x_t} \\right\\|^2 \\nonumber\n \n \\\\\n & \\quad = \\alpha_t \\langle \\nabla \\Phi({\\mathbf x_t}), \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\rangle + \\frac{L_{\\Phi} \\alpha_t^2}{2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 \\tag{see updates in \\eqref{eq:NC_mom_update_avg}} \\\\\n & \\quad = \\alpha_t \\langle {\\mathbf d_{x,t}}, \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\rangle + \\alpha_t \\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}}, \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\rangle \\nonumber \\\\\n & \\qquad + \\alpha_t \\left\\langle \\nabla \\Phi({\\mathbf x_t}) - \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\rangle + \\frac{L_{\\Phi} \\alpha_t^2}{2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2. \\label{eq:lem:NC_PL_mom_Phi_1_step_decay_1}\n\\end{align}\nNext, we bound the individual inner product terms in \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_1}.\n\\begin{align}\n \\alpha_t \\langle {\\mathbf d_{x,t}}, \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\rangle &= -\\frac{\\alpha_t}{\\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2, \\label{eq:lem:NC_PL_mom_Phi_1_step_decay_2a} \\\\\n \\alpha_t \\langle \\nabla \\Phi({\\mathbf x_t}) - \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\rangle & \\overset{(a)}{\\leq} \\frac{\\alpha_t}{8 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + 2 \\eta_x \\alpha_t \\left\\| \\nabla \\Phi({\\mathbf x_t}) - \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) \\right\\|^2, \\nonumber\n \n \\\\\n & \\overset{(b)}{\\leq} \\frac{\\alpha_t}{8 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + 2 \\eta_x \\alpha_t L_f^2 \\left\\| {\\mathbf y}^*({\\mathbf x_t}) - {\\mathbf y_t} \\right\\|^2, \\nonumber \\\\\n & \\leq \\frac{\\alpha_t}{8 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ f({\\mathbf x_t}, {\\mathbf y}^*({\\mathbf x_t})) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right], \\nonumber \\\\\n & = \\frac{\\alpha_t}{8 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right], \\label{eq:lem:NC_PL_mom_Phi_1_step_decay_2b} \\\\\n \\alpha_t \\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}}, \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\rangle & \\leq \\frac{\\alpha_t}{8 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + 2 \\eta_x \\alpha_t \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2, \\label{eq:lem:NC_PL_mom_Phi_1_step_decay_2c}\n\\end{align}\nwhere \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_2a} follows from the update expression of \\textit{virtual} averages in \\eqref{eq:NC_mom_update_avg}; \n$(a)$ and \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_2c} both
follow from Young's inequality \\cref{lem:Young}\n(with $\\gamma = 4 \\eta_x$); \n$(b)$ follows from \\cref{lem:Phi_PL_smooth_nouiehed} and $L_f$-smoothness of $f({\\mathbf x_t}, \\cdot)$ (\\cref{assum:smoothness});\nand \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_2b} follows from the quadratic growth condition of $\\mu$-PL functions (\\cref{lem:quad_growth}).\nSubstituting \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_2a}-\\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_2c} in \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_1}, we get\n\\begin{align*}\n \\Phi ({\\mathbf x_{t+1}}) - \\Phi ({\\mathbf x_t}) & \\leq - \\left( \\frac{3 \\alpha_t}{4 \\eta_x} - \\frac{L_{\\Phi} \\alpha_t^2}{2} \\right) \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + 2 \\eta_x \\alpha_t \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2.\n\\end{align*}\nNotice that for $\\alpha_t \\leq \\frac{\\mu}{4 \\eta_x L_f^2}$, $\\frac{L_{\\Phi} \\alpha_t^2}{2} \\leq \\kappa L_f \\alpha_t^2 \\leq \\frac{\\alpha_t}{4 \\eta_x}$. Hence the result follows.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Lemma \\ref{lem:NC_PL_mom_phi_error}]\nUsing $L_f$-smoothness of $f({\\mathbf x}, \\cdot)$ (\\cref{assum:smoothness}),\n\\begin{align}\n f({\\mathbf x_{t+1}}, {\\mathbf y_t}) &+ \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), {\\mathbf y_{t+1}} - {\\mathbf y_t} \\right\\rangle - \\frac{L_f}{2} \\norm{{\\mathbf y_{t+1}} - {\\mathbf y_t}}^2 \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\nonumber \\\\\n \\Rightarrow f({\\mathbf x_{t+1}}, {\\mathbf y_t}) & \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\alpha_t \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\rangle + \\frac{\\alpha_t^2 L_f}{2} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2. 
\\label{eq_proof:lem:NC_PL_mom_phi_error_1}\n\\end{align}\nNext, we bound the inner product in \\eqref{eq_proof:lem:NC_PL_mom_phi_error_1}.\n\\begin{align}\n & - \\alpha_t \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\rangle = - \\alpha_t \\eta_y \\left\\langle \\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}), {\\mathbf d_{y,t}} \\right\\rangle \\tag{using \\eqref{eq:NC_mom_update_avg}} \\\\\n &= -\\frac{\\alpha_t \\eta_y}{2} \\left[ \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t})}^2 + \\norm{{\\mathbf d_{y,t}}}^2 - \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t}) - \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) + \\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2 \\right] \\nonumber \\\\\n & \\leq -\\alpha_t \\eta_y \\mu \\left[ \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right] - \\frac{\\alpha_t}{2 \\eta_y} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 + \\alpha_t \\eta_y \\left[ L_f^2 \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 + \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2 \\right] \\label{eq_proof:lem:NC_PL_mom_phi_error_2}\n\\end{align}\nwhere \\eqref{eq_proof:lem:NC_PL_mom_phi_error_2} uses the update relation $\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} = \\eta_y {\\mathbf d_{y,t}}$, the bound $\\norm{a+b}^2 \\leq 2 \\norm{a}^2 + 2 \\norm{b}^2$ together with $L_f$-smoothness (\\cref{assum:smoothness}), and the $\\mu$-PL condition of $f({\\mathbf x_{t+1}}, \\cdot)$ (cf. \\cref{lem:quad_growth}), which gives\n\\begin{align}\n \\norm{\\nabla_{\\by} f({\\mathbf x_{t+1}}, {\\mathbf y_t})}^2 \\geq 2 \\mu \\left( \\max_{\\mathbf y} f({\\mathbf x_{t+1}}, {\\mathbf y}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right) = 2 \\mu \\left( \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right). \\nonumber\n \n\\end{align}\nSubstituting \\eqref{eq_proof:lem:NC_PL_mom_phi_error_2} in \\eqref{eq_proof:lem:NC_PL_mom_phi_error_1}, we get\n\\begin{align}\n f({\\mathbf x_{t+1}}, {\\mathbf y_t}) & \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) -\\alpha_t \\eta_y \\mu \\left[ \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right] - \\frac{\\alpha_t}{2 \\eta_y} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 + \\frac{\\alpha_t^2 L_f}{2} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 \\nonumber \\\\\n & \\quad + \\alpha_t \\eta_y \\left[ L_f^2 \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 + \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2 \\right]. \\nonumber\n\\end{align}\nRearranging the terms, we get\n\\begin{align}\n \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) & \\leq \\left( 1-\\alpha_t \\eta_y \\mu \\right) \\left[ \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\right] - \\frac{\\alpha_t}{2} \\left( \\frac{1}{\\eta_y} - \\alpha_t L_f \\right) \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 \\nonumber \\\\\n & \\quad + \\alpha_t \\eta_y \\left[ L_f^2 \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 + \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2 \\right]. \\label{eq_proof:lem:NC_PL_mom_phi_error_3}\n\\end{align}\n\nNext, we bound $\\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t})$.\n\\begin{align}\n & \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) = \\Phi ({\\mathbf x_{t+1}}) - \\Phi ({\\mathbf x_t}) + \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\underbrace{f({\\mathbf x_t}, {\\mathbf y_t}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t})}_{I}. \\label{eq_proof:lem:NC_PL_mom_phi_error_4}\n\\end{align}\nNext, we bound $I$. 
Using $L_f$-smoothness of $f(\\cdot, {\\mathbf y_t})$,\n\\begin{align}\n & f({\\mathbf x_t}, {\\mathbf y_t}) + \\left\\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), {\\mathbf x_{t+1}} - {\\mathbf x_t} \\right\\rangle - \\frac{L_f}{2} \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 \\leq f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\nonumber \\\\\n \\Rightarrow I &= f({\\mathbf x_t}, {\\mathbf y_t}) - f({\\mathbf x_{t+1}}, {\\mathbf y_t}) \\nonumber \\\\\n & \\leq -\\alpha_t \\left\\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}), \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\rangle + \\frac{\\alpha_t^2 L_f}{2} \\norm{\\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t}}^2 \\nonumber \\\\\n &= -\\alpha_t \\left\\langle \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - \\nabla \\Phi({\\mathbf x_t}), \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\rangle -\\alpha_t \\left\\langle \\nabla \\Phi({\\mathbf x_t}), \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\rangle + \\frac{\\alpha_t^2 L_f}{2} \\norm{\\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t}}^2 \\nonumber \\\\\n & \\leq \\frac{\\alpha_t}{8 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\tag{using \\eqref{eq:lem:NC_PL_mom_Phi_1_step_decay_2b}} \\\\\n & \\quad + \\Phi ({\\mathbf x_t}) - \\Phi ({\\mathbf x_{t+1}}) + \\frac{\\alpha_t^2 L_{\\Phi}}{2} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\frac{\\alpha_t^2 L_f}{2} \\norm{\\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t}}^2 \\tag{smoothness of $\\Phi$ (\\cref{lem:Phi_PL_smooth_nouiehed})} \\\\\n & = \\Phi ({\\mathbf x_t}) - \\Phi ({\\mathbf x_{t+1}}) + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{\\alpha_t}{2} \\left( \\frac{1}{4 \\eta_x} + 2 \\alpha_t L_{\\Phi} \\right) \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 \\tag{$\\because L_f \\leq L_{\\Phi}$}.\n\\end{align}\nUsing the bound on $I$ in \\eqref{eq_proof:lem:NC_PL_mom_phi_error_4} and then substituting in \\eqref{eq_proof:lem:NC_PL_mom_phi_error_3}, we get\n\\begin{align}\n & \\Phi ({\\mathbf x_{t+1}}) - f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\nonumber \\\\\n & \\leq (1 - \\alpha_t \\eta_y \\mu) \\left[ \\left( 1 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\right) \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{\\alpha_t}{2} \\left( \\frac{1}{4 \\eta_x} + 2 \\alpha_t L_{\\Phi} \\right) \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 \\right] \\nonumber \\\\\n & \\quad - \\frac{\\alpha_t}{2} \\left( \\frac{1}{\\eta_y} - \\alpha_t L_f \\right) \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 + \\alpha_t \\eta_y \\left[ L_f^2 \\norm{{\\mathbf x_{t+1}} - {\\mathbf x_t}}^2 + \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2 \\right] \\nonumber \\\\\n & \\overset{(a)}{\\leq} \\left( 1 - \\frac{\\alpha_t \\eta_y \\mu}{2} \\right) \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{\\alpha_t}{2} \\left( \\frac{1}{4 \\eta_x} + 2 \\alpha_t L_{\\Phi} + 2 \\eta_y L_f^2 \\alpha_t^2 \\right) \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 \\nonumber \\\\\n & \\quad - \\frac{\\alpha_t}{2} \\left( \\frac{1}{\\eta_y} - \\alpha_t L_f \\right) \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 + \\alpha_t \\eta_y \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - 
{\\mathbf d_{y,t}}}^2. \\nonumber \\\\\n & \\overset{(b)}{\\leq} \\left( 1 - \\frac{\\alpha_t \\eta_y \\mu}{2} \\right) \\left[ \\Phi ({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{\\alpha_t}{2 \\eta_x} \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 - \\frac{\\alpha_t}{4 \\eta_y} \\norm{\\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t}}^2 + \\alpha_t \\eta_y \\norm{\\nabla_{\\by} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{y,t}}}^2. \\nonumber\n \n\\end{align}\nwhere in $(a)$ we choose $\\eta_x$ such that $(1 - \\alpha_t \\eta_y \\mu) \\left( 1 + \\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\right) \\leq \\left( 1 - \\frac{\\alpha_t \\eta_y \\mu}{2} \\right)$. \nThis holds if $\\frac{4 \\eta_x \\alpha_t L_f^2}{\\mu} \\leq \\frac{\\alpha_t \\eta_y \\mu}{2} \\Rightarrow \\eta_x \\leq \\frac{\\eta_y}{8 \\kappa^2}$, where $\\kappa = L_f\/\\mu \\geq 1$ is the condition number.\nFinally, $(b)$ follows since $\\alpha_t \\eta_y \\leq \\frac{1}{2 L_f}$ and $\\alpha_t \\leq \\frac{\\mu}{8 \\eta_x L_f^2} = \\frac{1}{8 \\eta_x \\kappa L_f}$. Therefore,\n\\begin{align*}\n & 2 \\alpha_t L_{\\Phi} \\leq 4 \\kappa \\alpha_t L_f \\leq \\frac{1}{2 \\eta_x} \\tag{$L_{\\Phi} \\leq 2 \\kappa L_f$} \\\\\n & 2 \\eta_y L_f^2 \\alpha_t^2 \\leq 2 \\eta_y \\alpha_t \\frac{\\mu}{8 \\eta_x} \\leq \\frac{\\mu}{8 \\eta_x} \\frac{1}{L_f} \\leq \\frac{1}{8 \\eta_x}.\n\\end{align*}\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_PL_mom_grad_var_bound}]\nWe prove \\eqref{eq:lem:NC_PL_mom_grad_var_bound_x} here. The proof for \\eqref{eq:lem:NC_PL_mom_grad_var_bound_y} is analogous.\n\\begin{align}\n & \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t+1}} \\right\\|^2 \\nonumber \\\\\n &= \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - (1 - \\beta_x \\alpha_t) {\\mathbf d_{x,t}} - \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) \\right\\|^2 \\tag{see \\eqref{eq:NC_mom_update_avg}} \\\\\n &= \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - (1 - \\beta_x \\alpha_t) {\\mathbf d_{x,t}} - \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right. 
\\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\left.- \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\left( \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right) \\right\\|^2 \\nonumber \\\\\n & \\overset{(a)}{=} \\mathbb E \\left\\| (1 - \\beta_x \\alpha_t) \\left( \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t}} \\right) + \\beta_x \\alpha_t \\left( \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right) \\right\\|^2 \\nonumber \\\\\n & \\qquad \\qquad \\qquad + \\beta_x^2 \\alpha_t^2 \\mathbb E \\left\\| \\frac{1}{n} \\sum_{i=1}^n \\left( \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right) \\right\\|^2 \\nonumber \\\\\n & \\leq (1 + a_1) (1 - \\beta_x \\alpha_t)^2 \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t}} \\right\\|^2 \\nonumber \\\\\n & \\qquad + \\beta_x^2 \\alpha_t^2 \\left( 1 + \\dfrac{1}{a_1} \\right) \\mathbb E \\left\\| \\frac{1}{n} \\sum_{i=1}^n \\left( \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right) \\right\\|^2 + \\beta_x^2 \\alpha_t^2 \\frac{\\sigma^2}{n}. \\label{eq:proof_lem:NC_PL_mom_grad_var_bound_x_3}\n\\end{align}\nHere, $(a)$ follows from Assumption \\ref{assum:bdd_var} (unbiasedness of stochastic gradients),\n\\begin{align*}\n & \\mathbb E \\left\\langle (1 - \\beta_x \\alpha_t) \\left( \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t}} \\right) + \\beta_x \\alpha_t \\left( \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right), \\right. \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\left. \\frac{1}{n} \\sum_{i=1}^n \\left( \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right) \\right\\rangle \\nonumber \\\\\n &= \\mathbb E \\left\\langle (1 - \\beta_x \\alpha_t) \\left( \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t}} \\right) + \\beta_x \\alpha_t \\left( \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\frac{1}{n} \\sum_{i=1}^n \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right), \\right. \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\left. \\frac{1}{n} \\sum_{i=1}^n \\left( \\mathbb E \\left[ \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) \\right] - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right) \\right\\rangle = 0. \\tag{Law of total expectation}\n\\end{align*}\nAlso, \\eqref{eq:proof_lem:NC_PL_mom_grad_var_bound_x_3} follows from Assumption \\ref{assum:bdd_var} (independence of stochastic gradients across clients), and \\cref{lem:Young}\n(with $\\gamma = a_1$).\nNext, in \\eqref{eq:proof_lem:NC_PL_mom_grad_var_bound_x_3}, we choose $a_1$ such that $\\left( 1 + \\frac{1}{a_1} \\right) \\beta_x \\alpha_t = 1$, i.e., $a_1 = \\frac{\\beta_x \\alpha_t}{1 - \\beta_x \\alpha_t}$. \nTherefore, $(1-\\beta_x \\alpha_t) (1 + a_1) = 1$. 
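\nWith this choice of $a_1$, the two coefficients in \\eqref{eq:proof_lem:NC_PL_mom_grad_var_bound_x_3} simplify as\n\\begin{align*}\n (1 + a_1) (1 - \\beta_x \\alpha_t)^2 = 1 - \\beta_x \\alpha_t, \\qquad \\left( 1 + \\dfrac{1}{a_1} \\right) \\beta_x^2 \\alpha_t^2 = \\beta_x \\alpha_t.\n\\end{align*}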
Consequently, in \\eqref{eq:proof_lem:NC_PL_mom_grad_var_bound_x_3} we get\n\\begin{align}\n & \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t+1}} \\right\\|^2 \\nonumber \\\\\n & \\leq (1 - \\beta_x \\alpha_t) \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) + \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\beta_x^2 \\alpha_t^2 \\frac{\\sigma^2}{n} \\nonumber \\\\\n & \\quad + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n L_f^2 \\mathbb E \\left[ \\left\\| {\\mathbf x_{t+1}} - {\\mathbf x^i_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y_{t+1}} - {\\mathbf y^i_{t+1}} \\right\\|^2 \\right] \n \\tag{Jensen's inequality with $\\norm{\\cdot}^2_2$; \\cref{assum:smoothness}}\n \n \\\\\n & \\leq (1 - \\beta_x \\alpha_t) \\left[ (1 + a_2) \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\left( 1 + \\dfrac{1}{a_2} \\right) \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) \\right\\|^2 \\right] + \\beta_x^2 \\alpha_t^2 \\frac{\\sigma^2}{n} \\nonumber \\\\\n & \\quad + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n L_f^2 \\mathbb E \\left[ \\left\\| {\\mathbf x_{t+1}} - {\\mathbf x^i_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y_{t+1}} - {\\mathbf y^i_{t+1}} \\right\\|^2 \\right]. \\label{eq:proof_lem:NC_PL_mom_grad_var_bound_x_5}\n\\end{align}\nIn \\eqref{eq:proof_lem:NC_PL_mom_grad_var_bound_x_5}, we choose $a_2 = \\frac{\\beta_x \\alpha_t}{2}$. Then, $(1-\\beta_x \\alpha_t) \\left( 1 + \\frac{\\beta_x \\alpha_t}{2} \\right) \\leq 1 - \\frac{\\beta_x \\alpha_t}{2}$, and $(1-\\beta_x \\alpha_t) \\left( 1 + \\frac{2}{\\beta_x \\alpha_t} \\right) \\leq \\frac{2}{\\beta_x \\alpha_t}$. 
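\nBoth bounds follow by direct expansion: writing $u = \\beta_x \\alpha_t \\in (0,1)$,\n\\begin{align*}\n (1-u) \\left( 1 + \\frac{u}{2} \\right) = 1 - \\frac{u}{2} - \\frac{u^2}{2} \\leq 1 - \\frac{u}{2}, \\qquad (1-u) \\left( 1 + \\frac{2}{u} \\right) = \\frac{2}{u} - 1 - u \\leq \\frac{2}{u}.\n\\end{align*}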
Therefore, we get\n\\begin{align}\n & \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - {\\mathbf d_{x,t+1}} \\right\\|^2 \\nonumber \\\\\n & \\leq \\left( 1 - \\frac{\\beta_x \\alpha_t}{2} \\right) \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\frac{2}{\\beta_x \\alpha_t} L_f^2 \\mathbb E \\left[ \\left\\| {\\mathbf x_{t+1}} - {\\mathbf x_t} \\right\\|^2 + \\left\\| {\\mathbf y_{t+1}} - {\\mathbf y_t} \\right\\|^2 \\right] \\nonumber \\\\\n & \\qquad + \\beta_x^2 \\alpha_t^2 \\frac{\\sigma^2}{n} + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n L_f^2 \\mathbb E \\left[ \\left\\| {\\mathbf x_{t+1}} - {\\mathbf x^i_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y_{t+1}} - {\\mathbf y^i_{t+1}} \\right\\|^2 \\right] \\nonumber \\\\\n &= \\left( 1 - \\frac{\\beta_x \\alpha_t}{2} \\right) \\mathbb E \\left\\| \\nabla_{\\bx} f({\\mathbf x_t}, {\\mathbf y_t}) - {\\mathbf d_{x,t}} \\right\\|^2 + \\frac{2 L_f^2 \\alpha_t}{\\beta_x} \\mathbb E \\left[ \\left\\| \\Tbx_{t+\\frac{1}{2}} - {\\mathbf x_t} \\right\\|^2 + \\left\\| \\Tby_{t+\\frac{1}{2}} - {\\mathbf y_t} \\right\\|^2 \\right] \\nonumber \\\\\n & \\qquad + \\beta_x^2 \\alpha_t^2 \\frac{\\sigma^2}{n} + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n L_f^2 \\mathbb E \\left[ \\left\\| {\\mathbf x_{t+1}} - {\\mathbf x^i_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y_{t+1}} - {\\mathbf y^i_{t+1}} \\right\\|^2 \\right].\n \n\\end{align}\nFinally, we choose $\\beta_x = \\beta$.\nThis concludes the proof.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_PL_mom_cons_errs_recursion}]\nFor the sake of clarity, we recall the following notation:\n$\\Delta_{t}^{\\bx,\\by} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 + \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2 \\right)$, $\\Delta_{t}^{\\bdx} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf d^i_{x,t}} - {\\mathbf d_{x,t}} \\right\\|^2$, and $\\Delta_{t}^{\\bdy} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf d^i_{y,t}} - {\\mathbf d_{y,t}} \\right\\|^2$.\n\nFirst, we prove \\eqref{eq:lem:NC_PL_mom_xy_cons_errs_recursion}.\n\\begin{align}\n \\Delta_{t+1}^{\\bx,\\by} & \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^i_{t+1}} - {\\mathbf x_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y^i_{t+1}} - {\\mathbf y_{t+1}} \\right\\|^2 \\right) \\nonumber \\\\\n & = \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| \\left( {\\mathbf x^i_t} - {\\mathbf x_t} \\right) - \\eta_x \\alpha_t \\left( {\\mathbf d^i_{x,t}} - {\\mathbf d_{x,t}} \\right) \\right\\|^2 + \\left\\| \\left( {\\mathbf y^i_t} - {\\mathbf y_t} \\right) + \\eta_y \\alpha_t \\left( {\\mathbf d^i_{y,t}} - {\\mathbf d_{y,t}} \\right) \\right\\|^2 \\right)\n \n \\tag{from \\eqref{eq:NC_mom_update_avg}} \\\\\n & \\leq \\frac{1}{n} \\sum_{i=1}^n \\left[ (1 + c_1) \\mathbb E \\left( \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 + \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2 \\right) + \\alpha_t^2 \\left( 1 + \\dfrac{1}{c_1} \\right) \\mathbb E \\left( \\eta_x^2 \\left\\| {\\mathbf d^i_{x,t}} - {\\mathbf d_{x,t}} \\right\\|^2 + \\eta_y^2 \\left\\| {\\mathbf d^i_{y,t}} - {\\mathbf d_{y,t}} \\right\\|^2 \\right) \\right]\n \n \\tag{from \\cref{lem:Young}, with $\\gamma = c_1$}\n \\\\\n &= (1+c_1) \\Delta_{t}^{\\bx,\\by} + \\left( 1 + \\dfrac{1}{c_1} \\right) \\alpha_t^2 \\left( \\eta_x^2 
\\Delta_{t}^{\\bdx} + \\eta_y^2 \\Delta_{t}^{\\bdy} \\right). \\nonumber\n\\end{align}\nNext, we prove \\eqref{eq:lem:NC_PL_mom_p_cons_errs_recursion}.\nThe proof of \\eqref{eq:lem:NC_PL_mom_q_cons_errs_recursion} is analogous, so we skip it here.\n\\begin{align}\n & \\Delta_{t+1}^{\\bdx} \\triangleq \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| {\\mathbf d^i_{x,t+1}} - {\\mathbf d_{x,t+1}} \\right\\|^2 \\nonumber \\\\\n &= \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left\\| (1 - \\beta_x \\alpha_t) \\left( {\\mathbf d^i_{x,t}} - {\\mathbf d_{x,t}} \\right) + \\beta_x \\alpha_t \\Big( \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\frac{1}{n} \\sum_{j=1}^n \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}; {\\xi^j_{t+1}}) \\Big) \\right\\|^2 \\tag{from \\eqref{eq:NC_mom_update_avg}} \\\\\n & \\leq (1 + c_2) (1 - \\beta_x \\alpha_t)^2 \\Delta_{t}^{\\bdx} + \\left( 1 + \\dfrac{1}{c_2} \\right) \\frac{\\beta_x^2 \\alpha_t^2}{n} \\sum_{i=1}^n \\mathbb E \\left\\| \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\frac{1}{n} \\sum_{j=1}^n \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}; {\\xi^j_{t+1}}) \\right\\|^2 \n \n \\tag{\\cref{lem:Young} (with $\\gamma = c_2$)} \\\\\n & \\overset{(a)}{=} (1 - \\beta_x \\alpha_t) \\Delta_{t}^{\\bdx} \\nonumber \\\\\n & \\ + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\Bigg\\| \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) + \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) + \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\nonumber \\\\\n & \\quad - \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}; {\\xi^j_{t+1}}) - \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}) + \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}) - \\nabla_{\\bx} f_j ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) + \\nabla_{\\bx} f_j ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right) \\Bigg\\|^2 \\nonumber\n \n \\\\\n & \\overset{(b)}{\\leq} (1 - \\beta_x \\alpha_t) \\Delta_{t}^{\\bdx} + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\Bigg[ \\left\\| \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) \\right\\|^2 \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad + \\left\\| \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}; {\\xi^j_{t+1}}) - \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}) \\right) \\right\\|^2 \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad + \\Big\\| \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) + \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad - \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}) - \\nabla_{\\bx} f_j ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right) - \\nabla_{\\bx} f ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\Big\\|^2 \\Bigg]\n \n \\nonumber \\\\\n & \\overset{(c)}{\\leq} (1 - \\beta_x \\alpha_t) \\Delta_{t}^{\\bdx} + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\Bigg[ \\sigma^2 + 
\\frac{\\sigma^2}{n} + 3 \\mathbb E \\left\\| \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}) - \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right\\|^2 \\nonumber \\\\\n & + 3 \\mathbb E \\left\\| \\nabla_{\\bx} f_i ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) - \\nabla_{\\bx} f ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right\\|^2 + 3 \\mathbb E \\left\\| \\frac{1}{n} \\sum_{j=1}^n \\left( \\nabla_{\\bx} f_j ({\\mathbf x^j_{t+1}}, {\\mathbf y^j_{t+1}}) - \\nabla_{\\bx} f_j ({\\mathbf x_{t+1}}, {\\mathbf y_{t+1}}) \\right) \\right\\|^2 \\Bigg]\n \n \\nonumber \\\\\n & \\overset{(d)}{\\leq} (1 - \\beta_x \\alpha_t) \\Delta_{t}^{\\bdx} + \\beta_x \\alpha_t \\frac{1}{n} \\sum_{i=1}^n \\Bigg[ \\sigma^2 + \\frac{\\sigma^2}{n} + 3 L_f^2 \\mathbb E \\left( \\left\\| {\\mathbf x^i_{t+1}} - {\\mathbf x_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y^i_{t+1}} - {\\mathbf y_{t+1}} \\right\\|^2 \\right) + 3 \\varsigma_x^2 \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad + 3 L_f^2 \\frac{1}{n} \\sum_{j=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^j_{t+1}} - {\\mathbf x_{t+1}} \\right\\|^2 + \\left\\| {\\mathbf y^j_{t+1}} - {\\mathbf y_{t+1}} \\right\\|^2 \\right) \\Bigg]\n \n \\nonumber \\\\\n &= (1 - \\beta_x \\alpha_t) \\Delta_{t}^{\\bdx} + 6 \\beta_x \\alpha_t L_f^2 \\Delta_{t+1}^{\\bx,\\by} + \\beta_x \\alpha_t \\left[ \\sigma^2 \\left( 1 + \\dfrac{1}{n} \\right) + 3 \\varsigma_x^2 \\right]. \\nonumber\n\\end{align}\nIn $(a)$, we choose $c_2$ such that $\\left( 1 + \\frac{1}{c_2} \\right) \\beta_x \\alpha_t = 1$, i.e., $c_2 = \\frac{\\beta_x \\alpha_t}{1 - \\beta_x \\alpha_t}$ and $(1-\\beta_x \\alpha_t) (1 + c_2) = 1$; \n$(b)$ follows from Assumption \\ref{assum:bdd_var} (unbiasedness of stochastic gradients);\n$(c)$ follows from Assumption \\ref{assum:bdd_var} (bounded variance of stochastic gradients, and independence of stochastic gradients across clients), and the sum-of-squares inequality in \\cref{lem:sum_of_squares};\n$(d)$ follows from \\cref{assum:smoothness} ($L_f$-smoothness of $f_i$) and \\cref{assum:bdd_hetero} (bounded heterogeneity across clients).\n\nFinally, we choose $\\beta_x = \\beta$.\nThis concludes the proof of \\eqref{eq:lem:NC_PL_mom_p_cons_errs_recursion}.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy}]\nSubstituting \\eqref{eq:lem:NC_PL_mom_p_cons_errs_recursion}, \\eqref{eq:lem:NC_PL_mom_q_cons_errs_recursion} from Lemma \\ref{lem:NC_PL_mom_cons_errs_recursion} in \\eqref{eq:lem:NC_PL_mom_xy_cons_errs_recursion}, we get\n\\begin{equation}\n \\begin{aligned}\n \\Delta_{t+1}^{\\bx,\\by} & \\leq \\left\\{ 1+c_1 + \\left( 1 + \\mfrac{1}{c_1} \\right) 6 L_f^2 \\beta \\alpha^3 (\\eta_x^2 + \\eta_y^2) \\right\\} \\Delta_{t}^{\\bx,\\by} + \\left( 1 + \\mfrac{1}{c_1} \\right) \\alpha^2 (1 - \\beta \\alpha) \\left( \\eta_x^2 \\Delta_{t-1}^{\\bdx} + \\eta_y^2 \\Delta_{t-1}^{\\bdy} \\right) \\\\\n & \\qquad + \\left( 1 + \\mfrac{1}{c_1} \\right) \\beta \\alpha^3 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\eta_x^2 \\varsigma_x^2 + 3 \\eta_y^2 \\varsigma_y^2 \\right].\n \\end{aligned}\n \\label{eq_proof:lem:NC_PL_mom_induct_bd_1}\n\\end{equation}\nUsing $c_1 = \\frac{\\beta \\alpha}{1 - \\beta \\alpha}$ in \\eqref{eq_proof:lem:NC_PL_mom_induct_bd_1} gives us\n\\begin{align}\n \\Delta_{t+1}^{\\bx,\\by} & \\leq \\left\\{ 1+c_1 + 6 L_f^2 \\alpha^2 (\\eta_x^2 + \\eta_y^2) \\right\\} \\Delta_{t}^{\\bx,\\by} + \\frac{\\alpha}{\\beta} (1 - \\beta \\alpha) \\left( 
\\eta_x^2 \\Delta_{t-1}^{\\bdx} + \\eta_y^2 \\Delta_{t-1}^{\\bdy} \\right) \\nonumber \\\\\n & \\qquad + \\alpha^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\eta_x^2 \\varsigma_x^2 + 3 \\eta_y^2 \\varsigma_y^2 \\right] \\nonumber \\\\\n & = \\left( 1 + \\theta \\right) \\Delta_{t}^{\\bx,\\by} + \\frac{\\alpha}{\\beta} (1 - \\beta \\alpha) \\left( \\eta_x^2 \\Delta_{t-1}^{\\bdx} + \\eta_y^2 \\Delta_{t-1}^{\\bdy} \\right) + \\Upsilon, \\label{eq_proof:lem:NC_PL_mom_induct_bd_2}\n\\end{align}\nwhere we define $\\theta \\triangleq c_1 + 6 L_f^2 \\alpha^2 (\\eta_x^2 + \\eta_y^2)$.\n\nNow, we proceed to prove the induction. For $k=1$, it follows from \\eqref{eq_proof:lem:NC_PL_mom_induct_bd_2} that \\eqref{eq:lem:NC_PL_mom_induct_bd_cons_error_xy} holds. Next, we assume the induction hypothesis in \\eqref{eq:lem:NC_PL_mom_induct_bd_cons_error_xy} holds for some $k > 1$ (assuming $t-1-k \\geq s \\tau + 1$). We prove that it also holds for $k+1$.\n\\begin{align}\n \\Delta_{t}^{\\bx,\\by} & \\leq (1 + 2 k \\theta) \\Delta_{t-k}^{{\\mathbf x},{\\mathbf y}} + 2 k \\mfrac{\\alpha}{\\beta} (1-\\beta \\alpha) \\left( \\eta_x^2 \\Delta_{t-k-1}^{{\\mathbf d_x}} + \\eta_y^2 \\Delta_{t-k-1}^{{\\mathbf d_y}} \\right) + k^2 (1+\\theta) \\Upsilon \\tag{Induction hypothesis} \\\\\n & \\leq \\left\\{ (1 + 2 k \\theta) (1 + \\theta) + 2 k \\mfrac{\\alpha}{\\beta} (1-\\beta \\alpha) (\\eta_x^2 + \\eta_y^2) 6 L_f^2 \\beta \\alpha \\right\\} \\Delta_{t-k-1}^{\\bx,\\by} \\tag{\\cref{lem:NC_PL_mom_cons_errs_recursion}, \\eqref{eq_proof:lem:NC_PL_mom_induct_bd_2}} \\\\\n & \\quad + \\left\\{ (1 + 2 k \\theta) \\frac{\\alpha}{\\beta} (1 - \\beta \\alpha) + 2 k \\mfrac{\\alpha}{\\beta} (1-\\beta \\alpha)^2 \\right\\} \\left( \\eta_x^2 \\Delta_{t-k-2}^{{\\mathbf d_x}} + \\eta_y^2 \\Delta_{t-k-2}^{{\\mathbf d_y}} \\right) \\nonumber \\\\\n & \\quad + \\left[ 1 + 2k \\theta + k^2 (1 + \\theta) \\right] \\Upsilon + 2 k \\mfrac{\\alpha}{\\beta} (1-\\beta \\alpha) \\beta \\alpha \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\eta_x^2 \\varsigma_x^2 + 3 \\eta_y^2 \\varsigma_y^2 \\right] \\nonumber \\\\\n & \\leq \\left\\{ (1 + 2 k \\theta) (1 + \\theta) + 2 k (1-\\beta \\alpha) (\\theta - c_1) \\right\\} \\Delta_{t-k-1}^{\\bx,\\by} \\tag{see definition of $\\theta$ in \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy}} \\\\\n & \\quad + \\left[ 1 + 2 k \\theta + 2 k (1-\\beta \\alpha) \\right] \\frac{\\alpha}{\\beta} (1 - \\beta \\alpha) \\left( \\eta_x^2 \\Delta_{t-k-2}^{{\\mathbf d_x}} + \\eta_y^2 \\Delta_{t-k-2}^{{\\mathbf d_y}} \\right) \\nonumber \\\\\n & \\quad + \\left[ 1 + 2k \\theta + k^2 (1 + \\theta) + 2 k (1-\\beta \\alpha) \\right] \\Upsilon. 
\\tag{see definition of $\\Upsilon$ in \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy}}\n\\end{align}\nNext, we see how the parameter choices in \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy} satisfy the induction hypothesis.\nSpecifically, we need to satisfy the following three conditions:\n\\begin{align}\n \\begin{aligned}\n (1 + 2 k \\theta) (1 + \\theta) + 2 k (1-\\beta \\alpha) (\\theta - c_1) & \\leq 1 + 2 (k+1) \\theta, \\\\\n 1 + 2 k \\theta + 2 k (1-\\beta \\alpha) & \\leq 2(k+1), \\\\\n 1 + 2k \\theta + k^2 (1 + \\theta) + 2 k (1-\\beta \\alpha) & \\leq (k+1)^2 (1+\\theta).\n \\end{aligned}\n \\label{eq:NC_PL_mom_induct_bd_cons_error_xy:param_condition}\n\\end{align}\n\\begin{enumerate}\n \\item The first condition in \\eqref{eq:NC_PL_mom_induct_bd_cons_error_xy:param_condition} is equivalent to\n \\begin{align}\n \\theta + 2 k \\theta^2 + 2 k (1-\\beta \\alpha) (\\theta - c_1) & \\leq 2 \\theta. \\label{eq:cond_theta_1}\n \\end{align}\n Recall that in \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy}, $\\theta - c_1 = 6 L_f^2 \\alpha^2 (\\eta_y^2 + \\eta_x^2)$. \n If $6 L_f^2 \\alpha^2 (\\eta_y^2 + \\eta_x^2) \\leq \\min \\{ c_1, \\theta^2 \\}$, a \\textit{sufficient} condition for \\eqref{eq:cond_theta_1} is\n \\begin{align*}\n 4 k \\theta^2 \\leq \\theta \\quad \\Rightarrow \\quad \\theta \\leq \\frac{1}{4k}.\n \\end{align*}\n Since $\\theta \\leq 2 c_1$ and $c_1 \\leq 2 \\beta \\alpha$ (if $\\alpha \\leq 1\/(2 \\beta)$), this is satisfied if $\\alpha \\leq \\frac{1}{16 \\beta k}$.\n Next, we verify that $6 L_f^2 \\alpha^2 (\\eta_y^2 + \\eta_x^2) \\leq \\min \\{ c_1, \\theta^2 \\}$ holds.\n \\begin{itemize}\n \\item $6 L_f^2 \\alpha^2 (\\eta_y^2 + \\eta_x^2) \\leq c_1$ follows from the condition $\\alpha \\leq \\frac{\\beta}{6 L_f^2 (\\eta_y^2 + \\eta_x^2)}$ (since $c_1 \\geq \\beta \\alpha$).\n \\item $6 L_f^2 \\alpha^2 (\\eta_y^2 + \\eta_x^2) \\leq \\theta^2$ follows from the condition $L_f^2 (\\eta_y^2 + \\eta_x^2) \\leq \\frac{\\beta^2}{6}$ (since $\\theta \\geq c_1 \\geq \\alpha \\beta$).\n \\end{itemize}\n \\item The second condition in \\eqref{eq:NC_PL_mom_induct_bd_cons_error_xy:param_condition} is equivalent to\n \\begin{align*}\n 2k (\\theta - \\beta \\alpha) \\leq 1.\n \\end{align*}\n A \\textit{sufficient} condition for this to be satisfied is $\\theta \\leq \\frac{1}{2k}$, which, as seen above, is already satisfied if $\\alpha \\leq \\frac{1}{16 \\beta k}$.\n \\item The third condition in \\eqref{eq:NC_PL_mom_induct_bd_cons_error_xy:param_condition} is equivalent to\n \\begin{align*}\n 1 + 2k \\theta + 2 k (1-\\beta \\alpha) & \\leq 2k (1+\\theta) + (1+\\theta) \\\\\n \\Leftrightarrow - 2k \\beta \\alpha & \\leq \\theta,\n \\end{align*}\n which is trivially satisfied.\n\\end{enumerate}\nHence, the parameter choices in \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy} satisfy the induction hypothesis, which completes the proof.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of \\cref{cor:NC_PL_mom_induct_bd_cons_error_xy}]\nLet $k_0$ be such that $(t-k_0-1) \\mod \\tau = 0$. Then, by \\cref{alg_NC_momentum}, $$\\Delta_{t-k_0-1}^{{\\mathbf x}, {\\mathbf y}} = \\Delta_{t-k_0-1}^{{\\mathbf d_x}} = \\Delta_{t-k_0-1}^{{\\mathbf d_y}} = 0.$$\nFrom \\cref{lem:NC_PL_mom_cons_errs_recursion}, $\\Delta_{t-k_0}^{{\\mathbf x}, {\\mathbf y}} = 0$. 
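\nIndeed, substituting these zero consensus errors into the recursion \\eqref{eq:lem:NC_PL_mom_xy_cons_errs_recursion} gives\n\\begin{align*}\n \\Delta_{t-k_0}^{\\bx,\\by} \\leq (1+c_1) \\Delta_{t-k_0-1}^{\\bx,\\by} + \\left( 1 + \\mfrac{1}{c_1} \\right) \\alpha^2 \\left( \\eta_x^2 \\Delta_{t-k_0-1}^{\\bdx} + \\eta_y^2 \\Delta_{t-k_0-1}^{\\bdy} \\right) = 0.\n\\end{align*}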
\nUsing this information in \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy}, we get\n\\begin{align*}\n \\Delta_{t}^{\\bx,\\by} & \\leq (1 + 2 k_0 \\theta) \\Delta_{t-k_0}^{\\bx,\\by} + k_0^2 (1+\\theta) \\Upsilon \\\\\n & \\leq (\\tau - 1)^2 \\alpha^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\mfrac{1}{n} \\right) + 3 \\eta_x^2 \\varsigma_x^2 + 3 \\eta_y^2 \\varsigma_y^2 \\right]. \\tag{Using $\\Upsilon$ from \\cref{lem:NC_PL_mom_induct_bd_cons_error_xy} and $k_0 \\leq \\tau - 1$}\n\\end{align*}\n\\end{proof}\n\n\n\n\n\\newpage\n\\section{Nonconvex-Concave Functions: Local SGDA+ (\\texorpdfstring{\\cref{thm:NC_C}}{Theorem 3})} \\label{app:NC_C}\n\n\n\\begin{algorithm}[ht]\n\\caption{Local SGDA+ \\cite{mahdavi21localSGDA_aistats}}\n\\label{alg_local_SGDA_plus}\n\\begin{algorithmic}[1]\n\t\\STATE{\\textbf{Input: }{\\small${\\mathbf x}_0^i = \\widetilde{\\bx}_0 = {\\mathbf x}_0, {\\mathbf y}_0^i = {\\mathbf y}_0$}, for all $i \\in [n]$; step-sizes $\\eta_x, \\eta_y$; $\\tau$, $T$, $S, k=0$}\n\t\\FOR[At all clients $i=1,\\hdots, n$]{$t=0$ to $T-1$}\n\t \\STATE{Sample minibatch ${\\xi^i_{t}}$ from local data}\n \\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x^i_t} - \\eta_x \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})$}\n \\STATE{${\\mathbf y^i_{t+1}} = {\\mathbf y^i_t} + \\eta_y \\nabla_{\\by} f_i (\\Tbx_{k}, {\\mathbf y^i_t}; {\\xi^i_{t}})$}\n \\IF{$t+1$ mod $\\tau = 0$}\n \\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}} \\}$ to the server}\n \\STATE{Server computes averages ${\\mathbf x_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}$, \n ${\\mathbf y_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_{t+1}}$, and sends to all the clients}\n \\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x_{t+1}}$, ${\\mathbf y^i_{t+1}} = {\\mathbf y_{t+1}}$, for all $i \\in [n]$}\n \\ENDIF\n \\IF{$t+1$ mod $S = 0$}\n \\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}} \\}$ to the server}\n \\STATE{$k \\gets k+1$}\n \\STATE{Server computes averages $\\Tbx_{k} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}$, and sends to all the clients}\n \\ENDIF\n\t\\ENDFOR\n\t\\STATE{\\textbf{Return: }${\\bar{\\bx}_T}$ drawn uniformly at random from $\\{ {\\mathbf x_t} \\}$, where ${\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$}\n\\end{algorithmic}\n\\end{algorithm}\n\nWe organize this section as follows. First, in \\cref{sec:NC_C_int_results} we present some intermediate results, which we use in the proof of \\cref{thm:NC_C}. 
Next, in \\cref{sec:NC_C_thm_proof}, we present the proof of \\cref{thm:NC_C}, which is followed by the proofs of the intermediate results in \\cref{sec:NC_C_int_results_proofs}.\n\n\\subsection{Intermediate Lemmas} \\label{sec:NC_C_int_results}\n\n\\begin{lemma}\n\\label{lem:NC_C_Phi_smooth_decay_one_iter}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:concavity}, \\ref{assum:Lips_cont_x}.\nThen, the iterates generated by \\cref{alg_local_SGDA_plus} satisfy\n\\begin{align}\n \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_{t+1}}) \\right] & \\leq \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right] + \\eta_x^2 L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + 2 \\eta_x L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + 2 \\eta_x L_f \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] - \\frac{\\eta_x}{8} \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2, \\nonumber\n\\end{align}\nwhere $\\Delta_{t}^{\\bx,\\by} = \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 + \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2 \\right)$ is the synchronization error at time $t$.\n\\end{lemma}\n\nNext, we bound the difference $\\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right]$.\n\n\\begin{lemma}\n\\label{lem:NC_C_Phi_f_diff}\nSuppose the local functions satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:Lips_cont_x}.\nFurther, suppose we choose the step-size $\\eta_y$ such that $\\eta_y \\leq \\frac{1}{8 L_f \\tau}$.\nThen the iterates generated by \\cref{alg_local_SGDA_plus} satisfy\n\\begin{align}\n \\frac{1}{T} \\sumtT \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] & \\leq 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} + \\frac{20 \\eta_y \\sigma^2}{n} + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right). \\nonumber\n\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}\n\\label{lem:NC_C_consensus_error}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_hetero}, and the stochastic oracles for the local functions satisfy \\cref{assum:bdd_var}.\nFurther, in \\cref{alg_local_SGDA_plus}, we choose step-sizes $\\eta_x, \\eta_y \\leq \\frac{1}{8 \\tau L_f}$.\nThen, the iterates $\\{ {\\mathbf x^i_t}, {\\mathbf y^i_t} \\}$ generated by \\cref{alg_local_SGDA_plus} satisfy\n\\begin{align}\n \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\by} & \\triangleq \\frac{1}{T} \\sum_{t=0}^{T-1} \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf y^i_t} - {\\mathbf y_t} \\right\\|^2 \\right) \\leq 2 (\\tau-1)^2 \\eta_y^2 \\left[ \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\varsigma_y^2 \\right], \\nonumber \\\\\n \\frac{1}{T} \\sum_{t=0}^{T-1} \\Delta_{t}^{\\bx} & \\triangleq \\frac{1}{T} \\sum_{t=0}^{T-1} \\frac{1}{n} \\sum_{i=1}^n \\mathbb E \\left( \\left\\| {\\mathbf x^i_t} - {\\mathbf x_t} \\right\\|^2 \\right) \\leq 2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3\\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right]. 
\\nonumber\n \\label{eq:lem:NC_C_consensus_error}\n\\end{align}\n\\end{lemma}\n\n\n\n\\subsection{Proof of \\texorpdfstring{\\cref{thm:NC_C}}{Theorem 3}}\n\\label{sec:NC_C_thm_proof}\nFor the sake of completeness, we first state the full statement of \\cref{thm:NC_C} here.\n\n\\begin{theorem*}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:concavity}, \\ref{assum:Lips_cont_x}.\nFurther, let $\\norm{{\\mathbf y_t}}^2 \\leq D$ for all $t$.\nSuppose the step-sizes $\\eta_x, \\eta_y$ are chosen such that $\\eta_x, \\eta_y \\leq \\frac{1}{8 L_f \\tau}$.\nThen the iterates generated by \\cref{alg_local_SGDA_plus} satisfy\n\\begin{align}\n \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 & \\leq \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\frac{320 \\eta_y L_f \\sigma^2}{n} + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} \\right] \\nonumber \\\\\n & \\quad + 64 L_f^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) + 4 \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right]. \\nonumber\n \n\\end{align}\nWith the following parameter values:\n\\begin{align*}\n \\eta_x = \\Theta \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right), \\qquad \\eta_y = \\Theta \\left( \\frac{n^{3\/4}}{T^{1\/4}} \\right), \\qquad S = \\Theta \\left( \\sqrt{\\frac{T}{n}} \\right),\n\\end{align*}\nwe can further simplify to\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\frac{1}{(nT)^{1\/4}} \\right) + \\mathcal O \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right) + \\mathcal O \\left( \\frac{n^{3\/2} (\\tau-1)^2}{T^{1\/2}} \\right) + \\mathcal O \\left( (\\tau-1)^2 \\frac{\\sqrt{n}}{T^{3\/2}} \\right). 
\nonumber\n \n\\end{align}\n\\end{theorem*}\n\n\n\\begin{proof}\nWe sum the result in \\cref{lem:NC_C_Phi_smooth_decay_one_iter} over $t = 0$ to $T-1$ and rearrange the terms to get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\frac{8}{\\eta_x} \\frac{1}{T} \\sumtT \\left( \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right] - \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_{t+1}}) \\right] \\right) + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + 16 L_f \\frac{1}{T} \\sumtT \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + \\frac{16 L_f^2}{T} \\sumtT \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\leq \\frac{8}{\\eta_x T} \\left[ \\Phi_{1\/2L_f} ({\\mathbf x}_0) - \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x}_T) \\right] \\right] + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\frac{16 L_f^2}{T} \\sumtT \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} + \\frac{20 \\eta_y \\sigma^2}{n} + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right] \\tag{\\cref{lem:NC_C_Phi_f_diff}} \\\\\n & \\leq \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\frac{320 \\eta_y L_f \\sigma^2}{n} + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} \\right] \\nonumber \\\\\n & \\quad + 64 L_f^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) + 4 \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right], \\tag{\\cref{lem:NC_C_consensus_error}}\n\\end{align}\nwhere $\\widetilde{\\Delta}_{\\Phi} = \\Phi_{1\/2L_f} ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi_{1\/2L_f} ({\\mathbf x})$.\n\nIf $D = 0$, we let $S=1$. Otherwise, let $S = \\sqrt{\\frac{2 D}{\\eta_x \\eta_y G_x \\sqrt{G_x^2 + \\sigma^2\/n}}}$. Then we get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\frac{320 \\eta_y L_f \\sigma^2}{n} + 64 L_f \\sqrt{\\frac{2 D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}}}{\\eta_y}} \\nonumber \\\\\n & \\quad + 64 L_f^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) + 4 \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right]. \\label{eq_proof:thm_NC_C_1}\n\\end{align}\nFor $\\eta_y \\leq 1$, the terms containing $\\eta_y^2$ are of higher order, and we focus only on the other terms containing $\\eta_y$, i.e., \n\\begin{align*}\n 64 L_f \\left[ \\frac{5 \\eta_y \\sigma^2}{n} + \\sqrt{\\frac{2 D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}}}{\\eta_y}} \\right].\n\\end{align*}\nTo optimize these, we choose $\\eta_y = \\left( \\frac{n}{10 \\sigma^2} \\right)^{2\/3} \\left( 2 D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{1\/3}$. 
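\nTo see how this choice arises, write $A \\triangleq 2 D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}}$ and minimize $g(\\eta_y) \\triangleq \\frac{5 \\eta_y \\sigma^2}{n} + \\sqrt{A\/\\eta_y}$ over $\\eta_y > 0$ ($A$ and $g$ are shorthand used only in this calculation): setting $g'(\\eta_y) = \\frac{5 \\sigma^2}{n} - \\frac{\\sqrt{A}}{2} \\eta_y^{-3\/2} = 0$ gives\n\\begin{align*}\n \\eta_y^{3\/2} = \\frac{n \\sqrt{A}}{10 \\sigma^2}, \\qquad \\text{i.e.,} \\qquad \\eta_y = \\left( \\frac{n}{10 \\sigma^2} \\right)^{2\/3} A^{1\/3},\n\\end{align*}\nwhich is precisely the stated value.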
Substituting in \\eqref{eq_proof:thm_NC_C_1}, we get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + 320 L_f \\left( 10 \\frac{\\sigma^2}{n} D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{1\/3} \\nonumber \\\\\n & \\quad + 200 L_f^2 (\\tau-1)^2 \\left[ 4 \\eta_x^{2\/3} \\left( \\frac{n}{10 \\sigma^2} \\right)^{4\/3} \\left( 2 D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{2\/3} \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\eta_x^2 \\left( \\sigma^2 + \\varsigma_x^2 \\right) \\right].\n \\label{eq_proof:thm_NC_C_2}\n\\end{align}\nAgain, we ignore the higher-order terms in $\\eta_x$ and focus only on\n\\begin{align*}\n \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 320 L_f \\left( 10 \\frac{\\sigma^2}{n} D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{1\/3}.\n\\end{align*}\nWith $\\eta_x = \\left( \\frac{3}{40 L_f T} \\right)^{3\/4} \\left( 10 \\frac{\\sigma^2}{n} D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{-1\/4}$,\nand absorbing numerical constants inside $\\mathcal O (\\cdot)$, we get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\left( \\sigma^2 D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{1\/4} \\frac{L_f^{3\/4}}{(nT)^{1\/4}} \\right) \\nonumber \\\\\n & \\quad + \\mathcal O \\left( \\frac{L_f^{1\/4}}{T^{3\/4}} \\left( \\frac{\\sigma^2}{n} D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{-1\/4} \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) \\right) \\nonumber \\\\\n & \\quad + \\mathcal O \\left( \\frac{L_f^{3\/2} (\\tau-1)^2}{T^{1\/2}} \\left( \\frac{n}{\\sigma^2} \\right)^{3\/2} \\left( D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{1\/2} \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right) \\nonumber \\\\\n & \\quad + \\mathcal O \\left( (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_x^2 \\right) \\frac{\\sqrt{L_f}}{T^{3\/2}} \\left( \\frac{\\sigma^2}{n} D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{-1\/2} \\right), \\label{eq_proof:thm_NC_C_3} \\\\\n & \\leq \\mathcal O \\left( \\frac{\\sigma^2 + D + G_x^2}{(nT)^{1\/4}} \\right) + \\mathcal O \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right) + \\mathcal O \\left( \\frac{n^{3\/2} (\\tau-1)^2}{T^{1\/2}} \\right) + \\mathcal O \\left( (\\tau-1)^2 \\frac{\\sqrt{n}}{T^{3\/2}} \\right),\n \\label{eq_proof:thm_NC_C_4}\n\\end{align}\nwhere in \\eqref{eq_proof:thm_NC_C_4}, we have dropped all the problem-specific parameters to show the dependence only on $\\tau, n, T$.\n\nLastly, we specify the algorithm parameters in terms of $n,T$.\n\\begin{itemize}\n \\item $\\eta_x = \\left( \\frac{3}{40 L_f T} \\right)^{3\/4} \\left( 10 \\frac{\\sigma^2}{n} D G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{-1\/4} = \\Theta \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right)$,\n \\item $\\eta_y = \\left( \\frac{n}{10 \\sigma^2} \\right)^{2\/3} \\left( 2 D \\eta_x G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} \\right)^{1\/3} = \\Theta \\left( \\frac{n^{3\/4}}{T^{1\/4}} \\right)$,\n \\item $S = \\sqrt{\\frac{2 D}{\\eta_x \\eta_y G_x \\sqrt{G_x^2 + \\sigma^2\/n}}} = \\Theta \\left( \\sqrt{\\frac{T}{n}} \\right)$.\n\\end{itemize}\n\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\cref{cor:NC_C_comm_cost}]\nWe assume $T \\geq n^7$.\nTo reach an $\\epsilon$-accurate point, i.e., ${\\bar{\\bx}_T}$ such that $\\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T}) \\right\\| \\leq \\epsilon$, we need\n\\begin{align*}\n \\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T}) \\right\\| & \\leq \\left[ \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right\\|^2 \\right]^{1\/2} \\nonumber \\\\\n & \\leq \\mathcal O \\left( \\frac{1}{(nT)^{1\/8}} \\right) + \\mathcal O \\left( \\frac{n^{1\/8}}{T^{3\/8}} \\right) + \\mathcal O \\left( \\frac{n^{3\/4} (\\tau-1)}{T^{1\/4}} \\right) + \\mathcal O \\left( (\\tau-1) \\frac{n^{1\/4}}{T^{3\/4}} \\right).\n\\end{align*}\nWe can choose $\\tau \\leq \\mathcal O \\left( \\frac{T^{1\/8}}{n^{7\/8}} \\right)$ without affecting the convergence rate $\\mathcal O \\left( \\frac{1}{(nT)^{1\/8}} \\right)$.\nIn that case, we need $T = \\mathcal O \\left( \\frac{1}{n \\epsilon^8} \\right)$ iterations to reach an $\\epsilon$-accurate point.\nHence, the minimum number of communication rounds is \n$$\\mathcal O \\left( \\frac{T}{\\tau} \\right) = \\mathcal O \\left( (n T)^{7\/8} \\right) = \\mathcal O \\left( \\frac{1}{\\epsilon^7} \\right).$$\n\\end{proof}\n\n\n\n\n\\subsection{Proofs of the Intermediate Lemmas}\n\\label{sec:NC_C_int_results_proofs}\n\n\\begin{proof}[Proof of \\cref{lem:NC_C_Phi_smooth_decay_one_iter}]\nWe borrow the proof steps from \\cite{lin_GDA_icml20, mahdavi21localSGDA_aistats}. Define $\\widetilde{{\\mathbf x}}_{t} = \\operatornamewithlimits{arg\\,min}_{\\mathbf x} \\left\\{ \\Phi ({\\mathbf x}) + L_f \\norm{{\\mathbf x} - {\\mathbf x_t}}^2 \\right\\}$. Then, using the definition of $\\Phi_{1\/2L_f}$, we get\n\\begin{align}\n \\Phi_{1\/2L_f} ({\\mathbf x_{t+1}}) & \\triangleq \\min_{\\mathbf x} \\Phi ({\\mathbf x}) + L_f \\norm{{\\mathbf x} - {\\mathbf x_{t+1}}}^2 \\nonumber \\\\\n & \\leq \\Phi (\\widetilde{{\\mathbf x}}_t) + L_f \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_{t+1}}}^2.\n \\label{eq:lem:NC_C_Phi_smooth_decay_one_iter_1}\n\\end{align}\nUsing the ${\\mathbf x^i_t}$ updates in \\cref{alg_local_SGDA_plus},\n\\begin{align}\n & \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_{t+1}}}^2 = \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t} + \\eta_x \\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})}^2 \\nonumber \\\\\n &= \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + \\eta_x^2 \\mathbb E \\norm{\\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})}^2 + 2 \\eta_x \\mathbb E \\left\\langle \\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}, \\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}) \\right\\rangle \\tag{\\cref{assum:bdd_var}} \\\\\n & \\leq \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + \\eta_x^2 \\mathbb E \\norm{\\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + \\frac{\\eta_x^2 \\sigma^2}{n} + 2 \\eta_x \\mathbb E \\left\\langle \\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}, \\nabla_{\\bx} f ({\\mathbf x_t}, {\\mathbf y_t}) \\right\\rangle \\nonumber \\\\\n & \\quad + \\eta_x \\mathbb E \\left[ \\frac{L_f}{2} \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + \\frac{2}{L_f} \\norm{\\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}) - \\nabla_{\\bx} f ({\\mathbf x_t}, {\\mathbf y_t})}^2 \\right] \\tag{\\cref{lem:Young}} \\\\\n & \\leq \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + \\eta_x^2 \\left( \\mathbb E \\norm{\\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 + 
\\frac{\\sigma^2}{n} \\right) + 2 \\eta_x \\mathbb E \\left\\langle \\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}, \\nabla_{\\bx} f ({\\mathbf x_t}, {\\mathbf y_t}) \\right\\rangle \\nonumber \\\\\n & \\quad + \\frac{\\eta_x L_f}{2} \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + 2 \\eta_x L_f \\Delta_{t}^{\\bx,\\by}\n \\label{eq:lem:NC_C_Phi_smooth_decay_one_iter_2}\n\\end{align}\nwhere \\eqref{eq:lem:NC_C_Phi_smooth_decay_one_iter_2} follows from \\cref{assum:smoothness}.\nNext, we bound the inner product in \\eqref{eq:lem:NC_C_Phi_smooth_decay_one_iter_2}.\nUsing $L_f$-smoothness of $f$ (\\cref{assum:smoothness}):\n\\begin{align}\n \\mathbb E \\left\\langle \\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}, \\nabla_{\\bx} f ({\\mathbf x_t}, {\\mathbf y_t}) \\right\\rangle & \\leq \\mathbb E \\left[ f(\\widetilde{{\\mathbf x}}_t, {\\mathbf y_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) + \\frac{L_f}{2} \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\right] \\nonumber \\\\\n & \\leq \\mathbb E \\left[ \\Phi(\\widetilde{{\\mathbf x}}_t) - f({\\mathbf x_t}, {\\mathbf y_t}) + \\frac{L_f}{2} \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\right] \\nonumber \\\\\n & = \\mathbb E \\left[ \\Phi(\\widetilde{{\\mathbf x}}_t) + L_f \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\right] - \\mathbb E f({\\mathbf x_t}, {\\mathbf y_t}) - \\frac{L_f}{2} \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\nonumber \\\\\n & \\leq \\mathbb E \\left[ \\Phi({\\mathbf x_t}) + L_f \\norm{{\\mathbf x_t} - {\\mathbf x_t}}^2 \\right] - \\mathbb E f({\\mathbf x_t}, {\\mathbf y_t}) - \\frac{L_f}{2} \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\tag{by definition of $\\widetilde{{\\mathbf x}}_t$} \\\\\n & \\leq \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) - \\frac{L_f}{2} \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\right]. 
\\label{eq:lem:NC_C_Phi_smooth_decay_one_iter_3}\n\\end{align}\nSubstituting the bounds in \\eqref{eq:lem:NC_C_Phi_smooth_decay_one_iter_2} and \\eqref{eq:lem:NC_C_Phi_smooth_decay_one_iter_3} into \\eqref{eq:lem:NC_C_Phi_smooth_decay_one_iter_1}, we get\n\\begin{align}\n \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_{t+1}}) \\right] &\\leq \\mathbb E \\Phi (\\widetilde{{\\mathbf x}}_t) + L_f \\left[ \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + \\eta_x^2 \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) \\right] + \\frac{\\eta_x L_f^2}{2} \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 + 2 \\eta_x L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\qquad + 2 \\eta_x L_f \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) - \\frac{L_f}{2} \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\right] \\nonumber \\\\\n & \\leq \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right] + \\eta_x^2 L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + 2 \\eta_x L_f^2 \\Delta_{t}^{\\bx,\\by} - \\frac{\\eta_x L_f^2}{2} \\mathbb E \\norm{\\widetilde{{\\mathbf x}}_t - {\\mathbf x_t}}^2 \\nonumber \\\\\n & \\quad + 2 \\eta_x L_f \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n & = \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right] + \\eta_x^2 L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + 2 \\eta_x L_f^2 \\Delta_{t}^{\\bx,\\by} - \\frac{\\eta_x}{8} \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\nonumber \\\\\n & \\quad + 2 \\eta_x L_f \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right], \\nonumber\n\\end{align}\nwhere we use the result $\\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) = 2 L_f ({\\mathbf x} - \\widetilde{{\\mathbf x}})$ from Lemma~2.2 in \\cite{davis19wc_siam}. This concludes the proof.\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_C_Phi_f_diff}]\nConsider $t \\in \\{ kS + 1, \\hdots, (k+1) S \\}$, for some $k \\in \\{ 0, 1, \\hdots, T\/S - 1 \\}$ (we assume $S$ divides $T$).\nLet $\\Tbx_{k}$ be the latest snapshot iterate in \\cref{alg_local_SGDA_plus}. Then\n\\begin{align}\n & \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n &= \\mathbb E \\left[ f({\\mathbf x_t}, {\\mathbf y}^*({\\mathbf x_t})) - f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) \\right] + \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right] + \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n & \\leq \\mathbb E \\left[ f({\\mathbf x_t}, {\\mathbf y}^*({\\mathbf x_t})) - f(\\Tbx_{k}, {\\mathbf y}^*({\\mathbf x_t})) \\right] + \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right] + G_x \\mathbb E \\norm{\\Tbx_{k} - {\\mathbf x_t}} \\nonumber \\\\\n & \\leq 2 G_x \\mathbb E \\norm{\\Tbx_{k} - {\\mathbf x_t}} + \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right]. 
\\label{eq_proof:lem:NC_C_Phi_f_diff_1}\n\\end{align}\nwhere \\eqref{eq_proof:lem:NC_C_Phi_f_diff_1} follows from $G_x$-Lipschitz continuity of $f(\\cdot, {\\mathbf y})$ (\\cref{assum:Lips_cont_x}), and since ${\\mathbf y}^*(\\cdot) \\in \\operatornamewithlimits{arg\\,max}_{\\mathbf y} f(\\cdot, {\\mathbf y})$.\nNext, we see that\n\\begin{align*}\n & G_x \\mathbb E \\norm{\\Tbx_{k} - {\\mathbf x_t}} \\leq \\eta_x S G_x \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}}.\n\\end{align*}\nThis is because ${\\mathbf x^i_t}$ can be updated at most $S$ times between two consecutive updates of $\\widetilde{\\bx}$. Also, at any time $t$,\n\\begin{align*}\n \\mathbb E \\norm{\\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})}^2 & = \\mathbb E \\norm{\\frac{1}{n} \\sumin \\left[ \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}}) - \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}) \\right]}^2 + \\mathbb E \\norm{\\frac{1}{n} \\sumin \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t})}^2 \\\\\n & \\leq \\frac{\\sigma^2}{n} + G_x^2,\n\\end{align*}\nwhere the expectation is conditioned on the past.\nTherefore, from \\eqref{eq_proof:lem:NC_C_Phi_f_diff_1} we get\n\\begin{align}\n & \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S^2 \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right]. \\label{eq_proof:lem:NC_C_Phi_f_diff_2}\n\\end{align}\nNext, we bound $\\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right]$.\nIn Local SGDA+ (\\cref{alg_local_SGDA_plus}), during the updates of $\\{ {\\mathbf y^i_t} \\}$ for $t = kS + 1$ to $(k+1) S$, the corresponding ${\\mathbf x}$ iterate remains constant at $\\Tbx_{k}$. Therefore, for $t = kS + 1$ to $(k+1) S$, the ${\\mathbf y}$ updates behave like steps maximizing the concave function $f(\\Tbx_{k}, \\cdot)$.\nWith $\\{ {\\mathbf y^i_t} \\}$ being averaged every $\\tau$ iterations, these ${\\mathbf y^i_t}$ updates can be interpreted as iterates of a Local Stochastic Gradient Ascent (Local SGA) algorithm.\n\nUsing \\cref{lem:local_SGD_khaled} for Local SGD (\\cref{alg_local_SGD}), and modifying the result for concave function maximization, we get\n\\begin{align*}\n \\frac{1}{S} \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right] & \\leq \\frac{4 \\norm{{\\mathbf y}_{kS+1} - {\\mathbf y}^*(\\Tbx_{k})}^2}{\\eta_y S} + \\frac{20 \\eta_y \\sigma^2}{n} + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\nonumber \\\\\n & \\leq \\underbrace{\\frac{4 D}{\\eta_y S} + \\frac{20 \\eta_y \\sigma^2}{n}}_{\\substack{\\text{error with full} \\\\ \\text{synchronization}}} + \\underbrace{16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right)}_{\\text{error due to local updates}}.
\\nonumber\n\\end{align*}\nSubstituting this bound in \\eqref{eq_proof:lem:NC_C_Phi_f_diff_2}, we get\n\\begin{align*}\n & \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S^2 \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y} + \\frac{20 \\eta_y \\sigma^2 S}{n} + 16 S \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right).\n\\end{align*}\nSumming over $k = 0$ to $T\/S - 1$ and dividing by $T$, we get\n\\begin{align*}\n & \\frac{1}{T} \\sum_{k=0}^{T\/S-1} \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} + \\frac{20 \\eta_y \\sigma^2}{n} + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right).\n\\end{align*}\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_C_consensus_error}]\nThe proof follows analogously to the proof of \\cref{lem:NC_PL_consensus_error}.\n\\end{proof}\n\n\n\\newpage\n\\section{Nonconvex-One-Point-Concave Functions: Local SGDA+ (\\texorpdfstring{\\cref{thm:NC_1PC}}{Theorem 4})} \\label{app:NC_1PC}\n\nThe proof of \\cref{thm:NC_1PC} is similar to the proof of \\cref{thm:NC_C}.\nWe organize this section as follows. First, in \\cref{sec:NC_1PC_int_results} we present some intermediate results, which we use in the proof of \\cref{thm:NC_1PC}. Next, in \\cref{sec:NC_1PC_thm_proof}, we present the proof of \\cref{thm:NC_1PC}, which is followed by the proofs of the intermediate results in \\cref{sec:NC_1PC_int_results_proofs}.\nIn \\cref{app:NC_1PC_tau_1}, we prove convergence for the fully synchronized Local SGDA+.\n\n\\subsection{Intermediate Lemmas} \\label{sec:NC_1PC_int_results}\n\nThe main difference from the nonconvex-concave case lies in the bound on the difference $\\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right]$.\nIn the case of concave functions, as we see in \\cref{lem:NC_C_Phi_f_diff}, this difference can be bounded using standard results for Local SGD (\\cref{lem:local_SGD_khaled}), which have a linear speedup with the number of clients $n$ (notice the $\\frac{\\eta_y \\sigma^2}{n}$ term in \\cref{lem:NC_C_Phi_f_diff}).\nThe corresponding result for the minimization of smooth one-point-convex functions using Local SGD is an open problem.\nRecent works on deterministic and stochastic quasar-convex problems (of which one-point-convex functions are a special case) \\cite{gasnikov17acc_quasar_convex_arxiv, hinder20near_opt_star_convex_colt, jin20quasar_convex_arxiv} have shown that, using SGD, this more general class of functions enjoys convergence rates identical (up to multiplicative constants) to those of smooth convex functions.\nThis leads us to conjecture that Local SGD should achieve identical communication savings, along with linear speedup (as in \\cref{lem:local_SGD_khaled}), for one-point-convex problems.\nHowever, proving this claim formally remains an open problem.\n\nIn the absence of such a result, we bound $\\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right]$ in the next result, but without any linear speedup in $n$.\n\n\\begin{lemma}\n\\label{lem:NC_1PC_Phi_f_diff}\nSuppose the local functions satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:Lips_cont_x}, \\ref{assum:1pc_y}.\nFurther, suppose we choose the step-size $\\eta_y$ such that
$\\eta_y \\leq \\frac{1}{8 L_f \\tau}$.\nThen the iterates generated by \\cref{alg_local_SGDA_plus} satisfy\n\\begin{align}\n \\frac{1}{T} \\sumtT \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] & \\leq 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} + 20 \\eta_y \\sigma^2 + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right). \\nonumber\n\\end{align}\n\\end{lemma}\n\n\n\n\\subsection{Proof of \\texorpdfstring{\\cref{thm:NC_1PC}}{Theorem 4}}\n\\label{sec:NC_1PC_thm_proof}\nFor the sake of completeness, we first state the full statement of \\cref{thm:NC_1PC} here.\n\n\\begin{theorem*}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:Lips_cont_x}, \\ref{assum:1pc_y}.\nFurther, let $\\norm{{\\mathbf y_t}}^2 \\leq D$ for all $t$.\nSuppose the step-size $\\eta_y$ is chosen such that $\\eta_y \\leq \\frac{1}{8 L_f \\tau}$.\nThen the output ${\\bar{\\bx}_T}$ of \\cref{alg_local_SGDA_plus} satisfies\n\\begin{equation}\n \\begin{aligned}\n \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 & \\leq \\mathcal O \\left( \\frac{\\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\eta_y L_f \\sigma^2 + L_f \\left[ \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{D}{\\eta_y S} \\right] \\right) \\\\\n & \\quad + \\mathcal O \\left( L_f^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) + \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right] \\right),\n \n \\end{aligned}\n \\label{eq:thm:NC_1PC}\n\\end{equation}\nwhere {\\small$\\widetilde{\\Delta}_{\\Phi} \\triangleq \\Phi_{1\/2 L_f} ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi_{1\/2 L_f} ({\\mathbf x})$}.\nWith the following parameter values:\n\\begin{align*}\n \\eta_x = \\Theta \\left( \\frac{1}{T^{3\/4}} \\right), \\qquad \\eta_y = \\Theta \\left( \\frac{1}{T^{1\/4}} \\right), \\qquad S = \\Theta \\left( \\sqrt{T} \\right),\n\\end{align*}\nwe get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\frac{1}{T^{1\/4}} \\right) + \\mathcal O \\left( \\frac{1}{T^{3\/4}} \\right) + \\mathcal O \\left( \\frac{(\\tau-1)^2}{T^{1\/2}} \\right) + \\mathcal O \\left( \\frac{(\\tau-1)^2}{T^{3\/2}} \\right).\n \\label{eq:thm:NC_1PC_conv_rate}\n\\end{align}\n\\end{theorem*}\n\n\\begin{cor}\n\\label{cor:NC_1PC_comm_cost}\nTo reach an $\\epsilon$-accurate point, i.e., ${\\mathbf x}$ such that $\\mathbb E \\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) \\| \\leq \\epsilon$,\nthe stochastic gradient complexity of \\cref{alg_local_SGDA_plus} is $\\mathcal O (1\/\\epsilon^8)$.\nThe number of communication rounds required for the same is $T\/\\tau = \\mathcal O ( 1\/\\epsilon^{7} )$.\n\\end{cor}\n\n\\begin{remark}\nNote that the only difference between the convergence rates for NC-1PC functions in \\eqref{eq:thm:NC_1PC_conv_rate}, and for NC-C functions in \\eqref{eq:thm:NC_C_conv_rate} is the absence of $n$ from the leading $\\mathcal O (1\/T^{1\/4})$ term.\nThis implies we do not observe a linear speedup in $n$ in this case.\nAs stated earlier, this limitation stems from the fact that even for simple minimization of one-point-convex functions, proving linear speedup in convergence rate 
in the presence of local updates at the clients is an open problem.\n\\end{remark}\n\n\n\\begin{proof}\nWe sum the result in \\cref{lem:NC_C_Phi_smooth_decay_one_iter} over $t = 0$ to $T-1$ and rearrange the terms to get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\frac{8}{\\eta_x} \\frac{1}{T} \\sumtT \\left( \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right] - \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_{t+1}}) \\right] \\right) + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + 16 L_f \\frac{1}{T} \\sumtT \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] + 16 L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\leq \\frac{8}{\\eta_x T} \\left[ \\Phi_{1\/2L_f} ({\\mathbf x}_0) - \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x}_T) \\right] \\right] + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + 16 L_f^2 \\Delta_{t}^{\\bx,\\by} \\nonumber \\\\\n & \\quad + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} + 20 \\eta_y \\sigma^2 + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right] \\tag{\\cref{lem:NC_1PC_Phi_f_diff}} \\\\\n & \\leq \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + 320 \\eta_y L_f \\sigma^2 + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} \\right] \\nonumber \\\\\n & \\quad + 32 L_f^2 (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 \\left( 1 + \\frac{1}{n} \\right) + 3 \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) + 8 \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\right], \\tag{\\cref{lem:NC_PL_consensus_error}}\n\\end{align}\nwhere $\\widetilde{\\Delta}_{\\Phi} = \\Phi_{1\/2L_f} ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi_{1\/2L_f} ({\\mathbf x})$.\nFollowing a similar technique as in the proof of \\cref{thm:NC_C}, and using the following parameter values,\n\\begin{align*}\n S = \\Theta \\left( \\sqrt{T} \\right), \\qquad \\eta_x = \\Theta \\left( \\frac{1}{T^{3\/4}} \\right), \\qquad \\eta_y = \\Theta \\left( \\frac{1}{T^{1\/4}} \\right),\n\\end{align*}\nwe get the following bound:\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\frac{\\sigma^2 + D + G_x^2}{T^{1\/4}} \\right) + \\mathcal O \\left( \\frac{1}{T^{3\/4}} \\right) + \\mathcal O \\left( \\frac{(\\tau-1)^2}{T^{1\/2}} \\right) + \\mathcal O \\left( \\frac{(\\tau-1)^2}{T^{3\/2}} \\right),\n \\label{eq_proof:thm_NC_1PC_4}\n\\end{align}\nwhich completes the proof.\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\cref{cor:NC_1PC_comm_cost}]\nTo reach an $\\epsilon$-accurate point, i.e., ${\\mathbf x}$ such that $\\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) \\right\\| \\leq \\epsilon$, note that\n\\begin{align*}\n \\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T}) \\right\\| & \\leq \\left[ \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right\\|^2 \\right]^{1\/2} \\nonumber \\\\\n & \\leq \\mathcal O \\left( \\frac{1}{T^{1\/8}} \\right) + \\mathcal O \\left( \\frac{1}{T^{3\/8}} \\right) + \\mathcal O \\left( \\frac{\\tau-1}{T^{1\/4}} \\right) + \\mathcal O \\left( \\frac{\\tau-1}{T^{3\/4}} \\right).\n\\end{align*}\nWe can choose $\\tau
\\leq \\mathcal O \\left( T^{1\/8} \\right)$ without affecting the convergence rate $\\mathcal O \\left( \\frac{1}{T^{1\/8}} \\right)$.\nIn that case, we need $T = \\mathcal O \\left( \\frac{1}{\\epsilon^8} \\right)$ iterations to reach an $\\epsilon$-accurate point.\nThe minimum number of communication rounds required is then\n$$\\mathcal O \\left( \\frac{T}{\\tau} \\right) = \\mathcal O \\left( T^{7\/8} \\right) = \\mathcal O \\left( \\frac{1}{\\epsilon^7} \\right).$$\n\\end{proof}\n\n\n\n\\subsection{Proofs of the Intermediate Lemmas}\n\\label{sec:NC_1PC_int_results_proofs}\n\n\n\\begin{proof}[Proof of \\cref{lem:NC_1PC_Phi_f_diff}]\nThe proof proceeds the same way as for \\cref{lem:NC_C_Phi_f_diff}.\nConsider $t \\in \\{kS + 1, \\hdots, (k+1) S\\}$, where $k \\in \\{0, 1, \\hdots, T\/S - 1\\}$ (recall that we assume $T\/S$ to be a positive integer).\nLet $\\Tbx_{k}$ be the latest snapshot iterate in \\cref{alg_local_SGDA_plus}. From \\eqref{eq_proof:lem:NC_C_Phi_f_diff_2}, we get\n\\begin{align}\n & \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S^2 \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right]. \\label{eq_proof:lem:NC_1PC_Phi_f_diff_2}\n\\end{align}\nNext, we bound $\\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right]$.\nIn \\cref{alg_local_SGDA_plus}, during the updates of $\\{ {\\mathbf y^i_t} \\}$ for $t = kS + 1$ to $(k+1) S$, the corresponding ${\\mathbf x}$ iterate remains constant at $\\Tbx_{k}$. Therefore, for $t = kS + 1$ to $(k+1) S$, the ${\\mathbf y}$ updates behave like steps maximizing the function $f(\\Tbx_{k}, \\cdot)$.\nWith $\\{ {\\mathbf y^i_t} \\}$ being averaged every $\\tau$ iterations, these ${\\mathbf y^i_t}$ updates can be interpreted as iterates of Local Stochastic Gradient Ascent (Local SGA) (\\cref{alg_local_SGD}).\n\nHowever, since the function is no longer concave, but one-point-concave, we lose the linear speedup in \\cref{lem:local_SGD_khaled}, and get\n\\begin{align*}\n \\frac{1}{S} \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right] & \\leq \\frac{4 \\norm{{\\mathbf y}_{kS+1} - {\\mathbf y}^*(\\Tbx_{k})}^2}{\\eta_y S} + 20 \\eta_y \\sigma^2 + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) \\nonumber \\\\\n & \\leq \\underbrace{\\frac{4 D}{\\eta_y S} + 20 \\eta_y \\sigma^2}_{\\substack{\\text{error with full} \\\\ \\text{synchronization}}} + \\underbrace{16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right)}_{\\text{error due to local updates}}.
\\nonumber\n\\end{align*}\nSubstituting this bound in \\eqref{eq_proof:lem:NC_1PC_Phi_f_diff_2}, we get\n\\begin{align*}\n & \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S^2 \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y} + 20 \\eta_y \\sigma^2 S + 16 S \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right).\n\\end{align*}\nSumming over $k = 0$ to $T\/S - 1$ and dividing by $T$, we get\n\\begin{align*}\n & \\frac{1}{T} \\sum_{k=0}^{T\/S-1} \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{4 D}{\\eta_y S} + 20 \\eta_y \\sigma^2 + 16 \\eta_y^2 L_f (\\tau-1)^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right).\n\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{With full synchronization}\n\\label{app:NC_1PC_tau_1}\nIn this subsection, we discuss the case when the clients perform a single local update between successive communications, i.e., $\\tau = 1$.\nThe goal of the results in this subsection is to show that, at least in this special case, linear speedup can be achieved for NC-1PC functions.\n\n\\begin{lemma}\n\\label{lem:NC_1PC_Phi_f_diff_tau_1}\nSuppose the local functions satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:Lips_cont_x}, \\ref{assum:1pc_y}.\nFurther, suppose we choose the step-size $\\eta_y$ such that $\\eta_y \\leq \\frac{1}{2 L_f}$.\nThen the iterates generated by \\cref{alg_local_SGDA_plus} satisfy\n\\begin{align}\n \\frac{1}{T} \\sumtT \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] & \\leq 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{D}{2 \\eta_y S} + \\frac{\\eta_y \\sigma^2}{n}. \\nonumber\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nThe proof follows a similar technique as in \\cref{lem:NC_C_Phi_f_diff}. From \\eqref{eq_proof:lem:NC_C_Phi_f_diff_2}, we get\n\\begin{align}\n & \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\leq 2 \\eta_x G_x S^2 \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right]. \\label{eq_proof:lem:NC_1PC_Phi_f_diff_tau_1}\n\\end{align}\nWe only need to bound the second term in \\eqref{eq_proof:lem:NC_1PC_Phi_f_diff_tau_1}.\nWith $\\tau = 1$, the ${\\mathbf y^i_t}$ updates reduce to minibatch stochastic gradient ascent with batch-size $\\mathcal O (n)$.
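Concretely, since the iterates are averaged after every step when $\\tau = 1$, the update of the averaged iterate ${\\mathbf y_t}$ takes the form\n\\begin{align*}\n {\\mathbf y_{t+1}} = {\\mathbf y_t} + \\eta_y \\frac{1}{n} \\sumin \\nabla_{\\by} f_i (\\Tbx_{k}, {\\mathbf y_t}; {\\xi^i_{t}}),\n\\end{align*}\ni.e., stochastic gradient ascent on $f(\\Tbx_{k}, \\cdot)$ with an unbiased gradient estimate whose variance is at most $\\sigma^2\/n$ (\\cref{assum:bdd_var}). This averaging is the source of the $1\/n$ factor in the bound below.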
Applying the result for stochastic minimization of $\\gamma$-quasar-convex functions via SGD (Theorem~3.3 in \\cite{jin20quasar_convex_arxiv}; for one-point-concave functions, $\\gamma = 1$), we get\n\\begin{align*}\n \\frac{1}{S} \\sum_{t=kS+1}^{(k+1)S} \\mathbb E \\left[ f(\\Tbx_{k}, {\\mathbf y}^*(\\Tbx_{k})) - f(\\Tbx_{k}, {\\mathbf y_t}) \\right] \\leq \\frac{D}{2 \\eta_y S} + \\frac{\\eta_y \\sigma^2}{n},\n\\end{align*}\nwhich completes the proof.\n\\end{proof}\n\n\nNext, we state the convergence result.\n\n\\begin{theorem*}\nSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:Lips_cont_x}, \\ref{assum:1pc_y}.\nFurther, let $\\norm{{\\mathbf y_t}}^2 \\leq D$ for all $t$.\nSuppose the step-size $\\eta_y$ is chosen such that $\\eta_y \\leq \\frac{1}{2 L_f}$.\nThen the output ${\\bar{\\bx}_T}$ of \\cref{alg_local_SGDA_plus} satisfies\n\\begin{equation}\n \\begin{aligned}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\frac{\\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\frac{\\eta_y L_f \\sigma^2}{n} + L_f \\left[ \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{D}{\\eta_y S} \\right] \\right),\n \\end{aligned}\n \\label{eq:thm:NC_1PC_tau_1}\n\\end{equation}\nwhere {\\small$\\widetilde{\\Delta}_{\\Phi} \\triangleq \\Phi_{1\/2 L_f} ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi_{1\/2 L_f} ({\\mathbf x})$}.\nWith the following parameter values:\n\\begin{align*}\n S = \\Theta \\left( \\sqrt{\\frac{T}{n}} \\right), \\qquad \\eta_x = \\Theta \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right), \\qquad \\eta_y = \\Theta \\left( \\frac{n^{3\/4}}{T^{1\/4}} \\right),\n\\end{align*}\nwe get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\frac{1}{(n T)^{1\/4}} \\right) + \\mathcal O \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right). \\nonumber\n\\end{align}\n\\end{theorem*}\n\n\\begin{cor}\n\\label{cor:NC_1PC_comm_cost_tau_1}\nTo reach an $\\epsilon$-accurate point, i.e., ${\\mathbf x}$ such that $\\mathbb E \\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) \\| \\leq \\epsilon$,\nthe stochastic gradient complexity of \\cref{alg_local_SGDA_plus} is $\\mathcal O (1\/(n \\epsilon^8))$.\n\\end{cor}\n\n\n\\begin{proof}\nWe sum the result in \\cref{lem:NC_C_Phi_smooth_decay_one_iter} over $t = 0$ to $T-1$. Since $\\tau = 1$, $\\Delta_{t}^{\\bx,\\by} = 0$ for all $t$.
Rearranging the terms, we get\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\frac{8}{\\eta_x} \\frac{1}{T} \\sumtT \\left( \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right] - \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x_{t+1}}) \\right] \\right) + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) \\nonumber \\\\\n & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + 16 L_f \\frac{1}{T} \\sumtT \\mathbb E \\left[ \\Phi({\\mathbf x_t}) - f({\\mathbf x_t}, {\\mathbf y_t}) \\right] \\nonumber \\\\\n & \\leq \\frac{8}{\\eta_x T} \\left[ \\Phi_{1\/2L_f} ({\\mathbf x}_0) - \\mathbb E \\left[ \\Phi_{1\/2L_f} ({\\mathbf x}_T) \\right] \\right] + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) \\nonumber \\\\\n & \\quad + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{D}{2 \\eta_y S} + \\frac{\\eta_y \\sigma^2}{n} \\right] \\tag{\\cref{lem:NC_1PC_Phi_f_diff_tau_1}} \\\\\n & \\leq \\frac{8 \\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + 8 \\eta_x L_f \\left( G_x^2 + \\frac{\\sigma^2}{n} \\right) + \\frac{16 \\eta_y L_f \\sigma^2}{n} + 16 L_f \\left[ 2 \\eta_x G_x S \\sqrt{G_x^2 + \\frac{\\sigma^2}{n}} + \\frac{D}{2 \\eta_y S} \\right], \\nonumber\n\\end{align}\nwhere $\\widetilde{\\Delta}_{\\Phi} = \\Phi_{1\/2L_f} ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi_{1\/2L_f} ({\\mathbf x})$.\nFollowing a similar technique as in the proof of \\cref{thm:NC_C}, and using the following parameter values,\n\\begin{align*}\n S = \\Theta \\left( \\sqrt{\\frac{T}{n}} \\right), \\qquad \\eta_x = \\Theta \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right), \\qquad \\eta_y = \\Theta \\left( \\frac{n^{3\/4}}{T^{1\/4}} \\right),\n\\end{align*}\nwe get the following bound:\n\\begin{align}\n & \\frac{1}{T} \\sumtT \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t})}^2 \\leq \\mathcal O \\left( \\frac{\\sigma^2 + D + G_x^2}{(n T)^{1\/4}} \\right) + \\mathcal O \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right).
\\nonumber\n\\end{align}\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\cref{cor:NC_1PC_comm_cost_tau_1}]\nWe assume $T \\geq n$.\nTo reach an $\\epsilon$-accurate point, i.e., ${\\mathbf x}$ such that $\\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) \\right\\| \\leq \\epsilon$, since\n\\begin{align*}\n \\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T}) \\right\\| \\leq \\left[ \\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb E \\left\\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x_t}) \\right\\|^2 \\right]^{1\/2} \\leq \\mathcal O \\left( \\frac{1}{(n T)^{1\/8}} \\right) + \\mathcal O \\left( \\frac{n^{1\/8}}{T^{3\/8}} \\right),\n\\end{align*}\nwe need $T = \\mathcal O \\left( \\frac{1}{n \\epsilon^8} \\right)$ iterations.\n\\end{proof}\n\n\n\n\\newpage\n\\section{Additional Experiments}\n\\label{app:add_exp}\n\n\\begin{algorithm}[ht]\n\\caption{Momentum Local SGDA+ (momentum variant of Local SGDA+ \\cite{mahdavi21localSGDA_aistats})}\n\\label{alg_mom_local_SGDA_plus}\n\\begin{algorithmic}[1]\n\t\\STATE{\\textbf{Input:} {\\small${\\mathbf x}_0^i = \\widetilde{\\bx}_0 = {\\mathbf x}_0, {\\mathbf y}_0^i = {\\mathbf y}_0$, $\\mathbf d_{x,0}^i = \\nabla_{\\bx} f_i ({\\mathbf x}^i_0, {\\mathbf y}^i_0; \\xi^i_0)$, $\\mathbf d_{y,0}^i = \\nabla_{\\by} f_i ({\\mathbf x}^i_0, {\\mathbf y}^i_0; \\xi^i_0)$} for all $i \\in [n]$; step-sizes $\\eta_x, \\eta_y$; synchronization intervals $\\tau, S$; $T, k = 0$}\n\t\\FOR[At all clients $i=1,\\hdots, n$]{$t=0$ to $T-1$}\n\t\\STATE{{\\small$\\Tbx^i_{t+\\frac{1}{2}} = {\\mathbf x^i_t} - \\eta_x {\\mathbf d^i_{x,t}}$, $\\ {\\mathbf x^i_{t+1}} = {\\mathbf x^i_t} + \\alpha_t ( \\Tbx^i_{t+\\frac{1}{2}} - {\\mathbf x^i_t} )$}}\n\t\\STATE{{\\small$\\Tby^i_{t+\\frac{1}{2}} = {\\mathbf y^i_t} + \\eta_y {\\mathbf d^i_{y,t}}$, $\\ {\\mathbf y^i_{t+1}} = {\\mathbf y^i_t} + \\alpha_t ( \\Tby^i_{t+\\frac{1}{2}} - {\\mathbf y^i_t} )$}}\n\t\\STATE{Sample minibatch ${\\xi^i_{t+1}}$ from local data}\n\t\\STATE{{\\small${\\mathbf d^i_{x,t+1}} = (1 - \\beta_x \\alpha_t) {\\mathbf d^i_{x,t}} + \\beta_x \\alpha_t \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}})$}}\n\t\\STATE{{\\small${\\mathbf d^i_{y,t+1}} = (1 - \\beta_y \\alpha_t) {\\mathbf d^i_{y,t}} + \\beta_y \\alpha_t \\nabla_{\\by} f_i (\\Tbx_{k}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}})$}}\n\t\\IF{$t+1$ mod $\\tau = 0$}\n\t\\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}} \\}$ to the server}\n\t\\STATE{Server computes averages ${\\mathbf x_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}$, ${\\mathbf y_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_{t+1}}$, and sends to all the clients}\n\t\\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x_{t+1}}$, ${\\mathbf y^i_{t+1}} = {\\mathbf y_{t+1}}$, for all $i \\in [n]$}\n\t\\STATE{${\\mathbf d^i_{x,t+1}} = 0$, ${\\mathbf d^i_{y,t+1}} = 0$, for all $i \\in [n]$}\n\t\\ENDIF\n\t\\IF{$t+1$ mod $S = 0$}\n\t\\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}} \\}$ to the server}\n\t\\STATE{$k \\gets k+1$}\n\t\\STATE{Server computes averages $\\Tbx_{k} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}$, and sends to all the clients}\n\t\\ENDIF\n\t\\ENDFOR\n\t\\STATE{\\textbf{Return: }${\\bar{\\bx}_T}$ drawn uniformly at random from $\\{ {\\mathbf x_t} \\}$, where ${\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$}
\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Fair Classification}\nA batch-size of $32$ is used. The momentum parameter $0.9$ is used only in Momentum Local SGDA (\\cref{alg_NC_momentum}), and corresponds to $\\alpha \\beta$ in the pseudocode.\n\n\\begin{table}[ht]\n\\begin{center}\n\\caption{Parameter values for experiments in \\cref{sec:exp_fair}, across the three successive training stages}\n\\begin{tabular}{llll}\n\\hline\nParameter & Stage 1 & Stage 2 & Stage 3 \\\\\n\\hline\nLearning Rate $(\\eta_y)$ & $0.02$ & $2 \\times 10^{-3}$ & $2 \\times 10^{-4}$ \\\\\nLearning Rate $(\\eta_x)$ & $0.016$ & $1.6 \\times 10^{-3}$ & $1.6 \\times 10^{-4}$ \\\\\nCommunication rounds & 150 & 75 & 75 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Robust Neural Network Training}\nA batch-size of $32$ is used. The momentum parameter $0.9$ is used only in Momentum Local SGDA+ (\\cref{alg_mom_local_SGDA_plus}), and corresponds to $\\alpha \\beta$ in the pseudocode. We set $S = \\tau^2$ in both \\cref{alg_local_SGDA_plus} and \\cref{alg_mom_local_SGDA_plus}. A minimal Python sketch of the client-side updates in \\cref{alg_mom_local_SGDA_plus} is given at the end of this section.\n\n\\begin{table}[ht]\n\\begin{center}\n\\caption{Parameter values for experiments in \\cref{sec:exp_robustnn}, across the three successive training stages}\n\\begin{tabular}{llll}\n\\hline\nParameter & Stage 1 & Stage 2 & Stage 3 \\\\\n\\hline\nLearning Rate $(\\eta_y)$ & $0.02$ & $2 \\times 10^{-3}$ & $2 \\times 10^{-4}$ \\\\\nLearning Rate $(\\eta_x)$ & $0.016$ & $1.6 \\times 10^{-3}$ & $1.6 \\times 10^{-4}$ \\\\\nCommunication rounds & 150 & 75 & 75 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.55\\textwidth]{figures\/RobustNN\/CIFAR10_test_loss.pdf}\n \\caption{Robust test loss for the CIFAR10 experiment shown in \\cref{sec:exp_robustnn}. The test loss in \\cref{eq:exp_robustnn} is computed using a few steps of gradient ascent to find an estimate of ${\\mathbf y}^*$.}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/RobustNN\/fashionMNIST_test_loss.pdf}\n \\label{fig:robustnn_fashionmnist_loss}\n \\end{subfigure}\n \\begin{subfigure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/RobustNN\/fashionMNIST_test_acc.pdf}\n \\label{fig:robustnn_fashionmnist_acc}\n \\end{subfigure}\n \\caption{Comparison of the effects of $\\tau$ on the performance of the Local SGDA and Momentum Local SGDA algorithms, for the robust NN training problem on the FashionMNIST dataset, with the VGG11 model. The figures show the robust test loss and robust test accuracy. \\label{fig:robustnn_fashionmnist}}\n\\end{figure}
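The following is a minimal NumPy sketch (for illustration only; this is not our experiment code) of the per-client updates, periodic averaging, and snapshot steps in \\cref{alg_mom_local_SGDA_plus}. The quadratic objective, noise model, and constants below are illustrative assumptions; \\texttt{grad\\_x} and \\texttt{grad\\_y} stand in for the stochastic gradient oracles of $f_i$, and $\\alpha_t \\equiv \\alpha$ is kept constant with $\\alpha \\beta = 0.9$, matching the momentum parameter used in our experiments.\n\\begin{verbatim}\nimport numpy as np\n\n# Illustrative sketch of Momentum Local SGDA+ on a toy objective\n# f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*y^2\n# (chosen only so the iteration runs stably; for illustration only).\nrng = np.random.default_rng(0)\nn, T, tau, S = 4, 64, 4, 16          # clients, iterations, tau, S = tau**2\neta_x, eta_y = 0.05, 0.05            # step-sizes\nalpha, beta = 0.5, 1.8               # alpha*beta = 0.9 (momentum parameter)\na = rng.uniform(0.5, 1.5, n)\nb = rng.uniform(0.5, 1.5, n)\n\ndef grad_x(i, x, y):                 # stochastic gradient in x (toy noise)\n    return a[i] * x + b[i] * y + 0.1 * rng.standard_normal()\n\ndef grad_y(i, x, y):                 # stochastic gradient in y\n    return b[i] * x - y + 0.1 * rng.standard_normal()\n\nx, y = np.zeros(n), np.zeros(n)\nx_snap = 0.0                         # snapshot iterate\ndx = np.array([grad_x(i, x[i], y[i]) for i in range(n)])\ndy = np.array([grad_y(i, x_snap, y[i]) for i in range(n)])\n\nfor t in range(T):\n    x = x - alpha * eta_x * dx       # descent step in x\n    y = y + alpha * eta_y * dy       # ascent step in y\n    for i in range(n):               # momentum buffer updates\n        dx[i] = (1 - beta * alpha) * dx[i] + beta * alpha * grad_x(i, x[i], y[i])\n        # y-gradients are evaluated at the snapshot (the "+" in SGDA+):\n        dy[i] = (1 - beta * alpha) * dy[i] + beta * alpha * grad_y(i, x_snap, y[i])\n    if (t + 1) % tau == 0:           # communication: average iterates, reset buffers\n        x[:], y[:] = x.mean(), y.mean()\n        dx[:], dy[:] = 0.0, 0.0\n    if (t + 1) % S == 0:             # snapshot update\n        x_snap = x.mean()\n\nprint("final averaged iterates:", x.mean(), y.mean())\n\\end{verbatim}\nSetting $\\tau = 1$ in this sketch recovers the fully synchronized updates analyzed in \\cref{app:NC_1PC_tau_1}.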
\\section{Introduction}\n\\label{sec:intro}\n\nIn recent years, minimax optimization theory has found relevance in several modern machine learning applications, including\nGenerative Adversarial Networks (GANs) \\cite{goodfellow14GANs_neurips, arjovsky17WGANs_icml, gulrajani17improved_WGANs_neurips},\nadversarial training of neural networks\n\\cite{sinha17certifiable_robust_iclr, madry18adversarial_iclr, wang21adversarial_minmax_neurips},\nreinforcement learning \\cite{dai17learning_aistats, dai18sbeed_RL_nonlin_FA_icml},\nand robust optimization \\cite{namkoong16SG_DRO_neurips, namkoong17var_regular_neurips, mohri19agnosticFL_icml}.\nMany of these problems lie outside the domain of classical convex-concave theory \\cite{daskalakis21constr_minmax_sigact, hsieh21limits_minmax_icml}.\n\n\\ificml\n\\begin{table*}[t]\n\t\\begin{center}\n\t\t\\begin{threeparttable}\n\t\t\t\\caption{Comparison of different local-updates-based algorithms proposed to solve \\eqref{eq:problem}, in terms of the number of stochastic gradient computations (per client) and the number of communication rounds needed to reach an $\\epsilon$-stationary solution (see \\cref{defn:stationarity}) of \\eqref{eq:problem}.
\n\t\t\t\tHere, $\\kappa = L_f\/\\mu$ is the condition number (see Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y}).\n\t\t\t}\n\t\t\t\\label{table:comparison}\n\t\t\t\\vskip 0.15in\n\t\t\t\\begin{small}\n\t\t\t\n\t\t\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\t\t\\hline\n\t\t\t\t\tFunction Class & Work & \\makecell{Number of Communication \\\\ Rounds} & \\makecell{Stochastic Gradient \\\\ Complexity} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex- \\\\ \\underline{S}trongly-\\underline{C}oncave \\\\\n\t\t\t\t\t\t\t(NC-SC) \\end{tabular}} &\n\t\t\t\t\tBaseline ($n=1$) \\cite{lin_GDA_icml20}\n\t\t\t\t\t& - & $\\mathcal O ( \\kappa^3\/\\epsilon^{4} )$ \\\\\n\t\t\t\t\t& \\cite{mahdavi21localSGDA_aistats} & {\\small$\\mathcal O ( \\kappa^8\/(n^{1\/3} \\epsilon^4) )$} & {\\small$\\mathcal O ( \\kappa^{12}\/(n \\epsilon^6) )$} \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\\textbf{This Work} (Theorems \\ref{thm:NC_PL}, \\ref{thm:NC_PL_mom}) & \\cellcolor{Gainsboro!60} {\\color{red}$\\mathcal O ( \\kappa^3\/\\epsilon^{3} )$} & \\cellcolor{Gainsboro!60} {\\color{red}$\\mathcal O ( \\kappa^4\/(n \\epsilon^{4}) )$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex-\\underline{PL} \\\\ (NC-PL) \\end{tabular}} &\n\t\t\t\t\t\\cellcolor{Gainsboro!60} \\begin{tabular}[c]{@{}c@{}}\n\t\t\t\t\t\tBaseline ($n=1$) \\\\ \\textbf{This Work} (Theorems \\ref{thm:NC_PL}, \\ref{thm:NC_PL_mom}), \\cite{yang21NCPL_arxiv}\\tnote{a}\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}- & \\cellcolor{Gainsboro!60} {\\color{red}$\\mathcal O ( \\kappa^4\/\\epsilon^{4} )$} \\\\\n\t\t\t\t\t& \\cite{mahdavi21localSGDA_aistats}\\tnote{b} & {\\small$\\mathcal O \\left( \\max \\left\\{ \\frac{\\kappa^2}{\\epsilon^4}, \\frac{\\kappa^4}{n^{2\/3} \\epsilon^4} \\right\\} \\right)$}\n\t\t\t\t\n\t\t\t\t\t& {\\small$\\mathcal O \\left( \\max \\left\\{ \\frac{\\kappa^3}{n \\epsilon^6}, \\frac{\\kappa^6}{n^2 \\epsilon^6} \\right\\} \\right)$} \\\\\n\t\t\t\t\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\\textbf{This Work} (Theorems \\ref{thm:NC_PL}, \\ref{thm:NC_PL_mom}) & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( \\kappa^3\/\\epsilon^{3} )$} & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( \\kappa^4\/(n \\epsilon^{4}) )$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex- \\\\ \\underline{C}oncave (NC-C) \\end{tabular}} &\n\t\t\t\t\tBaseline ($n=1$) \\cite{lin_GDA_icml20}\n\t\t\t\t\t& - & $\\mathcal O ( 1\/\\epsilon^{8} )$ \\\\\n\t\t\t\t\t& \\cite{mahdavi20dist_robustfl_neurips}\\tnote{c} \n\t\t\t\t\t& $\\mathcal O (1\/\\epsilon^{12})$ & $\\mathcal O ( 1\/\\epsilon^{16} )$ \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\\textbf{This Work} (Theorem \\ref{thm:NC_C})\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}\n\t\t\t\t\t\t$\\mathcal O (1\/\\epsilon^7)$}\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}\n\t\t\t\t\t\t$\\mathcal O (1\/(n \\epsilon^8))$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex- \\\\ \\underline{1}-\\underline{P}oint-\\underline{C}oncave \\\\\n\t\t\t\t\t\t\t(NC-1PC) \\end{tabular}} &\n\t\t\t\t\t\\cellcolor{Gainsboro!60} \n\t\t\t\t\tBaseline ($n=1$) \\textbf{This Work} (\\cref{thm:NC_1PC})\n\t\t\t\t\t& \\cellcolor{Gainsboro!60} - \n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( 1\/\\epsilon^{8} )$} 
\\\\\n\t\t\t\t\t& \\cite{mahdavi21localSGDA_aistats} & $\\mathcal O ( n^{1\/6}\/\\epsilon^{8} )$ & $\\mathcal O ( 1\/\\epsilon^{12} )$ \\\\\n\t\t\t\t\t& \\cite{liu20dec_GANs_neurips} & $\\widetilde{\\mathcal O} ( 1\/\\epsilon^{12} )$\\tnote{d} & $\\mathcal O ( 1\/\\epsilon^{12} )$ \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\n\t\t\t\t\t\\textbf{This Work} (\\cref{thm:NC_1PC})\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( 1\/\\epsilon^{7} )$} & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( 1\/\\epsilon^{8} )$} \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60} \\textbf{This Work} ($\\tau = 1$) (\\cref{app:NC_1PC_tau_1})\\tnote{e}\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( 1\/(n \\epsilon^{8}) )$} & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( 1\/(n \\epsilon^{8}) )$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\begin{tablenotes}\n\t\t\t\t\t\\small\n\t\t\t\t\t\\item[a] We came across this work during the preparation of this manuscript.\n\t\t\t\t\t\\item[b] Needs the additional assumption of $G_x$-Lipschitz continuity of $f(x,y)$ in $x$.\n\t\t\t\t\t\\item[c] The loss function is nonconvex in ${\\mathbf x}$ and linear in ${\\mathbf y}$.\n\t\t\t\t\t\\item[d] Decentralized algorithm. Requires $\\mathcal O (\\log(1\/\\epsilon))$ communication rounds with the neighbors after each update step.\n\t\t\t\t\t\\item[e] This is fully synchronized Local SGDA.\n\t\t\t\t\\end{tablenotes}\n\t\t\t\n\t\t\t\\end{small}\n\t\t\t\\vskip -0.1in\n\t\t\\end{threeparttable}\n\t\\end{center}\n\\end{table*}\n\\else\n\\begin{table*}[t]\n\t\\begin{center}\n\t\t\\begin{threeparttable}\n\t\t\t\\caption{Comparison of different local-updates-based algorithms proposed to solve \\eqref{eq:problem}, in terms of the number of stochastic gradient computations (per client) and the number of communication rounds needed to reach an $\\epsilon$-stationary solution (see \\cref{defn:stationarity}) of \\eqref{eq:problem}. 
\n\t\t\t\tHere, $\\kappa = L_f\/\\mu$ is the condition number (see Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y}).\n\t\t\t}\n\t\t\t\\label{table:comparison}\n\t\t\t\\vskip 0.15in\n\t\t\t\\begin{small}\n\t\t\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\t\t\\hline\n\t\t\t\t\tFunction Class & Work & \\makecell{Number of Communication \\\\ Rounds} & \\makecell{Stochastic Gradient \\\\ Complexity} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex- \\\\ \\underline{S}trongly-\\underline{C}oncave \\\\\n\t\t\t\t\t\t\t(NC-SC) \\end{tabular}} &\n\t\t\t\t\tBaseline ($n=1$) \\cite{lin_GDA_icml20}\n\t\t\t\t\t& - & $\\mathcal O \\left( \\frac{\\kappa^3}{\\epsilon^{4}} \\right)$ \\\\\n\t\t\t\t\t& \\cite{mahdavi21localSGDA_aistats} & $\\mathcal O \\left( \\frac{\\kappa^8}{n^{1\/3} \\epsilon^4} \\right)$ & $\\mathcal O \\left( \\frac{\\kappa^{12}}{n \\epsilon^6} \\right)$ \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\\textbf{This Work} (Theorems \\ref{thm:NC_PL}, \\ref{thm:NC_PL_mom}) & \\cellcolor{Gainsboro!60} {\\color{red}$\\mathcal O \\left( \\frac{\\kappa^3}{\\epsilon^{3}} \\right)$} & \\cellcolor{Gainsboro!60} {\\color{red}$\\mathcal O \\left( \\frac{\\kappa^4}{n \\epsilon^{4}} \\right)$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex-\\underline{PL} \\\\ (NC-PL) \\end{tabular}} &\n\t\t\t\t\t\\cellcolor{Gainsboro!60} \\begin{tabular}[c]{@{}c@{}}\n\t\t\t\t\t\tBaseline ($n=1$) \\\\ \\textbf{This Work} (Theorems \\ref{thm:NC_PL}, \\ref{thm:NC_PL_mom}), \\cite{yang21NCPL_arxiv}\\tnote{a}\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}- & \\cellcolor{Gainsboro!60} {\\color{red}$\\mathcal O \\left( \\frac{\\kappa^4}{\\epsilon^{4}} \\right)$} \\\\\n\t\t\t\t\t& \\cite{mahdavi21localSGDA_aistats}\\tnote{b} & $\\mathcal O \\left( \\max \\left\\{ \\frac{\\kappa^2}{\\epsilon^4}, \\frac{\\kappa^4}{n^{2\/3} \\epsilon^4} \\right\\} \\right)$\n\t\t\t\t\n\t\t\t\t\t& $\\mathcal O \\left( \\max \\left\\{ \\frac{\\kappa^3}{n \\epsilon^6}, \\frac{\\kappa^6}{n^2 \\epsilon^6} \\right\\} \\right)$ \\\\\n\t\t\t\t\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\\textbf{This Work} (Theorems \\ref{thm:NC_PL}, \\ref{thm:NC_PL_mom}) & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O \\left( \\frac{\\kappa^3}{\\epsilon^{3}} \\right)$} & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O \\left( \\frac{\\kappa^4}{n \\epsilon^{4}} \\right)$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{3}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex- \\\\ \\underline{C}oncave (NC-C) \\end{tabular}} &\n\t\t\t\t\tBaseline ($n=1$) \\cite{lin_GDA_icml20}\n\t\t\t\t\t& - & $\\mathcal O ( \\epsilon^{-8} )$ \\\\\n\t\t\t\t\t& \\cite{mahdavi20dist_robustfl_neurips}\\tnote{c} \n\t\t\t\t\t& $\\mathcal O (\\epsilon^{-12})$ & $\\mathcal O ( \\epsilon^{-16} )$ \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\\textbf{This Work} (Theorem \\ref{thm:NC_C})\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}\n\t\t\t\t\t\t$\\mathcal O (\\epsilon^{-7})$}\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}\n\t\t\t\t\t\t$\\mathcal O \\left( \\frac{1}{n \\epsilon^8} \\right)$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multirow{6}{*}{\\begin{tabular}[c]{@{}c@{}} \\underline{N}on\\underline{C}onvex- \\\\ \\underline{1}-\\underline{P}oint-\\underline{C}oncave \\\\\n\t\t\t\t\t\t\t(NC-1PC) \\end{tabular}} &\n\t\t\t\t\t\\cellcolor{Gainsboro!60} \n\t\t\t\t\tBaseline ($n=1$) \\textbf{This Work} 
(\\cref{thm:NC_1PC})\n\t\t\t\t\t& \\cellcolor{Gainsboro!60} - \n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( \\epsilon^{-8} )$} \\\\\n\t\t\t\t\t& \\cite{mahdavi21localSGDA_aistats} & $\\mathcal O \\left( \\frac{n^{1\/6}}{\\epsilon^{8}} \\right)$ & $\\mathcal O ( \\epsilon^{-12} )$ \\\\\n\t\t\t\t\t& \\cite{liu20dec_GANs_neurips} & $\\widetilde{\\mathcal O} ( \\epsilon^{-12} )$\\tnote{d} & $\\mathcal O (\\epsilon^{-12})$ \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}\n\t\t\t\t\t\\textbf{This Work} (\\cref{thm:NC_1PC})\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( \\epsilon^{-7} )$} & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O ( \\epsilon^{-8} )$} \\\\\n\t\t\t\t\t& \\cellcolor{Gainsboro!60} \\textbf{This Work} ($\\tau = 1$) (\\cref{app:NC_1PC_tau_1})\\tnote{e}\n\t\t\t\t\t& \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O \\left( \\frac{1}{n \\epsilon^{8}} \\right)$} & \\cellcolor{Gainsboro!60}{\\color{red}$\\mathcal O \\left( \\frac{1}{n \\epsilon^{8}} \\right)$} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\begin{tablenotes}\n\t\t\t\t\t\\small\n\t\t\t\t\t\\item[a] We came across this work during the preparation of this manuscript.\n\t\t\t\t\t\\item[b] Needs the additional assumption of $G_x$-Lipschitz continuity of $f(x,y)$ in $x$.\n\t\t\t\t\t\\item[c] The loss function is nonconvex in ${\\mathbf x}$ and linear in ${\\mathbf y}$.\n\t\t\t\t\t\\item[d] Decentralized algorithm. Requires $\\mathcal O (\\log(1\/\\epsilon))$ communication rounds with the neighbors after each update step.\n\t\t\t\t\t\\item[e] This is fully synchronized Local SGDA.\n\t\t\t\t\\end{tablenotes}\n\t\t\t\\end{small}\n\t\t\n\t\t\\end{threeparttable}\n\t\\end{center}\n\\end{table*}\n\\fi\n\n\nIn this work, we consider the following smooth\nnonconvex minimax distributed optimization problem:\n\\ificml\n{\\small\n\t\\begin{align}\n\t\t\\min_{{\\mathbf x} \\in \\mathbb R^{d_1}} \\max_{{\\mathbf y} \\in \\mathbb R^{d_2}} \\Big\\{ f({\\mathbf x}, {\\mathbf y}) := \\frac{1}{n} \\sum_{i=1}^n f_i({\\mathbf x}, {\\mathbf y}) \\Big\\},\n\t\t\\label{eq:problem}\n\t\\end{align}\n}%\n\\else\n\\begin{align}\n\t\\min_{{\\mathbf x} \\in \\mathbb R^{d_1}} \\max_{{\\mathbf y} \\in \\mathbb R^{d_2}} \\Big\\{ f({\\mathbf x}, {\\mathbf y}) := \\frac{1}{n} \\sum_{i=1}^n f_i({\\mathbf x}, {\\mathbf y}) \\Big\\},\n\t\\label{eq:problem}\n\\end{align}\n\\fi\nwhere $n$ is the number of clients, and $f_i$ represents the local loss function at client $i$, defined as $f_i({\\mathbf x}, {\\mathbf y}) = \\mathbb E_{\\xi_i \\sim \\mathcal D_i} \\left[ L({\\mathbf x}, {\\mathbf y}; \\xi_i) \\right]$.\nHere, $L(\\cdot, \\cdot; \\xi_i)$ denotes the loss for the data point $\\xi_i$, sampled from the local data distribution $\\mathcal D_i$ at client $i$.\nThe functions $\\{ f_i \\}$ are smooth, nonconvex in ${\\mathbf x}$, and concave or nonconcave in ${\\mathbf y}$.\n\nStochastic gradient descent ascent (SGDA) \\cite{heusel17gans_neurips, daskalakis18GANs_iclr}, a simple generalization of SGD \\cite{bottou18optML_siam}, is one of the simplest algorithms used to iteratively solve \\eqref{eq:problem}. \nIt carries out alternate (stochastic) gradient descent\/ascent for the min\/max problem.\nThe exact form of the convergence results depends on the (non)-convexity assumptions that the objective function $f$ in \\eqref{eq:problem} satisfies with respect to $\\mathbf{x}$ and $\\mathbf{y}$.
Examples include strongly-convex-strongly-concave (in ${\\mathbf x}$ and ${\\mathbf y}$, respectively), nonconvex-strongly-concave, and nonconvex-concave problems.\n\nMost of the existing literature on minimax optimization is focused on solving the problem at a single client.\nHowever, in big data applications that often rely on multiple sources or \\textit{clients} for data collection \\cite{xing2016strategies}, transferring the entire dataset to a single \\textit{server} is often undesirable. Doing so might be costly in applications with high-dimensional data, or altogether prohibitive due to the privacy concerns of the clients \\cite{leaute13protecting}.\n\nFederated Learning (FL) is a recent paradigm \\cite{konevcny16federated, kairouz19advancesFL_arxiv} proposed to address this problem.\nIn FL, the edge clients are not required to send their data to the server, which improves the privacy afforded to the clients. Instead, the central server offloads some of its computational burden to the clients, which run the training algorithm on their local data. The models trained locally at the clients are periodically communicated to the server, which aggregates them and returns the updated model to the clients.\nThis infrequent communication with the server leads to communication savings for the clients. \nLocal Stochastic Gradient Descent (Local SGD or FedAvg) \\cite{fedavg17aistats, stich18localSGD_iclr} is one of the most commonly used algorithms for FL.\nTight convergence rates along with communication savings for Local SGD have been shown for smooth convex \\cite{khaled20localSGD_aistats, spiridonoff21comm_eff_SGD_neurips} and nonconvex \\cite{koloskova20unified_localSGD_icml} minimization problems. \nSee \\cref{app:local_SGD} for more details.\nDespite the promise shown by FL in large-scale applications \\cite{yang18FL_google_arxiv, bonawitz19towardsFL_arxiv}, much of the existing work focuses on solving standard minimization problems of the form $\\min_{\\mathbf{x}} g(\\mathbf{x})$.\nThe goal of distributed\/federated minimax optimization algorithms and their analyses is to show that, by using $n$ clients, we can achieve error $\\epsilon$ not only in $n$ times fewer total iterations, but also with fewer rounds of communication with the server. This means that more local updates are performed at the clients, while the coordination with the central server is less frequent.\nThis $n$-fold saving in computation at the clients is referred to as \\textit{linear speedup} in the FL literature \\cite{jiang18linear_neurips, yu19icml_momentum, yang21partial_client_iclr}.\nSome recent works have attempted to achieve this goal for convex-concave \\cite{mahdavi20dist_robustfl_neurips, hou21FedSP_arxiv, liao21local_AdaGrad_CC_arxiv}, for nonconvex-concave \\cite{mahdavi20dist_robustfl_neurips}, and for nonconvex-nonconcave problems \\cite{mahdavi21localSGDA_aistats, reisizadeh20robustfl_neurips, guo20DeepAUC_icml, yuan21FedDeepAUC_icml}.\n\n\nHowever, in the context of stochastic smooth nonconvex minimax problems, the convergence guarantees of the existing distributed\/federated approaches are, to the best of our knowledge, either asymptotic \\cite{shen21fedmm_arxiv} or suboptimal \\cite{mahdavi21localSGDA_aistats}.\nIn particular, they do not reduce to the existing baseline results for the centralized minimax problem $(n=1)$.
\nSee \\cref{table:comparison}.\n\n\n\\paragraph{Our Contributions.}\nIn this paper, we consider the following four classes of minimax optimization problems and refer to them using the abbreviations given below:\n\\ificml\n\\newline\n1) NC-SC: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{S}trongly-\\underline{C}oncave in ${\\mathbf y}$,\n2) NC-PL: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{PL}-condition in ${\\mathbf y}$ (\\cref{assum:PL_y}),\n3) NC-C: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{C}oncave in ${\\mathbf y}$,\n4) NC-1PC: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{1}-\\underline{P}oint-\\underline{C}oncave in ${\\mathbf y}$ (\\cref{assum:1pc_y}).\n\\newline\n\\else\n\\begin{enumerate}\n\t\\item NC-SC: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{S}trongly-\\underline{C}oncave in ${\\mathbf y}$,\n\t\\item NC-PL: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{PL}-condition in ${\\mathbf y}$ (\\cref{assum:PL_y}),\n\t\\item NC-C: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{C}oncave in ${\\mathbf y}$,\n\t\\item NC-1PC: \\underline{N}on\\underline{C}onvex in ${\\mathbf x}$, \\underline{1}-\\underline{P}oint-\\underline{C}oncave in ${\\mathbf y}$ (\\cref{assum:1pc_y}).\n\\end{enumerate}\n\\fi\nFor each of these problems, we improve the convergence analysis of existing algorithms or propose a new local-update-based algorithm that gives a better sample complexity. \nA key feature of our results is the linear speedup in the sample complexity with respect to the number of clients, while also providing communication savings. We make the following main contributions, also summarized in \\cref{table:comparison}.\n\n\\begin{itemize}[leftmargin=*]\n\t\\setlength\\itemsep{-0.5em}\n\t\\item For NC-PL functions (\\cref{sec:NC_PL}), we prove that Local SGDA\n\n\thas {\\small$\\mathcal O (\\kappa^4\/(n \\epsilon^{4}))$} gradient complexity, and {\\small$\\mathcal O (\\kappa^3\/\\epsilon^{3})$} communication cost (\\cref{thm:NC_PL}).\n\n\tThe results are optimal in $\\epsilon$.\\footnote{Even for simple nonconvex function minimization, the complexity guarantee cannot be improved beyond {\\small$\\mathcal O (1\/\\epsilon^{4})$} \\cite{arjevani19lower_stoch_NC_arxiv}. Further, our results match the complexity and communication guarantees for simple smooth nonconvex minimization with local SGD \\cite{yu19icml_momentum}.}\n\tTo the best of our knowledge, this complexity guarantee does not exist in the prior literature even for $n=1$.\\footnote{During the preparation of this manuscript, we came across the centralized minimax work \\cite{yang21NCPL_arxiv}, which achieves {\\small$\\mathcal O (\\kappa^4\/ \\epsilon^{4})$} complexity for NC-PL functions. However, our work is more general since we incorporate local updates at the clients.}\n\t\\item Since the PL condition is weaker than strong concavity, our result also extends to NC-SC functions.\n\tTo the best of our knowledge, ours is the first work to prove optimal (in $\\epsilon$) guarantees for SGDA in the case of NC-SC functions, with $\\mathcal O (1)$ batch-size. 
This way, we improve the result in \\cite{lin_GDA_icml20} which necessarily requires {\\small$\\mathcal O (1\/\\epsilon^2)$} batch-sizes.\n\tIn the federated setting, ours is the first work to achieve the optimal (in $\\epsilon$) guarantee.\n\t\\item We propose a novel algorithm (Momentum Local SGDA - \\cref{alg_NC_momentum}), which achieves the same theoretical guarantees as Local SGDA for NC-PL functions\n\t(\\cref{thm:NC_PL_mom}), and also outperforms Local SGDA in experiments.\n\t\\item For NC-C functions (\\cref{sec:NC_C}), we utilize Local SGDA+ algorithm proposed in \\cite{mahdavi21localSGDA_aistats}\\footnote{\\cite{mahdavi21localSGDA_aistats} does not analyze NC-C functions.}, and prove {\\small$\\mathcal O (1\/(n \\epsilon^{8}))$} gradient complexity, and {\\small$\\mathcal O (1\/\\epsilon^{7})$} communication cost (\\cref{thm:NC_C}).\n\tThis implies linear speedup over the $n=1$ result \\cite{lin_GDA_icml20}.\n\t\\item For NC-1PC functions (\\cref{sec:NC_1PC}), using an improved analysis for Local SGDA+,\n\twe prove {\\small$\\mathcal O (1\/\\epsilon^{8})$} gradient complexity, and {\\small$\\mathcal O (1\/\\epsilon^{7})$} communication cost (\\cref{thm:NC_1PC}).\n\tTo the best of our knowledge, this result is the first to generalize the existing {\\small$\\mathcal O (1\/\\epsilon^{8})$} complexity guarantee of SGDA (proved for NC-C problems in \\cite{lin_GDA_icml20}), to the more general class of NC-1PC functions. \n\\end{itemize}\n\n\n\\section{Related Work}\n\\label{sec:related_work}\n\n\\subsection{Single client minimax}\n\n\nUntil recently, the minimax optimization literature was focused largely on convex-concave problems \\cite{nemirovski04prox_siam, nedic09subgradient_jota}.\nHowever, since the advent of machine learning applications such as GANs \\cite{goodfellow14GANs_neurips}, and adversarial training of neural networks (NNs) \\cite{madry18adversarial_iclr}, the more challenging problems of nonconvex-concave and nonconvex-nonconcave minimax optimization have attracted increasing attention.\n\n\\paragraph{Nonconvex-Strongly Concave (NC-SC) Problems.}\nFor stochastic NC-SC problems, \\cite{lin_GDA_icml20} proved {\\small$\\mathcal O (\\kappa^3\/\\epsilon^{4})$} stochastic gradient complexity for SGDA.\nHowever, the analysis necessarily requires mini-batches of size {\\small$\\Theta (\\epsilon^{-2})$}.\nUtilizing momentum, \\cite{qiu20single_timescale_ncsc} achieved the same {\\small$\\mathcal O (\\epsilon^{-4})$} convergence rate with {\\small$\\mathcal O (1)$} batch-size.\n\\cite{qiu20single_timescale_ncsc, luo20SREDA_ncsc_neurips} utilize variance-reduction to further improve the complexity to {\\small$\\mathcal O (\\kappa^3\/\\epsilon^{3})$}.\nHowever, whether these guarantees can be achieved in the federated setting, with multiple local updates at the clients, is an open question.\nIn this paper, we answer this question in the affirmative.\n\n\\paragraph{Nonconvex-Concave (NC-C) Problems.}\nThe initial algorithms \\cite{nouiehed19minimax_neurips19, thekumparampil19NC_C_neurips, rafique18WCC_oms} for deterministic NC-C problems all have a nested-loop structure. For each ${\\mathbf x}$-update, the inner maximization with respect to ${\\mathbf y}$ is approximately solved. 
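Schematically, one outer iteration of such a nested-loop scheme reads\n\\begin{align*}\n\t{\\mathbf y}_{k+1} \\approx \\operatornamewithlimits{arg\\,max}_{{\\mathbf y}} f({\\mathbf x}_k, {\\mathbf y}), \\qquad {\\mathbf x}_{k+1} = {\\mathbf x}_k - \\eta_x \\nabla_{\\bx} f({\\mathbf x}_k, {\\mathbf y}_{k+1}),\n\\end{align*}\nwhere the inner maximization is itself solved by an iterative subroutine up to a prescribed accuracy; this is only a generic template, and the cited works differ in the choice of the inner solver and its accuracy schedule.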
Single-loop algorithms have been proposed in subsequent works by \\cite{tomluo_1_loop_ncc_neurips20, lan_unified_ncc_arxiv20}.\nHowever, for stochastic problems, to the best of our knowledge, \\cite{lin_GDA_icml20} is the only work to have analyzed a single-loop algorithm (SGDA), which achieves {\\small$\\mathcal O (1\/\\epsilon^{8})$} complexity.\n\n\\paragraph{Nonconvex-Nonconcave (NC-NC) Problems.}\nRecent years have seen extensive research on NC-NC problems \\cite{mertikopoulos18optMD_SP_iclr, diakonikolas21NC_NC_aistats, daskalakis21constr_minmax_sigact}.\nHowever, of immediate interest to us are two special classes of functions.\n\\newline\n1) The Polyak-{\\L}ojasiewicz (PL) condition \\cite{polyak63PL} is weaker than strong concavity, and does not even require the objective to be concave.\nRecently, the PL condition has been shown to hold in overparameterized neural networks \\cite{charles18generalization_icml, liu22overparameter_NN_elsevier}.\nDeterministic NC-PL problems have been analyzed in \\cite{nouiehed19minimax_neurips19, yang20NCNC_VR_neurips, fiez21NC_PL_SC_neurips}.\nDuring the preparation of this manuscript, we came across \\cite{yang21NCPL_arxiv} which solves stochastic NC-PL minimax problems. \nStochastic alternating gradient descent ascent (Stoc-AGDA) is proposed, which achieves {\\small$\\mathcal O (\\kappa^4\/\\epsilon^{4})$} iteration complexity. \nFurther, another single-loop algorithm, \\textit{smoothed GDA}, is proposed, which improves the dependence on $\\kappa$ to {\\small$\\mathcal O (\\kappa^2\/\\epsilon^{4})$}.\n\\newline\n2) One-Point-Concavity\/convexity (1PC) has been observed in the dynamics of SGD for optimizing neural networks \\cite{li17relu_neurips, kleinberg18icml}.\nDeterministic and stochastic optimization guarantees for 1PC functions have been proved in \\cite{gasnikov17acc_quasar_convex_arxiv, hinder20near_opt_star_convex_colt, jin20quasar_convex_arxiv}.\nNC-1PC minimax problems have been considered in \\cite{mertikopoulos18optMD_SP_iclr} with asymptotic convergence results, and in \\cite{liu20dec_GANs_neurips}, with $\\mathcal O (1\/\\epsilon^{12})$ gradient complexity.\nAs we show in \\cref{sec:NC_1PC}, this complexity result can be significantly improved.\n\n\n\\subsection{Distributed\/Federated Minimax}\n\nRecent years have seen a surge of interest in distributed minimax problems, driven by the need to train neural networks over multiple clients \\cite{liu20dec_GANs_neurips, chen20dist_GAN_quantize_arxiv}.\nSaddle-point problems and, more generally, variational inequalities have been studied extensively in the context of decentralized optimization by \\cite{beznosikov20dist_SP_arxiv, gasnikov21dec_stoch_EG_VI_arxiv, beznosikov21dist_sp_neurips, rogozin21dec_local_global_var_cc_arxiv, xian21dec_ncsc_storm_neurips}.\n\nLocal updates-based algorithms for convex-concave problems have been analyzed in \\cite{mahdavi20dist_robustfl_neurips, hou21FedSP_arxiv, liao21local_AdaGrad_CC_arxiv}.\n\\cite{reisizadeh20robustfl_neurips} considers PL-PL and NC-PL minimax problems in the federated setting.\nHowever, the clients only communicate the min variables to the server. \nThe limited client availability problem of FL is considered for NC-PL problems in \\cite{xie21NC_PL_FL_arxiv}. 
\nHowever, the server is responsible for additional computations, to compute the global gradient estimates.\nIn our work, we consider a more general setting, where both the min and max variables need to be communicated to the server periodically.\nThe server is more limited in functionality, and only computes and returns the averages to the clients.\n\\cite{mahdavi20dist_robustfl_neurips} shows a suboptimal convergence rate for nonconvex-linear minimax problems (see \\cref{table:comparison}). \nWe consider more general NC-C problems, improve the convergence rate, and show linear speedup in $n$.\n\n\n\\paragraph{Comparison with \\cite{mahdavi21localSGDA_aistats}.}\nThe work most closely related to ours is \\cite{mahdavi21localSGDA_aistats}. \nThe authors consider three classes of smooth nonconvex minimax functions: NC-SC, NC-PL, and NC-1PC. \nHowever, the gradient complexity and communication cost results achieved are suboptimal.\nFor all three classes of functions, we provide tighter analyses, resulting in improved gradient complexity with improved communication savings.\nSee \\cref{table:comparison} for a comprehensive comparison of results.\n\n\n\\section{Preliminaries}\n\\label{sec:prelim}\n\n\\ificml\n\\paragraph{Notations.} Throughout the paper, we let $\\norm{\\cdot}$ denote the Euclidean norm $\\norm{\\cdot}_2$.\nGiven a positive integer $m$, the set of numbers $\\{ 1, 2, \\hdots, m \\}$ is denoted by $[m]$. Vectors at client $i$ are denoted with superscript $i$, for e.g., ${\\mathbf x}^i$.\nVectors at time $t$ are denoted with subscript $t$, for e.g., ${\\mathbf y}_t$.\nAverage across clients appear without a superscript, for e.g., {\\small${\\mathbf x_t} = \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$}.\nWe define the gradient vector as {\\small$\\nabla f_i({\\mathbf x}, {\\mathbf y}) = \\left[ \\nabla_{\\bx} f_i({\\mathbf x}, {\\mathbf y})^{\\top}, \\nabla_{\\by} f_i({\\mathbf x}, {\\mathbf y})^{\\top} \\right]^{\\top}$}.\nFor a generic function {\\small$g({\\mathbf x}, {\\mathbf y})$}, we denote its stochastic gradient vector as {\\small$\\nabla g({\\mathbf x}, {\\mathbf y}; \\xi^i) = \\left[ \\nabla_{\\bx} g({\\mathbf x}, {\\mathbf y}; \\xi^i)^{\\top}, \\nabla_{\\by} g({\\mathbf x}, {\\mathbf y}; \\xi^i)^{\\top} \\right]^{\\top}$}, where $\\xi^i$ denotes the randomness.\n\\else\n\\paragraph{Notations.} Throughout the paper, we let $\\norm{\\cdot}$ denote the Euclidean norm $\\norm{\\cdot}_2$.\nGiven a positive integer $m$, the set of numbers $\\{ 1, 2, \\hdots, m \\}$ is denoted by $[m]$. Vectors at client $i$ are denoted with superscript $i$, for e.g., ${\\mathbf x}^i$.\nVectors at time $t$ are denoted with subscript $t$, for e.g., ${\\mathbf y}_t$.\nAverage across clients appear without a superscript, for e.g., ${\\mathbf x_t} = \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$.\nWe define the gradient vector as $\\nabla f_i({\\mathbf x}, {\\mathbf y}) = \\left[ \\nabla_{\\bx} f_i({\\mathbf x}, {\\mathbf y})^{\\top}, \\nabla_{\\by} f_i({\\mathbf x}, {\\mathbf y})^{\\top} \\right]^{\\top}$.\nFor a generic function $g({\\mathbf x}, {\\mathbf y})$, we denote its stochastic gradient vector as $\\nabla g({\\mathbf x}, {\\mathbf y}; \\xi^i) = \\left[ \\nabla_{\\bx} g({\\mathbf x}, {\\mathbf y}; \\xi^i)^{\\top}, \\nabla_{\\by} g({\\mathbf x}, {\\mathbf y}; \\xi^i)^{\\top} \\right]^{\\top}$, where $\\xi^i$ denotes the randomness.\n\\fi\n\n\\paragraph{Convergence Metrics.} Since the loss function $f$ is nonconvex, we cannot prove convergence to a global saddle point. 
\nWe instead prove convergence to an \\textit{approximate} stationary point, which is defined next.\n\n\\begin{definition}[$\\epsilon$-Stationarity]\n\t\\label{defn:stationarity}\n\tA point $\\widetilde{\\bx}$ is an $\\epsilon$-stationary point of a differentiable function $g$ if $\\norm{\\nabla g (\\widetilde{\\bx})} \\leq \\epsilon$.\n\\end{definition}\n\n\\begin{definition}\n\tStochastic Gradient (SG) complexity is the total number of gradients computed by a single client during the course of the algorithm.\n\\end{definition}\n\nSince all the algorithms analyzed in this paper are single-loop and use a $\\mathcal O (1)$ batchsize, if the algorithm runs for $T$ iterations, then the SG complexity is $\\mathcal O (T)$.\n\nDuring a communication round, the clients send their local vectors to the server, where the aggregate is computed, and communicated back to the clients.\nConsequently, we define the number of communication rounds as follows.\n\n\\begin{definition}[Communication Rounds]\n\tThe number of communication rounds in an algorithm is the number of times clients communicate their local models to the server.\n\\end{definition}\nIf the clients perform $\\tau$ local updates between successive communication rounds, the total number of communication rounds is $\\lceil T\/\\tau \\rceil$.\nNext, we discuss the assumptions that will be used throughout the rest of the paper.\n\n\\begin{assump}[Smoothness]\n\t\\label{assum:smoothness}\n\tEach local function $f_i$ is differentiable and has Lipschitz continuous gradients.\n\tThat is, there exists a constant $L_f > 0$ such that at each client $i \\in [n]$, for all ${\\mathbf x}, {\\mathbf x}' \\in \\mathbb R^{d_1}$ and ${\\mathbf y}, {\\mathbf y}' \\in \\mathbb R^{d_2}$,\n\t\\ificml\n\t\\newline\n\t$\\left\\| \\nabla f_i({\\mathbf x}, {\\mathbf y}) - \\nabla f_i({\\mathbf x}', {\\mathbf y}') \\right\\| \\leq L_f \\left\\| ({\\mathbf x}, {\\mathbf y}) - ({\\mathbf x}', {\\mathbf y}') \\right\\|$.\n\t\\else\n\t\\begin{align*}\n\t\t\\left\\| \\nabla f_i({\\mathbf x}, {\\mathbf y}) - \\nabla f_i({\\mathbf x}', {\\mathbf y}') \\right\\| \\leq L_f \\left\\| ({\\mathbf x}, {\\mathbf y}) - ({\\mathbf x}', {\\mathbf y}') \\right\\|.\n\t\\end{align*}\n\t\\fi\n\\end{assump}\n\n\\begin{assump}[Bounded Variance]\n\t\\label{assum:bdd_var}\n\tThe stochastic gradient oracle at each client is unbiased with bounded variance, i.e., there exists a constant $\\sigma > 0$ such that at each client $i \\in [n]$, for all ${\\mathbf x}, {\\mathbf y}$,\n\t\\ificml\n\t$\\mathbb E_{\\xi_i} [ \\nabla f_i({\\mathbf x}, {\\mathbf y}; \\xi^i) ] = \\nabla f_i({\\mathbf x}, {\\mathbf y})$, and $\\mathbb E_{\\xi_i} \\| \\nabla f_i({\\mathbf x}, {\\mathbf y}; \\xi^i) - \\nabla f_i({\\mathbf x}, {\\mathbf y}) \\|^2 \\leq \\sigma^2.$\n\t\\else\n\t\\begin{align*}\n\t\t\\mathbb E_{\\xi_i} [ \\nabla f_i({\\mathbf x}, {\\mathbf y}; \\xi^i) ] &= \\nabla f_i({\\mathbf x}, {\\mathbf y}), \\\\\n\t\t\\mathbb E_{\\xi_i} \\| \\nabla f_i({\\mathbf x}, {\\mathbf y}; \\xi^i) - \\nabla f_i({\\mathbf x}, {\\mathbf y}) \\|^2 & \\leq \\sigma^2.\n\t\\end{align*}\n\t\\fi\n\t\n\\end{assump}\n\n\\begin{assump}[Bounded Heterogeneity]\n\t\\label{assum:bdd_hetero}\n\tTo measure the heterogeneity of the local functions $\\{ f_i({\\mathbf x}, {\\mathbf y}) \\}$ across the clients, we define\n\t\\ificml\n\t\\newline\n\t{\\small$\\varsigma_x^2 = \\sup_{{\\mathbf x} \\in \\mathbb R^{d_1}, {\\mathbf y} \\in \\mathbb R^{d_2}} \\frac{1}{n} \\textstyle \\sum_{i=1}^n \\left\\| \\nabla_{\\bx} f_i({\\mathbf x}, 
{\\mathbf y}) - \\nabla_{\\bx} f({\\mathbf x}, {\\mathbf y}) \\right\\|^2,$}\n\t\\newline\n\t{\\small$\\varsigma_y^2 = \\sup_{{\\mathbf x} \\in \\mathbb R^{d_1}, {\\mathbf y} \\in \\mathbb R^{d_2}} \\frac{1}{n} \\textstyle \\sum_{i=1}^n \\left\\| \\nabla_{\\by} f_i({\\mathbf x}, {\\mathbf y}) - \\nabla_{\\by} f({\\mathbf x}, {\\mathbf y}) \\right\\|^2.$}\n\tWe assume that $\\varsigma_x$ and $\\varsigma_y$ are bounded.\n\t\\else\n\t\\newline\n\t\\begin{align*}\n\t\t\\varsigma_x^2 &= \\sup_{{\\mathbf x} \\in \\mathbb R^{d_1}, {\\mathbf y} \\in \\mathbb R^{d_2}} \\frac{1}{n} \\textstyle \\sum_{i=1}^n \\left\\| \\nabla_{\\bx} f_i({\\mathbf x}, {\\mathbf y}) - \\nabla_{\\bx} f({\\mathbf x}, {\\mathbf y}) \\right\\|^2, \\\\\n\t\t\\varsigma_y^2 &= \\sup_{{\\mathbf x} \\in \\mathbb R^{d_1}, {\\mathbf y} \\in \\mathbb R^{d_2}} \\frac{1}{n} \\textstyle \\sum_{i=1}^n \\left\\| \\nabla_{\\by} f_i({\\mathbf x}, {\\mathbf y}) - \\nabla_{\\by} f({\\mathbf x}, {\\mathbf y}) \\right\\|^2.\n\t\\end{align*}\n\tWe assume that $\\varsigma_x$ and $\\varsigma_y$ are bounded.\n\t\\fi\n\\end{assump}\n\n\n\\section{Algorithms and their Convergence Analyses}\n\\label{sec:algo_theory}\n\nIn this section, we discuss local updates-based algorithms to solve nonconvex-concave and nonconvex-nonconcave minimax problems.\nEach client runs multiple update steps on its local models using local stochastic gradients.\nPeriodically, the clients communicate their local models to the server, which returns the average model.\nIn this section, we demonstrate that this leads to communication savings at the clients, without sacrificing the convergence guarantees.\n\n\nIn the subsequent subsections, for each class of functions considered (NC-PL, NC-C, NC-1PC), we first discuss an algorithm.\nNext, we present the convergence result, followed by a discussion of the gradient complexity and the communication cost needed to reach an $\\epsilon$ stationary point.\nSee \\cref{table:comparison} for a summary of our results, along with comparisons with the existing literature.\n\n\n\\subsection{Nonconvex-PL (NC-PL) Problems} \\label{sec:NC_PL}\n\nIn this subsection, we consider smooth nonconvex functions which satisfy the following assumption.\n\n\\begin{assump}[Polyak {\\L}ojasiewicz (PL) Condition in ${\\mathbf y}$]\n\t\\label{assum:PL_y}\n\tThe function $f$ satisfies $\\mu$-PL condition in ${\\mathbf y}$ ($\\mu > 0$), if for any fixed ${\\mathbf x}$: 1) $\\max_{{\\mathbf y}'} f({\\mathbf x}, {\\mathbf y}')$ has a nonempty solution set; \n\t2) {\\small$\\norm{\\nabla_{\\by} f({\\mathbf x}, {\\mathbf y})}^2 \\geq 2 \\mu ( \\max_{{\\mathbf y}'} f({\\mathbf x}, {\\mathbf y}') - f({\\mathbf x}, {\\mathbf y}) )$}, for all ${\\mathbf y}$.\n\\end{assump}\n\nFirst, we present an improved convergence result for Local SGDA (\\cref{alg_local_SGDA}), proposed in \\cite{mahdavi21localSGDA_aistats}. Then we propose a novel momentum-based algorithm (\\cref{alg_NC_momentum}), which achieves the same convergence guarantee, and has improved empirical performance (see \\cref{sec:exp}).\n\n\n\\paragraph{Improved Convergence of Local SGDA.}\nLocal Stochastic Gradient Descent Ascent (SGDA) (\\cref{alg_local_SGDA}) proposed in \\cite{mahdavi21localSGDA_aistats}, is a simple extension of the centralized algorithm SGDA \\cite{lin_GDA_icml20}, to incorporate local updates at the clients. 
At each time $t$, clients update their local models $\\{ {\\mathbf x^i_t}, {\\mathbf y^i_t} \\}$ using local stochastic gradients $\\{ \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}}), \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}}) \\}$.\nOnce every $\\tau$ iterations, the clients communicate $\\{ {\\mathbf x^i_t}, {\\mathbf y^i_t} \\}$ to the server, which computes the average models $\\{ {\\mathbf x_t}, {\\mathbf y_t} \\}$, and returns these to the clients.\nNext, we discuss the finite-time convergence of \\cref{alg_local_SGDA}.\nWe prove convergence to an approximate stationary point of the envelope function $\\Phi({\\mathbf x}) = \\max_{\\mathbf y} f({\\mathbf x}, {\\mathbf y})$.\\footnote{Under Assumptions \\ref{assum:smoothness}, \\ref{assum:PL_y}, $\\Phi$ is smooth \\cite{nouiehed19minimax_neurips19}.}\n\n\\begin{algorithm}[ht]\n\t\\caption{Local SGDA \\cite{mahdavi21localSGDA_aistats}}\n\t\\label{alg_local_SGDA}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE{\\textbf{Input: }{\\small${\\mathbf x}_0^i = {\\mathbf x}_0, {\\mathbf y}_0^i = {\\mathbf y}_0$}, for all $i \\in [n]$; step-sizes $\\eta_x, \\eta_y$; $\\tau$, $T$}\n\t\t\\FOR[At all clients $i=1,\\hdots, n$]{$t=0$ to $T-1$}\n\t\t\\STATE{Sample minibatch ${\\xi^i_{t}}$ from local data}\n\t\t\\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x^i_t} - \\eta_x \\nabla_{\\bx} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})$}\n\t\t\\STATE{${\\mathbf y^i_{t+1}} = {\\mathbf y^i_t} + \\eta_y \\nabla_{\\by} f_i ({\\mathbf x^i_t}, {\\mathbf y^i_t}; {\\xi^i_{t}})$}\n\t\t\\IF{$t+1$ mod $\\tau = 0$}\n\t\t\\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}} \\}$ to the server}\n\t\t\\STATE{Server computes averages ${\\mathbf x_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}$, \n\t\t\t${\\mathbf y_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_{t+1}}$, and sends to all the clients}\n\t\t\\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x_{t+1}}$, ${\\mathbf y^i_{t+1}} = {\\mathbf y_{t+1}}$, for all $i \\in [n]$}\n\t\t\\ENDIF\n\t\t\\ENDFOR\n\t\t\\STATE{\\textbf{Return: }${\\bar{\\bx}_T}$ drawn uniformly at random from $\\{ {\\mathbf x_t} \\}_{t=1}^T$, where ${\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{theorem}\n\t\\label{thm:NC_PL}\n\tSuppose the local loss functions $\\{ f_i \\}_i$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, and the global function $f$ satisfies \\cref{assum:PL_y}.\n\tSuppose the step-sizes $\\eta_x, \\eta_y$ are chosen such that {\\small$\\eta_y \\leq \\frac{1}{8 L_f \\tau}$, $\\frac{\\eta_x}{\\eta_y} \\leq \\frac{1}{8 \\kappa^2}$}, where {\\small$\\kappa = L_f\/\\mu$} is the condition number.\n\tThen, for the output ${\\bar{\\bx}_T}$ of \\cref{alg_local_SGDA}, the following holds.\n\t\\ificml\n\t\\vspace{-3mm}\n\t{\\small\n\t\t\\begin{equation}\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2 \\leq \\underbrace{\\mathcal O \\left( \\kappa^2 \\left[ \\frac{\\Delta_{\\Phi}}{\\eta_y T} + \\frac{\\eta_y \\sigma^2}{n} \\right] \\right)}_{\\text{Error with full synchronization}} \\\\\n\t\t\t\t& \\qquad + \\underbrace{\\mathcal O \\left( \\kappa^2 (\\tau-1)^2 \\left[ \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\eta_x^2 \\varsigma_x^2 \\right] \\right)}_{\\text{Error due to local 
updates}},\n\t\t\t\\end{aligned}\n\t\t\t\\label{eq:thm:NC_PL}\n\t\t\\end{equation}\n\t}%\n\t\\else\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t& \\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2 \\leq \\underbrace{\\mathcal O \\left( \\kappa^2 \\left[ \\frac{\\Delta_{\\Phi}}{\\eta_y T} + \\frac{\\eta_y \\sigma^2}{n} \\right] \\right)}_{\\text{Error with full synchronization}} + \\underbrace{\\mathcal O \\left( \\kappa^2 (\\tau-1)^2 \\left[ \\eta_y^2 \\left( \\sigma^2 + \\varsigma_y^2 \\right) + \\eta_x^2 \\varsigma_x^2 \\right] \\right)}_{\\text{Error due to local updates}},\n\t\t\\end{aligned}\n\t\t\\label{eq:thm:NC_PL}\n\t\\end{equation}\n\t\\fi\n\twhere {\\small$\\Phi(\\cdot) \\triangleq \\max_{\\mathbf y} f(\\cdot, {\\mathbf y})$} is the envelope function, {\\small$\\Delta_{\\Phi} \\triangleq \\Phi ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi ({\\mathbf x})$}.\n\tUsing {\\small$\\eta_x = \\mathcal O ( \\frac{1}{\\kappa^2} \\sqrt{\\frac{n}{T}} )$, $\\eta_y = \\mathcal O ( \\sqrt{n\/T} )$}, we can bound {\\small$\\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2$} as\n\t\\ificml\n\t{\\small\n\t\t\\begin{align}\n\t\t\t& \\mathcal O \\Big( \\frac{\\kappa^2 ( \\sigma^2 + \\Delta_{\\Phi} )}{\\sqrt{n T}} + \\kappa^2 (\\tau-1)^2 \\frac{n ( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 )}{T} \\Big).\n\t\t\t\\label{eq:thm:NC_PL_conv_rate}\n\t\t\\end{align}\n\t}\n\t\\else\n\t\\begin{align}\n\t\t& \\mathcal O \\Big( \\frac{\\kappa^2 ( \\sigma^2 + \\Delta_{\\Phi} )}{\\sqrt{n T}} + \\kappa^2 (\\tau-1)^2 \\frac{n ( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 )}{T} \\Big).\n\t\t\\label{eq:thm:NC_PL_conv_rate}\n\t\\end{align}\n\t\\fi\n\\end{theorem}\n\n\\begin{proof}\n\tSee \\cref{app:ncpl}.\n\\end{proof}\n\n\\begin{remark}\n\t\\label{rem:NC_PL_local_SGDA_1}\n\tThe first term of the error decomposition in \\eqref{eq:thm:NC_PL} represents the optimization error for a fully synchronous algorithm ($\\tau = 1$), in which the local models are averaged after every update.\n\tThe second term arises due to the clients carrying out multiple $(\\tau > 1)$ local updates between successive communication rounds.\n\tThis term is impacted by the data heterogeneity across clients $\\varsigma_x, \\varsigma_y$.\n\tSince the dependence on step-sizes $\\eta_x, \\eta_y$ is quadratic, as seen in \\eqref{eq:thm:NC_PL_conv_rate}, for small enough $\\eta_x, \\eta_y$, and carefully chosen $\\tau$, having multiple local updates does not impact the asymptotic convergence rate $\\mathcal O (1\/\\sqrt{nT})$.\n\\end{remark}\n\n\\begin{cor}\n\t\\label{cor:NC_PL_comm_cost}\n\tTo reach an $\\epsilon$-accurate point ${\\bar{\\bx}_T}$, assuming $T \\geq \\Theta (n^3)$, the stochastic gradient complexity of \\cref{alg_local_SGDA} is $\\mathcal O (\\kappa^4\/(n \\epsilon^4))$.\n\tThe number of communication rounds required for the same is $T\/\\tau = \\mathcal O ( \\kappa^3\/\\epsilon^{3} )$.\n\\end{cor}\n\n\\begin{remark}\n\t\\label{rem:NC_PL_local_SGDA_2}\n\tOur analysis improves the existing complexity results for Local SGDA \\cite{mahdavi21localSGDA_aistats}.\n\tThe analysis in \\cite{mahdavi21localSGDA_aistats} also requires the additional assumption of $G_x$-Lipschitz continuity of $f(\\cdot, {\\mathbf y})$, which we do not need.\n\tThe complexity result is optimal in $\\epsilon$.\\footnote{In terms of dependence on $\\epsilon$, our complexity and communication results match the corresponding results for the simple smooth nonconvex minimization with local SGD \\cite{yu19icml_momentum}.}\n\tTo the best of our knowledge, this 
complexity guarantee does not exist in the prior literature even for $n=1$.\\footnote{During the preparation of this manuscript, we came across the centralized minimax work \\cite{yang21NCPL_arxiv}, which achieves {\\small$\\mathcal O (\\kappa^4\/ \\epsilon^{4})$}, using stochastic alternating GDA.}\n\tFurther, we also provide communication savings, requiring model averaging only once every $\\mathcal O ( \\kappa\/(n \\epsilon) )$ iterations.\n\\end{remark}\n\n\\begin{remark}[Nonconvex-Strongly-Concave (NC-SC) Problems]\n\t\\label{rem:NC_PL_local_SGDA_3}\n\tSince the PL condition is more general than strong concavity, we also achieve the above result for NC-SC minimax problems.\n\tMoreover, unlike the analysis in \\cite{lin_GDA_icml20} which necessarily requires $\\mathcal O (1\/\\epsilon^{2})$ batch-sizes, to the best of our knowledge, ours is the first result to achieve $\\mathcal O (1\/\\epsilon^{4})$ rate for SGDA with $\\mathcal O (1)$ batch-size.\n\\end{remark}\n\n\n\\paragraph{Momentum-based Local SGDA.}\n\nNext, we propose a novel momentum-based local updates algorithm (\\cref{alg_NC_momentum}) for NC-PL minimax problems.\nThe motivation behind using momentum in local updates is to control the effect of stochastic gradient noise,\nvia historic averaging of stochastic gradients.\nSince momentum is widely used in practice for training deep neural networks, it is a natural question to ask, whether the same theoretical guarantees as Local SGDA can be proved for a momentum-based algorithm.\nA similar question has been considered in \\cite{yu19icml_momentum} in the context of smooth minimization problems.\n\\cref{alg_NC_momentum} is a local updates-based extension of the approach proposed in \\cite{qiu20single_timescale_ncsc} for centralized problems.\nAt each step, each client uses momentum-based gradient estimators {\\small$\\{ {\\mathbf d^i_{x,t}}, {\\mathbf d^i_{y,t}} \\}$} to arrive at intermediate iterates {\\small$\\{ \\Tbx^i_{t+\\frac{1}{2}}, \\Tby^i_{t+\\frac{1}{2}} \\}$}.\nThe local updated model is a convex combination of the intermediate iterate and the current model.\nOnce every $\\tau$ iterations, the clients communicate {\\small$\\{ {\\mathbf x^i_t}, {\\mathbf y^i_t}, {\\mathbf d^i_{x,t}}, {\\mathbf d^i_{y,t}} \\}$} to the server, which computes the averages {\\small$\\{ {\\mathbf x_t}, {\\mathbf y_t}, {\\mathbf d_{x,t}}, {\\mathbf d_{y,t}} \\}$}, and returns these to the clients.\\footnote{The direction estimates {\\small$\\{ {\\mathbf d^i_{x,t}}, {\\mathbf d^i_{y,t}} \\}$} only need to be communicated for the sake of analysis. 
In our experiments in \\cref{sec:exp}, as in Local SGDA, only the models are communicated.}\n\n\\ificml\n\\begin{algorithm}[ht]\n\t\\caption{Momentum Local SGDA}\n\t\\label{alg_NC_momentum}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE{\\textbf{Input:} {\\small${\\mathbf x}_0^i = {\\mathbf x}_0, {\\mathbf y}_0^i = {\\mathbf y}_0$, $\\mathbf d_{x,0}^i = \\nabla_{\\bx} f_i ({\\mathbf x}^i_0, {\\mathbf y}^i_0; \\xi^i_0)$, $\\mathbf d_{y,0}^i = \\nabla_{\\by} f_i ({\\mathbf x}^i_0, {\\mathbf y}^i_0; \\xi^i_0)$} for all $i \\in [n]; \\eta_x, \\eta_y, \\tau, T$}\n\t\t\\FOR[At all clients $i=1,\\hdots, n$]{$t=0$ to $T-1$}\n\t\t\\STATE{{\\small$\\Tbx^i_{t+\\frac{1}{2}} = {\\mathbf x^i_t} - \\eta_x {\\mathbf d^i_{x,t}}$, \n\t\t\t\t$\\ {\\mathbf x^i_{t+1}} = {\\mathbf x^i_t} + \\alpha_t ( \\Tbx^i_{t+\\frac{1}{2}} - {\\mathbf x^i_t} )$}}\n\t\t\\STATE{{\\small$\\Tby^i_{t+\\frac{1}{2}} = {\\mathbf y^i_t} + \\eta_y {\\mathbf d^i_{y,t}}$, $\\ {\\mathbf y^i_{t+1}} = {\\mathbf y^i_t} + \\alpha_t ( \\Tby^i_{t+\\frac{1}{2}} - {\\mathbf y^i_t} )$}}\n\t\t\\STATE{Sample minibatch ${\\xi^i_{t+1}}$ from local data}\n\t\t\\STATE{{\\small${\\mathbf d^i_{x,t+1}} = (1 - \\beta_x \\alpha_t) {\\mathbf d^i_{x,t}} + \\beta_x \\alpha_t \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}})$}}\n\t\t\\STATE{{\\small${\\mathbf d^i_{y,t+1}} = (1 - \\beta_y \\alpha_t) {\\mathbf d^i_{y,t}} + \\beta_y \\alpha_t \\nabla_{\\by} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}})$}}\n\t\t\\IF{$t+1$ mod $\\tau = 0$}\n\t\t\\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}, {\\mathbf d^i_{x,t+1}}, {\\mathbf d^i_{y,t+1}} \\}$ to the server}\n\t\t\\STATE{Server computes averages {\\small${\\mathbf x_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}$}, \n\t\t\t{\\small${\\mathbf y_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_{t+1}}$}, {\\small${\\mathbf d_{x,t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf d^i_{x,t+1}}$}, \n\t\t\t{\\small${\\mathbf d_{y,t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf d^i_{y,t+1}}$}, and sends to the clients}\n\t\t\\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x_{t+1}}$, ${\\mathbf y^i_{t+1}} = {\\mathbf y_{t+1}}$, ${\\mathbf d^i_{x,t+1}} = {\\mathbf d_{x,t+1}}$, ${\\mathbf d^i_{y,t+1}} = {\\mathbf d_{y,t+1}}$, for all $i \\in [n]$}\n\t\t\\ENDIF\n\t\t\\ENDFOR\n\t\t\\STATE{\\textbf{Return: }${\\bar{\\bx}_T}$ drawn uniformly at random from $\\{ {\\mathbf x_t} \\}$, where ${\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\\else\n\\begin{algorithm}[ht]\n\t\\caption{Momentum Local SGDA}\n\t\\label{alg_NC_momentum}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE{\\textbf{Input:} ${\\mathbf x}_0^i = {\\mathbf x}_0, {\\mathbf y}_0^i = {\\mathbf y}_0$, $\\mathbf d_{x,0}^i = \\nabla_{\\bx} f_i ({\\mathbf x}^i_0, {\\mathbf y}^i_0; \\xi^i_0)$, $\\mathbf d_{y,0}^i = \\nabla_{\\by} f_i ({\\mathbf x}^i_0, {\\mathbf y}^i_0; \\xi^i_0)$} for all $i \\in [n]; \\eta_x, \\eta_y, \\tau, T$\n\t\t\\FOR[At all clients $i=1,\\hdots, n$]{$t=0$ to $T-1$}\n\t\t\\STATE{$\\Tbx^i_{t+\\frac{1}{2}} = {\\mathbf x^i_t} - \\eta_x {\\mathbf d^i_{x,t}}$, \n\t\t\t$\\ {\\mathbf x^i_{t+1}} = {\\mathbf x^i_t} + \\alpha_t ( \\Tbx^i_{t+\\frac{1}{2}} - {\\mathbf x^i_t} )$}\n\t\t\\STATE{$\\Tby^i_{t+\\frac{1}{2}} = {\\mathbf y^i_t} + \\eta_y {\\mathbf d^i_{y,t}}$, $\\ {\\mathbf y^i_{t+1}} = {\\mathbf y^i_t} + \\alpha_t ( \\Tby^i_{t+\\frac{1}{2}} - {\\mathbf y^i_t} )$}\n\t\t\\STATE{Sample minibatch 
${\\xi^i_{t+1}}$ from local data}\n\t\t\\STATE{${\\mathbf d^i_{x,t+1}} = (1 - \\beta_x \\alpha_t) {\\mathbf d^i_{x,t}} + \\beta_x \\alpha_t \\nabla_{\\bx} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}})$}\n\t\t\\STATE{${\\mathbf d^i_{y,t+1}} = (1 - \\beta_y \\alpha_t) {\\mathbf d^i_{y,t}} + \\beta_y \\alpha_t \\nabla_{\\by} f_i ({\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}; {\\xi^i_{t+1}})$}\n\t\t\\IF{$t+1$ mod $\\tau = 0$}\n\t\t\\STATE{Clients send $\\{ {\\mathbf x^i_{t+1}}, {\\mathbf y^i_{t+1}}, {\\mathbf d^i_{x,t+1}}, {\\mathbf d^i_{y,t+1}} \\}$ to the server}\n\t\t\\STATE{Server computes averages \n\t\t\t\\begin{align*}\n\t\t\t\t{\\mathbf x_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_{t+1}}, \\quad {\\mathbf y_{t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf y^i_{t+1}}, \\quad {\\mathbf d_{x,t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf d^i_{x,t+1}}, \\quad {\\mathbf d_{y,t+1}} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf d^i_{y,t+1}}\n\t\t\t\\end{align*}\n\t\t\t\\hspace{7mm} and sends to the clients}\n\t\t\\STATE{${\\mathbf x^i_{t+1}} = {\\mathbf x_{t+1}}$, ${\\mathbf y^i_{t+1}} = {\\mathbf y_{t+1}}$, ${\\mathbf d^i_{x,t+1}} = {\\mathbf d_{x,t+1}}$, ${\\mathbf d^i_{y,t+1}} = {\\mathbf d_{y,t+1}}$, for all $i \\in [n]$}\n\t\t\\ENDIF\n\t\t\\ENDFOR\n\t\t\\STATE{\\textbf{Return: }${\\bar{\\bx}_T}$ drawn uniformly at random from $\\{ {\\mathbf x_t} \\}$, where ${\\mathbf x_t} \\triangleq \\frac{1}{n} \\sum_{i=1}^n {\\mathbf x^i_t}$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\\fi\n\nNext, we discuss the finite-time convergence of \\cref{alg_NC_momentum}.\n\n\\begin{theorem}\n\t\\label{thm:NC_PL_mom}\n\tSuppose the local loss functions $\\{ f_i \\}_i$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, and the global function $f$ satisfies \\cref{assum:PL_y}.\n\tSuppose in \\cref{alg_NC_momentum}, \n\t$\\beta_x = \\beta_y = \\beta = 3$, {\\small$\\alpha_t \\equiv \\alpha \\leq \\min \\big\\{ \\frac{\\beta}{6 L_f^2 (\\eta_y^2 + \\eta_x^2)}, \\frac{1}{48 \\tau} \\big\\}$}, for all $t$, and the step-sizes $\\eta_x, \\eta_y$ are chosen such that $\\eta_y \\leq \\frac{\\mu}{8 L_f^2}$, and $\\frac{\\eta_x}{\\eta_y} \\leq \\frac{1}{20 \\kappa^2}$, where {\\small$\\kappa = L_f\/\\mu$} is the condition number.\n\tThen, for the output ${\\bar{\\bx}_T}$ of \\cref{alg_NC_momentum}, the following holds.\n\t\\ificml\n\t{\\small\n\t\t\\begin{equation}\n\t\t\t\\begin{aligned}\n\t\t\t\t\\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2 & \\leq \\underbrace{\\mathcal O \\Big( \\frac{\\kappa^2}{\\eta_y \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\Big)}_{\\text{Error with full synchronization}} \\\\\n\t\t\t\t& + \\underbrace{\\mathcal O \\big( (\\tau - 1)^2 \\alpha^2 ( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 ) \\big)}_{\\text{Error due to local updates}},\n\t\t\t\\end{aligned}\n\t\t\t\\label{eq:thm:NC_PL_mom}\n\t\t\\end{equation}\n\t}%\n\t\\else\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t\\mathbb E \\norm{\\nabla \\Phi ({\\bar{\\bx}_T})}^2 & \\leq \\underbrace{\\mathcal O \\Big( \\frac{\\kappa^2}{\\eta_y \\alpha T} + \\frac{\\alpha}{\\mu \\eta_y} \\frac{\\sigma^2}{n} \\Big)}_{\\text{Error with full synchronization}} + \\underbrace{\\mathcal O \\big( (\\tau - 1)^2 \\alpha^2 ( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 ) \\big)}_{\\text{Error due to local updates}},\n\t\t\\end{aligned}\n\t\t\\label{eq:thm:NC_PL_mom}\n\t\\end{equation}\n\t\\fi\n\twhere $\\Phi(\\cdot) \\triangleq 
\\max_{\\mathbf y} f(\\cdot, {\\mathbf y})$ is the envelope function.\n\tWith {\\small$\\alpha = \\sqrt{n\/T}$}, the bound in \\eqref{eq:thm:NC_PL_mom} simplifies to\n\t\\ificml\n\t{\\small\n\t\t\\begin{align}\n\t\t\t\\mathcal O \\Big( \\frac{\\kappa^2 + \\sigma^2}{\\sqrt{n T}} + (\\tau-1)^2 \\frac{n ( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 )}{T} \\Big).\n\t\t\t\\label{eq:thm:NC_PL_mom_conv_rate}\n\t\t\\end{align}\n\t}\n\t\\else\n\t\\begin{align}\n\t\t\\mathcal O \\Big( \\frac{\\kappa^2 + \\sigma^2}{\\sqrt{n T}} + (\\tau-1)^2 \\frac{n ( \\sigma^2 + \\varsigma_x^2 + \\varsigma_y^2 )}{T} \\Big).\n\t\t\\label{eq:thm:NC_PL_mom_conv_rate}\n\t\\end{align}\n\t\\fi\n\\end{theorem}\n\n\\begin{proof}\n\tSee \\cref{app:NC_PL_mom}.\n\\end{proof}\n\n\\begin{remark}\n\tAs in the case of \\cref{thm:NC_PL}, the second term in \\eqref{eq:thm:NC_PL_mom} arises due to the clients carrying out multiple ($\\tau > 1$) local updates between successive communication rounds. \n\tHowever, the dependence of this term on $\\alpha$ is quadratic. Therefore, as seen in \\eqref{eq:thm:NC_PL_mom_conv_rate}, for small enough $\\alpha$ and carefully chosen $\\tau$, having multiple local updates does not affect the asymptotic convergence rate $\\mathcal O (1\/\\sqrt{nT})$.\n\\end{remark}\n\n\n\n\\begin{cor}\n\t\\label{cor:NC_PL_mom_comm_cost}\n\tTo reach an $\\epsilon$-accurate point ${\\bar{\\bx}_T}$, assuming $T \\geq \\Theta (n^3)$, the stochastic gradient complexity of \\cref{alg_NC_momentum} is $\\mathcal O (\\kappa^4\/(n \\epsilon^4))$.\n\tThe number of communication rounds required for the same is $T\/\\tau = \\mathcal O ( \\kappa^3\/\\epsilon^{3} )$.\n\\end{cor}\n\nThe stochastic gradient complexity and the number of communication rounds required are identical (up to multiplicative constants) for both \\cref{alg_local_SGDA} and \\cref{alg_NC_momentum}.\nTherefore, the discussion following \\cref{thm:NC_PL} (Remarks \\ref{rem:NC_PL_local_SGDA_2}, \\ref{rem:NC_PL_local_SGDA_3}) applies to \\cref{thm:NC_PL_mom} as well.\nWe demonstrate the practical benefits of Momentum Local SGDA in \\cref{sec:exp}.\n\n\n\\subsection{Nonconvex-Concave (NC-C) Problems} \\label{sec:NC_C}\n\nIn this subsection, we consider smooth nonconvex functions which satisfy the following assumptions.\n\n\\begin{assump}[Concavity]\n\t\\label{assum:concavity}\n\tThe function $f$ is concave in ${\\mathbf y}$ if for a fixed ${\\mathbf x} \\in \\mathbb R^{d_1}$, for all ${\\mathbf y}, {\\mathbf y}' \\in \\mathbb R^{d_2}$,\n\t\\ificml\n\t\\newline\n\t$f({\\mathbf x}, {\\mathbf y}) \\leq f({\\mathbf x}, {\\mathbf y}') + \\left\\langle \\nabla_{\\by} f({\\mathbf x}, {\\mathbf y}'), {\\mathbf y} - {\\mathbf y}' \\right\\rangle$.\n\t\\else\n\t\\begin{align*}\n\t\tf({\\mathbf x}, {\\mathbf y}) \\leq f({\\mathbf x}, {\\mathbf y}') + \\left\\langle \\nabla_{\\by} f({\\mathbf x}, {\\mathbf y}'), {\\mathbf y} - {\\mathbf y}' \\right\\rangle.\n\t\\end{align*}\n\t\\fi\n\\end{assump}\n\n\\begin{assump}[Lipschitz continuity in ${\\mathbf x}$]\n\t\\label{assum:Lips_cont_x}\n\tFor the function $f$, there exists a constant $G_x$, such that for each ${\\mathbf y} \\in \\mathbb R^{d_2}$, and all ${\\mathbf x}, {\\mathbf x}' \\in \\mathbb R^{d_1}$,\n\t\\ificml\n\t$\\norm{f({\\mathbf x}, {\\mathbf y}) - f({\\mathbf x}', {\\mathbf y})} \\leq G_x \\norm{{\\mathbf x} - {\\mathbf x}'}$.\n\t\\else\n\t\\begin{align*}\n\t\t\\norm{f({\\mathbf x}, {\\mathbf y}) - f({\\mathbf x}', {\\mathbf y})} \\leq G_x \\norm{{\\mathbf x} - {\\mathbf 
x}'}.\n\t\\end{align*}\n\t\\fi\n\\end{assump}\n\nIn the absence of strong-concavity or PL condition on ${\\mathbf y}$, the envelope function $\\Phi({\\mathbf x}) = \\max_{\\mathbf y} f({\\mathbf x}, {\\mathbf y})$ defined earlier need not be smooth.\nInstead, we use the alternate definition of stationarity, proposed in \\cite{davis19wc_siam}, utilizing the Moreau envelope of $\\Phi$, which is defined next.\n\n\\begin{definition}[Moreau Envelope]\n\tA function $\\Phi_{\\lambda}$ is the $\\lambda$-Moreau envelope of $\\Phi$, for $\\lambda > 0$, if for all ${\\mathbf x} \\in \\mathbb R^{d_1}$,\n\t\\ificml\n\t\\newline\n\t$\\Phi_\\lambda({\\mathbf x}) = \\min_{{\\mathbf x}'} \\Phi ({\\mathbf x}') + \\frac{1}{2 \\lambda} \\norm{{\\mathbf x}' - {\\mathbf x}}^2$.\n\t\\else\n\t\\begin{align*}\n\t\t\\Phi_\\lambda({\\mathbf x}) = \\min_{{\\mathbf x}'} \\Phi ({\\mathbf x}') + \\frac{1}{2 \\lambda} \\norm{{\\mathbf x}' - {\\mathbf x}}^2. \n\t\\end{align*}\n\t\\fi\n\\end{definition}\n\nA small value of $\\norm{\\nabla \\Phi_\\lambda({\\mathbf x})}$ implies that ${\\mathbf x}$ is near some point $\\widetilde{\\bx}$ that is \\textit{nearly stationary} for $\\Phi$ \\cite{drusvyatskiy19wc_mathprog}.\nHence, we focus on minimizing $\\norm{\\nabla \\Phi_\\lambda({\\mathbf x})}$.\n\n\\paragraph{Improved Convergence Analysis for NC-C Problems.}\n\nFor centralized NC-C problems, \\cite{lin_GDA_icml20} analyze the convergence of SGDA.\nHowever, this analysis does not seem amenable to local-updates-based modification.\nAnother alternative is a double-loop algorithm, which approximately solves the inner maximization problem $\\max f({\\mathbf x}, \\cdot)$ after each ${\\mathbf x}$-update step.\nHowever, double-loop algorithms are complicated to implement.\n\\cite{mahdavi21localSGDA_aistats} propose Local SGDA+ (see \\cref{alg_local_SGDA_plus} in \\cref{app:NC_C}), a modified version of SGDA \\cite{lin_GDA_icml20}, to resolve this impasse.\nCompared to Local SGDA, the ${\\mathbf x}$-updates are identical.\nHowever, for the ${\\mathbf y}$-updates, stochastic gradients $\\nabla_{\\by} f_i (\\widetilde{\\bx}, {\\mathbf y^i_t}; {\\xi^i_{t}})$ are evaluated with the $x$-component fixed at $\\widetilde{\\bx}$, which is updated every $S$ iterations.\n\nIn \\cite{mahdavi21localSGDA_aistats}, Local SGDA+ is used for solving nonconvex-one-point-concave (NC-1PC) problems (see \\cref{sec:NC_1PC}).\nHowever, the guarantees provided are far from optimal (see \\cref{table:comparison}).\nIn this and the following subsection, we present improved convergence results for Local SGDA+, for NC-C and NC-1PC minimax problems.\n\n\\begin{theorem}\n\t\\label{thm:NC_C}\n\tSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:concavity}, \\ref{assum:Lips_cont_x}.\n\tFurther, let $\\norm{{\\mathbf y_t}}^2 \\leq D$ for all $t$.\n\tSuppose the step-sizes $\\eta_x, \\eta_y$ are chosen such that $\\eta_x, \\eta_y \\leq \\frac{1}{8 L_f \\tau}$.\n\tThen, for the output ${\\bar{\\bx}_T}$ of \\cref{alg_local_SGDA_plus},\n\t\\ificml\n\t{\\small\n\t\t\\begin{equation}\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T})}^2 \\leq \\underbrace{\\mathcal O \\Big( \\frac{\\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + \\eta_x \\Big( G_x^2 + \\frac{\\sigma^2}{n} \\Big) \\Big)}_{\\text{Error with full synchronization I}} \\\\\n\t\t\t\t& \\qquad + \\underbrace{\\mathcal O \\Big( \\frac{\\eta_y \\sigma^2}{n} + \\Big[ \\eta_x G_x 
S \\sqrt{G_x^2 + \\sigma^2\/n} + \\frac{D}{\\eta_y S} \\Big] \\Big)}_{\\text{Error with full synchronization II}} \\\\\n\t\t\t\t& \\qquad + \\underbrace{\\mathcal O \\Big( (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 + \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right] \\Big)}_{\\text{Error due to local updates}},\n\t\t\t\\end{aligned}\n\t\t\t\\label{eq:thm:NC_C}\n\t\t\\end{equation}\n\t}%\n\t\\else\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t\\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T})}^2 & \\leq \\underbrace{\\mathcal O \\left( \\frac{\\widetilde{\\Delta}_{\\Phi}}{\\eta_x T} + \\eta_x \\Big( G_x^2 + \\frac{\\sigma^2}{n} \\Big) \\right) + \\mathcal O \\left( \\frac{\\eta_y \\sigma^2}{n} + \\Big[ \\eta_x G_x S \\sqrt{G_x^2 + \\sigma^2\/n} + \\frac{D}{\\eta_y S} \\Big] \\right)}_{\\text{Error with full synchronization}} \\\\\n\t\t\t& \\qquad + \\underbrace{\\mathcal O \\Big( (\\tau-1)^2 \\left[ \\left( \\eta_x^2 + \\eta_y^2 \\right) \\sigma^2 + \\left( \\eta_x^2 \\varsigma_x^2 + \\eta_y^2 \\varsigma_y^2 \\right) \\right] \\Big)}_{\\text{Error due to local updates}},\n\t\t\\end{aligned}\n\t\t\\label{eq:thm:NC_C}\n\t\\end{equation}\n\t\\fi\n\twhere {\\small$\\Phi_{1\/2L_f}({\\mathbf x}) \\triangleq \\min_{{\\mathbf x}'} \\Phi ({\\mathbf x}') + L_f \\norm{{\\mathbf x}' - {\\mathbf x}}^2$}, {\\small$\\widetilde{\\Delta}_{\\Phi} \\triangleq \\Phi_{1\/2 L_f} ({\\mathbf x}_0) - \\min_{\\mathbf x} \\Phi_{1\/2 L_f} ({\\mathbf x})$}.\n\n\tUsing {\\small$S = \\Theta ( \\sqrt{T\/n} )$}, {\\small$\\eta_x = \\Theta \\left( \\frac{n^{1\/4}}{T^{3\/4}} \\right)$}, {\\small$\\eta_y = \\Theta \\left( \\frac{n^{3\/4}}{T^{1\/4}} \\right)$}, the bound in \\eqref{eq:thm:NC_C} simplifies to\n\t\\ificml\n\t{\\small\n\t\t\\begin{equation}\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T})}^2 \\leq \\underbrace{\\mathcal O \\Big( \\frac{1}{(nT)^{1\/4}} + \\frac{n^{1\/4}}{T^{3\/4}} \\Big)}_{\\text{Error with full synchronization}} \\\\\n\t\t\t\t& \\qquad + \\underbrace{\\mathcal O \\Big( \\frac{n^{3\/2} (\\tau-1)^2}{T^{1\/2}} + (\\tau-1)^2 \\frac{\\sqrt{n}}{T^{3\/2}} \\Big)}_{\\text{Error due to local updates}}.\n\t\t\t\\end{aligned}\n\t\t\t\\label{eq:thm:NC_C_conv_rate}\n\t\t\\end{equation}\n\t}%\n\t\\else\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t& \\mathbb E \\norm{\\nabla \\Phi_{1\/2L_f} ({\\bar{\\bx}_T})}^2 \\leq \\underbrace{\\mathcal O \\left( \\frac{1}{(nT)^{1\/4}} + \\frac{n^{1\/4}}{T^{3\/4}} \\right)}_{\\text{Error with full synchronization}} + \\underbrace{\\mathcal O \\left( \\frac{n^{3\/2} (\\tau-1)^2}{T^{1\/2}} + (\\tau-1)^2 \\frac{\\sqrt{n}}{T^{3\/2}} \\right)}_{\\text{Error due to local updates}}.\n\t\t\\end{aligned}\n\t\t\\label{eq:thm:NC_C_conv_rate}\n\t\\end{equation}\n\t\\fi\n\\end{theorem}\n\n\n\\begin{proof}\n\tSee \\cref{app:NC_C}.\n\\end{proof}\n\n\\ificml\n\\begin{remark}\n\t\\label{rem:NC_C_local_SGDA_plus_1}\n\tThe first two terms in the error decomposition in \\eqref{eq:thm:NC_C}, represent the optimization error for a fully synchronous algorithm.\n\tThis is exactly the error observed in the centralized case \\cite{lin_GDA_icml20}.\n\tThe third term arises due to multiple ($\\tau > 1$) local updates.\n\tAs seen in \\eqref{eq:thm:NC_C_conv_rate}, for small enough $\\eta_y, \\eta_x$, and carefully chosen $S, \\tau$, this does not impact the asymptotic convergence rate {\\small$\\mathcal O 
(1\/(nT)^{1\/4})$}.\n\\end{remark}\n\\else\n\\begin{remark}\n\t\\label{rem:NC_C_local_SGDA_plus_1}\n\tThe first term in the error decomposition in \\eqref{eq:thm:NC_C} represents the optimization error for a fully synchronous algorithm.\n\tThis is exactly the error observed in the centralized case \\cite{lin_GDA_icml20}.\n\tThe second term arises due to multiple ($\\tau > 1$) local updates.\n\tAs seen in \\eqref{eq:thm:NC_C_conv_rate}, for small enough $\\eta_y, \\eta_x$, and carefully chosen $S, \\tau$, this does not impact the asymptotic convergence rate $\\mathcal O (1\/(nT)^{1\/4})$.\n\\end{remark}\n\\fi\n\n\\begin{cor}\n\t\\label{cor:NC_C_comm_cost}\n\tTo reach an $\\epsilon$-accurate point, i.e., ${\\mathbf x}$ such that {\\small$\\mathbb E \\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) \\| \\leq \\epsilon$},\n\tassuming {\\small$T \\geq \\Theta (n^7)$},\n\tthe stochastic gradient complexity of \\cref{alg_local_SGDA_plus} is {\\small$\\mathcal O (1\/(n \\epsilon^8))$}.\n\tThe number of communication rounds required is {\\small$T\/\\tau = \\mathcal O ( 1\/\\epsilon^{7} )$}.\n\\end{cor}\n\n\n\\begin{remark}\n\t\\label{rem:NC_C_local_SGDA_plus_2}\n\tOurs is the first work to match the centralized ($n=1$) results in \\cite{lin_GDA_icml20} ({\\small$\\mathcal O ( 1\/\\epsilon^{8} )$} using SGDA), and provide linear speedup for $n>1$ with local updates.\n\tIn addition, we also provide communication savings, requiring model averaging only once every {\\small$\\mathcal O ( 1\/(n \\epsilon) )$} iterations.\n\\end{remark}\n\n\n\n\n\\subsection{Nonconvex-One-Point-Concave (NC-1PC) Problems} \\label{sec:NC_1PC}\n\nIn this subsection, we consider smooth nonconvex functions which also satisfy the following assumption.\n\n\\begin{assump}[One-point-Concavity in ${\\mathbf y}$]\n\t\\label{assum:1pc_y}\n\tThe function $f$ is said to be one-point-concave in ${\\mathbf y}$ if fixing ${\\mathbf x} \\in \\mathbb R^{d_1}$, for all ${\\mathbf y} \\in \\mathbb R^{d_2}$,\n\t\\ificml\n\t$\\left\\langle \\nabla_{\\by} f({\\mathbf x}, {\\mathbf y}), {\\mathbf y} - {\\mathbf y}^*({\\mathbf x}) \\right\\rangle \\leq f({\\mathbf x}, {\\mathbf y}) - f({\\mathbf x}, {\\mathbf y}^*({\\mathbf x}))$,\n\t\\else\n\t\\begin{align*}\n\t\t\\left\\langle \\nabla_{\\by} f({\\mathbf x}, {\\mathbf y}), {\\mathbf y} - {\\mathbf y}^*({\\mathbf x}) \\right\\rangle \\leq f({\\mathbf x}, {\\mathbf y}) - f({\\mathbf x}, {\\mathbf y}^*({\\mathbf x})), \n\t\\end{align*}\n\t\\fi\n\twhere ${\\mathbf y}^*({\\mathbf x}) \\in \\operatornamewithlimits{arg\\,max}_{\\mathbf y} f({\\mathbf x}, {\\mathbf y})$.\n\\end{assump}\n\nDue to space limitations, we only state the sample and communication complexity results for \\cref{alg_local_SGDA_plus} with NC-1PC functions. 
The complete result is stated in \\cref{app:NC_1PC}.\n\n\\begin{theorem} \n\t\\label{thm:NC_1PC}\n\tSuppose the local loss functions $\\{ f_i \\}$ satisfy Assumptions \\ref{assum:smoothness}, \\ref{assum:bdd_var}, \\ref{assum:bdd_hetero}, \\ref{assum:Lips_cont_x}, \\ref{assum:1pc_y}.\n\tFurther, let {\\small$\\norm{{\\mathbf y_t}}^2 \\leq D$} for all $t$.\n\tThen, to reach a point ${\\mathbf x}$ such that {\\small$\\mathbb E \\| \\nabla \\Phi_{1\/2L_f} ({\\mathbf x}) \\| \\leq \\epsilon$}, the sample complexity of \\cref{alg_local_SGDA_plus} is {\\small$\\mathcal O (1\/\\epsilon^8)$}, and the number of communication rounds required is {\\small$\\mathcal O ( 1\/\\epsilon^{7} )$}.\n\\end{theorem}\n\n\\begin{remark}\n\t\\label{rem:NC_1PC_local_SGDA_plus_2}\n\tSince one-point-concavity is more general than concavity, for $n=1$, our gradient complexity result $\\mathcal O (1\/\\epsilon^8)$ generalizes the corresponding result for NC-C functions \\cite{lin_GDA_icml20}. To the best of our knowledge, ours is the first work to provide this guarantee for NC-1PC problems. We also reduce the communication cost by requiring model averaging only once every $\\mathcal O ( 1\/\\epsilon )$ iterations.\n\tFurther, our analysis improves the corresponding results in \\cite{mahdavi21localSGDA_aistats} substantially (see \\cref{table:comparison}).\n\\end{remark}\n\n\n\\section{Experiments}\n\\label{sec:exp}\n\nIn this section, we present the empirical performance of the algorithms discussed in the previous sections.\nTo evaluate the performance of Local SGDA and Momentum Local SGDA, we consider the problem of fair classification \\cite{mohri19agnosticFL_icml, nouiehed19minimax_neurips19} using the FashionMNIST dataset \\cite{xiao17fashionMNIST}.\nSimilarly, we evaluate the performance of Local SGDA+ and Momentum Local SGDA+, a momentum-based algorithm (see \\cref{alg_mom_local_SGDA_plus} in \\cref{app:add_exp}), on a robust neural network training problem \\cite{madry18adversarial_iclr, sinha17certifiable_robust_iclr}, using the CIFAR10 dataset.\nWe conducted our experiments on a cluster of 20 machines (clients), each equipped with an NVIDIA TitanX GPU. 
The parameters and related information are communicated amongst the clients over Ethernet connections.\nWe implemented our algorithms using the parallel training tools offered by PyTorch 1.0.0, with Python 3.6.3.\nAdditional experimental results, the details of the experiments, and the specific parameter values can be found in \\cref{app:add_exp}.\n\n\n\\subsection{Fair Classification}\n\\label{sec:exp_fair}\nWe consider the following NC-SC minimax formulation of the fair classification problem \\cite{nouiehed19minimax_neurips19}.\n\\ificml\n{\\small\n\t\\begin{align}\n\t\t\\min_{\\mathbf x} \\max_{{\\mathbf y} \\in \\mathcal Y} \\sum_{c=1}^C y_c F_c({\\mathbf x}) -\\frac{\\lambda}{2} \\norm{{\\mathbf y}}^2,\n\t\t\\label{eq:exp_fair_2}\n\t\\end{align}\n}%\n\\else\n\\begin{align}\n\t\\min_{\\mathbf x} \\max_{{\\mathbf y} \\in \\mathcal Y} \\sum_{c=1}^C y_c F_c({\\mathbf x}) -\\frac{\\lambda}{2} \\norm{{\\mathbf y}}^2,\n\t\\label{eq:exp_fair_2}\n\\end{align}\n\\fi\nwhere ${\\mathbf x}$ denotes the parameters of the NN, $F_1, F_2, \\hdots, F_C$ denote the individual losses corresponding to the $C(=10)$ classes, and {\\small$\\mathcal Y = \\{ {\\mathbf y} \\in \\mathbb R^C: y_c \\geq 0, \\sum_{c=1}^C y_c = 1 \\}$}.\n\n\\ificml\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{figures\/FairClassifier\/fashionMNIST_test_acc.pdf}\n\t\\vspace{-3mm}\n\t\\caption{Comparison of the effects of increasing $\\tau$ on the performance of Local SGDA and Momentum Local SGDA algorithms, for the fair classification problem on the FashionMNIST dataset, with a VGG11 model. The figure shows the test accuracy for the worst distribution. \\label{fig:fairclass_fashionmnist}}\n\\end{figure}\n\\else\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.55\\textwidth]{figures\/FairClassifier\/fashionMNIST_test_acc.pdf}\n\t\\vspace{-3mm}\n\t\\caption{Comparison of the effects of increasing $\\tau$ on the performance of Local SGDA and Momentum Local SGDA algorithms, for the fair classification problem on the FashionMNIST dataset, with a VGG11 model. The figure shows the test accuracy for the worst distribution. \\label{fig:fairclass_fashionmnist}}\n\\end{figure}\n\\fi\n\nWe ran the experiment with a VGG11 network.\nThe system has $20$ clients.\nThe data is partitioned across the clients using a Dirichlet distribution $\\text{Dir}_{20}(0.1)$ as in \\cite{wang19FL_iclr}, which creates a non-iid partitioning of the data across the clients. 
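For completeness, a minimal sketch of this partitioning scheme is given below (a NumPy-based illustration reflecting our reading of \\cite{wang19FL_iclr}; the function name and its arguments are ours, not from the cited work):\n\\begin{verbatim}\nimport numpy as np\n\ndef dirichlet_partition(labels, n_clients=20, alpha=0.1, seed=0):\n    # labels: integer class label of every training sample\n    rng = np.random.default_rng(seed)\n    client_indices = [[] for _ in range(n_clients)]\n    for c in np.unique(labels):\n        idx = np.where(labels == c)[0]\n        rng.shuffle(idx)\n        # fraction of class-c samples assigned to each client\n        p = rng.dirichlet(alpha * np.ones(n_clients))\n        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)\n        for i, chunk in enumerate(np.split(idx, cuts)):\n            client_indices[i].extend(chunk.tolist())\n    return client_indices\n\\end{verbatim}\nSmaller values of the concentration parameter $\\alpha$ produce more skewed, i.e., more non-iid, splits.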
We use different values of the synchronization frequency $\\tau \\in \\{1, 5, 10\\}$.\nIn accordance with \\eqref{eq:exp_fair_2}, we plot the test accuracy on the worst distribution in \\cref{fig:fairclass_fashionmnist}, and compare the number of communication rounds needed to reach $50\\%$ test accuracy in each case.\nFrom \\cref{fig:fairclass_fashionmnist}, we see the communication savings which result from using higher values of $\\tau$, since fully synchronized SGDA ($\\tau = 1$) requires significantly more communication rounds to reach the same accuracy.\nWe also note the superior performance of Momentum Local SGDA, compared to Local SGDA.\n\n\\subsection{Robust Neural Network Training}\n\\label{sec:exp_robustnn}\nNext, we consider the problem of robust neural network (NN) training, in the presence of adversarial perturbations \\cite{madry18adversarial_iclr, sinha17certifiable_robust_iclr}.\nWe consider a problem similar to the one considered in \\cite{mahdavi21localSGDA_aistats}.\n\\ificml\n{\\small\n\t\\begin{align}\n\t\t\\min_{\\mathbf x} \\max_{\\norm{{\\mathbf y}}^2 \\leq 1} \\sum_{i=1}^N \\ell \\left( h_{\\mathbf x} (\\mathbf a_i + {\\mathbf y}), b_i \\right), \\label{eq:exp_robustnn}\n\t\\end{align}\n}%\n\\else\n\\begin{align}\n\t\\min_{\\mathbf x} \\max_{\\norm{{\\mathbf y}}^2 \\leq 1} \\sum_{i=1}^N \\ell \\left( h_{\\mathbf x} (\\mathbf a_i + {\\mathbf y}), b_i \\right), \\label{eq:exp_robustnn}\n\\end{align}\n\\fi\nwhere ${\\mathbf x}$ denotes the parameters of the NN, ${\\mathbf y}$ denotes the perturbation, and $(\\mathbf a_i, b_i)$ denotes the $i$-th data sample.\n\\ificml\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{figures\/RobustNN\/CIFAR10_test_acc.pdf}\n\t\\vspace{-3mm}\n\t\\caption{Comparison of the effects of $\\tau$ on the performance of Local SGDA+ and Momentum Local SGDA+ algorithms, for the robust NN training problem on the CIFAR10 dataset, with the VGG11 model. The figure shows the robust test accuracy. \\label{fig:robustnn_cifar10}}\n\\end{figure}\n\\else\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.55\\textwidth]{figures\/RobustNN\/CIFAR10_test_acc.pdf}\n\t\\vspace{-3mm}\n\t\\caption{Comparison of the effects of $\\tau$ on the performance of Local SGDA+ and Momentum Local SGDA+ algorithms, for the robust NN training problem on the CIFAR10 dataset, with the VGG11 model. The figure shows the robust test accuracy. \\label{fig:robustnn_cifar10}}\n\\end{figure}\n\\fi\n\nWe ran the experiment using a VGG11 network, with the same client setup and data partitioning as in the previous subsection.\nWe use different values of $\\tau \\in \\{1, 5, 10\\}$.\nFor both Local SGDA+ and Momentum Local SGDA+, we use $S = \\tau^2$.\nIn \\cref{fig:robustnn_cifar10}, we plot the robust test accuracy. \nFrom \\cref{fig:robustnn_cifar10}, we see the communication savings which result from using higher values of $\\tau$, since for both algorithms, the $\\tau = 1$ case requires significantly more communication rounds to reach the same accuracy.\nWe also note the superior performance of Momentum Local SGDA+, compared to Local SGDA+, in reaching the same accuracy level.\n\n\n\\section{Concluding Remarks}\n\\label{sec:conclude}\nIn this work, we analyzed existing and newly proposed distributed communication-efficient algorithms for nonconvex minimax optimization problems.\nWe proved \\textit{order-optimal} complexity results, along with communication savings, for several classes of minimax problems. 
Our results showed linear speedup in the number of clients, which enables scaling up distributed systems.\nOur results for nonconvex-nonconcave functions improve the existing results for centralized minimax problems.\nAn interesting future direction is to analyze these algorithms for more complex systems with partial and erratic client participation \cite{gu21mifa_neurips, ruan21device_part_FL_aistats}, and with a heterogeneous number of local updates at each client \cite{joshi20fednova_neurips}.\n\n\n\n\nocite{lin20near_opt_det_colt, nesterov18book, yoon21acc_ncc_icml, ouyang21lower_cc_bilinear_mathprog, wang20improved_cc_neurips, li21lower_bd_NCSC_neurips, lei21stability_Minimax_icml, zhang21NCSC_uai, kiyavash20catalyst_neurips, lee21NCNC_structured_neurips, lu20HiBSA_NC_C_ieee, tran20hybrid_NCLin_neurips, jin20local_opt_NCNC_icml, liang20proxGDA_KL_iclr, luo21near_opt_FS_cc_arxiv, xie20lower_FS_cc_icml, gasnikov21decen_deter_cc_icoa, ozdaglar19dec_prox_sp_arxiv, richtarik21dist_VI_comm_arxiv, gasnikov21dec_person_FL_arxiv, jacot18NTK_neurips}\n\n\section*{Acknowledgements}\n\t\n\n\n\t\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nIn recent years, ultrathin graphite films, i.e. $N$ ($>1$) stacked graphene layers (NSGL's), have been successfully fabricated.\cite{Novoselov1, Berger} Extensive studies have been devoted to these systems due to their promise for the design of novel nano-devices. It is desirable to carry out a thorough symmetry analysis to characterize the corresponding electron and phonon spectra, and to reveal the relevant selection rules and optical activities. Moreover, to our knowledge, although there are some studies on the electronic structure,\cite{Peeters1} no existing work concerns the phonon dispersions of the NSGL's. Recently it has been reported that the frequency of the optical C-C stretching mode in the NSGL's decreases with increasing $N$.\cite{Ferrari, Gupta, Anindya} The amplitude of this red shift is about $3 - 5$, $5 - 6$, and 8~cm$^{-1}$ in these three experiments respectively. A theoretical explanation of this red shift is still required.\n\nIn this paper, the symmetry analysis is carried out for both the AB- and AA-stacked lattice structures; the latter has the point group $D_{6h}$ irrespective of the even-oddness of $N$. We classify the phonon normal modes at the $\Gamma$ point and determine their Raman and Infra-red (Ir) properties. We generalize the force constant model of graphene\cite{Aizawa} to the NSGL's, which allows us to calculate the phonon dispersions of AB- or AA-stacked NSGL's with arbitrary layer number $N$. \n\nThe intra-layer optical C-C stretching mode with frequency around 1600~cm$^{-1}$ is Raman active for all NSGL's, and its calculated frequency decreases with increasing $N$. The red shift values for the AB- (AA-) stacked systems are about 2~cm$^{-1}$ (4~cm$^{-1}$), which is consistent with the experimental measurements. In the medium frequency range around 800~cm$^{-1}$, the out-of-plane optical mode is Ir active for the AB-stacked structure but neither Raman nor Ir active in the AA-stacking. Its frequency exhibits a blue shift with increasing layer number. There is an interesting inter-layer optical mode in the low frequency region which is Raman active in the NSGL's with $N$ even (ENSGL's) while Ir active in the NSGL's with $N$ odd (ONSGL's). 
Its frequency depends on the layer number $N$ more sensitively, increasing from 106~cm$^{-1}$ (94.5~cm$^{-1}$) to 149.8~cm$^{-1}$ (133.6~cm$^{-1}$) for the AB- (AA-) stacked NSGL's, an order of magnitude lower than that of the intra-layer optical modes. Phonon dispersions for the AA-stacked 3-dimensional (3D) graphite are also discussed.\n\nThe present paper is organized as follows. In Sec.~II, the lattice configuration is illustrated for the NSGL's. Sec.~III is devoted to the symmetry analysis for the phonon modes. The vibrational potential energy is discussed in Subsec.~IV~A, while the main results and relevant discussions on the phonon spectrum calculations are presented in Subsec.~IV~B. The paper ends with a brief summary in Sec.~V.\n\n\n\n\section{lattice configuration}\n\subsection{AB-stacked}\n\begin{figure}\n  \begin{center}\n    \scalebox{1.2}[1.4]{\includegraphics[width=7cm]{unitcell.eps}}\n  \end{center}\n  \caption{Sketch of the AB-stacked (a) and AA-stacked (b) configurations of multi-layer graphene.}\n  \label{Fig:unitcell}\n\end{figure}\n\nIt is known that graphene is a single layer of carbon atoms with the honeycomb lattice configuration, which is characterized by the $D_{6h}$ symmetry.\cite{Dresselhaus} 3D graphite is an AB-stacked honeycomb lattice, where the B layers are obtained by shifting the A layers along one of the first-nearest-neighbor carbon-carbon bonds in the horizontal plane, as shown in Fig.~\ref{Fig:unitcell}(a). The space group of the 3D graphite is the non-symmorphic group $D_{6h}^{4}$ with non-primitive translation $ \vec\tau=\frac{1}{2}\vec{c}$ (primitive translation $\vec t=n_1 \vec a_1+n_2 \vec a_2+ n_3\vec c$).\cite{Brillson} The distance between two adjacent layers is about $\frac{c}{2}=3.35\AA$, which is much larger than the bond length between two nearest-neighbor atoms in the plane, $b=1.42\AA$.\n\nThe NSGL's are also constructed from the AB-stacked honeycomb lattice, but with a limited number of layers. Although the structures of each layer of 3D graphite and of the NSGL's are the same, the corresponding symmetry groups are different, since the translational symmetry along the $\vec{c}$ axis no longer exists for the NSGL's, nor does the symmetry associated with $\vec\tau$. Now the symmetry group becomes a direct product of a 3D point group and a 2D translational group, and the point groups are different for the ENSGL's and ONSGL's, as mentioned in Ref.~\onlinecite{Manes}. For the ENSGL's, a center of inversion $\sigma_{i}$ is located at the midpoint between atom 4 in the $\frac {N}{2}$-th layer and atom 5 in the $(\frac {N}{2}+1)$-th layer, as shown in Fig.~\ref{Fig:unitcell}(a). There is one 3-fold main axis in the direction perpendicular to the layers and three 2-fold axes, $C_{2}''$, perpendicular to the main axis and at angles of $\pi\/3$ to each other. All these symmetry operations together constitute the point group $D_{3d}=\{E, 3C_{3}, 3C_{2}''\}\times\{E, \sigma_{i}\}$. In the ONSGL's, instead of a center of inversion there is a reflection symmetry $\sigma_{h}$ with the middle layer as its reference plane. A 3-fold main axis exists in the direction of the $z$-axis and three 2-fold axes, $C_{2}'$, are perpendicular to it. We notice that these three 2-fold axes $C_{2}'$ are each perpendicular to one of the $C_{2}''$ axes. 
Consequently, the symmetry group for the ONSGL's is $D_{3h}=\{E, 3C_{3}, 3C_{2}'\}\times\{E, \sigma_{h}\}$.\n\nThe environments of an atom in the graphite and in the NSGL's are different from those in 2D graphene. For each carbon atom in a graphene layer, there are three nearest-neighbor carbon atoms and six next-nearest neighbors. There are four carbon atoms 1, 2, 3, 4 in a unit cell of graphite, as represented in Fig.~\ref{Fig:unitcell}(a). For atom 4 in the A layer, there are two inter-layer nearest neighbors, one in each of the two adjacent layers, at the distance $\frac{c}{2}$. Also, in each of the two adjacent layers, there are three inter-layer next-nearest-neighbor atoms around atom 4 with distance $\sqrt{b^{2}+(\frac{c}{2})^{2}}$. As illustrated in Fig.~\ref{Fig:unitcell}(a), the local environment of atom 3 is quite different from that of atom 4. It has no inter-layer first neighbors at the distance $\frac{c}{2}$. However, atom 3 has six inter-layer neighbors in each of the two adjacent layers at the same distance as the second-nearest neighbors of atom 4.\n\subsection{AA-stacked}\nThe AA-stacked NSGL's (AA-NSGL's) are constructed from the AA-stacked honeycomb lattice, where all the layers have the same configuration. In the AA-stacked system, the ENSGL's, ONSGL's and the 3D graphite have the same point group $D_{6h}$, which is also the symmetry group of graphene. As shown in Fig.~\ref{Fig:unitcell}(b), the environments of the carbon atoms in the AA-stacked NSGL's are quite different from those in the AB-stacked systems. For each atom, there are two inter-layer nearest neighbors, one in each of the two adjacent layers, at the distance $\frac{c}{2}$. Each atom also has six inter-layer second-nearest neighbors with distance $\sqrt{b^{2}+(\frac{c}{2})^{2}}$ in its two adjacent layers. We notice here that in the AA-stacked 3D graphite there are only two atoms in the unit cell, and the primitive translation along the $c$-axis is $\vec{c}\/2$, which is half of the corresponding value in the AB-stacked 3D graphite.\n\n\n\section{symmetry analysis for the phonon modes}\nThe dynamical representation $\Gamma^{dyn}=\Gamma^{v}\bigotimes\Gamma^{atom}$ can be decomposed into the irreducible representations of the symmetry group, with the lattice displacements as the bases, where $\Gamma^{v}$ is the vector representation and $\Gamma^{atom}$ is the permutation representation of the group. By applying the projection operator technique, we carry out the decomposition of the dynamical representation into the irreducible representations for the NSGL's with $N$ even and odd respectively. According to Elliott,\cite{Elliott} the Ir active phonon modes belong to the irreducible representations decomposed from the vector representation $\Gamma^{v}$, while the Raman active modes correspond to the irreducible representations showing up in the decomposition of a six-dimensional representation with bases given by the quadratic forms: $x^{2}+y^{2}$, $z^{2}$, $x^{2}-y^{2}$, $xy$, $yz$, and $zx$. The three acoustic modes with zero frequency at the $\Gamma$ point, which correspond to the vector representation $\Gamma^{v}$, are excluded from the consideration of Ir and Raman active modes. For comparison, the corresponding results for graphene and 3D graphite are also listed in the following. 
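As a quick consistency check on the decompositions to be listed in Table~\ref{Tab:SymmetryAnalysis}, note that a unit cell of an $N$-layer film contains $2N$ atoms, so that $\Gamma^{dyn}$ must be $6N$-dimensional. For the AB-stacked NSGL's, for instance,\n\begin{align}\n\dim\left[N(A_{1g}\bigoplus A_{2u}\bigoplus E_{g}\bigoplus E_{u})\right]&=N(1+1+2+2)=6N,\nonumber\\\n\dim\left[(N-1)A_{1}'\bigoplus (N+1)A_{2}''\bigoplus (N+1)E'\bigoplus (N-1)E''\right]&=(N-1)+(N+1)+2(N+1)+2(N-1)=6N,\nonumber\n\end{align}\nfor the ENSGL's and the ONSGL's respectively, as required.\n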
For the irreducible representations we use the notation of Ref.~\onlinecite{Eyring}, which is the one most commonly used in the treatment of molecules.\n\begin{table*}[t]\n  \caption{The symmetry analysis for the phonon modes at the $\Gamma$ point of the NSGL's with AA- or AB-stacking. Phonon modes are classified by the irreducible representations of $\Gamma^{dyn}$ in the fourth column. The irreducible representations of the Ir and Raman active modes are listed in the fifth and sixth column respectively.}\n  \label{Tab:SymmetryAnalysis}\n  \begin{ruledtabular}\n    \begin{tabular}{|l|c|c|c|c|c|}\n      &&group&$\Gamma^{dyn}$&$\Gamma^{Ir}$&$\Gamma^{R}$\\\n      \hline\n      graphene\cite{Dresselhaus}&&$D_{6h}$&$A_{2u}\bigoplus B_{2g}\bigoplus E_{1u}\bigoplus E_{2g}$&\/&$E_{2g}$\\\n      \hline &ENSGL's&$D_{3d}$\cite{Manes}&$N(A_{1g}\bigoplus A_{2u}\bigoplus E_{g}\bigoplus E_{u})$&$(N-1)A_{2u}\bigoplus (N-1)E_{u}$&$NA_{1g}\bigoplus NE_{g}$\\\n      AB-stacked&ONSGL's&$D_{3h}$\cite{Manes}&$(N-1)A_{1}'\bigoplus (N+1)A_{2}''\bigoplus (N+1)E'$&$NA_{2}''\bigoplus N E'$&$(N-1)A_{1}'\bigoplus NE'\bigoplus (N-1)E''$\\\n      &&& $\bigoplus (N-1)E''$ &&\\\n      &3D\cite{Mani}&$D_{6h}^{4}$&$2(A_{2u}\bigoplus B_{2g}\bigoplus E_{1u}\bigoplus E_{2g})$&$A_{2u}\bigoplus E_{1u}$&$2E_{2g}$\\\n      \hline\n      &ENSGL's&$D_{6h}$&$\frac{N}{2}(A_{1g}\bigoplus A_{2u}\bigoplus B_{1u}\bigoplus B_{2g}\bigoplus E_{1u}$&$(\frac{N}{2}-1)(A_{2u}\bigoplus E_{1u})$&$\frac{N}{2}(A_{1g}\bigoplus E_{1g}\bigoplus E_{2g})$\\\n      &&& $\bigoplus E_{1g}\bigoplus E_{2g}\bigoplus E_{2u})$ &&\\\n      AA-stacked&ONSGL's&$D_{6h}$&$\frac{N-1}{2}(A_{1g}\bigoplus B_{1u}\bigoplus E_{1g}\bigoplus E_{2u})$&$\frac{N-1}{2}(A_{2u}\bigoplus E_{1u})$&$\frac{N-1}{2}(A_{1g}\bigoplus E_{1g})\bigoplus \frac{N+1}{2}E_{2g}$\\\n      &&& $\bigoplus \frac{N+1}{2}(A_{2u}\bigoplus B_{2g}\bigoplus E_{1u}\bigoplus E_{2g})$ &&\\\n      &3D&$D_{6h}$&$A_{2u}\bigoplus B_{2g}\bigoplus E_{1u}\bigoplus E_{2g}$&\/&$E_{2g}$\\\n    \end{tabular}\n  \end{ruledtabular}\n\end{table*}\n\n\nThe symmetry analysis for the phonon modes, together with the Raman and Ir active modes, is summarized in Table~\ref{Tab:SymmetryAnalysis}. In the AB-stacked NSGL's, since the $\sigma_{i}$ and $\sigma_{h}$ symmetries cannot coexist in either the ENSGL's or the ONSGL's, two straightforward consequences follow from the above classification of the Ir and Raman active modes. Firstly, in the ENSGL's, phonon modes cannot be Ir and Raman active simultaneously (which is also true for graphene and graphite). However, in the ONSGL's, the $N$ $E'$ modes are both Ir and Raman active. This is because there is no inversion center in the ONSGL's. Secondly, among the optical modes with their vibrational displacements perpendicular to the constituent layers, there is an exotic mode in which each layer oscillates as a whole, but alternately from layer to layer. It belongs to the $A_{1g}$ in the ENSGL's and $A''_{2}$ in the ONSGL's. Since the $\sigma_{h}$ operation exists only in the ONSGL's, this mode (the $\omega_{1}$ mode) is Raman active in the ENSGL's while Ir active in the ONSGL's.\n\nIn the AA-stacked NSGL's with $N$ even or odd, the symmetry group is $D_{6h}$, which includes both $\sigma_{i}$ and $\sigma_{h}$. As a result, phonon modes cannot be Ir and Raman active simultaneously. The $\omega_{1}$ mode mentioned above belongs to the $A_{1g}$ in the ENSGL's and $A_{2u}$ in the ONSGL's. 
This mode is Raman active in the ENSGL's while Ir active in the ONSGL's, the same as in the AB-stacked NSGL's. Notably, its vibrational pattern takes maximum advantage of the inter-layer interactions. It could thus serve as a sensitive and useful experimental probe of the even-oddness of NSGL's with a few layers.\n\n\n\section{calculation for the phonon dispersion}\n\subsection{Vibrational potential energy}\nThe vibrational potential energy for a graphene sheet can be described by five quadratic terms with the rigid rotational symmetry implemented.\cite{Aizawa, Jiang2} They are the 1st and 2nd nearest-neighbor stretching, the in-plane bond angle variation, the out-of-surface bond bending and the bond twisting energies. According to the modality of the atomic movements, we can also classify the inter-layer vibrational potential terms into three types: The first one describes the stretching movements between two atoms located in adjacent layers. The second describes the relative movement between two pairs of atoms sharing a common apex atom; that is, this type of movement involves three atoms forming one bond within a layer and another bond connecting the two nearest layers. The third involves more than three atoms, according to the specific bond configurations. As shown in Fig.~\ref{Fig:unitcell}, there is only one inter-layer nearest-neighbor carbon-carbon bond in each unit cell (the bond between atoms 1 and 4), so that only the twisting potential of this inter-layer bond needs to be considered here. Taken together, these terms constitute a modified valence force field model which accounts, to a certain extent, for interactions between distant atoms arising from the bond charge effect. Since the inter-layer bonds are much longer than those in the plane, all three types of inter-layer interactions above are one to two orders of magnitude weaker than their in-plane counterparts, while their mutual contributions are comparable. In the following, the inter-layer terms are written for the AB-stacked system; they can be similarly generalized to the AA-stacked system.\n\n\begin{table}[t]\n  \caption{Comparison of several mode frequencies (in units of cm$^{-1}$) for the AB-stacked 3D graphite between our calculated results and the experimental values.\cite{Nicklow, Maultzsch}}\n  \label{Tab:Fit}\n  \begin{ruledtabular}\n    \begin{tabular}{cllll}\n      Reps &$A_{1}^{'}$ & $E_{2g}$ & $A_{2u}$ & $E_{2g}$ \\\n      \hline\n      experiments& 30\cite{Nicklow} & 40\cite{Nicklow} & 868\cite{Maultzsch} & 1586\cite{Maultzsch}\\\n      theory &30.2 & 42.7 & 869.9 & 1586.6 \\\n    \end{tabular}\n  \end{ruledtabular}\n\end{table}\n\n\n(1). The inter-layer bond stretching energies $V^{(int)}_l$ $(V^{(int)}_{sl})$ have the form:\n\begin{eqnarray}\n  \sum_{i,j}\frac{\hat{k}_l}{2}[(\vec{u}_{i}-\vec{u}_{j})\cdot\n  \vec{e}_{ij}^{l}]^{2},\n  \label{Eq:Potential1}\n\end{eqnarray}\nwhere $\vec{u}_{i}$ $(\vec{u}_{j})$ is the displacement vector of atom $i$ $(j)$ and $\vec{e}_{ij}^{l}$ is the unit vector from atom $i$ to atom $j$. If the summation is taken over nearest-neighbor inter-layer pairs of atoms, the corresponding force constant is denoted as $\hat{k}_l$, while for next-nearest-neighbor inter-layer pairs the force constant is $\hat{k}_{sl}$.\n\n\n(2). 
For the three atoms 1, 4 and $i$, where $i$ is an in-plane nearest neighbor of atom 1 (see Fig.~\ref{Fig:unitcell}), we found that in a specific configuration with atom $i$ rather than atom 1 as the apex, and with the force along the corresponding bond direction instead of the perpendicular direction, a correlation term $\hat{k}_{rr}$ has the most important and sensitive contribution to the layer dependence of the intra-layer C-C stretching optical modes,\n\begin{eqnarray}\n  \frac{\hat{k}_{rr}}{2}\sum_{i}[(\vec{u}_{1}-\vec{u}_{i})\cdot\n  \vec{e}_{i1}^{l}-(\vec{u}_{4}-\vec{u}_{i})\cdot\n  \vec{e}_{i4}^{l}]^2. \nonumber\n\end{eqnarray}\nActually, the two squared terms in the above expression have already been accounted for in the in-plane and inter-plane stretching terms respectively. Only the cross term is left,\n\begin{eqnarray}\n  V_{rr}=-\hat{k}_{rr}\sum_{i}[(\vec{u}_{1}-\vec{u}_{i})\n  \cdot \vec{e}_{i1}^{l}]\n  [(\vec{u}_{4}-\vec{u}_{i})\cdot \vec{e}_{i4}^{l}]\; ,\n  \label{Eq:Potential4}\n\end{eqnarray}\nwhich weakens the interaction between two adjacent layers. The positive definiteness condition ensuring real frequencies is $\hat{k}_{sl}\geq \hat{k}_{rr}$.\n\n\n\n(3). The twisting potential for an inter-layer bond between atoms 1 and 4 comes from the two sets of three nearest neighbors of atoms 1 and 4 respectively. It can be described as\n\begin{eqnarray}\n  V_{tw}=\frac{\hat{k}_{tw}}{2}[\sum_{i}(\vec{u}_{i}-\vec{u}_{1})\cdot \vec{e}_{i}^{\theta}\n  -\sum_{j}(\vec{u}_{j}-\vec{u}_{4})\cdot\n  \vec{e}_{j}^{\theta}]^{2},\n  \label{Eq:Potential3}\n\end{eqnarray}\nwhere $\sum_{i}$ and $\sum_{j}$ represent the summation over the three intra-plane first-nearest neighbors of atoms 1 and 4 respectively. $\vec{e}_{i}^{\theta}=\vec{e}_{z}\times \vec{e}_{1i}^{l}$ is the tangential unit vector in the plane formed by the three atoms 1, 4, and $i$. The expression in quadratic form as a whole ensures a proper definition of the torsion angle. For pure rotations around the bond, this expression consistently gives zero torsion. In contrast, the bond is most severely twisted when the three neighbors around atom 1 and those of atom 4 rotate in opposite directions.\n\n\nWe stress here that all four of the above inter-layer vibrational potential energy terms satisfy the rigid rotational symmetry requirements,\cite{Popov, Mahan, Jiang2} which guarantees the existence of the flexure modes in low-dimensional systems. Although we establish the vibrational potential terms based on an analysis of the modality of the movements, the bond charge effect, especially along the perpendicular direction, has been incorporated by extending the valence force field beyond the nearest neighbors. Comparing, for example, the above $\hat{k}_{rr}$ term with the $V_{b-b}$ term in Ref.~\onlinecite{Mahan2}, which follows from the bond-charge model, one finds the same negative cross term.\n\n\subsection{Results and discussion}\n\n\n\begin{figure}\n  \begin{center}\n    \scalebox{1.0}[1.0]{\includegraphics[width=7.6cm]{3DFitLow.eps}}\n  \end{center}\n  \caption{Phonon dispersion for the 3D graphite in the low-frequency region. Solid dots are the experimental results of Ref.~\onlinecite{Nicklow}. 
Our theoretical calculations are shown as lines.}\n  \label{Fig:3DFitLow}\n\end{figure}\n\begin{figure}\n  \begin{center}\n    \scalebox{1.0}[1.0]{\includegraphics[width=7.6cm]{3DFitHigh.eps}}\n  \end{center}\n  \caption{Phonon dispersion for the 3D graphite in the high-frequency region. Solid dots are the experimental results.\cite{Maultzsch, Mohr} In Refs.~\onlinecite{Maultzsch, Mohr}, those phonon wave vectors $\vec{q}$ which were not exactly along the $\Gamma$-M or $\Gamma$-K-M direction were projected onto the closest high-symmetry direction. Lines are our theoretical calculations.}\n  \label{Fig:3DFitHigh}\n\end{figure}\n\n\n\n\nThe five intra-layer force constants we use in the following are taken from Ref.~\onlinecite{Jiang} with a minor modification. We adjust the four inter-layer force constants to fit the experimental values of four modes in 3D graphite, as shown in Table~\ref{Tab:Fit}. The fitting error for the phonon modes is kept below $7\%$. The inter-layer force constants are then fitted as $\hat{k}_{l}=0.77$~Nm$^{-1}$, $\hat{k}_{sl}=0.95$~Nm$^{-1}$, $\hat{k}_{tw}=0.64$~Nm$^{-1}$, $\hat{k}_{rr}=0.9$~Nm$^{-1}$.\n\n\n\n\nBased on the above fitted vibrational potential energy with nine terms, we calculate the dispersion curves for the AB-stacked graphite. As illustrated in Figs.~\ref{Fig:3DFitLow} and \ref{Fig:3DFitHigh}, our theoretical calculations agree with the experimental results not only in the low-frequency\cite{Nicklow} but also in the high-frequency regions.\cite{Maultzsch, Mohr} The excellent consistency with the experimental data shows that our model and parameters are reasonable and applicable.\n\n\n\n\begin{table}[t]\n  \caption{The Raman and Ir mode frequencies (in units of cm$^{-1}$) for the AB-stacked 3D graphite, the AB-stacked 2-layer graphene and the AA-stacked 3D graphite are listed. The irreducible representations are presented in the brackets following the frequency values.}\n  \label{Tab:Raman}\n  \begin{ruledtabular}\n    \begin{tabular}{|l|cc|cc|}\n      &Raman&&Infra-red&\\\n      \hline\n      AB- 3D & 42.7 ($E_{2g}$) & 1586.7 ($E_{2g}$) & 869.9 ($A_{2u}$) & 1588.2 ($E_{1u}$)\\\n      \hline\n      AB- & 30.2 ($E_{g}$) & 106 ($A_{1g}$) & 868.7 ($A_{2u}$) & 1588.1 ($E_{u}$)\\\n      2-layer & 867.4 ($A_{1g}$) & 1587.3 ($E_{g}$)\\\n      \hline\n      AA- 3D & 1584.7 ($E_{2g}$) & & & \\\n    \end{tabular}\n  \end{ruledtabular}\n\end{table}\n\n\begin{figure}\n  \begin{center}\n    \scalebox{1.0}[1.0]{\includegraphics[width=7.6cm]{ModeIntraLayer.eps}}\n  \end{center}\n  \caption{The frequency of the optical C-C stretching mode vs the layer number $N$. Lines are drawn to guide the eye.}\n  \label{Fig:ModeIntraLayer}\n\end{figure}\n\begin{figure}\n  \begin{center}\n    \scalebox{1.0}[1.0]{\includegraphics[width=7.6cm]{ModeOut.eps}}\n  \end{center}\n  \caption{The frequency of the out-of-plane optical mode vs the layer number $N$. This mode is Ir active in the AB stacking, while it is neither Ir nor Raman active in the AA stacking. Lines are drawn to guide the eye.}\n  \label{Fig:ModeOut}\n\end{figure}\n\begin{figure}\n  \begin{center}\n    \scalebox{1.1}[1.2]{\includegraphics[width=7.6cm]{ModeInterLayer.eps}}\n  \end{center}\n  \caption{The frequencies of the inter-layer optical mode vs the layer number $N$. Data for the AB- and AA-stacked NSGL's are designated by pentagrams and circles, respectively. 
The Raman and Infra-red activities for this mode are displayed by the full and empty symbols, respectively. The broken and dashed lines correspond to the frequencies for the AB-stacked and AA-stacked graphite, respectively.}\n  \label{Fig:ModeInterLayer}\n\end{figure}\n\begin{figure}\n  \begin{center}\n    \scalebox{1.0}[1.0]{\includegraphics[width=7.6cm]{2Layer3DLow.eps}}\n  \end{center}\n  \caption{In the low frequency region, there is a significant difference between the 3D graphite and the 2-layer graphene.}\n  \label{Fig:2Layer3DLow}\n\end{figure}\n\nWith the above force constants, we can calculate the phonon dispersion for NSGL's of AA or AB stacking with arbitrary layer number $N$. In Fig.~\ref{Fig:ModeIntraLayer}, the calculated frequency of the intra-layer optical C-C stretching mode is shown for the different stacking styles and layer numbers $N$. The layer dependence of the frequency exhibits a red shift, in agreement with the experimental measurements. The frequency of this mode is about 1588~cm$^{-1}$ in the single graphene layer, decreases with increasing $N$, and almost saturates at $N=10$. The limit is 1586.7~cm$^{-1}$ (1584.7~cm$^{-1}$) in the AB- (AA-) stacked system respectively. The red shift obtained in our calculation compares well with the values measured in experiments, $3 - 5$, $5 - 6$, and 8~cm$^{-1}$ in Refs.~\onlinecite{Ferrari}, \onlinecite{Gupta} and \onlinecite{Anindya}, respectively.\n\nThe out-of-plane optical mode, belonging to the $A_{2u}$ ($B_{2g}$) irreducible representation in the AB (AA) stacking, is Ir active in the AB stacking yet inactive in the AA stacking, irrespective of the even-oddness of the layer number $N$; it is therefore useful in determining whether an NSGL is AB- or AA-stacked. As shown in Fig.~\ref{Fig:ModeOut}, the frequencies of this mode depend on the layer number $N$ and increase from 864.8~cm$^{-1}$ to 872.6~cm$^{-1}$ in both the AB and AA stacking. In contrast to the C-C stretching optical mode, this mode exhibits a blue-shift-type layer dependence, which could be identified as experimental techniques develop.\n\nFor the inter-layer optical mode, the layer number dependence of the frequency is shown in Fig.~\ref{Fig:ModeInterLayer}. This mode takes the greatest advantage of the inter-layer interaction and depends considerably on the layer number $N$ and on the stacking style, AB or AA. In the case of $N=2$, the $\omega_{1}$ mode has the frequency values 106~cm$^{-1}$ and 94.5~cm$^{-1}$ for the AB- and AA-stacked NSGL's respectively. The frequencies of the $\omega_1$ mode increase with increasing $N$ and almost reach their limiting values at $N=10$. The limiting values are 149.8~cm$^{-1}$ and 133.6~cm$^{-1}$ for the AB- and AA-stacked NSGL's respectively. These frequency differences, as well as the Raman versus Ir activity (see Sec.~III) of the $\omega_{1}$ mode in NSGL's with different layer numbers, might inspire considerable experimental interest in the $\omega_1$ mode.\n\n\nWe then calculate the phonon dispersion for the 2-layer AB stacking, in comparison with that of the 3D graphite. The most significant difference between the 2-layer graphene and the 3D graphite lies in the low-frequency region around the $\Gamma$ point, as shown in Fig.~\ref{Fig:2Layer3DLow}. The frequencies of the low-frequency optical modes in the 2-layer graphene are much smaller than their counterparts in the 3D graphite. 
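This reduction, as well as the saturation of the $\omega_{1}$ mode discussed above, can be rationalized by a simple rigid-layer picture, which we include here only as an illustrative aside. Assuming each layer moves as a rigid unit coupled to its nearest layers by an effective spring, the NSGL reduces to a free linear chain of $N$ identical masses, whose highest (layer-alternating) normal mode has frequency $\omega_{1}(N)=\omega_{\infty}\sin[(N-1)\pi\/(2N)]$, with $\omega_{\infty}$ the $N\to\infty$ limit. The short script below evaluates this estimate using our calculated limiting values.\n\begin{verbatim}\nimport numpy as np\n\n# N -> infinity limits of the omega_1 mode from our calculation (cm^-1).\nOMEGA_INF = {'AB': 149.8, 'AA': 133.6}\n\ndef omega1(N, stacking='AB'):\n    # Highest mode of a free chain of N rigid layers coupled by\n    # nearest-layer springs: omega_inf * sin((N-1)*pi\/(2*N)).\n    return OMEGA_INF[stacking] * np.sin((N - 1) * np.pi \/ (2 * N))\n\nfor N in (2, 3, 5, 10):\n    print(N, round(omega1(N, 'AB'), 1), round(omega1(N, 'AA'), 1))\n# N = 2 gives 105.9 (AB) and 94.5 (AA) cm^-1, close to the values\n# obtained from the full force-constant calculation.\n\end{verbatim}\n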
The frequencies of the Raman and Ir active modes are shown in the third line of Table~\ref{Tab:Raman}, among which the two $A_{1g}$ modes have the frequency values $\omega_{1}=106$~cm$^{-1}$ and $\omega_{2}=867.4$~cm$^{-1}$. In fact, the $\omega_{1}$ and $\omega_{2}$ modes are the above-mentioned $NA_{1g}$ modes of the ENSGL's specialized to $N=2$.\n\n\n\n\begin{figure}\n  \begin{center}\n    \scalebox{1.0}[1.0]{\includegraphics[width=7.6cm]{AA3Dgraphite.eps}}\n  \end{center}\n  \caption{The phonon dispersion along some high-symmetry directions for the AA-stacked 3D graphite. There are only six branches in the figure, since the unit cell in the AA-stacked 3D graphite contains two atoms.}\n  \label{Fig:AA3Dgraphite}\n\end{figure}\n\nWe further calculate the phonon dispersion curves for the AA-stacked 3D graphite, as shown in Fig.~\ref{Fig:AA3Dgraphite}. Since the unit cell contains only two atoms, in contrast to that of the AB-stacked graphite, there are six phonon branches. Along $\Gamma A$ in the Brillouin zone, the lowest and highest branches, which correspond to the in-plane acoustic and optical vibrational modes, are doubly degenerate, while the remaining two branches, describing the out-of-plane vibration, are non-degenerate. At the $A$ point in the Brillouin zone, there is a phase factor difference of $\pi$ between two adjacent layers; for the out-of-plane motion this yields the $\omega_{1}$ mode, with the frequency value of 133.6~cm$^{-1}$. In the fourth line of Table~\ref{Tab:Raman}, the Raman and Ir active modes for the AA-stacked 3D graphite are listed, which remain to be confirmed in future experiments.\n\n\n\n\n\n\section{conclusion}\nBased upon a thorough investigation of the lattice symmetry of the NSGL's, the Raman and Ir properties, in particular their layer dependence, are systematically studied. With a proposed generalized vibrational potential, we further calculate the phonon dispersion of various modes of the AB- or AA-stacked NSGL's, where the layer dependence is also stressed. The calculated frequencies of the optical C-C stretching mode exhibit a red shift with increasing layer number in both the AB- and AA-stacked NSGL's, and the shift values of 2~cm$^{-1}$ (4~cm$^{-1}$) for the AB- (AA-) stacked NSGL's are in good agreement with the experimental measurements. The out-of-plane optical mode with frequency around 800~cm$^{-1}$ is Ir active in the AB-stacked structure yet neither Raman nor Ir active in the AA-stacking. Its frequency shows a blue-shift-type layer dependence. We also predict that the frequency of the inter-layer optical mode increases with increasing $N$. Since this mode is more sensitive to the layer number $N$, it should be of experimental interest for determining the lattice structure of the NSGL's.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nThe thermoelectric effect (i.e., the appearance of an electric current upon application of a thermal gradient) in superconductors remains an intriguing topic that has attracted a lot of interest over the last decades \cite{NL}.\nWhile the magnitude of this effect in generic metals and superconductors usually remains small (being proportional to the ratio between the temperature and the Fermi energy, $T\/\varepsilon_F$), it can increase by orders of magnitude provided electron-hole symmetry is lifted, e.g., due to spin-dependent scattering of electrons. 
This situation was predicted to occur in a variety of structures, such as superconductors doped with magnetic impurities \cite{Kalenkov12}, superconductor-ferromagnet hybrids with the density of states spin-split by the exchange and\/or Zeeman fields \cite{Machon,Ozaeta}, or superconductor-normal metal (SN) bilayers with spin-active interfaces \cite{KZ14,KZ15}. In accordance with theoretical predictions, large thermoelectric currents were recently observed in superconductor-ferromagnet tunnel junctions in high magnetic fields \cite{Beckmann}.\n\nA large thermoelectric effect was also observed in multiterminal hybrid SNS structures with no magnetic inclusions \cite{Venkat1,Venkat2,Petrashov03,Venkat3}. When exposed to a temperature gradient, such structures (frequently called Andreev interferometers) were found to develop a thermopower signal whose magnitude was not restricted by the small parameter $T\/\varepsilon_F$ and which, furthermore, turned out to be a periodic function of the superconducting phase difference $\chi$ across the corresponding SNS junction. The latter observation indicates that macroscopic quantum coherence can play an important role and poses a question about the relation between thermoelectric and Josephson effects in the systems under consideration. Subsequent theoretical analysis \cite{Seviour00,KPV,VH,VP} indeed demonstrated that the thermoelectric effect in Andreev interferometers can be large and confirmed the periodic dependence of the thermopower on the phase difference $\chi$. At the same time, there appears to be no general consensus in the literature concerning the basic physical origin of this effect. While the authors of Ref. \cite{KPV} emphasized an important role of electron-hole imbalance, Virtanen and Heikkil\"a \cite{VH}, on the contrary, proposed that in Andreev interferometers a non-vanishing thermopower signal can be generated even provided electron-hole symmetry is maintained and that the dominant part of this signal can be directly related to the difference between the {\it equilibrium} values of the Josephson current at temperatures $T_1$ and $T_2$. Subsequently, Volkov and Pavlovskii \cite{VP} argued that in general no such simple relation between the thermopower and the Josephson current can be established and that under certain conditions the former can still remain large even though the latter gets strongly suppressed by temperature effects.\n\nThe existing theory \cite{Seviour00,KPV,VH,VP} deals mainly with the experimentally relevant diffusive limit, in which case the analysis may become rather cumbersome, forcing the authors either to employ numerics or to resort to various approximations. On the other hand, in order to clarify the physical origin of a large thermoelectric effect in Andreev interferometers, one could take a different route, focusing the analysis on the ballistic limit. A clear advantage of this approach is the possibility to employ the notion of semiclassical electron trajectories and in this way to treat the problem exactly. Below we will follow this route.\n\nThe structure of the paper is as follows. In Sec. \ref{model} we define our model and describe the quasiclassical formalism which will be employed in our further analysis. In Sec. \ref{riccati} we demonstrate that the quasiclassical Eilenberger equations supplemented by Zaitsev boundary conditions can be conveniently resolved with the aid of the so-called Riccati parameterization of the Green functions. 
Specific\nquasiparticle trajectories and their contributions to electric currents are\nidentified in Sec. \\ref{quas} where we also discuss the conditions for\nelectron-hole symmetry violation in our system. In Sec. \\ref{thermo} we\nevaluate the thermoelectric voltages induced by applying a temperature gradient\nand demonstrate that these voltages can be large because of the presence of\nelectron-hole asymmetry in Andreev interferometers. Sec. \\ref{josephson} is\ndevoted to further analysis of the relation between thermoelectric and Josephson\neffects. Some technical details of our calculation are displayed in Appendix.\n\n\\section{The model and quasiclassical formalism}\n\\label{model}\nLet us consider an NSNSN structure shown in Fig. \\ref{nsnsn3-fig}. Two\nsuperconductors with phase difference $\\chi = \\chi_1 - \\chi_2$ are connected to\na normal wire. The two ends of this normal wire are maintained at temperatures $T_1$\nand $T_2$ and voltages $V_1$ and $V_2$ respectively. Within our model we deliberately choose to disregard electron scattering on impurities and boundary imperfections, in which case electron motion is ballistic and\ncan be conveniently described in terms of quasiclassical trajectories. In\naddition, we will disregard inelastic electron relaxation inside our system by\nassuming the inelastic relaxation length to exceed the system size. In this case\ninelastic electron relaxation can only occur deep inside normal terminals\n$\\mathrm{N_1}$ and $\\mathrm{N_2}$.\n\n\\begin{figure}\n\\centerline{ \\includegraphics[width=80mm]{nsnsn3-fig} }\n\\caption{(Color online) Normal wire connected to normal and superconducting leads. Normal leads are maintained at temperatures $T_1$ and $T_2$ and voltages $V_1$ and $V_2$ respectively. An example of a quasiclassical electron trajectory relevant for the thermoelectric effect under consideration is also illustrated.}\n\\label{nsnsn3-fig}\n\\end{figure}\n\nIn what follows we will employ the quasiclassical theory of superconductivity based on the Eilenberger equations \\cite{Eil,bel}. Under the above assumptions adopted within our model these equations take the form\n\\begin{equation}\n\\left[ \\hat\\Omega , \\hat g^{R,A,K} \\right]\n+\ni\\bm{v}_F \\nabla \\hat g^{R,A,K} (\\bm{p}_F, \\bm{r}, \\varepsilon) =0,\n\\quad\n\\check g^2 =1,\n\\end{equation}\nwhere $\\hat g^{R,A,K}$ are energy-integrated retarded, advanced and Keldysh $2\\times 2$ matrix\nGreen functions. The matrices $\\hat\\Omega$ and $\\check g$ have the following\nstructure\n\\begin{equation}\n\\hat \\Omega=\n\\begin{pmatrix}\n\\varepsilon & \\Delta \\\\\n-\\Delta^* & -\\varepsilon\n\\end{pmatrix},\n\\quad\n\\check g=\n\\begin{pmatrix}\n\\hat g^R & \\hat g^K \\\\\n0 & \\hat g^A\n\\end{pmatrix},\n\\end{equation}\nwhere $\\Delta$ and $\\varepsilon$ are respectively the superconducting order parameter and the\nquasiparticle energy. Electric current density can be expressed in terms of the\nKeldysh Green function in the standard manner as\n\\begin{equation}\n\\bm{j}(\\bm{r})= -\\dfrac{e N_0}{4} \\int d \\varepsilon\n\\left< \\bm{v}_F \\Sp [\\hat \\tau_3 \\hat g^K(\\bm{p}_F, \\bm{r},\n\\varepsilon) ] \\right>,\n\\label{current}\n\\end{equation}\nwhere $\\bm{p}_F=m\\bm{v}_F$ is the electron Fermi momentum vector, $\\hat\\tau_3$\nis the Pauli matrix in the Nambu space and $N_0=mp_F\/(2\\pi^2)$ is the normal density of states at\nthe Fermi level. 
Here the angular brackets $\\left<\\cdots\\right>$ denote\naveraging over\nthe Fermi momentum directions.\n\n\\section{Riccati parameterization and boundary conditions}\n\\label{riccati}\nFor the system under consideration the Eilenberger equations can be solved exactly.\nIn order to proceed we restrict our analysis to quasiparticles propagating in the normal metal with subgap energies $|\\varepsilon| < |\\Delta|$ and employ the so-called Riccati parameterization for the retarded and advanced Green functions \\cite{Schopohl95}\n\\begin{equation}\n\\hat g^{R,A}=\\pm\n \\hat N^{R,A}\n \\begin{pmatrix}\n 1+\\gamma^{R,A} \\tilde \\gamma^{R,A} & 2\\gamma^{R,A} \\\\\n -2 \\tilde \\gamma^{R,A} & -1- \\tilde \\gamma^{R,A} \\gamma^{R,A} \\\\\n \\end{pmatrix},\n \\label{graparam}\n\\end{equation}\nwhere $\\gamma^{R,A}$, $\\tilde \\gamma^{R,A}$ are Riccati amplitudes and\n\\begin{equation}\n\\hat N^{R,A}=\n \\begin{pmatrix}\n (1-\\gamma^{R,A} \\tilde \\gamma^{R,A})^{-1} & 0 \\\\\n 0 & (1-\\tilde \\gamma^{R,A} \\gamma^{R,A} )^{-1} \\\\\n \\end{pmatrix}.\n \\label{nrparam}\n\\end{equation}\nParameterization of the Keldysh Green function $\\hat g^K$ also contains the two distribution functions $x$ and $\\tilde x$. It reads \\cite{Eschrig00}\n\\begin{equation}\n\\hat g^K=\n2\n\\hat N^R\n\\begin{pmatrix}\nx - \\gamma^R \\tilde x \\tilde \\gamma^A &\n-\\gamma^R \\tilde x + x \\gamma^A \\\\\n-\\tilde \\gamma^R x + \\tilde x \\tilde \\gamma^A &\n\\tilde x - \\tilde \\gamma^R x \\gamma^A \\\\\n\\end{pmatrix}\n\\hat N^A.\n\\label{gkparam}\n\\end{equation}\n\nIn the normal metal (i.e. for $\\Delta \\equiv 0$) Riccati amplitudes $\\gamma^{R,A}$, $\\tilde \\gamma^{R,A}$ and distribution functions $x$, $\\tilde x$ obey the following simple equations\n\\begin{gather}\ni\\bm{v}_F \\nabla \\gamma^{R,A} = -2 \\varepsilon \\gamma^{R,A},\n\\quad\ni\\bm{v}_F \\nabla \\tilde \\gamma^{R,A} = 2 \\varepsilon \\tilde \\gamma^{R,A},\n\\label{gammaeq}\n\\\\\ni\\bm{v}_F \\nabla x =0, \\quad i\\bm{v}_F \\nabla \\tilde x =0.\n\\label{xeq}\n\\end{gather}\nWithin the quasiclassical approximation adopted here quasiparticles propagate\nalong the straight line trajectories between each two scattering events which\ncan only occur at the boundaries of the normal metal wire. Of interest for us\nhere is to describe quasiparticle scattering at the interfaces between the\nnormal metal and each of the two superconductors. This task is accomplished in\nthe standard manner with the aid of the Zaitsev boundary conditions\n\\cite{Zaitsev84} for the quasiclassical Green functions rewritten in terms of\nthe above Riccati amplitudes and the distribution functions\\cite{Eschrig00}.\n\nConsider, for instance, a quasiparticle propagating from the first normal terminal, being reflected at each of the two NS interfaces and leaving the system through the second normal terminal. The corresponding quasiparticle trajectory is indicated in Fig. \\ref{nsnsn3-fig}. It is important to emphasize that here we only account for quasiparticles with subgap energies which cannot penetrate into superconductors suffering either normal or Andreev reflection at both NS interfaces. It is easy to verify that the distribution functions for such quasiparticles take the form\n\\begin{gather}\nx =\n\\left[1- \\gamma^R \\tilde \\gamma^A\\right] x_1,\n\\quad\n\\tilde x =\n\\left[1- \\tilde \\gamma^R \\gamma^A \\right] \\tilde x_2\n\\end{gather}\nobeying both Eilenberger equations and the corresponding boundary conditions at NS\ninterfaces. 
Here $\\gamma^{R,A}$ and $\\tilde \\gamma^{R,A}$ are Riccati amplitudes\nalong the trajectory, $x_1$ and $\\tilde x_2$ are asymptotic values of the\ndistribution functions $x$ and $\\tilde x$ respectively at the initial and final trajectory points. Then we obtain\n\\begin{equation}\n\\hat g^K = \\dfrac{x_1}{2}\n(1+\\hat g^R)(1 - \\hat g^A)\n+\n\\dfrac{\\tilde x_2}{2}\n(1 - \\hat g^R)(1 + \\hat g^A).\n\\end{equation}\nMaking use of the asymptotic conditions\n\\begin{gather}\n\\gamma^R_1 = \\tilde \\gamma^A_1 =0, \\quad \\tilde \\gamma^R_2 = \\gamma^A_2 = 0,\n\\label{as1}\n\\end{gather}\nwhich hold respectively in the initial and final\ntrajectory points we recover simple expressions for the Keldysh Green function\ninside the normal leads, i.e.\n\\begin{gather}\n\\hat g^K_1=\n2\n\\begin{pmatrix}\nx_1 & x_1 \\gamma^A_1 \\\\\n-\\tilde \\gamma^R_1 x_1 & \\tilde x_2 - \\tilde \\gamma^R_1 \\gamma^A_1 (x_1 + \\tilde x_2)\n\\\\\n\\end{pmatrix}\n,\n\\\\\n\\hat g^K_2=\n2\n\\begin{pmatrix}\nx_1 - \\gamma^R_2 \\tilde \\gamma^A_2 ( x_1 + \\tilde x_2) & - \\gamma^R_2 \\tilde x_2 \\\\\n\\tilde x_2 \\tilde \\gamma^A_2 & \\tilde x_2\\\\\n\\end{pmatrix}.\n\\end{gather}\n\nThe expressions for Riccati amplitudes $\\gamma^{R,A}$ and $\\tilde \\gamma^{R,A}$ are derived from Eqs. \\eqref{gammaeq} supplemented by the boundary conditions at the NS interfaces. The latter can be expressed in the form \\cite{Eschrig00}\n\\begin{gather}\n\\gamma^R_{\\text{out}}=\n\\dfrac{\n(\\mathcal{R}-\\gamma^R_S \\tilde \\gamma^R_S) \\gamma^R_{\\text{in}} + \\mathcal{D} \\gamma^R_S\n}{\n- \\mathcal{D} \\tilde \\gamma^R_S \\gamma^R_{\\text{in}} +(1- \\mathcal{R}\\gamma^R_S \\tilde \\gamma^R_S)\n},\n\\label{bound1}\n\\\\\n\\gamma^A_{\\text{in}}=\n\\dfrac{\n(\\mathcal{R}-\\gamma^A_S \\tilde \\gamma^A_S) \\gamma^A_{\\text{out}} + \\mathcal{D} \\gamma^A_S\n}{\n- \\mathcal{D} \\tilde \\gamma^A_S \\gamma^A_{\\text{out}} +(1- \\mathcal{R}\\gamma^A_S \\tilde \\gamma^A_S)\n},\n\\label{bound2}\n\\\\\n\\tilde \\gamma^R_{\\text{in}}=\n\\dfrac{\n(\\mathcal{R}-\\gamma^R_S \\tilde \\gamma^R_S) \\tilde \\gamma^R_{\\text{out}} + \\mathcal{D} \\tilde\n\\gamma^R_S\n}{\n- \\mathcal{D} \\gamma^R_S \\tilde \\gamma^R_{\\text{out}} +(1- \\mathcal{R} \\gamma^R_S \\tilde\n\\gamma^R_S)\n},\n\\label{bound3}\n\\\\\n\\tilde \\gamma^A_{\\text{out}}=\n\\dfrac{(\\mathcal{R}-\\gamma^A_S \\tilde \\gamma^A_S) \\tilde \\gamma^A_{\\text{in}}\n+\n\\mathcal{D} \\tilde \\gamma^A_S}{\n- \\mathcal{D} \\gamma^A_S \\tilde \\gamma^A_{\\text{in}}\n+(1- \\mathcal{R}\\gamma^A_S \\tilde \\gamma^A_S)\n\\label{bound4}\n},\n\\end{gather}\nwhere the Riccati amplitudes denoted by the subscripts ``in'' and ``out''\nparameterize retarded and advanced Green function for respectively incoming and outgoing\nmomentum directions, $\\mathcal{D}=1-\\mathcal{R}$ denotes the\nnormal transmission of the corresponding NS interface and $\\gamma^{R,A}_S$, $\\tilde \\gamma^{R,A}_S$ are\nthe Riccati amplitudes in the superconductor defined as\n\\begin{gather}\n\\gamma_S^R = \\dfrac{\\varepsilon - \\sqrt{\\varepsilon^2 -\n|\\Delta|^2}}{\\Delta^*},\n\\quad\n\\tilde \\gamma_S^A = [\\gamma_S^R]^*,\n\\\\\n\\tilde \\gamma_S^R = \\dfrac{\\varepsilon - \\sqrt{\\varepsilon^2 -\n|\\Delta|^2}}{\\Delta},\n\\quad\n\\gamma_S^A = [\\tilde \\gamma_S^R]^*,\n\\end{gather}\nwhere the branch of the square root is chosen so that $\\sgn \\Img \\sqrt{z} =\n\\sgn \\Img z$.\n\nFrom Eqs. 
\\eqref{bound1}-\\eqref{bound4} it is easy to observe that at subgap energies $|\\varepsilon| < |\\Delta|$ the boundary conditions at the NS interface acquire the same form for four different functions $\\gamma^R$, $1\/\\tilde \\gamma^R$, $1\/\\tilde \\gamma^A$, and $\\gamma^A$, i.e.\n\\begin{gather}\nw_{\\text{out}}=\n\\dfrac{\n(\\mathcal{R}-\\gamma^R_S \\tilde \\gamma^R_S) w_{\\text{in}} + \\mathcal{D} \\gamma^R_S\n}{\n- \\mathcal{D} \\tilde \\gamma^R_S w_{\\text{in}} +(1- \\mathcal{R}\\gamma^R_S \\tilde \\gamma^R_S)\n},\n\\\\\nw=\\gamma^R, 1\/\\tilde \\gamma^R, 1\/\\tilde \\gamma^A, \\gamma^A.\n\\label{w}\n\\end{gather}\nHere we employed the identities $\\gamma_S^R \\tilde \\gamma_S^A =1$ and $\\tilde\n\\gamma_S^R \\gamma_S^A =1$ which hold in the relevant energy interval\n$|\\varepsilon| < |\\Delta|$. Furthermore, Eqs. \\eqref{gammaeq} demonstrate that\nthe four functions \\eqref{w} obey the same equation. With this in mind it is\nstraightforward to verify that the following combination of the Riccati\namplitudes\n\\begin{equation}\n\\dfrac{1 - \\gamma^R \\tilde \\gamma^A}{1 - \\gamma^R \\tilde \\gamma^R}\n\\dfrac{1 - \\tilde \\gamma^R \\gamma^A}{1 - \\gamma^A \\tilde \\gamma^A}\n\\end{equation}\nremains constant along the trajectory at subgap energies. Then making use of Eqs. \\eqref{as1} we obtain the relation between Riccati amplitudes in the initial and final points of the quasiclassical trajectory:\n\\begin{equation}\n\\tilde \\gamma^R_1 \\gamma^A_1\n=\n\\gamma^R_2 \\tilde \\gamma^A_2.\n\\end{equation}\n\nAsymptotic behavior of Riccati amplitudes at the beginning and at the end of\nelectron trajectories is\ndirectly related to transmission and reflection probabilities of the corresponding\nprocesses. These probabilities will be explicitly evaluated in the next section.\n\n\\section{Quasiclassical trajectories and electron-hole asymmetry generation}\n\\label{quas}\nAs we already pointed out, quasiparticles with subgap energies can only propagate inside the normal part of our system being unable to penetrate deep into the superconductors S$_1$ and S$_2$. Let us classify such electron trajectories relevant for the thermoelectric effect under consideration.\n\nThere exist electron trajectories starting in one of the terminals\n($\\mathrm{N_1}$ or $\\mathrm{N_2}$) and going back to the same terminal without\nhitting any\nof the two NS interfaces. These trajectories do not contribute to any current flowing in our system and, hence, can be safely ignored in our subsequent\nconsideration. Of more relevance are electron trajectories which propagate from the first to the second terminal (or vice versa).\nProvided these trajectories ``know nothing about superconductivity'' (even \nthough some of them can hit at least one of the NS interfaces)\nthey contribute to the dissipative Ohmic current $(V_1-V_2)\/R_{0}$ flowing\nbetween the terminals $\\mathrm{N_1}$ and $\\mathrm{N_2}$. 
Here\n\\begin{equation}\n\\dfrac{1}{R_{0}}=\n2 e^2 N_0 \\int \\left< \\bm{v}_F \\Theta_{12}(\\bm{p}_F,\\bm{r}) \\right> d\n\\bm{\\Sigma}_1,\n\\label{sharvin}\n\\end{equation}\nis the inverse Sharvin resistance of the normal wire and the function $\\Theta_{12}(\\bm{p}_F,\\bm{r})$ equals to unity for\nall electron trajectories connecting the first and the second terminals and to zero otherwise.\nHere and below averaging $\\left< ...\\right>$ includes only the directions of $\\bm{v}_F$ corresponding to electron trajectories going\nout of the first terminal and $\\int \\cdots d\\bm{\\Sigma}_{1(2)}$ denotes the integral over the cross-section of the normal lead $N_{1(2)}$.\n\nThe remaining contributions to the currents $I_1$ and $I_2$ in the normal terminals (see Fig. \\ref{nsnsn3-fig})\nare due to electron trajectories which directly involve at least one of the superconductors. In what follows for the sake of simplicity we will restrict our analysis to trajectories which may hit each of the NS interfaces only once assuming that the contribution of more complicated trajectories is negligible. This can easily be achieved by a proper choice of the system geometry (e.g., by assuming the cross sections of both NS interfaces to be sufficiently small). There are electron trajectories which originate in the first terminal, hit one of the\nNS interfaces and go back to the same terminal. Making use of the formalism\ndescribed in the previous sections we evaluate the Keldysh Green function on\nsuch trajectories and then derive the expression for the current with the aid of\nEq. \\eqref{current}. E.g., in the case of the first terminal we obtain\n\\begin{multline}\nI_{1}^{loc}=\neN_0\n\\int\n\\left<\n\\bm{v}_{F}\\Theta_{11}^S(\\bm{p}_F,\\bm{r})\n|\\tilde \\gamma_1^R(\\bm{p}_F, \\bm{r}, \\varepsilon)|^2\n\\right>\n\\\\\\times\n\\left[\n-\\tilde x_1 (\\varepsilon)\n-\nx_1(\\varepsilon)\n\\right] d \\varepsilon d \\bm{\\Sigma}_1,\n\\label{i1loc}\n\\end{multline}\nwhere the function $\\Theta_{11}^S(\\bm{p}_F,\\bm{r})=1$ for electron trajectories which start inside the\nfirst terminal, hit one of the two NS interfaces and return back to the same\nterminal $\\mathrm{N_1}$ and $\\Theta_{11}^S(\\bm{p}_F,\\bm{r})=0$ otherwise. The\nexpression for $I_{2}^{loc}$ can be obtained from Eq. \\eqref{i1loc} simply by\nreplacing the indices $1 \\leftrightarrow 2$.\nWe also note that the equilibrium distribution functions $x_{1,2}$ and $\\tilde\nx_{1,2}$ in the bulk normal electrodes $\\mathrm{N_1}$ and $\\mathrm{N_2}$ are\ndefined by the standard expressions\n\\begin{gather}\nx_{1,2}=\\tanh\\dfrac{\\varepsilon - eV_{1,2}}{T_{1,2}},\n\\quad\n\\tilde x_{1,2}=-\\tanh\\dfrac{\\varepsilon + eV_{1,2}}{T_{1,2}}.\n\\label{distr}\n\\end{gather}\n\nWhat remains is to account for the trajectories connecting two different\nterminals and touching either only one of the two NS interfaces or both these\ninterfaces one after the other. The latter situation is illustrated in Fig.\n\\ref{nsnsn3-fig}. As before, for these trajectories we set\n$\\Theta_{12}^S(\\bm{p}_F,\\bm{r})=1$, whereas $\\Theta_{12}^S(\\bm{p}_F,\\bm{r})=0$\nfor all other trajectories. Again evaluating the Keldysh component of the Green\nfunction matrix and making use of Eq. 
\\eqref{current}, we get\n\\begin{multline}\nI^{nl} = eN_0\n\\int\n\\left<\n\\bm{v}_{F}\\Theta_{12}^S(\\bm{p}_F,\\bm{r})\n|\\tilde \\gamma_1(\\bm{p}_F, \\bm{r}, \\varepsilon)|^2\n\\right>\n\\\\\\times\n\\left[\n-\\tilde x_2 (\\varepsilon)\n-\nx_1(\\varepsilon)\n\\right] d \\varepsilon d \\bm{\\Sigma}_1.\n\\label{nl}\n\\end{multline}\n\nCollecting all the above contributions, we determine the currents $I_1$ and $I_2$ flowing into respectively the first and the second normal terminals:\n\\begin{gather}\nI_1 = (V_1-V_2)\/R_{0}+I_1^{loc}(V_1)+I^{nl}(V_1,V_2,\\chi),\n\\label{i1}\n\\\\\nI_2 =(V_2-V_1)\/R_{0}+ I_2^{loc}(V_2) +I^{nl}(V_1,V_2,\\chi).\n\\label{i2}\n\\end{gather}\n\nIn order to proceed we need to evaluate the Riccati amplitude $\\tilde \\gamma_1^R(\\bm{p}_F,\n\\bm{r}, \\varepsilon)$ in the beginning of the corresponding trajectory. For simplicity let us assume that both temperatures $T_{1,2}$ and voltages $V_{1,2}$ remain well below the superconducting gap, i.e. $T_{1,2}, eV_{1,2} \\ll |\\Delta|$. In this case it follows immediately, e.g., from Eqs. \\eqref{i1loc}-\\eqref{nl} that electron transport in our system is dominated by quasiparticles with energies well in the subgap range\n\\begin{equation}\n|\\varepsilon| \\ll |\\Delta|.\n\\label{subgap}\n\\end{equation}\n\nConsider first the quasiclassical electron trajectory that begins in the\nterminal $\\mathrm{N_1}$,\nhits one of the NS interfaces and returns back to the same terminal. Making use of the analysis\ndeveloped in the previous section, under the condition \\eqref{subgap} one readily finds\n\\begin{equation}\n|\\tilde \\gamma_1^R(\\bm{p}_F, \\bm{r},\n\\varepsilon)|^2 = \\mathcal{D}^2\/(1+\\mathcal{R})^2,\n\\label{BTK}\n\\end{equation}\nwhere, as before, $\\mathcal{D}=\\mathcal{D}_{1,2}=1-\\mathcal{R}_{1,2}$ is the normal transmission of the corresponding NS interface. Likewise, for the trajectories which start in the first terminal, hit\nthe interfaces NS$_1$ and NS$_2$ and then go towards the terminal\n$\\mathrm{N_2}$ (as illustrated\nin Fig. \\ref{nsnsn3-fig}), in the limit \\eqref{subgap} we obtain\n\\begin{multline}\n|\\tilde \\gamma_1^R|^2 =\n1-\\frac{16\\mathcal{R}_1\\mathcal{R}_2}\n{\\bigl|\n(1+\\mathcal{R}_1)(1+\\mathcal{R}_2)\n+\\mathcal{D}_1\\mathcal{D}_2e^{i(\\chi + 2\\varepsilon d\/v_F)}\n\\bigr|^{2}},\n\\label{as2}\n\\end{multline}\nwhere $d$ is the effective distance covered by a quasiparticle between the two\nscattering events at $\\mathrm{NS_1}$ and $\\mathrm{NS_2}$ interfaces. Here and\nbelow we assume that this distance obeys the condition $d \\gg v_F\/|\\Delta|$.\nCombining the results \\eqref{BTK} and \\eqref{as2} with Eqs.\n\\eqref{i1loc}-\\eqref{nl} we can easily evaluate the currents $I_{1,2}$\n\\eqref{i1}, \\eqref{i2}.\n\nBefore we complete this calculation let us briefly discuss the physical meaning of the above results.\nIt is straightforward to observe that the asymptotic value $\\tilde \\gamma^R_1\n\\gamma^A_1=|\\tilde \\gamma_1^R (\\varepsilon )|^2$ in the beginning of the\ncorresponding trajectory defines the Andreev reflection probability, i.e.\nthe probability for an incoming electron with energy $\\varepsilon$ to be\nreflected back as a hole. This observation is well illustrated, e.g., by Eq.\n\\eqref{BTK} which is nothing but the standard BTK result \\cite{BTK}. Making\nuse of general symmetry relations for the Green functions one can also\ndemonstrate that the probability for an incoming hole to be reflected back as an\nelectron equals\nto $|\\tilde \\gamma_1^R(-\\varepsilon)|^2$. 
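As a quick numerical illustration of Eq.~\eqref{as2} (with arbitrarily chosen parameter values), one can directly verify that the two reflection probabilities differ once $\chi\neq 0,\pi$ and $0<\mathcal{D}_{1,2}<1$:\n\begin{verbatim}\nimport numpy as np\n\ndef andreev_prob(eps_phase, chi, D1, D2):\n    # |gamma^R_1|^2 of Eq. (as2); eps_phase stands for 2*eps*d\/v_F.\n    R1, R2 = 1 - D1, 1 - D2\n    denom = abs((1 + R1) * (1 + R2)\n                + D1 * D2 * np.exp(1j * (chi + eps_phase))) ** 2\n    return 1 - 16 * R1 * R2 \/ denom\n\nchi, D1, D2, ph = np.pi \/ 2, 0.5, 0.5, 0.3\nprint(andreev_prob(+ph, chi, D1, D2))   # electron -> hole: 0.165...\nprint(andreev_prob(-ph, chi, D1, D2))   # hole -> electron: 0.267...\n\end{verbatim}\n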
Thus, from Eq. \\eqref{as2} we conclude that scattering on two NS interfaces generates {\\it electron-hole symmetry violation}\n\\begin{equation}\n|\\tilde \\gamma_1^R(\\varepsilon)|^2\\neq |\\tilde \\gamma_1^R(-\\varepsilon)|^2\n\\end{equation}\nfor quasiparticles propagating from $\\mathrm{N_1}$- to $\\mathrm{N_2}$-terminals\nalong the trajectories displayed in Fig. \\ref{nsnsn3-fig} provided the\nsuperconducting phase difference $\\chi$ takes an arbitrary value not equal to\nzero or $\\pi$ and provided normal transmissions of both NS interfaces obey the\ncondition $0<\\mathcal{D}_{1,2}<1$. Below we will demonstrate that this\nelectron-hole asymmetry yields a large thermoelectric effect in the system\nunder consideration.\n\n\n\\section{Thermoelectric voltage}\n\\label{thermo}\nLet us now evaluate the currents $I_1$ \\eqref{i1} and $I_2$ \\eqref{i2}. In order to recover the local BTK terms $I^{loc}_{1,2}$ we explicitly specify the contributions from electron trajectories scattered at the first\nand the second NS interfaces by splitting $\\Theta_{11}^S \\to \\Theta_{11}^{S_1}+\\Theta_{11}^{S_2}$ (and similarly for $\\Theta_{22}^S$). Then combining Eqs. \\eqref{i1loc}, \\eqref{distr} with \\eqref{BTK}, at low voltages and temperatures $eV_{1,2}, T_{1,2} \\ll |\\Delta|$ we get\n\\begin{equation}\nI^{loc}_{1,2} = V_{1,2}\/R_{1,2},\n\\end{equation}\nwhere $R_{1,2}$ define the standard BTK low temperature resistance of the SN interfaces, i.e.\n\\begin{multline}\n\\dfrac{1}{R_1}=\n4e^2N_0\n\\int\n\\Biggr<\n\\bm{v}_{F}\n\\Biggl[\n\\dfrac{\\mathcal{D}_1^2}{(1+\\mathcal{R}_1)^2}\n\\Theta_{11}^{S_1}(\\bm{p}_F,\\bm{r})\n+\\\\+\n\\dfrac{\\mathcal{D}_2^2}{(1+\\mathcal{R}_2)^2}\n\\Theta_{11}^{S_2}(\\bm{p}_F,\\bm{r})\n\\Biggr]\n\\Biggl>\nd \\bm{\\Sigma}_1.\n\\label{BTK2}\n\\end{multline}\nNote that for simplicity in Eq. \\eqref{BTK2} we disregard multiple scattering effects \\cite{FN0} and account for quasiparticle trajectories which hit only one of the two NS interfaces ignoring, e.g., the contribution of trajectories of the type $\\Theta_{11}^{S_1S_2}(\\bm{p}_F,\\bm{r})$ which hit both NS interfaces. If necessary, the latter contribution can easily be recovered, however, it may only yield renormalization of subgap resistances $R_{1,2}$ (cf., e.g., the last term in Eq. \\eqref{rnl} below) and does not play any significant role in our further analysis.\n\nIn contrast, the trajectories of the type $\\Theta_{12}^{S_1S_2}(\\bm{p}_F,\\bm{r})$ displayed in Fig. \\ref{nsnsn3-fig} give an important contribution to the non-local current $I^{nl}$ and they should\nnecessarily be accounted for along with trajectories $\\Theta_{12}^{S_1}$ and $\\Theta_{12}^{S_2}$.\nCollecting all these contributions \\cite{FN}, with the aid of Eqs. 
\\eqref{distr}, \\eqref{nl} and \\eqref{as2}\nwe obtain\n\\begin{equation}\nI^{nl}= (V_1+V_2)\/R^{nl}_+\n+\n\\tilde I^{nl}(T_{1,2},V_{1,2},\\chi),\n\\end{equation}\nwhere we defined\n\\begin{multline}\n\\dfrac{1}{R^{nl}_{\\pm}}\n=\n2e^2N_0\n\\int\n\\Biggl<\n\\bm{v}_{F}\n\\Biggl[\n\\dfrac{\\mathcal{D}_2^2\\Theta_{12}^{S_2}(\\bm{p}_F,\\bm{r})}{(1+\\mathcal{R}_2)^2}\n\\pm\\dfrac{\\mathcal{D}_1^2\\Theta_{12}^{S_1}(\\bm{p}_F,\\bm{r})}{(1+\\mathcal{R}_1)^2}\n\\\\+\n\\dfrac{\\mathcal{R}_1 \\mathcal{D}_2^2 \\pm \\mathcal{R}_2 \\mathcal{D}_1^2}{\n(1+ \\mathcal{R}_1 \\mathcal{R}_2) (\\mathcal{R}_1 + \\mathcal{R}_2)}\n\\Theta_{12}^{S_1S_2}(\\bm{p}_F,\\bm{r})\n\\Biggr]\n\\Biggr>\nd \\bm{\\Sigma}_1,\n\\label{rnl}\n\\end{multline}\nand $\\tilde I^{nl} (T_{1,2}, V_{1,2}, \\chi)$ represents the term sensitive to electron-hole asymmetry in our system.\nPerforming the corresponding energy integral (see Appendix), we get\n\\begin{multline}\n\\tilde I^{nl}= -eN_0\n\\int\n\\Biggl<\n\\dfrac{32\\pi\\bm{v}_{F}\\Theta_{12}^{S_1S_2}(\\bm{p}_F,\\bm{r})\\mathcal{R}_1\n\\mathcal{R}_2\\beta}{\n(1+ \\mathcal{R}_1 \\mathcal{R}_2) (\\mathcal{R}_1 + \\mathcal{R}_2)\n}\n\\\\\\times\n\\Biggl[\nT_2 W(\\beta, t_2, \\chi - v_2)-T_1 W(\\beta, t_1, \\chi + v_1)\n\\Biggr]\n\\Biggr>\nd \\bm{\\Sigma}_1,\n\\label{tnl}\n\\end{multline}\nwhere\n\\begin{equation}\nW(\\beta,t,\\chi) = \\Img\n\\sum_{n \\geqslant 0}\n\\dfrac{e^{i\\chi} e^{-t(2n+1)}}{1 + \\beta e^{i\\chi} e^{-t(2n+1)}}\n\\label{W}\n\\end{equation}\nand we defined\n\\begin{equation}\n\\beta = \\dfrac{\\mathcal{D}_1 \\mathcal{D}_2}{(1+\\mathcal{R}_1)(1+\\mathcal{R}_2)},\n\\end{equation}\n$t_{1,2} =2\\pi T_{1,2} d\/v_F$ and $v_{1,2}=2eV_{1,2} d\/v_F$. Note that here and below\nthe parameter $d$ depends on the particular electron trajectory and, hence, the function\n$W$ cannot be taken out of the angular brackets indicating averaging over the directions of $\\bm{v}_{F}$.\n\nThe expression \\eqref{W} for $W$ simplifies in the limits of high and low temperatures,\ni.e.\n\\begin{equation}\nW(\\beta, t, \\chi)\n=\n\\begin{cases}\ne^{-t}\\sin \\chi , & t \\gg 1,\n\\\\\n\\dfrac{1}{2t\\beta}\n\\arctan\n\\left(\n\\dfrac{\\beta \\sin \\chi}{1 + \\beta \\cos \\chi}\\right), & t \\ll 1.\n\\end{cases}\n\\end{equation}\n\n\nThe current $\\tilde I^{nl}$ is responsible for the large thermoelectric effect in the system under consideration. In order to illustrate this fact let us disconnect both normal terminals from the external leads, in which case one obviously has\n\\begin{equation}\nI_1=I_2\\equiv 0.\n\\label{discon}\n\\end{equation}\nThen in the absence of a temperature gradient (i.e. for $T_1=T_2$) both voltages vanish identically, $V_1=V_2=0$. If, however, the temperatures $T_1$ and $T_2$ take different values, non-zero {\\it thermoelectric} voltages\n$V_{1,2}=V_{T1,2}$ are induced in our system. Introducing the renormalized subgap and\nSharvin resistances\n\\begin{gather}\n\\dfrac{1}{\\tilde R_{1,2}}\n=\n\\dfrac{1}{R_{1,2}}\n-\n\\dfrac{2}{R^{nl}_+},\n\\quad\n\\dfrac{1}{\\tilde R_{0}}\n=\n\\dfrac{1}{R_{0}}\n-\n\\dfrac{1}{R^{nl}_+},\n\\end{gather}\nrewriting Eqs. \\eqref{i1}, \\eqref{i2} in the form\n\\begin{gather}\nI_1 = \\dfrac{V_{T1} - V_{T2}}{\\tilde R_{0}}+\\dfrac{V_{T1}}{\\tilde R_1}\n+ \\tilde I^{nl},\n\\label{i11}\n\\\\\nI_2 = \\dfrac{V_{T2} - V_{T1}}{\\tilde R_{0}}+\\dfrac{V_{T2}}{\\tilde R_2} +\n\\tilde I^{nl}\n\\label{i22}\n\\end{gather}\nand resolving these equations with respect to $V_{T1}$ and $V_{T2}$ together\nwith Eq.
\\eqref{discon}, we arrive at the result\n\\begin{equation}\nV_{T1,2} =\n\\dfrac{\n\\tilde R_{2,1}(\\tilde R_{1,2}+\\tilde R_{0}\/2)}{\n\\tilde R_1+ \\tilde R_2+\\tilde R_{0}}\n\\tilde I^{nl}(T_{1,2},V_{T1,2},\\chi),\n\\label{vt12}\n\\end{equation}\nwhich determines the magnitude of the thermoelectric voltages $V_{T1,2}$ induced by a nonzero temperature difference $T_2-T_1$.\n\nEq. \\eqref{vt12} -- together with the expression for $\\tilde I^{nl}$ \\eqref{tnl} -- constitutes the central result of this work.\nIt demonstrates that the thermoelectric effect in Andreev interferometers is in general not suppressed by the small parameter $T\/\\varepsilon_F$ and\nremains well within the measurable range. The key physical reason for this behavior is the electron-hole\nasymmetry generated for quasiparticles moving between the two normal terminals and scattered at the two interfaces NS$_1$ and NS$_2$.\nAccording to Eq. \\eqref{as2} this asymmetry is generated for any value of the phase difference $\\chi$ between superconductors S$_1$ and S$_2$\nexcept for $\\chi =0,\\pi$, and for all values of the interface transmissions $\\mathcal{D}_{1,2}$ except for $\\mathcal{D}_{1,2}=0,1$.\n\nThese observations emphasize the crucial role played by coherent Andreev reflections at both NS interfaces.\nIndeed, the effect trivially vanishes, $\\tilde I^{nl}\\equiv 0$, in the absence of Andreev reflection at either of the interfaces,\ni.e. for $\\mathcal{D}_{1(2)}=0$. Remarkably, electron-hole asymmetry is also {\\it not} generated at full transmission, $\\mathcal{R}_{1(2)}=0$\n(cf. Eq. \\eqref{as2}), and, hence, the current $\\tilde I^{nl}$ \\eqref{tnl} vanishes as well provided complete Andreev reflection\n($|\\tilde \\gamma_1^R|^2=1$) is realized at either of the NS interfaces. Moreover, bearing in mind that Andreev reflection does not violate\nquasiparticle momentum conservation, one can immediately conclude that, e.g., for $\\mathcal{R}_{1}=0$ electron scattering at the first\nNS interface can only contribute to the BTK resistance of this interface but not to the non-local current $\\tilde I^{nl}$.\nThis is because an incident electron and a reflected hole propagate along the\nsame (time-reversed) trajectories starting and ending\nin one and the same normal terminal. This observation is specific to ballistic systems and has the same physical origin as,\ne.g., the vanishing of the crossed Andreev reflection contribution to the average non-local current in NSN structures with ballistic electrodes and\nfully transparent NS interfaces \\cite{KZ07}.\n\nWe would like to emphasize that the validity of our analysis is not restricted to the limit of small temperature\ngradients and, hence, our results apply for any value of $T_2-T_1$ provided both temperatures remain well in the subgap range.\nIn the limit $T_2-T_1 \\gg v_F\/d$ the magnitudes of both thermal voltages $V_{T1,2}$ \\eqref{vt12} and\nthe non-local current $\\tilde I^{nl}$ \\eqref{tnl} depend only on the lower of the two temperatures ($T_1$) and become practically\nindependent of the higher one ($T_2$).
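\n\nThe two limiting forms of $W$ quoted above can be verified directly from the sum \\eqref{W}. Below is a minimal numerical sketch (the values of $\\beta$, $t$ and $\\chi$ are assumed purely for illustration):\n\\begin{verbatim}\n# Sketch: high- and low-temperature limits of W(beta, t, chi), Eq. (W).\nimport numpy as np\n\ndef W(beta, t, chi, nmax=200000):\n    # W = Im sum_{n>=0} z_n/(1 + beta*z_n), z_n = e^{i chi} e^{-t(2n+1)}\n    n = np.arange(nmax)\n    z = np.exp(1j*chi) * np.exp(-t*(2*n + 1))\n    return np.imag(np.sum(z / (1.0 + beta*z)))\n\nbeta, chi = 0.3, 1.0                   # assumed illustrative values\n\nprint(W(beta, 5.0, chi),               # t >> 1 ...\n      np.exp(-5.0)*np.sin(chi))        # ... approaches e^{-t} sin(chi)\n\nprint(W(beta, 0.01, chi),              # t << 1 ...\n      np.arctan(beta*np.sin(chi)\n      /(1 + beta*np.cos(chi)))/(2*0.01*beta))\n\\end{verbatim}\nBoth pairs of numbers agree at the per-cent level or better, confirming the asymptotics quoted above.\n\n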
The maximum values of $\\tilde I^{nl}$ which could possibly be reached at a given temperature\n$T_1$ can then be roughly estimated as\n\\begin{equation}\n\\tilde I^{nl}\n\\sim\n\\begin{cases}\ne\\mathcal{N}_{\\mathrm{ch}}T_1 e^{-2\\pi T_1d\/v_F} , & T_1 \\gg v_F\/(2\\pi d),\n\\\\\ne\\mathcal{N}_{\\mathrm{ch}}v_F\/d, & T_1 \\ll v_F\/(2\\pi d),\n\\end{cases}\n\\label{estim}\n\\end{equation}\nwhere $\\mathcal{N}_{\\mathrm{ch}} \\sim p_F^2S$ is the number of conducting channels in a metallic wire with an effective cross section $S$. The parameter\n$d$ here should be understood as an effective distance between the two NS interfaces. The estimate \\eqref{estim} demonstrates that in the optimum case\nthe current $\\tilde I^{nl}$ can be of the same order as the critical Josephson current of ballistic SNS junctions \\cite{Kulik}\nwith similar parameters $\\mathcal{N}_{\\mathrm{ch}}$ and $v_F\/d$. In the low\ntemperature limit $T_{1,2} \\ll v_F\/(2\\pi d)$ and for a small temperature difference\n$|T_1-T_2| \\ll T_{1,2}$, Eqs. \\eqref{tnl}, \\eqref{W} yield $\\tilde\nI^{nl} \\propto (T_1+T_2)(T_1-T_2)$, in qualitative agreement with the results of Refs. \\cite{Seviour00,KPV,VP,JW}.\n\nLet us also note that the thermoelectric signal \\eqref{tnl}, \\eqref{vt12}\nderived within our model is described by an \\textit{odd} periodic function of\nthe phase difference $\\chi$. This result is in line with a general symmetry\nanalysis \\cite{VH2} for both diffusive and ballistic structures. At the same\ntime, it is worth pointing out that odd as well as even $2\\pi$-periodic phase\ndependencies of the thermopower were observed in experiments\n\\cite{Venkat1,Venkat2,Petrashov03,Venkat3}. Possible explanations of such\nobservations were proposed by a number of authors. E.g., Titov \\cite{Titov}\nargued that an even-in-$\\chi$ thermopower response can occur for certain\ngeometries of Andreev interferometers as a result of the charge imbalance\nbetween the chemical potential of Cooper pairs in the superconductor and that of\nquasiparticles in the normal metal. Jacquod and Whitney \\cite{JW} analyzed\nthermoelectric effects in Andreev interferometers within the scattering\nformalism and attributed an even-in-$\\chi$ behavior of the observed thermopower\nto mesoscopic fluctuation effects.\n\n\n\\section{Josephson current and thermoflux}\n\\label{josephson}\nIn order to further investigate a possible relation between $\\tilde I^{nl}$ and the Josephson current between the two S-terminals,\nlet us evaluate the current $I_{12}$ flowing in the middle part of the normal wire, see Fig. \\ref{nsnsn3-fig}. Under the condition\n\\eqref{discon} $I_{12}$ determines the total current between the two superconductors S$_1$ and S$_2$. The whole calculation is carried out in the same manner as\nthat for the currents $I_1$ and $I_2$.
Evaluating the contributions from all quasiparticle trajectories going through the central part\nof the normal wire, we obtain\n\\begin{equation}\nI_{12}=I_T+I_S^{\\rm tot},\n\\end{equation}\nwhere the term $I_T$ represents the thermoelectric current which has the form\n\\begin{equation}\nI_T=\\frac{V_{T1}}{R_{1}^+}\n-\\frac{V_{T2}}{R_{2}^-},\\quad \\frac{1}{R_{1,2}^{\\pm}}=\\frac{1}{R_{1,2}^*}+\\dfrac{1}{R_{0}}\\pm\\dfrac{1}{R^{nl}_{-}},\n\\label{IT}\n\\end{equation}\n$R_{1(2)}^*=R_{1(2)}|_{\\mathcal{D}_{1(2)}=0}$, while the current $I_S^{\\rm tot}$ defines the supercurrent between the two superconductors.\nIt can be split into two parts:\n\\begin{equation}\nI_S^{\\rm tot}=I_{S_1S_2}(\\chi )+ I_S(\\chi, T_{1,2},V_{T1,2}),\n\\end{equation}\nwhere $I_{S_1S_2}(\\chi )$ is the equilibrium Josephson current evaluated for closed quasiparticle trajectories confined between\nthe two S-terminals \\cite{GZ02} (such trajectories, if they exist, ``know nothing''\nabout the normal terminals $\\mathrm{N_1}$ and $\\mathrm{N_2}$ and\nfor this reason were not considered above) and\n\\begin{multline}\nI_S(\\chi, T_{1,2},V_{T1,2})=-e N_0\n\\int\n\\Biggl<16 \\pi\\bm{v}_{F}\\Theta_{12}^{S_1S_2}(\\bm{p}_F,\\bm{r})\n\\\\\\times\n\\dfrac{\n\\mathcal{R}_1\\mathcal{R}_2\\beta}{(\\mathcal{R}_1 + \\mathcal{R}_2)(1 + \\mathcal{R}_1\\mathcal{R}_2)}\n\\Biggl[\n\\frac{1+\\mathcal{R}_2^2}{\\mathcal{R}_2} T_1 W(\\beta, t_1, \\chi +v_1)\n\\\\\n+\n\\frac{1+\\mathcal{R}_1^2}{\\mathcal{R}_1} T_2 W(\\beta, t_2, \\chi -v_2)\n\\Biggr]\n\\Biggr>\nd \\bm{\\Sigma}_1\n\\label{J}\n\\end{multline}\nrepresents the contribution of open trajectories of the type $\\Theta_{12}^{S_1S_2}$ connecting\nthe normal terminals $\\mathrm{N_1}$ and $\\mathrm{N_2}$. In Eq. \\eqref{J} we\nagain defined $v_{1,2}=2eV_{T1,2} d\/v_F$.\n\nComparing the above expressions for the thermoelectric current $I_T \\propto \\tilde I^{nl}$ and for the supercurrent $I_S^{\\rm tot}$\nwe conclude that in general there exists no simple relation between these two currents. Only in the tunneling limit $\\mathcal{R}_{1,2} \\to 1$\ndoes one observe that Eqs. \\eqref{tnl} and \\eqref{J} are defined by almost the same integrals, except for different signs\nin front of the two $W$-functions in the square brackets. If, furthermore, we assume that the temperature difference exceeds the scale\n$v_F\/d$, in the tunneling limit we obtain $\\tilde I^{nl}\\simeq I_S$ for $T_1-T_2 \\gg v_F\/d$ and $\\tilde I^{nl}\\simeq -I_S$ for $T_2-T_1 \\gg v_F\/d$.\n\nNote that in the presence of a temperature gradient the expression \\eqref{J} represents a {\\it non-equilibrium} contribution to the\nJosephson current which explicitly depends on both $T_{1,2}$ and $V_{T1,2}$. In the absence of this gradient, i.e. at $T_1=T_2=T$ and $V_{T1,2}=0$,\nthe term \\eqref{J} reduces to its equilibrium form and we can define\n\\begin{equation}\nI_J(\\chi ,T)=-I_S(\\chi, T_{1,2}=T,V_{T1,2}=0).\n\\label{IJ}\n\\end{equation}\nThen in the tunneling limit $\\mathcal{R}_{1,2} \\to 1$ and provided both thermal voltages remain small, $2eV_{T1,2} \\ll v_F\/d$, we may write\n\\begin{equation}\nV_{T1,2}\\propto \\tilde I^{nl}\\simeq \\frac12[I_J(\\chi ,T_1)-I_J(\\chi ,T_2)].\n\\label{VJ}\n\\end{equation}\nTo a certain extent this relation resembles the result \\cite{VH} derived in the diffusive limit.\nNote, however, that within our model Eq.
\\eqref{VJ} is valid only under quite stringent conditions and, furthermore, the term $I_J(\\chi ,T)$ \\eqref{IJ} can be interpreted as the total equilibrium Josephson current only if we neglect the contribution $I_{S_1S_2}(\\chi )$. Most importantly, it is clear from our analysis that the relation \\eqref{VJ} by no means implies that large thermoelectric voltages $V_{T1,2}$ could occur if electron-hole symmetry in our system were maintained.\n\nThe effects discussed here can be conveniently measured, e.g., in a setup similar to that employed in recent experiments \\cite{Petrashov16}. One can connect the two superconductors so as to form a superconducting loop.\nThreading an external flux $\\Phi_x$ through this loop, one can control the phase difference $\\chi =2\\pi \\Phi_x\/\\Phi_0$, where $\\Phi_0$ is the superconducting flux quantum. By measuring the total flux inside\nthe loop $\\Phi=\\Phi_x+\\mathcal{L}I_{12}$ with and without a temperature gradient one could easily determine\nthe value of the magnetic thermoflux\n\\begin{equation}\n\\Phi_T = \\mathcal{L}[I_{12}(\\chi, T_{1,2},V_{T1,2})-I_{12}(\\chi, T)],\n\\end{equation}\nwhere $\\mathcal{L}$ is the effective inductance of the superconducting loop and $ I_{12}(\\chi, T)=I_{12}(\\chi, T_{1,2}=T,V_{T1,2}=0)$ is the equilibrium Josephson current at temperature $T$.\n\nIn conclusion, we demonstrated that a large thermoelectric effect in multiterminal ballistic normal-superconducting hybrid structures is caused by electron-hole asymmetry generated for quasiparticles propagating between two normal terminals (kept at different temperatures) and suffering coherent Andreev reflection at two NS interfaces. At sufficiently high temperature gradients the thermoelectric voltages $V_{T1,2}$ depend only on the lower of the two temperatures. The $2\\pi$-periodic dependence of $V_{T1,2}$ on the superconducting phase difference $\\chi$ is determined self-consistently and is strongly non-sinusoidal at low enough temperatures. Although the temperature dependence of $V_{T1,2}$ roughly resembles that of the equilibrium Josephson current in SNS junctions, there exists no fundamental relation between these two quantities. Further information can be obtained by analyzing the behavior of the magnetic thermoflux induced in Andreev interferometers by applying\na thermal gradient.\n\nThis work was supported in part by RFBR grant No. 15-02-08273.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe observation of baryon number violation (BNV) would address a number of open questions in modern physics. BNV is required to understand the matter-antimatter asymmetry of the universe~\\cite{Sakharov:1967dj}. Many models which explain non-zero neutrino masses also prescribe BNV~\\cite{Mohapatra:1980qe}. Even within the Standard Model (SM) baryon number is subject only to an approximate conservation law. At the perturbative level baryon number conservation arises due to the specific matter content in the SM, and corresponds to a so-called ``accidental'' symmetry. The SM predicts BNV to occur via rare non-perturbative electroweak instanton processes~\\cite{Adler:1969gk,'tHooft:1976fv} (the quantum number $B-L$ is respected by the SM, but not $B$ and $L$ separately). Furthermore, precision tests of the Equivalence Principle~\\cite{Adelberger:1990xq} offer no evidence for a long-range force coupled to baryon number, and thus no local gauge symmetry forbidding BNV.
Consequently, BNV occurs as a generic feature of many proposed extensions to the SM~\\cite{Barbier:2004ez}. A promising means of searching for BNV is via the observation of the $\\Delta B=2$ process, neutron-antineutron oscillation~\\cite{Mohapatra:2009wp,Phillips:2014fgb}. In this paper, a proposed new experiment~\\cite{EOInnbar} to look for such oscillations at the European Spallation Source (ESS) is outlined. The experiment would be sensitive to oscillation probabilities up to three orders of magnitude lower than previously achieved using free neutrons.\n\n\nThere exists a symbiosis between neutron-antineutron oscillations and neutrino physics via the quantum number $B-L$. A popular model explaining non-zero neutrino mass is the see-saw mechanism~\\cite{see-saw}. In this approach neutrinos possess a Majorana component and lepton number is violated by two units. Evidence for $\\Delta L=2$ processes is sought with, e.g., neutrinoless double beta decay searches~\\cite{Elliott:2012sp}. Since $B-L$ (the true anomaly-free SM symmetry) is also violated by two units, it would be natural to expect $\\Delta B=2$ processes. In addition to the complementarity with neutrino physics, neutron-antineutron oscillation features in a number of other models of new physics. Examples include $R$-parity violating supersymmetry~\\cite{Barbier:2004ez} and post-sphaleron baryogenesis~\\cite{Babu:2006xc}. Values of the BNV mass scale for which observable oscillations take place exceed those attainable at colliders. Using a six-fermion BNV operator and dimensional reasoning, mass scales of $10-1000$~TeV are obtained, while other approaches (also leading to an observable signature) predict scales near the grand unified mass~\\cite{Mohapatra:2009wp}. A further motivation for searching for oscillations was recently provided by the observation that such processes violate not only baryon number but also $CP$~\\cite{Berezhiani:2015uya}, thereby addressing two of the Sakharov conditions~\\cite{Sakharov:1967dj} for baryogenesis.\n\nSetting aside the substantial theoretical motivation, a strictly experimentalist consideration of BNV hunting highlights the importance of neutron-antineutron oscillation searches. In an oscillation experiment only the violation of baryon number is sought, and not that of other hitherto conserved quantities. Single nucleon decay searches (e.g., $p\\rightarrow \\pi^0 e^+$) require lepton number violation; among other reasons, this ensures angular momentum conservation. Only searches for free neutron oscillation~\\cite{Fidecaro:1985cm, BaldoCeolin:1994jz} and anomalous nuclear decays, under the neutron oscillation~\\cite{nnbartrapped} or dinucleon decay~\\cite{dinucleon} hypotheses, offer high-precision sensitivity to BNV-only processes. The most competitive limits on the free neutron oscillation time have hitherto been produced at ILL~\\cite{BaldoCeolin:1994jz} ($\\sim 1 \\times 10^8$~s) and Super-Kamiokande~\\cite{Abe:2011ky} ($\\sim 3 \\times 10^8$~s, after a correction for nuclear effects).\n\nOf the class of experiments which search for BNV-only processes, free neutron oscillation searches possess the cleanest experimental and theoretical environments in which to perform a search and to quantify its results.
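\n\nTo put these limits in context: in the quasi-free regime the oscillation probability per neutron transit is $P_{n\\bar{n}} \\simeq (t\/\\tau_{n\\bar{n}})^2$, where $t$ is the free flight time and $\\tau_{n\\bar{n}}$ the oscillation time. A minimal numerical sketch (the flight time used below is an assumed, representative value rather than a measured one):\n\\begin{verbatim}\n# Sketch: oscillation probability per transit in the quasi-free regime.\ntau = 0.86e8          # s, free-neutron oscillation-time limit (ILL scale)\nt   = 0.1             # s, assumed flight time for an ILL-like beamline\nprint((t/tau)**2)     # ~1.4e-18 per neutron transit\n\\end{verbatim}\nProbing probabilities three orders of magnitude smaller therefore corresponds to oscillation times roughly thirty times longer.\n\n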
Owing to improvements in neutronics, particle identification technology and a longer running time, the proposed experiment at the ESS~\\cite{EOInnbar} will have a sensitivity in oscillation probability which is up to three orders of magnitude greater than at ILL.\n\nThis paper is organised as follows. Descriptions of the ESS and the neutron moderator are given. The plan for the transmission of neutrons to the detector is outlined, followed by a description of the detector. Finally, the collaboration which aims to conduct the experiment, as well as a provisional timescale for the work, are briefly described.\n\n\\section{Overview of ESS and the proposed experiment}\\label{sec:ess}\nCurrently under construction, the ESS is a multi-disciplinary research laboratory which will house the world's most powerful neutron source~\\cite{ESS-TDR}. The ESS will deliver a 2.86~ms long proton pulse of 2~GeV energy at a repetition rate of 14~Hz onto a rotating tungsten target. Spallation neutrons emerging from a system of moderators and reflectors are delivered to the beam ports and are then guided with a neutron supermirror to the instrument. For the experiment described here, neutrons would be transported in vacuum through a magnetically shielded beam pipe over $200$--$300$~m to a target with which antineutrons could annihilate. Magnetic shielding is necessary to suppress the energy split between neutron and antineutron states ($\\Delta E=\\bar{\\mu} \\cdot \\bar{B}$) which would occur in a $B$-field due to the particles' dipole moments $\\pm \\bar{\\mu}$, and which would inhibit the oscillation process. A detector surrounding the target would record the final states emerging from an annihilation as well as monitor background processes.\n\nThe quantity of merit for a neutron-antineutron search is $N_n \\cdot t^2$, where $N_n$ is the free neutron flux reaching the target and $t$ is the free flight time of the neutron.\n\nFor a high sensitivity (high $N_n \\cdot t^2$) search the following criteria must be met:\n\\begin{itemize}\n\\item The moderator must deliver a beam of slow, cold neutrons (energy $< 5$~meV) at high intensity, maximising $t$ and $N_n$, respectively (see the numerical sketch below). A lower overall neutron emission spectrum also increases the transport efficiency of the supermirror neutron reflector.\n\\item The beam port must correspond to a large opening angle for neutron emission.\n\\item A long beamline to increase $t$.\n\\item Long running time.\n\\end{itemize}\n\n\nA number of factors drive the improvement in sensitivity of the proposed experiment compared to the work at ILL. An important contribution to increased sensitivity is due to the use of a large elliptical focusing supermirror reflector which directs off-axis neutrons to the detector with a single reflection\\footnote{Each reflection effectively ``resets the clock'' prior to a putative neutron-antineutron oscillation.}. Furthermore, a larger detector with enhanced particle identification is possible, as is a longer running time.\n\n\\section{Moderator system}\nA conceptual overview of the ESS and moderator system is given in Figure~\\ref{fig:wheelandmoderator}. A ``butterfly'' moderator design has been chosen which comprises cold regions of para-$H_2$ at around $20$~K, and water at ambient temperature. The upper (lower) moderator has a height of 3~cm (6~cm). This represents the optimal choice for the brightness of the ESS.
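\n\nAs a rough numerical illustration of the search criteria listed above (all input values below are assumed, representative numbers rather than design figures), the velocity of a neutron at the quoted energy cut and the resulting free flight time over the planned beamline can be estimated as follows:\n\\begin{verbatim}\n# Sketch: cold-neutron velocity and flight time; sensitivity grows as t^2.\nimport math\nm_n = 1.675e-27           # kg, neutron mass\nE   = 5e-3 * 1.602e-19    # J, the 5 meV energy cut from the criteria list\nv   = math.sqrt(2*E/m_n)  # ~980 m/s\nL   = 200.0               # m, assumed length within the 200-300 m range\nt   = L / v               # ~0.2 s of free flight\nprint(v, t, t**2)\n\\end{verbatim}\nSince the sensitivity scales as $N_n \\cdot t^2$, a factor of two in flight time alone is worth a factor of four in sensitivity, before any gains in $N_n$ from the moderator and reflector are counted.\n\n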
The butterfly design also provides flexibility: the placing of the instruments can be chosen according to which moderator is most appropriate for each experiment. For the work proposed here, a beam port is available such that neutrons from both moderators can be exploited.\n\n\n\\begin{figure}[tb]\n \\setlength{\\unitlength}{1mm}\n \\begin{center}\n\\includegraphics[width=0.80\\linewidth, angle=90]{wheelandmoderator.pdf}\n \\end{center}\n \\vspace{-2.25cm}\n \\caption{Overview of the ESS and the moderator system.\n }\n \\label{fig:wheelandmoderator}\n\\end{figure}\n\nFigure~\\ref{fig:neutronics} shows a cross-sectional view of the moderator system as seen from the supermirror. The cold moderators are shown in red. Here, the $x$ and $y$ axes map out the horizontal plane. The panels to the top and left show the relative guiding efficiency of a truncated focusing ellipsoid mirror centred on the middle point, as a function of the point of emission. The transport efficiency for neutrons produced in the cold region is $\\sim 10$\\% of that expected for neutrons produced at the ellipsoid's focal point. This drop in efficiency illustrates the need for a sophisticated mirror configuration which fully takes into account the design of the moderator. At present a ``clover'' assembly of four quarter ellipsoids, each one centred on a moderator, is being considered, as is a simpler design comprising an ellipsoid focused on a cold moderator.\n\n\n\\begin{figure}[tb]\n \\setlength{\\unitlength}{1mm}\n \\begin{center}\n\\includegraphics[width=1.00\\linewidth, angle=-90]{neutronics.pdf}\n \\end{center}\n \\vspace{-2.75cm}\n \\caption{ View of the butterfly moderators from the supermirror, showing the cold moderators in red. The top and left panels show the relative guiding efficiency of a mirror centred on the middle point as a function of the point of emission.\n }\n \\label{fig:neutronics}\n\\end{figure}\n\n\nAn increase to the nominal angular acceptance for produced neutrons is another area in which sensitivity can be enhanced. As seen in Figure~\\ref{fig:moderatorandreflector}, losses will occur due to the presence of the Fe shield and Be reflector system. Figure~\\ref{fig:moderatorandreflector} shows a possible adjustment to the design of the beam port for the proposed work. Here, parts of the shield and reflector system would be removed to allow a greater conical penetration, corresponding to an increase in sensitivity of $\\sim 2$. Inserts could be made such that the full system is restored for experiments after the proposed work. The study of such a scheme, including its impact on the other experiments, is underway.\n\n\\begin{figure}[tb]\n \\setlength{\\unitlength}{1mm}\n \\begin{center}\n\\includegraphics[width=0.80\\linewidth, angle=0]{reflectormoderator.pdf}\n \\end{center}\n \\vspace{-.75cm}\n \\caption{ Nominal (white region to the left of the neutron source) and enlarged (region enclosed by dashed lines) conical penetration through the Be reflector and Fe shield.\n }\n \\label{fig:moderatorandreflector}\n\\end{figure}\n\n\\begin{figure}[tb]\n \\setlength{\\unitlength}{1mm}\n \\begin{center}\n\\includegraphics[width=0.80\\linewidth, angle=0]{shielding.pdf}\n \\end{center}\n \\vspace{-1.75cm}\n \\caption{ Schematic overview of the planned shielding.\n }\n \\label{fig:shield}\n\\end{figure}\n\n\n\\section{Magnetics}\nAs mentioned in Section~\\ref{sec:ess}, the neutrons must be transported in a magnetically shielded vacuum.
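\n\nThe required field level can be estimated from the quasi-free condition: the phase $\\mu_n B t\/\\hbar$ accumulated over the flight time $t$ due to the neutron-antineutron energy splitting must remain small, otherwise the oscillation is suppressed. A minimal sketch (the flight time is an assumed, representative value):\n\\begin{verbatim}\n# Sketch: quasi-free condition mu_n*B*t/hbar << 1 on the beamline.\nmu_n = 9.66e-27   # J/T, magnitude of the neutron magnetic moment\nhbar = 1.055e-34  # J s\nB    = 5e-9       # T, a residual field at the few-nT level\nt    = 0.2        # s, assumed flight time (cf. the sketch above)\nprint(mu_n*B*t/hbar)   # ~0.09 << 1: oscillations essentially unsuppressed\n\\end{verbatim}\nFields at the few-nT level over the full flight path therefore keep the dynamics in the quasi-free regime.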
For the proposed work this corresponds to a vacuum of $10^{-5}$~mbar and a magnetic field of less than 5~nT along the neutron flight path.\n\nThe target vacuum can be achieved with a vacuum chamber comprising highly non-magnetic materials, e.g. Al, with turbo molecular pumps mounted outside of the magnetically shielded area. Magnetic fields of less than 5~nT have been achieved over large volumes (see, for example, Ref.~\\cite{m1}). For the planned experiment, a shielding concept will be used based on an aluminium vacuum chamber, a two-layer passive shield made from magnetizable alloy for transverse shielding, and end sections made from passive and active components for longitudinal shielding, as shown in Figure~\\ref{fig:shield}.\n\n\\section{Detector}\nDetector design is guided by the need for high antineutron detection efficiency and the ability to maintain a low background yield. A typical annihilation signature would be a multi-pion final state. A schematic diagram of the detector is given in Figure~\\ref{fig:detector}. The following components are envisaged, and studies of the various technology options are underway:\n\n\\begin{itemize}\n\\item An annihilation target. One option is a $^{12}C$ disk of diameter $1$~m which is 100~$\\mu$m thick. An alternative is a target made of $^{10}Be$, which may have better potential to capture photons from background processes. A two-target system is also being considered; an antineutron would annihilate in the first target, so that the second target could be used for background monitoring.\n\\item A charged particle tracker, necessary for the determination of pion momenta and the vertex position. Different technologies are under consideration, e.g. a straw tube-based drift chamber or a Time Projection Chamber. However, any tracking system will need at least some layers with fast readout (e.g. straws) to allow tracking information to be included in the trigger.\n\\item A calorimeter, which must accurately measure photon and pion energies in order to reconstruct the final state invariant mass. Depending on the technology choice, precision timing information from the calorimeter or a calorimeter+time-of-flight configuration would be available. This is necessary for establishing the time of an annihilation event, and for rejecting false vertex reconstructions due to cosmic ray showers. The calorimeter will also need to handle high pile-up rates from gamma production at the target.\n\\item A trigger exploiting all read-out channels, enabling a highly selective system to collect signal and background candidate events.\n\\item A dedicated cosmic veto system to reject background.\n\\end{itemize}\n\n\\begin{figure}[tb]\n \\setlength{\\unitlength}{1mm}\n \\begin{center}\n\\includegraphics[width=0.80\\linewidth, angle=90]{detector.pdf}\n \\end{center}\n \\vspace{-2.15cm}\n \\caption{Schematic overview of the detector.\n }\n \\label{fig:detector}\n\\end{figure}\n\n\\section{Collaboration and time-scales}\nA growing collaboration has been formed with the aim of carrying out the proposed work. Working groups corresponding to specific aspects of the experiment have been established and a number of workshops have taken place.
An expression of interest (EOI) with signatories from 26 institutes and 8 countries has been submitted~\\cite{EOInnbar}.\n\nA provisional time-scale consists of the preparation of a Technical Design Report in 2017, construction in 2019, commissioning in 2022, and data-taking ``physics'' runs in 2023-2025.\n\n\n\\section{Summary}\nStrong theoretical motivations addressing open questions in modern physics, such as the matter-antimatter asymmetry and the possible Majorana nature of the neutrino, imply the existence of BNV processes. A promising means of searching for BNV is via the neutron-antineutron oscillation signature. A new high-precision search for neutron-antineutron oscillation processes is being planned at the ESS. The sensitivity to the oscillation probability would be around three orders of magnitude greater than that of the last such experiment. A collaboration has been formed with the aim of performing the experiment.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}