\\section{Introduction}\nPeer review (PR) is the prevalent quality assurance mechanism in science \\cite{birukou_alternatives_2011}. During peer review, multiple referees evaluate a paper, providing scores and a textual report. Next, the program or area chairs weigh the reviews of all papers to make the final acceptance decisions, selecting the top papers exceeding a quality threshold \\cite{jefferson_measuring_2002}. This requires a careful comparison of papers based on the reviewers' assessment of soundness, presentation, impact potential and rightfulness \\cite{jefferson_measuring_2002,aksnes_citations_2019}. The difficulty of this process is amplified by noise, bias and high disagreement in the peer reviews \\cite{lee_bias_2013,walker_emerging_2015} and by the ever-growing submission numbers in academia.\n\nTo cope with the complexity of this task, decision makers settle for heuristics to guide and prioritize the assessment process. Typically, statistics on the review scores, like the mean overall score per paper, are used to rank papers as an input. In this way, decision makers can focus on borderline regions close to the quality threshold. This heuristic is insufficient due to inconsistent usage of rating scales by the referees \\cite{wang_your_2019,lee_commensuration_2015} and the aforementioned issues of noise. Consequently, agreement on absolute scores is considerably below substantial (in computer science) \\cite{ragone_peer_2013}, making the mean of the typically three to four peer reviews per paper highly unreliable.\n\nPeer review can gain in accuracy and efficiency when a reliable input ranking is provided for the complex and time-consuming decision-making task. 
In the past, so-called consensus ranking has been applied to rank submissions \\cite{cook_creating_2007,baskin_preference_2009}; yet these approaches neglect textual review information and are prone to noise in review scores. Related methods from the field of Natural Language Processing (NLP) use textual information, like a paper's abstract, to predict binary acceptance decisions \\cite{kang_dataset_2018} or citation counts \\cite{li_neural_2019}. The output of these methods is coarse-grained or relies on information that is not available at the time of review, restricting their applicability for assisting acceptance decision making. \n\nIn this paper, we propose preference learning to rank submissions, represented by their review scores and texts, using referees' preferences as supervision on this feature space. In this perspective, training requires no external paper quality estimates, like citation counts, which are typically not available at the time of review. We investigate three hypotheses: First, preferences expressed by human referees over the set of papers under review serve as a valuable supervision signal. Second, a ranking model on submissions benefits from including peer review texts in addition to scores. Third, preference learning techniques can effectively mitigate the impact of noise, disagreement and bias in peer review data. To formalize our new preference learning perspective, we introduce a more general formulation of the assistance task, which we call the Paper Ranking Problem (PRP). To validate our hypotheses on real data, we define a novel generic evaluation framework motivated by the science-of-science literature on peer review, using past acceptance labels and citation counts as a reference. We select Gaussian process preference learning (GPPL) \\cite{simpson_finding_2018} to tackle the PRP. 
GPPL is a preference learning method that has been applied to NLP tasks, such as ranking arguments by convincingness \\cite{simpson_finding_2018}, and has proved robust against noise.\n\nWe apply the aforementioned framework to the highly structured peer review data from the 2018 conference of the Association for Computational Linguistics (ACL) \\cite{gao_does_2019}. We show that our preference-learning-based approach achieves the best balanced performance on both paper quality proxies compared to previous methods and baselines. In our ablation study, we find that review texts substantially increase ranking performance with respect to citation counts, while review scores alone have little predictive validity for the future impact of accepted submissions. Finally, we find that our approach is less susceptible to additional noise in the review scores caused by unreliable referees and to bias induced by heterogeneous weighting of quality criteria. \n\\section{Related Work}\n\\citeauthor{cook_creating_2007} \\shortcite{cook_creating_2007} propose the conversion of exact review scores to \\textit{partial rankings} on a per-referee basis to mitigate the issue of score miscalibration bias in PR \\cite{wang_your_2019}. For example, if referee $A$ reviewed the papers $x$, $y$ and $z$, assigning scores of $2$, $1$ and $3$ (higher is better), respectively, this is expressed by the ordering $\\prec_{A}$: $y \\prec_{A} x \\prec_{A} z$. To aggregate reviewed submissions into a ranking, the authors cast the task as an NP-hard consensus ranking problem: given the partial rankings on overlapping paper subsets, the goal is an output ranking on all papers that violates the fewest precedence pairs. The authors propose a branch-and-bound algorithm that can find optima on small artificial datasets in reasonable time. 
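To make the consensus ranking objective concrete, the following brute-force sketch (our own illustration, not the authors' branch-and-bound algorithm) searches for a ranking that violates the fewest precedence pairs; partial rankings are given worst-to-best, and the second referee in the usage example is invented for illustration:

```python
from itertools import permutations

def violations(ranking, partial_rankings):
    """Count precedence pairs violated by `ranking`.

    `partial_rankings` is a list of per-referee orderings, each given
    worst-to-best. A pair (x worse than y) is violated if the candidate
    ranking places x after (i.e. better than) y.
    """
    pos = {p: i for i, p in enumerate(ranking)}
    count = 0
    for order in partial_rankings:
        for i, x in enumerate(order):
            for y in order[i + 1:]:
                if x in pos and y in pos and pos[x] > pos[y]:
                    count += 1
    return count

def brute_force_consensus(papers, partial_rankings):
    """Exhaustive consensus ranking: minimize violated precedence pairs.

    Only feasible for tiny inputs; it makes the exponential cost of the
    exact NP-hard problem apparent.
    """
    return min(permutations(papers),
               key=lambda r: violations(r, partial_rankings))
```

With referee $A$'s partial ranking $[y, x, z]$ from the example above and a hypothetical second referee with $[y, z]$, the consensus $(y, x, z)$ violates no precedence pairs.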
\\citeauthor{baskin_preference_2009} \\shortcite{baskin_preference_2009} propose a more efficient neighborhood-based optimization algorithm for the same task.\nIn our experiments on a real-world dataset, we show that both consensus ranking approaches perform worse than our preference-learning-based method in the presence of bias and noise.\n\nScholarly document quality assessment (SDQA) covers various problem settings from NLP that judge paper quality aspects based on paper texts or associated review texts.\n\\citeauthor{kang_dataset_2018} \\shortcite{kang_dataset_2018} introduce the task of \\textit{paper acceptance prediction} (PAP): given papers from past conferences, their binary acceptance decisions are the target of prediction. This task has been approached in many flavors \\cite{ghosal_deepsentipeer_2019,stappen_uncertainty-aware_2020}.\n\\citeauthor{maillette_de_buy_wenniger_structure-tags_2020} \\shortcite{maillette_de_buy_wenniger_structure-tags_2020} combine PAP with the task of \\textit{citation count prediction} (CCP), where the target of prediction is the paper's future citation count. The authors augment paper texts with structural tags and apply this representation to both tasks in isolation. \\citeauthor{li_neural_2019} \\shortcite{li_neural_2019} approach CCP using cross-attention between the review texts and the paper abstract, extended by author metadata like the \\textit{h-index} as a feature.\nSeveral factors hinder the applicability of SDQA methods for assistance in PR: PAP approaches make coarse-grained recommendations and often neglect review texts. Methods from CCP (and PAP) train models on historic papers from broad domains for which citation counts (and acceptance decisions) are available. As scientific merit is domain- and time-dependent \\cite{lee_commensuration_2015}, there is a substantial distribution shift between training and inference time when applying these models to a particular, new PR process. 
Consequently, their recommendations are likely biased towards a historic state of the art.\n\\section{Paper Ranking Problem}\nIn this section, we introduce the \\textit{Paper Ranking Problem} (PRP), which models the task of aggregating reviews into a ranking of submissions to assist acceptance decision making. \n\n\\subsection{Problem Definition}\nThe PRP is the task of ranking the submissions to a PR system according to their estimated quality. Unlike the problem settings in PAP or CCP, the quality estimate should be produced in comparison to the other submissions, thereby accounting for their scientific context and allowing program chairs to determine the acceptance threshold dynamically.\n\nLet $P$ denote the set of papers submitted to the venue, where each paper $p \\in P$ is represented by its text. For the set of reviews $R$, each review $r \\in R$ is characterized by its text and a vector of scores. Each review is associated with exactly one referee $\\textit{ref}(r) \\in E$ from the set of referees $E$ and exactly one paper $\\textit{pap}(r) \\in P$.\nHence, each referee $e \\in E$ generates a set of reviews $R_e = \\{r \\textit{ }|\\textit{ } r \\in R, \\textit{ref}(r) = e\\}$ corresponding to a set of reviewed papers $P_e = \\{\\textit{pap}(r) \\textit{ } | \\textit{ } r \\in R_e\\}$.\nThe \\textit{Paper Ranking Problem} is then defined as the task of predicting an overall ranking $\\mathcal{O}_{P'}$ implying a total order on $P' \\subseteq P$, given $R$, $P$ and $E$ with all associated information. $\\mathcal{O}_{P'}$ should have minimal ranking distance to the ranking of papers by their true total quality ordering $\\widehat{\\mathcal{O}}_{P'}$.\nWhen making acceptance decisions on all submissions together, we have $P' = P$. To make individual acceptance decisions on subsets of $P$ that group submissions, for instance, by track or paper type, we model the task as multiple PRPs on disjoint subsets $P'_1, ..., P'_N$ partitioning $P$. 
For conciseness, we refer to this set of subsets as $P'$ as well.\n\nIn this definition, we assume that there is one true ranking by paper quality for each considered subset of papers. This assumption is inherent to peer review and merely made explicit in this formalization; it is most plausible in the given context, which is limited to papers within a single PR system. \n\n\\subsection{Performance Criteria}\nA useful assistance system increases the efficiency of a PR process while at least maintaining the quality of the output decisions. Hence, we adapt quality criteria from the science-of-science literature on peer review to measure performance on the PRP. Specifically, we consider \\textit{effectiveness}, \\textit{completeness}, \\textit{fairness} and \\textit{efficiency}, described below.\nWe draw on the concept of \\textit{random ranking models} \\cite{critchlow_probability_1991} to describe a paper ranking algorithm $A$. The quality $u(p)$ of a paper $p \\in P'$ is drawn from the true quality distribution. For any $x, y \\in P'$, the pairwise precedence $x \\preceq_\\mathcal{M} y$ is distributed according to the random ranking model $\\mathcal{M}$ implied by $A$; this means that $x$ precedes $y$ in the output rankings of $A$ with the probability defined by $\\mathcal{M}$.\n\\paragraph{Effectiveness and Completeness}\nEffectiveness and completeness are classical criteria from information retrieval. In the context of PR, effectiveness is often described by predictive validity \\cite{ragone_peer_2013} or the ability to filter out low-quality works \\cite{birukou_alternatives_2011}. A lack of completeness is associated with low acceptance rates \\cite{church_reviewing_2005}. 
%\nIf $x$ is ranked higher than $y$, effectiveness requires that the quality of $x$ is higher than that of $y$.\n\\begin{equation} \\label{eq:eff}\n P(u(x) > u(y) | x \\preceq_\\mathcal{M} y) = 1\n\\end{equation}\nFor a complete ranking model, $x$ should always precede $y$ in the ranking, given that $x$ has a higher quality than $y$.\n\\begin{equation} \\label{eq:com}\n P(x \\preceq_\\mathcal{M} y | u(x) > u(y)) = 1\n\\end{equation}\n\\paragraph{Fairness}\nFairness in PR is often regarded as the absence of bias in the reviews or as the replicability of the PR process \\cite{walker_emerging_2015}. This perspective neglects the subjectivity of reviewing and desirable types of bias in reviews, like a credit of trust towards potentially ground-breaking works \\cite{bornmann_scientific_2011}. We propose an output-oriented criterion for fairness in the PRP: a ranking model is fair if papers of the same quality precede each other with the same probability.\n\\begin{equation}\n P(x \\succeq_\\mathcal{M} y | u(x) = u(y)) = P(y \\succeq_\\mathcal{M} x | u(x) = u(y))\n\\end{equation}\nThis shifts the focus of bias analysis from the reviews to the ranking model and allows for swapped pairs in the generated output rankings, as long as they do not occur systematically.\n\\paragraph{Efficiency}\nEfficiency is a meta-criterion of the process of obtaining a ranking model. \\citeauthor{ragone_peer_2013} \\shortcite{ragone_peer_2013} measure the efficiency of PR by the time spent by reviewers to achieve a certain quality standard of acceptance decisions. We transfer this criterion directly to the PRP: the number of reviews required as input should be minimal to produce an effective, complete and fair ranking model.\n\\subsection{Evaluation Framework}\nTo evaluate approaches to the PRP in real scenarios, we approximate the latent true quality per paper $u(p)$. 
As paper quality is multi-faceted in nature, we combine multiple weak indicators for different aspects of quality to approximate $u(p)$.\n\nDespite uncertainty as to what citation counts measure exactly \\cite{aksnes_citations_2019}, they are common proxies for the impact of a paper. Within a fixed scientific field and for a fixed time since publication, we use them as indicators of paper impact in our framework.\nFurthermore, historic acceptance decisions relate to paper quality. While they are venue-specific, noisy and highly selective due to restrictive acceptance quotas, they can serve as valuable quality indicators within the context of a fixed PR process.\nAlthough these measures should not be considered in isolation, a \\textit{balanced}, high performance on both of them reveals consistency with paper quality within the limitations of the used proxies. In principle, the provided evaluation framework is applicable to any set of metrics correlating with diverse aspects of paper quality.\n\nTo measure effectiveness and completeness against a ranking by citation counts and against binary acceptance decisions in practice, we use Spearman's rank correlation ($\\rho$) and the area under the receiver operating characteristic (AUROC), respectively.\nAs the probability distribution of precedence pairs $P(x \\succeq_\\mathcal{M} y)$ is typically not available, we approximate fairness as the sensitivity of the PRP approach to bias and noise in the input data. When artificial bias and noise are added to the review scores, a fair approach should output rankings consistent with the unaltered and true ones. We thereby ground the sensitivity analysis commonly performed on artificial PR data (e.g. \\cite{cook_creating_2007,wang_your_2019}) in real datasets, making it more adequate for application scenarios.\nThe evaluation of efficiency follows directly from its definition. 
After sub-sampling the set of reviews randomly, we measure the performance decrease in terms of effectiveness and completeness compared to the full dataset. In the experiments, we apply this general evaluation schema to real data.\n\n\\section{Preference Learning for Paper Ranking}\nThe Paper Ranking Problem can be naturally approached by preference learning to incorporate review texts and to account for bias and noise in review scores.\n\\subsection{Preference-based Model of the PRP}\nIn preference learning, relative preferences on the item space serve as the supervision signal. For the PRP, we elicit preferences as for consensus ranking: we convert the review scores of a referee $e \\in E$ into a partial ranking on the set of reviewed papers $P_e$.\nThis rests on a latent assumption: referees consciously or subconsciously compare papers during review. While effects like \\textit{order bias} \\cite{birukou_alternatives_2011} suggest that this is in fact true, there might be violations for papers on very different topics reviewed by the same referee.\nHowever, this assumption is not only induced by the preference perspective; it is also inherent to the direct use of review scores: scores of incomparable papers are mapped to the same numeric space. In fact, the preference-based view is less restrictive, as it allows for explicit filtering of potentially invalid comparisons.\n\nFormally, the partial rankings $\\textit{PO}_E=\\{\\mathcal{O}_{P_e}| e \\in E\\}$ are given as training supervision. Here, $\\mathcal{O}_{P_e}$ is the total order on the papers $P_e$ reviewed by $e$, which is induced by the order on the associated scores. We train the model on all papers $P$, where each paper is represented by a feature vector derived from the review texts and score vectors. The goal is to predict $\\mathcal{O}_{P'}$, assuming that the observed partial rankings are sampled from the true order $\\widehat{\\mathcal{O}}_{P'}$ with random permutations. 
Each partial ranking implies a set of preference pairs, including \\textit{tie preferences} for non-strict partial orders. In this learning setup, the model orders only items seen during training.\n\\subsection{GPPL for the Paper Ranking Problem}\nGaussian processes (GPs) are fully Bayesian regression learners with Gaussian priors that achieve high robustness against noise in small-data domains \\cite{rasmussen_gaussian_2006}.\n\\citeauthor{chu_preference_2005} \\shortcite{chu_preference_2005} adapt GPs for preference learning. The authors assume that observed preference pairs $x \\succ y$ (\"$x$ is preferred over $y$\") follow a likelihood function where the probability of $x \\succ y$ depends on the difference between the latent quality of $x$ and $y$ plus Gaussian noise. GPPL predicts the quality $u(x)$ of a sample given the training pairs by marginalizing over the latent variables.\n\n\\citeauthor{simpson_finding_2018} \\shortcite{simpson_finding_2018} propose a more scalable approximation of the posterior using stochastic variational inference. The authors show that scalable GPPL effectively ranks arguments by convincingness based on embeddings and linguistic features.\nScalable GPPL is suitable for our purpose, given that review scores can be noisy and that the number of submissions ranges from below one hundred to several thousand. Hence, we utilize the method by \\citeauthor{simpson_finding_2018} \\shortcite{simpson_finding_2018} for our approach.\nTo apply GPPL to the PRP, the partial orders $\\textit{PO}_E$ are converted into a set of preferences. We enumerate all implied precedence pairs for each $\\mathcal{O}_{P_e} \\in \\textit{PO}_E$ and join the precedence pairs of all referees into a multi-set as input. The papers are represented by the feature vectors described in the following section. 
The output of a GPPL model is a real-valued quality estimate per submission, which allows all submissions to be ranked.\n\n\\subsection{Feature Set Design}\nWe focus on review scores and texts to represent submissions. In this way, we investigate the importance of review texts for the PRP and enforce that primarily the human judgements of the referees are reflected in the output rankings.\nWe consider three feature sets to define a vector representation such that two papers are close in the vector space if they are of similar quality.\n\\begin{itemize} \n \\item \\textbf{Score Features}: These features are derived from the reviews' score vectors. Apart from an overall assessment, ratings on aspects of paper quality, like soundness or presentation, are typically provided as \\textit{aspect scores}. \n For each aspect score and the overall score, the mean, standard deviation, minimum and maximum over the reviews per paper are computed.\n Additionally, the score vectors of all reviews of a paper are concatenated in arbitrary order. Score-based features should reflect controversy and multiple quality aspects, but they are expected to be noisy, as they rely on the scores directly.\n \\item \\textbf{Discourse Features}: Peer reviews contain questions, feedback and summaries. The distribution of these argumentative units relates to paper quality, as they reflect its strengths and weaknesses. In the AMPERE dataset \\cite{hua_argument_2019}, 400 computer science reviews are annotated with sentence-level discourse labels, including e.g. \\textit{request} or \\textit{non-argumentative}. We fine-tune the last layer of BERT \\cite{devlin_bert_2019} for $4$ epochs on $90\\%$ of the samples in AMPERE, resulting in a $0.7969$ micro-F1 score on the remaining $10\\%$. We apply this model to each review sentence. The distribution of discourse labels and the proportion of non-argumentative sentences across the reviews of a paper are added as features. 
\n \\item \\textbf{Embedding Features}: Embeddings are widely used in NLP to capture the similarity and meaning of texts. Reviews are structured into sections answering the different questions of the review form. We encode each section using mean pooling over sentence embeddings and concatenate the section vectors. Additionally, we form the mean of the embeddings per section across reviews. To capture review relatedness, we compute the average cosine similarity between the first sentences of the reviews.\n To embed sentences, we use distilled SBERT \\cite{reimers-gurevych-2019-sentence} fine-tuned on the natural language inference task, as these embeddings are not domain-specific and are suitable for representing general statements in reviews. In this work, we focus on a simple representation of review texts as a proof of concept. We leave domain-specific or paragraph-level embeddings for future work.\n\\end{itemize}\nIn the experiments, we investigate feature subsets to determine their utility with respect to different proxies of paper quality. 
Additionally, we tested simple reading complexity metrics, but they did not improve performance in any scenario.\n\\section{Experiments}\n\\begin{table}[t]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{0.95}\n\t\\begin{tabular}{R{5.6cm}L{1.8cm}}\n\t \\toprule\n\t\t\\multicolumn{2}{C{8cm}}{{\\textbf{ACL-2018 Dataset Statistics}}} \\\\ \n\t\t\\midrule\n\t\t\\# Papers & $1538$\\\\\n\t\t\\# Reviews & $3875$\\\\\n\t\t\\# Reviews per paper & $2.52 \\pm 0.67$ \\\\ \n\t\t\\# Reviews per referee & $3.04 \\pm 1.35$ \\\\ \n\t\tKrippendorff's $\\alpha$ & $0.3596$ \\\\\n\t\tIntra-class correlation coefficient (ICC) & $0.365$ \\\\\n\t\t\\# Preference pairs & $5109$ \\\\\n\t\t\\# Comparisons per paper & $6.65 \\pm 2.62$ \\\\\n\t\t\\bottomrule\n\t\\end{tabular} \n\t\\caption[Basic Statistics of the Datasets]{On the overall scores, we use Krippendorff's $\\alpha$ \\cite{krippendorff_content_1980} with an ordinal metric and the ICC \\cite{mcgraw1996forming}.}\n\t\\label{t:datastats}\n\\end{table}\nWe rely on the anonymized PR data from consenting referees at ACL-2018, kindly provided by the collecting parties \\cite{gao_does_2019}. According to \\citeauthor{gao_does_2019}, $85\\%$ of the referees opted in to the collection, making the dataset close to complete. ACL-2018 contains anonymous referee identifiers, as in real PR systems, which we use to infer partial rankings. To our knowledge, ACL-2018 is the only available dataset with this information.\nNevertheless, access to more complete PR data is likely in the future due to an increasing interest in open peer review \\cite{birukou_alternatives_2011}.\n\\paragraph{Dataset} \nThe ACL-2018 dataset includes reviews and acceptance labels for 1538 submissions from 21 tracks. We only consider before-rebuttal reviews to avoid social biases \\cite{gao_does_2019}. The acceptance rate lies at roughly 25\\%. 
Each review has an overall score on a six-point scale and six aspect scores, \\textit{originality}, \\textit{soundness}, \\textit{substance}, \\textit{replicability}, \\textit{meaningful comparison} and \\textit{readability}, on five-point scales. Additionally, each review contains five text sections: \\textit{summary and contributions}, \\textit{strengths}, \\textit{weaknesses}, \\textit{questions} and \\textit{additional comments}.\nTable \\ref{t:datastats} summarizes relevant statistics of ACL-2018. We observe that the preference signal is ample: each paper is compared $6$ times on average. When treating PR as an annotation study, the agreement on overall scores reveals the level of consistency in the referees' judgements. For ACL-2018, the agreement is low for a computer science venue \\cite{ragone_peer_2013}, but high compared to social science journals \\cite{bornmann_scientific_2011}.\n\nTo realize the proposed evaluation strategy on both citation counts and acceptance labels as gold standards, we match the accepted papers in ACL-2018 with the \\textit{NLPScholar} dataset \\cite{mohammad_examining_2020}, converting the acquired citation counts into a ranking. The papers have identical age, making their citation counts comparable. As an additional reference, we normalize citation counts per track, which eliminates a preference for topics with broad audiences. We normalize by the sum of the citation counts $\\textit{cc}(p)$ of the papers in track $t$ to form a ranking:\n$\n\\textit{ncc}_\\textit{t}(p) = \\frac{\\textit{cc}(p)}{\\sum_{q \\in t}\\textit{cc}(q)}\n$.\nDue to privacy restrictions, the citation counts of rejected papers cannot be considered. 
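The conversion of per-referee scores into the preference pairs counted in Table \\ref{t:datastats} can be sketched in a few lines of Python; the tuple schema below is illustrative and not the dataset's actual format:

```python
from collections import defaultdict
from itertools import combinations

def preference_pairs(reviews):
    """Derive per-referee preference pairs from overall scores.

    `reviews` holds (referee_id, paper_id, overall_score) tuples; this
    schema is an assumption for illustration. Returns the strict
    preference pairs (worse, better) and the tie pairs.
    """
    by_referee = defaultdict(list)
    for referee, paper, score in reviews:
        by_referee[referee].append((paper, score))

    pairs, ties = [], []
    for scored in by_referee.values():
        # Each referee induces a partial ranking on the papers they reviewed.
        for (p, sp), (q, sq) in combinations(scored, 2):
            if sp < sq:
                pairs.append((p, q))   # q is preferred over p
            elif sq < sp:
                pairs.append((q, p))   # p is preferred over q
            else:
                ties.append((p, q))    # tie preference
    return pairs, ties
```

For referee $A$ from the earlier example (scores $x=2$, $y=1$, $z=3$), this yields exactly the pairs implied by $y \\prec_{A} x \\prec_{A} z$; equal scores produce tie preferences.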
\n\\paragraph{Hyper-parameter Tuning and Training}\nWe employ the GPPL implementation by \\citeauthor{simpson_finding_2018} \\shortcite{simpson_finding_2018} in its default configuration with a \\textit{Matérn} kernel\\footnote{The code of our GPPL-based approach, the baselines and detailed hyper-parameters are available at \\url{www.anonymous-github-repo.com}}. Instead of length-scale optimization or the proposed median heuristic, we use standard normalization on the non-embedding features, as this increased performance on all gold standards substantially. We only report on variations of the feature sets, as no other hyper-parameter configuration improved performance during the experiments.\nThe goal of the PRP is to predict the true quality ranking of the papers seen during training. For all experiments, we therefore train the GPPL model on all papers and their reviews, but make predictions for paper subsets. We split ACL-2018 into a 20\\% development and an 80\\% test set. The sets are randomly sampled while ensuring that the distributions of acceptance labels and of positions in the citation count ranking are consistent with the overall population.\nWe report the mean and standard deviation of the performance metrics over five runs with randomly shuffled inputs.\n\\paragraph{Baselines}\nWe consider score aggregation strategies including the mean overall score weighted by referee confidence (MEAN-S-w), the median overall score (MEDIAN-S) and majority voting on overall scores (MAJOR-S), falling back to the mean score for tied votes.\nMEAN-S-w weights reviews by referee-provided confidence scores, as this improved performance on the development set. \nAdditionally, we compare to the decision-based \\cite{cook_creating_2007} (DCON) and the neighborhood-based \\cite{baskin_preference_2009} (NCON) consensus rankers. Both algorithms are re-implemented in Python based on the author-provided code. 
They receive the same input as the GPPL model, excluding tie preferences, which they cannot account for.\n\\paragraph{Experimental Scenarios}\nIn the first experiments, we investigate the performance of different feature sets. We optimize two feature configurations, where the first is selected based on the acceptance labels (by AUROC) and the second is selected based on the citation count ranking (using Spearman's $\\rho$) on the development set. In this way, we identify which features boost performance on each gold standard. In the end, we select the configuration with the best balanced performance on both gold standards for all further experiments.\n\nThe evaluation of \\textit{effectiveness} and \\textit{completeness} follows directly from the generic strategy described earlier. We measure the performance of the baselines and the best model on the test set of the ACL-2018 dataset.\nTo judge the \\textit{fairness} of the ranking models, we consider two scenarios of rating errors and measure their impact on performance: First, we add random noise $\\epsilon \\sim \\mathcal{N}(0, \\sigma^2)$ with $\\sigma \\in \\{0.75, 1.0\\}$ to the aspect and overall scores (rounded to integers) of $\\alpha \\in \\{30\\%, 60\\%\\}$ of the referees. In this way, we simulate the effect of unreliable referees. Second, we simulate commensuration bias, which refers to the heterogeneous weighting of paper aspects by different referees when deriving the overall score of a paper \\cite{lee_commensuration_2015}.\nWe model it by replacing the overall score with a weighted sum of the aspect scores and adding low Gaussian noise ($\\sigma = 0.5$). We apply this to the reviews of $\\alpha = 30\\%$ of the referees. To analyze different scenarios, we use equal weights (COMM-EQ), an over-emphasis on readability (COMM-READ) and discarding of the originality score (COMM-CON). 
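A minimal sketch of the two perturbations, with an illustrative referee-to-scores mapping and function names of our own choosing:

```python
import random

def add_referee_noise(scores, affected=0.3, sigma=1.0, lo=1, hi=6, seed=0):
    """Add rounded Gaussian noise to the scores of a random referee subset.

    `scores` maps referee -> {paper: overall_score}; the schema is an
    assumption for illustration. Perturbed scores are clamped to the
    original rating scale.
    """
    rng = random.Random(seed)
    noisy = set(rng.sample(sorted(scores), round(affected * len(scores))))
    return {ref: ({p: min(hi, max(lo, round(s + rng.gauss(0, sigma))))
                   for p, s in revs.items()} if ref in noisy else dict(revs))
            for ref, revs in scores.items()}

def commensurate(aspect_scores, weights, sigma=0.5, rng=None):
    """Replace the overall score by a weighted sum of aspect scores plus noise."""
    rng = rng or random.Random(0)
    return sum(w * a for w, a in zip(weights, aspect_scores)) + rng.gauss(0.0, sigma)
```

Clamping keeps perturbed scores on the original rating scale; the weight vector passed to `commensurate` selects between settings in the spirit of COMM-EQ, COMM-READ and COMM-CON.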
%\nTo evaluate the \\textit{efficiency} of the PRP approaches, we sub-sample the reviews per paper, discarding $\\alpha \\in \\{30\\%, 60\\%\\}$ of the reviews while guaranteeing at least one review per paper. Again, we measure the performance decrease on both gold standards.\n\\section{Results and Analysis} \\label{ss:res}\n\n\\begin{table*}[t]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{0.95}\n\t\\begin{tabular}{R{5.2cm}C{2.5cm}C{2.5cm}C{2.5cm}C{2.5cm}}\n\t\t\\toprule\n\t\t& \\textbf{AUROC} & \\textbf{PRAUC} & $\\rho$ \\textbf{raw} & $\\rho$ \\textbf{norm.} \\\\\n\t\t\\midrule\n\t\t{MEAN-S-w} & $\\mathbf{0.9041}$ & $0.7180$ & $0.1114$ & $0.1352$ \\\\\n\t\t{MEDIAN-S} & $0.8711$ & $0.6530$ & $0.1109$ & $0.1229$\\\\\n\t\t{MAJOR-S} & $0.8731$ & $0.6491$ & $0.1203$ & $0.1292$ \\\\\n\t\t{DCON} & $0.8302 \\pm 0.003$ & $0.5487 \\pm 0.011$ & $0.0907 \\pm 0.008$ & $0.0746 \\pm 0.007$ \\\\\n\t\t{NCON} & $0.7824 \\pm 0.005$ & $0.5028 \\pm 0.007$ & $0.0765 \\pm 0.029$ & $0.0507 \\pm 0.026$ \\\\\n\t\t\\addlinespace[0.15cm]\n\t\t{GPPL} & $0.8942 \\pm 0.001$ & $0.7213 \\pm 0.004$ & $0.2047 \\pm 0.010$ & $0.2074 \\pm 0.010$\\\\\n\t\t{GPPL only embedding features} & $0.8224 \\pm 0.004$ & $0.5687 \\pm 0.009$ & $\\mathbf{0.2333} \\pm 0.022 $ & $\\mathbf{0.2322} \\pm 0.022 $ \\\\\n\t\t{GPPL only score features} & $0.9012 \\pm 0.000$ & $\\mathbf{0.7395} \\pm 0.000$ & $0.1307 \\pm 0.000$ & $0.1317 \\pm 0.000$ \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption[Effectiveness and Completeness Measurements]{Effectiveness and completeness on the test set of ACL-2018. The mean and standard deviation over five runs are given. \"norm.\" refers to the citation counts normalized per track. 
\"PRAUC\" is the area under the precision-recall curve.}\n\t\\label{at:effect-test} \n\\end{table*}\n\\begin{table*}[t]\n\t\\centering\n\t\\begin{tabular}{R{1.9cm}C{1.2cm}C{1.2cm}C{1.2cm}C{1.2cm}C{1.1cm}C{1.0cm}C{1.1cm}C{1.0cm}C{1.1cm}C{1.0cm}}\n\t\t\\toprule\n\t\t& \n\t\t\\multicolumn{2}{C{2.5cm}}{\\textbf{30\\% noise}} &\n\t\t\\multicolumn{2}{C{2.5cm}}{\\textbf{60\\% noise}} &\n\t\t\\multicolumn{2}{C{2.5cm}}{\\textbf{COMM-EQ}} & \n\t\t\\multicolumn{2}{C{2.5cm}}{\\textbf{COMM-READ}} & \n\t\t\\multicolumn{2}{C{2.5cm}}{\\textbf{COMM-CON}}\\\\\n\t\t\\cmidrule(lr){2-3}\\cmidrule(lr){4-5}\\cmidrule(lr){6-7}\\cmidrule(lr){8-9}\\cmidrule(lr){10-11}\n\t\t& \\textbf{AUROC} & $\\rho$ \\textbf{raw} & \\textbf{AUROC} & $\\rho$ \\textbf{raw} & \\textbf{AUROC} & $\\rho$ \\textbf{raw} & \\textbf{AUROC} & $\\rho$ \\textbf{raw} & \\textbf{AUROC} & $\\rho$ \\textbf{raw} \\\\\n\t\t\\midrule\n\t\tMEAN-S-w & $\\mathbf{0.889}$ & $0.119$ & $0.865$ & $0.078$ & $0.874$ & $0.149$ & $0.867$ & $0.089$ & $0.859$ & $0.144$ \\\\\n\t\t{DCON} & $0.820$ & $0.104$ & $0.794$ & $0.065$ & $0.797$ & $-0.045$ & $0.795$ & $0.145$ & $0.809$ & $0.070$ \\\\\n\t\tGPPL & $0.886$ & $\\mathbf{0.169}$ & $\\mathbf{0.879}$ & $\\mathbf{0.162}$ & $\\mathbf{0.885}$ & $\\mathbf{0.200}$ & $\\mathbf{0.878}$ & $\\mathbf{0.194}$ & $\\mathbf{0.878}$ & $\\mathbf{0.199}$ \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption[Fairness]{Performance for the fairness scenarios. The noise scenarios refer to the case $\\sigma=1.0$ with $30\\%$ and $60\\%$ of the referees affected.}\n\t\\label{at:fairness-test}\n\\end{table*}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{eff_figure.pdf}\n\t\\caption{Performance on randomly removed reviews with at least one review per paper. Dashed lines refer to the right axis ($\\rho$).}\n\\label{fig:efficiency}\n\\end{figure}\nOur implementation of DCON was stopped early after $20$h of computation (8 CPUs, 16 GB of RAM). NCON converged in $11$h on average. 
Our GPPL model using the final pre-computed feature set terminates on average in $4.5$min.\n\\paragraph{Feature Selection}\nThe best features on acceptance labels include score- and embedding-based features, but no discourse information. This configuration achieves $0.8463$ AUROC and $\\rho=0.1930$ on citation counts. The performance drops to $0.7687$ AUROC without score-based features. Not surprisingly, review scores are central to predicting acceptance labels, because they strongly influence decision making. To rank consistently with citation counts, embedding-based features are crucial: the best features include only embeddings of the reviews' \\textit{summary and contributions} sections and discourse features. This achieves $\\rho = 0.2952$ and $0.6948$ AUROC. Discarding embedding-based features leads to a drastic drop of $\\rho$ by $0.1049$.\nThis is reasonable, as impact is strongly linked to the contributions of a paper.\nAll further experiments use the features optimized on acceptance labels, because they offer the best performance trade-off on both gold standards.\n\\paragraph{Effectiveness and Completeness}\nIn Table \\ref{at:effect-test}, the effectiveness and completeness of our approach are compared to the baselines. As an ablation study, we also report the performance using only embedding-based and only score features.\nWhile MEAN-S-w performs best on acceptance labels by AUROC, the difference from our model is close to zero ($-0.0099$). At the same time, the performance gain of our model according to $\\rho$ on the raw ($+184\\%$) and normalized ($+65\\%$) citation rankings is substantial. The consensus ranking baselines perform consistently worse. This shows the limitations of previous methods on real PR data.\nThe GPPL models using subsets of the best features confirm the importance of scores for acceptance labels and of the embedding-based features for citation counts. 
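For reference, the three metrics reported in Table \\ref{at:effect-test} can be computed with standard tooling. The snippet below uses made-up per-paper scores and labels (hypothetical data, scikit-learn and SciPy assumed; these are not results from the paper) purely to pin down the metric definitions.

```python
# Hypothetical illustration of the evaluation metrics (AUROC, PRAUC,
# Spearman's rho); all numbers below are made up, not paper results.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import average_precision_score, roc_auc_score

# Predicted per-paper quality scores (higher = ranked better), binary
# acceptance labels, and raw citation counts for the same five papers.
pred = np.array([0.9, 0.2, 0.3, 0.4, 0.8])
accepted = np.array([1, 0, 1, 0, 1])
citations = np.array([12, 2, 30, 5, 23])

auroc = roc_auc_score(accepted, pred)            # ranking vs. acceptance labels
prauc = average_precision_score(accepted, pred)  # area under precision-recall curve
rho, _ = spearmanr(pred, citations)              # rank correlation with citations

print(f"AUROC={auroc:.3f}  PRAUC={prauc:.3f}  rho={rho:.3f}")
```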
The low $\\rho$ of MEAN-S-w on citation counts also illustrates how difficult predicting future citation counts is even for human referees, which puts the overall weak correlation into perspective. \nFinally, the equally high performance on track-wise normalized citation counts indicates that our approach does not simply learn to favor topics with a broad audience, as this effect is mitigated in this ranking.\n\\paragraph{Fairness}\nThe results for simulating unreliable referees are consistent for added noise at levels $\\sigma=0.75$ and $\\sigma=1.0$. Hence, we only report on the setting with $\\sigma=1.0$ and varying rates of unreliable referees.\nFor all algorithms except the consensus rankers, the ranking produced on the noisy reviews is highly consistent with their original rankings ($\\rho > 0.9$). Table \\ref{at:fairness-test} shows the AUROC and Spearman's $\\rho$ on the test set for different ratios of affected referees. GPPL shows the smallest decay in performance for acceptance labels. On the citation count ranking, the performance of GPPL remains the highest, while the other baselines show a drastic drop for 60\\% of noisy referees. Although adding noise only to the scores naturally favors approaches that also rely on review texts, this suggests that the GPPL model is less affected by additional score noise.\nFor the three scenarios of commensuration bias, the performance of the best baselines and our approach is reported in Table \\ref{at:fairness-test}. The GPPL model outperforms all other methods in all scenarios. Surprisingly, the COMM-EQ scenario leads to an improved performance on the citation count ranking for nearly all algorithms. Apparently, substituting the actual overall score with the average of the aspect scores acts as a de-biasing step. 
This suggests that pre-processing of data samples and preference pairs might further improve performance.\n\\paragraph{Efficiency}\nIn the first scenario of the efficiency evaluation ($\\alpha=30\\%$), on average $1.60$ reviews per paper and $1.93$ reviews per referee remain. For $\\alpha=60\\%$ of removed reviews, there are $1.01$ reviews per paper and $1.23$ reviews per referee. The consensus rankers are not applicable in either scenario, as some papers are not included in any partial ranking of more than one element. The GPPL model, in contrast, can make predictions for papers not seen during training.\nAs shown in Figure \\ref{fig:efficiency}, the performance of all algorithms drops drastically. Our model deals slightly better with the sparsity of reviews than MEAN-S-w. Reducing the number of reviews to increase efficiency thus conflicts with output quality for all approaches.\n\\section{Conclusion and Future Work}\nIn this paper, we demonstrated that preference learning is useful for assisting acceptance decision making in peer review. We defined the Paper Ranking Problem and a novel generic evaluation framework to enable the empirical study of approaches in this field. We showed that our GPPL-based method offers the best balanced performance on acceptance labels and citation counts, while being more robust against unreliable referees and added commensuration bias. Our experiments also highlighted the importance of both review texts and scores for ranking papers.\nAs future research directions, specialized embeddings of review texts and their combination with the paper embeddings proposed for scholarly document quality assessment are promising. 
Additionally, research on the transferability of our method to different peer review systems is essential once more datasets with complete peer review data become available.\n\\bibliographystyle{named}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and main results}\n\n\\emph{Space-time homogenization problems} for hyperbolic equations were first studied by Bensoussan, Lions and Papanicolaou. In \\cite{BLP}, based on a method of \\emph{asymptotic expansion}, the following wave equation is treated\\\/{\\rm :}\n\\begin{equation}\\label{wave}\n\\partial_{tt}^2u_{\\varepsilon} -{\\rm{div}}\\left(a_{\\varepsilon}\\nabla u_{\\varepsilon} \\right)=f \\quad\\text{ in } \\Omega\\times (0,T),\n\\end{equation} \nwhere $\\Omega$ is a bounded domain in $\\mathbb R^N$ with smooth boundary $\\partial\\Omega$, $N\\ge1$, $T>0$, $f=f(x,t)$ is a given data, $a : \\mathbb{T}^N \\times \\mathbb{T}\\to \\mathbb R^{N\\times N}$ is an $N \\times N$ symmetric matrix field satisfying a uniform ellipticity and $1$-periodicity and $a_{\\varepsilon}:=a(\\tfrac{x}{\\e},\\tfrac{t}{\\e^r})$ for $r>0$ (i.e., $a_\\varepsilon$ is $\\varepsilon\\times \\varepsilon^r$-periodic). The homogenization problem concerns asymptotic behavior as $\\varepsilon\\to 0_+$ of (weak) solutions $u_\\varepsilon=u_\\varepsilon(x,t)$ as well as a rigorous derivation of limiting equations, often called \\emph{homogenized equation}. 
In \\cite{BLP}, it is assumed that (weak) solutions $u_\\varepsilon = u_\\varepsilon(x,t)$ can be expanded as a series\\\/{\\rm :}\n\\begin{equation}\\label{asym_exp}\nu_\\varepsilon(x,t) = \\sum_{j = 0}^\\infty \\varepsilon^j u_j(x,t,\\tfrac{x}{\\e},\\tfrac{t}{\\e^r}),\n\\end{equation}\nwhere $u_j = u_j(x,t,y,s) : \\Omega \\times (0,T)\\times \\mathbb{T}^N\\times \\mathbb{T} \\to \\mathbb R$ for $j=0,1,2,\\ldots$ are some periodic functions, and then, by substituting \\eqref{asym_exp} into \\eqref{wave}, at a formal level, $u_0=u_0(x,t)$ turns out to be independent of the \\emph{microscopic variables} $(y,s)$ and to solve the following homogenized equation\\\/:\n\\begin{equation*}\\label{wave-hom}\n\\partial_{tt}^2u_{0} -{\\rm{div}}\\left(a_{\\rm hom}\\nabla u_{0} \\right)=f \\quad\\text{ in } \\Omega\\times (0,T),\n\\end{equation*} \nwhere $a_{\\rm hom}$ is the so-called \\emph{homogenized matrix}, represented as\n\\begin{equation}\\label{hom_mat}\na_{\\rm hom} e_k = \\int_0^1\\int_{\\square} a(y,s)\\bigl(\\nabla_y \\Phi_k(y,s)+e_k\\bigr) \\, dyds \\quad \\mbox{ for } \\ k=1,2,\\ldots,N.\n\\end{equation}\nHere $\\square := (0,1)^N$ is the unit cell, $\\nabla_y$ stands for the gradient operator with respect to the third variable $y$, \n$\\{e_k\\} = \\{[\\delta_{jk}]_{j=1,2,\\ldots,N}\\}$ stands for the canonical basis of $\\mathbb R^N$ and \n $\\Phi_k : \\mathbb{T}^N\\times \\mathbb{T} \\to \\mathbb R$ (for $k = 1,2,\\ldots,N$) is the \\emph{corrector} which will be explained later (see Remark \\ref{corrector} below).\nMoreover, $\\Phi_k$ is determined by the so-called\n\\emph{cell problems}. 
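For orientation, the simplest closed-form instance may help: in one space dimension with a time-independent coefficient $a=a(y)$, the (elliptic) cell problem and the analogue of \eqref{hom_mat} can be solved explicitly. This is a standard stationary sketch, not specific to the space-time setting of \cite{BLP}:

```latex
% 1-D, time-independent sanity check (N = 1, a = a(y)): the cell problem reads
%   -\partial_y [ a(y) ( \partial_y \Phi(y) + 1 ) ] = 0  in  \mathbb{T},
% so the flux a(y)(\partial_y\Phi(y) + 1) \equiv c is constant. Periodicity of
% \Phi forces \int_0^1 \partial_y\Phi \, dy = 0, whence c \int_0^1 a(y)^{-1} dy = 1.
% Inserting this into the analogue of \eqref{hom_mat} yields the harmonic mean:
\[
  a_{\rm hom}
  = \int_0^1 a(y)\bigl(\partial_y\Phi(y)+1\bigr)\,dy
  = \biggl(\int_0^1 \frac{dy}{a(y)}\biggr)^{-1}
  \;\le\; \int_0^1 a(y)\,dy,
\]
% with equality only for constant a: homogenization in general lowers the
% effective diffusivity below the naive arithmetic average of the coefficient.
```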
\nIn particular, if the log-ratio of the spatial and temporal periods of the coefficients is the hyperbolic scale ratio (i.e., $r=1$), then the cell problem is also a wave equation.\n \\begin{equation*}\n\\partial_{ss}^2\\Phi_k-\\mathrm{div}_y \\bigl[ a(y,s) (\\nabla_y \\Phi_k+ e_k ) \\bigl] = 0 \\quad \\mbox{ in } \\mathbb{T}^N\\times \\mathbb{T}\n\\end{equation*} \n(otherwise, cell problems are always elliptic equations, e.g.,~\\eqref{CPslow} below).\n\nIn \\cite{BLP}, the following heat equation is also treated\\\/{\\rm :}\n\\begin{equation}\\label{heat}\n\\partial_tu_{\\varepsilon}-{\\rm{div}}\\left(a_{\\varepsilon}\\nabla u_{\\varepsilon} \\right)=f \\quad\\text{ in } \\Omega\\times (0,T).\n\\end{equation}\nBy substituting \\eqref{asym_exp} to \\eqref{heat}, \n$u_0=u_0(x,t)$ is a (weak) solution to the following homogenized equation\\\/{\\rm :} \n\\begin{equation}\\label{heat-hom}\n\\partial_tu_{0}-{\\rm{div}}\\left(a_{\\rm hom}\\nabla u_{0} \\right)=f \\quad\\text{ in } \\Omega\\times (0,T),\n\\end{equation}\nwhere $a_{\\rm hom}$ is defined by \\eqref{hom_mat}. Furthermore, if $r=2$ (i.e., $a_\\varepsilon=a(\\tfrac{x}{\\e},\\tfrac{t}{\\varepsilon^2})$), then the corrector $\\Phi_k$ is the unique solution to the following cell problem\\\/{\\rm :}\n\\begin{equation}\\label{heat-CP}\n\\partial_{s}\\Phi_k-{\\rm{div}}_y\\bigl[a(y,s)(\\nabla_y\\Phi_k+e_{k})\\bigl]=0\\quad \\text{ in }\\ \\mathbb{T}^N\\times \\mathbb{T}\n\\end{equation}\n(as in \\eqref{wave}, cell problems are always elliptic equations for any $r\\neq 2$). Thus the type of the cell problem depends on the log-ratio of the spatial and temporal periods of the coefficients. Moreover, these formal arguments based on the asymptotic expansion for (the Cauchy-Dirichlet problem for) \\eqref{heat} are justified via \\emph{two-scale convergence theory} by A.~Holmbom in \\cite{Ho}. 
The notion of two-scale convergence was first proposed by G.~Nguetseng~\\cite{Ng}, and then, developed by G.~Allaire~\\cite{Al1,Al2} (see also, e.g.,~\\cite{LNW,Vi,Zh}). It enables us to analyze how strong compactness of bounded sequences in Sobolev spaces fails due to their oscillatory behaviors (see (i) and (ii) of Remark \\ref{indepwtts} below). A.~Holmbom extended the two-scale convergence theory to space-time homogenization and derived \\eqref{heat-hom} and \\eqref{heat-CP} rigorously. Moreover, the notion of \\emph{very weak two-scale convergence} is introduced, and then, it plays a crucial role for characterizing homogenized matrices (see Corollary \\ref{veryweak} below for details). Besides, homogenization problems for various parabolic equations have been studied not only for linear ones but also for nonlinear ones (e.g.~\\cite{AO,EP,FHOS,J,NW,W}). In particular, for p-Laplace type \\cite{EP,W} and porous medium type \\cite{AO}, it has been proved that cell problems are given as parabolic equations at the critical scale (i.e.,~$a_\\varepsilon=a(\\tfrac{x}{\\e},\\tfrac{t}{\\varepsilon^2})$ in \\eqref{heat}).\n\nOn the other hand, the following more general hyperbolic-parabolic equation is treated (e.g.~\\cite{BFM,BL,CCMM, DT, FF,Mi,Ti2,To}). \n\\begin{equation}\\label{H-P}\nh_{\\varepsilon}\\partial_{tt}^2u_{\\varepsilon}-{\\rm{div}}(a_{\\varepsilon}\\nabla u_{\\varepsilon})+g_{\\varepsilon}\\partial_{t}u_{\\varepsilon}=f\n\\quad\\text{ in } \\Omega\\times (0,T).\n\\end{equation}\nHere $h_{\\varepsilon}$ and $g_{\\varepsilon}$ are $\\varepsilon\\times \\varepsilon^r$-periodic functions rapidly oscillating. Furthermore, \\cite{NNS,Nn,Ti1,WD} deal with nonlinear wave equations, and in particular, in \\cite{NNS,Nn}, almost periodic settings are studied via $\\Sigma$-convergence theory developed in \\cite{Ng2}. 
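To make the failure of strong compactness mentioned above concrete, the following textbook example (standard in the two-scale literature, not taken from the references above) contrasts the weak limit with the two-scale limit:

```latex
% Let u = u(x,y) be smooth and \square-periodic in y, and set
% u_\varepsilon(x) := u(x, x/\varepsilon). By the mean-value property,
\[
  u_\varepsilon \rightharpoonup \int_{\square} u(\cdot,y)\,dy
  \quad \text{weakly in } L^2(\Omega),
\]
% so the weak limit retains only the cell average of the oscillation, while
% for every admissible test function \Psi one has
\[
  \lim_{\varepsilon\to 0_+}\int_{\Omega} u_\varepsilon(x)\,
  \Psi\bigl(x,\tfrac{x}{\varepsilon}\bigr)\,dx
  = \int_{\Omega}\!\int_{\square} u(x,y)\,\Psi(x,y)\,dy\,dx;
\]
% i.e., the two-scale limit keeps the full profile u(x,y) and records exactly
% the oscillations lost in the weak limit.
```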
Here \\eqref{H-P} is called a \\emph{damped wave equation} when $h_\\varepsilon\\equiv1$ and $g_{\\varepsilon}>0$. It is noteworthy that asymptotic expansions of solutions to damped wave equations are performed with the aid of solutions to diffusion equations (e.g.~\\eqref{heat}); moreover, asymptotic behaviors of solutions to damped wave equations are similar to those of diffusion equations as $t\\to +\\infty$ (see e.g.~\\cite{GR,N1}). Therefore, it is expected that cell problems for \\eqref{H-P} will change at the critical scale for \\eqref{heat} (i.e., $a_\\varepsilon=a(\\tfrac{x}{\\e},\\tfrac{t}{\\varepsilon^2})$). However, at least to our knowledge, this does not seem to occur under periodic homogenization in a fixed domain except for $h_\\varepsilon=-\\varepsilon^2$ and $g_\\varepsilon\\equiv 1$ (see \\cite[Chapter 2, Section 4.5]{BLP} for details).\n\n\\subsection{Setting of the problem}\nOne of the main purposes of the present paper is to find conditions under which the cell problems of \\eqref{H-P} will be different from elliptic ones (see \\eqref{CPslow} below). As a consequence, we emphasize that the (arithmetic) quasi-periodicity of the time-dependent coefficient $g_\\varepsilon$ in \\eqref{H-P} is crucial; it is defined as follows.\n\\begin{defi}[Quasi-periodic functions]\\label{quasi}\nThe function $\\varphi\\in C(\\mathbb R)$ is said to be \\emph{(arithmetic) quasi-periodic} if it satisfies \n\\begin{equation*}\n\\varphi(s+1)=\\varphi(s)+C_{\\ast}\\quad \\text{ for all $s\\in [0,1)$ and some constant $C_{\\ast}\\in \\mathbb R$}\n\\end{equation*}\n{\\rm(}i.e.,~$\\varphi$ is $(0,1)$-periodic if $C_{\\ast}=0${\\rm)}.\n\\end{defi}\n\n\\begin{rmk}\n\\rm\nThe notion of quasi-periodicity has been defined\nin several different ways \n(see e.g.,~\\cite{Co,Coo}). We stress that quasi-periodic functions in the sense of Definition \\ref{quasi} do not satisfy the almost-periodicity in the sense of Besicovitch, which is known as a generalization of periodicity. 
Indeed, if $\\varphi\\in C(\\mathbb R)$ is quasi-periodic, there exists a $(0,1)$-periodic function $\\varphi_{\\rm per}\\in C_{\\rm per}(\\square)$\n\\footnote{Indeed, setting $\\varPhi(s):= \\varphi(s)-C_{\\ast}s$, we see that $\\varPhi(s)$ is $(0,1)$-periodic.}\nsuch that\n\\begin{equation*}\n\\varphi(s)=\\varphi_{\\rm per}(s)+C_{\\ast}s,\n\\end{equation*}\nwhich implies that\n$$\n\\left(\\limsup_{R\\to +\\infty}\\frac{1}{|2R|}\\int_{-R}^{R}|\\varphi(s)|^r\\, ds\\right)^{1\/r}=+\\infty,\\quad \\text{ for all }\\ r\\in [1,+\\infty).\n$$\nThus $\\varphi$ does not belong to the generalized Besicovitch space $B^r(\\mathbb R)$ (see e.g.,~\\cite{CG, JKO}). Moreover, we shall consider both the effect of the periodic homogenization and the effect of the singular limit due to $\\varphi(\\tfrac{t}{\\e^r})=\\varphi_{\\rm per}(\\tfrac{t}{\\e^r})+C_{\\ast}\\tfrac{t}{\\e^r}$ and $C_\\ast\\tfrac{t}{\\e^r}\\to +\\infty$ for $t>0$ as $\\varepsilon\\to 0_+$.\n\\end{rmk}\n\nIn this paper, we shall consider the Cauchy-Dirichlet problem for the following damped wave equation\\\/{\\rm :} \n\\begin{equation}\\label{DW}\n\\displaystyle\n\\left\\{\n\\begin{aligned}\n&\\partial_{tt}^2u_{\\varepsilon}-{\\rm{div}}\\bigl[ a\\left(t,\\tfrac{x}{\\varepsilon}\\right)\\nabla u_{\\varepsilon} \\bigl]+g\\left(\\tfrac{t}{\\varepsilon^r}\\right)\\partial_tu_{\\varepsilon}=f_{\\varepsilon} \\quad\\text{ in } \\Omega\\times (0,T), \\\\\n&u_{\\varepsilon}|_{\\partial\\Omega}=0,\\quad \nu_{\\varepsilon}|_{t=0}=v_\\varepsilon^0,\\quad\n\\partial_tu_{\\varepsilon}|_{t=0}=v_\\varepsilon^1.\n\\end{aligned}\n\\right.\n\\end{equation}\nHere we make the following\n\\vspace{1mm}\n\n\\noindent\n{\\bf Assumption (A).}\\\nLet $\\Omega$ be a bounded domain in $\\mathbb R^N$ with smooth boundary $\\partial\\Omega$, $N\\ge 1$.\n\\begin{itemize}\n\\item[(i)]\nLet $T>0$, $\\varepsilon>0$ and $r>0$. 
Let $v_\\varepsilon^0\\in H^1_0(\\Omega)$ and $v_\\varepsilon^1\\in L^2(\\Omega)$ be such that\n\\begin{align*}\nv_\\varepsilon^0\\to v^0\\quad \\text{ weakly in } H^1_0(\\Omega)\\quad\\text{ and }\\quad \nv_\\varepsilon^1\\to v^1\\quad \\text{ weakly in } L^2(\\Omega).\n\\end{align*}\nLet $f_{\\varepsilon}, f\\in L^{2}(\\Omega\\times (0,T))$ be such that\n$$\nf_\\varepsilon\\to f\\ \\text{ weakly in }\\ L^2(\\Omega\\times (0,T)).\n$$\n\\item[(ii)]\nThe $N\\times N$ symmetric matrix $a\\in[C^1(0,T;L^{\\infty}(\\mathbb R^N))]^{N\\times N}$ satisfies a uniform ellipticity, i.e.,~there exists $\\lambda>0$ such that \n\\begin{equation}\n\\label{ellip}\n\\lambda |\\xi|^2\\le a(t,y) \\xi\\cdot\\xi\\le |\\xi|^2\\ \\text{ for any $\\xi\\in\\mathbb R^N$ and a.e.~$(t,y)\\in (0,T)\\times \\mathbb R^N$,}\n\\end{equation}\nand $(0,1)^N$-periodicity\\\/{\\rm :}\n\\begin{equation*}\na(t,y+e_j)=a(t,y)\\quad \\text{ a.e.~in $(t,y)\\in (0,T)\\times \\mathbb R^N$}.\n\\end{equation*}\n\\item[(iii)]\nSet $g\\in C(\\mathbb R;\\mathbb R_+)$ as follows\\\/{\\rm :} \n$$\ng(s)=g_{\\rm per}(s)+C_{\\ast}s>0\\ \\text{ for all $s\\in\\mathbb R_+$}.\n$$\nHere $g_{\\rm per}$ is a $(0,1)$-periodic function and $C_{\\ast} \\ge 0$ is a constant.\nIn addition, if $r=2$, we further assume $C_{\\ast}\\le \\frac{2\\lambda}{C_{\\square}}$, where $C_{\\square}=N\/\\pi^2$ is the best constant of the Poincar\\'{e} inequality on the unit cell, that is,\n$$\n\\|w\\|_{L^2(\\square)}\\le C_{\\square}\\|\\nabla w\\|_{L^2(\\square)}\\quad \\text{ for all }\\ w\\in H^1_{\\rm per}(\\square\n$$\n {\\rm(}see Notation below{\\rm)}.\n\\item[(iv)]\nIn addition, if $C_{\\ast}\\neq 0$ and $20$.\n\n\\subsection{Main results}\nWe start with the following definition of weak solutions to \\eqref{DW}\\\/{\\rm:}\n\\begin{defi}[Weak solution of \\eqref{DW}]\\label{sol}\nA function $u_{\\varepsilon}\\in L^{\\infty}(0,T;H^1_0(\\Omega))$ is said to be a weak solution to \\eqref{DW}, if the following {\\rm (i)-(iii)} are all 
satisfied\\\/{\\rm:} \n\\begin{itemize}\n\\item[(i)]{\\rm(}Regularity{\\rm)} $u_{\\varepsilon}\\in W^{2,2}(0,T;H^{-1}(\\Omega))\\cap W^{1,\\infty}(0,T;L^2(\\Omega))$.\n\\item[(ii)]{\\rm(}Initial condition{\\rm)} $u_{\\varepsilon}(t)\\to v^0_\\varepsilon$ strongly in $L^{2}(\\Omega)$ as $t\\to0_+$ and $\\partial_tu_{\\varepsilon}(t)\\to v^1_\\varepsilon$ in $H^{-1}(\\Omega)$ as $t\\to0_+$.\n\\item[(iii)]{\\rm(}Weak form{\\rm)} It holds that, for all $\\phi\\in H^1_0(\\Omega)$,\n\\begin{equation}\\label{weakform}\n\\left\\langle \\partial_{tt}^2 u_{\\varepsilon}(t),\\phi\\right\\rangle_{H^1_0(\\Omega)}\n+A_{\\varepsilon}^t(u_{\\varepsilon}(t),\\phi)\n+\\langle g(\\tfrac{t}{\\varepsilon^{r}})\\partial_t u_{\\varepsilon}(t),\\phi\\rangle_{H^1_0(\\Omega)}\t\n=\\langle f_{\\varepsilon}(t),\\phi\\rangle_{H^1_0(\\Omega)} \t\n\\end{equation}\nfor a.e.~in $t\\in(0,T)$, where $A_{\\varepsilon}^t(v,w)$ is a bilinear form in $H^1_0(\\Omega)$ defined by\n$$\nA_{\\varepsilon}^t(v,w) =\\int_{\\Omega}a\\left(t,\\tfrac{x}{\\varepsilon}\\right)\\nabla v(x)\\cdot \\nabla w(x)\\, dx\\quad\n\\text{ for }\\ v,w\\in H^1_0(\\Omega).\n$$\n\\end{itemize}\n\\end{defi}\n\nBy Galerkin's method (cf.~{\\cite[Theorem 12.2]{CD}}), we have\n\\begin{thm}[Existence and uniqueness of weak solutions to \\eqref{DW}]\\label{well-posedness}\nSuppose that \n\\begin{align*}\n&a(t,\\tfrac{x}{\\e})\\in [C^1(0,T;L^{\\infty}(\\Omega))]^{N\\times N}_{\\rm sym}, \\\ng(\\tfrac{t}{\\e^r})\\in C(0,T),\\\nf_{\\varepsilon}\\in L^2(\\Omega\\times (0,T)),\\\\\n&v^0_\\varepsilon\\in H^1_0(\\Omega),\\\nv^1_\\varepsilon\\in L^2(\\Omega) .\n\\end{align*}\nThen for every $\\varepsilon>0$ there exists a unique weak solution $u_{\\varepsilon}$ to \\eqref{DW}. \n\\end{thm}\n\n\nThen we first obtain the following homogenization theorem\\\/{\\rm :}\n\n\\begin{thm}[Homogenization theorem]\\label{HPthm}\nSuppose that {\\bf (A)} is satisfied. 
Let $u_{\\varepsilon}\\in L^{\\infty}(0,T;H^1_0(\\Omega))$ be a unique weak solution to \\eqref{DW}. There exist $u_0\\in L^{\\infty}(0,T;H^1_0(\\Omega))$ and $h\\in L^2_{\\rm loc}((0,T];H^{-1}(\\Omega))$ such that, for any $\\sigma>0$, \n\\begin{align}\nu_{\\varepsilon}&\\to u_0 &&\\text{weakly-$\\ast$ in } L^{\\infty}(0,T;H^1_0(\\Omega)),\\label{HPconv1}\\\\\nu_{\\varepsilon}&\\to u_0 &&\\text{strongly in } C([0,T];L^2(\\Omega)),\\label{HPconv2}\\\\\ng(\\tfrac{t}{\\e^r})\\partial_tu_{\\varepsilon}&\\to \\langle g_{\\rm per}\\rangle_s\\partial_t \nu_0+C_{\\ast} h \\quad &&\\text{weakly in } \n\\begin{cases}\nL^2(0,T;H^{-1}(\\Omega)) \\text{ if } C_{\\ast}=0, \\\\\nL^2(\\sigma,T;H^{-1}(\\Omega)) \\text{ if } C_{\\ast}\\neq 0,\\\\\n\\end{cases}\\label{HPconv4}\\\\\n\\partial_{tt}^2u_{\\varepsilon}&\\to\\partial_{tt}^2u_{0}&&\\text{weakly in } \n\\begin{cases}\nL^2(0,T;H^{-1}(\\Omega)) \\text{ if } C_{\\ast}=0, \\\\\nL^2(\\sigma,T;H^{-1}(\\Omega)) \\text{ if } C_{\\ast}\\neq 0,\\\\\n\\end{cases}\\label{HPconv5}\\\\\na(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}&\\to \\langle a(t)(\\nabla u_0+\\nabla_y u_1) \\rangle_y &&\\text{weakly in } [L^2(\\Omega\\times (0,T))]^N,\\label{HPconv3}\n\\end{align}\nwhere $\\langle w\\rangle_{s}=\\int_{0}^1w (s)\\, ds$, $\\langle \\hat{w}\\rangle_{y}=\\int_{\\square}\\hat{w}(y)\\, dy$ and $u_1$ is written by \n\\begin{align}\n u_1 (x,t,y): =\\sum_{k=1}^N\\partial_{x_k}u_0(x,t)\\Phi_k(t,y).\\label{HPu1}\n\\end{align}\nHere $\\Phi_k$ is a corrector for each $k=1,\\ldots, N$ and it is characterized as follows\\\/{\\rm :}\n\\begin{itemize}\n\\item[{\\rm(i)}] In case $r\\in(0, +\\infty)\\setminus\\{2\\}$, $\\Phi_k\\in H^1_{\\mathrm{per}}(\\mathbb{T}^N)\/\\mathbb R$ {\\rm(}see Notation below{\\rm)} is the unique solution to \n\\begin{equation}\\label{CPslow}\n-{\\rm{div}}_y\\left[a(t,y)(\\nabla_y\\Phi_k+e_{k})\\right]=0\\ \\text{ in }\\ \\mathbb{T}^N\\times (0,T),\n\\end{equation}\nwhere $e_{k}$ is the $k$-th vector of the canonical basis of 
$\\mathbb R^N$.\n\\item\n[{\\rm (ii)}] In case $r=2$, $\\Phi_k\\in L^{2}(0,T;H^1_{\\mathrm{per}}(\\mathbb{T}^N)\/\\mathbb R)$ is the unique solution to\n\\begin{equation}\\label{CPcritical}\nC_{\\ast}t\\partial_t\\Phi_k-{\\rm{div}}_y\\left[a(t,y)(\\nabla_y\\Phi_k+e_{k})\\right]=0\\ \\text{ in }\\ \\mathbb{T}^N\\times (0,T).\n\\end{equation}\nIn particular, if either $C_{\\ast}= 0$ or $a=a(y)$, then $\\Phi_k\\in H^1_{\\mathrm{per}}(\\mathbb{T}^N)\/\\mathbb R$ is the unique solution to \\eqref{CPslow}. \n\\end{itemize}\nFurthermore, for any $C_{\\ast}\\ge 0$, $u_0$ is the unique weak solution to \n\\begin{equation}\\label{HDW2}\n\\displaystyle\n\\left\\{\n\\begin{aligned}\n&\\partial_{tt}^2u_{0}-{\\rm{div}}\\left[ a_{\\rm hom}(t)\\nabla u_{0} \\right]+ \\langle g_{\\rm per}\\rangle_s\\partial_t u_0+C_{\\ast} h =f \\quad\\text{ in } \\Omega\\times (0,T), \\\\\n&u_{0}|_{\\partial\\Omega}=0 , \\quad \tu_{0}|_{t=0}=v^0,\\quad \n\\partial_tu_{0}|_{t=0}= \\tilde{v}^1. \n\\end{aligned}\n\\right.\n\\end{equation} \nHere $u_0\\equiv v^0$ whenever $C_{\\ast}\\neq 0$, and moreover, \n\\begin{equation*}\\label{v1tilde}\n\\tilde{v}^1=\n\\begin{cases}\nv^1 &\\text{ if }\\ C_{\\ast}=0,\\\\\n0 &\\text{ if }\\ C_{\\ast}\\neq 0.\n\\end{cases}\n\\end{equation*}\nMoreover, $a_{\\rm hom}(t)$ is the homogenized matrix given by\n\\begin{equation}\\label{a_hom}\na_{\\rm hom}(t)e_k=\\int_{\\square}a(t,y)\\bigl(\\nabla_y \\Phi_k(t,y)+e_k\\bigl)\\, dy, \\quad k=1,2,\\ldots,N.\n\\end{equation}\n\\end{thm}\n\n\\begin{rmk}\n\\rm\nIt is noteworthy that, due to the loss of the time periodicity, the following facts hold\\\/{\\rm :} \n\\begin{itemize}\n\\item[(i)] {\\bf(Homogenized equation).\\\/}\nThe homogenized equation \\eqref{HDW2} is of the same type as the original equation \\eqref{DW} for the periodic case (i.e., $C_{\\ast}= 0$). 
On the other hand, for the quasi-periodic case (i.e., $C_{\\ast}\\neq 0$), by the effect of the singular limit of $g$, \\eqref{HDW2} is represented as the following elliptic equation\\\/{\\rm :}\n\\begin{equation*}\n-{\\rm{div}} (a_{\\rm hom}\\nabla u_{0}) =f-C_{\\ast}h \\ \\text{ in }\\ \\Omega\\times (0,T), \\quad u_0\\in H^1_0(\\Omega).\n\\end{equation*}\nFurthermore, the limit of the solution to \\eqref{DW} coincides with the limit of the initial data $v_\\varepsilon^0$.\n\\item[(ii)] {\\bf(Cell problem).\\\/}\nFor the periodic case $C_{\\ast}= 0$, the corrector $\\Phi_k$ is always described as the solution to the elliptic equation \\eqref{CPslow}. On the other hand, for the quasi-periodic case, at the critical case $r=2$, the cell problem \\eqref{CPcritical} is different from \\eqref{CPslow} and it is given as the parabolic equation by the effect of the singular limit of $g$.\nThus $\\Phi_k$ depends on $t\\in (0,T)$, and then, \nqualitative properties of the homogenized matrix $a_{\\rm hom}$ will change due to \\eqref{a_hom} (see Proposition \\ref{property_of_a_hom} below).\n\n\\end{itemize}\n\\end{rmk}\n\nMoreover, as for the homogenized matrix, we next have the following \n\\begin{prop}[Qualitative properties of the homogenized matrix $a_{\\rm hom}$]\\label{property_of_a_hom}\nUnder the same assumption as in Theorem \\ref{HPthm},\nlet $0 0$ is the ellipticity constant of $a(t,y)$ defined by \\eqref{ellip} and $\\Phi_{\\xi}$ is the corrector given by either \\eqref{CPslow} or \\eqref{CPcritical} with $e_k$ replaced by $\\xi\\in \\mathbb R^N$.\\vspace{3mm}\n\\item[(ii)]{\\rm(}Symmetry and asymmetry{\\rm)}\nIf $a(t,y)$ is the symmetric matrix, then $a_{\\rm hom}(t)$ is the asymmetric matrix for $r=2$ and $C_{\\ast}\\neq 0$. Otherwise, $a_{\\rm hom}(t)$ is also the symmetric matrix. 
\n\\end{itemize}\n\\end{prop}\n\n\\begin{rmk}\n\\rm\nWe stress that, in the critical case (i.e., $r=2$ and $C_{\\ast}\\neq 0$), even though the elliptic constant of $a(t, y)$ is independent of $t \\in (0, T)$, that of $a_{\\rm hom}(t)$ depends on $t$.\nFurthermore, the symmetry breaking of $a_{\\rm hom}(t)$ occurs \nbut it makes no contribution to the divergence (see Remark \\ref{skew} below). \n\\end{rmk}\n\nWe finally get the following corrector result. \n\\begin{thm}[Corrector result for time independent coefficients]\\label{CR}\nSuppose that {\\bf(A)} is fulfilled and assume that \n$a=a(y)$, $v_\\varepsilon^0$, $v_\\varepsilon^1$, $a(y)$, $g(s)$ and $f_\\varepsilon$ are smooth, $(-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla v_\\varepsilon^0))$, $(v_\\varepsilon^1)$, \n$(f_\\varepsilon)$ and $(\\partial_t f_\\varepsilon)$ are bounded in $L^2(\\Omega)$, $H^1_0(\\Omega)$, $L^{\\infty}(0,T;L^2(\\Omega))$ and $L^2(\\Omega\\times (0,T))$, respectively. \nLet $u_{\\varepsilon}$ and $u_0$ be the unique solutions to \\eqref{DW} and \\eqref{HDW2}, respectively. Then it holds that \n\\begin{equation}\\label{errorest}\n\\displaystyle \\lim_{\\varepsilon\\to 0_+}\t\\int_{0}^T\\int_{\\Omega} \\left|\\nabla u_{\\varepsilon}(x,t)-\\bigl(\\nabla u_0(x,t)+\\nabla_y u_1(x,t,\\tfrac{x}{\\e})\\bigl)\\right|^2\\, dxdt=0 \n\\end{equation}\nfor all $r\\in (0,+\\infty)$, where $u_1=\\sum_{k=1}^N\\partial_{x_k}u_0\\Phi_k$ and $\\Phi_{k}\\in L^2(0,T;H^1_{\\rm per} (\\mathbb{T}^N)\/\\mathbb R)$ is the corrector for $r\\in(0,+\\infty)$. \n\\end{thm}\n\nAs for the time dependent case $a=a(t,y)$, we have the following corrector result for more specific settings\\\/{\\rm :}\n\\begin{corollary}[Corrector result for time dependent coefficients]\\label{CR2}\nSuppose that $C_{\\ast}\\neq 0$. 
In addition, assume that $a(t,y)$ is smooth and the following \\eqref{ellip2}-\\eqref{f-add} hold\\\/{\\rm :}\n\\begin{align}\n&\\partial_t a(t,y)\\xi\\cdot\\xi \\le 0\\ \n\\quad \\text{ for all $\\xi\\in \\mathbb R^N$ and all $(t,y)\\in (0,T)\\times \\mathbb R^N$}, \\label{ellip2} \\\\\n&-{\\rm{div}}(a(0,\\tfrac{x}{\\e})\\nabla v_\\varepsilon^0)\\to -{\\rm{div}}(a_{\\rm hom}(0)\\nabla v^0) \\text{ strongly in $H^{-1}(\\Omega)$}, \\label{strong hinv}\\\\\n&\\lim_{\\varepsilon\\to 0_+}\\|v_\\varepsilon^1\\|_{L^2(\\Omega)}=0, \\label{initialv-add}\\\\\n&f_\\varepsilon\\to f \\text{ strongly in } L^2(\\Omega\\times (0,T))\n\\text{ or } (f_\\varepsilon\/\\sqrt{t}) \\text{ is bounded in } L^2(\\Omega\\times (0,T)),\n\\label{f-add}\n\\end{align} \nIn addition, if $r=2$ and $C_\\ast \\neq 0$, assume that\n\\begin{equation}\n\\partial_t a(t,y)=-a(t,y) \\quad \\text{ for all $(t,y)\\in (0,T)\\times \\mathbb R^N$}. \\label{ellip3}\n\\end{equation}\nHere $a_{\\rm hom}(t)$ is the homogenized matrix defined by \\eqref{a_hom}.\nLet $u_{\\varepsilon}$ and $u_0$ be the unique solutions to \\eqref{DW} and \\eqref{HDW2}, respectively. Then \\eqref{errorest} holds. \n\\end{corollary}\n\n\\begin{rmk}\t\n\\rm\nInitial data $v_\\varepsilon^0\\in H^1_0(\\Omega)$ satisfying \\eqref{strong hinv} can actually be constructed (see e.g.~\\cite[pp.~236]{CD}).\n\\end{rmk}\n\n\\begin{rmk}\\label{corrector}\n\\rm\nFrom Theorem \\ref{CR}, it holds that \n\\begin{equation*}\nu_{\\varepsilon}\\not\\to u_0 \\quad \\text{ strongly in }L^2(0,T;H^1_0(\\Omega))\n\\end{equation*}\nin general due to the oscillation of the third term $u_1(x,t,\\tfrac{x}{\\e})$ as $\\varepsilon\\to 0_+$.\nThus $u_1(x,t,\\tfrac{x}{\\e})$ plays a role as the corrector term recovering the strong compactness in this topology. For this reason, $\\Phi_k$ is often called a corrector. \n\\end{rmk}\n\n\\subsection{Plan of the paper and notation}\nThis paper is organized as follows. 
In the next section, we summarize relevant material on space-time two-scale convergence. Section $3$ is devoted to proving uniform estimates for solutions $u_\\varepsilon$ to \\eqref{DW} as $\\varepsilon\\to 0_+$. Furthermore, we shall prove their weak(-$\\ast$) and strong convergences.~In Section $4$, we shall prove Theorem \\ref{HPthm}. To prove Proposition \\ref{property_of_a_hom}, we shall discuss qualitative properties of the homogenized matrix $a_{\\rm hom}(t)$ in Section 5. The final section is devoted to proofs of Theorem \\ref{CR} and Corollary \\ref{CR2}.\n\n\\noindent\n{\\bf Notation.}\\ \nThroughout this paper, $C>0$ denotes a non-negative constant which may vary from line to line. In addition, the subscript A of $C_{A}$ means dependence of $C_{A}$ on $A$. Let $\\delta_{ij}$ be the Kronecker delta, $e_i=(\\delta_{ij})_{1\\le j\\le N}$ be the $i$-th vector of the basis of $\\mathbb R^N$, $\\|\\cdot\\|_{H^1_0(A)}$ be defined by $\\|\\cdot\\|_{H^1_0(A)}:=\\|\\nabla\\cdot\\|_{L^2(A)}$ for domains $A\\subset \\mathbb R^N$, $\\nabla$ and $\\nabla_y$ denote gradient operators with respect to $x$ and $y$, respectively, and ${\\rm{div}}$ and ${\\rm{div}}_y$ denote divergence operators with respect to $x$ and $y$, respectively. 
Furthermore, we shall use the following notation\\\/:\n\\begin{itemize}\n\\item $\\square=(0,1)^N$, \\quad $I=(0,T)$,\\quad $J=(0,1)$,\\quad $dZ=dydsdxdt$.\n\\item Define the set of smooth $\\square$-periodic functions by\n\\begin{align*}\nC^{\\infty}_{\\rm per}(\\square) \n&= \\{w\\in C^{\\infty}(\\square) \\colon w(\\cdot+e_k)=w(\\cdot) \\text{ in } \\mathbb R^N \\ \\text{ for }\\ 1\\leq k \\leq N\\}.\n\\end{align*}\n\\item We also define $W^{1,q}_{\\rm per}(\\square)$ and $L^q_{\\rm per}(\\square)$ as closed subspaces of $W^{1,q}(\\square)$ and $L^q(\\square)$ by\n$$\nW^{1,q}_{\\rm per}(\\square) = \\overline{C^\\infty_{\\rm per}(\\square)}^{\\|\\cdot\\|_{W^{1,q}(\\square)}}, \\quad L^q_{\\rm per}(\\square) = \\overline{C^\\infty_{\\rm per}(\\square)}^{\\|\\cdot\\|_{L^q(\\square)}},\n$$ \nrespectively, for $1\\leq q < +\\infty$. In particular, set $H^1_{\\rm per}(\\square) := W^{1,2}_{\\rm per}(\\square)$. We shall simply write $L^q(\\square)$ instead of $L^q_{\\rm per}(\\square)$, unless any confusion may arise.\n\\item We often write $L^q(\\Omega\\times \\square)$ instead by $L^q(\\Omega;L^q_{\\rm per}(\\square))$ since $L^q_{\\rm per}(\\square)$ is reflexive Banach space for $10$, and\n\\begin{align*}\n&\\lim_{\\varepsilon\\to 0_+}\\left\\|\\Psi\\left(x,t,\\tfrac{x}{\\varepsilon},\\tfrac{t}{\\varepsilon^r}\\right)\\right\\|_{L^{q'}(\\Omega\\times I)} = \\left\\|\\Psi(x,t,y,s)\\right\\|_{L^{q'}(\\Omega\\times I\\times \\square\\times J)}, \n\\\\\n &\\left\\|\\Psi\\left(x,t,\\tfrac{x}{\\varepsilon},\\tfrac{t}{\\varepsilon^r}\\right)\\right\\|_{L^{q'}(\\Omega\\times I)} \\le C\\left\\|\\Psi(x,t,y,s)\\right\\|_{X}\\quad \\text{ for }\\ \\varepsilon>0. 
\n\\end{align*}\nMoreover, $\\Psi\\in$ $X$ is called an \\emph{admissible test function} {\\rm (}for the weak space-time two-scale convergence in $L^q(\\Omega\\times I\\times \\square \\times J)${\\rm)}.\n\\end{defi}\n\nThe following fact is well known and often used, in particular, to discuss weak convergence of periodic test functions.\n\\begin{prop}[Mean-value property]\\label{mean}\nLet $w\\in L^q(\\square\\times J)$ and set $w_{\\varepsilon}(x,t)=w(\\tfrac{x}{\\e},\\tfrac{t}{\\e^r})$ for $\\varepsilon>0$ and $00$. Then the following {\\rm(i)-(vi)} hold\\\/{\\rm :}\n\\begin{enumerate}\n\\rm\n\\item[(i)]\n$(u_{\\varepsilon}) $ is bounded in $L^{\\infty}(I;H^1_{0}(\\Omega))$,\n\\item[(ii)]\n$(\\partial_tu_{\\varepsilon})$ is bounded in $L^{\\infty}(I;L^{2}(\\Omega))$,\n\\item[(iii)]\n$(\\sqrt{t\\varepsilon^{-r}}\\partial_{t}u_{\\varepsilon})$ is bounded in $L^2(\\Omega\\times I)$, provided that $C_{\\ast}\\neq 0$,\n\\item[(iv)]\n$(\\partial_{tt}^2u_{\\varepsilon}+g(\\tfrac{t}{\\e^r})\\partial_{t}u_{\\varepsilon})$ is bounded in $L^2(I;H^{-1}(\\Omega))$,\n\\item[(v)]\n$(\\partial_{tt}^2u_{\\varepsilon})$ is bounded in \n$\\begin{cases}\nL^2(I;H^{-1}(\\Omega)) &\\text{ if } C_{\\ast}= 0,\\\\\nL^2(I_\\sigma;H^{-1}(\\Omega)) &\\text{ if } C_{\\ast}\\neq 0,\n\\end{cases}$\n\\item[(vi)]\n$(t\\varepsilon^{-r}\\partial_{t}u_{\\varepsilon})$ is bounded in $L^2(I_\\sigma;H^{-1}(\\Omega))$, provided that $C_{\\ast}\\neq 0$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nRecall \\eqref{weakform}, i.e., \n\\begin{align*}\n\\langle \\partial_{tt}^2u_{\\varepsilon}(t), \\phi\\rangle_{H^1_0(\\Omega)}+A_{\\varepsilon}^t(u_{\\varepsilon}(t),\\phi)+\\langle g(\\tfrac{t}{\\e^r})\\partial_t u_{\\varepsilon}(t), \\phi\\rangle_{H^1_0(\\Omega)}\n=\\langle f_{\\varepsilon}(t), \\phi\\rangle_{H^1_0(\\Omega)}\n\\end{align*}\nfor all $\\phi\\in H^1_0(\\Omega)$.\nTesting it by $\\partial_t u_{\\varepsilon}$ (see Remark \\ref{test reg} below), we deduce by the symmetry of $a(t,y)$ 
that\n\\begin{align}\n\\lefteqn{\\int_{\\Omega}a(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}(x,t)\\cdot \\nabla \\partial_t u_{\\varepsilon}(x,t)\\, dx}\\label{Herm-mainterm}\\\\\n&=\n\\frac{1}{2}\\frac{d}{dt}\\int_{\\Omega}a(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}(x,t)\\cdot \\nabla u_{\\varepsilon}(x,t)\\, dx\n-\n\\frac{1}{2}\\int_{\\Omega}\\partial_t a(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}(x,t)\\cdot \\nabla u_{\\varepsilon}(x,t)\\, dx\\nonumber\n\\end{align}\na.e.~in $I$. Thus we have\n\\begin{align}\n\\lefteqn{\\frac{1}{2}\\int_0^s\\frac{d}{dt}\\int_{\\Omega}\\Bigl[|\\partial_t u_{\\varepsilon}(x,t)|^2+a\\left(t,\\tfrac{x}{\\e}\\right)\\nabla u_{\\varepsilon}(x,t)\\cdot \\nabla u_{\\varepsilon}(x,t)\\Bigl]\\, dxdt}\\label{re-weak}\\\\\n&\\stackrel{\\eqref{Herm-mainterm}}{=}\n\\frac{1}{2}\\int_0^s\\int_{\\Omega}\\partial_t a(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}(x,t)\\cdot \\nabla u_{\\varepsilon}(x,t)\\, dxdt\\nonumber\n\\\\\n&\\quad+\n\\int_0^s\\int_{\\Omega}f_{\\varepsilon}(x,t) \\partial_tu_{\\varepsilon}(x,t)\\, dxdt-\\int_0^s\\Bigl(g_{\\rm per}(\\tfrac{t}{\\varepsilon^r})+C_{\\ast}\n\\tfrac{t}{\\varepsilon^r}\\Bigl)\\|\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2\\, dt\\nonumber\n\\end{align}\nfor all $s\\in I$.\nThen we observe from the uniform ellipticity \\eqref{ellip}, \\eqref{re-weak} and {\\bf (A)} that \n\\begin{align}\n&\\|\\partial_t u_{\\varepsilon}(s)\\|_{L^2(\\Omega)}^2+\\lambda\\|u_{\\varepsilon}(s)\\|_{H^1_0(\\Omega)}^2\\nonumber\\\\\n&\\stackrel{\\eqref{ellip}}{\\le}\\|v^1_\\varepsilon\\|_{L^2(\\Omega)}^2+\\|v^0_\\varepsilon\\|_{H^1_0(\\Omega)}^2+\\int_0^s\\frac{d}{dt}\\left(\\|\\partial_t u_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2+\\int_{\\Omega}a(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}(x,t)\\cdot\\nabla u_{\\varepsilon}(x,t)\\, dx\\right)\\, dt\\nonumber\\\\\n&\\stackrel{\\eqref{re-weak}}{=}\\|v^1_\\varepsilon\\|_{L^2(\\Omega)}^2+\\|v^0_\\varepsilon\\|_{H^1_0(\\Omega)}^2\n+\\int_0^s\\int_{\\Omega}\\partial_t 
a(t,\\tfrac{x}{\\e})\\nabla u_{\\varepsilon}(x,t)\\cdot \\nabla u_{\\varepsilon}(x,t)\\, dxdt\n\\nonumber\\\\\n&\\quad+2\\left( \\int_0^s\\int_{\\Omega}f_{\\varepsilon}(x,t) \\partial_t u_{\\varepsilon}(x,t)\\, dxdt-\\int_0^s\\Bigl(g_{\\rm per}(\\tfrac{t}{\\e^r})+C_{\\ast}\\tfrac{t}{\\varepsilon^r}\\Bigl)\\|\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2\\, dt\\right)\\nonumber\\\\\n&\\stackrel{{\\bf(A)}}{\\le} \\|v^1_\\varepsilon\\|_{L^2(\\Omega)}^2+\\|v^0_\\varepsilon\\|_{H^1_0(\\Omega)}^2+\\sup_{t\\in I}\\|\\partial_t a(t)\\|_{L^\\infty(\\square)}\\int_0^s\\|u_\\varepsilon(t)\\|_{H^1_0(\\Omega)}^2\\, dt\\nonumber\\\\\n&\\quad+2 \\int_0^s\\left[\\|f_{\\varepsilon}(t)\\|_{L^2(\\Omega)}\\| \\partial_t u_{\\varepsilon}(t)\\|_{L^2(\\Omega)}+\\Bigl(\\beta-C_{\\ast}\\tfrac{ t}{\\varepsilon^r}\\Bigl)\\|\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2\\right]\\, dt\\nonumber\\\\\n&\\le\\left(\\|v^1_\\varepsilon\\|_{L^2(\\Omega)}^2+\\|v^0_\\varepsilon\\|_{H^1_0(\\Omega)}^2+\\|f_{\\varepsilon}\\|_{L^2(\\Omega\\times I)}^2\\right)\\nonumber\\\\\n&\\quad +C_{\\beta}\\int_0^s\\left(\\|\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2+\\|u_\\varepsilon(t)\\|_{H^1_0(\\Omega)}^2\\right)\\, dt-C_\\ast\\int_0^s\\|\\sqrt{t\\varepsilon^{-r}}\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2\\, dt.\\nonumber\n\\end{align}\nHere $\\beta=\\max_{s\\in [0,1]}|g_{\\rm per}(s)|$. From the boundedness of $(f_{\\varepsilon})$ in $L^2(\\Omega\\times I)$, we get\n\\begin{align}\n&\\|\\partial_t u_{\\varepsilon}(s)\\|_{L^2(\\Omega)}^2+\\lambda\\|u_{\\varepsilon}(s)\\|_{H^1_0(\\Omega)}^2+C_\\ast\\int_0^s\\|\\sqrt{t\\varepsilon^{-r}}\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2\\, dt\\label{bdd1}\\\\\n&\\quad \\le C+C_{\\beta}\\int_0^s\\left(\\|\\partial_tu_{\\varepsilon}(t)\\|_{L^2(\\Omega)}^2+\\|u_\\varepsilon(t)\\|_{H^1_0(\\Omega)}^2\\right)\\, dt,\\nonumber\n\\end{align}\nwhich together with Gronwall's inequality yields (i) and (ii). 
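To spell out the Gronwall step (a routine argument, recorded here for the reader's convenience), set\n$$\n\\varphi(s):=\\|\\partial_t u_{\\varepsilon}(s)\\|_{L^2(\\Omega)}^2+\\lambda\\|u_{\\varepsilon}(s)\\|_{H^1_0(\\Omega)}^2.\n$$\nDropping the nonnegative last term on the left-hand side of \\eqref{bdd1}, we obtain\n$$\n\\varphi(s)\\le C+C_{\\beta}\\max\\{1,\\lambda^{-1}\\}\\int_0^s\\varphi(t)\\, dt\\quad \\text{ for all }\\ s\\in I,\n$$\nand hence the integral form of Gronwall's inequality yields $\\varphi(s)\\le C\\exp\\bigl(C_{\\beta}\\max\\{1,\\lambda^{-1}\\}T\\bigl)$ for all $s\\in I$, uniformly for $\\varepsilon>0$.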
\nMoreover, (iii) also follows from (i), (ii) and \\eqref{bdd1}.\nWe next prove (iv). For any $\\phi\\in H^1_0(\\Omega)$, the weak form \\eqref{weakform} yields\n\\begin{align}\n| \\langle \\partial_{tt}^2u_{\\varepsilon}(t)+g(\\tfrac{t}{\\e^r})\\partial_tu_\\varepsilon(t), \\phi\\rangle_{H^1_0(\\Omega)} |\n\\le\t\\|\\phi\\|_{H^1_0(\\Omega)}\\left(\\|f_{\\varepsilon}(t)\\|_{H^{-1}(\\Omega)}+\\|u_{\\varepsilon}(t)\\|_{H^1_0(\\Omega)}\\right).\\label{lem3.1(iv)}\n\\end{align}\nHere we used the fact that\n\\begin{equation*}\\label{ray}\n|a(t,y)\\xi\\cdot \\zeta|\\le |\\xi| |\\zeta|\n\\quad \\text{ for all $\\xi,\\zeta\\in \\mathbb R^N$ and a.e.~$(t,y)\\in I\\times\\mathbb R^N$},\n\\end{equation*}\nwhich follows from the Rayleigh-Ritz variational principle. By the boundedness of $(f_{\\varepsilon})$ in $L^2(I;H^{-1}(\\Omega))$ together with (i) and \\eqref{lem3.1(iv)}, we deduce that \n\\begin{align*}\n\\lefteqn{\\int_0^T\\|\\partial_{tt}^2u_{\\varepsilon}(t)+g(\\tfrac{t}{\\e^r})\\partial_tu_\\varepsilon(t)\\|^2_{H^{-1}(\\Omega)}\\, dt}\\\\\n&\\stackrel{\\eqref{lem3.1(iv)}}{\\le}\n\\int_0^T\\left(\\|f_{\\varepsilon}(t)\\|_{H^{-1}(\\Omega)}+\\|u_{\\varepsilon}(t)\\|_{H^1_0(\\Omega)}\\right)^2\\, dt\n\\le\n2\\left(\\|f_{\\varepsilon}\\|_{L^2(I;H^{-1}(\\Omega))}^2+\\|u_{\\varepsilon}\\|_{L^2(I;H^1_0(\\Omega))}^2\\right),\n\\end{align*}\nwhich implies that (iv) holds true. Here noting that \n\\begin{align*}\n\\lefteqn{\n\\|\\partial_{tt}^2u_{\\varepsilon}+C_{\\ast}t\\varepsilon^{-r}\\partial_{t}u_{\\varepsilon}\\|_{L^2(I;H^{-1}(\\Omega))}}\\\\\n&\\quad \\le\n\\|\\partial_{tt}^2u_{\\varepsilon}+g(\\tfrac{t}{\\e^r})\\partial_{t}u_{\\varepsilon}\\|_{L^2(I;H^{-1}(\\Omega))}\n+\\beta\\|\\partial_tu_\\varepsilon\\|_{L^2(I;H^{-1}(\\Omega))}0$. 
Then the following {\\rm(i)-(v)} hold\\\/{\\rm :}\n\\begin{enumerate}\n\\item[(i)]\n$(-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon))$ is bounded in $L^{\\infty}(I;L^2(\\Omega))$,\n\\item[(ii)]\n$(\\partial_tu_{\\varepsilon})$ is bounded in $L^{\\infty}(I;H^1_0(\\Omega))$,\n\\item[(iii)]\n$(\\partial_{tt}^2u_{\\varepsilon}+g(\\tfrac{t}{\\e^r})\\partial_{t}u_{\\varepsilon})$ is bounded in $L^{\\infty}(I;L^2(\\Omega))$,\n\\item[(iv)]\n$(\\partial_{tt}^2u_{\\varepsilon})$ is bounded in $L^2(\\Omega\\times I_\\sigma)$,\n\\item[(v)]\n$(t\\varepsilon^{-r}\\partial_{t}u_{\\varepsilon})$ is bounded in $L^2(\\Omega\\times I_\\sigma)$, provided that $C_{\\ast}\\neq 0$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nTest \\eqref{weakform} by $-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla \\partial_t u_\\varepsilon)$. Then we observe that\n\\begin{align*}\n\\lefteqn{\\int_0^s\\int_{\\Omega}\\partial_{tt}^2u_\\varepsilon(x,t)\\Bigl(-{\\rm{div}} \\bigl(a(\\tfrac{x}{\\e})\\nabla \\partial_t u_\\varepsilon(x,t)\\bigl)\\Bigl)\\, dxdt}\\\\\n&=\n\\int_0^s\\int_{\\Omega}\\partial_t\\bigl(\\nabla \\partial_t u_\\varepsilon(x,t)\\bigl)\\cdot a(\\tfrac{x}{\\e})\\nabla \\partial_tu_\\varepsilon(x,t)\\, dxdt\\\\\n&=\n\\frac{1}{2}\\int_0^s\\frac{d}{dt}\\left(\\int_{\\Omega}a(\\tfrac{x}{\\e})\\nabla\\partial_tu_\\varepsilon(x,t)\\cdot\\nabla \\partial_tu_\\varepsilon(x,t)\\, dx\\right)dt\n\\stackrel{\\eqref{ellip}}{\\ge}\n\\frac{\\lambda}{2}\\|\\partial_tu_\\varepsilon(s)\\|_{H^1_0(\\Omega)}^2-\\frac{1}{2}\\|v^1_\\varepsilon\\|_{H^1_0(\\Omega)}^2\n\\end{align*}\nand \n\\begin{align*}\n\\lefteqn{\\int_0^s\\int_\\Omega\\left(-{\\rm{div}} \\bigl(a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(x,t)\\bigl)\\right)\\left(-{\\rm{div}} \\bigl(a(\\tfrac{x}{\\e})\\nabla \\partial_tu_\\varepsilon(x,t)\\bigl)\\right)\\, dxdt}\\\\\n&\\quad=\n\\frac{1}{2}\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(s))\\|_{L^2(\\Omega)}^2-\\frac{1}{2}\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla 
v_\\varepsilon^0)\\|_{L^2(\\Omega)}^2\n\\end{align*}\nfor all $s\\in I$. Hence we derive that\n\\begin{align*}\n\\lefteqn{\\frac{\\lambda}{2}\\|\\partial_tu_\\varepsilon(s)\\|_{H^1_0(\\Omega)}^2+\\frac{1}{2}\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(s))\\|_{L^2(\\Omega)}^2}\\\\\n&\\le\n\\frac{1}{2}\\|v^1_\\varepsilon\\|_{H^1_0(\\Omega)}^2+\\frac{1}{2}\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla v^0_\\varepsilon)\\|_{L^2(\\Omega)}^2\\\\\n&\\quad+\n\\int_{\\Omega}f_\\varepsilon(x,s)\\Bigl(-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(x,s))\\Bigl)\\, dx-\\int_{\\Omega} f_\\varepsilon(x,0)\\Bigl(-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla v_\\varepsilon^0 (x) )\\Bigl)\\, dx\\\\\n&\\quad-\n\\int_0^s\\int_{\\Omega}\\partial_t f_\\varepsilon(x,t)\\Bigl(-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(x,t))\\Bigl)\\, dxdt\\\\\n&\\quad-\n\\int_0^s\\int_{\\Omega}g(\\tfrac{t}{\\e^r})\\nabla \\partial_t u_\\varepsilon(x,t)\\cdot a(\\tfrac{x}{\\e})\\nabla \\partial_tu_\\varepsilon(x,t)\\, dxdt\\\\\n&\\le\n\\frac{1}{2}\\|v^1_\\varepsilon\\|_{H^1_0(\\Omega)}^2+\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla v^0_\\varepsilon)\\|_{L^2(\\Omega)}^2\n\\\\\n&\\quad+\n\\|f_\\varepsilon(s)\\|_{L^2(\\Omega)}^2+\\frac{1}{4}\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(s))\\|_{L^2(\\Omega)}^2\n+\\frac{1}{2}\\|f_\\varepsilon(0)\\|_{L^2(\\Omega)}^2\\\\\n&\\quad \n+\\frac{1}{2}\\|\\partial_tf_\\varepsilon\\|_{L^2(\\Omega\\times I)}^2+\\frac{1}{2}\\int_0^s\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(t))\\|_{L^2(\\Omega)}^2\\, dt-\\underbrace{\\int_0^s\\lambda g(\\tfrac{t}{\\e^r})\\|\\nabla\\partial_t u_\\varepsilon(t)\\|_{L^2(\\Omega)}^2\\, dt}_{\\ge 0}\\\\\n&\\le \nC+\n\\frac{1}{4}\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(s))\\|_{L^2(\\Omega)}^2\n+\\frac{1}{2}\\int_0^s\\|-{\\rm{div}} (a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon(t))\\|_{L^2(\\Omega)}^2\\, dt,\n\\end{align*}\nwhich together with Gronwall's inequality yields (i) and (ii). 
Hence one can derive that\n\\begin{equation}\n\\label{u1zero2}\n\\|\\partial_{tt}^2u_\\varepsilon+g(\\tfrac{t}{\\e^r})\\partial_tu_{\\varepsilon}\\|_{L^{\\infty}(I;L^2(\\Omega))}\\le\n\\|f_\\varepsilon\\|_{L^{\\infty}(I;L^2(\\Omega))}+\\|-{\\rm{div}}(a(\\tfrac{x}{\\e})\\nabla u_\\varepsilon)\\|_{L^{\\infty}(I;L^2(\\Omega))},\n\\end{equation}\nwhich implies (iii). As in the proofs of \\eqref{t-indepbdd} and \\eqref{t-indepbdd2}, (iv) and (v) follow from \\eqref{u1zero2}.\n\n\\end{proof}\nApplying Lemma \\ref{bdd}, we next get the following \n\\begin{lem}[Weak(-star) and strong convergences]\\label{conv}\nLet $u_{\\varepsilon}\\in L^{\\infty}(I;H^1_0(\\Omega))$ be the unique weak solution of \\eqref{DW} under the same assumption as in Lemma \\ref{bdd}. \nThen there exist a subsequence $(\\varepsilon_n)$ of $(\\varepsilon)$, $u_{0}\\in L^{\\infty}(I;H^1_0(\\Omega))$, \n $w\\in L^2(I;H^{-1}(\\Omega))$ and $h \\in L^2_{\\rm loc}((0,T];H^{-1}(\\Omega))$ such that, for any $\\sigma\\in I$, \n\\begin{align}\n\\label{conv1}\nu_{\\varepsilon_n}&\\to u_{0}\\quad &&\\text{ weakly-}\\ast \\text{ in }\\ L^{\\infty}(I;H^1_0(\\Omega)),\\\\\n\\label{conv2}\n\\partial_{t}u_{\\varepsilon_n}&\\to \\partial_{t}u_{0}\\quad &&\\text{ weakly-}\\ast \\text{ in }\\ L^{\\infty}(I;L^2(\\Omega)),\\\\\n\\label{conv9}\n\\partial_{tt}^2u_{\\varepsilon_n}+g(\\tfrac{t}{\\e_n^r})\\partial_{t}u_{\\varepsilon_n}&\\to w \\quad &&\\text{ weakly in }\\ L^2(I;H^{-1}(\\Omega)), \\\\\n\\label{conv3}\n\\partial_{tt}^2u_{\\varepsilon_n}&\\to \\partial_{tt}^2u_{0} \\quad &&\\text{ weakly in }\\ \n\\begin{cases}\nL^2(I;H^{-1}(\\Omega)) &\\text{ if }\\ C_{\\ast}= 0,\\\\\nL^2(I_\\sigma;H^{-1}(\\Omega)) &\\text{ if }\\ C_{\\ast}\\neq 0,\n\\end{cases} \\\\\n\\label{conv8}\t\nt\\varepsilon_n^{-r}\\partial_{t}u_{\\varepsilon_n}&\\to h \\quad &&\\text{ weakly in }\\ L^2(I_\\sigma;H^{-1}(\\Omega)) \\hspace{8mm}\\text{ if }\\ C_{\\ast}\\neq 0, \\\\\n\\label{conv4}\nu_{\\varepsilon_n}&\\to u_{0}\\quad &&\\text{ strongly 
in }\\ C(\\overline{I};L^2(\\Omega)),\\\\\n\\label{conv5}\n\\partial_{t}u_{\\varepsilon_n}&\\to \\partial_tu_{0}\\quad &&\\text{ strongly in }\t\n\\begin{cases}\nC(\\overline{I};H^{-1}(\\Omega)) &\\text{ if }\\ C_{\\ast}= 0,\\\\\nC(\\overline{I}_\\sigma;H^{-1}(\\Omega)) &\\text{ if }\\ C_{\\ast}\\neq 0,\n\\end{cases}\\\\\n\\label{conv7}\n\\sqrt{t}\\partial_tu_{\\varepsilon_n}&\\to 0\\quad &&\\text{ strongly in }\\ L^2(\\Omega\\times I)\\hspace{15mm} \\text{ if }\\ C_{\\ast}\\neq 0.\n\\end{align}\nIn particular, if $C_{\\ast} \\neq 0$, then $\\partial_t u_0(\\cdot,t)\\equiv 0$ for a.e.~$t\\in I$, and hence, $u_0$ is independent of $t\\in I$, i.e.,~$u_0=u_0(x)$. Furthermore, there exists $w_1\\in L^{2}(\\Omega\\times I; H^{1}_{\\rm per}(\\square\\times J)\/\\mathbb R)$ such that \n\\begin{align}\n\t \\partial_t u_{\\varepsilon_n} &\\overset{2,2}{\\rightharpoonup} \\partial_t u_{0}+\\partial_s w_1 \\quad &&\\text{ in }\\ L^2(\\Omega\\times I\\times \\square\\times J), \\label{conv6.5}\\\\\n\ta(t,\\tfrac{x}{\\e_n})\\nabla u_{\\varepsilon_n}&\\overset{2,2}{\\rightharpoonup} a(t,y)(\\nabla u_{0}+\\nabla_y w_1 ) \\quad &&\\text{ in }\\ [L^2(\\Omega\\times I\\times \\square\\times J)]^N.\\label{conv5.5}\n\\end{align}\nThus it holds that\n\\begin{align}\na(t,\\tfrac{x}{\\e_n})\\nabla u_{\\varepsilon_n}&\\to \\langle a(t,\\cdot)(\\nabla u_{0}+\\nabla_y w_1 )\\rangle_{y,s} \\quad \\text{ weakly in }\\ [L^2(\\Omega\\times I)]^N,\\label{conv6}\n\\end{align}\nwhere\n$$\n\\bigl\\langle a(t,\\cdot)\\bigl(\\nabla u_0(x,t)+\\nabla_y w_1(x,t,\\cdot,\\cdot) \\bigl)\\bigl\\rangle_{y,s}\n=\n\\int_{\\square}a(t,y)\\bigl(\\nabla u_0(x,t)+\\nabla_y \\langle w_1 (x,t,y,\\cdot)\\rangle_s\\bigl)\\, dy.\n$$\n\\end{lem}\n\n\\begin{proof}\nThanks to Lemma \\ref{bdd}, we readily obtain \\eqref{conv1}-\\eqref{conv8}. Furthermore, from (i) and (ii) of Lemma \\ref{bdd}, the Ascoli-Arzel\\`a theorem yields \\eqref{conv4}.
In the same way, \\eqref{conv5} also holds true by (ii) and (v) of Lemma \\ref{bdd}. As for \\eqref{conv7}, noting by (iii) of Lemma \\ref{bdd} that\n$$\n\\limsup_{\\varepsilon_n\\to 0_+}\\|\\sqrt{t}\\partial_t u_{\\varepsilon_n}\\|_{L^2(\\Omega\\times I)}^2\\le \n\\limsup_{\\varepsilon_n\\to 0_+} C\\varepsilon^{r}_n=0,\n$$\nwe obtain \\eqref{conv7}. Thus $u_0=u_0(x)$, provided that $C_{\\ast}\\neq 0$. We finally show \\eqref{conv6.5}, \\eqref{conv5.5} and \\eqref{conv6}. All the assumptions of Theorem \\ref{gradientcpt} can be checked by (i) and (ii) of Lemma \\ref{bdd}. \nHence \\eqref{conv6.5} holds true. Moreover, note that, for any $\\Psi\\in [L^2_{\\rm per}(\\square\\times J;C_{\\rm c}(\\Omega\\times I))]^N$, $\\Psi$ and $a(t,y)\\Psi$ are admissible test functions in $[L^2(\\Omega\\times I\\times \\square\\times J)]^N$ (see \\cite[Theorems 2 and 4]{LNW} for details) and define $\\Xi\\in [L^2(\\Omega\\times I\\times \\square\\times J)]^N$ by\n$$\na(t,\\tfrac{x}{\\e_n})\\nabla u_{\\varepsilon_n} \\overset{2,2}{\\rightharpoonup} \\Xi \\quad \\text{ in }\\ [L^2(\\Omega\\times I\\times \\square\\times J)]^N.\n$$\nThen, Theorem \\ref{gradientcpt} yields that \n\\begin{align*}\n\\lefteqn{\\int_0^T\\int_\\Omega\\int_0^1\\int_\\square\\Xi(x,t,y,s)\\cdot \\Psi(x,t,y,s)\\, dZ}\\\\\n&\\quad=\n\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T\\int_{\\Omega}\\nabla u_{\\varepsilon_n}(x,t)\\cdot {}^t\\! a(t,\\tfrac{x}{\\e_n})\\Psi(x,t,\\tfrac{x}{\\e_n},\\tfrac{t}{\\e_n^r})\\, dxdt\\\\\n&\\quad=\n\\int_0^T\\int_{\\Omega}\\int_0^1\\int_\\square\\bigl(\\nabla u_{0}(x,t)+\\nabla_y w_1(x,t,y,s)\\bigl)\\cdot {}^t\\! a(t,y)\\Psi(x,t,y,s)\\, dZ,\n\\end{align*}\nwhich implies \\eqref{conv5.5}, and hence, (i) of Remark \\ref{indepwtts} yields \\eqref{conv6}. 
This completes the proof.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{HPthm}}\nWe first derive the homogenized equation by setting\n\\begin{align*}\nj_{\\rm hom}(x,t):=\\Bigl\\langle a(t,\\cdot)\\bigl(\\nabla u_0(x,t)+\\nabla_y w_1 (x,t,\\cdot,\\cdot)\\bigl)\\Bigl\\rangle_{y,s}.\n\\end{align*}\nRecalling \\eqref{conv9} and \\eqref{conv6}, we observe that, for all $\\phi\\in H^1_0(\\Omega)$ and $\\psi\\in C^{\\infty}_{\\rm c}(I)$,\n\\begin{align*}\n\\lefteqn{\\int_0^T \\langle f(t), \\phi\\rangle_{H^1_0(\\Omega)}\\psi(t)\\, dt}\\\\%1\n&=\n\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T \\langle f_{\\varepsilon_n}(t), \\phi\\rangle_{H^1_0(\\Omega)}\\psi(t)\\, dt\\\\%2\n&\\stackrel{\\eqref{weakform}}{=}\n\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T\n\\Bigl[\n\\langle \\partial_{tt}^2u_{\\varepsilon_n}(t)+g(\\tfrac{t}{\\e_n^r})\\partial_tu_{\\varepsilon_n}(t),\\phi\\rangle_{H^1_0(\\Omega)}\n+\\bigl(a(t,\\tfrac{x}{\\e_n})\\nabla u_{\\varepsilon_n}(t), \\nabla \\phi\\bigl)_{L^2(\\Omega)}\\Bigl]\\psi(t)\\, dt\\\\%3\n&\\stackrel{\\eqref{conv9}, \\eqref{conv6}}{=}\n\\int_0^T\n\\Bigl[\n\\langle w , \\phi\\rangle_{H^1_0(\\Omega)}\n+\\bigl(j_{\\rm hom}(t), \\nabla \\phi\\bigl)_{L^2(\\Omega)}\\Bigl]\\psi(t)\\, dt.\\nonumber\n\\end{align*}\nHere $w$ can be regarded as \n\\begin{equation}\nw=\\partial_{tt}^2u_{0}+\\langle g_{\\rm per}\\rangle_s\\partial_t \nu_0+C_{\\ast} h.\\label{conv-w}\n\\end{equation}\nActually, due to $\\psi\\in C^{\\infty}_{\\rm c}(I)$, this follows from \\eqref{conv8}, \\eqref{conv5} and Proposition \\ref{mean}.
Hence, by the arbitrariness of $\\psi\\in C^{\\infty}_{c}(I)$, $u_0$ turns out to be a weak solution to\n\\begin{equation}\\label{DWHP}\n\\left\\{\n\\begin{aligned}\n&\\partial_{tt}^2u_{0}-{\\rm{div}}\\, j_{\\rm hom}+\\langle g_{\\rm per}\\rangle_s\\partial_tu_0+C_{\\ast} h=f \\quad\\text{ in } \\Omega\\times I, \\\\\n&u_{0}|_{\\partial\\Omega}=0 , \\quad \tu_{0}|_{t=0}=v^0,\\quad \\partial_tu_{0}|_{t=0}= \\tilde{v}^1, \n\\end{aligned}\n\\right.\n\\end{equation}\nwhere\n$$\n\\tilde{v}^1=\n\\begin{cases}\nv^1 &\\text{ if }\\ C_{\\ast}=0,\\\\\n0 &\\text{ if }\\ C_{\\ast}\\neq 0.\n\\end{cases}\n$$\nIndeed, noting that\n\\begin{align*}\n\\|u_0(0)-v^0\\|_{L^2(\\Omega)}\n&\\le\n\\|u_0(0)-u_{\\varepsilon_n}(0)\\|_{L^2(\\Omega)}\n+\\|u_{\\varepsilon_n}(0)-v^0\\|_{L^2(\\Omega)}\\\\\n&\\le\n\\|u_0-u_{\\varepsilon_n}\\|_{C(\\overline{I};L^2(\\Omega))}\n+\\|v_{\\varepsilon_n}^0-v^0\\|_{L^2(\\Omega)},\n\\end{align*}\nwe see by \\eqref{conv4} and {\\bf (A)} that\n\\begin{align*}\n\\|u_0(0)-v^0\\|_{L^2(\\Omega)}\n\\le\n\\limsup_{\\varepsilon_n\\to 0_+}\\|u_0-u_{\\varepsilon_n}\\|_{C(\\overline{I};L^2(\\Omega))}\n+\\limsup_{\\varepsilon_n\\to 0_+}\\|v_{\\varepsilon_n}^0-v^0\\|_{L^2(\\Omega)}=0,\n\\end{align*}\nwhich implies that $u_0(x,0)=v^0$. Thus $u_0\\equiv v^0$ by $\\partial_t u_0\\equiv 0$, provided that $C_{\\ast}\\neq0$. To check $\\partial_t u_0(x,0)=v^1(x)$ a.e.~in $\\Omega$ for $C_{\\ast}=0$, let $\\psi\\in C^{\\infty}(I)$ be such that $\\psi(T)=0$ and $\\psi(0)=1$.
Then we infer that, for all $\\phi\\in C^{\\infty}_{\\rm c}(\\Omega)$,\n\\begin{align*}\n\\int_{\\Omega} v^1(x)\\phi(x)\\, dx\n&\\stackrel{{\\bf (A)},\\, \\eqref{weakform}}{=}\n\\lim_{\\varepsilon_n\\to 0_+}\\int_{\\Omega} v_{\\varepsilon_n}^1(x)\\phi(x)\\, dx\\\\\n&\\quad+\n\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T\\Bigl\\langle\n\\partial_{tt}^2 u_{\\varepsilon_n}(t)\n+g(\\tfrac{t}{\\e_n^r})\\partial_t u_{\\varepsilon_n}(t),\\phi\\Bigl\\rangle_{H^1_0(\\Omega)}\\psi(t)\\, dt\\\\\n&\\quad+\n\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T\\int_{\\Omega}\\Bigl[ a(t,\\tfrac{x}{\\e_n})\\nabla u_{\\varepsilon_n}(x,t)\\cdot\\nabla \\phi(x)\\psi(t)-f_{\\varepsilon_n}(x,t)\\phi(x)\\psi(t)\\Bigl]\\, dxdt\\\\\n&=\n\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T\\int_{\\Omega} \\Bigl[-\\partial_t u_{\\varepsilon_n}(x,t)\\phi(x)\\partial_t\\psi(t)+g(\\tfrac{t}{\\e_n^r})\\partial_t u_{\\varepsilon_n}(x,t)\\phi(x)\\psi(t)\\\\\n&\\quad+\na(t,\\tfrac{x}{\\e_n})\\nabla u_{\\varepsilon_n}(x,t)\\cdot\\nabla \\phi(x)\\psi(t)\n-f_{\\varepsilon_n}(x,t)\\phi(x)\\psi(t)\\Bigl]\\, dxdt\\\\\n&=\n\\int_0^T\\int_{\\Omega} \\Bigl[-\\partial_t u_{0}(x,t)\\phi(x)\\partial_t\\psi(t)+\\langle g_{\\rm per}\\rangle_s\\partial_tu_0(x,t)\\phi(x)\\psi(t)\\\\\n&\\quad+\nj_{\\rm hom}(x,t)\\cdot\\nabla \\phi(x)\\psi(t)-f(x,t)\\phi(x)\\psi(t)\\Bigl]\\, dxdt\\\\\n&\\stackrel{\\eqref{DWHP}}{=}\n\\int_{\\Omega} \\partial_t u_0(x,0)\\phi(x)\\, dx, \n\\end{align*}\nwhich together with the arbitrariness of $\\phi\\in C^{\\infty}_{\\rm c}(\\Omega)$ yields that $\\partial_tu_0(x,0)=v^1(x)$ a.e.~in $\\Omega$ for $C_{\\ast}= 0$. \n\nThe rest of the proof is to show that\n\\begin{equation}\\label{rest}\nj_{\\rm hom}=a_{\\rm hom}(t)\\nabla u_0(x,t).\n\\end{equation}\nHere $a_{\\rm hom}(t)$ is the homogenized matrix defined by \\eqref{a_hom}. 
Thus it suffices to prove \\eqref{HPu1}, that is,\n\\begin{equation}\n\\langle w_1\\rangle_{s}=u_1:=\\sum_{k=1}^N\\partial_{x_k}u_0(x,t)\\Phi_k(t,y),\\label{u1}\n\\end{equation}\nwhere $\\Phi_k$ is the corrector defined by either \\eqref{CPslow} or \\eqref{CPcritical}. \nIndeed, if \\eqref{u1} holds, then we derive that\n\\begin{eqnarray*}\nj_{\\rm hom}(x,t)\n&=&\n\\left\\langle a(t,\\cdot)\\bigl(\\nabla u_0(x,t)+\\nabla_y w_1(x,t,\\cdot,\\cdot) \\bigl)\\right\\rangle_{y,s}\\\\\n&\\stackrel{\\eqref{u1}}{=}&\n\\int_{\\square}a(t,y)\\Bigl(\\nabla u_0(x,t)+\\sum_{k=1}^N\\partial_{x_k}u_0(x,t)\\nabla_y\\Phi_k(t,y)\\Bigl)\\, dy\\\\\n&=&\n\\sum_{k=1}^N\\underbrace{\\Bigl(\\int_{\\square}a(t,y)\\left(\\nabla_y\\Phi_k(t,y)+e_k\\right)\\, dy\\Bigl)}_{=a_{\\rm hom}(t)e_k \\text{ by \\eqref{a_hom}}}\\partial_{x_k}u_0(x,t)=a_{\\rm hom}(t)\\nabla u_0(x,t),\t\t\t\t\t\n\\end{eqnarray*}\nwhich implies \\eqref{rest}. Hence $u_0$ turns out to be the unique weak solution to \\eqref{HDW2}. \nIndeed, this follows from the uniqueness of the corrector $\\Phi_k$ and a similar argument to that in Theorem \\ref{well-posedness} if $C_{\\ast}=0$, and from $u_0\\equiv v^0$ whenever $C_{\\ast}\\neq 0$. Thus we have\n$$\nu_{\\varepsilon} \\to u_0\\quad \\text{ as }\\ \\varepsilon\\to 0_+\n$$\nwithout taking any subsequence $(\\varepsilon_n)$. \nTherefore, \\eqref{HPconv1}--\\eqref{HPconv3} hold by Lemma \\ref{conv} and \\eqref{conv-w}. Thus we get all the assertions.
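For orientation (a standard example, not needed for the proof), we note that in one space dimension the homogenized coefficient can be computed explicitly: if $N=1$ and $\\Phi_1$ solves the classical cell problem $\\partial_y\\bigl(a(t,y)(\\partial_y\\Phi_1(t,y)+1)\\bigl)=0$ in $\\square$ with periodic boundary conditions, then $a(t,y)(\\partial_y\\Phi_1(t,y)+1)$ is constant in $y$, and periodicity of $\\Phi_1$ forces\n$$\na_{\\rm hom}(t)=\\int_0^1 a(t,y)\\bigl(\\partial_y\\Phi_1(t,y)+1\\bigl)\\, dy=\\Bigl(\\int_0^1\\frac{dy}{a(t,y)}\\Bigl)^{-1},\n$$\nthe harmonic mean of $a(t,\\cdot)$, which is in general strictly smaller than the arithmetic mean $\\int_0^1 a(t,y)\\, dy$.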
\n\nIn the rest of this section, we shall prove \\eqref{u1} for all $01$, we claim that\n$$\nw_1=w_1(x,t,y) \\quad \\text{ for all $r\\in (1,+\\infty)$}.\n$$\nIndeed, multiplying both sides by $\\varepsilon_n^{-2(1-r)}$ in \\eqref{key3}, we see that the third term in \\eqref{key3} is zero as $\\varepsilon_n\\to 0_+$ due to \\eqref{conv7}, and then, one can derive by Lemma \\ref{bdd} and Corollary \\ref{veryweak} that\n\\begin{align*}\n0&=\n-\\lim_{\\varepsilon_n\\to 0_+}\n\\varepsilon_n^{r-1}\\int_0^T\\int_{\\Omega}\t\\partial_t u_{\\varepsilon_n}(x,t)\t\\phi(x)b(\\tfrac{x}{\\e_n})\\psi(t)\\partial_s c(\\tfrac{t}{\\e_n^r})\\, dxdt\\\\\n&=\n\\underbrace{\\lim_{\\varepsilon_n\\to 0_+}\\varepsilon_n^{r-1}\\int_0^T\\int_{\\Omega}\t u_{\\varepsilon_n}(x,t)\t\\phi(x)b(\\tfrac{x}{\\e_n})\\partial_t\\psi(t)\\partial_s c(\\tfrac{t}{\\e_n^r})\\, dxdt}_{=0}\\\\\n&\\quad+\\lim_{\\varepsilon_n\\to 0_+}\\int_0^T\\int_{\\Omega}\t \\frac{u_{\\varepsilon_n}}{\\varepsilon_n}(x,t)\t\\phi(x)b(\\tfrac{x}{\\e_n})\\psi(t)\\partial_{ss}^2 c(\\tfrac{t}{\\e_n^r})\\, dxdt\\\\\n&=\n\\int_0^T\\int_{\\Omega}\\int_0^1\\int_{\\square}\t w_1(x,t,y,s) \t\\phi(x)b(y)\\psi(t)\\partial_{ss}^2 c(s)\\, dZ,\n\\end{align*} \nwhich implies that $\\partial_s w_1$ is independent of $s\\in J$ and so is $ w_1 $ by $J$-periodicity. Thus $ w_1 \\in L^2(\\Omega\\times I;H^1_{\\rm per}(\\square)\/\\mathbb R)$ for all $r>1$.\n\nWe choose $c(s)\\equiv 1$ in \\eqref{key} below. Then one can get the following \n\\begin{lem}\\label{3rd}\nFor any $1}). Line 76 finalizes the OpenFPM Library at the end of the program.\n\nTable \\ref{openfpm_vs_lammps} compares the performance of the OpenFPM-based implementation using Verlet lists with LAMMPS for a strong scaling, i.e., distributing the fixed number of 216,000 particles across an increasing number of processors. The absolute wall-clock time per time step is below 1\\,second even on a single core. On 1536 cores, a simulation time step is completed in 0.5\\,ms. 
Although OpenFPM is a general-purpose particle-mesh library and not limited to MD, its performance is almost as good as that of the highly optimized LAMMPS. \n \n\n\\begin{frame}[Listing 4.1: C++ code for Lennard-Jones molecular dynamics using OpenFPM]\\label{list:md}\n\\lstset{language=C++,\n basicstyle=\\footnotesize\\ttfamily,\n keywordstyle=\\color{blue},\n stringstyle=\\color{red},\n breaklines=true,\n numbers=left,\n lineskip=-0.7ex,\n basewidth={0.5em,0.5em},\n commentstyle=\\color{green},\n morecomment=[l][\\color{magenta}]{\\#}\n}\n\\begin{lstlisting}\n\/\/\/\/\/ define parameters \ndouble sigma12, sigma6, epsilon = 1.0, sigma = 0.1; \/\/ parameters of the potential\ndouble dt = 0.0005, r_cut = 3.0*sigma; \/\/ parameters of the simulation\ndouble r_cut2;\n\nconstexpr int velocity_prop = 0; \/\/ velocity is the first particle property\nconstexpr int force_prop = 1; \/\/ force is the second particle property\n\n\/\/\/\/\/ Define Lennard-Jones interaction to be used in applyKernel_in_sym\nDEFINE_INTERACTION_3D(ln_force)\n Point<3,double> r = xp - xq;\n double rn = norm2(r);\n if (rn > r_cut2) return 0.0;\n return 24.0*epsilon*(2.0*sigma12\/(rn*rn*rn*rn*rn*rn*rn)-sigma6\/(rn*rn*rn*rn))*r;\nEND_INTERACTION\n\nint main(int argc, char* argv[]) {\n \/\/\/\/\/ Initialize OpenFPM\n openfpm_init(&argc,&argv);\n\n \/\/\/\/\/ Initialize constants \n sigma6 = pow(sigma,6), sigma12 = pow(sigma,12);\n r_cut2 = r_cut*r_cut;\n\n \/\/\/\/\/ Define initialization grid, simulation box, periodicity\n \/\/\/\/\/ and ghost layer\n size_t sz[3] = {60,60,60};\n Box<3,float> box({0.0,0.0,0.0},{1.0,1.0,1.0});\n size_t bc[3]={PERIODIC,PERIODIC,PERIODIC};\n Ghost<3,float> ghost(r_cut);\n\n \/\/\/\/\/ Lennard-Jones potential object used in applyKernel_in\n ln_force lennard_jones;\n\n \/\/\/\/\/\/ Define particles and initialize them on a grid\n vector_dist<3,double,aggregate<Point<3,double>,Point<3,double>>> particles(0,box,bc,ghost);\n Init_grid(sz,particles);\n\n \/\/\/\/\/ Define 
aliases for the particle force, velocity, and position\n \/\/\/\/\/ to simplify notation\n auto force = getV<force_prop>(particles);\n auto velocity = getV<velocity_prop>(particles);\n auto position = getV<PROP_POS>(particles);\n\n \/\/\/\/\/ initialize all particle velocities to zero\n velocity = 0;\n\n \/\/\/\/\/ Generate the cell lists and compute the initial forces using the Lennard-Jones\n \/\/\/\/\/ potential evaluated exploiting symmetry\n auto NN = particles.getCellListSym(r_cut);\n force = applyKernel_in_sym(particles,NN,lennard_jones);\n\n \/\/\/\/\/ Time loop\n for (size_t i = 0; i < 10000 ; i++) {\n \/\/\/\/\/ 1st step of velocity Verlet time integration\n \/\/\/\/\/ v(t + 1\/2*dt) = v(t) + 1\/2*force(t)*dt\n \/\/\/\/\/ x(t + dt) = x(t) + v(t + 1\/2*dt)*dt\n velocity = velocity + 0.5*dt*force;\n position = position + velocity*dt;\n\n \/\/\/\/\/ communicate particles that have crossed processor boundaries and\n \/\/\/\/\/ update the ghost layers for all properties (empty props list)\n particles.map();\n particles.ghost_get<>();\n\n \/\/ Calculate the forces at t + dt\n particles.updateCellListSym(NN);\n force = applyKernel_in_sym(particles,NN,lennard_jones);\n\n \/\/\/\/\/ 2nd step of velocity Verlet time integration\n \/\/\/\/\/ v(t+dt) = v(t + 1\/2*dt) + 1\/2*force(t+dt)*dt\n velocity = velocity + 0.5*dt*force;\n }\n \n \/\/\/\/\/ Finalize OpenFPM and deallocate all memory\n openfpm_finalize();\n}\n\\end{lstlisting}\n\\end{frame}\n\n\n\n\\subsection{Smoothed-particle hydrodynamics}\n\n\\begin{figure}[]\n\\begin{minipage}[t]{0.99\\textwidth}\n\\centering\n \\includegraphics[scale=0.25]{frame000_dlb.png}\n \\subcaption{t = 0\\,s}\n\\end{minipage}\n\\begin{minipage}[t]{0.99\\textwidth}\n\\centering\n \\includegraphics[scale=0.25]{frame043_dlb.png}\n \\subcaption{t = 0.43\\,s}\n\\end{minipage}\n\\begin{minipage}[t]{0.99\\textwidth}\n\\centering\n \\includegraphics[scale=0.25]{frame095_dlb.png}\n \\subcaption{t = 0.95\\,s}\n\\end{minipage}\n\\caption{Visualization of the SPH dam-break 
simulation. We show the fluid particles at times 0, 0.43, and 0.95\\,s of simulated time, starting from a column of fluid in the left corner of the domain as shown. We use the OpenFPM-based SPH implementation to solve the weakly compressible Navier-Stokes equations with the equation of state for the pressure as given in Eqs.~\\ref{eq:sph1}--\\ref{eq:statesph}. The figure shows a density iso-surface indicating the fluid surface with color indicating the fluid velocity magnitude. \nThe small insets show the distribution of the domain onto 4 processors with different processors shown by different colors. The dynamic load balancing of OpenFPM automatically adjusts the domain decomposition to the evolution of the simulation in order to maintain scalability.}\n\\label{fig:sph_all}\n\\end{figure}\n\nSmoothed-Particle Hydrodynamics (SPH) is a widely used method for simulating continuous models of fluid dynamics. Due to its simplicity and flexibility in modeling complex fluid properties and free fluid surfaces, it is preferentially used to model multi-phase flows and fluid-structure interaction \\cite{Hu2:2006,Adami:2012}.\n\nWe use OpenFPM to implement a weakly compressible SPH solver for the Navier-Stokes equations, where each particle $p$ has a velocity $\\bm{v}_p$, a pressure $P_p$, and a density $\\rho_p$.
The evolution of these particle properties is governed by \\cite{Monaghan:1992}:\n\\begin{align}\n\\frac{d\\bm{v}_p}{dt} &= - \\!\\!\\sum_{q \\in \\mathcal{N}(p) } m_q \\left(\\frac{P_p + P_q}{\\rho_p \\rho_q} + \\Pi_{pq} \\right) \\nabla W(\\bm{x}_q - \\bm{x}_p) + \\bm{g} \\label{eq:sph1} \\\\\n\\frac{d\\rho_p}{dt} &= \\sum_{q \\in \\mathcal{N}(p) } m_q \\bm{v}_{pq} \\cdot \\nabla W(\\bm{x}_q - \\bm{x}_p) \\label{eq:sph2} \\\\\nP_p &= b \\left[ \\left( \\frac{\\rho_p}{\\rho_{0}} \\right)^{\\gamma} - 1 \\right] \\label{eq:statesph} \\\\\nb &= \\frac{1}{\\gamma}c_\\text{sound}^2 |\\bm{g}| h_\\text{swl} \\rho_{0} \\, , \n\\end{align}\nwhere $h_\\text{swl}$ is the maximum height of the fluid, $\\gamma=7$, and $c_\\text{sound}=20$ \\cite{Monaghan:1992}.\nHere, $\\mathcal{N}(p)$ is the set of all particles within a cutoff radius of $2 \\sqrt{3}h$ of $p$, where $h$ is the distance between nearest neighbors.\n$W(\\bm{x})$ is the classic cubic SPH kernel~\\cite{Monaghan:1992} and $\\bm{g}$ is the gravitational acceleration. The relative velocity between particles $p$ and $q$ is \\mbox{$\\bm{v}_{pq} = \\bm{v}_p - \\bm{v}_q$}, and \n$\\nabla W(\\bm{x}_q - \\bm{x}_p)$ is the analytical gradient of the kernel $W$ centered at particle $p$ and evaluated at the location of particle $q$. \nEquation \\ref{eq:statesph} is the equation of state that links the pressure $P_p$ with the density $\\rho_p$, where $\\rho_0$ is the density of the fluid at \\mbox{$P=0$}. $\\Pi_{pq}$ is the artificial viscosity term, which acts only on pairs of approaching particles:\n\\begin{equation}\n\\Pi_{pq} = \\begin{cases} - \\frac {\\alpha \\bar{c_{pq}} \\mu_{pq} }{\\bar{\\rho_{pq}} } & \\bm{v}_{pq} \\cdot \\bm{r}_{pq} < 0 \\\\ 0 & \\bm{v}_{pq} \\cdot \\bm{r}_{pq} \\ge 0 \\end{cases}\n\\end{equation}\nwhere $ \\mu_{pq} = \\frac{h \\bm{v}_{pq} \\cdot \\bm{r}_{pq}}{r^2_{pq} + \\eta^2} $ and $ \\bar{c_{pq}} = \\sqrt{g \\cdot h_{swl}}$. \n \nWe use the OpenFPM-based implementation to simulate a water column impacting onto a fixed obstacle.
This ``dam break'' scenario is a standard test case for SPH simulation codes. A visualization of the OpenFPM result at three different time points is shown in Fig.~\\ref{fig:sph_all}. We compare the results and performance with those obtained using the popular open-source SPH code DualSPHysics~\\cite{Crespo:2015}. The publicly available version of DualSPHysics only supports shared-memory multi-core platforms and GPGPUs, which is why we limit comparisons to these cases. \n\nThe present SPH implementation based on OpenFPM uses the same algorithms as DualSPHysics~\\cite{Crespo:2015}, with identical initialization, boundary conditions, treatment of the viscosity term, and Verlet time-stepping \\cite{Verlet:1967} with dynamic step size. The results are therefore directly comparable. In this test case, particles are not homogeneously distributed across the domain, and they move significantly during the simulation. Therefore, this provides a good showcase for the dynamic load-balancing capability of OpenFPM. \n\nWe validate our simulation by calculating and comparing the velocity and pressure profiles at multiple points between OpenFPM and DualSPHysics~\\cite{Crespo:2015}. We find that all pressure and velocity profiles are identical (not shown).\nWe measure the performance of the OpenFPM-based implementation in comparison with the DualSPHysics code running on 24 cores of a single cluster node. We simulate the dam-break case with 171,496 particles until a physical time of 1.5\\,seconds. The OpenFPM code completes the entire simulation in about 500\\,seconds, whereas DualSPHysics requires about 950\\,seconds. The roughly two-fold better performance of OpenFPM is likely due to the use of symmetry when evaluating the interactions and the use of optimized Verlet lists, which do not seem to be exploited in DualSPHysics~\\cite{Crespo:2015}. 
\n\nSince DualSPHysics is mainly optimized for use on GPGPUs, we also compare the OpenFPM-based implementation in distributed-memory mode with DualSPHysics running on a GPGPU. The benchmark is done with 15 million SPH particles using an NVIDIA GeForce GTX 1080 GPU. The OpenFPM code reaches the same performance when running on around 270 CPU cores of the benchmark machine and is faster when using more cores. This shows that OpenFPM can reach GPU performance on moderate numbers of CPU cores without requiring specialized CUDA code.\n\nWe also use this test case to profile OpenFPM with respect to the fraction of time spent computing, communicating, and load-balancing. The results are shown in Table \\ref{table:perf_perc} for different numbers of particles on 1536 processors, hence testing the scalability of the code to large numbers of particles.\nThe small insets in Fig.~\\ref{fig:sph_all} show how the domain decomposition of OpenFPM dynamically adapts to the evolving particle distribution by dynamic load re-balancing (see Section \\ref{sec:dynbal}). \nIn this example, the load distribution changes strongly due to the large bulk motion of the particles. \nThe dynamic load-balancing routines of OpenFPM consume anywhere between 5 and 25\\% of the total execution time, but their absolute runtime is independent of the number of particles. \nTherefore, the relative fraction of communication and load-balancing decreases for increasing numbers of particles, whereas the average imbalance remains roughly constant due to the dynamic load balancing. \nSince load balancing and communication are not required when running on a single core, the percentage of time spent computing (second column in Table~\\ref{table:perf_perc}) can directly be interpreted as the parallel efficiency of the code on 1536 cores, which, as expected, increases with problem size. 
\n\n\\begin{table}[h]\n\\begin{adjustbox}{max width=\\textwidth}\n\\input{openfpm_sph_perc}\n\\end{adjustbox}\n\\caption{Percentage of the total runtime spent on different tasks by OpenFPM for the SPH dam-break simulation on 1536 cores using different numbers of particles (1st column). The computation time is the average wall-clock time across processors spent on local computations, while the load imbalance is given by the difference between the maximum wall-clock time across processors and the average. Communication is the time taken by all mappings together, and DLB (dynamic load balancing) is the time taken to decompose the problem and assign sub-domains to processors. The last column gives the total runtime of the simulation until a simulated time of 1.5\\,s.}\n\\label{table:perf_perc}\n\\end{table}\n\n\n\n\\subsection{Finite-difference reaction-diffusion code}\n\nAs a third showcase, we consider a purely mesh-based application, namely a finite-difference code to numerically solve a reaction-diffusion system. \nReaction-diffusion systems are widely studied due to their ability to form steady-state concentration patterns, including Turing patterns~\\cite{Turing:1952}. A particularly well-known example is the Gray-Scott system \\cite{Gray:1983,Gray:1984,Gray:1985,Lee:1993}, which produces a rich variety of patterns in different parameter regimes. It is described by the following set of partial differential equations:\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial u}{\\partial t} & = D_u \\nabla^2 u -uv^2 + F(1-u) \\\\\n\\frac{\\partial v}{\\partial t} & = D_v \\nabla^2 v +uv^2 - (F + k)v \\, , \n\\end{split}\n\\end{equation}\nwhere $D_u$ and $D_v$ are the diffusion constants of the two species with concentrations $u$ and $v$, respectively. 
The parameters $F$ and $k$ determine the type of pattern that is formed.\n\nWe implement an OpenFPM-based numerical solver for these equations using second-order centered finite-differences on a regular Cartesian mesh in 3D of size $256^3$.\nWe compare the performance of the OpenFPM-based implementation with that of an efficient AMReX-based solver \\cite{AMReX}. \nAlthough AMReX is a multi-resolution adaptive mesh-refinement code, we use it as a benchmark in the present uniform-resolution case because it is highly optimized. \nHowever, AMReX requires the user to tune the maximum grid size for data distribution \\cite{AMReX}. If the maximum grid size is chosen too large, AMReX does not have enough granularity to parallelize. If it is chosen too small, scalability is impaired by a larger ghost-layer communication overhead. We determined this parameter for AMReX manually, ensuring that the number of sub-grids is always larger than the number of processor cores used. The actual values used are given in the last column of Table~\\ref{table:ofp_amrex_scal}. OpenFPM does not require the user to set such a parameter, as the domain decomposition is determined automatically. 
For both AMReX and OpenFPM, we use MPI-only parallelism in order to compare the results.\n\n\\begin{figure}[t!]\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.21]{gs_alpha.png}\n \\subcaption{$\\alpha$ pattern ($F$=0.010, $k$=0.047)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.21]{gs_beta.png}\n \\subcaption{$\\beta$ pattern ($F$=0.026, $k$=0.051)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.20]{gs_delta.png}\n \\subcaption{$\\delta$ pattern ($F$=0.030, $k$=0.055)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.21]{gs_epsilon.png}\n \\subcaption{$\\varepsilon$ pattern ($F$=0.018, $k$=0.055)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.21]{gs_eta.png}\n \\subcaption{$\\eta$ pattern ($F$=0.022, $k$=0.061)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.20]{gs_gamma.png}\n \\subcaption{$\\gamma$ pattern ($F$=0.026, $k$=0.055)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.21]{gs_iota.png}\n \\subcaption{$\\iota$ pattern ($F$=0.046, $k$=0.059)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.21]{gs_kappa.png}\n \\subcaption{$\\kappa$ pattern ($F$=0.050, $k$=0.063)}\n\\end{minipage}\n\\begin{minipage}[t]{0.32\\textwidth}\n \\includegraphics[scale=0.20]{gs_theta.png}\n \\subcaption{$\\theta$ pattern ($F$=0.030, $k$=0.057)}\n\\end{minipage}\n\\caption{Visualizations of the OpenFPM-simulations of nine steady-state patterns produced by the Gray-Scott reaction-system in 3D \\cite{Pearson:1993} for different values of the parameters $F$ and $k$.}\n\\label{fig:gs_pattern}\n\\end{figure}\n\nFor the benchmark simulations, we use the following parameter values: \\mbox{$D_u = 2\\cdot 10^{-5}$}, \\mbox{$D_v = 10^{-5}$}, varying $k$ and $F$ as given in the legends of Fig.~\\ref{fig:gs_pattern} to 
produce different patterns. To validate the simulation, we reproduce the nine patterns classified by Pearson \\cite{Pearson:1993}, with visualizations shown in Fig.~\\ref{fig:gs_pattern}. \n\nAn OpenFPM source-code example of applying a simple 5-point finite-difference stencil to a regular Cartesian mesh is shown in Listing \\ref{list:mesh}. The stencil is defined in line 2 as an OpenFPM grid key array with relative grid coordinates. Here, the stencil object is called \\texttt{star\\_stencil\\_2D} and consists of 5 points. In line 5, a mesh iterator is created for this stencil as applied to the mesh object \\texttt{Old}. Lines 7--24 then loop over all mesh nodes and apply the stencil. The expression of the stencil (lines 18--20) is simplified by first defining aliases for the shifted nodes in lines 10--14, although this is not necessary.\n\n\\begin{frame}[Listing 4.3: OpenFPM code example for stencil operations on a regular Cartesian mesh]\\label{list:mesh}\n\\lstset{language=C++,\n basicstyle=\\footnotesize\\ttfamily,\n keywordstyle=\\color{blue},\n stringstyle=\\color{red},\n breaklines=true,\n numbers=left,\n lineskip=-0.7ex,\n basewidth={0.5em,0.5em},\n commentstyle=\\color{green},\n morecomment=[l][\\color{magenta}]{\\#}\n}\n\\begin{lstlisting}\n\/\/\/\/\/ finite-difference stencil definition\nstatic grid_key_dx<2> star_stencil_2D[5] = {{0,0},{-1,0},{+1,0},{0,-1},{0,+1}};\n \n\/\/\/\/\/ create an iterator for the stencil on the mesh \"Old\" \nauto it = Old.getDomainIteratorStencil(star_stencil_2D);\n\nwhile (it.isNext()) {\n \/\/\/\/\/ define aliases for center, minus-x, plus-x, minus-y, plus-y.\n \/\/\/\/\/ The template parameter is the stencil element.\n auto Cp = it.getStencilGrid<0>();\n auto mx = it.getStencilGrid<1>();\n auto px = it.getStencilGrid<2>();\n auto my = it.getStencilGrid<3>();\n auto py = it.getStencilGrid<4>();\n \n \/\/\/\/\/ apply the stencil to field U on mesh \"Old\" and store \n \/\/\/\/\/ the result in the field U on mesh \"New\"\n 
New.get(Cp) = Old.get(Cp) + \n (Old.get(my)+Old.get(py)+Old.get(mx)+Old.get(px) - \n 4.0*Old.get(Cp));\n \n \/\/\/\/\/ Move to the next mesh node\n ++it;\n}\n\\end{lstlisting}\n\\end{frame} \n \nThe performance of OpenFPM compared to AMReX is shown in Table \\ref{table:ofp_amrex_scal} and Fig.~\\ref{amrex_scal_ofp}. OpenFPM scales slightly better than AMReX, with wall-clock times in the same range. Both codes saturate at the same wall-clock time for large numbers of cores (Fig.~\\ref{amrex_scal_ofp}). Both AMReX and OpenFPM use mixed C++\/Fortran code for this benchmark, with all stencil iterations implemented in Fortran. Because Fortran provides native support for multi-dimensional arrays, it produces more efficient assembly code than C++. In our tests, a fully C++ version was about 20\\% slower than the hybrid C++\/Fortran version. We note that the present benchmark problem is relatively small ($256^3$ mesh nodes), which is why strong scaling saturates already at about 24 cores. \n\n\\begin{table}[!htb]\n \\centering\n\t\t\\begin{tabular}{ |l|l|l|l| }\n\t\t\\#cores & OpenFPM (seconds) & AMReX (seconds) & AMReX param \\\\\n\t\t1 & 393.1 $\\pm$ 1.3 & 388.5 $\\pm$ 1.5 & 256 \\\\\n\t\t2 & 207.5 $\\pm$ 1.3 & 265.0 $\\pm$ 0.8 & 128 \\\\\n\t\t4 & 105.8 $\\pm$ 1.3 & 144.8 $\\pm$ 0.3 & 128\\\\\n\t\t8 & 65.1 $\\pm$ 2.1 & 106.6 $\\pm$ 2.6 & 128\\\\\n\t\t12 & 65.6 $\\pm$ 2.6 & 90.9 $\\pm$ 5.0 & 64\\\\\n\t\t16 & 57.6 $\\pm$ 1.9 & 173.6 $\\pm$ 3.6 & 64\\\\\n\t\t20 & 56.8 $\\pm$ 2.0 & 66.0 $\\pm$ 1.7 & 64\\\\\n\t\t24 & 60.5 $\\pm$ 0.3 & 60.9 $\\pm$ 4.0 & 64\\\\\n\t\\end{tabular}\n\t\\vspace{0.5cm}\n\t\\caption{Performance of the OpenFPM finite-difference code compared with AMReX \\cite{AMReX}. Times are given in seconds as mean$\\pm$standard deviation over 10 independent runs for a fixed problem size of $256^3$ mesh nodes (strong scaling). 
The grid-size parameters used for AMReX are given in the last column.}\n\t\\label{table:ofp_amrex_scal}\n\\end{table}\n\n\\begin{figure}[t!]\n\\begin{minipage}[t]{0.5\\textwidth}\n\\centering\n \\includegraphics[scale=0.5]{scal_amrex.eps}\n\\end{minipage}\n\\caption{Scalability of the OpenFPM finite-difference code (blue circles) in comparison with AMReX \\cite{AMReX} (red triangles) for a strong scaling. Shown is the wall-clock time in seconds to complete 5000 time steps of the Gray-Scott finite-difference code (5-point stencil) on a $256^3$ uniform Cartesian grid using different numbers of cores.}\\label{amrex_scal_ofp}\n\\end{figure}\n\n\\subsection{Vortex Methods}\n\nAs a fourth showcase, and in order to show how OpenFPM handles hybrid particle-mesh problems, we consider a full vortex-in-cell code \\cite{Cottet:2000}, a hybrid particle-mesh method to numerically solve the incompressible Navier-Stokes equations in vorticity form with periodic boundary conditions. These equations are:\n\\begin{equation}\n\\begin{split}\n\\frac{D \\bm{\\omega}}{D t} & = (\\bm{\\omega} \\cdot \\nabla)\\bm{u} + \\nu \\Delta \\bm{\\omega} \\\\\n-\\Delta \\bm{\\psi} & = \\nabla \\times \\bm{u} = \\bm{\\omega} \\, , \n\\end{split}\n\\label{eq:NSvort}\n\\end{equation}\nwith $\\bm{\\omega}$ the vorticity, $\\bm{\\psi}$ the vector stream function (such that \\mbox{$\\bm{u} = \\nabla \\times \\bm{\\psi}$}), $\\nu$ the viscosity, and $\\bm{u}$ the velocity of the fluid. The operator $\\frac{D}{Dt}$ denotes a Lagrangian (material) time derivative \\cite{Cottet:2000}. We numerically solve these equations using an OpenFPM-based implementation of the classic vortex-in-cell method as given in Algorithm \\ref{vortex_in_cell_val} with two-stage Runge-Kutta time stepping. Particle-mesh and mesh-particle interpolations use the moment-conserving $M^{\\prime}_{4}$ interpolation kernel \\cite{Monaghan:1992}.\n\nWe run a simulation that reproduces previous results of a self-propelling vortex ring \\cite{Bergdorf:2007}. 
The vortex ring is initialized on a grid of size $1600 \\times 400 \\times 400$ using\n\\begin{equation}\n \\bm{\\omega}_0 = \\frac{\\Gamma}{\\pi \\sigma^2} e^{-s\/\\sigma} \\, , \n\\end{equation}\nwhere \\mbox{$s^2 = (z-z_c)^2 + \\left[\\sqrt{(x-x_c)^2 + (y - y_c)^2} - R\\right]^2$}, with \\mbox{$R=1$}, \\mbox{$\\sigma=R\/3.531$}, and the domain \\mbox{$(0 \\ldots 5.57,\\, 0 \\ldots 5.57,\\, 0 \\ldots 22.0)$}. \nWe set \\mbox{$\\Gamma = 1$}, and \\mbox{$x_c=2.785$}, \\mbox{$y_c=2.785$}, \\mbox{$z_c=2.785$} as the center of the torus defining the initial vortex ring.\n\nA Runge-Kutta time-stepping scheme of order 2 is used with fixed step size \\mbox{$\\delta t = 0.0025$}. All differential operators are discretized using second-order symmetric finite differences on the mesh. We use 256 million particles distributed across 3072 processors to simulate the behavior of the vortex ring at Reynolds number \\mbox{$\\mathrm{Re}=3750$} until final time \\mbox{$t = 225.5$}. VTK files are written by OpenFPM and directly visualized using Paraview \\cite{Ayachit:2015}. We observe the same patterns and structures for the ring as in Ref.~\\cite{Bergdorf:2007}, see Fig.~\\ref{fig:vortex_in_cell_result}. \n\n\\begin{figure}[t!]\n\\begin{minipage}[t]{0.49\\textwidth}\n \\includegraphics[scale=0.07]{vortex_validation.png}\n \\subcaption{}\n\\end{minipage}\n\\begin{minipage}[t]{0.49\\textwidth}\n \\includegraphics[scale=0.07]{vortex_turbolent.png}\n \\subcaption{}\n\\end{minipage}\n\\begin{minipage}[t]{0.49\\textwidth}\n \\includegraphics[scale=0.07]{vortex_turbolent_back.png}\n \\subcaption{}\n\\end{minipage}\n\\begin{minipage}[t]{0.49\\textwidth}\n \\includegraphics[scale=0.07]{vortex_turbolent_front.png}\n \\subcaption{}\n\\end{minipage}\n\\caption{Visualization of the OpenFPM simulation of a vortex ring at Re=3750 using a hybrid particle-mesh Vortex Method (Algorithm \\ref{vortex_in_cell_val}) to solve the incompressible Navier-Stokes equations with 256 million particles on 3072 processors. 
Results are visualized for \\mbox{$t = 195.5$} when the ring is just about to become turbulent. (a) The iso-surfaces of vorticity highlight the tubular dipole structures in the vortex ring. Color corresponds to the $x$-component of the vorticity, with red indicating positive and blue indicating negative values. (b)--(d) Three different views of a volume rendering of four vorticity bands: orange is \\mbox{$\\norm{\\bm{\\omega}}^2 = 2.3 \\ldots 3.239$}, green is \\mbox{$\\norm{\\bm{\\omega}}^2 = 1.16 \\ldots 1.372$}, yellow is \\mbox{$\\norm{\\bm{\\omega}}^2 = 0.7 \\ldots 0.815$}, and blue is \\mbox{$\\norm{\\bm{\\omega}}^2 = 0.3 \\ldots 0.413$}.}\n\\label{fig:vortex_in_cell_result}\n\\end{figure}\n\nThe performance and scalability of the OpenFPM code are limited by the linear system solver required for computing the velocity from the vorticity on the mesh, i.e., by solving the Poisson equation. In this benchmark, OpenFPM internally uses a solver provided by the PETSc library \\cite{petsc-web-page}. We benchmark the parallel scalability of the solver and the overall code in a weak scaling starting from a \\mbox{$109 \\times 28 \\times 28$} mesh on 1 processor up to \\mbox{$1207 \\times 317 \\times 317$} mesh nodes on 1536 processors. We separately time the efficiency of the PETSc solver and of the OpenFPM parts of the code (particle-mesh\/mesh-particle interpolation, remeshing, time integration, right-hand side evaluation). The results are shown in Fig.~\\ref{fig:vc_weak_scal}. Within a cluster node (1$\\ldots$24 cores), the decay in efficiency can be explained by the shared memory bandwidth (see Table \\ref{table:mem_bw}). PETSc shows another marked drop in efficiency when transitioning from one cluster node to two nodes (48 cores). 
After that, the efficiency remains stable until 768 cores, when it starts to slowly drop again.\n\nTo put these results into perspective, we compare the particle-mesh interpolation part of the code with the corresponding part of a PPM-based hybrid particle-mesh vortex code previously used \\cite{Sbalzarini:2006b}. We only compare this part of the code in order to exclude differences between PETSc and PPM's own internal solvers. \nInterpolating two million particles to a $128^3$ mesh takes 0.41\\,s in OpenFPM and 3.4\\,s in PPM on a single core. \nPerforming a weak scaling starting from a $128^3$ mesh on 1 processor, the OpenFPM particle-mesh interpolation reaches a parallel efficiency of 75\\% on 128 cores (16 nodes using 8 cores of each node). This is comparable with the scalability of PPM on the same test problem (see Fig.~13 of Ref.~\\cite{Sbalzarini:2006b}).\n\n\\begin{figure}[t!]\n\\begin{minipage}[t]{0.49\\textwidth}\n \\includegraphics[scale=0.55]{vic_efficency.eps}\n\\end{minipage}\n\\caption{Parallel efficiency of the OpenFPM-based hybrid particle-mesh vortex code for a scaled-size problem (weak scaling). The problem size scales from \\mbox{$109 \\times 28 \\times 28$} mesh nodes on 1 processor core to \\mbox{$1207 \\times 317 \\times 317$} mesh nodes on 1536 cores (24 cores per node). We separately show the parallel efficiency for the PETSc Poisson solver (yellow squares), the OpenFPM parts of the code (red triangles), and the resulting overall scalability (blue circles). For three points, the problem sizes and the overall wall-clock time per time step in seconds are indicated next to the symbols. 
We note that the computational complexity of the Poisson solver is not linear with problem size.}\n\\label{fig:vc_weak_scal}\n\\end{figure}\n\n\\begin{algorithm}\n\\caption{Vortex-in-Cell Method with two-stage Runge-Kutta (RK) time integration}\n\\label{vortex_in_cell_val}\n\\begin{algorithmic}[1]\n\\Procedure{VortexMethod}{}\n\\State initialize the vortex ring on the mesh\n\\State do a Helmholtz-Hodge projection to make the vorticity divergence-free\n\\State initialize particles at the mesh nodes \n\\While {$t < t_\\text{end}$}\n\\State calculate velocity $\\bm{u}$ from the vorticity $\\bm{\\omega}$ on the mesh (Poisson equation solver)\n\\State calculate the right-hand side of Eq.~\\ref{eq:NSvort} on the mesh and interpolate to particles\n\\State interpolate velocity $\\bm{u}$ to particles\n\\State {\\em 1st RK stage}: move particles according to the velocity; save old position in $\\bm{x}_\\text{old}$\n\\State interpolate vorticity $\\bm{\\omega}$ from particles to mesh\n\\State calculate velocity $\\bm{u}$ from the vorticity $\\bm{\\omega}$ on the mesh (Poisson equation solver)\n\\State calculate the right-hand side of Eq.~\\ref{eq:NSvort} on the mesh and interpolate to particles\n\\State interpolate velocity $\\bm{u}$ to particles\n\\State {\\em 2nd RK stage}: move particles according to the velocity starting from $\\bm{x}_\\text{old}$\n\\State interpolate the vorticity $\\bm{\\omega}$ from particles to mesh \n\\State create new particles at mesh nodes (remeshing)\n\\EndWhile\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Discrete element methods}\n\n\\begin{figure}[t!]\n\\begin{minipage}[t]{0.33\\textwidth}\n \\includegraphics[scale=0.16]{DEM_0.png}\n \\subcaption{$t=0.0$}\n\\end{minipage}\n\\begin{minipage}[t]{0.33\\textwidth}\n \\includegraphics[scale=0.16]{DEM_50.png}\n \\subcaption{$t=1.5$}\n\\end{minipage}\n\\begin{minipage}[t]{0.33\\textwidth}\n \\includegraphics[scale=0.16]{DEM_530.png}\n 
\\subcaption{$t=15.9$}\n\\end{minipage}\n\\caption{Visualization of the Discrete Element Method (DEM) simulation of an avalanche of spheres down an inclined plane (inclination angle: 30 degrees). OpenFPM output is shown for simulated times \\mbox{$t=0.0$}, \\mbox{$t = 1.5$}, and \\mbox{$t = 15.9$}. Particles are rendered as spheres, colored by velocity magnitude.}\n\\label{fig:dem_sim}\n\\end{figure}\n\nDiscrete element methods (DEM) are important for the study of granular materials, in particular for determining effective macroscopic dynamics for which the governing equations are not known. They simulate each grain of the material explicitly, with all collisions fully resolved. Force and torque balance over the grains then governs their Newtonian mechanics. The main difference from MD is that forces are only exerted by direct contact, and that contact sites experience elastic deformation. In order to correctly integrate these deformations over time, lists of contact sites between particles need to be managed. Since these lists are of varying length, both in time and space, and collisions involving ghost particles need to be properly accounted for in the lists of the respective source particles, parallelizing DEM is not trivial. Previously, DEM has been parallelized onto distributed-memory machines using the PPM Library \\cite{Sbalzarini:2006b}, enabling DEM simulations of 122 million elastic spheres distributed over 192 processors \\cite{Walther:2009}. \n\nWe implement the same DEM simulation in OpenFPM in order to directly compare performance with the previous PPM implementation. We implement the classic Silbert grain model \\cite{Silbert:2001}, including Hertzian contact forces and elastic deformation of the grains, as previously considered \\cite{Walther:2009}. All particles have the same radius $R$, mass $m$, and moment of inertia $I$. Each particle $p$ is represented by the location of its center of mass $\\bm{r}_p$. 
When two particles $p$ and $q$ are in contact with each other, the radial elastic contact deformation is given by:\n\\begin{equation}\n\\delta_{pq} = 2R-r_{pq} \\, ,\n\\end{equation}\nwith $\\bm{r}_{pq}=\\bm{r}_p - \\bm{r}_q$ the vector between the two \nparticle centers and $r_{pq} = \\|\\bm{r}_{pq}\\|_2$ its length.\nThe evolution of the tangential elastic deformation $\\bm{u}_{t_{pq}}$ is integrated over the time duration of a contact using the explicit Euler scheme as:\n\\begin{equation}\n\\bm{u}_{t_{pq}} = \\bm{u}_{t_{pq}} + \\bm{v}_{t_{pq}} \\delta t \\, , \n\\label{eq:eldist}\n\\end{equation}\nwhere $\\delta t$ is the simulation time step, and $\\bm{v}_{t_{pq}}$ and $\\bm{v}_{n_{pq}}$ are the tangential and normal components, respectively, of the relative velocity \\mbox{$\\bm{v}_{pq} = \\bm{v}_{t_{pq}} + \\bm{v}_{n_{pq}}$} between the two colliding particles.\nFor each pair of particles that are in contact with each other, the normal and tangential\nforces are \\cite{Silbert:2001}:\n\\begin{equation}\n\\bm{F}_{n_{pq}}=\\sqrt{\\frac{\\delta_{pq}}{2R}}\\,\\,\\left(k_n\\delta_{pq}\\bm{n}_{pq}-\\gamma_n\nm_{\\text{eff}}\\bm{v}_{n_{pq}}\\right) \\, ,\n\\end{equation}\n\\begin{equation}\n\\bm{F}_{t_{pq}}=\\sqrt{\\frac{\\delta_{pq}}{2R}}\\,\\,\\left(-k_t\\bm{u}_{t_{pq}}-\\gamma_t\nm_{\\text{eff}}\\bm{v}_{t_{pq}}\\right) \\, ,\n\\end{equation}\nwhere $k_{n,t}$ are the elastic constants in the normal and tangential directions,\nrespectively, and $\\gamma_{n,t}$ the corresponding friction constants. The\neffective collision mass is given by $m_{\\text{eff}}=\\frac{m}{2}$. 
\nIn addition, the tangential deformation is rescaled to enforce Coulomb's law as described in \\cite{Silbert:2001,Walther:2009}.\nThe total resultant force $\\bm{F}_p^{\\text{tot}}$ and torque $\\bm{T}_p^{\\text{tot}}$ on particle $p$ are then computed by summing the contributions over all current collision partners $q$ and adding the gravitational force vector.\nWe integrate the equations of motion \nusing the second-order accurate leapfrog scheme, as: \n\\begin{equation}\n \\bm{v}_p^{n+1} = \\bm{v}_p^n + \\frac{\\delta t}{m}\\bm{F}_p^{\\text{tot}} \\, ,\n \\qquad\n \\bm{r}_p^{n+1} = \\bm{r}_p^n + \\delta t \\bm{v}_p^{n+1} \\, ,\n \\qquad\n \\bm{\\omega}_p^{n+1} = \\bm{\\omega}_p^n + \\frac{\\delta t}{I}\\bm{T}_p^{\\text{tot}} \\, ,\n\\end{equation}\nwhere $\\bm{r}_p^{n}$, $\\bm{v}_p^{n}$, and $\\bm{\\omega}_p^{n}$ are the center-of-mass position, velocity, and angular velocity of particle $p$ at time step $n$.\n\nWe simulate an avalanche down an inclined plane, which has previously been used as a benchmark case for distributed-memory parallel DEM simulations using the PPM Library \\cite{Walther:2009}.\nThe simulation, visualized in Fig.~\\ref{fig:dem_sim}, uses 82,300 particles with \\mbox{$k_n=7.849$}, \\mbox{$k_t=2.243$}, \\mbox{$\\gamma_n=3.401$}, \\mbox{$R=0.06$}, \\mbox{$m=1.0$}, and \\mbox{$I = 1.44\\cdot 10^{-3}$}. \nThe size of the simulation domain is \\mbox{$8.4 \\times 3.0 \\times 3.18$}. Initially, all particles are placed on a Cartesian lattice inside a box of size \\mbox{$4.26 \\times 3.06 \\times 1.26$}, as shown in Fig.~\\ref{fig:dem_sim}a. The simulation box is inclined by 30 degrees by rotating the gravity vector accordingly and has fixed-boundary walls in the $x$-direction, a free-space boundary in the positive $z$-direction, and periodic boundaries in the $y$-direction. 
\n\n\\begin{figure}[htbp]\n \\centering\n \\hspace*{-1cm}\\includegraphics[scale=.5]{DEM.eps}\n \\caption{Strong scaling of the OpenFPM DEM simulation using a fixed problem size of $677,310$ particles distributed onto up to 192 cores using 8 cores on each cluster node. The numbers near the symbols indicate the absolute wall-clock time per time step in seconds.}\n \\label{fig:ppm_openfpm}\n\\end{figure}\n\nWe compare the performance of the OpenFPM DEM with the legacy PPM code \\cite{Walther:2009} using the same test problem. In Fig.~\\ref{fig:ppm_openfpm}, we plot the parallel efficiency of the OpenFPM DEM simulation for a strong scaling on up to 192 processors. OpenFPM completes one time step with 677,310 particles on one core in 0.32 seconds, whereas the PPM-based code needs 1.0 second per time step for 635,780 particles. On 192 cores, OpenFPM completes a time step of the same problem in 3\\,ms with a parallel efficiency of 56\\%. In comparison, the PPM DEM client needs 11\\,ms per time step on 192 cores with a parallel efficiency of 47\\%~\\cite{Walther:2009}. This literature result is compatible with our present benchmark, as the PPM code was tested on a Cray XT-3 machine, whose AMD Opteron 2.6\\,GHz processors are about 3 times slower than the 2.5\\,GHz Intel Xeon E5-2680v3 of the present benchmark machine, indicating similar effective performance for both codes. \n\n\n\\subsection{Particle-swarm covariance-matrix-adaptation evolution strategy (PS-CMA-ES)}\n\nOne of the main advantages of OpenFPM over other simulation frameworks is that OpenFPM can transparently handle spaces of arbitrary dimension. 
This enables simulations in higher-dimensional spaces, such as the four-dimensional spaces used in lattice quantum chromodynamics \\cite{Wilson:1974,Bonati:2012}, and it also enables parallelization of non-simulation applications that require high-dimensional spaces, including image analysis algorithms \\cite{Afshar:2016} and Monte-Carlo sampling strategies \\cite{Muller:2010a}. \n\nA particular Monte-Carlo sampler used for stochastic real-valued optimization is the Covariance-Matrix-Adaptation Evolution Strategy (CMA-ES) \\cite{Hansen:2003,Hansen:2007}. The goal is to find a (local) optimum of a (non-convex) function $f : \\mathbb{R}^{n} \\to \\mathbb{R}$. In practical applications, the dimensionality $n$ of the domain is typically 10 to 50. CMA-ES has previously been parallelized by running multiple instances concurrently that exchange information akin to a particle-swarm optimizer. The resulting particle-swarm CMA-ES (PS-CMA-ES) has been shown to outperform standard CMA-ES on multi-funnel functions \\cite{Muller:2009}, and an efficient Fortran implementation of it is available, pCMALib \\cite{Muller:2009a}. \n\nHere, we implement PS-CMA-ES using OpenFPM in order to demonstrate how OpenFPM transparently handles high-dimensional spaces and also extends to non-simulation applications, such as sampling and computational optimization. In our implementation, each OpenFPM particle corresponds to one CMA-ES instance, hence implementing PS-CMA-ES through particle interactions across processors. To validate the OpenFPM implementation, we use the multi-modal test function $f_{15}$ from the IEEE CEC2005 set of standard optimization test functions \\cite{Muller:2011a}. In order to directly compare with pCMALib, we limit the total number of function evaluations allowed to \\mbox{$5\\times 10^5$} and run both implementations 25 times each. 
We compare the success rate, i.e., the fraction of the 25 runs that found the true global optimum, and the success performance, i.e., the average best function value found across all 25 runs, in 10, 30, and 50 dimensions \\cite{Muller:2009,Muller:2009a}. The results from the OpenFPM-based implementation are identical to those from pCMALib when using the same pseudo-random number sequence (not shown).\n\nWe also compare the runtime performance and parallel scalability of the OpenFPM-based implementation with the highly optimized Fortran pCMALib. The results are shown in Fig.~\\ref{fig:ps_cma_es_result} for dimension 50. For dimensions 10 and 30, the results are analogous and not shown. Since the total number of function evaluations is kept constant at \\mbox{$5 \\times 10^5$}, irrespective of the number of cores used, this amounts to a strong scaling. However, the number of swarm particles is always chosen equal to the number of cores, as this is a hard requirement of pCMALib, while OpenFPM would not require this. In all cases, the OpenFPM implementation is about one third faster than pCMALib.\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[scale=0.5]{PS_CMA_ES_50.eps}\n\\caption{Strong scaling for the OpenFPM PS-CMA-ES client (blue circles) in comparison with the Fortran pCMALib (red triangles) scaling from 1 to 48 cores for IEEE CEC2005 test function $f_{15}$ in dimension 50. Shown is the minimum (over 25 independent repetitions) total wall-clock time in seconds for \\mbox{$5 \\times 10^5$} function evaluations.}\n\\label{fig:ps_cma_es_result}\n\\end{figure}\n\n\n\nImplementing arbitrary-dimensional codes using OpenFPM is straightforward, as the dimensionality is a template parameter in all data structures. While, of course, the memory requirement for mesh data structures grows exponentially with dimension, the size of particle data structures scales linearly. 
The code example in Listing \\ref{list:cma} illustrates how the PS-CMA-ES data structures in 50 dimensions are defined in OpenFPM. All iterators and mappings work transparently. This example illustrates that OpenFPM naturally extends to problems in higher-dimensional spaces, which the original PPM Library~\\cite{Sbalzarini:2006b} could not. \n\n\\begin{frame}[Listing 4.6: OpenFPM code example for high-dimensional spaces]\\label{list:cma}\n\\lstset{language=C++,\n basicstyle=\\footnotesize\\ttfamily,\n keywordstyle=\\color{blue},\n stringstyle=\\color{red},\n breaklines=true,\n numbers=left,\n lineskip=-0.7ex,\n basewidth={0.5em,0.5em},\n commentstyle=\\color{green},\n morecomment=[l][\\color{magenta}]{\\#}\n}\n\\begin{lstlisting}\nconstexpr int dim = 50; \/\/ define the dimensionality\n\n\/\/\/\/\/ Define the optimization domain as (-5:5)^dim\nBox<dim,double> domain;\nfor (size_t i = 0; i < dim; i++) {\n domain.setLow(i,-5.0);\n domain.setHigh(i,5.0);\n}\n\n\/\/\/\/\/ Define non-periodic boundary conditions\nsize_t bc[dim];\nfor (size_t i = 0; i < dim; i++) {bc[i] = NON_PERIODIC;};\n\n\/\/\/\/\/ There are no ghost layers needed for this problem\nGhost<dim,double> g(0.0);\n\n\/\/\/\/\/ define the particles data structure\nvector_dist<dim,double,aggregate<double>> particles(8,domain,bc,g);\n\n\/\/\/\/\/ get an iterator over particles and loop over all of them\nauto it = particles.getDomainIterator();\nwhile (it.isNext()) {\n ....... \/\/ do PS-CMA-ES here\n ++it;\n}\n\\end{lstlisting}\n\\end{frame} \n\n\\section{Conclusions}\nWe have presented OpenFPM, an open-source framework for particle and particle-mesh codes on parallel computers. OpenFPM implements abstract data structures and operators for particles-only and hybrid particle-mesh methods \\cite{Sbalzarini:2010}. The same abstractions were already implemented in the discontinued PPM Library \\cite{Sbalzarini:2006b}, which has enabled particle-mesh simulations of unprecedented scalability and performance over the past 12 years \\cite{Sbalzarini:2010}. 
OpenFPM extends this to a modern software-engineering framework using C++ Template Meta-Programming (TMP), continuous integration, and rigorous unit testing. OpenFPM provides a scalable infrastructure that allows implementing particle-mesh simulations of both discrete and continuous models, as well as non-simulation applications such as computational optimization \\cite{Muller:2009} and image analysis \\cite{Afshar:2016}. The parallelization infrastructure provided by OpenFPM includes dynamic load (re-)balancing, parallel and distributed HDF5 file I\/O, checkpoint-restart on different numbers of processors, transparent iterators for particles and mesh nodes, and adaptive domain decompositions. This infrastructure is supplemented with frequently used numerical solvers and a range of convenience functions, including direct VTK file output for visualization of simulation results using the open-source software Paraview \\cite{Ayachit:2015}.\n\nWe have described the architectural principles of OpenFPM and provided an overview of its functionality. We have then showcased and benchmarked the framework in six applications ranging from molecular dynamics simulations to 3D fluid mechanics to discrete element simulations, to optimization in high-dimensional spaces. Despite the automatic and transparent parallelization in OpenFPM, code performance and scalability in all examples was comparable to or better than those of state-of-the-art application-specific codes. \n\nWe have tested OpenFPM on up to 3072 processor cores, simulating systems with millions of degrees of freedom. For molecular dynamics, wall-clock times per time step were between 0.5\\,ms and 1\\,s, almost reaching the performance and scalability of the highly optimized LAMMPS code \\cite{Plimpton:1995}. For SPH, OpenFPM outperforms the popular DualSPHysics CPU code \\cite{Crespo:2015} by about a factor of two, reaching GPU performance when using 270 CPU cores or more. 
Solving a finite-difference system on a regular Cartesian mesh, OpenFPM outperforms the highly optimized AMReX code \\cite{AMReX} on small-scale problems, both in terms of scalability and performance. When using Vortex Methods \\cite{Cottet:2000} to simulate incompressible fluid flow, OpenFPM was able to compute vortex-ring dynamics at Re=3750 using 256 million particles on 3072 processors and achieved state-of-the-art parallel efficiencies in all benchmarks. Using DEM to simulate a granular avalanche down an inclined plane illustrated OpenFPM's capability to handle complex particle properties, such as time-varying contact lists, outperforming the previous PPM code \\cite{Sbalzarini:2006b,Walther:2009} by a small margin. Finally, we illustrated the use of OpenFPM in high-dimensional problems by implementing PS-CMA-ES and comparing with the pCMAlib Fortran library \\cite{Muller:2009a}. This benchmark has shown the simplicity with which OpenFPM handles different space dimensions, while maintaining performance and scalability. Taken together, OpenFPM offers state-of-the-art performance and scalability at a reduced code development overhead. It overcomes the main limitations of the PPM Library \\cite{Sbalzarini:2006b} by extending to spaces of arbitrary dimension and allowing particles to carry arbitrary data types (C++ objects) as particle properties. It also adds automatic dynamic load (re-)balancing, transparent internal memory management and re-alignment, parallel checkpoint-restart, visualization file output, and custom distributed template expressions.\n\nOpenFPM is going to be supported and developed in the long term. 
In the future, we plan to add the following functionalities to OpenFPM: transparent support for Discretization-Corrected Particle-Strength Exchange (DC-PSE) operators for the consistent discretization of differential operators on arbitrary particle distributions \\cite{Schrader:2010}, an efficient distributed multi-grid solver for the general Poisson equation, 3D rendering capabilities for real-time {\\it in-situ} visualization of a running simulation on screens and in virtual-reality environments, a compiler and development environment for application-specific language front-ends to OpenFPM, static (compile-time) and dynamic (runtime) code analysis \\cite{Karol:2018} and optimization in order to reduce communication overhead to the required minimum, as well as support for adaptive-resolution particle representations \\cite{Reboux:2012,Awile:2012} and GPU calculations \\cite{Buyukkececi:2013}. In addition, we will further improve performance and scalability, e.g., by optimizing the domain decomposition and sub-domain merging implementations and by using space-filling curves, such as Morton curves, to constrain processor assignment.\n\nThe source code of OpenFPM, virtual machines for various operating systems with a complete OpenFPM environment pre-installed, virtualized Docker containers, documentation, example applications, and tutorial videos are freely available from {\\url{http:\/\/openfpm.mpi-cbg.de}}. We hope that the flexibility, free availability, performance, quality of documentation, and long-term support of OpenFPM will make it a standard platform for particles-only and hybrid particle-mesh simulations of discrete and continuous models on parallel computer hardware, as well as for non-simulation applications, such as evolutionary optimization strategies and particle-based image-analysis methods \\cite{Cardinale:2012,Afshar:2016}. \n\n\n\\section*{Acknowledgments}\n\nWe thank all members of the MOSAIC Group for the many fruitful discussions. 
We particularly thank the early adopters and test users of OpenFPM whose feedback has helped improve the library throughout: Prof.~Nikolaus Adams and Dr.~Stefan Adami (both TU Munich, Germany), Prof.~Marco Ellero (Swansea University, UK), Prof. Bernhard Peters (University of Luxembourg, Luxembourg), Prof. Bonnefoy (\\'{E}cole des Mines Saint-\\'{E}tienne, France), and Dr.~Yaser Afshar (University of Michigan, Ann Arbor, USA). This project was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under project ``OpenPME''.\n\n\n\\section{\\label{sec:Intro}Introduction}\n\nIt occurs in many areas of physics that the time-evolution of a spatially unbounded system must be analysed. Such systems have been studied in many fields of physics involving wave propagation, spanning areas such as laser physics and gravitational waves \\cite{springerlink:10.1134\/S1054660X10050063,symABC,Lau2004376,alpert2000}. Examples also occur in nuclear physics, and we analyse such a case in the present work.\n\nThe particular physical phenomenon being studied here is the nuclear giant monopole resonance. It is well known that these resonances lie above the particle decay threshold \\cite{giantResFHMNE}, so that one allowed decay mode involves the expulsion of one or more nucleons from the nucleus. In a time-dependent simulation of such a decay, the spatial region in which the nuclear wavefunction is non-negligible becomes larger and larger as time goes on. \n\nOne way of analysing this sort of system is via the time-dependent Hartree-Fock (TDHF) approximation, which reduces the many-body interaction to a simpler mean-field one. The simplified problem still does not admit analytic solutions, but it is amenable to numerical treatment at a manageable computational cost. 
\n\nA common numerical implementation is to discretise the equations using time and space grids employing finite difference methods. Here, a non-trivial problem occurs because the edge of the finite grid imposes an artificial boundary on the solution. As the outgoing wave condition for the Hartree-Fock equation is evaluated at infinity, it cannot be applied directly. Enforcing the wrong boundary conditions results in the solution becoming incorrect for times after the emitted particles have reached the artificial boundary, and so it can be important that the boundary is handled properly.\n\nThere are various methods available that aim to simulate or circumvent the application of the outgoing wave condition \\cite{giantRes,springerlink:10.1140\/epjad\/i2005-06-052-x}. The crudest is simply to apply a reflecting boundary sufficiently far away so that the matter being emitted does not reach it within the time of the calculation. This works, and reflecting boundaries are easy to implement, but the major drawback is that one needs an increasing number of grid points in space as one wants to evolve further in time. Eventually, this becomes computationally infeasible.\n\nOther methods include absorbing potentials and masking functions. These allow the artificial boundary to be placed closer to the nucleus but generally have to be tuned to each particular case and do not in general approximate the outgoing wave condition perfectly.\n\nHere, we present a method of implementing exact boundary conditions \\cite{springerlink:10.1134\/S1054660X10050063,symABC}. 
These rely on choosing the artificial boundary such that the potential outside of it has a simple form, so that the propagation of waves in the exterior region does not have to be dealt with explicitly.\n\nIn solving the TDHF equations, a simplified Skyrme interaction is used in the implementation which reproduces the magic numbers needed for $\\nucl{4}{2}{He}$, $\\nucl{16}{8}{O}$ and $\\nucl{40}{20}{Ca}$ to be seen without the complexity of the full interaction \\cite{PhysRevC.60.044302}, as a reasonable proof-of-concept. Spherical symmetry is also assumed inside and outside of the artificial boundary. The calculations involve one, in the case of $\\nucl{4}{2}{He}$, or more, in the cases of $\\nucl{16}{8}{O}$ and $\\nucl{40}{20}{Ca}$, different forms of differential equation, each of which requires its own absorbing boundary condition to be applied. Here some continuous absorbing boundary conditions are used. Other types of absorbing boundary are fully-discrete \\cite{Antoine2003157} and semi-discrete \\cite{1dABC} but are not described here. A review of the various absorbing boundary conditions can be found in \\cite{abcreview}.\n\nThe structure of this paper is as follows: Section \\ref{sec:gmr} gives a brief summary of nuclear giant monopole resonances; sections \\ref{sec:ProbInt} and \\ref{sec:intDisc} describe the Hartree-Fock approximation, the first the theory and the second its discretization and implementation; sections \\ref{sec:probExt} and \\ref{sec:boundDisc} describe the exterior problem and the absorbing boundary conditions; sections \\ref{sec:ResultsAndTesting1}, \\ref{sec:ResultsAndTesting2} and \\ref{sec:ResultsAndTesting3} show the testing and results of our implementation, which include a short analysis of the errors caused by the discretization, strength functions for $\\nucl{4}{2}{He}$, $\\nucl{16}{8}{O}$ and $\\nucl{40}{20}{Ca}$, and results with large-amplitude excitation. 
We end with some concluding remarks.\n\n\n\\section{\\label{sec:gmr}Giant Monopole Resonances}\n\nGiant resonances are collective modes of excitation of finite fermionic systems \\cite{bertschbroglia}. The first evidence for their existence in atomic nuclei came in 1937, with a theoretical description and systematic experimental study coming in the next decade \\cite{harakeh}. While the first studies excited the electric isovector dipole resonance, in which protons and neutrons oscillated out of phase with each other due to the dominance of the E1 component of the photon field, other giant resonances were discovered later. In particular, the isoscalar giant monopole resonance (GMR) was definitively reported in 1977 \\cite{Harakeh1977}.\n\nThe GMR, as a compression mode, probes the nuclear equation of state \\cite{Blaizot1980}, and is therefore useful in constraining nuclear models \\cite{Dutra2012}. As a spherically-symmetric excitation, it is the first port of call for testing new theoretical methods, as the symmetry renders many types of calculation more simple. In particular, methods based on Time-Dependent Hartree-Fock have turned to giant monopole resonances in spherical doubly-magic nuclei as a proving ground \\cite{Vautherin1979,Stringari1979,Lacroix1998,Wu1999,Almehed2005,Almehed2005a,Stevenson2010}. \n\nThe present paper is written in that spirit, employing the simplified $t_0$-$t_3$ version of the Skyrme force used in previous applications \\cite{Stringari1979,Wu1999}. 
While the focus of this work is on the development of the boundary conditions, and the simplified Skyrme force we use should not be expected to give good agreement with experiment, it is noted that of the three nuclei considered here, the GMR has been unambiguously observed only in $^{40}$Ca \\cite{Brandenburg1983}, though the nature of giant resonances in general in nuclei as light as $^4$He is a subject of ongoing interest \\cite{Tornow2012}.\n\nThe key observable calculated for the giant resonance is the linear response function, describing the response of the nucleus to an external perturbation \\cite{NozPines}. From this, one derives the strength function, related in turn to the experimental cross section for the reaction. The strength function can be obtained, within TDHF, via the Fourier Transform of the time-dependent moment of the resonance mode desired \\cite{Chinn1996}, and we present calculations of such strength functions. We note that the strength functions are particularly sensitive to the success of the implementation of the absorbing boundary conditions \\cite{giantRes}, and provide a good measure of success, as well as being the physically relevant quantity.\n\n\n\n\\section{\\label{sec:ProbInt}Time Dependent Hartree-Fock}\n\n\nThe time-dependent Hartree-Fock method originates with Dirac \\cite{Dirac1930}, and became computationally viable for nuclear processes in the 1970s \\cite{Bonche1976,Cusson1976,Devi1979}. Since then it has been extensively used for calculating heavy-ion reactions \\cite{Maruhn2006} and giant resonances \\cite{Stevenson2004}, with increasingly sophisticated implementations of the effective interaction \\cite{Umar2006,PhysRevC.86.044303}. A full derivation of the Time-dependent Hartree-Fock equations in the case of Skyrme forces can be found in the original paper by Engel et al. \\cite{Engel1975}. 
In the present case, with the simplified Skyrme force, and omitting Coulomb, we note that the Time-Dependent Hartree-Fock equations can be written as a series of coupled non-linear Schr\\\"odinger equations of the form\n\\begin{equation}\ni\\hbar\\frac{\\partial\\psi_\\lambda(\\vec{r},t)}{\\partial t}=\\hat{h}\\psi_\\lambda(\\vec{r},t),\\qquad\\lambda=1,\\ldots,A,\n\\end{equation}\nwhere the Hartree-Fock Hamiltonian is given by\n\\begin{equation}\n\\hat{h} = -\\frac{\\hbar^2}{2m}\\nabla^2 + a\\rho(\\vec{r},t) + b\\rho^2(\\vec{r},t),\n\\end{equation}\nwith $\\rho(\\vec{r},t)=\\sum_{\\lambda=1}^A\\psi_\\lambda^*(\\vec{r},t)\\psi_\\lambda(\\vec{r},t)$ denoting the particle density. The values of $a$ and $b$ used throughout this paper are taken from \\cite{PhysRevC.56.857}, where they take the values $-817.5$ MeV fm$^3$ and $3241.5$ MeV fm$^6$. In practice, the time-dependent Hartree-Fock equations are solved by evolving in time according to\n\\begin{equation}\n\\psi_\\lambda(\\vec{r},t+\\Delta t) = e^{-i\\Delta t \\hat{h}\/\\hbar}\\psi_\\lambda(\\vec{r},t).\n\\end{equation}\nSpecialisation to spherical symmetry and details of the discretisation methods are given in the following sections, together with the details of the algorithm dealing with the boundary conditions.\n\n\\section{\\label{sec:intDisc}Interior Discretization}\nAs well as the coupled non-linear differential equations noted in the previous section, initial conditions are required, and are calculated from stationary Hartree-Fock. We first describe our method for calculating the stationary solution and then go on to the time-dependent case. 
In both we discretised the equations on equally spaced grids for simplicity, though non-uniform grids can themselves be useful in pushing the boundary far into the exterior region at an acceptable computational cost \\cite{PhysRevC.71.024301}.\n\n\\subsection{Stationary Discretisation}\nWe start with the calculation of the initial condition, which itself is a non-linear problem. We solve it by the following iterative procedure:\n\\begin{eqnarray}\n&\\hat{H}^{(i)}_\\alpha(r) Q_\\alpha^{(i+1)}(r) = \\lambda^{(i+1)}_\\alpha Q_\\alpha^{(i+1)}(r)& \\\\\n&\\hat{H}^{(i)}_\\alpha(r) = -\\half\\pderivtwo{}{r} + \\sbrac{\\frac{l_\\alpha(l_\\alpha+1)}{2r^2} + V\\left\\{ \\rho^{(i)}(r)\\right\\}}& \\label{eq:hfhamil} \\\\\n&\\rho^{(0)}(r) = \\frac{1}{4\\pi r^2}\\sum_\\alpha g_\\alpha |Q_\\alpha^{(0)}(r)|^2& \\label{TIHF_initialden} \n\\end{eqnarray}\nfor $i\\in\\mathbb{N}_0$ and where $Q(r)=r\\psi$ represents the reduced wave function, $V$ is the potential, and $l_\\alpha$ the orbital angular momentum. We calculate the initial guess, $\\rho^{(0)}(r)$, using harmonic oscillator wave-functions as the $Q^{(0)}_\\alpha(r)$ in equation \\bref{TIHF_initialden}.\n\nSpatial discretisation of the equations is made on a uniformly-spaced grid, such that\\begin{eqnarray}\nr_m=m\\Delta r \\text{, } m=1,\\hdots,M\\text{, } \\Delta r = \\frac{R_{out}}{M} \\label{TIHFspaceGrid}\n\\end{eqnarray}\nwhere $M$ is the total number of gridpoints and $R_{out}$ is the distance from the origin to the spherical outer boundary. The second derivative operator in (\\ref{eq:hfhamil}) is treated with the three-point approximation.\n\nWe also require the wave functions at two additional points: $Q_\\alpha^{(i)}(r_0)\\equiv Q_\\alpha^{(i)}(0)$ and $Q_\\alpha^{(i)}(r_{M+1})\\equiv Q_\\alpha^{(i)}\\brac{(M+1)\\Delta r}$. 
Although our differential equation is not evaluated at these points, values of the wave function here are needed for the finite differencing.\n\nWorking with the reduced wave function leads to a boundary condition of $Q_\\alpha^{(i)}(r_0)=0$. However, the large-$r$ boundary condition, that the wave function remain square-integrable and fall to zero strictly only at infinity, cannot be applied directly. We make use of the property that the wavefunctions for bound states decay exponentially as $r$ increases. Hence we can find a radius at which the wavefunction is zero, within a given accuracy, and so we choose $Q_\\alpha^{(i)}(r_{M+1})=0$ for the solution of the static Hartree-Fock equations.\n\nThis leaves us with a tridiagonal matrix eigenvalue problem at each iteration, which can be solved efficiently using LAPACK subroutines.\n\nWe iterate until both the eigenvalue, $\\lambda^{(i+1)}_\\alpha$, and the mean square errors for each wave function,\n\\begin{eqnarray}\n\\epsilon_\\alpha &=&\n\\bigg\\lvert\\langle Q^{(i+1)}_\\alpha\\mid\\!\\hat{H}^{(i)}\\!\\mid Q^{(i+1)}_\\alpha\\rangle^2 \\nonumber \\\\\n&&- \\langle Q^{(i+1)}_\\alpha\\mid \\!\\brac{\\hat{H}^{(i)}}^2 \\!\\mid Q^{(i+1)}_\\alpha\\rangle\\bigg\\rvert,\n\\end{eqnarray} \nhave stopped changing, within machine precision, from one iteration to the next.\n \n\\subsection{Time-Dependent Discretisation} \\label{section_IntDiscTDT}\nAfter the initial states have been found using the above procedure we need to apply the monopole boost operator in order to start the nucleus in the breathing mode. This can be done using the usual boost operator for an isoscalar monopole mode\n\\begin{eqnarray}\nQ_\\alpha(r_m,0) = e^{ik r_m^2}Q_\\alpha(r_m), \\label{eq:boost}\n\\end{eqnarray}\nwhere $k$ is the adjustable strength. \n\n\nOnce this has been done the $Q_\\alpha$'s can be propagated in time. 
The equally spaced time grid\n\\begin{eqnarray}\nt_n=n\\Delta t\\text{, } n=1,\\hdots,N\n\\end{eqnarray} \nis used, together with the same space grid, \\bref{TIHFspaceGrid}, as in the stationary problem. The Crank-Nicolson method is then used for the time discretization of the time-dependent Hartree-Fock equation:\n\\begin{eqnarray}\n&&\\brac{\\hat{I}+\\frac{i\\Delta t}{2}\\hat{H}(r_m,t_{n-\\half})}Q_\\alpha(r_m,t_n) \\nonumber \\\\\n&&\\qquad= \\brac{\\hat{I}-\\frac{i\\Delta t}{2}\\hat{H}(r_m,t_{n-\\half})}Q_\\alpha(r_m,t_{n-1}) \\qquad \\nonumber \\\\\n&&\\qquad\\qquad+ \\mathcal{O}(\\Delta r^2,\\Delta t^2) \\label{TDHFCrank}\n\\end{eqnarray}\nWe choose the Crank-Nicolson method because it has properties that are useful for this type of calculation: it is unconditionally stable and it preserves the norm. However, being an implicit method, it involves the Hamiltonian evaluated at the half time-step and hence, through the potential term, the density at the half time-step. The resulting equations are therefore not a linear system. To get around this problem we use an explicit predictor step after each propagation in time to obtain the wavefunctions needed for the half-time-step density. We use a method based on the evolution operator:\n\\begin{eqnarray} \nQ(r_m,t_{n+\\half}) &=& \\exp\\brac{-\\frac{i\\Delta t}{2}\\hat{H}(r_m,t_n)}Q(r_m,t_n) \\label{intStepEvo}\\\\\n&=& \\sum_{j=0}^{j_{max}} \\frac{1}{j!}\\brac{-\\frac{i\\Delta t}{2}\\hat{H}(r_m,t_n)}^j Q(r_m,t_n) \\\\ &&+ \\mathcal{O}(\\Delta r^2,\\Delta t^{j_{max}+1})\n\\end{eqnarray}\nrequiring knowledge of the Hamiltonian only at the current time-step.\n\nOnce equation \\bref{TDHFCrank} has been discretized in space using central differences and the grid \\bref{TIHFspaceGrid} it is a tridiagonal matrix equation, again solved with LAPACK routines to advance from one time-step to the next. \n\nHowever, the last row in the matrix contains an unknown $Q(r_{M+1},t_n)$ for $n>0$. 
This value has to be specified by a boundary condition: we know the condition at infinity, but we require one at $r=(M+1)\\Delta r$. We could use the same reasoning as in the stationary case, that we can find a point at which the wavefunction will be zero and apply the boundary there. However, this system has a non-zero probability of particle emission, which manifests itself in the calculations as a thin non-zero tail travelling away from the central mass near the origin. This means that, as time passes, the point at which the wavefunction vanishes moves increasingly far away, corresponding to longer calculation times, which can be prohibitive. Hence we seek an absorbing boundary condition to give the value of $Q(r_{M+1},t)$.\n\n\n\n\\section{\\label{sec:probExt}Problem in the Exterior} \n\n\\subsection{Splitting the Domain}\n\nWe start by splitting the domain into two regions: an interior in which we choose to contain all the nuclear dynamics; and an exterior where we assume only the long-ranged components are of significance, in this case just the centrifugal barrier. Consider the partial differential equation for a single-particle state in coordinate space:\n\\begin{eqnarray}\ni\\pderiv{}{t}Q_l(r,t) = \\brac{-\\half\\pderivtwo{}{r} + V(r,t)}Q_l(r,t), \\label{exteriorDE}\n\\end{eqnarray} \nwith boundary conditions:\n\\begin{eqnarray}\nQ_l(0,t)=0, \\\\\n\\lim_{r\\to\\infty}Q_l(r,t)=0. \\label{exteriorBC2}\n\\end{eqnarray}\nWe can describe the splitting mathematically through the potential term:\n\\begin{eqnarray}\nV(r,t) \\equiv V_{short}(r,t) + V_{long}(r),\n\\end{eqnarray}\nwhere we define:\n\\begin{eqnarray}\nV_{short}(r,t)= 0 &\\text{ for }& r \\ge R, \\label{VintConditions}\\\\\nV_{long}(r)=\\frac{l(l+1)}{2r^2} &\\text{ for }& r \\ge 0 . \\label{VextConditions}\n\\end{eqnarray}\nThe problem has now been split into a region where the internal potential is present and one where it is not. 
The parameter $R$ is commonly called the artificial boundary and has to be chosen so that equations \\bref{VintConditions} and \\bref{VextConditions} are satisfied. We also assume that the initial wave function is zero outside the artificial boundary:\n\\begin{eqnarray}\nQ_l(r,0)=0 \\text{ for } r \\ge R . \\label{WFintConditions} \n\\end{eqnarray}\nThis is not overly restrictive and is consistent with our choice for the solution of the static Hartree-Fock equations.\n\n\\subsection{Deriving the Absorbing Boundary Conditions}\nWe now have all the assumptions needed to construct the absorbing boundary condition. There are various ways of doing this, and a Green's function approach has already been described by Heinen and Kull in \\cite{symABC,springerlink:10.1134\/S1054660X10050063} for this problem. We proceed differently, however, by describing a derivation using a Laplace transform method.\n\nWe start by recalling the definitions of the Laplace transform \\cite[Chapter 29]{AbraMathFunc} in time, $\\hat{f}(s)$, of a function, $f(t)$, as: \n\\begin{eqnarray} \n\\hat{f}(s) = \\int_{0}^{\\infty}f(t)e^{-st}dt, \\label{laplaceDef}\n\\end{eqnarray}\nand the inversion formula, known as the Bromwich integral \\cite{transMethDuffy}: \n\\begin{eqnarray}\nf(t) = \\frac{1}{2\\pi i}\\int_{c-i\\infty}^{c+i\\infty}\\hat{f}(s)e^{st}ds. 
\\label{inversionDef}\n\\end{eqnarray}\nCombining equations \\bref{exteriorDE} and \\bref{VextConditions} for $r\\ge R$ we have:\n\\begin{eqnarray}\ni\\pderiv{}{t}Q_l(r,t) = \\brac{-\\half\\pderivtwo{}{r} + \\frac{l(l+1)}{2r^2}}Q_l(r,t). \\label{exteriorDE2}\n\\end{eqnarray}\nMultiplying by $e^{-st}$ and integrating over time from $0$ to $\\infty$ allows us to use equation \\bref{laplaceDef}; since $Q_l(r,0)=0$ for $r\\ge R$ by \\bref{WFintConditions}, the transform of the time derivative is simply $s\\hat{Q_l}(r,s)$, and we obtain the ordinary differential equation: \n\\begin{eqnarray}\n\\half\\pderivtwo{\\hat{Q_l}(r,s)}{r} +\\brac{is - \\frac{l(l+1)}{2r^2} }\\hat{Q_l}(r,s) = 0.\n\\end{eqnarray}\nThe substitution $\\hat{Q_l}(r,s)=\\rho h_l(\\rho,s)$, where $\\rho = kr$ and $k=\\sqrt{2is}$, yields the following equation for $h_l(\\rho,s)$:\n\\begin{eqnarray}\n\\rho^2\\pderivtwo{h_l}{\\rho} +2\\rho\\pderiv{h_l}{\\rho} +\\brac{\\rho^2-l(l+1)}h_l = 0,\n\\end{eqnarray}\nwhere the square root is assumed to be on the branch resulting in a positive real part. As $l\\in\\mathbb{N}_0$ we can see that this equation has spherical Bessel functions as solutions \\cite[Chapter 10]{AbraMathFunc}, of which there are various satisfactory pairs. We choose the particular solutions as the spherical Bessel functions of the third kind, also known as spherical Hankel functions. Any pair of solutions can be used to give the same end result once the boundary conditions are applied. However this pair simplifies the consequent derivations.\n\nTaking the Hankel function solutions, we can write $\\hat{Q_l}$ as:\n\\begin{eqnarray}\n\\hat{Q_l}(r,s) = \\left. 
A(s)\\rho h_l^{(1)}(\\rho)+B(s)\\rho h_l^{(2)}(\\rho)\\right|_{\\rho=kr}.\n\\end{eqnarray}\nOnly the boundary condition \\bref{exteriorBC2}, or more precisely its Laplace transform, is relevant here, since $r\\ge R$; it may be applied by use of the following limiting forms for $z\\to\\infty$:\n\\begin{eqnarray}\n&&h_l^{(1)}(z) \\sim i^{-l-1}z^{-1}e^{iz}, \\label{h1limiting}\\\\\n&&h_l^{(2)}(z) \\sim i^{l+1}z^{-1}e^{-iz}.\n\\end{eqnarray}\nAssuming $c>0$ in the Bromwich integral \\bref{inversionDef} allows us to say that $y>0$, where $k=\\sqrt{2is}=x+iy$, along the integration path. So by the limiting form of $\\hat{Q_l}(r,s)$ as $r\\to\\infty$:\n\\begin{eqnarray}\n\\hat{Q_l}(r,s) \\sim A(s)i^{-l-1}e^{(ix-y)r}+B(s)i^{l+1}e^{(y-ix)r},\n\\end{eqnarray}\nwe must have $B(s)=0$.\n\n$\\hat{Q_l}(r,s)$ and its $r$ derivative can now be written as:\n\\begin{eqnarray}\n\\hat{Q_l}(r,s) = \\left. A(s)\\rho h_l^{(1)}(\\rho)\\right|_{\\rho=kr}, \\\\\n\\pderiv{\\hat{Q}_l(r,s)}{r} = \\left. A(s)k\\pderiv{}{\\rho}\\brac{\\rho h_l^{(1)}(\\rho)}\\right|_{\\rho=kr}.\n\\end{eqnarray}\nDivision of these two equations and evaluating on the artificial boundary yields the Laplace transform of the absorbing boundary condition:\n\\begin{eqnarray}\n\\hat{Q}_l(R,s) = \\brac{\\left.\\frac{1}{k}\\frac{ \\rho h_l^{(1)}(\\rho)}{ \\pderiv{}{\\rho}\\brac{\\rho h_l^{(1)}(\\rho)}}\\right|_{\\rho=kR}}\\pderiv{\\hat{Q_l}(R,s)}{r}.\n\\end{eqnarray}\nUse of the convolution theorem for Laplace transforms gives us the absorbing boundary condition:\n\\begin{eqnarray}\nQ_l(R,t) = \\int_0^{t} G_l(R,\\tau)\\pderiv{Q_l(R,t-\\tau)}{r} \\, d\\tau, \\label{ABCresult1}\n\\end{eqnarray}\nwhere we define:\n\\begin{eqnarray}\n\\hat{G}_l(R,s) \\equiv \\left.\\frac{1}{k}\\frac{ \\rho h_l^{(1)}(\\rho)}{ \\pderiv{}{\\rho}\\brac{\\rho h_l^{(1)}(\\rho)}}\\right|_{\\rho=kR}. 
\\label{laplaceABC1}\n\\end{eqnarray}\n$\\hat{G}_l(R,s)$ being the Laplace transform of $G_l(R,\\tau)$, which can be simplified by the recurrence relation:\n\\begin{eqnarray}\n\\deriv{\\hl(z)}{z} = \\frac{l}{z}\\hl(z) - h_{l+1}^{(1)}(z),\n\\end{eqnarray}\nto:\n\\begin{eqnarray}\n\\hat{G}_l(R,s) \\equiv \\left.\\frac{1}{k}\\frac{ \\rho h_l^{(1)}(\\rho)}{ (l+1)h^{(1)}_l(\\rho)-\\rho h^{(1)}_{l+1}(\\rho) }\\right|_{\\rho=kR}.\n\\end{eqnarray}\n\n\n\n\\subsection{Calculation of the kernel $G_l(R,\\tau)$}\nOur final task, before discretization, is to calculate the inverse Laplace transform above. This is done by using a series expansion \\cite[p439]{AbraMathFunc} for $h_l^{(1)}(z)$:\n\\begin{eqnarray}\n&&h_l^{(1)}(z) = i^{-l-1}z^{-1}e^{iz}\\sum^l_{v=0}(l+\\half,v)(-2iz)^{-v},\n\\end{eqnarray}\nwhere:\n\\begin{eqnarray}\n&&(l+\\half,v) = \\frac{(l+v)!}{v!(l-v)!}.\n\\end{eqnarray}\nAfter manipulation and simplification we gain the rational function in $k$:\n\\begin{eqnarray}\n\\hat{G}_l(R,s) = \\frac{ -i\\sum_{v=0}^l \\sbrac{\\frac{(l+\\half,v)}{(l+\\thrhalf,0)(-2iR)^{v}}} k^{l-v} } { k^{l+1} + \\sum_{v=0}^l \\sbrac{ \\frac{ (l+\\thrhalf,v+1) - 2(l+1)(l+\\half,v)}{(l+\\thrhalf,0)(-2iR)^{v+1}} }k^{l-v} }. \\label{eqn:SSReadyForPF}\n\\end{eqnarray}\nThis can be expanded in partial fractions:\n\\begin{eqnarray}\n\\hat{G}_l(R,s) &=& \\sum^{l+1}_{j=1} \\frac{\\alpha_j}{k-k_j} \\label{PFresult} \\\\\n&=& \\sum^{l+1}_{j=1} \\frac{\\frac{\\alpha_j}{\\sqrt{2i}}}{\\sqrt{s}-\\frac{k_j}{\\sqrt{2i}}}, \\label{PFresult2}\n\\end{eqnarray}\nwhere the $k_j$ are the roots of the polynomial in the denominator of \\bref{eqn:SSReadyForPF} and $\\alpha_j$ are the pole strengths. 
In practice we calculate the roots and strengths for each $l$ with Maple.\n\nThe inversion of \\bref{PFresult2} is performed just by applying the well-known result from tables \\cite{AbraMathFunc,intTransErdelyi}:\n\\begin{eqnarray}\n\\mathcal{L}^{-1}\\left\\{ \\frac{1}{\\sqrt{s}+a}\\right\\} = \\frac{1}{\\sqrt{\\pi t}} - a\\mathrm{w}(ia\\sqrt{t}),\n\\end{eqnarray}\nrather than contour integration of the Bromwich integral \\bref{inversionDef}. Here $\\mathrm{w}(z)=e^{-z^2}\\mathrm{erfc}(-iz)$ is the Faddeeva function, which can be calculated with an implementation of reference \\cite{Poppe:1990:MEC:77626.77629}. $G_l(R,\\tau)$ can now be written as:\n\\begin{eqnarray}\nG_l(R,\\tau) = \\sum^{l+1}_{j=1} \\sbrac{\\frac{\\alpha_j}{\\sqrt{2\\pi i\\tau}}-\\half i\\alpha_j k_j\\mathrm{w}\\brac{z_j}},\n\\end{eqnarray}\nwhere $z_j = -k_j\\sqrt{\\frac{i\\tau}{2}}$. Simplification of the above can be made by using the limiting form \\bref{h1limiting} in equation \\bref{laplaceABC1} and comparing with \\bref{PFresult} in the limit $k\\to\\infty$. Equating the large-$k$ limits of these two representations of $k\\hat{G}_l(R,s)$ gives: \n\\begin{eqnarray}\n\\lim_{k\\to\\infty} \\left. \\frac{i^{-l-1}e^{i\\rho}}{\\pderiv{}{\\rho}(i^{-l-1}e^{i\\rho})}\\right|_{\\rho=kR} = \\lim_{k\\to\\infty} \\sum_{j=1}^{l+1} \\alpha_j\\frac{k}{k-k_j},\n\\end{eqnarray}\nwhere the differentiation of the limiting form is allowed as the functions $\\hl(z)$ are analytic. The limit can be performed to give:\n\\begin{eqnarray}\n\\sum_{j=1}^{l+1} \\alpha_j = -i,\n\\end{eqnarray}\nwhich allows us to write our final form of the kernel $G_l$ as:\n\\begin{eqnarray}\nG_l(R,\\tau) = \\frac{-i}{\\sqrt{2\\pi i \\tau}}-\\frac{i}{2}\\sum^{l+1}_{j=1} \\alpha_j k_j\\mathrm{w}\\brac{z_j}. 
\\label{ABCkernel1}\n\\end{eqnarray}\nAn interesting and reassuring feature of this boundary condition is that for $l=0$, where equation \\bref{exteriorDE} reduces to the free one-dimensional Schr\\\"odinger equation, we have the values $\\alpha_1=-i$ and $k_1=0$. Using these values we recover the absorbing boundary condition for the free one-dimensional Schr\\\"odinger equation as found in \\cite{symABCprev}.\n\n\n\\section{\\label{sec:boundDisc}Boundary Discretization}\n\n\\subsection{Removing the Singularity}\nEquations \\bref{ABCresult1} and \\bref{ABCkernel1} will now be discretized on the grid for use in the Crank-Nicolson scheme. Inspecting equation \\bref{ABCkernel1} we see that it has a square root singularity at $\\tau=0$ and is not ideal for numerical integration. Integration by parts is therefore performed on the first term to give:\n\\begin{eqnarray}\nG_l(R,\\tau) = \\sqrt{\\frac{2i\\tau}{\\pi}}\\pderiv{}{\\tau} - \\frac{i}{2}\\sum^{l+1}_{j=1} \\alpha_j k_j\\mathrm{w}\\brac{z_j}. \\label{ABCkernel2}\n\\end{eqnarray}\nOur function is now continuous at $\\tau=0$ and, although its derivatives are not, it is better suited to numerical integration. Note that $G_l(R,\\tau)$ is now an operator. Defining a function $u^{(l)}(R,\\tau)$ allows for a more compact expression:\n\\begin{eqnarray}\nG_l(R,\\tau) = \\sqrt{\\frac{2i\\tau}{\\pi}}\\pderiv{}{\\tau} + u^{(l)}(R,\\tau), \\label{ABCkernel3} \\\\ u^{(l)}(R,\\tau) = - \\frac{i}{2}\\sum^{l+1}_{j=1} \\alpha_j k_j\\mathrm{w}\\brac{z_j}.\n\\end{eqnarray}\n\n\n\\subsection{Time Discretization}\n\nWe first form a semi-discrete equation on the grid $t_n = n\\Delta t$ with $t=t_N$ and $\\tau_n=t_n$. 
Using the extended midpoint rule:\n\\begin{eqnarray}\n\\int_0^t f(\\tau) \\, d\\tau = \\Delta t \\sum_{n=0}^{N-1} f\\brac{t_{n+\\half}} + \\mathcal{O}(\\Delta t^2)\n\\end{eqnarray}\nto evaluate the integral, and the difference formulas:\n\\begin{eqnarray}\nf(r,t_{n-\\half}) = \\frac{f(r,t_{n})+f(r,t_{n-1})}{2} + \\mathcal{O}(\\Delta t^2)\\\\\n\\pderiv{f(r,t_{n-\\half}) }{t} = \\frac{f(r,t_{n})-f(r,t_{n-1})}{\\Delta t} + \\mathcal{O}(\\Delta t^2)\n\\end{eqnarray}\nfor functions evaluated at a half time step, gives the following semi-discrete equation:\n\\begin{eqnarray*}\n&&Q_l(R,t_N) +\\brac{\\sqrt{\\frac{2it_\\half}{\\pi}} - \\frac{\\Delta t}{2}u_l(R,t_\\half)}\\deriv{Q_l(R,t_N)}{r} \\\\\n&=& \\brac{\\sqrt{\\frac{2it_\\half}{\\pi}} + \\frac{\\Delta t}{2}u_l(R,t_\\half)}\\deriv{Q_l(R,t_{N-1})}{r}\n\\end{eqnarray*}\n\\begin{eqnarray*}\n\\quad- \\sum_{n=1}^{N-1} \\brac{\\sqrt{\\frac{2it_{n+\\half}}{\\pi}} - \\frac{\\Delta t}{2}u_l(R,t_{n+\\half})}\\deriv{Q_l(R,t_{N-n})}{r}\\quad&& \\nonumber \\\\\n+ \\sum_{n=1}^{N-1} \\brac{\\sqrt{\\frac{2it_{n+\\half}}{\\pi}} + \\frac{\\Delta t}{2}u_l(R,t_{n+\\half})}\\deriv{Q_l(R,t_{N-n-1})}{r}&& \\\\\n+\\mathcal{O}(\\Delta t^2).\\qquad&&\n\\end{eqnarray*}\n\n\n\\subsection{Space Discretization} \nFor the space discretization we choose the artificial boundary at $R=r_{M-\\half}$ between the penultimate and final spatial grid-points. The following difference formulas are used:\n\\begin{eqnarray}\nf(r_{M-\\half},t) = \\frac{f(r_M,t)+f(r_{M-1},t)}{2} + \\mathcal{O}(\\Delta r^2)\\\\\n\\pderiv{f(r_{M-\\half},t) }{r} = \\frac{f(r_{M},t)-f(r_{M-1},t)}{\\Delta r} + \\mathcal{O}(\\Delta r^2)\n\\end{eqnarray}\nevaluated at the midpoint between the final two spatial grid-points. 
This yields the fully discretized absorbing boundary condition:\n\\begin{eqnarray}\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\brac{1-B^{(M,0)}_l}Q_l(r_M,t_N) + \\brac{1+B^{(M,0)}_l}Q_l(r_{M-1},t_N) \\nonumber \\\\\n&=&C^{(M,0)}_l\\brac{Q_l(r_{M-1},t_{N-1})-Q_l(r_M,t_{N-1}) \\phantom{\\frac{}{}} } \\qquad\\qquad\\nonumber \\\\\n&+& \\sum_{n=1}^{N-1}B^{(M,n)}_l\\brac{Q_l(r_M,t_{N-n}) - Q_l(r_{M-1},t_{N-n}) \\phantom{\\frac{}{}} } \\qquad \\nonumber\\\\\n&+& \\sum_{n=1}^{N-1}C^{(M,n)}_l\\brac{Q_l(r_{M-1},t_{N-n-1}) - Q_l(r_M,t_{N-n-1}) \\phantom{\\frac{}{}} }\\nonumber\\\\\n&+& \\mathcal{O}(\\Delta r^2,\\Delta t^2),\\label{discreteABC}\n\\end{eqnarray}\nwhere:\n\\begin{eqnarray*}\n&&A = \\frac{-2}{\\Delta r}\\sqrt{\\frac{i\\Delta t}{\\pi}}, \\\\\n&&B^{(M,n)}_l = A\\sqrt{2n+1} + \\frac{\\Delta t}{\\Delta r}u_l(r_{M-\\half},t_{n+\\half}), \\\\\n&&C^{(M,n)}_l = A\\sqrt{2n+1} - \\frac{\\Delta t}{\\Delta r}u_l(r_{M-\\half},t_{n+\\half}).\n\\end{eqnarray*}\nWithin the implementation, equation \\bref{discreteABC} replaces the last row of the matrix described in section \\bref{section_IntDiscTDT}.\n\n\\section{\\label{sec:ResultsAndTesting1}Results and Testing: Absorbing Boundary Effectiveness}\n\nBefore calculating the giant resonances, the implementation of the absorbing boundary is tested in a simplified case, without any potential beyond that coming from the centrifugal term. We apply the absorbing boundaries to a partial differential equation of the form \\bref{exteriorDE}. This is to show the validity of the implementation and to demonstrate its performance. The solution to the following partial differential equation is found:\n\\begin{eqnarray}\ni\\pderiv{Q_l}{t} = \\half\\pderivtwo{Q_l(r,t)}{r} + \\frac{l(l+1)}{2r^2}Q_l(r,t), \\label{results:DE1}\n\\end{eqnarray}\n\\begin{eqnarray}\n&Q_l(r,0)= Are^{-(r-5)^2} ,& \\\\\n&Q_l(0,t)=0 \\text{,\\quad} \\lim_{r\\to\\infty} Q_l(r,t) = 0, &\n\\end{eqnarray} \nfor $l=0,1,2$. 
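As an aside, the normalisation of this initial state can be fixed numerically with a composite Simpson rule; the sketch below is illustrative only (the grid and names are our own assumptions, not those of the implementation):

```python
import numpy as np

def simpson(y, dx):
    # composite Simpson rule; len(y) must be odd (an even number of panels)
    return dx / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-2:2].sum())

r = np.linspace(0.0, 20.0, 2001)           # dr = 0.01; the tail beyond r = 20 is negligible
dr = r[1] - r[0]
psi = r * np.exp(-(r - 5.0) ** 2)          # unnormalised Q_l(r, 0) / A
A = 1.0 / np.sqrt(simpson(np.abs(psi) ** 2, dr))
Q0 = A * psi                               # normalised initial state
```

Since the radial profile is independent of $l$ here, the same constant $A$ serves for each of $l=0,1,2$.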
Although calculations can be done for any angular momentum, these are the only values required for the Hartree-Fock calculations shown later. $A$ is chosen to normalise $Q_l(r,0)$ and is calculated with Simpson's rule.\n\n Physically, the equation corresponds to the evolution of a free particle which is initially a shell surrounding the origin. Although this sort of system provides no particular physical insights, it does allow us to make quick and simple calculations which are suitable for testing the validity of the method.\n\nWe use the same time and space discretization as described in section \\bref{sec:intDisc} to discretise equation \\bref{results:DE1}. The intermediate step \\bref{intStepEvo} is not needed here, as the equation is linear.\n\nOur results will show comparisons between a calculation done with absorbing boundaries at $r=10$ and one with reflecting boundaries at a radius chosen so reflection does not occur, which will be specified for each test.\n\nFor this simplified case, we take $\\hbar=m=1$. \n\n\\subsection{Densities}\n\nTo show how the solutions to equation \\bref{results:DE1} evolve through time, the probability densities are presented. These are obtained by calculating the wavefunction through time with a reflecting boundary at $r=100$. In the time interval chosen, $[0,15]$, reflection does not occur. Figure \\ref{result1} shows the densities through time for each angular momentum. Only the interval $[0,10]$ is plotted as this is where we place the test absorbing boundary. The results are calculated with grid spacings $\\Delta x=\\Delta t=0.1$.\n\n\\begin{figure}[!htb]\n\\centerline{\\includegraphics[scale=1.35]{stateEvolution}}\n\\caption{These figures show wavefunctions of angular momentum $l=0$ changing in time, with part of the density leaving the interval of interest. The calculations are done with a reflecting boundary at $r=100$ and have grid spacings of $\\Delta x = \\Delta t = 0.1$. 
From top to bottom the graphs show the evolution of the wavefunctions at times 0, 5, 10 and 15.}\n\\label{result1}\n\\end{figure}\n\nIn each case we see the bulk of the density begins centred at $r=5$. As the system evolves, the wavepacket spreads out and interferes with itself as it reaches the origin.\n\n\\subsection{Radial Comparison of Wavefunction}\n\nWe now go on to see how the absorbing boundary performs. We plot:\n\\begin{eqnarray}\n|Q^{(Ref)}_l(r,t)-Q^{(ABC)}_l(r,t)|\n\\label{test2eqn}\n\\end{eqnarray}\nat $t=15$, where $Q^{(Ref)}_l$ and $Q^{(ABC)}_l$ are the calculations with reflecting and absorbing boundaries respectively. This is to see how any error from the absorbing boundary affects the interior points. Figure \\ref{results2} shows the result for each angular momentum with two different grid spacings. Again the reflecting boundaries are chosen to be at $r=100$.\n\n\\begin{figure}[!htb]\n\n\\centerline{\\includegraphics[scale=1.35]{solCompareEnd}}\n\\caption{The figures show a comparison of the radial component of the wavefunctions at the final time $15$, for angular momenta $l=0,1,2$, calculated with each technique. The value in equation \\bref{test2eqn} is plotted against the radius.}\n\\label{results2} \n\\end{figure}\n\nWe see that in all cases the error has remained small throughout the interior, for the $\\Delta x=\\Delta t=0.1$ case bounded by $10^{-3}$ and for $\\Delta x=\\Delta t=0.01$ bounded by $10^{-5}$. This is within the $\\mathcal{O}(\\Delta r^2,\\Delta t^2)$ expected from the discretisation.\n\n\\subsection{Temporal Comparison of Probability}\n\nWe now test how the error evolves through time. 
This is done by calculating the probability of finding the particle inside the interval over time; mathematically, the following is calculated:\n\\begin{eqnarray}\nP(t) = \\int_0^{10} |Q_l(r,t)|^2 \\, dr\n\\label{test3eqn}\n\\end{eqnarray}\nwith reflecting and absorbing boundaries and the absolute value of the difference taken.\n\nFor this test we increase the time interval to $[0,50]$ and move the reflecting boundary to $r=200$. In each case more than $90\\%$ of the wavefunction has left the interval; specifically, the probabilities inside the interval at the end of the calculation are $8.57\\times10^{-2}$, $6.36\\times10^{-3}$ and $2.03\\times10^{-4}$ for $l=0,1,2$ respectively.\n\nFigure \\ref{results3} shows the results for each angular momentum and different grid spacings.\n\n\\begin{figure}[!htb]\n\n\n\\centerline{\\includegraphics[scale=1.55]{normalCompare}}\n\\caption{(Color online) These plots show how the error in the probability from the absorbing boundaries changes through time. Equation \\bref{test3eqn} is calculated with reflecting and absorbing boundaries and the absolute value of their difference taken through time and plotted.}\n\\label{results3}\n\\end{figure}\n\nWe see that the error also remains bounded in time. From the plots it appears the bound on the error is proportional to the grid spacings.\n\nThese results are satisfactory, and so, with confidence in the previous work, we go on to the Hartree-Fock calculations.\n\n\n\\section{\\label{sec:ResultsAndTesting2}Results and Testing: Hartree-Fock Resonances in the Linear Regime}\nResults from the implementation of the discretised Hartree-Fock system, as described in sections \\ref{sec:intDisc} and \\ref{sec:boundDisc}, are now shown. We first present the variation of the root mean square radius over time for $\\nucl{4}{2}{He}$, $\\nucl{16}{8}{O}$ and $\\nucl{40}{20}{Ca}$. For each nucleus the following is shown:\n\\begin{enumerate}[(a)]\n\\item A calculation performed with reflecting boundaries at $1500$ fm. 
This is the result expected from a continuum calculation because the boundary is far enough away so as to avoid reflection. This is plotted from $0$ to $500 \\text{ fm c}^{-1}$ to show the main features occurring at the beginning of the resonance.\n\n\\item The result of using reflecting boundaries at 30 fm. This is to show the effect the absorbing boundaries are having. Again this is plotted from $0$ to $500 \\text{ fm c}^{-1}$.\n\n\\item The difference between the expected result in (a) and a calculation with absorbing boundaries at $30$ fm. This is plotted for the entire 0 to $3000 \\text{ fm c}^{-1}$ time range. This difference is an error due to the discretization of the absorbing boundaries and so we consider an upper bound for this value of $\\mathcal{O}(\\Delta r^2)$ acceptable.\n\\end{enumerate}\nFor each nucleus, a group of three figures is shown, labelled according to the above. We also show the time each calculation takes to evaluate the efficiency of the absorbing bounds.\n\nGrid spacings of $\\Delta r = 0.1\\text{ fm}$ and $\\Delta t = 0.1\\text{ fm c}^{-1}$ are used and all calculations are evolved from 0 to 3000 fm c$^{-1}$.\n\n\\subsubsection{Helium-4}\n\n\\begin{figure}\n\t\\centerline{\\includegraphics[scale=1.35]{reflectLarge_He4}}\n\t\\centerline{\\includegraphics[scale=1.35]{reflectMabc_He4}}\n\t\\caption{The time evolution of the monopole moment in Helium-4, showing (a) the continuum result, (b) for comparison, the result of a reflecting boundary wall and (c) the absolute value of the difference between the monopole moments when calculated using an absorbing boundary and using a far reflecting wall, over time.}\t\n\\label{test_He}\n\\end{figure}\nFrom figure (\\ref{test_He}a) we can see that the resonance for $\\nucl{4}{2}{He}$ has a simple damped oscillatory motion, the radius of the nucleus repeatedly increasing and decreasing, clearly demonstrating the breathing mode. 
Figure (\\ref{test_He}c) shows that the absorbing boundary gives an acceptably small discrepancy from the expected result, bounded by $10^{-7}$, well below the $\\mathcal{O}(0.1^2)$ discretization error. Finally, by comparing (\\ref{test_He}a) and (\\ref{test_He}b) the effect of the reflected flux can clearly be seen, which is the source of discretisation artefacts in the strength functions \\cite{Stevenson2007}.\n\n\n\\subsubsection{Oxygen-16}\n\n\\begin{figure}\n\t\\centerline{\\includegraphics[scale=1.35]{reflectLarge_O16}}\n\t\\centerline{\\includegraphics[scale=1.35]{reflectMabc_O16}}\n\t\\caption{The time evolution of the monopole moment in Oxygen-16, showing (a) the continuum result, (b) for comparison, the result of a reflecting boundary wall and (c) the absolute value of the difference between the monopole moments when calculated using an absorbing boundary and using a far reflecting wall, over time.}\t\n\\label{test_O}\n\\end{figure}\n\nFigure (\\ref{test_O}a) shows a more complicated motion of the nucleus this time, which does not look like a single damped mode. This is due to the multiple single-particle states present, known as Landau fragmentation. 
The absolute error as shown in figure (\\ref{test_O}c) is bounded by a larger value than for helium, but again within the acceptable range.\n\n\n\n\n\\subsubsection{Calcium-40}\n\n\\begin{figure}\n\\centerline{\\includegraphics[scale=1.35]{reflectLarge_Ca40}}\n\\centerline{\\includegraphics[scale=1.35]{reflectMabc_Ca40}}\t\n\t\\caption{The time evolution of the monopole moment in Calcium-40, showing (a) the continuum result, (b) for comparison, the result of a reflecting boundary wall and (c) the absolute value of the difference between the monopole moments when calculated using an absorbing boundary and using a far reflecting wall, over time.}\t\n\\label{test_Ca}\n\\end{figure}\n\nThe results for calcium again show a damped oscillation, as expected, though a long-lived resonant component is excited too, which the reflecting boundaries obviously cannot reproduce for long times. The errors are somewhat larger than in the helium or oxygen cases but still acceptable.\n\n\\subsection{Timing}\nAs a guide, we present a table of timing results for the Oxygen calculations in Table \\ref{tab:timing}.\n\n\\begin{table}[tbh]\n\\begin{tabular}{|l|c|c|}\n\\hline\n\tBoundary Type & R (fm) & Calculation Time (s)\\\\\n\t\\hline\n\tReflecting & 1500\t& 2378 \\\\\n\tReflecting & 30\t\t& 58 \\\\\n\tAbsorbing & 30\t\t& 144\\\\\n\\hline\n\\end{tabular}\n\\caption{Calculation times for the large box continuum calculation with reflecting bounds, a small-box calculation with spurious reflections and a small-box calculation with absorbing boundaries.\\label{tab:timing}}\n\\end{table}\n\nThe results show that the absorbing boundaries are considerably more expensive than reflecting boundaries, but less so than using a large box with simple boundary conditions. It is interesting also to examine the time taken for each iteration. 
Figure \\bref{fig:itertime} shows a plot of the time to compute each iteration, as a running average over 20 iterations to somewhat smooth out the effect of computer load.\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=1.35]{timings}\n\\caption{A plot showing the expense of each iteration in a calculation of oxygen-16. It clearly shows the non-locality of the absorbing boundary increasing the calculation time per iteration as the calculation progresses.}\n\\label{fig:itertime}\n\\end{figure}\nThis shows the steady increase in expense to calculate an iteration as the calculation progresses and is due to the non-locality in time of the absorbing boundary condition.\n\n\n\\subsection{Strength Functions}\n\nThe strength functions for these calculations are now presented. As these are the calculations required in order to make comparisons to experiment, their accurate calculation is critical. We require that the errors in the above results do not give noticeable artefacts in the strength functions, at least to the level of experimental resolution. Figure \\bref{resultsz} compares the strength function calculated from the expected result with that calculated using absorbing boundaries.\n\n\\begin{figure}[!htb]\n\t\\includegraphics[scale=1.3]{12heliumFA}\n\t\\includegraphics[scale=1.3]{22oxygenFA}\n\t\\includegraphics[scale=1.3]{32calciumFA}\n\\caption{Plots showing the effect of using the absorbing boundary condition on the strength functions of various nuclei. Going from top to bottom are the helium, oxygen and calcium strength functions.}\n\\label{resultsz}\n\\end{figure}\n\nWe see that both calculations match up well for all the nuclei tested. 
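For reference, a strength function of this type is commonly extracted from a Fourier (sine) transform of the time-dependent moment. The toy sketch below is illustrative only (the synthetic signal and all parameters are our own assumptions, not taken from the calculations above); it recovers the resonance energy of a damped oscillation:

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 1000.0, dt)
E0, gamma = 0.5, 0.02                        # toy resonance energy and damping width
F = np.sin(E0 * t) * np.exp(-gamma * t)      # synthetic time-dependent moment F(t)

E = np.linspace(0.1, 1.0, 451)
# discretised sine transform of F(t), proportional to a strength function
S = (F[None, :] * np.sin(E[:, None] * t[None, :])).sum(axis=1) * dt
E_peak = E[np.argmax(S)]                     # the peak recovers the resonance energy
```

The peak of $S(E)$ sits at the input resonance energy, with a Lorentzian width set by the damping.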
The figures show the increasing complexity of the nuclear structure, as more features appear in the strength functions.\n\n\\section{\\label{sec:ResultsAndTesting3}Results and Testing: Non-Linear Regime}\n\\begin{figure}[tbh]\n\\includegraphics[scale=1.5]{particles_O16}\n\\caption{\\label{fig:emission_summary}\nA comparison of the number of particles emitted from the region between 0 and 30 fm, calculated with absorbing boundaries at 30 fm and with reflecting boundaries at 600 fm, which are not reached in the time of the calculation.\n}\n\\end{figure}\n\\begin{figure}[tbh]\n\\includegraphics[scale=1.5]{particles_overTime}\n\\caption{\\label{fig:particlestime}\nThe time-dependence of particle emission as a function of boost strength for large-amplitude excitations in $^{16}$O. The legend indicates the strength $k$ (fm$^{-2}$) of the boost in equation (\\ref{eq:boost}).\n}\n\\end{figure}\n\n\\begin{figure*}[tb]\n \\centerline{\\includegraphics[scale=1.12]{particle_errors_boxes}}\n\\caption{The total error in the number of particles emitted by the nucleus as a function of time for increasingly stronger boosts (indicated by the strength $k$ in each panel). The error is calculated with respect to a calculation without absorbing bounds but in a space so large that the boundaries are not probed. The boost parameter $k$ is as defined in (\\ref{eq:boost}).\\label{parterrstime}}\n\\end{figure*}\nAs well as testing in the small-amplitude linear-response regime, of relevance to giant resonances, it is also instructive to examine the larger-amplitude regime, which can be studied in TDHF-based techniques \\cite{PhysRevC.68.024302,PhysRevC.80.064309,Reinhard2007}, unlike the small-amplitude-limited RPA. This regime is relevant to the decay of highly excited fragments following e.g. deep inelastic collisions, and significant particle emission may be expected. 
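The emitted-particle numbers used in such comparisons follow from the norm of the radial wavefunctions remaining inside the analysis region. A minimal sketch (the function name, grid and occupation numbers are illustrative assumptions, not those of the code used here):

```python
import numpy as np

def particles_emitted(Q, occ, r, r_box=30.0):
    # particles emitted from [0, r_box]: total particle number minus the
    # occupation-weighted norm of each radial state still inside the region
    inside = r <= r_box
    y = np.abs(Q[:, inside]) ** 2
    # trapezoidal norm of each state over the interior region
    norms = ((y[:, 1:] + y[:, :-1]) * np.diff(r[inside]) / 2.0).sum(axis=1)
    return float(np.sum(occ) - np.dot(occ, norms))
```

A state that stays entirely inside the region contributes nothing, while one centred on the box edge contributes half its occupation.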
Similar situations arise in atomic physics, where direct electromagnetic excitation of highly ionizing collective modes is feasible \\cite{cluster}. We use a test case of monopole excitations of $^{16}$O, with increasingly strong boosts (\\ref{eq:boost}) such that eventually all particles are lost from the nucleus through large-amplitude excitation. We note that the computational effort for large-amplitude excitations is not different to that for small-amplitude excitations, as the iteration procedure is not changed for larger amplitudes.\n\nDespite the success of the small-amplitude calculations, there is no a priori reason to expect larger-amplitude calculations to perform so well, since our absorbing boundaries are predicated on the fact that the only potential active at the boundary is the centrifugal barrier, whereas the nuclear mean-field exists wherever the nucleon wavefunction is finite. As more particles are emitted, so too the nuclear wavefunction and its associated mean-field are present in the exterior region. Figure \\ref{fig:emission_summary} shows the comparison of the total number of particles emitted (by 1500 fm\/c) from a $^{16}$O nucleus between an absorbing boundary calculation, and a reflecting boundary calculation in which the size of the box is so large that the reflecting boundaries are not reached. The range of boost is sufficiently large to cover the small-amplitude limit as well as the regime in which the nucleus is entirely ionized. The two calculations are seen to be close over the entire range, with small differences near the bend as complete ionization occurs. The time-dependence of the particle emission is shown in Figure \\ref{fig:particlestime}, in which the case around the bend is shown to still be changing at the end time of the calculation.\n\nFigure {\\ref{parterrstime}} shows the time-dependent error (absorbing bounds compared with large-space reflecting bounds) in the total number of particles emitted for a range of kick sizes. 
This highlights the small differences in Figure \\ref{fig:emission_summary}, where the errors around $k=0.2$ fm$^{-2}$ are seen to be largest. In the worst case, this error is noticeable, but still rather small.\n\n\\section{Perspectives and Conclusion} \n\n\\subsection{Perspectives for more realistic calculations}\n\nOur calculations represent a step on the way to more realistic calculations of giant resonances within a continuum time-dependent Hartree-Fock framework. We discuss in this section some perspectives for the possibility of performing more realistic calculations. Our calculations deliberately considered a simple case, yet within TDHF-based methods, calculations without our form of absorbing bound exist with more relaxed symmetries \\cite{PhysRevC.71.064328,brinemg,PhysRevC.71.024301,Umar2005} or with pairing in the BCS or TDHFB framework \\cite{PhysRevC.71.064328,PhysRevC.82.034306,PhysRevC.78.044318,PhysRevC.84.051309}. Our method is extendable in a straightforward way to calculations involving pairing. The increased expense scales in the same way as discrete calculations with pairing scale with respect to calculations without pairing. The addition of extra single-particle states to account for the scattering of Cooper pairs will involve extra boundary conditions, but only with a linear scaling with respect to the number of particle states. On the other hand, increased dimensions will be more costly. In our case of spherical symmetry, in which there is a single boundary point for $\\sim$300 interior points, a similar time is spent on the boundary as on the entire internal region. In a three-dimensional calculation, in which the boundary is the surface of a volume, the ratio of boundary points to internal points is much higher. Our technique is thus not currently suitable for a three-dimensional calculation. However, reasonable scaling could nevertheless be achieved with an expansion of the density in spherical harmonics. 
For the purposes of calculating giant resonances of general multipolarity and of deformed nuclei, this would suffice, as only one point per moment of the density would be needed to act as a boundary point, and a typical expansion of a handful of terms would describe a small-amplitude deformation. A full three-dimensional code would still be required for heavy-ion collisions.\n\nOur immediate aim is to find a suitable way to include the Coulomb potential, which has been ignored here, within the treatment of the absorbing boundaries. The practical realisation of this is more difficult than the present case because the required inverse Laplace transform is not of a simple form. The current approach being developed is to use the method in \\cite{Jiang2001,CPA:CPA20200,springerlink:10.1007\/s10915-012-9620-9} to approximate the more complex inverse Laplace transform.\n\nIt should also be possible to reduce the time taken to perform the boundary calculation. In the oxygen tests it was shown that most of the expense comes from the end of the calculation, where the non-locality in time plays a part. One solution to this would be to use the method described in \\cite{Jiang2004955}, which uses a sum-of-exponentials approximation that can be evaluated recursively. The effect is to reduce the sum in \\bref{discreteABC} that requires $\\mathcal{O}(N)$ operations to one that requires just $\\mathcal{O}(\\ln N)$.\n\n\\subsection{Conclusion}\n\nWe have presented an implementation of a spherically symmetric Hartree-Fock system discretised using a Crank-Nicolson scheme. We also presented the derivation and implementation of an absorbing boundary condition approach to handle the outgoing wave condition. It was shown using a Laplace transform method that it is possible to construct a boundary condition at a finite distance away from the origin. 
This came at the cost of it being non-local in time, meaning the value of the wave-function at the boundary has to be stored throughout the calculation, causing an increase in the time taken to calculate each iteration as it progressed.\n\nThe results of the testing show that absorbing boundary conditions do provide a suitable way of treating the boundary in spatially unbounded time-dependent problems. We see that although there are errors introduced from the discretization of the absorbing boundaries, they are small and stay small throughout the various manipulations required to calculate the strength functions. As well as being accurate, they also show a good improvement in the speed of the calculation compared to using a large box.\n\nWe applied the method to large-amplitude motion, and found acceptable results. We discussed perspectives for future, and more realistic, calculations.\n\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Algebraic homogeneous spaces}\n\nLet $G$ be an affine algebraic group over an algebraically closed field ${\\mathbb K}$. A $G$-module $V$ is said to be rational if\nany vector in $V$ is contained in a finite-dimensional rational $G$-submodule. \nBelow all modules are assumed to be rational.\nBy $V^G$ we denote the subspace of $G$-fixed vectors in $V$.\n\nThe group $G\\times G$ acts on $G$ by translations, $(g_1,g_2)g:=g_1gg_2^{-1}$. This action induces an action on the algebra of regular functions\non $G$:\n$$\n( G\\times G) : {\\mathbb K}[G], \\ \\ ((g_1,g_2)f)(g):=f(g_1^{-1}gg_2).\n$$\n\nFor any closed subgroup $H$ of $G$, let $H_l$ and $H_r$ denote the groups of all left and right translations of ${\\mathbb K}[G]$ by \nelements of $H$. 
\nUnder these actions, the algebra ${\\mathbb K}[G]$ becomes a rational $H_l$- (and $H_r$-) module.\n\n\\smallskip\n\nBy Chevalley's Theorem, the set $G\/H$ of left $H$-cosets in $G$ admits a structure of a quasi-projective algebraic variety such that\nthe projection $p:G\\to G\/H$ is a surjective $G$-equivariant morphism. Moreover, a structure of an algebraic variety\non $G\/H$ satisfying these conditions is unique. It is easy to check that the morphism $p$ is open and the algebra of regular functions on $G\/H$ may be\nidentified with the subalgebra ${\\mathbb K}[G]^{H_r}$ in ${\\mathbb K}[G]$. We refer to~\\cite[Ch.~IV]{Hu2} for details.\n\n\n\n\\section{Matsushima's criterion}\n\nLet $G$ be a reductive algebraic group and $H$ a closed subgroup of $G$. It is known that the homogeneous space $G\/H$ is affine\nif and only if $H$ is reductive. The first proof was given over the field of complex numbers and used\nsome results from algebraic topology, see~\\cite{Ma} and~\\cite[Th.~4]{On}.\nAn algebraic proof in characteristic zero was obtained in~\\cite{BB}. A characteristic-free proof that\nuses the Mumford conjecture proved by W.J.~Haboush is given in~\\cite{Ri}. Another proof based on the Morozov-Jacobson Theorem may be found\nin~\\cite{Lu}. \n\nBelow we give an elementary proof of Matsushima's criterion in terms of representation theory. The ground field ${\\mathbb K}$ is assumed to be\nalgebraically closed and of characteristic zero.\n\n\n\\begin{theorem}\\label{t1}\n\nLet $G$ be a reductive algebraic group and $H$ its closed subgroup. Then the homogeneous space $G\/H$ is affine if\nand only if $H$ is reductive.\n\n\\end{theorem}\n\n\n\\begin{proof}\n\nWe begin with the ``easy half''.\n\n\n\\begin{proposition}\\label{pr1}\n\nLet $G$ be an affine algebraic group and $H$ its reductive subgroup. Then $G\/H$ is affine. 
\n\n\\end{proposition}\n\n\n\\begin{proof}\n\nIf a reductive group $H$ acts on an affine variety $X$, then the algebra of invariants ${\\mathbb K}[X]^H$ is finitely\ngenerated, the quotient morphism $\\pi: X\\to{\\rm Spec\\,}{\\mathbb K}[X]^H$ is surjective and any fiber of $\\pi$ contains\na unique closed $H$-orbit \\cite[Sec.~4.4]{PV}. In the case $X=G$ this shows that $G\/H$ is isomorphic to ${\\rm Spec\\,}{\\mathbb K}[G]^{H_r}$.\n\n\\end{proof}\n\n\nNow assume that $G$ is reductive and consider a decomposition\n$$\n{\\mathbb K}[G]={\\mathbb K}\\oplus{\\mathbb K}[G]_G,\n$$\n\n\\noindent where the first component corresponds to constant functions on $G$, and the second one is the sum of all\nsimple non-trivial $G_l$- (or $G_r$-) submodules in ${\\mathbb K}[G]$. Let ${\\rm pr}:{\\mathbb K}[G]\\to {\\mathbb K}$ be the projection\non the first component. Clearly, ${\\rm pr}$ is a $(G_l\\times G_r)$-invariant linear map. \n\nLet $H$ be a closed subgroup of $G$. Consider \n\n$$\nI(G,H)=\\{ f\\in{\\mathbb K}[G]^{H_r} \\ | \\ {\\rm pr}(fl)=0 \\ \\forall l\\in{\\mathbb K}[G]^{H_r} \\}.\n$$ \n\nThis is a $G_l$-invariant ideal in ${\\mathbb K}[G]^{H_r}$ with $1\\notin I(G,H)$. Assume that $G\/H$ is affine. Then \n$G\/H\\cong{\\rm Spec\\,}{\\mathbb K}[G]^{H_r}$ and ${\\mathbb K}[G]^{H_r}$ does not contain proper $G_l$-invariant ideals. Thus $I(G,H)=0$. Our aim\nis to deduce from this that any $H$-module is completely reducible.\n\n\n\\begin{lemma}\\label{l2}\n\nIf $W$ is an $H_r$-submodule in ${\\mathbb K}[G]$ and $f\\in W$ is a non-zero $H_r$-fixed vector,\nthen $W=\\langle f\\rangle\\oplus W'$, where $W'$ is an $H_r$-submodule. \n\n\\end{lemma}\n\n\n\\begin{proof}\n\nSince $I(G,H)=0$, there exists $l\\in{\\mathbb K}[G]^{H_r}$ such that ${\\rm pr}(fl)\\ne 0$. \nThe submodule $W'$ is defined as $W'=\\{ w\\in W \\ | \\ {\\rm pr}(wl)=0\\}$. 
\n\n\\end{proof}\n\n\n\\begin{lemma}\\label{l1}\n\nIf $f\\in{\\mathbb K}[G]$ is an $H_r$-semi-invariant of the weight $\\xi$, then there exists an $H_r$-semi-invariant in ${\\mathbb K}[G]$ of\nthe weight $-\\xi$.\n\n\\end{lemma}\n\n\n\\begin{proof}\n\nLet $Z$ be the zero set of $f$ in $G$. Since $Z$ is $H_r$-invariant, one has $Z=p^{-1}(p(Z))$.\nThis implies that $p(Z)$ is a proper closed subset of $G\/H$. There exists a non-zero $\\alpha\\in{\\mathbb K}[G\/H]$ with $\\alpha|_{p(Z)}=0$.\nThen $p^*\\alpha\\in{\\mathbb K}[G]^{H_r}$ and $p^*\\alpha|_Z=0$. By Hilbert's Nullstellensatz, there are \n$n\\in{\\mathbb N}$, $s\\in{\\mathbb K}[G]$ such that $(p^*\\alpha)^n=fs$. This shows that $s$ is an $H_r$-semi-invariant of the weight $-\\xi$. \n\n\\end{proof}\n\n\n\\begin{lemma}\\label{l4}\n\n(1) \\ Any cyclic $G$-module $V$ may be embedded (as a $G_r$-submodule) into ${\\mathbb K}[G]$.\n\n(2) \\ Any $n$-dimensional $H$-module $W$ may be embedded (as an $H_r$-submodule) into $({\\mathbb K}[H])^n$.\n\n(3) \\ Any finite-dimensional $H$-module may be embedded (as an $H$-submodule) into a finite-dimensional $G$-module.\n\n\\end{lemma}\n\n\n\\begin{proof}\n\n(1) Suppose that $V=\\langle Gv\\rangle$.\nThe map $\\phi: G\\to V$, $\\phi(g)=g^{-1}v$, induces the embedding of the dual module $\\phi^*: V^*\\to{\\mathbb K}[G]$. Consider the $G_r$-submodule \n$U=\\{f\\in{\\mathbb K}[G] \\ | \\ {\\rm pr}(fl)=0 \\ \\forall l\\in\\phi^*(V^*)\\}$. By the complete reducibility, ${\\mathbb K}[G]=U\\oplus U'$ for some\n$G_r$-submodule $U'$. Obviously, $I(G,G)=0$ and $U'$ is $G_r$-isomorphic to $V$. \n\n(2) Let $\\lambda_1,\\dots,\\lambda_n$ be a basis of $W^*$. The embedding may be given as\n$$\n w\\to (f_1^w,\\dots,f_n^w), \\ f_i^w(h):=\\lambda_i(hw).\n$$\n\n(3) Note that the restriction homomorphism ${\\mathbb K}[G]\\to{\\mathbb K}[H]$ is surjective. 
By (2), any finite-dimensional $H$-module
$W$ has the form $W_1/W_2$, where $W_1$ is a finite-dimensional $H$-submodule in a $G$-module $V$ and $W_2$ is an $H$-submodule of $W_1$.
Consider $W_1\wedge(\bigwedge^m W_2)$ as an $H$-submodule in $\bigwedge^{m+1} W_1$, where $m=\dim W_2$. Note that
$W\cong (W_1\wedge(\bigwedge^m W_2))\otimes (\bigwedge^m W_2)^*$. By (1), the cyclic $G$-submodule of $\bigwedge^m V$ generated
by $\bigwedge^m W_2$ may be embedded into ${\mathbb K}[G]$. By Lemma~\ref{l1}, $(\bigwedge^m W_2)^*$ may also be embedded into a
$G$-module.
\end{proof}

\begin{lemma}\label{l3}
For any $H$-module $W$ and any non-zero $w\in W^H$ there is an $H$-submodule $W'$
such that $W=\langle w\rangle\oplus W'$.
\end{lemma}

\begin{proof}
Embed $W$ into a $G$-module $V$. Let $V_1=\langle Gw\rangle$. Then $V=V_1\oplus V_2$ for some $G$-submodule $V_2$.
Embed $V_1$ into ${\mathbb K}[G]$ as a $G_r$-submodule. By Lemma~\ref{l2}, $V_1=\langle w\rangle\oplus W_1$ for some
$H$-submodule $W_1$. Finally, $W'=W\cap (W_1\oplus V_2)$.
\end{proof}

\begin{lemma}\label{wl}
Any $H$-module is completely reducible.
\end{lemma}

\begin{proof}
Assume that $W_1$ is a simple submodule in an $H$-module $W$.
Consider two submodules in the $H$-module ${\rm End}(W,W_1)$:
$$
 L_2=\{ p\in {\rm End}(W,W_1) \ | \ p|_{W_1}=0\} \ \subset \ L_1=\{p\in{\rm End}(W,W_1) \ | \ p|_{W_1} \ {\rm is \ scalar} \}.
$$

Clearly, $L_2$ is a hyperplane in $L_1$. Consider an $H$-eigenvector $l\in (L_1)^*$ corresponding to $L_2$. Taking the tensor product
with a one-dimensional $H$-module, one may assume that $l$ is $H$-fixed. By Lemma~\ref{l3}, $(L_1)^*=\langle l\rangle\oplus M$, implying
$L_1=L_2\oplus\langle P\rangle$, where $M$ and $\langle P\rangle$ are $H$-submodules.
Then ${\rm Ker}\,P$ is a complementary submodule to $W_1$.
\end{proof}

Theorem~\ref{t1} is proved.
\end{proof}

\begin{remark}
In~\cite{Vi}, for any action $G:X$ of a reductive group $G$ on an affine variety $X$ with the decomposition
${\mathbb K}[X]={\mathbb K}[X]^G\oplus{\mathbb K}[X]_G$ and the projection ${\rm pr}:{\mathbb K}[X]\to{\mathbb K}[X]^G$, the ${\mathbb K}[X]^G$-bilinear scalar
product $(f,g)={\rm pr}(fg)$ on ${\mathbb K}[X]$ was introduced and the kernel of this product was considered.
Our ideal $I(G,H)$ is precisely this kernel in the case $X={\rm Spec\,}{\mathbb K}[G]^{H_r}$, provided ${\mathbb K}[G]^{H_r}$ is finitely
generated.
\end{remark}

\begin{remark}
For the convenience of the reader we include all details in the proof of Theorem~\ref{t1}. Lemma~\ref{l1} and Lemma~\ref{l4} are
taken from~\cite{BBHM}. They show that for a quasi-affine $G/H$ any $H$-module may be realized as an $H$-submodule of a $G$-module.
The converse is also true~\cite{BBHM},~\cite{Gr}.
Proposition~\ref{pr1} is a standard fact. The proof of Lemma~\ref{wl} is a part of the proof of the Weyl Theorem
on complete reducibility~\cite{Hu}, see also~\cite[Prop.~2.2.4]{Sp}.
\end{remark}

\section{Some additional remarks}

The following lemma may be found in~\cite{BB}.

\begin{lemma}\label{lbn}
Let $G$ be an affine algebraic group and $H$ its reductive subgroup. Then ${\mathbb K}[G]^{H_r}$ does not contain
proper $G_l$-invariant ideals.
\end{lemma}

\begin{proof}
Consider the decomposition
$$
 {\mathbb K}[G]={\mathbb K}[G]^{H_r}\oplus{\mathbb K}[G]_{H_r},
$$

\noindent where ${\mathbb K}[G]_{H_r}$ is the sum of all non-trivial simple $H_r$-submodules in ${\mathbb K}[G]$. Clearly,
${\mathbb K}[G]^{H_r}{\mathbb K}[G]_{H_r}\subseteq{\mathbb K}[G]_{H_r}$.
Hence any proper $G_l$-invariant ideal in ${\mathbb K}[G]^{H_r}$
generates a proper $G_l$-invariant ideal in ${\mathbb K}[G]$, a contradiction.
\end{proof}

By Hilbert's Theorem on invariants, the algebra ${\mathbb K}[G]^{H_r}$ is finitely generated.
It is easy to see that functions from ${\mathbb K}[G]^{H_r}$ separate (closed) right $H$-cosets in $G$. These observations
and Lemma~\ref{lbn} give another proof of
Proposition~\ref{pr1}. Moreover, it is proved in~\cite[Prop.~1]{BB} that for a quasi-affine $G/H$ the algebra
${\mathbb K}[G]^{H_r}$ does not contain proper $G_l$-invariant ideals if and only if $G/H$ is affine.

\smallskip

Now assume that $G$ is reductive.

\begin{proposition}\cite[Prop.~1]{Vi}\label{pr2}
The ideal $I(G,H)$ is the biggest $G_l$-invariant ideal in ${\mathbb K}[G]^{H_r}$ different from ${\mathbb K}[G]^{H_r}$.
\end{proposition}

\begin{proof}
Any proper $G_l$-invariant ideal $I$ of ${\mathbb K}[G]^{H_r}$ is contained in ${\mathbb K}[G]^{H_r}\cap{\mathbb K}[G]_G$: as a $G_l$-submodule,
$I$ is the sum of its isotypic components, and if the trivial component were non-zero, then $1\in I$.
Thus ${\rm pr}(il)=0$ for any $l\in{\mathbb K}[G]^{H_r}$, $i\in I$, since $il\in I\subseteq{\mathbb K}[G]_G$. This implies $I\subseteq I(G,H)$.
\end{proof}

\begin{remark}
For non-reductive $G$ the biggest proper $G_l$-invariant ideal in ${\mathbb K}[G]^{H_r}$ may not exist. For example,
one may take
$$
G=\left\{
\begin{pmatrix}
1 & * & * \\
0 & * & * \\
0 & * & *
\end{pmatrix}
\right\}, \ \
H=\left\{
\begin{pmatrix}
1 & 0 & * \\
0 & 1 & * \\
0 & 0 & *
\end{pmatrix}
\right\}.
$$

Here $G/H\cong {\mathbb K}^3\setminus\{x_2=x_3=0\}$, ${\mathbb K}[G]^{H_r}\cong{\mathbb K}[x_1,x_2,x_3]$, and the maximal ideals
$(x_1-a,x_2,x_3)$ are $G_l$-invariant for any $a\in{\mathbb K}$.
\end{remark}

\section{The boundary ideal}

In this section we assume that $H$ is an observable subgroup of $G$, i.e., $G/H$ is quasi-affine.
If the algebra ${\mathbb K}[G]^{H_r}$ is finitely generated, then the affine $G$-variety $X={\rm Spec\,}{\mathbb K}[G]^{H_r}$ has an
open $G$-orbit isomorphic to $G/H$, and the inclusion may be considered as the canonical embedding $G/H\hookrightarrow X$. Moreover,
this embedding is uniquely characterized by two properties: $X$ is normal and ${\rm codim}_X (X\setminus G/H)\ge 2$, see~\cite{Gr}.
There are two remarkable $G_l$-invariant ideals in ${\mathbb K}[G]^{H_r}$, namely
$$
 I^b(G,H)=I(X\setminus(G/H))=\{f\in{\mathbb K}[G]^{H_r} \ | \ f|_{X\setminus(G/H)}=0\},
$$
and, if $G$ is reductive, the ideal $I^m(G,H)$ of the unique closed $G$-orbit in $X$. If $G/H$ is affine, then $I^b(G,H)={\mathbb K}[G]^{H_r}$ and
$I^m(G,H)=0$. In other cases $I^b(G,H)$ is the smallest proper radical $G_l$-invariant ideal, and $I^m(G,H)$ is the biggest
proper $G_l$-invariant ideal of ${\mathbb K}[G]^{H_r}$.
By Proposition~\ref{pr2}, $I^m(G,H)=I(G,H)$. Moreover,
${\mathbb K}[G]^{H_r}/I^m(G,H)\cong{\mathbb K}[G]^{S_r}$, where $S$ is a minimal reductive subgroup of $G$ containing $H$. (Such a subgroup may not be unique,
but all of them are $G$-conjugate, see~\cite[Sec.~7]{Ar}.)
It follows from the Slice Theorem~\cite{Lu} and \cite[Prop.~4]{Ar} that $I^b(G,H)=I^m(G,H)$ if and only if $H$ is a
quasi-parabolic subgroup of a reductive subgroup of $G$.

\smallskip

Now assume that ${\mathbb K}[G]^{H_r}$ is not finitely generated. If $G$ is reductive, then $I(G,H)$ may be considered as an analogue of $I^m(G,H)$
in this situation (Proposition~\ref{pr2}). We claim that $I^b(G,H)$ also has an analogue, even for non-reductive $G$.
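The finitely generated case may be illustrated by a classical example. Let $G={\rm SL}_2$ and let $H=U$ be the subgroup of upper unitriangular matrices. If $a$, $c$ denote the matrix entries of the first column, then
$$
{\mathbb K}[G]^{U_r}={\mathbb K}[a,c], \qquad X={\rm Spec\,}{\mathbb K}[a,c]\cong{\mathbb K}^2, \qquad G/U\cong{\mathbb K}^2\setminus\{0\}.
$$
Here the boundary $X\setminus(G/U)$ is the origin, so $I^b(G,U)=(a,c)$; this is also the ideal of the unique closed $G$-orbit in $X$, hence $I^b(G,U)=I^m(G,U)$, in accordance with the criterion above: $U$ is quasi-parabolic, being the stabilizer of a highest weight vector in the standard $G$-module.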
\begin{proposition}\label{pr3}
Let $\hat X$ be a quasi-affine variety, $\hat X\hookrightarrow X$ be an (open) embedding into an affine variety $X$,
$I(X\setminus \hat X)\lhd{\mathbb K}[X]$, and let $\mathcal{I}=\mathcal{I}(\hat X)$ be the radical of the ideal of ${\mathbb K}[\hat X]$ generated by
$I(X\setminus\hat X)$. Then

(1) \ the ideal $\mathcal{I}\lhd{\mathbb K}[\hat X]$ does not depend on $X$;

(2) \ $I(X\setminus\hat X)$ is the smallest radical ideal of ${\mathbb K}[X]$ generating an ideal in ${\mathbb K}[\hat X]$ with the radical $\mathcal{I}$.
\end{proposition}

\begin{proof}
(1) \ Consider two affine embeddings $\phi_i:\hat X\hookrightarrow X_i$, $i=1,2$. Let $X_{12}$ be the closure of $(\phi_1\times\phi_2)(\hat X)$ in
$X_1\times X_2$ with the projections $r_i: X_{12}\to X_i$. Let us identify the images of $\hat X$ in $X_1$, $X_2$, and $X_{12}$. We claim
that $r_i(X_{12}\setminus\hat X)\subseteq X_i\setminus\hat X$. Indeed, the diagonal image of $\hat X$ is closed in $\hat X\times X_j$, $j\ne i$, as
the graph of a morphism.

It follows from what was proved above that the ideal of ${\mathbb K}[X_{12}]$ generated by $r_i^*(I(X_i\setminus\hat X))$ has the radical
$I(X_{12}\setminus\hat X)$. This shows that the radical of the ideal generated by $I(X_i\setminus\hat X)$ in ${\mathbb K}[\hat X]$ does not
depend on $i$.

(2) Assume that there is a radical ideal $I_1\lhd{\mathbb K}[X]$ not containing $I=I(X\setminus\hat X)$ and generating
an ideal in ${\mathbb K}[\hat X]$ with the radical $\mathcal{I}$.
Take $f\in I$ with $f\notin I_1$. Since $I_1$ is radical, by the Nullstellensatz there is $x_0\in X$ such that $h(x_0)=0$ for any
$h\in I_1$ and $f(x_0)\ne 0$; as $f$ vanishes on $X\setminus\hat X$, we have $x_0\in\hat X$.
On the other hand, $f\in\mathcal{I}$, so
$f^k=\alpha_1h_1+\dots+\alpha_mh_m$ for some $\alpha_i\in{\mathbb K}[\hat X]$, $h_i\in I_1$, $k,m\in{\mathbb N}$, and this implies $f(x_0)=0$, a contradiction.
\end{proof}

So $\mathcal{I}(G/H)$ is a radical $G_l$-invariant ideal of ${\mathbb K}[G]^{H_r}$, and $\mathcal{I}(G/H)=I^b(G,H)$ provided
${\mathbb K}[G]^{H_r}$ is finitely generated.

\begin{proposition}
$\mathcal{I}(G/H)$ is the smallest non-zero radical $G_l$-invariant ideal of ${\mathbb K}[G]^{H_r}$.
\end{proposition}

\begin{proof}
Let $f\in{\mathbb K}[G]^{H_r}$ be non-zero and let $I(f)$ be the ideal of ${\mathbb K}[G]^{H_r}$ generated by the orbit $G_lf$. It is sufficient
to prove that $\mathcal{I}(G/H)\subseteq {\rm rad}\,I(f)$. Take any $G$-equivariant affine embedding
$G/H\hookrightarrow X$ with $f\in{\mathbb K}[X]$. For the ideal $I'(f)$ generated by $G_lf$ in ${\mathbb K}[X]$
one has $I(X\setminus(G/H))\subseteq {\rm rad}\,I'(f)$, hence $\mathcal{I}(G/H)\subseteq{\rm rad}\,I(f)$.
\end{proof}

\begin{corollary}
Let $G$ be an affine algebraic group and $H$ its observable subgroup. Then $G/H$ is affine if and only if
$\mathcal{I}(G/H)={\mathbb K}[G]^{H_r}$.
\end{corollary}

It would be interesting to give a description of the ideal $\mathcal{I}(G/H)$ similar to the
definition of $I(G,H)$,
and to find a geometric meaning of the $G_l$-algebras ${\mathbb K}[G]^{H_r}/I(G,H)$ and ${\mathbb K}[G]^{H_r}/\mathcal{I}(G/H)$
for non-finitely generated ${\mathbb K}[G]^{H_r}$.

\medskip

{\it Acknowledgements.} The author is grateful to J.~Hausen for useful discussions; in particular, Proposition~\ref{pr3}
appeared during such a discussion. Thanks are also due to D.A.~Timashev for valuable remarks.

This paper was written during the author's stay at Eberhard Karls Universit\"at T\"ubingen (Germany). The author wishes to
thank this institution and especially J\"urgen Hausen for the invitation and hospitality.