diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhgcv" "b/data_all_eng_slimpj/shuffled/split2/finalzzhgcv" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhgcv" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nSickle cell disease (SCD) affects nearly 100,000 people in the US\\footnote{\\url{https:\/\/www.hematology.org\/Patients\/Anemia\/Sickle-Cell.aspx}} and is an inherited red blood cell disorder. Common complications of SCD include acute pain, organ failure, and early death \\cite{mohammed2019machine}. Acute pain arises in patients when blood vessels are obstructed by sickle-shaped red blood cells mitigating the flow of oxygen, a phenomenon called vaso-occlusive crisis. Further, pain is the leading cause of hospitalizations and emergency department admissions for patients with SCD. The numerous health care visits lead to a massive amount of electronic health record (EHR) data, which can be leveraged to investigate the relationships between SCD and pain. Since SCD is associated with several complications, it is important to identify clinical notes with signs of pain from those without pain. It is equally important to gauge changes in pain for proper treatment.\n\nDue to their noisy nature, analyzing clinical notes is a challenging task. In this study, we propose techniques employing natural language processing, text mining and machine learning to predict \\textbf{pain relevance} and \\textbf{pain change} from SCD clinical notes. We build two kinds of models: 1) A binary classification model for classifying clinical notes into \\textit{pain relevant} or \\textit{pain irrelevant}; and 2) A multiclass classification model for classifying the \\textit{pain relevant} clinical notes into i) \\textit{pain increase}, ii) \\textit{pain uncertain}, iii) \\textit{pain unchanged}, and iv) \\textit{pain decrease}. We experiment with Logistic Regression, Decision Trees, Random Forest, and Feed Forward Neural Network (FFNN) for both the binary and multiclass classification tasks. For the multiclass classification task, we conduct ordinal classification as the task is to predict pain change levels ranging from \\textit{pain increase} to \\textit{pain decrease}. We evaluate the performance of our ordinal classification model using graded evaluation metrics proposed in \\cite{gaur2019knowledge}.\n\n\\section{Related Work}\n\nThere is an increasing body of work assessing complications within SCD. Mohammed et al. \\cite{mohammed2019machine} developed an ML model to predict early onset organ failure using physiological data of patients with SCD. They used five physiologic markers as features to build a model using a random forest classifier, achieving the best mean accuracy in predicting organ failure within six hours before the incident. Jonassaint et al. \\cite{jonassaint2015usability} developed a mobile app to monitor signals such as clinical symptoms, pain intensity, location and perceived severity to actively monitor pain in patients with SCD. Yang et al. \\cite{yang2018improving} employed ML techniques to predict pain from objective vital signs shedding light on how objective measures could be used for predicting pain.\n\nPast work on predicting pain or other comorbidities of SCD, has thus, relied on features such as physiological data to assess pain for a patient with SCD. In this study, we employ purely textual data to assess the prevalence of pain in patients and whether pain increases, decreases or stays constant. 
\n\nThere have been studies on clinical text analysis for other classification tasks. Wang et al. \\cite{wang2019clinical} conducted smoking status and proximal femur fracture classification using the i2b2 2006 dataset. Chodey et al. \\cite{chodey2016clinical} used ML techniques for named entity recognition and normalization tasks. Elhadad et al. \\cite{elhadad2015semeval} conducted clinical disorder identification using named entity recognition and template slot filling from the ShARe corpus (Pradhan et al., 2015) \\cite{pradhan2014evaluating}. Similarly, clinical text can be used for predicting the prevalence and degree of pain in sickle cell patients as it has a rich set of indicators for pain.\n\n\n\\section{Data Collection}\n\nOur dataset consists of 424 clinical notes of 40 patients collected by Duke University Medical Center over two years (2017 - 2019). The clinical notes are jointly annotated by two co-author domain experts. There are two rounds of annotation conducted on the dataset. In the first round, the clinical notes were annotated as \\textit{relevant to pain} or \\textit{irrelevant to pain}. In the second round, the \\textit{relevant to pain} clinical notes were annotated to reflect \\textbf{pain change}. Figure-1 shows the size of our dataset based on \\textbf{pain relevance} and \\textbf{pain change}. As shown, our dataset is mainly composed of \\textit{pain relevant} clinical notes. Among the \\textit{pain relevant} clinical notes, clinical notes labeled \\textit{pain decrease} for the \\textbf{pain change} class outnumber the rest. Sample \\textit{pain relevant} and \\textit{pain irrelevant} notes are shown in Table-I.\\\n\n\n\n\n\n\\begin{table}[htbp]\n\n\\caption{Sample Clinical notes}\n\\begin{center}\n\n\\begin{tabular}{p{2.0cm} | p{5.5cm} }\n\n\\hline\n\\cline{2-2} \n\n\\textbf{\\textbf{Pain Relevance}}& \\textbf{\\textbf{Sample Clinical Note}} \\\\\n\n\\hline\nYES & Patient pain increased from 8\/10 to 9\/10 in chest. \\\\\n\n\\hline\nNO & Discharge home\n\\vspace*{-\\baselineskip}\n\n\n\n\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\nOur dataset is highly imbalanced, particularly, among the \\textbf{pain relevance} classes. There are significantly higher instances of clinical notes labeled \\textit{pain relevant} than \\textit{pain irrelevant}. To address this imbalance in our dataset, we employed a technique called Synthetic Minority Over-sampling TEchnique (SMOTE) \\cite{chawla2002smote} for both classification tasks.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=70mm]{pain_relevance_change.png}}\n\n\\caption{Statistics of dataset for \\textbf{Pain Relevance} and \\textbf{Pain Change} classes}\n\\label{fig}\n\\end{figure}\n\n\n\nWe preprocessed our dataset by removing stop words as well as punctuations, and performed lemmatization.\n\n\\section{Methods}\n\nThe clinical notes are labeled by co-author domain experts based on their \\textbf{pain relevance} and \\textbf{pain change} indicators. The \\textbf{pain change} labels use a scale akin to the Likert scale from severe to mild. Our pipeline (Figure-2) consists of data collection, data preprocessing, linguistic\/topical analysis, feature extraction, feature selection, model creation, and evaluation. We use linguistic and topical features to build our models. While linguistic analysis is used to extract salient features, topical features are used to mine latent features. 
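The data preparation stage of this pipeline (stop word and punctuation removal, lemmatization, and SMOTE oversampling \\cite{chawla2002smote}) can be sketched as follows. The use of NLTK for lemmatization and of the imbalanced-learn implementation of SMOTE is an assumption made for illustration, not a statement of the exact tooling used in this study.

\\begin{verbatim}
# Sketch of preprocessing and class rebalancing; library choices are assumptions.
# Assumes the NLTK stopword and WordNet corpora have been downloaded.
import string
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
from imblearn.over_sampling import SMOTE

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(note):
    # Lowercase, strip punctuation, drop stop words, and lemmatize the rest.
    tokens = note.lower().translate(str.maketrans("", "", string.punctuation)).split()
    return " ".join(lemmatizer.lemmatize(t) for t in tokens if t not in stop_words)

def rebalance(notes, labels):
    # Vectorize the cleaned notes, then oversample the minority class with SMOTE.
    X = CountVectorizer().fit_transform(preprocess(n) for n in notes)
    return SMOTE().fit_resample(X, labels)
\\end{verbatim}

In practice, oversampling of this kind would be applied only to the training split so that evaluation reflects the original class distribution.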
We performed two sets of experiments: 1) Binary Classification for \\textbf{pain relevance} classification, and 2) Multiclass Classification for \\textbf{pain change} classification.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=75mm]{pipeline.png}}\n\n\\caption{Sickle Cell Disease Pain Classification Pipeline}\n\\label{fig}\n\\end{figure}\n\n\n\n\\subsection{Linguistic Analysis}\n\nTo infer salient features in our dataset, we performed linguistic analysis. We generated n-grams for \\textit{pain-relevant} and \\textit{pain-irrelevant} clinical notes and clinical notes labeled \\textit{pain increase}, \\textit{pain uncertain}, \\textit{pain unchanged}, or \\textit{pain decrease}. In our n-grams analysis, we observe there are unigrams and bigrams that are common to different classes (e.g., common to \\textit{pain relevant} and \\textit{pain irrelevant}). Similarly, there are unigrams and bigrams that are exclusive to a given class. Table-II shows the top 10 unigrams selected using \\( \\chi^2 \\) feature selection for our dataset based on the classes of interest.\n\n\n\n\\begin{table}[htbp]\n\n\\caption{Top 10 Unigrams}\n\\begin{center}\n\n\\begin{tabular}{p{2.15cm} |p{2cm}| p{2cm} }\n\n\\hline\n\\cline{2-3} \n\n\\textbf{\\textbf{Pain Relevant\n(Exclusive)}}& \\textbf{\\textbf{Pain Irrelevant\n(Exclusive)}}& \\textbf{\\textit{Pain Relevant AND Pain Irrelevant}} \\\\\n\n\n\n\\hline\nemar, intervention, increase, dose, expressions, chest, regimen, alteration, toradol, medication & home, wheelchair, chc, fatigue, bedside, parent, discharge, warm, relief, mother & pain, pca, plan, develop, control, altered, patient, level, comfort, manage \n\n\n\n\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\\subsection{Topical Analysis}\nWhile n-grams analysis uncovers explicit language features in the clinical notes, it is equally important to uncover the hidden features characterizing the topical distribution. We adopt the Latent Dirichlet Allocation (LDA) \\cite{blei2003latent} for unraveling these latent features. We train an LDA model using our entire corpus. \n\nTo determine the optimal number of topics for a given class of clinical notes (e.g., \\textit{pain relevant} notes), we computed coherence scores \\cite{stevens2012exploring}. The higher the coherence score for a given number of topics, the more intepretable the topics are (see Figure-3). We set the number of words characterizing a given topic to eight. These are words with the highest scores in the topic distribution. We found the human-interpretable optimal number of topics for each of the classes of the clinical notes in our dataset to be two. This is interpreted as each class of the clinical notes is a mixture of two topics. Table-III shows words for the two topics for \\textit{pain relevant} and \\textit{pain irrelevant} clinical notes. As can be seen in the table, \\textit{pain relevant} notes can be interpreted to have mainly the topic of pain control, while \\textit{pain irrelevant} notes to have primarily the topic of home care. Similarly, Table-IV shows the distribution of words for the topics for each of the \\textbf{pain change} classes (underscored words are exclusive to the corresponding class for Topic-1). Further, \\textit{pain} appears in each of the topics for \\textbf{pain change} classes and, as a result, is not discriminative. 
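The coherence-driven selection of the number of topics can be sketched as follows, assuming gensim's \\texttt{LdaModel} and \\texttt{CoherenceModel} and the \\texttt{c\\_v} coherence measure; these library and measure choices are illustrative assumptions.

\\begin{verbatim}
# Sketch: choose the topic count with the highest coherence score.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

def best_topic_count(tokenized_notes, candidate_counts=range(2, 11)):
    dictionary = Dictionary(tokenized_notes)
    corpus = [dictionary.doc2bow(tokens) for tokens in tokenized_notes]
    models, scores = {}, {}
    for k in candidate_counts:
        models[k] = LdaModel(corpus=corpus, id2word=dictionary,
                             num_topics=k, random_state=0)
        scores[k] = CoherenceModel(model=models[k], texts=tokenized_notes,
                                   dictionary=dictionary,
                                   coherence="c_v").get_coherence()
    best_k = max(scores, key=scores.get)
    # Eight most probable words per topic, as reported in Table-III and Table-IV.
    topics = [models[best_k].show_topic(t, topn=8) for t in range(best_k)]
    return best_k, scores, topics
\\end{verbatim}

Applied per class of clinical notes, a procedure of this form yields the optimum of two topics reported above.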
While a common word such as \\textit{pain} in the topic distribution can be considered as a stop word and not helpful for \\textbf{pain change} classification, we did not remove it since \\textit{pain} helps with interpretation of a given topic regardless of other topics.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=46mm, height=2.5cm]{coherence_final.png}}\n\n\\caption{Coherence Scores vs Number of Topics }\n\\label{fig}\n\n\\vspace{-1.5em}\n\\end{figure}\n\n\n\\begin{table}[htbp]\n\\vspace{1 mm}\n\\caption{Topic distribution based on pain relevance}\n\\begin{center}\n\\vspace{-4 mm}\n\n\n\\begin{tabular}{p{1.25cm} | p{3.2cm} | p{3.2cm}}\n\n\\hline\n\\cline{2-3} \n\n\n\\textbf{\\textbf{Pain Relevance}}& \\textbf{Most Prevalent Words in Topic-1} & \\textbf{Most Prevalent Words in Topic-2} \\\\\n\n\\hline\nYES & progress, pain, improve, decrease, knowledge, contro\n& patient, pain, medication, knowledge, goal, stat\n\\\\\n\n\\hline\nNO & note, admission, discharge, patient, home, abilit\n& pain, goal, admission, outcome, relief, continue\n\n\n\\vspace*{-\\baselineskip}\n\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\n\\begin{table}[htbp]\n\n\\caption{Topic distribution based on pain change}\n\\begin{center}\n\n\\vspace{-4 mm}\n\n\\begin{tabular}{p{1.25cm} | p{3.2cm} | p{3.2cm}}\n\n\\hline\n\\cline{2-2} \n\n\\textbf{\\textbf{Pain Change}}& \\textbf{\\textbf{Most Prevalent Words in Topic-1}} & \\textbf{\\textbf{Most Prevalent Words in Topic-2}} \\\\\n\n\\hline\nPain increase & pain, progress, \\textbf{\\underline{medication}}, \\textbf{\\underline{management}}, patient, \\textbf{\\underline{schedule}}, pca,\\textbf{\\underline{ intervention}} & pain, patient, give, goal, intervention, dose, button, plan\\\\\n\n\\hline\nPain uncertain & pain, patient, \\textbf{\\underline{goal}}, \\textbf{\\underline{continue}}, plan, \\textbf{\\underline{improve}}, decrease, develop & outcome, pain, problem, knowledge, regimen, deficit, carry, method\\\\\n\n\\hline\nPain unchanged & pain, progress, \\textbf{\\underline{level}}, \\textbf{\\underline{control}}, develop, plan, regimen, pca & patient, pain, remain, well, demand, plan, level, manage\\\\\n\n\\hline\nPain decrease & pain, progress, patient, decrease, plan, regimen, \\textbf{\\underline{satisfy}}, \\textbf{\\underline{alter}} & pain, patient, improve, satisfy, control, decrease, manage, ability\\\\\n\n\\vspace*{-\\baselineskip}\n\n\n\n$^{\\mathrm{}}$& & \\\\\n\n\\hline\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\n\n\\subsection{Classification}\n\nThe language and topical analyses results are used as features in building the ML models. \nOur classification task consists of two sub-classification tasks: 1) \\textbf{pain relevance} classification; 2) \\textbf{pain change} classification, each with its own sets of features. The \\textbf{pain relevance} classifier classifies clinical notes into \\textit{pain-relevant} and \\textit{pain-irrelevant}. The \\textbf{pain change} classifier is used to classify the \\textit{pain-relevant} clinical notes into 1) \\textit{pain increase}, 2) \\textit{pain uncertain}, 3) \\textit{pain unchanged}, and 4) \\textit{pain decrease}. We trained and evaluated various ML models for each classification task. We used a combination of different linguistic and topical features to train our models. 
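One way to assemble the two feature families is sketched below: chi-squared-selected n-gram counts serve as the linguistic features and per-note topic proportions from the trained LDA model serve as the topical features. The scikit-learn utilities, the number of selected n-grams, and the helper names are assumptions made for illustration.

\\begin{verbatim}
# Sketch: linguistic (chi-squared-selected n-grams) and topical (LDA topic
# proportions) features, combined into one matrix for the classifiers.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

def build_features(notes, labels, lda, dictionary, k_ngrams=500):
    # Linguistic features: unigram and bigram counts filtered by a chi-squared test.
    counts = CountVectorizer(ngram_range=(1, 2)).fit_transform(notes)
    selector = SelectKBest(chi2, k=min(k_ngrams, counts.shape[1]))
    linguistic = selector.fit_transform(counts, labels)

    # Topical features: the topic mixture inferred by the trained LDA model.
    topical = np.zeros((len(notes), lda.num_topics))
    for i, note in enumerate(notes):
        bow = dictionary.doc2bow(note.split())
        for topic_id, weight in lda.get_document_topics(bow, minimum_probability=0.0):
            topical[i, topic_id] = weight

    # Concatenate the two representations into a single feature matrix.
    return hstack([linguistic, csr_matrix(topical)])
\\end{verbatim}

A matrix assembled in this way is what the classifiers compared in Table-V and Table-VI would consume.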
Since linguistic and topical features are generated using independent underlying techniques, which make them orthogonal, concatenation operation is used to combine their representations. We split our dataset into 80\\% training and 20\\% testing sets and built logistic regression, decision trees, random forests, and FFNN for both classification tasks. Table-V shows the results of the \\textbf{pain relevance} classifier while Table-VI shows \\textbf{pain change} classification results. For the ordinal classification, we considered the following order in the severity of pain change from high to low: \\textit{pain increase}, \\textit{pain uncertain}, \\textit{pain unchanged}, \\textit{pain decrease}.\n\n\\begin{comment}\n\n\n\\begin{table}\n\\centering\n\\caption{Pain Relevance Classification}\n\\label{res:expdata}\n\\scalebox{0.80}{\n\\vline\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multirow{Model} & \\multirow{Features} & \\multirow{Precision} & \\multirow{Recall} & \\multirow{F-measure}\\\\\n\\cline{1-5}\n\n\\end{tabular}\n}\n\n\\end{table}\n\n\\end{comment}\n\n\\begin{table}[ht]\n \\caption{Pain Relevance Classification}\n \\centering\n \\scalebox{0.85}{\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\textbf{Model} & \\textbf{Feature} & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F-measure}\\\\\n \\hline\n \\multirow{3}{*} {Logistic Regression} & Linguistic & 0.94 & 0.93 & 0.94\\\\\n & Topical & 0.98 & 0.86 & 0.91\\\\\n & Linguistic + Topical & 0.95 & 0.95 & 0.95\\\\\n \\hline\n \\multirow{3}{*} \\textbf{Decision Trees} & Linguistic & 0.95 & 0.95 & 0.95\\\\\n & Topical & 0.98 & 0.98 & 0.98\\\\\n & \\textbf{Linguistic + Topical} & \\textbf{0.98} & \\textbf{0.98} & \\textbf{0.98}\\\\\n \n \\hline\n \n \n \\multirow{3}{*} {Random Forest} & Linguistic & 0.90 & 0.95 & 0.92\\\\\n & Topical & 0.95 & 0.98 & 0.98\\\\\n & Linguistic + Topical & 0.90 & 0.95 & 0.93\\\\\n \n \n \\hline\n \\multirow{3}{*} {FFNN} & Linguistic & 0.94 & 0.94 & 0.94\\\\\n & Topical & 0.98 & 0.98 & 0.98\\\\\n & Linguistic + Topical & 0.96 & 0.96 & 0.94\\\\\n \\hline\n \\end{tabular}\n }\n \n \\label{multirow_table}\n\\end {table}\n\n\n\n\n\n\\begin{table}[ht]\n \\caption{Pain Change Classification}\n \\centering\n \\scalebox{0.85}{\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\textbf{Model} & \\textbf{Feature} & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F-measure}\\\\\n \\hline\n \\multirow{3}{*} {Logistic Regression} & Linguistic & 0.75 & 0.56 & 0.63\\\\\n & Topical & 0.50 & 0.55 & 0.52\\\\\n & Linguistic + Topical & 0.76 & 0.58 & 0.66\\\\\n \\hline\n \\multirow{3}{*} \\textbf{Decision Trees} & Linguistic & 0.76 & 0.59 & 0.67\\\\\n & Topical & 0.73 & 0.65 & 0.68\\\\\n & \\textbf{Linguistic + Topical} & \\textbf{0.74} & \\textbf{0.68} & \\textbf{0.70}\\\\\n \n \\hline\n \n \\hline\n \\multirow{3}{*} {Random Forest} & Linguistic & 0.74 & 0.49 & 0.59\\\\\n & Topical & 0.94 & 0.52 & 0.66\\\\\n & Linguistic + Topical & 0.81 & 0.46 & 0.59\\\\\n \n \n \\hline\n \\multirow{3}{*} {FFNN} & Linguistic & 0.71 & 0.59 & 0.65\\\\\n & Topical & 0.73 & 0.65 & 0.68\\\\\n & Linguistic + Topical & 0.83 & 0.51 & 0.63\\\\\n \\hline\n \\end{tabular}}\n \n \\label{multirow_table}\n\n\\end {table}\n \n\\section{Discussion}\n\nFor \\textbf{pain relevance} classification, the four models have similar performance. For \\textbf{pain change} classification, however, we see a significant difference in performance across the various combinations of features and models. 
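Before examining those differences, it helps to recall how the \\textbf{pain change} numbers in Table-VI are obtained: the graded (ordinal) evaluation, described formally later in this section, penalizes a prediction according to how far it falls from the true label along the severity scale above. A minimal sketch of that counting scheme, assuming the four levels are encoded as integers in severity order, is shown below.

\\begin{verbatim}
# Sketch of the graded counting scheme for ordinal pain change evaluation:
# a prediction k levels away from the true label adds k to FP or FN.
def graded_scores(true_levels, predicted_levels):
    tp = fp = fn = 0
    for t, p in zip(true_levels, predicted_levels):
        if p == t:
            tp += 1
        elif p > t:
            fp += p - t
        else:
            fn += t - p
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
\\end{verbatim}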
Decision trees with linguistic and topical features achieve the best performance in F-measure. While random forest, and FFNN offer better precision, each, than decision tree, they suffer on Recall, and therefore on F-measure. Further, most models perform better when trained on topical features than pure linguistic features. A combination of topical and linguistic features usually offers the best model performance. Thus, latent features obtained using LDA enable an ML model to perform better. \n\nEvaluation of the multiclass classification task is conducted using the techniques used by Gaur et al. \\cite{gaur2019knowledge} where a model is penalized based on how much it deviates from the true label for an instance. Formally, the count of true positives is incremented when the true label and predicted label of an instance are the same. Similarly, false positives' count gets incremented by an amount equal to the gap between a predicted label and true label (when predicted label is higher than true label). False negatives' count is incremented by the difference between the predicted label and true label (when predicted label is lower than true label). Precision, and recall are then computed following the implementations defined in ML libraries\\footnote{\\url{https:\/\/bit.ly\/3a5Fibb}} using the count of true positives, false positives, and false negatives. Finally, F-measure is defined as the harmonic mean of precision and recall.\n\nWhile we achieved scores on the order of 0.9 for \\textbf{pain relevance} classification, the best we achieved for \\textbf{pain change} classification was 0.7. This is because there is more disparity in linguistic and topical features between \\textit{pain relevant} and \\textit{pain irrelevant} notes than there is among the four \\textbf{pain change} classes. Since the price of false negatives is higher than false positives in a clinical setting, we favor decision trees with n-grams and topics used as features as they achieve the best Recall and F-measure, albeit they lose to other models on Precision. Thus, identification of \\textit{pain relevant} notes with 0.98 F-measure followed by a 0.70 F-measure on determining \\textit{pain change} is impressive. We believe our model can be used by MPs for SCD-induced pain mitigation.\n\n\\vspace{8mm}\n\n\\section{Conclusion and Future Work}\n\nIn this study, we conducted a series of analyses and experiments to leverage the power of natural language processing and ML to predict \\textbf{pain relevance} and \\textbf{pain change} from clinical text. Specifically, we used a combination of linguistic and topical features to build different models and compared their performance. Results show decision tree followed by feed forward neural network as the most promising models.\n\nIn future work, we plan to collect additional clinical notes and use unsupervised, and deep learning techniques for predicting pain. 
Further, we look forward to fusing different modalities of sickle cell data for better modeling of pain or different physiological manifestations of SCD.\n\n\n\n\n\n\n\n\n\n\\vspace{12pt}\n\n\n\n\n\\bibliographystyle{IEEEtran} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Appendix}\n\n\nTable~\\ref{tab:hyperparameters} lists the training hyperparameters and\nruntime hyperparameters used by \\texttt{PenumbraMixture}.\nTable~\\ref{tab:cross-eval-top-1}, Table~\\ref{tab:cross-eval-top-5},\nand Table~\\ref{tab:cross-eval-winner-acc}\nprovide top-1 action, top-5 action, and winner accuracies, respectively,\nbetween each headset in the neural network.\nFigure~\\ref{fig:game-length-distribution}\nshows game length distributions for each headset.\n\nThe synopsis features were hand-designed.\nMany of them are natural given the rules of chess.\nSome of them are near duplicates of each other.\nTable~\\ref{tab:feature-description-1} and Table~\\ref{tab:feature-description-2}\njointly provide brief descriptions of each synopsis feature plane.\nThese tables also include\nsaliency estimates averaged over five runs.\nThe penultimate column orders the synopsis features by\ntheir per-bit saliency based on action gradients, and\nthe final column reports the average difference\nof the policy head accuracies\nwhen the model was retrained without each feature.\n\n\\begin{table}[h]\n\\caption{Hyperparameters used by \\texttt{PenumbraMixture}}\n\\label{tab:hyperparameters}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{tabular}{llr}\n Symbol & Parameter & Value \\\\\n \\hline\n \n $b$ & Batch size & $256$ \\\\\n $c$ & Exploration constant & $2$ \\\\\n $d_{\\text{sense}}$ & Search depth for sense actions & $6$ \\\\\n $d_{\\text{move}}$ & Search depth for move actions & $12$ \\\\\n $F$ & \\# of binary synopsis features & $8\\stimes8\\stimes104$ \\\\\n $k$ & Rejection sampling persistence & $512$ \\\\\n $\\ell$ & Limited state set size & $128$ \\\\\n $m$ & Bandit mixing constant & $1$ \\\\\n $n_{\\text{particles}}$ & \\# of samples to track & $4096$ \\\\\n $n_{\\text{vl}}$ & Virtual loss & $1$ \\\\\n $n_{\\text{batches}}$ & Total minibatches of training & $650000$ \\\\\n $n_{\\text{width}}$ & Network width; \\# features per layer & $128$ \\\\\n $n_{\\text{depth}}$ & Network depth; \\# residual blocks & $10$ \\\\\n $z$ & Depth increase threshold & $\\infty$ \\\\\n $\\kappa$ & Caution & $0$ \\\\\n $\\phi$ & Paranoia & $0$ \\\\\n $\\epsilon$ & Learning rate & $0.0005$ \\\\\n\\end{tabular}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\\subsection{2019 NeurIPS competition}\n\n\\texttt{Penumbra}{} was originally created to compete in the\n2019 reconnaissance blind chess competition\nhosted by the Conference on Neural Information Processing Systems (NeurIPS).\nHowever, it performed very poorly in that competition,\nwinning fewer games than the random bot.\n\nThe program and underlying algorithm presented in this paper\nare largely the same as the originals.\nThe main differences are that some\nhyperparameters were adjusted,\nthe neural network was retrained with more data,\nand a key bug in the playout{} code was fixed.\nInstead of choosing actions according to the policy from the neural network,\nthe playout{} code erroneously always selected the last legal action.\nGiving the program a \\texttt{break} made a huge difference.\n\n\n\\subsection{Comparison with Kriegspiel}\n\nA comparison between RBC and Kriegspiel chess\n\\citep{ciancarini2009mcts, 
parker2010paranoia, richards2012reasoning}\nmay be worthwhile. Kriegspiel chess \nalso introduces uncertainty about the opposing pieces\nbut lacks an explicit sensing mechanism.\nInstead, information is gathered solely from\ncaptures, check notifications, and illegal move attempts.\nIn Kriegspiel, illegal moves are rejected and the player is allowed to choose a new\nmove with their increased information about the board state,\nwhich entangles the positional and informational aspects of the game.\nIn contrast, sensing in RBC gives players direct control\nover the amount and character of the information they possess.\n\nAnother significant difference comes from the mechanics related to check.\nCapturing pieces and putting the opposing king into check\nhave benefits in both games:\ncapturing pieces leads to a material advantage,\nand check often precedes checkmate.\nIn Kriegspiel, however, both capturing and giving check\nalso provide the opponent with information.\nIn RBC, while capturing does give the opponent information,\nputting their king into check does not,\nwhich makes sneak attacks more viable.\n\n\n\\subsection{Games played}\n\nThe games that were played in order to produce Table~\\ref{tab:baseline-bots}, Table~\\ref{tab:caution-and-paranoia}, and Table~\\ref{tab:exploration}\nare available for download from \\url{https:\/\/github.com\/w-hat\/penumbra}.\n\n\n\\begin{table*}[ht]\n\\caption{Top-1 action accuracy across headsets.}\n\\label{tab:cross-eval-top-1}\n\\vspace{0.1in}\n\\begin{center}\n\\rowcolors{2}{gray!10}{white}\n\\setlength{\\tabcolsep}{3.2pt}\n\\begin{tabular}{ccccccccccccccc}\n \\crossevaltableheader\n \\texttt{All}{} & \\bf 33.6 & 41.5 & 40.5 & 32.3 & 43.7 & 44.1 & 41.5 & 20.3 & 31.9 & 37.1 & 36.6 & 21.9 & 41.4 & 3.8 \\\\\n \\texttt{Top}{} & 30.5 & \\bf 43.1 & 44.5 & 32.3 & 43.9 & 44.4 & 40.3 & 19.4 & 31.6 & 22.4 & 26.0 & 18.5 & 15.6 & 3.3 \\\\\n \\texttt{StrangeFish}{} & 27.3 & 40.5 & \\bf 45.9 & 30.3 & 36.9 & 38.8 & 33.2 & 17.9 & 27.7 & 21.0 & 23.6 & 17.5 & 14.6 & 3.3 \\\\\n \\texttt{LaSalle}{} & 26.0 & 34.4 & 34.6 & \\bf 36.4 & 34.1 & 34.5 & 33.2 & 17.2 & 24.6 & 22.6 & 26.1 & 15.5 & 11.2 & 3.4 \\\\\n \\texttt{Dyn.Entropy} & 27.9 & 37.9 & 35.2 & 27.5 & \\bf 50.0 & 40.1 & 37.4 & 18.0 & 32.6 & 18.3 & 22.6 & 16.9 & 13.8 & 3.3 \\\\\n \\texttt{Oracle}{} & 28.8 & 38.4 & 36.3 & 29.1 & 41.2 & \\bf 49.3 & 35.1 & 17.2 & 29.5 & 19.9 & 26.6 & 17.0 & 11.9 & 3.4 \\\\\n \\focuswbernar{} & 28.8 & 38.1 & 33.9 & 30.7 & 42.9 & 38.6 & \\bf 45.2 & 17.9 & 29.3 & 18.4 & 23.1 & 16.7 & 11.6 & 3.3 \\\\\n \\texttt{Marmot}{} & 22.4 & 29.4 & 28.6 & 24.0 & 31.4 & 29.3 & 29.7 & \\bf 24.6 & 25.0 & 16.4 & 15.5 & 15.4 & 11.1 & 3.4 \\\\\n \\texttt{Genetic}{} & 24.0 & 32.5 & 30.3 & 24.1 & 39.8 & 35.0 & 32.3 & 16.9 & \\bf 40.4 & 15.1 & 15.7 & 15.1 & 7.8 & 3.4 \\\\\n \\texttt{Zugzwang}{} & 20.5 & 21.8 & 23.2 & 20.5 & 20.4 & 23.5 & 17.3 & 11.7 & 12.3 & \\bf 47.0 & 34.6 & 14.0 & 10.9 & 3.2 \\\\\n \\texttt{Trout}{} & 22.8 & 25.1 & 26.0 & 24.0 & 23.0 & 27.8 & 21.3 & 12.5 & 14.4 & 36.1 & \\bf 41.8 & 14.9 & 14.7 & 3.7 \\\\\n \\texttt{Human}{} & 23.8 & 30.1 & 30.6 & 24.9 & 31.4 & 30.1 & 28.4 & 16.8 & 24.1 & 24.7 & 24.5 & \\bf 24.9 & 12.5 & 3.3 \\\\\n \\texttt{Attacker}{} & 10.6 & 11.7 & 11.0 & 9.8 & 11.4 & 11.6 & 12.4 & 8.6 & 8.8 & 9.2 & 9.0 & 6.7 & \\bf 45.1 & 4.4 \\\\\n \\texttt{Random}{} & 14.0 & 16.7 & 16.2 & 14.0 & 16.5 & 17.8 & 16.8 & 9.4 & 11.4 & 15.7 & 16.5 & 10.4 & 10.4 & \\bf 4.5 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\\begin{table*}[ht]\n\\caption{Top-5 action accuracy across 
headsets.}\n\\label{tab:cross-eval-top-5}\n\\vspace{0.1in}\n\\begin{center}\n\\rowcolors{2}{gray!10}{white}\n\\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{ccccccccccccccc}\n \\crossevaltableheader\n \\texttt{All}{} & \\bf 62.0 & 72.3 & 71.0 & 66.6 & 74.9 & 75.8 & 72.5 & 51.3 & 65.3 & 65.5 & 61.8 & 48.1 & 59.1 & 18.2 \\\\\n \\texttt{Top}{} & 58.8 & \\bf 73.4 & 73.4 & 66.4 & 75.5 & 76.4 & 72.0 & 50.1 & 64.7 & 48.5 & 52.2 & 45.4 & 35.3 & 16.3 \\\\\n \\texttt{StrangeFish}{} & 56.8 & 72.2 & \\bf 74.3 & 64.7 & 72.6 & 74.2 & 67.5 & 48.5 & 62.1 & 46.5 & 50.1 & 43.8 & 41.4 & 15.7 \\\\\n \\texttt{LaSalle}{} & 56.8 & 68.8 & 68.4 & \\bf 68.2 & 69.8 & 70.6 & 68.3 & 48.7 & 58.9 & 51.5 & 56.6 & 43.6 & 30.8 & 16.8 \\\\\n \\texttt{Dyn.Entropy} & 55.5 & 69.1 & 66.9 & 60.9 & \\bf 76.7 & 72.3 & 69.6 & 47.2 & 64.4 & 38.4 & 47.2 & 42.7 & 41.4 & 16.6 \\\\\n \\texttt{Oracle}{} & 56.7 & 70.4 & 68.9 & 61.9 & 74.1 & \\bf 77.4 & 69.1 & 45.6 & 63.4 & 43.9 & 50.6 & 43.1 & 34.4 & 16.4 \\\\\n \\focuswbernar{} & 57.0 & 70.0 & 67.3 & 64.7 & 74.0 & 71.3 & \\bf 73.6 & 49.3 & 63.3 & 43.3 & 50.2 & 44.0 & 29.8 & 17.0 \\\\\n \\texttt{Marmot}{} & 53.4 & 65.0 & 64.0 & 58.7 & 68.6 & 66.1 & 65.2 & \\bf 55.4 & 59.3 & 43.0 & 46.2 & 42.6 & 32.2 & 16.2 \\\\\n \\texttt{Genetic}{} & 52.6 & 65.8 & 64.2 & 58.2 & 71.3 & 69.7 & 65.8 & 45.4 & \\bf 70.5 & 37.4 & 41.6 & 40.9 & 27.0 & 16.2 \\\\\n \\texttt{Zugzwang}{} & 42.8 & 45.0 & 45.4 & 46.0 & 42.1 & 47.5 & 42.4 & 32.5 & 32.8 & \\bf 71.6 & 57.9 & 35.3 & 29.0 & 15.4 \\\\\n \\texttt{Trout}{} & 49.3 & 54.5 & 54.4 & 53.9 & 53.7 & 57.2 & 52.5 & 38.4 & 42.2 & 62.9 & \\bf 63.4 & 38.7 & 40.9 & 18.1 \\\\\n \\texttt{Human}{} & 53.5 & 62.9 & 62.4 & 58.3 & 64.7 & 64.6 & 62.5 & 45.9 & 55.7 & 53.7 & 53.4 & \\bf 51.9 & 33.4 & 16.3 \\\\\n \\texttt{Attacker}{} & 35.2 & 39.5 & 38.8 & 34.7 & 40.8 & 40.2 & 39.7 & 31.1 & 33.8 & 33.0 & 32.4 & 28.4 & \\bf 61.9 & 20.0 \\\\\n \\texttt{Random}{} & 39.7 & 45.6 & 44.5 & 41.2 & 46.4 & 47.9 & 46.0 & 31.8 & 37.7 & 41.3 & 42.1 & 31.5 & 30.3 & \\bf 20.7 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\caption{Winner accuracy across headsets.}\n\\label{tab:cross-eval-winner-acc}\n\\vspace{0.1in}\n\\begin{center}\n\\rowcolors{2}{gray!10}{white}\n\\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{ccccccccccccccc}\n \\crossevaltableheader\n \\texttt{All}{} & \\bf 74.8 & 73.4 & 76.6 & 68.4 & 74.6 & 76.3 & 67.8 & 79.3 & 74.1 & 76.7 & 76.7 & 72.1 & 79.6 & 91.3 \\\\\n \\texttt{Top}{} & 63.1 & \\bf 82.4 & \\bf 86.7 & 68.0 & 76.0 & 80.6 & 66.3 & 65.6 & 73.2 & 49.1 & 55.1 & 52.7 & 47.7 & 29.9 \\\\\n \\texttt{StrangeFish}{} & 64.1 & 82.1 & 86.6 & 67.5 & 76.2 & 80.6 & 65.2 & 65.8 & 72.7 & 50.1 & 55.4 & 55.8 & 60.9 & 37.3 \\\\\n \\texttt{LaSalle}{} & 69.9 & 77.3 & 80.7 & \\bf 69.7 & 76.0 & 78.8 & 67.1 & 73.5 & 75.9 & 65.1 & 68.3 & 62.4 & 71.9 & 60.7 \\\\\n \\texttt{Dyn.Entropy} & 67.8 & 80.4 & 85.0 & 69.4 & \\bf 78.9 & 80.8 & 67.0 & 71.7 & 75.6 & 58.5 & 61.1 & 61.5 & 65.0 & 47.3 \\\\\n \\texttt{Oracle}{} & 66.0 & 82.0 & 86.4 & 69.4 & 77.3 & \\bf 81.4 & 66.9 & 69.2 & 75.3 & 53.5 & 58.2 & 57.9 & 59.5 & 39.7 \\\\\n \\focuswbernar{} & 71.3 & 78.5 & 82.5 & 69.3 & 77.2 & 79.5 & \\bf 68.3 & 75.3 & 76.8 & 64.0 & 65.8 & 66.9 & 72.6 & 68.9 \\\\\n \\texttt{Marmot}{} & 70.6 & 67.7 & 70.1 & 64.5 & 70.9 & 72.3 & 65.7 & \\bf 80.8 & 72.9 & 73.9 & 73.4 & 69.2 & 77.0 & 72.6 \\\\\n \\texttt{Genetic}{} & 67.1 & 80.0 & 84.1 & 68.6 & 77.1 & 80.3 & 67.1 & 71.5 & \\bf 77.9 & 56.5 & 60.8 & 60.9 & 62.1 & 44.3 \\\\\n \\texttt{Zugzwang}{} & 68.3 & 54.6 & 54.0 & 59.7 & 61.4 & 60.0 & 61.7 & 
76.1 & 64.4 & \\bf 80.1 & 78.8 & 72.5 & 78.5 & 88.9 \\\\\n \\texttt{Trout}{} & 69.7 & 58.6 & 59.1 & 60.9 & 63.6 & 64.4 & 63.4 & 77.9 & 69.0 & 77.9 & \\bf 79.8 & 72.4 & 78.6 & 87.3 \\\\\n \\texttt{Human}{} & 71.1 & 64.8 & 66.0 & 63.8 & 67.7 & 68.0 & 65.1 & 76.6 & 70.4 & 74.6 & 76.2 & \\bf 73.3 & 77.7 & 90.1 \\\\\n \\texttt{Attacker}{} & 63.7 & 47.1 & 46.1 & 55.6 & 53.8 & 50.9 & 56.5 & 71.4 & 59.5 & 75.0 & 73.4 & 71.5 & \\bf 80.1 & 92.7 \\\\\n \\texttt{Random}{} & 51.0 & 25.5 & 20.9 & 43.4 & 34.8 & 28.5 & 46.5 & 54.9 & 38.2 & 65.7 & 56.6 & 62.5 & 69.5 & \\bf 94.7 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\\begin{figure*}[ht]\n\\centering\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=1\\linewidth]{images\/game-length-distributions-8.png}\n\\end{minipage}\n\\caption{\nThe historical game length distributions are shown for the\ndata used to train each of the headsets.\nOn average, the games from \\texttt{Attacker}{} were the shortest,\nand the games from \\texttt{StockyInference}{} where the longest.\n}\n\\label{fig:game-length-distribution}\n\\end{figure*}\n\n\n\n\\begin{table*}[ht]\n\\caption{%\nSynopsis feature descriptions, saliency estimates, and ablation study results.\n}\n\\label{tab:feature-description-1}\n\\vspace{0.1in}\n\\begin{center}\n\\begin{tabular}{clrrrrrr}\n \\featuretableheader\n 0 & East side (constant) & 3.24 & 0.86 & 0.80 & 0.91 & 34 & 0.27 \\\\\n 1 & West side (constant) & 3.12 & 0.82 & 0.84 & 0.81 & 43 & 0.00 \\\\\n 2 & South side (constant) & 3.24 & 0.86 & 0.78 & 0.94 & 33 & -0.08 \\\\\n 3 & North side (constant) & 3.22 & 0.82 & 0.89 & 0.75 & 44 & 0.27 \\\\\n 4 & Rank 1 (constant) & 3.15 & 0.80 & 0.81 & 0.76 & 50 & -0.03 \\\\\n 5 & Rank 8 (constant) & 3.12 & 0.82 & 0.85 & 0.61 & 42 & -0.07 \\\\\n 6 & A-file (constant) & 3.04 & 0.78 & 0.81 & 0.53 & 59 & 0.18 \\\\\n 7 & H-file (constant) & 3.08 & 0.80 & 0.83 & 0.58 & 51 & 0.15 \\\\\n 8 & Dark squares (constant) & 3.08 & 0.82 & 0.81 & 0.82 & 45 & 0.21 \\\\\n 9 & Light squares (constant) & 3.03 & 0.78 & 0.77 & 0.78 & 61 & 0.01 \\\\\n 10 & Stage (move or sense) & 7.80 & 3.14 & 2.82 & 3.45 & 0 & -0.19 \\\\\n 11 & Not own piece & 5.40 & 1.43 & 2.66 & 1.13 & 8 & -0.29 \\\\\n 12 & Own pawns & 4.16 & 1.14 & 1.07 & 1.73 & 14 & 0.01 \\\\\n 13 & Own knights & 3.68 & 0.93 & 0.91 & 1.63 & 22 & 0.09 \\\\\n 14 & Own bishops & 3.46 & 0.89 & 0.87 & 1.63 & 27 & 0.02 \\\\\n 15 & Own rooks & 3.67 & 0.94 & 0.93 & 1.12 & 21 & 0.03 \\\\\n 16 & Own queens & 3.28 & 0.87 & 0.85 & 2.24 & 32 & 0.06 \\\\\n 17 & Own king & 3.14 & 0.79 & 0.79 & 0.88 & 52 & 0.10 \\\\\n 18 & Definitely not opposing pieces & 3.85 & 1.14 & 1.03 & 1.21 & 13 & -0.10 \\\\\n 19 & Definitely opposing pawns & 3.49 & 1.01 & 1.02 & 0.73 & 17 & -0.08 \\\\\n 20 & Definitely opposing knights & 3.30 & 0.93 & 0.93 & 0.61 & 23 & -0.05 \\\\\n 21 & Definitely opposing bishops & 3.21 & 0.88 & 0.88 & 0.59 & 29 & -0.02 \\\\\n 22 & Definitely opposing rooks & 3.04 & 0.81 & 0.82 & 0.38 & 47 & 0.02 \\\\\n 23 & Definitely opposing queens & 3.15 & 0.85 & 0.85 & 0.60 & 35 & -0.10 \\\\\n 24 & Definitely opposing king & 3.60 & 0.92 & 0.91 & 2.27 & 26 & 0.04 \\\\\n 25 & Possibly not opposing pieces & 5.22 & 1.54 & 1.34 & 1.56 & 5 & -0.04 \\\\\n 26 & Possibly opposing pawns & 3.50 & 0.92 & 0.92 & 0.93 & 24 & 0.06 \\\\\n 27 & Possibly opposing knights & 2.97 & 0.77 & 0.77 & 0.81 & 67 & 0.07 \\\\\n 28 & Possibly opposing bishops & 2.95 & 0.75 & 0.74 & 0.89 & 70 & 0.09 \\\\\n 29 & Possibly opposing rooks & 3.01 & 0.75 & 0.76 & 0.63 & 69 & -0.18 \\\\\n 30 & Possibly opposing 
queens & 3.05 & 0.78 & 0.77 & 1.05 & 57 & -0.07 \\\\\n 31 & Possibly opposing kings & 4.86 & 1.48 & 1.43 & 2.64 & 7 & -0.04 \\\\\n 32 & Last from & 2.77 & 0.72 & 0.72 & 0.83 & 76 & -0.11 \\\\\n 33 & Last to & 3.28 & 0.96 & 0.96 & 1.40 & 19 & 0.02 \\\\\n 34 & Last own capture & 3.10 & 0.83 & 0.83 & 1.17 & 40 & 0.07 \\\\\n 35 & Last opposing capture & 8.04 & 2.83 & 2.82 & 6.51 & 1 & -0.08 \\\\\n 36 & Definitely attackable & 2.72 & 0.70 & 0.62 & 0.78 & 84 & -0.06 \\\\\n 37 & Definitely attackable somehow & 2.73 & 0.71 & 0.65 & 0.78 & 80 & -0.02 \\\\\n 38 & Possibly attackable & 3.02 & 0.81 & 0.71 & 0.92 & 48 & 0.19 \\\\\n 39 & Definitely doubly attackable & 2.67 & 0.66 & 0.63 & 0.80 & 92 & -0.11 \\\\\n 40 & Definitely doubly attackable somehow & 2.66 & 0.69 & 0.67 & 0.80 & 88 & 0.14 \\\\\n 41 & Possibly doubly attackable & 2.71 & 0.75 & 0.73 & 0.83 & 71 & -0.26 \\\\\n 42 & Definitely attackable by pawns & 3.54 & 0.92 & 0.92 & 2.38 & 25 & 0.13 \\\\\n 43 & Possibly attackable by pawns & 3.11 & 0.78 & 0.78 & 0.95 & 58 & -0.10 \\\\\n 44 & Definitely attackable by knights & 2.91 & 0.72 & 0.71 & 0.84 & 77 & 0.24 \\\\\n 45 & Definitely attackable by bishops & 2.60 & 0.64 & 0.61 & 0.80 & 95 & 0.15 \\\\\n 46 & Possibly attackable by bishops & 2.60 & 0.68 & 0.64 & 0.85 & 89 & -0.07 \\\\\n 47 & Definitely attackable by rooks & 2.63 & 0.65 & 0.64 & 0.75 & 93 & 0.07 \\\\\n 48 & Possibly attackable by rooks & 2.74 & 0.70 & 0.69 & 0.77 & 81 & 0.00 \\\\\n 49 & Possibly attackable without king & 2.72 & 0.70 & 0.63 & 0.79 & 82 & 0.19 \\\\\n 50 & Possibly attackable without pawns & 2.63 & 0.67 & 0.62 & 0.73 & 90 & 0.17 \\\\\n 51 & Definitely attackable by opponent & 3.25 & 0.87 & 0.91 & 0.77 & 31 & -0.03 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\caption{%\nSynopsis feature descriptions, saliency estimates, and ablation study results (continued).\n}\n\\label{tab:feature-description-2}\n\\vspace{0.1in}\n\\begin{center}\n\\begin{tabular}{clrrrrrr}\n \\featuretableheader\n 52 & Possibly attackable by opponent & 3.15 & 0.84 & 0.90 & 0.81 & 37 & 0.06 \\\\\n 53 & Definitely doubly attackable by opp. & 2.56 & 0.65 & 0.66 & 0.56 & 94 & -0.01 \\\\\n 54 & Possibly doubly attackable by opp. & 2.67 & 0.71 & 0.73 & 0.66 & 79 & -0.13 \\\\\n 55 & Definitely attackable by opp. pawns & 3.10 & 0.87 & 0.87 & 1.69 & 30 & 0.12 \\\\\n 56 & Possibly attackable by opp. pawns & 2.84 & 0.77 & 0.77 & 1.10 & 62 & 0.30 \\\\\n 57 & Definitely attackable by opp. knights & 2.78 & 0.69 & 0.70 & 0.59 & 87 & 0.21 \\\\\n 58 & Possibly attackable by opp. knights & 2.14 & 0.52 & 0.51 & 0.55 & 102 & 0.08 \\\\\n 59 & Definitely attackable by opp. bishops & 2.66 & 0.67 & 0.67 & 0.63 & 91 & 0.09 \\\\\n 60 & Possibly attackable by opp. bishops & 2.16 & 0.53 & 0.52 & 0.56 & 101 & -0.09 \\\\\n 61 & Definitely attackable by opp. rooks & 2.77 & 0.70 & 0.72 & 0.48 & 83 & -0.19 \\\\\n 62 & Possibly attackable by opp. rooks & 2.10 & 0.51 & 0.53 & 0.47 & 103 & 0.20 \\\\\n 63 & Possibly attackable by opp. w\/o king & 2.55 & 0.64 & 0.63 & 0.64 & 96 & -0.17 \\\\\n 64 & Possibly attackable by opp. 
w\/o pawns & 2.45 & 0.62 & 0.61 & 0.62 & 97 & -0.13 \\\\\n 65 & Possibly safe opposing king & 6.04 & 2.06 & 2.01 & 3.38 & 2 & 0.07 \\\\\n 66 & Squares the opponent may move to & 2.40 & 0.60 & 0.60 & 0.60 & 98 & 0.01 \\\\\n 67 & Possible castle state for opponent & 3.09 & 0.79 & 0.79 & 0.72 & 53 & 0.00 \\\\\n 68 & Known squares & 4.94 & 1.52 & 1.67 & 1.45 & 6 & 0.13 \\\\\n 69 & Own king's king-neighbors & 3.10 & 0.78 & 0.77 & 0.93 & 56 & 0.14 \\\\\n 70 & Own king's knight-neighbors & 2.82 & 0.71 & 0.70 & 0.91 & 78 & 0.31 \\\\\n 71 & Definitely opp. knights near king & 3.09 & 0.79 & 0.79 & 1.64 & 54 & 0.13 \\\\\n 72 & Possibly opp. knights near king & 5.13 & 1.72 & 1.72 & 2.77 & 4 & -0.01 \\\\\n 73 & Own king's bishop-neighbors & 2.74 & 0.69 & 0.68 & 0.86 & 85 & -0.10 \\\\\n 74 & Definitely opp. bishops near king & 3.04 & 0.79 & 0.79 & 0.89 & 55 & 0.23 \\\\\n 75 & Possibly opp. bishops near king & 5.23 & 1.75 & 1.75 & 2.41 & 3 & -0.11 \\\\\n 76 & Own king's rook-neighbors & 2.76 & 0.69 & 0.68 & 0.83 & 86 & -0.13 \\\\\n 77 & Definitely opp. rooks near king & 3.10 & 0.81 & 0.81 & 0.87 & 49 & 0.31 \\\\\n 78 & Possibly opp. rooks near king & 4.45 & 1.40 & 1.40 & 1.55 & 10 & 0.05 \\\\\n 79 & All own pieces & 5.26 & 1.36 & 1.09 & 2.47 & 11 & -0.01 \\\\\n 80 & Definitely empty squares & 3.69 & 0.96 & 1.05 & 0.84 & 20 & -0.13 \\\\\n 81 & May castle eventually & 3.11 & 0.81 & 0.81 & 1.26 & 46 & 0.24 \\\\\n 82 & Possibly may castle & 3.05 & 0.77 & 0.77 & 0.63 & 68 & 0.05 \\\\\n 83 & Definitely may castle & 3.04 & 0.77 & 0.77 & 0.87 & 66 & 0.12 \\\\\n 84 & Own queens' rook-neighbors & 2.20 & 0.54 & 0.53 & 0.63 & 100 & 0.04 \\\\\n 85 & Own queens' bishop-neighbors & 2.33 & 0.57 & 0.57 & 0.67 & 99 & 0.06 \\\\\n 86 & Previous definitely not opp. pieces & 3.82 & 0.88 & 0.87 & 0.89 & 28 & -0.30 \\\\\n 87 & Previous definitely opp. pawns & 4.16 & 1.16 & 1.18 & 0.88 & 12 & 0.14 \\\\\n 88 & Previous definitely opp. knights & 3.02 & 0.77 & 0.77 & 0.72 & 63 & 0.12 \\\\\n 89 & Previous definitely opp. bishops & 2.92 & 0.73 & 0.73 & 0.70 & 75 & -0.02 \\\\\n 90 & Previous definitely opp. rooks & 3.60 & 1.01 & 1.02 & 0.56 & 16 & 0.05 \\\\\n 91 & Previous definitely opp. queens & 3.93 & 1.11 & 1.11 & 1.00 & 15 & 0.22 \\\\\n 92 & Previous definitely opp. king & 3.33 & 0.83 & 0.82 & 1.58 & 38 & -0.04 \\\\\n 93 & Previous possibly not opp. pieces & 4.40 & 1.43 & 1.21 & 1.47 & 9 & -0.05 \\\\\n 94 & Previous possibly opp. pawns & 3.27 & 0.83 & 0.82 & 0.92 & 39 & 0.21 \\\\\n 95 & Previous possibly opp. knights & 3.04 & 0.78 & 0.78 & 0.73 & 60 & 0.22 \\\\\n 96 & Previous possibly opp. bishops & 3.10 & 0.74 & 0.74 & 0.78 & 73 & 0.16 \\\\\n 97 & Previous possibly opp. rooks & 2.94 & 0.77 & 0.79 & 0.45 & 64 & 0.10 \\\\\n 98 & Previous possibly opp. queens & 3.02 & 0.74 & 0.74 & 0.81 & 74 & -0.07 \\\\\n 99 & Previous possibly opp. 
king & 3.14 & 0.83 & 0.82 & 1.15 & 41 & 0.04 \\\\\n 100 & Previous last from & 2.85 & 0.75 & 0.74 & 0.85 & 72 & -0.04 \\\\\n 101 & Previous last to & 3.36 & 1.00 & 1.00 & 1.50 & 18 & 0.21 \\\\\n 102 & Previous own capture & 3.05 & 0.84 & 0.84 & 1.13 & 36 & -0.09 \\\\\n 103 & Previous opposing capture & 2.93 & 0.77 & 0.77 & 1.05 & 65 & 0.05 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\n\\section*{Checklist}\n\n\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{The synopsis abstraction makes DSMCP tractable for large games, and the experiments show that the stochastic bandit algorithm contributes significantly to the algorithm's performance.}\n \\item Did you describe the limitations of your work?\n \\answerYes{The paper is upfront that DSMCP does not explicitly seek a Nash equilibrium.}\n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerYes{The broader impact section mentions how algorithms that are capable of planning in large domains involving uncertainty (such as the real world) could have harmful applications.}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerNA{The main paper does not contain theoretical results, though the appendix may include statements about of the asymptotic behavior of the stochastic bandit algorithm and of DSMCP.}\n\t\\item Did you include complete proofs of all theoretical results?\n \\answerNA{The main paper does not contain theoretical results, though the appendix may include of proofs the theoretical statements in it.}\n\\end{enumerate}\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{While it is not released yet, the games produced in the experiments will be shared and the algorithms will be contributed to an open source suite of algorithms.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerNo{For example, the random seeds and precise makeup of the \\texttt{Top}{} dataset are not provided, but the paper does make a best-effort to provide as many details as possible which are likely to be sufficient to reproduce the results.}\n\t\\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerYes{The experiments contains 95\\% confidence intervals for the provided Elo ratings.}\n\t\\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{Section~\\ref{sec:training-procedure} mentions the four RTX 2080 Ti GPUs used.}\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{Section~\\ref{sec:reconnaissance-blind-chess} cites the creators of reconnaissance blind chess, and Section~\\ref{sec:experiments} cites the authors of the baseline programs used 
in the experiments.}\n \\item Did you mention the license of the assets?\n \\answerNo{It looks like Johns Hopkins University has not provided an explicit license for the board game or the games played on the website.}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerYes{Additional assets will be provided in the supplementary material.}\n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerNo{The paper does not explicitly discuss receiving consent for participating in the RBC contest or training on RBC games, which seems okay.}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerNo{Whether or not a person or program can be identified by what moves are chosen in a game of RBC is an interesting research question, but it is tangential and seems acceptable to omit in this paper.}\n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\\end{enumerate}\n\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nChoosing a Nash equilibrium strategy\nis rational when the opponent is able to\nidentify and exploit suboptimal behavior \\citep{bowling2001rational}.\nHowever, not all opponents are so responsive,\nand computing a Nash equilibrium is intractable for many games.\nThis paper presents deep synoptic Monte Carlo planning (DSMCP), an\nalgorithm for large imperfect information games that\nseeks a best-response strategy rather than a Nash equilibrium strategy.\n\nWhen opponents use fixed policies,\nan imperfect information game may be viewed as a\npartially observable Markov decision process (POMDP)\nwith the opponents as part of the environment.\nDSMCP treats playing against specific opponents as related offline\nreinforcement learning (RL) problems and exploits predictability.\nImportantly,\nthe structure of having opponents with imperfect information is preserved\nin order to account for their uncertainty.\n\n\\begin{figure}[t]\n\\begin{minipage}{.49\\linewidth}\n\\input{first_figure.tex}\n\\end{minipage}%\n\\begin{minipage}{0.02\\linewidth}\n\\,\n\\end{minipage}%\n\\begin{minipage}{.49\\linewidth}\n\\begin{center}\n\\begin{minipage}{0.5\\columnwidth}\n \\centering\n \\includegraphics[width=0.96\\linewidth]{images\/game-120174-replay.png}\n (a)\n\\end{minipage}%\n\\begin{minipage}{0.5\\columnwidth}\n \\centering\n \\includegraphics[width=0.96\\linewidth]{images\/game-124718-replay.png}\n (b)\n\\end{minipage}%\n\\end{center}\n\\caption{\nPlaying RBC well requires balancing risks and rewards.\n(a) On the left, \\texttt{Penumbra}{} moved the white queen to \\texttt{g8}.\nAfter sensing at \\texttt{d2}, Black could infer that the white queen\noccupied one of 25 squares.\nThat uncertainty allowed the white queen\nto survive and capture the black king on the next turn.\n(b) On the right, \\texttt{Penumbra}{} moved the black queen to \\texttt{h2}.\nIn this case, the opponent detected and captured the 
black queen.\nThe games are available online at\n\\url{https:\/\/rbc.jhuapl.edu\/games\/120174} and\n\\url{https:\/\/rbc.jhuapl.edu\/games\/124718}.\n}\n\\label{fig:queen-risks}\n\\end{minipage}\n\\end{figure}\n\nDSMCP uses sampling to break the\n``curse of dimensionality'' \\citep{pineau2006anytime} in three ways:\nsampling possible histories with a particle filter,\nsampling possible futures with\nupper confidence bound tree search (UCT) \\citep{kocsis2006bandit},\nand sampling possible world states{} within each information state{} uniformly.\nIt represents information state{}s with a generally-applicable\nstochastic abstraction technique that\nproduces a ``synopsis'' from sampled world states{}.\nThis paper assesses DSMCP on reconnaissance blind chess (RBC),\na large imperfect information chess variant.\n\n\n\\section{Background}\n\\label{sec:related-work}\n\nSignificant progress has been made in recent years in\nboth perfect and imperfect information settings.\nFor example, using deep neural networks to guide UCT\nhas enabled monumental achievements in\nabstract strategy games as well as computer games\n\\citep{silver2016mastering, silver2017mastering,\n silver2017chess, schrittwieser2019mastering,\n wu2020accelerating, tomasev2020assessing}.\nThis work employs deep learning in a similar fashion. \n\nRecent advancements in imperfect information games are also remarkable.\nSeveral programs have reached superhuman performance in Poker\n\\citep{moravcik2017deepstack, brown2018superhuman, brown2019superhuman, brown2020combining}.\nIn particular, ReBeL \\citep{brown2020combining}\ncombines RL and search by converting imperfect information games\ninto continuous state space perfect information games\nwith public belief states as nodes.\nThis approach is powerful, but it relies on public knowledge and fails to scale to\ngames with hidden actions and substantial private information, such as RBC.\n\nInformation set search \\citep{parker2006overconfidence, parker2010paranoia}\nis a limited-depth algorithm for imperfect information games\nthat operates on information states{} according to a minimax rule.\nThis algorithm was designed for and evaluated\non Kriegspiel chess, which is comparable to RBC.\n\nPartially observable Monte Carlo planning (POMCP) \\citep{silver2010pomcp}\nachieves optimal policies for POMDPs\nby tracking approximate belief states with an unweighted particle filter\nand planning with a variant of UCT on a search tree of histories.\nIn practice, POMCP can suffer from particle depletion,\nrequiring a domain-specific workaround.\nThis work combines an unweighted particle filter\nwith a novel information state{} abstraction technique\nwhich increases sample quality and supports deep learning.\n\nSmooth UCT \\citep{heinrich2015smoothuct} and\ninformation set Monte Carlo tree search (ISMCTS) \\citep{cowling2012ismcts}\nmay be viewed as multi-agent versions of POMCP.\nThese two algorithms for playing extensive-form games\nbuild search trees (for each player) of information states{}.\nThese two algorithms and DSMCP all\nperform playouts from determinized states\nthat are accurate from the current player's perspective,\neffectively granting the opponent extra information.\nStill, Smooth UCT approached a Nash equilibrium\nby incorporating a stochastic bandit algorithm into its tree search.\nDSMCP uses a similar bandit algorithm that mixes in\na learned policy during early node visits.\n\nWhile adapting perfect information algorithms has performed surprisingly well\nin some imperfect 
information settings \\citep{long2010pimc},\nthe theoretical guarantees of\nvariants of counterfactual regret minimization (CFR)\n\\citep{neller2013introduction, brown2018deep}\nare enticing.\nOnline outcome sampling (OOS) \\citep{lisy2015online}\nextends Monte Carlo counterfactual regret minimization (MCCFR) \\citep{lanctot2009regret}\nby building its search tree incrementally and\ntargeting playouts to relevant parts of the tree.\nOOS draws samples from the beginning of the game.\nMCCFR and OOS are theoretically guaranteed to converge\nto a Nash equilibrium strategy.\nSpecifically, CFR-based algorithms produce mixed strategies\nwhile DSMCP relies on incidental stochasticity.\n\nNeural fictitious self-play (NFSP) \\citep{heinrich2016deep} is an RL algorithm\nfor training two neural networks for imperfect information games.\nExperiments with NFSP employed compact observations embeddings of information states{}.\nDSMCP offers a generic technique for embedding information states{} in large games.\nDual sequential Monte Carlo (DualSMC) \\citep{wang2019dualsmc}\nestimates belief states and plans\nin a continuous domain via sequential Monte Carlo with heuristics.\n\n\n\\section{Reconnaissance blind chess}\n\\label{sec:reconnaissance-blind-chess}\n\nReconnaissance blind chess (RBC)\n\\citep{newman2016reconnaissance, markowitz2018complexity, pmlr-v123-gardner20a}\nis a chess variant that incorporates\nuncertainty about the placement of the opposing pieces\nalong with a private sensing mechanism.\nAs shown in Figure~\\ref{fig:log-information-set-graph},\nRBC players are often faced with thousands of possible game states,\nand reducing uncertainty increases the odds of winning.\n\n\n\\paragraph{Game rules}\n\nPieces move in the same way in RBC as in chess.\nPlayers cannot directly observe the movement of the opposing pieces.\nHowever, at the beginning of each turn,\nplayers may view the ground truth of any $3\\stimes3$ patch of the board.\nThe information gained from the sensing action remains private to that player.\nPlayers are also informed of the location of all captures,\nbut not the identity of capturing pieces.\nWhen a requested move is illegal,\nthe move is substituted with the closest legal move\nand the player is notified of the substitution.\nFor example, in Figure~\\ref{fig:queen-risks}~(a),\nif Black attempted to move the rook from \\texttt{h8} to \\texttt{f8},\nthe rook would capture the queen on \\texttt{g8} and stop there instead.\nPlayers are always able to track the placement of their own pieces.\nCapturing the opposing king wins the game, and\nplayers are not notified about check.\nPassing and moving into check are legal.\n\n\n\\paragraph{Official competition}\n\nThis paper introduces the program \\texttt{Penumbra}{},\nthe winner of the official 2020 RBC rating competition.\nIn total, 34 programs competed to achieve the highest rating\nby playing public games.\nRatings were computed with \\textit{BayesElo} \\citep{coulom2008whr},\nand playing at least 100 games was required to be eligible to win.\nFigure~\\ref{fig:queen-risks} shows ground truth positions from the tournament in which \\texttt{Penumbra}{} voluntarily put its queen in danger.\nPlayers were paired randomly,\nbut the opponent's identity was provided at the start of each game\nwhich allowed catering strategies for specific opponents.\nHowever, opponents were free to change their strategies at any point,\nso attempting to exploit others could backfire.\nNonetheless, \\texttt{Penumbra}{} sought to model and counter 
predictable opponents\nrather than focusing on finding a Nash equilibrium.\n\n\n\\paragraph{Other RBC programs}\n\nRBC programs have employed a variety of\nalgorithms \\citep{pmlr-v123-gardner20a} including\nQ-learning \\citep{mnih2013playing},\ncounterfactual regret minimization (CFR) \\citep{zinkevich2008regret},\nonline outcome sampling (OOS) \\citep{lisy2015online},\nand the Stockfish chess engine \\citep{romstad2020stockfish}.\nAnother strong RBC program\n\\citep{highley2020, blowitski2021checkpoint}\nmaintains a probability distribution for each piece.\nMost RBC programs select sense actions and move actions in separate ways\nwhile DSMCP unifies all action selection.\n\\citet{savelyev2020mastering} also applied UCT to RBC and modeled the\nroot belief state with a distribution over 10,000 tracked positions.\nInput to a neural network consisted of the most-likely 100 positions,\nand storing a single training example required\n3.5MB on average,\nlarge enough to hinder training.\nThis work circumvented the same issue by\nrepresenting training examples with compact synopses which are\nless than 1kB.\n\n\n\\section{Terminology}\n\\label{sec:extensive-form-games}\n\nConsider the two-player extensive-form game with\nagents $\\mathcal{P} = \\{$self, opponent$\\}$, actions $\\mathcal{A}$,\n``ground truth'' world states{} $\\mathcal{X}{}$,\nand initial state $x_0 \\in \\mathcal{X}{}$.\nEach time an action is taken,\neach agent $p \\in \\mathcal{P}$ is given an observation\n$\\mathbf{o}_p \\in \\mathcal{O}$ that matches ($\\sim$) the possible world states{} from $p$'s perspective.\nFor simplicity, assume the game has deterministic actions\nsuch that each $a \\in \\mathcal{A}$ is a function $a : X{} \\rightarrow \\mathcal{X}{}$\ndefined on a subset of world states{} $X{} \\subset \\mathcal{X}{}$.\nDefine $\\mathcal{A}_x{}$ as the set of actions available from $x{} \\in \\mathcal{X}{}$.\n\nAn information state{} (infostate{}) $s \\in \\mathcal{S}$ for agent $p$\nconsists of all observations $p$ has received so far.\\footnote{\nAn infostate{} is equivalent to an information set,\nwhich is the set of all possible action histories from $p$'s perspective\n\\citep{osborne1994course}.\n}\nLet $\\mathcal{X}{}_s \\subset \\mathcal{X}{}$ be the set of all\nworld states{} that are possible from $p$'s perspective from $s$.\nIn general, $\\mathcal{X}_s$ contains less information than $s$\nsince some (sensing) actions may not affect the world state.\nDefine a collection of limited-size world state{} sets\n$\\mathcal{L} = \\{L \\subset \\mathcal{X}_s : s \\in \\mathcal{S}, |L| \\le \\ell\\}$,\ngiven a constant $\\ell$.\n\n\nLet\n$\\rho : \\mathcal{X}{} \\rightarrow \\mathcal{P}$ indicate the agent to act in each world state{}.\nAssume\nthat $\\mathcal{A}_x{} = \\mathcal{A}_y$ and $\\rho(x{}) = \\rho(y)$\nfor all $x{}, y \\in \\mathcal{X}{}_s$ and $s \\in \\mathcal{S}$.\nThen extend the definitions of\nactions available $\\mathcal{A}_*$ and agent to act $\\rho$\nover sets of world states{} and over infostates{} in the natural way.\nA policy $\\pi(a | s)$\nis a distribution over actions given an infostate{}.\nA belief state $B(h)$ is a distribution over action histories.\nCreating a belief state from an infostate{}\nrequires assumptions about the opponent's action policy $\\tau(a|s)$.\nLet $\\mathcal{R}_p : \\mathcal{X}{} \\rightarrow \\mathbb{R}$ map terminal states to the reward for player $p$.\nThen $(\\mathcal{S}, \\mathcal{A}, \\mathcal{R}_\\text{self}, \\tau, s_0)$\nis a POMDP, where\nthe opponent's policy 
$\\tau$ provides environment state transitions\nand $s_0$ is the initial infostate{}.\nIn the rest of this paper, the word ``state'' refers to a world state{}\nunless otherwise specified.\n\n\\section{Deep synoptic Monte Carlo planning}\n\\label{sec:dsmcp-algorithm}\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\begin{center}\n\\input{overview_figure.tex}\n\\end{center}\n\\caption{%\nDSMCP approximates infostates{}\nwith size-limited sets of possible states (circles).\nIt tracks all possible states $X_t$ for each turn from its own perspective\nand constructs belief states $\\hat{B}_t$ with\napproximate infostates{} from the opponent's perspective.\nAt the root of each playout{},\nthe initial approximate infostate{} for the opponent\nis sampled from $\\hat{B}_t$, and the initial approximate infostate{} for\nitself is a random subset of $X_t$.\n}\n\\label{fig:algorithm-figure}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\input{algorithms_one.tex}\n\\end{minipage}\n\\end{figure}\n\nEffective planning algorithms for imperfect information games\nmust model agents' choice of actions based on (belief states derived from)\ninfostates{}, not on world states{} themselves.\nDeep synoptic Monte Carlo planning (DSMCP) approximates infostates{}\nwith size-limited sets of possible world states{} in $\\mathcal{L}$.\nIt uses those approximations to construct a belief state and as UCT nodes\n\\citep{kocsis2006bandit}.\nFigure~\\ref{fig:algorithm-figure} provides a high-level\nvisualization of the algorithm.\n\n\\begin{figure}\n\\input{algorithms.tex}\n\\end{figure}\n\nA bandit algorithm chooses an action during each node visit,\nas described in Algorithm~\\ref{alg:stochastic-bandit}.\nThis bandit algorithm is similar to Smooth UCB \\citep{heinrich2015smoothuct}\nin that they both introduce stochasticity by mixing in a secondary policy.\nSmooth UCB empirically approached a Nash equilibrium\nutilizing the average policy according to action visits at each node.\nDSMCP mixes in a neural network's policy ($\\pi$) instead.\nThe constant $c$ controls the level of exploration,\nand $m$ controls how the policy $\\pi$ is mixed into the bandit algorithm.\nFor example, taking $m = 0$ always selects actions directly with $\\pi$\nwithout considering visit counts, and taking $m = \\infty$ never mixes in $\\pi$.\nAs in \\cite{silver2016mastering},\n$\\pi$ provides per-action exploration values which guide the tree search.\n\nApproximate belief states are constructed as subsets\n$\\hat{B} \\subset \\mathcal{L}$, where each $L \\in \\hat{B}$\nis a set of possible world-states from the opponent's perspective.\nThis ``second order'' representation of\nbelief states allows DSMCP to account for the opponent's uncertainty.\nInfostates{} sampled with rejection\n(Algorithm~\\ref{alg:prepare-sample})\nare used as the ``particles'' in a particle filter which models\nsuccessive belief states.\nSampling is guided by a neural network policy ($\\hat{\\tau}$)\nbased on the identity of the opponent.\nTo counter particle deprivation, if $k$ consecutive candidate samples\nare rejected as incompatible with the possible world states,\nthen a singleton sample consisting of a randomly-chosen possible\nstate is selected instead.\n\nThe tree search, described in Algorithm~\\ref{alg:choose-action},\ntracks an approximate infostate{} for each player while simulating playouts{}.\nPlayouts{} are also guided by\npolicy ($\\pi$ and $\\hat{\\tau}$) and value ($\\nu$) estimations from a neural network.\nA synopsis function 
$\\sigma$\ncreates a fixed-size summary of each node as input for the network.\nThe constant $b$ is the batch size for inference, $d$ is the search depth,\n$\\ell$ is the size of approximate infostate{}s,\n$n_{\\text{vl}}$ is the virtual loss weight,\nand $z$ is a threshold for increasing search depth.\n\nAlgorithm~\\ref{alg:play-game} describes how to play an entire game,\ntracking all possible world states{}.\nApproximate belief states ($\\hat{B}_t$) are constructed for each past turn\nby tracking $n_{\\text{particles}}$ elements of $\\mathcal{L}$\n(from the opponent's point of view) with an unweighted particle filter.\nEach time the agent receives a new observation,\nall of the (past) particles that are inconsistent with the observation\nare filtered out and replenished, starting with the oldest belief states.\n\n\n\\subsection{Synopsis}\n\n\\input{example_game.tex}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\input{bitboards_wide.tex}\n\\end{center}\n\\caption{\nThis set of synopsis bitboards was used as input to the neural network\nbefore White's sense on turn 5 of the game in Figure~\\ref{fig:sample-game}.\nThe synopsis contains 104 bitboards.\nEach bitboard encodes 64 binary features\nof the possible state set that the synopsis summarizes.\nFor example, bitboard \\#26 contains the possible locations of opposing pawns, and\nbitboard \\#27 contains the possible locations of opposing knights.\nAn attentive reader may notice that the black pawn on \\texttt{h4} is missing from bitboard \\#26,\nwhich is due to subsampling to $\\ell = 128$ states before computing the bitboards.\nIn this case, the true state was missing from the set of states used to create the synopsis.\nThe features in each synopsis are only approximations of the\ninfostates{} that they represent.\nThe first 10 bitboards are constants, which provide information that\nis difficult for convolutions to construct otherwise\n\\citep{liu2018coordconv}.\n}\n\\label{fig:sample-situation}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\input{architecture.tex}\n\\end{center}\n\\caption{\n\\texttt{Penumbra}{}'s network contains a shared tower with 10 residual blocks\nand 14 headsets.\nEach headset contains 5 heads for a total of 70 output heads.\nThe residual blocks are shown with a double border,\nand they each contain two $3 \\stimes 3$ convolutional layers and batch normalization.\nAll of the convolutional layers in the headsets are $1 \\stimes 1$ convolutions\nwith the exception of the one residual block for each policy head.\nEach headset was trained on a separate subset of the data, as described in\nTable~\\ref{tab:headset-descriptions}.\nThe policy head provides logits for both sense and move actions.\n}\n\\label{fig:architecture-diagram}\n\\end{figure*}\n\nOne of the contributions of this paper is the methodology\nused to approximate and encode infostates{}.\nGames that consist of a fixed number of turns, such as poker, admit a\nnaturally-compact infostate{} representation based on observations\n\\citep{heinrich2015smoothuct, heinrich2016deep}.\nHowever, perfect representations are not always practical.\nGame abstractions are often used to reduce\ncomputation and memory requirements.\nFor example, imperfect recall is an effective abstraction when past actions are\nunnecessary for understanding the present situation\n\\citep{waugh2009imperfectrecall, lanctot2012noregret}.\n\nDSMCP employs a stochastic abstraction\nwhich represents infostates{} with sets of world states{}\nand then subsamples to a manageable 
cardinality $(\\ell)$.\nFinally, a permutation-invariant synopsis function $\\sigma$ produces\nfixed-size summaries of the approximate infostates{}\nwhich are used for inference.\nAn alternative is to run inference on ``determinized'' world states{} individually\nand then somehow aggregate the results.\nHowever, such aggregation can easily lead to strategy fusion \\citep{frank1998finding}.\nOther alternatives include evaluating states with a recurrent network\n\\citep{rumelhart1986learning}\none-at-a-time or using a permutation-invariant architecture\n\\citep{zaheer2017deep, wagstaff2019limitations}.\n\nGiven functions $g_i : \\mathcal{X} \\rightarrow \\{0, 1\\}$\nfor $i = 0, \\dots, F$\nthat map states to binary features,\ndefine the $i^\\text{th}$ component of a synopsis function\n$\\sigma : \\mathcal{L} \\rightarrow \\{0, 1\\}^{F}$\nas\n\\begin{equation}\n \\sigma_i(X) =\n g_i(x_0) *_i g_i(x_1) \\ast_i \\dots *_i g_i(x_{\\ell})\n\\end{equation}\nwhere $X = \\{x_0, x_1, \\dots, x_\\ell\\}$\nand $*_i$ is either the logical \\texttt{AND} ($\\land$)\nor the logical \\texttt{OR} ($\\lor$) operation.\nFor example, if $g_i$ encodes whether \nan opposing knight can move to the \\texttt{d7} square of a chess board\nand $*_i = \\land$, then\n$\\sigma_i$ indicates that a knight can definitely move to \\texttt{d7}.\nFigure~\\ref{fig:sample-game} shows an example game, and\nFigure~\\ref{fig:sample-situation} shows an example\noutput of \\texttt{Penumbra}{}'s synopsis function,\nwhich consists of 104 bitboard feature planes each with 64 binary features.\nThe appendix describes each feature plane.\n\n\n\\subsection{Network architecture}\n\n\\texttt{Penumbra}{} uses a residual neural network\n\\citep{he2016resnet}\nas shown in Figure~\\ref{fig:architecture-diagram}.\nThe network contains 14 headsets,\ndesigned to model specific opponents and regularize each other\nas they are trained on different slices of data \\citep{zhang2020balance}.\nEach headset contains 5 heads:\na policy head, a value head, two heads for predicting\nwinning and losing within the next 5 actions,\nand a head for guessing the number of pieces of each type in the\nground truth world state{}.\nThe \\texttt{Top}{} policy head\nand the \\texttt{All}{} value head\nare used for planning as $\\pi$ and $\\nu$, respectively.\nThe other heads\n(including the \\texttt{SoonWin}, \\texttt{SoonLose}, and \\texttt{PieceCount} heads)\nprovide auxiliary tasks for further\nregularization \\citep{wu2020accelerating, fifty2020measuring}.\nWhile playing against an opponent that is ``recognized''\n(when a headset was trained on data from only that opponent),\nthe policy head ($\\hat{\\tau}$) of the corresponding headset is used\nfor the opponent's moves while progressing the particle filter\n(Algorithm~\\ref{alg:prepare-sample}) and\nwhile constructing the UCT tree (Algorithm~\\ref{alg:choose-action}).\nWhen the opponent is unrecognized, the \\texttt{Top}{} policy head is used by default.\n\n\\subsection{Training procedure}\n\\label{sec:training-procedure}\n\nThe network was trained on historical%\n\\footnote{%\nThe games were downloaded\nfrom \\url{rbmc.jhuapl.edu} in June, 2019 and \\url{rbc.jhuapl.edu} in August, 2020.\nAdditionally, 5,000 games were played locally by \\texttt{StockyInference}{}.\n} game data\nas described by Table~\\ref{tab:headset-descriptions}.\nThe reported accuracies are averages over 5 training runs.\nThe \\texttt{All}{} headset was trained on all games,\nthe \\texttt{Top}{} headset was trained on games from the highest-rated 
players,\nthe \\texttt{Human}{} headset was trained on all games played by humans,\nand each of the other 11 headsets were trained to mimic specific opponents.\n\n10\\% of the games were used as validation data based on game filename hashes.\nTraining examples were extracted from games multiple times since\nreducing possible state sets to $\\ell$ states is non-deterministic.\nA single step of vanilla stochastic gradient descent\nwas applied to one headset at a time,\nalternating between headsets according to their training weights.\nSee the appendix for hyperparameter settings and accuracy cross tables.\nTraining and evaluation were run on four RTX 2080 Ti GPUs.\n\n\\input{headset_summary.tex}\n\n\\subsection{Implementation details}\n\\label{sec:implementation-details}\n\n\\texttt{Penumbra}{} plays RBC with DSMCP along with\nRBC-specific extensions.\nFirst, sense actions that are dominated by other sense actions are pruned from consideration.\nSecond, \\texttt{Penumbra}{} can detect some forced wins in\nthe sense phase, the move phase, and during the opponent's turn.\nThis static analysis is applied at the root and to playouts{};\nplayouts{} are terminated as soon as a player could win,\navoiding unnecessary neural network inference.\nThe static analysis was also used to clean training games in which\nthe losing player had sufficient information to find a forced win.\n\nPiece placements are represented with bitboards \\citep{browne2014bitboard},\nand the tree of approximate infostates{}\nis implemented with a hash table.\nZobrist hashing \\citep{zobrist1990hashing}\nmaintains hashes of piece placements incrementally.\nHash table collisions are resolved by overwriting older entries.\nThe tree was not implemented until after the competition, so\nfixed-depth playouts{} were used instead ($m = 0$).\nInference is done in batches of $256$\nduring both training and online planning.\nThe time used per action is approximately proportional to the time remaining.\nThe program processes approximately 4,000 nodes per second, and it\nplays randomly when the number of possible states exceeds 9 million.\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nThis section presents the results of playing games between \\texttt{Penumbra}{} and\nseveral publicly available RBC baselines\n\\citep{pmlr-v123-gardner20a, bernardoni2020baselines}.\nEach variant of \\texttt{Penumbra}{} in Table~\\ref{tab:baseline-bots} played 1000 games against each baseline, and each variant in \nTable~\\ref{tab:caution-and-paranoia}\nand Table~\\ref{tab:exploration} played 250 games against each baseline.\nGames with errors were ignored and replayed.\nThe Elo ratings and 95\\% confidence intervals\nwere computed with \\textit{BayesElo} \\citep{coulom2008whr}\nand are all compatible.\nThe scale was anchored with \\texttt{StockyInference}{} at 1500\nbased on its rating during the competition.\n\nTable~\\ref{tab:baseline-bots} gives ratings of the baselines and\nfive versions of \\texttt{Penumbra}{}.\n\\texttt{PenumbraCache}\nrelied solely on the network policy for action selection in playouts\n($m \\,{=}\\, 0$),\n\\texttt{PenumbraTree} built a UCT search tree\n($m \\,{=}\\, \\infty$), and\n\\texttt{PenumbraMixture}\nmixed in the network policy during early node visits\n($m \\,{=}\\, 1$).\nThe mixed strategy performed the best.\n\\texttt{PenumbraNetwork} selected actions\nbased on the network policy without performing any playouts.\n\\texttt{PenumbraSimple}\nis the same as \\texttt{PenumbraMixture}\nwith the static analysis 
described\nin Section~\\ref{sec:implementation-details} disabled.\n\\texttt{PenumbraNetwork} and \\texttt{PenumbraSimple} serve as ablation studies;\nremoving the search algorithm is detrimental while the effect of removing\nthe static analysis is not statistically significant.\nUnexpectedly, \\texttt{Penumbra}{} played the strongest against \\texttt{StockyInference}{}\nwhen that program was unrecognized.\nSo, in this case, modeling the opponent with a stronger policy\noutperformed modeling it more accurately.\n\nTwo algorithmic modifications that give the opponent\nan artificial advantage during planning were investigated.\nTable~\\ref{tab:caution-and-paranoia}\nreports the results of a grid search over\n``cautious'' and ``paranoid'' variants of DSMCP.\nThe caution parameter $\\kappa$ specifies the percentage\nof playouts{} that use $\\ell = 4$\nfor the opponent instead of the higher default limit.\nSince each approximate infostate{}\nis guaranteed to contain the correct ground truth in playouts{},\nreducing $\\ell$ for the opponent gives the opponent higher-quality information,\nallowing the opponent to counter risky play more easily in the constructed UCT tree.\n\nThe paranoia parameter augments the exploration values in\nAlgorithm~\\ref{alg:stochastic-bandit} to incorporate\nthe minimum value seen during the current playout{}.\nWith paranoia $\\phi$, actions are selected according to\n\\begin{equation}\n \\argmax_a \\left((1 - \\phi)\\frac{ \\vec{q}_a }{ \\vec{n}_a }\n + \\phi \\vec{m}_a\n + c \\pi_a \\sqrt{\\frac{\\ln{n}}{\\vec{n}_a}} \\right)\n\\end{equation}\nwhere $\\vec{m}$ contains the minimum value observed for each action.\nThis is akin to the notion of paranoia studied by\n\\citet{parker2006overconfidence, parker2010paranoia}.\n\n\\input{result_tables.tex}\n\nTable~\\ref{tab:exploration} shows the results of a grid search over\nexploration constants and two bandit algorithms.\nUCB1 \\citep{kuleshov2014algorithms}\n(with policy priors), which is used on the last line of\nAlgorithm~\\ref{alg:stochastic-bandit},\nis compared with ``a variant of PUCT'' (aVoP)\n\\citep{silver2016mastering, yu2019elf, lee2019minigo},\nanother popular bandit algorithm.\nThis experiment used $\\kappa = 20\\%$ and $\\phi = 20\\%$.\nFigure~\\ref{fig:uncertainty-graphs} show that\n\\texttt{Penumbra}{}'s value head accounts for\nthe uncertainty of the underlying infostate{}.\n\n\\begin{figure}[h]\n \\centering\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/uncertainty-train-plot-6.png}\n (a)\n \\end{minipage}%\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/uncertainty-test-plot-6.png}\n (b)\n \\end{minipage}\n\\caption{\nThe mean historical win percentage and the mean network value assigned to\n(a) train and (b) validation synopses tend to decrease as\nthe number of world states{} given to $\\sigma$ increases.\n}\n\\label{fig:uncertainty-graphs}\n\\end{figure}\n\n\n\\section{Per-bit saliency}\n\\label{sec:saliency}\n\nSaliency methods may be able to identify which of the\nsynopsis feature planes are most important and which are least important.\nGradients only provide local information, and some saliency methods\nfail basic sanity checks \\citep{adebayo2018sanity}.\nHigher quality saliency information may be surfaced by\nintegrating gradients over gradually-varied inputs\n\\citep{sundararajan2017axiomatic, kapishnikov2019xrai}\nand by smoothing gradients locally\n\\citep{smilkov2017smoothgrad}.\nThose 
saliency methods are not directly applicable to\ndiscrete inputs such as the synopses used in this work.\nSo, this paper introduces a saliency method that aggregates\ngradient information across two separate dimensions:\ntraining examples and iterations.\nPer-batch saliency (PBS) averages the absolute value of gradients\nover random batches of test examples throughout training.\nSimilarly, per-bit saliency (PbS) averages\nthe absolute value of gradients over bits (with specific values)\nwithin batches of test examples throughout training.\nGradients were taken both with respect to the loss\nand with respect to the action policy.\n\n\\begin{figure}[t]\n \\centering\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/dark-squares-loss-per-batch-saliency-17.png}\n (a)\n \\includegraphics[width=1\\linewidth]{images\/dark-squares-action-per-bit-saliency-17.png}\n (b)\n \\end{minipage}%\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/top-loss-per-batch-saliency-17.png}\n (c)\n \\includegraphics[width=1\\linewidth]{images\/top-action-per-bit-saliency-17.png}\n (d)\n \\end{minipage}\n\\caption{\n(a) The loss per-batch saliency (PBS) and\n(b) the action per-bit saliency (PbS)\nare taken on test examples during training.\nThese graphs show the saliency of\nfeature plane \\#8, dark squares, for each headset in one training run.\nThe large gradients with respect to the loss\nsuggest that the \\texttt{Genetic}{} headset has overfit.\n(c) The loss PBS and\n(d) the action PbS\nprovide insight about which synopsis features are most useful.\nThe top-five most-salient feature planes\nand the least-salient feature plane\nfor the \\texttt{Top}{} headset from one training run are shown.\n}\n\\label{fig:saliency-graphs}\n\\end{figure}\n\nFigure~\\ref{fig:saliency-graphs}\nshows saliency information for input synopsis features used by \\texttt{Penumbra}{}.\nIn order to validate that these saliency statistics are meaningful,\nthe model was retrained 104 times,\nonce with each feature removed \\citep{hooker2018benchmark}.\nHigher saliency is slightly correlated\nwith decreased performance when a feature is removed.\nThe correlation coefficient to the average change in accuracy is\n$-0.208$ for loss-PBS, and $-0.206$ for action-PbS.\nExplanations for the low correlation include\nnoise in the training process and\nthe presence of closely-related features.\nUltimately, the contribution of a feature during training is distinct from\nhow well the model can do without that feature.\nSince some features are near-duplicates of others,\nremoving one may simply increase dependence on another.\nStill, features with high saliency\n--- such as the current stage (sense or move) and the location of the last capture ---\nare likely to be the most important,\nand features with low saliency may be considered for removal.\nThe appendix includes saliency statistics for each feature plane.\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\\paragraph{Broader impact}\n\nDSMCP is more broadly applicable\nthan some prior algorithms for imperfect information games,\nwhich are intractable in settings with large infostates{}\nand small amounts of shared knowledge \\citep{brown2020combining}.\nRBC and the related game Kriegspiel were motivated by\nuncertainty in warfare \\citep{newman2016reconnaissance}.\nWhile playing board games is not dangerous in itself,\nalgorithms that account for uncertainty may become\neffective and consequential in 
the real world.\nIn particular, since it focuses on exploiting weaknesses of other agents,\nDSMCP could be applied in harmful ways.\n\n\\paragraph{Future work}\n\nPlanning in imperfect information games\nis an active area of research \\citep{russell2020artificial},\nand RBC is a promising testing ground for such research.\n\\texttt{Penumbra}{} would likely benefit from further hyperparameter tuning\nand potentially alternative corralled bandit algorithms \\citep{arora2020corralling}.\nModeling an opponent poorly could be catastrophic;\nalgorithmic adjustments may lead to more-robust best-response strategies\n\\citep{ponsen2011acm}.\nHow much is lost by collapsing infostates{} with synopses is unclear\nand deserves further investigation.\nFinally, the ``bitter lesson'' of machine learning \\citep{sutton2019bitter}\nsuggests that a learned synopsis function may perform better.\n\n\n\\section*{Acknowledgements}\n\nThanks to the Johns Hopkins University Applied Physics Laboratory\nfor inventing such an intriguing game and for hosting RBC competitions.\nThanks to Ryan Gardner for valuable correspondence.\nThanks to Rosanne Liu, Joel Veness, Marc Lanctot, Zhe Zhao, and Zach Nussbaum\nfor providing feedback on early drafts.\nThanks to William Bernardoni for open sourcing high-quality baseline bots.\nThanks to Solidmind for the song ``Penumbra'',\nwhich is an excellent soundtrack for programming.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAge-related facial technologies generally address the two areas of age estimation \\cite{Chen_FG2011, Luu_FG2011, Luu_BTAS2009, Luu_IJCB2011, Duong_ICASSP2011, Luu_ROBUST2008} and age progression \\cite{nhan2015beyond, patterson2007comparison, Zhang_2017_CVPR, Patterson2013, wang2018face_aging, Shu_2015_ICCV}. The face age-estimation problem is defined as building computer software that has the ability to recognize the ages of individuals in a given photograph. Comparatively, the face age-progression problem necessitates the more complex capability to predict the future facial likeness of people appearing in images \\cite{Luu_CAI2011}. Aside from the innate curiosity of individuals, research of face aging has its origins in cases of missing persons and wanted fugitives, in either case law enforcement desires plausible age-progressed images to facilitate searches. Accurate face aging also provides benefits for numerous practical applications such as age-invariant face recognition \\cite{Xu_IJCB2011, Xu_TIP2015, Le_JPR2015}. There have been numerous anthropological, forensic, computer-aided, and computer-automated approaches to facial age-progression. However, the results from previous methods for synthesizing aged faces that represent accurate physical processes involved in human aging are still far from perfect. This is especially so in age-progressing videos of faces, due to the usual challenges for face processing involving pose, illumination, and environment variation as well as differences between video frames.\n\n\nThere have been two key research directions in age progression for both conventional computer-vision approaches and recent deep-learning methods -- \\textit{one-shot synthesis} and \\textit{multiple-shot synthesis}. Both approaches have used facial image databases with longitudinal sample photos of individuals, where the techniques attempt to discover aging patterns demonstrated over individuals or the population represented. 
In one-shot synthesis approaches, a new face at the target age is directly synthesized via inferring the relationships between training images and their corresponding age labels then applying them to generate the aged likeness. These prototyping methods \\cite{burt1995perception, kemelmacher2014illumination,rowland1995manipulating} often classify training images in facial image databases into age groups according to labels. Then the average faces, or mean faces, are computed to represent the key presentation or archetype of their groups.\nThe variation between the input age and the target age archetypes is complimented to the input image to synthesize the age-progressed faces at the requested age.\nIn a similar way, Generative Adversarial Networks (GANs) \\cite{Zhang_2017_CVPR, wang2018face_aging} methods present the relationship between semantic representation of input faces and age labels by constructing a deep neural network generator. It is then combined with the target age labels to synthesize output results.\n\nMeanwhile, in multiple-shot synthesis, the longitudinal aging process is decomposed into multiple steps of aging effects \\cite{Duong_2017_ICCV,Duong_2016_CVPR, Shu_2015_ICCV, wang2016recurrent,yang2016face}. These methods build on the facial aging transformation between two consecutive age groups. Finally, the progressed faces from one age group to the next are synthesized step-by-step until they reach the target age. These methods can model the long-term sequence of face aging using this strategy. However, these methods still have drawbacks due to the limitations of long-term aging not being well represented nor balanced in face databases. \n\nExisting age-progression methods all similarly suffer from problems in both directions. Firstly, they only work on single input images. Supposing there is a need to synthesize aging faces presented in a captured video, these methods usually have to split the input video into separate frames and synthesize every face in each frame \\textit{independently} which may often present \\textit{inconsistencies} between synthesized faces. Since face images for each frame are synthesized separately, the aging patterns of generated faces of the same subject are also likely not coherent. Furthermore, most aging methods are unable to produce \\textit{high-resolution} images of age progression, important for features such as fine lines that develop fairly early in the aging process. This may be especially true in the latent based methods \\cite{kemelmacher2014illumination, Duong_2017_ICCV,Duong_2016_CVPR, Shu_2015_ICCV, wang2016recurrent,yang2016face}.\n\n\\paragraph{Contributions of this work:} \nThis paper presents a deep Reinforcement Learning (RL) approach to Video Age Progression to guarantee the consistency of aging patterns in synthesized faces captured in videos. In this approach, the age-transformation embedding is modeled as the optimal selection using Convolutional Neural Network (CNN) features under a RL framework. Rather than applying the image-based age progression to each video frame independently as in previous methods, the proposed approach has the capability of exploiting the temporal relationship between two consecutive frames of the video. This property facilitates maintaining consistency of aging information embedded into each frame.\nIn the proposed structure, not only can a \\textit{smoother synthesis} be produced across frames in videos, but also the \\textit{visual fidelity} of aging data, i.e. 
all images of a subject in different or the same age, is preserved for better age transformations. To the best of our knowledge, our framework is one of the first face aging approaches in videos.\nFinally, this work contributes a new large-scale face-aging database\\footnote{\\url{https:\/\/face-aging.github.io\/RL-VAP\/}} to support future studies related to automated face age-progression and age estimation in both images and videos.\n\\section{Related work}\n\n\n\\begin{table*}[!t] \n\t\\small \n\t\\centering\n\t\\caption{The properties of our collected AGFW-v2 in comparison with other aging databases. For AGFW-v2 video set, the images of the subjects in old age are also collected for reference in terms of subject's appearance changing.}\n\t\\label{tab:AgingDatabaseProperties}\n\t\\begin{tabular}{l c c c c c c }\n\t\t\\Xhline{2\\arrayrulewidth}\n\t\t\\textbf{Database} & \\textbf{\\# Images} & \\textbf{\\# Subjects} & \\textbf{Label type} & \\textbf{Image type} & \\textbf{Subject type} & \\textbf{Type}\\\\ \n\t\t\\Xhline{2\\arrayrulewidth}\n\t\tMORPH - Album 1 \\cite{ricanek2006morph} & 1,690 & 628 & Years old & Mugshot & Non-famous & Image DB\\\\\t\t\t\t\n\t\tMORPH - Album 2 \\cite{ricanek2006morph} & 55,134 & 13,000 & Years old & Mugshot & Non-famous & Image DB\\\\\n\t\t\\hline\n\t\tFG-NET \\cite{fgNetData} & 1,002 & 82 & Years old & In-the-wild & Non-famous & Image DB\\\\\n\t\tAdienceFaces \\cite{levi2015age} & 26,580 & 2,984 & Age groups & In-the-wild & Non-famous & Image DB\\\\\n\t\tCACD \\cite{chen14cross} & 163,446 & 2,000 & Years old & In-the-wild & Celebrities & Image DB\\\\\n\t\tIMDB-WIKI \\cite{Rothe-IJCV-2016} & 52,3051 & 20,284 & Years old & In-the-wild & Celebrities & Image DB\\\\\n \n AgeDB \\cite{AgeDB} & 16,488 & 568 & Years old & In-the-wild & Celebrities & Image DB\\\\\n AGFW \\cite{Duong_2016_CVPR} & 18,685 & 14,185 & Age groups & In-the-wild\/Mugshot & Non-famous & Image DB\\\\\n \\hline\n \\textbf{AGFW-v2 (Image)} & \\textbf{36,299} & \\textbf{27,688} & \\textbf{Age groups} & \\textbf{In-the-wild\/Mugshot} & \\textbf{Non-famous} & \\textbf{Image DB}\\\\\n \\textbf{AGFW-v2 (Video)} & \\textbf{20,000} & \\textbf{100} & \\textbf{Years old} & \\textbf{Interview\/Movie-style} & \\textbf{Celebrities} & \\textbf{Video DB}\\\\\n\t\t\\hline\n\t\t\n\t\\end{tabular}\n\t\\vspace{-4mm}\n\\end{table*}\n\nThis section provides an overview of recent approaches for age progression; \\textit{these methods primarily use still images}. The approaches generally fall into one of four groups, i.e. modeling, reconstruction, prototyping, and deep learning-based approaches.\n\n\\textit{Modeling-based} approaches aim at modeling both shape and texture of facial images using parameterization method, then learning to change these parameters via an aging function. \nActive Appearance Models (AAMs) have been used with four aging functions in \\cite{lanitis2002toward,patterson2006automatic} to model linearly both the general and the specific aging processes. Familial facial cues were combined with AAM-based techniques in \\cite{luu2009Automatic, patterson2007comparison}. \\cite{Patterson2013} incorporated an AAM reconstruction method to the synthesis process for a higher photographic fidelity of aging. An AGing pattErn Subspace (AGES) \\cite{geng2007automatic} was proposed to construct a subspace for aging patterns as a chronological sequence of face images. \nIn \\cite{tsai2014human}, AGES was enhanced with guidance faces consisting the subject's characteristics for more stable results. 
\nThree-layer And-Or Graph (AOG) \\cite{suo2010compositional, suo2012concatenational} was used to model a face as a combination of smaller parts, i.e. eyes, nose, mouth, etc. \nThen a Markov chain was employed to learn the aging process for each part. \n\nIn \\textit{reconstruction-based} approaches, an aging basis is unified in each group to model aging faces. Person-specific and age-specific factors were independently represented by sparse-representation hidden factor analysis (HFA) \\cite{yang2016face}. \nAging dictionaries (CDL) \\cite{Shu_2015_ICCV} were proposed to model personalized aging patterns by attempting to preserve distinct facial features of an individual through the aging process.\n\n\n\\textit{Prototyping-based} approaches employed proto-typical facial images in a method to synthesize faces. The average face of each age group is used as the representative image for that group, and these are named the ``age prototypes'' \\cite{rowland1995manipulating}. Then, by computing the differences between the prototypes of two age groups, an input face can be progressed to the target age through image-based manipulation \\cite{burt1995perception}. In \\cite{kemelmacher2014illumination}, high quality average prototypes constructed from a large-scale dataset were employed in conjunction with the subspace alignment and illumination normalization.\n\nRecently, \\textit{Deep learning-based approaches} have yielded promising results in facial age progression. \nTemporal and Spatial Restricted Boltzmann Machines (TRBM) were introduced in \\cite{Duong_2016_CVPR} to represent the non-linear aging process, with geometry constraints, and to model a sequence of reference faces as well as wrinkles of adult faces. A Recurrent Neural Network (RNN) with two-layer Gated Recurrent Unit (GRU) was employed to approximate aging sequences \\cite{wang2016recurrent}. \nAlso, the structure of Conditional Adversarial Autoencoder (CAAE) was applied to synthesize aged images in \\cite{antipov2017face}. Identity-Preserved Conditional Generative Adversarial Networks (IPCGANs) \\cite{wang2018face_aging} brought the structure of Conditional GANs with perceptual loss into place for synthesis process. A novel generative probabilistic model, called Temporal Non-Volume Preserving (TNVP) transformation \\cite{Duong_2017_ICCV} was proposed to model a long-term facial aging as a sequence of short-term stages. \n\n\\begin{figure*}[t]\n\t\\centering \\includegraphics[width=1.5\\columnwidth]{Aging_RL_framework.jpg}\n\t\\caption{The structure of the face aging framework in video. \\textbf{Best viewed in color and 2$\\times$ zoom in.}}\t\n\t\\label{fig:RL_Framework}\n\\end{figure*}\n\n\n\\section{Data Collection} \\label{sec:dbcollec}\nThe quality of age representation in a face database is one of the most important features affecting the aging learning process and could include such considerations as the number of longitudinal face-image samples per subject, the number of subjects, the range and distribution of age samples overall, and the population representation presented in the database. \nPrevious public databases used for age estimation or progression systems have been very limited in the total number of images, the number of images per subject, or the longitudinal separation of the samples of subjects in the database, i.e. FG-NET \\cite{fgNetData}, MORPH \\cite{ricanek2006morph}, AgeDB \\cite{AgeDB}. Some recent ones may be of larger scale but have noise within the age labels, i.e. 
CACD \\cite{chen14cross}, IMDB-WIKI \\cite{Rothe-IJCV-2016}. \nIn this work we introduce an extension of Aging Faces in the Wild (AGFW-v2) in terms of both \\textit{image and video} collections.\nTable \\ref{tab:AgingDatabaseProperties} presents the properties of our collected AGFW-v2 in comparison with others.\n\n\n\\subsection{Image dataset}\nAGFW \\cite{Duong_2016_CVPR} was first introduced with 18,685 images with individual ages ranging from 10 to 64 years old. Based on the collection criteria of AGFW, a double-sized database was desired. Compared to other age-related databases, \\textit{most of the subjects in AGFW-v2 are not public figures and are less likely to have significant make-up or facial modifications}, helping embed accurate aging effects during the learning process.\nIn particular, AGFW-v2 is mainly collected from three sources. Firstly, we adopt a search engine using different keywords, e.g. male at 20 years old, etc. Most images come from the daily life of non-famous subjects. Besides the images, all publicly available meta-data related to the subject's age are also collected. \nThe second part comes from mugshot images that are accessible from the public domain. These are passport-style photos with \nages reported by service agencies. Finally, we also include the Productive Aging Laboratory (PAL) database \\cite{PALDB}.\nIn total, AGFW-v2 consists of 36,299 images divided into 11 age groups with a span of five years.\n\\noindent\n\\subsection{Video dataset}\nAlong with still photographs, we also collected a video dataset for temporal aging evaluations with 100 videos of celebrities. Each video clip consists of 200 frames.\nIn particular, searching by the individuals' names during collection, we selected their interview, presentation, or movie sessions such that only one face is clearly presented in each frame.\nAge annotations were estimated using the year of the interview session versus the year of birth of the individual. Furthermore, in order to provide a reference for the subject's appearance in old age, the face images of these individuals at the current age are also collected and provided as meta-data for the subjects' videos. \n\n\\section{Video-based Facial Aging}\n\nIn the simplest approach, age progression of a sequence may be achieved by independently employing image-based aging techniques on each frame of a video. However, treating single frames independently may result in inconsistency of the final age-progressed likeness in the video, i.e. some synthesized features such as wrinkles appear differently across consecutive video frames as illustrated in Fig. \\ref{fig:FrameVsVideo_Mark}. \nTherefore, rather than considering a video as a set of independent frames, this method exploits the temporal relationship between frames of the input video to maintain visually cohesive age information for each frame.\nThe aging algorithm is formulated as a sequential decision-making process of a goal-oriented agent interacting with the temporal visual environment. At each time step, the agent integrates related information from the current and previous frames and then modifies its action accordingly.
The agent receives a scalar reward at each time-step with the goal of maximizing the total long-term aggregate of rewards, emphasizing effective utilization of temporal observations in computing the aging transformation employed on the current frame.\n\nFormally, given an input video, let $\\mathcal{I} \\in \\mathbb{R}^d$ be the image domain and $\\mathbf{X}^t = \\{\\mathbf{x}_y^t,\\mathbf{x}_o^t\\}$ be an image pair at time-step $t$ consisting of the $t$-th frame $\\mathbf{x}_y^t \\in \\mathcal{I}$ of the video at young age and the synthesized face $\\mathbf{x}_o^t \\in \\mathcal{I}$ at old age.\nThe goal is to learn a synthesis function $\\mathcal{G}$ that maps $\\mathbf{x}_y^t$ to $\\mathbf{x}_o^t$ as.\n\\begin{equation}\n\\footnotesize\n\\begin{split}\n\\mathbf{x}_o^t = \\mathcal{G}(\\mathbf{x}_y^t) | \\mathbf{X}^{1:t-1}\n\\end{split}\n\\label{eqn:mapping1}\n\\end{equation}\nThe conditional term indicates the temporal constraint needs to be considered during the synthesis process. \nTo learn $\\mathcal{G}$ effectively, we decompose $\\mathcal{G}$ into sub-functions as.\n\\begin{equation}\n\\footnotesize\n\\begin{split}\n \\mathcal{G} = \\mathcal{F}_1 \\circ \\mathcal{M} \\circ \\mathcal{F}_2\n \\end{split} \n\\end{equation}\nwhere $\\mathcal{F}_1: \\mathbf{x}_y^t \\mapsto \\mathcal{F}_1(\\mathbf{x}_y^t)$ maps the young face image $\\mathbf{x}_y^t$ to its representation in feature domain; $\\mathcal{M}: (\\mathcal{F}_1(\\mathbf{x}_y^t);\\mathbf{X}^{1:t-1}) \\mapsto \\mathcal{F}_1(\\mathbf{x}_o^t)$ defines the traversing function in feature domain; and $\\mathcal{F}_2: \\mathcal{F}_1(\\mathbf{x}_o^t) \\mapsto \\mathbf{x}_o^t$ is the mapping from feature domain back to image domain.\n\nBased on this decomposition, the architecture of our proposed framework (see Fig. \\ref{fig:RL_Framework}) consists of three main processing steps: (1) Feature embedding; (2) Manifold traversal; and (3) Synthesizing final images from updated features.\nIn the second step, a Deep RL based framework is proposed to guarantee the consistency between video frames in terms of aging changes during synthesis process.\n\n\\subsection{Feature Embedding} \\label{sec:FeatEmbbed}\nThe first step of our framework is to learn an embedding function $\\mathcal{F}_1$ to map $\\mathbf{x}_y^t$ into its latent representation $\\mathcal{F}_1(\\mathbf{x}_y^t)$. Although there could be various choices for $\\mathcal{F}_1$, to produce high quality synthesized images in later steps, the chosen structure for $\\mathcal{F}_1$ should produce a feature representation with two main properties: (1) \\textit{linearly separable} and (2) \\textit{detail preserving}. On one hand, with the former property, transforming the facial likeness from one age group to another age group can be represented as the problem of linearly traversing along the direction of a single vector in feature domain. On the other hand, the latter property guarantees a certain detail to be preserved and produce high quality results. In our framework, CNN structure is used for $\\mathcal{F}_1$. \nIt is worth noting that there remain some compromises regarding the choice of deep layers used for the representation such that both properties are satisfied. 
\\textit{Linear separability} is preferred in deeper layers further along the linearization process while \\textit{details of a face} are usually embedded in more shallow layers \\cite{mahendran2015understanding}.\nAs an effective choice in several image-modification tasks \\cite{gatys2015texture, gatys2015neural}, we adopt the normalized VGG-19\\footnote{This network is trained on ImageNet for better latent space.} and use the concatenation of three layers $\\{conv3\\_1, conv4\\_1, conv5\\_1\\}$ as the feature embedding. \n\n\\begin{figure*}[t]\n\t\\centering \n\t\\includegraphics[width=1.5\\columnwidth]{State_Action_RL_Aging_new_small.jpg}\n\t\\caption{The process of selecting neighbors for age-transformation relationship. \\textbf{Best viewed in color and 2$\\times$ zoom in.}}\t\n\t\\label{fig:RL_policy_net}\n\\end{figure*}\n\n\\subsection{Manifold Traversing}\nGiven the embedding $\\mathcal{F}_1(\\mathbf{x}_y^t)$, the age progression process can be interpreted as the linear traversal from the younger age region of $\\mathcal{F}_1(\\mathbf{x}_y^t)$ toward the older age region of $\\mathcal{F}_1(\\mathbf{x}_o^t)$ within the deep-feature domain. Then the Manifold Traversing function $\\mathcal{M}$ can be written as in Eqn \\eqref{eqn:traversing}.\n\\begin{equation}\n\\footnotesize\n\\begin{split}\n \\mathcal{F}_1(\\mathbf{x}_o^t) & = \\mathcal{M}(\\mathcal{F}_1(\\mathbf{x}_y^t); \\mathbf{X}^{1:t-1}) \\\\\n& = \\mathcal{F}_1(\\mathbf{x}_y^t) + \\alpha \\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}\n\\end{split}\n\\label{eqn:traversing}\n\\end{equation}\nwhere $\\alpha$ denotes the user-defined combination factor, and $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ encodes the amount of aging information needed to reach the older age region for the frame $\\mathbf{x}_y^t$ conditional on the information of previous frames. \n\n\\subsubsection{Learning from Neighbors} \\label{sec:LearnFromNeighbor}\nIn order to compute $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ containing only aging effects without the presence of other factors, i.e. identity, pose, etc., we exploit the relationship in terms of the aging changes between the nearest neighbors of $\\mathbf{x}_y^t$ in the two age groups. In particular, given $\\mathbf{x}_y^t$, we construct two neighbor sets $\\mathcal{N}_y^t$ and $\\mathcal{N}_o^t$ that contain $K$ nearest neighbors of $\\mathbf{x}_y^t$ in the young and old age groups, respectively.\nThen \n\\small\n$\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}= \\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}_{\\mathcal{A}(\\cdot, \\mathbf{x}_y^t)}$ \n\\normalsize\nis estimated by:\n\n\\begin{equation} \\label{eqn:delta_f} \\nonumber\n\\footnotesize\n\\begin{split}\n\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}} \n& =\\frac{1}{K} \\left[\\sum_{\\mathbf{x} \\in \\mathcal{N}_o^t} \\mathcal{F}_1(\\mathcal{A}(\\mathbf{x},\\mathbf{x}_y^t))- \\sum_{\\mathbf{x} \\in \\mathcal{N}_y^t} \\mathcal{F}_1(\\mathcal{A}(\\mathbf{x},\\mathbf{x}_y^t)) \\right]\n\\end{split}\n\\end{equation}\n\\normalsize\nwhere $\\mathcal{A}(\\mathbf{x},\\mathbf{x}_y^t)$ denotes a face-alignment operator that positions the face in $\\mathbf{x}$ with respect to the face location in $\\mathbf{x}_y^t$. \nSince only the nearest neighbors of $\\mathbf{x}_y^t$ are considered in the two sets, conditions apart from age difference should be sufficiently similar between the two sets and subtracted away in $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$. 
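\n\nTo make this traversal concrete, the following PyTorch-style sketch estimates $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ from two given neighbor sets and applies Eqn. \\eqref{eqn:traversing}. It is an illustrative sketch rather than the exact implementation: the torchvision ImageNet VGG-19 stands in for the normalized VGG-19, input preprocessing is omitted, the alignment operator $\\mathcal{A}(\\cdot,\\mathbf{x}_y^t)$ is reduced to an identity placeholder, and the neighbor sets are assumed to be provided by the neighbor-selection procedure described next.\n\\begin{verbatim}\nimport torch\nimport torchvision.models as models\n\n# F_1: concatenated conv3_1, conv4_1, conv5_1 responses of VGG-19\nvgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()\nLAYER_IDS = {10, 19, 28}  # conv3_1, conv4_1, conv5_1 in torchvision indexing\n\n@torch.no_grad()\ndef embed(x):             # x: (1, 3, H, W) image tensor\n    feats, h = [], x\n    for i, layer in enumerate(vgg):\n        h = layer(h)\n        if i in LAYER_IDS:\n            feats.append(h.flatten(start_dim=1))\n    return torch.cat(feats, dim=1)\n\ndef align(x, ref):        # placeholder for A(., x_y^t); assumes aligned inputs\n    return x\n\ndef aging_direction(x_y, young_nbrs, old_nbrs):\n    # Delta: mean old-neighbor embedding minus mean young-neighbor embedding\n    f_o = torch.stack([embed(align(x, x_y)) for x in old_nbrs]).mean(dim=0)\n    f_y = torch.stack([embed(align(x, x_y)) for x in young_nbrs]).mean(dim=0)\n    return f_o - f_y\n\ndef traverse(x_y, young_nbrs, old_nbrs, alpha=1.0):\n    # traversal: F_1(x_o^t) = F_1(x_y^t) + alpha * Delta\n    return embed(x_y) + alpha * aging_direction(x_y, young_nbrs, old_nbrs)\n\\end{verbatim}\n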
Moreover, the averaging operator also helps to ignore identity-related factors, and, therefore, emphasizing age-related changes as the main source of difference to be encoded in $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$. The remaining question is how to choose the appropriate neighbor sets such that the aging changes provided by $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ and $\\Delta^{\\mathbf{x}^{t-1}|\\mathbf{X}^{1:t-2}}$ are consistent. In the next section, a Deep RL based framework is proposed for selecting appropriate candidates for these sets.\n\n\\subsubsection{Deep RL for Neighbor Selection}\nA straightforward technique of choosing the neighbor sets for $\\mathbf{x}_y^t$ in young and old age is to select faces that are close to $\\mathbf{x}_y^t$ based on some \\textit{closeness criteria} such as distance in feature domain, or number of matched attributes. However, since these criteria are not frame-interdependent, they are unable to maintain visually cohesive age information across video frames. \nTherefore, we propose to exploit the relationship presented in the image pair $\\{\\mathbf{x}_y^t, \\mathbf{x}_y^{t-1}\\}$ and the neighbor sets of $\\mathbf{x}_y^{t-1}$ as an additional guidance for the selection process. Then an RL based framework is proposed and formulated as a sequential decision-making process with the goal of maximizing the temporal reward estimated by the consistency between the neighbor sets of $\\mathbf{x}_y^t$ and $\\mathbf{x}_y^{t-1}$.\n\nSpecifically, given two input frames $\\{\\mathbf{x}_y^t,\\mathbf{x}_y^{t-1}\\}$ and two neighbor sets $\\{\\mathcal{N}_y^{t-1}, \\mathcal{N}_o^{t-1}\\}$ of $\\mathbf{x}_y^{t-1}$, the agent of a policy network will iteratively analyze the role of each neighbor of $\\mathbf{x}_y^{t-1}$ in both young and old age in combination with the relationship between $\\mathcal{F}_1 (\\mathbf{x}_y^t)$ and $\\mathcal{F}_1 (\\mathbf{x}_y^{t-1})$ to determine new suitable neighbors for $\\{\\mathcal{N}_y^{t}, \\mathcal{N}_o^{t}\\}$ of $\\mathbf{x}_y^{t}$.\nA new neighbor is considered appropriate when it is sufficiently similar to $\\mathbf{x}_y^{t}$ and maintains aging consistency between two frames.\nEach time a new neighbor is selected, the neighbor sets of $\\mathbf{x}_y^t$ are updated and received a reward based on estimating the similarity of embedded aging information between two frames.\nAs a result, the agent can iteratively explore an optimal route for selecting neighbors to maximize the long-term reward. Fig. \\ref{fig:RL_policy_net} illustrates the process of selecting neighbors for age-transformation relationship.\n\n\\textbf{State:} The state at $i$-th step $\\mathbf{s}^t_i=\\left[\\mathbf{x}_y^{t}, \\mathbf{x}_y^{t-1}, \\mathbf{z}^{t-1}_i, (\\mathcal{N}^t)_i, \\mathcal{\\bar{N}}^t, \\mathbf{M}_i\\right]$ is defined as a composition of six components: (1) the \\textit{current frame} $\\mathbf{x}_y^t$; (2) the \\textit{previous frame} $\\mathbf{x}_y^{t-1}$; (3) the \\textit{current considered neighbor} $\\mathbf{z}^{t-1}_i$ of $\\mathbf{x}_y^{t-1}$, i.e. either in young and old age groups; (4) the \\textit{current construction of the two neighbor sets} $(\\mathcal{N}^t)_i = \\{(\\mathcal{N}_y^t)_i,(\\mathcal{N}_o^t)_i\\}$ of $\\mathbf{x}_y^{t}$ until step $i$; (5) the \\textit{extended neighbor sets} $\\mathcal{\\bar{N}}^t=\\{\\mathcal{\\bar{N}}_y^t,\\mathcal{\\bar{N}}_o^t\\}$ consisting of $N$ neighbors, i.e. 
$N > K$, of $\\mathbf{x}_y^{t}$ for each age group.\nand (6) a \\textit{binary mask} $\\mathbf{M}_i$ indicating which samples in $\\mathcal{\\bar{N}}^t$ are already chosen in previous steps. \nNotice that in the initial state $\\mathbf{s}^t_0$, the two neighbor sets $\\{(\\mathcal{N}_y^t)_0, (\\mathcal{N}_o^t)_0\\}$ are initialized using the $K$ nearest neighbors of $\\mathbf{x}_y^t$ of the two age groups, respectively. \nTwo measurement criteria are considered for finding the nearest neighbors: \\textit{the number of matched facial attributes}, e.g gender, expressions, etc.; and \\textit{the cosine distance between two feature embedding vectors}.\nAll values of the mask $\\mathbf{M}_i$ are set to 1 in $\\mathbf{s}^t_0$.\n\n\\textbf{Action:} Using the information from the chosen neighbor $\\mathbf{z}^{t-1}_i$ of $\\mathbf{x}_y^{t-1}$, and the relationship of $\\{\\mathbf{x}_y^{t}, \\mathbf{x}_y^{t-1}\\}$, an action $a_{i}^{t}$ is defined as selecting the new neighbor for the current frame such that with this new sample added to the neighbor sets of the current frame, the aging-synthesis features between $\\mathbf{x}_y^{t}$ and $\\mathbf{x}_y^{t-1}$ are more consistent. Notice that since not all samples in the database are sufficiently similar to $\\mathbf{x}_y^{t}$, we restrict the action space by selecting among $N$ nearest neighbors of $\\mathbf{x}_y^{t}$. In our configuration, $N= n * K$ where $n$ and $K$ are set to 4 and 100, respectively.\n\n\\textbf{Policy Network:} \nAt each time step $i$, the policy network first encodes the information provided in state $\\mathbf{s}^t_i$ as\n\\begin{equation}\n\\footnotesize\n\\begin{split}\n\\mathbf{u}^t_i &= \\left[\\delta^{\\text{pool5}}_{\\mathcal{F}_1}(\\mathbf{x}_y^t, \\mathbf{x}_y^{t-1}), \\mathcal{F}^{\\text{pool5}}_1(\\mathbf{z}_i^{t-1})\\right] \\\\\n\\mathbf{v}^t_i &=\\left[d\\left((\\mathcal{N}^t)_i,\\mathbf{x}_y^t\\right), d\\left(\\mathcal{\\bar{N}}^t,\\mathbf{x}_y^t\\right), \\mathbf{M}_i\\right]\n\\end{split}\n\\end{equation}\nwhere $\\mathcal{F}^{\\text{pool5}}_1$ is the embedding function as presented in Sec. \\ref{sec:FeatEmbbed}, but the $pool5$ layer is used as the representation; $\\delta^{\\text{pool5}}_{\\mathcal{F}_1}(\\mathbf{x}_y^t, \\mathbf{x}_y^{t-1}) = \\mathcal{F}^{\\text{pool5}}_1(\\mathbf{x}_y^t)-\\mathcal{F}^{\\text{pool5}}_1(\\mathbf{x}_y^{t-1})$ embeds the relationship of $\\mathbf{x}_y^{t}$ and $\\mathbf{x}_y^{t-1}$ in the feature domain. $d\\left((\\mathcal{N}^t)_i,\\mathbf{x}_y^t\\right)$ is the operator that maps all samples in $(\\mathcal{N}^t)_i$ to their representation in the form of cosine distance to $\\mathbf{x}_y^t$.\nThe last layer of the policy network is reformulated as $P(\\mathbf{z}^t_i = \\mathbf{x}_j|\\mathbf{s}_{i}^t) = e^{c_i^j} \/ {\\sum_{k} c_i^k}$,\nwhere \n\\small\n$\\mathbf{c}_i = \\mathbf{M}_i \\odot \\left(\\mathbf{W} \\mathbf{h}_i^t + \\mathbf{b}\\right)$\n\\normalsize\nand \n\\small\n$\\mathbf{h}_i^t=\\mathcal{F}_{\\pi}\\left(\\mathbf{u}_i^t,\\mathbf{v}^t_i. 
\\theta_{\\pi}\\right)$\n\\normalsize\n; $\\{\\mathbf{W},\\mathbf{b}\\}$ are weight and bias of the hidden-to-output connections.\nSince $\\mathbf{h}_i^t$ consists of the features of the sample picked for neighbors of $\\mathbf{x}_y^{t-1}$ and the temporal relationship between $\\mathbf{x}_y^{t-1}$ and $\\mathbf{x}_y^{t}$, it directly encodes the information of \\textit{how the face changes} and \\textit{what aging information} from the previous frame has been used.\nThis process helps the agent evaluate its choice to confirm the optimal candidate of $\\mathbf{x}_y^t$\nto construct the neighbor sets.\n\nThe output of the policy network is an $N+1$-dimension vector $\\mathbf{p}$ indicating the probabilities of all available actions $P(\\mathbf{z}^t_{i}=\\mathbf{x}_j|\\mathbf{s}_{i}^t),j=1..N$ where each entry indicates the probability of selecting sample $\\mathbf{x}_j$ for step $i$. It is noticed that the $N+1$-th value of $\\mathbf{p}$ indicates an action that there is no need to update the neighbor sets in this step.\nDuring training, an action $a_{i}^{t}$ is taken by stochastically sampling from this probability distribution. During testing, the one with highest probability is chosen for synthesizing process.\n\n\n\\textbf{State transition:} After decision of action $a_{i}^t$ in state $\\mathbf{s}_{i}^t$ has been made, the next state $\\mathbf{s}_{i+1}^t$ can be obtained via the state-transition function $\\mathbf{s}^t_{i+1} = Transition(\\mathbf{s}^t_{i}, a_i^t)$ where $\\mathbf{z}^{t-1}_i$ is updated to the next unconsidered sample $\\mathbf{z}^{t-1}_{i+1}$ in neighbor sets of $\\mathbf{x}_y^{t-1}$. Then the neighbor that is least similar to $\\mathbf{x}_y^{t}$ in the corresponding sets of $\\mathbf{z}^{t-1}_i$ is replaced by $\\mathbf{x}_j$ according to the action $a_{i}^t$.\nThe \\textit{terminate state} is reached when all the samples of $\\mathcal{N}_y^{t-1}, \\mathcal{N}_o^{t-1}$ are considered.\n\n\n\\textbf{Reward:}\nDuring training, the agent will receive a reward signal $r^t_i$ from the environment after executing an action $a_{i}^t$ at step $i$. In our proposed framework, the reward is chosen to measure aging consistency between video frames as.\n\n\\begin{equation} \\label{eqn:reward}\n\\footnotesize\nr^t_i = \\frac{1}{\\parallel \n\t\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}_{i,\\mathcal{A}(\\cdot, \\mathbf{x}_y^t)} - \\Delta^{\\mathbf{x}^{t-1}|\\mathbf{X}^{1:t-2}}_{\\mathcal{A}(\\cdot, \\mathbf{x}_y^t)} \\parallel + \\epsilon}\n\\end{equation}\nNotice that in this formulation, we align all neighbors of both previous and current frames to $\\mathbf{x}^t_y$. Since the same alignment operator $\\mathcal{A}(\\cdot,\\mathbf{x}_y^{t})$ on all neighbor sets of both previous and current frames is used, the effect of alignment factors, i.e. poses, expressions, location of the faces, etc., can be minimized in $r^t_i$. Therefore, $r^t_i$ reflects only the difference in aging information embedded into $\\mathbf{x}_y^{t}$ and $\\mathbf{x}_y^{t-1}$.\n\n\\textbf{Model Learning:} The training objective is to maximize the sum of the reward signals: $R = \\sum_i r^t_i$. We optimize the recurrent policy network with the REINFORCE algorithm \\cite{Williams92simplestatistical} guided by the reward given at each time step. \n\n\\subsection{Synthesizing from Features}\nAfter the neighbor sets of $\\mathbf{x}_y^t$ are selected, the $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ can be computed as presented in Sec. 
\\ref{sec:LearnFromNeighbor} and the embedding of $\\mathbf{x}_y^t$ in old age region $\\mathcal{F}_1(\\mathbf{x}_o^t)$ is estimated via Eqn. \\eqref{eqn:traversing}. In the final stage, $\\mathcal{F}_1(\\mathbf{x}_o^t)$ can then be mapped back into the image domain $\\mathcal{I}$ via $\\mathcal{F}_2$ which can be achieved by the optimization shown in Eqn. \\eqref{eqn:tv_update} \\cite{mahendran2015understanding}.\n\\begin{equation} \\label{eqn:tv_update}\n\\small\n\\mathbf{x}^{t*}_o = \\arg \\min_{\\mathbf{x}} \\frac{1}{2} \\parallel \\mathcal{F}_1(\\mathbf{x}_o^t) - \\mathcal{F}_1(\\mathbf{x}) \\parallel^2_2 + \\lambda_{V^\\beta} R_{V^\\beta}(\\mathbf{x})\n\\end{equation}\nwhere $R_{V^\\beta}$ represents the Total Variation regularizer encouraging smooth transitions between pixel values.\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=0.9\\columnwidth]{Fig_AgingResults.jpg}\n\t\\caption{\\textbf{Age Progression Results.} For each subject, the two rows shows the input frames at the young age, and the age-progressed faces at 60-years old, respectively.} \n\t\\label{fig:Video_AP_frontal}\n\\end{figure}\n\n\\section{Experimental Results}\n\n\\subsection{Databases} \\label{subsec:db}\nThe proposed approach is trained and evaluated using training and testing databases that are not overlapped. Particularly, the neighbor sets are constructed using a large-scale database composing face images from our collected \\textbf{AGFW-v2}\nand \\textbf{LFW-GOOGLE} \\cite{upchurch2016deep}.\nThen Policy network is trained using videos from \\textbf{300-VW} \\cite{shen2015first}. \nFinally, the video set from AGFW-v2 is used for evaluation.\n\n\\textbf{LFW-GOOGLE} \\cite{upchurch2016deep}: includes 44,697 high resolution images collected using the names of 5,512 celebrities. \nThis database does not have age annotation. \nTo obtain the age label, we employ the age estimator in \\cite{Rothe-IJCV-2016} for initial labels which are manually corrected as needed after estimation. \n\n\\textbf{300-VW} \\cite{shen2015first}: includes 218595 frames from 114 videos. Similar to the video set of AGFW-v2, the videos are movie or presentation sessions containing one face per frame.\n\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=1\\columnwidth]{DifferentAge_Ex.png}\n\t\\caption{\\textbf{Age Progression Results.} Given different frames of a subject, our approach can consistently synthesized the faces of that subject at different age groups.} \t\\label{fig:Video_AP_DifferentAgeGroup}\n\\end{figure}\n\n\\subsection{Implementation Details} \\label{subsec:imple}\n\n\\textbf{Data Setting.} In order to construct the neighbor sets for an input frames in young and old ages, images from AGFW-v2 and LFW-GOOGLE are combined and divided into 11 age groups from 10 to 65 with the age span of five years. \n\n\\textbf{Model Structure and Training.} For the policy network, we employ a neural network with two hidden layers of 4096 and 2048 hidden units, respectively. Rectified Linear Unit (ReLU) activation is adopted for each hidden layer. \nThe videos from 300-VW are used to train the policy network.\n\n\\textbf{Computational time.} Processing time of the synthesized process depends on the resolution of the input video frames.\nIt roughly takes from 40 seconds per $240 \\times 240$ frame or 4.5 minutes per video frame with the resolution of $900 \\times 700$.\nWe evaluate on a system using an Intel i7-6700 CPU@3.4GHz with an NVIDIA GeForce TITAN X GPU. 
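\n\nFor reference, the policy network described above can be realized as a small multilayer perceptron. The PyTorch-style sketch below is illustrative rather than the exact released code: it uses the two ReLU hidden layers of 4096 and 2048 units listed above, replaces the multiplicative mask $\\mathbf{M}_i$ with a masked softmax over the $N+1$ actions, and performs a single REINFORCE update from the per-step rewards $r^t_i$; the construction of the state encoding $[\\mathbf{u}^t_i, \\mathbf{v}^t_i]$ is assumed to happen elsewhere in the pipeline.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass NeighborSelectionPolicy(nn.Module):\n    def __init__(self, state_dim, n_actions):  # n_actions = N + 1\n        super().__init__()\n        self.body = nn.Sequential(\n            nn.Linear(state_dim, 4096), nn.ReLU(),\n            nn.Linear(4096, 2048), nn.ReLU())\n        self.out = nn.Linear(2048, n_actions)\n\n    def forward(self, state, mask):\n        # mask: 1 for selectable actions, 0 for already-chosen candidates\n        logits = self.out(self.body(state))\n        logits = logits.masked_fill(mask == 0, float('-inf'))\n        return torch.distributions.Categorical(logits=logits)\n\ndef reinforce_step(policy, optimizer, episode):\n    # episode: list of (state, mask, action, reward) tuples for one frame pair\n    total_reward = sum(r for _, _, _, r in episode)   # R = sum of r^t_i\n    log_probs = [policy(s, m).log_prob(a) for s, m, a, _ in episode]\n    loss = -torch.stack(log_probs).sum() * total_reward\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n\\end{verbatim}\nMasking with negative-infinity logits excludes already-chosen candidates while keeping the distribution normalized over the remaining actions, serving the same role as $\\mathbf{M}_i$ in the output layer.\n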
\n\n\\subsection{Age Progression} \\label{subsec:agingresult}\nThis section demonstrates the validity of the approach for robustly and consistently synthesizing age-progressed faces across consecutive frames of input videos.\n\n\\textbf{Age Progression in frontal and off-angle faces.} \nFigs. \\ref{fig:Video_AP_frontal} and \\ref{fig:Video_AP_DifferentAgeGroup} illustrate our age-progression results across frames from AGFW-v2 videos that contain both frontal and off-angle faces. From these results, one can see that even in the case of \\textit{frontal faces} (i.e. the major changes between frames come from facial expressions and movements of the mouth and lips), or \\textit{off-angle faces} (i.e. more challenging due to pose effects in combination with other variations), \nour proposed method is able to robustly synthesize aging faces. Wrinkles of soft-tissue areas (i.e. under the subject's eyes; around the cheeks and mouth) are coherent and robust between consecutive synthesized frames. We also compare our method against the Temporal Non-volume Preserving (TNVP) approach \\cite{Duong_2017_ICCV} and Face Transformer (FT) \\cite{faceTransformer} in Fig. \\ref{fig:AP_Comparisons}. These results further show the advantages of our model: both TNVP and FT are unable to ensure consistency between frames and may produce a different age-progressed face for each input. Meanwhile, in our results, the temporal information is efficiently exploited. This emphasizes the crucial role of the learned policy network. \n\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=1\\columnwidth]{Fig_AgingComparison.jpg}\n\t\\caption{\\textbf{Comparisons between age progression approaches}. For each subject, the top row shows frames in the video at a younger age. The next three rows are our results, TNVP \\cite{Duong_2017_ICCV} and Face Transformer \\cite{faceTransformer}, respectively.}\n\t\\label{fig:AP_Comparisons}\n\\end{figure}\n\n\n\\textbf{Aging consistency.} Table \\ref{tb:Consistency_Eval} compares the aging consistency between different approaches.\nFor the \\textbf{\\textit{consistency measurement}}, we adopt the average inverted reward $r^{-1}$ of all frames for each synthesized video. \nFurthermore, to validate the \\textbf{\\textit{temporal smoothness}}, we first compute the optical flow, i.e. an estimation of image displacements, between frames of each video to estimate changes in pixels through time. Then we evaluate the differences ($\\ell_2$-norm) between the flows of the original versus the synthesized videos. \nFrom these results, one can see that the policy network consistently and robustly maintains an appropriate amount of aging embedded in each frame and, therefore, produces smoother synthesis across frames in the output videos.\n\n\n\\subsection{Video Age-Invariant Face Recognition} \\label{subsec:recog}\nThe effectiveness of our proposed approach is also validated in terms of the performance gain for cross-age face verification. With the presence of the RL approach, not only is consistency guaranteed, but improvements are also made in both matching accuracy and matching score deviation. We adapt one of the state-of-the-art deep face matching models in \\cite{deng2018arcface} for this experiment. \nWe set up the face verification as follows. For all videos with the subject's age labels in the video set of AGFW-v2, the proposed approach is employed to synthesize all video frames to the current ages of the corresponding subjects in the videos. 
Then each frame of the age-progressed videos is matched against the real face images of the subjects at the current age. The matching scores distributions between original (young) and aged frames are presented in Fig. \\ref{fig:MatchingScoreDistribution}. Compared to the original frames, our age-progressed faces produce higher matching scores and, therefore, improve the matching performance over original frames. Moreover, with the consistency during aging process, the score deviation is maintained to be low. This also helps to improve the overall performance further. The matching accuracy among different approaches is also compared in Table \\ref{tb:Consistency_Eval} to emphasize the advantages of our proposed model.\n\n\\begin{table}[t]\n\t\\footnotesize\n\t\\centering\n\t\\caption{Comparison results in terms of consistency and temporal smoothness (\\textit{smaller value indicates better consistency}); and matching accuracy (\\textit{higher value is better}). \n\t} \n\t\\label{tb:Consistency_Eval} \n\t\\small\n\t\\begin{tabular}{l c c c }\n\t\t\\Xhline{2\\arrayrulewidth}\n\t\t\\textbf{Method} & \n\t\t\\begin{tabular}{@{}c@{}}\\textbf{Aging} \\\\ \\textbf{Consistency} \\end{tabular}& \\begin{tabular}{@{}c@{}}\\textbf{Temporal}\\\\ \\textbf{Smoothness}\\end{tabular} & \\begin{tabular}{@{}c@{}}\\textbf{Matching}\\\\ \\textbf{Accuracy}\\end{tabular}\\\\\n\t\t\\hline\n\t\t\\begin{tabular}{@{}l@{}} Original Frames\\end{tabular} & $-$ & $-$ & 60.61\\%\\\\\n\t\t\\hline\n\t\tFT \\cite{faceTransformer} & 378.88 & 85.26 & 67.5\\%\\\\\n\t\tTNVP \\cite{Duong_2017_ICCV} & 409.45 & 87.01 & 71.57\\%\\\\\n\t\tIPCGANs \\cite{wang2018face_aging} & 355.91 & 81.45&73.17\\%\\\\\n\t\t\\hline\n\t\t\\hline\n\t\t\\textbf{Ours(Without RL)} & 346.25 & 75.7 & 78.06\\%\\\\\n\t\t\\textbf{Ours(With RL)} & \\textbf{245.64} & \\textbf{61.80} & \\textbf{83.67\\%}\\\\\n\t\t\\Xhline{2\\arrayrulewidth}\n\t\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=0.85\\columnwidth]{Aging_Matching_Score_Distribution.png}\n\t\\caption{The distributions of the matching scores (of each age group) between frames of original and age-progressed videos against real faces of the subjects at the current age. }\t\n\t\\label{fig:MatchingScoreDistribution}\n\\end{figure}\n\n\\section{Conclusions}\nThis work has presented a novel Deep RL based approach for age progression\nin videos.\nThe model inherits the strengths of both recent advances of deep networks and reinforcement learning techniques to synthesize aged faces of given subjects both plausibly and coherently across video frames.\nOur method can generate age-progressed facial likenesses in videos with consistently aging features across frames. 
Moreover, our method guarantees preservation of the subject's visual identity after the aging effects are synthesized.\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecently, considerable attention has been devoted to studies of different systems in a space with\na deformed Heisenberg algebra that takes into account the quantum nature of space on the phenomenological level.\nThese works are motivated\nby several independent lines of investigations in string theory and quantum gravity (see, e.g., \\cite{gross, maggiore, witten}) which lead to the\nGeneralized Uncertainty Principle (GUP)\n\\begin{eqnarray}\n\\Delta X\\ge{\\hbar\\over2}\\left({1\\over \\Delta P}+\\beta\\Delta P\\right)\n\\end{eqnarray}\nand suggest the existence of the\nfundamental minimal length $\\Delta X_{\\rm min}=\\hbar\\sqrt\\beta$, which is\nof the order of Planck's length $l_p=\\sqrt{\\hbar G\/c^3}\\simeq 1.6\\times 10^{-35}\\rm m$.\n\nIt was established that a minimal length\ncan be obtained in the framework of a small quadratic modification (deformation) of the Heisenberg algebra \\cite{Kem95,Kem96}\n\\begin{eqnarray}\n[X,P]=i\\hbar(1+\\beta P^2).\n\\end{eqnarray}\nIn the classical limit $\\hbar\\to 0$ the quantum-mechanical commutator for operators is replaced by the Poisson bracket for the corresponding classical variables\n\\begin{eqnarray}\n{1\\over i\\hbar}[X,P]\\to\\{X,P\\},\n\\end{eqnarray}\nwhich in the deformed case reads\n\\begin{eqnarray}\n\\{X,P\\}=(1+\\beta P^2).\n\\end{eqnarray}\n\n We point out that historically the first algebra of that kind in the relativistic case was proposed by Snyder in 1947 \\cite{Snyder47}. However, only after the investigations in string theory and quantum gravity did considerable interest appear in the studies of the physical properties of classical and quantum systems in spaces with deformed algebras.\n\nThe observation that the GUP can be obtained from the deformed Heisenberg algebra opens the possibility of studying the influence of the minimal length on properties of physical systems on the quantum level as well as on the classical one.\n\nDeformed commutation relations bring new difficulties in quantum\nmechanics as well as in classical mechanics. 
Only a few problems are known to be solved exactly.\nThey are: one-dimensional harmonic\noscillator with minimal uncertainty in position \\cite{Kem95} and\nalso with minimal uncertainty in position and momentum\n\\cite{Tkachuk1,Tkachuk2}, $D$-dimensional isotropic harmonic\noscillator \\cite{chang, Dadic}, three-dimensional Dirac oscillator\n\\cite{quesne},\n(1+1)-dimensional Dirac\noscillator within Lorentz-covariant deformed algebra \\cite{Quesne10909},\none-dimensional Coulomb problem\n\\cite{fityo}, and\nthe\nsingular inverse square\npotential with a minimal length \\cite{Bou1,Bou2}.\nThree-dimensional\nCoulomb problem with deformed Heisenberg algebra was studied within the perturbation theory \\cite{Brau,Benczik,mykola,Stet,mykolaOrb}.\nIn \\cite{Stet07} the scattering problem in the deformed space with minimal length was studied.\nThe ultra-cold\nneutrons in gravitational field with minimal length were considered in\n\\cite{Bra06,Noz10,Ped11}.\nThe influence of minimal length on Lamb's shift, Landau levels, and tunneling current in scanning tunneling microscope was studied \\cite{Das,Ali2011}.\nThe Casimir effect in a space with minimal length was examined in \\cite{Frassino}.\nIn \\cite{Vaki} the effect of noncommutativity and of the existence of a minimal length on the phase space of cosmological model was investigated.\nThe authors of paper \\cite{Batt}\nstudied various physical consequences which follow from the noncommutative Snyder space-time geometry.\nThe classical mechanics in a space with deformed Poisson brackets was studied\nin \\cite{BenczikCl,Fryd,Sil09}.\nThe composite system ($N$-particle system) in the deformed space with\nminimal length was studied in \\cite{Quesne10,Bui10}.\n\nNote that deformation of Heisenberg algebra brings not only technical difficulties in solving of corresponding equations\nbut also brings problems of fundamental nature.\nOne of them is the violation of the equivalence principle in\nspace with minimal length \\cite{Ali11}.\nThis is the result of assumption that the parameter of deformation\nfor\nmacroscopic bodies of different mass is unique.\nIn paper \\cite{Quesne10} we shown that the center of mass of a macroscopic body in deformed space is\ndescribed by an effective parameter of deformation, which is essentially smaller than the parameters of deformation for particles consisting the body. 
Using the result of \\cite{Quesne10} for the effective parameter of deformation we show that the equivalence principle in the space with minimal length can be recovered.\nIn section 3 we reproduce the result of \\cite{Quesne10} concerning the effective parameter of deformation for the center of mass on the classical level and in addition show that the independence of the kinetic energy on the composition leads to the recovery of the equivalence principle in the space with deformed Poisson bracket.\n\n\\section{Free fall of a particle in a uniform gravitational field}\n\n\nThe Hamiltonian of a particle (a macroscopic body which we consider as a point particle) of mass $m$ in a uniform gravitational field reads\n\\begin{eqnarray}\nH={P^2\\over 2m}-mgX,\n\\end{eqnarray}\nwhere the gravitational field is characterized by the factor $g$ and is directed along the $x$ axis.\nNote that here the inertial mass ($m$ in the first term) is equal to the gravitational mass\n($m$ in the second one).\nThe Hamiltonian equations of motion in a space with deformed Poisson brackets are as follows\n\\begin{eqnarray}\\label{dxp}\n\\dot{X}=\\{X,H\\}={P\\over m}(1+\\beta P^2),\\\\\n\\dot{P}=\\{P,H\\}=mg(1+\\beta P^2).\n\\end{eqnarray}\nWe impose zero initial conditions for position and momentum, namely $X=0$ and $P=0$ at $t=0$.\nThese equations can be solved easily.\nFrom the second equation we find\n\\begin{eqnarray}\nP={1\\over \\sqrt\\beta}\\tan(\\sqrt\\beta mgt).\n\\end{eqnarray}\nFrom the first equation we obtain for the velocity\n\\begin{eqnarray}\\label{soldX}\n\\dot{X}={1\\over m\\sqrt\\beta}{\\tan(\\sqrt\\beta mgt)\\over\\cos^2(\\sqrt\\beta mgt)}\n\\end{eqnarray}\nand for the position\n\\begin{eqnarray}\\label{solX}\nX={1\\over 2g m^2\\beta}\\tan^2(\\sqrt\\beta mgt).\n\\end{eqnarray}\nOne can verify that the motion is periodic with period $T={\\pi\\over m\\sqrt\\beta g}$. The particle moves from $X=0$\nto $X=\\infty$, then reflects from $\\infty$ and moves in the opposite direction to $X=0$.\nHowever, from the physical point of view this solution is correct only for times $t\\ll T$, when the velocity of the particle\nis much smaller than the speed of light. Otherwise, relativistic mechanics must be considered.\n\nIt is instructive to write out the results for the velocity and coordinate to first order in $\\beta$:\n\\begin{eqnarray}\\label{soldXap}\n\\dot{X}=gt\\left(1+{4\\over 3}\\beta m^2g^2t^2\\right),\\\\\\label{solXap}\nX= {gt^2\\over2}\\left(1+{2\\over 3}\\beta m^2g^2t^2\\right).\n\\end{eqnarray}\nIn the limit\n$\\beta\\to 0$ we reproduce the well-known results\n\\begin{eqnarray}\n\\dot{X}=gt, \\ \\\nX= {gt^2\\over2},\n\\end{eqnarray}\nwhere the kinematic characteristics, such as the velocity and position of a free-falling particle, depend only on the initial position and velocity of the particle and do not depend on the composition and mass of the particle.\nThis is in agreement with the weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle. 
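\n\nEquations (\\ref{soldX}) and (\\ref{solX}) can also be checked numerically; the following is a minimal sketch (in Python, with arbitrary illustrative values of $m$, $g$ and $\\beta$) which integrates the equations of motion (\\ref{dxp}) and compares the result with the closed-form trajectory.\n\\begin{verbatim}\n# Minimal sketch: integrate the deformed equations of motion and compare\n# with the closed form X(t) = tan^2(sqrt(beta)*m*g*t) / (2*g*m^2*beta).\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nm, g, beta = 1.0, 9.81, 1.0e-3            # illustrative values only\n\ndef rhs(t, y):                             # y = (X, P)\n    X, P = y\n    return [P / m * (1 + beta * P**2), m * g * (1 + beta * P**2)]\n\nT = np.pi / (m * np.sqrt(beta) * g)        # period of the motion\nt = np.linspace(0.0, 0.2 * T, 50)          # stay well below T\nsol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0],\n                rtol=1e-10, atol=1e-12, dense_output=True)\nX_exact = np.tan(np.sqrt(beta) * m * g * t)**2 / (2 * g * m**2 * beta)\nprint(np.max(np.abs(sol.sol(t)[0] - X_exact)))   # agrees up to tolerance\n\\end{verbatim}\n\n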
Note that in the nondeformed case, when the Newtonian equation of motion in a gravitational field is fulfilled, the weak equivalence principle is nothing else than the statement of the equivalence\nof inertial and gravitational masses.\n\nAs we see from (\\ref{soldX}) and (\\ref{solX}) or (\\ref{soldXap}) and (\\ref{solXap}), in the deformed space\nthe trajectory of the point mass in the gravitational field depends on the mass of the particle if we suppose that\nthe parameter of deformation is the same for all bodies.\nSo, in this case the equivalence principle is violated.\nIn paper \\cite{Quesne10} we showed on the quantum level that in fact the motion of the center of mass of a composite system in deformed space is governed by an\neffective parameter (in \\cite{Quesne10} it is denoted as $\\tilde\\beta_0$, here we denote it as $\\beta$). So, the parameter of deformation for a macroscopic body\nis\n\\begin{eqnarray}\\label{betaN}\n\\beta=\\sum_i\\mu_i^3\\beta_i,\n\\end{eqnarray}\nwhere\n$\\mu_i=m_i\/\\sum_i m_i$, and $m_i$ and $\\beta_i$ are the masses and parameters of deformation of the particles which form the composite system (body). Note that in the next section we derive this result by considering the kinetic energy of a body consisting of $N$ particles.\n\nFirstly, let us consider the special case $m_i=m_1$ and $\\beta_i=\\beta_1$, when the body consists of identical elementary particles. Then we find\n\\begin{eqnarray}\n\\beta={\\beta_1\\over N^2},\n\\end{eqnarray}\nwhere $N$ is the number of particles of the body with mass $m=Nm_1$.\nNote that expressions (\\ref{soldX}) and (\\ref{solX}) contain the combination $\\sqrt\\beta m$.\nSubstituting the effective parameter of deformation\n$\\beta_1\/N^2$ instead of $\\beta$ we find\n\\begin{eqnarray}\n\\sqrt\\beta m=\\sqrt\\beta_1 m\/N=\\sqrt\\beta_1 m_1.\n\\end{eqnarray}\nAs a result, the trajectory now does not depend on the mass of the macroscopic body but depends on\n$\\sqrt\\beta_1 m_1$, which is the same for bodies of different mass.\nSo, the equivalence principle is recovered.\n\nThe general case when a body consists of different elementary particles is more complicated.\nIn this case a situation is possible where different combinations of elementary particles\nlead to the same mass but with different effective parameters of deformation.\nThen the motion of bodies of equal\nmass but different composition will be different.\nThis also violates the weak equivalence principle.\nThe equivalence principle can be recovered when we suppose that\n\\begin{eqnarray}\\label{gamma}\n\\sqrt\\beta_1 m_1=\\sqrt\\beta_2 m_2=\\dots=\\sqrt\\beta_N m_N=\\gamma.\n\\end{eqnarray}\nIndeed, in this case the effective parameter of deformation for a macroscopic body is\n\\begin{eqnarray}\n\\beta=\\sum_i{m_i^3\\over(\\sum_i m_i)^3}\\beta_i={\\gamma^2\\over(\\sum_i m_i)^2}={\\gamma^2\\over m^2}\n\\end{eqnarray}\nand thus\n\\begin{eqnarray}\n\\sqrt\\beta m=\\gamma,\n\\end{eqnarray}\nwhich is the same as (\\ref{gamma}).\nNote that the trajectory of motion in this case does not depend on the mass and depends only on $\\gamma$,\nwhich takes the same value for all bodies.\nThis means that bodies of different mass and different composition move in a gravitational field in the same way\nand thus the weak equivalence principle is not violated when (\\ref{gamma}) is satisfied. Equation (\\ref{gamma}) introduces one new fundamental constant $\\gamma$. Note that the parameter $1\/\\gamma$ has the dimension of velocity. 
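\n\nThis composition independence is straightforward to illustrate numerically: for any partition of a body into constituents obeying (\\ref{gamma}), the effective parameter (\\ref{betaN}) depends only on the total mass. The following is a minimal sketch with arbitrary illustrative masses (in arbitrary units).\n\\begin{verbatim}\n# Minimal check of Eq. (betaN): with sqrt(beta_i)*m_i = gamma the effective\n# parameter equals gamma^2/m^2 for every partition of the same total mass.\nimport numpy as np\n\ngamma = 1.0                            # illustrative value only\nfor masses in ([1.0, 2.0, 3.0], [0.5, 0.5, 5.0], [6.0]):\n    m_i = np.array(masses)             # different partitions, total mass m = 6\n    m = m_i.sum()\n    beta_i = (gamma / m_i)**2          # Eq. (gamma): sqrt(beta_i)*m_i = gamma\n    beta_eff = np.sum((m_i / m)**3 * beta_i)\n    print(beta_eff, gamma**2 / m**2)   # identical for every partition\n\\end{verbatim}\n\n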
The parameters of deformation $\\beta_i$ of particles or macroscopic bodies of mass $m_i$ are determined by fundamental constant $\\gamma$ as follows\n\\begin{eqnarray}\\label{bg}\n\\beta_i={\\gamma^2\\over m_i^2},\n\\end{eqnarray}\nSo, the parameter of deformation is completely determined by the mass of a particle.\nIn the next section we derive formula (\\ref{betaN}) on the classical level and give some arguments concerning\nthe relation (\\ref{gamma}).\n\n\\section{Kinetic energy of a composite system in deformed space and parameter of deformation}\n\nIn this section we use the natural statement:\n{\\it The kinetic energy has the additivity property and does not depend on composition of a body but only on its mass.}\n\nFirstly, we consider {\\it the additivity property of the kinetic energy.}\nLet us consider $N$ particles with masses $m_i$ and deformation parameters $\\beta_i$.\nIt is equivalent to the situation when the macroscopic body is divided into $N$ parts which can be treated as point particles with corresponding masses and parameters of deformation.\nWe consider the case when each particle of the system moves with the same velocity as the whole system.\n\nLet us rewrite the kinetic energy as a function of velocity.\nFrom the relation between velocity and momentum (\\ref{dxp}) in the first approximation over $\\beta$\nwe find\n\\begin{eqnarray}\nP=m \\dot X(1-\\beta m^2\\dot X^2).\n\\end{eqnarray}\nThen the kinetic energy as a function of velocity in the first order approximation over $\\beta$ reads\n\\begin{eqnarray}\\label{TV}\nT={m\\dot X^2\\over 2}-\\beta m^3\\dot X^4.\n\\end{eqnarray}\n\nThe kinetic energy of the whole system is given by (\\ref{TV}) where $m=\\sum_i m_i$. On the other hand,\nthe kinetic energy of the whole system is the sum of kinetic energies of particles which constitute the system:\n\\begin{eqnarray}\\label{TVsum}\nT=\\sum_i T_i={m\\dot X^2\\over 2}-\\sum_i\\beta_i m_i^3\\dot X^4,\n\\end{eqnarray}\nwhere we take into account that velocities of all particles are the same as the velocity\nof the whole system $\\dot X_i=\\dot X$, $i=1,\\dots,N$.\nComparing (\\ref{TV}) and (\\ref{TVsum}) we obtain (\\ref{betaN}).\n\nNow let us consider {\\it the independence of kinetic energy on the composition of a body}.\nIt is enough to consider a body of a fixed mass consisting of two parts (particles) with masses $m_1=m\\mu$ and $m_2=m(1-\\mu)$, where $0\\le\\mu\\le1$. Parameters of deformation for the first and second particles are $\\beta_1=\\beta_{\\mu}$ and $\\beta_2=\\beta_{1-\\mu}$, here we write explicitly that\nparameters of deformations are some function of mass ($\\mu=m_1\/m$ is dimensionless mass).\nThe particles with different masses constitute the body with the same mass $m=m_1+m_2$.\nSo, in this situation we have the body of the same mass but with different composition.\n\nThe kinetic energy of the whole body is given by (\\ref{TV}) with the\nparameter of deformation\n\\begin{eqnarray}\\label{Eqbeta}\n\\beta=\\beta_{\\mu}\\mu^3+\\beta_{1-\\mu}(1-\\mu)^3.\n\\end{eqnarray}\nSince the kinetic energy does not depend on the composition, the parameter of deformation for the whole body must be fixed $\\beta={\\rm const}$ for different $\\mu$. 
Thus (\\ref{Eqbeta}) is the equation for $\\beta_{\\mu}$ as a function of $\\mu$ at fixed $\\beta$.\nOne can verify that the solution reads\n\\begin{eqnarray}\n\\beta_{\\mu}={\\beta\\over\\mu^2}.\n\\end{eqnarray}\nTaking into account that $\\mu=m_1\/m$ we find\n\\begin{eqnarray}\n\\beta_1 m_1^2=\\beta m^2\n\\end{eqnarray}\nthat corresponds to (\\ref{gamma}). So, the independence of the kinetic energy from composition leads to the one fundamental constant $\\gamma^2=\\beta m^2$. Then parameters of deformation $\\beta_i$ of particles or composite bodies\nof different masses $m_i$ are\n$\\beta_i=\\gamma^2\/m_i^2$\nthat is in agreement with relation (\\ref{bg}).\n\\section{Conclusions}\nOne of the main results of the paper is the expression for the parameter of deformation\nfor particles or bodies of different mass (\\ref{bg})\nwhich recovers the equivalence principle and thus the equivalence principle is reconciled with the\ngeneralized uncertainty principle. It is necessary to stress that\nexpression (\\ref{bg}) was derived also in section 3 from the\ncondition of the independence of kinetic energy on composition.\n\nNote that (\\ref{bg}) contains the same constant $\\gamma$ for different particles and parameter of deformation\nis inverse to the squared mass.\nThe constant $\\gamma$ has dimension inverse to velocity. Therefore, it is convenient to introduce\na dimensionless constant $\\gamma c$, where $c$ is the speed of light.\nIn order to make some speculations concerning the possible value of $\\gamma c$\nwe suppose that for the electron the parameter of deformation $\\beta_e$ is related to Planck's\nlength, namely\n\\begin{eqnarray}\n\\hbar\\sqrt\\beta_e=l_p=\\sqrt{\\hbar G\/c^3}.\n\\end{eqnarray}\nThen we obtain\n\\begin{eqnarray}\n\\gamma c=c\\sqrt\\beta m_e=\\sqrt{\\alpha{Gm_e^2\\over e^2}}\\simeq 4.2\\times 10^{-23},\n\\end{eqnarray}\nwhere $\\alpha=e^2\/\\hbar c$ is the fine structure constant.\n\nFixing the parameter of deformation for electron we can calculate the\nparameter of deformation for particles or bodies of different mass. It is more instructive to write\nthe minimal length for space where the composite body of mass $m$ lives:\n\\begin{eqnarray}\n\\hbar\\sqrt\\beta={m_e\\over m}\\hbar\\sqrt\\beta_e={m_e\\over m }l_p.\n\\end{eqnarray}\nAs an example let us consider nucleons (proton or\nneutron). The parameter of deformation for nucleons $\\beta_{\\rm nuc}$ or minimal length for nucleons\nreads\n$\\hbar\\sqrt\\beta_{\\rm nuc}\\simeq l_p\/1840.$\nSo, the effective minimal length for nucleons is three order smaller than that for electrons.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nIn many fluid dynamic scenarios the compressibility of a liquid is negligible. This allows for simplifications such that direct numerical simulations can rely on simpler incompressible models. In the context of droplet impingement incompressibility is only justified for small impact speeds. High impact speeds trigger compressibility effects of the liquid droplet which can determine the flow dynamics significantly. Examples for high speed droplet impact scenarios can be found in many industrial applications such as liquid-fueled engines, spray cooling or spray cleaning. In \\cite{Haller2002} it has been shown that incompressible models are not adequate to describe high speed impacts, especially due to the fact that the jetting dynamics are influenced by a developing shock wave in the liquid phase \\cite{Haller2003}. 
The time after the impact of the droplet until jetting occurs is actually smaller than the predicted time of incompressible models due to the shock wave pattern. In \\cite{Haller2002} a compressible sharp-interface model is used for the simulations. However, sharp-interface models become intricate in the presence of changes in droplet topology and contact line motion. For this reason, we introduce a diffuse-interface model in this contribution, namely a compressible Navier--Stokes--Allen--Cahn phase field model which allows for complex interface morphologies and dynamic contact angles.\n\\section{Phase Field Models}\n\\label{sec:model}\nPhase field models form a special class of diffuse-interface models. In contrast to sharp-interface models, the interface has a (small) finite thickness and in the interfacial region the different fluids are allowed to mix. An additional variable, the \\emph{phase field}, is introduced which allows to distinguish the different phases. This concept has the advantage that only one system of partial differential equations on the entire considered domain needs to be solved, whereas for sharp-interface models bulk systems need to be solved which are coupled across the interface by possibly complex conditions. Based on energy principles phase field models can be derived in a thermodynamic framework, see \\cite{Anderson1998,Freistuehler2016} for an overview. They\nfulfill the second law of thermodynamics meaning that the Clausius--Duhem inequality \\cite{Truesdell1952} is fulfilled. In the case of isothermal models this is equivalent to an energy inequality. There are several (quasi-)incompressible \\cite{Lowengrub1998,Abels2012}, compressible \\cite{Blesgen1999,Dreyer2014,Witterstein2010} and recently even incompressible--compressible phase field models \\cite{Repossi2017,Ostrowski2019}. In this section we introduce a compressible Navier--Stokes--Allen--Cahn model.\n\n\\subsection{A Compressible Navier--Stokes--Allen--Cahn system}\n\\label{sec:NSAC}\n\nWe consider a viscous fluid at constant temperature. The fluid is assumed to exist in two phases, a liquid phase denoted by subscript $\\mathrm{L}$ and a vapor phase denoted by subscript $\\mathrm{V}$. In each phase the fluid is thermodynamically described by the corresponding Helmholtz free energy density $\\varrho f_{\\mathrm{L\/V}}(\\varrho)$. The fluid occupies a domain $\\Omega \\subset \\mathbb{R}^d, \\ d\\in \\mathbb{N}$.\nLet $\\varrho >0 $ be the density of the fluid, $\\vec{v} \\in \\mathbb{R}^d$ the velocity and $\\varphi \\in [0,1]$ the phase field. Following \\cite{Dreyer2014} we assume that the dynamics of the fluid is described by the isothermal compressible Navier--Stokes--Allen--Cahn system.\n\\begin{align}\n\\partial_t \\varrho + \\operatorname{div}(\\varrho \\mathbf{v}) &= 0, \\label{eq:NSAC1}\\\\\n\\partial_t(\\varrho \\mathbf{v}) + \\operatorname{div}(\\varrho \\mathbf{v}\\otimes \\mathbf{v}+{p\\mathbf{I}})&= \\operatorname{div}(\\mathbf{S}) - \\gamma \\operatorname{div}(\\nabla \\varphi\\otimes \\nabla \\varphi) \\ \\text{ in } \\Omega \\times (0,T), \\label{eq:NSAC2}\\\\\n\\partial_t(\\varrho \\varphi) + \\operatorname{div}(\\varrho \\varphi \\mathbf{v}) &= -\\eta \\mu. 
\\label{eq:NSAC3} \n\\end{align}\nThe Helmholtz free energy density $\\varrho f$ is defined as \n\\begin{align}\n\\varrho f(\\varrho,\\varphi,\\nabla\\varphi) &= h(\\varphi)\\varrho f_\\mathrm{L}(\\varrho)+(1-h(\\varphi)) \\varrho f_\\mathrm{V}(\\varrho) + \\frac{1}{\\gamma}W(\\varphi) + \\frac{\\gamma}{2}|\\nabla\\varphi|^2 \\label{eq:rhof} \\\\\n&=: \\varrho \\psi(\\varphi,\\varrho) + \\frac{1}{\\gamma}W(\\varphi) + \\frac{\\gamma}{2} |\\nabla \\varphi|^2.\n\\end{align}\nIt consists of the interpolated free energy densities $\\varrho f_{\\mathrm{L\/V}}$ of the pure liquid and vapor phases with the nonlinear interpolation function \n\\begin{align}\\label{eq:def_h}\nh(\\varphi) = 3 \\varphi^2 - 2 \\varphi^3,\n\\end{align}\nand a mixing energy \\cite{Cahn1958} using the double well potential $W(\\varphi)=\\varphi^2(1-\\varphi)^2$.\n\nThe hydrodynamic pressure $p$ is determined through the Helmholtz free energy $\\varrho f$ by the thermodynamic relation\n\\begin{equation}\\label{eq:pdef}\np=p(\\varrho,\\varphi) = -\\varrho f(\\varrho,\\varphi)+\\varrho \\frac{\\partial (\\varrho f)}{\\partial \\varrho}(\\varrho, \\varphi).\n\\end{equation}\nWe define the generalized chemical potential \n\\begin{equation}\n\\mu = \\frac{1}{\\gamma}W'(\\varphi)+ \\frac{\\partial (\\varrho\\psi)}{\\partial \\varphi}-\\gamma \\Delta \\varphi,\n\\end{equation}\nwhich steers the phase field variable into equilibrium.\nAdditionally, we denote by $\\eta>0$ the (artificial) mobility.\n\nThe dissipative viscous part of the stress tensor reads as $\\mathbf{S}=\\mathbf{S}(\\varphi,\\nabla\\mathbf{v})= \\nu(\\varphi) (\\nabla \\mathbf{v}+ \\nabla \\mathbf{v}^\\top - \\operatorname{div}(\\mathbf{v})\\mathbf{I})$ with an interpolation of the viscosities $\\nu_{\\mathrm{L\/V}}$ of the pure phases $\\nu(\\varphi) = h(\\varphi)\\nu_\\mathrm{L} + (1-h(\\varphi))\\nu_\\mathrm{V} > 0$.\n\nThe total energy of the system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} at time $t$ is defined as\n\\begin{align}\n\\label{eq:etot}\nE(t) &\\mathrel{\\vcenter{\\baselineskip0.5ex \\lineskiplimit0pt\n \\hbox{\\scriptsize.}\\hbox{\\scriptsize.}}}\n = E_\\mathrm{free}(t) + E_\\mathrm{kin}(t) \\nonumber\\\\ \n &= \\int_\\Omega \\varrho(\\vec{x},t) f(\\varrho(\\vec{x},t),\\varphi(\\vec{x},t),\\nabla \\varphi(\\vec{x},t)) + \\frac{1}{2} \\varrho(\\vec{x},t) |\\mathbf{v}(\\vec{x},t)|^2 \\operatorname{~ d\\!} \\mathbf{x}.\n\\end{align}\n\n\\begin{remark} \\\n\\begin{enumerate}\n\\item \nThe phase field $\\varphi$ is in general an artificial variable; however, in this case it can be viewed as a mass fraction $\\varphi = \\frac{m_\\mathrm{V}}{m},$ with the mass $m_\\mathrm{V}$ of the vapor constituent and the total mass $m$ of the fluid.\n\\item \nThe special form of the nonlinear interpolation function $h$ with $h'(0) = h'(1) = 0$ guarantees that $\\eqref{eq:NSAC1}-\\eqref{eq:NSAC3}$ allows for physically meaningful equilibria. This can be easily seen by considering a static single-phase equilibrium $\\vec{v} = \\boldsymbol{0}, \\varphi \\equiv 0$. If we had $h'(0)\\neq 0$, the right hand side of the phase field equation \\eqref{eq:NSAC3} would not vanish.\n\\end{enumerate}\n\\end{remark}\nAssuming an impermeable wall, the velocity must satisfy the boundary condition\n\\begin{equation}\n\\mathbf{v}\\cdot \\mathbf{n} = 0 \\quad \\text{ on $\\partial \\Omega$}. 
\\label{eq:bc1}\n\\end{equation}\nAdditionally, the system is endowed with initial conditions\n\\begin{align}\n\\label{eq:IC}\n\\varrho = \\varrho_0, \\quad \\vec{v} = \\vec{v}_0, \\quad \\varphi = \\varphi_0 \\quad \\text{ on } \\Omega \\times \\{0\\},\n\\end{align}\nusing suitable functions $(\\varrho_0,\\vec{v}_0, \\varphi_0)\\colon \\Omega \\to \\mathbb{R}^+ \\times \\mathbb{R}^d \\times [0,1]$.\n\nHowever, in order to close the system \\eqref{eq:bc1} does not suffice. In the following section we derive a complete set of boundary conditions that allow for moving contact lines (MCL).\n\n\n\\subsection{Boundary Conditions}\n\\label{sec:bc}\nThe system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} needs to be complemented with initial and boundary conditions. We are interested in MCL problems. With a sharp interface point of view, the contact line is the intersection of the liquid-vapor interface with the solid wall. The requirement of a contact line moving along the wall renders the derivation of boundary conditions nontrivial. Figure \\ref{fig:MCL} depicts a sketch of a compressible droplet impact scenario with the rebound shock wave dynamics and a moving contact line.\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.649\\textwidth]{MCL.pdf}\n\\caption{Sketch of a compressible droplet impingement on a flat wall with moving contact line.}\n\\label{fig:MCL}\n\\end{figure}\nWe derive appropriate boundary conditions to handle MCL problems with the phase field system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} in this section. \n\n\n\nFor the incompressible case so called \\emph{general Navier boundary conditions} (GNBC) have been derived \\cite{Qian2003,QIAN2006}. Motivated by these works we extend GNBC to the compressible case.\n\n\nBecause phase field modelling goes well with energy principles we add a wall free energy term $\\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s$ to the total energy $E$ from \\eqref{eq:etot} and obtain \n\\begin{align}\\label{eq:total_energy}\nE_{\\mathrm{tot}}(t) &= E(t) + E_{\\mathrm{wall}}(t) \\nonumber\\\\\n&= \\int_\\Omega \\varrho(t) f(\\varrho(t),\\varphi(t),\\nabla \\varphi(t)) + \\frac{1}{2} \\varrho(t) |\\mathbf{v}(t)|^2 \\operatorname{~ d\\!} \\mathbf{x} + \\int_{\\partial \\Omega} g(\\varphi(t)) \\operatorname{~ d\\!} s.\n\\end{align}\n\nHere $g(\\varphi)$ is the interfacial free energy per unit area at the fluid-solid boundary depending only on the local composition \\cite{QIAN2006}. The specific choice for $g$ is motivated by Young's equation.\nWith a sharp interface point of view we have\n\\begin{align}\\label{eq:young}\n\\sigma \\cos(\\theta_s)= \\sigma_\\mathrm{S}-\\sigma_\\mathrm{LS},\n\\end{align}\nwith the surface free energy $\\sigma$ of the liquid, the static contact angle $\\theta_\\mathrm{s}$, surface free energy $\\sigma_\\mathrm{S}$ of the solid, and interfacial free energy $\\sigma_{\\mathrm{LS}}$ between liquid and solid, see Figure \\ref{fig:young}. We prescribe the difference in energy for $g$, i.e.\n\\begin{align}\n\\sigma_\\mathrm{S}-\\sigma_\\mathrm{LS}=g(0)-g(1).\n\\end{align}\n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.6\\textwidth]{young_equation.pdf}\n\\caption{Illustration of Young's equation $\\sigma \\cos(\\theta_s)= \\sigma_\\mathrm{S}-\\sigma_\\mathrm{LS}.$}\n\\label{fig:young}\n\\end{figure}\n \nThen, we choose a smooth interpolation between the values $\\pm\\frac{\\Delta g}{2} = \\pm \\frac{g(1)-g(0)}{2}$. 
However, it was shown in \\cite{Qian2003} that the choice of the kind interpolation has no large impact. Hence, for reasons of consistency we use $h$ as interpolation function.\nWith \\eqref{eq:young} we obtain \n\\begin{equation}\ng(\\varphi) \\mathrel{\\vcenter{\\baselineskip0.5ex \\lineskiplimit0pt\n \\hbox{\\scriptsize.}\\hbox{\\scriptsize.}}}%\n = -\\sigma\n\\cos(\\theta_s) \\left(h(\\varphi)-\\frac{1}{2}\\right).\n\\end{equation} \nA variation $\\delta \\varphi$ of $\\varphi$ leads to a variation $\\delta E_\\mathrm{tot}$ of the total energy \\eqref{eq:total_energy}, that is\n\\begin{align*}\n\\delta E_\\mathrm{tot} = \\int_\\Omega \\mu \\delta \\varphi \\operatorname{~ d\\!} \\vec{x} - \\int_{\\partial \\Omega} L(\\varphi)\\frac{\\partial\\varphi}{\\partial_{\\boldsymbol{\\tau}}} \\delta\\varphi_{\\boldsymbol{\\tau}}.\n\\end{align*}\nHere,\n\\[L(\\varphi) \\coloneqq \\gamma \\frac{\\partial \\varphi}{\\partial \\mathbf{n}}+g'(\\varphi)\\] \ncan be interpreted as uncompensated Young stress \\cite{Qian2003}. The boundary tangential vector is denoted by $\\boldsymbol{\\tau}$ and $\\vec{n}$ denotes the outer normal. Thus, $L(\\varphi)=0$ is the Euler--Lagrange equation at the fluid-solid boundary for minimizing the total energy \\eqref{eq:total_energy} with respect to the phase field variable. We assume a boundary relaxation dynamics for $\\varphi$ given by\n\\begin{equation}\n\\partial_t \\varphi + \\mathbf{v}\\cdot \\nabla_{\\boldsymbol{\\tau}} \\varphi = -\\frac{\\alpha}{\\varrho} L(\\varphi),\n\\end{equation}\nwith a relaxation parameter $\\alpha>0$. Here $\\nabla_{\\boldsymbol{\\tau}} \\mathrel{\\vcenter{\\baselineskip0.5ex \\lineskiplimit0pt\n \\hbox{\\scriptsize.}\\hbox{\\scriptsize.}}}%\n = \\nabla-(\\mathbf{n}\\cdot\\nabla)\\mathbf{n}$ is the gradient along the tangential direction. \n Since $\\mathbf{v} \\cdot \\mathbf{n} = 0$, we have $\\mathbf{v}\\cdot \\nabla_{\\boldsymbol{\\tau}}\\varphi = v_{\\boldsymbol{\\tau}}\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}}$,\nand finally we obtain\n\\begin{align}\n\\partial_t \\varphi + v_{\\boldsymbol{\\tau}}\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} &= -\\frac{\\alpha}{\\varrho} L(\\varphi) \\quad \\text{ on $\\partial \\Omega$}.\\label{eq:bc3}\n\\end{align}\n\nIn order to complete the derivation of the GNBC we incorporate a slip velocity boundary condition. In single phase models the slip velocity is often taken proportional to the tangential viscous stress. However, in our case we also have to take the uncompensated Young stress into account. In \\cite{Qian2003} it is shown from molecular dynamic simulations that the slip velocity should be taken proportional to the sum of the tangential viscous stress and the uncompensated Young stress.\nHence, with the slip length $\\beta > 0$ we prescribe the boundary condition\n\\begin{equation}\n\\beta v_{\\boldsymbol{\\tau}} + \\nu(\\varphi) \\frac{\\partial v_{\\boldsymbol{\\tau}}}{\\partial \\mathbf{n}} - L(\\varphi)\n\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} =0 \\quad \\text{ on $\\partial \\Omega$}. 
\\label{eq:bc2}\n\\end{equation}\n\nAway from the interface the last term in \\eqref{eq:bc2} drops out and we have the classical Navier-slip condition but in the interface region the additional term acts and allows for correct contact line movement.\n\nIn summary we obtain the following GNBC for the MCL problem\n\\begin{align}\n\\mathbf{v}\\cdot \\mathbf{n} &= 0, \\label{eq:gnbc1} \\\\\n\\beta v_{\\boldsymbol{\\tau}} + \\nu(\\varphi) \\frac{\\partial v_{\\boldsymbol{\\tau}}}{\\partial \\mathbf{n}} - L(\\varphi)\n\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} &=0, \\hspace*{6ex} \\text{ on $\\partial \\Omega$.} \\label{eq:gnbc2} \\\\\n\\partial_t \\varphi + v_{\\boldsymbol{\\tau}}\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} &= -\\frac{\\alpha}{\\varrho} L(\\varphi)\\label{eq:gnbc3}\n\\end{align}\n\nThe GNBC \\eqref{eq:gnbc1}, \\eqref{eq:gnbc2}, \\eqref{eq:gnbc3} contain certain subcases. For $\\alpha \\to \\infty$ we obtain the static contact angle boundary condition and with $\\beta \\to \\infty$ we end up with no-slip boundray conditions.\n\n\n\\subsection{Energy Inequality}\n\\label{sec:energy_ineq}\nFor isothermal models thermodynamical consistency means to verify that solutions of the problem at hand admit an energy inequality. Precisely, we have for the system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} the following result.\n\\begin{thm}[Energy inequality]\\label{thm:energy_ineq}\nLet $(\\varrho,\\mathbf{v},\\varphi)$ with values in $(0,\\infty)\\times \\mathbb{R}^d \\times [0,1]$ be a classical solution of \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} in $(0,T) \\times \\Omega$ satisfying the boundary conditions \\eqref{eq:gnbc1} - \\eqref{eq:gnbc3} on $(0,T) \\times \\partial \\Omega$. Then for all $t \\in (0,T)$ the following energy inequality holds:\n\\begin{flalign} \\label{eq:energy_ineq}\n&\\frac{\\operatorname{d}}{\\operatorname{~ d\\!} t} E_\\mathrm{tot}(t) = \\frac{\\operatorname{d}}{\\operatorname{~ d\\!} t} (E_\\mathrm{free}(t) + E_\\mathrm{kin}(t) +E_\\mathrm{wall}(t)) & \\nonumber\\\\\n=&\\frac{\\operatorname{d}}{\\operatorname{~ d\\!} t} \\left(\\int_\\Omega \\varrho f(\\varrho,\\mathbf{v},\\varphi,\\nabla \\varphi) + \\frac{1}{2} \\varrho |\\mathbf{v}|^2 \\operatorname{~ d\\!} \\mathbf{x} + \\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s\\right) & \\nonumber \\\\ \n=&-\\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\mathbf{x} - \\int_\\Omega \\mathbf{S} \\colon \\nabla \\mathbf{v} \\operatorname{~ d\\!} \\mathbf{x} \\nonumber\\\\\n&- \\int_{\\partial \\Omega} \\beta |v_{\\boldsymbol{\\tau}}|^2 \\operatorname{~ d\\!} s - \\int_{\\partial \\Omega} \\frac{\\alpha}{\\varrho} |L(\\varphi)|^2 \\operatorname{~ d\\!} s \\leq 0.\n\\end{flalign}\n\\end{thm}\nAs expected the energy inequality renders phase transition, viscosity, wall slip, and composition relaxation at the solid interface to be drivers of energy with respect to entropy dissipation.\n\\begin{proof}\nIn a straightforward way we compute:\n\n\\begin{align*}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& \\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} \\left(\\int_\\Omega \\varrho f(\\varrho,\\varphi,\\nabla\\varphi) + \\frac{1}{2}\\varrho |\\vec{v}|^2 \\operatorname{~ d\\!}\\vec{x} + \\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s \\right) \\\\\n=& \\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} \\left(\\int_\\Omega\\frac{1}{\\gamma} W(\\varphi) + \\varrho\\psi(\\varrho,\\varphi) + \\frac{\\gamma}{2} 
|\\nabla\\varphi|^2 + \\frac{1}{2}\\varrho |\\vec{v}|^2 \\operatorname{~ d\\!} \\vec{x}+ \\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s \\right)\\\\\n=& \\int_\\Omega \\varphi_t \\left(\\frac{1}{\\gamma}W'(\\varphi)+\\frac{\\partial (\\varrho \\psi)}{\\partial \\varphi}-\\gamma\\Delta \\varphi\\right) + \\varrho_t \\left(\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho} - \\frac{1}{2} |\\vec{v}|^2\\right) + (\\varrho\\vec{v})_t \\cdot \\vec{v} \\operatorname{~ d\\!}\\vec{x} \\\\\n&+ \\int_{\\partial \\Omega} \\varphi_t(g'(\\varphi) + \\gamma \\nabla \\varphi \\cdot \\vec{n}) \\operatorname{~ d\\!} s. \\\\\n\\intertext{Now we use \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} to replace the time derivatives in the volume integrals. Using \\eqref{eq:pdef} we obtain after basic algebraic manipulations}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& - \\int_\\Omega \\operatorname{div}(\\varrho\\vec{v})\\left(\\frac{\\partial (\\varrho\\psi)}{\\partial\\varrho}-\\frac{1}{2}|\\vec{v}|^2\\right) + \\operatorname{div}(\\varrho \\vec{v}\\otimes\\vec{v})\\cdot\\vec{v} \\operatorname{~ d\\!} \\vec{x} - \\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\vec{x} \\\\\n&- \\int_\\Omega \\vec{v}\\cdot \\varrho \\nabla\\left(\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho}\\right) - \\operatorname{div}(\\vec{S})\\cdot\\vec{v} \\operatorname{~ d\\!} \\vec{x}\n+ \\int_{\\partial \\Omega} \\varphi_t L(\\varphi) \\operatorname{~ d\\!} s. \\\\ \\intertext{We integrate by parts and have}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& - \\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\vec{x} - \\int_\\Omega \\vec{S}\\colon \\nabla \\vec{v} \\operatorname{~ d\\!} \\vec{x}+ \\int_{\\partial \\Omega} \\varphi_t L(\\varphi) \\operatorname{~ d\\!} s \\\\\n&+ \\int_{\\partial \\Omega} \\vec{S}\\vec{v}\\cdot\\vec{n} - \\varrho\\vec{v}\\left(\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho}+\\frac{1}{2}|\\vec{v}|^2\\right)\\cdot\\vec{n}\\operatorname{~ d\\!} s. \\\\ \\intertext{With the boundary conditions \\eqref{eq:gnbc1}-\\eqref{eq:gnbc3} we finally obtain}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& - \\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\vec{x} - \\int_\\Omega \\vec{S}\\colon \\nabla \\vec{v} \\operatorname{~ d\\!} \\vec{x} - \\int_{\\partial \\Omega} \\beta |v_{\\boldsymbol{\\tau}}|^2 \\operatorname{~ d\\!} s - \\int_{\\partial \\Omega} \\frac{\\alpha}{\\varrho} |L(\\varphi)|^2 \\operatorname{~ d\\!} s.\n\\end{align*}\nThis concludes the proof.\n\\end{proof}\n\n\n\\subsection{Surface Tension}\n\\label{sec:surface_tens}\nThere are different interpretations of surface tension. It can be either viewed as a force acting in tangential direction of the interface or as excess energy stored in the interface \\cite{Jamet2002}.\nIn line with our energy-based derivation we consider a planar equilibrium profile and integrate the excess free energy density over this profile.\nWe assume that static equilibrium conditions hold, i.e. $\\vec{v} = \\boldsymbol{0}.$\nThe planar profile is assumed to be parallel to the $x$-axis and density, velocity and phase field are independent from $t, y,$ and $z$. \nThen the equilibrium is governed by the solution of the following boundary value problem on the real line. 
\n\nFind $\\varrho=\\varrho(x), \\varphi=\\varphi(x)$ such that\n\\begin{align}\n\\left(-\\varrho\\psi-\\frac{1}{\\gamma}W(\\varphi) - \\frac{\\gamma}{2}\\varphi_x^2 +\\varrho \\frac{\\partial (\\varrho \\psi)}{\\partial\\varrho}\\right)_x &= -{\\gamma(\\varphi_x^2)}_x, \\label{eq:eq1} \\\\\n\\frac{1}{\\gamma} W'(\\varphi) + \\frac{\\partial (\\varrho \\psi)}{\\partial \\varphi} - \\gamma \\varphi_{xx} &= 0,\\label{eq:eq2}\n\\end{align}\nand\n\\begin{equation}\n\\varrho(\\pm\\infty) = \\varrho_\\mathrm{V\/L}, \\quad \\varphi(-\\infty) = 0 , \\quad \\varphi(\\infty) = 1, \\quad \\varphi_x(\\pm\\infty)=0. \\label{eq:eqbc}\n\\end{equation}\n\nMultiplying \\eqref{eq:eq2} with $\\varphi_x$ and substracting from \\eqref{eq:eq1} yields\n\\begin{equation}\n\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho} = const. \n\\label{eq:muconst}\n\\end{equation}\nMultiplying \\eqref{eq:eq2} with $\\varphi_x$, integrating from $-\\infty$ to some $x\\in \\mathbb{R}$ using \\eqref{eq:eq1} and \\eqref{eq:eqbc} leads to\n\n\\begin{equation}\n\\frac{1}{\\gamma} W(\\varphi(x)) + \\varrho(x)\\psi(\\varrho(x),\\varphi(x)) - \\varrho_\\mathrm{V}(x)\\psi(\\varrho_\\mathrm{V}(x),0) = \\frac{\\gamma}{2}\\varphi_x^2(x).\n\\label{eq:surfacetenshelp}\n\\end{equation}\nFrom \\eqref{eq:surfacetenshelp} we obtain for $x\\to \\infty$\n\\begin{equation}\n\\varrho_\\mathrm{L}\\psi(\\varrho_\\mathrm{L},1) = \\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0) \\eqqcolon \\overline{\\varrho\\psi}.\n\\label{eq:surfacetenshelp1}\n\\end{equation}\n\nAs mentioned before, surface tension can be defined by means of excess free energy. Roughly speaking an excess quantity is the difference of the quanity in the considered system and in a (sharp interface) reference system where the bulk values are maintained up to a dividing interface.\nThe interface position $x_0$ is determined by vanishing excess density.\n\n\nIn summary we define surface tension $\\sigma$ via the relationship\n\\begin{align}\n\\sigma =& \\int_{-\\infty}^{x_0} \\varrho f(\\varrho,0,\\varphi,\\varphi_x) - \\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0) \\operatorname{~ d\\!} x \\nonumber \\\\\n&+\\int_{x_0}^\\infty \\varrho f(\\varrho,0,\\varphi,\\varphi_x) - \\varrho_\\mathrm{L}\\psi(\\varrho_\\mathrm{L},1) \\operatorname{~ d\\!} x, \\\\\n\\intertext{where $(\\varrho,\\varphi)$ is a solution of \\eqref{eq:eq1}-\\eqref{eq:eqbc}. Using \\eqref{eq:surfacetenshelp} we have}\n\\sigma =& \\int_{-\\infty}^{x_0} \\gamma \\varphi_x^2 \\operatorname{~ d\\!} x + \\int_{x_0}^\\infty \\gamma \\varphi_x^2 + (\\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0)-\\varrho_\\mathrm{L}\\psi(\\varrho_\\mathrm{L},1)) \\operatorname{~ d\\!} x. \\\\ \n\\intertext{With \\eqref{eq:surfacetenshelp1} it follows}\n\\sigma =& \\int_{-\\infty}^{\\infty} \\gamma \\varphi_x^2 \\operatorname{~ d\\!} x = \\sqrt{2} \\int_{\\varphi_\\mathrm{V}}^{\\varphi_\\mathrm{L}} \\sqrt{W(\\varphi)+\\gamma(\\varrho\\psi(\\varrho(\\varphi),\\varphi)- \\overline{\\varrho\\psi})} \\operatorname{~ d\\!} \\varphi.\n\\end{align}\n\n\n\nIn the last step we used the transformation from $x$ to $\\varphi$ integration. This is possible since $\\varrho$ can be written in dependence on $\\varphi$: Assuming convex free energies $\\varrho f_\\mathrm{L\/V}$ in \\eqref{eq:rhof}, we have convex $\\varrho\\psi$ in $\\varrho$ and from \\eqref{eq:muconst} follows with the implicit function theorem $\\varrho = \\varrho(\\varphi)$.\n\nOne can see that the surface tension is mainly dictated by the double well potential $W(\\varphi)$. 
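\nIndeed, keeping only the double well contribution under the square root, the integral can be evaluated in closed form,\n\\begin{equation*}\n\\sqrt{2} \\int_{0}^{1} \\sqrt{W(\\varphi)} \\operatorname{~ d\\!} \\varphi = \\sqrt{2} \\int_{0}^{1} \\varphi(1-\\varphi) \\operatorname{~ d\\!} \\varphi = \\frac{\\sqrt{2}}{6},\n\\end{equation*}\nwhich is the value produced by the mixing energy alone.\n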
There is a contribution due to the equations of state of the different phases; however, in the sharp interface limit, i.e. $\\gamma \\to 0$, this contribution vanishes. This differs from (quasi-)incompressible models like \\cite{Lowengrub1998}, where there is no contribution due to the equations of state and the surface tension is purely determined by the double well function. Of course, surface tension is a material parameter given by physics, depending on the fluids and walls considered. Therefore, in simulations the double well should be scaled accordingly to yield the correct surface tension.\n\n\n\\section{Numerical Experiments}\n\\label{sec:num_exp}\nThe phase field system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} is of mixed hyperbolic-parabolic type. This complicates the derivation of discretization methods. An appropriate choice is discretizations based on discontinuous Galerkin methods. In fact, even versions which reproduce the energy dissipation precisely are available \\cite{Giesselmann2015a,Repossi2017,Kraenkel2018}. The key idea behind those schemes is to achieve stabilization through the exact approximation of the energy, meaning that the energy inequality \\eqref{eq:energy_ineq} should be fulfilled exactly on the discrete level without introducing numerical dissipation. This helps to prevent an increase of energy and possibly associated spurious currents. Additionally, the schemes are designed such that they preserve the total mass. Motivated by \\cite{Giesselmann2015a,Kraenkel2018} we derived such a scheme for the system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3};\nfor details we refer to \\cite{Ostrowski2018}.\nIn the following we present three numerical simulations using this scheme.\n\n\n\n\\subsection{Choice of Parameters}\nFor the equations of state in the bulk phases, we choose stiffened gas equations\n\\begin{align*}\n\\varrho f_\\mathrm{L\/V}(\\varrho) = \\alpha_\\mathrm{L\/V} \\varrho \\ln(\\varrho) + (\\beta_\\mathrm{L\/V}-\\alpha_\\mathrm{L\/V})\\varrho + \\gamma_\\mathrm{L\/V},\n\\end{align*}\nwith parameters $\\alpha_\\mathrm{L\/V} > 0, \\beta_\\mathrm{L\/V} \\in \\mathbb{R}, \\gamma_\\mathrm{L\/V} \\in \\mathbb{R}$.\nIn order to avoid preferring one of the phases, we choose the minima of the two free energies to be at the same height.\n\nDue to surface tension the density inside a liquid droplet is slightly higher than the value which minimizes $\\varrho f_\\mathrm{L}$. The value of the surrounding vapor is slightly lower than the minimizer of $\\varrho f_\\mathrm{V}$. We choose the initial density profile accordingly. For the bulk viscosities we set $\\nu_\\text{L} = 0.0125$ and $\\nu_\\text{V} = 0.00125$. If not stated otherwise, the capillary parameter is taken as $\\gamma=5\\cdot 10^{-4}$ and the mobility as $\\eta=10$. The polynomial order of the DG polynomials is $2$.\n\n\n\\subsection{Merging Droplets}\\label{sec:ex1}\n\nIn order to illustrate that phase field models are able to handle topological changes, we consider the example of two merging droplets. Initially we have no velocity field, $\\vec{v}_0 = \\boldsymbol{0}$, and look at two kissing droplets. The computational domain is $[0,1]\\times[0,1]$. The droplets are located at $(0.39,0.5)$ and $(0.6,0.5)$ with radii $0.08$ and $0.12$. The parameters for the equations of state are $\\alpha_\\mathrm{L} = 5 ,\\beta_\\mathrm{L}= -4, \\gamma_\\mathrm{L}= 11,\\alpha_\\mathrm{V} = 1.5 ,\\beta_\\mathrm{V}= 1.8, \\gamma_\\mathrm{V}=0.324$. 
The inital density profile is smeared out with value $\\varrho_\\mathrm{L}=2.23$ inside and $\\varrho_\\mathrm{V} = 0.3$ outside the droplet. As expected the droplets merge into one larger droplet. This evolution with $\\eta = 10$ is depicted in Figure \\ref{fig:merging}.\n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.9\\textwidth]{merging_190924.png}\n\\caption{Merging droplets. Density $\\varrho$ at times $t=0$, $t=0.2$, and $t=2$ for $\\eta = 10.$}\n\\label{fig:merging}\n\\end{figure}\n\nWe can observe that the model handles topological changes easily. However, the dynamics of the phase field relaxation are determined by the mobility $\\eta$ which needs to be chosen according to the problem. This is illustrated in Figure \\ref{fig:mergingenergy}, where the total energy over time for different values of the mobility $\\eta$ is plotted.\n\\begin{SCfigure}\n\\centering\\includegraphics[height=.4\\textwidth]{merging_energy.png}\n\\caption{Total energy $E_\\mathrm{tot}$ over time for droplet merging simulation with different values for the mobility $\\eta$.}\n\\label{fig:mergingenergy}\n\\end{SCfigure}\n\nThe numerical scheme is designed to mimic the energy inequality \\eqref{eq:energy_ineq} on the discrete level. The discrete energy decreases, as expected from \\eqref{eq:energy_ineq} the higher the value of $\\eta$, the higher the energy dissipation. \n\n\n\\subsection{Contact Angle}\n\\label{sec:ex2}\nIn this example we adress droplet wall interactions. We consider the case of static contact angle. This means we let the relaxation parameter $\\alpha$ in \\eqref{eq:bc3} tend to infinity.\nIn the limit we obtain the static contact angle boundary conditions:\n\nWe set the static contact angle $\\theta_s = 0.1\\pi \\approx 18^\\circ$. The computational domain, density values and EOS parameters are like in Section \\ref{sec:ex1}. As initial condition we use a droplet sitting on a flat surface with a contact angle of $90^\\circ$. The droplet position is $(0.5,0)$ with radius $0.2$. Since the initial condition is far away from equilibrium we have dynamics on the wall-boundary towards the equilibrium configuration. Thus, we can observe a wetting dynamic, see Figure \\ref{fig:wetting}. \n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.9\\textwidth]{contact_angle190924.png}\n\\caption{Wetting of smooth wall with (GNBC) boundary conditions for the static limit $\\alpha\\to\\infty$ and contact angle $\\theta_s=0.1\\pi$. Density $\\varrho$ at $t=0$ and $t=0.9$.}\n\\label{fig:wetting}\n\\end{figure}\n\nThe wall contribution leads to a large force on the boundary, which renders the system stiff. Although we have an implicit scheme we increased the interface width to be able to handle the boundary terms. Hence, we chose in this simulation $\\gamma = 10^{-2}$.\n\n\n\\subsection{Droplet Impingement}\n\\label{sec:ex3}\nWith this example we consider droplet impingement. The computational domain is the same as in Section \\ref{sec:ex1}. As initial condition we use a droplet at $(0.5,0.2)$ with radius $0.1$.\nThe parameters for the equations of states are $\\alpha_\\mathrm{L} = 5 ,\\beta_\\mathrm{L}= -0.8, \\gamma_\\mathrm{L}= 5.5,\\alpha_\\mathrm{V} = 1.5 ,\\beta_\\mathrm{V}= 1.8, \\gamma_\\mathrm{V}=0.084$. 
The inital density profile is smeared out with value $\\varrho_\\mathrm{L}=1.2$ inside and $\\varrho_\\mathrm{V} = 0.3$ outside the droplet.\nIn contrast to sharp interface models based on the Navier--Stokes equations, phase field models can still have contact line movement even if no-slip conditions are used. This is due to the fact that the contact line is regularized and the dynamics are driven by evolution in the phase field variable rather than advective transport. This can be seen in Figure \\ref{fig:impact} where a droplet impact with noslip conditions is simulated. This is a special case of the GNBC, with $\\alpha \\to \\infty$ and $\\beta\\to \\infty$. \n\n\\begin{figure}[h!]\n\\centering\\includegraphics[width=.86\\textwidth]{impact_mu_comparison_crop}\n\\caption{\nDroplet impact simulation. Density $\\rho$ and chemical potential $\\mu$ at times $t=0.005, t=0.13,t=0.21$.\n}\n\\label{fig:impact}\n\\end{figure}\n\nIt can be seen that the generalized chemical potential $\\mu$ is low at the contact line which leads to fast dynamics in the phase field. This leads to a moving contact line. Additionally, we can see the (smeared out) shock waves in the vapor phase and also in the liquid phase where the shocks move faster due to a higher speed of sound in the liquid phase.\n\n\n\\section{Summary and Conclusions}\nIn this work we presented a phase field approach to model and simulate compressible droplet impingement scenarios. To be precise, we introduced a compressible Navier-Stokes-Allen-Cahn model in Section \\ref{sec:NSAC}. We discussed modelling aspects, with emphasis on the energy-based derivation. We highlighted the connection of thermodynamic consistency with an energy inequality. Further, we proved in Theorem \\ref{thm:energy_ineq} that solutions to the system fulfill this inequality. Surface tension can be interpreted as excess free energy. We quantified the amount of surface tension present in the model in Section \\ref{sec:surface_tens}. Moving contact line problems need special attention with respect to boundary conditions. Hence, physical relevant boundary conditions were derived as Generalized Navier Boundary Conditions in Section \\ref{sec:bc}. In Section \\ref{sec:num_exp} numerical examples were given. In future work we implement the general, dynamic version of the GNBC to obtain jetting phenomena in the impact case.\n\n\n\n\n\\section*{Acknowledgments}\nThe authors kindly acknowledge the financial support of this work by the Deutsche Forschungsgemeinschaft (DFG)\nin the frame of the International Research Training Group \"Droplet Interaction Technologies\" (DROPIT).\n\n\n\\begin{small}\n\\bibliographystyle{abbrv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}