diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlxqw" "b/data_all_eng_slimpj/shuffled/split2/finalzzlxqw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlxqw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{images\/splash-figure.pdf}\n \\caption{\n %\n Example interaction between an annotator and the models in the loop. The annotator selects an answer from the passage, for which the Generative Annotation Assistant (GAA) prompts a question. The annotator can then freely modify the question and\/or answer, or generate another prompt. In the adversarial data collection setting, a model-in-the-loop provides predictions with the aim of encouraging annotators to find model-fooling examples. In the answer prompting setting, an answer suggestion is prompted by the assistive model instead of being selected by the annotator.\n %\n } \n \\label{fig:splash}\n\\end{figure}\n\nNatural language processing has become increasingly reliant on large datasets obtained using crowd sourcing.\nHowever, crowdsourcing as an unconstrained annotation approach is known to result in machine-exploitable annotator artefacts~\\cite{jia2017adversarial,schwartz2017effect,gururangan2018annotation, geva-etal-2019-modeling}, leading to poor out-of-distribution generalisation~\\cite{chen-etal-2016-thorough, weissenborn-etal-2017-making, Yogatama2019LearningAE, mccoy-etal-2019-right}.\nDynamic Adversarial Data Collection~(DADC) aims to address these issues by introducing state-of-the-art models into the data collection loop and asking human annotators to produce examples that these models find challenging~\\cite{kiela2021dynabench}.\nThe intuition behind this approach is that it leads human annotators to better explore the space of possible examples.\nPrevious work has found that DADC leads to improved model robustness on adversarial datasets~\\cite{nie-etal-2020-adversarial,bartolo2020beat}, increased sample diversity~\\cite{bartolo2020beat,wallace2021analyzing}, better training data \\cite{wallace2021analyzing} and better domain generalisation~\\cite{bartolo2021improving}.\n\nDespite these advantages, a downside to DADC is that it increases the human effort necessary to annotate a single example and thus the overall annotation cost.\nIn fact, to date, only a limited number of large-scale training datasets have been produced using DADC and its application has been primarily restricted to producing challenge sets or as additional training data to improve the performance of models already trained on non-DADC curated datasets.\nTo make better use of DADC data, \\citet{bartolo2021improving} propose generating synthetic adversarial training sets to further improve model robustness.\nHowever, this approach inevitably limits example diversity as it relies on examples ultimately generated by a model with no additional human input, and provides no guarantees that useful synthetic examples would transfer across target adversary models of varying capabilities or across annotation rounds.\n\nIn this work, we propose assisting annotators by having generative models aid human annotators in the data collection loop.\nConcretely, we utilise a Generative Annotation Assistant~(GAA) model that provides prompt suggestions to crowdworkers, while allowing full flexibility for edits and rewrites to support example generation while still allowing for human creativity as shown in Figure~\\ref{fig:splash}.\nWe explore GAAs in a 
broad range of experimental settings, including standard and adversarial data collection approaches, training on various source datasets, and employing sampling methodologies based on likelihood, adversarial feedback, and uncertainty.\nWe showcase the value of this approach on the task of extractive question answering~(QA), and find that GAAs can help improve both the standard and adversarial data collection paradigms.\nWe find considerable efficiency gains, with around a 28\\% observed annotation speed-up, as well as improved data effectiveness with up to a 4.5F$_\\text{1}${} improvement in downstream performance for adversarial data collection.\n\n\n\\begin{figure*}[t]\n\\includegraphics[width=\\textwidth]{images\/interface-gaas}\n\\caption{\nThe Annotation Interface used for data collection. This example shows a question generated using a generative assistant trained on the AdversarialQA data and selected an adversarial sampler, which successfully allowed the annotator to beat the QA model in the loop.\n}\n\\label{fig:interface_gaas}\n\\end{figure*}\n\n\n\\section{Related Work}\n\n\\subsection{Dynamic Adversarial Data Collection~(DADC)}\nThere exists a rich body of recent work showing the value of dynamic adversarial data collection in model evaluation \\cite{yang2017mastering,dua2019drop,dinan-etal-2019-build,nie-etal-2020-adversarial,bartolo2020beat,kiela2021dynabench,wallace2021analyzing}, although the approach has also been challenged for not necessarily leading to better generalisation on non-adversarial test sets \\cite{kaushik2021efficacy} and being unfair to the model that was used in the loop~\\cite{bowman2021will,phang2021adversarially}.\nThis work builds on previous work in adversarial data collection methods for QA~\\cite{bartolo2020beat}, and work investigating the use of generative models to create synthetic adversarial data to improve QA model robustness~\\cite{bartolo2021improving}.\n\n\\subsection{Generative Model Annotation Support}\nA long line of prior work has trained generative models for question answering \\citep{du-etal-2017-learning,du-cardie-2018-harvesting,zhao-etal-2018-paragraph,lewis2018generative,alberti-etal-2019-synthetic,puri-etal-2020-training,yang-etal-2020-generative,bartolo2021improving,lewis2021paq}.\nIn many cases, these approaches filter out questions that an external QA model gets wrong, in order to ensure correctness of the generated questions; our filtering strategies instead focus on generated questions that QA models get wrong as we hypothesise that these would serve as more useful initial prompts to human annotators.\n\nGenerative models have also been used to aid experts with writing contrast sets~\\citep{wu-etal-2021-polyjuice,ross2021tailor}, but to the best of our knowledge, this is the first work to investigate the use of generative annotation assistants for crowdworkers directly in the annotation loop for NLP.\nRecent work on supporting crowdworkers for textual entailment in a non-adversarial setting shows no improvements on downstream transfer performance over baseline, albeit with reductions in previously observed issues with annotation artefacts~\\cite{bowman-etal-2020-new}.\nSubsequent work highlights the need for further data collection efforts focusing on improving writing-based annotation processes~\\cite{vania-etal-2021-comparing}, which we aim to investigate in this work.\nSeparately,~\\citet{ettinger2017buildit} provide \\textit{breakers} with the ability to minimally edit original data to identify the boundaries of system 
capabilities, while~\\citet{potts-etal-2020-dynasent} analyse the use of prompts to assist crowdworkers in beating a model in the loop for sentiment analysis.\nIn both cases, prompts are sourced from existing datasets and are not generated on the fly.\n\n\\subsection{Active Learning and Weak Supervision}\nActive learning approaches have been used to accelerate annotation~\\cite{tsuruoka-etal-2008-accelerating}, although this typically assumes access to a pool or stream of unlabelled data for which the learning algorithm can query labels \\citep{settles2009active}.\nIn our setting, no unlabelled questions are provided, necessitating the use of a generative model to suggest questions instead.\nMoreover, our annotators are free to edit and browse generated questions, whereas annotators in active learning typically only provide labels and have no choice in what to label.\nSome of our sampling and filtering strategies based on entropy are inspired by uncertainty sampling, a standard active learning algorithm \\citep{lewis1994sequential}.\n\n\\section{Experimental Setup}\nOur study focuses on the effects of incorporating generative annotation assistants and their interactions with annotators and discriminative models-in-the-loop in a DADC context for QA.\nWe provide crowdworkers with a short passage from Wikipedia and ask them to write five questions and highlight the span in the passage that best answers the question for each (see Figure~\\ref{fig:interface_gaas}).\nWe pay workers equally across experiment modes to avoid creating an incentive imbalance and pay out an additional bonus for each question that successfully beats the discriminative QA model i.e., for each question that the model fails to answer correctly.\nFinally, we validate all collected examples using a distinct worker pool and ask three additional workers to report on the validity of each example.\n\n\\paragraph{Selected Passages}{\nWe select passages from KILT~\\citep{petroni-etal-2021-kilt} to allow the possibility of future investigation into cross-domain and task transfer of knowledge intensive language understanding in the context of data collected in a DADC setting.\nWe filter KILT passages to those with between 100 and 600 tokens that are used by at least 5 KILT tasks.\nWe further filter out any passages with any 8-gram overlap (after normalisation) to the SQuAD1.1{} training or development sets, seeking to ensure that all passages used in our study are novel and previously unseen by the discriminative QA models in the loop.\nThis leaves a total of 10,109 passages from 421 Wikipedia pages.\nWe retain and supply all passage-relevant KILT metadata (such as IDs and provenances) with our collected datasets to facilitate future work.\n}\n\n\\paragraph{Model-in-the-Loop}{\nThe discriminative QA model in the loop is ELECTRA\\textsubscript{Large}~\\citep{Clark2020ELECTRA} trained on SQuAD1.1{} and AdversarialQA, and enhanced using SynQA to improve adversarial robustness as investigated by~\\citet{bartolo2021improving}.\\footnote{You can interact with this model at \\url{https:\/\/dynabench.org\/models\/109}.}\nThis model represents the best-performing model on the Dynabench~\\cite{kiela2021dynabench} leaderboard at the time of conducting this study, obtaining a word-overlap F$_\\text{1}${} score of 94.5\\% on the SQuAD1.1{} dev set, and represents the state-of-the-art on AdversarialQA achieving 77.6\\% on the \\dataset{BiDAF}{} subset, 71.5\\% on \\dataset{BERT}{}, and 63.2\\% on 
\\dataset{RoBERTa}{}.\n}\n\n\\paragraph{Generator-in-the-Loop}{\nFor our generative model, we use the \\textit{fairseq}~\\citep{ott2019fairseq} implementation of BART$_\\text{Large}$~\\cite{lewis-etal-2020-bart}, fine-tuning the decoder to generate questions conditioned on the passage and the answer highlighted by the annotator.\nTo provide a diverse set of questions to annotators, we decode using nucleus sampling with $top_p = 0.75$ as decoding using standard beam search results in questions which are too similar to each other and therefore likely to be of less use as question prompts to annotators.\nTo speed up inference and model-annotator interaction, we preemptively identify answer candidates for each passage and generate questions to build up a large cache from which we serve questions during annotation.\nOnce there are no questions remaining in the cache for a particular answer, or if the annotator selects an answer that is not in the cache, we fall back to querying the generative model in real-time. \nIn this work, we investigate generative assistants trained on three different sources of questions: SQuAD1.1{}, AdversarialQA, and the combination of both SQuAD and AdversarialQA.\n}\n\n\\paragraph{Question Sampling}{\nWe investigate three different selection strategies for presenting the generated questions as prompts to annotators: \ni) \\textit{generator likelihood} samples candidates in the order prescribed by the generative model's associated likelihood values; \nii) \\textit{adversarial sampling} selects generated questions in order of the least word-overlap F$_\\text{1}${} scores when queried against the discriminative QA model; and \niii) \\textit{uncertainty sampling} is inspired by active learning and selects generated questions in order of the least span selection confidence when queried against the QA model.\nThe latter two provide an interesting trade-off for exploration as we would expect the quality of the generated questions to be worse than if sampled based on likelihood.\nHowever, we hope that such prompts could serve to inspire annotators and provide a ``starting point'' beyond the answering capabilities of the QA model, irrespective of correctness.\nWe hypothesise that modifying such examples might be a more effective process for annotators to undertake than when starting from higher quality but less model-confusing prompts, and investigate this question thoroughly.\n}\n\n\\paragraph{Answer Prompts}{\nWe also investigate the effects of abstracting away the answer selection task from the annotator.\nTo identify potential candidate answers, we use Self-Attention Labelling (SAL)~\\cite{bartolo2021improving} and investigate providing annotators with both answer prompts as well as the corresponding generated questions.\n}\n\n\\paragraph{Experimental Settings}{\nIn total, there are twenty different experimental settings involving combinations of the above-mentioned annotation pipeline components.\nWe collect 1,000 validated training examples for each of these settings, for a total of 20,000 examples.\nFor downstream evaluation we train ELECTRA\\textsubscript{Large} QA models on the training datasets collected each setting, and perform identical model selection and hyper-parameter tuning.\n}\n\n\\paragraph{Annotation Interface}{\nWe use a variant of the Dynabench~\\citep{kiela2021dynabench} QA interface that allows annotators to interact with the models in the loop, and further allows them to edit and modify generated questions and answers as required.\nThe same base 
interface is used across experimental settings and only varied minimally depending on the current setting, for example by changing the title and instructions in the adversarial annotation setting, or by adding a ``Generate Question'' button when the setting involves GAAs.\nIn the GAA settings, annotators are not informed what generative model they are interacting with, or what sampling mechanism is being used.\n}\n\n\\input{tables\/results_baselines}\n\n\\paragraph{Crowdsourcing Protocol}{\nWe use Amazon Mechanical Turk to recruit workers for this study.\nTo ensure proficiency in English, crowdworkers are required to be based in Canada, the UK, or the US.\nThey are also required to have a Human Intelligence Task~(HIT) Approval Rate greater than $98\\%$, have previously completed at least 1,000 HITs, and are required to undergo a dedicated onboarding process.\nWorkers were randomly assigned to one of the possible experiment modes and were all presented with passages sampled from the same set, for which they were tasked with writing and answering five questions.\nAll collected questions were then validated for correctness by a separate group of crowdworkers.\nWe collect three validations per question and use this information, along with manual verification of a subset of the annotated examples, to maintain a high level of quality and remove examples from workers who were generating examples with an incorrectness rate above an acceptability threshold of 95\\%.\nWorkers were provided an additional \\$0.50 bonus for each example validated as having successfully fooled the model in the adversarial data collection settings.\nIn total, 1,388 workers participated in the study, with 1,113 contributing to the final datasets.\nWe also continuously validate both annotators and validators based on signals such as repetitiveness, agreement, and manual checks.\n}\n\n\\paragraph{Evaluation}\nWe evaluate the outcomes in each of the experimental settings by a selection of metrics: \ni) \\textit{median time per example} as a measure of annotation efficiency, where a lower time taken is better;\nii) \\textit{validated Model Error Rate~(vMER)}~\\citep{bartolo2021improving} which evaluates the effectiveness of annotators at generating valid question-answer pairs that the QA model fails to answer correctly;\niii) \\textit{median time per validated model-fooling example} which serves as a single metric incorporating both method efficiency and effectiveness and thus provides a convenient metric for comparison across the various experimental settings; and\niv) \\textit{downstream effectiveness} in which we evaluate the performance (by word-overlap F$_\\text{1}${} score) of a QA model trained on the data collected in each of the experimental modes on the standard SQuAD1.1{} benchmark, on the AdversarialQA benchmark, and in terms of domain generalisation ability on the MRQA~\\citep{fisch2019mrqa} dev sets.\nLower values are better for the time-dependent metrics; however, from the perspective of training data we consider a higher vMER to be better, guided by the performance benefits observed for adversarial over standard data collection. This is corroborated by comparison with downstream results.\n\n\\input{tables\/results_improving_standard}\n\n\\section{Results}\nOur study allows us to perform a thorough investigation into both the efficiency and effectiveness of the different data annotation methodologies. 
\nIt also allows us to build on work investigating the various differences between standard and adversarial data collection~\\cite{kaushik-etal-2021-efficacy}.\n\n\\subsection{Standard versus Adversarial Data Collection}\nThe standard and adversarial data collection settings we use as baselines do not make use of GAAs, and are designed to replicate the SQuAD1.1{}~\\cite{rajpurkar2016squad} and AdversarialQA~\\cite{bartolo2020beat} annotation setups as closely as possible.\nHowever, in contrast to AdversarialQA, our setting only provides annotators with a financial incentive to \\emph{try} to beat the model in the loop through the use of a bonus, and does not restrict annotators to only submitting model-fooling examples.\n\nThe results, shown in Table~\\ref{tab:results_baselines}, highlight the differences between the two annotation approaches.\nAs expected, standard data collection is more efficient in terms of the time taken per example, as there is no requirement for annotators to make any effort to try to beat a model.\nHowever, the efficiency differences are not as large as seen in settings where annotators \\textit{have} to submit model-fooling examples~\\cite{bartolo2020beat}.\nWe also find considerable benefits from adversarial data collection in terms of the validated model error rate and subsequent downstream performance.\n\nWe note that the training data sizes in both these settings are relatively small, and the benefits of adversarial data collection have been shown to be more pronounced in the low data regime, likely due to increased example diversity.\nWe would not necessarily expect these differences to be as pronounced with larger scale collection efforts.\nWe also note that while our passages are sourced from Wikipedia, there may exist characteristic differences between these and the passages used in SQuAD{}.\nFurthermore, we highlight the considerably lower (i.e., better) adversarial human evaluation vMER scores achieved for our synthetically-augmented ELECTRA\\textsubscript{Large} model-in-the-loop compared to the 8.8\\% reported for RoBERTa\\textsubscript{Large} by~\\citet{bartolo2021improving}.\nWe hypothesise that this is primarily due to two factors: the improved robustness of ELECTRA in comparison to RoBERTa, and more stringent example validation.\nFor further evidence of the improved robustness of ELECTRA, see Appendix~\\ref{appendix:addsent}.\n\n\\input{tables\/results_improving_adversarial}\n\n\\subsection{Improving Standard Data Collection}\nWe now investigate whether it might be possible to improve standard data collection practices using generative assistants -- \\textit{can we achieve similar performance to adversarial data collection without access to any adversarial data?}\n\nWe therefore use a GAA trained on SQuAD1.1{}, and investigate the three sampling techniques, namely likelihood, adversarial, and uncertainty sampling.\nResults are shown in Table~\\ref{tab:results_improving_standard}.\n\nWe find that using a GAA with likelihood sampling considerably improves the efficiency of the annotation process in comparison to the standard data collection baseline in Table~\\ref{tab:results_baselines}, while giving comparable, if slightly improved, vMER and downstream results.\n\nFurthermore, both the adversarial and uncertainty sampling strategies prove effective.\nWhile the time taken per example is not as impressive as for standard likelihood sampling, and is comparable to the standard data collection baseline, the vMER -- an indicator of the diversity of the 
collected training data -- is substantially improved and outperforms the adversarial data collection baseline.\nThe downstream results are also very promising, considerably improving on the standard data collection setting.\nThey also approach the values for the adversarial data collection baseline although, despite the improved vMER, overall downstream performance is better in the adversarial data collection setting.\nIn summary, this result shows that we can encourage annotators to come up with more challenging examples and approach the downstream performance achieved using adversarial data collection without requiring any adversarially-collected data or an adversarial model in the loop, simply through the use of GAAs paired with an appropriate sampling strategy.\nWhile impressive, this is in line with our initial hypothesis that sampling generated prompts from regions of known model uncertainty, or prompts that we know the model finds challenging to answer, irrespective of generated sample quality, provides annotators with a better starting point for example creation.\n\n\\input{tables\/results_answer_prompting}\n\n\\subsection{Improving Adversarial Data Collection}\nFollowing the impressive gains observed for standard data collection, we investigate whether it is possible for GAAs to provide further improvements over adversarial data collection.\nHere, we experiment with GAAs trained on three different datasets: SQuAD1.1{}, AdversarialQA, and the combination of both.\nWe combine each of these with the three previously discussed sampling strategies giving nine different experimental settings.\nResults are shown in Table~\\ref{tab:results_improving_adversarial}.\n\nWe find that when annotators are incentivised to try to beat an adversarial QA model-in-the-loop, the previously seen efficiency gains are not as clear cut.\nIn fact, annotators are slightly slower than the adversarial data collection baseline when using a SQuAD{}-trained GAA.\nWhen using a GAA that has been trained on adversarially-sourced questions, likelihood sampling provides efficiency gains over the baseline, however, both adversarial and uncertainty sampling (which naturally lead to more complex prompts that might be more challenging to work with) actually slow annotators down, although they do provide improved validated model error rates.\nIn terms of downstream performance, there is no clear best setting, but the best settings outperform the adversarial data collection baseline.\nWe also observe that a SQuAD-trained GAA with uncertainty sampling gives best performance on the less challenging evaluation sets, while an AdversarialQA-trained GAA with adversarial sampling gives best performance on the evaluation datasets collected using a more performant adversary.\nThis is also in line with the observations made by~\\citet{bartolo2020beat} showing a distributional shift in question type and complexity with an increasingly stronger model-in-the-loop.\n\nThe general takeaway therefore in terms of the ideal experimental setting from the perspective of downstream performance is that this depends on the particular evaluation setting, with GAAs trained on examples from a particular setting yielding better performance when the downstream model is also evaluated in similar conditions.\nAnother key observation is that both the validated model error rate and time per validated model-fooling example comfortably outperform the baselines across the board, highlighting the enhancements to the effectiveness of the annotation process 
provided by incorporating GAAs in the loop.\n\n\n\\subsection{Investigating Answer Prompting}\nThe previously explored settings focus on investigating the effects of assisting free-text generation using GAAs.\nHowever, the QA crowdsourcing setting also involves answer annotation, which we also explore in search of efficiency gains.\nHere, we explore GAAs trained on datasets with adversarially-sourced components and the same three sampling strategies as before, with the addition of providing annotators with an answer suggestion.\nIn essence, this is similar to an answer and question validation setting, with the difference that annotators have the ability to freely modify both answer and question, or request additional suggestions.\nResults are shown in Table~\\ref{tab:results_answer_prompting}.\n\nWe find that answer prompting is incredibly effective at improving annotation efficiency, providing gains in all six experimental settings while also providing improved vMER results in some cases.\nWe also see very similar downstream performance result patterns to the previous set of experiments -- for performance on the more challenging evaluation sets (\\dataset{BERT} and \\dataset{RoBERTa}), an AdversarialQA-trained GAA with likelihood sampling gives best performance, while for performance on SQuAD and \\dataset{BiDAF}, a GAA trained on examples including SQuAD coupled with uncertainty sampling gives best performance.\nThis consistency in performance patterns serves to further highlight our previous observation that, while using GAAs provides considerable gains in both the efficiency of the annotation process and effectiveness in terms of downstream results, the ideal annotation setup should be selected based on the target downstream evaluation.\n\n\n\\section{Annotator Interaction with GAAs}\nWhile we provide annotators with instructions explaining how they can use the GAAs to aid their annotation, they are free to query the generative models as many times as they like, if at all, during annotation.\nWe are interested in how the three main factors affecting interaction with the GAAs that we explore -- training data, sampling strategy, and answer prompting -- affect the ways in which annotators interact with and use the GAAs.\n\nResults, shown in Table~\\ref{tab:qs_per_example}, indicate that annotators query the GAA less frequently when shown simpler prompts, i.e., 
those obtained using a GAA trained on non-adversarially sourced examples, or selected using likelihood sampling, which tends to provide higher quality and less complex generated texts.\nWe also find that annotators query the GAA more frequently when an answer prompt is also provided.\nWe believe that this can be attributed to the fact that the answer and question prompt setting is more similar to a validation workflow, allowing annotators to generate prompts until a satisfactory one is found.\n\n\\input{tables\/qs_per_example}\n\n\n\\section{Discussion and Conclusion}\nIn this work, we introduce Generative Annotation Assistants (GAAs) and investigate their potential to aid crowdworkers with creating more effective training data more efficiently.\nWe perform a thorough analysis of how GAAs can be used for improving QA dataset annotation in different settings, including different generative model training data, sampling strategies, and whether to also provide annotators with answer suggestions.\n\nWe find that GAAs are beneficial in both the standard and adversarial data collection settings.\nIn the standard data collection setting, and under the assumption of no access to adversarially-collected data, GAAs with prompts sampled based on likelihood provide annotation speed-ups, while prompts sampled by adversarial performance or uncertainty metrics provide benefits to both the model error rates on the collected data as well as subsequent downstream QA performance.\nWe find that we can get near-adversarial data collection downstream performance using GAAs without involving an adversarial model in the loop.\n\nFor adversarial data collection, we demonstrate improved effectiveness of the annotation process over the non-GAA baseline, although this comes at a cost of reduced annotation efficiency.\nWe show that also aiding annotators with answer prompts boosts data collection efficiency even beyond that achieved for standard data collection, while retaining downstream performance.\nWe find that the ideal annotation setting differs for different intended evaluations, with an uncertainty sampled GAA trained on data that was not entirely adversarially-collected providing best performance on simpler questions, while an adversarially sampled GAA trained on adversarially-collected data provides best downstream performance on more challenging evaluation sets. \nOverall, we see annotation speed-ups over a baseline of 28.6\\% for standard and 28.4\\% for adversarial data collection.\nWe also see a 3.75x improvement in vMER for adversarial data collection, along with best downstream performance gains of 0.6F$_\\text{1}${} on SQuAD{}\\textsubscript{dev}, 0.7F$_\\text{1}${} on \\dataset{BiDAF}, 4.5F$_\\text{1}${} on \\dataset{BERT}, and 3.8F$_\\text{1}${} on \\dataset{RoBERTa}. \nFurthermore, we see benefits in domain generalisation for standard data collection, and show that annotators interact with the GAA more frequently when it has been trained on adversarially-collected data, is sampled from based on adversarial or uncertainty feedback, and also provides answer prompts.\n\nWhile our analysis is limited by the size of the collected data, we believe that GAAs can help drive further innovation into improved data collection methodologies based on these observations. 
\nWe hope that our analysis of various aspects of GAA incorporation into the annotation pipeline can help inform future work exploring broader aspects of GAA use, such as for other NLP tasks or for larger scale annotation efforts.\n\n\n\\section{Ethical Considerations}\nWe collect training datasets as part of the analysis in this work. The passages are sourced from Wikipedia through KILT.\nAs described in the main text, our incentive structure is designed to ensure that crowdworkers were fairly compensated.\nOur datasets focus on the English language. As this data is not collected for the purpose of designing NLP applications, we do not foresee any risks associated with the use of this data.\n\n\n\\section*{Acknowledgments}\nThe authors would like to thank the Dynabench team for their feedback and continuous support.\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Problem description}\n\nOur main object of interest is a convex polytope $P\\subset{\\mathbb{R}}^d$ with $N$ vertices.\nWe assume that the polytope $P$ has a polynomial density $\\rho(\\mathbf{x})$ defined in\nthe interior of $P$. For any multivariate polynomial $g(\\mathbf{x})$ the corresponding\nmoment $\\mu_g$ of $P$ is given by\n\n\\[\n\\mu_g:= \\int_P g(\\mathbf{x})\\cdot \\rho(\\mathbf{x}) d\\mathbf{x}.\n\\]\n\n\nWe note that if all vertices of $P$ are rational (have rational coordinates) and\n$\\rho\\in{\\mathbb{Q}}[\\mathbf{x}]$, then every moment $\\mu_g$ of $P$\nfor a polynomial $g\\in{\\mathbb{Q}}[\\mathbf{x}]$ is a rational number as well. Why this is true will\nbecome clear in the next section.\n\n\\paragraph{\\bf Input} As an input to our problem we receive $O(Nd)$ moments\nof some underlying $N$-vertex convex polytope $P\\subset {\\mathbb{R}}^d$.\n\\paragraph{\\bf Output} The goal is to reconstruct $P$ (coordinates of the vertices).\n\nIn our computational experiments we made a few simplifying assumptions about the underlying polytope:\n\\begin{enumerate}\n\\item we work with uniform density, i.e., $\\rho(\\mathbf{x})=1$ for any $\\mathbf{x}\\in P$;\n\\item we focus on {\\em simple} polytopes, i.e., polytopes where each vertex has exactly $d$ incident edges.\n\\end{enumerate}\n\n\nThe latter assumption is equivalent to saying that $P$ is a generic polytope in a hyper-plane description of the polytope, i.e.,\nno $d+1$ supporting hyperplanes of $P$ have common intersection. 
In order to construct\na random simple polytope in our computational experiments, we intersect a few half-spaces, each supported by a randomly chosen hyperplane.\n\nWe considered the problem in two different models of arithmetic:\n\\begin{enumerate}\n\\item vertices of $P$ are rational and rational moments are given in the input exactly;\n\\item vertices of $P$ have real coordinates and moments are given with a certain precision.\n\\end{enumerate}\n\n\\section{Preliminaries}\n\nFor a non-negative integer $j$ the $j$-th {\\em axial moment}\nof $P$ in the direction $\\mathbf{z}\\in{\\mathbb{R}}^d$ with respect to density $\\rho$ is given by\n\\[\n\\mu_j(\\mathbf{z}):= \\mu_{j,\\rho} (\\mathbf{z}) := \\int_P \\langle \\mathbf{x}, \\mathbf{z} \\rangle ^j \\rho(\\mathbf{x}) d\\mathbf{x}.\n \\]\n\nWe remark that $\\langle \\mathbf{x}, \\mathbf{z} \\rangle^j$ is a homogeneous polynomial of degree $j$\nfor any fixed direction $\\mathbf{z}$.\n\nLet the set of all vertices of $P$ be given by ${\\text{Vert}}(P)$.\nFor each $\\mathbf{v} \\in {\\text{Vert}}(P)$, we consider a fixed set of\nvectors, parallel to the edges of $P$ that are incident with $\\mathbf{v}$, and call\nthese edge vectors $w_1(\\mathbf{v})$,\\dots $w_d(\\mathbf{v})$. Geometrically, the polyhedral\ncone generated by the non-negative real span of these edges at $\\mathbf{v}$ is called\nthe tangent cone at $\\mathbf{v}$, and is written as $K_\\mathbf{v}$. For each simple tangent\ncone $K_\\mathbf{v}$, we let $|\\det K_\\mathbf{v}|$ be the volume of the parallelepiped formed\nby the $d$ edge vectors $w_1(\\mathbf{v}), \\dots, w_d(\\mathbf{v})$. Thus, $|\\det K_\\mathbf{v}| = |\n\\det( w_1(\\mathbf{v}), \\dots, w_d(\\mathbf{v})) |$, the determinant of this parallelepiped.\n\n\nThe following results of BBaKLP\\ \\cite{MR1079024} tell us\n\\begin{equation}\\label{brion} \\mu_j (\\mathbf{z})= \\frac{j! (-1)^d}{ (j+d)!}\n\\sum_{\\mathbf{v}\\in {\\text{Vert}}(P)} \\bil{\\mathbf{v}}{\\mathbf{z}}^{j+d} D_\\mathbf{v}(\\mathbf{z}), \\text{ where}\n\\end{equation}\n\\begin{equation}\\label{Dvz} D_\\mathbf{v}(\\mathbf{z}):=\\frac{|\\det\nK_\\mathbf{v}|}{\\prod_{k=1}^d\\bil{w_k(\\mathbf{v})}{\\mathbf{z}}},\n\\end{equation}\nfor each $\\mathbf{z} \\in {\\mathbb{R}}^d$ such that the denominators in $D_\\mathbf{v}(\\mathbf{z})$ do not vanish.\nMoreover,\n\\begin{equation}\\label{brionzero} 0=\\sum_{v\\in {\\text{Vert}}(P)} \\bil{\\mathbf{v}}{\\mathbf{z}}^{j}\nD_\\mathbf{v}(\\mathbf{z}), \\text{ for each } 0 \\leq j \\leq d-1.\\end{equation}\n\nIn particular, from \\eqref{brion},\\eqref{Dvz} it is easy to see that every moment $\\mu_j(\\mathbf{z})$\nis a rational number, if $P$ is a rational polytope and $\\mathbf{z}\\in{\\mathbb{Q}}^d$. 
Since any polynomial $g\\in{\\mathbb{Q}}[\\mathbf{x}]$\ncan be expressed as a rational linear combination of the powers of linear forms with rational coefficients, we\ncan conclude that $\\mu_g\\in{\\mathbb{Q}}$.\n\nRewriting the above equations in matrix form, we get\n\n\\begin{equation}\\label{momentmatrix}\n\\begin{pmatrix}\n1&1&\\dots&1\\\\\n \\langle \\mathbf{v}_1, \\mathbf{z} \\rangle & \\langle \\mathbf{v}_2, \\mathbf{z} \\rangle & \\dots & \\langle \\mathbf{v}_N, \\mathbf{z} \\rangle \\\\\n{ \\langle \\mathbf{v}_1, \\mathbf{z} \\rangle}^2 & { \\langle \\mathbf{v}_2, \\mathbf{z} \\rangle}^2 & \\dots & {\\langle \\mathbf{v}_N, \\mathbf{z} \\rangle}^2 \\\\\n\\vdots&\\vdots&\\dots&\\vdots \\\\\n { \\langle \\mathbf{v}_1, \\mathbf{z} \\rangle}^k & { \\langle \\mathbf{v}_2, \\mathbf{z} \\rangle}^k & \\dots & {\\langle \\mathbf{v}_N, \\mathbf{z} \\rangle}^k \\\\\n\\end{pmatrix}\n\\begin{pmatrix}\nD_{\\mathbf{v}_1}(\\mathbf{z})\\\\\n\\vdots\\\\\nD_{\\mathbf{v}_N}(\\mathbf{z})\n\\end{pmatrix}=\n\\begin{pmatrix}\nc_0\\\\\n\\vdots\\\\\nc_{k}\n\\end{pmatrix},\n\\end{equation}\nwhere\n\\begin{equation}\\label{c-vector}\n\\left (c_0, \\dots, c_{k} \\right) = \\left(0, \\dots, 0, \\frac{d!(-1)^d}{0!} \\mu_0, \\frac{(1+d)!(-1)^d}{1!} \\mu_1, \\dots, \\frac{k!(-1)^d}{(k-d)!} \\mu_{k-d}\\right),\n\\end{equation}\nso that the vector ${\\bf{c}} = \\left(c_0, \\dots, c_{k}\\right)$ has zeros in the first $d$ coordinates, and scaled moments in the last $k+1-d$ coordinates.\n\n\nFor a fixed $m\\ge N+1$ let:\n\n\\begin{equation}\n\\mathbf{H} (c_0,\\dots,c_{2m-2}):=\n\\begin{pmatrix}\nc_0&c_1&\\dots&c_{m-1}\\\\\nc_1&c_2&\\dots&c_{m}\\\\\n\\vdots&\\vdots&\\dots&\\vdots\\\\\nc_{m-1}&c_{m}&\\dots&c_{2m-2}\n\\end{pmatrix}.\n\\end{equation}\n\nBelow we give the algorithm (a variant of the Prony method) from \\cite{GLPR12} for finding the projections\nof vertices of $P$ onto a general position axis $\\mathbf{z}\\in {\\mathbb{R}}^d$.\n\n\n\\begin{algorithm}[H]\n\\begin{enumerate}\n\n\\item Given $2m-1 \\ge 2N+1$ moments $c_0,\\dots,c_{2m-2}$ for $\\mathbf{z}$, construct \\\\\n\\noindent a square Hankel matrix $\\mathbf{H}(c_0,\\dots,c_{2m-2}).$\n\\medskip\n\\item Find the vector $v=\\left(a_0, \\ldots, a_{M-1}, 1, 0, \\ldots, 0 \\right)$ in the kernel $\\mathbf{K}$ of $\\mathbf{H}$ \\\\\n\\noindent with the minimal possible $M.$ It turns out that the number of \\\\\n \\noindent vertices $N=M$.\n\\medskip\n\\item The set of roots $\\{x_i(\\mathbf{z}) = \\bil{\\mathbf{v}_i}{\\mathbf{z}} |\\mathbf{v}_i\\in{\\text{Vert}}(P)\\}$ of polynomial\n $p_\\mathbf{z}(t) = a_0 + a_1t + \\ldots + a_{N-1} t^{N-1} + t^N$\n then equals the set of \\\\\n \\noindent projections of ${\\text{Vert}}(P)$ onto $\\mathbf{z}$.\n\\end{enumerate}\n\\caption{Computing projections.}\\label{fig:compproj}\n\\end{algorithm}\n\nNext the algorithm in \\cite{GLPR12} finds projections of the vertices on $d$ different linearly independent directions $\\mathbf{z}\\in{\\mathbb{R}}^d$ and\nmatches the projections on the first direction with the projections on each of the remaining $d-1$ directions. In order to do each matching\nbetween the first $\\mathbf{z}_1$ and $i$-th $\\mathbf{z}_i$ directions, vertex projections on a new direction $\\mathbf{z}_{1i}$ in the plane\nspanned by $\\mathbf{z}_1$ and $\\mathbf{z}_i$ are reconstructed. 
These extra projections on the direction $\\mathbf{z}_{1i}$ allow us to restore the right matching between\nthe projections on $\\mathbf{z}_1$ and $\\mathbf{z}_i$ with very high probability.\n\n\n\n\\section{Actual Implementation}\nOur implementation was done in Sage \\cite{sage}.\n\\paragraph{\\bf Reconstructing projections on $\\mathbf{z}$.}Coming to the main part, we deviated a little bit from our original Prony\nmethod in computing the axial projections. Namely, we do not directly\nfind the kernel of the Hankel system but look at the problem from the\nperspective of Pade approximation instead. The moments can be viewed as\ncoefficients in the expansion of a rational function, which we can approximate\nif enough data is known. Specifically, recalling \\eqref{momentmatrix} and \\eqref{Dvz},\nwe may write the following univariate generating function for the sequence of scaled moments $\\{c_k\\}$\n\n\\begin{eqnarray}\n\\sum_{k=0}^{\\infty}c_{k}t^k &=& \\sum_{k=0}^{\\infty}t^k\\sum_{i=1}^{N} \\langle\\mathbf{v}_i,\\mathbf{z}\\rangle^k D_{\\mathbf{v}_i}(\\mathbf{z})\\notag\\\\\n&=&\\sum_{i=1}^{N}D_{\\mathbf{v}_i}(\\mathbf{z}) \\sum_{k=0}^{\\infty}t^k\\langle\\mathbf{v}_i,\\mathbf{z}\\rangle^k=\n\\sum_{i=1}^{N}\\frac{D_{\\mathbf{v}_i}(\\mathbf{z})}{1-t\\langle\\mathbf{v}_i,\\mathbf{z}\\rangle}.\n\\label{pade}\n\\end{eqnarray}\n\nTherefore, $c_k$ are the coefficients in the Taylor series expansion of\n$p_\\mathbf{z}(t)\/q_\\mathbf{z}(t)$, where $q_\\mathbf{z}(t) = \\prod\\limits_{\\mathbf{v}\\in{\\text{Vert}}(P)} (1-t\\langle\\mathbf{v},\\mathbf{z}\\rangle)$ and $p_\\mathbf{z}(t)$ is a polynomial of degree at most $N-1$.\nIf enough moments are known for a fixed direction $\\mathbf{z}$ (in our case $2N$ are sufficient) then $p_\\mathbf{z}$ and $q_\\mathbf{z}$ can be computed. Then the reciprocals of the roots of\n$q_\\mathbf{z}$ will give us the desired projections.\n\nIn our implementation we used one of the Pade approximation methods implemented in Sage. This is basically \\texttt{scipy.misc.pade} with control of the measured moments' precision. If:\n$$\n\\frac{p(t)}{q(t)} = \\frac{a_0 + a_1 t + \\dots + a_\\ell t^\\ell}{b_0 + b_1 t + \\dots + b_m t^m} = c_0 + c_1 t + \\dots + c_n t^n + \\dots\n$$\nwhere $n = \\ell+m$, $b_0=1$ and $c_0,\\dots,c_n$ are moments, then we do the following:\n\n\\begin{algorithm}[H]\n\\begin{enumerate}\n\n\\item Trim the data $(c_0,\\dots,c_n)$ to $k$-bit precision with $k$ specified. \\\\\n\\medskip\n\\item Create a matrix $C_{m\\times m}$ with $C_{ij}=c_{\\ell+i-j}$.\n\\medskip\n\\item Solve the system $C\\cdot x = y$ with $x = (b_1,\\dots,b_m)^T$ and \\\\\n\\noindent $y = -(c_{\\ell+1},\\dots,c_{\\ell+m})^T$.\n\\end{enumerate}\n\\caption{Pade approximation.}\\label{fig:pade}\n\\end{algorithm}\n\n\\paragraph{\\bf Matching projections on different directions.}\nWe implemented a different and much more reliable matching procedure than the one\ndescribed in the original paper. Below we give a detailed description of the new matching method.\n\nAs was remarked in \\cite{GLPR12}, formulas \\eqref{brion}\nand \\eqref{Dvz} are valid not only for $\\mathbf{z}\\in{\\mathbb{R}}^d$ but also for $\\mathbf{z}\\in{\\mathbb{C}}^d$. 
The latter\nmeans that every point in $P\\subset{\\mathbb{R}}^d$ and each $\\mathbf{v}\\in{\\mathbb{R}}^d$ are regarded as\ncomplex vectors with all zero imaginary components and $\\bil{\\mathbf{v}}{\\mathbf{z}}$ is regarded as\na standard sesquilinear inner product in ${\\mathbb{C}}^d$.\n\nThus, we also can write \\eqref{momentmatrix} for complex $\\mathbf{z}=\\mathbf{z}_{re}+i\\cdot\\mathbf{z}_{im},$\nwhere $\\mathbf{z}_{re},\\mathbf{z}_{im}\\in{\\mathbb{R}}^d$.\nWe observe that $\\bil{\\mathbf{v}}{\\mathbf{z}}=\\bil{\\mathbf{v}}{\\mathbf{z}_{re}}+i\\cdot\\bil{\\mathbf{v}}{\\mathbf{z}_{im}}$ and\n\\begin{eqnarray}\n\\mu_j(\\mathbf{z}) &=& \\int_P \\Big(\\langle \\mathbf{x}, \\mathbf{z}_{re} \\rangle + i\\cdot\n\\langle \\mathbf{x}, \\mathbf{z}_{im} \\rangle \\Big)^j \\rho(\\mathbf{x}) d\\mathbf{x} \\notag\\\\\n&=&\\int_P g_1\\Big(\\langle \\mathbf{x}, \\mathbf{z}_{re} \\rangle,\n\\langle \\mathbf{x}, \\mathbf{z}_{im} \\rangle \\Big) \\rho(\\mathbf{x}) d\\mathbf{x}+ i\\cdot\\int_P g_2\\Big(\\langle \\mathbf{x}, \\mathbf{z}_{re} \\rangle,\n\\langle \\mathbf{x}, \\mathbf{z}_{im} \\rangle \\Big)\\rho(\\mathbf{x}) d\\mathbf{x}, \\notag\n\\end{eqnarray}\nwhere $g_1$ and $g_2$ are homogeneous real polynomials in two variables of degree $j$.\nHence, by receiving in the input moments $\\mu_{g_1},$ $\\mu_{g_2}$ we may find $\\mu_j(\\mathbf{z})$ for any $\\mathbf{z}\\in{\\mathbb{C}}^d$.\n\nWe further may write \\eqref{pade} for $\\mathbf{z}\\in{\\mathbb{C}}^d$ and find $q_\\mathbf{z}(t)$ from the first $2N$ moments. Next we\nfind all complex roots of the polynomial $q_\\mathbf{z}(t)$, which give us already matched projections on $\\mathbf{z}_{re}$ and $\\mathbf{z}_{im}$.\nIn our algorithm we fix some general position vector $\\mathbf{z}_{re}\\in{\\mathbb{R}}^d$ and consider $d-1$ vectors $\\mathbf{z}_j\\in{\\mathbb{R}}^d$, such that $\\mathbf{z}_{re}$ and all\n$\\mathbf{z}_j$ are linearly independent. We match projections on $\\mathbf{z}_{re}$ with the projections on $\\mathbf{z}_j$ by taking $\\mathbf{z}_{im}=\\mathbf{z}_j$ for each $j$.\nA big advantage of this matching method is that it is much less prone to numerical errors. In particular, for $d=2$ this method will provide\nan answer in any case, in other word for $d=2$ our problem is well posed. For $d\\ge 3$ there might be a problem that projections on $\\mathbf{z}_{re}$\nare different when we match them with projections on different $\\mathbf{z}_j$. We simply get around this problem by using an ascending order over projections on $\\mathbf{z}_{re}$ each time when we do such a matching.\n\n\\begin{rem}Interestingly, if we fix unit and orthogonal to each other directions $\\mathbf{z}_{re}$ and $\\mathbf{z}_{im}$,\nthen polynomials $g_1\\Big(\\langle \\mathbf{x}, \\mathbf{z}_{re} \\rangle,\n\\langle \\mathbf{x}, \\mathbf{z}_{im} \\rangle \\Big)$ and $g_2\\Big(\\langle \\mathbf{x}, \\mathbf{z}_{re} \\rangle,\n\\langle \\mathbf{x}, \\mathbf{z}_{im} \\rangle \\Big)$ considered as multivariate polynomials of $\\mathbf{x}$ are harmonic functions, i.e., $\\Delta g_1(\\mathbf{x})=\\Delta g_2(\\mathbf{x})=0.$\nOne can read more on harmonic moments in e.g. \\cite{PS12}.\n\\end{rem}\n\\begin{proof} We recall that Laplace operator $\\Delta$ is invariant under the isometry group of ${\\mathbb{R}}^d$. Therefore, we may assume\nthat $\\mathbf{z}_{re}$ is simply the first coordinate vector of $\\mathbf{x}$ and $\\mathbf{z}_{im}$ is the second coordinate vector of $\\mathbf{x}$. 
Now we only need to verify that\n$\\Delta=\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}$, when applied to the real and imaginary parts of $(x+i\\cdot y)^j$, gives zero.\nIndeed, we have\n\\[\n\\Delta(x+i\\cdot y)^j=j(j-1)(x+i\\cdot y)^{j-2}+j(j-1)i\\cdot i\\cdot(x+i\\cdot y)^{j-2}=0.\n\\]\n\\end{proof}\n\n\\begin{cor}From harmonic moments only, one may reconstruct vertices of a convex polytope $P$.\n\\end{cor}\n\n\\section{Numerical Experiments}\n\nWe did our numerical experiments first in exact arithmetic, i.e., with rational precision, to test\nthe exact algorithm from \\cite{GLPR12} and adjust the part of the algorithm\nfor selecting random directions. In this mode our implementation was far\nfrom optimal in terms of running time with exact arithmetic. The reason\nfor that is the inherent limitation of Sage's rational arithmetic.\n\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nDimension & Number of & Exact & Float & Allowed\\\\\n & vertices & Arithmetic & Arithmetic & Error \\\\ \\hline\n2 & 10 & 0.47 sec & 0.07 sec & E-3 \\\\ \\hline\n3 & 20 & 39 sec & 0.42 sec & E-3 \\\\ \\hline\n4 & 30 & $>$ 5 mins & 1.89 sec & E-3 \\\\ \\hline\n5 & 40 & $>$ 10 mins & 7.10 sec & E-3 \\\\ \\hline\n\\end{tabular}\n\\caption{Efficiency Benchmarks.}\n\\label{table:timing}\n\\end{table}\n\nHowever, as can be seen from Table~\\ref{table:timing}, converting numerical data into float precision yields drastic improvements in terms of running time. The allowed error on recovered projections is small enough and leaves the shape almost intact. Figure~\\ref{fig:3dim_20} shows two images of the same 20-vertex polyhedron reconstructed with rational and float arithmetic. Note that with float arithmetic, tiny errors in projections altered co-planarity of many vertices and thus many facets are triangulated, although the general shape is still preserved.\n\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{3D_20vert_rational.png}\n \\caption{Original Polyhedron}\n \\label{fig:3dim_20_rational}\n \\end{subfigure}%\n \\qquad\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{3D_20vert_float_75}\n \\caption{75-bit Float Arithmetic}\n \\label{fig:3dim_20_float75}\n \\end{subfigure}\n\\caption{3D polyhedron with 20 vertices}\n\\label{fig:3dim_20}\n\\end{figure}\n\n\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nNumber of & Error of & Error of & Error of \\\\\nvertices & order E-3 & order E-6 & order E-9 \\\\ \\hline\n4 & 20 bits & 25 bits & 35 bits \\\\ \\hline\n8 & 30 bits & 40 bits & 45 bits \\\\ \\hline\n12 & 45 bits & 55 bits & 65 bits \\\\ \\hline\n16 & 60 bits & 65 bits & 75 bits \\\\ \\hline\n20 & 75 bits & 80 bits & 90 bits \\\\ \\hline\n40 & 160 bits & 170 bits & 210 bits \\\\ \\hline\n\\end{tabular}\n\\caption{Errors vs. Float Precision}\n\\label{table:trade-off}\n\\end{table}\n\n\nWhen noise is introduced to the measured moments, exact arithmetic becomes inapplicable. Float arithmetic on the other hand can tolerate errors to some degree. However, our method turns out to be very sensitive when projections must be retrieved with high precision. In Table~\\ref{table:trade-off} we compare the precision level required in float arithmetic against the error tolerance.\n\n\nWe give an example of insufficient precision that results in distortions of the reconstructed shape. 
With the previous 20-vertex polyhedron where moments are measured now to only 60 bits of precision, the recovered shape looks as shown in Figure~\\ref{fig:distorted_recovery}.\n\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{3D_20vert_rational}\n \\caption{Original Polyhedron}\n \\end{subfigure}%\n \\qquad\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{3D_20vert_float_60}\n \\caption{Distorted Recovery}\n \\label{fig:3dim_20_float60}\n \\end{subfigure}\n\\caption{Float arithmetic: 60 bits precision}\n\\label{fig:distorted_recovery}\n\\end{figure}\n\nWe would like to remark that the use of complex moments improved precision a lot compared to the real moments. Here is a concrete example of an 8-vertex polyhedron with the matrix $V$ containing vertex coordinates and $A$ representing its adjacency matrix.\n\n\\[\nV=\\bordermatrix{\n~ &v_1 &v_2 &v_3 &v_4 &v_5 &v_6 &\nv_7 &v_8 \\cr\nx & 17\/4 & 249\/121 & -719\/74 & -66\/43 & -82\/91 & -1588\/133 &\n545\/37 & 69\/7 \\cr\ny & -14\/3 & -211\/121 & -373\/74 & -267\/43 & -219\/91 & 414\/133 &\n765\/37 & 59\/21 \\cr\nz & -7\/12 & 1963\/121 & 426\/37 & -108\/43 & -148\/13 & -46\/133 &\n-85\/37 & -41\/3 \\cr\n}\n\\]\n\n\\[\n\\text{Adjacency matrix: } \\begin{pmatrix}\n0 & 1 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n1 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 1 & 0 & 1 & 0 & 0\\\\\n1 & 0 & 1 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 1 & 0 & 1\\\\\n0 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 0 & 0 & 1 & 0 & 1\\\\\n1 & 0 & 0 & 0 & 1 & 0 & 1 & 0\n\\end{pmatrix}\n\\]\n\n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.5\\textwidth]{3D_8vert_rational}\n \\caption{3D Polyhedron with 8 vertices}\n \\label{fig:3dim_8vert}\n\\end{figure}\n\n\n\\texttt{z = vector([2,3,4])} is the random vector onto which the vertices are projected. The exact projections are:\n\n\\texttt{Proj:~[-54.56, -31.74, -26.52, -15.92, -7.83, 11.50, 63.78, 82.30]}\n\n\nWith moments measured in the real field with 25-bit precision, we recovered the projections as:\n\n\\texttt{RealField(25):~[-54.56, -30.87, -23.14, 11.46, 63.78, 82.30]}\n\n\nNotice that some projections are missed, because the computations done in the real field with $25$-bit precision have slightly affected the coefficients of $p_\\mathbf{z}(t)$ and, therefore, some real roots of $p_\\mathbf{z}(t)$ have disappeared. Now with a randomly chosen complex component, we can take $\\mathbf{z} =$\\texttt{ vector([2,3,4]) + I*vector([-5,2,-8])} and carry out the same computations in the complex field with 25-bit precision; the result recovers all 8 projections with much better precision: \\texttt{ComplexField(25): [-54.56, -31.81, -26.48, -15.93, -7.84, 11.49, 63.78, 82.30]}\n\n\n\n\\section{Conclusions}\n\nIn our computational experiments with exact arithmetic and precise measurements we achieved the expected performance and precision guarantees and have improved the original algorithm suggested in \\cite{GLPR12} in certain respects. Namely, we implemented a significantly more robust matching procedure for the vertex projections by recovering projections of the vertices on a complex plane instead of a single direction recovery as was proposed in the original work; we implemented an easier and more practical procedure based on Pade approximation to recover the projections on the given complex plane and\/or single real axis. 
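For reference, the core of this Pade\/Prony projection-recovery step can be sketched in a few lines of NumPy; this is a simplified illustration only, assuming noise-free scaled moments $c_0,\\dots,c_{2N-1}$, a known number of vertices $N$ and a general position direction (the function name \\texttt{recover\\_projections} is ours, and the actual Sage implementation additionally controls the working precision):\n\\begin{verbatim}\nimport numpy as np\n\ndef recover_projections(c, N):\n    # Prony-type step: the scaled moments satisfy\n    #   sum_j a_j c_{k+j} = -c_{k+N},  k = 0, ..., N-1,\n    # and the monic polynomial t^N + a_{N-1} t^{N-1} + ... + a_0\n    # has the projections <v_i, z> as its roots.\n    H = np.array([[c[k + j] for j in range(N)] for k in range(N)])\n    rhs = -np.array([c[k + N] for k in range(N)])\n    a = np.linalg.solve(H, rhs)\n    return np.roots(np.concatenate(([1.0], a[::-1])))\n\\end{verbatim}\nThe same routine works verbatim with complex moments computed for $\\mathbf{z}=\\mathbf{z}_{re}+i\\cdot\\mathbf{z}_{im}$, in which case its output directly yields the matched pairs of projections described above.\n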
One of the interesting implications of the former methodology is that harmonic moments (polynomials $p(\\mathbf{x})$, s.t. $\\Delta p(x) = 0$) are sufficient to recover vertices of any convex polytope as well as vertices of a non-convex polytope, if the respective coefficients at the vertices do not vanish. We note that there are examples of different non convex polytopes with exactly the same set of harmonic moments.\n\n\nOn the negative side, in the numerical experiments with bounded precision we have seen a very high sensitivity of our methodology to numerical inaccuracies.\nThe latter is an unavoidable obstacle to the practical usage of our algorithm.\n\n\n\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{problem_description}\n\n\\begin{wrapfigure}{tr}{0.45\\textwidth}\n\\vspace*{-5.8ex}\n \\begin{center}\n \\includegraphics[width=0.45\\textwidth]{src\/figs\/valuation-game-overview.pdf}\n \\end{center}\n \\vspace*{-2ex}\n \\caption{\n An energy based treatment of cooperative games leads to a series of new player valuations: $K$-step variational values, satisfying basic valuation axioms: null player, marginalism \\& symmetry.}\n \\vspace*{-4.0ex}\n \\label{fig_overview}\n\\end{wrapfigure}\n\\markchange{\nValuation problems are becoming increasingly more significant in various\nmachine learning applications, ranging from feature interpretation \\citep{lundberg2017unified}, data valuation \\citep{ghorbani2019data} to model valuation for ensembles \\citep{rozemberczki2021shapley}.} They are often formulated as a player valuation problem in cooperative games. A cooperative game $(\\ensuremath{{N}}, F(S))$ consists of a grand coalition $\\ensuremath{{N}} = \\{1,..., n\\}$ of $n$ players and a value function (a.k.a.~characteristic function) $F(S): 2^\\ensuremath{{N}} \\rightarrow {\\mathbb{R}}$ describing the collective payoff of a coalition\/cooperation $S$. A fundamental problem in cooperative game theory is to assign an importance vector (i.e., solution concept) $\\phi(F) \\in {\\mathbb{R}}^n$ to $n$ players.\n\n\n\n\nIn this paper, we explore a {\\em probabilistic treatment} of cooperative games $(\\ensuremath{{N}}, F(S))$.\n\\markchange{\nSuch a treatment makes it possible to conduct learning and inference\nin a unified manner, and will yield connections with classical\nvaluation criteria. }\nConcretely, we seek a probability distribution over coalitions $p(\\mathbf{S}=S)$\\footnote{Note that distributions over subsets of $\\ensuremath{{N}}$ are equivalent to distributions of $|\\ensuremath{{N}}|=n$ binary random variables $X_1, ..., X_n\\in \\{0,1\\}$: We use $X_i$ as the indicator function of the event $i\\in S$, or $X_i = [i\\in S]$.\nWith slight abuse of notation, we use $\\mathbf{S}$ as a random variable represented as sets and often abbreviate $p(\\mathbf{S}=S)$ as $p(S)$.}, measuring the odds that a specific coalition $S$ happens.\nGenerally, we consider distributions where the probability of a coalition $p(S)$ grows monotonically with the payoff $F(S)$.\n\n\nAmong all the possible probability mass functions (pmfs), how should we construct the proper $p(S)$? We advocate to choose the pmf with the maximum entropy $\\entropy{p}$.\nThis principle makes sense since maximizing the entropy minimizes the amount of prior information built into the distribution. In other words, it amounts to assuming nothing about what is unknown, i.e., choosing the most ``uniform'' distribution. 
Now finding a proper $p(S)$ becomes the following constrained optimization problem: suppose each coalition $S$ is associated with a payoff $F(S)$ with probability $p(S)$. We would like to maximize the entropy $\\entropy{p} = - \\sum_{S\\subseteq \\ensuremath{{N}}} p(S) \\log p(S)$, subject to the constraints that $\\sum_S p(S) = 1, p(S)\\geq 0$ and $\\sum_S p(S) F(S) = \\mu$ (i.e., the average payoff is known as $\\mu$). Solving this optimization problem (derivation in \\cref{ap_maxent}), we reach the \\emph{maximum entropy distribution}:\n\\begin{align}\\label{eq_ebm}\n p(S) = \\frac{\\exp(F(S)\/T)}{\\parti}, \\quad \\parti:=\\sum\\nolimits_{S'\\subseteq \\ensuremath{{N}}} \\exp(F(S')\/T),\n\\end{align}\nwhere $T>0$ is the temperature. This is an energy-based model \\citep[EBM, cf.][]{lecun2006tutorial} with $-F(S)$ as the energy function.\n\n\nThe above energy-based treatment admits two benefits: i) Where supervision is available, it enables learning of value functions $F(S)$ through efficient training techniques for energy-based learning, such as\nnoise contrastive estimation \\citep{gutmann2010noise} and score matching \\citep{hyvarinen2005estimation}. ii) Approximate inference techniques such as variational inference or sampling can be adopted to solve the valuation problem. \\markchange{\nSpecifically, it enables to perform mean-field variational inference where\nparameters of the inferred surrogate distribution can be used as principled\nplayer valuations.}\n\nBelow, we explore mean-field variational inference for the energy-based formulation (\\cref{fig_overview}).\nPerhaps surprisingly, by conducting only \\emph{one-step} fixed point iteration for maximizing the mean-field (ELBO) objective, we recover classical valuation criteria, such as the Shapley value \\citep{shapley1953value} and the Banzhaf value \\citep{penrose1946elementary,banzhaf1964weighted}.\nThis observation also further supports existing criteria, motivating them as decoupling the correlations among players via the mean-field approach.\nBy running the fixed point iteration for \\emph{multiple} steps, we achieve a trajectory of valuations, among which we define the valuation with the best conceivable decoupling error as the \\emph{\\varindex}.\nOur major contributions can be summarized as below:\n\n i) We present a theoretically justified energy-based treatment for cooperative games. Through mean field inference, we provide a unified perspective on popular game-theoretic criteria. %\n This provides an alternative motivation of existing criteria via a \\emph{decoupling} perspective, i.e., decoupling correlations among $n$ players through the mean-field approach.\n %\n ii) In pursuit of better decoupling performance, we propose to run fixed point iteration for \\emph{multiple} steps, which generates a trajectory of valuations. \\markchange{Under uniform initializations, they all satisfy a set of game-theoretic axioms, which are required for being suitable valuation criteria. 
We define the valuation with the best conceivable decoupling error as the {\\varindex}.\n %\n iii) Synthetic and real-world experiments demonstrate intriguing properties of the proposed \\varindex, including lower decoupling error and better valuation performance.}\n\n\n\n\n\n\n\\section{Preliminaries and Background}\n\n\\textbf{Notation.}\nWe assume\n$\\chara_i\\in{\\mathbb{R}}^n$ being the standard $i^\\text{th}$ basis vector and use boldface letters $\\mathbf{x}\\in {\\mathbb{R}}^\\ensuremath{{N}}$ and $\\mathbf{x}\\in {\\mathbb{R}}^n$\ninterchangebly to indicate an $n$-dimensional vector, where $x_i$ is\nthe $i^\\text{th}$ entry of $\\mathbf{x}$.\nBy default, $f(\\cdot)$ is used to denote a continuous function, and\n$F(\\cdot)$ to represent a set function.\nFor a differentiable function $f(\\cdot)$, $\\nabla f(\\cdot)$ denotes its gradient.\n$\\sete{x}{i}{k}$ is the operation of setting the\n$i^\\text{th}$ element of $\\mathbf{x}$ to $k$, while keeping all other elements\nunchanged, i.e., $\\sete{x}{i}{k}=\\mathbf{x}-x_i \\bas_i + k\\bas_i$.\nFor two sets $S$ and $T$, $S+T$ and $S-T$ represent set union and set difference, respectively. $|S|$ is the cardinality of $S$. $i$ is used to denote the singleton $\\{i\\}$ with a bit abuse of notation.\n\n\n\\textbf{Existing valuation criteria.}\nVarious valuation criteria have been proposed from the area of cooperative games,\namongst them the most famous ones are the Shapley value \\citep{shapley1953value} and the Banzhaf value, which is extended from\nthe Banzhaf power index \\citep{penrose1946elementary,banzhaf1964weighted}.\nFor the Shapley value, the importance assigned to player $i$ is:\n\\begin{align}\\label{def_banzhaf}\n\\ensuremath{\\text {Sh}}_i = \\sum\\nolimits_{S\\subseteq \\ensuremath{{N}} - i} [F(S + i) - F(S)] \\frac{|S|! (n - |S| -1)!}{n!}.\n\\end{align}\nOne can see that it gives less weight to $n\/2$-sized coalitions.\nThe Banzhaf value\nassigns the following\nimportance to player $i$:\n\\begin{align}\\label{def_shapley}\n\\ensuremath{\\text {Ba}}_i & = \\sum\\nolimits_{S\\subseteq \\ensuremath{{N}} - i} [F(S + i) - F(S)] \\frac{1}{2^{n - 1}},\n\\end{align}\nwhich uses uniform weights for all the coalitions.\nSee \\citet{greco2015structural} for a comparison of them.\n\n\\textbf{Valuation problems in machine learning.} Currently, most classes of valuation problems \\citep{lundberg2017unified,ghorbani2019data,sim2020collaborative,rozemberczki2021shapley} use Shapley value as the valuation criterion. Along with the rapid progress of model interpretation in the past decades \\citep{zeiler2014visualizing,ribeiro2016should,lundberg2017unified,sundararajan2017axiomatic,petsiuk2018rise,wang2021shapley2}, attribution-based interpretation aims to assign importance to the features for a specific data instance $(\\mathbf{x}\\in {\\mathbb{R}}^\\ensuremath{{N}}, y)$ given a black-box model ${\\cal M}$. Here each feature maps to a player in the game $(\\ensuremath{{N}}, F(S) )$, and the value function $F(S)$ is usually the model response, such as the predicted probability for classification problems, when feeding a subset $S$ of features to ${\\cal M}$. 
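To make the two classical criteria above concrete, the following minimal sketch (ours, purely illustrative) computes the Shapley and Banzhaf values exactly by enumerating all coalitions of a small toy game; the value function \\texttt{F} is a hypothetical placeholder rather than any value function used in this paper, and exact enumeration is only practical for small $n$.
\\begin{verbatim}
from itertools import combinations
from math import factorial

def shapley_banzhaf(n, F):
    """Exact Shapley and Banzhaf values by enumerating all S in N - {i}."""
    shapley, banzhaf = [0.0] * n, [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                marginal = F(set(S) | {i}) - F(set(S))
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                shapley[i] += marginal * factorial(size) * factorial(n - size - 1) / factorial(n)
                # Banzhaf weight: 1 / 2^(n - 1)
                banzhaf[i] += marginal / 2 ** (n - 1)
    return shapley, banzhaf

# Toy symmetric 3-player game: the payoff is the squared coalition size.
F = lambda S: float(len(S)) ** 2
print(shapley_banzhaf(3, F))  # both criteria assign 3.0 to every player here
\\end{verbatim}
For this symmetric toy game the two criteria coincide; in general they differ only through the coalition weights shown in the comments.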
The data valuation problem \\citep{ghorbani2019data} tries to assign values to the samples in the training dataset $\\ensuremath{{N}} = \\{(\\mathbf{x}_i, y_i)\\}_1^n$ for general supervised machine learning: one training sample corresponds to one player, and the value function $F(S)$ indicates the predictor performance on some test dataset given access to only a subset of the training samples in $S$. Model valuation in ensembles \\citep{rozemberczki2021shapley} measures importance of individual models in an ensemble in order to correctly label data points from a dataset, where each pre-trained model maps to a player and the value function measures the predictive performance of subsets of models.\n\n\n\\section{Related Work}\n\n\n\n\\textbf{Energy-based modeling.}\nEnergy based learning \\citep{lecun2006tutorial} is a classical learning framework that uses an energy function $E(\\mathbf{x})$ to measure\nthe quality of a data point $\\mathbf{x}$.\nEnergy based models have been applied to different domains, such as data generation \\citep{deng2020residual}, out-of-distribution detection \\citep{liu2020energy},\nreinforcement learning \\citep{haarnoja2017reinforcement}, memory modeling \\citep{bartunov2019meta}, discriminative learning \\citep{grathwohl2019your,gustafsson2020train} and\nbiologically-plausible training \\citep{scellier2017equilibrium}.\nEnergy based learning admits principled training methods, such as contrastive divergence \\citep{hinton2002training}, noise contrastive estimation \\citep{gutmann2010noise} and score matching \\citep{hyvarinen2005estimation}.\nFor approximate inference, sampling based approaches are mainly MCMC-style algorithms, such as stochastic gradient Langevin dynamics \\citep{Welling2011}.\nFor a wide class of EBMs with submodular or supermodular energies \\citep{djolonga14variational}, there exist provable mean field inference algorithms with constant factor approximation guarantees \\citep{bian2019optimalmeanfield,sahin2020sets,bian2020continuous}.\n\n\\textbf{Shapley values in machine learning.}\nShapley values have been extensively used for valuation problems in machine learning, including attribution-based interpretation \\citep{lipovetsky2001analysis,cohen2007feature,strumbelj2010efficient,owen2014sobol,datta2016algorithmic,lundberg2017unified,chen2018shapley,lundberg2018consistent,kumar2020shapley,williamson2020efficient,covert2020understanding,wang2021shapley}, data valuation \\citep{ghorbani2019data,jia2019empirical,jia2019towards,wang2020principled,fan2021improving}, collaborative machine learning \\citep{sim2020collaborative} and recently, model valuation in ensembles \\citep{rozemberczki2021shapley}. 
For a detailed overview of papers using Shapley values for feature interpretation, please see \\cite{covert2020explaining} and the references therein.\nTo alleviate the exponential computational cost of exact evaluation, various methods have been proposed to approximate Shapley values in polynomial time \\citep{ancona2017towards,ancona2019explaining}.\n\\cite{owen1972multilinear} proposes the multilinear extension purely as a representation of cooperative games and \\cite{okhrati2021multilinear} use it to develop sampling algorithms for Shapley values.\n\n\n\n\n\n\n\n\\section{Valuation for Cooperative Games: A Decoupling Perspective}\n\\label{sec_decoupling}\n\n\n\nIn the introduction, we have asserted that under the setting of cooperative games, the Boltzmann distribution (see \\cref{eq_ebm}) achieves the maximum entropy among all of the pmf functionals.\nOne can naturally view the importance assignment problem of cooperative games as a {\\em decoupling problem}: The $n$ players in a game $(\\ensuremath{{N}}, F(S))$ might be arbitrarily correlated in a very complicated manner. However, in order to assign each of them an {\\em individual} importance value, we have to decouple their interactions, which can be viewed as a way to simplify their correlations.\n\n\n\\looseness -1 We therefore consider a surrogate distribution $q(S; \\mathbf{x}) $ governed by parameters in $\\mathbf{x}$. $q$ has to be simple, given our intention to decouple the correlations among the $n$ players. A natural choice is to restrain $q(S; \\mathbf{x}) $ to be fully factorizable, which leads to a mean-field approximation of $p(S)$. The simplest form of $q(S; \\mathbf{x})$ would be a $n$ independent Bernoulli distribution, i.e.,\n$q(S; {\\mathbf{x}}):= \\prod_{i\\in S}x_i \\prod_{j\\notin S}(1-x_j), \\mathbf{x}\\in\n[0,1]^n$.\nGiven a divergence measure $D(\\cdot \\| \\cdot)$ for probability distributions, we can define the \\emph{best conceivable decoupling error} to be the\ndivergence between $p$ and the best possible $q$.\n\n\n\\begin{definition}[Best Conceivable Decoupling Error]\n\\label{def_decoupling_error}\nConsidering a cooperative game $(\\ensuremath{{N}}, F(S))$, and given a divergence measure $D(\\cdot \\| \\cdot)$ for probability distributions,\nthe \\emph{decoupling error} is defined as the divergence between $q$ and $p$: $D(q \\| p)$, and\nthe best conceivable decoupling error is defined as the\ndivergence between the best possible $q$ and $p$:\n\\begin{align}\n D^* := \\min_{q} D(q \\| p).\n\\end{align}\n\\end{definition}\nNote that the \\emph{best conceivable decoupling error}\n$D^*$ is closely related to the intrinsic coupling amongst $n$ players: if all the players are already independent with each other, then $D^*$ could be zero.\n\n\n\\subsection{Mean Field Objective for EBMs}\n\\label{sec_meanfield_lowerbounds}\n\n\\looseness -1 If we consider the \\emph{decoupling error} $D(q \\| p)$ to be the Kullback-Leibler divergence between $q$ and $p$, then we recover the mean field approach\\footnote{Notably, one could also apply the reverse KL divergence $\\kl{p}{q}$, which would lead to an expectation propagation \\citep{minka2001expectation} treatment of cooperative games.}.\nGiven the EBM formulation in \\cref{eq_ebm},\nthe classical mean-field inference approach aims to approximate $p(S)$ by\na fully factorized product distribution\n$q(S; {\\mathbf{x}}):= \\prod_{i\\in S}x_i \\prod_{j\\notin S}(1-x_j), \\mathbf{x}\\in\n[0,1]^n$,\nby minimizing the distance measured w.r.t.~the 
Kullback-Leibler\ndivergence between $q$ and $p$.\nSince $\\kl{q}{p}$ is non-negative, we have:\n\\begin{align}\\notag\n& 0\\leq \\kl{q}{p} = \\sum\\nolimits_{S\\subseteq \\ensuremath{{N}}} q(S; {\\mathbf{x}})\n\\log\\frac{q(S; {\\mathbf{x}})}{p(S)}\\\\\n& = - {\\mathbb E}_{q(S; {\\mathbf{x}})} [\\log p(S)] - \\entropy{q(S; {\\mathbf{x}})} \\\\\\label{eq_ebm_logp}\n& = - {\\mathbb E}_{q(S; {\\mathbf{x}})} [ \\frac{F(S)}{T} - \\log \\parti ] - \\entropy{q(S; {\\mathbf{x}})} \\\\\n&=\n -\\sum_{S\\subseteq \\ensuremath{{N}}} \\frac{F(S)}{T} \\prod_{i\\in S}x_i \\prod_{j\\notin S}(1-x_j)+ \\sum\\nolimits_{i=1}^{n} [x_i\\log x_i + (1-x_i)\\log(1-x_i)] + \\log \\parti.\n\\end{align}\nIn \\cref{eq_ebm_logp} we plug in the EBM formulation that $\\log p(S) = \\frac{F(S)}{T} - \\log \\parti$.\n Then one can get\n\\begin{align}\\notag\n\\log \\parti & \\geq \\sum_{S\\subseteq \\ensuremath{{N}}} \\frac{F(S)}{T} \\prod_{i\\in S}x_i \\prod_{j\\notin S}(1-x_j)- \\sum\\nolimits_{i=1}^{n} [x_i\\log x_i + (1-x_i)\\log(1-x_i)] \\\\ \\label{elbo}\n& = \\frac{\\multi(\\mathbf{x})}{T} + \\entropy{q(S; \\mathbf{x})} := (\\text{\\textcolor{blue}{ELBO}})\n\\end{align}\nwhere $\\entropy{\\cdot}$ is the entropy, ELBO stands for the {\\em evidence lower bound}, and\n\\begin{align}\n \\multi(\\mathbf{x}) := \\sum\\nolimits_{S\\subseteq \\ensuremath{{N}}} F(S) \\prod\\nolimits_{i\\in S}x_i \\prod\\nolimits_{j\\notin S}(1-x_j), \\mathbf{x} \\in [0, 1]^n,\n\\end{align}\nis the {\\em multilinear extension} of $F(S)$ \\citep{owen1972multilinear,calinescu2007maximizing}.\nNote that the multilinear extension plays a central role in modern combinatorial optimizaiton techniques \\citep{feige2011maximizing}, especially for guaranteed submodular maximization problems \\citep{krause2014submodular}.\n\n\nMaximizing $(\\text{ELBO})$ in \\cref{elbo} amounts to\nminimizing the Kullback-Leibler divergence between $q$ and $p$. If one solves this optimization problem to optimality, one can obtain the $q(S; \\mathbf{x}^*)$ with the best conceivable decoupling error. Here $x^*_i$ describes the odds that player $i$ shall participate in the game, so it can be naturally used to define the importance score of each player.\n\n\\begin{definition}[\\varindex of Cooperative Games]\nConsider a cooperative game $(\\ensuremath{{N}}, F(S))$ and its mean field approximation. Let $\\mathbf{x}^*$ be the variational marginals with the best conceivable decoupling error, we define $\\mathbf{s}^*:= T\\sigma^{-1}(\\mathbf{x}^*)$ to be the variational index of the game. Formally,\n\\begin{align}\\label{eq_kl}\n \\mathbf{x}^* = {\\arg\\min}_{\\mathbf{x}} \\kl{q(S; \\mathbf{x})}{p(S)},\n\\end{align}\nwhere $\\mathbf{x}^*$ can be obtained by maximizing the ELBO objective in \\cref{elbo}, and $\\sigma^{-1}( \\cdot )$ is the inverse of the sigmoid function, i.e. $\\sigma^{-1}(x) = \\log\\frac{x}{1-x}$. For a vector it is applied element-wise.\n\\end{definition}\n\n\\subsection{Algorithms for Calculating the \\varindex}\n\\label{subsec_algorithms}\n\n\\textbf{Equilibrium condition.}\nFor coordinate $i$, the partial\nderivative of the multilinear extension is $\\nabla_i\\multi(\\mathbf{x})$, and for\nthe entropy term, it is $\\nabla_i \\entropy{q(S; \\mathbf{x})} = \\log \\frac{1-x_i}{x_i}$. 
By setting the partial derivative of the ELBO in \\cref{elbo} to 0, we obtain the equilibrium condition:\n\\begin{align}\nx^*_i = \\sigma(\\nabla_i \\multi(\\mathbf{x}^*)\/T) = \\bigl(1+ \\exp(-\\nabla_i \\multi(\\mathbf{x}^*)\/T) \\bigr)^{-1}, \\quad \\forall i \\in N,\n\\end{align}\nwhere $\\sigma$ is the sigmoid function. This equilibrium condition implies that one cannot change the value assigned to any single player to further improve the overall decoupling performance.\nIt also implies the fixed point iteration\n$x_i \\leftarrow \\sigma(\\nabla_i \\multi(\\mathbf{x})\/T)$.\nWhen updating each coordinate sequentially, we recover the classic naive mean field algorithm, as shown in\n\\cref{app_alg}.\n\n\nInstead, we suggest using the full-gradient method shown in \\cref{alg_mfi_ga} for maximizing the ELBO objective. As we will see later, the resulting valuations satisfy certain game-theoretic axioms.\nThe algorithm needs an initial marginal vector $\\mathbf{x}^{0}\\in [0,1]^n$ and the number of epochs $K$. After $K$ steps of fixed point iteration, it returns the estimated marginals $\\mathbf{x}^{K}$.\n\\vspace{-.1cm}\n\\begin{algorithm}[htbp]\n\t\\caption{{Mean Field Inference with Full Gradient}: \\textcolor{blue}{$\\mfi(\\mathbf{x}; K)$}}\\label{alg_mfi_ga}\n\t\\KwIn{A cooperative game $(\\ensuremath{{N}}, F(S))$ with $n$ players. Initial marginals $\\mathbf{x}^\\pare{0}\\leftarrow \\mathbf{x} \\in [0, 1]^n$. \\#epochs $K$.\n\t}\n\t\\KwOut{Marginals after $K$ steps of iteration: $\\mathbf{x}^\\pare{K}$}\n\t\\For{$k = 1 \\rightarrow K$}{\n\t\t{$\\mathbf{x}^\\pare{k} \\leftarrow \\sigma(\\nabla \\multi(\\mathbf{x}^\\pare{k-1})\/T) = \\bigl(1+ \\exp(-\\nabla \\multi(\\mathbf{x}^\\pare{k-1})\/T) \\bigr)^{-1}$ \\;}\n\t}\n\\end{algorithm}\n\\vspace{-.1cm}\n\nIf \\cref{alg_mfi_ga} solves the optimization problem to optimality, we obtain the \\varindex. However, maximizing the ELBO is in general a non-convex\/non-concave problem, and hence one can only guarantee reaching a stationary solution.
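As a minimal illustration of \\cref{alg_mfi_ga}, the sketch below (ours) runs the full-gradient fixed point iteration with $\\nabla \\multi(\\mathbf{x})$ computed exactly by enumeration, which is only practical for small $n$; the toy value function and the choice $T=1$ are illustrative assumptions. With $K=1$ and the uniform initialization $0.5\\cdot\\mathbf{1}$, the mapped marginals $T\\sigma^{-1}(\\cdot)$ coincide with the Banzhaf value, in line with the connection discussed below.
\\begin{verbatim}
import itertools, math

def grad_multilinear(F, n, x):
    """Exact gradient of the multilinear extension:
    grad_i = E_{S ~ q(S; x | x_i <- 0)}[F(S + i) - F(S)]."""
    grad = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                S = set(S)
                prob = 1.0
                for j in others:  # product distribution with x_i clamped to 0
                    prob *= x[j] if j in S else 1.0 - x[j]
                grad[i] += prob * (F(S | {i}) - F(S))
    return grad

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mfi(F, n, x0, K, T=1.0):
    """K steps of the fixed point iteration x <- sigma(grad f_mt(x) / T)."""
    x = list(x0)
    for _ in range(K):
        x = [sigmoid(g / T) for g in grad_multilinear(F, n, x)]
    return x

def k_step_variational_values(F, n, x0, K, T=1.0):
    """K-step variational values: T * sigma^{-1}(mfi(x0; K))."""
    return [T * math.log(xi / (1.0 - xi)) for xi in mfi(F, n, x0, K, T)]

# Toy 3-player game (illustrative only).
F = lambda S: float(len(S)) ** 2
print(k_step_variational_values(F, 3, [0.5] * 3, K=1))   # 1-step values: the Banzhaf value
print(k_step_variational_values(F, 3, [0.5] * 3, K=10))  # approximates the variational index
\\end{verbatim}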
Below, when we say \\varindex, we therefore refer to its approximation obtained via \\cref{alg_mfi_ga} by default.\nMeanwhile, the $\\mfi(\\mathbf{x}; K)$ subroutine also defines a series of marginals, which enjoy interesting properties as we show in the next part.\nSo we define variational valuations through intermediate solutions of $\\mfi(\\mathbf{x}; K)$.\n\\vspace{-.1cm}\n\\begin{snugshade}\n\\vspace{-.2cm}\n\\begin{definition}[$K$-Step Variational Values]\n\\label{def_kstep_var_vals}\nConsidering a cooperative game $(\\ensuremath{{N}}, F(S))$ and its mean field approximation by \\cref{alg_mfi_ga}, we define the \\emph{$K$-Step Variational Values} initialized at $\\mathbf{x}$ as:\n\\begin{align}\n T \\sigma^{-1}(\\mfi(\\mathbf{x}; K)),\n\\end{align}\nwhere $\\sigma^{-1}()$ is the inverse of the sigmoid function ($\\sigma^{-1}(x) = \\log\\frac{x}{1-x}$).\n\\end{definition}\n\\vspace{-.2cm}\n\\end{snugshade}\n\\vspace{-.2cm}\nNotice when running more steps, the $K$-Step variational value will be more close to the \\varindex.\nThe gradient $\\nabla \\multi(\\mathbf{x})$ itself is defined with respect to an exponential sum via the multilinear extension.\nNext we show how it can be approximated via principled sampling methods.\n\n\\textbf{Sampling methods for estimating the partial derivative.}\nThe partial derivative follows,\n\\begin{flalign}\\label{eq_partial_derivative}\n\\nabla_i \\multi(\\mathbf{x}) &\n= {\\mathbb E}_{q(S ; (\\sete{x}{i}{1}))} [F(S)] - {\\mathbb E}_{q(S; (\\sete{x}{i}{0}))} [F(S)]\\\\\\notag\n& =\\multi(\\sete{x}{i}{1}) - \\multi(\\sete{x}{i}{0})\n\\\\\\notag\n& = \\sum_{S\\subseteq \\ensuremath{{N}}, S\\ni i } F(S)\n\\prod_{j \\in S - i}x_j\n\\prod_{j'\\notin S}(1-x_{j'})\n \\quad - \\sum_{S\\subseteq\n\t\\ensuremath{{N}} - i }\\ F(S) \\prod_{j\\in\n\tS} x_j \\prod_{j'\\notin S, j'\\neq i}\n(1-x_{j'})\\\\\\notag\n& = \\sum_{S\\subseteq\n\t\\ensuremath{{N}} - i }\\ [F(S+ i) - F(S)] \\prod_{j\\in\n\tS} x_j \\prod_{j'\\in \\ensuremath{{N}} - S - i}\n(1-x_{j'})\\\\\\notag\n& =\\! \\sum_{S\\subseteq\n\t\\ensuremath{{N}} - i }\\ [F(S+ i) - F(S)] q(S; (\\sete{x}{i}{0})) = {\\mathbb E}_{S \\sim q(S; (\\sete{x}{i}{0}))}\\ [F(S+ i) - F(S)].\n\\end{flalign}\nAll of the variational criteria are based on the calculation of the partial derivative $\\nabla_i \\multi(\\mathbf{x}) $, which can be approximated by Monte Carlo sampling since $\\nabla_i \\multi(\\mathbf{x}) = {\\mathbb E}_{S \\sim q(S; (\\sete{x}{i}{0}))}\\ [F(S+ i) - F(S)]$: we first sample $m$ coalitions $S_k, k=1,...,m$ from the surrogate distribution $q(S; (\\sete{x}{i}{0}))$, then approximate the expectation by the average $\\frac{1}{m}\\sum_{k=1}^m\\ [F(S_k+ i) - F(S_k)]$.\nAccording to the Chernoff-Hoeffding bound \\citep{hoeffding1963probability}, the approximation will be arbitrarily close to the true value with increasingly more samples: With probability at least\n$1- \\exp(-m\\epsilon^2\/2)$, it holds that\n$|\\frac{1}{m}\\sum_{k=1}^{m} [F(S_k+ i) - F(S_k)] - \\nabla_i \\multi(\\mathbf{x})| \\leq \\epsilon \\max_S\n|F(S+i) - F(S)| $, for all $\\epsilon > 0$.\n\n\n\\textbf{Roles of the initializer $\\mathbf{x}^\\pare{0}$ and the temperature $T$.}\nThis can be understood in the following respects: 1) The initializer $\\mathbf{x}^0$ represents the initial credit assignments to the $n$ players, so it denotes the prior knowledge\/initial belief of the contributions of the players;\n2) If one just runs \\cref{alg_mfi_ga} for one step, $\\mathbf{x}^0$ matters greatly to the output. 
However, if one runs \\cref{alg_mfi_ga} for many steps,\n$\\mathbf{x}^k$ will converge to the stationary points of the ELBO objective.\nEmpirically, it takes around 5$\\sim$10 steps to converge. %\nThe temperature $T$ controls the ``spreading'' of importance assigned to the players: A higher $T$ leads to flatter assignments, and a lower $T$ leads to more concentrated assignments.\n\n\\iffalse\n\\subsection{Efficiency of Calculating \\varindex and Variational Values}\n\n\n\n\\cref{alg_mfi_ga} calculates the $K$-step variational values, 1-step variatioinal value has the same computational cost as that of Shapley value and Banzhaf value, since they all need to evaluate the gradient of multilinear extension. Sampling methods could help with approximating all of the three criteria when there are a large number of players.\nVariational Index can be approximated by $K$-step variational value, the number $K$ depends on when \\cref{alg_mfi_ga} will converge, which follows,\n\\begin{proposition}[Convergence rate of maximizing ELBO]\nUnder the setting of maximizing ELBO,\n$\\mathbf{x}^k$ will converge to some stationary point $\\mathbf{x}^*$ with a least a sublinear rate of convergence.\nSpecifically, let $f^\\text{ELBO}(\\mathbf{x})$ be the ELBO objective in \\cref{elbo}. Suppose $f^\\text{ELBO}(\\mathbf{x})$ is $L$-smooth. For gradient ascent, in order to reach an $\\epsilon$-First order stationary point (i.e. $\\|\\nabla f^\\text{ELBO}(\\mathbf{x}) \\|\\leq \\epsilon$, it needs $O(\\frac{\\sqrt{L(f^\\text{ELBO}(\\mathbf{x}^*) - f^\\text{ELBO}(\\mathbf{x}^0))}}{\\epsilon^2})$ steps.\n\\end{proposition}\n\n\\begin{proof}\n\nWith sufficiently many steps, the stationary condition of \\cref{alg_mfi_ga} will be satisfied, based on the analysis of mean field approximation in \\cite{wainwright2008graphical}.\nThat is, \\cref{alg_mfi_ga} will converge to some stationary point $\\mathbf{x}^*$. One can easily see that $\\mathbf{x}^*$ is also the stationary point of maximizing the ELBO objective in \\cref{elbo}, and the stationary point of minimizing the KL divergence in \\cref{eq_kl}.\n\nRegarding the rate of convergence, one can resort to the results of optimizing $L$-smooth non-convex functions using the gradient descent algorithm, for example,\n\\cite{jain2017non}\nto get the sublinear rate of convergence.\n\\end{proof}\n\\fi\n\n\\textbf{Computational efficiency of calculating \\varindex and variational values.}\n\\cref{alg_mfi_ga} calculates the $K$-step variational values, \\markchange{1-step variational value has the same computational cost as that of\nBanzhaf value and of the integrand of the line integration of Shapley value in \\cref{eq_shapley_line} below, since they all need to evaluate $\\nabla \\multi(\\mathbf{x}) $.}\nSampling methods could help with approximating all of the three criteria when there are a large number of players.\nThe Variational Index can be approximated by the $K$-step variational value, where the number $K$ depends on when \\cref{alg_mfi_ga} converges. 
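When $n$ is large, the enumeration behind $\\nabla_i \\multi(\\mathbf{x})$ is no longer feasible, and the Monte Carlo estimator from \\cref{eq_partial_derivative} can be used instead; the sketch below (ours, with an arbitrary sample size) illustrates the per-coordinate estimate.
\\begin{verbatim}
import random

def grad_i_monte_carlo(F, n, x, i, m=1000, seed=0):
    """Estimate grad_i f_mt(x) = E_{S ~ q(S; x | x_i <- 0)}[F(S + i) - F(S)]
    from m coalitions sampled from the fully factorized surrogate q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        # Include each player j != i independently with probability x[j]; x_i is clamped to 0.
        S = {j for j in range(n) if j != i and rng.random() < x[j]}
        total += F(S | {i}) - F(S)
    # The Hoeffding bound gives an error of order max_S |F(S+i) - F(S)| / sqrt(m).
    return total / m
\\end{verbatim}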
One can easily show that, under the setting of maximizing ELBO, $\\mathbf{x}^k$ will converge to some stationary point $\\mathbf{x}^*$, based on the analysis of mean field approximation in \\cite{wainwright2008graphical}.\nWe have also empirically verified the convergence rate of \\cref{alg_mfi_ga} in \\cref{subsec_convergence},\nand find that it converges within 5 to 10 steps.\nSo the computational cost is roughly similar as that of Shapley value and Banzhaf value.\n\n\n\n\n\n\\subsection{Recovering Classical Criteria}\n\n\n\\looseness -1 Perhaps surprisingly, it is possible to recover classical valuation criteria via the $K$-step variational values as\nin \\cref{def_kstep_var_vals}.\nFirstly, for Banzhaf value, by comparing with \\cref{eq_partial_derivative} it reads,\n\\begin{align}\n\\ensuremath{\\text {Ba}}_i = \\sum\\nolimits_{S\\subseteq \\ensuremath{{N}} - i} [F(S + i) - F(S)] \\frac{1}{2^{n - 1}}\n& = \\nabla_i \\multi(0.5*\\mathbf{1}) = T \\sigma^{-1}(\\mfi(0.5*\\mathbf{1}; 1) ),\n\\end{align}\nwhich is the $1$-step variational value initialied at $0.5*\\mathbf{1}$.\nWe can also recover the Shapley value through its connection to the multilinear extension \\citep{owen1972multilinear,grabisch2000equivalent}:\n\\begin{align}\\label{eq_shapley_line}\n\\ensuremath{\\text {Sh}}_i = \\int_0^1 \\nabla_i \\multi(x\\mathbf{1}) dx = \\int_0^1 T \\sigma^{-1}(\\mfi(x\\mathbf{1}; 1) ) dx,\n\\end{align}\nwhere the integration denotes integrating the partial-derivative of the multilinear extension\nalong the main diagonal of the unit hypercube.\nA self-contained proof is given in \\cref{app_recoving_proof}.\n\n\nThese insights offer a novel, unified interpretation of the two classical valuation indices: both the Shapley value and Banzhaf value can be viewed as approximating the variational index by running {\\em one step} of fixed point iteration for the decoupling (ELBO) objective. Specifically, for the Banzhaf value, it initializes $\\mathbf{x}$ at $0.5*\\mathbf{1}$, and runs one step of fixed point iteration. For the Shapley value, it also performs a one-step fixed point approximation. However, instead of starting at a single initial point, it averages over all possible initializations through the line integration in \\cref{eq_shapley_line}.\n\n\n\\textbf{Relation to probabilistic values.}\nProbabilistic values for games \\citep{weber1988probabilistic,monderer2002variations} capture a class of solution concepts, where the value of each player is given by some averaging of the player's marginal contributions to coalitions, and the weights depend on the coalitions only.\nAccording to \\cite[Equation (3.1)]{monderer2002variations}, a solution $\\phi$ is called a probabilistic value, if for each player $i$, there exists a probability $p^i \\in \\Delta(C^i)$, such that $\\phi_i$ is the expected marginal contribution of $i$ w.r.t. $p^i$. 
Namely,\n$\n\\phi_i = \\sum_{S\\in C^i} p^i(S) [F(S+i) - F(S)],\n $\nwhere $C^i$ is the set of all subsets of $N-i$, and $\\Delta(C^i)$ is the set of all probability measures on $C^i$.\nOne can easily see that, for any fixed $\\mathbf{x}$, 1-step variational value in \\cref{def_kstep_var_vals} is a probabilistic value with\n$p^i(S) = q(S; (\\mathbf{x}|x_i \\leftarrow 0))$,\nwhere $q(S; \\mathbf{x})$ is the surrogate distribution in our EBM framework.\n\n\n\n\\subsection{Axiomatisation of $K$-Step Variational Values}\n\n\n\nOur EBM framework introduces a series of variational values controlled by $T$ and the running step number $K$.\nWe now establish that the variational values $T \\sigma^{-1} (\\mfi(\\mathbf{x}; K))$ in \\cref{def_kstep_var_vals} satisfy certain game-theoretic axioms (see \\cref{appen_axioms} for definitions of five common axioms: Null player, Symmetry, Marginalism, Additivity and Efficiency).\n\\looseness -1 We prove that all the variational values in the trajectory satisfy three fundamental axioms: null player, marginalism and symmetry. The detailed proof is deferred to \\cref{app_proof_thm1}.\nWe expect it to be very difficult to find \\emph{equivalent} axiomatisations of the series of variational values, which we leave for future work.\nMeanwhile, our methods incur a decoupling and fairness tradeoff by tuning the hyperparameters $K$ and $T$.\n\\vspace{-.1cm}\n\\begin{snugshade}\n\\vspace{-.2cm}\n\\begin{restatable}[Axiomatisation of $K$-Step Variational Values of \\cref{def_kstep_var_vals}]{theorem}{restattheoremone}\n\\label{thm_axiom}\nIf initialized uniformly, i.e., $\\mathbf{x}^0 = x\\textbf{1}, x\\in [0, 1]$, all the variational values in the trajectory $T\\sigma^{-1}\n(\\mfi(\\mathbf{x}; k)), k=1,2,3...$\nsatisfy the null player, marginalism and symmetry axioms.\n\\end{restatable}\n\\vspace{-.2cm}\n\\end{snugshade}\n\\vspace{-.2cm}\n\\markchange{\nAccording to \\cref{thm_axiom}, our proposed $K$-step variational values satisfy the minimal set of axioms often associated with appropriate valuation criteria.\nNote that specific realizations of the $K$-step variational values can also satisfy more axioms, for example,\nthe $1$-step variational value initialized at $0.5*\\mathbf{1}$\n also satisfies the additivity axiom. Furthermore, we have the following observations:}\n\n\\markchange{\n\\textbf{Satisfying more axioms is not essential for valuation problems.} Notably, in cooperative game theory, one line of work is to seek for solution concepts that would satisfy more axioms.\nHowever, for valuation problems in machine learning, this is arguably not essential. For example, similar as argued by \\cite{ridaoui2018axiomatisation}, efficiency does not make sense for certain games.\nWe give a simple illustration in \\cref{append_misc}, which further shows that whether more axioms shall be considered really depends on the specific scenario being modeled, which will be left for important future work. }\n\n\n\n\n\n\n\n\\section{Empirical Studies}\nThroughout the experiments, we are trying to understand the following: 1) Would the proposed \\varindex have lower decoupling error compared to others? 2) Could the proposed \\varindex gain benefits compared to the classical valuation criteria for valuation problems?\n\n\\begin{figure}[tbp]\n\t%\n\t%\n\t\\centering\n\t\t\\vspace{-1cm}\n\t\\includegraphics[width=1\\textwidth]{src\/figs\/figs-data-removing3-rebuttal.pdf}\n\t\\caption{Data removal results. Numbers in the legend are the \\emph{decoupling} errors. 
Columns: 1st: synthetic data; 2nd: breast cancer data with 569 samples; 3rd: digits data with 1797 samples. Specific configurations (e.g., temperature) are put inside the figure texts.}\n\t\\label{fig_data_removing}\n\t\t\\vspace{-.4cm}\n\t%\n\\end{figure}\n\n\\looseness -1 Since we are mainly comparing the quality of different criteria, it is necessary to rule out the influence of approximation errors when estimating their values. So we focus on small-sized problems where one can compute the exact values of these criteria in a reasonable time. Usually this requires the number of players to be no more than 25. Meanwhile, we have also conducted experiments with a larger number of players in \\cref{apendix_large_players}, in order to show the efficiency of sampling methods.\nWe choose $T$ empirically from the values of 0.1, 0.2, 0.5, 1.0.\nWe choose $K$ such that \\cref{alg_mfi_ga} would converge. Usually, it takes around 5 to 10 steps to converge. We give all players a fair start, so $\\mathbf{x}^0$\n was initialized to be $0.5 \\times \\mathbf{1}$. Code is available at \\url{https:\/\/valuationgame.github.io}.\n\n\nWe first conduct synthetic experiments on submodular games (details defered to \\cref{exp_syn}), in order to verify the quality of solutions in terms of the true marginals $p(i\\in \\mathbf{S})$. One can conclude that \\varindex obtains better performance in terms of MSE and Spearman's rank correlation compared to\nthe one-point solutions (Shapley value and Banzhaf value) in all experiments.\nMore experimental results on data point and feature valuations are deferred to \\cref{appd_more_exps}.\n\n\n\\subsection{Experiments on Data Valuations}\n\n\nWe follow the setting of \\cite{ghorbani2019data} and reuse the code of \\url{https:\/\/github.com\/amiratag\/DataShapley}.\nWe conduct data removal: training samples are sorted according to the valuations returned by different criteria, and then samples are removed in that order to check how much the test accuracy drops. Intuitively, the best criteria would induce the fastest drop of performance.\nWe experiment with the following datasets:\na) Synthetic datasets similar as that of \\cite{ghorbani2019data}; b)\nThe breast cancer dataset, which is a binary classification dataset with 569 samples; c) The digits dataset, that is a 10-class classification dataset with 1797 samples. The above two datasets are both from UCI Machine Learning repository (\\url{https:\/\/archive.ics.uci.edu\/ml\/index.php}).\nSpecifically, we cluster data points into groups and studied two settings: 1) Grouping the samples randomly; 2) Clustering the samples with the k-means algorithm. For simplicity, we always use equal group sizes. The data point removal corresponds to singleton groups.\n\\cref{fig_data_removing} shows the results.\nOne can observe that in certain situations the \\varindex achieves the fastest drop rate. It always achieves the lowest decoupling error (as shown in the legends in each of the figures).\nSometimes \\varindex and Banzhaf show similar performance. 
We expect that this is because the Banzhaf value is a one-step approximation of \\varindex, and for the specific problem considered, the ranking of the solutions does not change after one-step of fixed point iteration.\nThere are also situations where\nthe rankings of the three criteria are not very distinguishable,\nhowever, the specific values are also very different since the\ndecoupling error differs.\n\n\n\\subsection{Experiments on Feature Valuations\/Attributions}\n\n\\begin{figure}[tbp]\n\t%\n\t%\n\t\\centering\n\t\t\\vspace{-1.2cm}\n\t\\includegraphics[width=1\\textwidth]{src\/figs\/figs-fea.pdf}\n\t\\caption{First column: Change of predicted probabilities when removing features. The \\emph{decoupling error} is included in the legend. Last three columns: waterfall plots of feature importance.\n\t}\n\t\\label{fig_feature_removing}\n\t\t\\vspace{-.3cm}\n\t%\n\\end{figure}\nWe follow the setting of \\cite{lundberg2017unified} and reuse the code of \\url{https:\/\/github.com\/slundberg\/shap} with an MIT License.\nWe train classifiers on the Adult dataset\\footnote{\\url{https:\/\/archive.ics.uci.edu\/ml\/datasets\/adult}}, which predicts whether an adult's income exceeds 50k dollar per year based on census data. It has\n48,842 instances and 14 features such as age, workclass, occupation, sex and capital gain (12 of them used).\n\n\\textbf{Feature removal results.}\nThis experiment follows a similar fashion as the data removal experiment: we remove the features one by one according to the order defined by the returned criterion, then observe the change of predicted probabilities. \\cref{fig_feature_removing} reports the behavior of the three criteria.\nThe first row shows the results from an xgboost classifier (accuracy: 0.893), second row a logistic regression classifier (accuracy: 0.842), third row a multi-layer perceptron (accuracy: 0.861). For the probability dropping results, \\varindex usually induces the fastest drop, and it always enjoys the smallest decoupling error, as expected from its mean-field nature. From the waterfall plots, one can see that the three criteria indeed produce different rankings of the features. Take the first row for example. All criteria put ``Capital Loss'' and ``Relationship'' as the first two features. However, the remaining features have different ranking: \\varindex and Banzhaf indicate that ``Marital Status'' should be ranked third, while Shapley ranks it in the fourth position. It is hard to tell which ranking is the best because: 1) There is no golden standard to determine the true ranking of features; 2) Even if there exists a ground truth ranking of some ``perfect model'', the trained xgboost model here might not be able to reproduce it, since it might not be aligned with the ``perfect model''.\n\n\\textbf{Average results.} We further provide the bar plots and averaged ranking across the adult datasets in \\cref{fig_stats}. From the bar plots one can see that different criterion has slightly different values for each feature on average. Average rankings in the table demonstrate the difference: The three methods do not agree on the colored features, for example, ``Age'', ``Education-Num'' and ``Captical Loss''.\n\n\\begin{figure}[tbp]\n\t%\n\t%\n\t\\centering\n\t\t\\vspace{-1.1cm}\n\t\\includegraphics[width=1.02\\textwidth]{src\/figs\/fig-stats-maintext.pdf}\n\t\\caption{Statistics on valuations with the \\emph{xgboost} classifier. First row: box plot of valuations.\n\tWe always consider the predicted probability of the ground truth label. 
``True'' means the samples with positive ground truth label and ``False'' means with the negative ground truth label. Second row: Average ranking of the 12 features. Colored texts denote different rankings among the three criteria.}\n\t\\label{fig_stats}\n\t\t\\vspace{-.3cm}\n\t%\n\\end{figure}\n\n\n\n\n\\subsection{Empirical Convergence Results of \\cref{alg_mfi_ga}}\n\\label{subsec_convergence}\n\n\\cref{tab_convergence} shows convergence results of \\cref{alg_mfi_ga} on\nfeature and data valuation experiments. The value in the cells are the stepwise difference of $\\mathbf{x}^k$ , $\\frac{\\|\\mathbf{x}^k - \\mathbf{x}^{k-1}\\|^2}{n}$, which is a classical criterion to measure the convergence of iterative algorithms.\nOne can clearly see that\n\\cref{alg_mfi_ga} converges in 5 to 10 iterations.\n\n\\begin{table}[h!]\n\\vspace{-.3cm}\n \\caption{Stepwise difference $\\frac{\\|\\mathbf{x}^k - \\mathbf{x}^{k-1}\\|^2}{n}$ of \\cref{alg_mfi_ga} for different experiments.}\n\\label{tab_convergence}\n\\centering\n\\footnotesize\n\\vspace{-.2cm}\n \\begin{tabular}{r c c c c c c }\n \\hline\n Step\/Iteration Num & 1 &\t2 &\t3 &\t5\t & 9 & 10 \\\\\n \\hline\nData Val (breast cancer) & 0.0023 & 3.61e-6 &1.53e-7 &2.77e-10 &9.12e-16&0 \\\\\nData Val (digits)&\t0.00099&\t5.93e-7&\t1.46e-8&\t8.92e-12&\t9.25e-18\t&0 \\\\\nData Val (synthetic)&\t0.00059&\t2.49e-8&\t3.13e-10&\t6.06e-14&\t0&\t0 \\\\\nFeature Val (xgboost)&\t0.0066&\t1.68e-5&\t8.71e-7&\t2.35e-9&\t1.75e-14&\t9.25e-16 \\\\\nFeature Val (LR)&\t0.0092&\t2.63e-5&\t1.44e-6&\t4.31e-9&\t2.14e-15&\t1.28e-16 \\\\\nFeature Val (MLP)&\t0.0040&\t4.86e-6&\t1.86e-7&\t2.84e-10&\t6.82e-16&\t3.20e-17\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\n\\textsc{\\large Discussions and Future Work.}\nWe have presented an energy-based treatment of cooperative games, in order to improve the valuation problem.\nIt is very worthwhile to explore more in the following directions:\n1) Choosing the temperature $T$.\nThe temperature controls the level of fairness since, when $T\\rightarrow \\infty$, all players have equal importance, when $T\\rightarrow 0$, whereas a player has either 0 or 1 importance (assuming no ties). 
Perhaps one can use an annealing-style algorithm in order to control the fairness level: starting with a high temperature and gradually decreasing it, one can obtain a series of importance values under different fairness levels.\n2) Given the probabilistic treatment of cooperative games, one can naturally add priors over the players, in order to encode more domain knowledge.\nIt may also make sense to consider conditioning and marginalization in light of practical applications.\n3) It is very interesting to explore the interaction of a group of players in the energy-based framework, which would result in an ``interactive'' index among size-$k$ coalitions.\n\n\n\n\n\\section*{Ethics Statement and Broader Impact}\n\n\nBesides the valuation problems explored in this work, cooperative game theory has already been applied to a wide range of disciplines, to name a few, economics, political science, sociology, biology, so this work could potentially contribute to broader domains as well.\n\n\\looseness -1 Meanwhile, we have to be aware of possible negative societal impacts, including: 1) negative side effects of the technology itself, for example, possible unemployment issues due to the reduced amount of the need of valuations by human beings; 2) applications in negative downstream tasks, for instance, the data point valuation technique could make it easier to conduct underground transactions of private data.\n\n\\section*{Reproducibility Statement}\n\nAll the datasets are publicly available as described in the main text.\nIn order to ensure reproducibility, we have made the efforts in the following respects: 1) Provide code as supplementary material. 2) Provide self-contained proofs of the main claims in \\cref{app_recoving_proof,app_proof_thm1}; 3) Provide more details on experimental configurations and experimental results in \\cref{appd_more_exps}.\n\n\n{\n\\typeout{}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Sec::introduction}\nAdvanced Video Coding (H.264\/AVC) \\cite{wiegand2003overview}, High-Efficiency Video Coding (H.265\/HEVC) \\cite{sullivan2012overview} are existing popular video coding standards. Versatile Video Coding (VVC) \\cite{Bross2019} is the emerging next-generation standard under the development of the Moving Pictures Expert Group (MPEG).\nThese video coding standards adopt the so-called hybrid coding frameworks, where the major procedures include prediction, transform, quantization, and entropy coding.\nIn the hybrid coding framework, a video frame is partitioned into non-overlapping coding blocks.\nThese blocks form the basis coding units (CU), prediction units (PU), transform units (TU), etc.\nA block-based coding scheme is hardware-friendly and easy to implement.\nIt also lends itself to useful coding functionalities such as parallelization. 
\n\nHowever, block-wise operation inevitably introduces video quality degradation near the block boundaries, known as block artifacts.\nBeyond that, coarse quantization is another major factor in causing video quality degradation, especially at the regions with sharp edges known as the ringing artifacts.\nThis ripple phenomenon induces poor visual quality and leads to a bad user-experience \\cite{lim2011ringing}.\nGiven this, extensive in-loop filters have been proposed to compensate for artifacts and distortions in video coding.\nThe in-loop filters can be classified into two categories based on whether the deep learning techniques are used.\n\n\n\nThe first category is traditional signal processing based methods, including Deblocking Filter (DF) \\cite{liu2007post, norkin2012hevc}, Sample Adaptive Offset (SAO) \\cite{1Fu2010, 2Fu2011, 3Fun2011}, Adaptive Loop Filter (ALF) \\cite{tsai2013adaptive}, non-local in-loop filter \\cite{ma2016nonlocal} and many others.\nDF can reduce blocking artifacts at PU and TU boundaries. \nSAO compensates the pixel-wise residuals by explicitly signaling offsets for pixel groups with similar characteristics. \nALF is essentially a Wiener filter where the current pixel is filtered as a linear combination of neighboring pixels.\nThe three filters mentioned above are based on neighbor-pixel statistics. In contrast, the non-local in-loop filter takes advantage of non-local similarities in natural images.\n\n\n\nTraditional methods improve the video quality with relatively low complexity.\nTherefore, they have been successfully applied in video coding standards. \nRecently, however, the deep learning based in-loop filters have been proposed to achieve further improvements \\cite{zhang2018residual, lu2018deep, jia2019content}. \nOne type of CNN utilizes the principle of the Kalman filter to construct a deep learning filter.\nAnother type of CNN consists of the highway or content-aware block units to achieve flexibility. \nPeople have realized that these deep learning based schemes have at least two benefits from traditional methods. \nOne is that non-linear filtering operations are involved in the system. \nIt is critical to capture and compensate for the distortions caused by codecs because these coding distortions are essentially non-linear by themselves. \nAnother benefit is that deep learning can learn features from a large amount of data automatically, which would be more efficient than handcraft features.\nThough the coding efficiency has been improved from traditional methods in HEVC, the coding information has not been fully utilized in the design of neural networks.\nIn \\cite{he2018enhancing}, the authors proposed to utilize partition information in the design of neural networks, indicating introducing more coding information can benefit the overall performance.\n\nMotivated by these, we propose a novel in-loop algorithm by introducing the residual signal to the network and devising two sub-networks for residual and reconstruction signals, respectively. \nThey are the Residual Network and the Reconstruction Network. \nThe major contributions of this work are three-fold:\n\\begin{itemize}\n\\item First, we supply the residual signal as the supplementary information and feed it into the neural network in pair with the reconstructed frame. 
\nTo the best of our knowledge, this is the first work that utilizes the residual signal to devise an in-loop filter for video coding.\n\\item Second, the network structure is carefully designed for the dual-input CNN to utilize the underlying features in different input channels fully.\nThe residual blocks are used for Residual-Network. \nA hierarchical autoencoder network with skip connections is used for Reconstruction-Network.\n\\item Third, extensive experiments have been conducted to compare with existing algorithms to demonstrate its effectiveness of the proposed scheme. \nThroughout analyses are provided to give more insights into the problem based on the experimental results.\n\\end{itemize}\n\nNote that a residual introduced deblocking method has been proposed in our previous work \\cite{jia2020Residual}.\nThis paper provides more motivation, analysis, experimental results, and comparison of related works on the residual-based loop filter.\nAdditionally, in order to validate the efficiency of our RRNet design, we recurs more three inputs-based methods for comparison.\nThe experimental results show that the customized Residual Network and Reconstruction Network is significantly beneficial for bitrate savings. \n\nWe organize the remainder of this paper as follows. \nIn Section~\\ref{Sec::related work}, we describe related works. \nSection~\\ref{Sec::proposed algorithm} introduces the proposed RRNet approach.\nIn Section~\\ref{Sec::experimental results}, we report and analyze the experimental results.\nFinally, Section~\\ref{Sec::conclusion} summarizes this paper and discusses future works.\n\n\n\n\\section{Related Work}\n\\label{Sec::related work}\n\nIn this section, we briefly review the prior works related to loop filters of video coding, including the traditional signal processing based methods and deep learning based methods.\n\\subsection{Traditional signal processing based methods}\nRelying on the signal processing theory, the following in-loop filter methods have been proposed.\n\n\\begin{itemize}\n\\item [1)] Deblocking Filter (DF).\nList \\emph{et al.} \\cite{list2003adaptive} devised the first version of an adaptive deblocking filter, which was adopted by H.264\/AVC standard.\nIt depressed distortions at block boundaries by applying an appropriate filter. 
\nZhang \\emph{et al.} \\cite{6166366} proposed a three-step framework considering task-level segmentation and data-level parallelization to efficiently parallelize the deblocking filter.\nTsu-Ming \\emph{et al.} \\cite{liu2007post} then proposed a high-throughput deblocking filter.\nIn HEVC, Norkin \\emph{et al.} \\cite{norkin2012hevc} designed a DF with lower complexity and better parallel-processing capability.\nLi \\emph{et al.} \\cite{8085172} provided deblocking with a shape-adaptive low-rank before preserving edges well and an extra before restoring the lost high-frequency components.\n\\item [2)] Sample Adaptive Offset (SAO) \\cite{fu2012sample}.\nChien and Karczewicz proposed an adaptive loop filtering technique \\cite{Chien2009} based on the Laplacian energy and classifications of the reconstructed pixel value.\nThis approach obtains obvious performance improvements but with high complexity.\nKen \\emph{et al.} \\cite{Ken2010} designed an extrema correcting filter (EXC) and a boundary correcting filter (BDC).\nHuang \\emph{et al.} \\cite{Huang2010} developed a picture-based boundary offset (PBO), picture-based border offset (PEO) and picture-based adaptive constraint (PAC).\nFu \\emph{et al.} \\cite{1Fu2010,2Fu2011} devised an algorithm that can adaptively select the optimal pixel-classification method.\nHowever, computational complexity is still very high.\nTo address this, Fu and Chen \\emph{et al.} \\cite{3Fun2011} proposed a sample adaptive offset (SAO) method, which was finally adopted by HEVC.\nIt provides a better trade-off between performance and complexity.\n\\item [3)] Adaptive Loop Filter (ALF).\nTsai \\emph{et al.} \\cite{tsai2013adaptive} proposed the ALF method to decrease the mean square error between original frames and decoded frames by Wiener-based adaptive filter.\nThe filter coefficients are trained for different pixel regions at the encoder.\nThe coefficients are then explicitly signaled to the decoder.\nBesides, ALF activates the filter at different regions by signaling control flags.\n\\item [4)] Non-local Mean Models.\nThe non-local mean methods improve the efficiency of in-loop filters as well.\nTo suppress the quantization noise optimally and improve the quality of the reconstructed frame, Han \\emph{et al.} \\cite{han2014quadtree} proposed a quadtree-based non-local Kuan's (QNLK) filter. \nMa \\emph{et al.} \\cite{ma2016nonlocal} proposed the group-based sparse representation with image local and non-local self-similarities. \nThis model lays a solid groundwork for the in-loop filter design.\nZhang \\emph{et al.} \\cite{zhang2016low} utilized image non-local prior knowledge to develop a loop filter by imposing the low-rank constraint on similar image patches for compression noise reduction.\n\\end{itemize}\n\n\\begin{figure}[tbp]\n\\centering\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{kimonoGT.pdf}\n \\centerline{(a) Ground Truth}\\medskip\n\\end{minipage}\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\textwidth]{resiKimonoAbs.pdf}\n \\centerline{(b) Residual}\\medskip\n\\end{minipage}\n\\caption{Typical example of the Kimono residual under $QP37$ with intra mode. 
\nThe color has been adjusted for clear viewing.\nThe inverse transformed residual signal provides the comprehensive partition information of the transforming units.\nIt is obvious to see the $32\\times32$, $16\\times16$, $8\\times8$, and $4\\times4$ partition blocks of TU in the residual.\nFor instance, the shapes of the woman's body and tree trunks are more easily discernable.\nMeanwhile, the residual contains a large amount of dense, detailed textures. \nFor example, we can see many needle leaves on the trees. This information can help to augment the considerable variation in some areas of the reconstruction.\n}\n\\label{Fig::Example}\n\\end{figure}\n\n\n\\begin{figure*}[tbp]\n\\begin{center}\n\\centering\n\\includegraphics[width=1.0\\textwidth]{hcnnAcnn3.pdf}\n\\caption{RRNet with sub-networks of both the Residual Network and the Reconstruction Network.\nResiduals are fed into the Residual Network to provide the TU partition information and the detailed textures information. \nThe Residual Network relies on residual blocks to learn features effectively with residual learning.\nWe feed the reconstruction into the Reconstruction Network.\nThe Reconstruction Network executes the downsampling and upsampling strategy to patch up the local and global information. \nThis enhances reconstruction quality and aids with the residual learning approach.\n}\n\\label{Fig::HFAF}\n\\end{center}\n\\end{figure*}\n\n\n\n\\subsection{Deep learning based methods}\n\\label{subsec::cnnMethods}\nRecently, the deep learning based in-loop filters have been proposed.\nFor images, Dong \\emph{et al.} \\cite{dong2015compression} designed a compact and efficient model, known as Artifacts Reduction Convolutional Neural Networks (AR-CNN).\nThis model was effective for reducing various types of coding artifacts.\nKang \\emph{et al.} \\cite{7109159} propose to learn sparse image representations for modeling the relationship between low-resolution and high-resolution image patches in terms of the learned dictionaries for image patches with and without blocking artifacts, respectively.\nWang \\emph{et al.} \\cite{wang2016d3} devised a Deep Dual-Domain ($D^3$) based fast restoration framework to recover high-quality images from JPEG compressed images.\nThe $D^3$ model increased the large learning capacity of deep networks.\n\nFor videos, Xue \\emph{et al.} \\cite{xue2017video} proposed the task-oriented flow (TOFlow), where a motion representation was learned for video enhancement.\nTao \\emph{et al.} \\cite{tao2017detail} proposed a sub-pixel motion compensation (SPMC) model, which has shown its efficiency in video super-resolution applications.\nIn the framework of video coding, Dai \\emph{et al.} \\cite{dai2017convolutional} designed a Variable-filter-size Residual-learning CNN (VRCNN) that achieved $4.6\\%$ bit-rate gain.\nYang \\emph{et al.} \\cite{yang2017decoder,yang2018enhancing} developed the Quality Enhancement Convolutional Neural Network (QE-CNN) method in HEVC.\nWith the residual learning \\cite{he2016deep}, Wang \\emph{et al.} \\cite{wang2018dense} designed the dense residual convolutional neural network (DRN), which exploits the multi-level features to recover a high-quality frame from a degraded one.\nOther CNN-based video compression works, including \\cite{jia2017spatial, park2016cnn, wang2017novel} pushed the horizon of in-loop filtering techniques as well. 
\nMost recently, \nZhang \\emph{et al.} \\cite{zhang2018residual} devised the residual highway convolutional neural network (RHCNN) in HEVC.\nLu \\emph{et al.} \\cite{lu2018deep} modeled loop filtering for video compression as a Kalman filtering process.\nJia \\emph{et al.} \\cite{jia2019content} proposed a content-aware CNN based in-loop filtering for HEVC.\nHowever, most of these frameworks are designed for one specific restoration task.\nTo address this issue, Jin \\emph{et al.} \\cite{8820082} proposed a flexible deep CNN framework that exploits the frequency characteristics of different types of artifacts.\n\n\n\n\n\\begin{figure}[tbp]\n\\centering\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{cactusGT.pdf}\n \\centerline{(a) Cactus Ground Truth}\\medskip\n\\end{minipage}\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{enc_Cactus_intra_main_V3_RESI_RECO_HNET_LNET_FM_CONCAT_Q37_NB3_PS64_resifm25.pdf}\n \\centerline{(b) Cactus Residual Feature Map}\\medskip\n\\end{minipage}\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{bqSquareGT.pdf}\n \\centerline{(c) BQSquare Ground Truth}\\medskip\n\\end{minipage}\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{enc_BQSquare_intra_main_V3_RESI_RECO_HNET_LNET_FM_CONCAT_Q37_NB3_PS64_resifm31.pdf}\n \\centerline{(d) BQSquare }\n \\centerline{ Residual Feature Map }\n\\end{minipage}\n\\caption{Residual feature maps of Cactus and BQSquare derived from the Residual Network of RRNet under $QP37$.\nThe residual features of Cactus with abundant context including pokers, calender and metal circle demonstrates its prominent contribution for enhancing the quality of the video frame.\nThe residual features of BQSquare which are a flat example show a great amount of details involving chairs and tables.\n}\n\\label{Fig::featureMap}\n\\end{figure}\n\n\n\n\n\n\\begin{figure*}[tbp]\n\\begin{center}\n\\centering\n\\includegraphics[width=1.0\\textwidth]{codingFrameV2.pdf}\n\\caption{The location of RRNet embedded in HEVC.\nWe insert the RRNet into HEVC as an in-loop method. The RRNet would input residual from extracting module and reconstruction into the Residual Network and the Reconstruction Network, respectively. The RRNet is executed instead of DF and SAO filters.\n}\n\\label{Fig::codingFrame}\n\\end{center}\n\\end{figure*}\n\n\nThe aforementioned deep learning methods only took the reconstructed low-quality video frame as input.\nHowever, the coding information was not efficiently utilized.\nTo better use coding information, Lin and He \\emph{et al.} \\cite{lin2019partition, he2018enhancing} proposed a partition-masked CNN, where the block partition information was utilized for improving the quality of the reconstructed frames.\nIt has shown additional improvements in terms of coding efficiency over the reconstruction-only methods.\n\n\n\n\\section{Proposed algorithm}\n\\label{Sec::proposed algorithm}\n\nThis section will discuss the proposed RRNet scheme in detail, including a more in-depth discussion on the architecture of the RRNet, loss function, dataset, and training process. 
\n\n\n\\begin{table}[tbp]\n\\caption{The Residual Network Parameters of conv layers}\n\\label{tab::resi}\n\\center\n\\begin{tabular}{l|c|c|c|c}\n\\hline\nLayers & Kernel Size & Feature maps & Stride & Padding\\\\\n & & Number & & \\\\\n\\hline\n\\multirow{1}{*}{Conv 1} & $3 \\times 3 $ & $32$ & $1$ & $1$ \\\\\n\\hline\nResidual Block 1 & $3 \\times 3 $ & $64$ & $1$ & $1$ \\\\ \n (2 convs) & & & & \\\\\n\\hline \nResidual Block 2 & $3 \\times 3 $ & $64$ & $1$ & $1$ \\\\\n (2 convs) & & & &\\\\\n\\hline\nResidual Block 3 & $3 \\times 3 $ & $64$ & $1$ & $1$ \\\\ \n (2 convs) & & & & \\\\\n\\hline\n\\multirow{1}{*}{Conv 8} & $3 \\times 3 $ & $32$ & $1$ & $1$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table*}[tbp]\n\\caption{The Reconstruction Network Parameters of Conv And Transposed Conv Layers}\n\\label{tab::AFParas}\n\\center\n\\begin{adjustbox}{max width=\\textwidth}\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\hline\n\\multirow{1}{*}{Type of Layer} & Conv1 & Conv2 & Conv3 & Transposed Conv1 & Conv4 & Transposed Conv2 & Conv5 & Conv6 \\\\\n\\hline\n\\multirow{1}{*}{Kernel Size} & $3 \\times 3$ & $3 \\times 3$ & $3 \\times 3$ & $2 \\times 2$ & $3 \\times 3$ & $2 \\times 2$ & $3 \\times 3$ & $3 \\times 3$ \\\\\n\\hline\n\\multirow{1}{*}{Feature Map Number} & $32$ & $64$ & $128$ & $64$ & $64$ & $32$ & $32$ & $32$ \\\\\n\\hline\n\\multirow{1}{*}{Stride} & $1$ & $1$& $1$& $2$& $1$& $2$& $1$& $1$ \\\\\n\\hline\n\\multirow{1}{*}{Padding} & $1$ & $1$& $1$& $0$& $1$& $0$& $1$& $1$ \\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{table*}\n\n\n\n\\subsection{Architecture of the proposed RRNet framework}\n\\label{Subsec::archiRRNet}\n\nFig.~\\ref{Fig::HFAF} shows the overall architecture of the proposed RRNet framework.\nThe proposed RRNet framework includes two sub-networks: the reconstruction network and the residual network.\nThe reconstruction network uses the reconstruction as input and derives reconstruction feature maps from the input.\nThe residual network uses the residual as input and derives residue feature maps from the input.\nThe feature maps derived from the two sub-networks are concatenated together and used as the input of the last convolutional layer.\nIn addition, we use the residual learning method that learns the difference between the input and the label to accelerate the training process.\n\nAs explained in the last paragraph, both the reconstruction and residual are utilized as the inputs of the proposed network.\nApplying the reconstruction as input is the same as most existing works since our target is to enhance the reconstruction.\nHowever, why the residual is used as the other input for our proposed RRNet network?\n\nFirst, we believe that the residual can provide accurate transform unit (TU) partitions and great textures beneficial for the enhancement. \nFig.~\\ref{Fig::Example} gives a typical example of the residual from the sequence Kimono. \nWe can see clear TU boundaries from the residual figure.\nAs we know, the basic unit of encoding the residual is a TU. \nEach TU transforms and quantizes independently. \nTherefore, it is more probable to have severe artifacts in the block boundary than the block center. \nThe TU boundary information is a good indicator that implicates where the distortion is more severe and guides the network to learn more distinct features. 
In addition, we can see from the residual frame in Fig.~\ref{Fig::Example} that, within each TU, the texture information is still visible: the body shape of the girl and the tree trunks remain clearly recognisable.
This texture information also contributes to the enhancement of the reconstruction.

Second, the residual signal directly reflects the accuracy of the frame prediction: wherever the residual contains large non-zero values, the encoder has failed to predict that region accurately.
Accordingly, the residual is especially informative for the CNN learning process in precisely those areas.
From the extracted residual feature maps shown in Fig.~\ref{Fig::featureMap}, we can see that the residual signal improves the capability of the CNN to learn sharp edges and complex shape information that would otherwise be missed by the encoder.

In addition to introducing the residual as a second input, Fig.~\ref{Fig::HFAF} shows that we use different sub-networks for the reconstruction and the residual.
The characteristics of the two signals are different: the residual mainly carries localised, high-frequency detail around prediction errors, while the reconstruction, which consists of the prediction plus the residual, contains more global information.
We therefore design a dedicated sub-network for each input to extract the most useful features from each signal and improve the quality of the reconstructed frame.
The two sub-networks are described in detail in the next two subsections.

To illustrate how the proposed framework is embedded in HEVC, Fig.~\ref{Fig::codingFrame} shows the modified HEVC encoder.
We replace the deblocking and SAO filters with the proposed RRNet framework, and the output frame of our framework is used as a reference for future to-be-encoded frames.
Note that, in the proposed RRNet framework, we need to extract the residual from the bitstream in addition to the reconstruction.

\subsection{Design of the Residual Network}
\label{Subsec::hfcnn}

We develop a Residual Network consisting of several residual blocks \cite{he2016deep} to adapt to the residual features.
Residual blocks effectively preserve the residual features and the gradient information in the shallow layers \cite{veit2016residual}, so the proposed Residual Network can derive distinct features from the residual frame; we therefore adopt the residual block as the basic unit of our Residual Network.
To limit complexity, we use only $8$ convolutional layers to derive the residual features.

A network built from residual blocks brings a further advantage: a collection of multiple routes replaces a single sequential path, and because these routes are largely independent of one another, this property enhances the regularising effect of the Residual Network.
Because the gradient information is contributed mainly through the shallow layers, the short routes introduced by the residual blocks effectively prevent the gradient from vanishing.

In Fig.~\ref{Fig::HFAF}, the upper pathway shows the detailed architecture of our proposed Residual Network, and Table~\ref{tab::resi} lists the configurations of its convolutional layers.
The Residual Network comprises three residual blocks, which together contain six convolutional layers, plus one convolutional layer at the beginning and one at the end.
Every convolutional layer uses a $3 \times 3$ kernel with a stride of $1$ and a padding of $1$; the number of feature maps of each layer is given in Table~\ref{tab::resi}.
As the Parametric Rectified Linear Unit (PReLU) \cite{he2015delving} has been demonstrated to be more effective than the ReLU, we employ it as the activation function in the Residual Network.
We compute the feature maps of the Residual Network as follows:
\begin{equation}
\left\{
  \begin{array}{ll}
  F^{res}_i(x) = A(W_i * F^{res}_{i-1}(x) + B_i), & i\in\{1,2,4,6,8\} \\
  F^{res}_j(x) = A(W_j * F^{res}_{j-1}(x) + B_j) + F^{res}_{j-2}(x), & j\in\{3,5,7\}
  \end{array}
\right.
\label{Eq::resi}
\end{equation}
where $F^{res}_0(x) = x$ is the residual input, $A$ is the activation function, and $W_i$ and $B_i$ are the weight and bias matrices, respectively.

\subsection{Design of the Reconstruction Network}
\label{Subsec::afcnn}

We consider the reconstruction signal as the other input and therefore design a Reconstruction Network containing several downsampling and upsampling pairs to learn the reconstruction features.
The Reconstruction Network adopts the classic autoencoder architecture \cite{chen2016variational, hinton2006reducing}, with skip connections concatenating the encoder and decoder parts \cite{ronneberger2015u}.
In this way, the Reconstruction Network can recover both the global information and the details as faithfully as possible.

This design has the following advantages.
On the encoder side, downsampling the reconstruction extracts compact, useful features of lower spatial dimension.
On the decoder side, upsampling these small feature maps recovers features at the original resolution.
The skip connections, which concatenate encoder features into the decoder, help the decoder recover both the global and the detailed information of the reconstruction.

In Fig.~\ref{Fig::HFAF}, the lower pathway shows the detailed structure of our proposed Reconstruction Network.
We adopt pooling and transposed convolutional layers to perform downsampling and upsampling, respectively.
In the encoder phase, downsampling effectively reduces the redundancy in the reconstruction while keeping the useful information; however, it may also discard global context.
Hence, we perform upsampling in the decoder phase to propagate the global information of the reconstruction to the subsequent convolutional layers.
Finally, in the skip-connection phase, we concatenate the condensed reconstruction features from the encoder with the upsampled reconstruction features from the decoder, so that the network is provided with both the condensed features and the global context of the reconstruction.
\nThe Reconstruction Network is a difference learning network as well.\nTable~\\ref{tab::AFParas} shows the detailed configurations.\nFor the convolutional layers, we set the Kernel Size to $3 \\times 3$, Stride to $1$, Padding to $1$, Feature Map Number to $32$, $64$ or $128$.\nFor the transposed convolutional layers \\cite{zeiler2010deconvolutional}, we set the Kernel Size to $2 \\times 2$, Stride to 2, Padding to 1, Feature Map Number to $64$ or $32$.\nThe reconstruction network can be formulated as follows,\n\n\n\\begin{equation}\nF^{rec}_i(z) = P(W_i * F^{rec}_{i-1}(z) + B_i), i\\in\\{1,2\\}\n\\label{Eq::reci}\n\\end{equation}\nwhere $z$ is the reconstruction signal input, and $P$ represents the sequential functions for activation and max-pooling.\nWe choose PReLU as the activation function in the Reconstruction Network.\n\n\n\\begin{equation}\n \\begin{array}{rr}\n F^{rec}_5(z) = C(P(W_5 * F^{rec}_{4}(z) + B_5), F^{rec}_{2}(z)) & \\\\\n F^{rec}_7(z) = C(P(W_7 * F^{rec}_{6}(z) + B_7), F^{rec}_{1}(z)) & \n \\end{array}\n\\label{Eq::recc}\n\\end{equation}\nwhere $C$ denotes the concatenating function for jointing features.\n\n\nAfter concatenating the features of the Residual Network and the Reconstruction Network, we calculate them with a convolutional layer of $1$ channel.\nThen we obtain the final output $F_{out}(x, z)$ which is the same size as input.\n\n\n\\begin{table}[tbp]\n\\caption{Training parameters}\n\\label{tab::trainingParas}\n\\center\n\\begin{tabular}{l|l}\n\\hline\nParameters & QP 37 \\\\\n\\hline\n\\multirow{1}{*}{Base Learning Rate} & $1e^{-4}$ \\\\\n\\hline\n\\multirow{1}{*}{$\\gamma$ Adjusting Coefficient} & $0.1$ \\\\\n\\hline\n\\multirow{1}{*}{Adjusting Epochs Interval} & $100$ \\\\\n\\hline\n\\multirow{1}{*}{Weight Decay} & $1e^{-4}$ \\\\\n\\hline\n\\multirow{1}{*}{Momentum} & $0.9$ \\\\\n\\hline\n\\multirow{1}{*}{Total Epochs} & $120$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\\subsection{Loss function, dataset and training}\n\\label{Subsec::training}\n\n\\textup{\\textbf{Loss function}}.\nWe employ Mean Squared Error (MSE) \\cite{wackerly2014mathematical} as the loss function for our proposed RRNet as follows,\n\\begin{equation}\nL(\\Theta) = \\frac{1}{N}\\sum^N_{i=1}||\\Upsilon(Y_i|\\Theta) - X_i||^2_2\n\\label{Eq::mse}\n\\end{equation}\nwhere $\\Theta$ encapsulates the whole parameter set of the network containing weights and bias and $\\Upsilon(Y_i|\\Theta)$ denotes the network module.\n$X_i$ is a pixel of the original frame, where $i$ indexes each pixel.\n$Y_i$ is the corresponding pixel of the reconstruction, that is compressed by HEVC when we turn off its deblocking and SAO.\n$N$ is the number of pixels.\n\n\\textup{\\textbf{Dataset}}.\nWe employ the DIV2K \\cite{Ignatov_2018_ECCV_Workshops, agustsson2017ntire} dataset comprising $800$ training images and $100$ validating images of $2k$ resolution as the original frames.\nBecause modern video codecs operate on YUV color domain, we convert the original $900$ PNG images to YUV videos with FFMPEG \\cite{FFmpeg} of GPU acceleration.\nA modified HEVC reference software is then used to encode original frames to generate the reconstruction and residual with $QP22$, $QP27$, $QP32$, and $QP37$, respectively.\nWe finally extract $64\\times64$ blocks from the Luma component of the reconstructed, residual, and original frames and use them as the inputs and labels for training our proposed RRNet.\nIn total, there are $522,939$ groups of inputs and labels for training and $66,650$ groups for validation.\n\n \n 
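For concreteness, the following minimal PyTorch-style sketch illustrates how the two sub-networks and the fusion step described above fit together. It is only an illustration, not our released implementation: the layer counts follow Table~\ref{tab::resi}, Table~\ref{tab::AFParas} and Eqs.~(\ref{Eq::resi})--(\ref{Eq::recc}) loosely, while the uniform channel width inside the residual blocks, the use of max-pooling, and the transposed-convolution padding are simplifying assumptions made so that the sketch is self-contained and runnable.

\begin{verbatim}
# Minimal PyTorch-style sketch of the dual-branch RRNet forward pass.
# Channel widths inside the residual blocks, the pooling operator and the
# transposed-convolution padding are simplifying assumptions, not the exact
# configuration used to produce the results in this paper.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with PReLU and an identity skip."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.PReLU()

    def forward(self, x):
        y = self.act(self.conv1(x))
        return self.act(self.conv2(y)) + x


class ResidualBranch(nn.Module):
    """Residual Network: head conv, three residual blocks, tail conv."""
    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.PReLU())
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(3)])
        self.tail = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())

    def forward(self, resi):
        return self.tail(self.blocks(self.head(resi)))


class ReconstructionBranch(nn.Module):
    """Reconstruction Network: 2-level encoder/decoder with skip concatenations."""
    def __init__(self):
        super().__init__()
        act = nn.PReLU
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), act())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), act())
        self.bottom = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), act())
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)   # upsample x2
        self.dec1 = nn.Sequential(nn.Conv2d(64 + 64, 64, 3, padding=1), act())
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)    # upsample x2
        self.dec2 = nn.Sequential(nn.Conv2d(32 + 32, 32, 3, padding=1), act())
        self.pool = nn.MaxPool2d(2)

    def forward(self, reco):
        e1 = self.enc1(reco)               # 32 x H   x W
        e2 = self.enc2(self.pool(e1))      # 64 x H/2 x W/2
        b = self.bottom(self.pool(e2))     # 128 x H/4 x W/4
        d1 = self.dec1(torch.cat([self.up1(b), e2], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d1), e1], dim=1))
        return d2                          # 32 x H x W


class RRNet(nn.Module):
    """Concatenate both branches, fuse to one channel, and add the
    reconstruction back (global residual / difference learning)."""
    def __init__(self):
        super().__init__()
        self.res_branch = ResidualBranch()
        self.rec_branch = ReconstructionBranch()
        self.fuse = nn.Conv2d(32 + 32, 1, 3, padding=1)

    def forward(self, resi, reco):
        feats = torch.cat([self.res_branch(resi), self.rec_branch(reco)], dim=1)
        return reco + self.fuse(feats)


if __name__ == "__main__":
    resi = torch.randn(16, 1, 64, 64)   # residual luma patches
    reco = torch.randn(16, 1, 64, 64)   # reconstruction luma patches
    print(RRNet()(resi, reco).shape)    # torch.Size([16, 1, 64, 64])
\end{verbatim}

The global skip connection in the last line of the forward pass implements the difference learning described above, so the network only has to predict a correction to the reconstruction rather than the enhanced frame itself.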
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\\textup{\\textbf{Training}}.\nOnce we obtain the residual and reconstruction patches of divided components, we feed them into the Residual Network and the Reconstruction Network, respectively, by batch-size of $16$.\nTable~\\ref{tab::trainingParas} exhibits the parameters of training procedure for $QP37$ samples.\nWe experiment with a larger learning ($1e^{-3}$) rate and a smaller learning rate ($1e^{-5}$), but the former one leads to the gradient explosion while the later one learns too slowly.\nTherefore, $1e^{-4}$ is the appropriate base learning rate of $QP37$ model.\nWe adopt the Adaptive Moment Estimation (Adam) \\cite{kingma2014adam} algorithm with the momentum of $0.9$ and the weight decay of $1e^{-4}$.\nThese parameter values are selected according to experience values.\nWhen the model is trained less than $120$ epochs, the loss has not been convergent.\nAccordingly, the $QP37$ model is trained with $120$ epochs.\nAfter $100$ epochs, we decrease the learning rate by $10$ times.\nAfter the $QP37$ model is derived, we fine tune it with $20$ epochs to obtain the other models: $QP22$, $QP27$, $QP32$.\nFinally, we obtain the models for all the $QPs$ for testing.\n\n\\begin{table*}[tbp]\n\\caption{BD-rate of the SOTAs and proposed RRNet against HEVC under All Intra case}\n\\label{tab::comparisonRVHIntra}\n\\center\n\\begin{adjustbox}{max width=\\textwidth}\n\\begin{tabular}{c|l|c|c|c|c}\n\\hline\n\\hline\n{Class} & Sequence & VRCNN \\cite{dai2017convolutional} & EDSR Residual & Partition-aware & \\ \\ \\ \\ \\ \\ \\ \\ \\ RRNet \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n & & vs. HEVC & Blocks \\cite{lim2017enhanced} vs. HEVC & CNN \\cite{lin2019partition} vs. HEVC & vs. HEVC\\\\\n\\hline\nA & Traffic & $-8.1\\% $& $-8.5\\%$ & $-8.7\\%$ & $\\textbf{-10.2}\\% $\\\\\n & PeopleOnStreet & $-7.7\\% $ &$-7.8\\%$& $-8.2\\%$& $\\textbf{-9.4}\\% $\\\\\n\\hline\nB & Kimono & $-5.9\\% $ &$-6.6\\%$& $-6.9\\%$& $\\textbf{-8.6}\\% $\\\\\n & ParkScene & $-6.2\\% $ &$-6.6\\%$& $-6.9\\%$& $\\textbf{-8.1}\\% $\\\\\n & Cactus & $-2.7\\% $ &$-4.9\\%$& $-5.4\\%$& $\\textbf{-5.8}\\% $\\\\\n & BasketballDrive & $-5.2\\% $ &$-4.6\\%$& $-4.7\\%$& $\\textbf{-7.7}\\% $\\\\\n & BQTerrace & $-2.9\\% $ &$-2.9\\%$& $-2.9\\%$& $\\textbf{-4.2}\\% $\\\\\n\\hline\nC & BasketballDrill & $-10.6\\% $ &$-10.9\\%$& $-11.3\\%$& $\\textbf{-13.8}\\% $\\\\\n & BQMall & $-7.3\\% $ &$-7.0\\%$& $-7.4\\%$& $\\textbf{-9.3}\\% $\\\\\n & PartyScene & $-4.6\\% $ &$-4.5\\%$& $-4.8\\%$& $\\textbf{-5.6}\\% $\\\\\n & RaceHorses & $-5.8\\% $ &$-5.0\\%$& $-5.3\\%$& $\\textbf{-7.1}\\% $\\\\\n\\hline\nD & BasketballPass & $-7.6\\% $ &$-7.3\\%$& $-7.8\\%$& $\\textbf{-9.5}\\% $\\\\\n & BQSquare & $-5.3\\% $ &$-5.4\\%$& $-5.8\\%$& $\\textbf{-6.3}\\% $\\\\\n & BlowingBubbles & $-5.5\\% $ &$-5.5\\%$& $-5.7\\%$& $\\textbf{-6.7}\\% $\\\\\n & RaceHorses & $-8.9\\% $ &$-8.8\\%$& $-9.1\\%$& $\\textbf{-10.2}\\% $\\\\\n\\hline\nE & FourPeople & $-10.0\\% $ &$-10.4\\%$& $-10.9\\%$& $\\textbf{-12.8}\\% $\\\\\n & Johnny & $-9.1\\% $ &$-8.1\\%$& $-8.7\\%$& $\\textbf{-12.5}\\% $\\\\\n & KristenAndSara & $-9.4\\% $ &$-9.0\\%$& $-9.6\\%$& $\\textbf{-11.8}\\% $\\\\\n\\hline\n& Class A & $-7.9\\% $ &$-8.2\\%$& $-8.5\\%$& $\\textbf{-9.8}\\% $\\\\\n & Class B & $-4.6\\% $ &$-5.1\\%$& $-5.4\\%$& $\\textbf{-6.9}\\% $\\\\\n & Class C & $-7.1\\% $ &$-6.9\\%$& $-7.2\\%$& $\\textbf{-8.9}\\% $\\\\\n & Class D & $-6.8\\% $ &$-6.7\\%$& $-7.1\\%$& $\\textbf{-8.2}\\% $\\\\\n & Class E & $-9.5\\% $ &$-9.2\\%$& $-9.7\\%$& $\\textbf{-12.4}\\% 
$\\\\\n\\hline\nAvg. & All & $-6.8\\% $ &$-6.9\\%$& $-7.2\\%$& $\\textbf{-8.9}\\% $\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{table*}\n\n\n\\section{Experimental results}\n\\label{Sec::experimental results}\n\nTo test the performance of the proposed algorithm, we embedded the proposed RRNet scheme into HEVC reference software as shown in Fig.~\\ref{Fig::codingFrame}.\nIn this section, we first compare the proposed RRNet with VRCNN \\cite{dai2017convolutional}, EDSR Residual Blocks \\cite{lim2017enhanced}, Partition-aware CNN \\cite{lin2019partition}, and HEVC on BD-rate \\cite{Bjontegaard2001}, respectively.\nSubsequently, we validate the multiple inputs function by comparing the dual-input residual and reconstruction with the solo input reconstruction.\nMeanwhile, we compare the dual-input Residual and Reconstruction approach with the dual-input Partition and Reconstruction approach \\cite{lin2019partition}.\nAfterward, we evaluate the efficiency of different networks on the same inputs by comparing RRNet and EDSR Residual Blocks with the dual-input of residual and reconstruction.\nFor the test, we test all the sequences defined in HM-16.19 CTC \\cite{KarlSharman2018} under the intra-coding and inter-coding configurations.\n\n\n\n\n\n\\begin{figure*}[tp]\n\\centering\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{RD_Comparison_Under_BasketballDrill.pdf}\n \\centerline{(a) BasketballDrill }\\medskip\n\\end{minipage}\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{RD_Comparison_Under_FourPeople.pdf}\n \\centerline{(b) FourPeople }\\medskip\n\\end{minipage}\n\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{RD_Comparison_Under_Johnny.pdf}\n \\centerline{(c) Johnny }\\medskip\n\\end{minipage}\n\\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{RD_Comparison_Under_Traffic.pdf}\n \\centerline{(d) Traffic }\\medskip\n\\end{minipage}\n\\caption{Comparison of RD curves in HEVC with DF and SAO, VRCNN and proposed RRNet on luminance.\nThe compared RD curves of BasketballDrill(a), FourPeople(b), Johnny(c) and Traffic(d) are shown.\nIt is obvious that our proposed RRNet outperforms HEVC with DF and SAO and VRCNN for all theses sequences under all tested QPs including $22, 27, 32 $ and $37$.\n}\n\\label{fig:rd}\n\\end{figure*}\n\n\n\\begin{table}[tbp]\n \\caption{The computational complexity of VRCNN and proposed RRNet against HEVC under All Intra case}\n \\label{tab::complexity}\n \\center\n \\begin{tabular}{c|c|c|c}\n \\hline\n \\hline\n \\multirow{1}{*}{Approches} & Frame-work & Encoding Time & Decoding Time \\\\\n \n \\hline\n \n \n\\multirow{1}{*}{VRCNN} & Pytorch(C++) & $108.72\\%$ & $420.41\\%$ \\\\\n \\hline\n \\multirow{1}{*}{RRNet} & Pytorch(C++) & $117.48\\%$ & $1238.78\\%$ \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table}\n\n\n\n\\begin{table}[tp]\n \\caption{BD-rate of VRCNN and proposed RRNet against HEVC under Random Access case}\n \\label{tab::comparisonRVHInter}\n \\center\n \\begin{tabular}{c|l|c|c}\n \\hline\n \\hline\n {Class} & Sequence & VRCNN vs. HEVC & RRNet vs. 
HEVC\\\\\n \\hline\n A & Traffic & $-5.0\\% $ & $\\textbf{-6.0}\\% $\\\\\n & PeopleOnStreet & $-1.4\\% $ & $\\textbf{-1.6}\\% $\\\\\n \\hline\n B & Kimono & $-1.9\\% $ & $\\textbf{-2.6}\\% $\\\\\n & ParkScene & $-2.7\\% $ & $\\textbf{-3.4}\\% $\\\\\n & Cactus & $-3.2\\% $ & $\\textbf{-3.9}\\% $\\\\\n & BasketballDrive & $-1.4\\% $ & $\\textbf{-1.9}\\% $\\\\\n & BQTerrace & $-5.2\\% $ & $\\textbf{-5.8}\\% $\\\\\n \\hline\n C & BasketballDrill & $-3.1\\% $ & $\\textbf{-4.3}\\% $\\\\\n & BQMall & $-2.0\\% $ & $\\textbf{-2.5}\\% $\\\\\n & PartyScene & $-0.5\\% $ & $\\textbf{-1.0}\\% $\\\\\n & RaceHorses & $-1.3\\% $ & $\\textbf{-1.4}\\% $\\\\\n \\hline\n D & BasketballPass & $-0.7\\% $ & $\\textbf{-0.9}\\% $\\\\\n & BQSquare & $-1.4\\% $ & $\\textbf{-2.1}\\% $\\\\\n & BlowingBubbles & $-1.8\\% $ & $\\textbf{-2.4}\\% $\\\\\n & RaceHorses & $-1.5\\% $ & $\\textbf{-1.6}\\% $\\\\\n \\hline\n E & FourPeople & $-8.2\\% $ & $\\textbf{-9.5}\\% $\\\\\n & Johnny & $-7.6\\% $ & $\\textbf{-10.2}\\% $\\\\\n & KristenAndSara & $-6.9\\% $ & $\\textbf{-7.6}\\% $\\\\\n \\hline\n & Class A & $-3.2\\% $ & $\\textbf{-3.8}\\% $\\\\\n & Class B & $-2.9\\% $ & $\\textbf{-3.5}\\% $\\\\\n & Class C & $-1.7\\% $ & $\\textbf{-2.3}\\% $\\\\\n & Class D & $-1.4\\% $ & $\\textbf{-1.7}\\% $\\\\\n & Class E & $-7.6\\% $ & $\\textbf{-9.1}\\% $\\\\\n \\hline\n Avg. & All & $-3.1\\% $ & $\\textbf{-3.8}\\% $\\\\\n \\hline\n \\hline\n \\end{tabular}\n \\end{table}\n\n\n\n\\subsection{Performances of the proposed RRNet algorithm}\n\\label{subsec::performances}\nTable~\\ref{tab::comparisonRVHIntra} shows the comparison results of VRCNN \\cite{dai2017convolutional}, EDSR Residual Blocks \\cite{lim2017enhanced}, Partition-aware CNN \\cite{lin2019partition}, and the proposed RRNet against HEVC under the all intra case.\nNote that to ensure fairness, the EDSR Residual Blocks and Partition-aware CNN all employ eight convolutional layers, including three residual blocks as shown in Table~\\ref{tab::gnet}, which have the same convolution layer depth as the one of the Residual Network in the proposed RRNet.\nWe train $QP37$ models of VRCNN, EDSR Residual Blocks, and Partition-aware CNN with $120$ epochs on the whole DIV2K dataset and then achieve the models of $QP32$, $QP27$ and $QP22$ by fine tuning the trained $QP37$ model with $20$ epochs.\nThese are identical to the process used to train RRNet as stated in Section~\\ref{Subsec::training}.\n\nWe can see that the proposed RRNet algorithm outperforms VRCNN, EDSR Residual Blocks, and Partition-aware CNN by an average of $2.1\\%$, $2.0\\%$, and $1.7\\%$, respectively.\nAdditionally, the RRNet method surpasses VRCNN, EDSR Residual Blocks, and Partition-aware CNN in every sequence in BD-rate.\nSpecifically, the proposed RRNet scheme outperforms VRCNN, EDSR Residual Blocks, and Partition-aware CNN by $2.9\\%$, $3.2\\%$, and $2.7\\%$ on Class E, respectively.\nSimilarly, compared to the HEVC anchor, RRNet realizes a substantial gain on BD-rate with an average of $-8.9\\%$.\nThe most remarkable individual difference occurs on BasketballDrill sequence with a gain of $-13.8\\%$ on BD-rate.\nThis sequence contains particularly complex textures with very dramatic variations.\nThese performances demonstrate that RRNet effectively enhances the reconstruction by introducing the residual signal and developing customized networks for residual and reconstruction inputs.\n\nFig.~\\ref{fig:rd} shows the luminance Rate-Distortion (RD) curves of the proposed RRNet approach, VRCNN, and HEVC anchor.\nAs 
illustrated, the PSNR of the proposed RRNet method is higher than that of VRCNN and of HEVC with its in-loop filters under every QP for the BasketballDrill, FourPeople, Johnny, and Traffic sequences.
This clearly shows that the proposed RRNet model is superior to the VRCNN and HEVC baselines at enhancing the quality of compressed video frames.

The time complexity \cite{Yiming2019} is reported in Table~\ref{tab::complexity}.
In all cases, we use the same test environment; specifically, the GPU is a GTX 1080ti.
Due to the heavy CNN computation on the encoder side, VRCNN takes $8.72\%$ longer than HEVC, while RRNet, because of its dual-input networks, takes $17.48\%$ longer than HEVC.
The decoder side shows a similar picture: HEVC is the fastest, whereas the RRNet decoding time rises to $1238.78\%$ of the HEVC anchor.
Model compression and acceleration methods \cite{cheng2017survey,cheng2018model,cheng2018recent} can be adopted to reduce the redundancy of the proposed RRNet model.
Such methods include parameter pruning, quantization, low-rank factorization, compact convolutional filters, and knowledge distillation.
Parameter pruning and quantization can remove redundancy in the RRNet parameters, low-rank factorization can identify the most useful parameters, compact convolutional filters can structurally shrink the parameter space to save computation and storage resources, and knowledge distillation can be used to train a more compact, distilled RRNet model.

Table~\ref{tab::comparisonRVHInter} shows the experimental results in the random access case.
The proposed algorithm brings an average BD-rate gain of $-0.7\%$ over VRCNN and $-3.8\%$ over HEVC.
Again, RRNet outperforms the other two methods in every class, and the peak difference between RRNet and VRCNN reaches $1.5\%$ on Class E.
This demonstrates that the benefits brought by RRNet propagate to inter frames, so RRNet also delivers significant performance improvements in the random access case.


\begin{table}[tp]
\caption{The dual-input Residual and Reconstruction approach and the dual-input Partition and Reconstruction \cite{lin2019partition} approach versus the Reconstruction-only approach in terms of BD-rate}
\label{tab::comparisonRecoGNET}
\center
\begin{tabular}{l|c|c}
\hline
\hline
\multirow{1}{*}{} & Partition and Reconstruction \cite{lin2019partition}& Residual and Reconstruction\\
 & vs. Reconstruction & vs. Reconstruction\\
\hline
 Class A & $-0.4\% $ & $\textbf{-1.0}\% $\\
 Class B & $-0.2\% $ & $\textbf{-0.9}\% $\\
 Class C & $-0.4\% $ & $\textbf{-1.1}\% $\\
 Class D & $-0.4\% $ & $\textbf{-0.8}\% $\\
 Class E & $-0.6\% $ & $\textbf{-1.6}\% $\\
\hline
Avg.
All & $-0.4\\% $ & $\\textbf{-1.0}\\% $\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table}[tp]\n \\caption{The computational complexity of the dual-input Partition and Reconstruction method \\cite{lin2019partition} and the dual-input Residual and Reconstruction approach against HEVC}\n \\label{tab::complexityRRGG_PRHG}\n \\center\n \\begin{tabular}{c|c|c|c}\n \\hline\n \\hline\n \\multirow{1}{*}{Approches} & Frame-work & Encoding & Decoding \\\\\n & & Time & Time \\\\\n \\hline\n \n \\multirow{1}{*}{Partition Reconstruction \\cite{lin2019partition}} & Pytorch(C++) & $122.24\\%$ & $1581.63\\%$ \\\\\n \\hline\n \\multirow{1}{*}{Residual Reconstruction} & Pytorch(C++) & $123.81\\%$ & $1669.39\\%$ \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table}\n\n\n\n\\begin{table}[tbp]\n\\caption{Convolutional Parameters of EDSR Residual Blocks \\cite{lim2017enhanced}}\n\\label{tab::gnet}\n\\center\n\\begin{tabular}{l|l}\n\\hline\n\\multirow{1}{*}{Kernel Size} & $3 \\times 3$ \\\\\n\\hline\n\\multirow{1}{*}{Feature Map Number} & $32$ \\\\\n\\hline\n\\multirow{1}{*}{Stride} & $1$ \\\\\n\\hline\n\\multirow{1}{*}{Padding} & $1$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Results analysis of multiple inputs approaches}\n\\label{subsec::multipleInputs}\nHere we compare the method with residual and reconstruction inputs to the method with only reconstruction input.\nAdditionally, we compare the dual-input Residual and Reconstruction approach with another multiple inputs approach that utilizes the mean mask of the PU partition \\cite{lin2019partition} and Reconstruction.\nNote to guarantee a fair comparison, all reconstruction sub-networks utilize the same network with eight convolutional layers, including three EDSR residual blocks shown in Table~\\ref{tab::gnet}.\n\nTable~\\ref{tab::comparisonRecoGNET} exhibits the comparison of the dual-input Residual and Reconstruction scheme against Reconstruction only method and the comparison of the dual input PU Partition and Reconstruction method against Reconstruction only method.\nOn the one hand, the dual-input Residual and Reconstruction saves an average of $-1.0\\%$ BD-rate compared with Reconstruction only method.\nOn the other hand, the dual-input Residual and Reconstruction method saves an average of $-0.6\\%$ BD-rate over the dual input Partition and Reconstruction method. \nSpecifically, the dual-input Residual and Reconstruction approach leads $-1.6\\%$ BD-rate on Class E against the only Reconstruction method.\nThe peak difference of BD-rate between the dual-input Partition and Reconstruction method and the only Reconstruction method on Class E is $-0.6\\%$.\nIn every class, the dual-input of the Residual and Reconstruction approach is better than the only Reconstruction method and the dual-input of the Partition and Reconstruction method on BD-rate.\n\nThese performances clearly show that based on the same network architecture for video reconstruction, the residual signal provides useful information for augmenting the quality.\nThis is reasonable because the inverse transformed residual provides the TU partition information and the detailed textures used to enhance the reconstruction.\nHence, introducing the residual signal augments the quality of the compressed video frame prominently.\nIn conclusion, compared to the only Reconstruction method and another multiple input methods based on the mean mask of the partition, the dual-input Residual and Reconstruction approach clearly augments the reconstruction. 
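All of the comparisons in this section are expressed as Bjontegaard delta rates \cite{Bjontegaard2001}. For reference, the sketch below outlines the standard BD-rate computation (a cubic fit of log-rate versus PSNR, integrated over the common PSNR range); it is a generic illustration of the metric, not the script used to produce the tables in this paper.

\begin{verbatim}
# Generic Bjontegaard delta-rate computation (illustrative sketch only).
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """BD-rate (%) of the test codec w.r.t. the anchor; negative = savings."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)   # cubic fit: log-rate vs. PSNR
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (10.0 ** (avg_t - avg_a) - 1.0) * 100.0
\end{verbatim}

A negative value indicates that the test configuration needs a lower bitrate than the anchor to reach the same objective quality.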
\nOn the aspect of the time complexity, as shown in Table~\\ref{tab::complexityRRGG_PRHG}, the dual-input Residual and Reconstruction approach and the dual-input Partition and Reconstruction method are approximately on the same level.\n\n\\begin{table}[tp]\n\\caption{BD-rate of RRNet against the dual-input Residual and Reconstruction with EDSR Residual Blocks \\cite{lim2017enhanced}}\n\\label{tab::comparisonGGHA}\n\\center\n\\begin{tabular}{l|c}\n\\hline\n\\hline\n\\multirow{1}{*}{Class} & RRNet vs. Residual and Reconstruction with EDSR Residual Blocks\\\\\n\\hline\n Class A & $-0.8\\% $\\\\\n Class B & $-1.4\\% $\\\\\n Class C & $-1.0\\% $\\\\\n Class D & $-0.5\\% $\\\\\n Class E & $-2.2\\% $\\\\\n\\hline\nAvg. All & $-1.2\\% $\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[tp]\n \\begin{center}\n \\centering\n \\includegraphics[width=.8\\linewidth]{Comparison_PSNRTrendWithQPUnderQP32.pdf}\n \\caption{The comparison of QP $32$ model with respective QP models. The $\\Delta$PSNR on $\\Delta QP=0$ means the QP$32$ model compared to itself on PSNR is zero. Except QP$34$ setting, the PSNR of individual QP model is better than the one for the QP$32$ settings on the other QP parameters. The $\\Delta$PSNR increases significantly with the absolute value of $\\Delta$QP on the each side of $\\Delta QP=0$.}\n \\label{Fig::qpRange}\n \\end{center}\n\\end{figure}\n\n\\begin{figure*}[tp]\n \\centering\n\\includegraphics[angle=-90, scale=.16]{visualResults.pdf}%\n\\caption{Visual comparisons between the ground truths, HEVC anchor, VRCNN, and proposed RRNet approach on the luminance of $QP37$ in Johnny and BasketballDrill sequences, respectively.\nThe groups of figures (a), (b), (c), and (d) are the original video, the video generated using HEVC, the video generated using VRCNN, the video generated using RRNet, respectively. 
(Zoom in for better visual effects.)
}
 \label{fig_sub}
\end{figure*}

\subsection{Results analysis of network architecture}
\label{subsec::architecture}
To evaluate the proposed Residual Network and Reconstruction Network, we compare the proposed RRNet approach with a dual-input residual-and-reconstruction method built from EDSR Residual Blocks.
Both RRNet and this second method receive the same inputs; the second method simply applies EDSR Residual Blocks to both the residual and the reconstruction branches.
Table~\ref{tab::comparisonGGHA} shows the comparison between RRNet and the dual-input residual-and-reconstruction approach with EDSR Residual Blocks.
RRNet gains an average of $-1.2\%$ BD-rate against the latter method and outperforms it in every class, with the largest BD-rate difference of $-2.2\%$ obtained on Class E.
This demonstrates that the Residual Network and the Reconstruction Network each fit their respective signals well, and that processing the residual and the reconstruction with dedicated architectures is beneficial.
Overall, this comparison confirms that the RRNet design yields a clear improvement in the quality of coded frames.

\subsection{The performance of a specific QP model on different QPs}
\label{subsec::qpRange}
To assess how well a model trained for one QP performs at other QP settings, Fig.~\ref{Fig::qpRange} compares, for each QP, the PSNR obtained with the dedicated QP model against the PSNR obtained when the QP$32$ model is reused.
At $\Delta QP=0$ the QP$32$ model is compared with itself, so $\Delta$PSNR is zero by definition.
Except for the QP$34$ setting, each dedicated QP model outperforms the reused QP$32$ model, and $\Delta$PSNR grows rapidly with the absolute value of $\Delta$QP on both sides of $\Delta QP=0$.
Accordingly, a model tuned for a specific QP outperforms models borrowed from other QPs at that setting.
In summary, based on Fig.~\ref{Fig::qpRange}, a model can reasonably be reused in place of another model within a range of about $-2$ to $2$ in $\Delta$QP.

\subsection{Subjective Results}
\label{subsec::subjectiveRes}

Fig.~\ref{fig_sub} shows the visual comparisons between the ground truths, the HEVC anchor, VRCNN, and the proposed RRNet approach on the luminance component at $QP37$ for the Johnny and BasketballDrill sequences.
The groups of figures (a), (b), (c), and (d) correspond to the original video and the videos generated using HEVC, VRCNN, and RRNet, respectively.
\nIn the Johnny, from the zoomed gold blocks, we can see that there are evident distortions and textures miss in the HEVC and VRCNN frames, while the RRNet frame shows smoother and more abundant textures.\nWe can see from the zoomed blue rectangles that the HEVC and VRCNN frames blur more severely than the RRNet frame.\nFrom the BasketballDrill, we can see from the zoomed gold and blue blocks that the distortions in HEVC and VRCNN frames are more serious than the one of the RRNet frame.\nThe experimental results demonstrate that the proposed RRNet can bring better subjective qualities than the previous in-loop filtering methods.\n\n\\section{Conclusion}\n\\label{Sec::conclusion}\nIn this paper, we propose a new video deblocking solution that utilizing both reconstructed pixels as well as rich information and features available from the compression pipeline. The coding residual signal unique from compression pipeline is utilized as an additional input for improving the CNN based in-loop filter for HEVC.\nIn essence, it is introduced to enhance the quality of reconstructed compressed video frames. \nIn this process, we first import the residual as an independent input to reinforce the textures and details.\nThen, we custom designed RRNet approach that involves two separate CNNs: the Residual Network and the Reconstruction Network.\nEach customized layer aims to reveal specific features that are characteristic of each type of frame.\nIn the Residual Network, we apply residual blocks to minimize the difference between the input frame and the output frame. \nIn the Reconstruction Network, we utilize both downsampling and upsampling ladders to adapt to learn the features for the reconstruction frames.\nThe experimental results demonstrate that the proposed algorithms significantly reduce artifacts from both objective and subjective perspectives.\nFrom the objective point of view, the BD-rate is significantly improved.\nFrom the subjective point of view, the reconstruction quality of the compressed video frames is superior.\nThese results demonstrate that the proposed schemes improved the current state of the art significantly in BD rate reduction. In the future, we will try to create more advanced in-loop methods for video coding, while develop complexity reduction for the inference time model.\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuartz crystal microbalances (QCM) principally consist of a thin quartz crystal\nbetween two electrodes. Being a piezoelectric material, the quartz oscillates in\nresponse to an AC current. In many devices, the crystal is cut in such a way\nthat transverse vibrations take place parallel to the free surface. Due to the\noscillatory motion, we would reasonably expect (correctly, as it turns out) that\nincreasing the mass slightly by adding a small layer onto the surface of the QCM\nwill lead to a small decrease in the frequency of oscillation, $f$. 
For a\nharmonic oscillator of mass $M$, we can easily prove to ourselves that a small\nincrease in the mass $\\Delta M$ causes a decrease in frequency\n\\begin{equation}\n \\frac{\\Delta f}{f}= -\\frac{1}{2}\\frac{\\Delta M}{M}\n\\end{equation}\nIn 1959, Sauerbrey derived a similar equation for the QCM \\cite{Sauerbrey1959}.\nWith $\\Delta m$ representing mass per unit area deposited on the QCM,\n\\begin{equation}\n \\frac{\\Delta f}{f} = -\\frac{2f_0 \\Delta m}{Z_Q}.\n\\end{equation}\nTypically, for AT-cut crystals the fundamental frequency equals\n$f_0 = 5\\ \\mathrm{MHz}$, the acoustic impedance of the quartz\n$Z_Q = 8.8 \\times 10^6\\ \\mathrm{kg}\/(\\mathrm{m}^2\\mathrm{s})$, and $f$\nrepresents the working frequency. As frequency shifts may be measured with great\naccuracy, much experimental work has relied on the QCM as an extremely\nsensitive mass detector, hence its name.\n\nQCMs also work in contact with fluids, as demonstrated by Nomura and Okuhara\n\\cite{Nomura1982}, who showed that the transverse waves propagate into a fluid\n(of density $\\rho$ and shear viscosity $\\eta$) with a heavy damping. Thus, QCM\nwith dissipation monitoring (QCM-D) instruments measure both the frequency of\noscillation and the energy dissipation in a ring-down experiment, in which the\nAC voltage is turned off and the quartz crystal allowed to come to rest. Because\nthe damping occurs within tens of microseconds, a series of consecutive\nring-downs can monitor the evolution of molecular processes taking place over\nsecond or minute time scales. As a consequence, QCM-D instruments have become a\nstandard tool for biosensing systems of supported membranes, and\nLangmuir-Blodgett, protein and liposome films, among others\n\\cite{Johannsmann2015,Johannsmann2008,Lane2011,Voinova2002}. In this context,\nGizeli's group observed that, for some systems, the ratio of the dissipation to\nthe frequency shift, which they termed the \\textit{acoustic ratio}, did not\ndepend on the concentration of molecules deposited on the QCM, suggesting that\nit was a property of the geometry of the molecules, rather than the mass of the\ndeposited film \\cite{Johannsmann2015}.\n\nWhen thinking in terms of the Sauerbrey relation, one would never conceive of an\nincrease in the load leading to an increased frequency of vibration, but this has\nin fact been observed in experiments with massive (micron-sized) particles\n\\cite{Dybwad1985,Marxer2003,Pomorska2010,Kravchenko2019}. A widely repeated explanation\nfor these ``negative Sauerbrey masses'' states that an increase in the frequency shift\n(.i.e a ``negative acoustic mass'') arises as a consequence of a very fast response\nof the analyte-wall contact, modelled by a (generally complex) effective spring\n\\cite{Johannsmann2015}. However, this explanation does not take into account the\nhydrodynamic transport of momentum. Historically, hydrodynamic effects have often been\ndisregarded in QCM research. As we shall see below, however, the acoustic ratio may\nwell diverge or change sign naturally \\textit{even for moderately small loads\nsuspended in a fluid}.\n\nAlthough previous research has developed one-dimensional phenomenological models of\nthe viscoelasticity of films \\cite{Voinova2002}, recent experiments with\nnanoparticles, liposomes, viruses and DNA strands have shown strong deviations\nfrom these theories\n\\cite{Johannsmann2015,Johannsmann2008b,Johannsmann2009,Reviakine2011,Tsortos2008}. 
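To fix orders of magnitude before developing the hydrodynamic picture, the short sketch below evaluates the Sauerbrey relation quoted above for the constants given in the text; the areal mass of $100\ \mathrm{ng/cm^2}$ is an assumed example value, not a measurement.

\begin{verbatim}
# Back-of-the-envelope evaluation of the Sauerbrey relation
#     df / f = -2 f0 dm / Z_Q
# using the constants quoted in the text.  The areal mass below is an
# illustrative assumption, not a value taken from any experiment.
f0 = 5.0e6      # fundamental frequency [Hz]
Z_Q = 8.8e6     # acoustic impedance of AT-cut quartz [kg m^-2 s^-1]
n = 1           # overtone order, so the working frequency is f = n * f0
f = n * f0

dm = 1.0e-6     # areal mass [kg/m^2]; equivalent to 100 ng/cm^2
df = -2.0 * f0 * dm / Z_Q * f
print(f"df = {df:.2f} Hz")   # about -5.7 Hz, i.e. roughly 1 ppm of 5 MHz
\end{verbatim}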
\n\nFollowing most of the previous work in the field, we make use of the small load\napproximation \\cite{Johannsmann2008}, which allows us to write the complex\nfrequency shift in terms of the load impedance.\n\\begin{equation}\n \\label{small_load_approximation}\n \\Delta f + \\mathrm{i} \\Delta \\Gamma = \\frac{\\mathrm{i} f_0}{\\pi} \\frac{Z_L}{Z_Q}.\n\\end{equation}\nHere, $\\Delta \\Gamma$ is the change in the decay rate of the resonator and the\ncomplex load impedance $Z_L$ equals the stress phasor on the QCM surface divided by\nits velocity phasor.\n\nThe main point in the present article is that the primary effect determining the\nimpedance of suspensions measured by the QCM involves the change in the\nhydrodynamic motion of the fluid due to the presence of suspended matter. To\nargue for this statement, we derived an analytical theory for the effect of an\ninfinite immersed layer on the motion of a QCM, represented by a flat horizontal\noscillating plane at $z = 0$ in contact with a fluid filling the space $z > 0$.\nWe also show that the changes in impedance as a function of distance and\nfrequency for a suspended membrane resemble the changes observed for a sparse\nperiodic array of spheres. The data for spheres was produced by our QCM\nsimulations of suspended liposomes using the FLUAM code \\cite{Balboa2012}, which\nis based on Peskin's immersed boundary method \\cite{Peskin2002}. We have shown\nelsewhere that our simulations agree with experimental results\n\\cite{Tsortos2020,Delgado2020}. In addition, we have considered the crossover to\npositive frequency shift (``negative acoustic mass''). Comparing the analytical\nprediction of the plate system with our simulations of sub-micron spheres and\nexperiments carried out with micron-sized colloids leads to several interesting\nconclusions, discussed below.\n\n\\section{Oscillating boundary layer}\nWe wish to study fluid sytems near a vibrating plane wall in the Stokes flow\nregime \\cite{Stokes1851}. Our fluid, with density $\\rho$ and shear viscosity\n$\\eta$, obeys the equations of linear hydrodynamics and lies in the space above\nthe $z = 0$ plane. Furthermore, the translational symmetry of the problem\nensures that the flow velocity $\\tilde{u}(z, t)$ will depend only on the $z$ coordinate\nand time. The equation describing the velocity field reads\n\\cite{Stokes1851,Kanazawa1985,Landau1987}\n\\begin{equation}\n \\frac{\\partial \\tilde{u}}{\\partial t} = \\frac{\\eta}{\\rho} \\frac{\\partial^2 \\tilde{u}}{\\partial z^2}.\n\\end{equation}\nWe will mark time-dependent oscillating functions with tildes and rely on phasors\nto represent the amplitudes of the steady-state solutions,\n\\begin{equation}\n \\tilde{u}(z, t) = \\Re\\left(u(z) e^{-\\mathrm{i} \\omega t}\\right),\n\\end{equation}\nwith the complex-valued phasor amplitude $u(z)$ \\cite{Kanazawa1985},\n\\begin{equation}\n \\label{Stokes_boundary_layer}\n u(z) = A e^{-\\alpha z} + B e^{\\alpha z},\n\\end{equation}\nwhere $\\alpha = (1 - \\mathrm{i})\/\\delta$, and $\\delta = \\sqrt{2 \\eta\/(\\omega \\rho)}$\nmeasures the depth of penetration of the oscillating flow.\nAssume the wall vibrates with frequency $\\omega$. If we apply stick boundary\nconditions where the fluid meets the plane and add that the velocity vanishes\nfar from it, then\n\\begin{align}\n u(0) & = u_0, \\\\\n \\lim_{z \\to \\infty} u(z) & = 0.\n\\end{align}\nThe coefficient $u_0$ may take, in general, complex values. 
By applying the\nboundary conditions to the general solution, we see that $A = u_0$ and $B = 0$,\nwhich implies a velocity phasor\n\\begin{equation}\n \\label{unperturbed_Stokes_flow}\n u_f(z) = u_0 \\exp(-\\alpha z)\n\\end{equation}\nfor the Stokes flow.\n\nWe now calculate the impedance associated to the Stokes flow, $Z_f$,\nAccording to the standard definition, $Z_f$ is the ratio of the shear stress\nexerted on the plane to its velocity. Let $\\sigma = \\eta \\left.\\frac{\\partial u}{\\partial z}\\right|_{z = 0}$ represent the phasor amplitude of the stress.\n\\begin{equation}\n Z_f = \\frac{\\sigma}{u(0)}\n = \\frac{\\eta \\left.\\frac{\\partial u}{\\partial z}\\right|_{z = 0}}{u_0}\n = -\\eta \\alpha.\n\\end{equation}\nThe subscript $f$ distinguishes the impedance of the base Stokes flow from\nother load impedances calculated below.\n\n\\section{\\label{immersed_rigid_plate}Immersed rigid plate}\n\n\\begin{figure}\n \\begin{center}\n \\begin{subfigure}[b]{.485\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/system_diagram.eps}\n \\caption{\\label{system_diagram}}\n \\end{subfigure}\n \\begin{subfigure}[b]{.485\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/impedance_vs_height.eps}\n \\caption{\\label{impedance_vs_height}}\n \\end{subfigure}\n \\begin{subfigure}[b]{.465\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/impedance_vs_freq.eps}\n \\caption{\\label{impedance_vs_freq}}\n \\end{subfigure} \n \\begin{subfigure}[b]{.525\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/zcut_vs_rhop.eps}\n \\caption{\\label{zcut_vs_rhop}}\n \\end{subfigure}\n \\end{center}\n \\caption{(\\subref{system_diagram}) Schematic representation of a rigid\n horizontal plate of density $\\rho'$ and thickness $a$\n immersed in a fluid of density $\\rho$ and shear viscosity $\\eta$\n above an oscillating plane.\n (\\subref{impedance_vs_height}) Complex impedance $Z_L$ due to the\n presence of the plate in Fig. \\ref{system_diagram} versus distance\n $d$ between plate and plane in units of\n $\\delta = \\sqrt{2\\nu\/\\omega}$ (the kinematic viscosity equals\n $\\nu = \\eta\/\\rho$). The curves correspond to the real (solid) and\n imaginary (dashed) parts of the impedance (divided by the Sauerbrey\n value $Z_{ref}=\\omega \\rho^{\\prime} a$).\n (\\subref{impedance_vs_freq}) Load impedance versus dimensionless\n parameter $\\omega d^2 \/ \\nu$ for three different plate densities.\n The solid plots $\\Re(Z_L)$, while the dashed line corresponds to\n $\\Im(Z_L)$. The thickness was chosen equal such that $d\/a = 1$. The\n experimental points for silica particles with a radius of half a micron\n in a $150\\ \\mathrm{mM}$ KCl electrolite were\n taken from Ref. \\cite{Olsson2012} (setting $d = 50\\ \\textrm{nm}$ for\n a qualitative comparison).\n (\\subref{zcut_vs_rhop}) Frequency $\\omega_c$ and height $d_c$ at\n which the imaginary part of the impedance crosses the horizontal\n axis (see Fig. \\ref{impedance_vs_height}) versus the ratio of the\n plate density to the fluid density. Please note that the scale on the\n left is not linear. Eq. (\\ref{solid_plate_load_impedance}) implies that\n changing $\\rho'\/\\rho$ is equivalent to changing $a$ while leaving\n $\\rho'\/\\rho$ fixed.\n }\n\\end{figure}\n\nNow imagine a solid horizontal plate of thickness $a$ and mass density $\\rho'$\nplaced above the vibrating plane at a distance $d$ (Fig. 
\\ref{system_diagram}).\nThe fluid will transmit the motion of the vibrating lower plane and drag the\nsuspended layer along. Having reached a stationary oscillation, the plate will\nmove with velocity\n\\begin{equation}\n \\tilde{v}(t) = v_0 e^{-\\mathrm{i} \\omega t},\n\\end{equation}\nwith a complex factor $v_0$ to be determined below. The motion of the solid layer\nresults from the shear stress exerted by the fluid from above and below. Let\n$\\tilde{u}_i(z,t)$ represent the velocity fields in the regions below ($i=1$) and\nabove ($i=2$) the plate. Then we can rewrite Newton's equation of\nmotion,\n\\begin{equation}\n \\rho' a \\frac{d\\tilde{v}}{dt}\n = \\eta \\left( \\left.\\frac{\\partial \\tilde{u}_2}{\\partial z}\\right|_{z = d + a}\n - \\left.\\frac{\\partial \\tilde{u}_1}{\\partial z}\\right|_{z = d} \\right),\n\\end{equation}\nin terms of phasor amplitudes,\n\\begin{equation}\n \\label{wall_equation_of_motion}\n-\\mathrm{i} \\omega \\rho' a v_0\n = \\eta \\left( \\left.\\frac{\\partial u_2}{\\partial z}\\right|_{z = d + a}\n - \\left.\\frac{\\partial u_1}{\\partial z}\\right|_{z = d} \\right).\n\\end{equation}\n\nClearly, the fluid above the plate obeys the equations of Stokes flow already\ncalculated above, but this time with an amplitude given by the motion of the\nplate.\n\\begin{equation}\n u_2(z) = v_0 e^{-\\alpha (z - (d + a))}.\n\\end{equation}\nFor the lower wall, we substitute the form of the general solution\n(\\ref{Stokes_boundary_layer}) into boundary conditions which ensure that the\nfluid moves at the same speed as the walls at the point of contact.\n\\begin{align}\n \\label{Boundary_conditions}\n u_1(0) & = u_0 = A + B \\nonumber \\\\\n u_1(d) & = v_0 = A e^{-\\alpha d} + B e^{\\alpha d} \n\\end{align}\nThe extra load impedance due to the plate, $Z_L$, equals the total impedance minus the\nimpedance due to the base Stokes flow $Z_f$,\n\\begin{equation}\n Z_L = \\frac{\\sigma}{u_0} - Z_f\n = 2 \\alpha \\eta \\frac{B}{u_0}.\n\\end{equation}\nFrom the boundary conditions (\\ref{Boundary_conditions}) we obtain $A$ and\n$B$ as a function of the plate and resonator velocity amplitudes, $v_0$ and \n$u_0$, and write the load impedance as\n\\begin{equation}\n \\label{load_impedance}\n Z_L = \\frac{\\alpha \\eta}{\\sinh(\\alpha d)}\\ \\frac{1}{u_0} \\left (v_0-u_f(d)\\right).\n\\end{equation}\nThus, a solid plate creates an impedance proportional to the difference\nbetween the plate velocity $v_0$ and the (unperturbed) Stokes flow velocity\n(\\ref{unperturbed_Stokes_flow}) at the lower fluid-plate interface ($z = d$). Eq.\n(\\ref{load_impedance}) leads to the conclusion that a fixed plate ($v_0 = 0$)\nyields an impedance inversely proportional to $1-e^{2 \\alpha d}$, and that the load\nimpedance vanishes if the plate moves with the base Stokes flow ($v_0 = u_f(d)$). In\nthe limiting case of a small gap between vibrating wall and plate, $\\alpha d \\ll 1$,\nthe load impedance corresponds to that of a Couette flow created by the perturbative\nvelocity $v_0 - u_f(d)$ in a gap of width $d$,\n\\begin{equation}\n Z_L = \\eta \\left(\\frac{v_0-u_f(d)}{d}\\right), \\ \\ \\text{for } d \\ll \\delta.\n\\end{equation}\n\nAll that remains now is to determine $v_0$ as a function of $u_0$. To this end,\nwe substitute the general solution for Stokes flow (\\ref{Stokes_boundary_layer})\ninto the equation of motion for the wall (\\ref{wall_equation_of_motion}) and use\nthe result in combination with the boundary conditions to solve for $v_0$. Substituting\nthe result into Eq. 
(\\ref{load_impedance}),\n\\begin{equation}\n \\label{solid_plate_load_impedance}\n Z_L = \\frac{\\omega \\rho' a}\n {\\frac{\\omega \\rho' a}\n {2 \\alpha \\eta}\n \\left(e^{-2 \\alpha d} - 1\\right) - \\mathrm{i}}\\ \n e^{-2 \\alpha d}.\n\\end{equation}\nNote that when the distance between the plate and the lower plane vanishes, we recover\na Sauerbrey-like relation, $\\lim_{d \\to 0} Z_L = \\mathrm{i} \\omega \\rho' a$, with\na purely imaginary impedance, corresponding to a frequency shift proportional to\nthe deposited mass $\\rho' a$. The opposite limit obviously leads to a vanishing load\nimpedance, $\\lim_{d \\to \\infty} Z_L = 0$. Below, we will often use the Sauerbrey\nimpedance, $Z_{ref}=\\omega \\rho^{\\prime} a$, as a reference to scale our results.\n\n\\subsection{Diverging and negative acoustic ratios}\n\nFig. \\ref{impedance_vs_height} plots the load impedance as a function of the height\n$d$ for three different plate densities. As already mentioned, within the small load\napproximation (\\ref{small_load_approximation}), the dimensionless acoustic ratio\n(defined as the ratio of the dissipation to the frequency shift) is proportional to\n$-2\\Re(Z_L)\/\\Im(Z_L)$. In experimental work, it is customary to present a ratio of\nthe dissipation $\\Delta D$ to the frequency shift $\\Delta f$, related to the\ndimensionlness acoustic ratio by\n\\begin{equation}\n -\\frac{\\Delta D}{\\Delta f} = -\\frac{2}{f_n}\\ \\frac{\\Re(Z_L)}{\\Im(Z_L)},\n\\end{equation}\nwhere $f_n$ is the frequency of the harmonic used in experiments. Therefore, if the\nimaginary part of $Z_L$ changes sign as a consequence of the variation of some parameter,\nthe acoustic ratio will diverge and become negative after the divergence.\n\nPositive frequency shifts (negative Sauerbrey masses) show up in experiments when\nanalysing massive particles above a certain crossover frequency $\\omega_c$. The simple\nplate model also displays such an inversion of the sign of $Z_L$. In particular, Fig.\n\\ref{impedance_vs_freq} shows that the plate qualitatively behaves like experiments\nwith micron-sized colloids. The plate load impedance is compared there to measurements of\nsilica particles of radius $R = 0.5\\ \\mu\\mathrm{m}$ adsorbed to a silica surface in a\n$\\mathrm{K}^+\\mathrm{Cl}^-$ electrolite at a concentration of $150\\ \\mathrm{mM}$ \n\\cite{Olsson2012}. The similarity between particle and plate suggests that the load\nimpedance results principally from hydrodynamic stress, in contrast to previous research,\nwhich had attributed the effect to contact forces between the surface and the load\n\\cite{Dybwad1985,Pomorska2010,Olsson2012,Johannsmann2015}. We will return to this\nimportant point below.\n\nRescaling the load impedance by $Z_{ref}$ leaves us with an expression that depends\nonly on the dimensionless parameters $\\rho'\/\\rho$, $a\/\\delta$ and $d\/\\delta$.\n\\begin{equation}\n \\label{crossover_distance}\n \\frac{Z_L}{Z_{ref}} = \\frac{e^{-2 (1 - \\mathrm{i}) d \/ \\delta}}\n {\\frac{1 + \\mathrm{i}}{2} \\frac{\\rho'}{\\rho}\n \\frac{a}{\\delta}\n \\left(e^{-2 (1 - \\mathrm{i}) d \/ \\delta} - 1\\right)\n - \\mathrm{i}}.\n\\end{equation}\nBecause $\\delta^2 \\propto \\omega^{-1}$, doubling the layer width $a$ and the distance\n$d$ has the same effect as multiplying the frequency $\\omega$ by four. Thus, for any\nfixed frequency we expect to observe a diverging acoustic ratio ($\\Im(Z_L) = 0$) for\nsome large enough distance $d_c$ (Fig. \\ref{zcut_vs_rhop}). 
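The location of this sign change is easy to find numerically from the scaled expression above. The sketch below (an illustration of Eq.~(\ref{crossover_distance}); the values $\rho'/\rho = 1$ and $a/\delta = 1$ are arbitrary illustrative choices) scans the scaled distance $d/\delta$ and reports where $\Im(Z_L)$ crosses zero, i.e. the crossover distance $d_c$ of Fig.~\ref{zcut_vs_rhop}.

\begin{verbatim}
# Evaluate the scaled plate impedance and locate the zero of Im(Z_L),
# i.e. the distance d_c at which the acoustic ratio diverges.
import numpy as np

def ZL_over_Zref(d_over_delta, rhop_over_rho=1.0, a_over_delta=1.0):
    """Scaled load impedance of an immersed rigid plate at height d/delta."""
    e = np.exp(-2.0 * (1.0 - 1.0j) * d_over_delta)
    denom = 0.5 * (1.0 + 1.0j) * rhop_over_rho * a_over_delta * (e - 1.0) - 1.0j
    return e / denom

d = np.linspace(1e-3, 3.0, 30001)            # d / delta
im = ZL_over_Zref(d).imag

# first sign change of Im(Z_L): the frequency shift turns positive beyond it
idx = np.argmax(im[:-1] * im[1:] < 0.0)
print(f"Im(Z_L) changes sign near d/delta = {d[idx]:.3f}")
\end{verbatim}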
Similarly, large enough\nanalytes ($a > a_c$) yield negative frequency shifts for given values of $\\omega$ and\n$d$. Setting $\\Im(Z_L) = 0$ leads to the following relation among the dimensionless\nparameters:\n\\begin{equation}\n \\frac{\\rho'}{\\rho} \\frac{a_c}{\\delta_c}\n = \\frac{2 \\cos\\left(2 \\frac{d_c}{\\delta_c}\\right)}\n {e^{-2 d_c\/\\delta_c}\n \\left(\n \\cos\\left(2 \\frac{d_c}{\\delta_c}\\right)\n - \\sin\\left(2 \\frac{d_c}{\\delta_c}\\right)\n \\right)},\n\\end{equation}\nwhere we have used the subindex $c$ as a reminder that we mean crossover values. We\nwill illustrate the generality of the hydrodynamic effect below by comparing this\nprediction to simulations of immersed spheres and experiments with colloidal particles.\n\n\\subsection{The hydrodynamic origin of ``negative acoustic masses''}\n \n\\begin{figure}\n \\begin{center}\n \\begin{subfigure}[b]{.485\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/velocity_profile.theta_0.d_1.eps}\n \\caption{\\label{velocity_profile.theta_0.d_1}\n $d = \\delta$, $\\theta = 0$, $\\Re(\\sigma) < 0$, $\\Delta \\Gamma > 0$.}\n \\end{subfigure}\n \\begin{subfigure}[b]{.485\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/velocity_profile.theta_pio2.d_1.eps}\n \\caption{\\label{velocity_profile.theta_pio2.d_1}\n $d = \\delta$, $\\theta = \\pi\/2$, $\\Im(\\sigma) < 0$, $\\Delta f > 0$.}\n \\end{subfigure}\n \\begin{subfigure}[b]{.485\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/velocity_profile.theta_0.d_0.25.eps}\n \\caption{\\label{velocity_profile.theta_0.d_0.25}\n $d = \\delta\/4$, $\\theta = 0$, $\\Re(\\sigma) > 0$, $\\Delta \\Gamma > 0$}\n \\end{subfigure}\n \\begin{subfigure}[b]{.485\\linewidth}\n \\includegraphics[width = \\linewidth]{figures\/velocity_profile.theta_pio2.d_0.25.eps}\n \\caption{\\label{velocity_profile.theta_pio2.d_0.25}\n $d = \\delta\/4$, $\\theta = \\pi\/2$, $\\Im(\\sigma)>0$, $\\Delta f < 0$}\n \\end{subfigure}\n \\end{center}\n \\caption{\\label{velocity_profiles}Velocity profiles for two different\n distances between the plate and the oscillating lower plane (\\textit{top row}:\n $d = \\delta$, \\textit{bottom row}: $d = \\delta\/4$). The left column shows the\n velocities at a $\\theta = \\omega t = 0\\ \\mathrm{rad}$ phase angle, and the\n right column corresponds to a phase angle of\n $\\theta = \\omega t = \\pi\/2\\ \\mathrm{rad}$. The arrows indicate the velocity of\n the plate. The solid blue line plots the velocity of the fluid at different\n heights and the dashed-dotted red line indicates the perturbation of the\n Stokes flow $u_p$ from Eq. (\\ref{perturbation_of_flow}) due to the presence of\n the immersed plate.}\n\\end{figure}\n\nTo understand why the hydrodynamic perturbation of the analyte may produce a positive\nfrequency shift (or equivalently a ``negative acoustic mass''), let us consider the\ntangential hydrodynamic stress at the surface. Without loss of generality, suppose\nthe resonator is moving with a velocity $u_0 \\cos(\\omega t)$, with $u_0$ a real number.\nThe observed stress $\\tilde{\\sigma}(t)$ can be decomposed into in-phase and\nout-of-phase components, proportional to the real and imaginary parts of the stress\nphasor, $\\tilde{\\sigma}(t) = \\Re\\left(\\sigma e^{-\\mathrm{i} \\omega t}\\right)$.\nNow, for phase angles $\\theta = \\omega t$ equal to integer multiples of $2\\pi$, the\nresonator velocity reaches its maximum value, $|u_0|$, as its displacement crosses the\nmidpoint of the oscillation. 
At this precise moment, the observed stress equals\n$\\tilde{\\sigma}(2 \\pi n \/ \\omega) = \\Re(\\sigma)$, revealing the dissipative part of\nthe stress. A quarter of a cycle later, ($\\theta = 2 \\pi n + \\pi\/2$), the observed\nstress equals $\\tilde{\\sigma}((2 \\pi n + \\pi \/ 2)\/\\omega) = \\Im(\\sigma)$, which unveils\nthe fate of the frequency shift. If $\\Im(\\sigma) > 0$ the extra stress created by the\nanalyte tends to pull the resonator forward (along $x > 0$), thus decreasing its frequency (negative acoustic mass). The opposite change takes place when\n$\\Im(\\sigma) < 0$. In other words, the ``acoustic mass'' or the frequency shift\nsimply emerges from the phase lag between the resonator velocity and the extra stress\ncoming from the analyte. This phase lag is proportional to the time required by\nviscous diffusion to propagate the surface stress from the plate at $z = d$ to the\nwall at $z = 0$.\n\nTo visualise the impedance in terms of the flow, consider the velocity profiles\ndrawn in Fig. \\ref{velocity_profiles}. The red dashed-dotted lines\ncorrespond to the perturbation $\\tilde{u}_p(z, t)$ of the laminar Stokes flow\n(\\ref{Stokes_boundary_layer}) due to the presence of the immersed plate.\n\\begin{equation}\n \\label{perturbation_of_flow}\n \\tilde{u}_p(z, t)\n = \\tilde{u}_j(z, t) - \\left( u_0 e^{-\\alpha z} \\right) e^{-\\mathrm{i} \\omega t},\n\\end{equation}\nwhere $j$ equals 1 or 2 depending on whether we focus on the fluid below the\nplate ($z < d$) or above it ($z > d + a$). Because the extra hydrodynamic stress\ncaused by the plate is $\\tilde{\\sigma} =\\eta \\partial_z \\tilde{u}_p$, the real part\nof $Z_L$ is proportional to the derivative of $\\tilde{u}_p$ with $z$ at a phase angle\nof $\\theta = \\omega t = 0\\ \\mathrm{rad}$, while the imaginary part corresponds to\nthe derivative at phase angle $\\theta = \\pi\/2\\ \\mathrm{rad}$. In the figure, the\nsign of $Z_L$ and the surface stress $\\tilde{\\sigma}$ depends on the slope of the red\ndashed-dotted line representing $\\tilde{u}_p$ with respect to the vertical dotted line,\nwhich stands for no perturbation ($\\tilde{\\sigma} = 0$). Notice that at\n$\\theta = \\pi\/2$, the slope at $z = 0$ in the top right figure has a sign opposite to\nthat of the bottom right one, indicating the change in the imaginary part of $Z_L$.\nThe top panel corresponds to a positive frequency shift $\\Delta f$, while the botom\npanel yields $\\Delta f < 0$. 
In other words, the top row leads to a ``negative acoustic mass'', while the bottom
row corresponds to an ordinary, positive acoustic mass.

\subsection{\label{plate_vs_sphere}Comparing immersed plates with suspended spheres}

\begin{figure}
    \begin{center}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/impedance_vs_height.small_R.eps}
            \caption{\label{impedance_vs_height.simulations}}
        \end{subfigure}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/impedance_vs_freq.comparison.eps}
            \caption{\label{impedance_vs_freq.comparison}}
        \end{subfigure}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/impedance_vs_height.suspended_sphere.eps}
            \caption{\label{impedance_vs_height.suspended_sphere}}
        \end{subfigure}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/impedance_vs_height.suspended_sphere_close_up.eps}
            \caption{\label{impedance_vs_height.suspended_sphere_close_up}}
        \end{subfigure}
    \end{center}

    \caption{(\subref{impedance_vs_height.simulations}) Load impedance obtained
    from numerical simulations of a suspended neutrally-buoyant sphere
    of radius $R = 0.16\ \delta$ in a
    $(L \times L \times L_z) = (1.33 \times 1.33 \times 5.34)\ \delta^3$
    box with periodic boundaries in the $x$ and $y$ directions
    versus the height of its centre (points, impedance scale on the right
    axis), compared to the impedance of a plate (curves, left axis) versus
    its distance to the oscillating plane. Results are scaled with the
    Sauerbrey impedance, $Z_{ref} = m \omega$, where the masses per unit
    surface $m = (4\pi/3) R^3 \rho'/L^2$ (sphere) and $m = \rho' a$ (plate)
    were chosen equal to each other. The data were obtained from simulations
    at different frequencies.
    (\subref{impedance_vs_freq.comparison}) Load impedance versus
    frequency for the small sphere in
    Fig. \ref{impedance_vs_height.simulations}. Points represent
    simulation data (right axis), and curves represent the analytical
    result for the solid plate (left axis).
    (\subref{impedance_vs_height.suspended_sphere}) Impedance of a sphere with
    radius $R = 0.526\ \delta$ (points corresponding to the right axis)
    as a function of the distance $d$ between its centre and the wall.
    The curves correspond to the left axis and show the
    impedance caused by a plane at height $d$ with the same lateral
    motion as the sphere.
    (\subref{impedance_vs_height.suspended_sphere_close_up}) Close-up of
    Fig. \ref{impedance_vs_height.suspended_sphere} for simulations
    with the sphere close to the wall.}
\end{figure}

Comparing the impedance curve for a solid plate to simulation data for
three-dimensional suspended spheres leads to some interesting and surprising
observations. A few words concerning the setup are first in order. These simulations
were performed using the immersed boundary method \cite{Peskin2002,Balboa2012} with
periodic boundary conditions in the $x$ and $y$ directions (resonator plane) and
introducing no-slip rigid planes at the top and bottom of the simulation box. The
oscillating flow was imposed at the bottom of the box and the velocity was set to
zero at the top. The analytical expression for the contribution of the upper boundary
to the impedance results from Eq. (\ref{load_impedance}), setting $v_0 = 0$ and
$d$ equal to the box height.
Both theory and simulations confirm that the change in
the impedance due to a stationary upper wall remains negligible when the box height
exceeds about $3\ \delta$.

Figures \ref{impedance_vs_height.simulations} and \ref{impedance_vs_freq.comparison}
illustrate the proportionality between the impedance due to a small sphere
($R = 0.16\ \delta$ in the figures) and that of a rigid plate at the same height as
the centre of the sphere. This parallel behaviour persists down to distances
surprisingly close to the wall.

A large sphere qualitatively changes the behaviour of the impedance near the wall.
While an immersed plate feels the effect of the flow only at height
$d$ (remember that the thickness of the plate plays no role as long as we fix
the value of $\rho' a$), the drag on the sphere comes from the different flow
velocities in the range $z \in [d - R,\ d + R]$, which we can only neglect when
$R \ll \delta$. A simple way to approximate the response of a sphere with this
one-dimensional model, though, consists in forcing the immersed plate to move in such
a way that its lateral displacement mirrors that of a sphere of radius $R$ at height
$d$ in response to the oscillating flow.

In the steady state, the sphere vibrates with frequency $\omega$,
\begin{equation}
    x(t) = x_0 e^{-\mathrm{i} \omega t}.
\end{equation}
To determine $x_0$, we substitute $x(t)$ into Newton's second law,
\begin{equation}
    \label{Mazur}
    -m\omega^2 x_0 = \mathrm{i} \omega \zeta x_0
    + 6 \pi \eta r \left[(1 + \alpha r) \bar{v}_s + \frac{1}{3} \alpha^2 r^2 \bar{v}_v \right].
\end{equation}
The force phasor amplitude on the right was calculated by Mazur and Bedeaux in Ref.
\cite{Mazur1974}. The friction coefficient $\zeta$ stands for
\begin{equation}
    \zeta = 6 \pi \eta r \left( 1 + \alpha r + \frac{1}{9} \alpha^2 r^2 \right),
\end{equation}
and $\bar{v}_s$ and $\bar{v}_v$ for averages of the unperturbed flow over the
surface and volume of the sphere, respectively. Their analytical expressions are
derived in Appendix A. Solving Eq. (\ref{Mazur}) for the phasor describing the
motion of the sphere, we get
\begin{equation}
    \label{position_phasor}
    x_0 = \frac{6 \pi \eta r \left[ (1 + \alpha r) \bar{v}_s + \frac{1}{3} \alpha^2 r^2 \bar{v}_v \right]}
               {-m\omega^2 - \mathrm{i} \omega \zeta}.
\end{equation}
The velocity phasor amplitude equals $v_0 = -\mathrm{i} \omega x_0$, so the corresponding
load impedance follows from Eq. (\ref{load_impedance}) writing $v_0$ in terms of the
$x_0$ given above. Plotting the impedance for a sphere whose diameter is comparable to
the penetration depth ($R/\delta = 0.526$) produces the curves in Fig.
\ref{impedance_vs_height.suspended_sphere}. Once again, apart from the vertical scaling
factor, the curves agree as the sphere moves away from the wall, even though we are
comparing the impedance of a sphere to that of a plane.
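
For concreteness, the forced-plate prescription can be evaluated directly from
Eqs. (\ref{Mazur}) and (\ref{position_phasor}) together with the averages of
Appendix A. The short Python sketch below is purely illustrative (the names and
example values are ours); it assumes $\alpha = (1 - \mathrm{i})/\delta$ and takes
$m$ as the bare mass of the (here neutrally buoyant) sphere.

\begin{verbatim}
# Illustrative sketch: displacement and velocity phasors of a suspended
# sphere driven by the oscillating Stokes flow, following Eqs. (Mazur) and
# (position_phasor).  Assumptions (ours): alpha = (1 - 1j)/delta and
# m = (4*pi/3) rho_s r^3 (bare sphere mass).
import numpy as np

def sphere_phasors(u0, omega, eta, rho, rho_s, r, d):
    delta = np.sqrt(2.0 * eta / (rho * omega))          # penetration depth
    alpha = (1.0 - 1.0j) / delta
    m = 4.0 * np.pi / 3.0 * rho_s * r**3
    # Surface and volume averages of the unperturbed flow (Appendix A)
    v_s = u0 * np.exp(-alpha * d) * np.sinh(alpha * r) / (alpha * r)
    v_v = (3.0 * u0 / (alpha * r**3) * np.exp(-alpha * d)
           * (r * np.cosh(alpha * r) / alpha - np.sinh(alpha * r) / alpha**2))
    zeta = 6.0 * np.pi * eta * r * (1.0 + alpha * r + (alpha * r)**2 / 9.0)
    force = 6.0 * np.pi * eta * r * ((1.0 + alpha * r) * v_s
                                     + (alpha * r)**2 * v_v / 3.0)
    x0 = force / (-m * omega**2 - 1.0j * omega * zeta)  # Eq. (position_phasor)
    return x0, -1.0j * omega * x0                       # x_0 and v_0

# Example: R = 0.526 delta at d = delta, water-like parameters (SI units)
omega, eta, rho = 2.0 * np.pi * 35.0e6, 1.0e-3, 1.0e3
delta = np.sqrt(2.0 * eta / (rho * omega))
print(sphere_phasors(1.0, omega, eta, rho, rho, 0.526 * delta, delta))
\end{verbatim}

The resulting $v_0$ is then inserted into Eq. (\ref{load_impedance}) exactly as
described in the text.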

Eq. (\ref{Mazur}) works well far from the wall but breaks down close to it. As Mazur
and Bedeaux themselves pointed out \cite{Mazur1974}, the theory does not take into
account the hydrodynamic reflections that significantly modify the Stokes flow felt by
the sphere when it approaches the resonator surface.
Fig. \ref{impedance_vs_height.suspended_sphere_close_up} confirms that the approximate
theory and the simulations significantly disagree near the oscillating wall.

\begin{figure}
    \begin{center}
        \includegraphics[width = \linewidth]{figures/dcut_vs_delta.eps}
    \end{center}
    \caption{\label{dcut_vs_delta}Scaled zero-frequency-shift separation for particles
    (\textit{points}) and plates (\textit{lines}) versus scaled penetration depth. The
    points correspond to simulations (\textit{red}) and experiments (\textit{blue}) for
    particles of different sizes ($R = 0.5\ \mu\textrm{m}$ and $R = 2.5\ \mu\textrm{m}$
    from Ref. \cite{Olsson2012}, and $R = 2\ \mu\textrm{m}$ from Refs.
    \cite{Pomorska2010,Pomorska2011}). The solid line represents the plate model from
    Eq. (\ref{solid_plate_load_impedance}), while the dashed line plots the forced plate
    model from section \ref{plate_vs_sphere}.}
\end{figure}

The crossover to positive frequency shifts ($\Im(Z_L) < 0$) has received attention
in experimental research, where it has been viewed as a proxy for interactions between
large particles and the substrate
\cite{Dybwad1985,Pomorska2010,Olsson2012,Johannsmann2015}. Fig. \ref{dcut_vs_delta}
presents the scaled crossover separation between the particles and the QCM surface as a
function of the scaled penetration depth. Rescaling the particle radius $R$ and the
penetration depth $\delta$ by the same factor results in an equivalent flow. Therefore,
when we represent the zeros $z_c$ of $\Im(Z_L)$, divided by $R$, versus $\delta/R$
for simulations of different spheres and penetration depths, they all collapse onto the
same curve. The frequency dependence of $z_c$ enters through the penetration depth,
since $\delta \propto \omega^{-1/2}$. The solid line follows the
plate model prediction of Eq. (\ref{crossover_distance}) for $a = 2R$, while the
dashed line corresponds to the prediction of the forced plate model in this section
(Eqs. (\ref{load_impedance}) and (\ref{position_phasor}), setting $\Im(Z_L) = 0$). The
latter model gives an indication of how the acoustic response changes with the sphere
dynamics, which arise from the forces induced by the surrounding flow. For large values
of $\delta/R$ (low frequencies or small particles) we observe similar crossover values
for the plates and the particles. Clearly, the theories for plates depart from the
behaviour of spheres when the penetration depth becomes comparable to the sphere radius,
with significant deviations when $\delta/R < 1.25$. For a QCM frequency of
$35\ \mathrm{MHz}$ this corresponds to spheres with $R > 50\ \mathrm{nm}$. Close to this
ratio ($\delta/R \approx 1.25$), the forced plate model yields slightly better
predictions, suggesting that the crossover height decreases more quickly than in the
solid plate model due to the sphere dynamics. However, as $\delta/R$ is further
decreased, the forced plate model largely overestimates the decay of the crossover
distance. When the spheres lie this close to the resonator ($\delta/R < 1.25$), multiple
hydrodynamic reflections between the resonator and the particle determine the flow and
the hydrodynamic impedance.

Fig. \ref{dcut_vs_delta} also includes some experimental observations from Pomorska
\textit{et al.} \cite{Pomorska2010} and Olsson \textit{et al.} \cite{Olsson2012} (in
blue). Let us first turn our attention to the latter reference.
There, the authors observed silica particles over a bare silica surface in a (1:1)
electrolyte ($\mathrm{K}^{+}$, $\mathrm{Cl}^{-}$) at different ionic strengths (from 0
to 150 mM). Metallurgical microscopy determined that the particles performed Brownian
motion above the surface. Adding enough electrolyte ($c = 150\ \mathrm{mM}$) reduced the
Brownian motion, indicating the screening of repulsive electrostatic forces and
adsorption by dispersion (van der Waals) forces. Although not explicitly mentioned in
\cite{Olsson2012}, at smaller ionic strengths one expects to find the silica particles
suspended over the resonator and exposed to the wall-interaction potential.
According to the DLVO theory, at low ionic strengths, below the critical coagulation
concentration, $c_{ccc}$, this potential has a secondary minimum at a distance
of about $d \approx 6/\kappa$ ($\kappa^{-1}$ stands for the Debye-Hückel screening
length) \cite{Israelachvili2010}. For $c > c_{ccc}$, the particles start to adhere to
the surface due to dispersion forces. For a KCl electrolyte in water, the Debye length
($\kappa^{-1} \propto c^{-1/2}$) is about $10\ \mathrm{nm}$ for
$c \approx 1\ \mathrm{mM}$. Taking the values of the crossover frequency $\omega_c$
reported by Olsson \textit{et al.} \cite{Olsson2012} (which grow with the ionic
strength), we can extrapolate the tendency observed in our simulations to estimate
the typical distance $d$ between the silica particles and the surface. Notably, the
result of this crude estimation agrees with distances $d$ decreasing with $c$ as
$d \sim 6/\kappa$, which points to an acoustic response governed by hydrodynamics in
these experiments. A quantitative prediction would require a more elaborate theory
(which should account for the impedance-height dependence) and a more complete set of
experimental details (surface charge values, for example). We have recently carried
out a detailed analysis in the case of suspended liposomes tethered to DNA strands
\cite{Delgado2020}. The close agreement between experiments and simulations confirmed
the dominant role of the hydrodynamic impedance when dealing with suspended particles,
and enabled quantitative predictions.

A second set of experiments by Olsson \textit{et al.} \cite{Olsson2012} considered
streptavidin-decorated silica particles adsorbed to a biotinylated silica surface.
In that case the strong streptavidin-biotin links gradually adsorbed the particles,
and the measured crossover frequency varies only mildly with the ionic strength.
The typical particle-surface distance $d$ corresponds to molecular contact
(1 nm or less). Figure \ref{dcut_vs_delta} shows that this estimation
is also compatible with our hydrodynamic prediction.

The experiments by Pomorska \textit{et al.} \cite{Pomorska2010} provide further evidence
of the importance of hydrodynamics, even when particles are adsorbed. In those
experiments the particles and the surface were decorated with polyelectrolytes of
opposite charge to ensure a strong attractive potential and adhesion.
The size of the particles in these experiments was about $4.5\ \mu\mathrm{m}$ and
the crossover frequency was close to $15\ \mathrm{MHz}$ for the two cases considered.
We set the distance to the resonator equal to molecular contact
($d_c \in [0.5, 1]\ \mathrm{nm}$) to plot the experimental data in Fig.
\ref{dcut_vs_delta} (blue squares).
These points nicely extrapolate the trend we predicted for much smaller particles.

In summary, our analyses provide evidence that the leading contribution to the
load impedance created by analytes immersed in liquids comes from hydrodynamics.
We have recently proved this claim in the case of suspended particles (liposomes
tethered to DNA \cite{Delgado2020}); in the case of adsorbed particles, our findings
call for a revision of the relevance and estimation of contact forces from QCM analyses.

\section{Elastic layer}

\begin{figure}
    \begin{center}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/elastic_layer_diagram.eps}
            \caption{\label{elastic_layer_diagram}}
        \end{subfigure}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/impedance_vs_height.elastic_layer.eps}
            \caption{\label{impedance_vs_height.elastic_layer}}
        \end{subfigure}
        \begin{subfigure}[b]{\linewidth}
            \includegraphics[width = \linewidth]{figures/AR_vs_klipo.eps}
            \caption{\label{AR_vs_klipo}}
        \end{subfigure}
    \end{center}
    \caption{(\subref{elastic_layer_diagram}) Simple model of an elastic layer,
    where an elastic material of density $\rho'$ and shear modulus $\mu$
    replaces the plate. The function $\tilde{\phi}(z, t)$ indicates the
    displacement in the $x$ direction at height $z$ and time $t$ within
    the layer.
    (\subref{impedance_vs_height.elastic_layer}) Impedance due to an
    elastic layer of density $\rho' = \rho$ and thickness
    $a = \delta/\sqrt{2}$ as a function of the distance $d$ to the lower
    plane. As the material becomes more rigid, the curves approach the
    solution for the solid plate (compare the curve for $\mu = 1000$ to
    the green line for $\rho' = \rho$ in Fig. \ref{impedance_vs_height}).
    (\subref{AR_vs_klipo}) Acoustic ratio of a neutrally-buoyant elastic
    layer of thickness $a$ versus shear modulus $\mu$ at
    $d = 0.18\ \delta$, compared to simulations of spherical liposomes of
    radius $R$ at height $d + R$ as a function of the elastic strength of the
    bonds used to connect neighbouring elements in the numerical model
    (\textit{inset}).}
\end{figure}

Let us replace the solid plate with an elastic layer of thickness $a$,
density $\rho'$ and shear modulus $\mu$ (Fig. \ref{elastic_layer_diagram}).
Within the layer, we denote the displacement of a point at height $z$ and
time $t$ along the $x$ direction with $\tilde{\phi}(z, t)$, which must satisfy the
equation of motion \cite{Kanazawa1985}
\begin{equation*}
    \frac{\partial^2 \tilde{\phi}(z, t)}{\partial t^2}
    = \frac{\mu}{\rho'} \frac{\partial^2 \tilde{\phi}(z, t)}{\partial z^2},
\end{equation*}
a wave equation with speed $c = \sqrt{\mu/\rho'}$. The steady-state solution at
frequency $\omega$ equals
\begin{equation*}
    \tilde{\phi}(z, t)
    = \Re\left(\left(C e^{-\mathrm{i} k z} + D e^{\mathrm{i} k z}\right)
        e^{-\mathrm{i} \omega t}\right),
\end{equation*}
with $k = \omega/c$. Imposing the boundary conditions on $\tilde{u}_1$, $\tilde{u}_2$
and $\tilde{\phi}$, which amount to continuity of the velocities and stresses plus the
no-slip condition at $z = 0$ and a vanishing velocity as $z$ tends towards infinity,
\begin{align*}
    \tilde{u}_1(0, t) & = u_0 e^{-\mathrm{i} \omega t}, \\
    \tilde{u}_1(d, t)
    & = \left. \frac{\partial \tilde{\phi}}{\partial t} \right|_{z = d}, \\
    \tilde{u}_2(d + a, t)
    & = \left. \frac{\partial \tilde{\phi}}{\partial t} \right|_{z = d + a}, \\
    \lim_{z \to \infty} \tilde{u}_2(z, t) & = 0, \\
    \eta \left. \frac{\partial \tilde{u}_1}{\partial z} \right|_{z = d}
    & = \mu \left. \frac{\partial \tilde{\phi}}{\partial z} \right|_{z = d}, \\
    \eta \left. \frac{\partial \tilde{u}_2}{\partial z} \right|_{z = d + a}
    & = \mu \left. \frac{\partial \tilde{\phi}}{\partial z} \right|_{z = d + a},
\end{align*}
we obtain, with the help of some computer algebra, the following load impedance,
\begin{equation*}
    Z_L = \frac{2 \alpha \eta (1 - \Lambda^2)
                \left(e^{2 \mathrm{i} a \omega / c} - 1\right)}
               {e^{2 \alpha d}
                \left((1 + \Lambda)^2
                      - e^{2 \mathrm{i} a \omega / c} (1 - \Lambda)^2\right)
                + (1 - \Lambda^2)\left(e^{2 \mathrm{i} a \omega / c} - 1\right)}.
\end{equation*}
Here $\Lambda$ represents the dimensionless parameter
\begin{equation}
    \Lambda = \frac{\alpha \eta}{\rho' c},
\end{equation}
proportional to the ratio of the velocity of viscous diffusion over $\delta$ to
the speed of elastic waves, $c$. The other relevant groups are the phase lags
$a\omega/c$ and $\alpha d$.

When the elastic medium becomes rigid ($c \to \infty$, $\Lambda \to 0$) we recover
the solution for the rigid plate. Figure \ref{impedance_vs_height.elastic_layer}
plots the impedance due to the layer as a function of the distance $d$ that separates
it from the vibrating plane.

Using the computational methods mentioned in the previous section, we simulated
elastic neutrally-buoyant liposomes using an elastic network made up of elements
connected by harmonic bonds of spring constant $k$. We observed that the acoustic ratio
decreased as we increased $k$. Increasing the rigidity of our layer leads to similar
predictions (see Fig. \ref{AR_vs_klipo}).

\section{Fluid layer}

Lastly, we will work out the impedance for a plane fluid layer of density
$\rho'$, shear viscosity $\eta'$ and velocity field $u'(z, t)$. Fig.
\ref{fluid_layer_diagram} displays a sketch of the system. Once again, we
express the fluid velocities with Eq. (\ref{Stokes_boundary_layer}) and impose
the appropriate boundary conditions (continuity of velocities and stress, no
slip at the lower boundary and a vanishing velocity as $z$ tends towards
infinity). The resulting load impedance equals
\begin{equation}
    Z_L =
    \frac{2 \alpha \eta e^{-2 \alpha d} \tanh(\alpha' a) (1 - \Upsilon^2)}
         {\tanh(\alpha' a) \left(1 + e^{-2 \alpha d}
                                 + \Upsilon^2 \left(1 - e^{-2 \alpha d}\right)
                           \right) + 2 \Upsilon},
\end{equation}
with
\begin{equation}
    \alpha' = (1 - \mathrm{i}) \sqrt{\frac{\omega \rho'}{2 \eta'}},
\end{equation}
and
\begin{equation}
    \Upsilon = \frac{\alpha' \eta'}{\alpha \eta}.
\end{equation}
Figure \ref{impedance_vs_height.fluid_layer} shows the change in the
impedance of the fluid layer as a function of the distance $d$ to the lower
plane for different values of the shear viscosity. As the viscosity increases,
the curves approach the solid plate limit. Interestingly, a layer viscosity
lower than that of the surrounding fluid leads to a flip in the behaviour of the
real and imaginary parts of $Z_L$, as observed in QCM experiments with nanobubbles
\cite{Du2004,Zhang2008,Ondarcuhu2013}
(in the figure, compare the red line for $\eta'/\eta = 0.5$ to the green line for
$\eta'/\eta = 2$).
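
The expression above is straightforward to evaluate numerically. The minimal Python
sketch below is only illustrative (the function name and the example values are ours),
and it assumes $\alpha = (1 - \mathrm{i})\sqrt{\omega\rho/(2\eta)}$ for the outer
fluid, mirroring the definition of $\alpha'$.

\begin{verbatim}
# Illustrative sketch: load impedance of an immersed fluid layer, evaluated
# from the expression above.  The outer-fluid decay constant is assumed to
# be alpha = (1 - 1j) * sqrt(omega * rho / (2 * eta)), by analogy with alpha'.
import numpy as np

def fluid_layer_impedance(d, a, omega, eta, rho, eta_p, rho_p):
    alpha = (1.0 - 1.0j) * np.sqrt(omega * rho / (2.0 * eta))
    alpha_p = (1.0 - 1.0j) * np.sqrt(omega * rho_p / (2.0 * eta_p))
    ups = alpha_p * eta_p / (alpha * eta)          # Upsilon
    th = np.tanh(alpha_p * a)
    e2d = np.exp(-2.0 * alpha * d)
    num = 2.0 * alpha * eta * e2d * th * (1.0 - ups**2)
    den = th * (1.0 + e2d + ups**2 * (1.0 - e2d)) + 2.0 * ups
    return num / den

# Example: neutrally buoyant layer of thickness a = delta/sqrt(2)
omega, eta, rho = 2.0 * np.pi * 35.0e6, 1.0e-3, 1.0e3
delta = np.sqrt(2.0 * eta / (rho * omega))
for ratio in (0.5, 2.0, 1.0e3):                    # eta'/eta, cf. the figure
    print(ratio, fluid_layer_impedance(0.2 * delta, delta / np.sqrt(2.0),
                                       omega, eta, rho, ratio * eta, rho))
\end{verbatim}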

\begin{figure}
    \begin{center}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/fluid_layer_diagram.eps}
            \caption{\label{fluid_layer_diagram}}
        \end{subfigure}
        \begin{subfigure}[b]{.485\linewidth}
            \includegraphics[width = \linewidth]{figures/impedance_vs_height.fluid_layer.eps}
            \caption{\label{impedance_vs_height.fluid_layer}}
        \end{subfigure}
    \end{center}
    \caption{(\subref{fluid_layer_diagram}) Simple model of a fluid layer, with
    a liquid of density $\rho'$ and shear viscosity $\eta'$ instead of a
    solid layer. The function $u'(z, t)$ names the velocity field inside
    the layer.
    (\subref{impedance_vs_height.fluid_layer}) Impedance due to a
    fluid layer of density $\rho' = \rho$ and thickness
    $a = \delta/\sqrt{2}$ as a function of the distance $d$ to the lower
    plane. As the viscosity increases, the curves look more and more like
    those of the solid plate (the curves for $\eta'/\eta = 10^3$ resemble
    the green lines for $\rho' = \rho$ in Fig.
    \ref{impedance_vs_height}).}
\end{figure}

\section{Discussion}

In addition to providing analytical expressions for the load impedance of
different types of immersed layers, we have demonstrated the importance of
considering the role of hydrodynamics in explaining the effects of these
layers on the QCM. Although we have not considered any contact forces between
the load and the QCM, the models explained above predict the behaviour of
suspended loads and recover the expected Sauerbrey relation in the limit
of adsorbed layers. Furthermore, the ``vanishing mass'' phenomenon observed in
suspensions arises as a natural consequence in our derivations.

The evidence provided here strongly suggests that other types of suspensions
(such as the simulated suspended spheres considered above) share the same
generic features. Even though the plates in section \ref{plate_vs_sphere} had an
impedance about five times greater than that of the spheres, the dependence on height
displayed surprisingly parallel behaviours. Prefactors cancel out when calculating
the acoustic ratio, so the plate acoustic ratios provide a decent estimate of the
values measured for the spheres (see Fig. \ref{AR_vs_height.suspended_sphere}).

\begin{figure}
    \begin{center}
        \includegraphics[width = \linewidth]{figures/AR_vs_height.suspended_sphere.eps}
    \end{center}
    \caption{\label{AR_vs_height.suspended_sphere}Acoustic ratio versus height over the
    QCM for neutrally-buoyant spheres (points) and a solid plate (lines). The distance
    $d$ indicates the separation between the plate and the QCM for the lines, and the
    distance from the centre of the sphere to the QCM for the points.}
\end{figure}

The preceding pages show that large acoustic ratios do not necessarily imply large
values of the dissipation. As we have seen, vanishing frequency shifts naturally lead
to diverging acoustic ratios.

Finally, we considered the crossover to positive frequency shifts and compared the
analytical prediction of the one-dimensional plate system to simulations of
sub-micron spheres and experiments carried out with micron-sized colloids.
The plate model correctly predicts the zero-frequency-shift crossover for small enough
particles. We observe a transition to a large-particle regime when the dimensionless
parameter $R/\delta$ becomes large enough ($R/\delta > 0.8$). The one-dimensional
theories clearly fail in this regime. By contrast, our three-dimensional simulations
correctly extrapolate to larger (micron-sized) colloids, even with the latter
adsorbed to the wall. As we did not consider adhesive forces, such an agreement
highlights the role of hydrodynamics in determining the response of large adsorbed
particles, and calls for a hydrodynamic extension of the existing contact-force and
elastic-stiffness QCM theories. The ``coupled-resonance model'', which predicts
positive shifts within the ``elastic loading'' regime in QCM
\cite{Dybwad1985,Johannsmann2015}, was originally derived for spheres in the dry state
but has subsequently been applied extensively in liquids
\cite{Pomorska2010,Olsson2012,Johannsmann2015}. Our results call for a revision of the
role of contact forces and elastic stiffness in liquids, an investigation which
requires a generalisation of existing theories including the difficult topic
of elastohydrodynamic lubrication \cite{Gohar2001}. Advancing our theoretical
understanding in this direction would greatly improve the predictive power of QCM
analyses of molecular and mesoscopic contact forces.

\section{Acknowledgements}

Our research was supported by the European Commission FETOPEN Horizon 2020
Catch-U-DNA project.

\section*{Appendix A. Average velocity integrals}

The velocities $\bar{v}_s$ and $\bar{v}_v$ in Eq. (\ref{Mazur}) come from
averaging the flow profile $u(z, t)$ over the surface and volume of the sphere,
respectively,
\begin{align*}
    \bar{v}_s & = \frac{1}{4 \pi r^2} \int_S u_0 e^{-\alpha z}\ dS, \\
    \bar{v}_v & = \frac{3}{4 \pi r^3} \int_V u_0 e^{-\alpha z}\ dV.
\end{align*}
To carry out the first of these integrals, we choose spherical coordinates
around the centre of the immersed sphere at height $d$. Therefore,
\begin{equation*}
    \int_S u(z, t)\ dS
    = \int_0^{2\pi} \left( \int_0^\pi u_0 e^{-\alpha (d + r \cos(\theta))} r^2 \sin(\theta)\ d\theta \right)\ d\phi.
\end{equation*}
After integrating over $\phi$,
\begin{equation*}
    \int_S u(z, t)\ dS
    = 2 \pi u_0 r e^{-\alpha d} \int_0^\pi e^{-\alpha r \cos(\theta)} r \sin(\theta)\ d\theta
    = 2 \pi u_0 r e^{-\alpha d} \left[ \frac{e^{-\alpha r \cos(\theta)}}{\alpha} \right]_0^\pi.
\end{equation*}
Hence,
\begin{equation*}
    \bar{v}_s = \frac{u_0 e^{-\alpha d}}{\alpha r} \sinh(\alpha r).
\end{equation*}
The volume integral follows by integrating the surface integral over the radial
coordinate, from $0$ to the radius of the sphere $r$,
\begin{equation*}
    \int_V u(z, t)\ dV
    = \int_0^r \frac{4 \pi r' u_0}{\alpha} e^{-\alpha d} \sinh(\alpha r')\ dr'
    = \frac{4 \pi u_0}{\alpha} e^{-\alpha d}
      \left[ \frac{r' \cosh(\alpha r')}{\alpha} - \frac{\sinh(\alpha r')}{\alpha^2} \right]_0^r,
\end{equation*}
from which we get
\begin{equation*}
    \bar{v}_v = \frac{3 u_0}{\alpha r^3} e^{-\alpha d}
                \left( \frac{r \cosh(\alpha r)}{\alpha} - \frac{\sinh(\alpha r)}{\alpha^2} \right).
\end{equation*}
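
These closed forms are easy to verify numerically. The short Python sketch below is a
sanity check of ours (not part of the derivation), comparing the expressions above with
a direct quadrature of the defining averages for a complex
$\alpha = (1 - \mathrm{i})/\delta$.

\begin{verbatim}
# Illustrative check of the closed-form averages derived above.
# alpha = (1 - 1j)/delta is assumed; all other symbols follow the text.
import numpy as np

u0, delta, r, d = 1.0, 1.0, 0.4, 1.2
alpha = (1.0 - 1.0j) / delta

# Closed forms
v_s = u0 * np.exp(-alpha * d) * np.sinh(alpha * r) / (alpha * r)
v_v = (3.0 * u0 / (alpha * r**3) * np.exp(-alpha * d)
       * (r * np.cosh(alpha * r) / alpha - np.sinh(alpha * r) / alpha**2))

# Direct quadrature (midpoint rule): surface average over the polar angle and
# volume average as an integral of the surface average over spherical shells
n = 4000
theta = (np.arange(n) + 0.5) * np.pi / n
surf = 0.5 * np.sum(u0 * np.exp(-alpha * (d + r * np.cos(theta)))
                    * np.sin(theta)) * (np.pi / n)
rp = (np.arange(n) + 0.5) * r / n
vol = (3.0 / r**3) * np.sum(rp**2 * u0 * np.exp(-alpha * d)
                            * np.sinh(alpha * rp) / (alpha * rp)) * (r / n)

print(abs(surf - v_s), abs(vol - v_v))   # both differences should be tiny
\end{verbatim}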