diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzndbb" "b/data_all_eng_slimpj/shuffled/split2/finalzzndbb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzndbb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nCrowdsourcing websites aggregate judgments in order to\nhelp users discover high quality content. These systems typically combine choices of many people to algorithmically \\textit{rank} content so that better items---product reviews~\\cite{lim2015evaluating}, news stories~\\cite{PopularDynam,SocialInfluenceBias}, or answers on question-answering (Q\\&A) platforms~\\cite{BestAnswerYahoo,HighQualityQA}---are easier to find.\n\nDespite a long history of crowdsourcing~\\cite{JuryThm,Galton1908}, some of its limitations have only recently become apparent. Salganik et al. (\\citeyear{Salganik2006}) found that aggregating the votes of many people to rank songs increases the inequality and instability of song popularity. Even when starting from the same initial conditions, the same songs could end up with vastly different rankings. Other studies have shown that algorithmic ranking can amplify the inequality of popularity~\\cite{lerman14as,Burghardt2018} and bias collective outcomes in crowdsourcing applications~\\cite{MyopiaCrowd,dev2019quantifying}. In addition, information about the choices of other users affects decisions in complex ways~\\cite{SocialInfluenceBias,Hogg2015hcomp,talton2019people}. Unfortunately for crowdsourcing system designers, it is still not clear how these finding could help improve collective outcomes, in large part due to difficulty of quantifying the quality of options (e.g., the best answer to a question) and its impact on individual decisions. \n\nTo better understand and improve collective outcomes that emerge from individual decisions, we break down the crowdsourcing task into its basic elements: item quality, item ranking, and social influence. We create a controlled experiment to study how these elements jointly affect individual decisions and collective outcomes. We use experimental data to construct and validate a mathematical model of human judgements, and then use it to explore algorithmic ranking and identify strategies to improve crowdsourcing performance. Our study addresses the following research questions:\n\\begin{description}\n \\item[RQ1] How does quality and presentation of options jointly impact individual decisions?\n \\item[RQ2] When is algorithmic ranking unstable and does not reliably identify the best option? \n \\item[RQ3] How can we stabilize algorithmic ranking such that the best option is typically ranked first?\n\\end{description}\n\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{figures\/Fig1.pdf}\n\\caption{\\label{fig:ExperimentSchematics} Schematic of the experiment conditions. Left: guess condition, where subjects write a guess for the correct value. Center: control condition, where subjects choose the best of two answers. Right: social influence condition, where a randomly chosen answer is called the ``most popular'' and is ranked first.\n}\n\\end{figure}\n\n\nOur experiment asks subjects to choose the best answer to questions with objectively correct answers, such as the number of dots in a picture. Figure~\\ref{fig:ExperimentSchematics} illustrates one such question, where we ask users to find the area ratio between the largest and smallest shapes. (Other questions used in the experiment are shown in Appendix Fig.~\\ref{fig:Questions}.) 
While these simple questions abstract away some of the complexity of the often-subjective decision making people do in crowdsourcing systems, they allow quality to be objectively measured and its effects on decisions better understood.\n\n\n\nThe experiment has three conditions shown in Fig.~\\ref{fig:ExperimentSchematics}. In the first condition, we let subjects write their answers to understand how their subjective guesses deviate from the correct answers. In the remaining conditions, we ask subjects to choose the best of two randomly generated answers that are randomly ordered. In the control condition, subjects are not told how they are ordered, while in the social influence condition, the first answer is labeled ``more popular''. These simple conditions allow us to disentangle the elements of crowdsourcing systems and begin quantifying how individual decisions affect collective outcomes.\n\nTo begin answering RQ1, we construct a mathematical model of the probability to choose an option as a function of its position and quality. The model requires only two parameters to measure \\textit{cognitive heuristics} (mental shortcuts people use to make quick and efficient judgements) and is in excellent agreement with experimental data. The first parameter reflects a user's preference to pick the first answer (known as ``position bias''~\\cite{lerman14as,Burghardt2018,MusicLabModel,PopularDynam}) and the other parameter measures the rate at which answers are guessed at random. Subjects otherwise pick an answer closest to their initial (unobserved) guess. We call this model the Biased Initial Guess (BIG) model. The BIG model demonstrates that the ``social influence'' experiment condition enhances position bias~\\cite{Burghardt2018}, \ntherefore cognitive heuristics that produce position bias and social influence can be quantified with a single parameter, a substantial simplification over previous work \\cite{MusicLabModel,PopularDynam}. Moreover, it helps explain why users often choose the worst answer when answer quality differences are small.\n\nThe BIG model not only improves our understanding of how answers are chosen, but it allows us to test different ranking policies in simulations to answer RQ2 and RQ3. Importantly, these simulations demonstrate that cognitive heuristics can make popularity-based ranking highly unstable. When the quality difference between the options is small, initially minor differences in popularity can create a cumulative advantage \\cite{Rijt2014}, meaning the better answer does not always become the most popular. However, when the quality difference passes a critical point, popularity-ranking is stable, and the better answer eventually becomes the most popular. These results may help explain an under-appreciated finding of Salganik et al. (\\citeyear{Salganik2006}) that the best and worst songs tended to be correctly ranked when songs were ordered by popularity, but intermediate-quality songs landed anywhere in between. \n\nFinally, to answer RQ3, we propose an algorithm that rectifies this instability by ordering answers based on their inferred quality, which we call RAICR: Rectifying Algorithmic Instabilities in Crowdsourced Ranking. This method is found to be stable and consistently order the better answer first, thus making best answers easier to find, even when they are only slightly better. 
RAICR ranks answers as well as, or better than, common baselines such as ordering by popularity or by recency (ranking by the last answer picked).\n\nOur work shows that individual decisions within crowdsourcing systems are strongly affected by cognitive heuristics, which collectively create instability and poor crowd wisdom. Designers of crowdsourcing systems need to account for these biases in order to make good content easier to find. Algorithms such as RAICR, however, can correct for these biases, thereby improving the wisdom of crowds. \n\n\\section{Related Literature}\n\n\\subsection{Crowdsourcing}\nCrowdsourcing has a two-century-long history demonstrating how a collective can outperform individual experts \\cite{JuryThm,Galton1908,surowiecki2005wisdom,Kaniovski,Simoiu2019}, thus creating the moniker ``wisdom of crowds''. Crowds have been shown to beat sports markets \\cite{Brown2019,Peeters2018} and corporate earnings forecasts \\cite{Zhi2019}, and to improve visual searches \\cite{Juni2017}. One reason crowd wisdom works is the law of large numbers: assuming unbiased and independent guesses, the average guess should converge to the true value. \n\nIndividual decisions, including in online settings, are biased by cognitive heuristics, such as anchoring \\cite{shokouhi2015anchoring}, primacy \\cite{Primacy1}, prior beliefs \\cite{white2013beliefs}, \nand position bias \\cite{Burghardt2018}. These biases are not necessarily canceled out with large samples \\cite{Prelec2017,Kao2018}. As a result, the aggregated guesses of a crowd do not necessarily converge to the correct answer.\n\nGuesses are usually not independent, which can sometimes improve the wisdom of crowds. Social influence models, such as the DeGroot model \\cite{Degroot1974}, have been shown to push simulated agents to an optimal decision \\cite{Golub2010,Mossel2015,Bala1998,Acemoglu2011}. These results have been backed up experimentally \\cite{Becker2017,Becker2019,Ungar2012,Tetlock2017}, even when opinion polarization is included \\cite{Becker2019}. One reason social influence can be beneficial is that it encourages people who are way off the mark to improve their guess \\cite{Mavrodiev2013,Becker2017,Abeliuk2017www}. \n\n\nOften, however, social influence can reduce crowd wisdom. Corporate earnings predictions \\cite{Zhi2019}, jury decisions \\cite{Kaniovski,BurghardtJury}, and other guesses can degrade with influence \\cite{Lorenz2011,Lorenz2015,Simoiu2019}, and malevolent individuals can manipulate people to make particular collective decisions \\cite{SocialInfluenceBias,Asch1951}. Too much influence by a single individual can also reduce the wisdom of collective decisions \\cite{Becker2017,Acemoglu2011,Golub2010}, and deferring to friends can sometimes make unpopular (and potentially low-quality) ideas appear popular \\cite{Lerman2016}. \n\nRecent work has also demonstrated how other cognitive heuristics can affect crowd wisdom. After the landmark study by Salganik et al. (\\citeyear{Salganik2006}), some researchers found that social influence has no effect on decisions \\cite{Antenore2018}, or that position bias, i.e., the preference to choose options listed first, largely explains biases in crowdsourcing \\cite{lerman14as,MusicLabModel}. Social influence instead enhances the position bias \\cite{Burghardt2018,MusicLabModel}. Burghardt et al.
(\\citeyear{Burghardt2018}) have begun to tease apart these effects, showing that while social influence enhances position bias, it has no marginal effect when we control for position. This is consistent with our approach, which models and rectifies both biases with a single parameter. \n\n\\subsection{Algorithmic Ranking}\n\nThe goal of a ranking algorithm is to make good content easier to find. \nMany papers have begun to address this goal \\cite{Page1999,Jarvelin2002,Bendersky2011}, which has recently been applied to crowdsourced ranking. Because of human biases, however, algorithms that na{\\\"i}vely use human feedback to suggest content will end up forming echo chambers \\cite{Hilbert2018,Bozdag2013,Hajian2016}, or only recommend already-popular items \\cite{Abdollahpouri2017,Abdollahpouri2019}. This can also give some content a cumulative advantage \\cite{Rijt2014}, even when it is of similar quality to content that remains unpopular. \n\nTo correct for algorithmic bias in this paper, we create a ranking method that follows the strategy of Watts, who says, ``...we can instead measure directly how they respond to a whole range of possibilities and react accordingly'' \\cite{Watts2012}. In the present context, this strategy implies we can create better algorithms for option ranking by observing, and addressing, how people respond to social influence and position biases. We show this strategy applied in RAICR improves upon simple algorithms used in the past, which include ordering results by popularity \\cite{Burghardt2018} or recency \\cite{lerman14as}. While some crowdsourced ranking strategies use a two-tier platform model, in which researchers rank options based on whether content is downloaded and rated \\cite{Salganik2006,Antenore2018,Abeliuk2017www}, RAICR is based on a common simpler model in which we only observe if content is chosen \\cite{Burghardt2018,PopularDynam}. This subtle difference implies many previous ranking schemes are not applicable. The present paper also compliments previous work that uses features, such as answer position, to predict Q\\&A website quality \\cite{Shah2010,MyopiaCrowd}.\n\n\\section{Experiment}\n\nThe experiment asked subjects, hired through Amazon Mechanical Turk between August 2018 and September 2019, to answer a series of questions shown in the Appendix. The order of questions was randomized for each subject, and questions were not time-limited. Questions include specifying the ratio of the areas of two shapes or the lengths of two lines, or the number of dots in an image. We designed the experiment around quotidian tasks that do not require specific expertise but are difficult for most people. Despite their difficulty, the questions have objectively correct answers. We quantify the quality of an answer by the difference from the mean of all guesses, which better matches a log-normal distribution, as discussed later. In the Appendix, we define quality of an answer by the difference from the correct value and find results are qualitatively very similar. \n\nSubjects were assigned to one of three conditions. In the \\textit{guess condition}, shown in the left panel of Fig.~\\ref{fig:ExperimentSchematics}, the subjects could freely type their guesses. In the \\textit{control condition}, shown in the center panel of Fig.~\\ref{fig:ExperimentSchematics}, subjects were told to choose the best among two options, and were not told how the two answers were ordered. 
Finally, in the \\textit{social influence condition}, shown in the right panel of Fig.~\\ref{fig:ExperimentSchematics}, they were told the first among two answers was the most popular, but the layout was otherwise identical to the previous condition. We only showed ten questions to each subject to reduce the effects of performance depletion in Q\\&A systems~\\cite{Ferrara17}. Further, to reduce bias due to other phenomena, such as the decoy effect \\cite{Huber1982}, we showed subjects only two choices. Subjects in the same condition but assigned on different days have statistically similar behavior.\n\nApproximately 1800 subjects are evenly split between the conditions (596, 586, and 587 for the guess, control, and social influence condition, respectively). For guess values, we remove extreme outliers (guesses smaller than 1 or greater than $10^6$). All answers\nare supposed to be greater than 1, while values greater than $10^6$ may affect mean values and appear to represent throwaway answers. The number of valid participants for each question is shown in Table\\ref{tab:guesssummary}.\n\n\n\\begin{table}[t]\n\\centering\n\\caption{Number of subjects for each guess question (after cleaning)}\n\\label{tab:guesssummary}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nQ1&Q2&Q3&Q4&Q5&Q6&Q7&Q8&Q9&Q10\\\\ \\hline\n595&594&593&595&592&593&589&591&584&590\\\\ \\hline\n\\end{tabular}\n\\begin{flushleft}\n\\end{flushleft}\n\\end{table}\n\n\nMechanical Turk workers were hired if they had an approval rate of over $95\\%$, completed more than 1000 Human Intelligence Tasks (HITs), and never participated in any of the experiment conditions before. Each worker was paid $\\$1.00$ for the guessing condition and $\\$0.50$ for the other two conditions. The assignment took $6$ minutes on average for the guess condition and $2$ minutes on average for the other conditions, equivalent to an hourly wage of $\\$10-\\$15$. The human experiment was approved by the appropriate IRB board.\n\n\\section{Results}\n\\subsection{A Mathematical Model of Decisions}\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures\/Fig2.pdf}\n\\caption{\\label{fig:CDFGuess} The CDF of guesses for each question. (a) Before normalization, and (b) after normalization.\n}\n\\end{figure}\n\nWe use data gathered from the experiment to answer \\textbf{RQ1}: \\textit{How does quality and presentation of options jointly impact individual decisions?}\n\n\\subsubsection{Open-Ended Experiment Condition} \nTo derive the model of how people make these decisions, we start by constructing the distribution of guesses for each question (the \\textit{guess} condition in the experiment). \nThe guesses, plotted in Appendix Fig.~\\ref{fig:GuessExperimentResults} and CDF shown in Fig.~\\ref{fig:CDFGuess}a, are highly variable (by as many as six orders of magnitude), while the correct answers vary by three orders of magnitude. The median guess may differ from the true answer, in agreement with previous work \\cite{Kao2018}, but values are typically the correct order of magnitude.\n\nWe normalize guesses by defining a new variable $A$:\n\\begin{equation*}\n A = \\frac{\\text{ln}(X)-\\langle \\text{ln}(X)\\rangle}{\\sigma_{\\text{ln}(X)}},\n\\end{equation*}\nwhere $X$ is a guess value, and $\\langle \\text{ln}(X)\\rangle$ is the mean of the logarithm of guesses. Figure~\\ref{fig:CDFGuess}b shows that this simple normalization scheme effectively collapses answer guesses to a single distribution. 
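As a concrete illustration, the normalization above can be computed with a few lines of code. The sketch below (Python; written for this description, not taken from the released analysis code) maps one question's raw guesses to normalized scores $A$, after removing the extreme outliers described in the Experiment section:

\begin{verbatim}
import numpy as np

def normalize_guesses(raw_guesses):
    """Map raw guesses X for one question to normalized scores A:
    z-scores of the log-guesses, after removing extreme outliers."""
    x = np.asarray(raw_guesses, dtype=float)
    x = x[(x >= 1) & (x <= 1e6)]   # drop guesses < 1 or > 10^6
    log_x = np.log(x)
    return (log_x - log_x.mean()) / log_x.std()
\end{verbatim}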
We show in the Appendix that guesses are not normally distributed, but instead are better approximated as log-normal, in agreement with previous work on a different set of questions \\cite{Kao2018}. The normalized guesses $A$ can be thought of as the $z$-scores in log-normal distributions. Alternative ways to center data, shown in the Appendix, produce similar results. We use these normalized guesses and distributions in the remaining two experiment conditions. Intuitively, if the mean of all guesses converges to the correct answer, $A=0$ can be thought of as the best answer.\n\n\\subsubsection{Two-Choice Experiment Conditions}\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.3\\textwidth]{figures\/Fig3.pdf}\n\\caption{\\label{fig:ChooseVsPos} Probability to choose an answer versus its position for the social influence and control condition.\n}\n\\end{figure}\n\nThe latter two experiment conditions require subjects to pick the best among two answers to the question. For both the control and social influence conditions, answers are ordered vertically, with one answer above the other. There is a significant position effect when answers are ordered this way, as shown in Fig.~\\ref{fig:ChooseVsPos}. In the control condition, there is a slightly greater probability (52\\%) to choose the first (top) answer over the last one (p-value $<0.001$). In the social influence condition, meanwhile, the probability to choose the first answer is substantially larger (59\\%) and statistically significantly different than the control condition (p-value $<0.001$). This is in agreement with previous work showing that social influence amplifies the position effect~\\cite{Burghardt2018}.\n\n\n\\subsubsection{The Biased Initial Guess Model}\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/Fig4.pdf}\n\\caption{\\label{fig:ModelSchematics} Schematic to calculate $s(A_1,A_2)$. Guesses follow an approximately log-normal distribution unique to each question, but the normalized guesses, $A$ are approximately standard normal distributed. In an experiment, assume two candidate answers are provided, $A_1$ is listed first, and $A_2$ is listed last. The variable $s(A_1,A_2)$ in Eq.~\\ref{eq:initselect} is the probability an unknown initial guess is closer to $A_1$ than $A_2$.\n}\n\\end{figure}\n\nWe now have the necessary ingredients to model how decisions to choose an answer are affected by its quality, position, and social influence. \nWe present the Biased Initial Guess (BIG) decision model and show it is consistent with the data.\n\nWe first discuss the simplest case where a user has to choose the better of two answers $A_1$, listed first, and $A_2$, listed last, in the absence of cognitive biases. Figure~\\ref{fig:ModelSchematics} shows probability of the answer, with the normalized values of the choices $A_1$ and $A_2$, as well as the user's initial guess $A_I$ about the true answer, which we do not observe. All things equal, the user will choose the first answer $A_1$ if it is closer to the initial guess, i.e., if $|A_1-A_I|<|A_2-A_I|$, and will otherwise choose $A_2$. 
The probability to choose $A_1$ is then:\n\\begin{equation}\ns(A_1,A_2) = \n\\begin{cases}\n\\text{Pr}(A_I > \\frac{A_1+A_2}{2}) & A_1 > A_2 \\\\\n1\/2 & A_1 = A_2\\\\\n\\text{Pr}(A_I < \\frac{A_1+A_2}{2}) & A_1 < A_2\n\\end{cases}\n\\end{equation}\nAssuming the initial guess $A_I$ follows the standard normal distribution quantified in the guess experiment condition,\n\\begin{equation}\n\\label{eq:initselect}\ns(A_1,A_2)= \n\\begin{cases}\n\\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2 & A_1 \\ge A_2 \\\\\n\\left[1 + \\text{erf}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\\right]\/2, & A_1 < A_2\n\\end{cases}\n\\end{equation}\nwhere $\\text{erf}(.)$ is the error function and $\\text{erfc}(.) = 1-\\text{erf}(.)$ is the complementary error function. When $s(A_1,A_2)>0.5$, $A_1$ is not only more likely to be closer to the initial guess ($A_I$) than $A_2$, but is also closer to zero than $A_2$, and therefore the objectively better answer. On the other hand, when $s(A_1,A_2)<0.5$, $A_1$ is further from most guesses compared to $A_2$ and the objectively worse answer.\n\nTo better model decision-making, we have to account for biases, due to cognitive heuristics and algorithmic ranking, to explain why people do not always choose the best option. As shown in Fig.~\\ref{fig:ChooseVsPos}, sometimes they choose the first answer even if it is not the best answer. We quantify this position bias by assuming that with probability $p$ participants choose the first answer regardless of its quality. This parameter should presumably be small in the control condition and large in the social influence condition. Subjects may also choose an answer regardless of its position or quality because there is no monetary incentive to choose good answers. We model this by allowing subjects to choose an answer at random with a probability $r$. Taking these two heuristics into account, we arrive at the BIG Model: \n\n\\begin{equation}\n\\label{eq:model_full}\n\\begin{split}\n\\text{Pr}(\\text{Choose}~A_{1}) &= r\/2 + (1 - r) [p + (1 - p) s(A_1,A_2)]\\\\\n&= \\begin{cases}\nr\/2 + (1 - r) \\left[p + (1 - p)\\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2\\right] & A_1 \\ge A_2 \\\\[7pt]\nr\/2 + (1 - r) \\left[p + (1 - p)\\left[1 + \\text{erf}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\\right]\/2\\right], & A_1 < A_2\n\\end{cases}\n\\end{split}\n\\end{equation}\nThe probability of choosing $A_2$ is simply the complement of this probability. Because $A=0$ is expected to be the best answer, with some simple manipulation we can infer the probability that the best answer is chosen.\n\n\\begin{equation}\n\\label{eq:model_best}\n\\text{Pr}(\\text{Choose}~A_{1}|A_{1}~\\text{Best}) = \\begin{cases}\nr\/2 + (1 - r) \\left[p + (1 - p) \\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2\\right] & \\frac{A_1+A_2}{2}<0\\\\[7pt]\nr\/2 + (1 - r) \\left[p + (1 - p) \\left(1-\\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2\\right)\\right] & \\frac{A_1+A_2}{2}\\ge0\\\\\n\\end{cases}\n\\end{equation}\nA similar equation can model $\\text{Pr}(\\text{Choose}~A_{2}|A_{2}~\\text{Best})$.\n\nAgreement between the model and data is shown in Fig.~\\ref{fig:ModelFit}. In the \\textit{control condition} (Fig.~\\ref{fig:ModelFit}a), the best parameters are $r=0.28\\pm0.02$ and $p=0.05\\pm0.02$. We find that the log-likelihood of the model, $\\ell=-3002.49$, is not statistically different from the log-likelihood expected if the data came from the model itself: p-value $=0.40$.
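For concreteness, the BIG choice probability (Eqs.~\ref{eq:initselect} and~\ref{eq:model_full}) and the log-likelihood used to fit $p$ and $r$ can be written compactly. The sketch below is illustrative only; the function and variable names are ours, and the fitting details differ from the released code:

\begin{verbatim}
import numpy as np
from scipy.special import erf, erfc

def s_initial(a1, a2):
    """Probability an initial guess A_I ~ N(0,1) is closer to a1
    (listed first) than to a2 (listed last)."""
    m = (a1 + a2) / 2.0
    if a1 >= a2:
        return 0.5 * erfc(m / np.sqrt(2))
    return 0.5 * (1.0 + erf(m / np.sqrt(2)))

def p_choose_first(a1, a2, p, r):
    """BIG model: probability the first-listed answer a1 is chosen."""
    return r / 2 + (1 - r) * (p + (1 - p) * s_initial(a1, a2))

def log_likelihood(votes, p, r):
    """votes: iterable of (a1, a2, chose_first) from the two-choice data."""
    ll = 0.0
    for a1, a2, chose_first in votes:
        q = p_choose_first(a1, a2, p, r)
        ll += np.log(q if chose_first else 1.0 - q)
    return ll
\end{verbatim}

Maximizing this log-likelihood over $p$ and $r$ (e.g., with a grid search or \texttt{scipy.optimize}) reproduces the type of fit reported above.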
See Methods for how p-values and error bars are calculated. We also check if we need both parameters, $r$ and $p$, using the likelihood ratio test and Wilks' Theorem~\\cite{Wilks1938}. We compare the likelihood ratio of the two-parameter model to simpler models with $p$ or $r$ (or both) set to zero. The probability a simpler model could fit the data as well or better is $\\le 0.002$. We conclude that our model describes the control condition very well. \n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Fig5.pdf}\n\\caption{\\label{fig:ModelFit} Decision model agreement with experiment data. Plots show the probability the best answer is chosen for the model (lines) and experiment (symbols) under (a) the control condition and (b) the social influence condition. Purple diamonds are probabilities when the best answer is the first answer, $A_1$ and green squares for when the best answer is $A_2$.\n}\n\\end{figure}\n\nThe agreement between data and model is similarly close in the \\textit{social influence condition} (Fig.~\\ref{fig:ModelFit}b). The position bias parameter $p=0.21\\pm 0.01$ is larger than in the control condition, in agreement with expectations. We also find $r=0.08\\pm 0.02$, thus social influence reduces the frequency of random guesses. In both experiment conditions, surprisingly, $\\approx 20\\%$ of users choose answers for reasons besides ``quality'' ($r\/2+(1-r)p = 18\\%$ and $23\\%$ for the control and social influence conditions, respectively). Similar to the control condition, we find that the model is consistent with the data. The log-likelihood of the empirical data ($\\ell=-3190.13$) is not statistically different from log-likelihood values if the data came from the model: p-value $0.47$. The probability a simpler model ($r$ or $p$ set to zero) could fit the data as well or better is $<10^{-5}$. In conclusion, we find the BIG model is consistent with both experiment conditions and its parameters are interpretable and meaningful. In the Appendix, we show that all these results are consistent when we look at a subset of experiment questions or center the data differently. \n\n\\subsection{Algorithmic Ranking Instability}\nCrowdsourcing websites automatically highlight what they consider the best choices to help their users more quickly discover them. For example, Stack Exchange (like other Q\\&A platforms) usually ranks answers to questions by the number of votes they receive. Despite problems with popularity-based ranking identified in previous studies~\\cite{Salganik2006,lerman14as}, it is widely used for ranking content in crowdsourcing websites. In this section we identify an instability in popularity-based ranking: the first few votes a worse answer receives can lock it in the top position, where it acquires cumulative advantage \\cite{Rijt2014}. This allows us to answer \\textit{RQ2: When is algorithmic ranking unstable and does not reliably identify the best option?} \n\n\n\\begin{figure*}[tbh!]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figures\/Fig6.pdf}\n\\caption{Comparison of ranking policies via simulations. Plots show the probability the best answer is ranked first after (a) 50, (b) 500, and (c) 20,000 votes, when answers are ranked by quality (black line), recency (orange dashed line), and popularity. Also shown in (c) is the critical value of $A_{\\text{worst}}$ based on Eqs.~\\ref{eq:scrit} and~\\ref{eq:initselect}. 
In these simulations, subjects choose answers following the BIG Model with $p=0.2$ and $r=0.09$. ``Popularity 0'' (green dashed line), ``Popularity -10'' (cyan dashed line), and ``Popularity -200'' (red dashed line) mean that the worst answer starts with a 0, 10, or 200 vote advantage, respectively. Shaded areas are 95\\% confidence intervals. \n\\label{fig:InitialGuessModelRankData} \n}\n\\end{figure*}\n\nTo demonstrate the instability, we simulate a group of agents who choose answers according to the BIG model with $p=0.2$ and $r=0.09$. In the simulations, one answer is objectively best, e.g., exactly equal to the correct answer ($A_{\\text{best}}=0$), while the worst answer is larger than $A_{\\text{best}}$ (results are symmetric if $A_{\\text{worst}}<0$). Agents vote one at a time, and under popularity-based ranking the answer with more votes so far is listed first. If\n\\begin{equation}\n\\text{Pr}(\\text{Choose}~ A_{\\text{best}}=A_2)>\\text{Pr}(\\text{Choose}~ A_{\\text{worst}}=A_1)\n\\end{equation} \nthen subjects are more likely to choose the better answer regardless of whether it is ranked first or second; therefore, the answer order is stable. When the above inequality is not true, however, then subjects are more likely to pick the first answer regardless of its quality, and the popularity ranking is unstable. The critical value of $A_{\\text{worst}}$ between the two regimes is when \n\\begin{equation}\\label{eq:scrit}\n s_{\\text{crit}}=\\frac{1}{2 (1-p)}\n\\end{equation}\nand is independent of $r$. Intuitively, if the worse answer is exceptionally bad, it will always be less popular. This is akin to the results in Salganik's MusicLab study \\cite{Salganik2006}, where particularly good and bad songs were ranked correctly. However, if the first answer is likely to be chosen regardless of quality, the worst answer will continue accumulating votes and remain in the top position. Given $A_{\\text{best}}=0$, we can use Eqs.~\\ref{eq:scrit} and~\\ref{eq:initselect} to numerically solve for the critical value of $A_{\\text{worst}}$. We plot the critical point as a function of $p$ and $A_{\\text{worst}}$ in a phase diagram in Fig.~\\ref{fig:SimPhaseSpace}. We see that there is always a large part of the phase space where popularity-based ranking will be unpredictable if the two answers are close in quality. Based on Eq.~\\ref{eq:scrit}, if $p\\ge 0.5$, there will be no case where popularity-based ranking is guaranteed to correctly rank answers. While this is an extreme case, it still points to substantial limitations of popularity-based ranking.\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.44\\textwidth]{figures\/Fig7.pdf}\n\\caption{\\label{fig:SimPhaseSpace} Phase diagram for popularity-based ranking based on Eqs.~\\ref{eq:scrit} and~\\ref{eq:initselect}. We show the boundary between regimes where best answers are guaranteed to be ranked first with enough votes (white area) and where ranking is unstable (gray area) as a function of position bias ($p$) and the value of the worse answer ($A_{\\text{worst}}$). In this plot, larger values of $A_{\\text{worst}}>0$ correspond to worse answers and results are symmetric for $A_{\\text{worst}}<0$. \n}\n\\end{figure}
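The vote-by-vote dynamics behind Fig.~\ref{fig:InitialGuessModelRankData} and the critical point in Fig.~\ref{fig:SimPhaseSpace} can be sketched in a few lines. The code below is a minimal illustration under the stated assumptions (it reuses \texttt{p\_choose\_first} from the sketch above and breaks ties in favor of the best answer); it is not the exact simulation code released with the paper:

\begin{verbatim}
import numpy as np
from scipy.special import erfinv

def simulate_popularity_ranking(a_best, a_worst, p=0.2, r=0.09,
                                n_votes=20000, head_start=0, seed=0):
    """Agents vote one at a time; the currently more popular answer is
    listed first.  Returns True if the best answer ends up ranked first.
    head_start: initial vote advantage given to the worst answer."""
    rng = np.random.default_rng(seed)
    votes = {"best": 0, "worst": head_start}
    for _ in range(n_votes):
        first, last = (("worst", "best") if votes["worst"] > votes["best"]
                       else ("best", "worst"))
        a1 = a_best if first == "best" else a_worst
        a2 = a_best if last == "best" else a_worst
        chosen = first if rng.random() < p_choose_first(a1, a2, p, r) else last
        votes[chosen] += 1
    return votes["best"] >= votes["worst"]

# Critical quality gap from eq:scrit and eq:initselect with A_best = 0:
# [1 + erf(A_worst / (2*sqrt(2)))] / 2 = 1 / (2*(1 - p))
p = 0.2
a_worst_crit = 2 * np.sqrt(2) * erfinv(p / (1 - p))   # roughly 0.64 for p = 0.2
\end{verbatim}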
\\subsection{Stabilizing Algorithmic Ranking}\n\nGiven the problems with popularity ranking, it is critical to create a more consistent ranking algorithm. Moreover, while we have so far explored questions with numeric answers, we want a method that works for all types of answers. Using the BIG model, we can answer RQ3: \\textit{How can we stabilize algorithmic ranking such that the best option is typically ranked first?}\n\nAssuming we can approximate $p$ and $r$, we can invert Eq.~\\ref{eq:model_full} and use votes to infer the only unknown variable, $s(A_{\\text{best}},A_{\\text{worst}})$. When $s(A_{\\text{best}},A_{\\text{worst}})>0.5$, $A_{\\text{best}}$ is the best answer, but if we incorrectly rank $A_{\\text{worst}}$ first, $s(A_{\\text{worst}},A_{\\text{best}})<0.5$. We can therefore rank first the answer for which the inferred $s$ is greater than 0.5. This is the backbone of the RAICR algorithm. The algorithm uses maximum likelihood estimation to solve for $s(A_{\\text{best}},A_{\\text{worst}})$, as shown in the Appendix. Therefore, if the data matches the BIG model with the correct $r$ and $p$ parameters, our method \\emph{optimally infers the correct ranking} by having minimal variance and no bias \\cite{Newey1994}. Moreover, \\emph{this method only depends on the votes an answer receives rather than the type of answer}, such as a numerical or textual answer. We compare quality ranking to popularity-based ranking and recency-based ranking (ranking by the last answer picked), as shown in Fig.~\\ref{fig:InitialGuessModelRankData}. The probability recency ranks the best answer first is calculated from the self-consistent equation: \n\\begin{dmath}\n \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency}) = \\text{Pr}(\\text{Choose}~ A_{\\text{best}}|A_{\\text{best}}~\\text{First}) \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency})\\\\ + \\text{Pr}(\\text{Choose}~ A_{\\text{best}}|A_{\\text{best}}~\\text{Last})(1 - \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency}) ).\n\\end{dmath}\nThis represents the limit in which answers have acquired many votes. The solution to this equation is:\n\\begin{dmath}\n \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency}) = \\frac{2 (1-p) (1-r) s(A_{\\text{best}},A_{\\text{worst}}) +r}{2-2 p (1-r)}.\n\\end{dmath}\nWe find that the RAICR algorithm performs at least as well as popularity-based ranking at placing the better answer first, and often better after 20-50 votes (Fig.~\\ref{fig:InitialGuessModelRankData}a and Appendix Fig.~\\ref{fig:SimPRobust}). Moreover, RAICR always outperforms the recency-based algorithm. The benefit of the RAICR algorithm only improves as we collect more votes. \nFor example, Fig.~\\ref{fig:InitialGuessModelRankData}b shows that after 500 votes the advantage of RAICR is larger, and Fig.~\\ref{fig:InitialGuessModelRankData}c shows that after 20K votes the method is nearly optimal. Popularity-based ranking performs badly for $A_{\\text{worst}}<0.6$, and recency performs worst if $A_{\\text{worst}}>0.6$.
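The quality-inference step at the core of RAICR can likewise be sketched compactly. The code below is an illustrative maximum-likelihood estimate of $s$ for a pair of answers, given the rank each answer held when each vote was cast; it is a simplified sketch under our assumptions, not the implementation released with the paper:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def infer_s(vote_history, p, r):
    """Maximum-likelihood estimate of s = s(A, B), the probability an
    initial guess is closer to answer A than to answer B.
    vote_history: list of (a_was_first, chose_a) booleans, one per vote,
    recorded at the time the vote was cast.  If the estimate exceeds 0.5,
    RAICR ranks A first."""
    def neg_ll(s):
        ll = 0.0
        for a_was_first, chose_a in vote_history:
            s_first = s if a_was_first else 1.0 - s
            q_first = r / 2 + (1 - r) * (p + (1 - p) * s_first)
            q_a = q_first if a_was_first else 1.0 - q_first
            ll += np.log(q_a if chose_a else 1.0 - q_a)
        return -ll
    return minimize_scalar(neg_ll, bounds=(1e-6, 1 - 1e-6),
                           method="bounded").x
\end{verbatim}

Because each vote's likelihood depends on which answer was listed first at the time, the estimate automatically discounts votes that are better explained by position bias or random clicking than by quality.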
While we show these results hold when we have 20 to 20,000 votes, many platforms are underprovisioned, with a large fraction of webpages receiving little attention and votes~\\cite{baeza2018bias,Gilbert2013}. It is the webpages that receive many votes, however, that may be the most important. Correctly ranking options in these popular pages is therefore especially critical to crowdsourcing websites. Moreover, a moderate number of votes is generally needed to make a reasonable estimate of quality, so very few methods will accurately rank unpopular pages.\n\nOne caveat of the RAICR algorithm is that we need an approximate value for two parameters: $r$ and $p$. What happens if either of these parameters is far off? For example, we could assume $r=0$ when $r=0.09$. Results in the Appendix, however, show that the findings are quantitatively very similar, and therefore our model is robust to assumptions about $r$. On the other hand, what if we incorrectly estimate both $r$ and $p$? We show in the Appendix that even in this worst-case scenario, our method performs slightly worse, but is comparable to, or substantially better than, popularity-based ranking. \n\n\\section{Future Work and Design Implications}\n\nIn the experiment, we decomposed the crowdsourcing task into its basic components to reduce the complexity and variation inherent in real-world tasks. While this helps to disentangle effects of option quality and position without confounding factors muddying the relationships, future work is needed to verify the ecological validity of our results \\cite{ObservationalVsExp}. It is encouraging that the probability to choose an answer (Fig.~\\ref{fig:ChooseVsPos}b) is quantitatively similar to empirical data gathered from Stack Exchange \\cite{Burghardt2018}, despite Mechanical Turk workers not being representative of the general population \\cite{Munger2019}. \n\nFor simplicity, we only explored two-option questions; future work should aim to understand multi-option decision-making given options of variable quality. A generalization of RAICR should also address more complicated biases, such as preference for round numbers \\cite{Fitz1986}, anchoring \\cite{shokouhi2015anchoring,Furnham2011}, and biases that appear in multi-option decisions, such as the decoy effect \\cite{Huber1982}. Finally, while RAICR is found to be robust to moderate changes in its parameters, this algorithm and its extensions may fail to rank options properly if its parameters are far off, or if the BIG model is wrong. In the experiment, the model is backed by data, but future work needs to address whether other tasks or questions follow this model. \n\nOur results offer implications for crowdsourcing platforms. First, designers must recognize the limits of crowdsourcing due to biases implicit in their platform. In our experiment, people often upvoted options at random (up to 20\\% of all votes), and chose an inferior option simply because it was shown first. This creates a ranking instability when options are of similar quality. Our controlled experiment and mathematical model point to ways we can counteract this instability. Designers should similarly create platform-tailored mathematical models and controlled experiments to rigorously test how crowds can better infer the best options. \n\nA key property of our RAICR algorithm is that it relies on accurate modeling of user decisions to counteract cognitive biases. In effect, each vote is weighted depending on the ranks of answers at the time the vote is cast. The idea is similar to one described by Abeliuk et al.~(\\citeyear{Abeliuk2017www}) that ranks items by their inferred quality in order to more robustly identify blockbuster items. Similar weighting schemes could be applied to future debiased algorithms to address the unique goals of each crowdsourcing platform. \n\nThere are also simple methods that platforms like Reddit, Facebook, and Stack Exchange can try that may greatly outperform the baselines we mention in our paper. For example, items that have not yet acquired many votes can be ranked randomly to reduce initial ranking biases. Alternatively, new posts and links could be ranked appropriately but their popularity could be hidden until they gather enough votes.
Our results suggest this could reduce social influence-based position bias up until the true option quality is more obvious. \n\n\\section{Conclusion}\nIn this paper, we introduce an experiment designed to inform how cognitive biases and option quality interact to affect crowdsourced ranking. Results from this experiment help us create a novel mathematical decision model, the BIG model, that greatly improves our understanding of how people find the best answer to a question as a function of answer quality, rank, and social influence. This model is then applied to the RAICR algorithm to better rank answers. The BIG model also helped us uncover instability in popularity-based ranking. The instability depends on the quality of options: when there are large differences between option qualities, popularity converges optimally and predictably. However, when the difference between the quality of options is small, the better option may not always become the most popular. These results can help us better understand the foundational empirical results of Salganik et al. (\\citeyear{Salganik2006}), who found that popularity-based ranking correctly ranked high and low quality songs, while the ranking of intermediate quality songs was highly unstable. Although our experimental setup is undeniably simpler than real crowdsourcing websites, our results suggest that accurate models of user behavior together with mathematically principled inference can improve the efficiency of crowdsourcing. \n\n\n\n\\begin{acks}\nOur work is supported by the US Army Research Office MURI Award No. W911NF-13-1-0340 and the DARPA Award No. W911NF-17-1-0077. Data as well as code to create experiments, create simulations, and analyze data is available at https:\/\/github.com\/KeithBurghardt\/QualityRankCodeAndData.\n\\end{acks}
We create a controlled experiment to study how these elements jointly affect individual decisions and collective outcomes. We use experimental data to construct and validate a mathematical model of human judgements, and then use it to explore algorithmic ranking and identify strategies to improve crowdsourcing performance. Our study addresses the following research questions:\n\\begin{description}\n \\item[RQ1] How does quality and presentation of options jointly impact individual decisions?\n \\item[RQ2] When is algorithmic ranking unstable and does not reliably identify the best option? \n \\item[RQ3] How can we stabilize algorithmic ranking such that the best option is typically ranked first?\n\\end{description}\n\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{figures\/Fig1.pdf}\n\\caption{\\label{fig:ExperimentSchematics} Schematic of the experiment conditions. Left: guess condition, where subjects write a guess for the correct value. Center: control condition, where subjects choose the best of two answers. Right: social influence condition, where a randomly chosen answer is called the ``most popular'' and is ranked first.\n}\n\\end{figure}\n\n\nOur experiment asks subjects to choose the best answer to questions with objectively correct answers, such as the number of dots in a picture. Figure~\\ref{fig:ExperimentSchematics} illustrates one such question, where we ask users to find the area ratio between the largest and smallest shapes. (Other questions used in the experiment are shown in Appendix Fig.~\\ref{fig:Questions}.) While these simple questions abstract away some of the complexity of the often-subjective decision making people do in crowdsourcing systems, they allow quality to be objectively measured and its effects on decisions better understood.\n\n\n\nThe experiment has three conditions shown in Fig.~\\ref{fig:ExperimentSchematics}. In the first condition, we let subjects write their answers to understand how their subjective guesses deviate from the correct answers. In the remaining conditions, we ask subjects to choose the best of two randomly generated answers that are randomly ordered. In the control condition, subjects are not told how they are ordered, while in the social influence condition, the first answer is labeled ``more popular''. These simple conditions allow us to disentangle the elements of crowdsourcing systems and begin quantifying how individual decisions affect collective outcomes.\n\nTo begin answering RQ1, we construct a mathematical model of the probability to choose an option as a function of its position and quality. The model requires only two parameters to measure \\textit{cognitive heuristics} (mental shortcuts people use to make quick and efficient judgements) and is in excellent agreement with experimental data. The first parameter reflects a user's preference to pick the first answer (known as ``position bias''~\\cite{lerman14as,Burghardt2018,MusicLabModel,PopularDynam}) and the other parameter measures the rate at which answers are guessed at random. Subjects otherwise pick an answer closest to their initial (unobserved) guess. We call this model the Biased Initial Guess (BIG) model. The BIG model demonstrates that the ``social influence'' experiment condition enhances position bias~\\cite{Burghardt2018}, \ntherefore cognitive heuristics that produce position bias and social influence can be quantified with a single parameter, a substantial simplification over previous work \\cite{MusicLabModel,PopularDynam}. 
Moreover, it helps explain why users often choose the worst answer when answer quality differences are small.\n\nThe BIG model not only improves our understanding of how answers are chosen, but it allows us to test different ranking policies in simulations to answer RQ2 and RQ3. Importantly, these simulations demonstrate that cognitive heuristics can make popularity-based ranking highly unstable. When the quality difference between the options is small, initially minor differences in popularity can create a cumulative advantage \\cite{Rijt2014}, meaning the better answer does not always become the most popular. However, when the quality difference passes a critical point, popularity-ranking is stable, and the better answer eventually becomes the most popular. These results may help explain an under-appreciated finding of Salganik et al. (\\citeyear{Salganik2006}) that the best and worst songs tended to be correctly ranked when songs were ordered by popularity, but intermediate-quality songs landed anywhere in between. \n\nFinally, to answer RQ3, we propose an algorithm that rectifies this instability by ordering answers based on their inferred quality, which we call RAICR: Rectifying Algorithmic Instabilities in Crowdsourced Ranking. This method is found to be stable and consistently order the better answer first, thus making best answers easier to find, even when they are only slightly better. RAICR ranks answers as well as, or better than, common baselines such as ordering by popularity or by recency (ranking by the last answer picked).\n\nOur work shows that individual decisions within crowdsourcing systems are strongly affected by cognitive heuristics, which collectively create instability and poor crowd wisdom. Designers of crowdsource systems need to account for these biases in order make good content easier to find. Algorithms such as RAICR, however, can correct for these biases, thereby improving the wisdom of crowds. \n\n\\section{Related Literature}\n\n\\subsection{Crowdsourcing}\nCrowdsourcing has a two-century long history demonstrating how a collective can outperform individual experts \\cite{JuryThm,Galton1908,surowiecki2005wisdom,Kaniovski,Simoiu2019}, thus creating the moniker ``wisdom of crowds''. Crowds have been shown to beat sport markets \\cite{Brown2019,Peeters2018}, corporate earnings forecasts \\cite{Zhi2019}, and improve visual searches \\cite{Juni2017}. One reason crowd wisdom works is due to the law of large numbers: assuming unbiased and independent guesses, the average guess should converge to the true value. \n\nIndividual decisions, including in online settings, are biased by cognitive heuristics, such as anchoring \\cite{shokouhi2015anchoring}, primacy \\cite{Primacy1}, prior beliefs \\cite{white2013beliefs} \nand position bias \\cite{Burghardt2018}. These biases are not necessarily canceled out with large samples \\cite{Prelec2017,Kao2018}. As a result, aggregating guesses of a crowd does not necessarily converge to the correct answer.\n\nGuesses are usually not independent, which can sometimes improve the wisdom of crowds. Social influence models, such as the DeGroot model \\cite{Degroot1974}, have been shown to push simulated agents to an optimal decision \\cite{Golub2010,Mossel2015,Bala1998,Acemoglu2011}. These results have been backed up experimentally \\cite{Becker2017,Becker2019,Ungar2012,Tetlock2017}, even when opinion polarization is included \\cite{Becker2019}. 
One reason social influence can be beneficial is that it encourages people who are way off the mark to improve their guess \\cite{Mavrodiev2013,Becker2017,Abeliuk2017www}. \n\n\nOften, however, social influence can reduce crowd wisdom. Corporate earnings predictions \\cite{Zhi2019}, jury decisions \\cite{Kaniovski,BurghardtJury}, and other guesses can degrade with influence \\cite{Lorenz2011,Lorenz2015,Simoiu2019}, and malevolent individuals can manipulate people to make particular collective decisions \\cite{SocialInfluenceBias,Asch1951}. Too much influence by a single individual can also reduce the wisdom of collective decisions \\cite{Becker2017,Acemoglu2011,Golub2010}, and deferring to friends can sometimes make unpopular (and potentially low-quality) ideas appear popular \\cite{Lerman2016}. \n\nRecent work has also demonstrated how other cognitive heuristics can affect crowd wisdom. After the landmark study by Salganik et al. (\\citeyear{Salganik2006}), some researchers found that social influence has no effect on decisions \\cite{Antenore2018}, or that position bias, i.e., the preference to choose options listed first, largely explain biases in crowdsourcing \\cite{lerman14as,MusicLabModel}. Social influence instead enhances the position bias \\cite{Burghardt2018,MusicLabModel}. Burghardt et al. (\\citeyear{Burghardt2018}) have begun to tease apart these effects, showing that while social influence enhances position bias, it has no marginal effect when we control for position. This is consistent with our approach, which models and rectifies both biases with a single parameter. \n\n\\subsection{Algorithmic Ranking}\n\nThe goal of a ranking algorithm is to make good content easier to find. \nMany papers have begun to address this goal \\cite{Page1999,Jarvelin2002,Bendersky2011}, which has recently been applied to crowdsourced ranking. Because of human biases, however, algorithms that na{\\\"i}vely use human feedback to suggest content will end up forming echo chambers \\cite{Hilbert2018,Bozdag2013,Hajian2016}, or only recommend already-popular items \\cite{Abdollahpouri2017,Abdollahpouri2019}. This can also give some content a cumulative advantage \\cite{Rijt2014}, even when it is of similar quality to content that remains unpopular. \n\nTo correct for algorithmic bias in this paper, we create a ranking method that follows the strategy of Watts, who says, ``...we can instead measure directly how they respond to a whole range of possibilities and react accordingly'' \\cite{Watts2012}. In the present context, this strategy implies we can create better algorithms for option ranking by observing, and addressing, how people respond to social influence and position biases. We show this strategy applied in RAICR improves upon simple algorithms used in the past, which include ordering results by popularity \\cite{Burghardt2018} or recency \\cite{lerman14as}. While some crowdsourced ranking strategies use a two-tier platform model, in which researchers rank options based on whether content is downloaded and rated \\cite{Salganik2006,Antenore2018,Abeliuk2017www}, RAICR is based on a common simpler model in which we only observe if content is chosen \\cite{Burghardt2018,PopularDynam}. This subtle difference implies many previous ranking schemes are not applicable. 
The present paper also compliments previous work that uses features, such as answer position, to predict Q\\&A website quality \\cite{Shah2010,MyopiaCrowd}.\n\n\\section{Experiment}\n\nThe experiment asked subjects, hired through Amazon Mechanical Turk between August 2018 and September 2019, to answer a series of questions shown in the Appendix. The order of questions was randomized for each subject, and questions were not time-limited. Questions include specifying the ratio of the areas of two shapes or the lengths of two lines, or the number of dots in an image. We designed the experiment around quotidian tasks that do not require specific expertise but are difficult for most people. Despite their difficulty, the questions have objectively correct answers. We quantify the quality of an answer by the difference from the mean of all guesses, which better matches a log-normal distribution, as discussed later. In the Appendix, we define quality of an answer by the difference from the correct value and find results are qualitatively very similar. \n\nSubjects were assigned to one of three conditions. In the \\textit{guess condition}, shown in the left panel of Fig.~\\ref{fig:ExperimentSchematics}, the subjects could freely type their guesses. In the \\textit{control condition}, shown in the center panel of Fig.~\\ref{fig:ExperimentSchematics}, subjects were told to choose the best among two options, and were not told how the two answers were ordered. Finally, in the \\textit{social influence condition}, shown in the right panel of Fig.~\\ref{fig:ExperimentSchematics}, they were told the first among two answers was the most popular, but the layout was otherwise identical to the previous condition. We only showed ten questions to each subject to reduce the effects of performance depletion in Q\\&A systems~\\cite{Ferrara17}. Further, to reduce bias due to other phenomena, such as the decoy effect \\cite{Huber1982}, we showed subjects only two choices. Subjects in the same condition but assigned on different days have statistically similar behavior.\n\nApproximately 1800 subjects are evenly split between the conditions (596, 586, and 587 for the guess, control, and social influence condition, respectively). For guess values, we remove extreme outliers (guesses smaller than 1 or greater than $10^6$). All answers\nare supposed to be greater than 1, while values greater than $10^6$ may affect mean values and appear to represent throwaway answers. The number of valid participants for each question is shown in Table\\ref{tab:guesssummary}.\n\n\n\\begin{table}[t]\n\\centering\n\\caption{Number of subjects for each guess question (after cleaning)}\n\\label{tab:guesssummary}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nQ1&Q2&Q3&Q4&Q5&Q6&Q7&Q8&Q9&Q10\\\\ \\hline\n595&594&593&595&592&593&589&591&584&590\\\\ \\hline\n\\end{tabular}\n\\begin{flushleft}\n\\end{flushleft}\n\\end{table}\n\n\nMechanical Turk workers were hired if they had an approval rate of over $95\\%$, completed more than 1000 Human Intelligence Tasks (HITs), and never participated in any of the experiment conditions before. Each worker was paid $\\$1.00$ for the guessing condition and $\\$0.50$ for the other two conditions. The assignment took $6$ minutes on average for the guess condition and $2$ minutes on average for the other conditions, equivalent to an hourly wage of $\\$10-\\$15$. 
The human experiment was approved by the appropriate IRB board.\n\n\\section{Results}\n\\subsection{A Mathematical Model of Decisions}\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures\/Fig2.pdf}\n\\caption{\\label{fig:CDFGuess} The CDF of guesses for each question. (a) Before normalization, and (b) after normalization.\n}\n\\end{figure}\n\nWe use data gathered from the experiment to answer \\textbf{RQ1}: \\textit{How does quality and presentation of options jointly impact individual decisions?}\n\n\\subsubsection{Open-Ended Experiment Condition} \nTo derive the model of how people make these decisions, we start by constructing the distribution of guesses for each question (the \\textit{guess} condition in the experiment). \nThe guesses, plotted in Appendix Fig.~\\ref{fig:GuessExperimentResults} and CDF shown in Fig.~\\ref{fig:CDFGuess}a, are highly variable (by as many as six orders of magnitude), while the correct answers vary by three orders of magnitude. The median guess may differ from the true answer, in agreement with previous work \\cite{Kao2018}, but values are typically the correct order of magnitude.\n\nWe normalize guesses by defining a new variable $A$:\n\\begin{equation*}\n A = \\frac{\\text{ln}(X)-\\langle \\text{ln}(X)\\rangle}{\\sigma_{\\text{ln}(X)}},\n\\end{equation*}\nwhere $X$ is a guess value, and $\\langle \\text{ln}(X)\\rangle$ is the mean of the logarithm of guesses. Figure~\\ref{fig:CDFGuess}b shows that this simple normalization scheme effectively collapses answer guesses to a single distribution. We show in the Appendix that guesses are not normally distributed, but instead are better approximated as log-normal, in agreement with previous work on a different set of questions \\cite{Kao2018}. The normalized guesses $A$ can be thought of as the $z$-scores in log-normal distributions. Alternative ways to center data, shown in the Appendix, produce similar results. We use these normalized guesses and distributions in the remaining two experiment conditions. Intuitively, if the mean of all guesses converges to the correct answer, $A=0$ can be thought of as the best answer.\n\n\\subsubsection{Two-Choice Experiment Conditions}\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.3\\textwidth]{figures\/Fig3.pdf}\n\\caption{\\label{fig:ChooseVsPos} Probability to choose an answer versus its position for the social influence and control condition.\n}\n\\end{figure}\n\nThe latter two experiment conditions require subjects to pick the best among two answers to the question. For both the control and social influence conditions, answers are ordered vertically, with one answer above the other. There is a significant position effect when answers are ordered this way, as shown in Fig.~\\ref{fig:ChooseVsPos}. In the control condition, there is a slightly greater probability (52\\%) to choose the first (top) answer over the last one (p-value $<0.001$). In the social influence condition, meanwhile, the probability to choose the first answer is substantially larger (59\\%) and statistically significantly different than the control condition (p-value $<0.001$). This is in agreement with previous work showing that social influence amplifies the position effect~\\cite{Burghardt2018}.\n\n\n\\subsubsection{The Biased Initial Guess Model}\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/Fig4.pdf}\n\\caption{\\label{fig:ModelSchematics} Schematic to calculate $s(A_1,A_2)$. 
Guesses follow an approximately log-normal distribution unique to each question, but the normalized guesses $A$ are approximately standard normally distributed. In an experiment, assume two candidate answers are provided, $A_1$ is listed first, and $A_2$ is listed last. The variable $s(A_1,A_2)$ in Eq.~\\ref{eq:initselect} is the probability an unknown initial guess is closer to $A_1$ than $A_2$.\n}\n\\end{figure}\n\nWe now have the necessary ingredients to model how decisions to choose an answer are affected by its quality, position, and social influence. \nWe present the Biased Initial Guess (BIG) decision model and show it is consistent with the data.\n\nWe first discuss the simplest case where a user has to choose the better of two answers $A_1$, listed first, and $A_2$, listed last, in the absence of cognitive biases. Figure~\\ref{fig:ModelSchematics} shows the distribution of normalized guesses, along with the normalized values of the choices $A_1$ and $A_2$, as well as the user's initial guess $A_I$ about the true answer, which we do not observe. All else being equal, the user will choose the first answer $A_1$ if it is closer to the initial guess, i.e., if $|A_1-A_I|<|A_2-A_I|$, and will otherwise choose $A_2$. The probability to choose $A_1$ is then:\n\\begin{equation}\ns(A_1,A_2) = \n\\begin{cases}\n\\text{Pr}(A_I > \\frac{A_1+A_2}{2}) & A_1 > A_2 \\\\\n1\/2 & A_1 = A_2\\\\\n\\text{Pr}(A_I < \\frac{A_1+A_2}{2}) & A_1 < A_2\n\\end{cases}\n\\end{equation}\nAssuming the initial guess $A_I$ follows the standard normal distribution quantified in the guess experiment condition,\n\\begin{equation}\n\\label{eq:initselect}\ns(A_1,A_2)= \n\\begin{cases}\n\\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2 & A_1 \\ge A_2 \\\\\n\\left[1 + \\text{erf}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\\right]\/2, & A_1 < A_2\n\\end{cases}\n\\end{equation}\nwhere $\\text{erf}(.)$ is the error function and $\\text{erfc}(.) = 1-\\text{erf}(.)$ is the complementary error function. When $s(A_1,A_2)>0.5$, $A_1$ is not only more likely to be closer to the initial guess ($A_I$) than $A_2$, but is also closer to zero than $A_2$, and therefore the objectively better answer. On the other hand, when $s(A_1,A_2)<0.5$, $A_1$ is farther from most guesses than $A_2$ and is the objectively worse answer.\n\nTo better model decision-making, we have to account for biases, due to cognitive heuristics and algorithmic ranking, to explain why people do not always choose the best option. As shown in Fig.~\\ref{fig:ChooseVsPos}, sometimes they choose the first answer even if it is not the best answer. We quantify this position bias by assuming that with probability $p$ participants choose the first answer regardless of its quality. This parameter should presumably be small in the control condition and large in the social influence condition. Subjects may also choose an answer regardless of its position or quality because there is no monetary incentive to choose good answers. We model this by allowing subjects to choose an answer at random with a probability $r$. 
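\n\nAs a concrete illustration of Eq.~\\ref{eq:initselect}, the following minimal Python sketch, assuming SciPy's error-function routines (the function and variable names here are illustrative choices, not part of the experiment), evaluates the selection probability $s(A_1,A_2)$; the bias parameters $p$ and $r$ enter the full model derived next.\n\\begin{verbatim}\nfrom scipy.special import erf, erfc\n\ndef s(A1, A2):\n    # Pr(a standard-normal initial guess A_I is closer to A1 than to A2)\n    m = 0.5 * (A1 + A2)          # midpoint between the two listed answers\n    if A1 >= A2:\n        return 0.5 * erfc(m * 2 ** -0.5)       # Pr(A_I > m)\n    return 0.5 * (1.0 + erf(m * 2 ** -0.5))    # Pr(A_I < m)\n\n# Best answer (A1 = 0) listed first against a worse answer (A2 = 1):\nprint(s(0.0, 1.0))   # about 0.69, so an unbiased subject favors the first answer\n\\end{verbatim}\n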
Taking these two heuristics into account, we arrive at the BIG Model: \n\n\\begin{equation}\n\\label{eq:model_full}\n\\begin{split}\n\\text{Pr}(\\text{Choose}~A_{1}) &= r\/2 + (1 - r) [p + (1 - p) s(A_1,A_2)]\\\\\n&= \\begin{cases}\nr\/2 + (1 - r) \\left[p + (1 - p)\\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2\\right] & A_1 \\ge A_2 \\\\[7pt]\nr\/2 + (1 - r) \\left[p + (1 - p)\\left[1 + \\text{erf}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\\right]\/2\\right], & A_1 < A_2\n\\end{cases}\n\\end{split}\n\\end{equation}\nThe probability of choosing $A_2$ is simply the complement of this probability. Because $A=0$ is expected to be the best answer, with some simple manipulation we can infer the probability that the best answer is chosen.\n\n\\begin{equation}\n\\label{eq:model_best}\n\\text{Pr}(\\text{Choose}~A_{1}|A_{1}~\\text{Best}) = \\begin{cases}\nr\/2 + (1 - r) \\left[p + (1 - p) \\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2\\right] & \\frac{A_1+A_2}{2}<0\\\\[7pt]\nr\/2 + (1 - r) \\left[p + (1 - p) \\left(1-\\text{erfc}\\left(\\frac{1}{\\sqrt 2}\\frac{A_1+A_2}{2}\\right)\/2\\right)\\right] & \\frac{A_1+A_2}{2}\\ge0\\\\\n\\end{cases}\n\\end{equation}\nA similar equation can model $\\text{Pr}(\\text{Choose}~A_{2}|A_{2}~\\text{Best})$.\n\nAgreement between the model and data is shown in Fig.~\\ref{fig:ModelFit}. In the \\textit{control condition} (Fig.~\\ref{fig:ModelFit}a), the best parameters are $r=0.28\\pm0.02$ and $p=0.05\\pm0.02$. We find that the log-likelihood of the model, $\\ell=-3002.49$, is not statistically different from the log-likelihood expected if the data came from the model itself: p-value $=0.40$. See Methods for how p-values and error bars are calculated. We also check if we need both parameters, $r$ and $p$, using the likelihood ratio test and Wilks' Theorem~\\cite{Wilks1938}. We compare the likelihood ratio of the two-parameter model to simpler models with $p$ or $r$ (or both) set to zero. The probability a simpler model could fit the data as well or better is $\\le 0.002$. We conclude that our model describes the control condition very well. \n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Fig5.pdf}\n\\caption{\\label{fig:ModelFit} Decision model agreement with experiment data. Plots show the probability the best answer is chosen for the model (lines) and experiment (symbols) under (a) the control condition and (b) the social influence condition. Purple diamonds are probabilities when the best answer is the first answer, $A_1$, and green squares are for when the best answer is $A_2$.\n}\n\\end{figure}\n\nThe agreement between data and model is similarly close in the \\textit{social influence condition} (Fig.~\\ref{fig:ModelFit}b). The position bias parameter $p=0.21\\pm 0.01$ is larger than in the control condition, in agreement with expectations. We also find $r=0.08\\pm 0.02$, thus social influence reduces the frequency of random guesses. In both experiment conditions, surprisingly, $\\approx 20\\%$ of users choose answers for reasons besides ``quality'' ($r\/2+(1-r)p = 18\\%$ and $23\\%$ for the control and social influence conditions, respectively). Similar to the control condition, we find that the model is consistent with the data. The log-likelihood of the empirical data ($\\ell=-3190.13$) is not statistically different from log-likelihood values expected if the data came from the model: p-value $=0.47$. 
The probability a simpler model ($r$ or $p$ set to zero) could fit the data as well or better is $<10^{-5}$. In conclusion, we find the BIG model is consistent with both experiment conditions and its parameters are interpretable and meaningful. In the Appendix, we show that all these results are consistent when we look at a subset of experiment questions or center the data differently. \n\n\\subsection{Algorithmic Ranking Instability}\nCrowdsourcing websites automatically highlight what they consider the best choices to help their users more quickly discover them. For example, Stack Exchange (like other Q\\&A platforms) usually ranks answers to questions by the number of votes they receive. Despite problems with popularity-based ranking identified in previous studies~\\cite{Salganik2006,lerman14as}, it is widely used for ranking content in crowdsourcing websites. In this section we identify an instability in popularity-based ranking: the first few votes a worse answer receives can lock it in the top position, where it acquires cumulative advantage \\cite{Rijt2014}. This allows us to answer \\textit{RQ2: When is algorithmic ranking unstable and does not reliably identify the best option?} \n\n\n\\begin{figure*}[tbh!]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figures\/Fig6.pdf}\n\\caption{Comparison of ranking policies via simulations. Plots show the probability the best answer is ranked first after (a) 50, (b) 500, and (c) 20,000 votes, when answers are ranked by quality (black line), recency (orange dashed line), and popularity. Also shown in (c) is the critical value of $A_{\\text{worst}}$ based on Eqs.~\\ref{eq:scrit} and~\\ref{eq:initselect}. In these simulations, subjects choose answers following the BIG Model with $p=0.2$ and $r=0.09$. ``Popularity 0'' (green dashed line), ``Popularity -10'' (cyan dashed line), and ``Popularity -200'' (red dashed line) mean that the worst answer starts with a 0, 10, or 200 vote advantage, respectively. Shaded areas are 95\\% confidence intervals. \n\\label{fig:InitialGuessModelRankData} \n}\n\\end{figure*}\n\nTo demonstrate the instability, we simulate a group of agents who choose answers according to the BIG model with $p=0.2$ and $r=0.09$. In the simulations, one answer is objectively best, e.g., exactly equal to the correct answer ($A_{\\text{best}}=0$), while the worst answer is larger than $A_{\\text{best}}$ (results are symmetric if $A_{\\text{worst}}<0$). Under popularity-based ranking, the answer with more votes is listed first. If\n\\begin{equation}\n\\text{Pr}(\\text{Choose}~ A_{\\text{best}}=A_2)>\\text{Pr}(\\text{Choose}~ A_{\\text{worst}}=A_1)\n\\end{equation} \nthen subjects are more likely to choose the better answer regardless of whether it is ranked first or second, therefore answer order is stable. When the above inequality is not true, however, then subjects are more likely to pick the first answer regardless of its quality, and the popularity ranking is unstable. The critical value of $A_{\\text{worst}}$ between the two regimes is when \n\\begin{equation}\\label{eq:scrit}\n s_{\\text{crit}}=\\frac{1}{2 (1-p)}\n\\end{equation}\nand is independent of $r$. Intuitively, if an answer is exceptionally bad, it will always be less popular. This is akin to the results of Salganik's MusicLab study \\cite{Salganik2006}, where particularly good and bad songs were ranked correctly. However, if the first answer is likely to be chosen regardless of quality, the worst answer will continue accumulating votes and remain in the top position. 
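\n\nTo make this lock-in mechanism concrete, the following minimal Python sketch simulates sequential voters who follow the BIG model while the two answers are re-ranked by popularity after every vote. The sketch is illustrative: the parameter values $p=0.2$ and $r=0.09$ follow the simulations above, but the quality gap, vote counts, tie-breaking rule, and all names are our own choices.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import erf, erfc\n\np, r = 0.2, 0.09            # position bias and random-guess rate\nA_best, A_worst = 0.0, 0.3  # quality gap small enough that s(A_best, A_worst) < s_crit\n\ndef s(A1, A2):\n    # Pr(a standard-normal initial guess is closer to A1 than to A2)\n    m = 0.5 * (A1 + A2)\n    if A1 >= A2:\n        return 0.5 * erfc(m * 2 ** -0.5)\n    return 0.5 * (1.0 + erf(m * 2 ** -0.5))\n\ndef prob_first(A1, A2):\n    # BIG model: probability that the answer currently listed first is chosen\n    return 0.5 * r + (1 - r) * (p + (1 - p) * s(A1, A2))\n\nrng = np.random.default_rng(0)\nn_trials, n_votes, locked_in = 1000, 500, 0\nfor _ in range(n_trials):\n    votes_best, votes_worst = 0, 0\n    for _ in range(n_votes):\n        best_first = votes_best >= votes_worst  # popularity ranking (ties favor the best)\n        if best_first:\n            chose_best = rng.random() < prob_first(A_best, A_worst)\n        else:\n            chose_best = rng.random() >= prob_first(A_worst, A_best)\n        votes_best += chose_best\n        votes_worst += not chose_best\n    locked_in += votes_worst > votes_best\n# fraction of runs in which the worse answer ends up most popular (n_trials = 1000)\nprint(locked_in * 1e-3)\n\\end{verbatim}\nWith this small quality gap a sizable fraction of runs leave the worse answer on top, whereas increasing the gap well beyond the critical value makes such runs rare.\n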
Given $A_{\\text{best}}=0$, we can use Eqs.~\\ref{eq:scrit} and~\\ref{eq:initselect} to numerically solve for the critical value of $A_{\\text{worst}}$. We plot the critical point as a function of $p$ and $A_{\\text{worst}}$ in phase diagram in Fig.~\\ref{fig:SimPhaseSpace}. We see that there is always a large part of the phase space where popularity-based ranking will be unpredictable if answer quality is close together. Based on Eq.~\\ref{eq:scrit}, if $p\\ge 0.5$, there will be no case where popularity-based ranking is guaranteed to correctly rank answers. While this is an extreme case, it still points to substantial limitations of popularity-based ranking.\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=0.44\\textwidth]{figures\/Fig7.pdf}\n\\caption{\\label{fig:SimPhaseSpace} Phase diagram for popularity-based ranking based on Eqs.~\\ref{eq:scrit} and~\\ref{eq:initselect}. We show the boundary between regimes where best answers are guaranteed to be ranked first with enough votes (white area) and where ranking is unstable (gray area) as a function of position bias ($p$) and the value of the worse answer ($A_{\\text{worst}}$). In this plot, larger values of $A_{\\text{worst}}>0$ correspond to worse answers and results are symmetric for $A_{\\text{worst}}<0$. \n}\n\\end{figure}\n\\subsection{Stabilizing Algorithmic Ranking}\n\nGiven the problems with popularity ranking, it is critical to create a more consistent ranking algorithm. Moreover, while we have so-far explored questions with numeric answers, we want a method that works for all types of answers. Using the BIG model, we can answer RQ3: \\textit{How can we stabilize algorithmic ranking such that the best option is typically ranked first?}\n\nAssume we can approximate $p$ and $r$, then we can invert Eq.~\\ref{eq:model_full} and use votes to infer the only unknown variable $s(A_{\\text{best}},A_{\\text{worst}})$. When $s(A_{\\text{best}},A_{\\text{worst}})>0.5$, $A_{\\text{best}}$ is the best answer, but if we incorrectly rank $A_{\\text{worst}}$ first, $s(A_{\\text{worst}},A_{\\text{best}})<0.5$. We can therefore rank the answer in which $s(A_{\\text{best}},A_{\\text{worst}})>0.5$ as the best answer. This is the backbone of the RAICR algorithm. The algorithm uses maximum likelihood estimation to solve for $s(A_{\\text{best}},A_{\\text{worst}})$, as shown in the Appendix. Therefore, if the data matches the BIG model with the correct $r$ and $p$ parameters, our method \\emph{optimally infers the correct raking} by having minimal variance and no bias \\cite{Newey1994}. Moreover, \\emph{this method only depends on the votes an answer receives rather than the type of answer}, such as a numerical or textual answer. We compare quality ranking to popularity-based ranking, and recency-based ranking (ranking by the last answer picked), as shown in Fig.~\\ref{fig:InitialGuessModelRankData}. The probability recency ranks the best answer first is calculated as the self-consistent equation: \n\\begin{dmath}\n \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency}) = \\text{Pr}(\\text{Choose}~ A_{\\text{best}}|A_{\\text{best}}~\\text{First}) \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency})\\\\ + \\text{Pr}(\\text{Choose}~ A_{\\text{best}}|A_{\\text{best}}~\\text{Last})(1 - \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency}) ).\n\\end{dmath}\nThis represents the limit answers acquire many votes. 
The solution to this equation is:\n\\begin{dmath}\n \\text{Pr}(\\text{Rank $A_{\\text{best}}$ First}|\\text{Recency}) = \\frac{2 (1-p) (1-r) s(A_{\\text{best}},A_{\\text{worst}}) +r}{2-2 p (1-r)}.\n\\end{dmath}\nWe find that the RAICR algorithm performs at least as well as popularity-based ranking at ranking the better answer first, and often better after 20--50 votes (Fig.~\\ref{fig:InitialGuessModelRankData}a and Appendix Fig.~\\ref{fig:SimPRobust}). Moreover, RAICR always outperforms the recency-based algorithm. The benefit of the RAICR algorithm only improves as we collect more votes. \nFor example, Fig.~\\ref{fig:InitialGuessModelRankData}b shows that after 500 votes the advantage of RAICR is larger, and Fig.~\\ref{fig:InitialGuessModelRankData}c shows that after 20K votes the method is nearly optimal. Popularity-based ranking performs badly for $A_{\\text{worst}}<0.6$, and recency performs worst if $A_{\\text{worst}}>0.6$. \n\nWhile we show these results hold when we have 20 to 20,000 votes, many platforms are underprovisioned, with a large fraction of webpages \nreceiving little attention and votes~\\cite{baeza2018bias,Gilbert2013}. It is the webpages that receive many votes, however, which may be the most important. Correctly ranking options in these popular pages is therefore especially critical to crowdsourcing websites. Moreover, a moderate number of votes is generally needed to make a reasonable estimate of quality, so very few methods will accurately rank unpopular pages.\n\nOne caveat of the RAICR algorithm is that we need an approximate value for two parameters: $r$ and $p$. What happens if either of these parameters is far off? For example, we could assume $r=0$ when $r=0.09$. Results in the Appendix, however, show that the findings are quantitatively very similar, and therefore our model is robust to assumptions about $r$. On the other hand, what if we incorrectly estimate both $r$ and $p$? We show in the Appendix that even in this worst-case scenario, our method performs slightly worse, but is comparable to or substantially better than popularity-based ranking. \n\n\\section{Future Work and Design Implications}\n\nIn the experiment, we decomposed the crowdsourcing task into its basic components to reduce the complexity and variation inherent in real world tasks. While this helps to disentangle effects of option quality and position without confounding factors muddying the relationships, future work is needed to verify the ecological validity of our results \\cite{ObservationalVsExp}. It is encouraging that the probability to choose an answer (Fig.~\\ref{fig:ChooseVsPos}b) is quantitatively similar to empirical data gathered from Stack Exchange \\cite{Burghardt2018}, despite Mechanical Turk workers not being representative of the general population \\cite{Munger2019}. \n\nFor simplicity, we only explored two-option questions; future work should aim to understand multi-option decision-making given options of variable quality. A generalization of RAICR should also address more complicated biases, such as preference for round numbers \\cite{Fitz1986}, anchoring \\cite{shokouhi2015anchoring,Furnham2011}, and biases that appear in multi-option decisions, such as the decoy effect \\cite{Huber1982}. Finally, while RAICR is found to be robust to moderate changes in its parameters, this algorithm and its extensions may fail to rank options properly if its parameters are far off, or if the BIG model is wrong. 
In the experiment, the model is backed by data, but future work needs to address whether other tasks or questions follow this model. \n\nOur results offer implications for crowdsourcing platforms. First, designers must recognize the limits of crowdsourcing due to biases implicit in their platform. In our experiment people often upvoted options at random (up to 20\\% of all votes), and chose an inferior option simply because it was shown first. This creates a ranking instability when options are of similar quality. Our controlled experiment and mathematical model point to ways we can counteract this instability. Designers should similarly create platform-tailored mathematical models and controlled experiments to rigorously test how crowds can better infer the best options. \n\nA key property of our RAICR algorithm is that it relies on accurate modeling of user decisions to counteract cognitive biases. In effect, each vote is weighted depending on the ranks of answers at the time the vote is cast. The idea is similar to one described by Abeliuk et al.~\\citeyear{Abeliuk2017www} that ranks items by their inferred quality in order to more robustly identify blockbuster items. Similar weighting schemes could be applied to future debiased algorithms to address the unique goals of each crowdsourcing platform. \n\nThere are also simple methods that platforms like Reddit, Facebook, and Stack Exchange can try that may greatly outperform the baselines we mention in our paper. For example, items that have not yet acquired many votes can be ranked randomly to reduce initial ranking biases. Alternatively, new posts and links could be ranked appropriately but their popularity could be hidden until they gather enough votes. Our results suggest this could reduce social influence-based position bias up until the true option quality is more obvious. \n\n\\section{Conclusion}\nIn this paper, we introduce an experiment designed to inform how cognitive biases and option quality interact to affect crowdsourced ranking. Results from this experiment help us create a novel mathematical decision model, the BIG model, that greatly improves our understanding of how people find the best answer to a question as a function of answer quality, rank, and social influence. This model is then applied to the RAICR algorithm to better rank answers. The BIG model also helped us uncover instability in popularity-based ranking. The instability depends on the quality of options: when there are large differences between option qualities, popularity converges optimally and predictably. However, when the difference between the quality of options is small, the better option may not always become the most popular. These results can help us better understand the foundational empirical results of Salganik et al. (\\citeyear{Salganik2006}), who found that popularity-based ranking correctly ranked high and low quality songs, while the ranking of intermediate quality songs was highly unstable. Although our experimental setup is undeniably simpler than real crowdsourcing websites, our results suggest that accurate models of user behavior together with mathematically principled inference can improve the efficiency of crowdsourcing. \n\n\n\n\\begin{acks}\nOur work is supported by the US Army Research Office MURI Award No. W911NF-13-1-0340 and the DARPA Award No. W911NF-17-1-0077. 
Data as well as code to create experiments, create simulations, and analyze data is available at https:\/\/github.com\/KeithBurghardt\/QualityRankCodeAndData.\n\\end{acks}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\\label{sec:intro}\n\nTrigonometric moment problems are ubiquitous in systems and control, such as spectral estimation, signal processing, system identification, image processing and remote sensing \\cite{bose2003multidimensional,ekstrom1984digital,stoica1997introduction}. In the (truncated) multidimensional trigonometric moment problem we seek a nonnegative measure $\\derivd \\mu$ on $\\mathbb{T}^d$ satisfying the moment equation \n\\begin{equation} \\label{eq:Cov}\nc_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}) \\quad\\text{for all ${\\boldsymbol k}\\in\\Lambda$},\n\\end{equation}\nwhere $\\mathbb{T}:=(-\\pi,\\pi]$, ${\\boldsymbol \\theta}:=(\\theta_1,\\ldots, \\theta_d)\\in\\mathbb{T}^d$, and $({\\boldsymbol k},{\\boldsymbol \\theta}):=\\sum_{j=1}^d k_j\\theta_j$ is the scalar product in $\\mathbb{R}^d$. Here $\\Lambda \\subset \\mathbb{Z}^d$ is a finite index set satisfying $0 \\in \\Lambda$ and $- \\Lambda = \\Lambda$. A necessary condition for \\eqref{eq:Cov} to have a solution is that the sequence \n\\begin{equation}\n\\label{eq:covariances}\nc:= [ c_{\\boldsymbol k} \\mid {\\boldsymbol k}:=(k_1,\\ldots, k_d) \\in \\Lambda ]\n\\end{equation}\nsatisfy the symmetry condition $c_{-{\\boldsymbol k}}=\\bar{c}_{\\boldsymbol k}$. The space of sequences \\eqref{eq:covariances} with this symmerty will be denoted $\\mathfrak{C}$ and will be represented by vectors $c$, formed by ordering the coefficient in some prescribed manner, e.g., lexiographical. Note that $\\mathfrak{C}$ is isomorphic to $\\mathbb{R}^{|\\Lambda|}$, where $|\\Lambda|$ is the cardinality of $\\Lambda$. \nHowever, as we shall see below, not all $c\\in\\mathfrak{C}$ are {\\em bona fide\\\/} moments for nonnegative measures $\\derivd \\mu$. \n\nIn many of the applications mentioned above there is a natural complexity constraint prescribed by design specifications. In the context of finite-dimensional systems these constraints often arise in the requirement that transfer functions be rational. This leads to the {\\em rational covariance extension problem}, which has been studied in various degrees of generality in \\cite{georgiou2005solution,georgiou2006relative,karlsson2015themultidimensional,ringh2015themultidimensional,ringh2015multidimensional} and can be posed as follows. \n\nDefine $e^{i{\\boldsymbol \\theta}}:=(e^{i\\theta_1},\\ldots, e^{i\\theta_d})$ and let\n\\begin{subequations}\\label{eq:ratinaldmu}\n\\begin{equation}\\label{eq:decomp}\n\\derivd \\mu({\\boldsymbol \\theta}) = \\Phi(e^{i{\\boldsymbol \\theta}})\\derivd m({\\boldsymbol \\theta}) + \\derivd \\nu({\\boldsymbol \\theta}),\n\\end{equation}\nbe the (unique) Lebesgue decomposition of $d\\mu$ (see, e.g., \\cite[p. 121]{rudin1987real}), \nwhere $$dm ({\\boldsymbol \\theta}):=(1\/2\\pi)^d\\prod_{j=1}^d d\\theta_j$$ is the (normalized)\nLebesgue measure and $\\derivd \\nu$ is a singular measure. 
Then given a $c\\in\\mathfrak{C}$, we are interested in parameterizing solutions to \\eqref{eq:Cov} such that the absolutely continuous part of the measure \\eqref{eq:decomp} takes the form\n\\begin{equation}\\label{eq:rat}\n\\Phi(e^{i{\\boldsymbol \\theta}})=\\frac{P(e^{i{\\boldsymbol \\theta}})}{Q(e^{i{\\boldsymbol \\theta}})},\\quad p, q \\in \\bar{\\mathfrak{P}}_+ \\backslash \\{0\\},\n\\end{equation}\n\\end{subequations}\nwhere $\\bar{\\mathfrak{P}}_+$ is the closure of the convex cone $\\mathfrak{P}_+$ of the coefficients $p\\in\\mathfrak{C}$ corresponding to trigonometric polynomials\n\\begin{equation}\\label{Ptrigpol}\nP(e^{i{\\boldsymbol \\theta}})=\n\\sum_{{\\boldsymbol k}\\in \\Lambda}p_{\\boldsymbol k} e^{-i({\\boldsymbol k},{\\boldsymbol \\theta})}, \\quad p_{-{\\boldsymbol k}}=\\bar{p}_{\\boldsymbol k}\n\\end{equation}\nthat are positive for all ${\\boldsymbol \\theta}\\in \\mathbb{T}^d$. \n\nThe reason for referring to this problem as a rational covariance extension problem is that the numbers \\eqref{eq:covariances} correspond to covariances $c_{\\boldsymbol k}:= \\ExpOp \\{y({\\bf t}+{\\boldsymbol k})\\overline{y({\\bf t})}\\}$ of a discrete-time, zero-mean, and homogeneous\\footnote{Homogeneity generalizes stationarity in the case $d=1$. } stochastic process $\\{y({\\bf t});\\,{\\bf t}\\in\\mathbb{Z}^d\\}$. The corresponding power spectrum, representing the energy distribution across frequencies, is defined as the nonnegative measure $d\\mu$ on $\\mathbb{T}^d$ whose Fourier coefficients are the covariances \\eqref{eq:covariances}.\nA scalar version of this problem ($d=1$) was first posed by Kalman \\cite{kalman1981realization} and has been extensively studied and solved in the literature \\cite{georgiou1983partial, byrnes1995acomplete, byrnes2002identifyability, enqvist2004aconvex, nurdin2006new, CarliGeorgiou2011, lindquist2013thecirculant, byrnes2000anewapproach, zorzi2014rational, musicus1985maximum}.\nIt has been generalized to more general scalar moment problems \\cite{byrnes2001ageneralized, georgio2003kullback, byrnes2006generalizedinterpolation} and to the multidimensional setting \\cite{georgiou2006relative, georgiou2005solution, ringh2015multidimensional, ringh2015themultidimensional, karlsson2015themultidimensional}. Also worth mentioning here is work by Lang and McClellan \\cite{lang1982multidimensional, lang1983spectral, mcclellan1982multi-dimensional, mcclellan1983duality, lang1982theextension, lang1981spectral} considering the multidimensional maximum entropy problem, which hence has certain overlap with the above literature.\n\nThe multidimensional rational covariance extension problem posed above has a solution if and only if $c\\in\\mathfrak{C}_+$, where $\\mathfrak{C}_+$ is the open convex cone \n\\begin{equation*}\n\\mathfrak{C}_+ := \\left\\{ c \\mid \\langle c, p \\rangle > 0, \\quad \\text{for all $p \\in \\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$} \\right\\},\n\\end{equation*}\nwhere $\\langle c, p \\rangle := \\sum_{{\\boldsymbol k} \\in \\Lambda} c_{\\boldsymbol k} \\bar{p}_{\\boldsymbol k}$ is the inner product in $\\mathfrak{C}$ (Theorem~\\ref{thm:rational}). However, the covariances $[ c_{\\boldsymbol k} \\mid {\\boldsymbol k}:=(k_1,\\ldots, k_d) \\in \\Lambda]$ are generally determined from statistical data. Therefore the condition $c\\in\\mathfrak{C}_+$ may not be satisfied, and testing this condition is difficult in the multidimensional case. 
\nTherefore we may want to find a positive measure $\\derivd \\mu$ and a corresponding $r\\in\\mathfrak{C}_+$, namely\n\\begin{equation}\n\\label{eq:r}\nr_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}), \\quad {\\boldsymbol k}\\in\\Lambda,\n\\end{equation}\nso that $r$ is close to $c$ in some norm, e.g., the Euclidean norm $\\|\\centerdot\\|_2$. This is an ill-posed inverse problem which in general has an infinite number of solutions $\\derivd \\mu$. As we already mentioned, we are interested in rational solutions \\eqref{eq:ratinaldmu}, and to obtain such solutions we use regularization as in \\cite{ringh2015multidimensional}. Hence, we seek a $d\\mu$ that minimizes\n\\begin{displaymath}\n\\lambda \\mathbb D (P\\derivd m, \\derivd \\mu)+\\frac12\\|r-c\\|_2^2\n\\end{displaymath}\nsubject to \\eqref{eq:r},\nwhere $\\lambda\\in\\mathbb{R}$ is a regularization parameter and\n\\begin{equation}\\label{eq:normalizedKullbackLeibler}\n \\mathbb D (P\\derivd m, \\derivd \\mu) := \\int_{\\mathbb{T}^d} \\left( P \\log\\frac{P}{\\Phi} \\derivd m +d\\mu -Pdm \\right)\n\\end{equation}\nis the normalized Kullback-Leibler divergence \\cite[ch. 4]{gzyl1995method} \\cite{csiszar1991why, zorzi2014rational}. As will be explained in Section~\\ref{sec:prel}, $ \\mathbb D (P\\derivd m, \\derivd \\mu)$ is always nonnegative and has the property $\\mathbb D (P\\derivd m,P\\derivd m)=0$. \n\nIn this paper we shall consider a more general problem in the spirit of \\cite{enqvist2007approximative}. To this end, for any Hermitian, positive definite matrix $M$, we define the weighted vector norm $\\|x\\|_M:=(x^*Mx)^{1\/2}$ and consider the problem\n\\begin{align}\\label{eq:primal_relax}\n\\min_{\\derivd \\mu \\geq 0, \\, r} \\qquad & \\mathbb D (P\\derivd m, \\derivd \\mu) + \\frac{1}{2}\\|r - c\\|_{W^{-1}}^2 \\\\\n\\st \\quad &r_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}), \\quad {\\boldsymbol k}\\in\\Lambda, \\nonumber\n\\end{align}\nwhich is the same as the problem above with $W=\\lambda I$. We shall refer to $W$ as the {\\em weight matrix}.\n\nUsing the same principle as in \\cite{schott1984maximum}, we shall also consider the problem of minimizing $\\mathbb D (P\\derivd m, \\derivd \\mu)$ subject to \\eqref{eq:r} and the hard constraint \n\\begin{equation}\n\\label{eq:hardconstraint}\n\\|r - c \\|^2 \\le \\lambda \\,.\n\\end{equation}\nSince \\eqref{eq:r} are {\\em bona fide\\\/} moments and hence $r\\in\\mathfrak{C}_+$, while $c\\not\\in\\mathfrak{C}_+$ in general, this problem will not have a solution if the distance from $c$ to $\\mathfrak{C}_+$ is greater than $\\sqrt{\\lambda}$. Hence the choice of $\\lambda$ must be made with some care. Analogously with the {\\em rational covariance extension with soft constraints\\\/} in \\eqref{eq:primal_relax}, we shall consider the more general problem \n\\begin{align}\\label{eq:primal_relax_new}\n\\min_{\\derivd \\mu \\geq 0, \\, r} \\qquad &\\mathbb D (P\\derivd m, \\derivd \\mu) \\\\\n\\st \\quad &r_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}), \\quad {\\boldsymbol k}\\in\\Lambda, \\nonumber \\\\\n \\quad &\\|r - c \\|_{W^{-1}}^2 \\le 1, \\nonumber\n\\end{align}\nto which we shall refer as the {\\em rational covariance extension problem with hard constraints}. 
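\n\nTo fix ideas, the following minimal numerical sketch approximates the soft-constrained problem \\eqref{eq:primal_relax} for $d=1$ by restricting $\\derivd \\mu$ to nonnegative masses on a uniform grid of $\\mathbb{T}$, assuming NumPy and SciPy. The discretization, the constant prior, the data vector, and the weight matrix are all illustrative choices of ours, and a grid approximation of this kind can only capture (a discretization of) the absolutely continuous part of the optimal measure; the exponential reparametrization below simply keeps the masses positive.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\nN = 200\ntheta = np.linspace(-np.pi, np.pi, N, endpoint=False)\ndm = 1.0 * N ** -1                    # normalized Lebesgue measure of each grid cell\nK = np.arange(0, 3)                   # index set Lambda = {0, 1, 2} (plus symmetry)\nA = np.cos(np.outer(K, theta))        # real part suffices for a real, even measure\n\nP = np.ones(N)                        # constant prior P\nc = np.array([1.0, 0.6, 0.1])         # (possibly noisy) covariance data, illustrative\nWinv = np.linalg.inv(0.1 * np.eye(len(K)))   # weight matrix W = 0.1 I\nw = P * dm                            # cell masses of the prior measure P dm\n\ndef objective(x):\n    mu = np.exp(x)                    # positive cell masses mu = exp(x)\n    r = A @ mu                        # r_k = sum_j cos(k theta_j) mu_j\n    kl = np.sum(w * (np.log(w) - x) + mu - w)    # discretized D(P dm, d mu)\n    return kl + 0.5 * (r - c) @ Winv @ (r - c)\n\ndef gradient(x):\n    mu = np.exp(x)\n    r = A @ mu\n    return -w + mu + mu * (A.T @ (Winv @ (r - c)))\n\nres = minimize(objective, np.log(w), jac=gradient, method='BFGS')\nmu_hat = np.exp(res.x)\nprint(A @ mu_hat)                     # approximately matched moments\n\\end{verbatim}\n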
Again, the hard-constrained problem \\eqref{eq:primal_relax_new} reduces to the simpler problem above by setting $W=\\lambda I$.\n\nAs we shall see, the soft-constrained problem \\eqref{eq:primal_relax} always has a solution, while the hard-constrained problem \\eqref{eq:primal_relax_new} may fail to have a solution for some weight matrices $W$. However, in Section~\\ref{sec:homeomorphism} we show that the two problems are in fact equivalent in the sense that whenever \\eqref{eq:primal_relax_new} has a solution there is a corresponding $W$ in \\eqref{eq:primal_relax} that gives the same solution, and any solution of \\eqref{eq:primal_relax} can also be obtained from \\eqref{eq:primal_relax_new} by a suitable choice of $W$. The reason for considering both formulations is that one formulation might be more suitable than the other for the particular application at hand. For example, an absolute error estimate for the covariances is more naturally incorporated in the formulation with hard constraints. A possible choice of the weight matrix $W$ in either formulation would be the covariance matrix of the estimated moments, as suggested in \\cite{enqvist2007approximative}. This corresponds to the Mahalanobis distance and could be a natural way to incorporate uncertainty of the covariance estimates in the spectral estimation procedure.\n\nPrevious work in this direction can be found in \\cite{shankwitz1990onthe, enqvist2007approximative, byrnes2003theuncertain, schott1984maximum, karlsson2016confidence}, \nwhere \\cite{shankwitz1990onthe,karlsson2016confidence,byrnes2003theuncertain} consider the problem of selecting an appropriate covariance sequence to match in a given confidence region. The two approximation problems considered here are similar to the ones considered in \\cite{schott1984maximum} and \\cite{enqvist2007approximative}. (For more details, also see \\cite[Ch. B]{avventi2011spectral}.)\n\n\nWe begin in Section~\\ref{sec:prel} by reviewing the regular multidimensional rational covariance extension problem for exact covariance matching in a broader perspective. In Section~\\ref{sec:approxsoft} we present our main results on approximate rational covariance extension with soft constraints, and in Section~\\ref{sec:wellposed} we show that the dual solution is well-posed. In Section~\\ref{sec:no_singular_part} we investigate conditions under which there are solutions without a singular part. The approximate rational covariance extension with hard constraints is considered in \nSection~\\ref{sec:approxhard}, and in Section~\\ref{sec:homeomorphism} we establish a homeomorphism between the weight matrices in the two problems, showing that the problems are actually equivalent when solutions exist. We also show that under certain conditions the homeomorphism can be extended to hold between all sets of parameters, allowing us to carry over results from the soft-constrained setting to the hard-constrained one. In Section~\\ref{sec:covest} we discuss the properties of various covariance estimators, in Section~\\ref{sec:example} we give a 2D example from spectral estimation, and in Section~\\ref{sec:ex_texture} we apply our theory to system identification and texture reconstruction. 
\nSome of the results of this paper were announced in \\cite{ringh2016multidimensional} without proofs.\n\n\n\n\n\\section{Rational covariance extension with exact matching}\\label{sec:prel}\n\nThe trigonometric moment problem to determine a positive measure $\\derivd \\mu$ satisfying \\eqref{eq:Cov} is an inverse problem that has a solution if and only if $c\\in\\bar{\\mathfrak{C}}_+$ \\cite[Theorem 2.3]{karlsson2015themultidimensional}, where $\\bar{\\mathfrak{C}}_+$ is the closure of $\\mathfrak{C}_+$, and then in general it has infinitely many solutions. However, the nature of possible rational solutions \\eqref{eq:ratinaldmu} will depend on the location of $c$ in $\\bar{\\mathfrak{C}}_+$. To clarify this point we need the following lemma.\n\n\\begin{lemma}\\label{lem:PsubsetC}\n$\\bar{\\mathfrak{P}}_+ \\setminus \\{0\\} \\subset \\mathfrak{C}_+$.\n\\end{lemma}\n\n\\begin{proof}\nObviously the inner product $\\langle q, p \\rangle := \\sum_{{\\boldsymbol k} \\in \\Lambda} q_{\\boldsymbol k} \\bar{p}_{\\boldsymbol k}$ can be expressed in the integral form\n\\begin{equation}\n\\label{innerprod}\n\\langle q, p \\rangle = \\int_{\\mathbb{T}^d} Q(e^{i{\\boldsymbol \\theta}})\\overline{P(e^{i{\\boldsymbol \\theta}})}\\derivd m({\\boldsymbol \\theta}),\n\\end{equation}\nand therefore $\\langle q,p\\rangle >0$ for all $q,p\\in\\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$, as $P$ and $Q$ can have zeros only on sets of measure zero. Hence the statement of the lemma follows.\n\\end{proof}\n\nTherefore, under certain particular conditions, the multidimensional rational covariance extension problem has a very simple solution with a polynomial spectral density, namely\n\\begin{equation}\n\\label{eq:dmu=Pdm}\n\\derivd \\mu= P(e^{i{\\boldsymbol \\theta}})dm({\\boldsymbol \\theta}), \\quad p \\in \\bar{\\mathfrak{P}}_+ \\backslash \\{0\\}. \n\\end{equation}\n\n\\begin{proposition}\\label{prop:polsolution}\nThe multidimensional rational covariance extension problem has a unique polynomial solution \\eqref{eq:dmu=Pdm} if and only if $c\\in\\bar{\\mathfrak{P}}_+ \\backslash \\{0\\}$, namely $P=C$, where \n\\begin{displaymath}\nC(e^{i{\\boldsymbol \\theta}}):=\\sum_{{\\boldsymbol k}\\in \\Lambda}c_{\\boldsymbol k} e^{-i({\\boldsymbol k},{\\boldsymbol \\theta})}.\n\\end{displaymath}\n\\end{proposition}\n\nThe proof of Proposition~\\ref{prop:polsolution} is immediate by noting that any such $C$ is a {\\em bona fide\\\/} spectral density and noting that $c_{\\boldsymbol k}=\\int_{\\mathbb{T}^d}e^{i({\\boldsymbol k},{\\boldsymbol \\theta})}C(e^{i{\\boldsymbol \\theta}})dm({\\boldsymbol \\theta})$. \n\nAs seen from the following result presented in \\cite[Section 6]{karlsson2015themultidimensional}, the other extreme occurs for $c\\in\\partial\\mathfrak{C}_+:=\\bar{\\mathfrak{C}}_+\\setminus\\mathfrak{C}_+$, when only singular solutions exist.\n\n\\begin{proposition}\\label{prop:singular}\nFor any $c\\in\\partial\\mathfrak{C}_+$ there is a solution $\\derivd \\mu$ of \\eqref{eq:Cov} with support in at most $|\\Lambda|-1$ points. There is no solution with a absolutely continuous part $\\Phi\\derivd m$. \n\\end{proposition}\n\nHowever, for any $c\\in\\mathfrak{C}_+$, there is a rational solution \\eqref{eq:ratinaldmu} parametrized by $p\\in\\bar{\\mathfrak{P}}_+\\backslash\\{0\\}$, as demonstrated in \\cite{ringh2015multidimensional} by considering a primal-dual pair of convex optimization problems. In that paper the primal problem is a weighted maximum entropy problem, but as also noted in \\cite[Sec. 
3.2]{ringh2015multidimensional}, it is equivalent to\n\\begin{align}\\label{eq:primal}\n\\min_{\\derivd \\mu \\geq 0} & \\quad \\int_{\\mathbb{T}^d} P \\log \\frac{P}{\\Phi} \\derivd m({\\boldsymbol \\theta}) \\\\\n\\st & \\quad c_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}), \\quad {\\boldsymbol k} \\in \\Lambda, \\nonumber\n\\end{align}\nwhere $\\Phi dm$ is the absolutely continuous part of $d\\mu$.\nThis amounts to minimizing the (regular) Kullback-Leibler divergence between $P\\derivd m$ and $\\derivd \\mu$, subject to $\\derivd \\mu$ matching the given data \\cite{georgio2003kullback,ringh2015multidimensional}. In the present case of exact covariance matching, this problem is equivalent to minimizing \\eqref{eq:normalizedKullbackLeibler} subject to \\eqref{eq:Cov}, since $P$ is fixed and the total mass of $\\derivd \\mu$ is determined by the $0$:th moment $c_{\\boldsymbol 0} = \\int_{\\mathbb{T}^d} \\derivd \\mu$. \nHence both $\\int_{\\mathbb{T}^d} \\derivd \\mu$ and $\\int_{\\mathbb{T}^d} P\\derivd m$ are constants in this case. Hence problem \\eqref{eq:primal_relax} is the natural extension of \\eqref{eq:primal} for the case where the covariance sequence is not known exactly. \n\nThe primal problem \\eqref{eq:primal} is a problem in infinite dimensions, but with a finite number of constraints. The dual to this problem will then have a finite number of variables but an infinite number of constraints and is given by\n\\begin{align}\\label{eq:dual}\n\\min_{q\\in\\bar{\\mathfrak{P}}_+} & \\quad \\langle c, q \\rangle - \\int_{\\mathbb{T}^d} P \\log Q \\,\\derivd m({\\boldsymbol \\theta}). \n\\end{align}\nIn particular, Theorem 2.1 in \\cite{ringh2015multidimensional}, based on corresponding analysis in \\cite{karlsson2015themultidimensional}, reads as follows.\n\n\\begin{theorem}\\label{thm:rational}\nProblem \\eqref{eq:primal} has a solution if and only if $c \\in \\mathfrak{C}_+$. For every $c \\in \\mathfrak{C}_+$ and $p \\in \\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$ the functional in \\eqref{eq:dual} is strictly convex and has a unique minimizer $\\hat{q} \\in \\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$. Moreover, there exists a unique $\\hat{c} \\in \\partial \\mathfrak{C}_+$ and a (not necessarily unique) nonnegative singular measure $d\\hat\\nu$ with support \n\\begin{equation}\n\\label{supp(dnu)}\n\\supp(d\\hat\\nu) \\subseteq \\{ {\\boldsymbol \\theta} \\in \\mathbb{T}^d \\mid \\hat{Q}(e^{i{\\boldsymbol \\theta}}) = 0 \\}\n\\end{equation}\nsuch that\n\\begin{subequations}\n\\begin{align}\n& c_{{\\boldsymbol k}} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\left( \\frac{P}{\\hat Q} \\derivd m + d\\hat\\nu \\right),\\quad {\\boldsymbol k} \\in \\Lambda, \\\\\n& \\hat{c}_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} d\\hat\\nu, \\quad{\\boldsymbol k} \\in \\Lambda.\n\\end{align}\n\\end{subequations}\nFor any such $d\\hat\\nu$, the measure \n\\begin{equation}\n\\label{ }\nd\\hat\\mu({\\boldsymbol \\theta}) = \\frac{P(e^{i{\\boldsymbol \\theta}})}{\\hat{Q}(e^{i{\\boldsymbol \\theta}})}\\derivd m({\\boldsymbol \\theta}) + d\\hat\\nu({\\boldsymbol \\theta})\n\\end{equation}\nis an optimal solution to the problem \\eqref{eq:primal}. 
Moreover, $d\\hat\\nu$ can be chosen with support in at most $|\\Lambda|-1$ points, where $|\\Lambda|$ is the cardinality of the index set $\\Lambda$.\n\\end{theorem}\n\nIf $c\\in \\partial \\mathfrak{C}_+$, only a singular measure with finite support would match the moment condition (Proposition~\\ref{prop:singular}). In this case, the problem \\eqref{eq:primal} makes no sense, since any feasible solution has infinite objective value. \n\nIn \\cite{karlsson2015themultidimensional} we also derived the KKT conditions\n\\begin{subequations}\\label{eq:opt_cond_exact}\n\\begin{align}\n& \\hat{q} \\in \\bar{\\mathfrak{P}}_+, \\quad \\hat c \\in \\partial \\mathfrak{C}_+, \\quad \\langle \\hat{c}, \\hat{q} \\rangle = 0 \\label{eq:opt_cond_exact_slack} \\\\\n& c_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\frac{P}{\\hat Q}\\derivd m + \\hat{c}_{\\boldsymbol k},\\quad {\\boldsymbol k} \\in \\Lambda ,\\label{eq:opt_cond_exact_KKT}\n\\end{align}\n\\end{subequations}\nwhich are necessary and sufficient for optimality of the primal and dual problems. \n\nSince \\eqref{eq:primal} is an inverse problem, we are interested in how the solution depends on the parameters of the problem. From Propositions 7.3 and 7.4 in \\cite{karlsson2015themultidimensional} we have the following result.\n\n\\begin{proposition}\\label{prop:cp2qhat}\nLet $c$, $p$ and $\\hat{q}$ be as in Theorem~\\ref{thm:rational}. Then the map $(c,p)\\mapsto \\hat{q}$ is continuous.\n\\end{proposition}\n\nTo get a full description of well-posedness of the solution we would like to extend this continuity result to the map $(c,p)\\mapsto (\\hat{q},\\hat{c})$. However, such a generalization is only possible under certain conditions. The following result is a consequence of Proposition~\\ref{prop:cp2qhat} and \\cite[Corollary 2.3]{ringh2015multidimensional}.\n\n\\begin{proposition}\\label{prop:c2qhatchat2}\nLet $c$, $p$, $\\hat{q}$ and $\\hat{c}$ be as in Theorem~\\ref{thm:rational}. Then, for $d\\leq 2$ and all $(c,p)\\in\\mathfrak{C}_+\\times\\mathfrak{P}_+$, the mapping $(c,p)\\to(\\hat q, \\hat c)$ is continuous. \n\\end{proposition}\n\nCorollary 2.3 in \\cite{ringh2015multidimensional} actually ensures that $\\hat{c}=0$ for $d\\leq 2$ and $p\\in\\mathfrak{P}_+$. However, in Section~\\ref{ssec:qhat2chat} we present a generalization of Proposition~\\ref{prop:c2qhatchat2} to cases with $d\\geq3$, where then $\\hat{c}$ may be nonzero. (The proof of this generalization can be found in \\cite{ringh2017further}.)\nHere we shall also consider an example where continuity fails when $p$ belongs to the boundary $\\partial\\mathfrak{P}_+:= \\bar{\\mathfrak{P}}_+ \\setminus \\mathfrak{P}_+$, i.e., the corresponding nonnegative trigonometric polynomial $P(e^{i{\\boldsymbol \\theta}})$ is zero in at least one point. \n\n\\section{Approximate covariance extension with soft constraints}\\label{sec:approxsoft}\n\nTo handle the case with noisy covariance data, when $c$ may not even belong to $\\mathfrak{C}_+$, we relax the exact covariance matching constraint \\eqref{eq:Cov} in the primal problem \\eqref{eq:primal} to obtain the problem \\eqref{eq:primal_relax}. In this case it is natural to reformulate the objective function in \\eqref{eq:primal} to include a term that also accounts for changes in the total mass of $\\derivd \\mu$. 
Consequently, we have exchanged the objective function in \\eqref{eq:primal} by the normalized Kullback-Leibler divergence \\eqref{eq:normalizedKullbackLeibler} plus a term that ensures approximate data matching.\n\nUsing the normalized Kullback-Leibler divergence, as proposed in \\cite[ch. 4]{gzyl1995method} \\cite{csiszar1991why, zorzi2014rational}, is an advantage in the approximate covariance matching problem since this divergence is always nonnegative, precisely as is the case for probability densities. To see this, observe that, in view of the basic inequality $x - 1 \\geq \\log x$,\n\\begin{align*}\n \\mathbb D(P\\derivd m, \\derivd \\mu) &= \\int_{\\mathbb{T}^d} \\left( P \\left( - \\log\\frac{\\Phi}{P} \\right) \\derivd m +d\\mu -Pdm \\right) \\\\\n & \\geq \\int_{\\mathbb{T}^d} \\left( P (1 - \\frac{\\Phi}{P})\\derivd m + \\Phi\\derivd m -Pdm \\right) + \\int_{\\mathbb{T}^d} \\derivd \\nu \\geq 0,\n \\end{align*}\n since $\\derivd \\nu$ is a nonnegative measure. \nMoreover, $ \\mathbb D(P\\derivd m, P\\derivd m)=0$, as can be seen by taking $\\derivd \\mu=Pdm$ in \\eqref{eq:normalizedKullbackLeibler}.\n\nThe problem under consideration is to find a nonnegative measure $\\derivd \\mu=\\Phi dm +d\\nu$ minimizing $$\\mathbb D(P\\derivd m, \\derivd \\mu)+\\frac{1}{2}\\|r - c\\|_{W^{-1}}^2$$ subject to \\eqref{eq:r}. To derive the dual of this problem we consider the corresponding maximization problem and form the Lagrangian \n\\begin{align*}\n\\mathcal{L}(\\Phi,d\\nu, r, q) = & \\; -\\mathbb D(P\\derivd m, \\derivd \\mu) - \\frac{1}{2}\\|r - c\\|_{W^{-1}}^2 + \\sum_{{\\boldsymbol k} \\in \\Lambda} q_{\\boldsymbol k}^* \\Big( r_{\\boldsymbol k} - \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}) \\Big)\\\\\n=&\\;-\\mathbb D(P\\derivd m, \\derivd \\mu) - \\frac{1}{2}\\|r - c\\|_{W^{-1}}^2 +\\langle r,q\\rangle - \\int_{\\mathbb{T}^d}Q\\derivd \\mu \\, ,\n\\end{align*}\nwhere $q := [ q_{\\boldsymbol k} \\mid {\\boldsymbol k}:=(k_1,\\ldots, k_d) \\in \\Lambda]$ are Lagrange multitipliers and $Q$ is the corresponding trigonometric polynomial \\eqref{Ptrigpol}. However, \n\\begin{equation}\n\\label{D(Pdm,dmu)}\n\\mathbb D(P\\derivd m, \\derivd \\mu)=\\int_{\\mathbb{T}^d}P(\\log P -1)dm - \\int_{\\mathbb{T}^d}P\\log\\Phi dm + r_{\\boldsymbol 0}\\, ,\n\\end{equation}\nand therefore \n\\begin{align}\\label{eq:Lagrangian}\n\\mathcal{L}(\\Phi,d\\nu, r, q) = &\\; \\int_{\\mathbb{T}^d}P\\log\\Phi dm - \\int_{\\mathbb{T}^d}Q\\Phi dm - \\int_{\\mathbb{T}^d} Qd\\nu -\\int_{\\mathbb{T}^d}P(\\log P -1)dm\\notag \\\\\n&\\; +\\langle r,q-e\\rangle - \\frac{1}{2}\\|r - c\\|_{W^{-1}}^2 \\, , \n\\end{align}\nwhere $e := [e_{\\boldsymbol k}]_{{\\boldsymbol k} \\in \\Lambda}$, $e_{\\boldsymbol 0} = 1$ and $e_{{\\boldsymbol k}} = 0$ for ${\\boldsymbol k} \\in \\Lambda \\setminus \\{ \\boldsymbol 0 \\}$, and hence $r_{\\boldsymbol 0}=\\langle r,e\\rangle$.\n\nIn deriving the dual functional $$\\varphi(q)=\\sup_{\\Phi\\geq 0,d\\nu\\geq 0,r}\\mathcal{L}(\\Phi,d\\nu, r, q),$$ to be minimized, we only need to consider $q\\in\\bar{\\mathfrak{P}}_+\\setminus\\{0\\}$, as $\\varphi$ will take infinite values for $q\\not\\in\\bar{\\mathfrak{P}}_+$. In fact, following along the lines of \\cite[p. 1957]{ringh2015multidimensional}, we note that, if $Q(e^{i{\\boldsymbol \\theta}_0})<0$, \\eqref{eq:Lagrangian} will tend to infinity when $\\nu({\\boldsymbol \\theta}_0)\\to\\infty$. 
Moreover, since $p\\in\\bar{\\mathfrak{P}}_+\\setminus\\{0\\}$, there is a neighborhood where $P(e^{i{\\boldsymbol \\theta}})>0$, letting $\\Phi$ tend to infinity in this neighborhood, \\eqref{eq:Lagrangian} will tend to infinity if $Q\\equiv 0$. We also note that the nonnegative function $\\Phi$ can only be zero on a set of measure zero; otherwise the first term in \\eqref{eq:Lagrangian} will be $-\\infty$.\n\nThe directional derivative\\footnote{Formally, the Gateaux differential \\cite{luenberger1969optimization}.} of the Lagrangian \\eqref{eq:Lagrangian} in any feasible direction $\\delta\\Phi$, i.e., any direction $\\delta\\Phi$ such that $\\Phi +\\varepsilon \\delta\\Phi \\geq 0$ for sufficiencly small $\\varepsilon >0$, is easily seen to be\n\\begin{displaymath}\n\\delta\\mathcal{L}(\\Phi,d\\nu, r, q;\\delta\\Phi)= \\int_{\\mathbb{T}^d}\\left(\\frac{P}{\\Phi}-Q\\right)\\delta\\Phi\\derivd m.\n\\end{displaymath} \nIn particular, the direction $\\delta\\Phi:=\\Phi\\,\\text{sign}(P-Q\\Phi)$ is feasible since $(1\\pm\\varepsilon)\\Phi\\geq 0$ for $0<\\varepsilon < 1$. Therefore, any maximizing $\\Phi$ must satisfy $\\int_{\\mathbb{T}^d}|P-Q\\Phi|\\derivd m\\leq 0$ and hence \n\\eqref{eq:rat}. Moreover, a maximizing choice of $d\\nu$ will require that \n\\begin{equation}\n\\label{eq:intQdnu=0}\n\\int_{\\mathbb{T}^d} Qd\\,\\nu=0, \n\\end{equation}\nas this nonnegative term can be made zero by the simple choice $d\\nu\\equiv 0$, and consequently \\eqref{supp(dnu)} must hold. Finally, the directional derivative \n\\begin{displaymath}\n\\delta\\mathcal{L}(\\Phi,d\\nu, r, q;\\delta r)= \\langle \\delta r, q-e + W^{-1}(r-c)\\rangle\n\\end{displaymath}\nis zero for all $\\delta r\\in \\mathfrak{C}$ if\n\\begin{equation}\n\\label{eq:q2r}\nr=c+W(q-e).\n\\end{equation} \nInserting this together with \\eqref{eq:rat} and \\eqref{eq:intQdnu=0} into \\eqref{eq:Lagrangian} then yields the dual functional \n\\begin{displaymath}\n\\varphi(q)=\\langle c, q\\rangle - \\int_{\\mathbb{T}^d} P\\log Q\\,\\derivd m +\\frac12\\|q-e\\|_W^2 +c_0.\n\\end{displaymath}\nConsequently the dual of the (primal) optimization problem \\eqref{eq:primal_relax} is equivalent to \n\\begin{align}\\label{eq:dual_relax}\n\\min_{q\\in\\bar{\\mathfrak{P}}_+} & \\quad \\langle c, q \\rangle - \\int_{\\mathbb{T}^d} P \\log Q\\, \\derivd m+ \\frac{1}{2}\\|q - e\\|_{W}^2 .\n\\end{align}\n\n\\begin{theorem}\\label{thm:softcontraints}\nFor every $p \\in \\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$ the functional in \\eqref{eq:dual_relax} is strictly convex and has a unique minimizer $\\hat{q}\\in \\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$. 
Moreover, there exists a unique $\\hat{r}\\in\\mathfrak{C}_+$, a unique $\\hat{c} \\in \\partial \\mathfrak{C}_+$ and a (not necessarily unique) nonnegative singular measure $d\\hat\\nu$ with support \n\\begin{equation}\n\\label{supp(dnu)soft}\n\\supp(d\\hat\\nu) \\subseteq \\{ {\\boldsymbol \\theta} \\in \\mathbb{T}^d \\mid \\hat{Q}(e^{i{\\boldsymbol \\theta}}) = 0 \\}\n\\end{equation}\nsuch that\n\\begin{subequations}\\label{rhat+chat}\n\\begin{align}\n& \\hat{r}_{{\\boldsymbol k}} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\left( \\frac{P}{\\hat Q} \\derivd m + d\\hat\\nu \\right)\\, \\text{ for all } {\\boldsymbol k} \\in \\Lambda, \\label{rhat}\\\\\n& \\hat{c}_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} d\\hat\\nu, \\text{ for all } {\\boldsymbol k} \\in \\Lambda \\, ,\\ \\label{chat}\n\\end{align}\n\\end{subequations}\nand the measure \n\\begin{equation}\n\\label{eq:optdmu(soft)}\nd\\hat\\mu({\\boldsymbol \\theta}) = \\frac{P(e^{i{\\boldsymbol \\theta}})}{\\hat{Q}(e^{i{\\boldsymbol \\theta}})}\\derivd m({\\boldsymbol \\theta}) + d\\hat\\nu({\\boldsymbol \\theta})\n\\end{equation}\nis an optimal solution to the primal problem \\eqref{eq:primal_relax}. Moreover, $d\\hat\\nu$ can be chosen with support in at most $|\\Lambda|-1$ points.\n\\end{theorem}\n\n\\begin{proof}\nThe objective functional $\\mathbb{J}$ of the dual problem \\eqref{eq:dual_relax} can be written as the sum of two terms, namely \n\\begin{displaymath}\n\\mathbb{J}_1(q) = \\langle \\tilde{c}, q \\rangle - \\int_{\\mathbb{T}^d} P \\log(Q) \\derivd m\\quad\\text{and}\\quad \\mathbb{J}_2(q)=\\langle c-\\tilde{c}, q \\rangle +\\frac{1}{2}\\|q - e\\|_{W}^2\\, ,\n\\end{displaymath}\nwhere $\\tilde{c}\\in\\mathfrak{C}_+$.\nThe functional $\\mathbb{J}_1$ is strictly convex (Theorem~\\ref{thm:rational}), and trivially the same holds for $\\mathbb{J}_2$ since it is a positive definite quadratic form. Consequently, $\\mathbb{J}=\\mathbb{J}_1+\\mathbb{J}_2$ is strictly convex, as claimed. Moreover, $\\mathbb{J}_1$ is lower semicontinuous \\cite[Lemma 3.1]{ringh2015multidimensional} with compact sublevel sets $\\mathbb{J}_1^{-1}(-\\infty,\\rho]$ \\cite[Lemma 3.2]{ringh2015multidimensional}. Likewise, $\\mathbb{J}_2$ is continuous with compact sublevel sets. Therefore $\\mathbb{J}$ is lower semicontinuous with compact sublevel sets and therefore has a minimum $\\hat{q}$, which must be unique by strict convexity. \n\nIn view of \\eqref{eq:q2r}, the optimal value of $r$ is given by\n\\begin{equation}\n\\label{eq:rhat}\n\\hat{r}=c +W(\\hat{q}-e)\n\\end{equation}\nand is hence unique. Since therefore the linear term $c+W(q-e)$ in the gradient of $\\mathbb{J}$ takes the value $\\hat{r}$ at the optimal point, the analysis in \\cite[sect. 3.1.5]{ringh2015multidimensional} applies with obvious modifications, showing that there is a $\\hat{c}\\in \\bar{\\mathfrak{C}}_+$, which then must be unique, such that \n\\begin{displaymath}\n \\hat{r}_{{\\boldsymbol k}} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)}\\frac{P}{\\hat Q} \\derivd m +\\hat{c}_{{\\boldsymbol k}}.\n\\end{displaymath}\nMoreover, there is a discrete measure $d\\hat\\nu$ with support in at most $|\\Lambda|-1$ points such that \\eqref{chat} holds; see, e.g., \\cite[Proposition 2.4]{karlsson2015themultidimensional}. Then \\eqref{rhat} holds as well. 
In view of \\eqref{eq:intQdnu=0}, \n\\begin{equation}\n\\label{eq:compslacknes}\n\\langle\\hat{c},\\hat{q}\\rangle =\\int_{\\mathbb{T}^d} \\hat{Q}d\\hat\\nu =0,\n\\end{equation}\nand consequently $\\hat{c}\\in\\partial\\mathfrak{C}_+$, and the support of $d\\hat\\nu$ must satisfy \\eqref{supp(dnu)soft}.\n\n\nFinally, let $r$ be given in terms of $\\derivd \\mu$ by \\eqref{eq:r}, and let $\\mathbb{I}(\\derivd \\mu)$ be the corresponding primal functional in \\eqref{eq:primal_relax}. Then, for any such $\\derivd \\mu$, \n\\begin{displaymath}\n\\mathbb{I}(\\derivd \\mu)= \\mathcal{L}(\\Phi,d\\nu, r, \\hat{q})\\leq \\mathcal{L}(\\hat{\\Phi},d\\hat\\nu, \\hat{r}, \\hat{q}) =\\mathbb{I}(d\\hat\\mu),\n\\end{displaymath}\nand hence $\\derivd \\hat{\\mu}$ is an optimal solution to the primal problem \\eqref{eq:primal_relax}, as claimed. \n\\end{proof}\n\nWe collect the KKT conditions in the following corollary. \n\n\\begin{corollary}\\label{cor:KKTsoft}\nThe conditions\n\\begin{subequations}\\label{eq:opt_cond_relax}\n\\begin{align}\n& \\hat{q} \\in \\bar{\\mathfrak{P}}_+, \\quad \\hat c \\in \\partial \\mathfrak{C}_+, \\quad \\langle \\hat{c}, \\hat{q} \\rangle = 0 \\label{eq:opt_cond_relax_slack} \\\\\n& \\hat{r}_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\frac{P}{\\hat Q}\\derivd m + \\hat{c}_{\\boldsymbol k},\\quad {\\boldsymbol k}\\in\\Lambda \\label{eq:opt_cond_relax_KKT} \\\\\n&\\hat{r}- c = W(\\hat q - e). \\label{eq:opt_cond_relax_matchError}\n\\end{align}\n\\end{subequations}\nare necessary and sufficient conditions for optimality of the dual pair \\eqref{eq:primal_relax} and \\eqref{eq:dual_relax} of optimization problems. \n\\end{corollary}\n\n\n\\section{On the well-posedness of the soft-constrained problem}\\label{sec:wellposed}\n\nIn the previous sections we have shown that the primal and dual optimization problems are well-defined. Next we investigate the well-posedness of the primal problem as an inverse problem. Thus, we first establish continuity of the solutions $\\hat{q} $ in terms of the parameters $W$, $c$ and $p$.\n\n\n\n\n\\newcommand{c}{c}\n\\newcommand{c^{(0)}}{c^{(0)}}\n\\newcommand{c^{(1)}}{c^{(1)}}\n\\newcommand{c^{(2)}}{c^{(2)}}\n\\newcommand{c^{(k)}}{c^{(k)}}\n\\newcommand{p}{p}\n\\newcommand{p^{(0)}}{p^{(0)}}\n\\newcommand{p^{(1)}}{p^{(1)}}\n\\newcommand{p^{(2)}}{p^{(2)}}\n\\newcommand{p^{(k)}}{p^{(k)}}\n\\newcommand{P}{P}\n\\newcommand{P^{(0)}}{P^{(0)}}\n\\newcommand{P^{(1)}}{P^{(1)}}\n\\newcommand{P^{(2)}}{P^{(2)}}\n\\newcommand{P^{(k)}}{P^{(k)}}\n\n\\newcommand{W}{W}\n\\newcommand{W^{(0)}}{W^{(0)}}\n\\newcommand{W^{(1)}}{W^{(1)}}\n\\newcommand{W^{(2)}}{W^{(2)}}\n\\newcommand{W^{(k)}}{W^{(k)}}\n\n\n\\newcommand{{\\ca, \\pa, \\Wa}}{{c, p, W}}\n\\newcommand{{\\cb, \\pb, \\Wb}}{{c^{(0)}, p^{(0)}, W^{(0)}}}\n\\newcommand{{\\cc, \\pc, \\Wc}}{{c^{(1)}, p^{(1)}, W^{(1)}}}\n\\newcommand{{\\cd, \\pd, \\Wd}}{{c^{(2)}, p^{(2)}, W^{(2)}}}\n\\newcommand{{\\ck, \\pk, \\Wk}}{{c^{(k)}, p^{(k)}, W^{(k)}}}\n\n\\subsection{Continuity of $\\hat{q}$ with respect to $c$, $p$ and $W$} \n\nWe start considering the continuity of the optimal solution with respect to the parameters. 
The parameter set of interest is\n\\begin{equation}\n\\label{eq:Pcal}\n\\mathcal{P}=\\{({\\ca, \\pa, \\Wa})\\mid c\\in \\mathfrak{C}, p\\in \\bar{\\mathfrak{P}}_+\\setminus\\{0\\}, W>0\\}.\n\\end{equation}\n\n\\begin{theorem}\\label{thm:W2qcont} \nLet \n\\begin{equation}\n\\label{eq:Jsoft}\n\\mathbb{J}_{{\\ca, \\pa, \\Wa}}(q)=\\langle c, q \\rangle - \\int_{\\mathbb{T}^d} P \\log Q\\, \\derivd m+ \\frac{1}{2}\\|q - e\\|_{W}^2.\n\\end{equation}\nThen the map $({\\ca, \\pa, \\Wa})\\mapsto \\hat{q}:=\\argmin_{q\\in\\bar{\\mathfrak{P}}_+} \\mathbb{J}_{{\\ca, \\pa, \\Wa}}(q)$ is continuous on $\\mathcal{P}$.\n\\end{theorem}\n\n\n\\begin{proof}\nFollowing the procedure in \\cite[Proposition 7.3]{karlsson2015themultidimensional} we use the continuity of the optimal value (Lemma~\\ref{lem:W2Jcont}) to show continuity of the optimal solution. To this end, let $({\\ck, \\pk, \\Wk})$ be a sequence of parameters in $\\mathcal{P}$ converging to $({\\ca, \\pa, \\Wa})\\in\\mathcal{P}$ as $k\\to\\infty$. Moreover, defining $\\mathbb{J}_k(q):=\\mathbb{J}_{\\ck, \\pk, \\Wk} (q)$ and $\\mathbb{J}(q):=\\mathbb{J}_{\\ca, \\pa, \\Wa} (q)$ for simplicity of notation, let $\\hat{q}_k = \\argmin_{q \\in \\bar{\\mathfrak{P}}_+} \\mathbb{J}_{k}(q)$ and $\\hat{q} = \\argmin_{q \\in \\bar{\\mathfrak{P}}_+} \\mathbb{J}(q)$. By Lemma~\\ref{lem:W2Jcont}, $(\\hat{q}_k)$ is bounded, and hence there is a subsequence, which for simplicity we also call $(\\hat{q}_k)$, converging to a limit $q_\\infty$. If we can show that $q_\\infty = \\hat{q}$, then the theorem follows. To this end, choosing a $q_0 \\in \\mathfrak{P}_+$, we have\n\\begin{align*}\n\\mathbb{J}_{k}(\\hat{q}_k) &= \\mathbb{J}_{k} (\\hat{q}_k + \\varepsilon q_0) - \\langle c^{(k)}, \\varepsilon q_0 \\rangle + \\int_{\\mathbb{T}^d} P^{(k)} \\log \\left( \\frac{\\hat{Q}_k + \\varepsilon Q_0}{\\hat{Q}_k} \\right)\\derivd m \\\\ \n&\\quad+ \\frac12\\|\\hat{q}_k-e\\|_{W^{(k)}}^2 -\\frac12 \\|\\hat{q}_k+\\varepsilon q_0-e\\|_{W^{(k)}}^2 \\\\\n&\\geq \\mathbb{J}_{k} (\\hat{q}_k + \\varepsilon q_0) - \\langle c^{(k)}, \\varepsilon q_0 \\rangle + \\frac12\\|\\hat{q}_k-e\\|_{W^{(k)}}^2-\\frac12 \\|\\hat{q}_k+\\varepsilon q_0-e\\|_{W^{(k)}}^2.\n\\end{align*}\nConsequently, by Lemma~\\ref{lem:W2Jcont}, \n\\begin{equation*}\n\\mathbb{J}(\\hat{q}) = \\lim_{k \\to \\infty} \\mathbb{J}_{k}(\\hat{q}_k) \\geq \\lim_{k \\to \\infty} \\mathbb{J}_{k}(\\hat{q}_k + \\varepsilon q_0) - \\varepsilon \\langle c^{(k)}, q_0 \\rangle + \\frac12\\|\\hat{q}_k-e\\|_{W^{(k)}}^2 -\\frac12 \\|\\hat{q}_k+\\varepsilon q_0-e\\|_{W^{(k)}}^2.\n\\end{equation*}\nHowever $\\hat{q}_k + \\varepsilon q_0 \\in \\mathfrak{P}_+$, and, since $({\\ca, \\pa, \\Wa},q)\\mapsto\\mathbb{J}_{{\\ca, \\pa, \\Wa}}(q)$ is continuous in $\\mathcal{P}\\times\\mathfrak{P}_+$, we obtain\n\\begin{align}\n\\mathbb{J}(\\hat{q}) &\\geq \\lim_{k \\to \\infty} \\left(\\mathbb{J}_{k}(\\hat{q}_k + \\varepsilon q_0) - \\varepsilon \\langle c^{(k)}, q_0 \\rangle + \\frac12\\|\\hat{q}_k-e\\|_{W^{(k)}}^2-\\frac12 \\|\\hat{q}_k+\\varepsilon q_0-e\\|_{W^{(k)}}^2\\right)\\notag\\\\ \n&= \\mathbb{J}(q_\\infty + \\varepsilon q_0) - \\varepsilon \\langle c, q_0 \\rangle+ \\frac12\\|q_\\infty-e\\|_{W}^2 -\\frac12 \\|q_\\infty+\\varepsilon q_0-e\\|_{W}^2 .\\label{Jpestimate}\n\\end{align}\nLetting $\\varepsilon\\to 0$ in \\eqref{Jpestimate}, we obtain the inequality $\\mathbb{J}(\\hat{q})\\geq \\mathbb{J}(q_\\infty)$. By strict convexity of $\\mathbb{J}$ the optimal solution is unique, and hence $\\hat q=q_\\infty$. 
\n\\end{proof}\n\n\\subsection{Continuity of $\\hat{c}$ with respect to $\\hat{q}$} \\label{ssec:qhat2chat}\n\nWe have now established continuity from $({\\ca, \\pa, \\Wa})$ to $\\hat{q}$. In the same way as in Proposition~\\ref{prop:c2qhatchat2} we are also interested in continuity of the map $({\\ca, \\pa, \\Wa})\\mapsto (\\hat{q},\\hat{c})$. This would follow if we could show that the map from $\\hat{q}$ to $\\hat{c}$ is continuous. From the KKT condition \\eqref{eq:opt_cond_relax_matchError}, it is seen that $\\hat{r}$ is continuous in $c$, $W$ and $\\hat{q}$. In view of \\eqref{eq:opt_cond_relax_KKT}, i.e., \n\\begin{displaymath}\n\\hat{r}_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\frac{P}{\\hat Q}\\derivd m + \\hat{c}_{\\boldsymbol k},\\quad {\\boldsymbol k}\\in\\Lambda\n\\end{displaymath}\ncontinuity of $\\hat c$ would follow if $\\int_{\\mathbb{T}^d} P\\hat{Q}^{-1}\\derivd m$ is continuous in $(p,\\hat{q})$ whenever it is finite. If $p\\in\\mathfrak{P}_+$, this follows from the continuity the map $\\hat{q}\\mapsto \\hat{Q}^{-1}$ in $L_1(\\mathbb{T}^d)$.\nFor the case $d\\leq 2$, this is trivial since if $\\int_{\\mathbb{T}^d} \\hat{Q}^{-1}\\derivd m$ is finite, then $\\hat q\\in \\mathfrak{P}_+$ and $\\hat Q$ is bounded away from zero (cf., Proposition~\\ref{prop:c2qhatchat2}). However, for the case $d>2$ the optimal $\\hat q$ may belong to the boundary $\\partial \\mathfrak{P}_+$, i.e., $\\hat Q$ is zero in some point. The following proposition shows $L_1$ continuity of $\\hat{q}\\mapsto \\hat{Q}^{-1}$ for certain cases.\n\n\\begin{proposition} \\label{prp:qQinvLt}\nFor $d\\geq3$, let $\\hat q\\in\\bar{\\mathfrak{P}}_+$ and suppose that the Hessian $ \\nabla_{{\\boldsymbol \\theta}\\thetab} \\, \\hat Q$ is positive definite in each point where $\\hat Q$ is zero. Then $\\hat Q^{-1}\\in L_1(\\mathbb{T}^{d})$ and the mapping from the coefficient vector $q\\in \\bar{\\mathfrak{P}}_+$ to $Q^{-1}$ is $L_1$ continuous in the point $\\hat q$.\n\\end{proposition}\n\nThe proof of this proposition is given in \\cite{ringh2017further}. From Propositions~\\ref{prp:qQinvLt} and \\ref{prop:c2qhatchat2} the following continuity result follows directly.\n\n\\begin{corollary}\nFor all $c\\in \\mathfrak{C}, p\\in\\mathfrak{P}_+, W>0$, the mapping $(c,p,W)\\to(\\hat q, \\hat c)$ is continuous in any point $({\\ca, \\pa, \\Wa})$ for which the Hessian $ \\nabla_{{\\boldsymbol \\theta}\\thetab} \\, \\hat Q$ is positive definite in each point where $\\hat Q$ is zero.\n\\end{corollary}\n\nThe condition $p \\in \\mathfrak{P}_+$ is needed, since we may have pole-zero cancelations in $P\/\\hat{Q}$ when $p\\in\\partial\\mathfrak{P}_+$, and then $\\int_{\\mathbb{T}^d} P\/\\hat{Q}\\derivd m$ may be finite even if $\\hat{Q}^{-1}\\not\\in L_1$. The following example shows that this may lead to discontinuities in the map $p\\mapsto \\hat{c}$ (cf. Example 3.8 in \\cite{karlsson2015themultidimensional}).\n\n\\begin{example}\nLet\n\\begin{displaymath}\nc = \\begin{bmatrix}1 \\\\ 3 \\\\ 1\\end{bmatrix} = \n\\begin{bmatrix}0 \\\\ 2 \\\\ 0\\end{bmatrix} +\\begin{bmatrix}1 \\\\ 1 \\\\ 1\\end{bmatrix}= \n\\int_{-\\pi}^\\pi\\begin{bmatrix}e^{-i\\theta} \\\\ 1 \\\\ e^{i\\theta}\\end{bmatrix} \\left( 2 \\derivd m +d\\nu_0\\right),\n\\end{displaymath}\nwhere $\\derivd m=d\\theta\/2\\pi$ and $d\\nu_0$ is the singular measure $\\delta_{0}(\\theta)d\\theta$ with support in $\\theta=0$. Since $\\derivd \\mu:=2 \\derivd m +d\\nu_0$ is positive, $c \\in \\bar{\\mathfrak{C}}_+$. 
Moreover, since\n\\[\nT_{c} = \\begin{bmatrix}\n3 & 1\\\\\n1 & 3\n\\end{bmatrix}\n> 0\n\\]\nwe have that $c \\in \\mathfrak{C}_+$ (see, e.g., \\cite[p. 2853]{lindquist2013thecirculant}). Thus we know \\cite[Corollary 2.3]{ringh2015multidimensional} that for each $p \\in \\mathfrak{P}_+$ we have a unique $\\hat{q} \\in \\mathfrak{P}_+$ such that $P\/ \\hat{Q}$ matches $c$, and hence $\\hat{c} = 0$. However, for $p = 2 (-1, 2, -1)'$ we have that $\\hat{q} = (-1, 2, -1)'$ and $\\hat{c} = (1, 1, 1)'$ (Theorem~\\ref{thm:rational}). Then, for the sequence $(p_k)$, where $p_k=2 (-1, 2 + 1\/k, -1) \\in \\mathfrak{P}_+$, we have $\\hat{c}_k = 0$, so \n\\[\n\\lim_{k \\to \\infty} \\hat{c}_k = \\lim_{k \\to \\infty}\n\\begin{bmatrix}\n0 \\\\ 0 \\\\ 0\n\\end{bmatrix} \\neq \n\\begin{bmatrix}\n1 \\\\ 1 \\\\ 1\n\\end{bmatrix},\n\\]\nwhich shows that the mapping $p \\to \\hat{c}$ is not continuous.\n\\end{example}\n\n\\section{Tuning to avoid a singular part}\\label{sec:no_singular_part}\n\nIn many situations we prefer solutions where there is no singular measure $d\\nu$ in the optimal solution. An interesting question is therefore for what prior $P$ and weight $W$ we obtain $d\\hat\\nu=0$. The following result provides a sufficient condition. \n\n\\begin{proposition}\\label{boundlprop}\nLet $c\\in\\mathfrak{C}$ and let $p$ be the Fourier coefficients of the prior $P$. If the weight satisfies\\footnote{Here $\\|A\\|_{2,1}=\\max_{c\\neq 0}\\|Ac\\|_1\/\\|c\\|_2$ denotes the subordinate (induced) matrix norm.} \n\\begin{equation}\\label{eq:supnormbound}\n\\|W^{-1\/2}\\|_{2,1}<\\|c-p\\|_{W^{-1}}^{-1},\n\\end{equation}\n then the optimal solution of \\eqref{eq:primal_relax} is on the form $$d\\hat{\\mu}=(P\/\\hat{Q}) dm,$$ i.e., the singular part $d\\hat{\\nu}$ vanishes.\n\\end{proposition}\n\n\\begin{remark}\nNote that for a scalar weight, $W=\\lambda I$ the bound \\eqref{eq:supnormbound} simplifies to \n\\begin{equation}\n\\label{eq:scalarbound}\n\\lambda> |\\Lambda|^{1\/2}\\|c-p\\|_2,\n\\end{equation}\n where $|\\Lambda|$ is the cardinality of index set $\\Lambda$.\n\\end{remark}\n\nFor the proof of Proposition~\\ref{boundlprop} we need the following lemma.\n\n\\begin{lemma}\\label{boundlemma}\nCondition \\eqref{eq:supnormbound} implies \n\\begin{equation}\\label{eq:newnormbound}\n\\|W^{-1}(\\hat{r}-c)\\|_1<1,\n\\end{equation}\nwhere $\\hat{r}$ is the optimal value of $r$ in problem \\eqref{eq:primal_relax}.\n\\end{lemma}\n\n\\begin{proof}\nLet \n\\begin{equation}\n\\label{eq:primalcost}\n\\mathbb{I}(\\derivd \\mu,r):=\\mathbb D (P\\derivd m, \\derivd \\mu) + \\frac{1}{2}\\|r - c\\|_{W^{-1}}^2 \n\\end{equation}\nbe the cost function of problem \\eqref{eq:primal_relax}, and let $(d\\hat{\\mu},\\hat{r})$ be the optimal solution. Clearly, $\\mathbb{I}(Pdm,p)\\geq \\mathbb{I}(d\\hat{\\mu},\\hat{r})$, and consequently \n\\begin{displaymath}\n\\|\\hat{r}-c\\|_{W^{-1}} \\leq \\|p-c\\|_{W^{-1}},\n\\end{displaymath}\nsince $\\mathbb D (P\\derivd m, d\\hat{\\mu})\\geq 0$ and $\\mathbb D (P\\derivd m,P\\derivd m)=0$. Therefore,\n\\begin{align*}\n\\|W^{-1}(\\hat{r} - c)\\|_1\n&\\le \\|W^{-1\/2}\\|_{2,1} \\|W^{-1\/2}(\\hat{r} - c)\\|_2\\\\\n&= \\|W^{-1\/2}\\|_{2,1}\\|\\hat{r} - c\\|_{W^{-1}}\\\\\n&\\le \\|W^{-1\/2}\\|_{2,1} \\|p - c\\|_{W^{-1}},\n\\end{align*}\n which is less than one by \\eqref{eq:supnormbound}. 
Hence \\eqref{eq:supnormbound} implies \\eqref{eq:newnormbound}.\n\\end{proof}\n\n\n\\begin{proof}[{Proof of Proposition~\\ref{boundlprop}}]\nSuppose the optimal solution has a nonzero singular part $d\\hat{\\nu}$, and form the directional derivative of \\eqref{eq:primalcost} at $(d\\hat{\\mu},\\hat{r})$ in the direction $-d\\hat{\\nu}$. Then $\\Phi$ in \\eqref{eq:decomp} does not vary, and \n\\begin{displaymath}\n\\delta\\mathbb{I}(d\\hat{\\mu},\\hat{r};-d\\hat{\\nu},\\delta r)=-\\int_{\\mathbb{T}^d}d\\hat{\\nu} +\\delta r^*W^{-1}(\\hat{r} - c),\n\\end{displaymath}\nwhere\n\\begin{displaymath}\n\\delta r_{\\boldsymbol k} = -\\int_{\\mathbb{T}^d} \\expfunkd\\hat{\\nu}.\n\\end{displaymath}\nThen $|\\delta r_{\\boldsymbol k}|\\leq \\int d\\hat{\\nu}$ for all ${\\boldsymbol k}\\in\\Lambda$, and hence\n\\begin{displaymath}\n|\\delta r^*W^{-1}(\\hat{r} - c)|\\leq \\|W^{-1}(\\hat{r}-c)\\|_1 \\int_{\\mathbb{T}^d} d\\hat{\\nu} <\\int_{\\mathbb{T}^d} d\\hat{\\nu},\n\\end{displaymath}\nby \\eqref{eq:newnormbound} (Lemma~\\ref{boundlemma}). Consequently,\n\\begin{displaymath}\n\\delta\\mathbb{I}(d\\hat{\\mu},\\hat{r};-d\\hat{\\nu},\\delta r)<0\n\\end{displaymath}\nwhenever $d\\hat{\\nu}\\ne 0$, which contradicts optimality. Hence $d\\hat{\\nu}$ must be zero. \n\\end{proof}\n\nThe condition of Proposition~\\ref{boundlprop} is just sufficient and is in general conservative. To illustrate this, we consider a simple one-dimensional example ($d=1$). \n\n\\begin{example}\\label{ex:singular}\nConsider a covariance sequence $(1,c_1)$, where $c_1\\ne 0$, and a prior $P(e^{i\\theta})=1-\\cos\\theta$, and set $W =\\lambda I$. Then, since \n\\begin{displaymath}\nc=\\begin{pmatrix} c_1\\\\ 1 \\\\c_1 \\end{pmatrix} \\quad \\text{and} \\quad p=\\begin{pmatrix} -1\/2\\\\ 1 \\\\-1\/2 \\end{pmatrix},\n\\end{displaymath}\nthe sufficient condition \\eqref{eq:scalarbound} for an absolutely continuous solution is \n\\begin{equation}\n\\label{bound}\n\\lambda > \\sqrt{\\tfrac32}\\, |1+2c_1|.\n\\end{equation}\nWe want to investigate how restrictive this condition is. \n\nClearly we will have a singular part if and only if $\\hat{Q} = q_0 P$, in which case we have \n\\begin{displaymath}\n\\hat{q}=q_0\\begin{pmatrix} -1\/2\\\\ 1 \\\\-1\/2 \\end{pmatrix}\\quad \\text{and} \\quad\\hat c=\\beta\\begin{pmatrix} 1\\\\ 1 \\\\1\n\\end{pmatrix}\n\\end{displaymath}\nfor some $\\beta >0$. In fact, it follows from $\\langle \\hat{c}, \\hat{q} \\rangle = 0$ in \\eqref{eq:opt_cond_relax_slack} that $\\hat{c}_1=\\hat{c}_0$. Moreover, \\eqref{eq:opt_cond_relax_KKT} and \\eqref{eq:opt_cond_relax_matchError} yield\n\\begin{align*}\n\\hat{r} =&\\int \\frac{P}{\\hat{Q}}\\begin{pmatrix} e^{i\\theta}\\\\ 1 \\\\e^{-i\\theta}\n\\end{pmatrix} dm +\\hat c=\n\\begin{pmatrix} \\beta\\\\ \\beta+1\/q_0 \\\\\\beta\n\\end{pmatrix}\\\\\nc = &\\,\\hat{r}-\\lambda(q-e)=\\begin{pmatrix} \\beta+\\lambda q_0\/2,\\\\ \\beta+1\/q_0-\\lambda q_0+\\lambda \\\\\\beta+\\lambda q_0\/2\n\\end{pmatrix}.\n\\end{align*}\nBy eliminating $\\beta$, we get \n\\[\nc_1=1-\\frac{1}{q_0}-\\frac{3}{2}q_0 \\lambda +\\lambda,\n\\]\nand solving for $q_0$ yields\n\\[\nq_0=\\frac{\\lambda +c_1-1+ (6\\lambda + (\\lambda +c_1-1)^2 )^{1\/2}}{3\\lambda }\n\\]\n(note that $\\lambda >0$ and $q_0>0$). 
Again, using \\eqref{eq:opt_cond_relax_matchError} we have\n\\begin{align*}\n\\beta&=c_1-\\lambda q_0\/2\\\\\n&=c_1-\\frac{1}{6}\\left( \\lambda +c_1-1+ (6\\lambda + (\\lambda +c_1-1)^2)^{1\/2}\\right).\n\\end{align*}\nWe are interested in $\\lambda$ for which $\\beta >0$, i.e.,\n\\begin{equation}\n\\label{eq:inex}\n6c_1-(\\lambda +c_1-1)>(6\\lambda + (\\lambda +c_1-1)^2)^{1\/2},\n\\end{equation}\nwhich is equivalent to the two conditions\n\\begin{subequations}\\label{twoconditions}\n\\begin{equation}\n1+5c_1>\\lambda \n\\end{equation}\n\\begin{equation}\n2c_1(1+2c_1)>\\lambda (1+2c_1),\n\\end{equation}\n\\end{subequations}\nwhich could be seen by noting that the left member of \\eqref{eq:inex} must be positive and then squaring both sides.\nTo find out whether this has a solution we consider three cases, namely $c_1<-1\/2$, $-1\/20$. For $c_1<-1\/2$, condition \\eqref{twoconditions} becomes \n$2c_1<\\lambda <1+5c_1$,\nwhich is impossible since $1+5c_1<2c_1$. Condition \\eqref{twoconditions} cannot be satisfied when $-1\/20$. When $c_1>0$, Condition \\eqref{twoconditions} is satisfied if and only if $\\lambda <2c_1$.\n\nConsequently, there is no singular part if either $c_1$ is negative or $$\\lambda\\geq 2c_1.$$This shows that the condition \\eqref{bound} is not tight.\n\\end{example}\n\n\n\n\n\n\\section{Covariance extension with hard constraints}\\label{sec:approxhard}\nThe alternative optimization problem \\eqref{eq:primal_relax_new} amounts to minimizing $\\mathbb D (P\\derivd m, \\derivd \\mu)$ subject to the hard constraint $\\|r - c \\|_{W^{-1}}^2 \\le 1$, where $r_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu$. Hard constraints of this type were used in \\cite{schott1984maximum} in the context of entropy maximization. In general the data $c\\not\\in\\bar{\\mathfrak{C}}_+$, whereas, by definition, $r\\in\\bar{\\mathfrak{C}}_+$. Consequently, a necessary condition for the existence of a solution is that $\\bar{\\mathfrak{C}}_+$ and the strictly convex set\n\\begin{equation}\n\\label{eq:SW}\n\\mathfrak{S}_W=\\{ r\\mid \\|r - c \\|_{W^{-1}}^2 \\le 1\\}\n\\end{equation}\nhave a nonempty intersection. In the case that $\\mathfrak{S}_W\\cap\\bar{\\mathfrak{C}}_+\\subset \\partial\\mathfrak{C}_+$, this intersection only contains one point \\cite[Section 3.12]{luenberger1969optimization}. In this case, any solution to the moment problem contains only a singular part (Proposition~\\ref{prop:singular}), and then the primal problem \\eqref{eq:primal_relax_new} has a unique feasible point $r$, but the objective function is infinite. Moreover, $\\mathbb D (P\\derivd m, \\derivd \\mu)\\geq 0$ is strictly convex with $\\mathbb D (P\\derivd m, P\\derivd m)= 0$, so if $p\\in\\mathfrak{S}_W$ then \\eqref{eq:primal_relax_new} has the trivial unique optimal solution $\\derivd \\hat{\\mu} =P\\derivd m$, and $\\hat{r}=p$. The remaining case, $p\\not\\in\\mathfrak{S}_W\\cap\\mathfrak{C}_+\\ne\\emptyset$ needs further analysis. 
\n\nTo this end, setting $\\derivd \\mu=\\Phi\\derivd m+d\\nu$, we consider the Lagrangian \n\\begin{align*}\n\\mathcal{L}(\\Phi,d\\nu, r, q, \\gamma) = & \\; -\\mathbb D (P\\derivd m, \\derivd \\mu)\n+ \\sum_{{\\boldsymbol k} \\in \\Lambda} q_{\\boldsymbol k}^* \\left( r_{\\boldsymbol k} - \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\derivd \\mu({\\boldsymbol \\theta}) \\right) \\\\\n& \\; +\\gamma \\left(1- \\|r - c \\|_{W^{-1}}^2 \\right)\\\\\n = & \\; -\\mathbb D (P\\derivd m, \\derivd \\mu) + \\langle r,q\\rangle - \\int_{\\mathbb{T}^d} Q \\derivd \\mu +\\gamma \\left(1- \\|r - c \\|_{W^{-1}}^2 \\right), \n\\end{align*}\nwhere $\\gamma\\ge 0$. Therefore, in view of \\eqref{D(Pdm,dmu)}, \n\\begin{align}\\label{eq:Lagrangian2}\n\\mathcal{L}(\\Phi,d\\nu, r, q,\\gamma) = &\\; \\int_{\\mathbb{T}^d}P\\log\\Phi dm - \\int_{\\mathbb{T}^d}Q\\Phi dm - \\int_{\\mathbb{T}^d} Qd\\nu -\\int_{\\mathbb{T}^d}P(\\log P -1)dm \\notag \\\\\n&\\; +\\langle r,q-e\\rangle +\\gamma \\left(1-\\|r - c \\|_{W^{-1}}^2 \\right) , \n\\end{align}\nwhere, as before, $e := [e_{\\boldsymbol k}]_{{\\boldsymbol k} \\in \\Lambda}$, $e_{\\boldsymbol 0} = 1$ and $e_{{\\boldsymbol k}} = 0$ for ${\\boldsymbol k} \\in \\Lambda \\setminus \\{ \\boldsymbol 0 \\}$, and hence $r_{\\boldsymbol 0}=\\langle r,e\\rangle$. This Lagrangian differs from that in \\eqref{eq:Lagrangian} only in the last term that does not depend on $\\Phi$. Therefore, in deriving the dual functional $$\\varphi(q,\\gamma)=\\sup_{\\Phi\\geq 0,d\\nu\\geq 0,r}\\mathcal{L}(\\Phi,d\\nu, r, q,\\gamma),$$ we only need to consider $q\\in\\bar{\\mathfrak{P}}_+\\setminus\\{0\\}$, and a first variation in $\\Phi$ yields \\eqref{eq:rat} and \\eqref{eq:intQdnu=0}. The directional derivative \n\\begin{displaymath}\n\\delta\\mathcal{L}(\\Phi,d\\nu, r, q,\\gamma;\\delta r)= q-e + 2\\gamma W^{-1}(r-c)\n\\end{displaymath}\nis zero for \n\\begin{equation}\n\\label{eq:q2rhard}\nr=c+\\frac{1}{2\\gamma}W(q-e).\n\\end{equation} \nThus inserting \\eqref{eq:rat} and \\eqref{eq:intQdnu=0} and \\eqref{eq:q2rhard} into \\eqref{eq:Lagrangian2} yields the dual functional\n\\begin{equation}\n\\label{eq:harddual}\n\\varphi(q,\\gamma)=\\langle c, q\\rangle - \\int_{\\mathbb{T}^d} P\\log Q\\,\\derivd m +\\frac{1}{4\\gamma}\\|q-e\\|_W^2 +\\gamma -c_{\\boldsymbol 0}\n\\end{equation}\nto be minimized over all $q\\in\\bar{\\mathfrak{P}}_+\\setminus\\{0\\}$ and $\\gamma\\geq 0$. Since $\\frac{d\\varphi}{d\\gamma}= -\\frac{1}{4\\gamma^2}\\|q-e\\|_W^2 +1$, there is a stationary point\n\\begin{equation}\n\\label{eq:gammaopt}\n\\gamma =\\frac12\\|q-e\\|_W\n\\end{equation}\nthat is nonnegative as required. \n\nFor $\\gamma=0$ we must have $q=e$, and consequently $\\varphi(q,\\gamma)$ tends to zero as $\\gamma\\to 0$. By weak duality zero is therefore a lower bound for the minimization problem \\eqref{eq:primal_relax_new}, and $\\mathbb D (P\\derivd m, \\derivd \\hat{\\mu})=0$, which corresponds to the trivial unique solution $\\derivd \\hat{\\mu}=P\\derivd m$ and $\\hat{r}=p$ mentioned above. This solution is only feasible if $p\\in\\mathfrak{S}_W$. \nTherefore we can restrict our attention to the case $\\gamma>0$. 
Inserting \\eqref{eq:gammaopt} into \\eqref{eq:harddual} and removing the constant term $c_{\\boldsymbol 0}$, we obtain the modified dual functional\n\\begin{equation}\n\\label{eq:moddual}\n\\mathbb{J}(q)=\\langle c, q\\rangle - \\int_{\\mathbb{T}^d} P\\log Q\\,\\derivd m + \\|q-e\\|_W.\n\\end{equation}\nMoreover, combining \\eqref{eq:q2rhard} and \\eqref{eq:gammaopt}, we obtain\n\\begin{equation}\n\\label{eq:boundaryr}\n\\|r-c\\|_{W^{-1}}=1\\, ,\n\\end{equation}\nwhich also follows from complementary slackness since $\\gamma>0$ and restricts $r$ to the boundary of $\\mathfrak{S}_W$. \n\n\\begin{theorem}\\label{thm:hardcontraints}\nSuppose that $p\\in\\bar{\\mathfrak{P}}_+ \\setminus\\{0\\}$, $p\\not\\in\\mathfrak{S}_W$ and $\\mathfrak{S}_W\\cap\\mathfrak{C}_+\\ne\\emptyset$. Then the modified dual problem \n\\begin{equation}\n\\label{eq:moddualproblem}\n\\min_{q\\in\\bar{\\mathfrak{P}}_+}\\mathbb{J}(q)\n\\end{equation} \nhas a unique solution $\\hat{q}\\in\\bar{\\mathfrak{P}}_+ \\setminus\\{0\\}$. Moreover, there exists a unique $\\hat{r}\\in\\mathfrak{C}_+$, a unique $\\hat{c} \\in \\partial \\mathfrak{C}_+$ and a (not necessarily unique) nonnegative singular measure $d\\hat\\nu$ with support \n\\begin{equation}\n\\label{supp(dnu)hard}\n\\supp(d\\hat\\nu) \\subseteq \\{ {\\boldsymbol \\theta} \\in \\mathbb{T}^d \\mid \\hat{Q}(e^{i{\\boldsymbol \\theta}}) = 0 \\}\n\\end{equation}\nsuch that\n\\begin{subequations}\\label{rhat2+chat2}\n\\begin{align}\n& \\hat{r}_{{\\boldsymbol k}} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\left( \\frac{P}{\\hat Q} \\derivd m + d\\hat\\nu \\right)\\, \\text{ for all } {\\boldsymbol k} \\in \\Lambda, \\label{rhat2}\\\\\n& \\hat{c}_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} d\\hat\\nu, \\text{ for all } {\\boldsymbol k} \\in \\Lambda \\, ,\\ \\label{chat2}\n\\end{align}\n\\end{subequations}\nand the measure \n\\begin{equation}\n\\label{eq:optdmu(hard)}\nd\\hat\\mu({\\boldsymbol \\theta}) = \\frac{P(e^{i{\\boldsymbol \\theta}})}{\\hat{Q}(e^{i{\\boldsymbol \\theta}})}\\derivd m({\\boldsymbol \\theta}) + d\\hat\\nu({\\boldsymbol \\theta})\n\\end{equation}\nis an optimal solution to the primal problem \\eqref{eq:primal_relax_new}. Moreover, \n\\begin{equation}\n\\label{eq:rhat&c}\n\\|\\hat{r}-c\\|_{W^{-1}}=1\\, ,\n\\end{equation}\nand $d\\hat\\nu$ can be chosen with support in at most $|\\Lambda|-1$ points. \n\nIf $p\\in\\mathfrak{S}_W$, the unique optimal solution is $\\derivd \\hat{\\mu}=P\\derivd m$, and then $\\hat{r}=p$. If $\\mathfrak{S}_W\\cap\\bar{\\mathfrak{C}}_+\\subset \\partial\\mathfrak{C}_+$, any solution to the moment problem will have only a singular part. Finally, if $\\mathfrak{S}_W\\cap\\bar{\\mathfrak{C}}_+=\\emptyset$, then the problem \\eqref{eq:primal_relax_new} will have no solution. \n\\end{theorem}\n\n\\begin{proof}\nWe begin by showing that the functional $\\mathbb{J}$ has a minimum under the stated conditions. To this end, \nwe first establish that the functional $\\mathbb{J}$ has compact sublevel sets $\\mathbb{J}^{-1}(-\\infty,\\rho]$, i.e., $\\|q\\|_\\infty$ is bounded for all $q$ such that $\\mathbb{J}(q)\\leq\\rho$, where $\\rho$ is sufficiently large for the sublevel set to be nonempty. The functional \\eqref{eq:moddual} can be decomposed in a linear and a logarithmic term as\n\\begin{displaymath}\n\\mathbb{J}(q)=h(q) - \\int_{\\mathbb{T}^d} P\\log Q\\,\\derivd m +c_{\\boldsymbol 0}\\, ,\n\\end{displaymath}\nwhere $h(q):= \\langle c, q-e\\rangle +\\|q-e\\|_W$. 
The integral term will tend to $-\\infty$ as $\\|q\\|_\\infty\\to \\infty$. \nTherefore we need to have the linear term to tend to $+\\infty$ as $\\|q\\|_\\infty\\to \\infty$, in which case we can appeal to the fact that linear growth is faster than logarithmic growth. However, if $c\\not\\in\\bar{\\mathfrak{C}}_+$ as is generally assumed, there is a $q\\in\\bar{\\mathfrak{P}}_+$ such that $\\langle c, q\\rangle< 0$, so we need to ensure that the positive term $\\|q-e\\|_W$ dominates. \n\nLet $\\tilde{r}\\in\\mathfrak{S}_W\\cap\\mathfrak{C}_+\\ne\\emptyset$. Then, by Theorem~\\ref{thm:rational}, there is a positive measure $d\\tilde{\\mu}=\\tilde{\\Phi}dm+ d\\tilde{\\nu}$ with a nonzero $\\tilde{\\Phi}$ such that \n\\begin{displaymath}\n\\tilde{r}=\\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} d\\tilde\\mu \\, ,\n\\end{displaymath} \nand $\\tilde{r}$ satisfies the constraints in the primal problem \\eqref{eq:primal_relax_new}. Consequently, \n\\begin{displaymath}\n\\varphi(q,\\gamma)\\geq\\mathcal{L}(\\tilde\\Phi,d\\tilde\\nu, \\tilde{r}, q,\\gamma)\\geq -\\mathbb{D}(P\\derivd m,d\\tilde{\\mu})\n\\end{displaymath}\n for all $q\\in\\bar{\\mathfrak{P}}_+$ and $\\gamma \\geq 0$, which in particular implies that \n\\begin{equation}\n\\label{eq:lowerbound}\n\\mathbb{J}(q)\\geq -\\mathbb{D}(P\\derivd m,d\\tilde{\\mu}) \\quad \\text{ for all $q\\in\\bar{\\mathfrak{P}}_+$}.\n\\end{equation}\nNow, if there is a $q\\in\\bar{\\mathfrak{P}}_+$ such that $h(q)\\leq 0$, then $\\mathbb{J}(\\lambda q)\\to -\\infty$ as $\\lambda\\to\\infty$, which contradicts \\eqref{eq:lowerbound}. Therefore, $h(q)> 0$ for all $q\\in\\bar{\\mathfrak{P}}_+$. Then, since $h$ is continuous, it has a minimum $\\varepsilon$ on the compact set $K:=\\{q\\in\\bar{\\mathfrak{P}}_+\\setminus\\{0\\}\\mid \\|q-e\\|_\\infty =1\\}$. As $e\\not\\in K$, $\\epsilon >0$. Therefore,\n\\begin{displaymath}\nh(q) \\geq \\varepsilon \\|q-e\\|_\\infty \\geq \\varepsilon\\|q\\|_\\infty - \\varepsilon\\|e\\|_\\infty \\geq \\frac{\\varepsilon}{|\\Lambda |} \\|Q\\|_\\infty - \\varepsilon\\|e\\|_\\infty \\, ,\n\\end{displaymath}\nsince $\\|Q\\|_\\infty \\leq |\\Lambda|\\|q\\|_\\infty$ \\cite[Lem. A.1]{ringh2015multidimensional}. Likewise, \n\\begin{align*}\n\\int_{\\mathbb{T}^d} \\! \\! P \\log Q \\derivd m &=\\int_{\\mathbb{T}^d} \\! \\! P \\log \\left[ \\frac{Q}{\\|Q\\|_\\infty} \\right] \\derivd m\\! +\\! \\int_{\\mathbb{T}^d}\\! \\! P \\log \\|Q\\|_\\infty \\derivd m\\\\ & \\leq \\int_{\\mathbb{T}^d}\\! \\! P \\log \\|Q\\|_\\infty \\derivd m\\, ,\n\\end{align*}\nsince $Q\/\\|Q\\|_\\infty \\leq 1$. Hence\n\\begin{equation}\\label{eq:Jhard_ineq}\n\\rho \\geq \\mathbb{J}(q) \\geq \\frac{\\varepsilon}{|\\Lambda |} \\|Q\\|_\\infty -\\int_{\\mathbb{T}^d}\\! \\! P \\log \\|Q\\|_\\infty \\derivd m - \\varepsilon\\|e\\|_\\infty \\, .\n\\end{equation}\nComparing linear and logarithmic growth we see that the sublevel set is bounded from above and below. Moreover, a trivial modification of \\cite[Lemma 3.1]{ringh2015multidimensional} shows that $\\mathbb{J}$ is lower semi-continuous, and hence $\\mathbb{J}^{-1}(-\\infty,\\rho]$ is compact. Consequently, the problem \\eqref{eq:moddualproblem} has an optimal solution $\\hat{q}$.\n\n\nNext we show that $\\hat{q}$ is unique. For this we return to the original dual problem to find a minimum of \\eqref{eq:harddual}. 
The solution $\\hat{q}$ is a minimizer of $\\varphi(q,\\hat{\\gamma})$, where \n\\begin{displaymath}\n\\hat{\\gamma}=\\frac12\\|\\hat{q}-e\\|_W\\, ,\n\\end{displaymath}\nand $\\mathbb{J}(\\hat{q})=\\varphi(\\hat{q},\\hat{\\gamma})+c_{\\boldsymbol 0}$.\nTo show that $\\varphi$ is strictly convex, we form the Hessian\n\\begin{displaymath}\nH=\\begin{bmatrix} \\int_{\\mathbb{T}^d}P\/Q^2\\derivd m &0\\\\0&0 \\end{bmatrix}\n+\\frac{1}{2\\gamma^3}\\begin{bmatrix} \\gamma^2 W& -\\gamma (q-e)^*W\\\\-\\gamma W(q-e)&(q-e)^*W(q-e)\\end{bmatrix}\\\n\\end{displaymath}\nand the quadratic form\n\\begin{displaymath}\n\\begin{bmatrix} x\\\\ \\xi\\end{bmatrix}^*H\\begin{bmatrix} x\\\\ \\xi\\end{bmatrix}=\nx^* \\left(\\int_{\\mathbb{T}^d}P\/Q^2\\derivd m\\right) x + \\frac{1}{2\\gamma^3} [\\gamma x - \\xi (q-e)]^*W[\\gamma x - \\xi (q-e)],\n\\end{displaymath}\nwhich is positive for all nonzero $(x,\\xi)$, since $(q-e)\\ne 0$ and $\\gamma>0$. Consequently, $\\varphi$ has a unique minimizer $(\\hat{q},\\hat{\\gamma})$, where $\\hat{q}$ is the unique minimizer of $\\mathbb{J}$.\n\nIt follows from \\eqref{eq:q2rhard} and \\eqref{eq:gammaopt} that \n\\begin{equation}\n\\label{eq:rhathard}\n\\hat{r}=c + \\frac{W(\\hat{q}-e)}{\\|\\hat{q}-e\\|_W}\\, ,\n\\end{equation}\nwhich consequently is unique. Moreover, $h(\\hat{q})=\\langle\\hat{r},\\hat{q}\\rangle -\\hat{r}_{\\boldsymbol 0}$, and hence we can follow the same line of proof as in Theorem~\\ref{thm:softcontraints} to show that there is a unique $\\hat{c}\\in\\partial\\mathfrak{C}_+$ such that $\\langle\\hat{c},\\hat{q}\\rangle=0$ and a positive discrete measure $d\\hat\\nu$ with support in $|\\Lambda| -1$ points so that \\eqref{supp(dnu)hard} and \\eqref{rhat2+chat2} hold. Next, let $\\mathbb{I}(\\derivd \\mu)=-\\mathbb{D}(Pdm,\\derivd \\mu)$ be the primal functional in \\eqref{eq:primal_relax_new}, where $\\derivd \\mu$ is restricted to the set of positive measures $\\derivd \\mu:=\\Phi\\derivd m + d\\nu$ such that $r$, given by \\eqref{eq:r}, satisfies the constraint $\\|r-c\\|_W\\leq 1$. \nIn view of \\eqref{eq:rhat&c}, \n\\begin{displaymath}\n\\mathbb{I}(\\derivd \\mu)= \\mathcal{L}(\\Phi,d\\nu, r, \\hat{q},\\hat{\\gamma})\\leq \\mathcal{L}(\\hat{\\Phi},d\\hat\\nu, \\hat{r}, \\hat{q},\\hat{\\gamma}) =\\mathbb{I}(d\\hat\\mu)\n\\end{displaymath}\nfor any such $\\derivd \\mu$, and hence $\\derivd \\hat{\\mu}$ is an optimal solution to the primal problem \\eqref{eq:primal_relax_new}. \nFinally, the cases $p\\in\\mathfrak{S}_W$, $\\mathfrak{S}_W\\cap\\bar{\\mathfrak{C}}_+\\subset \\partial\\mathfrak{C}_+$, and $\\mathfrak{S}_W\\cap\\bar{\\mathfrak{C}}_+=\\emptyset$ have already been discussed above.\n\\end{proof}\n\n\\begin{corollary}\\label{cor:KKThard}\nSuppose that $p\\in\\bar{\\mathfrak{P}}_+ \\setminus\\{0\\}$ and $\\mathfrak{S}_W\\cap\\mathfrak{C}_+\\ne\\emptyset$. 
The KKT conditions\n\\begin{subequations}\\label{eq:opt_cond_relax_new}\n\\begin{align}\n& \\hat{q} \\in \\bar{\\mathfrak{P}}_+, \\quad \\hat c \\in \\partial \\mathfrak{C}_+, \\quad \\langle \\hat{c}, \\hat{q} \\rangle = 0 \\label{eq:opt_cond_relax_slack_new} \\\\\n& \\hat{r}_{\\boldsymbol k} = \\int_{\\mathbb{T}^d} e^{i(\\kb,\\thetab)} \\frac{P}{\\hat Q}\\derivd m + \\hat{c}_{\\boldsymbol k},\\quad {\\boldsymbol k}\\in\\Lambda \\label{eq:opt_cond_relax_KKT_new} \\\\\n&(\\hat{r}-c)\\|\\hat{q}-e\\|_W = W(\\hat{q}-e), \\quad \\hat{r}\\in\\mathfrak{S}_W \\label{eq:opt_cond_relax_matchError_new}\n\\end{align}\n\\end{subequations}\nare necessary and sufficient conditions for optimality of the dual pair \\eqref{eq:primal_relax_new} and \\eqref{eq:moddualproblem} of optimization problems. \n\\end{corollary}\n\nThe corollary follows by noting that, if $p\\in\\mathfrak{S}_W$, then we obtain the trivial solution $\\hat{q}=e$, which corresponds to the primal optimal solution $\\derivd \\hat{\\mu} =P\\derivd m$. \n\n\\begin{proposition}\\label{prop:conditionW}\nThe condition\n\\begin{equation}\n\\label{eq:W>cc*}\nW > cc^*\n\\end{equation}\nis sufficient for the pair \\eqref{eq:primal_relax_new} and \\eqref{eq:moddualproblem} of dual problems to have optimal solutions. \n\\end{proposition}\n\n\\begin{proof}\nIf $W>cc^*$, then $(q-e)^*W(q-e)\\geq \\langle c, q-e\\rangle ^2$ with equality only for $q=e$. Hence, if $q\\ne e$, $\\|q-e\\|_W > |\\langle c, q-e\\rangle|$, i.e., $h(q) > 0$ for all $q\\in\\bar{\\mathfrak{P}}_+\\setminus\\{0\\}$ except $q=e$. Then we proceed as in the proof of Theorem~\\ref{thm:hardcontraints}.\n\\end{proof}\n\n\\begin{remark}\nCondition \\eqref{eq:W>cc*} guarantees that $0 \\in {\\rm int}(\\mathfrak{S}_W)$ and hence in particular that $\\mathfrak{S}_W\\cap\\mathfrak{C}_+\\ne\\emptyset$ as required in Theorem~\\ref{thm:hardcontraints}. To see this, note that $0\\in\\bar{\\mathfrak{C}}_+$ and that $r=0$ satisfies the hard constraint in \\eqref{eq:primal_relax_new} if $c^*W^{-1}c\\leq 0$. However, since $W>cc^*$, there is a $W_0 > 0$ such that $W = W_0 + cc^*$. Then the well-known Matrix Inversion Lemma (see, e.g., \\cite[p. 746]{lindquist2015linear}) yields\n\\[\n(W_0 + cc^*)^{-1} = W_0^{-1} - W_0^{-1} c (1 + c^*W_0^{-1}c)^{-1} c^* W_0^{-1},\n\\]\nand therefore\n\\[\nc^*W^{-1}c = c^*W_0^{-1}c - c^*W_0^{-1} c (1 + c^*W_0^{-1}c)^{-1} c^* W_0^{-1}c = \\frac{c^*W_0^{-1}c}{1 + c^*W_0^{-1}c} < 1,\n\\]\nwhich establishes that $0 \\in {\\rm int}(\\mathfrak{S}_W)$. However, for $\\mathfrak{S}_W\\cap\\mathfrak{C}_+$ to be nonempty, $r=0$ need not be contained in this set. Hence, condition \\eqref{eq:W>cc*} is not necessary, although it is easily testable. In fact, this provides an alternative proof of Proposition~\\ref{prop:conditionW}.\n\\end{remark}\n\n\\section{On the equivalence between the two problems}\\label{sec:homeomorphism} \n\nClearly $\\mathfrak{S}_W\\cap\\mathfrak{C}_+$ is always nonempty if $c\\in\\mathfrak{C}_+$. Then both the problem \\eqref{eq:primal_relax} with soft constraints and the problem \\eqref{eq:primal_relax_new} with hard constraints have a solution for any choice of W. On the other hand, if $c \\not \\in \\mathfrak{C}_+$, the problem with soft constraints will always have a solution, while the problem with hard constraints may fail to have one for certain choices of $W$. 
However, if the weight matrix in the hard-constrained problem -- let us denote it $W_{\\rm hard}$ -- is chosen in the set $\\mathcal{W}:=\\{W > 0 \\mid \\mathfrak{S}_W\\cap\\mathfrak{C}_+\\ne\\emptyset, p\\not\\in\\mathfrak{S}_W\\}$, then it can be seen from Corollaries \\ref{cor:KKTsoft} and \\ref{cor:KKThard} that we obtain exactly the same solution $\\hat{q}$ in the soft-constrained problem by choosing \n\\begin{equation}\n\\label{eq:Whard2Wsoft}\nW_{\\rm soft}=W_{\\rm hard}\/\\|\\hat{q}-e\\|_{W_{\\rm hard}}. \n\\end{equation}\nWe note that \\eqref{eq:Whard2Wsoft} can be written $W_{\\rm hard}=\\alpha W_{\\rm soft}$, where $\\alpha :=\\|\\hat{q}-e\\|_{W_{\\rm hard}}$. Therefore, substituting $W_{\\rm hard}$ in \\eqref{eq:Whard2Wsoft}, we obtain \n \\begin{displaymath}\nW_{\\rm soft}=\\frac{\\alpha W_{\\rm soft}}{\\|\\hat{q}-e\\|_{\\alpha W_{\\rm soft}}}=\\alpha^{1\/2}\\frac{W_{\\rm soft}}{\\|\\hat{q}-e\\|_{W_{\\rm soft}}},\n\\end{displaymath}\nwhich yields $\\alpha=\\|\\hat{q}-e\\|_{W_{\\rm soft}}^2$. Hence the inverse of \\eqref{eq:Whard2Wsoft} is given by \n\\begin{equation}\n\\label{eq:Wsoft2Whard}\nW_{\\rm hard}=W_{\\rm soft}\\|\\hat{q}-e\\|_{W_{\\rm soft}}^2. \n\\end{equation}\nBy Theorem~\\ref{thm:W2qcont} $\\hat{q}$ is continuous in $W_{\\rm soft}$, and hence, by \\eqref{eq:Wsoft2Whard}, the corresponding $W_{\\rm hard}$ varies continuously with $W_{\\rm soft}$. In fact, this can be strengthened to a homeomorphism between the two weight matrices. \n\n\\begin{theorem}\\label{thm:W2Whom}\nThe map \\eqref{eq:Whard2Wsoft} is a homeomorphism between $\\mathcal{W}$ and the space of all (Hermitian positive definite) weight matrices, and the inverse is given by \\eqref{eq:Wsoft2Whard}.\n\\end{theorem}\n\n\\begin{proof}\n By \\cite[Lemma 2.3]{byrnes2007interior}, a continuous map between two spaces of the same dimension is a homeomorphism if and only if it is injective and proper, i.e., the preimage of any compact set is compact. To see that $\\mathcal{W}$ is open, we observe that $\\mathfrak{S}_W$ is continuous in $W$ and that $\\mathfrak{C}_+$ is an open set. As noted above, the map \\eqref{eq:Wsoft2Whard} -- let us call it $f$ -- is continuous and also injective, as it can be inverted. Hence it only remains to show that $f$ is proper. To this end, we take a compact set $K\\subset\\mathcal{W}$ and show that $f^{-1}(K)$ is also compact. There are two ways this could fail. First, the preimage could contain a singular semidefinite matrix. However this is impossible by \\eqref{eq:Wsoft2Whard}, since $\\|\\hat{q}\\|_\\infty$ is bounded for $W_{\\text{hard}}\\in K$ (Lemma~\\ref{lem:qhatbounded}) and a nonzero scaling of a singular matrix cannot be nonsingular. Secondly, $\\|W_{\\rm soft}\\|_F$ could tend to infinity. However, this is also impossible. To see this, we first show that there is a $\\kappa >0$ such that $\\|p-r\\|_{W_{\\rm hard}^{-1}}\\geq \\kappa$ for all $r\\in\\mathfrak{S}_{W_{\\rm hard}}$ and all $W_{\\rm hard}\\in K$. 
To this end, we observe that the minimum of $\\|p-r\\|_{W^{-1}}$ over all $W\\in K$ and $r$ satisfying the constraint $\\|r-c\\|_{W^{-1}}\\leq 1$ is bounded by \n \\begin{displaymath}\n\\kappa:= \\min_{W\\in K} \\|p-c\\|_{W^{-1}} -1\n\\end{displaymath}\n by the triangle inequality $\\|p-r\\|_{W^{-1}}\\geq\\|p-c\\|_{W^{-1}}-\\|c-r\\|_{W^{-1}}\\geq\\|p-c\\|_{W^{-1}}-1$.\n The minimum is attained, since $K$ is compact, and positive, since $p\\not\\in\\bigcup_{W\\in K}\\mathfrak{S}_{W}$.\nNow, from Corollary~\\ref{cor:KKThard} we see that $\\hat{q}=e$ if and only if $\\hat{r}=p$. The map from $\\hat{q}\\mapsto\\hat{r}$ is continuous in $q=e$. In fact, $\\hat Q$ is uniformly positive in a neighborhood of $e$ and hence the corresponding $\\hat{c}=0$. \nDue to this continuity, if $\\hat{q}\\to e$, then $\\hat{r}\\to p$, which cannot happen since $\\|p-r\\|_{W^{-1}}\\geq \\kappa$ for all $W\\in K$. Thus, since $\\|\\hat{q}-e\\|_{W}$ is bounded away from zero, the preimage $f^{-1}(K)$ of $K$ is bounded. Finally, consider a convergent sequence $(W_k)$ in $f^{-1}(K)$ converging to a limit $W_\\infty$. Since the sequence is bounded and cannot converge to a singular matrix, we must have $W_\\infty >0$, i.e., $W_\\infty\\in f^{-1}(\\mathcal{W})$. By continuity, $f(W_k)$ tends to the limit $f(W_\\infty)$, which must belong to $K$ since it is compact. Hence the preimage $W_\\infty$ must belong to $f^{-1}(K)$. Therefore, $f^{-1}(K)$ is compact as claimed. \n\\end{proof}\n\nIt is illustrative to consider the simple case when $W=\\lambda I$. Then the two maps \\eqref{eq:Whard2Wsoft} and \\eqref{eq:Wsoft2Whard} become\n\\begin{equation}\n\\label{eq:Lambda2lambda}\n\\begin{split}\n\\lambda_{\\rm soft}&=\\frac{\\sqrt{\\lambda_{\\rm hard}}}{\\|\\hat{q}-e\\|_2}\\\\\n\\lambda_{\\rm hard}&=\\lambda_{\\rm soft}^2\\|\\hat{q}-e\\|_2^2\n\\end{split}\n\\end{equation}\nWhereas the range of $\\lambda_{\\rm soft}$ is the semi-infinite interval $(0,\\infty)$, for the homeomorphism to hold $\\lambda_{\\rm hard}$ is confined to \n\\begin{displaymath}\n\\lambda_{\\rm min} <\\lambda< \\lambda_{\\rm max},\n\\end{displaymath}\nwhere $\\lambda_{\\rm min}$ is the distance from $c$ to the cone $\\bar{\\mathfrak{C}}_+$ and $\\lambda_{\\rm max}=\\|c-p\\|$. When $\\lambda_{\\rm soft}\\to\\infty$, $\\lambda_{\\rm hard}\\to\\lambda_{\\rm max}$ and $\\hat{q}\\to e$. If $\\lambda_{\\rm hard}\\geq\\lambda_{\\rm max}$, then the coresponding problem has the trivial unique solution $\\hat{q}= e$, corresponding to the primal solution $\\derivd \\hat{\\mu}=P\\derivd m$.\n\n\n\n\n\nNote that Theorem~\\ref{thm:W2Whom} implies that some continuity results in one of the problems can be automatically transferred to the other problem. 
In particular, we have the following result.\n\n\n\\begin{theorem}\\label{thm:W2qcont_hard} \nLet \n\\begin{equation}\n\\label{eq:Jhard}\n\\mathbb{J}_W(q)=\\langle c, q \\rangle - \\int_{\\mathbb{T}^d} P \\log Q\\, \\derivd m+ \\|q - e\\|_{W}.\n\\end{equation}\nThen the map $W\\mapsto \\hat{q}:=\\argmin_{q\\in\\bar{\\mathfrak{P}}_+} \\mathbb{J}_W(q)$ is continuous.\n\\end{theorem}\n\n\\begin{proof}\nThe theorem follows by noting that $W\\mapsto \\hat{q}:=\\argmin_{q\\in\\bar{\\mathfrak{P}}_+} \\mathbb{J}_W(q)$ can be seen as a composition of two continuous maps, namely the one in Theorem~\\ref{thm:W2qcont} and the one in Theorem~\\ref{thm:W2Whom}.\n\\end{proof}\n\nNext we shall vary also $c$ and $p$, and to this end we introduce a more explicit notation for $\\mathfrak{S}_W$ and $\\mathcal{W}$, namely $\\mathfrak{S}_{c,W}=\\mathfrak{S}_W$ in \\eqref{eq:SW} and \n\\[\n\\mathcal{W}_{c,p}:=\\{W>0\\mid \\mathfrak{S}_{c,W}\\cap\\mathfrak{C}_+\\ne\\emptyset, p\\not\\in\\mathfrak{S}_{c,W}\\}.\n\\]\nThen the corresponding set of parameters \\eqref{eq:Pcal} for the problem with hard constraints is given by\n\\begin{equation}\n\\label{eq:Pcalhard}\n\\mathcal{P}_{\\rm hard}=\\{({\\ca, \\pa, \\Wa})\\mid c\\in \\mathfrak{C}, p\\in \\bar{\\mathfrak{P}}_+\\setminus\\{0\\}, W\\in \\mathcal{W}_{c,p}\\},\n\\end{equation}\nthe interior of which is\n\\begin{equation*}\n{\\rm int}(\\mathcal{P}_{\\rm hard})=\\{({\\ca, \\pa, \\Wa})\\mid c\\in \\mathfrak{C}, p\\in \\mathfrak{P}_+, W\\in \\mathcal{W}_{c,p}\\}.\n\\end{equation*}\nTheorem~\\ref{thm:W2Whom} can now be modified accordingly to yield the following theorem, the proof of which is deferred to the appendix.\n\\begin{theorem}\\label{thm:W2Whom2}\nLet the map $(c,p,W_{\\rm hard})\\mapsto W_{\\rm soft}$ be given by \\eqref{eq:Whard2Wsoft} and the map $(c,p,W_{\\rm soft})\\mapsto W_{\\rm hard}$ by \\eqref{eq:Wsoft2Whard}. Then the map that sends $(c,p,W_{\\rm hard})\\in {\\rm int}(\\mathcal{P}_{\\rm hard})$ to $(c,p,W_{\\rm soft})\\in {\\rm int}(\\mathcal{P})$ is a homeomorphism\n\\end{theorem}\nNote that this theorem is not a strict amplification of Theorem~\\ref{thm:W2Whom} as we have given up the possibility for $p$ to be on the boundary $\\partial \\mathfrak{P}_+$. The same is true for the following modification of Theorem~\\ref{thm:W2qcont_hard}.\n\\begin{theorem}\\label{thm:par2qcont_hard} \nLet $\\mathbb{J}_{{\\ca, \\pa, \\Wa}}(q)$ be as in \\eqref{eq:Jhard}.\nThen the map $({\\ca, \\pa, \\Wa})\\mapsto \\hat{q}:=\\argmin_{q\\in\\bar{\\mathfrak{P}}_+} \\mathbb{J}_{{\\ca, \\pa, \\Wa}}(q)$ is continuous on ${\\rm int}(\\mathcal{P}_{\\rm hard})$.\n\\end{theorem}\n\\begin{proof}\nThe theorem follows immediately by noting that $({\\ca, \\pa, \\Wa}_\\text{hard})\\mapsto \\hat{q}$ can be seen as a composition of two continuous maps, namely $({\\ca, \\pa, \\Wa}_\\text{hard})\\mapsto({\\ca, \\pa, \\Wa}_\\text{soft})$ of Theorem~\\ref{thm:W2Whom2} and $({\\ca, \\pa, \\Wa}_\\text{soft})\\mapsto\\hat{q}$ of Theorem~\\ref{thm:W2qcont}.\n\\end{proof}\nTheorem~\\ref{thm:par2qcont_hard} is a counterpart of Theorem~\\ref{thm:W2qcont} for the problem with hard constraints, except that $p$ is restricted to the interior $\\mathfrak{P}_+$. 
It should be possible to extend the result to hold for all $p\\in\\bar{\\mathfrak{P}}_+\\setminus \\{0\\}$ via a direct proof along the lines of the proof of Theorem~\\ref{thm:W2qcont}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Estimating covariances from data}\\label{sec:covest}\n\nFor a scalar stationary stochastic process $\\{y(t);\\,t\\in\\mathbb{Z}\\}$, it is well-known that the biased covariance estimate \n\\[\nc_k = \\frac{1}{N} \\sum_{t = 0}^{N - k -1} y_t\\bar{y}_{t+k} ,\n\\]\nbased on an observation record $\\{ y_t \\}_{t = 0}^{N-1}$, yields a positive definite Toeplitz matrix, which is equivalent to $c \\in \\mathfrak{C}_+$ \\cite[pp. 13-14]{ahiezer1962some} In fact, these estimates correspond to the ones obtained from the periodogram estimate of the spectrum (see, e.g., \\cite[Sec. 2.2]{stoica1997introduction}). On the other hand, the Toeplitz matrix of the unbiased estimate \n\\[\nc_k = \\frac{1}{N-k} \\sum_{t = 0}^{N -k-1} y_t \\bar{y}_{t+k}\n\\]\nis in general not positive definite.\n\nThe same holds in higher dimensions ($d>1$) where the observation record is $\\{y_{\\boldsymbol t}\\}_{{\\boldsymbol t}\\in \\mathbb{Z}^d_N}$ with $$\\mathbb{Z}^d_N=\\{(\\ell_1,\\ldots, \\ell_d)\\,|\\, 0\\le \\ell_j\\le N_j-1, j=1,\\ldots, d\\}.$$ \nThe unbiased estimate is then given by\n\\begin{equation}\\label{eq:unbiased_est}\nc_{\\boldsymbol k} = \\frac{1}{\\prod_{j=1}^d (N_j-|k_j|)} \\sum_{{\\boldsymbol t} \\in \\mathbb{Z}^d_{N}} y_{\\boldsymbol t} \\bar{y}_{{\\boldsymbol t}+{\\boldsymbol k}},\n\\end{equation}\nand the biased estimate by\n\\begin{equation}\\label{eq:biased_est}\nc_{\\boldsymbol k} = \\frac{1}{\\prod_{j=1}^d N_j} \\sum_{{\\boldsymbol t} \\in \\mathbb{Z}^d_{N}} y_{\\boldsymbol t} \\bar{y}_{{\\boldsymbol t}+{\\boldsymbol k}},\n\\end{equation}\nwhere we define $y_{\\boldsymbol t}=0$ for ${\\boldsymbol t}\\notin \\mathbb{Z}^d_{N}$. The sequence of unbiased covariance estimates does not in general belong to $\\mathfrak{C}_+$, but the biased covariance estimates yields $c\\in\\mathfrak{C}_+$ also in the multidimensional setting. In fact, this can be seen by noting that the biased estimate corresponds to the Fourier coefficients of the periodogram \\cite[Sec. 6.5.1]{dudgeon1984multidimensional}, i.e., if the estimates $c_{\\boldsymbol k}$ are given by \\eqref{eq:biased_est}, then \n\\begin{equation}\\label{eq:periodogram}\n\\Phi_{\\rm periodogram}({\\boldsymbol \\theta}) := \\frac{1}{\\prod_{j=1}^d N_j} \\Big| \\sum_{{\\boldsymbol t} \\in \\mathbb{Z}^d_N} y_{\\boldsymbol t} e^{i({\\boldsymbol t},\\thetab)} \\Big|^2=\\sum_{{\\boldsymbol k} \\in \\mathbb{Z}^d_N-\\mathbb{Z}^d_N} c_{{\\boldsymbol k}} e^{-i(\\kb,\\thetab)} ,\n\\end{equation}\nwhere $\\mathbb{Z}^d_N-\\mathbb{Z}^d_N$ denotes the Minkowski set difference. This leads to the following lemma.\n\n\\begin{lemma}\\label{lem:biased}\nGiven the observed data $\\{ y_{{\\boldsymbol t}} \\}_{{\\boldsymbol t} \\in \\mathbb{Z}^d_N}$, let $\\{ c_{\\boldsymbol k} \\}_{{\\boldsymbol k} \\in \\Lambda}$ be given by \\eqref{eq:biased_est}. Then $c \\in \\mathfrak{C}_+$.\n\\end{lemma}\n\n\\begin{proof}\nGiven $\\{ y_{{\\boldsymbol t}} \\}_{{\\boldsymbol t} \\in \\mathbb{Z}^d_N}$, let $c = \\{ c_{\\boldsymbol k} \\}_{{\\boldsymbol k} \\in \\mathbb{Z}^d_N}$, where $c_{\\boldsymbol k}$ be given by \\eqref{eq:biased_est}. 
\nIn view of \\eqref{innerprod} and \\eqref{eq:periodogram} we have\n\\[\n\\langle c, p \\rangle = \\int_{\\mathbb{T}^d} \\frac{1}{\\prod_{j=1}^d N_j} \\Big| \\sum_{{\\boldsymbol t} \\in \\mathbb{Z}^d_N} y_{\\boldsymbol t} e^{i({\\boldsymbol t},\\thetab)} \\Big|^2 P(e^{i{\\boldsymbol \\theta}}) \\derivd m({\\boldsymbol \\theta}),\n\\]\nwhich is positive for all $p \\in \\bar{\\mathfrak{P}}_+ \\setminus \\{0\\}$. Consequently $c \\in \\mathfrak{C}_+$.\n\\end{proof}\n\nAn advantage of the approximate procedures to the rational covariance extension problem is that they can also be used for cases where the biased estimate is not available, e.g., where the covariance is estimated from snapshots.\n\n\n\\section{Application to spectral estimation}\\label{sec:example}\n\nAs long as we use the biased estimate \\eqref{eq:biased_est}, we may apply exact covariance matching as outlined in Section~\\ref{sec:prel}, whereas in general approximate covariance matching will be required for biased covariance estimates. However, as will be seen in the following example, approximate covariance matching may sometimes be better even if $c\\in\\mathfrak{C}_+$.\n\nIn this application it is easy to determine a bound on the acceptable error in the covariance matching, so we use the procedure with hard constraints. Given data generated from a two-dimensional stochastic system, we test three different procedures, namely (i) using the biased estimate and exact matching, (ii) using the biased estimate and the approximate matching \\eqref{eq:primal_relax_new}, and (iii) using the unbiased estimate and the approximate matching \\eqref{eq:primal_relax_new}. The procedures are then evaluated by checking the size of the error between the matched covariances and the true ones from the dynamical system.\n\n\\subsection{An example} \n\nLet $y_{(t_1, t_2)}$ be the steady-state output of a two-dimensional recursive filter driven by a white noise input $u_{(t_1, t_2)}$. 
Let the transfer function of the recursive filter be\n\\[\n\\frac{b(e^{i\\theta_1}, e^{i\\theta_2})}{a(e^{i\\theta_1}, e^{i\\theta_2})} = \\frac{ \\sum_{{\\boldsymbol k} \\in \\Lambda_+} b_{{\\boldsymbol k}} e^{-i({\\boldsymbol k},{\\boldsymbol \\theta})}}{\\sum_{{\\boldsymbol k} \\in \\Lambda_+} a_{{\\boldsymbol k}} e^{-i({\\boldsymbol k},{\\boldsymbol \\theta})}},\n\\]\nwhere $\\Lambda_+=\\{(k_1,k_2)\\in \\mathbb{Z}^2\\mid 0\\le k_1\\le 2, 0\\le k_2\\le 2 \\}$ and the coefficients are given by\n$b_{(k_1,k_2)}=B_{k_1+1, k_2+1} $ and $a_{(k_1,k_2)}=A_{k_1+1, k_2+1}$, where\n\\begin{align*}\nB = \\!\n{\\scriptscriptstyle\n\\begin{bmatrix}\n \\,0.9 & -0.2\\phantom{0} & 0.05 \\\\\n \\,0.2 & \\phantom{-} 0.3\\phantom{0} & 0.05 \\\\\n -0.05 & -0.05 & 0.1\\phantom{0}\n\\end{bmatrix}}, \\;\nA = \\!\n{\\scriptscriptstyle\n\\begin{bmatrix}\n \\phantom{-}1\\phantom{.0} & \\phantom{-}0.1 & \\phantom{-}0.1 \\\\\n -0.2 & \\phantom{-}0.2 & -0.1 \\\\\n \\phantom{-}0.4 & -0.1 & -0.2\n\\end{bmatrix}\n}.\n\\end{align*}\nThe spectral density $\\Phi$ of $y_{(t_1, t_2)}$, which is shown in Fig.~\\ref{fig:original} and is similar to the one considered in \\cite{ringh2015multidimensional}, is given by\n\\[\n\\Phi(e^{i\\theta_1}, e^{i\\theta_2}) = \\frac{P(e^{i\\theta_1}, e^{i\\theta_2})}{Q(e^{i\\theta_1}, e^{i\\theta_2})} = \\left|\\frac{b(e^{i\\theta_1}, e^{i\\theta_2})}{a(e^{i\\theta_1}, e^{i\\theta_2})}\\right|^2,\n\\]\nand hence the index set $\\Lambda$ of the coefficients of the trigonometric polynomials $P$ and $Q$ is given by $\\Lambda=\\Lambda_+-\\Lambda_+=\\{(k_1,k_2)\\in \\mathbb{Z}^2\\,|\\; |k_1|\\le 2, |k_2|\\le 2 \\}$. Using this example, we perform two different simulation studies.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/ \\spect_est_fig figure3.png}\n\\caption{Log-plot of the original spectrum.}\n\\label{fig:original}\n\\end{figure}\n\n\n\\subsection{First simulation study}\nThe system was simulated for $500$ time steps along each dimension, starting from $y_{(t_1, t_2)} = u_{(t_1, t_2)} = 0$ whenever either $t_1 < 0$ or $t_2 < 0$. Then covariances were estimated from the $9 \\times 9$ last samples, using both the biased and the unbiased estimator. With this covariance data we investigate the three procedures (i), (ii) and (iii) described above. In each case, both the maximum entropy (ME) solutions and solutions with the true numerator are computed.\\footnote{Maximum entropy: $P\\equiv 1$. True numerator: $P=P_{\\text{true}}$.} The weighting matrix is taken to be $W = \\lambda I$, where $\\lambda$ is $\\lambda_{\\text{biased}} := \\| c_{\\text{true}} - c_{\\text{biased}} \\|_2^2$ in procedure (ii) and $\\lambda_{\\text{unbiased}} := \\| c_{\\text{true}} - c_{\\text{unbiased}} \\|_2^2$ in procedure (iii)\n\\footnote{Note that this is the smallest $\\lambda$ for which the true covariance sequence belongs to the uncertainty set $\\{r\\,|\\, \\|r-c\\|_2^2\\le \\lambda\\}$.}\nThe norm of the error%\n\\footnote{Here we use the norm of the covariance estimation error as measure of fit. However, note that this is not the only way to compare accuracy of the different methods. The reason for this choice is that comparing the accuracy of the spectral estimates is not straightforward since it depends on the selected metric or distortion measure.} \n between the matched covariances and the true ones, $\\| \\hat{r} - c_{\\text{true}} \\|_2$, is shown in Table.~\\ref{tab:mean_std_2}. 
The means and standard deviations are computed over the $100$ runs.\n\nThe biased covariance estimates belong to the cone $\\mathfrak{C}_+$ (Lemma~\\ref{lem:biased}), and therefore procedure (i) can be used. The corresponding error in Table~\\ref{tab:mean_std_2} is the statistical error in estimating the covariance. This error is quite large because of a short data record. Using approximate covariance matching in this case seems to give a worse match.\nHowever, approximate matching of the unbiased covariances gives as good a fit as exact matching of the biased ones. \n\n\\begin{table}[bt]\n\\caption{Norm differences $\\| \\hat{r} - c_{\\rm {true}} \\|_2$ for different solutions in the first simulation setup.}\n\\label{tab:mean_std_2}\n\\begin{center}\n\\begin{tabular}{l l l}\n\\toprule\n & Mean & Std. \\\\\n\\midrule\nBiased, exact matching & $3.2374$ & $1.7944$ \\\\\nBiased, approximate matching, ME-solution & $3.7886$ & $1.3274$ \\\\\nBiased, approximate matching, using true $P$ & $3.8152$ & $1.6509$ \\\\\nUnbiased, approximate matching, ME-solution & $3.2575$ & $1.4721$ \\\\\nUnbiased, approximate matching, using true $P$ & $3.2811$ & $1.7787$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\\subsection{Second simulation study}\nIn this simulation the setup is the same as the previous one, except that the simulation data has been discarded if the {\\em unbiased\\\/} estimate belongs to $\\bar{\\mathfrak{C}}_+$. To obtain $100$ such data sets, $414$ simulations of the system were needed. (As a comparison, in the previous experiment $23$ out of the $100$ runs resulted in an unbiased estimate outside $\\bar{\\mathfrak{C}}_+$.)\nAgain, the norm of the error between matched covariances and the true ones is shown in Table~\\ref{tab:mean_std}, and the means and standard deviations are computed over the $100$ runs. \n\nAs before, the biased covariance estimates belong to the cone $\\mathfrak{C}_+$, and therefore procedure (i) can be used.\nComparing this with the results from procedure (ii) suggests that there may be an advantage not to enforce exact matching, although we know that the data belongs to the cone.\nRegarding procedure (iii), we know that the unbiased covariance estimates do not belong to the cone $\\bar{\\mathfrak{C}}_+$, hence we need to use approximate covariance matching. In this example, this procedure turns out to give the smallest estimation error.\n\n\n\n\\begin{table}[bt]\n\\caption{\nNorm differences $\\| \\hat{r} - c_{\\rm {true}} \\|_2$ for different solutions in the second setup, where all unbiased estimate are outside $\\bar{\\mathfrak{C}}_+$.}\n\\label{tab:mean_std}\n\\begin{center}\n\\begin{tabular}{l l l}\n\\toprule\n & Mean & Std. \\\\\n\\midrule\nBiased, exact matching & $2.9245$ & $2.2528$ \\\\\nBiased, approximate matching, ME-solution & $1.9087$ & $1.1324$ \\\\\nBiased, approximate matching, using true $P$ & $1.8532$ & $1.1904$ \\\\\nUnbiased, approximate matching, ME-solution & $1.5018$ & $0.6601$ \\\\\nUnbiased, approximate matching, using true $P$ & $1.4451$ & $0.7296$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n \n\n\n\n\n\n\\section{Application to system identification and texture reconstruction}\\label{sec:ex_texture}\n\nNext we apply the theory of this paper to texture generation via Wiener system identification.\nWiener systems form a class of nonlinear dynamical systems consisting of a linear dynamic part composed with a static nonlinearity as illustrated in Figure \\ref{fig:blockdiagramRep}. 
This is a subclass of so called block-oriented systems \\cite{billings1980identification}, and Wiener system identification is a well-researched area (see, e.g., \\cite{greblicki1992nonparametric} and references therein) that is still very active \\cite{lindsten2013bayesian, wahlberg2015identification, abdalmoaty2016simulated}. Here, we use Wiener systems to model and generate textures.\n\n\\begin{figure}[t\n\\centering\n\\input{figures\/blockdiagram.tex}\n\\caption{A Wiener system with thresholding as static nonlinearity.}\n\\label{fig:blockdiagramRep}\n\\end{figure} \n\nUsing dynamical systems for modeling of images and textures is not new and has been considered in, e.g., \\cite{Chiuso-F-P-05, picci2008modelling}. The setup presented here is motivated by \\cite{barman2015gaussian}, where thresholded Gaussian random fields are used to model porous materials for design of surface structures in pharmaceutical film coatings. Hence we let the static nonlinearity, call it $f$, be a thresholding with unknown thresholding parameter $\\tau$. In our previous work \\cite{ringh2017further} we applied exact covariance matching to such a problem. However, in general there is no guarantee that the estimated covariance sequence $c$ belongs to the cone $\\mathfrak{C}_+$. Consequently, here we shall use approximate covariance matching instead.\n\nThe Wiener system identification can be separated into two parts. We start by identifying the nonlinear part. Using the notations of Figure~\\ref{fig:blockdiagramRep}, let $\\{u_{\\boldsymbol t}; \\, {\\boldsymbol t} \\in \\mathbb{Z}^d\\}$ be a zero-mean Gaussian white noise input, and let $\\{x_{\\boldsymbol t}; \\, {\\boldsymbol t} \\in \\mathbb{Z}^d\\}$ be the stationary output of the linear system, which we assume to be normalized so that $c_{\\mathbf 0}:= \\ExpOp [x_{\\boldsymbol t}^2]=1$. Moreover, let $y_{\\boldsymbol t}=f(x_{\\boldsymbol t})$ where $f$ is the static nonlinearity\n\\begin{equation}\\label{eq:staticnonlin}\nf(x)=\\begin{cases}1 & x>\\tau \\\\0 & \\mbox{otherwise}\\end{cases}\n\\end{equation}\nwith unknown thresholding parameter $\\tau$. Since $ \\ExpOp [y_{\\boldsymbol t}] = 1-\\phi(\\tau)$, where $\\phi(\\tau)$ is the Gaussian cumulative distribution function, an estimate of $\\tau$ is given by $\\tau_{\\rm est}= \\phi^{-1}(1- \\ExpOp [y_{\\boldsymbol t}])$.\n\nNow, let $c_{\\boldsymbol k}^x := \\ExpOp [x_{{\\boldsymbol t}+{\\boldsymbol k}} x_{\\boldsymbol t}]$ be the covariances of $x_{\\boldsymbol t}$, and let $c_{\\boldsymbol k}^y:= \\ExpOp [y_{{\\boldsymbol t}+{\\boldsymbol k}} y_{\\boldsymbol t}] - \\ExpOp [y_{{\\boldsymbol t} + {\\boldsymbol k}}] \\ExpOp [y_{\\boldsymbol t}]$ be the covariances of $y_{\\boldsymbol t}$. As was explained in \\cite{ringh2017further}, by using results from \\cite{price1958useful} one can obtain a relation between $c_{\\boldsymbol k}^y$ and $c_{\\boldsymbol k}^x$, given by \n\\begin{equation}\n\\label{eq:randcrelation}\n\\begin{split}\nc_{\\boldsymbol k}^y = \\int_0^{c_{{\\boldsymbol k}}^x} \\frac{1}{2\\pi \\sqrt{1-s^2}}\\exp\\left(-\\frac{\\tau^2}{1+s}\\right)ds .\n\\end{split}\n\\end{equation}\nThis is an invertible map, which we compute numerically, and given $\\tau_{\\rm est}$ we can thus get estimates of the covariances $c_{\\boldsymbol k}^x$ from estimates of the covariances $c_{\\boldsymbol k}^y$. 
However, even if $c^y$ is is a biased estimate so that $c^y \\in \\mathfrak{C}_+$, $c^x$ may not be a {\\em bona fide} covariance sequence.\n\n\n\n\n\\subsection{Identifying the linear system}\\label{subsec:linear_syst}\n\nSolving \\eqref{eq:primal_relax} or \\eqref{eq:primal_relax_new} for a given sequence of covariance estimates $c$, we obtain an estimate of the absolutely continuous part of the power spectrum $\\Phi$ of that process. In the case $d=1$, $\\Phi =P\/Q$ can be factorized as \n\\[\n\\Phi(e^{i\\theta}) = \\frac{P(e^{i\\theta})}{Q(e^{i\\theta})} = \\frac{|b(e^{i\\theta})|^2}{|a(e^{i\\theta})|^2},\n\\]\nwhich provides a transfer function of a corresponding linear system, which fed by a white noise input will produce an autoregressive-moving-average (ARMA) process with an output signal with precisely the power distribution $\\Phi$ in steady state. For $d\\geq 2$, a spectral factorization of this kind is not possible in general \\cite{dumitrescu2007positive}, but instead there is always a factorization as a sum-of-several-squares \\cite{dritschel2004factorization, geronimo2006factorization},\n\\[\n\\Phi(e^{i{\\boldsymbol \\theta}}) = \\frac{P(e^{i{\\boldsymbol \\theta}})}{Q(e^{i{\\boldsymbol \\theta}})} = \\frac{\\sum_{k = 1}^{\\ell} |b_k(e^{i{\\boldsymbol \\theta}})|^2}{\\sum_{k = 1}^m |a_k(e^{i{\\boldsymbol \\theta}})|^2},\n\\]\nthe interpretation of which in terms of a dynamical system is unclear when $m > 1$. Therefore we resort to a heuristic and apply the factorization procedure in \\cite[Theorem 1.1.1]{geronimo2004positive} although some of the conditions \nrequired to ensure the existence of a spectral factor may not be met.\n(See \\cite[Section 7]{ringh2015multidimensional} for a more detailed discussion.)\n\n\\subsection{Simulation results}\nThe method, which is summarized in Algorithm~\\ref{alg:WienerEst}, is tested on some textures from the Outex database \\cite{ojala2002outex} (available online at \\url{http:\/\/www.outex.oulu.fi\/}). These textures are color images and have thus been converted to binary textures by first converting them to black-and-white and then thresholding them.%\n\\footnote{The algorithm has been implemented and tested in Matlab, version R2015b. The textures have been normalized to account for light inhomogenities using a reference image available in the database. 
The conversion from color images to black-and-white images was done with the built-in function \\texttt{rgb2gray}, and the threshold level was set to the mean value of the maximum and minimum pixel value in the black-and-white image.}\nThree such textures are shown in Figure~\\ref{subfig:texture_one} through \\ref{subfig:texture_three}.\n\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\\algsetup{indent=12pt}\n\n\\begin{algorithm}\n\\caption{\n\\label{alg:WienerEst}\n\\begin{algorithmic}[1]\n\\REQUIRE $(y_{\\boldsymbol t})$\n\\STATE Estimate threshold parameter: $\\tau_{\\rm est}= \\phi^{-1}(1-E[y_{\\boldsymbol t}])$\n\\STATE Estimate covariances: $c_{\\boldsymbol k}^y := E[y_{{\\boldsymbol t}+{\\boldsymbol k}} y_{\\boldsymbol t}] - E[y_{{\\boldsymbol t} + {\\boldsymbol k}}]E[y_{\\boldsymbol t}]$\n\\STATE Compute covariances $c_{\\boldsymbol k}^x := E[x_{{\\boldsymbol t}+{\\boldsymbol k}} x_{\\boldsymbol t}]$ by using \\eqref{eq:randcrelation}\n\\STATE Estimate a rational spectrum using Theorem~\\ref{thm:softcontraints} or \\ref{thm:hardcontraints}\n\\STATE Apply the factorization procedure in \\cite[Theorem 1.1.1]{geronimo2004positive}\n\\ENSURE $\\tau_{\\rm est}$, coefficients for the linear dynamical system\n\\end{algorithmic}\n\\end{algorithm}\nIn this example there is no natural bound on the error, so we use the problem with soft constraints, for which we choose the weight $W = \\lambda I$ with $\\lambda=0.01$ for all data sets. Moreover, we do maximum-entropy reconstructions, i.e., we set the prior to $P \\equiv 1$. The optimization problems are then solved by first discretizing the grid $\\mathbb{T}^2$, in this case in $50 \\times 50$ points (cf. \\cite[Theorem 2.6]{ringh2015multidimensional}), and solving the corresponding problems using the CVX toolbox \\cite{cvx, grant2008graph}. The reconstructions are shown in Figures~\\ref{subfig:recon_texture_one}~-~\\ref{subfig:recon_texture_three}. Each reconstruction seems to provide a reasonable visual representation of the structure of the corresponding original. 
This is especially the case for the second texture.\n\n\n\\begin{figure*}%\n \\centering\n \\subfloat[First texture.\\label{subfig:texture_one}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/granite001-inca-100dpi-00_reg_0point01\/\" figure7_new.png}}%\n \\hfil\n \\subfloat[Second texture.\\label{subfig:texture_two}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/paper010-inca-100dpi-00_reg_0point01\/\" figure7_new.png}}%\n \\hfil\n \\subfloat[Third texture.\\label{subfig:texture_three}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/plastic008-inca-100dpi-00_reg_0point01\/\" figure7_new.png}}%\n \\hfil \n \\subfloat[Reconstruction of \\ref{subfig:texture_one}.\\label{subfig:recon_texture_one}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/granite001-inca-100dpi-00_reg_0point01\/\" figure37_new.png}}%\n \\hfil \n \\subfloat[Reconstruction of \\ref{subfig:texture_two}.\\label{subfig:recon_texture_two}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/paper010-inca-100dpi-00_reg_0point01\/\" figure37_new.png}}%\n \\hfil\n \\subfloat[Reconstruction of \\ref{subfig:texture_three}.\\label{subfig:recon_texture_three}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/plastic008-inca-100dpi-00_reg_0point01\/\" figure37_new.png}}%\n \\hfil\n \\subfloat[Close-up of \\ref{subfig:texture_one}.\\label{subfig:texture_one_closeup}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/granite001-inca-100dpi-00_reg_0point01\/\" figure6.png}}%\n \\hfil\n \\subfloat[Close-up of \\ref{subfig:texture_two}.\\label{subfig:texture_two_closeup}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/paper010-inca-100dpi-00_reg_0point01\/\" figure6.png}}%\n \\hfil\n \\subfloat[Close-up of \\ref{subfig:texture_three}.\\label{subfig:texture_three_closeup}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/plastic008-inca-100dpi-00_reg_0point01\/\" figure6.png}}%\n \\hfil \n \\subfloat[Close-up of \\ref{subfig:recon_texture_one}.\\label{subfig:recon_texture_one_closeup}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/granite001-inca-100dpi-00_reg_0point01\/\" figure36.png}}%\n \\hfil \n \\subfloat[Close-up of \\ref{subfig:recon_texture_two}.\\label{subfig:recon_texture_two_closeup}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/paper010-inca-100dpi-00_reg_0point01\/\" figure36.png}}%\n \\hfil\n \\subfloat[Close-up of \\ref{subfig:recon_texture_three}.\\label{subfig:recon_texture_three_closeup}]{\\includegraphics[width=0.3\\textwidth]{figures\/ \"text_gen\/plastic008-inca-100dpi-00_reg_0point01\/\" figure36.png}}%\n \\caption{In Figures~\\ref{subfig:texture_one}~-~\\ref{subfig:texture_three} three different binary textures, of size $1200 \\times 900$ pixels, are shown. These are obtained from the textures granite001-inca-100dpi-00, paper010-inca-100dpi-00, and plastic008-inca-100dpi-00 in the Outex database, respectively. The textures in Figures~\\ref{subfig:texture_one}~-~\\ref{subfig:texture_three} are used as input $(y_{\\boldsymbol t})$ to Algorithm~\\ref{alg:WienerEst} and in Figures~\\ref{subfig:recon_texture_one}~-~\\ref{subfig:recon_texture_three} the corresponding reconstructed textures of size $500\\times 500$ are shown. 
In Figures~\\ref{subfig:texture_one_closeup}~-~\\ref{subfig:recon_texture_three_closeup} close-ups of size $100\\times 100$ are shown of the original and reconstructed textures (areas marked in Figures~\\ref{subfig:texture_one}~-~\\ref{subfig:recon_texture_three}).\n }%\n \\label{fig:texture_generation}%\n\\end{figure*}\n\n\n\n\n\n\n\\section{Conclusions}\nIn this work we extend the results of our previous paper \\cite{ringh2015multidimensional} on the multidimensional rational covariance extension problem to allow for approximate covariance matching. We have provided two formulations of this problem, and we have shown that they are connected via a homeomorphism. In both formulations we have used weighted 2-norms to quantify the mismatch of the estimated covariances. However, we expect that by suitable modifications of the proofs similar results can be derived for other norms, since all norms have directional derivatives at each point \\cite[p. 49]{deimling1985nonlinear}. \n\nThese results provide a procedure for multidimensional spectral estimation, but in order to obtain a complete theory for multidimensional system identification and realization theory there are still some open problems, such as spectral factorization and interpretations in terms of multidimensional stochastic systems, as briefly discussed in Section~\\ref{subsec:linear_syst}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Preliminaries\\label{sec:rathomotopy}}\nLet $M$ be a multiply connected Riemannian manifold. Then the conjugacy classes in the fundamental group $\\pi_1(M)$ may be mapped bijectively onto the free homotopy classes of closed loops on $M$. A closed geodesic is obtained from each such class by an energy-minimization procedure. Thus, every closed multiply connected Riemannian manifold $M$ carries a closed geodesic. Likewise, every closed simply connected Riemannian manifold $M$ carries a closed geodesic according to Lyusternik and Fet \\cite{lyusternik-fet}. \nThe existence of at least one non-trivial, non-constant, closed geodesic on a closed multiply connected Riemannian manifold follows from properties of the gradient flow of the energy functional. \n\\begin{theorem}\\label{Cartan} Assume $M$ is closed and $\\pi_1(M)$ is non-trivial. There is a closed geodesic in each non-trivial free homotopy class, or equivalently in each non-trivial conjugacy class of the fundamental group $\\pi_1(M)$.\n\\end{theorem}\nThe aforementioned theorem is only valid for $\\pi_1(M)$ non-trivial, i.e. when $M$ is multiply connected. The theorem of Lyusternik--Fet covers the case of $M$ simply connected.\n\\begin{theorem}[Lyusternik--Fet \\cite{lyusternik-fet}]\nAny closed simply connected Riemannian manifold carries at least one closed geodesic.\n\\end{theorem}\n\n\\begin{definition}\nA non-constant closed geodesic is said to be prime if it traces out its image exactly once and if it is not the iterate of another closed geodesic. Similarly, two geodesics are geometrically distinct if their images differ as subsets of $M$.\n\\end{definition} \nTwo geodesics have the same image if and only if their parameterizations differ modulo an affine transformation of $\\mathbb{R}$. Note, in general, prime closed geodesics need not be simple; that is, they may admit self-intersections.
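For illustration only, these notions can be made concrete on the flat torus $T^2=\\mathbb{R}^2\/\\mathbb{Z}^2$: every non-constant closed geodesic is of the form\n\\begin{equation*}\n\\gamma_{(p,q)}(t)=x_0+t(p,q)\\ \\mathrm{mod}\\ \\mathbb{Z}^2, \\qquad (p,q)\\in\\mathbb{Z}^2\\setminus\\{0\\},\\ x_0\\in T^2,\n\\end{equation*}\nand $\\gamma_{(p,q)}$ is prime precisely when $\\gcd(p,q)=1$. For instance, $\\gamma_{(2,4)}$ is the second iterate of $\\gamma_{(1,2)}$ and is therefore not geometrically distinct from it, whereas $\\gamma_{(1,0)}$ and $\\gamma_{(0,1)}$ are geometrically distinct prime closed geodesics.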
We henceforth address the following question.\n\\begin{question}\nDoes every closed Riemannian manifold $M$ admit infinitely many geometrically distinct, non-constant, prime closed geodesics?\n\\end{question}\nThe case for $S^2$ has been proven in the affirmative by Hingston \\cite{hingston}. However, the solution is not known in general. In the case of multiply connected manifolds, we can enumerate a class for which the fundamental group $\\pi_1(M)$ has infinitely many closed geodesics, whereby the minimization procedure produces infinitely many geometrically distinct geodesics, one for each homotopy class. As such, the essence of the problem is reduced to the simply connected case, which was addressed by Gromoll and Meyer \\cite{gromoll-meyer}. \n\n\\begin{notation}\nWe will use the following conventional notations.\n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\Omega M$ denotes the loop space of a manifold $M$.\n\\item $\\mathcal{L} M$ denotes the smooth free loop space on a manifold $M$.\n\\item $\\Lambda M$ denotes the free loop space of Sobolev class $H^1=W^{1,2}$ on a manifold $M$.\n\\item $\\mathbb{F}_p$ denotes the field with $p$ elements for $p\\ge 2$ prime.\n\\item $b_k(\\Lambda M;\\mathbb{F}_p):=\\text{rank }H_k(\\Lambda M;\\mathbb{F}_p)$ is the $k$-th Betti number of $\\Lambda M$ in $\\mathbb{F}_p$ coefficients, which is equivalent to the $k$-th Betti number of $\\mathcal{L} M$ in $\\mathbb{F}_p$ coefficients.\n\\end{enumerate}\n\\end{notation}\n\n\\begin{definition}\nClosed geodesics $\\gamma:S^1=\\mathbb{R}\/\\mathbb{Z}\\to M$ are critical points of the energy functional \n\\begin{equation}\\label{energyfunctional}\nE[\\gamma]=\\int_{S^1}\\|\\dot\\gamma(t)\\|^2dt,\n\\end{equation} which is defined on the free loop space $\\Lambda M=H^1(S^1,M)$ of Sobolev class $H^1$.\n\\end{definition}\nThus, the critical points of $E$ are determined by the topology of the Hilbert manifold $\\Lambda M$. Let the bilinear map $\\nabla:\\Gamma(TM)\\times\\Gamma(TM)\\to\\Gamma(TM)$ on sections of the tangent bundle $TM$ be the Levi-Civita connection on $M$. For a smooth curve $\\gamma: I\\subset\\mathbb{R}\\to M$ parameterized by $t$, the covariant derivative $\\nabla_{\\dot\\gamma}\\dot\\gamma :=\\nabla_{t}\\dot\\gamma$ defines a vector field along $\\gamma$. The curve $\\gamma$ is said to be a geodesic if it satisfies the autoparallel transport equation \n\\begin{equation}\\label{autoparallel}\n\\nabla_t\\dot\\gamma=0.\n\\end{equation} \nA geodesic has constant speed $\\|\\dot\\gamma(t)\\|$ because $\\frac{1}{2}\\frac{d}{dt}\\|\\dot\\gamma\\|^2=\\langle \\nabla_t\\dot\\gamma,\\dot\\gamma\\rangle$. Equation (\\ref{autoparallel}) is a second-order, non-linear, ordinary differential equation with smooth coefficients. \n\nLet $p,q\\in M$ be distinct points, and consider the space of smooth paths defined on $[0,1]$ from $p$ to $q$:\n\\begin{equation}\n\\mathcal{P}(p,q)=\\{\\gamma\\in C^{\\infty}([0,1],M):\\gamma(0)=p,\\gamma(1)=q\\}.\n\\end{equation}\nThe space of such paths $\\mathcal{P}(p,q)$ is a Fr\\'echet manifold and the tangent space at a path $\\gamma$ is defined as $T_{\\gamma}P(p,q)=\\{\\xi\\in\\Gamma(\\gamma^*TM):\\xi(0)=0,\\xi(1)=0\\}$. 
That is, an element $\\xi\\in T_{\\gamma}\\mathcal{P}(p,q)$ uniquely determines a curve \n\\begin{equation}\n\\alpha_{\\xi}:\\mathcal{V}(0)\\subset\\mathbb{R}\\to\\mathcal{P}(p,q),\\quad \\alpha_{\\xi}(s)(t):=\\exp_{\\gamma(t)}(s\\xi(t)),\n\\end{equation} where $\\alpha_{\\xi}(0)=\\gamma$ and $\\evalat[\\big]{\\frac{d}{ds} }{s=0}\\alpha_{\\xi}(s)(t)=\\xi(t),t\\in[0,1]$. The energy functional $E:\\mathcal{P}(p,q)\\to\\mathbb{R}$ for this Fr\\'echet manifold is defined as \n\\begin{equation}\nE[\\gamma]:=\\int_0^1 \\|\\dot\\gamma(t)\\|^2dt.\n\\end{equation} Similarly, the differential of $E$ at $\\gamma$ is a linear map $dE[\\gamma]:T_\\gamma\\mathcal{P}(p,q)\\to\\mathbb{R}$ given by the first variation formula via its action on a smooth section of the tangent bundle $\\xi\\in T_{\\gamma}\\mathcal{P}(p,q)$, \n\\begin{equation}\ndE[\\gamma]\\xi=2\\int_0^1\\langle\\nabla_t\\xi,\\dot\\gamma\\rangle=-2\\int_0^1\\langle\\xi,\\nabla_t\\dot\\gamma\\rangle.\n\\end{equation}\n\\begin{remark}\nThe curve $\\gamma\\in C^{\\infty}([0,1],M)$ is a critical point of $E$ if and only if $\\nabla_t\\dot\\gamma=0$; that is, $\\gamma$ is a geodesic from $p$ to $q$.\n\\end{remark} \nFor our purposes, we examine Banach and, more specifically, Hilbert manifolds. Consider the following space of paths of Sobolev class $H^1$ defined on $[0,1]$ from $p$ to $q$ as the domain for the energy functional:\n\\begin{equation}\n\\Omega(p,q)=\\{\\gamma\\in H^1([0,1],M):\\gamma(0)=p,\\gamma(1)=q\\}.\n\\end{equation}\nSuch paths are continuous so the endpoint conditions are well-defined. The space $\\Omega(p,q)$ is, in fact, a smooth Hilbert manifold with tangent space at $\\gamma\\in \\Omega(p,q)$ given by the vector space of vector fields along $\\gamma$ belonging to the Sobolev class $H^1$ and vanishing at endpoints:\n\\begin{equation}\nT_{\\gamma}\\Omega(p,q)=\\{\\xi\\in H^1(\\gamma^*TM):\\xi(0)=0,\\xi(1)=0\\}.\n\\end{equation}\nMoreover, the energy functional $E:\\Omega(p,q)\\to\\mathbb{R}$ is $C^2$. The critical points are the $H^1$-solutions of the elliptic autoparallel transport equation $\\nabla_t\\dot\\gamma=0$. As such, the geodesic solutions are necessarily smooth. \n\nConsider the length functional \n\\begin{equation}\nL:\\mathcal{P}(p,q)\\to\\mathbb{R}, \\quad L[\\gamma]:=\\int_0^1\\|\\dot\\gamma(t)\\|dt.\n\\end{equation} \nThe functional $L$ is invariant under the action of the infinite-dimensional group of diffeomorphisms of the interval [0,1], i.e. $L[\\gamma(gt)]=L[\\gamma(t)]$ for all $g\\in G:=\\text{Diffeo}([0,1])$. Thus, the critical points of $L$ form infinite-dimensional families of solutions. The degeneracy is rectified by observing that every path $\\gamma\\in\\mathcal{P}(p,q)$ admits a positive reparameterization on $[0,1]$ with constant speed \\cite{alexandru}. A path $\\gamma\\in\\mathcal{P}(p,q)$ is a geodesic if and only if it is a critical point of the length functional $L[\\gamma]$ and has constant speed.\n\nConsider the quotient space $S^1=\\mathbb{R}\/\\mathbb{Z}$ and the space of smooth loops in $M$: \n\\begin{equation}\n\\mathcal{L}M:=C^{\\infty}(S^1,M),\n\\end{equation} which is a Fr\\'echet manifold on which the $2$-dimensional orthogonal group $O(2)=SO(2)\\rtimes\\{\\pm 1\\}$ acts by $(e^{i\\theta}\\cdot\\gamma)(t):=\\gamma(t+\\theta)$ and $(-1\\cdot\\gamma)(t):=\\gamma(-t)$. For economy, we consider the smooth Hilbert manifold\n\\begin{equation}\n\\Lambda M:=H^1(S^1,M),\n\\end{equation} which is the so-called $\\textit{free loop space}$ of $M$.
The inclusion map $\\mathcal{L}M\\hookrightarrow \\Lambda M$ is a homotopy equivalence, and the $O(2)$ action descends naturally to $\\Lambda M$. Equation (\\ref{energyfunctional}) defines a functional $E:\\Lambda M\\to\\mathbb{R}$, whose critical points are smooth periodic or closed geodesics. However, under present considerations, $E$ is $O(2)$-invariant which induces critical point degeneracy. Closed geodesics may be classified crudely by their isotropy groups. In particular, a constant geodesic corresponds to a point in $M$ with isotropy group $O(2)$, whereas the isotropy group of a non-constant geodesic is $\\mathbb{Z}\/k\\mathbb{Z}$ with $1\/k$ its minimal period for $k\\in\\mathbb{Z}^+$. \n\nThe tangent space at a point $\\gamma\\in\\Lambda M$, identified with a path, is $T_{\\gamma}\\Lambda M=H^1(\\gamma^*TM)$, which is the space of sections of the pullback bundle $\\gamma^*TM$ of Sobolev class $H^1$ for $\\gamma:S^1\\to M$. The free loop space $\\Lambda M$ of Sobolev class $H^1$ is a Hilbert manifold with respect to an $H^1$-inner product \n\\begin{equation}\n\\langle\\xi,\\eta\\rangle_1:=\\int_{S^1}\\langle\\xi,\\eta\\rangle+\\int_{S^1}\\langle\\nabla_t\\xi,\\nabla_t\\eta\\rangle=\\langle\\xi,\\eta\\rangle_0+\\langle\\nabla_t\\xi,\\nabla_t\\eta\\rangle_0.\n\\end{equation} \nThe Arzel\\'a-Ascoli theorem implies a compact inclusion $\\Lambda M=H^1(S^1,M)\\hookrightarrow C^0(S^1,M)$. This inclusion may be used to prove the following result.\n\\begin{proposition}[Klingenberg \\cite{klingenberg}]\nThe Riemannian metric on $\\Lambda M$ given by the $H^1$-inner product is complete.\n\\end{proposition}\n\n\\section{Morse Theory\\label{sec:morsetheory}}\n\nThe gradient $\\nabla E$ is a vector field on $\\Lambda M$ defined by the following inner product.\n\\begin{equation}\n\\langle\\nabla E[\\gamma],\\xi\\rangle_1=dE[\\gamma]\\xi=\\langle\\dot\\gamma,\\nabla_t\\xi\\rangle_0, \\quad \\xi\\in H^1(\\gamma^*TM)\n\\end{equation} \nFurthermore, if $\\gamma$ is smooth then $\\langle\\dot\\gamma,\\nabla_t\\xi\\rangle_0=-\\langle\\nabla_t\\dot\\gamma,\\xi\\rangle_0$ such that $\\nabla E[\\gamma]\\in\\Gamma(\\gamma^*TM)$ is the unique periodic solution of the differential equation: \n\\begin{equation}\n\\nabla_t^2\\eta(t)-\\eta(t)=\\nabla_t\\dot\\gamma(t).\n\\end{equation} \nThe following condition of Palais and Smale is necessary to extend Morse theory to the infinite-dimensional setting of Hilbert manifolds $\\Lambda M$.\n\\begin{theorem}[Palais-Smale \\cite{palais-smale}]\\label{Palais-Smale} The energy functional $E:\\Lambda M\\to\\mathbb{R}$ satisfies condition (C) of Palais and Smale:\n\n(C) Let $(\\gamma_m)\\subset\\Lambda M$ be a sequence such that $E[\\gamma_m]$ is bounded and $\\|\\nabla E[\\gamma_m]\\|_1\\to 0$. Then $(\\gamma_m)$ has limit points and every limit point is a critical point of $E$.\n\\end{theorem}\n\nLet $\\text{Crit}(E):=\\{\\gamma\\in\\Lambda M:dE[\\gamma]=0\\}$ denote the set of critical points of the energy functional, i.e., the set of closed geodesics on $M$. For $a\\ge 0$, we denote\n\\begin{align*}\n \\Lambda^{\\le a}:&=\\{\\gamma\\in\\Lambda M:E[\\gamma]\\le a\\}, \\\\\n \\Lambda^{<a}:&=\\{\\gamma\\in\\Lambda M:E[\\gamma]<a\\}.\n\\end{align*}\n\nSuppose $\\gamma\\in \\text{Crit}(E)$ is a critical point.
The $O(2)$-invariance of the energy functional $E$ means that the orbit $O(2)\\cdot \\gamma $ is contained entirely in Crit($E$).\n\\begin{definition} The $\\textit{index}$ $\\lambda(\\gamma)$ of $\\gamma$ is the dimension of the negative eigenspace of the Hessian $d^2E[\\gamma]$.\n\\end{definition}\n\\begin{definition}\nThe nullity $\\nu(\\gamma)$ of $\\gamma$ is the dimension of the kernel of $d^2E[\\gamma]$, i.e. $\\nu(\\gamma):=\\dim\\ker d^2E[\\gamma]$.\n\\end{definition} \nNote, $\\langle\\dot\\gamma\\rangle\\in \\ker d^2E[\\gamma]$ so $\\nu(\\gamma)\\ge 1$ if $\\gamma$ is non-constant because Crit($E$) is invariant under the $S^1$ reparameterization action. The Hessian of $E$ at $\\gamma$ is given by the second variation formula:\n\\begin{equation}\nd^2E[\\gamma](\\xi,\\eta)=-\\int\\langle\\xi,\\nabla_t^2\\eta+R(\\dot\\gamma,\\eta)\\dot\\gamma\\rangle.\n\\end{equation}\n\\begin{definition}\\label{morsefunction}\nA $C^2$-function $f$ defined on a Hilbert manifold is a Morse function if all of its critical points are non-degenerate, which means that the Hessian has a $0$-dimensional kernel at each critical point $c$ so $\\dim\\ker d^2f(c)=0$.\n\\end{definition}\nThe energy functional $E$ defined on $\\Lambda M$ will never satisfy the conditions of Definition \\ref{morsefunction} because the critical points at level $0$ form a closed $\\dim M$-dimensional manifold, so these critical points are never non-degenerate. In a similar vein, the energy functional $E$ is $S^1$-invariant which means that the kernel of the Hessian of $E$ at a non-constant geodesic $\\gamma$ is always at least $1$-dimensional, $\\dim\\ker d^2E[\\gamma] \\ge 1$, because it contains the infinitesimal generator $\\dot\\gamma$ of the $S^1$-action.\n\\begin{definition}[Oancea \\cite{alexandru}] \nA $C^2$-function defined on a Hilbert manifold is said to be a Morse-Bott function if its critical set is a disjoint union of closed connected submanifolds $\\bigsqcup_i\\Sigma_i$ and, for each critical point $c_i$, the kernel of the Hessian at that critical point coincides with the tangent space to the respective connected component of the critical locus, $\\ker d^2f(c_i)\\cong T\\Sigma_i$. As such, the critical set is said to be non-degenerate. \n\\end{definition}\n\\begin{definition}\nThe index $\\lambda(p)$ of a critical point $p$ is defined as $\\lambda(p)=\\dim\\{V_{\\text{max}}\\subset T_pM: d^2f(p)|_{V_{\\text{max}}}<0\\}$, the dimension of the maximal subspace of the tangent space at $p$ on which the Hessian is negative definite.\n\\end{definition}\n\n\\begin{definition} \nThe nullity $\\nu(p)$ of a critical point $p$ is defined as $\\nu(p)=\\dim\\ker d^2f(p)$, the dimension of the kernel of the Hessian at $p$.\n\\end{definition} \n\\begin{theorem}[Palais \\cite{palais-hilbertmanifolds}]\\label{criticalmorse}\nAssume that the Riemannian metric on $M$ is chosen in such a way that the energy functional $E$ on $\\Lambda M$ is Morse-Bott.\n\\begin{enumerate}[label=(\\roman*)]\n\\item The critical values of $E$ are isolated and there are only a finite number of connected components of Crit($E$) at each critical level.\n\\item The index and nullity of each connected component of Crit($E$) are finite.\n\\item If there are no critical values of $E$ in $[a,b]$ then $\\Lambda^{\\le b}$ retracts onto and is diffeomorphic to $\\Lambda^{\\le a}$.\n\\item Let $a<b$ and suppose $c\\in(a,b)$ is the only critical value of $E$ in $[a,b]$. Then $\\Lambda^{\\le b}$ deformation retracts onto the union of $\\Lambda^{\\le a}$ and the negative bundles of the connected components of Crit($E$) at level $c$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{remark}\nFor an isolated non-degenerate critical point with critical value $c$, the sublevel set $\\Lambda^{\\le c+\\varepsilon}$, for $\\varepsilon>0$ sufficiently small, deformation retracts onto the union of the sublevel $\\Lambda^{\\le c-\\varepsilon}$ and a $\\lambda(c)$-cell.
The $\\lambda$-cell is the negative vector bundle over the critical manifold consisting of a single point. Such a retraction is forced by a modification of the negative gradient flow $\\frac{d}{ds}\\phi_s=-\\nabla f(\\phi_s)$ of $f$.\n\\end{remark} \n\\begin{corollary}\nAdopting the aforementioned a posteriori hypotheses, the sublevel set $\\Lambda^{\\le b}$ is diffeomorphic to and deformation retracts onto $\\Lambda^{\\le c}$. Similarly, the sublevel set $\\Lambda^{<c}$ deformation retracts onto $\\Lambda^{\\le a}$.\n\\end{corollary}\n\\begin{theorem}[Gromoll--Meyer \\cite{gromoll-meyer}]\\label{Gromoll\/Meyer}\nIf the sequence of Betti numbers $\\{b_k(\\Lambda M;\\mathbb{F}_p)\\}_{k\\ge 0}$ is unbounded for some field $\\mathbb{F}_p$, then $M$ carries infinitely many geometrically distinct prime closed geodesics.\n\\end{theorem}\nThis criterion applies, in particular, to any simply connected compact globally symmetric space of rank $>1$ whose rational cohomology ring $H^*(M;\\mathbb{Q})$ is not a truncated polynomial ring. In fact, it was shown by Ziller \\cite{ziller} that compact globally symmetric spaces of rank $>1$ are such that the sequence of Betti numbers for their free loop space is unbounded with coefficients in $\\mathbb{F}_2$. On the other hand, compact globally symmetric spaces of rank $1$ have rational cohomology generated by a single element.\n\nWe must address two complications that arise in view of Gromoll and Meyer \\ref{Gromoll\/Meyer}, the first being that iterates $\\gamma_m:S^1\\to M$ with $\\gamma_m(t)=\\gamma(mt), m\\in\\mathbb{Z}$ of a closed geodesic $\\gamma$ are themselves closed geodesics (under the action of infinite-dimensional reparameterization), and thereby constitute a part of the homology of free loop space $\\Lambda M$. Second, we would like to handle degeneracy for closed geodesics. It was shown by Gromoll and Meyer \\cite{gromoll-meyer} that a single degenerate closed geodesic adds to homology by at most $\\nu(\\gamma)$ for degrees contained in $[\\lambda(\\gamma),\\lambda(\\gamma)+\\nu(\\gamma)]$. This result follows from the lemmata which dictate the behavior of index and nullity subject to iteration.\n\\begin{lemma}[Gromoll-Meyer Index Iteration \\cite{gromoll-meyer}]\\label{Index Iteration}\nThe index is either $\\lambda(\\gamma_m)=0$ for all $m\\ge 1$ or there is an $\\varepsilon>0$ and $C>0$ such that for all $m,s\\ge 1$:\n\\begin{equation}\n\\lambda(\\gamma_{m+s})-\\lambda(\\gamma_m)\\ge\\varepsilon s-C.\n\\end{equation}\n\\end{lemma}\n\\begin{lemma}[Gromoll-Meyer Nullity Iteration \\cite{gromoll-meyer}]\\label{Nullity iteration} The nullity is either $\\nu(\\gamma_m)=1$ for all $m\\ge 1$ or there are finitely many iterations $\\gamma_{m_1},\\cdots,\\gamma_{m_s}$ of $\\gamma$ such that for all $m\\ge 1$ there is an $i$ and $j$ for which $m=jm_i$ and $\\nu(\\gamma_m)=\\nu(\\gamma_{m_i})$. That is, the sequence $\\{\\nu(\\gamma_m)\\}_{m\\ge 1}$ only takes on finitely many values. \n\\end{lemma}\n\nBy virtue of a combinatorial argument and the Gromoll-Meyer index and nullity iteration lemmata, if the number of geometrically distinct closed geodesics is bounded, there is a bound on the Betti numbers of $\\Lambda M$.\n\nLet\n\\begin{equation}\nA_{\\gamma}:H^1(\\gamma^*TM)\\to L^2(\\gamma^*TM),\\quad \\xi\\mapsto -\\xi''-R(\\dot\\gamma,\\xi)\\dot\\gamma\n\\end{equation} be the self-adjoint $\\textit{asymptotic elliptic operator}$ at the closed geodesic $\\gamma$ with compact resolvent such that $d^2E[\\gamma](\\xi,\\eta)=\\langle A_{\\gamma}\\xi,\\eta\\rangle_{L^2}$. More precisely, \n\\begin{equation}\n\\begin{split}\n \\langle A_{\\gamma}\\xi,\\eta\\rangle_{L^2}&=\\langle A_{\\gamma}\\eta,\\xi\\rangle_{L^2}\\\\ \n &=\\int\\langle-\\eta''-R(\\dot\\gamma,\\eta)\\dot\\gamma,\\xi\\rangle \\\\\n &= -\\int\\langle\\xi,\\nabla_t^2\\eta+R(\\dot\\gamma,\\eta)\\dot\\gamma\\rangle.\n\\end{split}\n\\end{equation} Its spectrum is discrete with no accumulation point besides $+\\infty$ \\cite{kato}.
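As an elementary consistency check (a special case only, not used below), if the curvature term vanishes along $\\gamma$, so that $R(\\dot\\gamma,\\cdot)\\dot\\gamma\\equiv 0$, then in a parallel trivialization $A_{\\gamma}\\xi=-\\xi''$ on periodic vector fields, and expanding $\\xi$ in a Fourier series gives\n\\begin{equation*}\n\\text{Spec}(A_{\\gamma})=\\{(2\\pi k)^2:k\\in\\mathbb{Z}_{\\ge 0}\\},\n\\end{equation*}\nso that $\\lambda(\\gamma)=0$ and $\\ker A_{\\gamma}$ consists of the parallel vector fields along $\\gamma$; this is consistent with the fact that closed geodesics on flat tori are locally length-minimizing.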
The eigenvectors of $A_{\\gamma}$ are smooth by elliptic regularity. The eigenvalue problem $-\\xi''-R(\\dot\\gamma,\\xi)\\dot\\gamma=-\\lambda\\xi$ has no non-trivial periodic solutions for $\\lambda\\gg 0$. Moreover, the spectrum of the elliptic operator $A_{\\gamma}$ is bounded from below which implies the finiteness of the index of $\\gamma$, given by \n\\begin{equation}\n\\lambda(\\gamma)=\\sum_{\\text{Spec}(A_{\\gamma})\\ni\\lambda<0}\\text{mult}(\\lambda)<+\\infty\n\\end{equation} for $\\text{mult}(\\lambda)$ the multiplicity of the eigenvalue $\\lambda$. The nullity of $\\gamma$ is \n\\begin{equation}\n\\nu(\\gamma)=\\dim\\ker A_{\\gamma},\n\\end{equation} for which we have the bounds $1\\le\\nu(\\gamma)\\le 2n-1$. The first inequality is due to $\\langle\\dot\\gamma\\rangle\\in\\ker A_{\\gamma}$, and the second inequality follows because an element in $\\ker A_{\\gamma}$ solves a second-order ordinary differential equation.\n\nIt was shown by Takens and Klingenberg that for a $C^4$-generic metric on a closed manifold $M$, there exist closed geodesics of $\\textit{twist type}$, for which the eigenvalues of the Poincar\\'e return map have modulus unity, or there exist closed geodesics of $\\textit{hyperbolic type}$, for which the eigenvalues of the Poincar\\'e return map lie outside or inside the unit circle. The Birkhoff-Lewis fixed point theorem implies that in the former case there are infinitely many prime closed geodesics in a neighborhood of the twist geodesic. It is then left to show that this is true for hyperbolic geodesics. It should be noted that if all geodesic orbits are hyperbolic, then $\\lambda(\\gamma_m)=m\\lambda(\\gamma)$.\n\\begin{theorem}[Rademacher \\cite{rademacher}]\\label{Rademacher} \nLet $M$ be a simply connected closed Riemannian manifold with rational cohomology a truncated polynomial ring in one even variable and all geodesics hyperbolic. Then there are infinitely many geometrically distinct hyperbolic geodesics.\n\\end{theorem}\n\\begin{theorem} \nLet $M$ be a simply connected closed Riemannian manifold with rational homotopy type that of a rank $1$ symmetric space (i.e. $S^n,\\mathbb{CP}^n$, $\\mathbb{HP}^n$, $\\mathbb{P}^2(\\mathbb{O})$) and suppose all closed geodesics on $M$ are hyperbolic. Then the number $\\pi(x)$ of prime closed geodesics of length less than or equal to $x$ is on the order of \n\\begin{equation}\n\\pi(x)=|\\{\\gamma\\in \\Lambda M: L[\\gamma]\\le x\\}| \\sim\\frac{x}{\\log x}.\n\\end{equation}\n\\end{theorem} Recall that the isotropy group at a non-constant closed geodesic is $\\mathbb{Z}_m$ with $1\/m$ the minimal period. Since $\\mathbb{Z}_m,m\\ge 2$ are $\\mathbb{Q}$-acyclic, we have the following isomorphism on equivariant cohomology:\n\\begin{equation*} H_{\\mathbb{Z}_m}^*(\\Lambda M,M;\\mathbb{Q})\\cong H^*(\\Lambda M\/{\\mathbb{Z}_m}, M;\\mathbb{Q})\n\\end{equation*} where $H_{\\mathbb{Z}_m}^*(\\bullet):=H^*({\\bullet}_{\\mathbb{Z}_m})$ is $\\mathbb{Z}_m$-equivariant cohomology.\n\nThe most significant consequence of $O(2)$-invariance for $E$ is that the $m$-fold iterate $\\gamma_m$ of a closed geodesic adds to $H_{\\bullet}^{O(2)}(\\Lambda M)$ by $H_{\\bullet}(B\\mathbb{Z}_m)$. Thus \\cite{brown}, \n\\begin{align*}\n H_{\\bullet}(B\\mathbb{Z}_m;\\mathbb{Z}) &= \\begin{cases} \n \\mathbb{Z}, & \\text{if }\\bullet=0, \\\\\n \\mathbb{Z}_m, & \\text{if }\\bullet\\text{ odd}, \\\\\n 0, & \\text{if }\\bullet>0\\text{ even}.
\n \\end{cases}\n\\end{align*} By the universal coefficient theorem, we deduce that \n\\begin{align*}\nH_{\\bullet}(B\\mathbb{Z}_m;\\mathbb{Z}_{\\ell})&= \\begin{cases} \n \\mathbb{Z}_{\\ell}, & \\text{if }\\bullet=0, \\\\\n \\mathbb{Z}_d, & \\text{if }\\bullet>0,\n \\end{cases}\n\\end{align*} where $d=\\text{gcd}(m,\\ell)$, and \n\\begin{align*}\nH_{\\bullet}(B\\mathbb{Z}_m;\\mathbb{Q})= \n\\begin{cases} \n \\mathbb{Q}, & \\text{if }\\bullet=0, \\\\\n 0, & \\text{if }\\bullet>0. \n \\end{cases}\n\\end{align*}\n\n\\section{Rational Homotopy Theory\\label{sec: homotopytheory}}\n\nAn important consequence of rational homotopy theory is that every simply connected closed Riemannian manifold $M$ with rational cohomology not isomorphic to a truncated polynomial ring has infinitely many geometrically distinct prime closed geodesics.\n\\begin{definition}\nSuppose $M,N$ are simply connected topological spaces. A continuous map $f:M\\to N$ is called a rational homotopy equivalence if it induces an isomorphism on homotopy groups tensored with $\\mathbb{Q}$.\n\\end{definition}\n\\begin{definition}\nA rational space is a simply connected CW complex whose homotopy groups are vector spaces over $\\mathbb{Q}$.\n\\end{definition}\n\\begin{definition}\nThe rationalization of a simply connected CW complex $M$ is the rational space $M_{\\mathbb{Q}}$ for which the map from $M$ to $M_{\\mathbb{Q}}$ induces an isomorphism on rational homotopy and homology, i.e. $\\pi_k(M_{\\mathbb{Q}})\\cong\\pi_k(M)\\otimes\\mathbb{Q}$ and, by the universal coefficient theorem, $H_k(M_{\\mathbb{Q}},\\mathbb{Z})\\cong H_k(M,\\mathbb{Z})\\otimes\\mathbb{Q}\\cong H_k(M,\\mathbb{Q})$ for all $k>0$.\n\\end{definition}\nThe rational cohomology ring $H^*(M,\\mathbb{Q})$ is a graded commutative algebra and $\\pi_*(M)\\otimes\\mathbb{Q}$ is a graded homotopy Lie algebra. Let $\\Omega M$ be the $\\textit{loop space}$ of $M$; then $\\pi_*(\\Omega M)\\otimes\\mathbb{Q}$ is also a graded Lie algebra by the Whitehead product structure. \n\\begin{theorem}\nThe homology of loop space $H_{\\bullet}(\\Omega M,\\mathbb{Q})$ is the (graded) universal enveloping algebra of the (graded) Lie algebra $\\pi_{\\bullet}(\\Omega M,\\mathbb{Q})\\cong\\pi_{\\bullet-1}(M,\\mathbb{Q})$.\n\\end{theorem}\n\\begin{definition}\nLet $M$ be a simply connected (CW complex) space with $H^*(M,\\mathbb{Q})$ finite-dimensional over $\\mathbb{Q}$. The space $M$ is said to be rationally elliptic if the homotopy Lie algebra $\\pi_*(M)\\otimes\\mathbb{Q}$ is finite-dimensional over $\\mathbb{Q}$, or else it is rationally hyperbolic.\n\\end{definition}\n\nAccording to F\\'elix and Halperin \\cite{felix-halperin}, if $M$ is rationally hyperbolic, there exists an $\\alpha>1$ and an integer $N$ such that for all $n\\ge N$,\n\\begin{equation}\n\\sum_{i=1}^n\\dim_{\\mathbb{Q}}\\pi_i(M)\\otimes\\mathbb{Q}\\ge \\alpha^n.\n\\end{equation}\n\\begin{remark}\nLet $M$ be a simply connected closed Riemannian manifold. Then the rational cohomology ring $H^*(M;\\mathbb{Q})$ requires more than one generator if and only if $\\pi_{\\text{odd}}(M)\\otimes\\mathbb{Q}$ is more than $1$-dimensional, i.e. $\\dim\\pi_{\\text{odd}}(M)\\otimes\\mathbb{Q}>1$.\n\\end{remark}\n\\begin{proof}\nA basis for rational homotopy $\\pi_k(M)\\otimes\\mathbb{Q}$ corresponds to a set of algebra generators of the minimal model.
Then the claim follows from Proposition 1 of Vigu\\'e-Poirrier and Sullivan \\cite{poirrier-sullivan}.\n\\end{proof}\n\\begin{corollary}\nThe rational space $M_{\\mathbb{Q}}$ cannot be a $K(G,n)$ Eilenberg--MacLane space for $G$ singly generated if $\\dim\\pi_{\\text{odd}}(M)\\otimes\\mathbb{Q}>1$.\n\\end{corollary}\n\\begin{proof}\nSuppose the rational space $M_{\\mathbb{Q}}$ is a $K(G,n)$ Eilenberg--MacLane space. Then for $n$ a positive integer, $\\pi_n(K(G,n))\\cong G$, and $\\pi_i(M_{\\mathbb{Q}})=0$ for all $i\\ne n$. Thus, $\\dim\\pi_i(M_{\\mathbb{Q}})=\\dim\\pi_i(M)\\otimes\\mathbb{Q}=\\delta_{in}$, which implies that $\\dim\\pi_{\\text{odd}}(M)\\otimes\\mathbb{Q}\\le 1$. It follows that $H^*(M;\\mathbb{Q})$ is singly generated so $M_{\\mathbb{Q}}\\ne K(G,n)$, a contradiction. \n\\end{proof} \n\\begin{lemma}\nIf the rational cohomology ring $H^*(M;\\mathbb{Q})$ is not singly generated, $M_{\\mathbb{Q}}=M\\times E\\mathbb{Q}\\ne K(G,n)$ for any positive integer $n$ or group $G$ acting freely on $M$.\n\\end{lemma}\n\\begin{theorem}[Hurewicz Theorem]\\label{hurewicz}\nIf $X$ is an $(k-1)$-connected topological space, then for a positive integer $n$, the group homomorphism\n\\begin{equation*}\nf_*:\\pi_n(X)\\to H_n(X)\n\\end{equation*} is an isomorphism for all $n\\le k$ and an abelianization for $n=1$.\n\\end{theorem}\n\\begin{definition}\nLet $G$ be an abelian group. The group $G \\otimes_{\\mathbb{Z}} \\mathbb{Q}$ is the divisible hull of $G$. The dimension of $G \\otimes_{\\mathbb{Z}} \\mathbb{Q}$ over $\\mathbb{Q}$ is the rational rank of $G$. We denote by $\\text{rank }_{\\mathbb{Q}}(G)$ the rational rank of $G$. If $G$ is torsion free then the map $G\\to G\\otimes\\mathbb{Q}$ is injective and the rank of $G$ is the minimum dimension of the $\\mathbb{Q}$-vector space containing $G$ as an abelian subgroup.\n\\end{definition} \n\\begin{lemma}\\label{rathyperbolic}\nIf $M$ is a rationally hyperbolic manifold then $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}\\ge 2$ and $\\dim_{\\mathbb{Q}}\\pi_k(M)\\otimes\\mathbb{Q}>\\alpha^{k-1}$ for an integer $k>1$ and a real number $\\alpha>1$. Moreover, $\\dim_{\\mathbb{Q}}\\pi_{\\text{odd}}(M)\\otimes\\mathbb{Q}>1$ so the rational cohomology ring $H^*(M;\\mathbb{Q})$ has at least two generators.\n\\end{lemma}\n\\begin{proof}\nIf $M$ is rationally hyperbolic, there exists a real number $\\alpha>1$ and an integer $N$ such that \n\\begin{equation*}\n\\sum_{i=1}^n\\dim_{\\mathbb{Q}}\\pi_i(M)\\otimes\\mathbb{Q}\\ge \\alpha^n\n\\end{equation*} for all $n\\ge N$. If $n=1$ then $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}\\ge \\alpha$ for $\\alpha>1$ so $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}>1$. If $n=2$ then $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}+\\dim_{\\mathbb{Q}}\\pi_2(M)\\otimes\\mathbb{Q}\\ge \\alpha^2$. Assume $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}= \\alpha$ so that $\\dim_{\\mathbb{Q}}\\pi_2(M)\\otimes\\mathbb{Q}\\ge \\alpha^2-\\alpha$. If $n=3$ then $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}+\\dim_{\\mathbb{Q}}\\pi_2(M)\\otimes\\mathbb{Q}+\\dim_{\\mathbb{Q}}\\pi_3(M)\\otimes\\mathbb{Q}\\ge \\alpha^3$. Let $\\dim_{\\mathbb{Q}}\\pi_2(M)\\otimes\\mathbb{Q}=\\alpha^2-\\alpha$ so $\\alpha+(\\alpha^2-\\alpha)+\\dim_{\\mathbb{Q}}\\pi_3(M)\\otimes\\mathbb{Q}\\ge \\alpha^3$ or $\\dim_{\\mathbb{Q}}\\pi_3(M)\\otimes\\mathbb{Q}\\ge \\alpha^3-\\alpha^2$. It follows by induction that $\\dim_{\\mathbb{Q}}\\pi_k(M)\\otimes\\mathbb{Q}\\ge \\alpha^k-\\alpha^{k-1}$ for $\\alpha>1$. 
More generally, $\\dim_{\\mathbb{Q}}\\pi_k(M)\\otimes\\mathbb{Q}\\ge \\alpha^{k-1}(\\alpha-1)>\\alpha^{k-1}$ or $\\dim_{\\mathbb{Q}}\\pi_k(M)\\otimes\\mathbb{Q}> \\alpha^{k-1}$ for $\\alpha>1$. As per the above, for $k=1$, it is already known that $\\dim_{\\mathbb{Q}}\\pi_1(M)\\otimes\\mathbb{Q}>1$. \n\\end{proof}\n\nAnother program for determining the dimension of $\\pi_i(M)\\otimes\\mathbb{Q}$ is by way of minimal models. If $M$ is a closed, simply connected smooth manifold, then the rank of the rational homotopy groups $\\pi_i(M)\\otimes\\mathbb{Q}$ equals the number of degree $i$ generators introduced in the construction of the minimal model for $M$. For manifolds that are $\\textit{formal}$, e.g. if $M$ is K\\\"{a}hler, their cohomology ring with trivial differential is quasi-isomorphic as a differential graded algebra to the minimal model. Thus, we can use the cohomology ring to determine the rank of rational homotopy groups.\n\n\\begin{theorem}\\label{betti-inequality}\nLet $E:\\Lambda M\\to\\mathbb{R}$ be a Morse function on the free loop space of Sobolev class $H^1$. Let $C_{\\lambda}$ denote the number of critical points of the energy functional $E$ of index $\\lambda$. Then we have the following constraints:\n\\begin{equation*}\n\\begin{split}\n& \\text{rank } H_{\\lambda}(\\Lambda M)=b_{\\lambda}\\le C_{\\lambda} \\text{ for each } \\lambda,\\\\ &\n\\chi(\\Lambda M)=\\sum_{\\lambda}(-1)^{\\lambda}C_{\\lambda}.\n\\end{split}\n\\end{equation*}\n\\end{theorem}\n\\begin{proof}\nLet $\\Lambda_1$ be obtained from $\\Lambda_0$ by attaching a handle of index $\\lambda$ for $\\Lambda_0\\subset \\Lambda_1$, then $H_k(\\Lambda_1,\\Lambda_0)=H_k(D^{\\lambda},S^{\\lambda-1})$. Thus, $H_k(\\Lambda_1,\\Lambda_0)=\\mathbb{Z}$ if $k=\\lambda$ and $0$ otherwise. Denote by $c_1<c_2<\\cdots$ the critical values of $E$; passing through each critical level attaches a handle for every critical point at that level, and the stated inequalities follow by counting the resulting cells.\n\\end{proof}\n\nSuppose the Euler characteristic $\\chi(M)>0$ and let $\\pi_i(M)\\otimes\\mathbb{Q}=0$ for $i\\le\\dim M-1$. Let $n:=\\dim M$ such that \n\\begin{equation}\n\\begin{split}\n\\chi(M)&=\\sum_{\\lambda=0}^{\\infty}(-1)^{\\lambda}\\text{rank }H_{\\lambda}(M;\\mathbb{Q})\\\\\n&=\\sum_{\\lambda=0}^{\\infty}(-1)^{\\lambda}\\dim\\pi_{\\lambda}(M)\\otimes\\mathbb{Q}\\\\&\n=(-1)^n\\dim\\pi_n(M)\\otimes\\mathbb{Q}>0.\n\\end{split}\n\\end{equation} Recall the isomorphism $\\pi_n(\\Lambda M;\\mathbb{Q})\\cong\\pi_{n+1}(M;\\mathbb{Q})$, which implies $\\pi_n(\\Lambda M_{\\mathbb{Q}})=\\pi_n(\\Lambda M)\\otimes\\mathbb{Q}\\cong\\pi_{n+1}(M)\\otimes\\mathbb{Q}\\cong H_{n+1}(M)\\otimes\\mathbb{Q}$ where it is assumed that $\\pi_i(M)\\otimes\\mathbb{Q}=0$ for $i\\le n-1.$ Then \n\\begin{equation}\n\\begin{split}\nH_{n-1}(\\Lambda M_{\\mathbb{Q}};\\mathbb{Z})\\cong H_{n-1}(\\Lambda M;\\mathbb{Q})&\\cong\\pi_{n}(M;\\mathbb{Q})\\cong\\pi_n(M)\\otimes\\mathbb{Q}\\\\&\\cong H_n(M)\\otimes\\mathbb{Q}\\cong H_n(M;\\mathbb{Q}),\n\\end{split}\n\\end{equation} where the first isomorphism follows from the rational Hurewicz theorem and the last by $H_i(M_{\\mathbb{Q}};\\mathbb{Z})\\cong H_i(M;\\mathbb{Z})\\otimes\\mathbb{Q}\\cong H_i(M;\\mathbb{Q})$. We a priori assume the odd Betti numbers $b_{2i+1}$ vanish. Then we obtain the following bounds:\n\\begin{equation}\n\\begin{split}\n0<\\chi(M)&=\\sum_{i=0}^{\\dim M}(-1)^ib_i(M;\\mathbb{Q})\\\\\n&=\\sum_{i=1}^{\\dim M}(-1)^{i-1}b_{i-1}(\\Lambda M;\\mathbb{Q})\\\\\n&\\le\\sum_{i=0}^{\\infty}(-1)^i\\text{rank }H_i(\\Lambda M;\\mathbb{Q})=\\chi(\\Lambda M)\n\\end{split}\n\\end{equation} so $0<\\chi(M)\\le\\chi(\\Lambda M)$ or $\\chi(\\Lambda M)>0$. \n\\begin{theorem}\nLet $M$ be a closed $(\\dim M-1)$-connected Riemannian manifold.
If the Euler characteristic $\\chi(\\Lambda M)$ of free loop space of Sobolev type $H^1=W^{1,2}$ is positive, then $M$ is rationally elliptic. \n\\end{theorem}\nBott's conjecture posits that any simply connected closed Riemannian manifold with nonnegative sectional curvature should be rationally elliptic. Assuming Bott's conjecture holds, if the sectional curvature of $M$ is nonnegative, i.e. $K\\ge 0$, then $M$ is rationally elliptic. As before, let $M$ be a closed orientable even $n$-dimensional Riemannian manifold without boundary, and suppose $\\Omega$ is the curvature form of the Levi-Civita connection $^{\\text{L.C.}}\\nabla$ associated with $M$ of $\\dim M:=n=2k$, $k\\in\\mathbb{N}$. Then $\\Omega$ is an $\\frak{so}(n)$-valued $2$-form on $M$, so it is a skew-symmetric $n$-by-$n$ matrix of $2$-forms over $\\wedge^{\\text{even}}T^*M$. Let $Pf(\\Omega)$ denote the $n$-form Pfaffian. Since the sectional curvature is nonnegative, by the generalized Gauss-Bonnet theorem,\n\\begin{equation}\n\\begin{split}\n\\chi(M)&=\\frac{1}{\\sqrt{(2\\pi)^n}}\\int_M Pf(\\Omega)\\\\&=\\frac{1}{\\sqrt{(2\\pi)^n}}\\int_Mi^{\\frac{n^2}{4}}\\exp\\left(\\frac{1}{2}\\text{Tr}\\log((\\sigma_y\\otimes I_{n\/2})^T\\cdot\\Omega) \\right)\\ge 0.\n\\end{split}\n\\end{equation}\n\nThe prime geodesic theorem describes the asymptotic distribution of prime geodesics on an $n$-dimensional hyperbolic manifold $M=\\mathbb{H}^n\/\\Gamma$ with $\\Gamma$ a discrete subgroup of $SO_{(1,n)}^{+}\\mathbb {R}$.\n\\begin{theorem} [Prime Geodesic Theorem \\cite{sarnak}]\nLet $M$ be a hyperbolic manifold of dimension $\\dim M=m+1$. Furthermore, let $\\Gamma=\\pi_1(M)$ be its fundamental group. For any element $\\gamma\\in\\Gamma$ there exists a closed geodesic representative $\\gamma$ in $M$. Let $\\ell(\\gamma)$ denote the length of the geodesic $\\gamma$, $N(\\gamma):=e^{\\ell(\\gamma)}$ its norm, and $\\pi_{\\Gamma}(x)$ the number of primitive elements $\\gamma\\in\\Gamma$ such that $N(\\gamma)\\le x$. Moreover, let $\\{z_1,\\dots,z_N\\}$ denote the zeros of the Selberg zeta function and $\\text{li}(x)=\\int_0^x\\frac{dt}{lnt}$ the logarithmic integral. Then $\\pi_{\\Gamma}(x)$ satisfies the following equality.\n\\begin{equation}\n\\pi_{\\Gamma}(x)=\\text{li}(x^m)+\\sum_{n=0}^Nli(x^{z_n})+(\\text{error term})\n\\end{equation}\n\\end{theorem}\nAs before, let $C_{\\lambda}$ denote the number of critical points of $E:\\Lambda M\\to\\mathbb{R}$ of index $\\lambda$ where the index of a critical point $\\gamma\\in\\Lambda M$ is $\\lambda(\\gamma):=\\dim\\{V_{\\text{max}}\\subset T_{\\gamma}\\Lambda M:d^2E<0\\}<+\\infty$. Consider the energy functional $E:\\Lambda M\\to\\mathbb{R}$ for $\\Lambda M$ an infinite-dimensional Hilbert manifold. For $a,b\\in\\mathbb{R}$ regular values of $E$, let $C_{\\lambda}(a,b)$ denote the number of critical points of index $\\lambda$ in $E^{-1}[a,b]$ with homology rank $r_{\\lambda}(a,b)=\\text{rank }(H_{\\lambda}({\\Lambda M}^b,{\\Lambda M}^a))$ and torsion rank $t_{\\lambda}(a,b)=t(H_{\\lambda}({\\Lambda M}^b,{\\Lambda M}^a))$ for ${\\Lambda M}^c=E^{-1}(-\\infty,c]$. That is,\n\\begin{equation}\n\\begin{split}\n& r_{\\lambda}(a,b)+t_{\\lambda}(a,b)+t_{\\lambda -1}(a,b)\\le C_{\\lambda}, \\\\\n& \\sum_{i=0}^{\\lambda}(-1)^{\\lambda-i}r_i(a,b)\\le\\sum_{i=0}^{\\lambda}(-1)^{\\lambda-i}C_i,\n\\end{split}\n\\end{equation} for $\\lambda=0,1,\\dots$. For sufficiently large $\\lambda$, the last inequality becomes an equality. 
Thus, for $\\chi(\\Lambda M)=\\sum_{\\lambda=0}^{\\infty}(-1)^{\\lambda}b_{\\lambda}(\n\\Lambda M; \\mathbb{Q})$ and $b_{\\lambda}+t_{\\lambda}+t_{\\lambda-1}\\le C_{\\lambda}$, we have $b_{\\lambda}\\le C_{\\lambda}$. Therefore the total number of primitive elements $\\gamma\\in\\Gamma$ is\n\\begin{equation*}\n\\lim_{t\\to\\infty^-}\\int_0^t\\pi_{\\Gamma}(x)dx=\\int_0^{\\infty}\\pi_{\\Gamma}(x)dx.\n\\end{equation*} Recall, the total number of critical points is given by the sum over all indices of the number of critical points of a specific index, i.e., $\\sum_{\\lambda}C_{\\lambda}$. As such,\n\\begin{equation}\n\\sum_{\\lambda}C_{\\lambda}=\\sum_{\\lambda}b_{\\lambda}(\\Lambda M;\\mathbb{Q})=\\int_0^{\\infty}\\pi_{\\Gamma}(x)dx.\n\\end{equation} That is, for $\\Lambda M=\\mathbb{H}^{\\infty}\/\\Gamma$ hyperbolic, \n\\begin{equation*}\n\\begin{split}\n\\sum_{\\lambda=0}^{\\dim\\Lambda M}b_{\\lambda}(\\Lambda M;\\mathbb{Q})&=\\int_0^{\\infty}\\text{li}(x^{\\dim \\Lambda M-1})dx+\\sum_{n=0}^N\\text{li}(x^{z_n})dx+(\\text{error term})dx \\\\\n&=\\lim_{k\\to\\infty}\\int_0^{\\infty}\\text{li}(x^{k-1})dx+\\sum_{n=0}^N\\int_0^{\\infty}\\text{li}(x^{z_n})dx+\\int_0^{\\infty}(\\text{error term})dx.\n\\end{split}\n\\end{equation*} Moreover, for $\\Lambda M=\\mathbb{H}^{\\infty}\/\\Gamma$ hyperbolic with $\\Gamma\\subset SO^+_{(1,\\infty)}\\mathbb{R}$, the \\textit{Poincar\\'e polynomial} of $\\Lambda M$ is the generating function of Betti numbers of $\\Lambda M$, $P_{\\Lambda M}(z)=b_0(\\Lambda M)+b_1(\\Lambda M)z+b_2(\\Lambda M)z^2+\\cdots$, so $P_{\\Lambda M}(1)=\\sum_{\\lambda=0}^{\\infty}b_{\\lambda}(\\Lambda M)=\\int_0^{\\infty}\\pi_{\\Gamma}(x)dx$.\n\\begin{lemma} By the main result \\ref{main result}, \n\\begin{equation*}\n\\int_0^{\\infty}\\pi_{\\Gamma}(x)dx=P_{\\Lambda M}(1)=+\\infty.\n\\end{equation*}\nThus, there are infinitely many such prime closed geodesics.\n\\end{lemma}\nWe remark that we have only shown this to be true for $\\Lambda M$ hyperbolic. Thus, we resort to a holonomic classification of simply connected manifolds due to Berger. \n\n\\section{Holonomy, The Berger Classification, and The Classification of Symmetric Spaces \\label{sec: holonomyclassification}}\nWe begin with preliminary definitions of reducibility and irreducibility of holonomy representations.\n\\begin{definition}\nThe Riemannian holonomy of a Riemannian manifold $(M,g)$ is the holonomy of the Levi-Civita connection on the tangent bundle of the manifold.\n\\end{definition}\n\\begin{definition}\nA Riemannian manifold $(M,g)$ is said to be (resp. locally) reducible if it is (resp. locally) isometric to a Riemannian product.\n\\end{definition}\n\\begin{theorem}[de Rham \\cite{deRham}]\\label{de Rham} If a Riemannian manifold is complete, simply connected, and if its holonomy representation is reducible, then $(M,g)$ is a Riemannian product. 
\n\\end{theorem}\n\nBerger provided a complete classification of holonomy for simply connected Riemannian manifolds which are irreducible and non-symmetric (see Table \\ref{table}).\n\n\\begin{table}[H]\n\\centering\n \\begin{tabular}{ | c | c | c | c | }\n \\hline\n $\\text{Hol}(g)$ & $\\dim_{\\mathbb{R}}(M)$ & $G$-structure & Description \\\\ \\hline\n $SO(n)$ & n & Orientable manifold & - \\\\ \\hline\n $U(n)$ & $2n$ & K\\\"{a}hler & K\\\"{a}hler \\\\ \\hline\n $SU(n)$ & $2n$ & Calabi-Yau Manifold & Ricci-flat, K\\\"{a}hler\\\\ \\hline\n $Sp(n)\\cdot Sp(1)$ & $4n$ & Quaternion-K\\\"{a}hler manifold & Einstein\\\\ \\hline\n $G_2$ & $7$ & $G_2$ manifold & Ricci-flat\\\\ \\hline\n $Spin(7)$ & $8$ & $Spin(7)$ manifold & Ricci-flat\\\\ \\hline\n \\end{tabular}\n \\caption{The Berger classification of holonomy groups for irreducible and non-symmetric simply connected Riemannian manifolds.}\n \\label{table}\n\\end{table}\n\nThere is a sequence of inclusions $Sp(n)\\subset SU(2n)\\subset U(2n)\\subset SO(4n)$ so every hyperk\\\"{a}hler manifold is Calabi-Yau, every Calabi-Yau manifold is K\\\"{a}hler, and every K\\\"{a}hler manifold is also orientable. \n\\begin{theorem}\\label{holonomytheorem}\nIf $M$ is simply connected and has reducible holonomy, then the de Rham decomposition theorem implies that $M$ is a product, and hence does not have rational cohomology generated by one element.\n\\end{theorem}\n\\begin{proof}\nLet $M$ be a simply connected $n$-dimensional manifold such that there is a complete decomposable reduction of the tangent bundle $TM=T^{(0)}M\\oplus T^{(1)}M\\oplus\\cdots\\oplus T^{(k)}M$ under the action of holonomy and the tangent bundles $T^{(i)}M$ parameterizing Frobenius-integrable distributions. If $\\text{Hol}(M)$ is reducible then $M=V_0\\times V_1\\times\\cdots\\times V_k$ for each $V_i$ an integral manifold for the respective tangent bundle $T^{(i)}M$ where $V_0$ is an open subset of $\\mathbb{R}^n$. Moreover, the holonomy group $\\text{Hol}(M)$ splits as a direct product of the holonomy groups of each $M_i$. Thus, if $M=M_0\\times M_1\\times\\cdots\\times M_k$ then $\\text{Hol}(M)=\\text{Hol}(M_0\\times M_1\\times\\cdots\\times M_k)=\\text{Hol}(M_0)\\times \\text{Hol}(M_1)\\times\\cdots\\times \\text{Hol}(M_k)$. So $M=M_0\\times M_1\\times\\cdots\\times M_k$ means $H^*(M;\\mathbb{Q})$ cannot be singly generated.\n\\end{proof}\nCoupling the Berger classification with the de Rham decomposition, one obtains a classification of reducible holonomy groups by requiring that each factor is one of the examples from Berger's list, realized by compact simply connected manifolds. One can find metrics on product manifolds with irreducible holonomy so the converse of Theorem \\ref{holonomytheorem} is false. More precisely, a generic manifold, with $O(n)$ holonomy or $SO(n)$ if it is orientable, will have irreducible holonomy and have homology that is not generated by a single element.\n\\begin{corollary}\nIf $M$ is oriented then $\\text{Hol}(M)=SO(n)$ so there are at least two non-trivial classes in $H^*(M;\\mathbb{Q})$. Thus, if $\\text{Hol}(M)=\\prod_{i=1}^k \\text{Hol}(M_i)$ then $H^*(M;\\mathbb{Q})$ must have at least two generators.\n\\end{corollary}\n\nSince Ziller \\cite{ziller} showed $b_k(\\Lambda M;\\mathbb{F}_2)$ to be unbounded for $M$ a compact globally symmetric space of rank $>1$, we only consider simply connected Riemannian manifolds which are irreducible and non-symmetric, as well as non-compact symmetric spaces of rank $1$. 
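For orientation, we recall the standard fact that the rational cohomology of each compact rank $1$ symmetric space is generated by a single element:\n\\begin{equation*}\nH^*(S^n;\\mathbb{Q})\\cong\\mathbb{Q}[x]\/(x^2),\\quad H^*(\\mathbb{CP}^n;\\mathbb{Q})\\cong\\mathbb{Q}[x]\/(x^{n+1}),\\quad H^*(\\mathbb{HP}^n;\\mathbb{Q})\\cong\\mathbb{Q}[x]\/(x^{n+1}),\\quad H^*(\\mathbb{P}^2(\\mathbb{O});\\mathbb{Q})\\cong\\mathbb{Q}[x]\/(x^{3}),\n\\end{equation*}\nwith $\\deg x=n,2,4,8$ respectively; these are exactly the truncated polynomial rings to which the Gromoll--Meyer criterion does not apply.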
We obtain the following classification of reducible holonomy groups from Berger's classification and de Rham's decomposition theorem. We require that each factor in the decomposition is one from Berger's list or the holonomy of an irreducible non-compact symmetric space of rank $1$, which may be interchanged with a compact symmetric space of rank $1$ under bijection in the decomposition.\nLet $N$ be a simply connected Riemannian symmetric space. Then it may be decomposed as $N=N_0\\times N_1\\times\\cdots\\times N_r$ for $N_0$ Euclidean and the other $N_i$ irreducible. As such, the classification of symmetric spaces is reduced to the irreducible case. By duality (see \\cite{helgason}, p. 199),\ncompact and noncompact simply connected irreducible symmetric spaces may be interchanged, which therefore simplifies the classification to that of compact irreducible symmetric spaces. \n\nLet $M$ be an $n$-dimensional simply connected Riemannian manifold that is not a compact symmetric space of rank $>1$. The holonomy is then a direct product decomposition of: \n\\begin{equation*}\n\\begin{split}\n\\text{Hol}(M)=&\\prod_{i_1=1}^{N_1}SO(n_1)\\prod_{i_2=1}^{N_2} U(n_2)\\prod_{i_3=1}^{N_3} SU(n_3)\\prod_{i_4=1}^{N_4} (Sp(n_4)\\cdot Sp(1))\\prod_{i_5=1}^{N_5} Sp(n_5)\\prod_{i_6=1}^{N_6} G_2\\prod_{i_7=1}^{N_7} Spin(7)\\\\ &\\times \\text{Hol}(\\text{compact symmetric spaces of rank}= 1)\n\\end{split}\n\\end{equation*}\nwhere it is required that a given decomposition has at least two irreducible factors and $1\\le N_k<\\infty$ for every $k$ in the expansion. Adopting the Cartan labeling convention, we consider the label AIII, BDI, CII, FII symmetric spaces $G\/K$.\n\\begin{remark}\nFor label AIII, $G\/K=SU(p+q)\/S(U(p)\\times U(q))$ has $\\text{rank}(G\/K)=\\min(p,q)$ and $\\dim(G\/K)=2pq$. So we let $p=1$, such that $K=S(U(1)\\times U(q))$. If we let $\\dim(G\/K)=n=2q$ then $q=\\frac{n}{2}$ or $K=S\\left(U(1)\\times U\\big(\\frac{n}{2}\\big)\\right)$.\n\\end{remark}\n\\begin{remark}\nFor label BDI, $G\/K=SO(p+q)\/SO(p)\\times SO(q)$ has $\\text{rank}(G\/K)=\\min(p,q)$ and $\\dim(G\/K)=pq$. So if we let $p=1$, $SO(1+q)\/SO(1)\\times SO(q)$ has dimension $q$. Thus, $K=SO(1)\\times SO(n)$ for $\\dim(G\/K)=n$.\n\\end{remark}\n\\begin{remark}\nFor label CII, $G\/K=Sp(p+q)\/Sp(p)\\times Sp(q)$ has $\\text{rank}(G\/K)=\\min(p,q)$ and $\\dim (G\/K)=4pq$. We let $p=1$ so $Sp(1+q)\/Sp(1)\\times Sp(q)$ has dimension $n=4q$ or $K=Sp(1)\\times Sp\\left(\\frac{n}{4}\\right)$.\n\\end{remark}\n\\begin{remark}\nFor label FII, $G\/K=F_4\/Spin(9)$ has $\\text{rank}(G\/K)=1$ and $\\dim(G\/K)=16$.\n\\end{remark}\nNote, if $M=\\mathbb{R}^n$ with the flat Euclidean metric then parallel translation is simply a translation in $\\mathbb{R}^n$. In particular, if $P_{\\gamma} : T_{\\gamma(0)}M\\to T_{\\gamma(1)}M$ for $\\gamma:[0,1]\\to M$, then $P_{\\gamma}$ is the identity for each $\\gamma$ so $\\text{Hol}(\\mathbb{R}^n)$ is the trivial group, and it may be ignored in the decomposition. Thus, the decomposition of compact symmetric spaces for the problem under consideration is $\\text{Hol(compact simply connected symmetric spaces)}=S\\left(U(1)\\times U\\big(\\frac{n}{2}\\big)\\right)\\times (SO(1)\\times SO(n))\\times \\left(Sp(1)\\times Sp\\big(\\frac{n}{4}\\big)\\right)\\times Spin(9).$\nTo be more precise, the holonomy of a reducible simply connected Riemannian manifold $M$ (excluding compact symmetric spaces of rank $>1$) is given by the following.\n\\begin{theorem}[Classification of reducible holonomy]
\nIf $M$ is an $n$-dimensional simply connected Riemannian manifold with decomposition $M=M_1\\times\\cdots\\times M_j$ of irreducible $M_k$ ($k=1,\\dots, j)$, then it has holonomy decomposition given by the direct product:\n\\begin{equation*}\n\\begin{split}\n\\text{Hol}(M)=&\\prod_{i_1=1}^{N_1}SO(n_1)\\prod_{i_2=1}^{N_2} U(n_2)\\prod_{i_3=1}^{N_3} SU(n_3)\\prod_{i_4=1}^{N_4}(Sp(n_4)\\cdot Sp(1))\\prod_{i_5=1}^{N_5} Sp(n_5)\\prod_{i_6=1}^{N_6} G_2\\prod_{i_7=1}^{N_7} Spin(7)\\\\ &\\prod_{i_8=1}^{N_8} S(U(1)\\times U(n_6)) \\prod_{i_9=1}^{N_9} (SO(1)\\times SO(n_7))\\prod_{i_{10}=1}^{N_{10}} (Sp(1)\\times Sp(n_8))\\prod_{i_{11}=1}^{N_{11}} Spin(9)\n\\end{split}\n\\end{equation*} where it is required that there are at least two factors in a given decomposition and $\\sum_i\\dim_{\\mathbb{R}}(M_i)=n$.\n\\end{theorem}\n\\begin{corollary}\nIf $M$ is a simply connected Riemannian manifold with reducible holonomy given by the previous theorem, the rational cohomology ring $H^*(M;\\mathbb{Q})$ has at least two generators.\n\\end{corollary}\nSince Riemannian spaces that are locally isometric to the homogeneous spaces $G\/H$ have local holonomy isomorphic to $H$, the holonomy decomposition is\n\\begin{equation*}\n\\begin{split}\n\\text{Hol}(M)=&\\prod_{i_1=1}^{N_1}SO(n_1)\\prod_{i_2=1}^{N_2} U(n_2)\\prod_{i_3=1}^{N_3} SU(n_3)\\prod_{i_4=1}^{N_4} (Sp(n_4)\\cdot Sp(1))\\prod_{i_5=1}^{N_5} Sp(n_5)\\prod_{i_6=1}^{N_6} G_2\\prod_{i_7=1}^{N_7} Spin(7)\\\\ & \\prod_{i_8=1}^{N_8} \\text{Hol}\\left(\\frac{SU(n_6+1)}{S(U(1)\\times U(n_6))}\\right) \\prod_{i_9=1}^{N_9} \\text{Hol}\\left(\\frac{SO(n_7+1)}{SO(1)\\times SO(n_7)}\\right)\\prod_{i_{10}=1}^{N_{10}} \\text{Hol}\\left(\\frac{Sp(n_8+1)}{Sp(1)\\times Sp(n_8)}\\right)\\\\ & \\prod_{i_{11}=1}^{N_{11}} \\text{Hol}\\left(\\frac{F_4}{Spin(9)}\\right).\n\\end{split}\n\\end{equation*}\n\nLet $M$ be any space whose rational cohomology ring is a free graded-commutative algebra. Then the rationalization $M_{\\mathbb{Q}}$ is a product of Eilenberg-MacLane spaces. This hypothesis on cohomology also applies to compact Lie groups. Consider the decomposition of $M$, i.e. $M=M_1\\times\\cdots\\times M_k$, then the rationalization of holonomy is $\\text{Hol}(M)_{\\mathbb{Q}}=\\text{Hol}(M)\\times_{\\mathbb{Q}}E\\mathbb{Q}$ and $\\text{Hol}(M_{\\mathbb{Q}})=\\text{Hol}(M\\times_{\\mathbb{Q}}E\\mathbb{Q})=\\text{Hol}(M)\\times \\text{Hol}(E\\mathbb{Q})$. The total space of the universal bundle over $B\\mathbb{Q}$ is $E\\mathbb{Q}=B\\mathbb{Q}\\rtimes\\mathbb{Q}$ since $B\\mathbb{Q}=E\\mathbb{Q}\/\\mathbb{Q}$. Note, $B\\mathbb{Q}$ is the Eilenberg-MacLane space $K(\\mathbb{Q},1)$ so $E\\mathbb{Q}=K(\\mathbb{Q},1)\\rtimes\\mathbb{Q}$.\n\nWe can obtain the space $K(\\mathbb{Q},1)$ by the following procedure. Take the circle $S^1$, consider the sequence of maps $f_n:S^1\\to S^1$ of degree $n$, and form an infinite mapping telescope of $f_n$, a special case of a homotopy colimit. Observe, $\\mathbb{Q}$ is the filtered colimit $\\mathbb{Z}\\to\\mathbb{Z}\\to\\cdots$ where successive maps are multiplication by $1,2,\\dots$. Note, since $\\Omega\\mathbb{Q}\\cong\\ast$ regardless of its topology, the resulting classifying space $B\\mathbb{Q}$ will be a $K(\\mathbb{Q},1)$ space. 
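To spell out one step of this construction (a routine computation recorded here for convenience), the fundamental group of the resulting mapping telescope is the filtered colimit\n\\begin{equation*}\n\\pi_1(\\mathrm{Tel})\\cong\\varinjlim\\left(\\mathbb{Z}\\xrightarrow{\\times 1}\\mathbb{Z}\\xrightarrow{\\times 2}\\mathbb{Z}\\xrightarrow{\\times 3}\\cdots\\right)\\cong\\mathbb{Q},\n\\end{equation*}\nwhere the composite of the first $n$ maps is multiplication by $n!$, and the colimit may be identified with $\\bigcup_n\\frac{1}{n!}\\mathbb{Z}=\\mathbb{Q}$.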
It follows that $\\text{Hol}(M)_{\\mathbb{Q}}=\\text{Hol}(M)\\times_{\\mathbb{Q}}E\\mathbb{Q}$ and $\\text{Hol}(M_{\\mathbb{Q}})=\\text{Hol}(M)\\times \\text{Hol}(E\\mathbb{Q})$, so $\\text{Hol}(M)_{\\mathbb{Q}}=\\frac{\\text{Hol}(M_{\\mathbb{Q}})}{\\text{Hol}(E\\mathbb{Q})}\\times_{\\mathbb{Q}}E\\mathbb{Q}$. By first computing the rationalization space $M_{\\mathbb{Q}}$, we can determine the equivariant cohomology ring $H^*_{\\mathbb{Q}}(M;\\mathbb{Q})=H^*(E\\mathbb{Q}\\times_{\\mathbb{Q}}M;\\mathbb{Q})$, noting that the rationalization $M_{\\mathbb{Q}}$ is the product of Eilenberg-MacLane spaces whose rational cohomology rings are easier to compute. Since the decomposition of $M$ is given by:\n\\begin{equation*}\n\\begin{split}\nM=&\\prod_{i_1=1}^{N_1}M_{\\text{orientable, }SO(n_1)}\\prod_{i_2=1}^{N_2} M_{\\text{K\\\"{a}hler, } U(n_2)}\\prod_{i_3=1}^{N_3} M_{\\text{Calabi-Yau, }SU(n_3)}\\prod_{i_4=1}^{N_4} M_{\\text{quaternion-K\\\"{a}hler, }Sp(n_4)\\cdot Sp(1)}\\\\\n& \\prod_{i_5=1}^{N_5} M_{\\text{hyperk\\\"{a}hler, }Sp(n_5)}\\prod_{i_6=1}^{N_6} M_{G_2}\\prod_{i_7=1}^{N_7} M_{Spin(7)}\\prod_{i_8=1}^{N_8}\\frac{SU(n_6+1)}{S(U(1)\\times U(n_6))}\\prod_{i_9=1}^{N_9}\\frac{SO(n_7+1)}{SO(1)\\times SO(n_7)}\\\\\n&\\prod_{i_{10}=1}^{N_{10}} \\frac{Sp(n_8+1)}{Sp(1)\\times Sp(n_8)}\\prod_{i_{11}=1}^{N_{11}} \\frac{F_4}{Spin(9)},\n\\end{split}\n\\end{equation*} we can compute the rational cohomology ring $H^*(M;\\mathbb{Q})$ by the Kunneth formula.\n\\begin{equation*}\n\\begin{split}\nH^*(M;\\mathbb{Q})\\cong & \\bigotimes_{i_1=1}^{N_1}H^*(M_{\\text{orientable, }SO(n_1)};\\mathbb{Q})\\bigotimes_{i_2=1}^{N_2} H^*(M_{\\text{K\\\"{a}hler, } U(n_2)};\\mathbb{Q}) \\bigotimes_{i_3=1}^{N_1} H^*(M_{\\text{Calabi-Yau, }SU(n_3)};\\mathbb{Q})\\\\ &\\otimes\\cdots\\otimes\\bigotimes_{i_{11}=1}^{N_{11}} H^*(F_4\/Spin(9);\\mathbb{Q})\n\\end{split}\n\\end{equation*}\n\\section{Palais-Smale Local Approximation for Free Loop Space}\nLet $\\phi$ be a chart on the Hilbert manifold $\\Lambda M$ defined by \n\\begin{equation}\n\\begin{split}\n\\phi:\\quad& \\mathcal{U}\\to \\mathcal{V}\\subset\\mathbb{R}^n \\\\ \n& \\gamma\\mapsto \\left(\\int_{S^1}\\|\\dot\\gamma(t)\\|^2dt,\\int_{S^1}\\|\\dot\\gamma(t)\\|^2dt,\\dots \\right),\n\\end{split}\n\\end{equation} satisfying the following commutative diagram:\n\\newline\n\\[\n\\begin{tikzcd}[column sep=large, row sep=large]\n \\mathcal{U}\n \\arrow[swap]{d}{E}\n \\arrow{r}{\\phi}\n& \\mathcal{V}\\subset\\mathbb{R}^n \\\\\n \\mathbb{R} \\arrow[swap]{ru}{\\phi\\circ E^{-1}}\n&\n\\end{tikzcd}\n\\]\nThe Morse Lemma can be generalized to the infinite-dimensional setting. On separable Hilbert spaces, it takes the following form.\n\\begin{theorem}\\label{generalizedmorse}\nLet $H$ be a separable Hilbert space and $f:H\\to\\mathbb{R}$ a $C^k$ function $f$ with $k\\ge 3$ in the sense of Fr\\'echet differentiability, for which $c$ is a non-degenerate critical point. 
Then there exist convex neighborhoods $\\mathcal{U}$ and $\\mathcal{V}$ of $c$, and a diffeomorphism $\\varphi:\\mathcal{U}\\to\\mathcal{V}$ of class $C^{k-2}$ with $\\varphi(c)=c$ and a bounded orthogonal projection $P: H\\to H$ such that \n\\begin{equation}\nf(x)=f(c)-\\|P(\\varphi(x))\\|_H^2+\\|\\varphi(x)-P(\\varphi(x))\\|_H^2,\n\\end{equation} where $\\dim\\text{im }P$ is the Morse index of the critical point $c$.\n\\end{theorem}\n\\begin{theorem} \\label{result}\nIf the energy functional $E:\\Lambda M\\to\\mathbb{R}$, defined by $E[\\gamma]=\\int_{S^1}\\|\\dot\\gamma(t)\\|^2dt$ for $\\gamma:S^1\\to M$, is a $C^3$-function, then $\\Lambda M$ is a union of infinitely many Baire spaces $S^{\\infty}$. \n\\end{theorem}\n\\begin{proof}\nAssume that the Riemannian metric on $M$ is chosen such that the energy functional $E:\\Lambda M\\to\\mathbb{R}$ is Morse-Bott, i.e., $E$ is a $C^2$-function. Assume that $E$ is also a $C^3$-function by choice. Let $H_{\\alpha}\\subset\\Lambda M$ and let $\\Lambda M=\\bigcup_{\\alpha\\in A}H_{\\alpha}\\oplus H_{\\alpha}^{\\perp}$ for $A$ the set of critical points of $E$. Further, let ${\\mathcal{U}}_{\\alpha}\\subset\\Lambda M$ be a local neighborhood of the critical point $\\beta_{\\alpha}$. Then the projection map $P:\\mathcal{U}_{\\alpha}\\to H_{\\alpha}$ has $P(x)=m$ for $x=m+m',m\\in H_{\\alpha},m'\\in H_{\\alpha}^{\\perp}$. Recall that $\\dim\\text{im }P=\\lambda(\\beta_{\\alpha})$. Let $\\{\\gamma_{\\alpha k}\\}_{k=1}^{\\lambda(\\beta_{\\alpha})}$ be an orthonormal basis for $H_{\\alpha}$. That is, the orthonormal basis satisfies the following criteria.\n\\begin{enumerate}[label=(\\roman*)]\n\\item Orthogonality: Every two different basis elements are orthogonal, i.e. $\\langle\\gamma_{\\alpha k},\\gamma_{\\alpha j}\\rangle=0$ for all $k,j=1,\\dots,\\lambda(\\beta_{\\alpha})$ with $k\\ne j$.\n\\item Normalization: Every element of this basis has norm $1$, i.e. $\\|\\gamma_{\\alpha k}\\|=1$ for all $k=1,\\dots,\\lambda(\\beta_{\\alpha})$.\n\\item Completeness: The linear span of the family $\\gamma_{\\alpha k}$, for $k=1,\\dots,\\lambda(\\beta_{\\alpha})$, is dense in $H_{\\alpha}$.\n\\end{enumerate} \nSimilarly, let $\\{\\gamma_{\\alpha k}^{\\perp}\\}_{k\\in B}$, for $B:=(\\lambda(\\beta_{\\alpha})+1,\\lambda(\\beta_{\\alpha})+2,\\dots)$ a countably infinite set, be an orthonormal basis for $H_{\\alpha}^{\\perp}$, satisfying the above properties, of $\\dim H_{\\alpha}^{\\perp}=\\text{codim }H_{\\alpha}=\\infty$ for $\\dim H_{\\alpha}=\\lambda(\\beta_{\\alpha})$. Then $\\langle\\gamma_{\\alpha k},\\gamma_{\\alpha j}^{\\perp}\\rangle=0$ for all $k=1,\\dots,\\lambda(\\beta_{\\alpha}),j=\\lambda(\\beta_{\\alpha})+1,\\dots$. We may construct $\\Lambda M$ as a patching over neighborhoods for sufficiently many critical points, which are homeomorphic to $H_{\\alpha}\\oplus H_{\\alpha}^{\\perp}$ for $\\alpha\\in A$. Thus, for $\\mathcal{U}_{\\alpha}$ an open neighborhood of $\\beta_{\\alpha}\\in\\Lambda M$, $\\mathcal{U}_{\\alpha}=H_{\\alpha}\\oplus H_{\\alpha}^{\\perp}$ such that for $\\gamma_{\\alpha}\\in\\mathcal{U}_{\\alpha}$, there exists a unique $\\theta_{\\alpha}\\in H_{\\alpha}$ and $\\varepsilon_{\\alpha}\\in H_{\\alpha}^{\\perp}$ such that $\\gamma_{\\alpha}=\\theta_{\\alpha}+\\varepsilon_{\\alpha}$ whereby $\\theta_{\\alpha}=\\sum_{k=1}^{\\lambda(\\beta_{\\alpha})}c_{\\alpha k}\\gamma_{\\alpha k}$ and $\\varepsilon_{\\alpha}=\\sum_{j=\\lambda(\\beta_{\\alpha})+1}^{\\infty}\\widehat{c_{\\alpha j}}\\gamma_{\\alpha j}^{\\perp}$. 
Thus,\n\\begin{equation}\n\\gamma_{\\alpha}=\\sum_{k=1}^{\\lambda(\\beta_{\\alpha})}c_{\\alpha k}\\gamma_{\\alpha k}+\\sum_{j=\\lambda(\\beta_{\\alpha})+1}^{\\infty}\\widehat{c_{\\alpha j}}\\gamma_{\\alpha j}^{\\perp}\n\\end{equation} where $\\langle\\gamma_{\\alpha k},\\gamma_{\\alpha j}^{\\perp}\\rangle=0$. We invoke Theorem \\ref{approximation} to conclude that $E:\\Lambda M\\to\\mathbb{R}$ may be approximated by a smooth functional $E_{\\varepsilon}:\\Lambda M'\\to\\mathbb{R}$ with infinitely many critical points so that $A$ in $\\Lambda M=\\bigcup_{\\alpha\\in A}H_{\\alpha}\\oplus H_{\\alpha}^{\\perp}$ is uncountable or countably infinite. Note, this follows because $\\mathcal{U}_{\\alpha}$ is dense. \n\nFurthermore, let $\\{\\gamma_{\\alpha k}\\}_{k=1}^{\\infty}$ be an orthonormal basis for $\\mathcal{U}_{\\alpha}=H_{\\alpha}\\oplus H_{\\alpha}^{\\perp}$ such that if $\\gamma_{\\alpha}\\in\\mathcal{U}_{\\alpha}$ then $\\gamma_{\\alpha}=\\sum_{k=1}^{\\infty}c_{\\alpha k}\\gamma_{\\alpha k}$ and $P:\\mathcal{U}_{\\alpha}\\to H_{\\alpha}$ means $P(\\gamma_{\\alpha})=P\\left(\\sum_{k=1}^{\\infty}c_{\\alpha k}\\gamma_{\\alpha k}\\right)$. Since $P$ is a linear operator, $P(\\gamma_{\\alpha})=\\theta_{\\alpha}=\\sum_{k=1}^{\\infty}c_{\\alpha k}P(\\gamma_{\\alpha k})$. Let $\\{\\theta_{\\alpha k}\\}_{k=1}^{\\infty}$ be an orthonormal basis for $H_{\\alpha}$ where $\\theta_{\\alpha k}=0$ for $k>\\lambda(\\beta_{\\alpha})$. That is, $\\{\\theta_{\\alpha k}\\}_{k=1}^{\\lambda(\\beta_{\\alpha})}$ is an orthonormal basis for $H_{\\alpha}$. Then $P(\\gamma_{\\alpha k})=\\theta_{\\alpha k}$ and $P(\\gamma_{\\alpha k})=\\theta_{\\alpha k}=0$ for $k>\\lambda(\\beta_{\\alpha})$. Let $\\{\\theta_{\\alpha k}^{\\perp}\\}_{k=\\lambda(\\beta_{\\alpha})+1}^{\\infty}$ be an orthonormal basis for $H_{\\alpha}^{\\perp}$. Then for $P_{\\perp}:\\mathcal{U}_{\\alpha}\\to H_{\\alpha}^{\\perp}$ and $P_{\\perp}(\\gamma_{\\alpha k})=\\theta_{\\alpha k}^{\\perp}$, we have $P_{\\perp}(\\gamma_{\\alpha})=\\varepsilon_{\\alpha}=\\sum_{k=1}^{\\infty}\\widehat{c_{\\alpha k}}P_{\\perp}(\\gamma_{\\alpha k})$ such that $P_{\\perp}(\\gamma_{\\alpha k})=\\theta_{\\alpha k}^{\\perp}=0$ for $k<\\lambda(\\beta_{\\alpha})+1$. 
Therefore $\\gamma_{\\alpha}=\\sum_{k=1}^{\\infty}c_{\\alpha k}\\gamma_{\\alpha k}$ and \n\\begin{equation*}\n\\begin{split}\nP(\\gamma_{\\alpha})=\\theta_{\\alpha}&=\\sum_{k=1}^{\\infty}c_{\\alpha k}P(\\gamma_{\\alpha k})\\\\&=\\sum_{k=1}^{\\lambda(\\beta_{\\alpha})}c_{\\alpha k}P(\\gamma_{\\alpha k}) \\text{ with }\nP(\\gamma_{\\alpha k})=\\begin{cases} \n \\theta_{\\alpha k}, & \\text{if }k\\le\\lambda(\\beta_{\\alpha}), \\\\\n 0, & \\text{if }k>\\lambda(\\beta_{\\alpha}), \n \\end{cases} \n \\end{split}\n \\end{equation*}\n\\begin{equation*}\n\\begin{split}\nP_{\\perp}(\\gamma_{\\alpha})=\\varepsilon_{\\alpha}&=\\sum_{k=1}^{\\infty}\\widehat{c_{\\alpha k}}P_{\\perp}(\\gamma_{\\alpha k})\\\\&=\\sum_{k=\\lambda(\\beta_{\\alpha})+1}^{\\infty}\\widehat{c_{\\alpha k}}P_{\\perp}(\\gamma_{\\alpha k})\\text{ with } P_{\\perp}(\\gamma_{\\alpha k})=\\begin{cases} \n \\theta_{\\alpha k}^{\\perp}, & \\text{if }k\\ge\\lambda(\\beta_{\\alpha})+1, \\\\\n 0, & \\text{if }k<\\lambda(\\beta_{\\alpha})+1, \n \\end{cases} \n\\end{split}\n\\end{equation*} where $\\langle\\theta_{\\alpha k},\\theta_{\\alpha j}^{\\perp}\\rangle=0$ for all $k=1,\\dots,\\lambda(\\beta_{\\alpha}), j=\\lambda(\\beta_{\\alpha})+1,\\dots.$ Thus, by the Morse-Palais Lemma \\ref{generalizedmorse}, $E[\\gamma_{\\alpha}]=E[\\beta_{\\alpha}]-\\|P(\\varphi(\\gamma_{\\alpha}))\\|^2_{\\Lambda M}+\\|\\varphi(\\gamma_{\\alpha})-P(\\varphi(\\gamma_{\\alpha}))\\|^2_{\\Lambda M}$ where $\\varphi:\\mathcal{U}\\to\\mathcal{V}$ is a diffeomorphism such that for $\\gamma_{\\alpha}\\in\\mathcal{U}_{\\alpha}$, $\\varphi(\\gamma_{\\alpha})=\\gamma_{\\alpha}$, i.e. $\\varphi=\\text{id}_{\\mathcal{U}_{\\alpha}}$ by identification. It follows that \n\\begin{equation}\n\\begin{split}\n&E[\\gamma_{\\alpha}]=E[\\beta_{\\alpha}]-\\|P(\\varphi(\\gamma_{\\alpha}))\\|^2_{\\Lambda M}+\\|\\varphi(\\gamma_{\\alpha})-P(\\varphi(\\gamma_{\\alpha}))\\|^2_{\\Lambda M} \\\\\n&=E[\\beta_{\\alpha}]-\\|P(\\varphi(\\gamma_{\\alpha}))\\|^2_{\\Lambda M}+\\|\\gamma_{\\alpha}\\|^2_{\\Lambda M}-2Re\\langle\\gamma_{\\alpha},P(\\gamma_{\\alpha})\\rangle+\\|P(\\gamma_{\\alpha})\\|^2_{\\Lambda M} \\\\ &=E[\\beta_{\\alpha}]+\\|\\gamma_{\\alpha}\\|^2_{\\Lambda M}-2Re\\langle\\gamma_{\\alpha},P(\\gamma_{\\alpha})\\rangle.\n\\end{split}\n\\end{equation} \nLet \\begin{equation*}\n\\begin{split}\n\\varphi_{\\alpha}:\\quad &\\mathcal{U}_{\\alpha}\\cap H_{\\alpha}\\to\\mathbb{R}^{\\lambda(\\beta_{\\alpha})} \\\\\n&\\gamma_{\\alpha}\\mapsto \\left(\\gamma_{\\alpha 1},\\dots,\\gamma_{\\alpha \\lambda(\\beta_{\\alpha})}\\right)\n\\end{split}\n\\end{equation*} since $H_{\\alpha}$ is a vector space, so the tangent space $T_{\\gamma_{\\alpha}}H_{\\alpha}$ at any point $\\gamma_{\\alpha}\\in H_{\\alpha}$ is canonically isomorphic to $H_{\\alpha}$ itself. Similarly, let\n\\begin{equation*}\n\\begin{split}\n\\widehat{\\phi_{\\alpha}}:\\quad &\\mathcal{U}_{\\alpha}\\cap H_{\\alpha}^{\\perp}\\to\\mathbb{R}^{\\infty}\\\\\n&\\gamma_{\\alpha}\\mapsto(\\gamma_{\\alpha 1},\\gamma_{\\alpha 2},\\dots)\n\\end{split}\n\\end{equation*} with $H_{\\alpha}\\cap H_{\\alpha}^{\\perp}=\\{\\beta_{\\alpha}\\}$, affinely translated $\\{0\\}$. Then let \n\\begin{equation*}\n\\begin{split}\n\\phi_{\\alpha}:\\quad&\\mathcal{U}_{\\alpha}\\to\\mathbb{R}^{\\infty} \\\\\n& \\gamma_{\\alpha}\\mapsto (\\gamma_{\\alpha 1},\\gamma_{\\alpha 2},\\dots)\n\\end{split}\n\\end{equation*} be a chart on $\\mathcal{U}_{\\alpha}$. 
Thus, \n\\begin{equation*}\n\\begin{split}\n\\phi_{\\alpha}^i:\\quad&\\mathcal{U}_{\\alpha}\\to\\mathbb{R}\\\\\n&\\gamma_{\\alpha}\\mapsto \\gamma_{\\alpha i}\n\\end{split}\n\\end{equation*}\nare local coordinate charts such that $\\gamma_{\\alpha}=\\sum_{k=1}^{\\infty}c_{\\alpha k}\\gamma_{\\alpha k}=\\sum_{k=1}^{\\infty}c_{\\alpha k}\\phi_{\\alpha}^k$ so $\\|\\gamma_{\\alpha}\\|^2_{\\Lambda M}=\\langle \\gamma_{\\alpha},\\gamma_{\\alpha}\\rangle_{\\Lambda M}=\\left\\|\\sum_{k=1}^{\\infty}c_{\\alpha k}\\phi_{\\alpha}^k\\right\\|^2_{\\Lambda M}=\\sum_{k=1}^{\\infty}\\left\\|c_{\\alpha k}\\phi_{\\alpha}^k\\right\\|^2_{\\Lambda M}$ since the $\\phi_{\\alpha}^k$'s are orthogonal by choice, and $\\|\\gamma_{\\alpha}\\|^2_{\\Lambda M}=\\sum_{k=1}^{\\infty}c_{\\alpha k}^2\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}$ where $\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}=\\|\\phi_{\\alpha}^k\\|^2_0+\\|\\nabla_t\\phi_{\\alpha}^k\\|^2_0$. It follows that \n\\begin{equation*}\n\\|\\gamma_{\\alpha}\\|^2_{\\Lambda M}=\\sum_{k=1}^{\\infty}c_{\\alpha k}^2\\left(\\|\\phi_{\\alpha}^k\\|^2_0+\\|\\nabla_t\\phi_{\\alpha}^k\\|^2_0\\right).\n\\end{equation*}\nLikewise, $\\theta_{\\alpha}=\\sum_{k=1}^{\\lambda(\\beta_{\\alpha})}\\widehat{c_{\\alpha k}}P(\\phi_{\\alpha}^k)=\\sum_{k=1}^{\\lambda(\\beta_{\\alpha})}\\widehat{c_{\\alpha k}}\\phi_{\\alpha}^k$ since $P(\\phi_{\\alpha}^k)=\\phi_{\\alpha}^k$ for $k\\le\\lambda(\\beta_{\\alpha})$. Recall $\\gamma_{\\alpha}=\\sum_{k=1}^{\\infty}c_{\\alpha k}\\gamma_{\\alpha k}$. Therefore, $\\langle \\gamma_{\\alpha}, P(\\gamma_{\\alpha})\\rangle_{\\Lambda M}=\\left\\langle\\sum_{k=1}^{\\infty}c_{\\alpha k}\\phi_{\\alpha}^k, \\sum_{j=1}^{\\lambda(\\beta_{\\alpha})}\\widehat{c_{\\alpha j}}\\phi_{\\alpha}^j\\right\\rangle_{\\Lambda M}=\\sum_{(k,j)\\in K\\times J}c_{\\alpha k}\\widehat{c_{\\alpha j}}\\langle\\phi_{\\alpha}^k, \\phi_{\\alpha}^j\\rangle=\\sum_{k=1}^{\\lambda(\\beta_{\\alpha})}c_{\\alpha k}\\widehat{c_{\\alpha k}}\\langle\\phi_{\\alpha}^k,\\phi_{\\alpha}^k\\rangle:=C_{\\alpha}$. As such, $E[\\gamma_{\\alpha}]=E[\\beta_{\\alpha}]+\\sum_{k=1}^{\\infty}c_{\\alpha k}^2\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}-2C_{\\alpha}$ or, for $\\widehat{C_{\\alpha}}=E[\\gamma_{\\alpha}]-E[\\beta_{\\alpha}]+2C_{\\alpha}$, we have $\\sum_{k=1}^{\\infty}c_{\\alpha k}^2\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}=\\widehat{C_{\\alpha}}$ where $\\gamma_{\\alpha}=\\sum_{k=1}^{\\infty}c_{\\alpha k}\\phi_{\\alpha}^k=c_{\\alpha 1}\\phi_{\\alpha}^1+c_{\\alpha 2}\\phi_{\\alpha}^2+\\cdots$, and $\\phi_{\\alpha}^k$ is defined on $\\mathcal{U}_{\\alpha}$. Then $c_{\\alpha k}=\\int_{\\mathcal{U}_{\\alpha}}\\phi_{\\alpha}^k(\\gamma_{\\alpha})d\\gamma_{\\alpha}$. Thus, $\\sum_{k=1}^{\\infty}\\frac{c_{\\alpha k}^2}{\\widehat{C}_{\\alpha}}\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}=1$ so letting $a_{\\alpha k}=c^2_{\\alpha k}\/\\widehat{C_{\\alpha}}$ we have\n\\begin{equation}\n\\sum_{k=1}^{\\infty}a_{\\alpha k}\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}=1.\n\\end{equation} The locus of all points $\\{\\phi_{\\alpha}^1,\\phi_{\\alpha}^2,\\dots\\}$, i.e. the coordinates on the patch $\\mathcal{U}_{\\alpha}$, corresponds to an infinite-dimensional ellipsoid with $a_{\\alpha k}\\ne 0$ for all $k\\in\\mathbb{Z}^+$ because, otherwise, $a_{\\alpha k}= 0$ would obstruct the uniqueness and existence of the orthonormal basis. Thus, $\\sum_{k=1}^{\\infty}a_{\\alpha k}\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}=1$ for each $\\mathcal{U}_{\\alpha}$. 
It follows that\n\\begin{equation}\n\\Lambda M=\\bigcup_{\\alpha\\in A}\\mathcal{U}_{\\alpha}=\\bigcup_{\\alpha\\in A}\\left\\{(\\phi_{\\alpha}^1,\\phi_{\\alpha}^2,\\dots)\\in\\mathbb{R}^{\\infty}\\Bigg | \\sum_{k=1}^{\\infty}a_{\\alpha k}\\|\\phi_{\\alpha}^k\\|^2_{\\Lambda M}=1\\right\\}\\cong \\bigcup_{\\alpha\\in A}S^{\\infty}.\n\\end{equation} Suppose there exists a suitable $C^0$-perturbation $E_{\\varepsilon}$ of $E$ such that there is an atlas with $\\Lambda M=\\bigcup_{\\alpha\\in A}\\mathcal{U}_{\\alpha}$ where $\\mathcal{U}_{\\alpha}$ is locally a Hilbert space for all $\\alpha\\in A$. Then given such a distribution of critical points on $\\Lambda M$, we can compute the cohomology groups via a nerve of a good cover and, thus, the sequence of Betti numbers. Note, such a construction of an atlas for $\\Lambda M$ can always be made because $E_{\\varepsilon}$ is an arbitrary $C^0$-perturbation of $E$. Then the rational cohomology of the Baire space $H^k(S^{\\infty};\\mathbb{Q})$ is the inverse limit $\\varprojlim_nH^k(S^{n};\\mathbb{Q})$.\n\\end{proof}\n\\begin{lemma}\nLet $\\Lambda M\\subset\\mathbb{R}^{\\infty}$ be a paracompact connected Hilbert manifold. Moreover, let $E:\\Lambda M\\to\\mathbb{R}$ be a $C^2$-function. Then the simplicial complex $\\Delta:=\\{\\gamma\\in \\Lambda M:dE[\\gamma]=0\\}$, whose vertices are identified with the non-degenerate critical points of $E$ and whose simplices are given by the non-empty intersections of a refined open cover $\\{U_{\\alpha}\\}_{\\alpha\\in A}$, is homotopy equivalent to $\\Lambda M$.\n\\end{lemma}\n\\begin{proof}\nA $C^0$-perturbation of the energy functional $E:\\Lambda M\\to\\mathbb{R}$ produces infinitely many critical points. Suppose we form a good cover $\\mathcal{U}=\\{U_{\\alpha_k}\\}_{\\alpha_k\\in A}$ such that each $U_{\\alpha_k}$ is a neighborhood of the non-degenerate critical point ${\\gamma}_k$. Then, in general, this should be a cover of $\\Lambda M$ and the map from the geometric realization of the simplicial space, i.e. the topological nerve of $\\{U_{\\alpha_k}\\}$, to $\\Lambda M$ is a homotopy equivalence. Likewise, the geometric realization of the topological nerve $R(\\text{Top-Nrv}(\\mathcal{U}))$ is homotopy equivalent to the geometric realization of the simplicial set for which the $|A|$-simplices are $|A|$-tuples $(\\alpha_0,\\dots,\\alpha_{|A|})$, whereby the intersection $\\bigcap_{k\\in\\mathbb{Z}}U_{\\alpha_k}$ is nonempty and $|A|\\to\\infty$ under $C^0$-perturbation of $E$.\n\\end{proof}\n\nTreating $\\Lambda M$ as a CW complex, by Theorem \\ref{result}, it is given by the union of Baire spaces, namely $\\Lambda M=\\bigcup_{\\alpha\\in A}S^{\\infty}$ without information on intersections. The only requirement is that the union of such Baire spaces covers $\\Lambda M$. Let $\\Sigma=\\{S^{\\infty},\\dots,S^{\\infty}\\}$ be a finite collection of sets, with cardinality $|A|$. The nerve consists of all sub-collections whose sets have a non-empty common intersection, $\\text{Nrv}(\\Sigma)=\\left\\{X\\subseteq\\Sigma:\\bigcap X\\ne\\emptyset\\right\\}$, which is an abstract simplicial complex. That is, suppose we form a good cover $\\mathcal{U}=\\{U_{\\alpha_k}\\}_{\\alpha_k\\in A}$ such that each $U_{\\alpha_k}$ is a neighborhood of the non-degenerate critical point $\\gamma_k$. 
Since all Hilbert manifolds are paracompact, we may compute homology of the CW model of $\\Lambda M$ via \\v{C}ech homology:\n\\begin{equation*} \\check{H}_k(\\mathcal{U},\\mathcal{F})=\\varinjlim_{\\mathcal{U}\\in A}H_k(\\mathcal{U},\\mathcal{F})\n\\end{equation*} for $\\mathcal{F}$ a presheaf of abelian groups on $\\Lambda M$, and $A$ a directed set consisting of all open covers of $\\Lambda M$, which is directed by $\\mathcal{U}<\\mathcal{V}$ for $\\mathcal{V}$ a refinement of $\\mathcal{U}$. \\v{C}ech homology is similarly given by $\\check{H}_k(\\Lambda M,\\mathcal{F})=\\varinjlim_{\\mathcal{U}\\in A}H_k(\\text{Nrv}(\\mathcal{U}),\\mathcal{F})$.\n\n\\section{Integral Cellular Homology and Smith Normal Form for Free Loop Space\\label{sec: SNF}}\n\\begin{lemma}\nEvery continuous map $f:M\\to\\mathbb{R}^n$ from a Hilbert manifold can be arbitrarily approximated by a smooth map $g:M\\to\\mathbb{R}^n$ which has no critical points.\n\\end{lemma}\n\\begin{theorem}\\label{approximation}\nA continuous map $f:M\\to\\mathbb{R}^n$ from a Hilbert manifold may be approximated to create infinitely many local minima and maxima by a small $C^0$-perturbation. This is impossible, in general, for a $C^1$-perturbation.\n\\end{theorem}\n\\begin{proof}\nWorking in local charts, let $\\sigma(t):=\\max (0,\\min(2t,1))$ for $t\\in\\mathbb{R}$. For $\\varepsilon>0$, define $f_{\\varepsilon}(x)=f(\\sigma(\\|x\\|\/\\varepsilon)x)$. Then $f_{\\varepsilon}$ is constant in the ball of radius $\\varepsilon\/2$, it coincides with $f$ outside the ball of radius $\\varepsilon$, and $\\|f-f_{\\varepsilon}\\|_{\\infty}=o(1)$ for $\\varepsilon\\to 0$.\n\\end{proof}\n\n\\begin{lemma}[Homotopy Invariance]\\label{invariance}\nThe respective homotopy types of a closed Riemannian manifold $M$ and its (free) loop space $\\Lambda M$ are invariant under a $C^0$-perturbation of the energy functional $E:\\Lambda M\\to\\mathbb{R}$ in an open neighborhood of a critical point $\\gamma\\in\\Lambda M$ of index $\\lambda$. For $\\varepsilon>0$, such a local $C^0$-perturbation of $E$ is given by $E_{\\varepsilon}[\\gamma]=E[\\sigma(\\|\\gamma\\|\/\\varepsilon)\\gamma]$ where $\\sigma(t):=\\max(0,\\min(2t,1))$ for $t\\in\\mathbb{R}$.\n\\end{lemma}\n\\begin{proof}\nThe canonical fibration $\\Omega M\\hookrightarrow\\Lambda M\\overset{\\text{ev}}\\longrightarrow M$ admits the section $M\\hookrightarrow\\Lambda M$ given by the inclusion of constant loops, which implies that the associated long exact sequence of homotopy reduces to the split short exact sequences $0\\to\\pi_k(\\Omega M)\\to\\pi_k(\\Lambda M)\\to\\pi_k(M)\\to 0$. Thus, for $k\\ge 1$ we have canonical isomorphisms\n\\begin{equation*}\n\\pi_k(\\Lambda M)\\cong\\pi_k(\\Omega M)\\rtimes\\pi_k(M)\\cong\\pi_{k+1}(M)\\rtimes\\pi_k(M).\n\\end{equation*} Then for $\\Lambda M=E^{-1}(-\\infty,\\infty)$, define by $\\Lambda M':=E^{-1}_{\\varepsilon}(-\\infty,\\infty)$ the free loop space corresponding to the perturbed $M$, say $M'$, after a $C^0$-perturbation $E_{\\varepsilon}$ of the energy functional. For a $C^0$-perturbation, $E_{\\varepsilon}$ is constant in the ball of radius $\\varepsilon\/2$ and it coincides with $E$ outside the ball of radius $\\varepsilon$. The point $\\gamma\\in\\Lambda M$ has index $\\lambda$. Then let $\\alpha$ be the number of critical points, excluding $\\gamma$, of index $\\lambda$. All of the infinitely many local minima or maxima in the $\\varepsilon$-neighborhood created by the $C^0$-perturbation are also of index $\\lambda$. 
Therefore, the number of critical points of index $\\lambda$ changes from $1+\\alpha$ to $+\\infty$ since infinitely many critical points of index $\\lambda$ are added in addition to the already existing $(1+\\alpha)$-many. It follows that $\\Lambda M$ and $\\Lambda M'$ are homotopy equivalent. Likewise, $\\pi_k(\\Omega M)\\cong \\pi_k(\\Omega M')$.\n\nWe will now use these isomorphisms $\\pi_k(\\Lambda M)\\cong\\pi_k(\\Lambda M')$ to show that the homotopy type of $M$ is fixed. From the canonical isomorphisms, $\\pi_k(\\Lambda M)\\cong \\pi_k(\\Omega M)\\rtimes\\pi_k(M)\\cong \\pi_k(\\Lambda M')\\cong \\pi_k(\\Omega M')\\rtimes\\pi_k(M')$. By $\\pi_k(\\Omega M)\\cong \\pi_k(\\Omega M')$, it follows that $\\pi_k(M)\\cong\\pi_k(M')$. Let $f:M\\to M'$ be a map inducing these isomorphisms; because $\\pi_k(M,x_0)\\cong\\pi_k(M',f(x_0))$ for $x_0\\in M$, it follows that $f$ is a homotopy equivalence, so $M$ and $M'$ are of the same homotopy type. This is similarly seen through the isomorphism $\\pi_k(\\Omega M)\\cong\\pi_{k-1}(M)$.\n\\end{proof} \n\nLet $Y_k=Y_{k-1}\\bigcup_{\\alpha_k\\in A_k}^{\\theta_k} H^{\\lambda_k}$ be a manifold $Y_{k-1}$ with $|A_k|$-many $\\lambda_k$-handles $H^{\\lambda_k}=D^{\\dim Y_{k-1}-\\lambda_k}\\times D^{\\lambda_k}$ attached along the embedding map $\\theta_k:S^{\\lambda_k-1}\\times D^{\\dim Y_{k-1}-\\lambda_k}\\to\\partial Y_{k-1}$ for $1\\le k\\le N$ and $1\\le \\alpha_k\\le |A_k|$ with $Y_0:=D^{\\infty}$ a Baire space and $Y:=Y_N$. Thus, free loop space has a handle decomposition given by $\\Lambda M:=Y=D^{\\infty}\\bigcup_{k=1}^N\\left[\\bigcup_{\\alpha_k\\in{A_k}}^{\\theta_k}H^{\\lambda_k}\\right]$ where $\\lambda_k$ is the index of $|A_k|$-many critical points of the Morse functional $E:\\Lambda M\\to\\mathbb{R}.$\n\nThere is a deformation retraction of $Y_k=Y_{k-1}\\bigcup_{\\alpha_k\\in A_k}^{\\theta_k} H^{\\lambda_k}$ onto the core $Y_{k-1}\\bigcup_{\\alpha_k\\in A_k}^{\\theta_k}(D^{\\lambda_k}\\times \\{0\\})$ so homotopically attaching a $\\lambda_k$-handle is the same as attaching a $\\lambda_k$-cell. Thus, $Y_k\\approx Y_{k-1}\\bigcup_{\\alpha_k\\in A_k}^{\\theta_k} (D^{\\lambda_k}\\times \\{0\\})\\cong Y_{k-1}\\bigcup_{\\alpha_k\\in A_k}^{\\theta_k}e^{\\lambda_k}$ for $e^{\\lambda_k}$ a $\\lambda_k$-cell. In the handle decomposition, we collapse each handle $D^m\\times D^{n-m}$ to obtain a homotopy equivalent CW complex $X\\approx_G\\Lambda M$ under homotopy $G:X\\to Y$. Thus, let $n_{\\lambda_i}$ denote the number of $\\lambda_i$-handles of $\\Lambda M$. Then $\\Lambda M$ has an infinite-dimensional CW structure with $n_{\\lambda_i}$-many $\\lambda_i$-cells for each $1\\le i\\le N$.\n\nIn particular, we construct the CW complex for $\\Lambda M$ as follows:\n\n\\begin{enumerate}[label=(\\roman*)]\n\\item Set $X_0:=D^{\\infty}$, which is regarded as an $\\infty$-cell $D^\\infty\\times \\{0\\}$ modulo a point $\\{0\\}$.\n\\item Form the $k$-skeleton $X_k$ from $X_{k-1}$ by attaching $\\lambda_k$-cells $e_{\\alpha}^{\\lambda_k}$ via attaching maps $\\phi_{\\alpha}^k:S^{\\lambda_k-1}\\to X_{k-1}$. Here, we abbreviate $X_{\\lambda_k}$ by $X_k$.\n\\item We stop the inductive process with $\\Lambda M\\approx X=X_N$ for $N< \\infty$.\n\\end{enumerate} \n\\begin{definition}\nEvery cell $e_{\\alpha}^k$ in the cell complex $X$ has characteristic map $\\Phi_{\\alpha}^k: D_{\\alpha}^k\\to X$ extending the attaching map $\\phi_{\\alpha}^k$. It is a homeomorphism from the interior of $D_{\\alpha}^k$ onto $e_{\\alpha}^k$. 
Namely, $\\Phi_{\\alpha}^k$ is the composition $D_{\\alpha}^k\\hookrightarrow X_{k-1}\\sqcup_{\\alpha} D_{\\alpha}^k\\to X_k\\hookrightarrow X$ where the middle map is a quotient map defining $X_k$. \n\\end{definition}\nThe CW complex $X$ is infinite-dimensional because the $0$-th skeleton $X_0=D^{\\infty}$ is infinite-dimensional. We topologize $X$ by saying a set $A\\subset X$ is open (resp. closed) if and only if $A\\cap X_k$ is open (resp. closed) in $X_k$ for each $k$. That is, a set $A\\subset X$ is open (resp. closed) if and only if $\\Phi_{\\alpha}^{-1}(A)$ is open (resp. closed) in $D_{\\alpha}^k$ for each characteristic map $\\Phi_{\\alpha}$. The first direction follows from continuity of the characteristic maps $\\Phi_{\\alpha}$. In the other direction, suppose $\\Phi_{\\alpha}^{-1}(A)$ is open in $D_{\\alpha}^k$ for each $\\Phi_{\\alpha}$. Further suppose by induction on $k$ that $A\\cap X_{k-1}$ is open in $X_{k-1}$. Since $\\Phi_{\\alpha}^{-1}(A)$ is open in $D_{\\alpha}^k$ for all $\\alpha$, $A\\cap X_k$ is open in $X_k$ by definition of the quotient topology on $X_k$. Thus $A$ is open in $X$, from which it follows that $X$ is the quotient space of $\\bigsqcup_{k,\\alpha}D_{\\alpha}^k$. The $(k+1)$-th skeleton is obtained from the $k$-skeleton by attaching $(k+1)$-cells; that is, there is a pushout diagram:\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large, row sep=large]\n\\bigsqcup S^k \\arrow{d} \\arrow{r}\n& X_k \\arrow{d} \\\\\n\\bigsqcup D^{k+1} \\arrow{r}\n& X_{k+1}\n\\end{tikzcd}\n\\end{equation*}\nwhere the disjoint unions are over all $(k+1)$-cells of $X$.\n\n\\begin{lemma}\\label{CWcomplex}\nSince $X$ is a CW complex, it follows that:\n\\begin{enumerate}[label=(\\roman*)]\n\\item $H_k(X_n,X_{n-1})=\\begin{cases}\n\\mathbb{Z}^{\\#\\lambda_n-\\text{cells}}, & \\text{if } k =n\\\\\n0, & \\text{if } k\\ne n.\n\\end{cases}$ \\\\ That is, $H_k(X_n,X_{n-1})$ is free abelian for $k=n$ with a basis in one-to-one correspondence with the $n$-cells of $X$.\n\\item $H_k(X_n)=0$ if $k>n$. In particular, if $X$ is finite-dimensional, then $H_k(X)=0$ if $k>\\dim (X)$. Note, we exclude the latter condition because $X$ under present consideration is, indeed, infinite-dimensional.\n\\item The map $H_k(X_n)\\to H_k(X)$ induced by the inclusion $i:X_n\\hookrightarrow X$ is an isomorphism for $k<n$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n(i) This is immediate from the CW structure: $X_n\/X_{n-1}$ is a wedge of $n$-spheres, one for each $n$-cell of $X$. \\\\\n(ii) Consider the long exact sequence of the pair $(X_n,X_{n-1})$. Since $H_k(X_n,X_{n-1})=0$ for $k\\ne n$ by (i), for $n\\ne k +1$ and $n\\ne k$, we have that \n\\begin{equation*}\nH_k(X_n)\\cong H_k(X_{n-1})\\cong H_k(X_{n-2})\\cong\\dots\\cong H_k(X_0).\n\\end{equation*} In the case under consideration, $X_0$ is a Baire space $D^{\\infty}$ so $H_k(D^{\\infty})=0$ if $k\\ne 0$. Then for $k>n$, $H_k(X_n)=0$ as was to be shown. \\\\\n(iii) We prove the statement for an infinite-dimensional CW complex $X$. Let $k<n$. Every singular chain in $X$ has compact image and is therefore carried by some finite skeleton $X_m$ with $m\\ge n$; since $H_k(X_m)\\cong H_k(X_n)$ for all $m\\ge n$ by the argument in (ii), the inclusion induces an isomorphism $H_k(X_n)\\cong H_k(X)$.\n\\end{proof}\nFor $b>a$, define the sublevel sets $\\Lambda^{a}:=\\{p\\in\\Lambda M: E(p)\\le a\\}$ and the sets $\\Lambda^{[a,b]}$ by $\\Lambda^{[a,b]}=\\{p\\in\\Lambda M: a\\le E(p)\\le b\\}$. Recall, if $\\Lambda^{[a,b]}$ is compact and contains no critical point of $E:\\Lambda M\\to\\mathbb{R}$ then $\\Lambda^a$ is diffeomorphic to $\\Lambda^b$. On the other hand, suppose that $\\Lambda^{[a,b]}$ is compact and contains exactly one non-degenerate critical point of $E$. 
If the index of the critical point is $\\lambda$ then $\\Lambda^b$ is obtained from $\\Lambda^a$ by successively attaching a handle of index $\\lambda$ and a collar.\n\\begin{proposition}\nEvery smooth Hilbert manifold admits a Morse function $E:\\Lambda M\\to\\mathbb{R}$ such that $\\Lambda^a$ is compact for every $a\\in\\mathbb{R}$.\n\\end{proposition}\n\\begin{theorem}\\label{handledecomposition}\nLet $E:\\Lambda M\\to\\mathbb{R}$ be a Morse function on a paracompact manifold $\\Lambda M$ such that each $\\Lambda^a$ is paracompact. Let $p_1,p_2,\\dots,p_k,\\dots$ be the critical points of $E$ and let the index of $p_i$ be $\\lambda_i$. Then $\\Lambda M$ has a handle decomposition which admits a handle of index $\\lambda_i$ for each $i$. \n\\end{theorem}\n\\begin{proof}\nConsider the Morse function $E:\\Lambda M\\to\\mathbb{R}$. Let $c_1<c_2<\\cdots<c_m$ be its critical values and choose regular values $a_0<c_1<a_1<c_2<\\cdots<c_m<a_m$. As before, for $b>a$, we let $\\Lambda^{[a,b]}=\\{p\\in\\Lambda M:a\\le E[p]\\le b\\}$. Consequently, $\\Lambda^{a_{k+1}}=\\Lambda^{a_k}\\cup\\Lambda^{[a_k,a_{k+1}]}$ and $\\Lambda M=\\bigcup_{k=0}^{m-1}\\Lambda^{[a_k,a_{k+1}]}$. The set $\\Lambda^{[a_k,a_{k+1}]}$ is paracompact so it only contains finitely many critical points of $E$. If $\\lambda_{k+1}$ is the index of $c_{k+1}$, then $\\Lambda^{a_{k+1}}$ has the homotopy type of $\\Lambda^{a_k}$ with a handle of index $\\lambda_{k+1}$ attached.\n\\end{proof}\n\\begin{corollary}\nEvery smooth manifold has the homotopy type of a CW complex.\n\\end{corollary}\n\\begin{proof}\nThe set $\\Lambda^b$ is homotopy equivalent to $\\Lambda^a$ with a $\\lambda$-cell attached for each critical point of index $\\lambda$ in $\\Lambda^{[a,b]}$.\n\\end{proof}\n\n\\begin{definition}\\label{Smithnormal}\nLet $A\\in\\text{Mat}_{m,n}(\\mathbb{Z})$ be a matrix over the principal ideal domain $\\mathbb{Z}$. Then the Smith normal form is a factorization $A=UDV$ for which:\n\\begin{enumerate}[label=(\\roman*)]\n\\item $D\\in\\text{Mat}_{m,n}(\\mathbb{Z})$ is diagonal where $d_{i,j}=0$ if $i\\ne j$.\n\\item Every diagonal entry of $D$ divides the next, i.e. $d_{i,i}|d_{i+1,i+1}$. Such diagonal entries are called the elementary divisors of $A$.\n\\item $U\\in\\text{Mat}_{m,m}(\\mathbb{Z})$ and $V\\in\\text{Mat}_{n,n}(\\mathbb{Z})$ are invertible over $\\mathbb{Z}$. That is, $\\det U,\\det V=\\pm 1$, i.e. $U\\in GL_m(\\mathbb{Z})$ and $V\\in GL_n(\\mathbb{Z})$.\n\\end{enumerate} \n\\end{definition}\nSimilarly, for $A\\in\\text{Mat}_{m,n}(\\mathbb{Z})$ an integer matrix, the Smith normal form $A=UDV$ may be written as $D=U^{-1}AV^{-1}$, which is permissible by the invertibility of $U$ and $V$. Here, the product $U^{-1}AV^{-1}$ is\n\\begin{equation}\nSNF(A)=\\begin{bmatrix}\\alpha_1 & 0 & 0 & & \\cdots & & 0\\\\0& \\alpha_2 & 0 & & \\cdots & & 0\\\\ 0 & 0 & \\ddots & & & & 0\\\\ \\vdots& & & \\alpha_r & & & \\vdots \\\\ & & & & 0 & & \\\\ & & & & & \\ddots & \\\\ 0 & & & \\cdots & & & 0 \\end{bmatrix}\n\\end{equation} where the diagonal elements $\\alpha_i$, unique up to multiplication by a unit of the principal ideal domain, satisfy $\\alpha_i|\\alpha_{i+1}$ for all $1\\le i<r$. For $\\varepsilon>0$, define \n\\begin{equation}\nE_{\\varepsilon}(x):=E(\\sigma(\\|x\\|\/\\varepsilon)x)=\\int_{S^1}\\|\\dot{\\sigma}(\\|\\gamma\\|\/\\varepsilon)\\gamma(t)+\\sigma(\\|\\gamma\\|\/\\varepsilon)\\dot{\\gamma}(t)\\|^2dt.\n\\end{equation}\nTherefore, $E_{\\varepsilon}$ is constant in the ball of radius $\\varepsilon\/2$, it coincides with $E$ outside the ball of radius $\\varepsilon$, and $\\|E-E_{\\varepsilon}\\|_{\\infty}=o(1)$ for $\\varepsilon\\to 0$. The homotopy types of $\\Lambda M$ and $M$ are fixed under such a perturbation by Lemma \\ref{invariance}, i.e. 
for $f:M\\to M'$ and $x_0\\in M$, $\\pi_k(M,x_0)\\cong\\pi_k(M',f(x_0))$ for all $k$. Assume that we perturb $\\Lambda M$ via $E_{\\varepsilon}$ such that we create infinitely many minima or maxima in a local neighborhood of a critical point $\\gamma\\in X_k\\subset \\Lambda M$ of index $\\lambda_k$. Then, by Lemma \\ref{invariance}, the homotopy type of $M$ is fixed and the number of critical points of index $\\lambda_k$ tends to infinity, i.e. $n_{\\lambda_k}\\to\\infty$. Then the $k$-th Betti number of $H_k(\\Lambda M;\\mathbb{Z})$ becomes \n\\begin{equation*}\n\\lim_{n_{\\lambda_k}\\to\\infty}n_{\\lambda_k}-\\min\\left(n_{\\lambda_{k-1}},n_{\\lambda_k}\\right)-\\min\\left(n_{\\lambda_k},n_{\\lambda_{k+1}}\\right)=+\\infty\n\\end{equation*} where it is assumed that $n_{\\lambda_i}$, $1\\le i\\le N$, is finite for all $i\\ne k$. It follows that the sequence of Betti numbers $\\{b_k(\\Lambda M;\\mathbb{Z})\\}_{k\\ge 0}$ is unbounded. Since the homotopy type of $M$ is invariant under the $C^0$-perturbation of the energy functional, this can be done for an arbitrary closed Riemannian manifold. Thus, every closed Riemannian manifold $M$ carries infinitely many geometrically distinct, non-constant, prime closed geodesics. \n\nIf we instead compute cellular homology over a field $K$, say $\\mathbb{Q}$, then we may simply use Gaussian elimination to reach the factorization $M_k=U_kD_kV_k$ for $U_k\\in\\text{Mat}_{m_k,m_k}(\\mathbb{Q})$ and $V_k\\in\\text{Mat}_{n_k,n_k}(\\mathbb{Q})$ where $U_kD_k$ is the inverse of the Gaussian elimination matrix $E_k$ which transforms $M_k$ to $V_k$. Note, each diagonal entry of $U_k$ is $+1$ since $D_k$ is the diagonal matrix that contains all the pivots of $M_k$. Then the rank of $\\text{im }M_{k+1}$ is the number of non-zero rows of $D_{k+1}$, i.e. $l_{k+1}$, and the rank of $\\ker M_k$ is the number of zero columns of $D_k$, i.e. $n_k-l_k$. Then the $k$-th homology group of $\\Lambda M$ is\n\\begin{equation}\nH_k(\\Lambda M;\\mathbb{Q})=\\ker M_k\/\\text{im }M_{k+1}=\\mathbb{Q}^{n_k-l_{k+1}-l_k}\n\\end{equation} and the $k$-th Betti number is given by $b_k(\\Lambda M;\\mathbb{Q})=n_k-l_k-l_{k+1}$. That is, all torsion for homology vanishes.\n\\end{proof} \n\\begin{remark}\nFor a finite CW complex $X$, the Euler characteristic $\\chi(X)$ is defined to be the alternating sum $\\sum_k(-1)^kC_k$ where $C_k$ is the number of $k$-cells of $X$. The Euler characteristic of $X$ is independent of the choice of CW structure on $X$. We similarly define the Euler characteristic of an infinite-dimensional CW complex $\\Lambda M$ to be the alternating sum $\\sum_k(-1)^kC_k$ where $C_k$ is the number of $k$-cells of $\\Lambda M$ if and only if all homology groups have finite rank and all but finitely many are zero. In this case, $\\Lambda M\\approx_GX_N$ is such that $H_k(X_N)=0$ if $k>N$ by Lemma \\ref{CWcomplex} so that all but finitely many homology groups are zero because the index of any critical point is finite. \n\\end{remark}\n\\begin{definition}\nThe Poincar\\'e polynomial of $\\Lambda M$ is given by \n\\begin{equation*}\nP_{\\Lambda M}(t)=\\sum_{k=0}^{\\infty}(\\text{rank }H_k(\\Lambda M;\\mathbb{Z}))t^k.\n\\end{equation*} If the Euler characteristic is well-defined, i.e. 
if $\\text{rank }H_k(\\Lambda M;\\mathbb{Z})=0$ for $k\\ge N$ and a fixed $N$, then $\\chi(\\Lambda M)=P_{\\Lambda M}(-1)$.\n\\end{definition}\nSince $H_k(\\Lambda M;\\mathbb{Z})\\cong H_k(X_N;\\mathbb{Z})=0$ for $k>N$, we have:\n\\begin{equation} \n\\begin{split}\n\\chi(\\Lambda M)& =\\sum_{k=0}^{\\infty}(-1)^k\\Big(n_{\\lambda_k}-\\min\\left(n_{\\lambda_{k-1}},n_{\\lambda_k}\\right)-\\min\\left(n_{\\lambda_k},n_{\\lambda_{k+1}}\\right)\\Big) \\\\ \n&=\\sum_{k=0}^N(-1)^k\\Big(n_{\\lambda_k}-\\min\\left(n_{\\lambda_{k-1}},n_{\\lambda_k}\\right)-\\min\\left(n_{\\lambda_k},n_{\\lambda_{k+1}}\\right)\\Big).\n\\end{split}\n\\end{equation} Homology $H_k$ is sub-additive due to the increasing sequence of inclusions $D^{\\infty}=X_0\\subset X_1\\subset X_1\\subset X_2\\subset\\dots\\subset X_N\\approx\\Lambda M$ so we find\n\\begin{equation}\n\\text{rank }H_k(\\Lambda M;\\mathbb{Z})\\le \\sum_i\\text{rank }H_k(X_i,X_{i-1};\\mathbb{Z})=\\text{rank }H_k(X_k,X_{k-1};\\mathbb{Z})=n_{\\lambda_k}\n\\end{equation} from \n\\begin{equation*}\nH_k(X_i,X_{i-1};\\mathbb{Z})=\\begin{cases}\n\\mathbb{Z}^{\\#\\lambda_i-\\text{cells}}, &\\text{if }k=i\\\\\n0, & \\text{if }k\\ne i.\n\\end{cases}\n\\end{equation*} That is, the rank of $H_k(\\Lambda M;\\mathbb{Z})$ is less than or equal to the number of critical points of index $\\lambda_k$. Equivalently, $\\text{rank }H_k(\\Lambda M;\\mathbb{Z})=n_{\\lambda_k}-l_k-l_{k+1}\\le n_{\\lambda_k}$. Since the $k$-th Betti number of the free loop space $\\Lambda M$ is given by $b_k(\\Lambda M;\\mathbb{Z})=n_{\\lambda_k}-\\min\\left(n_{\\lambda_{k-1}},n_{\\lambda_k}\\right)-\\min\\left(n_{\\lambda_k},n_{\\lambda_{k+1}}\\right)$, the least upper bound of the sequence of Betti numbers is $\\sup_{k\\in\\mathbb{N}}b_k(\\Lambda M;\\mathbb{Z})=n_{\\lambda_k}$. Thus, the supremum of the $k$-th Betti number is controlled by the number of handles of index $\\lambda_k$.\n\n\\section{Acknowledgements}\nI would like to sincerely thank Dr. John Pardon for his guidance and suggestions regarding the content of this paper. I also thank Dr. Todd Trimble for his many useful conversations on techniques employed in this analysis.\n\n\n\\newpage\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Pump and Stokes Rabi frequency calibration}\n\nWe calibrate the pump Rabi frequency by measuring the Feshbach (FB) molecules loss rate as a function of laser power when the laser is resonant with the $\\ket{J'=1,M'_J=1}$ state. In the regime where $\\Omega_P \\ll \\Gamma$, the depletion rate is $1\/\\tau \\approx \\Omega_P^2\/\\Gamma$. We fit the inverse FB molecule lifetime versus pump power as shown in Fig.~\\ref{fig:pumpRabi}, and extract a Rabi frequency of $\\Omega_P\/2\\pi = 6.2(8)~{\\rm MHz} \\times \\sqrt{\\frac{P_P}{1~{\\rm mW}}}$.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[]{smfig1.pdf}\n \\caption{Depletion rate of Feshbach molecules as a function of pump laser power, for calibration of pump laser Rabi frequency.}\n \\label{fig:pumpRabi}\n\\end{figure}\n\nWe calibrate the Stokes laser Rabi frequency using Autler-Townes spectroscopy. After locating the rovibrational ground state, we tune the Stokes laser onto resonance with the $\\ket{J'=1,M'=1}$ state, and measure the depletion of Feshbach molecules as a function of pump laser frequency. 
We observe an Autler-Townes doublet feature as shown in Fig.~\\ref{fig:ATdoublet}, and fit it to a model function for the 2-body survival probability $P_{\\rm Na+Cs}$ after a pulse duration $t$ \\cite{Fleischhauer2005}\n\\begin{equation}\n P_{\\rm Na+Cs}(t) = P_{\\rm atom} + P_{\\rm mol} \\exp{\\left( - \\Omega_P^2 t \\frac{4 \\Gamma (\\Delta_P-\\Delta_S)}{|\\Omega_S^2 + 2 i (\\Delta_P-\\Delta_S) (\\Gamma + 2 i \\Delta_P)|^2} \\right)},\n\\end{equation}\nwhere $P_{\\rm atom}$ is an atomic background population, $P_{\\rm mol}$ is the Feshbach molecule creation fidelity, and $\\Delta_{P,S}$ are the one-photon detunings of pump and Stokes lasers, respectively. We obtain a value of $\\Omega_S\/2\\pi=283(9)~{\\rm MHz}$ at $P_S=3.2~{\\rm mW}$, or $\\Omega_S\/2\\pi=158(8)~{\\rm MHz} \\times \\sqrt{P_S\/(1~{\\rm mW})}$ for the Stokes laser Rabi frequency. \n\\begin{figure}[htb]\n \\centering\n \\includegraphics[]{smfig2.pdf}\n \\caption{Autler-Townes splitting of $\\ket{c^3\\Sigma_1,v'=26,J'=1,M'_J=1}$ used for calibration of Stokes laser Rabi frequency.}\n \\label{fig:ATdoublet}\n\\end{figure}\n\n\n\\section{Motional ground state fraction}\nIn Ref.~\\cite{Zhang2020}, we estimated that the center of mass (COM) ground state fraction of Feshbach molecules was 77(5)\\% using Raman sideband thermometry. In that work, we also observed a Feshbach molecule creation fidelity of 47(1)\\%, consistent with a relative motional ground state population of 58(4)\\%. Presently, we observe a slightly reduced Feshbach molecule creation fidelity of 38(3)\\%, while the atomic ground state cooling conditions have not changed from Ref.~\\cite{Zhang2020}. The reduced fidelity of Feshbach molecule creation arises from an axial misalignment of the dual species optical tweezers, which primarily leads to heating of the Na atom during the merge process. Because Na is much lighter than Cs, heating of the Na atom primarily contributes to excitation of the relative motional degree of freedom, as opposed to the COM. Using the Feshbach molecule creation fidelity as a proxy for thermometry, we estimate that the COM ground state fraction of Feshbach molecules under present conditions is 75(5)\\%. In the following, we neglect state changes of the $\\sim 25\\%$ initially motionally excited molecules.\n\nTwo main effects cause COM motional excitation of the molecule during Raman transfer. The first arises from the differential wavenumber of the pump and Stokes lasers, $\\Delta k$. In the Raman transfer process, motional sidebands of the COM degree of freedom are not resolved: The Raman Rabi frequency is of order $\\Omega_R\\approx 2\\pi \\times 200~{\\rm kHz}$, while the axial trap frequency during transfer is $\\omega_z\\approx 2 \\pi \\times 3~{\\rm kHz}$. Coherent absorption of a pump photon and emission of a Stokes photon during Raman transfer thus creates a coherent state of the molecule's COM motion with momentum $\\hbar \\Delta k$. The mean occupation number resulting from this kick is $\\bar{n} \\approx 0.085$, giving a 92\\% probability to remain in the ground motional state.\n\nA second source of motional excitation during Raman transfer is the mismatch of initial and final state trap frequencies, due to a difference in polarizability of each state to the trapping laser. 
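Before quantifying this second effect, we note that the recoil-kick estimate above is straightforward to check numerically: the kick maps the motional ground state onto a coherent state whose ground-state weight is the Poisson factor $e^{-\\bar{n}}$. The short Python snippet below (a standalone numerical check, not part of our analysis code; the value of $\\bar{n}$ is simply the estimate quoted above) reproduces the $\\approx 92\\%$ figure.\n\\begin{verbatim}\nimport math\n\nnbar = 0.085                 # mean phonon occupation after the recoil kick\np_ground = math.exp(-nbar)   # Poisson weight of n = 0 for a coherent state\nprint(round(p_ground, 3))    # ~0.918, i.e. about 92% remain in n = 0\n\\end{verbatim}\n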
The Feshbach state polarizability is estimated as ${\\rm Re}[\\alpha_{\\rm Na+Cs}(1064~{\\rm nm})] = 1397~a_0^3$ from the sum of atomic polarizabilities, while the $X^1\\Sigma$ polarizability is ${\\rm Re}[\\alpha_{X^1\\Sigma}(1064~{\\rm nm})] = 936~a_0^3$ from \\cite{Vexiau2017}. The ratio of trap depths is then $1397\/936 \\approx 1.5$, so that ratio of trap frequencies is $\\sqrt{1.5}$ which we call $f$ below, and the ratio of harmonic oscillator lengths is $(1.5)^{1\/4}$. The probability of remaining in the motional ground state is\n\\begin{equation}\n P_{0 \\leftarrow 0} = |\\bra{n'_x=0,n'_y=0,n'_z=0} e^{i \\Delta k z} \\ket{n_x=0,n_y=0,n_z=0}|^2.\n\\end{equation}\nThe initial Feshbach molecule COM spatial wavefunction is essentially projected onto the rovibrational ground state COM wavefunction. Because the Raman transfer lasers propagate along the $z$ axis, we can separate the integrals and evaluate them straightforwardly, finding\n\\begin{equation}\n P_{0\\leftarrow 0} = \\frac{8 f^{3\/2}}{(f+1)^3} \\exp{\\left(- \\frac{f \\, \\eta^2}{2f + 2}\\right)},\n\\end{equation}\nwhere $\\eta = \\Delta k \\, z_{\\rm ho} \\approx 0.413$ is the Lamb-Dicke parameter. We find $P_{0\\leftarrow 0} \\approx 0.94$. \n\nIn the worst case, in which we neglect any cancellation of these two motional excitation effects, we would expect a reduction of the motional ground state population by a factor 0.862. We estimate the overall COM ground state fraction of the rovibrational ground state to be 75(5)\\% $\\times$ 0.862 = 65(5)\\%. Both excitation effects could be reduced by increasing the trap frequency in order to perform Raman transfer in a sideband-resolved regime, however this will be limited by scattering of both trap and Raman lasers. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Entropy: In the Eye of the Beholder}\nInformation is a central concept in our daily life. We rely on information in order to make sense of the world: to make ``informed\" decisions. We use information technology in our daily interactions with people and machines. Even though most people are perfectly comfortable with their day-to-day understanding of information, the precise definition of information, along with its properties and consequences, is not always as well understood. I want to argue in this comment that a precise understanding of the concept of information is crucial to a number of scientific disciplines. Conversely, a vague understanding of the concept can lead to profound misunderstandings, within daily life and within the technical scientific literature. My purpose is to introduce the concept of information--mathematically defined--to a broader audience, with the express intent of eliminating a number of common misconceptions that have plagued the progress of information science in different fields.\n\nWhat is information? Simply put, information is that which allows you (who is in possession of that information) to make predictions with accuracy better than chance. Even though the former sentence appears glib, it captures the concept of information fairly succinctly. But the concepts introduced in this sentence need to be clarified. What do I mean with prediction? What is ``accuracy better than chance\"? Predictions of what? \n\nWe all understand that information is useful. When is the last time that you have found information to be counterproductive? Perhaps it was the last time you watched the News. 
I will argue that, when you thought that the information you were given was not useful, then what you were exposed to was most likely not information. That stuff, instead, was mostly entropy (with a little bit of information thrown in here or there). Entropy, in case you have not yet come across the term, is just a word we use to quantify how much isn't known.\n\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``But, isn't entropy the same as information?\"}\n\\end{center}\nOne of the objectives of this comment is to make the distinction between the two as clear as possible. Information and entropy are two very different objects. They may have been used synonymously (even by Claude Shannon--the father of information theory--thus being responsible in part for a persistent myth) but they are fundamentally different. If the only thing you will take away from this article is your appreciation of the difference between entropy and information, then I will have succeeded.\n\nBut let us go back to our colloquial description of what information is, in terms of predictions. ``Predictions of what\"? you should ask. Well, in general, when we make predictions, they are about a system that we don't already know. In other words, an {\\em other} system. This other system can be anything: the stock market, a book, the behavior of another person. But I've told you that we will make the concept of information mathematically precise. In that case, I have to specify this ``other system\" as precisely as I can. I have to specify, in particular, which states the system can take on. This is, in most cases, not particularly difficult. If I'm interested in quantifying how much I don't know about a phone book, say, I just need to tell you the number of phone numbers in it. Or, let's take a more familiar example (as phone books may appeal, conceptually, only to the older crowd among us), such as the six-sided fair die. What I don't know about this system is which side is going to be up when I throw it next. What you do know is that it has six sides. How much don't you know about this die? The answer is not six. This is because information (or the lack thereof) is not defined in terms of the number of unknown states. Rather, it is given by the {\\em logarithm} of the number of unknown states. \n\\vskip 0.5cm\n\\begin{center}\n{\\em ``Why on Earth introduce that complication?\"}, you ask.\n\\end{center}\nWell, think of it this way. Let's quantify your uncertainty (that is, how much you don't know) about a system (System One) by the number of states it can be in. Say this is $N_1$. Imagine that there is another system (System Two), and that one can be in $N_2$ different states. How many states can the joint system (System One And Two Combined) be in? Well, for each state of System One, there can be $N_2$ number of states. So the total number of states of the joint system must be $N_1\\times N_2$. But our uncertainty about the joint system is not $N_1\\times N_2$. Our uncertainty adds, it does not multiply. And fortunately the logarithm is that one function where the log of a product of elements is the sum of the logs of the elements. So, the uncertainty (let's call it $H$) about the system $N_1\\times N_2$ is the logarithm of the number of states\n$$H(N_1N_2)=\\log(N_1N_2)=\\log(N_1) + \\log(N_2).$$\nLet's return to the six-sided die. You know, the type you've known most of your life. What you don't know about the state of this die (your uncertainty) {\\em before} throwing the die is $\\log 6$. 
When you peek at the number that came up, you have reduced your uncertainty (about the outcome of this throw) to zero. This is because you made a perfect measurement. (In an imperfect measurement, you only got a glimpse of the surface that rules out a ``1\" and a ``2\", say.) \n\nWhat if the die wasn't fair? Well that complicates things. Let us for the sake of the argument assume that the die is so unfair that one of the six sides (say, the ``six\") can never be up. You might argue that the {\\em a priori} uncertainty of the die (the uncertainty before measurement) should now be $\\log 5$, because only five of the states can be the outcome of the measurement. But how are you supposed to know this? You were not told that the die is unfair in this manner, so as far as you are concerned, your uncertainty is still $\\log 6$. \n\nAbsurd, you say? You say that the entropy of the die is whatever it is, and does not depend on the state of the observer? Well I'm here to say that if you think that, then you are mistaken. Physical objects do not have an intrinsic uncertainty. I can easily convince you of that. You say the fair die has an entropy of $\\log 6$? Let's look at an even more simple object: the fair coin. Its entropy is $\\log 2$, right? What if I told you that I'm playing a somewhat different game, one where I'm not just counting whether the coin comes up heads or tails, but am also counting the angle that the face has made with a line that points towards True North. And in my game, I allow four different quadrants, like in Fig.~\\ref{fig:coin} below.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=4in]{Die.pdf} \n \\caption{A fair coin with entropy $\\log_2 8=3$ bits. In the figure on the left, the outcome is ``Tails, quadrant II\", while the coin on the right landed as ``Heads, quadrant I\". }\n \\label{fig:coin}\n\\end{figure}\n\nSuddenly, the coin has $2\\times4$ possible states, just because I told you that in my game the angle that the face makes with respect to a circle divided into 4 quadrants is interesting to me. It's the same coin, but I decided to measure something that is actually measurable (because the coin's faces can be in different orientation, as opposed to, say, a coin with a plain face but two differently colored sides). And you immediately realize that I could have divided the circle into as many quadrants as I can possibly resolve by eye. \n\nAll right, fine, you say, so the entropy is $\\log(2\\times N)$ where $N$ is the number of resolvable angles. But you know, what is resolvable really depends on the measurement device you are going to use. If you use a microscope instead of your eyes, you could probably resolve many more states. Actually, let's follow this train of thought. Let's imagine I have a very sensitive thermometer that can sense the temperature of the coin. When throwing it high (the coin, not the thermometer) the energy the coin absorbs when hitting the surface will raise the temperature of the coin slightly, compared to a coin that was tossed gently. If I so choose, I could include this temperature as another characteristic, and now the entropy is $\\log(2\\times N\\times M)$, where $M$ is the number of different temperatures that can be reliably measured by the device. And now you realize that I can drive this to the absurd, by deciding to consider the excitation states of the molecules that compose the coin, or of the atoms composing the molecules, or nuclei, the nucleons, or even the quarks and gluons. 
\n\nThe entropy of a physical object, it dawns on you, is not defined unless {\\em you} tell me which degrees of freedom are important to {\\em you}. In other words, it is defined by the number of states that can be resolved by the measurement that you are going to be using to determine the state of the physical object. If it is heads or tails that counts for you, then $\\log 2$ is your uncertainty. If you play the ``4-quadrant\" game, the entropy of the coin is $\\log 8$, and so on. Which brings us back to six-sided die that has been mysteriously manipulated to never land on ``six\". You (who do not know about this mischievous machination) expect six possible states, so this dictates your uncertainty. Incidentally, how do you even know the die has six sides it can land on? You know this from experience with dice, and having looked at the die you are about to throw. This knowledge allowed you to quantify your a priori uncertainty in the first place. (I'll discuss prior knowledge in much more detail in the next section)\n\nNow, you start throwing this weighted die, and after about twenty throws or so without a ``six\" turning up, you start to become suspicious. You write down the results of a longer set of trials, and still note this curious pattern of ``six\" {\\em never} showing up, but you find that the other five outcomes occur with roughly equal frequency. What happens now is that you adjust your expectation. You now hypothesize (a posteriori) that it is a weighted die with five equally likely outcome, and one outcome that never actually occurs. Now your {\\em expected} uncertainty is $\\log 5$. (Of course, you can't be 100\\% sure because you only took a finite number of trials.)\n\nBut you did learn something through all these measurements. You gained information. How much? Easy! It's the difference between your uncertainty before you started to be suspicious, minus the uncertainty after it dawned on you. The information you gained is just $\\log 6-\\log5$. How much is that? Well, you can calculate it yourself. You didn't give me the base of the logarithm you say? \nWell, that's true. Without specifying the logarithm's base, the information gained is not specified. It does not matter which base you choose: each base just gives units to your information gain. It's kind of like asking how much you weigh. Well, my weight is one thing. The number I give you depends on whether you want to know it in kilograms, or pounds. Or stones, for all it matters.\n\nIf you choose the base of the logarithm to be 2, well then your units will be called ``bits\" (which is what we all use in information theory land). But you may choose the Eulerian $e$ as your base. That makes your logarithms \"natural\", but your units of information (or entropy, for that matter) will be called \"nats\". You can define other units, but we'll keep it at that for the moment. \nSo, if you choose base 2 (bits), your information gain is $\\log_2(6\/5)\\approx 0.263$ bits. That may not sound like much, but in a Vegas-type setting this gain of information might be worth, well, a lot. Information that you have (and others do not) can be moderately valuable (for example, in a stock market setting), or it could mean the difference between life and death (in a predator\/prey setting). In any case, we should value information. 
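\n\n(If you want to check that number yourself, here is a little Python sketch, nothing official, just the two logarithms we have been talking about; it prints the gain in bits and, for comparison, in nats.)\n\\begin{verbatim}\nimport math\n\nH_before = math.log2(6)            # uncertainty about a fair six-sided die\nH_after  = math.log2(5)            # after learning that 'six' never shows up\nprint(H_before - H_after)          # ~0.263 bits of information gained\nprint(math.log(6) - math.log(5))   # the same gain, ~0.182 nats\n\\end{verbatim}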
\n\nAs an aside, this little example where we used a series of experiments to ``inform\" us that one of the six sides of the die will not, in all likelihood, ever show up, should have convinced you that we can never know the {\\em actual} uncertainty that we have about any physical object, unless the statistics of the possible measurement outcomes of that physical object are for some reason known with infinite precision (which you cannot attain in a finite lifetime). It is for that reason that I suggest to the reader to give up thinking about the uncertainty of any physical object, and be only concerned with differences between uncertainties (before and after a measurement, for example). The uncertainties themselves we call entropy. Differences between entropies (for example before and after a measurement) are called information. Information, you see, is real. Entropy on the other hand, is in the eye of the beholder.\n\n\\section{The Things We Know}\nIn the first section I have written mostly about entropy. How the entropy of a physical system (such as a die, a coin, or a book) depends on the measurement device that you will use for querying that system. That, come to think of it, the uncertainty (or entropy) of any physical object really is infinite, and made finite only by the finiteness of our measurement devices. \nOf course the things you could possibly know about any physical object is infinite! Think about it! Look at any object near to you. OK, the screen in front of you. Just imagine a microscope zooming in on the area framing the screen, revealing the intricate details of the material. The variations that the manufacturing process left behind, making each and every computer screen (or iPad or iPhone), essentially unique.\nIf this was a more formal article (as opposed to a comment), I would now launch into a discussion of how there is a precise parallel (really!) to renormalization in quantum field theory... but it isn't. So, let's instead delve head-first into the matter at hand, to prepare ourselves for a discussion of the concept of information.\n\nWhat does it even mean to have information? Yes, of course, it means that you know something. About something. Let's make this more precise. I'll conjure up the old ``urn\". The urn has things in it. You have to tell me what they are. So, now imagine that.....\n\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``Hold on, hold on. Who told you that the urn has things in it? Isn't that information already? Who told you that?\"}\n\\end{center}\n\nOK, fine, good point. But you know, the urn is really just a stand-in for what we call \"random variables\" in probability theory. A random variable is a ``thing\" that can take on different states. Kind of like the urn, that you draw something from? When I draw a blue ball, say, then the ``state of the urn\" is blue. If I draw a red ball, then the ``state of the urn\" is red. So, ``urn=random variable\". OK?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``OK, fine, but you haven't answered my question. Who told you that there are blue and red balls in it? Who?\"}\n\\end{center}\nLet me think about that. Here's the thing. When a mathematician defines a random variable, they tell you which state it can take on, and with what probability. Like: ``A fair coin is a random variable with two states. Each state can be taken on with equal probability one-half.\" When they give you an urn, they also tell you how likely it is to get a blue or a red ball from it. 
They just don't tell you what you will {\\em actually} get when you pull one out. \n\\vskip 0.5cm\n\\begin{center}\n{\\em ``But is this how real systems are? That you know the alternatives before asking questions?\"}\n\\end{center}\nAll right, all right. I'm trying to teach you information theory, the way it is taught in any school you would set your foot in. I concede, when I define a random variable, then I tell you how many states it can take on, and what the probability is that you will see each of these states, when you ``reach into the random variable\". Let's say that this info is magically conferred upon you. Happy now?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``Not really.\"}\n\\end{center}\nOK, let's just imagine that you spend a long time with this urn, and after a while of messing with it, you realize that:\n\n\\noindent A) This urn has balls in it.\\\\\n\\noindent B) From what you can tell, they are blue and red.\\\\\n\\noindent C) Reds occur more frequently than blues, but you're still working on what the ratio is.\\\\\nIs this enough?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``At least now we're talking. Do you know that you assume a lot when you say \"random variable?\"}\n\\end{center}\nAll right, you're making this more difficult than I intended it to be. According to standard lore, it appears that you're allowed to assume that you know something about the things you know nothing about. Let's just call these things ``common sense\". Like, that a coin has two sides. Or an urn that has red and blue balls in it. They could be any pair of colors, you do realize. And the things you {\\em don't know} about the random variable are the things that go beyond common sense. The things that, unless you had performed dedicated experiments to ascertain the state of the variables, you wouldn't already know. \n\nHow much don't you know about it? Easily answered using our good buddy Shannon's insight. How much you don't know is quantified by the ``entropy\" of the urn. That's calculated from the fraction of blue balls known to be in the urn, and the fraction of red balls in the urn. You know, these fractions that are common knowledge. So, let's say that fraction of blue is $p$. The fraction of red then is of course (you do the math) $1-p$. And the entropy of the urn is\n\\begin{eqnarray}\n H(X)=-p\\log p-(1-p)\\log(1-p)\\;. \\label{ent}\n\\end{eqnarray}\n\\vskip 0.5cm\n\\begin{center}\n{\\em In the first section you wrote that the entropy is $\\log N$, where $N$ is the number of states of the system. Are you changing definitions on me?}\n\\end{center}\nI'm not, actually. I just used a special case of the entropy to get across the point that the uncertainty\/entropy is additive. It was the special case where each possible state occurs with equal likelihood. In that case, the probability $p$ is equal to $1\/N$, and the above formula (\\ref{ent}) turns into the first one. But let's get back to our urn. I mean random variable. And let's try to answer the question: \n\\vskip 0.5cm\n\\begin{center}\n{\"How much is there to know (about it)?\"}\n\\end{center}\nAssuming that we know the common knowledge stuff that the urn only has read and blue balls in it, then what we don't know is the identity of the next ball that we will draw. This drawing of balls is our experiment. We would love to be able to predict the outcome of this experiment exactly, but in order to pull off this feat, we would have to have some information about the urn. I mean, the contents of the urn. 
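\n\n(By the way, Eq. (\\ref{ent}) is a one-liner to play with on a computer. The little Python sketch below, purely an illustration with made-up numbers, evaluates it for an urn with equal proportions and for one in which 90\\% of the balls are red.)\n\\begin{verbatim}\nimport math\n\ndef H(p):\n    # binary entropy of Eq. (ent), in bits; H(0) = H(1) = 0 by convention\n    if p in (0.0, 1.0):\n        return 0.0\n    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)\n\nprint(H(0.5))       # 1.0 bit: equal numbers of red and blue balls\nprint(H(0.9))       # ~0.469 bits: 90% of the balls are red\nprint(1 - H(0.9))   # ~0.531 bits you gain if someone tells you p = 0.9\n\\end{verbatim}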
\n\nIf we know nothing else about this urn, then the uncertainty is equal to the log of the number of possible states, as I wrote before. Because there are only red and blue balls, that would be log 2. And if the base of the log is two, then the result is $\\log_2 2=1$ bit. So, if there are red and blue balls only in an urn, then I can predict the outcome of an experiment (pulling a ball from the urn) just as well as I can predict whether a fair coin lands on heads or tails. If I correctly predict the outcome (I will be able to do this about half the time, on average), I am correct purely by chance. Information is that which allows you to make a correct prediction with accuracy better than chance, which in this case means more than half of the time. \n\\vskip 0.5cm\n\\begin{center}\n{\\em ``How can you do this for the case of the fair coin, or the urn with equal numbers of red and blue balls?\"}\n\\end{center}\nWell, you can't unless you cheat. I should say, the case of the urn and of the fair coin are somewhat different. For the fair coin, I could use the knowledge of the state of the coin before flipping, and the forces acting on it during the flip, to calculate how it is going to land, at least approximately. This is a sophisticated way to use extra information to make predictions (the information here is the initial condition of the coin) but something akin to that has been used by a bunch of physics grad students to predict the outcome of casino roulette in the late 70s (you can read about it in~\\cite{Bass1985}). \n\nThe coin is different from the urn because for the urn you won't be able to get any ``extraneous\" information. But suppose the urn has blue and red balls in {\\em unequal} proportions. If you knew what these proportions were [the $p$ and $1-p$ in Eq. (\\ref{ent}) above], then you could reduce the uncertainty of 1 bit to $H(X)$. A priori (that is, before performing any measurements on the probability distribution of blue and red balls), the distribution is of course given by $p=1\/2$, which is what you have to assume in the absence of information. That means your uncertainty is 1 bit. But keep in mind (from section 1) that it is only one bit because you have decided that the color of the ball (blue or red) is what you are interested in predicting.\n\nIf you start drawing balls from the urn (and then replacing them, and noting down the result, of course) you would be able to estimate $p$ from the frequencies of blue and red balls. So, for example, if you end up seeing 9 times as many red balls as blue balls, you should adjust your prediction strategy to ``The next one will be red\". And you would likely be right about 90\\% of the time, quite a bit better than the 50\/50 prior.\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``So what you are telling me is that the entropy formula (\\ref{ent}) assumes a whole lot of things, such as that you already know to expect a bunch of things, namely what the possible alternatives of the measurement are, and even what the probability distribution is, which you can really only know if you have divine inspiration, or else made a ton of measurements!\"}\n\\end{center}\nYes, dear reader, that's what I'm telling you. Well, actually, instead of divine inspiration you might want to use a theory. Theories can sometimes constrain probability distributions in such a way that you know certain things about them {\\em before} making any measurements, that is, theory can shape your {\\em prior} expectation. 
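\n\nAnd since I keep going on about making measurements on the urn, here is one more little sketch (again in Python, again entirely my own illustration, with a made-up urn that secretly holds nine times as many red balls as blue ones) that estimates $p$ from repeated draws and then checks how well the ``always guess red\" strategy does:\n\\begin{verbatim}\nimport random\nfrom math import log2\n\nrandom.seed(1)\ntrue_p_red = 0.9  # hypothetical urn: nine red balls for every blue one\n\n# draw from the urn (with replacement) and note down the colors\ndraws = ['red' if random.random() < true_p_red else 'blue'\n         for _ in range(5000)]\n\n# estimate p from the first half of the draws ...\nfirst, second = draws[:2500], draws[2500:]\np_hat = first.count('red') / len(first)\n\n# ... then always predict the more frequent color on the second half\nprediction = 'red' if p_hat > 0.5 else 'blue'\naccuracy = sum(ball == prediction for ball in second) / len(second)\n\n# remaining uncertainty, in bits, once you know the estimated p\nremaining = -p_hat * log2(p_hat) - (1 - p_hat) * log2(1 - p_hat)\nprint(p_hat, accuracy, remaining)  # roughly 0.9, 0.9, 0.47\n\\end{verbatim}\nYou are right about nine times out of ten, quite a bit better than chance, and the one bit of uncertainty you started out with has shrunk to roughly half a bit. Of course, instead of pestering the urn with thousands of draws, you could let a theory hand you the probability distribution, which is the point I was just making.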
For example, thermodynamics tells you something about the probability distribution of molecules in a container filled with gas, say, and knowing things like the temperature of that gas allows you to make predictions better than chance. But even in the absence of that, you already come equipped with {\\em some} information about the system you are interested in (built into your common sense) and if you can predict with accuracy better than chance (because for example somebody told you the $p$ of the probability distribution and it is not one half), then you have some extra information. And yes, most people won't tell you that. But if you want to know about information, you first need to know.... what it is that you already know.\n\n\\section{Everything is conditional}\nLet us take a few steps back for a second to contemplate the purpose of this article. \nI believe that Shannon's theory of information is a profound addition to the canon of theoretical physics. Yes, I said theoretical physics. I can't get into the details of why I think this here (but if you are wondering about this you can find my musings here~\\cite{Adami2011}). But if this theory is so fundamental (as I claim) then we should make an effort to understand the basic concepts in walks of life that are not strictly theoretical physics. I tried this for molecular biology~\\cite{Adami2004}, and evolutionary biology~\\cite{Adami2012a}. \n\nBut even though the theory of information is so fundamental to several areas of science, I find that it is also one of the most misunderstood theories. It seems, almost, that {\\em because} ``everybody knows what information is\", a significant number of people (including professional scientists) use the word, but do not bother to learn the concepts behind it. But you really have to. You end up making terrible mistakes if you don't. The theory of information, in the end, teaches you to think about knowledge, and prediction. I'm trying to give you the entry ticket to all that. \n\nHere's the quick synopsis of what we have learned in the first two sections:\n\n1.) It makes no sense to ask what the entropy of any physical system is. Because technically, it is infinite. It is only when you specify {\\em what} questions you will be asking (by specifying the measurement device that you will use in order to determine the state of the random variable in question) that entropy (a.k.a. uncertainty) is finite, and defined.\n\n2.) When you are asked to calculate the entropy of a mathematical (as opposed to physical) random variable, you are usually handed a bunch of information you didn't realize you have. Like, what's the number of possible states to expect, what those states are, and possibly even what the likelihood is of experiencing those states. But given those, your prime directive is to predict the state of the random variable as accurately as you can. And the more information you have, the better your prediction is going to be.\n\nNow that we've got these preliminaries out of the way, it seems like high time that we get to the concept of information in earnest. I mean, how long can you dwell on the concept of entropy, really? Actually, just a bit longer as it turns out. \n\nI think I confused you a bit in the first two sections. 
One time, I write that the entropy is just $\\log N$, the logarithm of the number of states the system can take on, and later I write Shannon's formula for the entropy of a random variable $X$ that can take on states $x_i$ with probability $p_i$ as \n\\begin{eqnarray}\nH(X)=-\\sum_{i=1}^N p_i\\log p_i\\;. \\label{ent2}\n\\end{eqnarray} \nActually, to be perfectly honest, I didn't even write that formula. I wrote one where there are only two states, that is, $N=2$ in Eq.~(\\ref{ent2}).\nAnd then I went on to tell you that the expression $\\log N$ was ``just a special case\" of Equation~(\\ref{ent}). But I think I need to clear up what happened here.\n\nIn section 2, I talked about the fact that you really are given some information when a mathematician defines a random variable. Like, for example, in Eq. (\\ref{ent2}) above. If you know nothing about the random variable, you don't know the $p_i$. You may not even know the range of $i$. If that's the case, we are really up the creek, with paddle {\\it in absentia}. Because you wouldn't even have any idea about how much you don't know. So in the following, let's assume that you know at least how many states to expect, that is, you know $N$.\nIf you don't know anything else about a probability distribution, then you have to assume that each state appears with equal probability. Actually, this isn't a law or anything. I just don't know how you would assign probabilities to states if you have zero information. Nada. You just have to assume that your uncertainty is maximal in that case. And this happens to be a celebrated principle: the ``maximum entropy principle\". The uncertainty (\\ref{ent2}) is maximized if $p_i=1\/N$ for all $i$. And if you plug in $p_i=1\/N$ in (\\ref{ent2}), you get\n\\begin{eqnarray}\n H_{\\rm max}=\\log N\\;. \\label{ent3}\n\\end{eqnarray}\nIt's that simple. So let me recapitulate. If you don't know the probability distribution, the entropy is (\\ref{ent3}). If you do know it, it is (\\ref{ent2}). The difference between the entropies is knowledge.\\footnote{The statement that the most objective prior is the maximum-entropy one (implying that it is un-informative) should be a little bit more technical than appears here, because if the probability distribution of a random variable is known to be different from the uniform distribution, then the un-informative prior is not the uniform distribution, but one that maximizes the entropy under the constraint of the likelihood of the data given the probability distribution. This is discussed in Jaynes's book~\\cite{Jaynes2003}, and cases where a seemingly un-informative (uniform) prior clashes with the underlying probability distribution to make it actually informative are discussed in~\\cite{ZhuLu2004,Neumann2007}.}\nThe uncertainty (\\ref{ent3}) does not depend on knowledge, but the entropy (\\ref{ent2}) does. On a more technical note, Eq.~(\\ref{ent3}) is really just like the entropy in statistical physics when using the microcanonical ensemble, while Eq. (\\ref{ent2}) is the Boltzmann-Gibbs entropy in the canonical ensemble, where the $p_i$ are given by the Boltzmann distribution.\nIf you have noticed that I've been using the words ``entropy\" and ``uncertainty\" interchangeably, I did this on purpose because they are one and the same thing here. You should use one or the other interchangeably too. 
But you should never say ``information\" when you don't know what you can predict with the entropy at hand.\n\nSo, getting back to the narrative, one of the entropies is conditional on knowledge, while the other is not. But, you think while scratching your head, wasn't there something in Shannon's work about ``conditional entropies\"? Indeed, and those are the subject of this section. The section title kind of gave it away, I'm afraid. To introduce conditional entropies more formally, and then connect to (\\ref{ent2}), we first have to talk about conditional probabilities. What's a conditional probability? I know, some of you groan ``I've known what a conditional probability is since I was seven!\" But even you may learn something. After all, you learned something reading this article even though you're somewhat of an expert? Right? Why else would you still be reading? \n\\vskip 0.5cm\n\\begin{center}\n {\\em ``Infinite patience\"}, you say? Moving on. \n \\end{center}\nA conditional probability characterizes the likelihood of an event when another event has happened at the same time. So, for example, there is a (generally small) probability that you will crash your car. The probability that you will crash your car while you are texting at the same time is considerably higher. On the other hand, the probability that you will crash your car while it is Tuesday at the same time is probably unchanged, that is, unconditional on the ``Tuesday\" variable. (Unless Tuesday is your texting day, that is.)\n\nSo, the probability of events depends on what else is going on at the same time. ``Duh\", you say. But while this is obvious, understanding how to quantify this dependence is key to understanding information. In order to quantify the dependence between ``two things that happen at the same time\", we just need to look at two random variables. In the case I just discussed, one variable is the likelihood that you will crash your car, and the other is the likelihood that you will be texting. The two are not always independent, you see. The problems occur when the two occur simultaneously. You know, if this were another article (like, the type where I veer off to discuss topics relevant only to theoretical physicists) I would now begin to remind you that the concept of simultaneity is totally relative, so that the concept of a conditional probability cannot even be unambiguously defined in relativistic physics (but concepts such as ``before\" and ``after\" can, so that helps a lot). But this is not that article, so I will just let it go. \n\nOK, here we go: $X$ is one random variable (think: $p_i$ is the likelihood that you crash your car while you conduct maneuver $X=x_i$, where each $x_i$ is a particular maneuver or action). The other random variable is $Y$. That variable has only two states: either you are texting ($Y=1$), or you are not ($Y=0$). And those two states have probabilities $q_1$ (texting) and $q_0$ (not texting) associated with them. \nI can then write down the formula for the uncertainty of crashing your car while texting, using the probability distribution\n\\begin{eqnarray}\n P(X=x_i|Y=1)\\; .\n\\end{eqnarray}\nThis you can read as ``the probability that random variable $X$ is in state $x_i$ given that, at the same time, random variable $Y$ is in state $Y=1$.\" This vertical bar ``$|$'' is always read as ``given\".\n\nSo, let me write $P(X=x_i|Y=1)$ as $p(i|1)$. It's much simpler that way. I can also define $P(X=x_i|Y=0)=p(i|0)$. 
$p(i|1)$ and $p(i|0)$ are two probability distributions that may be different (but they don't have to be if my driving is unaffected by texting). Fat chance for the latter, by the way. \n\nI can then write the entropy while texting as\n\\begin{eqnarray}\n H(X|{\\rm texting})=-\\sum_{i=1}^N p(i|1)\\log p(i|1)\\;. \\label{cond1}\n\\end{eqnarray}\nOn the other hand, the entropy of the driving variable while {\\em not} texting is \n\\begin{eqnarray}\n H(X|{\\rm not\\ texting})=-\\sum_{i=1}^N p(i|0)\\log p(i|0)\\;. \\label{cond2}\n\\end{eqnarray}\nNow, compare Eqs. (\\ref{cond1}) and (\\ref{cond2}) to Eq. (\\ref{ent2}). The first two are conditional entropies, conditional in this case on the co-occurrence of another event, here texting. They look just like the Shannon formula for entropy, which I told you was the one where ``you already knew something\", like the probability distribution. In the case of (\\ref{cond1}) and (\\ref{cond2}), you know exactly what it is that you know, namely the state of random variable $Y$: whether you are texting while driving, or not. \n\nSo here's the gestalt idea that I want to get across. Probability distributions are born being uniform. In that case, you know nothing about the variable, except perhaps the number of states it can take on. Because if you didn't know {\\em that}, then you wouldn't even know how much you don't know. That would be the ``unknown unknowns\" that a certain political figure once injected into the national discourse. \n\nThese probability distributions become non-uniform (that is, some states are more likely than others) once you acquire information about the states. This information is manifested by conditional probabilities. You really only know that a state is more or less likely than the random expectation if you at the same time know something else (like in the case discussed, whether the driver is texting or not). Put another way, what I'm trying to tell you here is that any probability distribution that is not uniform (same probability for all states) is necessarily conditional. When someone hands you such a probability distribution, you may not know what it is conditional about. But I assure you that it is conditional. I'll state it as a theorem:\n\\vskip 0.5cm\n\\begin{center}\n{\\bf All probability distributions that are not uniform are in fact conditional probability distributions.}\n\\end{center}\nThis is not what your standard textbook will tell you, but it is the only interpretation of ``what do we know\" that makes sense to me. ``Everything is conditional\" thus, as the title of this section promised.\n\nWe can also write down what the {\\em average} uncertainty for crashing your car is, given your texting status. It is simply the average of the uncertainty while texting and the uncertainty while not texting, weighted by the probability that you engage in either of the two behaviors. Thus, the conditional entropy $H(X|Y)$, that is, the uncertainty of crashing your car given your texting status, is\n\\begin{eqnarray}\n H(X|Y)=q_0H(X|Y=0)+q_1H(X|Y=1)\\;. \\label{condav}\n\\end{eqnarray}\nThat's obvious, right? $q_1$ being the probability that you are texting while executing any maneuver $i$, and $q_0$ the probability that you are not (while executing any maneuver).\nWith this definition of the entropy of one random variable given another, we can now finally tackle information.\n\n\\section{Information}\nBefore going on, let me quickly summarize the take-home points of sections 1-3.\n\n1.) 
Entropy, also known as ``uncertainty\", is something that is mathematically defined for a ``random variable\". But physical objects aren't mathematical. They are messy, complicated things. They become mathematical when observed through the looking glass of a measurement device that has a finite resolution. We then understand that a physical object does not ``have an entropy\". Rather, its entropy is defined by the measurement device I choose to examine it with. Information theory therefore is a theory of the relative state of measurement devices. \n\n2.) Entropy, also known as uncertainty, quantifies how much you don't know about something (a random variable). But in order to quantify how much you don't know, you have to know {\\em something} about the thing you don't know. These are the hidden assumptions in probability theory and information theory. These are the things you didn't know you knew.\n\n3.) Shannon's entropy is written in terms of ``$p \\log p$\", but these ``$p$\" are really conditional probabilities if you know that they are {\\em not} uniform (uniform meaning the same $p$ for all states). They are not uniform given what else you know.\n\nI previously defined the unconditional entropy, which is the one where we know nothing about the system that the random variable describes. We call that $H_{\\rm max}$, because an unconditional entropy must be maximal: it tells us how much we don't know if we don't know anything except how many states our measurement device has. Then there is the conditional entropy $H=-\\sum_i p_i\\log p_i$, where the $p_i$ are conditional probabilities. They are conditional on some knowledge. Thus, $H$ tells you what remains to be known. So finally, I give you:\n\\vskip 0.5cm\n\\begin{center}\nInformation is: ``What you don't know minus what remains to be known given what you know\". \n\\end{center}\nThere it is. Clear?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``Hold on, hold on. Hold on for just a minute.\"}\n\\end{center}\nWhat?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``This is not what I've been reading in textbooks.\"}\n\\end{center}\nSo tell me what it is that you read.\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``It says there that the mutual information is the difference between the entropy of random variable $X$, $H(X)$, and the conditional entropy $H(X|Y)$, which is the conditional entropy of variable $X$ given that you know the state of variable $Y$. Come to think of it, you yourself defined that conditional entropy at the end of section 3. I think it is Equation (\\ref{condav}) there! And there is this Venn diagram on Wikipedia. It looks like Figure~\\ref{fig:venn}!\"}\n\\end{center}\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=4in]{entropy.pdf} \n \\caption{Entropy Venn diagram, showing conditional and mutual entropies of two variables. Source: Wikimedia}\n \\label{fig:venn}\n\\end{figure}\n\nAh, yes. That's a good diagram. Two variables $X$ and $Y$. The red circle represents the entropy of $X$, the blue circle the entropy of $Y$. The purple thing in the middle is the shared entropy $I(X:Y)$, which is what $X$ knows about $Y$. Also what $Y$ knows about $X$. They are the same thing.\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``You wrote $I(X:Y)$ but Wiki says $I(X;Y)$. Is your semicolon key broken?\"}\n\\end{center}\nActually, there are two notations for the shared entropy (a.k.a. information) in the literature. One uses the colon, the other the semicolon. Thanks for bringing this up. It confuses people. 
In fact, I wanted to bring up this other thing....\n\\vskip 0.5cm\n\\begin{center} {\\em ``Hold on again. You also keep on saying `shared entropy' when Wiki says `shared information'. You really ought to pay more attention.\"}\n\\end{center}\nWell, you see. That's a bit of a pet peeve of mine. Just look at the diagram above. The thing in the middle, the purple area, it's a shared entropy. Information is shared entropy. ``Shared information\" would be, like, shared shared entropy. That's a bit ridiculous, don't you think?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``Well, if you put it like that, I see your point. But why do I read `shared information' everywhere?\"}\n\\end{center}\nThat is, dear reader, because people are confused about what to call entropy, and what to call information. A sizable fraction of the literature calls what we have been calling ``entropy\" (or uncertainty) ``information\". You can see this even in the book by Shannon and Weaver~\\cite{ShannonWeaver1949} (which, come to think of it, was edited by Weaver, not Shannon). When you do this, then what is shared by the ``informations\" is ``shared information\". But that does not make any sense, right?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``I don't understand. Why would anybody call entropy `information'? Entropy is what you don't know, information is what you know. How could you possibly confuse the two?\"}\n\\end{center}\nI'm with you there. Entropy is ``potential information\". It quantifies ``how much you could {\\em possibly} know\". But it is not what you {\\em actually} know. I think, between you and me, that it was just sloppy writing at first, which then ballooned into a massive misunderstanding. Both entropy and information are measured in bits, and so people would just flippantly say: ``a coin has one bit of information\", when they mean to say ``one bit of entropy\". And it's all downhill from there. \n\nI hope I've made my point here. Being precise about entropy and information really matters. Colon vs. semicolon does not. Information is ``unconditional entropy minus conditional entropy\". When cast as a relationship between two random variables $X$ and $Y$, we can write it as\n\\begin{eqnarray}\nI(X:Y)=H(X)-H(X|Y)\\;.\n\\end{eqnarray}\nAnd because information is symmetric in the one who measures and the one who is being measured (remember: ``a theory of the relative state of measurement devices\"), this can also be written as\n\\begin{eqnarray}\nI(X:Y)=H(Y)-H(Y|X)\\;.\n\\end{eqnarray}\nAnd both formulas can be verified by looking at the Venn diagram above.\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``OK, this is cool.\"}\n\\end{center}\n\\begin{center} {\\em ``Hold on, hold on!\"}\n\\end{center}\nWhat is it again?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``I just remembered. This was all a discussion that came after I brought up that information was $I(X:Y)=H(X)-H(X|Y)$, while you said it was $H_{\\rm max}-H$, where the $H$ was clearly an entropy that you write as $H=-\\sum_i p_i\\log p_i$. All you have to do is look back a few pages, I'm not dreaming this!\"}\n\\end{center}\nSo you are saying that textbooks say\n\\begin{eqnarray}\nI=H(X)-H(X|Y) \\label{info1}\n\\end{eqnarray}\nwhile I write instead\n\\begin{eqnarray}\nI=H_{\\rm max}-H(X)\\;, \\label{info2}\n\\end{eqnarray}\nwhere $H(X)=-\\sum_i p_i\\log p_i$. Is that what you're objecting to?\n\\vskip 0.5cm\n\\begin{center}\n{\\em \n``Yes. Yes it is.\"}\n\\end{center}\nWell, here it is in a nutshell. 
In (\\ref{info1}), information is defined as the difference between the actual observed entropy of $X$ and the actual observed entropy of $X$ given that I know the state of $Y$ (whatever that state may be). \nIn (\\ref{info2}), information is defined as the difference between what I don't know about $X$ (without knowing any of the things that we may implicitly know already) and the actual uncertainty of $X$, given a particular probability distribution that is non-uniform. The latter entropy does not mention a system $Y$. It quantifies my knowledge of $X$ without stressing {\\em what it is} that I know about $X$. If the probability distribution with which I describe $X$ is not uniform, then I do know something about $X$. My $I$ in Eq. (\\ref{info2}) quantifies that. Eq. (\\ref{info1}) quantifies what I know about $X$ above and beyond what I already know via Eq. (\\ref{info2}), namely using my knowledge of $Y$. It quantifies specifically the information that $Y$ conveys about $X$. So you could say that the total information that I have about $X$, given that I also know the state of $Y$, would be\n\\begin{eqnarray}\nI_{\\rm total}=H_{\\rm max}-H(X) + H(X)-H(X|Y)=H_{\\rm max}-H(X|Y)\\;.\n\\end{eqnarray}\nSo the difference between what I would write and what textbooks write is really only in the unconditional term: it should be maximal. But in the end, Eqs. (\\ref{info1}) and (\\ref{info2}) simply refer to different informations. Eq.~(\\ref{info2}) is information, but I may not be aware of how I came into possession of that information. Eq.~(\\ref{info1}) tells me exactly the source of my information: the variable $Y$. Is it clear now?\n\\vskip 0.5cm\n\\begin{center}\n{\\em ``I'll have to get back to you on that. I'm still reading. I think I have to read it again. It sort of takes some getting used to.\"}\n\\end{center}\nI know what you mean. It took me a while to get to that place. But, as I hinted at in the introduction, it pays off big time to have your perspective adjusted, so that you know what you are talking about when you say ``information\". I have been (and will be) writing a good number of articles that reference ``information\", and many of those are a consequence of research that is only possible when you understand the concept precisely. I wrote a series of articles on information in black holes already~\\cite{AdamiVerSteeg2014,AdamiVerSteeg2015,BradlerAdami2014,BradlerAdami2015}. That's just the beginning. There are others, for example on how to measure how much information is stored in DNA~\\cite{AdamiCerf2000} or proteins~\\cite{GuptaAdami2015}, and the relationship between information and cooperation~\\cite{Iliopoulosetal2010,AdamiHintze2013,HintzeAdami2015} (I mean, how you can fruitfully engage in the latter only if you have the former), and information processing in the brain~\\cite{Edlundetal2011,Marstalleretal2013}. And there are more to come, on information in DNA binding sites~\\cite{CliffordAdami2015} or even how to use information theory to estimate the likelihood of a spontaneous origin of life~\\cite{Adami2015}. I know it sounds more like a threat than a promise. I really mean it to be a promise. \n\n\\section{Epilogue}\nA kind reviewer brought to my attention an elegant and insightful article by E.T. 
Jaynes~\\cite{Jaynes1965}, in which he not only accurately characterizes the difference between Boltzmann and Gibbs thermodynamic entropies and derives thermodynamics' second law, but also makes essentially the same statement about entropy that I have made here, by writing: \n\n``From this we see that entropy is an anthropomorphic concept, not only in the well-known statistical sense that it measures the extent of human ignorance as to the microstate. {\\em Even at the purely phenomenological level, entropy is an anthropomorphic concept.} For it is a property, not of the physical system, but of the particular experiments you or I choose to perform on it.\"\n\nWhile this statement pertained to thermodynamic entropy, it applies just as well to Shannon entropy as the two are intimately related. Jaynes dissects this relationship cogently in his article and I see no need to repeat it here as my focus is on information, not entropy. I have written about the relationship between thermodynamic and Shannon entropy~\\cite{Adami2011} but wish I had known Ref.~\\cite{Jaynes1965} then. Perhaps this link is best summarized by noting that thermodynamics is a special case of information theory where ``all the fast things have happened, but the slow things have not\" (as Richard Feynman described thermodynamic equilibrium~\\cite{Feynman1972}). In other words, in equilibrium information about the fast things has disappeared, but there may still be information about the slow things: those are the things we don't always know we know. \n\n\\noindent{\\bf Acknowledgments}\nI would like to thank Julyan Cartwright for suggesting that I turn my blog series on information into this comment, Eugene Koonin for lively discussions about information, evolution, and genomics, as well as three referees for insightful comments.\nThis work was supported in part by NSF's BEACON Center for the Study of Evolution in Action, under Contract No. DBI-0939454.