diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfxeh" "b/data_all_eng_slimpj/shuffled/split2/finalzzfxeh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfxeh" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe World Health Organization (WHO) declared the SARS-CoV-2 (Covid-19) outbreak in Wuhan to be a pandemic on March 11, 2020.\nSince then, Covid-19 has become a serious global health threat due to its rapid spread, transmission through asymptomatic infected individuals and complex epidemiological dynamics.\nAs of May 2021, more than 3 million lives had already been lost to the virus. The spread of SARS-CoV-2 has thus far been extremely difficult to contain. \\\\\nBy the end of 2020, the successful development of effective vaccines and the onset of their widespread distribution in most of the world's countries was hailed as the decisive means to contain the pandemic.\nHowever, important questions linger on whether the vaccination effort will succeed in effectively eradicating the disease.\nThe appearance and wide spread of more contagious SARS-CoV-2 strains, the onset and scale of the vaccine deployment, and high levels of vaccine hesitancy\/denial in society are among the key factors hindering the vaccination effort and the achievement of herd immunity. \nModeling the impact of these key factors on the evolution of the pandemic is of critical importance for assessing the vaccination effectiveness against it.\\\\\nIn studying past epidemics, scientists have systematically applied ``random mixing'' compartmental\nmodels which assume that an infectious individual can spread the disease to any susceptible member of the population before becoming recovered or removed, \nas originally considered by Kermack and McKendrick~\\cite{Kermack_1927}. 
These models constrain the total population in compartments by considering stages of the infection and flows among them.\n\nIn the present study we propose a new model named SAIVR, which incorporates two important characteristics of the Covid-19 epidemic, namely the considerable transmission of the disease by asymptomatic infected individuals and the vaccination campaign with World Health Organization (WHO) approved vaccines. \nMore recent modeling approaches involve agent-based simulations ~\\cite{Kaxiras_2020}, heterogeneous social networks ~\\cite{Barthelemy_2005, Ferrari_2006, Volz_2008, Tagliazucchi_2020, Vespignani_2018, Zhang_2016}, and Bayesian inference models \\cite{Groendyke_2011}.\nAlthough a large number of research studies are currently investigating the Covid-19 epidemiological characteristics ~\\cite{Sanche_2020, Li_2020,Imai_2020, Rothe_2020, Wynants_2020, Koh2020, Riccardo2020, khalili2020, Jie2020}, we believe that a simple but efficient model, which can capture the basics of the complex behavior of the pandemic including the vaccine roll-out, can offer useful guidance for the pandemic's near-term and longer-term evolution. \nBy using a recently developed semi-supervised machine learning approach \\cite{Flamant, Marios_2020, Paticchio_NIPS} we systematically reproduced the pandemic dynamics during the 2021 spring in several different countries. \nWe then used the model to assess the importance of a rapid vaccination campaign to prevent future outbreaks driven by more infectious variants.\n\\\\\n\nThe work is organized as follows. In Sec.~\\ref{sec:SAIVR} we introduce the SAIVR model and its parameters. The machine learning approach that we used to reproduce the infectious curves of 27 selected countries\/states is thoroughly described in Sec.~\\ref{sec:NN}. 
In Sec.~\\ref{sec:future_scenarios} we study future scenarios involving more infectious variants making quantitative arguments on how they might affect herd immunity.\nSec.~\\ref{sec:Conclusions} is devoted to concluding remarks.\n\n\\section{The SAIVR model} \\label{sec:SAIVR}\n\nOne of the first attempts to mathematically describe the spread of an infectious disease is due to Kermack and McKendrick~\\cite{Kermack_1927}. In 1927 they introduced the so-called Susceptible-Infectious-Removed (SIR) model.\nThe SIR model describes the dynamics of a (fixed) population of $N$ individuals split into three compartments:\n\\begin{itemize}\n\\item\n$S(t)$ is the Susceptible compartment that counts the number of individuals susceptible but still not infected by the disease;\n\\item\n$I(t)$ is the Infectious compartment that counts the number of infectious individuals;\n\\item\n$R(t)$ is the Removed compartment. It represents the number of those who can no longer be infected either because they recovered and gained long-term immunity or because they passed away.\n\\end{itemize}\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{SAIVR_sketch.pdf}\n\\caption{\nIllustration of the SAIVR model compartments and their inter-dependencies denoted by incoming and outgoing arrows and relevant flow parameters.\n}\n\\label{fig:SAIVR_schematic}\n\\end{center}\n\\end{figure*}\n\nThe model involves two positive parameters, $\\beta$ and $\\gamma$ which govern the flow from one compartment to the other:\\\\\n- $\\beta$ is the transmission rate or effective contact rate of the disease: \nan infected individual comes into contact with $\\beta$ other individuals per unit time \n(the fraction that are susceptible to contracting the disease is $S\/N$); \\\\\n- $\\gamma$ is the removal rate. 
$\\gamma^{-1}$ is the mean \nnumber of days an infected individual spends in the Infectious compartment.\\\\\nThe SIR model obeys the following system of ordinary differential equations (ODE):\n\\begin{subequations}\n\\label{eq:SIR}\n\\begin{align}\n\\frac{dI}{dt} &= \\beta I \\frac{S}{N} - \\gamma I \n\\label{eq:SIR_I}\n\\\\\n\\frac{dS}{dt} &= - \\beta I \\frac{S}{N} \n\\label{eq:SIR_S}\n\\\\\n\\frac{dR}{dt} &= \\gamma I \n\\label{eq:SIR_R}\n\\end{align}\n\\end{subequations}\n\n\nAlthough the SIR model has been adopted\nto study epidemic outbreaks in many previous works \\cite{Saito_SIR, Fang_SIR, Smirnova_SIR, Alanazi_SIR, Palladino_SIR, Cooper_SIR, Kaxiras_2020}, it lacks a few important aspects of the current ongoing pandemic.\nFirst of all, it has been reported~\\cite{Asymptomatic_1, Asymptomatic_2} that an important fraction of those who are carrying the virus is asymptomatic. Since they often avoid contact tracing due to the absence of symptoms, they can spread the disease while remaining undetected.\nFurthermore, in December 2020 a global vaccination campaign started. Vaccination is a safe way to transfer people from the Susceptible to the Removed compartment, bypassing the Infectious one and thus reducing the likelihood of an outbreak.\\\\\n\nThe SAIVR model extends the SIR model by incorporating the two aforementioned additional compartments:\n\\begin{itemize}\n\\item\n$A(t)$ is the Asymptomatic\/Undetected compartment that counts the number of those individuals who, despite being infected, are not tested\/traced. This mainly occurs to those who recover from the infection without suffering any symptoms.\n\n\\item\n$V(t)$ is the Vaccinated compartment. 
It takes into account those who have received a vaccine shot but are still not fully immunized by it.\n\n\\end{itemize}\n\nThe SAIVR model ODEs read:\n\\begin{subequations}\n\\label{eq:SAIVR}\n\\begin{align}\n\\frac{dI}{dt} &= \\beta_1 I \\frac{S}{N} + \\alpha_2 A \\frac{S}{N} + \\zeta I \\frac{V}{N} - \\gamma I\n\\label{eq:SAIVR_I}\n\\\\\n\\frac{dA}{dt} &= \\alpha_1 A \\frac{S}{N} + \\beta_2 I \\frac{S}{N} + \\eta A \\frac{V}{N} - \\gamma A\n\\label{eq:SAIVR_A}\n\\\\\n\\frac{dS}{dt} &= - (\\beta_1 + \\beta_2) I \\frac{S}{N} - (\\alpha_1 + \\alpha_2) A \\frac{S}{N} - \\delta \\frac{S}{N} + (1-\\lambda) \\epsilon V\n\\label{eq:SAIVR_S}\n\\\\\n\\frac{dV}{dt} &= \\delta \\frac{S}{N}- \\eta A \\frac{V}{N} - \\zeta I \\frac{V}{N} - \\epsilon V\n\\label{eq:SAIVR_V}\n\\\\\n\\frac{dR}{dt} &= \\gamma I + \\gamma A + \\lambda \\epsilon V\n\\label{eq:SAIVR_R}\n\\end{align}\n\\end{subequations}\n\nThe compartment inter-dependencies and flow are presented in Fig. \\ref{fig:SAIVR_schematic}. \nThe parameters of the SAIVR model are the following:\\\\\n- $\\beta_1$ describes the rate at which individuals are exposed to symptomatic infection. An infected symptomatic individual comes into contact with and infects $\\beta_1$ susceptible individuals per unit time; \\\\\n- $\\alpha_1$ is the asymptomatic infection rate. An infected asymptomatic individual comes into contact with $\\alpha_1$ susceptible individuals per unit time; \\\\\n- $\\beta_2$ describes the rate at which susceptible individuals become asymptomatically infected after coming into contact with a symptomatic individual; \\\\\n- $\\alpha_2$ describes the rate at which a susceptible individual becomes symptomatically infected after coming into contact with an asymptomatic individual; \\\\\n- $\\gamma$ retains the same meaning as in the SIR model, representing the mean removal rate. 
$\\gamma^{-1}$ is the mean \namount of time individuals spend in either the Infectious or Asymptomatic compartment;\\\\\n- $\\zeta$ is the rate at which a vaccinated (but still not immune) individual comes into contact with, and is infected by, a symptomatic infectious individual; \\\\\n- $\\eta$ describes the rate at which an asymptomatic individual comes into contact with and infects vaccinated (but still not immune) individuals; \\\\\n- $\\delta$ is the first-shot vaccination rate;\\\\\n- $\\lambda$ is the vaccine efficacy; \\\\ \n- $\\epsilon^{-1}$ is the mean \namount of time an individual spends in the Vaccinated compartment before reaching immunity and moving to the Removed compartment.\\\\\\\\\nCountries and states do not respond to the disease as static entities passively facing the pandemic. They react by actively imposing (and relaxing) restrictive measures, learning how to effectively treat the infected, adjusting social interactions, and launching vaccination campaigns. Finally, the virus itself evolves into more infectious variants~\\cite{2021_variants}.\\\\\nCountry-specific parameters can be obtained by fitting the SAIVR model to a selected infectious wave that occurred in a given country.\nSAIVR has 14 adjustable parameters or initial conditions that need to be estimated; given the scarcity of data (only the infectious and vaccinated populations are known), optimizing them presents a challenging problem. To address this, we either fixed some of them or employed a novel fitting method based on semi-supervised neural networks, which we present in the following section.\n\n\\section{Solving the SAIVR model with Neural Networks}\\label{sec:NN}\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{NN_sketch.pdf}\n\\caption{ Semi-supervised network architecture. 
During the unsupervised procedure (blue box), a time sequence $t$, a set of initial conditions $\\mathcal{Z}_0$ and parameter bundles $\\Theta$ are fed as an input to a 6 layers fully connected network (FCN).\nThe output of the network $\\mathcal{Z}_{NN}$ is multiplied by a function $f(t)$ to become a tentative parametric solution $\\hat{\\mathcal{Z}}(t)$ of the system of ODE in Eq.~\\ref{eq:SAIVR}. \nThe quality of $\\hat{\\mathcal{Z}}(t)$ is probed by the loss function $\\mathcal{L}$.\nWhen the network has learned the solutions, the inverse problem is then solved (red box).\nAn optimization algorithm selects the initial conditions and parameters in the bundle that best fit a given data-set. The loss $\\mathcal{L}_{inv}$ depends on the infectious population of a given country\/state $I_{Data}$ during the time sequence $t$.\n}\n\\label{fig:NN_sketch}\n\\end{center}\n\\end{figure*}\n\n\n\nIn order to apply the SAIVR model we need a realistic estimate of the parameters and initial conditions for the system of Eqs.~\\ref{eq:SAIVR}.\nTo obtain them we employed machine learning, a powerful method which has been extensively used for disease modeling \\cite{Amazon, Yang_SEIR, Zou_NN, Paticchio_NIPS} and dynamical system forecasting \\cite{Gallinari_NN, Chen_NN,Flamant}.\nOur approach employs a semi-supervised procedure which determines the optimal set of initial conditions and parameters of the SAIVR model, yielding solutions that best fit a given data-set.\nA sketch of this procedure is shown in Fig.~\\ref{fig:NN_sketch}.\n\n\\subsection{Unsupervised learning} \\label{sec:unsupervised}\nThe unsupervised part (blue box) consists of a data-free Neural Network (NN) that is trained to discover solutions for an ODE system of the form:\n\\begin{equation}\n \\frac{d\\mathcal{Z}}{dt} = g(\\mathcal{Z}), \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\mathcal{Z}(t=0) = \\mathcal{Z}_0\n\\end{equation}\nwhere $\\mathcal{Z} = (S(t),A(t),I(t),V(t),R(t))$ and $g(\\mathcal{Z})$ is 
given in Eqs.\\ref{eq:SAIVR}.\nThe NN takes as an input a time sequence $t$, a set of initial conditions $\\mathcal{Z}_0$, and modeling parameters $\\Theta$.\\\\\nAs we'll see in the following, $t$ is the set of days involved in a given epidemic wave going from $t_0$ to $t_0 + \\Delta t$, $Z_0$ are the initial compartment populations and $\\Theta$ some parameters of the SAIVR model.\nThe initial conditions and parameters are randomly sampled at each iteration $n$ over predefined intervals called bundles \\cite{Flamant, Paticchio_NIPS}, so that the network learns an entire family of solutions.\nThe inputs propagate through the network until an output vector $\\mathcal{Z}_{NN}$ of the same dimensions as the target solutions $\\mathcal{Z}$ is produced.\nThe learned solutions $\\hat{\\mathcal{Z}}$ satisfy\nthe initial conditions identically by considering parametric solutions of the form:\n\\begin{equation}\n\\hat{\\mathcal{Z}} = \\mathcal{Z}_0 + f(t) (\\mathcal{Z}_{NN} - \\mathcal{Z}_0 )\n\\end{equation}\nwhere $f(t) =1- e^{-t}$ \\cite{Marios_2020}.\nThe loss function:\n\\begin{equation}\n\\label{eq:loss_de}\n\\mathcal{L} = \\bigg \\langle \\left( \\frac{d\\mathcal{Z}}{dt} - g(\\mathcal{Z})\\right) ^2 \\bigg \\rangle\n\\end{equation}\nsolely depends on the network predictions averaged ($\\langle .. \\rangle$) over all the iterations $n$, providing an unsupervised learning framework. \nTime derivatives are computed using the automatic-differentiation and back-propagation techniques \\cite{automatic_differentiation}.\n\n\\subsection{Fitting a dataset}\n\\label{sec:fitting_method}\nOnce the NN is trained to provide solutions for the system of Eq. \\ref{eq:SAIVR}, its weights and biases are fixed and the trained network is used to develop a supervised pipeline for the estimation of the initial conditions and parameters, leading to solutions $\\tilde{\\mathcal{Z}}(t)$ that fit given data.\nThis procedure is illustrated in the red box in Fig.~\\ref{fig:NN_sketch}. 
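The hard constraint built into the parametric form $\hat{\mathcal{Z}} = \mathcal{Z}_0 + f(t)(\mathcal{Z}_{NN} - \mathcal{Z}_0)$ can be illustrated with a minimal numerical sketch. The `toy_network` below is only a stand-in for the trained FCN (not the architecture used in this work); the point is that, because $f(0)=0$, any network output satisfies the initial conditions exactly:

```python
import numpy as np

def trial_solution(t, z0, net_out):
    # Hat-Z = Z0 + f(t) * (Z_NN - Z0), with f(t) = 1 - exp(-t):
    # f(0) = 0, so the initial condition Z(0) = Z0 holds exactly,
    # no matter what the network outputs.
    f = 1.0 - np.exp(-t)
    return z0 + f[:, None] * (net_out - z0)

# Stand-in for the trained network: any smooth map from t to R^5
# (five compartments S, A, I, V, R) works for this illustration.
def toy_network(t):
    rng = np.random.default_rng(0)
    w = rng.normal(size=(1, 5))
    return np.tanh(t[:, None] @ w)

t = np.linspace(0.0, 10.0, 201)
z0 = np.array([1e6 - 120.0, 20.0, 100.0, 0.0, 0.0])  # (S0, A0, I0, V0, R0)
z_hat = trial_solution(t, z0, toy_network(t))

# The initial condition is satisfied identically:
print(np.allclose(z_hat[0], z0))  # True
```

Because the constraint is imposed by construction rather than through the loss, the inverse-problem optimizer is free to move $\mathcal{Z}_0$ inside its bundle without ever producing a trial solution that violates the initial conditions.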
\nA solution $\\hat{\\mathcal{Z}}$ is generated by the network starting from $\\bar{\\mathcal{Z}_0}$ and $\\bar{\\Theta}$ randomly selected in the bundles. A stochastic gradient descent optimizer then adjusts $\\bar{\\mathcal{Z}_0}$ and $\\bar{\\Theta}$ in order to minimize the loss function:\n\\begin{equation}\n\\label{eq:loss_inv}\n\\mathcal{L}_{inv} = \\bigg \\langle \\left( \\tilde{I}(t) - I_{Data}(t) \\right) ^2 \\bigg \\rangle \n\\end{equation}\nwhere $I_{Data}(t)$ is the infectious population of a given country\/state and $\\tilde{I}(t)$ is its NN fit.\n\n\\subsection{Applying the method to real data}\n\\label{sec:fitting}\nWe used this method to reproduce the most recent Covid-19 waves in 27 countries or states.\nTo test the generality of the model we selected epidemic waves that occurred in a broad range of geopolitical conditions, restrictive measures, time periods and vaccination efforts.\nThe bundles and fixed parameters used during the unsupervised training of the network are listed in Table~\\ref{tab:parameters}.\n\nWe found that the model is weakly sensitive to the choice of most parameters and thus we kept some of them fixed during the training. \nThe value of the vaccine efficacy $\\lambda$ is based on Refs.~\\cite{Polack, Baden_2020}, where a vaccine efficacy of $94.8\\%$ and $94.1\\%$ is reported for the Pfizer-BioNTech and Moderna mRNA vaccines, respectively.\nThe $V \\rightarrow I$ and $V \\rightarrow A$ rates $\\zeta$ and $\\eta$\nare derived by considering the order-of-magnitude ratio of the infected individuals in the vaccinated and placebo cohorts of Ref.~\\cite{Polack}, with\n$\\beta_2$ and $\\alpha_2$ set to $0.001$ and $0.01$ respectively, which are in order-of-magnitude agreement with the aforementioned results of the clinical trials.\nThe $V \\rightarrow R$ rate $\\epsilon$ is the inverse of the mean time an individual takes to acquire vaccine protection after the first shot. 
\nWe set it to $\\epsilon^{-1} = 21$ to reflect the fact that the second shot of vaccines is usually administered about three weeks after the first one.\n\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{vaccinated_countries_fit2.pdf}\n\\caption{ Infectious population percentage (red dots) of some selected countries in which a high percentage of their population has received a vaccine shot. The infectious population is expressed as a function of time (days). The date at which the wave began is pointed out on the horizontal axis. The neural network fits and predictions are shown by black solid and dashed lines respectively. \n}\n\\label{fig:vaccinated_countries_fit}\n\\end{center}\n\\end{figure*}\n\n\n\\begin{table*\n\\begin{center}\n\\caption{Bundles and fixed parameters}\n\\begin{tabular}{cccccccc}\n$I_0$ & $A_0$ & $V_0$ & $R_0$ & $\\beta_1$ & $\\gamma$ & $\\alpha_1$ & $\\delta$ \\\\\n\\midrule\n$[0.1~\\%, 2~\\%]$ & $[0.1~\\%, 2~\\%]$ & $[0~\\%, 60~\\%]$ & $[0~\\%, 30~\\%]$ & $[0.1, 0.25]$ & $[0.07, 0.12]$ & $[0.1, 0.25]$ & $[0, 0.03]$ \\\\\n\\end{tabular}\n\n\n\\begin{tabular}{cccccccc}\n$\\epsilon$ & $\\lambda$ & $\\eta$ & $\\zeta$ & $\\beta_2$ & $\\alpha_2$ \\\\\n\\midrule\n1\/21 & 0.95 & 1e-2 & 5e-3 & 1e-3 & 1e-2 \\\\\n\\end{tabular}\n\\label{tab:parameters}\n\\end{center}\n\n\\end{table*}\n\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{US_states_fit.pdf}\n\\caption{\nInfectious population percentage (red dots) of ten selected US states. The infectious population is expressed as a function of time (days). The date at which the wave began is pointed out on the horizontal axis. 
The neural network fits and predictions are shown by black solid and dashed lines respectively.\n }\n\\label{fig:US_states_fit}\n\\end{center}\n\\end{figure*}\n\nThe remaining parameters $\\Theta = (\\alpha_1, \\beta_1, \\delta, \\gamma)$ strongly depend on what kind of restrictive measures are taken or how fast the vaccination campaign is, i.e. they are country dependent. We therefore selected them in bundles so that the network could learn solutions corresponding to a broad range of parameters and fit multiple countries.\nThe decay rate $\\gamma$ is the inverse of the removal time, which is about 1-2 weeks \\cite{Recovery_time}.\nThe main symptomatic infection rate $\\beta_1$ was sampled in an interval consistent with previous reports \\cite{Kaxiras_2020}.\nEarlier estimates that $80\\%$ of the infected population is asymptomatic have been considered\ntoo high and have since been revised down~\\cite{Asymptomatic_1, Asymptomatic_2}; the initial studies estimating this proportion were\nlimited by heterogeneity in case definitions, incomplete symptom assessment,\nand inadequate retrospective and prospective follow-up of symptoms. We selected the main asymptomatic infection rate $\\alpha_1$ to vary in the same interval as $\\beta_1$. The first-shot vaccination rate $\\delta$ was selected based on known vaccination reports (see \\ref{sec:matsmethod}).\\\\\nFinally, the initial condition bundles $\\mathcal{Z}_0 = (S_0,A_0,I_0,V_0,R_0)$ are defined over broad intervals able to cover the expected ($S,A,I,V,R$) populations at any given time for all the cases considered. Although the initial infected population $I_0$ is known, we still included it in the set of quantities to be fit by the network. We found that by doing so, the network generalizes better, improving the fit of a given epidemic wave.\n\nWe then performed the fitting procedure described in \\ref{sec:fitting_method} using the infectious populations of 27 countries\/states. 
The data (including the number of vaccine shots administered) are retrieved from the `Our World in Data' GitHub repository \\cite{OWID}.\nStrictly speaking, the Infectious population of the SAIVR model is the number of people actively infected by the virus on a given day, a number that should not be confused with the daily new cases. As such, it was computed as the difference between the total number of cases and that of recovered\/dead individuals.\n\nWe first applied the method to study the most recent Covid-19 wave in some of the countries with the fastest vaccination campaigns, those that managed to administer a first shot to at least $30\\%$ of their population.\nFig.~\\ref{fig:vaccinated_countries_fit} presents real data (red points), fits (black solid line), and some predictions (black dashed line) for the infectious populations of Israel, UK, Hungary, France, Romania and Serbia.\nWe also studied the USA, although due to the large size of the country we focused only on the largest states or those with the highest vaccination rates in the first quarter of 2021, see Fig.~\\ref{fig:US_states_fit}.\nTo assess the generality of the model and the fitting procedure, we applied it to 12 other countries spread throughout the world which had a high number of cases at the end of spring 2021.\nThe corresponding fits are shown in Figs. 
S1-S2 of the \\textbf{Supplementary Material}.\nAs can be seen, the model is able to reproduce all these epidemic curves well, despite missing some abrupt and rapid events that can be captured by more sophisticated multiple-wave models \\cite{Kaxiras_2020}.\nAll the parameters determined by the network can be found in the \\textbf{Supplementary Material}; their values are within the bundles of Table~\\ref{tab:parameters}.\nIn particular, the removal time $\\gamma^{-1}$ was found to be in the 10-12 day range, $\\beta_1$ oscillated within $[0.14, 0.19]$, while $\\alpha_1$ was more volatile.\n\\\\\n\n\n\n\n\\section{Insights on the future: vaccine hesitancy, herd immunity and new variants}\n\n\\begin{figure}[h]\n\\includegraphics[width=1.0\\textwidth]{hesitancy.pdf}\n\\caption{Total infected population as a function of vaccination rate, vaccine efficacy and vaccine denial population percentage. Results are obtained by numerically solving the SAIVR model for $I_0 = 10^{-5} N_{pop}$ and $A_0 = 0.2 \\times I_0$, where $N_{pop} = 10^6$.\nThe parameters of the model used are those obtained by applying machine learning on the epidemic curves. \\\\\na) Infected population vs. vaccination rate $\\delta$ and vaccine efficacy $\\lambda$.\\\\\nb) Infected population as a function of the percentage of the population that avoids getting vaccinated and the vaccine rate $\\delta$.\n}\n\\label{fig:hesitancy}\n\\end{figure}\n\n\n\\label{sec:future_scenarios}\nIn this section we use the results of the analysis performed in the previous section to study how the vaccination campaign is affecting the pandemic and its future evolution.\nUnless otherwise specified, we set $\\beta_1 = 0.16$, $\\alpha_1 = 0.2$ and $\\gamma = 1\/12$, the average values retrieved from fitting real data. 
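As a baseline for the scenarios discussed below, the SAIVR equations can be integrated directly with a standard ODE solver. The sketch below uses the average fitted rates just quoted together with the fixed parameters of Table 1; the population size and seed infections are illustrative assumptions, and the $S$ equation groups the transmission terms as $\beta_1+\beta_2$ and $\alpha_1+\alpha_2$ so that the total population is conserved:

```python
# Sketch: direct numerical integration of the SAIVR equations with the
# average fitted rates (beta1 = 0.16, alpha1 = 0.20, gamma = 1/12) and the
# fixed parameters of Table 1. Population size and seed infections are
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.integrate import solve_ivp

N = 1e6
beta1, beta2 = 0.16, 1e-3       # symptomatic transmission rates
alpha1, alpha2 = 0.20, 1e-2     # asymptomatic transmission rates
gamma = 1 / 12                  # removal rate
zeta, eta = 5e-3, 1e-2          # V -> I and V -> A rates
delta, lam, eps = 0.003, 0.95, 1 / 21  # vaccination rate, efficacy, 1/(immunization time)

def saivr(t, y):
    S, A, I, V, R = y
    dS = -(beta1 + beta2) * I * S / N - (alpha1 + alpha2) * A * S / N \
         - delta * S / N + (1 - lam) * eps * V
    dA = alpha1 * A * S / N + beta2 * I * S / N + eta * A * V / N - gamma * A
    dI = beta1 * I * S / N + alpha2 * A * S / N + zeta * I * V / N - gamma * I
    dV = delta * S / N - eta * A * V / N - zeta * I * V / N - eps * V
    dR = gamma * I + gamma * A + lam * eps * V
    return [dS, dA, dI, dV, dR]

y0 = [N - 120, 20, 100, 0, 0]   # (S, A, I, V, R) at t = 0
t_eval = np.arange(0, 366)
sol = solve_ivp(saivr, (0, 365), y0, t_eval=t_eval, rtol=1e-8, atol=1e-8)

# All flows cancel in pairs, so the total population is conserved.
assert abs(sol.y.sum(axis=0)[-1] - N) < 1.0
print(f"Infectious peak on day {t_eval[np.argmax(sol.y[2])]}")
```

With $\beta_1 > \gamma$ the seeded infection grows into a full wave that peaks and then decays as the susceptible pool is depleted, which is the qualitative behavior explored in the figures of this section.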
\nWe start by pointing out how the vaccine efficacy is a key factor in halting the spread of the virus and how hesitancy is challenging the vaccination campaigns.\nFinally, we discuss the concept of herd immunity and how it is affected by more infectious Covid-19 variants.\n\n\\subsection{Vaccination efficacy and hesitancy}\n\nFigure \\ref{fig:hesitancy} presents the total infected ($I + A$) population under increasing values of vaccination onset times ($T_0$), vaccination daily rates ($\\delta$), vaccine efficacy ($\\lambda$) and of vaccine hesitancy\/denial population percentage. \nIn the top panel, total infected population is shown as a function of the vaccination rate $\\delta$ and vaccine efficacy $\\lambda$. As can be seen, even vaccines with a relatively low efficacy can rapidly reduce the infected population.\nIn Fig.~\\ref{fig:hesitancy} b) we show how the number of those infected evolves as a function of $\\delta$ and the percentage of population that avoids getting vaccinated.\nThese findings suggest that vaccine hesitancy, which accounts for a significant proportion of the population might seriously threaten the reach of herd immunity, especially if the situation is worsened by the appearance of more infectious Covid-19 strains.\n\n\\subsection{Herd immunity and new Covid-19 variants}\n\n\n\\begin{figure*}[h]\n\\includegraphics[width=1.0\\textwidth]{future_scenaria.pdf}\n\\caption{\n{\\bf Top row}: Time evolution of epidemics following the introduction of infected individuals in a population that has been already vaccinated at 50$\\%$ (left panel), 60$\\%$ (middle panel), and 70$\\%$ (right panel), with permanent immunity and no further vaccine roll-out, for four different numbers of newly infected individuals ($I_0 = 1 - 0.001 \\%$ of total population, the color code is presented in the legends). 
Results are obtained from numerically solving the SAIVR model for $\\beta_1 = 0.16$, $\\alpha_1=0.20$, which represent the average of the countries' fitted values (see text for details and for the values of the other parameters). \n{\\bf Middle row}: Same as in top row but with $\\beta_1 = 0.25$, $\\alpha_1=0.25$, which represent a more contagious Covid-19 variant. As is shown, vaccinated coverage of $50\\%$ and $60\\%$ cannot prevent the resurgence of outbreaks.\n{\\bf Bottom row}: Same as in middle row, but with ongoing vaccination roll-out with rate $\\delta = 0.001$, per day, as it is shown, a continuing vaccine roll-out lowers the intensity of the resurgent waves and prevents the resurgence of subsequent outbreaks.\n}\n\\label{fig:future_scenarios}\n\\end{figure*}\n\nThe achievement of herd immunity has been hailed as the ultimate goal of a successful vaccination campaign.\n`Herd immunity', also known as `population immunity', is the indirect protection from an infectious disease that happens when a sufficient portion of the population is immune either through vaccination or immunity developed through previous infection.\nOnce the herd immunity threshold is met, the spread of the infectious disease is kept under control, current outbreaks will extinguish and endemic transmission of the pathogen will be interrupted. \nEarlier estimates of the threshold found values of about $60-70 \\%$ of the population \\cite{Howard_herd,Chowdhury2020,Aguas2020}. 
\nIn reality, highly transmissible strains tend to increase the threshold value, possibly keeping this goal out of reach.\nFurthermore, persistent hesitancy about vaccines makes vaccinating more than the $60 - 65\\%$ of the population unlikely even in countries which are at the global forefront of the vaccination effort.\n\nWe quantitatively investigate the likelihood of incurring resurgent Covid-19 epidemics after having immunized $50\\%$, $60\\%$, and $70\\%$ of the population, under different new infection introductions, Covid-19 variants and ongoing vaccine deployment pace.\nHerd immunity protection is affected by the initial value of the removed population ($R_0$ at $t=0$), which comprises both recovered as well as fully vaccinated individuals, assuming permanent immunity for both cases. In each scenario, we study the epidemic evolution after the introduction of a cluster of newly infected individuals in the population. $I_0$ represents the newly infected load at $t=0$ ($I_0 = 1\\%, 0.1\\%, 0.01\\%, 0.001\\%$ of the total population).\n\nIn the first part of the study we considered the less infectious variants spreading during the 2021 spring by using the parameters retrieved in Sec.~\\ref{sec:fitting}.\nThe second scenario involves a more infectious Covid-19 strain such as the Delta variant, which has been reported to be able to spread the virus more efficiently \\cite{Delta_variant}.\nFinally, we explore cases that involve or not further (continuing) vaccine roll-outs. \n\nFigure ~\\ref{fig:future_scenarios} presents the results obtained by numerically solving the SAIVR model for the aforementioned cases. The top row presents the time evolution of an outbreak in a population where the 50$\\%$ (left panel), 60$\\%$ (middle panel), and 70$\\%$ (right panel) of the individuals have been immunized, for different numbers of initially infected individuals $I_0$. 
Results are obtained by solving the SAIVR model for $\\beta_1 = 0.16$, $\\alpha_1=0.20$, and $\\gamma = 1\/12$; the average values obtained for the countries considered in Sec.~\\ref{sec:fitting} and listed in the \\textbf{Supplementary Material}.\nAs can be seen, when the immune portion of the population is only $50\\%$, the outbreaks are contained but not eradicated, as the virus spreads in low-intensity waves \nmaking the disease endemic. Given the contagiousness of the less infectious variants, an immunity threshold larger than $60\\%$ is enough to eradicate the disease.\\\\\nThe middle row of Fig.~\\ref{fig:future_scenarios} presents the evolution of outbreaks driven by a more contagious variant; here $\\beta_1 = 0.25$ and $\\alpha_1=0.25$. As shown, if the immunized portion of the population is only $50\\%$ or $60\\%$, the resurgence of outbreaks cannot be prevented ($60\\%$ immunity protection makes the disease endemic). Only when $70\\%$ of the population is immunized is the disease eradicated. \\\\\nIn both the top and the middle rows, no vaccine deployment takes place during the outbreaks. The bottom row instead considers the evolution of the highly infectious variant but with a constant vaccine roll-out (with rate $\\delta = 0.001$).\n As can be seen, since individuals are partially protected even after getting only the first vaccine shot, continuing the vaccine administration rapidly lowers the intensity of the resurgent waves and helps prevent subsequent outbreaks. 
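The herd-immunity thresholds seen in these simulations can be cross-checked, under the stated assumptions, by linearizing the $(I, A)$ subsystem of the SAIVR equations around a disease-free state with a fraction $p$ of the population immune ($S = (1-p)N$, $V = 0$): an introduced cluster dies out when the largest eigenvalue of $(1-p)M - \gamma\,\mathrm{Id}$ is negative, where $M$ collects the transmission rates. A short sketch of that estimate:

```python
# Cross-check of the herd-immunity thresholds: near the disease-free
# state the (I, A) subsystem obeys d(I, A)/dt = (s * M - gamma * Id)(I, A)
# with s = S/N = 1 - p. An outbreak dies out when the largest eigenvalue
# is negative, so the threshold is p* = 1 - gamma / lambda_max(M).
import numpy as np

gamma = 1 / 12

def herd_threshold(beta1, alpha1, beta2=1e-3, alpha2=1e-2):
    M = np.array([[beta1, alpha2],
                  [beta2, alpha1]])
    lam_max = np.linalg.eigvals(M).real.max()
    return 1 - gamma / lam_max

print(f"spring-2021 rates : p* = {herd_threshold(0.16, 0.20):.2f}")
print(f"contagious variant: p* = {herd_threshold(0.25, 0.25):.2f}")
```

With the fitted spring-2021 rates this estimate gives $p^* \approx 0.58$, while the more contagious variant gives $p^* \approx 0.67$, consistent with the simulated outcomes (immunity slightly above $60\%$ suffices in the first case, while $70\%$ is needed in the second).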
\n\nAlthough recent reports on highly infectious variants claim that the efficacy of most vaccines is still about $90 \\%$ in preventing serious illness~\\cite{Delta_vaccine}, their performance in halting asymptomatic transmission is still not clear.\nIn this study, we assumed that the vaccine efficacy in protecting against more infectious variants is the same as for the less infectious ones.\nDespite this optimistic assumption, the herd immunity threshold is moved to higher values by simply increasing the infection rates.\n\n\\section{Discussion} \\label{sec:Conclusions}\nCompartmental models are efficient tools to deal with the time evolution of disease outbreaks. They provide us with useful intuition on the impact of non-pharmaceutical interventions in decreasing incidence rates.\\\\\nIn this work, we have augmented the classic SIR model with the ability to accommodate asymptomatic transmission and vaccinated individuals.\nThe SAIVR model is a straightforward deterministic model, which does not take into consideration age, gender or geographic clustering.\nDespite this, its simplicity and the insights it offers on how key epidemiological variables affect individuals are among its main strengths. \nIts power also lies in the fact that, as factors such as new variants are added to the model, it is easy to adjust its parameters and provide best-fit curves between the data and the model predictions.\\\\\nSince the inclusion of the Asymptomatic and Vaccinated compartments enlarged the number of parameters and initial conditions of the model, we employed a novel semi-supervised framework to estimate most of them.\nAn unsupervised neural network solves the model's differential equations over a range of parameters and initial conditions. 
A supervised approach then incorporates data and determines the optimal initial conditions and modeling parameters that best fit the 27 epidemic curves considered.\nAs expected due to the heterogeneity of the country sample,\nthe resulting fitted parameters are dissimilar, although they follow similar trends.\\\\\nWe used these results to shed light on the impact of the vaccination campaign on the future of the pandemic.\nWe pointed out how vaccine hesitancy is one of the most important hurdles of the campaign, and further efforts should be made to support people and give them correct information about vaccines. \nBecause of this, vaccinating the critical number of people that have to be immune in order to prevent future outbreaks (i.e. herd immunity) is likely to be out of reach. Widely circulating coronavirus variants are also a threat, as they move the herd immunity threshold to higher values.\nThis points out the importance of rapidly reducing the infection rate by any means, such as\nby imposing restrictive measures in case highly infective new variants appear before the herd immunity threshold is reached.\nThese results manifest the need for continuing the vaccination effort and the drive for achieving high vaccination coverage in order to contain outbreaks generated by new and possibly more infectious variants.\\\\\n\n{\\bf Data availability}\\\\\nThe code used to perform the fitting is available on GitHub \\cite{Github_Mattia}. All study data are either included in the article and\nsupporting information or available in Ref.~\\cite{OWID}. \\\\\n\n{\\bf Author Contribution}\\\\\nE.K. conceived the proposed model and supervised the study. M.A, G.N. and M.M. designed the code. M.A. and G.N. performed numerical experiments, collected data and analyzed the results. M.A. wrote the initial\ndraft of the manuscript. 
All authors critically revised, improved, and reviewed the manuscript in various\nways, and gave final approval for publication.\\\n\n{\bf Competing interests}\\\nThe authors declare no competing interests.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $U$ be an open set in \\textbf{C}$^n$ which contains 0, let $f$ be a holomorphic function defined on $U$, and let $f_p$ denote the germ of $f$ at a point $p\\in U$.\\\\\n\nFor any two holomorphic functions $g,h$ defined on $U$, if $g_0$ and $h_0$ are relatively prime, then with the help of resultants we know that $g$ and $h$ are relatively prime nearby. Precisely, this means that there exists an open neighborhood $V\\subset U$ of 0 such that for any point $q\\in V$, $g_q$ and $h_q$ are relatively prime. In this sense, we can say that \\textbf{being co-prime is a stable property.}\\\\\n\nCan we say that \\textbf{irreducibility is a stable property}? In the case of dimension 2, the answer is positive, and the proof is easy. But in the case of dimension 3, I will present a polynomial as a counterexample.\n\n\n\\section{Proof for the Case of Dimension 2}\n\\textbf{Statement}: Let $f=f(z_1,z_2)$ be a holomorphic function on $U\\subset$ \\textbf{C}$^2$ ($0\\in U$) whose germ at the origin is irreducible. Then there exists an open neighborhood $V\\subset U$ of 0 such that for any point $q\\in V$, $f_q$ is irreducible. (\\textbf{Remark}: If $f(p)\\neq 0$, then $f$ is irreducible at $p$, so we only need to consider the zero points of $f$.)\\\\\\\\\n\\textbf{Proof:} Without loss of generality, we can assume that $f(0,z_2)$ is not identically 0 near the origin, and that $f(0,0)=0$.\\\\\\\\\nLet $w=z_2^d+e_1(z_1)z_2^{d-1}+\\cdots+e_{d-1}(z_1)z_2+e_d(z_1)$ be a Weierstrass polynomial of $f$ near 0.\\\\\\\\\nBecause $w$ is irreducible at 0, $w$ and $\\frac{\\partial w}{\\partial z_2}$ are relatively prime near 0. 
Then the resultant of $w$ and $\\frac{\\partial w}{\\partial z_2}$ is not identically zero, so the common zero locus of $w$ and $\\frac{\\partial w}{\\partial z_2}$ is discrete near 0.\\\\\\\\\nFrom the above, we know that there exists an open set $V$ ($0\\in V\\subset U$) such that, in $V$, $(0,0)$ is the only zero point of $w$ which can possibly be singular (since for any other point $p\\in V$, $\\frac{\\partial w}{\\partial z_2}(p)\\neq 0$). We can conclude that at any zero point $p$ ($p\\neq 0$) of $w$ in $V$, $w$ is a local complex parameter near $p$, and hence the germ of $w$ at $p$ is irreducible.\\\\\\\\\nFinally, because $w$ is a Weierstrass polynomial of $f$ at 0, we know that in $V$ the irreducibility of $f$ is the same as that of $w$.\\hfill$\\Box$\\medskip\n\n\\section{A Counter Example in Dimension 3} \\label{se:C0}\nIn the case of dimension 3, the statement should be:\\\\\\\\\n\\textbf{Statement}: Let $f=f(z_1,z_2,z_3)$ be a holomorphic function on $U\\subset$ \\textbf{C}$^3$ ($0\\in U$) whose germ at the origin is irreducible. Then there exists an open neighborhood $V\\subset U$ of 0 such that for any point $q\\in V$, $f_q$ is irreducible.\\\\\\\\\nBut unfortunately, this statement is not true. In this section, I will present a polynomial of three variables as a counterexample.\\\\\\\\\nThis polynomial is $f=z_3^2-z_1z_2^2$.\\\\\\\\\n\n\\subsection{Irreducibility of $f$ at the origin}\nObviously, near 0, $f$ is a Weierstrass polynomial of itself (we choose $z_3$ as the polynomial variable). Now, we will show irreducibility at the origin by contradiction. 
\\\\\\\\\nIf $f$ is not irreducible at the origin, then its Weierstrass polynomial factors at the origin into Weierstrass polynomials. Assume that, near the origin,\n$f=(z_3-g(z_1,z_2))(z_3-h(z_1,z_2))$, where $g,h$ are holomorphic functions of the variables $z_1,z_2$ near 0, and $g(0,0)=h(0,0)=0$.\\\\\\\\\nFrom the factorization $f=(z_3-g(z_1,z_2))(z_3-h(z_1,z_2))$, we know that $g+h=0$ and $gh=-z_1z_2^2$, which implies $g^2=z_1z_2^2$ near 0.\\\\\\\\\nBut if $g^2=z_1z_2^2$ near 0, then for any fixed $\\varepsilon\\in$ \\textbf{C}, $\\varepsilon\\neq 0$, of sufficiently small norm, $g(z_1,\\varepsilon)^2=\\varepsilon^2 z_1$ near 0. From elementary knowledge of functions of one complex variable ($z_1$ admits no holomorphic square root near 0), this is not possible.\\\\\\\\ From the argument above, we know that $f$ is irreducible at the origin.\n\n\\subsection{Further Argument}\nAt a point $p=(z,0,0)$ ($z\\neq 0$), we know that $f(p)=0$, and we can easily factorize $f$ as $f=(z_3+z_2r)(z_3-z_2r)$ near $p$, where $r$ is a one-variable holomorphic function such that $r^2=z_1$ near $(z,0,0)$ (because $z$ is not 0, we can take a square root of $z_1$ nearby).\\\\\\\\\nFrom the argument above, we know that in any neighborhood $U$ of the origin there exists some point $p$ such that $f$ is not irreducible at $p$. This contradicts the statement at the beginning of this section.\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and statement of results}\n\nFor a simple loopless graph $G = (V(G),E(G))$ and a graph $H = (V(H),E(H))$ (possibly with loops, but without multi-edges), an {\\em $H$-coloring} of $G$ is an adjacency-preserving map $f:V(G) \\to V(H)$ (that is, a map satisfying $f(x) \\sim_H f(y)$ whenever $x\\sim_G y$). Denote by $\\hom(G,H)$ the number of $H$-colorings of $G$.\n\nThe notion of $H$-coloring has been the focus of extensive research in recent years. 
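For small graphs, $\hom(G,H)$ can be computed directly from this definition by enumerating all maps. The sketch below is a brute-force illustration (the edge-list encoding, with a loop in $H$ written as a pair $(v,v)$, is an assumption of this sketch):

```python
from itertools import product

def hom(G_v, G_e, H_v, H_e):
    """Brute-force count of H-colorings of G (a loop in H is a pair (v, v))."""
    adj = {(x, y) for x, y in H_e} | {(y, x) for x, y in H_e}  # symmetrize H
    count = 0
    for f in product(H_v, repeat=len(G_v)):
        colour = dict(zip(G_v, f))
        if all((colour[x], colour[y]) in adj for x, y in G_e):
            count += 1
    return count

# P_3 mapped into H_ind (an edge with one looped endvertex): homomorphisms
# correspond to independent sets of P_3, of which there are 5.
print(hom([1, 2, 3], [(1, 2), (2, 3)], [0, 1], [(0, 1), (1, 1)]))  # -> 5
```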
Lov\\'asz's monograph \\cite{Lovasz} explores natural connections to graph limits, quasi-randomness and property testing. Many important graph notions can be encoded via homomorphisms --- for example, {\\em proper $q$-coloring} using $H=K_q$ (the complete graph on $q$ vertices), and {\\em independent} (or {\\em stable}) {\\em sets} using $H=H_{\\text{ind}}$ (an edge with one looped endvertex). The language of $H$-coloring is also ideally suited for the mathematical study of hard-constraint spin models from statistical physics (see e.g. \\cite{BrightwellWinkler}). Particularly relevant to the present paper is the Widom-Rowlinson model of the occupation of space by $k$ mutually repulsive particles, which is encoded as an $H$-coloring by using the graph $H = H_{\\rm WR}(k)$ which has loops on every vertex of $K_{1,k}$ (the star on $k+1$ vertices). Note that the original Widom-Rowlinson model \\cite{WidomRowlinson} has $k=2$.\n\nMany authors have addressed the following extremal enumerative question for $H$-coloring: given a family ${\\mathcal G}$ of graphs, and a graph $H$, which $G \\in {\\mathcal G}$ maximizes or minimizes ${\\rm hom}(G,H)$? This question can be traced back to Birkhoff's attacks on the 4-color theorem, but recent attention on it owes more to Wilf and (independently) Linial's mid-1980's query as to which $n$-vertex, $m$-edge graph admits the most proper $q$-colorings (i.e. has the most $K_q$-colorings). For a survey of the wide variety of results and conjectures on the extremal enumerative $H$-coloring question, see \\cite{Cutler}.\n\nA focus of the present paper is the extremal enumerative $H$-coloring question for the family $\\calT(n)$, the set of all trees on $n$ vertices. 
This family has two natural candidates for extremality, namely the path $P_n$ and the star $K_{1,n-1}$, and indeed in \\cite{ProdingerTichy} Prodinger and Tichy showed that these two are\nextremal for the count of independent sets in trees: for all $T \\in \\calT(n)$,\n\\begin{equation} \\label{inq-PT}\n\\hom(P_n,H_{\\text{ind}}) \\leq \\hom(T,H_{\\text{ind}}) \\leq \\hom(K_{1,n-1},H_{\\text{ind}}).\n\\end{equation}\nThe Hoffman-London matrix inequality (see e.g. \\cite{Hoffman,London,Sidorenko2}) is equivalent to the statement that $\\hom(P_n,H) \\leq \\hom(K_{1,n-1},H)$ for \\emph{all} $H$ and $n$; a significant generalization of this due to Sidorenko \\cite{Sidorenko} (see also \\cite{CsikvariLin} for a short proof) shows that the right-hand inequality of (\\ref{inq-PT}) extends to arbitrary $H$.\n\\begin{theorem}[Sidorenko] \\label{thm-siderenko}\nFix $H$ and $n \\geq 1$. Then for any $T \\in \\calT(n)$,\n\\[\n\\hom(T,H) \\leq \\hom(K_{1,n-1},H).\n\\]\n\\end{theorem}\nIn other words, the star admits not just the most independent sets among $n$-vertex trees, but also the most $H$-colorings for arbitrary $H$. Two points are worth noting here. First, since deleting edges in a graph cannot decrease the number of $H$-colorings, Theorem \\ref{thm-siderenko} shows that among all connected graphs on $n$ vertices, $K_{1,n-1}$ admits the most $H$-colorings. 
Second, if we extend $\\calT(n)$ instead to the family of graphs on $n$ vertices with minimum degree at least $1$ and consider even $n$, then as shown by the first author \\cite{Engbers} the number of $H$-colorings is maximized either by the star or by the graph consisting of a union of disjoint edges.\n\nThe left-hand side of (\\ref{inq-PT}) turns out {\\em not} to generalize to arbitrary $H$: Csikv\\'{a}ri and Lin \\cite{CsikvariLin}, following earlier work of Leontovich \\cite{Leontovich}, exhibit a (large) tree $H$ and a tree $E_7$ on seven vertices such that $\\hom(E_7,H) < \\hom(P_7,H)$, and raise the natural question of characterizing those $H$ for which $\\hom(P_n,H) \\leq \\hom(T,H)$ holds for all $n$ and all $T \\in \\calT(n)$.\n\nOur first result gives a partial answer to this question. Before stating it, we need to establish a convention concerning degrees of vertices in graphs with loops.\n\n\\medskip\n\n\\noindent \\textbf{Convention:} For all graphs in this paper, the degree of a vertex $v$ is the number of neighbors\nof\n$v$, i.e., $d(v) = |\\{w : v \\sim w\\}|$. In particular, a loop on a vertex\nadds one to the degree. We let $\\Delta$ denote the maximum degree of $H$.\n\nWe also let $G^{\\circ}$ denote the graph obtained from $G$ by adding loops to every vertex in $G$.\n\n\\begin{theorem} \\label{thm-minfortrees}\nLet $n\\geq 1$ and let $H$ be a regular graph. For an integer $\\ell \\geq 1$, let $H^{\\circ}(\\ell)$ be the join of $H$ and $K_{\\ell}^{\\circ}$. \nThen for any $T \\in \\calT(n)$,\n\\[\n\\hom(P_n,H^{\\circ}(\\ell)) \\leq \\hom(T,H^{\\circ}(\\ell)).\n\\]\nEquality occurs if and only if $T = P_n$ or $H^{\\circ}(\\ell) = K_{q}^{\\circ}$ for some $q \\geq \\ell$. \n\\end{theorem}\nNotice that the result also holds for $H$ where each component is of the form $H^{\\circ}(\\ell)$. 
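As a sanity check, Theorem \ref{thm-minfortrees} can be verified by brute force on small instances. The sketch below takes $H=K_2$ (which is $1$-regular), forms $H^{\circ}(1)$ by joining a single looped vertex, and compares the path against the other two trees on five vertices (the graph encodings are assumptions of this illustration):

```python
from itertools import product

def hom(G_v, G_e, H_v, H_e):
    """Brute-force count of H-colorings (a loop in H is a pair (v, v))."""
    adj = {(x, y) for x, y in H_e} | {(y, x) for x, y in H_e}
    count = 0
    for f in product(H_v, repeat=len(G_v)):
        colour = dict(zip(G_v, f))
        if all((colour[x], colour[y]) in adj for x, y in G_e):
            count += 1
    return count

# H = K_2 on {0, 1}; H°(1) joins the looped vertex 2 to everything.
H_v, H_e = [0, 1, 2], [(0, 1), (0, 2), (1, 2), (2, 2)]

V5 = [1, 2, 3, 4, 5]  # the three trees on five vertices:
path   = [(1, 2), (2, 3), (3, 4), (4, 5)]
spider = [(1, 2), (2, 3), (3, 4), (3, 5)]
star   = [(1, 2), (1, 3), (1, 4), (1, 5)]

print([hom(V5, T, H_v, H_e) for T in (path, spider, star)])  # -> [99, 103, 113]
```

Here the path attains the minimum and the star the maximum, consistent with Theorems \ref{thm-siderenko} and \ref{thm-minfortrees}.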
Theorem \\ref{thm-minfortrees} generalizes the left-hand side of (\\ref{inq-PT}), as $H_{\\text{ind}}$ is the join of $K_1$ and $K_{1}^{\\circ}$, \nand our proof is a generalization of the inductive approach used in \\cite{ProdingerTichy}.\n\nSince the Widom-Rowlinson graph $H_{\\rm WR}(k)$ can be constructed from the disjoint union of $k$ looped vertices by the addition of a single looped dominating vertex, an immediate corollary of Theorem \\ref{thm-minfortrees} is that for all $n \\geq 1$,\n\\[\n\\hom(P_n,H_{\\rm WR}(k)) \\leq \\hom(T,H_{\\rm WR}(k))\n\\]\nfor all $T \\in \\calT(n)$. We also note in passing that Theorem \\ref{thm-minfortrees} may be interpreted in terms of partial $H$-colorings of $G$ (that is, $H$-colorings of induced subgraphs of $G$ that need not be extendable to $H$-colorings of $G$). Specifically, when $\\ell=1$ the theorem says that if $H$ is regular, then among all $n$-vertex trees none admits fewer partial $H$-colorings than $P_n$.\n\nOur second result is a new proof of Sidorenko's theorem, valid for sufficiently large $n$.\n\\begin{theorem} \\label{thm-maxfortrees}\nThere is a constant $c_H$ such that if $n \\geq c_H$ and $T \\in \\calT(n)$ then\n\\[\n\\hom(T,H) \\leq \\hom(K_{1,n-1},H)\n\\]\nwith equality if and only if $H$ is regular or $T = K_{1,n-1}$.\n\\end{theorem}\n\nWhile Theorem \\ref{thm-maxfortrees} is weaker than Theorem \\ref{thm-siderenko} in that it only holds for $n \\geq c_H$, it is noteworthy for two reasons. Firstly, our proof for non-regular $H$ uses a stability technique --- we show that if a tree is not structurally close to a star (specifically, if it has a long path), then it admits significantly fewer $H$-colorings than the star, and if it is structurally almost a star but has some blemishes, then again it admits fewer $H$-colorings. Secondly, the proof is less tree-dependent than Sidorenko's, and so the ideas used may be applicable in other settings. 
We illustrate this by considering the extremal enumerative question for $H$-colorings of $2$-connected graphs. Let $\\calC_2(n)$ denote the set of $2$-connected graphs on $n$ vertices, and let $K_{a,b}$ be the complete bipartite graph with $a$ and $b$ vertices in the two color sets. Also, for graph $H$ with maximum degree $\\Delta$, denote by $s(H)$ the number of ordered pairs $(i,j)$ of vertices of $H$ satisfying $|N(i) \\cap N(j)| = \\Delta$. The special case $H=H_{\\text{ind}}$ of the following was established by Hua and Zhang \\cite{HuaZhang}.\n\\begin{theorem} \\label{thm-2connected}\nFor non-regular connected $H$ there is a constant $c_H$ such that if $n \\geq c_H$ and $G \\in \\calC_2(n)$ then\n\\[\n\\hom(G,H) \\leq \\hom(K_{2,n-2},H)\n\\]\nwith equality if and only if $G=K_{2,n-2}$.\n\nFor $\\Delta$-regular $H$ the same conclusion holds whenever $s(H) \\geq 2\\Delta^2 + 1$ (when $H$ is loopless and bipartite) or $s(H) \\geq \\Delta^2+1$ (otherwise).\n\\end{theorem}\nFor regular $H$ we have $s(H) \\geq |V(H)|$ (consider ordered pairs of the form $(i,i)$), so an immediate corollary in this case is that if $|V(H)| \\geq 2\\Delta^2+1$ (when $H$ is loopless and bipartite) or if $|V(H)| \\geq \\Delta^2+1$ (otherwise) then for all large $n$ the unique $2$-connected graph with the most $H$-colorings is $K_{2,n-2}$. Note that the bounds on $s(H)$ in the $\\Delta$-regular case are tight with respect to the characterization of uniqueness: when $H=K_{\\Delta}^{\\circ}$ we have $\\hom(G,H) = \\Delta^n$ for {\\em all} $n$-vertex $G$, and here $s(H)=\\Delta^2$, and when $H=K_{\\Delta, \\Delta}$ (a loopless bipartite graph) we have $\\hom(G,H) = 2\\Delta^n \\cdot \\textbf{1}_{\\{\\text{$G$ bipartite}\\}}$ for all $n$-vertex connected graphs $G$, and here $s(H)=2\\Delta^2$.\n\nSince all $G \\in \\calC_2(n)$ are connected, the restriction to connected $H$ is natural. Unlike with Theorem \\ref{thm-maxfortrees}, the restriction to non-regular $H$ is somewhat significant here. 
In particular, there are examples of regular graphs $H$ such that for all large $n$, there are graphs in $\\calC_2(n)$ that admit more $H$-colorings than $K_{2,n-2}$ (and so there is no direct analog of Theorem \\ref{thm-siderenko} in the world of $2$-connected graphs). One such example is $H=K_3$; it is easily checked that ${\\rm hom}(C_n,K_3)>{\\rm hom}(K_{2,n-2},K_3)$ for all $n \\geq 6$ (where $C_n$ is the cycle on $n$ vertices). \nMore generally, we have the following.\n\\begin{theorem} \\label{thm-2conn-cycles}\nLet $n \\geq 3$ and $q \\geq 3$ be given. Then for any $G \\in \\calC_2(n)$, \n\\[\n\\hom(G,K_q) \\leq \\hom(C_n,K_q),\n\\]\nwith equality if and only if $G=C_n$, unless $q=3$ and $n=5$ in which case $G=K_{2,3}$ also achieves equality.\n\nThe same conclusion holds with $\\calC_2(n)$ replaced by the larger family of $2$-edge-connected graphs on $n$ vertices.\n\\end{theorem}\n\n\nIn the setting of Theorem \\ref{thm-2conn-cycles} it turns out that no extra complications are introduced in moving from $2$-connected to $2$-edge-connected. We do not, however, currently see a way to extend Theorem \\ref{thm-2connected} to this larger family. In particular, the key structural lemma given in Corollary \\ref{cor-P3} does not generalize to the family of 2-edge-connected graphs. \n\nThe rest of the paper is laid out as follows. The proof of Theorem \\ref{thm-minfortrees} is given in Section \\ref{sec-minfortrees}. We prove Theorem \\ref{thm-maxfortrees} in Section \\ref{sec-maxfortrees}, and use similar ideas to then prove Theorem \\ref{thm-2connected} in Section \\ref{sec-2connected}. The proof of Theorem \\ref{thm-2conn-cycles} is given in Section \\ref{sec-2conn-cycles}. Finally, we end with a number of open questions in Section \\ref{sec-concluding remarks}. \n\n\n\\section{Proof of Theorem \\ref{thm-minfortrees}} \\label{sec-minfortrees}\n\nLet $H$ be a $\\Delta$-regular graph, and let $H^{\\circ}(\\ell)$ be the join of $H$ and $K_{\\ell}^{\\circ}$. 
\nUsing induction on $n$, we will show something a little stronger than Theorem \\ref{thm-minfortrees}, namely that for any $n$-vertex \\emph{forest} $F$, $\\hom(F,H^{\\circ}(\\ell)) \\geq \\hom(P_n,H^{\\circ}(\\ell))$, with equality if and only if either $H^{\\circ}(\\ell)$ is a complete looped graph, or $F=P_n$. We will first prove the inequality, and then address the cases of equality.\n\nFor $n \\leq 4$, the only forest $F$ on $n$ vertices that is not a subgraph of $P_n$ (and so for which $\\hom(F,H^{\\circ}(\\ell))\\geq \\hom(P_n,H^{\\circ}(\\ell))$ is immediate) is $F=K_{1,n-1}$; but in this case the required inequality follows directly from the Hoffman-London inequality \\cite{Hoffman,London}.\n\nSo now fix $n \\geq 5$, and let $F$ be a forest on $n$ vertices. In what follows, for $x \\in V(F)$ and $i \\in V(H^{\\circ}(\\ell))$ we write $\\hom(F,H^{\\circ}(\\ell) | x \\mapsto i)$ for the number of homomorphisms from $F$ to $H^{\\circ}(\\ell)$ that map $x$ to $i$.\nAlso, we'll assume that $H^{\\circ}(\\ell)$ has $q+\\ell$ vertices, say $V(H^{\\circ}(\\ell)) = \\{1,\\ldots,q, \\ldots,q+\\ell\\}$, with the looped dominating vertices being $q+1,\\ldots,q+\\ell$.\n\nLet $x$ be a leaf in $F$, with unique neighbor $y$ (note that we may assume that $F$ has an edge, because the desired inequality is trivial otherwise).\nFor each $i \\in \\{1,\\ldots,q+\\ell\\}$,\n\\begin{equation} \\label{eqn-restrict}\n \\hom(F,H^{\\circ}(\\ell) | x \\mapsto i) = \\sum_{j \\sim i} \\hom(F-x,H^{\\circ}(\\ell) | y \\mapsto j),\n\\end{equation}\nwhich in particular implies $\\hom(F,H^{\\circ}(\\ell) | x \\mapsto k) = \\hom(F-x,H^{\\circ}(\\ell))$ if $k \\in \\{q+1,\\ldots,q+\\ell\\}$.\nSo\n\\begin{eqnarray*}\n\\hom(F,H^{\\circ}(\\ell)) &=& \\sum_{i=1}^{q+\\ell} \\hom(F,H^{\\circ}(\\ell) | x \\mapsto i) \\\\\n&=& \\ell \\hom(F-x,H^{\\circ}(\\ell)) + \\sum_{i=1}^{q} \\sum_{j\\sim i} \\hom(F-x,H^{\\circ}(\\ell) | y \\mapsto j)\\\\\n&=& \\ell \\hom(F-x,H^{\\circ}(\\ell)) + 
\\sum_{j=q+1}^{q+\\ell} q\\hom(F-x,H^{\\circ}(\\ell) | y \\mapsto j) \\\\\n&& \\qquad + \\sum_{j=1}^{q} d_{H}(j) \\hom(F-x,H^{\\circ}(\\ell) | y \\mapsto j)\\\\\n&=& \\ell \\hom(F-x,H^{\\circ}(\\ell)) + \\sum_{j=q+1}^{q+\\ell} (q-\\Delta) \\hom(F-x,H^{\\circ}(\\ell) | y \\mapsto j) \\\\\n&& \\qquad + \\Delta\\sum_{j=1}^{q+\\ell} \\hom(F-x,H^{\\circ}(\\ell) | y \\mapsto j)\\\\\n&=& (q-\\Delta)\\hom(F-x-y,H^{\\circ}(\\ell)) + (\\Delta+\\ell) \\hom(F-x,H^{\\circ}(\\ell)).\n\\end{eqnarray*}\n\nIf $F=P_n$ then $F-x=P_{n-1}$ and $F-x-y=P_{n-2}$. Therefore, by induction, we have\n\\begin{eqnarray*}\n\\hom(F,H^{\\circ}(\\ell)) &=& (q-\\Delta)\\hom(F-x-y,H^{\\circ}(\\ell)) + (\\Delta+\\ell)\\hom(F-x,H^{\\circ}(\\ell))\\\\\n&\\geq& (q-\\Delta)\\hom(P_{n-2},H^{\\circ}(\\ell)) + (\\Delta+\\ell)\\hom(P_{n-1},H^{\\circ}(\\ell))\\\\\n&=& \\hom(P_{n},H^{\\circ}(\\ell)).\n\\end{eqnarray*}\n\nWhen can we achieve equality? If $H^{\\circ}(\\ell)$ is the complete looped graph, then we have equality for all $F$. So we may assume that $H^{\\circ}(\\ell)$ is not a complete looped graph, and that in particular there are $i, j \\in V(H^{\\circ}(\\ell))$ with $i \\not \\sim j$.\nNow consider an $F$ with more than one component, and let $u$ and $v$ be vertices of $F$ in different components. Using the looped dominating vertices of $H^{\\circ}(\\ell)$ we may construct an $H^{\\circ}(\\ell)$-coloring of $F$ in which $u$ is colored $i$ and $v$ is colored $j$. This is not a valid $H^{\\circ}(\\ell)$-coloring of the forest obtained from $F$ by adding the edge $uv$. 
It follows that there is a tree $T$ on $n$ vertices with $\\hom(F,H^{\\circ}(\\ell)) > \\hom(T,H^{\\circ}(\\ell))$.\nThe proof of the Hoffman-London inequality ($\\hom(K_{1,n-1},H) \\geq \\hom(P_n,H)$) given in \\cite{Sidorenko2} in fact shows strict inequality for $H^{\\circ}(\\ell)$ that is not a complete looped graph; since strict inequality holds for the base cases $n \\leq 4$, the inductive proof therefore gives strict inequality unless $F-x=P_{n-1}$ and $F-x-y=P_{n-2}$, which implies $F=P_n$.\n\n\n\n\\section{Proof of Theorem \\ref{thm-maxfortrees}} \\label{sec-maxfortrees}\nFirst, notice that for $\\Delta$-regular $H$ we have ${\\rm hom}(T,H) = |V(H)| \\Delta^{n-1}$ for all $T \\in \\calT(n)$, as can be seen by fixing the color on one vertex and iteratively coloring away from that vertex. \nTherefore the goal for the remainder of this section is to show that for non-regular $H$ there is a constant $c_H$ such that if $n \\geq c_H$ and $T \\in \\calT(n)$ then\n\\[\n\\hom(T,H) \\leq \\hom(K_{1,n-1},H),\n\\]\nwith equality if and only if $T = K_{1,n-1}$.\nRecall that a loop in $H$ will count \\emph{once} toward the degree of a vertex $v \\in V(H)$ and $\\Delta$ denotes the maximum degree of $H$. If $H$ has components $H_1,\\ldots,H_s$ then $\\hom(G,H)= \\hom(G,H_1) + \\cdots + \\hom(G,H_s)$, so we may assume that $H$ is connected.\n\nLet $V_{=\\Delta} \\subset V(H)$ be the set of vertices in $H$ with degree $\\Delta$. By coloring the center of the star with a color from $V_{=\\Delta}$, we have\n\\[\n\\hom(K_{1,n-1},H) \\geq |V_{=\\Delta}|\\Delta^{n-1}.\n\\]\nWe will show that if $n \\geq c_H$ and $T \\neq K_{1,n-1}$, then $\\hom(T,H) < |V_{=\\Delta}|\\Delta^{n-1}$. 
We use the following lemma, which will also be needed in the proof of Theorem \\ref{thm-2connected}.\n\n\\begin{lemma}\\label{lem-no long paths}\nFor non-regular $H$ there exists a constant $\\ell_H$ such that if $k \\geq \\ell_H$, then $\\hom(P_{k},H) < \\Delta^{k-2}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $A$ denote the adjacency matrix of $H$, which is a symmetric non-negative matrix. \nSince an $H$-coloring of $P_k$ is exactly a walk of length $k$ through $H$, we have\n\\begin{equation} \\label{eq-homs-lin-alg-count}\n\\hom(P_k,H) = \\sum_{i,j} (A^k)_{ij} \\leq |V(H)|^2\\max_{i,j} (A^k)_{ij}.\n\\end{equation}\nBy the Perron-Frobenius theorem (see e.g. \\cite[Theorem 1.5]{Seneta}), the largest eigenvalue $\\lambda$ of $A$ has a strictly positive eigenvector ${\\bf x}$, for which it holds that $A^k \\textbf{x} = \\lambda^k \\textbf{x}$. Considering the row of $A^k$ containing $\\max_{i,j} \\left( A^k \\right)_{ij}$, it follows that $\\max_{i,j} \\left( A^k \\right)_{ij} \\leq c_1 \\lambda^k$ for some constant $c_1$. Combining this with (\\ref{eq-homs-lin-alg-count}) we find that $\\hom(P_k,H) \\leq |V(H)|^2 c_1 \\lambda^k$.\nSince $H$ is not regular, we have $\\lambda < \\Delta$, and so the lemma follows.\n\\end{proof}\n\nWe use Lemma \\ref{lem-no long paths} to show that any tree $T$ containing $P_k$ as a subgraph, for $k \\geq \\ell_H$, has $\\hom(T,H) < |V_{=\\Delta}|\\Delta^{n-1}$. 
Indeed, by first coloring the path and then iteratively coloring the rest of the tree we obtain\n\\[\n\\hom(T,H) < \\Delta^{k-2} \\Delta^{n-k} < |V_{=\\Delta}|\\Delta^{n-1}.\n\\]\n(The bound $\\hom(P_k,H) < \\Delta^{k-2}$ will be necessary for the proof of Theorem \\ref{thm-2connected}; here the weaker bound $\\hom(P_k,H) < |V_{=\\Delta}|\\Delta^{k-1}$ would suffice.)\n\nSo suppose that $T$ does not contain a path of length $\\ell_H$.\nNotice that for $n>2^{c+1}$ there are at most\n\\[\n1+n^{1\/(c+1)}+n^{2\/(c+1)}+\\cdots + n^{c\/(c+1)} < n\n\\]\nvertices in a rooted tree with depth at most $c$ and each vertex having degree at most $n^{1\/(c+1)}$. Because of this, there exists a constant $c_2>0$ (which depends on $\\ell_H$, but is independent of $n$) and \na vertex $v \\in T$ with $d(v) \\geq n^{c_2}$. \nSince $T \\neq K_{1,n-1}$ we have $d(v) < n-1$ and so some neighbor $w$ of $v$ is adjacent to a vertex in $V(T) \\setminus (\\{v\\} \\cup N(v))$.\nAs $H$ is connected and not regular, there is an $i \\in V(H)$ with $d_H(i) = \\Delta$ and $d_H(j) \\leq \\Delta-1$ for some neighbor $j$ of $i$.\n\nThe number of $H$-colorings of $T$ that don't color $v$ with a vertex of degree $\\Delta$ is at most\n\\begin{equation} \\label{eqn-cols1}\n|V(H)| (\\Delta-1)^{n^{c_2}}\\Delta^{n-1-n^{c_2}} \\leq |V(H)|e^{-n^{c_2}\/\\Delta} \\Delta^{n-1}.\n\\end{equation}\nThe number of $H$-colorings that color $v$ with a vertex of degree $\\Delta$ different from $i$ is at most\n\\begin{equation} \\label{eqn-cols2}\n(|V_{=\\Delta}| -1) \\Delta^{n-1},\n\\end{equation}\nand the number of $H$-colorings that color $v$ with $i$ and $w$ with a color different from $j$ is at most\n\\begin{equation} \\label{eqn-cols3}\n(\\Delta-1)\\Delta^{n-2} = \\left(1-\\frac{1}{\\Delta}\\right) \\Delta^{n-1}.\n\\end{equation}\nFinally, the number of $H$-colorings that color $v$ with $i$ and $w$ with $j$ is at most\n\\begin{equation} \\label{eqn-cols4}\n(\\Delta-1)\\Delta^{n-3} = \\left(\\frac{1}{\\Delta} - 
\\frac{1}{\\Delta^2}\\right) \\Delta^{n-1},\n\\end{equation}\nsince the degree of $j$ is at most $\\Delta-1$ and $w$ is adjacent to a vertex in $V(T) \\setminus (\\{v\\} \\cup N(v))$, which can be colored with one of the at most $\\Delta-1$ neighbors of $j$.\nCombining (\\ref{eqn-cols1}), (\\ref{eqn-cols2}), (\\ref{eqn-cols3}), and (\\ref{eqn-cols4}) we obtain\n\\[\n\\hom(T,H) \\leq \\left( |V_{=\\Delta}| - \\frac{1}{\\Delta^2} + |V(H)|e^{-n^{c_2}\/\\Delta} \\right) \\Delta^{n-1} < |V_{=\\Delta}|\\Delta^{n-1},\n\\]\nthe final inequality valid as long as $n \\geq c_H$.\n\n\n\\section{Proof of Theorem \\ref{thm-2connected}} \\label{sec-2connected}\n\nHere we build on the approach used in the proof of Theorem \\ref{thm-maxfortrees} to tackle the family of $n$-vertex $2$-connected graphs $\\calC_2(n)$.\n\nIn order to proceed, we will need a structural characterization of $2$-connected graphs. In the definition below we abuse standard notation a little, by allowing the endpoints of a path to perhaps coincide.\n\\begin{definition}\nAn \\emph{ear} on a graph $G$ is a path whose endpoints are vertices of $G$, but which otherwise is vertex-disjoint from $G$. An ear is an \\emph{open ear} \nif the endpoints of the path are distinct. \nAn \\emph{(open) ear decomposition} of a graph $G$ is a partition\nof the edge set of $G$ into parts $Q_0, Q_1,\\ldots Q_\\ell$\nsuch that $Q_0$ is a cycle and $Q_i$ for $1 \\leq i \\leq \\ell$ is an (open) ear on $Q_0 \\cup \\cdots \\cup Q_{i-1}$.\n\\end{definition}\n\n\\begin{theorem}[Whitney \\cite{Whitney}]\nA graph is $2$-connected if and only if it admits an open ear decomposition.\n\\end{theorem}\n\nSince removing edges from a graph does not decrease the count of $H$-colorings, we will assume in the proof of Theorem \\ref{thm-2connected} that we are working with a \\emph{minimally $2$-connected graph} $G$, meaning that $G$ is $2$-connected and for any edge $e$ we have that $G-e$ is not $2$-connected. 
A characterization of these graphs is the following, which can be found in \\cite{Dirac}.\n\\begin{theorem} \\label{thm-nochord}\nA $2$-connected graph $G$ is minimally $2$-connected if and only if no cycle in $G$ has a chord.\n\\end{theorem}\n\n\\begin{corollary}\\label{cor-P3}\nLet $G$ be a minimally $2$-connected graph. There is an open ear decomposition $Q_0, \\ldots, Q_\\ell$ of $G$, and a $c$ satisfying $1 \\leq c \\leq \\ell$, with the property that each of $Q_1$ up to $Q_c$ are paths on at least four vertices, and each of $Q_{c+1}$ through $Q_\\ell$ are paths on three vertices with endpoints in $\\cup_{k=1}^c Q_k$.\n\\end{corollary}\n\n\\begin{proof}\nLet $Q_0, \\ldots, Q_\\ell$ be an open ear decomposition of $G$. If any of the $Q_k$, $1 \\leq k \\leq \\ell$, is a path on two vertices, its removal leads to an open ear decomposition of a proper spanning subgraph of $G$, contradicting the minimality of $G$. So we may assume that each of $Q_1$ through $Q_\\ell$ is a path on at least three vertices. We claim that if $Q_i$ is a path on exactly three vertices, then the degree 2 vertex in $Q_i$ also has degree 2 in $G$; from this the corollary easily follows.\n\nTo verify the claim, let $Q_i$ be on vertices $x, a$ and $y$, with $a$ the vertex of degree 2, and suppose, for a contradiction, that there is some $Q_j$, $j > i$, that has endpoints $a$ and $z$ (the latter of which may be one of $x$, $y$). In $\\cup_{k=1}^{i-1} Q_k$ there is a cycle $C$ containing both $x$ and $y$, and in $\\cup_{k=1}^{j-1} Q_k$ there is a path $P$ from $z$ to $C$ that intersects $C$ only once. Now consider the cycle $C'$ that starts at $a$, follows $Q_j$ to $z$, follows $P$ to $C$, follows $C$ until it has met both $x$ and $y$ (meeting $y$ second, without loss of generality), and finishes along the edge $ya$. 
The edge $xa$ is a chord of this cycle, giving us the desired contradiction.\n\\end{proof}\n\nThe next lemma follows from results that appear in \\cite{Engbers}, specifically Lemma 5.3 and the proof of Corollary 5.4 of that reference.\n\n\\begin{lemma}\\label{lem-col paths}\nSuppose $H$ is not $K_{\\Delta,\\Delta}$ or $K_{\\Delta}^{\\circ}$ (the complete looped graph). Then for any two vertices $i$, $j$ of $H$ and for $k\\geq 4$ there are at most $(\\Delta^2-1)\\Delta^{k-4}$ $H$-colorings of $P_k$ that map the initial vertex of the path to $i$ and the terminal vertex to $j$.\n\\end{lemma}\n\n\\begin{remark}\nCorollary 5.4 in \\cite{Engbers} gives a bound of $\\Delta^{k-2}$ for a smaller class of $H$, which is simply for convenience. The proof given actually delivers a bound of $(\\Delta^2-1)\\Delta^{k-4}$ for all $H$ except $K_{\\Delta,\\Delta}$ and $K_{\\Delta}^{\\circ}$. \n\\end{remark}\n\n\\begin{proof}[Proof of Theorem \\ref{thm-2connected}]\n\nWe begin with the proof for a non-regular connected graph $H$, and then consider regular $H$ and describe the necessary modifications needed in the proof.\n\nLet $H$ be non-regular and connected. We will first show that for all sufficiently large $n$ and all minimally $2$-connected graphs $G$ that are different from $K_{2,n-2}$ we have $\\hom(G,H) < \\hom(K_{2,n-2},H)$, and we will then address the cases of equality when $G$ is allowed to be not minimal.\nAn easy lower bound on the number of $H$-colorings of $K_{2,n-2}$ is\n$$\n\\hom(K_{2,n-2},H) \\geq s(H)\\Delta^{n-2} \\geq \\Delta^{n-2},\n$$\nwhere $s(H)$ is the number of ordered pairs $(i,j)$ of (not necessarily distinct) vertices in $V(H)$ with the property that $|N(i) \\cap N(j)|=\\Delta$. 
For the first inequality, consider coloring the vertices in the partition class of size $2$ of $K_{2,n-2}$ with colors $i$ and $j$, and for the second note that the pair $(i,i)$, where $i$ is any vertex of degree $\\Delta$, is counted by $s(H)$.\n\nSuppose that $G$ contains a path on $k\\geq \\ell_H$ vertices (with $\\ell_H$ as given by Lemma \\ref{lem-no long paths}). By first coloring this path and then iteratively coloring the remaining vertices we have (using Lemma \\ref{lem-no long paths}) $\\hom(G,H) < \\Delta^{k-2} \\Delta^{n-k} = \\Delta^{n-2}$. We may therefore assume that $G$ contains no path on $k \\geq \\ell_H$ vertices.\n\nConsider now an open ear decomposition of $G$ satisfying the conclusions of Corollary \\ref{cor-P3}. Since $G$ contains no path on $\\ell_H$ vertices, we have that there is some constant (independent of $n$) that bounds the lengths of each of the $Q_i$, and so $\\ell$, the number of open ears in the decomposition, satisfies $\\ell=\\Omega(n)$.\nWe now show that $c$, the number of paths in the open ear decomposition that have at least four vertices, may be taken to be at most a constant (independent of $n$). Coloring $Q_0$ first, then coloring each of $Q_1$ through $Q_{c}$, and then iteratively coloring the rest of $G$, Lemma \\ref{lem-col paths} yields\n\\begin{eqnarray*}\n\\hom(G,H) & \\leq & |V(H)|\\Delta^{|Q_0|-1}(\\Delta^2-1)^{c} \\Delta^{\\left(\\sum_{i=1}^{c} |Q_i|\\right)-4c}\\Delta^{\\ell-c} \\\\\n& \\leq & \\frac{|V(H)|}{\\Delta}\\left(1-\\frac{1}{\\Delta^2}\\right)^{c} \\Delta^n\n\\end{eqnarray*}\n(noting that $n=|Q_0| + \\left(\\left(\\sum_{i=1}^{c} |Q_i|\\right)-2c\\right) + (\\ell-c)$). 
Unless $c$ is a constant, this quantity falls below the trivial lower bound on $\\hom(K_{2,n-2},H)$ for all sufficiently large $n$.\n\nNow since there are only constantly many vertices in the graph $G'$ with open ear decomposition $Q_0, \\ldots, Q_{c}$, and $G$ is obtained by adding the remaining vertices to $G'$, each joined to exactly two vertices of $G'$, it follows by the pigeonhole principle that for some constant $c'$ \nthere is a pair of vertices $w_1$, $w_2$ in $G$ with at least $c' n$ common neighbors, all among the middle vertices of the $Q_i$ for $i \\geq {c}+1$.\n(Notice that this makes $G$ ``close'' to $K_{2,n-2}$ in the same sense that in the proof of Theorem \\ref{thm-maxfortrees} a tree with a vertex of degree $\\Omega(n)$ was ``close'' to $K_{1,n-1}$.)\n\nWe count the number of $H$-colorings of $G$ by first considering those in which $w_1$ is colored $i$ and $w_2$ is colored $j$, for some pair $i, j \\in V(H)$ with $|N(i) \\cap N(j)| < \\Delta$. There are at most\n\\begin{equation} \\label{small-common-nhood}\n|V(H)|^2(\\Delta-1)^{c' n}\\Delta^{n-2-c' n}\n\\end{equation}\nsuch $H$-colorings. Next, we count the number of $H$-colorings of $G$ in which $w_1$ is colored $i$ and $w_2$ is colored $j$, for some pair $i,j \\in V(H)$ with $|N(i) \\cap N(j)| = \\Delta$. We argue that in $G'$ (the graph with open ear decomposition $Q_0, \\ldots, Q_{c}$) there must be a path $P$ on at least $4$ vertices with endpoints $w_1$ and $w_2$. \nTo see this, note that since $G \\neq K_{2,n-2}$ there must be some vertex $v\\in G'$ so that $v \\neq w_1$, $v \\neq w_2$, and $v \\notin N(w_1) \\cap N(w_2)$. Choose a cycle in $G'$ containing $v$ and $w_1$, and find a path from $w_2$ to that cycle (the path will be trivial if $w_2$ is on the cycle). 
From this structure we find such a path $P$, and it must have at least $4$ vertices as $v \\notin N(w_1) \\cap N(w_2)$.\n\nColoring $G$ by first coloring $w_1$ and $w_2$, then the vertices of $P$, and finally the rest of the graph, we find by Lemma \\ref{lem-col paths} that the number of\n$H$-colorings of $G$ in which $w_1$ is colored $i$ and $w_2$ is colored $j$, for some pair $i,j \\in V(H)$ with $|N(i) \\cap N(j)| = \\Delta$ is at most\n\\begin{equation} \\label{large-common-nhood}\ns(H)\\left(1-\\frac{1}{\\Delta^2}\\right)\\Delta^{n-2}.\n\\end{equation}\nCombining (\\ref{small-common-nhood}) and (\\ref{large-common-nhood}) we have\n\\begin{eqnarray*}\n\\hom(G,H) &\\leq& |V(H)|^2\\Delta^{n-2}\\left(\\frac{\\Delta-1}{\\Delta}\\right)^{c' n} + s(H)\\left(1-\\frac{1}{\\Delta^2}\\right) \\Delta^{n-2}\\\\\n&<& s(H)\\Delta^{n-2}\\\\\n&\\leq& \\hom(K_{2,n-2},H),\n\\end{eqnarray*}\nwith the strict inequality valid for all sufficiently large $n$.\n\n\n\\medskip\n\nWe have shown that for non-regular connected $H$, $K_{2,n-2}$ is the unique minimally $2$-connected $n$-vertex graph maximizing the number of $H$-colorings. We now complete the proof of Theorem \\ref{thm-2connected} for non-regular connected $H$ by showing that if \n$G$ is an $n$-vertex $2$-connected graph that is not minimally $2$-connected then $\\hom(G,H) < \\hom(K_{2,n-2},H)$. Suppose for the sake of contradiction that $\\hom(G,H) = \\hom(K_{2,n-2},H)$. Since deleting edges from $G$ cannot decrease the count of $H$-colorings, the uniqueness of $K_{2,n-2}$ among minimally $2$-connected graphs shows that $K_{2,n-2}$ is the only minimally $2$-connected graph that is a subgraph of $G$, and in particular we may assume that $G$ is obtained from $K_{2,n-2}$ by adding a single edge, necessarily inside one partition class of $K_{2,n-2}$. Since $K_{2,n-2}$ is a subgraph of $G$, all $H$-colorings of $G$ are $H$-colorings of $K_{2,n-2}$.\n\nSuppose that $i$ and $j$ are distinct adjacent vertices of $H$. 
The $H$-coloring of $K_{2,n-2}$ that maps one partition class to $i$ and the other to $j$ must also be an $H$-coloring of $G$, which implies that both $i$ and $j$ must be looped. By similar reasoning, if $i$ and $j$ are adjacent in $H$, and also $j$ and $k$, then $i$ and $k$ must be adjacent. It follows that $H$ is a disjoint union of fully looped complete graphs, and so if connected must be a single complete looped graph (and therefore is regular), which is a contradiction. \n\n\\medskip\n\nNow we turn to regular connected $H$ satisfying $s(H) \\geq 2\\Delta^2+1$ (if $H$ is loopless and bipartite) or $s(H) \\geq \\Delta^2+1$ (otherwise). For these $H$, Lemma \\ref{lem-no long paths} does not apply. We will argue, however, that there is still an $\\ell_H$ such that if $2$-connected $n$-vertex $G$ has an open ear decomposition in which any of the added paths $Q_i$, $i \\geq 1$, has at least $\\ell_H$ vertices then $G$ has fewer $H$-colorings than $K_{2,n-2}$. Once we have established this, the proof proceeds exactly as in the non-regular case.\n\nWe begin by establishing that $G$ has no long cycles; we will use the easy fact that the number of $H$-colorings of a $k$-cycle is the sum of the $k$th powers of the eigenvalues of the adjacency matrix of $H$. By the Perron-Frobenius theorem there is exactly one eigenvalue of the adjacency matrix of $H$ equal to $\\Delta$, and a second one equal to $-\\Delta$ if and only if $H$ is loopless and bipartite, with the remaining eigenvalues having absolute value strictly less than $\\Delta$. It follows that the number of $H$-colorings of a $k$-cycle is, for large enough $k$, less than $(2+\\frac{1}{\\Delta^2})\\Delta^k$ (if $H$ is loopless and bipartite) or $(1+\\frac{1}{\\Delta^2})\\Delta^k$ (otherwise). 
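The spectral fact invoked here is easy to check numerically. The following minimal sketch (ours, not part of the original argument) verifies, for the small test graph $H=K_3$, that a brute-force homomorphism count of the $k$-cycle agrees with $\mathrm{tr}(A^k)$, i.e. with the sum of the $k$th powers of the eigenvalues of the adjacency matrix:

```python
import itertools

def hom_cycle_bruteforce(k, A):
    """Count H-colorings (graph homomorphisms) of the k-cycle C_k into the
    graph with 0/1 adjacency matrix A, by enumerating all vertex maps."""
    n = len(A)
    return sum(
        all(A[f[i]][f[(i + 1) % k]] for i in range(k))
        for f in itertools.product(range(n), repeat=k)
    )

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def hom_cycle_trace(k, A):
    """Same count via hom(C_k, H) = tr(A^k), which equals the sum of the
    k-th powers of the eigenvalues of A -- the fact used in the text."""
    P = A
    for _ in range(k - 1):
        P = mat_mul(P, A)
    return sum(P[i][i] for i in range(len(A)))

# Example: H = K_3 (loopless and 2-regular), whose adjacency eigenvalues
# are 2, -1, -1, so hom(C_k, K_3) = 2^k + 2*(-1)^k.
A_K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

For instance, `hom_cycle_trace(3, A_K3)` recovers the $3!=6$ proper $3$-colorings of a triangle.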
If $G$ has such a cycle then by coloring the cycle first and then coloring the remaining vertices sequentially, bounding the number of options for the color of each vertex by $\\Delta$, we get that the number of $H$-colorings of $G$ falls below the trivial lower bound on ${\\rm hom}(K_{2,n-2},H)$ of $s(H)\\Delta^{n-2}$.\n\nNow if a $2$-connected $G$ has an open ear decomposition with an added path $Q_i$, $i \\geq 1$, on at least $\\ell_H$ vertices, then it has a cycle of length at least $\\ell_H$, since the endpoints of $Q_i$ are joined by a path in $Q_0\\cup \\ldots \\cup Q_{i-1}$.\n\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm-2conn-cycles}} \\label{sec-2conn-cycles}\n\nIn this section we show that for all $q \\geq 3$ and $n \\geq 3$, among $n$-vertex $2$-edge-connected graphs, and therefore among the subfamily of $n$-vertex $2$-connected graphs, the cycle $C_n$ uniquely (up to one small exception) admits the greatest number of proper $q$-colorings (that is, $K_q$-colorings). The result is trivial for $n=3$ and easily verified directly for $n=4$, so throughout we assume $n \\geq 5$.\n\nWe will use the following characterization of $2$-edge-connected graphs, due to Robbins (see Section \\ref{sec-2connected} for the definition of ear decomposition).\n\n\\begin{theorem}[Robbins \\cite{Robbins}]\nA graph is $2$-edge-connected if and only if it admits an ear decomposition.\n\\end{theorem}\n\nNotice that we are trying to show that $C_n$, which is the unique graph constructed with a trivial ear decomposition $Q_0=C_n$, maximizes $\\hom(G,K_q)$. To do this, we aim to show that almost any time a $2$-edge-connected graph contains a cycle and a path that meets the cycle only at the endpoints (which is what results after the first added ear in an ear decomposition for $G \\neq C_n$), we can produce an upper bound on $\\hom(G,K_q)$ which is smaller than $\\hom(C_n,K_q)$; we will then deal with the exceptional cases by hand. 
The proof will initially consider an arbitrary fixed cycle and a path joined to that cycle at its endpoints in $G$ (but independent of a particular ear decomposition of $G$), and will begin by producing an upper bound on $\\hom(G,K_q)$ based on the lengths of the path and cycle. \n\nLet $G \\neq C_n$ be an $n$-vertex $2$-edge-connected graph, and let $C$ be a non-Hamiltonian cycle in $G$ of length $\\ell$.\nSince $\\ell < n$ and $G$ is $2$-edge-connected, there is a path $P$ on at least $3$ vertices that meets $C$ only at its endpoints. \n\nSuppose first that $P$ contains $m+2$ vertices, where $m \\geq 2$ (so there are $m$ vertices on $P$ outside of the vertices of $C$). Color $G$ by first coloring $C$, then $P$, and then the rest of the graph. Using $\\hom(C_{\\ell},K_q) = (q-1)^{\\ell} + (-1)^{\\ell} (q-1)$ and Lemma \\ref{lem-col paths}, we have\n\\begin{eqnarray*}\n\\hom(G,K_q) &\\leq& \\left((q-1)^{\\ell} + (-1)^{\\ell} (q-1)\\right) \\left[ \\left( (q-1)^{2} -1\\right) (q-1)^{m-2} \\right] (q-1)^{n-m-\\ell}\\\\\n&=& (q-1)^n - (q-1)^{n-2} + (-1)^{\\ell} (q-1)^{n-\\ell+1} - (-1)^{\\ell} (q-1)^{n-\\ell-1}\\\\\n&\\leq& (q-1)^n - (q-1)^{n-2} + (q-1)^{n-\\ell+1} - (q-1)^{n-\\ell-1}\\\\\n&\\leq& \\hom(C_n,K_q).\n\\end{eqnarray*}\nSince the last inequality is strict for $\\ell > 3$ and the second-to-last inequality is strict for all odd $\\ell$, the chain of inequalities is strict for all $\\ell \\geq 3$. \nIn other words, if $G$ contains any cycle $C$ and a path $P$ on at least $4$ vertices that meets $C$ only at its endpoints, then $\\hom(G,K_q) < \\hom(C_n,K_q)$.\n\nSuppose next that all the paths on at least three vertices that meet $C$ only at their endpoints have exactly three vertices. 
If $\\ell \\geq 5$, or if $\\ell=4$ and there is such a path that joins two adjacent vertices of the cycle, then using just the vertices and edges of $C$ and one such path $P$ we can easily find a new cycle $C'$\nand a path $P'$ on at least four vertices that only meets $C'$ at its endpoints, and so $\\hom(G,K_q) < \\hom(C_n,K_q)$ as before.\n\nIt remains to find an upper bound for those $2$-edge-connected graphs (apart from $C_n$) that do not have any of the following:\n\\begin{itemize}\n\\item[(a)] a cycle of any length with a path on at least $4$ vertices that only meets the cycle at its endpoints;\n\\item[(b)] a cycle of length at least $5$ and a path on $3$ vertices that only meets the cycle at its endpoints; or\n\\item[(c)] a cycle of length $4$ with a path on $3$ vertices that only meets the cycle at its endpoints, where those endpoints are two adjacent vertices of the cycle.\n\\end{itemize}\n\nRecall that we are assuming $n \\geq 5$. For the remaining graphs, notice that if $Q_0$ is a cycle of length $3$, then $Q_0 \\cup Q_1$, where $Q_1$ is a path on three vertices (by (a) above), must contain a cycle of length $4$. \nIn fact, in this case this cycle will use vertices from both $Q_0$ and $Q_1$.\nTherefore, we need not separately analyze the situation where $C$ is a cycle of length $3$ among the remaining graphs. \nIn particular, this means that there is one final case to consider: $C$ has length $\\ell=4$, and $P$ is a path on $3$ vertices that only meets $C$ at its endpoints, and the endpoints of $P$ are non-adjacent vertices of $C$. In this case, $C$ and $P$ form a copy of $K_{2,3}$, and we color this copy of $K_{2,3}$ first and then the rest of $G$. This gives \n$$\n\\hom(G,K_q) \\leq \\left( q(q-1)^3 + q(q-1)(q-2)^3\\right) (q-1)^{n-5}.\n$$\nFor $n \\geq 6$, some algebra shows that the right-hand side above is strictly less than $\\hom(C_n,K_q)$; the same holds for $n=5$ and $q > 3$. 
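As a sanity check (ours, not part of the proof), the closed-form count $q(q-1)^3 + q(q-1)(q-2)^3$ for $\hom(K_{2,3},K_q)$ and its comparison with $\hom(C_5,K_q)=(q-1)^5-(q-1)$ can be verified by brute force:

```python
import itertools

def hom_count(edges, n_vertices, q):
    """Count homomorphisms into K_q (i.e. proper q-colorings) of the graph
    given by its edge list, by enumerating all colorings."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in itertools.product(range(q), repeat=n_vertices)
    )

# K_{2,3}: vertices 0,1 in one partition class, 2,3,4 in the other.
k23 = [(u, v) for u in (0, 1) for v in (2, 3, 4)]
# C_5: the 5-cycle on vertices 0..4.
c5 = [(i, (i + 1) % 5) for i in range(5)]
```

The check confirms that equality holds exactly at $q=3$, where both counts equal $30$, while for $q>3$ the cycle has strictly more proper colorings.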
For $n=5$ and $q=3$ we have equality, and so in this case $\\hom(G,K_3) \\leq \\hom(C_5,K_3)$ with equality only if $G$ has the same number of proper $3$-colorings as $K_{2,3}$ and has $K_{2,3}$ as a subgraph; this can only happen if $G=K_{2,3}$ (using a similar argument as the one given in the proof of the cases of equality for Theorem \\ref{thm-2connected}).\n\n\\section{Concluding Remarks}\\label{sec-concluding remarks}\nIn this section, we highlight a few questions related to the work in this article.\n\n\\begin{question}\nWhich graphs $H$ have the property that the path on $n$ vertices is the tree that minimizes the number of $H$-colorings?\n\\end{question}\n\nSince the star maximizes the number of $H$-colorings among trees, it is natural to repeat the maximization question among trees with some prescribed bound on the maximum degree. Some results (for strongly biregular $H$) are obtained in \\cite{Ray}.\n\n\\begin{question}\nFor a fixed non-regular $H$ and positive integer $\\Delta$, which tree on $n$ vertices with maximum degree $\\Delta$ has the most $H$-colorings?\n\\end{question}\n\nThere are still open questions about regular $H$ in the family of $2$-connected graphs. In particular, we have seen that $K_{2,n-2}$ and $C_n$ are $2$-connected graphs that have the most $H$-colorings for some $H$. \n\n\\begin{question}\nWhat are the necessary and sufficient conditions on $H$ so that $K_{2,n-2}$ is the unique 2-connected graph that has the most $H$-colorings?\n\\end{question}\n\n\\begin{question}\nAre $C_n$ and $K_{2,n-2}$ the only $2$-connected graphs that uniquely maximize the number of $H$-colorings for some $H$?\n\\end{question}\n\nFinally, it would be natural to try to extend these results to $k$-connected graphs and to $k$-edge-connected graphs. \n\n\\begin{question}\\label{q-kconn}\nLet $H$ be fixed. Which $k$-connected graph has the most $H$-colorings? 
Which $k$-edge-connected graph has the greatest number of $H$-colorings?\n\\end{question}\nSome results related to Question \\ref{q-kconn} for graphs with fixed minimum degree $\\delta$ can be found in \\cite{Engbers}, with $K_{\\delta,n-\\delta}$ shown to be the graph with the most $H$-colorings in this family. We remark that the family of connected graphs with fixed minimum degree $\\delta$ has been considered very recently in \\cite{Engbers1}, where it is shown that for connected non-regular $H$ and all large enough $n$, again $K_{\\delta,n-\\delta}$ is the unique maximizer of the count of $H$-colorings. The question for regular $H$ remains fairly open.\n\n\n\n\n\\section{Introduction}\n\\label{Sec:Intro}\nSurfaces with features from macro~\\cite{Candela} to nanoscale~\\cite{Bhushan} can be found both in\nnature and in industrial products. Such features cause light scattering and diffraction\nand, therefore, they impact the visual appearance of objects. Among the phenomena related to light\nscattering and diffraction there are many non-exotic effects which can be experienced in\neveryday life: reflection of light from, or transmission through, a rough surface~\\cite{Stover,Simonsen2010};\ndiffraction through apertures~\\cite{BornWolf}~(holes in the window blinds) or from or through\na transparent\/opaque material of a particular shape (Mie scattering~\\cite{Cohen1982}); diffraction\nfrom periodic structures (diffraction gratings~\\cite{Loewen1997}, photonic crystals~\\cite{Joannopoulos}).\nIn textbooks each of these phenomena is often considered separately and often restricted to a\nparticular range of parameters. 
For instance, ``diffraction grating'' analysis is almost\nexclusively applied to periodic structures for which the lattice constants are comparable to the\nwavelength of the incident light, which, for visible light, corresponds to roughly \\num{0.5}--\\SI{2}{\\micro\\meter}.\nMeanwhile, the habit of considering only specific cases of light scattering and diffraction for a particular range of\nparameters may lead to an incorrect approach to the characterization of samples and thus to inaccurate or even incorrect \ninterpretation of experimental data.\n\nPeriodic patterns with lattice constants from tens to hundreds of micrometers exhibit hundreds\nto thousands of (propagating) diffraction orders when the sample is illuminated by visible light. Such structures are by no\nmeans efficient diffraction gratings. Yet, as will be shown in this paper, they can still produce a\nnoticeable effect on the optical properties of the samples and thus their visual appearances. Moreover,\nif several feeble optical effects are brought together they can significantly impact the optical\nresponse of the surfaces.\n\nMeasurements of gloss and haze are often the first step in the experimental characterization of the\noptical properties of samples which scatter light due to surface patterns or volume disorder. The simple\nintegral optical properties --- haze and gloss --- make it possible to distinguish the light that is reflected\/transmitted specularly\nby the sample from the light that is scattered diffusely by it.\nHowever, the actual spectral and angular distributions of the intensity of the scattered light can be rather\ncomplex. Despite the convenience of haze and gloss measurements in terms of simplicity and speed,\nsuch integral properties may misguide the interpretation, especially if the measurements contain artifacts related to the diffraction of light.\n\n\\smallskip\nThe rest of this paper is organized in the following way. 
Section~\\ref{Sec:Methods} presents the samples that we will study in this work and the instruments used to perform the angle-resolved intensity measurements. These and other results are presented in Sec.~\\ref{Sec:Results}, where we also discuss and interpret the various features that are present in them. In particular, in this section, we discuss the origin of the different types of circular intensity fringes that the angle-resolved transmitted intensity distributions possess. The haze of the samples, both directly measured and obtained on the basis of the angle-resolved measurements, is also discussed here [Sec.~\\ref{Sec:Results}.\\ref{Sec:Haze}]. Finally, the conclusions that can be drawn from this work are presented in Sec.~\\ref{Sec:Concusions}. \n \n\\section{Materials and Methods}\n\\label{Sec:Methods}\n\n\\subsection{Description of the samples}\n\\label{Sec:samples}\nThe studied samples consist of \\SI[product-units=power]{5 x 5}{\\cm} glass slides covered by an array of cylindrical silica micropillars [Fig.~\\ref{Fig:2}]; all cylinders had height $h=\\SI{10}{\\micro\\meter}$ while their diameters were $d=\\SI{10}{\\micro\\meter}$, \\SI{20}{\\micro\\meter} or \\SI{40}{\\micro\\meter}. Both regular and random arrays were studied. The regular arrays were either hexagonal or square, for which the lattice vectors are $\\vec{a}_1= a\\vecUnit{x}$ [both cases] and $\\vec{a}_2 = (a\/2)[-\\vecUnit{x} + \\sqrt{3}\\vecUnit{y}]$~[hexagonal] or $\\vec{a}_2 = a \\vecUnit{y}$~[square], where $\\vecUnit{x}$ and $\\vecUnit{y}$ are orthogonal unit vectors in the sample plane (the $xy$-plane). The lattice constant $a$ of the regular arrays varied from \\SIrange{20}{80}{\\micro\\meter}~[Fig.~\\ref{Fig:2}]. The surface coverage $\\rho$ is defined as the ratio of the area of the base of one micropillar to the area of a unit cell. 
If the diameter of the micropillars is denoted $d$, one finds that for a hexagonal array \n$\\rho = (\\pi\/2\\sqrt{3})(d\/a)^2$ while for a square array $\\rho=(\\pi\/4)(d\/a)^2$~\\cite{Turbil2016}.\nFor instance, for pillars of diameter \\SI{10}{\\micro\\meter} arranged in a hexagonal array the previously given values of $a$ correspond to the surface coverage between $\\rho=\\SI{1.4}{\\percent}$ ($a=\\SI{80}{\\micro\\meter}$) and \\SI{22.7}{\\percent} ($a=\\SI{20}{\\micro\\meter}$). Random arrays were produced to have the same surface coverage as the regular arrays for the same type of micropillars. To this end, the centers of the pillars were chosen to be uniformly distributed in such a way that two pillars could not be closer to each other than a minimum center-to-center distance (\\SI{20}{\\micro\\meter}). About a \\SI{300}{nm} thick residual silica layer between the glass surface and the base of the pillars provided an adhesion layer for the pillars to the surface of the substrate~[Fig.~\\ref{Fig:2}]. The thickness of the glass substrates was \\SI{2}{mm} for the majority of samples, and \\SI{1}{mm} for samples measured with the multimodal microscope. All the samples were produced in sol-gel silica by the use of the nanoimprinting technique (for detailed description see Ref.~\\cite{Dubov2013}).\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{Figure_1}\n \\caption{Scanning electron microscope image of a cross-section of the sample taken at grazing incidence. The height of all micropillars used in this study was $h=\\SI{10}{\\micro\\meter}$. 
}\n \\label{Fig:2}\n \\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\\subsection{Optical measurements}\n\n\nConventional integral haze and gloss measurements were used in this study together with two\nfacilities for angle-resolved measurements: an OMS4 goniospectrophotometer (commercialized\nby OPTIS) and a home-made multimodal imaging (Mueller) polarimetric microscope.\nBoth angle-resolved setups allow for measurements of the optical intensity response of the\nsample. Since the two instruments are based on very different optical configurations, we\ndecided to use them both to verify the consistency of the results and to identify the presence\nof possible artifacts due to the instrument response functions. Despite the fact that the two\ninstruments also allowed for the measurement of the state of polarization of the transmitted or\nreflected light~\\cite{Yoo2017}, this paper focuses on the angular distribution in the far field of the intensity\ntransmitted through the sample regardless of its polarization.\n\n\nThe integral optical properties, haze and gloss, were obtained by the use of a \n\\textit{hazemeter} (BYK Gardner Haze-gard plus) for measurements in transmission and a\n\\textit{glossmeter} (Enrichsen -- Pico Glossemaster Model 500) for measurements in reflection.\n\nBeing a goniospectrophotometer, OMS4 consists of a sample holder and two\narms, one of which has a set of light sources installed while the other carries a photomultiplier detector. The\nsample holder and detector arm can be moved automatically with the help of one and two\nprecise motors, respectively. 
This makes it possible to scan the whole angular region around the sample and thus\nto measure the angular intensity distributions of the reflected or transmitted light at any angle of incidence in the\nrange of $\\theta_i\\in[\\ang{0}; \\ang{85}]$.\n\nIn this study OMS4 was used in transmission mode to measure the angular distribution of the transmitted intensity; the angular resolution was \\ang{0.5} around the specular direction. Measurements were performed with three coherent laser sources~(RGB) and an incoherent Xenon lamp, with or without color filters. The bidirectional\ntransmittance distribution function~(BTDF)~\\cite{Bartell1981} of the samples was collected for\nthe polar angles of incidence $\\theta_i=\\ang{0}$, \\ang{10}, \\ang{30} and \\ang{60}. A related function, the differential transmission coefficient~(DTC)~\\cite{Hetland}, is obtained by multiplying the BTDF by the cosine of the polar angle of transmission~[$\\cos\\theta_t$].\nThe contour plots of the DTC, obtained in this way from data measured by OMS4, were plotted with the SPEOS software package produced by OPTIS.\n\n\n\\smallskip\nThe main difference between the goniospectrophotometer and the multimodal\npolarimetric microscope is the absence of moving parts in the latter system. The multimodal\nmicroscope can be operated in two imaging modes: real plane and Fourier (or conjugate\nspace) plane imaging. In real plane imaging mode the microscope produces images of the studied\nsample, while in Fourier imaging mode the images correspond to the angular distribution of\nlight transmitted or reflected by the sample. The optical configuration of the multimodal\nmicroscope is sketched in Fig.~\\ref{Fig:3}. The instrument was coupled to a laser emitting green light\nat a wavelength of \\SI{533}{nm} with a spectral width of less than \\SI{2}{nm}. 
Speckle effects due to the\ncoherence of the laser were minimized using a vibrating rough membrane (Laser Speckle\nreducer from Optotune) just in front of the laser source. The microscope was mounted in\nthe transmission configuration; the sample was located between two identical microscope\nobjectives (one for imaging and another one for illumination). The microscope objectives can be\nselected to have different magnifications ($50\\times$, $20\\times$, $10\\times$, and $5\\times$)\ndepending on the required resolution and numerical aperture.\n\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{Figure_2}\n \\caption{Schematic illustration of the multimodal (Mueller) microscope in the\n transmission configuration. The positions of the conjugate images corresponding to the back-focal\n planes~(BFP) of the two objectives, as well as the conjugate planes of the sample in the\n illumination and imaging arm are shown. Also indicated are the positions of the retractable Bertrand lens, the\n light source, and the detection camera.}\n \\label{Fig:3}\n \\end{center}\n\\end{figure}\n\n\n\nThanks to the use of a series of relay lenses, it is possible to create conjugate images of the\nback-focal planes (BFP) of the objectives in both the illumination and imaging parts.\nTherefore, we can insert apertures of different shapes and sizes in the conjugate plane of the BFP of the illuminating\nobjective to simultaneously control the direction and the\nangular aperture of the illuminating beam. 
Analogously, the insertion of a pinhole or polar mask\nat the conjugate plane of the BFP of the imaging microscope objective allows controlling the\ndirection and aperture of the detected scattered beam.\n\n\nThe direction of the illuminating beam is defined by the mean polar angle of incidence\n$\\left<\\theta_i\\right> = \\arcsin(D\/f)$.\nHere $f$ is the focal length of the microscope objective and $D$ the off-axis distance measured from the\ncenter of the pinhole to the optical axis of the microscope. For instance, if a pinhole placed in the plane conjugated to the illuminating objective BFP\nis shifted a given distance from the optical axis, then the sample is\nilluminated at oblique incidence. When the pinhole is aligned with the microscope\noptical axis, the sample is illuminated at normal incidence.\n\n\n\nMoreover, once the average polar angle $\\left<\\theta_i\\right>$ is known, the divergence (div, in radians) of the\nillumination, or alternatively the imaging beam, can be expressed as a function of the\ncorresponding pinhole diameter $\\phi_{\\textrm{pin}}$, the focal length of the microscope objective $f$, and the mean\npolar angle according to:\n\\begin{align}\n \\textrm{div}\n &=\n \\frac{\\phi_{\\textrm{pin}}}{f \\cos \\left<\\theta_i\\right>}.\n \\label{eq:6}\n\\end{align}\nThe relay lens system also provides a conjugate of the object plane (the sample) in both the\nillumination and the imaging arms; therefore, the use of pinholes or polar masks in those\nplanes helps to define the shape and size of the illuminated and imaged area of the sample,\nor, in other words, the field of view (FOV). 
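For illustration, the two relations above translate directly into code; the numerical values below are hypothetical and serve only to exercise the formulas:

```python
import math

def mean_incidence_angle(D, f):
    """Mean polar angle of incidence <theta_i> = arcsin(D/f), where D is the
    off-axis displacement of the pinhole and f the objective focal length."""
    return math.asin(D / f)

def divergence(phi_pin, f, theta_mean):
    """Beam divergence in radians: div = phi_pin / (f * cos(<theta_i>))."""
    return phi_pin / (f * math.cos(theta_mean))

# Hypothetical values: f = 10 mm objective, pinhole displaced by D = 5 mm,
# pinhole diameter 0.2 mm.
theta = mean_incidence_angle(5.0, 10.0)  # arcsin(0.5), i.e. 30 degrees
div = divergence(0.2, 10.0, theta)
```

For example, a pinhole displaced by half the focal length gives $\left<\theta_i\right>=\ang{30}$, and an on-axis pinhole ($D=0$) recovers the familiar small-angle result $\textrm{div}=\phi_{\textrm{pin}}/f$.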
The insertion of a Bertrand lens in the optical\npath of the microscope allows easy switching between the real and the Fourier imaging\nmodes~\\cite{Kurvits2015}.\n\n\n\\section{Results and discussions}\n\\label{Sec:Results}\n\nIn this section we are first concerned with angle-resolved measurements for both the regular and the random arrays of micropillars and with the discussion of the features that such data show. Later we address the integral optical properties of the samples, such as haze and gloss. This will be done by direct measurements of haze, but also by calculations based on the angle-resolved measurements that we performed.\n\n\n\n\\subsection{Angle-resolved measurements}\n\\label{SubSec:3B}\n\nIn standard scatterometry measurements, such as BTDF or DTC measurements, the directions of illumination\nand detection are both selected by moving the goniometric arms on which the source and the\ndetector are mounted, respectively. In the multimodal microscope, however, the control of the\ndirection of the beam in both the illumination and the imaging arm is obtained by the use of pinholes\nplaced at different positions in the conjugate planes~(BFP) of the microscope objective, as\npreviously discussed. These intrinsic differences in how the measurements are performed\nwith the two setups make a comparison of the obtained data very interesting.\n\n\n\nFigure~\\ref{Fig:5} presents contour plots of the angular distribution of the normalized DTCs obtained for a sample of a hexagonal array of micropillars and measured by either the goniospectrophotometer~[Figs.~\\ref{Fig:5}(a)--(b)] or the microscope~[Fig.~\\ref{Fig:5}(c)]. The normalization was done with respect to the maximum value of the angle-dependent DTC, which in our case was found in the direction of specular transmission.\n\nWe start by discussing the measurements performed with the goniospectrophotometer. 
In the outer brown region of Figs.~\\ref{Fig:5}(a) and~\\ref{Fig:5}(b) no measurements were performed, since either the detector arm blocked the source (around $\\phi_t=\\ang{0}$ in Fig.~\\ref{Fig:5}(a)) or the angular region was inaccessible to the detector due to the physical dimensions of the support on which the setup is mounted (around $\\phi_t=\\ang{270}$ in Figs.~\\ref{Fig:5}(a) and \\ref{Fig:5}(b)). The specular direction of transmission in Fig.~\\ref{Fig:5}, and in the experimental results to be presented below, is at $(\\theta_t, \\phi_t)=(\\theta_i,\\phi_i-\\ang{180})=(\\theta_i,\\ang{0})$. The sample consisted of cylindrical pillars of diameter $d=\\SI{10}{\\micro\\meter}$ and the lattice constant was $a=\\SI{30}{\\micro\\meter}$~[$\\rho=\\SI{10}{\\percent}$]. The angles of incidence assumed in obtaining the results in Fig.~\\ref{Fig:5}(a) were $(\\theta_i,\\phi_i)=(\\ang{0},\\ang{180})$ while in Figs.~\\ref{Fig:5}(b)--(c) the angles of incidence were $(\\theta_i,\\phi_i)=(\\ang{30},\\ang{180})$. In obtaining the results in\nFigs.~\\ref{Fig:5}(a)--(b) the illuminating source consisted of a Xenon lamp to which a \\SI{10}{nm}-wide spectral filter centered at \\SI{535}{nm} was applied; a laser source of wavelength $\\lambda=\\SI{533}{nm}$ was used to obtain the results presented in Fig.~\\ref{Fig:5}(c). All the transmitted light, independent of polarization, was detected. It is challenging to accurately align the micro-patterned sample in the macroscopic setup, and therefore the azimuthal angle of incidence may show slight deviations from $\\phi_i=\\ang{180}$. It should be mentioned that for the case of regular arrays of micropillars we tried, in all experiments, to align the samples so that the plane of incidence contains the lattice vector $\\vec{a}_1$. 
\n\n\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{Figure_3}\n \\caption{\n The angular distribution of the DTCs, normalized by their maximum values, and measured with\n (a)~a goniospectrophotometer and (b)~a multimodal microscope in the Fourier plane imaging mode.\n In both cases, the polar angle of incidence was $\\theta_i=\\ang{30}$ and the sample consisted of a hexagonal array of cylindrical micropillars\n of diameter $d=\\SI{10}{\\micro\\meter}$, height $h=\\SI{10}{\\micro\\meter}$, and the lattice constant was $a= \\SI{30}{\\micro\\meter}$.\n The measurements were performed at wavelengths (a)~\\SI{535}{nm} and (b)~\\SI{533}{nm}. In panel~(b) the angular aperture of the\n illuminating beam was \\ang{10}. The angular distribution of the measured DTCs contain three main features (as marked in each of the panels):\n (1)~a specular peak and diffraction orders; (2)~a diffuse ring; and (3)~concentric patterns.\n The inset in panel~(a) shows the details around the specular direction of an angular width of $\\pm\\ang{20}$. The regions of polar angles of transmission over which the measurements were performed were $\\theta_t\\leq\\ang{85}$~[Figs.~\\ref{Fig:5}(a)--(b)] and $\\theta_t\\leq\\ang{45}$~[Fig.~\\ref{Fig:5}(c)].}\n \\label{Fig:5}\n \\end{center}\n\\end{figure}\n\n\n\nSeveral interesting features should be observed from the results presented in Fig.~\\ref{Fig:5}. 
First, specular\npeaks of high transmitted intensity are observed in the measurements at $(\\theta_t,\\phi_t)=(\\theta_i, \\ang{0})$,\nand the transmitted intensities drop off away from this direction.\n\nSecond, the measurements have sufficient angular resolution to allow for the observation of the dense pattern of propagating diffractive orders; this is most apparent from Fig.~\\ref{Fig:5}(a) and the region marked~(1) in Fig.~\\ref{Fig:5}(b) where the diffraction orders appear as small dots.\nThe positions of the propagating diffractive orders are determined by the grating equation which states that the lateral wavevector of transmission is given by\n\\begin{align}\n \\pvec{q}^{(m)} = \\pvec{k}+\\vec{G}_m,\n \\label{eq:grating_equation} \n\\end{align}\nwhere $\\vec{G}_m=m_1\\vec{b}_1+m_2\\vec{b}_2$ is a vector defined in terms of the primitive lattice vectors $\\vec{b}_i$ of the reciprocal lattice, and $m=\\{m_1,m_2\\}$ denotes a set of two integers~[$m_i\\in\\mathbb{Z}$] defining a diffractive order. The primitive lattice vectors $\\vec{b}_j$ are obtained from the relations $\\vec{a}_i\\cdot \\vec{b}_j= 2\\pi \\delta_{ij}$. For the hexagonal array with lattice vectors $\\vec{a}_i$ defined in Sec.~\\ref{Sec:Methods}.\\ref{Sec:samples}, we obtain the primitive lattice vectors $\\vec{b}_1=(2\\pi\/a)[\\vecUnit{x}+\\vecUnit{y}\/\\sqrt{3}]$ and $\\vec{b}_2=(2\\pi\/a)[2 \\vecUnit{y}\/\\sqrt{3}]$.\nThe wavevector $\\pvec{q}^{(m)}$ of the diffractive order characterized by $m$ and the angles $(\\theta_t^{(m)},\\phi_t^{(m)})$ is defined in terms of the angles of transmission as $\\pvec{q}=(\\omega\/c)\\sin\\theta_t(\\cos\\phi_t,\\sin\\phi_t,0)$ with $(\\theta_t,\\phi_t)=(\\theta_t^{(m)},\\phi_t^{(m)})$ where $\\omega\/c=2\\pi\/\\lambda$ is the vacuum wave number. The wavevector $\\pvec{k}$ of the incident light is defined in a similar manner but in terms of the angles of incidence $(\\theta_i,\\phi_i-\\ang{180})$. 
\nThe polar angle of transmission $\\theta_t^{(m)}$ for the diffractive order characterized by $m$ is therefore obtained from the relation $q_\\parallel^{(m)}=(\\omega\/c)\\sin\\theta_t^{(m)}$ and the corresponding azimuthal angle of transmission from the orientation of the unit vector $\\pvecUnit{q}^{(m)}$. \nIn this way we calculated the angular positions of the various diffractive orders indicated in Fig.~\\ref{Fig:5}(a) as black dots; this figure displays a good agreement between the measured and the theoretically predicted angular positions of the diffractive orders. \n\n\n\n\nThird, and probably most unexpected and interesting, are the additional complex angular patterns that are visible in the DTC for non-normal incidence~[Figs.~\\ref{Fig:5}(b)--(c)] and which lie further away from the specular direction. These features are marked~(2) and (3) in Fig.~\\ref{Fig:5}(b). The first of these features refers to the \\textit{circular structure} of high transmitted intensity seen in this figure. In the following we will simply refer to it as a ``halo''. The circular halo is centered at the direction of normal transmission~[$\\theta_t=\\ang{0}$] and it contains the direction of specular transmission $(\\theta_t,\\phi_t)=(\\theta_i,\\ang{0})$. This means that the angular position of the halo is defined by the equation $\\theta_t=\\theta_i$ (which is also the angular radius of the circular structure). This has the consequence that the halo is predicted to disappear, or to coincide with the specular direction, for light that is incident normally onto the micropillar array. \nThat this is indeed the case can be observed from Fig.~\\ref{Fig:5}(a), where the measured data do not show any apparent signs of a halo. 
Further discussion and details about the halo will be provided in Sec.~\\ref{Sec:Results}.\\ref{SubSec:3B1}.\n\n\nThe features marked~(3) in Fig.~\\ref{Fig:5}(b) refer to the ``intensity oscillations'', or \\textit{fringes}, observed in the angular distribution of the DTC in the angular region around the specular direction that is also outside the halo. As will be discussed in greater detail in Sec.~\\ref{Sec:Results}.\\ref{SubSec:3B3}, these fringes belong to two distinct classes of \\textit{concentric patterns}: (i)~those circular fringes that are concentric to the specular direction of transmission and (ii)~those patterns that are concentric to the normal direction of transmission $(\\theta_t,\\phi_t)=(\\ang{0},\\ang{0})$ [the ``origin'']. The former class of fringes is, for instance, readily observed within a \\ang{10} cone around the specular direction of transmission in Figs.~\\ref{Fig:5}(a)--(b), while the latter class is particularly visible for larger polar angles of transmission~[Fig.~\\ref{Fig:5}(a)].\n\n\\smallskip\nWe now turn to the measurement performed by the multimodal microscope. Figure~\\ref{Fig:5}(c) presents the result for the angular distribution of the normalized DTC obtained in this way when light was incident at the polar angle of incidence $\\theta_i = \\ang{30}$ on a hexagonal array of micropillars of the same kind as used to produce the results in Figs.~\\ref{Fig:5}(a)--(b).\nThe microscope measurement was performed for the range of polar angles of transmission $\\theta_t\\leq \\ang{45}$ and all azimuthal angles of transmission $\\phi_t$ with no aperture restriction imposed in the imaging arm. Moreover, the wavelength of the incident light was $\\lambda = \\SI{533}{nm}$. This is \\SI{2}{nm} less than the wavelength used to perform the measurements with the goniospectrophotometer, but we believe that this minor difference in wavelength will not cause any significant difference in the obtained results. 
The four concentric dashed circles seen in Fig.~\\ref{Fig:5}(c) indicate the polar angles of transmission $\\theta_t=\\ang{10}$, \\ang{20}, \\ang{30} and \\ang{40} (from inner to outer). The white circular area in this figure represents the region around the specular direction.\n\n\nThe angular aperture, $\\Theta$, that is, the maximum polar angle that can be measured by the multimodal microscope, is $\\sim\\ang{45}$. This value is set by the numerical aperture~(NA) of the microscope objective used, which in our case had a magnification of $50\\times$ and an NA of \\num{0.85}. The minimum available thickness of the glass substrate on which the cylinders were deposited was \\SI{1}{mm}. Consequently, using objectives with higher NA, and thereby working distances much smaller than \\SI{1}{mm}, was not possible in this study.\n\n\nThe angular dependence of the normalized DTC measured with the multimodal microscope is, for the same angles of incidence, \nsimilar to the same quantity measured with the goniospectrophotometer.\nAn additional similarity between the images taken with the goniospectrophotometer\nand the multimodal microscope is the fact that the diffracted orders are also visible in the\nimage taken with the multimodal microscope as an ensemble of regularly spaced spots (light\nblue) clearly seen on a dark blue background.\nIn the image taken with the goniospectrophotometer the intensity oscillations that are due to scattering are easily seen to be located at polar angles of transmission $\\theta_t >\\theta_i$ (to the right of the specular spot). 
Unfortunately, in the image taken by the multimodal microscope, this angular region of high contrast falls outside the measured area because the polar angle of the incident beam is close to the maximum aperture accepted by the microscope objective used for the measurements.\n\n\n\n\n\n\n\n\\subsubsection{Diffraction orders due to the regular array of pillars}\n\\label{SubSec:3B1}\n\nNow we will more closely inspect the intensities and positions of some of the propagating diffractive orders that the hexagonal arrays of micropillars give rise to. Such results will in part provide information on the quality and consistency of the measurements. The results reported in Fig.~\\ref{Fig:5} were obtained using narrow band sources. Therefore, to facilitate a more direct comparison of the measured results to those predicted from the grating equation, Eq.~\\eqref{eq:grating_equation}, and to study the dependence on the wavelength of the incident light, we performed measurements in the plane of incidence by the use of a laser source of wavelength $\\lambda$ in vacuum.\nFigure~\\ref{Fig:6}(a) presents the angular dependence of the out-of-plane transmitted intensity measured with the goniospectrophotometer for hexagonal arrays defined by three values of the lattice constant $a$ for normal incidence [$(\\theta_i,\\phi_i)=(\\ang{0},\\ang{0})$] and wavelength $\\lambda=\\SI{513}{nm}$. The angular positions of the diffractive peaks are in good agreement with the predictions from the grating equation~\\eqref{eq:grating_equation} indicated by the black vertical dashed lines (for $m=\\{0,m_2\\}$ with $m_2=0,\\pm 1$) in the figure. 
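The predicted order positions follow directly from Eq.~\eqref{eq:grating_equation}. The following is a minimal numeric sketch of this calculation (the function name and the choice of orders are our own, not part of the measurement software); it assumes the hexagonal-lattice reciprocal vectors given in the text, $a=\SI{30}{\micro\meter}$, $\lambda=\SI{513}{nm}$ and normal incidence, as in Fig.~\ref{Fig:6}(a).

```python
import numpy as np

# Angular positions of transmitted diffraction orders from the grating
# equation q_par^(m) = k_par + G_m (Eq. (grating_equation) in the text).
# Assumed parameters: hexagonal lattice, a = 30 um, lambda = 513 nm,
# normal incidence (k_par = 0).
a_lat = 30e-6                     # lattice constant [m]
lam = 513e-9                      # vacuum wavelength [m]
k0 = 2.0 * np.pi / lam            # vacuum wave number omega/c

# Primitive reciprocal-lattice vectors of the hexagonal array (see text)
b1 = (2.0 * np.pi / a_lat) * np.array([1.0, 1.0 / np.sqrt(3.0)])
b2 = (2.0 * np.pi / a_lat) * np.array([0.0, 2.0 / np.sqrt(3.0)])

def transmission_angles(m1, m2, k_par=(0.0, 0.0)):
    """Return (theta_t, phi_t) in degrees for the order m = {m1, m2},
    or None if the order is evanescent (|q_par| > omega/c)."""
    q = np.asarray(k_par) + m1 * b1 + m2 * b2
    q_norm = np.linalg.norm(q)
    if q_norm > k0:
        return None
    theta_t = np.degrees(np.arcsin(q_norm / k0))
    phi_t = np.degrees(np.arctan2(q[1], q[0]))
    return theta_t, phi_t

# Out-of-plane orders m = {0, m2} with m2 = 0, +/-1 (dashed lines in Fig. 6a)
for m2 in (0, 1, -1):
    print((0, m2), transmission_angles(0, m2))
```

For these parameters the lowest out-of-plane orders come out close to $\ang{1.1}$ from the normal, consistent with the densely packed diffraction pattern discussed above.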
Furthermore, the gray shaded region represents the region $\\theta_t\\leq\\ang{2.5}$ used in the definition of haze.\nGood agreement is also obtained between the measured data for $a=\\SI{30}{\\micro\\meter}$ and the result obtained from standard diffraction theory when the polar angle of incidence is changed~[Fig.~\\ref{Fig:6}(b)] or the wavelength of the incident light is changed~[Fig.~\\ref{Fig:6}(c)]. The latter results, presented in these figures as black dashed lines, are obtained on the basis of Eq.~\\eqref{eq:grating_equation}.\nBased on the results in Fig.~\\ref{Fig:6} [and Fig.~\\ref{Fig:5}(a)] we conclude that our angular dependent measurements are able to resolve well the densely packed diffractive orders that the regular arrays give rise to; at least this was the case for the values of the lattice constant $a\\leq \\SI{80}{\\micro\\meter}$ that we considered.\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n\t\t\\includegraphics[width=0.99\\columnwidth]{Figure_4}\n \\caption{The out-of-plane dependence of the DTCs measured by the goniospectrophotometer for hexagonal arrays of micropillars ($d=\\SI{10}{\\micro\\meter}$) illuminated by a laser source of wavelength $\\lambda$. \n (a) The angular dependence of the out-of-plane DTCs for a set of three values of the lattice constant $a$ for $\\theta_i=\\ang{0}$ and $\\lambda=\\SI{513}{nm}$;\n the dependence of the polar angle of transmission $\\theta_t^{(m)}$ for $m=\\{0,m_2\\}$ (out-of-plane) obtained experimentally (filled symbols) as a function of (b)~the polar angle of incidence $\\theta_i$ for $\\lambda=\\SI{513}{nm}$; or (c)~the wavelength $\\lambda$ for normal incidence [$\\theta_i =\\ang{0}$]. 
In panels~(b) and (c) the lattice constant was $a=\\SI{30}{\\micro\\meter}$ and $m_2=0,\\pm 1$.\n The plane of incidence was chosen so that the lattice vector $\\vec{a}_1$ lies in this plane.\n The dashed black lines represent the theoretical values for the polar angle of diffraction $\\theta_t^{(m)}$ obtained on the basis of Eq.~\\protect\\eqref{eq:grating_equation}. The shaded gray areas in panel~(a) represent the regions for which $\\theta_t\\leq \\ang{2.5}$. } \n \\label{Fig:6}\n \\end{center}\n\\end{figure}\n \n\\subsubsection{The diffuse ring or halo: impact of the polar angle of incidence}\n\n\nNext we turn to a discussion of the physical origin and the properties of the halo that is so distinctly present in Fig.~\\ref{Fig:5}. To this end, we first investigate whether the halo is exclusive to the regular arrays of micropillars or whether it is also present in the light transmitted through random arrays of micropillars of similar surface densities. Second, we study how the intensity at the position of the halo $\\theta_t=\\theta_i$ varies with azimuthal angle when we change the polar angle of incidence $\\theta_i$. Figure~\\ref{Fig:8} presents contour plots for the angular distribution of the DTCs, measured with the goniospectrophotometer, and normalized by their maximum values for a set of polar angles of incidence. The samples were patterned by either a random array of micropillars or a hexagonal array of micropillars. All the micropillars were nominally identical, characterized by the diameter $d=\\SI{10}{\\micro\\meter}$. Both samples were produced to have a surface coverage of about \\SI{10}{\\percent}, which for the hexagonal array corresponds to the lattice constant $a=\\SI{30}{\\micro\\meter}$. The source used to illuminate these samples was unpolarized light from a Xenon lamp, filtered at the center wavelength $\\lambda=\\SI{535}{nm}$ by a window of \\SI{10}{nm} spectral width. 
The subplots of Fig.~\\ref{Fig:8} correspond to the polar angles of incidence $\\theta_i=\\ang{10}$, \\ang{30} and \\ang{60} and in all cases the azimuthal angle of incidence was $\\phi_i=\\ang{180}$~(top-down in Fig.~\\ref{Fig:8}). As expected, the measured data show a reflection symmetry with respect to the plane of incidence. Therefore, for reasons of convenience, we have combined the presentation of the results for the random and regular arrays of the same surface coverage. This is done in such a way that the upper halves of the subplots in Fig.~\\ref{Fig:8} [$\\ang{0}<\\phi_t<\\ang{180}$] represent the normalized DTCs for the random arrays, while the lower halves [$\\ang{180}<\\phi_t<\\ang{360}$] present the corresponding results for the regular array. The various insets in Fig.~\\ref{Fig:8} present the details in an angular region around the direction of specular transmission. The first striking observation to be made from the results presented in Fig.~\\ref{Fig:8} is how similar the angular intensity distributions for the random and the regular arrays of micropillars are. Of course, the transmitted intensity distributions for the random array do not display diffractive orders as is the case for the intensity distributions for the regular array. Hence, it is the intensity envelopes of the latter data sets that are similar to the corresponding intensity distributions obtained for the random array. \n\nBefore further discussing the several interesting features that can be observed in the measurements reported in Fig.~\\ref{Fig:8}, we present additional measurements obtained by the multimodal microscope in Fig.~\\ref{Fig:9} for the same random array used in obtaining the results presented in Fig.~\\ref{Fig:8}. 
In particular, Fig.~\\ref{Fig:9} shows results for polar angles of incidence $\\theta_i=\\ang{0}$, \\ang{10}, \\ang{20} and \\ang{30} and without any angular restriction in the\ncollection arm of the microscope [\\textit{i.e.} $\\theta_t\\leq\\ang{45}$]. Again one finds that the intensity patterns measured with the microscope agree rather well with the corresponding results obtained with the goniospectrophotometer.\nIt should be remarked that, located at the lower right edges of Figs.~\\ref{Fig:9}(a)--(c), one can see a few irregular spots~(in yellow). We believe these features to be due to contamination present on the surfaces of the optical elements of the microscope; they are only visible when the intensity measurement requires an extremely high dynamic range. These irregular features are therefore artifacts and must not be considered as part of the light scattered by the sample.\n\nThe results presented in Figs.~\\ref{Fig:8} and \\ref{Fig:9} show explicitly that halos are present in the angular distribution of the transmitted intensities for \\textit{both} regular and random arrays of micropillars if $\\theta_i\\neq \\ang{0}$. Furthermore, for both array types we find that the polar angle of transmission defining the halo is related to the polar angle of incidence by the relation $\\theta_t=\\theta_i$, which, for our geometry, also is the polar angle of the specular direction of transmission. These results suggest that the presence of the halo is related to individual micropillars and their cylindrical shape rather than to how they are arranged on the surface of the substrate.\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.75\\columnwidth]{Figure_5}\n \\caption{The angular dependence of the DTCs for the polar angles of incidence (a)~$\\theta_i=\\ang{10}$, (b)~\\ang{30} and (c)~\\ang{60} measured with the goniospectrophotometer and normalized by their maximum values (found in the specular direction). 
The insets zoom in on $\\pm\\ang{15}$ angular regions around the specular directions. The samples consisted of a random array of micropillars~(upper halves) or a hexagonal regular array of micropillars~(lower halves). All micropillars were characterized by a diameter and a height of $d=\\SI{10}{\\micro\\meter}$ and $h=\\SI{10}{\\micro\\meter}$, respectively. The surface coverage of the two arrays was approximately \\SI{10}{\\percent}, which for the regular array corresponds to the lattice constant of $a=\\SI{30}{\\micro\\meter}$. In the case of the regular array, one aimed for the plane of incidence to be aligned with one of the lattice vectors $\\vec{a}_i$ ($i=1,2$). The source of the incident light was a Xenon lamp filtered around the center wavelength $\\lambda=\\SI{535}{nm}$ by a window of width \\SI{10}{nm}. The dashed grid circles are placed at multiples of $\\theta_t=\\ang{10}$.} \n \\label{Fig:8}\n \\end{center}\n\\end{figure}\n\n\n\n\n\n\\smallskip\nThe origin of the halo can in fact be understood within the framework of either the extended Mie theory for non-spherical particles~\\cite{Cohen1982} or the Debye series approach~\\cite{Xu2010}. According to the latter formalism, the origin of the halo can be attributed to rays which have been directly transmitted \nthrough the microsized cylinders (like in an optical fiber), or which have been directly reflected (or scattered) by the outer surface of the cylinders.\nA detailed analysis of the halo and its origin will be presented in a separate publication using polarization dependent measurements. 
That study, based on polarization-sensitive measurements, concludes that the halo is a consequence of reflection and\/or transmission of light by \\textit{individual} cylinders, and not due to, for instance, multiple reflections or the excitation of leaky guided modes in the glass slide (substrate).\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{Figure_6}\n \\caption{The far-field distribution of the normalized total intensity transmitted\n through a glass substrate patterned with a random array of micropillars of a surface coverage of \\SI{10}{\\percent}. \n The measurements were performed by the multimodal microscope in the Fourier imaging mode.\n The polar angle of incidence of the illuminating beam was \n (a) $\\theta_i=\\ang{0}$; (b) \\ang{10}; (c) \\ang{20}; and (d) \\ang{30}.\n The angular aperture of the illumination beam was set to \\ang{10} in all cases and the measurements were performed for the wavelength $\\lambda=\\SI{533}{nm}$.}\n \\label{Fig:9}\n \\end{center}\n\\end{figure}\n\n\n\n \n\n\n\\subsubsection{Concentric circular fringes}\n\\label{SubSec:3B3}\n\nDuring the discussion of the results presented in Figs.~\\ref{Fig:5}(a)--(b), we observed two classes of concentric circular intensity patterns; \\textit{class~1 fringes} are concentric about the direction defined by $\\theta_t=\\ang{0}$ while \\textit{class~2 fringes} are centered around the direction of specular transmission $(\\theta_t,\\phi_t)=(\\theta_i,\\ang{0})$. The azimuthally symmetric circular fringes that are centered at $\\theta_t=\\ang{0}$ are readily observed in Fig.~\\ref{Fig:10}(a) for sufficiently large values of $\\theta_t$; these are the class~1 fringes for this configuration. 
In particular, Fig.~\\ref{Fig:10}(a) presents the angle-resolved DTCs for normal incidence obtained for two square arrays of micropillars, where the micropillars of the first array had diameter $d=\\SI{40}{\\micro\\meter}$~(upper half) and those of the second $d=\\SI{20}{\\micro\\meter}$~(lower half). The lattice constants were rather different for the two arrays and defined by $a=2d$. In both cases the heights of the micropillars were $h=\\SI{10}{\\micro\\meter}$. The surface coverage of square lattices is given by $\\rho =(\\pi\/4)(d\/a)^2$ and hence the two square arrays had the same surface coverage~[$\\rho=\\pi\/16$]. A careful inspection and comparison of the results in the upper and lower halves of Fig.~\\ref{Fig:10}(a) reveals that the (class~1) fringes for the two samples are rather similar even though the size of the micropillars and the lattice constants were different for the two samples. Furthermore, when the illumination is oblique, the azimuthal symmetry of these fringes is lost, as seen in Fig.~\\ref{Fig:8}, for instance, but the fringes remain centered at $\\theta_t=\\ang{0}$. It should be noted that class~1 fringes are also present in the intensity of the light transmitted through random arrays of micropillars~[Fig.~\\ref{Fig:8}; upper halves]. In particular, for normal and oblique incidence, for random and regular arrays of micropillars, and for different sizes of micropillars, we find that the polar angle separation between two consecutive maxima of the fringes is of the order \\ang{10}.\n\n\nThese results seem to rule out a specific size of the micropillars, or the sample being periodic or random, as the main reason for the fringe pattern that we observe. Instead we believe that the origin of the class~1 fringes in the transmitted intensities can be attributed to guiding of light inside the sample. 
To support this interpretation we conducted an additional experiment, in which we remeasured the angle-resolved DTC for normal incidence for the sample used to produce the results in Fig.~\\ref{Fig:5}~[hexagonal array of micropillars defined by $d=\\SI{10}{\\micro\\meter}$ and $a=\\SI{30}{\\micro\\meter}$]. However, in this additional experiment, the front and back sides of the sample were each covered by a piece of black paper with a hole of \\SI{3}{cm} diameter in its center, aligned with the incident beam. The diameter of the holes was chosen to be larger than the full width of the incident beam in order to allow the transmission of the direct beam, but, at the same time, block the light presumably partially guided inside the sample. By partially we mean that light is not perfectly confined inside the glass substrate as it would be in an ideal waveguide. The guided light modes in the glass are converted to radiative modes because the faces of the substrate are neither infinite nor perfectly smooth. Roughness and micropillars contribute to converting guided modes to radiative ones. The results for the angular distribution of the DTCs, with and without the black cover paper (on both the front and back sides), are presented in Fig.~\\ref{Fig:10}(b) as the upper and lower halves, respectively. A comparison of these two data sets shows clearly that the class~1 fringes vanish, or at least are significantly suppressed, when the sample has the black front and back covers. This finding we take as a direct confirmation that guiding of light in the sample is what causes the class~1 circular fringes. \n\n\n\n\n\\smallskip\nAdditional fringes, of another type than those we just discussed, are observed in the intensity transmitted through regular or random arrays of micropillars; they are concentric about the direction of specular transmission. 
For instance, in the insets to Figs.~\\ref{Fig:8}(a) and \\ref{Fig:10}(a), showing the details around the specular direction of transmission, they are seen for smaller values of $\\theta_t-\\theta_i$ (and $\\phi_t-\\phi_i-\\ang{180}$), and these fringes are examples of the class~2 fringes. Note that such fringes are also seen in Fig.~\\ref{Fig:5}(b). In the measured data the class~2 fringes are superimposed on the class~1 fringes, so that it can be challenging to distinguish them in certain cases. However, the frequency of the class~2 fringes is typically found to be significantly higher than that of the class~1 fringes. The class~2 fringes do depend on the size of the micropillars, in contrast to what we found for the class~1 fringes. For instance, the upper and lower halves of Fig.~\\ref{Fig:10}(a) compare DTCs for regular arrays of micropillars of different sizes (and lattice constants); the diameters of the micropillars were $d=\\SI{40}{\\micro\\meter}$ and $d=\\SI{20}{\\micro\\meter}$, respectively. From the inset to this figure it is found that the polar angle separation between consecutive fringes close to the specular direction is smallest for the array consisting of micropillars of the largest diameter. Furthermore, the class~2 fringes, like their class~1 counterparts, can be observed for both regular and random arrays of micropillars. In Fig.~\\ref{Fig:8}(b), the class~2 fringes are observed clearly as oscillations along the halo for both the random and regular array, and the angular distance between them is of the same order in both cases. These results we take as indications that the class~2 fringes are related to the size of the micropillars, not to how they are organized along the surface of the substrate.\n\nWe rationalize the results for the class~2 fringes for normal incidence presented in Figs.~\\ref{Fig:10}(a)--(b) in the following way. 
The normally incident light of wavelength $\\lambda$ is assumed to couple into the micropillars of diameter $d\\sim 10\\lambda$. This guided light will be radiated into the glass slide by the open-ended circular waveguide, giving rise to an Airy-like diffraction pattern predicted by Fraunhofer diffraction~\\cite{BornWolf}. Taking a circular aperture, for simplicity, the\nexpression for the normalized transmitted intensity distribution of the diffracted light reads~\\cite{BornWolf}\n\\begin{align}\n \\bar{I}(\\theta_t)\n &=\n \\left(\n \\frac{ 2 \\, \\mathtt{J}_1\\!\\!\\left(\\pi\\frac{d}{\\lambda}\\sin\\theta_t \\right)\n }{\n \\pi\\frac{d}{\\lambda}\\sin\\theta_t\n }\n \\right)^2,\n \\label{eq:4}\n\\end{align}\nwhere $\\mathtt{J}_1$ denotes the Bessel function of the first kind and order one, and $d$ denotes the diameter of the aperture that we will set equal to the diameter of the micropillars~\\cite{BornWolf}. Such normalized intensity distributions are shown in Fig.~\\ref{Fig:10}(c) for the diameters $d=\\SI{10}{\\micro\\meter}$, \\SI{20}{\\micro\\meter} and \\SI{40}{\\micro\\meter}. The frequency of the oscillations observed in the transmitted intensities in the region around the specular direction of transmission is clearly different for the two samples considered in Fig.~\\ref{Fig:10}(a), and the trend that one finds is in agreement with the prediction of Eq.~\\eqref{eq:4} [see also Fig.~\\ref{Fig:10}(c)]. This demonstrates the high sensitivity of the angular position and intensity of the oscillations to the size of the cylinders, in good agreement with the predictions made on the basis of the Fraunhofer diffraction formalism. 
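Equation~\eqref{eq:4} is straightforward to evaluate numerically. The sketch below is our own illustration (with $\mathtt{J}_1$ computed from its integral representation so the snippet stays self-contained); it shows how the angular scale of the Airy pattern shrinks as the pillar diameter grows.

```python
import numpy as np

# Normalized Fraunhofer (Airy) intensity of Eq. (4):
#   Ibar(theta_t) = (2 J1(x) / x)^2,  with x = pi (d/lambda) sin(theta_t).
# J1 is evaluated from its integral representation; lambda = 535 nm and the
# diameters of Fig. 10(c) are assumed.
LAM = 535e-9  # wavelength [m]

def bessel_j1(x):
    """J1(x) via (1/pi) * integral_0^pi cos(tau - x sin(tau)) dtau
    (trapezoidal rule on a fine grid)."""
    tau = np.linspace(0.0, np.pi, 2001)
    f = np.cos(tau - x * np.sin(tau))
    return float(np.sum(0.5 * (f[:-1] + f[1:])) * (tau[1] - tau[0]) / np.pi)

def airy_intensity(theta_deg, d, lam=LAM):
    """Normalized diffracted intensity for an aperture of diameter d."""
    x = np.pi * (d / lam) * np.sin(np.radians(theta_deg))
    if abs(x) < 1e-9:
        return 1.0  # limit of (2 J1(x)/x)^2 as x -> 0
    return (2.0 * bessel_j1(x) / x) ** 2

# Angular radius of the first dark ring: sin(theta_t) ~ 1.22 lambda / d
for d in (10e-6, 20e-6, 40e-6):
    theta_dark = np.degrees(np.arcsin(1.22 * LAM / d))
    print(f"d = {d * 1e6:.0f} um: first minimum near {theta_dark:.2f} deg")
```

Since the first dark ring sits at $\sin\theta_t\approx 1.22\lambda/d$, doubling $d$ halves the fringe spacing, in line with the trend seen in Figs.~\ref{Fig:10}(a) and (c).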
A direct comparison of the measurements and the Airy patterns is not straightforward for samples with a regular array of pillars, since the Airy pattern is modulated by the presence of diffraction orders.\n\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{Figure_7}\n \\caption{Angle-resolved DTCs for (a)~square or (b)~hexagonal arrays of supported micropillars obtained for normal incidence~[$\\theta_i=\\ang{0}$]. \n The illumination was done with a Xenon lamp filtered at the center wavelength $\\lambda=\\SI{535}{nm}$ by a window of width \\SI{10}{nm}.\n (a)~The diameters of the micropillars and the lattice constants of the square arrays were $d=\\SI{40}{\\micro\\meter}$ and $a=\\SI{80}{\\micro\\meter}$ (upper half); and \n $d=\\SI{20}{\\micro\\meter}$ and $a=\\SI{40}{\\micro\\meter}$~(lower half). (b) A hexagonal array of micropillars defined by $d=\\SI{10}{\\micro\\meter}$ and $a=\\SI{30}{\\micro\\meter}$ measured with~(upper half) and without~(lower half) a black cover on both the front and back sides. Each cover had a hole of \\SI{3}{cm} in diameter that was large enough not to clip the incident beam and was aligned with it. (c) Airy patterns from Eq.~\\eqref{eq:4} for circular apertures of diameter\n \\SI{10}{\\micro\\meter}, \\SI{20}{\\micro\\meter} and \\SI{40}{\\micro\\meter}, respectively.}\n \n \\label{Fig:10}\n \\end{center}\n\\end{figure}\n\n\\subsection{Haze and gloss}\n\\label{Sec:Haze}\n\n\nTwo of the most commonly used integral optical measurements are haze and gloss. \nAccording to the two standards, ISO 14782:1999~\\cite{ISO1} and ASTM D1003-13~\\cite{ASTM1}, haze is defined as the percentage of\ntransmitted intensity, passing through a specimen, which deviates from the specular direction of transmission by more\nthan 0.044 rad (\\ang{2.5})~\\cite{Simonsen2009}. 
Similarly, gloss is defined as the total intensity scattered inside a small angular region about the specular direction normalized by the intensity that is scattered by a standard sample; this ratio defines the ``gloss unit'' scale. According to the standards\nISO~2813:2014~\\cite{ISO2} and ASTM~D523-14~\\cite{ASTM2}, gloss can be measured at three different polar angles of incidence: $\\theta_i=\\ang{20}$, \\ang{60} and \\ang{80}~\\cite{Simonsen2005}.\n\n\nWe will now discuss the haze and gloss values that can be obtained for our supported regular and random arrays of micropillars. The gloss measurements that we report were performed in reflection for a polar angle of incidence of $\\theta_i=\\ang{20}$, while haze was measured in transmission for normal incidence~[$\\theta_i=\\ang{0}$].\nFigure~\\ref{Fig:4} presents the experimental values obtained for haze and gloss~(filled symbols) as functions of the surface coverage of the supported arrays by the use of a hazemeter and a glossmeter. These results were obtained by keeping the diameter of the pillars constant at $d=\\SI{10}{\\micro\\meter}$ and changing the lattice constant of the hexagonal array between $a=\\SI{20}{\\micro\\meter}$ and \\SI{80}{\\micro\\meter}.\nWe also performed measurements of haze and gloss for some samples of random arrays of micropillars [open symbols in Fig.~\\ref{Fig:4}], and only minor differences were found between the values of these quantities obtained for regular and random arrays of the same surface coverage. From the results presented in Fig.~\\ref{Fig:4} it is observed that the overall trend is that haze (in transmission) is an increasing (and approximately linear) function of the surface coverage while gloss decreases as a \nfunction of the same quantity. The increase in haze with surface coverage indicates an increase in the amount of light transmitted away from the specular direction by more than \\ang{2.5}. 
This observation is consistent with the decreasing level of measured gloss for the same samples. It is tempting to interpret this observation as a decrease in the specular component of the transmitted intensity with increasing surface coverage, while, at the same time, the diffuse component increases.\n\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{Figure_8}\n \\caption{Measured haze and gloss values for hexagonal arrays of micropillars~(filled symbols) or random arrays of micropillars~(open symbols). The gloss values were measured with the glossmeter~(Enrichsen -- Pico Glossemaster Model 500) in reflection at $\\theta_i=\\ang{20}$. The haze values were obtained by a hazemeter~(BYK Gardner Haze-gard plus) in transmission at $\\theta_i=\\ang{0}$. The diameter and height of all micropillars were both \\SI{10}{\\micro\\meter}.}\n \\label{Fig:4}\n \\end{center}\n\\end{figure}\n\n\n\nWhen the angular dependency of the DTC is integrated over the full solid angle of transmission~[$\\mathrm{d}\\Omega_t=\\sin\\theta_t\\,\\mathrm{d}\\theta_t\\mathrm{d}\\phi_t$], the transmittance of the sample is obtained~\\cite{Hetland}. To calculate the haze of a sample from the angular distribution of the DTC measured for a given angle of incidence, one proceeds in the following way. First one performs a solid angle integration of the DTC over an angular region around the specular direction of transmission $(\\theta_t,\\phi_t)=(\\theta_i,\\phi_i-\\ang{180})$ defined by $|\\theta_t -\\theta_i| \\leq \\Delta\\theta_t$. For given angles of incidence, the haze of the sample is calculated by first dividing the result obtained in this way by the transmittance and then subtracting the resulting ratio from unity~(see Ref.~\\cite{Simonsen2009}).\nWe have chosen $\\Delta\\theta_t=\\ang{2.5}$ to be consistent with the standard, and what is assumed in the construction of the hazemeter. 
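The procedure just outlined can be summarized in a short numeric sketch. The DTC used below is a synthetic stand-in (a narrow specular lobe on a weak diffuse background, with amplitudes chosen purely for illustration), not measured data; only the integration logic mirrors the calculation described in the text.

```python
import numpy as np

# Haze from an angle-resolved DTC, following the procedure in the text:
# integrate over the solid angle dOmega = sin(theta_t) dtheta_t dphi_t, once
# over the full hemisphere (the transmittance) and once over the cone
# |theta_t - theta_i| <= Delta_theta_t about the specular direction;
# haze = 1 - cone / total.
theta = np.radians(np.linspace(0.0, 90.0, 901))   # polar angle grid
phi = np.radians(np.linspace(0.0, 360.0, 721))    # azimuthal angle grid
TH, PH = np.meshgrid(theta, phi, indexing="ij")

theta_i = 0.0  # normal incidence, as in the haze measurements

# Synthetic DTC (hypothetical): narrow specular lobe + weak diffuse background
dtc = np.exp(-((TH - theta_i) / np.radians(1.0)) ** 2) + 1e-5

def solid_angle_integral(f, mask=None):
    """Riemann sum of f over the transmission hemisphere."""
    w = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    if mask is not None:
        w = w * mask
    return float(np.sum(f * w))

def haze(delta_theta_deg):
    cone = np.abs(TH - theta_i) <= np.radians(delta_theta_deg)
    return 1.0 - solid_angle_integral(dtc, cone) / solid_angle_integral(dtc)

print(f"haze(Delta = 2.5 deg): {haze(2.5):.3f}")
print(f"haze(Delta = 1.0 deg): {haze(1.0):.3f}")
```

Shrinking $\Delta\theta_t$ from \ang{2.5} to \ang{1} moves part of the near-specular light into the diffuse category and thus raises the computed haze, which is the effect examined in Fig.~\ref{Fig:7}.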
\n\n \nFigure~\\ref{Fig:7} compares the haze values calculated in this way from the angle-resolved DTC measurements with the values obtained by direct haze measurements performed on the same samples of hexagonal arrays in transmission and at normal incidence; in such calculations, the missing data points were set to zero. One finds good agreement between the haze values measured directly and the values calculated from the measured angle-resolved DTC data. This testifies to the consistency of the angle-resolved measurements and their normalization. For instance, the haze values calculated from the angle-resolved DTC data are \\SI{18}{\\percent} for lattice constant $a=\\SI{20}{\\micro\\meter}$ (or surface coverage $\\rho=\\SI{23}{\\percent}$); \\SI{9}{\\percent} for $a=\\SI{30}{\\micro\\meter}$ [$\\rho=\\SI{10}{\\percent}$]; and \\SI{2}{\\percent} for $a=\\SI{60}{\\micro\\meter}$ [$\\rho=\\SI{3}{\\percent}$]. These values should be compared to the corresponding measured haze values for the same samples, which are \\SI{20}{\\percent}, \\SI{8}{\\percent} and \\SI{4}{\\percent}, respectively. An increase of the lattice constant $a$, or equivalently, a decrease of the surface coverage $\\rho$, causes more of the diffracted orders to end up inside a cone of angular width $\\Delta\\theta_t$ about the specular direction, and hence the value of haze to drop. Yet, the transmission efficiencies of individual diffraction orders are naively expected to decay with increasing lattice constant and increasing order. Therefore, for a sufficiently large lattice constant, a further increase will only marginally affect the resulting haze value. However, for smaller lattice constants, for which only a few diffracted orders fall inside $\\Delta \\theta_t < \\ang{2.5}$, this is no longer the case. Therefore, from haze values measured with a hazemeter alone, we are not able to distinguish a regular array from a random array if the surface coverage is sufficiently large.
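The geometric part of this argument can be made concrete with the one-dimensional grating relation $\\sin\\theta_m=m(\\lambda\/a)$ used later in this section: counting how many propagating orders fall inside the acceptance cone shows the trend. The sketch below is a 1D simplification of our hexagonal arrays, with illustrative parameter values:

```python
import math

def orders_in_cone(a_um, wavelength_um=0.5, half_angle_deg=2.5):
    """Count propagating 1D-grating orders, total and inside the cone.

    Propagating orders satisfy |m|*lam/a <= 1; orders counted as
    "specular" by a hazemeter satisfy |sin(theta_m)| <= sin(half_angle),
    i.e. |m| <= a*sin(half_angle)/lam.
    """
    n_total = 2 * math.floor(a_um / wavelength_um) + 1
    s = math.sin(math.radians(half_angle_deg))
    n_inside = 2 * math.floor(a_um * s / wavelength_um) + 1
    return n_total, n_inside
```

For $\\lambda=\\SI{500}{nm}$, going from $a=\\SI{20}{\\micro\\meter}$ to $a=\\SI{60}{\\micro\\meter}$ raises the number of orders inside the \\ang{2.5} cone from 3 to 11, in line with the drop of the calculated haze with increasing lattice constant.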
For reasons of comparison, Fig.~\\ref{Fig:7} also reports results for the calculations assuming $\\Delta\\theta_t=\\ang{1}$, a value suggested as more realistic in a recent study on the angular width of specular beams~\\cite{Leloup2016}. As expected, when using this value for $\\Delta\\theta_t$, the calculation procedure results in values that are higher than the directly measured haze values for the same sample. However, the interesting observation is not that one gets larger values, but how much larger the obtained values are. For small values of the surface coverage, or larger lattice constants, the difference between the results obtained when using these two values for $\\Delta\\theta_t$ in the calculation of haze is not very dramatic. However, for the larger values of the surface coverage the differences increase. \n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{Figure_9}\n \\caption{Comparison of the haze values for hexagonal arrays obtained with the hazemeter (direct measurements) and those calculated from angle-resolved DTC data as functions of the surface coverage. The DTC data used in obtaining these results were measured at normal incidence by the use of the goniospectrophotometer. The measurement configuration was identical (except for the angles of incidence) to what was used in obtaining the results in Fig.~\\ref{Fig:5}(a). The way that haze was calculated from such data is described in the main text. The value of the polar angular interval $\\Delta\\theta_t$ assumed in such calculations is given in the legend. It is noted that the value $\\Delta\\theta_t=\\ang{2.5}$ corresponds to the norm of haze measurements.
The diameter and height of all micropillars were both \\SI{10}{\\micro\\meter}.}\n \\label{Fig:7}\n \\end{center}\n\\end{figure}\n\n\\smallskip\nWhen the visual aspect of an object is to be taken into account, the particularities of the human eye, rather than those of an artificial detector, must be considered. For instance, the angular resolution of the human eye is about \\ang{0.03}~\\cite{Zettler1976}.\nIf haze is intended to quantify the fraction of transmitted intensity that is transmitted away from the specular direction (of transmission), the use of the value $\\Delta \\theta_t=\\ang{2.5}$, which is well suited to a number of industrial needs, is far too large when it comes to discussing human visual perception of the optical response of 1D or 2D gratings.\n\n For the sake of illustration, let us restrict ourselves to a one-dimensional grating at normal incidence for which the polar angle of the propagating diffracted orders in transmission is given by $\\sin\\theta_m=m(\\lambda\/a)$ with $m$ an integer ($m\\in\\mathbb{Z}$), where $m=0$ corresponds to specular transmission. For this system, the number of propagating diffracted orders in transmission is $N = 2 \\left \\lfloor{\\frac{a}{\\lambda}}\\right \\rfloor + 1$ (red curve in Fig.~\\ref{Fig:1}),\nwhere $\\left \\lfloor{x}\\right \\rfloor$ denotes the floor function, which returns the greatest integer less than or equal to its argument $x$. At normal incidence the first two diffracted orders are symmetric about the specular direction and correspond to the polar angles of transmission $\\pm\\theta_1$.\nFigure~\\ref{Fig:1} illustrates the variation of the angle of the first diffracted order $\\theta_1$ under the assumption of normal incidence [$\\theta_i=\\ang{0}$]. The green solid line in this figure corresponds to an illumination wavelength of $\\lambda=\\SI{500}{nm}$ while the green area around this line represents the variation due to the whole visible range of wavelengths from \\SIrange{380}{780}{nm}.
Moreover, the red solid curve in Fig.~\\ref{Fig:1} illustrates the total number of diffraction orders in such a system.\n\nFor a lattice constant of about $a=\\SI{9}{\\micro\\meter}$ and an illuminating wavelength of $\\lambda=\\SI{380}{nm}$, the first diffracted order enters the ``specular'' area of the haze measurement. Starting from this lattice constant (and larger), the haze measurements, as a measure of the fraction of the transmitted intensity away from the specular direction, are biased (unshaded region of Fig.~\\ref{Fig:1}). For lattice constants all the way up to $a=\\SI{1}{mm}$, a human observer will be able to distinguish specular transmission from the first diffraction order. These results hint that the definition of haze is not optimal for gratings whose periods are long compared to the wavelength of visible light. According to our discussion, when the visual aspect of gratings with large periods matters, a haze definition making use of a smaller angular spread around the specular direction than the \\ang{2.5} defined in the norms would provide objective haze values in agreement with the subjective experience of the human eye.\n\n\\begin{figure}[tbhp]\n \\begin{center}\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{Figure_10}\n \\caption{Normally incident light~[$\\theta_i=\\ang{0}$] diffracted through a surface consisting of a one-dimensional grating of lattice constant $a$. The green line shows the angle of diffraction $\\theta_1$ of the first diffraction order [$m=1$], while the red line represents the total number of propagating diffracted orders $N$, both obtained by assuming the wavelength $\\lambda=\\SI{500}{nm}$ for the incident light. The corresponding green and red shaded areas (around the solid lines of the same color) represent the variations of these two quantities due to the wavelength of the incident light varying over the visible range \\num{380}--\\SI{780}{nm}.
The blue horizontal line corresponds to the smallest diffraction angle~[\\ang{2.5}] for which the diffracted light in transmission contributes to haze. Haze measurements for lattice constants smaller than approximately \\SI{9}{\\micro\\meter} receive contributions from all higher diffraction orders for which $m\\neq 0$. However, for larger lattice constants not \\emph{all} such higher orders will contribute. \n For comparison, the horizontal red dashed line corresponds to the limit of angular resolution of the human eye.\n } \n \\label{Fig:1}\n \\end{center}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{Sec:Concusions}\n\nWe report experimental results for angle-resolved transmitted intensity measurements for a set of regular and random arrays of dielectric micropillars in the low-coverage limit that are supported by thin index-matched glass slides. The regular arrays were characterized by lattice constants in the range from $a=\\SI{20}{\\micro\\meter}$ to \\SI{80}{\\micro\\meter}. The measurements were performed by either a goniospectrophotometer or a multimodal imaging polarimetric microscope, and the two sets of measurements gave comparable results. On the basis of experimental data obtained in this way, it is demonstrated that for identical micropillars, the mean differential transmission coefficients for the random arrays agree well with the envelope of the same quantity for the regular array under the assumption that the surface coverage is the same. Moreover, we find that the angle-resolved measurements display unique diffractive features that are due to properties of single micropillars and not to how they are organized along the surface. Finally, we perform a comparison of direct measurements of haze in transmission for our structured samples with what can be calculated from the angle-resolved transmitted intensity measurements.
Good agreement between the two types of results is found, which testifies to the accuracy of the angle-resolved measurements that we report. However, we find that for larger surface coverage, haze values alone cannot be used to distinguish regular and random arrays of micropillars. \n\n\n\\bigskip\n\\textbf{Acknowledgments}\nWe thank Gael Obein, Guillaume Ged, Sebastien Noygues and Emmanuel Garre for valuable discussions.\n\nFrench National Research Agency (ANR-15-CHIN-0003; IDEX Paris-Saclay ANR-11-IDEX-0003-02)\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter*{Acknowledgements}\nI would like to thank my supervisor, Prof. Meir Feder, for his devoted mentoring throughout my research. Without Meir's enthusiasm, curiosity and endless support, this thesis could not have been what it is now. His depth as an information theorist, his broad-mindedness as a scientist and his openheartedness inspired me all along the way. I am grateful for having been given the opportunity to walk this path with him.\n\nI would also like to thank Or Ordentlich and Yuval Lomnitz for their interest and insights, which helped me tackle some of the most challenging problems in my research.\n\n\\cleardoublepage\n\n\\begin{abstract}\nIn this study we consider rateless coding over discrete memoryless channels (DMC) with feedback. Unlike traditional fixed-rate codes, in rateless codes each codeword is infinitely long, and the decoding time depends on the confidence level of the decoder. Using rateless codes along with sequential decoding, and allowing a fixed probability of error at the decoder, we obtain results for several communication scenarios. The results shown here are non-asymptotic, in the sense that the size of the message set is finite.\n\nFirst, we consider the transmission of equiprobable messages using rateless codes over a DMC, where the decoder knows the channel law. We obtain an achievable rate for a fixed error probability and a finite message set.
We show that as the message set size grows, the achievable rate approaches the optimum rate for this setting. We then consider the \\emph{universal} case, in which the channel law is unknown to the decoder. We introduce a novel decoder that uses a mixture probability assignment instead of the unknown channel law, and obtain an achievable rate for this case.\n\nFinally, we extend the scope to more advanced settings. We use different flavors of the rateless coding scheme for joint source-channel coding, coding with side information and a combination of the two with universal coding, which yields a communication scheme that does not require any information on the source, the channel, or the amount of side information at the receiver.\n\\end{abstract}\n\n\\cleardoublepage\n\n\\pagestyle{plain}\n\\pagenumbering{roman}\n\n\\tableofcontents\n\\singlespace\n\\listoffigures\n\n\\cleardoublepage\n\n\\baselineskip0.7cm\n\n\\pagestyle{plain}\n\\pagenumbering{arabic}\n\n\\chapter{Introduction}\n\\section{Background} \\label{sec:Background}\nIn traditional channel coding schemes the code rate, which is the ratio between the lengths of the encoder's input and output blocks, is an integral part of the code definition. If one of $M$ messages is to be encoded at rate $R$, then the corresponding codeword has length $n=(\\log M)\/R$. Provided that the rate is chosen properly, the error probability decreases as $M$ grows. The capacity of the channel $C$ is defined as the largest value of $R$ for which the error probability can vanish.\n\nAn alternative approach to fixed-rate channel coding is that of \\emph{rateless codes}. In this approach, we abandon the basic assumption of a fixed coding rate, and allow the codeword length, and hence also the rate, to depend on the channel conditions. When the encoder wants to send a certain message, it starts transmitting symbols from an infinite-length codeword.
The decoder receives the symbols that passed through the channel, and when it is confident enough about the message, it makes a decision. Perhaps the simplest example of a rateless code is the following (see e.g. \\cite[Ch.3]{Nadav} or \\cite[Ch.7]{Cover}). Suppose that we have a binary erasure channel (BEC) with erasure probability $\\delta$. Suppose also that noiseless feedback exists, i.e. the encoder at time instant $n$ has access to the outputs of the channel at times $1,\\ldots,n-1$. We use simple repetition coding, in which each binary symbol is retransmitted until the decoder receives an unerased symbol. Since the erasure probability is $\\delta$, the expected number of transmissions until an unerased symbol is received is $1\/(1-\\delta)$. This transmission time implies a rate of $1-\\delta$, which is exactly the capacity of the binary erasure channel. This simple setting exemplifies some important concepts of rateless codes. First, the transmission time is not fixed, but rather is a random variable (geometrically distributed in the above case); second, when the length of the transmission is set dynamically, the error probability may be controllable. In this case the transmission is only terminated once the decoder \\emph{knows} what message has been transmitted, so the error probability of this coding scheme is zero; third, the code design is rate-independent. In fact, this code can be used for any binary erasure channel; fourth, the continuity of the transmission requires feedback to the encoder. Indeed, as we shall see in this thesis, when rateless codes are used for point-to-point communication, some form of feedback, which can be limited to decision feedback, must exist to enable continuity. However, rateless codes are also invaluable for other settings such as multicast or broadcast communications, in which the existence of feedback is not explicitly required.
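The BEC repetition scheme above is easy to simulate. The sketch below, a minimal illustration with an arbitrarily chosen erasure probability, estimates the expected number of channel uses per bit and the resulting empirical rate:

```python
import random

def channel_uses_per_bit(delta, rng):
    """Transmit one bit over a BEC(delta) with repetition until it is unerased."""
    uses = 1
    while rng.random() < delta:  # erased with probability delta -> retransmit
        uses += 1
    return uses

rng = random.Random(0)
delta = 0.3
n_bits = 100_000
total_uses = sum(channel_uses_per_bit(delta, rng) for _ in range(n_bits))
empirical_rate = n_bits / total_uses  # concentrates around 1 - delta
```

The per-bit stopping time is geometric with mean $1\/(1-\\delta)$, so the empirical rate concentrates around the BEC capacity $1-\\delta$.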
Shulman \\cite{Nadav} introduced the concept of \\emph{Static Broadcasting}, in which the transmitter sends a message to multiple users, and each user remains connected until it has retrieved enough symbols to make a confident decision. This scheme does not require feedback; the user remains online only as long as it needs, and the rate is determined according to the time the user spent online.\n\nIn this thesis we assume a discrete memoryless channel (DMC) with feedback, and devise rateless coding schemes which allow a small (but fixed) error probability $\\epsilon$. We investigate the dependence between the rate, the error probability and the size of the message set. The entire analysis is done for a finite message set, and we show that when the size of the message set is taken to infinity, our results agree with classic results from coding theory. We also investigate the rate of convergence to these results. We start by building a simple rateless coding scheme for a known channel. The motivation for this method comes from Wald's analysis (see \\cite[Ch.3]{Wald}), where he demonstrated that the Sequential Probability Ratio Test (SPRT) performs like the most powerful test in terms of error probabilities, while using about half the samples on average.
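To make the SPRT reference concrete, here is a minimal sketch of the test and its random stopping time; the Bernoulli hypotheses and error targets are arbitrarily chosen for illustration and are not taken from \\cite{Wald}:

```python
import math
import random

def sprt(samples, p0, p1, alpha=0.01, beta=0.01):
    """Wald's SPRT for H1: Bernoulli(p1) against H0: Bernoulli(p0).

    Accumulates the log-likelihood ratio and stops at Wald's approximate
    thresholds; returns (decision, samples_used).
    """
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0
    llr = 0.0
    n = 0
    for x in samples:
        n += 1
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return None, n  # no decision within the sample budget

rng = random.Random(1)
p0, p1 = 0.5, 0.8
trials = 2000
correct = 0
total_n = 0
for _ in range(trials):
    data = (rng.random() < p1 for _ in range(10_000))  # truth is H1
    decision, n = sprt(data, p0, p1)
    correct += (decision == 1)
    total_n += n
avg_samples = total_n / trials  # the stopping time is random; a few dozen here
```

The number of samples varies from trial to trial, which is exactly the random-transmission-time behavior the rateless schemes in this thesis exploit.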
Finally, we show how to combine the above-mentioned techniques with universal source coding, to obtain a scheme that can operate when the statistics of both the source and the channel are unknown, potentially using side information that is obscure to the encoder.\n\nOur work follows previous results discovered by Shulman \\cite{Nadav} for the universal case, where the decoder is ignorant of the channel law. In particular, a sequential version of the maximal mutual information (MMI) decoder \\cite{CsiszarKorner} is used for universal channel decoding and joint-source channel coding, including the case of side information at the decoder. However, the results in \\cite{Nadav} are asymptotic in the size of the message set, while the analysis here is made for a fixed size of the message set. For the case of known channel, the decoder used here can be viewed as the counterpart of the sequential MMI decoder that uses the channel law rather than the empirical mutual information. This scheme has been originally introduced by Polyanskiy in \\cite{PPV}, where it is proven to achieve the best variable-length coding rate. While the analysis in \\cite{PPV} concentrates on finding the best achievable size of the message set with a constraint on the average decoding time, in this paper we seek the optimum decoding time for a fixed size of the message set. More importantly, the analysis introduced here is then extended naturally to apply for the case of unknown channel, where we use a novel universal decoder, as well as for joint source-channel coding with and without side information at the receiver.\n\n\\section{Thesis Outline} \\label{sec:Outline}\nThe rest of the thesis is organized as follows. In Chapter \\ref{ch:DefinitionsAndNotation} we define rateless codes and provide related definitions and notation. In Chapter \\ref{ch:PreviousResults} we survey previous results related to universal communication and rateless codes. 
In Chapter \\ref{ch:KnownChannel} we treat the case of known channel, for which we obtain an achievable rate using rateless codes. We also prove a converse theorem showing that this rate is asymptotically optimal, and we analyze the rate of convergence. The case of unknown channel is examined in Chapter \\ref{ch:UnknownChannel}, where we develop a universal decoder and analyze its performance for a general DMC. In Chapter \\ref{ch:Extensions} we extend the coding scheme for the case of message sets with non-equiprobable messages, and we also show how rateless coding can be used for problems with side information. Chapter \\ref{ch:Summary} concludes the thesis.\n\n\\chapter{Definitions and Notation}\n\\label{ch:DefinitionsAndNotation}\nThroughout this thesis, random variables will be denoted by capital letters and their realizations by the corresponding lowercase letters. Vectors are denoted by superscripts that indicate their length, for instance $X^n = [X_1,\\ldots,X_n]$. Unless otherwise stated, all logarithms are taken to base 2. We focus on communication over a discrete memoryless channel (DMC) characterized by a transition probability $p(y|x)$, $x \\in \\mathcal{X}, y \\in \\mathcal{Y}$, where $\\mathcal{X}$ and $\\mathcal{Y}$ are the input and output alphabets of the channel, respectively. With a slight abuse of notation, we use $p(\\cdot|\\cdot)$ also to denote the joint transition probabilities of the channel, thus $p(y^n|x^n) = \\prod_{i=1}^n p(y_i|x_i)$. The capacity of the channel (in bits per channel use) is conventionally defined as $C = \\max_{q(x)}I(X;Y)$, where $I(X;Y)$ is the mutual information between the input of the channel and its output, and the maximization is over all channel input priors $q(x)$. If $|\\mathcal{X}| = |\\mathcal{Y}|$, and $p(y|x)=1$ if $x=y$ and $p(y|x)=0$ otherwise, then the channel is said to be noiseless, and in that case $C = \\log |\\mathcal{X}|$.
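The defining quantity $C = \\max_{q(x)}I(X;Y)$ can be evaluated numerically for small alphabets. The sketch below, whose function name and BSC example are ours for illustration and not part of the thesis, computes $I(X;Y)$ in bits for a given prior and channel matrix:

```python
import math

def mutual_information(q, P):
    """I(X;Y) in bits for input prior q and channel matrix P[x][y] = p(y|x)."""
    ny = len(P[0])
    p_y = [sum(q[x] * P[x][y] for x in range(len(q))) for y in range(ny)]
    info = 0.0
    for x in range(len(q)):
        for y in range(ny):
            if q[x] > 0 and P[x][y] > 0:
                info += q[x] * P[x][y] * math.log2(P[x][y] / p_y[y])
    return info

# Illustrative channel: a BSC with crossover probability 0.11, for which
# the uniform prior is capacity-achieving and C = 1 - h(0.11).
delta = 0.11
bsc = [[1 - delta, delta], [delta, 1 - delta]]
C = mutual_information([0.5, 0.5], bsc)
```

For the binary symmetric channel the uniform prior attains the maximum, so the value above equals $1-h(\\delta)$ with $h$ the binary entropy function; a skewed prior yields a strictly smaller mutual information.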
We also assume that noiseless feedback exists from the receiver to the transmitter.\n\nA rateless code has the following elements:\n\\begin{enumerate}\n\\item Message set $\\mathcal{W}$ containing $M$ messages. Without loss of generality we assume that $\\mathcal{W}=\\{1,\\ldots,M\\}$, with corresponding probabilities $\\pi(1),\\ldots,\\pi(M)$. Occasionally, we define $K=\\log M$ as the number of bits conveyed in a message.\n\\item Codebook $\\mathcal{C} = \\{\\mathbf{c}_i\\}_{i=1}^M$, where each codeword $\\mathbf{c}_i \\in \\mathcal{X}^\\infty$ is generated by drawing i.i.d. symbols according to a prior $q(x), x\\in\\mathcal{X}$.\n\\item Set of encoding functions $f_n:\\mathcal{W} \\rightarrow \\mathcal{X}$, $n \\geq 1$.\n\\item Set of decoding functions $g_n:\\mathcal{Y}^n \\rightarrow \\mathcal{W} \\cup \\{0\\}$, $n \\geq 1$.\n\\end{enumerate}\nUnlike conventional codes, for which the rate is a fundamental property, the above description does not specify a working rate--hence the term \\emph{rateless code}. To encode a message $w \\in \\mathcal{W}$, the encoder starts transmitting the codeword $\\mathbf{c}_w$ over the channel. Upon receiving each channel output, the decoder can either decide on one of the messages $\\hat{w}$ or decide to wait for further channel outputs, returning `$0$'. Through feedback, the decoder's decision is known to the encoder, which correspondingly decides whether to transmit further symbols from $\\mathbf{c}_w$ or to proceed to the next message. We note that two different forms of feedback can be assumed here: channel feedback and decision feedback. In channel feedback, the encoder at time instant $t$ observes $Y^{t-1}$, the channel outputs so far, and by imitating the decoder's operation it becomes aware of any decision made by the decoder. In decision feedback, the encoder is only informed that a decision has been made, and it can proceed to the next message.
While channel feedback requires no intervention from the decoder in the feedback process, it essentially assumes that the feedback channel has the same bandwidth as the main channel. Decision feedback, in contrast, requires only one feedback bit per symbol.\n\nWe conclude this section with a few definitions required for the next sections.\n\\begin{definition} \\label{def:StoppingTime}\nA \\emph{stopping time} $T$ of a rateless code is a random variable defined as\n\\begin{equation} \\label{eq:StoppingTimeDef}\n T = \\min\\{n: g_n(Y^n) \\neq 0\\}\n\\end{equation}\n\\end{definition}\n\\begin{definition} \\label{def:EffectiveRate}\nAn \\emph{effective rate} $R$ of a rateless code is defined as\n\\begin{equation} \\label{eq:EffectiveRateDef}\n R = \\frac{\\log M}{\\E\\{T\\}}\n\\end{equation}\nwhere $\\E\\{T\\}=\\E_q \\{ \\E_p \\{T\\}\\}$, i.e. the averaging is done over all possible codebooks and channel realizations.\n\\end{definition}\nUsing the definition of stopping time, we can define the error event as the case in which the decoder stops, deciding on the wrong message. The error event conditioned on a particular message is defined as\n\\begin{equation} \\label{eq:EmDef}\n E_w = \\{\\hat{W} \\neq w \\ | \\ W = w \\}\n\\end{equation}\nwhere $\\hat{W}=g_{_T}(Y^T)$.\nThe average error probability for the entire message set is therefore\n\\begin{equation} \\label{eq:PeDef}\n P_e = \\sum_{w=1}^M \\pi(w) \\cdot \\Pr\\{E_w\\}\n\\end{equation}\n\\begin{definition}\nFor a given DMC, an $(R,M,\\epsilon)$-code is a rateless code with effective rate $R$, containing $M$ messages and error probability $P_e \\leq \\epsilon$.\n\\end{definition}\n\n\\chapter{Previous Results}\n\\label{ch:PreviousResults}\nAs noted, the rateless coding scheme is a special case of communication over a channel with feedback. Shannon \\cite{ShannonZEC} proved that the capacity of a DMC is not increased by adding feedback. 
However, adding feedback \\emph{can} increase the zero-error capacity of the channel. In his well-known paper, Burnashev \\cite{Burnashev} investigated the effect of feedback in communication over a DMC by analyzing the error exponent of such a channel. Introducing the notion of random transmission time, Burnashev obtained a bound on the mean transmission time for a fixed error probability, from which he derived the error exponent\\footnote{Also referred to as the \\emph{reliability function}.} for a DMC with feedback. He also proved a converse theorem showing that the expected transmission time, and hence also the error exponent, are asymptotically optimal. (That is, they coincide with the results of the converse theorem as the size of the message set grows to infinity.) The main result of \\cite{Burnashev} is the following theorem.\n\\begin{untheorem}[Burnashev \\cite{Burnashev}]\nThe optimum error exponent for a DMC with noiseless feedback is\n\\begin{equation}\\label{eq:BurnasheErrorExp}\n \\lim_{M \\to \\infty} -\\frac{1}{\\E\\{T\\}} \\log P_e = C_1\\left(1 - \\frac{R}{C} \\right), \\qquad 0 \\leq R \\leq C\n\\end{equation}\nwhere $T$ is the transmission time, $R$ is defined in \\eqref{eq:EffectiveRateDef} and\n\\begin{equation}\\label{eq:C1Def}\n C_1 \\triangleq \\max_{(x,x') \\in \\mathcal{X} \\times \\mathcal{X}} D\\left(p(\\cdot|x)||p(\\cdot|x')\\right)\n\\end{equation}\n\\end{untheorem}\n\nExamining \\eqref{eq:BurnasheErrorExp} we can observe that whenever $R \\geq C$, the error exponent vanishes, which concurs with Shannon's result \\cite{ShannonZEC}. Moreover, whenever the channel has at least two inputs that are completely distinguishable from one another, i.e. $p(y|x)>0$ and $p(y|x')=0$ for some $x,x' \\in \\mathcal{X}$ and $y \\in \\mathcal{Y}$, it holds that $D\\left(p(\\cdot|x)||p(\\cdot|x')\\right) \\to \\infty$ and hence also $C_1 \\to \\infty$ for that channel.
Therefore, the error exponent in that case is infinite at \\emph{every} rate below the channel capacity, which implies that the zero-error capacity coincides with the channel capacity $C$.\n\nAlso for the case of channels with feedback, Shulman \\cite{Nadav} developed a coding scheme providing reliable communication over an unknown channel, without compromising the rate. Introducing the concept of \\emph{static broadcasting}, which is based on a random codebook and a universal sequential decoder, he demonstrated that it is possible to achieve vanishing error probability at a rate that tends to the capacity of the channel as the size of the message set grows indefinitely. Furthermore, Shulman showed that even if the statistics of the information source are unknown to the transmitter, this scheme achieves the optimal decoding length that would have been achieved if the source were compressed by an optimal source-encoder and the channel were known at both ends. More formally, if $K$ information bits of a source $S$ were to be transmitted over an unknown channel $W$, then the average decoding length satisfies\n\\begin{equation}\\label{eq:DecodingLengthNadav}\n \\lim_{K \\to \\infty} \\frac{\\E\\{T\\}}{K} = \\frac{H(S)}{I(P;W)}\n\\end{equation}\nwhere $P$ is the codebook generation prior and $I(P;W)$ is the mutual information between the input and the output of the channel $W$ when the input is drawn according to distribution $P$.\n\nShulman also used the coding scheme for source encoding of correlated sources. He demonstrated that using static broadcasting, it is possible to achieve the Slepian-Wolf optimal rate region. Combining all of the above into one communication scheme, the achievable decoding length is\n\\begin{equation}\\label{eq:DecodingLengthNadavSI}\n \\lim_{K \\to \\infty} \\frac{\\E\\{T\\}}{K} = \\frac{H(S|Z)}{I(P;W)}\n\\end{equation}\nwhere $Z$ is the side information at the decoder.
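As a numerical reading of the limit above, the sketch below evaluates $H(S)\/I(P;W)$ for a hypothetical Bernoulli source and a binary symmetric channel under the uniform prior; the parameter values are illustrative only:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical parameters: Bernoulli(0.2) source, BSC(0.11) channel,
# uniform codebook prior P (capacity-achieving for the BSC).
H_S = h2(0.2)              # source entropy per source bit
I_PW = 1.0 - h2(0.11)      # I(P;W) for the BSC under the uniform prior
uses_per_bit = H_S / I_PW  # asymptotic E{T}/K
```

With side information one would replace $H(S)$ by the conditional entropy $H(S|Z)$, shortening the decoding length accordingly.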
Shulman's work has been the main inspiration for this research.\n\nFor the case of unknown channel, Tchamkerten and Telatar in \\cite{Telatar} used a rateless coding scheme similar to the one defined in Chapter \\ref{ch:DefinitionsAndNotation}, where the stopping condition is that the empirical mutual information between (at least) one of the codewords and the channel output sequence exceeds a certain time-dependent threshold. The authors proved that this scheme can achieve the capacity of a general DMC.\\footnote{Since no assumption has been made on the capacity-achieving prior, the authors only demonstrated that the rate approaches $I(P;Q)$, where $P$ is the codebook generation prior and $Q$ is the transition probability of the channel.} Moreover, they demonstrated that for the class of binary symmetric channels with crossover probabilities $L \\in [0,1\/2)$, this coding scheme can achieve Burnashev's exponent at a rate bounded by any fraction of the channel capacity. The latter result is obtained by using a second coding phase, in which the transmitter indicates whether the decoder's decision is correct (an \\emph{Ack\/Nack} phase). Tchamkerten and Telatar also demonstrated that for the class of $Z$ channels with parameter $L \\in [0,1)$, the achievable rate can be arbitrarily close to the channel capacity, while the error exponent is infinite. The latter result also coincides with Burnashev's exponent ($C_1$ in \\eqref{eq:BurnasheErrorExp} is infinite in this case), since error-free communication is attainable for the $Z$ channel.\n\nWe note that all the above-mentioned results were asymptotic in the size of the message set. Recently, Polyanskiy, Poor and Verd{\\'u} in \\cite{PPV} introduced non-asymptotic results for communication over a DMC with feedback. Through the use of variable-rate coding and sequential decoding they obtained upper and lower bounds for the maximal message set size for fixed bounds on the error probability and mean decoding length.
The authors showed that for an error probability constraint $P_e \\leq \\epsilon$ and mean decoding length constraint $\\E \\{T\\} \\leq \\ell$, the maximal message set size $M^*(\\ell,\\epsilon)$ satisfies\n\\begin{equation}\\label{eq:PolyanskiyUpperLower}\n \\frac{\\ell C}{1-\\epsilon} - \\log \\ell + O(1) \\leq \\log M^*(\\ell,\\epsilon) \\leq \\frac{\\ell C}{1-\\epsilon} + O(1)\n\\end{equation}\nThe setting of \\cite{PPV}, as well as the coding scheme, is similar to the one defined later in Chapter \\ref{ch:KnownChannel}. However, while in \\cite{PPV} the optimization is on $M$, for fixed $\\epsilon$ and $\\ell$, we fix $\\epsilon$ and $M$ and find the optimum mean decoding length. The analysis is slightly different, but the results of Chapter \\ref{ch:KnownChannel} comply with \\cite{PPV}. The analysis in Chapter \\ref{ch:KnownChannel}, coming next, lays the groundwork for the derivation of our novel results for the case of unknown channel.\n\n\\chapter{Rateless Coding -- Known Channel}\n\\label{ch:KnownChannel}\n\\section{Sequential Decoder} \\label{sec:SequentialDecoder}\nWe begin by introducing a rateless coding scheme for noisy channels and analyzing its effective rate, under certain constraints on the size of the message set and the error probability. As will be shown in the sequel, the effective rate is closely related to the channel capacity $C$. More precisely, we will show that under the conventional setting, in which the size of the message set is taken to infinity, the effective rate coincides with the capacity of the channel.\n\nConsider a discrete memoryless source with a set of $M$ equiprobable messages, i.e. $\\pi(i)=1\/M$, $i=1,\\ldots,M$. We use a rateless code as defined in Chapter \\ref{ch:DefinitionsAndNotation}, where each codeword $\\mathbf{c}_i$, $i=1,\\ldots,M$, is generated by drawing i.i.d. symbols according to $q(x)$, the capacity-achieving prior of the channel.
The source of randomness generating the codewords is shared by the encoder and the decoder, so that the codebook is known at both ends. The decoder uses the following decision rule:\n\\begin{equation} \\label{eq:ChannelDecoderLin}\n g_n(y^n)= \\begin{cases}\n w, \\ \\prod_{k=1}^n p(c_{w,k}|y_k) \\geq A \\cdot \\prod_{k=1}^n q(c_{w,k}) \\\\\n 0, \\ \\text{if no such $w$ exists}\n \\end{cases}\n\\end{equation}\nwhere $\\{c_{w,k}\\}_{k=1}^{\\infty}$ are the symbols in $\\mathbf{c}_w$. If the threshold crossing condition in \\eqref{eq:ChannelDecoderLin} is satisfied by more than one codeword, we randomly choose one of them and declare an error. We note here that similar decoders have been proposed by Polyanskiy \\cite{PPV} and Burnashev \\cite[Ch.3]{Burnashev}.\nThe decision rule at \\eqref{eq:ChannelDecoderLin} can be equivalently written as\n\\begin{equation} \\label{eq:ChannelDecoderLog}\n g_n(y^n)= \\begin{cases}\n w, \\ z_{w,1}+\\ldots+z_{w,n} \\geq a \\\\\n 0, \\ \\text{if no such $w$ exists}\n \\end{cases}\n\\end{equation}\nwhere\n\\begin{equation}\n z_{w,k} = \\log \\frac{p(c_{w,k}|y_k)}{q(c_{w,k})}, \\qquad k=1,\\ldots,n\n\\end{equation}\nand we define $a = \\log A$.\n\nThe above-described coding scheme can be summarized as follows. Having selected a message, the encoder starts transmitting an infinite-length random codeword corresponding to that message. The decoder sequentially receives symbols from this codeword that passed through the channel, and at each time instant $k$ calculates $z_{w,k}$ for $w=1,\\ldots,M$. It then updates a set of $M$ accumulators, each corresponding to a possible message, and checks whether any of those crossed a prescribed threshold $a$. If none of the counters crossed the threshold, `$0$' is returned and the decoder waits for the next channel output; if exactly one counter crossed the threshold, the decoder makes a decision; and if more than one threshold crossing occurred, an error is declared. 
In the two latter cases, the encoder proceeds to the next codeword.\n\nFor the above-described scheme we have the following theorem.\n\\begin{theorem} \\label{Theorem1}\nFor the decoder in \\eqref{eq:ChannelDecoderLog} with $P_e \\leq \\epsilon$, the following effective rate is achievable:\n\\begin{equation}\n R = \\frac{C}{1+ \\frac{C - \\log \\epsilon}{\\log M}} \\label{eq:AchievableRateChannelDec}\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nSince $T$ is a stopping time of the i.i.d. sequence $Z_1,Z_2,\\ldots$, Wald's equation \\cite{Wald} implies\n\\begin{equation} \\label{eq:ETForWald}\n \\E\\{T\\} = \\frac{\\E \\{Z_1 + \\ldots + Z_T\\}}{\\E\\{Z\\}},\n\\end{equation}\nwhere $\\E\\{Z\\}$ is the expectation of a single sample $Z_i$. If $X_i$ and $Y_i$ are the input and output of the channel, respectively, then by the definition of $Z_i$ we have\n\\begin{equation} \\label{eq:EZForWald}\n \\E\\{Z\\} = \\E\\{Z_i\\} = \\E \\left\\{\\log \\frac{p(X_i|Y_i)}{q(X_i)}\\right\\} = C.\n\\end{equation}\nFurthermore, since the stopping condition was not fulfilled at time instant $T-1$ we have\n\\begin{equation}\n Z_1 + \\ldots + Z_{T-1} < a\n\\end{equation}\nwhich implies\n\\begin{equation} \\label{eq:SumZForWald}\n Z_1 + \\ldots + Z_T < a + Z_T\n\\end{equation}\nCombining \\eqref{eq:ETForWald}, \\eqref{eq:EZForWald} and \\eqref{eq:SumZForWald} we obtain\n\\begin{equation} \\label{eq:WaldC}\n \\E\\{T\\} < \\frac{a+C}{C}.\n\\end{equation}\n\nWe now tune the threshold parameter $a$ to meet the error probability requirement. Suppose that the stopping time of the correct codeword is $T_w$. An error occurs if a competing codeword $\\mathbf{c}_{w'}$, independent of $\\{Y_k\\}_{k=1}^{\\infty}$, crosses the threshold before $\\mathbf{c}_w$ does. 
Thus,\n\\begin{align}\n \\Pr \\{E_w\\} &= \\Pr \\left\\{ \\bigcup_{w' \\neq w} \\bigcup_{t=1}^{T_w}\n \\left\\{ \\frac{\\prod_{k=1}^{t} p(C_{w',k}|Y_k)}{\\prod_{k=1}^{t} q(C_{w',k})} > A \\right\\} \\right\\} \\\\\n &\\leq (M-1) \\Pr \\left\\{ \\bigcup_{t=1}^{T_w}\n \\left\\{ \\frac{\\prod_{k=1}^{t} p(X_k|Y_k)}{\\prod_{k=1}^{t} q(X_k)} > A \\right\\} \\right\\} \\label{eq:ErrorProbUB} \\\\\n &\\leq (M-1) \\Pr \\left\\{ \\bigcup_{t=1}^{\\infty}\n \\left\\{ \\frac{\\prod_{k=1}^{t} p(X_k|Y_k)}{\\prod_{k=1}^{t} q(X_k)} > A \\right\\} \\right\\} \\label{eq:ErrorProbInf}\n\\end{align}\nwhere \\eqref{eq:ErrorProbUB} follows from the union bound for an arbitrary series $\\{X_k\\}_{k=1}^{\\infty}$ drawn i.i.d. from $q(x)$, independently of $\\{Y_k\\}_{k=1}^{\\infty}$. Note that the bound in \\eqref{eq:ErrorProbInf} represents the probability that a randomly-chosen codeword will exceed the threshold at any time instant. Define a sequence of random variables\n\\begin{equation}\\label{eq:uidef}\n U_t = \\begin{cases}\n \\frac{p(X_t|Y_t)}{q(X_t)}, \\ \\prod_{k=1}^{t-1} U_k \\leq A \\\\\n 1, \\ \\text{otherwise}\n \\end{cases}\n\\end{equation}\nIf at instant $t$ the threshold at \\eqref{eq:ErrorProbInf} is exceeded for the first time, then we have $U_k=p(X_k|Y_k)\/q(X_k)$ for $k=1,\\ldots,t$ and $U_k=1$ for all $k>t$. 
Therefore, it is easy to see that\n\\begin{equation}\n \\bigcup_{t=1}^{\\infty}\n \\left\\{ \\frac{\\prod_{k=1}^{t} p(X_k|Y_k)}{\\prod_{k=1}^{t} q(X_k)} > A \\right\\}\n \\Leftrightarrow \\prod_{t=1}^{\\infty} U_t > A\n\\end{equation}\nWe can also see that $\\E\\{U_t\\}=1$ for \\emph{all} $t$ because\n\\begin{equation*}\n \\E \\left\\{ U_t|\\prod_{k=1}^{t-1} U_k > A \\right\\} = 1\n\\end{equation*}\nsince $U_t = 1$ deterministically in this case, and\n\\begin{align}\n \\E \\left\\{ U_t|\\prod_{k=1}^{t-1} U_k \\leq A \\right\\} &= \\E \\left\\{ \\frac{p(X_t|Y_t)}{q(X_t)} \\right\\} \\\\\n &= \\E \\left\\{ \\E \\left\\{ \\frac{p(X_t|Y_t)}{q(X_t)}|Y_t \\right\\} \\right\\} \\\\\n &= \\E \\left\\{ \\sum_{x \\in \\mathcal{X}} \\frac{p(x|Y_t)}{q(x)} \\cdot q(x) \\right\\} \\label{eq:IndependentX} \\\\\n &= 1\n\\end{align}\nwhere \\eqref{eq:IndependentX} follows since $X_t$ and $Y_t$ are independent. For an arbitrary $N$ we have\n\\begin{align}\\label{eq:ProdU}\n \\E \\left\\{\\prod_{t=1}^N U_t \\right\\} &= \\E \\left\\{ \\E \\left\\{\\prod_{t=1}^N U_t | \\prod_{t=1}^{N-1} U_t \\right\\} \\right\\} \\\\\n &= \\E \\{ U_N \\} \\cdot \\E \\left\\{\\prod_{t=1}^{N-1} U_t \\right\\} \\\\\n &= \\E \\left\\{\\prod_{t=1}^{N-1} U_t \\right\\} = \\ldots = \\E\\{U_1\\} = 1\n\\end{align}\nSince the above holds for all $N$, we also have\n\\begin{equation}\\label{eq:InfProdU}\n \\E \\left\\{\\prod_{t=1}^\\infty U_t \\right\\} = 1\n\\end{equation}\nReturning to \\eqref{eq:ErrorProbInf}, we get\n\\begin{align}\n \\Pr \\{E_w\\} &\\leq (M-1) \\Pr \\left\\{ \\bigcup_{t=1}^{\\infty}\n \\left\\{ \\frac{\\prod_{k=1}^{t} p(X_k|Y_k)}{\\prod_{k=1}^{t} q(X_k)} > A \\right\\} \\right\\} \\label{eq:BoundErrorProbKnownCh} \\\\\n &= (M-1) \\Pr \\left\\{ \\prod_{t=1}^{\\infty} U_t > A \\right\\} \\\\\n &\\leq \\frac{M-1}{A} \\label{eq:Markov}\n\\end{align}\nwhere \\eqref{eq:Markov} follows from \\eqref{eq:InfProdU} and Markov's Inequality.\nSince the above holds for all $w \\in \\mathcal{W}$, we also 
have\n\\begin{equation}\n P_e \\leq \\frac{M-1}{A}\n\\end{equation}\n\nBy choosing $a = \\log M - \\log \\epsilon$, or equivalently $A = M\/\\epsilon$, we ensure that $P_e < \\epsilon$. Substituting $a$ into \\eqref{eq:WaldC} and using Definition \\ref{def:EffectiveRate}, we obtain \\eqref{eq:AchievableRateChannelDec}.\n\\end{proof}\n\nIt is important to note that the encoding operation is independent of the working rate; the encoder needs to know the channel law only to generate the codebook. However, if the channel is known to belong to a family for which the capacity-achieving prior is known (e.g. the uniform prior for symmetric channels), then the optimal rate can be achieved even when the encoder is uninformed about the channel law. Furthermore, from a practical point of view, using the uniform prior instead of the capacity-achieving prior is known to perform relatively well in many cases. For instance, using a uniform prior in a binary channel will lose at most 6\\% of the capacity (see \\cite[Ch.5]{Nadav}).\n\n\\section{Coding Theorem for Known Channel} \\label{sec:CodingThmKnownChannel}\nWe will now use the coding scheme developed in Section \\ref{sec:SequentialDecoder} to prove the main result for rateless channel coding. For a fixed error probability, we will obtain an achievable rate using rateless codes. Then, we will prove that this rate is within $O(\\log \\log M \/ \\log M)$ of the optimal rate achievable with this error probability. Before we get to the main theorem, we prove the following lemma, which facilitates some refinement in the achievable rate.\n\n\\begin{lemma} \\label{Lemma1}\nSuppose that an $(R,M,\\epsilon)$-code exists for a DMC. 
Then for any $0<\\alpha<1$, there also exists an $(R',M,\\epsilon')$-code for the same channel, where\n\\begin{eqnarray}\n R' & = & (1-\\alpha)^{-1}R \\\\\n \\epsilon' & = & \\alpha + \\epsilon - \\alpha \\epsilon\n\\end{eqnarray}\n\n\\end{lemma}\n\n\\begin{proof}\nTo show that the triplet $(R',M,\\epsilon')$ is achievable, we use the original code with randomized decision-making at the decoder. For each transmitted message, the decoder either terminates the transmission immediately and declares an error, with probability $\\alpha$, or uses the original decision rule. Denote the stopping time of the original decoder and the modified one by $T$ and $T'$, respectively. The expected decision time of the modified decoder is\n\\begin{equation}\n \\E\\{T'\\} = (1-\\alpha) \\E\\{T\\},\n\\end{equation}\nwhich implies\n\\begin{equation}\n R' = (1-\\alpha)^{-1} R.\n\\end{equation}\nThe error event in the modified scheme is a union of two non-mutually-exclusive events: error in the original decoder and the event of early termination. 
The probability of this union is\n\\begin{equation}\n \\epsilon' = \\alpha + \\epsilon - \\alpha \\epsilon.\n\\end{equation}\nFinally, we note that the number of messages in the codebook remains unchanged---which completes the proof of the lemma.\n\\end{proof}\n\n\\begin{theorem}\nFor rateless codes, the following rate is achievable:\n\\begin{equation} \\label{eq:AchievableRate}\n R' = \\begin{cases}\n \\frac{1 - 1\/\\log M}{1+ \\frac{C + \\log \\log M}{\\log M}} \\cdot \\frac{C}{1-\\epsilon} & \\epsilon > 1\/\\log M \\\\\n \\frac{C}{1+ \\frac{C - \\log \\epsilon}{\\log M}} & \\epsilon \\leq 1\/\\log M \\\\\n \\end{cases}\n\\end{equation}\n\\end{theorem}\n\nWe note that if $\\epsilon$ is fixed and $M$ is large enough so that $\\epsilon > 1\/\\log M$, the achievable rate has the following asymptotics:\n\\begin{equation} \\label{eq:AchievableRateAsym}\n R' = \\frac{C}{1-\\epsilon} \\cdot \\left(1 - O \\left( \\frac{\\log \\log M}{\\log M} \\right) \\right)\n\\end{equation}\n\\begin{proof}\nTheorem \\ref{Theorem1} implies that the triplet $(R,M,\\delta)$ is achievable for all $0 < \\delta < 1$, where\n\\begin{equation}\n R = \\frac{C}{1+ \\frac{C - \\log \\delta}{\\log M}}\n\\end{equation}\nBy Lemma \\ref{Lemma1}, we can also achieve $(R',M,\\delta')$, where\n\\begin{eqnarray}\n R' & = & \\frac{C}{(1-\\alpha)\\left(1+ \\frac{C - \\log \\delta}{\\log M}\\right)} \\\\\n \\delta' & = & \\alpha + \\delta - \\alpha \\delta\n\\end{eqnarray}\nfor all $0 < \\alpha < 1$. 
By choosing\n\\begin{equation}\n \\alpha = \\frac{\\epsilon - \\delta}{1 - \\delta}\n\\end{equation}\nwe obtain\n\\begin{eqnarray}\n R' & = & \\frac{1 - \\delta}{1+ \\frac{C - \\log \\delta}{\\log M}} \\cdot \\frac{C}{1-\\epsilon} \\\\\n \\delta' & = & \\epsilon\n\\end{eqnarray}\nSince the foregoing analysis holds for all $0 < \\delta < \\epsilon$, we can choose $\\delta = \\min\\{\\epsilon, 1\/\\log M\\}$ to obtain \\eqref{eq:AchievableRate}.\n\\end{proof}\n\n\\begin{remark}\nIf $\\epsilon \\leq 1\/ \\log M$, the choice $\\delta = \\epsilon$ implies $\\alpha = 0$, that is, no randomization at the decoder. This result could be anticipated, since the randomized decoder trades rate for reliability: it obtains a better effective rate with some compromise on the error probability. Hence, whenever the error probability constraint is more important than the working rate -- randomization can only worsen matters.\n\\end{remark}\n\n\\section{Error Exponent}\nTheorem 2 in the previous section provides a relation between the working rate and the allowed error probability. We will now investigate this dependency in the regime of low error probability by developing the error exponent induced by this coding scheme. Assuming that a low error probability is required, randomization at the decoder is inapplicable, so \\eqref{eq:AchievableRate} can be rewritten as\n\\begin{equation}\n - \\frac{R}{\\log M} \\log \\epsilon = C - R - \\frac{CR}{\\log M}\n\\end{equation}\n\nRecall that $R = \\log M \/ \\E\\{T\\}$, so\n\\begin{equation}\n - \\frac{\\log \\epsilon}{\\E\\{T\\}} = C - R - \\frac{CR}{\\log M} \\triangleq E(R)\n\\end{equation}\n\nWe can see that the error exponent is a \\emph{linear} function of the rate, which is also the case in Burnashev's analysis \\eqref{eq:BurnasheErrorExp} (albeit with a different coefficient). Furthermore, as $M$ grows, the error exponent converges to $C-R$ and the convergence is dominated by a term of order $O(1\/\\log M)$, or $O(1\/K)$. 
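A numerical sketch makes this convergence visible (the BSC crossover probability $0.1$, the rate $R=0.25$, and the values of $\log M$ below are illustrative assumptions, not taken from the text):

```python
import math

def bsc_capacity(p):
    # Capacity in bits per channel use of a BSC with crossover probability p.
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

def error_exponent(R, C, log_M):
    # E(R) = C - R - C*R/log M, the linear error exponent derived above.
    return C - R - C * R / log_M

C = bsc_capacity(0.1)   # about 0.531 bits/use
R = 0.25
for log_M in (10, 100, 1000):
    print(log_M, round(error_exponent(R, C, log_M), 4))
```

The printed exponent increases with $\log M$ and approaches $C - R \approx 0.281$, with the gap shrinking like $O(1/\log M)$.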
This term can be interpreted as a penalty for using a finite message set.\n\n\\section{Weak Converse}\nIn the previous section we have seen that if we use a codebook with $M$ messages and allow an error probability $P_e \\leq \\epsilon$, then we can achieve an effective rate with the following asymptotics:\n\\begin{equation}\n R' = \\frac{C}{1-\\epsilon} \\cdot \\left(1 - O \\left( \\frac{\\log \\log M}{\\log M} \\right) \\right)\n\\end{equation}\n\nWe will now prove that under the above constraints on the message set and the error probability, the best achievable rate has the same asymptotics. In other words, the achievable rate at \\eqref{eq:AchievableRate} converges to the optimal rate, and the convergence is dominated by a term of order $O(1\/\\log M)$.\n\n\\begin{theorem}\nGiven a decoder with random\\footnote{Fixed stopping time is a special case of random stopping time, in which $T$ takes only one value.} stopping time $T$, any rate for which the probability of error does not exceed $\\epsilon$ satisfies\n\\begin{equation}\n R' \\leq \\frac{C}{1-\\epsilon} \\cdot \\left(1 + O \\left( \\frac{1}{\\log M} \\right) \\right).\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nDefine\n\\begin{equation}\n \\mu(n) = H(W|Y^n) + nC.\n\\end{equation}\nBy \\cite[Lemma 2]{Burnashev} we have\n\\begin{equation}\n \\E \\{\\mu(n+1)|Y^n\\} - \\mu(n) = \\E \\{H(W|Y^{n+1})-H(W|Y^n)|Y^n\\} + C \\geq 0\n\\end{equation}\nwhich implies that $\\mu(n)$ is a submartingale with respect to the process $\\{Y_k\\}_{k=1}^{\\infty}$. 
Therefore we have\n\\begin{align}\n \\log M &= H(W) = \\mu(0) \\nonumber \\\\\n &\\leq \\E \\{\\mu(T)\\} \\nonumber \\\\\n &= \\E \\{H(W|Y^T)\\} + C \\cdot \\E\\{T\\} \\label{eq:SubmartingalIneq}\n\\end{align}\nFurthermore, by \\cite[Lemma 1]{Burnashev} we have\n\\begin{align}\n \\E\\{H(W|Y^T)\\} &\\leq h \\left(P_e\\right) + P_e \\cdot \\log (M-1) \\nonumber \\\\\n &< 1 + \\epsilon \\cdot \\log M \\label{eq:Fano}\n\\end{align}\nwhere \\eqref{eq:Fano} follows from the requirement $P_e \\leq \\epsilon$, and from an upper bound on the binary entropy function. Combining \\eqref{eq:SubmartingalIneq} and \\eqref{eq:Fano} we obtain\n\\begin{equation} \\label{eq:BoundLogM}\n \\log M < 1 + \\epsilon \\cdot \\log M + C \\cdot \\E\\{T\\}\n\\end{equation}\nwhich implies\n\\begin{equation} \\label{eq:RPrimeWithET}\n R' = \\frac{\\log M}{\\E\\{T\\}} < \\frac{C}{1-\\epsilon} \\cdot \\left(1 + \\frac{1}{C \\cdot \\E\\{T\\}}\\right)\n\\end{equation}\nFurthermore, from \\eqref{eq:BoundLogM} we can see that\n\\begin{equation}\n C \\cdot \\E\\{T\\} > (1 - \\epsilon) \\cdot \\log M - 1\n\\end{equation}\nand therefore \\eqref{eq:RPrimeWithET} can be replaced by\n\\begin{equation}\n R' = \\frac{\\log M}{\\E\\{T\\}} \\leq \\frac{C}{1-\\epsilon} \\cdot \\left(1 + O\\left(\\frac{1}{\\log M}\\right)\\right) \\label{eq:UpperBound}\n\\end{equation}\n\\end{proof}\n\n\\begin{remark}\nWhile \\eqref{eq:AchievableRate} approaches \\eqref{eq:UpperBound} for large $M$, the upper bound is not tight for a finite $M$. Note that the converse used here is ``weak'', in that it is based on Fano's inequality, which is known to be loose in many cases. We conjecture that a strong converse can be found, which will be tighter (i.e. 
closer to \\eqref{eq:AchievableRate}) even in the non-asymptotic realm.\n\\end{remark}\n\n\\begin{remark}\nEquation \\eqref{eq:AchievableRateAsym}, the achievable rate, is essentially equivalent to the left-hand side of \\cite[Eq.18]{PPV}, and equation \\eqref{eq:UpperBound}, the upper bound on the rate, is equivalent to the right-hand side of that equation. Note, however, that the formulation is slightly different: in \\cite{PPV} the size of the message set $M$ is optimized under a constraint on the mean transmission time, while here $M$ is fixed and the transmission time is minimized.\n\\end{remark}\n\n\\section{Further Discussions}\n\\subsection{Application to Gaussian Channels}\nWhile the analysis in Sections \\ref{sec:SequentialDecoder} and \\ref{sec:CodingThmKnownChannel} is done for discrete channels, it can be easily extended to memoryless Gaussian channels. Suppose that $X_t$ and $Y_t$ are the input and output of an additive white Gaussian noise channel at time instant $t$, i.e.\n\\begin{equation} \\label{eq:AWGNDef}\n Y_t = X_t + V_t, \\qquad t=1,2,\\ldots\n\\end{equation}\nwhere $\\{V_t\\}_{t=1}^\\infty$ is a sequence of i.i.d. Gaussian RV's with zero mean and a known variance. The encoding and the decoding processes, as well as the expression for the resulting effective rate, are similar to those of the DMC, where $q(\\cdot)$ is the codebook generation PDF and $p(\\cdot|\\cdot)$ is the transition PDF of the backward channel.\n\nSpecifically, consider the above-described setting where $V_k \\sim N(0,\\theta)$. Suppose that the codebook is Gaussian with power constraint $P$, i.e. $C_{m,k} \\sim N(0,P)$ for all $m,k$. (Here again, $C_{m,k}$ is the $k$-th symbol of the $m$-th codeword.) 
The decoding rule is given by \\eqref{eq:ChannelDecoderLin}, where\n\\begin{align}\n p(x|y) &= \\left( 2\\pi \\condvar \\right)^{-1\/2} \\exp \\left\\{ -\\frac{1}{2 \\cdot \\condvar} \\left( x - \\Wiener \\cdot y \\right)^2 \\right\\} \\\\\n q(x) &= \\left( 2\\pi P \\right)^{-1\/2} \\exp \\left\\{ -\\frac{x^2}{2P} \\right\\}\n\\end{align}\nThe effective rate of the decoder is given in \\eqref{eq:AchievableRateChannelDec}, where\n\\begin{equation}\n C = \\frac{1}{2} \\log \\left( 1 + \\frac{P}{\\theta}\\right)\n\\end{equation}\n\n\\subsection{Limited Feedback Channel}\nIn the foregoing analysis, we assumed that the feedback channel must be used once per main channel use. In practice, however, it may be desirable to reduce the amount of data transmitted over the feedback channel. For instance, in the case of broadcasting to multiple users, the upstream channel may have a more stringent bandwidth constraint as it must be accessed by all users. It is therefore interesting to see how lowering the frequency of the feedback affects the performance of the rateless coding scheme. Suppose that we want to use the feedback channel only once per $s$ received symbols. The maximal number of excess symbols transmitted over the main channel (i.e. the number of symbols transmitted after a decoder without feedback limitation would acknowledge the message) is $s-1$, which implies an effective rate of\n\\begin{equation}\\label{eq:RateLimitedFB}\n R = \\frac{C}{1+ \\frac{(s-1)C - \\log \\epsilon}{\\log M}}\n\\end{equation}\nFrom \\eqref{eq:RateLimitedFB} we see that limiting the feedback frequency has negligible effect if either $s \\ll (-\\log \\epsilon)\/C$ or $s \\ll (\\log M)\/C$. In the former case, the required confidence level is high, and in the latter case the messages are long. That is, in both cases the codewords are long with respect to the capacity of the channel, which implies long transmission time. 
Therefore, in both cases the excess decoding time is small compared to the entire transmission length, and the effect of limiting the feedback is negligible.\n\n\\chapter{Rateless Coding -- Unknown Channel}\n\\label{ch:UnknownChannel}\nIn Chapter \\ref{ch:KnownChannel} we assumed that the communication channel, characterized by $p(y|x)$, is known at the receiver end. Assume now that the underlying channel is unknown to the receiver. The capacity of the channel is known to be achievable in this scenario using sequential versions of the Maximal Mutual Information (MMI) decoder \\cite{Nadav}, \\cite{Telatar}. However, while these schemes provide reliable communication at a rate equal to the channel capacity, they assume that the size of the message set $M$ is infinite. In this chapter we try to answer the question of whether universal communication is feasible with a finite message set, and if so, what rates are achievable. As we shall see shortly, it is possible to achieve reliable communication over an unknown channel even when the message set is finite, and we can also bound the rate degradation due to the lack of information about the channel law.\n\n\\section{Achievable Rate for an Unknown Channel} \\label{sec:RateUnknownChannel}\nSuppose that we wish to communicate over a DMC with unknown (backward) transition probabilities\n\\begin{equation} \\label{eq:ThetaParamsDef}\n \\theta_{ij} = \\Pr\\{X=i|Y=j\\}, \\qquad i = 1,\\ldots,|\\mathcal{X}| \\qquad j = 1,\\ldots,|\\mathcal{Y}|\n\\end{equation}\nWe use a coding scheme similar to the one described in Chapter \\ref{ch:KnownChannel} with the following modification. 
Instead of using the true transition probability $p_{\\tv}(x^t|y^t)$, which is unknown to the decoder, we use a \\emph{universal} probability assignment defined as\n\\begin{equation}\\label{eq:UniversalProb}\n p_U (x^t|y^t) \\triangleq \\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^t|y^t) d\\tv'\n\\end{equation}\nwhere\n\\begin{equation}\n \\Lambda = \\left\\{\\tv' \\in [0,1]^{\\XY} \\ | \\ \\sum_{i=1}^{|\\mathcal{X}|} \\theta'_{ij} = 1, \\quad j = 1,\\ldots,|\\mathcal{Y}|\\right\\}\n\\end{equation}\nand the weight function $w(\\cdot)$ is chosen to be Jeffreys prior\\footnote{This is also a special case of the Dirichlet distribution.}, i.e.\n\\begin{equation} \\label{eq:JeffreysPrior}\n w(\\tv') = \\frac{1}{\\BXY \\sqrt{\\prod_{i,j} \\theta'_{ij}}}\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:DefBXY}\n \\BXY = \\int_{\\Lambda} \\frac{d\\tv'}{\\sqrt{\\prod_{i,j} \\theta'_{ij}}}\n\\end{equation}\n\n\\begin{remark}\nWhile the unknown channel is usually characterized by a set of transition probabilities\n\\begin{equation*}\n \\tilde{\\theta}_{ij} = \\Pr\\{Y=j|X=i\\}, \\qquad i = 1,\\ldots,|\\mathcal{X}| \\qquad j = 1,\\ldots,|\\mathcal{Y}|\n\\end{equation*}\nthe entire derivation here is done for the \\emph{backward} channel parameterization given in \\eqref{eq:ThetaParamsDef}. However, this need not concern us since the entire analysis assumes a known input prior $q(x)$, and therefore given $\\{\\tilde{\\theta}_{ij}\\}$, the parameters in \\eqref{eq:ThetaParamsDef} are well-defined. Moreover, the region $\\tilde{\\Lambda}$, induced by $\\{\\tilde{\\theta}_{ij}\\}$ and $q(x)$, is clearly contained in the region $\\Lambda$. Therefore, if a coding scheme is universal with respect to all possible realizations of the backward channel, it is also universal w.r.t. 
all possible realizations of the forward channel.\n\\end{remark}\n\nThe universal probability assignment implies the following decoding rule, which is the universal counterpart of \\eqref{eq:ChannelDecoderLin}:\n\\begin{equation} \\label{eq:ChannelDecoderUnv}\n g_n(y^n)= \\begin{cases}\n w, \\ p_U(\\mathbf{c}_w|y^n) \\geq A \\cdot q(\\mathbf{c}_w) \\\\\n 0, \\ \\text{if no such $w$ exists}\n \\end{cases}\n\\end{equation}\n\nIn Chapter \\ref{ch:KnownChannel} we used Wald's Identity to bound the expected transmission time, thereby obtaining an effective rate for the sequential decoder. Unfortunately, in the universal case $p_U(\\cdot|\\cdot)$ is not necessarily multiplicative, so $\\log p_U(\\cdot|\\cdot)$ cannot be expressed as the sum of i.i.d. random variables. Therefore, the expected transmission time in the universal case cannot be calculated directly by applying Wald's identity. Nevertheless, as we shall see shortly, we can use the results for the known channel case to obtain an upper bound for the transmission time in the universal case.\n\nThe following lemma shows that given two sequences $x^t$ and $y^t$, the universal metric cannot be too far from the conditional probability assignment that is optimally fitted to $x^t$ and $y^t$.\n\\begin{lemma} \\label{thm:UnvProbLemma}\nFor any two series $x^t$ and $y^t$ we have\n\\begin{equation}\n \\log \\frac{p_{\\htv} (x^t|y^t)}{p_U (x^t|y^t)} \\leq \\frac{\\left(\\XX-1\\right)\\YY}{2} \\log \\frac{t}{2\\pi} + \\YY \\Lkappa_{\\XX} + \\left( \\frac{\\XX^2 \\YY}{4} + \\frac{\\XY}{2} \\right) \\log e\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:ThetaOpt}\n \\htv = \\arg \\max_{\\tv' \\in \\Lambda} p_{\\tv'} (x^t|y^t)\n\\end{equation}\nand we define\n\\begin{equation}\n \\Lkappa_{\\XX} = \\log \\frac{\\Gamma(1\/2)^{\\XX}}{\\Gamma(\\XX\/2)}\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nNote that\n\\begin{align}\n p_{\\htv}(x^t|y^t)&= \\max_{\\{\\theta_{i,j}\\}} \\prod_{i,j} \\theta_{i,j}^{N(x^t,y^t;i,j)} 
\\\\\n &= \\max_{\\{\\theta_{i,1}\\}} \\prod_i \\theta_{i,1}^{N(x^t,y^t;i,1)} \\cdot \\max_{\\{\\theta_{i,2}\\}} \\prod_i \\theta_{i,2}^{N(x^t,y^t;i,2)} \\cdot \\ldots \\cdot\n \\max_{\\{\\theta_{i,\\YY}\\}} \\prod_i \\theta_{i,\\YY}^{N(x^t,y^t;i,\\YY)}\n\\end{align}\nwhere\n\\begin{equation} \\label{eq:NxyDef}\n N(x^t,y^t;i,j) = \\left| \\left\\{k \\ : \\ (x_k,y_k)=(i,j) \\right\\} \\right|\n\\end{equation}\nSince both $w(\\cdot)$ and $p_{\\tv}$ are multiplicative functions, we also have\n\\begin{align}\n p_U (x^t|y^t) &= \\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^t|y^t) d\\tv' \\\\\n &= \\int_{\\tilde{\\Lambda}} w(\\tilde{\\tv}) \\prod_i \\tilde{\\theta}_i^{N(x^t,y^t;i,1)} d\\tilde{\\tv} \\cdot \\int_{\\tilde{\\Lambda}} w(\\tilde{\\tv}) \\prod_i \\tilde{\\theta}_i^{N(x^t,y^t;i,2)} d\\tilde{\\tv} \\cdot \\ldots \\\\\n & \\cdot \\int_{\\tilde{\\Lambda}} w(\\tilde{\\tv}) \\prod_i \\tilde{\\theta}_i^{N(x^t,y^t;i,\\YY)} d\\tilde{\\tv}\n\\end{align}\nwhere\n\\begin{equation}\n \\tilde{\\Lambda} = \\left\\{\\tilde{\\tv} \\in [0,1]^{\\XX} \\ | \\ \\sum_{i=1}^{|\\mathcal{X}|} \\tilde{\\theta}_i = 1, \\right\\}\n\\end{equation}\n\nFrom \\cite[Lemma 1]{Barron} we know that\n\\begin{equation}\\label{eq:BarronsIneq}\n \\log \\frac{\\max_{\\{\\theta_{i,j}\\}} \\prod_i \\theta_{i,j}^{N(x^t,y^t;i,j)}}{\\int_{\\tilde{\\Lambda}} w(\\tilde{\\tv}) \\prod_i \\tilde{\\theta}_{i,j}^{N(x^t,y^t;i,j)} d\\tilde{\\tv}} \\leq \\frac{\\XX-1}{2} \\log \\frac{t}{2\\pi} + \\Lkappa_{\\XX} + \\left( \\frac{\\XX^2}{4} + \\frac{\\XX}{2} \\right) \\log e\n\\end{equation}\nfor all $j = 1,\\ldots,\\YY$. 
Thus, we obtain\n\\begin{align}\n \\log \\frac{p_{\\htv} (x^t|y^t)}{p_U (x^t|y^t)} &= \\log \\prod_j \\frac{\\max_{\\{\\theta_{i,j}\\}} \\prod_i \\theta_{i,j}^{N(x^t,y^t;i,j)}}{\\int_{\\tilde{\\Lambda}} w(\\tilde{\\tv}) \\prod_i \\tilde{\\theta}_{i,j}^{N(x^t,y^t;i,j)} d\\tilde{\\tv}} \\\\\n &\\leq \\frac{\\left(\\XX-1\\right)\\YY}{2} \\log \\frac{t}{2\\pi} + \\YY \\Lkappa_{\\XX} + \\left( \\frac{\\XX^2 \\YY}{4} + \\frac{\\XY}{2} \\right) \\log e \\\\\n &=\\frac{\\left(\\XX-1\\right)\\YY}{2} \\log t + \\beta\n\\end{align}\nwhere we define\n\\begin{equation}\\label{eq:BetaDef}\n \\beta \\triangleq \\YY \\Lkappa_{\\XX} + \\left( \\frac{\\XX^2 \\YY}{4} + \\frac{\\XY}{2} \\right) \\log e - \\frac{\\left(\\XX-1\\right)\\YY}{2} \\log (2\\pi)\n\\end{equation}\n\\end{proof}\n\nWe are now ready to prove the main theorem for rateless coding over an unknown channel.\n\n\\begin{theorem} \\label{thm:UnvRate}\nFor the decoder in \\eqref{eq:ChannelDecoderUnv} with $P_e \\leq \\epsilon$, the following effective rate is achievable:\n\\begin{equation}\\label{eq:UnvEffectiveRate}\n R = \\frac{C \\left( 1 - \\frac{\\hXY}{\\log M \\ln 2} \\right)}{1 + \\frac{C + \\beta - \\log \\epsilon + \\frac{\\XY}{2} \\left( \\log \\log M - \\log C - \\frac{1}{\\ln 2} \\right)}{\\log M}}\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nThe stopping time in the above-described scheme is\n\\begin{equation} \\label{eq:StoppingTimeUnv}\n T = \\min \\left\\{ t: \\frac{p_U (x^t|y^t)}{\\prod_{k=1}^t q(x_k)} > A \\right\\}.\n\\end{equation}\nSince\n\\begin{equation}\n \\log p_U (x^t|y^t) = \\log p_{\\tv} (x^t|y^t) - \\log \\frac{p_{\\tv} (x^t|y^t)}{p_U (x^t|y^t)}\n\\end{equation}\nwe have\n\\begin{align}\n T &= \\min \\left\\{ t: \\frac{p_U (x^t|y^t)}{\\prod_{k=1}^t q(x_k)} > A \\right\\} \\\\\n &= \\min \\left\\{ t: \\log \\frac{ \\prod_{k=1}^t p_{\\tv} (x_k|y_k)}{\\prod_{k=1}^t q(x_k)} > \\log A + \\log \\frac{p_{\\tv} (x^t|y^t)}{p_U (x^t|y^t)} \\right\\} \\\\\n &\\leq \\min \\left\\{ t: \\log \\frac{ 
\\prod_{k=1}^t p_{\\tv} (x_k|y_k)}{\\prod_{k=1}^t q(x_k)} > \\log A + \\log \\frac{p_{\\htv} (x^t|y^t)}{p_U (x^t|y^t)} \\right\\} \\label{eq:ThetaOptUse} \\\\\n &< \\min \\left\\{ t: \\sum_{k=1}^t \\log \\frac{p_{\\tv}(x_k|y_k)}{q(x_k)} > \\log A + \\frac{\\XY}{2} \\log t + \\beta \\right\\} \\label{eq:UnvProbLemmaUse}\n\\end{align}\nwhere \\eqref{eq:ThetaOptUse} follows since $p_{\\htv} (x_k|y_k) \\geq p_{\\tv} (x_k|y_k)$ by definition \\eqref{eq:ThetaOpt} and \\eqref{eq:UnvProbLemmaUse} follows from Lemma \\ref{thm:UnvProbLemma}.\n\nFrom the same considerations as in the proof of Theorem \\ref{Theorem1}, at the stopping time $T$ we necessarily have\n\\begin{equation} \\label{eq:UnvStoppingTimeCond}\n \\sum_{k=1}^T \\log \\frac{p_{\\tv}(X_k|Y_k)}{q(X_k)} \\leq a + \\frac{\\XY}{2} \\log T + \\beta + \\log \\frac{p_{\\tv}(X_T|Y_T)}{q(X_T)}\n\\end{equation}\nwhere we define $a = \\log A$. By \\eqref{eq:UnvStoppingTimeCond} and Wald's Identity,\n\\begin{align}\n \\E \\{T\\} &= \\frac{\\E \\left\\{\\sum_{k=1}^T \\log \\frac{p_{\\tv}(X_k|Y_k)}{q(X_k)}\\right\\}}{\\E\\left\\{\\log \\frac{p_{\\tv}(X|Y)}{q(X)}\\right\\}} \\\\\n &\\leq \\frac{a + \\frac{\\XY}{2} \\cdot \\E\\{\\log T\\} + \\beta + C}{C} \\label{eq:BoundWithLog}\n\\end{align}\nSince $\\log_2 u \\leq \\frac{u}{v \\ln 2} + \\log_2 v - \\frac{1}{\\ln 2}$ for all $u,v>0$, \\eqref{eq:BoundWithLog} implies\n\\begin{equation}\n \\E \\{T\\} \\leq \\frac{a + \\frac{\\XY}{2} \\left( \\log v - \\frac{1}{\\ln 2} \\right) + \\beta + C}{C \\left( 1 - \\frac{\\hXY}{C \\cdot v\\ln 2} \\right)}\n\\end{equation}\n\nFor $v = \\frac{\\log M}{C}$ we obtain\n\\begin{equation} \\label{eq:UnvExpStoppingTime}\n \\E \\{T\\} \\leq \\frac{a + \\frac{\\XY}{2} \\left( \\log \\log M - \\log C - \\frac{1}{\\ln 2} \\right) + \\beta + C}{C \\left( 1 - \\frac{\\hXY}{\\log M \\ln 2} \\right)}\n\\end{equation}\nwhich corresponds to the following effective rate:\n\\begin{equation}\\label{eq:UnvEffectiveRateParam}\n R = \\frac{C \\log M \\left( 1 
- \\frac{\\hXY}{\\log M \\ln 2} \\right)}{a + \\frac{\\XY}{2} \\left( \\log \\log M - \\log C - \\frac{1}{\\ln 2} \\right) + \\beta + C}\n\\end{equation}\n\nSimilarly to the derivation in Chapter \\ref{ch:KnownChannel}, we bound the error probability by\n\\begin{equation*} \\label{eq:ErrorProbUnv}\n \\Pr \\{E_w\\} \\leq (M-1) \\Pr \\left\\{ \\bigcup_{t=1}^{\\infty}\n \\left\\{ \\frac{p_U(X^t|Y^t)}{q(X^t)} > A \\right\\} \\right\\}\n\\end{equation*}\nwhere $\\{X_k\\}_{k=1}^{\\infty}$ and $\\{Y_k\\}_{k=1}^{\\infty}$ are independent sequences. Define\n\\begin{equation}\\label{eq:phidef}\n \\Phi_t = \\begin{cases}\n \\frac{p_U(X^t|Y^t)}{p_U(X^{t-1}|Y^{t-1}) \\cdot q(X_t)}, \\ \\prod_{k=1}^{t-1} \\Phi_k \\leq A \\\\\n 1, \\ \\text{otherwise}\n \\end{cases}\n\\end{equation}\nWe can see that\n\\begin{equation}\n \\bigcup_{t=1}^{\\infty}\n \\left\\{ \\frac{p_U(X^t|Y^t)}{q(X^t)} > A \\right\\}\n \\Leftrightarrow \\prod_{t=1}^{\\infty} \\Phi_t > A\n\\end{equation}\nFurthermore, we can see that $\\E\\{\\Phi_t\\}=1$ for all $t$ since\n\\begin{equation*}\n \\E \\left\\{ \\Phi_t | \\prod_{k=1}^{t-1} \\Phi_k > A \\right\\} = 1\n\\end{equation*}\nand\n\\begin{align}\n \\E \\left\\{ \\Phi_t | \\prod_{k=1}^{t-1} \\Phi_k \\leq A \\right\\}\n &= \\E \\left\\{ \\frac{p_U(X^t|Y^t)}{p_U(X^{t-1}|Y^{t-1}) \\cdot q(X_t)} \\right\\} \\\\\n &= \\E \\left\\{ \\E \\left\\{ \\frac{p_U(X^t|Y^t)}{p_U(X^{t-1}|Y^{t-1}) \\cdot q(X_t)} | X^{t-1}, Y^t \\right\\} \\right\\} \\\\\n &= \\E \\left\\{ \\frac{ \\E \\left\\{ \\int_{\\Lambda} w(\\tv') \\frac{p_{\\tv'} (x^t|y^t)}{q(x_t)} d\\tv' | X^{t-1}, Y^t \\right\\}}{\\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^{t-1}|y^{t-1}) d\\tv'} \\right\\} \\\\\n &= \\E \\left\\{ \\frac{ \\sum_{x_t \\in \\mathcal{X}} q(x_t) \\int_{\\Lambda} w(\\tv') \\frac{p_{\\tv'} (x^t|y^t)}{q(x_t)} d\\tv'}{\\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^{t-1}|y^{t-1}) d\\tv'} \\right\\} \\\\\n &= \\E \\left\\{ \\frac{ \\int_{\\Lambda} w(\\tv') \\sum_{x_t \\in \\mathcal{X}} p_{\\tv'} (x^t|y^t) 
d\\tv' }{\\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^{t-1}|y^{t-1}) d\\tv'} \\right\\} \\\\\n &= \\E \\left\\{ \\frac{ \\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^{t-1}|y^{t-1}) d\\tv' }{\\int_{\\Lambda} w(\\tv') p_{\\tv'} (x^{t-1}|y^{t-1}) d\\tv'} \\right\\} = 1\n\\end{align}\nFor an arbitrary $N$ we have\n\\begin{align}\\label{eq:ProdPhi}\n \\E \\left\\{\\prod_{t=1}^N \\Phi_t \\right\\} &= \\E \\left\\{ \\E \\left\\{\\prod_{t=1}^N \\Phi_t | \\prod_{t=1}^{N-1} \\Phi_t \\right\\} \\right\\} \\\\\n &= \\E \\{ \\Phi_N \\} \\cdot \\E \\left\\{\\prod_{t=1}^{N-1} \\Phi_t \\right\\} \\\\\n &= \\E \\left\\{\\prod_{t=1}^{N-1} \\Phi_t \\right\\} = \\ldots = 1\n\\end{align}\nSince the above holds for all $N$, we also have\n\\begin{equation}\\label{eq:InfProdPhi}\n \\E \\left\\{\\prod_{t=1}^\\infty \\Phi_t \\right\\} = 1\n\\end{equation}\nThus, similarly to the case of known channel, the error probability can be bounded by\n\\begin{align}\n \\Pr \\{E_w\\} &\\leq (M-1) \\Pr \\left\\{ \\bigcup_{t=1}^{\\infty}\n \\left\\{ \\frac{p_U(X^t|Y^t)}{q(X^t)} > A \\right\\} \\right\\} \\label{eq:BoundErrorProbUnknownCh}\\\\\n &= (M-1) \\Pr \\left\\{ \\prod_{t=1}^{\\infty} \\Phi_t > A \\right\\} \\leq \\frac{M-1}{A}\n\\end{align}\nHere again, we choose $A = M\/\\epsilon$ to obtain $P_e < \\epsilon$. 
Substituting $a = \\log A = \\log M - \\log \\epsilon$ into \\eqref{eq:UnvEffectiveRateParam} we finally get \\eqref{eq:UnvEffectiveRate}.\n\\end{proof}\n\n\\begin{remark}\nInterestingly, the upper bound on the error probability in \\eqref{eq:BoundErrorProbKnownCh}, obtained when the decoder uses the known channel law $p(x|y)$, applies to an arbitrary probability assignment $p_U(x|y)$; the only required constraint is that the latter sums to unity.\n\\end{remark}\n\n\\begin{remark}\nAs in the case of a known channel, we can use a randomized decoder here to obtain the following rate:\n\\begin{equation}\\label{eq:UnvEffectiveRateRand}\n R = \\frac{C \\left( 1 - \\frac{\\hXY}{\\log M \\ln 2} \\right)}{1 + \\frac{C + \\beta - \\log \\delta + \\frac{\\XY}{2} \\left( \\log \\log M - \\log C - \\frac{1}{\\ln 2} \\right)}{\\log M}} \\cdot \\frac{1-\\delta}{1-\\epsilon}\n\\end{equation}\nfor all $0 < \\delta < \\epsilon$. As we mentioned in Section \\ref{sec:CodingThmKnownChannel}, if the required error probability is small, randomization should not be applied. However, if the error probability constraint is loose enough, a better rate may be obtained by optimizing $\\delta$ in \\eqref{eq:UnvEffectiveRateRand}.\n\\end{remark}\n\n \\section{Discussion}\n \\subsection{Comparison to the Known Channel Case}\n Having obtained achievable rates for the cases of both known and unknown channels, it is interesting to compare these results and evaluate the rate degradation due to the lack of channel knowledge.
For the case of a known channel, the effective rate at \\eqref{eq:AchievableRateChannelDec} can be approximated by\n \\begin{equation}\n R \\thickapprox C \\left(1 - \\frac{C- \\log \\epsilon}{\\log M} \\right)\n \\end{equation}\n\n For the case of an unknown channel, we can approximate \\eqref{eq:UnvEffectiveRate} by\n \\begin{align}\n R_U &\\thickapprox C \\left(1 - \\frac{C- \\log \\epsilon}{\\log M} \\right) \\nonumber \\\\\n &- C \\left( \\frac{\\XY}{2} \\frac{\\log \\log M}{\\log M} + \\frac{(\\hXY-1)\/\\ln 2 + \\beta + \\log C}{\\log M} \\right) + O \\left( \\frac{1}{\\log^2 M}\\right)\n \\end{align}\n\n Hence, the penalty for the lack of channel knowledge amounts to\n \\begin{align} \\label{eq:RateDegradation}\n R-R_U &= C \\left( \\frac{\\XY}{2} \\frac{\\log \\log M}{\\log M} + \\frac{(\\hXY-1)\/\\ln 2 + \\beta + \\log C}{\\log M} \\right) \\\\\n &+ O \\left( \\frac{1}{\\log^2 M}\\right) \\nonumber\n \\end{align}\n\n The leading term in the latter expression behaves as $O(\\log \\log M \/ \\log M) = O(\\log K\/K)$, scaled by the product of the input and output alphabet cardinalities of the channel. It is interesting to compare this result with known results from universal source coding, where the \\emph{redundancy}\\footnote{The excess of the average codeword length above the entropy of the source.} is dominated by a term that behaves as $O(\\log n\/n)$, where $n$ is the source length, scaled by the cardinality of the source alphabet \\cite{UnvPrediction}.\n\n \\subsection{Induced Error Exponent}\n Let us now examine Theorem \\ref{thm:UnvRate} in light of the previous results.
Equation \\eqref{eq:UnvEffectiveRate} implies the following error exponent:\n \\begin{equation}\\label{eq:UnvErrorExp}\n - \\frac{\\log \\epsilon}{\\E\\{T\\}} = C - R - \\frac{\\XY}{2} \\cdot \\frac{\\log \\log M}{\\log M} + O\\left(\\frac{1}{\\log M}\\right)\n \\end{equation}\n As in the case of a known channel, we see that the error exponent is a linear function of the rate, but an additional term of order $O(\\log \\log M \/ \\log M)$ is added. Here again, we interpret this term as a penalty for the lack of channel knowledge at the receiver. Furthermore, by taking $M \\to \\infty$, we can also see that \\eqref{eq:UnvErrorExp} coincides with \\cite[Proposition 1]{Telatar}.\n\n \\subsection{Training and Channel Estimation}\n In many practical applications, communication over an unknown channel is done by means of channel estimation. In this approach, the transmission includes predefined \\emph{training} signals, which are known to the receiver and are used to estimate the channel parameters. As an alternative to the universal communication scheme introduced in this chapter, we can use the following method. Prior to any message transmission, the transmitter sends a training sequence, which the receiver uses to estimate the channel. After the training phase, the transmitter sends the message. The receiver uses the \\emph{estimated} channel parameters to decode the message, using, for instance, the decoding rule at \\eqref{eq:ChannelDecoderLog}. A drawback of this approach is that even after the channel estimation phase, the residual error in the estimated channel parameters will degrade the performance of the decoder. Furthermore, improving the channel estimation accuracy requires long training sequences, which introduce non-negligible overhead to the transmission time.
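To make the last point concrete, the following Monte Carlo sketch (Python; an assumed BSC with illustrative parameters, not a scheme from this work) estimates the residual error of the training phase as a function of the training length $n$; the accuracy improves only as $1\/\\sqrt{n}$, so each extra decimal digit of accuracy costs a hundredfold increase in overhead.

```python
import random
from statistics import mean

def estimate_bsc_crossover(delta, n, rng):
    """Estimate the crossover probability of a BSC from n training bits
    (the receiver knows the transmitted training sequence, so each bit
    reveals whether the channel flipped it)."""
    flips = sum(1 for _ in range(n) if rng.random() < delta)
    return flips / n

rng = random.Random(0)
delta = 0.1  # true (unknown) crossover probability

# Mean absolute estimation error over many training phases, for two lengths.
err_short = mean(abs(estimate_bsc_crossover(delta, 100, rng) - delta)
                 for _ in range(500))
err_long = mean(abs(estimate_bsc_crossover(delta, 10_000, rng) - delta)
                for _ in range(500))
# A 100x longer training sequence reduces the residual error only ~10x,
# while adding 100x the overhead to the transmission time.
```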
Clearly, using training will not lead to the convergence rate of \\eqref{eq:RateDegradation}.\n\n\\chapter{Extensions}\n\\label{ch:Extensions}\n\\section{Joint Source-Channel Coding} \\label{sec:JointSC}\nIn the previous chapters we assumed that the messages conveyed over the channel were equiprobable, which is the case if, for instance, the source of information has been compressed and the message $W$ is the output of the source encoder. Assume now that the messages have arbitrary probabilities $\\{\\pi(1),\\ldots,\\pi(M)\\}$. Each message now contains a different amount of information, which would translate into different codeword lengths at the output of the source encoder. However, in rateless codes the codeword assigned to each message is always infinite, and the actual codeword length is determined by the decoder. (The effective length of the message depends on the decoder's stopping time.) It is therefore tempting to use rateless codes for an uncompressed source and try to achieve a good compression rate and reliable communication simultaneously. To simplify matters, we begin by tackling the case of a known channel and postpone the analysis of an unknown channel to Section \\ref{sec:CompleteUnv}. We use the following generalized version of the decoder \\eqref{eq:ChannelDecoderLog}.\n\\begin{equation} \\label{eq:JointScDec}\n g_n(y^n)= \\begin{cases}\n w, \\ z_{w,1}+\\ldots+z_{w,n} \\geq a_w \\\\\n 0, \\ \\text{if no such $w$ exists}\n \\end{cases}\n\\end{equation}\nwhere $a_w$ is a threshold that depends on the message $w$, and we define $a_w= \\log A_w$.
Repeating the derivation of the error probability from the previous section, we get by Markov's inequality\n\\begin{equation}\n \\Pr\\{E_w\\} \\leq \\sum_{w' \\neq w} \\frac{1}{A_{w'}}\n\\end{equation}\nBy choosing\n\\begin{equation}\n A_w = \\frac{1}{\\epsilon \\cdot \\pi(w)} \\qquad \\forall w \\in \\mathcal{W}\n\\end{equation}\nwe get a uniform bound on the error probability\n\\begin{equation}\n \\Pr\\{E_w\\} \\leq \\epsilon \\cdot \\sum_{w' \\neq w} \\pi(w') \\leq \\epsilon\n\\end{equation}\nwhich also implies\n\\begin{equation}\n P_e \\leq \\epsilon\n\\end{equation}\n\nThus, for an appropriate choice of message-dependent threshold values, the average probability of error for the entire message set is bounded by $\\epsilon$. Recall, however, that the effective rate depends on the threshold value and therefore needs to be reexamined here. When different thresholds are used for different messages, the stopping time depends on which message crosses the threshold. We can therefore use Wald's equation \\eqref{eq:WaldC} conditioned on the true message:\n\\begin{equation}\n \\E\\{T|W=w\\} \\leq \\frac{a_w + C}{C}\n\\end{equation}\nwhere\n\\begin{equation}\n a_w = \\log A_w = - \\log \\pi(w) - \\log \\epsilon\n\\end{equation}\nAveraging over the entire message set, we have\n\\begin{align}\n \\E\\{T\\} &= \\E\\{\\E\\{T|W\\}\\} \\leq \\frac{\\E\\{a_{_W}\\} + C}{C} \\nonumber \\\\\n &= \\frac{\\E\\{- \\log \\pi(W)\\} - \\log \\epsilon + C}{C} \\nonumber \\\\\n &= \\frac{H(W) - \\log \\epsilon + C}{C} \\label{eq:JointScET}\n\\end{align}\nwhere $H(W)$ is the entropy of the source in bits per symbol. Let us now examine \\eqref{eq:JointScET} in a practical setting. Suppose that we wish to convey blocks of $K$ source bits with a fixed probability of error $\\epsilon > 0$. Since every source symbol contains $\\log M$ bits, $K \/ \\log M$ source symbols will be needed.
Thus, the rate at which source bits can be conveyed over the channel will be\n\\begin{align}\n R &= \\frac{K}{\\E\\{T\\}} \\geq \\frac{K \\cdot C}{\\frac{K \\cdot H(W)}{\\log M} - \\log \\epsilon + C} \\\\\n &= \\frac{C}{\\mathscr{H}(W) + \\frac{C - \\log \\epsilon}{K}} \\\\\n &= \\frac{C}{\\mathscr{H}(W)} \\cdot \\frac{1}{1 + \\frac{C - \\log \\epsilon}{\\mathscr{H}(W) \\cdot K}} \\\\\n &= \\frac{C}{\\mathscr{H}(W)} \\cdot \\left(1 - O \\left( \\frac{1}{K} \\right) \\right) \\label{eq:JointScRateAsym}\n\\end{align}\nwhere we define $\\mathscr{H}(W)=H(W)\/ \\log M$ as the per-bit entropy of the source.\n\nNote that the encoder used here, as well as the codebook, are the same ones defined in Chapter \\ref{ch:DefinitionsAndNotation} and the only change is in the definition of the decoder. The encoder is uninformed of the statistics of the source and the capacity of the channel, yet the rate approaches the optimum rate achievable by an informed encoder. We note the practical implication of such a scheme: the compression algorithms can be implemented and maintained at the decoder, while the encoder remains simple and source-independent.\n\n\\section{Source Coding with Side Information} \\label{sec:SI}\nSuppose now that the source of information emits independent pairs of messages $(W_1,W_2) \\in \\mathcal{W}_1 \\times \\mathcal{W}_2$ according to a probability distribution $\\pi_{_{W_1,W_2}}(w_1,w_2)$, which are encoded separately and pass through a noiseless channel. Suppose that $R_1$ and $R_2$ are the coding rates of $W_1$ and $W_2$, respectively. By the Slepian-Wolf theorem, if $W_1$ is encoded with rate $R_1 \\geq H(W_1)$, then $W_2$ can be encoded independently with $R_2 = H(W_2|W_1)$. (This rate pair is a corner point in the achievable rate region.) We will now show that using rateless codes, we can approach this rate with some redundancy due to the usage of a finite message set.
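Before turning to the two-encoder construction, we note that the threshold choice of Section \\ref{sec:JointSC} is easy to sanity-check numerically. The sketch below (Python; the message distribution is a hypothetical example) verifies that the union bound stays below $\\epsilon$ and that the average threshold equals $H(W) - \\log \\epsilon$, the quantity driving \\eqref{eq:JointScET}.

```python
from math import log2

# Hypothetical message pmf; any distribution works. eps is the target
# error probability; thresholds follow a_w = -log2 pi(w) - log2 eps.
pi = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}
eps = 0.01

a = {w: -log2(p) - log2(eps) for w, p in pi.items()}

# Union bound on Pr{E_w}: sum over wrong messages of 1/A_{w'}
# = eps * sum_{w' != w} pi(w'), which never exceeds eps.
union_bound = {w: sum(eps * pi[v] for v in pi if v != w) for w in pi}

# The pi-average of the thresholds equals H(W) - log2(eps).
H = -sum(p * log2(p) for p in pi.values())
avg_a = sum(pi[w] * a[w] for w in pi)
```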
The encoder of $W_1$ assigns to each message in $\\mathcal{W}_1$ an infinite codeword $\\mathbf{c}_{w_1} \\in \\{0,1\\}^\\infty$, $w_1 = 1,\\ldots,|\\mathcal{W}_1|$, and transmits it over the channel. The encoder of $W_2$ operates similarly to that of $W_1$ and independently of it, with codewords $\\mathbf{d}_{w_2} \\in \\{0,1\\}^\\infty$, $w_2 = 1,\\ldots,|\\mathcal{W}_2|$. The codewords are assumed to be i.i.d.\\ Bernoulli$(1\/2)$ sequences. To reconstruct $W_1$, the decoder can use the decision rule \\eqref{eq:JointScDec} to obtain an error probability of\n\\begin{equation} \\label{eq:BoundErrorW1}\n \\Pr\\{\\hat{W_1} \\neq W_1\\} \\leq \\frac{\\epsilon}{2}\n\\end{equation}\nSince a binary code is used and the channel is noiseless, we have $C=1$, so \\eqref{eq:JointScET} implies that the expected transmission time for $W_1$ satisfies\n\\begin{equation} \\label{eq:SlepianWolfR1}\n R_1 = \\E\\{T_1\\} \\leq H(W_1) - \\log \\frac{\\epsilon}{2} + 1\n\\end{equation}\nNote that the coding rate is defined here as the average codeword length for the message set.
Therefore, the effective rate equals the expected transmission time, rather than its reciprocal as in channel coding.\n\nHaving decoded message $W_1$, the decoder uses the following decision rule to reconstruct $W_2$:\n\\begin{equation}\n g^{(2)}_n(y^n,w_1)= \\begin{cases}\n w_2, \\ z_{w_2,1}+\\ldots+z_{w_2,n} \\geq a(w_1,w_2) \\\\\n 0, \\ \\text{if no such $w_2$ exists}\n \\end{cases}\n\\end{equation}\nwhere\n\\begin{equation}\n z_{w_2,k} = \\log \\frac{p(y_k|d_{w_2,k})}{p(y_k)}, \\qquad k=1,\\ldots,n\n\\end{equation}\nA derivation of the error probability similar to that of Section \\ref{sec:JointSC} yields\n\\begin{equation}\n \\Pr\\{\\hat{W_2} \\neq w_2 \\ | \\ W_1 = w_1, W_2 = w_2 \\} \\leq \\sum_{w_2' \\neq w_2} \\frac{1}{A(w_1,w_2')}\n\\end{equation}\nWe choose\n\\begin{equation}\n A(w_1,w_2) = \\frac{1}{\\epsilon\/2 \\cdot \\pi_{_{W_2|W_1}}(w_2|w_1)}\n\\end{equation}\nso that\n\\begin{equation}\n \\Pr\\{\\hat{W_2} \\neq w_2 \\ | \\ W_1 = w_1, W_2 = w_2 \\} \\leq \\epsilon\/2 \\cdot \\sum_{w_2' \\neq w_2} \\pi_{_{W_2|W_1}}(w_2'|w_1) \\leq \\frac{\\epsilon}{2}\n\\end{equation}\nTherefore,\n\\begin{equation} \\label{eq:BoundErrorW2}\n \\Pr\\{\\hat{W_2} \\neq W_2\\} \\leq \\frac{\\epsilon}{2}\n\\end{equation}\nUsing \\eqref{eq:BoundErrorW1}, \\eqref{eq:BoundErrorW2} and the union bound, we have\n\\begin{equation}\n \\Pr\\{\\hat{W_1} \\neq W_1 \\cup \\hat{W_2} \\neq W_2\\} \\leq \\epsilon\n\\end{equation}\n\nSince $a(w_1,w_2) = - \\log \\epsilon\/2 - \\log \\pi_{_{W_2|W_1}}(w_2|w_1)$, we can use Wald's equation for the stopping time of decoding $W_2$ to obtain\n\\begin{align}\n R_2 &= \\E\\{T_2\\} = \\E\\{\\E\\{T_2|W_1,W_2\\}\\} \\nonumber \\\\\n &\\leq \\E\\{a(W_1,W_2)\\} + 1 \\nonumber \\\\\n &= \\E\\{- \\log \\pi_{_{W_2|W_1}}(W_2|W_1)\\} - \\log \\frac{\\epsilon}{2} + 1 \\nonumber \\\\\n &= H(W_2|W_1) - \\log \\frac{\\epsilon}{2} + 1 \\label{eq:SlepianWolfR2}\n\\end{align}\n\nCombining \\eqref{eq:SlepianWolfR1} and \\eqref{eq:SlepianWolfR2}, we
get\n\\begin{equation} \\label{eq:SlepianWolfSumRate}\n R_1 + R_2 \\leq H(W_1,W_2) - 2 \\log \\frac{\\epsilon}{2} + 2\n\\end{equation}\n\nSimilarly to Section \\ref{sec:JointSC}, if we take blocks of $K$ source bits and a fixed error probability $\\epsilon > 0$, we obtain\n\\begin{equation}\n R_1 + R_2 \\leq H(W_1,W_2) \\cdot \\left(1 + O \\left( \\frac{1}{K} \\right) \\right)\n\\end{equation}\n\n\\section{Complete Universality} \\label{sec:CompleteUnv}\n\nWe now consider the case of joint source-channel coding of an unknown source over an unknown channel, with an unknown amount of side information at the receiver. First, we combine the results of the previous sections to obtain a communication scheme for a source with unknown statistics over an unknown channel. As a straightforward generalization of the universal channel coding scheme of Section \\ref{sec:RateUnknownChannel}, we use a fusion of the decoders \\eqref{eq:ChannelDecoderUnv} and \\eqref{eq:JointScDec}, i.e.\n\\begin{equation} \\label{eq:JointScUnvDec}\ng_n(y^n)= \\begin{cases}\n w, \\ p_U(\\mathbf{c}_w|y^n) \\geq A_w \\cdot q(\\mathbf{c}_w) \\\\\n 0, \\ \\text{if no such $w$ exists}\n \\end{cases}\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:Aw}\n A_w = \\frac{1}{\\epsilon \\cdot \\pi(w)}\n\\end{equation}\nDerivations similar to those in Sections \\ref{sec:RateUnknownChannel} and \\ref{sec:JointSC} yield the following rate for an uncompressed source $W \\in \\{1,\\ldots,M\\}$ over an unknown channel with capacity $C$:\n\\begin{equation}\\label{eq:JointScUnvRate}\n R = \\frac{C \\left( 1 - \\frac{\\hXY}{\\log M \\ln 2} \\right)}{\\mathscr{H}(W) + \\frac{C + \\beta - \\log \\epsilon + \\frac{\\XY}{2} \\left( \\log \\log M - \\log C - \\frac{1}{\\ln 2} \\right)}{\\log M}}\n\\end{equation}\nwhere $\\mathscr{H}(W)$ is defined in Section \\ref{sec:JointSC}.
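To see how \\eqref{eq:JointScUnvRate} behaves, the short sketch below (Python) evaluates it for illustrative, assumed parameter values, taking $\\hXY = \\XY\/2$ consistently with the bounds above; the rate climbs toward the informed-decoder limit $C\/\\mathscr{H}(W)$ as the number of message bits $\\log M$ grows.

```python
from math import log, log2

def unv_jsc_rate(logM, C=1.0, H=0.5, XY=4, beta=1.0, eps=0.01):
    """Evaluate the universal joint source-channel rate for assumed,
    illustrative parameters: capacity C, per-bit source entropy H,
    alphabet-size product XY = |X||Y| (with hXY = XY/2), and slack beta."""
    hXY = XY / 2
    num = C * (1 - hXY / (logM * log(2)))
    den = H + (C + beta - log2(eps)
               + (XY / 2) * (log2(logM) - log2(C) - 1 / log(2))) / logM
    return num / den

# As log M grows, the rate approaches C / H(W) = 2 bits per channel use.
rates = [unv_jsc_rate(k) for k in (100, 1_000, 10_000, 100_000)]
```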
We note that while the encoder can be ignorant of the source statistics, the decoder needs to know $\\pi(w), \\ w \\in \\mathcal{W}$.\n\nWe now go one step further and assume that the decoder has no knowledge of the statistics of the source or the channel. Suppose that the source $S$ generates sequences of $L$ symbols from an alphabet $\\mathcal{S}$, drawn i.i.d. according to a set of $|\\mathcal{S}|$ unknown probabilities $\\gv$. Each sequence is encoded as one message, hence $M = |\\mathcal{S}|^L$. Instead of using the set of thresholds \\eqref{eq:Aw}, which depends on the unknown probabilities, we use a universal probability measure \\cite{UnvPrediction}\n\\begin{equation}\n \\ph(s^L) = \\int u(\\gv) \\pi_{\\gv}(s^L) \\, d\\gv\n\\end{equation}\nso that\n\\begin{equation}\n a_w = \\log A_w = -\\log \\epsilon -\\log \\ph (s^L)\n\\end{equation}\nIf the weight function $u(\\cdot)$ is chosen to be the Jeffreys prior, we get (see \\cite[Eq.17]{UnvPrediction})\n\\begin{equation}\n \\E\\{a_w\\} = -\\log \\epsilon + H(W) + \\frac{|\\mathcal{S}|-1}{2} \\log \\frac{L}{2 \\pi e} + O(1)\n\\end{equation}\nHence, similarly to \\eqref{eq:JointScUnvRate} we can achieve the following rate\n\\begin{equation}\\label{eq:CompUnvRate}\n R = \\frac{C \\left( 1 - \\frac{\\hXY}{\\log M \\ln 2} \\right)}{\\hat{\\mathscr{H}}(W) + \\frac{C + \\beta - \\log \\epsilon + \\frac{\\XY}{2} \\left( \\log \\log M - \\log C - \\frac{1}{\\ln 2} \\right)}{\\log M}}\n\\end{equation}\nwhere\n\\begin{equation}\n \\hat{\\mathscr{H}}(W) = \\mathscr{H}(W) + \\frac{|\\mathcal{S}|-1}{2} \\frac{\\log L}{\\log M} + O\\left(\\frac{1}{\\log M}\\right)\n\\end{equation}\nRecall that $L = \\log_{|\\mathcal{S}|} M$, so\n\\begin{equation}\\label{eq:EmpEntropy}\n \\hat{\\mathscr{H}}(W) = \\mathscr{H}(W) + \\frac{|\\mathcal{S}|-1}{2} \\frac{\\log \\log M}{\\log M} + O\\left(\\frac{1}{\\log M}\\right)\n\\end{equation}\nBy plugging \\eqref{eq:EmpEntropy} into \\eqref{eq:CompUnvRate} we get\n\\begin{equation}\\label{eq:CompUnvRateAsym}\n R
= \\frac{C}{\\mathscr{H}(W)} \\cdot \\left(1 - O \\left( \\frac{\\log K}{K} \\right) \\right) + O \\left( \\frac{1}{K} \\right)\n\\end{equation}\nwhere $K = \\log M$ is the number of encoded bits. Comparing \\eqref{eq:CompUnvRateAsym} to \\eqref{eq:JointScRateAsym}, we see that the leading term is unchanged and equals the optimal rate achievable by separated source-channel coding. However, the lack of information affects the rate of convergence, which is now dominated by an $O \\left( \\frac{\\log K}{K} \\right)$ term, as opposed to $O \\left( \\frac{1}{K} \\right)$ for an informed decoder.\n\nThe implications of the latter result are far-reaching. We have shown that even if the statistics of both the channel and the source are unknown to the decoder, rateless coding not only achieves the best source-channel coding rate as $M \\to \\infty$, but also attains the same asymptotic rate as a rateless scheme with an informed decoder. This observation has been made in \\cite[Ch.4]{Nadav} for infinitely large message sets. The results obtained here coincide with those of \\cite{Nadav}, and also quantify the redundancy caused by the lack of information on the source and the channel, and by the use of finite blocks.\n\n\\subsection*{Unknown Side Information at the Decoder}\nSimilarly to Section \\ref{sec:SI}, if the source contains side information $V$ that is known non-causally at the decoder, we can further improve the communication rate. Combining the technique from Section \\ref{sec:SI} with the derivation above, we obtain the following rate for universal joint source-channel coding with side information at the decoder:\n\\begin{equation}\\label{eq:CompUnvRateSI}\n R = \\frac{C}{\\mathscr{H}(W|V)} \\cdot \\left(1 - O \\left( \\frac{\\log K}{K} \\right) \\right) + O \\left( \\frac{1}{K} \\right)\n\\end{equation}\nwhere $\\mathscr{H}(W|V)$ is the conditional entropy of the source $W$ given the side information $V$, normalized by $\\log M$.
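For a binary source ($|\\mathcal{S}|=2$), the Jeffreys-prior mixture used above admits a simple sequential form, the Krichevsky--Trofimov (KT) estimator. The sketch below (Python; the input sequence is an arbitrary example) computes the resulting ideal code length and checks that its redundancy over the empirical entropy lies between $0$ and $\\frac{1}{2}\\log_2 L + 1$, matching the $\\frac{|\\mathcal{S}|-1}{2}\\log L$ term in \\eqref{eq:EmpEntropy}.

```python
from math import log2

def kt_neglog2(bits):
    """Ideal code length -log2 p_hat(s^L) under the KT (Jeffreys-mixture)
    probability, computed sequentially via
    p(1 | n0, n1) = (n1 + 1/2) / (n0 + n1 + 1)."""
    n0 = n1 = 0
    codelen = 0.0
    for b in bits:
        p1 = (n1 + 0.5) / (n0 + n1 + 1.0)
        codelen -= log2(p1 if b else 1.0 - p1)
        n1 += b
        n0 += 1 - b
    return codelen

bits = [1] * 16 + [0] * 48            # arbitrary binary sequence, L = 64
L = len(bits)
p = sum(bits) / L
H_emp = -(p * log2(p) + (1 - p) * log2(1 - p))  # empirical entropy per bit

# Pointwise redundancy of the mixture over the best i.i.d. fit:
# 0 <= redundancy <= (1/2) log2 L + 1 for the binary KT estimator.
redundancy = kt_neglog2(bits) - L * H_emp
```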
Since $\\mathscr{H}(W|V) \\leq \\mathscr{H}(W)$, the side information improves the rate, even if the encoder is uninformed of the amount (or the existence) of the side information.\n\n\\chapter{Summary}\n\\label{ch:Summary}\nIn this study we developed and analyzed several communication schemes that are all based on the concept of \\emph{rateless codes}. In rateless codes, each codeword has an infinite length and the decoding length is dynamically determined by the confidence level of the decoder. Throughout this study, we allowed the coding schemes to have a fixed error probability, while aiming to achieve the shortest mean transmission time, or equivalently, the highest rate. This approach is different from the prevalent one, in which the communication rate is held fixed and the codebook is enlarged indefinitely so that the error probability vanishes. We demonstrated how rateless codes, combined with sequential decoding, can be used in basic communication scenarios such as communication over a DMC, but can also be used to solve more complex problems, such as communication over an unknown channel. The decoding methods introduced here enabled us to obtain results for a finite message set, while previous studies were restricted to asymptotic results.\n\nWe began by describing rateless codes and surveying some previous results related to such coding schemes. Then, we introduced the sequential decoder that uses a known channel law. Using Wald's theory and the notion of stopping time, we obtained an upper bound on the mean transmission time for a fixed error probability, and the resulting effective rate was shown to approach the capacity of the channel as the size of the message set, $M$, grows. We also obtained an upper bound on the rate for a fixed error probability. The upper bound is not tight for small $M$, but it converges to the achievable rate as $M \\to \\infty$.
We conjecture that a stronger converse can be found, which will be tighter also in the non-asymptotic realm. Although we developed the above-mentioned scheme for a DMC, we also demonstrated that it is applicable to a memoryless Gaussian channel.\n\nFor the case of an unknown channel we introduced a novel decoding metric. Unlike previous studies, the universal decoding metric is not based on empirical mutual information, but on a mixture probability assignment. For an appropriate choice of mixture, we were able to bound the difference between the universal metric and the one used by an informed decoder. Thus, we used the results obtained for an informed decoder to upper bound the mean transmission time in the universal case.\n\nWe then applied rateless coding to more advanced scenarios. We showed that, with only a minor change in the sequential decoder, rateless codes can easily be used as a joint source-channel coding scheme. We also used rateless coding for source coding with side information, obtaining the optimum Slepian-Wolf rate for this setting. Finally, we combined the techniques for universal channel coding, joint source-channel coding and source coding with side information and demonstrated that even without any information on the source, the channel or the amount (or even the existence) of side information---reliable communication is feasible, and the rate can be analyzed even for a finite message set.\n\n\\bibliographystyle{IEEETran}\n