diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzidot" "b/data_all_eng_slimpj/shuffled/split2/finalzzidot" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzidot" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n{\\it Magnetic resonance imaging} (MRI) is a widely used imaging method in clinical applications and research. It is based on measuring the magnetic signal resulting from {\\it nuclear magnetic resonance} (NMR) of $\\rm ^1_1H$ nuclei (protons). In NMR, the magnetization rotates around an applied magnetic field $\\vec{B}$ at the proton Larmor frequency $f_{\\rm L}$, which is proportional to $B$ \\cite{Abragam}. This behavior of the magnetization is often referred to as {\\it precession} due to the direct connection to the quantum mechanical precession of nuclear spin angular momentum. \n\nConventionally, the magnetic precession signal has been detected using induction coils. The voltage induced in a coil by an oscillating magnetic field is proportional to the frequency of the oscillation, leading to vanishing signal amplitudes as $f_{\\rm L}$ approaches zero. Today, clinical MRI scanners indeed use a high main static field $\\vec B_0$; typically $B_0 = 3$\\unit{T}, corresponding to a frequency $f_0 = 128$\\unit{MHz}. However, when the signal is detected using magnetic field (or flux) sensors with a frequency-independent response, this need for high frequencies disappears. Combined with the so-called prepolarization technique for signal enhancement, highly sensitive magnetic field detectors, typically those based on {\\it superconducting quantum-interference devices} (SQUIDs), provide an NMR signal-to-noise ratio (SNR) that is independent of $B_0$ \\cite{Clarke2007}. In recent years, there has been growing interest in ultra-low-field (ULF) MRI, where the signal is usually measured in a field on the order of Earth's magnetic field ($B_0 \\sim 10$--$100$\\unit{\\textmu T}). \n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=.98\\columnwidth]{array_schem_3d.pdf}\n \\vspace{-2mm}\n\t\\caption{Helmet-type sensor array geometries consisting of (a) triple-sensor modules at 102 positions similar to standard Elekta\/Neuromag MEG configurations and (b) an array with larger overlapping pickup coils for increased performance. Magnetometers are marked in green and gradiometers in red or blue; see Sec.~\\ref{ssPickups} for descriptions of pickup coils. (The sample head shape is from MNE-Python \\cite{MNESoftware}.)}\n\t\\label{figGradArrays}\n\\end{figure}\n\nA number of ULF-MRI-specific imaging techniques have emerged, including rotary-scanning acquisition (RSA) \\cite{Hsu2016}, temperature mapping \\cite{VesanenTemperature2013}, signal-enhancing dynamic nuclear polarization \\cite{KRISSLee2010, Buckenmaier2018}, imaging of electric current density (CDI) \\cite{Vesanen2014, Nieminen2014, Hommen2019}, and making use of significant differences in NMR relaxation mechanisms at ULF compared to tesla-range fields \\cite{Lee2005, Hartwig2011, Vesanen2013Temperature}. Several groups have also investigated possibilities to directly detect changes in the NMR signal due to neural currents in the brain \\cite{KrausJr2008, Korber2013, Xue2006, KRISSKim2014} and electrical activation of the heart \\cite{KRISSKim2012}. A further notable field of research now focuses on combining ULF MRI with magnetoencephalography (MEG). 
In MEG, an array of typically $\\sim 100$ sensors \\cite{Lounasmaa2004, Vrba2002, Pizzella2001} is arranged in a helmet-shaped configuration around the head (see Fig.~\\ref{figGradArrays}) to measure the weak magnetic fields produced by electrical activity in the brain \\cite{Hamalainen1993, DelGratta2001}. SQUID sensors tailored for ULF MRI can typically also be used for MEG, and performing MEG and MRI with the same device can significantly improve the precision of localizing brain activity \\cite{MEGMRI2013,Magnelind2011co, Luomahaara2018, Roadmap2016, Makinen2019}. \n\nIn typical early ULF-MRI setups \\cite{Clarke2007}, the signal was detected by a single dc SQUID coupled to a superconducting pickup coil wound in a gradiometric configuration that rejects noise from distant sources. In this case, the maximum size of the imaging field of view (FOV) is roughly given by the diameter of the pickup coil. With large diameters such as 60\\unit{mm}, field sensitivities better than 1\\unit{fT$\/\\sqrt{\\rm Hz}$} have been achieved with a reasonable FOV. A large coil size, however, does have its drawbacks, including issues such as high inductance and increased requirements in dynamic range. Therefore, the most straightforward way to increase the available FOV and the SNR is to use an array of sensors. In addition, as is well known in the context of MEG \\cite{Uusitalo1997ssp, Vrba2002, Taulu2005}, a multi-channel measurement allows forming so-called software gradiometers and more advanced signal processing techniques to reduce noise that can be optimized separately for different noise environments. In ULF-MRI, this can even be done individually for each voxel (volume element) position within the imaging target, as will be shown later. While single-channel systems are still common, several groups have already been using arrays of sensors.\n\n\nAlso in conventional MRI, so-called parallel MRI is performed using an array of tens of induction coils, allowing full reconstruction of images from a reduced number of data acquisitions \\cite{Pruessmann1999, Larkman2007}. There are studies on designing arrays of induction coils for parallel MRI \\cite{Ohliger2006} with an emphasis on minimizing artefacts caused by the reduced number of acquisitions. At the kHz frequencies of ULF MRI, the dominant noise mechanisms are significantly different, and one needs to consider, for instance, electromagnetic interference from power lines and electrical equipment, thermal noise from the radiation shield of the cryostat required for operating the superconducting sensors, as well as noise and transients from other parts of the ULF MRI system structure and electronics \\cite{Zevenhoven2014amp}. Studies on the design of arrays for MEG \\cite{Vrba2002,Ahonen1993,Nurminen2014thesis},\nwhich mainly focus on the accuracy of localizing brain activity, are also not applicable to ULF MRI. In terms of single-sensor ULF-MRI signals, there are existing studies of the depth sensitivity \\cite{Burmistrov2013} and SNR as a function of frequency with different detector types \\cite{Myers2007}. 
\n\nPreviously, in Ref.~\\cite{Zevenhoven2011}, we presented approaches for quantitative comparison of sensor arrays in terms of the combined performance of the sensors, the results indicating that the optimum sensor for ULF MRI of the brain would be somewhat larger than typical MEG sensors.\nExtending and refining those studies, we aim to provide a fairly general study of the optimization of ULF-MRI array performance, with special attention to SNR and imaging the human head.\n\nWe begin by defining relevant quantities and reviewing basic principles of ULF MRI in Sec.~\\ref{sBasics}. Then, we analyze the effects of sensor geometry and size with different noise mechanisms (Sec.~\\ref{sSingleSensor}), advancing to sensor arrays (Sec.~\\ref{sArrays}). Finally, we show computed estimations of array SNR as functions of pickup size and number, and provide more detailed comparison of spatial SNR profiles with different array designs (Secs.~\\ref{sMethods} and \\ref{sResults}).\n\n\\section{SQUID-detected MRI} \\label{sBasics}\n\n\\subsection{Signal model and single-channel SNR} \\label{ssULFMRI}\n\nIn contrast to conventional MRI, where the tesla-range main field is static and accounts for both polarizing the sample and for the main readout field, ULF MRI employs switchable fields. Dedicated electronics \\cite{Zevenhoven2014amp} are able to ramp on and off even the main field $\\vec B_0$ with an ultra-high effective dynamic range. An additional pulsed prepolarizing field $\\vec{B}_{\\rm p}$ magnetizes the target before signal acquisition. Typically, a dedicated coil is used to generate $\\vec{B}_{\\rm p}$ ($B_{\\rm p} \\sim 10$--$100$\\unit{mT}) in some direction to cause the proton bulk magnetization $\\vec{M}(\\vec r\\,)$ to relax with a longitudinal relaxation time constant $T_1$ towards its equilibrium value corresponding to $\\vec B_{\\rm p}$. After a polarizing time on the order of seconds or less, $\\vec B_{\\rm p}$ is switched off---adiabatically, in terms of spin dynamics---so that $\\vec M$ turns to the direction of the remaining magnetic field, typically $\\vec B_0$, while keeping most of its magnitude. \n\nNext, say at time $t=0$, a short excitation pulse $\\vec B_1$ is applied which flips $\\vec M$ away from $\\vec B_0$, typically by 90$^\\circ$, bringing $\\vec M$ into precession around the magnetic field at positions $\\vec r$ throughout the sample. While rotating, $\\vec M(\\vec r\\,)$ decays towards its equilibrium value corresponding to the applied magnetic field in which the magnetization precesses. This field, $\\vec B_\\mathrm L$, may sometimes simply be a uniform $\\vec B_0$, but for spatial encoding and other purposes, different non-uniform magnetic fields $\\mathrm\\Delta \\vec B(\\vec r, t)$ are additionally applied to affect the precession before or during acquisitions. The encoding is taken into account in the subsequent image reconstruction. \n\n\nThe ULF MRI signal can be modeled to a high accuracy given the absence of unstable distortions common at high frequencies and high field strengths. To obtain a model for image formation, we begin by examining $\\vec M$ at a single point. If the $z$ axis is set parallel to the total precession field $\\vec B_\\mathrm{L}$, then the $xy$ (transverse) components of $\\vec M$ account for the precession. 
Assuming, for now, a static $\\vec B_\\mathrm L$, and omitting the decay for simplicity, the transverse magnetization $\\vec M_{xy} = \\vec M_{xy}(t)$ can be written as \n\\begin{align}\n\\vec M_{xy}(t) = M_{xy}&\\left[\\widehat{e}_x\\cos(\\omega t+\\phi_0)\n - \\widehat{e}_y\\sin(\\omega t+\\phi_0)\\right]\\,,\n\\end{align}\nwhere $\\omega = 2\\pi f_\\mathrm{L}$ is the precession angular frequency, $\\widehat{e}_\\heartsuit$ is the unit vector along the $\\heartsuit$ axis ($\\heartsuit = x, y, z$), and $\\phi_0$ is the initial phase, which sometimes contains useful information.\n\nIn an infinitesimal volume $dV$ at position $\\vec r$ in the sample, the magnetic dipole moment of protons in the volume is $\\vec M(\\vec r\\,)\\, dV$. It is straightforward to show that the rotating components of this magnetic dipole are seen by any magnetic field or flux sensor as a sinusoidal signal $d\\psi_{\\rm s} = |\\beta|\\cos(\\omega t+\\phi_0 + \\phi_\\mathrm s) M_{xy}\\,dV$. Here $|\\beta| = |\\beta(\\vec r\\,)|$ is the peak sensitivity of the sensor to a unit dipole at $\\vec r$ that precesses in the $xy$ plane, and $\\phi_{\\mathrm s} = \\phi_{\\mathrm s}(\\vec r\\,)$ is a phase shift depending on the relative positioning of the sensor and the dipole. To obtain the total sensor signal $\\psi_{\\rm s}$, $d\\psi_{\\rm s}$ is integrated over all space:\n\\begin{align}\\label{eqRSignal}\n&\\psi_{\\rm s}(t) = \\int |\\beta(\\vec r\\,)|M_{xy}(\\vec r\\,)\\cos \\phi(\\vec r, t)\\, d^3\\vec r\\,,\\\\\\nonumber\n&\\text{where }~\\phi(\\vec r, t) = \\int_0^t\\omega(\\vec r, t^\\prime) \\,dt^\\prime +\\phi_0(\\vec r\\,)+\\phi_{\\mathrm s}(\\vec r\\,)\\,.\n\\end{align}\nHere, we have noted that the magnetic field can vary in both space and time and therefore $\\omega = \\omega(\\vec r, t) = \\gamma B(\\vec r, t)$, where $\\gamma$ is the gyromagnetic ratio; $\\gamma \/2\\pi = 42.58$\\unit{MHz\/T} for a proton.\n\nFor convenience, the signal given by Eq.~\\eqref{eqRSignal} can be demodulated at the angular Larmor frequency $\\omega_0=2\\pi f_0$ corresponding to $B_0$; using the quadrature component of the phase sensitive detection as the imaginary part, one obtains a complex-valued signal\n\\begin{align}\\nonumber\n\\Psi(t) &= \\int |\\beta(\\vec r\\,)|M_{xy}(\\vec r\\,)e^{-i[\\phi(\\vec r, t)-\\omega_0 t]}\\,d^3\\vec r\\\\\\label{eqSignal}\n&= \\int \\beta^*(\\vec r\\,) m(\\vec r\\,)e^{-i\\int_0^t\\mathrm\\Delta\\omega(\\vec r,t^\\prime)\\, dt^\\prime}\\,d^3\\vec r\\,,\n\\end{align}\nwhere $^*$ denotes the complex conjugate, $m(\\vec r\\,) = M_{xy}(\\vec r\\,)e^{-i\\phi_0(\\vec r\\,)}$ is the \\emph{uniform-sensitivity image}, $\\mathrm\\Delta\\omega = \\omega-\\omega_0$, and we define\n\\begin{equation}\n\\beta(\\vec r\\,) = |\\beta(\\vec r\\,)|e^{i\\phi_{\\mathrm s}(\\vec r\\,)}\n\\end{equation} \nas the single-channel \\emph{complex sensitivity profile}. Besides geometry, $\\beta$ generally also depends on the direction of the precession field; $\\beta = \\beta_{\\vec B_{\\mathrm L}}(\\vec r\\,)$. \n\nAfter acquiring enough data of the form of Eq.~\\eqref{eqSignal}, the image can be reconstructed---in the simplest case using only one sensor, or using multiple sensors, each having its own sensitivity profile $\\beta$. As a simplified model for understanding image formation, ideal Fourier encoding turns Eq.~\\eqref{eqSignal} into the 3-D Fourier transform of the sensitivity-weighted complex image $\\beta^*m = (\\beta^*m)(\\vec r\\,)$. 
In reality, however, the inverse Fourier transform only provides an approximate reconstruction, and more sophisticated techniques should be used instead \\cite{Hsu2014}. \n\nHere, we do not assume a specific spatial encoding scheme. Notably, however, the sensitivity profile is indistinguishable from $m$ based on the signal [Eq.~\\eqref{eqSignal}]. In other words, the spatial variation of $\\beta^*$ affects the acquired data in the same way as a similar variation of the actual image would, regardless of the spatial encoding sequence in $\\mathrm\\Delta \\omega$.\n\nConsider a small voxel centered at $\\vec r$. The contribution of the voxel to the signal in Eq.~\\eqref{eqSignal} is proportional to an effective voxel volume $V$. Due to measurement noise, the voxel value becomes $V\\beta^* m + \\xi$, where $\\xi$ is a random complex noise term. If $\\beta$ is known, the intensity-corrected voxel of a real-valued image from a single sensor is given by\n\\begin{equation} \\label{eqVoxelIntensity}\n{\\rm Re}\\left(m(\\vec r\\,) + \\frac{\\xi}{V\\beta^*(\\vec r\\,)}\\right) = \n m(\\vec r\\,) + \\frac{{\\rm Re}\\left(\\xi e^{i\\phi_{\\mathrm s}}\\right)}{|s(\\vec r\\,)|}\\, ,\n \\end{equation}\nwhere $s(\\vec r\\,)=V\\beta^*(\\vec r\\,)$ is the sensitivity of the sensor to $m$ in the given voxel.\nAssuming that the distribution of $\\xi=|\\xi|e^{i\\phi_\\xi}$ is independent of the phase $\\phi_\\xi$, the standard deviation $\\sigma$ of ${\\rm Re}\\left(\\xi e^{i\\phi_{\\mathrm s}}\\right)$ is independent of $\\phi_{\\mathrm s}$ and proportional to $\\sigma_{\\rm s}$, the standard deviation of the noise in the relevant frequency band of the original sensor signal. \n\nThe precision of a voxel value can be described by its (amplitude) SNR. The voxel SNR is defined as the correct voxel value $m(\\vec r\\,)$ divided by the standard deviation of the random error and can be written as\n\\begin{equation} \\label{eqSNR0}\n{\\rm SNR} = \\frac{m(\\vec r\\,)V|\\beta(\\vec r\\,)|}{\\sigma}\n\\propto\\frac{B_{\\rm p}V|\\beta(\\vec r\\,)|\\sqrt{T_{\\rm tot}}}{\\sigma_{\\rm s}}\\, ,\n\\end{equation}\nwhere the last expression incorporates that $m \\propto B_{\\rm p}$, and that $\\sigma$ is inversely proportional to the square root of the total signal acquisition time, which is proportional to the total MRI scanning time $T_{\\rm tot}$. It should be recognized, however, that $\\sigma$ also depends heavily on factors not visible in Eq.~\\eqref{eqSNR0}, such as the imaging sequence.\n\nUltimately, the ability to distinguish between different types of tissue depends on the {\\it contrast-to-noise ratio} (CNR), which can be defined as the SNR of the difference between image values corresponding to two tissues. A better CNR can be achieved by improving either the SNR or the contrast, both of which also depend strongly on the imaging sequence.\n\n\\subsection{SQUIDs, pickup coils and detection} \\label{ssPickups}\n\n\n\nSQUIDs are based on {\\it superconductivity}, the phenomenon where the electrical resistivity of a material completely vanishes below a critical temperature $T_{\\rm c}$ \\cite{SQUID-HB}. A commonly used material is niobium (Nb), which has $T_{\\rm c}=9.2\\,$K. It is usually cooled by immersion in a liquid helium bath that boils at $4.2\\,$K at atmospheric pressure. \n\nSQUIDs can be divided into two categories, rf and dc SQUIDs, of which the latter is typically used for biomagnetic signals as well as for ULF MRI \\cite{Lounasmaa2004, Roadmap2016}. 
The dc SQUID is a superconducting loop interrupted by two weak links, or Josephson junctions; see Fig.~\\ref{figSQUID}(a). With suitable shunting and biasing to set the electrical operating point, the current or voltage across the SQUID can be configured to exhibit an oscillatory dependence on the magnetic flux going through the loop---analogously to the well known double-slit interference of waves.\n\nA linear response to magnetic flux is obtained by operating the SQUID in a flux-locked loop (FLL), where an electronic control circuit aims to keep the flux constant by applying negative flux feedback via an additional feedback coil.\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.95\\columnwidth]{squid.pdf}\n\t\\caption{Schematic (a) of a simple SQUID sensor and the flux-locked loop (more detail in Secs.~\\ref{ssIntrinsic} and \\ref{ssCorrEffect}), and (b--f) of different types of pickup coils. Pickup coil types are (b) magnetometer (M0), (c) planar first-order gradiometer (PG1), (d) axial first-order gradiometer (AG1), (e) axial second-order gradiometer (AG2), (f) planar gradiometer with a long baseline, and (g) a magnetometer and two planar gradiometers in a triple-sensor unit (M0, PG1$x$, PG1$y$).}\n\t\\label{figSQUID}\n\\end{figure}\n\nTo avoid harmful resonances and to achieve low noise, the SQUID loop itself is usually made small. The signal is coupled to it using a larger pickup coil connected to the SQUID via an input circuit to achieve high sensitivity. An input circuit may simply consist of a {\\it pickup coil} and an {\\it input coil} in series, forming a continuous superconducting path which, by physical nature, conserves the flux through itself, and feeds the SQUID according to the signal received by the pickup coil, as explained in Sec.~\\ref{ssIntrinsic} along with more sophisticated input circuits.\n\nDifferent types of responses to magnetic fields can be achieved by varying the pickup coil geometry. Fig.~\\ref{figSQUID}(b--g) schematically depicts some popular types. The simplest case is just a single loop, a {\\it magnetometer}, which in a homogeneous field responds linearly to the field component perpendicular to the plane of the loop (b). Two loops of the same size and orientation, but wound in opposite directions, can be used to form a {\\it gradiometer}. The resulting signal is that of one loop subtracted from that of the other. It can be used to approximate a derivative of the field component with respect to the direction in which the loops are displaced (by distance $b$, called the baseline). Typical examples are the planar gradiometer (c) and the axial gradiometer (d). By using more loops, one can measure higher-order derivatives. Some ULF-MRI implementations \\cite{Clarke2007,Zotev2007} use second-order axial gradiometers (e). If a source is close to one loop of a long-baseline gradiometer, that `pickup loop' can be thought of as a magnetometer, while the additional loops suppress noise from MRI coils or distant sources. However, adding loops also increases the inductance $L_\\mathrm p$. Before a more detailed theoretical discussion regarding $L_\\mathrm p$ and SQUID noise scaling, we study the detection of the MRI signal by the pickup coils. 
\n\n\\subsection{Sensitivity patterns and signal scaling}\n\nThe magnetic flux $\\Phi$ picked up by a coil made of a thin superconductor is given by the integral of the magnetic field $\\vec{B}$ over a surface $S$ bound by the coil path $\\partial S$,\n\\begin{equation} \\label{eqFlux}\n\\Phi = \\int_S \\vec{B}\\cdot d_{\\mathrm n}^2\\vec r = \\oint_{\\partial S} \\vec{A}\\cdot d\\vec r\\,.\n\\end{equation}\nHere, the line integral form was obtained by writing $\\vec{B}$ in terms of the vector potential $\\vec{A}$ as $\\vec{B} = \\nabla \\times \\vec{A}$, and applying Stokes's theorem.\n\nAs explained in Sec.~\\ref{ssULFMRI}, the signal in MRI arises from spinning magnetic dipoles. The quasi-static approximation holds well at signal frequencies, providing a vector potential for a dipole $\\vec{m}$ positioned at $\\vec r\\,'$ as $\\vec{A}(\\vec r\\,) = \\frac{\\mu}{4\\pi}\\frac{\\vec{m}\\times(\\vec r-\\vec r\\,^\\prime)}{|\\vec r-\\vec r\\,^\\prime|^3},$\nwhere $\\mu$ is the permeability of the medium, assumed to be that of vacuum; $\\mu = \\mu_0$. Substituting this into Eq.~\\eqref{eqFlux} and rearranging the resulting scalar triple product leads to\n\\begin{equation} \\label{eqLeadField}\n\\Phi = \\vec{m}\\cdot \\vec{B}_{\\rm s}(\\vec r\\,')\\,, \\;\\; \\vec{B}_{\\rm s}(\\vec r\\,') = \\frac{\\mu}{4\\pi}\\oint_{\\partial S} \\frac{d\\vec r \\times (\\vec r\\,'-\\vec r\\,)}{|\\vec r\\,'-\\vec r\\,|^3}\\,,\n\\end{equation}\nwhere the expression for the \\emph{sensor field} $\\vec{B}_{\\rm s}$ is the Biot--Savart formula for the magnetic field at $\\vec r\\,'$ caused by a hypothetical unit current in the pickup coil, as required by reciprocity.\n\nThe sensor field $\\vec B_\\mathrm s$ is closely related to the complex sensitivity pattern $\\beta$ introduced in Sec.~\\ref{ssULFMRI}. In an applied field $\\vec{B}_\\mathrm L = B_\\mathrm L\\widehat e_z$, the magnetization precesses in the $xy$ plane, and $\\beta$ can in fact be written as\n\\begin{equation} \\label{eqBeta0}\n\t\\beta(\\vec r\\,) = \\vec B_\\mathrm s(\\vec r\\,) \\cdot \\left(\\widehat e_x + i \\,\\widehat e_y\\right)\\,.\n\\end{equation}\nFor arbitrary $\\vec B = B_\\mathrm L\\widehat e_\\mathrm L$, we have\n\\begin{equation} \\label{eqBetaNorm}\n |\\beta_{\\vec B} (\\vec r\\,)| = \\sqrt{|\\vec B_\\mathrm s(\\vec r\\,)|^2 - [\\vec B_\\mathrm s (\\vec r\\,) \\cdot\\widehat e_\\mathrm L]^2}\\,.\n\\end{equation}\n\n\nWe choose to define the measured signal as the \\emph{flux} through the pickup coil---a convention that appears throughout this paper. The measurement noise is considered accordingly, as flux noise. This contrasts with looking at magnetic-field signals and noise, as is often seen in the literature. Working with magnetic flux signals allows for direct comparison of different pickup coil types. Moreover, the approximation that magnetometer and gradiometer pickups respond to the field and its derivatives, respectively, is not always valid.\n\nThe signal often scales as simple power laws $R^\\alpha$ with the pickup coil size $R$ (or radius, for circular coils). When the distance $l$ from the coil to the signal source is large compared to $R$, a magnetometer sees a flux $\\Phi\\propto BR^2$, giving an \\emph{amplitude scaling exponent} $\\alpha=2$. When scaling a gradiometer, however, the baseline $b$ also scales in proportion to $R$. This leads to $\\alpha=3$ for a first-order gradiometer, or $\\alpha=2+k$ for one of $k^{\\rm th}$ order. 
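As a minimal numerical illustration of these scaling exponents (separate from the measurement model itself), the sketch below evaluates the sensor field of Eq.~\\eqref{eqLeadField} by discretizing circular pickup loops into short segments and computes the flux $\\Phi = \\vec m\\cdot\\vec B_\\mathrm s$ of a distant dipole for two coil sizes; the coil radii, baseline, dipole position and moment are arbitrary illustrative values.
\\begin{verbatim}
import numpy as np

MU0 = 4e-7 * np.pi

def sensor_field(loop, r):
    # Midpoint-rule Biot-Savart field at r for a unit current in a
    # closed polyline (numerical form of the sensor field B_s).
    B = np.zeros(3)
    for a, b in zip(loop, np.roll(loop, -1, axis=0)):
        dl, d = b - a, r - 0.5 * (a + b)
        B += MU0 / (4 * np.pi) * np.cross(dl, d) / np.linalg.norm(d)**3
    return B

def circle(R, center, sense=1, n=400):
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False)[::sense]
    return np.stack([center[0] + R * np.cos(phi),
                     center[1] + R * np.sin(phi),
                     np.full(n, center[2])], axis=1)

dip_pos = np.array([0.3, 0.2, 1.0])   # distant dipole, l >> R (meters)
dip_mom = np.array([1.0, 0.5, 0.2])   # arbitrary dipole moment

def flux(loops):
    # Reciprocity: flux through the pickup coil is m . B_s at the dipole
    return sum(dip_mom @ sensor_field(lp, dip_pos) for lp in loops)

for R in (0.01, 0.02):
    b = 3 * R   # gradiometer baseline scaled together with R
    mag = [circle(R, (0.0, 0.0, 0.0))]
    grad = [circle(R, (-b / 2, 0.0, 0.0)),
            circle(R, (+b / 2, 0.0, 0.0), sense=-1)]
    print(R, flux(mag), flux(grad))
# Doubling R multiplies the magnetometer flux by roughly 2**2 (alpha = 2)
# and the planar-gradiometer flux by roughly 2**3 (alpha = 3).
\\end{verbatim}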
Conversely, the signal scales with the distance as $l^{-\\alpha-1}$, as is verified by writing the explicit forms of the field and its derivatives. The additional $-1$ in the exponent reflects the dipolar nature of the measured field ($-2$ for quadrupoles etc.).\n\nFor some cases, the detected flux can be calculated analytically using Eq.~\\eqref{eqLeadField}. First, as a simple example, consider a dipole at the origin, and a circular magnetometer pickup loop of radius $R$ parallel to the $xy$ plane at $z=l$, centered on the $z$ axis. The integral in Eq.~\\eqref{eqLeadField} is easily integrated in cylindrical coordinates to give\n\\begin{equation}\\label{eqCircleLead}\n\\vec{B}_{\\rm s} = B_{\\rm s}\\widehat{e}_z \n= \\frac{\\mu R^2}{2(R^2+l^2)^\\frac{3}{2}}\\widehat{e}_z\\,.\n\\end{equation}\nIf the dipole precesses in, for instance, the $xz$ plane, the corresponding sensitivity is $|\\beta| = B_{\\rm s}$. Instead, if precession takes place in the $xy$ plane, the sensitivity vanishes; $|\\beta|=0$, and no signal is received. In this case, moving the pickup loop away from the $z$ axis would cause a signal to appear. These extreme cases show that even the absolute value of a single-channel sensitivity is strongly dependent on the sensor orientation with respect to the source and the magnetic field, as is also seen in Fig.~\\ref{figSensContours}.\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.90\\columnwidth]{contours3d2.png}\n\t\\caption{Isosurfaces of sensitivity patterns $|\\beta(\\vec r\\,)|$ inside a helmet array for two of the magnetometer loops marked in red. The arrow depicts the direction of the precession field $\\vec B_\\mathrm L$ during readout ({\\it e.g.}\\ $\\vec{B}_0$). Note that, because of the precession plane, there are insensitive directions (``blind angles'') in the profiles, depending on the relative orientation of $\\vec B_\\mathrm L$.}\n\t\\label{figSensContours}\n\\end{figure}\n\n\nAnother notable property of the sensitivity $|\\beta|=B_{\\rm s}$ from Eq.~\\eqref{eqCircleLead} is that if $l$ is fixed, there is a value of $R$ above which the sensitivity starts to decrease, {\\it i.e.}, part of the flux going through the loop comes back at the edges canceling a portion of the signal. By requiring $\\partial B_{\\rm s}\/\\partial R$ to vanish, one obtains $R=l\\sqrt{2}$, the loop radius that gives the maximum signal. Interestingly, however, if instead of the perpendicular ($z$) distance, $l$ is taken as the closest distance to the pickup-coil winding, then the coil is on a spherical surface of radius $R_\\mathrm a = l$. Now, based on Pythagoras's theorem, $R^2 + l^2$ in Eq.~\\eqref{eqCircleLead} is replaced with $l^2$. In other words, the sensor field is simply $\\vec B_\\mathrm{s} = \\widehat e_z \\,\\mu R^2\/2l^3$, so the scaling of $\\alpha = 2$ happens to be the same as for distant sources in this simple case. \n\nImportantly, however, the \\emph{noise} mechanisms also depend on $R$, and moreover, the situation is complicated by the presence of multiple sensors. These matters are discussed in Secs.~\\ref{sSingleSensor}--\\ref{sArrays}.\n\n\n\n\n\\section{Noise mechanisms and scaling} \\label{sSingleSensor}\n\nThe signal from each measurement channel, corresponding to a pickup coil in the sensor array, contains flux noise that can originate from various sources. 
Examples of noise sources are the sensor itself, noise in electronics that drives MRI coils, cryostat noise, magnetic noise due to thermal motion of particles in other parts of the measurement device and in the sample, noise from other sensors, as well as environmental noise. This section is devoted to examining the various noise mechanisms and how the noise can be dealt with. Unless stated otherwise, noise is considered a random signal with zero average. We use amplitude scaling exponents $\\alpha$ to characterize the dependence of noise on pickup-coil size and type. \n\n\\subsection{Flux coupling and SQUID noise} \\label{ssIntrinsic}\n\nFor estimates of SQUID sensor noise as a function of pickup coil size, a model for the sensor is needed. As explained in Sec.~\\ref{ssPickups}, the signal is coupled into the SQUID loop via an input circuit. \nIn general, the input circuit may consist of a sequence of one or more all-superconductor closed circuits connected by intermediate transformers. Via inductance matching and coupling optimization, these circuits are designed to efficiently couple the flux signal into the SQUID loop. \n\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]{inputcircuit.pdf}\n\t\\caption{Simplified schematic of a superconducting SQUID input circuit. Zero or more intermediate transformers (dashed box) may be present.}\n\t\\label{figInputCircuit}\n\\end{figure}\nIntermediate transformers can be useful for optimal coupling of a large pickup coil to a SQUID-coupled input coil, as analyzed {\\it e.g.} in Ref.~\\cite{Mates2014}. To further understand the concept, consider a two-stage input circuit where a pickup coil ($L_\\mathrm p$) is connected to a transmitting inductor $L_1$ to form a closed superconducting path; see Fig.~\\ref{figInputCircuit}. Ideally, the distance between the two coils is fairly small in order to avoid signal loss due to parasitic inductances of the connecting traces or wiring. The total inductance of this flux-coupling circuit by itself is $L_\\mathrm p + L_1$. The primary is coupled to a secondary inductor $L_2$ with mutual inductance $M_{12}$. As the magnetic flux picked up in $L_\\mathrm p$ changes by $\\mathrm\\Delta\\Phi_{\\mathrm p}$, there is a corresponding change $\\mathrm\\Delta J_1$ in the supercurrent flowing in the circuit such that the flux through the closed path remains constant. This passes the flux signal onwards to $L_2$ which forms another flux-transfer circuit together with the input coil $L_\\mathrm i$, which couples inductively into the SQUID.\n\nSuperconductivity has two important effects on the transmission of flux into the next circuit. First, the presence of superconducting material close to a coil tends to reduce the coil inductance because of the Meissner effect: the magnetic flux is expelled and the material acts as a perfect diamagnet. This effect is included in the given inductances $L_\\mathrm{p}$ and $L_1$. The other effect emerges when the flux is transmitted into another closed superconducting circuit, such as via $M_{12}$. This is because the transmitting coil is subject to the counteracting flux $M_{12}^2 \\mathrm\\Delta J_1\/(L_2 + L_\\mathrm{i})$ from the receiving coil of the other circuit. Now current $\\mathrm\\Delta J_1$ only generates a flux $[L_1 - M_{12}^2\/(L_2 + L_\\mathrm{i})]\\mathrm\\Delta J_1$ in $L_1$. 
Closing the secondary circuit thus changes the inductance from $L_1$ to \n\\begin{equation}\nL_1^\\prime = L_1 - \\frac{M_{12}^2}{L_2 + L_\\mathrm{i}} = L_1\\left(1 - \\frac{k_{12}^2}{1 + L_\\mathrm{i}\/L_2}\\right)\\,, \n\\end{equation}\nwhere the last form is obtained by expressing the mutual inductance in terms of the coupling constant $k_{12}$ ($|k_{12}|<1$) as $M_{12} = k_{12}\\sqrt{L_1L_2}$. Note that we do not include a counteracting flux from the SQUID inductance $L_\\mathrm{S}$ back into $L_\\mathrm{i}$, {\\it i.e.}, no screening from the biased SQUID loop. However, like other inductances, $L_\\mathrm{i}$ does include the effect of the presence of the nearby superconductors through the Meissner effect.\n\n\nThe change of flux through the dc SQUID loop is now obtained as\n\\begin{align}\n\\mathrm\\Delta \\Phi_\\mathrm{S} &= M_\\mathrm{iS}\\mathrm\\Delta J_2 = \\frac{M_\\mathrm{iS}M_{12}}{L_2 + L_\\mathrm{i}}\\mathrm\\Delta J_1 \\\\&= \\frac{M_\\mathrm{iS}M_{12}}{(L_2 + L_\\mathrm{i})(L_\\mathrm{p} + L_1) - M_{12}^2} \\mathrm\\Delta\\Phi_\\mathrm{p} \\, ,\n\\end{align} \nor, with $M_\\mathrm{iS} = k_\\mathrm{iS}\\sqrt{L_\\mathrm{i}L_\\mathrm{S}}$ and defining $\\chi_1$ and $\\chi_2$ such that $L_1 = \\chi_1 L_\\mathrm{p}$ and $L_2 = \\chi_2 L_\\mathrm{i}$, we have\n\\begin{equation}\\label{eqSQUIDFluxChi}\n\\frac{\\mathrm\\Delta \\Phi_\\mathrm{S}}{\\mathrm\\Delta \\Phi_\\mathrm{p}} = \\frac{k_\\mathrm{iS}\\sqrt{L_\\mathrm{S}}}{\\sqrt{L_\\mathrm{p}}}\\times \\frac{k_{12}\\sqrt{\\chi_1\\chi_2} }{\\chi_1\\chi_2(1 - k_{12}^2) + \\chi_1 + \\chi_2 + 1} \\, .\n\\end{equation}\n\nFor a given pickup coil, $\\chi_1$ and $\\chi_2$ can usually be chosen to maximize the flux seen by the SQUID. While the function in Eq.~\\eqref{eqSQUIDFluxChi} is monotonic in $k_{12}$, there is a single maximum with respect to parameters $\\chi_1,\\chi_2 > 0$. Noting the symmetry, we must have $\\chi_1 = \\chi_2 =: \\chi$, and the factor in Eq.~\\eqref{eqSQUIDFluxChi} becomes $k_{12}\\chi\/[\\chi^2(1-k_{12}^2) + 2\\chi + 1]$, which is maximized at $\\chi = 1\/\\sqrt{1-k_{12}^2}$. At the optimum, the coupled flux is given by\n\\begin{equation}\n\\frac{\\mathrm\\Delta \\Phi_\\mathrm{S}}{\\mathrm\\Delta \\Phi_\\mathrm{p}} = \n\\frac{k_\\mathrm{iS}k_{12}\\sqrt{L_\\mathrm{S}}}{2\\sqrt{L_\\mathrm{p}}\\left(1 + \\sqrt{1-k_{12}^2}\\right)} \\underset{k_{12} \\rightarrow 1^-}{\\longrightarrow}\n\\frac{k_\\mathrm{iS}}{2}\\sqrt\\frac{L_\\mathrm{S}}{L_\\mathrm{p}}\n\\,.\n\\end{equation}\nNotably, with $k_{12} \\approx 1$, the coupling corresponds to a perfectly matched single flux-coupling circuit \\cite{SQUID-HB}. Already at $k_{12} = 0.8$, 50\\% of the theoretical maximum is achieved, while matching without an intermediate transformer may cause practical difficulties or parasitic resonances.\n\nWhen referred to SQUID flux $\\Phi_\\mathrm{S}$, the noise in the measured SQUID voltage in the flux-locked loop corresponds to a noise spectral density $S_{\\Phi_{\\rm S}}(f)$ at frequency $f$. As the signal transfer from the pickup coil to the SQUID is given by Eq.~\\eqref{eqSQUIDFluxChi}, the equivalent flux resolution referred to the signal through the pickup coil can be written as\n\\begin{equation} \\label{eqNoiseSDens1}\nS_{\\Phi_{\\rm p}}^{1\/2}(f) = \\frac{2\\sqrt{L_{\\rm p}}\\left(1 + \\sqrt{1-k_{12}^2}\\right)}{k_\\mathrm{iS}k_{12}\\sqrt{L_\\mathrm{S}}}S_{\\Phi_{\\rm S}}^{1\/2}(f)\\,.\n\\end{equation}\nDue to resonance effects and thermal flux jumps, $L_\\mathrm{S}$ needs to be kept small \\cite{SQUID-HB}. 
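As a quick numerical cross-check of these expressions (a sketch with assumed, illustrative component values rather than a design calculation), one can scan $\\chi$ to confirm the optimum $\\chi = 1\/\\sqrt{1-k_{12}^2}$ and evaluate the pickup-referred noise of Eq.~\\eqref{eqNoiseSDens1}:
\\begin{verbatim}
import numpy as np

def transfer_factor(chi, k12):
    # chi-dependent factor of the flux-transfer ratio (chi1 = chi2 = chi)
    return k12 * chi / (chi**2 * (1 - k12**2) + 2 * chi + 1)

k_iS = 0.9                    # assumed input-coil-to-SQUID coupling
L_S, L_p = 80e-12, 400e-9     # assumed SQUID and pickup inductances (H)
S_PhiS = 1.0e-6               # assumed SQUID flux noise (Phi_0/sqrt(Hz))

for k12 in (0.6, 0.8, 0.95):
    chi = np.linspace(0.1, 20.0, 20000)
    chi_best = chi[np.argmax(transfer_factor(chi, k12))]
    chi_pred = 1 / np.sqrt(1 - k12**2)
    transfer = k_iS * k12 * np.sqrt(L_S / L_p) / (2 * (1 + np.sqrt(1 - k12**2)))
    S_Phip = S_PhiS / transfer  # pickup-referred noise, cf. Eq. (eqNoiseSDens1)
    print(f"k12={k12}: chi_opt {chi_best:.2f} (predicted {chi_pred:.2f}), "
          f"transfer {transfer:.2e}, referred noise {S_Phip:.2e}")
# At k12 = 0.8 the transfer is exactly half of its k12 -> 1 limit.
\\end{verbatim}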
The flexibility of intermediate transformers allows the same model to estimate noise levels with a wide range of pickup coil inductances $L_\\mathrm{p}$.\n\nIn general, the inductance of a coil with a given shape scales as the linear dimensions, or radius $R$, of the coil. If the wire thickness is not scaled accordingly, there will be an extra logarithmic term \\cite{Grover1973}. Even then, within a range small enough, the dependence is roughly $S_{\\Phi_{\\rm p}}^{1\/2} \\propto R^\\alpha$ with $\\alpha = 1\/2$. The case of a magnetometer loop in a homogeneous field then still has a field resolution $S_B^{1\/2}(f)$ proportional to $R^{-3\/2}$.\n\n\n\\subsection{Thermal magnetic noise from conductors} \\label{ssThermalNoise}\n\nElectric noise due to the thermal motion of charge carriers in a conducting medium is called Johnson--Nyquist noise \\cite{Nyquist1928, Johnson1928}. According to Amp$\\grave{\\rm e}$re's law $\\nabla \\times \\vec B = \\mu_0 \\vec J$, the noise currents in the current density $\\vec J$ also produce a magnetic field which may interfere with the measurement. In view of this, devices should be designed in such a way that the amount of conducting materials in the vicinity of the sensors is small. However, there is a lower limit set by the conducting sample---the head. Estimations of the sample noise \\cite{Myers2007} have given noise levels below $0.1\\,{\\rm fT}\/\\sqrt{\\rm Hz}$, consistent with a recent experimental result of $55\\,{\\rm aT}\/\\sqrt{\\rm Hz}$ \\cite{Storm2019}. Other noise sources still exceed those values by more than an order of magnitude. More restrictively, it is difficult to avoid metals in most applications. \n\n\nTo keep the SQUID sensors in the superconducting state, the array is kept in a helmet-bottom cryostat filled with liquid helium at $4.2\\,$K. The thermal superinsulation of a cryostat usually involves a vacuum as well as layers of aluminized film to suppress heat transfer by radiation \\cite{SQUID-HB}. The magnetic noise from the superinsulation can be reduced by breaking the conducting materials into small isolated patches. Seton {\\it et al.} \\cite{Seton2005} used aluminium-coated polyester textile, which efficiently breaks up current paths in all directions. By using very small patches, one can decrease the field noise at the sensors by orders of magnitude, although with increased He boil-off \\cite{Tervo2016MSc}.\n\n\nTo look at the thermal noise from the insulation layers in some more detail, consider first a thin slab with conductivity $\\sigma$ on the $xy$ plane at temperature $T$. Johnson--Nyquist currents in the conductor produce a magnetic field $\\vec{B}(x,y,z,t)$ outside the film. For an infinite (large) slab, the magnitude of the resulting field noise depends, besides the frequency, only on $z$, the distance from the slab (assume $z>0$). At low frequencies, the spectral densities $S_{B_\\alpha}$ ($\\alpha = x,y,z$) corresponding to Cartesian field noise components are then given by \\cite{Varpula1984}\n\\begin{equation} \\label{eqBz1}\nS_{B_z}^{1\/2} = \\sqrt{2} S_{B_x}^{1\/2} = \\sqrt{2} S_{B_y}^{1\/2} =\\frac{\\mu}{2}\\sqrt{ \\frac{k_{\\rm B}T}{2\\pi} \\frac{\\sigma d}{z(z+d)}}\\,, \n\\end{equation}\nwhere $d$ \nis the thickness of the slab and $k_{\\rm B}$ the Boltzmann constant.\n\nThe infinite slab is a good approximation when using a flat-bottom cryostat or when the radius of curvature of the cryostat wall is large compared to individual pickup loops. 
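To give a feeling for the magnitudes involved, Eq.~\\eqref{eqBz1} can be evaluated directly; the parameter values in the sketch below (conductivity, film thickness, distance and temperature) are rough assumed numbers for a single aluminized layer, not measured properties of any particular cryostat:
\\begin{verbatim}
import numpy as np

KB = 1.380649e-23      # Boltzmann constant (J/K)
MU0 = 4e-7 * np.pi     # vacuum permeability (H/m)

def field_noise(sigma, d, z, T):
    # Low-frequency S_Bz^(1/2) of a thin infinite conducting slab,
    # Eq. (eqBz1), in T/sqrt(Hz)
    return MU0 / 2 * np.sqrt(KB * T / (2 * np.pi) * sigma * d / (z * (z + d)))

sigma = 3.5e7   # S/m, roughly room-temperature aluminum (assumed)
d = 30e-9       # m, assumed film thickness
z = 0.02        # m, assumed distance from film to pickup coil
T = 100.0       # K, assumed effective layer temperature

print(field_noise(sigma, d, z, T) * 1e15, "fT/sqrt(Hz)")
\\end{verbatim}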
Consider a magnetometer pickup loop with area $A$ placed parallel to the conducting films in the insulation---to measure the $z$ component of the magnetic field, $B_z$. The coupled noise flux is the integral of $B_z$ over the loop area. If the loop is small, the noise couples to the pickup circuit as $S_{\\Phi}^{1\/2} = S_{B_z}^{1\/2}A$. A coil of size $R$ then sees a flux noise proportional to $S_{B_z}^{1\/2}R^2$, that is, $\\alpha = 2$. \n\nInstead, if the pickup coil is large, the situation is quite different. The instantaneous magnetic field depends on all coordinates and varies significantly over the large coil area. Consider the noise field at two points in the plane of the coil. The fields at the two points are nearly equal if the points are close to each other. However, if the points are separated by a distance larger than a correlation length $\\lambda_{\\rm c}(z)$, the fields are uncorrelated. Therefore, if $R \\gg \\lambda_c$, the coupled flux is roughly a sum of $A\/\\lambda_{\\rm c}^2$ uncorrelated terms from regions in which the field is correlated. Each term has a standard deviation of order $S_{B_z}^{1\/2}\\lambda_{\\rm c}^2$. The spectral density of the cryostat noise is then\n\\begin{equation} \\label{eqSPhiSlab}\nS_{\\Phi,\\rm c}(f) \\approx A S_{B_z}(\\vec r, f)\\lambda_{\\rm c}^2(\\vec r\\,)\\,.\n\\end{equation}\nMost importantly, the flux noise amplitude $S_{\\Phi,\\rm c}^{1\/2}$ is directly proportional to the coil size $R$, and we now have $\\alpha=1$. Still, the noise increases to a higher power of $R$ than the sensor noise, which according to section \\ref{ssIntrinsic} scales as $\\sqrt{R}$ and hence dominates in small pickup coils.\n\nFor a continuous film, the correlation length $\\lambda_{\\rm c}$ can be estimated from data in Ref.~\\cite{Nenonen96} to be around several times $z$. The correlation at distances smaller than $\\lambda_c$ is due to two reasons. First, the magnetic field due to a small current element in the conductor is spread in space according to the Biot--Savart law. Second, the noise currents in elements close to each other are themselves correlated. The latter effect is broken down when the film is divided into small patches; only very small current loops can occur, and the noise field starts to resemble that of Gaussian uncorrelated magnetic point dipoles throughout the surface. In this case, Eq.~\\eqref{eqBz1} is no longer valid, but the approximate relation of Eq.~\\eqref{eqSPhiSlab} still holds---now with a smaller $\\lambda_\\mathrm{c}$. \n\nThe magnetometer case is easily extended to first-order planar gradiometers parallel to the superinsulation layers [Fig.~\\ref{figSQUID}($\\mathrm b, \\mathrm f$)]. For a very small baseline, $b\\ll\\lambda_c$, the field noise is effectively homogeneous and thus cancels out. However, when $b\\gg\\lambda_c$, the spectral density of the noise power is twice that of a single loop. \n\n\n\n\n\\subsection{MRI electronics, coils and other noise sources}\n\nAs explained in Sec.~\\ref{ssULFMRI}, MRI makes heavy use of applied magnetic fields. The fields are generated with dedicated current sources, or amplifiers, to feed currents into coils wound in different geometries. As opposed to applying static fields, a major challenge arises from the need for oscillating pulses and the desire to quickly switch on and off all fields, including not only readout gradients but also the main field $\\vec B_0$, which requires an ultra-high dynamic range to avoid excess noise. 
Switching of $\\vec B_0$ enables full 3-D field mapping for imaging of small electric currents in volume \\cite{Zevenhoven2014amp}. Noise in the coil currents can be a major concern in the instrumentation. The contribution from $\\vec B_0$ ideally scales with pickup coil size as $R^\\alpha$, $\\alpha=2$ for a magnetometer, and noise in linear gradients essentially scales as $\\alpha=2$ in magnetometers as well as fixed-baseline gradiometers. With $b \\propto R$, first-order gradiometers experience noise from linear gradient coils according to $\\alpha=3$.\n\nMRI coils themselves also produce Johnson--Nyquist noise. In particular, the polarizing coil is often close to the sensors and made of thick wires as it should be able to produce relatively high fields. This allows thermal electrons to form current loops that generate field noise with complicated spatial characteristics, which is detrimental to image quality and should be eliminated. Another approach is to use litz wire, which is composed of thin wires individually coated with an insulating layer. This prevents significant noise currents perpendicular to the wire and eliminates large current loops. However, efficient uniform cooling of litz wire is problematic, leading to larger coil diameters. Increasing the coil size, however, significantly increases harmful transients in the system as well as the power and cooling requirements \\cite{Zevenhoven2011MSc}. Instead, we have had promising results with thin custom-made superconducting filament wire and DynaCAN (Dynamical Coupling for Additional dimeNsions) in-sequence degaussing waveforms to solve the problem of trapped flux \\cite{Zevenhoven2011MSc, Zevenhoven2013degauss}; optimized oscillations at the end of a pulse can expel the flux from the superconductor. Such coils contain much less metal, and significantly reduce the size of current loops that can generate magnetic noise.\n\nA significant amount of noise also originates from more distant locations. Power lines and electric devices, for instance, are sources that often can not be removed. Indeed, magnetically shielded rooms (MSRs) effectively attenuate such magnetic interference. However, pulsed magnetic fields inside the shielded room induce eddy currents exceeding $1\\,$kA in conductive MSR walls \\cite{Zevenhoven2014eddy}, leading to strong magnetic field transients that not only saturate the SQUID readout, but also seriously interfere with the nuclear spin dynamics in the imaging field of view. Even a serious eddy current problem can again be solved with a DynaCAN approach where optimized current waveforms are applied in additional coil windings to couple to the complexity of the transient \\cite{Zevenhoven2015}.\n\nNoise from distant sources typically scales with the pickup coil size with an exponent at least as large as the signal from far-away sources: $\\alpha=2+k$ for a $k^{\\rm th}$-order gradiometer (see Sec.~\\ref{ssPickups}). Although the noise detected by gradiometers scales to a higher power than with magnetometers ($k=0$), gradiometers have the advantage that they, in principle, do not respond to a uniform field. For a higher-order gradiometer that is not too large, the environmental noise is nearly uniform in space, and therefore effectively suppressed by the pickup coil geometry. Gradiometers with relatively long baselines can also be seen as magnetometers when the source is close to one of the loops. Still, they function as gradiometers from the perspective of distant noise sources. 
A similar result applies to so-called software gradiometers, which can, for example, be formed by afterwards taking the difference of the signals of two parallel magnetometers. However, in Sec.~\\ref{ssArrayNoise}, a more sophisticated technique is described for minimizing noise in the combination of multiple channels.\n\nAt very low system noise levels, other significant noise mechanisms include noise due to dielectric losses. Electrical activity in the brain can also be seen as a source of noise. This noise, however, is strongest at frequencies well below $1\\,$kHz. Using Larmor frequencies in the kHz range may therefore be sufficient for spectral separation of brain noise from MRI.\n\nThe amplitude scaling exponents $\\alpha$ for signal and noise are summarized in Table \\ref{tabExponents}. The notation in later sections refers to the scaling of flux signal and noise in terms of $\\alpha_\\mathrm s$ and $\\alpha_\\mathrm n$, respectively. For a single sensor, the SNR scaling $R^\\delta$ is given by $\\delta = \\alpha_\\mathrm s - \\alpha_\\mathrm n$.\n\n\n\n\n\n\\begin{table}\n\\caption{Amplitude scaling exponents $\\alpha$ for the flux noise standard deviation $\\sigma \\propto R^\\alpha$ as well as the signal, given different pickup-coil geometries and noise mechanisms.}\\label{tabExponents}\\vspace{5mm}\n\t\\centering\n\t\\begin{tabular}{l@{$\\;\\;$}c@{$\\;\\;$}c@{$\\;\\;$}c}\n\tPickup type (see Fig.~\\ref{figSQUID}) $\\rightarrow$ & M0 & AG$k$ & PG$k$\\\\\n\t\\hline\n\tSensor noise (optimally matched) & 1\/2 & 1\/2 & 1\/2 \\\\\n Sensor noise (unmatched, large $L_\\mathrm p$) & 1 & 1 & 1 \\\\\n\tDistant source, $b \\propto R$ & 2 & $2 + k$ & $2 + k$ \\\\\n\tDistant source, $b$ fixed & 2 & 2 & -- \\\\\n $\\vec B_0$ amplifier & 2 & $0^*$ & $0^*$ \\\\\n Gradient amplifiers, $b \\propto R$, $k \\le 1$ & 2 & 3 & 3 \\\\\n Gradient amplifiers, $b$ fixed & 2 & 2 & -- \\\\\n\tCryostat noise, small $R$ & 2 & 2 & $2 + k$ \\\\\n\tCryostat noise, large $R$ & 1 & 1 & 1 \\\\\n\t\\hline\n\t\\end{tabular}\n \\\\\\vspace{1mm}$^*$ Larger in practice, because of gradiometer \\\\imbalance and field inhomogeneities.\n\\end{table}\n\n\n\n\\section{Sensor arrays} \\label{sArrays}\n\n\n\n\n\n\n\\subsection{Combining data from multiple channels} \\label{ssArrayNoise}\n\nIt is common to work with absolute values of the complex images to eliminate phase shifts. Images from multiple channels can then be combined by summing the squares and taking the square root. This procedure, however, causes asymmetry in the noise distribution and loses information that can be used for improved combination of the data. If the sensor array and the correlations of noise between different sensors are known, the multi-channel data can be combined more effectively. \n\nIn the following, we show that, just as two sensors can form a software gradiometer, an array of $N$ sensors can form an $N^\\text{th}$-order combination optimized to give the best SNR for each voxel.\n\nTo follow the derivation in Ref.~\\cite{Zevenhoven2011}, consider a voxel centered at $\\vec r$, and $N$ sensors indexed by $j = 1,2, ...,N$. Based on Sec.~\\ref{ssULFMRI}, each sensor has a unit magnetization image $s_j(\\vec r\\,) = \\beta_j^*(\\vec r\\,)V$, where $\\beta_j$ and $V$ are the sensitivity profile and voxel volume, respectively. The absolute value $|s_j|$ gives the sensed signal amplitude caused by a unit magnetization in the voxel, precessing perpendicular to $\\vec B_\\mathrm L$. The complex phase represents the phase shift in the signal due to the geometry. 
To study the performance of the array only, we set $V$ to unity.\n\nFor a voxel centered at $\\vec r$, we have a vector of reconstructed image values ${\\bf v} = [v_1,v_2, ...,v_N]^\\top $ corresponding to the $N$ sensors. At this point, the values $v_j$ have not been corrected according to the sensitivity. The linear combination that determines the final voxel value $u$ can be written in the form\n\\begin{equation}\\label{eqVoxelLinComb}\nu = \\sum_{j=1}^{N} a_j^*v_j = {\\bf a}^\\dagger {\\bf v}\\,,\n\\end{equation}\nwhere $^\\dagger $ denotes the conjugate transpose. Requiring that the outcome is sensitivity-corrected sets a condition on the coefficient vector ${\\bf a} = [ a_1, ...,a_N]^\\top $. In the absence of noise, a unit source magnetization gives $v_j = s_j(\\vec r\\,)$. The final voxel value $u$ should represent the source, which leads to the condition\n\\begin{equation} \\label{eqConstr}\n{\\bf a}^\\dagger {\\bf s} = 1\\,.\n\\end{equation}\nBelow, we show how ${\\bf a} = [ a_1, ..., a_N]^\\top $ should be chosen in order to maximize the SNR in the final image given the sensor array and noise properties.\n\nThe single-sensor image values $v_j$ can be written in the form $v_j = w_j + \\xi_j$ where $w_j$ is the `pure' signal and $\\xi_j$ is the noise. The noise terms $\\xi_j$ can be modeled as random variables, which, for unbiased data, have zero expectation: ${\\rm E}(\\xi_j)=0$. If there is a bias, it can be measured and subtracted from the signals before this step. The expectation of the final value of this voxel is then\n\\begin{equation}\\label{eqExpectU}\n{\\rm E}(u) = {\\rm E}\\left[{\\bf a}^\\dagger ({\\bf w + \\boldsymbol \\xi})\\right] = {\\bf a}^\\dagger {\\bf w}\\,.\n\\end{equation}\nThe noise in the voxel is quantified by the variance of $u$. Eqs. \\eqref{eqVoxelLinComb} and \\eqref{eqExpectU} yield $u = {\\rm E}(u) + {\\bf a}^\\dagger \\boldsymbol\\xi$, leading to\n\\begin{equation} \\label{eqVaru}\n{\\rm Var}(u) = {\\rm E}\\left[|u-{\\rm E}(u)|^2\\right] = {\\rm E}\\left[{\\bf a}^\\dagger \\boldsymbol{\\xi}\\boldsymbol{\\xi}^\\dagger {\\bf a}\\right] = {\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a}\\,,\n\\end{equation}\nwhere ${\\mathbf\\Sigma} = {\\rm E}(\\boldsymbol \\xi \\boldsymbol\\xi^\\dagger )$ is identified as the noise covariance matrix.\nFor simple cases, ${\\mathbf\\Sigma}$ is the same for all voxels. However, it may vary between voxels if, for instance, the voxels are of different sizes.\n\nNow, the task is to minimize the noise ${\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a}$ subject to the constraint in Eq.~\\eqref{eqConstr}. The Lagrange multiplier method turns the problem into finding the minimum of\n\\begin{equation} \\label{eqLagrange}\nL = {\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a} - \\lambda(1-{\\bf a}^\\dagger {\\bf s})\n\\end{equation}\nwith respect to ${\\bf a}$, while still requiring that Eq.~\\eqref{eqConstr} holds. From the constraint it follows that ${\\bf a}^\\dagger {\\bf s}$ is real, so it may be replaced by $({\\bf a}^\\dagger {\\bf s}+{\\bf s}^\\dagger {\\bf a})\/2$ in Eq.~\\eqref{eqLagrange}. 
By `completing the square' in Eq.~\\eqref{eqLagrange}, one obtains\n\\begin{equation}\nL = {(\\bf a - {\\bf \\tilde a})}^\\dagger {\\mathbf\\Sigma}{(\\bf a- {\\bf\\tilde a})} - \\lambda + {\\rm constant}\\,,\n\\end{equation}\nwhere ${\\bf \\tilde a}$ satisfies \n\\begin{equation}\\label{eqLagrange2}\n2{\\mathbf\\Sigma}{\\bf\\tilde a} = -\\lambda {\\bf s}\\,.\n\\end{equation}\nSince $\\mathbf\\Sigma$, being a covariance matrix, is positive (semi)definite, the minimum of $L$ is found at ${\\bf a} = {\\bf\\tilde a}$. \n\nFurther, ${\\mathbf\\Sigma}$ is always invertible, as the contrary would imply that some non-trivial linear combination of the signals would contain zero noise. Multiplying Eq.~\\eqref{eqLagrange2} by ${\\bf s}^\\dagger {\\mathbf\\Sigma}^{\\text{-1}}$ from the left and using Eq.~\\eqref{eqConstr} leads to $\\lambda = -2\/{\\bf s}^\\dagger {\\mathbf\\Sigma}^{\\text{-1}}{\\bf s}$. When this expression for $\\lambda$ is put back into Eq.~\\eqref{eqLagrange2}, the optimal choice for the coefficient vector ${\\bf a} = \\tilde{\\bf a}$ is obtained as\n\\begin{equation} \\label{eqOptimalCoeff}\n{\\bf a} = \\frac{{\\mathbf\\Sigma}^{\\text{-1}}{\\bf s}}{{\\bf s}^\\dagger {\\mathbf\\Sigma}^{\\text{-1}}{\\bf s}}\\,.\n\\end{equation}\nSimilar to Eq.~(7) of Ref.~\\cite{Capon1970}, Eqs. \\eqref{eqVaru} and \\eqref{eqOptimalCoeff} reveal the final noise variance $\\sigma_{\\rm fin}^2$ for the given voxel position,\n\\begin{equation}\\label{eqNoiseVar}\n\\sigma_{\\rm fin}^2 = {\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a} = \\frac{1}{{\\bf s}^\\dagger \n{\\mathbf\\Sigma^{\\text{-1}}}{\\bf s}}\\,.\n\\end{equation}\n\nIn the above derivation, we assumed little about how the individual single-sensor data were acquired. In fact, the only significant requirement was that the sensitivities $s_j$ are well defined and accessible. As discussed previously, the signal can be modeled to high accuracy at ULF (see Sec.~\\ref{ssULFMRI}).\n\n\\subsection{Figures of merit and scaling for arrays} \\label{ssFigures}\n\nGiven the $N^\\mathrm{th}$-order combination from Eqs.~\\eqref{eqVoxelLinComb} and \\eqref{eqOptimalCoeff}, the contribution of the sensor array to the voxel-wise image SNR is given by Eq.~\\eqref{eqNoiseVar}. We define the \\emph{array-sensitivity-to-noise ratio} aSNR as\n\\begin{equation} \\label{eqaSNR}\n\\text{aSNR} = \\sqrt{\\mathbf s^\\dagger \\mathbf \\Sigma^{\\text{-1}} \\mathbf s}\\,.\n\\end{equation}\nWhen each sensor in the array sees an equal flux noise level $\\sigma$, the aSNR takes the form\n\\begin{equation}\n\\text{aSNR} = \\frac{\\sqrt{\\mathbf s^\\dagger \\mathbf X^{\\text{-1}} \\mathbf s}}{\\sigma} = \\frac{\\text{array sensitivity}}{\\text{noise level}}\\,,\n\\end{equation}\nwhere $\\mathbf X = \\mathbf\\Sigma\/\\sigma^2$ is the dimensionless noise \\emph{correlation} matrix. We refer to the quantity $\\sqrt{\\mathbf s^\\dagger \\mathbf X^{\\text{-1}} \\mathbf s}$ as the \\emph{array sensitivity}, which for weak correlation is given approximately as $||\\mathbf s||_2$. Scaling law exponents for the array sensitivity are denoted by $\\alpha_\\mathrm a$, and for the aSNR by $\\delta = \\alpha_\\mathrm a - \\alpha_\\mathrm n$.\n\n\n\\subsection{Correlation of noise between sensors} \\label{ssCorrEffect}\n\nAs already seen in Secs.~\\ref{ssArrayNoise} and \\ref{ssFigures}, the aSNR is affected by the correlation of random noise between different single-sensor channels. 
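To make the voxel-wise combination concrete, the following minimal numerical sketch applies Eqs.~\\eqref{eqVoxelLinComb}, \\eqref{eqOptimalCoeff} and \\eqref{eqaSNR} to a single voxel seen by four channels, including a weak noise correlation between two of them; the sensitivities, noise level and correlation below are arbitrary illustrative numbers, not values from any of the arrays studied in this paper.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative complex sensitivities s_j of N = 4 channels for one voxel
s = np.array([0.8 + 0.2j, 0.5 - 0.4j, 0.3 + 0.1j, 0.1 + 0.3j])

# Equal noise variances with a weak correlation between channels 0 and 1
sigma2 = 0.05**2
Sigma = sigma2 * np.eye(4, dtype=complex)
Sigma[0, 1] = Sigma[1, 0] = 0.2 * sigma2

# Optimal weights a = Sigma^-1 s / (s^H Sigma^-1 s) and the resulting aSNR
Sinv_s = np.linalg.solve(Sigma, s)
a = Sinv_s / np.real(s.conj() @ Sinv_s)
aSNR = np.sqrt(np.real(s.conj() @ Sinv_s))

# Simulated single-channel voxel values: unit magnetization plus noise
noise = np.linalg.cholesky(Sigma) @ (
    (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2))
v = 1.0 * s + noise

u = a.conj() @ v   # combined, sensitivity-corrected voxel value
print("u =", np.round(u, 3), " aSNR =", np.round(aSNR, 1),
      " final noise std =", np.round(1 / aSNR, 4))
\\end{verbatim}
Setting the off-diagonal elements of $\\mathbf\\Sigma$ to zero in such a sketch reproduces the uncorrelated case assumed for the array comparisons later in this paper.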
There are two main reasons for such correlations. First, a noise source that is not an intrinsic part of a sensor can directly couple to many sensors. For instance, thermal noise in conductors close to the sensors may result in such correlated noise (see Sec.~\\ref{ssThermalNoise}). Second, the pickups of the sensors themselves are coupled to each other through their mutual inductances. This cross-coupling increases noise correlation and may also affect the sensitivity profiles via signal cross-talk.\n\nTo see the effect of noise correlation on the image SNR, consider a noise covariance matrix of the form\n\\begin{equation} \\label{eqCovSimplified}\n{\\mathbf\\Sigma} = \\sigma^2({\\bf I} + {\\bf C})\\,,\n\\end{equation}\nwhere ${\\bf I}$ is the identity matrix and $\\bf C$ contains the correlations between channels (the off-diagonal elements of $\\mathbf X$). In words, each channel has a noise variance of $\\sigma^2$ and channels $p$ and $q$ have correlation $C_{pq}={\\rm E}(\\xi_p\\xi_q^*)\/\\sigma^2$. Assume further that absolute values of the correlations $C_{pq}$ are substantially smaller than one. \n\nTo first order in $\\mathbf C$, the inverse of $\\mathbf\\Sigma$ is obtained as $\\mathbf\\Sigma^{-1} \\approx \\sigma^{-2}({\\bf I} - {\\bf C})$. The SNR in the final image, according to Eq.~\\eqref{eqNoiseVar}, is then proportional to $\\sigma_{\\rm fin}^{-1}$, with\n\\begin{align}\n \\sigma_{\\rm fin}^{-2} &\\approx \\sigma^{-2}\\left({\\bf s}^\\dagger {\\bf s} - {\\bf s}^\\dagger {\\bf C}{\\bf s}\\right)\\nonumber\\\\\n &= \\sigma^{-2}\\left\\|{\\bf s}\\right\\|_2^2 - 2\\sigma^{-2}\\sum_{p<q}{\\rm Re}\\left(C_{pq}\\,s_p^*s_q\\right)\\,.\n\\end{align}\nFor adjacent sensors, the sensitivities $s_p$ and $s_q$ typically have similar phases and the dominant correlations $C_{pq}$ are positive, so the terms in the sum typically satisfy ${\\rm Re}\\left(C_{pq}\\,s_p^*s_q\\right) > 0$. This leads to the conclusion that the noise correlation tends to decrease the image SNR.\n\nWhile the assumptions made in the above discussion may not always be exactly correct, the result is an indication that the correlation of noise between adjacent sensors is usually harmful---even if it is taken into account in reconstruction. Moreover, the actions taken in order to reduce noise correlation are often such that the noise variances decrease as well. For instance, eliminating a noise source from the vicinity of the sensor array does exactly that. \n\nCorrelation can also be reduced by minimizing the inter-sensor cross-talk, for instance by designing a sensor array with low mutual inductances between pickup coils. If the mutual inductances are non-zero, the cross-talk can be dramatically reduced by coupling the feedback of the SQUID flux-locked loop to the pickup circuit instead of more directly into the SQUID loop \\cite{SQUID-HB}. This way, the supercurrent in the pickup coil stays close to zero at all times. In theory, the cross-talk of the \\emph{flux signals} can be completely eliminated by this method.\n\nCorrelated noise originating from sources far from the subject's head and the sensor array can also be attenuated by signal processing methods prior to image reconstruction. The {\\it signal space separation} method (SSS) was developed at Elekta Neuromag Oy \\cite{Taulu2005} (now MEGIN) for use with `whole-head' MEG sensor arrays. The SSS method can distinguish between signals from inside the sensor helmet and those produced by distant sources. Now, the strong noise correlation is in fact exploited to significantly improve the SNR. Similar methods may be applicable to ULF MRI as well. 
To help such methods, additional sensors can be placed outside the helmet arrangement to provide an improved noise reference.\n\nFor sensor array comparisons, we assume that all measures have been taken to reduce correlated noise before image reconstruction. The details of the remaining noise correlation depend on many, generally unknown aspects. Therefore, we set ${\\bf C} = 0$ in Eq.~\\eqref{eqCovSimplified} for a slightly optimistic estimate, {\\it i.e.}, sensor noises are uncorrelated, each having variance $\\sigma^2$.\n\n\\subsection{Filling the array} \\label{ssSizeInfluence}\n\nIn this section, we use general scaling arguments to provide estimations of how the whole sensor array performs as a function of the pickup coil size. Consider a surface, for instance, of the shape of a helmet, and a voxel at a distance $l$ from the surface.\nThe surface is filled with $N$ pickup coils of radius $R$ to measure the field perpendicular to the surface. We assume the pickup coils are positioned either next to each other or in such a way that their areas overlap by a given fraction (see Fig.~\\ref{figGradArrays}). The number of sensors that fit the surface is then proportional to $R^{-2}$.\n\nTake, at first, a voxel far from the sensors; $l \\gg R$. Now, the signal from the voxel is spread over many sensors. For $\\mathbf\\Sigma = \\sigma^2{\\bf I}$, the aSNR is proportional to $\\|{\\bf s}\\|_2\/\\sigma$. Assume that $s_j\\propto R^{\\alpha_\\mathrm s}$ and $\\sigma \\propto R^{\\alpha_\\mathrm n}$, which leads to $\\|{\\bf s}\\|_2 \\propto \\sqrt{N}R^{\\alpha_\\mathrm s}\\propto R^{\\alpha_\\mathrm s-1}$, and finally, \n\\begin{equation}\n{\\rm aSNR} \\propto R^\\delta,\\quad \\delta = \\alpha_\\mathrm a - \\alpha_\\mathrm n = \\alpha_\\mathrm s - \\alpha_\\mathrm n-1\\, .\n\\end{equation}\nHere we thus have array sensitivity scaling according to $\\alpha_\\mathrm a = \\alpha_\\mathrm s - 1$, as opposed to $\\alpha_\\mathrm a = \\alpha_\\mathrm s$ when $N$ is fixed. \nRecall from Sec.~\\ref{ssPickups} that the flux sensitivities scale as $R^{\\alpha_\\mathrm s}$ with $\\alpha_\\mathrm s=2$ for magnetometers and $\\alpha_\\mathrm s=3$ for first-order planar gradiometers, given that $l\\gg R$. \nAssuming, for instance, optimally matched input circuits, the intrinsic flux noise of the sensor in both cases has a power law behavior with exponent $\\alpha_\\mathrm n=1\/2$ (see Sec.~\\ref{ssThermalNoise}), which yields $\\delta=0.5$ and $\\delta=1.5$. This is clearly in favor of using larger pickup coils. Especially for larger $R$, however, the cryostat noise may become dominant, and one has $\\alpha_\\mathrm n\\approx 1$. Now, magnetometer arrays have $\\delta\\approx 0$, {\\it i.e.}, the coils size does not affect the SNR. Still, gradiometer arrays perform better with larger $R$ ($\\alpha_\\mathrm a\\approx 1$).\n\nIn the perhaps unfortunate case that noise sources far from the sensors are dominant, the noise behaves like the signal, that is, $\\alpha_\\mathrm s=\\alpha_\\mathrm n$ and $\\delta=-1$. Unlike in the other cases, a higher SNR would be reached by decreasing the pickup coil size. However, such noise conditions are not realistic in the low-correlation limit. Instead, one should aim to suppress the external noise by improving the system design or by signal processing.\n\nThe breakdown of the assumption of $l \\gg R$ needs some attention. If the voxel of interest is close to the sensor array, the image value is formed almost exclusively by the closest pickup-loop. 
Now, for non-overlapping pickups, the results for single sensors ($\\alpha_\\mathrm a = \\alpha_\\mathrm s$) are applicable, and the optimum magnetometer size is $R\\approx l$. On the other hand, if the voxel is far from the array (deep in the head) and $R$ is increased to the order of $l$, it is more difficult to draw conclusions. We therefore extend this discussion in Secs.~\\ref{sMethods} and \\ref{sResults} by a computational study.\n\n\n\\section{Methods for numerical study} \\label{sMethods}\n\nIn order to be able to compare the performance of different sensor configurations, we used 3-D computer models of sensor arrays and calculated their sensitivities to signals from different locations in the sample.\n\nThe sensitivities of single pickup coils were calculated using $\\vec B_\\textrm s$ from Eq.~\\eqref{eqLeadField}. Evaluating the line integral required the coil path $\\partial S$ to be discretized. The number of discretization points could be kept small by carrying out the integration in Eq.~\\eqref{eqLeadMethod0} analytically over each of the $n$ straight line segments between consecutive discretization points $\\vec r_j$ and $\\vec r_{j+1}$ (the end point $\\vec r_n = \\vec r_0$):\n\\begin{equation}\\label{eqLeadMethod0}\n \\vec B_\\textrm s(\\vec r\\,) = \\frac{\\mu}{4\\pi}\\sum_{j=0}^{n - 1} \\int_{\\vec r\\,^\\prime = \\vec r_j}^{\\vec r_{j+1}} \\frac{d\\vec r\\,^\\prime \\times (\\vec r-\\vec r\\,^\\prime)}{|\\vec r-\\vec r\\,^\\prime|^3}\\,.\n\\end{equation}\n As shown in Appendix A, this integrates exactly to\n\\begin{equation}\\label{eqLeadMethod1}\n \\vec B_\\mathrm s(\\vec r\\,) = \\frac{\\mu}{4\\pi}\\sum_{j=0}^{n - 1} \\frac{ a_j+a_{j+1}}{a_ja_{j+1}}\\,\\frac{\\vec a_j\\times \\vec a_{j+1}}{a_j a_{j+1} + \\vec a_j\\cdot\\vec a_{j+1}}\\,,\n\\end{equation}\nwhere $\\vec a_j = \\vec r_j-\\vec r$. Besides reducing computational complexity and increasing accuracy, this result allowed exact computation for polygonal coils.\n\nFor a precession field $\\vec B_\\mathrm L = B_\\mathrm L\\widehat e_\\mathrm L$, the single-sensor sensitivities were obtained from Eq.~\\eqref{eqBetaNorm} and the array-sensitivity and aSNR maps were computed according to Sec.~\\ref{ssFigures}. The normalization of the values computed here is somewhat arbitrary; the real image SNR depends on a host of details that are not known at this point (see Sec.~\\ref{ssULFMRI}). However, the results can be used for studying array sensitivity patterns and---with noise levels scaled according to estimated coil inductances---for comparing different possible array setups. \n\n\\section{Results} \\label{sResults}\n\nNumerical calculations were performed for simple spherical sensor arrays (Sec.~\\ref{ssSphereResults}) as well as for realistic configurations (Sec.~\\ref{ssHelmetResults}), {\\it e.g.}, of the shape of a helmet. The former were used for studying scaling behavior of array sensitivities with sensor size and number, extending the discussion in Sec.~\\ref{ssSizeInfluence}. The latter were used for comparing array sensitivity patterns of different potential designs. \n\n\\subsection{Effects of size and number} \\label{ssSphereResults}\n\nA sensor array model was built by filling the surface of a sphere of radius $10\\,$cm (see Fig.~\\ref{figSphere}) with $N$ magnetometers or $N\/2$ planar units of two orthogonal planar first-order gradiometers. Combining one of the magnetometers with one of the gradiometer units would thus give a sensing unit similar to those of the Elekta\/Neuromag MEG system, though circular (radius $R$). 
All sensors were oriented to measure the radial component of the field. A spherical surface of radius $6\\,$cm was chosen to represent the cerebral cortex. The cortex surface was thus at distance $4\\,$cm from the sensor shell. In addition, the center of the sphere was considered to represent deep parts of the brain.\n\\begin{figure}\n\t\\centering\n \\includegraphics{sphere.pdf}\\\\\n\t\\caption{Geometry used in numerical analysis of the dependence of array sensitivity as functions of sensor size $R$ and number $N$ at different points inside the imaging volume. Sensors are on a spherical surface of radius $10\\,$cm. A shell with radius $6\\,$cm is representative of points on the cerebral cortex.}\n\t\\label{figSphere}\n\\end{figure}\n\nThe data in Fig.~\\ref{figRDep} show the dependence of the array sensitivity on $R$. Note that the number of sensors is approximately proportional to $R^{-2}$. The largest coil size $R=10\\,$cm corresponds to one magnetometer or gradiometer unit on each of the six faces of a cube. The solid lines correspond to the scaling of the sensitivity as $R^{\\alpha_\\mathrm a}$, $\\alpha_\\mathrm a = \\alpha_\\mathrm s - 1$. For smaller $R$, the scaling laws from Sec.~\\ref{ssSizeInfluence} hold in all cases, and particularly well for gradiometers and deep sources. The scaling law fails most notably with the magnetometer array at the cortex. Indeed, the sensitivity starts to \\emph{decrease} with $R$ when $R$ is very large, as was shown for a special case in Sec.~\\ref{ssPickups}. \n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{array_snr_rdep.pdf}\n\t\\caption{Scaling of array sensitivity at the center and on the cortex as depicted in Fig.~\\ref{figSphere}: sphere filled with magnetometer loops and with planar units of two orthogonal gradiometers arranged side by side. Error bars correspond to the minimum and maximum values. Noise scaling with size is included in the figure, illustrating a potential cross-over from sensor noise with $\\alpha_\\mathrm n = 1\/2$ to cryostat noise or suboptimal input circuit matching with $\\alpha_\\mathrm n = 1$. With fixed $N$, array sensitivity scaling is steeper and given by $\\alpha_\\mathrm a = 2, 3$ for planar magnetometers and gradiometers.}\n\t\\label{figRDep}\n\\end{figure}\n\nThe error bars in Fig.~\\ref{figRDep} correspond to the minimum and maximum value of the sensitivity at the cortex while the data symbols correspond to the average value. Despite the strong orientational dependence of single sensors (see Sec.~\\ref{ssPickups}), the array sensitivities are fairly uniform at the cortex. Only at large $R$ do the orientational effects emerge. \n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]{array_snr_ndep.pdf}\n\t\\caption{Scaling of array sensitivity as $\\sqrt{N}$ at the center and on the cortex as depicted in Fig.~\\ref{figSphere}, when the pickup coil radius is fixed at $R=1.44\\,$cm: (left) $N$ magnetometers, (right) $N\/2$ planar units of two orthogonal gradiometers. Error bars correspond to the minimum and maximum values.}\n\t\\label{figSparse}\n\\end{figure}\n\nFigure \\ref{figSparse} shows a different dataset on how the array sensitivity changes with how densely the sensors are packed into the array. In this case, a varying number of magnetometer coils or gradiometer units with fixed radius $R=1.44\\,$cm was distributed on the spherical shell. The aSNR of voxels at the center scales as $\\sqrt{N}$ to an excellent accuracy. 
While the average sensitivity at points on the cortex also obeys $\\sqrt{N}$ scaling remarkably well, the uniformity drops dramatically when $N$ is lowered below roughly 30 sensors. Closer to the sensors, {\\it e.g.}\\ on the scalp, this effect is even more pronounced.\n\n\\subsection{Realistic sensor configurations} \\label{ssHelmetResults}\n\nFigure~\\ref{fig:snrfigures} presents several possible sensor configurations and provides maps of $\\log_{10}(\\text{aSNR})$ for their comparison. The data shown are sagittal slices of the 3-D maps, {\\it i.e.}, on the symmetry plane of the sensor array. Other slices, however, displayed similar performance at the cortex. Also changing the direction of the precession field $\\vec B_\\mathrm L$ had only a minor effect on the SNR in the region of interest. In all cases shown here, $\\vec B_\\mathrm L$ was parallel to the $y$ axis, which is perpendicular to the visualization plane. Note that this contrasts with the usual MRI convention, where the $\\vec B_\\mathrm L$ direction is considered fixed and always along the $z$ axis.\n\nIn most cases, the sensors are arranged on a helmet surface at 102 positions as in the Elekta\/Neuromag system. Again, magnetometers and planar double-gradiometer units are considered separately (here, $R=1.25\\,$cm, resembling conventional MEG sensors). The same flux noise level was assumed for magnetometers and planar gradiometers of the same size. In addition, we consider arrays with axial gradiometers as well as radially oriented planar gradiometers, both cases having $k=1$, $b=4\\,$cm and $R=1.25\\,$cm. Configurations with 102 overlapping units with $R=2.5\\,$cm are also considered, as well as the existing Los Alamos 7-channel coil geometry \\cite{Zotev2007} and the single large second-order gradiometer at UC Berkeley \\cite{Clarke2007} (see figure caption). For long-baseline gradiometers with $k=1$, $L_\\mathrm p$ was estimated to be twice that of a single loop, and six times for $k=2$.\n\nWith planar sensor units of $R=1.25\\,$cm [Fig.~\\ref{fig:snrfigures}(a--b)], the aSNR for 102 magnetometers is three times that of 204 gradiometers at the cerebral cortex. At the center of the head, the difference is almost a whole order of magnitude in favor of the magnetometers. Therefore, the small gradiometers bring little improvement to the image SNR if the magnetometers are in use. However, as shown previously, gradiometer performance in particular improves steeply with coil size. Allowing the coils to overlap with $R = 2.5\\,$cm [Fig.~\\ref{fig:snrfigures}(g--h)] leads to a vastly improved aSNR, especially with gradiometers, but also with magnetometers.\n\n\nGradiometers with long baselines provide somewhat magnetometer-like sensitivity patterns while rejecting external noise. However, their aSNR performance is inferior to that of magnetometers because of their larger inductance, yielding higher flux noise when the sensor noise dominates; see Sec.~\\ref{ssIntrinsic}. Helmet arrays of magnetometers can provide an aSNR in the deepest parts of the brain similar to what the Berkeley gradiometer provides in a small area on the scalp. \n\n\\begin{figure*}\n\t\\centering\n\t\t\\includegraphics[width=0.95\\textwidth]{aSNR_maps.pdf}\n\t\\caption{Base-10 logarithms of aSNR for different sensor-array geometries. To allow comparison of different arrays, we assumed SQUID noise scaling according to optimally matched input circuits. 
(a) Magnetometers: $R=1.25\\,$cm, (b) double-gradiometer units: $R=1.25\\,$cm, (c) axial gradiometers: $b=4\\,$cm, $R=1.25\\,$cm, (d) 7 Los Alamos second-order axial gradiometers: $b = 6\\,$cm, $R=1.85\\,$cm, \n (e) Berkeley single second-order axial gradiometer: $b=7.5\\,$cm, $R=3.15\\,$cm, (f) radially oriented planar gradiometers [Fig.~\\ref{figSQUID}(f)]: $b=4\\,$cm, $R=1.25\\,$cm, (g) overlapping double-gradiometer units: $R=2.5\\,$cm, (h) overlapping magnetometers: $R=2.5\\,$cm. The data rate of the acquisition is proportional to the square of the aSNR.}\n\t\\label{fig:snrfigures}\n\\end{figure*}\n\n\\section{Conclusions and outlook}\n\nExtending Ref.~\\cite{Zevenhoven2011}, we analyzed a variety of factors that affect the noise and sensitivity of a SQUID-based sensor array for ULF MRI of the brain. Many of the principles, however, apply to non-SQUID arrays as well. We also derived numerical means for studying and comparing the SNR performance of any given sensor array design.\n\nSignal- and noise-scaling arguments and calculations showed that filling a sensor array with a huge number of tiny sensors is usually not advantageous. Larger pickup coil sizes give a better image SNR at the center of the head and, up to some point, also at closer sources such as the cerebral cortex. This is true even if the number of sensors needs to be decreased due to the limited area available for the array. However, for a fixed coil size, the average voxel SNR is proportional to the square root of the number of sensors.\n\n\n\\sloppy Several possible array designs were compared, including existing arrays designed for MEG and ULF MRI. The results are mostly in favor of magnetometers and large first-order gradiometers. While typically having inferior SNR, gradiometers do have the advantage of rejecting external fields, which also reduces transient issues due to pulsed fields \\cite{Zevenhoven2011MSc}. An especially dramatic difference was found when comparing a magnetometer-filled helmet with a single larger gradiometer.\n\nIn general, using an array of sensors relaxes the dynamic range requirements for sensor readout. Splitting a large loop into smaller ones further allows interference rejection based on correlation, while also increasing the SNR close to the center of the loop. An array of many sensors also solves the single-sensor problem of `blind angles'.\n\nOur initial analysis of \\emph{overlapping} magnetometer and gradiometer coils gave promising results. Implementing such arrays, however, poses challenges. Practical considerations include how to fabricate such an array and what materials to use. For instance, wire-wound Type-I superconducting pickup coils have shown some favorable properties \\cite{Luomahaara2011,Hwang2014} in pulsed systems, and exploiting the dynamics of superconductor-penetrating flux \\cite{Zevenhoven2011MSc,Zevenhoven2013degauss,Al-Dabbagh2018} has been promising. However, existing techniques are not suitable for helmet configurations with overlapping coils. In addition, careful design work should be conducted to minimize mutual inductances and other coupling issues. Further significant improvements could be achieved by placing the sensors closer to the scalp, but that would require dramatic advancements in cryostat technology, and was not studied here.\n\nHere, we only considered the contribution of the sensor array to the imaging performance. 
Other things to consider are the polarizing technique as well as the ability of the instrumentation to apply more sophisticated sequences and reconstruction techniques, while preserving low system noise. A class of techniques enabled by multichannel magnetometers is accelerated parallel MRI \\cite{Larkman2007}. However, the so-called geometry factor should be taken into account \\cite{Lin2013} if large parallel acceleration factors are pursued.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs cloud services and distributed data storage become increasingly prevalent, growing concerns about users' privacy have sparked much recent interest in the problem of Private Information Retrieval (PIR). Originally introduced in \\cite{PIRfirst,PIRfirstjournal}, the goal of PIR is to allow a user to efficiently retrieve a desired message from a server or a set of servers where multiple messages are stored, without revealing any information about which message is desired. In the information theoretic framework, which requires perfect privacy and assumes long messages, the capacity of PIR is the maximum number of bits of desired information that can be retrieved per bit of download from the server(s) \\cite{Sun_Jafar_PIR}. Capacity characterizations have recently been obtained for various forms of PIR, especially for the multi-server setting \\cite{Shah_Rashmi_Kannan, Sun_Jafar_PIR, Tajeddine_Gnilke_Karpuk_Hollanti, Sun_Jafar_TPIR, Sun_Jafar_SPIR, \nBanawan_Ulukus_BPIR, \nWang_Skoglund_PIRSPIRAd, \nWang_Skoglund_TSPIR, \nFREIJ_HOLLANTI, Sun_Jafar_MDSTPIR, Wang_Skoglund_MDS, Jia_Sun_Jafar_XSTPIR, Jia_Jafar_MDSXSTPIR, Yang_Shin_Lee,\nJia_Jafar_GXSTPIR, Tandon_CachePIR, Wei_Banawan_Ulukus, Tajeddine_Gnilke_Karpuk, Yao_Liu_Kang_Collusion_Pattern, PIR_survey}. \n\nPIR in the basic single server setting would be most valuable if it could be made efficient. However, it was already shown in the earliest works on PIR \\cite{PIRfirst, PIRfirstjournal} that in the single server case there is no better alternative to the trivial solution of downloading everything, which is prohibitively expensive. Since the optimal solution turns out to be trivial, single server PIR generally received less attention from the information theoretic perspective, until recently. \nInterest in the capacity of single-server PIR was revived by the seminal contribution of Kadhe et al. in \\cite{Kadhe_Garcia_Heidarzadeh_ElRouayheb_Sprintson_PIR_SI} which showed that the presence of \\emph{side information} at the user can significantly improve the efficiency of PIR, and that capacity characterizations under side information are far from trivial. This crucial observation inspired much work on understanding the role of side-information in PIR \\cite{Chen_Wang_Jafar_Side, heidarzadeh2018oncapacity, li2018single, li2020single, kazemi2019single, heidarzadeh2019single, heidarzadeh2018capacity, heidarzadeh2019capacity, Wei_Banawan_Ulukus_Side, PIR_PCSI}, which remains an active topic of research. Among the recent advances in this area is the study of single-server PIR with private coded side information (PIR-PCSI) that was initiated by Heidarzadeh, Kazemi and Sprintson in \\cite{PIR_PCSI}. Heidarzadeh et al. 
obtain sharp capacity characterizations for PIR-PCSI in many cases, and also note an open problem, along with an intriguing conjecture that motivates our work in this paper.\n\nIn the PIR-PCSI problem, a single server stores $K$ independent messages $\\bm{W}_1, \\cdots, \\bm{W}_K$, each represented by $L$ i.i.d. uniform symbols from a finite field $\\mathbb{F}_q$. A user wishes to efficiently retrieve a desired message $\\bm{W}_{\\bm{\\theta}}$, while utilizing private side information $(\\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]})$ that is unknown to the server, comprised of a linear combination $ \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}=\\sum_{m=1}^M\\bm{\\lambda}_m\\bm{W}_{i_m}$ of a uniformly chosen size-$M$ subset of messages, $\\bm{\\mathcal{S}}=\\{\\bm{i}_1,\\bm{i}_2,\\cdots,\\bm{i}_M\\}\\subset[K], \\bm{i}_1<\\bm{i}_2<\\cdots<\\bm{i}_M$, with the coefficient vector $\\bm{\\Lambda}=(\\bm{\\lambda}_1, \\cdots, \\bm{\\lambda}_M)$ whose elements are chosen i.i.d. uniform from $\\mathbb{F}_q^\\times$. Depending on whether $\\bm{\\theta}$ is drawn uniformly from $[K]\\setminus\\bm{\\mathcal{S}}$ or uniformly from $\\bm{\\mathcal{S}}$, there are two settings, known as PIR-PCSI-I and PIR-PCSI-II, respectively. In each case, $(\\bm{\\theta}, \\bm{\\mathcal{S}})$ must be kept private. Capacity of PIR is typically defined as the maximum number of bits of desired message that can be retrieved per bit of download from the server(s), and includes a supremum over message size $L$. Since the side-information formulation specifies a finite field $\\mathbb{F}_q$, the capacity of PIR-PCSI can potentially depend on the field. A field-independent notion of capacity is introduced in \\cite{PIR_PCSI} by allowing a supremum over all finite fields. For PIR-PCSI-I, where $\\bm{\\theta}\\notin \\bm{\\mathcal{S}}$, Heidarzadeh et al. fully characterize the capacity as $(K-M)^{-1}$ for $1 \\leq M \\leq K-1$. For PIR-PCSI-II, the capacity is characterized as $(K-M+1)^{-1}$ for $\\frac{K+1}{2} < M \\leq K$. Capacity characterization for the remaining case of $2 \\leq M \\leq \\frac{K+1}{2}$ is noted as an open problem in \\cite{PIR_PCSI}, and it is conjectured that the capacity in this case is also $(K-M+1)^{-1}$. \n\nThe main motivation of our work is to settle this conjecture and obtain the capacity characterization for PIR-PCSI-II when $2 \\leq M \\leq \\frac{K+1}{2}$. Given the importance of better understanding the role of side information for single-server PIR, additional motivation comes from the following questions: What is the infimum capacity (infimum over all finite fields instead of supremum)? What if the coefficient vector $\\bm{\\Lambda}$ (whose privacy is not required in \\cite{PIR_PCSI}) is also required to be private? Can the side-information be reduced, e.g., to save storage, without reducing capacity? \n\n\n\\begin{table*}[!t]\n \\caption{Capacity results for PIR-PCSI-I, PIR-PCSI-II and PIR-PCSI}\n \\label{tab:capacity}\n \\centering\n \\scalebox{0.76}{\n \\begin{tabular}{|c|c|c|}\n \\hline\n PIR-PCSI-I ($1 \\leq M \\leq K-1$) & PIR-PCSI-II ($2 \\leq M \\leq K$) & PIR-PCSI ($1 \\leq M \\leq K$)\\\\ \\hline\n $C_{\\mbox{\\tiny PCSI-I}}^{\\sup} = \\frac{1}{K-M}$, \\cite{PIR_PCSI} \n & $C_{\\mbox{\\tiny PCSI-II}}^{\\sup} =\n \\begin{cases}\n \\frac{2}{K}, & 2 \\leq M \\leq \\frac{K+1}{2}, \\text{Thm. 
\\ref{thm:cap_PCSI2_sup}}\\\\\t\n \\frac{1}{K-M+1}, & \\frac{K+1}{2} < M \\leq K,\\text{\\cite{PIR_PCSI}}\n \\end{cases}$ \n & $C_{\\mbox{\\tiny PCSI}}^{\\sup} = \n \\begin{cases}\n \\frac{1}{K-1}, & M=1,\\\\\n \\frac{1}{K-M+1}, & 2 \\leq M \\leq K,\n \\end{cases}$, Thm. \\ref{thm:cap_PCSI_sup}\\\\ \\hline\n $C_{\\mbox{\\tiny PCSI-I}}^{\\inf} =\n \\begin{cases}\n \\frac{1}{K-1}, & 1 \\leq M \\leq \\frac{K}{2},\\\\\n \\big(K - \\frac{M}{K-M}\\big)^{-1}, & \\frac{K}{2} < M \\leq K-1,\n \\end{cases}$, Thm. \\ref{thm:cap_PCSI1_inf}\n & $C_{\\mbox{\\tiny PCSI-II}}^{\\inf} = \\frac{M}{(M-1)K}$, Thm. \\ref{thm:cap_PCSI2_inf} & $C_{\\mbox{\\tiny PCSI}}^{\\inf} = \\frac{1}{K-1}$, Thm. \\ref{thm:cap_PCSI_inf} \\\\ \\hline\n \n \\begin{tabular}{c}$ C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\sup} = C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$\\\\\n $\\frac{1}{K-1} \\leq C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\leq \\min\\bigg(C_{\\mbox{\\tiny PCSI-I}}^{\\inf}, \\frac{1}{K-2}\\bigg)$\\end{tabular}, Thm. \\ref{thm:pcsi1_pub_pri} & $ C_{\\mbox{\\tiny PCSI-II}}^{\\mbox{\\tiny pri}}(q) = C_{\\mbox{\\tiny PCSI-II}}^{\\inf}$, Thm. \\ref{thm:pcsi2_pub_pri} & $ C_{\\mbox{\\tiny PCSI}}^{\\mbox{\\tiny pri}}(q) = C_{\\mbox{\\tiny PCSI}}^{\\inf}$, Thm. \\ref{thm:pcsi_pub_pri} \\\\ \\hline\n\n \n \\end{tabular}\n }\n\\end{table*}\n\n\nThe contributions of this work are summarized in Table \\ref{tab:capacity}, along with prior results from \\cite{PIR_PCSI}. As our main contribution we show that the capacity of PIR-PCSI-II for $2 \\leq M \\leq \\frac{K+1}{2}$ is equal to $2\/K$, which is strictly higher than the conjectured value in this parameter regime. The result reveals two surprising aspects of this parameter regime. First, whereas previously known capacity characterizations of PIR-PCSI-II (and PIR-PCSI-I) in \\cite{PIR_PCSI} are all strictly increasing with $M$ (the size of the support set of side information), here the capacity does not depend on $M$. Second, in this parameter regime (and also when $M=\\left\\lfloor (K+1)\/2\\right\\rfloor +1$), half of the side information turns out to be redundant, i.e., the supremum capacity remains the same even if the user discards half of the side information. Half the side information is also the minimal necessary, because we show that the same capacity is not achievable with less than half of the side information. By contrast, in other regimes no redundancy exists in the side information, i.e., any reduction in side information would lead to a loss in supremum capacity.\n\nThe optimal rate $2\/K$ is shown to be achievable for any finite field $\\mathbb{F}_q$ where $q$ is an \\emph{even} power of a prime. The achievable scheme requires downloads that are ostensibly non-linear in $\\mathbb{F}_q$, but in its essence the scheme is linear, as can be seen by interpreting $\\mathbb{F}_q$ as a $2$ dimensional vector space over the base field $\\mathbb{F}_{\\sqrt{q}}$, over which the downloads are indeed linear. Intuitively, the scheme may be understood as follows. A rate of $2\/K$ means a download of $K\/2$, which is achieved by downloading \\emph{half} of every message (one of the two dimensions in the $2$ dimensional vector space over $\\mathbb{F}_{\\sqrt{q}}$). 
The key idea is \\emph{interference alignment} -- for the undesired messages that appear in the side information, the halves that are downloaded are perfectly \\emph{aligned} with each other, whereas for the desired message, the half that is downloaded is not aligned with the downloaded halves of the undesired messages. For messages that are not included in the side information, any random half can be downloaded to preserve privacy. \n\nWith a bit of oversimplification for the sake of intuition, suppose there are $K=4$ messages that can be represented as $2$-dimensional vectors $\\bm{A}=[\\bm{a}_1~~ \\bm{a}_2], \\bm{B}=[\\bm{b}_1~~ \\bm{b}_2], \\bm{C}=[\\bm{c}_1~~ \\bm{c}_2], \\bm{D}=[\\bm{d}_1~~ \\bm{d}_2]$, the side information is comprised of $M=3$ messages, say at first $\\bm{A}+\\bm{B}+\\bm{C}=[\\bm{a}_1+\\bm{b}_1+\\bm{c}_1~~ \\bm{a}_2+\\bm{b}_2+\\bm{c}_2]$, and the desired message is $\\bm{A}$. Then the user could recover $\\bm{A}$ by downloading $\\bm{a}_1, \\bm{b}_2, \\bm{c}_2$ and either $\\bm{d}_1$ or $\\bm{d}_2$, i.e., half of each message for a total download of $K\/2=2$ (normalized by message size). We may also note that half of the side information is redundant, i.e., the user only needs $\\bm{a}_2+\\bm{b}_2+\\bm{c}_2$, and can discard the rest. But there is a problem with this oversimplification -- this toy example seemingly loses privacy because the matching indices reveal that $\\bm{b}_2$ aligns with $\\bm{c}_2$ but not $\\bm{a}_1$. This issue is resolved by noting that the side information is in fact $\\bm{\\lambda}_1\\bm{A}+\\bm{\\lambda}_2\\bm{B}+\\bm{\\lambda}_3\\bm{C}=\\bm{A}'+\\bm{B}'+\\bm{C}'$. Suppose $\\bm{\\lambda}_1, \\bm{\\lambda}_2, \\bm{\\lambda}_3$ are random (unknown to the server) independent linear transformations (\\emph{matrices}) that independently `\\emph{rotate}' the $\\bm{A}, \\bm{B}, \\bm{C}$ vectors into the $\\bm{A}',\\bm{B}',\\bm{C}'$ vectors, respectively, such that the projections (combining coefficients) of each along any particular dimension become independent of each other. In other words, $\\bm{a}_i', \\bm{b}_i', \\bm{c}_i'$ are independent projections of $\\bm{A}, \\bm{B}, \\bm{C}$, and downloading, say, $(\\bm{a}_1', \\bm{b}_2', \\bm{c}_2', \\bm{d}_2')$ reveals to the server no information about their relative alignments in the side information. From the server's perspective, each downloaded symbol is simply an independent random linear combination of the two components of the corresponding message. Intuitively, since the random rotation is needed to maintain privacy, it is important that the $\\bm{\\lambda}_i$ are matrices, not scalars (because scalars only scale, they do not rotate vectors). This is not directly the case in $\\mathbb{F}_q$ because the $\\bm{\\lambda}_i$ are scalars in $\\mathbb{F}_q$. However, when $\\mathbb{F}_q$ is viewed as a $2$-dimensional vector space over $\\mathbb{F}_{\\sqrt{q}}$, the $\\bm{\\lambda}_i$ indeed act as invertible $2\\times 2$ matrices on the vectors $\\bm{A}, \\bm{B}, \\bm{C}, \\bm{D}$, rotating each vector randomly and independently, thus ensuring privacy. \n\nIn order for $\\mathbb{F}_{\\sqrt{q}}$ to be a valid finite field, we need $q$ to be an \\emph{even} power of a prime. This suffices to characterize the capacity because the capacity definition in \\cite{PIR_PCSI} allows a supremum over all fields. However, the question remains about whether the rate $2\/K$ is achievable over every finite field. 
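As a concrete illustration of the alignment idea, the following small self-contained sketch simulates the smallest instance $q=4$, where $\\mathbb{F}_4$ is viewed as $\\mathbb{F}_2^2$ and each coefficient $\\bm{\\lambda}_m$ acts as an invertible $2\\times 2$ binary matrix. It is not part of any proof; the parameters $K=4$, $M=3$, the representation of $\\mathbb{F}_4$ and all variable names are our own illustrative choices.\n\\begin{verbatim}\nimport random\n\n# F_4 = {0, 1, w, w+1} with w^2 = w + 1; the element x0 + x1*w is stored as (x0, x1).\n# MUL[lam] is the 2x2 binary matrix of multiplication by the nonzero element lam.\nMUL = {(1, 0): [[1, 0], [0, 1]],     # multiplication by 1\n       (0, 1): [[0, 1], [1, 1]],     # multiplication by w\n       (1, 1): [[1, 1], [1, 0]]}     # multiplication by w + 1\nNONZERO_ROWS = [(1, 0), (0, 1), (1, 1)]\n\ndef mat_vec(M, v):   # matrix times column vector over F_2\n    return tuple(sum(M[i][j] * v[j] for j in range(2)) % 2 for i in range(2))\n\ndef row_mat(r, M):   # row vector times matrix over F_2\n    return tuple(sum(r[i] * M[i][j] for i in range(2)) % 2 for j in range(2))\n\ndef dot(r, v):\n    return sum(x * y for x, y in zip(r, v)) % 2\n\nK, S, theta = 4, [0, 1, 2], 0        # support set {A, B, C}, desired message A\nmsgs = [tuple(random.randint(0, 1) for _ in range(2)) for _ in range(K)]\nlams = [random.choice(list(MUL)) for _ in S]\nY = (0, 0)                           # side information Y = l1*A + l2*B + l3*C\nfor lam, k in zip(lams, S):\n    Y = tuple((y + z) % 2 for y, z in zip(Y, mat_vec(MUL[lam], msgs[k])))\n\n# Query: one nonzero linear functional (one 'half') of every message.  From the\n# server's viewpoint every entry is a uniformly random nonzero row, independent\n# of (theta, S), mirroring the privacy argument sketched above.\nw = random.choice(NONZERO_ROWS)      # private combining row, never revealed\nquery = [random.choice(NONZERO_ROWS) for _ in range(K)]   # entries for S overwritten below\nfor lam, k in zip(lams, S):\n    if k != theta:\n        query[k] = row_mat(w, MUL[lam])                   # aligned with w applied to Y\naligned_theta = row_mat(w, MUL[lams[S.index(theta)]])\nquery[theta] = random.choice([r for r in NONZERO_ROWS if r != aligned_theta])\n\nanswer = [dot(query[k], msgs[k]) for k in range(K)]       # download half of each message\n\n# Decoding: w applied to Y, minus the aligned downloads, leaves one functional of\n# the desired message; together with its own download this pins it down.\nlhs = dot(w, Y)\nfor k in S:\n    if k != theta:\n        lhs = (lhs + answer[k]) % 2\nrows, rhs = [aligned_theta, query[theta]], [lhs, answer[theta]]\nsol = [a for a in [(0, 0), (0, 1), (1, 0), (1, 1)]\n       if dot(rows[0], a) == rhs[0] and dot(rows[1], a) == rhs[1]]\nassert sol == [msgs[theta]]\nprint('decoded the desired message:', sol[0])\n\\end{verbatim}\nThe total download is $4$ bits, i.e., $K\/2=2$ symbols of $\\mathbb{F}_4$ for messages of length $L=1$, matching the rate $2\/K$. We now return to the question of achievability over fields that are not even powers of a prime.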
To understand this better, we explore an alternative definition of capacity (called infimum capacity in this work) which considers the infimum (instead of supremum) over all $\\mathbb{F}_q$. We find that the infimum capacity of PIR-PCSI-II is always equal to $M\/((M-1)K)$. Evidently, for $M=2$ the capacity is field independent because the infimum and supremum over fields produce the same capacity result. In general, however, the infimum capacity can be strictly smaller, thus confirming field-dependence. The worst case corresponds to the binary field $\\mathbb{F}_2$. Intuitively, the reason that the infimum capacity corresponds to the binary field is that over $\\mathbb{F}_2$ the non-zero coefficients $\\bm{\\lambda}_m$ must all be equal to one, and thus the coefficients are essentially known to the server. On the other hand, we also present an example with $q=3$ (and $M=3, K=4$) where $2\/K$ is achievable (and optimal), to show that the achievability of $2\/K$ for $M>2$ is not limited to just field sizes that are even powers of a prime number. We also show that for PIR-PCSI-II, the infimum capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}})$ is the same as the (supremum or infimum) capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}},\\bm{\\Lambda})$, i.e., when the coefficients $\\bm{\\Lambda}$ must also be kept private from the server. \n\nNext we consider PIR-PCSI-I where $\\bm{\\theta}$ is drawn from $[K]\\setminus\\bm{\\mathcal{S}}$. The supremum capacity of PIR-PCSI-I is fully characterized in \\cite{PIR_PCSI}. In this case, we show that there is no redundancy in the CSI. As in PIR-PCSI-II, we find that the infimum capacity of PIR-PCSI-I is strictly smaller than the supremum capacity in general, and the binary field $\\mathbb{F}_2$ yields the worst case. Unlike PIR-PCSI-II, however, the infimum capacity of PIR-PCSI-I with private $(\\bm{\\theta},\\bm{\\mathcal{S}})$ does not always match the infimum capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}},\\bm{\\Lambda})$. For example, if $M=K-1$, then both the supremum and infimum capacities of PIR-PCSI-I are equal to $1$ for private $(\\bm{\\theta},\\bm{\\mathcal{S}})$, but if the coefficient vector $\\bm{\\Lambda}$ must also be kept private then the infimum capacity is no more than $1\/(K-2)$. Thus, the loss in capacity from requiring privacy of coefficients can be quite significant.\n\nTo complete the picture, we finally consider the capacity of PIR-PCSI where $\\bm\\theta$ is drawn uniformly from $[K]$. In PIR-PCSI the server is not allowed to learn anything about whether or not $\\bm{\\theta}\\in\\bm{\\mathcal{S}}$. The supremum capacity of PIR-PCSI is found to be $(K-M+1)^{-1}$ for $2 \\leq M \\leq K$. Remarkably, this is not simply the smaller of the two capacities of PIR-PCSI-I and PIR-PCSI-II; there is an additional cost to be paid for hiding from the server whether $\\bm{\\theta} \\in \\bm{\\mathcal{S}}$ or $\\bm{\\theta} \\notin \\bm{\\mathcal{S}}$. Depending on the relative values of $M$ and $K$, in this case we find that the redundancy in CSI can be as high as $1\/2$ or as low as $0$. 
The infimum capacity of PIR-PCSI is smaller than the supremum capacity, the binary field $\\mathbb{F}_2$ yields the worst case, and as in PIR-PCSI-II, the infimum capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}})$ is the same as the (supremum or infimum) capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}},\\bm{\\Lambda})$.\n\nThis paper is organized as follows: Section \\ref{sec:state} states the PIR-PCSI, PIR-PCSI-I, and PIR-PCSI-II problems from \\cite{PIR_PCSI}. Section \\ref{sec:main} presents our capacity and redundancy (in the CSI) results for PIR-PCSI-II, PIR-PCSI-I, and PIR-PCSI as fourteen theorems, which are proved in Sections \\ref{sec:cap_PCSI2_sup} through \\ref{proof:pcsi_pub_pri}. Section \\ref{sec:con} concludes the paper and discusses possible future directions.\n\n\\emph{Notation}: For a positive integer $a$, let $[a]$ denote the set $\\{1,2,\\cdots,a\\}$. For two integers $a, b$ where $a < b$, $[a:b]$ denotes the set $\\{a, a+1, \\cdots, b\\}$. For a set $\\mathcal{S} = \\{i_1, i_2, \\cdots, i_n\\}$, $|\\mathcal{S}|$ denotes the cardinality of $\\mathcal{S}$. $\\mathbf{I}_{M}$ denotes the $M \\times M$ identity matrix, and $\\mathbf{0}_{M}$ denotes the $M \\times M$ all-zero matrix. For a matrix $\\mathbf{A}$, let $\\mathbf{A}(i,:)$ be the $i^{th}$ row of $\\mathbf{A}$. For a set $\\mathcal{A}$ whose elements are integers, let $\\mathcal{A}(i)$ denote the $i^{th}$ element of $\\mathcal{A}$ in ascending order. Let $\\mathbb{F}_{q}$ denote the finite field of order $q$ and $\\mathbb{F}_{q}^{\\times}$ contain all the non-zero elements of $\\mathbb{F}_{q}$. The notation $\\mathbb{F}_q^{a\\times b}$ represents the set of all $a\\times b$ matrices with elements in $\\mathbb{F}_q$. Let $\\mathfrak{S}$ be the set of all subsets of $[K]$ with cardinality $M$, i.e., $|\\mathfrak{S}| = \\tbinom{K}{M}$, and let $\\mathfrak{C}$ be the set of all length-$M$ sequences with elements in $\\mathbb{F}_{q}^{\\times}$, i.e., $|\\mathfrak{C}| = (q-1)^M$. For an index set $S\\subset[K]$, define the subscript notation $X_S=\\{X_s\\mid s\\in S\\}$. All entropies are in $q$-ary units.\n\n\\section{Problem Statement}\\label{sec:state}\n\\subsection{Capacity of PIR-PCSI-I, PIR-PCSI-II, PIR-PCSI}\nA single server stores $K$ independent messages $\\bm{W}_1, \\bm{W}_2, \\cdots, \\bm{W}_{K}\\in\\mathbb{F}_q^L$, each comprised of $L$ i.i.d. uniform symbols from $\\mathbb{F}_{q}$, where we refer to $\\mathbb{F}_{q}$ as the \\emph{base field}. In terms of entropies,\n\\begin{align}\n &H(\\bm{W}_{1}) = H(\\bm{W}_{2}) = \\cdots = H(\\bm{W}_{K}) = L,\\\\\n &H(\\bm{W}_{[K]}) = \\sum_{k \\in [K]}H(\\bm{W}_{k}) = KL.\n\\end{align}\n\n A user wishes to retrieve a message $\\bm{W}_{\\bm{\\theta}}$ for a privately generated index $\\bm{\\theta}$. The user has a linear combination of $M$ messages available as coded side information (CSI). $M$ is globally known. The CSI is comprised of $(\\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]})$, defined as follows. The \\emph{support index set} $\\bm{\\mathcal{S}}$, drawn uniformly from $\\mathfrak{S}$, is a subset of $[K]$ of cardinality $M$. The vector of coefficients $\\bm{\\Lambda}=(\\bm{\\lambda}_1,\\bm{\\lambda}_2,\\cdots,\\bm{\\lambda}_M)$ is drawn uniformly from $\\mathfrak{C}$. 
\nThe linear combination available to the user is\n\\begin{align}\n \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}\\triangleq \\bm{\\lambda}_1\\bm{W}_{\\bm{\\mathcal{S}}(1)} + \\bm{\\lambda}_2\\bm{W}_{\\bm{\\mathcal{S}}(2)} + \\cdots + \\bm{\\lambda}_M\\bm{W}_{\\bm{\\mathcal{S}}(M)},\\label{eq:sideinfo_CSI}\n\\end{align}\nwhere we recall the notation that $\\bm{\\mathcal{S}}(m)$ denotes the $m^{th}$ element of $\\bm{\\mathcal{S}}$, in ascending order, i.e., $\\bm{\\mathcal{S}}(1)<\\bm{\\mathcal{S}}(2)<\\cdots<\\bm{\\mathcal{S}}(M)$. \nWe assume that $(\\bm{\\theta}, \\bm{\\mathcal{S}})$, $\\bm{\\Lambda}$, $\\bm{W}_{[K]}$ are independent.\n\\begin{align}\n H(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{W}_{[K]}) = H(\\bm{\\theta}, \\bm{\\mathcal{S}}) + H(\\bm{\\Lambda}) + H(\\bm{W}_{[K]}).\n\\end{align}\n\nThere are three formulations of the problem depending on how $\\bm{\\theta}$ is chosen by the user.\n\\begin{enumerate}\n\\item{\\bf PIR-PCSI-I}: $\\bm{\\theta}$ is chosen uniformly from $[K]\\setminus\\bm{\\mathcal{S}}$.\n\\item{\\bf PIR-PCSI-II}: $\\bm{\\theta}$ is chosen uniformly from $\\bm{\\mathcal{S}}$.\n\\item{\\bf PIR-PCSI}: $\\bm{\\theta}$ is chosen uniformly from $[K]$.\n\\end{enumerate}\nWhen referring to all three formulations, we will refer to the problem as {\\bf PIR-PCSI*} for brevity. In such statements, PCSI* can be replaced with PCSI-I, PCSI-II, or PCSI to obtain corresponding statements for each of the three formulations.\n\nThe server knows the distributions but not the realizations of $\\bm\\theta, \\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}$.\nIt is required that $(\\bm{\\theta},\\bm{\\mathcal{S}})$ be kept jointly private from the server. Note that the privacy of $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}$ or the coefficient vector $\\bm{\\Lambda}$ is not required. While the server initially knows nothing about the realization of $\\bm{\\Lambda}$, a PIR-PCSI* scheme may reveal some information about the coefficients, especially if it allows for efficient retrieval without leaking any information about $(\\bm{\\theta},\\bm{\\mathcal{S}})$. Leaking information about $\\bm{\\Lambda}$ has implications for reusability of side-information, an issue that is explored recently in \\cite{Anoosheh_reusable}.\n\nIn order to retrieve $\\bm{W_\\theta}$, we assume as in \\cite{PIR_PCSI} that the user generates a random query $\\bm{Q}$ that is independent of the messages. Specifically,\n\\begin{align}\n I(\\bm{W}_{[K]}; \\bm{Q}, \\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}) = 0.\\label{eq:indQ}\n\\end{align}\nLet $\\mathcal{Q}$ denote the alphabet of $\\bm{Q}$. \n\nBecause the messages are i.i.d. uniform, and the coefficients are non-zero, according to the construction of $\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}$, it follows that\n\\begin{align}\nL&=H(\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}),\\\\\n& = H(\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} \\mid \\bm{Q}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{W}_{{[K]}\\setminus\\{\\bm{\\mathcal{S}}(m)\\}}), \\forall m \\in [M].\\label{eq:indY} \n\\end{align}\n\nThe user uploads $\\bm{Q}$ to the server. 
Mathematically, the privacy constraint is expressed as,\n\\begin{align}\n &\\text{[$(\\bm{\\theta}, \\bm{\\mathcal{S}})$ Privacy]} &&I\\left(\\bm{\\theta}, \\bm{\\mathcal{S}}; \\bm{Q}, \\bm{W}_{[K]}\\right) = 0.\\label{eq:tsprivacy}\n\\end{align}\nThe server returns an answer $\\bm{\\Delta}$ as a function of $\\bm{Q}$ and the messages, i.e.,\n\\begin{align}\n H\\left(\\bm{\\Delta} \\mid \\bm{Q}, \\bm{W}_{[K]}\\right) = 0.\n\\end{align}\nUpon receiving the answer, the user must be able to decode the desired message $\\bm{W}_{\\bm\\theta}$. \n\\begin{align}\n &\\text{[Correctness]} &&H(\\bm{W}_{\\bm{\\theta}} \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}, \\bm{\\mathcal{S}}, \\bm{\\Lambda},\\bm{\\theta}) = 0.\n\\end{align}\nThe rate of an achievable scheme is the ratio of the number of bits of the desired message to the total number of bits downloaded on average. If the average download is $D$ $q$-ary symbols, from which the $L$ $q$-ary symbols of the desired message are retrieved, then the rate achieved is,\n\\begin{align}\n R = \\frac{L}{D}\n\\end{align}\nThe capacity is the supremum of achievable rates over all message sizes $L$,\n\\begin{align}\n C_{\\mbox{\\tiny PCSI*}}(q) = \\sup_{L, \\mbox{\\tiny achievable $R$}}R.\n\\end{align}\nThe capacity can depend on the field $\\mathbb{F}_q$ which affects the nature of side information. Field-independent measures of capacity may be obtained by taking a supremum (as in \\cite{PIR_PCSI}) or infimum over all finite fields. These are called supremum and infimum capacity, respectively.\n\\begin{align}\n C_{\\mbox{\\tiny PCSI*}}^{\\sup} &= \\sup_{q}C_{\\mbox{\\tiny PCSI*}}(q),\\\\\n C_{\\mbox{\\tiny PCSI*}}^{\\inf} &= \\inf_{q}C_{\\mbox{\\tiny PCSI*}}(q).\n\\end{align}\n\n\\subsection{Capacity of PIR-PCSI* with Private Coefficients}\nRecall that in the formulation of PIR-PCSI* as presented above, while $(\\bm{\\theta},\\bm{\\mathcal{S}})$ must be kept private, the privacy of the coefficient vector $\\bm{\\Lambda}$ is not required. As an important benchmark, we consider the setting where the privacy of coefficients must also be preserved. In this setting, the privacy constraint is modified so that instead of \\eqref{eq:tsprivacy} we require the following.\n\\begin{align}\n &\\text{[$(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ Privacy]} &&I\\left(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}; \\bm{Q}, \\bm{W}_{[K]}\\right) = 0.\\label{eq:tscprivacy}\n\\end{align}\nThe capacity under this privacy constraint is referred to as the capacity with private coefficients and is denoted as $C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri}}(q)$, which is potentially a function of the field size $q$. The supremum and infimum (over $q$) of $C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri}}(q)$ are denoted as $C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri},\\sup}, C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri},\\inf}$, respectively.\n\n\\subsection{Redundancy of CSI}\nIn addition to the capacity of PIR-PCSI*, we also wish to determine how much (if any) of the side information is redundant, i.e., can be discarded without any loss in the \\emph{supremum capacity}. 
\n\nFor all $\\mathcal{S}\\in\\mathfrak{S}, \\Lambda\\in\\mathfrak{C}$, let $f_{\\mathcal{S},\\Lambda}$ be functions that produce\n\\begin{align}\n\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]} = f_{\\mathcal{S}, \\Lambda}(\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}).\n\\end{align}\nLet us refer to all these functions collectively as $\\mathcal{F}=(f_{\\mathcal{S}, \\Lambda})_{\\mathcal{S}\\in\\mathfrak{S}, \\Lambda\\in\\mathfrak{C}}$. \nDefine, $\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\mathcal{F})$ as the capacity (supremum of achievable rates) if the decoding must be based on $\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ instead of $\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}$, i.e., the correctness condition is modified to\n\\begin{align}\nH(\\bm{W}_{\\bm{\\theta}} \\mid \\bm{\\Delta}, \\bm{Q}, \\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}, \\bm{\\mathcal{S}}, \\bm{\\Lambda},\\bm{\\theta}) = 0.\n\\end{align}\nWe say that $\\mathcal{F}$ uses $\\alpha$-CSI, where\n\\begin{align}\n\\alpha=\\max_{\\mathcal{S}\\in\\mathfrak{S}, \\Lambda\\in\\mathfrak{C}} H(\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]})\/L\n\\end{align}\nWhereas storing $\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}$ requires $L$ $q$-ary symbols, note that storing $\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ requires only $\\alpha L$ storage, i.e., storage is reduced by a factor $\\alpha$. Define the $\\alpha$-CSI constrained capacity as\n\\begin{align}\n\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\alpha)&=\\sup_{\n\\mathcal{F}: ~\\mbox{\\footnotesize uses no more than $\\alpha$-CSI}} \\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\mathcal{F})\n\\end{align}\nIn other words, $\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\alpha)$ is the capacity when the user is allowed to retain no more than a fraction $\\alpha$ of the CSI $\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}$.\nThe notion of $\\alpha$-CSI constrained capacity is of broader interest on its own. However, in this work we will explore only the redundancy of CSI with regard to the supremum capacity. We say that `$\\alpha$-CSI is sufficient' if \n\\begin{align}\n\\sup_q\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\alpha)&={C}_{\\mbox{\\tiny PCSI*}}^{\\sup}\n\\end{align}\nDefine $\\alpha^*$ as the smallest value of $\\alpha$ such that $\\alpha$-CSI is sufficient.\nThe redundancy of PCSI is defined as $\\rho_{\\mbox{\\tiny PCSI*}}=1-\\alpha^*$.\nNote that the opposite extremes of $\\rho_{\\mbox{\\tiny PCSI*}}=1$ and $\\rho_{\\mbox{\\tiny PCSI*}}=0$ correspond to situations where all of the side information is redundant, and where none of the side information is redundant, respectively.\n\nFor later use, it is worthwhile to note that for any scheme that uses no more than $\\alpha$-CSI, because $\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ is a function of ${\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$, it follows from \\eqref{eq:indY} that for all\\footnote{We say $(Q,\\mathcal{S},\\Lambda)$ is feasible if $\\Pr((\\bm{Q}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}) = (Q,\\mathcal{S},\\Lambda))>0$.} feasible $(Q,\\mathcal{S},\\Lambda)$,\n{\\small\n\\begin{align}\nH\\bigg(\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]} \\mid (\\bm{Q},\\bm{\\mathcal{S}},\\bm{\\Lambda})=(Q,\\mathcal{S},\\Lambda)\\bigg)=H(\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]} )\\leq\\alpha L.\\label{eq:invaYR}\n\\end{align}\n}\nThis is because of the property that if $A$ is independent of $B$, then any function of $A$ is also independent of $B$. 
In this case, \\eqref{eq:indY} tells us that ${\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ is independent of ${\\bf Q}$, therefore so is $\\overline{{\\bm{Y}}}^{[{\\mathcal{S}}, {\\Lambda}]}$.\n\n\\section{Main Results}\\label{sec:main}\nWe start with the setting of PIR-PCSI-II (where $\\bm{\\theta}\\in\\bm{\\mathcal{S}}$), which is the main motivation for this work. Note that the case $M=1$ is trivial, because in that case the user already has the desired message. Therefore, for PIR-PCSI-II we will always assume that $M>1$.\n\\subsection{PIR-PCSI-II (where $\\bm{\\theta}$ is drawn uniformly from $\\bm{\\mathcal{S}}$)}\n\\begin{theorem}\\label{thm:cap_PCSI2_sup}\n The supremum capacity of PIR-PCSI-II is\n \\begin{align}\n C_{\\mbox{\\tiny PCSI-II}}^{\\sup} &=\\max\\left(\\frac{2}{K},\\frac{1}{K-M+1}\\right)\\\\\n &=\n \\begin{cases}\n \\frac{2}{K}, & 1 < M \\leq \\frac{K+1}{2},\\\\\t\n \\frac{1}{K-M+1}, & \\frac{K+1}{2} < M \\leq K,\\text{\\cite{PIR_PCSI}}\n \\end{cases}\n \\end{align}\n\\end{theorem}\n\nThe case $(K+1)\/20$. So consider an achievable scheme such that $\\alpha$ PCSI is sufficient and the average download cost $D\/L\\leq K\/2+\\epsilon$ for some $L$. Since $D\/L\\leq K\/2+\\epsilon$, we have\n\\begin{align}\n&LK\/2+\\epsilon L\\notag\\\\\n&\\geq D\\\\\n &\\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\\\\n &\\geq I(\\bm{\\Delta}; \\bm{W}_{[K]} \\mid \\bm{Q}) \\notag\\\\\n &= \\sum_{k \\in [K]}I(\\bm{\\Delta}; \\bm{W}_k \\mid \\bm{Q}, \\bm{W}_{[k-1]})\\\\\n &= \\sum_{k \\in [K]}\\bigg(H(\\bm{W}_k \\mid \\bm{Q}, \\bm{W}_{[k-1]})-H(\\bm{W}_k \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{W}_{[k-1]})\\bigg)\\\\\n &= \\sum_{k \\in [K]}\\bigg(H(\\bm{W}_k)-H(\\bm{W}_k \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{W}_{[k-1]})\\bigg)\\label{eq:independent}\\\\\n &\\geq \\sum_{k \\in [K]}\\bigg(H(\\bm{W}_k)-H(\\bm{W}_k \\mid \\bm{\\Delta}, \\bm{Q})\\bigg)\\\\\n &= \\sum_{k \\in [K]}I(\\bm{W}_k; \\bm{\\Delta}, \\bm{Q}),\\\\\n &\\geq K I(\\bm{W}_{k^*}; \\bm{\\Delta}, \\bm{Q})\n \\label{eq:alpha_half}\n\\end{align}\nwhere \\eqref{eq:independent} holds since all the messages and the query are mutually independent, and\n\\begin{align}\nk^*=\\arg\\min_{k\\in[K]}I(\\bm{W}_k; \\bm{\\Delta}, \\bm{Q})\n\\end{align}\nFrom \\eqref{eq:alpha_half} we have,\n\\begin{align}\n H(\\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq L\/2 -\\epsilon L\/K.\n\\end{align}\nThus, there must exist a feasible query $Q$ such that \n\\begin{align}\n H(\\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\geq L\/2-\\epsilon L\/K. \\label{eq:smallest}\n\\end{align}\nLet $\\mathcal{S} =\\{i_1, \\cdots, i_{M-1}, k^{*}\\}\\subset[K]$, such that $|\\mathcal{S}|=M$. 
Then according to Lemma \\ref{lem:privacy} and \\eqref{eq:invaYR}, there must exist $\\Lambda \\in \\mathfrak{C}$ such that \n\\begin{align}\n &H(\\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0,\\label{eq:dec_smallest}\\\\\n &H(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]} \\mid \\bm{Q} = Q) = H(\\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}) \\leq \\alpha L.\n\\end{align}\nCombining \\eqref{eq:smallest} and \\eqref{eq:dec_smallest}, we have\n\\begin{align}\n I(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}; \\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\geq L\/2-\\epsilon L\/K.\n\\end{align}\nThus \n\\begin{align}\n \\alpha L &\\geq H(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]} \\mid \\bm{Q} = Q)\\notag\\label{eq:indYQR}\\\\\n &\\geq I(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}; \\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\geq \\frac{L}{2}-\\epsilon L\/K\n\\end{align}\nwhich implies that $\\alpha \\geq 1\/2 - \\epsilon\/K$. In order to approach capacity, we must have $\\epsilon\\rightarrow 0$, therefore we need $\\alpha\\geq 1\/2$. Since this is true for any $\\alpha$ such that $\\alpha$ PCSI is sufficient, it is also true for $\\alpha^*$, and therefore the redundancy is $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 1\/2$.\n\n\\begin{lemma}\\label{lem:alpha_min_2}\n For $\\frac{K+2}{2} < M \\leq K$, the redundancy $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 0$.\n \\end{lemma}\n\n\\proof Recall that the capacity for this case is $(K-M+1)^{-1}$, i.e., the optimal average download cost is $D\/L=K-M+1$. Consider an achievable scheme such that $\\alpha$ PCSI is sufficient and the average download cost $D\/L\\leq K-M+1+\\epsilon$ for some $L$. Since $D\/L\\leq K-M+1+\\epsilon$, we have $L(K-M+1)+\\epsilon L\\geq D\\geq H(\\bm{\\Delta} \\mid \\bm{Q})$. Thus, there exists a feasible $Q$ such that \n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q} = Q) \\leq (K-M+1)L+\\epsilon L.\n\\end{align}\nFor all $i \\in [K-M+1]$, let $\\mathcal{S}_{i} = [i:i+M-1]$. Also, let $\\mathcal{S}_{K-M+2} = \\{1\\} \\cup [K-M+2:K]$. For all $i \\in [K-M+2]$, let $\\Lambda_i \\in \\mathfrak{C}$ satisfy\n\\begin{align}\n H(\\bm{W}_i \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[\\mathcal{S}_i,\\Lambda_i]}, \\bm{Q} = Q) = 0.\n\\end{align}\nSuch $\\Lambda_i$'s must exist according to Lemma \\ref{lem:privacy}. \n\nWriting $\\overline{\\bm{Y}}^{[\\mathcal{S}_i,\\Lambda_i]}$ as $\\overline{\\bm{Y}}_{i}$ for compact notation, we have \n\\begin{align}\n H(\\bm{W}_{[K-M+2]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[K-M+2]}, \\bm{Q}=Q) = 0. 
\\label{eq:redundancy2_1}\n\\end{align}\nAccording to \\eqref{eq:invaYR}, \n\\begin{align}\n H(\\overline{\\bm{Y}}_i \\mid \\bm{Q} = Q) \\leq \\alpha L.\n\\end{align}\nso we have\n\\begin{align}\n &(K-M+1)L+\\epsilon L + H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{Q} = Q)+\\alpha L\\notag\\\\\n &\\geq H(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[K-M+2]} \\mid \\bm{Q} = Q)\\\\\n &\\geq I(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[K-M+2]}; \\bm{W}_{[K-M+2]},\\overline{\\bm{Y}}_{[K-M+2]}\\mid \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{[K-M+2]},\\overline{\\bm{Y}}_{[K-M+2]} \\mid \\bm{Q}=Q)\\label{eq:redundancy2_2}\\\\\n &\\geq H(\\bm{W}_{[K-M+2]},\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{[K-M+2]} \\mid \\bm{Q}=Q)\\notag\\\\\n &\\quad\\quad + H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{W}_{[K-M+2]}, \\bm{Q}=Q)\\\\\n &\\geq (K-M+2)L\\notag\\\\\n &\\quad\\quad + H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{W}_{[M-1]}, \\bm{Q}=Q),\\label{eq:region}\n\\end{align}\nwhere \\eqref{eq:redundancy2_2} follows from \\eqref{eq:redundancy2_1}. Step \\eqref{eq:region} uses the independence of messages and queries according to \\eqref{eq:indQ} and the fact that $M-1 \\geq K-M+2$, because we require $M>(K+2)\/2$. We further bound\n\\begin{align}\n &H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{W}_{[M-1]}, \\bm{Q}=Q)\\notag\\\\\n &=H(\\overline{\\bm{Y}}_{1} \\mid\\bm{W}_{[M-1]}, \\bm{Q}=Q) + \\cdots \\notag\\\\\n &\\quad\\quad + H(\\overline{\\bm{Y}}_{K-M+1} \\mid \\bm{W}_{[M-1]}, \\overline{\\bm{Y}}_{[K-M]}, \\bm{Q}=Q)\\\\\n &\\geq \\sum_{i=1}^{K-M+1}H(\\overline{\\bm{Y}}_i\\mid \\bm{W}_{[i+M-2]}, \\bm{Q}=Q)\\label{eq:linearfunc}\\\\\n &= \\sum_{i=1}^{K-M+1}H(\\overline{\\bm{Y}}_i\\mid \\bm{Q}=Q)\\label{eq:pcsi2_red_indYO}\\\\\n &\\geq H(\\overline{\\bm{Y}}_{[K-M+1]}\\mid \\bm{Q}=Q)\\label{eq:plugin}\n\\end{align}\n \\eqref{eq:linearfunc} holds because $\\overline{\\bm{Y}}_{[i-1]}$ is a function of $\\bm{W}_{[i+M-2]}$ for all $i \\in [2:K-M+1]$. Step \\eqref{eq:pcsi2_red_indYO} follows from \\eqref{eq:invaYR}. Substituting from \\eqref{eq:plugin} into \\eqref{eq:region}, we have \n\\begin{align}\n &(K-M+1)L + \\epsilon L+ \\alpha L \\geq (K-M+2)L ,\n\\end{align}\nwhich gives $\\alpha \\geq 1-\\epsilon$. In order to approach capacity, we must have $\\epsilon\\rightarrow 0$, so we need $\\alpha\\geq 1$, and since this is true for any $\\alpha$ such that $\\alpha$ PCSI is sufficient, it is also true for $\\alpha^*$. Thus, the redundancy is bounded as $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 0$. $\\hfill\\square$\n\nAccording to Remark \\ref{rmk:PCSI2_margin} and \\ref{rmk:half_CSI}, $\\alpha = 1\/2$ is sufficient for $2 \\leq M \\leq \\frac{K+2}{2}$ and by the construction of CSI (a linear combination of messages), $\\alpha \\leq 1$. Theorem \\ref{thm:red} is thus proved.\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI2_inf}}\\label{sec:cap_PCSI2_inf}\nWe prove Theorem \\ref{thm:cap_PCSI2_inf} by first showing that $C_{\\mbox{\\tiny PCSI-II}}(q=2)\\leq M\/((M-1)K)$ and then presenting a PIR-PCSI-II scheme with rate $M\/((M-1)K)$ that works for any $\\mathbb{F}_{q}$.\n\n\\subsection{Converse for $C_{\\mbox{\\tiny PCSI-II}}(q=2)$}\nNote that Lemma \\ref{lem:privacy} is true for arbitrary $\\mathbb{F}_q$. In $\\mathbb{F}_{2}$, we can only have $\\bm{\\Lambda}=(1,1,\\cdots,1)=1_M$, i.e., the length $M$ vector whose elements are all equal to $1$. 
As a direct result of Lemma \\ref{lem:privacy}, for PIR-PCSI-II in $\\mathbb{F}_2$,\n\\begin{align}\n H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},1_{M}]}, \\bm{Q}=Q) = 0, ~\\forall(Q,\\mathcal{S}) \\in \\mathcal{Q}\\times\\mathfrak{S}.\\label{eq:pcsi2_inf_dec}\n\\end{align}\nThus, $\\forall (Q,\\mathcal{S}) \\in \\mathcal{Q}\\times\\mathfrak{S}$,\n\\begin{align}\n &H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &= H(\\bm{W}_{\\mathcal{S}}, \\bm{Y}^{[\\mathcal{S},1_M]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\label{eq:F2CSI}\\\\\n &= H(\\bm{Y}^{[\\mathcal{S},1_M]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &\\quad + H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},1_M]}, \\bm{Q}=Q)\\\\\n &\\leq L.\n\\end{align}\n\\eqref{eq:F2CSI} holds because $\\bm{Y}^{[\\mathcal{S},1_M]}$ is simply the summation of $\\bm{W}_{\\mathcal{S}}$. Averaging over $\\bm{Q}$, we have $H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L, \\forall \\mathcal{S} \\in \\mathfrak{S}$. By submodularity,\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq KL\/M.\n\\end{align}\nThe download cost can now be lower bounded as,\n\\begin{align}\n D\\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q})) \\geq \\frac{(M-1)KL}{M}.\n\\end{align}\nThus, we have shown that $C_{\\mbox{\\tiny PCSI-II}}(q=2) \\leq \\frac{M}{(M-1)K}$.\n\n\\subsection{A PIR-PCSI-II Scheme for Arbitrary $q$}\\label{sec:PCSI2_inf_ach}\nIn this section, we prove $C_{\\mbox{\\tiny PCSI-II}}(q) \\geq \\frac{M}{(M-1)K}$ for all $q$ by proposing a scheme, namely \\emph{Generic Linear Combination Based Scheme}, that can achieve the rate $\\frac{M}{(M-1)K}$ for any $\\mathbb{F}_{q}$.\n\nLet us choose $L = Ml$ where $M$ is the size of the support index set and $l$ is a positive integer which can be arbitrarily large. Thus, any message $\\bm{W}_k, k \\in [K]$ can be represented as a length-$M$ column vector $V_{\\bm{W}_k} \\in \\mathbb{F}_{q^l}^{M\\times 1}$. Let \n\\begin{align}\n V_{\\bm{W}_{\\bm{\\mathcal{S}}}} = \n \\begin{bmatrix}\n V_{\\bm{W}_{\\bm{i}_1}}^{\\mathrm{T}} & \\cdots & V_{\\bm{W}_{\\bm{i}_M}}^{\\mathrm{T}}\n \\end{bmatrix}^{\\mathrm{T}} \\in \\mathbb{F}_{q^l}^{M^2\\times 1}\n\\end{align}\nwhere $\\bm{\\mathcal{S}} = \\{\\bm{i}_1, \\cdots, \\bm{i}_M\\}$ is the support index set. The CSI $\\bm{Y}$ can be represented as $V_{\\bm{Y}} \\in \\mathbb{F}_{q^l}^{M\\times 1}$ such that,\n\\begin{align}\n V_{\\bm{Y}} = \\underbrace{\n \\begin{bmatrix}\n \\bm{\\lambda}_{1}\\mathbf{I}_{M} & \\bm{\\lambda}_{2}\\mathbf{I}_{M} & \\cdots & \\bm{\\lambda}_{M}\\mathbf{I}_{M}\n \\end{bmatrix}}_{M}V_{\\bm{W}_{\\bm{\\mathcal{S}}}},\n\\end{align}\nwhere $\\mathbf{I}_{M} \\in \\mathbb{F}_{q^l}^{M\\times M}$ is the $M \\times M$ identity matrix. 
\n\nThe download is specified as,\n\\begin{align}\n \\bm{\\Delta} = \\{&\\mathbf{L}_1^{(1)}V_{\\bm{W}_{1}}, \\cdots, \\mathbf{L}_1^{(M-1)}V_{\\bm{W}_{1}}, \\notag\\\\ \n &\\cdots, \\mathbf{L}_K^{(1)}V_{\\bm{W}_{K}}, \\cdots, \\mathbf{L}_K^{(M-1)}V_{\\bm{W}_{K}}\\},\n\\end{align}\nwhere $\\forall k \\in [K], m \\in [M-1], \\mathbf{L}_k^{(m)} \\in \\mathbb{F}_{q^l}^{1\\times M}$ is a length-$M$ row vector, i.e., for any message vector $V_{\\bm{W}_{k}} \\in \\mathbb{F}_{q^l}^{M\\times 1}$, $\\bm{\\Delta}$ contains $M-1$ linear combinations of that message vector.\n\nSuppose the vectors $\\mathbf{L}_k^{(m)}$ are chosen such that $\\forall \\mathcal{S}=\\{j_1, \\cdots, j_{M}\\} \\in \\mathfrak{S}$, the following $M^2\\times M^2$ square matrix has full rank:\n\\begin{align}\n \\mathbf{G}_{\\mathcal{S}} = \n \\begin{bmatrix}\n \\lambda_{1}\\mathbf{I}_{M} & \\cdots & \\lambda_{M}\\mathbf{I}_{M}\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(M-1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(M-1)} &\n \\end{bmatrix}.\\label{eq:F2_inv}\n\\end{align}\nNote that $(\\lambda_1, \\cdots, \\lambda_M) \\in \\mathfrak{C}$ is the realization of $\\bm{\\Lambda}$, $\\mathbf{e}_{m}, m\\in [M]$ is the $m^{th}$ row of the $M\\times M$ identity matrix and ``$\\otimes$'' is the Kronecker product.\n\nThe correctness constraint is satisfied because the side-information and the downloads allow the user to obtain $\\mathbf{G}_{\\bm{\\mathcal{S}}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, which can then be multiplied by the inverse of $\\mathbf{G}_{\\bm{\\mathcal{S}}}$ to obtain $V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, i.e., $\\bm{W}_{\\bm{\\mathcal{S}}}$, which contains $\\bm{W}_{\\bm{\\theta}}$. Specifically, the side-information corresponds to the first $M$ rows of $\\mathbf{G}_{\\bm{\\mathcal{S}}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, the downloads $\\mathbf{L}_{\\bm{j}_1}^{(1)}V_{\\bm{W}_{\\bm{j}_1}},\\cdots,\\mathbf{L}_{\\bm{j}_1}^{(M-1)}V_{\\bm{W}_{\\bm{j}_1}}$ correspond to the next $M-1$ rows of $\\mathbf{G}_{\\bm{\\mathcal{S}}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, and so on.\n\nOn the other hand, the privacy constraint is satisfied because the download is the same regardless of the realization of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$; indeed, the construction is such that for every feasible $\\mathcal{S}$, the user is able to decode all $M$ messages $\\bm{W}_{\\mathcal{S}}$. \n\nFinally, let us evaluate the rate achieved by this scheme. Since the user downloads a $\\frac{M-1}{M}$ fraction of every message, the download cost is $D=LK(M-1)\/M$, and the rate achieved is $M\/((M-1)K)$. Since this rate is achieved for any $\\mathbb{F}_q$, we have the lower bound $C_{\\mbox{\\tiny PCSI-II}}(q) \\geq M\/((M-1)K)$.\n\nIt remains to show the existence of such $\\mathbf{L}_k^{(m)}$, for which we need the following lemma.\n\\begin{lemma}\\label{lem:existence}\n There exist $\\{\\mathbf{L}_{k}^{(m)}\\}_{k \\in [K], m \\in [M-1]}$ such that for every $\\mathcal{S} = \\{j_1, \\cdots, j_{M}\\} \\in \\mathfrak{S}$, the matrix $ \\mathbf{G}_{\\mathcal{S}} $ in \\eqref{eq:F2_inv} has full rank, provided \n \\begin{align}\n q^l > \\tbinom{K}{M}M(M-1).\n \\end{align}\n\\end{lemma}\n\\proof The proof is in Appendix \\ref{app:existence}. $\\hfill\\square$\n\nWith the help of Lemma \\ref{lem:existence}, Theorem \\ref{thm:cap_PCSI2_inf} is proved.
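\nBefore presenting a worked example, we note that the existence claim of Lemma \\ref{lem:existence} is also easy to confirm numerically for small parameters; this is only a sanity check and is not part of the formal proof. The following Python sketch assumes, purely for illustration, $K=4$, $M=2$, $q=2$ and $l=4$ (so that $q^l = 16 > \\tbinom{4}{2}M(M-1) = 12$), implements $\\mathbb{F}_{2^4}$ arithmetic directly with the irreducible polynomial $x^4+x+1$, and searches for random vectors $\\mathbf{L}_k^{(1)}$ that make every matrix $\\mathbf{G}_{\\mathcal{S}}$ in \\eqref{eq:F2_inv} invertible.\n\\begin{verbatim}\nimport itertools, random\n\n# Illustration only: K = 4, M = 2, base field F_2, l = 4, so q^l = 16.\nL_EXP = 4\nMOD_POLY = 0b10011            # x^4 + x + 1, irreducible over F_2\nFIELD = list(range(1 << L_EXP))\nK, M = 4, 2\n\ndef gf_mul(a, b):\n    # Multiply two F_{2^4} elements (polynomial basis, reduced modulo MOD_POLY).\n    r = 0\n    while b:\n        if b & 1:\n            r ^= a\n        b >>= 1\n        a <<= 1\n        if a & (1 << L_EXP):\n            a ^= MOD_POLY\n    return r\n\ndef gf_inv(a):\n    # Brute-force inverse of a nonzero F_{2^4} element.\n    return next(x for x in FIELD[1:] if gf_mul(a, x) == 1)\n\ndef full_rank(mat):\n    # Gaussian elimination over F_{2^4}; True iff the square matrix is invertible.\n    m = [row[:] for row in mat]\n    n = len(m)\n    for col in range(n):\n        piv = next((r for r in range(col, n) if m[r][col]), None)\n        if piv is None:\n            return False\n        m[col], m[piv] = m[piv], m[col]\n        scale = gf_inv(m[col][col])\n        m[col] = [gf_mul(scale, v) for v in m[col]]\n        for r in range(n):\n            if r != col and m[r][col]:\n                f = m[r][col]\n                m[r] = [v ^ gf_mul(f, w) for v, w in zip(m[r], m[col])]\n    return True\n\ndef all_G_invertible(Lvec):\n    # Check G_S for every support set S = {j1, j2}; lambda_1 = lambda_2 = 1 over F_2.\n    for j1, j2 in itertools.combinations(range(K), 2):\n        G = [[1, 0, 1, 0],                      # [lambda_1 I_2 | lambda_2 I_2]\n             [0, 1, 0, 1],\n             [Lvec[j1][0], Lvec[j1][1], 0, 0],  # e_1 (x) L_{j1}^{(1)}\n             [0, 0, Lvec[j2][0], Lvec[j2][1]]]  # e_2 (x) L_{j2}^{(1)}\n        if not full_rank(G):\n            return False\n    return True\n\nrandom.seed(1)\nfor attempts in range(1, 1001):\n    # One random row vector L_k^{(1)} per message (M - 1 = 1 download per message).\n    Lvec = [[random.choice(FIELD), random.choice(FIELD)] for _ in range(K)]\n    if all_G_invertible(Lvec):\n        print('found valid L vectors after', attempts, 'attempt(s):', Lvec)\n        break\n\\end{verbatim}\nConsistent with the Schwartz-Zippel argument in Appendix \\ref{app:existence}, a random draw typically succeeds within a few attempts; implementing the small field directly merely keeps the sketch self-contained.\n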
Let us illustrate the scheme with an example.\n\n\\begin{example}\nConsider $M=2, K=4, L=2l, q = 2$. The $4$ messages are $\\bm{A},\\bm{B},\\bm{C},\\bm{D}$. The user has $\\bm{A}+\\bm{B}$ as the side information and wants to retrieve $\\bm{A}$.\n\n$\\bm{A}$ can be represented as a $2\\times 1$ vector $V_{\\bm{A}} = [V_{\\bm{A}}(1) \\quad V_{\\bm{A}}(2)]^{\\mathrm{T}}$ where $V_{\\bm{A}}(1), V_{\\bm{A}}(2) \\in \\mathbb{F}_{2^l}$. Similarly, $\\bm{B}, \\bm{C}, \\bm{D}$ can be represented as $V_{\\bm{B}}$, $V_{\\bm{C}}$, $V_{\\bm{D}}$, respectively. Thus, \n\\begin{align}\n V_{\\bm{W}_{\\bm{\\mathcal{S}}}} = [V_{\\bm{A}}(1) \\quad V_{\\bm{A}}(2) \\quad V_{\\bm{B}}(1) \\quad V_{\\bm{B}}(2)]^{\\mathrm{T}},\n\\end{align}\n\\begin{align}\n V_{\\bm{Y}} = \n \\begin{bmatrix}\n 1&0&1&0\\\\\n 0&1&0&1\n \\end{bmatrix}V_{\\bm{W}_{\\bm{\\mathcal{S}}}} =\n \\begin{bmatrix}\n V_{\\bm{A}}(1)+V_{\\bm{B}}(1)\\\\\n V_{\\bm{A}}(2)+V_{\\bm{B}}(2)\n \\end{bmatrix}.\n\\end{align}\n\nThe download from the server is \n\\begin{align}\n\\bm{\\Delta} = \\{&V_{\\bm{A}}(1)+\\alpha_1 V_{\\bm{A}}(2), V_{\\bm{B}}(1)+\\alpha_2 V_{\\bm{B}}(2),\\notag\\\\\n&V_{\\bm{C}}(1)+\\alpha_3 V_{\\bm{C}}(2), V_{\\bm{D}}(1)+\\alpha_4 V_{\\bm{D}}(2)\\}.\n\\end{align}\nwhere $\\alpha_1, \\cdots, \\alpha_4$ are elements of $\\mathbb{F}_{2^l}$ \nsuch that the $4\\times 4$ matrix over $\\mathbb{F}_{2^l}$, \n\\begin{align}\n\\begin{bmatrix}\n 1&0&1&0\\\\\n 0&1&0&1\\\\\n 1&\\alpha_i&0&0\\\\\n 0&0&1&\\alpha_j\n\\end{bmatrix}\n\\end{align}\nhas full rank for any $i < j, \\{i,j\\} \\subset [4]$, which is true if and only if $\\alpha_1, \\alpha_2, \\alpha_3, \\alpha_4,1,0$ are distinct. Thus any $2^l\\geq 6$ works, i.e., it suffices to choose $l=3$.\n\\end{example}\n\n\n\\section{Proof of Theorem \\ref{thm:MK}}\\label{proof:MK}\nFor the case $q=2$, it suffices to download any $K-1$ messages out of the $K$ messages to achieve the capacity $\\frac{1}{K-1}$, since the desired message is either directly downloaded or can be recovered by subtracting the $K-1$ downloaded messages from the CSI.\n\nFor $q \\neq 2$, to achieve the capacity $1$, it suffices to download a linear combination of all $K$ messages with non-zero coefficients. Specifically, \n\\begin{align}\n \\bm{\\Delta} = \\bm{Y} + \\bm{\\lambda}^{\\prime}\\bm{W}_{\\bm{\\theta}},\n\\end{align}\nwhere $\\bm{Y}$ is the CSI and $\\bm{\\lambda}^{\\prime} \\in \\mathbb{F}_{q}^{\\times}$ is a non-zero element in $\\mathbb{F}_{q}$ such that $\\bm{\\lambda}_{\\bm{t}} + \\bm{\\lambda}^{\\prime} \\neq 0$ (let $\\bm{\\lambda}_{\\bm{t}}$ denote the coefficient in front of $\\bm{W}_{\\bm{\\theta}}$ in the CSI $\\bm{Y}$). Such $\\bm{\\lambda}^{\\prime}$ always exists for $q \\neq 2$. From the server's perspective, the user is downloading a random linear combination of $K$ messages so the privacy constraint is satisfied. The user is able to decode $\\bm{W}_{\\bm{\\theta}}$ by subtracting $\\bm{Y}$ from $\\bm{\\Delta}$ so the correctness constraint is satisfied. \n\n\\section{Proof of Theorem \\ref{thm:M3K4}}\\label{proof:M3K4}\nLet us denote the $K=4$ messages as $\\bm{W}_1=\\bm{A},\\bm{W}_2=\\bm{B},\\bm{W}_3=\\bm{C},\\bm{W}_4=\\bm{D}$ for simpler notation. We have $M = 3$, the base field is $\\mathbb{F}_{3}$ and the length of each message is $L=1$. Our goal is to prove the achievability of rate $1\/2$, i.e., download cost $D=2$ for $L=1$. 
The user downloads, \n\\begin{align}\n \\bm{\\Delta} = \\{&\\bm{\\Delta}_1 = \\bm{A} + \\bm{\\eta}_{b}\\bm{B} + \\bm{\\eta}_{c}\\bm{C}, \\notag\\\\\n &\\bm{\\Delta}_2 = 2\\bm{\\eta}_{b}\\bm{B} + \\bm{\\eta}_{c}\\bm{C} + \\bm{\\eta}_{d}\\bm{D}\\}.\\label{eq:queryfixed}\n\\end{align}\nFrom $\\bm{\\Delta}$, the user is able to also compute \n\\begin{align}\n \\bm{L}_1 = \\bm{\\Delta}_1 + \\bm{\\Delta}_2 &= \\bm{A} + 2\\bm{\\eta}_{c}\\bm{C} + \\bm{\\eta}_{d}\\bm{D},\\\\\n \\bm{L}_2 = \\bm{\\Delta}_1 + 2\\bm{\\Delta}_2 &= \\bm{A} + 2\\bm{\\eta}_{b}\\bm{B} + 2\\bm{\\eta}_{d}\\bm{D}.\n\\end{align}\nLet $\\bm{W}_{\\bm{\\theta}}$ denote the desired message. Let us normalize $\\bm{\\lambda_1}=1$ without loss of generality.\nThe $\\bm{\\eta}_b, \\bm{\\eta}_c, \\bm{\\eta}_d$ values are specified as follows. \n\\begin{enumerate}\n \\item When $\\bm{\\mathcal{S}}=\\{1,2,3\\}$ and $\\bm{Y} = \\bm{A} + \\bm{\\lambda}_2\\bm{B} + \\bm{\\lambda}_3\\bm{C}$, then $\\bm{\\eta}_d$ is randomly chosen from $\\mathbb{F}_{3}^{\\times}=\\{1,2\\}$ and $\\bm{\\eta}_b,\\bm{\\eta}_c$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{\\Delta}_1$ as follows.\n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{A}:& ~(\\bm{\\eta}_b,\\bm{\\eta}_c) =( 2 \\bm{\\lambda}_2, 2 \\bm{\\lambda}_3), 2\\bm{A}=\\bm{Y}+\\bm{\\Delta}_1 \\notag\\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{B}:&~(\\bm{\\eta}_b,\\bm{\\eta}_c) =( 2 \\bm{\\lambda}_2, \\bm{\\lambda}_3), \\bm{\\lambda}_2\\bm{B}=2\\bm{Y}+\\bm{\\Delta}_1 \\notag\\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{C}:&~ (\\bm{\\eta}_b,\\bm{\\eta}_c) =( \\bm{\\lambda}_2, 2\\bm{\\lambda}_3),\\bm{\\lambda}_3\\bm{C}=2\\bm{Y}+\\bm{\\Delta}_1 \\notag\n \\end{align} \n }\n \n \\item When $\\bm{\\mathcal{S}}=\\{2,3,4\\}$ and $\\bm{Y} = \\bm{B} + \\bm{\\lambda}_2\\bm{C} + \\bm{\\lambda}_3\\bm{D}$, then $\\bm{\\eta}_b$ is randomly chosen from $\\mathbb{F}_{q}^{\\times}=\\{1,2\\}$ and $\\bm{\\eta}_c,\\bm{\\eta}_d$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{\\Delta}_2$ as follows.\n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{B}:& ~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\eta}_b \\bm{\\lambda}_2, \\bm{\\eta}_b \\bm{\\lambda}_3), \\bm{B}=2\\bm{Y}+\\bm{\\Delta}_2\/\\bm{\\eta}_b \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{C}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\eta}_b \\bm{\\lambda}_2, 2\\bm{\\eta}_b \\bm{\\lambda}_3), 2\\bm{\\lambda}_2 \\bm{C}=\\bm{Y}+\\bm{\\Delta}_2\/\\bm{\\eta}_b \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{D}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =(2 \\bm{\\eta}_b \\bm{\\lambda}_2, \\bm{\\eta}_b \\bm{\\lambda}_3), 2\\bm{\\lambda}_3 \\bm{D}=\\bm{Y}+\\bm{\\Delta}_2\/\\bm{\\eta}_b \\notag \n \\end{align} \n }\n \\item When $\\bm{\\mathcal{S}}=\\{1,3,4\\}$ and $\\bm{Y} = \\bm{A} + \\bm{\\lambda}_2\\bm{C} + \\bm{\\lambda}_3\\bm{D}$, then $\\bm{\\eta}_b$ is randomly chosen from $\\mathbb{F}_{q}^{\\times}$ and $\\bm{\\eta}_c,\\bm{\\eta}_d$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{L}_1$ as follows.\n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{A}:& ~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, 2\\bm{\\lambda}_3), 2\\bm{A}=\\bm{Y}+\\bm{L}_1 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{C}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, \\bm{\\lambda}_3), \\bm{\\lambda}_2\\bm{C}=2\\bm{Y}+\\bm{L}_1 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{D}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( 
2\\bm{\\lambda}_2, 2\\bm{\\lambda}_3), \\bm{\\lambda}_3 \\bm{D}=2\\bm{Y}+\\bm{L}_1 \\notag \n \\end{align} \n }\n\n \\item When $\\bm{\\mathcal{S}}=\\{1,2,4\\}$ and $\\bm{Y} = \\bm{A} + \\bm{\\lambda}_2\\bm{B} + \\bm{\\lambda}_3\\bm{D}$, then $\\bm{\\eta}_c$ is randomly chosen from $\\mathbb{F}_{q}^{\\times}$ and $\\bm{\\eta}_b,\\bm{\\eta}_d$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{L}_2$ as follows. \n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{A}:& ~(\\bm{\\eta}_b,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, \\bm{\\lambda}_3), 2\\bm{A}=\\bm{Y}+\\bm{L}_2 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{B}:&~(\\bm{\\eta}_b,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, 2\\bm{\\lambda}_3), \\bm{\\lambda}_2\\bm{B}=2\\bm{Y}+\\bm{L}_2 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{D}:&~(\\bm{\\eta}_b,\\bm{\\eta}_d) =( 2\\bm{\\lambda}_2, \\bm{\\lambda}_3), \\bm{\\lambda}_3 \\bm{D}=2\\bm{Y}+\\bm{L}_2 \\notag \n \\end{align} \n }\n\\end{enumerate}\nCorrectness is already shown. For privacy, note that the form of the query is fixed as in \\eqref{eq:queryfixed} so the user only needs to specify $\\bm{\\eta}_b,\\bm{\\eta}_c,\\bm{\\eta}_d$, and those are i.i.d. uniform over $\\mathbb{F}_3^{\\times}=\\{1,2\\}$, regardless of $(\\bm{\\mathcal{S}},\\bm{\\theta})$. Thus, the scheme is private, and the rate achieved is $1\/2$, which completes the proof of Theorem \\ref{thm:M3K4}.\n\n\\section{Proof of Theorem \\ref{thm:pcsi2_pub_pri}}\\label{proof:pcsi2_pub_pri}\n\\subsection{Converse}\nHere we prove that \n\\begin{align}\n C_{\\mbox{\\tiny PCSI-II}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI-II}}(q=2) = C_{\\mbox{\\tiny PCSI-II}}^{\\inf}.\n\\end{align}\n\nThe following lemma states that for PIR-PCSI*, for every feasible $Q$ and $(\\theta, \\mathcal{S})$ value, all possible coefficient vectors must allow successful decoding.\n\\begin{lemma}\\label{lem:fullypri} Under the constraint of $(\\bm{\\theta}, \\bm{\\mathcal{S}, \\bm{\\Lambda}})$ privacy, \n \\begin{align}\n &\\mbox{PIR-PCSI: } \\forall (Q,\\mathcal{S},\\theta,\\Lambda)\\in\\mathcal{Q}\\times \\mathfrak{S}\\times[K]\\times\\mathfrak{C},\\notag\\\\\n &\\hspace{0.2cm} H(\\bm{W}_{\\theta} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0.\\label{eq:pcsi_pri}\\\\\n &\\mbox{PIR-PCSI-I: }\\forall (Q,\\mathcal{S},\\theta,\\Lambda)\\in\\mathcal{Q}\\times \\mathfrak{S}\\times([K]\\setminus\\mathcal{S})\\times\\mathfrak{C},\\notag\\\\\n &\\hspace{0.2cm} H(\\bm{W}_{\\theta} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0.\\label{eq:pcsi1_pri}\\\\\n &\\mbox{PIR-PCSI-II: }\\forall (Q,\\mathcal{S},\\theta,\\Lambda)\\in\\mathcal{Q}\\times \\mathfrak{S}\\times\\mathcal{S}\\times\\mathfrak{C},\\notag\\\\\n &\\hspace{0.2cm} H(\\bm{W}_{\\theta} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0.\\label{eq:pcsi2_pri}\n \\end{align}\n\\end{lemma}\n\\proof Since the server knows $\\bm{\\Delta}, \\bm{Q}$ and can test all possible realizations of $\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}$ for decodability. If there exists $(\\theta, \\mathcal{S}, \\Lambda)$ such that $\\bm{W}_{\\theta}$ cannot be decoded, then that $(\\theta, \\mathcal{S}, \\Lambda)$ can be ruled out by the server. 
This contradicts the joint $(\\bm{\\theta}, \\bm{\\mathcal{S}, \\bm{\\Lambda}})$ privacy constraint.$\\hfill\\square$\n\nAs a direct result of \\eqref{eq:pcsi2_pri}, for any PIR-PCSI-II scheme that preserves joint $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, \n\\begin{align}\n H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0,\\notag\\\\\n \\forall (\\mathcal{S}, \\Lambda, Q) \\in \\mathfrak{S}\\times\\mathfrak{C}\\times\\mathcal{Q}.\\label{eq:pcsi2_pri_dec}\n\\end{align}\nNote that \\eqref{eq:pcsi2_pri_dec} is a \\emph{stronger} version of \\eqref{eq:pcsi2_inf_dec} which is sufficient to bound $C_{\\mbox{\\tiny PCSI-II}}(q=2)$. Thus, we have $C_{\\mbox{\\tiny PCSI-II}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI-II}}(q=2) = C_{\\mbox{\\tiny PCSI-II}}^{\\inf}$.\n\n\\subsection{Achievability}\nThe \\emph{Generic Linear Combination Based Scheme} in Section \\ref{sec:PCSI2_inf_ach} where $M-1$ linear combinations of each messages (represented in the extended field $\\mathbb{F}_{q^l}$ where $L = Ml$) are downloaded, also works under $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, but with a slight modification. The only difference between the modified scheme and the infimum capacity achieving scheme of PIR-PCSI-II in Section \\ref{sec:PCSI2_inf_ach} is that, instead of the matrix in \\eqref{eq:F2_inv}, the following matrix \n\\begin{align}\n \\mathbf{G}_{\\mathcal{S}}^{(\\gamma_1, \\gamma_2, \\cdots,\\gamma_M)} = \n \\begin{bmatrix}\n \\gamma_{1}\\mathbf{I}_{M} & \\cdots & \\gamma_{M}\\mathbf{I}_{M}\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(M-1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(M-1)} &\n \\end{bmatrix},\\label{eq:Fq_inv_arb}\n\\end{align}\nmust have full rank for every $\\mathcal{S} = \\{j_1, \\cdots, j_{M}\\} \\in \\mathfrak{S}$ and every realization of $(\\gamma_1, \\gamma_2, \\cdots,\\gamma_M) \\in \\mathfrak{C}$. Let us prove that the scheme is correct, jointly private and such $\\mathbf{L}_{\\cdot}^{(\\cdot)}$ vectors exist when $l$ is large enough that,\n\\begin{align}\n q^l > (q-1)^{M}\\tbinom{K}{M}M(M-1).\n\\end{align}\n\n\n\\proof For a particular realization of $(\\gamma_1, \\gamma_2, \\cdots, \\gamma_M)$, e.g., $(\\gamma_1, \\gamma_2, \\cdots, \\gamma_M) = (1, 1, \\cdots, 1)$, \\eqref{eq:Fq_inv_arb} yields a set of $\\tbinom{K}{M}$ matrices \n\\begin{align}\n \\mathcal{G}^{(1,1,\\cdots,1)} = \\{\\mathbf{G}_{\\mathcal{S}_{1}}^{(1,1,\\cdots,1)}, \\mathbf{G}_{\\mathcal{S}_{2}}^{(1,1,\\cdots,1)}, \\cdots, \\mathbf{G}_{\\mathcal{S}_{\\tbinom{K}{M}}}^{(1,1,\\cdots,1)}\\}\\notag\n\\end{align}\ncorresponding to all possible $\\{j_1, j_2, \\cdots, j_M\\} \\in \\mathfrak{S}$. If all the $\\tbinom{K}{M}$ matrices in $\\mathcal{G}^{(1,1,\\cdots,1)}$ are invertible, this scheme preserves the joint privacy of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$ and enables the user to decode all the $M$ messages in the support set, when all the coefficients in CSI are $1$, according to Appendix \\ref{app:existence}. 
\n\nGoing over all the possible realizations of $(\\gamma_1, \\cdots, \\gamma_M) \\in \\mathfrak{C}$ and $\\{j_1, j_2, \\cdots, j_M\\} \\in \\mathfrak{S}$, \\eqref{eq:Fq_inv_arb} yields $(q-1)^{M}$ sets of matrices \n\\begin{align}\n \\mathcal{G}^{(1,\\cdots,1)}, \\mathcal{G}^{(1,\\cdots,1,2)}, \\cdots, \\mathcal{G}^{(q-1,\\cdots,q-1)},\n\\end{align}\neach of which contains $\\tbinom{K}{M}$ matrices, i.e., there are in total $(q-1)^{M}\\tbinom{K}{M}$ matrices. If all the $(q-1)^{M}\\tbinom{K}{M}$ matrices are invertible, then for an arbitrary realization of $(\\gamma_1, \\gamma_2, \\cdots, \\gamma_M)$, i.e., arbitrary $M$ coefficients in the CSI, this scheme enables the user to decode all the $M$ messages in the support set and preserves the joint $(\\bm{\\theta}, \\bm{\\mathcal{S}})$ privacy. Since this scheme works for arbitrary coefficients, from the server's perspective, all the realizations of the $M$ coefficients are equally likely. Thus, the joint privacy of the coefficients $\\bm{\\Lambda}$, the index $\\bm{\\theta}$, and the support set $\\bm{\\mathcal{S}}$ is preserved.\n\nTo prove the existence of such linear combinations, note that the determinant of each one of the $(q-1)^{M}\\tbinom{K}{M}$ matrices yields a degree $M(M-1)$ multi-variate polynomial as proved in Appendix \\ref{app:existence}. Thus, the product $F$ of the determinants of all the matrices is a multi-variate polynomial of degree $(q-1)^{M}\\tbinom{K}{M}M(M-1)$. Again, as in Appendix \\ref{app:existence}, according to the Schwartz-Zippel Lemma, when $q^l > (q-1)^{M}\\tbinom{K}{M}M(M-1)$, there exist elements in $\\mathbb{F}_{q^l}$ such that the polynomial $F$ does not evaluate to $0$, i.e., all the $(q-1)^{M}\\tbinom{K}{M}$ matrices are invertible. $\\hfill\\square$\n\n\n\\section{Proof of Theorem \\ref{thm:redundancy1}}\\label{proof:redundancy1}\nHere we bound the redundancy $\\rho_{\\mbox{\\tiny PCSI-I}}$ from above (equivalently, lower-bound $\\alpha^{*}$) for $1 \\leq M \\leq K-1$.\n\nRecall that the supremum capacity for PIR-PCSI-I is $(K-M)^{-1}$, i.e., the optimal average download cost is $D\/L = K-M$. Consider an achievable scheme such that $\\alpha$ PCSI is sufficient and the average download cost $D\/L \\leq K-M+\\epsilon$ for some $L$. Since $D\/L \\leq K-M+\\epsilon$, we have $L(K-M) + \\epsilon L \\geq D \\geq H(\\bm{\\Delta} \\mid \\bm{Q})$. Thus, there exists a feasible $Q$ such that \n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q}=Q) \\leq (K-M)L + \\epsilon L.\n\\end{align}\nFor all $i \\in [M]$, let $\\mathcal{S}_{i} = [M+1] \\setminus \\{i\\}$. Also, for all $i \\in [M+1:K]$, let $\\mathcal{S}_{i} = [M]$. For all $i \\in [K]$, let $\\Lambda_i \\in \\mathfrak{C}$ satisfy \n\\begin{align}\n H(\\bm{W}_{i} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[\\mathcal{S}_i, \\Lambda_i]}, \\bm{Q}=Q) = 0.\\label{eq:redundancy1_1}\n\\end{align}\nSuch $\\Lambda_i$'s must exist according to \\eqref{eq:lemma1pcsi1} in Lemma \\ref{lem:privacy}.\n\nWriting $\\overline{\\bm{Y}}^{[\\mathcal{S}_i, \\Lambda_i]}$ as $\\overline{\\bm{Y}}_{i}$ for compact notation, we have \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{W}_{[M]}, \\bm{Q}=Q)\\label{eq:redundancy1_2}\\\\\n &= H(\\bm{W}_{[M+1:K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[K]}, \\bm{W}_{[M]}, \\bm{Q}=Q)\\label{eq:redundancy1_3}\\\\\n &= 0\\label{eq:redundancy1_4},\n\\end{align}\nwhere \\eqref{eq:redundancy1_2} follows from \\eqref{eq:redundancy1_1}.
\\eqref{eq:redundancy1_3} is correct since $\\overline{\\bm{Y}}_{[M+1:K]}$ are functions of $\\bm{W}_{[M]}$. \\eqref{eq:redundancy1_4} follows from \\eqref{eq:redundancy1_1}. Since we are considering the case where the supremum capacity is achieved, we have \n\\begin{align}\n &(K-M)L + \\epsilon L + M\\alpha L\\notag\\\\\n &\\geq H(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]} \\mid \\bm{Q}=Q)\\label{eq:redundancy1_5}\\\\\n &\\geq I(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}; \\bm{W}_{[K]} \\mid \\bm{Q}=Q)\\notag\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{Q}=Q) = KL.\\label{eq:redundancy1_6}\n\\end{align}\n\\eqref{eq:redundancy1_5} follows from \\eqref{eq:invaYR}. Step \\eqref{eq:redundancy1_6} follows from \\eqref{eq:redundancy1_4} and the fact that the query and the messages are mutually independent according to \\eqref{eq:indQ}. Thus we have $\\alpha \\geq 1 - \\frac{\\epsilon}{M}$. In order to approach capacity, we must have $\\epsilon\\rightarrow 0$, so we need $\\alpha\\geq 1$, and since this is true for any $\\alpha$ such that $\\alpha$ PCSI is sufficient, it is also true for $\\alpha^*$. Thus, the redundancy is bounded as $\\rho_{\\mbox{\\tiny PCSI-I}}\\leq 0$.\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI1_inf}}\\label{sec:cap_PCSI1_inf}\n\\subsection{Converse for $C_{\\mbox{\\tiny PCSI-I}}(q=2)$}\nAgain, \\eqref{eq:lemma1pcsi1} is true for arbitrary $\\mathbb{F}_{q}$. The only thing different in $\\mathbb{F}_{2}$ is that $\\bm{\\Lambda}$ must be the vector of all ones. As a direct result of \\eqref{eq:lemma1pcsi1}, for PIR-PCSI-I in $\\mathbb{F}_{2}$,\n\\begin{align}\n H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, {\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}, \\bm{Q} = Q) = 0, \\forall (Q,\\mathcal{S}) \\in \\mathcal{Q} \\times \\mathfrak{S}\\label{eq:dec_inf_PCSI1_1}\n\\end{align}\nand thus \n\\begin{align}\n &H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q} = Q) \\\\\n &=I(\\bm{W}_{[K]\\setminus\\mathcal{S}}; {\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}\\mid \\bm{\\Delta}, \\bm{Q} = Q)\\\\\n &\\leq H({\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}\\mid \\bm{\\Delta}, \\bm{Q} = Q)\\\\\n &\\leq L, ~~\\forall (Q,\\mathcal{S}) \\in \\mathcal{Q} \\times \\mathfrak{S}.\n\\end{align}\nAveraging over $\\bm{Q}$ gives \n\\begin{align}\n H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L, \\forall \\mathcal{S} \\in\\mathfrak{S}.\\label{eq:dec_inf_PCSI1_2}\n\\end{align}\nAlso, for all $\\mathcal{S} \\in \\mathfrak{S}$ and $Q \\in \\mathcal{Q}$, \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\\\ \n &= H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &~~ + H(\\bm{W}_{[K]\\setminus\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{W}_{\\mathcal{S}}, \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\ \n &~~ + H(\\bm{W}_{[K]\\setminus\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{W}_{\\mathcal{S}}, {\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}, \\bm{Q}=Q)\\label{eq:pcsi1_inf_Ysum}\\\\\n &= H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}=Q),\n\\end{align}\nwhere \\eqref{eq:pcsi1_inf_Ysum} results from the fact that $\\overline{\\bm{Y}}^{[\\mathcal{S}, 1_{M}]} = \\sum_{s \\in \\mathcal{S}}\\bm{W}_{s}$, and the last step follows from \\eqref{eq:dec_inf_PCSI1_1}. 
Averaging over $\\bm{Q}$, it follows that \n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) = H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}), &&\\forall \\mathcal{S} \\in \\mathfrak{S}.\\label{eq:pcsi1_inf_equ}\n\\end{align}\n\nLet us first prove $C_{\\mbox{\\tiny PCSI-I}}(q=2) \\leq (K-1)^{-1}$ in the regime where $1 \\leq M \\leq \\frac{K}{2}$. \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\notag\\\\\n &=H(\\bm{W}_{[M]} \\mid \\bm{\\Delta}, \\bm{Q})\\label{eq:dec_inf_PCSI1_r1_1}\\\\\n &\\leq H(\\bm{W}_{[K-M]} \\mid \\bm{\\Delta}, \\bm{Q})\\label{eq:MtoK-M}\\\\\n & \\leq L \\label{eq:K-MtoL},\n\\end{align}\nwhere \\eqref{eq:dec_inf_PCSI1_r1_1} is true according to \\eqref{eq:pcsi1_inf_equ}, \\eqref{eq:MtoK-M} follows from $(K-M \\geq M)$ and \\eqref{eq:dec_inf_PCSI1_1}, and \\eqref{eq:K-MtoL} follows from \\eqref{eq:dec_inf_PCSI1_2}. Thus\n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q}) &\\geq I(\\bm{\\Delta}; \\bm{W}_{[K]} \\mid \\bm{Q})\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{Q}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q})\\\\\n &\\geq KL - L.\n\\end{align} \nThus $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL-L$ and since the rate $L\/D\\leq (K-1)^{-1}$ for every achievable scheme, we have shown that $C_{\\mbox{\\tiny PCSI-I}}(q=2) \\leq (K-1)^{-1}$ when $K-M\\geq M\\geq 1$, i.e., $1\\leq M\\leq K\/2$.\n\nNext let us prove that $C_{\\mbox{\\tiny PCSI-I}}(q=2) \\leq \\big(K - \\frac{M}{K-M}\\big)^{-1}$ for the regime $\\frac{K}{2} < M \\leq K-1$. It suffices to prove $H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL - \\frac{ML}{K-M}$. Define,\n\\begin{align}\n H_{m}^{K} = \\frac{1}{\\tbinom{K}{m}}\\sum_{\\mathcal{M}:\\mathcal{M}\\subset[K], |\\mathcal{M}|=m}\\frac{H(\\bm{W}_{\\mathcal{M}} \\mid \\bm{\\Delta}, \\bm{Q})}{m},\n\\end{align} \nwe have\n\\begin{align}\n H_{K-M}^{K}&\\geq H_{M}^{K}\\label{eq:dec_inf_PCSI1_r2_1}\\\\\n &= \\frac{H(\\bm{W}_{[K]}\\mid \\bm{\\Delta}, \\bm{Q})}{M},\\label{eq:dec_inf_PCSI1_r2_2}\n\\end{align}\nwhere \\eqref{eq:dec_inf_PCSI1_r2_1} follows from Han's inequality \\cite{Cover_Thomas}, and \\eqref{eq:dec_inf_PCSI1_r2_2} follows from \\eqref{eq:pcsi1_inf_equ}. Note that according to \\eqref{eq:dec_inf_PCSI1_2},\n\\begin{align}\n \\frac{L}{K-M} \\geq H_{K-M}^{K},\n\\end{align}\nand therefore,\n\\begin{align}\n H(\\bm{W}_{[K]}\\mid \\bm{\\Delta}, \\bm{Q}) \\leq \\frac{ML}{K-M}.\n\\end{align}\nThus, $H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL - \\frac{ML}{K-M}$, which completes the converse proof for Theorem \\ref{thm:cap_PCSI1_inf}. We next prove achievability.\n\n\\subsection{Two PIR-PCSI-I Schemes for Arbitrary $q$}\\label{sec:PCSI1_inf_ach}\n\\subsubsection{Achieving rate $\\frac{1}{K-1}$ when $1 \\leq M \\leq \\frac{K}{2}$}\\label{sec:PCSI1_inf_ach1}\nThe goal here is to download $K-1$ generic linear combinations so that along with the one linear combination already available as side-information, the user has enough information to retrieve all $K$ messages. Let $L$ be large enough that $q^L > \\tbinom{K}{M}(K-1)$. For all $k \\in [K]$, message $\\bm{W}_k \\in \\mathbb{F}_{q}^{L\\times 1}$ can be represented as a scalar $\\bm{w}_k \\in \\mathbb{F}_{q^L}$. Let \n\\begin{align}\n \\bm{w}_{[K]} = \n \\begin{bmatrix}\n \\bm{w}_1 & \\bm{w}_2 & \\cdots & \\bm{w}_K\n \\end{bmatrix}^{\\mathrm{T}} \\in \\mathbb{F}_{q^L}^{K\\times 1},\n\\end{align}\nbe the length $K$ column vector whose entries are the messages represented in $\\mathbb{F}_{q^{L}}$. 
Let $\\Psi \\in\\mathbb{F}_{q^L}^{K\\times (K-1)}$ be a $K\\times (K-1)$ matrix whose elements are the variables $\\psi_{ij}$. The user downloads \n\\begin{align}\n \\bm{\\Delta} =\\Psi^T \\bm{w}_{[K]} \\in\\mathbb{F}_{q^L}^{(K-1)\\times 1}.\n\\end{align}\nSuppose the realization of the coefficient vector is $\\bm{\\Lambda}=\\Lambda$. The linear combination available to the user can be expressed as $\\bm{Y}^{[{\\Lambda},\\bm{\\mathcal{S}}]}=U_{{{\\Lambda}},\\bm{\\mathcal{S}}}^T\\bm{w}_{[K]}$ for some $K\\times 1$ vector $U_{{\\Lambda},\\bm{\\mathcal{S}}}$ that depends on $({\\Lambda},\\bm{\\mathcal{S}})$. Combined with the download, the user has \n\\begin{align}\n[U_{{\\Lambda},\\bm{\\mathcal{S}}}, \\Psi]^T\\bm{w}_{[K]},\n\\end{align}\nso if the $K\\times K$ matrix $G_{{\\Lambda},\\bm{\\mathcal{S}}}=[U_{{\\Lambda},\\bm{\\mathcal{S}}}, \\Psi]$ is invertible (full rank) then the user can decode all $K$ messages. For all $\\mathcal{S}\\in\\mathfrak{S}$, let $f_{\\Lambda,\\mathcal{S}}(\\cdot)$ be the multi-variate polynomial of degree $K-1$ in variables $\\psi_{ij}$, representing the determinant of $G_{{\\Lambda},{\\mathcal{S}}}$. This is not the zero polynomial because the $K-1$ columns of $\\Psi$ can always be chosen to be linearly independent of the vector $U_{{\\Lambda},{\\mathcal{S}}}$ in a $K$ dimensional vector space. The product of all such polynomials, $f_\\Lambda=\\prod_{\\mathcal{S}\\in\\mathfrak{S}}f_{\\Lambda,\\mathcal{S}}$ is itself a multi-variate non-zero polynomial of degree $(K-1)\\binom{K}{M}$ in the variables $\\psi_{ij}$. By Schwartz-Zippel Lemma, if the $\\psi_{ij}$ are chosen randomly from $\\mathbb{F}_{q^L}$ then the probability that the corresponding evaluation of $f_\\Lambda$ is zero, is no more than $(K-1)\\binom{K}{M}\/q^L<1$, so there exists a choice of $\\psi_{ij}$ for which all $f_{\\Lambda,\\mathcal{S}}$ evaluate to non-zero values, i.e., $G_{\\Lambda, \\mathcal{S}}$ is invertible for every $\\mathcal{S}\\in\\mathfrak{S}$. Thus, with this choice of $\\Psi$, we have a scheme with rate $1\/(K-1)$ that is correct and private and allows the user to retrieve all $K$ messages. To verify privacy, note that the user constructs the query based on the realization of $\\bm\\Lambda$ alone, and does not need to know $(\\bm{\\mathcal{S}},\\bm{\\theta})$ before it sends the query, so the query is independent of $(\\bm{\\mathcal{S}},\\bm{\\theta})$. \n\n\\begin{remark}\\label{rmk:pcsi1_inf_pcsi_inf}\nSince the scheme allows the user to decode all messages, the scheme also works if $\\bm{\\theta}$ is uniformly drawn from $[K]$, i.e., in the PIR-PCSI setting.\n\\end{remark}\n\n\\subsubsection{Achieving rate $(K-\\frac{M}{K-M})^{-1}$ when $K\/2 < M \\leq K-1$}\nNow let us present a scheme with rate $(K-\\frac{M}{K-M})^{-1}$ which is optimal for the regime $\\frac{K}{2} < M \\leq K-1$. 
The scheme is comprised of two steps.\n\n\\emph{Step 1}: The user converts the $(M,K)$ PIR-PCSI-I problem to $(K-M,K)$ PIR-PCSI-II problem as follows.\n\nThe user first downloads\n\\begin{align}\n \\bm{\\Delta}_{1} = \\sum_{k \\in [K]}\\bm{a}_{k}\\bm{W}_{k},\n\\end{align}\nwhere $\\bm{a}_{\\bm{i}_m} = \\bm{\\lambda}_m$ for $\\bm{i}_{m} \\in \\bm{\\mathcal{S}}$ while for $k \\notin \\bm{\\mathcal{S}}$, $\\bm{a}_k$'s are independently and uniformly drawn from $\\mathbb{F}_{q}^{\\times}$.\nThe user then computes\n\\begin{align}\n \\bm{Y}^{\\prime} = \\bm{\\Delta}_1 - \\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} = \\sum_{k \\in [K]\\setminus \\bm{\\mathcal{S}}}\\bm{a}_{k}\\bm{W}_{k}.\n\\end{align} \nIn this step, from the server's perspective, $\\bm{a}_1, \\cdots, \\bm{a}_K$ are i.i.d. uniform over $\\mathbb{F}_{q}^{\\times}$, thus there is no loss of privacy. The download cost of this step is $H(\\bm{\\Delta}_1) = L$.\n\n\\emph{Step 2}: The user has $\\bm{Y}^{\\prime}$ as coded side information and applies the fully private PIR-PCSI-II scheme described in Section \\ref{proof:pcsi2_pub_pri} that protects the privacy of all the coefficients. \n\nThe reason to apply the PIR-PCSI-II scheme that maintains the privacy of coefficients is that in \\emph{Step 1}, server knows $\\bm{a}_1, \\cdots, \\bm{a}_K$. If in the second step, the Query is not independent of $\\bm{a}_i, i \\in [K]\\setminus \\bm{\\mathcal{S}}$, then the server may be able to rule out some realizations of $\\bm{\\mathcal{S}}$. The download cost of this step is $\\frac{K(K-M-1)L}{K-M}$. Thus, the total download cost of this scheme is $KL - \\frac{ML}{K-M}$ and the rate is $\\big(K - \\frac{M}{K-M}\\big)^{-1}$.\n\n\\section{Proof of Theorem \\ref{thm:pcsi1_pub_pri}}\\label{proof:pcsi1_pub_pri}\n\\subsection{Proof of $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\sup}=C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$}\nFirst let us prove the converse. As a direct result of \\eqref{eq:pcsi1_pri} in Lemma \\ref{lem:fullypri}, for any PIR-PCSI-I scheme that preserves joint $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, \n\\begin{align}\n H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0, \\notag\\\\\n \\forall (\\mathcal{S}, \\Lambda, Q) \\in \\mathfrak{S}\\times\\mathfrak{C}\\times\\mathcal{Q}.\\label{eq:pcsi1_pri_dec}\n\\end{align}\nNote that \\eqref{eq:pcsi1_pri_dec} is a stronger version of \\eqref{eq:dec_inf_PCSI1_1} which is sufficient to bound $C_{\\mbox{\\tiny PCSI-I}}(q=2)$. Thus, we have $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI-I}}(q=2) = C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$, which completes the proof of converse.\n\nFor achievability, let us note that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\sup}\\geq C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q=2)=C_{\\mbox{\\tiny PCSI-I}}(q=2)=C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$, because over $\\mathbb{F}_2$, the $\\bm{\\Lambda}$ vector is constant (all ones) and therefore trivially private.\n\n\\subsection{Proof of the bound: $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\leq \\min\\bigg(C_{\\mbox{\\tiny PCSI-I}}^{\\inf}, \\frac{1}{K-2}\\bigg)$}\nSince privacy of $\\bm\\Lambda$ only further constrains PIR-PCSI, it is trivial that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\leq C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$. 
For the remaining bound, $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf}\\leq \\frac{1}{K-2}$, it suffices to show that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q\\geq M) \\leq \\frac{1}{K-2}$, because $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf}\\leq C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q\\geq M)$. Note that by $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q\\geq M)$ we mean $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q)$ for all $q\\geq M$.\n\nLet \n\\begin{align}\n \\bm{Y}_1 &= \\bm{W}_2 + \\alpha_3 \\bm{W}_3 + \\cdots + \\alpha_{M+1} \\bm{W}_{M+1},\\\\\n \\bm{Y}_2 &= \\bm{W}_1 + \\bm{W}_3 + \\bm{W}_4 + \\cdots + \\bm{W}_{M+1},\n\\end{align}\nwhere $\\alpha_3, \\alpha_4, \\cdots, \\alpha_{M+1}$ are $M-1$ distinct elements in $\\mathbb{F}_{q}^\\times$.\n\nLet $\\beta_3, \\beta_4, \\dots, \\beta_{M+1}$ be $M-1$ distinct elements in $\\mathbb{F}_{q}^\\times$ such that $\\forall{m \\in [3:M+1]}, \\beta_{m}\\alpha_{m} + 1 = 0$ in $\\mathbb{F}_{q}$.\n\nNote that such $\\alpha$'s and $\\beta$'s exist since $q \\geq M$.\n\nThen let \n\\begin{align}\n \\bm{Y}_{m} &= \\beta_m \\bm{Y}_1 + \\bm{Y}_2 \\notag\\\\\n &= \\bm{W}_1 + \\beta_m \\bm{W}_2 + (\\beta_m \\alpha_3 + 1)\\bm{W}_3 + \\cdots \\notag\\\\\n &\\quad +(\\beta_m \\alpha_i + 1) \\bm{W}_i + \\cdots + (\\beta_m \\alpha_{M+1} + 1)\\bm{W}_{M+1}, \\notag\\\\\n &\\forall m \\in [3:M+1],\n\\end{align}\nbe $M-1$ linear combinations of the first $M+1$ messages $\\bm{W}_{[M+1]}$. Note that for any $m \\in [3:M+1]$, the coefficient for $\\bm{W}_m$ in $\\bm{Y}_m$ (i.e., $\\beta_{m}\\alpha_{m} + 1$) is $0$ while the coefficient for any $\\bm{W}_i, i\\in[M+1], i\\neq m$ (i.e., $\\beta_{m}\\alpha_{i} + 1$) is non-zero\\footnote{Since $\\beta_{m}\\alpha_{m}+1=0$, $\\beta_{m}\\alpha_{i}+1\\neq 0$ for $i \\neq m$.}. For example, \n\\begin{align}\n \\bm{Y}_3 &= \\bm{W}_1 + \\beta_3 \\bm{W}_2 + 0\\bm{W}_3 + (\\beta_3\\alpha_4 + 1)\\bm{W}_4\\notag\\\\\n &\\quad + \\cdots + (\\beta_3\\alpha_{M+1} + 1)\\bm{W}_{M+1}.\n\\end{align}\nThus, for any $m \\in [M+1]$, $\\bm{Y}_m$ is a linear combination of the $M$ messages $\\bm{W}_{[M+1]\\setminus\\{m\\}}$ with non-zero coefficients. For $\\mathcal{S}_m=[M+1]\\setminus\\{m\\}$ and $\\Lambda_m$ as the vector of coefficients that appear in $\\bm{Y}_m$, we have $\\bm{Y}^{[\\mathcal{S}_m,\\Lambda_m]}=\\bm{Y}_m$.\n\nAccording to \\eqref{eq:pcsi1_pri_dec}, \n\\begin{align}\n H(\\bm{W}_m, \\bm{W}_{[M+2:K]} \\mid \\bm{\\Delta}, \\bm{Y}_m, \\bm{Q} = Q) = 0,\\notag\\\\\n \\forall m \\in [M+1], Q \\in \\mathcal{Q}. \\label{eq:PCSI1_pri_dec1}\n\\end{align}\nThus, for all $Q\\in\\mathcal{Q}$,\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q} = Q) \\notag\\\\\n &\\leq H(\\bm{W}_{[K]}, \\bm{Y}_{[M+1]} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\\\\n &= H(\\bm{Y}_{[M+1]} \\mid \\bm{\\Delta}, \\bm{Q}=Q) + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M+1]}, \\bm{Q}=Q)\\\\\n &= H(\\bm{Y}_1, \\bm{Y}_2 \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\label{eq:PCSI1_pri_dec2}\\\\\n &\\leq 2L,\n\\end{align}\nwhere \\eqref{eq:PCSI1_pri_dec2} follows from \\eqref{eq:PCSI1_pri_dec1} and the fact that $\\bm{Y}_{[3:M+1]}$ are functions of $\\bm{Y}_{1}, \\bm{Y}_{2}$.
Averaging over $\\bm{Q}$, we have \n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq 2L.\n\\end{align}\n\n\\noindent Therefore, the average download cost is bounded as,\n\\begin{align}\n D&\\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}\\mid\\bm{Q}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\\\\n & \\geq (K-2)L.\n\\end{align}\nThus, for $q\\geq M$, we have $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q) \\leq \\frac{1}{K-2}$.\n\n\\subsection{Proof of $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\geq \\frac{1}{K-1}$}\\label{sec:pcsi1_pri_ach}\nWe need to show that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q)\\geq \\frac{1}{K-1}$ for all $\\mathbb{F}_q$. The scheme is identical to the scheme with rate $(K-1)^{-1}$ in Section \\ref{sec:PCSI1_inf_ach1} with a slight modification. Instead of fixing a realization $\\bm{\\Lambda}=\\Lambda$, we will consider all possible realizations $\\Lambda\\in\\mathfrak{C}$, and consider the product polynomial $f=\\prod_{\\Lambda\\in\\mathfrak{C}}f_{\\Lambda}$ which is a multi-variate polynomial of degree $(K-1)\\binom{K}{M}(q-1)^M$ in variables $\\psi_{ij}$. Following the same argument based on the Schwartz-Zippel Lemma, we find that there exists a $\\Psi$ for which all $G_{\\Lambda,\\mathcal{S}}$ are invertible matrices, provided that $L$ is large enough that $q^L>(q-1)^M(K-1)\\binom{K}{M}$. Thus, with this choice of $\\Psi$ we have a scheme that allows the user to retrieve all $K$ messages. The scheme is also $(\\bm{\\mathcal{S}},\\bm{\\theta},\\bm{\\Lambda})$ private because the user does not need to know the realization of $(\\bm{\\mathcal{S}},\\bm{\\theta},\\bm{\\Lambda})$ before it sends the query, so the query is independent of $(\\bm{\\mathcal{S}},\\bm{\\theta},\\bm{\\Lambda})$. \n\n\\begin{remark}\\label{rmk:pcsi1_pri_pcsi_pri}\nSince the scheme allows the user to decode all messages, and the query does not depend on $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$, the scheme also works if $\\bm{\\theta}$ is uniformly drawn from $[K]$, i.e., in the PIR-PCSI setting.\n\\end{remark}\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI_sup}}\\label{sec:cap_PCSI_sup}\n\\subsection{Converse}\nThe converse is divided into two regimes.\n\n\\textbf{Regime 1}: $2 \\leq M \\leq K$. The proof relies on \\eqref{eq:lemma1pcsi} in Lemma \\ref{lem:privacy}.\nConsider any particular realization $Q \\in \\mathcal{Q}$ of $\\bm{Q}$.
For all $i \\in [K]$, consider $\\mathcal{S} = [M], \\theta = i$, and let $\\Lambda_i$ be a coefficient vector that satisfies \\eqref{eq:lemma1pcsi} according to Lemma \\ref{lem:privacy}, so that \n\\begin{align}\n H(\\bm{W}_i \\mid \\bm{\\Delta}, \\bm{Y}^{[[M],\\Lambda_i]}, \\bm{Q} = Q) = 0.\\label{eq:con_PCSI_0}\n\\end{align}\nWriting $\\bm{Y}^{[[M],\\Lambda_i]}$ as $\\bm{Y}_{i}$ for compact notation, we have\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M-1]}, \\bm{Q} = Q)\\notag\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M-1]}, \\bm{W}_{[M-1]}, \\bm{Q} = Q)\\label{eq:con_PCSI_1}\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{W}_{[M]}, \\bm{Q} = Q)\\label{eq:con_PCSI_2}\\\\\n &= H(\\bm{W}_{[M+1:K]} \\mid \\bm{\\Delta}, \\bm{W}_{[M]}, \\bm{Y}_{[M+1:K]}, \\bm{Q} = Q)\\label{eq:con_PCSI_3}\\\\\n &= 0,\\label{eq:con_PCSI_3a}\n\\end{align}\nwhere \\eqref{eq:con_PCSI_1} holds according to \\eqref{eq:con_PCSI_0}, and \\eqref{eq:con_PCSI_2} follows from the fact that $\\bm{W}_M$ is decodable by subtracting $\\bm{W}_{[M-1]}$ terms from $\\bm{Y}_1$. Then, \\eqref{eq:con_PCSI_3} uses the fact that $\\bm{Y}_{[M+1:K]}$ are functions of $\\bm{W}_{[M]}$. Finally, \\eqref{eq:con_PCSI_3a} follows from \\eqref{eq:con_PCSI_0}. \n\nAveraging over $\\bm{Q}$, \n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M-1]}, \\bm{Q}) = 0.\\label{eq:con_PCSI_4}\n\\end{align}\nThen we have \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q})\\\\\n &= H(\\bm{W}_{[K]}, \\bm{Y}_{[M-1]} \\mid \\bm{\\Delta}, \\bm{Q})\\label{eq:con_PCSI_5}\\\\\n &= H(\\bm{Y}_{[M-1]} \\mid \\bm{\\Delta}, \\bm{Q}) + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{Y}_{[M-1]})\\\\\n &\\leq H(\\bm{Y}_{[M-1]})\\label{eq:con_PCSI_6}\\\\\n &\\leq (M-1)L,\n\\end{align}\nwhere \\eqref{eq:con_PCSI_5} follows from the fact that $\\bm{Y}_{[M-1]}$ are linear combinations of $\\bm{W}_{[M]}$. Step \\eqref{eq:con_PCSI_6} holds because of \\eqref{eq:con_PCSI_4}, and because conditioning reduces entropy.\n\nThus $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq (K-M+1)L$, which implies that $C_{\\mbox{\\tiny PCSI}}^{\\sup} \\leq (K-M+1)^{-1}$ for $2 \\leq M \\leq K$.\n\n\\textbf{Regime 2}: $M=1$.\n\nConsider any particular realization $Q \\in \\mathcal{Q}$ of $\\bm{Q}$. Since $M=1$, $\\bm\\Lambda$ is irrelevant, e.g., we may assume $\\bm{\\Lambda}=\\Lambda=1$ without loss of generality. For all $j \\in [2:K]$, consider $\\mathcal{S} = \\{1\\}, \\theta = j$, and apply \\eqref{eq:lemma1pcsi} according to Lemma \\ref{lem:privacy} so that \n\\begin{align}\n H(\\bm{W}_j \\mid \\bm{\\Delta}, \\bm{Y}^{[\\{1\\},1]}, \\bm{Q}=Q) = 0\\\\\n\\implies H(\\bm{W}_{[2:K]} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\{1\\},1]}, \\bm{Q}=Q) = 0\\label{eq:dec_PCSI_corner}\n\\end{align}\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\\\\n &\\leq H(\\bm{W}_1, \\bm{Y}^{[\\{1\\},1]} \\mid \\bm{\\Delta}, \\bm{Q} = Q)\\label{eq:corner_PCSI_1}\\\\\n &= H(\\bm{W}_1 \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\label{eq:corner_PCSI_2}\\\\\n &\\leq L,\n\\end{align}\nwhere \\eqref{eq:corner_PCSI_1} holds since \\eqref{eq:dec_PCSI_corner} holds. Averaging over $\\bm{Q}$, $H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L$. 
Thus $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq KL-L$, which implies that $C_{\\mbox{\\tiny PCSI}}(q) \\leq (K-1)^{-1}$ for $M=1$.\n\n\\subsection{Achievability}\nFor $2 \\leq M \\leq K$, the achievable scheme will be a combination of \\emph{Specialized GRS Codes} and \\emph{Modified Specialized GRS Codes} which are schemes in \\cite{PIR_PCSI} for PIR-PCSI-I and PIR-PCSI-II setting, respectively.\n\nThe rate $(K-M)^{-1}$ is achievable by \\emph{Specialized GRS Codes} for PIR-PCSI-I setting and the rate $(K-M+1)^{-1}$ is achievable by \\emph{Modified Specialized GRS Codes} for the PIR-PCSI-II setting. Both schemes work for $L=1$, so let us say $L=1$ here. Intuitively, these two achievable schemes have the same structures as explained below. \n\nFor the PIR-PCSI-I setting, the desired message is not contained in the support set. The download will be $K-M$ linear equations of $K$ unknowns ($K$ messages). These $K-M$ linear equations are independent by design, so they allow the user to eliminate any $K-M-1$ unknowns and get an equation in the remaining $K-(K-M-1) = M+1$ unknowns (messages). Let these $M+1$ unknowns be the $M$ messages in the support set and the desired message. With careful design, the equation will be equal to $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]} + \\bm{\\lambda}^{\\prime}\\bm{W}_{\\bm{\\theta}}$ for some non-zero $\\bm{\\lambda}^\\prime$. Thus by subtracting CSI from the equation the user is able to recover $\\bm{W}_{\\bm{\\theta}}$.\n\nFor the PIR-PCSI-II setting the desired message is contained in the support set. The download will be $K-M+1$ linear equations in $K$ unknowns (messages). These $K-M+1$ linear equations are independent by design, so they allow the user to eliminate any $K-M$ unknowns and get an equation in the remaining $K-(K-M) = M$ unknowns (messages). Let these $M$ unknowns be the $M$ messages in the support set. With careful design, the equation will be equal to $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]} + \\bm{\\lambda}^{\\prime}\\bm{W}_{\\bm{\\theta}}$ for some $\\bm{\\lambda}^{\\prime} \\neq 0$. Thus by subtracting CSI from the equation the user is able to recover $\\bm{W}_{\\bm{\\theta}}$.\n\nConsider a scheme where the user applies \\emph{Specialized GRS Codes} when $\\bm{\\theta} \\notin \\bm{\\mathcal{S}}$ and applies \\emph{Modified Specialized GRS Codes} when $\\bm{\\theta} \\in \\bm{\\mathcal{S}}$. This scheme is obviously correct but not private because the server can tell if $\\bm{\\theta} \\in \\bm{\\mathcal{S}}$ or not from the download cost since the download cost of the two schemes are different. However, if the user always downloads one more redundant equation when applying \\emph{Specialized GRS Codes}, then there is no difference in the download cost. This is essentially the idea for the achievable scheme.\n\nLet us first present the \\emph{Specialized GRS Codes} in \\cite{PIR_PCSI} here for ease of understanding. There are $K$ distinct evaluation points in $\\mathbb{F}_{q}$, namely $\\omega_{1}, \\cdots, \\omega_{K}$. 
A polynomial $\\bm{p}(x)$ is constructed as \n\\begin{align}\n \\bm{p}(x) &\\triangleq \\prod_{k \\in [K]\\setminus(\\bm{\\mathcal{S}} \\cup \\{\\bm{\\theta}\\})}(x - \\omega_{k})\\\\\n & = \\sum_{i=1}^{K-M}\\bm{p}_i x^{i-1}.\\label{eq:polyGRS}\n\\end{align}\nThe query $\\bm{Q}$ is comprised of $K-M$ row vectors, each $1\\times K$, namely $\\bm{Q}_{1}, \\cdots, \\bm{Q}_{K-M}$ such that \n\\begin{align}\n \\bm{Q}_i = [\\bm{v}_1\\omega_{1}^{i-1}~~ \\cdots~~ \\bm{v}_K\\omega_{K}^{i-1}], \\forall i \\in [K-M],\n\\end{align}\nwhere for $\\bm{i}_m \\in \\bm{\\mathcal{S}}, m \\in [M]$, $\\bm{v}_{\\bm{i}_m} = \\frac{\\bm{\\lambda}_m}{p(\\omega_{\\bm{i}_m})}$ ($\\bm{\\lambda}_m$ is the $m^{th}$ coefficient in the CSI), while for $k \\notin \\bm{\\mathcal{S}}$, $\\bm{v}_{k}$ is randomly drawn from $\\mathbb{F}_{q}^{\\times}$. Upon receiving $\\bm{Q}$, the server sends \n\\begin{align}\n \\bm{\\Delta} = \n \\begin{bmatrix}\n \\bm{\\Delta}_1\\\\\n \\vdots\\\\\n \\bm{\\Delta}_{K-M}\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\bm{Q}_1\\\\\n \\vdots\\\\\n \\bm{Q}_{K-M}\n \\end{bmatrix}\n \\begin{bmatrix}\n \\bm{W}_1\\\\\n \\bm{W}_2\\\\\n \\vdots\\\\\n \\bm{W}_K\n \\end{bmatrix}\n\\end{align}\nto the user. Let us call $[\\bm{Q}_1^{\\mathrm{T}} ~ \\cdots ~ \\bm{Q}_{K-M}^{\\mathrm{T}}]^{\\mathrm{T}}$ the \\emph{Specialized GRS Matrix} and $[\\bm{\\Delta}_1 ~ \\cdots ~ \\bm{\\Delta}_{K-M}]^{\\mathrm{T}}$ \\emph{Specialized GRS Codes} of $\\bm{W}_{[K]}$ for ease of reference. Note that the \\emph{Specialized GRS Matrix} is uniquely defined by $\\bm{v}_{1}, \\cdots, \\bm{v}_{K}$ as $\\omega$'s are constants.\n\nThe user gets $\\bm{W}_{\\bm{\\theta}}$ by subtracting $\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}$ from \n\\begin{align}\n \\sum_{i=1}^{K-M}\\bm{p}_i\\bm{\\Delta}_{i} = \\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} + \\bm{v}_{\\bm{\\theta}}\\bm{p}(\\omega_{\\bm{\\theta}})\\bm{W}_{\\bm{\\theta}}.\n\\end{align}\n\nOur PIR-PCSI scheme is as follows.\nFor any realization $(\\theta, \\mathcal{S})$ of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$, \n\\emph{1)} When $\\theta \\in [K]\\setminus\\mathcal{S}$, first apply the Specialized GRS Codes in \\cite{PIR_PCSI}. Besides $Q_1, Q_2, \\cdots, Q_{K-M}$ as specified in the \\emph{Specialized GRS Codes} of \\cite{PIR_PCSI}, the user also has \n\\begin{align}\n Q_{K-M+1} = [v_1\\omega_{1}^{K-M}, \\cdots, v_K\\omega_{K}^{K-M}]\n\\end{align}\nas part of the query. And the answer $\\bm{\\Delta}_{K-M+1} = \\sum_{j=1}^{K}v_j \\omega_j^{K-M} \\bm{W}_j$ will be generated for $Q_{K-M+1}$ and downloaded by the user as a redundant equation. 
Note that the matrix $[Q_1^{\\mathrm{T}}, Q_2^{\\mathrm{T}}, \\cdots, Q_{K-M+1}^{\\mathrm{T}}]^{\\mathrm{T}}$ is the generator matrix of a $(K, K-M+1)$ GRS code \\cite{Coding_Theory}.\n\n\\emph{2)} When $\\theta \\in \\mathcal{S}$, the user will directly apply \\emph{Modified Specialized GRS Codes} where the queries also form a generator matrix of a $(K, K-M+1)$ GRS code as specified in \\cite{PIR_PCSI}.\n\nSuch a scheme is private since the queries in both cases form a generator matrix of a $(K,K-M+1)$ GRS code, and the $v_1, \\cdots, v_{K}$ in both cases are identically uniform over $\\mathbb{F}_{q}^{\\times}$ for any realization of $\\bm{\\theta}, \\bm{\\mathcal{S}}$.\n\nFor the corner case $M=1$, it suffices to download $K-1$ generic linear combinations of all the $K$ messages such that from the $K-1$ downloaded linear combinations and the CSI, all the $K$ messages are decodable, as noted in Remark \\ref{rmk:pcsi1_inf_pcsi_inf}.\n\n\\section{Proof of Theorem \\ref{thm:redundancy}}\\label{proof:redundancy}\nHere we bound the redundancy $\\rho_{\\mbox{\\tiny PCSI}}$ from above (equivalently, lower-bound $\\alpha^{*}$) for $1 \\leq M \\leq K$. For $\\frac{K+2}{2} < M \\leq K$, the proof that $\\rho_{\\mbox{\\tiny PCSI}} = 0$ is the same as that in Section \\ref{proof:red}, so it will not be repeated. \n\n\nConsider an achievable scheme such that $\\alpha$ PCSI is sufficient and the average download cost satisfies $D \\leq \\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L+\\epsilon L$ for some $L$. Note that $D\\geq H(\\bm{\\Delta} \\mid \\bm{Q})$, therefore,\n\\begin{align}\nH(\\bm{\\Delta} \\mid \\bm{Q})\\leq \\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L+\\epsilon L.\\label{eq:deltabound}\n\\end{align}\n\nIt follows from \\eqref{eq:deltabound} that there exists a feasible $Q \\in \\mathcal{Q}$ such that \n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q}=Q) \\leq \\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L+\\epsilon L.\n\\end{align}\nFor all $i \\in [K]$, let $\\Lambda_{i} \\in \\mathfrak{C}$ satisfy \n\\begin{align}\n H(\\bm{W}_{i} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[[M], \\Lambda_i]}, \\bm{Q} = Q) = 0.\\label{eq:red_M1_dec}\n\\end{align}\nThe argument that such $\\Lambda_i$'s must exist is identical to the proof of Lemma \\ref{lem:privacy}. \nWriting $\\overline{\\bm{Y}}^{[[M], \\Lambda_i]}$ as $\\overline{\\bm{Y}}_{i}$ for compact notation,\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{Q} = Q)\\\\\n &= H(\\bm{W}_{[M]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{Q} = Q)\\notag\\\\\n &~~~+ H(\\bm{W}_{[M+1:K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{W}_{[M]}, \\bm{Q} = Q)\\\\\n &= 0 + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{W}_{[M]}, \\overline{\\bm{Y}}_{[K]}, \\bm{Q} = Q)\\label{eq:red_M1_funcW}\\\\\n &=0,\n\\end{align}\nwhere \\eqref{eq:red_M1_funcW} follows from \\eqref{eq:red_M1_dec} and the fact that $\\overline{\\bm{Y}}_{[K]}$ are functions of $\\bm{W}_{[M]}$. The last step also follows from \\eqref{eq:red_M1_dec}. Thus,\n\\begin{align}\n &\\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L + \\epsilon L + M\\alpha L\\notag\\\\\n &\\geq H(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]} \\mid \\bm{Q}=Q)\\label{eq:red_M1_indY}\\\\\n &\\geq I(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}; \\bm{W}_{[K]} \\mid \\bm{Q}=Q)\\\\\n &=H(\\bm{W}_{[K]} \\mid \\bm{Q}=Q) = KL.\\label{eq:red_M1_indW}\n\\end{align}\n\\eqref{eq:red_M1_indY} is true because \\eqref{eq:indQ}, \\eqref{eq:invaYR} hold.
Step \\eqref{eq:red_M1_indW} follows from \\eqref{eq:red_M1_dec} and the fact that the query and messages are mutually independent according to \\eqref{eq:indQ}. Thus, $\\alpha \\geq (K-\\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}})\/M - \\epsilon\/M$. In order to achieve capacity, we must have $\\epsilon \\rightarrow 0$, so we must have $\\alpha \\geq (K-\\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}})\/M$, for all $1\\leq M\\leq K$.\n\nNow note that for $M=1$, since $C_{\\mbox{\\tiny PCSI}}^{\\sup} = (K-1)^{-1}$, we have shown that $\\alpha \\geq 1$, which implies $\\rho_{\\mbox{\\tiny PCSI}} = 0$ in this case. \n\nFor $2 \\leq M \\leq \\frac{K+2}{2}$, since $C_{\\mbox{\\tiny PCSI}}^{\\sup} = (K-M+1)^{-1}$, we have shown that $\\alpha \\geq \\frac{M-1}{M}$, which implies $\\rho_{\\mbox{\\tiny PCSI}} \\leq \\frac{1}{M}$ in this case.\n\nIt only remains to show that for $M=2$, $\\rho_{\\mbox{\\tiny PCSI}} = \\frac{1}{2}$ is achievable, or equivalently, $\\alpha^{*} = \\frac{1}{2}$. For this case, let us present a PIR-PCSI scheme that achieves the rate $(K-M\/2)^{-1}$ for arbitrary $1 \\leq M \\leq K$. Note that $K-M\/2 = K-M+1$ when $M=2$, which is the only case where the supremum capacity is achieved by this scheme. The rate of this scheme is strictly smaller than $C_{\\mbox{\\tiny PCSI}}^{\\sup}$ for other $M \\neq 2$.\n\n\nLet the size of the base field $q$ be an even power of a prime number such that $\\sqrt{q}$ is a prime power and $\\sqrt{q} \\geq K$. For arbitrary realization $(\\theta, \\mathcal{S}) \\in [K]\\times\\mathfrak{S}$ of $(\\bm{\\theta},\\bm{\\mathcal{S}})$, if $\\theta \\in \\mathcal{S}$, the user can apply the \\emph{Interference Alignment} based PIR-PCSI-II scheme where half of each message is downloaded. If $\\theta \\in [K]\\setminus\\mathcal{S}$, then user can apply the \\emph{Specialized GRS Codes} based scheme for the halves of the messages corresponding to the CSI dimension that is retained (while the other half of the CSI dimensions is discarded as redundant) and download the other half dimension of all the messages directly. Note that in both cases, a half-dimension of each of the $K$ messages is directly downloaded. The other halves are involved in the download corresponding to the \\emph{Specialized GRS Codes} which is not needed for decodability\/correctness if $\\theta \\in \\mathcal{S}$, but is still included for privacy, i.e., to hide whether or not $\\bm\\theta\\in\\bm{\\mathcal{S}}$. The download cost required is $K\\left(\\frac{L}{2}\\right)$ for the direct downloads of half of every message, plus $(K-M)\\frac{L}{2}$ for the \\emph{Specialized GRS Codes} based scheme that usually requires $K-M$ downloads per message symbol, but is applied here to only half the symbols from each message, for a total download cost of $(K-M\/2)L$ which achieves the supremum capacity of PIR-PCSI for $M=2$. The details of the scheme are presented next.\n\nFor all $k \\in [K]$, let $V_{\\bm{W}_k} \\in \\mathbb{F}_{\\sqrt{q}}^{2\\times 1}$ be the length $2$ vector representation of $\\bm{W}_k \\in \\mathbb{F}_{q}$. For all $m \\in [M]$, let $M_{\\bm{\\lambda}_m} \\in \\mathbb{F}_{\\sqrt{q}}^{2\\times 2}$ be the matrix representation of $\\bm{\\lambda}_m \\in \\mathbb{F}_{q}^{\\times}$ where $\\bm{\\lambda}_m$ is the $m^{th}$ entry of the coefficient vector $\\bm{\\Lambda}$. 
Let \n\\begin{align}\n \\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} = M_{\\bm{\\lambda}_1}(1,:)V_{\\bm{W}_{\\bm{i}_1}} +\\cdots+ M_{\\bm{\\lambda}_M}(1,:)V_{\\bm{W}_{\\bm{i}_M}},\n\\end{align}\nwhere $\\bm{\\mathcal{S}} = \\{\\bm{i}_1, \\bm{i}_2, \\cdots, \\bm{i}_{M}\\}$ is the support index set, be the processed CSI where $H(\\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}) = \\frac{1}{2}H(\\bm{W}_k)$. Note that $\\forall m \\in [M], M_{\\bm{\\lambda}_m}(1,:)$ is uniform over $\\mathbb{F}_{\\sqrt{q}}^{1\\times 2} \\setminus \\{[0~~0]\\}$ according to Lemma \\ref{lem:uniform12}.\n\nThe query $\\bm{Q} = \\{\\bm{Q}_1, \\bm{Q}_2, \\bm{Q}_3\\}$,\n\\begin{align}\n \\bm{Q}_1 &= \\{\\mathbf{L}_1, \\mathbf{L}_2, \\cdots, \\mathbf{L}_{K}\\},\\\\\n \\bm{Q}_2 &= \\{\\mathbf{L}_1^{\\prime}, \\mathbf{L}_2^{\\prime}, \\cdots, \\mathbf{L}_{K}^{\\prime}\\},\\\\\n \\bm{Q}_3 &= \\{\\bm{v}_1, \\bm{v}_2, \\cdots, \\bm{v}_{K}\\}.\n\\end{align}\nwhere $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime} \\in \\mathbb{F}_{\\sqrt{q}}^{1\\times 2} \\setminus \\{[0~~0]\\}$. $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime}$ serve as two linearly independent projections that ask the server to split $\\bm{W}_k$ into two halves \n\\begin{align}\n \\bm{w}_{k}(1) = \\mathbf{L}_{k}V_{\\bm{W}_k} \\in \\mathbb{F}_{\\sqrt{q}},\\\\\n \\bm{w}_{k}(2) = \\mathbf{L}_{k}^{\\prime}V_{\\bm{W}_k} \\in \\mathbb{F}_{\\sqrt{q}}.\n\\end{align}\n$\\bm{Q}_3$ uniquely defines a \\emph{Specialized GRS Matrix} whose elements are in $\\mathbb{F}_{\\sqrt{q}}$.\n\nThe user will download the first halves of all the $K$ messages after projection, i.e., $\\bm{w}_{[K]}(1)$ and apply the \\emph{Specialized GRS Matrix} to download a \\emph{Specialized GRS Codes} of the second halves of all the $K$ messages after projection, i.e., $\\bm{w}_{[K]}(2)$. \n\nLet us specify $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime}, \\bm{v}_{k}$. Consider any realization $(\\theta, \\mathcal{S}) \\in [K]\\times\\mathfrak{S}$ of $(\\bm{\\theta},\\bm{\\mathcal{S}})$. Let us say $\\mathcal{S} = \\{i_1, i_2, \\cdots, i_M\\}$. For the messages not involved in the CSI, they are randomly projected to two linearly independent directions, i.e., for any $k \\in [K] \\setminus \\mathcal{S}$, $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime}$ are linearly independent and are randomly drawn from $\\mathbb{F}_{\\sqrt{q}}^{1 \\times 2} \\setminus \\{[0~~0]\\}$. Also, for any $k \\in [K] \\setminus \\mathcal{S}$, $\\bm{v}_{k}$ is uniformly distributed in $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. \n\nFor messages involved in the CSI, the construction of projections and $\\bm{v}$'s depends on whether $\\theta$ is in $\\mathcal{S}$ or not.\n\\begin{enumerate}\n \\item When $\\theta \\in \\mathcal{S}$, for any $m \\in [M]$,\n \\begin{align}\n \\mathbf{L}_{i_m} = \n \\begin{cases}\n M_{\\bm{\\lambda}_m}(2,:), i_m = \\theta,\\\\\n M_{\\bm{\\lambda}_m}(1,:), i_m \\neq \\theta.\n \\end{cases}\n \\end{align}\n $\\mathbf{L}_{i_m}^{\\prime}$ is then chosen randomly from $\\mathbb{F}_{\\sqrt{q}}^{1 \\times 2} \\setminus \\{[0~~0]\\}$ such that it is linearly independent with $\\mathbf{L}_{i_m}$. Meanwhile, $\\bm{v}_{i_m}$ is randomly drawn from $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. Under this case, the user has \n \\begin{align}\n \\overline{\\bm{Y}}^{[\\mathcal{S}, \\bm{\\Lambda}]} = \\sum_{i_m \\in \\mathcal{S}\\setminus\\{\\theta\\}}\\bm{w}_{i_m}(1) + \\bm{w}_{\\theta}(2)\n \\end{align} \n according to the construction of $\\mathbf{L}_{i_m}$. 
$\\bm{w}_{\\theta}(1)$ is directly downloaded and $\\bm{w}_{\\theta}(2)$ can be recovered by subtracting $\\{\\bm{w}_{i_m}(1)\\}_{i_m \\neq \\theta}$ from $\\overline{\\bm{Y}}^{[\\mathcal{S}, \\bm{\\Lambda}]}$. The user is then able to recover $\\bm{W}_{\\theta}$ as the two projections are linearly independent. $\\bm{Q}_3$ uniquely defines a \\emph{Specialized GRS Matrix} and applying $\\bm{Q}_3$ to download a \\emph{Specialized GRS Codes} of $\\bm{w}_{[K]}(2)$ is just for privacy.\n\n \\item When $\\theta \\in [K]\\setminus\\mathcal{S}$, for any $m \\in [M]$, \n \\begin{align}\n \\mathbf{L}_{i_m}^{\\prime} = \\frac{1}{\\bm{a}_m}M_{\\bm{\\lambda}_m}(1,:),\n \\end{align}\n where $\\bm{a}_m$ is randomly drawn from $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. $\\mathbf{L}_{i_m}$ is then chosen randomly from $\\mathbb{F}_{\\sqrt{q}}^{1 \\times 2} \\setminus \\{[0~~0]\\}$ such that they are linearly independent with $\\mathbf{L}_{i_m}^{\\prime}$. Under this case, the user has \n \\begin{align}\n \\sum_{m\\in[M]}\\bm{a}_{m}\\bm{w}_{i_m}(2) = \\overline{\\bm{Y}}^{[\\mathcal{S}, \\bm{\\Lambda}]},\n \\end{align}\n and sets \n \\begin{align}\n \\bm{v}_{i_m} = \\frac{\\bm{a}_m}{p(\\omega_{i_m})}, \\forall m \\in [M],\n \\end{align}\n where $p(\\omega_{i_m})$ is the evaluation of the polynomial specified in \\eqref{eq:polyGRS} (when $(\\bm{\\theta},\\bm{\\mathcal{S}}) = (\\theta$, $\\mathcal{S})$) at $\\omega_{i_m}$, which is a non-zero constant given $(\\theta, \\mathcal{S})$. Thus, given $(\\theta, \\mathcal{S})$, $\\bm{v}_{i_m}$ is still uniform over $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. $\\bm{Q}_{3}$ uniquely defines a \\emph{Specialized GRS Matrix}. Applying $\\bm{Q}_3$ to download a \\emph{Specialized GRS Codes} of $\\bm{w}_{[K]}(2)$, together with $\\sum_{m\\in[M]}\\bm{a}_{m}\\bm{w}_{i_m}(2)$ as the side information, enable the user to recover $\\bm{w}_{\\theta}(2)$. Since the first halves of all the projected messages are also downloaded, the user also has $\\bm{w}_{\\theta}(1)$, thus, is able to decode $\\bm{W}_{\\theta}$.\n\\end{enumerate}\n\nNote that for arbitrary realization $(\\theta, \\mathcal{S})$ of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$, no matter $\\theta \\in \\mathcal{S}$ or not, $\\mathbf{L}_1, \\cdots, \\mathbf{L}_{K}$, $\\mathbf{L}_1^{\\prime}, \\cdots, \\mathbf{L}_{K}^{\\prime}$, $\\bm{v}_1, \\cdots, \\bm{v}_{K}$ are independent, and for any $k \\in [K]$, the matrix whose first row is $\\mathbf{L}_{k}$ and second row is $\\mathbf{L}_{k}^{\\prime}$ is uniform over the set that contains all the full-rank matrix in $\\mathbb{F}_{\\sqrt{q}}^{2\\times 2}$, $\\bm{v}_{k}$ is uniform over $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. Thus, the scheme is private.\n\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI_inf}}\\label{sec:cap_PCSI_inf}\nThe rate $\\frac{1}{K-1}$ PIR-PCSI-I scheme in Section \\ref{sec:PCSI1_inf_ach} is also the infimum capacity achieving PIR-PCSI scheme as noted in Remark \\ref{rmk:pcsi1_inf_pcsi_inf}, so we just prove the converse here.\n\nAs a result of \\eqref{eq:lemma1pcsi} and the fact that in $\\mathbb{F}_{2}$, we can only have $\\bm{\\Lambda} = 1_M$, i.e., the length-$M$ vector all of whose elements are equal to $1$, we have\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},1_M]}, \\bm{Q}=Q) = 0, \\notag\\\\\n \\forall (Q,\\mathcal{S}) \\in \\mathcal{Q}\\times\\mathfrak{S}. 
\\label{eq:pcsi_inf_dec}\n\\end{align}\nWriting $\\bm{Y}^{[[M],1_M]}$ as $\\bm{Y}$ for compact notation, for any $Q \\in \\mathcal{Q}$, we have\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &= H(\\bm{W}_{[K]}, \\bm{Y} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\label{eq:dec_K_1}\\\\\n &= H(\\bm{Y} \\mid \\bm{\\Delta}, \\bm{Q}=Q) + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}, \\bm{Q} = Q)\\label{eq:dec_K_2}\\\\\n &\\leq H(\\bm{Y}) = L.\n\\end{align}\n\\eqref{eq:dec_K_1} is true since $\\bm{Y}$ is a summation of the first $M$ messages, and \\eqref{eq:dec_K_2} follows from \\eqref{eq:pcsi_inf_dec}. Averaging over $\\bm{Q}$ we have,\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L.\n\\end{align}\nThus, $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq KL-L$ which implies that $C_{\\mbox{\\tiny PCSI}}^{\\inf}(q = 2) \\leq (K-1)^{-1}$.\n\n\n\\section{Proof of Theorem \\ref{thm:pcsi_pub_pri}}\\label{proof:pcsi_pub_pri}\nThe rate $\\frac{1}{K-1}$ PIR-PCSI-I scheme which preserves $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ in Section \\ref{sec:pcsi1_pri_ach} is also the capacity achieving PIR-PCSI scheme with private coefficients as noted in Remark \\ref{rmk:pcsi1_pri_pcsi_pri}, so we just prove the converse here. Specifically, we prove that $C_{\\mbox{\\tiny PCSI}}^{\\mbox{\\tiny pri}}(q)\\leq C_{\\mbox{\\tiny PCSI}}(q=2) = C_{\\mbox{\\tiny PCSI}}^{\\inf}$.\n\nAccording to \\eqref{eq:pcsi_pri} in Lemma \\ref{lem:fullypri}, for a fully private PIR-PCSI scheme,\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S}, \\Lambda]}, \\bm{Q} = Q) = 0, \\notag\\\\\n \\forall (Q,\\mathcal{S},\\Lambda) \\in \\mathcal{Q} \\times \\mathfrak{S} \\times \\mathfrak{C}.\\label{eq:pcsi_pri_dec}\n\\end{align}\nNote that \\eqref{eq:pcsi_pri_dec} is a \\emph{stronger} version of \\eqref{eq:pcsi_inf_dec} which is sufficient to bound $C_{\\mbox{\\tiny PCSI}}(q=2) = C_{\\mbox{\\tiny PCSI}}^{\\inf}$. Thus, $C_{\\mbox{\\tiny PCSI}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI}}^{\\inf}$.\n\n\\section{Conclusion} \\label{sec:con}\nSide-information is a highly valuable resource for PIR in general, and for single-server PIR in particular. Building on the foundation laid by Heidarzadeh et al. in \\cite{PIR_PCSI}, this work presents a more complete picture, as encapsulated in Table \\ref{tab:capacity}, revealing new insights that are described in the introduction. The redundancy of side-information is particularly noteworthy, because it allows the user to save storage cost, which may be used to store additional non-redundant side-information, e.g., multiple linear combinations instead of just one, as assumed in this work and in \\cite{PIR_PCSI}. An interesting direction for future work is to understand the trade-off between the size of side information and the efficiency of single-server PIR, e.g., by characterizing the $\\alpha$-CSI constrained capacity of PIR-PCSI-I, PIR-PCSI-II, PIR-PCSI. Other questions that remain open include issues that are field-specific. For example, is the supremum capacity of PIR-PCSI-II for $M>2$ achievable for all fields except $\\mathbb{F}_2$? Are there other fields besides $\\mathbb{F}_2$ over which the capacity is equal to the infimum capacity? Can the capacity over certain fields take values other than the supremum and infimum capacities? 
Progress on these issues may require field-dependent constructions of interference alignment schemes for achievability, and combinatorial arguments for converse bounds, both of which may be of broader interest.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nQuestion answering (QA) relates to the building of systems capable of automatically answering questions posed by humans in natural language. Various frameworks have been proposed for question answering, ranging from simple information-retrieval techniques for finding relevant knowledge articles or webpages, through methods for identifying the most relevant sentence in a text regarding a posed question, to methods for querying structured knowledge-bases or databases to produce an answer~\\cite{burke1997question,voorhees1999trec,kwok2001scaling,hirschman2001natural,ravichandran2002learning}\n\nA popular QA task is {\\em answer selection}, where, given a question, the system must pick correct answers from a pool of candidate answers~\\cite{xu2002trec,jijkoun2004answer,ko2007probabilistic,lee2009model,severyn2013automatic}.\n\nAnswer selection has many commercial applications. Virtual assistants such as Amazon Alexa and Google Assistant are designed to respond to natural language questions posed by users. In some cases such systems simply use a search engine to find relevant webpages; however, for many kinds of queries, such systems are capable of providing a concise specific answer to the posed question. \n\nSimilarly, various AI companies are attempting to improve customer service by automatically replying to customer queries. One way to design such a system is to curate a dataset of historical questions posed by customers and the responses given to these queries by human customer service agents. Given a previously unobserved query, the system can then locate the best matching answer in the curated dataset. \n\nAnswer selection is a difficult task, as typically there is a large number of possible answers which need to be examined. Furthermore, although in many cases the correct answer is lexically similar to the question, in other cases semantic similarities between words must be learned in order to find the correct answer~\\cite{kolomiyets2011survey,allam2012question}. Additionally, many of the words in the answer may not be relevant to the question. \n\nConsider, for example, the following question answer pair:\n\n\\begin{displayquote}\n\\textbf{How do I freeze my account?}\n\nHello, hope you are having a great day. You can freeze your account by logging into our site and pressing the freeze account button. Let me know if you have any further questions regarding the management of your account with us. \n\\end{displayquote}\n\n\\noindent Intuitively, the key section which identifies the above answer as correct is ``[...] you can freeze your account by [...]'', which represents a small fraction of the entire answer.\n\nEarlier work on answer selection used various techniques, ranging from information retrieval methods~\\cite{clarke2001exploiting} and machine learning methods relying on hand-crafted features~\\cite{parsetreeManning,wang2007jeopardy}. 
Deep learning methods, which have recently shown great success in many domains including image classification and annotation~\\cite{krizhevsky2012imagenet,zhou2014learning,lewenberg2016predicting}, multi-annotator data fusion~\\cite{albarqouni2016aggnet,gaunt2016training}, NLP and conversational models~\\cite{graves2013speech,bahdanau2014ntm,li2015diversity,kandasamy2017batch,shao2017generating} and speech recognition~\\cite{graves2013speech,albarqouni2016aggnet}, have also been successfully applied to question answering~\\cite{fengCNN}. Current state-of-the-art methods use recurrent neural network (RNN) architectures which incorporate attention mechanisms~\\cite{tan2016}. These allow such models to better focus on relevant sections of the input~\\cite{bahdanau2014ntm}.\n\n{\\bf Our contribution: } We propose a new architecture for question answering. Our high-level approach is similar to recently proposed QA systems~\\cite{fengCNN,tan2016}, but we augment this design with a more sophisticated attention mechanism, combining the {\\em local} information in a specific part of the answer with a {\\em global} representation of the entire question and answer. \n\nWe evaluate the performance of our model using the recently released {\\em InsuranceQA dataset}~\\cite{fengCNN}, a large open dataset for answer selection comprised of insurance related questions such as: ``what can you claim on Medicare?''. \\footnote{As opposed to other QA tasks such as answers extraction or machine text comprehension and reasoning~\\cite{weston2015towards,rajpurkar2016squad}, the InsuranceQA dataset questions do not generally require logical reasoning.}\n\nWe beat state-of-the-art approaches ~\\cite{fengCNN,tan2016}, and achieve good performance even when using a relatively small network. \n\n\n\n\\section{Previous Work}\n\nAnswer selection systems can be evaluated using various datasets consisting of questions and answers. Early answer selection models were commonly evaluated against the QASent dataset \\cite{wang2007jeopardy}; however, this dataset is very small and thus less similar to real-world applications. Further, its candidate answer pools are created by finding sentences with at least one similar (non-stopword) word as compared to the question, which may create a bias in the dataset. \n\nWiki-QA~\\cite{yang2015wikiqa} is a dataset that contains several orders of magnitude more examples than QASent, where the candidate answer pools were created from the sentences in the relevant Wikipedia page for a question, reducing the amount of keyword bias in the dataset compared to QASent. \n\nOur analysis is based on the InsuranceQA~\\cite{fengCNN} dataset, which is much larger, and similar to real-world QA applications. The answers in InsuranceQA are relatively long (see details in Section~\\ref{sec:setup}), so the candidate answers are likely to contain content that does not relate directly to the question; thus, a good QA model for InsuranceQA must be capable of identifying the most important words in a candidate answer. \n\n\nEarly work on answer selection was based on finding the semantic similarity between question and answer parse trees using hand-crafted features \\cite{parsetreeManning, wang2007jeopardy}. Often, lexical databases such as WordNet were used to augment such models \\cite{ChangWordnet}. Not only did these models suffer from using hand-crafted features, those using lexical databases were also often language-dependent. 
\n\nRecent attempts at answer selection aim to map questions and candidate answers into n-dimensional vectors, and use a vector similarity measure such as cosine similarity to judge a candidate answer's affinity to a question. In other words, the similarity between a question and a candidate is high if the candidate answers the question well, low if the candidate is not a good match for the question. \n\nSuch models are similar to Siamese models, a good review of which can be found in Muller et al's paper~\\cite{mueller2016siamese}. Feng et al.~\\cite{fengCNN} propose using convolutional neural networks to vectorize both questions and answers before comparing them using cosine similarity. Similarly, Tan et al.~\\cite{tan2016} use a recurrent neural network \nto vectorize questions and answers. \nAttention mechanisms have proven to greatly improve the performance of recurrent networks in many tasks \\cite{bahdanau2014ntm, tan2016, rocktaschel2015entailment,rush2015neural,luong2015effective}, and indeed Tan et al.~\\cite{tan2016} incorporate a simple attention mechanism in their system.\n\n\n\\section{Preliminaries}\n\\label{l_sect_prelim}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.65\\textwidth]{attention0}\n\\caption{Model architecture using answer-localized attention \\cite{tan2016}. The left hand side used for the question. The right side of the architecture is used for both the answer and distractor.}\n\\label{fig:oldModel}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.8\\textwidth]{attention3}\n\\caption{Our proposed architecture with augmented attention. As in Figure~\\ref{fig:oldModel}, the right side of the model is used to embed answers and distractors.}\n\\label{fig:newModel}\n\\end{figure*}\n\nOur approach is similar to the {\\em Answer Selection Framework} of Tan et al.~\\cite{tan2016}, but we propose a different network architecture and a new attention mechanism. We first provide a high level description of this framework (see the original paper for a more detailed discussion), then discuss our proposed attention mechanism. \n\n The framework is based on a neural network with parameters $\\theta$ which can embed either a question $q$ or a candidate answer $a$ into low dimensional vectors $r \\in \\!R^k$. The network can embed a question with no attention, which we denote as $f_{\\theta}(q)$, and embed a candidate answer with attention to the question, denoted as $g_{\\theta}(a, q)$. We denote the similarity function used as $s(x,y)$ ($s$ may be the dot product function, the cosine similarity function or some other similarity function).\n\nGiven a trained network, we compute the similarity between question and answer embeddings:\n\n$$s_i = s(f_{\\theta}(q), g_{\\theta}(A_i, q))$$\n\\noindent for any $i \\in {1,2,\\ldots,k}$ with $A_i$ being the $i$th candidate answer in the pool. We then select the answer yielding the highest similarity $\\arg \\max_i s_i$. \n\nThe embedding functions, $f_{\\theta}$ and $g_{\\theta}$, depend on the architecture used and the parameters $\\theta$. The network is trained by choosing a loss function $\\mathcal{L}$, and using stochastic gradient descent to tune the parameters given the training data. Each training item consists of a question $q$, the correct answer $a^*$ and a distractor $d$ (an incorrect answer). 
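\n\nBefore turning to the training loss, the following sketch illustrates the selection rule at test time (an illustrative code fragment only; \\texttt{embed\\_question} and \\texttt{embed\\_answer} are hypothetical stand-ins for the learned functions $f_{\\theta}$ and $g_{\\theta}$, and cosine similarity is used for $s$):\n\\begin{verbatim}\nimport numpy as np\n\ndef cosine_similarity(u, v):\n    # s(x, y): similarity between two embedding vectors\n    return float(np.dot(u, v) \/ (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef select_answer(question, candidates, embed_question, embed_answer):\n    # Score every candidate answer against the question and return\n    # the index of the highest-scoring one (arg max over s_i).\n    q_vec = embed_question(question)                  # f_theta(q)\n    scores = [cosine_similarity(q_vec, embed_answer(a, question))\n              for a in candidates]                    # s_i\n    return int(np.argmax(scores))\n\\end{verbatim}\n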
A prominent choice is using a shifted hinge loss, designating that the correct answer must have a higher score than the distractor by at least a certain margin $M$, where the score is based on the similarity to the question. \n\n$$ \\mathcal{L} =\\max \\Big\\{ 0, M - \\sigma_{a^*} + \\sigma_{d} \\Big\\} $$ \n\nwhere:\n$$\n\\sigma_{a^*} = s \\Big(f_{\\theta}(q), g_{\\theta}(a^*, q) \\Big) \n$$ \n$$\n\\sigma_{d} = s\\Big( f_{\\theta}(q), g_{\\theta}(d, q) \\Big)\n$$\n\nThe above expression has a zero loss if the correct answer scores higher than the distractor by at least the margin $M$; otherwise the loss grows linearly as the distractor's score approaches or exceeds that of the correct answer. \n\nAny reasonable neural network design for $f_{\\theta}$ can be used to build a working answer-selection system using the above approach; however, the network design can have a big impact on the system's accuracy. \n\n\\subsection{Embedding Questions and Answers}\n\nEarlier work examined multiple approaches for embedding questions and answers, including convolutional neural networks, recurrent neural networks (RNNs), sometimes augmented with an attention mechanism, and hybrid designs \\cite{fengCNN,tan2016}. \n\nAn RNN design ``digests'' the input sequence, one element at a time, changing its internal state at every timestep. The RNN is based on a cell, a parametrized function mapping a current state and an input element to the new state~\\cite{werbos1990backpropagation}. A popular choice for the RNN's cell is the Long Short Term Memory (LSTM) cell~\\cite{hochreiter1997long}.\n\nGiven a question comprised of words $q=(x_1,x_2,\\ldots,x_m)$, we denote the $i$'th output of an LSTM RNN digesting the question as $q_i$; similarly, given an answer $a=(y_1,y_2,\\ldots,y_n)$ we denote the $j$'th output of an LSTM RNN digesting the answer as $a_j$. \n\nOne simple approach is to have the embeddings of the question and answer be the last LSTM output, i.e. $f_{\\theta}(q) = q_m$ and $f_{\\theta}(a) = a_n$. Note that $q_i,a_i$ are vectors whose dimensionality depends on the dimensionality of the LSTM cell; we denote by $q_{i,j}$ the $j$'th coordinate of the LSTM output at timestep $i$.\n\nAnother alternative is to aggregate the LSTM outputs across the different timesteps by taking their coordinate-wise mean (mean-pooling):\n$$f_{\\theta}(q)_r = \\frac{1}{m} \\sum_{i=1}^m q_{i,r}$$\n\\noindent Alternatively, one may aggregate by taking the coordinate-wise max (max-pooling):\n$$f_{\\theta}(q)_r = \\max_{i=1}^m q_{i,r}$$\n\nWe use another simple way of embedding the question and answer, which is based on term-frequency (TF) features. Given a vocabulary of words $V=(w_1,\\ldots,w_v)$, and a text $p$ we denote the TF representation of $p$ as $p^{\\text{tf}} = (d_1,\\ldots,d_v)$ where $d_j=1$ if the word $w_j$ occurs in $p$ and otherwise $d_j=0$. \\footnote{Another alternative is setting $d_j$ to the {\\em number} of times the word $w_j$ appears in $p$. A slightly more complex option is using TF-IDF features~\\cite{ramos2003using} or an alternative hand-crafted feature scheme; however we opt for the simpler TF representation, letting the neural network learn how to use the raw information.}\n\nA simple overall embedding of a text $p$ is $p' = W p^{\\text{tf}}$ where $W$ is a $d \\times v$ matrix, and where $d$ determines the final embedding's dimensionality; the weights of $W$ are typically part of the neural network parameters, to be learned during the training of the network. 
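\n\nAs a concrete (purely illustrative) sketch of this TF-based embedding, with a toy vocabulary index and a randomly initialized $W$ standing in for learned parameters:\n\\begin{verbatim}\nimport numpy as np\n\ndef tf_vector(text, vocab_index):\n    # Binary term-frequency vector: d_j = 1 iff word w_j occurs in the text.\n    d = np.zeros(len(vocab_index))\n    for word in text.lower().split():\n        j = vocab_index.get(word)\n        if j is not None:\n            d[j] = 1.0\n    return d\n\nvocab_index = {\"how\": 0, \"do\": 1, \"i\": 2, \"freeze\": 3, \"my\": 4, \"account\": 5}\nv, d_dim = len(vocab_index), 4          # vocabulary size and embedding size\nW = 0.01 * np.random.randn(d_dim, v)    # trained jointly with the network\np_embedding = W @ tf_vector(\"How do I freeze my account\", vocab_index)\n\\end{verbatim}\n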
Instead of a single matrix multiplication, one may use the slightly more elaborate alternative of applying a feedforward network, in order to allow for non-linear embeddings.\n\nWe note that a TF representation loses information regarding the {\\em order} of the words in the text, but can provide a good global view of key topics discussed in the text. \n\nOur main contribution is a new design for the neural network that ranks candidate answers for a given question. Our design uses a TF-based representation of the question and answer, and includes a new attention mechanism which uses this global representation when computing the attention weights (in addition to the local information used in existing approaches). We describe existing attention designs (based on local information) in Section~\\ref{l_sect_loc_attn}, before proceeding to describe our approach in Section~\\ref{l_sect_glob_loc_attn}. \n\n\\subsection{Local Attention}\n\\label{l_sect_loc_attn}\n\nEarly RNN designs were based on applying a deep feedforward network at every timestep, but struggled to cope with longer sequences due to exploding and diminishing gradients \\cite{lstm}. Other recurrent cells such as the LSTM and GRU cells \\cite{lstm,gru} have been proposed as they alleviate this issue; however, even with such cells, tackling large sequences remains hard~\\cite{lstmsSUCK}. Consider using an LSTM to digest a sequence, and taking the final LSTM state to represent the entire sequence; such a design forces the system to represent the entire sequence using a single LSTM state, which is a very narrow channel, making it difficult for the network to represent all the intricacies of a long sequence~\\cite{bahdanau2014ntm}. \n\nAttention mechanisms allow placing varying amounts of emphasis across the entire sequence~\\cite{bahdanau2014ntm}, making it easier to process long sequences; in QA, we can give different weights to different parts of the answer while aggregating the LSTM outputs along the different timesteps: \n$$f_{\\theta}(a) = \\sum_{i=1}^m \\alpha_i a_{i,r}$$\n\\noindent where $\\alpha_i$ denotes the weight (importance) placed on timestep $i$ and $a_{i,r}$ is the $r$th value of the $i$th embedding vector. \n\nTan et al.~\\cite{tan2016} proposed a very simple attention mechanism for QA, shown in Figure~\\ref{fig:oldModel}:\n$$ m_{a,q}(i) = W_{ad} a_i + W_{qd} f_{\\theta}(q) $$\n$$ \\alpha_i \\propto exp (w_{ms}^T \\tanh(m_{a,q}(i))) $$\n$$ \\hat{a} = \\sum_{i=1}^m \\alpha_i a_i $$ \n\\noindent where $\\alpha_i a(i)$ is the weighted hidden layer, $W_{ad}$ and $W_{qd}$ are matrix parameters to be learned, and $w_{ms}$ is a vector parameter to be learned.\n\n\n\\section{Global-Local Attention}\n\\label{l_sect_glob_loc_attn}\n\nA limitation of the attention mechanism of Tan et al.~\\cite{tan2016} is that it only looks at the the embedded question vector and one candidate answer word embedding at a time. Our proposed attention mechanism adds a {\\em global} view of the candidate, incorporating information from {\\em all} words in the answer. \n\n\\subsection{Creating Global Representations}\n\nOne possibility for constructing a global embedding is an RNN design. However, RNN cells tend to focus on the more recent parts of an examined sequence~\\cite{lstmsSUCK}. We thus opted for using a term-frequency vector representing the entire answer, as shown in Figure~\\ref{fig:newModel}. 
We denote this representation as:\n$$a^{\\text{tf}} = (d_1,d_2,\\ldots,d_v) $$ \n\\noindent where $d_i$ relates to the i'th word in our chosen vocabulary, and $d_i = 1$ if this word appears in the candidate answer, and $d_i = 0$ otherwise. \n\nConsider a candidate answer $a = (y_1,\\ldots,y_n)$, and let $(a_1,\\ldots,a_n)$ denote its sequence of RNN LSTM outputs, i.e. $a_i$ denotes the $i$'th output of a RNN LSTM processing this sequence (so $a_i$ is a vector whose dimensionality is as the hidden size of the LSTM cell). We refer to $a_i$ as the local-embedding at time $i$. \\footnote{Note that although we call $a_i$ a local embedding, the $i$'th LSTM state does of course take into account other words in the sequence (and not only the $i$'th word). By referring to it as ``local'' we simply mean to say that it is more heavily influenced by the $i$'th word or words close to it in the sequence.}\n\n\\subsection{Combining Local and Global Representations to Determine Attention Weights}\n\nThe goal of an attention mechanism is to construct an overall representation of the candidate answer $a$, which is later compared to the question representation to determine how well the candidate answers the question; this is achieved by obtaining a set of weights $w_1,\\ldots,w_n$ (where $w_i \\in \\mathbb{R}^+$), and constructing the final answer representation as a weighted average of the LSTM outputs, with these weights. \n\nGiven a candidate answer $a$, we compute the attention coefficient $w_i$ for timestep $i$ as follows. \n\nFirst, we combine the local view (the LSTM output, more heavily influenced by the words around timestep $t$) with the global view (based on TF features of all the words in the answer). We begin by taking linear combinations of the TF features then passing them through a $\\tanh$ nonlinearity (so that the range of each dimension is bounded in $[-1,1]$):\n$$ b^{\\text{tf}} = \\tanh (W_{1} a^{\\text{tf}}) $$\n\\noindent The weights of the matrix $W_{1}$ are model parameters to be learned, and its dimensions are set so as to map the sparse TF vector $a^{\\text{tf}}$ to a dense low dimensional vector (in our implementation $b^{\\text{tf}}$ is a 50 dimensional vector). \n\nSimilarly, we take a linear combination of the different dimensions of the local representation $a_i$ (in this case there is no need for the $tanh$ operation, as the LSTM output is already bounded):\n$$ b_i^{\\text{loc}} = W_{2} a_i $$\n\\noindent where the weights of the $W_{2}$ are model parameters to be learned (and with dimensions set so that $b_i^{\\text{loc}}$ would be a 140 dimensional vector). \n\nGiven a TF representation of a text $x^{\\text{tf}}$, whose dimensionality is the size of the vocabulary, and an RNN representation of the text $x^{\\text{rnn}}$, with a certain dimentionality $h$, we may wish construct a normalized representation of the text. As the norms of these two parts may differ, simply concatenating these parts may result in a vector dominated by one side. We thus define a joint representation \n$h(x^{\\text{tf}}, x^{\\text{rnn}})$ as follows. 
\n\nWe normalize each part so as to have a desired ratio of norms $\\frac{\\alpha}{\\beta}$ between the TF and RNN representations; this ratio reflects the relative importance of the TF and RNN embeddings in the combined representation (for instance, when setting both $\\alpha$ and $\\beta$ to $1$, both parts have unit norm, giving them equal importance): \n$$ c^{\\text{tf}} = \\frac{\\alpha}{||x^{\\text{tf}}||} \\cdot x^{\\text{tf}} $$\n$$ c^{\\text{rnn}} = \\frac{\\beta} {||x^{\\text{rnn}}||} \\cdot x^{\\text{rnn}} $$ \n\\noindent We then concatenate the normalized TF and RNN representations to generate the joint representation:\n$$h(x^{\\text{tf}}, x^{\\text{rnn}}) = c^{\\text{tf}} \\| c^{\\text{rnn}} $$\n\\noindent where $\\|$ represents vector concatenation. \n\nWe construct the global-local attention representation at the $i$'th word of the answer as:\n$$ a_i^{\\text{glob-loc}} = h(b^{\\text{tf}}, b_i^{\\text{loc}}) $$ \n\\\\ using values of $\\alpha=0.5, \\beta=1$.\n\nThe raw attention coefficient of the $i$'th word in the answer is computed by measuring the similarity between a vector representing the question and the global-local representation of the answer at word $i$. We build these two representations, of matching dimensions, by taking the same number of linear combinations of $a_i^{\\text{glob-loc}}$ (the raw global-local representation of the answer at word $i$) and of $f_{\\theta}(q)$, respectively. Thus the raw attention coefficient for the $i$'th word is:\n\n$$\n\\alpha'_i = sim\\Big( W_{3} a_i^{\\text{glob-loc}}, W_{4} f_{\\theta}(q) \\Big)\n$$\n\\noindent where $W_{3}$, $W_{4}$ are matrices whose weights are parameters to be learned (and whose dimensions are set so that $ W_{3} a_i^{\\text{glob-loc}} $ and $W_{4} f_{\\theta}(q)$ would be vectors of identical dimensionality, 140 in our implementation), and where $sim$ denotes the cosine similarity between vectors:\n$$ sim(u,v) = \\frac{u \\cdot v}{||u|| \\cdot ||v|| } $$ \n\\noindent with the $\\cdot$ symbol in the numerator denoting the dot product between two vectors.\n\n\nFinally, we obtain the attention weights $\\alpha = (\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)$ by applying the softmax operator to the raw attention coefficients $\\alpha' = (\\alpha'_1, \\alpha'_2, \\ldots, \\alpha'_m)$, so that $\\alpha_i \\propto \\exp(\\alpha'_i)$:\n$$ \\alpha_i = \\frac{\\exp{(\\alpha'_i)}}{\\sum_{j=1}^{m} \\exp{(\\alpha'_j)}} $$\n\n\\subsection{Building the Final Attention Based Representation}\n\nThe role of the attention weights is to build a final representation of a candidate answer; different answers are ranked based on the similarity of their final representation and a final question representation. \nSimilarly to the TF representation of the answer, we denote the TF representation of the question as: $q^{\\text{tf}} = (r_1,r_2,\\ldots,r_v) $, where $r_i$ relates to the i'th word in our chosen vocabulary, and $r_i = 1$ if this word appears in the question, and $r_i = 0$ otherwise. 
Our final representation of the question is a joining of the TF representation of the question and the mean pooled RNN question representation (somewhat similarly to how we join the TF and RNN representation when determining the attention weights):\n$$ f'_{\\theta}(q) = h(q^{\\text{tf}}, f_{\\theta}(q)) $$ \n\nOur final representation of the answer is also a joining two parts, a TF part $a^{\\text{tf}}$ (as defined earlier) and an attention weighted RNN part $\\hat{a}$. We construct $\\hat{a}$ as the weighted average of the LSTM outputs, where the weights are the attention weights defined above:\n$$ \\hat{a} = \\sum_{i=1}^m \\alpha_i a_i $$ \n\nThe final representation of the answer is thus:\n$$ f'_{\\theta}(a) = h(a^{\\text{tf}}, \\hat{a}) $$ \n\nFigure \\ref{fig:newModel} describes the final architecture of our model, showing how we use a TF-based global embedding both in determining the attention weights and in the overall representation of the questions and answers. The dotted lines in the figures indicate that our model's attention weights depend not only on the local embedding but also on the global embedding. \n\n\\subsection{Tuning Parameters to Minimize the Loss}\n\nThe loss function $\\mathcal{L}$ we use is the shifted hinge loss defined in Section~\\ref{l_sect_prelim}. We compute the score of an answer candidate $a$ as the similarity between its final representation $f'_{\\theta}(a)$ and the final representation of the question $f'_{\\theta}(q)$ \n\\footnote{We use the cosine similarity as our similarity function for the loss, though other similarity functions can also be used.} : \n$$sim(f'_{\\theta}(q),f'_{\\theta}(a))$$ \n\\noindent Given the score of the correct answer candidate $\\sigma_{a^*} = sim(f'_{\\theta}(q),f'_{\\theta}(a))$ and the score of a distractor (incorrect) candidate $d$, $\\sigma_d = sim(f'_{\\theta}(q),f'_{\\theta}(d))$, our loss is \n$\\mathcal{L} =\\max \\Big\\{ 0, M - \\sigma_{a^*} + \\sigma_{d} \\Big\\}$. \n\nThe above loss relates to a single training item (consisting of a single question, its correct answer and an incorrect candidate answer). Training the neural network parameters involves iteratively examining items in a dataset consisting of many training items (each containing a question, its correct answer and a distractor) and modifying the current network parameters. We train our system using variant of stochastic gradient descent (SGD) with the Adam optimization~\\cite{kingma2014adam}.\n\n\\section{Empirical Evaluation}\n\nWe evaluate our proposed neural network design in a similar manner to earlier evaluations of Siamese neural network designs~\\cite{yang2015wikiqa,severyn2015learning}, where a neural network is trained to embed both questions and candidate answers as low dimensional vectors. \n\n\\subsection{Experiment Setup} \\label{sec:setup}\n\n\\begin{figure*}[h!t]\n\\includegraphics[width=\\textwidth]{att3_example5}\n\\includegraphics[width=0.8\\textwidth]{att3_example92}\n\\caption{A visualization of the attention weights for each word in a correct answer to a question. 
These examples show how the attention mechanism is focusing on relevant parts of the correct answer (although the attention is still quite noisy).}\n\\label{fig-example-attn-weights}\n\\end{figure*}\n\n\\begin{figure}[h!t]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{ModelPerf}\n\\caption{Performance of our system on InsuranceQA for various model sizes $h$ (both the LSTM hidden layer size and embedding size)}\n\\label{fig-size-to-perf}\n\\end{figure}\n\nWe use the InsuranceQA dataset and its evaluation framework~\\cite{fengCNN}. \nThe InsuranceQA dataset contains question and answer pairs from the insurance domain, with roughly 25,000 unique answers, and is already partitioned into a training set and two test sets, called test 1 and test 2. \n\nThe InsuranceQA dataset has relatively short questions (mean length of 7). However, the answers are typically very long (mean length of 94). \n\nAt test time the system takes as input a question $q$ and a pool of candidate answers $P=(a_1,a_2,\\ldots,a_k)$ and is asked to select the best matching answer $a^*$ to the question from the pool. The InsuranceQA comes with answer pools of size $k=500$, consisting of the correct answers and random distractors chosen from the set of answers to other questions. \n\nState-of-the-art results for InsuranceQA were achieved by Tan et al~\\cite{tan2016}, which also provide a comparison with several baselines: Bag-of-words (with IDF weighted sum of word vectors and cosine similarity based ranking), the Metzler-Bendersky IR model~\\cite{bendersky2010}, and ~\\cite{fengCNN} - the CNN based Architecture-II and Architecture-II with Geometricmean of Euclidean and Sigmoid Dot product (GESD).\n\nWe implemented our model in TensorFlow~\\cite{abadi2016tensorflow} and conducted experiments on our GPU cluster. \n\nWe use the same hidden layer sizes and embedding size as Tan et al.~\\cite{tan2016}: $h=141$ for the bidirectional LSTM size and an embedding size of $e=100$; this allows us to investigate the impact of our proposed attention mechanism. \\footnote{As is the case with many neural networks, increasing the hidden layer size or embedding size can improve the performance on our InsuranceQA models; we compare our performance to the work of Tan et al.~\\cite{tan2016} with the same hidden and embedding sizes; similarly to them we use embeddings pre-trained using Word2Vec~\\cite{mikolov2013} and avoid overfitting by applying early stopping (we also apply Dropout~\\cite{dropout,zaremba2014recurrent}). } \n\n\\begin{table}[h!t]\n\\small\n\\centering\n \\begin{tabular}{ | l | c | c | }\n \\hline\n Model & Test1 & Test2 \\\\\n \\hline\n \\hline\n Bag-of-words & 32.1 & 32.2 \\\\\n \\hline\n Metzler-Bendersky & 55.1 & 50.8 \\\\\n \\hline\n Arch-II~\\cite{fengCNN} & 62.8 & 59.2 \\\\\n \\hline\n Arch-II GSED~\\cite{fengCNN} & 65.3 & 61.0 \\\\\n \\hline \n Attention LSTM~\\cite{tan2016} & 69.0 & 64.8 \\\\ \n \\hline\n \\hline\n TF-LSTM Concatenation & 62.1 & {61.5} \\\\\n \n \\hline \n Local-Global Attention & {\\bf 70.1} & {\\bf 67.4} \\\\\n \n \\hline \n \\end{tabular}\n \\vspace{0.2cm}\n\\caption{Performance of various models on InsuranceQA}\n\\label{table:perf_insqa}\n\\end{table}\n\n\\subsection{Results}\n\nTable~\\ref{table:perf_insqa} presents the results of our model and the various baselines for InsuranceQA. The performance metric used here is P@1, the proportion of instances where a correct answer was ranked higher than all other distractors in the pool. 
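\n\nFor clarity, the P@1 metric (and the mean reciprocal rank reported below) can be computed from the per-question candidate scores as in the following illustrative sketch (hypothetical data layout: one list of scores per question, together with the set of indices of its correct answers):\n\\begin{verbatim}\ndef precision_at_1(score_lists, correct_sets):\n    # Fraction of questions whose top-scoring candidate is a correct answer.\n    hits = 0\n    for scores, correct in zip(score_lists, correct_sets):\n        best = max(range(len(scores)), key=lambda i: scores[i])\n        hits += int(best in correct)\n    return hits \/ len(score_lists)\n\ndef mean_reciprocal_rank(score_lists, correct_sets):\n    # Average of 1 \/ (best rank of a correct answer), ranks starting at 1.\n    total = 0.0\n    for scores, correct in zip(score_lists, correct_sets):\n        order = sorted(range(len(scores)), key=lambda i: -scores[i])\n        rank = next(r for r, i in enumerate(order, start=1) if i in correct)\n        total += 1.0 \/ rank\n    return total \/ len(score_lists)\n\\end{verbatim}\n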
The table shows that our model outperforms the previous baselines. \n\nWe have also examined the performance of our model as a function of its size (determining the system's runtime and memory consumption). We used different values $h \\in \\{10,20,30,40,50\\}$ for both the size of the LSTM's hidden layer size and embedding size, and examined the performance of the resulting QA system on InsuranceQA. Our results are given in Figure~\\ref{fig-size-to-perf}, which shows both the P@1 metric and the mean reciprocal rank (MRR)~\\cite{craswell2009mean,chapelle2009expected} \\footnote{The MRR metric assigns the model partial credit even in cases where the highest ranking candidate is an incorrect answer, with the score depending on the highest rank of a correct answer. }\n\nFigure~\\ref{fig-size-to-perf} shows that performance improves as the model gets larger, but the returns on extending the model size quickly diminish. Interestingly, even relatively small models achieve a reasonable question answering performance. \n\nTo show our attention mechanism is necessary to achieve good performance, we also construct a model that simply concatenates the output of the feedforward network (on TF features) and the output of the bidirectional LSTM, called TF-LSTM concatenation. While this model does make use of TF-based features in addition to the LSTM state of the RNN, it does not use an attention mechanism to allow it to focus on the more relevant parts of the text. As the table shows, the performance of the TF-LSTM model is significantly lower than that of our model with the global-local attention mechanism. This indicates that the improved performance stems from the model's improved ability to focus on the relevant parts of the answer (and not simply from having a larger capacity and including TF-features).\n\nFinally, we examine the the attention model's weights to evaluate it qualitatively. Figure~\\ref{fig-example-attn-weights} visualizes the weights for two question-answer pairs, where the color intensity reflects the relative weight placed on the word (the $\\alpha_i$ coefficients discussed earlier). The figure shows that our attention model can focus on the parts of the candidate answer that are most relevant for the given question. \n\n\n\n\\section{Conclusion}\n\nWe proposed a new neural design for answer selection, using\nan augmented attention mechanism, which combines both local and global information when determining the attention weight to place at a given timestep. Our analysis shows that our design outperforms earlier designs based on a simpler attention mechanism which only considers the local view. \n\nSeveral questions remain open for future research. First, the TF-based global view of our design was extremely simple; could a more elaborate design, possibly using convolutional neural networks, achieve better performance? \n\nSecond, our attention mechanism joins the local and global information in a very simple manner, by normalizing each vector and concatenating the normalized vectors. Could a more sophisticated joining of this information, perhaps allowing for more interaction between the parts, help further improve the performance of our mechanism?\n\nFinally, can the underlying principles of our global-local attention design improve the performance of other systems, such as machine translation or image processing systems? 
\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe duality between symplectic and orthogonal groups has a long standing history,\nand has been noted in physics literature in various settings, see e.g.\n \\cite{Jungling-Oppermann-80,Mkrtchyan-81,Wegner-83,Witten-98}. Informally, the duality asserts that\n averages such as moments or partition functions\nfor the symplectic case of ``dimension'' $N$, can be derived from the respective\nformulas for the orthogonal case of dimension $N$ by inserting $-N$\n into these expressions and by simple scaling.\nThe detailed study of the moments of one-matrix Wishart ensembles, with duality explicitly noted,\nappears in \\cite{Hanlon-Stanley-Stembridge-92}, see \\cite[Corollary 4.2]{Hanlon-Stanley-Stembridge-92}.\nThe duality for one matrix Gaussian Symplectic Ensemble was noted\nby Mulase and Waldron \\cite{Mulase-Waldron-03} who introduced M\\\"obius graphs to write the\nexpansion for traces of powers of\nGOE\/GUE\/GSE expansions in a unified way. The duality appears also in\n \\cite[Theorem 6]{Ledoux-07} as a by-product of differential equations\n for the generating functions of moments.\nRef. \\cite{Goulden-Jackson-96,Goulden-Jackson-96b,Goulden-Jackson-97}\nanalyze the related ``genus series\" over locally orientable surfaces.\n\n\n\nThe purpose of this paper is to prove that the duality between moments of the Gaussian Symplectic Ensemble\nand the Gaussian Orthogonal Ensemble, and between real Wishart and quaternionic Wishart ensembles extends\nto several independent\nmatrices. Our technique consists of elementary combinatorics; our proofs\n differ from \\cite{Mulase-Waldron-03} in the one matrix case,\n and provide a more geometric interpretation for the duality; in the one-matrix Wishart case,\n our proof completes the combinatorial approach initiated in\n \\cite[Section 6]{Hanlon-Stanley-Stembridge-92}.\nThe technique limits the scope of our results to moments, but the\nrelations between moments suggest similar relations between other analytic objects,\nsuch as partition functions, see \\cite{Mulase-Waldron-03}, \\cite{Kodama-Pierce}. The asymptotic expansion of the partition function and analytic description of\n the coefficients of this expansion\n for $\\beta=2$ case appear in \\cite{Ercolani-McLaughlin-03,Guionnet-Maurel-Segala-0503064,Maurel-Segala-0608192}.\n\nThe paper is organized as follows. In Section \\ref{Sect1} we review basic properties of quaternionic\nGaussian random variables. 
In Section \\ref{Sect2} we introduce M\\\"obius graphs; Theorems \\ref{T quaternion moments} and \\ref{thm2.1}\n give formulae\nfor the expected values of products of quaternionic Gaussian random\nvariables in terms of the Euler characteristics of sub-families of\nM\\\"obius graphs or of bipartite M\\\"obius graphs.\nIn Section \\ref{duality} we apply the formulae to the quaternionic Wigner and Wishart families.\n\nIn this paper, we do not address the question of whether the duality\n can be extended to more general functions, or to more general\n $\\beta$-Hermite and $\\beta$-Laguerre ensembles introduced in \\cite{Dumitru-Edelman-06}.\n\n\n\\section{ Moments of quaternion-valued Gaussian random variables}\\label{Sect1}\n\n\\subsection{Quaternion Gaussian law}\nRecall that a quaternion $q\\in \\mathbb{H}$ can be represented as $q=x_0+i x_1+j x_2+ k x_3$\nwith\n$i^2=j^2=k^2=ijk=-1$ and with real coefficients $x_0,\\dots,x_3$.\nThe conjugate quaternion is $\\overline{q}=x_0-i x_1-j x_2- k x_3$,\nso $|q|^2:=q\\overline{q}\\geq 0$. Quaternions with $x_1=x_2=x_3=0$ are usually identified with real numbers;\nthe real part of a quaternion is $\\Re(q)=(q+\\bar{q})\/2$.\n\nIt is well known that quaternions can be identified with the set of certain $2\\times2$ complex matrices:\n\\begin{equation}\n \\label{H2C}\n \\mathbb{H}\\ni x_0+ix_1+jx_2+kx_3\\sim \\left[\\begin{matrix}\n x_0+ix_1 & x_2+i x_3 \\\\\\\\\n -x_2+ix_3&x_0-ix_1\n\\end{matrix}\\right]\\in \\mathcal{M}_{2\\times 2}(\\mathbb{C}),\n\\end{equation}\nwhere on the right hand side $i$ is the usual imaginary unit of $\\mathbb{C}$.\nNote that $\\Re(q)$ is half the trace of the matrix\nrepresentation in \\eqref{H2C}; since this representation is multiplicative and the trace is invariant under cyclic permutations, this implies the cyclic property\n\\begin{equation}\n \\label{tmp**}\\Re(q_1q_2\\dots q_n)=\\Re(q_2q_3\\dots q_nq_1).\n\\end{equation}\n\n\nThe (standard) quaternion Gaussian random variable is an $\\mathbb{H}$-valued random variable\nwhich can be represented as\n\\begin{equation}\n \\label{HH}\n Z=\\xi_0+i \\xi_1+ j\\xi_2+k\\xi_3\n\\end{equation} with independent real\nnormal $N(0,1)$ random variables $\\xi_0,\\xi_1,\\xi_2,\\xi_3$.\nDue to symmetry of the centered normal laws on $\\mathbb{R}$,\nthe law of $(Z,\\overline{Z})$ is the same as the law of $(\\overline{Z},Z)$.\nA calculation shows that if $Z$ is quaternion Gaussian then\nfor fixed $q_1, q_2 \\in\\mathbb{H}$,\n$$\n\\mathbb{E}(Z q_1 Zq_2)=\\mathbb{E}(Z^2)\\bar{q}_1q_2,\\;\n\\mathbb{E}(Z q_1 \\overline{Z}q_2)=\\mathbb{E}(Z\\bar{Z})\\Re(q_1)q_2\\,.\n$$\nFor future reference, we record the moments explicitly:\n\\begin{eqnarray}\n \\label{q1}\n\\mathbb{E}(Z q_1 Zq_2)&=&-2\\bar{q}_1q_2,\\\\\n\\label{q2}\n\\mathbb{E}(Z q_1 \\overline{Z}q_2)&=& 2 (q_1+\\bar{q}_1)q_2.\n\\end{eqnarray}\nBy linearity, these formulas imply\n\\begin{eqnarray}\n\\label{q3}\n\\mathbb{E}( \\Re(Z q_1) \\Re( \\overline{Z} q_2) ) &=& \\Re( q_1 q_2 ) ,\\\\\n\\label{q4}\n\\mathbb{E}( \\Re(Z q_1) \\Re(Z q_2 ) ) &=& \\Re( \\bar{q}_1 q_2 ).\n\\end{eqnarray}\n\n\\subsection{Moments}\nThe following is known as Wick's theorem \\cite{Wick-50}.\n\\begin{theoremA}[Isserlis \\cite{Isserlis-1918}]\n\\label{WickTHMR} If $(X_1,\\dots,X_{2n})$ is\nan $\\mathbb{R}^{2n}$-valued Gaussian random vector with mean zero, then\n\\begin{equation}\\label{R-Wick}\n E(X_1X_2\\dots X_{2n})=\\sum_{V}\\prod_{\\{j,k\\}\\in V} E(X_jX_k),\n\\end{equation}\nwhere the sum is taken over all pair partitions $V$ of\n$\\{1,2,\\dots,2n\\}$, i.e.,\npartitions into two-element sets, so each $V$ has the 
form\n$$V=\\left\\{\\{j_1,k_1\\},\\{j_2,k_2\\},\\dots,\\{j_n, k_n\\}\\right\\}.$$\n\\end{theoremA}\n\n\n Theorem \\ref{WickTHMR} is a consequence of the moments-cumulants relation \\cite{Leonov-Shirjaev-59};\n the connection is best visible in the partition formulation of \\cite{Speed-83}. For another proof,\n see \\cite[page 12]{Janson-97}.\n\n Our first goal is to extend this formula to certain quaternion Gaussian random variables.\nThe general multivariate quaternion Gaussian law is discussed in\n\\cite{Vakhania-99}. Here we will only consider a special setting of\nsequences that are drawn with repetition from a sequence of\nindependent standard Gaussian quaternion random variables. In\nsection \\ref{duality}\n we apply this result to a multi-matrix version of the\nduality between GOE and GSE ensembles of random matrices.\n\n\n\nIn view of the Wick formula \\eqref{R-Wick} for real-valued jointly Gaussian random\nvariables, formulas \\eqref{q1} and \\eqref{q2} allow us to compute\nmoments of certain products of quaternion Gaussian random variables.\nSuppose the $n$-tuple $(X_1,X_2,\\dots,X_{n})$\nconsists of random variables taken, possibly with repetition, from\nthe set\n $$\\{Z_1,\\bar{Z}_1,Z_2,\\bar{Z_2},\\dots\\},$$\n where\n$Z_1,Z_2,\\dots$ are independent quaternion Gaussian random variables.\n Consider an auxiliary family of independent pairs $\\{(Y_{j}^{(r)},Y_{k}^{(r)}): r=1,2,\\dots\\}$ which have the same laws as\n $(X_j,X_k)$, $1\\leq j,k\\leq n$ and are independent for different $r$.\n Then the Wick formula for real-valued Gaussian variables implies $\\mathbb{E}(X_1X_2\\dots X_{n})=0$ for odd $n$, and\n \\begin{equation}\\label{Wick0}\n\\mathbb{E}(X_1X_2\\dots X_{n})=\\sum_{f}\\mathbb{E}(Y_1^{(f(1))}Y_2^{(f(2))}\\dots Y_{n}^{(f(n))}),\n \\end{equation}\n where the sum is over the pair partitions $V$ that appear under the sum in Theorem \\ref{WickTHMR}, each represented by the level sets of a two-to-one valued function\n $f:\\{1,\\dots,n\\}\\to\\{1,\\dots,m\\}$ for $n=2m$. 
(Thus the sum is over classes of equivalence of $f$, each of $m!$ representatives contributing the same value.)\n\n\n For example, if $Z$ is quaternion Gaussian then\napplying \\eqref{Wick0} with $f_1$ that is constant, say $1$, on $\\{1,2\\}$,\n $f_2$ that is constant on $\\{1,3\\}$, and $f_3$ that is constant on $\\{1,4\\}$\nwe get\n$$\n\\mathbb{E}(Z^4)=\\mathbb{E}(Z^2) \\left(\\mathbb{E}(Z^2) + \\mathbb{E}(\\bar{Z}Z)+ \\mathbb{E}(\\bar{Z}^2)\\right)=0.\n$$\n\n\nFormulas \\eqref{q1} and \\eqref{q2} then show that the Wick reduction step takes the\nfollowing form.\n\\begin{equation}\\label{Wick1}\n \\mathbb{E}(X_1X_2\\dots X_n)=\\sum_{j=2}^n \\mathbb{E}(X_1X_j)\\mathbb{E}(U_j X_{j+1}\\dots X_n),\n\\end{equation}\nwhere\n$$\nU_j=\\begin{cases}\n \\Re(X_2\\dots X_{j-1}) & \\mbox{ if $X_j=\\bar{X}_1$ }\\\\\n \\bar{X}_{j-1}\\dots \\bar{X}_2 &\\mbox{ if $X_j={X}_1$}\\\\\n 0 &\\mbox{ otherwise} \\,.\n\\end{cases}\n$$\nThis implies that one can do inductively the calculations, but due to noncommutativity\narriving at explicit answers may still require significant work.\n\n\n\nFormula \\eqref{Wick1} implies that $\\mathbb{E}(X_1X_2\\dots X_n)$ is real, so on the left hand side of \\eqref{Wick1} we can write\n$\\mathbb{E}(\\Re(X_1X_2\\dots X_n))$; this form of the formula will be associated with one-vertex M\\\"obius graphs.\n\nFurthermore, we have a Wick reduction which will correspond to the multiple vertex M\\\"obius graphs:\n\\begin{multline}\\label{Wick2}\n\\mathbb{E}( \\Re( X_1 ) \\Re( X_2 X_3 \\dots X_n ) ) \\\\=\n\\sum_{j=2}^n \\mathbb{E}(\\Re(X_1)\\Re(X_j))\\mathbb{E}( \\Re(X_2 \\dots X_{j-1} X_{j+1} \\dots X_n )).\n\\end{multline}\n(This is just a consequence of Theorem \\ref{WickTHMR}).\n\n\n\n\nIn the next section we will show that formulae (\\ref{Wick1}) and\n(\\ref{Wick2}) give a method of computing the expected values of\nquaternionic Gaussian random variables by the enumeration of M\\\"obius\ngraphs partitioned by their Euler characteristic. This is analogous\nto similar results for complex Gaussian random variables and ribbon\ngraphs, and for real Gaussian random variables and M\\\"obius graphs.\nIn Section \\ref{duality} we will show that this\nresult implies the duality of the GOE and GSE\nensembles of Wigner random matrices and the duality of real and\nquaternionic Wishart random matrices.\n\n\n\n\\section{M\\\"obius graphs and quaternionic Gaussian moments}\\label{Sect2}\n\nIn this section we introduce M\\\"obius graphs and then give formulae\nfor the expected values of products of quaternionic Gaussian random\nvariables in terms of the Euler characteristics of sub-families of\nM\\\"obius graphs. This is an analogue of the method of t'Hooft\n\\cite{Bessis-Itzykson-Zuber-80,Goulden-Harer-Jackson-01, Goulden-Jackson-97,harer-zagier-86, Jackson-94, tHooft}. M\\\"obius graphs have also been used to give combinatoric\ninterpretations of the expected values of traces of Gaussian\northogonal ensemble of random matrices and of Gaussian symplectic ensembles, see the articles\n\\cite{Goulden-Jackson-97} and \\cite{Mulase-Waldron-03}. The connection between M\\\"obius graphs and\nquaternionic Gaussian random variables is at the center of the work of Mulase\nand Waldron \\cite{Mulase-Waldron-03}.\n\n\n\n\n\\subsection{M\\\"obius graphs}\nM\\\"obius graphs are ribbon graphs where the edges (ribbons) are\nallowed to twist, that is they either preserve or reverse the local\norientations of the vertices. 
As the convention is that the ribbons\nin \\textit{ribbon graphs} are not twisted we follow \\cite{Mulase-Waldron-03} and call the unoriented\nvariety M\\\"obius graphs. The vertices of a M\\\"obius graph are\nrepresented as disks together with a local orientation; the edges are represented as\nribbons, which preserve or reverse the local orientations of the\nvertices connected by that edge. Next we identify the collection of\ndisjoint cycles of the sides of the ribbons found by following the\nsides of the ribbon and obeying the local orientations at each\nvertex. We then attach disks to each of these cycles by gluing the\nboundaries of the disks to the sides of the ribbons in the cycle.\nThese disks are called the faces of the M\\\"obius graph, and the\nresulting surface we find is the surface of maximal\nEuler\ncharacteristic on which the M\\\"obius graph may be drawn so that edges do not cross.\n\n\nDenote by $v(\\Gamma)$, $e(\\Gamma)$, and $f(\\Gamma)$ the number of vertices, edges, and faces of $\\Gamma$.\nWe say that the Euler characteristic of $\\Gamma$ is\n$$\n\\chi(\\Gamma)=v(\\Gamma)-e(\\Gamma)+f(\\Gamma),\n$$\nfor connected $\\Gamma$, this is also the maximal Euler characteristic of a connected surface into which\n$\\Gamma$ is embedded.\nFor example, in Fig. \\ref{F1}, the Euler characteristics are\n$\\chi_1=1-1+2=2$ and $\\chi_2=1-1+1=1$. The two graphs may be embedded\ninto the sphere or projective sphere respectively.\n\n\nIf $\\Gamma$ decomposes into connected components $\\Gamma_1$, $\\Gamma_2$, then\n$\\chi(\\Gamma)=\\chi(\\Gamma_1)+\\chi(\\Gamma_2)$.\n\nThroughout the paper our M\\\"obius graphs will have the following\nlabels attached to them: the vertices are labeled to make them\ndistinct, in addition the edges emanating from each vertex are also\nlabeled so that rotating any vertex produces a distinct graph. These\nlabels may be removed if one wishes by rescaling all of our\nquantities by the number of automorphisms the unlabeled graph would\nhave.\n\n\\subsection{Quaternion version of Wick's theorem}\n\n\\tolerance=2000\nSuppose the $2n$-tuple $$(X_1,X_2,\\dots,X_{2n})$$\nconsists of random variables taken, possibly with repetition, from\nthe set $\\{Z_1,\\bar{Z_1},Z_2,\\bar{Z_2},\\dots\\}$ where\n$Z_1,Z_2,\\dots$ are independent quaternionic Gaussian.\nFix a sequence $j_1,j_2,\\dots,j_m$ of natural numbers such that $j_1+\\dots+j_m=2n$.\n\nConsider the family $\\mathcal{M}=\\mathcal{M}_{j_1,\\dots,j_m}(X_1,X_2,\\dots,X_{2n})$, possibly empty, of M\\\"obius graphs with $m$ vertices of degrees $j_1,j_2,\\dots,j_m$ with edges labeled by $X_1,X_2,\\dots,X_{2n}$,\nwhose regular edges correspond to pairs $X_i=\\bar{X}_j$ and flipped edges correspond to pairs $X_i=X_j$. 
No edges of $\\Gamma\\in \\mathcal{M}$ can join random variables $X_i,X_j$ that are independent.\n\\tolerance=1000\n\n\\begin{theorem}\n \\label{T quaternion moments}\nLet $\\left\\{ X_1, X_2, \\dots, X_{2n} \\right\\}$ be chosen, possibly\nwith repetition, from the set $\\{ Z_1, \\bar{Z_1}, Z_2, \\bar{Z_2},\n\\dots \\}$ where $Z_j$ are independent quaternionic Gaussian random\nvariables, then\n\\begin{multline}\n\\mathbb{E}\\big(\\Re(X_1X_2 \\dots X_{j_1})\\Re(X_{j_1+1}^{}\\dots X_{j_1+j_2}^{})\\times\\dots \\\\\\times\\Re(X_{j_1+j_2+\\ldots+j_{m-1}+1}^{}\\dots\n X_{2n}^{})\\big)=4^{n-m}\\sum_{\\Gamma\\in \\mathcal{M}} (-2)^{\\chi(\\Gamma)}.\n\\end{multline}\n(The right hand side is interpreted as $0$ when $\\mathcal{M}=\\emptyset$.)\n\\end{theorem}\n\n\\begin{remark}\n We would like to emphasize that in computing the Euler characteristic one must first break the graph into\n connected components. For example, if $j_1=\\dots=j_m=1$ so that $m=2n$ is even,\nand $X_1, X_2, \\dots, X_{2n}$ are $n$ independent pairs, as the real\nparts are commutative we may assume that $X_{2k} = X_{2k-1}$, and\nthe moment is\n$$ 1=\\mathbb{E}( \\Re (X_1) \\Re (X_2) \\dots \\Re (X_{2n}) ) = 4^{-n} (-2)^{\\chi(\\Gamma)}. $$\nWe see that graphically $\\Gamma$\n is a collection of $2n$ degree one vertices connected together forming $n$\n dipoles (an edge with a vertex on either end).\n Hence there are $n$ connected components each of Euler characteristic $2$,\n therefore the total Euler characteristic is $\\chi = 2 n $ giving\n$$ 4^{-n} (-2)^\\chi= 4^{-n} 4^{n} = 1. $$\n\\end{remark}\n\n\\begin{proof}[Proof of Theorem \\ref{T quaternion moments}]\nIn view of \\eqref{Wick0} and \\eqref{Wick1}, it suffices to show that if $X_1,\\dots,X_{2n}$ consists of $n$ independent pairs, and each pair is either of the form $(X,X)$ or $(X,\\bar{X})$, then\n\\begin{multline}\n\\label{star2}\n \\mathbb{E}\\big(\\Re(X_1X_2 \\dots X_{j_1})\\Re(X_{j_1+1}^{}\\dots X_{j_1+j_2}^{})\\times\\dots \\\\\\times\\Re(X_{j_1+j_2+ \\dots+\n j_{m-1}+1}^{}\\dots\n X_{2n}^{})\\big)= 4^{n-m}(-2)^{\\chi(\\Gamma)}\\;,\n\\end{multline}\nwhere $\\Gamma$ is the M\\\"obius graph that describes the pairings of the sequence.\n\nFirst we check the two M\\\"obius graphs for\n$n=1$, $m=1$:\n\\[ \\mathbb{E}( \\Re( X \\bar{X}) ) = (-2)^2 \\,, \\quad \\mbox{and}\\quad \\mathbb{E}( \\Re(X X))\n= (-2)^1 \\,.\\]\nOne checks that these correspond to the M\\\"obius graphs in Figure \\ref{F1},\nwhich gives a sphere ($\\chi=2$) and projective sphere ($\\chi=1$) respectively.\n\\begin{figure}[htb]\n \\includegraphics[width=4in]{fig1}\n \\caption{The two possible M\\\"obius graphs with a single degree 2 vertex. The left hand one is a ribbon which is untwisted and the graph embeds into a copy of the Riemann sphere, while the\n right hand one is a ribbon which is twisted and the graph embeds into a copy of the projective sphere. \n \\label{F1}}\n\\end{figure}\n\n\nWe now proceed with the induction step.\n One notes that by independence of the pairs at different edges, the\nleft hand side of \\eqref{star2} factors into the product corresponding to\nconnected components of $\\Gamma$. It is therefore enough to consider\nconnected $\\Gamma$. \\label{tmp1}\n\nIf $\\Gamma$ has two vertices that are joined by an edge, we can use cyclicity\nof $\\Re$ to move the variables that label the edge to the first positions in\ntheir cycles, say $X_1$ and $X_{j_1+1}$ and use \\eqref{q3} or \\eqref{q4} to\neliminate this pair from the product. 
The use of relation (\ref{q3})
is just that of gluing the two vertices together, removing the edge $x$,
which is labeled by the two appearances of $Z$. Relation (\ref{q4}) glues
together the two vertices, removing
the edge $x$, and the reversal of orientation across the edge is
given by the conjugate (see Figure \ref{F4}).
These geometric operations reduce $n$ and $m$ by one without changing the
Euler characteristic:
the number of edges and the number of vertices are both reduced by 1, and the faces are
preserved -- in the case of the edge flip
in Fig. \ref{F4}, the edges of the face from which we remove the edge
follow the same order after the reduction.



 Therefore we will only need to prove the
result for the single-vertex case of the induction step.

\begin{figure}[htb]
\includegraphics[height=4in]{fig2}
 \caption{A M\"obius graph with two vertices connected by a ribbon may be reduced to a M\"obius graph with one less vertex and one less edge.
In these graphs the ``$\dots$'' indicate that there are arbitrary
numbers of other edges at the vertex, and the edges drawn are
connected to other vertices.
 The top graph is an example of this reduction when the connecting ribbon is untwisted; in this case the two vertices are glued together with no other changes in the ribbons. The bottom graph is an example of this reduction when the connecting ribbon is twisted; in this case the two vertices are glued together, and the order and orientation (twisted or untwisted) of the ribbons on one side are reversed.
\label{F4}}
\end{figure}


We wish to show that
\begin{equation} \label{star3}
 \mathbb{E}( X_1 X_2 X_3 X_4 \dots X_{2n}) =
(-2)^{\chi({\Gamma})} 4^{n-1} \,,
\end{equation}
where $\Gamma$ is a one-vertex M\"obius graph with arrows (half edges)
 labeled by the
$X_k$.
We will do this by induction; there are two cases:
\begin{enumerate}

\item[\bf Case 1:] $X_1 = \bar{X}_j$ for $1< j \leq 2n $,
\begin{align} \nonumber
\mathbb{E}( X_1 X_2 \dots X_{j-1} \bar{X}_1 X_{j+1} \dots X_{2n}) &=
\mathbb{E}( X_1 \bar{X}_1 ) \mathbb{E}( \Re( X_2 \dots X_{j-1} ) \Re( X_{j+1} \dots X_{2n}) ) \\
\label{oriented_reduction}
&= 4 \mathbb{E}( \Re( X_2 \dots X_{j-1}) \Re( X_{j+1} \dots X_{2n}) ) \,.
\end{align}
This corresponds to the reduction of the M\"obius graph pictured
in Figure \ref{orient-reduct}, which splits the single vertex
into two vertices. The Euler characteristic becomes $\chi_2 = 2 - (n-1) + f_1
= \chi_1 +2 $.
By the induction assumption we find
\begin{equation*}
\mathbb{E}( X_1 \dots X_{2n}) = 4 \left[ 4^{(n-1) - 2} (-2)^{\chi_2} \right]
= 4^{n-2} (-2)^{\chi_1 +2} = 4^{n-1} (-2)^{\chi_1} \,.
\end{equation*}

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{orient-reduct}
\end{center}
\caption{ Here an untwisted ribbon of the M\"obius graph
returns to the same vertex; this edge is removed in our reduction
procedure, giving us a M\"obius graph with one more vertex and one
less edge.
\label{orient-reduct}}
\end{figure}

\item[\bf Case 2:] $X_1 = X_j$ for $1 < j \leq 2n$,
\begin{align}\nonumber
\mathbb{E}( X_1 X_2 \dots X_{j-1} X_1 X_{j+1} \dots X_{2n}) &=
\mathbb{E}( X_1 X_1 ) \mathbb{E}( \bar{X}_{j-1} \dots \bar{X}_2 X_{j+1} \dots X_{2n} )
\\ \label{unoriented-reduction}
&= (-2) \mathbb{E}( \bar{X}_{j-1} \dots \bar{X}_2 X_{j+1} \dots X_{2n} ) \,.
\end{align}
This corresponds to the reduction of the M\"obius graph pictured
in Figure \ref{unorient-reduct}, which keeps the single vertex
and flips the order and orientation of the edges between $X_1$ and $X_j$.
The Euler characteristic becomes $\chi_2 = 1 - (n-1) + f_1 = \chi_1 + 1$.
By the induction assumption we find
\begin{equation*}
\mathbb{E}( X_1 \dots X_{2n}) = (-2) \left[ 4^{(n-1) -1} (-2)^{\chi_2} \right]
= (-2)^{-1} 4^{n-1} (-2)^{\chi_1 + 1} = 4^{n-1} (-2)^{\chi_1}\,.
\end{equation*}

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{unorient-reduct}
\end{center}
\caption{ Here a twisted ribbon of the M\"obius graph
returns to the same vertex; this edge is removed in our reduction
procedure, giving us a M\"obius graph with one less edge and reversing
both the order and orientation (twisted or untwisted) of
the ribbons on one side of the removed ribbon.
\label{unorient-reduct}}
\end{figure}



\end{enumerate}


Note that taking away an oriented ribbon creates a new vertex. The
remaining graph might still be connected, or it may split into two
components. If taking away a loop makes the graph disconnected, then
the changes in the counts of edges and vertices are still the same. But
the faces need to be counted as follows: the inner face of the
removed edge becomes the outer face of one component, and the
outer face at the removed edge becomes the outer face of the other
component. Thus the counting of faces is not affected by
whether the graph remains connected.

With these two cases checked, the proof is completed by induction.

\end{proof}



\subsection{Bipartite M\"obius graphs and quaternionic Gaussian moments} \label{Sect bipartite}
To deal with quaternionic Wishart random matrices, we need to consider a special subclass of the quaternionic
Gaussian variables from Theorem \ref{T quaternion moments}.
Suppose the $2n$-tuple $(X_{\pm 1},X_{\pm 2},\dots,X_{\pm n})$
consists of $n$ pairs of random variables taken, with repetition, from
the set $\{Z_1,Z_2,\dots,Z_n\}$ of independent quaternionic Gaussian random variables. Note that, in contrast to the setup for Theorem \ref{T quaternion moments}, here all the $Z$'s appear without conjugation.
Fix a sequence $j_1,j_2,\dots,j_m$ of natural numbers such that $j_1+\dots+j_m=n$.
Theorem \ref{T quaternion moments} then says that
\begin{align*}
 \mathbb{E}\big( \Re( \bar{X}_{-1} X_1 \cdots \bar{X}_{-j_1} X_{j_1})
\times \Re( \bar{X}_{-j_1-1} X_{j_1 +1} \cdots \bar{X}_{-j_1-j_2} X_{j_1+j_2} )
\times \cdots
\\ \dots \times
\Re( \bar{X}_{-j_1-j_2-\dots -j_{m-1}-1} \cdots \bar{X}_{-n} X_{n} )\big)
\\ = 4^{n-m} (-2)^{\chi(\Gamma)},
\end{align*}
where $\Gamma$ is the M\"obius graph with $m$ vertices and with edges labeled by $X_{\pm 1},\dots,X_{\pm n}$ that describes the
pairings between the variables under the expectation.
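To illustrate this formula, take $n=2$, $m=1$, $j_1=2$, and $X_{-1}=X_1=Z_1$, $X_{-2}=X_2=Z_2$ with $Z_1,Z_2$ independent; the unique pairing gives an untwisted planar graph with $\chi(\Gamma)=2$, and indeed
$$
\mathbb{E}\big(\Re(\bar{Z}_1 Z_1\bar{Z}_2 Z_2)\big)=\mathbb{E}\big(|Z_1|^2|Z_2|^2\big)=16=4^{2-1}(-2)^2.
$$
If instead $X_{-1}=X_1=X_{-2}=X_2=Z$ for a single quaternionic Gaussian $Z$, the sum over $\mathcal{M}$ from Theorem \ref{T quaternion moments} has three terms: the two non-crossing pairings match each $\bar{Z}$ with a $Z$ and give $\chi=2$, while the crossing pairing matches $\bar{Z}$ with $\bar{Z}$ and $Z$ with $Z$ by flipped ribbons and gives $\chi=1$, so that
$$
\mathbb{E}\big(|Z|^4\big)=4^{2-1}\big((-2)^2+(-2)^2+(-2)^1\big)=24,
$$
the value one also obtains directly when the four real components of $Z$ are independent standard normal, the normalization consistent with $\mathbb{E}(Z\bar{Z})=4$ and $\mathbb{E}(ZZ)=-2$ above.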
Our goal is to show that the same formula holds true for another graph, a bipartite M\"obius graph whose edges are labeled by the $n$ pairs $(X_{-j},X_j)$, $1\leq j \leq n$.

The bipartite M\"obius graph has two types of vertices: black vertices and white (or, later, colored) vertices, with ribbons that can only connect a black vertex to a white vertex. (As previously, the ribbons may carry a ``flip'' of orientation, which we represent graphically as a twist.) To define this graph, we need to introduce three pair partitions on the set $\{\pm 1,\dots,\pm n\}$.
The first partition, $\delta$, pairs $j$ with $-j$. The second partition, $\sigma$, describes the placement of $\Re$: its pairs are
\begin{align*}
\{1,-2\},\{2,-3\},\dots,\{j_1,-1\},\\
\{j_1+1, -(j_1+2)\},\dots, \{j_1+j_2,-(j_1+1)\},\\
\vdots\\
\{1+\sum_{k=1}^{m-1}j_k,-(2+\sum_{k=1}^{m-1}j_k)\},\dots, \{n,-(1+\sum_{k=1}^{m-1}j_k)\}.
\end{align*}
The third partition, $\gamma$, describes the choices of pairs from $Z_1,\dots,Z_n$. Thus
$
\{j,k\}\in\gamma
$
if $X_j=X_k$ when $jk>0$, or $X_j=\bar{X}_k$ when $jk<0$.

We will also represent these pair partitions as graphs with vertices arranged in
two rows, and
with the edges drawn between the vertices in each pair of a partition.
Thus
\begin{equation}
 \label{eq:delta}
 \delta=\begin{matrix}
 \xymatrix @-1pc{
 {^1_ \bullet} \ar@{-}[d]& {^2_\bullet}
 \ar@{-}[d]& \dots & {^n_\bullet} \ar@{-}[d]\\
 {^{\hspace{1.5mm}\bullet}_{-1}} &{^{\hspace{1.5mm}\bullet}_{-2}} &\dots &
{^{\hspace{1.5mm}\bullet}_{-n}} \\

}
\end{matrix}
\end{equation}
and
{\small
$$
\sigma=\begin{matrix}
 \xymatrix @-1pc{
 {^1_ \bullet} \ar@{-}[dr]& {^2_\bullet} \ar@{-}[dr]& {^3_\bullet}
 & \dots & {^{j_1}_\bullet}\ar@{-}[dllll] & {^{j_1+1}_{\hspace{2.5mm}\bullet}}\ar@{-}[dr] & {^{j_1+2}_{\hspace{3mm}\bullet}}&\dots & {^{j_1+j_2}_{\hspace{3.7mm}\bullet}}\ar@{-}[dlll]&
 {^{j_1+j_2+1}_{\hspace{5mm}\bullet}}&\dots \\
 {^{\hspace{1.5mm}\bullet}_{-1}} &{^{\hspace{1.5mm}\bullet}_{-2}} &{^{\hspace{1.5mm}\bullet}_{-3}}&\dots &
{^{}_\bullet}& {^{}_\bullet} & {^{}_\bullet}&\dots&{^{}_\bullet}&{^{}_\bullet}&\dots
}
\end{matrix}
$$
}

Consider the 2-regular graphs $\delta\cup \gamma$ and $\delta\cup \sigma$. We orient the cycles of these graphs by ordering $(-j,j)$ on the left-most vertical edge of the cycle.
For example,

{\small $$
\sigma\cup\delta=\begin{matrix}
 \xymatrix @-1pc{
 {^1_ \bullet} \ar@{-}[dr]& {^2_\bullet} \ar@{-}[dr]& {^3_\bullet}
 & \dots & {^{j_1}_\bullet}\ar@{-}[dllll] & {^{j_1+1}_{\hspace{2.5mm}\bullet}}\ar@{-}[dr] & {^{j_1+2}_{\hspace{3mm}\bullet}}&\dots & {^{j_1+j_2}_{\hspace{3.7mm}\bullet}}\ar@{-}[dlll]& {^{j_1+j_2+1}_{\hspace{5mm}\bullet}}&\dots \\
 {^{\hspace{1.5mm}\bullet}_{-1}}\ar@{->}[u] &{^{\hspace{1.5mm}\bullet}_{-2}}\ar@{-}[u] &{^{\hspace{1.5mm}\bullet}_{-3}}\ar@{-}[u]&\dots &
{^{}_\bullet}\ar@{-}[u]& {^{}_\bullet} \ar@{->}[u]& {^{}_\bullet}\ar@{-}[u]&\dots&{^{}_\bullet}\ar@{-}[u]& {^{}_\bullet} \ar@{->}[u]&\dots
}
\end{matrix}
$$
}
We now define the bipartite M\"obius graph by assigning black vertices to the $m$ cycles of $\delta\cup \sigma$, and white vertices to the cycles of $\delta\cup \gamma$.

Each black vertex is oriented counter-clockwise. For each black vertex we follow the cycle of $\delta\cup \sigma$, drawing a labeled line for each element of the partition.
The lines corresponding to $-j,j$ are adjacent, and will eventually become two edges of a ribbon.

Each white vertex is oriented, say, clockwise; we identify graphs that differ only by
a choice of orientation at some of the white vertices.
For each white vertex, we follow the corresponding cycle of $\delta\cup \gamma$, drawing a labeled line for each element of the partition. The lines corresponding to $-j,j$ are adjacent, but may appear in two different orders depending on the orientation of the corresponding edge of $\delta$ on the cycle.

The final step is to connect pairs $(j,-j)$ on the black vertices with the same pairs on the white vertices. This creates the ribbons, which carry a flip if the orientation of the two lines
on the black vertex does not match the orientation of the same edges on the white vertex.


\begin{figure}
\begin{center}\includegraphics[width=6cm]{black-white}
\end{center}
\caption{\label{black_white} Representation of a bipartite M\"obius
graph; the edges drawn are ribbons that are either twisted or
untwisted. }
\end{figure}
The individual edges pictured in Figure \ref{black_white} are ribbons and are
labeled as in Figure \ref{black-white-2}.
\begin{figure}
\begin{center}\includegraphics[width=5cm]{black-white-2}\end{center}
\caption{\label{black-white-2} Example of the labeling we use for
the edges emanating from a black vertex in a bipartite M\"obius
graph. }
\end{figure}
 We allow twists of ribbons to
propagate through a white vertex, calling the two resulting bipartite M\"obius graphs
equivalent.



Suppose the $2n$-tuple $(X_{\pm 1},X_{\pm 2},\dots,X_{\pm n})$
consists of random variables taken, with repetition, from
the set $\{Z_1,Z_2,\dots,Z_n\}$ of independent quaternionic Gaussian random variables.
Let $\mathcal{M}=\mathcal{M}(X_{\pm 1},X_{\pm 2},\dots,X_{\pm n})$ denote the set of all bipartite M\"obius graphs $\Gamma$
that correspond to the various ways of pairing all repeated $Z$'s in the sequence
$(X_{\pm 1},X_{\pm 2},\dots,X_{\pm n})$; the pairs are given by adjacent half edges at each white
vertex. (See the preceding construction.)
$\mathcal{M}=\emptyset$ if there is a $Z_j$ that is repeated
an odd number of times.
\begin{theorem} \label{thm2.1}
With this notation,
\begin{align*}
 \mathbb{E}\big( \Re( \bar{X}_{-1} X_1 \cdots \bar{X}_{-j_1} X_{j_1})
\times \Re( \bar{X}_{-j_1-1} X_{j_1 +1} \cdots \bar{X}_{-j_1-j_2} X_{j_1+j_2} )
\times \cdots
\\ \dots \times
\Re( \bar{X}_{-j_1-j_2-\dots -j_{m-1}-1} \cdots \bar{X}_{-n} X_{n} )\big)
\\ = 4^{n-m}\sum_{\Gamma\in\mathcal{M}} (-2)^{\chi(\Gamma)}\;.
\end{align*}
(The right hand side is interpreted as $0$ when $\mathcal{M}=\emptyset$.)
\end{theorem}



\begin{proof}
The proof is fundamentally the same as that of the Wigner version of this
theorem.
In view of \eqref{Wick0} and \eqref{Wick1}, it suffices
 to consider $\{ X_{\pm 1}, \dots, X_{\pm n} \}$ that form $n$ independent pairs, and to show that
\begin{multline}\label{WWW314}
 \mathbb{E}\big( \Re( \bar{X}_{-1} X_1 \cdots \bar{X}_{-j_1} X_{j_1})
\times \Re( \bar{X}_{-j_1-1} X_{j_1 +1} \cdots \bar{X}_{-j_1-j_2} X_{j_1+j_2} )
\times \cdots
\\ \dots \times
\Re( \bar{X}_{-j_1-j_2-\dots -j_{m-1}-1} \cdots \bar{X}_{-n} X_{n} )\big)
 = 4^{n-m} (-2)^{\chi(\Gamma)}\;,
\end{multline}
where $\Gamma$ is the bipartite M\"obius graph that describes the
pairings.

We will prove \eqref{WWW314} by induction; to that end we first check
that with $n=1$, $m=1$
we have $\mathbb{E}(\bar{X}X)=(-2)^2$, in agreement with \eqref{WWW314}.

If $\Gamma$ has two black vertices connected together by edges adjacent at a white vertex,
we can use the cyclicity of $\Re$ to move the variables that label the
respective edges and share the same face to
the first positions in their cycles, so that we may call them $\bar{X}_{-1}$
and either $X_j$ or $\bar{X}_{-j}$.
We now use relations (\ref{q3}) and (\ref{q4}) to eliminate the pair from the
product:
\begin{equation*}
\mathbb{E}\left( \Re( \bar{X}_{-1} \cdots X_{j_1}) \Re( X_j \dots X_{j_1+j_2} \bar{X}_{-j} )\right)
= \mathbb{E}( X_1 \cdots X_{j_1+j_2} \bar{X}_{-j})\,,
\end{equation*}
or
\begin{equation*}
\mathbb{E}\left( \Re( \bar{X}_{-1} \cdots X_{j_1}) \Re( \bar{X}_j \dots X_{j_1+j_2} )\right)
= \mathbb{E}( \bar{X}_{j_1} X_{-j_1} \cdots \bar{X}_1 X_j \cdots X_{j_1+j_2})\,.
\end{equation*}
The use of relation (\ref{q3}) corresponds to gluing together the two
ribbons along the halves adjacent at the white vertex, and gluing together the
corresponding black vertices (see Figure \ref{two-gluing-orient}).
The use of relation (\ref{q4}) corresponds to the same gluing, but in this
case one of the ribbons has an orientation reversal in it, resulting in an
orientation reversal for the remaining sides (see Figure
\ref{two-gluing-unorient}).
These geometric operations reduce $n$ and $m$ by one without changing the
Euler characteristic: both the number of edges and the number of vertices are
reduced by one, while the number of faces is preserved.

\begin{figure}
\includegraphics[width=12cm]{two-gluing-orient}
\caption{ Here two black vertices are connected together through
untwisted edges adjacent at a white vertex. This bipartite M\"obius
graph reduces to one with one less vertex and one less edge. The
reduction is found by gluing the two black vertices together and
gluing the two ribbons together along their adjacent sides, here
labeled by $\bar{X}_{-1}$ and $X_j$.
The same reduction would apply
if the two edges were twisted, as we could pass this twist through
the white vertex. \label{two-gluing-orient}}
\end{figure}

\begin{figure}
\includegraphics[width=12cm]{two-unorient}
\caption{ Here two black vertices are connected together through one
twisted edge and one untwisted edge adjacent at a white vertex.
This bipartite M\"obius graph reduces to one with one less vertex
and one less edge. The reduction is found by gluing the two black
vertices together, gluing the two ribbons together along the sides
adjacent at the white vertex, and reversing both the order and
orientations of the remaining ribbons on the second black vertex.
\label{two-gluing-unorient}}
\end{figure}

Therefore we will only need to prove the result for the single-black-vertex case of
the induction step.
We wish to show that
\begin{equation}
\mathbb{E}( \bar{X}_{-1} X_1 \bar{X}_{-2} X_2 \cdots \bar{X}_{-n} X_n ) = 4^{n-1}
(-2)^{\chi(\Gamma)}\;,
\end{equation}
where $\Gamma$ is a bipartite M\"obius graph with a single black vertex and
half ribbons labeled by $X_{\pm k}$. We will
do this by induction; there are two cases:
\begin{enumerate}

\item[\bf Case 1:] $X_{-1} = X_j$ for $1 \leq j \leq n$,
\begin{align*}
\mathbb{E}( \bar{X}_{-1} X_1 \cdots \bar{X}_{-j} X_j \cdots X_n ) &=
\mathbb{E}( \bar{X}_{-1} X_j ) \mathbb{E}( \Re( X_1 \cdots \bar{X}_{-j} ) \Re( \bar{X}_{-j-1}
X_{j+1} \cdots X_n)) \\
&= 4 \mathbb{E}(\Re( \bar{X}_{-j} X_1 \cdots X_{j-1} ) \Re( \bar{X}_{-j-1} X_{j+1}
\cdots X_n) ) \,.
\end{align*}
This corresponds to the reduction of the bipartite M\"obius graph pictured in Figure
\ref{reduce-1}, which for $j>1$ splits the single black vertex into two black vertices
and glues the two edges labeled $\bar{X}_{-1}$ and $X_j$ together.

The edges $(1,-1,j,-j)$ are adjacent at the white vertex and appear either in this order or in the reverse order. Thus, after the removal of $\{-1,j\}$, we get an ordered pair of labels $(1,-j)$ or $(-j,1)$ to glue back into a ribbon.
On the black vertex, due to our conventions, the edges of the ribbons appear in the following order:
 $$((-1,1), (-2, 2),\dots, (-j,j), (-k_1,k_1),\dots,(-k_r,k_r)).$$
 Once we split the black vertex into two vertices with the edges of the ribbons given by
$((-1,1), (-2, 2),\dots, (-j,j))$ and $((-k_1,k_1),\dots,(-k_r,k_r))$, the removal of $\{-1,j\}$ creates a new pair $(1,-j)$, which we use to create the ribbon to the white vertex.
[After this step, we relabel all edges so that the labels $\pm 1,\pm 2,\dots$ are again used consecutively.]


We note that the number of faces of the new graph is the same as that of the previous graph -- the face with the sequence of edges $$(\dots, X_{k_r},\bar{X}_{-1},X_j,\bar{X}_{-k_1},\dots)$$ becomes the face with edges $(\dots, X_{k_r},\bar{X}_{-k_1},\dots)$ on the new graph.
 The
Euler characteristic becomes $\chi_2 = (v_1 + 1) - (e_1 - 1) + f_1 = \chi_1 +
2$, where $v_1$, $e_1$, and $f_1$ are the numbers of vertices, edges, and faces of
$\Gamma$.
By the induction assumption we then find
\begin{equation*}
\mathbb{E}( \bar{X}_{-1} X_1 \cdots X_n) = 4 \left[ 4^{(n-1) - 2} (-2)^{\chi_2}
\right] = 4^{n-2} (-2)^{\chi_1 + 2} = 4^{n-1} (-2)^{\chi_1} \,.
\end{equation*}


\begin{figure}
\includegraphics[width=9cm]{reduce-1}
\caption{ Here we have a black vertex with two ribbons, both twisted
or both untwisted, adjacent at the same white vertex.
The reduction
glues these two ribbons together along their common side. The
result is a bipartite M\"obius graph with one more vertex and one
less edge. The resulting graph may or may not be disconnected at
this point. \label{reduce-1}}
\end{figure}


\item[\bf Case 2:] $X_{-1} = X_{-j}$ for $1 < j \leq n$,
\begin{multline*}
\mathbb{E}( \bar{X}_{-1} X_1 \cdots \bar{X}_{-j} X_j \cdots X_n ) \\=
\mathbb{E}( \bar{X}_{-1} \bar{X}_{-j} ) \mathbb{E}( \bar{X}_{j-1} X_{-j+1} \cdots \bar{X}_1
X_j \bar{X}_{-j-1} X_{j+1} \cdots X_n) \\
= (-2) \mathbb{E}( \bar{X}_{j-1} X_{-j+1} \cdots \bar{X}_1
X_j \bar{X}_{-j-1} X_{j+1} \cdots X_n) \,.
\end{multline*}
This corresponds to the reduction of the M\"obius graph pictured in Figure
\ref{reduce-2}, which reverses the order and the orientations of the
ribbons lying between the $\pm 1$ and $\pm j$ ribbons on one side of the
black vertex, and glues the two
edges adjacent at the white vertex together as shown.
As previously, the removed edges are adjacent at the white vertex. At the black vertex, the labeled lines used in the construction of the bipartite graph change from the sequence
$$
(-1,1),(-2,2),\dots,(-j+1,j-1),(-j,j),(-k_1,k_1),\dots,(-k_r,k_r)
$$
to the sequence
$$
(j-1,-j+1),\dots,(2,-2),(1,j),(-k_1,k_1),\dots,(-k_r,k_r),
$$
which then needs to be relabeled to use $\pm 1,\dots,\pm n$. Again the number of faces of the bipartite graph is preserved:
the face with edges
$$(\dots, 2, -1, -j,k_1,\dots)$$
becomes the face
$$
(\dots,2,1,j,k_1,\dots).
$$
The Euler
characteristic becomes $\chi_2 = v_1 - (e_1 -1) + f_1 = \chi_1 + 1$. By the
induction assumption we then find
\begin{equation*}
\mathbb{E}( \bar{X}_{-1} X_1 \cdots X_n) = (-2) \left[ 4^{(n-1) - 1} (-2)^{\chi_2}
\right] = (-2) 4^{n-2} (-2)^{\chi_1 + 1} = 4^{n-1} (-2)^{\chi_1} \,.
\end{equation*}


\begin{figure}
\includegraphics[width=9cm]{reduce-2}
\caption{ Here we have a black vertex with two ribbons, one twisted
and the other untwisted, adjacent at the same white vertex. The
reduction glues these two ribbons together along their common side.
The result is a bipartite M\"obius graph with one less edge, and
with the order and orientations of the ribbons on one side of the removed
ribbon reversed. \label{reduce-2}}
\end{figure}

\end{enumerate}

With these two cases checked, the proof is completed by induction.

\end{proof}

One should note that this is fundamentally the same proof as in the Wigner
case; however, in this case the geometric reduction is given by gluing together
two ribbons, while in the Wigner case the geometric reduction is the
elimination of one ribbon at a time. The inductive steps remain
the same.



\section{\label{duality} Duality between real and symplectic ensembles}
By $\mathcal{M}_{M\times N}(\mathbb{H})$ we denote the set of all $M\times N$ matrices with entries from $\mathbb{H}$.
For $\mathbf A\in \mathcal{M}_{M\times N}(\mathbb{H})$, the adjoint matrix $\mathbf A^*$ is defined by $(\mathbf A^*)_{i,j}:=\overline{A}_{j,i}$.
The trace is
${\rm tr}(\mathbf A)=
 \sum_{j=1}^N A_{jj}
$.
Since the traces ${\rm tr}(\mathbf A)$ may fail to commute, in the formulas we will use $\Re({\rm tr}(\mathbf A))$; compare
 \cite{Hanlon-Stanley-Stembridge-92}.

\subsection{Duality between GOE and GSE ensembles}

The Gaussian orthogonal ensembles consist of square symmetric matrices $\mathbf Z$
whose entries on and above the diagonal are independent (real) Gaussian random
variables; the off-diagonal entries have variance $1$, while the
diagonal entries have variance
$2$.
One may show the following.
\begin{theoremA} \label{thm3.1} \tolerance=2000
For $\mathbf Z$ from the $N\times N$ Gaussian orthogonal ensemble:
\begin{equation*}
\frac{1}{N^{n-m}} \mathbb{E}( {\rm tr}(\mathbf Z^{j_1}) {\rm tr}(\mathbf Z^{j_2}) \dots
{\rm tr}(\mathbf Z^{j_m}) ) = \sum_{\Gamma} N^{\chi(\Gamma)} \,,
\end{equation*}
where the sum is over labeled M\"obius graphs $\Gamma$ with $m$
vertices of degrees $j_1, j_2, \dots, j_m$, $\chi(\Gamma)$ is the Euler
characteristic, and $j_1 + j_2 + \dots + j_m = 2n$.
More generally, if $\mathbf Z_1, \dots, \mathbf Z_s$ are independent $N\times N$ GOE
matrices and $t: \{ 1, 2, \dots, 2n\} \to \{ 1, \dots, s\}$ is fixed,
then
\begin{multline*}
\frac{1}{N^{n-m}} \mathbb{E}( {\rm tr}( \mathbf Z_{t(1)} \dots \mathbf Z_{t(\beta_1)} )
{\rm tr}( \mathbf Z_{t(\alpha_2)} \dots \mathbf Z_{t(\beta_2)} ) \times \dots \\ \dots\times
{\rm tr}( \mathbf Z_{t(\alpha_m)} \dots \mathbf Z_{t(\beta_m)}) )
= \sum_{\Gamma} N^{\chi(\Gamma)} \,,
\end{multline*}
where $\alpha_1 = 1$, $\alpha_k = j_1 + j_2 + \dots +
 j_{k-1} + 1 $, and
$\beta_k = j_1 + j_2 + \dots + j_k $ denote the ranges under the
traces, and where the sum is over labeled color-preserving M\"obius
graphs $\Gamma$ with vertices of degrees $j_1, j_2,\dots, j_m$ whose edges are
colored by the mapping $t$.
If there are no $\Gamma$ that are consistent with the coloring, we
interpret the sum as being $0$.
\end{theoremA}
The single-color version of this theorem was given in
\cite{Goulden-Jackson-97}.
\tolerance=1000

The Gaussian symplectic ensembles (GSE)
consist of square self-adjoint matrices
\begin{equation}
 \label{GUE} \mathbf Z=\left[Z_{i,j}\right],
\end{equation}
where $\{Z_{i,j}:i