diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziygl" "b/data_all_eng_slimpj/shuffled/split2/finalzziygl" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziygl" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nRadio-Frequency (RF) waves can be utilized for transmission of both information and power simultaneously. As one of the primary works in the information theory literature, Varshney studied this problem in \\cite{Varshney_2008}, in which he characterized the capacity-power function for a point-to-point discrete memoryless channel (DMC). He showed the existence of tradeoff between the information rate and the delivered power for some channels, such as, point-to-point binary channels and amplitude constraint Gaussian channels. Recent results in the literature have also revealed that in many scenarios, there is a tradeoff between information rate and delivered power. Just to name a few, frequency-selective channel \\cite{Grover_Sahai_2010}, MIMO broadcasting \\cite{Zhang_Keong_2013}, interference channel \\cite{Park_Clerckx_2013,Park_Clerckx_2014}, relaying \\cite{Nasir_Zhou_Durrani_Kennedy_2013,Huang_Clerckx_2016_2}.\n\nOne of the major efforts in a Simultaneous Wireless Information and Power Transfer (SWIPT) architecture is to increase the Direct-Current (DC) power at the output of the harvester without increasing transmit power. The harvester, known as rectenna, is composed of a rectifier\\footnote{In the literature, the rectifier is usually considered as a diode, which is the main source of nonlinearity induced in the system.} followed by a low-pass filter. In \\cite{Trotter_Griffin_Durgin_2009,Clerckx_Bayguzina_2016}, it is shown that the RF-to-DC conversion efficiency is a function of rectenna's structure, as well as its input waveform. Accordingly, in order to maximize rectenna's DC power output, a systematic waveform design is crucial to make the best use of an available RF spectrum. In \\cite{Clerckx_Bayguzina_2016}, an analytical model for rectenna's output is introduced via the Taylor expansion of the diode characteristic function. As one of the main conclusions, it is shown that the rectifier's nonlinearity is key to design efficient wireless powered systems.\n\nThe design of an efficient SWIPT architecture fundamentally relies on designing an efficient Wireless Power Transfer (WPT) structure as an important building\nblock of SWIPT. The SWIPT literature has so far focused on the linear model of the rectifier, e.g., \\cite{Grover_Sahai_2010,Zhang_Keong_2013,Park_Clerckx_2013,Park_Clerckx_2014,Nasir_Zhou_Durrani_Kennedy_2013,Huang_Clerckx_2016_2}, whereas, it is expected that considering nonlinearity effect changes the SWIPT design, signalling and architecture significantly. Indeed, in \\cite{Clerckx_2016,Clerckx_2016_Proc}, the design of SWIPT waveforms is studied accounting for rectenna's nonlinearity with a power splitter at the receiver. It is shown that superposing deterministic multisines (for power transfer purposes) with Orthogonal Frequency Division Multiplexing (OFDM) symbols modulated with Circularly Symmetric Complex Gaussian (CSCG) zero-mean inputs (for information purposes) enlarges the Rate-Power (R-P) region, compared to merely zero-mean inputs. This highlights the potential and benefits of departing from conventional CSCG inputs in SWIPT.\n\nLeveraging the aforementioned observations, we provide a step closer at identifying the fundamental limits of SWIPT accounting for the nonlinearity of rectenna. 
In this paper, we study a flat-fading Additive White Gaussian Noise (AWGN) channel for SWIPT. Taking advantage of the approximation for rectenna's nonlinear output introduced in \\cite{Clerckx_Bayguzina_2016}, we obtain the general form of the delivered power in terms of system baseband parameters. Assuming that the receiver jointly extracts information and harvests power from the received RF signal,\\footnote{We note that, leveraging the results in thermodynamics of computing, it is demonstrated that energy need not be dissipated in the decoding process. This is because performing mathematical work does not, in principle, require energy \\cite[Ch. 5]{Feynman_1998}. In particular, decoders that are reversible computational devices would not dissipate any energy \\cite{Landauer_1987} and electronic circuits that are almost thermodynamically reversible have been built \\cite{Frank_thesis}. Motivated by this, we also assume that at the receiver, the decoder is able to jointly harvest power and extract information from the received RF signal.} it is shown that the delivered power at the receiver depends on the first- to fourth-moment statistics of the channel input distribution. Considering the optimization problem of maximizing the R-P region, we obtain an achievable scheme as an inner bound for the general problem. The scheme is based on constraining the channel inputs to independent and identically distributed (i.i.d.) distributions that are determined by their first and second moment statistics. For the studied inner bound, we show that there is a tradeoff between the delivered power at the receiver and the rate of the received information. This result stands in contrast to the scenario in which a linear model is considered for the power harvester at the receiver. It can be easily verified that under a linear model for the power harvester, the goals of maximum rate and maximum energy are aligned in the flat-fading channel. Additionally, we show that the maximum rate-power (for the studied inner bound) is achieved when the channel input distribution is zero-mean Gaussian, however with different (asymmetric) power allocations to the real and imaginary dimensions.\n\n\\textit{Organization}: In Section \\ref{Sec_System_Model}, we introduce the system model. In Section \\ref{Sec_Delivered_power}, the delivered power at the receiver is obtained in terms of system baseband parameters, accounting for the approximation of the rectenna's nonlinearity. In Section \\ref{Sec_Problem_statement}, we introduce the problem considered in this paper, and accordingly, in Section \\ref{Sec_Main_Result}, we obtain an achievable scheme as an inner bound for the general optimization problem. In Section \\ref{Sec_Conclusion}, we conclude the paper.\n\n\\textit{Notation}: Throughout this paper, the standard CSCG distribution is denoted by $\\mathcal{CN}(0,1)$. The complex conjugate of a complex number $c$ is denoted by $c^{*}$. For a random process $X(t)$, the corresponding random variable at time index $n$ is represented by $X_n$. The operators $\\mathbb{E}[\\cdot]$ and $\\mathcal{E}[\\cdot]$ denote the expectation over statistical randomness and the average over time, respectively. $\\Re\\{\\cdot\\}$ and $\\Im\\{\\cdot\\}$ are the real and imaginary operators, respectively. We use the notations $\\mathrm{sinc}(t)=\\frac{\\sin(\\pi t)}{\\pi t}$ and $s_l=\\text{sinc}(l+1\/2)$ for integer $l$. 
We also define $\\delta_l$ as\n\\begin{align}\n\\delta_l=\\left\\{\\begin{array}{ll}\n 1 & l=0 \\\\\n 0 & l\\neq 0\n\\end{array}\\right..\n\\end{align}\n\n\\section{System Model}\\label{Sec_System_Model}\nConsidering a point-to-point flat-fading AWGN channel, in the following, we explain the operation of the transmitter and the receiver.\n\n\\subsection{Transmitter}\nAt the transmitter, the signal $X(t)$ is produced as\n\\begin{align}\nX(t)=\\sum_{n}X_n\\text{sinc}(f_wt-n),\n\\end{align}\nwhere $X_n$ is an information-power symbol at time index $n$, modelled as a random variable, which is produced in an i.i.d. fashion and $X(t)$ is with bandwidth $[-f_w\/2,f_w\/2]$. Next, the signal $X(t)$ is upconverted to the carrier frequency $f_c$ and is sent over the channel.\n\n\\subsection{Receiver}\nThe filtered received RF waveform at the receiver is modelled as\n\\begin{align}\n Y_{\\text{rf}}(t) &=\\sqrt{2}\\Re\\left\\{Y(t)e^{j2\\pi f_ct}\\right\\},\n\\end{align}\nwhere $Y(t)$ is the baseband equivalent of the channel output with bandwidth $[-f_w\/2,f_w\/2]$. We assume that $f_c>2f_w$.\n\n\\textit{Power}: At the receiver, the power of the RF signal $Y_{\\text{rf}}(t) $ is delivered through the rectenna. In the following, we leverage the approximation for rectenna's output introduced in \\cite{Clerckx_Bayguzina_2016}\\footnote{According to \\cite{Clerckx_Bayguzina_2016}, due to the presence of a diode in rectenna's structure, its output current is an exponential function, which is approximated by expanding its Taylor series. The approximation used here, is the fourth moment truncation of Taylor series, in which the first and third moments are zero with respect to the time averaging.}. Accordingly, the delivered power (denoted by $P_{\\text{del}}$) is modelled as\\footnote{According to \\cite{Clerckx_Bayguzina_2016}, rectenna's output in (\\ref{eqn:1}) is in the form of current with unit Ampere. However, since power is proportional to current, with abuse of notation, we refer to the term in (\\ref{eqn:1}) as power.}\n\\begin{align}\\label{eqn:1}\nP_{\\text{del}}=\\mathbb{E}\\mathcal{E}[k_2Y_{\\text{rf}}(t)^2 + k_4 Y_{\\text{rf}}(t)^4],\n\\end{align}\nwhere $k_2$ and $k_4$ are constants. Note that, in the linear model for the delivered power $P_{\\text{del}}$, in (\\ref{eqn:1}), we have only the second moment of the received RF signal $Y_{\\text{rf}}(t)$. Validating through circuit simulations in \\cite{Clerckx_Bayguzina_2016}, it is shown that the linear model is inaccurate and inefficient from a signal design perspective.\n\n\\textit{Information}: The signal $Y_{\\text{rf}}(t) $ is downconverted producing the baseband signal $Y(t)$ given as \\footnote{We model the baseband equivalent channel impulse response as $H(\\tau,t)=\\sum_{i}a_i^b(t)\\delta(\\tau-\\tau_i(t))+W(t)$ where $\\alpha_i^b(t)$, $\\tau_i(t)$ are the channel coefficient and delay of path $i$.}\n\\begin{align}\nY(t)=\\sum_{i}a_i^b(t)X(t-\\tau_i(t))+W(t).\n\\end{align}\nNext, $Y(t)$ is sampled with a sampling frequency $f_w$ producing $Y_{m}=Y(m\/f_w)$ given as\n\\begin{align}\\label{eqn:20}\nY_m&=X_m \\sum_{i}a_i^b(m\/f_w)+W_m,\n\\end{align}\nwhere in (\\ref{eqn:20}), we used $\\tau_i(m\/f_w)f_w\\approx 0$ because the channel is flat-fading. $W_m$ and $X_m$ represent samples of the additive noise $W(t)$ and the signal $X(t)$ at time $t=m\/f_w$, respectively.\n\nWe model $W_m$ as an i.i.d. and CSCG random variable with variance $\\sigma_w^2$, i.e., $W_m\\sim \\mathcal{CN}(0,\\sigma_w^2)$. 
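As an illustrative aside, the truncated rectenna model in (\\ref{eqn:1}) can be evaluated numerically by replacing the expectation and time average with an empirical mean over samples of $Y_{\\text{rf}}(t)$. The Python sketch below does exactly this; the diode coefficients and the RF samples are placeholders rather than values calibrated to a specific rectenna circuit.
\\begin{verbatim}
import numpy as np

# Placeholder diode coefficients (illustrative only, not calibrated
# to a particular rectenna circuit).
k2, k4 = 0.0034, 0.3829

rng = np.random.default_rng(0)
y_rf = rng.standard_normal(200000)   # stand-in samples of Y_rf(t)

# Fourth-order truncation of the rectenna output, as in (eqn:1):
# the expectation and time average are replaced by an empirical mean.
P_del = k2 * np.mean(y_rf**2) + k4 * np.mean(y_rf**4)
print(P_del)   # ~ k2*E[Y^2] + 3*k4*E[Y^2]^2 for zero-mean Gaussian samples
\\end{verbatim}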
We assume that both the transmitter and the receiver know the channel gain, namely $h(t)=\\sum_{i}a_i^b(t)$ at times $t=m\/2f_w$ for integer $m$, which is assumed to be fixed over all transmissions. Throughout the paper, since the transmitted symbols $X_m$ and the noise $W_m$ are i.i.d., we drop the index $m$ for $X_m,~W_m$. We also define $h=\\sum_i a_i^b((2m)\/(2f_w))$ and $\\tilde{h}=\\sum_i a_i^b((2m+1)\/(2f_w))$. Note that $h$ and $\\tilde{h}$ are assumed to be fixed; however, they are not necessarily equal. Therefore, (\\ref{eqn:20}) reads\n\\begin{align}\\label{eqn:34}\nY=h X + W.\n\\end{align}\nNote that in (\\ref{eqn:34}), only even samples of the channel, i.e., $h$, are involved.\n\n\\section{Delivered power}\\label{Sec_Delivered_power}\nIn this section, we study the power delivered at the receiver. Note that most communication processes, such as coding\/decoding and modulation\/demodulation, are performed at baseband. Therefore, from a communication system design point of view, it is preferable to have a baseband-equivalent representation of the system. Hence, in the following proposition, we derive the delivered power $P_{\\text{del}}$ at the receiver in terms of system baseband parameters.\n\n\\begin{prop}\\label{Prop1}\nAssuming the channel input distributions are i.i.d., the delivered power $P_{\\text{del}}$ at the receiver can be expressed in terms of system baseband parameters as\n\\begin{align}\\label{eqn:22}\nP_{\\text{del}}=\\alpha Q+\\tilde{\\alpha}\\tilde{Q}+(\\beta +\\tilde{\\beta})P+\\gamma,\n\\end{align}\nwhere $\\tilde{Q}$ is given by\n\\begin{align}\\nonumber\n\\tilde{Q}&=\\frac{1}{3}\\big(Q_{r}+Q_{i}+2(\\mu_{r}T_{r}+\\mu_{i}T_{i})\\\\\n&+6P_{r}P_{i}+6P_{r}(P_{r}-\\mu_{r}^{2})+6P_{i}(P_{i}-\\mu_{i}^{2})\\big),\n\\end{align}\nwhere the parameters $\\alpha,~\\tilde{\\alpha},~\\beta,~\\tilde{\\beta}$ and $\\gamma$ are given as\n\\begin{align}\\label{eqn:32}\n\\alpha&=\\frac{3k_4}{4f_w}|h|^4,\\\\\n\\tilde{\\alpha}&=\\frac{3k_4}{4f_w}|\\tilde{h}|^4,\\\\\n \\beta&=\\frac{1}{f_w}\\left(k_2+6k_4\\sigma_w^2\\right)|h|^2, \\\\\n \\tilde{\\beta}&=\\frac{1}{f_w}\\left(k_2+6k_4\\sigma_w^2\\right)|\\tilde{h}|^2, \\\\\\label{eqn:33}\n \\gamma&=\\frac{1}{f_w}(k_2\\sigma_w^2+3k_4\\sigma_w^4),\n\\end{align}\nand $Q=\\mathbb{E}[|X|^4]$, $T=\\mathbb{E}[|X|^3]$, $P=\\mathbb{E}[|X|^2]$, $\\mu=\\mathbb{E}[X]$. Similarly, $Q_r=\\mathbb{E}[\\Re\\{X\\}^4]$, $T_r=\\mathbb{E}[\\Re\\{X\\}^3]$, $P_r=\\mathbb{E}[\\Re\\{X\\}^2]$, $\\mu_r=\\mathbb{E}[\\Re\\{X\\}]$ and $Q_i=\\mathbb{E}[\\Im\\{X\\}^4]$, $T_i=\\mathbb{E}[\\Im\\{X\\}^3]$, $P_i=\\mathbb{E}[\\Im\\{X\\}^2]$, $\\mu_i=\\mathbb{E}[\\Im\\{X\\}]$.\n\\end{prop}\n\\textit{Proof}: See Appendix \\ref{app:1}.\n\n\\begin{rem}\nWe note that obtaining a closed-form expression for the delivered power $P_{\\text{del}}$ at the receiver when the channel inputs are not i.i.d. is cumbersome. This is due to the fact that the fourth moment of the received signal $Y_{\\text{rf}}(t)$ creates dependencies of the statistics of the present channel input on the statistics of the channel inputs at other time indices (see, e.g., eq. (\\ref{eqn:9}) and eq. (\\ref{eqn:30}) in Appendix \\ref{app:1}).\n\\end{rem}\n\n\\section{Problem statement}\\label{Sec_Problem_statement}\n\nWe aim at maximizing the rate of the received information, as well as the amount of power delivered at the receiver. 
Accordingly, the optimization problem we consider, is the maximization of mutual information between the channel input $X$ and the channel output $Y$ under a given power constraint at the transmitter and a minimum delivered power constraint at the receiver. Hence, for the optimization problem, we have\n\\begin{equation}\\label{eqn:13}\n\\begin{aligned}\n& \\underset{ p_{X}(x)}{\\text{sup}}\n& & I\\left(X;Y\\right) \\\\\n& \\text{s.t.}\n& & \\left\\{\\begin{array}{l}\n P\\leq P_{a} \\\\\n P_{\\text{del}}\\geq P_d\n \\end{array}\\right.,\n\\end{aligned}\n\\end{equation}\nwhere $\\sup$ is taken over all input distributions $p_{X}(x)$ satisfying the constraints in (\\ref{eqn:13}). $P_{a}$ is the available power budget at the transmitter and $P_d$ is the minimum amount of power that is to be delivered to the receiver.\n\n\\begin{rem}\nWe note that, for the problem in (\\ref{eqn:13}), if the second constraint (the minimum delivered power at the receiver) is represented via a linear model, i.e., $\\mathbb{E}[|Y|^2]\\geq P_d$, the maximum is achieved using a CSCG input distribution. It can also be verified easily that there is no tradeoff between the received information rate and delivered power at the receiver.\n\\end{rem}\n\n\\section{Main Result}\\label{Sec_Main_Result}\nIn this section, we obtain an inner bound for the problem in (\\ref{eqn:13}) by constraining the input distributions to those that are determined by their first and second moment statistics\\footnote{This assumption is justified due to the fact that in practice, most of the modulation schemes are i.i.d. and are fully characterized by the knowledge of the first and second moment statistics only.}. We show that for the considered scenario, there is a tradeoff between the rate of the transmitted information, namely $I(X;Y)$ and delivered power $P_{\\text{dc}}$ at the receiver and accordingly, we characterize the tradeoff.\n\n\\begin{prop}\\label{Prop2}\nWhen a channel input distribution $p_{X}(x)$ is completely determined by its first and second moment statistics, the supremum in (\\ref{eqn:13}) is achieved by a zero mean Gaussian distribution as the channel input, i.e., $\\Re\\{X\\}\\sim\\mathcal{N} (0,P_r)$, and $\\Im\\{X\\}\\sim \\mathcal{N}(0,P_i)$, where $P=P_r+P_i=P_a$. Furthermore, let $P_{\\text{dc,max}}=3(\\alpha+\\tilde{\\alpha}) {P_{a}}^2+(\\beta+\\tilde{\\beta}) P_{a} +\\gamma$ and $P_{\\text{dc,min}}=2(\\alpha+\\tilde{\\alpha}) {P_{a}}^2+(\\beta+\\tilde{\\beta}) P_{a} +\\gamma$ be the maximum and minimum delivered power at the receiver, respectively. For $P_d=P_{\\text{dc,max}}$, the maximum in (\\ref{eqn:13}) is attained by $P_i=0,P_r=P_{a}$ or $P_i=P_a,P_r=0$. For $P_d=P_{\\text{dc,min}}$, the maximum in (\\ref{eqn:13}) is attained by $P_i=P_a\/2,P_r=P_a\/2$. 
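As a quick numerical check of the extreme points stated above (a minimal sketch assuming zero-mean Gaussian inputs with independent real and imaginary parts, and purely illustrative values for $\\alpha,~\\tilde{\\alpha},~\\beta,~\\tilde{\\beta},~\\gamma$ and $P_a$), the snippet below evaluates (\\ref{eqn:22}) as a function of the power split $P_r+P_i=P_a$ and recovers $P_{\\text{dc,max}}$ for the one-sided allocations and $P_{\\text{dc,min}}$ for the symmetric allocation.
\\begin{verbatim}
# Illustrative constants (placeholders, not tied to a specific rectenna).
alpha, alpha_t, beta, beta_t, gamma, Pa = 1.0, 1.0, 0.5, 0.5, 0.1, 1.0

def P_del(Pr, Pi):
    # Zero-mean Gaussian inputs: Q_r = 3 Pr^2, Q_i = 3 Pi^2, odd moments
    # vanish, so Q and Q_tilde both reduce to 3 Pr^2 + 3 Pi^2 + 2 Pr Pi.
    Q = Qt = 3 * Pr**2 + 3 * Pi**2 + 2 * Pr * Pi
    return alpha * Q + alpha_t * Qt + (beta + beta_t) * (Pr + Pi) + gamma

print(P_del(Pa, 0.0))         # = 3(alpha+alpha_t)Pa^2 + (beta+beta_t)Pa + gamma
print(P_del(Pa / 2, Pa / 2))  # = 2(alpha+alpha_t)Pa^2 + (beta+beta_t)Pa + gamma
\\end{verbatim}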
For $P_{\\text{dc,min}}5$, the diffusion in PCN seems to be consistent with a power law, $\\alpha_{\\mathcal{C}}(\\widetilde{t}) \\sim \\widetilde{t}^{-\\beta}$, where in our case the characteristic exponent is $\\beta\\simeq 1.1$.\nIt is worth mentioning that we achieved the very same results for PCN constructed without considering the lower filter at 4 \\AA{}, that is, by considering all contacts within 8 \\AA{} (data not shown).\n\nSimilar anomalies of functional form have been observed in the (cumulative) distribution of many experimental time series, especially in those related to financial markets \\cite{kwapien2012physical}.\nThis phenomenon might happen when the functional form is consistent with one of the $q$-exponentials family, which originated in the field of non-extensive statistical mechanics \\cite{tsallis2001nonextensive}.\nIn the case of PCN, this behavior is the signature of a crucial physical property of proteins, i.e., the energy flow.\nEnergy flow in proteins mimics the transport in a three-dimensional percolation cluster \\cite{doi:10.1146\/annurev.physchem.59.032607.093606}: energy flows readily between connected sites of the cluster and only slowly between non connected sites.\nThis experimentally validated double regime seems to be captured by the HT decay trend shown in Fig. \\ref{fig:scaling_HT_slopes_time__P}. \nWe stress that this result is elaborated from the herein exploited minimalistic PCN model, so confirming the relevance of graph-based representations in protein science.\n\\begin{figure*}[ht!]\n\\centering\n\n\\subfigure[Linear fitting slopes of HT over time.]{\n\\includegraphics[viewport=0 0 340 247,scale=0.6,keepaspectratio=true]{.\/scaling_HT_slopes_time_log_log.pdf}\n\\label{fig:scaling_HT_slopes_time}}\n~\n\\subfigure[Linear fitting slopes of HT over time (PCN only).]{\n\\includegraphics[viewport=0 0 340 247,scale=0.6,keepaspectratio=true]{.\/scaling_HT_slopes_time_log_log_PROTONLY.pdf}\n\\label{fig:scaling_HT_slopes_time__P}}\n\n\\subfigure[Linear fitting slopes of HT over time (MN only).]{\n\\includegraphics[viewport=0 0 340 247,scale=0.6,keepaspectratio=true]{.\/scaling_HT_slopes_time_log_log_MNONLY.pdf}\n\\label{fig:scaling_HT_slopes_time__M}}\n~\n\\subfigure[Linear fitting slopes of HT over time (GEN only).]{\n\\includegraphics[viewport=0 0 340 247,scale=0.6,keepaspectratio=true]{.\/scaling_HT_slopes_time_log_log_GEONLY.pdf}\n\\label{fig:scaling_HT_slopes_time__G}}\n\n\\caption{Scaling of HT linear best fitting slopes over time (time is sampled in 1000 equally-spaced points between 0 and 100).}\n\\label{fig:scaling_HT_slopes}\n\\end{figure*}\n\nNow let us consider the results for the HC (Figs. \\ref{fig:scaling_HC_t1}, \\ref{fig:scaling_HC_t5}, and \\ref{fig:scaling_HC_t9}). Those three figures depict the scaling of the HC over the network size, considering the information of the entire HM. Notably, PCN and PCN-S are the only network types showing a consistent linear scaling with the size for all time instants. Other networks are not well-described by a linear fit as the time increases.\nFinally, in Fig. \\ref{fig:scaling_HCI} we show the scaling of the first HCI coefficient with the vertices (please note that for $m=1$, Eq. \\ref{eq:hci} yields negative values). Of notable interest is the fact that PCN denote a nearly constant trend.\nThis means that, since the HCIs are time-independent features synthetically describing the HC information, PCN denote a similar characteristic in this respect, as in fact HC scaling in Fig. 
\\ref{fig:scaling_HC} is consistently preserved over time.\n\nIn Fig. \\ref{fig:HM_time} we offer a visual representation of the heat diffusion pattern over time that is observable through the entire HM.\nWe considered two exemplar networks of exactly the same size: the ``JW0058'' protein and the synthetic counterpart belonging to PCN-S that we denote here as ``JW0058-SYNTH''.\nAs discussed before, PCN are characterized by a highly modular and fractal structure, while the considered synthetic counterpart exhibits a typical small world topology.\nAccordingly, by comparing the diffusion occurring on the two networks over time, it is possible to recognize significantly different patterns that were not noted in the scalings of Fig. \\ref{fig:scaling_HC}.\nOf course, initially ($t=1$) the heat is mostly concentrated in the vertices, which results in a very intense trace.\nAs the time increases, the diffusion pattern for the real protein is more evident and also persistent.\nThis is in agreement with recent laboratory experiments \\cite{doi:10.1146\/annurev.physchem.59.032607.093606,lervik2010heat}, which demonstrated that diffusion in proteins proceeds slower than normal diffusion.\nConversely, the diffusion for JW0058-SYNTH is in general faster since in fact the trace vanishes quickly.\nIn graph-theoretical terms, this means that the spectral gap of PCN dominates Eq. \\ref{eq:heat_matrix} as $t$ becomes large.\nThis result suggests us that, in future research studies, it could be interesting to devote focused attention to the properties of the spectral density of the ensembles.\nAnalogue results have been obtained by considering the other network types; we do not show them for the sake of brevity.\n\\begin{figure*}[ht!]\n\\centering\n\n\\subfigure[Scaling of HC for $t=1$.]{\n\\includegraphics[viewport=0 0 351 245,scale=0.62,keepaspectratio=true]{.\/scaling_HC_t1.pdf}\n\\label{fig:scaling_HC_t1}}\n~\n\\subfigure[Scaling of HC for $t=5$.]{\n\\includegraphics[viewport=0 0 351 245,scale=0.62,keepaspectratio=true]{.\/scaling_HC_t5.pdf}\n\\label{fig:scaling_HC_t5}}\n\n\\subfigure[Scaling of HC for $t=9$.]{\n\\includegraphics[viewport=0 0 351 245,scale=0.62,keepaspectratio=true]{.\/scaling_HC_t9.pdf}\n\\label{fig:scaling_HC_t9}}\n~\n\\subfigure[Scaling of HCI (first coefficient).]{\n\\includegraphics[viewport=0 0 351 245,scale=0.62,keepaspectratio=true]{.\/scaling_HCI.pdf}\n\\label{fig:scaling_HCI}}\n\n\\caption{Scaling of HC and HCI over network size.}\n\\label{fig:scaling_HC}\n\\end{figure*}\n\\begin{figure*}[ht!]\n\\centering\n\n\\subfigure[HM for JW0058 at $t=1$.]{\n\\includegraphics[viewport=0 0 1043 634,scale=0.205,keepaspectratio=true]{.\/HM_JW0058_t1.pdf}\n\\label{fig:HM_JW0058_t1}}\n~\n\\subfigure[HM for JW0058-SYNTH at $t=1$.]{\n\\includegraphics[viewport=0 0 1037 615,scale=0.21,keepaspectratio=true]{.\/HM_JW0058_SYNTH_t1.pdf}\n\\label{fig:HM_TY_t1}}\n\n\\subfigure[HM for JW0058 at $t=5$.]{\n\\includegraphics[viewport=0 0 1037 615,scale=0.21,keepaspectratio=true]{.\/HM_JW0058_t5.pdf}\n\\label{fig:HM_JW0058_t5}}\n~\n\\subfigure[HM for JW0058-SYNTH at $t=5$.]{\n\\includegraphics[viewport=0 0 1043 634,scale=0.205,keepaspectratio=true]{.\/HM_JW0058_SYNTH_t5.pdf}\n\\label{fig:HM_TY_t5}}\n\n\\subfigure[HM for JW0058 at $t=9$.]{\n\\includegraphics[viewport=0 0 1037 615,scale=0.21,keepaspectratio=true]{.\/HM_JW0058_t9.pdf}\n\\label{fig:HM_JW0058_t9}}\n~\n\\subfigure[HM for JW0058-SYNTH at $t=9$.]{\n\\includegraphics[viewport=0 0 1037 
615,scale=0.21,keepaspectratio=true]{.\/HM_JW0058_SYNTH_t9.pdf}\n\\label{fig:HM_TY_t9}}\n\n\\subfigure[HM for JW0058 at $t=15$.]{\n\\includegraphics[viewport=0 0 1037 615,scale=0.21,keepaspectratio=true]{.\/HM_JW0058_t15.pdf}\n\\label{fig:HM_JW0058_t15}}\n~\n\\subfigure[HM for JW0058-SYNTH at $t=15$.]{\n\\includegraphics[viewport=0 0 1037 615,scale=0.21,keepaspectratio=true]{.\/HM_JW0058_SYNTH_t15.pdf}\n\\label{fig:HM_TY_t15}}\n\n\\caption{HM diffusion pattern over time for protein JW0058 and its synthetic counterpart.}\n\\label{fig:HM_time}\n\\end{figure*}\n\n\n\\clearpage\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nIn this paper we have investigated the structure of three types of complex networks: protein contact networks, metabolic networks, and gene regulatory networks, together with simulated archetypal models acting as probes.\nWe biased the study toward protein contact networks, highlighting their peculiar structure with respect to the other networks.\nOur analysis focused on ensemble statistics, that is, we analyzed features computed over several instances of varying size of such networks.\nWe considered two main network characterizations: the first based on classical topological descriptors and the second exploiting several invariants extracted from the discrete heat kernel.\nWe found strong statistical agreement between these two representations, which allowed for a consistent interpretation of the results in terms of principal component analysis.\nOur major result was the demonstration of a double regime characterizing a (simulated) diffusion process in the considered protein contact networks.\nAs shown by laboratory experiments, both energy flow and vibration dynamics in proteins exhibit subdiffusive properties, i.e., slower-than-normal diffusion \\cite{doi:10.1146\/annurev.physchem.59.032607.093606}.\nThe notable difference in the diffusion pattern between real proteins and the herein considered simulated polymers (whose contact networks have the same local structure as the corresponding real proteins) points to a peculiar mesoscopic organization of proteins going beyond the pure backbone folding.\nThe observed correlations between MOD and HT indicate that this principle is at work in the presence of well-characterized domains.\nThe novelty of our results is that we were able to demonstrate such a well-known property of proteins by exploiting graph-based representations and computational tools only.\nThe fact that the observed properties emerged with no explicit reference to the chemico-physical characterization of proteins, hence relying on purely topological properties, suggests the existence of general universal mesoscopic principles fulfilling the hopes expressed by Laughlin \\textit{et al.} \\cite{laughlin2000middle}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRepeated studies have illustrated that neural networks can be trained to solve inverse problems in imaging, including problems such as image reconstruction in MRI, inpainting, superresolution, deblurring, and more. Recent reviews and tutorials on this topic \\cite{arridge2019solving,ongie2020deep} have described various approaches to this problem. 
\\change{For concreteness, we focus on the case of \\emph{linear} inverse problems in imaging.} In the general framework of interest, an unknown\n$n$-pixel image (in vectorized form) $x \\in \\mathbb{R}^n$ (or $\\mathbb{C}^n$) is observed via $m$ noisy \\change{linear} measurements \n$y\\in\\mathbb{R}^m$ (or $\\mathbb{C}^m$) according to the model\n\\begin{equation}\\label{eq:invprob0}\ny=A_0x+\\varepsilon,\n\\end{equation}\nwhere the matrix $A_0 \\in \\mathbb{R}^{m\\times n}$ (or $\\mathbb{C}^{m\\times n})$ is the {\\em forward model} and $\\varepsilon$ represents a vector of noise. The goal is to recover $x$ from $y$.\n\n\\begin{figure*}[tb]\n\\centering\n\\setlength{\\tabcolsep}{0pt}\n\\begin{tabular}{cccc}\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/revisedbigfig\/mri\/nomask.png}}\\hspace{0.1em} &\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/revisedbigfig\/mri\/initial.png}}\\hspace{0.1em} &\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/revisedbigfig\/mri\/newmodel.png}} \\hspace{0.1em} &\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/revisedbigfig\/mri\/rnr.png}} \\\\\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/newbigfig\/motionblur\/ground_truth}}\\hspace{0.1em} &\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/newbigfig\/motionblur\/original_30.679604}}\\hspace{0.1em} &\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/newbigfig\/motionblur\/newmodel_21.416325}}\\hspace{0.1em} &\n\\subfloat{\\includegraphics[width = 0.245\\textwidth]{figures\/newbigfig\/motionblur\/PnP_29.681627}} \\\\\n\\small (a) Ground truth & \\small (b) No model drift & \\small (c) Model drift & \\begin{minipage}{0.25\\linewidth}\\small (d) Model drift w\/model adaptation \\end{minipage}\n\\end{tabular} \\\\\n\\caption{Small perturbations in measurements for deep learning-based image reconstruction operators can lead to both subtle and obvious artifacts in reconstructions across problems and domains. In the top row, we present results for undersampled MRI reconstruction of knee images, and the second row illustrates deblurring images of human faces.\n(a) Ground truth image. (b) No model drift. Training and test data correspond to same model, $A_0$, yielding accurate reconstruction via learned model. (c) Model drift but no model adaptation. Training assumes model $A_0$ but at test time we have model $A_1$. Reconstruction using trained network {\\em without model adaptation} gives significant distortions. (d) Model drift and model adaptation. Training assumes model $A_0$ but at test time we have model $A_1$. Reconstruction using model adaptation prevents distortions and compares favorably to the setting without model drift. The MRI example demonstrates our Reuse and Regularize method (Alg. \\ref{alg:rnr}), and the deblurring example demonstrates our Parameterize and Perturb method (Alg. \\ref{alg:pnp}). Experimental details are in Section~\\ref{sec:exp}.}\n\\label{fig:fig1}\n\\end{figure*}\n\nIn this paper, we focus on the setting in which the forward model $A_0$ is known and used during training. Past work has illustrated that leveraging knowledge of $A_0$ during training can reduce the sample complexity \\cite{gilton2019neumann}. This paradigm is particularly common in applications such as medical imaging, where $A_0$ represents a model of the imaging system. For instance, in magnetic resonance imaging (MRI), $A_0$ reflects which k-space measurements are collected. 
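To fix ideas, the following minimal sketch (a simplified single-coil, Cartesian-undersampling setup with hypothetical dimensions and noise level) simulates measurements $y=A_0x+\\varepsilon$ by composing a 2-D discrete Fourier transform with a binary k-space sampling mask; in a practical MRI system, $A_0$ would additionally account for coil sensitivities and other calibration factors.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # hypothetical image side length
x = rng.standard_normal((n, n))          # stand-in for a ground-truth image

# Cartesian undersampling: keep a random subset of k-space columns.
mask = np.zeros((n, n), dtype=bool)
mask[:, rng.choice(n, size=n // 4, replace=False)] = True

def A0(img):
    # Forward model: 2-D FFT followed by k-space subsampling.
    return np.fft.fft2(img, norm='ortho')[mask]

noise = 0.01 * (rng.standard_normal(mask.sum())
                + 1j * rng.standard_normal(mask.sum()))
y = A0(x) + noise                        # measurements y = A_0 x + eps
\\end{verbatim}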
\n\n\n\n\nUnfortunately, these methods can be surprisingly fragile in the face of {\\em model drift}, which occurs when, at test time, we are provided samples of the form \n\\begin{equation}\n\\label{eq:invprob1}\ny = A_1 x+\\varepsilon'\n\\end{equation}\nfor some new forward model $A_1 \\neq A_0$ \\change{and\/or a change in the noise distribution (\\emph{i.e.},~ the noise $\\varepsilon'$ is distributed differently than $\\varepsilon$).} That is, \nassume we have trained a solver that is a function of both the original forward model $A_0$ and a learned neural network. \nOne might try to reconstruct $x$\nfrom $y$ using this solver, but it will perform poorly because it is using a misspecified model ($A_0$ instead of $A_1$). Alternatively, we might attempt to use the same general solver where we replace $A_0$ with $A_1$ but leave the learned component intact. In this case, the estimate $x$ computed from $y$ may also be poor, as illustrated in \\cite{antun2020instabilities} and \\cite{hussein2020correction}. The situation is complicated even further if we do not have a precise model of $A_1$ at test time. \n\nThese are real challenges in practice. For example, in MRI reconstruction there is substantial variation in the forward model depending on the type of acquisition -- e.g., Cartesian versus non-Cartesian k-space sampling trajectories, different undersampling factors, different numbers of coils and coil sensitivity maps, magnetic field inhomogeneity maps, and other calibration parameters \\cite{fessler} -- all of which need to be accounted for during training and testing. A network trained for one of these forward models may need to be retrained from scratch in order to perform well on even a slightly different setting (e.g., from six-fold to four-fold undersampling of k-space). Furthermore, training a new network from scratch may not always be feasible after deployment due to a lack of access to ground truth images. This could be either due to privacy concerns of sharing patient data between hospitals and researchers, or because acquiring ground truth images is difficult for the new inverse problem. \n\nThis leads us to formulate the problem of {\\em model adaptation}: given a reconstruction network trained on measurements from one forward model, adapt\/retrain\/modify the network to reconstruct images from measurements reflecting a new forward model. We consider a few variants of this problem: (a) the new forward model $A_1$ is known, along with one or more unlabeled training samples $y_i$ reflecting $A_1$, and (b) $A_1$ is unknown or only partially known, and we only have one or more unlabeled training samples reflecting $A_1$. These training samples are unlabeled in the sense that they are not paired with ``ground truth'' images used to generate the $y_i$'s. \nOur proposed model adaptation methods allow \na reconstruction network to be trained for a known forward model and then adapted to a related forward model without access to ground truth images, and without knowing the exact parameters of the new forward model.\n\n\nModel drift as stated above is a particular form of \\emph{distribution drift}, in which the distribution of $Y|X=x$ changes between training and deployment and we know $Y$ has a linear dependence on $X$ before and after the drift (even if we do not know the parameters of those linear relationships, represented as $A_0$ and $A_1$). 
\\change{That is, if we assume $\\varepsilon \\sim {\\cal N}(0,\\sigma^2 I)$, then the training distribution is $Y|X=x \\sim {\\cal N}(A_0 x,\\sigma^2 I)$ and the distribution at deployment is $Y|X=x \\sim {\\cal N}(A_1 x,\\sigma^2 I)$. In general, distribution drift challenges may be addressed using transfer learning \\cite{pan2009survey,weiss2016survey,tan2018survey} and domain adaptation \\cite{muandet2013domain,li2017deeper,wang2018deep}. One of the methods we explore in the body of the paper, Parameterize and Perturb, shares several features with transfer learning methodology. However, since in our setting we have a specific form of distribution drift, it is possible to design more targeted methods with better performance, as illustrated by\nexisting specialized methods for image inpainting \\cite{fawzi2016image}, as well as our general-purpose Reuse and Regularize method (detailed below).}\n\\subsection{Related Work}\n\nA broad collection of recent works, as surveyed by \\cite{arridge2019solving} and \\cite{ongie2020deep}, has explored using machine learning methods to help solve inverse problems in imaging. The current paper is motivated in part by experiments presented in \\cite{antun2020instabilities}, which show that deep neural networks trained to solve inverse problems are prone to several types of instabilities. \nSpecifically, they \nshowed that model drift in the form of slight changes in the forward model (even ``beneficial'' ones, like increasing the number of k-space samples in MRI) often has detrimental impacts on reconstruction performance. While \\cite{antun2020instabilities}\nis mostly empirical in nature, a follow-up mathematical study \\cite{gottschling2020troublesome} provides theoretical support for this finding, implying that instability arises naturally from training standard deep learning inverse solvers. However, recent work also shows that the instabilities observed in \\cite{antun2020instabilities} can be mitigated to some extent by adding noise to measurements during training, though such techniques are not sufficient to resolve artifacts arising from substantial model drift.\nTo address a subset of these issues, \\cite{raj2020improving} and \\cite{lunz2018adversarial} propose adversarial training frameworks that increase the robustness of inverse problem solvers. However, \\cite{raj2020improving} and \\cite{lunz2018adversarial} focus on robustness to adversarial perturbations in the measurements for a fixed forward model, and do not address \\change{a global change in the forward model}, which is the focus of this work.\n\nSimilar to this work, a recent paper \\cite{jong} has proposed domain adaptation techniques to transfer a reconstruction network from one inverse problem setting to another, e.g., adapting a network trained for CT reconstruction to perform MRI reconstruction. However, the focus of that approach is on adapting to {\\em changes in the image distribution}, whereas our approaches focus on {\\em changes to the forward model} assuming the image distribution is unchanged. Additionally, to our knowledge, no existing domain adaptation approaches consider the scenario where the new forward model depends on unknown calibration parameters, as we do in this work. \n\nAnother line of work explores learned methods for image reconstruction with automatic parameter tuning; see \\cite{wei2020tuning} and references therein. However, this work focuses on learning regularization and optimization parameters, not parameters of a drifting forward model. 
Also, \\cite{wu2019learning} describes an unrolling approach to learning a forward model in an imaging context, but with the goal of designing a forward model that optimizes reconstruction quality, rather than estimating a correction to the forward model from measurements. Some recent studies have used pre-trained generative models to solve inverse problems with unknown calibration parameters \\cite{anirudh2018unsupervised}; this line of work can be viewed as an extension of the compressed sensing with general models framework introduced in \\cite{bora2017compressed}.\n\n\\section{Problem Formulation}\nHere we formalize the problem of \\emph{model adaptation} as introduced above.\n\nSuppose we have access to an estimator $\\widehat{x} = f_0(y)$ that has been designed\/trained to solve the inverse problem\n\\begin{equation}\\label{eq:invprob0a}\n y = A_0x + \\varepsilon,\\quad x \\sim P_X,~\\varepsilon\\sim P_{N_0}\\tag{P0}\n\\end{equation}\nwhere $A_0$ is a known (linear) forward model, $P_X$ denotes the distribution of images $x$ and $P_{N_0}$ denotes the distribution of the noise $\\varepsilon$. We assume the trained estimator ``solves'' the inverse problem in the sense that it produces an estimate $\\hat x = f_0(y)$ such that the mean-squared error (MSE) $\\mathbb{E}_{x,\\varepsilon}[\\|\\hat x - x\\|^2]$ is small. \n\nNow assume that the forward model has changed from $A_0$ to a new model $A_1$ and\/or the noise distribution has changed from $P_{N_0}$ to a new noise distribution $P_{N_1}$, resulting in the new inverse problem\n\\begin{equation}\\label{eq:invprob1a}\n y = A_1 x + \\varepsilon',\\quad x \\sim P_X,~\\varepsilon'\\sim P_{N_1}.\\tag{P1}\n\\end{equation}\nWe consider both the case where $A_1$ is \\emph{known} (\\emph{i.e.},~ we have an accurate estimate of $A_1$) and the case \\change{where $A_1$ is partially unknown, in the sense that it belongs to a class of parameterized forward models, \\emph{i.e.},~ $A_1 \\in \\{A(\\sigma): \\sigma \\in \\mathbb{R}^q\\}$, where the parameters $\\sigma \\in \\mathbb{R}^q$ are unknown.}\n\nThe goal of \\emph{model adaptation} is to adapt\/retrain\/modify the estimator $\\widehat{x} = f_0(y)$ that was designed to solve the original inverse problem (P0) to solve the new inverse problem (P1). We will consider two variants of this problem:\n\\change{\n\\begin{itemize}\n \\item \\emph{Model adaptation without calibration data:} In this setting, we assume access to only one measurement vector $y$ generated according to \\eqref{eq:invprob1a}.\n \\item \\emph{Model adaptation with calibration data:} In this setting we assume access to a new set of measurement vectors $\\{y_i\\}_{i=1}^N$ generated according to \\eqref{eq:invprob1a}, but without access to the paired ground truth images (i.e., the corresponding $x_i$'s).\n\\end{itemize}\n}\n\nWhile the above discussion centers around a general estimator $\\widehat{x} = f_0(y)$, we are particularly interested in estimators that combine a trained deep neural network component depending on a vector of weights\/parameters $\\theta_0$, along with the original forward model $A_0$; we will call such an estimator a \\emph{reconstruction network}. Specifically, we assume that the forward model $A_0$ (or other derived quantities, such as its transpose $A_0^\n\\top$, pseudo-inverse $A_0^\\dagger$, etc.) is embedded in the reconstruction network, either in an initialization layer and\/or in multiple subsequent layers. 
This is the case for networks based on unrolling of iterative algorithms (see, for example, \\cite{gregor2010learning, sun2016deep, mardani2018neural,ongie2020deep,arridge2019solving}, and references therein), in which $A_0$ appears repeatedly in the network in ``data-consistency'' layers that approximately re-project the intermediate outputs of the network onto the set of data constraints $\\{x\\in \\mathbb{R}^n : A_0 x = y\\}$.\nIn general, we will\nassume the reconstruction network can be parametrized as $f_0(\\cdot) = f(\\cdot;\\theta_0,A_0)$ where $\\theta_0\\in\\mathbb{R}^p$ is the vector of pre-trained neural network weights\/parameters and $A_0$ is the original forward model.\n\n\\change{Finally, to simplify the presentation, we will assume an additive white Gaussian noise model for both (P0) and (P1), \\emph{i.e.},~ $P_{N_0} = \\mathcal{N}(0,\\sigma_0^2 I)$ and $P_{N_1} = \\mathcal{N}(0,\\sigma_1^2 I)$ with known variances $\\sigma_0^2$ and $\\sigma_1^2$. In this case the negative log-likelihood of $x$ given $y$ under measurement model (P1) is $\\frac{1}{2\\sigma_1^2}\\|A_1x-y\\|_2^2$, which justifies our use of quadratic data-consistency terms in the development below.}\n\n\\begin{figure}[ht]\n\\centering\n\\begin{subfigure}{.8\\linewidth}\n\n \\caption{Original reconstruction network.}\n \\centering\n \\includegraphics[width=.6\\linewidth]{figures\/fig2_orig.png} \n\\end{subfigure} \\\\[1em]\n\\begin{subfigure}{.8\\linewidth}\n \\caption{Model adaptation by Parametrize and Perturb (P\\&P).}\n \\centering\n \\includegraphics[width=.6\\linewidth]{figures\/fig2_pnp.png} \n\\end{subfigure} \\\\[1em]\n\\begin{subfigure}{0.8\\linewidth}\n \\caption{Model adaptation by Reuse \\& Regularize (R\\&R).}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/fig2_rnr.png} \n\\end{subfigure}\n\\caption{Three basic paradigms of reconstruction under ``model drift''. (a) If the training data is generated using the model $y = A_0 x + \\varepsilon$, this can be used to learn a reconstruction network $f(y;\\theta_0,A_0)$ which is parameterized by weights or parameters $\\theta_0$ and may also explicitly depend on forward model $A_0$. (b) {\\bf Parametrize and Perturb (P\\&P):} If at test time we are presented with data corresponding to the model $y = A_1 x + \\varepsilon'$, we may not only use the new forward model $A_1$ but also learn a perturbation $\\delta$ to the original network parameters $\\theta_0$ to compensate for the model drift.\n(c) {\\bf Reuse and Regularize (R\\&R):} Alternatively to P\\&P, we may reuse the pre-trained network $f_0$ as an implicit regularizer in an iterative model-based reconstruction scheme. The proposed scheme alternates between applying $f_0\\circ A_0$, which denoises and\/or removes artifacts, and a data-consistency step (denoted by $\\DC_{A_1}$ above) that enforces the estimated image $\\widehat{x}$ satisfies $A_1 \\widehat{x} \\approx y$.\n}\n\\label{fig:proposedmethods}\n\\end{figure}\n\n\\subsection{The feasibility of model adaptation}\n\nTo compute an accurate reconstruction under the original forward model, $A_0$, the learned solver must reconstruct components of the image that lie in the null space $N(A_0)$: for superresolution, these are high-frequency details lost during downsampling, and in inpainting, these are the pixels removed by $A_0$. 
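As a toy illustration of this point (a minimal sketch with hypothetical 1-D inpainting operators), two masks whose missing-pixel sets overlap yield forward models whose null spaces share directions; it is precisely this overlap that a model adaptation method can hope to exploit.
\\begin{verbatim}
import numpy as np

n = 16                                        # toy 1-D 'image' length
keep0 = np.ones(n, dtype=bool); keep0[4:8] = False  # pixels observed under A_0
keep1 = np.ones(n, dtype=bool); keep1[5:9] = False  # shifted missing block under A_1

A0 = np.eye(n)[keep0]        # inpainting operators: row-subsampled identities
A1 = np.eye(n)[keep1]

v = np.zeros(n); v[5] = 1.0  # component invisible to both forward models
print(np.allclose(A0 @ v, 0), np.allclose(A1 @ v, 0))   # True True

w = np.zeros(n); w[8] = 1.0  # invisible under A_1, but observed under A_0
print(np.allclose(A0 @ w, 0), np.allclose(A1 @ w, 0))   # False True
\\end{verbatim}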
\n\\change{The neural network trained as a component of $f_0$ implicitly represents a mapping from image components in the range of $A_0$ to components in $N(A_0)$.} \n\n\\change{Reconstructing under a different forward model, $A_1$, requires reconstructing different components of the image in the null space $N(A_1)$. The general intuition behind model adaptation is that if $A_0$ and $A_1$ are similar, then the mapping represented by $f_0$ can inform the new mapping that we need to learn from image components in the range of $A_1$ to components in $N(A_1)$. For example, in an inpainting setting, the learned network not only represents the missing pixels, but it also represents some function of the observed pixels that are relevant to filling in the missing pixels. Thus if $A_1$ has a similar null space (\\emph{e.g.},~ an offset in the collection of missing pixels), it is reasonable to expect that the original network has learned to represent some information about image components in the null space of $A_1$ but not in the null space of $A_0$. As the null spaces of $A_0$ and $A_1$ get further apart, model adaptation becomes less effective. This is similar to the widely-noted behavior of transfer learning, where transfer learning efficacy depends on the similarity of the training and target distributions. This intuition is supported by our empirical results, which illustrate that when $A_0$ and $A_1$ correspond to different blur kernels or perturbed k-space sampling patterns in MRI, the learned mapping $f_0$ does contain information about image components in the null space of $A_1$ that can be leveraged to improve reconstruction accuracy, even without additional training samples drawn using the model $A_1$.}\n\n\\section{Proposed Approaches}\n\\label{sec:proposed}\nWe propose two distinct model adaptation approaches, {\\em Parameterize \\& Perturb (P\\&P)} and {\\em Reuse \\& Regularize (R\\&R)}, as detailed below.\n\n\\subsection{Parametrize and Perturb: A transfer learning approach}\n\nLet $f_0$ be a reconstruction network trained to solve inverse problem (P0). Suppose we can explicitly parameterize $f_0$ both in terms of the trained weights\/parameters $\\theta_0$ and the original forward model $A_0$, \\emph{i.e.},~ we may write $f_0(\\cdot) = f(\\cdot\\,;\\theta_0,A_0)$. Given a new measurement vector $y$ under the measurement model (P1), a ``naive'' approach to model adaptation is simply to substitute the new forward model $A_1$ for $A_0$ in this parametrization, and estimate the image as $f(y;\\theta_0,A_1)$. However, as illustrated in Figure \\ref{fig:fig1}, this can lead to artifacts in the reconstruction due to model mismatch.\n\nInstead, we propose estimating the image as $ f(y;\\theta_1,A_1)$ where $\\theta_1$ is a perturbed set of network parameters obtained by solving the optimization problem:\n\\begin{equation}\\label{eq:A1}\n\\min_{\\theta} \\|y - A_1 f(y;\\theta,A_1)\\|_2^2 + \\mu \\|\\theta-\\theta_0\\|_2^2.\n\\end{equation}\nwhere $\\mu > 0$ is a tunable parameter.\nThe first term enforces data consistency, i.e., the estimated image $\\widehat{x}$ should satisfy $y \\approx A_1 \\widehat{x}$, while the second term $\\|\\theta-\\theta_0\\|_2^2$ ensures the retrained parameters $\\theta$ stay close to the original network parameters $\\theta_0$. This term is necessary to avoid degenerate solutions, \\change{which we demonstrate in the Supplement. 
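A minimal sketch of this fine-tuning step is given below, assuming PyTorch, a pretrained reconstruction module \\texttt{net} in which $A_1$ has already been substituted internally, and a callable \\texttt{A1} implementing the new forward operator; the function name, step count, and learning rate are illustrative rather than the exact settings used in our experiments.
\\begin{verbatim}
import copy
import torch

def pnp_adapt(net, A1, y, mu=1e-3, steps=200, lr=1e-4):
    # Approximately minimize ||y - A1 f(y)||^2 + mu ||theta - theta0||^2.
    net = copy.deepcopy(net)                   # keep the original weights intact
    theta0 = [p.detach().clone() for p in net.parameters()]
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        xhat = net(y)                          # reconstruction with current weights
        data_fit = (A1(xhat) - y).abs().pow(2).sum()   # data-consistency term
        prox = sum((p - p0).pow(2).sum()               # proximity to theta0
                   for p, p0 in zip(net.parameters(), theta0))
        (data_fit + mu * prox).backward()
        opt.step()
    return net(y).detach()
\\end{verbatim}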
Our use of a proximity term of this form is also inspired in part by its success in other transfer learning applications (see, \\emph{e.g.},~ \\cite{xuhong2018explicit}).}\n\nIf the forward model $A_1$ is also unknown, we propose optimizing for it as well in the above formulation, which gives:\n\\begin{equation}\\label{eq:A2}\n\\min_{\\theta,A \\in {\\cal A}} \\|y - A f(y;\\theta,A)\\|_2^2 + \\mu \\|\\theta-\\theta_0\\|_2^2.\n\\end{equation}\nwhere $\\mathcal{A}$ denotes a constraint set.\nWe assume the forward model is parameterized such that the constraint set is given by $\\mathcal{A} = \\{A(\\sigma) : \\sigma \\in \\mathbb{R}^q\\}$, where $A(\\sigma)$ denotes a class of forward models parametrized by a vector $\\sigma \\in \\mathbb{R}^q$ with $q \\ll m\\cdot n$ (e.g., in a blind deconvolution setting, $A(\\sigma)$ corresponds to convolution with an unknown kernel $\\sigma$). In particular, we propose optimizing over the parameters $\\sigma$, which is possible with first-order methods such as stochastic gradient descent, provided the map $\\sigma \\mapsto A_\\sigma$ is first-order differentiable.\n\n{\\centering\n\\begin{minipage}{.6\\linewidth}\n\\centering\n\\begin{algorithm}[H]\n \\caption{Parameterize \\& Perturb (P\\&P)}\n \\label{alg:pnp}\n \\centering\n \\begin{algorithmic}[1]\n \\Require{Original forward model $A_0$, new forward model $A_1$, pre-trained reconstruction network $f_0(\\cdot) = f(\\cdot; \\theta_0,A_0)$, regularization parameter $\\mu$, new measurements $y$.}\n \\State Modify the reconstruction network $f_0$ by internally changing $A_0$ to $A_1$, to obtain the estimate $f(y;\\theta_0, A_1)$\n \\State Fine-tune the network weights as $\\theta_1 = \\theta_0+\\delta$ where $\\delta$ is a perturbation learned by solving \\eqref{eq:A1}\n \\end{algorithmic} \n\\end{algorithm}\n\\end{minipage}\n\\par\n}\\vspace{1em}\n\n\nThe preceding discussion focused on the case of reconstructing a single measurement vector $y$ at test time, \\emph{i.e.},~ model adaptation without calibration data. Additionally, we consider a P\\&P\\ approach in the case where we have access to calibration data $y_1,...,y_N$ generated according to (P1). In this case we propose retraining the network\nby minimizing the sum of data-consistency terms over the calibration set:\n\\begin{equation}\\label{eq:A3}\n(\\theta_1,A_1) = \\argmin_{\\theta,A \\in {\\cal A}} \\frac{1}{N}\\sum_{i=1}^N\\|y_i - A f(y_i;\\theta,A)\\|_2^2 + \\lambda \\|\\theta-\\theta_0\\|_2^2.\n\\end{equation}\nAt deployment, we propose using the retrained network $\\widehat{x} = f(y;\\theta_1,A_1)$ as our estimator.\n\n\\change{\nIt is worth noting that the P\\&P\\ model adaptation technique presented above bears similarities to the deep image prior (DIP) approach to solving inverse problems as introduced in \\cite{dip}. However, P\\&P\\ differs from DIP in two key aspects: First, in DIP the reconstruction network is initialized with random weights, whereas in P\\&P\\ we start with a network whose initial weights $\\theta_0$ are obtained by training to solve the initial inverse problem (P0). Second, we explicitly enforce proximity to the initial weights to prevent overfitting to the data, and do not rely on early stopping heuristics as is the case with DIP. The P\\&P\\ approach also shares similarities with the ``fine-tuning'' step proposed in the $\\Sigma$-net MRI reconstruction framework \\cite{hammernik2019sigmanet}, where a loss similar to \\eqref{eq:A1} is minimized to enforce data consistency at test time. 
However, different from P\\&P, the fine-tuning approach in \\cite{hammernik2019sigmanet} regularizes the reconstruction by minimizing the loss between initial reconstruction and the new network output in the SSIM metric. As demonstrated in Figure \\ref{fig:fig1}, this initial reconstruction can have severe artifacts in certain settings due to model mismatch, in which case enforcing proximity in image space to an initial reconstruction is less justified.\n}\n\n\n\\subsection{Reuse \\& Regularize: Model adaptation without retraining} \n\\change{\nOne drawback of the P\\&P\\ approach is that it requires fine-tuning the network for each input $y$, which is computationally expensive relative to a feed-forward reconstruction approach. Additionally, the P\\&P\\ approach is somewhat indirect, relying only on the inductive bias of the network architecture and its original parameter configuration to impart a regularization effect for the new inverse problem (P1). Here we propose a different model adaptation approach that does not require retraining the original reconstruction network, and explicitly makes use of the fact that the original network is designed to solve (P0).\n\nSuppose we are given a reconstruction network $f_0(y)$\ntrained to solve (P0). The key idea we exploit is that the composition of $f_0$ with the original forward model $A_0$, should act as an \\emph{auto-encoder}, \\emph{i.e.},~ if we define the map $g:\\mathbb{R}^d\\rightarrow\\mathbb{R}^d$ by $g(x) = f_0(A_0x)$ then by design we should have $g(x)\\approx x$ for any image $x$ sampled from the image distribution $P_X$. See Figure \\ref{fig:autoencoder} for an illustration in the case of undersampled MRI reconstruction.\n\n\nGiven this fact, one simple approach to reconstructing a measurement vector $y$ under (P1) is to start from an initial guess, \\emph{e.g.},~ the least squares solution $x ^{(0)}= A_1^\\dagger y$, and attempt to find a fixed-point of $g(\\cdot)$ by iterating:\n\\begin{equation}\\label{eq:fp}\nx^{(k+1)} = g(x^{(k)}),~~k=0,1,2,...\n\\end{equation}\nHowever, this approach only uses knowledge of the new forward model $A_1$ in the initialization step. Also, unless we can guarantee the map $g(\\cdot)$ is non-expansive (i.e., its Jacobian is 1-Lipschitz), these iterations could diverge.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figures\/autoencoder.pdf}\n \\caption{Illustration of the auto-encoding property of the map $f_0 \\circ A_0$ as used in the proposed R\\&R\\ model adaptation approach, illustrated in an undersampled MRI reconstruction setting.}\n \\label{fig:autoencoder}\n\\end{figure}\n\nInstead, building off the intuition that $g$ acts as an auto-encoder, we propose using $g$ as a regularizer in an iterative model-based reconstruction scheme. In particular, we adopt a regularization-by-denoising (RED) approach, which allows one to convert an arbitrary denoiser\/de-artifacting map into a regularizer \\cite{RED}. The RED approach is motivated by the following cost function:\n\\begin{equation}\\label{eq:RED}\n \\min_{x} \\frac{1}{2}\\|A_1x - y\\|_2^2 + \\lambda\\rho(x)\n\\end{equation}\nwhere the function $\\rho(x) := x^\\top(x-g(x))$ can be interpreted as a regularizer induced by the map $g(x)$ and $\\lambda > 0$ is a regularization parameter. Under appropriate conditions on the function $g$, one can show $\\nabla \\rho(x) = x-g(x)$. 
This fact is used in \\cite{RED} to derive a variety of iterative algorithms based on first-order methods (see also \\cite{reehorst2018regularization} for further analysis of RED, including convergence guarantees).\n\n{\\centering\n\\begin{minipage}{.6\\linewidth}\n\\centering\n\\begin{algorithm}[H]\n \\caption{Reuse \\& Regularize (R\\&R)}\n \\label{alg:rnr}\n \\centering\n \\begin{algorithmic}[1]\n \\Require{Pre-trained reconstruction network $f_0(\\cdot)$, original forward model $A_0$, new forward model $A_1$, regularization parameter $\\lambda>0$, max iterations $K$, new measurements $y$.}\n \\State $x\\gets A_1^\\dagger y$ \\Comment{\\emph{least-squares initialization}}\n \\For{$k=1,2,...,K$}\n \\State $z \\gets f_0(A_0x)$ \\Comment{\\emph{regularize using pre-trained network}}\n \\State $x \\gets (A_1^\\top A_1 + \\lambda I)^{-1}(A_1^\\top y + \\lambda z)$ \n \\Comment{\\emph{data consistency}}\n \\EndFor\n \\State \\textbf{return} $x$ \n \n \\end{algorithmic} \n\\end{algorithm}\n\\end{minipage}\n\\par\n}\\vspace{1em}\n\nFor simplicity, we focus on a RED approach with proximal gradient descent (see, e.g., \\cite{parikh2014proximal}) as the base algorithm with stepsize $\\tau > 0$. This results in an alternating scheme:\n\\begin{align*}\n z^{(k)} & = (1-\\tau)x^{(k)} + \\tau g(x^{(k)})\\\\\n x^{(k+1)} & = \\argmin_{x} \\frac{1}{2\\lambda}\\|A_1x - y\\|_2^2 + \\frac{1}{2\\tau} \\|x- z^{(k)}\\|_2^2\n\\end{align*}\nThe $x$-update above has the closed-form expression\n\\begin{equation}\n x^{(k+1)} = \\left(A_1^\\top A_1 + \\tfrac{\\lambda}{\\tau} I\\right)^{-1}\\left(A_1^\\top y + \\tfrac{\\lambda}{\\tau} z^{(k)}\\right) \\label{eq:xup}\n\\end{equation}\nFor simplicity of implementation and to reduce the number of tuning parameters, we fix the stepsize to $\\tau = 1$ in all our experiments. We summarize these steps in Algorithm \\ref{alg:rnr}.\n\nNote that in the limit as $\\lambda\\rightarrow \\infty$, Algorithm \\ref{alg:rnr} reduces to the fixed-point scheme \\eqref{eq:fp}, and in the limit as $\\lambda \\rightarrow 0$ Algorithm \\ref{alg:rnr} will return the initialization $x = A_1^\\dagger y$. In general, the output from Algorithm \\ref{alg:rnr} will interpolate between these two extremes: $x$ will be an approximate fixed point of $g$ and will approximately satisfy data consistency, \\emph{i.e.},~ $y \\approx A_1 x$.\n\n\nFor certain types of forward models the $x$-update in \\eqref{eq:xup} can be computed efficiently (e.g., if $A_1$ corresponds to a 2-D discrete convolution with circular boundary conditions, then $A_1^\\top A_1$ diagonalizes under the 2-D discrete Fourier transform). However, in general, the matrix inverse $(A_1^\\top A_1 + \\lambda I)^{-1}$ may be expensive to apply. Therefore, in practice we propose approximating \\eqref{eq:xup} with a fixed number of conjugate gradient iterations.\n\nA notable aspect of the R\\&R\\ approach is that it has potential to improve the accuracy of network-based reconstructions \\emph{even in the absence of model drift}, \\emph{i.e.},~ even if $A_1 = A_0$. This is because data-consistency is not guaranteed by certain reconstruction networks (e.g., U-Nets). However, we are less likely to see a benefit in the case where the reconstruction network already incorporates data-consistency layers, such as networks inspired by unrolling iterative optimization algorithms. 
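For concreteness, a minimal dense-matrix sketch of Algorithm \\ref{alg:rnr} is given below, assuming $A_0$ and $A_1$ are explicit real-valued matrices and \\texttt{f0} is the pretrained reconstruction map; in practice the linear solve is replaced by a few conjugate gradient iterations, as noted above.
\\begin{verbatim}
import numpy as np

def rnr_reconstruct(f0, A0, A1, y, lam=1.0, K=5):
    # Reuse & Regularize: alternate the pretrained map f0(A0 x)
    # with a data-consistency step, cf. Algorithm 2 with tau = 1.
    x = np.linalg.pinv(A1) @ y                      # least-squares initialization
    M = A1.T @ A1 + lam * np.eye(A1.shape[1])       # data-consistency system matrix
    for _ in range(K):
        z = f0(A0 @ x)                              # regularize with the pretrained network
        x = np.linalg.solve(M, A1.T @ y + lam * z)  # data-consistency update
    return x
\\end{verbatim}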
We explore this benefit of R\\&R\\ in the absence of model drift empirically in Section \\ref{sec:mrisamplingexp} in the context of MRI reconstruction.\n\n{\\centering\n\\begin{minipage}{.6\\linewidth}\n\\centering\n\\begin{algorithm}[H]\n \\caption{Reuse \\& Regularize with fine-tuning (R\\&R$+$)}\n \\label{alg:rnrp}\n \\centering\n \\begin{algorithmic}[1]\n \\Require{Pretrained reconstruction network $f_0(\\cdot) = f(\\cdot; \\theta_0)$, original forward model $A_0$, new forward model $A_1$, regularization parameter $\\lambda$, new measurements $y$.}\n \\State Construct an estimator $\\widehat{x}(y;\\theta_0)$ by unrolling $K$ iterations of Algorithm \\ref{alg:rnr}.\n \\State Fine-tune the network weights as $\\theta_1 = \\theta_0+\\delta$ where $\\delta$ is a perturbation learned by approximately minimizing the cost \\eqref{eq:rnrpcost} via SGD.\n \\State \\textbf{return} $x = \\widehat{x}(y;\\theta_1)$\n \\end{algorithmic} \n\\end{algorithm}\n\\end{minipage}\n\\par\n}\\vspace{1em}\n\nThe R\\&R\\ approach can also be extended to the case where the new forward model $A_1$ depends on unknown parameters. First, we define an estimator $\\hat{x}(y;A_1)$ by unrolling a fixed number of iterates of Algorithm \\ref{alg:rnr}, \\emph{i.e.},~ we take $\\hat{x}(y;A_1) = x^{(K)}$ where $x^{(K)}$ is the $K$th iterate of Algorithm \\ref{alg:rnr} with input $y$ for some small fixed value of $K$ (e.g., $K=5$). Supposing $A_1$ belongs to a parameterized class of forward models $A(\\sigma)$, i.e., $A_1 = A(\\sigma_1)$ for some set of parameters $\\sigma_1$, we propose estimating $\\sigma_1$ by minimizing the data-consistency error of the estimator:\n\\begin{equation}\\label{eq:B2}\n\\tilde{\\sigma}_1 = \\argmin_\\sigma \\|A(\\sigma)\\hat{x}(y;A(\\sigma)) - y\\|_2^2\n\\end{equation}\nThe resulting image estimate is then taken to be $\\hat{x}(y;\\tilde{A}_1)$ where $\\tilde{A}_1 = A(\\tilde{\\sigma}_1)$.\n\nFinally, we also consider a combination of the P\\&P\\ and R\\&R\\ approaches where we additionally fine-tune the weights $\\theta_0$ of the reconstruction network $f_0$ embedded in the unrolled R\\&R\\ estimator. Writing this estimator as $\\widehat{x}(y;\\theta_0)$, similar to the P\\&P\\ approach we propose ``fine-tuning'' the weights $\\theta_0$ by approximately minimizing the cost function\n\\begin{equation}\\label{eq:rnrpcost}\n\\min_\\theta \\|A_1 \\widehat{x}(y;\\theta) -y\\|_2^2,\n\\end{equation}\nto obtain the updated network parameters $\\theta_1 = \\theta_0 + \\delta$ where $\\delta$ is some small perturbation. The estimated image is then given by $x = \\widehat{x}(y;\\theta_1)$. We call this approach R\\&R$+$. Empirically, we see consistent improvements in reconstruction accuracy from R\\&R$+$ over R\\&R\\ without any fine-tuning (see Figures \\ref{fig:megafigmotblur} and \\ref{fig:megafigmri}). However, this comes at the additional computational cost of having to retrain the reconstruction network parameters at test time.\n}\n\n\n\\section{Experiments}\\label{sec:exp}\n\nIn this section we empirically demonstrate our approach to model adaptation on three types of inverse problems with two example reconstruction network architectures. We have chosen these comparison points for their simplicity and to illustrate the broad applicability of our proposed approaches. In particular, our approaches to model adaptation are not tied to a specific architectural design.\n\n\\subsection{Methods and datasets used}\n\nWe demonstrate our approaches on three inverse problems: motion deblurring, superresolution, and undersampled single-coil MRI reconstruction.
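\n\nConcretely, in all three settings the measurements take the linear form $y = Ax + \\varepsilon$ for a forward operator $A\\in\\{A_0,A_1\\}$ and (possibly zero) additive noise $\\varepsilon$. As a rough sketch in our own notation (not a formal specification of the implementation),\n\\begin{equation*}\nA_{\\mathrm{blur}}x = k \\ast x, \\qquad A_{\\mathrm{SR}}x = D_2(h \\ast x), \\qquad A_{\\mathrm{MRI}}x = M\\mathcal{F}x,\n\\end{equation*}\nwhere $k$ is a motion-blur kernel, $h$ is an interpolation filter with $D_2$ denoting subsampling by a factor of two, $M$ is a binary k-space sampling mask, and $\\mathcal{F}$ is the 2-D discrete Fourier transform.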
\n\nFor motion deblurring, our initial model $A_0$ corresponds to a $10^\\circ$ motion blur with a $7\\times7$ kernel, and $A_1$ is a \\change{$20^\\circ$} motion blur with a $7\\times7$ kernel, with angles given with respect to the horizontal axis. In superresolution, our initial model is a bilinear downsampling with rate $2\\times$, and $A_1$ corresponds to $2\\times$ bicubic downsampling. \n\nMRI reconstruction is performed with a \\change{$6\\times$} undersampling of k-space in the phase encoding direction for both $A_0$ and $A_1$. The sampling maps are shown in Fig. \\ref{fig:mrimasks}.\n\n\\begin{figure}[ht]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.25\\columnwidth, trim= 20 20 20 20, clip]{figures\/A0_mask.png} & \\includegraphics[width=0.25\\columnwidth,trim= 20 20 20 20, clip]{figures\/A1_mask.png}\\\\[0.1em]\n\\begin{minipage}{0.25\\columnwidth}\n(a) Original k-space sampling pattern ($A_0$)\n\\end{minipage} &\n\\begin{minipage}{0.25\\columnwidth}\n(b) Resampled k-space sampling pattern ($A_1$)\n\\end{minipage}\n\\end{tabular}\n\\caption{Visualization of k-space masks used for MRI experiments. Each mask represents a \\change{6-fold Cartesian undersampling} with 4\\% of the center k-space lines fully sampled, and the remaining lines sampled according to a Gaussian variable density scheme. \\change{The $A_1$ mask contains the same center lines, but the higher frequency k-space lines are resampled.}}\\label{fig:mrimasks}\n\\end{figure}\n\nWe use two datasets in our experiments. First, for motion deblurring and superresolution, we train and test on 128$\\times$128-pixel aligned photos of human faces from the CelebA dataset \\cite{liu2015faceattributes}. \n\nThe data used in the undersampled MRI experiments were obtained from the NYU fastMRI Initiative \\cite{zbontar2018fastMRI}. The primary goal of the fastMRI dataset is to test whether machine learning can aid in the reconstruction of medical images. We trained and tested on a subset of the single-coil knee dataset\\change{, which consists of simulated single-coil measurements. In all tests, we use complex-valued data, which interfaces with our deep networks by treating the real and imaginary parts of the images as separate channels. We measure reconstruction accuracy with respect to the center 320$\\times$320 pixels of the complex IFFT of the fully-sampled k-space data. For the purpose of visualization, we display only the magnitude images in the following sections.}\n\nLearning rates and regularization parameters (\\emph{i.e.},~ $\\mu$ in Algorithm \\ref{alg:pnp} and $\\lambda$ in Algorithm \\ref{alg:rnr})\nwere tuned via cross-validation on a held-out validation set of 512 images for CelebA, and 64 MR images for fastMRI. Batch sizes were fixed in advance to be 128 for the motion blur and superresolution settings, and 8 for the MRI setting. Hyperparameters were tuned via grid search on a log scale. \\change{For R\\&R, we use $K=5$ iterations in the main loop of Algorithm \\ref{alg:rnr}. During training, we add Gaussian noise with $\\sigma=0.01$ to all measurements, as suggested by \\cite{genzel2020solving} to improve robustness.} \n\nWe compare the performance of two reconstruction network architectures across all datasets. First, we utilize the U-Net architecture \\cite{ronneberger2015u}.
Our U-Net implementation takes as input the adjoint of the measurements under the forward model, $A_0^\\top y$ or $A_1^\\top y$, which is then passed through several CNN layers to obtain a reconstructed image $\\widehat{x}$.\n\nWe also utilize the MoDL architecture \\cite{aggarwal2018modl}, a learned architecture designed for solving inverse problems with known forward models. MoDL is an iterative or ``unrolled'' architecture, which alternates between a data-consistency step and a trained CNN denoiser, with weights tied across unrolled iterations. We use a U-Net architecture as the denoiser in our implementation of MoDL, ensuring that the overall number of parameters (except for a learned scaling factor in MoDL) is the same in both architectures.\n\n\\change{As deep learning-based baselines that do not require training on particular forward models, we compare to the Image-Adaptive GAN (IAGAN) \\cite{hussein2020image} and to Regularization by Denoising (RED) \\cite{RED}. IAGAN leverages a pretrained GAN to reconstruct arbitrary linear measurements by fitting the latent code input to the GAN, while also tuning the GAN parameters in a way similar to our proposed P\\&P\\ approach and the Deep Image Prior approach \\cite{dip}.\n\nRED requires only a pretrained denoiser, which we implement by pretraining a set of residual U-Net denoisers on the fastMRI and CelebA training sets, with a variety of Gaussian noise levels. Specifically, we train 15 denoisers for each problem setting, with $\\sigma$ ranging from $10^{-4}$ to $10^1$ on a logarithmic scale. All results shown are tuned on the validation set to ensure the optimal denoisers are used.\n\nWe also compare to a penalized least squares approach with total variation regularization \\cite{rudin1992nonlinear}, a classical approach that does not use any learned elements. While more complex regularizers are possible, total variation (TV) is used because it is a simple, widely-used conventional baseline.}\n\n\n\\subsection{Parametrizing forward models}\n\nBoth of our proposed model adaptation methods permit the new forward model to be unknown during training, provided it has a known parametrization. In this case, the parameters describing the forward model are learned along with the reconstruction. Here we describe the parametrizations of the forward models that are used.\n\nFor the deblurring task, the unknown blur kernel is parametrized as a $7\\times7$ kernel, initialized with the weights used for the ground-truth kernel during the initial stage of training. Practically, this is identical to a standard convolutional layer with a fixed initialization and only one learned kernel.\n\nA similar approach is used for superresolution. The forward model can be efficiently represented by a strided convolution, and the adjoint is represented by a standard ``convolution transpose'' layer, again with the weights initialized to match the forward operator in the initial pre-training phase.\n\n\\change{In the case of MRI, we use two choices of $A_1$, depending on whether we assume $A_1$ is fully known or not. In the case where $A_1$ is fully known, we utilize another $6\\times$ undersampled k-space mask, but with resampled high-frequency lines. We display the original and new k-space sampling masks in Figure \\ref{fig:mrimasks}.
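\n}\n\nAs a concrete illustration of such a parametrization, the following sketch (assuming a PyTorch-style implementation; the class and helper names \\texttt{BlurForwardModel} and \\texttt{unrolled\\_rnr} are hypothetical) implements the deblurring forward model $A(\\sigma)$ as a module with a single learnable $7\\times7$ kernel, which can then be optimized through the data-consistency objective \\eqref{eq:B2} jointly with the unrolled reconstruction.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass BlurForwardModel(nn.Module):\n    # A(sigma): 2-D convolution with a single learnable 7x7 kernel.\n    def __init__(self, init_kernel):\n        super().__init__()\n        # init_kernel: tensor of shape (7, 7), e.g. the kernel of A_0.\n        self.kernel = nn.Parameter(init_kernel.clone().view(1, 1, 7, 7))\n\n    def forward(self, x):\n        # x: image batch of shape (N, 1, H, W); one channel for simplicity.\n        return F.conv2d(x, self.kernel, padding=3)\n\n# Estimating the kernel by minimizing the data-consistency objective:\n# A1 = BlurForwardModel(init_kernel)\n# opt = torch.optim.Adam(A1.parameters(), lr=1e-3)\n# for _ in range(num_steps):\n#     x_hat = unrolled_rnr(y, A1)  # placeholder: K unrolled R&R iterations\n#     loss = F.mse_loss(A1(x_hat), y)\n#     opt.zero_grad(); loss.backward(); opt.step()\n\\end{verbatim}\nThe superresolution parametrization described above can be handled analogously, with the learnable weights placed in a strided convolution.\n\n\\change{\n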
To illustrate the utility of our approach under miscalibration of the forward model in an MRI reconstruction setting, we also consider a unknown random perturbation of the original k-space lines, which we attempt to learn during reconstruction. The vertical k-space lines are still fully sampled, as are the center 4$\\%$ of frequencies, but all high frequency lines are perturbed uniformly at random with a continuous value from -2 to 2. We wish to emphasize that this experiment is not meant to reflect clinical practice, since such miscalibration of k-space sampling locations is not typically encountered in anatomical imaging with Cartesian k-space sampling trajectories. However, we include this experiment simply to illustrate that our approach could be extended to unknown parametric changes in the forward model in an MR reconstruction setting.\n}\n\n\\subsection{Main results}\n\n\n\\begin{table*}\t\n\\centering\n\t\\adjustbox{max width=\\columnwidth}{\n\t\\begin{tabular}{cc|c|c|c|c|c|c|}\n\t\t\\hline\n & & \\multicolumn{6}{c|}{Baselines} \\\\\n & & \\multirow{2}{*}{TV} & \\multirow{2}{*}{RED} & & Train w\/$A_0$ & Train w\/$A_0$ & Train w\/$A_1$ \\\\\n\t\t & & & & & Test w\/$A_0$ & Test w\/$A_1$ & Test w\/$A_1$ \\\\ \\hline \\hline\n\t \\multicolumn{2}{c|}{\\multirow{2}{*}{Blur}} & \\multirow{2}{*}{27.61} & \\multirow{2}{*}{30.23} & U-Net & 34.15 & 25.42 & 33.98 \\\\\n\t & & & & MoDL & 36.25 & 23.91 & 36.13 \\\\ \\hline\n\t \\multicolumn{2}{c|}{\\multirow{2}{*}{SR}} & \\multirow{2}{*}{28.33} & \\multirow{2}{*}{28.59} & U-Net & 30.74 & 26.3 & 31.22 \\\\\n\t & & & & MoDL & 31.32 & 22.27 & 31.98 \\\\ \\hline\n\t \\multicolumn{2}{c|}{\\multirow{2}{*}{MRI}} & \\multirow{2}{*}{25.09} & \\multirow{2}{*}{27.76} & U-Net & 31.51 & 27.47 & 32.33 \\\\\n\t\t & & & & MoDL & 31.88 & 22.82 & 31.79 \\\\ \\hline \\hline\n\t\t \\multicolumn{8}{c}{Proposed Model Adaptation Methods} \\\\\n\t &\t & \\multicolumn{3}{c|}{Known $A_1$} & \\multicolumn{3}{c}{Unknown $A_1$} \\\\\n\t &\t & P\\&P (Alg. \\ref{alg:pnp}) & R\\&R (Alg. \\ref{alg:rnr}) & R\\&R+ (Alg. \\ref{alg:rnrp}) & P\\&P (Alg. \\ref{alg:pnp}) & R\\&R (Alg. \\ref{alg:rnr}) & R\\&R+ (Alg. \\ref{alg:rnrp}) \\\\ \\hline\n\t\t \\multirow{2}{*}{Blur} & U-Net & 33.01 & 32.11 & \\bf 33.50 & 29.18 & 27.67 & 30.05 \\\\\n\t\t & MoDL & 30.08 & 33.82 & \\bf 34.73 & 29.89 & 27.81 & 27.94 \\\\ \\hline\n\t\t \\multirow{2}{*}{SR} & U-Net & 28.00 & 29.95 & \\bf 29.99 & 27.77 & 26.98 & 29.35 \\\\\n\t\t & MoDL & 24.59 & 28.18 & \\bf 29.83 & 23.14 & 24.93 & 25.29 \\\\ \\hline\n\t\t \\multirow{2}{*}{MRI} & U-Net & 29.07 & 29.71 & \\bf 31.43 & 28.92 & 28.06 & 29.54 \\\\\n\t\t & MoDL & 30.63 & 30.25 & \\bf 31.44 & 26.64 & 23.46 & 27.67 \\\\\n\t\\end{tabular}\n\t}\n\t\\vspace{0.5em}\n\t\\caption{\\change{Comparison of performance of various baseline methods for inverse problems across a variety of datasets and forward models. The metric presented is the mean PSNR. SSIM values can be found in Table \\ref{table:ssimtable}}}\n\t\\label{table:mainadaptationtable}\n\\end{table*}\n\nIn Table \\ref{table:mainadaptationtable} we present our main results. We present sample reconstructions for the deblurring problem and MRI reconstruction problem in Figs. \\ref{fig:megafigmotblur} and \\ref{fig:megafigmri}. For reference, the ground truth, inputs to the networks, a total variation regularized reconstruction, and a RED reconstruction are presented in Figs. \\ref{fig:deblurinitialmethods} and \\ref{fig:mriinitialmethods}. 
We also provide in the Appendix a table of SSIM values as well as the full version of Table \\ref{table:mainadaptationtable}, which contains the standard deviations of PSNR.\n\nWhile the magnitude of the improvements vary across domains and problems, we find that retraining the network with the proposed model adaptation techniques significantly improve performance by several dBs in the new setting. This effect is particularly striking in the case of MRI reconstruction with MoDL, where the ``naive'' approach of replacing $A_0$ with $A_1$ in the network gives catastrophic results (a roughly 9 dB drop in reconstruction PSNR), while the proposed model adaptation approaches give reconstruction PSNRs within 1-2 dB of the baseline approach of training and testing with the same forward model in the case where $A_1$ is known.\n\n\\begin{figure}\n\\centering\n\\adjustbox{max width=0.5\\columnwidth}{\n\\renewcommand*{\\arraystretch}{0}\n\\begin{tabular}{c@{}c@{}c@{}c@{}}\n \\small Ground & \\small Blurred & \\small TV-Regularized & \\small RED \\\\\n \\small Truth & & \\small Reconstruction & \\\\ \\vspace{2pt} && \\\\\n \\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/megafig\/truth\/celeba\/48_9.032992.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/megafig\/corrupted\/motionblur\/48_20.794127.png} &\n \\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/megafig\/tv\/48_9.032992_blur_tv.png} &\n\\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/revisionfigs\/blur\/48_red_32.89262294769287_0.9620666.png} \\\\ \n \\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/megafig\/truth\/celeba\/48_9.032992_res.png}\n& \\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/megafig\/corrupted\/motionblur\/48_20.794127_res.png} &\n\\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/megafig\/tv\/48_9.032992_blur_tv_res.png} &\n\\includegraphics[align=c, width = 0.15\\columnwidth]{figures\/revisionfigs\/blur\/48_red_err__32.89196014404297_0.9620606.png}\n\\end{tabular}\n}\n\\caption{Comparison figures for the deblurring methods in Figure \\ref{fig:megafigmotblur}. We present the ground truth, the blurred image (with Gaussian noise with $\\sigma=0.01$ added), a total variation (TV) regularized reconstruction, \\change{and a comparison to Regularization by Denoising (RED), a model-agnostic method leveraging a deep denoiser}. 
Below each of the above is the residual image, multiplied by 5$\\times$ for ease of visualization.}\n\\label{fig:deblurinitialmethods}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\adjustbox{max width=0.5\\columnwidth}{\n\\renewcommand*{\\arraystretch}{0}\n\\begin{tabular}{@{}c@{}c@{}c@{}c@{}}\n \\small Ground & \\small IFFT & \\small TV-Regularized & \\small RED \\\\\n \\small Truth & \\small Reconstruction & \\small Reconstruction & \\\\ \\vspace{1pt} && \\\\%\\vspace{2pt} \\\\\n \\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_truth.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_ifft.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_tv.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_red.png} \\\\\n \\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/megafig\/truth\/mri\/14_48.477177_res.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_ifft_err.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_tv_err.png} &\n\\includegraphics[align=c,width = 0.15\\columnwidth]{figures\/revisionfigs\/mri\/111_red_err.png}\n\\end{tabular}\n}\n\\caption{Comparison figures for the MRI reconstruction methods in Figure \\ref{fig:megafigmri}. We present the IFFT with all k-space data maintained, the na\u00efve IFFT reconstruction after k-space masking, a total variation (TV) regularized reconstruction (with PSNR 27.3 dB), and a RED reconstruction (with PSNR 28.4 dB). We also present the residuals relative to the fully-sampled IFFT, multiplied by 5$\\times$ for ease of visualization.}\n\n\\label{fig:mriinitialmethods}\n\\end{figure}\n\n\n\n\\begin{figure*}\n\\centering\n\\adjustbox{max width=\\columnwidth}{\n\\renewcommand*{\\arraystretch}{0}\n\\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}}\n& \\small Train w\/$A_0$ & \\small Train w\/$A_0$ & \\small P\\&P & \\small R\\&R & \\small R\\&R+ & \\small P\\&P & \\small R\\&R & \\small R\\&R+ \\\\\n & \\small Test w\/$A_0$ & \\small Test w\/$A_1$ & \\small Known $A_1$ & \\small Known $A_1$ & \\small Known $A_1$ & \\small Unknown $A_1$ & \\small Unknown $A_1$ & \\small Unknown $A_1$ \\\\ \\vspace{2pt} \\\\\n\\begin{minipage}{0.08\\linewidth} U-Net \\end{minipage} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_orig__34.860546588897705_0.9174336.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_noadapt__24.471983909606934_0.8112663.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_pnp__32.33306646347046_0.9213726.png} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnr_32.935969829559326_0.97213316.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnrp_33.25258016586304_0.9752408.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_pnp1__26.703567504882812_0.85022515.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnr1_0.9337511_27.55173921585083_0.9337511.png} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnrp1_0.95398253_29.146764278411865_0.95398253.png} \\\\\n\\vspace{2pt}\\begin{minipage}{0.08\\linewidth} U-Net 
\\\\ Residual \\end{minipage} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_orig_err__34.860546588897705_0.9174336.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_noadapt_err__24.471983909606934_0.8112663.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_pnp_err__32.33306646347046_0.9213726.png} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnr_err__32.935969829559326_0.97213316.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnrp_err__33.25258016586304_0.9752408.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_pnp1_err__26.703567504882812_0.85022515.png} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnr1_err__27.55173921585083_0.9337511.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/unet\/48_rnrp1_err__29.146764278411865_0.95398253.png} \\\\\n\\begin{minipage}{0.08\\linewidth} MoDL \\end{minipage} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_orig__36.96568489074707_0.91479874.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_noadapt__23.280608654022217_0.69428253.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_pnp__35.34075736999512_0.9167714.png} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnr__34.358670711517334_0.97277963.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnrp__34.52039957046509_0.97645557.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_pnp1__34.72598075866699_0.91442525.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnr1__27.185657024383545_0.92795646.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnrp1__29.811813831329346_0.9518878.png} \\\\\n\\begin{minipage}{0.08\\linewidth} MoDL \\\\ Residual \\end{minipage} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_orig_err__36.96568489074707_0.91479874.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_noadapt_err__23.280608654022217_0.69428253.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_pnp_err__35.34075736999512_0.9167714.png} &\n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnr_err__34.358670711517334_0.97277963.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnrp_err__34.52039957046509_0.97645557.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_pnp1_err__34.72598075866699_0.91442525.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnr1_err__27.185657024383545_0.92795646.png} & \n\\includegraphics[align=c, width = 0.12\\linewidth]{figures\/revisionfigs\/blur\/modl\/48_rnrp1_err__29.811813831329346_0.9518878.png}\n\\end{tabular}\n}\n\\caption{Visual examples of reconstruction quality for the motion deblurring 
inverse problem solved by U-Net and MoDL, as well as the associated residuals. Each residual is multiplied by 5 for ease of inspection. The initial forward model $A_0$ is a 7x7 motion blur with angle $10^\\circ$, and the $A_1$ model has a 7x7 motion blur kernel with angle $20^\\circ$. The analogous figure for the superresolution problem, and further examples, are available in the Supplement. Best viewed electronically.}\n\\label{fig:megafigmotblur}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\renewcommand*{\\arraystretch}{0}\n\\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}c@{}}\n & \\small Train w\/$A_0$ & \\small Train w\/$A_0$ & \\small P\\&P & \\small R\\&R & \\small R\\&R+ \\\\\n& \\small Test w\/$A_0$ & \\small Test w\/$A_1$ & & & \\\\\n\\begin{minipage}{0.08\\linewidth} \\small U-Net \\end{minipage} &\n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_orig__31.326417922973633_0.81668746.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_noadapt__27.354295253753662_0.79132897.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_pnp__30.4063081741333_0.81077015.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_rnr__29.316372871398926_0.797895.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_rnrp_31.570298671722412_0.8103058.png} \\\\\n\\vspace{2pt}\\begin{minipage}{0.08\\linewidth}\\small U-Net \\\\ Residual \\end{minipage} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_orig_err__31.326417922973633_0.81668746.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_noadapt_err__27.354295253753662_0.79132897.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_pnp_err__30.406277179718018_0.81077003.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_rnr_err__29.316372871398926_0.797895.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/unet\/111_rnrp_err__31.42012119293213_0.8079765.png} \\\\ \n\\vspace{2pt} \\begin{minipage}{0.08\\linewidth} \\small MoDL \\end{minipage} &\n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/modl\/111_orig__33.28503370285034_0.8331989.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/modl\/111_noadapt__23.445076942443848_0.6545583.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/modl\/111_pnp__31.869094371795654_0.82581556.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth,align=c]{figures\/revisionfigs\/mri\/modl\/111_rnr__31.056983470916748_0.82304215.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth,align=c]{figures\/revisionfigs\/mri\/modl\/111_rnrp__32.294108867645264_0.8277144.png} \\\\\n\\vspace{2pt}\\begin{minipage}{0.08\\linewidth}\\small MoDL \\\\ Residual \\end{minipage} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/modl\/111_orig_err__33.28503370285034_0.8331989.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/modl\/111_noadapt_err__23.445076942443848_0.6545583.png} & \n\\includegraphics[align=c,width = 0.11\\linewidth]{figures\/revisionfigs\/mri\/modl\/111_pnp_err__31.869094371795654_0.82581556.png} 
&\n\\includegraphics[align=c,width = 0.11\\linewidth,align=c]{figures\/revisionfigs\/mri\/modl\/111_rnr_err__31.056983470916748_0.82304215.png} &\n\\includegraphics[align=c,width = 0.11\\linewidth,align=c]{figures\/revisionfigs\/mri\/modl\/111_rnrp_err__32.294108867645264_0.8277144.png}\n\\end{tabular}\n\\caption{Visual examples of different reconstruction approaches for the MRI inverse problem under model drift, along with associated residuals. All residual images are scaled by 5x for ease of inspection. Best viewed electronically.}\n\\label{fig:megafigmri}\n\\end{figure*}\n\n\n\\subsection{Learning multiple forward models}\n\n\\definecolor{mygreen}{HTML}{01B051}\n\\definecolor{myblue}{HTML}{5B9BD5}\n\\definecolor{myred}{HTML}{FF0001}\n\\definecolor{myorange}{HTML}{ED7D31}\n\\definecolor{myyellow}{HTML}{FFC000}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\linewidth]{figures\/multimodelexp.pdf}\n\\caption{Na\\\"ively learning to deblur with a single network and multiple blur kernels sacrifices performance on all blurs. \nIn \\textcolor{mygreen}{\\bf green}, the test-time accuracy of a network trained to deblur multiple blurs, tested on a known kernel. In \\textcolor{myorange}{\\bf orange}, the same network, but tested on a new blur that was not used during training. In \\textcolor{black}{\\bf black}, our proposed P\\&P approach (Alg. \\ref{alg:pnp}), with a known model, and in \\textcolor{myyellow}{\\bf yellow} the same with a learned forward model. \\textcolor{myblue}{\\bf Blue} and \\textcolor{myred}{\\bf red} show the performance of our \\change{R\\&R} approach (Alg. \\ref{alg:rnr}), with and without a known forward model.}\n\\label{fig:multimodelexp}\n\\end{figure}\n\nIn this section we explore an alternative approach to model adaptation. In this setting, we assume that a set of candidate forward models are known during training time. During test time, a single forward model is used for measurement, but the test-time forward model is not known during training. This case represents the setting where the forward model might be parametrized, and so a reasonable approach may be to train the learned network using a number of different forward models to improve robustness.\n\nIn simple settings, training on multiple models might be reasonable. However, when the forward model parameterization is high-dimensional, learning to invert all possible forward models may be difficult. \n\nWe demonstrate this setting with a deblurring example, in which the same network is trained using a number of blur kernels. The blur kernels are the same kernels used for comparisons in \\cite{hradivs2015convolutional}. For consistency, we resize all 50 blur kernels to 7x7, and normalize the weights to sum to 1. We compare reconstruction accuracy when the ground truth blur kernel is included in the set of kernels used for training, as well as when the reconstruction network has never seen data blurred with the testing kernel.\n\nThe results are shown in Fig \\ref{fig:multimodelexp}. Experimentally, we find that training on multiple blur kernels simultaneously incurs a performance penalty as the number of blur kernels used in training increases. 
In this setting, where the forward model has many degrees of freedom and data is limited, attempting to learn to solve all models simultaneously is worse than transferring a single learned model, even in the absence of further ground truth data for calibration.\n\n\\change{\\subsection{Adapting to variable sampling rates in single-coil MRI}\\label{sec:mrisamplingexp}\n\nA particular concern raised in \\cite{antun2020instabilities} is related to the stability of a learned solver with respect to the level of undersampling at measurement time. In particular, the authors of that work observe that an image reconstruction system trained to recover images sampled at a particular rate would experience a degradation in reconstruction accuracy for \\emph{higher} sampling rates than the one the system was trained on.\n\nIn Fig. \\ref{fig:samplerate} we explore this problem in the MRI setting using a U-Net as the reconstruction method, and demonstrate that our R\\&R\\ method can adapt to this setting as well. Using R\\&R\\ during inference, a network trained at a $6\\times$ acceleration acquisition setting can be safely deployed at other accelerations \\emph{without significant degradation in reconstruction quality}, and \\emph{compares favorably to networks trained explicitly for other sampling rates}.\n\nFor comparison purposes, we also train a U-Net using multiple sampling masks. During training, the multiple-model solver is trained to reconstruct MRIs that are measured using the five different sampling patterns shown in Fig. \\ref{fig:samplerate}. We present the mean PSNR on the test set in Table \\ref{table:samplingratetable}, along with the mean test PSNR for applying R\\&R\\ to the multiple-model solver, assuming at test time that the sampling pattern is known. Reconstructions from the multiple-model solver can be found in the Supplement. We observe that training with multiple models means that at test time all models produce reasonable reconstructions, but at the cost of reconstruction quality compared to networks trained for single sampling patterns.\n\nIn this experiment, we also observe an interesting side-effect of R\\&R: when R\\&R\\ is used to ``adapt'' to a forward model $A_0$ that the original network was trained on, we tend to see an improvement in reconstruction quality. This effect is most pronounced for the U-Net trained to reconstruct multiple sampling patterns, but is also true for the ``dedicated'' solver, as demonstrated in Fig. \\ref{fig:samplerate}.\n}\n\n\\begin{figure*}[ht]\n \\centering\n \n \\includegraphics[width=\\linewidth]{figures\/revisionfigs\/accelfig.png}\n\\caption{\\change{The R\\&R\\ model adaptation approach permits using a U-Net trained for $6\\times$ acceleration on MRI reconstruction across a range of acceleration parameters. The various k-space sampling patterns used in these experiments are shown in the top row. Without adaptation (second row), the reconstruction quality decreases when changing the acceleration factor, \\emph{even when more k-space measurements are taken}, as originally observed in \\cite{antun2020instabilities}. The R\\&R\\ reconstructions (third row) compare favorably to the performance of networks trained on each particular k-space sampling pattern (bottom row). 
The PSNR of each image is presented in dB in yellow on each image.}}\n\\label{fig:samplerate}\n\\end{figure*}\n\n\\begin{table}\t\n\\centering\n\t\\begin{tabular}{c|c|c|c|c|c}\n\t\t\\hline\n\t & 2$\\times$ & 4$\\times$ & 6$\\times$ & 8$\\times$ & 12$\\times$ \\\\ \\hline\n\t\t Single-Model & 35.74 & 32.53 & 31.51 & \\bf 30.69 & \\bf 29.48 \\\\\n\t\t 6$\\times$ No adaptation & 27.02 & 30.20 & 31.51 & 27.76 & 26.15 \\\\\n\t\t 6$\\times$ R\\&R & 35.11 & \\bf 32.61 & \\bf 31.73 & 29.40 & 27.34 \\\\\n\t\t Multi-Model & 33.99 & 31.62 & 30.48 & 29.25 & 28.35 \\\\\n\t\t Multi-Model R\\&R & \\bf 35.80 & 32.35 & 30.81 & 29.60 & 28.61 \\\\\n\t\\end{tabular}\n\t\\vspace{0.5em}\n\t\\caption{\\change{Comparison of reconstruction PSNR for a variety of MRI acceleration factors for several different approaches. ``Multi-model'' refers to a U-Net trained for reconstruction on all shown sampling rates, whereas each column of the ``Single-Model'' results represents a network trained for that particular sampling pattern. $6\\times$ refers to the network shown in other experiments. Training a single-sample model consistently performs well for that particular forward model, but at the cost of lower performance on other accelerations, even for higher sampling rates. The multi-model approach sacrifices performance on any one forward model, but most of the difference can be removed by augmenting the multi-model network with our R\\&R\\ method.}}\n\t\\label{table:samplingratetable}\n\\end{table}\n\n\\change{\\subsection{Model adaptation under variable model overlap}\\label{sec:exp:nullspaceshift}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\linewidth]{figures\/revisionfigs\/samplingfig.png}\n\\caption{\\change{Comparison of the mean PSNR for the R\\&R\\ method and no adaptation for single-sample MRI reconstruction vs. the number of frequencies that differ between $A_0$ and $A_1$. The shaded areas represent the standard deviation of mean test PSNR over 10 runs, since frequencies are replaced randomly.}}\n\\label{fig:nullspaceoverlap}\n\\end{figure}\n\nIn this section we explore how varying the distance between the forward models $A_0$ and $A_1$ affects reconstruction quality, and how our proposed R\\&R\\ method deals with different amounts of overlap between $A_0$ and $A_1$. The forward model under investigation is $6\\times$ single-coil MRI reconstruction.\n\nTo explore variable levels of model drift in the single-coil MRI reconstruction case, we vary which k-space frequencies are sampled in a Cartesian pattern. Specifically, we construct a list of ``non-sampled'' frequencies and a list of ``sampled'' frequencies under $A_0$. We create $A_1$ by swapping $n$ ``sampled'' frequencies for $n$ frequencies in the original ``non-sampled'' list, to ensure that the new $A_1$ contains exactly $n$ frequencies that were not sampled under $A_0$. We do not swap the 4$\\%$ center frequencies in any test.\n\nIn Fig. \\ref{fig:nullspaceoverlap} we plot $n$ vs the mean PSNR over 10 separate instantiations of the above experiment for a no-adaptation approach as well as our R\\&R\\ method. We run 10 separate instances since the frequencies that are swapped, as well as what frequencies they are swapped to, is random, introducing some variance to the process. We also visually represent the maximum and minimum PSNR across all instances with shading.\n\nNote that the setup in this subsection is different than the experiments used in Fig. \\ref{fig:samplecompfig}, Table \\ref{table:psnrtable}, or Fig. \\ref{fig:megafigmri}. 
In those experiments, $A_1$ resamples using a Gaussian distribution, biasing the frequencies towards central frequencies, and does not check for overlaps. Since $A_0$ is also biased toward central frequencies, and all frequencies are uniformly swapped, the $A_1$'s in this section may be ``harder'' to reconstruct from than the original $A_1$. In this experiment, the number of changed frequencies acts as a proxy for the difference between $N(A_0)$ and $N(A_1)$, the null spaces of $A_0$ and $A_1$. This experiment also therefore illustrates the extent to which $f_0$ contains information about image components in the null space of $A_1$ for different $A_1$, since if $f_0$ did not contain any further information about the null space of $A_1$, we would expect the R\\&R\\ curve in Fig. \\ref{fig:nullspaceoverlap} to overlap with the ``No Adaptation'' curve.\n}\n\n\\subsection{Sample complexity}\\label{sec:exp:samplecomp}\n\n\\change{Our other experiments assume that model adaptation is performed at the level of individual test samples. However, in some settings we may have access to a \\emph{calibration set} of measurements under the new forward model $A_1$, which we can leverage to retrain the network using the P\\&P\\ approach.}\n\nIn the transfer learning setting, a key concern is the size of the transfer learning set necessary to achieve high-quality results. In this section we compare the performance of P\\&P across different calibration set sizes.\n\nIn Fig. \\ref{fig:samplecompfig} we explore the effect of the number of samples observed under the new forward model on the adapted model. We observe that even without knowing the forward model, a single calibration sample is sufficient to give an improvement over the ``naive'' method that replaces $A_0$ with $A_1$ without further retraining.\nWhen the forward model is known during calibration and testing, a single example image can result in a 2 dB improvement in PSNR.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\linewidth]{figures\/calibrationsamplecomp.png}\n\\caption{Performance of the P\\&P model adaptation approach for motion deblurring as a function of the number of calibration samples (blurred images) under the new forward model. Both of our approaches outperform a naive approach (``No Adaptation''), even without exact knowledge of the new forward model.}\n\\label{fig:samplecompfig}\n\\end{figure}\n\n\\subsection{Model-blind reconstruction with generative networks}\\label{sec:exp:gan}\n\nRecent work \\cite{bora2017compressed, anirudh2018unsupervised, basioti2020image, hussein2020image} has explored solving inverse problems using generative networks, which permit reconstruction under arbitrary forward models assuming an expressive enough generative network. In particular, \\cite{anirudh2018unsupervised} and \\cite{basioti2020image} consider the case where the forward model is either partially or entirely unknown, and hence may be learned by parameterizing and jointly optimizing over both the forward model and the latent code for the generative network.\n\n\\change{In Fig. \\ref{fig:ganfig} we provide an illustration of reconstructions obtained by the method of \\cite{hussein2020image}, compared to our proposed R\\&R\\ approach. 
In our demonstration, as in \\cite{hussein2020image}, the generative network under consideration is a pretrained Boundary Equilibrium GAN (BEGAN) \\cite{berthelot2017began}.\nThe reconstruction quality is higher when a model-specific network is used, especially when examining fine details and textures.}\n\nIn the absence of $(x_i, y_i)$ pairs, a generative approach may be reasonable. However, learning the data manifold in its entirety requires a great deal of data at minimum, along with a sufficiently large and well-tuned generator. The authors of \\cite{yeh2017semantic} also note this fundamental limitation: for smaller or simpler applications, learning a high-quality GAN is straightforward, but for more complex applications it is difficult to train GAN models that are sufficiently accurate to rely on for high-quality reconstructions.\n\n\\begin{figure}\n\\centering\n\\renewcommand*{\\arraystretch}{0}\n\\begin{tabular}{@{}c@{}c@{}c@{}}\n\\small Truth & \\small \\change{R\\&R$+$} & \\small IAGAN \\\\ \\vspace{2pt} \\\\\n\\subfloat{\\includegraphics[width = 0.15\\linewidth, height = 0.15\\linewidth]{figures\/ganfigs\/truth\/3_10.818476.png}} \\hfill &\n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/rnrsup\/3_rnr_0.94_36.72.png}} & \n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/gansuperres\/3_iagan_0.84_27.62.png}} \\\\\n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/truth\/5_8.551126.png}} &\n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/rnrsup\/5_rnr_0.88_31.34.png}} & \n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/gansuperres\/5_iagan_0.84_23.96.png}} \\\\\n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/truth\/26_8.849044.png}} & \n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/rnrsup\/26_rnr_0.95_31.64.png}} & \n\\subfloat{\\includegraphics[width = 0.15\\linewidth]{figures\/ganfigs\/gansuperres\/26_iagan_0.87_26.77.png}}\\\\\n\\end{tabular}\n\\caption{Comparison of model adaptation (\\change{R\\&R$+$}) with a model-blind GAN-based reconstruction approach \\change{(IAGAN \\cite{hussein2020image})} for 2$\\times$ super-resolution. While a GAN-based approach only requires learning a single generative network for all forward models, our results suggest that a network trained for a specific forward model with the same number of training samples gives better reconstructions. Best viewed electronically.}\n\\label{fig:ganfig}\n\\end{figure}\n\n\\section{Discussion and Conclusion}\n\nThis paper explores solutions to the fragility of learned inverse problem solvers in the face of model drift. We demonstrate across a range of simple, practical applications that using a learned image reconstruction network in settings even slightly different than they were trained in results in significant reconstruction errors, both subtle and obvious. We propose two model adaptation procedures: the first is based on a transfer learning approach that attempts to learn a perturbation to the pre-trained network parameters, which we call Parametrize and Perturb (P\\&P); \\change{the second reuses the network as an implicitly defined regularizer in an iterative model-based reconstruction scheme, which we call Reuse and Regularize (R\\&R). 
We also look at a hybrid approach combining these techniques, which we call R\\&R+.}\n\nWe show that our model adaptation techniques enable retraining\/reuse of learned solvers under a change in the forward model, even when the change in forward model is not known. In addition, we demonstrate that just learning to invert a variety of forward models at once is not necessarily the solution to the problem of model drift: directly training on many forward models empirically appears to cause reconstruction quality to fall across all learned models. We also show that our approach is superior to one that requires learning a model of the entire image space via a generative model.\n\n\\change{The proposed P\\&P, R\\&R, and R\\&R+ model adaptation approaches each have different trade-offs, and may be useful in different scenarios. In general, we observe that R\\&R+\\ produces better reconstructions than R\\&R\\ and P\\&P, but incurs significant computation and time costs associated with the network retraining specified in \\eqref{eq:rnrpcost}. The P\\&P\\ approach also incurs similar costs associated with network retraining. However, when a calibration set is available (as in Section \\ref{sec:exp:samplecomp}), the P\\&P\\ approach only needs to be retrained once, and the computational cost at deployment matches that of the original solver. On the other hand, we observe two significant benefits of the R\\&R\\ approach. First, empirically we observe that only a few iterations of R\\&R\\ (see Algorithm \\ref{alg:rnr}) tend to be required to give accurate results (namely, $5$ iterations in all our experiments), which increases computational cost by only a constant factor relative to the original reconstruction network. In addition, in the R\\&R\\ approach only one new parameter is introduced, in contrast to several parameters related to the optimization required for P\\&P\\ and R\\&R+. Finally, our experiments suggest that the improvement offered by R\\&R+ tends to be marginal relative to the improvement seen by going from no adaptation to R\\&R. Therefore, in situations where reconstruction time is crucial, model adaptation by R\\&R\\ may be preferred over R\\&R+.}\n\n\\change{One surprising benefit of the R\\&R\\ approach is that even in the absence of model drift (i.e., $A_0=A_1$) the reconstruction accuracy improves relative to the output from the reconstruction network. This is because R\\&R\\ iteratively modifies the output of the network to enforce data-consistency at test time. This may potentially resolve the issue raised in \\cite{sidky2020cnns} about whether learned image reconstruction networks are truly ``solving'' a given inverse problem, \\emph{i.e.},~ give a well-defined inverse map of the measurement model. However, to show this would require a much more detailed analysis of the estimator defined by the R\\&R\\ approach that is beyond the scope of this work.}\n\n\n\nAdapting learned inverse problem solvers to function under new forward models is just one step towards robustifying these powerful approaches. Our approach for unknown $A_1$ assumes an explicit parametrization of the forward model, but such a parametrization is not always straightforward or realistic. 
How best to adapt to complex changes in the forward model that are not easily parametrized is an important open question for future work; \\change{see \\cite{lunz2021learned} for one recent approach to learning non-parametric (and potentially non-linear) changes to forward models in an iterative reconstruction framework.}\n\n\\change{While this work focused on linear inverse problems, many of the principles introduced here extend also to non-linear inverse problems. For example, the R\\&R\\ approach, which is based on the regularization-by-denoising technique (RED), is readily adapted to non-linear problems amenable to a RED approach, which includes phase retrieval \\cite{metzler2018prdeep} among others.}\n\n\\change{Our empirical evidence suggests that successful model adaptation is possible provided the nullspaces (or approximate nullspaces) of $A_0$ and $A_1$ are close in some sense. However, in settings where the nullspaces of $A_0$ and $A_1$ are far apart, model adaptation may lead to artifacts or hallucinated details in the reconstructions. In order to understand these limitations of model adaptation, recent methods introduced to quantify hallucinations induced by neural-network based reconstructions may prove to be useful \\cite{bhadra2020hallucinations}.}\n\n\n\nFinally, while we focused our attention on model drift, an important open problem is how to adapt to simultaneous model and data distribution drift, and the extent to which these effects can be treated independently. We hope to address these questions in future work.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\nPartially identified models have been receiving extensive attention in recent years due to their broad applications in econometrics. Partial identification of a structural parameter arises when the available data and the constraints coming from economic theory only allow the parameter to be placed inside a proper subset of the parameter space. Due to the limitations of the data generating process, the data cannot provide any information within the set where the structural parameter is partially identified (called the \\textit{identified set}).\n\nThis paper aims at developing a semi-parametric Bayesian inference procedure for partially identified models. A Bayesian approach may be appealing for several reasons. First, Bayesian procedures often conduct inference through Bayesian credible sets (BCS's), which are relatively easy to construct thanks to the use of Markov Chain Monte Carlo (MCMC) methods. This is particularly useful when we are concerned about marginalizing the BCS to a low-dimensional space. In some situations, we are interested only in a projection of the identified region for subset inference. We demonstrate that our proposed approach provides tractable computational tools for projecting a high-dimensional identified region to a low-dimensional space. This has important implications for practical implementations.\n\n\n\nSecondly, our Bayesian procedures also have comprehensive frequentist justifications. In particular, our constructed BCS of the identified set also has a correct asymptotic frequentist coverage probability. We construct credible sets based on the support function; the latter completely characterizes convex and closed identified sets. We also show the Bernstein-von Mises theorem for the posterior distribution of the support function. To the best of our knowledge, this has not been studied in the literature yet. 
This powerful result in turn allows us to establish the (asymptotic) equivalence between BCS's and frequentist confidence sets (FCS's) for the identified set. The literature on partial identification distinguishes between credible\/confidence sets for the partially identified parameter and for the identified set. Credible sets for the identified set play an important role not only when the target of interest is the partially identified parameter but also when the identified set is itself the object of interest. While focusing on the study of BCS for the identified set, we also extend Moon and Schorfheide (2012)'s analysis for the partially identified parameter to a semi-parametric setup, which is relevant in more general \\textit{moment inequality models} where the likelihood function may be unknown. Moreover, if we admit the existence of a true value of the structural parameter and the identified set, the corresponding posterior distributions concentrate asymptotically in a neighborhood of the true value (set). This property is known as \\textit{posterior consistency}. It is important because it guarantees that, with a sufficiently large amount of data, we can recover the truth accurately with high probability.\n\n\n\n\nThird, putting a prior on the partially identified parameter can be viewed as a way of incorporating researchers' beliefs. A Bayesian approach conveniently combines the information from both the observed data and other sources of prior information. The prior information may come, for instance, from historical data, from experience, or from previous survey data. In some applications this information is widely available, \\textit{e.g.} in macroeconomics, central banking and finance. We stress that the prior information will not affect the boundary of the identified set, but will only play a role in determining which areas inside the identified set are a priori ``more likely'' than others. On the other hand, when specifying a prior distribution on the partially identified parameter is either difficult or conflicts with the philosophy of partial identification, a researcher can still use our procedure and either specify a uniform prior or just construct the BCS for the identified set for inference. The latter is possible due to an important feature of our procedure: the Bayesian analysis of the identified set does not require specifying a prior on the partially identified parameter. Therefore, we accommodate both situations where a researcher does have prior information and situations where she does not.\n\n\n\n\nFrom the posterior perspective, Bayesian partial identification produces a posterior distribution of the partially identified parameter whose support will asymptotically concentrate around the true identified set. When informative priors are available, the shape of the posterior density may not be flat inside the identified set, and will depend on the prior distribution even asymptotically. Therefore, the asymptotic behavior of the posterior distribution is different from that of the traditional point identified case, where the information from the prior is often washed away by the data asymptotically. 
Thus, the Bayesian approach to partial identification links conclusions and inferences to various information sources -- data, prior, experience, etc.-- in a transparent way.\n\nFinally, when the identified set depends on a point identified nuisance parameter, say $\\phi$, and this is integrated out with respect to its posterior, then the prior information on the partially identified parameter is completely revised by the data. Hence, the proposed procedure also learns about the partially identified parameter based on the whole posterior distribution of $\\phi$, which is potentially useful in finite samples. Consequently, there is a strong motivation for us to conduct a comprehensive Bayesian study for the partially identified econometric models.\\\\\n\n\n\nThere are in general two approaches in the literature on Bayesian partial identification. The first approach specifies a parametric likelihood function and assumes it is known up to a finite-dimensional parameter. This approach has been used frequently in the literature, see e.g., Moon and Schorfheide (2012), Poirier (1998), Bollinger and Hasselt (2009), Norets and Tang (2012) among many others. In many applications, however, econometric models usually only identify a set of moment inequalities instead of the full likelihood function. Examples are: interval-censored data, interval instrumental regression, asset pricing (Chernozhukov et al. 2008), incomplete structural models (Menzel 2011), etc. Assuming a parametric form of the likelihood function is ad-hoc in these applications. Once the likelihood is mis-specified, the posterior can be misleading. The second approach starts from a set of moment inequalities, and uses a moment-condition-based likelihood such as the limited information likelihood (Kim 2002) and the exponential tilted empirical likelihood (Schennach 2005). Further references may be found in Liao and Jiang (2010), Chernozhukov and Hong (2003) and Wan (2011). This approach avoids assuming the knowledge of the true likelihood function. However, it only studies the structural parameter, and it is hard to construct posteriors and credible sets for the identified set. Moreover, it does not have a Bayesian probabilistic interpretation.\n\n\nThis paper proposes a pure Bayesian procedure without assuming a parametric form of the true likelihood function. We place a nonparametric prior on the likelihood and obtain the marginal posterior distribution for the partially identified parameter and the identified set. A similar Bayesian procedure was recently used in Florens and Simoni (2011). As a result, our procedure is semi-parametric Bayesian and can make inference about both the partially identified parameter and its identified set easily. It only requires a set of moment conditions and then it can be completely nonparametric on the data generating process. This is an appealing feature in general moment inequality models. On the other hand, if the likelihood function is known, our procedure continues to work and this paper is still well-motivated. In fact, many contributions of this paper, e.g., Bayesian inference of the support function, construction of BCS for the identified set, subset inferences, etc., are relevant and original also for the case with a known likelihood.\n\n\n\n\n\n\n\n\n\n\n\n\nThere is a growing literature on Bayesian partially identified models. 
Besides those mentioned above, the list also includes Gelfand and Sahu (1999), Neath and Samaniego (1997), Gustafson (2012), Epstein and Seo (2011), Stoye (2012), Kitagawa (2012), Kline (2011), etc. There is also an extensive literature that analyzes partially identified models from a frequentist point of view. A partial list includes Andrews and Guggenberger (2009), Andrews and Soares (2010), Andrews and Shi (2013), Beresteanu, Molchanov and Molinari (2011), Bugni (2010), Canay (2010), Chernozhukov, Hong and Tamer (2007), Chiburis (2009), Imbens and Manski (2004), Romano and Shaikh (2010), Rosen (2008), Stoye (2009), among others. See Tamer (2010) for a review.\n\n\nWhen the identified set is closed and convex, the support function becomes one of the useful tools to characterize its properties. The literature on this perspective has been growing rapidly, see for example, Bontemps, Magnac and Maurin (2012), Beresteanu and Molinari (2008), Beresteanu et al. (2012), Kaido and Santos (2013), Kaido (2012) and Chandrasekhar et al. (2012). This paper is also closely related to the asymptotic nonparametric Bayesian literature: Wu and Ghosal (2008), Ghosh and Ramamoorthi (2003), Ghosal and van der Vaart (2001), Shen and Wasserman (2001), Ghosal et al. (1999), Amewou-Atisso et al. (2003), Walker et al. (2007), van der Vaart and van Zanten (2008), Bickel and Kleijn (2012), Jiang (2007), Choi and Ramamoorthi (2008), Castillo (2008), Freedman (1999), Rivoirard and Rousseau (2012), among others.\n\n\n\n\nThe paper is organized as follows. Section \\ref{highlight} outlines our main results and contributions. Section \\ref{s_general_setup} sets up the model and discusses the prior specification on the underlying likelihood function. Section \\ref{s_nonparametric_prior} studies the (marginal) posterior distribution of the structural parameter. Section \\ref{ssf} studies the posterior of the support function in moment inequality models. In particular, the Bernstein-von Mises theorem and a linear representation for the support function are obtained. Section \\ref{sbcs} constructs the Bayesian credible sets for both the structural parameter and its identified set. In addition, the frequentist coverages of these credible sets are studied. Section \\ref{s_Projection_Subset_Inference} addresses the subset inference when the target of interest is only a component of the full parameter. Section \\ref{s_posterior_consistency_set} shows the posterior consistency for the identified set and provides the concentration rate. Section \\ref{s_further_illustration_Uniformity} addresses the uniformity. In particular, it discusses the case when point identification is actually achieved. Section \\ref{s_Financial_asset_pricing} applies the support function approach to a financial asset pricing study.\nFinally, Section \\ref{s_Conclusion} concludes with further discussions. All the proofs are given in the appendix to this paper and in a supplementary appendix.\n\n\n\n\n\n\\section{Highlights of Our Contributions}\\label{highlight}\n\n\n\nThis section provides a global vision of our main contributions of this paper. Formal setup of the model starts from Section 3.\n\n\n\\subsubsection*{Semi-parametric Bayesian partial identification}\n\n\n\nWe focus on semi-parametric models where the true likelihood function may be unknown, which is more relevant in moment inequality models. 
Then there are three types of parameters in the Bayesian setup: $\\theta$, which is the partially identified structural parameter; $\\phi$, a point-identified parameter that characterizes the identified set, and the unknown likelihood $F$. The identified set can be written as $\\Theta(\\phi)$. According to the Bayesian philosophy, we treat the identified set as random, and construct its posterior distribution.\n\nWithout assuming any parametric form for the likelihood, we place a nonparametric prior $\\pi(F)$ on it. The posteriors of $\\phi$ and of the identified set can then be constructed via the posterior of $F$. Such a semi-parametric posterior requires only a set of moment inequalities, and therefore is robust to the likelihood specification. Moreover, to make inference about the partially identified $\\theta,$ we place a conditional prior $\\pi(\\theta|\\phi)$ supported only on $\\Theta(\\phi)$. Note that Bayesian inference for the identified set may be carried out based on the posterior of $\\Theta(\\phi)$ which does not depend on $\\pi(\\theta|\\phi)$. So the prior specification for $\\theta$ plays a role only in the inference about $\\theta$.\n\n\n\n\n\n\nFor these posteriors, we show that asymptotically $p(\\theta|Data)$ will be supported within an arbitrarily small neighborhood of the true identified set, and the posterior of $\\Theta(\\phi)$ also concentrates around the true set in the Hausdorff distance. These are the notion of \\emph{posterior consistency} under partial identification.\n\n\n\\subsubsection*{Support function}\n\n\nTo make inference on $\\Theta(\\phi)$ we can take advantage of the fact that when $\\Theta(\\phi)$ is closed and convex it is completely characterized by its \\textit{support function} $S_{\\phi}(\\cdot)$ defined as:\n$$\nS_{\\phi}(\\nu)=\\sup_{\\theta\\in\\Theta(\\phi)}\\theta^T\\nu\n$$\nwhere $\\nu\\in\\mathbb{S}^{\\dim(\\theta)}$, the unit sphere. Therefore, inference on $\\Theta(\\phi)$ may be conveniently carried out through inference on its support function. The posterior distribution of $S_{\\phi}(\\cdot)$ is also determined by that of $\\phi$. We show that in a general moment inequality model, the support function has an asymptotic linear representation in a neighborhood of the true value of $\\phi$, which potentially extends the inference in Bontemps et al. (2012) to nonlinear models.\n \nOur paper also establishes the Bernstein-von Mises theorem for the support function, that is, the posterior distribution of $S_{\\phi}(\\cdot)$ converges weakly to a Gaussian process. \nWe also calculate the support function for a number of interesting examples, including interval censored data, missing data, interval instrumental regression and asset pricing models.\n\n\n\\subsubsection*{Two-sided Bayesian credible sets for the identified set}\n\nWe construct two types of Bayesian credible sets (BCS's): one for the identified set $\\Theta(\\phi)$ and the other for the partially identified parameter $\\theta$. In particular, the BCS for the {identified set} is constructed based on the support function, is two-sided, and has an asymptotically correct frequentist coverage probability. 
Specifically, we find sets $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}$ and $\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ satisfying, for level $1-\\tau$ where $\\tau\\in (0,1)$,\n\nBayesian coverage:\n\\begin{equation}\\label{eq2.1add}\nP(\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi)\\subset\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}|Data)=1-\\tau;\n\\end{equation}\n\nFrequentist coverage:\n\\begin{equation}\\label{eq2.2add}\nP_{D_n}(\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}})\\geq1-\\tau,\n\\end{equation}\nwhere $P_{D_n}$ denotes the sampling probability, and $\\Theta(\\phi_0)$ is the true identified set. In (\\ref{eq2.1add}) the random set is $\\Theta(\\phi)$ while in (\\ref{eq2.2add}) the random sets are $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}$ and $\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$. One of the important features is that the BCS for the identified set does not require specifying a prior on the partially identified parameter. The notations $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}$, $\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$, $\\hat\\phi$ and $q_{\\tau}$ are formally defined later in the paper. Therefore, the constructed two-sided BCS can also be used as a frequentist confidence set for the identified set.\n\n\n\nFurthermore, we find that in the semi-parametric Bayesian model, the conclusion of Moon and Schorfheide (2012) about the BCS for the partially identified parameter $\\theta$ still holds: it is smaller than frequentist confidence sets (FCS's) in large samples.\nHence, while the BCS for the partially identified parameter does not have a correct frequentist coverage, the asymptotic equivalence between the BCS and the FCS for the identified set holds. Intuitively, this is because the prior information still plays an important role in the posterior of the partially identified parameter even asymptotically; the identified set, on the other hand, is ``point identified\", its BCS is independent of the prior on $\\theta$, and its prior information is ``washed away'' asymptotically. Thus, the proposed inference for the identified set and the support function is asymptotically robust to their prior specification.\n\n\n\\subsubsection*{Projection and subset inference}\n\nWe show that with our approach it is easy to project\n(marginalize) onto low-dimensional subspaces for subset inferences, and the computation is fast.\nSuppose the dimension of $\\theta$ is relatively large, but we are interested in only a few components of $\\theta$, and aim to make inference about these components and their marginal identified set. In our approach, constructing the identified set and BCS for the marginal components simply requires the marginalization of a joint distribution and can be carried out efficiently thanks to the use of MCMC methods. It is also computationally convenient to calculate the BCS for the marginal identified set. Hence, the proposed procedure has great potential in many empirical applications.\n\n\\subsubsection*{Uniformity}\n\nThe proposed Bayesian inference for the identified set is valid uniformly over a class of data generating processes.
In particular, using specific examples, we illustrate that as the identified set shrinks to a singleton, so that point identification is (nearly) achieved, our Bayesian inference for the identified set carries over.\n\n\n\\subsubsection*{Applications}\n\nWe develop a detailed application of Bayesian partial identification to financial asset pricing, which is an example where the identified set is of direct interest. Estimation and inference for the support function as well as for the identified set are conducted. Moreover, throughout the paper, we study in detail other typical examples including the interval censoring, interval regression and missing data problems.\n\n\n\n\n\n\n\n\n\n\n\n\\section{General Setup of Bayesian Partially Identified Model}\\label{s_general_setup}\n\n\\subsection{The Model}\nEconometric models often involve a structural parameter $\\theta\\in\\Theta$ that is only partially identified by the data generating process (DGP) on a non-singleton set, which we call \\textit{identified set}. The model also contains two parameters that are point identified by the DGP: a finite-dimensional parameter $\\phi\\in\\Phi\\subset\\mathbb{R}^{d_{\\phi}}$ and the distribution function $F$ of the observed data, which is infinite-dimensional. Here, $\\Phi$ denotes the parameter space for $\\phi$ and $d_{\\phi}$ its dimension. The point identified parameter $\\phi$ often arises naturally as it characterizes the data distribution. In most of partially identified models, the identified set is also characterized by $\\phi$, hence we denote it by $\\Theta(\\phi)$ to indicate that once $\\phi$ is determined, so is the identified set. Let $d=\\dim(\\theta)$ and $\\Theta\\subset\\mathbb{R}^{d}$ denote the parameter space for $\\theta$; we assume $\\Theta(\\phi)\\subseteq\\Theta$.\n\nIn a parametric Bayesian partially identified model as in Poirier (1998), Gustafson (2012) and Moon and Schorfheide (2012), $F$ is linked with a known likelihood function to $\\phi$. However, as in the usual point identified models, in some applications assuming a known likelihood function may suffer from a model specification problem, and may lead to misleading conclusions. Instead, econometric applications often involve only a set of moment conditions as in (\\ref{eq2.1}) below. This gives rise to the \\textit{moment inequality models}. A parametric form of the likelihood function and of $F$ can be unavailable in these models. A robust approach is to proceed without assuming a parametric form for the likelihood function, but to put a prior on $(\\theta,\\phi,F)$ instead. This yields the semi-parametric Bayesian setup.\n\n\nWe specify a nonparametric prior on data's cumulated distribution function (CDF) $F$, which can deduce a prior for $\\phi $ through a transformation $\\phi=\\phi(F)$, as $\\phi$ often is a functional of $F$. Moreover, the prior on the identified set $\\Theta(\\phi)$ is determined through that of $\\phi$.\n Due to the identification feature, for any given $\\phi\\in\\Phi$, we specify a conditional prior $\\pi(\\theta|\\phi)$ such that\n$$\n\\pi(\\theta\\in\\Theta(\\phi)|\\phi)=1.\n$$\nBy construction, this prior for $\\theta$ puts all its mass on $\\Theta(\\phi)$ for any $\\phi\\in\\Phi$. So it takes the form:\n \\begin{displaymath}\n \\pi(\\theta|\\phi) \\propto I_{\\theta\\in\\Theta(\\phi)}g(\\theta),\n \\end{displaymath}\nwhere $g(\\cdot)$ is some probability density function and $I_{\\theta\\in\\Theta(\\phi)}$ is the indicator function of $\\Theta(\\phi)$. 
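As a purely illustrative sketch (not part of the formal development), the code below indicates how one may sample from the conditional prior $\\pi(\\theta|\\phi)\\propto I_{\\theta\\in\\Theta(\\phi)}g(\\theta)$ when the identified set is an interval $\\Theta(\\phi)=[\\phi_1,\\phi_2]$, as in the interval censored data example presented below; here $g$ is taken, only for the illustration, to be the standard normal density, and plain rejection sampling is used.
\\begin{verbatim}
# Minimal sketch: draws from pi(theta | phi), proportional to
# I{phi1 <= theta <= phi2} * g(theta), with g the N(0,1) density.
import numpy as np

rng = np.random.default_rng(0)

def sample_conditional_prior(phi1, phi2, size=1000):
    draws = []
    while len(draws) < size:
        theta = rng.normal()          # proposal from g
        if phi1 <= theta <= phi2:     # keep only draws inside Theta(phi)
            draws.append(theta)
    return np.array(draws)

# Example: Theta(phi) = [0.2, 1.5]; all prior mass lies in the identified set.
theta_draws = sample_conditional_prior(0.2, 1.5)
\\end{verbatim}
Rejection sampling is used only for simplicity; it becomes slow when $[\\phi_1,\\phi_2]$ carries little mass under $g$, in which case a truncated sampler is preferable.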
In Section \\ref{app_prior_theta} we discuss the philosophy of specifying the prior on $\\theta$.\n\n\n\n\n\nOur analysis focuses on the situation where $\\Theta(\\phi)$ is a closed and convex set for each $\\phi$. Therefore, $\\Theta(\\phi)$ can be uniquely characterized by its \\emph{support function}. For any fixed $\\phi$, the support function for $\\Theta(\\phi)$ is a function $S_{\\phi}(\\cdot): \\mathbb{S}^d\\rightarrow\\mathbb{R}$ such that\n \\begin{displaymath}\n S_{\\phi}(\\nu)= \\sup_{\\theta\\in\\Theta(\\phi)}\\theta^T\\nu.\n \\end{displaymath}\nwhere $\\mathbb{S}^d$ denotes the unit sphere in $\\mathbb{R}^d$. The support function plays a central role in convex analysis since it determines all the characteristics of a convex set. Hence, it is one of the essential objects for our Bayesian inference. In a similar way as for $\\Theta(\\phi)$, we put a prior on $S_{\\phi}(\\cdot)$ via the prior on $\\phi$.\n\n\nSuppose $p(\\phi|D_n)$ denotes the posterior of $\\phi$, given the data $D_n$ and a prior $\\pi(\\phi)$. It is readily seen that (see e.g., Poirier 1998) the joint posterior of $(\\theta, \\phi)$ is given by\n$$\np(\\theta, \\phi|D_n)\\propto\\pi(\\theta|\\phi)p(\\phi|D_n).\n$$\nBy integrating out $\\phi$, we obtain the marginal posterior for $\\theta$. On the other hand, the posteriors of $\\Theta(\\phi)$ and $S_{\\phi}(\\cdot)$ are also determined through the marginal posterior $p(\\phi|D_n)$. This also highlights an important feature of this paper: our results on $\\Theta(\\phi)$ and the support function do not require placing a prior on the partially identified parameter $\\theta$, because as far as $p(\\phi|D_n)$ is concerned, the prior for $\\theta$ is not needed at all. Furthermore, as the identified set and support function are ``point identified'', their posteriors are asymptotically robust to the prior specifications on $\\phi$.\n\n\n\n\n\n\n\nLet us present a few examples that have received much attention in partially identified econometric models literature. In the rest of the paper, we denote by $X$ the observable random variable for which we have $n$ i.i.d. observations $D_n=\\{X_i\\}_{i=1}^n$. Let $(\\mathcal{X},\\mathcal{B}_{x},F)$ denote a probability space in which $X$ takes values and $\\mathcal{F}$ denote the parameter space of $F$.\n\n\n\n\n\n\n \\begin{comment}\n These moment restrictions can be written as moment inequalities through the use of a function $\\Psi: \\Theta \\times \\Phi \\rightarrow \\mathbb{R}^{m}$ as\n\n \\begin{equation}\\label{eq_moment_inequalities}\n \\Psi(\\theta,\\phi) \\leq 0.\n \\end{equation}\n\n\\noindent By denoting $\\Psi_{i}(\\theta,\\phi)$ the $i$-th component of $\\Psi(\\theta,\\phi)$, then (\\ref{eq_moment_inequalities}) is equivalent to $\\Psi_{i}(\\theta,\\phi)\\leq 0$ for all $1\\leq i\\leq m$. A common feature of many settings we are considering is that the parameter $\\theta$ does not characterize the sampling distribution $F$ and the relationship between $(F,\\phi)$ and $\\theta$ is only described by the moment restrictions in (\\ref{eq_moment_inequalities}).\\\\\n\\indent For a given value of $\\phi$ we denote by $\\Theta_{0}(\\phi)$ the set of all $\\theta\\in \\Theta$ that satisfy the moment restrictions:\n\n \\begin{displaymath}\n \\Theta_{0}(\\phi) \\equiv \\left\\{\\theta\\in\\Theta; \\; \\Psi(\\theta,\\phi) \\leq 0\\right\\}.\n \\end{displaymath}\n\n\\noindent We refer to this set as the \\emph{identified set}. 
The structural parameter $\\theta$ is \\emph{point-identified} if $\\forall\\phi\\in\\Phi$, $\\Theta_{0}(\\phi)$ is a singleton; otherwise, we say that $\\theta$ is \\emph{partially identified}. In this paper, we focus on the partially identified case where $\\Theta_{0}(\\phi)$ is a nonsingleton set.\\\\\n\n\n\\noindent where $<\\cdot,\\cdot>$ denotes the inner product in $\\mathbb{R}^{d}$. Since the support function is homogeneous of degree $1$, we restrict its domain to the unit sphere $\\mathbb{S}^{d}$ in $\\mathbb{R}^{d}$ as it is common in the literature. Therefore, $S_{\\phi}(\\cdot):\\mathbb{S}^{d}\\rightarrow \\mathbb{R}$.\n\n\\end{comment}\n\n\\begin{exm}[Interval censored data]\\label{ex2.1}\n Let $(Y,Y_{1},Y_{2})$ be a $3$-dimensional random vector such that $Y\\in[Y_{1},Y_{2}]$ with probability one. The random variables $Y_{1}$ and $Y_{2}$ are observed while $Y$ is unobservable (see, e.g., Moon and Schorfheide 2012). We denote: $\\theta=E(Y)$ and $\\phi=(\\phi_{1},\\phi_{2})' \\equiv (E(Y_{1}),E(Y_{2}))'$. Therefore, we have the following\nidentified set for $\\theta$: $\\Theta(\\phi)=[\\phi_1, \\phi_2]$. The support function for $\\Theta(\\phi)$ is easy to derive:\n \\begin{displaymath}\n \n S_{\\phi}(1)=\\phi_2, \\quad S_{\\phi}(-1)=-\\phi_1.\n \\end{displaymath}\n The non-parametric prior specification on the likelihood is to be discussed in Section \\ref{sss_nonparametric_prior}.\n$\\square$\n\n\n \n \n \n\n\\end{exm}\n\n\\begin{exm}[Interval regression model]\\label{ex2.2}\n The regression model with interval censoring has been studied by, for example, Haile and Tamer (2003). Let $(Y,Y_{1},Y_{2})$ be a $3$-dimensional random vector such that $Y\\in[Y_{1},Y_{2}]$ with probability one. The random variables $Y_{1}$ and $Y_{2}$ are observed while $Y$ is unobservable. Assume that\n$$\nY=x^T\\theta+\\epsilon\n$$\nwhere $x$ is a vector of observable regressors. In addition, assume there is a $d$-dimensional vector of nonnegative exogenous variables $Z$ such that $E(Z\\epsilon)=0$. Here $Z$ can be either a vector of instrumental variables when $X$ is endogenous, or a nonnegative transformation of $x$ when $x$ is exogenous. It follows that\n\\begin{equation}\\label{eq2.1}E(ZY_1)\\leq E(ZY)=E(Zx^T)\\theta\\leq E(ZY_2).\n\\end{equation}\nWe denote $\\phi=(\\phi_1, \\phi_2, \\phi_3)$ where $(\\phi_1^T,\\phi_3^T)=(E(ZY_1)^T, E(ZY_2)^T)$ and $\\phi_2=E(Zx^T).$ Then the identified set for $\\theta$ is given by $\\Theta(\\phi)=\\{\\theta\\in\\Theta: \\phi_1\\leq \\phi_2\\theta\\leq \\phi_3\\}.$ Suppose $\\phi_2^{-1}$ exists. The support function for $\\Theta(\\phi)$ is given by (denote $(x)_i$ as the $i$th component of $x$)\\footnote{See Appendix \\ref{s_appendix_support_function_interval_regression} in the supplementary material for detailed derivations of the support function in this example. Similar results but in a slightly different form are presented in Bontemps et al. (2012).}:\n$$\nS_{\\phi}(\\nu)=\\nu^T\\phi_2^{-1}\\left(\\frac{\\phi_1+\\phi_3}{2}\\right)+\\alpha_{\\nu}^T\\left(\\frac{\\phi_3-\\phi_1}{2}\\right),\\quad \\nu\\in\\mathbb{S}^d\n$$\nwhere $\\alpha_{\\nu}=( |(\\nu^T\\phi_2^{-1})_1|,..., |(\\nu^T\\phi_2^{-1})_d|)^T.$\n \n\n\n\n\n $\\square$\n\\end{exm}\n\n\\begin{exm}[Missing data]\\label{ex2.3}\n Consider a bivariate random vector $(Y,M)$ where $M$ is a binary random variable which takes the value $M=0$ when $Y$ is missing and $1$ otherwise. Here $Y$ represents whether a treatment is successful ($Y=1$) or not ($Y=0$). 
The parameter of interest is the probability $\\theta=P(Y=1)$. This problem without the missing-at-random assumption has been extensively studied in the literature, see for example, Manski and Tamer (2002), Manski (2003), etc. Since $P(Y=1|M=0)$ cannot be recovered from the data, the empirical evidence partially identifies $\\theta$ and $\\theta$ is characterized by the following moment restrictions:\n \\begin{displaymath}\n P(Y=1|M=1)P(M=1) \\leq \\theta \\leq P(Y=1|M=1)P(M=1)+ P(M=0).\n \\end{displaymath}\n Here, $\\phi=(P(M=1), P(Y=1|M=1))=(\\phi_1,\\phi_2)$. The identified set is $\\Theta(\\phi)=[\\phi_1\\phi_2,\\phi_1\\phi_2+1-\\phi_1]$,\n and its support function is:\n$S_{\\phi}(1) = \\phi_1\\phi_2+1-\\phi_1$, $S_{\\phi}(-1)=-\\phi_1\\phi_2$.\n\n\\end{exm}\n\n\\subsection{Nonparametric prior scheme for $(\\phi, F)$}\\label{sss_nonparametric_prior}\n\n\n\nWhen the model only specifies a set of moment inequalities, we can place a non-parametric prior on the likelihood function through $F$, e.g., a Dirichlet process prior. Since $\\phi$ is point identified, we assume it can be rewritten as a measurable function of $F$ as $\\phi = \\phi(F)$. The prior distribution for $\\phi$ is then deduced from that of $F$ via $\\phi(F)$. The Bayesian experiment is (we use the notation ``$\\sim$\" to mean ``distributed as\")\n$$\n X|F \\sim F,\\qquad F\\sim \\pi(F), \\qquad \\theta|\\phi \\sim \\pi(\\theta|\\phi(F))\n$$\nFor instance, in the interval censored data example \\ref{ex2.1}, let $F$ be the joint CDF of $(Y_1, Y_2)$, then $(\\phi_1, \\phi_2)=\\phi(F)=(E(Y_1|F), E(Y_2|F))$, and the identified set is modeled as $\\Theta(\\phi)=[\\phi_1(F), \\phi_2(F)]$, which is a set-valued function of $F$.\n\n\nThe prior distribution $\\pi(F)$ is a distribution on $\\mathcal{F}$. Examples of such a prior include Dirichlet process priors (Ferguson 1973) and Polya tree (Lavine 1992). The case where $\\pi(F)$ is a Dirichlet process prior in partially identified models is proposed by \\cite{FlorensSimoni2011}.\n\n\n\nLet $p(F|D_n)$ denote the marginal posterior of $F$ which, by abuse of notation, can be written $p(F|D_n)\\propto\\pi(F)\\prod_{i=1}^nF(X_i)$. The posterior distributions of $\\phi$, $\\Theta(\\phi)$, and the support function $S_{\\phi}(\\cdot)$ are deduced from the posterior of $F$, but do not depend on the prior on $\\theta$. Moreover, it can be shown that $p(\\theta|\\phi(F), D_n)=\\pi(\\theta|\\phi(F)).$ Then, for any measurable set $B\\subset\\Theta$, the marginal posterior probability of $\\theta$ is given by, averaging over $F$:\n \\begin{eqnarray*}\\label{eq2.3}\n P(\\theta\\in B|D_n) & = & \\int_{\\mathcal{F}}P(\\theta\\in B |\\phi(F), D_n)p(F|D_n)dF\\nonumber\\\\\n & = & \\int_{\\mathcal{F}}\\pi(\\theta\\in B|\\phi(F))p(F|D_n)dF = E[\\pi(\\theta\\in B|\\phi(F))|D_n]\n \\end{eqnarray*}\nwhere the conditional expectation is taken with respect to the posterior of $F$. The above posterior is easy to calculate via simulation when $F$ has a Dirichlet process prior.\n\n\n\nAn alternative prior scheme for $(\\phi, F)$ consists in putting a prior on $\\phi$ directly. This is particularly useful when there is informative prior information for $\\phi$. It models the unknown likelihood function semi-parametrically through reformulating $F$ as $F=F_{\\phi, \\eta}$ where $\\eta$ is an infinite-dimensional nuisance parameter (often a density function) that is \\textit{a priori} independent of $\\phi$. The prior on $(\\phi, F)$ is then deduced from the prior on $(\\phi,\\eta)$. 
We describe this alternative semi-parametric prior in Appendix \\ref{sss_semiparametric_prior}.\n\n\n\n\\section{Bayesian Inference for $\\theta$}\\label{s_nonparametric_prior}\n\n\\subsection{Putting priors on partially identified $\\theta$}\\label{app_prior_theta}\n\nIn this section we briefly discuss the meaning of the prior $\\pi(\\theta|\\phi)$. As stated in Tamer (2010): ``(Partial identification) links conclusions drawn from various\nempirical models to sets of assumptions made in a transparent way.\nIt allows researchers to examine the informational content of their\nassumptions and their impacts on the inferences made.''\n\nBy imposing a prior on the partially identified parameter $\\theta$, we reflect how prior beliefs and\/or assumptions can impact the associated statistical inference. To illustrate the rationale of imposing such a prior, let us consider the missing data example (Example \\ref{ex2.3}). Writing $\\alpha=P(Y=1|M=0)$, we then link $\\theta$ with $\\alpha$ by\n\\begin{equation}\\label{eq2.2}\n\\theta=\\phi_2\\phi_1+\\alpha(1-\\phi_1).\n\\end{equation}\nAs $\\phi$ is point identified, statistical inference about $\\theta$ therefore relies on the treatment of $\\alpha$. Different ways of dealing with $\\alpha$ reflect different researchers' prior beliefs, which also correspond to the ``informational content of their assumptions\".\n\nFrom a Bayesian point of view, this is accomplished by putting a prior distribution $\\pi(\\alpha)$ on $\\alpha$, supported on $[0,1]$ (possibly also depending on $\\phi$). A traditional exogeneity assumption such as missing-at-random corresponds, in this case, to a point-mass prior at $\\alpha=\\phi_2=P(Y=1|M=1)$. The more concentrated the prior, the stronger the assumptions we impose on the missing mechanism. Such a prior distribution can also come from a previous study based on a different dataset that contains information about $\\alpha$, when only summary statistics rather than the complete data are available. Finally, when no informative knowledge about $\\alpha$ is available, a uniform prior on $[0,1]$ can be imposed for $\\alpha$, which reduces to Manski's bounds approach.\n\nGiven the imposed distribution $\\pi(\\alpha)$ that reflects researchers' assumptions or beliefs about the missing mechanism, we can deduce a conditional prior for $\\theta$ through (\\ref{eq2.2}) given $\\phi=(\\phi_1,\\phi_2)$.\nAs a result, putting a prior on the partially identified parameter can be viewed as a way of incorporating researchers' assumptions about the missing mechanism. These assumptions range from the traditional exogeneity approach to the most robust bounds approach, which also bridges point identification and partial identification.\n\n\n\\subsection{Posterior Consistency for $\\theta$}\nThe shape of the posterior of a partially identified parameter still relies upon its prior distribution asymptotically, which distinguishes it from the asymptotic posterior behavior in the classical point identified case. On the other hand, the support of the prior distribution of $\\theta$ is revised after data are observed and eventually converges towards the true identified set asymptotically. The latter corresponds to the frequentist consistency of the posterior distribution for partially identified parameters.
Posterior consistency is one of the benchmarks of a Bayesian procedure under consideration, which ensures that with a sufficiently large amount of data, it is nearly possible to discover the true identified set.\n\n\n\n\n\nWe assume there is a true value of $\\phi$, denoted by $\\phi_0$, which induces a true identified set $\\Theta(\\phi_0)$, and a true $F$, denoted by $F_{0}$. Our goal is to achieve the frequentist \\textit{posterior consistency} for the partially identified parameter: for any $\\epsilon > 0$ there is $\\tau \\in (0,1]$ such that\n$$\nP(\\theta\\in\\Theta(\\phi_0)^{\\epsilon}|D_n)\\rightarrow^p 1, \\text{ and }\\quad P(\\theta\\in\\Theta(\\phi_0)^{-\\epsilon}|D_n)\\rightarrow^p (1 - \\tau).\n$$\nHere $ \\Theta(\\phi)^{\\epsilon} $ and $ \\Theta(\\phi)^{-\\epsilon} $ are the $\\epsilon$-envelope and $\\epsilon$-contraction of $\\Theta(\\phi)$, respectively:\n \\begin{equation}\\label{eq_e_envelope}\n \\Theta(\\phi)^{\\epsilon} = \\{\\theta\\in\\Theta: d(\\theta, \\Theta(\\phi))\\leq\\epsilon\\},\\quad \\Theta(\\phi)^{-\\epsilon} = \\{\\theta\\in\\Theta(\\phi): d(\\theta, \\Theta\\backslash\\Theta(\\phi))\\geq\\epsilon\\},\n \\end{equation}\n with $\\Theta\\backslash\\Theta(\\phi) =\\{\\theta\\in\\Theta; \\theta\\notin \\Theta(\\phi)\\}$ and $d(\\theta,\\Theta(\\phi))=\\inf_{x\\in\\Theta(\\phi)}\\|\\theta-x\\|$. Note that this result still carries over when $\\theta$ is point identified, in which case $\\Theta(\\phi)^{\\epsilon}$ is an $\\epsilon$-ball around $\\theta$, $\\Theta(\\phi)^{-\\epsilon}$ is empty, and $\\tau=1.$\n\n\n\n\n \n \n\nThe likelihood function is endowed with a prior through either the nonparametric prior $\\pi(F)$ as described in Section \\ref{sss_nonparametric_prior} or the semi-parametric prior $\\pi(\\phi)$ as described in Appendix \\ref{sss_semiparametric_prior}. We assume that the priors $\\pi(F)$ and $\\pi(\\phi)$ specified for $F$ and $\\phi$ are such that the corresponding posterior distribution of $p(\\phi|D_n)$ is consistent.\n\n\\begin{assum}\\label{ass3.1}\n At least one of the following holds:\n\n \\begin{itemize}\n \\item[(i)] The measurable function $\\phi(F):\\mathcal{F}\\rightarrow\\Phi$ is continuous. The prior $\\pi(F)$ is such that the posterior $p(F|D_{n})$ satisfies:\n\n \\begin{displaymath}\n \\int_{\\mathcal{F}} m(F)p(F|D_n)dF \\rightarrow^p \\int_{\\mathcal{F}} m(F)\\delta_{F_0}(dF)\n \\end{displaymath}\n\n for any bounded and continuous function $m(\\cdot)$ on $\\mathcal{F}$ where $\\delta_{F_0}$ is the Dirac function at the true distribution function $F_0$;\n\n \\item[(ii)] The prior $\\pi(\\phi)$ is such that the posterior $p(\\phi|D_{n})$ satisfies:\n\n \\begin{displaymath}\n \\int_{\\Phi} m(\\phi)p(\\phi|D_n)d\\phi \\rightarrow^p \\int_{\\Phi} m(\\phi)\\delta_{\\phi_0}(d\\phi)\n \\end{displaymath}\n\n for any bounded and continuous function $m(\\cdot)$ on $\\Phi$ where $\\delta_{\\phi_0}$ is the Dirac function at the true $\\phi_0$.\n\n \\end{itemize}\n\\end{assum}\n\nAssumptions \\ref{ass3.1} \\textit{(i)} and \\textit{(ii)} refer to the nonparametric and semi-parametric prior scheme respectively, and are verified by many nonparametric and semi-parametric priors. Examples are: Dirichlet process priors, Polya Tree process priors, Gaussian process priors, etc. 
For instance, when $\\pi(F)$ is the Dirichlet process prior, the second part of Assumption \\ref{ass3.1} \\textit{(i)} was proved in Ghosh and Ramamoorthi (2003, Theorem 3.2.7) while the condition that $\\phi(F)$ is continuous in $F$ is verified in many examples relevant for applications. For instance, in Example \\ref{ex2.1}, $\\phi(F)=(E(Y_1|F),E(Y_2|F))^T$ and in Example \\ref{ex2.2}, $\\phi(F)=(E(ZY_1|F), E(ZX^T|F), E(ZY_2|F))$, which are all bounded linear functionals of $F$.\nWe refer to \\cite{GhoshRamamoorthi2003} for examples and sufficient conditions for this assumption.\n\n\\begin{assum}[Prior for $\\phi$]\\label{ass3.2} For any $\\epsilon>0$ there are measurable sets $A_2\\subset A_1\\subset\\Phi$ such that $0<\\pi(\\phi\\in A_i)\\leq 1$, $i=1,2$ and \\\\\n(i) for all $\\phi\\in A_1$, $\\Theta(\\phi_0)^{\\epsilon}\\cap \\Theta(\\phi)\\neq\\emptyset$; for all $\\phi\\notin A_1$, $\\Theta(\\phi_0)^{\\epsilon}\\cap \\Theta(\\phi)=\\emptyset$,\\\\\n(ii) for all $\\phi\\in A_2$, $\\Theta(\\phi_0)^{-\\epsilon}\\cap \\Theta(\\phi)\\neq\\emptyset$; for all $\\phi\\notin A_2$, $\\Theta(\\phi_0)^{-\\epsilon}\\cap \\Theta(\\phi)=\\emptyset$.\n\\end{assum}\n \n\n\n\n\nAssumption \\ref{ass3.2} is satisfied as long as the identified set $\\Theta(\\phi)$ is bounded and the prior of $\\phi$ spreads over a large support of the parameter space. This assumption allows us to prove the posterior consistency without assuming the prior $\\pi(\\theta|\\phi)$ to be a continuous function of $\\phi$, and therefore priors like $I_{\\phi_1<\\theta<\\phi_2}$ in the interval censoring data example are allowed. Under this assumption\nthe conditional prior probability of the $\\epsilon$-envelope of the true identified set can be approximated by a continuous function, that is, there is a sequence of bounded and continuous functions $h_m(\\phi)$ such that (see lemma \\ref{LemB.1} in the appendix) almost surely in $\\phi$:\n$$\n\\pi (\\theta\\in\\Theta(\\phi_0)^{\\epsilon}|\\phi) = \\lim_{m\\rightarrow\\infty}h_m(\\phi).\n$$\n\\noindent A similar approximation holds for the conditional prior of the $\\epsilon$-contraction $\\pi(\\theta\\in\\Theta(\\phi_0)^{-\\epsilon}|\\phi)$.\n\n\n\\begin{assum}[Prior for $\\theta$]\\label{ass3.3}\nFor any $\\epsilon>0$, and $\\phi\\in\\Phi$, $\\pi(\\theta\\in\\Theta(\\phi)^{-\\epsilon}|\\phi)<1.$\n\\end{assum}\n\n In the special case when $\\theta$ is point identified ($\\Theta(\\phi)$ is a singleton), the $\\epsilon$-contraction is empty and thus $\\pi(\\theta\\in\\Theta(\\phi)^{-\\epsilon}|\\phi)=0.$\n\n\nAssumption \\ref{ass3.3} is an assumption on the prior for $\\theta$, which means the identified set should be \\textit{sharp} with respect to the prior information. Roughly speaking, the support of the prior should not be a proper subset of any $\\epsilon$-contraction of the identified set $\\Theta(\\phi)$. If otherwise the prior information restricts $\\theta$ to be inside a strict subset of $\\Theta(\\phi)$ so that Assumption \\ref{ass3.3} is violated, then that prior information should be taken into account in order to shrink $\\Theta(\\phi)$ to a sharper set. In that case, the posterior will asymptotically concentrate around a set that is smaller than the set identified by the data alone. 
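For instance, under the interval censored setup of Example \\ref{ex2.1}, with $\\Theta$ an interval strictly containing $[\\phi_1,\\phi_2]$ and the uniform conditional prior $\\pi(\\theta|\\phi)=(\\phi_2-\\phi_1)^{-1}I_{\\phi_1\\leq\\theta\\leq\\phi_2}$ (both are illustrative choices), the $\\epsilon$-contraction is $\\Theta(\\phi)^{-\\epsilon}=[\\phi_1+\\epsilon, \\phi_2-\\epsilon]$ whenever $\\phi_2-\\phi_1>2\\epsilon$ and is empty otherwise, so that
$$
\\pi(\\theta\\in\\Theta(\\phi)^{-\\epsilon}|\\phi)=\\frac{\\max(\\phi_2-\\phi_1-2\\epsilon,\\,0)}{\\phi_2-\\phi_1}<1\\quad\\text{for every }\\epsilon>0,
$$
and Assumption \\ref{ass3.3} holds.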
Remark that assumption \\ref{ass3.3} is not needed for the first part of Theorem \\ref{th3.1} below.\n\n\n\n\n The following theorem gives the posterior consistency for partially identified parameters.\n\\begin{thm}\\label{th3.1}\n \n Under Assumptions \\ref{ass3.1} and \\ref{ass3.2}, for any $\\epsilon>0$,\n\n \\begin{displaymath}\n P(\\theta\\in \\Theta(\\phi_{0})^{\\epsilon}|D_n)\\rightarrow^p 1. \\end{displaymath}\n If Assumption \\ref{ass3.3} is further satisfied, then there is $ \\tau \\in (0,1]$ such that\n $$\n P(\\theta\\in\\Theta(\\phi_0)^{-\\epsilon}|D_n)\\rightarrow^p (1 - \\tau).\n $$\n\\end{thm}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Bayesian Inference of Support Function}\\label{ssf}\nOur analysis focuses on identified sets which are closed and convex. These sets are completely determined by their support functions, and efficient estimation of support functions may lead to optimality of estimation and inference of the identified set. As a result, much of the new development in the partially identified literature focuses on the support function, e.g., Kaido and Santos (2013), Kaido (2012), Beresteanu and Molinari (2008), Bontemps et al. (2012).\n\nThis section develops Bayesian analysis for the support function $S_{\\phi}(\\nu)$ of the identified set $\\Theta(\\phi)$. We consider a more specific partially identified model: the \\textit{moment inequality model} which is described in section \\ref{ss_5.1} below. Bayesian inference for the support function has two main interests. First, it provides an alternative way to characterize and perform estimation of the identified set $\\Theta(\\phi)$, which in many cases is relatively easy for computations and simulations. Second, it allows us to construct a two-sided BCS for $\\Theta(\\phi)$ that is also asymptotically equivalent to a frequentist confidence set. In this section we first develop a local linearization in $\\phi$ of the support function. As the support function itself is ``point identified'', we prove that its posterior satisfies the Bernstein-von Mises theorem. This result is \\textit{per se} of particular interest in the nonparametric Bayesian literature.\n\n\n\n\\subsection{Moment Inequality Model}\\label{ss_5.1}\nThe \\textit{moment inequality model} assumes that $\\theta$ satisfies $k$ moment restrictions:\n\\begin{equation}\\label{eq4.1}\n\\Psi(\\theta, \\phi)\\leq 0,\\quad \\Psi(\\theta, \\phi)=(\\Psi_1(\\theta,\\phi),..., \\Psi_k(\\theta,\\phi))^T\n\\end{equation}\nwhere $\\Psi: \\Theta\\times\\Phi\\rightarrow\\mathbb{R}^k$ is a known function of $(\\theta, \\phi)$. The identified set can be characterized as:\n \\begin{equation}\\label{eq4.2old}\n \\Theta(\\phi)=\\{\\theta\\in\\Theta: \\Psi(\\theta, \\phi)\\leq 0\\}.\n \\end{equation}\nSince most of the partially identified models can be characterized as moment inequality models, model (\\ref{eq4.1})-(\\ref{eq4.2old}) has received extensive attention in the literature.\n\nWe assume each component of $\\Psi(\\theta,\\phi)$ to be a convex function of $\\theta$ for every $\\phi\\in\\Phi$, as stated in the next assumption.\n\n\\begin{assum}\\label{ass5.1}\n $\\Psi(\\theta,\\phi)$ is continuous in $(\\theta,\\phi)$ and convex in $\\theta$ for every $\\phi\\in\\Phi$.\n\\end{assum}\n\nLet us consider the support function $S_{\\phi}(\\cdot):\\mathbb{S}^{d}\\rightarrow\\mathbb{R}$ of the identified set $\\Theta(\\phi)$. 
We restrict its domain to the unit sphere $\\mathbb{S}^{d}$ in $\\mathbb{R}^{d}$ since $S_{\\phi}(\\nu)$ is positively homogeneous in $\\nu$. Under Assumption \\ref{ass5.1} the support function is the optimal value of an ordinary convex program:\n\\begin{displaymath}\n S_{\\phi}(\\nu) = \\sup_{\\theta\\in\\Theta}\\{\\nu^T\\theta;\\: \\Psi(\\theta, \\phi)\\leq 0\\}.\n\\end{displaymath}\nTherefore, it also admits a Lagrangian representation (see Rockafellar 1970, chapter 28):\n\\begin{equation}\\label{eq5.1}\n S_{\\phi}(\\nu) = \\sup_{\\theta\\in\\Theta}\\{\\nu^T\\theta - \\lambda(\\nu,\\phi)^T\\Psi(\\theta,\\phi)\\},\n\\end{equation}\n\n\\noindent where $\\lambda(\\nu,\\phi): \\mathbb{S}^{d}\\times\\mathbb{R}^{d_{\\phi}}\\rightarrow\\mathbb{R}_{+}^{k}$ is a $k$-vector of Lagrange multipliers.\\\\% Note that $d_{\\phi}$ is the dimension of $\\phi$.\\\\\n\\indent We denote by $\\Psi_{S}(\\theta,\\phi_{0})$ the $k_{S}$-subvector of $\\Psi(\\theta,\\phi_{0})$ containing the constraints that are strictly convex functions of $\\theta$ and by $\\Psi_{L}(\\theta,\\phi_{0})$ the $k_{L}$ constraints that are linear in $\\theta$. So $k_S + k_L = k$. The corresponding Lagrange multipliers are denoted by $\\lambda_{S}(\\nu,\\phi_{0})$ and $\\lambda_{L}(\\nu,\\phi_{0})$, respectively, for $\\nu\\in\\mathbb{S}^{d}$. Moreover, define $\\Xi(\\nu,\\phi) = \\arg\\max_{\\theta \\in\\Theta}\\{\\nu^T\\theta; \\: \\Psi(\\theta,\\phi)\\leq 0\\}$ as the \\textit{support set} of $\\Theta(\\phi)$. Then, by definition,\n$$\n\\nu^T\\theta=S_{\\phi}(\\nu),\\quad \\forall\\theta\\in\\Xi(\\nu,\\phi).\n$$\nWe also denote by $\\nabla_{\\phi}\\Psi(\\theta,\\phi)$ the $k\\times d_{\\phi}$ matrix of partial derivatives of $\\Psi$ with respect to $\\phi$, and by $\\nabla_{\\theta}\\Psi_{i}(\\theta,\\phi)$ the $d$-vector of partial derivatives of $\\Psi_{i}$ with respect to $\\theta$ for each $i\\leq k$. In addition, let\n$$Act(\\theta,\\phi)\\equiv\\{i\\leq k;\\: \\Psi_i (\\theta,\\phi)=0\\}$$\nbe the set of the inequality active constraint indices. For some $\\delta>0$, let $B(\\phi_{0}, \\delta)=\\{\\phi\\in\\Phi; \\: \\|\\phi - \\phi_{0}\\|\\leq \\delta\\}$.\n\nWe assume the following:\n\n\\begin{assum}\\label{ass5.2}\n The true value $\\phi_{0}$ is in the interior of $\\Phi$, and $\\Theta$ is convex and compact.\n\\end{assum}\n\n\n\n\\begin{assum}\\label{ass5.3} There is some $\\delta>0$ such that for all $\\phi \\in B(\\phi_{0}, \\delta)$, we have:\\\\\n(i) the matrix $\\nabla_{\\phi}\\Psi(\\theta,\\phi)$ exists and is continuous in $(\\theta,\\phi)$;\\\\\n(ii) the set $\\Theta(\\phi)$ is non empty;\\\\\n(iii) there exists a $\\theta\\in\\Theta$ such that $\\Psi(\\theta,\\phi)<0$;\\\\\n(iv) $\\Theta(\\phi)$ belongs to the interior of $\\Theta$;\\\\\n(v) for every $i\\in Act(\\theta,\\phi_{0})$, with $\\theta\\in\\Theta(\\phi_{0})$, the vector $\\nabla_{\\theta}\\Psi_{i}(\\theta,\\phi)$ exists and is continuous in $(\\theta,\\phi)$ for every $\\phi\\in B(\\phi_{0},\\delta)$ and $\\theta\\in\\Theta(\\phi)$.\n\\end{assum}\n\nAssumption \\ref{ass5.3} \\textit{(iii)} is the Slater's condition which is a sufficient condition for strong duality to hold. It implies Assumption \\ref{ass5.3} \\textit{(ii)}. However, we keep both conditions because in order to establish some technical results we only need condition \\textit{(ii)} which is weaker.\n\n\n\nThe next assumption concerns the inequality active constraints. 
Assumption \\ref{ass5.4} requires that the active inequality constraints gradients $\\nabla_{\\theta}\\Psi_{i}(\\theta,\\phi_{0})$ be linearly independent. This assumption guarantees that a $\\theta$ which solves the optimization problem (\\ref{eq5.1}) with $\\phi = \\phi_{0}$ satisfies the Kuhn-Tucker conditions. Alternative assumptions that are weaker than Assumption \\ref{ass5.4} could be used, but the advantage of Assumption \\ref{ass5.4} is that it is easy to check.\n\n\\begin{assum}\\label{ass5.4}\n \n \n \n For any $ \\theta\\in\\Theta(\\phi_{0})$, the gradient vectors $\\{\\nabla_{\\theta}\\Psi_{i}(\\theta,\\phi_{0})\\}_{i\\in Act(\\theta,\\phi_{0})}$ are linearly independent\n \n\\end{assum}\n\n The following assumption is key for our analysis, and is sufficient for the differentiability of the support function at $\\phi_0$:\n\\begin{assum}\\label{ass5.6}\nAt least one of the following holds:\n \\begin{itemize}\n \\item[\\textit{(i)}] For the ball $B(\\phi_0,\\delta)$ in Assumption \\ref{ass5.3}, for every $(\\nu,\\phi)\\in\\mathbb{S}^{d}\\times B(\\phi_{0},\\delta)$, $\\Xi(\\nu,\\phi)$ is a singleton;\n \\item[\\textit{(ii)}] There are linear constraints in $\\Psi(\\theta,\\phi_0)$, which are also separable in $\\theta$, that is, $k_L>0$ and $\\Psi_{L}(\\theta,\\phi_{0}) = A_{1}\\theta + A_{2}(\\phi_{0})$ for some function $A_{2}: \\Phi\\rightarrow \\mathbb{R}^{k_{L}}$ (not necessarily linear) and some $(k_L\\times d)$-matrix $A_{1}$.\n \\end{itemize}\n\\end{assum}\n\n\n Assumption \\ref{ass5.6} is particularly important for the linearization of the support function that we develop in Section \\ref{ss_5.2}. In fact, if one of the two parts of Assumption \\ref{ass5.6} holds then the support function is differentiable at $\\phi$ for every $(\\nu,\\phi)\\in\\mathbb{S}^{d}\\times B(\\phi_{0}, \\delta)$, and we have a closed form for its derivative. This assumption also plays one of the key roles in the study of asymptotic efficiency by Kaido and Santos (2013).\n\nThe last set of assumptions will be used to prove the Bernstein-von Mises theorem for $S_{\\phi}(\\cdot)$ and allows to strengthen the result of Theorem \\ref{Lem5.2} below. The first three assumptions are (local) Lipschitz equi-continuity assumptions.\n\n\\begin{assum}\\label{ass5.5}\n For the ball $B(\\phi_{0},\\delta)$ in Assumption \\ref{ass5.3}, for some $K>0$ and $\\forall \\phi_{1},\\phi_{2}\\in B(\\phi_{0},\\delta)$:\n \\begin{itemize}\n \\item[(i)] $\\sup_{\\nu\\in\\mathbb{S}^d}\\|\\lambda(\\nu,\\phi_{1}) - \\lambda(\\nu,\\phi_{2})\\| \\leq K\\|\\phi_{1} - \\phi_{2}\\|$;\n \\item[(ii)] $\\sup_{\\theta\\in\\Theta}\\|\\nabla_{\\phi}\\Psi(\\theta,\\phi_{1}) - \\nabla_{\\phi}\\Psi(\\theta,\\phi_{2})\\| \\leq K\\|\\phi_{1} - \\phi_{2}\\|$;\n \\item[(iii)] $\\|\\nabla_{\\phi}\\Psi(\\theta_1,\\phi_{0}) - \\nabla_{\\phi}\\Psi(\\theta_2,\\phi_{0})\\| \\leq K\\|\\theta_{1} - \\theta_{2}\\|$, for every $\\theta_1, \\theta_2\\in \\Theta$;\n \\item[(iv)] If $\\Xi(\\nu,\\phi_0)$ is a singleton for any $\\nu$ in some compact subset $W\\subseteq \\mathbb{S}^d$, and if the correspondence $(\\nu,\\phi) \\mapsto \\Xi(\\nu,\\phi)$ is upper hemicontinous on $\\mathbb{S}^d\\times B(\\phi_0,\\delta)$ then there exists $\\varepsilon =O(\\delta)$ such that $\\Xi(\\nu,\\phi_1) \\subseteq \\Xi^{\\varepsilon}(\\nu,\\phi_0)$.\n \\end{itemize}\n\\end{assum}\nHere $\\|\\nabla_{\\phi}\\Psi(\\theta,\\phi)\\|$ denotes the Frobenius norm of the matrix. The above conditions are not stringent. 
In particular, condition \\textit{(iv)} is easy to understand when $\\Xi(\\nu,\\phi)$ is a singleton, that is, when the optimization problem for the support function has a unique solution, for each $\\phi\\in B(\\phi_{0},\\delta)$. Then $\\Xi(\\nu,\\phi_1)$ and $\\Xi(\\nu,\\phi_0)$ are singletons that are close to each other, and $\\Xi(\\nu,\\phi_0)^{\\varepsilon}$ is a small ball around $\\Xi(\\nu,\\phi_0)$.\n\n\n\\indent We show in the following example that Assumptions \\ref{ass5.1}-\\ref{ass5.5} are easily satisfied.\n\n\\begin{exm}[Interval censored data - \\textit{continued}]\n The setup is the same as in Example \\ref{ex2.1}. Assumption \\ref{ass5.2} is verified if $Y_{1}$ and $Y_{2}$ are two random variables with finite first moments $\\phi_{0,1}$ and $\\phi_{0,2}$, respectively. Moreover, $\\Psi(\\theta,\\phi) = (\\phi_{1} - \\theta, \\theta - \\phi_{2})^T$, $\\phi = (\\phi_{1},\\phi_{2})^T$,\n\\begin{displaymath}\n \\nabla_{\\phi}\\Psi(\\theta,\\phi) = \\left(\\begin{array}{cc} 1 & 0\\\\ 0 & -1\n\\end{array}\\right)\n\\end{displaymath}\nso that Assumptions \\ref{ass5.1}, \\ref{ass5.2} and \\ref{ass5.3} \\textit{(i)}-\\textit{(ii)} are trivially satisfied. Assumption \\ref{ass5.3} \\textit{(iii)} holds for every $\\theta$ inside $(\\phi_{1},\\phi_{2})$; Assumption \\ref{ass5.3} \\textit{(iv)} is satisfied if $\\phi_{1}$ and $\\phi_{2}$ are bounded. To see that Assumptions \\ref{ass5.3} \\textit{(v)} and \\ref{ass5.4} are satisfied, note that for $\\theta = \\phi_{0,1}$ we have $Act(\\theta,\\phi_{0}) = \\{1\\}$, for $\\theta = \\phi_{0,2}$ we have $Act(\\theta,\\phi_{0}) = \\{2\\}$, while for $\\theta \\in(\\phi_{0,1}, \\phi_{0,2})$ we have $Act(\\theta,\\phi_{0}) = \\emptyset$. Assumption \\ref{ass5.6} \\textit{(i)} and \\textit{(ii)} are both satisfied since the support set takes the values $\\Xi(1,\\phi)=\\phi_2$ and $\\Xi(-1,\\phi)=\\phi_1$ and the constraints in $\\Psi(\\theta,\\phi_{0})$ are both linear with $A_{1} = (-1,1)^T$ and $A_{2}(\\phi_{0}) = \\nabla_{\\phi}\\Psi(\\theta,\\phi_{0})\\phi_{0}$.\\\\\n\\indent Assumptions \\ref{ass5.5} \\textit{(ii)}-\\textit{(iii)} are naturally satisfied because $\\nabla_{\\phi} \\Psi(\\theta, \\phi)$ does not depend on $(\\theta, \\phi)$. The Lagrange multiplier is $\\lambda(\\nu,\\phi) = (-\\nu I(\\nu<0), \\nu I(\\nu\\geq 0))^{T}$, which does not depend on $\\phi$, so that Assumption \\ref{ass5.5} \\textit{(i)} is satisfied with the left-hand side equal to $0$. Finally, the support set $\\Xi(\\nu,\\phi) = \\phi_{1}I(\\nu<0) + \\phi_{2}I(\\nu\\geq 0)$ is a singleton for every $\\phi\\in B(\\phi_{0}, \\delta)$ and $\\Xi(\\nu,\\phi_{0})^{\\varepsilon} = \\{\\theta\\in\\Theta;\\|\\theta - \\theta_{*}\\| \\leq \\varepsilon \\}$ where $\\theta_{*} = \\Xi(\\nu,\\phi_{0}) = \\phi_{0,1}I(\\nu<0) + \\phi_{0,2}I(\\nu\\geq 0)$. Therefore, $\\|\\Xi(\\nu,\\phi) - \\theta_{*}\\| \\leq \\delta$ and Assumption \\ref{ass5.5} \\textit{(iv)} holds with $\\varepsilon = \\delta$. $\\square$\n\\end{exm}\n\n\\begin{comment}\n\\begin{exm}[Interval regression model - \\textit{continued}]\n Consider Example \\ref{ex2.2}.
Assumption \\ref{ass5.2} is verified if the random vector $(Y_{1},Y_{2},X^T,Z^T)$ is such that there exists positive constants $c_1,c_2$ and $c_3$ such that $\\|E(ZY_1)\\| 1$; assumption \\ref{ass5.6} \\textit{(ii)} holds if we assume that $\\phi_{2}$ is a known parameter.\\\\\nUnder the assumption that $\\dim(Z) = \\dim(X) \\equiv d$ and $rank(E(ZX^T)) = d$ we are able to rewrite the identified set in an equivalent form.\n\\begin{lem}\\label{Lem5.1}\n Suppose $\\phi_2^{-1}$ exists, then\n $$\\Theta(\\phi)=\\left\\{\\theta\\in\\Theta: \\theta=\\phi_2^{-1}(\\frac{\\phi_1+\\phi_3}{2}+u), u\\in(-\\frac{\\phi_3-\\phi_1}{2},\\frac{\\phi_3-\\phi_1}{2})\\right\\}.$$\n\\end{lem}\n\nNow we are ready to calculate the support function for $\\Theta(\\phi)$.\n\\begin{thm}\\label{th5.1}\n Suppose $\\phi_2^{-1}$ exists. The support function for $\\Theta(\\phi)$ is given by:\n $$\n S_{\\phi}(p)=p^T\\phi_2^{-1}(\\frac{\\phi_1+\\phi_3}{2})+\\alpha_p^T(\\frac{\\phi_3-\\phi_1}{2}),\n $$\n where $d=\\dim(\\theta)$, $\\text{sgn}(x)=I(x>0)-I(x<0)$,\n $$\n \\alpha_p=\\begin{pmatrix}\n (p^T\\phi_2^{-1})_1\\text{sgn} (p^T\\phi_2^{-1})_1\\\\\n \\vdots\\\\\n (p^T\\phi_2^{-1})_d\\text{sgn} (p^T\\phi_2^{-1})_d\n \\end{pmatrix}.\n $$\n\\end{thm}\n$\\square$\n\\end{exm}\n\n\n\\end{comment}\n\n\n\n\\subsection{Asymptotic Analysis}\\label{ss_5.2}\nThe support function of the closed and convex set $\\Theta(\\phi)$\nadmits directional derivatives in $\\phi$, see e.g. \\cite{MilgromSegal2002}. Moreover, if Assumption \\ref{ass5.6} holds for a particular value $(\\nu,\\phi)$, then $S_{\\phi}(\\nu)$ is differentiable at $\\phi$ and its derivative is equal to the left and right directional derivatives. The next theorem exploits this fact and states that the support function can be locally approximated by a linear function of $\\phi$. \n\n\n\\begin{thm}\\label{Lem5.2}\n If Assumptions \\ref{ass5.1}-\\ref{ass5.6} hold with $\\delta = r_{n}$ for some $r_n=o(1)$, then there is $N\\in\\mathbb{N}$ such that for every $n\\geq N$, there exist: (i) a real function $f(\\phi_{1},\\phi_{2})$ defined for every $\\phi_1, \\phi_2\\in B(\\phi_{0},r_{n})$, \n (ii) a Lagrange multiplier function $\\lambda(\\nu,\\phi_0): \\mathbb{S}^{d}\\times\\mathbb{R}^{d_{\\phi}}\\rightarrow\\mathbb{R}_{+}^{k}$, and (iii) a Borel measurable mapping $\\theta_{*}(\\nu):\\mathbb{S}^{d}\\rightarrow \\Theta$ satisfying $\\theta_{*}(\\nu)\\in\\Xi(\\nu,\\phi_{0})$ for all $\\nu\\in\\mathbb{S}^{d}$, such that for every $\\phi_1, \\phi_2\\in B(\\phi_{0},r_{n})$:\n\n \\begin{displaymath}\n \\sup_{\\nu\\in\\mathbb{S}^{d}}\\left|\\left(S_{\\phi_1}(\\nu) - S_{\\phi_2}(\\nu)\\right) - \\lambda(\\nu,\\phi_{0})^T\\nabla_{\\phi}\\Psi(\\theta_{*}(\\nu),\\phi_{0})[\\phi_1-\\phi_2]\\right| = f(\\phi_{1},\\phi_{2})\n \\end{displaymath}\n\n \\noindent and $\\frac{f(\\phi_{1},\\phi_{2})}{\\|\\phi_{1} - \\phi_{2}\\|} \\rightarrow 0$ uniformly in $\\phi_1, \\phi_2\\in B(\\phi_{0},r_{n})$ as $n\\rightarrow \\infty$.\n\\end{thm}\n\nWe remark that the functions $\\lambda$ and $\\theta_*$ do not depend on the specific choice of $\\phi_1$ and $\\phi_2$ inside $B(\\phi_0,r_n)$, but only on $\\nu$ and the true value $\\phi_0$. The expansion can also be viewed as stochastic when $\\phi_1, \\phi_2$ are interpreted as random variables associated with the posterior distribution $P(\\phi|D_{n})$. 
This interpretation is particularly useful for understanding Theorems \\ref{th5.2} and \\ref{th5.3}.\n\n\n\nWith the approximation given in the theorem we are now ready to state posterior consistency (with concentration rate) and asymptotic normality of the posterior distribution of $S_{\\phi}(\\nu)$. The posterior consistency of the support function is also based upon the posterior concentration rate for $\\phi$. In a semi-parametric Bayesian model where $\\phi$ is point identified, the posterior of $\\phi$ achieves a near-parametric concentration rate under proper prior conditions. Since our goal is to study the posterior of $S_{\\phi}(\\nu)$, we state a high-level assumption on the posterior of $\\phi$ as follows, instead of deriving it from more general conditions.\n\n\\begin{assum}\\label{ass4.1}The marginal posterior of $\\phi$ is such that, for some $C>0$,\n$$P(\\|\\phi-\\phi_0\\|\\leq Cn^{-1\/2}(\\log n)^{1\/2}|D_n)\\rightarrow^p1.$$\n\\end{assum}\n\nThis assumption is a standard result in the semi\/non-parametric Bayesian literature. If we place a nonparametric prior on $F$ as described in Section \\ref{sss_nonparametric_prior}, the notation used in this assumption is a shorthand for $$P(\\|\\phi(F)-\\phi(F_0)\\|\\leq Cn^{-1\/2}(\\log n)^{1\/2}|D_n)\\rightarrow^p1.$$ When the likelihood function is unknown, a formal derivation of this assumption for a semi-parametric prior of $(\\phi, F)$ will be presented in Appendix \\ref{s_appendix_posterior_concentration_phi}.\n\n\n\nThe next theorem gives the contraction rate for the posterior of the support function.\n\n\n\\begin{thm}\\label{th5.2}\n Under Assumption \\ref{ass4.1} and the Assumptions of Theorem \\ref{Lem5.2} with $r_n=\\sqrt{(\\log n)\/n}$, for some $C>0$,\n \\begin{equation}\\label{eq5.2}\n P\\left(\\sup_{\\nu\\in\\mathbb{S}^{d}}|S_{\\phi}(\\nu) - S_{\\phi_{0}}(\\nu)|\\leq C\\sqrt{(\\log n)\/n}\\;\\bigg|\\;D_n\\right)\\rightarrow^p1.\n \\end{equation}\n\\end{thm}\n\n\\begin{assum}\n There is some $\\delta>0$ such that for all $\\phi\\in B(\\phi_{0},\\delta)$ there exists a $\\theta\\in\\Theta$ such that $\\Psi_{i}(\\theta,\\phi)<0$, $\\forall i=1,\\ldots, k_1$.\n \\end{assum}\n\nThe results of Section \\ref{ss_5.2} are still valid with minor modifications in the proofs. We detail these modifications in Appendix \\ref{ss_E.4}.\n\n\n\\section{Bayesian Credible Sets}\\label{sbcs}\nInferences can be carried out through finite-sample Bayesian credible sets (BCS's). We study two kinds of BCS's: credible sets for $\\theta$ and credible sets for the identified set $\\Theta(\\phi)$.\n\n\n\n\\subsection{Credible set for $\\Theta(\\phi)$}\n\n\\subsubsection{Two-sided BCS}\nWe focus on the case when the identified set is convex and closed, and aim at constructing two-sided credible sets $A_1$ and $A_2$ such that\n$$\nP(A_1\\subset\\Theta(\\phi)\\subset A_2|D_n)\\geq 1-\\tau\n$$\n for $\\tau\\in (0,1)$, where the probability is taken with respect to the posterior of $\\phi$.\nOur construction is based on the support function. To illustrate why the support function can help, for a set $\\Theta(\\phi)$ recall its $\\epsilon$-envelope: $\\Theta(\\phi)^{\\epsilon}=\\{\\theta\\in\\Theta: d(\\theta, \\Theta(\\phi))\\leq\\epsilon\\}$ and its $\\epsilon$-contraction: $\\Theta(\\phi)^{-\\epsilon}=\\{\\theta\\in\\Theta(\\phi): d(\\theta, \\Theta\\backslash\\Theta(\\phi))\\geq\\epsilon\\}$ where $\\epsilon\\geq 0$. \nLet $\\hat{\\phi}$ be a Bayesian estimator for $\\phi_0$, which can be, e.g., the posterior mean or mode.
We have, for any $c_n\\geq0$,\n$$\nP(\\Theta(\\hat\\phi )^{-c_n}\\subset\\Theta(\\phi)\\subset\\Theta(\\hat\\phi )^{c_n}|D_n)=P(\\sup_{\\|\\nu\\|=1}|S_{\\phi}(\\nu)-S_{\\hat\\phi}(\\nu)|\\leq c_n|D_n).\n$$\nNote that the right hand side of the above equation depends on the posterior of the support function. Let $q_{\\tau}$ be the $1-\\tau$ quantile of the posterior of $$J(\\phi)=\\sqrt{n}\\sup_{\\|\\nu\\|=1}|S_{\\phi}(\\nu)-S_{\\hat\\phi }(\\nu)|$$ so that\n\\begin{equation}\\label{eq6.2}\nP\\left(J(\\phi)\\leq q_{\\tau}\\bigg|D_n\\right)=1-\\tau.\n\\end{equation}\nThe posterior of $J(\\phi)$ is determined by that of $\\phi$. Hence $q_{\\tau}$ can be simulated efficiently from the MCMC draws from $p(\\phi|D_n)$. Immediately, we have the following theorem:\n\n\\begin{thm}\\label{th6.2}Suppose for any $\\tau\\in(0,1),$ $q_{\\tau}$ is defined as in (\\ref{eq6.2}), then for every sampling sequence $D_n,$\n$$\nP(\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi)\\subset\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}|D_n)= 1-\\tau.\n$$\nIn particular, $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}$ is allowed to be an empty set.\n\\end{thm}\n\n\n\n\\begin{remark}\\label{rem_6.3}\nIt is straightforward to construct the one-sided BCS for $\\Theta(\\phi)$ using the described procedure. For example, let $\\tilde{q}_{\\tau}$ be such that $$P( \\sqrt{n}\\sup_{\\|\\nu\\|=1}(S_{\\phi}(\\nu)-S_{\\hat\\phi }(\\nu))\\leq \\tilde{q}_{\\tau}|D_n) = 1 - \\tau.$$ Then, $P(\\Theta(\\phi)\\subset \\Theta(\\hat\\phi )^{\\tilde{q}_\\tau\/\\sqrt{n}}|D_n)=1-\\tau$ for every sampling sequence $D_n.$\n\\end{remark}\n\n\n\n\n\n\n\n\n\\subsubsection{Frequentist coverage probability of BCS for $\\Theta(\\phi)$}\n\nThe constructed two-sided BCS for the identified set has desired frequentist properties, which follows from the Bernstein-von Mises Theorem (see Theorem \\ref{th5.3}) of the support function.\nThe frequentist coverage probability for a general (two-sided) multi-dimensional BCS has been largely unknown in the literature before.\nThe analysis relies on the following assumption, which requires the asymptotic normality and semi-parametric efficiency of the consistent estimator $\\hat \\phi$. Under mild conditions, it holds for many regular estimators such as the posterior mean, mode and the maximum likelihood estimator.\n\n\n\\begin{assum}\\label{ass6.2}\nThe consistent estimator $\\hat\\phi $ satisfies $$\n\\sqrt{n}(\\hat\\phi -\\phi_0)\\rightarrow^d \\mathcal{N}(0, I_0^{-1})\n$$\nwhere $I_0$ denotes the semi-parametric efficient information matrix as in Assumption \\ref{ass5.9}.\n\\end{assum}\n\n\n\n\n\n\n\\begin{thm} \\label{th6.3} Consider the moment inequality model in (\\ref{eq4.1})-(\\ref{eq4.2old}). If the Assumptions of Theorem \\ref{th5.3} and Assumption \\ref{ass6.2} hold, then the constructed two-sided Bayesian credible set has asymptotically correct frequentist coverage probability, that is, for any $\\tau\\in(0,1)$,\n$$\nP_{D_n}(\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}})\\geq 1-\\tau+o_p(1).\\footnote{The result presented here is understood as: There is a random sequence $\\Delta(D_n)$ that depends on $D_n$ such that $\\Delta(D_n)=o_p(1)$, and for any sampling sequence $D_n$, we have $P_{D_n}(\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}})\\geq 1-\\tau+\\Delta(D_n)$. 
Similar interpretation applies to (\\ref{eq6.3add}).}\n$$\nwhere $P_{D_n}(.)$ denote the probability measure based on the sampling distribution, fixing $(\\theta, \\phi)=(\\theta_0,\\phi_0)$.\n\\end{thm}\n\n\n\nNote that in Theorem \\ref{th6.2}, the random set is $\\Theta(\\phi)$, while in Theorem \\ref{th6.3} the random sets are $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}$ and $\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$. \n The rationale of this theorem is that, because the identified set itself is ``point identified\", its prior does not depend on that of $\\theta$ and is dominated by the data asymptotically.\n \n\\begin{remark}\nNote that $q_{\\tau}$ depends only on the posterior of $\\phi$. Hence Theorem \\ref{th6.3} does not rely on the prior of $\\theta$, and shows asymptotic robustness to the prior of $\\phi$. It also holds when $\\Theta(\\phi)$ becomes a singleton, and in that case the lower-side $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}$ is empty. Therefore the point identified case is also nested. We shall discuss the uniformity issue in more detail in Section \\ref{s_further_illustration_Uniformity}.\n\\end{remark}\n\n \n\n\n\nSimilarly, we can show that the one-sided BCS as constructed in Remark \\ref{rem_6.3} above has asymptotically correct coverage probability too. For example, for $\\tilde{q}_{\\tau}$ such that \\\\ $P(\\sqrt{n}\\sup_{\\|\\nu\\|=1}(S_{\\phi}(\\nu)-S_{\\hat\\phi }(\\nu))\\leq \\tilde{q}_{\\tau}|D_n)=1-\\tau$, then\n\\begin{equation}\\label{eq6.3add}\nP_{D_n}(\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde{q}_{\\tau}\/\\sqrt{n}})\\geq 1-\\tau+o_p(1).\n\\end{equation}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Credible set for $\\theta$}\nWe now construct the Bayesian credible set for $\\theta$. A BCS for $\\theta$ at level $1-\\tau$ is a set BCS$(\\tau)$ such that\n\\begin{equation*}\\label{eq6.1}\nP(\\theta\\in \\text{BCS}(\\tau)|D_n)=1-\\tau\n\\end{equation*}\nfor $\\tau\\in(0,1)$. One of the popular choices of the credible set is the highest-probability-density (HPD) set, which has been widely used in empirical studies and also used in the Bayesian partially identified literature by e.g., Moon and Schorfheide (2012) and Norets and Tang (2012).\n\nThe BCS can be compared with the frequentist confidence set (FCS). A frequentist confidence set FCS$(\\tau)$ for $\\theta_0$ satisfies\n$$\n\\lim_{n\\rightarrow\\infty}\\inf_{\\phi\\in\\Phi}\\inf_{\\theta_{0}\\in\\Theta(\\phi)}P_{D_n}(\\theta_{0}\\in \\text{FCS}(\\tau))\\geq 1-\\tau.\n$$\nThere have been various procedures in the literature to construct a FCS$(\\tau)$ that satisfies the above inequality. One of the important FCS's is based on a consistent estimator $\\hat{\\phi}$ of $\\phi_0$ such that $\\Theta(\\hat{\\phi})\\subset\\text{FCS}({\\tau})$. By using a known likelihood function, Moon and Schorfheide (2012) compared the BCS with this type of FCS and showed that the BCS and FCS are asymptotically different. As Theorem \\ref{t6.1} below shows, such a comparison still carries over under the more robust semi-parametric Bayesian setup. 
The following assumption is needed.\n\\begin{assum}\\label{ass6.1}\n(i) The frequentist FCS($\\tau$) is such that, there is $\\hat{\\phi}$ with $\\|\\hat{\\phi}-\\phi_0\\|=o_p(1)$ satisfying $\\Theta(\\hat{\\phi})\\subset \\text{FCS}(\\tau)$.\\\\\n(ii) $\\pi(\\theta\\in\\Theta(\\phi)|\\phi)=1$ for all $\\phi\\in\\Phi$; $\\sup_{(\\theta,\\phi)\\in\\Theta\\times\\Phi}\\pi(\\theta|\\phi)<\\infty$.\n\\end{assum}\n Many frequentist FCS's satisfy condition \\textit{(i)}, see, e.g., Imbens and Manski (2004), Chernozhukov et al. (2007), Rosen (2008), Andrews and Soares (2010), etc. Condition \\textit{(ii)} is easy to verify since $\\Theta\\times\\Phi$ is compact. When for every $\\phi\\in\\Phi$, $\\Theta(\\phi)$ is not a singleton, examples of $\\pi(\\theta|\\phi)$ satisfying assumption \\ref{ass6.1} \\textit{(ii)} include: the uniform prior with density $$\\pi(\\theta|\\phi)=\\mu(\\Theta(\\phi))^{-1}I_{\\theta\\in\\Theta(\\phi)},$$ where $\\mu(\\cdot)$ denotes the Lebesgue measure; and the truncated normal prior with density\n$$\\pi(\\theta|\\phi)=\\left[\\int_{\\Theta(\\phi)}h(x;\\lambda,\\Sigma)dx\\right]^{-1}h(\\theta; \\lambda,\\Sigma)I_{\\theta\\in\\Theta(\\phi)},$$\nwhere $h(x;\\lambda,\\Sigma)$ is the density function of a multinormal distribution $\\mathcal{N}(\\lambda,\\Sigma)$.\n\n\\begin{thm}\\label{t6.1}\nUnder Assumption \\ref{ass6.1} and the assumptions of Theorem \\ref{th5.2}, $\\forall\\tau\\in(0,1)$,\\\\\n(i)\n$$\nP(\\theta\\in\\text{FCS}(\\tau)|D_n)\\rightarrow^p1,\n$$\n(ii) $$\nP(\\theta\\in\\text{FCS}(\\tau), \\theta\\notin\\text{BCS}(\\tau)|D_n)\\rightarrow^p\\tau.\n$$\n\\end{thm}\n\\begin{remark}\n\n\nTheorem \\ref{t6.1} \\textit{(i)} shows that the posterior probability that $\\theta$ lies inside the frequentist confidence set is arbitrarily close to one, as $n\\rightarrow\\infty$. This indicates that the posterior will asymptotically concentrate inside the FCS. On the other hand, by \\textit{(ii)}, there is a non-negligible probability that FCS is strictly larger than BCS. The prior information on $\\theta$ still plays a non-negligible role in the posterior as the sample size increases.\n\nOur prior condition in Assumption \\ref{ass6.1} \\textit{(ii)} implies that Theorem \\ref{t6.1} only focuses on partial identification. It can be restrictive in the point identified case. Because our prior is such that $\\pi(\\theta\\in\\Theta(\\phi)|\\phi)=1$ for each $\\phi$, when $\\Theta(\\phi)$ is a singleton $\\pi(\\theta|\\phi)$ becomes a Dirac function and $\\sup_{\\theta,\\phi}\\pi(\\theta|\\phi)<\\infty$ cannot be expected to hold in this case. On the other hand, Assumption \\ref{ass6.1} \\textit{(ii)} does cover many partially identified models of interest, and it is a prior assumption that has been used frequently elsewhere in the literature, e.g., Moon and Schorfheide (2012) and Gustafson (2012).\n\\end{remark}\n\n\n\n\\section{Projection and Subset Inference}\\label{s_Projection_Subset_Inference}\nOne of the important features of the proposed procedure is that it is relatively easy to marginalize onto low-dimensional subspaces, and the computation is fast. Suppose the dimension of $\\theta$ is relatively large, but we are interested in only one component of $\\theta$, say $\\theta_1$. Then projections aim at constructing the BCS's for $\\theta_1$ and for its identified set $\\widetilde{\\Theta}(\\phi)_1$.\n\nWe illustrate this by using the interval regression example (Example \\ref{ex2.2}). Suppose the full parameter $\\theta$ is high-dimensional. 
Let $W_1=ZY_1$, $W_2=ZY_2$ and $V=Zx^T$. Here $\\phi_1=EW_1$, $\\phi_2=EV$ and $\\phi_3=EW_2$. Let $\\phi=(\\phi_1^T,vec(\\phi_2)^T,\\phi_3^T)$, and $e=(1,0,...,0)^T$. The identified set for $\\theta_1$ can be expressed using the support function $S_{\\phi}(\\cdot)$:\n$$\n\\widetilde{\\Theta}(\\phi)_1=\\{\\theta_1: \\exists \\omega=(\\theta_2,...,\\theta_{d})\\text{ such that } (\\theta_1,\\omega)\\in\\Theta(\\phi)\\}=[-S_{\\phi}(-e), S_{\\phi}(e)],\n$$\nwhere the exact expression for $S_{\\phi}(\\cdot)$ is given in Appendix C.1 in the supplementary material. We place a Dirichlet process prior $ \\mathcal{D}ir(\\nu_{0},Q_{0})$ on the joint CDF of $(W_1, W_2, V)$. By the stick-breaking representation (see Sethuraman 1994) the deduced posterior distribution of $\\phi$ is the distribution of the following quantity:\n\\begin{equation}\\label{e6.4}\n \\phi|D_{n}= \\rho\\sum_{i=1}^{n}\\beta_{i}D_{n,i} + (1 - \\rho)\\sum_{j=1}^{\\infty}\\alpha_{j}\\xi_{j}\n\\end{equation}\n where $D_{n,i}$ is the $i$th observation of the vector $(W_1^T, vec(V)^T, W_2^T)$, $\\rho$ is drawn from a Beta distribution $\\mathcal{B}e(n, \\nu_{0})$ independently of the other quantities, $(\\beta_{1},\\ldots,\\beta_{n})$ is drawn from a Dirichlet distribution of parameters $(1,\\ldots,1)$ on the simplex $S_{n-1}$ of dimension $(n-1)$, $\\xi_{j}\\sim \\:iid\\: Q_{0}$ and $\\{\\alpha_{k}\\}_{k\\geq 1}$ are computed as $\\alpha_{k} = v_{k}\\prod_{l=1}^{k}(1 - v_{l})$ where $\\{v_{l}\\}_{l\\geq 1}$ are independent drawings from a beta distribution $\\mathcal{B}e(1,\\nu_{0})$ and $\\{v_{j}\\}_{j\\geq1}$ are independent of $\\{\\xi_{j}\\}_{j\\geq 1}$. In practice, we can set a truncation $K$, so the infinite sum in the posterior representation in (\\ref{e6.4}) is replaced with a truncated sum $(1-\\rho)\\sum_{j=1}^K\\alpha_j\\xi_j.$ In addition, $(\\alpha_1,...\\alpha_K)$ are normalized so that $\\sum_{j=1}^K\\alpha_j=1.$\n\n We can place a uniform prior for $\\theta$, and draw $\\{\\theta^{(i)},\\phi^{(i)}\\}_{i=1}^B$ from the posterior $(\\theta, \\phi)|D_n$. Then $\\{\\theta_1^{(i)}\\}_{i=1}^B$ are the draws from the marginal posterior of $\\theta_1.$ Let $\\theta_1^{(\\tau\/2)}$ and $\\theta_1^{(1-\\tau\/2)}$ be the $\\tau\/2$th and ${(1-\\tau\/2)}$th sample quantiles of $\\{\\theta_1^{(i)}\\}_{i=1}^B$. Then $[\\theta_1^{(\\tau\/2)}, \\theta_1^{(1-\\tau\/2)}]$ is the BCS($\\tau$) of $\\theta_1$. Moreover, let $q_{\\tau}$ be the $(1-\\tau)$th quantile of the posterior of\n$$\nJ(\\phi)=\\sqrt{n}\\max\\{S_{\\phi}(e)-S_{\\hat\\phi }(e), S_{\\phi}(-e)-S_{\\hat\\phi }(-e)\\},\n$$\nwhich can be approximated by the $(1-\\tau)$th sample quantile of $\\{J(\\phi^{(i)})\\}_{i=1}^B$.\nWe then construct the BCS$(\\tau)$ for $\\widetilde{\\Theta}(\\phi)_1$ as\n$\n[-S_{\\hat\\phi }(-e)-\\frac{q_{\\tau}}{\\sqrt{n}}, S_{\\hat\\phi }(e)+\\frac{q_{\\tau}}{\\sqrt{n}}].\n$\n\n\n\nWe present a simple numerical result for illustration, where $\\theta_{01}=1$, but the total dimension is high: $\\dim(\\theta_0)=10.$ Let $W_1\\sim \\mathcal{N}(0,0.5I)$ and $W_2\\sim \\mathcal{N}(5, I)$. Set $\\nu_0=3$ and the base measure $Q_0=\\mathcal{N}(0, I)$. $B=100$ posterior draws are sampled. While the finite sample performance is very robust to the choice of the truncation $K$, we choose $K$ following the guidance of Ishwaran and James (2002), who obtained an approximation error of order $n\\exp(-(K-1)\/\\nu_0)$ for truncations. Hence, in the simulation with $n=500, \\nu_0=3$, the choice $K=50$ gives an error of order $4\\times 10^{-5}$. 
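\n\nTo make these computational steps concrete, the following minimal Python sketch (ours, not taken from the original implementation) shows one way to draw $\\phi$ from the truncated version of (\\ref{e6.4}) and to form the two credible sets described above; the seed, the toy data and the scalar support function used at the end (standing in for the closed-form expression of Appendix C.1) are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef dp_posterior_draw(D, nu0=3.0, K=50):\n    # One draw of phi from the truncated stick-breaking representation of the\n    # Dirichlet process posterior: a Beta(n, nu0)-weighted mixture of the\n    # empirical part (Dirichlet weights on the observations) and the prior\n    # part (stick-breaking weights on draws from the base measure, here N(0, I)).\n    n, d = D.shape\n    rho = rng.beta(n, nu0)\n    beta = rng.dirichlet(np.ones(n))\n    v = rng.beta(1.0, nu0, size=K)\n    alpha = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))\n    alpha = alpha \/ alpha.sum()             # renormalise after truncation\n    xi = rng.standard_normal((K, d))\n    return rho * beta @ D + (1.0 - rho) * alpha @ xi\n\ndef projected_bcs(D, support_fn, tau=0.10, B=100, nu0=3.0, K=50):\n    # BCS for the projected identified set [-S_phi(-e), S_phi(e)]: simulate the\n    # posterior of J(phi) and widen the plug-in set by q_tau\/sqrt(n).\n    n = D.shape[0]\n    phi_hat = D.mean(axis=0)                # consistent estimator of phi\n    draws = [dp_posterior_draw(D, nu0, K) for _ in range(B)]\n    J = [np.sqrt(n) * max(support_fn(p, +1) - support_fn(phi_hat, +1),\n                          support_fn(p, -1) - support_fn(phi_hat, -1))\n         for p in draws]\n    q = np.quantile(J, 1.0 - tau)\n    return [-support_fn(phi_hat, -1) - q \/ np.sqrt(n),\n            support_fn(phi_hat, +1) + q \/ np.sqrt(n)]\n\n# Toy check with the scalar interval-censored mean, whose identified set is\n# [E W_1, E W_2] and whose support function is S_phi(1) = phi_2, S_phi(-1) = -phi_1.\nW1 = rng.normal(0.0, 0.5, size=(500, 1))\nW2 = W1 + rng.uniform(0.0, 2.0, size=(500, 1))\nD = np.hstack([W1, W2])\ntoy_support = lambda phi, direction: phi[1] if direction > 0 else -phi[0]\nprint(projected_bcs(D, toy_support))\n\\end{verbatim}\nThe same routine applies to the interval regression projection once \\texttt{support\\_fn} is replaced by the closed-form $S_{\\phi}(\\pm e)$ from Appendix C.1.\n\n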
Table \\ref{table1} summarizes the true identified set $\\widetilde\\Theta(\\phi_0)_1$ for $\\theta_1$, and the averaged BCS$(0.1)$ for both $\\theta_1$ and the projected set $\\widetilde{\\Theta}(\\phi)_1$ over 50 replications. Results based on various choices of $(n, B, K)$ are reported.\n\n\n\n\\begin{table}[htdp]\n\\caption{90\\% Bayesian credible sets marginalized to the subset for $\\theta_1$}\n\\begin{center}\n\n\\begin{tabular}{cccccc}\n\\hline\n$n$ & $K$ & $B$ & $\\widetilde\\Theta(\\phi_0)_1$& BCS for $\\theta_1$ & BCS for $\\widetilde{\\Theta}(\\phi)_1$ \\\\\n\\hline\n500 & 50 & 100 & [0, 1.667] & [0.007, 1.667] & [-0.174, 1.844] \\\\\n & 50 & 500 & & [-0.008 1.666] & [-0.174, 1.837] \\\\\n & 100 & 100 & & [-0.012 1.662] & [-0.181, 1.832] \\\\\n & 100 & 500 & & [-0.000 1.667] & [-0.169, 1.840] \\\\\n \\hline\n1000 & 50 & 100 & [0, 1.667] & [0.023, 1.641] & [-0.121, 1.789] \\\\\n & 50 & 500 & & [0.011, 1.641] & [-0.126, 1.786] \\\\\n & 100 & 100 & & [0.036, 1.636] & [-0.120, 1.781] \\\\\n & 100 & 500 & & [0.025, 1.641] & [-0.121, 1.786] \\\\\n\\hline\n\\end{tabular}\n\n\n\n\\label{table1}\n \\end{center}\n\\end{table}\n\n\n\n\nWhen computing the BCS for $\\widetilde{\\Theta}(\\phi)_1$, it is also interesting to compare the computation time with that of the high-dimensional projection based on the criterion function approach as in, e.g. Chernozhukov et al. (2007), Andrews and Soares (2010) etc, because they have the same asymptotic frequentist coverages as ours. For the moment inequalities $\\Psi(\\theta,\\phi)=(\\phi_2\\theta-\\phi_3,\\phi_1-\\phi_2\\theta)\\leq0$ we employ the criterion function and construct a confidence set FCS as in Chernozhukov et al. (2007): $$Q_n(\\theta)=\\sum_{j}\\max(\\Psi_j(\\theta,\\hat\\phi),0)^2w_j,\\quad \\text{FCS}(\\tau)=\\{\\theta: \\sqrt{n}Q_n(\\theta)\\leq c_{\\tau}\\},\n$$ with $w_j=1$ and $\\hat\\phi$ the sample mean estimator of $\\phi$. The critical value $c_{\\tau}$ is obtained via the bootstrap procedure proposed by Bugni (2010), which requires solving a constrained optimization problem. We use the ``fmincon\" toolbox in Matlab for the numerical optimization\\footnote{We use the Matlab code of Bugni (2010), downloaded from the online supplement of Econometrica. The optimization is solved constrained on an estimated identified set, which involves an additional parameter $t_n$. We set $t_n=\\log(n).$}, and then project FCS$(\\tau)$ onto the subspace for $\\theta_1$ to get the marginal confidence set FCS$_1(\\tau)$. The projection is done through the following steps: generate $\\{\\theta_j^*\\}_{j=1}^M$ uniformly from $\\Theta$. Let $\\theta_{j,1}^*$ be the first component of $\\theta_j^*$ and\n\\begin{equation}\\label{eq7.2}\nL(\\tau)=\\min\\{\\theta_{j,1}^*: \\theta_{j}^*\\in\\text{FCS}(\\tau), j=1,...,M\\},\\quad U(\\tau)=\\max\\{\\theta_{j,1}^*: \\theta_{j}^*\\in\\text{FCS}(\\tau), j=1,...,M\\}.\n\\end{equation}\nThen $[L(\\tau), U(\\tau)]$ forms a projected frequentist confidence interval for $\\widetilde\\Theta(\\phi_0)_1.$ In the simulation, we set a small parameter space $\\Theta=\\otimes_{i=1}^{10}[-2, 2]$ in order to calculate $L(\\tau)$ and $U(\\tau)$ efficiently.\n\nTable \\ref{table2add} compares the computation times necessary to obtain the projected sets using our proposed BCS and using the criterion function approach. Reported is the averaged time for one computation over 50 replications, using the same simulated model. 
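\n\nTo see where the computational burden of the criterion-function approach comes from, the following minimal Python sketch (ours) illustrates the sampling-based projection step in (\\ref{eq7.2}); the low dimension, the plug-in values of $\\phi$ and the critical value $c_{\\tau}$ are hypothetical, and in the actual comparison $c_{\\tau}$ is obtained from Bugni's (2010) bootstrap.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef Qn(theta, phi1, phi2, phi3):\n    # Criterion function for the moment inequalities\n    # Psi(theta, phi) = (phi2 theta - phi3, phi1 - phi2 theta) <= 0, unit weights.\n    psi = np.concatenate([phi2 @ theta - phi3, phi1 - phi2 @ theta])\n    return np.sum(np.maximum(psi, 0.0) ** 2)\n\ndef projected_fcs(phi1, phi2, phi3, c_tau, n, M=100, dim=2, bound=2.0):\n    # Projection step of (7.2): sample M points uniformly from Theta, keep those\n    # with sqrt(n) Qn(theta) <= c_tau, and report the range of their first\n    # coordinate.  The critical value c_tau is treated as a given input here.\n    thetas = rng.uniform(-bound, bound, size=(M, dim))\n    kept = [th[0] for th in thetas\n            if np.sqrt(n) * Qn(th, phi1, phi2, phi3) <= c_tau]\n    return (min(kept), max(kept)) if kept else None   # None: nothing retained\n\n# Low-dimensional hypothetical plug-in values, for illustration only.\ndim, n = 2, 500\nphi2 = np.eye(dim)              # stands in for E[Z x^T]\nphi1 = -0.5 * np.ones(dim)      # stands in for E[Z Y_1]\nphi3 = 1.5 * np.ones(dim)       # stands in for E[Z Y_2]\nprint(projected_fcs(phi1, phi2, phi3, c_tau=1.0, n=n))\n\\end{verbatim}\nEach replication additionally requires solving the bootstrap problem of Bugni (2010) to obtain $c_{\\tau}$ and evaluating $Q_n$ at every sampled point, which helps explain the larger FCS computation times reported in Table \\ref{table2add}.\n\n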
We see that the proposed BCS projection computes much faster.\n\n\n\\begin{table}[htdp]\n\\caption{Computation times (in seconds) for the projected BCS and criterion-function-FCS }\n\\begin{center}\n\n\\begin{tabular}{cc|ccc|ccc}\n\\hline\n & & & BCS & & & FCS& \\\\\n & & & $K$ & & &$M$ \\\\\n$n$ & $B$ & 50 & 100 & 500 & 30 & 50 & 100 \\\\\n\\hline\n & & & & & & \\\\\n500 & 50 & 0.065 & 0.073 & 0.129 & 9.169 & 10.214 & 10.955 \\\\\n & 100 & 0.128 & 0.142 & 0.248 & 18.963 & 18.893 & 19.496 \\\\\n & 200 & 0.244 & 0.273 & 0.479 & 37.067 & 36.599 & 37.288 \\\\\n & & & & & & \\\\\n1000 & 50 & 0.072 & 0.079 & 0.136 & 14.123 & 14.248 & 15.837 \\\\\n & 100 & 0.137 & 0.155 & 0.259 & 26 & 27.029 & 28.442 \\\\\n & 200 & 0.269 & 0.295 & 0.549 & 54.027 & 52.661 & 54.240 \\\\\n\\hline\n\\end{tabular}\n\n\\label{table2add}\n \\end{center}\n \\small \\textit{Proposed BCS and criterion-function-based-FCS are compared. $B$ is the number of either posterior draws (for BCS) or bootstrap draws (for FCS) to compute the critical values; $K$ is the truncation number to approximate the Dirichlet process posterior; $M$ is used in (\\ref{eq7.2}) for the projected FCS. Computations are conducted using a 2.3 GHz Mac with Intel Core i7 CPU.}\n\n\\end{table}\n\n\n\n\\section{Posterior consistency for $\\Theta(\\phi)$}\\label{s_posterior_consistency_set}\n\n The estimation accuracy of the identified set is often measured, in the literature, by the Hausdorff distance. Specifically, for a point $a$ and a set $A$, let\n$\nd(a,A)=\\inf_{x\\in A}\\|a-x\\|,\n$\nwhere $\\|\\cdot\\|$ denotes the Euclidean norm. The Hausdorff distance between sets $A$ and $B$ is defined as\n\\begin{equation*}\nd_H(A, B)= \\max\\left\\{\\sup_{a\\in A}d(a, B), \\sup_{b\\in B}d(b, A)\\right\\}=\\max\\left\\{\\sup_{a\\in A}\\inf_{b\\in B}\\|a-b\\|, \\sup_{b\\in B}\\inf_{a\\in A}\\|b-a\\|\\right\\}.\n\\end{equation*}\n This section aims at deriving a rate $r_n=o(1)$ such that for some constant $C>0,$\n$$\nP(d_H(\\Theta(\\phi), \\Theta(\\phi_0))<Cr_n|D_n)\\rightarrow^p1.\n$$\nThe analysis relies on the following Lipschitz condition on the moment functions.\n\\begin{assum}\\label{ass4.3}\nThere exists $K>0$ such that $\\forall \\phi_1, \\phi_2\\in \\Phi$,\n$$\n\\sup_{\\theta\\in\\Theta}\\|\\Psi(\\theta, \\phi_1)-\\Psi(\\theta, \\phi_2)\\|\\leq K\\|\\phi_1-\\phi_2\\|.\n$$\n\\end{assum}\nGiven the compactness of $\\Theta$, this assumption is satisfied by many interesting examples of moment inequality models.\n\n\n\\begin{assum} \\label{ass4.4}\nThere exists a closed neighborhood $U(\\phi_0)$ of $\\phi_0$, such that for any $a_n=O(1)$, and any $\\phi\\in U(\\phi_0)$, there exists $C>0$ that might depend on $\\phi$, so that\n$$\n\\inf_{\\theta: d(\\theta, \\Theta(\\phi))\\geq Ca_n}\\max_{i\\leq k} \\Psi_i(\\theta,\\phi)>a_n.\n$$\n\\end{assum}\n\nIntuitively, when $\\theta$ is bounded away from $\\Theta(\\phi)$ (up to a rate $a_n$), at least one of the moment inequalities is violated, which means $\\max_{i\\leq k}\\Psi_i(\\theta,\\phi)>0$. This assumption quantifies how much $\\max_{i\\leq k}\\Psi_i(\\theta,\\phi)$ will depart from zero. This is a sufficient condition for the partial identification condition (4.5) in Chernozhukov, Hong and Tamer (2007). If we define $$\nQ(\\theta,\\phi)=\\|\\max(\\Psi(\\theta,\\phi),0)\\|= \\left[\\sum_{i=1}^k(\\max(\\Psi_i(\\theta,\\phi), 0))^2\\right]^{1\/2}\n$$\nthen $Q(\\theta,\\phi)=0$ if and only if $\\theta\\in\\Theta(\\phi)$. The partial identification condition in Chernozhukov et al. 
(2007, condition (4.5)) assumes that there exists $K>0$ so that for all $\\theta$,\n\\begin{equation}\\label{eq4.2}\nQ(\\theta,\\phi)\\geq Kd(\\theta,\\Theta(\\phi)),\n\\end{equation}\nwhich says that $Q$ should be bounded below by a number proportional to the distance of $\\theta$ from the\nidentified set if $\\theta$ is bounded away from the identified set. Assumption \\ref{ass4.4} is a sufficient condition for (\\ref{eq4.2}).\n\\begin{exm}[Interval censored data - \\textit{continued}] In the interval censoring data example, $\\Psi(\\theta, \\phi)=(\\theta-\\phi_2, \\phi_1-\\theta)^T$ and for any $\\phi=(\\phi_1, \\phi_2)$ and $\\tilde{\\phi}=(\\tilde{\\phi}_1, \\tilde{\\phi}_2)$ we have:\n$\\|\\Psi(\\theta, \\phi)-\\Psi(\\theta, \\tilde{\\phi})\\|=\\|\\phi-\\tilde{\\phi}\\|$. This verifies Assumption \\ref{ass4.3}. Moreover, for any $\\theta$ such that $d(\\theta,\\Theta(\\phi))\\geq a_n$, either $\\theta\\leq\\phi_1-a_n$ or $\\theta\\geq\\phi_2+a_n.$ If $\\theta\\leq \\phi_1-a_n$, then $\\Psi_2(\\theta,\\phi)=\\phi_1-\\theta\\geq a_n$; if $\\theta\\geq \\phi_2+a_n$, then $\\Psi_1(\\theta,\\phi)=\\theta-\\phi_2\\geq a_n.$ This verifies Assumption \\ref{ass4.4}. $\\square$\n\\end{exm}\n\nThe following theorem shows the concentration rate for the posterior of the identified set.\n\\begin{thm}\\label{t4.1}\nUnder Assumptions \\ref{ass4.1}, \\ref{ass8.1}-\\ref{ass4.4}, for some $C>0$,\n\\begin{equation}\\label{eq_t4.1}\n P(d_H(\\Theta(\\phi), \\Theta(\\phi_0))>C\\sqrt{\\frac{\\log n}{n}}|D_n)\\rightarrow^p0.\n\\end{equation}\n\\end{thm}\n\\begin{remark}\nThe convergence in Hausdorff distance can be implied by that of the support function for convex and close sets (e.g., Beresteanu and Molinari 2008). Therefore, (\\ref{eq_t4.1}) is another statement of result (\\ref{eq5.2}). However, they are obtained under different assumptions and (\\ref{eq_t4.1}) is obtained directly from the perspective of the posterior of the identified set.\n\\end{remark}\n\\begin{remark}\nRecently, Kitagawa (2012) obtained the posterior consistency for $\\Theta(\\phi)$ in the one-dimensional case:\n$\nP(d_H(\\Theta(\\phi), \\Theta(\\phi_0))>\\epsilon|D_n)\\rightarrow0\n$\nfor almost every sampling sequence of $D_n.$ This result was obtained for the case where $\\Theta(\\phi)$ is a connected interval and $d_H(\\Theta(\\phi), \\Theta(\\phi_0))$ is assumed to be a continuous map of $\\phi$. In multi-dimensional cases where $\\Theta(\\phi)$ is a more general convex set, however, verifying the continuity of $d_H(\\Theta(\\phi), \\Theta(\\phi_0))$ is much more technically involved, due to the challenge of computing the Hausdorff distance in multi-dimensional manifolds. In contrast, our Lipschitz equi-continuity condition in Assumption \\ref{ass4.3} and Assumption \\ref{ass4.4} are much easier to verify in specific examples, as they depend on the moment conditions directly.\n\\end{remark}\n\n\\begin{comment}\n\n\\section{Set-Optimality based on Bayesian Decision Theory}\\label{sopt}\nOptimality is an important problem for partially identified models. While there are some recent results on statistical inference (e.g., Canay 2010), the optimality issue has not been well addressed in the literature from an estimation point of view. Kaido and Santos (2013) used a semi-parametric efficiency criterion of estimating the identified set based on the Hausdorff distance. We propose a new optimality criterion based on the Bayesian decision theory for the set estimation. 
In the literature, Kitagawa (2012) proposed a ``gamma-minimax\" analysis for the partially identified parameter, which does not possess a closed form.\n\n In the Bayesian decision theory, using the Hausdorff distance as the loss function is considered to be too strong. Instead, we define a new loss function as follows: for any estimator $\\Omega$ of the identified set, define the loss function\n$$\nL(\\Omega, \\phi)=\\mu(\\Theta(\\phi)\\Delta\\Omega), \\quad \\Theta(\\phi)\\Delta\\Omega=(\\Theta(\\phi)\\cap\\Omega^c)\\cup(\\Theta(\\phi)^c\\cap\\Omega),\n$$\nwhere $\\mu(\\cdot)$ denotes the Lebesgue measure. Note that $\\Theta(\\phi)\\Delta\\Omega$ is the \\textit{symmetric difference} between $\\Omega$ and $\\Theta(\\phi)$. Like the Hausdorff distance, it is also commonly used to measure the accuracy of set approximations in many literature, for instance, density level set estimation (e.g., Gayraud and Rousseau 2005, Rigollet and Vert 2009), convex geometry (Bianchi et al. 2012), etc.\n\nDefine a set estimator of the identified set:\n$$\n\\widehat\\Omega=\\{x\\in\\Theta: P(x\\in\\Theta(\\phi)|D_n)\\geq 0.5\\}.\n$$\nwhere $P(x\\in\\Theta(\\phi)|D_n)$ is the probability measure taken with respect to the posterior of $\\Theta(\\phi)$, for a given $x\\in\\Theta$. The following theorem shows the optimality of $\\widehat\\Omega$ under the posterior expected loss, defined as $E[L(\\Omega, \\phi)|D_n]=\\int \\mu(\\Theta(\\phi)\\Delta\\Omega)p(\\phi|D_n)d\\phi$ for each set $\\Omega$.\n\n\\begin{thm}\\label{t51}For any set $\\Omega\\subset\\Theta$ and every sampling sequence $D_n$,\n$$\nE[L(\\widehat \\Omega, \\phi)|D_n]\\leq E[L(\\Omega, \\phi)|D_n].\n$$\n\\end{thm}\n\n\n\nHence $\\widehat\\Omega$ is an optimal estimator of the identified set in the sense that it minimizes the Bayesian risk, which is the posterior expected loss. Therefore we can also call it the \\textit{Bayes rule} from a decision-making point of view. As often the case in the decision theory, the Bayes rule is also \\textit{admissible}: for any set estimator $\\Omega$ that depends on the data $D_n$, let\n$$\nR(\\Omega,\\phi)=E_{D_n}[L(\\Omega,\\phi)|\\phi],\n$$\nwhere the expectation is taken with respect to the true sampling distribution, given any fixed $\\phi$. We have,\n\n\\begin{cor} \\label{c51} Suppose the prior $\\pi(\\phi)$ is such that $\\pi(\\phi\\in B)>0$ for any open ball $B\\subset\\Phi$. For any $\\Omega\\subset\\Theta$, assume $R(\\Omega, \\phi)$ is continuous in $\\phi$. Then either one of the following holds:\\\\\n(i) there is $\\phi^*\\in\\Phi$ such that $R(\\Omega,\\phi^*)>R(\\widehat\\Omega, \\phi^*)$;\\\\\n(ii) for any $\\phi\\in\\Phi$, $R(\\Omega, \\phi)\\geq R(\\widehat\\Omega, \\phi)$.\n\n\\end{cor}\n\n\n \\end{comment}\n\n\n\n\n\n\n\n\n\n\\section{Further Illustrations and Uniformity}\\label{s_further_illustration_Uniformity}\n\\subsection{Missing data: coverage probabilities and prior sensitivity}\\label{smd}\nThis subsection illustrates the coverages of the proposed BCS in the missing data problem (example \\ref{ex2.3}), previously discussed by Manski (2003). Let $Y$ be a binary variable, indicating whether a treatment is successful ($Y=1$) or not ($Y=0$). However, $Y$ is observed subject to missing. We write $M=0$ if $Y$ is missing, and $M=1$ otherwise. Hence, we observe $(M, MY)$. The parameter of interest is $\\theta = P(Y=1)$. 
The identified parameters are denoted by\n$$\n\\phi_1=P(M=1),\\quad \\phi_2=P(Y=1|M=1).\n$$ Let $\\phi_0=(\\phi_{10},\\phi_{20})$ be the true value of $\\phi=(\\phi_1,\\phi_2)$.\nThen, without further assumption on $P(Y=1|M=0)$, $\\theta$ is only partially identified on $\n\\Theta(\\phi)=[\\phi_1\\phi_2,\\phi_1\\phi_2+1-\\phi_1]\n$.\nThe support function is easy to calculate and is\n$$S_{\\phi}(1)=\\phi_1\\phi_2+1-\\phi_1\\quad S_{\\phi}(-1)=-\\phi_1\\phi_2.$$ Suppose we observe i.i.d. data $\\{(M_i, Y_iM_i)\\}_{i= 1}^{n}$, and define $\\sum_{i=1}^nM_i=n_1$ and $\\sum_{i=1}^nY_iM_i=n_2$. In this example, the true likelihood function\n$\nl_n(\\phi) \\propto \\phi_1^{n_1}(1-\\phi_1)^{n-n_1}\\phi_2^{n_2}(1-\\phi_2)^{n_1-n_2}\n$ is known.\n\n\nWe place independent beta priors, Beta($\\alpha_1,\\beta_1$) and Beta$(\\alpha_2,\\beta_2)$, on $(\\phi_1,\\phi_2)$. The uniform distribution is a special case of Beta prior. Then the posterior of $(\\phi_1,\\phi_2)$ is a product of Beta($\\alpha_1+n_1,\\beta_1+n-n_1$) and Beta($\\alpha_2+n_2,\\beta_2+n_1-n_2$). If in addition, we have prior information on $\\theta$ and place a prior $\\pi(\\theta|\\phi)$ supported on $\\Theta(\\phi)$, then by integrating out $\\phi$, we immediately obtain the marginal posterior of $\\theta.$\n\n\nWe now present the two-sided BCS for $\\Theta(\\phi)$ obtained by using the support function of $\\Theta(\\phi)$. The estimator $\\hat\\phi$ is taken as the posterior mode: $\n\\hat{\\phi}_{1}= (n_1+\\alpha_1-1)\/(n+\\alpha_1+\\beta_1-2),$ and $\\hat{\\phi}_{2}=(n_2+\\alpha_2-1)\/(n_1+\\alpha_2+\\beta_2-2)$.\nThen\n$$J(\\phi)=\\sqrt{n}\\max\\left\\{|\\phi_1\\phi_2-\\phi_1-\\hat{\\phi}_{1}\\hat{\\phi}_{2}+\\hat{\\phi}_{1}|, |\\phi_1\\phi_2-\\hat{\\phi}_{1}\\hat{\\phi}_{2}|\\right\\}.\n$$\nLet $q_{\\tau}$ be the $1-\\tau$ quantile of the posterior of $J(\\phi)$, which can be obtained by simulating from the Beta distributions. The lower and upper $1-\\tau$ level BCS's for $\\Theta(\\phi)$ are $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset \\Theta(\\phi)\\subset \\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ where\n$$\n\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}=[\\hat{\\phi}_{1}\\hat{\\phi}_{2}+q_{\\tau}\/\\sqrt{n}, \\hat{\\phi}_{1}\\hat{\\phi}_{2}+1-\\hat{\\phi}_{1}-q_{\\tau}\/\\sqrt{n}]\n$$\n\\noindent and\n$$\n\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}=[\\hat{\\phi}_{1}\\hat{\\phi}_{2}-q_{\\tau}\/\\sqrt{n}, \\hat{\\phi}_{1}\\hat{\\phi}_{2}+1-\\hat{\\phi}_{1}+q_{\\tau}\/\\sqrt{n}],\n$$\nwhich are also two-sided asymptotic $1-\\tau$ frequentist confidence intervals of the true $\\Theta(\\phi_{0}).$\n\nHere we present a simple simulated example, where the true $\\phi_0=(0.7,0.5)$. This implies the true identified interval to be $[0.35, 0.65]$ and about thirty percent of the simulated data are ``missing\". We set $\\alpha_1=\\alpha_2, \\beta_1=\\beta_2$ in the prior. In addition, $B=1,000$ posterior draws $\\{\\phi^i\\}_{i=1}^B$ are sampled from the posterior Beta distribution. For each of them, compute $J(\\phi^{i})$ and set $q_{0.05}$ as the 95\\% upper quantile of $\\{J(\\phi^i)\\}_{i=1}^B$ to obtain the critical value of the BCS and construct the two-sided BCS for the identified set. Each simulation is repeated for 500 times to calculate the coverage frequency of the true identified interval. Table \\ref{table2} presents the results. We see that the coverage probability for the two-sided is close to the desired 95\\% when sample size increases. 
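\n\nFor completeness, the computation for a single simulated data set can be sketched in a few lines of Python; the sketch below is ours (the seed, the sample size and the function name are arbitrary) and simply mirrors the steps just described.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\n\ndef missing_data_bcs(M, Y, a=(1.0, 1.0), b=(1.0, 1.0), tau=0.05, B=1000):\n    # Two-sided BCS for Theta(phi) = [phi1*phi2, phi1*phi2 + 1 - phi1], using\n    # independent Beta posteriors for (phi1, phi2) and the posterior quantile\n    # of J(phi); a and b collect the Beta prior parameters.\n    n, n1, n2 = len(M), int(M.sum()), int((Y * M).sum())\n    p1 = (n1 + a[0] - 1) \/ (n + a[0] + b[0] - 2)      # posterior mode of phi1\n    p2 = (n2 + a[1] - 1) \/ (n1 + a[1] + b[1] - 2)     # posterior mode of phi2\n    f1 = rng.beta(a[0] + n1, b[0] + n - n1, size=B)   # posterior draws of phi1\n    f2 = rng.beta(a[1] + n2, b[1] + n1 - n2, size=B)  # posterior draws of phi2\n    J = np.sqrt(n) * np.maximum(np.abs(f1 * f2 - f1 - p1 * p2 + p1),\n                                np.abs(f1 * f2 - p1 * p2))\n    q = np.quantile(J, 1.0 - tau) \/ np.sqrt(n)\n    lower = (p1 * p2 + q, p1 * p2 + 1.0 - p1 - q)     # inner set; may be empty\n    upper = (p1 * p2 - q, p1 * p2 + 1.0 - p1 + q)     # outer set\n    return lower, upper\n\n# One synthetic data set with phi0 = (0.7, 0.5), so Theta(phi0) = [0.35, 0.65].\nn = 500\nM = rng.binomial(1, 0.7, size=n)\nY = rng.binomial(1, 0.5, size=n)\nprint(missing_data_bcs(M, Y))\n\\end{verbatim}\nRepeating this over simulated data sets and recording how often $[0.35,0.65]$ lies between the two sets gives coverage frequencies of the kind reported in Table \\ref{table2}.\n\n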
In addition, the marginal coverages of the lower and upper sets are close to 97.5\\% when sample size is relatively large.\n\n\nMoreover, Figure 1 plots the five conjugate prior specifications used in this study: flat prior, reverse $J$-shaped with a right tail, $J$-shaped with a left tail, $U$-shaped, and uni-mode. These priors reflect different types of prior beliefs: the first prior is used if a researcher has no informative prior information, the second one (resp. third one) is used if one strongly believes that the probability of missing is low (resp. high), the fourth prior is used when one thinks that the probability of missing is either very high or very low, and the last prior corresponds to a symmetric prior belief centered at fifty percent. So Table \\ref{table2} also provides simple sensitivity analysis of prior specification. The results demonstrate robustness of the coverage probabilities to conjugate prior specification.\n\n\n\n\\begin{table}[htdp]\n\\caption{Frequentist coverage probability of BCS and prior sensitivity for missing data}\n\\begin{center}\n\n\\begin{tabular}{c|cc|ccc}\n\\hline\n$n$ & $\\alpha$ & $\\beta$ & Lower & Upper & Two-sided \\\\\n\\hline\n50 & 1 & 1 & 0.978 & 0.944 & 0.924 \\\\\n & 1 & 0.1 & 0.964 & 0.944 & 0.912 \\\\\n & 0.1 & 1 & 0.952 & 0.958 & 0.916 \\\\\n & 0.1 & 0.1 & 0.974 & 0.958 & 0.938 \\\\\n &2&2&0.958 & 0.970& 0.932\\\\\n \\hline\n100 & 1 & 1 & 0.982 & 0.96 & 0.948 \\\\\n & 1 & 0.1 & 0.978 & 0.968 & 0.950 \\\\\n & 0.1 & 1 & 0.968 & 0.968 & 0.948 \\\\\n &0.1 & 0.1 & 0.972 & 0.972 & 0.944 \\\\\n &2&2&0.956 & 0.978& 0.944\\\\\n \\hline\n500 & 1 & 1 & 0.970 & 0.974 & 0.950 \\\\\n & 1& 0.1 & 0.978 & 0.978 & 0.958 \\\\\n & 0.1 & 1 & 0.974 & 0.972 & 0.948 \\\\\n & 0.1 & 0.1 & 0.972 & 0.974 & 0.950 \\\\\n &2&2& 0.976 & 0.978& 0.956\\\\\n\\hline\n\\end{tabular}\n\n\n\\label{table2}\n\\small\n\n\\it Lower, Upper and Two-sided represent the frequencies of the events $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi_0)$, $\\Theta(\\phi_0)\\subset \\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$, and $\\Theta(\\hat\\phi )^{-q_{\\tau}\/\\sqrt{n}}\\subset\\Theta(\\phi_0)\\subset \\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ over 500 replicates. The coverage probability for the two-sided BCS is set to 95\\%.\n\n\\end{center}\n\\end{table}\n\n\n\n \\begin{figure}[htbp]\n\\begin{center}\n\\caption{Conjugate priors for sensitivity analysis in the missing data problem}\n\\includegraphics[width=12cm]{prior.eps}\n\\label{figureadd}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\\subsection{Uniformity: from partial identification to point identification}\\label{s_from_partial_identification_point_Identification}\n\n\nWe have been focusing on partially identified models, and the inference results achieved are for a fixed data generating process. It is interesting to see whether they still hold uniformly over a class of data generating processes, including the case when point identification is nearly achieved. This is important because in many cases it is possible that we actually have point identification and, in that event, $\\Theta(\\phi)$ degenerates to a singleton. 
For example, in the interval censored model, when $EY_1=EY_2$, $\\theta=EY$ is point identified.\n\nWhen point identification is indeed achieved, the frequentist coverage probability of the upper-sided BCS $\\Theta(\\phi)\\subset\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ and the asymptotic normality for the posterior of the support function still hold, because they are generally guaranteed by the semi-parametric Bernstein-von Mises theorem for $\\phi$ when $\\Theta(\\phi)$ is a singleton (e.g., Rivoirard and Rousseau 2012, Bickel and Kleijn 2011). But the lower-side BCS for $\\Theta(\\phi)$ will be empty with a positive probability.\nTheorem \\ref{t6.1}, however, does not hold anymore when $\\theta$ is point identified, as we discussed previously.\nWe further illustrate the uniformity in two examples.\n\n\\begin{exm}[Interval censored data - \\textit{continued}] We show that the upper BCS for the identified set has a uniformly correct frequentist asymptotic coverage probability.\nTo simplify our illustration, we assume $Y_1$ and $Y_2$ are independent and follow $\\mathcal{N}(\\phi_{10}, 1)$ and $\\mathcal{N}(\\phi_{20},1)$, respectively. A sequence of different $\\phi_0$ is considered, which includes the case $\\phi_{10}-\\phi_{20}=o(1)$. When $\\phi_{10}=\\phi_{20}$, however, we suppose $Y_1, Y_2$ are sampled independently. Suppose that econometricians place independent standard normal priors on $\\phi_1$ and $\\phi_2$; then the posteriors are independent, given by $\\phi_i|D_n\\sim \\mathcal{N}(\\bar{Y}_i\\frac{n}{1+n}, \\frac{1}{1+n}), i=1,2,\n$\nand $\\hat{\\phi} = \\frac{n}{1+n}(\\bar{Y}_1,\\bar{Y}_2)$ is the posterior mode of the joint distribution.\nThe support function is $S_{\\phi}(1)=\\phi_2$, $S_{\\phi}(-1)=-\\phi_1$. Let $$J(\\phi)=\\sqrt{n}\\sup_{\\|\\nu\\|=1}(S_{\\phi}(\\nu)-S_{\\hat\\phi }(\\nu))=\\sqrt{n}\\max\\left\\{\\phi_2-\\frac{n}{1+n}\\bar{Y}_2, \\frac{n}{1+n}\\bar{Y}_1-\\phi_1\\right\\},$$ and let $\\tilde{q}_{\\tau}$ be the $1-\\tau$ quantile of the posterior of $J(\\phi)$. We now show that the frequentist coverage of BCS($\\tau$)$=[\\bar{Y}_1\\frac{n}{1+n}-\\frac{\\tilde{q}_{\\tau}}{\\sqrt{n}}, \\bar{Y}_2\\frac{n}{1+n}+\\frac{\\tilde{q}_{\\tau}}{\\sqrt{n}}]$ is valid uniformly for $\\phi_0=(\\phi_{10},\\phi_{20})\\in\\Phi$, that is,\n\\begin{equation}\\label{e71}\n\\liminf_{n\\rightarrow\\infty}\\inf_{(\\phi_{10},\\phi_{20})\\in\\Phi}P_{D_n}(\\Theta(\\phi_0)\\subset \\text{BCS}(\\tau))= 1-\\tau.\n\\end{equation}\nWe can simplify $J(\\phi)$ to be $\\sqrt{n\/(1+n)}\\max\\{Z_1, Z_2\\}$ where $Z_i\\sim \\mathcal{N}(0,1)$, $i=1,2$ and $Z_1$ and $Z_2$ are independent. This implies, with $H(\\cdot)$ denoting the standard normal CDF,\n$$\n1 - \\tau = P(J(\\phi)\\leq\\tilde{q}_{\\tau}|D_n)=P\\left(\\max\\{Z_1,Z_2\\}\\leq\\tilde{q}_{\\tau}\\sqrt{\\frac{1+n}{n}}\\right)\n= H\\left(\\tilde{q}_{\\tau}\\sqrt{\\frac{1+n}{n}}\\right)^2.\n$$\nHence, $H\\left(\\tilde{q}_{\\tau}\\sqrt{(1+n)\/n}\\right)^2 = H(\\tilde{q}_{\\tau})^2 + o(1)$ and so $H(\\tilde{q}_{\\tau})^2\\rightarrow1-\\tau.$ The event $\\{\\Theta(\\phi_0)\\subset \\textrm{BCS}(\\tau)\\}$ is equivalent to $(\\bar{Y}_1-\\phi_{10})\\frac{n}{1+n}\\leq \\frac{\\tilde{q}_{\\tau}}{\\sqrt{n}}+\\frac{\\phi_{10}}{1+n}$ and $(\\bar{Y}_2-\\phi_{20})\\frac{n}{1+n}\\geq \\frac{\\phi_{20}}{1+n}- \\frac{\\tilde{q}_{\\tau}}{\\sqrt{n}}$. 
Hence,\n\\begin{eqnarray*}\n\\inf_{\\phi_0\\in\\Phi}P_{D_n}(\\Theta(\\phi_0)\\subset\\text{BCS}(\\tau))&=&\\inf_{\\phi_0\\in\\Phi}H\\left((\\frac{\\tilde{q}_{\\tau}}{\\sqrt{n}}+\\frac{\\phi_{10}}{1+n})\\frac{1+n}{\\sqrt{n}}\\right)\nH\\left((\\frac{\\tilde{q}_{\\tau}}{\\sqrt{n}}-\\frac{\\phi_{20}}{1+n})\\frac{1+n}{\\sqrt{n}}\\right)\\cr\n&=&H(\\tilde q_{\\tau})^2+o(1)\\rightarrow1-\\tau.\n\\end{eqnarray*}\n This gives (\\ref{e71}).\n\n On the other hand, if $\\phi_{20}-\\phi_{10}=o(1)$, the lower BCS for $\\Theta(\\phi)$ is empty with a large probability. To see this, for any fixed $q$, the lower BCS is $A=[\\bar{Y}_1\\frac{n}{1+n}+\\frac{q}{\\sqrt{n}}, \\bar{Y}_2\\frac{n}{1+n}-\\frac{q}{\\sqrt{n}}]$. Let $\\Delta_n=\\phi_{20}-\\phi_{10}$, then\n$\nP(A=\\emptyset)=P((\\bar{Y}_2-\\bar{Y}_1)\\frac{n}{1+n}<\\frac{2q}{\\sqrt{n}})=H(\\sqrt{2}q-\\sqrt{\\frac{n}{2}}\\Delta_n)+o(1).\n$\nSuppose $\\sqrt{n}\\Delta_n=o(1)$, then $P(A=\\emptyset)\\rightarrow H(\\sqrt{2}q)$.\nThis probability is very large for many reasonable cut-off $q$. For example, if $q=1.96$, $H(\\sqrt{2}q)=0.997.$\n\nFor a numerical illustration, set $\\phi_{10}=1, \\phi_{20}=1+\\Delta_n$ for a sequence of small $\\Delta_n$ that decreases to zero, and calculate the frequency that $\\Theta(\\phi_0)\\subset$ BCS$(0.05)$ and $A=\\emptyset.$ The model is nearly point identified, and point identification is achieved when $\\Delta_n=0.$ Results are reported in Table \\ref{t2add}.\n\n\n\n\\end{exm}\n\n\n\n\n\\begin{exm}[Missing data example - \\textit{continued}]\nConsider again the missing data example in Section \\ref{smd}, where now the true $\\phi_{10}$ is $\\phi_{10}=1-\\Delta_n$ with $\\Delta_n\\rightarrow0,$ that is, the probability of missing is close to zero. So the model is close to point identification. However, suppose we still place priors on $\\phi_1$ and $\\phi_2$ and $\\Theta(\\phi)=[\\phi_1\\phi_2, \\phi_1\\phi_2+1-\\phi_1]$ as before. Our result shows that\n\\begin{equation}\\label{eq8.2}\nP_{D_n}(\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde{q}_{\\tau}\/\\sqrt{n}})\\rightarrow 1-\\tau\n\\end{equation}\n when $\\tilde{q}_{\\tau}$ is the $1-\\tau$ quantile of the posterior of\n $$\n \\sqrt{n}\\max\\left\\{\\phi_1\\phi_2-\\phi_1-\\hat{\\phi}_{1}\\hat{\\phi}_{2}+\\hat{\\phi}_{1}, \\hat{\\phi}_{1}\\hat{\\phi}_{2}-\\phi_1\\phi_2\\right\\}.\n $$\nIt can also be shown that the coverage (\\ref{eq8.2}) holds uniformly for $\\phi_0$ inside a compact parameter space.\nIt is also easy to see that, if $\\Delta_n=o(n^{-1\/2})$, then for any $\\tau\\in(0,1)$, the lower BCS$(\\tau)$ is empty with probability approaching one. \n\n\n\nWe illustrate the above discussions using a simulated example, where $\\phi_{10}=1-\\Delta_n$ for a sequence of small $\\Delta_n$. We use the uniform priors and compute the frequency of the events that $\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde q_{0.05}\/\\sqrt{n}}$ and that the lower BCS is empty. We set $\\phi_{20}=0.5$ so that $\\hat{\\phi}_{2}$ has the maximum possible variance. Therefore, our simulation also demonstrates how sensitive the coverage frequency is to the variance of the point identified estimator. 
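\n\nAs a quick sanity check on the interval censored example, the limiting value $H(\\sqrt{2}q)$ derived above can be verified by simulation; the short Python sketch below is ours, with hypothetical choices of $n$, $q$ and the number of replications.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\nrng = np.random.default_rng(3)\n\ndef empty_lower_bcs_freq(n=500, delta=0.0, q=1.96, reps=5000):\n    # Monte Carlo check that, when phi20 - phi10 = o(n^{-1\/2}), the lower BCS\n    # A = [Ybar1*n\/(1+n) + q\/sqrt(n), Ybar2*n\/(1+n) - q\/sqrt(n)] is empty with\n    # probability approaching H(sqrt(2) q).\n    y1 = rng.normal(1.0, 1.0, size=(reps, n)).mean(axis=1)\n    y2 = rng.normal(1.0 + delta, 1.0, size=(reps, n)).mean(axis=1)\n    empty = (y2 - y1) * n \/ (1 + n) < 2 * q \/ np.sqrt(n)\n    return empty.mean(), norm.cdf(np.sqrt(2) * q)   # simulated vs limiting value\n\nprint(empty_lower_bcs_freq())\n\\end{verbatim}\nWith these default values the simulated frequency is close to the limiting value $H(\\sqrt{2}\\times 1.96)\\approx 0.997$.\n\n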
The coverage frequencies over 500 replications are summarized in Table \\ref{t2add} below.\n\n\\begin{table}[htdp]\n\\caption{Frequency of BCS(0.05) coverages for near point identification}\n\\begin{center}\n\n\n\\begin{tabular}{ccc|cccc}\n\\hline\n& & & & $\\Delta_n$ \\\\\n&$n$ & event & 0.1 & 0.05 & 0.01 & 0 \\\\\n\\hline\n &&&&&&\\\\\nInterval censoring&50 & Lower BCS$=\\emptyset$ & 0.966 & 0.94 & 0.972 & 0.956 \\\\\n && $\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde q\/\\sqrt{n}}$ & 0.952 & 0.964 & 0.952 & 0.944 \\\\\n &&&&&&\\\\\n&100 & Lower BCS$=\\emptyset$ & 0.964 & 0.96 & 0.962 & 0.962 \\\\\n &&$\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde q\/\\sqrt{n}}$ & 0.962 & 0.966 & 0.958 & 0.95 \\\\\n\\hline\n& &&&&&\\\\\nMissing data &50 & Lower BCS$=\\emptyset$ & 0.998 & 1 & 1 & 1 \\\\\n && $\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde q\/\\sqrt{n}}$& 0.95 & 0.952 & 0.942 & 0.944 \\\\\n& &&&&&\\\\\n&100 & Lower BCS$=\\emptyset$ & 0.99 & 1 & 1 & 1 \\\\\n && $\\Theta(\\phi_0)\\subset\\Theta(\\hat\\phi )^{\\tilde q\/\\sqrt{n}}$& 0.952 & 0.956 & 0.958 & 0.952 \\\\\n\\hline\n \\end{tabular}\n\n\n \\label{t2add}\n\\small\n\n\\it\nThe frequencies (over 500 replications) that the lower BCS is empty and that the upper BCS covers the true identified set are summarized. The length of the true identified set is $\\Delta_n$. The model achieves point identification when $\\Delta_n=0$.\n\n\\end{center}\n\\end{table}\n\n\nWe see that the upper BCS with 95\\% credible level has the coverage probability for the true identified set close to 0.95.\n Also, the lower BCS is empty almost all the time.\n\n \\end{exm}\n\n\n\n\n\\section{Financial Asset Pricing}\\label{s_Financial_asset_pricing}\nWe develop a detailed application to a financial asset pricing model, where the identified set is of direct interest.\n\n\\subsection{The model}\nAsset pricing models state that the equilibrium price $P_{t}^{i}$ of a financial asset $i$ is equal to\n$$\n P_{t}^{i} = E [M_{t+1}P_{t+1}^{i}|\\mathcal{I}_{t}],\\qquad i=1,\\ldots,N\n$$\n\n\\noindent where $P_{t+1}^{i}$ denotes the price of asset $i$ at period $(t+1)$, $M_{t+1}$ is the stochastic discount factor (SDF, hereafter) and $\\mathcal{I}_{t}$ denotes the information set at time $t$. In vector form this can be rewritten as\n \\begin{equation}\\label{eq9.1}\n \\iota = E [M_{t+1}R_{t+1}|\\mathcal{I}_{t}]\n \\end{equation}\n\n\\noindent where $\\iota$ is the $N$-dimensional vector of ones and $R_{t+1}$ is the $N$-dimensional vector of gross asset returns at time $(t+1)$: $R_{t+1} = (r_{1,t+1},\\ldots, r_{N,t+1})^{T}$ with $r_{i,t+1} = P_{t+1}^{i}\/P_{t}^{i}$. This model can be interpreted as a model of the SDF $M_{t+1}$ and may be used to detect the SDFs that are compatible with asset return data. Hansen and Jagannathan (1991) have obtained a lower bound on the volatility of SDFs that could be compatible with a given SDF-mean value and a given set of asset return data. Therefore, the set of SDFs $M_{t+1}$ that can price existing assets generally forms a proper set.\\\\\n\\indent Let $m$ and $\\Sigma$ denote, respectively, the vector of unconditional mean returns and covariance matrix of returns of the $N$ risky assets, that is, $m = E (R_{t+1})$ and $\\Sigma = E (R_{t+1} - m)(R_{t+1} - m)^{T}$. Denote $\\mu = E (M_{t+1})$ and $\\sigma^{2} = Var(M_{t+1})$, which are partially identified. We assume that $m$, $\\Sigma$, $\\mu$ and $\\sigma^{2}$ do not vary with $t$. 
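\n\nTo fix ideas before turning to the bound, the following toy Python sketch (ours; the states, probabilities and returns are hypothetical) illustrates the unconditional implication $\\iota=E[M_{t+1}R_{t+1}]$ of (\\ref{eq9.1}) and why admissible SDFs typically form a proper set: with fewer assets than states, many SDFs price the same returns.\n\\begin{verbatim}\nimport numpy as np\n\n# Three equally likely states and two assets: with fewer assets than states,\n# the SDF that prices the returns is not unique, so admissible SDFs form a set.\npi = np.full(3, 1.0 \/ 3.0)                    # state probabilities\nR = np.array([[1.02, 1.02, 1.02],             # gross returns of a riskless asset\n              [0.90, 1.00, 1.25]])            # and of a risky asset\nA = R * pi                                    # A @ M = E[M R] for a state-wise SDF M\nM0 = np.linalg.lstsq(A, np.ones(2), rcond=None)[0]   # one SDF pricing both assets\nw = np.linalg.svd(A)[2][-1]                   # direction in the null space of A\nfor t in (0.0, 0.1):\n    M = M0 + t * w                            # a one-parameter family of SDFs\n    mu, sig2 = pi @ M, pi @ (M - pi @ M) ** 2 # mean and variance of the SDF\n    print(np.round(A @ M, 10), round(mu, 4), round(sig2, 4))   # E[M R] stays (1, 1)\n\\end{verbatim}\nEach member of this family has a different mean and variance; the Hansen-Jagannathan bound recalled next gives the lower bound on $\\sigma^{2}$, as a function of $\\mu$, that defines the identified set studied below.\n\n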
Hansen and Jagannathan (1991) showed that given $(m,\\Sigma)$, which are point identified by the observed $\\{R_{t+1}\\}$, if the SDF $M_{t+1}$ satisfies (\\ref{eq9.1}), then its variance $\\sigma^2$ should be no smaller than:\n \\begin{eqnarray}\\label{eq7.1}\n \\sigma_{\\phi}^{2}(\\mu) & = & (\\iota - \\mu m)^{T}\\Sigma^{-1} (\\iota - \\mu m) \\equiv \\phi_{1}\\mu^{2} - 2\\phi_{2}\\mu + \\phi_{3}\\cr\n & & \\nonumber\\\\\n \\textrm{with } \\phi_{1} & = & m^{T}\\Sigma^{-1}m, \\qquad \\phi_{2} = m^{T}\\Sigma^{-1}\\iota, \\qquad \\phi_{3} = \\iota^{T}\\Sigma^{-1}\\iota.\n \\end{eqnarray}\n\n\\noindent Therefore, an SDF correctly prices an asset only if, for given $(m,\\Sigma)$, its mean $\\mu$ and variance $\\sigma^{2}$ are such that $\\sigma^{2}\\geq \\sigma_{\\phi}^{2}(\\mu)$, and in this case the SDF is called \\textit{admissible}. Inadmissible SDFs do not satisfy model (\\ref{eq9.1}).\n\nDefine the set of admissible SDF's means and variances:\n \\begin{equation*}\n \\Theta(\\phi) = \\left\\{(\\mu,\\sigma^{2})\\in \\Theta; \\: \\sigma_{\\phi}^{2}(\\mu) - \\sigma^{2} \\leq 0\\right\\}\n \\end{equation*}\n\n\\noindent where $\\phi = (\\phi_{1},\\phi_{2},\\phi_{3})^{T}$ and $\\Theta\\subset \\mathbb{R}_{+}\\times\\mathbb{R}_{+}$ is a compact set that we can choose based on experiences. Usually, we can fix upper bounds $\\bar{\\mu}>0$ and $\\bar{\\sigma}>0$ as large as we want and take $\\Theta = [0,\\bar{\\mu}] \\times [0,\\bar{\\sigma}^{2}]$. In practice, $\\bar{\\mu}$ and $\\bar{\\sigma}$ must be chosen sufficiently large such that $\\Theta(\\phi)$ is non-empty. We also point out that to be consistent with our developed theory, the parameter space is chosen to be compact. Thus, the space for $\\sigma^2$ includes zero. In practice, one can require $\\sigma^2\\geq \\epsilon$ for a sufficiently small $\\epsilon>0.$ For simplicity, we keep the current parameter space for $\\sigma^2$, which is also used sometimes in the literature. Making inference on $\\Theta(\\phi)$ allows to check whether a family of SDF (and then a given utility function) prices a financial asset correctly or not. Frequentist inference for this set is carried out in Chernozhukov, Kocatulum and Menzel (2012).\n\n Using our previous notation, we define $\\theta = (\\mu,\\sigma^{2})$ and\n \\begin{displaymath}\n \\Psi(\\theta,\\phi) = \\sigma_{\\phi}^2(\\mu) - \\sigma^{2},\n \\end{displaymath}\nwhich gives a moment inequality model.\n\n\\subsection{Support function}\n\nIn this case $\\Psi(\\theta,\\phi)$ is convex in $\\theta$. More precisely, $\\Psi(\\theta,\\phi)$ is linear in $\\sigma^{2}$ and strictly convex in $\\mu$ (because $\\Sigma$ is positive definite so $\\phi_{1}>0$). Assumptions \\ref{ass5.1}- \\ref{ass5.5} are easy to verify except for Assumptions \\ref{ass5.6} and \\ref{ass5.5}\\textit{(i)} and \\textit{(iv)}. However, it can be shown that the support function is differentiable at $\\phi_{0}$ without Assumption \\ref{ass5.6} being satisfied. So, our Bayesian analysis on the support function of $\\Theta(\\phi)$ still goes through. 
Assumption \\ref{ass5.5} \\textit{(i)} and \\textit{(iv)} must be checked case by case (that is, for every region of values of $\\nu$) since $\\lambda(\\nu,\\phi)$ takes a different expression in each case, see Appendix \\ref{ss_support_set_HJ} in the supplementary material.\n\n\n\n\nWe can rewrite the support function to be $S_{\\phi}(\\nu)=\\Xi(\\nu,\\phi)^T\\nu$, where\n \\begin{eqnarray*}\n \\Xi(\\nu,\\phi) & = & \\arg\\max_{\\theta\\in\\Theta}\\left\\{\\nu^T\\theta; \\:\\Psi(\\theta,\\phi)\\leq 0\\right\\}\\\\\n & = & \\arg\\max_{0\\leq\\mu<\\bar{\\mu},\\:0<\\sigma^{2}<\\bar{\\sigma}^{2}}\\left\\{\\nu_{1}\\mu + \\nu_{2}\\sigma^{2} - \\lambda(\\nu,\\phi)(\\phi_{1}\\mu^{2} - 2\\phi_{2}\\mu + \\phi_{3} - \\sigma^{2})\\right\\}\n \n \\end{eqnarray*}\n\n\\noindent where $\\nu=(\\nu_{1},\\nu_{2})$. The support function and $\\Xi(\\nu,\\phi)$ have explicit expressions, but they are very long and complicated. Thus, we present them in Appendix \\ref{ss_support_set_HJ}\n\n\\begin{comment}\n\\begin{thm}\n Let $p\\in\\mathbb{S}^{2}$ and denote $p=(p_{1},p_{2})$. Then we have:\n\n \\begin{enumerate}\n \\item $p_{2}<0$ and $p_{1}>0$:\n \\begin{displaymath}\n \\Xi(p,\\phi) = \\left\\{\\begin{array}\n {cc} \\left(\\frac{\\phi_{2}}{\\phi_{1}} - \\frac{p_{1}}{2p_{2}\\phi_{1}},\\frac{p_{1}^{2}}{4p_{2}^{2}\\phi_{1}} + \\frac{\\phi_{1}\\phi_{3} - \\phi_{2}^{2}}{\\phi_{1}}\\right) & \\textrm{ if } \\bar{\\mu} \\geq \\frac{\\phi_{2}}{\\phi_{1}} - \\frac{p_{1}}{2p_{2}\\phi_{1}} \\textrm{ and }\\bar{\\sigma}^{2}\\geq \\frac{p_{1}^{2}}{4p_{2}^{2}\\phi_{1}} + \\frac{\\phi_{1}\\phi_{3} - \\phi_{2}^{2}}{\\phi_{1}}\\\\\n &\\\\\n \\left(\\bar{\\mu}, \\phi_{1}\\bar{\\mu}^{2} - 2\\phi_{2}\\bar{\\mu} + \\phi_{3}\\right)& \\textrm{ if } \\bar{\\mu} \\leq \\frac{\\phi_{2}}{\\phi_{1}} - \\frac{p_{1}}{2p_{2}\\phi_{1}} \\textrm{ and }\\bar{\\sigma}^{2}\\geq \\phi_{1}\\bar{\\mu}^{2} - 2\\phi_{2}\\bar{\\mu} + \\phi_{3}\\\\\n & \\\\\n \\left(\\frac{\\phi_{2} + \\sqrt{\\phi_{2}^{2} - \\phi_{1}(\\phi_{3} - \\bar{\\sigma}^{2})}}{\\phi_{1}},\\bar{\\sigma}^{2}\\right)& \\textrm{ if } \\bar{\\mu} \\geq \\frac{\\phi_{2} + \\sqrt{\\phi_{2}^{2} - \\phi_{1}(\\phi_{3} - \\bar{\\sigma}^{2})}}{\\phi_{1}} \\textrm{ and }\\bar{\\sigma}^{2}\\leq \\frac{p_{1}^{2}}{4p_{2}^{2}\\phi_{1}} + \\frac{\\phi_{1}\\phi_{3} - \\phi_{2}^{2}}{\\phi_{1}}\n \\end{array}\\right.\n \\end{displaymath}\n\n \\item $p_{2}<0$ and $p_{1}<0$:\n \\begin{displaymath}\n \\Xi(p,\\phi) = \\left\\{\\begin{array}\n {cc} (0,\\phi_{3}) & \\textrm{ if } 2p_{2}\\phi_{2}\\leq p_{1}<0 \\textrm{ and }\\bar{\\sigma}^{2}\\geq \\phi_{3}\\\\\n &\\\\\n \\left(\\frac{\\phi_{2} + \\sqrt{\\phi_{2}^{2} - \\phi_{1}(\\phi_{3} - \\bar{\\sigma}^{2})}}{\\phi_{1}},\\bar{\\sigma}^{2}\\right)& \\textrm{ if } 2p_{2}\\phi_{2}\\leq p_{1}<0,\\;\\bar{\\mu} \\geq \\frac{\\phi_{2} + \\sqrt{\\phi_{2}^{2} - \\phi_{1}(\\phi_{3} - \\bar{\\sigma}^{2})}}{\\phi_{1}}\\\\\n & \\textrm{ and }\\bar{\\sigma}^{2}\\leq \\phi_{3}\\\\\n \\textrm{ same as in 1. 
} & \\textrm{ if } p_{1}<2p_{2}\\phi_{2}<0\n \n \n \n \n \n \\end{array}\\right.\n \\end{displaymath}\n\n \\item $p_{2}>0$ and $p_{1}>0$:\n \\begin{displaymath}\n \\Xi(p,\\phi) = \\left\\{\\begin{array}\n {cc} (\\bar{\\mu},\\bar{\\sigma}^{2}) & \\textrm{ if } \\bar{\\sigma}^{2}\\geq \\phi_{1}\\bar{\\mu}^{2} - 2\\phi_{2}\\bar{\\mu} + \\phi_{3}\\\\\n \\left(\\frac{\\phi_{2} + \\sqrt{\\phi_{2}^{2} - \\phi_{1}(\\phi_{3} - \\bar{\\sigma}^{2})}}{\\phi_{1}},\\bar{\\sigma}^{2}\\right) & \\textrm{ if } \\bar{\\sigma}^{2} < \\phi_{1}\\bar{\\mu}^{2} - 2\\phi_{2}\\bar{\\mu} + \\phi_{3} \\textrm{ and } \\bar{\\mu}> \\frac{\\phi_{2}}{\\phi_{1}}\n \\end{array}\\right.\n \\end{displaymath}\n\n \\item $p_{2}>0$ and $p_{1}<0$:\n \\begin{displaymath}\n \\Xi(p,\\phi) = \\left\\{\\begin{array}\n {cc} (0,\\bar{\\sigma}^{2}) & \\textrm{ if } \\bar{\\sigma}^{2}\\geq \\phi_{3}\\\\\n \\left(\\mu_{min},\\bar{\\sigma}^{2}\\right) & \\textrm{ otherwise}\n \\end{array}\\right.\n \\end{displaymath}\n \\noindent where $\\mu_{\\min} = \\min\\{\\mu\\in[0,\\bar{\\mu}];\\:\\phi_{1}\\mu^{2} - 2\\phi_{2}\\mu + \\phi_{3} \\leq \\bar{\\sigma}^{2}\\}$.\n \\end{enumerate}\nThe support function is given by\n$$\nS_{\\phi}(p)=p^T\\Xi(p,\\phi), \\quad\\| p\\|=1.\n$$\n\n \\end{thm}\n\n\\noindent Thus, $\\Xi(p,\\phi)$ is a singleton and the support function can be easily obtained. We remark that in cases 3 and 4 the constraint given by the Hansen-Jagannathan bound is no longer binding, so that the corresponding Lagrange multiplier is $0$.\n\n\\end{comment}\n\\subsection{Dirichlet process prior}\n Let $F$ denote a probability distribution. The Bayesian model is $R_t|F\\sim F$ and $ (m, \\Sigma)=(m(F), \\Sigma(F))$, where\n $$\nm(F)=\\int r F(dr),\\quad \\Sigma(F)=\\int rr^TF(dr)-\\int rF(dr)\\int rF(dr)^T.\n $$\n Let us impose a Dirichlet process prior for $F$, with parameter $v_0$ and base probability measure $Q_0$ on $\\mathbb{R}^N$. By Sethuraman (1994)'s decomposition, the Dirichet process prior induces a prior for $(m, \\Sigma)$ as:\n $m=\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j$, and $\\Sigma=\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j\\xi_j^T-\\sum_{i=1}^{\\infty}\\alpha_i\\xi_i\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j^T$\n where $\\xi_j$ are independently sampled from $Q_0$; $\\alpha_j=u_j\\prod_{l=1}^j(1-u_l)$ with $\\{u_i\\}_{i=1}^n$ drawn from Beta$(1,v_0)$. These priors then induce a prior for $\\phi.$ The posterior distribution for $(m,\\Sigma)$ can be calculated explicitly:\n \\begin{eqnarray*}\n \\Sigma|D_n&\\sim&(1-\\gamma)\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j\\xi_j^T+\\gamma\\sum_{t=1}^n\\beta_tR_tR_t^n\\cr\n &&-\\left( (1-\\gamma)\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j+\\gamma\\sum_{t=1}^n\\beta_tR_t\\right)\\left( (1-\\gamma)\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j+\\gamma\\sum_{t=1}^n\\beta_tR_t\\right)^T,\n \\end{eqnarray*}\n $$\n m|D_n\\sim (1-\\gamma)\\sum_{j=1}^{\\infty}\\alpha_j\\xi_j+\\gamma\\sum_{t=1}^n\\beta_tR_t,\\quad \\gamma\\sim\\text{Beta}(n,v_0),\\quad \\{\\beta_j\\}_{j=1}^n\\sim Dir(1,...,1).\n $$\nWe can set a truncation $K>0$, so the infinite sums in the posterior representation are replaced with a truncated sum. We can then simulate the posterior for $\\phi$ based on the distributions of $\\Sigma|D_n$, $m|D_n$ and (\\ref{eq7.1}).\n\n\\subsection{Simulation}\nWe present a simple simulated example. The returns $R_t$ are generated from a 2-factor model:\n$R_t=\\Lambda f_t+u_t+2\\iota$, where $\\Lambda$ is a $N\\times 2$ matrix of factor loadings. The error terms $\\{u_{it}\\}_{i\\leq N, t\\leq n}$ are i.i.d. uniform U$[-2,2]$. 
Besides, the components of $\\Lambda$ are standard normal, and the factors are also uniform U$[-2,2]$. The true $m=ER_t=2\\iota$, $\\Sigma=\\Lambda\\Lambda^{T}+I_N$.\n\n\n\nWe set $N=5, n=200$. When the posterior is calculated, the DGP's distributions and the factor model structure are treated as unknown, and we apply the nonparametric Dirichlet process prior on the CDF of $R_t-m$, with parameter $v_0=3$, and base measure $Q_0=\\mathcal{N}(0,1)$. We use a uniform prior for $(\\sigma^2,\\mu)$, and obtain the posterior distributions for $(m,\\Sigma, \\phi_1,\\phi_2,\\phi_3,\\sigma^2,\\mu)$. More concretely, the prior is assumed to be:\n$$\n\\pi(\\sigma^2,\\mu|\\phi)=\\pi(\\sigma^2|\\phi,\\mu)\\pi(\\mu);\\quad \\pi(\\sigma^2|\\phi, \\mu)\\sim U[\\sigma^2_{\\phi}(\\mu), \\bar{\\sigma}^2], \\pi(\\mu)\\sim U[0,\\bar{\\mu}],\n$$\nwhere $\\mu$ and $\\phi$ are a priori independent. We sample 1,000 draws from the posterior of $(\\phi, \\sigma^2,\\mu)$. Each time we first draw $(m,\\Sigma)$ from their marginal posterior distributions, from which we obtain the posterior draw of $\\phi$ via (\\ref{eq7.1}). In addition, we draw $\\mu$ uniformly from $[0,\\bar{\\mu}]$, and finally $\\sigma^2$ uniformly from $[\\sigma^2_{\\phi}(\\mu), \\bar{\\sigma}^2]$, where $\\sigma^2_{\\phi}(\\mu)$ is calculated based on the drawn $\\phi$ and $\\mu$.\n\n\n\nThe posterior mean $(\\hat\\phi_1,\\hat\\phi_2,\\hat\\phi_3)$ of $\\phi$ is computed, based on which we calculate a Bayesian estimate of the boundary of the identified set (we set $\\bar{\\mu}=1.4$ and $\\bar{\\sigma}^2=6$):\n$$\n\\partial\\Theta(\\hat\\phi)=\\{\\mu\\in[0,\\bar{\\mu}], \\sigma^2\\in[0,\\bar{\\sigma}^2]: \\sigma^2=\\hat\\phi_1\\mu^2-2\\hat\\phi_2\\mu+\\hat\\phi_3\\},\n$$\nwhich is helpful for computing the BCS for the identified set. In addition, we estimate the support function $S_{\\phi}(\\nu)$ using the posterior mean of $\\phi$.\n In Figure \\ref{f1}, we plot the Bayesian estimates of the support function for two cases: $\\nu_2\\in [0,1]$, $\\nu_1=\\sqrt{1-\\nu_2^2}$, and $\\nu_2\\in [-1,0]$, $\\nu_1=-\\sqrt{1-\\nu_2^2}$.\n\n \\begin{figure}[htbp]\n\\begin{center}\n\\caption{Posterior estimates of the support function. The left panel is for $\\nu_2\\in [0,1]$, $\\nu_1=\\sqrt{1-\\nu_2^2}$; the right panel is for $\\nu_2\\in [-1,0]$, $\\nu_1=-\\sqrt{1-\\nu_2^2}$}\n\\includegraphics[width=8cm]{su1.eps}\n\\includegraphics[width=8cm]{su2.eps}\n\\label{f1}\n\\end{center}\n\\end{figure}\n\n\n\n Using the support function, we calculate the 95\\% posterior quantile $q_{\\tau}$ of $J(\\phi)$, based on which we construct the BCS $\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ for the identified set. The boundary of $\\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ is given by\n $$\n\\partial \\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}= \\left\\{\\mu\\in[0,\\bar{\\mu}], \\sigma^2\\in[0,\\bar{\\sigma}^2]: \\inf_{z}\\sqrt{|z-\\mu|^2+|\\sigma^2_{\\hat\\phi }(z)-\\sigma^2|^2}=q_{\\tau}\/\\sqrt{n}\\right\\}.\n $$\nIn Figure \\ref{f2}, we plot the posterior draws of $(\\mu, \\sigma^2)$, $\\partial\\Theta(\\hat\\phi), \\partial \\Theta(\\hat\\phi )^{q_{\\tau}\/\\sqrt{n}}$ and the boundary of the true identified set. The scatter plot of posterior draws, the estimated boundaries and the BCS show that the true identified set is well estimated.\n\n\n\n\n\n \\begin{figure}[htbp]\n\\begin{center}\n\\caption{1,000 posterior draws of $(\\mu,\\sigma^2)$. 
The solid line is the boundary of the true identified set; the dashed line represents the estimated boundary using the posterior mean; the dotted line gives the 95\\% BCS of the identified set. Plots are obtained based on a single set of simulated data. The BCS also covers some negative values of $\\sigma^2$. In practice, we can truncate it to ensure it is always positive.}\n\\includegraphics[width=7cm]{f2.eps}\n\\label{f2}\n\\end{center}\n\\end{figure}\n\n\n\\begin{comment}\n \\begin{table}[htdp]\n\\caption{Posterior estimates of $\\phi$}\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\n\\hline\n&true&posterior mean & posterior mode\\\\\n\\hline\n$\\phi_1$&19.01&13.11& 12.18\\\\\n$\\phi_2$&9.51&6.67& 6.23\\\\\n$\\phi_3$&4.75&3.4& 3.21\\\\\n\\hline\n\\end{tabular}\n\\label{table3}\n\\small\n\n\\end{center}\n\\end{table}\n\n\n\\end{comment}\n\n\n\n\\section{Conclusion}\\label{s_Conclusion}\n\nWe propose a semi-parametric Bayesian procedure for inference about partially identified models. Bayesian approaches are appealing in many respects. The classical Bayesian approach in this literature assumes a known likelihood function. However, in many applications econometric models only identify a set of moment inequalities, and therefore assuming a known likelihood function suffers from the risk of misspecification and may result in inconsistent estimation of the identified set. On the other hand, Bayesian approaches that use moment-condition-based likelihoods, such as the limited-information and exponentially tilted empirical likelihoods, guarantee consistency but lack a probabilistic interpretation and do not provide an easy way to make inference about the identified set. Our approach, in contrast, only requires a set of moment conditions but still possesses a pure Bayesian interpretation. Importantly, we can conveniently analyze both the partially identified parameter and its identified set. Moreover, we shed light on many appealing features of the proposed approach, such as its computational efficiency for subset inference and projections.\n\n\nOur analysis primarily focuses on identified sets that are closed and convex. These sets are completely characterized by their support functions. By imposing a prior on the support function, we construct its posterior distribution. It is shown that the support function of a very general moment inequality model admits a linear expansion, and its posterior is consistent. The Bernstein-von Mises theorem for the posterior of the support function is proven.\n\n\n\nSome researchers may argue that frequentist and Bayesian approaches are asymptotically equivalent because the prior is ``washed away\" as the sample size increases. So why bother with Bayesian asymptotics? Interestingly, this is no longer the case under partial identification. As was shown by Moon and Schorfheide (2012), when making inference about the partially identified parameter, the BCS can be strictly smaller than the FCS. Therefore, the two fundamental statistical methodologies provide different inferences that are not necessarily equivalent even asymptotically. This paper completes such a comparison. We also establish the large-sample correspondence of the two approaches for the identified set. It is also illustrated that the results hold in a uniform sense, in particular when point identification is nearly achieved.\n\n\n\n\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}