diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzisff" "b/data_all_eng_slimpj/shuffled/split2/finalzzisff"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzisff"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\\IEEEPARstart{T}{hrough} capturing both intensities and directions from sampled light rays, a light field achieves high-quality view synthesis without the need for complex and heterogeneous information (e.g., geometry and texture). More importantly, benefiting from the light field rendering technology~\\cite{LFrendering}, a light field is capable of producing photorealistic views in real-time, regardless of the scene complexity or non-Lambertian effects. This high-quality rendering usually requires light fields with disparities between adjacent views of less than one pixel, i.e., the so-called densely-sampled light field (DSLF). However, typical DSLF capture either suffers from a long acquisition time (e.g., the DSLF gantry system~\\cite{LFrendering,LFrig}) or falls into the well-known resolution trade-off problem, i.e., the light fields are sampled sparsely in either the angular or the spatial domain due to the limitation of the sensor resolution~\\cite{Lytro}.\n\nRecently, a more promising way is the fast capture of a sparsely-sampled (in the angular domain) light field followed by a direct reconstruction or a depth-based view synthesis~\\cite{DoubleCNN,shi2020learning} with advanced deep learning techniques. On the one hand, typical learning-based reconstruction methods~\\cite{LFCNN,WuEPICNN2018,YeungECCV2018} employ multiple convolutional layers to map the low angular resolution light field to the DSLF. But due to the limited receptive range of convolutional filters~\\cite{Long2014Do}, the networks fail to collect enough information among the correspondences when dealing with large disparities, leading to aliasing effects in the reconstructed light field. On the other hand, depth-based view synthesis methods address the large disparity problem through plane sweep (depth estimation), and then synthesize novel views using learning-based prediction~\\cite{DoubleCNN,DeepStereo,mildenhall2019local}. However, such methods require depth consistency along the angular dimension, and thus often fail to solve the depth ambiguity caused by non-Lambertian effects.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/teaser.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{We propose a spatial-angular attention module embedded in a multi-scale reconstruction structure for learning-based light field reconstruction. The network perceives correspondence pixels in a non-local manner, and is able to produce high-quality reconstructions using sparse input. In the bottom results, the input light fields are upsampled by using nearest interpolation for better demonstration. Light fields courtesy of Moreschini~\\textit{et al.}~\\cite{ICME2018} and Adhikarla~\\textit{et al.}~\\cite{kiran2017towards}.}\n\\label{fig:Teaser}\n\\end{figure}\n\nIn this paper, we propose a Spatial-Angular Attention Network, termed SAA-Net, to achieve DSLF reconstruction from a sparse input. The proposed SAA-Net perceives correspondences in the Epipolar Plane Image (EPI) in a non-local manner, solving the aforementioned non-Lambertian effects and large disparities in a unified framework (Sec. \\ref{Sec:Network}). Specifically, the SAA-Net is composed of two parts: a spatial-angular attention module and a U-net backbone.
Motivated by the non-local attention mechanism in~\\cite{wang2018non-local, zhang2018self}, for each pixel in the input light field, the Spatial-Angular Attention Module (termed SAAM for short) computes the responses from all the positions in its epipolar plane, and produces an attention map that records the correspondences along the angular dimension, as shown in Fig. \\ref{fig:Teaser} (top). This correspondence information in the attention map is then applied to drive the reconstruction in the angular dimension via multiplication and deconvolution.\n\nTo efficiently perform the non-local attention, we propose a convolutional neural network with a multi-scale reconstruction structure. The network follows the basic architecture of the U-net, i.e., an encoder-decoder structure with skip connections. The encoder compresses the input light field in the spatial dimensions and removes redundant information for the SAAM. Rather than simply reconstructing the light field at the end of the network, we propose a multi-scale reconstruction structure by performing deconvolution along the angular dimension in each skip connection branch, as shown in Fig. \\ref{fig:Teaser} (top). The proposed multi-scale reconstruction structure maintains the view consistency in the low spatial scale while preserving fine details in the high spatial scales.\n\nFor the network training, we propose a spatial-angular perceptual loss that is specifically designed for the high-dimensional light field data (Sec. \\ref{Sec:training}). Rather than computing the high-level feature loss~\\cite{dosovitskiy2016generating, Johnson2016Perceptual} by feeding each view in the light field to a 2D CNN (e.g., the commonly-used VGG~\\cite{simonyan2015very}), we pretrain a 3D auto-encoder that considers the consistency in both the spatial and angular dimensions of the light field. We demonstrate the superiority of the SAA-Net by performing extensive evaluations on various light field datasets. The proposed network produces high-quality DSLFs on challenging cases with both non-Lambertian effects and large disparities, as illustrated in Fig. \\ref{fig:Teaser} (bottom). In summary, we make the following contributions\\footnote{We will release the source code of this work upon acceptance.}:\n\\begin{itemize}\n \\item A spatial-angular attention module that perceives correspondences non-locally in the epipolar plane;\n \\item A multi-scale reconstruction structure for efficiently performing the non-local attention in the low spatial scale while also preserving the high frequencies;\n \\item A spatial-angular perceptual loss specifically designed for the high-dimensional light field data.\n\\end{itemize}\n\n\\section{Related Work}\n\\subsection{Light Field Reconstruction}\nFirst, we give a brief review of research on light field reconstruction (or view synthesis), organized by whether the depth information is explicitly used.\n\n\\textbf{Depth image-based view synthesis.} Typically, these kinds of approaches first estimate the depth of a scene~\\cite{Tao,Occ,huang2017robust,zhang2016robust}, then warp and blend the input views to synthesize a novel view~\\cite{DeepStereo,DoubleCNN,soft3D,shi2020learning}. Conventional light field depth estimation approaches follow the pipeline of stereo matching~\\cite{scharstein2002taxonomy}, i.e., cost computation, cost aggregation (or cost volume filtering) and post refinement.
The main difference is that a light field converts the disparity from a discrete space into a continuous space~\\cite{Wanner}, deriving various depth cues specific to light fields, e.g., structure tensor-based local direction estimation~\\cite{Wanner} and depth from defocus~\\cite{Tao,Occ}. Also, some learning-based approaches combine the depth estimation pipeline described above with 2D convolution-based feature extraction, 3D convolution-based cost volume refinement and depth regression~\\cite{kendall2017end,tsaiattention}. For novel view synthesis, input views are warped to the novel viewpoints with sub-pixel accuracy using bilinear interpolation and blended in different manners, e.g., total variation optimization~\\cite{Wanner}, soft blending~\\cite{soft3D} and learning-based synthesis~\\cite{Zheng2018ECCV}.\n\nRecently, researchers have mainly focused on maximizing the quality of synthesized views based on deep learning techniques. Flynn \\textit{et al.}~\\cite{DeepStereo} proposed a learning-based method to synthesize novel views with predicted probabilities and colors for each depth plane. Kalantari \\textit{et al.}~\\cite{DoubleCNN} further employed a sequential network to infer depth (disparity) and color, and optimized the model via end-to-end training. Shi \\textit{et al.}~\\cite{shi2020learning} developed a convolutional network that fuses low-level pixels and high-level features in a unified framework. Zhou \\textit{et al.}~\\cite{zhou2018stereo} introduced a learning-based MultiPlane Image (MPI) representation that infers a novel view by the alpha blending of different images. Mildenhall \\textit{et al.}~\\cite{mildenhall2019local} further proposed to use multiple MPIs to synthesize a local light field.\n\n\\textbf{Reconstruction without explicit depth.} These kinds of approaches treat the problem of light field reconstruction as the approximation of the plenoptic function. In the Fourier domain, the sparse sampling in the angular dimension produces overlaps between the original spectrum and its replicas, leading to aliasing effects. Classical approaches~\\cite{chai2000plenoptic,zhang2003spectral} consider a reconstruction filter (usually in a wedge shape) to extract the original signal while filtering out the aliasing high-frequency components. For instance, Vagharshakyan \\textit{et al.}~\\cite{Shearlet} utilized an adapted discrete shearlet transform in the Fourier domain to remove the high-frequency spectra that introduce aliasing effects. Shi \\textit{et al.}~\\cite{LFfourier} performed DSLF reconstruction as an optimization for sparsity in the continuous Fourier domain.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/RF.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{Analysis of reconstruction quality in terms of the network receptive field and the disparity range of the scene. For a scene with small disparities, both network (a) with a small receptive field ($27\\times27$ pixels) and network (b) with a large receptive field ($53\\times53$ pixels) are able to reconstruct a high-quality light field. However, for a scene with large disparities, network (c) with a small receptive field suffers from severe aliasing effects, while network (d) with a large receptive field can still produce plausible results. We show the sparsely-sampled inputs on the top row and the reconstructed results on the bottom. The receptive field of each network is visualized with a green box.
The input EPIs are stretched along the angular dimension for better demonstration.}\n\\label{fig:RF}\n\\end{figure*}\n\nIn recent years, some learning-based approaches have also been proposed for depth-independent reconstruction~\\cite{LFCNN,WuEPICNN2018,YeungECCV2018,wang2018end}. Zhu \\textit{et al.}~\\cite{zhu2019revisiting} proposed an auto-encoder that combines convolutional layers and ConvLSTM layers~\\cite{shi2015convolutional}. To explicitly address the aliasing effects, Wu \\textit{et al.}~\\cite{WuEPICNN2018} took advantage of the clear texture structure of the EPI and proposed a ``blur-restoration-deblur'' framework. However, when applying a large blur kernel for large disparities, the approach tends to fail at recovering the high-frequency details, and thus produces blurry results. Wang \\textit{et al.}~\\cite{wang2018end} further proposed to apply a 3D CNN that takes a 3D slice as the input. Yeung \\textit{et al.}~\\cite{YeungECCV2018} directly fed the entire 4D light field into a 4D convolutional network, and applied a coarse-to-fine model to iteratively refine the spatial and angular dimensions of the light field. Wu \\textit{et al.}~\\cite{wu2019learning} proposed an evaluation network for EPIs with different shear amounts, termed the sheared EPI structure. In this structure, the depth information is implicitly used to select a well-reconstructed EPI. However, the performance of these networks is limited due to the finite receptive field of the convolutional neurons, especially when handling the large disparity problem.\n\n\\subsection{Attention Mechanism}\nAttention mechanisms were first built to imitate human perception, which mainly focuses on the salient parts of a scene~\\cite{itti1998model,rensink2000dynamic,corbetta2002control}. Vaswani \\textit{et al.}~\\cite{vaswani2017attention} indicated that the attention mechanism is able to solve the long-term dependency problem even without being embedded in a recurrent or convolutional backbone. Therefore, attention mechanisms have recently been developed to enable non-local perception in the spatial or temporal dimensions~\\cite{zhang2019deep}.\n\nTo achieve non-local perception, Hu \\textit{et al.}~\\cite{hu2019senet} and Woo \\textit{et al.}~\\cite{woo2018cbam} proposed to use a global pooling (max-pooling or average-pooling) followed by a multi-layer perceptron to aggregate the entire information in the spatial dimension. Tsai \\textit{et al.}~\\cite{tsaiattention} introduced an attention module in the angular dimension to weight the contribution of each view in a light field. Vaswani \\textit{et al.}~\\cite{vaswani2017attention} proposed to use a weighted average of the responses from all the positions with respect to a certain position in the latent space, which is called self-attention (also known as intra-attention). Alternatively, Wang \\textit{et al.}~\\cite{wang2018non-local} achieved self-attention by using matrix multiplication between reshaped feature maps, termed non-local attention. For a high-dimensional task like video classification, the proposed module reshapes the 4D tensor (time, height, width and channel) into a 2D matrix. Zhang \\textit{et al.}~\\cite{zhang2018self} further extended this idea into a Generative Adversarial Network (GAN). Rather than using the non-local attention mechanism, Wang \\textit{et al.}~\\cite{wang2019learning} proposed a parallax attention module to compute the response across two stereo images.
For each epipolar line in the stereo images (feature maps), the 2D matrices (width and channel) are multiplied to produce a sparse attention map that implies correspondences. In this paper, we extend the non-local attention mechanism to high-dimensional light field data. For each pixel in the input 3D light field, the attention is computed in the 2D epipolar plane, rather than in the epipolar line as in~\\cite{wang2019learning} or in the entire 3D data space as in~\\cite{wang2018non-local}.\n\n\\section{Problem Analysis}\\label{Sec:Problem}\nIn the following analysis, we empirically show that the performance of a learning-based light field method is closely related to the receptive range of its neurons (convolutional filters), especially when addressing the large disparity problem. Deep neural networks have been proven to be a powerful technique for solving ill-posed inverse problems~\\cite{jin2017deep}. In the light field reconstruction problem, the performance of a deep neural network mainly depends on two factors: the disparity range of the scene (input light field) and the network structure. Since the first factor is normally unalterable once the light field is acquired, typical deep learning-based approaches pursue a more appropriate architecture for higher performance~\\cite{LFCNN,DoubleCNN,WuEPICNN2018,YeungECCV2018}. Among those deep learning-based approaches, the depth-based view synthesis methods convert the feature maps into a physically meaningful depth map, while depth-independent methods directly map them to novel views. Essentially, both kinds of approaches employ convolutional filters to generate responses (feature maps) between corresponding pixels.\n\nTo quantify the capability to capture correspondences, we apply the concept of receptive field introduced in~\\cite{Long2014Do,zhou2015object}. The receptive field measures the number of pixels that are connected to a particular filter in the CNN, i.e., the number of corresponding pixels perceived by the convolutional filter.\n\nWe analyse the reconstruction qualities of two networks with the same structure but different receptive field sizes, as illustrated in Fig. \\ref{fig:RF}. For a scene with small disparities (about 3 pixels in the demonstrated example), both the network with a small receptive field ($27\\times27$ pixels) and the network with a large receptive field ($53\\times53$ pixels) can reconstruct high-angular-resolution light fields (EPIs) with view consistency, as shown in Fig. \\ref{fig:RF}(a) and Fig. \\ref{fig:RF}(b). However, for a scene with large disparities (about 9 pixels), the network with a small receptive field is not able to collect enough information from the corresponding pixels of its center point, as shown clearly at the top of Fig. \\ref{fig:RF}(c). Note that the actual size of the receptive field can be smaller than its theoretical size~\\cite{zhou2015object}. Consequently, the reconstructed result suffers from severe aliasing effects, as shown at the bottom of Fig. \\ref{fig:RF}(c). In comparison, the network with a large receptive field can still produce a high-quality result. In this example, the input EPIs are stretched along the angular dimension for better demonstration.\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=1\\linewidth]{.\/materials\/CNN.pdf}\n\t\\end{center}\n\t\\vspace{-4mm}\n\t\\caption{Architecture of the proposed Spatial-Angular Attention Network (SAA-Net). The SAA-Net is composed of (a) a multi-scale reconstruction structure, and (b) a Spatial-Angular Attention Module (SAAM).
The input is a 3D slice ($L(x,y,s)$ or $L(y,x,t)$) of the light field. The batch and channel dimensions are omitted in the figure.}\n\t\\label{fig:CNN}\n\\end{figure*}\n\nDue to the limited parameter budget, it is intractable to expand the receptive field by simply deepening the network or enlarging the filter size. The fundamental idea of the proposed approach is therefore to design a light field reconstruction network that captures the correspondences non-locally across the spatial and angular dimensions of the light field. We achieve this with two components: 1) a spatial-angular attention module inspired by the non-local attention mechanism~\\cite{wang2018non-local, zhang2018self}; and 2) an encoder-decoder network that reduces the redundancies in the light field so that the non-local perception can be implemented efficiently.\n\n\\section{Spatial-Angular Attention Network}\\label{Sec:Network}\nIn this section, we first introduce the overall architecture of the proposed Spatial-Angular Attention Network for light field reconstruction, termed SAA-Net. We then present the proposed spatial-angular attention module, which is specifically designed to disentangle the disparity information via non-local perception. The input of the SAA-Net is a 3D light field slice with two spatial dimensions and one angular dimension, i.e., $L(x,y,s)$ (or $L(y,x,t)$). By splitting the light field into 3D slices, the proposed network can be adopted not only for 3D light fields from a single-degree-of-freedom gantry system but also for 4D light fields from plenoptic cameras and camera array systems.\n\nFor a 4D light field $L(x,y,s,t)$, we adopt a hierarchical reconstruction strategy similar to that in~\\cite{WuEPICNN2018}. The strategy first reconstructs 3D light fields using slices $L_{t^*}(x,y,s)$ and $L_{s^*}(y,x,t)$, then uses the 3D light fields from the synthesized views to generate the final 4D light field.\n\n\\subsection{Network Architecture}\nWe propose a multi-scale reconstruction structure to maintain the view consistency (i.e., continuity in the angular dimension) in the low spatial scale while preserving fine details in the high spatial scales. The backbone of the proposed SAA-Net follows the encoder-decoder structure with skip connections, also known as a U-net, as shown in Fig. \\ref{fig:CNN}(a). But the proposed SAA-Net has two particular differences: 1) we use deconvolution along the angular dimension in each skip connection branch before the concatenation in the decoder part; 2) we apply convolution layers with stride specifically in the spatial dimensions of the light field. Table~\\ref{table:Archi} provides the detailed configuration of the proposed SAA-Net.\n\nThe \\textbf{encoder} part of the SAA-Net constructs multi-scale light field features and reduces the redundant information in the spatial dimensions to alleviate the computational and GPU memory costs of the non-local perception in the spatial-angular attention module. We use two 3D convolutional layers with spatial strides $[2, 2]$ and $[2, 1]$ to downsample the light field by factors of 4 and 2 along the width and height dimensions, respectively.
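\n\nAs an illustration, the following is a minimal TensorFlow sketch of this downsampling path, including the factorized $3\\times1\\times3$\/$1\\times3\\times3$ convolutions detailed in the next paragraph and in Table~\\ref{table:Archi}; the ReLU activations and the \\texttt{same} padding are our assumptions, as they are not specified here:\n\\begin{verbatim}\nimport tensorflow as tf\n\ndef saa_encoder(x):\n    # x: 5D tensor (batch, width, height, angular, channel).\n    # Level 1: factorized convolutions, then downsample width\n    # and height by 2 (stride [2, 2, 1]).\n    c1 = tf.keras.layers.Conv3D(24, (3, 1, 3), padding='same',\n                                activation='relu')(x)\n    c1 = tf.keras.layers.Conv3D(24, (1, 3, 3), padding='same',\n                                activation='relu')(c1)\n    d1 = tf.keras.layers.Conv3D(48, (3, 3, 1), strides=(2, 2, 1),\n                                padding='same', activation='relu')(c1)\n    # Level 2: downsample width again (stride [2, 1, 1]).\n    c2 = tf.keras.layers.Conv3D(48, (3, 1, 3), padding='same',\n                                activation='relu')(d1)\n    c2 = tf.keras.layers.Conv3D(48, (1, 3, 3), padding='same',\n                                activation='relu')(c2)\n    d2 = tf.keras.layers.Conv3D(96, (3, 1, 1), strides=(2, 1, 1),\n                                padding='same', activation='relu')(c2)\n    return c1, c2, d2  # c1 and c2 feed the skip connections\n\nx = tf.random.normal([1, 64, 24, 5, 1])  # width, height, angular\nc1, c2, code = saa_encoder(x)\n\\end{verbatim}\n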
Before each downsampling, two 3D convolutional layers with filter sizes $3\\times1\\times3$ and $1\\times3\\times3$ (width, height and angular) are employed to take the place of a single convolutional layer with filter size $3\\times3\\times3$, reducing the number of parameters by $1\/3$ without performance degradation.\n\nThe \\textbf{skip connections} are fed with the feature layers before the downsampling in each encoder level, as shown in Fig. \\ref{fig:CNN}(a), which have full and half spatial resolutions. For each skip connection, a deconvolution layer (also known as a transpose convolution layer) is then applied to upsample the feature map in the angular dimension, followed by a $1\\times1\\times1$ convolution. For the reconstruction of a 3D light field $L(x,y,s)$, the angular information can be mainly extracted from the 2D EPI $E(x,s)$; therefore, the filter size in each deconvolution layer is set to $3\\times1\\times7$.\n\nThe \\textbf{decoder} part of the SAA-Net upsamples the feature map from the spatial-angular attention module by using two deconvolution layers with strides $[2, 1]$ and $[2, 2]$ in the spatial dimensions (width and height). The decoder also receives information from the skip connections by concatenating them along the channel dimension~\\cite{Eilertsen2017HDR}, as shown in Fig. \\ref{fig:CNN}(a). We then use two 3D convolutional layers with filter sizes $3\\times1\\times3$ and $1\\times3\\times3$ to compress the channel numbers in each level of the decoder. This can be considered as the blending of the light field features from different reconstruction scales. Note that all the reconstructions (upsampling operations) in the angular dimension are performed by the skip connections or the spatial-angular attention module, where the latter will be introduced in the following subsection.\n\n\\begin{table}\n\\caption{Detailed configuration of the proposed SAA-Net, where $k$ denotes the kernel size, $s$ the stride, $chn$ the number of channels, Conv the 3D convolution layer, Deconv the 3D deconvolution layer and Concat the concatenation.}\n\\label{table:Archi}\n\\begin{center}\n\\begin{tabular}{l|cccc}\nLayer & $k$ & $s$ & $chn$ & Input\\\\\n\\hline\n\\multicolumn{5}{c}{Encoder}\\\\\n\\hline\nConv1$\\_1$ & $3\\times1\\times3$ & - & 1\/24 & $L(x,y,s)$\\\\\nConv1$\\_2$ & $1\\times3\\times3$ & - & 24\/24 & Conv1$\\_1$\\\\\nConv1$\\_3$ & $3\\times3\\times1$ & $[2,2,1]$ & 24\/48 & Conv1$\\_2$\\\\\nConv2$\\_1$ & $3\\times1\\times3$ & - & 48\/48 & Conv1$\\_3$\\\\\nConv2$\\_2$ & $1\\times3\\times3$ & - & 48\/48 & Conv2$\\_1$\\\\\nConv2$\\_3$ & $3\\times1\\times1$ & $[2,1,1]$ & 48\/96 & Conv2$\\_2$\\\\\nConv3$\\_1$ & $1\\times1\\times1$ & - & 96\/48 & Conv2$\\_3$\\\\\nConv3$\\_2$ & $3\\times1\\times3$ & - & 48\/48 & Conv3$\\_1$\\\\\nConv3$\\_3$ & $1\\times3\\times3$ & - & 48\/48 & Conv3$\\_2$\\\\\nConv3$\\_4$ & $3\\times1\\times3$ & - & 48\/48 & Conv3$\\_3$\\\\\nConv3$\\_5$ & $1\\times3\\times3$ & - & 48\/48 & Conv3$\\_4$\\\\\n\\hline\n\\multicolumn{5}{c}{Skip connection}\\\\\n\\hline\nDeconv4$\\_1$ & $3\\times1\\times7$ & $[1,1,\\alpha_a]$ & 24\/24 & Conv1$\\_2$\\\\\nConv4$\\_2$ & $1\\times1\\times1$ & - & 24\/24 & Deconv4$\\_1$\\\\\nDeconv5$\\_1$ & $3\\times1\\times7$ & $[1,1,\\alpha_a]$ & 48\/48 & Conv2$\\_2$\\\\\nConv5$\\_2$ & $1\\times1\\times1$ & - & 48\/48 & Deconv5$\\_1$\\\\\n\\hline\n\\multicolumn{5}{c}{SAAM}\\\\\n\\hline\n\\multicolumn{5}{c}{Decoder}\\\\\n\\hline\nConv6$\\_1$ & $1\\times1\\times1$ & - & 48\/96 & SAAM\\\\\nDeconv6$\\_2$ & $4\\times1\\times1$ & $[2,1,1]$ & 96\/48 &
Conv6$\\_1$\\\\\nConcat1 & - & - & - & Deconv6$\\_2$;Conv5$\\_2$\\\\\nConv6$\\_3$ & $3\\times1\\times3$ & - & 96\/48 & Concat1\\\\\nConv6$\\_4$ & $1\\times3\\times3$ & - & 48\/48 & Conv6$\\_3$\\\\\nDeconv7$\\_1$ & $4\\times4\\times1$ & $[2,2,1]$ & 48\/24 & Conv6$\\_4$\\\\\nConcat2 & - & - & - & Deconv7$\\_1$;Conv4$\\_2$\\\\\nConv7$\\_2$ & $3\\times1\\times3$ & - & 48\/24 & Concat2\\\\\nConv7$\\_3$ & $1\\times3\\times3$ & - & 24\/24 & Conv7$\\_2$\\\\\nConv8 & $3\\times3\\times3$ & - & 24\/1 & Conv7$\\_3$\\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\n\\subsection{Spatial-Angular Attention Module}\nInspired by the non-local attention mechanism in \\cite{wang2018non-local,zhang2018self}, we propose a Spatial-Angular Attention Module (SAAM) to disentangle the disparity information in the light field. The main differences between the proposed SAAM and the previous non-local attention~\\cite{wang2018non-local,zhang2018self} are as follows: 1) since the disparity information is encoded in the EPI, the non-local attention mechanism is performed in the 2D epipolar plane rather than in the entire 3D space; 2) taking advantage of the non-local perception of the EPI, we embed light field reconstruction in the spatial-angular attention module.\n\nA straightforward choice for performing spatial-angular attention is to embed the attention module in each resolution scale of the U-net. However, implementing non-local perception on the full-resolution light field (feature map) is intractable in terms of computational complexity and GPU memory. Alternatively, we insert the proposed SAAM between the encoder and the decoder, as shown in Fig. \\ref{fig:CNN}(b).\n\nIn a 3D convolutional network, the feature layer will be a 5D tensor $\\phi\\in\\mathbb{R}^{B\\times W\\times H\\times A\\times C}$ (i.e., batch, width, height, angular and channel). We first apply two convolution layers with kernel size $1\\times1\\times1$ to produce two feature layers $\\phi_a$ and $\\phi_b$ with size of $B\\times W\\times H\\times A\\times C'$. The channel number $C'$ is set to be $\\frac{C}{8}$ (i.e., $C'=6$) for computation efficiency. Then the feature layers $\\phi_a$ and $\\phi_b$ are reshaped into 3D tensors $\\phi'_a$ and $\\phi'_b$ of shapes $BH\\times WA\\times C'$ and $BH\\times C'\\times WA$, respectively. In this way, we merge the angular and width dimensions ($s$ and $x$ or $t$ and $y$ in a light field) together to implement the non-local perception in the epipolar plane.\n\nWe apply batch-wise matrix multiplication between $\\phi'_a$ and $\\phi'_b$ and a softmax function to produce an attention map $M$, as illustrated in Fig. \\ref{fig:CNN}(b). The attention map is composed of $BH$ matrices with shape $WA\\times WA$. Each matrix can be considered as a 2D expansion map of a 4D tensor $M'\\in\\mathbb{R}^{W\\times A\\times W\\times A}$ (the batch and height dimensions are neglected). The point $M'(x_0,s_0,x_1,s_1)$ indicates the response of light field position $L(x_0,y,s_0)$ to position $L(x_1,y,s_1)$ in the latent space. In other words, the attention map is able to capture correspondences among all the views in the input 3D light field.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/att_map.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{Visualization of the attention map. (a) An EPI with a foreground point $A$ and a background point $B$; (b) the corresponding high spatial-angular resolution EPI; (c) three sub-maps extracted from the attention map.
A point will have a strong response at the location of its correspondence in the attention map.}\n\\label{fig:att_map}\n\\end{figure}\n\nWe demonstrate the non-local perception of the proposed SAAM by visualizing a part of the attention map, as shown in Fig. \\ref{fig:att_map}. In this example, there are two points $A$ and $B$ with remarkable visual features, as shown in Fig. \\ref{fig:att_map}(a). Their corresponding points in other views are marked as $A'$ ($A''$) and $B'$ ($B''$). As the viewpoint changes along the angular dimension, the background point $B$ will be occluded by the foreground point $A$, which is demonstrated more clearly in Fig. \\ref{fig:att_map}(b). Fig. \\ref{fig:att_map}(c) shows three sub-maps extracted from the attention map $M'$ with $s_0=1$ and $s_1=1,2$ and 3, respectively. It can be clearly seen that a point has the strongest response at the location of its correspondence in the attention map. For instance, the response $R(A,A')$ appears at the location $M'(8,1,6,2)$ for the corresponding patch $(A,A')$ (the middle sub-figure of Fig. \\ref{fig:att_map}(c)), and the response $R(A,A'')$ at the location $M'(8,1,4,3)$ for the corresponding patch $(A,A'')$ (the right sub-figure of Fig. \\ref{fig:att_map}(c)). For the occluded point $B$, the location of the maximum response changes from $M'(7,1,7,1)$ to $M'(7,1,4,3)$\\footnote{Due to the subpixel disparity of point $B$, the actual location of the maximum response could be $M'(7,1,5.5,2)$ in the middle sub-figure of Fig. \\ref{fig:att_map}(c).}. In this case, the attention module is able to locate the occluded point $B''$ through the surrounding pixels.\n\nSimilar to $\\phi'_a$ and $\\phi'_b$, $\\phi'_c$ is obtained by another $1\\times1\\times1$ convolution applied to the input tensor $\\phi$, followed by the reshape operation. The main difference is that the channel number of the feature layer is $C''=\\frac{C}{2}$, i.e., $C''=24$ in our implementation. Another batch-wise matrix multiplication is applied between the attention map $M$ and $\\phi'_c$, resulting in a 3D tensor $\\phi'_d\\in\\mathbb{R}^{BH\\times WA\\times C''}$. We then reshape $\\phi'_d$ into a 5D tensor $\\phi_d\\in\\mathbb{R}^{B\\times W\\times H\\times A\\times C''}$ and adopt a $1\\times1\\times1$ convolution to expand the channel dimension from $C''$ to $C$, generating a 5D tensor (or feature layer) $\\phi_e\\in\\mathbb{R}^{B\\times W\\times H\\times A\\times C}$. We further multiply the feature layer $\\phi_e$ by a trainable scale parameter (initialized as 0) and add back the input feature layer.\n\nWe implicitly adopt the non-local similarities or correspondences captured by the spatial-angular attention in the latent space and perform light field reconstruction by using deconvolution in the angular dimension, as shown in Fig. \\ref{fig:CNN}. The output of the SAAM is a 5D tensor $\\phi_f\\in\\mathbb{R}^{B\\times W\\times H\\times (\\alpha_a (A-1)+1)\\times C}$. By combining the proposed SAAM with the feature maps in the skip connections, the network is able to reconstruct the light field with view consistency while also preserving the high-frequency components.
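\n\nTo summarize this pipeline, the following is a minimal TensorFlow sketch of the SAAM, assuming static tensor shapes; the deconvolution is written with \\texttt{same} padding (which yields $\\alpha_a A$ views), whereas matching the exact output length $\\alpha_a(A-1)+1$ stated above requires an appropriate padding\/cropping choice:\n\\begin{verbatim}\nimport tensorflow as tf\n\ndef saam(phi, alpha_a=4):\n    # phi: (B, W, H, A, C) encoder features (static shapes assumed).\n    B, W, H, A, C = [int(d) for d in phi.shape]\n    f_a = tf.keras.layers.Conv3D(C \/\/ 8, 1)(phi)  # phi_a\n    f_b = tf.keras.layers.Conv3D(C \/\/ 8, 1)(phi)  # phi_b\n    f_c = tf.keras.layers.Conv3D(C \/\/ 2, 1)(phi)  # phi_c\n    def to_planes(f, ch):\n        # Merge width and angular dims: one epipolar plane per row.\n        return tf.reshape(tf.transpose(f, [0, 2, 1, 3, 4]),\n                          [B * H, W * A, ch])\n    a, b = to_planes(f_a, C \/\/ 8), to_planes(f_b, C \/\/ 8)\n    c = to_planes(f_c, C \/\/ 2)\n    M = tf.nn.softmax(tf.matmul(a, b, transpose_b=True))  # (BH,WA,WA)\n    d = tf.matmul(M, c)                                   # (BH,WA,C'')\n    d = tf.transpose(tf.reshape(d, [B, H, W, A, C \/\/ 2]),\n                     [0, 2, 1, 3, 4])\n    e = tf.keras.layers.Conv3D(C, 1)(d)  # expand channels back to C\n    gamma = tf.Variable(0.0)             # trainable scale, init 0\n    out = gamma * e + phi                # residual connection\n    # Angular reconstruction via deconvolution (kernel 3x1x7).\n    out = tf.keras.layers.Conv3DTranspose(\n        C, (3, 1, 7), strides=(1, 1, alpha_a), padding='same')(out)\n    return tf.keras.layers.Conv3D(C, 1)(out)\n\\end{verbatim}\n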
The detailed parameter setting of the SAAM is listed in Table \\ref{table:Archi_SAAM}.\n\n\\begin{table}\n\\caption{Detailed configuration of the proposed Spatial-Angular Attention Module (SAAM), where MatMul denotes the matrix multiplication and Add the element-wise addition.}\n\\label{table:Archi_SAAM}\n\\begin{center}\n\\begin{tabular}{l|cccc}\nLayer & $k$ & $s$ & $chn$ & Input\\\\\n\\hline\nConv1 & $1\\times1\\times1$ & - & 48\/6 & Encoder\\\\\nConv2 & $1\\times1\\times1$ & - & 48\/6 & Encoder\\\\\nConv3 & $1\\times1\\times1$ & - & 48\/6 & Encoder\\\\\nReshape1 &- & - & - & Conv1 \\\\\nReshape2 &- & - & - & Conv2 \\\\\nReshape3 &- & - & - & Conv3 \\\\\nMatMul1 & - & - & - & Reshape1;Reshape2 \\\\\nSoftmax & - & - & - & MatMul1\\\\\nMatMul2 & - & - & - & Softmax;Reshape3 \\\\\nReshape4 &- & - & - & MatMul2 \\\\\nConv4 & $1\\times1\\times1$ & - & 24\/48 & Reshape4\\\\\nAdd & - & - & 48\/48 & Encoder;Conv4\\\\\nDeconv & $3\\times1\\times7$ & $[1,1,\\alpha_a]$ & 48\/48 & Add\\\\\nConv5 & $1\\times1\\times1$ & - & 48\/48 & Deconv\\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\section{Network Training}\\label{Sec:training}\n\\subsection{Spatial-Angular Perceptual Loss}\nTypical learning-based light field reconstruction or view synthesis methods optimize the network parameters by formulating a pixel-wise loss between the inferred image and the desired view (or EPI~\\cite{WuEPICNN2018}). Recently, studies~\\cite{zhou2018stereo,zhang2019image,meng2019high,mildenhall2019local} have shown that formulating the loss function in a high-level feature space encourages the restoration of high-frequency components. This high-level feature loss, also known as a perceptual loss, can be computed from part of the feature layers of the network itself~\\cite{dosovitskiy2016generating} or of other pre-trained networks~\\cite{Johnson2016Perceptual}, such as the commonly-used VGG network~\\cite{simonyan2015very}.\n\nIn this paper, we propose a spatial-angular perceptual loss that is specifically designed for the high-dimensional light field data. Existing approaches~\\cite{mildenhall2019local,meng2019high} for light field reconstruction apply the perceptual loss between 2D sub-aperture images, neglecting the view consistency constraint in the angular dimension. Alternatively, we propose to use a 3D light field encoder to map the 3D light fields into high-dimensional feature tensors (width, height, angular and feature channel). To achieve this, we design another 3D encoder-decoder network (auto-encoder)\\footnote{The architecture of the 3D auto-encoder for the perceptual loss is different from that of the SAA-Net.} optimized by using unsupervised learning, i.e., the network is trained by inferring (compressing and restoring) the input light field. We then employ the encoder part to extract the high-level features for the proposed spatial-angular perceptual loss. Note that the auto-encoder can also be generalized to a 4D form. But given that some light field datasets have only one angular dimension (e.g., light fields from the gantry system in~\\cite{ICME2018}) and the proposed SAA-Net also takes 3D light fields as input, we only adopt 3D convolutions in the encoder and decoder.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/AE.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{Architecture of the 3D encoder-decoder network designed for the proposed spatial-angular perceptual loss.}\n\\label{fig:AE}\n\\end{figure}\n\nFig.
\\ref{fig:AE} demonstrates the computation process of the proposed spatial-angular perceptual loss as well as the designed auto-encoder. We use 3D convolutional layers with kernel size $3\\times3\\times3$ to encode the 3D light field (the SAA-Net output or the ground truth) into latent representations. The encoder applies strided convolutional layers with stride 2 in each dimension to compress the light field from the low-level pixel space into a high-level feature space. The decoder employs bilinear upsampling and convolutional layers to restore the light field from the latent representations. The detailed configuration of each layer is shown in Fig. \\ref{fig:AE}. The loss function $\\mathcal{L}_{AE}$ for optimizing the auto-encoder is defined as\n$$\\mathcal{L}_{AE}(L_{HR})=\\Vert f_{AE}(L_{HR})-L_{HR} \\Vert_1,$$\nwhere $f_{AE}$ denotes the auto-encoder.\n\nWith the unsupervised learning, the encoder part is trained to extract high-frequency features at different scales. In this paper, we use the second, fourth and sixth layers in the encoder to form the spatial-angular perceptual loss\n$$\n\\mathcal{L}_{feat}(\\hat{L}_{HR},L_{HR})=\\sum\\limits_{l=2,4,6}\\lambda_{feat}^{(l)}\\Vert\\phi_{ae}^{(l)}(\\hat{L}_{HR})-\\phi_{ae}^{(l)}(L_{HR})\\Vert_1,\n$$\nwhere $\\phi_{ae}^{(l)}(\\cdot)$ $(l={2,4,6})$ denotes the feature layers in the encoder, $\\lambda_{feat}^{(l)}=0.2, 0.2, 0.1$ (for $l=2,4,6$) are the weighting hyperparameters of the proposed spatial-angular perceptual loss, $\\hat{L}_{HR}$ is the light field reconstructed by the SAA-Net and $L_{HR}$ is the desired high-angular-resolution light field.\n\nTo prevent different light field patches from being mapped to the same feature vector~\\cite{dosovitskiy2016generating}, our loss function also contains a pixel-wise term $\\mathcal{L}_{pix}$ using the Mean Absolute Error (MAE) between $\\hat{L}_{HR}$ and $L_{HR}$, i.e.,\n$$\\mathcal{L}_{pix}(\\hat{L}_{HR},L_{HR})=\\Vert \\hat{L}_{HR}-L_{HR} \\Vert_1.$$\nThen the final loss function $\\mathcal{L}_{SAA}$ for training the SAA-Net is defined as\n\\begin{equation}\\label{eq:loss}\n\\mathcal{L}_{SAA}=\\mathcal{L}_{pix}+\\mathcal{L}_{feat}.\n\\end{equation}\nThe two terms are weighted by the set of hyperparameters $\\lambda_{feat}^{(l)}$ in the perceptual loss.\n\n\\subsection{Training Data}\nWe use light fields from the Stanford (New) Light Field Archive~\\cite{StanfordLFdatasets} as the training dataset, which contains 12 light fields\\footnote{The light field \\textit{Lego Gantry Self Portrait} is excluded from the training dataset since the moving camera may influence the reconstruction performance.} with $17\\times17$ views. Since the network input is a 3D light field, we can extract 17 $L(x,y,s)$ slices and 17 $L(y,x,t)$ slices from each 4D light field set. Similar to the data augmentation strategy proposed in~\\cite{zhu2019revisiting}, we augment the extracted 3D light fields using the shearing operation~\\cite{Ng2005Fourier}\n$$L_d(x,y,s)=L(x+(s-\\frac{S}{2})\\cdot d,y,s),$$\nwhere $S$ is the angular resolution of the 3D light field $L(x,y,s)$ and $L_d(x,y,s)$ is the resulting 3D light field with shear amount $d$. $L_d(y,x,t)$ can be obtained similarly. In practice, we use two shear amounts $d=\\pm2$. The shearing-based data augmentation therefore enlarges the training set by a factor of three (each original example plus two sheared copies). More importantly, the disparity effects in the augmented light fields are more obvious, as shown in Fig.
\\ref{fig:data_aug}, enabling the network to address the large disparity problem.\n\nTo accelerate the training and, in the meantime, ensure the same resolution across input examples, the extracted 3D light fields are cropped into sub-light fields of spatial resolution $64\\times24$ (width and height for $L(x,y,s)$, or height and width for $L(y,x,t)$) with a stride of 40 pixels. About $6.7\\times10^5$ examples can be extracted from the 3D light fields (original and augmented).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/data_aug.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{An illustration of training data augmentation using the shearing operation. For clear display, one of the spatial dimensions in the 3D light field is omitted.}\n\\label{fig:data_aug}\n\\end{figure}\n\n\\subsection{Implementation Details}\nTwo models with reconstruction factors (upsampling scales in the angular dimension) $\\alpha_a=3,4$ are trained. The input\/output angular resolutions of the training samples for these two models are 6\/16 and 5\/17, respectively. Although the reconstruction factor of the network is fixed, we can achieve a flexible upsampling rate through network cascading. The training is performed on the Y channel (i.e., the luminance channel) of the YCbCr color space. We initialize the weights of both convolution and deconvolution layers by drawing randomly from a Gaussian distribution with zero mean and standard deviation $1\\times10^{-3}$, and the biases to zero. The network is optimized by using the ADAM solver~\\cite{Kingma2014Adam} with a learning rate of $1\\times10^{-4}$ ($\\beta_1=0.9$, $\\beta_2=0.999$) and a mini-batch size of 28. The training model is implemented using the \\emph{Tensorflow} framework~\\cite{TensorFlow}. The network converges after $8\\times10^5$ steps of backpropagation, taking about 35 hours on an NVIDIA Quadro GV100.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/Result2.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{Comparison of the results on the light fields from the CIVIT Dataset~\\cite{ICME2018} ($16\\times$ upsampling). The results show one of our reconstructed views and the EPIs extracted from the light fields reconstructed by each method.}\n\\label{fig:Result2}\n\\end{figure*}\n \n\\section{Evaluations}\nIn this section, we evaluate the proposed SAA-Net on several datasets, including light fields from gantry systems and light fields from a plenoptic camera (Lytro Illum~\\cite{Lytro}). We mainly compare our approach with three state-of-the-art learning-based methods by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} (depth-based), Wu~\\textit{et al.}~\\cite{WuEPICNN2018} (without explicit depth) and Yeung~\\textit{et al.}~\\cite{YeungECCV2018} (without explicit depth). To empirically validate the effectiveness of the proposed schemes, we perform ablation studies of our approach by training our network without the SAAM, without the multi-scale reconstruction structure and without the spatial-angular perceptual loss, respectively. The quantitative evaluation is reported by measuring the average PSNR and SSIM~\\cite{SSIM} values over the synthesized views on the luminance channel.
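\n\nFor reference, the following is a minimal NumPy sketch of this PSNR protocol (the peak value of 255 assumes 8-bit luminance images; SSIM follows the standard formulation of~\\cite{SSIM}):\n\\begin{verbatim}\nimport numpy as np\n\ndef average_psnr(gt_views, rec_views, peak=255.0):\n    # gt_views, rec_views: (num_views, height, width) arrays\n    # holding the luminance (Y) channel of each synthesized view.\n    psnrs = []\n    for gt, rec in zip(gt_views, rec_views):\n        diff = gt.astype(np.float64) - rec.astype(np.float64)\n        mse = np.mean(diff ** 2)\n        psnrs.append(10.0 * np.log10(peak ** 2 \/ mse))\n    return float(np.mean(psnrs))  # average over synthesized views\n\\end{verbatim}\n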
For more quantitative and qualitative evaluations, please see the submitted video.\n\n\\begin{table*}\n\\caption{Quantitative results (PSNR\/SSIM) of reconstructed light fields on the light fields from the CIVIT Dataset~\\cite{ICME2018}.}\n\\label{table:Result2}\n\\vspace{-3mm}\n\\begin{center}\n\\begin{tabular}{p{2.5cm}|c|p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering}}\n& Scale & \\textit{Seal \\& Balls} & \\textit{Castle} & \\textit{Holiday} & \\textit{Dragon} & \\textit{Flowers} & Average\\\\\n\\hline\nKalantari~\\textit{et al.}~\\cite{DoubleCNN}& \\multirow{4}*{$8\\times$} & 46.83 \/ 0.990 & 39.14 \/ 0.973 & 36.03 \/ 0.979 & 43.97 \/ 0.989 & 39.00 \/ 0.989 & 40.99 \/ 0.984\\\\\nWu~\\textit{et al.}~\\cite{WuEPICNN2018}& & 49.01 \/ 0.997 & 37.67 \/ 0.984 & 40.46 \/ 0.995 & 48.38 \/ 0.997 & 45.85 \/ 0.998 & 44.27 \/ 0.994\\\\\nYeung~\\textit{et al.}~\\cite{YeungECCV2018} & & 49.83 \/ 0.997 & 40.84 \/ 0.993 & 41.16 \/ 0.996 & 48.61 \/ 0.997 & 47.83 \/ 0.997 & 45.65 \/ 0.996\\\\\nOur proposed & &\\textbf{51.05 \/ 0.998} & \\textbf{43.15 \/ 0.994} & \\textbf{42.27 \/ 0.997} &\\textbf{49.68 \/ 0.998} &\\textbf{48.35 \/ 0.998} &\\textbf{46.90 \/ 0.997}\\\\\n\\hline\nKalantari~\\textit{et al.}~\\cite{DoubleCNN}& \\multirow{7}*{$16\\times$} & 43.13 \/ 0.985 & 36.03 \/ 0.965 & 32.44 \/ 0.961 & 39.50 \/ 0.985 & 35.21 \/ 0.973 & 37.26 \/ 0.974\\\\\nWu~\\textit{et al.}~\\cite{WuEPICNN2018}& & 45.21 \/ 0.994 & 35.20 \/ 0.977 & 35.58 \/ 0.987 & 46.39 \/ 0.997 & 41.60 \/ 0.995 & 40.80 \/ 0.990\\\\\nYeung~\\textit{et al.}~\\cite{YeungECCV2018} & & 44.38 \/ 0.992 & 37.86 \/ 0.989 & 36.06 \/ 0.988 & 45.52 \/ 0.997 & 42.30 \/ 0.994 & 41.22 \/ 0.992\\\\\nw\/o SAAM & & 46.85 \/ 0.995 & 37.78 \/ 0.989 & 36.17 \/ 0.988 & 47.10 \/ 0.998 & 42.98 \/ 0.996 & 42.18 \/ 0.993\\\\\nw\/o MSR structure & & 46.53 \/ 0.995 & 38.33 \/ 0.990 & 36.94 \/ 0.989 & 46.92 \/ 0.997 & 43.01 \/ 0.996 & 42.35 \/ 0.993\\\\\nw\/o SAP loss & & 49.02 \/ 0.996 & 40.69 \/ 0.992 & 38.97 \/ 0.992 & 48.23 \/ 0.997 & 44.46 \/ 0.997 & 44.27 \/ 0.995\\\\\nOur proposed & &\\textbf{49.35 \/ 0.997} & \\textbf{40.85 \/ 0.992} & \\textbf{39.21 \/ 0.993} & \\textbf{48.54 \/ 0.998} & \\textbf{44.67 \/ 0.997} & \\textbf{44.52 \/ 0.995}\\\\\n\\end{tabular}\n\\end{center}\n\\vspace{-4mm}\n\\end{table*}\n\n\\subsection{Evaluations on Light Fields from Gantry Systems}\nA gantry system captures a light field by mounting a conventional camera on a mechanical gantry. A typical gantry system takes minutes to hours (depending on the angular density) to capture a light field. With a high-quality DSLF reconstruction \/ view synthesis approach, the acquisition time can be considerably reduced. In this experiment, we use light fields from the CIVIT Dataset~\\cite{ICME2018} ($1\\times193$ views of resolution $1280\\times720$) and the MPI Light Field Archive~\\cite{kiran2017towards} ($1\\times101$ views of resolution $960\\times720$) with upsampling scales $8\\times$ and $16\\times$. Since the vanilla version of the network by Yeung~\\textit{et al.}~\\cite{YeungECCV2018} was specifically designed for 4D light fields, we modify the convolutional layers for the 3D input while keeping the network architecture unchanged. The networks by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} and Yeung~\\textit{et al.}~\\cite{YeungECCV2018} are re-trained using the same training dataset as the proposed network.
In this experiment, the performances in terms of both angular sparsity and non-Lambertian effects are taken into consideration.\n\nFig. \\ref{fig:Result2} shows the reconstruction results on three light fields, \\textit{Castle}, \\textit{Holiday} and \\textit{Flowers}, from the CIVIT Dataset~\\cite{ICME2018} with upsampling scale $16\\times$ (disparity range $d_{\\max}-d_{\\min}=14$px). The first case and the third case have thin structures with complex occlusions. The depth and learning-based approach by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} fails to estimate depth maps accurately enough to warp the input images, and the color CNN is not able to correct the misaligned views, producing ghosting artifacts as shown in the figure. For the second case, we demonstrate reconstructed EPIs in a highly non-Lambertian region, as shown in the figure. Owing to the depth ambiguity, the approach by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} produces choppiness artifacts along the angular dimension. Due to the limited receptive field of the networks, the results by Wu~\\textit{et al.}~\\cite{WuEPICNN2018} and Yeung~\\textit{et al.}~\\cite{YeungECCV2018} show aliasing effects to various degrees. Table \\ref{table:Result2} lists the quantitative measurements on the light fields from the CIVIT Dataset~\\cite{ICME2018} with upsampling scales $8\\times$ and $16\\times$.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/Result3.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{Comparison of the results on the light fields from the MPI Light Field Archive~\\cite{kiran2017towards} ($16\\times$ upsampling).}\n\\label{fig:Result3}\n\\end{figure*}\n \n\\begin{table*}\n\\caption{Quantitative results (PSNR\/SSIM) of reconstructed light fields on the light fields from the MPI Light Field Archive~\\cite{kiran2017towards}.}\n\\label{table:Result3}\n\\vspace{-3mm}\n\\begin{center}\n\\begin{tabular}{p{2.5cm}|c|p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering} p{1.9cm}<{\\centering}}\n& Scale & \\textit{Bikes} & \\textit{FairyCollection} & \\textit{LivingRoom} & \\textit{Mannequin} & \\textit{WorkShop} & Average\\\\\n\\hline\nKalantari~\\textit{et al.}~\\cite{DoubleCNN}& \\multirow{4}*{$8\\times$} & 34.83 \/ 0.969 & 36.66 \/ 0.977 & 46.35 \/ 0.991 & 40.62 \/ 0.983 & 38.66 \/ 0.986 & 39.42 \/ 0.981\\\\\nWu~\\textit{et al.}~\\cite{WuEPICNN2018}& & 38.39 \/ 0.990 & 40.32 \/ 0.992 & 45.48 \/ 0.996 & 43.26 \/ 0.995 & 41.55 \/ 0.995 & 41.80 \/ 0.994\\\\\nYeung~\\textit{et al.}~\\cite{YeungECCV2018} & & 39.55 \/ 0.993 & 40.25 \/ 0.993 & 47.32 \/ 0.997 & 44.49 \/ 0.996 & 43.17 \/ 0.996 & 42.96 \/ 0.995\\\\\nOur proposed & &\\textbf{40.53 \/ 0.995} & \\textbf{42.23 \/ 0.995} & \\textbf{47.96 \/ 0.997} &\\textbf{45.02 \/ 0.996} &\\textbf{45.29 \/ 0.997} &\\textbf{44.21 \/ 0.996}\\\\\n\\hline\nKalantari~\\textit{et al.}~\\cite{DoubleCNN}& \\multirow{4}*{$16\\times$} & 30.67 \/ 0.935 & 32.39 \/ 0.952 & 41.62 \/ 0.973 & 37.15 \/ 0.970 & 33.94 \/ 0.971 & 35.15 \/ 0.960\\\\\nWu~\\textit{et al.}~\\cite{WuEPICNN2018}& & 31.22 \/ 0.951 & 30.33 \/ 0.942 & 42.43 \/ 0.991 & 39.53 \/ 0.989 & 33.49 \/ 0.977 & 35.40 \/ 0.970\\\\\nYeung~\\textit{et al.}~\\cite{YeungECCV2018} & & 32.67 \/ 0.967 & 31.82 \/ 0.969 & 43.54 \/ 0.993 & 40.82 \/ 0.992 & 37.21 \/ 0.988 & 37.21 \/ 0.982\\\\\nOur proposed & &\\textbf{36.01 \/ 0.985} & \\textbf{36.13 \/ 0.982} & \\textbf{46.45 \/ 0.997} & \\textbf{41.08 \/ 0.993} & \\textbf{39.11 \/ 0.992} & \\textbf{39.76 \/
0.990}\\\\\n\\end{tabular}\n\\end{center}\n\\vspace{-4mm}\n\\end{table*}\n\nFig. \\ref{fig:Result3} shows the reconstruction results on three light fields, \\textit{Bikes}, \\textit{FairyCollection} and \\textit{WorkShop}, from the MPI Light Field Archive~\\cite{kiran2017towards} with upsampling scale $16\\times$ (disparity range up to 33.5px). The first case has complex structures occluded by the foreground bikes, as shown in the top row of Fig. \\ref{fig:Result3}. The baseline methods fail to reconstruct the complex structures in the background. Among them, the depth and learning-based approach~\\cite{DoubleCNN} fails to estimate a proper occlusion relation between the bikes and the background. The second scene is a non-Lambertian case, i.e., a refractive glass in front of the toys. The approach by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} cannot reconstruct the refractive object, and the reconstructed EPIs by the baseline methods~\\cite{WuEPICNN2018,YeungECCV2018} exhibit severe aliasing effects. Table \\ref{table:Result3} lists the quantitative measurements on the light fields from the MPI Light Field Archive~\\cite{kiran2017towards} with upsampling scales $8\\times$ and $16\\times$.\n\n\\textbf{Ablation studies.} We empirically validate the proposed approach by performing the following ablation studies. First, we replace the proposed SAAM with a simple transpose convolution layer, denoted as ``w\/o SAAM'' for short. As shown by the quantitative results in Table \\ref{table:Result2}, the average PSNR value decreases by about 2.3dB without the SAAM. In the second ablation study, we use a typical 3D U-net as the backbone and remove the transpose convolution layer in the SAAM, denoted as ``w\/o MSR structure'' for short (without the Multi-Scale Reconstruction structure). The angular reconstruction is simply realized by using transpose convolution at the end of the network. The performance of the network decreases by about 2.2dB in terms of PSNR. In the last ablation study, we train the proposed SAA-Net by using only the pixel-wise term (MAE loss) without the proposed spatial-angular perceptual loss, denoted as ``w\/o SAP loss'' for short. The performance (PSNR) decreases by about 0.3dB.\n\n\\subsection{Evaluations on Light Fields from Lytro Illum}\nWe evaluate the proposed approach using three Lytro light field datasets (113 light fields in total): the \\textit{30 Scenes} dataset by Kalantari~\\textit{et al.}~\\cite{DoubleCNN}, and the \\textit{Reflective} and \\textit{Occlusions} categories from the Stanford Lytro Light Field Archive~\\cite{StanfordLytro}. In this experiment, we reconstruct a $7\\times7$ light field from $3\\times3$ views ($3\\times$ upsampling) and an $8\\times8$ light field from $2\\times2$ views ($7\\times$ upsampling). Since the vanilla versions of the networks by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} and Yeung~\\textit{et al.}~\\cite{YeungECCV2018} are trained on Lytro light fields, we use their original parameters without re-training. Note that the proposed network is not fine-tuned on any Lytro light field datasets, and the results are produced by the same set of network parameters for both upsampling scales $3\\times$ and $7\\times$.\n\nWe demonstrate two cases with relatively large disparities (maximum disparity up to 13px), \\textit{IMG1743} from the \\textit{30 Scenes}~\\cite{DoubleCNN} and \\textit{Occlusions 23} from the \\textit{Occlusions} category~\\cite{StanfordLytro}, as shown in Fig. \\ref{fig:Result1}.
In both cases, the reconstruction results by Wu~\\textit{et al.}~\\cite{WuEPICNN2018} and Yeung~\\textit{et al.}~\\cite{YeungECCV2018} show ghosting artifacts around the regions with large disparity (the background in the \\textit{IMG1743} case, and the foreground in the \\textit{Occlusions 23} case), which we believe are caused by the limited receptive field of their networks. The depth and learning-based approach by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} produces a plausible result in the first case, but exhibits tearing artifacts near the occlusion boundary, as marked by the red arrow in the EPI. In the second case, the approach by Kalantari~\\textit{et al.}~\\cite{DoubleCNN} fails to estimate proper depth information, introducing misalignment as shown by the EPI. In comparison, the proposed SAA-Net provides reconstructed light fields with higher view consistency (as shown in the demonstrated EPIs). Table \\ref{table:Result1} lists the quantitative results on the evaluated Lytro light fields. The PSNR and SSIM values are averaged over the light fields in each dataset.\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=1\\linewidth]{.\/materials\/Result1.pdf}\n\t\\end{center}\n\t\\vspace{-4mm}\n\t\\caption{Comparison of the results on the light fields from Lytro Illum. The results show the error map (absolute error of the grey-scale image) and the EPIs at the location marked by red lines. Light fields are from the \\textit{30 Scenes}~\\cite{DoubleCNN} and the \\textit{Occlusions} category~\\cite{StanfordLytro}.}\n\t\\label{fig:Result1}\n\\end{figure*}\n\n\\begin{table}\n\\caption{Quantitative results (PSNR\/SSIM) of reconstructed views on the light fields from Lytro Illum~\\cite{Lytro}. The \\textit{30 Scenes} dataset courtesy of Kalantari~\\textit{et al.}~\\cite{DoubleCNN}, and the \\textit{Reflective} and \\textit{Occlusions} categories are from the Stanford Lytro Light Field Archive~\\cite{StanfordLytro}.}\n\\label{table:Result1}\n\\begin{center}\n\\begin{tabular}{l|c|ccc}\n& Scale & \\textit{30 Scenes} &\\textit{Reflective} &\\textit{Occlusions}\\\\\n\\hline\n\\scriptsize{Kalantari~\\textit{et al.}}~\\cite{DoubleCNN}& \\multirow{4}*{$3\\times$} & 39.62\/0.978 & 37.78\/0.971 & 34.02\/0.955 \\\\\nWu~\\textit{et al.}~\\cite{WuEPICNN2018}& & 41.85\/0.992 & 41.76\/0.986 & 38.52\/0.970 \\\\\nYeung~\\textit{et al.}~\\cite{YeungECCV2018}& & 44.53\/0.990 & 42.56\/0.975 &39.27\/0.945\\\\\nOur proposed & & \\textbf{44.69}\/\\textbf{0.996} & \\textbf{43.99}\/\\textbf{0.991} & \\textbf{40.33}\/\\textbf{0.969} \\\\\n\\hline\n\\scriptsize{Kalantari~\\textit{et al.}}~\\cite{DoubleCNN}& \\multirow{4}*{$7\\times$} & 38.21\/0.974 & 35.84\/0.942 & 31.81\/0.895 \\\\\nWu~\\textit{et al.}~\\cite{WuEPICNN2018}& & 36.74\/0.969 & 36.55\/0.964 & 33.11\/0.939 \\\\\nYeung~\\textit{et al.}~\\cite{YeungECCV2018}& & \\textbf{39.22}\/0.977 & 36.47\/0.947 & 32.68\/0.906\\\\\nOur proposed & & 39.09\/\\textbf{0.983} & \\textbf{37.47}\/\\textbf{0.977} & \\textbf{33.77}\/\\textbf{0.952} \\\\\n\\end{tabular}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{.\/materials\/att_map1.pdf}\n\\end{center}\n\\vspace{-4mm}\n \\caption{Additional results of the attention map on scenes with (a) large disparity and (b) non-Lambertian effects.}\n\\label{fig:att_map1}\n\\end{figure*}\n\n\\subsection{Further Analysis}\n\\textbf{Spatial-angular attention map.} We visualize additional attention maps on scenes with large disparities and non-Lambertian effects, as shown in Fig.
\\ref{fig:att_map1}. We demonstrate the spatial-angular attention on a scene with large disparities in Fig. \\ref{fig:att_map1}(a). In this case, the disparity between neighbouring views is about 16 pixels. Due to the spatial downsampling in the SAA-Net, the disparity of the light field (feature map) fed to the SAAM is about 4 pixels, see the top left figure in Fig. \\ref{fig:att_map1}(a). Part of the attention map $M'(x_0,s_0,x_1,s_1), s_0=1, s_1=1,2,3$ is visualized in the bottom of Fig. \\ref{fig:att_map1}(a). We can clearly see that the response moves from $R(A,A)$ at the position $M'(13,1,13,1)$ to $R(A,A'')$ at the position $M'(13,1,5,3)$ along the angular dimension.\n\nFig. \\ref{fig:att_map1}(b) demonstrates the spatial-angular attention on a scene with non-Lambertian effects. In this case, the positional relation of the corresponding points $B$, $B'$ and $B''$ does not follow their depth, as clearly shown in the top right figure of Fig. \\ref{fig:att_map1}(b). We visualize part of the attention map $M'(x_0,s_0,x_1,s_1), s_0=3, s_1=1,2,3$ in the bottom of Fig. \\ref{fig:att_map1}(b). The result shows that the proposed SAAM is able to capture the correspondences even for regions with non-Lambertian effects.\n\n\\textbf{Computational time.} For a 3D light field, the network takes about 53 seconds to reconstruct a $1\\times97$ light field from $1\\times7$ views of spatial resolution $960\\times720$ ($16\\times$ upsampling), i.e., 0.54s per view. For a 4D light field from Lytro Illum, it takes about 18 seconds to reconstruct a $7\\times7$ light field from $3\\times3$ views of spatial resolution $536\\times376$ ($3\\times$ upsampling), i.e., less than 0.36s per view. The reconstruction of an $8\\times8$ Lytro light field from $2\\times2$ views ($7\\times$ upsampling) takes less than 30 seconds, i.e., less than 0.5s per view. The above evaluations are performed on an Intel Xeon Gold 6130 CPU @ 2.10GHz with an NVIDIA Quadro GV100.\n\n\\textbf{Limitations.} Repetitive patterns in the input light field can cause multiple plausible responses in the non-local attention, leading to misalignments in the reconstructed light fields. A possible solution is to introduce a smoothness term in the attention map to penalize the multiple responses.\n\nAlthough we have proposed a multi-scale reconstruction structure to alleviate the GPU memory cost, the SAAM can still exhaust the GPU memory when dealing with an input light field of high spatial-angular resolution. For example, when reconstructing light fields from the CIVIT Dataset~\\cite{ICME2018} ($1\\times25$ input views of resolution $1280\\times720$), we have to disassemble the 3D data into sub-light fields of resolution $25\\times1280\\times6$. Our investigation shows that this disassembly degrades the reconstruction quality. Decomposing a large tensor into a combination of small tensors~\\cite{kolda2009tensor} would address this problem more fundamentally.\n\n\\section{Conclusions}\nWe have proposed a spatial-angular attention module in a 3D U-net backbone to capture correspondence information non-locally for the light field reconstruction problem. The introduced Spatial-Angular Attention Module (termed SAAM) is designed to compute, for each pixel in the light field, the responses from all the positions in its epipolar plane, and to produce a spatial-angular attention map that records the correspondences. The attention map is then applied to drive the light field reconstruction via deconvolution in the angular dimension.
We further propose a multi-scale reconstruction structure based on the 3D U-net backbone that implements the SAAM efficiently in the low spatial scale, while also preserving fine details in the high spatial scales by using deconvolution-based reconstruction in each skip connection. For the network training, a spatial-angular perceptual loss is designed specifically for the high-dimensional light field data by pretraining a 3D auto-encoder. The evaluations on light fields with challenging non-Lambertian effects and large disparities have demonstrated the superiority of the proposed spatial-angular attention network.\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:level1}Introduction}\n\nRandomness plays a prominent role in quantum computing, quantum information processing and physics in general. In particular, random unitaries chosen from the Haar measure \\cite{4} on the unitary group U(N) find applications in randomized benchmarking \\cite{6}, noise estimation \\cite{5}, quantum metrology \\cite{OAC+16}, as well as modeling thermalization \\cite{7} and even black hole physics \\cite{8}. \nUnfortunately, genuine Haar distributed unitaries are hard to create, as the scaling required is exponential in the number of qubits \\cite{9}. \nOn the other hand, efficient substitutes for Haar distributed unitaries were shown to exist \\cite{1}, \\cite{10}, \\cite{11}, \\cite{12}, \\cite{13}, \\cite{3}, \\cite{15}. These substitutes are known as unitary t-designs \\cite{10} -- ensembles over subsets of U(N) which mimic exactly \\cite{10}, \\cite{11}, \\cite{15} or approximately \\cite{1}, \\cite{12}, \\cite{13}, \\cite{3}, \\cite{15} choosing from the Haar measure up to order t in the statistical moments. \n\nFollowing \\cite{15}, our approach is to harness the quantum randomness arising from applying a measurement based (MB) scheme to produce approximate designs.\nIn MB computation \\cite{14}, deterministic unitary computation is achieved by making sequential, adaptive measurements on an entangled multipartite state, known as a graph state \\cite{20}. Without these adaptive feedforward corrections, the inherent randomness of the measurements effectively samples from ensembles of unitaries.\nIn \\cite{15} it was shown that starting with a fixed graph state and applying fixed angle measurements (with no need for feedforward corrections) effectively samples from an approximate t-design. \nFurthermore, this process is efficient in the number of qubits, preparations and measurements, following from the efficiency of the construction of Brandao et al. \\cite{1}. \nIndeed the construction of the graph state essentially mimics the random circuit construction of Brandao et al. \\cite{1}.\nHowever, in doing so, the graph itself is rather complicated, and moreover is not a simple regular lattice. \nA natural question is then, can simple, regular lattices (such as those useful for universal measurement based computation \\cite{14}, \\cite{19}) be applied to generate t-designs?\nAs well as being more convenient from a practical point of view (in terms of generating the graph state), this connects the question of optimal generation of ensembles to standard measurement based quantum computation. 
Furthermore, a new proof that the resulting ensemble is an approximate t-design is required (the techniques follow along the lines of \\cite{1}, but the result does not follow directly from theirs).\n\nIn this work we show that this is possible. In particular we show that running a fixed-measurement MB scheme, with no feed-forward, on a regular graph whose number of qubits is polynomial in n, t and $log(\\dfrac{1}{\\varepsilon})$, results in an ensemble of random unitaries which forms an $\\varepsilon$-approximate t-design. \nThe graph we use is very similar to the brickwork graph known to be a universal resource for MB quantum computation \\cite{19}. The proofs presented here rely principally on the G-local random circuit construction (GLRC) of Brandao et al. \\cite{1}, the detectability lemma (DL) of Aharonov et al. \\cite{2}, as well as a theorem \\cite{33}, \\cite{3}, \\cite{1} on the equivalence between tensor product expanders (TPE's) and approximate t-designs.\\\\\n\nThis paper is organized as follows: section II defines some preliminary notions, section III provides a brief statement of the results, section IV contains a detailed proof of the results, and finally section V briefly discusses some potential applications.\n\n\\section{\\label{sec:level2}Preliminaries}\n\n\n\n\n\\subsection{\\label{subsec:level1} MBQC}\n\nThe model of computation used throughout this work is the measurement based quantum computation (MBQC) model. This model was first proposed by Raussendorf and Briegel \\cite{14} as an alternative to the gate model of quantum computing \\cite{16}.\nComputation is carried out by first preparing a large entangled state (a graph state), followed by single qubit measurements. \nCrucially, in order to deterministically perform a desired unitary, the measurements must be done adaptively -- to counter the inherent randomness arising from the measurements, the measurement angles are corrected by a feedforward process using previous measurement results. However, instead of applying these corrections we will use this randomness as a resource to sample from an ensemble of unitaries with the desired structure -- namely, that they form a t-design.\n\nA \\emph{graph state} \\cite{20} is a pure entangled quantum state of $n$ qubits in one to one correspondence with a graph $G=\\{E,V\\}$ of $n$ vertices and edges. Each vertex $i \\in V$ of the graph is associated with a qubit, and each edge $\\{i,j\\} \\in E$ represents an entangling operation applied during the preparation\n\\begin{eqnarray}\n|G\\rangle = \\prod_{i,j \\in E} CZ_{i,j} |+...+\\rangle_V \\nonumber.\n\\end{eqnarray}\nFor computations on quantum inputs, a set of vertices $I \\subset V$ is assigned as the input vertices, with initial input state $|\\psi_{in}\\rangle_I$, and the associated \\emph{open graph state} is defined as\n\\begin{eqnarray} \\label{eq Open Graph State}\n|G(\\psi)\\rangle = \\prod_{i,j \\in E} CZ_{i,j} |\\psi_{in}\\rangle_I|+...+\\rangle_{V\\setminus I}.\n\\end{eqnarray}\nWe also identify the set of output qubits by the vertices $O\\subset V$. Computation is then carried out by sequentially measuring all the non-output qubits, in our case in a basis in the $X-Y$ plane. Each measurement is represented by an angle $\\alpha$, corresponding to measuring in the basis \\newline $\\{|\\pm\\alpha\\rangle := \\frac{1}{\\sqrt{2}}(|0\\rangle \\pm e^{i \\alpha}|1\\rangle)\\}$.\nIn standard MBQC the correction strategy is given by the gflow~\\cite{21} -- a partial order over the graph together with a function, which give the time order of the measurements and the correction dependencies respectively. 
In this work, following \\cite{15}, we do not adapt the measurement angles, so that each measurement outcome results in a potentially different unitary being performed.\n\nFollowing the convention of \\cite{21} (see also \\cite{15}) we represent these resources as graphs where the input vertices are marked with squares and the measured qubits have the measurement angle written inside; the quantum outputs $O$ are therefore empty circles (note that in \\cite{21} measured qubits are simply coloured black).\nFigure \\ref{fig1} illustrates this for a simple example. Following equation (\\ref{eq Open Graph State}), for input state $|\\psi_{in}\\rangle = a|0\\rangle+b|1\\rangle$ Fig. \\ref{fig1} corresponds to an initial open graph state \n\\begin{eqnarray}\n|G_\\psi\\rangle & =& a |0\\rangle |+\\rangle + b |1\\rangle |-\\rangle \\nonumber \\\\\n& = &\\frac{1}{\\sqrt{2}}\\left(|+\\alpha\\rangle HZ(\\alpha) |\\psi_{in}\\rangle +|-\\alpha\\rangle H Z Z(\\alpha) |\\psi_{in}\\rangle \\right)\\nonumber,\n\\end{eqnarray}\nwhere $H$ is the Hadamard gate, $Z$ is the Pauli Z gate, and $Z(\\alpha):=e^{-i \\alpha Z\/2}$ is a rotation by angle $\\alpha$ around the Z axis. Denoting $m$ as the binary measurement outcome, associating $m=0$ to outcome $+\\alpha$ and $m=1$ to $-\\alpha$, it is clear that measuring the first qubit is equivalent to applying the random unitary\n\\begin{eqnarray}\nU(m) = H Z^m Z(\\alpha), \n\\end{eqnarray}\nwith equal probability for $m=0$ and $m=1$.\nAs in \\cite{15}, the same idea, applied to larger graphs with more inputs and outputs, is the source of the random unitary ensembles we will study in this work.\n\n\n\\begin{figure}[h]\n\\begin{center}\n\n\n\\graphicspath{}\n\n\\includegraphics[trim={0 10cm 0 0cm}, scale=0.18]{fig1corr.pdf}\n\\caption{MB scheme on a 2-qubit cluster state: measuring the input (squared) qubit at an angle $\\alpha$ in the XY plane results in propagation of the input state to the output, along with the application of a random unitary $U = H Z^{m} Z(\\alpha)$.}\n\\label{fig1}\n\\end{center}\n\\end{figure}\n\n\\subsection{\\label{subsec:level2}t-designs and Tensor Product Expanders}\nUnitary t-designs \\cite{10}, \\cite{12}, \\cite{13} are subsets of the unitary group U(N), sampled with some probability distribution, which mimic either exactly or approximately choosing from the Haar measure on U(N) up to order t in the statistical moments. Our concern in this work is with the latter, known as $\\varepsilon$-approximate t-designs. \nMore formally, let $H =(C^{2})^{\\otimes n}$ be the Hilbert space of n qubits, and define the density matrix $\\rho$=$\\ket{\\phi}\\bra{\\phi}$ with $\\ket{\\phi}$ a unit vector in H (an n-qubit state). Then let \\{$p_{i}$, $U_{i}$\\} be a collection of unitaries in U(N)= U($2^{n}$) (the n-qubit unitary group), where we sample each $U_{i}$ with probability $p_{i}$. Let $\\mu_{H}$ denote the Haar measure \\cite{4} on U($2^{n}$). Consider now t copies of our n-qubit system; an ensemble $\\{p_i, U_i\\}$ is an \\emph{exact t-design} if it satisfies\n\\begin{equation}\n\\begin{aligned}\n\\label{eq1}\n\\sum_{i}p_{i} U_{i} ^{\\otimes t} \\rho^{t} U_{i} ^{\\dagger \\otimes t}=\\int_{ U(2^{n})}U ^{\\otimes t}\\rho^{t}U ^{\\dagger \\otimes t}\\mu_{H}(dU),\n\\end{aligned}\n\\end{equation}\nfor all $\\rho^{t} \\in \\mathcal{B}(H^{\\otimes t})$, where $\\int_{ U(2^{n})}U ^{\\otimes t}\\rho^{t}U ^{\\dagger \\otimes t}\\mu_{H}(dU)$ represents averaging over the Haar measure. 
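\nAs a simple sanity check of definition (\\ref{eq1}) -- a standard example, not part of our construction -- the single-qubit Pauli ensemble \\{$\\dfrac{1}{4}$, \\{$1$,$X$,$Y$,$Z$\\}\\} is an exact 1-design, since averaging over it implements the completely depolarizing channel. A minimal numerical verification (here in Python; any linear algebra package would do) is:\n\\begin{verbatim}\nimport numpy as np\n\n# single-qubit Pauli ensemble {p_i = 1/4, U_i in {1, X, Y, Z}}\nI2 = np.eye(2, dtype=complex)\nX = np.array([[0, 1], [1, 0]], dtype=complex)\nY = np.array([[0, -1j], [1j, 0]], dtype=complex)\nZ = np.array([[1, 0], [0, -1]], dtype=complex)\n\nrng = np.random.default_rng(7)\nv = rng.normal(size=2) + 1j * rng.normal(size=2)\nrho = np.outer(v, v.conj()) \/ np.vdot(v, v)    # random pure state\n\nlhs = sum(0.25 * U @ rho @ U.conj().T for U in (I2, X, Y, Z))\nrhs = np.trace(rho) * I2 \/ 2                   # Haar average for t = 1\nprint(np.allclose(lhs, rhs))                   # True: an exact 1-design\n\\end{verbatim}\nFor $t \\ge 2$ the Pauli ensemble fails the analogous test, which is one reason larger, universal gate sets (such as the ensembles constructed in section \\ref{sec:level4}) are required.\n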
\n\n\n$\\varepsilon$-approximate t-designs are defined similarly, as ensembles which satisfy\n\\begin{equation}\n\\begin{aligned}\n\\label{eq2}\n(1-\\varepsilon)\\int_{ U(2^{n})}U ^{\\otimes t}\\rho^{t}U ^{\\dagger \\otimes t}\\mu_{H}(dU) \\leq \\sum_{i}p_{i} U_{i} ^{\\otimes t} \\rho^{t} U_{i} ^{\\dagger \\otimes t} \\\\ \\leq (1+\\varepsilon) \\int_{ U(2^{n})}U ^{\\otimes t}\\rho^{t}U ^{\\dagger \\otimes t}\\mu_{H}(dU).\n\\end{aligned}\n\\end{equation}\n\nApproximate t-designs can also be defined using various norms \\cite{1}, \\cite{13}, \\cite{3}.\nA useful concept equivalent to a t-design is a tensor product expander (TPE) \\cite{22}. We say that a pair \\{$p_{i}$, $U_{i}$\\} is an ($\\eta$,t)-TPE if the following equation holds \\cite{3}, \\cite{22}:\n\n\\begin{equation}\n\\begin{aligned}\n\\label{eq3}\ng(t,\\mu):=\\mid\\mid\n\\sum_{i }p_i U_i^{\\otimes t,t} - \\int_{U(2^{n}) } U^{\\otimes t,t}\\mu_{H}(dU) \\mid\\mid_\\infty\\leq\\eta,\n\\end{aligned}\n\\end{equation}\nwhere $U^{\\otimes t,t} =U^{\\otimes t} \\otimes U^{\\star \\otimes t}$, $\\star$ denotes the complex conjugate and $\\mu$ represents the probability measure on U(d) which results in choosing $U_i$ with probability $p_i$. \nIn our main proof we will rely on the following theorem \\cite{33}, \\cite{3}, \\cite{1} on the equivalence between TPE's and $\\varepsilon$-approximate t-designs:\n\n\\newtheorem{theorem}{Theorem}\n\\begin{theorem} \\cite{33}, \\cite{3}, \\cite{1}\\\\\n Let \\{$p_{i}$, $U_{i}$\\} be an ($\\eta$,t)-TPE with $\\eta < 1$. Denote by $\\mathcal{U}_{i}$ the set of all possible unitaries $U_{i}$. Then iterating this TPE k times (i.e. obtaining the product $U$=$\\prod_{j=1,...,k}U_{\\pi (j)}$ with the $U_{\\pi (j)}$'s independently chosen from the ensemble \\{$p_{i}$, $U_{i}$\\}, $\\pi(j) \\in \\{1,\\dots,|\\mathcal{U}_{i}|\\}$), with k $\\ge$ $\\dfrac{1}{log(\\dfrac{1}{\\eta})}log(\\dfrac{d^{t}}{\\varepsilon})$, results in an ensemble \\{$p_U$, $U$=$\\prod_{j=1,...,k}U_{\\pi (j)}$\\} which is an $\\varepsilon$-approximate t-design. Here $d$ is the dimension of the unitary group.\n\\end{theorem}\n\n\nFor convenience we define the t'th \\emph{moment super operator} of \\{$p_{i}$, $U_{i}$\\} (or simply moment super operator) as follows:\n\\begin{equation}\n\\begin{aligned}\n\\label{momop}\n M_t [\\mu]:=\\sum_{i} p_i U_i^{\\otimes t,t}.\n\\end{aligned}\n\\end{equation}\nThrough the TPE condition (\\ref{eq3}), $M_t [\\mu]$ plays a central role in our proofs. Indeed, for any ensemble for which repeated sampling eventually converges to the Haar measure, $g(t,\\mu)$ is equal to the second highest eigenvalue of $M_t[\\mu]$ \\cite{12,1}.\nWe will then follow the techniques of \\cite{1}, \\cite{12}, \\cite{13} in connecting the calculation of this eigenvalue to gaps of Hamiltonians, which will allow us to prove the connection to t-designs via Theorem 1.\n\n\n\n\n\\subsection{\\label{subsec:level3}Many body Physics and t-designs}\n\nIt has been known for some time \\cite{1}, \\cite{12}, \\cite{13} that the problem of estimating the scaling rate (number of iterations needed to reach a desired accuracy $\\varepsilon$) of an $\\varepsilon$-approximate t-design can be reduced to a problem of finding the spectral gap (the difference of energy between the ground and first excited state) of some many-body Hamiltonian. 
\nHere we give an overview of these techniques, in particular as used in \\cite{1}.\n\n\nAn extensive body of research (\\cite{23}, \\cite{24}, \\cite{25}, \\cite{26} and many others) has been devoted to the case of 1D spin chains, with local Hamiltonians (we assume a finite interaction range) with translational symmetry (see Fig.\\ref{fig2}). \nWe will focus exclusively on this case, and more precisely on a type of 1D Hamiltonian (the one we use in our proof) consisting of local terms acting on nearest neighbor spins $i$ and $i+1$ with translational symmetry, which is frustration free (the entire Hamiltonian can be minimized by minimizing each of its local terms individually) and satisfies the Nachtergaele criterion (\\cite{23}, condition C.3). \nThis family of Hamiltonians was used by Brandao et al. \\cite{1} to study their local random circuit (LRC) construction, and later the so-called G-local random circuits (GLRC). \nWe will briefly define these families of circuits and review these proofs.\n\n\\begin{figure}[h]\n\\begin{center}\n\n\n\\graphicspath{}\n\\includegraphics[trim={0 12cm 0 16cm}, scale=0.08]{Figurefornewarticlecorr.pdf}\n\\caption{Example of a 1D system of 4 spins. The local Hamiltonians $h_{i,i+1}$ have a range of 2 (i.e. act on nearest neighbor spins); for example $h_{1,2}$ acts on spins 1 and 2. Translational invariance means that any $h_{i,i+1}$ has the same form on all 2 qubit systems ($i$, $i+1$). In this case the total Hamiltonian of the system of 4 spins can be written as a sum of local Hamiltonians, i.e. H=$h_{1,2}$+$h_{2,3}$+$h_{3,4}$.}\n\\label{fig2}\n\\end{center}\n\\end{figure}\n\nThe local random circuits (LRC) in \\cite{1} generate random circuits on n qubits as follows.\nFor each run of the LRC, a unitary $U \\in$ U(4) is chosen from the Haar measure on U(4), then an index $i$ is chosen uniformly at random from the set \\{1,\\dots,n-1\\}, and finally $U$ is applied to qubits $i$ and $i+1$. \nThe LRC defines a pair \\{$\\mu_{LRC}$,$\\mathcal{U}$\\} where $\\mu_{LRC}$ is the probability measure induced by one LRC run, and $\\mathcal{U}$ is the set of all the possible unitaries which can be generated by one LRC run. \nWe arrive at the following moment super operator associated to one run of the LRC\n\\begin{equation}\n\\begin{aligned}\n\\label{eq4}\nM_{t}[\\mu_{LRC}]=\\dfrac{1}{n-1}\\sum_{i=1}^{n-1}\\int_{U(4)}U_{i,i+1}^{\\otimes t,t}\\mu_{H}(dU)=\\dfrac{1}{n-1}\\sum_{i=1}^{n-1}P_{i,i+1},\n\\end{aligned}\n\\end{equation}\n\\\\ where $U_{i,i+1}$=$1^{\\otimes i-1}\\otimes U \\otimes1^{\\otimes n-i-1}$, $U \\in$ U(4), $P_{i,i+1}:=\\int_{U(4)}U_{i,i+1}^{\\otimes t,t}\\mu_{H}(dU)$ and $\\mu_{H}$ is the Haar measure on U(4). \n\n Since each of the $P_{i,i+1}$'s is Hermitian, $M_t [\\mu_{LRC} ]$ is itself Hermitian.\nNow consider the Hamiltonian \n\\begin{equation}\nH=\\sum_{i}h_{i,i+1},\n\\end{equation}\nwhere $h_{i,i+1}$=$1- P_{i,i+1}$. Then\n\\begin{equation}\nM_{t}[\\mu_{LRC}] = I-\\dfrac{H}{n-1}.\n\\end{equation} \nThe ground space of H has an eigenvalue of 0, and the gap between its ground and first excited spaces gives the second highest eigenvalue of $M_{t}[\\mu_{LRC}]$.\nThis $H$ is a 1D spin chain Hamiltonian with nearest neighbor local terms which are translationally invariant. It is also frustration free by construction, because the Hamiltonian can be minimized simply by minimizing all of its local terms (taking their ground state of energy 0) individually.\nBrandao et al. 
also proved that this Hamiltonian satisfies the Nachtergaele criterion, and then bounded the resulting Nachtergaele bound using path coupling techniques \\cite{1}. \nIn this way they show that the spectral gap $\\Delta H$ of $H$ admits the following (polynomial in t) bound for n $\\ge$ $\\lfloor 2.5log_{2}(4t) \\rfloor$:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq5}\n\\Delta H \\ge (1700 \\cdot \\lfloor log_{2} (4t) \\rfloor^{2}\\cdot t^{5}\\cdot t^{\\dfrac{3.1}{log(2)}})^{-1},\n\\end{aligned}\n\\end{equation}\nwhere $\\lfloor x \\rfloor$ denotes the floor function acting on variable $x$.\n\n\nWe now move to G-local random circuits, which are the finite-set counterparts of LRCs.\nOne run of the GLRC proceeds exactly as in the LRC case, but instead of choosing a unitary $U$ from the Haar measure of U(4), we choose with uniform probability from a finite subset G of SU(4) which is universal and contains inverses. One can show from the beautiful result of Bourgain and Gamburd \\cite{26} that the Hamiltonian\n$H_{GLRC}$=$\\sum_{i} h^{'}_{i,i+1}$=$\\sum_{i}(1-P^{'}_{i,i+1})$ with \n$P^{'}_{i,i+1}$ =$\\dfrac{1}{|G|} \\sum_{U \\in G}(1^{\\otimes i-1}\\otimes U \\otimes1^{\\otimes n-i-1})^{\\otimes t,t}$ admits the following bound for its spectral gap:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq6}\n\\Delta H_{GLRC} \\ge \\alpha\\cdot\\Delta H,\n\\end{aligned}\n\\end{equation}\nwith $\\alpha$ a constant and $\\Delta H$ the spectral gap of the LRC Hamiltonian.\n\nNote that because the set G contains unitaries and their inverses and samples them uniformly, we have $\\mu_{G}(U)$=$\\mu_{G}(U^{\\dagger})$ for all $U, U^{\\dagger} \\in$ G. This means that $P^{'}_{i,i+1}$ (and hence $H_{GLRC}$) is a Hermitian operator, so the above definition of a GLRC Hamiltonian makes sense.\nWe can rewrite equation (\\ref{eq6}) as follows:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq7}\n\\Delta H_{GLRC} \\ge (C\\cdot \\lfloor log_{2} (4t)\\rfloor^{2}\\cdot t^{5}\\cdot t^{\\dfrac{3.1}{log(2)}})^{-1}= P_{GLRC},\n\\end{aligned}\n\\end{equation}\n C being a constant depending on the gate set G. \n\nThese spectral gaps directly give the second highest eigenvalues of the corresponding moment super operators, equal to $g(t, \\mu)$ in equation (\\ref{eq3}), confirming the TPE conditions, which through Theorem 1 then allow statements about their efficiency as t-designs \\cite{1}.\n\n\n\\section{\\label{sec:level4} Main Results}\n\nIn order to state the main results, we define some simple graph states for two input qubits. We will henceforth refer to a graph state with an MB scheme applied to it as a gadget. These will act as the building blocks for our construction. \nConsider the two 5-column, 2-row brickwork states with a fixed angle MB scheme in Fig.~\\ref{fig3} and Fig.~\\ref{fig4}, which we call $S_{I_{1}}$ and $S_{I_{2}}$.\n\n\\begin{figure}[h]\n\\begin{center}\n\n\n\\graphicspath{}\n\\includegraphics[trim={0 12cm 0 0cm}, scale=0.4]{fig3acorr.pdf}\n\\caption{The 2-row, 5-column brickwork state gadget giving rise to $S_{I_{1}}$}\n\\label{fig3}\n\\end{center}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\n\n\\graphicspath{}\n\\includegraphics[trim={0 12cm 0 0cm}, scale=0.4]{fig3bcorr.pdf}\n\\caption{The 2-row, 5-column brickwork state gadget giving rise to $S_{I_{2}}$}\n\\label{fig4}\n\\end{center}\n\\end{figure}\n\n$S_{I_{1}}$ and $S_{I_{2}}$ give rise respectively to the two MB ensembles \\{$\\dfrac{1}{2^{8}}$,$U_{1_{i}}$\\} and \\{$\\dfrac{1}{2^{8}}$,$U_{2_{i}}$\\}. 
\nThe number of unitaries generated by the MB scheme on the 2-row, 5-column brickwork state is $2^{8}$, corresponding to the possible outcomes of the 8 measured qubits.\n\nIt can be easily checked that each unitary of the ensembles $S_{I_{1}}$ and $S_{I_{2}}$ has (up to a global phase) an inverse in the same ensemble. \nThat is, denoting $\\mathcal{U}_{S_{I_1}}$ as the set of unitaries generated by $S_{I_1}$ (and similarly for $S_{I_2}$), for all $U_{1_{i}}\\in \\mathcal{U}_{S_{I_1}}$\n there exists $U_{1_{j}}\\in \\mathcal{U}_{S_{I_1}}$ such that $U_{1_{i}}=U^{\\dagger}_{1_{j}}$. Similarly for $S_{I_2}$.\n\nConsider now a 13-column brickwork gadget: B= $S_{I_{1}} \\circ S_{I_{2}} \\circ S_{I_{1}}$, where we mean by W $\\circ$ V a concatenation which identifies the output of graph W with the input of graph V. We are now in a position to state our main results:\\\\ \\\\\n\n\\begin{theorem}\n The gadget B gives rise to an ensemble of unitaries which \\\\\n (i) is universal on SU(4), \\\\\n (ii) contains elements and their inverses, and\\\\\n (iii) is sampled with a uniform probability. \\\\\n\\end{theorem}\n\nThis theorem means that the set of unitaries generated by B (call it $\\mathcal{U}_B$) satisfies the conditions necessary to form a GLRC. Though our construction does not proceed exactly in this way, this property will be important in the proof of Theorem 3.\n\nNow consider the gadget on n qubits given in Fig.~\\ref{fig5}, which we call $G_{n}$. \nThe horizontal line with a circle in the middle denotes a direct link between output and input, present only on the first and last rows. \nThe square with the letter B is our 13-column brickwork gadget B, and the empty three-sided square means that there is no vertical entanglement. \n\n\\begin{figure}[h]\n\\begin{center}\n\n\n\\graphicspath{}\n\\includegraphics[trim={0 5cm 0 0cm},scale=0.4]{fig4corr.pdf}\n\\caption{The graph gadget $G_{n}$, pictured here for even n (the odd n case follows straightforwardly).}\n\\label{fig5}\n\\end{center}\n\\end{figure}\n\n\n\nThe first and last rows of $G_{n}$ are made up of 13 qubits, and all rows in between are made up of 25 qubits. This gives rise in total to a graph composed of 26$+$25(n$-$2) = 25n$-$24 qubits. We now state our second main result. \\\\ \\\\\n\n\\begin{theorem}\n The k(n,t,$\\varepsilon$)-fold concatenation of $G_{n}$, \\newline $E_{n}$ = $G_{n} \\circ G_{n} \\circ \\cdots$, results in an ensemble of unitaries which forms an $\\varepsilon$-approximate t-design on n qubits (n $\\ge$ $\\lfloor 2.5log_{2}(4t) \\rfloor$), with: \\\\ k(n,t,$\\varepsilon$) $\\ge$ $\\dfrac{3}{log_{2}(1+\\dfrac{P_{GLRC}}{2})}(nt+log_{2}(\\dfrac{1}{\\varepsilon}))$.\n\\end{theorem}\n\n\n\\section{\\label{sec:level5}Proofs}\n\\subsection{\\label{subsec:level6}Proof of Theorem 2}\nBefore going on to universality, let us briefly explain why the set of unitaries generated by B, $\\mathcal{U}_B$, contains inverses ($(ii)$ in Theorem 2). \nAny element $U \\in \\mathcal{U}_B$ may be written as $U$=$U_{1}\\cdot U_{2}\\cdot U_{1}^{'}$, where $U_{1}$, $U_{1}^{'}$ $\\in$ $\\mathcal{U}_{S_{I_1}}$ and $U_{2}$ $\\in$ $\\mathcal{U}_{S_{I_2}}$. \nSince $\\mathcal{U}_{S_{I_1}}$ and $\\mathcal{U}_{S_{I_2}}$ contain unitaries and their inverses, we can always find $U^{\\dagger}$=$U_{1}^{ ' \\dagger}\\cdot U_{2}^{\\dagger}\\cdot U_{1}^{\\dagger}$ $\\in$ $\\mathcal{U}_B$. 
Furthermore, each unitary $U_{\\{m\\}} \\in \\mathcal{U}_B$ associated to a specific binary string $\\{m\\}$ is sampled with a uniform probability of $\\dfrac{1}{|\\mathcal{U}_B|}$, proving $(iii)$ in Theorem 2.\n\n\nThe remainder of this subsection is devoted to proving universality ($(i)$ in Theorem 2), and we will use the approach outlined in \\cite{27} and \\cite{34} for doing so. Following \\cite{27} and \\cite{34}, one can show that the group generated by the set of unitaries \\{$A$,$A^{\\dagger}$,$C$,$C^{\\dagger}$,$E$,$E^{\\dagger}$,$F$,$F^{\\dagger}$\\} is dense (universal) on U(4) if the following conditions are satisfied:\\\\ \\\\\n$C_{1}$: $H_{1}$:=$\\dfrac{log(A)}{i}$, $H_{2}$:=$\\dfrac{log(C)}{i}$, $H_{3}$:=$\\dfrac{log(E)}{i}$ and $H_{4}$:=$\\dfrac{log(F)}{i}$ and their commutators form a set of 16 linearly independent Hamiltonians which span the Lie algebra \\cite{28} of U(4).\\\\ \\\\\n$C_{2}$: $H_{1}$, $H_{2}$, $H_{3}$ and $H_{4}$ have eigenvalues that are irrationally related to $\\pi$.\\\\\n\nWe first consider $C_{1}$. We found 4 distinct unitaries $A$=$U_{\\{m\\}}$, $C$=$U_{\\{m^{'}\\}}$, $E$=$U_{\\{m^{''}\\}}$, and $F$=$U_{\\{m^{'''}\\}}$, where $U_{\\{m\\}}$, $U_{\\{m^{'}\\}}$, $U_{\\{m^{''}\\}}$ and $U_{\\{m^{'''}\\}}$ $\\in$ $\\mathcal{U}_B$ are associated to the binary strings \\{$m$\\}=\\{0,1,1,0,1,1,1,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,1,0\\}, \\{$m^{'}$\\}=\\{0,0,0,1,1,1,1,0,1,0,1,0,1,1,1,1,0,1,1,1,0,1,0,0\\}, \\{$m^{''}$\\}=\\{0,0,0,1,1,1,1,0,1,0,1,0,1,1,1,1,0,1,1,1,0,1,0,0\\}, and \\{$m^{'''}$\\}=\\{0,1,1,1,0,1,1,0,0,0,0,1,0,0,0,1,0,0,1,0,0,0,1,0\\}. We adopt the convention that the first 12 binary numbers appearing in a given binary string represent the measurement results on qubits of the first row of B from left to right (input towards output), and the last 12 binaries represent the measurements performed on the qubits of the second row of B from left to right. \n\n\nWe then construct 16 Hamiltonians $H_{1},\\dots,H_{16}$ as follows:\\\\ \\\\\n$H_{1}=\\dfrac{log(A)}{i}$\\\\\n$H_{2}=\\dfrac{log(C)}{i}$\\\\\n$H_{3}=\\dfrac{log(E)}{i}$\\\\\n$H_{4}=\\dfrac{log(F)}{i}$\\\\\n$H_{5}=i[H_{1},H_{2}]$\\\\\n$H_{6}=i[H_{1},H_{3}]$\\\\\n$H_{7}=i[H_{1},H_{4}]$\\\\\n$H_{8}=i[H_{2},H_{3}]$\\\\\n$H_{9}=i[H_{2},H_{4}]$\\\\\n$H_{10}=i[H_{2},H_{5}]$\\\\\n$H_{11}=i[H_{2},H_{6}]$\\\\\n$H_{12}=i[H_{3},H_{4}]$\\\\\n$H_{13}=i[H_{3},H_{5}]$\\\\\n$H_{14}=i[H_{3},H_{6}]$\\\\\n$H_{15}=i[H_{4},H_{5}]$\\\\\n$H_{16}=i[H_{4},H_{6}]$\\\\\n\nAfter that, we expand each of the 16 Hamiltonians in the basis P=\\{$P_{ij}$\\}, $i$,$j$=0,\\dots,3, where P is a basis of the Lie algebra of U(4). \n In other words, we write each $H_{k}$=$a_{k}^{ij}P_{ij}$ (Einstein summation convention adopted over i and j), where the $a_{k}^{ij}$'s are real numbers. \n Since P is a basis of the Lie algebra of U(4) (over the field of real numbers), proving linear independence of the 16 Hamiltonians \\{$H_{k}$\\}, $k$=1,\\dots,16, in the basis P means that the set \\{$H_{k}$\\} is itself a basis of the Lie algebra of U(4). \n The linear independence of the 16 generators \\{$H_k$\\} is equivalent to the non-vanishing of the determinant of a 16 by 16 matrix M, where each of the 16 columns of M is made up of the 16 coefficients \\{$a_{k}^{ij}$\\} for a given $k$. 
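\nAs an illustration of this check, the following minimal sketch (in Python, using numpy and scipy) computes the 16 Hamiltonians, expands them in the Pauli basis P and evaluates the determinant of M. The unitaries below are random placeholders; in the actual verification they are replaced by the four unitaries $U_{\\{m\\}}$, $U_{\\{m^{'}\\}}$, $U_{\\{m^{''}\\}}$ and $U_{\\{m^{'''}\\}}$ read off from $\\mathcal{U}_B$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import logm\nfrom scipy.stats import unitary_group\n\n# Pauli basis P_ij = sigma_i (tensor) sigma_j of the Lie algebra of U(4)\nsig = [np.eye(2, dtype=complex),\n       np.array([[0, 1], [1, 0]], dtype=complex),\n       np.array([[0, -1j], [1j, 0]], dtype=complex),\n       np.array([[1, 0], [0, -1]], dtype=complex)]\nP = [np.kron(a, b) for a in sig for b in sig]\n\ndef expand(H):\n    # real coefficients a_k^{ij} of a Hermitian H; Tr(P_ij P_ij) = 4\n    return np.array([np.trace(p @ H).real \/ 4.0 for p in P])\n\n# placeholders for the four unitaries A, C, E, F selected from U_B\nA, C, E, F = [unitary_group.rvs(4) for _ in range(4)]\n\nH = [logm(U) \/ 1j for U in (A, C, E, F)]        # H_1 ... H_4\n# 0-indexed pairs matching the 1-indexed commutator list above\nfor (a, b) in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (1, 4),\n               (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5)]:\n    H.append(1j * (H[a] @ H[b] - H[b] @ H[a]))  # H_5 ... H_16\n\nM = np.column_stack([expand(h) for h in H])     # the 16 x 16 matrix M\nprint(np.linalg.det(M))                         # C_1 holds iff det(M) != 0\n\\end{verbatim}\n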
\nWe found that the 16 Hamiltonians of the above constructed scheme give rise to a matrix M with non-vanishing determinant \\footnote{This was done numerically, however well within numerical precision.}, thus this scheme forms a basis of the Lie algebra of U(4) and $C_{1}$ is verified for a subset of $\\mathcal{U}_B$ (and hence for $\\mathcal{U}_B$ itself). \n\n \n \nProving $C_{2}$ requires the use of a result in algebraic number theory called Lehmer's theorem \\cite{30}. Its context is described in the following Lemma:\n\\newtheorem{lem}{Lemma}\n\\begin{lem} \\cite{30} \n If n>2 and k and n are coprime integers, then 2cos($\\dfrac{2k\\pi}{n}$) is an algebraic integer.\n\\end{lem}\nAn algebraic number is a complex number which is a solution of a polynomial equation with integer coefficients. The minimal polynomial of an algebraic number $z$ is the polynomial of lowest degree with integer coefficients for which $z$ is a solution. An algebraic integer is an algebraic number whose minimal polynomial is monic (that is, the coefficient in front of the highest degree variable is 1) \\cite{31}. \n\nLehmer's theorem states that angles $\\alpha$ = $\\dfrac{2k\\pi}{n}$ which are rationally related to $\\pi$ must have $2cos(\\alpha)$ an algebraic integer. So, if we can find instances of angles $\\alpha$ for which $2cos(\\alpha)$ is not an algebraic integer, then $\\alpha$ has to be irrationally related to $\\pi$ as a consequence of Lehmer's theorem. \nEach of the eigenvalues $\\lambda$ of $A$, $C$, $E$ or $F$ is a complex number with unit norm (because they are unitary matrices). Thus, $\\lambda$=$e^{i\\theta}$. We calculated the expression $2cos(\\theta)$\nand constructed its minimal polynomial. \nWe found that for each of the eigenvalues $\\lambda$ of $A$, $C$, $E$ and $F$, the minimal polynomial of $2cos(\\theta)$ is not monic ($2cos(\\theta)$ is not an algebraic integer), \nand thus all the $\\theta$'s are irrationally related to $\\pi$ by Lehmer's theorem. Further, because $A$, $C$, $E$ and $F$ are diagonal in the same basis as their Hamiltonians $H_{1}$, $H_{2}$, $H_{3}$ and $H_{4}$ \\cite{27},\n the $\\theta$'s we calculated are the eigenvalues of these Hamiltonians. Hence, the eigenvalues of the Hamiltonians are irrationally related to $\\pi$, which proves $C_{2}$. \n \n\nProving $C_{1}$ and $C_{2}$ means that the subset \\{$A$,$A^{\\dagger}$,$C$,$C^{\\dagger}$,$E$,$E^{\\dagger}$,$F$,$F^{\\dagger}$\\} of $\\mathcal{U}_B$ is universal on U(4), and thus so is $\\mathcal{U}_B$. But $(i)$ further requires that the set be universal on SU(4). Fortunately, the moment super operator of a set sampled from U(4) can always be thought of as arising from a sampling of SU(4). This can be seen by writing any $U \\in$ U(4) as $U = det(U)^{\\dfrac{1}{4}}\\cdot U^{'}$ with $U^{'} \\in$ SU(4); the global phases then cancel between $U^{\\otimes t}$ and $U^{\\star \\otimes t}$, giving $U^{\\otimes t,t}$=$U^{' \\otimes t,t}$. \n\n\n\\subsection{\\label{subsec:level7}Proof of Theorem 3}\nOur approach for proving Theorem 3 can be summarized in two steps. \nIn the first step, we prove that the ensemble generated by the gadget $G_{n}$ is an ($\\eta$,t)-TPE with $\\eta$=poly(t) <1. We do so by using Aharonov et al.'s detectability lemma \\cite{2}. 
\nThe second step uses Theorem 1 to establish the bound on $k(n,t,\\varepsilon)$.\n\nConsider a GLRC n-qubit Hamiltonian\n\\begin{eqnarray}\nH_{GLRC} &=& \\sum_{i}h^{'}_{i,i+1} \\nonumber \\\\\n&=&\\sum_{i}(1-P^{'}_{i,i+1})\n\\end{eqnarray}\nwith G = $\\mathcal{U}_B$, and \\\\ $P^{'}_{i,i+1}$=$\\dfrac{1}{|\\mathcal{U}_B|} \\sum_{U\\in \\mathcal{U}_B}(1^{\\otimes i-1}\\otimes U\\otimes1^{\\otimes n-i-1})^{\\otimes t,t}$. Define $P_{odd}$=$P^{'}_{1,2}\\cdot P^{'}_{3,4}\\cdots$\nand $P_{even}$=$P^{'}_{2,3}\\cdot P^{'}_{4,5}\\cdots$. $P_{odd}$ and $P_{even}$ can be considered as projectors onto the ``odd'' and ``even'' ground spaces of $H_{GLRC}$. Let $P_{0}$ be the projector onto the entire ground space of $H_{GLRC}$. Further, because $H_{GLRC}$ is constructed from universal sets on U(4), its ground space projector is nothing but the t'th Haar moment super operator \\cite{12}. \nIn other words $P_{0}$=$\\int_{U(2^{n})}U^{\\otimes t,t}\\mu_{H}(dU)$, with $U \\in$ U($2^{n}$) and $\\mu_{H}$ being the Haar measure on U($2^{n}$). \n\n\nThe statement of the detectability lemma is the following:\n\\begin{lem} \\cite{2} \\\\ $\\mid\\mid P_{even}\\cdot P_{odd} - P_{0} \\mid\\mid_{\\infty}$ $\\leq$ $(1+\\dfrac{\\Delta H_{GLRC}}{2} )^{-\\dfrac{1}{3}}$\n\\end{lem}\n\nTo relate this to the ensemble generated by the gadget $G_{n}$ we prove the following Lemma:\n\\begin{lem}\n $M_{t} [\\mu_{G_{n}}]$=$P_{even}\\cdot P_{odd}$\n\\end{lem}\n\\textbf{Proof of Lemma 3}:\\\\ \\\\ We first note that, because all unitaries are drawn independently, we can think of the moment super operator as being composed of 2 layers: an odd layer (left part of the gadget of Fig.~\\ref{fig5}) and an even layer (right part of the gadget of Fig.~\\ref{fig5}); this is similar to reasoning found in \\cite{3}. Then:\\\\ \n\n$M_{t} [\\mu_{G_{n}}]$=$\\left(\\dfrac{1}{|\\mathcal{U}_B|}\\right)^{\\delta_{even}}\\sum_{U_{23}, U_{45},... \\in \\mathcal{U}_B} (U_{23}\\otimes U_{45} \\otimes \\cdots)^{\\otimes t,t}\\cdot\\left(\\dfrac{1}{|\\mathcal{U}_B|}\\right)^{\\delta_{odd}}\\sum_{U_{12}, U_{34},... \\in \\mathcal{U}_B} (U_{12} \\otimes U_{34} \\otimes \\cdots)^{\\otimes t,t}$, \\\\ \nwhere $\\delta_{odd}$=$\\dfrac{n}{2}$ if n mod 2 = 0 and $\\dfrac{n-1}{2}$ if n mod 2 = 1, \\\\ and $\\delta_{even}$=$\\dfrac{n}{2}-1$ if n mod 2 = 0 and $\\dfrac{n-1}{2}$ if n mod 2 = 1.\\\\\n\n\nNote that since the $U_{i,i+1}$'s are independently drawn from $\\mathcal{U}_B$, one can rewrite this as:\\\\ \n\n$M_{t} [\\mu_{G_{n}}]$= $\\left(\\dfrac{1}{|\\mathcal{U}_B|} \\sum_{U_{23} \\in \\mathcal{U}_B}(1 \\otimes U_{23} \\otimes 1^{\\otimes n-3})^{\\otimes t,t}\\cdot\\dfrac{1}{|\\mathcal{U}_B|}\\sum_{U_{45} \\in \\mathcal{U}_B}(1^{\\otimes 3}\\otimes U_{45}\\otimes 1^{\\otimes n-5})^{\\otimes t,t}\\cdots\\right)\\cdot\\left(\\dfrac{1}{|\\mathcal{U}_B|} \\sum_{U_{12} \\in \\mathcal{U}_B}(U_{12} \\otimes 1^{\\otimes n-2})^{\\otimes t,t}\\cdot\\dfrac{1}{|\\mathcal{U}_B|}\\sum_{U_{34} \\in \\mathcal{U}_B}(1^{\\otimes 2} \\otimes U_{34} \\otimes 1^{\\otimes n-4})^{\\otimes t,t}\\cdots\\right)$\\\\=$(P^{'}_{2,3}\\cdot P^{'}_{4,5}\\cdots)\\cdot(P^{'}_{1,2}\\cdot P^{'}_{3,4}\\cdots)$\\\\=$P_{even}\\cdot P_{odd}$. $\\Box$ \\\\ \\\\ \n\nThen, as a direct consequence of the detectability lemma we obtain:\n\\begin{equation}\n\\begin{aligned}\n\\label{dl}\ng(t,\\mu_{G_{n} }) = \\mid\\mid M_{t} [\\mu_{G_{n} } ] - P_{0} \\mid\\mid_{\\infty}\\leq(1+\\dfrac{\\Delta H_{GLRC}}{2} )^{-\\dfrac{1}{3}}\n\\end{aligned}\n\\end{equation}\n \nAll that remains now is to bound the RHS of Equation (\\ref{dl}). 
Using Equation (\\ref{eq7}) one directly obtains:\n \n\\begin{equation}\n\\begin{aligned}\n\\label{eq8}\n(1+\\dfrac{\\Delta H_{GLRC}}{2})^{-\\dfrac{1}{3}}\\leq(1+\\dfrac{P_{GLRC}}{2})^{-\\dfrac{1}{3}}. \n\\end{aligned}\n\\end{equation}\nEquation (\\ref{dl}) along with Equation (\\ref{eq8}) directly leads to the following Corollary:\n\\newtheorem{corollary}{Corollary}\n\\begin{corollary}\nThe ensemble $\\{\\dfrac{1}{|\\mathcal{U_{G}}|} , \\mathcal{U_{G}}\\}$ generated by the gadget $G_{n}$ is an ($\\eta$,t)-TPE with:\\\\\n $\\eta$=$(1+\\dfrac{P_{GLRC}}{2})^{-\\dfrac{1}{3}}$= poly(t) < 1.\n\\end{corollary}\nPlugging Corollary 1 into Theorem 1 with \\{ $p_{i}$, $U_{i}$ \\}=$\\{\\dfrac{1}{|\\mathcal{U_{G}}|} , \\mathcal{U_{G}}\\}$, $d$=$2^{n}$ and $\\eta$=$(1+\\dfrac{P_{GLRC}}{2})^{-\\dfrac{1}{3}}$, then multiplying and dividing the bound on $k$ in Theorem 1 by $log(2)$, allows one to obtain Theorem 3.\n\n\\section{\\label{sec:level8}Conclusions and discussion}\nWe have found a simple n-qubit graph gadget which implements an $\\varepsilon$-approximate t-design under repeated concatenations, with fixed measurements and no feedforward.\nThe number of concatenations k(n,t,$\\varepsilon$)=$\\Omega(nt+ log(\\dfrac{1}{\\varepsilon}))$ required is linear in both the qubit number n and the order t of the design. Also, because the number of qubits in the graph gadget scales linearly with n, we only require $\\Omega(n^{2}t + nlog(\\dfrac{1}{\\varepsilon}))$ qubits in total to implement the gadget $E_{n}$=$G_{n} \\circ G_{n} \\circ...$. Furthermore, the choice of the 2-qubit gadget B is not at all unique. In fact, $G_{n}$ could be made even more practical provided simpler (fewer qubits, fewer required entangling operations, ...) 2-qubit gadgets possessing the properties of B can be found. \n\n\nOur construction is very similar to the brickwork state, which is a universal resource for MBQC \\cite{19} -- it is basically the brickwork state but with regular holes.\nIn MBQC these holes would simply teleport the inputs through, so that the proofs of universality of \\cite{19} easily extend to our graph -- that is, concatenations of the graph used in $G_n$ also form a universal resource for MBQC.\nIn addition to being pleasing from a practical point of view, this opens the door to applications of techniques for delegation of ensemble generation, as done for computation \\cite{19,barz2013experimental}, and indeed to the possibility of hiding whether one is sampling unitaries or performing some deterministic computation.\\\\ \\\\ \\\\\n\n\n\n\\section{\\label{sec:level9}Acknowledgements}\n\nWe thank Y. Nakata for fruitful discussions and P. Turner for comments. R. Mezher acknowledges funding from the Lebanese PhD grant $CNRS-L$\/$UL$.\nDM acknowledges support from ANR grant COMB.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSemi-analytic models of galaxy formation (SAMs) are well-established tools for\nexploring galaxy formation scenarios in their cosmological context. The problem of\nhow galaxies form and evolve is described by a set of coupled differential equations\ndealing with well-defined astrophysical processes. 
These are driven by dark\nmatter halo merger trees that determine the source terms in the equation network\n\\citep[for reviews see e.g.][]{BensonReview,Baugh2006,SomervilleDave2015}.\nDue to the approximate nature of the methods used in these simulations, and the\nuncertainties in the physical processes that are modelled, these models include a large\nnumber of uncertain parameters. While order of magnitude estimates for these\nparameters can be made, their precise values must be determined by comparison\nto observational data.\n\nTraditionally, parameter values have been set through a trial-and-error approach,\nwhere the galaxy formation modeller varies an individual parameter, developing\nintuition about its effects on the model predictions for a particular observable, and\nthen uses this understanding to select a parameter set that gives a good description\nof the observations.\nDespite its simplicity, and obvious limitations, this procedure has led to\nsubstantial progress in the field.\nRecently, however, several papers have employed more rigorous statistical methods to\nexplore the\nhigh dimensional parameter space systematically\n\\citep{Kampakoglou2008,Henriques2009,Bower2010,Henriques2013,Benson2014,Lu2014,\nHenriques2015}. Such approaches\nprovide a richer analysis, seeking to identify the regions of\nparameter space that are in agreement with observational data, and not just to find\noptimal parameter values. This therefore informs us about the uniqueness of the\nparameter choices, and provides understanding of the degeneracies between different\nparameters.\n\nIn this work we study which constraints are imposed on the semi-analytic model\n\\textsc{galform}\\xspace by observations of the galaxy stellar mass function (GSMF). We first consider\nthe constraints imposed by local observations and then investigate how the\nparameters are further constrained by the introduction of high\nredshift data. This makes powerful\nuse of the iterative emulator technique described by \\citet{Bower2010}, which\nprovides an efficient way of probing a high dimensional parameter space.\nImportantly, the method allows additional constraints to be added in post-processing.\nThus, we start by finding the region in the parameter space which contains models\nthat produce a good match to the local Universe GSMF.\nThis region is then further probed to check whether a match to higher\nredshift data is possible.\nBy analysing 2D projections of the sub-volume of plausible models and performing a\nprincipal component analysis of it, we are able to study the degeneracies and\ninteractions between the most constrained parameters. 
We note that typical\napproaches to analysing comparable models using Bayesian MCMC require millions of\nmodel runs (at least), while the approach used here, which utilises Bayesian\nemulation, only required tens of thousands of model runs, representing a substantial\nimprovement in efficiency.\n\n\n\n\n\n\nClosely reproducing the observed high-redshift galaxy mass function\n\\citep{Cirasuolo2010, Henriques2013} is problematic for many galaxy formation models.\n\\citet{Henriques2013}, for example, concludes that the effectiveness of galaxy\nfeedback (specifically the re-incorporation time of expelled gas) must depend on\n\\green{the virial mass of the dark matter halo\non the basis of a Monte Carlo exploration of the parameter space of their model.\nThis is not fully satisfactory, however, since one would expect the re-incorporation\ntime to be physically related to the halo dynamical time and not the halo mass.\n}\n\nIn this paper, we explore an appealing and well-motivated alternative. Observations\nof galaxy winds \\citep[e.g.,][]{heckman1990, martin2012} suggest that the effective\nmass loading is strongly dependent on the surface density of star formation. It\nappears that efficient outflows are more readily generated when star formation\noccurs in dense bursts than when the star formation occurs in a smooth\nand quiescent disc. These observations motivate a more careful exploration of the\ntreatment of galaxy winds from starburst and quiescent discs, and in this paper we\nparametrize the mass loading of the wind independently in these two cases. This may\nnaturally resolve the difficulty presented by observations of the high redshift GSMF,\nsince the cosmic star formation rate density may be more dominated by starbursts at\nhigh redshift, while it is dominated by quiescent star formation at low redshift\n\\citep{Malbon2007}.\n\nThis paper is organized as follows.\nIn \\S\\ref{sec:model} we describe the galaxy formation model, specifying\nwhich parameters were varied and briefly reviewing the physical meaning of the\nmost relevant of them.\nIn \\S\\ref{sec:emulator} the iterative history matching methodology is\nreviewed.\nIn \\S\\ref{sec:z0} we present our results for the matching to the local\nGSMF.\nIn \\S\\ref{sec:high_z} we examine the effects of including higher redshift\ndata.\nIn \\S\\ref{sec:subspace}, 2D projections of the parameter space are\nanalysed.\nIn \\S\\ref{sec:pca}, the results of a principal component analysis of the\nnon-implausible volume are shown.\nFinally, in \\S\\ref{sec:summary}, we summarize our conclusions.\n\n\n\\section{Galaxy formation model}\n\\label{sec:model}\n\nThe basis of this paper is the semi-analytic model \\textsc{galform}\\xspace, first introduced by\n\\citet{Cole2000}. 
Our starting point is the model discussed by \\citet[][hereafter\nGP14]{Gonzalez-Perez2014}, which re-calibrates the version described in\n\\citet{Lagos2012} to match observational data, taking into account the\nbest-fitting cosmological parameters obtained by WMAP7 \\citep{WMAP7}.\nThe model of \\citet{Lagos2012} is itself a development of the version presented\nby \\citet{Bower2006} -- which introduced AGN feedback and disc instabilities to\nthe original \\textsc{galform}\\xspace model -- introducing a modified prescription for star formation\nin galaxy discs (\\S\\ref{sec:BR_starformation}, see \\citealt{Lagos2011sfl} for an\nin-depth discussion).\n\nWe note that there is now a more modern variant of the \\textsc{galform}\\xspace model which differs from\nthe base model used here. This model, described comprehensively\nby \\citet{Lacey2016}, assumes two initial mass functions (IMFs), one for quiescent\nstar formation and a different one for starbursts\n-- an approach which improves the model predictions for number counts and\nredshift distribution of sub-millimetre galaxies.\nThe model presented here assumes a universal IMF, which considerably simplifies\ncomparison with the galaxy stellar mass function. A universal IMF is compatible with\ndirect observational measurements: see \\citet{Bastian2010} and \\citet{Smith2015} for\na recent discussion.\n\n\n\\subsection{Differences from GP14}\n\nAlthough the model we use here is based on GP14, there are a number of\nsmall, but important, differences.\nFirstly, the merger trees in the present study were constructed using the Monte Carlo\nalgorithm described by \\citet{Parkinson2008}, which is based on the\nExtended Press-Schechter theory \\citep{Bower1991,LaceyCole1993}, while\nGP14 uses merger trees extracted from a Millennium-class N-body\nsimulation \\citep{Guo2013}.\n\\green{T}he use of Monte Carlo merger trees allows \\textsc{galform}\\xspace to run significantly faster, since\nit is possible to control the number of haloes with a given final mass in the\nsimulation, whereas in the case of the N-body trees, most of the computational time\nis spent on over-represented small mass haloes.\nIn GP14, ram-pressure stripping is modelled by\ncompletely and instantaneously removing the hot gas halo when a galaxy becomes a\nsatellite. Here we follow the same prescription as \\citet{Font2008}, which uses the\n\\citet{McCarthy2008} ram-pressure stripping model that is based on hydrodynamic\nsimulations \\green{-- a} similar update to the model is used in \\citet{Lagos2014}.\n\\green{\nFinally, the present model adopts the IMF obtained by \\citet{ChabrierIMF}, while GP14\nuses a \\citet{KennicuttIMF} IMF.\n}\n\\subsection{Varied parameters}\n\\label{sec:parameters}\n\n\\begin{table*}\n\\begin{center}\n\\caption{\nParameters varied in this work, the physical processes and their ranges.\nFor reference, values of these parameters used in GP14 are shown.\n}\n\\input{parameters_table.tex}\n\\label{tab:parameters}\n\\end{center}\n\\end{table*}\n\nThe semi-analytic approach to the problem of galaxy formation relies on a large\nnumber of parameters which codify the\nuncertainties associated with the many astrophysical processes involved.\nSince the emulator technique allows us to survey a parameter space of high\ndimensionality both quickly and at a relatively low computational cost, we are\nable to vary a large number of parameters simultaneously. 
One should bear in mind\nthat varying a larger number of parameters in the present approach corresponds\nto a more \\emph{conservative} choice, since it requires fewer \\textit{a priori}\nassumptions about the role of each parameter.\n\nWe varied 20 parameters, all of which are listed, together with their ranges,\nin table \\ref{tab:parameters}. We outline the physical meaning of\nparameters related to star formation and feedback in the subsections\nbelow; for further details, we refer the reader to the original\npapers, and to \\citet{Lacey2016}.\n\nFor the purposes of sampling, computations of volumes and principal\ncomponents analysis, the parameters were rescaled to $[-1,1] $ within the\ninitial range, either linearly,\n\\begin{equation}\n\\label{eq:scale_lin}\n p^{(s)} = 2\\left(\\frac{p-p_\\text{min}}{p_\\text{max}-p_\\text{min}}\\right)-1\\,,\n\\end{equation}\nor logarithmically,\n\\begin{equation}\n\\label{eq:scale_log}\n p^{(l)} =\n2\\left[\\frac{\\log_{10}(p\/p_\\text{min})}{\\log_{10}(p_\\text{max}\/p_\\text{min})}\n\\right]-1\\,.\n\\end{equation}\nThe scaling used is also listed in table \\ref{tab:parameters}.\n\n\\subsubsection{Quiescent star formation}\n\nIt is assumed in the model that the surface density of the star formation rate is set by the surface density of molecular gas (see \\citealt{Lagos2011sfl} and references\ntherein),\n\\label{sec:BR_starformation}\n\\begin{equation}\n\\dot \\Sigma_\\star = \\nu_{0,\\text{sf}} \\,\\Sigma_\\text{mol} =\\nu_{0,\\text{sf}}\nf_\\text{mol} \\Sigma_\\text{gas}\\label{eq:BR_starformation}\\,,\n\\end{equation}\nwhere \\green{$\\Sigma_\\text{gas}$ is the surface density of cold gas in the disc and}\nthe fraction of molecular hydrogen,\n$f_\\text{mol}=R_\\text{mol}\/(R_\\text{mol}+1)$, is computed using the pressure\nrelation of \\citet{BlitzRosolowsky2006}\n\\begin{equation}\n R_\\text{mol} = \\left(\\frac{P_\\text{ext}}{P_\\text{sf}}\\right)^{\\beta_\\text{sf}}\n \\label{eq:BR_Rmol}\\,,\n\\end{equation}\nwith\n\\begin{equation}\n P_\\text{ext}=\\frac{\\upi}{2} G \\Sigma_\\text{gas}\\left[ \\Sigma_\\text{gas} +\\left(\n\\frac{\\sigma_\\text{gas}}{\\sigma_\\star}\\right)\\Sigma_\\star \\right]\\,.\n\\end{equation}\n\n\n\\subsubsection{Star formation bursts}\n\nDuring a starburst the star formation rate is set to\n\\label{sec:burst_starformation}\n\\begin{equation}\n \\text{SFR}_\\text{burst} = \\frac{M_\\text{gas,bulge}}{\\tau_{\\star, \\text{burst}}}\n\\end{equation}\nwith\n\\begin{equation}\n \\tau_{\\star, \\text{burst}} = \\max\\left( f_\\text{dyn}\\tau_\\text{dyn},\\,\\,\n \\tau_{\\text{min,burst}} \\right)\\label{eq:burst}\n\\end{equation}\nwhere $\\tau_\\text{dyn}$ is the dynamical time of the newly formed spheroid and\n$f_\\text{dyn}$ and $\\tau_{\\text{min,burst}}$ are model parameters.\n\n\n\n\\subsubsection{Supernovae feedback}\n\\label{sec:SN_feedback}\nThe outflow of gas from the disc or the bulge of a galaxy is modelled\nusing\n\\begin{equation}\\label{eq:alphahot}\n\\dot M_\\text{out,disc\/burst} = \\beta \\times \\text{SFR}_\\text{disc\/burst}\n\\end{equation}\nwhere $\\text{SFR}_\\text{disc\/burst}$ are the total star formation rates in the\nquiescent and starburst cases and $\\beta$ is the \\emph{mass loading}, given by\n\\begin{equation}\n\\beta = \\beta_{0,\\text{disc\/burst}}\n\\left(\\frac{V_\\text{disc\/bulge}}{200\\,\\text{km}\\,\\text{s}^{-1}}\n\\right)^{-\\alpha_\\text{hot}}\n\\label{eq:beta}\n\\end{equation}\nwhere $V_\\text{disc\/bulge}$ is the circular velocity associated\nwith the disc (in the quiescent 
case) or with the newly formed spheroid\n(bulge) component (in a starburst).\n\nIn previous \\textsc{galform}\\xspace works the mass loadings associated with discs and bursts were\nassumed to share the same normalization, i.e. $\\beta_{0,\\text{burst}}=\\beta_{0,\\text{disc}} = \\beta_0$.\nThis assumption was relaxed in the present work. The notation in previous works was\nalso slightly different: the equivalent parameter\n\\begin{equation}\nV_\\text{hot}~\\equiv~(200\\,\\text{km}\\,\\text{s}^{-1})~\\times~\\beta_0^{\n-1\/\\alpha_\\text{hot}}\n\\end{equation}\nwas used instead.\n\nThe outflowed gas is assumed to be once more available to cool and form stars\non a time-scale\n\\begin{equation}\n t_\\text{reinc} = \\frac{\\tau_\\text{halo}}{\\alpha_\\text{reheat}},\n\\label{eq:reheat}\n\\end{equation}\nwhere $\\tau_\\text{halo}$ is the dynamical time of the halo.\n\\green{The amount of cold gas available (as well as the amount of stars formed) is\ndetermined by simultaneously solving for both the star formation rate and the\noutflow rate.}\n\n\n\\subsubsection{AGN feedback}\n\\label{sec:AGN_feedback}\n\nThe model assumes the cooling of gas from the hot gas halo can be disrupted by\nthe injection of energy by the AGN. This is assumed to happen only in haloes\nin `quasi-hydrostatic equilibrium', defined by\n\\begin{equation}\n t_\\text{cool}(r_\\text{cool}) > \\alpha_\\text{cool}^{-1} t_\\text{ff}(r_\\text{cool})\n\\label{eq:tcool}\n\\end{equation}\nwhere $t_\\text{cool}$ and $r_\\text{cool}$ are the cooling time and radius and\n$t_\\text{ff}$ is the free fall time. Thus, the parameter $\\alpha_\\text{cool}$ determines\nthe halo mass at which AGN feedback is effective (i.e. lower values of\n$\\alpha_\\text{cool}$ imply AGN feedback being active in smaller mass haloes).\n\nThe cooling of gas from the hot gas halo is interrupted if a galaxy satisfies\nequation \\eqref{eq:tcool}, and\n\\begin{equation}\n L_\\text{cool} < \\epsilon_\\text{edd}L_\\text{edd}\n\\end{equation}\nwhere $L_\\text{edd}$ is the Eddington luminosity of the central galaxy's black hole.\n\n\\subsubsection{Disc stability}\n\\label{sec:stability}\n\nDiscs are considered stable if they satisfy\n\\begin{equation}\n\\frac{V_\\text{max}}{\\sqrt{{1.68} \\,G M_\\text{disc}\/r_\\text{disc}}}< f_{\\rm stab}\n\\end{equation}\nwhere $f_{\\rm stab}$ is a model parameter close to 1. If at any timestep this\ncriterion is not satisfied, it is assumed that the disc is quickly converted\ninto a spheroid due to a disc instability and a starburst is triggered\n\\green{-- i.e. all the gas and stars are instantaneously moved into the spheroid\ncomponent where the star formation follows equation~\\eqref{eq:burst}.}\n\n\n\\section{Bayesian Emulation Methodology}\n\\label{sec:emulator}\n\nThe use of complex simulation models, such as \\textsc{galform}\\xspace, is now widespread across many\nscientific areas.\nSlow simulators with high dimensional input and\/or output spaces give rise to several\nmajor problems, the most ubiquitous being that of\nmatching the model to observed data, and the subsequent global parameter\nsearch that such a match entails.\n\nThe general area of Uncertainty Analysis has been developed within the Bayesian\nstatistical community to solve the corresponding problems associated with slow\nsimulators~\\citep{Craig97_Pressure,Kennedy01_Calibration}. 
A core part of this area\nis the use of emulators: an emulator is a\nstochastic function that mimics the \\textsc{galform}\\xspace model but which is many orders of\nmagnitude faster to evaluate, with specified prediction uncertainty that varies\nacross the input space \\citep{OHagan06_Tutorial,Vernon2010,galf_stat_sci}.\nAny subsequent calculation one wishes to do with \\textsc{galform}\\xspace can\ninstead be performed far more efficiently using an emulator \\citep{Higdon09_Coyote2}.\nFor example, an emulator can be used within an MCMC algorithm to greatly speed up\nconvergence \\citep{Kennedy01_Calibration,Higdon04_prediction, Henderson:2009aa}.\nThis is especially useful as, for scenarios possessing moderate to high numbers of\ninput parameters,\nMCMC algorithms often require vast numbers (billions, trillions or more) of model\nevaluations to adequately explore the input space and\nreach convergence: see for example the excellent discussion in\n\\cite{geyer2011introduction}. Such numbers of evaluations are clearly\nimpractical for models that possess substantial run time, such as \\textsc{galform}\\xspace. Another\nmajor issue with MCMC is that of pseudo convergence:\nan MCMC algorithm may after a large number of iterations {\\it appear} to have\nconverged and hence pass every convergence test, but continued running would\neventually reveal a sudden and substantial change in chain location, showing that\nthe chain had not in fact reached equilibrium at all \\citep{geyer2011introduction}.\n\nHence, although we fully support the Bayesian paradigm, we do not use an MCMC\nalgorithm here, due both to the reasons discussed above, and to the fact that a\nBayesian MCMC approach requires a full joint probabilistic specification across all\nuncertain\nquantities, which is often hard to make and hard to justify.\nWe instead outline a more efficient and robust approach known as iterative history\nmatching using Bayesian emulation~\\citep{Vernon2010}. Here the set of all inputs\ncorresponding to acceptable matches to the\nobserved data is found, by iteratively removing unacceptable regions of the input\nspace in waves. History matching naturally incorporates Bayesian emulation and has\nbeen\nsuccessfully employed in a range of scientific disciplines\nincluding galaxy formation~\\citep{Vernon2010,Vernon10_CS_rej,Bower2010,\ngalf_stat_sci}, epidemiology~\\citep{Yiannis_HIV_1,Yiannis_HIV_2}, oil reservoir\nmodelling~\\citep{Craig96_Pressure,Craig97_Pressure,JAC_Handbook,JAC_sma_samp},\nclimate modelling~\\citep{Williamson:2013aa}\nand environmental science~\\citep{asses_mod}. History matching can be viewed as a\nuseful precursor to a fully Bayesian analysis\nthat is often in itself sufficient for model checking and model development.\nHere we use it within a Bayes Linear framework, a simpler, more tractable\nversion of Bayesian statistics, where only expectations, variances and covariances\nneed to be\nspecified \\citep{Goldstein_99,Goldstein07_BayesLinearBook}. 
However, if one is\ncommitted to a full Bayesian MCMC approach, performing an a priori history match\ncan dramatically improve the subsequent efficiency of the MCMC by first removing the\nvast regions of input parameter space that would have extremely low\nposterior probability.\n\n\n\\subsection{Emulator Construction}\n\\label{sec:emconstr}\n\nWe now outline the core emulator methodology \\citep[see][for\nfurther description]{Vernon2010,Bower2010}.\nWe represent the \\textsc{galform}\\xspace model as a function $f(x)$, where\n$x=~\\!(\\nu_{0,\\text{sf}},P_\\text{sf}\/k_\\text{B},\\dots,\\epsilon_\\text{strip},\\alpha_\\text{rp})$\nis a\nvector\ncomposed of the 20 input parameters given in table~\\ref{tab:parameters}, and $f$ is a\nvector containing all \\textsc{galform}\\xspace outputs of interest, specifically the GSMF at various\nmass bins and redshifts. To construct an emulator we generally perform an initial\nspace-filling set of wave 1 runs, using a maximin latin hypercube design over the\nfull 20 dimensional input space\n\\citep[see][for details]{Bower2010,SWMW89_DACE,Santner03_DACE,Currin91_BayesDACE}.\nFor each output $f_i(x)$, $i=1\\dots q$, a Bayesian emulator can be structured as\nfollows:\n\\begin{equation}\n\\label{eq_emulator}\nf_i(x) = \\sum_j \\beta_{ij} g_{ij}(x_{A_i}) + u_i(x_{A_i}) + v_i(x)\n\\end{equation}\nHere $\\beta_{ij}$, $u_i(x_{A_i})$ and $v_i(x)$ are uncertain quantities to be\ninformed by the current set of runs.\nThe active variables $x_{A_i}$ are a subset of the inputs that are found to be most\ninfluential for output $f_i(x)$. The $g_{ij}$ are known deterministic\nfunctions of $x_{A_i}$, with a common choice being low order polynomials, and the\n$\\beta_{ij}$ are unknown regression coefficients. $u_i(x_{A_i})$ is a Gaussian\nprocess with, for example, zero mean and possible covariance function:\n\\begin{equation}\n\\label{eq_corr}\n{\\rm Cor}(u_i(x_{A_i}),u_i(x'_{A_i})) = \\sigma^2_{u_i} {\\rm exp}\\left\\{-\n\\|x_{A_i}-x_{A_i}'\\|^2 \/ \\theta_i^2 \\right\\}\n\\end{equation}\nwhere $\\sigma^2_{u_i}$ and $\\theta_i$ are the variance and correlation length of\n$u_i(x_{A_i})$ which must be specified,\nand $v_i(x)$ is an uncorrelated nugget with expectation zero and ${\\rm Var} (v_i(x))\n= \\sigma^2_{v_i}$, which represents the effect of the remaining inactive input\nvariables, and\/or any stochasticity exhibited by the model \\citep{Vernon2010}.\n\nWe could employ a fully Bayesian approach by specifying joint prior distributions for\nall uncertain quantities in equation~(\\ref{eq_emulator}), and subsequently updating\nbeliefs about $f_i(x)$ in light of the wave~1 runs via Bayes' theorem.\nHere instead we prefer to use the more tractable Bayes Linear approach, a version of\nBayesian statistics that requires only expectations, variances and covariances for\nthe prior specification, and which uses only efficient matrix calculations, and no\nMCMC\n\\citep{Goldstein07_BayesLinearBook}.\nTherefore, if we are prepared to specify ${\\rm E}(\\beta_{ij})$, ${\\rm\nVar}(\\beta_{ij})$, $\\sigma^2_{u_i}$, $\\sigma^2_{v_i}$ and $\\theta_i$, we can obtain\nthe\ncorresponding Bayes Linear priors for $f_i(x)$, namely ${\\rm E}(f_i(x)), {\\rm\nVar}(f_i(x))$ and\n${\\rm Cov}(f_i(x),f_i(x'))$, using equations~(\\ref{eq_emulator}) and (\\ref{eq_corr}).\n\nThe initial wave of $n$ runs is performed at input locations $x^{(1)},\nx^{(2)},\\dots,x^{(n)}$ which give model output values $D_i = (f_i(x^{(1)}),\nf_i(x^{(2)}),\\dots,f_i(x^{(n)}))$, where $i$ labels the model output. 
We obtain the\nBayes Linear adjusted expectation ${\rm E}_{D_i}(f_i(x))$ and variance ${\rm\nVar}_{D_i}(f_i(x))$ for $f_i(x)$ at new input point $x$ using:\n\begin{align}\n {\rm E}_{D_i}(f_i(x)) =&\; \label{eq_BLE}\n{\rm E}(f_i(x)) \nonumber\\\n&+\, {\rm Cov}( f_i(x), D_i) {\rm Var}(D_i)^{-1} (D_i - {\rm E}(D_i))\n\\\n{\rm Var}_{D_i}(f_i(x)) =&\; \label{eq_BLV}\n{\rm Var}(f_i(x)) \nonumber\\\n &-\, {\rm Cov}( f_i(x), D_i) {\rm Var}(D_i)^{-1} {\rm Cov}(D_i,f_i(x))\n\end{align}\nThe emulator thus provides a prediction ${\rm E}_{D_i}(f_i(x))$ for the behaviour of\nthe \textsc{galform}\xspace model at new input point $x$ along with a corresponding $x$-dependent\nuncertainty ${\rm Var}_{D_i}(f_i(x))$. It is the latter feature that strongly\ncontributes to emulators being more advanced than interpolators. These two quantities\n${\rm E}_{D_i}(f_i(x))$ and ${\rm Var}_{D_i}(f_i(x))$ are used directly in the\nimplausibility measures that form the basis of the global parameter search described\nbelow.\n\n\n\subsection{Simple 1-dimensional Example}\n\label{sec:1demul}\n\n\nTo clarify the above description we outline the construction of a simple\n1-dimensional emulator of the function\n\begin{equation}\nf(x) \;\;=\;\; 3 \,x \sin\left(\frac{5\upi(x-0.1)}{ 0.4}\right)\n\end{equation}\nfor which we perform a set of $n=10$ equally spaced wave 1 runs at locations\n$x^{(j)}=0.1,\dots,0.5$ giving rise to run data\n\begin{equation}\label{eq_Di_sim}\nD = (f(x^{(1)}), f(x^{(2)}),\dots,f(x^{(n)}))\n\end{equation}\nwhere we have dropped the $i$ subscript as the output is only 1-dimensional.\n\nFor simplicity we reduce the emulator's regression terms $\beta_{ij} g_{ij}(x_A)$, in\nequation~(\ref{eq_emulator}), to a constant $\beta_0$ and remove the\nnugget $v_i(x)$ as there are no inactive inputs. 
The emulator\nequation~(\\ref{eq_emulator}) therefore reduces to:\n\\begin{equation}\\label{eq_sim_em}\nf(x) \\;\\;=\\;\\; \\beta_0 + u(x)\n\\end{equation}\nA possible prior specification is to treat the constant or mean term $\\beta_0$ as\nknown, with ${\\rm E}(\\beta_0)=0.1$ and hence ${\\rm Var}(\\beta_0)=0$.\nWe also set $\\sigma_{u}=0.6$ and $\\theta = 0.06$: a choice that represents curves of\nmoderate\nsmoothness.\nWe can now calculate all terms on the rhs of equations~(\\ref{eq_BLE}) and\n(\\ref{eq_BLV}) using equations~(\\ref{eq_sim_em}), (\\ref{eq_corr}) and\n(\\ref{eq_Di_sim}), for example:\n\\begin{eqnarray}\n{\\rm E}(f(x)) &=& \\beta_0 \\\\\n{\\rm Var}(f(x)) &=& \\sigma_{u}^2 \\\\\n{\\rm E}(D) &=& (\\beta_0, \\dots, \\beta_0)^T\n\\end{eqnarray}\nwhile ${\\rm Cov}( f(x), D)$ is now a row vector of length $n$ with $j$th component\n\\begin{eqnarray}\n{\\rm Cov}( f(x), D)_j &=& {\\rm Cov}( u(x), u(x^{(j)})) \\\\\n&=& \\sigma^2_{u} {\\rm exp}\\left\\{- \\|x-x^{(j)}\\|^2 \/ \\theta^2 \\right\\} \\nonumber\n\\end{eqnarray}\nand ${\\rm Var}(D)$ is an $n\\times n$ matrix with $(j,k)$ element\n\\begin{eqnarray}\n{\\rm Var}(D)_{jk} &=& {\\rm Cov}( u(x^{(j)} ), u(x^{(k)})) \\\\\n&=& \\sigma^2_{u} {\\rm exp}\\left\\{- \\|x^{(j)}-x^{(k)}\\|^2 \/ \\theta^2 \\right\\}\n\\nonumber\n\\end{eqnarray}\nWe can now construct the emulator by calculating the adjusted expectation and\nvariance ${\\rm E}_{D}(f(x))$ and ${\\rm Var}_{D}(f(x))$ from equations~(\\ref{eq_BLE})\nand (\\ref{eq_BLV}) respectively, for any new input point $x$.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{.\/simple_emul4_a.pdf}\n \\caption{\nThe 1-dimensional emulator as constructed in \\S\\ref{sec:1demul}. The\n\\green{dashed (blue)}\nline is the emulator prediction ${\\rm E}_{D}(f(x))$ as a function of $x$, and\nthe credible\ninterval ${\\rm E}_{D}(f(x)) \\pm 3 \\sqrt{{\\rm Var}_{D}(f(x))}$ is given by the\n\\green{dotted (red)}\nlines. The true function $f(x)$ is shown as the black solid line, and the 10 model\nruns that make up the vector $D$ used to build the emulator are given as the red\npoints.\n }\n \\label{fig:1demul}\n\\end{figure}\n\nFig.~\\ref{fig:1demul} shows the 1-dimensional emulator where ${\\rm E}_{D}(f(x))$ as\na function of $x$ is given by the \\green{dashed} blue line, and the credible interval\n${\\rm E}_{D}(f(x)) \\pm 3 \\sqrt{{\\rm Var}_{D}(f(x))}$ by the \\green{dotted} red lines. We can see\nthat ${\\rm E}_{D}(f(x))$ precisely interpolates the known runs at outputs $D$, with\nzero uncertainty (as the red lines touch at these points): a desirable feature as\nhere $f(x)$ is a deterministic function. The\ncredible regions get wider the further we are from known runs, appropriately\nreflecting our lack of knowledge in these regions.\nThe true function $f(x)$ is given by the solid black line which lies within the\ncredible region for all $x$, only getting close to the boundary for $x>0.5$.\nThis demonstrates the power of an emulator: using only a small number of runs we can\nsuccessfully mimic relatively complex functions to a known accuracy, a feature that\nscales well in higher dimensions due to the chosen form of the emulator. The speed of\nBayesian emulators is also crucial\nfor global parameter searches where we may need to evaluate the emulator a huge\nnumber of times to fully explore the input space.\nNote that the\nemulator calculation is extremely fast because it only requires matrix multiplication\nfor each new $x$. 
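To make the above description concrete, the following minimal sketch (in R, the language the paper itself uses for model selection below; the code is an illustration of ours rather than part of the original analysis) implements this 1-dimensional emulator directly from equations~(\ref{eq_BLE}) and (\ref{eq_BLV}), with the prior specification given above:
\begin{verbatim}
f  <- function(x) 3 * x * sin(5 * pi * (x - 0.1) / 0.4)
xD <- seq(0.1, 0.5, length.out = 10)   # wave 1 run locations x^(j)
D  <- f(xD)                            # run data D
beta0 <- 0.1; sigma_u <- 0.6; theta <- 0.06
covfn <- function(x, xp)               # Cov(u(x), u(x'))
  sigma_u^2 * exp(-outer(x, xp, "-")^2 / theta^2)
VarD_inv <- solve(covfn(xD, xD))       # computed once, offline
emulate <- function(x) {               # adjusted expectation and variance
  K  <- covfn(x, xD)                   # Cov(f(x), D)
  ED <- drop(beta0 + K %*% VarD_inv %*% (D - beta0))
  VD <- sigma_u^2 - rowSums((K %*% VarD_inv) * K)
  list(mean = ED, sd = sqrt(pmax(VD, 0)))
}
em <- emulate(seq(0, 0.6, by = 0.005)) # predictions over the input range
\end{verbatim}
Evaluating the emulator over a grid in this way reproduces the behaviour shown in the figure: exact interpolation with zero variance at the run locations, and widening credible intervals away from them.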
The inverse ${\\rm Var}(D_i)^{-1}$ that features in\nequations~(\\ref{eq_BLE}) and (\\ref{eq_BLV}) is independent of\n$x$ (and indeed of $D_i$) and hence can be performed only once, offline and in\nadvance of even the run evaluations $D_i$.\n\n\n\n\n\\subsection{Emulating in Higher Dimensions}\\label{sec:em_high_dim}\n\nWhen emulating functions possessing high input dimension, the polynomial regression\nterms\n$\\beta_{ij} g_{ij}(x_{A_i})$ in the emulator equation~(\\ref{eq_emulator}) become more\nimportant, as they efficiently capture many of the more global features often present\nin the physical model~\\citep{Vernon2010,Vernon10_CS_rej}. Prior specifications for\nthe $\\beta_{ij}$ can be given,\nbased say on structural knowledge of the model, or on past experience running a\nfaster but simpler previous version of the model \\citep{JAC_Handbook}. However, if no\nstrong prior\nknowledge is available and the number of runs performed is reasonably high, a vague\nprior limit can be taken in the Bayes Linear update equations~(\\ref{eq_BLE}) and\n(\\ref{eq_BLV}), resulting in the adjusted expectation and variance of the\n$\\beta_{ij}$ terms tending toward their Generalised Least Squares (GLS) estimates.\nFor space filling runs, such as those from a maximin latin hypercube, the GLS\nestimates can be accurately approximated by the\ncorresponding Ordinary Least Squares (OLS) estimates, which can also be used to\nestimate\n$\\sigma_{u_i}^2$, providing further efficiency gains \\citep{Vernon2010}.\n\n\nIn addition, the choice of active input variables $x_{A_i}$ and the choice of the\nspecific regression terms $\\beta_{ij} g_{ij}(x_{A_i})$ that feature in the emulator,\ncan both be made using linear model selection techniques based on AIC or BIC\ncriteria. For example, these can be simply employed using the lm() and step()\nfunctions in R \\citep{Vernon2010,R-Core-Team:2015aa}. The use of active variables\n$x_{A_i}$ can lead to substantial\ndimensional reduction of the input space of each of the outputs, and hence convert a\nhigh dimensional\nproblem into a collection of low dimensional problems, which is often far easier to\nanalyse \\citep[see][for further discussion of this benefit]{Vernon10_CS_rej}. It is\nworth noting that reasonably accurate emulators can often be constructed just using\nsuch regression models. This can be a sensible first step\n\\cite[see][]{Yiannis_HIV_3}, before one attempts the construction of a full emulator\nof the form given in equation~\\eqref{eq_emulator}.\n\n\\subsection{Iterative History Matching via Implausibility}\n\\label{sec:HM}\n\nWe now describe the powerful iterative global search method known as History\nMatching \\citep{Craig96_Pressure,Craig97_Pressure}, which\nnaturally incorporates the use of Bayesian emulators, and which has been\nsuccessfully applied across a variety of scientific disciplines.\nIt aims to identify the set $\\mathcal{W}$ of all inputs $x$ that would give rise\nto an acceptable match between the \\textsc{galform}\\xspace outputs $f(x)$ and the corresponding vector\nof observed data $w$, and proceeds iteratively, discarding regions of input space\nwhich are deemed {\\it implausible} based on information from the emulators. 
For more\ndetail on the contents of this section see\n\\citep{Vernon2010,Vernon10_CS_rej}.\n\nFor an output $f_i(x)$ we define the implausibility measure:\n\\begin{equation}\\label{eq_imp1}\nI_i^2(x,w_i) \\;\\;=\\;\\; \\frac{({\\rm E}_{D_i}(f_i(x)) - w_i)^2}\n{ {\\rm Var}_{D_i}(f_i(x)) + \\sigma^2_{\\epsilon_i} + \\sigma^2_{e_i} }\n\\end{equation}\nwhich takes the distance between the emulator's prediction of the $i$th output\n${\\rm E}_{D_i}(f_i(x))$ and the actual observed data $w_i$ and standardises\nit with respect to the variances of the three major uncertainties: the emulator\nuncertainty ${\\rm Var}_{D_i}(f_i(x))$, the model discrepancy $\\sigma^2_{\\epsilon_i}$\nand the observation error $\\sigma^2_{e_i}$.\n\nThe least familiar of these is the model discrepancy $\\sigma^2_{\\epsilon_i}$ which\nis an upfront acknowledgement of the deficiencies of the \\textsc{galform}\\xspace model\nin terms of assumptions used, missing physics and simplifying approximations.\nIn addition to ensuring the analysis is more meaningful, this term guards against\noverfitting, and the subsequent technical and robustness problems this can cause for\na global parameter search. See\n\\cite{Kennedy01_Calibration,Brynjarsdottir:2014aa,Goldstein09_Reify} for extended\ndiscussions\non this point\\footnote{It is worthwhile noting that any analysis that does not\ninclude a model discrepancy is only meaningful given that ``the model $f(x)$ is a\n{\\it precise} match to the real Universe for some input $x$\", and all conclusions\nderived from such an analysis should be written with this conditioning statement\nattached.}.\nThe form of the implausibility comes from the ``best input approach\" which models\nthe link between the \\textsc{galform}\\xspace model evaluated at its best possible input $x^*$ and the\nreal Universe $y$ as $y=f(x^*) + \\epsilon$, where $\\epsilon$ is a random quantity\nrepresenting the model discrepancy with variance $\\sigma^2_{\\epsilon}$, and assumes\nthat the observed data $w$ is measured with uncertain error $e$ with variance\n$\\sigma^2_{e}$, such that $w=y+e$. See\n\\cite{Craig97_Pressure,Vernon2010,Vernon10_CS_rej} for further justifications and\ndiscussions.\n\nMost importantly, a large value of the implausibility $I_i(x,w_i)$ for any output\nimplies that the point $x$ is unlikely to yield an acceptable match between $f(x)$\nand $w$ were we to run the \\textsc{galform}\\xspace model there, hence $x$ is deemed {\\it implausible}\nand can be discarded from further analysis. We therefore impose cutoffs of the form\n$I_i(x,w_i) < c$ to rule out regions of input space, where the choice of $c$ is\nmotivated from Pukelsheim's 3-sigma rule\\footnote{Pukelsheim's 3-sigma rule is the\npowerful, general, but underused\nresult that states for any continuous unimodal distribution, 95\\% of the probability\nmust lie within $\\mu\\pm 3\\sigma$, regardless of its asymmetry or skew.}\n\\citep{threesigma}.\nWe can combine the implausibility measures from several outputs in various ways e.g.\n\\begin{equation}\\label{eq_maximp}\nI_M(x,w) \\;\\;=\\;\\; \\max_{i \\in Q} I_i(x,w_i)\n\\end{equation}\nwhere $Q$ represents the subset of outputs currently considered (often we will only\nemulate a small subset of outputs in early iterations). 
We may use the second or\nthird maximum implausibility instead for robustness reasons, or use multivariate\nimplausibility\nmeasures to incorporate correlations \\citep{Vernon2010,Vernon10_CS_rej}.\n\nHistory matching proceeds iteratively, discarding implausible regions of the input\nparameter space in waves.\nAt the $k$th wave we define the current set of non-implausible input points as\n$\\mathcal{W}_k$ and the set of outputs that have so far been considered for emulation\nas~$Q_k$. We proceed according to the following algorithm:\n\\begin{itemize}\n\\item[]\n\\vspace{-0.1cm}\n\\begin{enumerate}[1.]\n\\item Design and evaluate a space filling set of wave $k$ runs over the current\nnon-implausible space $\\mathcal{W}_k$. \\label{alg:1}\n\\item Check if there are informative outputs that can now be emulated accurately\n(that were difficult to emulate in previous waves) and add them to $Q_k$, to define\n$Q_{k+1}$.\n\\item Use the wave $k$ runs to construct new, more accurate emulators defined only\nover the region $\\mathcal{W}_k$ for each output in $Q_{k+1}$.\n\\item Recalculate the implausibility measures $I_i(x,w_i)$, $i \\in Q_{k+1}$, over\n$\\mathcal{W}_k$, using the new emulators.\n\\item Impose cutoffs $I_i(x,w_i) < c$ to define a new, smaller\nnon-implausible volume $\\mathcal{W}_{k+1}$ which satisfies $\\mathcal{W} \\subset\n\\mathcal{W}_{k+1} \\subset \\mathcal{W}_k$.\n\\item Unless:\n \\begin{enumerate}[A)]\n \\item the emulator variances ${\\rm Var}_{D_i}(f_i(x))$\n are now small in comparison to the other\n sources of uncertainty: $\\sigma^2_{\\epsilon_i} +\n\\sigma^2_{e_i}$,\\label{alg:6a}\n \\item the entire input space has been deemed implausible or\n \\item computational resources have been exhausted,\n \\end{enumerate}\n return to step~\\ref{alg:1}.\n\\item If \\ref{alg:6a} is true, generate a large number of acceptable runs from the\nfinal non-implausible volume $\\mathcal{W}$, using appropriate sampling for the\nscientific purpose.\\label{alg:7}\n\\end{enumerate}\n\\vspace{0.3cm}\n\\end{itemize}\nWe are then free to analyse the structure of the non-implausible volume $\\mathcal{W}$\nand the behaviour of model evaluations from different locations within it. 
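To illustrate steps 4 and 5 of the algorithm, the following minimal sketch (again in R; the variable names are ours, and the emulator outputs are assumed to have been precomputed) applies the implausibility cutoff of equation~(\ref{eq_imp1}) over a large set of candidate inputs:
\begin{verbatim}
# I_i(x, w_i), vectorised over candidate points x:
implausibility <- function(ED, VD, w, var_disc, var_obs)
  abs(ED - w) / sqrt(VD + var_disc + var_obs)
# Maximised implausibility I_M(x, w) over the emulated outputs in Q;
# I_mat has one row per candidate point and one column per output:
I_M <- function(I_mat) apply(I_mat, 1, max)
# Step 5: impose the cutoff c to define the next non-implausible volume:
non_implausible <- function(I_mat, c = 3) which(I_M(I_mat) < c)
\end{verbatim}
In practice the candidate points are a large space-filling sample of $\mathcal{W}_k$, and the retained points are used to seed the design of the wave $k+1$ runs.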
The\nhistory matching approach is powerful for several reasons:\n\\begin{itemize}\n\\item As we progress through the waves and reduce the non-implausible volume, we\nexpect the function $f(x)$ to become smoother, and hence to be more accurately\napproximated by the regression part of the emulator $\\beta_{ij} g_{ij}(x_{A_i})$\n(which is often composed of low order polynomials -- see\nequation~\\ref{eq_emulator}).\n\\item At each new wave we have a higher density of points in a smaller volume,\ntherefore the emulator's Gaussian process term $u_i(x_{A_i})$ will be more effective,\nas it depends mainly on the proximity of $x$ to the nearest runs.\n\\item In later waves the previously strongly dominant active inputs $x_{A_i}$ from\nearly waves will have had their effects curtailed, and hence it will be easier to\nselect additional active inputs, unnoticed before.\n\\item There may be several outputs that are difficult to emulate in early waves\n(often due to their erratic behaviour in scientifically uninteresting parts of the\ninput space) but\nsimple to emulate in later waves, once we have restricted the input space to a much\nsmaller and more physically realistic region.\n\\end{itemize}\nHistory matching can be viewed as the appropriate analysis suitable for model\ninvestigation, model checking and model development.\nShould one wish to perform a fully Bayesian analysis using say MCMC, history\nmatching can be used as a highly effective precursor to such a calculation in order\nto rule out vast regions of input space that would only contain extremely low\nposterior probability.\nHowever such an MCMC analysis would only be warranted assuming one is willing to\nspecify meaningful joint probability distributions over all uncertain quantities\ninvolved, in\ncontrast to only the expectations, variances and covariances required for the Bayes\nLinear history\nmatch.\n\n\n\n\n\\subsection{Application of Emulation and History Matching to \\textsc{galform}\\xspace and the GSMF}\n\n\nWe now apply the above Bayesian emulation and history matching methodology to \\textsc{galform}\\xspace\nand the\nGSMF, and generalise it to the case of multiple available observed data sets.\nWe first identify the \\textsc{galform}\\xspace model outputs $f_i(x)$ that we wish to emulate, and the\ncorresponding observed data $w_i^{(m)}$ to match them to as\n\\begin{equation}\\label{eq_f_and_w}\nf_i(x) = \\log\\phi_{i,\\text{model}} \\quad \\text{and} \\quad w_i^{(m)} =\n\\log\\phi_{i,\\text{obs}_{(m)}}\n\\end{equation}\nwhere\n\\begin{equation}\n \\phi_i = \\left.\\frac{\\dv n}{\\dv \\log M_\\star} \\right|_{M_{\\star,i}, z}\\nonumber\n\\end{equation}\nis the GSMF at the stellar mass bin $M_{\\star,i}$ for redshift $z$. Here $m$ labels\nthe choice of observed data sets we use, represented for output $i$ by\n $w_i^{(m)}$.\n{Following the discussion in \\citet{Bower2010}, we adopt a model\ndiscrepancy of 0.1 dex. This term summarises the accuracy we expect for the model\ndue to the approximations inherent in the semi-analytic method}.\n{In effect, this means that we will regard models that lie within 5\\% of the\nobserved data-point as a perfectly adequate fit, even if the quoted Poisson\nobservational errors are substantially smaller. 
This means that if a model has a\nmarginally acceptable implausibility, $\,I\sim 3$, it may be 0.3 dex away from the\nobservational data-point.\n}\n\nAs we have multiple sets of observed data for the GSMF which we wish to match to, we\nhave to make an additional decision as to how to combine these within the history\nmatching process. Here we generalise the implausibility measure of\nequation~\eqref{eq_maximp} by minimizing over the $m$\n data sets:\n\begin{equation}\label{eq_maximp2}\nI_M(x,w) \;\;=\;\; \max_{i \in Q} \{ \min_m I_i(x,w_i^{(m)}) \}\n\end{equation}\nwith the second and third maximum implausibilities defined similarly.\nThis implies that our history match search will attempt to find all inputs that lead\nto matches to {\it any} of the observed data sets, judged on an individual bin basis.\nThis is\na simple\nway of incorporating several (possibly conflicting) data sets into the history match\nthat does not involve additional assumptions or further statistical modelling,\nand which is sufficient for our current purposes.\nIt should lead to the identification of all inputs of interest, subsets of which (for\nexample those that match a specific data set, or a combined data set) can be\nsubsequently explored in further detail.\n\nThe emulators used in each wave were constructed following the techniques described\nin \S\ref{sec:emconstr}, \S\ref{sec:1demul} and specifically the high\ndimensional approaches of \S\ref{sec:em_high_dim}.\n\n\subsection{Observational datasets used}\n\label{sec:obs}\n\nFor the local Universe GSMF, we use the results of \citet{Li2009} based on SDSS and\n\citet{Baldry2012} on the GAMA survey.\n\nFor larger redshifts, we combine the results of \citet{Tomczak2014} based on the\n\textsc{zfourge} and \textsc{candels} surveys, and \citet{Muzzin2013}, based on the\n\textsc{UltraVISTA} survey.\nIn these papers, the GSMF is reported for redshift\nintervals\/bins. For simplicity, we adopt the midpoint of each redshift bin as\nthe typical redshift to be compared with the model (e.g. the GSMF obtained for\n$0.5<z<0.75$ is compared with the model at $z=0.62$).\n{Mass errors at $z>0$ redshifts were assumed to\nfollow the redshift-dependent estimate of \citet{Behroozi2013}, i.e.\n$\sigma_M(z) = \sigma_0 + \sigma_z z$, with $\sigma_0=0.07$ and $\sigma_z=0.04$.\nThese mass errors were accounted for by convolving the model GSMF with a Gaussian kernel\n(see \S\ref{sec:high_z} for a discussion).\n}\n\n\begin{table}\n\begin{center}\n\caption{\n\label{tab:cutoffs}Thresholds used for eliminating implausible regions with respect\nto the local Universe GSMF after each wave and the fraction of the initial volume in\nthe non-implausible region.}\n\begin{tabular}{ccccc}\n\hline\nWave & \mc{3}{c}{Threshold} & Fraction of the \\\n& 1$^{\rm st}$ max. 
& \\mc{1}{c}{2$^{\\rm nd}$} max.& \\mc{1}{c}{3$^{\\rm rd}$} max.&\ninitial volume \\\\\n\\hline\n1 & - & \\mc{1}{c}{3.2} & \\mc{1}{c}{2.5} & 0.2522\\\\\n2 & 4.5 & \\mc{1}{c}{3.0} & \\mc{1}{c}{2.3} & 0.0494\\\\\n3 & 3.75 & \\mc{1}{c}{2.5} & \\mc{1}{c}{2.0} & 0.0170\\\\\n4 & 3.5 & \\mc{1}{c}{2.5} & \\mc{1}{c}{2.0} & 0.0116\\\\\n5 & 3.0 & \\mc{1}{c}{2.25} & \\mc{1}{c}{2.0} & 0.0036\\\\\n6 & 2.4 & \\mc{1}{c}{2.15} & \\mc{1}{c}{1.8} & 0.0010\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Results}\n\n\n\\label{sec:results}\n\nThe parameter space exploration was conducted through successive waves of runs.\nAfter each wave, emulators were generated from its results and used to design\nthe parameter choices for the next wave, discarding a vast, specifically\nimplausible, region of the parameter space. Each wave was designed using a\nLatin Hypercube Sampling of $5000$ points of the non-implausible region of the\nparameter space (full details are given in appendix \\ref{ap:emu_details}).\n\n\\subsection{Matching the local galaxy stellar mass function}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{.\/hist.pdf}\n \\includegraphics[width=\\columnwidth]{.\/hist_final.pdf}\n \\caption{\n Histograms of the model implausibilities (with respect to the local\n Universe only) obtained at each wave.\n In the top panel, exploratory waves 2-6 are shown.\n Each new wave reduces the tail of very implausible models.\n However, the increase in the number of models with $I<2$ occurs only slowly\n after each wave.\n In the bottom panel, waves 7a and 7b are also shown (please, note the\n different range in $I$). Wave 7a was designed specifically to obtain many\n plausible runs, instead of uniformly covering the non-implausible\nparameter space, and Wave 7b (discussed in \\S\\ref{sec:high_z}) takes into account\n the constraints by high redshift data.\n }\n \\label{fig:wall}\n\\end{figure}\n\n\\label{sec:z0}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{.\/smf_z0.pdf}\n \\caption{\n Local galaxy stellar mass function.\n The coloured curves show all the runs with implausibility $I<3.5$, with\n different shades showing different implausibilities (see colour-bar in the\n plot).\n The surrounding light grey curves correspond to the initial set of runs\n (wave 1).\n Data-points show the GSMF data obtained by \\citet{Li2009} and\n \\citet{Baldry2012}.\n {Note that a 0.1 dex model discrepancy was assumed (see text for\n details).}\n }\n \\label{fig:smf_z0}\n\\end{figure}\n\nThere were initially 6 waves of runs, where the implausibilities were computed with\nrespect to the local Universe GSMF data only.\nTable \\ref{tab:cutoffs} shows the implausibility cut-off thresholds applied,\nwhich decreased after each wave as we build more trustworthy emulators.\nTable \\ref{tab:cutoffs} also shows the fraction of the\ninitial volume which corresponds to the region classified as non-implausible after\neach wave.\nTo compute the volumes, the parameters were rescaled following equations\n\\eqref{eq:scale_lin} and \\eqref{eq:scale_log} -- i.e. lengths associated with the\nrange of each parameter were considered equivalent.\nDespite making very conservative choices\nfor the thresholds, there is a strong decrease in the volume after each wave and\nafter wave 6 only $10^{-3}$ of the original volume was classified as\nnon-implausible.\nIn Fig. 
\ref{fig:wall} the evolution of the distribution of the\nimplausibilities can be followed: after each wave the number of highly\ndiscrepant models is strongly reduced, but the number of acceptable models,\nwith $I<3$, increases only modestly.\n\n\begin{figure*}\n\centering\n\includegraphics[width=\textwidth]{smf_grid_z0.pdf}\n\caption{\n The GSMF of selected models colour-coded by their\n implausibility, $I$ (see colour bar),\n calculated \emph{only} with respect to the GSMF\n data at the local Universe, from \citet{Baldry2012} and \citet{Li2009}.\n Only runs with $I<3.5$ are shown.\n The high redshift observational data\n shown were obtained from \citet{Muzzin2013} and \citet{Tomczak2014}.\n \label{fig:smf_grid_z0}\n The models selected solely by their good match to $z=0$ data produce\n poor agreement with higher redshift data.\n }\n\end{figure*}\n\n\begin{figure*}\n\centering\n\includegraphics[width=\textwidth]{smf_grid_z.pdf}\n\caption{\label{fig:smf_grid_high}\n The GSMF of selected models colour-coded by their\n implausibility, $I$, calculated \emph{simultaneously} with respect to\n the GSMF at\n $z=0,\,0.35,\,0.62,\,0.75,\,0.88,\,1.12,\,1.25$ and $1.43$.\n Only runs with $I<3.5$ are shown.\n The high redshift observational data\n shown were obtained from \citet{Muzzin2013} and \citet{Tomczak2014}\n \green{and local Universe data were obtained from \citet{Baldry2012}\n and \citet{Li2009}.}\n \textsc{galform}\xspace models generally lead to a GSMF with a high mass end which is too\n shallow\n at $z=0$ and too steep at higher redshifts.\n }\n\end{figure*}\n\nAfter the 6$^{\rm th}$ wave the emulator variance at each point was already\nsmaller than or equal to the other uncertainties,\nindicating that no further refinement was possible (condition \ref{alg:6a}\nin the algorithm described in \S\ref{sec:HM}). A final wave, 7a,\nwas then designed, this time using the emulator information to aim for\nthe best possible matches to the GSMF (i.e. step \ref{alg:7} in the algorithm;\nin contrast to the previous waves where the non-implausible space was uniformly\nsampled to ensure an optimum input for the next wave).\n\nIn Fig. \ref{fig:smf_z0}, the final results of the history matching are shown,\ntogether with our observational constraints.\n{Error bars in this and other figures show only the quoted\nobservational errors, and do not include the model discrepancy term, so that the\noverall quality of the fits can be judged from the figures.\nThe purpose of the model discrepancy is to avoid rejecting models when the\nobservational errors become very small.}\nAll the models in our library with implausibility $I<3.5$ are\nshown {in Fig.~\ref{fig:smf_z0}}, with the curves\ncolour-coded by the implausibility.\n{The width of the lightest shade, corresponding to $I\leq 1.0$,\nallows one to visualize the effect of the adopted model discrepancy.\n}\nAlso shown are the wave 1 runs given as the grey curves, many of which\nwere far from the observed data.\nThe impact of the history match in terms of the removal of substantial amounts of\nimplausible regions of the parameter space can be seen by comparing the coloured\nregion with the grey curves.\n\nWhile there are models with $I\sim2.5-3$ which produce an excess in the number of\nsmall\nmass galaxies, the opposite (i.e. $\phi$ smaller than the observations at the low\nmass end) is very rare. 
A similar behaviour is also present at the high mass end.\nThus, acceptable ($I\lesssim 3$) models may display over-abundances of very small\n[$\log(M_\star\/\,\text{M}_\odot)\lesssim 8.5$] or very large\n[$\log(M_\star\/\,\text{M}_\odot)\gtrsim 11.5$] masses, but there are no acceptable models\nwith significant under-abundances in these ranges.\n\nOnce the locus of models with good fits to the local Universe GSMF was found, we\nexamined how well these models performed with respect to high redshift data. In\nFig. \ref{fig:smf_grid_z0} the GSMF output by the models shown in Fig.\n\ref{fig:smf_z0} is now compared with higher redshift data.\nOne finds that the models selected only by their ability to\nreproduce the local Universe GSMF data under-represent the abundance of high mass\ngalaxies at higher redshifts while simultaneously generating an excessive number of\ngalaxies of lower masses.\n\nIn the following section we will examine the parameter space and show that the vast\nmajority of acceptable models have $\beta_{0,\text{burst}} > \beta_{0,\text{disc}}$ and so lie in a region\nof parameter space not available to the original model.\n\n\n\subsection{Constraining models with higher redshift data}\n\label{sec:high_z}\n\n\begin{figure*}\n\centering\n\includegraphics{output_pair.pdf}\n\caption{\nEach panel compares the output of the GSMF for different mass bins and\/or redshifts.\nWaves 2, 4, 6, 7a, 7b are shown colour-coded as indicated.\nThe observational constraints of previous figures are shown as blue shades.\nThe light grey shade was added to guide the eye, indicating where the GSMF values on\nthe vertical axis are smaller than on the horizontal axis, and the dashed\ndiagonal line indicates the case where the GSMF for the two is the same.\nPanels below the diagonal use the same estimates for the errors in the mass\ndetermination as \citet{Behroozi2013}: $\sigma_M = 0.07 + 0.04\,z$ (see text for\ndetails). For comparison, in the panels above the diagonal we double the mass error.\nAfter successive waves there is improvement in the\nagreement with the low redshift data and with the low-mass-end of the high\nredshift data; however, for the high-mass end, there is still tension at higher\nredshifts. The increase in the mass error does not remove the tension.\nThis tension originates from the data being consistent with small or no evolution\nfor bins of large mass.\n}\label{fig:output_pair}\n\end{figure*}\n\nTo investigate if, and to what extent, the present model could reproduce the\nevolution of the GSMF, a new wave of runs was generated from the wave 6 emulator\n(wave 7b), this time computing the implausibilities simultaneously with respect to\nthe GSMF data at higher redshifts, up to $z=1.75$.\n\nAfter just a single additional wave, the emulation technique indicated that no extra\nrefinement was likely: the emulator variances became smaller than the other\nuncertainties, corresponding to step 6A in the algorithm of \S\ref{sec:HM}, suggesting that it\nwould be highly unlikely to find a locus of more plausible runs within any sub-volume\nof the parameter space. A new (and final) wave was then designed, to produce runs\nwhich provided a good match to the GSMF at those redshifts (corresponding to step 7).\nThis set of runs was deliberately focused towards the regions of lowest emulator\nimplausibility, where we would now expect the best matches to occur. 
This is a good\ntechnique for exploring the correlations between parameter sets; however, it is\nimportant to note that the resulting design of runs would not be a suitable basis for\nthe construction of a statistical emulator.\n\nIn Fig. \ref{fig:smf_grid_high} we show the evolution of the GSMF for all the runs\n(of all waves) with implausibility $I<3.5$ with respect to redshifts up to\n$z=1.12$.\nThe adoption of higher redshift constraints leads to tension with the local Universe\ndata: the least implausible models produce a GSMF whose high mass end is too shallow\nat $z=0$ and too steep at the other redshifts. At the low mass end, there is an\nexcess of $\lesssim 10^{10} \,\text{M}_\odot$ galaxies at higher redshift and a small deficit\nof them in the local Universe.\nThis is consistent with behaviour seen in the runs constrained at $z=0$ only.\n{\green{It should be noted that, despite the tension,\nthe level of agreement achieved is still better than what is found in most published models, and is not dissimilar from what is found by \citet{Henriques2013,Henriques2015}.}}\n\nThis tension becomes clearer when the results for specific mass bins of the GSMF\nare compared. This is shown in panels below the diagonal in\nFig.~\ref{fig:output_pair}, for two mass bins, $\log(m\/\,\text{M}_\odot h)=9.5$ and $11.2$,\nredshifts $z=0.0$, $0.35$ and $0.75$. The observational constraints are shown as\nblue bands. By showing the constraints in pairs, we gain insight into the conflicting\npressures imposed on the model.\nInitially, successive waves of runs (shown by colours from red to green\green{,\nas indicated in the figure}) are increasingly\nfocused towards the point at which the two bands intersect.\nHowever, it becomes increasingly evident that some constraint pairs cannot be matched\nby the model and the successive waves lead to no improvement. For example, the panel\nshowing $\log(m\/\,\text{M}_\odot h)=11.2$ at $z=0$ and $z=0.35$ has a strong diagonal line above\nwhich\nthe model is never able to pass. The same behaviour is found when comparing the high\nmass end of the GSMF at $z=0$ with other redshifts. For the constraints at\n$\log(m\/\,\text{M}_\odot h)=9.5$,\nthe outputs of all models are tightly correlated when comparing between redshifts.\nComparison\nbetween different mass bins appears to be less constraining.\n\nOne can best interpret Fig.~\ref{fig:output_pair} by comparing the models to a\nnon-evolving GSMF. This is shown by the dashed diagonal line in panels that\ncompare the same mass bins. The\ngrey-shaded side of the dashed diagonal line shows the case where the number of\ngalaxies decreases with time. It can be seen that the observational data used lead\nto no\nevolution (or even decrease) in the number of $10^{11.2}h^{-1}\,\text{M}_\odot$ galaxies if\n$z=0$ is compared to other datasets. This makes it clear why it is not possible to\nfind models in the exact target region. Since \textsc{galform}\xspace is inherently hierarchical, it is\ndifficult\nto conceive of a mechanism which could lead to a significant decrease in the\nabundance of massive galaxies with time. This would only be possible if\n$10^{11.2}h^{-1}\,\text{M}_\odot$ galaxies\nwere to grow in mass (and so leave the mass bin) faster than lower mass (and more\nabundant) galaxies were able to grow and move into the bin. 
Clearly, the situation\nnever arises in the \textsc{galform}\xspace model and the only way of obtaining points in the grey\nregion for the high\nmass bin panels is through the distortion caused by errors in the galaxy mass\ndetermination, as we will discuss below.\n\nSystematic errors in the determination of galaxy masses (`mass errors' for short)\narising from the modelling of the star formation history, choice of dust model and\nthe choice of IMF can significantly affect the shape of the GSMF which is inferred\nfrom the observations \citep{Mitchell2013}.\n\green{As mentioned in \S\ref{sec:obs}, mass errors were accounted for by convolving\nthe model GSMF with a Gaussian kernel.\n}\nThe main effect of such convolution is to make the GSMF appear less steep at higher\nredshifts.\nThis raises the question of whether underestimated mass errors could explain the\ndifficulty in simultaneously matching the high mass end of the GSMF at different\nredshifts.\n\n\bigskip\n\nIn the panels above the diagonal of Fig.~\ref{fig:output_pair}, we show\nthe consequences of \green{doubling the mass error -- i.e.}\nconsidering $\sigma_0=0.14$ and $\sigma_z=0.08$.\nThis has the effect of loosening the implausibility contours: the\nblue regions are the same as those below\nthe diagonal. The effect of these much increased mass errors is to\nallow models near to the ``no evolution'' region, alleviating the tension by allowing\nthe corrected \textsc{galform}\xspace\nresults to get closer to the target region. However, even considering these mass\nerrors, some tension still persists.\n\n\subsection{Plausible models subspace}\n\label{sec:subspace}\n\begin{figure*}\n\includegraphics[width=\textwidth]{.\/cabral_implaus_report_highz.pdf}\n\caption{The panels show two dimensional projections of the plausible\n parameter space. Each circle\n represents a \textsc{galform}\xspace run and is colour coded by its implausibility (as\nindicated by the colour bar); lower implausibility runs are plotted on\n top, facilitating the visualisation of their clustering in the projected\n space; only runs with $I<3.5$ are shown.\n In panels above the diagonal, the implausibility is computed with\n respect to the observed local GSMF only.\n In panels below the diagonal, the implausibility is computed with\n respect to the GSMF at redshifts\n $z=0.0,\,0.35,\,0.62,\,0.75,\,0.88,\,1.12,\,1.25$ and $1.43$.\n Note that the axes are labelled consistently above and below the diagonal. A\npanel below the diagonal should be rotated and inverted in order to compare it to the\nequivalent panel above the diagonal.\n This figure summarizes the main constraints imposed by the GSMF and its\n evolution on the \textsc{galform}\xspace parameters.\n }\n\label{fig:implaus_space}\n\end{figure*}\n\nWe now examine the main properties of the subspace of plausible models,\nwhich we define as models having implausibility $I<3.5$, a\nconservative threshold.\n\nWe begin by considering the models that provide a plausible match to the\nGSMF at $z=0$. The distribution of these models is shown above the diagonal\nin Fig.~\ref{fig:implaus_space}. In each panel we show the plausible models\nprojected\ninto the two dimensional space of a pair of variables. The models are coloured by\nimplausibility and the lowest implausibility runs are plotted last to ensure they are\nvisible. 
This method of plotting also gives\na good impression of the ``optical depth'' of the parameter region in the hidden\nparameters of each panel.\nWe only show the most interesting variables in this plot; the panels for other\nvariable pairs are less informative scatter plots.\n\nThe most constrained parameters are: the disk wind parameters, $\alpha_\text{hot}$ and\n$\beta_{0,\text{disc}}$, the normalisation of the star formation law, $\nu_{0,\text{sf}}$, the AGN feedback\nparameter, $\alpha_\text{cool}$, and the disk stability threshold, $f_{\rm stab}$. Several\nparameter degeneracies can be picked out in the figure.\nFor example, values of $\alpha_\text{hot}$ are strongly\ncorrelated with $\beta_{0,\text{disc}}$, with larger $\beta_{0,\text{disc}}$ being compensated by a\nsmaller $\alpha_\text{hot}$: i.e., the higher mass loading normalisation is compensated by a\nweaker mass dependence so that the level of feedback is similar in low-mass galaxies.\n\n\nOther parameters are more weakly constrained, and it is possible to find plausible\nmodels over most of the range of the parameter considered. The parameter\n$\alpha_\text{reheat}$ is a good example. In this case, smaller values of $\alpha_\text{reheat}$ can\nbe compensated by reductions in $\beta_{0,\text{disc}}$.\nThis makes physical sense. The time-scale on which gas\nis re-incorporated into the halo after ejection depends on $\alpha_\text{reheat}^{-1}$\n(equation~\ref{eq:reheat}), so that increases in the time-scale can be offset by an\noverall lower mass loading of the disk wind \citep{Mitchell2016}.\n\nOne surprising feature is that the normalization of the mass loading associated with\nstarburst galaxies, $\beta_{0,\text{burst}}$ (see \S\ref{sec:SN_feedback}), is weakly constrained.\nAlthough the best models (and also the greatest number of models) have $\beta_{0,\text{burst}} >\n20$, entirely plausible models can be found with much\nsmaller values. This is presumably because the impact of the large values of\n$\beta_{0,\text{burst}}$ can be offset\nby adjusting the values of other parameters. The pairs plot does not, however, reveal\nan obvious\ninteraction with another individual parameter. In \S\ref{sec:pca}, we will use a\nprincipal\ncomponent method to try to isolate simpler interactions between parameter\ncombinations, and we explore the physical interpretation there.\n\nThe panels below the diagonal line show the models that generate plausible fits\nto the GSMF\nover the redshift range $z=0$ to $1.43$. A panel below the diagonal must be rotated\nand inverted in order to compare it to the equivalent panel above the diagonal. As we\nhave already discussed, this is a stringent requirement, and even the best models\nhave $I>2$. The volume of the parameter space within which plausible models can be\nfound is significantly reduced compared to the situation if only the $z=0$\nimplausibility is considered.\nThe plausible range of the parameters $\alpha_\text{reheat}$, $\alpha_\text{cool}$ and $\nu_{0,\text{sf}}$ is\nparticularly affected.\nFor example, the addition of the high redshift GSMF excludes very long gas cycling\ntime-scales (and thus\nsmall values of $\alpha_\text{reheat}$).\n\nPlotting the data in this way does not, however, expose any new correlations between\nparameters, or make\nit easy to appreciate the physical differences in the model that result in the very\ndifferent behaviour\nat high redshift that can be seen by comparing Figs. 
\ref{fig:smf_grid_z0} and\n\ref{fig:smf_grid_high}.\nIn order to make it easier to identify these differences, we will analyse the\ndistribution of the plausible models in the Principal Component Analysis (PCA) space.\n This allows us to better identify the critical parameter combinations that are\npicked out by the data. We have already noted that several parameters show\nsignificant (anti-)correlation, and the PCA analysis will identify the\nmost important relations.\n\nOne of the motivations for undertaking a full parameter space exploration is the\npossibility of the existence of multiple disconnected implausibility minima, which\nwould be unlikely to be found in the `traditional' trial-and-error approach to\nchoosing the parameters. Nevertheless, we find that the locus of acceptable \textsc{galform}\xspace runs\nis connected and there are no signs of multiple minima or other complex\nshapes. Because of this, the distribution of plausible models is particularly\namenable to the PCA method.\n\n\n\subsection{Principal component analysis}\n\label{sec:pca}\n\nIn order to obtain greater insight into the constraints imposed by the GSMF, and in\nparticular the constraints imposed by the higher redshift data, we performed a\nprincipal component analysis (PCA)\non the volume of the input parameter space containing\nruns with $I<3.4$ in all the datasets at\n$z=0,\,0.35,\,0.62,\,0.75,\,0.88,\,1.12,\,1.25$ and\n $1.43$, giving a set of 508 runs in total.\nThe PCA generates a new set of 20 orthogonal variables defined as the\neigenvectors of the\ncovariance matrix formed from the input parameter locations of the 508 runs, ordered\nby size of eigenvalue.\nTherefore the first new variable (Var~1) gives the direction which has the largest\nvariance in the input space,\nwhile the last (Var~20) gives the direction with the smallest variance. Usually, PCA\nis applied to find the directions with\nthe largest variance, but here we are precisely interested in the opposite: we wish\nto learn about those directions\nin input parameter space that have been most constrained by the observed data. This\nanalysis allows the examination of the\nlocation of acceptable runs in the rotated (and translated) PCA space, to identify\npossible hidden features, and the\ntransformation of the (approximately) orthogonal constraints observed in the PCA\nspace back on to the original parameters\nto aid physical interpretation.\nFor example, acceptable model runs all have similar values for Var~20, Var~19, etc.,\nand this can be inverted to express the dependencies of the variables on one\nanother. It is important to note that the precise components of the PCA variables\ndepend on their original range (and whether the variables are normalised on to a log\nor linear scale).\nThis can be viewed in a Bayesian sense, in that we are quantifying the increase in\nknowledge about the values of the variables relative to our prior knowledge. 
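As a concrete illustration, such a decomposition can be obtained along the following lines (an R sketch in our notation, where X is assumed to hold the rescaled input locations of the 508 acceptable runs):
\begin{verbatim}
pca <- prcomp(X, center = TRUE)          # X: 508 x 20 rescaled inputs
ord <- order(pca$sdev)                   # sort directions by std. dev.
constrained <- pca$rotation[, ord[1:6]]  # loads of Var 20, 19, ..., 15
pca$sdev[ord]                            # how tightly each is constrained
\end{verbatim}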
It is\nalso important to bear in mind that variables with similar variance are degenerate,\nand that alternative combinations of them will describe the distribution of the data\nsimilarly well, but may have a simpler physical interpretation.\n\n\begin{figure}\n\includegraphics{.\/PCA_bars_highz.pdf}\n\caption{Summary of results of the principal component analysis.\nThe 6 most constrained components of the region with\n$I<3.4$ with respect to $z=0,\,0.35,\,0.62,\,0.75,\,0.88,\,1.12,\,1.25$ and\n$1.43$ (which contains 508 models).\nThe bars show the absolute values of the PCA loads associated with each\nscaled parameter (only parameters with non-negligible loads are shown).\nParameters with larger loads\n($>0.3$)\nare drawn in red\n\green{and have their names and loads written on the top of the bars.}\nBright (dark)\ncolours show variables with positive (negative) loads.\n}\n\label{fig:pca}\n\end{figure}\n\nThe resulting PCA variables (and the centroid of the distribution) are listed in\nAppendix \ref{ap:fullPCA}.\nThe standard deviation in the directions defined by Var~20 and\nVar~19 is extremely small (less than 0.1 relative to the prior distribution of\n$\pm1$). Var~18 and Var~17 are also significantly constrained (std. dev. less than\n0.22). The constraints on the other variables are much less significant: Var~14, 15\nand 16 all have std. dev. $\sim 0.4$. This gives us a quantitative measure of the\ninformation content of the GSMF relative to the freedoms of the model.\n\nThe components of the 6 most constrained variables are shown in Fig.~\ref{fig:pca}.\nWe begin by considering the strongly constrained components Var~19 and Var~20. The\nvariance of these two components is similar and so we should consider them together.\nAs shown by the colouring of the histogram, Var~19 is dominated by $\beta_{0,\text{disc}}$ and\n$\alpha_\text{cool}$, with a smaller contribution from $\alpha_\text{reheat}$. Qualitatively, this\nsimply confirms that the break of the GSMF is controlled by competition between AGN\nand stellar feedback;\nstronger winds from disks in $200\,\text{km}\,\text{s}^{-1}$ galaxies (i.e., larger\n $\beta_{0,\text{disc}}$, equation~\ref{eq:beta}), or a longer re-incorporation\ntime-scale (i.e.,\nsmaller $\alpha_\text{reheat}$, equation~\ref{eq:reheat}), need to be compensated by an\nincrease in the halo mass at which AGN become effective (i.e., smaller $\alpha_\text{cool}$,\nsince $t_\text{cool}\/t_\text{ff}(r_\text{cool})$ increases with halo mass,\nequation~\ref{eq:tcool}). As well as providing qualitative insight, this can be\ntranslated\ninto quantitative constraints on the input parameters. To do this we neglect the\ndependence on parameters with small loads ($<0.3$, shown in blue in\nFig.~\ref{fig:pca}) and assume that they have values close to the centroid\nof the PCA expansion. Using superscripts to denote that this relation applies to\nthe rescaled variables (given by equations \ref{eq:scale_lin} and\n\ref{eq:scale_log}),\nthe constraint can then be simplified to:\n\begin{align}\n\label{eq:var19}\n|\text{Var}\,19|\, =&\,\,\n|\,-0.669(\beta_{0,\text{disc}}^{(s)}+0.464)\n -0.576(\alpha_\text{cool}^{(l)}-0.065) \nonumber\\\n & +0.356(\alpha_\text{reheat}^{(s)}-0.462)\n \,|\,\, \lesssim 0.095.\n\end{align}\n\nVar~20 is mainly composed of $\alpha_\text{cool}$ (the AGN feedback parameter), $\alpha_\text{hot}$\nand $\beta_{0,\text{disc}}$ (the quiescent feedback parameters). 
Eliminating variables with small\nweight, we arrive at the following inequality:\n\begin{align}\n|\text{Var}\,20|\, =& \,\,|\,\n +0.401(\beta_{0,\text{disc}}^{(s)}+0.464)\n +0.583(\alpha_\text{hot}^{(s)}-0.673)\nonumber\\\n &-0.634(\alpha_\text{cool}^{(l)}-0.065)\n \,|\,\, \lesssim \,0.071.\n\end{align}\nPhysically, this relation tells us that if we pick the disk feedback parameters\n$\alpha_\text{hot}$ and $\beta_{0,\text{disc}}$, the AGN feedback parameter must follow from the equality.\nIncreases in $\alpha_\text{hot}$ and\/or $\beta_{0,\text{disc}}$ (making supernova-driven feedback stronger) need\nto be compensated by increases in $\alpha_\text{cool}$ (making AGN feedback effective only in\nhigher mass haloes). Since Var~19\nalready determines $\alpha_\text{cool}$, it is more useful to write the constraint as\n(neglecting small weights):\n\begin{align}\n|\text{Var}\,20|\, \approx& \,\,|\,\n +1.137(\beta_{0,\text{disc}}^{(s)}+0.464) -0.391(\alpha_\text{reheat}^{(s)}-0.462) \nonumber\\\n &+0.583(\alpha_\text{hot}^{(s)}-0.673)\n \,|\,\, \lesssim \,0.175,\n\label{eq:approx}\n\end{align}\nwhich expresses the requirement that a given choice of $\beta_{0,\text{disc}}$ (and\n$\alpha_\text{reheat}$) parameters needs to be balanced by a suitable choice of circular\nvelocity dependence of supernova feedback, $\alpha_\text{hot}$.\n\nThe next two components, Var~18 and Var~17, have significantly larger variances\n($\sigma=0.174$ and $0.217$, respectively). Var~17 is almost completely determined\nby $f_{\rm stab}$, so that successful models require a narrow range of the stability\nparameter, almost independent of the other variables.\n\begin{equation}\n\label{eq:var17}\n|\text{Var}\,17|\, =\,\,\n|\,-0.931(f_{\rm stab}^{(s)}+0.362)\n \,\,\,|\,\, \lesssim 0.217\,.\n\end{equation}\nVar~18 relates the star formation\nefficiency $\nu_{0,\text{sf}}$ to $\alpha_\text{hot}$, the halo mass\ndependence of feedback (which in turn relates to the choice of feedback parameters\n$\beta_{0,\text{disc}}$ and $\alpha_\text{reheat}$, see equation~\ref{eq:approx}):\n\begin{align}\n\label{eq:var18}\n|\text{Var}\,18|\, &= \,\,\nonumber\n|\,\,0.613(\nu_{0,\text{sf}}^{(s)}+0.456)\\\n &-0.604(\alpha_\text{hot}^{(s)}-0.673)\n \,\,\,|\,\, \lesssim 0.174.\n\end{align}\nIncreasing the strength of feedback in small galaxies (greater $\alpha_\text{hot}$) requires\nthat star formation is made more efficient to compensate\n\green{(i.e. by increasing star formation in higher mass galaxies, thus maintaining\nthe total amount of stars at low $z$)}.\n\nThe remaining variables are relatively weakly constrained, but have similar variance.\nThey provide additional constraints on the disk and AGN feedback parameters\n($\alpha_\text{reheat}$, $\alpha_\text{cool}$, $\alpha_\text{hot}$ and $\beta_{0,\text{disc}}$) and the star formation\nlaw ($\nu_{0,\text{sf}}$). 
Although they are weakly constrained, these relations play an\nimportant role in determining whether models successfully match the higher redshift\nGSMF data as well as the $z=0$ GSMF, as we will show below.\n\n\subsection{Effect of GSMF constraints in PCA space}\n\label{sec:pca_comparison}\n\n\begin{figure*}\n\includegraphics[width=\textwidth]{.\/cabral_implaus_report_pca.pdf}\n\caption{A comparison of runs that provide plausible fits to the $z<1.43$ GSMF\ndatasets (below the diagonal), and those that provide a plausible description of the\n$z=0$ GSMF, but a very implausible match to the high-z data (above the diagonal). We\nshow the comparison in PCA space, with individual panels showing two dimensional\nprojections. The PCA variables are defined using the set of plausible $z<1.43$ GSMF\ndatasets. We show the 6 most constrained variables (see text for discussion). Each\ncircle represents a \textsc{galform}\xspace run and is colour coded by its implausibility (as indicated\nby the colour bar); lower implausibility runs are plotted on top to facilitate the\nvisualisation of their clustering in the projected space. To facilitate comparison\nof the runs above and below the diagonal, we show the full set of runs with plausible\nfits\nto the $z=0$ GSMF as the underlying grey points.\n }\n\label{fig:pca_pairs}\n\end{figure*}\n\nIn order to better understand why some runs generate a plausible match to the\n$z<1.43$ GSMF (as well as that at $z=0$) while others do not, we select the 5\ncomponents with least variance and rotate the distribution of the full set of runs\nwith plausible $z=0$ matches into this space. Note that the variables are defined using the\nplausible $z<1.43$ GSMF runs, but we can use the same rotation to examine the\ndistribution of any set of runs. We show projections into pairs of these variables in\nFig.~\ref{fig:pca_pairs}. Below the diagonal, we show the runs selected on the basis\nof the full redshift range of GSMF data (as in Fig.~\ref{fig:implaus_space}). The\ncolouring, and plotting order, of points is the same as in the previous figures.\nAbove the diagonal, we show the set of runs that provide a good match to the $z=0$\nGSMF, but a very implausible match to the full $z<1.43$ data ($I>6$). We\nadd the underlying grey points to show the distribution of the runs giving plausible\nfits to the $z=0$ GSMF (regardless of their $z<1.43$ implausibility) in order to make\nit simpler to compare with panels above and below the diagonal.\n\nThe location of the runs in the strongly constrained variables Var~19 and Var~20\nhardly changes.\nThese strong selection rules seem to primarily select runs with a good match to the\n$z=0$ GSMF, and are not particularly important in determining whether a run also\nmatches the higher redshift data or not. Var~15, 16 and 17, however, show systematic\nshifts above and below the diagonal, showing that it is these secondary\nrelationships between the feedback variables and the disk stability parameters that\nare critical in matching the evolution of the mass function. 
In particular,\nwe recall that Var~17 is almost exclusively dependent on the disk stability\ncriterion: runs which match the $z=0$ GSMF but not the higher redshift data tend to\nhave higher values of Var~17, and thus lower values of $f_{\rm stab}$ which tends to make\ndisks more unstable at low redshift.\n\green{Therefore, when higher redshift data are considered, models where\ninstabilities are mostly present at higher redshifts are preferred.\nVar~15 and 16 also show shifts, however, indicating that the\nre-incorporation time-scale (i.e., $\alpha_\text{reheat}$) and the\nstrength of disk feedback also play an important role. In particular, there\nis a significant shift in the median value of Var~15 towards smaller values\nwhen higher redshift data are considered,\nwhich implies, simultaneously, an increase in $\beta_{0,\text{disc}}$ and a decrease in both\n$\alpha_\text{hot}$ and $\nu_{0,\text{sf}}$. The combined effect is to reduce the efficiency of star formation in galaxy disks.\n}\n\n\n\n\subsection{The star formation history of the Universe}\n\nIn this paper we have deliberately focused on the GSMF. This encodes the star\nformation history of the Universe in the fossil record of the stars that have been\nformed. It is nevertheless of interest to examine the star formation histories of\nthe models that have been selected on this basis.\nFurthermore, it is interesting to separate models in which the mass loading in\nstarbursts, $\beta_{0,\text{burst}}$, is comparable to that during quiescent star formation\n($\beta_{0,\text{disc}}$).\nFor simplicity, previous versions of \textsc{galform}\xspace have assumed that the parameters for the\nnormalization of the mass loading in quiescent discs, $\beta_{0,\text{disc}}$, and starbursts,\n$\beta_{0,\text{burst}}$, were equal. By relaxing this assumption in this work, we found in\n\S\ref{sec:subspace} that a larger $\beta_{0,\text{burst}}$ is favoured.\nWhile it is possible to find\nplausible models for which\n$\beta_{0,\text{burst}}\sim \beta_{0,\text{disc}}$, we found that most of the volume\nof the plausible parameter space (and the most plausible runs) has\n$\beta_{0,\text{burst}} \gg \beta_{0,\text{disc}}$.\n\green{Since starbursts are more frequent at earlier times, it is worth noting that\n$\beta_{0,\text{burst}} > \beta_{0,\text{disc}}$ can lead to stronger supernova feedback at high redshift.\n}\n\n\n\begin{figure}\n\includegraphics[width=\columnwidth]{SFH_large_betaburst.pdf}\n\includegraphics[width=\columnwidth]{SFH_small_betaburst.pdf}\n\includegraphics[width=\columnwidth]{dsfrd_dz_burst.pdf}\n\caption{Panels (a) and (b) show the cosmic star formation history (or\n comoving star formation rate density, SFRD), of\n runs with $I<3.5$ with respect to the redshifts\n $z=0.0,\,0.35,\,0.62,\,0.75,\,0.88,\,1.12$ and $1.25$. 
The\n colours correspond to their implausibilities as indicated.\n Observational data are from\n \citet{Rodighiero2010,Karim2011,Cucciati2012}, and \citet{Burgarella2013}.\n Panel (a) shows only models with $\beta_{0,\text{burst}}>2\,\beta_{0,\text{disc}}$ while\n panel (b) shows the case $\beta_{0,\text{burst}} \leq \beta_{0,\text{disc}}$.\n Panel (c) highlights the slope as a function of the $\beta_{0,\text{burst}}\/\beta_{0,\text{disc}}$\n ratio with observational constraints shown as shaded areas (same colours as\n previous panels).\n While runs with a larger $\beta_{0,\text{burst}}\/\beta_{0,\text{disc}}$ display a\n qualitatively better fit to the\n SFRD, they fail to produce the strong increase in SFRD with redshift\n between $z=0.5$ and $z=1$, despite providing a better match to the GSMF\n evolution at the same redshift interval (as can be seen from the colours).}\n\label{fig:SFH}\n\end{figure}\n\nIn Fig.~\ref{fig:SFH} we show the evolution of the cosmic star formation rate density\n(SFRD) for runs with $\beta_{0,\text{burst}} > 2\,\beta_{0,\text{disc}}$ (upper panel) and for\n$\beta_{0,\text{burst}} \leq \beta_{0,\text{disc}}$ (middle panel), in both cases selecting only\n``acceptable'' runs,\nwith $I<3.5$ when conditioned on the full range of GSMF data. A selection of\nobservational data are shown as coloured points. Runs with a larger\n$\beta_{0,\text{burst}}\/\beta_{0,\text{disc}}$ ratio match the observed SFRD well at low\nredshifts ($z\leq 0.5$),\nbut fail to reproduce the steep rise in SFRD with redshift in the interval\n$0.5<z<1$.\n\n\section{Conclusions}\n\nThe observational data are consistent with little or no evolution in the abundance\nof massive galaxies between $z>0.35$ and $z=0.0$, compared to the model in which galaxies cannot avoid\ngrowing in mass. This tension would still be present even if mass errors had been\nunderestimated by a factor of 2.\n\nIn order to better understand the dimensionality and most important variables of\nthe parameter space, we performed a PCA of the non-implausible volume of the\nparameter space (constrained using the full range of redshifts,\nFig.~\ref{fig:pca_pairs}). We show that it is possible to write approximate\nrelations between the parameters, expressing conditions which need to be satisfied\nin order to obtain a model with an acceptable match to the GSMF. Two principal\ncomponents (i.e. 2 directions in the parameter space) contain most of the\ninformation about the basic shape of the GSMF, and these are mainly combinations of\nthe parameters $\alpha_\text{cool}$, $\alpha_\text{hot}$, $\beta_{0,\text{disc}}$, $\alpha_\text{reheat}$, i.e. 
the\nparameters controlling feedback processes.\nThe parameters $\\nu_{0,\\text{sf}}$, $f_{\\rm stab}$ are also significantly constrained compared to\ntheir initial values.\n\nThe PCA analysis provides a simple way to better understand why some model are able\nto match both the local and high redshift GSMF data (points below the diagonal in\nFig.~\\ref{fig:pca_pairs}),\nwhile other models only match the observational at $z=0$ (points above the diagonal\nin {Fig.~\\ref{fig:pca_pairs}).\nWe show that the primary differences are encoded in Var~15, 16 and (primarily) 17.\nModels which\nmatch the $z=0$ GSMF but not the higher redshift data tend to have higher values of\nVar~17, and thus lower values of $f_{\\rm stab}$ which tends to make disks more unstable at\nlow redshift.\n\n{\nIn this paper, we explored a model in which the we allowed the mass loading in\nstarburst (driven by mergers or disk instabilities) to be different from the mass\nloading in quiescent star formation.\nThe normalization of the quiescent mass, $\\beta_{0,\\text{disc}}$ loading is strongly\nconstrained, while marginally acceptable models can be found for most of\nthe range of values for the burst mass loading, $\\beta_{0,\\text{burst}}$. Nevertheless, this\ndoes not mean that the full range of $\\beta_{0,\\text{burst}}$ is equally plausible: there is a\nmuch larger density of acceptable models ($I\\lesssim 3$) with $20<\\beta_{0,\\text{burst}}<30$\nand the most plausible models, with $I<2.5$, have $1.66<\\beta_{0,\\text{burst}}\/\\beta_{0,\\text{disc}}<2.56$.\n}\n\nWe have deliberately focused the paper on the GSMF. This encoded the star formation\nhistory of the\nUniverse, but we can also compare the models to the observed star formation rates of\ngalaxies. We do this by computing the volume averaged star formation rate density in\nthe model. We find that the star formation history is sensitive to the choice of the\nratio $\\beta_{0,\\text{burst}}\/\\beta_{0,\\text{disc}}$. While models with $\\beta_{0,\\text{burst}}>\\beta_{0,\\text{disc}}$ offer a\nreasonable match to the GSMF evolution,\nthey fail to display sufficiently rapid increase in the cosmic SFRD.\nThese results show the important additional information that can be extracted by\nconfronting the constrained models with additional datasets, but that this needs to\nbe done with care, since it is quite possible that systematic differences may make\nit hard to simultaneously provide a plausible description of all the available data\nif the observational uncertainites are taken at face value. The apparent\ncontradictions inherent in different datasets must be carefully accounted for: as\nthey may point to missing physics in the model. Clearly a future avenue for further\nprogress is to apply the methods we have developed here to a much wider range of\ndatasets.\n\nFinally, we note that the main aim of this paper has been to examine how information\non the formation of galaxies can be extracted from observational dataset. We have\nshown how simple physical results can emerge from the analysis of a highly\ncomplex model. 
This approach can equally be applied across a wide range of science\ndisciplines where observational data are used to constrain seeming complex numerical\nmodels.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nGiven a complete filtered probability space $(\\Omega,\\F,\\FF,P)$ satisfying the usual hypotheses, let $R$ be an optional process of class $(D)$, and consider the optimal stopping problem\n\\begin{equation}\\label{os}\\tag{OS}\n\\maximize\\quad ER_\\tau\\quad\\ovr\\quad \\tau\\in\\T,\n\\end{equation}\nwhere $\\T$ is the set of stopping times with values in $[0,T]\\cup\\{T+\\}$ and $R$ is defined to be zero on $T+$. We allow $T$ to be $\\infty$ in which case $[0,T]$ is interpreted as the one-point compactification of the positive reals.\n\nWithout further conditions, optimal stopping times need not exist (take any deterministic process $R$ whose supremum is not attained).\nTheorem~II.2 of Bismut and Skalli~\\cite{bs77} establishes the existence for bounded reward processes $R$ such that $R\\ge\\cev R$ and $\\vec R\\le\\pp R$. Here,\n\\[\n\\vec R_t:=\\limsup_{s\\upto t} R_s\\quad\\text{and}\\quad\\cev R_t:=\\limsup_{s\\downto t} R_s,\n\\]\nthe {\\em left-} and {\\em right-upper semicontinuous regularizations} of $R$, respectively. Bismut and Skalli mention on page 301 that, instead of boundedness, it would suffice to assume that $R$ is of class $(D)$.\n\nIn order to extend the above, we study the ``optimal quasi-stopping problem''\n\\begin{equation}\\label{oqs}\\tag{OQS}\n\\maximize\\quad E[R_\\tau+\\vec R_{\\tilde \\tau}]\\quad\\ovr\\quad (\\tau,\\tilde\\tau)\\in\\hat\\T,\n\\end{equation}\nwhere $\\hat\\T$ is the set of {\\em quasi-stopping times} (``split stopping time'' in Dellacherie and Meyer~\\cite{dm82}) defined by\n\\[\n\\hat\\T:=\\{(\\tau,\\tilde\\tau)\\in\\T\\times\\T_p \\mid \\tilde\\tau>0,\\ \\tau\\vee\\tilde\\tau =T+ \\},\n\\]\nwhere $\\T_p$ is the set of predictable times. When $R$ is cadlag, $\\vec R=R_-$, and our formulation of the quasi-optimal stopping coincides with that of Bismut~\\cite{bis79}. Our main result gives the existence of optimal quasi-stopping times when $R\\ge\\cev R$. When $R \\ge \\cev R$ and $\\vec R\\le \\pp R$, we obtain the existence for \\eqref{os} thus extending the existence result of \\cite[Theorem~II.2]{bs77} to possibly unbounded processes $R$ as suggested already on page 301 of \\cite{bs77}. \n\nOur existence proofs are based on functional analytical arguments that avoid the use of Snell envelopes which are used in most analyses of optimal stopping. Our strategy is to first look at a convex relaxation of the problem. This turns out be a linear optimization problem over a compact convex set of random measures whose extremal points can be identified with (quasi-)stopping times. As soon as the objective is upper semicontinuous on this set, Krein-Milman theorem gives the existence of (quasi-)stopping times. Sufficient conditions for upper semicontinuity are obtained as a simple application of the main result of Perkki\\\"o and Trevino~\\cite{pt18a}. The overall approach was suggested already on page 287 of Bismut~\\cite{bis79b} in the case of optimal stopping. We extended the strategy (and provide explicit derivations) to quasi-optimal stopping for a merely right-upper semicontinuous reward process.\n\nThe last section of the paper develops a dual problem and optimality conditions for optimal (quasi-)stopping problems. The dual variables turn out to be martingales that dominate $R$. 
As a simple consequence, we obtain the duality result of Davis and Karatzas~\\cite{dk94} in a more general setting where the reward process $R$ is merely of class $(D)$.\n\n\n\n\n\\section{Regular processes}\\label{sec:regular}\n\n\nIn this section, the reward process $R$ is assumed to be {\\em regular}, i.e.\\ of class $(D)$ such that the left-continuous version $R_-$ and the predictable projection $\\pp R$ of $R$ are indistinguishable; see e.g.\\ \\cite{bis78} or \\cite[Remark~50.d]{dm82}. Our analysis will be based on the fact that the space of regular processes is a Banach space whose dual can be identified with optional measures of essentially bounded variation; see Theorem~\\ref{thm:reg1} below.\n\n\nThe space $M$ of Radon measures may be identified with the space $X_0$ of left-continuous functions of bounded variation on $\\reals_+$ which are constant on $(T,\\infty]$ and $x_0=0$. Indeed, for every $x\\in X_0$, there exists a unique $Dx\\in M$ such that $x_t=Dx([0,t))$ for all $t\\in\\reals$. Thus $x\\mapsto Dx$ defines a linear isomorphism between $X_0$ and $M$. The value of $x$ for $t>T$ will be denoted by $x_{T+}$. Similarly, the space $\\M^\\infty$ of optional random measures with essentially bounded total variation may be identified with the space $\\N_0^\\infty$ of adapted processes $x$ with $x\\in X_0$ almost surely and $Dx\\in\\M^\\infty$.\n\nLet $C$ the space of continuous functions on $[0,T]$ equipped with the supremum norm and let $L^1(C)$ be the space of (not necessarily adapted) continuous processes $y$ with $E\\|y\\|<\\infty$. The norm $E\\|y\\|$ makes $L^1(C)$ into a Banach space whose dual can be identified with the space $L^\\infty(M)$ of random measures whose pathwise total variation is essentially bounded. The following result is essentially from \\cite{bis78}; see \\cite[Theorem~8]{pp18c} or \\cite[Corollary~16]{pp18b}. It provides the functional analytic setting for analyzing optimal stopping with regular processes.\n\n\n\\begin{theorem}\\label{thm:reg1}\nThe space $\\R^1$ of regular processes equipped with the norm\n\\[\n\\|y\\|_{\\R^1}:=\\sup_{\\tau\\in\\T}E|y_\\tau|\n\\]\nis Banach and its dual can be identified with $\\M^\\infty$ through the bilinear form\n\\[\n\\langle y,u\\rangle =E\\int ydu.\n\\]\nThe optional projection is a continuous surjection of $L^1(C)$ to $\\R^1$ and its adjoint is the embedding of $\\M^\\infty$ to $L^\\infty(M)$. The norm of $\\R^1$ is equivalent to\n\\[\np(y):= \\inf_{z\\in L^1(C)} \\{E\\|z\\| \\mid \\op z = y\\}\n\\]\nwhich has the dual representation\n\\[\np(y)=\\sup\\{\\langle y,u\\rangle\\,|\\,\\esssup(\\|u\\|)\\le 1\\}.\n\\]\n\\end{theorem}\n\nWe first write the optimal stopping problem as\n\\[\n\\maximize\\quad \\langle R, Dx\\rangle\\quad\\ovr\\quad x\\in\\C_e,\n\\]\nwhere \n\\[\n\\C_e:=\\{x\\in\\N_0^\\infty\\,|\\, Dx\\in\\M^\\infty_+,\\ x_t\\in\\{0,1\\}\\}.\n\\]\nThe equation $\\tau(\\omega) = \\inf\\{t\\in\\reals\\mid x_t(\\omega)\\ge 1\\}$ gives a one-to-one correspondence between the elements of $\\T$ and $\\C_e$. Consider also the convex relaxation\n\\begin{equation}\\label{Rcr}\\tag{ROS}\n\\maximize\\quad \\langle R,Dx\\rangle\\quad\\ovr\\quad x\\in\\C,\n\\end{equation}\nwhere\n\\[\n\\C:=\\{x\\in\\N_0^\\infty\\,|\\, Dx\\in\\M^\\infty_+,\\ x_{T+}\\le 1\\}.\n\\]\nClearly, $\\C_e\\subset\\C$ so the optimum value of optimal stopping is dominated by the optimum value of the relaxation. The elements of $\\C$ are {\\em randomized stopping times} in the sense of Baxter and Chacon~\\cite[Section~2]{bc77}. 
\n\nRecall that $x\\in\\C$ is an {\\em extreme point} of $\\C$ if it cannot be expressed as a convex combination of two points of $\\C$ different from $x$.\n\n\\begin{lemma}\\label{lem:km}\nThe set $\\C$ is convex, $\\sigma(\\N_0^\\infty,\\R^1)$-compact and $\\C_e$ is the set of its extreme points.\n\\end{lemma}\n\n\\begin{proof}\nThe set $\\C$ is a closed convex set of the unit ball that $\\N_0^\\infty$ has as the dual of the Banach space $\\R^1$. The compactness thus follows from Banach-Alaoglu. It is easily shown that the elements of $\\C_e$ are extreme points of $\\C$. On the other hand, if $x\\notin\\C_e$ there exists an $\\bar s\\in(0,1)$ such that the processes\n\\[\nx^1_t:=\\frac{1}{\\bar s}[x_t\\wedge\\bar s]\\quad\\text{and}\\quad x^2_t:=\\frac{1}{1-\\bar s}[(x_t-\\bar s)\\vee 0]\n\\]\nare different elements of $\\C$. Since $x=\\bar s x^1+(1-\\bar s)x^2$, it is not an extreme point of $\\C$.\n\\end{proof}\n\nSince the function $x\\mapsto\\langle R,Dx\\rangle$ is continuous, the compactness of $\\C$ in Lemma~\\ref{lem:km} implies that the maximum in \\eqref{Rcr} is attained. The fact that the maximum is attained at a genuine stopping time follows from the characterization of the extreme points in Lemma~\\ref{lem:km} and the following variant of the Krein-Millman theorem; see e.g.~\\cite[Theorem~25.9]{cho69}.\n\n\\begin{theorem}[Bauer's maximum principle]\\label{thm:bauer}\nIn a locally convex Hausdorff topological vector space, an upper semicontinuous (usc) convex function on a compact convex set $K$ attains its maximum at an extremal point of $K$.\n\\end{theorem}\n\n\nCombining Lemma~\\ref{lem:km} and Theorem~\\ref{thm:bauer} gives the following.\n\n\\begin{theorem}\\label{thm:os}\nOptimal stopping time in \\eqref{os} exists for every $R\\in\\R^1$.\n\\end{theorem}\n\nThe above seems to have been first proved in Bismut and Skalli~\\cite[Theorem~I.3]{bs77}, which says that a stopping time defined in terms of the Snell envelope of the regular process $R$ is optimal. Their proof assumes bounded reward $R$ but they note on page~301 that it actually suffices that $R$ be of class $(D)$. The proof of Bismut and Skalli builds on the (nontrivial) existence of a Snell envelope and further limiting arguments involving sequences of stopping times. In contrast, our proof is based on elementary functional analytic arguments in the Banach space setting of Theorem~\\ref{thm:reg1}, which is of independent interest.\n\nNote that $x$ solves the relaxed optimal stopping problem if and only if $R$ is {\\em normal} to $\\C$ at $x$, i.e.\\ if $R\\in\\partial\\delta_\\C(x)$ or equivalently $x\\in\\partial\\sigma_\\C(R)$, where\n\\[\n\\sigma_\\C(R) = \\sup_{x\\in\\C}\\langle R,Dx\\rangle.\n\\]\nHere, $\\partial$ denotes the {\\em subdifferential} of a function; see e.g.\\ \\cite{roc74}. If $R$ is nonnegative, we have $\\sigma_\\C(R)=\\|R\\|_{\\R^1}$ (by Krein--Milman) and the optimal solutions of the relaxed stopping problem are simply the subgradients of the $\\R^1$-norm at $R$.\n\n\n\n\\section{Cadlag processes}\\label{sec:cadlag}\n\n\n\nThis section extends the previous section to optimal quasi-stopping problems when the reward process $R$ is merely {\\em cadlag and of class $(D)$}. In this case, optimal stopping times need not exist (see the discussion on page 1) but we will prove the existence of a quasi-stopping time by functional analytic arguments analogous to those in Section~\\ref{sec:regular}.\n\nThe Banach space of cadlag functions equipped with the supremum norm will be denoted by $D$. 
The space of purely discontinuous Borel measures will be denoted by $\\tilde M$. The dual of $D$ can be identified with $M\\times\\tilde M$ through the bilinear form\n\\[\n\\langle y,(u,\\tilde u)\\rangle := \\int ydu + \\int y_-d\\tilde u\n\\]\nand the dual norm is given by \n\\[\n\\sup_{y\\in D}\\left\\{\\left.\\int ydu + \\int y_-d\\tilde u\\,\\right|\\,\\|y\\|\\le 1\\right\\}=\\|u\\|+\\|\\tilde u\\|,\n\\]\nwhere $\\|u\\|$ denotes the total variation norm on $M$. This can be deduced from \\cite[Theorem~1]{pes95} or seen as the deterministic special case of \\cite[Theorem~VII.65]{dm82} combined with \\cite[Remark~VII.4(a)]{dm82}.\n\nThe following result from \\cite{pp18b} provides the functional analytic setting for analyzing quasi-stopping problems with cadlag processes of class $(D)$.\n\n\\begin{theorem}\\label{thm:L1}\nThe space $\\D^1$ of optional cadlag processes of class $(D)$ equipped with the norm\n\\[\n\\|y\\|_{\\D^1}:=\\sup_{\\tau\\in\\T}E|y_\\tau|\n\\]\nis Banach and its dual can be identified with \n\\[\n\\hat\\M^\\infty:=\\{(u,\\tilde u)\\in L^\\infty(M\\times\\tilde M)\\mid u\\text{ is optional},\\, \\tilde u\\text{ is predictable}\\}\n\\]\nthrough the bilinear form\n\\[\n\\langle y,(u,\\tilde u)\\rangle =E\\left[\\int ydu+\\int y_-d\\tilde u\\right].\n\\]\nThe optional projection is a continuous surjection of $L^1(D)$ to $\\D^1$ and its adjoint is the embedding of $\\hat\\M^\\infty$ to $L^\\infty(M\\times \\tilde M)$. The norm of $\\D^1$ is equivalent to\n\\[\n p(y):= \\inf_{z\\in L^1(D)} \\{E\\|z\\| \\mid \\op z = y\\},\n\\]\nwhich has the dual representation\n\\[\np(y)=\\sup\\{\\langle y,(u,\\tilde u)\\rangle\\,|\\,\\esssup(\\|u\\|+\\|\\tilde u\\|)\\le 1\\}.\n\\]\n\\end{theorem}\n\nThe space $M\\times\\tilde M$ may be identified with the space $\\hat X_0$ of (not necessarily left-continuous) functions $x:\\reals_+\\to\\reals$ of bounded variation which are constant on $(T,\\infty]$ and have $x_0=0$. Indeed, every $x\\in\\hat X_0$ can be written uniquely as\n\\[\nx_t = Dx([0,t)) + \\tilde Dx([0,t]),\n\\]\nwhere $\\tilde Dx\\in\\tilde M$ and $Dx\\in M$ are the measures associated with the functions $\\tilde x_t :=\\sum_{s\\le t} (x_s-x_{s-})$ and $x-\\tilde x$, respectively. The linear mapping $x\\mapsto(Dx,\\tilde Dx)$ defines an isomorphism between $\\hat X_0$ and $M\\times\\tilde M$. The value of $x$ for $t>T$ will be denoted by $x_{T+}$. 
Similarly, the space $\\hat \\M^\\infty$ may be identified with the space $\\hat\\N_0^\\infty$ of predictable processes $x$ with $x\\in \\hat X_0$ almost surely and $(Dx,\\tilde Dx)\\in\\hat\\M^\\infty$.\n\n\nProblem \\eqref{oqs} can be written as\n\\[\n\\maximize\\quad \\langle R, (Dx,\\tilde Dx)\\rangle\\quad\\ovr\\quad x\\in\\hat\\C_e,\n\\]\nwhere \n\\[\n\\hat\\C_e:=\\{x\\in\\hat\\N_0^\\infty\\,|\\, (Dx,\\tilde Dx)\\in\\hat\\M^\\infty_+,\\ x_t\\in\\{0,1\\}\\}.\n\\]\nIndeed, the equations $\\tau(\\omega) = \\inf\\{t\\in\\reals\\mid x_t(\\omega)\\ge 1\\}$ and $\\tilde\\tau(\\omega) = \\inf\\{t\\in\\reals\\mid x_t-x_{t-}(\\omega)\\ge 1\\}$ give a one-to-one correspondence between the elements of $\\hat\\T$ and $\\hat\\C_e$.\n\nConsider also the convex relaxation\n\\begin{equation}\\label{Dcr}\\tag{ROQS}\n\\maximize\\quad \\langle R,(Dx,\\tilde Dx)\\rangle\\quad\\ovr\\quad x\\in\\hat\\C,\n\\end{equation}\nwhere \n\\[\n\\hat\\C:=\\{x\\in\\hat\\N_0^\\infty\\,|\\, (Dx,\\tilde Dx)\\in\\hat\\M^\\infty_+,\\ x_{T+}\\le 1\\}.\n\\]\n\n\\begin{lemma}\\label{lem:cadkm}\nThe set $\\hat\\C$ is convex, $\\sigma(\\hat \\M^\\infty,\\D^1)$-compact and the set of quasi-stopping times $\\hat \\C_e$ is its extreme points. Moreover, the set of stopping times is $\\sigma(\\hat \\M^\\infty,\\D^1)$-dense in $\\hat \\C_e$ and, thus, $\\C$ is $\\sigma(\\hat \\M^\\infty,\\D^1)$-dense in $\\hat\\C$.\n\\end{lemma}\n\n\\begin{proof}\nThe set $\\hat\\C$ is a closed convex set of the unit ball that $\\hat\\N_0^\\infty$ has as the dual of the Banach space $\\D^1$. The compactness thus follows from Banach-Alaoglu. It is easily shown that the elements of $\\hat\\C_e$ are extreme points of $\\hat\\C$.\n\nIf $x\\notin\\hat\\C_e$, there exist $\\bar s\\in(0,1)$ such that\n\\begin{align*}\nx^1_t &:=\\frac{1}{\\bar s }[x_t\\wedge\\bar s],\\quad\\quad x^2_t:=\\frac{1}{1-\\bar s}[(x_t-\\bar s)\\vee 0]\n\\end{align*}\nare distinguishable processes that belong to $\\hat\\C$. Since $x=\\bar s x^1+(1-\\bar s)x^2$, $x$ is not an extremal in $\\hat C$.\n\nTo prove the last claim, let $(\\tau,\\tilde\\tau)$ be a quasi-stopping time and $(\\tau^\\nu)$ an announcing sequence for $\\tilde\\tau$. 
We then have\n\\[\n\\langle(\\delta_{\\tau\\wedge\\tau^\\nu},0),y\\rangle\\to\\langle(\\delta_\\tau,\\delta_{\\tilde\\tau}),y\\rangle\n\\]\nfor every $y\\in\\D^1$.\n\\end{proof}\n\nJust like in Section~\\ref{sec:regular}, a combination of Lemma~\\ref{lem:cadkm} and Theorem~\\ref{thm:bauer} gives the following existence result which was established in Bismut~\\cite{bis79} using more elaborate techniques based on the existence of Snell envelopes.\n\n\\begin{theorem}\\label{thm:os}\nIf $R\\in\\D^1$, then optimal quasi-stopping time in \\eqref{oqs} exists and the optimal values of \\eqref{oqs}, \\eqref{os} and \\eqref{Dcr} are all equal.\n\\end{theorem}\n\nAs another implication of Lemma~\\ref{lem:cadkm} and Theorem~\\ref{thm:L1}, we recover the following result of Bismut which says that the seminorms in Theorem~\\ref{thm:L1} are not just equivalent but equal\n\n\\begin{theorem}[{\\cite[Theorem~4]{bis78}}]\\label{thm:equiv1}\nFor every $y\\in\\D^1$,\n\\[\n\\|y\\|_{\\D^1}=\\inf_{z\\in L^1(D)}\\{E\\|z\\|_D\\mid \\op z = y\\}.\n\\]\n\\end{theorem} \n\n\\begin{proof}\nThe expression on the right is the seminorm $p$ in Theorem~\\ref{thm:L1} with the dual representation \n\\[\np(y)=p(|y|)=\\sup_{x\\in\\hat\\C}\\langle |y|,(Dx,\\tilde Dx)\\rangle\n\\]\nwhich, by Theorem~\\ref{thm:os}, equals the left side.\n \n\\end{proof}\n\nCombining the above with Theorem~\\ref{thm:reg1} gives a simple proof of the following.\n\n\\begin{theorem}[{\\cite[Theorem~3]{bis78}}]\\label{thm:equivreg}\nFor every $y\\in\\R^1$,\n\\[\n\\|y\\|_{\\R^1}=\\inf_{z\\in L^1(C)}\\{E\\|z\\|_D\\mid \\op z = y\\}.\n\\]\n\\end{theorem} \n\n\\begin{proof}\nBy Jensen's inequality, the left side is less than the right which is the seminorm $p$ in Theorem~\\ref{thm:reg1} with the dual representation\n\\begin{align*}\n p(y) &= \\sup\\{\\langle y,u\\rangle\\,|\\,\\esssup(\\|u\\|)\\le 1\\}\\\\\n &\\le \\sup\\{\\langle y,(u,\\tilde u)\\rangle\\,|\\,\\esssup(\\|u\\|+\\|\\tilde u\\|)\\le 1\\}\\\\\n &=\\sup_{x\\in\\hat\\C}\\langle |y|,(Dx,\\tilde Dx)\\rangle,\n\\end{align*}\nwhich, again by Theorem~\\ref{thm:os}, equals the left side.\n\\end{proof}\n\n\n\\section{Non-cadlag processes}\\label{sec:usc}\n\n\n\nThis section gives a further extension to cases where the reward process is not necessarily cadlag but merely {\\em right-upper semicontinuous} (right-usc) in the sense that $R\\ge\\cev R$. In this case, the objective of the relaxed quasi-optimal stopping problem \\eqref{Dcr} need not be continuous. The following lemma says that it is, nevertheless, upper semicontinuous, so Bauer's maximum principle still applies.\n\n\\begin{lemma}\\label{lem:usc}\nIf $R$ is right-usc and of class $(D)$, then the functional \n\\begin{align*}\n\\hat\\J(u,\\tilde u)=\\begin{cases} \nE\\left[\\int Rdu + \\int \\vec R d\\tilde u\\right]\\quad &\\text{if }(u,\\tilde u)\\in\\hat\\M^\\infty_+\\\\\n-\\infty\\quad&\\text{otherwise}\n\\end{cases}\n\\end{align*}\nis $\\sigma(\\hat\\M^\\infty,\\D^1)$-usc.\n\\end{lemma}\n\n\\begin{proof}\nRecalling that every optional process of class $(D)$ has a majorant in $\\D^1$ (see \\cite[Remark 25, Appendix I]{dm82}), the first example in \\cite[Section~8]{pt18a} shows, with obvious changes of signs, that $\\hat\\J$ is usc. \n\\end{proof}\n\nCombining Lemma~\\ref{lem:usc} with Theorem~\\ref{thm:bauer} gives the existence of a relaxed quasi-stopping time at an extreme point of $\\C$ which, by Lemma~\\ref{lem:cadkm}, is a quasi-stopping time. 
We thus obtain the following.\n\n\n\\begin{theorem}\\label{thm:usc}\nIf $R$ is right-usc and of class $(D)$, then \\eqref{oqs} has a solution. \n\\end{theorem}\n\nWe have not been able find the above result in the literature but it can be derived from Theorem~2.39 of El Karoui~\\cite{elk81} on ``divided stopping times'' (temps d'arret divis\\'es). A recent analysis of divided stopping times can be found in Bank and Besslich \\cite{bb18a}. These works extend Bismut's approach on optimal quasi-stopping by dropping the assumption of right-continuity and augmenting quasi-stopping times with a third component that acts on the right limit of the reward process. Much like Bismut's approach, \\cite{elk81,bb18a} build on the existence of a Snell envelope.\n\nTheorem~\\ref{thm:usc} yields the existence of an optimal stopping time when the reward process $R$ is {\\em subregular} in the sense that it is right-usc, of class $(D)$ and $\\vec R\\le\\pp R$.\n\n\n\n\\begin{theorem}\\label{thm:uscreg}\nIf $R$ is subregular, then \\eqref{os} has a solution and its optimum value equals that of \\eqref{oqs}.\n\\end{theorem}\n\n\\begin{proof}\nClearly, the optimum value of \\eqref{oqs} is at least that of \\eqref{os} while for subregular $R$,\n\\[\nE[R_\\tau+\\vec R_{\\tilde\\tau}] \\le E[R_\\tau+\\pp R_{\\tilde\\tau}] = E[R_\\tau+R_{\\tilde\\tau}] = ER_{\\tau\\wedge\\tilde\\tau},\n\\]\nwhere the first equality holds by the definition of predictable projection. The claim now follows from Theorem~\\ref{thm:usc}.\n\\end{proof}\n\n\nThe above seems to have been first established in Bismut and Skalli~\\cite[Section~II]{bs77} for bounded $R$ (again, they mention on page 301 that, instead of boundedness, it would suffice to assume that $R$ is of class $(D)$).\n\nRegularity properties are preserved under compositions with convex functions much like martingale properties. Indeed, if $R$ is regular and $g$ is a real-valued convex function on $\\reals$ then $g(R)$ is subregular as soon as it is of class $(D)$. Indeed, for any $\\tau\\in\\T_p$, conditional Jensen's inequality gives\n\\[\nE[g(\\vec R_\\tau)\\one_{\\tau<+\\infty}]= E[g(\\pp R_\\tau)\\one_{\\tau<+\\infty}] \\le E[g(R_\\tau)\\one_{\\tau<+\\infty}].\n\\]\nSimilarly, if $R$ is subregular and $g$ is a real-valued increasing convex function, then $g(R)$ is subregular as soon as the composition is of class $(D)$.\n\n\n\n\n\n\\section{Duality}\n\n\n\nWe end this paper by giving optimality conditions and a dual problem for the optimal stopping problems. The derivations are based on the conjugate duality framework of~\\cite{roc74} which addresses convex optimization in general locally convex vector spaces.\nThe results below establish the existence of dual solutions without assuming the existence of optimal (quasi-)stopping times. They hold without any path properties as long as the reward process $R$ is of class $(D)$.\n\nWe denote the space of martingales of class $(D)$ by $\\R^1_m$.\n\n\\begin{theorem}\\label{thm:osdual}\nLet $R$ be of class $(D)$. 
Then the optimum values of \\eqref{oqs} and \\eqref{os} coincide and equal that of\n\\begin{equation}\\label{d}\\tag{DOS}\n\\inf\\{EM_0 \\mid M\\in\\R^1_m,\\ R\\le M\\},\n\\end{equation}\nwhere the infimum is attained.\n\nMoreover, $x\\in\\hat\\C$ is optimal in the convex relaxation of \\eqref{oqs} if and only if there exists $M\\in\\R^1_m$ with $R\\le M$ and\n\\begin{align}\n\\int (M-R)dx+\\int(M_--\\vec R)d\\tilde x &= 0,\\label{eq:oc1}\\\\ \nx_{T+}=1\\quad\\text{or}\\quad M_T&=0\\label{eq:oc2}\n\\end{align}\nalmost surely. Thus, $(\\tau,\\tilde\\tau)\\in\\hat\\T$ is optimal in \\eqref{oqs} if and only if there exists $M\\in\\R^1_m$ with $R\\le M$, $M_\\tau=R_\\tau$, $M_{\\tilde \\tau_-} =\\vec R_{\\tilde\\tau}$ and almost surely either $\\tau+\\tilde\\tau<\\infty+$ or $M_T=0$.\n\nIn particular, $x\\in\\C$ is optimal in the convex relaxation of \\eqref{os} if and only if there exists $M\\in\\R^1_m$ with $R\\le M$ and\n\\begin{align*}\n\\int (M-R)dx&= 0,\\\\ \nx_{T+}=1\\quad\\text{or}\\quad M_T&=0\n\\end{align*}\nalmost surely. Thus, $\\tau\\in\\T$ is optimal in \\eqref{os} if and only if there exists $M\\in\\R^1_m$ with $R\\le M$, $M_\\tau=R_\\tau$ and almost surely either $\\tau<\\infty+$ or $M_T=0$.\n\\end{theorem}\n\n\\begin{proof}\nBy \\cite[Remark~25, Appendix~I]{dm82}, there are measurable processes $z$ and $\\tilde z$ such that $R =\\op z$, $\\vec R=\\op{\\tilde z}$ and $E[\\sup_tz_t+\\sup_t\\tilde z_t]<\\infty$. The optimum value and optimal solutions of \\eqref{oqs} coincide with those of\n\\begin{align}\\label{eq:ros}\n&\\maximize_{x\\in\\hat\\N^\\infty}\\quad E\\left[\\hat \\J(Dx,\\tilde Dx) - \\rho(x_{T+}-1)^+\\right],\n\\end{align}\nwhere $\\rho:=\\sup_tz_t+\\sup_t\\tilde z_t+1$ and $\\hat\\J$ is defined as in Lemma~\\ref{lem:usc}. Indeed, if $x$ is feasible in \\eqref{eq:ros} then $\\bar x:=x\\wedge 1$ is feasible in \\eqref{oqs} and since $x-\\bar x$ is an increasing process with $(x-\\bar x)_{T+}=(x_{T+}-1)^+$, we get \n\\begin{align*}\n\\hat\\J(D\\bar x,\\tilde D\\bar x) &= \\hat\\J(Dx,\\tilde Dx) - \\hat\\J(D(x-\\bar x),\\tilde D(x-\\bar x))\\\\\n&\\ge \\hat\\J(Dx,\\tilde Dx) - E\\rho(x_{T+}-1)^+.\n\\end{align*}\n\nProblem~\\eqref{eq:ros} fits the general conjugate duality framework of \\cite{roc74} with $U=L^\\infty$, $Y=L^1$ and\n\\[\nF(x,w) = -\\hat\\J(Dx,\\tilde Dx) + E\\rho(x_{T+}+w-1)^+.\n\\]\nBy \\cite[Theorem~22]{roc74}, $w\\to F(0,w)$ is continuous on $L^\\infty$ in the Mackey topology that it has as the dual of $L^1$. Thus, by \\cite[Theorem~17]{roc74}, the optimum value of \\eqref{eq:ros} coincides with the infimum of the dual objective \n\\[\ng(y) :=-\\inf_{x\\in\\hat\\N^\\infty} L(x,y),\n\\]\nwhere $L(x,y) :=\\inf_{w\\in L^\\infty}\\{F(x,w)-Ewy\\}$, and moreover, the infimum of $g$ is attained. By the interchange rule \\cite[Theorem~14.60]{rw98},\n\\begin{align*}\nL(x,y)&=\n\\begin{cases}\n+\\infty & \\text{if $x\\notin\\hat\\N^\\infty_+$},\\\\\n-\\hat\\J(Dx,\\tilde Dx) + E\\left[\\inf_{u\\in\\reals}\\{\\rho(x_{T+}+u-1)^+ - uy\\}\\right]&\\text{otherwise}\\\\\n\\end{cases}\\\\\n&=\n\\begin{cases}\n+\\infty & \\text{if $x\\notin\\hat\\N^\\infty_+$},\\\\\n-\\hat\\J(Dx,\\tilde Dx)+ E\\left[x_{T+}y-y - \\delta_{[0,\\rho]}(y)\\right]&\\text{otherwise}.\n\\end{cases}\n\\end{align*}\nWe have\n\\[\nE[x_{T+}y] = E[\\int(y\\one)dx+\\int(y\\one)d\\tilde x] = \\langle M,(Dx,\\tilde Dx)\\rangle,\n\\]\nwhere $M=\\op(y\\one)\\in\\R^1_m$. 
Thus,\n\\[\nL(x,y) = \n\\begin{cases}\n+\\infty & \\text{if $x\\notin\\hat\\N^\\infty_+$},\\\\\n-\\hat\\J(Dx,\\tilde Dx) + \\langle M,(Dx,\\tilde Dx)\\rangle - EM_T & \\text{if $x\\in\\hat\\N^\\infty_+$ and $0\\le M_T\\le\\rho$,}\\\\\n-\\infty & \\text{otherwise}.\n\\end{cases}\n\\]\nThe dual objective can be written as\n\\begin{align*}\ng(y) &= \n\\begin{cases}\nEM_0 & \\text{if $0\\le M_T\\le\\rho$, $M\\ge R$ and $M_-\\ge\\vec R$},\\\\\n+\\infty & \\text{otherwise}.\n\\end{cases}\n\\end{align*}\nSince $M$ is cadlag, $M_-\\ge\\vec R$ holds automatically when $M\\ge R$. In summary, the optimum value of \\eqref{oqs} equals that of \\eqref{d}.\n\nThe dual problem of \\eqref{os} is obtained similarly by defining\n\\[\nF(x,w) = -\\J(Dx) + E\\rho(x_{T+}+w-1)^+.\n\\]\nThe function $w\\to F(0,w)$ is again Mackey-continuous on $L^\\infty$ and one finds that the dual is again \\eqref{d}. Thus, the optimum value of \\eqref{os} equals that of \\eqref{d}.\n\nAs to the optimality conditions, \\cite[Theorem~15]{roc74} says that $x$ is optimal in \\eqref{eq:ros} and $y$ is optimal in the dual if and only i\n\\[\n0\\in\\partial_x L(x,y),\\quad 0\\in\\partial_y[-L](x,y).\n\\]\nThe former means that $x\\in\\hat\\N^\\infty_+$, $M\\ge R$ and \n\\[\n\\int (M-R)dx=0, \\quad \\int (M_--\\vec R)d\\tilde x=0\\quad P\\text{-a.s.}\n\\]\nBy the interchange rule for subdifferentials (\\cite[Theorem~21c]{roc74}), the latter is equivalent to \\eqref{eq:oc2}.\n\\end{proof}\n\n\nNote that for any martingale $M\\in\\R^1_m$,\n\\[\n\\sup_{\\tau\\in\\T}ER_\\tau = \\sup_{\\tau\\in\\T}E(R_\\tau+M_T-M_{\\tau})\\le E\\sup_{t\\in[0,T]}(R_t+M_T-M_t),\n\\]\nwhere the last expression is dominated by $EM_0$ if $R\\le M$. Thus,\n\\begin{align*}\n\\sup_{\\tau\\in\\T}ER_\\tau &\\le \\inf_{M\\in\\R^1_m}E\\sup_{t\\in[0,T]}(R_t+M_T-M_t)\\\\\n&\\le \\inf_{M\\in\\R^1_m}\\{E\\sup_{t\\in[0,T]}(R_t+M_T-M_t)\\,|\\,R\\le M\\}\\\\\n&\\le \\inf_{M\\in\\R^1_m}\\{EM_0\\,|\\,R\\le M\\},\n\\end{align*}\nwhere, by Theorem~\\ref{thm:os}, the last expression equals the first one as soon as $R$ is of class $(D)$. The optimum value of the stopping problem then equals\n\\[\n\\inf_{M\\in\\R^1_m}E\\sup_{t\\in[0,T]}(R_t+M_T-M_t).\n\\]\nThis is the dual problem derived in Davis and Karatzas~\\cite{dk94} and Rogers~\\cite{rog2}. Note also that if $Y$ is the Snell envelope of $R$ (the smallest supermartingale that dominates $R$), then the martingale part $M$ in the Doob--Meyer decomposition $Y=M-A$ is dual optimal. These facts were obtained in \\cite{dk94} and \\cite{rog2} under the assumptions that $\\sup_tR_t$ is integrable.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Supporting information}\n\\subsection{Hydrodynamic transport regime}\nThe hydrodynamic equations for electron drift velocity $\\bf u$, chemical potential $\\mu$ and temperature $T$ can be obtained by supplying the local-equilibrium distribution function\n\\begin{equation}\n f_0 = [1+e^{(\\epsilon_p - {\\bf p u}_0 - \\mu)\/T}]^{-1}\n\\end{equation}\ninto kinetic equation and integrating it by phase space multiplied by $1$, $\\bf p$ and $\\epsilon_{\\bf p}$. Though a more accurate derivation with explicit account of collisions is possible (see section II), such simplistic derivation may be convenient for analysis of nonlinear effects. 
The resulting equations can be presented as\n\\begin{gather}\n \\partial_t n + \\partial_{\\bf r}(n{\\bf u})=0,\\\\\n \\partial_t(\\rho u_i) + \\partial_{x_j}\\Pi_{ij} = e n \\partial_{x_i}\\varphi,\\\\\n \\partial_t\\varepsilon + \\partial_{\\bf r}(\\rho v_0^2 {\\bf u}) - e n {\\bf u}{\\partial_{\\bf r}\\varphi} = 0. \n\\end{gather}\n\nAbove, $n$ is the density of electrons, $\\rho$ is the equivalent of mass density, $\\varepsilon$ is the internal energy density, and $\\Pi_{ij}$ is the stress tensor. At given value of chemical potential $\\mu$, all these quantities depend on drift velocity $\\beta = u \/ v_0$. At the same time, they can be expressed via their values in the absence of drift, i.e. at $\\beta = 0$:\n\\begin{gather}\n\\label{Eqs-of-state}\n n = \\frac{n_{\\beta = 0}}{[1-\\beta^2]^{3\/2}},\\qquad \\rho = \\frac{\\rho_{\\beta = 0}}{[1-\\beta^2]^{5\/2}},\\\\\n \\varepsilon = \\varepsilon_{\\beta = 0} \\frac{1 + \\beta^2\/2}{[1-\\beta^2]^{5\/2}},\\\\ \\rho_{\\beta = 0}v_0^2 = \\frac{3}{2}\\varepsilon_{\\beta = 0}\n \\Pi_{xx} = \\frac{\\varepsilon_{\\beta = 0}}{2}\\frac{1+2\\beta^2}{[1-\\beta^2]^{5\/2}},\\qquad \\Pi_{yy} = \\frac{\\varepsilon_{\\beta = 0}}{2}\\frac{1 - \\beta^2}{[1-\\beta^2]^{5\/2}}\n\\end{gather}\n\nTo avoid dealing with 'relativistic factors' of the type $[1-\\beta^2]^{\\alpha}$, it is convenient to consider density $n$, mass density $\\rho$ and velocity $\\beta$ as hydrodynamic variables. In these variables, the Euler and heat balance equations take on the closed form (assuming one-dimensional motion along $x$-axis)\n\\begin{gather}\n \\partial_t(\\rho u_x) + \\frac{1}{3}\\partial_{x}[\\rho (v_0^2 + 2 u^2)] = e n \\partial_{x}\\varphi,\\\\\n \\frac{2}{3} \\partial_t(\\rho [v_0^2 + u^2\/2]) + \\partial_{\\bf r}(\\rho v_0^2 {\\bf u}) - e n {\\bf u}{\\partial_{\\bf r}\\varphi} = 0. \n\\end{gather}\nThe equations can now be linearized to find the conductivity of drifting Dirac electrons in the hydrodynamic regime, $n = n_0 + \\delta ne^{i(kx-\\omega t)}$, $u = u_0 + \\delta u e^{i(kx-\\omega t)}$, $\\rho = \\rho_0 + \\delta \\rho e^{i(kx-\\omega t)}$. 
This results in the system of equations $\\hat M_{\\rm hd}\\delta{\\bf x} = \\delta {\\bf F}_{\\rm hd}$, with the matrix and right-hand side given by\n\\begin{gather}\n\\label{HD-matrix-2}\n \\hat{M}_{\\rm hd}=\\left( \\begin{matrix}\n -i(\\omega - q u_0) & i q & 0 \\\\\n 0 & -i (\\omega - \\frac{4}{3}qu_0) & - i (\\beta_0\\omega - \\frac{q}{3}) \\\\\n 0 & - i (\\beta_0 \\omega - \\frac{3}{2}q) & -i(\\omega - \\frac{3}{2}qu_0) \\\\\n\\end{matrix} \\right) \\\\ \n\\label{HD_forces-2}\n {\\delta {\\bf F}}_{\\rm hd}=\\frac{e\\delta \\varphi }{m v_{0}^{2}}\n \\left( \n \\begin{matrix}\n 0 \\\\\n i q v_0 \\\\\n \\frac{3}{2}i q u_0 \\\\\n\\end{matrix}\n \\right) \n\\end{gather}\nthe hydrodynamic mass $m$ si given by\n\\begin{equation}\n m = \\frac{\\rho_0}{n_0} \\approx \\frac{\\varepsilon_F}{v_0^2 - u_0^2}.\n\\end{equation}\n\nAs a final step, it is possible to solve the system (\\ref{HD-matrix-2}-\\ref{HD_forces-2}) for density $\\delta n$ and velocity $\\delta u$ to find the polarizability and conductivity\n\\begin{gather}\n\\label{HD_polarization}\n \\Pi = -\\frac{n q^2 (1-\\beta_0^2)}{m \\left[\\omega^2(1-\\frac{\\beta_0^2}{2}) - \\frac{q^2v_0^2}{2}(1-2\\beta^2) - 2 q u_0 \\omega\\right]},\\\\\n \\sigma = -\\frac{ie^2\\omega}{q^2}\\Pi.\n\\end{gather}\n\nAt zero drift velocity, the polarzation of graphene acquires a simple form common for bulk solids:\n\\begin{equation}\n \\Pi = -\\frac{n q^2 \/m}{\\omega^2 - v_s^2 q^2},\n\\end{equation}\nwhere $v_s = v_0\/\\sqrt{2}$ is the sound velocity.\n\n\n\\subsection{Polarizability and conductivity for drifting electrons at the hydrodynamic-to-ballistic crossover}\nThe solution of kinetic equation with electron-electron collisions reads\n\\begin{equation}\n\\label{Distribution_funtion}\n \\delta f = \\frac{i\\nu_{ee}\\delta f_{\\rm hd} - i e\\delta\\varphi {\\bf q} \\partial_{\\bf p}f_0}{\\omega+i\\nu_{ee}-{\\bf qv_p}},\n\\end{equation}\nwhere the perturbed hydrodynamic distribution reads\n\\begin{equation}\n \\delta f_{hd} = \\delta\\mu \\partial_{\\mu}f_0 + \\delta {\\bf u} \\partial_{\\bf u}f_0 + \\delta T \\partial_{T}f_0,\n\\end{equation}\nand the unperturbed drifting distribution is \n\\begin{equation}\n f_0 = \\left[1+\\exp\\left\\{ \\frac{\\epsilon_p - {\\bf p u}_0 - \\mu}{T}\\right\\}\\right]^{-1}.\n\\end{equation}\nThe local-equilibrium Fermi energy $\\mu$, drift velocity ${\\bf u}_0$ and temperature $T$ should be determined from the solution of dc transport equations from the known drain and gate voltages. We leave this solution for further work. The variations of local-equilibrium parameters due to ac field $\\delta \\mu$, $\\delta {\\bf u}$ and $\\delta T$ should be obtained by requiring particle, momentum and energy conservation upon e-e collisions:\n\\begin{equation}\n\\label{Conservation_laws}\n \\sum_{\\bf p}{[\\delta f- \\delta f_{\\rm hd}]} = 0, \\qquad \n \\sum_{\\bf p}{{\\bf p}[\\delta f- \\delta f_{\\rm hd}]} = 0, \\qquad\n \\sum_{\\bf p}{\\epsilon_{\\bf p}[\\delta f- \\delta f_{\\rm hd}]} = 0.\n\\end{equation}\nSubstituting the distribution function (\\ref{Distribution_funtion}) into conservation laws (\\ref{Conservation_laws}), we can obtain a set of generalized hydrodynamic equations for determination of Fermi energy, drift velocity and temperature. In the course of evaluation, one encounters the following integrals\n\\begin{equation}\n\\sum_{\\bf p}{\\frac{p^{m-1} \\cos^n\\theta_{\\bf p} f_0({\\bf p})}{a-\\cos\\theta_{\\bf p}} },\\qquad a=\\frac{\\omega+i\\nu_{ee}}{qv_0}. 
\n\\end{equation}\nAnisotropy of $f_0({\\bf p})$ introduces complications that can be handled via the change of momentum\n\\begin{equation}\n \\tilde p = p(1 - u_0\/v_0 \\cos\\theta_{\\bf p}),\n\\end{equation}\nafter which the integrals over momentum modulus and angle are decoupled\n\\begin{equation}\n\\sum_{\\bf p}{\\frac{p^{m-1} \\cos^n\\theta_{\\bf p} f_0({\\bf p})}{a-\\cos\\theta_{\\bf p}} } = \\frac{1}{(2\\pi)^2}\\int_0^{\\infty}{d\\tilde p \\tilde p^m f_F(\\tilde p)} \\int_0^{2\\pi}{\\frac{\\cos^n\\theta d\\theta}{(a-\\cos\\theta)(1-\\beta \\cos\\theta)^m}}. \n\\end{equation}\nAbove $f_F$ is already an isotropic Fermi function with the same values of $\\mu$, $T$ and zero drift velocity. We introduce the notation\n\\begin{equation}\n J_{nm} = \\frac{1}{2\\pi}\\int\\limits_0^{2\\pi}{\\frac{\\cos^n\\theta d\\theta}{(a-\\cos\\theta)(1-\\beta \\cos\\theta)^m}}.\n\\end{equation}\nThe integral is evaluated by setting $z=e^{i\\theta}$ and computing the residues in the poles inside the unit circle $|z|=1$. The remaining integrals over momentum modulus can be expressed via electron density $n$ and energy density $\\varepsilon$ at zero drift velocity $\\beta\\equiv u_0\/v_0$. This results in generalized hydrodynamic system:\n\\begin{gather}\n \\delta n = \\frac{i\\nu_{ee}}{qv_0}\\delta[n_{\\beta = 0} J_{02}] + i \\frac{e \\delta\\varphi}{qv_0} \\frac{\\partial n_{\\beta = 0}}{\\partial \\mu} [qv_0 J_{12} - {\\bf qu}_0 J_{02} ],\\\\\n \\delta{{\\bf P}v_0} = \\frac{i \\nu_{ee}}{qv_0} \\delta[\\varepsilon_{\\beta = 0} J_{13}]+ i \\frac{e \\delta\\varphi}{qv_0} 2n_{\\beta = 0} [qv_0 J_{23} - {\\bf qu}_0 J_{13} ],\\\\\n \\delta{\\varepsilon} = \\frac{i \\nu_{ee}}{qv_0} \\delta[\\varepsilon_{\\beta = 0} J_{03}]+ i \\frac{e \\delta\\varphi}{qv_0} 2n_{\\beta = 0} [qv_0 J_{13} - {\\bf qu}_0 J_{03} ].\n\\end{gather}\n\nFinally, to justigy the form (\\ref{HD-matrix}-\\ref{HD_forces}) of the main text, we express the quantities $n_{\\beta=0}$ and $\\varepsilon_{\\beta=0}$ through $n$ and $\\varepsilon$ at finite drift velocity using the equations of state (\\ref{Eqs-of-state}). We also introduce another set of dimensionless integrals\n\\begin{equation}\n I_{nm} = (1-\\beta^2)^{m-1\/2}J_{nm},\n\\end{equation}\nand inverse Knudsen number $\\tilde \\gamma_{ee} = (qv_0 \\tau_{ee})^{-1}$. Generally, the right-hand side of generalized hydrodynamic equations has the form \n\\begin{equation}\n {\\delta {\\bf F}}=-2e\\delta \\varphi \n \\left( \n \\begin{matrix}\n \\frac{I_{12}-{\\beta_0}{I_{02}}}{m_{k,\\beta=0} v_{0}^{2}} \\\\\n \\frac{I_{23}-{\\beta_0}{I_{13}}}{m_{hd,\\beta=0} v_{0}^{2}} \\\\\n \\frac{3}{2}\\frac{{I}_{13}-{\\beta_0}{I_{03}}}{m_{hd,\\beta=0} v_{0}^{2}} \\\\\n\\end{matrix}\n \\right)\n\\end{equation}\nwhere the 'kinetic' and 'hydrodynamic' masses have the form\n\\begin{equation}\n m_{k,\\beta=0} = \\frac{n_{\\beta=0}}{v_0^2 \\partial n_{\\beta=0}\/\\partial\\mu},\\qquad m_{hd,\\beta=0} = \\frac{\\rho_{\\beta=0}}{n_{\\beta=0}}.\n\\end{equation}\nIn the main text, we neglect the difference between these masses which is justified in the degenerate limit $T\/\\epsilon_F \\ll 1$. A small difference between these masses may result in extra plasmon damping due to relaxation of velocity modes by e-e collisions in non-parabolic bands~\\cite{Crossover}.\n\nThe formal solution for polarizability is readily derived from generalized hydrodynamic system~(\\ref{HD-matrix}-\\ref{HD_forces}). 
Denoting the elements of hydrodynamic matrix as $M_{ij}$ and components of generalized force vector as $\\delta F_i$, we find\n\\begin{equation}\n \\Pi (q,\\omega) = \\frac{1}{e\\delta\\varphi}\\frac{\\delta F_1}{M_{11}} +\\frac{M_{12}}{M{11}}\\frac{M_{33}\\delta F_2 - M_{23}\\delta F_3}{M_{23}M_{32} - M_{22}M_{33}}\n\\end{equation}\n\n\\subsection{Evaluation of auxiliary spatially dispersive integrals}\nUpon obtaining the generalized hydrodynamic equations, on encounters the following integrals\n\\begin{equation}\n J_{nm} = \\frac{1}{2\\pi}\\int\\limits_0^{2\\pi}{\\frac{\\cos^n\\theta d\\theta}{(a-\\cos\\theta)(1-\\beta \\cos\\theta)^m}}.\n\\end{equation}\nAt any given integers $n$ and $m$, they are evaluated by passing to the complex variable $z=e^{i\\theta}$. This results in\n\\begin{equation}\n J_{nm} = \\frac{1}{2\\pi i}\\int\\limits_{|z|=1}{\\frac{(z+z^{-1})^n dz}{2^n z [a - (z+z^{-1})\/2][ 1-\\beta (z+z^{-1})\/2]^m}}.\n\\end{equation}\nThe integrand has poles at the points:\n\\begin{equation}\n z_0=0,\\qquad z^{(a)}_{\\pm} = a \\pm \\sqrt{a^2-1}, \\qquad z^{(\\beta)}_{\\pm} = \\frac{1\\pm\\sqrt{1-\\beta^2}}{\\beta}.\n\\end{equation}\nAmong these points, $z^{(\\beta)}_{-}$ and $z_0$ lie inside the unit circle $z=1$, $z^{(a)}_{-}$ lies inside the unit circle for $\\rm{Re} a > 0$ and $z^{(a)}_{+}$ -- for $\\rm{Re} a < 0$. These statements are independent of sign of imaginary part of $a$.\n\nEvaluation is completed by computation of integrand residues at these poles. As a result, we arrive at the following expressions:\n\\begin{gather}\n\\label{Integrals}\n J_{03} = \\frac{1}{2 (a \\beta -1)^3} \\left\\{\\beta \\frac{ \\left(a^2+2\\right) \\beta ^4+\\left(2 a^2-5\\right) \\beta ^2-6 a \\beta +6 }{\\left(1-\\beta\n ^2\\right)^{5\/2}}+ 2 i \\frac{\\text{sign} \\rm{Im} a }{\\sqrt{1-a^2}}\\right\\},\\\\\n J_{02} = \\frac{1}{(a\n \\beta -1)^2} \\left\\{\\frac{\\text{sign} \\rm{Re} a}{\\sqrt{a^2-1}}+ \\beta \\frac{ \\beta (a+\\beta )-2 }{\\left(1-\\beta ^2\\right)^{3\/2}}\\right\\},\\\\\n J_{12} = \\frac{1}{{(a \\beta -1)^2}} \\left\\{a \\frac{\\text{sign} \\rm {Re} a}{\\sqrt{a^2-1}}-\\frac{1-a \\beta ^3}{\\left(1-\\beta ^2\\right)^{3\/2}}\\right\\}.\n\\end{gather}\nThe remaining necessary integrals can be obtained with recurrence relations\n\\begin{equation}\n \\frac{\\partial J_{nm}}{\\partial \\beta} = m J_{n+1,m+1}.\n\\end{equation}\nAs apparent from the forms (\\ref{Integrals}), there is a singularity at $a = \\rightarrow 1$. As the parameter $a$ has finite imaginary part ($a = (\\omega + i\\gamma_{ee})\/qv_0$), the singularity is present only in the ballistic regime (i.e. at $\\omega \\gg \\gamma_{ee}$). Finite strength of e-e collisions softens the singularity. \n\nThe integrals also diverge at $\\beta \\rightarrow 1$, a situation close to that in special relativity. However, this divergence can be re-absorbed into definitions of particle and mass density (\\ref{Eqs-of-state}), so that the resulting generalized hydrodynamic equations are free of divergences.\n\nThere is a spurious singularity at $a \\beta \\rightarrow 1$, however, a closer inspection reveals that it is compensated by the zero value of the numerator. 
An only special property of the computed dielectric response at $a \\beta \\rightarrow 1$ is the absence of dissipation, as discussed in the main text.\n\n\\subsection{Analysis of instabilities in the double-layer system}\nThe dispersion law for plasmons in a double-layer structure separated by distance $d$ reads\n\\begin{equation}\n\\label{Double-layer}\n \\epsilon_{2l}(q,\\omega,\\beta)\\equiv (1 + V_0 \\Pi_+) (1 + V_0 \\Pi_-) - V^2_0 \\Pi_- \\Pi_+ e^{-2|q|d} = 0.\n\\end{equation}\nHere $\\Pi_+$ and $\\Pi_-$ are the polarizabilities of individual top and bottom layers, $V_0 = 2\\pi e^2\/\\kappa |q|$ is the Fourier transform of Coulomb interaction and $\\kappa$ is the background dielectric constant. Substitution of hydrodynamic polarizability (\\ref{HD_polarization}) results in biquadratic equation with two eigenmodes\n\\begin{equation}\n\\label{HD-modes-double-layer}\n \\frac{\\omega^2_{\\pm}}{q^2v_0^2} = \\frac{ 2 \\left(\\beta ^4-3 \\beta ^2+2\\right) s_p^2 + 2 \\beta ^4-3 \\beta ^2+2 \\pm 2 \\sqrt{2 \\beta ^2 \\left(\\beta\n ^2-1\\right) \\left(\\left(\\beta ^2-1\\right) + \\left(\\beta ^2-2\\right) s_p^2 \\right)+\\left(\\beta ^4-3 \\beta ^2+2\\right)^2 s_p^4 e^{-2 d\n q}}}{\\left(\\beta ^2-2\\right)^2},\n\\end{equation}\nhere we have introduced the dimesionless 'plasmon phase velocity' \n\\begin{equation}\n s_p = \\frac{\\omega_p}{qv_0}, \\qquad \\omega_p = \\sqrt{\\frac{2\\pi n e^2 |q|}{\\kappa m_{hd}}}.\n\\end{equation}\nThe signs $+$ and $-$ in the absove dispersion can be traced back to optical and acoustic modes of the double-layer structure in the absence of drift, respectively. Indeed, at $\\beta = 0$ one obtains\n\\begin{equation}\n \\omega^2_{\\pm} = \\frac{q^2v_0^2}{2} + \\omega_p^2(1 \\pm e^{-2qd}).\n\\end{equation}\n\nThe instability emerges as the frequency of acoustic mode in (\\ref{HD-modes-double-layer}) passes through zero with increasing the drift velocity. This occurs at\n\\begin{equation}\n\\beta^\\pm_{\\rm th} = \\frac{v_0}{\\sqrt{2}}\\sqrt{\\frac{q^2v_0^2 + 2\\omega^2_p(1 \\pm e^{-qd})}{q^2v_0^2 + \\omega^2_p(1 \\pm e^{-qd})}}. \n\\end{equation}\n\nThe solution of dispersion relation (\\ref{Double-layer}) with polarizability including e-e collisions shows that instability sets on once the acoustic mode frequency crosses zero, i.e. at $\\rm Re \\omega_- = 0$. A direct verification of this fact is challenging, but the experience of numerical solutions tells that it is the case. The stability diagram in Fig.~\\ref{Fig2} (C) was therefore obtained by numerical solution of $\\epsilon_{2l}(q,0,\\beta)=0$.\n\nWe note that the pattern of instabilities can be much richer if the carrier densities in the two layers are non-equal. In this case, the instability does not necessarily set on once the mode frequency crosses zero. A detailed analysis of these cases will be presented elsewhere.\n\n\\subsection{Diffraction on grating-gated graphene}\nWe consider the diffraction of an electromagnetic wave normally incident on graphene covered by a metal grating. The electric field is polarized along the $x$-axis, i.e. perpendicular to the gratings. The metal grating is assumed to be infinitely thin, its surface conductivity $\\sigma_{\\rm m}$ exceeds the velocity of light, $\\sigma_{\\rm m} \\gg c$, and is set to infinity in the following calculation. The grating-to-graphene distance $d$ is well below the grating period $a$ and with $W = f a$, where $f$ is the filling factor. This ensures efficient coupling of evanescent waves generated by the grating to the surface plasmons. 
We also assume that the structure is globally gated with a highly conducting substrate, the distance to back gate $z_0$ is the largest length scale in the problem, $z_0 \\gg W \\gg d$. We tune $z_0$ to accommodate nearly quarter of wavelength in the substrate material, $z_0 \\sim \\lambda_0\/4n_{\\rm sub}$, where $\\lambda_0$ is the free-space wavelength.\n\nDue to uniformity of 2DES is the $x$-direction, the diffraction problem can be formulated on surface current in the grating $j_s(x)$, $x \\in [0;W]$, while all the information about 2DES is accommodated in the Green's function of electromagnetic problem. This results in the following integral equation\n\\begin{equation}\n\\label{Diffraction-problem}\n \\frac{j_s(x)}{\\sigma_{\\rm m}} = \\mathcal{E}_0 + \\int\\limits_{0}^{W}{dx' Z(x-x') j_s(x')},\n\\end{equation}\nwhere $\\mathcal{E}_0$ is the field in the grating plane $z=0$ {\\it in the absence of grating}. It can be presented as\n\\begin{equation}\n\\mathcal{E}_0 = \\mathcal{E}_{\\rm inc}(1 - r_{2d}),\n\\end{equation}\nwhere $r_{2d}$ is the reflection coefficient of bottom-gated 2DES without grating, and $\\mathcal{E}_{\\rm inc}$ is the electric field in the incident wave. \n\nThe impedance kernel $Z(x-x')$ is obtained as follows. First, one finds electric field induced at $z=0$ by $G$-th spatial Fourier harmonic of surface current passing in the grating plane:\n\\begin{equation}\n\\mathcal{E}_{{\\rm ind}{G}} = j_{s,{G}} Z_{G},\n\\end{equation}\nthe function $Z_{G}$ is easily obtained by plane-wave matching or transfer-matrix methods. Then $Z(x-x')$ is the inverse Fourier transform of $Z_{G}$ with the wave vectors running across the reciprocal wave vectors of the grating, $G_n = 2\\pi n\/a$:\n\\begin{equation}\n\\label{Impedance_kernel}\n Z(x-x') = \\sum\\limits_{n=-\\infty}^{+\\infty}{Z_{G_n} e^{-iG_n(x-x')}}.\n\\end{equation}\n\nOne should distinguish the cases $n=0$ (corresponding to the normally incident propagating wave) and $|n|\\ge 1$ (corresponding to evanescent waves generated by grating):\n\\begin{gather}\n\\label{Impedance1}\n Z^{-1}_{|G|\\ge 2\\pi\/a} = \\frac{i \\omega }{4 \\pi |G| }\\left\\{ 1 - \\varepsilon_{\\rm sub} + \\frac{2 \\varepsilon_{\\rm sub} \\left(1+\\frac{2 i \\pi \\sigma_{2d} |G| }{\\omega \\varepsilon_{\\rm sub} }\\right)}{1+\\frac{2 i \\pi \\sigma_{2d} |G| \n \\left(1-e^{-2 d |G| }\\right)}{\\omega \\varepsilon_{\\rm sub} }} \\right\\},\\\\\n\\label{Impedance2}\n Z^{-1}_{G=0} = \\frac{\\omega}{4 \\pi k_1} \\left\\{\\frac{2 \\varepsilon_{\\rm sub} \\left(1-\\frac{2 \\pi k_1 \\sigma}{\\omega \\varepsilon_{\\rm sub} } \\left(-1+e^{2 i k_1 (z_0 - d)}\\right) \\right)}{\\frac{2 \\pi k_1 \\sigma}{\\omega \\varepsilon_{\\rm sub} }\n \\left(\\left(-1+e^{-2 i d k_1}\\right) e^{2 i k_1 z_0 }+e^{2 i d\n k_1}-1\\right)+e^{2 i k_1 z_0}-1}-\\frac{k_1}{k}+\\varepsilon_{\\rm sub} \\right\\};\n\\end{gather}\nhere $k=\\omega\/c$ and $k_1 = \\omega \\sqrt{\\varepsilon_{\\rm sub}}\/c$ are the wave vectors in vacuum and the substrate material.\n\nIt is important to note that impedance kernel (\\ref{Impedance_kernel}) with Fourier components (\\ref{Impedance1}-\\ref{Impedance2}) diverges at large $G$. This is associated with singularities of electric field near the keen edges of thin metal stripe carrying uniform current. The divergence can be cured in two ways. 
As a first possibility, one can transform Eq.~\\ref{Diffraction-problem} into a second-order differential with respect to $x$ and impose zero boundary conditions on current at the edges \\begin{equation}\n j_s(x=0) = j_s (x=W) = 0.\n\\end{equation}\nThe Fourier components of modified kernel would have extra two powers of $G$ in the denominator, while the coordinate representation will be non-divergent. \n\nAs a second possibility, one can expand the unknown current over the orthogonal basis functions $\\phi_n(x)$ that already satisfy the zero boundary condition:\n\\begin{equation}\n j_s(x) = \\sum_n{c_n \\phi_n(x)},\\qquad \\phi(0) = \\phi(W) = 0.\n\\end{equation}\nThe resulting matrix equation\n\\begin{equation}\n \\left[Z_{nm} - \\frac{\\delta_{nm}}{\\sigma_{\\rm m}}\\right]c_m = (\\mathcal{E}_0)_n\n\\end{equation}\nwould have matrix elements $Z_{nm}$ that quickly converge at large $G$. \n\nWe have chosen the basis functions\n\\begin{equation}\n \\phi_n(x) = \\sqrt{\\frac{2a}{W}}\\sin\\left(\\frac{\\pi n a}{W}\\right)\n\\end{equation}\nthat are orthogonal with respect to the inner product:\n\\begin{equation}\n \\int_0^W{\\frac{dx}{a}\\phi_n(x)\\phi_m(x)} = \\delta_{nm}.\n\\end{equation}\nIn this basis, the elements of the impedance matrix are evaluated analytically at each $G$:\n\\begin{equation}\n (Z_G)_{nm} = 2 \\pi ^2 m n \\frac{W}{a}\\frac{ \\left((-1)^m e^{-i G W} - 1 \\right) \\left((-1)^n e^{i G\n W} - 1\\right)}{\\left(\\pi ^2 m^2-G^2 W^2\\right) \\left(\\pi ^2 n^2-G^2 W^2\\right)} Z_G.\n\\end{equation}\nExpansion of current density over Chebyshev polynomials $U_n(x)$ multiplied by their respective weight functions $w(x)$ is advantageous in predicting the character of the field at the edges. However, it comes at the cost of numerical approximation to matrix elements $Z_{nm}$.\n\nIn actual calculations, we have truncated the linear system at $n = m = 10$, and evaluated the sums up to $G_{\\max} = 20 (2 \\pi \/a)$. \n\n\n\n\\end{widetext}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}