diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzedtw" "b/data_all_eng_slimpj/shuffled/split2/finalzzedtw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzedtw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThree-dimensional (3D) object detection enables a machine to sense its surrounding environment by detecting the location and category of objects around it. Therefore, 3D object detection plays a crucial role in systems that interact with the real world, such as autonomous vehicles and robots. The goal of 3D object detection is to generate 3D Bounding Boxes (BBoxes) parameterized by size, location, and orientation to locate the detected objects. \nMost existing methods rely heavily on LiDAR \\cite{qi2017pointnet++, shi2019pv, shi2020point, shi2020points, shi2019pointrcnn}, because LiDAR can generate point cloud data with high-precision depth information, which enhances the accuracy of 3D object detection. However, the high cost and short service life make it difficult for LiDAR to be widely used in practice. \nAlthough binocular camera-based methods \\cite{li2019stereo, qin2019triangulation, konigshof2019realtime, chen2020dsgn, chen20173d} achieve good detection results, this is still not a cheap option, and there are often difficulties in calibrating binocular cameras. \nIn contrast, the monocular camera is cost-effective, very easy to assemble, and can provide a wealth of visual information for 3D object detection. Monocular 3D object detection has vast potential for applications, such as self-driving vehicles and delivery robots.\n\nMonocular 3D object detection is an extremely challenging task without the depth provided during the imaging process. To address this, researchers have made various attempts on the depth estimation from monocular images. For instance, \\cite{chabot2017deep, barabanau2019monocular} utilize CAD models to assist in estimating the depth of the vehicle. Similarly, a pre-trained depth estimation model is adopted to estimate the depth information of the scene in \\cite{vianney2019refinedmpl, bao2019monofenet, wang2019pseudo}. However, such methods directly or indirectly used 3D depth ground-truth data in monocular 3D object detection. Meanwhile, the methods \\cite{brazil2019m3d, chen2020monopair} without depth estimation can also achieve high accuracy in the 3D object detection task. In this paper, we propose a 3D object detector for monocular images that achieves state-of-the-art performance on KITTI benchmark \\cite{Geiger2012CVPR}.\n\nHumans can perceive how close the objects in a monocular image are from the camera. Why is that? When the human brain interprets the depth of an object, it compares the object with all other objects and the surrounding environment to obtain the difference in visual effect caused by the relative position relationship. For objects of the same size, the bigger, the closer from a fixed perspective. Inspired by this, we propose a novel Asymmetric Non-local Attention Block (ANAB) to compute the response at a position as a weighted sum of the features at all positions. Inspired by \\cite{chen20182, zhu2019asymmetric}, we use both the local features in multiple scales and the features that can represent the global information to learn the depth-wise features. The multi-scale features can reduce computational costs. The attentive maps in multiple scales shows an explicit correlation between the sampling spatial resolution and the depth of the objects. 
\n\nIn one-stage monocular 3D object detection methods, 2D and 3D BBoxes are detected simultaneously. However, for anchor-based methods, there exists feature mismatching in the prediction of 2D and 3D BBoxes. This occurs for two reasons: (1) the receptive field of the feature does not match the shape of the anchor in terms of aspect ratio and size; (2) the center of the anchor, generally considered as the center of the receptive field for the feature map, does not overlap with the center of the object. The misalignment affects the performance of 3D object detection. Thus, we propose a two-step feature alignment method, aiming at aligning the features in 2D and 3D BBox regression. In the first step, we obtain the target region according to the classification confidence scores for the pre-defined anchors. This allows the receptive field of the feature map to focus on the pre-defined anchor regions with high confidence scores. In the second step, we use the prediction results of the 2D\/3D center to compute the feature offset that can mitigate the gap between the predictions and its corresponding feature map.\n\nWe summarize our contributions as follows:\n\\begin{itemize}\n\\setlength\\itemsep{0.1mm}\n \\item We propose a simple but very efficient monocular 3D single-stage object detection (M3DSSD) method. The M3DSSD achieves significantly better performance than the monocular 3D object detection methods on the KITTI dataset for car, pedestrian, and cyclist object class using one single model, in both 3D object detection and bird's eye view tasks.\n \\item We propose a novel asymmetric non-local attention block with multi-scale sampling for the depth-wise feature extraction, thereby improving the accuracy of the object depth estimation. \n \\item We propose a two-step feature alignment module to overcome the mismatching in the size of the receptive field and the size of the anchor, and the misalignment in the object center and the anchor center. \n\\end{itemize}\n\n\\section{Related Work}\n\\label{gen_inst}\nIn order to estimate depth information in monocular images, researchers have proposed many different approaches. For instance, \\cite{xu2018pointfusion, chen2017multi, liang2019multi} utilize point cloud data to obtain accurate 3D spatial information. Pointfusion \\cite{xu2018pointfusion} uses two networks to process images and raw point cloud data respectively, and then fuses them at the feature level. MV3D \\cite{chen2017multi} encodes the sparse point cloud with a multi-view representation and performs region-based feature fusion. Liang et al. \\cite{liang2019multi} exploit the point-wise feature fusion mechanism between the feature maps of LiDAR and images. LiDAR point cloud and image fusion methods have achieved promising performance. However, LiDAR cannot be widely used in practice at present due to its expensive price.\n\nCAD models of vehicles are also used in monocular 3D object detection. Barabanau et al. \\cite{barabanau2019monocular} detects 3D objects via geometric reasoning on key points. Specifically, the dimensions, rotation, and key points of a car are predicted by a convolutional neural network. Then, according to the key points' coordinates on the image plane and the corresponding 3D coordinates on the CAD model, simple geometric reasoning is performed to obtain the depth and 3D locations of the car. 
Deep MANTA \\cite{chabot2017deep} predicts the similarity between a vehicle and a predefined 3D template, as well as the coordinates and visibility of key points, using a convolutional neural network. Finally, given the 2D coordinates of an object's key points and the corresponding 3D coordinates on the 3D template, the vehicle's location and rotation can be solved by a standard 2D\/3D matching \\cite{lepetit2009epnp}. However, it is difficult to collect CAD models in all kinds of vehicles.\n\nMonocular depth estimation networks are adopted in \\cite{vianney2019refinedmpl, ding2019learning, ma2019accurate, bao2019monofenet, xu2018multi, cai2020monocular} to estimate depth or disparity maps. Most of the methods transform the estimated depth map into a point cloud representation and then utilize the approaches based on LiDAR to regress the 3D BBoxes. The performance of these methods relies heavily on the accuracy of the depth map. D4LCN \\cite{ding2019learning} proposed a new type of convolution, termed depth-guided convolution, in which the weights and receptive fields of convolution can be automatically learned from the estimated depth.\nThe projection of the predicted 3D BBox should be consistent with the predicted 2D BBox. This is utilized to build geometric constraints in \\cite{naiden2019shift, gahlert2018mb, mousavian20173d} to determine the depth. \nThanks to the promising performance of convolutional neural networks in 2D object detection, more and more approaches \\cite{brazil2019m3d, li2019gs3d, roddick2018orthographic, qin2019triangulation, qin2019monogrnet, liu2020smoke, chen2020monopair, jorgensen2019monocular} have been proposed to directly predict 3D BBoxes using well-designed convolutional neural network for monocular 3D object detection. GS3D \\cite{li2019gs3d} proposed a two-stage 3D object detection framework, in which the surface feature extraction is utilized to eliminate the problem of representation ambiguity brought by using a 2D bounding box. M3D-RPN \\cite{brazil2019m3d} proposed an anchor-based single-stage 3D object detector that generates both 2D and 3D BBoxes simultaneously. M3D-RPN achieves good performance, but it does not solve the problem of feature misalignment.\n\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=1\\textwidth]{overview_1.pdf}\n \\caption{ The architecture of M3DSSD. (a) The backbone of the framework, which is modified from DLA-102 \\cite{yu2018deep}. (b) The two-step feature alignment, classification head, 2D\/3D center regression heads, and ANAB especially designed for predicting the depth $z_{3d}$. (c) Other regression heads.\n} \n \\label{figure1}\n \\vspace{-4mm}\n\\end{figure*} \n\n\n\\section{Method}\n\\label{headings}\nIn this section, we describe the proposed M3DSSD, which consists of four main components: the backbone, the feature alignment, the asymmetric non-local attention block, and the 2D-3D prediction heads, as shown in Fig.~\\ref{figure1}. The details of each component are described below.\n\n\\subsection{Backbone}\nFollowing \\cite{yu2018deep}, we adopt the Deep Layer Aggregation network DLA-102 as the backbone. To adaptively change the receptive field and enhance the feature learning \\cite{zhou2019objects, liu2020smoke}, all the convolution in hierarchical aggregation connections are replaced with Deformable Convolution (DCN) \\cite{zhu2019deformable}. 
The down-sampling ratio is set to 8, and the size of the output feature map is $256 \\times H\/8 \\times W\/8$, where $H$ and $W$ are the height and width of the input image. \n\n\n\\subsection{Feature Alignment}\nAnchor-based methods often suffer from feature mismatching. On the one hand, this occurs if the receptive field of the feature does not match the shape of the anchor in terms of aspect ratio and size. On the other hand, the center of the anchor, generally considered as the center of the receptive field of the feature, might not overlap with the center of the object. The proposed feature alignment consists of shape alignment and center alignment: (1) shape alignment aims at forcing the receptive field of the feature map to focus on the anchor with the highest classification confidence score; (2) center alignment is performed to reduce the gap between the feature at the center of the object and the feature that represents the center of the anchor. Different from previous feature alignment methods \\cite{chen20182, wang2019region} that are applied to one-stage object detection via a two-shot regression, the proposed feature alignment can be applied in one shot, which is more efficient and self-adaptive.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{\\columnwidth}\n \\label{fig.shape1}\n \\includegraphics[width=\\textwidth]{shape_align.pdf}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.73\\columnwidth}\n \\label{fig.shape2}\n \\includegraphics[width=\\textwidth]{shape_align.png}\n \\end{subfigure}\n\n \\caption{\\small The architecture of shape alignment and its outcome on objects. The yellow squares indicate the sampling locations of the AlignConv, and the anchors are in red.}\n \\vspace{-4mm}\n \\label{fig.shape_align}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{\\columnwidth}\n \\label{fig.center1}\n \\includegraphics[width=\\textwidth]{center_align.pdf}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.73\\columnwidth}\n \\label{fig.center2}\n \\includegraphics[width=\\textwidth]{center_align.png}\n \\end{subfigure}\n \\caption{\\small The architecture of center alignment and its outcome on objects. When applying center alignment to objects, the sampling locations on the foreground regions (in white) all concentrate on the centers of objects (in yellow) after center alignment, which are near to the true centers of objects (in red).}\n \\vspace{-4mm}\n \\label{fig.center_align}\n\\end{figure}\n\n\\textbf{Shape alignment} We first obtain the foreground regions according to the classification results. Then, the receptive fields of the features in the foreground regions can focus on the anchor with the highest confidence score, as shown in Fig.~\\ref{fig.shape_align}. This makes sense because, among all the anchors located at the same position, the one with the highest confidence is the most likely to remain after the NMS algorithm. We use a convolution termed AlignConv in the implementation of shape alignment and center alignment. AlignConv is similar to the deformable convolution \\cite{zhu2019deformable}. The difference is that the offset of the former is computed from the prediction results. A normal convolution can be considered as a special case of AlignConv where the offset equals zero. Unlike the RoI convolution proposed in \\cite{chen2019revisiting}, we align the shape of the receptive field or the location of the center in one shot.\n
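To make this concrete, AlignConv can be sketched in PyTorch as follows; this is an illustrative simplification with a square $3 \\times 3$ kernel and placeholder tensor sizes, not the exact implementation.\n\\begin{verbatim}\n# Sketch of an AlignConv layer: a deformable convolution whose offsets\n# are supplied from prediction results instead of being learned by an\n# extra convolution; a zero offset recovers an ordinary convolution.\nimport torch\nfrom torchvision.ops import deform_conv2d\n\nclass AlignConv(torch.nn.Module):\n    def __init__(self, in_ch, out_ch):\n        super().__init__()\n        self.weight = torch.nn.Parameter(\n            0.01 * torch.randn(out_ch, in_ch, 3, 3))\n\n    def forward(self, feat, offset):\n        # offset: (N, 18, H, W) for the 3x3 kernel, computed from the\n        # predicted anchor shape or the predicted object center.\n        return deform_conv2d(feat, offset, self.weight, padding=1)\n\nfeat = torch.randn(1, 256, 48, 160)\noffset = torch.zeros(1, 18, 48, 160)   # zero offset = plain 3x3 conv\nout = AlignConv(256, 256)(feat, offset)\n\\end{verbatim}\n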
When performing shape alignment on the feature map with stride $S$, the offset $(O^{sa}_i, O^{sa}_j)$ of the convolution with kernel size $k_h \\times k_w$ is defined as:\n\\begin{eqnarray}\n&O^{sa}_i = (\\frac{h_{a}}{S \\times k_h} - 1) \\times (i - \\frac{k_h}{2} + 0.5), \\\\\n&O^{sa}_j = (\\frac{w_{a}}{S \\times k_w} - 1) \\times (j - \\frac{k_w}{2} + 0.5),\n\\end{eqnarray}\nwhere $h_{a}, w_{a}$ are the height and the width of the anchor with the highest confidence.\n\n\\textbf{Center alignment} The purpose of center feature alignment is to align the feature at the center of the object to the feature that represents the center of the anchor. As shown in Fig.~\\ref{fig.center_align}, the prediction results from the 2D\/3D center regression are used to compute the offset of the convolution on the feature map with stride $S$:\n\\begin{eqnarray}\n&O^{ca}_i = \\frac{y_r}{S},\n&O^{ca}_j = \\frac{x_r}{S}, \n\\end{eqnarray}\nwhere $x_{r}$ and $y_{r}$ are the prediction results of the 2D\/3D centers in objects, respectively. As shown in Fig.~\\ref{fig.center_align}, when center alignment with a $1 \\times 1$ convolutional kernel is applied to the feature map, the sampling position is adaptively concentrated on the center of objects.\n\n\\subsection{Asymmetric Non-local Attention Block}\n\\label{sect:asy}\n\nWe propose a novel asymmetric non-local attention block to improve the accuracy of the depth $z_{3d}$ prediction by extracting the depth-wise features that can represent the global information and the long-range dependencies. The standard non-local block \\cite{wang2018non} is promising in establishing long-range dependencies, but its computational complexity is $O(N^2C)$, where $N = h \\times w$, $h$, $w$ and $C$ indicate the spatial height, width, and channel number of the feature map, respectively. This is very computationally expensive and inefficient compared to normal convolutions. Thus, the applications are limited. The Asymmetric Pyramid Non-local Block \\cite{zhu2019asymmetric} reduces the computational cost by decreasing the number of feature descriptors using pyramid pooling. However, pyramid pooling on the same feature map may lead to features with low resolution being replaced with high-resolution features. In other words, there exists redundancy in the computational cost regarding the image resolution. As such, we propose an Asymmetric Non-local Attention Block (ANAB), which can extract multi-scale features to enhance the feature learning with a low computational cost.\n\n\\begin{figure}[t!]\n \\centering\n \n \n \n \n \n \n \\begin{subfigure}{\\columnwidth}\n \\label{fig:anab}\n \\includegraphics[width=\\textwidth]{ANAB.pdf}\n \\end{subfigure}\n \n \\begin{subfigure}{\\columnwidth}\n \\label{fig:papa}\n \\includegraphics[width=\\textwidth]{PAPA.pdf}\n \\end{subfigure}\n \\caption{\\small Top: Asymmetric Non-local Attention Block. The key and query branches share the same attention maps, which forces the key and value to focus on the same place. Bottom: Pyramid Average Pooling with Attention ($PA^2$) that generates different level descriptors in various resolutions.}\n \\label{fig:pan}\n \\vspace{-4mm}\n\\end{figure}\n\n\nAs shown at the top of Fig.~\\ref{fig:pan}, we use the pyramidal features of the $key$ and $value$ branches to reduce the computational cost. The bottom of Fig.~\\ref{fig:pan} illustrates the Pyramid Average Pooling with Attention ($PA^2$) module. 
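Before giving the exact operations, the asymmetric attention pattern itself can be sketched as follows; this is a schematic simplification (the query, key, and value here share one feature map, plain average pooling is used, and the attention weighting of the $PA^2$ module is omitted), not the actual implementation.\n\\begin{verbatim}\n# The query keeps all N = h*w positions, while the key and value are\n# pooled to a small pyramid with L descriptors (L is the sum of the\n# squared pooling sizes), so the attention costs O(NLC) instead of\n# O(N^2 C).\nimport torch\nimport torch.nn.functional as F\n\ndef asymmetric_attention(feat, pool_sizes=(1, 4, 8, 16)):\n    n, c, h, w = feat.shape\n    q = feat.flatten(2).transpose(1, 2)                  # (n, N, C)\n    kv = torch.cat([F.adaptive_avg_pool2d(feat, s).flatten(2)\n                    for s in pool_sizes], dim=2)\n    kv = kv.transpose(1, 2)                              # (n, L, C)\n    sim = torch.softmax(q @ kv.transpose(1, 2), dim=-1)  # (n, N, L)\n    out = sim @ kv                                       # (n, N, C)\n    return out.transpose(1, 2).reshape(n, c, h, w)\n\ny = asymmetric_attention(torch.randn(1, 256, 48, 160))\n\\end{verbatim}\n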
The different levels of the feature pyramid have different receptive fields, thereby modeling regions with different scales. \nTwo matrix multiplications are performed in ANAB. First, the similarity matrix between the reshaped feature matrices $\\mathbf{M}_Q$ and $\\mathbf{M}_K$ obtained from $query$ and $key$ is defined as:\n\\begin{equation}\n \\mathbf{M}_{S} = \\mathbf{M}_{Q} \\times \\mathbf{M}_{K}^T, \\quad\\mathbf{M}_{Q} \\in \\mathbb{R}^{N\\times C}, \\mathbf{M}_{K} \\in \\mathbb{R}^{L\\times C}. \n\\end{equation}\nThen, the softmax function is used to normalize the last dimension of the similarity matrix, which is then multiplied by the reshaped feature matrix $\\mathbf{M}_V$ obtained from $value$ to get the output:\n\\begin{equation}\n \\mathbf{M}_{out} = Softmax(\\mathbf{M}_{S}) \\times \\mathbf{M}_{V} , \\quad \\mathbf{M}_{V} \\in \\mathbb{R}^{L\\times C},\n\\end{equation}\nwhere $L$ is the number of features after sampling. The standard non-local block \\cite{wang2018non} has computational complexity $O(N^2C)$, while the complexity of ANAB is $O(NLC)$. In practice, $L$ is usually significantly smaller than $N$. In our case, we use a four-level downsampling strategy on the $48 \\times 160$ feature map. The output resolutions of the four pyramid levels are set to $\\{1\\times1, 4\\times4, 8\\times8, 16\\times16\\}$, and their sum gives the total number $L$ of features after downsampling. So $L = 337$ is much smaller than $N = 7680$.\n\nAnother effective component of ANAB is the application of the multi-scale attention maps to the $key$ and $value$ branches in the $PA^2$ module, as shown at the bottom of Fig. \\ref{fig:pan}. \nThe motivation is to keep the key information of the original feature map when greatly reducing the dimensions of the matrices $\\mathbf{M}_{K}$ and $\\mathbf{M}_{V}$ from $N \\times C$ to $L \\times C$. The spatial attention maps generated by a $1 \\times 1$ convolutional layer are used as weights. This module adaptively adjusts the weights to pay more attention to the useful information and to suppress the less useful information. \nThe attention map can be treated as a mask applied to the multi-scale features. We use average pooling with attention to downsample the feature maps. Such a weighted average pooling operation offers an efficient way to gather the key features.\n\n\\subsection{2D-3D Prediction and Loss}\n\\textbf{Anchor definition.}\nWe adopt a one-stage 2D-3D anchor-based network as our detector. To detect the 2D and the 3D BBoxes simultaneously, our predefined anchor contains the parameters of both the 2D BBoxes $[w, h]_{2d}$ and the 3D BBoxes $[z, w, h, l, \\alpha]_{3d}$. Here, $\\alpha$ is the observation angle of the object, i.e., the angle at which the camera views the object.\nCompared with the rotation angle of the object, the observation angle is more meaningful for monocular 3D object detection \\cite{mousavian20173d}. The dimension of the object is given by $[w, h, l]_{3d}$. We project the center of the object onto the image plane to encode the 3D location of the object into the anchor:\n\n\\begin{equation}\n\\begin{bmatrix}\nX_{p} & Y_{p} & 1\n\\end{bmatrix} ^\\mathrm{T} \\cdot Z_{p}\n = \\mathbf{K} \\cdot \n \\begin{bmatrix}\n X & Y & Z & 1\n \\end{bmatrix} ^\\mathrm{T}, \n \\label{eqn.1}\n\\end{equation}\nwhere $(X_p, Y_p)$ are the coordinates of the 3D point projected onto the image plane, and $(X, Y, Z)$ are the 3D space coordinates in the camera coordinate system.\n
$K \\in \\mathbb{R}^{3\\times4}$ is the intrinsic camera matrix, which is known at both the training and testing phase.\nWe obtain the 3D parameters of each anchor by computing the mean of the corresponding 3D parameters of the objects whose intersection over union (IoU) is greater than a given threshold (0.5) with the predefined 2D anchors $[w, h]_{2d}$.\n\n\\textbf{Output transformation.}\nGiven the detection outputs $cls$, $[t_x, t_y, t_w, t_h]_{2d}$ and $[t_x, t_y, t_z, t_w, t_h, t_l, t_\\alpha]_{3d}$ for each anchor, the 2D BBox $[X, Y, W, H]_{2d}$ and 3D BBox $[X, Y, Z, W, H, L, A]_{3d}$ can be restored from the output of the detector by:\n\\begin{align}\n&[X, Y]_{2d} = [t_x, t_y]_{2d} \\otimes [w, h]_{2d} + [x, y]_{2d} \\notag \\\\\n&[W, H]_{2d} = \\exp([t_w, t_h]_{2d}) \\otimes [w, h]_{2d} \\notag \\\\\n&[X_p, Y_p]_{3d} = [t_x, t_y]_{3d} \\otimes [w, h]_{2d} + [x, y]_{2d} \\notag \\\\\n&[W, H, L]_{3d} = \\exp([t_w, t_h, t_l]_{3d}) \\otimes [w, h, l]_{3d} \\notag \\\\\n&[Z_p, A]_{3d} = [t_z, t_\\alpha] + [z, \\alpha]_{3d} , \\notag \\\\\n\\end{align}\nwhere $\\otimes$ denotes the element-wise product and $A$ is the rotation angle. During the inference phase, $[X, Y, Z]_{3d}$ can be obtained by projecting $[X_p, Y_p, Z_p]$ back to the camera coordinate system using the inverse operation of Eqn.~\\ref{eqn.1}.\n\n\\textbf{Loss function.}\nWe employ a multi-task loss function to supervise the learning of the network, which is composed of three parts: a classification loss, 2D BBox regression loss, and 3D BBox regression loss. The 2D regression and 3D regression loss are regularized with weights $\\lambda_1$ and $\\lambda_2$:\n\\begin{equation}\n\\label{loss}\n L = L_{cls} + \\lambda_1 L_{2d} + \\lambda_2 L_{3d} ,\n\\end{equation}\nFor the classification task, we employ the standard cross entropy loss function: \n\\begin{equation}\n L_{cls} = -\\log(\\frac{\\exp(c')}{\\sum \\exp(c_i)}).\n\\end{equation}\nFor the 2D BBox regression task, we use $-\\log(IoU)$ as the loss function for the ground-truth 2D BBox $\\hat b_{2d}$ and the predicted 2D BBox $b'_{2d}$, similar to \\cite{brazil2019m3d}:\n\\begin{equation}\n L_{2d} = -\\log(IoU(b'_{2d}, \\hat b_{2d})).\n\\end{equation}\nA smooth L1 loss function is employed to supervise the regression of 3D BBoxes:\n\\begin{equation}\n\\begin{array}{l}\n L_{3d} = \\sum\\limits_{v_{3d} \\in P_{3d}} SmoothL_1(v'_{3d}, \\hat v_{3d}), \\\\ P_{3d} = \\{t_{x}, t_{y}, t_{z}, t_{w}, t_{h}, t_{l}, t_{\\alpha}\\}_{3d}.\n\\end{array}\n\\end{equation}\n\n\n\\begin{table*}\n \\small\n \\centering\n \\resizebox{1\\textwidth}{!}{\n \\begin{tabular}{l|c|ccc|ccc}\n \\toprule\n \\multirow{2}*{Methods} & \\multirow{2}*{Extra} & \\multicolumn{3}{|c|}{$AP_{3d}(val\/test) \\quad IoU \\ge 0.7$} & \\multicolumn{3}{c}{$AP_{BEV}(val\/test) \\quad IoU \\ge 0.7$}\\\\\n {} & {} & Easy & Moderate & Hard & Easy & Moderate & Hard \\\\\n \\midrule\n \n MonoFENet\\cite{bao2019monofenet} & Depth &17.54 \/ 8.35 &11.16 \/ 5.14 &9.74 \/ 4.10 &30.21 \/ 17.03 &20.47 \/ 11.03 &17.58 \/ 9.05 \\\\\n AM3D\\cite{ma2019accurate} & Depth &32.23 \/ 16.50 &21.09 \/ 10.74 &17.26 \/ 9.52 &43.75 \/ 25.03 &28.39 \/ 17.32 &23.87 \/ 14.91 \\\\\n D4LCN\\cite{ding2019learning} & Depth &26.97 \/ 16.65 &21.71 \/ 11.72 &18.22 \/ 9.51 &34.82 \/ 22.51 &25.83 \/ 16.02 &23.53 \/ 12.55 \\\\\n \\midrule\n GS3D\\cite{li2019gs3d} &None &13.46 \/ 4.47 &10.97 \/ 2.90 &10.38 \/ 2.47 &\\qquad- \/ 8.41 &\\qquad- \/ 6.08 &\\qquad- \/ 4.94 \\\\\n MonoPSR\\cite{ku2019monocular} &None &12.75 \/ 10.76 &11.48 \/ 7.25 &8.59 \/ 
5.85 &20.63 \/ 18.33 &18.67 \/ 12.58 &14.45 \/ 9.91 \\\\\n MonoGRNet\\cite{qin2019monogrnet} &None &13.88 \/ 9.61 &10.19 \/ 5.74 &7.62 \/ 4.25 &\\qquad- \/ 18.19 & \\qquad- \/ 11.17 &\\qquad- \/ 8.73 \\\\\n SS3D\\cite{jorgensen2019monocular} &None &14.52 \/ 10.78 &13.15 \/ 7.68 &11.85 \/ 6.51 &\\qquad- \/ 16.33 &\\qquad- \/ 11.52 &\\qquad- \/ 9.93 \\\\\n MonoDIS\\cite{simonelli2019disentangling} &None &18.05 \/ 10.37 &14.98 \/ 7.94 &13.42 \/ 6.40 &24.26 \/ 17.23 &18.43 \/ 13.19 &16.95 \/ 11.12 \\\\\n MonoPair\\cite{chen2020monopair} &None &\\qquad- \/ 13.04 &\\qquad- \/ 9.99 &\\qquad- \/ 8.65 &\\qquad- \/ 19.28 &\\qquad- \/ 14.83 &\\qquad- \/ \\textbf{12.89} \\\\\n SMOKE\\cite{liu2020smoke} &None &14.76 \/ 14.03 &12.85 \/ 9.76 &11.50 \/ 7.84 &19.99 \/ 20.83 &15.61 \/ 14.49 &15.28 \/ 12.75 \\\\\n M3D-RPN\\cite{brazil2019m3d} &None &20.27 \/ 14.76 &17.06 \/ 9.71 &15.21 \/ 7.42 &25.94 \/ 21.02 &21.18 \/ 13.67 &17.90 \/ 10.23 \\\\\n RTM3D\\cite{li2020rtm3d} &None &20.77 \/ 14.41 &16.86 \/ 10.34 &16.63 \/ 8.77 &25.56 \/ 19.17 &22.12 \/ 14.20 &20.91 \/ 11.99 \\\\\n \\midrule\n M3DSSD(ours) &None &\\textbf{27.77 \/ 17.51} &\\textbf{21.67 \/ 11.46} &\\textbf{18.28 \/ 8.98} &\\textbf{34.51 \/ 24.15} &\\textbf{26.20 \/ 15.93} &\\textbf{23.40} \/ 12.11 \\\\\n \n \\bottomrule\n \\end{tabular}\n }\n \\caption{AP scores on $val$ and $test$ set of 3D object detection and bird's eye view for cars. }\n \\label{tab:ap}\n \n\\end{table*}\n\n\\section{Experiments}\n\n\\begin{figure*}[htbp]\n \\centering\n \\includegraphics[width=1\\textwidth]{vis.png}\n \\caption{Qualitative results of 3D detection (left) and bird's eye view (right), prediction in green and ground-truth in red.} \n \\label{fig:vis}\n \\vspace{-4mm}\n\\end{figure*}\n\n\\subsection{Evaluation Dataset}\nWe evaluate our framework on the challenging KITTI benchmark for 3D object detection and bird's eye view tasks. The KITTI dataset contains 7481 images with labels and 7518 images for testing, covering three main categories of objects: cars, pedestrians, and cyclists. We use common split methods \\cite{chen20173d} to divide the images with labels into the training set and the validation set. We pad the images to the size of $384 \\times 1280$ in both the training and inference phase. In the training phase, in addition to the conventional data augmentation methods of random translation and horizontal mirror flipping, the random scaling operation is applied for monocular images. 
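As a rough illustration of this preprocessing, a per-image pipeline could look like the sketch below; the target size follows the text above, while the flip probability and scaling range are placeholder choices, and the random translation as well as the corresponding updates of the 2D\/3D labels and camera intrinsics are omitted.\n\\begin{verbatim}\n# Pad (or crop) to 384x1280 after a random horizontal mirror flip and a\n# random rescaling of the input image.\nimport numpy as np\n\ndef augment(img, rng=np.random.default_rng()):\n    if rng.random() < 0.5:                   # horizontal mirror flip\n        img = img[:, ::-1]\n    s = rng.uniform(0.9, 1.1)                # random scaling (assumed range)\n    ys = np.linspace(0, img.shape[0] - 1, int(img.shape[0] * s)).astype(int)\n    xs = np.linspace(0, img.shape[1] - 1, int(img.shape[1] * s)).astype(int)\n    img = img[ys][:, xs]                     # nearest-neighbour resize\n    canvas = np.zeros((384, 1280, 3), dtype=img.dtype)\n    h = min(384, img.shape[0])\n    w = min(1280, img.shape[1])\n    canvas[:h, :w] = img[:h, :w]\n    return canvas\n\\end{verbatim}\n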
\n\n\\begin{table*}[htbp]\n \\centering\n \n \\begin{tabular}{l|ccc|ccc}\n \\toprule\n \\multirow{2}*{Methods} & \\multicolumn{3}{|c|}{$Pedestrian \\quad AP_{3D}\/AP_{bev}$} & \\multicolumn{3}{c}{$Cyclist \\quad AP_{3D}\/AP_{bev}$}\\\\\n {} &Easy &Moderate &Hard &Easy &Moderate &Hard\\\\\n \\midrule\n M3D-RPN\\cite{brazil2019m3d} &4.92 \/ 5.65 &3.48 \/ 4.05 &2.94 \/ 3.29 &0.94 \/ 1.25 & 0.65 \/ 0.81 & 0.47 \/ 0.78\\\\\n D4LCN\\cite{ding2019learning} &4.55 \/ 5.06 &3.42 \/ 3.86 &2.83 \/ 3.59 &2.45 \/ 2.72 &\\textbf{1.67} \/ 1.82 &1.36 \/ \\textbf{1.79}\\\\\n SS3D\\cite{jorgensen2019monocular} &2.31 \/ 2.48 &1.78 \/ 2.09 &1.48 \/ 1.61 &\\textbf{2.80} \/ \\textbf{3.45} &1.45 \/ 1.89 &1.35 \/ 1.44\\\\\n \\midrule\n M3DSSD(ours) &\\textbf{5.16} \/ \\textbf{6.20} &\\textbf{3.87} \/ \\textbf{4.66} &\\textbf{3.08} \/ \\textbf{3.99} &2.10 \/ 2.70 &1.51 \/ \\textbf{2.01} &\\textbf{1.58} \/ 1.75\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Detection performance for pedestrians and cyclists on $test$ set, at $0.5$ $IoU$ threshold.}\n \\label{tab:ped}\n \\vspace{-4mm}\n\\end{table*}\n\n\\subsection{Implementation Details}\nWe implement our model with PyTorch. We adopt the SGD optimizer with momentum to train the network with a CPU E52698 and GPU TITAN V100, in an end-to-end manner, for 70 epochs. The momentum of the SGD optimizer is set to 0.9, and weight decay is set to 0.0005. The mini-batch size is set to 4. The learning rate increases linearly from 0 to the target learning rate of 0.004 in the first epoch and then decreases to $4\\times 10^{-8}$ with cosine annealing. Terms $\\lambda_1$ and $\\lambda_2$ in Eqn.~\\ref{loss} are both set to 1.0. \nWe lay 36 anchors on each pixel of the feature map, the size of which increases from 24 to 288 following the exponential function of $24 \\times 12^{i\/11}, i \\in \\{0, 1, 2, \\dots , 11\\}$, and the aspect ratio is set to $\\{0.5, 1.0, 1.5\\}$. We apply online hard-negative mining by sampling the top $20\\%$ high loss boxes in each minibatch in the training phase. \nIn the inference phase, we apply NMS with 0.4 IoU criteria on the 2D BBox and filter out the objects with a confidence lower than 0.75. The post-optimization algorithm proposed in \\cite{brazil2019m3d} is used to make the rotation angle more reasonable. \nThe algorithm uses projection consistency to optimize the rotation angle. The rotation angle is optimized iteratively to minimize the L1 loss of the projection of the predicted 3D BBox and the predicted 2D BBox.\n\n\\subsection{Performance Evaluation}\nWe set the network after removing the feature alignment module and ANAB from M3DSSD as the baseline. More specifically, for the baseline, the feature map output from the backbone is directly used for classification and 2D BBox regression and 3D BBox regression.\n\nWe evaluate our framework on the KITTI benchmark for both bird's eye view and 3D object detection tasks. The average precision (AP) of Intersection over Union (IoU) is used as the metric for evaluation in both tasks and it is divided into easy, moderate, and hard according to the height, occlusion, and truncation level of objects. Note that the official KITTI evaluation has been using $AP|_{R40}$ with 40 recall points instead of $AP|_{R11}$ with 11 recall points since October 8, 2019. However, most previous methods evaluated on the validation used $AP|_{R11}$. Thus, we report the $AP|_{R40}$ for the test dataset and $AP|_{R11}$ for the validation dataset for a fair comparison. 
We set the threshold of IoU to 0.7 for cars and 0.5 for pedestrians and cyclists as the same as the official settings. Fig.~\\ref{fig:vis} shows qualitative results for 3D object detection and bird's eye view. The detection results and depth predictions are less accurate with further distance. The videos of 3D object detection results and the additional results can be found in the supplemental material. \n \n \n\n\\textbf{Bird's eye view.}\nThe bird's eye view task is to detect objects projected on the ground, which is closely related to the 3D location of objects. The detection results for cars on both the $val$ and $test$ set are reported in Tab.~\\ref{tab:ap}. M3DSSD achieves state-of-the-art performance on the bird's eye view task compared to approaches with and without depth estimation. Our method has significant improvement compared to the methods without depth estimation. \n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{gt_depth_err.pdf}\n \\caption{The average depth estimation error varies along with the ground truth depth. Best viewed in color.} \n \\label{figure.depth_err}\n \\vspace{-4mm}\n\\end{figure}\n\n\\begin{table}[htbp]\n \\centering\n \\resizebox{1\\columnwidth}{!}{\n \\begin{tabular}{l|ccc}\n \\toprule\n \\multirow{2}*{Methods} & \\multicolumn{3}{|c}{$AP_{3d}\/AP_{BEV}\\quad IoU \\ge 0.7 $}\\\\\n {} &Easy &Mod. &Hard \\\\\n \\midrule\n \n \n \n \n \n \n \n Baseline w\/ ANAB \\dag & 25.70 \/ 33.48 &19.02 \/ 24.79 &17.31 \/ 20.15 \\\\ \n \\dag w\/ Shape Alignment & 27.26 \/ 33.64 &21.56 \/ 25.24 &18.07 \/ 22.81 \\\\\n \\dag w\/ Center Alignment & 27.33 \/ \\textbf{34.85} &21.51 \/ 25.96 &18.03 \/ 23.26 \\\\\n \\dag w\/ Full Alignment & \\textbf{27.77} \/ 34.51 &\\textbf{21.67} \/ \\textbf{26.20} &\\textbf{18.28} \/ \\textbf{23.40} \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Ablation study on feature alignment.}\n \\label{tab:feature alignment}\n \n\\end{table}\n\n\\begin{table*}[t]\n \\centering\n \\resizebox{0.9\\textwidth}{!}{\n \\begin{tabular}{l|ccc}\n \\toprule\n \\multirow{2}*{Methods} & \\multicolumn{3}{|c}{$AP_{3d}\/AP_{BEV} \\quad IoU \\ge 0.7 $} \\\\\n {} &Easy &Moderate &Hard \\\\\n \\midrule\n baseline &23.40 \/ 28.66 &18.32 \/ 23.53 &16.62 \/ 19.54 \\\\\n ANB &23.65 \/ 29.19 &18.47 \/ 23.65 &16.54 \/ 19.50 \\\\\n ANAB &\\textbf{25.70} \/ \\textbf{33.48} &\\textbf{19.02} \/ \\textbf{24.79} &\\textbf{17.31} \/ \\textbf{20.15} \\\\\n \\bottomrule\n \\end{tabular}\n \n \\begin{tabular}{||l|c|c}\n \\toprule\n \\multirow{2}*{Methods} & \\multirow{2}*{GPU time} & \\multirow{2}*{GPU memory}\\\\\n {} & (ms) & (Gbyte)\\\\\n \\midrule\n Non-local \\cite{wang2018non} & 5.89 \/ 104.12 & 1.97 \/ 15.67 \\\\\n ANB & \\textbf{1.68} \/ \\textbf{5.92} & \\textbf{1.09} \/ \\textbf{1.43} \\\\\n ANAB & 1.86 \/ 6.76 & 1.22 \/ 1.91 \\\\\n \\bottomrule\n \\end{tabular}}\n \n \\caption{Ablation study on non-local blocks with detection accuracy, GPU time, and memory for different input sizes.}\n \\label{tab:anab}\n \n\\end{table*}\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{align_depth_err.pdf}\n \\caption{The average depth estimation error varies along with the size of objects that is the average of the length and the width of the 2D BBox. 
Best viewed in color.} \n \\label{figure.depth_err_alg}\n \\vspace{-4mm}\n\\end{figure}\n\n\\begin{figure*}[htb]\n \\flushright\n \\includegraphics[width=0.93\\textwidth]{vis_atten.pdf}\n \\caption{Visualization of the attention maps in $PA^2$ with a four-level feature pyramid $\\{1\\times1, 4\\times4, 8\\times8, 16\\times16\\}$.} \n \\label{figure.anab_vis}\n \\vspace{-4mm}\n\\end{figure*}\n\\textbf{3D object detection for cars.}\nThe 3D object detection task aims to detect 3D objects in the camera coordinate system, which is more challenging than the bird's eye view task due to the additional y-axis. Compared with the approaches without depth estimation, Tab.~\\ref{tab:ap} shows that M3DSSD achieves better performance on both the $val$ and $test$ sets. Note that M3DSSD is better than most of the approaches with depth estimation. Further, our method achieves competitive performance against D4LCN, which adopts a pre-trained model for depth estimation \\cite{ding2019learning}.\n\nFig.~\\ref{figure.depth_err} shows the average depth estimation error for different ground-truth depth ranges \\cite{liu2020smoke}. We compare our proposed method with SMOKE \\cite{liu2020smoke}, Mono3D \\cite{chen2016monocular} and 3DOP \\cite{chen20173d} on the same validation set. Fig.~\\ref{figure.depth_err} demonstrates that the proposed M3DSSD achieves better performance at all distance ranges, except for distances greater than 60m, where the number of samples is usually small.\n\n\n\\textbf{3D object detection for pedestrians and cyclists.}\nCompared with cars, 3D object detection for pedestrians and cyclists is more challenging. This is because pedestrians and cyclists are relatively small. In addition, people are non-rigid bodies and their shapes vary a lot, which makes it difficult to locate pedestrians and cyclists.\nWe report the detection results for pedestrians and cyclists on the test set of the KITTI benchmark in Tab.~\\ref{tab:ped}. Since some methods did not report pedestrian and cyclist results, we compare our model with M3D-RPN \\cite{brazil2019m3d}, D4LCN \\cite{ding2019learning}, and SS3D \\cite{jorgensen2019monocular}. Our model achieves competitive performance in both the 3D detection and bird's eye view tasks for pedestrians and cyclists, especially for the pedestrian category. Note that we train only one single model to detect the three object classes simultaneously. \n\n\\subsection{Ablation Study}\n\\textbf{Feature alignment.} \nWe evaluate the feature alignment strategies, including shape alignment, center alignment, and full alignment (both center alignment and shape alignment). \nAs shown in Tab. \\ref{tab:feature alignment}, the proposed shape alignment, center alignment, and full alignment all achieve better results than the case without alignment. \n\nFig.~\\ref{figure.depth_err_alg} illustrates how the average depth estimation error varies with the size of objects for the models with and without feature alignment. The x-axis is the size of the 2D BBox, $(w_{2d}+h_{2d})\/2$. It shows that the proposed feature alignment module is effective on objects of different sizes, especially for small objects in the $[0, 25]$ range.\n
This also explains why M3DSSD outperforms other methods in small object detection, e.g., for pedestrians and cyclists.\n\n\\textbf{Asymmetric non-local attention block.}\nWe compare the Asymmetric Non-local Block (ANB) and our proposed Asymmetric Non-local Attention Block (ANAB), which applies pyramid average pooling with attention on the feature map. We use the same sampling sizes for both methods. Tab.~\\ref{tab:anab} shows that the network with ANAB achieves the best performance. With a similar computational time, the proposed ANAB has better detection accuracy than ANB. Meanwhile, both methods cost much less GPU time and memory than the standard non-local block \\cite{wang2018non}. The attention module costs slightly more time but brings a significant improvement, especially on the easy setting. Tab. \\ref{tab:anab} on the right shows the GPU time and memory for the input sizes $[1, 256, 48, 160]$ and $[1, 256, 96, 320]$. It shows that the computational cost is closer to the theoretical analysis in Sect.~\\ref{sect:asy} for the larger input size. ANAB has extra pooling layers, convolutional layers, and an element-wise multiplication, which are not considered in the theoretical analysis.\n\nIn ANAB, the attention maps are assigned to the multi-scale pooling operations for the depth-wise feature extraction. Fig.~\\ref{figure.anab_vis} shows that the attention map for the $1 \\times 1$ pyramid level has larger weights on the objects that are close to the camera, while the attention maps for the higher pyramid levels assign larger weights to the objects that are far away from the camera. The attention maps at different levels thus show a correlation between the resolution of the feature pyramid and the object depth. This lies in the fact that a pyramid level with low resolution has a large receptive field that is sensitive to large objects, while a pyramid level with high resolution has a small receptive field that is sensitive to small objects. For objects of the same class viewed from a fixed perspective, the smaller an object appears, the farther away it is.\nThe depth-wise attention maps enhance the capability of perceiving the depth of objects, thereby improving the performance of object depth estimation.\n\n\\section{Conclusion}\n\nIn this work, we propose a simple and very effective monocular single-stage 3D object detector. We present a two-step feature alignment approach to address the feature mismatching, which enhances the feature learning for object detection. The asymmetric non-local attention block enables the network to extract depth-wise features, which improves the performance of the depth prediction in the regression head. Compared to the methods with or without the estimated depth as an extra input, M3DSSD achieves better performance on the challenging KITTI dataset for the car, pedestrian, and cyclist object classes using one single model, for both the bird's eye view and 3D object detection tasks.\n\n\\textbf{Acknowledgement:} This work was supported in part by the National Key Research and Development Program of China (2018YFE0183900). Hang Dai would like to thank the support from the MBZUAI startup fund (GR006).\n\n
{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe properties of kinks of the $\\varphi^6$ model have been fairly well studied. In particular, the interactions between the kink and the antikink were investigated both using the collective coordinate method and by numerically solving the equation of motion \\cite{GaKuLi,Demirkaya.JHEP.2017,Weigel.JPCS.2014}. Interesting results have been obtained on various phenomena in kink-antikink and multi-kink scattering \\cite{MGSDJ.JHEP.2017,Dorey.PRL.2011,Romanczukiewicz.PLB.2017,Lima.JHEP.2019}.\n\nOn the other hand, there is the so-called deformation procedure \\cite{Bazeia.PRD.2002,Bazeia.PRD.2004,Bazeia.EPJC.2018,Gani.arXiv.2020.deformations}, which allows one to obtain a new model, together with its kink solution, from a known model with a known kink solution.\n\nIn this paper, we apply the deformation procedure to the $\\varphi^6$ model, using the hyperbolic sine as the deforming function. The result of this deformation is the sinh-deformed $\\varphi^6$ model. (The fact that the properties of kinks of the sinh-deformed $\\varphi^6$ model are of interest was pointed out in \\cite{Bazeia.EPJC.2018}.) The preliminary results reported by Dr.\\ Aliakbar Moradi Marjaneh at the ICPPA-2020 conference indicate that resonance phenomena (escape windows) are present in collisions of the kink and antikink of the sinh-deformed $\\varphi^6$ model; see, e.g., \\cite{Belova.UFN.1997} for details about escape windows. At first glance, this contradicts the fact that there are no vibrational modes in the excitation spectrum of the kink. However, if we recall the papers \\cite{Dorey.PRL.2011,Belendryasova.CNSNS.2019}, then we understand that everything is not so simple.\n\nSo, let us move on to a slightly more detailed presentation of our idea.\n
We emphasize that this text cannot be regarded as a complete study, but only as a brief presentation of preliminary results.\n\n\\section{The $\\varphi^6$ model and its hyperbolic deformation}\n\nWithin the $\\varphi^6$ model, the dynamics of a real scalar field $\\varphi(x,t)$ is described by the Lagrangian density\n\\begin{equation}\\label{eq:Largangian}\n\t\\mathscr{L} = \\frac{1}{2} \\left( \\frac{\\partial\\varphi}{\\partial t} \\right)^2 - \\frac{1}{2} \\left( \\frac{\\partial\\varphi}{\\partial x} \\right)^2 - V(\\varphi)\n\\end{equation}\nwith the potential\n\\begin{equation}\\label{eq:potential}\n V^{(0)}(\\varphi) = \\frac{1}{2}\\varphi^2\\left(1-\\varphi^2\\right)^2.\n\\end{equation}\nThis model has kink solutions (two kinks and two antikinks) connecting the vacua $\\varphi=0$ and $\\varphi=\\pm 1$. All of them can be obtained by symmetry transformations from the kink\n\\begin{equation}\\label{eq:phi6_kink}\n\\varphi_{\\rm K}^{(0)}(x) = \\sqrt{\\frac{1+\\tanh x}{2}},\n\\end{equation}\nsee, e.g., \\cite{GaKuLi}.\n\nWe apply the deformation procedure to the $\\varphi^6$ model, using the deforming function $f(\\varphi)=\\sinh\\varphi$. As a result, we obtain the sinh-deformed $\\varphi^6$ model with the potential\n\\begin{equation}\\label{eq:sinh_deformed_phi6_potential}\nV^{(1)}(\\varphi)=\\frac{1}{2} \\tanh^2 \\varphi(1-\\sinh^2 \\varphi)^2\n\\end{equation}\nand the kink solution corresponding to the $\\varphi^6$ kink \\eqref{eq:phi6_kink}:\n\\begin{equation}\\label{eq:deformed_phi6_kink}\n\\varphi_{\\rm K}^{(1)}(x) = \\mbox{arsinh}\\:\\sqrt{\\frac{1+\\tanh x}{2}}.\n\\end{equation}\nIn the next section, we discuss the stability potential of the sinh-deformed $\\varphi^6$ kink \\eqref{eq:deformed_phi6_kink}.\n\n\\section{Stability potential of the sinh-deformed $\\varphi^6$ kink}\n\nThe stability potential of a kink can be obtained by adding a small perturbation to the static kink. The linearized equation of motion then leads to the Sturm-Liouville problem\n\\begin{equation}\\label{eq:stat_Schr}\n\\left[-\\frac{d^2}{dx^2} + U(x)\\right]\\psi(x) = \\omega^2\\psi(x),\n\\end{equation}\nsee, e.g., \\cite[Sec.~2]{Bazeia.EPJC.2018} for more details. The function\n\\begin{equation}\\label{eq:Schr_pot}\nU(x) = \\left.\\frac{d^2 V^{(1)}}{d\\varphi^2}\\right|_{\\varphi_{\\rm K}^{(1)}(x)}=\\frac{30 \\tanh x + 2\\:\\text{sech}^4 x - \\left(16\\tanh x + 45\\right) \\text{sech}^2x + 34}{\\left(\\tanh x + 3\\right)^2}\n\\end{equation}\nis the stability potential, which defines the kink's excitation spectrum. The structure of the discrete part of the spectrum is very important for various processes involving kinks. The spectrum always contains the zero level, see, e.g., \\cite[Eqs.~(25), (26)]{Belendryasova.CNSNS.2019}, and all eigenvalues of the problem \\eqref{eq:stat_Schr} are non-negative \\cite[Eqs.~(2.20), (2.21)]{Bazeia.EPJC.2018}.\n\nIt is known that the stability potential of the $\\varphi^6$ kink does not have vibrational modes. We found that the stability potential \\eqref{eq:Schr_pot} of the sinh-deformed $\\varphi^6$ kink also does not have vibrational modes.\n
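This absence of a vibrational mode can also be cross-checked numerically: the short Python sketch below (an illustration only; the box size, grid spacing, and finite-difference step are arbitrary choices) discretizes the Sturm-Liouville problem \\eqref{eq:stat_Schr} with the potential \\eqref{eq:Schr_pot} and inspects the low-lying spectrum.\n\\begin{verbatim}\n# Finite-difference check of the kink stability potential. The zero mode\n# should appear near omega^2 = 0, and no other discrete level is expected\n# below the lower continuum threshold.\nimport numpy as np\n\ndef V(phi):               # sinh-deformed phi^6 potential\n    return 0.5 * np.tanh(phi)**2 * (1.0 - np.sinh(phi)**2)**2\n\ndef d2V(phi, eps=1e-4):   # V''(phi) by central differences\n    return (V(phi + eps) - 2.0 * V(phi) + V(phi - eps)) * eps**-2\n\nx = np.linspace(-30.0, 30.0, 2001)\ndx = x[1] - x[0]\nphi_K = np.arcsinh(np.sqrt(0.5 * (1.0 + np.tanh(x))))  # the kink\nU = d2V(phi_K)                                         # stability potential\nH = (np.diag(2.0 * dx**-2 + U)\n     - np.diag(np.full(len(x) - 1, dx**-2), 1)\n     - np.diag(np.full(len(x) - 1, dx**-2), -1))\nw2 = np.linalg.eigvalsh(H)\nprint('asymptotic U at the box ends:', U[0], U[-1])\nprint('lowest eigenvalues omega^2  :', w2[:4])\n\\end{verbatim}\n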
In the sinh-deformed $\\varphi^6$ antikink-kink collisions we observe escape windows, see Fig.~\\ref{fig:a0a_bounce_collision}.\n\\begin{figure}[t!]\n\\begin{center}\n \\centering\n \\subfigure[two-bounce escape window, the initial velocity is 0.0456]{\\includegraphics[width=0.49\\textwidth]{Fa0aV00456.pdf}\\label{fig:phi_a0a_00456}}\n \\subfigure[three-bounce escape window, the initial velocity is 0.0441]{\\includegraphics[width=0.49\\textwidth]{Fa0aV00441.pdf}\\label{fig:phi_a0a_00441}}\n \\caption{Examples of escape windows in the antikink-kink scattering. The initial positions of the kinks are $\\pm 20$.}\n \\label{fig:a0a_bounce_collision}\n\\end{center}\n\\end{figure}\nAppearance of the escape windows indicates the presence of resonant energy exchange between kinetic energy and (at least one) vibrational mode. The question is where is this vibrational mode hiding?\n\nThe answer can be in that the kinks of both models are asymmetric. This leads to the asymmetric stability potentials with different asymptotic values at $x\\to\\pm\\infty$, see Fig.~\\ref{fig:QMPkink}.\n\\begin{figure}[t!]\n\\begin{center}\n \\centering\n \\subfigure[]{\\includegraphics[width=0.49\\textwidth]{QMPkink.pdf}\\label{fig:QMPkink}}\n \\subfigure[]{\\includegraphics[width=0.49\\textwidth]{QMPkinkantikink.pdf}\\label{fig:QMPkinkantikink}}\n \\caption{Stability potential for (a) kink and (b) `antikink+kink' configuration with kink and antikink at $x=\\pm 10$, i.e.\\ $x_0^{}=10$ in Eq.~\\eqref{eq:Schr_pot_kk}.}\n \\label{fig:QMP}\n\\end{center}\n\\end{figure}\nSuch asymmetry means that closely placed kink and antikink can form a mutual stability potential\n\\begin{equation}\\label{eq:Schr_pot_kk}\nU(x) = \\left.\\frac{d^2 V^{(1)}}{d\\varphi^2}\\right|_{\\varphi_{\\rm \\bar K}^{(1)}(x+x_0^{})+\\varphi_{\\rm K}^{(1)}(x-x_0^{})}\n\\end{equation}\nin the form of a potential well, Fig.~\\ref{fig:QMPkinkantikink}, in which, in addition to the zero level, there will also be levels of the discrete spectrum (vibrational modes). (For the sake of convenience, in Fig.~\\ref{fig:QMPkinkantikink} we used $x_0^{}=10$, which is not small.) Our idea is that the situation can be similar to that observed for the $\\varphi^6$ kinks in \\cite{Dorey.PRL.2011} or for the $\\varphi^8$ kinks in \\cite{Belendryasova.CNSNS.2019}.\n\nNote that preliminary results show presence of vibrational modes in the potential well of Fig.~\\ref{fig:QMPkinkantikink}.\n\n\n\n\n\n\\section{Conclusion}\n\nWe have studied the sinh-deformed $\\varphi^6$ model which is obtained from the well-known $\\varphi^6$ scalar field theory. We have shown that in this new model there are no vibrational modes in the kink excitation spectrum. At the same time, in our numerical simulations of collisions of the sinh-deformed $\\varphi^6$ kink and antikink at some initial velocities we observed resonance phenomena --- escape windows. We suppose that these resonance phenomena may be a consequence of the resonant energy exchange between the translational modes of kinks (their kinetic energy) and the vibrational modes of the `antikink+kink' system as a whole. 
A more detailed study of this issue is of great importance and is planned for the near future.\n\n\n\n\n\\section*{Acknowledgements}\n\nV.A.G.\\ acknowledges the support of the Russian Foundation for Basic Research under Grant No.\\ 19-02-00930.\n\nA.M.M.\\ thanks the Islamic Azad University, Quchan Branch, Iran (IAUQ) for their financial support under the Grant.\n\nThe work of the MEPhI group was supported by the MEPhI Academic Excellence Project.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nMany engineering design problems can be cast as optimization problems of the form\n\\begin{equation}\n\t\\vc x^* = \\argmin_{\\vc x \\in \\mc X} \\eta(\\vc x),\n\t\\label{eqn:optimization}\n\\end{equation}\nwhere $\\vc x \\in \\mc X$ is a $d_x$-dimensional vector of design and operation parameters, and $\\eta$ is some quantity of interest, such as the strength of a material or the efficiency of an engine.\nSolving optimization problems that fit under the scope of Eq.\\ (\\ref{eqn:optimization}) can be challenging if the dimension $d_x$ of the search space is high and\/or the function $\\eta$ is nonconvex and highly rugged, resulting in an intractably large number of potential solutions that may be considered.\nFurthermore, if it is expensive to evaluate $\\eta$ (corresponding to performing a laboratory experiment or running a complex simulation code), one may be limited by their experimental budget and be forced to try and determine a good solution to Eq.\\ (\\ref{eqn:optimization}) with few opportunities to gain information about the true response surface for the system of interest.\n\nThe most elementary approaches for solving Eq.\\ (\\ref{eqn:optimization}) involve specifying some space-filling design and querying $\\eta$ at all of the design points.\nWhile rudimentary designs such as factorial designs suffer from the curse of dimensionality and are intractable for all but the simplest problems, more advanced space-filling designs such as Latin hypercube designs still may require far too many evaluations than is feasible.\nImproving on these by orders of magnitude, in Bayesian optimization (BO), one defines a Bayesian metamodel $y(\\vc x)$ [such as a Gaussian process (GP)] to approximate $\\eta$ and uses it to adaptively select promising designs until the optimum is found or an experimental budget is exhausted.\nStill, such an approach commonly takes tens to hundreds of evaluations to converge to a satisfactory degree in practice.\nIn this work, we are interested in improving on this by another order of magnitude, finding plausible solutions to Eq.\\ (\\ref{eqn:optimization}) with a handful--or \\textit{no}---evaluations of $\\eta$.\n\nHumans intuitively solve optimization problems in their daily lives in novel settings by leveraging knowledge about previous related experiences.\nBy contrast, it is common to start quantitative engineering design problems from scratch since it is unclear how to re-use data from legacy systems with different response surfaces.\nThere is a growing literature around methods that seek to devise suitable models to bridge the gap between these disparate sources of information.\nThrough this, we hope to leverage so-called ``legacy'' data about previously seen tasks in a manner that endows our current effort with strong, but reasonable inductive biases that guide it towards effective solutions.\n\nThe main technical challenge is to devise a means of reasoning in a quantitative 
way about features of data that are not numerical in nature and therefore not suitable for standard modeling approaches\\footnote{Some literature refer to these as ``qualitative'' features, though this seems somewhat of a misnomer since certain types of attributes in question can be numerical in nature, such as zip codes, yet are clearly unsuitable to treat as numbers in a model; others may not be quantitative, but are nonetheless precise (e.g.\\ the name of an operator), yet ``qualitative'' does not convey this preciseness.}\nThese ``general'' features are typically found when describing the difference between tasks \\cite{argyriou2007multi} and could include, for example, the serial number of a machine, the identity of an operator, or a chemical compound involved in some process of interest.\n\nThe key to our approach is to learn a probabilistic embedding associated with the general features associated with a system such that notions of similarity can be quantified and utilized by a downstream data-driven model.\nMinding our ultimate goal of solving optimization problems, we focus on Gaussian process metamodels and call the composition of our probabilistic embedding with the Gaussian process metamodel ``Bayesian embedding GP'' (BEGP).\nThe contributions of this work are the following:\n\\begin{enumerate}\n\t\\item We define the structure of the BEGP metamodel designed to fuse information from systems with differing general features. \n\tWe use a variational inference scheme to learn to infer reasonable probabilistic embeddings of the general features that capture uncertainty due to limited data while showing that the compositional model can be recognized as a deep Gaussian process \\cite{damianou2013deep} with a particular choice of kernel function.\n\t\\item We explain and demonstrate the application of this model to the task of Bayesian optimization, showing how the BEGP can be used to satisfy the usual requirements of the algorithm.\n\t\\item We conduct a series of computational experiments on a variety of synthetic and real-world systems to illustrate the usage of our approach and compare its performance to existing methods and evaluating the contribution of various components of the metamodel.\n\\end{enumerate}\n\nThe scope of our work here is optimization problems of the form in Eq.\\ (\\ref{eqn:optimization}).\nHowever, because our approach can be used as a drop-in replacement for other Bayesian metamodels, it is straightforward to extend our work to cases including multi-objective optimization and problems involving complex or unknown constraints.\nWe also consider regression tasks as a stepping stone to our ultimate application of interest since satisfactory predictive power is a desired preliminary skill.\n\nThe remainder of this paper is organized as follows:\nIn section \\ref{sec:methodology}, we explain the formulation of our metamodel and its usage within Bayesian optimization\nIn section \\ref{sec:related_work}, we review related approaches and results from the literature.\nIn section \\ref{sec:examples}, we demonstrate our approach on a variety of synthetic and real-world examples.\nIn section \\ref{sec:conclusion}, we offer concluding remarks.\n\n\\section{Methodology}\n\\label{sec:methodology}\nIn this section, we describe the methodology including our model definition, training objective, predictive distribution, and application to Bayesian optimization.\n\n\\subsection{Model definition}\nWe begin the discussion of our methodology by defining our model, explaining how 
inference may be done to determine its posterior, and how predictions are computed.\n\n\\subsubsection{Gaussian process regression}\nWe begin by reviewing Gaussian processes for regression; the interested reader may consult \\cite{rasmussen2006gaussian} for more details.\nA Gaussian process (GP) $\\mc{GP}(\\mu(\\cdot), k(\\cdot, \\cdot))$ with mean function $\\mu$ and kernel $k$ is a distribution over functions such that the marginal distribution of the random function over a finite index set is a multivariate Gaussian.\nConcretely, let $f: \\mc X \\rightarrow \\reals$ be described by a GP where $\\mc X$ is an input space that indexes the random process; traditionally, this will be $d_x$-dimensional Euclidean space $\\reals^{d_x}$ or some subset therein.\nHowever, any (infinite) set of inputs might be used, and we are mindful of this potential generality in the review in this section.\nGiven $n$ inputs $\\mtx X \\in \\scriptX^n$ with corresponding outputs $\\vc f \\in \\reals^n$, we write the joint probability density of $\\vc f$ as\n\\begin{equation}\n\tp(\\vc f | \\mtx X) = \\G{\\vc f}{\\vc \\mu}{\\kff},\n\t\\label{eqn:pf}\n\\end{equation}\nwhere $\\mu_i = \\mu(\\vc x_i)$, and $(\\kff)_{ij} = k(\\vc x_i, \\vc x_j)$.\nThe quantity $\\vc f$ is recognized as the latent output of the GP model.\nNext, we define a Gaussian likelihood \n\\begin{equation}\n\tp(\\vc y|\\vc f) = \\G{\\vc y}{\\vc f}{\\sigma_y^2 \\mtx I_{n \\times n}},\n\t\\label{eqn:likelihood}\n\\end{equation}\nwhere $\\vc y \\in \\reals^n$ denotes the observed output values.\nIntegrating out $\\vc f$ results in the familiar marginal likelihood\n\\begin{equation}\n\tp(\\vc y | \\mtx X) = \\G{\\vc y}{\\vc \\mu}{\\kyy},\n\t\\label{eqn:gp:marginal_likelihood}\n\\end{equation}\nwhere $\\kyy = \\kff + \\sigma_y^2 \\mtx I_{n \\times n}$.\nThe negative logarithm of this quantity is conventionally used to determine appropriate parameters for the mean and kernel functions as well as the likelihood through gradient-based minimization. 
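\nFor concreteness, a minimal sketch of evaluating the negative log marginal likelihood of Eq.\\ (\\ref{eqn:gp:marginal_likelihood}) with a zero mean function is given below. The snippet is an illustration only (plain NumPy, a simple squared-exponential kernel, and function names of our own choosing) and is not the implementation used in our experiments; in practice one parameterizes the kernel and minimizes this quantity with a gradient-based optimizer.\n{\\small\\begin{verbatim}\n# Illustrative sketch: negative log marginal likelihood of a zero-mean GP\n# with Gaussian likelihood, for a user-supplied kernel function k.\nimport numpy as np\n\ndef neg_log_marginal_likelihood(X, y, k, sigma_y):\n    n = X.shape[0]\n    Kyy = k(X, X) + sigma_y**2 * np.eye(n)   # K_ff + sigma_y^2 I\n    L = np.linalg.cholesky(Kyy)              # Cholesky for numerical stability\n    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))\n    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)\n\n# Example with a squared-exponential kernel and random toy data:\nk = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :])**2, axis=-1))\nX, y = np.random.rand(20, 2), np.random.randn(20)\nprint(neg_log_marginal_likelihood(X, y, k, sigma_y=0.1))\n\\end{verbatim}\n}\n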
\nAlternatively, approximate Bayesian inference of the model parameters may be done as well using Markov chain Monte Carlo or variational inference once suitable priors are defined \\cite{bilionis2013multi}.\n\nGiven a training set $\\scriptD = \\{\\mtx X, \\vc y\\} \\in \\scriptX^n \\times \\reals^n$, predictions are made by using Bayes' rule to condition the GP on the training set.\nThe resulting predictive distribution over the latent outputs $\\vc f^* \\in \\reals^{n^*}$ at some test inputs $\\mtx X^* \\in \\scriptX^{n^*}$ is\n\\begin{equation}\n\tp(\\vc f^* | \\mtx X^*, \\scriptD) = \\G{\\vc f^*}{\\ksf \\kyyinv (\\vc y - \\vc \\mu) + \\vc \\mu^*}{\\kss - \\ksf \\kyyinv \\kfs},\n\t\\label{eqn:qf}\n\\end{equation}\nwhere\n\\begin{align}\n\t(\\kfs)_{i, j} &= k(\\vc x_i, \\vc x_j^*),\n\t\\\\\n\t(\\kss)_{i, j} &= k(\\vc x_i^*, \\vc x_j^*),\t\n\\end{align}\n$\\ksf = \\kfs^\\transpose$, and $\\mu_i^* = \\mu(\\vc x_i^*)$.\nApplying the likelihood and marginalizing out $\\vc f^*$ gives the posterior in output space\n\\begin{equation}\n\tp(\\vc y^* | \\mtx X^*, \\scriptD) = \\G{\\vc y^*}{\\ksf \\kyyinv (\\vc y - \\vc \\mu) + \\vc \\mu^*}{\\kss - \\ksf \\kyyinv \\kfs + \\sigma_y^2 \\mtx I_{n^* \\times n^*}}.\n\t\\label{eqn:qy}\n\\end{equation}\n\n\\textbf{Remark:}\nOur use of a likelihood term in our model definition generalizes the concept of a ``nugget'' or ``jitter'' that is sometimes used within the GP metamodeling community \\cite{bilionis2012multi, bilionis2013multi}, usually with the stated purpose of either improving the conditioning of the kernel matrix or representing the noisiness in the observations being modeled.\n\n\\subsubsection{Incorporating general input features as inputs to Gaussian process regression models}\nThe main source of the rich behavior in GP models is their kernel function, which encodes strong inductive biases about the statistics of the functions being modeled.\nIt is most common to consider parametric kernels of the form \n$k(\\cdot, \\cdot; \\vc \\theta_k) : \\reals^{d_x} \\times \\reals^{d_x} \\rightarrow \\reals$.\nFor example, the popular exponentiated quadratic kernel is defined as\n\\begin{equation}\n\tk(\\vc x, \\vc x'; \\vc \\theta_k) = \\sigma_k^2 \\exp \\left[ - \\sum_{i=1}^{d_x} \\left(\\frac{x_i - x_i'}{l_i} \\right)^2 \\right],\n\t\\label{eqn:exponentiated-quadratic}\n\\end{equation}\nand can be evaluated easily for any pair of $d_x$-dimensional vectors $\\vc x$ and $\\vc x'$.\n\nWhile this is suitable for real-valued inputs commonly found in engineering problems, it is not suitable for more general inputs such as those mentioned above.\nOne workaround is to embed elements from general input spaces as $d_z$-dimensional latent variables; one can then form the complete input vector by concatenating these latents $\\vc z$ with the real-valued inputs $\\vc x^{(r)}$ to form a $d_{x^{(r)}} + d_z$-dimensional input vector amenable to computation with traditional parametric kernels such as Eq.\\ (\\ref{eqn:exponentiated-quadratic}).\n\nMore general nonparametric positive-definite kernels exist that are valid for Gaussian processes.\nFor example, the white noise kernel function,\n\\begin{equation}\n\tk_{white}(\\vc x, \\vc x'; \\vc \\theta) = \n\t\\begin{cases} \n\t\t\\sigma_{white}^2 & \\textrm{if} ~ \\vc x = \\vc x' \\\\\n\t\t0 & \\textrm{otherwise}\n\t\\end{cases},\n\\end{equation}\ndoes not require Euclidean inputs; instead, it merely requires that we are able to distinguish between elements in its index set.\n\nTherefore, to accomplish the embedding 
mentioned above, we map our general inputs $\\vc x^{(g)} \\in \\scriptX_g$ through a $d_z$-variate Gaussian process with white noise kernel:\n\\begin{equation}\n\t\\vc z_i \\sim \\mc{GP}\\left(0; k_{white}(\\cdot, \\cdot)\\right) : \\scriptX_g \\rightarrow \\reals, ~i=1, \\dots, d_z.\t\n\\end{equation}\nSampling this GP at some general input returns potential embeddings of the input as a $d_z$-dimensional vector in Euclidean space.\nThese latent variables may then be concatenated with the numerical feature space and fed through a second Gaussian process that serves as the familiar regression model employed in Bayesian metamodeling.\nThe Bayesian embedding GP (BEGP) model that combines the embedding Gaussian process with the GP regression model is shown in Fig.\\ \\ref{fig:egp}.\nThis model is composed of two GPs, where the outputs of the first (embedding) GP are fed as inputs to the second (regression) GP, making it a deep Gaussian process with serial topology similar to those originally discussed in \\cite{damianou2013deep}.\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{fig1.pdf}\n\t\\caption{\n\t\tProbabilistic graphical model of the Bayesian embedding Gaussian process (BEGP) model.\n\t\tGeneral inputs $\\vc x^{(g)}$ are first mapped through a GP with white noise kernel to latent variables $\\vc z$.\n\t\tThese latents are concatenated with real-valued inputs $\\vc x^{(r)}$ to form the full inputs $\\vc x$ to the second GP model with a traditional parametric mean function and kernel.\n\t\tWhite nodes denote variables that are unobserved, and shaded nodes are observed.\n\t\tPoint nodes are deterministic, and large nodes are associated with probability distributions.\n\t\tModel parameters associated with mean functions, kernels, and the likelihood are hidden for clarity.\n\t}\n\t\\label{fig:egp}\n\\end{figure}\n\nOur modeling approach may be thought of as a type of \\textit{multi-task learning} in that one doing regression traditionally segregates datasets along differences in general features, building a separate model for each dataset using the remaining real-valued features; by contrast, here we build in correlations across general features within a single model, simultaneously improving the predictive power across all of the input space.\n\n\\textbf{Remark:} for systems with multi-dimensional general inputs (e.g.\\ operator IDs \\textit{and} machine serial numbers), it is straightforward to extend our approach to embed each general input dimension separately to its own latent space; these latents are then all concatenated with each other to define the total latent variable which is then concatenated with $\\vc x^{(r)}$ as before.\n\n\\subsubsection{Inference for the Bayesian embedding GP model}\nHaving defined our model, we now discuss how to do inference on it.\nInference for the Bayesian EGP model presents two challenges.\nFirst, inferring the posterior of the embedding layer is challenging due to its nonparameteric nature stemming from the white noise kernel combined with the fact that its posterior will generally be non-Gaussian due to the nonlinearity of the downstream model that operates on it. 
\nSecond, unlike a traditional Gaussian process regression model, it is analytically intractable to marginalize out the latent variables $\\vc z$.\nIn this section, we discuss simple strategies for overcoming these two challenges so that we may devise an objective function for training our model.\n\nFirst, we consider the challenges associated with the embedding layer.\nLet $n_g$ denote the number of distinct general inputs that we observe from $\\scriptX_g$ (i.e.\\ the number of legacy datasets that we would like to incorporate in our model).\nDue to the kernel, the joint prior density over latents $\\mtx Z \\in \\reals^{n_g \\times d_z}$ associated with general inputs $\\mtx X^{(g)} \\in \\scriptX_g^{n_g}$ is\n\\begin{equation}\n\tp \\left( \\mtx Z | \\mtx X^{(g)} \\right) = \\prod_{i=1}^{n_g} \\prod_{j=1}^{d_z} \\G{z_{ij}}{0}{\\sigma_w^2},\n\t\\label{eqn:pz}\n\\end{equation}\nwhere $z_{ij}$ is the $j$-th dimension associated with the latent for input $i$.\nFor reasons analogous to those mentioned in previous literature \\cite{titsias2010bayesian, damianou2013deep, salimbeni2017doubly}, the posterior over these inputs is generally non-Gaussian.\nWe define a mean-field variational posterior $q(\\mtx Z)$ to approximate the true posterior $p(\\mtx Z | \\scriptD)$ with the form\n\\begin{equation}\n\tq(\\mtx Z) = \\prod_{i=1}^{n_g} \\prod_{j=1}^{d_z} \\G{z_{ij}}{m_{ij}}{s_{ij}},\n\t\\label{eqn:qz}\n\\end{equation}\nwhere $\\mtx M, \\mtx S \\in \\reals^{n_g \\times d_z}$ are variational parameters.\nFortunately, one is usually only interested in a relatively small number of inputs in $\\scriptX_g$, so it is not a challenge to track these variational parameters.\nIn fact, computational challenges associated with data scalability of Gaussian process models are likely to become problematic well before challenges associated with working with a high number of tasks, even though recent advances have made significant progress in lifting traditional barriers to Gaussian process modeling in large-data regimes \n\\cite{snelson2006sparse, titsias2009variational, hensman2013gaussian, wilson2015kernel, salimbeni2017doubly, wang2019exact}.\n\nLastly, note that if no data have been observed for some held-out general input $\\vc x^{(g),*}$, then, by the nature of the white noise kernel, the posterior over the associated latent is equal to the prior.\nThis observation is critical for enabling our model to make credible predictions in a zero-shot setting, where no data about some task of interest is available at the outset of optimization.\nIndeed, we will see that this probabilistic approach enables our model to make surprisingly good inferences to guide the first iteration of Bayesian optimization in such settings.\n\nHaving resolved the challenge of representing the posterior of the embedding layer, we now turn our attention to the second challenge regarding the intractability of the model's marginal log-likelihood (or evidence), which usually serves as the objective function for training probabilistic models.\nAgain, following a variational strategy, we derive a tractable evidence lower bound (ELBO) that can be used in place of the exact marginal log-likelihood.\nWhile this approach was first demonstrated for Gaussian processes using a sparse approach based on inducing variables, it turns out that later advances in variational inference \\cite{ranganath2014black} allow us to avoid relying on a sparse GP formulation.\nThat said, we recognize that such an approach may realistically be helpful in the plausible cases 
where an abundance of legacy data takes the problem into a large-data setting.\nThis is in contrast to the usual assumptions of data-scarcity in engineering metamodeling, which are usually myopically focused only on solving a single task in isolation.\nFor the sake of completeness, we now provide the derivation of the ELBO for the BEGP model.\nBased on the probabilistic graphical model shown in Fig.\\ \\ref{fig:egp}, the joint probability of our model with $n$ observations is\n\\begin{equation}\n\t\\mc P = p(\\mtx Z, \\vc f, \\vc y | \\mtx X^{(r)}, \\mtx X^{(g)}) = p(\\mtx Z | \\mtx X^{(g)}) p(\\vc f | \\mtx Z, \\mtx X^{(r)}) p(\\vc y | \\vc f),\n\\end{equation}\nwhere $p(\\mtx Z | \\mtx X^{(g)})$ is given by Eq.\\ (\\ref{eqn:pz}), $p(\\vc f | \\mtx Z, \\mtx X^{(r)})$ is analogous to Eq.\\ (\\ref{eqn:pf}), and $p(\\vc y | \\vc f)$ is the likelihood of Eq.\\ (\\ref{eqn:likelihood}).\nThe marginal log-likelihood is then obtained by integrating over the latent variables and taking the logarithm of the result:\n\\begin{equation}\n\t\\log p(\\vc y | \\mtx X^{(r)}, \\mtx X^{(g)}) = \\log \\int \\mc P d \\vc f d \\mtx Z.\n\\end{equation}\nFrom this point onwards, we will omit the conditioning on the inputs for brevity.\nWhile we can integrate out $\\vc f$ due to the conjugacy of the likelihood, $\\mtx Z$ cannot be integrated out analytically.\nWe multiply and divide the integrand by $q(\\mtx Z)$ to obtain\n\\begin{equation}\n\t\\log p(\\vc y) = \\log \\int q(\\mtx Z) \\frac{p(\\vc y, \\mtx Z)}{q(\\mtx Z)} d \\mtx Z.\n\\end{equation}\nApplying Jensen's inequality, we move the logarithm inside the integrand to get\n\\begin{equation}\n\\log p(\\vc y) \\ge ELBO = \\int q(\\mtx Z) \\log \\frac{p(\\vc y, \\mtx Z)}{q(\\mtx Z)} d \\mtx Z.\n\\end{equation}\nRearranging and using $p(\\vc y, \\mtx Z) = p(\\vc y | \\mtx Z) p(\\mtx Z)$ gives\n\\begin{equation}\n\tELBO = \\int q(\\mtx Z) \\left( \\log p(\\vc y | \\mtx Z) + \\log p(\\mtx Z) - \\log q(\\mtx Z) \\right) d \\mtx Z.\n\t\\label{eqn:elbo}\n\\end{equation}\nAdditional progress may be made at this point by restricting our choice of kernel function for the second GP layer to either an exponentiated quadratic or a linear kernel since the kernel expectations that form the crux of the challenge may then be evaluated analytically.\nFurthermore, under the special case where the same real-valued input points are sampled for each general input, Atkinson and Zabaras \\cite{atkinson2018structured, atkinson2019structured} showed that the kernel expectations can be further broken down into a Kronecker product between a deterministic kernel matrix and a kernel expectation, enabling extensions to modeling millions of data.\nIn this work, we instead opt to estimate Eq.\\ (\\ref{eqn:elbo}) through sampling so as to maintain the ability to leverage arbitrary kernel functions.\n\nThe model is trained by maximizing Eq.\\ (\\ref{eqn:elbo}) with respect to the parameters of the variational posteriors as well as the kernel hyperparameters and likelihood variance $\\sigma_y^2$ through gradient-based methods.\nWe find that it is typically sufficient to approximate the integral with a single sample from $q(\\mtx Z)$ at each iteration of the optimization.\n\nNote that there is a redundancy in defining the scale of the latent space in that our model possesses both a parameter $\\sigma_w^2$ for the prior variance of the embedding layer as well as length scales for the parametric kernel of the GP regression layer.\nGiven this freedom, one can make the ELBO arbitrarily high by bringing $\\sigma_w^2$ towards zero while scaling the variational parameters of $q(\\mtx Z)$ and the length scales of the regression kernel correspondingly without changing the predictions of the model.\nTo avoid this pathology and establish the scale of the latent space, we fix $\\sigma_w^2 = 1$.
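\n\nTo make the training objective concrete, a single-sample estimate of Eq.\\ (\\ref{eqn:elbo}) can be sketched as follows. The snippet is illustrative only: plain NumPy with hypothetical helper names, where the argument \\texttt{gp\\_log\\_marginal\\_likelihood} stands for any routine returning $\\log p(\\vc y | \\mtx Z)$ for the concatenated inputs; in practice the same computation is written in an automatic-differentiation framework so that gradients with respect to the variational parameters and the hyperparameters are available.\n{\\small\\begin{verbatim}\n# Illustrative single-sample ELBO estimate for the BEGP (names are hypothetical).\nimport numpy as np\n\ndef log_normal(x, mean, var):\n    return -0.5 * (np.log(2 * np.pi * var) + (x - mean)**2 / var)\n\ndef elbo_one_sample(Xr, y, task_idx, M, S, gp_log_marginal_likelihood):\n    """M, S: (n_g, d_z) variational means/variances of the task embeddings;\n    task_idx: integer task index of each datum; Xr: real-valued inputs."""\n    eps = np.random.randn(*M.shape)\n    Z = M + np.sqrt(S) * eps                         # reparameterized sample of q(Z)\n    X = np.concatenate([Xr, Z[task_idx]], axis=1)    # concatenate latents with x^(r)\n    log_lik = gp_log_marginal_likelihood(X, y)       # log p(y | Z)\n    log_prior = log_normal(Z, 0.0, 1.0).sum()        # log p(Z), with sigma_w^2 = 1\n    log_q = log_normal(Z, M, S).sum()                # log q(Z)\n    return log_lik + log_prior - log_q\n\\end{verbatim}\n}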
\n\nWe choose to learn point estimates of the other model parameters since they are informed by all of the tasks' data.\nTherefore, the uncertainty contributed to the predictions due to uncertainty in these parameters will generally be small.\nThis is again only because we are incorporating a large amount of data into our model by leveraging legacy data; in the typical single-task engineering setting, we might expect to have far fewer data, and Bayesian inference over the parameters may provide value to the model.\n\n\\paragraph{A remark on variational inference for EGP}\nA reasonable alternative to our variational inference approach is to instead perform inference using Monte Carlo.\nWe favor VI because we need to update the model posteriors repeatedly during Bayesian optimization as new data are added to the model.\nThis is simple to do with VI since we can continue our gradient-based optimization from the previous variational posterior.\nBy contrast, Monte Carlo requires us to run the chain for almost as long as the initial chain every time that we need to update the model.\nThus, while there is not such a clear advantage for one method compared to the other when initially training the model on legacy data, VI quickly becomes more attractive in the following steps.\nFurthermore, we note that updating the model parameters is rather important since we are focusing on extreme low-data regimes (fewer than 10 data from the task of interest), where each new datum will generally have considerable effects on the posterior.\n\nA further argument in favor of MC inference is that highly factorized variational posterior distributions are known to underestimate the entropy of the true posterior when there is mismatch between the variational form and the true posterior \\cite{bishop2006pattern}.\nOne might reasonably worry that we are neglecting important uncertainty information by using a mean-field posterior for $q(\\mtx z)$.\nWe find in practice that this approximation successfully captures much of the uncertainty and is sufficient to provide competitive or superior predictions compared to the baselines considered in our examples.\nHowever, our method is not incompatible with more powerful VI methods such as multivariate Gaussians with full covariance matrices \\cite{atkinson2019structured} or normalizing flow-based approaches \\cite{kingma2016improved}.\n\n\\subsubsection{Predictions with the model}\nGiven a trained model, we are now interested in making predictions at held-out inputs in $\\scriptX_r \\times \\scriptX_g$.\nConsider a set of test inputs $\\mtx X^* \\in \\scriptX_g^{n_g^*} \\times \\scriptX_r^{n_r^*}$ with $n_g^*$ distinct general inputs and $n_r^*$ distinct real-valued inputs, combined to form $n^*$ test inputs in all.\nWe first apply the embedding layer to $\\mtx X^{(g),*}$ to obtain a sample of the latents $\\mtx Z^* \\in \\reals^{n_g^* \\times d_z}$ as well as the training data's latents $\\mtx Z$.\nThese latents are expanded to form the full training and test inputs for the regression model, \n$\\{\\mtx X, \\mtx X^*\\} = \\left\\{ [\\mtx X^{(r)}, \\mtx Z], [\\mtx X^{(r),*}, \\mtx Z^*] \\right\\} \\in \\reals^{n \\times d_x} \\times \\reals^{n^* \\times d_x}$,\nwhere $d_x = d_{x^{(r)}} + 
d_z$.\nGiven these samples we can compute the conditional predictive distributions over the latent and observed outputs, given by Eq.\\ (\\ref{eqn:qf}) and (\\ref{eqn:qy}), respectively.\n\nDue to the nonlinearity of the GP model, marginalizing over latent variables' variational posterior $q(\\mtx Z, \\mtx Z^*)$ generally induces a non-Gaussian predictive distribution.\nHowever, given a Gaussian variational posterior, the moments of the marginal predictive distribution admit analytic expressions; previous works \\cite{girard2003gaussian, titsias2010bayesian} have approximated the predictive distribution as Gaussian with these statistics.\nAlternatively, one may sample the predictive distribution in order to better resolve its details.\nWe utilize the former approach in our examples.\n\n\\subsection{Bayesian optimization}\nHaving discussed the formulation of our model, including training and predictions, we now turn our attention to using it in the context of optimization.\nThe main algorithm for Bayesian optimization is given in Algorithm \\ref{alg:BO}.\nWe restrict ourselves to the task of optimizing subject to a fixed general input $\\vc x_g^*$, though our method permits searching over multiple general inputs with trivial modification.\n\n\\begin{algorithm}\n\t\\caption{Bayesian optimization}\n\t\\label{alg:BO}\n\t\\textbf{Require:} \n\tTraining data $\\scriptD$, \n\tgeneral input of interest $\\vc x_g^*$, \n\tacquisition function $a(\\cdot)$, \n\tcomputational budget $n^*$.\n\t\n\t\\textbf{Ensure:} optimal design $\\vc x^{(r),*}$\n\t\n\t\\begin{algorithmic}[1]\n\t\t\t\\State $\\scriptM \\rightarrow BEGP(\\scriptD)$.\n\t\t\t\\State $n \\rightarrow 0$, $\\scriptD^* = \\emptyset$.\n\t\t\t\\For{$i=1, \\dots, n^*$}\n\t\t\t\t\\State Train $\\scriptM$ on $\\scriptD$.\n\t\t\t\t\\State $\\vc x_{next}^{(r)} \\rightarrow \\argmin_{\\vc x^{(r)} \\in \\mc X_r} a(\\vc x^{(r)})$.\n\t\t\t\t\\State $y_{next} \\rightarrow \\eta(\\vc x_g^*, \\vc x_{next}^{(r)})$, $n \\rightarrow n + 1$.\n\t\t\t\t\\State $\\scriptD \\rightarrow \\scriptD \\cup \\{ (\\vc x_g^*, \\vc x_{next}^{(r)}, y_{next}) \\}$\n\t\t\t\t\\State $\\scriptD^* \\rightarrow \\scriptD^* \\cup \\{ (\\vc x_g^*, \\vc x_{next}^{(r)}, y_{next}) \\}$\n\t\t\t\\EndFor\n\t\t\t\\Return $\\vc x^{(r),*} = \\argmin_{x^{(r)} \\in \\scriptD^*} y(\\vc x^{(r)})$.\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Acquisition functions}\nOne ingredient that must be specified for BO is the acquisition function $a$, which is used to steer the selection of points at which one conducts experiments to evaluate $\\eta$.\nIn this work, for systems in which we can evaluate $\\eta$ at any location in $\\scriptX_r$ for the current task $\\vc x_g^*$, we consider the expected improvement\n\\begin{equation}\nEI(\\vc x) = \\expectation{p(y | \\vc x, \\scriptD)}{(y(\\vc x) - y_{min})\\Theta(y_{min} - y(\\vc x))},\n\\label{eqn:expected_improvement}\n\\end{equation}\nwhere $\\Theta(\\cdot)$ is the Heaviside theta function and $y_{min}$ is the value of the current best design.\nWhen $p(y|\\vc x, \\scriptD)$ has a known form such as a Gaussian, evaluation of Eq.\\ (\\ref{eqn:expected_improvement}) can be done cheaply.\nNotice as a matter of convention that we have defined $a$ in Eq.\\ (\\ref{eqn:expected_improvement}) such that lower values are better.\n\nFinally, the traditional approach to maximizing the acquisition function has been to simple select a large random sampling of $\\mc X_r$, evaluate $a$ on all of the samples (which is generally cheap), then select the 
best sample.\nHowever, following the widespread adoption of computational frameworks supporting automatic differentiation such as TensorFlow \\cite{abadi2016tensorflow} and PyTorch \\cite{paszke2017automatic}, it has become customary to accelerate the inner loop optimization over $a$ using gradient-based methods by simply backpropagating the acquisition function back through the model to obtain its gradient with respect to the inputs $\\vc x^{(r)}$ \\cite{snoek2012practical}.\nOther recent work has investigated this matter more broadly \\cite{zhang2019finding}.\nHere, we use gradient-based search with restarts to find the next point to select similarly to \\cite{snoek2012practical}.\nFor the case where one must estimate $a$ through sampling (e.g.\\ due to estimating the predictive posterior by sampling latents from $q(\\mtx Z)$, stochastic backpropagation \\cite{rezende2014stochastic} can be used to deal with the variance in the estimates of $a$ due to sampling.\nWe also note that this setup may be applied immediately to solve \\textit{robust} optimization problems with no modification by additionally sampling from any uncontrolled inputs.\n\nIn our real-world examples, we do not have access to the real-world data-generating process and must instead work with a static, pre-computed dataset.\nIn this case, our design space is practically the finite set at which experiments have already been carried out.\nIn this case, we can instead directly estimate the negative\\footnote{We take the negative probability to keep with the convention that lower values of $a$ are better.} probability that a given available point will be the best design:\n\\begin{equation}\n\ta(\\vc x_i^{(r)}) = \\prod_{j \\not = i} P\\left(y(\\vc x_i^{(r)}, \\vc x^{(g), *}) < y(\\vc x_j^{(r)}, \\vc x^{(g), *}) \\right).\n\t\\label{eqn:prob-best}\n\\end{equation}\nWe approximate Eq.\\ (\\ref{eqn:prob-best}) by repeatedly sampling the joint predictive distribution over all available design points and counting the frequency with which each design point is predicted to have the best output.\nWe find that this crude approximation is not overly computationally burdensome and results in good performance in practice as shown by our examples.\n\n\n\\section{Related work}\n\\label{sec:related_work}\nMcMillan et al.\\ \\cite{mcmillan1999analysis} study Gaussian process models based on ``qualitative'' and ``quantitative'' variables.\nThis approach would be analogous to a deterministic version of our Bayesian embedding layer.\nAs we show in our examples, the incorporation of Bayesian uncertainty estimates provides highly valuable information that is essential to making the predictive distribution of the model credible.\n\nThe Gaussian process latent variable model \\cite{lawrence2004gaussian} is a seminal work for inferring latent variables by using the marginal log-likelihood of a Gaussian process as training signal.\nThe later Bayesian extension by Titsias and Lawrence \\cite{titsias2010bayesian} was found to significantly improve the quality of the model and gives compelling evidence in favor of modeling latent variables in a Bayesian manner.\n\nThe task of inferring input-output relationships where the inputs are only partially-specified was first identified by Damianou and Lawrence as ``semi-described learning'' \\cite{damianou2015semi}.\nOur problem statement may be regarded as similar in that we choose to associate the general features in our data with latent variables that are unspecified \\textit{a priori}.\n\nUsing related 
systems to improve knowledge about a related system of interest has a long history in multi-fidelity modeling \\cite{kennedy2000predicting, forrester2007multi}, where one traditionally considers a sequence of systems that constitute successively more accurate (and expensive) representations of a true physical process.\nHowever, it is unclear how to define a ``sequence'' of legacy datasets that are all ``on equal footing'' (such as different operators or machines performing the same task), particularly as the number of such instances becomes large.\n\nOur model is similar to the multi-task Gaussian process introduced in \\cite{swersky2013multi}.\nOur approaches are similar in that we both learn a kernel over tasks; however, \\cite{swersky2013multi} require that the kernel matrix decompose as a Kronecker product between a task-wise covariance matrix and the input kernel matrix. \nThe Kronecker product kernel matrix can be equivalent to a kernel matrix evaluated on our augmented inputs in the special cases of if one uses either a linear or a Gaussian kernel, but more general kernels cannot be composed in this way.\nBy finding representations of tasks in a Euclidean input space, we lift this restriction, allowing us to use arbitrary kernels instead, wherein the Kronecker product decomposition of \\cite{swersky2013multi} is a special case.\nAdditionally, by posing a prior over the latent input space, our method also allows us to do principled zero-shot predictions on unseen inputs; it is significantly more challenging to formulate a reasonable prior on the kernel matrix itself.\nFinally, we can inject additional modeling bias into our model by selecting the dimensionality $d_z$ of our latent space.\nWhile a low-rank approximation to the task covariance matrix might be derived to obtain similar savings, it again seems more natural in our opinion to exercise this ability in a latent space.\n\nThe task of utilizing ``legacy'' data in order to improve the predictive capability on a related system of interest was explored in \\cite{ghosh2018bayesian}.\nHowever, the approach used in that work can only utilize legacy models through linear combinations, making extrapolation on the system of interest difficult, particularly when few (or no) data are available from the task of interest.\nAdditionally, their method requires a cross-validation scheme for model selection; by contrast, the probabilistic representation of tasks in the current method enables the use of Bayesian inference to guide model selection in the sense of identifying relevance between tasks.\n\nA recent work by Zhang et al.\\ \\cite{zhang2019bayesian} also studies Bayesian optimization in the context of qualitative and quantitative inputs.\nHowever, they learn point embeddings via maximum likelihood estimation rather than inferring a posterior through Bayesian inference; this makes the overall model prone to overfitting in practice and generally unsuitable for providing credible uncertainty estimates.\nThis also makes their model difficult to apply in few-shot settings, and impossible to apply in a principled way for zero-shot learning.\nA related work by Iyer et al.\\ \\cite{iyer2019data} identifies a pathology associated with the point estimate inference strategy and proposes an ad hoc workaround; a natural outcome of our Bayesian embedding approach is that this challenge vanishes.\n\nThe BEGP model can be thought of as a latent variable model \\cite{kingma2013auto, lawrence2004gaussian, titsias2010bayesian} and has similarities with 
previous works with latent variable GPs \\cite{dai2017efficient, atkinson2018structured, atkinson2019structured}, though those works additionally impose structure over latent variables as well as (potentially) the input training data.\nHowever, neither works were applied within the context of Bayesian optimization, nor did they identify that multiple general feature sets could be decomposed as we have chosen to do.\n\n\\section{Examples}\n\\label{sec:examples}\nIn this section, we demonstrate our method on a variety of synthetic and real-world examples.\nThe embedding GP model is implemented using \\texttt{gptorch}\\footnote{\\url{https:\/\/github.com\/cics-nd\/gptorch}}, a Gaussian process framework built on PyTorch \\cite{paszke2017automatic}.\nSource code for implementing the model and synthetic experiments described below can be found on GitHub\\footnote{\\url{https:\/\/github.com\/sdatkinson\/BEBO}}.\nFor each system, we consider both a regression and optimization problem using our embedding GP model with both probabilistic and deterministic embedding layers; the latter is achieved by replacing the posterior of Eq.\\ (\\ref{eqn:qz}) with a delta function at its mode.\nWe use an RBF kernel as in Eq.\\ (\\ref{eqn:exponentiated-quadratic}) and constant mean function whose value is determined via an MLE point estimate.\n\nWe compare our results against a vanilla Gaussian process metamodel that is unable to directly leverage the legacy data as a baseline.\nSince GP models typically fare very poorly the the extreme low-data regime that we are interested in, we conduct full Bayesian inference over the mean and kernel function parameters, using as prior a factorized Gaussian over the parameters\\footnote{Or the logarithm of the parameters that are constrained to be positive.} whose mean and variance are estimated by fitting GPs to each legacy task and using the statistics of the point estimates of the model parameters found therein.\nInference for the Bayesian GP is carried out using Hamiltonian Monte Carlo \\cite{duane1987hybrid} with the NUTS sampler \\cite{hoffman2014no} as implemented in Pyro \\cite{bingham2018pyro}.\n\n\\subsection{Systems}\nWe begin by describing the systems that we consider.\nEach system contains a set of tasks that we will jointly learn.\n\n\\subsubsection{Toy system}\n\\label{sec:examples:systems:toy}\nWe first consider a system with one-dimensional input and $\\Omega_x = [0, 1]$.\nThe set of response surfaces are given by\n\\begin{equation}\n\t\\eta(x, \\vc \\theta) = 0.1 * z^4 - z^2 + (2.0 + \\theta_2) \\sin(2z),\n\\end{equation}\nwhere we define $z = \\theta_1 + 4x - 4$.\nDifferent response surfaces are selected by sampling $\\vc \\theta \\sim \\mc U[0, 1]^2$.\nWe consider a problem setting where $5$ legacy tasks are available, each with $5$ data with inputs sampled uniformly from $\\Omega_x$.\nIn the regression setting, we randomly sample a number of data from one response surface as the current task and split it into a training and test set.\nWe quantify our prediction accuracy in terms of mean negative log probability (MNLP), mean absolute error (MAE), and root mean square error (RMSE).\nIn the optimization setting, we perform Bayesian optimization over the current task and track the best design point as a function of the number of evaluations on the current task.\nWe repeat both experiments with $10$ different random seeds to quantify the performance statistics of each metamodeling approach.\n\n\\subsubsection{Forrester 
function}\n\\label{sec:examples:systems:forrester}\nWe also consider a generalization of the Forrester function \\cite{forrester2007multi} to a many-task setting:\n\\begin{equation}\n\t\\eta(x; \\vc \\theta) = \\theta_1 * \\eta_{forrester}(x) + \\theta_2 * (x - 0.5) + \\theta_3,\n\t\\label{eqn:forrester-param}\n\\end{equation}\nwhere \n\\begin{equation}\n\t\\eta_{forrester}(x) = (6x-2)^2 \\sin(12x-4).\n\t\\label{eqn:forester-hi}\n\\end{equation}\nUnder the parameterization of Eq.\\ (\\ref{eqn:forrester-param}), the original ``high-fidelity'' function considered in \\cite{forrester2007multi} is obtained when $\\vc \\theta = (1, 0, 0)$, and the low-fidelity function corresponds to $\\vc \\theta = (0.5, 10, -5)$.\nWe always include the low-fidelity function in the legacy tasks along with other tasks generated by sampling \n$\\vc \\theta \\sim \\mc U[0, 1] \\times \\mc U[0, 10] \\times \\mc U[-5, 5]$.\nNote that this places the high-fidelity task at an extremum of the system parameter space, meaning that multi-task models will not necessarily benefit by assuming \\textit{a priori} that the held-out task lives at the center of latent space (assuming that their learned latent representation resembles that of the parameterization we have chosen).\nWe consider a design space $\\Omega_x = [0, 1]$.\nEach of the 5 legacy tasks includes 5 data where the input was sampled uniformly in $\\Omega_x$.\nWe consider a regression and an optimization with identical setup to the toy system in Sec.\\ \\ref{sec:examples:systems:toy}.\n\n\\subsubsection{Pump data}\n\\label{sec:examples:systems:pump}\nOur first real-world dataset comprises of data in which the performance of a pump is quantified in terms of seven design variables.\nOur data includes data from 6 pump families, each with of which has between $11$ and $55$ data.\nWe consider each pump family in turn as the ``current'' task while using the other families as legacy task data.\nWe repeat our experiment 6 times, each time using a different pump family as the current task and the other 5 datasets as legacy task data.\nIn the regression setting, we split the current task data into a train and test set, training our metamodel on all of the legacy data as well as whatever data is available from the current task, then quantify our predictions on the held-out test set of the current task in terms of MNLP, MAE, and RMSE.\nEach experiment is repeated 10 times with different random initializations and train-test splits.\nIn the optimization setting, we begin with no evaluations from the current task and carry out Bayesian optimization using the available data as a (finite) design set.\nData are selected at each iteration according the acquisition function of Eq.\\ (\\ref{eqn:prob-best}).\nWe track the best evaluation (relative to the per-task best design) as a function of the number of evaluations on the current task.\n\n\\subsubsection{Additive manufacturing data}\n\\label{sec:examples:systems:additive}\nWe consider a dataset in which the performance of an additive manufacturing process is quantified in terms of four design variables.\nWe have data from $35$ different chemistries, each of which has $24$ data.\nWe conduct experiments on each of the $35$ chemistries in an analogous manner to the experiments on the pump data.\n\n\\subsection{Regression results}\nFigure \\ref{fig:examples:regression:synthetic} shows regression results for the synthetic systems described in Sections \\ref{sec:examples:systems:toy} and \\ref{sec:examples:systems:forrester}.\nWe notice 
that the embedding GPs typically outperform the Bayesian GP by a margin, though there is overlap in the statistics of their performance over repeats.\nCuriously, we notice that the deterministic embedding GP occasionally outperforms the Bayesian embedding in terms of RMSE and MAE, but fails catastrophically in terms of MNLP, where the overconfidence of its predictions is severely penalized.\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{fig2.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig3.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig4.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig5.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig6.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig7.pdf}\n\t\\caption{\n\t\t(Regression, synthetic systems)\n\t\tPerformance on held-out tasks.\n\t\tFrom left to right, performance in terms of root mean squared error (RMSE), mean absolute error (MAE), and median negative log probability (MNLP).\n\t\tTop row: Forrester function.\n\t\tBottom row: synthetic system of functions.\n\t\tSolid lines show the median performance over 10 random seeds and the shaded region shows the 80\\% coverage interval.\n\t}\n\t\\label{fig:examples:regression:synthetic}\n\\end{figure}\n\nFigure \\ref{fig:examples:regression:pump} shows our results for the pump dataset.\nFor this problem, we observe mixed results with the embedding GPs outperforming the baseline on some of the tasks, but underperforming on others. \nWe hypothesize that this is because certain tasks are substantially different from the others, causing the prior information from legacy tasks to be unhelpful.\nHowever, on certain tasks (3 and 4), we see that the Bayesian embedding GP outperforms the baseline even in a zero-shot setting.\nWe also notice that the deterministic embedding GP consistently gives rather poor uncertainty estimates as evidenced by its MNLP scores.\nThus, we see a clear benefit to the Bayesian embedding approach for improving the credibility of the predictions.\n\n\\begin{figure}[hb!t]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{fig8.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig9.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig10.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig11.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig12.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig13.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig14.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig15.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig16.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig17.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig18.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig19.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig20.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig21.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig22.pdf}\n\t\\caption{\n\t\t(Regression, pump dataset)\n\t\tRegression performance on held-out tasks.\n\t\tFrom left to right, performance in terms of root mean squared error (RMSE), mean absolute error (MAE), and median negative log probability (MNLP).\n\t\tEach row shows the performance for a different held-out task.\n\t\tSolid lines show the median performance over 10 random seeds and the shaded region shows the 80\\% coverage interval.\n\t}\n\t\\label{fig:examples:regression:pump}\n\\end{figure}\n\nFigure \\ref{fig:examples:regression:additive} shows our results for the additive dataset for the first few held-out tasks.\nIn 
contrast to the pump problem, we often see considerable improvements by using the embedding models.\nFor many of the held-out tasks, the \\textit{zero-shot} predictions from the embedding models are, on average, superior to those of the Bayesian GP even after it has seen 10 examples from the current task.\nOn others (e.g.\\ RMSE, MAE for Task 2 in Fig. \\ref{fig:examples:regression:additive}), the Bayesian GP does somewhat outperform the embedding models, though the embedding models continue to improve.\nThis may be because the prior knowledge built up from the legacy tasks is unhelpful for the held-out task; the embedding model must learn to ``overturn'' this prior knowledge by finding a way to separate the embedding of the held-out task from the unrelated legacy tasks.\nHowever, this requires observations to gradually overturn the belief via Bayesian updating.\n\nAdditionally, we see again that the deterministic embedding consistently results in very poor performance in terms of MNLP, frequently performing so much worse that it is impractical to show on the same axes as the Bayesian GP and Bayesian EGP models.\nFigure \\ref{fig:examples:regression:additive:mnlp-no-zoom} shows the results when zoomed out to view the performance of the deterministic EGP for one task.\nWe see a similar deficiency in all of the other held-out tasks, firmly underscoring that it is critical to account for uncertainty in the embedding in order to obtain credible predictions.\nThis makes sense, given the high number of tasks we must embed relative to the number of data available to elucidate their input-output characteristics.\nThis is not uncommon in engineering design problems, where one typically has a limited computational budget to explore any given design problem, but may potentially have access to a large archive of legacy tasks.\n\n\\begin{figure}[hb!t]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{fig23.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig24.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig25.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig26.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig27.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig28.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig29.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig30.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig31.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig32.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig33.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig34.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig35.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig36.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig37.pdf}\n\t\\caption{\n\t\t(Regression, additive dataset)\n\t\tRegression performance on held-out tasks.\n\t\tFrom left to right, performance in terms of root mean squared error (RMSE), mean absolute error (MAE), and median negative log probability (MNLP).\n\t\tEach row shows the performance for a different held-out task.\n\t\tSolid lines show the median performance over 10 random seeds and the shaded region shows the 80\\% coverage interval.\n\t}\n\t\\label{fig:examples:regression:additive}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{fig38.pdf}\n\t\\caption{\n\t\t(Regression, additive dataset)\n\t\tZoomed-out MNLP performance for held-out task 0.\n\t\tThe deterministic embedding GP performs worse than the other methods by orders of 
magnitude.\n\t}\n\t\\label{fig:examples:regression:additive:mnlp-no-zoom}\n\\end{figure}\n\n\\subsection{Optimization results}\nFigure \\ref{fig:examples:optimization:synthetic:running_best} shows the best design discovered for the synthetic systems as a function of the number of evaluations on the current task.\nWe report the median performance over $10$ splits as well as the 80\\% coverage interval ($10$-th to $90$-th percentile).\nWe see that the legacy task data enables the embedding GPs to consistently find near-optimal designs with either a single evaluation (Forrester) or \\textit{without any data from the current task} (synthetic).\nBy contrast, the GP usually requires a handful of evaluations before it begins to find good solutions.\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{fig39.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{fig40.pdf}\n\t\\caption{\n\t\tBayesian optimization for the synthetic and Forrester systems.\n\t\tThe solid curves show the median performance over $10$ repeats, and the shaded region shows the 80\\% coverage interval over all repeats.\n\t}\n\t\\label{fig:examples:optimization:synthetic:running_best}\n\\end{figure}\n\nFigures \\ref{fig:examples:optimization:synthetic:posterior} and \\ref{fig:examples:optimization:forrester:posterior} show the zero- and one-shot posteriors of all three methods on the toy and Forrester systems, respectively.\nWe see that the legacy task data equip the embedding GPs with helpful priors that aid them in identifying the response surface.\nBy contrast, the Bayesian GP guesses randomly in the zero-shot setting and has minimal insight for its one-shot predictions.\nWe also notice that while the deterministic embedding model seems to fortunately pick good designs, its model of the response surface is highly overconfident.\nBy contrast, we see that the Bayesian embedding endows our models with well-calibrated uncertainty estimates that simultaneously allow us to find good designs while retaining a credible metamodel.\nThus, while the EGP performs comparably to the BEGP, we must be cautious about generalizing these results.\nWe also point out that while it seems like the BGP has picked a very good first point in Fig.\\ \\ref{fig:examples:optimization:synthetic:posterior}, this selection is by pure luck since the BGP's predictive distribution with zero training data (i.e.\\ the prior) is flat in its inputs. 
Indeed, we see that it picks the same first point for the Forrester function in Fig.\\ \\ref{fig:examples:optimization:forrester:posterior}, which is not nearly so close to optimal.\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{fig41.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig42.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig43.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig44.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig45.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig46.pdf}\n\t\\caption{\n\t\tPredictive posteriors for the metamodels on the toy system.\n\t\tTop row: zero-shot predictions.\n\t\tBottom row: one-shot predictions.\n\t\tFrom left to right: BGP, EGP, BEGP metamodels.\n\t}\n\t\\label{fig:examples:optimization:synthetic:posterior}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{fig47.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig48.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig49.pdf}\n\t\\\\\n\t\\includegraphics[width=0.3\\textwidth]{fig50.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig51.pdf}\n\t\\includegraphics[width=0.3\\textwidth]{fig52.pdf}\n\t\\caption{\n\t\tPredictive posteriors for the metamodels on the Forrester system.\n\t\tTop row: zero-shot predictions.\n\t\tBottom row: one-shot predictions.\n\t\tFrom left to right: BGP, EGP, BEGP metamodels.\n\t}\n\t\\label{fig:examples:optimization:forrester:posterior}\n\\end{figure}\n\nFigure \\ref{fig:examples:optimization:real:running_best} shows the running best design for the pump and additive systems as a function of the number of current task evaluations.\nSince each legacy system has a different optimum, results show the performance relative to the best design for the current task.\nAgain, we see that the embedding GPs vastly outperform the vanilla GP approach, usually finding good designs on their first attempts.\nHowever, here we see that the Bayesian embedding gives better results than a deterministic embedding; this can be directly attributed to our earlier observation that the deterministic embedding tends to overfit, particularly when there are very few data to inform the embedding.\nBy contrast, the Bayesian embedding safely guards against overfitting and provides robust performance.\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{fig53.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{fig54.pdf}\n\t\\caption{\n\t\tBayesian optimization for the pump and additive systems.\n\t\tResults are shown relative to the per-task best design.\n\t\tThe solid curves show the median performance over all repeats, and the shaded region shows the 80\\% coverage interval.\n\t}\n\t\\label{fig:examples:optimization:real:running_best}\n\\end{figure}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe have introduced a method for Bayesian optimization applicable to settings where one has access to a variety of related datasets but does not know how they relate to each other \\textit{a priori}.\nThe Bayesian embedding GP automatically learns latent representations that enable a single metamodel to share information across multiple tasks, thereby improving the predictive performance in all cases, particularly in novel settings where limited data is available.\nWe observe that our method enables Bayesian optimization to more quickly and reliably find good solutions compared to traditional methods.\n\nWe find that variational inference with rudimentary variational 
approximations (factorized Gaussians) over the latent variables provides sufficient uncertainty information for credible estimates to be obtained in practice, and is robust with respect to the number of latent dimensions.\nOur experiments show that this Bayesian embedding is critical for obtaining credible uncertainty quantification.\n\nThere are a number of ways in which our work might be extended.\nFor one, our method might be applied to optimization settings with unknown constraints.\nIn such cases, one might simultaneously model the probability of violating a constraint with a multi-task model.\nModeling binary outputs such as constraint violation with GPs requires non-conjugate inference, which can be enabled using techniques from \\cite{hensman2015scalable}.\nSecond, our approach is straightforward to apply to problems of robust design.\nIn fact, we have already shown how uncertainty in our metamodel can be propagated efficiently during design selection through stochastic backpropagation, eliminating the need for expensive double-loop optimization.\nOther approaches \\cite{ryan2018gaussian, ling2018efficient} exploit analytic properties to solve the inner-loop optimization, though our work suggests that it might be approximately solved in general using stochastic optimization.\nFinally, our model learns a single embedding for each general input (task) as a latent vector.\nWe would like to explore the effect of a hierarchical latent representation in which the embedding of each task is allowed to vary with $\\vc x^{(r)}$.\nSuch a conditional multi-task model might enable one to exploit partial similarities between tasks in certain regions of input space while still providing for the possibility that their trends might differ elsewhere.\nWhile this was studied in \\cite{ghosh2018bayesian}, we expect that our Bayesian latent representation might improve performance in few-shot settings such as those considered in this work.\n\n\\section*{Funding Sources}\nFunding for this work was provided internally by GE Research.\n\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMarkov decision processes (MDPs) model sequential decision-making in stochastic systems with nondeterministic choices. A policy, i.e., a decision strategy, resolves the nondeterminism in an MDP and induces a stochastic process. In this regard, an MDP represents an (infinite) family of stochastic processes. In this paper, for a given MDP, we aim to synthesize a policy that induces a process with maximum entropy among the ones whose paths satisfy a temporal logic specification. \n\nEntropy, as an information-theoretic quantity, measures the unpredictability of outcomes in a random variable \\cite{Cover}. Considering a stochastic process as an infinite sequence of (dependent) random variables, we define the entropy of a stochastic process as the joint entropy of these random variables by following \\cite{Biondi},\\cite{Chen}. Therefore, intuitively, our objective is to obtain a process whose paths satisfy a temporal logic specification in the most unpredictable way to an observer. \n\nTypically, in an MDP, a decision-maker is interested in satisfying certain properties \\cite{Behcet} or accomplishing a task \\cite{Jie}. 
Linear temporal logic (LTL) is a formal specification language \\cite{Pneuli} that has been widely used to check the reliability of software \\cite{Tan}, describe tasks for autonomous robots \\cite{Gazit, Belta} and verify the correctness of communication protocols \\cite{Kumar}. For example, in a robot navigation scenario, it allows to specify tasks such as safety (never visit the region A), liveness (eventually visit the region A) and priority (first visit the region A, then B). \n\nThe entropy of paths of a (Markovian) stochastic process is introduced in \\cite{Ekroot} and quantifies the randomness of realizations with fixed initial and final states. We first extend the definition for the entropy of paths to realizations that reach a certain set of states, rather than a fixed final state. Then, we show that the entropy of a stochastic process is equal to the entropy of paths of the process, if the process has a finite entropy. The established relation provides a mathematical basis to the intuitive idea that maximizing the entropy of an MDP minimizes the predictability of paths.\n\nWe observe that the maximum entropy of an MDP under stationary policies may not exist, i.e., for any given level of entropy, using stationary policies, one can induce a process whose entropy is greater than that level. In this case, we say that the maximum entropy of the MDP is unbounded. Additionally, if there exists a process with the maximum entropy, the entropy of such a process can be finite or infinite. Hence, before attempting to synthesize a policy that maximizes the entropy of an MDP, we first verify whether there exists a policy that attains the maximum entropy. \n\nThe contributions of this paper are fourfold. First, we provide necessary and sufficient conditions on the structure of the MDP under which the maximum entropy of the MDP is finite, infinite or unbounded. We also present a polynomial-time algorithm to check whether the maximum entropy of an MDP is finite, infinite or unbounded. Second, we present a polynomial-time algorithm based on a convex optimization problem to synthesize a policy that maximizes the entropy of an MDP. Third, we show that maximizing the entropy of an MDP with non-infinite maximum entropy is equivalent to maximizing the entropy of paths of the MDP. Lastly, we provide a procedure to obtain a policy that maximizes the entropy of an MDP subject to a general LTL specification. \n\nThe applications of this theoretical framework range from motion planning and stochastic traffic assignments to software security. In a motion planning scenario, for security purposes, an autonomous robot might need to randomize its paths while carrying out a mission \\cite{Paruchuri, Paruchuri2}. In such a scenario, a policy synthesized by the proposed methods both provides probabilistic guarantees on the completion of the mission and minimizes the predictability of the robot's paths through the use of online randomization mechanisms. Additionally, such a policy allows the robot to explore different parts of the environment \\cite{Saerens}, and behave robustly against uncertainties in the environment \\cite{Deep_learning}. The proposed methods can also be used to distribute traffic assignments over a network, which is known as stochastic traffic assignments \\cite{Akamatsu}, as it promotes the use of different paths. 
Finally, as it is shown in \\cite{Biondi}, the maximum information that an adversary can leak from a (deterministic) software, which is modeled as an MDP, can be quantified by computing the maximum entropy of the MDP. \n\n\\textbf{Related Work.} A preliminary version \\cite{Yagiz} of this paper considered entropy maximization problem for MDPs subject to expected reward constraints. This considerably extended version includes an additional section establishing the relation between the maximum entropy of an MDP and the entropy of paths of the MDP, detailed proofs for all theoretical results, and additional numerical examples.\n\nThe computation of the maximum entropy of an MDP is first considered in \\cite{Chen}, where the authors present a robust optimization problem to compute the maximum entropy for an MDP with finite maximum entropy. However, their approach does not allow to incorporate additional constraints due to the formulation of the problem. References \\cite{Biondi} and \\cite{Biondi2} compute the maximum entropy of an MDP for special cases without providing a general algorithm. \n\nThe work \\cite{Biondi} provides the necessary and sufficient conditions for an interval Markov chain (MC) to have a finite maximum entropy. Therefore, some of the results provided in this paper, e.g., the necessary and sufficient conditions for an MDP to have finite, unbounded or infinite maximum entropy, can be seen as an extension of the results given in \\cite{Biondi}.\n\nIn \\cite{Bullo,Bullo2}, the authors study the problem of synthesizing a transition matrix with maximum entropy for an irreducible MC subject to graph constraints. The problem studied in this paper is considerably different from that problem since MDPs represent a more general model than MCs, and an MC induced from an MDP by a policy is not necessarily irreducible.\n\nIn \\cite{Paruchuri}, the authors maximize the entropy of a \\textit{policy} while keeping the expected total reward above a threshold. They claim that the entropy maximization problem is not convex. Their formulation is a special case of the convex optimization problem that we provide in this paper. Therefore, here, we also prove the convexity of their formulation.\n\nThe entropy of paths of absorbing MCs is discussed in \\cite{Ekroot}, \\cite{Akamatsu}, \\cite{Kafsi}. The reference \\cite{Saerens} establishes the equivalence between the entropy of paths and the entropy of an absorbing MC. We establish this relation for a general MC and show the connections to the maximum entropy of an MDP. \n\n\n\nWe also note that none of the above work discusses the unbounded and infinite maximum entropy for an MDP or considers LTL to specify desired system properties.\n\n\\textbf{Organization.} We provide the preliminary definitions and formal problem statement in Sections \\ref{Prelim} and \\ref{problem_set}, respectively. We analyze the properties of the maximum entropy of an MDP and present an algorithm to synthesize a policy that maximizes the entropy of an MDP in Section \\ref{max-ent}. The relation between the maximum entropy of an MDP and the entropy of paths is established in Section \\ref{relate_paths}. We present a procedure to synthesize a policy that maximizes the entropy of an MDP subject to an LTL specification in Section \\ref{cons_section}. We provide numerical examples in Section \\ref{examples_section} and conclude with suggestions for future work in Section \\ref{conclusion_section}. 
Proofs for all results are provided in Appendix \\ref{proofs_appendix}, and a procedure to synthesize a policy that maximizes the entropy of an MDP with infinite maximum entropy is presented in Appendix \\ref{infinite_case_appendix}.\n\n\\section{Preliminaries}\\label{Prelim}\\noindent\n\\textbf{Notation:} For a set $S$, we denote its power set and cardinality by $2^S$ and $\\lvert S \\rvert$, respectively. For a matrix $P$$\\in$$\\mathbb{R}^{n \\times n}$, we use $P^k$ and $P^k_{i,j}$ to denote the k-th power of $P$ and the $(i,j)$-th component of the k-th power of $P$, respectively. All logarithms are to the base 2 and the set $\\mathbb{N}$ denotes $\\{0,1,2,\\ldots\\}$.\n\\subsection{Markov chains and Markov decision processes}\\label{MDP_subsection}\n{\\setlength{\\parindent}{0cm}\n\\begin{definition}\nA \\textit{Markov decision process} (MDP) is a tuple $\\mathcal{M}$$=$$(S, s_0, \\mathcal{A}, \\mathbb{P},\\mathcal{AP},\\mathcal{L})$ where $S$ is a finite set of states, $s_0$$\\in$$S$ is the initial state, $\\mathcal{A}$ is a finite set of actions, $ \\mathbb{P}$$:$$S$$\\times$$ \\mathcal{A}$$ \\times$$ S$$\\rightarrow$$[0,1]$ is a transition function such that $\\sum_{t\\in S}\\mathbb{P}(s,a,t)$$=$$1$ for all $s$$\\in$$S$ and $a$$\\in$$\\mathcal{A}$, $\\mathcal{AP}$ is a set of atomic propositions, and $\\mathcal{L}$ $:$ $ S$$\\rightarrow$$ 2^{\\mathcal{AP}}$ is a function that labels each state with a subset of atomic propositions.\n\\end{definition}}\nWe denote the transition probability $ \\mathbb{P}(s,a,t)$ by $ \\mathbb{P}_{s,a,t}$, and all available actions in a state $s$$\\in$$S$ by $\\mathcal{A}(s)$. The set of successor states for a state action pair $(s,a)$ is defined as $Succ(s,a)$$:=$$\\{t $$\\in$$ S | \\mathbb{P}_{s,a,t}$$>$$0, a$$\\in$$\\mathcal{A}(s)\\}$. The \\textit{size of an MDP} is the number of triples $(s,a,t)$$\\in$$S$$\\times$$\\mathcal{A}$$\\times$$S$ such that $\\mathbb{P}_{s,a,t}$$>$$0$. \n\nA \\textit{Markov chain} (MC) $\\mathcal{C}$ is an MDP such that $\\lvert \\mathcal{A}\\rvert$$=$$1$. We denote the transition function (matrix) for an MC by $\\mathcal{P}$, and the set of successor states for a state $s$$\\in$$S$ by $Succ(s)$$=$$\\{t $$\\in$$ S | \\mathcal{P}_{s,t}$$>$$0\\}$. The \\textit{expected residence time} in a state $s$$\\in$$ S$ for an MC $\\mathcal{C}$ is defined as\n\\begin{align}\n\\label{residence}\n\\xi_s:=\\sum_{k=0}^{\\infty}\\mathcal{P}^k_{s_0,s}.\\end{align}\nThe expected residence time $\\xi_s$ represents the expected number of visits to state $s$ starting from the initial state ~\\cite{Marta}. A state $s$$\\in$$S$ is \\textit{recurrent} for an MC if and only if $\\xi_s$$=$$\\infty$, and is \\textit{transient} otherwise; it is \\textit{stochastic} if and only if it satisfies $\\lvert Succ(s)\\rvert$$>$$1$, and is \\textit{deterministic} otherwise; and it is \\textit{reachable} if and only if $\\xi_s$$>$$0$, and is \\textit{unreachable} otherwise. \n{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{definition}\nA \\textit{policy} for an MDP $\\mathcal{M}$ is a sequence $\\pi$$=$$\\{\\mu_0, \\mu_1, \\ldots\\}$ where each $\\mu_k $$:$$ S $$ \\times$$ \\mathcal{A}$$\\rightarrow$$[0,1]$ is a function such that $\\sum_{a\\in \\mathcal{A}(s)}\\mu_k(s,a)$$=$$1$ for all $s$$\\in$$S$. A \\textit{stationary} policy is a policy of the form $\\pi$$=$$\\{\\mu, \\mu, \\ldots\\}$. 
For an MDP $\\mathcal{M}$, we denote the set of all policies and all stationary policies by $\\Pi(\\mathcal{M})$ and $\\Pi^S(\\mathcal{M})$, respectively.\n\\end{definition}}\n We denote the probability of choosing an action $a$$\\in$$\\mathcal{A}(s)$ in a state $s$$\\in$$S$ under a stationary policy $\\pi$ by $\\pi_s(a)$. For an MDP $\\mathcal{M}$, a stationary policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$ induces an MC denoted by $\\mathcal{M}^{\\pi}$. We refer to $\\mathcal{M}^{\\pi}$ as \\textit{induced MC} and specify the transition matrix for $\\mathcal{M}^{\\pi}$ by $\\mathcal{P}^{\\pi}$, whose $(s,t)$-th component is given by\n\\begin{align}\n\\label{induced_MC_prob}\n\\mathcal{P}^{\\pi}_{s,t}=\\sum_{a\\in\\mathcal{A}(s)}\\pi_s(a) \\mathbb{P}_{s,a,t}.\n\\end{align}\nThroughout the paper, we assume that for a given MDP $\\mathcal{M}$, for any state $s$$\\in$$S$ there exists an induced MC $\\mathcal{M}^{\\pi}$ for which the state $s$ is reachable. This is a standard assumption for MDPs \\cite{Belta}, which ensures that each state in the MDP is reachable under some policy.\n\nAn infinite sequence $\\varrho^{\\pi}$$=$$s_0s_1s_2\\ldots$ of states generated in $\\mathcal{M}$ under a policy $\\pi$$\\in$$\\Pi(\\mathcal{M})$ is called a \\textit{path}, starting from the initial state $s_0$ and satisfies $\\sum_{a_k\\in \\mathcal{A}(s_k)}\\mu_k(s_k)(a_k)\\mathbb{P}_{s_k,a_k,s_{k+1}}$$>$$0$ for all $k$$\\geq$$0$. Any finite prefix of $\\varrho^{\\pi}$ that ends in a state is a finite path fragment. We define the set of all paths and finite path fragments in $\\mathcal{M}$ under the policy $\\pi$ by $Paths^{\\pi}(\\mathcal{M})$ and $Paths_{fin}^{\\pi}(\\mathcal{M})$, respectively. \n\nWe use the standard probability measure over the outcome set $Paths^{\\pi}(\\mathcal{M})$ \\cite{Model_checking}. For a path $\\varrho^{\\pi}$$\\in$$Paths^{\\pi}(\\mathcal{M})$, let the sequence $s_0s_1\\ldots s_n$ be the finite path fragment of length $n$, and let $Paths^{\\pi}(\\mathcal{M})(s_0s_1\\ldots s_n)$ denote the set of all paths in $Paths^{\\pi}(\\mathcal{M})$ starting with the prefix $s_0s_1\\ldots s_n$. The probability measure $\\text{Pr}_{\\mathcal{M}}^{\\pi}$ defined on the smallest $\\sigma$-algebra over $Paths^{\\pi}(\\mathcal{M})$ that contains $Paths^{\\pi}(\\mathcal{M})(s_0s_1\\ldots s_n)$ for all $s_0s_1\\ldots s_n$$\\in$$Paths_{fin}^{\\pi}(\\mathcal{M})$ is the unique measure that satisfies \n{\\setlength{\\mathindent}{0pt}\n\\begin{flalign}\n\\label{first_measure}\\noindent\n\\text{Pr}_{\\mathcal{M}}^{\\pi}\\{Paths^{\\pi}&(\\mathcal{M})(s_0\\ldots s_n)\\}=\\nonumber \\\\\n&\\prod_{0\\leq k < n} \\sum_{a_k\\in\\mathcal{A}(s_k)}\\mu_k(s_k)(a_k) \\mathbb{P}_{s_k,a_k, s_{k+1}}.\n\\end{flalign}}\\noindent\\vspace{-0.7cm}\n\n\\subsection{The entropy of stochastic processes}\\label{entropy_subsection}\nFor a (discrete) random variable $X$, its support $\\mathcal{X}$ defines a countable sample space from which $X$ takes a value $x$$\\in$$\\mathcal{X}$ according to a probability mass function (pmf) $p(x)$$:=$$\\text{Pr}(X$$=$$x)$. The \\textit{entropy} of a random variable $X$ with countable support $\\mathcal{X}$ and pmf $p(x)$ is defined as\n\\begin{align}\\centering\nH(X):=-\\sum_{x\\in\\mathcal{X}} p(x)\\log p(x).\n\\end{align}\nWe use the convention that $0$$\\log$$0$$=$$0$. Let $(X_0,X_1)$ be a pair of random variables with the joint pmf $p(x_0,x_{1})$ and the support $\\mathcal{X}\\times \\mathcal{X}$. 
The \\textit{joint entropy} of $(X_0,X_1)$ is \n\\begin{align}\n\\label{joint_entropy}\nH(X_0,X_1):= -\\sum_{x_0\\in \\mathcal{X}}\\sum_{x_{1}\\in \\mathcal{X}}p(x_0,x_{1})\\log p(x_0,x_{1}),\n\\end{align}\\noindent\nand the \\textit{conditional entropy} of $X_1$ given $X_0$ is\n\\begin{align}\n\\label{conditional_entropy}\n&H(X_1 | X_0):=-\\sum_{x_0\\in \\mathcal{X}}\\sum_{x_{1}\\in \\mathcal{X}}p(x_0,x_{1})\\log p(x_1 |x_0).\n\\end{align}\\noindent\nThe definitions of the joint and conditional entropies extend to collection of $k$ random variables as it is shown in \\cite{Cover}. \nA discrete \\textit{stochastic process} $\\mathbb{X}$ is a discrete time-indexed sequence of random variables, i.e., $\\mathbb{X}$$=$$\\{X_k$$\\in$$\\mathcal{X}$ $:$ $k$$\\in$$\\mathbb{N}\\}$. \n{\\setlength{\\parindent}{0cm}\n\\noindent\\begin{definition} (Entropy of a stochastic process) \\cite{Biondi_thesis}\nThe \\textit{entropy of a stochastic process} $\\mathbb{X}$ is defined as \n \\begin{align}\\label{entropy_def_stochastic}\n H(\\mathbb{X}):=\\lim_{k\\rightarrow \\infty}H( X_0, X_1\\ldots, X_{k}).\n \\end{align}\n \\end{definition}}\nNote that this definition is different from the \\textit{entropy rate} of a stochastic process, which is defined as $\\lim_{k\\rightarrow \\infty}\\frac{1}{k}H( X_0, X_1\\ldots, X_{k})$ when the limit exists \\cite{Cover}. The limit in \\eqref{entropy_def_stochastic} either converges to a non-negative real number or diverges to positive infinity \\cite{Biondi_thesis}. \n\nAn MC $\\mathcal{C}$ is equipped with a discrete stochastic process $\\{X_k$$\\in$$S$ $:$ $ k$$\\in$$\\mathbb{N}\\}$ where each $X_k$ is a random variable over the state space $S$. For a given k-dimensional pmf $p(s_0,s_1,\\ldots, s_k)$, this process respects the \\textit{Markov property}, i.e., $p(s_k|s_{k-1},\\ldots,s_0)$$=$$p(s_k|s_{k-1})$\nfor all $k$$\\in$$\\mathbb{N}$. Then, the \\textit{entropy of a Markov chain} $\\mathcal{C}$ is given by \n\\begin{align}\n\\label{process_entropy}\nH(\\mathcal{C})=H(X_0)+\\sum_{i=1}^{\\infty}H(X_{i}| X_{i-1})\n\\end{align} \nusing \\eqref{joint_entropy}, \\eqref{conditional_entropy} and \\eqref{entropy_def_stochastic}. Note that $H(X_0)$$=$$0$, since we define an MC with a unique initial state.\n\nFor an MDP $\\mathcal{M}$, a policy $\\pi$$\\in$$\\Pi(\\mathcal{M})$ induces a discrete stochastic process $\\{X_k $$\\in$$S$ $:$ $ k$$\\in$$\\mathbb{N}\\}$. We denote the entropy of an MDP $\\mathcal{M}$ under a policy $\\pi$$\\in$$\\Pi(\\mathcal{M})$ by $H(\\mathcal{M},\\pi)$. \nUsing the next proposition, we restrict our attention to stationary policies for maximizing the entropy of an MDP. \n{\\setlength{\\parindent}{0cm}\\noindent\n\\begin{prop}\n\\label{memoryless_1}\nThe following equality holds:\n\\vspace{-0.2cm}\n\\begin{align}\\label{eqeq}\n\\sup_{\\pi\\in\\Pi(\\mathcal{M})}H(\\mathcal{M},{\\pi})=\\sup_{\\pi\\in\\Pi^S(\\mathcal{M})}H(\\mathcal{M},{\\pi}).\\quad \\triangleleft\n\\end{align}\n\\end{prop}}A proof for Proposition \\ref{memoryless_1} is provided in Appendix \\ref{proofs_appendix}.\n{\\setlength{\\parindent}{0cm}\\noindent\\begin{remark} If the supremum in \\eqref{eqeq} is infinite, the set of stationary policies may not be sufficient to attain the supremum while a non-stationary policy can attain it. In particular, there exists a family of distributions that are defined over a countable support and have infinite entropy (see equation (7) in \\cite{Bacetti}). 
It can be shown that for some MDPs, there exists a non-stationary policy that induces a stochastic process with such a probability distribution, and hence, has infinite entropy, while stationary policies can only induce stochastic processes with finite entropies\\footnote{ A preliminary version \\cite{Yagiz} of this paper relied on Proposition 36 from \\cite{Chen}. This proposition is not valid in general. Here, we provide the corrected results by defining the maximum entropy of an MDP over stationary policies.}.\\end{remark}}\\vspace{-0.2cm}\n{\\setlength{\\parindent}{0cm}\\noindent\\begin{definition} (Maximum entropy of an MDP) The \\textit{maximum entropy of an MDP} $\\mathcal{M}$ is\n\\begin{align}\\label{max_ent_definition}\nH(\\mathcal{M}):=\\sup_{\\pi\\in\\Pi^{S}(\\mathcal{M})}H(\\mathcal{M},{\\pi}).\n\\end{align}\\end{definition}}\nA policy $\\pi^{\\star}$$\\in$$\\Pi^S(\\mathcal{M})$ \\textit{maximizes} the entropy of an MDP $\\mathcal{M}$ if $H(\\mathcal{M})$$=$$H(\\mathcal{M},{\\pi^{\\star}})$. Finally, we define the properties of the maximum entropy of an MDP as follows.{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{definition}(The properties of the maximum entropy)\\label{props_def}\nThe maximum entropy of an MDP $\\mathcal{M}$ is\n\\begin{itemize} \n\\item \\textit{finite}, if and only if\n\\begin{align}\\label{def_finite_ent}\nH(\\mathcal{M})=\\max_{\\pi\\in\\Pi^{S}(\\mathcal{M})}H(\\mathcal{M},{\\pi})<\\infty;\n\\end{align}\n\\item \\textit{infinite}, if and only if\n\\begin{align} H(\\mathcal{M})=\\max_{\\pi\\in\\Pi^{S}(\\mathcal{M})}H(\\mathcal{M},{\\pi})=\\infty; \n\\end{align}\n\\item \\textit{unbounded}, if and only if the following two conditions hold.\n\\begin{align}\\label{unbounded_definition_1}\n(i)\\qquad &H(\\mathcal{M})=\\sup_{\\pi\\in\\Pi^{S}(\\mathcal{M})}H(\\mathcal{M},{\\pi})=\\infty,\\\\ \\label{unbounded_definition_2}\n(ii)\\qquad &H(\\mathcal{M},{\\pi})<\\infty\\ \\ \\text{for all}\\ \\pi\\in\\Pi^{S}(\\mathcal{M}).\n\\end{align}\n\\end{itemize}\n\\end{definition}}\nAlthough it is not defined here, there is a fourth possible property which is unachievable finite maximum entropy, i.e., $\\max_{\\pi\\in\\Pi^S(\\mathcal{M})}H(\\mathcal{M},{\\pi})$$<$$H(\\mathcal{M})$$<$$\\infty$. In Theorem \\ref{Theorem1}, we show that it is not possible for the maximum entropy of an MDP to have this property.\n\\subsection{Linear temporal logic}\nWe employ linear temporal logic (LTL) to specify tasks and refer the reader to \\cite{Model_checking} for the syntax and semantics of LTL. \n\nAn LTL formula is built up from a set $\\mathcal{AP}$ of atomic propositions, logical connectives such as conjunction ($\\land$) and negation ($\\lnot$), and temporal modal operators such as always ($\\square$) and eventually ($\\lozenge$). An infinite sequence of subsets of $\\mathcal{AP}$ defines an infinite \\textit{word}, and an LTL formula is interpreted over infinite words on $2^{\\mathcal{AP}}$. We denote by $w$$\\models$$\\varphi$ that a word $w$$=$$w_0w_1w_2\\ldots$ satisfies an LTL formula $\\varphi$.\n{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{definition}\nA \\textit{deterministic Rabin automaton} (DRA) is a tuple $A$$=$$(Q,q_0, \\Sigma, \\delta, Acc)$ where $Q$ is a finite set of states, $q_0$$\\in$$Q$ is the initial state, $\\Sigma$ is the alphabet, $\\delta$$:$$Q$$\\times$$\\Sigma$$\\rightarrow$$Q$ is the transition relation, and $Acc$$\\subseteq$$2^{Q}$$\\times$$2^{Q}$ is the set of accepting state pairs. 
\n\\end{definition}}\nA \\textit{run} of a DRA $A$, denoted by $\\sigma$$=$$q_0q_1q_2\\ldots$, is an infinite sequence of states in $\\mathcal{A}$ such that for each $i$$\\geq$$0$, $q_{i+1}$$\\in$$\\delta(q_i, p)$ for some $p$$\\in$$\\Sigma$. A run $\\sigma$ is \\textit{accepting} if there exists a pair $(J,K)$$\\in$$Acc$ and an $n$$\\geq$$0$ such that (i) for all $m$$\\geq$$n$ we have $q_m$$\\not\\in$$J$, and (ii) there exists infinitely many $k$ such that $q_k$$\\in$$K$.\n\nFor any LTL formula $\\varphi$ built up from $\\mathcal{AP}$, a DRA $A_{\\varphi}$ can be constructed with input alphabet $2^{\\mathcal{AP}}$ that accepts all and only words over $\\mathcal{AP}$ that satisfy $\\varphi$ \\cite{Model_checking}. \n\nFor an MDP $\\mathcal{M}$ under a policy $\\pi$, a path $\\varrho^{\\pi}$$=$$s_0s_1\\ldots$ generates a word $w$$=$$w_0w_1\\ldots$ where $w_k$$=$$\\mathcal{L}(s_k)$ for all $k$$\\geq$$0$. With a slight abuse of notation, we use $\\mathcal{L}(\\varrho^{\\pi})$ to denote the word generated by $\\varrho^{\\pi}$. For an LTL formula $\\varphi$, the set $\\{\\varrho^{\\pi}$$\\in$$ Paths^{\\pi}(\\mathcal{M})$$:$$ \\mathcal{L}(\\varrho^{\\pi})$$\\models$$\\varphi\\}$ is measurable \\cite{Model_checking}. We define \n\\begin{equation*}\n\\begin{aligned}\n\\text{Pr}_{\\mathcal{M}}^{\\pi}(s_0\\models \\varphi):=\\text{Pr}_{\\mathcal{M}}^{\\pi}\\{\\varrho^{\\pi}\\in Paths^{\\pi}(\\mathcal{M}) : \\mathcal{L}(\\varrho^{\\pi})\\models \\varphi\\}\n\\end{aligned}\n\\end{equation*}\nas the probability of satisfying the LTL formula $\\varphi$ for an MDP $\\mathcal{M}$ under the policy $\\pi$$\\in$$\\Pi(\\mathcal{M})$. \n\\section{Problem Statement}\\label{problem_set}\nThe first problem we study concerns the synthesis of a policy that maximizes the entropy of an MDP.\n {\\setlength{\\parindent}{0cm}\n\\noindent \\begin{problem}(\\textbf{Entropy Maximization})\\label{prob_1}\nFor a given MDP $\\mathcal{M}$, provide an algorithm to verify whether there exists a policy $\\pi^{\\star}$$\\in$$\\Pi^S(\\mathcal{M})$ such that $H(\\mathcal{M})$$=$$H(\\mathcal{M},\\pi^{\\star})$. If such a policy exists, provide an algorithm to synthesize it. If it does not exist, provide a procedure to synthesize a policy $\\pi'$$\\in$$\\Pi^S(\\mathcal{M})$ such that $H(\\mathcal{M},\\pi')$$\\geq$$\\ell$ for a given constant $\\ell$. \n\\end{problem}}\n\nFor an MDP $\\mathcal{M}$, the synthesis of a policy $\\pi'$$\\in$$\\Pi^S(\\mathcal{M})$ such that $H(\\mathcal{M},\\pi')$$\\geq$$\\ell$ allows one to induce a stochastic process with the desired level of entropy, even if there exists no stationary policy that maximizes the entropy of $\\mathcal{M}$.\n\nIn the second problem, we introduce linear temporal logic (LTL) specifications to the framework. In particular, we consider the problem of synthesizing a policy that induces a stochastic process with maximum entropy whose paths satisfy a given LTL formula with desired probability. The formal statement of the second problem is deferred to Section \\ref{cons_section} since it requires the introduction of additional notations.\n\n\n\\section{Entropy maximization for MDPs }\n\\label{max-ent}\n In this section, we focus on the entropy maximization problem. 
We refer to a policy as an \\textit{optimal} policy for an MDP if it maximizes the entropy of the MDP.\n \\subsection{The entropy of MCs versus MDPs}\n For an MC, the \\textit{local entropy} of a state $s$$\\in$$S$ is defined as\n\\begin{align}\n\\label{local_entropy_def}\nL(s):=-\\sum_{t\\in S}\\mathcal{P}_{s,t}\\log \\mathcal{P}_{s,t}.\n\\end{align}\nThe following proposition characterizes the relationship between the local entropy of states and the entropy of an MC.{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{prop}\n\\label{Biondi_theorem}\n(Theorem 1 in \\cite{Biondi}) For an MC $\\mathcal{C}$,\n\\begin{align} \n\\label{MC_entropy_biondi}\nH(\\mathcal{C})=\\sum_{s\\in S}L(s)\\xi_s.\\quad \\triangleleft\n\\end{align}\n\\end{prop}}\\noindent\n\\par An MC $\\mathcal{C}$ has a finite entropy if and only if all of its recurrent states have zero local entropy \\cite{Biondi}. That is, $H(\\mathcal{C})$$<$$\\infty$ if and only if for all states $s$$\\in$$ S$, $\\xi_s$$=$$\\infty$ implies $L(s)$$=$$0$. If the entropy of an MC is finite, each recurrent state $s$$\\in$$S$ has a single successor state, i.e., $\\lvert Succ(s)\\rvert$$=$$1$. Consequently, recurrent states have no contribution to the sum in \\eqref{process_entropy}. In this case, we take the sum in \\eqref{MC_entropy_biondi} only over the transient states.\n\nFor an MDP, different policies may induce stochastic processes with different entropies. For example, consider the MDP given in Fig. \\ref{fig:MDP_1} and suppose that the action $a_1$ at state $s_0$ is taken with probability $\\varepsilon$. If we let $\\varepsilon$ range over $[0, \\frac{1}{2}]$, then the entropy of the resulting stochastic processes ranges over $[0,1]$. The optimal policy for this MDP is $\\pi_{s_0}(a_1)$$=$$\\pi_{s_0}(a_2)$$=$$1\/2$, which uniformly randomizes actions. \n\\par Unlike the MDP given in Fig. \\ref{fig:MDP_1}, the maximum entropy of an MDP is not generally achieved by a policy that chooses available actions at each state uniformly. For example, consider the MDP given in Fig. \\ref{fig:MDP_2}. 
The optimal policy for this MDP is $\\pi_{s_0}(a_1)$$=$$2\/3$, $\\pi_{s_0}(a_2)$$=$$1\/3$.\n\n\n\\begin{figure}[b]\\vspace{-0.4cm}\n\\hspace{0.05\\linewidth}\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\centering\n\\scalebox{0.9}{\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=1.5cm]\n\n \\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7]\n\n \\node[state,initial,initial text=] (s_0) {$s_0$};\n \\node[state] (s_1) [above right =6mm of s_0] {$s_1$};\n \\node[state] (s_2) [below right=6mm of s_0] {$s_2$};\n\n\\path\n(s_0) edge node{$a_1, 1$} (s_1)\n(s_0)\t edge node{$a_2, 1$} (s_2)\n(s_1)\t edge [loop right] node{$1$} (s_1)\n(s_2)\t edge [loop right] node{$ 1$} (s_2);\n\\end{tikzpicture}}\n\\caption{}\n\\label{fig:MDP_1}\n\\end{subfigure}\n\\hspace{0.1\\linewidth}\n\\begin{subfigure}[b]{0.4\\linewidth}\n\\centering\n\\scalebox{0.9}{\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=2cm]\n\n \\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7]\n\n \\node[state,initial,initial text=] (s_0) {$s_0$};\n \\node[state] (s_1) [above right =6mm of s_0] {$s_1$};\n \\node[state] (s_2) [below right=6mm of s_0] {$s_2$};\n\n\\path\n(s_0) edge [loop below] node{$a_1, 1\/3$} (s_0)\n(s_0) edge node{$a_1, 2\/3$} (s_1)\n(s_0)\t edge node{$a_2, 1$} (s_2)\n(s_1)\t edge [loop right] node{$1$} (s_1)\n(s_2)\t edge [loop right] node{$ 1$} (s_2);\n\\end{tikzpicture}}\n\\caption{}\n\\label{fig:MDP_2}\n\\end{subfigure}\n\\caption{Randomizing actions uniformly at each state may or may not achieve the maximum entropy. The optimal policy for the MDP given in (a) is $\\pi_{s_0}(a_1)$$=$$\\pi_{s_0}(a_2)$$=$$1\/2$, and for the MDP given in (b) is $\\pi_{s_0}(a_1)$$=$$2\/3$, $\\pi_{s_0}(a_2)$$=$$1\/3$.}\n\\label{fig:examples_policies}\n\\end{figure}\n\\par Examples given in Fig. \\ref{fig:examples_policies} show that finding an optimal policy for an MDP may not be trivial. To analyze the maximum entropy of an MDP, we first obtain a compact representation of the maximum entropy as follows. For an MC $\\mathcal{M}^{\\pi}$ induced from an MDP $\\mathcal{M}$ by a policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$, let the expected residence time in a state $s$$\\in$$S$ be \n\\begin{align}\n\\label{MDP_residence_time}\n\\xi^{\\pi}_s:=\\sum_{k=0}^{\\infty}(\\mathcal{P}^{\\pi})^k_{s_0,s}.\n\\end{align}\n\nAdditionally, let the local entropy of a state $s$$\\in$$S$ in $\\mathcal{M}^{\\pi}$ be\n$L^{\\pi}(s)$$:=$$-\\sum_{t\\in S}\\mathcal{P}^{\\pi}_{s,t}\\log\\mathcal{P}^{\\pi}_{s,t}$. Then, the maximum entropy of $\\mathcal{M}$ can be written as\n\\begin{flalign}\n\\label{objective_MDP}\n&H(\\mathcal{M})=\\sup_{\\pi\\in{\\Pi^S(\\mathcal{M})}}\\Big[\\sum_{s\\in S}\\xi^{\\pi}_sL^{\\pi}(s)\\Big].\n\\end{flalign}\nNote that the right hand side of \\eqref{objective_MDP} can still be infinite or unbounded. We analyze the properties of the maximum entropy of MDPs in the next section.\n\\subsection{Properties of the maximum entropy of MDPs}\\label{characteristics}\n\nThe maximum entropy of an MDP can be infinite or unbounded even for simple cases. For example, consider MDPs given in Fig. \\ref{fig:examples}. For the MDP shown in Fig. \\ref{fig:unbounded}, let the action $a_2$ be taken with probability $\\delta$$\\in$$(0,1]$ in state $s_0$. 
Then, the expected residence time $\\xi^{\\pi}_{s_0}$ in state $s_0$ is equal to $\\frac{1}{\\delta}$, and the entropy of the induced MC $\\mathcal{M}^{\\pi}$ is given by\n\\begin{align}\nH(\\mathcal{M},{\\pi})=-\\frac{(1-\\delta)\\log(1-\\delta)+\\delta\\log(\\delta)}{\\delta},\n\\end{align} \nwhich satisfies $H(\\mathcal{M},{\\pi})$$\\rightarrow$$\\infty$ as $\\delta$$\\rightarrow$$0$. Note also that if $\\delta$$=$$0$, the entropy of the induced MC is zero due to \\eqref{MC_entropy_biondi}. Hence, the maximum entropy is unbounded, and there is no optimal stationary policy for this MDP. \n\nFor the MDP given in Fig. \\ref{fig:infinite}, choosing a policy such that $\\pi_i(a_j)$$>$$0$ for $i$$=$$1,2$, $j$$=$$1,2$ yields $\\xi^{\\pi}_{s_0}$$=$$\\xi^{\\pi}_{s_1}$$=$$\\infty$ and $L^{\\pi}(s_0)$$>$$0$, $L^{\\pi}(s_1)$$>$$0$. Then, the maximum entropy of this MDP is infinite, and the maximum can be attained by any randomized policy. \n\n\\par Examples in Fig. \\ref{fig:examples} show that we should first verify the existence of optimal policies before attempting to synthesize them. We need the following definitions about the structure of MDPs to state the conditions that cause an MDP to have finite, infinite or unbounded maximum entropy.\n\n\\begin{figure}[b!]\\vspace{-0.5cm}\n\\begin{subfigure}[t]{0.33\\linewidth}\n\\scalebox{0.9}{\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=2cm]\n\n \\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7]\n\n \\node[state,initial,initial text=] (s_0) {$s_0$};\n \\node[state] (s_1) [right=10mm of s_0] {$s_1$};\n\n\\path\n(s_0) edge [loop above=10] node{$a_1, 1$} (s_0)\n(s_0)\t edge node{$a_2, 1$} (s_1)\n(s_1) edge [loop right=10] node{$1$} (s_1);\n\\end{tikzpicture}}\n\\caption{}\n\\label{fig:unbounded}\n\\end{subfigure}\n\\hspace{0.13\\linewidth}\n\\begin{subfigure}[t]{0.33\\linewidth}\n\\scalebox{0.9}{\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=2cm]\n\n \\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7]\n\n \\node[state,initial,initial text=] (s_0) {$s_0$};\n \\node[state] (s_1) [right=10mm of s_0] {$s_1$};\n\n\\path\n(s_0) edge [loop above=10] node{$a_1, 1$} (s_0)\n(s_0) edge [bend left=15] node{$a_2, 1$} (s_1)\n(s_1) edge [loop right=10] node{$a_1,1$} (s_1)\n(s_1) edge [bend left=15] node{$a_2,1$} (s_0);\n\\end{tikzpicture}}\n\\caption{}\n\\label{fig:infinite}\n\\end{subfigure}\n\\caption{Examples of MDPs with (a) unbounded maximum entropy and (b) infinite maximum entropy.}\n\\label{fig:examples}\n\\end{figure}\n\nA \\textit{directed graph} (digraph) is a tuple $G$$=$$(V,E)$ where $V$ is a set of vertices and $E$ is a set of ordered pairs of vertices $V$. For a digraph $G$, a path $v_1v_2\\ldots v_n$ from vertex $v_1$ to $v_n$ is a sequence of vertices such that $(v_k,v_{k+1})$$\\in$$E$ for all $1$$\\leq$$k$$<$$n$. A digraph $G$ is \\textit{strongly connected} if for every pair of vertices $u,v$$\\in$$V$, there is a path from $u$ to $v$, and $v$ to $u$. \n\nA \\textit{sub-MDP} of an MDP is a pair $(C,D)$ where $\\emptyset$$\\neq$$C$$\\subseteq$$S$ and $D$$: $$C$$\\rightarrow$$ 2^{\\mathcal{A}}$ is a function such that (i) $D(s)$$\\subseteq$$\\mathcal{A}(s)$ is non-empty for all $s$$\\in$$C$, and (ii) $s$$\\in$$C$ and $a$$\\in$$D(s)$ imply that $Succ(s,a)$$\\subseteq$$C$. 
An \\textit{end component} is a sub-MDP $(C,D)$ such that the digraph induced by $(C,D)$ is strongly connected.\n{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{definition}\nA \\textit{maximal end component} (MEC) $(C,D)$ in an MDP is an end component such that there is no end component $(C', D')$ with $(C,D)$$\\neq$$(C',D')$, and $C$$\\subseteq$$C'$ and $D(s)$$\\subseteq$$D'(s)$ for all $s$$\\in$$C$.\\end{definition}}\n \\par A MEC $(C,D)$ in an MDP is \\textit{bottom strongly connected} (BSC) if for all $s$$\\in$$ C$, $\\mathcal{A}(s)$$\\backslash$$D(s)$$=$$\\emptyset$. For a given state $s$$\\in$$C$, we define the set of all actions under which the MDP can leave the MEC $(C, D)$ as $D_0(s)$$:=$$\\{a$$\\in$$\\mathcal{A}(s) | Succ(s,a)$$\\not\\subseteq$$ C\\}$. Note that in a BSC MEC $(C,D)$, $D_0(s)$$=$$\\emptyset$ for all $s$$\\in$$C$. \n {\\setlength{\\parindent}{0cm}\n \\noindent \\begin{lemma}\\label{Successor}\nFor an MDP $\\mathcal{M}$ with MECs $(C_i,D_i)$ $i$$=$$1,2,\\ldots,n$, let $C$$:=$$\\cup_{i=1}^nC_i$ and $D$$:=$$\\cup_{i=1}^nD_i$. Then, there exists an induced MC $\\mathcal{M}^{\\pi}$ for which a state $s$$\\in$$C$ is both stochastic and recurrent if and only if $|\\cup_{a\\in D(s)}Succ(s,a)|$$>$$1$.$\\quad\\triangleleft$ \\end{lemma}}\n{\\setlength{\\parindent}{0cm}\n \\begin{thm}\\label{Theorem1}\nFor an MDP $\\mathcal{M}$ with MECs $(C_i,D_i)$ $i$$=$$1,2,\\ldots,n$, let $C$$:=$$\\cup_{i=1}^nC_i$ and $D$$:=$$\\cup_{i=1}^nD_i$. Then, the following statements hold. \\\\\n(i) $H(\\mathcal{M})$ is infinite if and only if there exists an induced MC for which a state $s$$\\in$$C$ is both stochastic and recurrent.\\\\\n(ii) $H(\\mathcal{M})$ is unbounded if and only if $|\\cup_{a\\in D(s)}Succ(s,a)|$$=$$1$ for all $s$$\\in$$C$, and there exists a MEC that is not bottom strongly connected. \\\\\n(iii) $H(\\mathcal{M})$ is finite if and only if it is not infinite and not unbounded.$\\quad \\triangleleft$\n\\end{thm}}\\noindent\n\nProofs for the above results can be found in Appendix \\ref{proofs_appendix}. Informally, Theorem \\ref{Theorem1} states that for an MDP to have finite maximum entropy, all recurrent states of all MCs that are induced from the MDP by a stationary policy should be deterministic. Although the necessary conditions for the finiteness of the maximum entropy are quite restrictive, there are some special cases, such as stochastic shortest path (SSP) problems \\cite{Bertsekas}, where MDP structures actually satisfy the necessary conditions. 
Specifically, since all \\textit{proper} policies in SSP problems are guaranteed to reach an absorbing target state within finite time steps with probability 1, the problem of synthesizing a \\text{proper} policy with maximum entropy has a finite solution.\n\nThe following corollary is due to Proposition \\ref{memoryless_1}, Theorem \\ref{Theorem1}, and the definition of finite maximum entropy \\eqref{def_finite_ent}.\n{\\setlength{\\parindent}{0cm}\n \\begin{corollary}\\label{corollary1} If $ \\sup_{\\pi\\in\\Pi(\\mathcal{M})}H(\\mathcal{M},{\\pi})$$<$$\\infty$, then we have\n \\begin{align}\n \\sup_{\\pi\\in\\Pi(\\mathcal{M})}H(\\mathcal{M},{\\pi})= \\max_{\\pi\\in\\Pi^S(\\mathcal{M})}H(\\mathcal{M},{\\pi}).\n \\end{align}\n \\end{corollary}}\n\n\n\\begin{algorithm}[b]\n \\caption{Verify the properties of the maximum entropy.}\n \\textbf{Require:} $\\mathcal{M}$$=$$(S,s_0, \\mathcal{A},\\mathbb{P},\\mathcal{AP},\\mathcal{L})$\\\\\n \\textbf{Return:} R\n\nFind: MECs $(C_i,D_i)$, $i=1,...,n$\n \nFind: $Succ(s,a)$ for all $s$$\\in$$S$, $a$$\\in$$\\mathcal{A}(s)$ \n\nR := $\\emptyset$;\n\n \\begin{algorithmic}\n\\For {$i$$=$$ 1,2,\\ldots,n$}\n \t\\For {$s$ \\textbf{in} $C_i$}\n\t\t\\If{$\\lvert \\cup_{a\\in D_i(s)} Succ(s,a)\\rvert$$>$$1$}\n\t\t\t\\State R := R $\\cup$ $\\{\\text{infinite}\\}$ ;\n\t\t\\EndIf\n\t\t \\If{ $\\mathcal{A}(s)\\backslash D_i(s)$$\\neq$$\\emptyset$ }\n\t\t\t\\State R := R $\\cup$ $\\{\\text{unbounded}\\}$ ;\n\t\t\\EndIf\n\t\t\n\n\t\\EndFor\n \\EndFor \n \\If{\\qquad \\ \\ \\text{infinite}$\\in$R } R=infinite \n \\ElsIf {\\ \\ \\text{unbounded}$\\in$R }\\ R=unbounded\n \\Else \\qquad R=finite\n \\EndIf\n \\end{algorithmic}\n \\label{algo_1}\n\\end{algorithm}\n\n\nWe present Algorithm \\ref{algo_1} which, for an MDP $\\mathcal{M}$, verifies whether $H(\\mathcal{M})$ is finite, infinite or unbounded by checking the necessary conditions in Theorem \\ref{Theorem1}. For $\\mathcal{M}$, its MECs can be found in $\\mathcal{O}(\\lvert S\\rvert ^2)$ time \\cite{Model_checking}, $Succ(s,a)$ can be found in $\\mathcal{O}(\\lvert S\\rvert ^2\\lvert \\mathcal{A}\\rvert)$ time, and the necessary conditions can be verified in $\\mathcal{O}(\\lvert S\\rvert)$ time since no state can belong to more than one MEC. Hence, Algorithm \\ref{algo_1} runs in polynomial-time in the size of $\\mathcal{M}$.\n\n\n\\subsection{Policy synthesis }\\label{policy_syntesis_section}\nWe now provide algorithms to synthesize policies that solve the entropy maximization problem. \n\n\\subsubsection{Finite maximum entropy}\\label{finite_policy_synthesis}\nWe first modify a given MDP by making all states in its MECs absorbing.\n{\\setlength{\\parindent}{0cm}\n\\begin{prop}\\label{modified_MDP_prop} Let $\\mathcal{M}$ be an MDP such that $H(\\mathcal{M})$$<$$\\infty$, $(C_i,D_i)$ $i$$=$$1,2,\\ldots,n$ be MECs in $\\mathcal{M}$, $C$$:=$$\\cup_{i=1}^nC_i$, and $\\mathcal{M'}$ be the modified MDP that is obtained from $\\mathcal{M}$ by making all states $s$$\\in$$C$ absorbing, i.e., if $s$$\\in$$C$, then $\\mathbb{P}_{s,a,s}$$=$$1$ for all $a$$\\in$$\\mathcal{A}(s)$ in $\\mathcal{M'}$. Then, we have\n$ H(\\mathcal{M})$$=$$H(\\mathcal{M'})$.$\\quad \\triangleleft$ \n\\end{prop}}\n\nThere is a one-to-one correspondence between the paths of $\\mathcal{M}$ and $\\mathcal{M'}$ since all states in the set $C$ must have a single successor state in an MDP with finite maximum entropy due to Theorem \\ref{Theorem1}. 
Moreover, for a given policy $\\pi'$$\\in$$\\Pi^S(\\mathcal{M'})$ on $\\mathcal{M'}$, the policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$ induced by $\\pi'$ on $\\mathcal{M}$ is the same policy with $\\pi'$, i.e. $\\pi'$$=$$\\pi$. Therefore, we synthesize an optimal policy for $\\mathcal{M}$ by synthesizing an optimal policy for $\\mathcal{M'}$.\n\n\n We use the nonlinear programming problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} to synthesize an optimal policy for $\\mathcal{M'}$. \n\\begin{subequations}\n\\label{unconstrained_program}\n\\begin{align}\n\\label{non_reach_objective}\n& \\underset{\\lambda(s,a),\\lambda(s)}{\\text{maximize}}\\ \\ -\\sum_{s\\in S\\backslash C}\\sum_{t\\in S}\\eta(s,t)\\log\\Big(\\frac{\\eta(s,t)}{\\nu(s)}\\Big)\\\\\n\\label{non_reach_cons1}\n& \\text{subject to:}\\nonumber\\\\\n& \\nu(s)-\\sum_{t\\in S\\backslash C}\\eta(t,s)=\\alpha(s) \\ \\quad \\forall s\\in S\\backslash C\\\\\n\\label{non_reach_cons2}\n& \\lambda(s) -\\sum_{t\\in S\\backslash C}\\eta(t,s)=\\alpha(s) \\ \\quad \\forall s\\in C \\\\\n\\label{non_reach_cons3}\n& \\eta(s,t)=\\sum_{a\\in\\mathcal{A}(s)}\\lambda(s,a)\\mathbb{P}_{s,a,t} \\quad \\forall t \\in S, \\forall s\\in S\\backslash C\\\\\n\\label{non_reach_cons4}\n&\\nu(s)=\\sum_{a\\in\\mathcal{A}(s)}\\lambda(s,a) \\qquad \\qquad \\forall s\\in S\\backslash C \\\\\n\\label{non_reach_cons5}\n&\\lambda(s,a)\\geq 0 \\qquad \\qquad \\qquad \\qquad \\forall a\\in\\mathcal{A}(s), \\forall s\\in S\\backslash C\\\\\n\\label{non_reach_cons6}\n&\\lambda(s)\\geq 0 \\qquad \\qquad \\quad \\qquad \\qquad \\forall s \\in C\n\\end{align}\n\\end{subequations}\n\nThe decision variables in \\eqref{non_reach_objective}-\\eqref{non_reach_cons2} are $\\lambda(s)$ for each $s$$\\in$$C$, and $\\lambda(s,a)$ for each $s$$\\in$$S\\backslash C$ and each $a$$\\in$$\\mathcal{A}(s)$. \nThe function $\\alpha$$:$$S$$\\rightarrow$$\\{0,1\\}$ satisfies $\\alpha(s_0)$$=$$1$ and $\\alpha(s)$$=$$0$ for all $s$$\\in$$S$$\\backslash$$\\{s_0\\}$. Variables $\\eta(s,t)$ and $\\nu(s)$ are functions of $\\lambda(s,a)$, and used just to simplify the notation. \n\nThe constraints \\eqref{non_reach_cons1}-\\eqref{non_reach_cons2} represent the balance between the ``inflow\" to and ``outflow\" from states. The constraints \\eqref{non_reach_cons3} and \\eqref{non_reach_cons4} are used to simplify the notation and define the variables $\\eta(s,t)$ and $\\nu(s)$, respectively. The constraints \\eqref{non_reach_cons5} and \\eqref{non_reach_cons6} ensure that the expected residence time in the state-action pair $(s,a)$ and the probability of reaching the state $s$ is non-negative, respectively. We refer the reader to \\cite{Marta}, \\cite{Puterman} for further details about the constraints.\n {\\setlength{\\parindent}{0cm}\n\\begin{prop} \\label{prop_convex} The nonlinear program in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} is convex. $\\quad \\triangleleft$\\end{prop}}\n\nThe above result indicates that a global maximum for the problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} can be computed efficiently. We now introduce Algorithm \\ref{Algo_2} to synthesize an optimal policy for a given MDP with finite maximum entropy. \n{\\setlength{\\parindent}{0cm}\n\\begin{thm}\n\\label{Main_theorem_2}\n Let $\\mathcal{M}$ be an MDP such that $H(\\mathcal{M})$$<$$\\infty$, $(C_i,D_i)$ $i$$=$$1,2,\\ldots,n$ be MECs in $\\mathcal{M}$, and $C$$:=$$\\cup_{i=1}^nC_i$. 
For the input ($\\mathcal{M}, C )$, Algorithm \\ref{Algo_2} returns an optimal policy $\\pi^{\\star}$$\\in$$\\Pi^S(\\mathcal{M})$ for $\\mathcal{M}$, i.e. $H(\\mathcal{M},{\\pi^{\\star}})$$=$$H(\\mathcal{M})$. $\\quad \\triangleleft$\n\\end{thm}}\n\n\\begin{algorithm}[t]\n \\caption{Synthesize the maximum entropy policy}\n \\textbf{Require:} $\\mathcal{M}$$=$$(S,s_0, \\mathcal{A},\\mathbb{P},\\mathcal{AP},\\mathcal{L})$ and $C$. \\\\\n \\textbf{Return:} An optimal policy $\\pi^{\\star}$ for $\\mathcal{M}$\n \\begin{enumerate}[label=\\arabic*:]\n \\item Form the modified MDP $\\mathcal{M'}$.\n\\item Solve \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} for ($\\mathcal{M'}$, $C$), and obtain $\\lambda^{\\star}(s,a)$.\n\\item \\begin{algorithmic}\n\\For {$s$$\\in$$ S$}\n\t\\If {$s$$\\not\\in$$ C$}\n\t\t\\If {$\\sum_{a\\in\\mathcal{A}(s)}\\lambda^{\\star}(s,a)$$>$$0$}\n\t\t\\State\n\t\t\t\\State $\\pi^{\\star}_s(a)$$:=$$\\frac{\\lambda^{\\star}(s,a)}{\\sum_{a\\in\\mathcal{A}(s)}\\lambda^{\\star}(s,a)}$\n\t\t\\Else\n\t\t \t\\State $\\pi^{\\star}_s(a)$$:=$$1$ for an arbitrary $a$$\\in$$\\mathcal{A}(s)$,\n\t\t\\EndIf\n\t\\Else\n\t\t\\State $\\pi^{\\star}_s(a)$$:=$$1$ for an arbitrary $a$$\\in$$\\mathcal{A}(s)$.\n\t\\EndIf\n\\EndFor\n \\end{algorithmic}\n \\end{enumerate}\n \\label{Algo_2}\n\\end{algorithm} \nProofs for above results can be found in Appendix \\ref{proofs_appendix}. Computationally, the most expensive step of Algorithm \\ref{Algo_2} is to solve the convex optimization problem \\eqref{non_reach_objective}-\\eqref{non_reach_cons6}. A solution whose objective value is arbitrarily close to the optimal value of \\eqref{non_reach_objective} can be computed in time polynomial in the size of $\\mathcal{M}$ via interior-point methods \\cite{Serrano}, \\cite{Nesterov}. Hence, the time complexity of Algorithm \\ref{Algo_2} is polynomial in the size of $\\mathcal{M}$.\n\n\n\n \n\\subsubsection{Unbounded maximum entropy} \\label{unbounded_policy_synthesis} There is no optimal policy for this case due to \\eqref{unbounded_definition_1}-\\eqref{unbounded_definition_2}. Therefore, for a given MDP $\\mathcal{M}$ and a constant $\\ell$, we synthesize a policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$ such that $H(\\mathcal{M},\\pi)$$\\geq$$\\ell$. Let $S_{B}$ be the union of all states in BSC MECs of $\\mathcal{M}$, which can be found by using Algorithm \\ref{algo_1}. We modify the MDP $\\mathcal{M}$ by making all states $s$$\\in$$S_B$ absorbing and denote the modified MDP by $\\mathcal{M}'$. It can be shown that $H(\\mathcal{M}')$$=$$H(\\mathcal{M})$ by using arguments similar to the ones used in the proof of Proposition \\ref{modified_MDP_prop}. As the first approach, we solve a convex feasibility problem. Specifically, we remove the objective in \\eqref{non_reach_objective} and add the constraint\n\\begin{align}\\label{residence_bound_L}\n-\\sum_{s\\in S\\backslash S_B}\\sum_{t\\in S}\\eta(s,t)\\log\\Big(\\frac{\\eta(s,t)}{\\nu(s)}\\Big)\\geq \\ell\n\\end{align}\nto the constraints in \\eqref{non_reach_cons1}-\\eqref{non_reach_cons6}. Then, we solve the resulting convex feasibility problem for ($\\mathcal{M}'$, $S_B$, $\\ell$) and obtain the desired policy $\\pi$ by using the step 3 of Algorithm \\ref{Algo_2}. \n\nRecall from Theorem \\ref{Theorem1} that the unboundedness of the maximum entropy is caused by the existence of non-BSC MECs in $\\mathcal{M}'$. 
In particular, we can induce MCs with arbitrarily large entropy by making the expected residence time in states contained in non-BSC MECs arbitrarily large. As the second approach, we bound the expected residence time in states $s$$\\in$$S\\backslash S_B$ in $\\mathcal{M}'$ and relax this bound according to the desired level of entropy. Specifically, we add the constraint\n\\begin{align}\\label{residence_bound}\n\\sum_{s\\in S\\backslash S_B}\\sum_{a\\in\\mathcal{A}(s)}\\lambda(s,a)\\leq \\Gamma\n\\end{align}\nto the problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6}. For the constraint \\eqref{residence_bound}, $\\Gamma$$\\geq$$0$ is a predefined value and limits the expected residence time in states $s$$\\in$$S\\backslash S_B$. Let $H_{\\Gamma}(\\mathcal{M}')$ denote the maximum entropy $H(\\mathcal{M}')$ of $\\mathcal{M}'$ subject to the constraint \\eqref{residence_bound}. Then, we have\n\\begin{align}\\label{gamma_ineq}\nH_{\\Gamma}(\\mathcal{M}')\\geq H_{\\Gamma'}(\\mathcal{M}')\n\\end{align}\nfor $\\Gamma$$\\geq$$\\Gamma'$, and $H_{\\Gamma}(\\mathcal{M}')$$=$$\\infty$ for $\\Gamma$$=$$\\infty$. Therefore, by choosing an arbitrarily large $\\Gamma$ value, we can synthesize a policy that induces an MC with arbitrarily large entropy. \n\n\n\\subsubsection{Infinite maximum entropy} \\label{infinitee_policy_synthesis} \nThe procedure to synthesize an optimal policy for MDPs with infinite maximum entropy is very similar to the ones described in Sections \\ref{finite_policy_synthesis} and \\ref{unbounded_policy_synthesis}. Therefore, we provide it in Appendix \\ref{infinite_case_appendix}.\n\\section{Relating the maximum entropy of an MDP with the probability distribution of paths}\\label{relate_paths}\nIn this section, we establish a link between the maximum entropy of an MDP $\\mathcal{M}$ and the entropy of paths in an MC $\\mathcal{M}^{\\pi}$ induced from $\\mathcal{M}$ by a stationary policy $\\pi\\in\\Pi^S(\\mathcal{M})$. \n\nWe begin with an example demonstrating the probability distribution of paths in an MC induced by a policy that maximizes the entropy of an MDP. Consider the MDP shown in Fig. \\ref{fig:MDP_21} which is used in \\cite{Biondi}. The policy that maximizes the entropy of the MDP is given by $\\pi_{s_0}(a_1)$$=$$2\/3$, $\\pi_{s_0}(a_2)$$=$$1\/3$, $\\pi_{s_1}(a_1)$$=$$\\pi_{s_1}(a_2)$$=$$1\/2$. The MC induced by this policy is shown in Fig. \\ref{fig:MDP_22}. There are three paths that reach the MECs, i.e., $(\\{s_3\\},\\{a_1\\})$ and $(\\{s_4\\},\\{a_1\\})$, of the MDP, each of which is followed with probability $1\/3$ in the induced MC, i.e., the probability distribution of paths is uniform. \n\nNote that for the example given in Fig. \\ref{fig:MDP_21}, the optimal policy that maximizes the entropy of the MDP is randomized, and action-selection at each state is performed in an online manner. In particular, an agent that follows the optimal policy chooses its action at each stage according to the outcomes of an online randomization mechanism. 
Therefore, it does not commit to follow a specific path at any state.\n\\begin{figure}[b!]\\vspace{-0.2cm}\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\scalebox{0.8}{\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=2cm]\n\n \\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7]\n\n \\node[state,initial,initial text=] (s_0) {$s_0$};\n \\node[state] (s_1) [above right =6mm of s_0] {$s_1$};\n \\node[state] (s_2) [below right=6mm of s_0] {$s_2$};\n \\node[state] (s_3) [right=10mm of s_1] {$s_3$};\n \\node[state] (s_4) [right=10mm of s_2] {$s_4$};\n\n\\path\n(s_0) edge node{$a_1, 1$} (s_1)\n(s_0)\t edge node{$a_2, 1$} (s_2)\n(s_1)\t edge node{$a_1, 1$} (s_3)\n(s_1)\t edge node{$a_2, 1$} (s_4)\n(s_2)\t edge node{$ a_1,1$} (s_4)\n(s_3)\t edge [loop right] node{$a_1,1$} (s_3)\n(s_4)\t edge [loop right] node{$a_1,1$} (s_4);\n\\end{tikzpicture}}\n\\caption{}\n\\label{fig:MDP_21}\n\\end{subfigure}\n\\hspace{0.18\\linewidth}\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\scalebox{0.8}{\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=2cm]\n\n \\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7]\n\n \\node[state,initial,initial text=] (s_0) {$s_0$};\n \\node[state] (s_1) [above right =6mm of s_0] {$s_1$};\n \\node[state] (s_2) [below right=6mm of s_0] {$s_2$};\n \\node[state] (s_3) [right=10mm of s_1] {$s_3$};\n \\node[state] (s_4) [right=10mm of s_2] {$s_4$};\n\n\\path\n(s_0) edge node{$2\/3$} (s_1)\n(s_0)\t edge node{$1\/3$} (s_2)\n(s_1)\t edge node{$1\/2$} (s_3)\n(s_1)\t edge node{$1\/2$} (s_4)\n(s_2)\t edge node{$1$} (s_4)\n(s_3)\t edge [loop right] node{$1$} (s_3)\n(s_4)\t edge [loop right] node{$1$} (s_4);\n\\end{tikzpicture}}\n\\caption{}\n\\label{fig:MDP_22}\n\\end{subfigure}\n\\caption{(a) An MDP example \\cite{Biondi}. (b) The MC induced by the policy that maximizes the entropy of the MDP. }\n\\label{fig:two-step-authenticate}\n\\end{figure}\n\nTo rigorously establish the relation, illustrated in Fig. \\ref{fig:MDP_21}, between the maximum entropy of an MDP and the entropy of paths in an induced MC, we need the following definitions.\n\n A \\textit{strongly connected component} (SCC) $V$$\\subseteq$$S$ in an MC $\\mathcal{M}^{\\pi}$ induced by a policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$ is a maximal set of states in $\\mathcal{M}^{\\pi}$ such that for any $s$,$t$$\\in$$V$, $(\\mathcal{P}^{\\pi})_{s,t}^n$$>$$0$ for some $n$$\\in$$\\mathbb{N}$. A \\textit{bottom strongly connected component} (BSCC) $S_B$ in $\\mathcal{M}^{\\pi}$ is an SCC such that for all $s$$\\in$$S_B$, $(\\mathcal{P}^{\\pi})_{s,t}^n$$=$$0$ for all $n$$\\in$$\\mathbb{N}$ and for all $t$$\\in$$S\\backslash S_B$.\n\nIn this section, for an induced MC $\\mathcal{M}^{\\pi}$, we denote the probability of a path with the finite path fragment $s_0\\ldots s_n$ by\n \\begin{align}\\label{joint_prob_define}\n \\mathcal{P}^{\\pi}(s_0\\ldots s_n):=\\prod_{0\\leq k < n} \\mathcal{P}^{\\pi}_{s_k, s_{k+1}},\n \\end{align}\n which agrees with the probability measure introduced in Section \\ref{Prelim}. Additionally, if the finite path fragment $s_0\\ldots s_n$ in $\\mathcal{M}^{\\pi}$ satisfies $s_0,\\ldots,s_{n-1}$$\\not\\in$$S_B$ and $s_n$$\\in$$S_B$ for some $S_B$$\\subseteq$$S$, we write $s_0\\ldots s_n$$\\in$$(S\\backslash S_B)^{\\star}S_B$. 
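\n\nAs a small illustration of these quantities, consider the induced MC in Fig. \\ref{fig:MDP_22}. The following Python sketch (the state ordering $s_0,\\ldots,s_4$, the use of the \\texttt{numpy} library, and the helper name \\texttt{path\\_entropy} are choices made here only for illustration) enumerates the finite path fragments that remain outside the BSCCs until their last state, evaluates their probabilities according to \\eqref{joint_prob_define}, and compares $-\\sum p\\log p$ over these fragments with the quantity $\\sum_{s}L(s)\\xi_s$ of Proposition \\ref{Biondi_theorem} computed over the transient states.\n\\begin{verbatim}\nimport numpy as np\n\n# Transition matrix of the induced MC in the example above; rows and\n# columns follow the assumed ordering s0, s1, s2, s3, s4.\nP = np.array([[0.0, 2/3, 1/3, 0.0, 0.0],\n              [0.0, 0.0, 0.0, 0.5, 0.5],\n              [0.0, 0.0, 0.0, 0.0, 1.0],\n              [0.0, 0.0, 0.0, 1.0, 0.0],\n              [0.0, 0.0, 0.0, 0.0, 1.0]])\nbscc = {3, 4}          # union of the BSCCs {s3} and {s4}\ntransient = [0, 1, 2]  # transient states\n\ndef path_entropy(state=0, prob=1.0):\n    # Sum -p log2 p over finite path fragments that stay outside the\n    # BSCCs until their last state.\n    if state in bscc:\n        return -prob * np.log2(prob)\n    return sum(path_entropy(t, prob * P[state, t])\n               for t in range(5) if P[state, t] > 0)\n\n# Entropy of the MC: local entropies of the transient states weighted\n# by their expected residence times.\nQ = P[np.ix_(transient, transient)]\nxi = np.linalg.solve(np.eye(3) - Q.T, np.array([1.0, 0.0, 0.0]))\nlocal_ent = [-sum(p * np.log2(p) for p in P[s] if p > 0)\n             for s in transient]\nprint(path_entropy(), float(np.dot(local_ent, xi)))  # both about 1.585\n\\end{verbatim}\nBoth printed values equal $\\log 3$ $\\approx$ $1.585$, which anticipates the equivalence established in Lemma \\ref{paths_lemma} below.\n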
\n{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{definition}\n(Entropy of paths) Let $\\mathcal{M}^{\\pi}$ be an MC induced from an MDP $\\mathcal{M}$ by a stationary policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$ and $S_B$$\\subseteq$$S$ be the union of all BSCCs in $\\mathcal{M}^{\\pi}$. For $\\mathcal{M}^{\\pi}$, the \\textit{entropy of the paths} that start from the initial state and reach a state in a BSCC in $\\mathcal{M}^{\\pi}$ is defined as\n\\begin{align}\\label{entropy_paths_def}\n&H(Paths^{\\pi}(\\mathcal{M})): =\\nonumber \\\\\n&\\qquad \\quad-\\sum_{s_0\\ldots s_n \\in T } \\mathcal{P}^{\\pi}(s_0\\ldots s_n)\\log \\mathcal{P}^{\\pi}(s_0\\ldots s_n)\n\\end{align}\nwhere $T$$:=$$Paths_{fin}^{\\pi}(\\mathcal{M})$$\\cap$$ (S\\backslash S_B)^{\\star}S_B$. \n\\end{definition}}\n\nA similar definition for the entropy of paths with fixed initial and final states can be found in \\cite{Ekroot},\\cite{Kafsi}. We note that\n\\begin{align}\\label{sum_prob_paths}\n\\sum_{s_0\\ldots s_n \\in T } \\mathcal{P}^{\\pi}(s_0\\ldots s_n)=1,\n\\end{align}\nsince any finite-state MC eventually reaches a BSCC \\cite{Model_checking}. The following lemma establishes a relation between the entropy of paths and the entropy of an induced MC. \n{\\setlength{\\parindent}{0cm}\n\\begin{lemma}\n\\label{paths_lemma}\nLet $\\mathcal{M}$ be an MDP such that $H(\\mathcal{M},{\\pi})$$<$$\\infty$ for any $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$. Then, for any $\\pi$$\\in$$\\Pi^S(\\mathcal{M})$, we have\n\\begin{align}\nH(Paths^{\\pi}(\\mathcal{M}))=H(\\mathcal{M},{\\pi}). \\quad \\triangleleft\n\\end{align}\n\\end{lemma}}\\noindent\n\n\nA proof for Lemma \\ref{paths_lemma} can be found in Appendix \\ref{proofs_appendix}. Finally, from the definition of the properties of the maximum entropy, Proposition \\ref{memoryless_1} and Lemma \\ref{paths_lemma}, we conclude that, if an MDP $\\mathcal{M}$ has non-infinite maximum entropy, then we have\n\\begin{align}\\label{equivalence_paths_entropy}\n&H(\\mathcal{M})\\ = \\ \\sup_{\\pi\\in\\Pi^S(\\mathcal{M})} \\ H(Paths^{\\pi}(\\mathcal{M})).\n\\end{align}\nThe equality in \\eqref{equivalence_paths_entropy} states that, for an MDP with non-infinite maximum entropy, a policy that maximizes the entropy of the MDP induces an MC with maximum entropy of paths among all MCs that can be induced from the MDP. Moreover, considering \\eqref{sum_prob_paths}, such a policy maximizes the randomness of all paths with non-zero probability in an induced MC.\n\\section{Constrained Entropy Maximization for MDPs}\\label{cons_section}\nIn this section, we consider the problem of maximizing the entropy of an MDP subject to an LTL constraint. We note that stationary policies are not sufficient to satisfy LTL constraints in general \\cite{Model_checking}. Therefore, to be consistent with our definition of maximum entropy \\eqref{max_ent_definition}, we first introduce the product MDP, over which LTL constraints are transformed into reachability constraints for which stationary policies are sufficient. 
\n\\subsection{Product MDP}\\label{product_section}\n\\par We construct an MDP that captures all paths of an MDP $\\mathcal{M}$ satisfying an LTL specification $\\varphi$ by taking the product of $\\mathcal{M}$ and the DRA $A_{\\varphi}$ corresponding to the specification $\\varphi$.\n{\\setlength{\\parindent}{0cm}\n \\noindent \\begin{definition} (Product MDP)\nLet $\\mathcal{M}$$=$$(S, s_0, \\mathcal{A}, \\mathbb{P}, \\mathcal{AP}, \\mathcal{L})$ be an MDP and $A_{\\varphi}$$=$$(Q, q_0, 2^{\\mathcal{AP}}, \\delta, Acc)$ be a DRA. The product MDP $\\mathcal{M}_p$$=$$\\mathcal{M}$$\\otimes$$ A_{\\varphi}$$=$$(S_p, s_{0_p}, \\mathcal{A}, \\mathbb{P}_p, \\mathcal{L}_p, Acc_p)$ is a tuple where\n\\begin{itemize}\n\\item $S_p$$=$$S $$\\times$$ Q$,\n\\item $s_{0_p}=(s_0,q)$ such that $q=\\delta(q_0,\\mathcal{L}(s_0))$,\n\\item $\\mathbb{P}_p((s,q), a, (s',q'))$=$\\begin{cases} \\mathbb{P}_{s,a,s'} & \\text{if} \\quad q'=\\delta(q,\\mathcal{L}(s')) \\\\ 0 & \\text{otherwise}, \\end{cases}$\n\\item $\\mathcal{L}_p((s,q))=\\{q\\}$,\n\\item $Acc_p$$=$$\\{(J_1^p,K_1^p),\\ldots,(J_k^p,K_k^p) \\}$ where $J_i^p$$=$$S$$\\times$$J_i$ and $K_i^p$$=$$S$$\\times$$K_i$ for all $(J_i,K_i)$$\\in$$Acc$ and for all $i$$=$$1,\\ldots, k$.\n\\end{itemize}\n\\end{definition}}\nThe product MDP $\\mathcal{M}_p$ may contain unreachable states which can be found in time polynomial in the size of $\\mathcal{M}_p$ by graph search algorithms, e.g., breadth-first search. Such states have no effect in the analysis of MDPs, and hence, can be removed from the MDP. \nWe hereafter assume that there is no unreachable state in $\\mathcal{M}_p$. \n\nThere is a one-to-one correspondence between the paths of $\\mathcal{M}_p$ and $\\mathcal{M}$ \\cite{Model_checking}. Moreover, a similar one-to-one correspondence exists between policies on $\\mathcal{M}_p$ and $\\mathcal{M}$. More precisely, for a given policy $\\pi^{p}$$=$$\\{\\mu_0^p,\\mu_1^p,\\ldots\\}$ on $\\mathcal{M}_p$, we can construct a policy $\\pi$$=$$\\{\\mu_0,\\mu_1,\\ldots\\}$ on $\\mathcal{M}$ by setting $\\mu_i(s_i)$$=$$\\mu_i^p((s_i,q_i))$. For a given policy $\\pi^p$$\\in$$\\Pi^S(\\mathcal{M}_p)$ on $\\mathcal{M}_p$, the policy $\\pi$$\\in$$\\Pi(\\mathcal{M})$ constructed in this way is a non-stationary policy \\cite{Model_checking}. \n\n\nLet $\\pi^p$$\\in$$\\Pi^S(\\mathcal{M}_p)$ be a policy on $\\mathcal{M}_p$ and $\\pi$$\\in$$\\Pi(\\mathcal{M})$ be the policy on $\\mathcal{M}$ constructed from $\\pi^p$ through the procedure explained above. The paths of the MDP $\\mathcal{M}$ under the policy $\\pi$ satisfies the LTL specification $\\varphi$ with probability of at least $\\beta$, i.e., $\\text{Pr}^{\\pi}_{\\mathcal{M}}(s_0$$\\models$$\\varphi)$$\\geq$$\\beta$, if and only if the paths of the product MDP $\\mathcal{M}_p$ under the policy $\\pi^p$ reaches accepting MECs in $\\mathcal{M}_p$ with probability of at least $\\beta$ and stays there forever \\cite{Model_checking}. \n{\\setlength{\\parindent}{0cm}\n\\noindent \\begin{definition} (Accepting MEC)\nA MEC $(C,D)$ in a product MDP $\\mathcal{M}_p$ with the set of accepting state pairs $Acc_p$ is an \\textit{accepting} MEC if for some $(J^p,K^p)$$\\in$$Acc_p$, $J^p$$\\not\\in$$C$ and $K^p$$\\in$$C$.\n\\end{definition}}\nInformally, accepting MECs are sets of states where the system can remain forever, and where the set $K_p$ is visited infinitely often and the set $J_p$ is visited finitely often. 
\n\n\\vspace{-0.4cm}\n\\subsection{Constrained Problem}\n In this section, we formally state the constrained entropy maximization problem. Recall that, for an MDP $\\mathcal{M}$, the problem of synthesizing a policy $\\pi$$\\in$$\\Pi(\\mathcal{M})$ that satisfies an LTL formula $\\varphi$ with probability of at least $\\beta$, i.e., $\\text{Pr}^{\\pi}_{\\mathcal{M}}(s_0$$\\models$$\\varphi)$$\\geq$$\\beta$, is equivalent to the problem of synthesizing a policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M}_p)$ that reaches the accepting MECs in $\\mathcal{M}_p$ with probability of at least $\\beta$ and stays there forever. \n\n\nOur objective is to synthesize a policy that induces a stochastic process with maximum entropy whose paths satisfy the given LTL specification with desired probability. To this end, we synthesize a policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M}_p)$ on $\\mathcal{M}_p$ that reaches the accepting MECs in $\\mathcal{M}_p$ with probability of at least $\\beta$ and stays there forever.\n\nWe first partition the set $S_p$ of states of $\\mathcal{M}_p$ into three disjoint sets as follows. We let $B$ be the set of all states in accepting MECs in $\\mathcal{M}_p$, and $S_0$ be the set of all states that have zero probability of reaching the set $B$. Finally, we let $S_r$$=$$S_p\\backslash \\{B\\cup S_0\\}$ be the set of all states that are not in an accepting MEC in $\\mathcal{M}_p$ and have nonzero probability of reaching the set $B$. These sets can be found in time polynomial in the size of $\\mathcal{M}_p$ by graph search algorithms \\cite{Model_checking}. \n {\\setlength{\\parindent}{0cm}\n\\noindent \\begin{problem}(\\textbf{Constrained Entropy Maximization})\\label{prob_2}\nFor a product MDP $\\mathcal{M}_p$, verify whether there exists a policy $\\pi^{\\star}$$\\in$$\\Pi^S(\\mathcal{M}_p)$ that solves the following problem:\n\\begin{subequations}\n\\begin{align}\\label{max_ent_product_objective}\n&\\underset{\\pi\\in\\Pi^S(\\mathcal{M}_p)}{\\text{maximize}} \\qquad H(\\mathcal{M}_p,{\\pi})\\\\ \\label{LTL_constraint_product}\n& \\text{subject to:} \\qquad \\text{Pr}^{\\pi}_{\\mathcal{M}_p}(s_0\\models \\lozenge B)\\geq \\beta\n\\end{align}\n\\end{subequations}\nwhere $\\text{Pr}^{\\pi}_{\\mathcal{M}_p}(s_0$$\\models$$\\lozenge B)$ denotes the probability of reaching the set $B$ from the initial state in $\\mathcal{M}_p$ under the policy $\\pi$. If such a policy exists, provide an algorithm to synthesize it. If it does not exist, provide a procedure to synthesize a policy $\\pi'$$\\in$$\\Pi^S(\\mathcal{M}_p)$ such that $\\text{Pr}^{\\pi'}_{\\mathcal{M}_p}(s_0\\models \\lozenge B)$$\\geq$$\\beta$ and $H(\\mathcal{M},\\pi')$$\\geq$$\\ell$ for a given constant $\\ell$.\n\\end{problem}}\n\nNote that if a policy that solves the problem in \\eqref{max_ent_product_objective}-\\eqref{LTL_constraint_product} chooses the actions in states $s$$\\in$$B$ such that they form a BSCC in the induced MC, then the resulting policy ensures that the paths of the induced MC visit the states inside the set $B$ infinitely often and thus satisfies $\\varphi$\\cite{Model_checking}. 
\n\\subsection{Policy synthesis}\\label{product_policy_section}\n \nIn this section, for a product MDP $\\mathcal{M}_p$ and its state partition $S_p$$=$$B$$\\cup$$S_0$$\\cup$$S_r$, we assume that $0$$<$$\\beta$$\\leq$$\\max_{\\pi\\in\\Pi(\\mathcal{M})}\\text{Pr}^{\\pi}_{\\mathcal{M}_p}(s_0$$\\models$$\\lozenge B)$, which can be verified in polynomial time by solving a linear optimization problem as shown in \\cite{Model_checking, Marta}. We refer to a policy $\\pi$$\\in$$\\Pi^S(\\mathcal{M}_p)$ as an \\textit{optimal policy} if it is a solution to the problem in \\eqref{max_ent_product_objective}-\\eqref{LTL_constraint_product} and chooses the actions in states $s$$\\in$$B$ such that they form a BSCC in the induced MC.\n\n\nFor the synthesis of an optimal policy, we consider three cases according to the maximum entropy $H(\\mathcal{M}_p)$ of $\\mathcal{M}_p$, namely, finite, unbounded and infinite. \n \n \\subsubsection{Finite maximum entropy} \\label{finite_constrained_case}\nLet $(C_i,D_i)$ $i$$=$$1,2,\\ldots,n$ be the MECs in $\\mathcal{M}_p$, $C$$:=$$\\cup_{i=1}^n C_i$, and $D$$:=$$\\cup_{i=1}^n D_i$. We form the modified product MDP $\\mathcal{M}_p'$ by making all states $s$$\\in$$C$ absorbing in $\\mathcal{M}_p$. We have $H(\\mathcal{M}_p')$$=$$H(\\mathcal{M}_p)$ due to Proposition \\ref{modified_MDP_prop}. Recall that for a state $s$$\\in$$C$, the variable $\\lambda(s)$ in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} represents the probability of reaching the state $s$ from the initial state \\cite{Marta}. Hence, we append the constraint\n\\begin{align}\\label{reach_cons}\n\\sum_{s\\in B}\\lambda(s)\\geq \\beta\n\\end{align}\nto the problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} in order to obtain a policy that induces an MC whose paths satisfy $\\varphi$ with probability of at least $\\beta$. Noting that $\\beta$$\\leq$$\\sum_{s\\in B}\\lambda(s)$$\\leq$$\\max_{\\pi\\in\\Pi(\\mathcal{M})}\\text{Pr}^{\\pi}_{\\mathcal{M}_p}(s_0$$\\models$$\\lozenge B)$, the resulting optimization problem always has a solution since its feasible set constitutes a closed compact set when the product MDP has finite maximum entropy. \n\n\nThe procedure to obtain a policy $\\pi^{\\star}_p$$\\in$$\\Pi^S(\\mathcal{M}_p)$ that solves the problem in \\eqref{max_ent_product_objective}-\\eqref{LTL_constraint_product} for $\\mathcal{M}_p$ with finite maximum entropy is as follows. First, we find MECs $(C,D)$ in $\\mathcal{M}_p$ and form the modified MDP $\\mathcal{M}_p'$ by making all states $s$$\\in$$C$ absorbing. Second, we solve the problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} for $(\\mathcal{M}_p', C, \\beta)$ with the additional constraint \\eqref{reach_cons}. Finally, we use step 3 of Algorithm \\ref{Algo_2} to synthesize the policy $\\pi^{\\star}_p$$\\in$$\\Pi(\\mathcal{M}_p)$. Note that the constructed policy ensures that, once reached, the system stays in the set $B$ forever, since all MECs in $\\mathcal{M}_p$ with finite maximum entropy are bottom strongly connected. \n\n \\subsubsection{Unbounded maximum entropy}\\label{unbounded_constrained_case} In this case, the product MDP $\\mathcal{M}_p$ contains a non-BSC MEC due to Theorem \\ref{Theorem1}. We assume that there is only one non-BSC MEC in $\\mathcal{M}_p$, and it is contained in $S_r$. \nWe first form the modified product MDP $\\mathcal{M}'_p$ by making all states in BSC MECs in $\\mathcal{M}_p$ absorbing. Note that $H(\\mathcal{M}_p')$$=$$H(\\mathcal{M}_p)$. 
Let $S_B$ denote the union of all absorbing states in $\\mathcal{M}'_p$.\nWe verify the existence of a solution to the problem in \\eqref{max_ent_product_objective}-\\eqref{LTL_constraint_product} by solving the problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} for $(\\mathcal{M}_p', S_B, \\beta)$ with the additional constraint \\eqref{reach_cons}. If the optimum value of the resulting problem is bounded, then we synthesize an optimal policy through step 3 of Algorithm \\ref{Algo_2}. If it is not bounded, then there exists no optimal policy, in which case for a given constant $\\ell$, we synthesize a policy $\\pi^{\\star}$$\\in$$\\Pi^S(\\mathcal{M}_p)$ such that $H(\\mathcal{M}_p,\\pi^{\\star})$$\\geq$$\\ell$ and $\\text{Pr}^{\\pi^{\\star}}_{\\mathcal{M}_p}(s_0\\models \\lozenge B)$$\\geq$$ \\beta$ by employing two different approaches. \n\n\nAs the first approach, we solve a convex feasibility problem. Specifically, for the problem in \\eqref{non_reach_cons1}-\\eqref{non_reach_cons6}, we remove the objective \\eqref{non_reach_objective} and append the constraints \\eqref{residence_bound_L} and \\eqref{reach_cons} to the problem. Then, we solve the resulting convex feasibility problem for $(\\mathcal{M}_p', S_B, \\ell, \\beta)$, and using step 3 of Algorithm \\ref{Algo_2}, obtain a policy $\\pi^{\\star}_p$$\\in$$\\Pi^S(\\mathcal{M}_p)$ such that $H(\\mathcal{M}_p,\\pi^{\\star}_p)$$\\geq$$\\ell$ and $\\text{Pr}^{\\pi^{\\star}_p}_{\\mathcal{M}_p}(s_0$$\\models$$\\lozenge B)\\geq \\beta$. \n\nThe second approach to obtain an induced MC with arbitrarily large entropy, whose paths satisfy the LTL specification with desired probability, is to bound the expected residence time in states $s$$\\in$$S_p\\backslash S_B$ and relax this bound according to the desired level of entropy. Specifically, we solve the problem in \\eqref{non_reach_objective}-\\eqref{non_reach_cons6} for $(\\mathcal{M}_p', S_B, \\beta, \\Gamma)$ together with the constraints \\eqref{residence_bound} and \\eqref{reach_cons}, where $\\Gamma$ is as defined in Section \\ref{unbounded_policy_synthesis}. Then, by choosing an arbitrarily large $\\Gamma$ value, we obtain an induced MC with the desired level of entropy. \n\nFinally, to ensure that the paths of the MC that is induced by the synthesized policy satisfies the LTL specification $\\varphi$ with desired probability, we choose actions in states $s$$\\in$$B$ such that $Succ(s)$$\\subseteq$$B$.\n\n \\subsubsection{Infinite maximum entropy}\\label{infinitee_constrained_case}\nFor product MDPs with infinite maximum entropy, the verification of the existence and the synthesis of an optimal policy are achieved by procedures that are very similar to the ones presented in Sections \\ref{finite_constrained_case} and \\ref{unbounded_constrained_case}. Hence, we provide the analysis for product MDPs with infinite maximum entropy in Appendix \\ref{infinite_case_appendix}. \n\n\\section{Examples}\\label{examples_section}\nIn this section, we illustrate the proposed methods on different motion planning scenarios. All computations are run on a 2.2 GHz dual core desktop with 8 GB RAM. All optimization problems are solved by using the splitting conic solver (SCS) \\cite{SCS} in CVXPY \\cite{cvxpy}. For all LTL specifications, we construct deterministic Rabin automata using ltl2dstar \\cite{ltl2dstar}.\n\nIn most motion planning scenarios, an agent can return to its current position by following different paths. 
Therefore, in general, the maximum entropy of an MDP that models the motion of an agent is either unbounded or infinite. However, as explained in Section \\ref{policy_syntesis_section} and shown in the following examples, a policy that induces a stochastic process with an arbitrarily large entropy can easily be obtained by introducing constraints on the expected residence time in certain states. Additional motion planning examples are provided in \\cite{Yagiz}. \n\\subsection{Relation between entropy and exploration}\nRandomizing an agent's paths while ensuring the completion of a task is important for achieving a better exploration of the environment \\cite{Saerens} and obtaining a robust behavior against transition perturbations \\cite{Deep_learning}. In this example, we demonstrate how the proposed method randomizes the agent's paths depending on the expected time until the completion of the task. \n\n\\textit{Environment:} We consider the grid world shown in Fig. \\ref{grid_graph} (left). The agent starts from the brown state. The red and green states are absorbing, i.e., once entered those states cannot be left. The agent has four actions in all other states, namely left, right, up and down. At each state, a transition to the chosen direction occurs with probability (w.p.) 0.7, and the agent slips to each adjacent state in the chosen direction w.p. 0.15. If the adjacent state in the chosen direction is a wall, e.g. up in brown state, a transition to the chosen direction occurs w.p. 0.85. If the state in the chosen direction is a wall, e.g., left in brown state, the agent stays in the same state w.p. 0.7 and moves to each adjacent state w.p. 0.15. \n\\newcommand{\\StaticObstacle}[2]{ \\fill[red] (#1+0.1,#2+0.1) rectangle (#1+0.9,#2+0.9);}\n\\newcommand{\\initialstate}[2]{ \\fill[brown] (#1+0.15,#2+0.15) rectangle (#1+0.85,#2+0.85);}\n\\newcommand{\\goalstate}[2]{ \\fill[green] (#1+0.15,#2+0.15) rectangle (#1+0.85,#2+0.85);}\n\\begin{figure}[b]\\vspace{-0.4cm}\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\scalebox{0.32}{\n\\begin{tikzpicture}\n\\draw[black,line width=1pt] (0,0) grid[step=1] (11,11);\n\\draw[black,line width=4pt] (0,0) rectangle (11,11);\n\t\t\t \n\t\t\t \\StaticObstacle{4}{3} \\StaticObstacle{5}{3} \\StaticObstacle{6}{3}\n\t\t\t \\StaticObstacle{4}{7} \\StaticObstacle{5}{7} \\StaticObstacle{6}{7}\n\t\t\t \\initialstate{0}{5} \\goalstate{10}{5}\n\t\t\t \n\\node at (0.5,5+0.5) { \\textbf{\\huge S}};\n\\node at (10+0.5,5+0.5) { \\textbf{\\huge T}};\n\\node at (4+0.5,3+0.5) { \\textbf{\\huge B}};\n\\node at (5+0.5,3+0.5) { \\textbf{\\huge B}};\n\\node at (6+0.5,3+0.5) { \\textbf{\\huge B}};\n\\node at (4+0.5,7+0.5) { \\textbf{\\huge B}};\n\\node at (5+0.5,7+0.5) { \\textbf{\\huge B}};\n\\node at (6+0.5,7+0.5) { \\textbf{\\huge B}};\n\n\n\\end{tikzpicture}\n\n}\n\\end{subfigure}\n\\hspace{0.2\\linewidth}\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\scalebox{0.35}{\n\\begin{tikzpicture}\n\\draw[black,line width=1pt] (0,0) grid[step=1] (10,10);\n\\draw[black,line width=4pt] (0,0) rectangle (10,10);\n\t\t\t \n\t\t\t \\StaticObstacle{2}{2} \\StaticObstacle{3}{2}\n\t\t\t \\StaticObstacle{2}{3} \\StaticObstacle{3}{3}\t\n\t\t\t \\StaticObstacle{6}{6} \\StaticObstacle{7}{6}\n\t\t\t \\StaticObstacle{6}{7} \\StaticObstacle{7}{7} \n\t\t\t \\initialstate{0}{0} \\goalstate{9}{9}\n\\node at (0.5,0.5) { \\textbf{\\huge S}};\n\\node at (5+0.5,0.5) { \\textbf{\\huge R1}};\n\\node at (8+0.5,3+0.5) { \\textbf{\\huge R2}};\n\\node at (1+0.5,6+0.5) { \\textbf{\\huge R3}};\n\\node at (4+0.5,8+0.5) { 
\\textbf{\\huge R4}};\n\\node at (9+0.5,9+0.5) { \\textbf{\\huge T}};\n\\node at (2+0.5,2+0.5) { \\textbf{\\huge B}};\n\\node at (2+0.5,3+0.5) { \\textbf{\\huge B}};\n\\node at (3+0.5,2+0.5) { \\textbf{\\huge B}};\n\\node at (3+0.5,3+0.5) { \\textbf{\\huge B}};\n\\node at (6+0.5,7+0.5) { \\textbf{\\huge B}};\n\\node at (7+0.5,7+0.5) { \\textbf{\\huge B}};\n\\node at (6+0.5,6+0.5) { \\textbf{\\huge B}};\n\\node at (7+0.5,6+0.5) { \\textbf{\\huge B}};\n\\end{tikzpicture}\n}\n\\end{subfigure}\n\\caption{Grid world environments. The brown (S) and green (T) states are the initial and target states, respectively. The red (B) states are absorbing. }\n\\label{grid_graph}\n\\end{figure}\n\n\n\n\\textit{Task:} The agent's task is to reach and stay in the green state, labeled as $T$, while avoiding the red states, labeled as $B$. Formally, the task is $\\varphi$$=$$\\square \\lnot B \\land \\lozenge \\square T$.\n\n We form the product MDP for the given task. It has 484 states, 1196 transitions, 10 MECs, and the average number of states in each MEC is 23. We require the agent to complete the task w.p. 1, i.e., $\\text{Pr}_{\\mathcal{M}}^{\\pi}(s_0$$\\models$$\\varphi)$$=$$1$. The maximum entropy of the product MDP subject to the LTL constraint is unbounded. The minimum expected time $\\Gamma$ required to complete the task $\\varphi$ is roughly $14$ time steps, which can be computed by replacing the objective in \\eqref{non_reach_objective} with ``minimize \\ $\\sum_{s\\in S_r} \\sum_{a\\in \\mathcal{A}(s)}\\lambda(s,a)$\" and appending \\eqref{reach_cons} to the constraints in \\eqref{non_reach_cons1}-\\eqref{non_reach_cons6}. \n\nWe synthesize two policies for two different expected times until the completion of the task. First, we synthesize a policy by requiring the agent to complete the task as fast as possible, i.e., $\\Gamma$$=$$14$ time steps. Then, we synthesize a policy by allowing the agent to spend more time in the environment until the completion of the task, i.e., $\\Gamma$$=$$60$ time steps. Solving the convex optimization problems take 122 and 166 seconds for $\\Gamma$$=$$14$ and $\\Gamma$$=$$60$ time steps, respectively.\n\n\nThe expected residence time in states for the induced MCs are shown in Fig. \\ref{exploration_figure}. We remind the reader that the environment is given in Fig. \\ref{grid_graph} (left).\n\\begin{figure}[t]\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\includegraphics[width= 3.95 cm, trim= {50 30 150 40 }, clip=true]{heat_map1.pdf}\n\\end{subfigure}\n\\hspace{0.15\\linewidth}\n\\begin{subfigure}[b]{0.3\\linewidth}\n\\includegraphics[width= 5 cm, trim= {50 30 50 40 }, clip=true]{heat_map2.pdf}\n\\end{subfigure}\n\\caption{The expected residence time in states for different expected times $\\Gamma$ until the completion of the task \\textit{in the same environment}. (Left) $\\Gamma$$=$$14$ time steps, i.e., the minimum time required to complete the task with probability 1. (Right) $\\Gamma$$=$$60$. }\n\\vspace{-0.3cm}\n\\label{exploration_figure}\n\\end{figure} \nWhen the agent is given the minimum time $\\Gamma$$=$$14$ time steps (left) to complete the task, it follows only the shortest paths, and therefore, cannot explore the environment. On the other hand, as it is allowed to spend more time, i.e., $\\Gamma$$=$$60$ time steps (right), in the environment, it visits different states more often and utilizes different paths to complete the task. 
Consequently, the synthesized policy enables the continual exploration of the environment while ensuring the completion of the task.\n\\subsection{Relation between entropy and predictability}\\label{example_1}\n\nIn this example, we consider an agent whose aim is to complete a task while leaking minimum information about its paths to an observer. We illustrate how the restrictions applied to the agent's paths by the task affect the predictability.\n\n\\textit{Environment:} We consider the grid world shown in Fig. \\ref{grid_graph} (right). The agent starts from the brown (S) state. The red (B) states and green (T) state are absorbing. The agent has four actions in all other states, namely left, right, up and down. A transition to the chosen direction occurs w.p. 1 if the state in that direction is not a wall. If it is a wall, e.g., left direction in brown state, the agent stays in the same state w.p. 1.\n\n\n\\textit{Tasks:} We consider five increasingly restrictive task specifications for the agent which are listed in Table \\ref{tasks_table}. The first task $\\varphi_1$ is to reach and stay in the $T$ state while avoiding all red states. The second task $\\varphi_2$ requires the agent to visit $R4$ state before completing the first task. The third task $\\varphi_3$ requires the agent to visit $R3$ state before completing the second task and so on. \n\n\\begin{table}[h!]\\vspace{-0.3cm}\n\\centering\n\\caption{The agent's tasks. }\n\\scalebox{1.1}{\n\\begin{tabular}{ | l |}\n \\hline\t\t\t\n$\\varphi_1$$=$$\\square \\lnot Red \\land \\lozenge \\square T$ \\\\ \\hline \n $\\varphi_2$$=$$\\square \\lnot Red \\land \\lozenge R4\\land \\lozenge \\square T$ \\\\ \\hline \n $\\varphi_3$$=$$\\square \\lnot Red \\land \\lozenge( R3\\land \\lozenge R4 )\\land \\lozenge \\square T$ \\\\ \\hline \n $\\varphi_4$$=$$\\square \\lnot Red \\land \\lozenge( R2\\land \\lozenge (R3 \\land \\lozenge R4) )\\land \\lozenge \\square T$ \\\\ \\hline \n $\\varphi_5$$=$$\\square \\lnot Red \\land \\lozenge( R1\\land \\lozenge (R2 \\land \\lozenge (R3 \\land \\lozenge R4)) )\\land \\lozenge \\square T$ \\\\ \\hline \n\\end{tabular}}\n\\label{tasks_table}\n\\end{table}\n\n\\textit{Observer:} There is an observer that aims to predict the agent's paths in the environment. The observer is aware of the agent's task, knows the transition probabilities exactly, and runs yes-no probes in each state to determine the successor state of the agent, i.e., probes that return an answer yes if the agent moves to the predicted successor state and no otherwise. The average number of yes-no probes run in a state is the expected number of observations needed by the observer to determine the correct successor state in that state \\cite{Paruchuri}. The observer uses the Huffman procedure \\cite{Huffman} to minimize the required number of probes. Let $\\mathcal{P}_{s}$$=$$(\\mathcal{P}_{s,1}, \\mathcal{P}_{s,2}, \\ldots, \\mathcal{P}_{s,n} )$ be the transition probabilities from state $s$ to successor states sorted in decreasing order. The number of yes-no probes run in state $s$ is denoted by $\\Upsilon_s$$=$$\\mathcal{P}_{s,1}+\\ldots+(n-1)\\mathcal{P}_{s,n-1}$$+$$(n-1)\\mathcal{P}_{s,n}$. The expected number of observations required to determine the agent's path is given by $O_{avg}$$=$$\\sum_s\\xi_s\\Upsilon_s$, which weighs the required number of probes in each state with the expected residence time in the state. 
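To make the observer model concrete, the following sketch (illustrative only; the coefficients follow the displayed expression for $\\Upsilon_s$, with the transition probabilities sorted in decreasing order) computes $\\Upsilon_s$ for a single state and aggregates $O_{avg}$ over an induced MC, given the expected residence times $\\xi_s$.
\\begin{verbatim}
def probes_per_state(succ_probs):
    """Upsilon_s = P_{s,1} + 2 P_{s,2} + ... + (n-1) P_{s,n-1} + (n-1) P_{s,n}."""
    p = sorted(succ_probs, reverse=True)
    n = len(p)
    if n == 1:
        return 0.0                                # a single successor needs no probe
    costs = list(range(1, n)) + [n - 1]           # 1, 2, ..., n-1, n-1
    return sum(c * pi for c, pi in zip(costs, p))

def expected_observations(transitions, residence_time):
    """O_avg = sum_s xi_s * Upsilon_s over the states of an induced MC."""
    return sum(residence_time[s] * probes_per_state(list(probs.values()))
               for s, probs in transitions.items())

# For example, a state with successor probabilities (0.7, 0.15, 0.15)
# requires 0.7*1 + 0.15*2 + 0.15*2 = 1.3 probes on average.
\\end{verbatim}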
We refer the reader to \\cite{Paruchuri} for further details about the observer model.\n\nWe form product MDPs for all tasks. The product MDP with the maximum number of states and transitions is the one constructed for the task $\\varphi_5$. It has 800 states, 2138 transitions, 12 MECs, and the average number of states in each MEC is 29. For each task, we require the agent to complete the task w.p. 1. The maximum entropies of all product MDPs subject to the corresponding LTL constraints are unbounded. We bound the expected time until the completion of any task by taking $\\Gamma$$=$$33$ time steps, which is the minimum expected time required to complete the task $\\varphi_5$, i.e., the most restrictive task. For each task, we synthesize a policy for the agent using the procedure explained in Section \\ref{cons_section}. The longest computation time, which is for $\\varphi_5$, is 15.2 seconds. \n\nThe entropy of Markov chains induced by the synthesized policies and the average number of observations required to predict the agent's paths are shown in Fig. \\ref{tasks_figure}. As the task imposes more restrictions on the agent's paths, the entropy of the induced MC decreases and the prediction requires fewer observations on average. Additionally, as can be seen in Fig. \\ref{tasks_figure}, the required numbers of observations for $\\varphi_4$ and $\\varphi_5$ are significantly smaller than those for $\\varphi_1$, $\\varphi_2$ and $\\varphi_3$. This decrease is due to the fact that these tasks force the agent to follow an ``S-shaped\" path in a restricted time, i.e., $\\Gamma$$=$$33$ time steps. For these tasks, although the agent still randomizes its paths to some extent, better predictability results cannot be obtained due to time restrictions.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[height=4 cm, width=7cm, trim= {0 0 0 30 }, clip=true]{tasks_vs_obs.pdf}\n\\caption{ The relation between the maximum entropy of an MDP subject to an LTL constraint and the required number of observations to predict the agent's paths. }\n\\label{tasks_figure}\\vspace{-0.5cm}\n\\end{figure}\n\n\\subsection{Predictability in a randomly generated MDP }\nIn this example, we investigate the relation between the probability of completing a task and the predictability of paths. We also evaluate the proposed algorithm against the algorithms introduced in \\cite{Paruchuri}.\n\n\\textit{Environment:} We generate an MDP with 200 states, where each state has 8 randomly selected successor states. We choose four states, make them absorbing, and label three of them as ``unsafe\" states and the remaining one as the ``target\" state. The agent has 5 actions at each state, for which the transition probabilities to successor states are assigned randomly. \n\n\\textit{Task:} The agent's task is to reach the target state while avoiding the unsafe states, i.e., $\\varphi$$=$$\\square \\lnot unsafe \\land \\lozenge\\square target$.\n\n\\textit{Observer:} We use the same observer model introduced in Section \\ref{example_1}.\n\n\\textit{Policies:} We compare the proposed method with the weighted maximum entropy ($WME$) and binary search for randomization linear programming ($BRLP$) algorithms, which are introduced in \\cite{Paruchuri} for randomizing an agent's policy to minimize predictability. We note that in \\cite{Paruchuri}, the authors claim that the $WME$ algorithm is non-convex and cannot be solved in polynomial time. 
However, its convexity can be proven by Proposition \\ref{prop_convex} since it solves a special case of the convex optimization problem given in \\eqref{unconstrained_program}, i.e., it is equivalent to the problem in \\eqref{unconstrained_program} when the transition probabilities are either 0 or 1. We refer the reader to \\cite{Paruchuri} for further details about the $WME$ and $BRLP$ algorithms. \n\n We form the product MDP. It has 800 states, 2172 transitions, and 5 MECs, each of which contains a single state. The maximum probability of completing the task $\\varphi$ is obtained as $\\beta$$=$$0.75$ by solving a linear programming problem introduced in \\cite{Marta}. The maximum entropy of the product MDP subject to the LTL constraint $\\text{Pr}_{\\mathcal{M}}^{\\pi}(s_0$$\\models$$\\varphi)$$\\geq$$\\beta$ is unbounded for all $\\beta$$>$$0$. We fix the expected time until the completion of the task to $\\Gamma$$=$$200$ time steps, and synthesize policies for different values of $\\beta$. Solving the optimization problems takes at most 150, 155, and 92 seconds for the proposed method, $WME$ and $BRLP$ algorithms, respectively. \n\nThe required numbers of observations to predict the agent's paths for different $\\beta$ values are shown in Fig. \\ref{compare_paruchuri}. As the probability of completing the task decreases, the randomness of the agent's paths increases and the prediction requires more observations on average. Therefore, there is a trade-off between the probability of satisfying the task and the randomness of the paths. Additionally, the proposed method (green) requires twice as many observations as the $BRLP$ algorithm (red) when $\\beta$$=$$0.5$. Note also that the $WME$ algorithm (blue) cannot achieve better predictability results than the proposed method because it does not exploit the inherent stochasticity in the environment and relies solely on the randomization of the agent's actions to generate unpredictable paths.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[height=4 cm, width=7cm, trim= {0 0 0 30 }, clip=true]{comparison_example.pdf}\n\\caption{The trade-off between the probability of completing the task and the predictability of paths. $BRLP$ and $WME$ are algorithms proposed in \\cite{Paruchuri} to randomize the agent's actions.}\n\\label{compare_paruchuri}\n\\end{figure}\n\n\n\n\\section{Conclusions and Future Work}\\label{conclusion_section}\nWe showed that the maximum entropy of an MDP can be either finite, infinite or unbounded, and presented an algorithm to verify the property of the maximum entropy for a given MDP. We presented an algorithm, based on a convex optimization problem, to synthesize a policy that maximizes the entropy of an MDP. For MDPs with non-infinite maximum entropy, we established the equivalence between the maximum entropy of an MDP and the maximum entropy of paths in the MDP. Finally, we provided a procedure to obtain a policy that maximizes the entropy of an MDP while ensuring the satisfaction of a temporal logic specification with desired probability.\n\nAn interesting future direction is to include adversaries in the framework by modeling the problem as a two-player game. Being informed about the aims and capabilities of rational\/irrational adversaries in the environment, an agent may want to explore its environment while avoiding the threats caused by adversaries. 
Another future direction may be to extend this work to multi-agent scenarios by describing the tasks, and communication and coordination constraints between the agents as temporal logic specifications.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSimultaneous estimation of several independent normal means has been a topic of great research interest, especially in the 60's, 70's and 80's, after the publication of the celebrated James-Stein estimator (James and Stein, 1961). \nLet $y=(y_1,\\ldots,y_m)^{\\prime}$ be a maximum likelihood estimator of $\\theta=(\\theta_1,\\cdots,\\theta_m)^{\\prime}$ under the model:\n$y_i|\\theta_i \\stackrel{ind.}{\\sim} N(\\theta_i, 1),\\; i=1,\\cdots,m.$\nJames-Stein (1961) provided a surprising result that for $m\\ge 3$, $y$ is an inadmissible estimator of $\\theta$ under the model and the sum of squared error loss function: $L(\\hat{\\theta},\\theta)=\\sum_{i=1}^{m}(\\hat{\\theta}_i-\\theta_i)^2$. They also showed \nthat the estimator $\\hat \\theta_i^{JS}=(1-\\hat{B}^{JS})y_i$, where $\\hat{B}^{JS}={(m-2)}\/{(\\sum_{i=1}^{m}y_i^2)}$, dominates $y$ in terms of the frequentist's risk. To be specific, \n$E[\\sum_i^{m}(\\hat{\\theta}_i^{JS}-\\theta_i)^2|\\theta]\\leq E[\\sum_i^{m}(y_i-\\theta_i)^2|\\theta]$, for all \n$\\theta \\in \\mathcal{R}^m,$ the $m$-dimensional Euclidean space, with strict inequality holding for at least one point $\\theta$. \n\nThe potential of different extensions of the James-Stein estimator to improve data analysis became transparent when Efron and Morris (1973) provided an empirical Bayesian justification of the James-Stein estimator using the prior $\\theta_i\\sim^{iid.}N(0,A)$, \\; $i=1,\\cdots,m$. \nSome earlier applications of empirical Bayesian method include the estimation of: (i) false alarm probabilities in New York City (Carter and Rolph, 1974), (ii) the batting averages of major league baseball players (Efron and Morris, 1975), (iii) prevalence of toxoplasmosis in El Salvador (Efron and Morris, 1975) and (iv) per-capita income of small places in the USA (Fay and Herriott, 1979). More recently, variants of the method given in Efron and Morris (1973) was used: to estimate poverty rates for the US states, counties, and school districts (Citro and Kalton, 2000) and Chilean municipalities (Casas-Cordero, Encina and Lahiri , 2016), and to estimate proportions at the\nlowest level of literacy for states and counties (Mohadjer et al. 2012).\n\nThe following two-level Normal hierarchical model is an extension of the model used by Efron and Morris (1973):\n\n\\noindent For $i=1,\\ldots, m$, \n\n{\\rm Level \\ 1 \\ (sampling \\ model):}\n$y_i|\\theta _i \\stackrel{\\mathrm{ind.}}{\\sim} N(\\theta_i,D_i)$;\n\n{\\rm Level \\ 2 \\ (linking \\ model):}\n$\\theta_i \\stackrel{\\mathrm{ind.}}{\\sim}N(x_i^{\\prime}\\beta, A)$.\n\n\\noindent In the above model, level 1 is used to account for the sampling distribution of unbiased estimates $y_i$ based on observations taken from the $i$th population. 
In this model, we assume that the sampling variances $D_i$ are known and this assumption often follows from the asymptotic variances of transformed direct\n estimates (Efron and Morris, 1975; Carter and Rolph, 1974) or from empirical variance modeling (Fay and Herriot, 1979, Otto and Bell, 1995).\nLevel 2 links the random effects $\\theta_i$ to a vector of $p$ known auxiliary variables $x_i=(x_{i1},\\cdots,x_{ip})^{\\prime}$, which are often obtained from various alternative data sources. The parameters $\\beta$ and $A$ are generally unknown and are estimated from the available data. We assume that $\\beta\\in \\mathcal{R}^p,$ the $p$-dimensional Euclidian space. \nIn the growing field of small area estimation, this model is commonly referred to as the Fay-Herriot model, named after the authors of the landmark paper with more than 1200 citations to date (according to Google Scholar) by Fay and Herriot (1979). For a comprehensive review of small area estimation, the readers are referred to the book by Jiang (2007) and Rao and Molina (2015).\n\nWe may be interested in the high dimensional parameters (random effects) $\\theta_i$ and\/or the hyperparameters $\\beta$ and $A$. The estimation problem can be addressed using either Bayesian or linear mixed model classical approach. When hyperparameters are known, both the Bayesian and linear mixed model classical approaches use conditional distribution of $\\theta_i$ given the data for point estimation and measuring uncertainty of the point estimator. To elaborate, the posterior mean of $\\theta_i$, the Bayesian point estimator, is identical to the best predictor of $\\theta_i$. Moreover, the posterior variance of $\\theta_i$ is identical to the mean squared error of the best predictor. When $A$ is known but $\\beta$ is unknown, a flat prior is generally assumed for $\\beta$ under the Bayesian approach. Interestingly, in this unknown $\\beta$ case, the posterior mean and posterior variance of $\\beta$ are identical to the maximum likelihood estimator of $\\beta$ and the variance of the maximum likelihood estimator, respectively. Moreover, the posterior mean and variance of $\\theta_i$ are identical to the best linear unbiased predictor of $\\theta_i$ and its mean squared error, respectively.\n\nWhen both $\\beta$ and $A$ are unknown, flat prior, i.e., $\\pi (\\beta,A)\\propto 1,\\;\\beta\\in \\mathcal{R}^p, A>0$, is common though a few other priors for $A$ have been considered; see, e.g., Datta et al. (2005) and Morris and Tang (2011). In a linear mixed model classical approach, different estimators of $A$ have been proposed and the estimator of $\\beta$ is obtained by plugging in an estimator of $A$ in the maximum likelihood estimator of $\\beta$ when $A$ is known. In this general case, the relationship between the Bayesian and linear mixed model classical approach is not clear. The main goal of this paper is to understand the nature of such relationship. In particular, we answer the following question: For a given classical method of estimation of $A$, is it possible to find a prior on $A$ that will make the Bayesian solution closer to the classical solution in achieving multiple goals (i)-(v), described in Section 3, or a subset of these goals given in Theorem 2?\n\n\nWhat would be the parameters of interest in setting the multiple goals? 
To this end, we first note that Morris and Tang (2011) pointed out the need for accurately estimating the shrinkage parameters $B_i=D_i\/(A+D_i)$ as they appear linearly in the Bayes estimators of $\\theta_i$, which are the prime parameters of interest in many applications like the small area estimation. Moreover, the shrinkage parameters are good indicators of the strength of the prior on the random effects $\\theta_i$. Despite the importance of shrinkage parameters, relatively little research has been conducted in order to understand the theoretical properties of existing estimators. For the balanced case when $D_i=D,\\; i=1,\\cdots,m$, Morris (1983) proposed an exact unbiased estimator of $B=D\/(A+D)$ and showed component-wise dominance of the resulting empirical Bayes estimator of $\\theta_i$ under the joint distribution of $\\{(y_i,\\theta_i),\\;i=1,\\cdots,m\\}$ when $p\\le m-3.$ For the general unbalanced case, Hirose and Lahiri (2018) proposed an adjusted maximum likelihood estimator of $B_i$ that satisfies multiple desirable properties. First, the method yields an estimator of $B_i$ that is strictly less than 1, which prevents the overshrinking problem in the related empirical best linear unbiased predictor or simply empirical best predictor of $\\theta_i$. Secondly, this adjusted maximum likelihood estimator of $B_i$ has the smallest bias among all existing rival estimators in the higher order asymptotic sense. Thirdly, when this adjusted maximum likelihood method is used, second-order unbiased estimator of mean squared error of empirical best linear unbiased predictor can be produced in a straightforward way without additional bias corrections that are necessary for other existing variance component estimation methods. \nFor prior work on the adjusted maximum likelihood method, the readers are referred to Lahiri and Li (2009), Li and Lahiri (2010), Yoshimori and Lahiri (2014a,b), Hirose and Lahiri (2018), and Hirose (2017,2019). \n\nAs stated in Morris and Tang (2011), flat prior leads to admissible minimax estimators of the random effects for a special case of the model. In Section 3, we show that the bias of the Bayes estimator of $B_i$, under the flat prior and the two-level model, is $O(m^{-1})$ except for the balanced case when it is of lower order $o(m^{-1})$. Thus, in general, the Bayes estimator of $B_i$, under the flat prior, has more bias than the adjusted maximum likelihood estimator of Hirose and Lahiri (2018) in the higher order asymptotic sense. In this section, we propose a prior for the hyperparameters that leads to the Bayes estimator of $B_i$ with bias of lower order $o(m^{-1})$ and thus is on par with the adjusted maximum likelihood of Hirose and Lahiri (2018). Interestingly, this prior also makes the resulting Bayesian method much closer to the Hirose-Lahiri\\rq{}s empirical best linear unbiased prediction method in multiple sense. \nIn particular, the posterior variance of the random effect $\\theta_i$, under the proposed prior, is identical to both the Taylor series and parametric bootstrap second-order mean squared error estimators of Hirose and Lahiri (2018) in the higher order asymptotic sense. To our knowledge, we establish for the first time the relationship between the Bayesian posterior variance and parametric bootstrap mean squared error estimator in this higher-order asymptotic sense. \n\n\n\nThe outline of the paper is as follows. 
In Section 2, we first introduce a classical method for the two level model by proposing a general adjustment factor in estimating $A$. We show how the method is related to the commonly used residual maximum likelihood method for a given choice of the adjustment factor. We then construct a prior, called a multi-goal prior, that provides a Bayesian solution close (with respect to several properties in higher order asymptotic sense) to classical solution in order to estimate the hyperparameters and random effects. Section 3 discusses prior choice for an important special case considered by Hirose and Lahiri (2018). In addition to the multiple properties discussed in Section 2, this section develops a unique multi-goal prior that establishes a relationship of the posterior variances of the random effects with the Hirose-Lahiri Taylor series and parametric bootstrap mean squared error estimators that do not require the usual complex bias corrections. We reiterate that this paper demonstrates for the first time how to bring the Bayesian and classical parametric bootstrap methods closer in the context of random effects models. In Section 4, we compare the proposed multi-goal prior with the superharmonic prior using a real life data. In Section 5, we discuss issues in extending our results to a general model. All the technical proofs are deferred to the Appendix.\n\n \n\\section{Prior Choice for reconciliation of the Bayesian and classical approach}\nIn this section, we first introduce a general classical method for estimation of hyperparameters and random effects in the two-level Normal hierarchical model. Then we construct prior for the hyperparameters so that the corresponding Bayesian method is identical to the classical method in the higher order asymptotic sense with respect to multiple properties. \n\nWe first introduce the empirical best linear unbiased predictor of $\\theta_i$ when the variance component $A$ is estimated by a general adjusted maximum likelihood method. To this end, we define mean squared error of a given predictor $\\hat{\\theta}_i$ of $\\theta_i$ as $M_i(\\hat{\\theta}_i)=E(\\hat{\\theta}_i-\\theta_i)^2$, where the expectation is with respect to the joint distribution of $y=(y_1,\\cdots,y_m)^{\\prime}$ and $\\theta=(\\theta_1,\\cdots,\\theta_m)^{\\prime}$ under the two-level normal model. \nThe best linear unbiased predictor $\\hat{\\theta}_i^{BLUP}$ of $\\theta_i$, which minimizes $M_i(\\hat{\\theta}_i)$ among all linear unbiased predictors $\\hat\\theta_i$, is given by \n$\\hat{\\theta}_i^{BLUP} (A)=(1-B_i)y_i+B_i x^{\\prime}_i\\hat{\\beta}(A),$ \nwhere $B_i\\equiv B_i(A)=D_i\/(A+D_i)$ is the shrinkage factor and $\\hat{\\beta}(A)=(X^{\\prime}{V}^{-1}X)^{-1}X^{\\prime}{V}^{-1}y$ is the weighted least square estimator of $\\beta$ when $A$ is known. In this formula, $X^{\\prime}=(x_1,\\cdots,x_m)$ denotes $p\\times m$ matrix of known auxiliary variables and $V=\\mbox{diag}(A+D_1,\\cdots,A+D_m)$ denotes a $m\\times m$ diagonal covariance matrix of $y$. \n\nWe consider the following general adjusted maximum likelihood estimator $\\hat{A}_{i;G}$ of $A$ : \n\\begin{align}\n\\hat{A}_{i;G}=\\mathop{\\rm arg~max}\\limits_{0 \\le A<\\infty} h_{i;G}(A)L_{RE}(A), \\label{ad.est}\n\\end{align}\nwhere the general adjustment factor $h_{i;G}(A)$ satisfies Condition R5 in Appendix A. \nNote that maximum likelihood, residual maximum likelihood and different adjusted maximum likelihood estimators of $A$ can be produced using suitable choices of $h_{i;G}(A)$. 
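As a concrete illustration of \\eqref{ad.est}, the following Python sketch (a minimal example; the residual log-likelihood is written in its standard form up to an additive constant, and the adjustment factor is supplied by the user through its logarithm) computes a general adjusted maximum likelihood estimate of $A$ by one-dimensional numerical maximization. The commented call at the end shows a hypothetical choice of adjustment factor; any $h_{i;G}(A)$ satisfying Condition R5 can be passed in the same way.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def residual_loglik(A, y, X, D):
    """Residual log-likelihood of A for the two-level model, up to a constant."""
    Vinv = np.diag(1.0 / (A + D))
    XtVinvX = X.T @ Vinv @ X
    beta_hat = np.linalg.solve(XtVinvX, X.T @ Vinv @ y)   # weighted least squares
    resid = y - X @ beta_hat
    return -0.5 * (np.sum(np.log(A + D))
                   + np.linalg.slogdet(XtVinvX)[1]
                   + resid @ Vinv @ resid)

def adjusted_ml(y, X, D, log_adjustment, upper=1e6):
    """Maximize h_{i;G}(A) * L_RE(A) over 0 <= A <= upper."""
    objective = lambda A: -(log_adjustment(A) + residual_loglik(A, y, X, D))
    return minimize_scalar(objective, bounds=(0.0, upper), method='bounded').x

# e.g., with the (hypothetical) adjustment factor h(A) = A + D[i] for area i:
# A_hat = adjusted_ml(y, X, D, lambda A: np.log(A + D[i]))
\\end{verbatim}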
Plugging in $\\hat{A}_{i;G}$ for $A$ in the best linear unbiased predictor, one obtains an empirical best linear unbiased predictor $\\hat{\\theta}_i^{EB}(\\hat{A}_{i;G})$ of $\\theta_i$. \n\nSince the residual maximum likelihood estimator of $A$ has the lowest bias among existing estimators in the higher-order asymptotic sense, it is of interest to establish a relationship between the general adjusted maximum likelihood estimator and the residual maximum likelihood estimator. We describe such relationship in Theorem \\ref{L1}; see Appendix A.1 for a proof.\n\n\\begin{theorem}\n\\label{L1}\nUnder regularity conditions R1-R5, \n$$\\hat{A}_{i;G}-\\hat{A}_{RE}=\\frac{2\\tilde l_{i;G}^{(1)}(A)}{tr[V^{-2}]}+o_p(m^{-1}),$$ \n\\end{theorem}\nwhere $\\tilde l_{i;G}^{(1)}(A)=\\frac{\\partial \\log h_{i;G}(A)}{\\partial A}$.\n\nWe now present Theorem \\ref{rel} for constructing a prior, starting from a given adjustment factor $h_{i,G}(A)$, in order to bring the resulting Bayesian method closer to the classical method with respect to three criteria. To this end, let $p (\\beta, A)$ denote the prior for $(\\beta,A)$. Following Datta et al. (2005), we assume \n$p (\\beta, A)\\propto \\pi (A)$ and introduce the following notations to be used throughout the paper: \n\\begin{align*} \n&\\hat b_1=\\frac{\\partial B_i}{\\partial A}\\Big |_{\\hat{A}_{RE}},\\ \\hat b_2=\\frac{\\partial^2 B_i}{\\partial A^2}\\Big |_{\\hat{A}_{RE}},\\ \\hat \\rho_1 =\\frac{\\partial \\log \\pi(A)}{\\partial A}\\Big |_{\\hat{A}_{RE}},\\\\ \n&\\hat h_2=-\\frac{1}{m}\\frac{\\partial^2 l_{RE}}{\\partial A^2}\\Big |_{\\hat{A}_{RE}}=\\frac{tr[V^{-2}]}{2m}+o_p(m^{-1}),\\\\ \n&\\hat h_3=-\\frac{1}{m}\\frac{\\partial^3 l_{RE}}{\\partial A^3}\\Big |_{\\hat{A}_{RE}}=-\\frac{2tr[V^{-3}]}{m}+o_p(m^{-1}),\n\\end{align*}\nwhere $\\hat A_{RE}$ is the residual maximum likelihood estimator of $A$, and $l_{RE}$ is the logarithm of residual likelihood.\n\\begin{theorem}\n\\label{rel}\nUnder Regularity Conditions R1-R5, if $p(\\beta,A)\\propto \\pi_{i;G}(A)$ and \n\\begin{align}\\pi_{i;G}(A)\\propto (A+D_i)tr(V^{-2}){h}_{i;G}(A),\\label{g.p}\\end{align}\nwe have;\n\\begin{align*}\n&(i) \\hat{B}_i^{GHB}=\\hat{B}_i(\\hat{A}_{i;G})+o_{p}(m^{-1});\\\\\n&(ii) \\hat{V}_i^{GHB}=V[B_i|y]=Var( \\hat B_i(\\hat{A}_{i;G}))+o_p(m^{-1});\\\\\n&(iii) \\hat{\\theta}_i^{GHB}=\\hat{\\theta}_i(\\hat{A}_{i;G})+o_{p}(m^{-1}).\n\\end{align*}\n\\end{theorem}\nThe proof of Theorem \\ref{rel} is deferred to Appendix A.2.\n\n\n\\begin{remark}\n\\label{pri.cond}\nWe have several remarks on the general multi-goal prior given by (\\ref{g.p}). \n\\begin{description}\n\\item[(a)] Theorem \\ref{rel} is valid for multiple choices of $h_{i;G}$.\n\n\\item[(b)]\nThere exists at least one strictly positive estimate of $A$ if $h_{i;G}(A)>0$ and \n\\begin{align}\nh_{i;G}(A)=o(A^{(m-p)\/2}),\\label{F.ec}\n\\end{align}\nfor large $A$ under R6-7.\n\n\n\\item[(c)] Note that $h_{i;G}(A)$ may not qualify as a bonafide prior since it may result in an improper posterior; see Yoshimori and Lahiri (2014b) for an example. However, if we restrict the class of priors to $h_{i;G}(A)=(A+D_i)^s$ for some $s>0$, we show in Appendix B.1 that $h_{i;G}(A)=o(A^{(m-p-2)\/2})$ is a sufficient condition for the propriety of posterior and hence can serve as a prior for $A$.\n\nOn the other hand, it is straightforward to show that $\\pi_{i;G}(A)$ given by (\\ref{g.p}) with $h_{i;G}(A)=o(A^{(m-p)\/2})$ yields proper posterior because of multiplication of $h_{i;G}(A)$ by $(A+D_i)tr(V^{-2})$. 
\nIn either case, Theorem \\ref{rel} can help users in selecting an adjustment factor in the empirical best linear unbiased prediction approach or a prior in the Bayesian approach.\n\n\\end{description}\n\\end{remark}\n\n\n\n\n\n\\section{Multi-Goal Prior for an important special case}\n\n\nHirose and Lahiri (2018) put forward a classical approach for an important choice of $h_{i;G}(A)$ that satisfies the following desirable properties under regularity conditions R1-R7: \n \n\\begin{description}\n\\item [1.] It is desirable to have a second-order unbiased estimator of $B_i$, i.e., $E(\\hat B_i)=B_i+o(m^{-1})$.\n\\item [2.]$0<\\mbox{inf}_{m\\ge 1}\\hat B_i\\le \\mbox{sup}_{m\\ge 1}\\hat B_i<1$ (a.s.) \nfor protecting the empirical best linear unbiased predictor from over-shrinking to the regression estimator.\n\\item [3.] It is desirable to obtain a simple second-order unbiased Taylor series mean squared error estimator of the empirical best linear unbiased predictor without any bias correction; that is, $E[\\hat{M}_{i}(\\hat A_i)]=M_i(\\hat{\\theta}_i^{EB})+o(m^{-1}).$\n\\item [4.] It is desirable to produce a strictly positive second-order unbiased single parametric bootstrap mean squared error estimator without any bias-correction,\n\\end{description}\nwhere $\\hat{M}_i(\\hat{A}_i)$ denotes an estimator of the mean squared error of $\\hat\\theta_i^{EB}(\\hat{A})$. \n\nLet $\\hat{A}_{i;MG}$, $\\hat{B}_{i;MG}$, $\\hat{\\theta}_{i;MG}^{EB}$, $\\hat{M}_{i;MG}$, $\\hat{M}_{i;MG}^{boot}$ be the Hirose--Lahiri estimators of $A$, $B_i$, the empirical best linear unbiased predictor of $\\theta_i$, and the Taylor series and parametric bootstrap estimators of the mean squared error of the empirical best linear unbiased predictor, respectively. They are given by\n\\begin{align*} \n\\hat {A}_{i;MG}=\\mathop{\\rm arg~max}\\limits_{0< A <\\infty}& \\tilde h_i(A)L_{RE}(A),\\\\ \n\\hat {B}_{i;MG}=\\hat{B}_i(\\hat{A}_{i;MG}),& \\ \\hat {\\theta}_{i;MG}^{EB}=\\hat{\\theta}_i^{EB}(\\hat{A}_{i;MG}), \\\\\n\\hat{M}_{i;MG}=\\hat{M}_i(\\hat{A}_{i;MG}),& \\ \\hat{M}_{i;MG}^{boot}=E_*[\\{\\hat{\\theta}_i(\\hat{A}_{i;MG}^*,y^{*})-\\theta_i^*\\}^2],\n\\end{align*}\nwhere $\\tilde h_i(A)=h_{+}(A)(A+D_i)$ with $m>p+2$; $h_{+}(A)$ satisfies Conditions R6-R7 in Appendix A; \n$\\theta_i^{*}=x_i^{\\prime}\\hat \\beta(\\hat{A}_{1;MG},\\ldots,\\hat{A}_{m;MG})+u_i^*$ with $u_i^* \\sim^{ind.} N(0,\\hat{A}_{i;MG})$; $E_*$ is expectation with respect to the two-level Normal hierarchical model with $\\beta$ and $A$ replaced by $\\hat\\beta(\\hat{A}_{1;MG},\\ldots,\\hat{A}_{m;MG})$ and $\\hat A_{i;MG}$, respectively. Note that the choice of $h_{+}(A)$ is not unique in general. One can use the choice given in Yoshimori and Lahiri (2014a). \n\nThe following corollary follows from Theorem \\ref{L1}, Hirose and Lahiri (2018) and the fact that $\\frac{\\partial \\hat \\beta(A)}{\\partial A}=O_p(m^{-1\/2})$. \n\\begin{corollary}\n\\label{cor}\nUnder the regularity conditions,\n\\begin{align*}\n&(i) \\hat{A}_{i;MG}-\\hat{A}_{RE}=O_{p}(m^{-1});\\\\\n&(ii) x_i^{\\prime}\\hat{\\beta}(\\hat{A}_{1;MG},\\ldots,\\hat{A}_{m;MG})-x_i^{\\prime}\\hat{\\beta}(\\hat{A}_{RE})=o_{p}(m^{-1}). \n\\end{align*}\n\\end{corollary}\n\n\nIn this section, we suggest a Bayesian approach that is close to the classical approach to achieve multiple goals in the higher-order asymptotic sense. 
To this end, we seek a multi-goal prior on the hyperparameters $(\\beta,A)$ that satisfies all the following properties simultaneously:\n\\begin{description}\n\\item[(i)] $\\hat{B}_i^{HB}\\equiv E[B_i|y]=\\hat{B}_{i;MG}+o_p(m^{-1})$;\n\\item[(ii)] $V[B_i|y]=Var( \\hat B_{i;MG} )+o_p(m^{-1})$;\n\\item[(iii)] $\\hat{\\theta}_i^{HB}\\equiv E[\\theta_i|y]=\\hat{\\theta}_{i;MG}+o_p(m^{-1})$;\n\\item [(iv)] $V[\\theta_i|y]=\\hat{M}_{i;MG}+o_p(m^{-1})$;\n\\item [(v)] $V[\\theta_i|y]=\\hat{M}_{i;MG}^{boot}+o_p(m^{-1})$.\n\\end{description}\n\n\nFirst, we prepare the following result, which follows from Corollary \\ref{cor} (i) and Hirose and Lahiri (2018): \n\\begin{align}\n\\hat{B}_i(\\hat{A}_{i;MG})-\\hat{B}_i(\\hat{A}_{RE})&=(\\hat{A}_{i;MG}-\\hat{A}_{RE})\\hat b_1+o_p(m^{-1})\\notag\\\\\n&=\\{E[\\hat{A}_{i;MG}-A]-E[\\hat{A}_{RE}-A]\\} b_1+o_p(m^{-1})\\notag\\\\\n&=-\\frac{2D_i}{tr[V^{-2}](A+D_i)^3}+o_p(m^{-1}).\\label{bias.B}\n\\end{align}\n\nIf we use the flat prior $\\pi(A)\\propto 1$, we get the following result using equation (21) of Datta et al. (2005) with $b(A)=B_i(A)$ and equation (\\ref{bias.B}): \n$$E[B_i|y]=\\hat B_i(\\hat A_{i;MG})+\\frac{4D_i}{tr[V^{-2}](A+D_i)^2}\\left[\\frac{1}{A+D_i}-\\frac{tr[V^{-3}]}{tr[V^{-2}]}\\right]+o_p(m^{-1}).$$ \nThis result emphasizes that the flat prior $\\pi(A)\\propto 1$ cannot achieve Property (i) except for the balanced case ($D_i=D$ for all $i$). We, therefore, seek a prior $\\pi (A)$ that satisfies Property (i) even in the unbalanced case. \nTo this end, we also use the following result (\\ref{B.HB}) given in (21) of Datta et al. (2005) with $b(A)=B_i(A)$:\n\\begin{align}\nE[B_i|y]=\\hat{B}_i(\\hat{A}_{RE})+\\frac{1}{2m\\hat{h}_2}\\left(\\hat b_2-\\frac{\\hat{h}_3}{\\hat{h}_2}\\hat b_1 \\right)+\\frac{\\hat{b}_1}{m\\hat{h}_2}\\hat \\rho_1+o_p(m^{-1}).\\label{B.HB}\n\\end{align}\nIt is evident from equations (\\ref{bias.B}) and (\\ref{B.HB}) that our desired prior must satisfy the following differential equation, up to the order of $O(m^{-1})$:\n\\begin{align}\n\\frac{1}{2m{h}_2}\\left( b_2-\\frac{{h}_3}{{h}_2} b_1 \\right)+\\frac{{b}_1}{m{h}_2} \\rho_1=-\\frac{2D_i}{tr[V^{-2}](A+D_i)^3}.\\label{diff1}\n\\end{align}\n\nNote that the differential equation (\\ref{diff1}) is equivalent to the following differential equation, up to the order of $O_p(m^{-1})$:\n\\begin{align}\n\\rho_1=\\frac{\\partial \\log \\pi(A)}{\\partial A}&=-\\frac{m h_2}{ b_1}\\frac{2D_i}{tr[V^{-2}](A+D_i)^3}-\\frac{1}{2}\\left[\\frac{ b_2}{ b_1}-\\frac{ h_3}{ h_2} \\right]\\notag\\\\\n&=\\frac{2}{A+D_i}-\\frac{2tr[V^{-3}]}{tr[V^{-2}]}.\\label{difffinal}\n\\end{align}\n\nHence, we obtain a solution to the differential equation (\\ref{difffinal}) as follows: \n\\begin{eqnarray}\n\\pi(A)\\propto (A+D_i)^2tr[V^{-2}]. \\label{MG.pri0}\n\\end{eqnarray}\n\nNote that the prior (\\ref{MG.pri0}) depends on $i$. Therefore, we redefine it as:\n\\begin{eqnarray}\n\\pi_i(A)\\propto (A+D_i)^2tr[V^{-2}]. \\label{MG.pri}\n\\end{eqnarray}\n\n\\begin{remark}\n\\label{R1}\nWe have several important remarks on the prior (\\ref{MG.pri}). \n\\begin{description}\n\\item[(a)] The prior satisfies the rest of Properties (ii)-(v) simultaneously, as shown in Appendix B.2. 
\n\\begin{remark}\n\\label{R1}\nWe have several important remarks on the prior (\\ref{MG.pri}). \n\\begin{description}\n\\item[(a)] The prior satisfies the rest of Properties (ii)-(v) simultaneously, as shown in Appendix B.2. \nIt is remarkable that $\\pi_i(A)$ given by (\\ref{MG.pri}) is the unique prior that achieves Properties (i)-(v) simultaneously, up to the order of $O_p(m^{-1})$, since $E[g_{1i}(A)|y]=g_{1i}(\\hat A_{i;MG})+o_p(m^{-1})$, as shown in (\\ref{g1.exp}).\n\n\\item[(b)] The prior given by equation (\\ref{MG.pri}) reduces to Stein's super-harmonic prior in the balanced case $D_i=D,\\;i=1,\\ldots,m$, up to the order of $O_p(m^{-1})$.\n\n\\item[(c)] Datta et al. (2005) found the same prior by matching (in a higher-order asymptotic sense) the expected value of the posterior variance of $\\theta_i$ with the mean squared error of the empirical best linear unbiased predictor when the residual maximum likelihood estimator is used for the variance component $A$. It is interesting to note that the same prior achieves multiple goals, a fact that had gone unnoticed.\n\n\\item [(d)] From the result of Ganesh and Lahiri (2008), the prior \n$$\\pi (A)\\propto \\frac{\\sum \\{1\/(A+D_i)^2\\}}{\\sum \\omega_i\\{D_i^2\/(A+D_i)^2\\}}$$ also satisfies \n$\\sum_{i=1}^m \\omega_i E[{V}(\\theta_i|y)-MSE\\{\\hat \\theta_i(\\hat{A}_{i;MG})\\}]=o(m^{-1}).$\n\\end{description}\n\\end{remark}\n\n\\section{Data Analysis}\nIn this section, using the 1993 Small Area Income and Poverty Estimates (SAIPE) data set, we demonstrate that our proposed multi-goal prior (MGP) performs better than the superharmonic prior (SHP) in producing Bayesian solutions closer to the multi-goal classical solutions of Hirose and Lahiri (2018). The SAIPE data we use here are from Bell and Franco (2017), available at \\url{https:\/\/www.census.gov\/srd\/csrmreports\/byyear.html}. The data contain direct poverty rates ($y_i$), associated sampling variances ($D_i$), and auxiliary variables ($x_i$) derived from administrative and census data for the 50 states and the District of Columbia. Much has been written about SAIPE over the years. See, for instance, the recent book chapter by Bell et al. (2016). \n\nFirst, consider the estimation of the shrinkage parameters $B_i$ for all the states. Figure \\ref{Bi.93} displays the classical multi-goal estimates $\\hat B_{i;MG}$ and the Bayes estimates of $B_i$ under the superharmonic and the multi-goal priors for all the states, arranged in decreasing order of $\\hat B_{i;MG}$. Note that the Bayes estimate of $B_i$ is a one-dimensional integral, which is approximated by numerical integration using the R function \\lq\\lq{}adaptIntegrate\\rq\\rq{}. Overall, the Bayes estimates under the multi-goal prior are closer to the classical estimates (MGF) than those under the superharmonic prior.\n
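\nTo make this computation concrete, the following is a minimal R sketch (illustrative only, not the code used for this analysis) of how such a posterior summary might be evaluated. Under a flat prior on $\\beta$ and a prior $\\pi(A)$ on $A$, the marginal posterior of $A$ is proportional to $\\pi(A)L_{RE}(A)$, so $E[B_i|y]$ reduces to a ratio of two one-dimensional integrals; the sketch reuses \\texttt{restricted\\_loglik} from the earlier sketch, uses the base R function \\texttt{integrate} in place of \\texttt{adaptIntegrate}, and the upper integration limit \\texttt{A\\_max} is an illustrative cutoff.\n\\begin{verbatim}\n## Minimal sketch: posterior mean of B_i when the marginal posterior of A\n## is proportional to prior_A(A) * L_RE(A); prior_A() is supplied by the\n## user (multi-goal or superharmonic prior).\nposterior_mean_Bi <- function(i, y, X, D, prior_A, A_max = 1e4) {\n  ll_ref <- restricted_loglik(median(D), y, X, D)    # stabilises exp()\n  post_kernel <- function(A)\n    sapply(A, function(a) prior_A(a) * exp(restricted_loglik(a, y, X, D) - ll_ref))\n  num <- integrate(function(A) (D[i] \/ (A + D[i])) * post_kernel(A), 0, A_max)$value\n  den <- integrate(post_kernel, 0, A_max)$value\n  num \/ den\n}\n\n## Example: the multi-goal prior (A + D_i)^2 * tr[V^{-2}] for area i\n## prior_MG <- function(A) (A + D[i])^2 * sum(1 \/ (A + D)^2)\n## Bhat_i   <- posterior_mean_Bi(i, y, X, D, prior_MG)\n\\end{verbatim}\nOther posterior summaries reported in this section can be computed analogously.\n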
\nNext, in Figure \\ref{Vi93}, we compare the mean squared error estimates obtained by the Taylor series (MGF) and parametric bootstrap (PB MG) methods of Hirose and Lahiri (2018) with the posterior variances under the two different priors. The parametric bootstrap mean squared error estimates use $10^4$ bootstrap samples. The two mean squared error estimates are virtually identical. Again, our posterior variances under the multi-goal prior are much closer to the mean squared error estimates than the corresponding posterior variances under the superharmonic prior.\n\n\\begin{figure}[ht]\n\\includegraphics[bb=0 0 1920 1082,scale=0.2,clip]{Bi93.png}\n\\caption{$B_i$ estimates (MGF: $\\hat B_{i;MG}$, MGP: $E_{MG}[B_i|y]$, SHP: $E_{SHP}[B_i|y]$)}\n\\label{Bi.93}\n\\end{figure}\n\n\n\\begin{figure}[ht]\n\\includegraphics[bb=0 0 1920 1082,scale=0.2,clip]{Vi93.png}\n\\caption{MSE estimates \n(PB.MG: $\\hat M^*_{i;MG}$, MGF: $\\hat M_{i;MG}$, MG Prior: $V_{MG}[\\theta_i|y]$, SHP: $V_{SHP}[\\theta_i|y]$)\n}\n\\label{Vi93}\n\\end{figure}\n\n\n\\section{Discussion} \n\nCan we extend our results to a general linear mixed model? To answer this question, we consider the following nested error regression model considered by Battese et al. (1988): \n\\begin{eqnarray}\ny_{ij}=\\theta_{ij}+e_{ij}=x_{ij}^{\\prime}\\beta+v_i+e_{ij}, \\ (i=1,\\ldots,m;\\ j=1,\\ldots,n_i),\\label{NERM}\n\\end{eqnarray}\nwhere the $v_i$ and the $e_{ij}$ are mutually independent with $v_i{\\sim}N(0,\\sigma_v^2)$ and $e_{ij}{\\sim}N(0,\\sigma_e^2)$; $x_{ij}$ is a $p$-dimensional vector of known auxiliary variables; $\\beta\\in \\mathcal{R}^p$ is a $p$-dimensional vector of unknown regression coefficients; $\\psi=(\\sigma_v^2, \\sigma_e^2)^{\\prime}$ is an unknown variance component vector; and $n_i$ is the number of observed unit-level data points in the $i$-th area.\n\nTo achieve desired Property 1 of Section 3 with the shrinkage factor $B_i=\\sigma_e^2\/(n_i\\sigma_v^2+\\sigma_e^2)$, we need to solve the following differential equation under certain regularity conditions: \n\\begin{align}\n\\left[\\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\psi}\\right]^{\\prime}I_F^{-1}\\left[\\frac{\\partial B_{i}(\\psi)}{\\partial \\psi}\\right]=&H(\\psi),\n\\end{align}\nwhere $$\\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\psi}=\\left(\\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\sigma_v^2}, \\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\sigma_e^2}\\right)^{\\prime},$$ \n$$H(\\psi)=-\\frac{1}{2}tr\\left[\\frac{\\partial^2 B_{i}(\\psi)}{\\partial \\psi^2}I_F^{-1}\\right], \\ \n\\frac{\\partial B_{i}(\\psi)}{\\partial \\psi}=\\frac{n_i}{(n_i\\sigma_v^2+\\sigma_e^2)^2}(-\\sigma_e^2, \\sigma_v^2)^{\\prime},$$ \n$$I_F^{-1}=\\frac{2}{a}\\left (\n\\begin{array}{cc}\n\\sum[(n_i-1)\/\\sigma_e^4+(n_i\\sigma_v^2+\\sigma_e^2)^{-2}] & -\\sum n_i\/(n_i\\sigma_v^2+\\sigma_e^2)^2\\\\\n-\\sum n_i\/(n_i\\sigma_v^2+\\sigma_e^2)^2&\\sum n_i^2\/(n_i\\sigma_v^2+\\sigma_e^2)^2\\\\\n\\end{array}\\right),$$ \n$$a=[\\sum n_i^2\/(n_i\\sigma_v^2+\\sigma_e^2)^2][\\sum \\{(n_i-1)\/\\sigma_e^4+(n_i\\sigma_v^2+\\sigma_e^2)^{-2}\\}]-[\\sum n_i\/(n_i\\sigma_v^2+\\sigma_e^2)^2]^2.$$\n\nIf, for achieving desired Property 1, we use an adjustment factor $h_{i;G}(\\psi)$ of the form\n\\begin{align}\n\\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\psi}=v{k},\n\\end{align}\nfor a given two-dimensional fixed vector ${k}$, \nthe solution for $v$ can be obtained as $$v=\\frac{H(\\psi)}{{k}^{\\prime}I_F^{-1}\\frac{\\partial B_{i}(\\psi)}{\\partial {\\psi}}}.$$ \nThis solution thus leads to an appropriate adjustment factor satisfying $$\\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\psi}=\\frac{H(\\psi)}{{k}^{\\prime}I_F^{-1}\\frac{\\partial B_{i}(\\psi)}{\\partial {\\psi}}}{k}.$$ \nThus, there exist multiple solutions for $h_{i;G}(\\psi)$ satisfying desired Property 1 under the nested error regression model (\\ref{NERM}).\n
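\nAs a simple illustration of this multiplicity (and provided the denominator ${k}^{\\prime}I_F^{-1}\\frac{\\partial B_{i}(\\psi)}{\\partial \\psi}$ does not vanish), choosing ${k}=(1,0)^{\\prime}$ in the display above gives an adjustment factor that varies only through $\\sigma_v^2$,\n$$\\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\sigma_v^2}=\\frac{H(\\psi)}{{k}^{\\prime}I_F^{-1}\\frac{\\partial B_{i}(\\psi)}{\\partial \\psi}}, \\qquad \\frac{\\partial \\log h_{i;G}(\\psi)}{\\partial \\sigma_e^2}=0,$$ \nwhereas ${k}=(0,1)^{\\prime}$ gives one that varies only through $\\sigma_e^2$; both choices satisfy the differential equation above and hence desired Property 1, yet they correspond to genuinely different adjustment factors.\n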
\nFurther research is needed to identify a reasonable adjustment factor for the general linear mixed model and to establish a connection with the corresponding Bayesian approach.\n\n\n\n\\section*{Acknowledgements}\nThe first and second authors\\rq{} research was supported by JSPS KAKENHI Grant Number 18K12758 and U.S. National Science Foundation Grant SES-1534413, respectively. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}