diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmvym" "b/data_all_eng_slimpj/shuffled/split2/finalzzmvym" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmvym" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn art investigations often multiple imaging systems, such as visual light photography (VIS), infrared reflectography (IRR), ultraviolet fluorescence photography (UV), and x-radiography (XR), are utilized. For instance, IR is used to reveal underdrawings, UV to visualize overpaintings and restorations, and XR to highlight white lead. \nSince the multi-modal images are acquired using different imaging systems, we have to take into account different image resolutions and varying viewpoints of the devices.\nThus, for a direct comparison on pixel level, image registration is crucial to align the multi-modal images. \n\n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/flowchart}\n\t\\caption{Our multi-modal registration and visualization tool CraquelureNetApp. Users can select the images and registration options in the GUI. The registration itself is performed fully automatically using the CraquelureNet, a CNN for multi-modal keypoint detection and description, which was ported from PyTorch to C++. Due to the high-resolution images of paintings, the CraquelureNet is applied patch-wise. The registration results are depicted inside the GUI and the users can interact with the visualizations, \\emph{e.g}\\onedot} \\def\\Eg{\\emph{E.g}\\onedot joint zooming of the views. \nImage sources: Workshop Lucas Cranach the Elder, Katharina of Bora (Detail), visual light and UV-fluorescence photographs, captured by Ulrike H\u00fcgle, Stiftung Deutsches Historisches Museum, Berlin, Cranach Digital Archive, KKL-No IV.M21b, all rights reserved\n}\n\\label{fig-01:overview}\n\\end{figure} \n\nImage registration methods for art imaging can mainly be split into intensity\\mbox{-,} control point- and feature-based approaches.\nThe intensity-based method of Cappellini~\\emph{et al}\\onedot~\\cite{CappelliniV2005} uses mutual information to iteratively register multispectral images.\nThe control point- and feature-based methods compute a geometric transform based on detected correspondences in both images.\nMurashov~\\cite{MurashovD2011} detects local grayscale maxima as control points in VIS-XR image pairs and applies coherent point drift~\\cite{MyronenkoA2010}.\nConover \\emph{et al}\\onedot~\\cite{ConoverDM2015} uses Wavelet transform, phase correlation, and disparity filtering for control point detection in sub-images of VIS, IR, and XR.\nThe feature-based method of Zacharopoulos~\\emph{et al}\\onedot~\\cite{ZacharopoulosA2018} uses SIFT~\\cite{LoweDG2004} to align multispectral images of artworks.\nCraquelureNet~\\cite{SindelA2021} uses a convolutional neural network (CNN) to detect keypoints and descriptors based on branching points in the crack structure (craquelure) of the paint, as it is visible by all imaging systems, in contrast to the depicted image content that can be very different in the multi-modal images. \n\nMore and more museums or art projects provide the ability to inspect their artworks in interactive website viewers. Specifically for multi-modal images website viewer have been designed that allow a synchronized scrolling and zooming of the image views~\\cite{BoschProject,FransenB2020} or a curtain viewer~\\cite{BoschProject} that allows to interactively inspect multiple images in a single view. 
For these projects, the specific multi-modal images of artworks were pre-registered offline.\n\nFor the daily work, it would be practical for art technologists and art historians to have a tool, with which they can easily perform the registration themselves and also can interactively inspect the registered images.\nIn the field of medical 2D-2D or 3D-3D registration, there are open source tools such as MITK~\\cite{SteinD2010} that provides a graphical user interface (GUI) application for iterative rigid and affine registration and also a developer software framework. Since these software tools are very complex, domain knowledge is required to adapt the algorithms for multi-modal registration of paintings.\nThe ImageOverlayApp~\\cite{SindelA2020} is a small and easy to handle GUI application that allows the direct comparison of two registered artworks using image superimposition and blending techniques. However, here as well as in the online viewers, image registration is not provided.\n\nIn this paper, we propose a software tool for automatic multi-modal image registration and visualization of artworks. We designed a GUI to receive the user's registration settings and to provide an interactive display to show the registration results, such as superimposition and blending of the registered image pair, and both, individual and synchronized zooming and movement of the image views. As registration method, we integrate the keypoint detection and description network CraquelureNet into our application by porting it to C++. A quantitative evaluation and qualitative examples show the effective application of our software tool on our multi-modal paintings dataset and its transferability to our historical prints dataset.\n\n\\section{Methods}\nIn this section, the software tool for multi-modal registration and visualization is described.\n\n\\subsection{Overview of the Registration and Visualization Tool}\nIn Fig.~\\ref{fig-01:overview}, the main building blocks of the CraquelureNetApp are shown, consisting of the GUI, the preprocessing such as data loading and patch extraction, the actual registration method, which is split into the keypoint detection and description network CraquelureNet that was ported from PyTorch to LibTorch and into descriptor matching, homography estimation, and image warping, and the computation of visualizations of the registration results and its interactions with the user.\n \n\\subsection{Registration Method}\nCraquelureNet~\\cite{SindelA2021} is composed of a ResNet backbone and a keypoint detection and a keypoint description head. The network is trained on small $32 \\times 32 \\times 3$ sized image patches using a multi-task loss for both heads. The keypoint detection head is optimized using binary cross-entropy loss to classify the small patches into ``craquelure'' and ``background'', i.e. whether the center of the patch (a) contains a branching or sharp bend of a crack structure (``craquelure'') or (b) contains it only in the periphery or not at all (``background''). The keypoint description head is trained using bidirectional quadruplet loss~\\cite{SindelA2021} to learn cross-modal descriptors. Using online hard negative mining, for a positive keypoint pair the hardest non-matching descriptors are selected in both directions within the batch. 
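A minimal PyTorch-style sketch of this in-batch hardest-negative selection is given below; it is our own simplified, triplet-margin stand-in for the bidirectional quadruplet loss of~\\cite{SindelA2021}, and the variable names, the margin value, and the use of Euclidean descriptor distances are illustrative assumptions only.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def bidirectional_hard_negative_loss(desc_a, desc_b, margin=1.0):
    # desc_a, desc_b: (N, D) descriptors of N matching cross-modal keypoint
    # pairs (row i of desc_a corresponds to row i of desc_b)
    dist = torch.cdist(desc_a, desc_b)           # all cross-modal distances
    pos = torch.diagonal(dist)                   # distances of matching pairs
    # exclude the positives before searching for the hardest negatives
    masked = dist + torch.eye(dist.size(0), device=dist.device) * 1e9
    hardest_a2b = masked.min(dim=1).values       # hardest negative per row
    hardest_b2a = masked.min(dim=0).values       # hardest negative per column
    loss = F.relu(margin + pos - hardest_a2b) + F.relu(margin + pos - hardest_b2a)
    return loss.mean()
\\end{verbatim}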
Then, the positive distances (matching keypoint pairs) are minimized while the negative distances (hardest non-matching keypoint pairs) are maximized.\n\nFor inference, larger image input sizes can be fed to the fully convolutional network. Due to the architectural design, the prediction of the keypoint detection head and description head are of lower resolution than the input image and hence are upscaled by a factor of $4$ using bicubic interpolation for the keypoint heatmap and bilinear interpolation for the descriptors based on the extracted keypoint positions. Keypoints are extracted using non-maximum suppression and by selecting all keypoints with a confidence score higher than $\\tau_\\text{kp}$ based on the keypoint heatmap. \nSince the images can be very large, CraquelureNet is applied patch-based and the keypoints of all patches are merged and reduced to the $N_\\text{max}$ keypoints with the highest confidence score. Then, mutual nearest neighbor descriptor matching is applied and Random Sample Consensus (RANSAC)~\\cite{FischlerMA1981} is used for homography estimation~\\cite{SindelA2021}.\n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/gui}\n\t\\caption{The graphical user interface (GUI) of our CraquelureNetApp. The different elements of the GUI are marked in orange: the menu, the toolbar items for registration and visualization and the views.}\n\t\\label{fig-02:gui}\n\\end{figure} \n\nCraquelureNet~\\cite{SindelA2021} is completely implemented in Python using PyTorch and OpenCV. To transfer the trained PyTorch model to C++, we made use of the TorchScript conversion functionality and restructured the source code accordingly using a combination of TorchScript modules and Tracing. We have directly reimplemented the other parts of the registration pipeline in C++ using the same OpenCV functions as in Python.\n\nFor homography estimation, we provide the option to choose between different robust estimators of the OpenCV library: RANSAC~\\cite{FischlerMA1981} which is already used in~\\cite{SindelA2021,SindelA2022} is the default option. RANSAC is an iterative method to estimate a model, here homography, for data points that contain outliers. In each iteration, a model is estimated on a random subset of the data and is scored using all data points, \\emph{i.e}\\onedot} \\def\\Ie{\\emph{I.e}\\onedot the number of inliers for that model is computed. In the end, the best model is selected based on the largest inlier count and can optionally be refined. \nUniversal RANSAC (USAC)~\\cite{RaguramR2013} includes different RANSAC variants into a single framework, \\emph{e.g}\\onedot} \\def\\Eg{\\emph{E.g}\\onedot different sampling strategies, or extend the verification step by also checking for degeneracy. 
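In OpenCV, the mutual nearest neighbor matching and the robust homography fit described above can be written in a few lines; the following Python sketch is our own illustration (function and variable names are hypothetical, and it assumes the per-patch keypoints have already been merged with their patch offsets and reduced to the top-scoring ones):
\\begin{verbatim}
import cv2
import numpy as np

def match_and_estimate(kp_ref, desc_ref, kp_mov, desc_mov,
                       method=cv2.RANSAC, reproj_thresh=5.0):
    # kp_*: (N, 2) keypoint coordinates, desc_*: (N, D) float32 descriptors
    # crossCheck=True keeps only mutual nearest-neighbour matches
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_mov, desc_ref)
    src = np.float32([kp_mov[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx] for m in matches]).reshape(-1, 1, 2)
    # method may also be cv2.USAC_DEFAULT, cv2.USAC_ACCURATE or cv2.USAC_MAGSAC
    # (available from OpenCV 4.5 on)
    H, inliers = cv2.findHomography(src, dst, method, reproj_thresh)
    return H, matches, inliers
\\end{verbatim}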
We use the USAC default method of OpenCV 4.5.5 that applies iterative local optimization to the so far best models~\\cite{ChumO2003}.\nFurther, we enable two RANSAC variants which are also included in the OpenCV USAC framework:\nGraph-Cut RANSAC (GC-RANSAC)~\\cite{BarathD2018} uses the graph-cut algorithm for the local optimization and MAGSAC++~\\cite{BarathD2020} applies a scoring function that does not require a threshold and a novel marginalization method.\n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/menu_dialogs}\n\t\\caption{The menu and dialogs to configure registration and save options.}\n\t\\label{fig-03:menu_dialogs}\n\\end{figure} \n\n\\subsection{Graphical User Interface}\nThe GUI is implemented in C++ as a Qt5 desktop application. In Fig.~\\ref{fig-02:gui} the main GUI elements are marked in orange. The user can interact with the GUI using the menu or the toolbar that provides items specified for the registration and visualization task or can also directly interact with the views. The size of the views depends on the size of the application window.\n\n\\subsubsection{User interaction prior to registration:} To perform registration, two images have to be selected. They can either be chosen via the menu (Fig.~\\ref{fig-03:menu_dialogs}a) using a file opener or per drag and drop of the image file to the image view. The selected images are visualized in the two image views in the top row. \nPer click on the configure button in the menu (Fig.~\\ref{fig-03:menu_dialogs}b) or on the registration toolbar (``gear'' icon), a user dialog (Fig.~\\ref{fig-03:menu_dialogs}c) is opened to choose the registration options, such as patch size, number of maximum keypoints $N_\\text{max}$, image input size, method for homography estimation (RANSAC, USAC, GC-RANSAC, or MAGSAC++), whether to run on the GPU (using CUDA) or on the CPU and whether to visualize keypoint matches.\nCustom settings of patch size, input image size, and maximum number of keypoints are also possible when the predefined selections are not sufficient. The settings are saved in a configuration file to remember the user's choice. It is always possible to restore the default settings. The registration is started by clicking the run button in the menu (Fig.~\\ref{fig-03:menu_dialogs}b) or in the toolbar (``triangle'' icon).\n\n\\subsubsection{Visualizations and user interaction:} We include two different superimposition techniques to visually compare the registration results, a false-color image overlay (red-cyan) of the registered pair and a blended image overlay, similarly to~\\cite{SindelA2020}. After registration, the views in the GUI are updated with the transformed moving image (view 2) and the image fusions in the bottom row (view 3 and 4). The red-cyan overlay (view 4), stacks the reference image (view 1) into the red and the transformed moving image (view 2) into the green and blue channel. The blended image (view 3) is initially shown with alpha value 0.5. By moving the slider in the visualization toolbar, the user can interactively blend between both registered images. For the optional visualization of the keypoint matches, a separate window is opened that shows both original images with superimposed keypoints as small blue circles and matches as yellow lines using OpenCV.\n\nAdditionally to the image overlays, we have implemented a synchronization feature of the views that enriches the comparison. It can be activated by the ``connect views'' icon in the visualization toolbar. 
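The two fusion views described above reduce to simple channel operations on the registered images; the following NumPy sketch is only an illustration with hypothetical names (the application itself implements this in Qt/C++), assuming 8-bit grayscale inputs of identical size and RGB channel order.
\\begin{verbatim}
import numpy as np

def fuse_views(ref, mov_reg, alpha=0.5):
    # false-colour overlay: reference -> red, registered moving image -> green/blue
    red_cyan = np.dstack([ref, mov_reg, mov_reg])
    # blended overlay; alpha corresponds to the slider value in [0, 1]
    blended = (alpha * ref + (1.0 - alpha) * mov_reg).astype(np.uint8)
    return red_cyan, blended
\\end{verbatim}
The synchronization feature complements these static overlays.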
All interactions with one view are propagated to the other views. Using the mouse wheel the view is zoomed in or out with the focus at the current mouse position. Using the arrow keys the image view can be shifted in all directions. By activating the ``hand mouse drag'' item in the toolbar, the view can be shifted around arbitrarily. By pressing the ``maximize'' icon in the toolbar, the complete image is fitted into the view area preserving aspect ratio. This can also be useful after asynchronous movement of single views to reset them to a common basis. \n\nTo save the registration results to disk, the user can click the ``save'' button in the menu (Fig.~\\ref{fig-03:menu_dialogs}a) to choose the files to save in a dialog (Fig.~\\ref{fig-03:menu_dialogs}d) or the ``save all'' button to save them all. \n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/vis_irr}\n\t\\caption{VIS-IRR registration of paintings using CraquelureNetApp. In (a) the keypoint matches between the reference image (VIS) and moving image (IRR) are visualized as yellow lines. In (b) the registered image pair and the blended overlay and the false color overlay are depicted as complete images and in (c) they are synchronously zoomed in.\n\tImage sources: Meister des Marienlebens, Tempelgang Mari\\\"a, visual light photograph and infrared reflectogram, Germanisches Nationalmuseum, Nuremberg, on loan from Wittelsbacher Ausgleichsfonds\/Bayerische Staats\\-gem\\\"alde\\-samm\\-lungen, \\mbox{Gm 19}, all rights reserved}\n\t\\label{fig-04:vis_irr}\n\\end{figure} \n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/vis_uv}\n\t\\caption{VIS-UV registration of paintings using CraquelureNetApp. In (a) the keypoint matches of VIS and UV are densely concentrated at the craquelure in the facial region. (b) and (c) show the registered pair and image fusions of the complete images and synchronized details.\t\n\tImage sources: Workshop Lucas Cranach the Elder or Circle, Martin Luther, visual light photograph, captured by Gunnar Heydenreich and UV-fluorescence photograph, captured by Wibke Ottweiler, Lutherhaus Wittenberg, Cranach Digital Archive, KKL-No I.6M3, all rights reserved} \n\t\\label{fig-05:vis_uv}\n\\end{figure} \n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/vis_xr}\n\t\\caption{VIS-XR registration of paintings using CraquelureNetApp. In (a) the keypoint matches of VIS and XR are mostly concentrated at the craquelure in the lighter image area such as the face, hands, and green background. (b) and (c) show the qualitative registrations results as overall view and as synchronized zoomed details.\n\tImage sources: Lucas Cranach the Elder, Martin Luther as ``Junker J\u00f6rg'', visual light photograph Klassik Stiftung Weimar, Museum, X-radiograph HfBK Dresden (Mohrmann, Ri\u00dfe), Cranach Digital Archive, KKL-No II.M2, all rights reserved}\n\t\\label{fig-06:vis_xr}\n\\end{figure} \n\n\\begin{table}[t]\n\\centering\n\\caption{Quantitative evaluation for multi-modal registration of paintings for the VIS-IRR, VIS-UV, and VIS-XR test datasets (each 13 image pairs) using CraquelureNet with different robust homography estimation methods. Registration results are evaluated with the success rates (SR) of mean Euclidean error (ME) and maximum Euclidean error (MAE) of the control points for different error thresholds $\\epsilon$. 
Best results are highlighted in bold.} \n\\label{tab-01}\n\\scriptsize\n\\begin{tabular*}{\\textwidth}{l @{\\extracolsep{\\fill}} ll rrr rrr}\n\\toprule\nMulti-modal & CraquelureNet & Homography & \\multicolumn{3}{c}{SR of ME [\\%] $\\uparrow$} & \\multicolumn{3}{c}{SR of MAE [\\%] $\\uparrow$}\\\\\nDataset & Model & Method & $\\epsilon=3$ & $\\epsilon=5 $ & $\\epsilon=7$ & $\\epsilon=6$ & $\\epsilon=8$ & $\\epsilon=10$ \\\\ \n\\midrule\n & PyTorch (Python) & RANSAC & 84.6 & \\textbf{100.0} & \\textbf{100.0} & 38.5 & \\textbf{69.2} & \\textbf{84.6} \\\\ \n & LibTorch (C++) & RANSAC & \\textbf{92.3} & 92.3 & \\textbf{100.0} & 38.5 & \\textbf{69.2} & \\textbf{84.6} \\\\ \nVIS-IRR & LibTorch (C++) & USAC & \\textbf{92.3} & 92.3 & 92.3 & \\textbf{46.2} & \\textbf{69.2} & \\textbf{84.6} \\\\ \n & LibTorch (C++) & GC-RANSAC & \\textbf{92.3} & 92.3 & 92.3 & \\textbf{46.2} & \\textbf{69.2} & \\textbf{84.6} \\\\ \n & LibTorch (C++) & MAGSAC++ & \\textbf{92.3} & 92.3 & 92.3 & \\textbf{46.2} & \\textbf{69.2} & \\textbf{84.6} \\\\ \n\\midrule\n& PyTorch (Python) & RANSAC & \\textbf{92.3} & \\textbf{100.0} & \\textbf{100.0} & 46.2 & 53.8 & 61.5 \\\\ \n & LibTorch (C++) & RANSAC & 84.6 & 92.3 & \\textbf{100.0} & \\textbf{53.8} & 53.8 & 53.8 \\\\ \nVIS-UV & LibTorch (C++) & USAC & \\textbf{92.3} & 92.3 & 92.3 & \\textbf{53.8} & \\textbf{69.2} & \\textbf{69.2} \\\\ \n & LibTorch (C++) & GC-RANSAC & \\textbf{92.3} & 92.3 & 92.3 & \\textbf{53.8} & \\textbf{69.2} & \\textbf{69.2} \\\\ \n & LibTorch (C++) & MAGSAC++ & 84.6 & 92.3 & 92.3 & 46.2 & 61.5 & \\textbf{69.2} \\\\ \n\\midrule\n & PyTorch (Python) & RANSAC & 69.2 & \\textbf{84.6} & 84.6 & 23.1 & 38.5 & 61.5 \\\\ \n & LibTorch (C++) & RANSAC & 53.8 & \\textbf{84.6} & \\textbf{92.3} & \\textbf{30.8} & 46.2 & 61.5 \\\\ \nVIS-XR & LibTorch (C++) & USAC & 76.9 & \\textbf{84.6} & 84.6 & 23.1 & 38.5 & \\textbf{76.9} \\\\ \n & LibTorch (C++) & GC-RANSAC & 76.9 & \\textbf{84.6} & 84.6 & 23.1 & 38.5 & \\textbf{76.9} \\\\ \n & LibTorch (C++) & MAGSAC++ & \\textbf{84.6} & \\textbf{84.6} & 84.6 & 15.4 & \\textbf{53.8} & \\textbf{76.9} \\\\ \n\\bottomrule \n\\end{tabular*}\n\\end{table}\n\n\\begin{figure}[t]\t\t\t\t\t\t\t\t \n\\centering\t\n\\begin{tikzpicture}\n\t\\begin{customlegend}[\n\t\t\tlegend entries={SIFT,D2-Net,SuperPoint,CraquelureNet (C++)},\n\t\t\tlegend style={\n\t\t\t\t\/tikz\/every even column\/.append\tstyle={column sep=.75cm}},\n\t\t\tlegend columns=-1,\n\t\t\tlegend cell align=left]\n\t\t\\csname pgfplots@addlegendimage\\endcsname{oorange,mark=square*}\n\t\t\\csname pgfplots@addlegendimage\\endcsname{red,mark=square*}\t\t\n\t\t\\csname pgfplots@addlegendimage\\endcsname{blue,mark=square*}\n\t\t\\csname pgfplots@addlegendimage\\endcsname{cyan,mark=square*}\t\t\t\t\n\t\\end{customlegend}\n\\end{tikzpicture}\n\n\\subcaptionbox{VIS-IRR\\label{fig-07:paint_VIS-IRR_me}}{\n\\begin{tikzpicture}\n \\begin{axis}[\n width = 0.33\\textwidth,\n height = 3cm,\n major x tick style = transparent,\n ybar,\n bar width=0.15cm,\n \t\tx=1.3cm, \n \t\tenlarge x limits={abs=0.7cm},\n \t\tenlarge y limits=false,\n ymajorgrids = true,\n ylabel = {SR of ME [\\%]},\n \tymin=0,\n ymax=100,\n symbolic x coords= {$\\epsilon=3$,$\\epsilon=7$},\n xtick = data,\n nodes near coords={\\pgfmathprintnumber[fixed zerofill,precision=2]\\pgfplotspointmeta},\n every node near coord\/.append style={font=\\tiny, xshift = 0 , rotate=90, anchor=west},\n nodes near coords align = {center},\n ]\n \\addplot [style={oorange,fill=oorange,mark=none}]\n table[x=Metric, y=SIFT] \n 
{tables\/paintings_VIS-IRR_success_rates_me.dat}; \n \\addplot [style={red,fill=red,mark=none}]\n table[x=Metric, y=D2-Net] \n {tables\/paintings_VIS-IRR_success_rates_me.dat}; \n \\addplot [style={blue,fill=blue,mark=none}]\n table[x=Metric, y=SuperPoint] \n {tables\/paintings_VIS-IRR_success_rates_me.dat}; \n \\addplot [style={cyan,fill=cyan,mark=none}]\n table[x=Metric, y=CraquelureNetC++] \n {tables\/paintings_VIS-IRR_success_rates_me.dat}; \n\t\\end{axis}\n\\end{tikzpicture}\t\n\t}\n\\subcaptionbox{VIS-UV\\label{fig-07:paint_VIS-UV_me}}{\n\\begin{tikzpicture}\n \\begin{axis}[\n width = 0.33\\textwidth,\n height = 3cm,\n major x tick style = transparent,\n ybar,\n bar width=0.15cm,\n \t\tx=1.3cm, \n \t\tenlarge x limits={abs=0.7cm},\n \t\tenlarge y limits=false,\n ymajorgrids = true,\n \tymin=0,\n ymax=100,\n symbolic x coords= {$\\epsilon=3$,$\\epsilon=7$},\n xtick = data,\n nodes near coords={\\pgfmathprintnumber[fixed zerofill,precision=2]\\pgfplotspointmeta},\n every node near coord\/.append style={font=\\tiny, xshift = 0 , rotate=90, anchor=west},\n nodes near coords align = {center},\n ]\n \\addplot [style={oorange,fill=oorange,mark=none}]\n table[x=Metric, y=SIFT] \n {tables\/paintings_VIS-UV_success_rates_me.dat}; \n \\addplot [style={red,fill=red,mark=none}]\n table[x=Metric, y=D2-Net] \n {tables\/paintings_VIS-UV_success_rates_me.dat}; \n \\addplot [style={blue,fill=blue,mark=none}]\n table[x=Metric, y=SuperPoint] \n {tables\/paintings_VIS-UV_success_rates_me.dat}; \n \\addplot [style={cyan,fill=cyan,mark=none}]\n table[x=Metric, y=CraquelureNetC++] \n {tables\/paintings_VIS-UV_success_rates_me.dat}; \n\t\\end{axis}\n\\end{tikzpicture}\t\n\t}\n\\subcaptionbox{VIS-XR\\label{fig-07:paint_VIS-XR_me}}{\n\\begin{tikzpicture}\n \\begin{axis}[\n width = 0.33\\textwidth,\n height = 3cm,\n major x tick style = transparent,\n ybar,\n bar width=0.15cm,\n \t\tx=1.3cm, \n \t\tenlarge x limits={abs=0.7cm},\n \t\tenlarge y limits=false,\n ymajorgrids = true,\n \tymin=0,\n ymax=100,\n symbolic x coords= {$\\epsilon=3$,$\\epsilon=7$},\n xtick = data,\n nodes near coords={\\pgfmathprintnumber[fixed zerofill,precision=2]\\pgfplotspointmeta},\n every node near coord\/.append style={font=\\tiny, xshift = 0 , rotate=90, anchor=west},\n nodes near coords align = {center},\n ]\n \\addplot [style={oorange,fill=oorange,mark=none}]\n table[x=Metric, y=SIFT] \n {tables\/paintings_VIS-XR_success_rates_me.dat}; \n \\addplot [style={red,fill=red,mark=none}]\n table[x=Metric, y=D2-Net] \n {tables\/paintings_VIS-XR_success_rates_me.dat}; \n \\addplot [style={blue,fill=blue,mark=none}]\n table[x=Metric, y=SuperPoint] \n {tables\/paintings_VIS-XR_success_rates_me.dat}; \n \\addplot [style={cyan,fill=cyan,mark=none}]\n table[x=Metric, y=CraquelureNetC++] \n {tables\/paintings_VIS-XR_success_rates_me.dat}; \n\t\\end{axis}\n\\end{tikzpicture}\t\t\n\t}\t\n\t\n\\subcaptionbox{VIS-IRR\\label{fig-07:paint_VIS-IRR_mae}}{\n\\begin{tikzpicture}\n \\begin{axis}[\n width = 0.33\\textwidth,\n height = 3cm,\n major x tick style = transparent,\n ybar,\n bar width=0.15cm,\n \t\tx=1.3cm, \n \t\tenlarge x limits={abs=0.7cm},\n \t\tenlarge y limits=false,\n ymajorgrids = true,\n \tymin=0,\n ymax=100,\n ylabel = {SR of MAE [\\%]},\n symbolic x coords= {$\\epsilon=6$,$\\epsilon=10$},\n xtick = data,\n nodes near coords={\\pgfmathprintnumber[fixed zerofill,precision=2]\\pgfplotspointmeta},\n every node near coord\/.append style={font=\\tiny, xshift = 0 , rotate=90, anchor=west},\n nodes near coords align = {center},\n ]\n \\addplot 
[style={oorange,fill=oorange,mark=none}]\n table[x=Metric, y=SIFT] \n {tables\/paintings_VIS-IRR_success_rates_mae.dat}; \n \\addplot [style={red,fill=red,mark=none}]\n table[x=Metric, y=D2-Net] \n {tables\/paintings_VIS-IRR_success_rates_mae.dat}; \n \\addplot [style={blue,fill=blue,mark=none}]\n table[x=Metric, y=SuperPoint] \n {tables\/paintings_VIS-IRR_success_rates_mae.dat}; \n \\addplot [style={cyan,fill=cyan,mark=none}]\n table[x=Metric, y=CraquelureNetC++] \n {tables\/paintings_VIS-IRR_success_rates_mae.dat}; \n\t\\end{axis}\n\\end{tikzpicture}\t\n\t}\n\\subcaptionbox{VIS-UV\\label{fig-07:paint_VIS-UV_mae}}{\n\\begin{tikzpicture}\n \\begin{axis}[\n width = 0.33\\textwidth,\n height = 3cm,\n major x tick style = transparent,\n ybar,\n bar width=0.15cm,\n \t\tx=1.3cm, \n \t\tenlarge x limits={abs=0.7cm},\n \t\tenlarge y limits=false,\n ymajorgrids = true,\n \tymin=0,\n ymax=100,\n symbolic x coords= {$\\epsilon=6$,$\\epsilon=10$},\n xtick = data,\n nodes near coords={\\pgfmathprintnumber[fixed zerofill,precision=2]\\pgfplotspointmeta},\n every node near coord\/.append style={font=\\tiny, xshift = 0 , rotate=90, anchor=west},\n nodes near coords align = {center},\n ]\n \\addplot [style={oorange,fill=oorange,mark=none}]\n table[x=Metric, y=SIFT] \n {tables\/paintings_VIS-UV_success_rates_mae.dat}; \n \\addplot [style={red,fill=red,mark=none}]\n table[x=Metric, y=D2-Net] \n {tables\/paintings_VIS-UV_success_rates_mae.dat}; \n \\addplot [style={blue,fill=blue,mark=none}]\n table[x=Metric, y=SuperPoint] \n {tables\/paintings_VIS-UV_success_rates_mae.dat}; \n \\addplot [style={cyan,fill=cyan,mark=none}]\n table[x=Metric, y=CraquelureNetC++] \n {tables\/paintings_VIS-UV_success_rates_mae.dat}; \n\t\\end{axis}\n\\end{tikzpicture}\t\n\t}\n\\subcaptionbox{VIS-XR\\label{fig-07:paint_VIS-XR_mae}}{\n\\begin{tikzpicture}\n \\begin{axis}[\n width = 0.33\\textwidth,\n height = 3cm,\n major x tick style = transparent,\n ybar,\n bar width=0.15cm,\n \t\tx=1.3cm, \n \t\tenlarge x limits={abs=0.7cm},\n \t\tenlarge y limits=false,\n ymajorgrids = true,\n \tymin=0,\n ymax=100,\n symbolic x coords= {$\\epsilon=6$,$\\epsilon=10$},\n xtick = data,\n nodes near coords={\\pgfmathprintnumber[fixed zerofill,precision=2]\\pgfplotspointmeta},\n every node near coord\/.append style={font=\\tiny, xshift = 0 , rotate=90, anchor=west},\n nodes near coords align = {center},\n ]\n \\addplot [style={oorange,fill=oorange,mark=none}]\n table[x=Metric, y=SIFT] \n {tables\/paintings_VIS-XR_success_rates_mae.dat}; \n \\addplot [style={red,fill=red,mark=none}]\n table[x=Metric, y=D2-Net] \n {tables\/paintings_VIS-XR_success_rates_mae.dat}; \n \\addplot [style={blue,fill=blue,mark=none}]\n table[x=Metric, y=SuperPoint] \n {tables\/paintings_VIS-XR_success_rates_mae.dat}; \n \\addplot [style={cyan,fill=cyan,mark=none}]\n table[x=Metric, y=CraquelureNetC++] \n {tables\/paintings_VIS-XR_success_rates_mae.dat}; \n\t\\end{axis}\n\\end{tikzpicture}\t\t\n\t}\n\\caption{Quantitative evaluation for our multi-modal paintings dataset using SR of ME with $\\epsilon=3,7$ and SR of MAE with $\\epsilon=6,10$.}\n\\label{fig-07:eval_paintings_sota}\n\\end{figure} \n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=\\textwidth]{graphics\/print}\n\t\\caption{VIS-VIS registration of prints using CraquelureNetApp. CraquelureNet detects a high number of good matches for the two prints in (a), although it did not see images of prints during training. 
(b) and (c) show the qualitative registration results as overall and detail views.\n\tImage sources: Monogramist IB (Georg Pencz), Philipp Melanchthon with beret and cloak, (left) Germanisches Nationalmuseum, Nuremberg, K 21148, captured by Thomas Klinke and (right) Klassik Stiftung Weimar, Museen, Cranach Digital Archive, DE{\\_}KSW{\\_}Gr-2008-1858, all rights reserved}\n\t\\label{fig-08:print}\n\\end{figure} \n\n\n\\section{Applications and Evaluation}\nIn this section, we apply the CraquelureNetApp to multi-modal images of paintings and to images of different prints of the same motif. We use the test parameters as defined in~\\cite{SindelA2021}: GPU, patch size of $1024$, $N_\\text{max}=8000$, $\\tau_\\text{kp}=0$, resize to same width, and RANSAC with a reprojection error threshold $\\tau_\\text{reproj}=5$.\n\nFor the quantitative evaluation, we measure the registration performance based on the success rate of successfully registered images. The success rate is computed by calculating the percentage of image pairs for which the error distance of manual labeled control points of the registered pair using the predicted homography is less or equal to an error threshold $\\epsilon$.\nAs metric for the error distance, we use the mean Euclidean error (ME) and maximum Euclidean error (MAE)~\\cite{SindelA2022}.\n\nAs comparison methods, we use the conventional keypoint and feature descriptor SIFT~\\cite{LoweDG2004} and the pretrained models of the two deep learning methods, SuperPoint~\\cite{DeToneD2018} and D2-Net~\\cite{DusmanuM2019}, which we apply patch-based to both the paintings and prints. SuperPoint is a CNN with a keypoint detection head and a keypoint description head and is trained in a self-supervised manner by using homographic warpings. D2-Net simultaneously learns keypoint detection and description with one feature extraction CNN, where keypoints and descriptors are extracted from the same set of feature maps.\nFor all methods, we use the same test settings as for our method (patch size, $N_\\text{max}$, RANSAC). \n\n\\subsection{Multi-modal Registration of Historical Paintings}\nThe pretrained CraquelureNet~\\cite{SindelA2021}, which we embedded into our registration tool, was trained using small patches extracted from multi-modal images of 16th century portraits by the workshop of Lucas Cranach the Elder and large German panel paintings from the 15th to 16th century. We use the images of the test split from these multi-modal datasets (13 pairs per domain: VIS-IRR, VIS-UV, VIS-XR) and the corresponding manually labeled control point pairs (40 point pairs per image pair)~\\cite{SindelA2021} to test the CraquelureNetApp. \n\nThe qualitative results of three examples (VIS-IRR, VIS-UV and VIS-XR) are shown in~\\cref{fig-04:vis_irr,fig-05:vis_uv,fig-06:vis_xr}. For each example, the keypoint matches, the complete images (Fit to view option in the GUI), and a zoomed detail view in the synchronization mode are depicted showing good visual registration performance for all three multi-modal pairs. \n\nSecondly, we compare in \\cref{tab-01} the success rates of ME and MAE of the registration for the C++ and PyTorch implementation of CraquelureNet using RANSAC for homography estimation or alternatively using USAC, GC-RANSAC, or MAGSAC++. 
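The ME/MAE success rates used throughout this evaluation are computed from the manually labelled control point pairs; a minimal sketch of this computation (with hypothetical helper names) is:
\\begin{verbatim}
import cv2
import numpy as np

def control_point_errors(H, pts_mov, pts_ref):
    # pts_mov, pts_ref: (K, 2) labelled control points of one image pair,
    # H: predicted homography mapping the moving onto the reference image
    warped = cv2.perspectiveTransform(
        pts_mov.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
    d = np.linalg.norm(warped - pts_ref, axis=1)
    return d.mean(), d.max()          # ME and MAE of this image pair

def success_rate(per_pair_errors, eps):
    # percentage of image pairs registered with an error of at most eps
    e = np.asarray(per_pair_errors)
    return 100.0 * np.mean(e <= eps)
\\end{verbatim}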
\nThe C++ implementation using RANSAC achieves comparable results as the PyTorch implementation (both $\\tau_\\text{reproj}=5$): For VIS-IRR and VIS-UV, all images are registered successfully using ME with an error threshold $\\epsilon=5$ (PyTorch) and $\\epsilon=7$ (C++). For VIS-XR, $12$ (C++) or $11$ (PyTorch) out of $13$ images were successfully registered using ME ($\\epsilon=7$). \nThe small deviations between the two models are due to slightly different implementations \\emph{e.g}\\onedot} \\def\\Eg{\\emph{E.g}\\onedot of non-maximum suppression that were necessary for the conversion to C++.\nUsing the more recent USAC, GC-RANSAC, or MAGSAC++ is slightly less robust for VIS-IRR and VIS-UV, since not more than $12$ out of $13$ image pairs can be registered using ME ($\\epsilon=7$) as one image pair fails completely. VIS-XR registration is the most difficult part of the three domain pairs due to the visually highly different appearance of the VIS and XR images. Here, we can observe similar performance of ME, with a slightly higher percentage of USAC, GC-RANSAC, and especially MAGSAC++ for $\\epsilon=3$, but for $\\epsilon=7$ those are on par with the PyTorch model and are slightly inferior to the C++ model using RANSAC.\nRegarding the success rates of MAE, we observe a bit higher values for USAC, GC-RANSAC, and MAGSAC++ than for both RANSAC models for VIS-UV and VIS-XR ($\\epsilon=\\{8,10\\}$), while for VIS-IRR they are the same.\n\nIn \\cref{fig-07:eval_paintings_sota}, the multi-modal registration performance of our C++ CraquelureNet (RANSAC) is measured in comparison to SIFT~\\cite{LoweDG2004}, SuperPoint~\\cite{DeToneD2018}, and D2-Net~\\cite{DusmanuM2019}. CraquelureNet achieves the highest success rates of ME and MAE for all multi-modal pairs. D2-Net is relatively close to CraquelureNet for VIS-IRR with the same SR of ME but with a lower SR of MAE. For VIS-IRR and VIS-UV, all learning-based methods are clearly better than SIFT. For the challenging VIS-XR domain, the advantage of our cross-modal keypoint detector and descriptor is most distinct, since for $\\epsilon=7$, CraquelureNet still achieves high success rates of $92.3$\\,\\% for ME and $61.5$\\,\\% for MAE, whereas for $\\epsilon=7$, D2-Net only successfully registers $61.5$\\,\\% for ME and $15.3$\\,\\% for MAE, SuperPoint only $23$\\,\\% for ME and none for MAE, and lastly, SIFT does not register any VIS-XR image pair successfully.\nD2-Net was developed to find correspondences in difficult image conditions, such as day-to-night or depiction changes~\\cite{DusmanuM2019}, hence it also is able to detect to some extend matching keypoint pairs in VIS-XR images. SuperPoint's focus is more on images with challenging viewpoints~\\cite{DeToneD2018,DusmanuM2019}, thus the pretrained model results in a lower VIS-XR registration performance. In our prior work~\\cite{SindelA2021}, we fine-tuned SuperPoint using image patches extracted from our manually aligned multi-modal paintings dataset, which did not result in an overall improvement. On the other hand, CraquelureNet is robust for the registration of all multi-modal pairs and does not require an intensive training procedure, as it is trained in efficient time only using very small image patches. 
\n\nThe execution time of CraquelureNet (C++ and PyTorch) for the VIS-IRR registration (including loading of network and images, patch extraction of size $1024 \\times 1024 \\times 3$ pixels, network inference, homography estimation using RANSAC and image warping) was about 11\\,s per image pair on the GPU and about 2\\,min per image pair on the CPU. We used an Intel Xeon W-2125 CPU 4.00 GHz with 64 GB RAM and one NVIDIA Titan XP GPU for the measurements. \nThe comparable execution times of the C++ and PyTorch implementation of CraquelureNet (the restructured one for a fair comparison) is not surprising, as PyTorch and OpenCV are wrappers around C++ functions.\nThe relatively fast inference time for the registration of high-resolution images makes CraquelureNet suitable to be integrated into the registration GUI and to be used by art technologists and art historians for their daily work.\n\n\\subsection{Registration of Historical Prints}\nTo evaluate the registration performance for prints, we have created a test dataset of in total $52$ images of historical prints with manually labeled control point pairs. The dataset is composed of $13$ different motifs of 16th century prints with each four exemplars that may show wear or production-related differences in the print. For each motif a reference image was selected and the other three copies will be registered to the reference, resulting in $39$ registration pairs. For each image pair, $10$ control point pairs were manually annotated.\n\nWe test the CraquelureNet, that was solely trained on paintings, for the 16th century prints. One qualitative example is shown in~\\cref{fig-08:print}. \nCraquelureNet also finds a high number of good matches in this engraving pair, due to the multitude of tiny lines and branchings in the print as CraquelureNet is mainly focusing on these branching points.\n\nFor the quantitative evaluation, we compute the success rates of ME and MAE for the registration of the print dataset using our CraquelureNet C++ model in comparison to using SIFT~\\cite{LoweDG2004}, SuperPoint~\\cite{DeToneD2018}, and D2-Net~\\cite{DusmanuM2019}.\nFor the registration, the images are scaled to a fixed height of $2000$ pixel, as then the structures in the prints have a suitable size for the feature detectors. In the GUI of CraquelureNetApp this option is ``resize to custom height''. The results of the comparison are depicted in~\\cref{fig-09:prints_SR}. The CNN-based methods obtain clearly superior results to SIFT for both metrics. Overall, CraquelureNet and SuperPoint show the best results, as both achieve a success rate of ME of $100$\\,\\% at error threshold $\\epsilon=4$, where they are closely followed by D2-Net at $\\epsilon=5$ and all three methods achieve a success rate of MAE close to $100$\\,\\% at $\\epsilon=10$. For smaller error thresholds, CraquelureNet is slightly superior for MAE and SuperPoint for ME. As none of the methods was fine-tuned for the print dataset, this experiment shows the successful possibility of applying the models to this new dataset. 
\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}\n\t\\begin{customlegend}[\n\t\t\tlegend entries={SIFT,D2-Net,SuperPoint,CraquelureNet (C++)},\n\t\t\tlegend style={\n\t\t\t\t\/tikz\/every even column\/.append\tstyle={column sep=.75cm}},\n\t\t\tlegend columns=-1,\n\t\t\tlegend cell align=left]\n\t\t\\csname pgfplots@addlegendimage\\endcsname{oorange,mark=square*}\n\t\t\\csname pgfplots@addlegendimage\\endcsname{red,mark=*}\t\t\n\t\t\\csname pgfplots@addlegendimage\\endcsname{blue,mark=triangle*}\n\t\t\\csname pgfplots@addlegendimage\\endcsname{cyan,mark=star}\t\t\t\t\n\t\\end{customlegend}\n\\end{tikzpicture}\n\n\\subcaptionbox{~\\label{fig-09:prints_SR_ME}}{\n\\begin{tikzpicture}\n\\begin{axis} [\n\t\t\tscale = 0.6, \t\t\t\n\t\t\txmajorgrids,\n\t\t\tymajorgrids,\n\t\t\tscaled ticks=false,\n\t\t\tylabel = {SR of ME [\\%]},\n\t\t\txlabel = Error threshold,\n\t\t\txtick ={1,2,3,4,5,6},\n\t\t\tymax = 100,\t\n\t\t\tymin = 0\t\n\t\t\t]\n\t\\addplot [oorange,mark=square*]\n table[x=Thresh, y=SIFT] \n {tables\/prints_success_rates_me.dat}; \t\n\t\\addplot [red,mark=*]\n table[x=Thresh, y=D2-Net] \n {tables\/prints_success_rates_me.dat}; \n\t\\addplot [blue,mark=triangle*]\n table[x=Thresh, y=SuperPoint] \n {tables\/prints_success_rates_me.dat};\n\t\\addplot [cyan,mark=star]\n table[x=Thresh, y=CraquelureNet] \n {tables\/prints_success_rates_me.dat}; \n\\end{axis}\n\\end{tikzpicture}\n}\n\\hfill\n\\subcaptionbox{~\\label{fig-09:prints_SR_MAE}}{\n\\begin{tikzpicture}\n\\begin{axis} [\n\t\t\tscale = 0.6, \t\t\t\n\t\t\txmajorgrids,\n\t\t\tymajorgrids,\n\t\t\tscaled ticks=false,\n\t\t\tylabel = {SR of MAE [\\%]},\n\t\t\txlabel = Error threshold,\n\t\t\txtick ={5,6,7,8,9,10},\n\t\t\tymax = 100,\t\n\t\t\tymin = 0\t\n\t\t\t]\n\t\\addplot [oorange,mark=square*]\n table[x=Thresh, y=SIFT] \n {tables\/prints_success_rates_mae.dat}; \t\n\t\\addplot [red,mark=*]\n table[x=Thresh, y=D2-Net] \n {tables\/prints_success_rates_mae.dat}; \n\t\\addplot [blue,mark=triangle*]\n table[x=Thresh, y=SuperPoint] \n {tables\/prints_success_rates_mae.dat};\n\t\\addplot [cyan,mark=star]\n table[x=Thresh, y=CraquelureNet] \n {tables\/prints_success_rates_mae.dat}; \n\\end{axis}\n\\end{tikzpicture}\n}\n\\caption{Quantitative comparison of success rates for registration of prints ($39$ image pairs). For all methods RANSAC with $\\tau_\\text{reproj}=5$ was used. None of the methods was fine-tuned for the print dataset.\nIn (\\subref{fig-09:prints_SR_ME}) the success rate of mean Euclidean error (ME) for the error thresholds $\\epsilon=\\{1,2,...,6\\}$ and in (\\subref{fig-09:prints_SR_MAE}) the success rate of maximum Euclidean error (MAE) for the error thresholds $\\epsilon=\\{5,6,...,10\\}$ is plotted.} \n\\label{fig-09:prints_SR}\n\\end{figure}\n\n\\section{Conclusion}\nWe presented an interactive registration and visualization tool for multi-modal paintings and also applied it to historical prints. The registration is performed fully automatically using CraquelureNet. The user can choose the registration settings and can interact with the visualizations of the registration results. In the future, we could extend the application by including trained models of CraquelureNet on other datasets, such as the RetinaCraquelureNet~\\cite{SindelA2022} for multi-modal retinal registration. 
A further possible extension would be to add a batch processing functionality to the GUI to register a folder of image pairs.\n\n\\subsubsection{Acknowledgements} \nThanks to Daniel Hess, Oliver Mack, Daniel G\\\"orres, Wib\\-ke Ottweiler, Germanisches Nationalmuseum (GNM), and Gunnar Heydenreich, Cranach Digital Archive (CDA), and Thomas Klinke, \\mbox{TH K\\\"oln}, and Amalie H\\\"ansch, FAU Erlangen-N\\\"urnberg for providing image data, and to Leibniz Society for funding the research project ``Critical Catalogue of Luther portraits (1519 - 1530)'' with grant agreement No. SAW-2018-GNM-3-KKLB, to the European Union's Horizon 2020 research and innovation programme within the Odeuropa project under grant agreement No. 101004469 for funding this publication, and to NVIDIA for their GPU hardware donation. \n\n \\bibliographystyle{splncs04}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nThis is the first part of a series of papers on the spreading and\nvanishing dynamics of diffusive equations with free boundaries of the form,\n\\begin{equation}\n\\label{main-eq}\n\\begin{cases}\nu_t=u_{xx}+uf(t,x,u),\\quad &t>0,\\,\\, 00,\\cr\nu_x(t,0)=u(t,h(t))=0, \\quad &t>0,\\cr h(0)=h_0, u(0,x)=u_0(x),\\quad &0\\leq x\\leq h_0,\n\\end{cases}\n\\end{equation}\nwhere $x=h(t;u_0,h_0)$ is the moving boundary to be determined, $\\mu$, $h_0$ are given\npositive constants, and the initial function $u_0(x)$ satisfies\n \\begin{equation}\n \\label{initial-value}\n u_0\\in C^2([0,h_0]), \\ u^{'}_0(0)=u_0(h_0)=0, \\ {\\rm and} \\ u_0>0 \\ {\\rm in} \\ [0,h_0).\n \\end{equation}\n\nWe assume that\n $f(t,x,u)$ is a $C^1$ function in $t\\in\\RR$, $x\\in\\RR$, and\n$u\\in\\RR$; $f(t,x,u)<0$ for $u\\gg 1$; $f_u(t,x,u)<0$ for $u\\ge 0$;\nand $f(t,x,u)$ is almost periodic in $t$ uniformly with respect to\n$x\\in\\RR$ and $u\\in\\RR$ (see (H1), (H2) in subsection 2.1 for detail). Here is a typical example\nof such functions,\n$f(t,x,u)=a(t,x)-b(t,x)u$, where $a(t,x)$ and $b(t,x)$ are almost periodic in $t$ and periodic in $x\\in\\RR$, and\n$\\inf_{t\\in\\RR,x\\in\\RR}b(t,x)>0$.\n\n\n Observe that for any given $u_0$ satisfying \\eqref{initial-value},\n \\eqref{main-eq} has a unique (local) solution $(u(t,x;u_0,h_0)$, $h(t;u_0,h_0))$ with\n $u(0,x;u_0,h_0)=u_0(x)$ and $h(0;u_0,h_0)=h_0$ (see \\cite{DuGuPe}). Moreover,\nby comparison principle for parabolic equations, $(u(t,x;u_0,h_0),h(t;u_0,h_0))$ exists for all\n$t>0$ and $u_x(t,h(t))<0$. Hence $h(t;u_0,h_0)$ increases as $t$ increases.\n\nEquation \\eqref{main-eq} with $f(t,x,u)=u(a-bu)$ and $a$ and $b$ being\ntwo positive constants was introduced by Du and Lin in \\cite{DuLi}\nto understand the spreading of species. A great deal of previous mathematical investigation on the spreading of species (in one space dimension case)\nhas been based on diffusive equations of the form\n\\begin{equation}\n\\label{kpp-general-eq}\nu_t=u_{xx}+u f(t,x,u),\\quad x\\in\\RR,\n\\end{equation}\nwhere $f(t,x,u)<0$ for $u\\gg 1$ and $f_u(t,x,u)<0$ for $u\\ge 0$. Thanks to the pioneering works of Fisher (\\cite{Fis}) and Kolmogorov, Petrowsky, Piscunov\n(\\cite{KPP}) on the following special case of \\eqref{kpp-general-eq}\n\\begin{equation}\n\\label{kpp-special-eq}\nu_t=u_{xx}+u(1-u),\\quad x\\in\\RR,\n\\end{equation}\n\\eqref{main-eq}, resp. 
\\eqref{kpp-general-eq}, is referred to as diffusive Fisher or KPP equation.\n\n\n\nOne of the central problems for both \\eqref{main-eq} and \\eqref{kpp-general-eq} is\nto understand their spreading dynamics. For \\eqref{kpp-general-eq}, this is closely related to\nspreading speeds and transition fronts of \\eqref{kpp-general-eq} and has been widely studied.\nWhen $f(t,x,u)$ is independent of $t$ and $x$ or is periodic in $t$ and $x$, the spreading dynamics\nfor \\eqref{kpp-general-eq} is quite well understood. For example, assume that $f(t,x,u)$ is periodic in\n$t$ with period $T$ and periodic in $x$ with period $p$, and that\n$u\\equiv 0$ is a linearly unstable solution of \\eqref{kpp-general-eq} with respect to periodic perturbations. Then it is known that\n\\eqref{kpp-general-eq} has a unique positive periodic solution $u^*(t,x)$ ($u^*(t+T,x)=u^*(t,x+p)=u^*(t,x)$) which is asymptotically stable with\nrespect to periodic perturbations and\n it\nhas been proved\n that there is a positive constant $c^*$ such that for every $c\\geq c^*$, there is a periodic\n traveling wave solution $u(t,x)$ connecting $u^*$ and $u\\equiv 0$\n with speed $c$ (i.e. $u(t,x)=\\phi(x-ct,t,x)$ for some $\\phi(\\cdot,\\cdot,\\cdot)$ satisfying that\n $\\phi(\\cdot,\\cdot+T,\\cdot)=\\phi(\\cdot,\\cdot,\\cdot+p)=\\phi(\\cdot,\\cdot,\\cdot)$ and $\\phi(-\\infty,\\cdot,\\cdot)=u^*(\\cdot,\\cdot)$ and\n $\\phi(\\infty,\\cdot,\\cdot)=0$), and there is no such traveling\n wave solution of slower speed (see \\cite{LiZh, Nad, NoXi, Wei}). Moreover, the minimal wave speed $c^*$ is\n of the following spreading property and is hence called the spreading speed of \\eqref{kpp-general-eq}:\n for any given $u_0\\in C_{\\rm unif}^b(\\RR,\\RR^+)$ with non-empty support,\n \\begin{equation}\n \\label{kpp-spreading-eq}\n \\begin{cases}\n \\lim_{|x| \\le c^{'}t,t\\to\\infty}[u(t,x;u_0)-u^*(t,x)]=0\\quad \\forall\\,\\, c^{'} c^*,\n \\end{cases}\n \\end{equation}\n where $u(t,x;u_0)$ is the solution of \\eqref{kpp-general-eq} with $u(0,x;u_0)=u_0(x)$\n (see \\cite{LiZh, Wei}).\n\n\n\n\nThe spreading property \\eqref{kpp-spreading-eq} for \\eqref{kpp-general-eq} in the case that $f(t,x,u)$ is periodic in $t$ and $x$\n implies that spreading always happens for a solution of\n\\eqref{kpp-general-eq} with a positive initial function, no matter how small the positive initial function is.\nThe following strikingly different spreading scenario has been proved for \\eqref{main-eq} in the case that\n$f(t,x,u)\\equiv f(u)$ (see \\cite{DuGu, DuLi}): it exhibits a spreading-vanishing dichotomy in the sense that for any given positive initial data\n$u_0$ satisfying \\eqref{initial-value} and $h_0$, either vanishing occurs (i.e. $\\lim_{t\\to\\infty}h(t;u_0,h_0)<\\infty$ and $\\lim_{t\\to\\infty}u(t,x;u_0,h_0)=0$) or\nspreading occurs (i.e. $\\lim_{t\\to\\infty}h(t;u_0,h_0)=\\infty$ and $\\lim_{t\\to\\infty}u(t,x;u_0,h_0)=u^*$ locally uniformly in $x\\in \\RR^+$, where\n$u^*$ is the unique positive solution of $f(u)=0$). The above spreading-vanishing dichotomy for \\eqref{main-eq} with\n$f(t,x,u)\\equiv f(u)$ has also been extended to the cases that $f(t,x,u)$ is periodic in $t$ or that $f(t,x,u)$ is independent of $t$ and periodic\nin $x$ (see \\cite{DuGuPe,DuLia}). 
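For orientation, in the autonomous logistic case $f(t,x,u)\\equiv a-bu$ with constants $a,b>0$ the threshold behind this dichotomy can be made explicit; the following is only a recollection of the classical computation, not a result of the present paper: the principal eigenvalue of $-\\phi^{''}=\\lambda\\phi$ on $(0,l)$ with $\\phi^{'}(0)=\\phi(l)=0$ equals $\\big(\\frac{\\pi}{2l}\\big)^2$, so the linearization at $u=0$ is unstable on $(0,l)$ precisely when $l>\\frac{\\pi}{2\\sqrt{a}}$, and, by \\cite{DuLi}, vanishing can only occur when $h_\\infty\\le \\frac{\\pi}{2\\sqrt{a}}$.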
The spreading-vanishing dichotomy proved for \\eqref{main-eq} in \\cite{DuGu, DuGuPe, DuLi, DuLia}\n is well supported by some empirical evidences, for example, the introduction of several\nbird species from Europe to North America in the 1900s was successful only after many initial\nattempts (see \\cite{LoHoMa, ShKa}).\n\n\n\n\nIn reality, many evolution systems in biology are subject to non-periodic time and\/or space variations. It is therefore\nof great importance to investigate the spreading dynamics for both \\eqref{main-eq} and \\eqref{kpp-general-eq} with general time and space\ndependent $f(t,x,u)$. The spreading dynamics for \\eqref{kpp-general-eq} with non-periodic time and\/or space dependence has been\nstudied by many people recently\n(see \\cite{BeHa, BeHaNa, BeNa, HuSh, KoSh, NaRo, NoXi1, She1, She2, She3, TaZhZl, Zla}, etc.). However, there is little study\non the spreading dynamics for \\eqref{main-eq} with non-periodic time and space dependence.\n\n\nThe objective of the current series\nof papers is to investigate the spreading-vanishing dynamics of \\eqref{main-eq} in the case that $f(t,x,u)$ is almost periodic in $t$,\nthat is, to investigate whether the population will\nsuccessfully establishes itself\nin the entire space (i.e. spreading occurs), or it fails to establish and vanishes\neventually (i.e. vanishing occurs).\nRoughly speaking, for given $(u_0,h_0)$, if $h_\\infty=\\lim_{t\\to\\infty}h(t;u_0,h_0)=\\infty$ and for any $M>0$,\n$\\liminf_{t\\to\\infty}\\inf_{0\\le x\\le M}u(t,x;u_0,h_0)>0$,\nwe say {\\it spreading} occurs. If $h_\\infty<\\infty$ and $\\lim_{t\\to\\infty}u(t,x;u_0,h_0)=0$, we say {\\it vanishing} occurs (see Definition\n\\ref{spreading-vanishing-def} for detail). We say a positive number $c^*$ is a {\\it spreading speed} of \\eqref{main-eq} if for any $(u_0,h_0)$ such that the spreading occurs,\n$$\\lim_{t\\to\\infty}\\frac{h(t;u_0,h_0)}{t}=c^*$$\n and\n $$\n \\liminf_{0\\le x\\le c^{'}t,t\\to\\infty}u(t,x;u_0,h_0)>0\\quad \\forall \\,\\, c^{'}0$ such that for any given $u_0$ satisfying \\eqref{initial-value}, vanishing occurs if and only if\n$h_\\infty\\le l^*$} (see Theorem \\ref{main-thm2}).\n\n\\medskip\n\n\n\nTo characterize the detailed spreading and vanishing dynamics of \\eqref{main-eq},\nwe also consider the following fixed boundary problem on half line,\n\\begin{equation}\n\\label{aux-main-eq1}\n\\begin{cases}\nu_t=u_{xx}+uf(t,x,u),\\quad x\\in (0,\\infty)\\cr\nu_x(t,0)=0.\n\\end{cases}\n\\end{equation}\n\nObserve that if $u^*(t,x)$ is a solution of \\eqref{aux-main-eq1} and $u_0(x)\\le u^*(0,x)$ for $x\\in [0,h_0]$, then\n$u(t,x;u_0,h_0)\\le u^*(t,x)$ for $0\\le x\\le h(t;u_0,h_0)$. 
Among others, we prove that\n\n\\medskip\n\n\\noindent $\\bullet$ {\\it Assume (H1)-(H5) stated in subsection 2.1.\n \\eqref{aux-main-eq1} has a unique time almost periodic positive solution $u^*(t,x)$} (see Theorem \\ref{main-thm1}) {\\it and\nfor any given $u_0$ satisfying \\eqref{initial-value}, if spreading occurs in \\eqref{main-eq}, then $u(t,x;u_0,h_0)-u^*(t,x)\\to 0$ as $t\\to\\infty$ locally uniformly in $x\\ge 0$} (see\nTheorem \\ref{main-thm2}).\n\n\\medskip\n\n\nWe note that the techniques for \\eqref{main-eq} can be modified to study the\nfollowing double fronts free boundary problem:\n\\begin{equation}\n\\begin{cases}\n\\label{main-doub-eq}\nu_t=u_{xx}+uf(t,x,u), \\quad &t>0,\\,\\, g(t)0,\\cr\nu(t,h(t))=0, h^{'}(t)=-\\mu u_x(t,h(t)), \\quad &t>0,\\cr\nu(0,x)=u_0(x),\\quad & h_0\\le x\\le g_0 \\cr\nh(0)=h_0,\\,\\, g(0)=g_0\n\\end{cases}\n\\end{equation}\nwhere both $x=g(t)$ and $x=h(t)$ are to be determined and $u_0$ satisfies\n\\begin{equation}\n\\label{initial-value-1}\n\\begin{cases}\nu_0\\in C^2([g_0,h_0]) \\cr\nu_0(g_0)=u_0(h_0)=0 \\ \\ and \\ \\ u_0>0 \\ \\ in \\ \\ (g_0,h_0).\n\\end{cases}\n\\end{equation}\nUnder the assumptions (H1), (H2), $(H4)^*$, and (H5) (see section 6 for $(H4)^*$), spreading-vanishing dichotomy for \\eqref{main-doub-eq} also holds.\nIn particular, we prove that\n\n\\medskip\n\\noindent $\\bullet$ {\\it Assume (H1), (H2), $(H4)^*$, and (H5). For given $h_0>0$ and $u_0$ satisfying \\eqref{initial-value-1},\n either\n $h_\\infty-g_\\infty<\\infty$ and\n$\\lim_{t\\to+\\infty} u(t,x;u_0,h_0,g_0)=0$ uniformly in $x$,\nor $h_{\\infty}=-g_{\\infty}=\\infty$ and\n$\\liminf_{t\\to\\infty}\\inf_{|x|\\le M}u(t,x;u_0,h_0,g_0)>0$\nfor any $M>0$}\n(see Proposition \\ref{spreading-vanishing-doub-prop}).\n\n\\medskip\n\n\nIn the second part of the series of the papers, we will study the existence of spreading speeds\nfor \\eqref{main-eq} and the existence of time almost periodic semi-wave solutions of the\nfollowing free boundary problem associated to \\eqref{main-eq},\n\\begin{equation}\n\\label{aux-main-eq3}\n\\begin{cases}\nu_t=u_{xx}+uf(t,x,u),\\quad &t>0, \\,\\, -\\infty0,\\cr\nh^{'}(t)=-\\mu u_x(t,h(t)), \\quad &t>0.\n\\end{cases}\n\\end{equation}\n If $(u(t,x),h(t))$ is an entire solution of \\eqref{aux-main-eq3},\nit is called a {\\it semi-wave solution} of \\eqref{aux-main-eq3}.\n\n\n The rest of this paper is organized as follows. In Section 2, we introduce the definitions and standing assumptions and\n state the main results of the paper. We present preliminary materials in Section 3 for the use in later sections.\n Section 4 is devoted to the investigation of time almost periodic KPP equation \\eqref{aux-main-eq1}\n on the half line and to the proof of Theorem \\ref{main-thm1}. In Section 5,\n we explore the spreading and vanishing dichotomy scenario of \\eqref{main-eq} and prove\n Theorem \\ref{main-thm2}. The paper is ended with some remarks on spreading-vanishing dichotomy for \\eqref{main-doub-eq} in Section 6.\n\n\n\\section{Definitions, Assumptions, and Main Results}\n\nIn this section, we introduce the definitions and standing assumptions, and state the main results.\n\n\\subsection{Definitions and assumptions}\n\nIn this subsection, we introduce the definitions and standing assumptions. 
We first recall the definition of almost periodic\nfunctions, next recall the definition of principal Lyapunov exponents for some linear parabolic equations, then state\nthe standing assumptions, and finally introduce the definition of spreading and vanishing for \\eqref{main-eq}.\n\n\\begin{definition}[Almost periodic function]\n\\label{almost-periodic-def}\n\\begin{itemize}\n\n\\item[(1)] A continuous function $g:\\RR\\to \\RR$ is called {\\rm almost periodic} if\nfor any $\\epsilon>0$, the set\n$$\nT(\\epsilon)=\\{\\tau\\in\\RR\\,|\\, |f(t+\\tau)-f(t)|<\\epsilon\\,\\, \\, \\text{for all}\\,\\, t\\in\\RR\\}\n$$\nis relatively dense in $\\RR$.\n\n\n\n\\item[(2)] Let $g(t,x,u)$ be a continuous function of $(t,x,u)\\in\\RR\\times\\RR^m\\times\\RR^n$. $g$ is said to be {\\rm almost periodic in $t$ uniformly with respect to $x\\in\\RR^m$ and\n$u$ in bounded sets} if\n$g$ is uniformly continuous in $t\\in\\RR$, $x\\in\\RR^m$, and $u$ in bounded sets and for each $x\\in\\RR^m$ and $u\\in\\RR^n$, $g(t,x,u)$ is almost periodic in $t$.\n\n\\item[(3)] For a given almost periodic function $g(t,x,u)$, the hull $H(g)$ of $g$ is defined by\n\\begin{align*}\nH(g)=\\{\\tilde g(\\cdot,\\cdot,\\cdot)\\,|\\, & \\exists t_n\\to\\infty \\,\\,\\text{such that}\\,\\, g(t+t_n,x,u)\\to \\tilde g(t,x,u)\\,\\, \\text{uniformly in}\\,\\, t\\in\\RR,\\\\\n&\n\\,\\, (x,u)\\,\\, \\text{in bounded sets}\\}.\n\\end{align*}\n\\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n\\label{almost-periodic-rk}\n(1) Let $g(t,x,u)$ be a continuous function of $(t,x,u)\\in\\RR\\times\\RR^m\\times\\RR^n$. $g$ is almost periodic in $t$ uniformly with respect to\n$x\\in\\RR^m$ and $u$ in bounded sets if and only if\n $g$ is uniformly continuous in $t\\in\\RR$, $x\\in\\RR^m$, and $u$ in bounded sets and for any sequences $\\{\\alpha_n^{'}\\}$,\n$\\{\\beta_n^{'}\\}\\subset \\RR$, there are subsequences $\\{\\alpha_n\\}\\subset\\{\\alpha_n^{'}\\}$, $\\{\\beta_n\\}\\subset\\{\\beta_n^{'}\\}$\nsuch that\n$$\n\\lim_{n\\to\\infty}\\lim_{m\\to\\infty}g(t+\\alpha_n+\\beta_m,x,u)=\\lim_{n\\to\\infty}g(t+\\alpha_n+\\beta_n,x,u)\n$$\nfor each $(t,x,u)\\in\\RR\\times\\RR^m\\times\\RR^n$ (see \\cite[Theorems 1.17 and 2.10]{Fink}).\n\n(2) We may write $g(\\cdot+t,\\cdot,\\cdot)$ as $g\\cdot t(\\cdot,\\cdot,\\cdot)$.\n\\end{remark}\n\nFor a given positive constant $l>0$ and a given $C^1$ function $a(t,x)$ which is almost periodic in $t$ uniformly in $x$ in bounded sets, consider\n\\begin{equation}\n\\label{linearized-eq1}\n\\begin{cases}\nv_t=v_{xx}+a(t,x)v,\\quad 00$ and a given $C^1$ function\n $a(t,x)$ which is almost periodic function in $t$ uniformly in $x$ in bounded sets, consider\nalso\n\\begin{equation}\n\\label{aaux-linearized-eq1}\n\\begin{cases}\nv_t=v_{xx}+a(t,x)v,\\quad 00$ such that $$\\sup_{t\\in\\RR,x\\in\\RR,u\\ge M}f(t,x,u)<0$$ and\n$$\\sup_{t\\in\\RR,x\\in\\RR,u\\ge 0}f_u(t,x,u)<0.$$}\n\n\\medskip\n\n\\noindent{\\bf (H2)} {\\it $f(t,x,u)$ and $Df(t,x,u)=(f_t(t,x,u),f_x(t,x,u),f_u(t,x,u))$ are almost periodic in $t$ uniformly with respect to $x\\in\\RR$ and\n$u$ in bounded sets.\n}\n\n\n\n\\medskip\n\\noindent {\\bf (H3)} {\\it There is $l^*>0$ such that\n$\\lambda(a(\\cdot,\\cdot),l)>0$ for $l>l^*$, where $a(t,x)=f(t,x,0)$.\n}\n\n\\medskip\n\n\n\\noindent{\\bf (H4)} {\\it There are $y^*\\ge 0$ and $L^*\\ge 0$ such that\n $\\tilde\\lambda(a(\\cdot,\\cdot+y),l)>0$ for $y\\ge y^*$ and $l\\ge L^*$.}\n\n \\medskip\n\n\\noindent {\\bf (H5)}\n {\\it For any given sequence $\\{y_n^{'}\\}\\subset \\RR$ and $\\{g_{n}^{'}\\}\\subset H(f)$, 
there are subsequences\n$\\{y_n\\}\\subset\\{y_n^{'}\\}$ and $\\{g_n\\}\\subset\\{g_n^{'}\\}$ such that\n$\\lim_{n\\to\\infty} g_n(t,x+y_n,u)$ exists uniformly in $t\\in\\RR$ and $(x,u)$ in bounded sets.\n}\n\n\\medskip\n\nAssume (H1) and (H2).\nWe remark that, if $f(t,x,u)\\equiv f(t,u)$, then (H3) (resp. (H4)) holds if and only if\n$\\lim_{t\\to\\infty}\\frac{1}{t}\\int_0^t f(s,0)ds>0$ (see Lemma \\ref{lyapunov-exponent-lm3} for the reasoning).\nIf $f(t,x,u)\\equiv f(t,u)$ or $f(t,x,u)$ is periodic in $x$, (H5) is automatically true.\n\n\nConsider \\eqref{main-eq}. Throughout this paper, we assume (H1) and (H2).\nFor any given $u_0$ satisfying \\eqref{initial-value}, \\eqref{main-eq} has a unique solution\n$(u(t,x;u_0,h_0),h(t;u_0,h_0))$ with $u(0,x;u_0,h_0)=u_0(x)$\nand $h(0;u_0,h_0)=h_0$ (see \\cite{DuGuPe}). By comparison principle for parabolic equations, $u(t,x;u_0,h_0)$ exists for all $t>0$ and\n$u_x(t,x;u_0,h_0)\\le 0$ for $t>0$.\nHence $h(t;u_0,h_0)$ is monotonically increasing, and therefore\nthere exists $h_{\\infty}\\in(0,+\\infty]$ such that\n$\\lim_{t\\to+\\infty}h(t;u_0,h_0)=h_{\\infty}$.\n\n\\begin{definition}[Spreading-vanishing and spreading speed]\n\\label{spreading-vanishing-def}\nConsider \\eqref{main-eq}.\n\\begin{itemize}\n\\item[(1)] For any given $u_0$ satisfying \\eqref{initial-value}, let $h_\\infty=\\lim_{t\\to\\infty}h(t;u_0,h_0)$.\n It is said that the {\\rm vanishing occurs} if $ h_\\infty<\\infty$ and\n$\\lim_{t\\to\\infty}\\|u(t,\\cdot;u_0,h_0)\\|_{C([0,h(t)])}=0$. It is said that the {\\rm spreading occurs}\n if $h_\\infty=\\infty$ and\n $\\liminf_{t\\to\\infty} u(t,x;u_0,h_0)>0$ locally uniformly in\n$x\\in [0,\\infty)$.\n\n\\item[(2)] A real number $c^*>0$ is called the {\\rm spreading speed} of \\eqref{main-eq} if for any $(u_0,h_0)$ such that\n\\eqref{initial-value} is satisfied and the spreading occurs,\nthere holds\n$$\n\\lim_{t\\to\\infty}\\frac{h(t;u_0,h_0)}{t}=c^*\n$$\nand\n$$\n\\liminf_{0\\le x\\le c^{'}t,t\\to\\infty}u(t,x;u_0,h_0)>0,\\quad \\forall\\,\\, c^{'}0$,\n$$\n\\lim_{t\\to\\infty}\\|u(t,\\cdot;u_0)-u^*(t,\\cdot)\\|_{C([0,\\infty))}=0,\n$$\nwhere $u(t,x;u_0)$ is the solution of \\eqref{aux-main-eq1} with $u(0,x;u_0)=u_0(x)$. If, in addition, $f(t,x,u)\\equiv f(t,u)$,\nthen $u^*(t,x)\\equiv V^*(t)$, where $V^*(t)$ is the unique time almost periodic positive solution of the following\nODE,\n\\begin{equation}\n\\label{ode-eq1}\n\\dot u=uf(t,u).\n\\end{equation}\n\\end{theorem}\n\n\nThe following theorem is about the spreading and vanishing dichotomy of \\eqref{main-eq}.\n\n\\begin{theorem}[Spreading-vanishing dichotomy]\n\\label{main-thm2}\nAssume (H1)-(H5). 
For any given $h_0>0$ and $u_0(\\cdot)$ satisfying \\eqref{initial-value},\nthe following hold.\n\\begin{itemize}\n\\item[(1)]\nEither\n\n\n(i)\n$h_{\\infty}\\le l^*$\nand $\\lim_{t\\to+\\infty} u(t,x;u_0,h_0)=0$\n\nor\n\n\n(ii) $h_{\\infty}=\\infty$ and\n$\\lim_{t\\to\\infty}[u(t,x;u_0,h_0)-u^*(t,x)]=0$\nlocally uniformly for $x\\in[0,+\\infty)$, where $u^*(t,x)$ is as\nin Theorem \\ref{main-thm1}.\n\n\n\n\\item[(2)] If $h_0\\ge l^*$, then $h_\\infty=\\infty$.\n\n\\item[(3)]\nSuppose $h_{0}0$\nsuch that spreading occurs if $\\mu>\\mu^{*}$ and vanishing occurs if $\\mu\\le \\mu^{*}$.\n\\end{itemize}\n\\end{theorem}\n\nWe remark that similar results as those in Theorems \\ref{main-thm1} and \\ref{main-thm2} hold for \\eqref{main-doub-eq}\n(see Propositions \\ref{positive-almost-periodic-solution-doub-prop} and \\ref{spreading-vanishing-doub-prop}).\n\n\\section{Preliminary}\n\nIn this section,\nwe present some preliminary results to be applied in later sections, including basic properties for principal Lyapunov exponents (see subsection 3.1),\nnon-increasing property of the so called part metric associated to diffusive KPP equations in both bounded and unbounded domains (see subsection 3.2.),\nthe asymptotic\ndynamics of diffusive KPP equations with time almost periodic dependence in fixed bounded environments (see subsection 3.3), and comparison principles\nfor free boundary problems (see subsection 3.4).\n\n\n\n\n\\subsection{Principal Lyapunov exponents}\n\nConsider \\eqref{linearized-eq1}. Let $X=X(l)$, where $X(l)$ is as in \\eqref{bounded-domain-space-eq1}. We denote by\n$\\|\\cdot\\|$ either the norm in $X$ or in $\\mathcal{L}(X,X)$.\nRecall that for any $v_0\\in X$, \\eqref{linearized-eq1} has a unique solution $v(t,\\cdot;v_0,a)$ and\n$$\n\\lambda(a,l)=\\limsup_{t\\to\\infty}\\frac{\\ln\\|V(t,a)\\|_{X(l)}}{t},\n$$\nwhere $V(t,a)v_0=v(t,\\cdot;v_0,a)$.\nFor any $b\\in H(a)$, consider also\n\\begin{equation}\n\\label{linearized-eq1-1}\n\\begin{cases}\nv_t=v_{xx}+b(t,x)v,\\quad 0\\epsilon \\phi^{l_1}(a)$ on $[0,l_1]$. Then, by\n comparison principle for parabolic equations, we have that\n$$v(t,x;\\epsilon\\phi^{l_1}(a),a)0$.\nMoreover, if $u_0\\in X^+(l)\\setminus\\{0\\}$, then $u(t,\\cdot;u_0,g)\\in X^{++}(l)$ for $t>0$.\n\n\nFor any $u_1,u_2\\in X^{++}(l)$, we can define the so called part metric, $\\rho(u_1,u_2)$, between $u_1$ and $u_2$, as follows,\n\\begin{equation}\n\\label{part-metric-eq1}\n\\rho(u_1,u_2)=\\inf\\{\\ln\\alpha\\,|\\, \\alpha\\ge 1,\\,\\, \\frac{1}{\\alpha}u_1(\\cdot)\\le u_2(\\cdot)\\le\\alpha u_1(\\cdot)\\}.\n\\end{equation}\nNote that if $u_1,u_2\\in X^{++}(l)$, then $u(t,\\cdot;u_i,g)\\in X^{++}(l)$ $(i=1,2$) for any $t>0$ and $g\\in H(f)$. Hence\n$\\rho(u(t,\\cdot;u_1,g),u(t,\\cdot;u_2,g))$ is also well defined.\n\n\nNext, consider \\eqref{aux-main-eq1} and consider also\n\\begin{equation}\n\\label{unbounded-eq3}\n\\begin{cases}\nu_t=u_{xx}+ug(t,x,u),\\quad 00\\}.\n$$\nNote that $\\tilde X^{++}$ is not empty and is an open subset of $\\tilde X^+$.\nBy semigroup theory (see \\cite{PA}), for any $g\\in H(f)$ and $u_0\\in \\tilde X$, \\eqref{unbounded-eq3}\nhas a unique solution $u(t,x;u_0,g)$ with $u(0,x;u_0,g)=u_0(x)$.\nBy (H1) and comparison principle for parabolic equations, if $u_0\\in \\tilde X^+$, then\n$u(t,\\cdot;u_0,g)$ exists and $u(t,\\cdot;u_0,g)\\in \\tilde X^+$ for all $t>0$. 
Moreover, if $u_0\\in\\tilde X^{++}$, then\n$u(t,\\cdot;u_0,g)\\in \\tilde X^{++}$ for all $t>0$.\n\nFor given $u_1,u_2\\in \\tilde X^{++}$, we can also define the part metric, $\\rho(u_1,u_2)$, between $u_1$ and $u_2$ as follows,\n$$\n\\rho(u_1,u_2)=\\inf\\{\\ln\\alpha\\,|\\, \\alpha\\ge 1, \\,\\, \\frac{1}{\\alpha}u_1(\\cdot)\\le u_2(\\cdot)\\le \\alpha u_1(\\cdot)\\}.\n$$\nNote that if $u_1,u_2\\in\\tilde X^{++}$, then $\\rho(u(t,\\cdot;u_1,g),u(t,\\cdot;u_2,g))$ is well defined for $t>0$.\n\n\n\nWe now have the following proposition about the non-increasing of part metric.\n\n\\begin{proposition}\n\\label{part-metric-prop}\n\\begin{itemize}\n\n\\item[(1)] Consider \\eqref{fixed-boundary-eq3} and let $u(t,\\cdot;u_0,g)$ denote the solution of \\eqref{fixed-boundary-eq3}\nwith $u(0,\\cdot;u_0,g)=u_0(\\cdot)\\in X(l)$.\n For given $u_0,v_0\\in X^{++}(l)$ with $u_0\\not = v_0$, $\\rho(u(t,\\cdot;u_0,g),u(t,\\cdot;v_0,g))$ is strictly decreasing as $t$\nincreases.\n\n\\item[(2)] Consider \\eqref{unbounded-eq3} and let $u(t,\\cdot;u_0,g)$ denote the solution of \\eqref{unbounded-eq3}\nwith $u(0,\\cdot;u_0,g)\\in \\tilde X$.\n\\item[] (i) Given any $u_0,v_0\\in \\tilde X^{++}$ and $g\\in H(f)$, $\\rho(u(t,\\cdot;u_0,g),u(t,\\cdot;v_0,g))$\ndecreases as $t$ increases.\n\n\\item[] (ii) For any $\\epsilon>0$, $\\sigma>0$, $M>0$, and $\\tau>0$ with $\\epsilon0$ such that\nfor any $g\\in H(f)$, $u_0,v_0\\in \\tilde X^{++}$ with $\\epsilon\\le u_0(x)\\le M$, $\\epsilon\\le v_0(x)\\le M$ for $x\\in\\RR^{+}$ and\n$\\rho(u_0,v_0)\\ge\\sigma$, there holds\n$$\n\\rho(u(\\tau,\\cdot;u_0,g),u(\\tau,\\cdot;v_0,g))\\le \\rho(u_0,v_0)- \\delta.\n$$\n\\end{itemize}\n\\end{proposition}\n\n\n\n\\begin{proof} The proposition can be proved by the similar arguments as in \\cite[Proposition 3.4]{KoSh}.\nFor the completeness, we provide a proof in the following.\n\n (1) For any $u_0,v_0\\in X^{++}(l)$ with $u_0\\not = v_0$, there is $\\alpha^*> 1$ such that $\\rho(u_0,v_0)=\\ln \\alpha^{*}$\nand $\\frac{1}{\\alpha^{*}}u_0\\leq v_0\\leq \\alpha^{*} u_0$.\nBy comparison principle for parabolic equations,\n$$u(t, \\cdot;v_0,g)\\le u(t, \\cdot; \\alpha^{*}u_0,g)\\quad {\\rm for}\\quad t>0.\n$$\nLet\n $$v(t, x)=\\alpha^{*}u(t, x; u_0,g).$$\nWe then have\n\\begin{align*}\nv_{t}(t,x)&=v_{xx}(t,x)+v(t,x)g(t,x, u(t,x;u_0,g))\\\\\n& =\nv_{xx}(t,x)+v(t,x)g(t, x,v(t,x))+v(t,x)g(t, x,u(t,x;u_0,g))-v(t,x)g(t,x,v(t,x))\\\\\n&> v_{xx} (t,x)+ v(t,x)g(t, x,v(t,x))\\quad \\ for \\ all \\ t>0,\\quad 0\\le x0.\n$$\nBy strong comparison principle for parabolic equations,\n$$\nu(t,x;\\alpha^* u_0,g)< \\alpha^* u(t,x;u_0,g)\\quad {\\rm for}\\quad 0\\le x < l.\n$$\nThen by Hopf lemma for parabolic equations, there is $\\tilde\\alpha^*<\\alpha^*$ such that\n$$\nu(t,x;\\alpha^* u_0,g)\\le \\tilde \\alpha^* u(t,x;u_0,g)\\quad {\\rm for}\\quad 0\\le x\\le l\n$$\nand hence\n$$\nu(t,\\cdot;v_0,g)\\leq \\tilde \\alpha^* u(t,\\cdot;u_0,g)\n$$\nfor $t>0$.\nSimilarly, we can prove that\n$$\n\\frac{1}{\\bar \\alpha^*}u(t,\\cdot;u_0,g)\\le u(t,\\cdot;v_0,g)\n$$\nfor some $\\bar \\alpha^*<\\alpha^*$ and $t>0$. 
It then follows that\n$$\n\\rho(u(t,\\cdot;u_0,g),u(t,\\cdot;v_0,g))< \\rho(u_0,v_0)\\quad \\ for \\ all \\,\\, t\\ge 0\n$$\nand then\n$$\n\\rho(u(t_2,\\cdot;u_0,g),u(t_2,\\cdot;v_0,g))< \\rho(u(t_1,\\cdot;u_0,g),u(t_1,\\cdot;v_0,g))\n\\quad \\ for \\ all \\,\\, 0\\le t_1< t_2.\n$$\n\n(2) (i) It follows from the arguments in (1).\n\n(ii)\nLet $\\epsilon>0$, $\\sigma>0$, $M>0$, and $\\tau>0$ be given and $\\epsilon0$ and $M_1>0$ such that\nfor any $g\\in H(f)$ and $u_0\\in \\tilde{X}^{++}$ with $\\epsilon\\le u_0(x)\\le M$ for $x\\in\\RR^+$, there holds\n\\begin{equation*}\n\\label{part-metric-eq1}\n\\epsilon_1\\le u(t,x;u_0,g)\\le M_1\\quad \\ for \\ all \\,\\, t\\in[0,\\tau],\\,\\, x\\in \\RR^+.\n\\end{equation*}\nIn fact, let $\\tilde M>0$ be such that $f(t,x,u)<0$ for $u\\ge \\tilde M$. Then for $0<\\tilde\\epsilon<\\max\\{\\epsilon,\\tilde M\\}$,\n$u(t,\\cdot;u_{\\tilde \\epsilon},g)\\le \\tilde{M}$ for all $t\\ge 0$ and $g\\in H(f)$, where $u_{\\tilde\\epsilon}(x)\\equiv \\tilde \\epsilon$.\nNote that $g(t,x,u)\\ge \\tilde \\alpha =\\inf_{t\\in\\RR,x\\in\\RR^+}f(t,x,\\tilde M)$ for $u\\le \\tilde M$.\nHence by comparison principal for parabolic equations,\n$$\n\\tilde M\\ge u(t,x;u_{\\tilde\\epsilon},g)\\ge e^{\\tilde{\\alpha} t} \\tilde \\epsilon\\quad \\ for \\ all \\,\\, t\\ge 0,\\quad x\\in \\RR^+.\n$$\nThe claim then follows.\n\nLet\n\\begin{equation*}\n\\label{part-metric-eq2}\n\\delta_1=\\epsilon_1^2 e^\\sigma (1-e^\\sigma)\\sup_{g\\in H(f),t\\in[0,\\tau],x\\in \\RR^+,u\\in[\\epsilon_1,M_1M\/\\epsilon]}g_u(t,x,u).\n\\end{equation*}\nThen $\\delta_1>0$ and there is $0<\\tau_1\\le\\tau$ such that\n\\begin{equation}\n\\label{part-metric-eq3-1}\n\\frac{\\delta_1}{2}\\tau_10$. We prove that $\\delta$ defined in \\eqref{part-metric-eq5} satisfies the property in the proposition.\n\nFor any $u_0,v_0\\in\\tilde{X}^{++}$ with $\\epsilon\\le u_0(x)\\le M$ and $\\epsilon\\le v_0(x)\\le M$ for $x\\in \\RR^+$ and\n$\\rho(u_0,v_0)\\ge\\sigma$, there is $\\alpha^*> 1$ such that $\\rho(u_0,v_0)=\\ln \\alpha^{*}$\nand $\\frac{1}{\\alpha^{*}}u_0\\leq v_0\\leq \\alpha^{*} u_0$.\n\n\nNote that $e^\\sigma\\le\\alpha^*\\le \\frac{M}{\\epsilon}$. Let\n $$v(t, x)=\\alpha^{*}u(t, x; u_0,g)$$\n\\begin{align*}\nv_{t}(t,x)&=v_{xx}(t,x)+v(t,x)g(t,x, u(t,x;u_0,g))\\\\\n& =\nv_{xx}(t,x)+v(t,x)g(t, x,v(t,x))+v(t,x)g(t, x,u(t,x;u_0,g))-v(t,x)g(t,x,v(t,x))\\\\\n&\\ge v_{xx} (t,x)+ v(t,x)g(t, x,v(t,x))+\\delta_1\\quad \\forall 00$, where we write $u_{0}\\ll v_{0}$ if\n$v_{0}-u_{0}\\in X^{++}(l)$.\n\n\n\n\\begin{proposition}\n\\label{fixed-boundary-prop1}\nLet $a(t,x)=f(t,x,0)$.\n\\begin{itemize}\n\\item[(1)] If $\\lambda(a,l)< 0$, then for any $u_0\\in X^+(l)$,\n$\\|u(t,\\cdot;u_0,g)\\|\\to 0$ as $t\\to\\infty$ uniformly in $g\\in H(f)$. In particular,\n$\\|u(s+t,\\cdot;u_0,s)\\|\\to 0$ as $t\\to\\infty$ uniformly in\n$s\\in\\RR$.\n\n\\item[(2)] If $\\lambda(a,l)>0$, then\nthere exists $u^l:H(f)\\to X^{++}(l)$ such that $u^l(g)$ is continuous in $g\\in H(f)$,\n$u(t,\\cdot;u^l(g),g)=u^l(g\\cdot t)(\\cdot)$ for any $g\\in H(f)$, and for any $u_0\\in X^+(l)\\setminus\\{0\\}$,\n$$\n\\|u(t,\\cdot;u_0,g)-u(t,\\cdot;u^l(g),g)\\|\\to 0\n$$\nas $t\\to\\infty$ uniformly in $g\\in H(f)$. 
In particular,\n$u^{*,l}(t,x):=u(t,x;u^l(f),f)$ is almost periodic in $t\\in\\RR$ and for any $u_0\\in X^+(l)\\setminus\\{0\\}$,\n$$\n\\|u(s+t,\\cdot;u_0,s)-u^{*,l}(s+t,\\cdot)\\|\\to 0\n$$\nas $t\\to\\infty$ uniformly in $s\\in\\RR$, where $u(s+t,\\cdot;u_0,s)=u(t,\\cdot;u_0,f(\\cdot+s,\\cdot,\\cdot))$.\n\\end{itemize}\n\\end{proposition}\n\n\nThe proposition follows from \\cite[Theorem A]{MiSh2}. For completeness,\nwe provide a proof in the following.\n\n\n\\begin{proof}[Proof of Proposition \\ref{fixed-boundary-prop1}]\n(1) Let $b(t,x)=g(t,x,0)$ for any $g\\in H(f)$. Since $\\lambda(a,l)<0$,\nit is well known that $\\|v(t,x;\\phi^l(b),b)\\|\\to 0$ as $t\\to\\infty$, where $v(t,x;\\phi^l(b),b)$ is\nthe solution of \\eqref{linearized-eq1-1} with $v(0,\\cdot;\\phi^l(b),b)=\\phi^l(b)$ and\n$\\phi^l(b)$ is as in Lemma \\ref{lyapunov-exponent-lm0}.\nFor any $u_{0}\\in X^{+}(l)$, we can choose $M>0$ such that $u_{0}\\leq M\\phi^l(b)$\nfor $x\\in(0,l)$. It follows from comparison principle for parabolic equations\nthat\n$0\\leq u(t,x,u_{0},g)\\leq Mv(t,x,\\phi^l(b),b)$ for $x\\in[0,l]$.\nThis together with a priori estimates for parabolic equations implies that\n $\\|u(t,\\cdot,u_{0},g)\\|\\to 0$\nas $t\\to\\infty$.\n\n(2) Choose $\\xi>0$ such that $\\lambda(a,l)-\\xi>0$. Let\n$\\bar{a}(t,x)=f(t,x,0)-\\xi$\nand\n$\\phi^l: H(\\bar a)\\to X^{++}(l)$ be as in Lemma \\ref{lyapunov-exponent-lm0}.\nFor any $g\\in H(f)$, we choose $\\bar{b}\\in H(\\bar{a})$ such that\n$\\bar{b}(t,x)=g(t,x,0)-\\xi$. Let\n$v(t,x;\\phi^l(\\bar{b}),\\bar{b})$ be the solution of\n\\begin{equation*}\n\\begin{cases}\n\\label{linearized-eq7}\nv_t=v_{xx}+\\bar{b}(t,x)v,\\quad 00$, we can find $T>0$, such that\n$$\\|v(T,\\cdot;\\phi^l(\\bar{b}),\\bar{b})\\|\\geq1 \\, \\,\\,\\forall\\,\\, \\bar{b}\\in H(\\bar{a}).$$\nChoose $0<\\epsilon\\ll1$ such that for any $g\\in H(f)$,\n$$ug(t,x,u)\\geq(g(t,x,0)-\\xi)u \\quad {\\rm for}\\,\\,\n0\\leq u\\leq \\sup_{0\\leq t\\leq T}\\|v(t,\\cdot;\\epsilon\\phi^l(\\bar{b}),\\bar{b})\\|.$$\nUsing comparison principle for parabolic equations we obtain that\n$v(t,\\cdot;\\epsilon\\phi^l(\\bar{b}),\\bar{b}) (0\\leq t\\leq T)$ is a subsolution of\nthe problem \\eqref{fixed-boundary-eq3}. We then have\n$$\nu(T,\\cdot;\\epsilon \\phi^l(\\bar b),g)\\ge \\epsilon \\phi^l(\\bar b_{T})\n$$\nand then\n$$\nu(nT,\\cdot;\\epsilon \\phi^l(\\bar b),g)\\ge \\epsilon \\phi^l(\\bar b_{nT}).\n$$\nLet $\\omega(\\epsilon\\phi^l(\\bar a),f)$ be the $\\omega$-limit set of $\\Pi_t(\\epsilon \\phi^l(\\bar a),f)$. We then have\n$\\omega(\\epsilon \\phi^l(\\bar a),f)\\subset X^{++}(l)\\times H(f)$.\n\nWe claim that for any $g\\in H(f)$,\nthere is unique $u^l(g)\\in X^{++}(l)$ such that $(u^l(g),g)\\in\\omega(\\epsilon \\phi^l(\\bar a),f)$.\nIn fact, if there is $g\\in H(f)$ such that there are $u_1,u_2\\in X^{++}(l)$ with\n$(u_i,g)\\in \\omega(\\epsilon \\phi^l(\\bar a),f)$ and $u_1\\not = u_2$, then $(u(t,\\cdot;u_i,g),g_t)\\in \\omega(\\epsilon \\phi^l(\\bar a),f)$ for all\n$t\\in\\RR$. By Proposition \\ref{part-metric-prop}(1), there is $\\rho_\\infty>0$ such that\n$\\rho(u(t,\\cdot;u_1,g),u(t,\\cdot;u_2,g))\\to \\rho_\\infty$ as $t\\to -\\infty$. For any $t_n\\to -\\infty$, without loss of\ngenerality, assume that\n$g_{t_n}\\to g^*$ and $u(t_n\\cdot;u_i,g)\\to u_i^*$. 
Then\n$$\nu(t,\\cdot;u_i^*,g^*)=\\lim_{n\\to\\infty}u(t+t_n,\\cdot;u_i,g)\n$$\nand\n$$\n\\rho(u(t,\\cdot;u_1^*,g^*),u(t,\\cdot;u_2^*,g^*))=\\lim_{n\\to\\infty}\\rho(u(t+t_n,\\cdot;u_1,g),u(t+t_n,\\cdot;u_2,g))=\\rho_\\infty\n$$\nfor all $t\\in\\RR$, which contradicts to Proposition \\ref{part-metric-prop}(1). Therefore,\nthe claim holds and $u^l:H(f)\\to X^{++}$ is continuous. In particular,\n$u^{*,l}(t,x)=u(t,x;u^l(f),f)$ is an almost periodic solution.\nMoreover, by the above arguments, for any $u_0\\in X^{++}$,\n$\\omega(u_0,f)=\\omega(\\epsilon \\phi^l(\\bar a),f)$ and then\n$$\n\\lim_{t\\to\\infty}\\|u(t,\\cdot;u_0,f)-u^{*,l}(t,\\cdot)\\|=0.\n$$\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Comparison principal for free boundary problems}\n\nIn order for later application, we need a comparison principle which can\nbe used to estimate both $u(t,x)$ and the free boundary $x=h(t)$.\n\\begin{proposition}\n\\label{comparison-principle} Suppose that $T\\in (0,\\infty)$, $\\bar{h}\n\\in C^{1}([0,T])$, $\\bar{u}\\in C(\\bar{D}^{*}_{T})\\cap C ^{1,2}(D^{*}_{T})$\nwith $D^{*}_{T}=\\{(t,x)\\in\\mathbb R^{2}:00, 00, \\cr\n\\bar{u}_x(t,0)\\leq 0,u(t,\\bar{h}(t))=0,\\quad &t>0.\n\\end{cases}\n\\end{equation*}\nIf $h_{0}\\leq \\bar{h}(0)$ and $u_{0}(x)\\leq\\bar{u}(0,x)$ in $[0,h_{0}]$,\nthen the solution $(u,h)$ of the free boundary problem \\eqref{main-eq}\nsatisfies\n$$h(t)\\leq \\bar{h}(t) \\ for\\ all\\ t\\in(0,T],\\ u(t,x)\\leq\\bar{u}(x,t) \\ for \\\nt\\in(0,T]\\ and\\ x\\in(0,h(t)).$$\n\n\\begin{proof}\nThe proof of this Proposition is similar to that of Lemma 3.5 in\n\\cite{DuLi} and Lemma 2.6 in \\cite{DuGu}.\n\\end{proof}\n\n\\begin{remark}\nThe pair $(\\bar{u},\\bar{h})$ in Proposition \\ref{comparison-principle}\nis called an upper solution of the free boundary problem. 
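For instance, following the construction in \\cite{DuLi}, one may take\n$$\n\\bar{h}(t)=h_0\\Big(1+\\delta-\\frac{\\delta}{2}e^{-\\gamma t}\\Big),\\qquad \\bar{u}(t,x)=Me^{-\\gamma t}\\cos\\Big(\\frac{\\pi x}{2\\bar{h}(t)}\\Big),\\quad 0\\le x\\le \\bar{h}(t),\\,\\, t\\ge 0,\n$$\nwhich clearly satisfies $\\bar{u}(t,\\bar{h}(t))=0$ and $\\bar{u}_x(t,0)=0$; one can check that, for suitable choices of $\\delta,\\gamma>0$ and $M$ sufficiently large (depending on $f$, $u_0$, and $h_0$), such a pair is an upper solution in the above sense provided $h_0$ and $\\mu$ are suitably small.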
We can\ndefine a lower solution by reversing all the inequalities in the\nobvious places.\n\n\\end{remark}\n\n\\end{proposition}\n\n\\begin{proposition}\n\\label{comparison-principle1} Suppose that $T\\in (0,\\infty)$, $\\bar{g}, \\bar{h}\n\\in C^{1}([0,T])$, $\\bar{u}\\in C(\\bar{D}^{*}_{T})\\cap C ^{1,2}(D^{*}_{T})$\nwith $D^{*}_{T}=\\{(t,x)\\in\\mathbb R^{2}:00, \\bar{g}(t)0 \\cr\n\\bar{u}(t, \\bar{g}(t))=0, \\bar{g}^{'}(t)\\leq -\\mu\\bar{u}_x(t,\\bar{g}(t)),\\quad &t>0.\n\\end{cases}\n\\end{equation*}\nIf $[g_0, h_0] \\subset [\\bar{g}(0), \\bar{h}(0)]$ and $u_{0}(x)\\leq\\bar{u}(0,x)$ in $[g_0,h_0]$,\nthen the solution $(u,g,h)$ of the free boundary problem \\eqref{main-doub-eq} satisfies\n$$g(t)\\ge \\bar{g}(t), h(t)\\leq \\bar{h}(t) \\ for\\ all\\ t\\in(0,T],\\ u(t,x)\\leq\\bar{u}(x,t) \\ for \\\nt\\in(0,T]\\ and\\ x\\in(g(t),h(t)).$$\n\n\\begin{proof}\nThe proof of this Proposition only requires some obvious modifications as in\nProposition \\ref{comparison-principle}.\n\\end{proof}\n\n\\end{proposition}\n\n\n\\begin{proposition}\n\\label{global-existence} For any given $h_0>0$ and $u_0\\ge\n0$, $(u(t,x;u_0,h_0), h(t;u_0,h_0))$ exists for all $t\\ge 0$.\n\\end{proposition}\n\n\\begin{proof}\nThe proof is similar to that of Theorem 4.3 in \\cite{DuGu}.\n\\end{proof}\n\n\\begin{remark}\nFrom the uniqueness of the solution to \\eqref{main-eq} and\nsome standard compactness argument, we can obtain that the\nunique solution $(u,h)$ depends continuously on $u_{0}$ and\nthe parameters appeared in \\eqref{main-eq}.\n\\end{remark}\n\n\n\n\\section{Asymptotic Dynamics of Diffusive KPP Equations on\n Fixed Unbounded Domain and Proof of Theorem \\ref{main-thm1}}\n\n\nIn this section, we consider the asymptotic dynamics of \\eqref{aux-main-eq1} and\nprove Theorem \\ref{main-thm1}. Throughout this section, we assume that $f$ satisfies (H1)-(H5).\nWe let $\\tilde X$ be as in \\eqref{tidle-x-space} and\n$u(t,\\cdot;u_0,g)$ be the solution of \\eqref{unbounded-eq3} with $u(0,\\cdot;u_0,g)=u_0(\\cdot)\\in\\tilde X$.\nThe main results of this section are stated in the following proposition.\n\n\n\\begin{proposition}\n\\label{unbounded-prop1}\nAssume (H1)-(H5). There is $u^*:H(f)\\to \\tilde X^{++}$ satisfying the following properties.\n\\begin{itemize}\n\\item[(1)] (Almost periodicity in time) $u^*(g)(x)$ is continuous in $g\\in H(f)$ in open compact topology with respect to $x$\n(that is, if $g_n\\to g$ in $H(f)$, then $u^*(g_n)(x)\\to u^*(g)(x)$ locally uniformly in $x$)\nand $u(t,x;u^*(g),g)=u^*(g\\cdot t)(x)$ (hence $u^*(g\\cdot t)(x)$ is an almost periodic solution of \\eqref{unbounded-eq3}).\n\n\\item[(2)] (Stability) For any $u_0\\in\\tilde X^{++}$,\n$$\n\\|u(t,\\cdot;u_0,g)-u^*(g(\\cdot+t,\\cdot,\\cdot))(\\cdot)\\|_{\\tilde X}\\to 0\n$$\nas $t\\to\\infty$ uniformly in $g\\in H(f)$.\n\n\\item[(3)] (Uniqueness) For given $g\\in H(f)$, if $\\tilde u^{*}(t,x)$ is an entire positive solution of \\eqref{unbounded-eq3}, and $\\inf_{t\\in\\RR,x\\in\\RR^+}\\tilde u^{*}(t,x)>0$, then $\\tilde u^{*}(t,x)=u(t,x;u^*(g),g)$.\n\n \\item[(4)] (Spatial homogeneity) If $f(t,x,u)\\equiv f(t,u)$, then $u^*(g)(x)$ is independent of $x$ and\n $V^*(t;g)=u^*(g\\cdot t)$ is the unique time almost periodic solution of\n \\begin{equation}\n \\label{ode-eq2}\n u_t=ug(t,u).\n \\end{equation}\n\\end{itemize}\n\\end{proposition}\n\n\n\\begin{proof} [Proof of Theorem \\ref{main-thm1}]\nLet $u^*(t,x)=u^*(f \\cdot t)(x)$, where $u^*(f \\cdot t)$ is as in\n Proposition \\ref{unbounded-prop1}. 
Theorem \\ref{main-thm1} then follows.\n \\end{proof}\n\n We remark that\n the existence and uniqueness of positive solutions which are bounded away from $0$\nof KPP equations in heterogeneous unbounded domains have been studied in \\cite{BeHaNa} (see \\cite[Propositions 1.7, 1.8]{BeHaNa}).\nThe almost periodicity and stability results in Proposition \\ref{unbounded-prop1} are new.\n\n\n\n\nTo prove Proposition \\ref{unbounded-prop1}, we first prove two lemmas.\n\nFor any $L\\ge L^*$ and $y\\ge y^*$, consider\n\\begin{equation}\n\\label{unbounded-domain-eq1}\n\\begin{cases}\nu_t=u_{xx}+u g^y(t,x,u),\\quad 00$ for $y\\ge y^*$. Then by the arguments of Proposition \\ref{fixed-boundary-prop1},\n\\eqref{unbounded-domain-eq1} has a unique time almost periodic\n positive solution $u^*(t,x;g,y,L)$.\nNote that\n$$\nu^*(t,x;g,y,L)=u^*(0,x;g\\cdot t,y,L).\n$$\n\n\n\\begin{lemma}\n\\label{unbounded-domain-lm2}\nAssume (H1)-(H5). Fix a $L\\ge L^*$. Then\n\\begin{equation}\n\\label{bounded-away-from-zero-eq1}\n\\inf_{y\\ge y^*, L\/4\\le x\\le 3L\/4, g\\in H(f)}u^*(0,x;g,y,L)>0.\n\\end{equation}\n\n\\end{lemma}\n\n\\begin{proof}\nAssume that \\eqref{bounded-away-from-zero-eq1} does not hold. Then there are $y_n\\ge y^*$, $g_n\\in H(f)$, and $x_n\\in [L\/4,3L\/4]$ such that\n$$\n\\lim_{n\\to\\infty} u^*(0,x_n;g_n,y_n,L)=0.\n$$\nBy (H5), without loss of generality, we may assume that\n$$\ng_n^{y_n}(t,x,u)\\to g^*(t,x,u)\n$$\nuniformly in $t\\in\\RR$ and $(x,u)$ in bounded sets.\n\n\nLet $a_n(t,x)=g_n^{y_n}(t,x,0)$ and $a^*(t,x)=g^*(t,x,0)$. By (H2), $g^*$ is almost periodic in $t$.\nThen\n$$\n\\tilde \\lambda(a_n,L)\\to\\tilde \\lambda(a^*,L)\n$$\nand hence $\\tilde \\lambda(a^*,L)>0$. Note that for any $\\epsilon>0$,\n$$\ng_n^{y_n}(t,x,u)\\ge g^*(t,x,u)-\\epsilon\\quad \\forall\\,\\, n\\gg 1,\\,\\, 0\\le x\\le L.\n$$\nand\n\\begin{equation*}\n\\label{unbounded-domain-eq2}\n\\begin{cases}\nu_t=u_{xx}+u\\big( g^*(t,x,u)-\\epsilon\\big),\\quad 00$. By comparison principle for parabolic equations,\nwe have\n$$\nu^*(t,x;g_n,y_n,L)\\ge \\tilde u^{L}(t,x)\\quad \\forall\\,\\, n\\gg 1.\n$$\nThis implies that\n$$\nu^*(0,x_n;g_n,y_n,L)\\not \\to 0\n$$\nas $n\\to\\infty$, which is a contradiction. Hence\n$$\\inf_{y\\ge y^*, L\/4\\le x\\le 3L\/4, g\\in H(f)}u^*(0,x;g,y,L)>0.$$\n\\end{proof}\n\n\\begin{lemma}\n\\label{unbounded-domain-lm3}\nAssume (H1)-(H5). Let $u_0\\equiv M(\\gg 1)$. Then\n$u(t,\\cdot;u_0,g\\cdot(-t))$ decreases as $t$ increases. Let $u^*(g)(x)=\\lim_{t\\to\\infty} u(t,x;u_0,g\\cdot(-t))$ for\n$x\\in [0,\\infty)$.\nThen $u(t,\\cdot;u^*(g),g)=u^*(g\\cdot t)(\\cdot)$ and $\\inf_{x\\in\\RR^+,g\\in H(f)}u^*(g)(x)>0$.\n\\end{lemma}\n\n\n\\begin{proof}\nFirst of all, by comparison principle for parabolic equations, we have $u(t,\\cdot;u_0,g)\\le u_0$ for any $t>0$ and $g\\in H(f)$. Hence\n$$u(t+s,\\cdot;u_0,g\\cdot(-t-s))=u(t,\\cdot;u(s,\\cdot;u_0,g\\cdot(-t-s)),g\\cdot(-t))\\le u(t,\\cdot;u_0,g\\cdot(-t))$$\nfor any $t,s\\ge 0$. Therefore, $u(t,\\cdot;u_0,g\\cdot(-t))$ decreases as $t$ increases.\nLet\n$$\nu^*(g)(x)=\\lim_{t\\to \\infty}u(t,x;u_0,g\\cdot(-t))\\quad \\forall \\, x\\in [0,\\infty).\n$$\n\nNext, for any $g\\in H(f)$ and $y\\ge y^*$,\n$$\nu(t,x+y;u_0,g\\cdot(-t))\\ge u^*(t,x;g\\cdot(-t),y,L)\\quad \\forall\\,\\, 0 y^*+L\/4,g\\in H(f)}u^*(g)(x)>0$.\nChoose $l>y^*+L\/4$ and fix it. 
By Proposition \\ref{fixed-boundary-prop1},\n$$\nu(t,x;u_0,g\\cdot (-t))\\ge u^l(g)(x)\\quad {\\rm for}\\quad 0\\le x\\le l.\n$$\nNote that\n$$\n\\inf_{g\\in H(f),0\\le x\\le y^*+L\/4}u^l(g)(x)>0.\n$$\nIt then follows that\n$$\n\\inf_{x\\ge 0,g\\in H(f)} u^*(g)(x)>0.\n$$\n\nNow, note that\n$$\nu(s,x;u_0,g\\cdot(-s))\\to u^*(g)(x) \\ as \\ s\\to\\infty\n$$\nuniformly in bounded sets. This implies that\n\\begin{align*}\nu(t,x;u^*(g),g)&=\\lim_{s\\to\\infty} u(t,x;u(s,\\cdot;u_0,g\\cdot(-s)),g)\\\\\n&=\\lim_{s\\to\\infty}u(t+s,x;u_0,g\\cdot(-s))\\\\\n&=\\lim_{s\\to\\infty} u(t+s,x;u_0,(g\\cdot t)\\cdot(-t-s))\\\\\n&=u^*(g\\cdot t)(x)\n\\end{align*}\nuniformly in bounded sets. The lemma is thus proved.\n\\end{proof}\n\n\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{unbounded-prop1}]\n(1) Let $u^*(g)$ be as in Lemma \\ref{unbounded-domain-lm3} for $g\\in H(f)$. We prove that $g\\mapsto u^*(g)$ satisfies the conclusions in (1).\n\nFirst, assume that $g_n\\to g^*$ as $n\\to\\infty$. By regularity and\na priori estimates for parabolic equations, there is\n$n_k\\to \\infty$ such that\n$$\nu^*(g_{n_k})(x)\\to u^{**}(x)\n$$\nuniformly in bounded sets.\nWe prove that\n$u^{**}(x)=u^*(g^*)(x)$. Suppose that $u^{**}(x)\\not \\equiv u^*(g^*)(x)$. Note that\n$u(t,x;u^{**},g^*)$ and $u(t,x;u^*(g^*),g^*)$ exist globally (i.e. exist for all $t\\in\\RR$)\nand\n$$\\inf u(t,x;u^{**},g^*)>0,\\quad \\inf u(t,x;u^*(g^*),g^*)>0.\n$$\n Therefore,\n $$\n \\sup_{t\\in\\RR} \\rho(u(t,\\cdot;u^{**},g^*),u(t,\\cdot;u^*(g^*),g^*)<\\infty\n $$\n and there is $\\rho^*>0$ such that\n$$\n\\rho(u^{**}(\\cdot),u^*(g)(\\cdot))=\\rho^*.\n$$\nThen by Proposition \\ref{part-metric-prop}(2),\nfor any $\\tau>0$ there is $\\delta>0$ such that\n\\begin{align*}\n\\rho^*\\leq\\rho(u(-n\\tau,\\cdot;u^{**},g^*),u(-n\\tau,\\cdot;u^*(g^*),g^*))-n\\delta\n\\ for \\ n\\in\\NN.\n\\end{align*}\nLetting $n\\to\\infty$, we get a contradiction.\nHence $u^{**}(\\cdot)=u^*(g^*)(\\cdot)$ and\n$u^*(g)(x)$ is continuous in $g$ in open compact topology with respect to $x$.\n\nNext, by Lemma \\ref{unbounded-domain-lm3},\n we have that, for any $g\\in H(f)$, $u(t,\\cdot;u^*(g),g)=u^*(g\\cdot t)(\\cdot)$.\n\n\nWe prove now that $u^*(g\\cdot t)(x)$ is almost periodic in $t$ uniformly in $x$ in bounded sets.\nNote that for any given $\\{\\alpha_n^{'}\\}\\subset\\RR$ and $\\{\\beta_n^{'}\\}\\subset\\RR$,\nthere are $\\{\\alpha_n\\}\\subset\\{\\alpha_n^{'}\\}$ and $\\{\\beta_n\\}\\subset\\{\\beta_n^{'}\\}$ such that\n$\\lim_{n\\to\\infty}\\lim_{m\\to\\infty}g(t+\\alpha_n+\\beta_m,x,u)=\\lim_{n\\to\\infty}g(t+\\alpha_n+\\beta_n,x,u)$ for $(t,x,u)\\in\\RR^3$.\nAssume $\\lim_{m\\to\\infty}g(t+\\beta_m,x,u)=g^*(t,x,u)$ and $g^{**}(t,x,u)=\\lim_{n\\to\\infty}g(t+\\alpha_n+\\beta_n,x,u)$.\nIt then follows that\n$$\n\\lim_{m\\to\\infty} u(t+\\beta_m,x;u^*(g),g)=u^*(g^*\\cdot t)(x)\n$$\nuniformly in $x$ in bounded sets,\n\\begin{align*}\n\\lim_{n\\to\\infty}\\lim_{m\\to\\infty} u(t+\\alpha_n+\\beta_m,x;u^*(g),g)&=\\lim_{n\\to\\infty} u(\\alpha_n,x;u^*(g^*\\cdot t),g^*\\cdot t)\\\\\n&=\\lim_{n\\to\\infty}u(t,x;u^*(g^*\\cdot\\alpha_n),g^*\\cdot \\alpha_n)\\\\\n&=u^*(g^{**}\\cdot t)(x)\n\\end{align*}\nuniformly in $x$ in bounded sets, and\n$$\n\\lim_{n\\to\\infty} u(t+\\alpha_n+\\beta_n,x;u^*(g),g)=u^*(g^{**}\\cdot t)(x)\n$$\nuniformly in $x$ in bounded set. 
Therefore\n$\\lim_{n\\to\\infty}\\lim_{m\\to\\infty} u(t+\\alpha_n+\\beta_m,x;u^*(g),g)=\\lim_{n\\to\\infty} u(t+\\alpha_n+\\beta_n,x;u^*(g),g)$.\nBy regularity and a priori estimates for parabolic equations,\n$u(t,x;u^*(g),g)$ is uniformly continuous in $t\\in\\RR$ and $x\\in\\RR^+$.\nHence, $u^*(g\\cdot t)(x)$ is almost periodic in $t$ uniformly in $x$ in bounded set.\n\n(2) For any $u_0\\in\\tilde{X}^{++}$ and $g\\in H(f)$.\nBy Proposition \\ref{part-metric-prop}(2),\n$\\rho(u(t,\\cdot;u_0,g),u^*(g\\cdot t)(\\cdot))$ decreases as $t$ increases.\nIt suffices to prove that\n$\\lim_{t\\to\\infty}\\rho(u(t,\\cdot;u_0,g),u^*(g\\cdot t)(\\cdot))=0$.\nSuppose that this is not true. Then there are $t_n\\to\\infty$, $g^*\\in H(f)$, $u^{**},\\tilde u^{**}\\in \\tilde X^{++}$\nwith $u^{**}\\neq\\tilde u^{**}$\n such that $g\\cdot t_n\\to g^*$,\n$u^*(g\\cdot t_n)(x)\\to u^{**}(x)$ and $u(t_n,\\cdot;u_0,g)\\to \\tilde u^{**}(x)$ locally uniformly in $x\\ge 0$.\nNote that $u(t,\\cdot;u^{**},g^*)$ and $u(t,\\cdot;\\tilde u^{**},g^*)$ exists for all $t\\in\\RR$,\n$$\n\\sup_{t\\in\\RR}\\rho(u(t,\\cdot;u^{**},g^*),u(t,\\cdot;\\tilde u^{**},g^*))<\\infty,\n$$\nand there is $\\rho^*>0$ such that\n$$\n\\rho(u^{**},\\tilde u^{**})=\\rho^*.\n$$\nBy the arguments in (1) and Proposition \\ref{part-metric-prop}(2), $u^{**}=\\tilde u^{**}$, a contradiction. Therefore\n$$\\lim_{t\\to\\infty}\\rho(u(t,\\cdot;u_0,g),u^*(g\\cdot t)(\\cdot))=0$$ and then\n$$\n\\lim_{t\\to\\infty} \\|u(t,x;u_0,g)-u^*(g)(x)\\|_{\\tilde X}=0\n$$\nuniformly in $g\\in H(f)$.\n\n(3) Suppose that $\\tilde u^{*}(t,x)$ is an entire positive solution of \\eqref{unbounded-eq3}, and $\\inf_{t\\in\\RR,x\\in\\RR^+}\\tilde u^{*}(t,x)>0$.\nAssume $\\tilde u^*(0,x)\\not \\equiv u^*(g)(x)$. By the arguments in (1) and Proposition \\ref{part-metric-prop}(2), there is\n$\\delta_1>0$ such that\n$$\n\\rho(\\tilde u^*(0,\\cdot),u^*(g)(\\cdot))\\le \\rho(\\tilde u^*(-n,\\cdot),u^*(g\\cdot(-n))(\\cdot))-n\\delta_1\n$$\nfor $n\\ge 1$. Letting $n\\to\\infty$, we get a contradiction. Therefore $\\tilde u^*(0,x)\\equiv u^*(g)(x)$ and then $\\tilde u^*(t,x)\\equiv\nu^*(g\\cdot t)(x)$.\n\n(4) It follows from the fact that, if $f(t,x,u)\\equiv f(t,u)$, then for $u_0\\equiv M$, $u(t,x;u_0,g)$ is independent of $x$.\n\\end{proof}\n\n\n\n\n\\section{Spreading-Vanishing Dichotomy in Diffusive KPP Equations with a Free Boundary and Proof of Theorem \\ref{main-thm2}}\n\nIn this section, we study the spreading\/vanishing scenario of \\eqref{main-eq} and prove Theorem \\ref{main-thm2}.\nThroughout this section, we assume (H1)-(H5).\n\nWe first prove a lemma. For any given $h_0>0$ and $u_0$ satisfying \\eqref{initial-value},\nrecall that $(u(t,x;u_0,h_0)$, $h(t;u_0,h_0))$ is the solution of \\eqref{main-eq},\nand $x=h(t;u_0,h_0)$ is increasing, and therefore\nthere exists $h_{\\infty}\\in(0,+\\infty]$ such that\n$\\lim_{t\\to+\\infty}h(t;u_0,h_0)=h_{\\infty}$.\n To stress the dependence of $h(t;u_0,h_0)$ on $\\mu$, we now\nwrite $h_{\\mu}(t;u_0,h_0)$ instead of $h(t;u_0,h_0)$ and $h_\\infty(\\mu)$\ninstead of $h_\\infty$ in the following. If no confusion occurs, we write $h_\\mu(t;u_0,h_0)$ as $h_\\mu(t)$.\n\n\n\\begin{lemma}\n\\label{monotone-increase}\nFor any $t\\in(0,+\\infty)$, $h_{\\mu}(t)$ is a strictly\n increasing function of $\\mu$.\n\\end{lemma}\n\n\\begin{proof}\nSuppose $0<\\mu_{1}<\\mu_{2}$. 
Let $(u_{1},h_{\\mu_{1}})$\nand $(u_{2},h_{\\mu_{2}})$ are the solutions of the following\nfree boundary problems\n\\begin{equation*}\n\\label{free-eq1}\n\\begin{cases}\nu_t=u_{xx}+uf(t,x,u),\\quad &t>0, 00 \\cr\nu_x(t,0)=u(t,h_{\\mu_{1}}(t))=0,\\quad &t>0 \\cr\nh_{\\mu_{1}}(0)=h_{0},u(0,x)=u_{0}(x),\\quad &00, 00 \\cr\nu_x(t,0)=u(t,h_{\\mu_{2}}(t))=0,\\quad &t>0 \\cr\nh_{\\mu_{2}}(0)=h_{0},u(0,x)=u_{0}(x),\\quad &00$ such\nthat $h_{\\mu_{1}}(t)0$ in $\\Omega_{t^{*}}$ with\n$w(t^{*},h_{\\mu_{1}}(t^{*}))=0$. It follows that\n$w_{x}(t^{*},h_{\\mu_{1}}(t^{*}))< 0$, from which\nwe deduce, in view of\n$(u_{1})_{x}(t^{*},h_{\\mu_{1}}(t^{*}))<0$ and\n$\\mu_{1}<\\mu_{2}$, that\n$$-\\mu_{1}(u_{1})_{x}(t^{*},h_{\\mu_{1}}(t^{*}))<\n-\\mu_{2}(u_{2})_{x}(t^{*},h_{\\mu_{2}}(t^{*})).\n$$\nThus $h'_{\\mu_{1}}(t^{*})0$.\n\\end{proof}\n\n\\begin{remark}\nIf we consider \\eqref{main-doub-eq}, for any $t\\in(0,+\\infty)$,\nby Proposition \\ref{comparison-principle1} and using the same\nargument as Lemma \\ref{monotone-increase} we have $g_\\mu(t)$\nis a strictly monotone decreasing function of $\\mu$.\n\\end{remark}\n\nWe now prove Theorem \\ref{main-thm2}.\n\n\\begin{proof}[Proof of Theorem \\ref{main-thm2}]\n\n(1)(i) Suppose that $h_\\infty<\\infty$. First of all, we claim that\n $h^{'}(t;u_0,h_0)\\to 0$ as $t\\to\\infty$. Assume that the claim is not true. Then there is $t_n\\to \\infty$ ($t_n\\ge 1$) such that\n $\\lim_{n\\to\\infty}h^{'}(t_n;u_0,h_0)>0$. Let $h_n(t)=h(t+t_n;u_0,h_0)$ for $t\\ge -1$.\n Note that $h_n(t)\\to h_\\infty$ as $n\\to\\infty$ uniformly in $t\\ge -1$.\n By \\cite[Theorem 2.1]{DuLi}, $\\{h_n^{'}(t)\\}$ is uniformly bounded and equicontinuous on $[-1,\\infty)$. We then may assume that\n there is a continuous function $h^*(t)$ such that $h_n^{'}(t)\\to h^*(t)$ as $n\\to\\infty$ uniformly in $t$ in bounded sets of $[-1,\\infty)$.\n It then follows that $h^*(t)=\\frac{d h_\\infty}{dt}\\equiv 0$ and then $\\lim_{n\\to\\infty}h^{'}(t_n;u_0,h_0)=0$, which is a contradiction.\n Hence the claim holds.\n\n\n\n\n By regularity and a priori estimates for parabolic equations,\n for any sequence $t_n\\to\\infty$, there are $t_{n_k}\\to\\infty$ and\n$u^*\\in C^1(\\RR\\times [0,h_\\infty])$ and $g^*\\in H(f)$ such that\n$$\nf\\cdot t_{n_k}\\to g^*\n$$\nand\n$$\n\\|u(t+t_{n_k},\\cdot;u_0,h_0)-u^*(t,\\cdot)\\|_{C^1([0,h(t+t_{n_k})])}\\to 0\n$$\nas $t_{n_k}\\to\\infty$. 
Moreover, we have that $u^*(t,x)$ is an entire solution\nof\n\\begin{equation}\n\\label{limit-eq1}\n\\begin{cases}\nu_t=u_{xx}+u g^*(t,x,u),\\quad 00$ such that $h(t)>h_{\\infty}-\\epsilon>l^*$\nfor all $t\\geq\\tilde{T}$ and some small $\\epsilon>0$.\nConsider\n\\begin{equation}\n\\label{limit-eq2}\n\\begin{cases}\nv_t=v_{xx}+vf(t,x,v),\\quad 0< x< h_{\\infty}-\\epsilon \\cr\nv_{x}(t,0)=v(t,h_{\\infty}-\\epsilon)=0.\n\\end{cases}\n\\end{equation}\nBy comparison principle for parabolic equations,\n$$\nu(t+\\tilde T,\\cdot;u_0,h_0)\\ge v(t+\\tilde{T},\\cdot;u(\\tilde T,\\cdot;u_0,h_0),\\tilde T)\\quad {\\rm for}\\quad t\\ge 0,\n$$\nwhere $v(t+\\tilde{T},\\cdot;u(\\tilde T,\\cdot;u_0,h_0),\\tilde T)$ is the solution of \\eqref{limit-eq2}\nwith $u(\\tilde T,\\cdot;u(\\tilde T,\\cdot;u_0,h_0),\\tilde T)=u(\\tilde T,\\cdot;u_0,h_0)$.\nBy Proposition \\ref{fixed-boundary-prop1}, \\eqref{limit-eq2} has a unique time almost periodic positive solution $v_{h_{\\infty}-\\epsilon}(t,x)$.\nMoreover, for any $v_0\\ge 0$ and $v_0\\not\\equiv 0$,\n\\begin{equation}\n\\label{limit-eq3}\n\\|v(t+\\tilde T,\\cdot;v_0,\\tilde T)-v_{h_{\\infty-\\epsilon}}(t+\\tilde T,\\cdot)\\|\\to 0\n\\end{equation}\nas $t\\to\\infty$. By \\eqref{limit-eq3} and\ncomparison principle for parabolic equations,\n$$\nu^*(t,x)>0\\quad \\forall\\,\\, t\\in\\RR,\\,\\, x\\in (0,h_\\infty).\n$$\nIt then follows that $u^*_x(t,h_\\infty)<0$. This implies that\n$$\n\\limsup_{t\\to\\infty}u_x(t,h(t);u_0,h_0)<0\n$$\nand then\n$$\n\\liminf h^{'}(t)=\\liminf_{t\\to\\infty} -\\mu u_x(t,h(t);u_0,h_0)>0,\n$$\nwhich is a contradiction. Therefore $h_\\infty\\le l^*$.\n\n\n\n\n\nWe now show that $\\lim_{t\\to\\infty}\\|u(t,\\cdot;u_0,h_0)\\|\n_{C([0,h(t)])}=0$. Let $\\bar{u}(t,x)$\ndenote the solution of the problem\n\\begin{equation*}\n\\label{fixed-boundary-eq14}\n\\begin{cases}\n\\bar{u}_t=\\bar{u}_{xx}+\\bar{u}f(t,x,\\bar{u}),\\quad &t>0, 00 \\cr\n\\bar{u}(0,x)=\\tilde{u}_{0}(x), \\quad &0\\leq x\\leq h_{\\infty}\n\\end{cases}\n\\end{equation*}\nwhere\n$$\\tilde{u}_{0}(x)=\n\\begin{cases}\nu_{0}(x) & \\ for \\ 0\\leq x\\leq h_{0} \\\\\n0 & \\ for \\ x> h_{0}\n\\end{cases}\n$$\nThe comparison principle implies that\n$$0\\leq u(t,x;u_0,h_0)\\leq\\bar{u}(t,x) \\ \\ for \\ t>0 \\ and \\\nx\\in[0,h(t)]$$\n\nIf $h_{\\infty}0,\n$$\nwhich is a contradiction again.\n\n\n\n(1)(ii)\nFirst note that for any fixed $x$, $u^l(g)(x)$ is increasing in $l$ and $u^l(g)(x)\\le u^*(g)(x)$.\nThen there is $\\tilde u^*(g)(x)$ such that\n$$\n\\lim_{l\\to\\infty}u^l(g)(x)=\\tilde u^*(g)(x)\\le u^*(g)(x)\n$$\nlocally uniformly in $x$.\n\nWe claim that\n$$\n\\tilde u^*(g)(x)\\equiv u^*(g)(x).\n$$\nIn fact, by Lemma \\ref{unbounded-domain-lm2},\n$$\n\\inf_{x\\ge 0,g\\in H(f)}\\tilde u^*(g)(x)>0.\n$$\nNote that $u(t,x;\\tilde u^*(g),g)=\\tilde u^*(g\\cdot t)(x)$. Then by Proposition \\ref{unbounded-prop1},\n $u^*(g)(x)\\equiv \\tilde u^*(g)(x)$.\n\n\nNote that for any $T>0$ satisfying $h(T)>l^*$,\n$$\nu(t+T,x;u_0,h_0)\\ge u^l(t,x;u(T,\\cdot;u_0,h_0),f\\cdot T)\\quad \\forall\\,\\, t\\ge 0,\n$$\nwhere $u^l(t,x;u(T,\\cdot;u_0,h_0),f\\cdot T)$ is the solution of \\eqref{fixed-boundary-eq3} with\n$g=f\\cdot T$, $l=h(T;u_0,h_0)$, and $u^l(0,x;u(T,\\cdot;u_0,h_0),f\\cdot T)=u(T,x;u_0,h_0)$.\nNote also that\n$$\nu^l(t,x;u(T,\\cdot;u_0,h_0),f\\cdot T)-u^l(f\\cdot (t+T))(x)\\to 0\n$$\nas $t\\to\\infty$ uniformly in $x\\in [0,l]$ and\n$$\nu^l(f\\cdot (t+T))(x)-u^*(f\\cdot(t+T))(x)\\to 0\n$$\nas $l\\to\\infty$ locally uniformly in $x\\in [0,\\infty)$. 
It then follows that\n$$\nu(t,x;u_0,h_0)-u^*(f\\cdot t)(x)\\to 0\n$$\nas $t\\to\\infty$ locally uniformly in $x\\in [0,\\infty)$.\n\n\n\n(2) If $h_0\\ge l^*$, then $h_\\infty> h_0\\ge l^*$. (2) then follows from (1).\n\n\n(3) Assume that $h_{0}0$\nsuch that $h_{\\mu^{*}}(T)>l^*$. By the continuous dependence of\n$h_\\mu$ on $\\mu$, there is $\\epsilon>0$ small such that\n$h_\\mu(T)>l^*$ for all $\\mu\\in[\\mu^{*}-\\epsilon,\\mu^{*}+\\epsilon]$.\nHence, for all such $\\mu$ we have\n$$h_\\infty(\\mu)=\\lim_{t\\to\\infty}h_\\mu(t)>h_\\mu(T)>l^*$$\nThus, $h_\\infty(\\mu)=\\infty$. This implies that\n$[\\mu^{*}-\\epsilon,\\mu^{*}+\\epsilon]\\cap\\{\\mu|h_\\infty(\\mu)<\\infty\\}=\\emptyset$,\nand it is a contradiction to the definition of $\\mu^{*}$. So we proved the claim\nthat $\\mu^{*}\\in\\{\\mu|h_\\infty(\\mu)<\\infty\\}$.\n\nFor $\\mu>\\mu^{*}$, we get $h_\\infty(\\mu)=\\infty$. If not,\nit must have $\\mu\\leq\\mu^{*}$, and it is a contradiction.\nThen spreading happens.\n\nFor $\\mu\\leq\\mu^{*}$, by the Lemma \\ref{monotone-increase}\nwe can obtain\n$$h_\\mu(t)\\leq h_{\\mu^{*}}(t) \\ for \\ all \\ t\\in(0,+\\infty)$$\nIt follow that $h_\\infty(\\mu)\\leq h_\\infty(\\mu^{*})<\\infty$,\nand vanishing happens.\n\\end{proof}\n\n\n\\section{Remarks}\nWe have examined the dynamical behavior of the population $u(t,x)$\nwith spreading front $x=h(t)$ determined by \\eqref{main-eq}, and\nproved that for this problem a spreading-vanishing dichotomy holds\n(see Theorem \\ref{main-thm2}). In this section, we discuss how the techniques for \\eqref{main-eq} can be modified to study the\n double fronts free boundary \\eqref{main-doub-eq}.\n\nFirst, note that the existence and uniqueness results for solutions of \\eqref{main-eq} with given initial datum $(u_0,h_0)$\n can be\nextended to \\eqref{main-doub-eq} using the same arguments as in Section 5\n\\cite{DuLi}, except that we need to modify the transformation in the\nproof of Theorem 2.1 in \\cite{DuLi} such that both boundaries are\nstraightened. In particular, for given\n$g_00$.}\n\\medskip\n\n\n\\noindent Consider the following fixed boundary problem on $\\RR^1$,\n\\begin{equation}\n\\label{glob-eq}\nu_t=u_{xx}+uf(t,x,u) \\quad x\\in(-\\infty,\\infty).\n\\end{equation}\nFor given $u_0\\in C^b_{\\rm unif}(\\RR,\\RR^+)$, let $u(t,x;u_0)$ be the solution of\n\\eqref{glob-eq} with $u(0,x;u_0)=u_0(x)$.\nBy the similar arguments as those in Theorem \\ref{main-thm1}, we can prove\n\n\\begin{proposition}\n\\label{positive-almost-periodic-solution-doub-prop}\nAssume (H1), (H2), $ (H4)^*$, and (H5).\n \\eqref{glob-eq} has a unique time almost periodic positive solution\n$u^*(t,x)$ and for any $u_0\\in C^b_{\\rm unif}(\\RR,\\RR^+)$ with\n$\\inf_{x\\in(-\\infty,\\infty)}u_0(x)>0$,\n$\\lim_{t\\to\\infty}\\|u(t,\\cdot;u_0)-u^*(t,\\cdot)\\|_{C(\\RR)}=0$.\n\\end{proposition}\n\nWe now have the following spreading-vanishing dichotomy for \\eqref{main-doub-eq}.\n\n\\begin{proposition}\n\\label{spreading-vanishing-doub-prop}\nAssume (H1), (H2), $(H4)^*$, and (H5). 
For given $h_0>0$ and $u_0$ satisfying \\eqref{initial-value-1},\nthe following hold.\n\\begin{itemize}\n\\item[(1)]\nEither\n\n\n(i) $h_\\infty-g_\\infty\\le L^*$ and\n$\\lim_{t\\to+\\infty} u(t,x;u_0,h_0,g_0)=0$ uniformly in $x$\n\nor\n\n\n(ii) $h_{\\infty}=-g_{\\infty}=\\infty$ and\n$\\lim_{t\\to\\infty}[u(t,x;u_0,h_0,g_0)-u^*(t,x)]=0$\nlocally uniformly for $x\\in(-\\infty,\\infty)$, where $u^*(t,x)$ is the\nunique time almost periodic positive solution of \\eqref{glob-eq}.\n\n\n\n\\item[(2)] If $h_0-g_0\\ge L^*$, then $h_\\infty=-g_{\\infty}=\\infty$.\n\n\\item[(3)]\nSuppose $h_0-g_00$\nsuch that spreading occurs if $\\mu>\\mu^{*}$ and vanishing occurs if $\\mu\\le \\mu^{*}$.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}\n(1) Observe that we have either $h_\\infty-g_\\infty<\\infty$ or $h_\\infty-g_\\infty=\\infty$.\n\nSuppose that $h_\\infty-g_\\infty<\\infty$. By $(H4)^*$ and the similar arguments as those in Theorem \\ref{main-thm2}(1)(i),\nwe must have $h_\\infty-g_\\infty\\le L^*$ and $u(t,x;u_0,h_0,g_0)\\to 0$ as $t\\to\\infty$.\n\nSuppose that $h_\\infty-g_\\infty=\\infty$. We first claim that $h_\\infty=-g_\\infty=\\infty$.\nIn fact, if the claim does not hold, without loss of generality, we may assume that $-\\infty0$ be such that $h(T^*)-g(T^*)>L^*$. Then by $(H4)^*$,\n$$\n\\inf_{t>T^*,x\\in [g(T^*),h(T^*)]}u(t,x;u_0,h_0,g_0)>0.\n$$\nLet $t_n\\to\\infty$ be such that $f(t+t_n,x,u)\\to g^*(t,x,u)$ and $u(t+t_n,x;u_0,h_0,g_0)\\to u^*(t,x)$.\nThen $u^*(t,x)$ is the solution of\n$$\n\\begin{cases}\nu_t=u_{xx}+u g^*(t,x,u),\\quad g_\\infty0$. Then by Hopf Lemma for parabolic equations,\n$$\nu^*_x(t,g_\\infty)>0.\n$$\nThis implies that\n$$g^{'}(t+t_n;u_0,h_0,g_0)\\to -\\mu u^*_x(t,g_\\infty)<0,\n$$\nwhich contradicts to the fact that $g^{'}(t;u_0,h_0,g_0)\\to 0$ as $t\\to\\infty$. Hence $(g_\\infty,h_\\infty)=(-\\infty,\\infty)$.\nBy the similar arguments as those in Theorem \\ref{main-thm2}(1)(ii), we have\n$$\n\\lim_{t\\to\\infty}[u(t,x;u_0,h_0,g_0)-u^*(t,x)]=0\n$$\nlocally uniformly in $x$ in bounded sets.\n\n(2) and (3) follows from the similar arguments as those in Theorem \\ref{main-thm2} (2) and (3), respectively.\n\\end{proof}\n\n\\section*{Acknowledgements}\n\nFang Li would like to thank the China Scholarship Council for\nfinancial support during the two years of her overseas study and to\nexpress her gratitude to the Department of Mathematics and\nStatistics, Auburn University for its kind hospitality. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDue to the huge bandwidth, millimeter wave (mmWave) communications have been recognized as one of the key technologies to meet the demand for unprecedentedly high data rate transmission in the future mobile networks \\cite{A. L. Swindlehurst}. By equipping large-scale antenna arrays, massive multiple-input multiple-output (MIMO) can provide sufficiently large array gains for spatial multiplexing and beamforming \\cite{L. Lu}. MmWave massive MIMO communications can obtain the merits of both of them and thus have attracted significant interest \\cite{B. Wang}. However, the expensive and power-hungry hardwares used in mmWave bands become the main obstacle to equipping a dedicated radio frequency (RF) chain for each antenna. The mainstream solution for this problem is to use the two-stage hybrid architecture, where a large number of antennas are connected to much fewer RF chains via phase shifters \\cite{L. Liang}, \\cite{L. 
Pan}.\n\n\\subsection{Related Work}\n\nFor mmWave massive MIMO systems with the hybrid architecture, both the analog and digital processing should be carefully designed to achieve the comparable performance to the fully-digital systems. In \\cite{L. Liang}, a low-complexity hybrid precoding scheme at the base station (BS) has been proposed for the massive MIMO downlink with single-antenna users. The hybrid architecture has been further introduced to the user side in \\cite{W. Ni}, where hybrid block diagonalization (HBD) has been used for the analog and digital processing design. By exploiting the sparsity of mmWave channels, the hybrid precoding and combining at both the transmitter and receiver have been optimized in \\cite{O. E. Ayach}. The heuristic hybrid beamforming design in \\cite{F. Sohrabi} can approach the performance of the fully-digital architecture. The alternating minimization algorithms for both fully-connected and sub-connected hybrid architectures in \\cite{X. Yu} are with low complexity and limited performance loss. In \\cite{L. Zhao}, the hybrid processing along with channel estimation has been designed and analyzed for both the sparse and non-sparse channels. The uniform channel decomposition and nonlinear digital processing have been introduced in \\cite{Y. Lin} for hybrid beamforming design. In the existing works, the hybrid processing matrices at the transmitter and receiver are usually optimized separately due to the intractability of the joint optimization with non-convex constraints, which makes the further performance improvement possible with joint optimization.\n\nDeep learning (DL) has achieved great success in various fields, including computer vision \\cite{K. He_a}, speech signal processing \\cite{A. Graves}, natural language processing \\cite{R. Collobert}, and so on, due to its unique ability in extracting and learning inherent features. It has been recently introduced to wireless communications and shown quite powerful in the optimization of communication systems \\cite{T. O'hea}--\\cite{G. Gui} and resource allocation \\cite{L. Liang_b}--\\cite{Z. Yang}. In \\cite{H. Ye_a}, DL has been successfully applied in pilot-assisted signal detection for orthogonal frequency division multiplexing (OFDM) systems with non-ideal transceiver and channel conditions. For wideband mmWave massive MIMO systems in time-varying channels, channel correlation has been exploited by deep convolutional neural network (CNN) in \\cite{P. Dong_b} to improve the accuracy and accelerate the computation for the channel estimation. Deep neural network (DNN) has been utilized in \\cite{S. Gao} to model the mapping relationship among antennas for reliable channel estimation in massive MIMO systems with mixed-resolution ADCs. An autoencoder-like DNN has been developed in \\cite{C.-K. Wen} to reduce the overhead for channel state information (CSI) feedback in the frequency duplex division massive MIMO system. In \\cite{C. Lu_a}, CNN has been utilized in CSI compression and uncompression to significantly improve the recovery accuracy. By combining the residual network and CNN, an efficient channel quantization scheme has been proposed from the perspective of bit-level in \\cite{C. Lu_b}. The DL based end-to-end optimization has been developed in \\cite{S. Dorner} and \\cite{H. Ye_b} by breaking the block structures at the transceiver. DL has been recently used to design the hybrid processing matrices for massive MIMO systems with various transceiver architectures \\cite{H. 
Huang}--\\cite{X. Bao}. In \\cite{H. Huang}, the analog and digital precoder design has been modeled as the DNN mapping based on geometric mean decomposition. In \\cite{T. Lin}, DNN has been applied to design the analog precoder for massive multiple-input single-output (MISO) systems. Deep CNN has been applied to learn the phases of the analog precoder and combiner for mmWave massive MIMO systems in \\cite{A. M. Elbir}. For the same system, channel estimation and analog processing have been jointly optimized by DL with reduced pilot overhead in \\cite{X. Li}. In \\cite{X. Bao}, deep CNN along with an equivalent channel hybrid precoding algorithm have been proposed to design the hybrid processing matrices.\n\n\\subsection{Motivation and Contribution}\n\nThe research on the DL based hybrid processing for mmWave massive MIMO systems is still in the exploratory stage and has many open issues. The existing works have applied DL to design the analog precoder \\cite{T. Lin}, the analog combiner \\cite{X. Bao}, the analog precoder and combiner \\cite{A. M. Elbir}, \\cite{X. Li}, and the analog and digital precoders \\cite{H. Huang}. Currently, only partial hybrid processing is designed by DL for the mmWave transceiver. In addition, conventional hybrid processing schemes are usually used to generate label matrices for the DNN to approximate, which limits the performance of the DL based approaches. The problems in the existing works motivate us to propose a general DL based joint hybrid processing framework (DL-JHPF) with the following two unique features:\n\\begin{itemize}[\\IEEEsetlabelwidth{Z}]\n\\item[1)] The framework jointly optimizes the analog and digital processing matrices at both the transmitter and receiver in an end-to-end manner without pre-designed label matrices. By doing this, it can be applied to various types of mmWave transceiver architectures and will have the potential to break through the performance of the existing schemes.\n\n\\item[2)] The framework enables end-to-end optimization but still preserves the block structures at the transceiver considering the hardware and power constraints in practical implementation for the hybrid architecture, which is quite different from the end-to-end optimization in \\cite{S. Dorner} and \\cite{H. Ye_b}.\n\\end{itemize}\nThe main contributions of this paper are summarized as follows.\n\\begin{itemize}[\\IEEEsetlabelwidth{Z}]\n\\item[1)] We model the joint analog and digital processing design for the transceiver as a DL based framework, which consists of the NN based hybrid processing designer, signal flow simulator, and NN based signal demodulator. For the sake of practical implementation, it does not break the original block structures at the transceiver but still allows the back-propagation (BP) based end-to-end optimization by minimizing the cross-entropy loss between recovered and original bits. The trainability of DL-JHPF is proved theoretically.\n\n\\item[2)] We extend the proposed framework to OFDM systems by simply modifying the structure of the training data. The extension does not complicate the framework architecture and guarantees the relatively short training time even if the number of subcarriers is large.\n\n\\item[3)] We verify the effectiveness of the proposed framework by numerical results based on the 3rd Generation Partnership Project (3GPP) channel model that can well depict the real channel environment. 
The proposed DL-JHPF achieves remarkable improvement in bit-error rate (BER) performance even with mismatched CSI and channel scenarios. Thanks to the careful design, DL-JHPF reduces the runtime significantly by sufficiently exploiting the parallel computing and thus is more suitable for rapidly varying mmWave channels.\n \n\\end{itemize}\n\nThe rest of the paper is organized as follows. Section II describes the channel model and signal transmission process for the considered mmWave massive MIMO system. The proposed DL-JHPF is elaborated in Section III. Simulation results are provided in Section IV to verify the effectiveness of the proposed framework and finally Section V gives concluding remarks.\n\n\\emph{Notations}: In this paper, we use upper and lower case boldface letters to denote matrices and vectors, respectively. $\\lVert\\cdot\\rVert_{F}$, $(\\cdot)^T$, $(\\cdot)^H$, and $\\mathbb{E}\\{\\cdot\\}$ represent the Frobenius norm, transpose, conjugate transpose, and expectation, respectively. $\\mathcal{CN}(\\mu,\\sigma^2)$ represents circular symmetric complex Gaussian distribution with mean $\\mu$ and variance $\\sigma^2$. $[\\mathbf{X}]_{i,j}$ and $[\\mathbf{x}]_{i}$ denote the $(i,j)$th element of matrix $\\mathbf{X}$ and the $i$th element of vector $\\mathbf{x}$, respectively. $|\\cdot|$ denotes the amplitude of a complex number.\n\n\\section{System Model}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.5in]{Fig1}\n\\caption{MmWave massive MIMO system model.}\\label{system_model}\n\\end{figure}\n\nAs shown in Fig.~\\ref{system_model}, we consider a point-to-point massive MIMO systems working at mmWave bands, where the transmitter and the receiver are with $N_{\\textrm{T}}$ and $N_{\\textrm{R}}$ antennas, respectively. To reduce the hardware cost and power consumption, $N_{\\textrm{T}}^{\\textrm{RF}}(< N_{\\textrm{T}})$ and $N_{\\textrm{R}}^{\\textrm{RF}}(< N_{\\textrm{R}})$ RF chains are used at the transmitter and the receiver, respectively, and are connected to the large-scale antennas via phase shifters.\n\n\\subsection{Channel Model}\n\nDue to the sparse scattering property, the Saleh-Valenzuela channel model has been used to well depict the mmWave propagation environment, where the scattering of multiple rays forms several clusters. According to \\cite{O. E. Ayach}, the $N_{\\textrm{R}}\\times N_{\\textrm{T}}$ channel matrix between the receiver and the transmitter can be represented as\n\\begin{eqnarray}\n\\label{eqn_H_tau}\n\\mathbf{H}=\\sqrt{\\frac{N_{\\textrm{T}}N_{\\textrm{R}}}{N_{\\textrm{cl}}N_{\\textrm{ray}}}}\\sum_{n=1}^{N_{\\textrm{cl}}}\\sum_{m=1}^{N_{\\textrm{ray}}} \\alpha_{n,m}\\mathbf{a}_{\\textrm{R}}(\\varphi_{n,m})\\mathbf{a}_{\\textrm{T}}^{H}(\\phi_{n,m}),\n\\end{eqnarray}\nwhere $N_{\\textrm{cl}}$ and $N_{\\textrm{ray}}$ denote the number of scattering clusters and the number of rays in each cluster, respectively, $\\alpha_{n,m}\\sim \\mathcal{CN}(0, \\sigma_{\\alpha}^2)$ is the propagation gain of the $m$th path in the $n$th cluster with $\\sigma_{\\alpha}^2$ being the average power gain, $\\varphi_{n,m}$ and $\\phi_{n,m}\\in[0,2\\pi]$ are the azimuth angles of arrival and departure (AoA\/AoD) at the receiver and the transmitter, respectively, of the $m$th path in the $n$th cluster.\\footnote{The path gain $\\alpha_{n,m}$ is the fast fading and varies in the time scale of channel coherence interval. 
Other parameters, $N_{\\textrm{cl}}$, $N_{\\textrm{ray}}$, $\\varphi_{n,m}$, $\\phi_{n,m}$, are slow fading and may be unchanged in a large time scale compared to $\\alpha_{n,m}$. The Doppler spread determines how often these channel parameters change.} For a uniform linear array with $N$ antenna elements and an azimuth angle of $\\theta$, the response vector can be expressed as\n\\begin{eqnarray}\n\\label{eqn_au}\n\\mathbf{a}(\\theta)=\\frac{1}{\\sqrt{N}} \\left[1,e^{-j2\\pi\\frac{d}{\\lambda}\\sin(\\theta)},\\ldots,e^{-j2\\pi\\frac{d}{\\lambda}(N-1)\\sin(\\theta)}\\right]^{T},\n\\end{eqnarray}\nwhere $d$ and $\\lambda$ denote the distance between the adjacent antennas and carrier wavelength, respectively.\n\nIn the above channel model, we assume the transmitted signal is with narrowband and therefore, channel matrix is independent of frequency. For wideband transmission, OFDM is used to convert a frequency-selective channel into multiple flat fading channels and the corresponding channel matrices will be different at different subcarriers. Accordingly, the design of DL-JHPF in Section III will start at the narrowband systems and is then extended to the wideband OFDM systems.\n\n\\subsection{Signal Transmission}\n\nThe transmitter sends $N_{\\textrm{s}}$ parallel data streams to the receiver through the wireless channel. The bits of each data stream are first mapped to the symbol by the $M$-ary modulation. The symbol vector intended for the receiver, $\\mathbf{x}\\in\\mathbb{C}^{N_{\\textrm{s}}\\times 1}$ with $\\mathbb{E}\\left\\{\\mathbf{x}\\mathbf{x}^{H}\\right\\}=\\frac{1}{N_{\\textrm{s}}}\\mathbf{I}_{N_{\\textrm{s}}}$, is successively processed by the digital precoder, $\\mathbf{F}_{\\textrm{BB}}\\in\\mathbb{C}^{N_{\\textrm{T}}^{\\textrm{RF}}\\times N_{\\textrm{s}}}$, at the baseband and the analog precoder, $\\mathbf{F}_{\\textrm{RF}}\\in\\mathbb{C}^{N_{\\textrm{T}}\\times N_{\\textrm{T}}^{\\textrm{RF}}}$, through the phase shifters, yielding the transmitted signal\n\\begin{eqnarray}\n\\label{eqn_s}\n\\mathbf{s}=\\sqrt{P}\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbf{x},\n\\end{eqnarray}\nwhere $P$ denotes the transmit power. $\\mathbf{F}_{\\textrm{RF}}$ represents the phase-only modulation by the phase shifters and thus has the constraint of $\\left|[\\mathbf{F}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}$, $\\forall \\,i, \\,j$. $\\mathbf{F}_{\\textrm{BB}}$ is normalized as $\\|\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\|_{F}^2=N_{\\textrm{s}}$ to satisfy the total power constraint at the transmitter. 
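Indeed, since $\\mathbb{E}\\left\\{\\mathbf{x}\\mathbf{x}^{H}\\right\\}=\\frac{1}{N_{\\textrm{s}}}\\mathbf{I}_{N_{\\textrm{s}}}$, the transmitted signal in (\\ref{eqn_s}) satisfies\n\\begin{eqnarray}\n\\mathbb{E}\\left\\{\\mathbf{s}^{H}\\mathbf{s}\\right\\}=P\\,\\textrm{tr}\\left(\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbb{E}\\left\\{\\mathbf{x}\\mathbf{x}^{H}\\right\\}\\mathbf{F}_{\\textrm{BB}}^{H}\\mathbf{F}_{\\textrm{RF}}^{H}\\right)=\\frac{P}{N_{\\textrm{s}}}\\|\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\|_{F}^2=P,\n\\end{eqnarray}\nso the average transmit power under this normalization is exactly $P$.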
Then the received signal at the receiver is given by\n\\begin{eqnarray}\n\\label{eqn_y}\n\\mathbf{y}=\\sqrt{P}\\mathbf{H}\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbf{x}+\\mathbf{n},\n\\end{eqnarray}\nwhere $\\mathbf{n}\\in\\mathbb{C}^{N_{\\textrm{R}}\\times 1}$ is additive white Gaussian noise (AWGN) with $\\mathcal{CN}(0,1)$ elements.\n\nThe received signal $\\mathbf{y}$ is then processed by the hybrid architecture at the receiver as\n\\setlength{\\arraycolsep}{0.05em}\n\\begin{eqnarray}\n\\label{eqn_r}\n\\mathbf{r}&&=\\mathbf{W}_{\\textrm{BB}}^H\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{y} =\\sqrt{P}\\mathbf{W}_{\\textrm{BB}}^H\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{H}\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbf{x} +\\mathbf{W}_{\\textrm{BB}}^H\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{n},\n\\end{eqnarray}\nwhere $\\mathbf{W}_{\\textrm{RF}}\\in\\mathbb{C}^{N_{\\textrm{R}}\\times N_{\\textrm{R}}^{\\textrm{RF}}}$ and $\\mathbf{W}_{\\textrm{BB}}\\in\\mathbb{C}^{N_{\\textrm{R}}^{\\textrm{RF}}\\times N_{\\textrm{s}}}$ represent the analog combiner and digital combiner, respectively. A hardware constraint is imposed on $\\mathbf{W}_{\\textrm{RF}}$ such that $\\left|[\\mathbf{W}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{R}}}}$, $\\forall \\,i, \\,j$ similar to $\\mathbf{F}_{\\textrm{RF}}$. Then the detected signal vector, $\\mathbf{r}$, is demodulated to recover the original bits of $N_{\\textrm{s}}$ data streams.\n\nSince the performance of the digital communication system is ultimately determined by BER, we aim to jointly design $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$ to minimize the BER between the original and demodulated bits, that is\n\\begin{eqnarray}\n&&\\min\\limits_{\\mathbf{F}_{\\textrm{RF}},\\mathbf{F}_{\\textrm{BB}},\\mathbf{W}_{\\textrm{RF}},\\mathbf{W}_{\\textrm{BB}}}\\quad P_{\\textrm{e}}\\left(\\mathbf{F}_{\\textrm{RF}},\\mathbf{F}_{\\textrm{BB}},\\mathbf{W}_{\\textrm{RF}},\\mathbf{W}_{\\textrm{BB}}\\right),\\label{eqn_optim_problem}\\\\\n&&\\quad\\quad\\:\\:\\textrm{s.t.}\\qquad\\qquad\\!\\! \\left|[\\mathbf{F}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}, \\forall \\,i, \\,j,\\label{const1}\\\\\n&&\\qquad\\qquad\\qquad\\quad\\;\\; \\left|[\\mathbf{W}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{R}}}}, \\forall \\,i, \\,j,\\label{const2}\\\\\n&&\\qquad\\qquad\\qquad\\quad\\;\\; \\|\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\|_{F}^2=N_{\\textrm{s}}\\label{const3}.\n\\end{eqnarray}\nThe BER in (\\ref{eqn_optim_problem}) is a complicated nonlinear function of $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$ without closed-form expression and the constraints in (\\ref{const1}) and (\\ref{const2}) are non-convex, which make this optimization problem intractable to be solved by the traditional approaches. DL is a potential solution by using the BP algorithm and thus we develop DL-JHPF to address this problem.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=6.6in]{Fig2}\n\\caption{Proposed DL-JHPF.}\\label{DL_framework}\n\\end{figure*}\n\n\\section{Proposed DL-JHPF}\n\nIn this section, we first briefly review the existing work on the DNN based end-to-end communications. 
Then we propose DL-JHPF, where the framework is first described, followed by the details of training, deployment, and testing along with the corresponding complexity analysis. Finally, we extend the framework to OFDM systems over wideband mmWave channels.\n\n\\subsection{DNN based End-to-End Communications}\n\nPrior works have shown that DNN based end-to-end optimization is an efficient tool to minimize BER. The BP algorithm makes DNN based end-to-end communications over the air possible as long as the optimized performance metric is differentiable \\cite{T. O'hea}, \\cite{S. Dorner}, \\cite{H. Ye_b}. For the DNN based end-to-end communication system, the modules at the transmitter and the receiver are replaced by two DNNs, respectively. Specifically, the DNN at the transmitter encodes the original symbols into the transmitted signal and the one at the receiver recovers the original symbols from the output of the wireless channel. In the training stage, the error between the original and recovered symbols is computed and the weights of the two DNNs are adjusted iteratively based on the error gradient propagated from the output layer of the DNN at the receiver to optimize the recovery accuracy.\n\nIn this paper, we focus on the DL based joint analog and digital processing design for the transceiver in mmWave massive MIMO systems. However, the existing DNN based end-to-end communication is not suitable for this task since it integrates the modules of the transceiver into two DNNs and thus cannot meet the hardware and power constraints in practical implementation. To address this challenge, we design DL-JHPF in the following.\n\n\\subsection{Framework Description}\n\n\n\nAs shown in Fig.~\\ref{DL_framework}, the proposed DL-JHPF consists of three parts: hybrid processing designer, signal flow simulator, and NN demodulator, which are elaborated as follows.\n\n\\emph{Hybrid processing designer:} It uses six fully-connected NNs to generate the analog and digital processing matrices for the transmitter and the receiver based on the channel matrix, $\\mathbf{H}$. Specifically, $\\mathbf{H}\\in\\mathbb{C}^{N_{\\textrm{R}}\\times N_{\\textrm{T}}}$ is first converted to a $2N_{\\textrm{T}}N_{\\textrm{R}}\\times 1$ real-valued vector.\\footnote{In Fig.~\\ref{DL_framework}, only the main process of the framework is shown while the matrix and vector reshaping process is omitted.} Then it is input into two NNs, called precoder phase NN (PP-NN) and combiner phase NN (CP-NN), to generate the corresponding phases, $\\boldsymbol{\\phi}_{\\textrm{P}}\\in\\mathbb{R}^{N_{\\textrm{T}}N_{\\textrm{T}}^{\\textrm{RF}}\\times 1}$ and $\\boldsymbol{\\phi}_{\\textrm{C}}\\in\\mathbb{R}^{N_{\\textrm{R}}N_{\\textrm{R}}^{\\textrm{RF}}\\times 1}$, respectively, for the phase shifters. 
With $\\boldsymbol{\\phi}_{\\textrm{P}}$ and $\\boldsymbol{\\phi}_{\\textrm{C}}$, two complex-valued vectors with constant amplitude elements are generated as\n\\begin{eqnarray}\n\\label{eqn_fRF}\n\\bar{\\mathbf{f}}_{\\textrm{RF}}=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}e^{j\\boldsymbol{\\phi}_{\\textrm{P}}},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_wRF}\n\\bar{\\mathbf{w}}_{\\textrm{RF}}=\\frac{1}{\\sqrt{N_{\\textrm{R}}}}e^{j\\boldsymbol{\\phi}_{\\textrm{C}}},\n\\end{eqnarray}\nbased on which, $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ are given by\n\\begin{eqnarray}\n\\label{eqn_FRF}\n\\mathbf{F}_{\\textrm{RF}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{f}}_{\\textrm{RF}}),\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_WRF}\n\\mathbf{W}_{\\textrm{RF}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{w}}_{\\textrm{RF}}),\n\\end{eqnarray}\nwhere $\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\cdot)$ denotes the operation reshaping a vector to a matrix.\nThen, $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ along with $\\mathbf{H}$ are used to generate a low-dimensional equivalent channel, i.e.,\n\\begin{eqnarray}\n\\label{eqn_Heq}\n\\mathbf{H}_{\\textrm{eq}}=\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{H}\\mathbf{F}_{\\textrm{RF}}.\n\\end{eqnarray}\n$\\mathbf{H}_{\\textrm{eq}}\\in\\mathbb{C}^{N_{\\textrm{R}}^{\\textrm{RF}}\\times N_{\\textrm{T}}^{\\textrm{RF}}}$ is converted to a $2N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{R}}^{\\textrm{RF}}\\times 1$ real-valued vector before it is input into four parallel NNs. The first two NNs, corresponding to the real part digital combiner NN (ReDC-NN) and the imaginary part digital combiner NN (ImDC-NN), output two $N_{\\textrm{s}}N_{\\textrm{R}}^{\\textrm{RF}}\\times1$ vectors, $\\bar{\\mathbf{w}}_{\\textrm{BB,re}}$, $\\bar{\\mathbf{w}}_{\\textrm{BB,im}}$, respectively. Then $\\mathbf{W}_{\\textrm{BB}}$ can be obtained as\n\\begin{eqnarray}\n\\label{eqn_WBB}\n\\mathbf{W}_{\\textrm{BB}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{w}}_{\\textrm{BB,re}}+j\\bar{\\mathbf{w}}_{\\textrm{BB,im}}).\n\\end{eqnarray}\nAnother two NNs, corresponding to the real part digital precoder NN (ReDP-NN) and the imaginary part digital precoder NN (ImDP-NN), output two $N_{\\textrm{s}}N_{\\textrm{T}}^{\\textrm{RF}}\\times1$ vectors, $\\bar{\\mathbf{f}}_{\\textrm{BB,re}}$, $\\bar{\\mathbf{f}}_{\\textrm{BB,im}}$, respectively. 
Then the unnormalized digital precoder $\\bar{\\mathbf{F}}_{\\textrm{BB}}$ is given by\n\\begin{eqnarray}\n\\label{eqn_unnorm_FBB}\n\\bar{\\mathbf{F}}_{\\textrm{BB}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{f}}_{\\textrm{BB,re}}+j\\bar{\\mathbf{f}}_{\\textrm{BB,im}}).\n\\end{eqnarray}\nThe following normalization utilizes $\\bar{\\mathbf{F}}_{\\textrm{BB}}$ and $\\mathbf{F}_{\\textrm{RF}}$ in (\\ref{eqn_FRF}) to output the final digital precoder as\n\\begin{eqnarray}\n\\label{eqn_DP_norm}\n\\mathbf{F}_{\\textrm{BB}}=\\frac{\\sqrt{N_{\\textrm{s}}}}{\\|\\mathbf{F}_{\\textrm{RF}}\\bar{\\mathbf{F}}_{\\textrm{BB}}\\|_{F}}\\bar{\\mathbf{F}}_{\\textrm{BB}}.\n\\end{eqnarray}\n\n\\emph{Signal flow simulator:} In the training stage, it simulates the process from the original bits, $\\mathbf{X}_{\\textrm{b}}$, to the detected signal, $\\mathbf{r}$, over the channel, $\\mathbf{H}$, with AWGN, $\\mathbf{n}$, where $\\mathbf{X}_{\\textrm{b}}$, with the size of $N_{\\textrm{s}}\\times\\log_2 M$, as well as $\\mathbf{H}$ and $\\mathbf{n}$ are generated in the simulation environment. It bridges the back propagation of the error gradient from the NN demodulator to the hybrid processing designer, as we will elaborate in Section III.C. In the deployment and testing stage, the signal flow simulator is replaced by the actual transceiver and the actual wireless fading channel. In these two stages, the analog and digital processing matrices at the transceiver are provided by the hybrid processing designer based on the simulated or actual $\\mathbf{H}$.\n\n\\emph{NN demodulator:} It is a fully-connected NN, which receives the detected signal, $\\mathbf{r}$, from the signal flow simulator (in the training stage) or the actual receiver (in the testing stage) and outputs recovered bits $\\hat{\\mathbf{x}}_{\\textrm{b}}\\in \\mathbb{R}^{N_{\\textrm{s}}\\log_2 M\\times1}$ with each element lying in the interval $[0,1]$. $\\hat{\\mathbf{x}}_{\\textrm{b}}$ is then reshaped to $\\hat{\\mathbf{X}}_{\\textrm{b}}$ with the same size as $\\mathbf{X}_{\\textrm{b}}$.\n\n\\textbf{Remark 1.} \\emph{The learning of hybrid processing matrices, $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{W}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, and $\\mathbf{W}_{\\textrm{BB}}$, in DL-JHPF is embedded into the signal transmission and demodulation process instead of approximating pre-designed label matrices. All NNs are optimized jointly, sharing the mapping principle from $\\mathbf{X}_{\\textrm{b}}$ at the transmitter to $\\mathbf{X}_{\\textrm{b}}$ at the receiver, which resembles an autoencoder. By minimizing the error between $\\mathbf{X}_{\\textrm{b}}$ and $\\hat{\\mathbf{X}}_{\\textrm{b}}$, each NN in the hybrid processing designer can implicitly learn to output the appropriate vectors with specific meanings, i.e., the phases of the phase shifters and the real and imaginary parts of the digital precoder and combiner. By doing this, DL-JHPF has the potential to surpass the performance of the existing schemes.}\n\n\\subsection{Framework Training}\n\nThe goal of offline training is to determine the weights of the NNs in the hybrid processing designer and the NN demodulator based on the training samples with the input tuple $\\langle\\mathbf{H}, \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$ and the label $\\mathbf{X}_{\\textrm{b}}$, where $\\mathbf{H}$ is generated by a certain channel model and $\\mathbf{n}$ is generated according to the $\\mathcal{CN}(0,1)$ distribution. 
By minimizing the end-to-end error between the original bits, $\\mathbf{X}_{\\textrm{b}}$, and the recovered bits, $\\hat{\\mathbf{X}}_{\\textrm{b}}$, the weights of each NN in DL-JHPF are adjusted iteratively and the training procedure is elaborated as follows.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.0in]{Fig3}\n\\caption{Training model for proposed DL-JHPF.}\\label{visual_model}\n\\end{figure}\n\nThe proposed DL-JHPF is actually an integrated DNN consisting of neuron layers and custom layers. The training model in Fig.~\\ref{visual_model} demonstrates the detailed training process of the framework. For each training sample, $\\mathbf{H}$ is converted into a real-valued vector by matrix-to-vector reshaping and real and imaginary parts stacking, which is input into PP-NN and CP-NN consisting of dense and batch normalization (BN) layers to generate the corresponding phases, $\\boldsymbol{\\phi}_{\\textrm{P}}$ and $\\boldsymbol{\\phi}_{\\textrm{C}}$, respectively. Then (\\ref{eqn_fRF}) and (\\ref{eqn_wRF}) are executed by the same custom layer. Afterwards, the output vectors are reshaped according to (\\ref{eqn_FRF}) and (\\ref{eqn_WRF}) to generate $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$, respectively. Next, (\\ref{eqn_Heq}) is executed by a custom layer to generate $\\mathbf{H}_{\\textrm{eq}}$, followed by matrix-to-vector reshaping and real and imaginary parts stacking. This vector is input into four NNs consisting of dense and BN layers, i.e., ReDC-NN, ImDC-NN, ReDP-NN, and ImDP-NN, respectively. The output vectors of the former two NNs are used to generate $\\mathbf{W}_{\\textrm{BB}}$ through real and imaginary parts combining and vector-to-matrix reshaping as (\\ref{eqn_WBB}). Using the same operation, the output vectors of the latter two NNs are used to generate $\\bar{\\mathbf{F}}_{\\textrm{BB}}$ as (\\ref{eqn_unnorm_FBB}). After obtaining $\\bar{\\mathbf{F}}_{\\textrm{BB}}$, a custom layer is added to perform the normalization in (\\ref{eqn_DP_norm}) to generate $\\mathbf{F}_{\\textrm{BB}}$. Then (\\ref{eqn_r}) is executed through a custom layer by using the input tuple $\\langle\\mathbf{H}, \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$ and the generated $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$ to yield the detected signal, $\\mathbf{r}$. After real and imaginary stacking, $\\mathbf{r}$ is converted to a real-valued vector and input into the NN demodulator consisting of dense and BN layers to output the recovered bits, $\\hat{\\mathbf{x}}_{\\textrm{b}}$, which is then reshaped to $\\hat{\\mathbf{X}}_{\\textrm{b}}$. 
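\n\nTo make the role of the custom layers concrete, the following NumPy sketch reproduces the non-neural operations of the training model, i.e., (\\ref{eqn_fRF})$-$(\\ref{eqn_DP_norm}) and (\\ref{eqn_r}), for a single sample. The NN outputs are replaced by random stand-in vectors, a Rayleigh matrix is used in place of the clustered channel, and the row-major reshaping order as well as all variable names are our own assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nN_T, N_R, N_T_RF, N_R_RF, N_s, P = 32, 16, 3, 3, 3, 1.0\nrng = np.random.default_rng(0)\n\n# stand-ins for the outputs of PP-NN, CP-NN, Re\/ImDP-NN, and Re\/ImDC-NN\nphi_P = rng.uniform(0, 2 * np.pi, N_T * N_T_RF)\nphi_C = rng.uniform(0, 2 * np.pi, N_R * N_R_RF)\nf_BB = rng.standard_normal(N_T_RF * N_s) + 1j * rng.standard_normal(N_T_RF * N_s)\nw_BB = rng.standard_normal(N_R_RF * N_s) + 1j * rng.standard_normal(N_R_RF * N_s)\n\n# (eqn_fRF)-(eqn_WRF): unit-modulus analog matrices built from the phases\nF_RF = (np.exp(1j * phi_P) \/ np.sqrt(N_T)).reshape(N_T, N_T_RF)\nW_RF = (np.exp(1j * phi_C) \/ np.sqrt(N_R)).reshape(N_R, N_R_RF)\n\n# (eqn_Heq): low-dimensional equivalent channel (Rayleigh stand-in for H)\nH = (rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))) \/ np.sqrt(2)\nH_eq = W_RF.conj().T @ H @ F_RF\n\n# (eqn_WBB), (eqn_unnorm_FBB), and the normalization (eqn_DP_norm)\nW_BB = w_BB.reshape(N_R_RF, N_s)\nF_BB = f_BB.reshape(N_T_RF, N_s)\nF_BB *= np.sqrt(N_s) \/ np.linalg.norm(F_RF @ F_BB, 'fro')\n\n# (eqn_r): detected signal that is fed to the NN demodulator\nx = (rng.standard_normal(N_s) + 1j * rng.standard_normal(N_s)) \/ np.sqrt(2 * N_s)\nn = (rng.standard_normal(N_R) + 1j * rng.standard_normal(N_R)) \/ np.sqrt(2)\nr = W_BB.conj().T @ W_RF.conj().T @ (np.sqrt(P) * H @ F_RF @ F_BB @ x + n)\n\\end{verbatim}\n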
The binary cross-entropy (BCE) loss between $\\mathbf{X}_{\\textrm{b}}$ and $\\hat{\\mathbf{X}}_{\\textrm{b}}$ is calculated as\n\\begin{eqnarray}\n\\label{eqn_loss}\n\\mathcal{L}&&=-\\frac{1}{N_{\\textrm{tr}}}\\sum_{n=1}^{N_{\\textrm{tr}}}\\sum_{i=1}^{N_{\\textrm{s}}}\\sum_{j=1}^{\\log_2 M}\\biggl( [\\mathbf{X}_{\\textrm{b}}^{n}]_{i,j}\\ln([\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j})\\nonumber\\\\\n&&\\quad+(1-[\\mathbf{X}_{\\textrm{b}}^{n}]_{i,j})\\ln(1-[\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j})\\biggr),\n\\end{eqnarray}\nwhere $N_{\\textrm{tr}}$ denotes the number of training samples, superscript $n$ is added to indicate the index of the training sample, and $\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}$ is expressed as a function of the parameter set of all NNs in DL-JHPF, i.e., $\\boldsymbol\\Theta$.\n\nRecalling the optimization problem in (\\ref{eqn_optim_problem}), the BER over the training set can be written as\n\\begin{eqnarray}\n\\label{eqn_BER}\n&&P_{\\textrm{e,tr}}\\left(\\mathbf{F}_{\\textrm{RF}},\\mathbf{F}_{\\textrm{BB}},\\mathbf{W}_{\\textrm{RF}},\\mathbf{W}_{\\textrm{BB}}\\right) =P_{\\textrm{e,tr}}\\left(\\boldsymbol\\Theta\\right) \\nonumber\\\\\n&&\\quad=\\frac{\\sum_{n=1}^{N_{\\textrm{tr}}}\\sum_{i=1}^{N_{\\textrm{s}}}\\sum_{j=1}^{\\log_2 M}|[\\mathbf{X}_{\\textrm{b}}^{n}]_{i,j}-[\\hat{\\mathbf{X}}_{\\textrm{b,bin}}^{n}(\\boldsymbol\\Theta)]_{i,j}|}{N_{\\textrm{tr}}N_{\\textrm{s}}\\log_2 M},\\;\\;\\;\\;\n\\end{eqnarray}\nwhere $\\hat{\\mathbf{X}}_{\\textrm{b,bin}}^{n}(\\boldsymbol\\Theta)$ is the binary demodulated bit matrix with $[\\hat{\\mathbf{X}}_{\\textrm{b,bin}}^{n}(\\boldsymbol\\Theta)]_{i,j}=0$ for $[\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j}<0.5$ and $1$ otherwise. With $\\mathbf{X}_{\\textrm{b}}^{n}$ fixed, minimizing $\\mathcal{L}$ in (\\ref{eqn_loss}) with respect to $\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)$ yields $\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)=\\mathbf{X}_{\\textrm{b}}^{n}$, which also minimizes $P_{\\textrm{e,tr}}\\left(\\boldsymbol\\Theta\\right)$ in (\\ref{eqn_BER}). Therefore, DL-JHPF can directly minimize the BER over the training set by minimizing the BCE loss, and the feasibility is guaranteed by the following theorem.\n\n\\textbf{Theorem 1.} \\emph{The proposed DL-JHPF is trainable and can minimize the BCE loss through the BP algorithm.}\n\\begin{IEEEproof}\nConsidering mini-batch training, the BCE loss over a batch is written as\n\\begin{eqnarray}\n\\label{eqn_loss_batch}\n\\mathcal{L}_{\\textrm{bat}}&&=-\\frac{1}{N_{\\textrm{bat}}}\\sum_{n=1}^{N_{\\textrm{bat}}}\\sum_{i=1}^{N_{\\textrm{s}}}\\sum_{j=1}^{\\log_2 M}\\biggl( [\\mathbf{X}_{\\textrm{b}}^{n}]_{i,j}\\ln([\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j})\\nonumber\\\\\n&&\\quad+(1-[\\mathbf{X}_{\\textrm{b}}^{n}]_{i,j})\\ln(1-[\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j})\\biggr),\n\\end{eqnarray}\nwhere $N_{\\textrm{bat}}$ denotes the batch size. Then $\\boldsymbol\\Theta$ will be updated $\\lceil\\frac{N_{\\textrm{tr}}}{N_{\\textrm{bat}}}\\rceil$ times in each epoch.\n\nTo prove Theorem 1, we need to show that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to each parameter in $\\boldsymbol\\Theta$. According to \\cite{Y. LeCun}, the outputs are differentiable with respect to the corresponding weights and inputs for each NN in DL-JHPF. 
Since DL-JHPF can be viewed as an integrated DNN consisting of neuron layers and custom layers, by the chain rule the proof reduces to showing that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to the outputs of each NN. In the following, we prove the differentiability of $\\mathcal{L}_{\\textrm{bat}}$ with respect to the outputs of each NN by incorporating the custom layers.\n\n\\emph{NN demodulator:} From (\\ref{eqn_loss_batch}), $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j}, \\forall i, j$.\n\n\\emph{Re\/ImDC-NN:} As mentioned in Section III.B, $\\bar{\\mathbf{w}}_{\\textrm{BB,re}}$ and $\\bar{\\mathbf{w}}_{\\textrm{BB,im}}$ are the outputs of ReDC-NN and ImDC-NN, respectively. Without loss of generality, we will prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}$. According to (\\ref{eqn_r}) and (\\ref{eqn_WBB}), $[\\mathbf{r}]_{1}$ is a function of $[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}$, that is\n\\begin{eqnarray}\n\\label{eqn_r1}\n\\!\\!\\!\\!\\![\\mathbf{r}]_{1}&&=([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}-j[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1})[\\mathbf{z}]_{1}+C_1\\nonumber\\\\\n&&=([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}-j[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1})([\\mathbf{z}]_{1,\\textrm{re}}+j[\\mathbf{z}]_{1,\\textrm{im}})+C_1\\nonumber\\\\\n&&=([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}[\\mathbf{z}]_{1,\\textrm{re}}+[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}[\\mathbf{z}]_{1,\\textrm{im}}+C_{1,\\textrm{re}})\\nonumber\\\\\n&&\\quad+j([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}[\\mathbf{z}]_{1,\\textrm{im}}-[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}[\\mathbf{z}]_{1,\\textrm{re}}+C_{1,\\textrm{im}})\\nonumber\\\\\n&&=[\\mathbf{r}]_{1,\\textrm{re}}+j[\\mathbf{r}]_{1,\\textrm{im}},\n\\end{eqnarray}\nwhere $\\mathbf{z}=\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{y}$ and $C_1$ denotes the component of $[\\mathbf{r}]_{1}$ independent of $[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}$, with the subscripts `re' and `im' indicating the real and imaginary parts, respectively. Since $[\\mathbf{r}]_{1,\\textrm{re}}$ and $[\\mathbf{r}]_{1,\\textrm{im}}$ are part of the inputs of the NN demodulator, $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\mathbf{r}]_{1,\\textrm{re}}$ and $[\\mathbf{r}]_{1,\\textrm{im}}$. 
Then we have\n\\begin{eqnarray}\n\\label{eqn_derL_wre_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}} &&=\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}}\\nonumber\\\\\n&&=[\\mathbf{z}]_{1,\\textrm{re}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} +[\\mathbf{z}]_{1,\\textrm{im}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_derL_wim_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}} &&=\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}}\\nonumber\\\\\n&&=[\\mathbf{z}]_{1,\\textrm{im}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} -[\\mathbf{z}]_{1,\\textrm{re}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}}.\n\\end{eqnarray}\n\n\\emph{Re\/ImDP-NN:} Since $\\bar{\\mathbf{f}}_{\\textrm{BB,re}}$ and $\\bar{\\mathbf{f}}_{\\textrm{BB,im}}$ are the outputs of ReDP-NN and ImDP-NN, respectively, we also aim to prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}$. Considering the normalization in (\\ref{eqn_DP_norm}), we first calculate the derivatives of $\\mathcal{L}_{\\textrm{bat}}$ with respect to the real and imaginary parts of $[\\mathbf{F}_{\\textrm{BB}}]_{1,1}$, i.e., $\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}}$ and $\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}}$, which can be obtained similarly to (\\ref{eqn_derL_wre_1}) and (\\ref{eqn_derL_wim_1}). 
According to (\\ref{eqn_DP_norm}), we have\n\\begin{eqnarray}\n\\label{eqn_FBB11re}\n[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}=\\frac{[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}}{f([\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}, [\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1})},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_FBB11im}\n[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}=\\frac{[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}}{f([\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}, [\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1})},\n\\end{eqnarray}\nwhere $f([\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}, [\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1})\\!=\\![([\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{re}}[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1} \\!-\\![\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{im}}[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}+C_{2,\\textrm{re}})^2+([\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{re}}[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1} \\!+\\![\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{im}}[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}+C_{2,\\textrm{im}})^2+C_3]^{\\frac{1}{2}}$ with $C_2$ and $C_3$ independent of $[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}$. Then we can find that $[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}$ and $[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}$ are differentiable with respect to $[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}$, which leads to\n\\begin{eqnarray}\n\\label{eqn_derL_fBBre_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}} &&\\!=\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}},\\nonumber\\\\\n&&\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_derL_fBBim_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}} &&\\!=\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}}.\\nonumber\\\\\n&&\n\\end{eqnarray}\n\n\\emph{PP-NN:} We still aim to prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}$ that is one of the output of PP-NN and generates $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}$ and $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}$ as\n\\begin{eqnarray}\n\\label{eqn_fRF_reim}\n\\frac{1}{\\sqrt{N_{\\textrm{T}}}}e^{j[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}}&&=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}(\\cos[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1} +j\\sin[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1})\\nonumber\\\\\n&&=[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}+j[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}.\n\\end{eqnarray}\nFrom (\\ref{eqn_r}) and (\\ref{eqn_Heq}), 
$[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}$ and $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}$ influence the values of $[\\mathbf{r}]_{i}, i=1,\\ldots,N_{\\textrm{s}}$ and $[\\mathbf{H}_{\\textrm{eq}}]_{j,1}, j=1,\\ldots,N_{\\textrm{R}}^{\\textrm{RF}}$. According to the previous proof, $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to the real and imaginary parts of each element in $\\mathbf{r}$ and $\\mathbf{H}_{\\textrm{eq}}$. $[\\mathbf{r}]_{i,\\textrm{re}}$, $[\\mathbf{r}]_{i,\\textrm{im}}$, $[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}$, and $[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}$, $i=1,\\ldots,N_{\\textrm{s}}, j=1,\\ldots,N_{\\textrm{R}}^{\\textrm{RF}}$ are also differentiable with respect to $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}$ and $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}$. Resorting to chain rule, we have\n\\begin{eqnarray}\n\\label{eqn_derL_fRF_1re}\n&&\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} =\\sum_{i=1}^{N_{\\textrm{s}}}\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}}\\right)\\nonumber\\\\\n&&+\\!\\sum_{j=1}^{N_{\\textrm{R}}^{\\textrm{RF}}}\\!\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}}\\right)\\!,\\nonumber\\\\\n&&\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_derL_fRF_1im}\n&&\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} =\\sum_{i=1}^{N_{\\textrm{s}}}\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}\\right)\\nonumber\\\\\n&&+\\!\\sum_{j=1}^{N_{\\textrm{R}}^{\\textrm{RF}}}\\!\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}\\right)\\!.\\nonumber\\\\\n&&\n\\end{eqnarray}\nBy considering (\\ref{eqn_fRF_reim})$-$(\\ref{eqn_derL_fRF_1im}), we arrive at\n\\begin{eqnarray}\n\\label{eqn_derL_pha_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}} 
&&=\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} \\frac{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}}{\\partial[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} \\frac{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}{\\partial[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}}\\nonumber\\\\\n&&=-\\sin[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} +\\cos[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}.\n\\end{eqnarray}\n\n\\emph{CP-NN:} The proof is similar to that of PP-NN and thus is omitted for simplicity.\n\nNow we have shown that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to each parameter in $\\boldsymbol\\Theta$, which completes the proof.\n\\end{IEEEproof}\n\n\nIt can be seen that the proposed DL-JHPF is abstracted into an integrated DNN, where the hybrid processing matrices, $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$, are essentially the trainable weights therein. From the proof of Theorem 1, each weight of this integrated DNN can be optimized iteratively through the BP algorithm by minimizing the BCE loss. Therefore, the optimal precoding and combining matrices on the training set are obtained.\n\nFor the NNs in Fig.~\\ref{visual_model}, each dense layer uses the rectified linear unit (ReLU) activation function and is followed by a BN layer to avoid gradient diffusion and overfitting. The number of dense layers and the number of neurons in each dense layer need to be adjusted according to the input and output dimensions. Since the outputs of the NNs will be used for hybrid processing at the transmitter and the receiver, the activation functions of the output layers should be carefully designed and are elaborated as follows.\n\n\\emph{PP-NN and CP-NN:} The two NNs generate the phases for $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$, respectively. Since (\\ref{eqn_fRF}) and (\\ref{eqn_wRF}) are periodic functions, the ReLU activation function is used in the output layer to provide the unbiased output for all possible phases. We may also use Sigmoid or hyperbolic tangent as the activation function, after which the outputs are multiplied by $2\\pi$ or $\\pi$ to obtain the final phases in the range $[0,2\\pi]$ or $[-\\pi,\\pi]$. According to our simulation trials, ReLU and hyperbolic tangent achieve almost the same performance while Sigmoid performs worse. Therefore, ReLU is preferable since it is simple and free of exponential operations.\n\n\\emph{Re\/ImDP-NN and Re\/ImDC-NN:} The four NNs generate the real and imaginary parts for $\\mathbf{F}_{\\textrm{BB}}$ and $\\mathbf{W}_{\\textrm{BB}}$, respectively. Since $\\mathbf{F}_{\\textrm{BB}}$ can be normalized by (\\ref{eqn_DP_norm}) while $\\mathbf{W}_{\\textrm{BB}}$ has no constraint, the output layers do not apply any activation function to impose constraints and directly output the values that are input into their neurons.\n\n\\emph{NN demodulator:} This NN approximates the original bits, $\\mathbf{X}_{\\textrm{b}}$, based on $\\mathbf{r}$. 
The approximation for each element in $\\mathbf{X}_{\\textrm{b}}$ is a binary classification and thus the Sigmoid activation function is used for the output layer of the NN demodulator.\n\n\\vspace{-0.2cm}\n\\subsection{Deployment and Testing}\n\nIn this subsection, we elaborate on the deployment and testing of the trained DL-JHPF for practical implementation, where $\\mathbf{H}$ is assumed to be available at both the transmitter and the receiver.\\footnote{Although only the estimated channel is available in practical implementation, the simulation results show that a relatively accurate channel estimate hardly causes any performance loss and is almost equivalent to $\\mathbf{H}$.}\n\nThe practical deployment of DL-JHPF includes the following three parts:\n\n\\emph{Deployment of hybrid processing designer:} PP-NN and CP-NN will be deployed together at \\emph{both the transmitter and the receiver} to output the analog processing matrices, $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$, based on which the equivalent channel, $\\mathbf{H}_{\\textrm{eq}}$, can be generated via (\\ref{eqn_Heq}). ReDP-NN and ImDP-NN are equipped at the \\emph{transmitter} to generate the digital precoder, $\\mathbf{F}_{\\textrm{BB}}$, while ReDC-NN and ImDC-NN are equipped at the \\emph{receiver} to generate the digital combiner, $\\mathbf{W}_{\\textrm{BB}}$, both based on $\\mathbf{H}_{\\textrm{eq}}$.\n\n\\emph{Deployment of signal flow simulator:} It is only used for the training stage and will be replaced by the actual transceiver and wireless fading channel in the deployment and testing stage.\n\n\\emph{Deployment of NN demodulator:} It will be deployed at the \\emph{receiver} to output the recovered bits, $\\hat{\\mathbf{X}}_{\\textrm{b}}$, based on the detected signal, $\\mathbf{r}$, after compensating for the impact of the fading channel.\n\nWhen testing the trained DL-JHPF in the real world, the channel may change rapidly due to the relative motion of the transceiver and scatterers, in which case DL-JHPF will be faced with new propagation scenarios whose channel statistics differ from those in the training stage. This channel scenario discrepancy poses a high requirement on the robustness of DL-JHPF. Fortunately, the offline trained framework in Section III.C is quite robust to new channel scenarios that were not observed before, as shown by our simulation results (Figs.~\\ref{HBD_BS_proDL_UMaipCSI} and \\ref{HBD_BS_proDL_ofdm_UMaipCSI}). Further online fine-tuning may only provide marginal performance improvement but requires a relatively large overhead and needs to be performed frequently in rapidly changing channel scenarios. In addition, only the NNs at the receiver can be fine-tuned and thus the performance after fine-tuning will still have an intrinsic loss compared to the end-to-end training in Section III.C. To sum up, the proposed framework can cope with channel scenario mismatch without relying on fine-tuning in most cases.\n\n\\subsection{Complexity Analysis}\n\nIn this subsection, we analyze the computational complexity of the proposed DL-JHPF in the testing stage by using the metric of required number of floating point operations (FLOPs). 
According to Fig.~\\ref{visual_model}, the total required FLOPs of all neural layers in DL-JHPF is given by\n\\begin{eqnarray}\n\\label{eqn_Complexity_NN}\n\\mathcal{C}_{\\textrm{NN}}\\sim\\mathcal{O}\\left(\\sum_{\\Delta\\in\\mathcal{N}}\\sum_{i=1}^{L^{\\Delta}-1}N^{\\Delta}_{i}N^{\\Delta}_{i+1}\\right),\n\\end{eqnarray}\nwhere $\\mathcal{N}$ denotes the set including all NNs in DL-JHPF, and $L^{\\Delta}$ and $N^{\\Delta}_{i}$ represent the number of neural layers and the number of neurons in the $i$th neural layer of the NN $\\Delta$, respectively.\n\nIn addition, the complexity of matrix multiplications in the framework is given by\n\\begin{eqnarray}\n\\label{eqn_Complexity_matrix}\n\\mathcal{C}_{\\textrm{Mat}}\\sim\\mathcal{O}\\bigl(N_{\\textrm{R}}^{\\textrm{RF}}N_{\\textrm{T}}N_{\\textrm{R}}\\bigr).\n\\end{eqnarray}\n\nThen, the total complexity of the proposed DL-JHPF can be expressed as\n\\begin{eqnarray}\n\\label{eqn_Complexity_DL_JHPF}\n\\mathcal{C}_{\\textrm{DL-JHPF}}\\sim\\mathcal{C}_{\\textrm{NN}}+\\mathcal{C}_{\\textrm{Mat}}.\n\\end{eqnarray}\nIt is noted that the NNs can be run efficiently via parallel computing on the graphics processing unit (GPU) and the simple matrix multiplications only cause negligible computational load for the central processing unit (CPU) compared with the existing schemes. Therefore, the proposed DL-JHPF has low complexity and consumes very limited runtime.\n\n\\subsection{Extension to OFDM Systems}\n\nIn this subsection, we extend the proposed DL-JHPF to wideband OFDM systems. Two key issues need to be considered for the extension:\n\\begin{itemize}[\\IEEEsetlabelwidth{Z}]\n\\item[1)] In OFDM systems, the digital precoder and combiner can be designed independently for different subcarriers while the analog precoder and combiner must be shared by all subcarriers. It is critical to design a unified analog precoder and combiner that perform well for all subcarriers.\n\n\\item[2)] It is important to maintain a relatively small size, i.e., the number of hidden layers and the number of neurons in each layer of the NNs, and a short training time for DL-JHPF when the number of subcarriers is large.\n\\end{itemize}\n\nIn the following, we study how to address the two issues when extending DL-JHPF to OFDM systems.\n\nAccording to \\cite{P. 
Dong_b}, the $N_{\\textrm{R}}\\times N_{\\textrm{T}}$ channel matrix between the receiver and the transmitter of the $k$th subcarrier can be expressed as\n\\begin{eqnarray}\n\\label{eqn_Hk}\n\\mathbf{H}[k]&&=\\beta\\sum_{n=1}^{N_{\\textrm{cl}}}\\sum_{m=1}^{N_{\\textrm{ray}}} \\alpha_{n,m}e^{-j2\\pi\\tau_n f_s\\frac{k}{K}}\\mathbf{a}_{\\textrm{R}}(\\varphi_{n,m})\\mathbf{a}_{\\textrm{T}}^{H}(\\phi_{n,m}),\n\\end{eqnarray}\nwhere $\\beta=\\sqrt{\\frac{N_{\\textrm{T}}N_{\\textrm{R}}}{N_{\\textrm{cl}}N_{\\textrm{ray}}}}$, and $\\tau_n$, $f_s$, and $K$ denote the delay of the $n$th cluster, the sampling rate, and the number of OFDM subcarriers, respectively.\nThe signal transmission model in (\\ref{eqn_r}) becomes subcarrier dependent and the detected signal of the $k$th subcarrier is given by\\footnote{Although $\\mathbf{x}$ and $\\mathbf{n}$ are also different for different subcarriers, they are independent of the channel and thus the index $k$ in them is omitted.}\n\\setlength{\\arraycolsep}{0.05em}\n\\begin{eqnarray}\n\\label{eqn_rk}\n\\mathbf{r}[k]&&=\\sqrt{P}\\mathbf{W}_{\\textrm{BB}}^H[k]\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{H}[k]\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}[k]\\mathbf{x}\\nonumber\\\\\n&&\\quad+\\mathbf{W}_{\\textrm{BB}}^H[k]\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{n}.\n\\end{eqnarray}\n\nIn the following, we propose a simple method to design the structure of training data so that the DL-JHPF in Section III.C can be flexibly extended to OFDM systems without changing the framework architecture. That is, neither the framework size nor the training time is increased. The process of training and testing is detailed as follows.\n\n\\emph{Training:} Compared to the training sample with the input tuple $\\langle\\mathbf{H}, \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$ in Section III.C, we modify the input tuple as $\\langle\\bar{\\mathbf{H}}, \\mathbf{H}[i], \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$, where $\\bar{\\mathbf{H}}$ is the channel matrix of a given subcarrier, e.g., the $q$th subcarrier, which is the same for all training samples, while $\\mathbf{H}[i]$ is the channel matrix of an uncertain subcarrier with $i$ randomly generated from the set $\\{1,2,\\ldots,K\\}$ for each training sample. As shown in Fig.~\\ref{DL_framework_ofdm}, when inputting each training sample into the framework, $\\bar{\\mathbf{H}}$ will be used to generate $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ via PP-NN and CP-NN. Then $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ along with $\\mathbf{H}[i]$ are used to generate the equivalent channel of the $i$th subcarrier, $\\mathbf{H}_{\\textrm{eq}}[i]$, based on which, $\\mathbf{F}_{\\textrm{BB}}[i]$ and $\\mathbf{W}_{\\textrm{BB}}[i]$ can be obtained through Re\/ImDP-NN and Re\/ImDC-NN. On the other hand, $\\mathbf{H}[i]$ is also input into the signal flow simulator to act as the fading channel since this training sample is used to simulate the transmission of the $i$th subcarrier. Then the end-to-end training can be performed by minimizing the BCE loss between $\\mathbf{X}_{\\textrm{b}}$ and $\\hat{\\mathbf{X}}_{\\textrm{b}}$. 
Through training, we can obtain the unified analog precoder and combiner that match the channel of each subcarrier well without complicating the architecture of DL-JHPF.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.5in]{Fig4}\n\\caption{Extension of the proposed DL-JHPF to OFDM systems.}\\label{DL_framework_ofdm}\n\\end{figure}\n\n\\emph{Testing:} With $\\mathbf{H}[k], k=1,2,\\ldots,K$, available at the transceiver, choose the channel matrix of the $q$th subcarrier as $\\bar{\\mathbf{H}}$. Input $\\bar{\\mathbf{H}}$ into PP-NN and CP-NN to generate the unified $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ for all subcarriers. The unified $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ along with the channel of each subcarrier, $\\mathbf{H}[k], k=1,2,\\ldots,K$, are used to generate the corresponding equivalent channel, which will be input into Re\/ImDP-NN and Re\/ImDC-NN to generate $\\mathbf{F}_{\\textrm{BB}}[k]$ and $\\mathbf{W}_{\\textrm{BB}}[k]$ for channel equalization in each subcarrier. The NN demodulator will be used to recover the original bits for each subcarrier based on the detected signal, $\\mathbf{r}[k]$.\n\n\\section{Simulation Results}\n\nIn this section, the effectiveness of the proposed DL-JHPF is verified in several cases. Six hybrid processing schemes and the fully-digital transceiver architecture are used as the baseline schemes for comparison: 1) HBD scheme in \\cite{W. Ni}; 2) Beam sweeping (BeS) scheme in \\cite{O. E. Ayach}; 3) Discrete Fourier transform (DFT) codebook based joint digital beamforming (DCJDB) scheme, where the analog precoder and combiner are searched from the DFT codebook by the method in \\cite{A. M. Elbir} while the digital precoder and combiner are jointly optimized according to \\cite{D. P. Palomar}; 4) Joint digital beamforming with alternating minimization (JDB-AltMin), where the optimal precoding and combining matrices are first designed according to \\cite{D. P. Palomar}, based on which the hybrid precoding and combining matrices are constructed according to the PE-AltMin algorithm in \\cite{X. Yu}; 5) Hybrid beamforming via deep learning (HBDL) scheme in \\cite{A. M. Elbir}; 6) Deep learning for direct hybrid precoding (DLDHP) scheme in \\cite{X. 
Li}; 7) Fully-digital transceiver architecture.\n\n\\subsection{Simulation Settings}\n\n\\begin{table}[!t]\n\\centering\n\\caption{Architectures of DNNs in Proposed DL-JHPF}\n\\label{table_1}\n\\begin{tabular}{p{1.4cm}<{\\centering}|c|c|c}\n\\hline\n~ & Layer type & \\makecell{Number of \\\\neurons} & \\makecell{Activation \\\\function}\\\\\n\\hline\n\\multirow{5}{*}{PP-NN} & Input & $2N_{\\textrm{T}}N_{\\textrm{R}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 512 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 256 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 128 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{T}}N_{\\textrm{T}}^{\\textrm{RF}}$ & ReLU\\\\\n\\hline\n\\multirow{6}{*}{CP-NN} & Input & $2N_{\\textrm{T}}N_{\\textrm{R}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 512 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 256 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 128 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 64 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{R}}N_{\\textrm{R}}^{\\textrm{RF}}$ & ReLU\\\\\n\\hline\n\\multirow{5}{*}{\\makecell{Re\/ImDP-\\\\NN}} & Input & $2N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{R}}^{\\textrm{RF}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 40 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{s}}$ & - \\\\\n\\hline\n\\multirow{5}{*}{\\makecell{Re\/ImDC-\\\\NN}} & Input & $2N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{R}}^{\\textrm{RF}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 40 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{R}}^{\\textrm{RF}}N_{\\textrm{s}}$ & - \\\\\n\\hline\n\\multirow{5}{*}{\\makecell{NN\\\\demodulator}} & Input & $2N_{\\textrm{s}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 50 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{s}}\\log_{2} M$ & Sigmoid \\\\\n\\hline\n\\end{tabular}\n\\vspace{-0.3cm}\n\\end{table}\n\n\\emph{1) System Settings:} We set $N_{\\textrm{T}}=32$ and $N_{\\textrm{T}}^{\\textrm{RF}}=3$ for the transmitter and $N_{\\textrm{R}}=16$ and $N_{\\textrm{R}}^{\\textrm{RF}}=3$ for the receiver. The number of data streams is set as $N_{\\textrm{s}}=3$. The channel data are generated according to the 3GPP TR 38.901 Release 15 channel model \\cite{3GPP}. Specifically, we use the clustered delay line models with $N_{\\textrm{cl}}=3$ clusters and $N_{\\textrm{ray}}=20$ rays in each cluster. The carrier frequency is $f_c=28$ GHz. For OFDM systems, the sampling rate is $f_s=100$ MHz and the number of subcarriers is $K=64$. Two channel scenarios, the urban micro (UMi) street non-line of sight (NLOS) scenario and the urban macro (UMa) NLOS scenario, are considered.\\footnote{According to the parameters for UMi NLOS scenario and UMa NLOS scenario defined by \\cite{3GPP}, we use the system object, nr5gCDLChannel, embedded in 5G Library for LTE System Toolbox in MATLAB to generate the corresponding channel data.} Quadrature phase shift keying (QPSK) is used as the modulation method.\n\n\\emph{2) Proposed DL-JHPF Settings:} The training set, validation set, and testing set contain $261$,$000$, $29$,$000$, and $10$,$000$ samples, respectively. The training set and validation set are generated in the UMi NLOS scenario while the testing set is generated in both UMi NLOS and UMa NLOS scenarios. Adam is used as the optimizer. 
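\n\nAs a concrete reading of Table~\\ref{table_1}, the seven NNs could be instantiated as in the following Keras-style sketch. The sketch reflects the listed layer widths and output activations and places a BN layer after every hidden dense layer; it is an illustrative assumption of ours rather than the exact implementation used in the experiments.\n\\begin{verbatim}\nimport math\nimport tensorflow as tf\n\nN_T, N_R, N_T_RF, N_R_RF, N_s, M = 32, 16, 3, 3, 3, 4  # QPSK: M = 4\n\ndef mlp(in_dim, widths, out_dim, out_act):\n    # dense + BN stack with the layer widths listed in Table 1\n    layers = [tf.keras.Input(shape=(in_dim,))]\n    for w in widths:\n        layers.append(tf.keras.layers.Dense(w, activation='relu'))\n        layers.append(tf.keras.layers.BatchNormalization())\n    layers.append(tf.keras.layers.Dense(out_dim, activation=out_act))\n    return tf.keras.Sequential(layers)\n\npp_nn = mlp(2 * N_T * N_R, [512, 256, 128], N_T * N_T_RF, 'relu')\ncp_nn = mlp(2 * N_T * N_R, [512, 256, 128, 64], N_R * N_R_RF, 'relu')\nre_dp_nn = mlp(2 * N_T_RF * N_R_RF, [20, 40, 20], N_T_RF * N_s, None)\nim_dp_nn = mlp(2 * N_T_RF * N_R_RF, [20, 40, 20], N_T_RF * N_s, None)\nre_dc_nn = mlp(2 * N_T_RF * N_R_RF, [20, 40, 20], N_R_RF * N_s, None)\nim_dc_nn = mlp(2 * N_T_RF * N_R_RF, [20, 40, 20], N_R_RF * N_s, None)\ndemodulator = mlp(2 * N_s, [20, 50, 20], N_s * int(math.log2(M)), 'sigmoid')\n\\end{verbatim}\n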
The number of epochs in the training stage is set as $800$, and the learning rate is $10^{-3}$ for the first $500$ epochs and $10^{-4}$ for the remaining $300$ epochs. The batch size is $256$. The architecture of each NN in DL-JHPF is listed in Table~\\ref{table_1}, where the BN layer is added after each dense layer and thus is not listed in the table for simplicity.\n\n\n\\subsection{Performance Evaluation}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig5}\n\\caption{BER performance of the proposed DL-JHPF and the existing hybrid processing schemes.}\\label{HBD_BS_proDL}\n\\end{figure}\n\nIn Figs.~\\ref{HBD_BS_proDL}$-$\\ref{HBD_BS_proDL_UMaipCSI}, the proposed DL-JHPF is first evaluated in narrowband systems while the performance in wideband OFDM systems is presented in Figs.~\\ref{HBD_BS_proDL_ofdm} and \\ref{HBD_BS_proDL_ofdm_UMaipCSI}.\n\nFig.~\\ref{HBD_BS_proDL} shows the BER performance of HBD, BeS, DCJDB, JDB-AltMin, HBDL, DLDHP, the proposed DL-JHPF, and the fully-digital architecture versus signal-to-noise ratio (SNR) in the UMi NLOS scenario with perfect CSI. From the figure, DL-JHPF has a larger slope for the BER curve and outperforms the other six hybrid processing schemes after $\\textrm{SNR}=0$ dB although it does not perform very well in the low SNR regime. When $\\textrm{BER}=10^{-2}$, the proposed DL-JHPF achieves about $0.2$ dB, $1$ dB, $1.2$ dB, $2$ dB, $6$ dB, and $8$ dB gains compared to JDB-AltMin, DLDHP, DCJDB, BeS, HBDL, and HBD, respectively. The advantage of DL-JHPF becomes more obvious as SNR increases and the BER is smaller than $10^{-4}$ when $\\textrm{SNR}=10$ dB while the BERs of the other four schemes are larger than $10^{-3}$. With the significantly increased number of RF chains, the fully-digital beamforming obtains substantial diversity gains, which directly leads to better BER performance than all the hybrid processing schemes. The performance gap between the proposed DL-JHPF and the fully-digital beamforming is about $4$ dB.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig6}\n\\caption{Robustness of the proposed DL-JHPF with mismatched CSI.}\\label{HBD_BS_proDL_ipCSI}\n\\end{figure}\n\nPerfect CSI is used in framework training while only estimated CSI is available at the practical transmitter and receiver, which leads to CSI mismatch. In Fig.~\\ref{HBD_BS_proDL_ipCSI}, we investigate the robustness of the proposed DL-JHPF with mismatched CSI, where the BER curve tested with perfect CSI in Fig.~\\ref{HBD_BS_proDL} is also plotted as the lower bound. We use the approach in \\cite{P. Dong_b} to estimate channels at $\\textrm{SNR}=10$ dB and $20$ dB, respectively, for hybrid processing design. From Fig.~\\ref{HBD_BS_proDL_ipCSI}, when tested with the CSI estimated at $20$ dB, DL-JHPF achieves almost the same BER performance as the perfect CSI case and outperforms the other six hybrid processing schemes after $\\textrm{SNR}=0$ dB, indicating that DL-JHPF is hardly impacted by the mismatched CSI estimated at $20$ dB. When tested with the CSI estimated at $10$ dB, performance loss occurs at an acceptable level for DL-JHPF. 
The loss is less than $1$ dB when $\\textrm{BER}=10^{-2}$ and DL-JHPF still has a clear performance superiority after $\\textrm{SNR}=2.5$ dB, even compared to the other hybrid processing schemes with the CSI estimated at $20$ dB.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig7}\n\\caption{Robustness of the proposed DL-JHPF with mismatched channel scenario and CSI.}\\label{HBD_BS_proDL_UMaipCSI}\n\\end{figure}\n\nAs mentioned in Section III.D, DL-JHPF is very likely to face different channel scenarios in practical testing. In Fig.~\\ref{HBD_BS_proDL_UMaipCSI}, we further consider this channel scenario mismatch and test the robustness of DL-JHPF to the aggregate impact caused by channel scenario and CSI mismatch. For DL-JHPF, the BER performance is tested in the UMi NLOS scenario with perfect CSI, in the UMa NLOS scenario with perfect CSI (mismatched channel scenario), and in the UMa NLOS scenario with the CSI estimated at $20$ dB (mismatched channel scenario and CSI), respectively. The performance curves of the baseline schemes evaluated in the UMa NLOS scenario with the CSI estimated at $20$ dB are also plotted for comparison. From Fig.~\\ref{HBD_BS_proDL_UMaipCSI}, the channel scenario mismatch causes only less than $0.5$ dB performance loss for DL-JHPF. The total loss caused by the aggregate impact of channel scenario and CSI mismatch is only less than $1$ dB. The proposed DL-JHPF has learned the inherent structure of the mmWave channels and thus is able to maintain its advantage even with mismatched channel scenarios and CSI.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig8}\n\\caption{BER performance of the proposed DL-JHPF and the existing hybrid processing schemes in OFDM systems.}\\label{HBD_BS_proDL_ofdm}\n\\end{figure}\n\nFig.~\\ref{HBD_BS_proDL_ofdm} shows the BER performance of HBD, BeS, DCJDB, JDB-AltMin, HBDL, DLDHP, the proposed DL-JHPF, and the fully-digital architecture in OFDM systems with the UMi NLOS scenario and perfect CSI, which is similar to that in Fig.~\\ref{HBD_BS_proDL}. In addition, we plot the BER performance of an ideal case with matched analog processing (AP) for DL-JHPF, where different analog processing matrices are designed for different subcarriers to match the corresponding channels. This is impossible to implement in practical systems, and we use it only to quantify the performance loss caused by using the unified analog processing matrices for all subcarriers. From Fig.~\\ref{HBD_BS_proDL_ofdm}, only about $1$ dB loss is incurred, which proves the effectiveness of DL-JHPF in OFDM systems by simply modifying the structure of training data without changing the framework architecture or increasing the training time.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig9}\n\\caption{Robustness of the proposed DL-JHPF with mismatched channel scenario and CSI in OFDM systems.}\\label{HBD_BS_proDL_ofdm_UMaipCSI}\n\\end{figure}\n\nIn Fig.~\\ref{HBD_BS_proDL_ofdm_UMaipCSI}, we further test the robustness of DL-JHPF in OFDM systems with mismatched channel scenario and CSI. The aggregate impact of channel scenario and CSI mismatch is still limited, and DL-JHPF tested in the UMa NLOS scenario with the CSI estimated at $20$ dB (mismatched channel scenario and CSI) even outperforms that tested in the UMi NLOS scenario with perfect CSI after $\\textrm{SNR}=8$ dB, which verifies the effectiveness and robustness of the proposed DL-JHPF in OFDM systems. 
In addition, the performance gap between the proposed DL-JHPF and the fully-digital beamforming remains at about $4$ dB.\n\n\\subsection{Computational Complexity Comparison}\n\n\\begin{table}\n \\centering\n \\caption{Runtime of Hybrid Processing Schemes}\n \\label{runtime}\n \\begin{tabular}{c|c}\n \\hline\n ~ & Runtime (in ms)\\\\\n \\hline\n HBD & $6.98$\\\\\n \\hline\n BeS & $10.61$\\\\\n \\hline\n DCJDB & $338.73$\\\\\n \\hline\n JDB-AltMin & $1.56$\\\\\n \\hline\n HBDL & $0.46$\\\\\n \\hline\n DLDHP & $0.16$\\\\\n \\hline\n Proposed DL-JHPF & $0.06$\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nFor mmWave mobile communications, the coherence time becomes shorter than that in sub-6 GHz bands and thus the runtime of a hybrid processing scheme is vital. Based on the simulation settings mentioned above, we compare the runtime of the proposed DL-JHPF in the testing stage with the baseline schemes in Table~\\ref{runtime}. The HBD, BeS, DCJDB, and JDB-AltMin schemes are run on the Intel(R) Core(TM) i7-3770 CPU while the proposed DL-JHPF is run on the NVIDIA GeForce GTX 2080 Ti GPU. For HBDL and DLDHP, the predictions of $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ are implemented via DNN on the GPU while the following design of $\\mathbf{F}_{\\textrm{BB}}$ and $\\mathbf{W}_{\\textrm{BB}}$ is executed on the CPU. By moving the time-consuming design of the analog processing to the GPU, which enables efficient parallel computing, the DL based schemes reduce the runtime significantly compared to the conventional schemes. Through careful design, the proposed DL-JHPF is fully GPU-driven when generating hybrid processing matrices and thus consumes the minimum time among the three DL based schemes. Therefore, the proposed DL-JHPF is more suitable for mmWave communications, especially for high-mobility scenarios.\n\n\\section{Conclusion}\n\nIn this paper, DL is applied for joint hybrid processing design at the transceiver in mmWave massive MIMO systems. A novel DL-JHPF is developed to learn the optimal analog and digital processing matrices by minimizing the end-to-end BCE loss between the original and recovered bits. The elaborate architecture of the proposed DL-JHPF guarantees the BP-enabled training of each NN therein. By simply modifying the structure of training data, DL-JHPF can be flexibly extended to OFDM systems without changing the framework architecture or increasing the training time. Simulation results show the superiority and robustness of DL-JHPF in various non-ideal conditions with significantly reduced runtime.\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\\section{Introduction}\n\nDue to the huge bandwidth, millimeter wave (mmWave) communications have been recognized as one of the key technologies to meet the demand for unprecedentedly high data rate transmission in future mobile networks \\cite{A. L. Swindlehurst}. By equipping large-scale antenna arrays, massive multiple-input multiple-output (MIMO) can provide sufficiently large array gains for spatial multiplexing and beamforming \\cite{L. Lu}. MmWave massive MIMO communications can combine the merits of both and thus have attracted significant interest \\cite{B. Wang}. However, the expensive and power-hungry hardware used in mmWave bands becomes the main obstacle to equipping a dedicated radio frequency (RF) chain for each antenna. 
The mainstream solution for this problem is to use the two-stage hybrid architecture, where a large number of antennas are connected to much fewer RF chains via phase shifters \\cite{L. Liang}, \\cite{L. Pan}.\n\n\\subsection{Related Work}\n\nFor mmWave massive MIMO systems with the hybrid architecture, both the analog and digital processing should be carefully designed to achieve performance comparable to that of fully-digital systems. In \\cite{L. Liang}, a low-complexity hybrid precoding scheme at the base station (BS) has been proposed for the massive MIMO downlink with single-antenna users. The hybrid architecture has been further introduced to the user side in \\cite{W. Ni}, where hybrid block diagonalization (HBD) has been used for the analog and digital processing design. By exploiting the sparsity of mmWave channels, the hybrid precoding and combining at both the transmitter and receiver have been optimized in \\cite{O. E. Ayach}. The heuristic hybrid beamforming design in \\cite{F. Sohrabi} can approach the performance of the fully-digital architecture. The alternating minimization algorithms for both fully-connected and sub-connected hybrid architectures in \\cite{X. Yu} have low complexity and limited performance loss. In \\cite{L. Zhao}, the hybrid processing along with channel estimation has been designed and analyzed for both the sparse and non-sparse channels. The uniform channel decomposition and nonlinear digital processing have been introduced in \\cite{Y. Lin} for hybrid beamforming design. In the existing works, the hybrid processing matrices at the transmitter and receiver are usually optimized separately due to the intractability of the joint optimization with non-convex constraints, which leaves room for further performance improvement via joint optimization.\n\nDeep learning (DL) has achieved great success in various fields, including computer vision \\cite{K. He_a}, speech signal processing \\cite{A. Graves}, natural language processing \\cite{R. Collobert}, and so on, due to its unique ability to extract and learn inherent features. It has been recently introduced to wireless communications and has been shown to be quite powerful in the optimization of communication systems \\cite{T. O'hea}--\\cite{G. Gui} and resource allocation \\cite{L. Liang_b}--\\cite{Z. Yang}. In \\cite{H. Ye_a}, DL has been successfully applied in pilot-assisted signal detection for orthogonal frequency division multiplexing (OFDM) systems with non-ideal transceiver and channel conditions. For wideband mmWave massive MIMO systems in time-varying channels, channel correlation has been exploited by deep convolutional neural network (CNN) in \\cite{P. Dong_b} to improve the accuracy and accelerate the computation of channel estimation. Deep neural network (DNN) has been utilized in \\cite{S. Gao} to model the mapping relationship among antennas for reliable channel estimation in massive MIMO systems with mixed-resolution ADCs. An autoencoder-like DNN has been developed in \\cite{C.-K. Wen} to reduce the overhead for channel state information (CSI) feedback in frequency division duplex massive MIMO systems. In \\cite{C. Lu_a}, CNN has been utilized in CSI compression and decompression to significantly improve the recovery accuracy. By combining the residual network and CNN, an efficient channel quantization scheme has been proposed from the bit-level perspective in \\cite{C. Lu_b}. The DL based end-to-end optimization has been developed in \\cite{S. Dorner} and \\cite{H. 
Ye_b} by breaking the block structures at the transceiver. DL has been recently used to design the hybrid processing matrices for massive MIMO systems with various transceiver architectures \\cite{H. Huang}--\\cite{X. Bao}. In \\cite{H. Huang}, the analog and digital precoder design has been modeled as the DNN mapping based on geometric mean decomposition. In \\cite{T. Lin}, DNN has been applied to design the analog precoder for massive multiple-input single-output (MISO) systems. Deep CNN has been applied to learn the phases of the analog precoder and combiner for mmWave massive MIMO systems in \\cite{A. M. Elbir}. For the same system, channel estimation and analog processing have been jointly optimized by DL with reduced pilot overhead in \\cite{X. Li}. In \\cite{X. Bao}, deep CNN along with an equivalent channel hybrid precoding algorithm have been proposed to design the hybrid processing matrices.\n\n\\subsection{Motivation and Contribution}\n\nThe research on the DL based hybrid processing for mmWave massive MIMO systems is still in the exploratory stage and has many open issues. The existing works have applied DL to design the analog precoder \\cite{T. Lin}, the analog combiner \\cite{X. Bao}, the analog precoder and combiner \\cite{A. M. Elbir}, \\cite{X. Li}, and the analog and digital precoders \\cite{H. Huang}. Currently, only partial hybrid processing is designed by DL for the mmWave transceiver. In addition, conventional hybrid processing schemes are usually used to generate label matrices for the DNN to approximate, which limits the performance of the DL based approaches. The problems in the existing works motivate us to propose a general DL based joint hybrid processing framework (DL-JHPF) with the following two unique features:\n\\begin{itemize}[\\IEEEsetlabelwidth{Z}]\n\\item[1)] The framework jointly optimizes the analog and digital processing matrices at both the transmitter and receiver in an end-to-end manner without pre-designed label matrices. By doing this, it can be applied to various types of mmWave transceiver architectures and will have the potential to break through the performance of the existing schemes.\n\n\\item[2)] The framework enables end-to-end optimization but still preserves the block structures at the transceiver considering the hardware and power constraints in practical implementation for the hybrid architecture, which is quite different from the end-to-end optimization in \\cite{S. Dorner} and \\cite{H. Ye_b}.\n\\end{itemize}\nThe main contributions of this paper are summarized as follows.\n\\begin{itemize}[\\IEEEsetlabelwidth{Z}]\n\\item[1)] We model the joint analog and digital processing design for the transceiver as a DL based framework, which consists of the NN based hybrid processing designer, signal flow simulator, and NN based signal demodulator. For the sake of practical implementation, it does not break the original block structures at the transceiver but still allows the back-propagation (BP) based end-to-end optimization by minimizing the cross-entropy loss between recovered and original bits. The trainability of DL-JHPF is proved theoretically.\n\n\\item[2)] We extend the proposed framework to OFDM systems by simply modifying the structure of the training data. 
The extension does not complicate the framework architecture and guarantees the relatively short training time even if the number of subcarriers is large.\n\n\\item[3)] We verify the effectiveness of the proposed framework by numerical results based on the 3rd Generation Partnership Project (3GPP) channel model that can well depict the real channel environment. The proposed DL-JHPF achieves remarkable improvement in bit-error rate (BER) performance even with mismatched CSI and channel scenarios. Thanks to the careful design, DL-JHPF reduces the runtime significantly by sufficiently exploiting the parallel computing and thus is more suitable for rapidly varying mmWave channels.\n \n\\end{itemize}\n\nThe rest of the paper is organized as follows. Section II describes the channel model and signal transmission process for the considered mmWave massive MIMO system. The proposed DL-JHPF is elaborated in Section III. Simulation results are provided in Section IV to verify the effectiveness of the proposed framework and finally Section V gives concluding remarks.\n\n\\emph{Notations}: In this paper, we use upper and lower case boldface letters to denote matrices and vectors, respectively. $\\lVert\\cdot\\rVert_{F}$, $(\\cdot)^T$, $(\\cdot)^H$, and $\\mathbb{E}\\{\\cdot\\}$ represent the Frobenius norm, transpose, conjugate transpose, and expectation, respectively. $\\mathcal{CN}(\\mu,\\sigma^2)$ represents circular symmetric complex Gaussian distribution with mean $\\mu$ and variance $\\sigma^2$. $[\\mathbf{X}]_{i,j}$ and $[\\mathbf{x}]_{i}$ denote the $(i,j)$th element of matrix $\\mathbf{X}$ and the $i$th element of vector $\\mathbf{x}$, respectively. $|\\cdot|$ denotes the amplitude of a complex number.\n\n\\section{System Model}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.5in]{Fig1}\n\\caption{MmWave massive MIMO system model.}\\label{system_model}\n\\end{figure}\n\nAs shown in Fig.~\\ref{system_model}, we consider a point-to-point massive MIMO systems working at mmWave bands, where the transmitter and the receiver are with $N_{\\textrm{T}}$ and $N_{\\textrm{R}}$ antennas, respectively. To reduce the hardware cost and power consumption, $N_{\\textrm{T}}^{\\textrm{RF}}(< N_{\\textrm{T}})$ and $N_{\\textrm{R}}^{\\textrm{RF}}(< N_{\\textrm{R}})$ RF chains are used at the transmitter and the receiver, respectively, and are connected to the large-scale antennas via phase shifters.\n\n\\subsection{Channel Model}\n\nDue to the sparse scattering property, the Saleh-Valenzuela channel model has been used to well depict the mmWave propagation environment, where the scattering of multiple rays forms several clusters. According to \\cite{O. E. 
Ayach}, the $N_{\\textrm{R}}\\times N_{\\textrm{T}}$ channel matrix between the receiver and the transmitter can be represented as\n\\begin{eqnarray}\n\\label{eqn_H_tau}\n\\mathbf{H}=\\sqrt{\\frac{N_{\\textrm{T}}N_{\\textrm{R}}}{N_{\\textrm{cl}}N_{\\textrm{ray}}}}\\sum_{n=1}^{N_{\\textrm{cl}}}\\sum_{m=1}^{N_{\\textrm{ray}}} \\alpha_{n,m}\\mathbf{a}_{\\textrm{R}}(\\varphi_{n,m})\\mathbf{a}_{\\textrm{T}}^{H}(\\phi_{n,m}),\n\\end{eqnarray}\nwhere $N_{\\textrm{cl}}$ and $N_{\\textrm{ray}}$ denote the number of scattering clusters and the number of rays in each cluster, respectively, $\\alpha_{n,m}\\sim \\mathcal{CN}(0, \\sigma_{\\alpha}^2)$ is the propagation gain of the $m$th path in the $n$th cluster with $\\sigma_{\\alpha}^2$ being the average power gain, $\\varphi_{n,m}$ and $\\phi_{n,m}\\in[0,2\\pi]$ are the azimuth angles of arrival and departure (AoA\/AoD) at the receiver and the transmitter, respectively, of the $m$th path in the $n$th cluster.\\footnote{The path gain $\\alpha_{n,m}$ is the fast fading and varies in the time scale of channel coherence interval. Other parameters, $N_{\\textrm{cl}}$, $N_{\\textrm{ray}}$, $\\varphi_{n,m}$, $\\phi_{n,m}$, are slow fading and may be unchanged in a large time scale compared to $\\alpha_{n,m}$. The Doppler spread determines how often these channel parameters change.} For a uniform linear array with $N$ antenna elements and an azimuth angle of $\\theta$, the response vector can be expressed as\n\\begin{eqnarray}\n\\label{eqn_au}\n\\mathbf{a}(\\theta)=\\frac{1}{\\sqrt{N}} \\left[1,e^{-j2\\pi\\frac{d}{\\lambda}\\sin(\\theta)},\\ldots,e^{-j2\\pi\\frac{d}{\\lambda}(N-1)\\sin(\\theta)}\\right]^{T},\n\\end{eqnarray}\nwhere $d$ and $\\lambda$ denote the distance between the adjacent antennas and carrier wavelength, respectively.\n\nIn the above channel model, we assume the transmitted signal is with narrowband and therefore, channel matrix is independent of frequency. For wideband transmission, OFDM is used to convert a frequency-selective channel into multiple flat fading channels and the corresponding channel matrices will be different at different subcarriers. Accordingly, the design of DL-JHPF in Section III will start at the narrowband systems and is then extended to the wideband OFDM systems.\n\n\\subsection{Signal Transmission}\n\nThe transmitter sends $N_{\\textrm{s}}$ parallel data streams to the receiver through the wireless channel. The bits of each data stream are first mapped to the symbol by the $M$-ary modulation. The symbol vector intended for the receiver, $\\mathbf{x}\\in\\mathbb{C}^{N_{\\textrm{s}}\\times 1}$ with $\\mathbb{E}\\left\\{\\mathbf{x}\\mathbf{x}^{H}\\right\\}=\\frac{1}{N_{\\textrm{s}}}\\mathbf{I}_{N_{\\textrm{s}}}$, is successively processed by the digital precoder, $\\mathbf{F}_{\\textrm{BB}}\\in\\mathbb{C}^{N_{\\textrm{T}}^{\\textrm{RF}}\\times N_{\\textrm{s}}}$, at the baseband and the analog precoder, $\\mathbf{F}_{\\textrm{RF}}\\in\\mathbb{C}^{N_{\\textrm{T}}\\times N_{\\textrm{T}}^{\\textrm{RF}}}$, through the phase shifters, yielding the transmitted signal\n\\begin{eqnarray}\n\\label{eqn_s}\n\\mathbf{s}=\\sqrt{P}\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbf{x},\n\\end{eqnarray}\nwhere $P$ denotes the transmit power. $\\mathbf{F}_{\\textrm{RF}}$ represents the phase-only modulation by the phase shifters and thus has the constraint of $\\left|[\\mathbf{F}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}$, $\\forall \\,i, \\,j$. 
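For concreteness, the channel construction in (\ref{eqn_H_tau}) and (\ref{eqn_au}) can be summarized by the short numerical sketch below before we turn to the receive processing. It is only an illustrative Python/NumPy sketch under simplifying assumptions: uniform linear arrays with $d=\lambda/2$ at both ends, uniformly drawn azimuth AoAs/AoDs, and $\sigma_{\alpha}^2=1$; the function names are ours and not part of the proposed framework.

\begin{verbatim}
import numpy as np

def ula_response(theta, n):
    # Array response vector of an n-element ULA with half-wavelength spacing.
    k = np.arange(n)
    return np.exp(-1j * np.pi * k * np.sin(theta)) / np.sqrt(n)

def cluster_channel(n_t=32, n_r=16, n_cl=3, n_ray=20, seed=0):
    # Narrowband clustered channel: normalized sum over N_cl * N_ray rays.
    rng = np.random.default_rng(seed)
    h = np.zeros((n_r, n_t), dtype=complex)
    for _ in range(n_cl * n_ray):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        phi_r, phi_t = rng.uniform(0.0, 2.0 * np.pi, size=2)  # AoA and AoD
        h += alpha * np.outer(ula_response(phi_r, n_r),
                              np.conj(ula_response(phi_t, n_t)))
    return np.sqrt(n_t * n_r / (n_cl * n_ray)) * h

H = cluster_channel()  # 16 x 32 complex channel matrix
\end{verbatim}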
$\\mathbf{F}_{\\textrm{BB}}$ is normalized as $\\|\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\|_{F}^2=N_{\\textrm{s}}$ to satisfy the total power constraint at the transmitter. Then the received signal at the receiver is given by\n\\begin{eqnarray}\n\\label{eqn_y}\n\\mathbf{y}=\\sqrt{P}\\mathbf{H}\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbf{x}+\\mathbf{n},\n\\end{eqnarray}\nwhere $\\mathbf{n}\\in\\mathbb{C}^{N_{\\textrm{R}}\\times 1}$ is additive white Gaussian noise (AWGN) with $\\mathcal{CN}(0,1)$ elements.\n\nThe received signal $\\mathbf{y}$ is then processed by the hybrid architecture at the receiver as\n\\setlength{\\arraycolsep}{0.05em}\n\\begin{eqnarray}\n\\label{eqn_r}\n\\mathbf{r}&&=\\mathbf{W}_{\\textrm{BB}}^H\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{y} =\\sqrt{P}\\mathbf{W}_{\\textrm{BB}}^H\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{H}\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\mathbf{x} +\\mathbf{W}_{\\textrm{BB}}^H\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{n},\n\\end{eqnarray}\nwhere $\\mathbf{W}_{\\textrm{RF}}\\in\\mathbb{C}^{N_{\\textrm{R}}\\times N_{\\textrm{R}}^{\\textrm{RF}}}$ and $\\mathbf{W}_{\\textrm{BB}}\\in\\mathbb{C}^{N_{\\textrm{R}}^{\\textrm{RF}}\\times N_{\\textrm{s}}}$ represent the analog combiner and digital combiner, respectively. A hardware constraint is imposed on $\\mathbf{W}_{\\textrm{RF}}$ such that $\\left|[\\mathbf{W}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{R}}}}$, $\\forall \\,i, \\,j$ similar to $\\mathbf{F}_{\\textrm{RF}}$. Then the detected signal vector, $\\mathbf{r}$, is demodulated to recover the original bits of $N_{\\textrm{s}}$ data streams.\n\nSince the performance of the digital communication system is ultimately determined by BER, we aim to jointly design $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$ to minimize the BER between the original and demodulated bits, that is\n\\begin{eqnarray}\n&&\\min\\limits_{\\mathbf{F}_{\\textrm{RF}},\\mathbf{F}_{\\textrm{BB}},\\mathbf{W}_{\\textrm{RF}},\\mathbf{W}_{\\textrm{BB}}}\\quad P_{\\textrm{e}}\\left(\\mathbf{F}_{\\textrm{RF}},\\mathbf{F}_{\\textrm{BB}},\\mathbf{W}_{\\textrm{RF}},\\mathbf{W}_{\\textrm{BB}}\\right),\\label{eqn_optim_problem}\\\\\n&&\\quad\\quad\\:\\:\\textrm{s.t.}\\qquad\\qquad\\!\\! \\left|[\\mathbf{F}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}, \\forall \\,i, \\,j,\\label{const1}\\\\\n&&\\qquad\\qquad\\qquad\\quad\\;\\; \\left|[\\mathbf{W}_{\\textrm{RF}}]_{i,j}\\right|=\\frac{1}{\\sqrt{N_{\\textrm{R}}}}, \\forall \\,i, \\,j,\\label{const2}\\\\\n&&\\qquad\\qquad\\qquad\\quad\\;\\; \\|\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}\\|_{F}^2=N_{\\textrm{s}}\\label{const3}.\n\\end{eqnarray}\nThe BER in (\\ref{eqn_optim_problem}) is a complicated nonlinear function of $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$ without closed-form expression and the constraints in (\\ref{const1}) and (\\ref{const2}) are non-convex, which make this optimization problem intractable to be solved by the traditional approaches. 
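To make the objective in (\ref{eqn_optim_problem})--(\ref{const3}) concrete, the following sketch simulates the end-to-end link in (\ref{eqn_s})--(\ref{eqn_r}) for one QPSK symbol vector and counts the resulting bit errors. It is a minimal Python/NumPy illustration, not part of the proposed design: the hybrid matrices are random placeholders that merely satisfy the constant-modulus and power constraints, and an i.i.d. channel stands in for the clustered model of the previous sketch.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_t, n_r, n_rf_t, n_rf_r, n_s, P = 32, 16, 3, 3, 3, 1.0

# Placeholder channel and hybrid processing matrices (random, for illustration only).
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_t, n_rf_t))) / np.sqrt(n_t)
W_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_r, n_rf_r))) / np.sqrt(n_r)
F_bb = rng.standard_normal((n_rf_t, n_s)) + 1j * rng.standard_normal((n_rf_t, n_s))
F_bb *= np.sqrt(n_s) / np.linalg.norm(F_rf @ F_bb, 'fro')   # total power constraint
W_bb = rng.standard_normal((n_rf_r, n_s)) + 1j * rng.standard_normal((n_rf_r, n_s))

# QPSK mapping: bit 0 -> +1, bit 1 -> -1 on each rail, so that E{x x^H} = I / N_s.
bits = rng.integers(0, 2, (n_s, 2))
x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2 * n_s)

noise = (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r)) / np.sqrt(2)
y = np.sqrt(P) * H @ F_rf @ F_bb @ x + noise                # received signal
r = W_bb.conj().T @ W_rf.conj().T @ y                       # detected signal

bits_hat = np.stack([(r.real < 0).astype(int), (r.imag < 0).astype(int)], axis=1)
ber = np.mean(bits_hat != bits)                             # empirical bit-error rate
\end{verbatim}

Averaging such runs over many payload and noise realizations, with the matrices produced by DL-JHPF in place of the random placeholders, yields the empirical BER evaluated in Section IV.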
DL is a potential solution by using the BP algorithm and thus we develop DL-JHPF to address this problem.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=6.6in]{Fig2}\n\\caption{Proposed DL-JHPF.}\\label{DL_framework}\n\\end{figure*}\n\n\\section{Proposed DL-JHPF}\n\nIn this section, we first briefly review the existing work on the DNN based end-to-end communications. Then we propose DL-JHPF, where the framework is first described, followed by the details of training, deployment, and testing along with the corresponding complexity analysis. Finally, we extend the framework to OFDM systems over wideband mmWave channels.\n\n\\subsection{DNN based End-to-End Communications}\n\nPrior works have shown that DNN based end-to-end optimization is an efficient tool to minimize BER. The BP algorithm makes the DNN based end-to-end communications over the air possible so long as the optimized performance metric is differentiable \\cite{T. O'hea}, \\cite{S. Dorner}, \\cite{H. Ye_b}. For the DNN based end-to-end communication system, the modules at the transmitter and the receiver are replaced by two DNNs, respectively. Specifically, the DNN at the transmitter encodes the original symbols into the transmitted signal and the one at the receiver recovers the original symbols from the output of the wireless channel. In the training stage, the error between the original and recovered symbols is computed and the weights of the two DNNs are adjusted iteratively based on the error gradient propagated from the output layer of the DNN at the receiver to optimize the recovery accuracy.\n\nIn this paper, we focus on the DL based joint analog and digital processing design for the transceiver in mmWave massive MIMO systems. Then, the existing DNN based end-to-end communication is not suitable for this task since it integrates the modules of the transceiver into two DNNs and thus cannot meet the hardware and power constraints in practical implementation. To address this challenge, we design DL-JHPF in the following.\n\n\\subsection{Framework Description}\n\n\n\nAs shown in Fig.~\\ref{DL_framework}, the proposed DL-JHPF consists of three parts: hybrid processing designer, signal flow simulator, and NN demodulator, which are elaborated as follows.\n\n\\emph{Hybrid processing designer:} It plays the role of outputting the hybrid processing matrices for the transceiver by using NNs based on the channel matrix. It includes six fully-connected NNs and is used to generate the analog and digital processing matrices for the transmitter and the receiver based on the channel matrix, $\\mathbf{H}$. Specifically, $\\mathbf{H}\\in\\mathbb{C}^{N_{\\textrm{R}}\\times N_{\\textrm{T}}}$ is first converted to a $2N_{\\textrm{T}}N_{\\textrm{R}}\\times 1$ real-valued vector.\\footnote{In Fig.~\\ref{DL_framework}, only the main process of the framework is shown while the matrix and vector reshaping process is omitted.} Then it is input into two NNs, called precoder phase NN (PP-NN) and combiner phase NN (CP-NN), to generate the corresponding phases, $\\boldsymbol{\\phi}_{\\textrm{P}}\\in\\mathbb{R}^{N_{\\textrm{T}}N_{\\textrm{T}}^{\\textrm{RF}}\\times 1}$ and $\\boldsymbol{\\phi}_{\\textrm{C}}\\in\\mathbb{R}^{N_{\\textrm{R}}N_{\\textrm{R}}^{\\textrm{RF}}\\times 1}$, respectively, for phase shifters. 
With $\\boldsymbol{\\phi}_{\\textrm{P}}$ and $\\boldsymbol{\\phi}_{\\textrm{C}}$, two complex-valued vectors with constant amplitude elements are generated as\n\\begin{eqnarray}\n\\label{eqn_fRF}\n\\bar{\\mathbf{f}}_{\\textrm{RF}}=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}e^{j\\boldsymbol{\\phi}_{\\textrm{P}}},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_wRF}\n\\bar{\\mathbf{w}}_{\\textrm{RF}}=\\frac{1}{\\sqrt{N_{\\textrm{R}}}}e^{j\\boldsymbol{\\phi}_{\\textrm{C}}},\n\\end{eqnarray}\nbased on which, $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ are given by\n\\begin{eqnarray}\n\\label{eqn_FRF}\n\\mathbf{F}_{\\textrm{RF}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{f}}_{\\textrm{RF}}),\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_WRF}\n\\mathbf{W}_{\\textrm{RF}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{w}}_{\\textrm{RF}}),\n\\end{eqnarray}\nwhere $\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\cdot)$ denotes the operation reshaping a vector to a matrix.\nThen, $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ along with $\\mathbf{H}$ are used to generate a low-dimensional equivalent channel, i.e.,\n\\begin{eqnarray}\n\\label{eqn_Heq}\n\\mathbf{H}_{\\textrm{eq}}=\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{H}\\mathbf{F}_{\\textrm{RF}}.\n\\end{eqnarray}\n$\\mathbf{H}_{\\textrm{eq}}\\in\\mathbb{C}^{N_{\\textrm{R}}^{\\textrm{RF}}\\times N_{\\textrm{T}}^{\\textrm{RF}}}$ is converted to a $2N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{R}}^{\\textrm{RF}}\\times 1$ real-valued vector before it is input into four parallel NNs. The first two NNs, corresponding to the real part digital combiner NN (ReDC-NN) and the imaginary part digital combiner NN (ImDC-NN), output two $N_{\\textrm{s}}N_{\\textrm{R}}^{\\textrm{RF}}\\times1$ vectors, $\\bar{\\mathbf{w}}_{\\textrm{BB,re}}$, $\\bar{\\mathbf{w}}_{\\textrm{BB,im}}$, respectively. Then $\\mathbf{W}_{\\textrm{BB}}$ can be obtained as\n\\begin{eqnarray}\n\\label{eqn_WBB}\n\\mathbf{W}_{\\textrm{BB}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{w}}_{\\textrm{BB,re}}+j\\bar{\\mathbf{w}}_{\\textrm{BB,im}}).\n\\end{eqnarray}\nAnother two NNs, corresponding to the real part digital precoder NN (ReDP-NN) and the imaginary part digital precoder NN (ImDP-NN), output two $N_{\\textrm{s}}N_{\\textrm{T}}^{\\textrm{RF}}\\times1$ vectors, $\\bar{\\mathbf{f}}_{\\textrm{BB,re}}$, $\\bar{\\mathbf{f}}_{\\textrm{BB,im}}$, respectively. 
Then the unnormalized digital precoder $\\bar{\\mathbf{F}}_{\\textrm{BB}}$ is given by\n\\begin{eqnarray}\n\\label{eqn_unnorm_FBB}\n\\bar{\\mathbf{F}}_{\\textrm{BB}}=\\mathcal{T}_{\\textrm{v}\\rightarrow \\textrm{m}}(\\bar{\\mathbf{f}}_{\\textrm{BB,re}}+j\\bar{\\mathbf{f}}_{\\textrm{BB,im}}).\n\\end{eqnarray}\nThe following normalization utilizes $\\bar{\\mathbf{F}}_{\\textrm{BB}}$ and $\\mathbf{F}_{\\textrm{RF}}$ in (\\ref{eqn_FRF}) to output the final digital precoder as\n\\begin{eqnarray}\n\\label{eqn_DP_norm}\n\\mathbf{F}_{\\textrm{BB}}=\\frac{\\sqrt{N_{\\textrm{s}}}}{\\|\\mathbf{F}_{\\textrm{RF}}\\bar{\\mathbf{F}}_{\\textrm{BB}}\\|_{F}}\\bar{\\mathbf{F}}_{\\textrm{BB}}.\n\\end{eqnarray}\n\n\\emph{Signal flow simulator:} In the training stage, it simulates the process from the original bits, $\\mathbf{X}_{\\textrm{b}}$, to the detected signal, $\\mathbf{r}$, over the channel, $\\mathbf{H}$, with AWGN, $\\mathbf{n}$, where $\\mathbf{X}_{\\textrm{b}}$ with the size of $N_{\\textrm{s}}\\times\\log_2 M$, $\\mathbf{H}$, and $\\mathbf{n}$ are generated in the simulation environment. It bridges the back propagation of the error gradient from NN demodulator to hybrid processing designer as we will elaborate in Section III.C. In the deployment and testing stage, the signal flow simulator is replaced by the actual transceiver and the actual wireless fading channel. In these two stages, the analog and digital processing matrices at the transceiver are provided by the hybrid processing designer based on the simulated or actual $\\mathbf{H}$.\n\n\\emph{NN demodulator:} It is a fully-connected NN, which receives the detected signal, $\\mathbf{r}$, from the signal flow simulator (in the training stage) or the actual receiver (in the testing stage) and outputs recovered bits $\\hat{\\mathbf{x}}_{\\textrm{b}}\\in \\mathbb{R}^{N_{\\textrm{s}}\\log_2 M\\times1}$ with each element lies in the interval $[0,1]$. $\\hat{\\mathbf{x}}_{\\textrm{b}}$ is then reshaped to $\\hat{\\mathbf{X}}_{\\textrm{b}}$ with the same size as $\\mathbf{X}_{\\textrm{b}}$.\n\n\\textbf{Remark 1.} \\emph{The learning of hybrid processing matrices, $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{W}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, and $\\mathbf{W}_{\\textrm{BB}}$, in DL-JHPF is embedded into the signal transmission and demodulation process instead of approximating pre-designed label matrices. All NNs are optimized jointly sharing the mapping principle from $\\mathbf{X}_{\\textrm{b}}$ at the transmitter to $\\mathbf{X}_{\\textrm{b}}$ at the receiver that resembles an\nautoencoder. By minimizing the error between $\\mathbf{X}_{\\textrm{b}}$ and $\\hat{\\mathbf{X}}_{\\textrm{b}}$, each NN in hybrid processing designer can learn to output the appropriate vectors with specific meaning implicitly, i.e., phases of phase shifters and real and imaginary parts of the digital precoder and combiner. By doing this, DL-JHPF will have the potential to break through the performance of the existing schemes.}\n\n\\subsection{Framework Training}\n\nThe goal of offline training is to determine the weights of the NNs in hybrid processing designer and NN demodulator based on the training samples with the input tuple $\\langle\\mathbf{H}, \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$ and the label $\\mathbf{X}_{\\textrm{b}}$, where $\\mathbf{H}$ is generated by certain channel model and $\\mathbf{n}$ is generated according to the $\\mathcal{CN}(0,1)$ distribution. 
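A single training sample can thus be assembled as in the brief sketch below, where the channel generator is an i.i.d. placeholder (to be replaced by the adopted channel model), the payload bits double as the label, and the noise is drawn from $\mathcal{CN}(0,1)$; the helper name is ours.

\begin{verbatim}
import numpy as np

def draw_training_sample(n_t=32, n_r=16, n_s=3, mod_order=4, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Channel: i.i.d. CN(0,1) placeholder; substitute the clustered/3GPP model in practice.
    H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    X_b = rng.integers(0, 2, (n_s, int(np.log2(mod_order))))   # original bits, also the label
    noise = (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r)) / np.sqrt(2)
    return (H, X_b, noise), X_b                                # input tuple <H, X_b, n> and label
\end{verbatim}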
By minimizing the end-to-end error between the original bits, $\\mathbf{X}_{\\textrm{b}}$, and the recovered bits, $\\hat{\\mathbf{X}}_{\\textrm{b}}$, the weights of each NN in DL-JHPF are adjusted iteratively and the training procedure is elaborated as follows.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.0in]{Fig3}\n\\caption{Training model for proposed DL-JHPF.}\\label{visual_model}\n\\end{figure}\n\nThe proposed DL-JHPF is actually an integrated DNN consisting of neuron layers and custom layers. The training model in Fig.~\\ref{visual_model} demonstrates the detailed training process of the framework. For each training sample, $\\mathbf{H}$ is converted into a real-valued vector by matrix-to-vector reshaping and real and imaginary parts stacking, which is input into PP-NN and CP-NN consisting of dense and batch normalization (BN) layers to generate the corresponding phases, $\\boldsymbol{\\phi}_{\\textrm{P}}$ and $\\boldsymbol{\\phi}_{\\textrm{C}}$, respectively. Then (\\ref{eqn_fRF}) and (\\ref{eqn_wRF}) are executed by the same custom layer. Afterwards, the output vectors are reshaped according to (\\ref{eqn_FRF}) and (\\ref{eqn_WRF}) to generate $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$, respectively. Next, (\\ref{eqn_Heq}) is executed by a custom layer to generate $\\mathbf{H}_{\\textrm{eq}}$, followed by matrix-to-vector reshaping and real and imaginary parts stacking. This vector is input into four NNs consisting of dense and BN layers, i.e., ReDC-NN, ImDC-NN, ReDP-NN, and ImDP-NN, respectively. The output vectors of the former two NNs are used to generate $\\mathbf{W}_{\\textrm{BB}}$ through real and imaginary parts combining and vector-to-matrix reshaping as (\\ref{eqn_WBB}). Using the same operation, the output vectors of the latter two NNs are used to generate $\\bar{\\mathbf{F}}_{\\textrm{BB}}$ as (\\ref{eqn_unnorm_FBB}). After obtaining $\\bar{\\mathbf{F}}_{\\textrm{BB}}$, a custom layer is added to perform the normalization in (\\ref{eqn_DP_norm}) to generate $\\mathbf{F}_{\\textrm{BB}}$. Then (\\ref{eqn_r}) is executed through a custom layer by using the input tuple $\\langle\\mathbf{H}, \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$ and the generated $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$ to yield the detected signal, $\\mathbf{r}$. After real and imaginary stacking, $\\mathbf{r}$ is converted to a real-valued vector and input into the NN demodulator consisting of dense and BN layers to output the recovered bits, $\\hat{\\mathbf{x}}_{\\textrm{b}}$, which is then reshaped to $\\hat{\\mathbf{X}}_{\\textrm{b}}$. 
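The training model in Fig.~\ref{visual_model} can be condensed into the following sketch. The paper fixes neither a DL library nor implementation details, so this is only an assumed PyTorch-style illustration: the layer widths follow Table~\ref{table_1}, the transmit power is set to one, batch handling and BN placement are simplified, and all helper names are ours.

\begin{verbatim}
import math
import torch
import torch.nn as nn

N_T, N_R, N_T_RF, N_R_RF, N_S, M = 32, 16, 3, 3, 3, 4
BITS = int(math.log2(M))

def mlp(sizes, out_act=None):
    # Dense + BN + ReLU hidden layers, optional activation on the output layer.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers += [nn.BatchNorm1d(sizes[i + 1]), nn.ReLU()]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

pp_nn = mlp([2 * N_T * N_R, 512, 256, 128, N_T * N_T_RF], nn.ReLU())       # phases of F_RF
cp_nn = mlp([2 * N_T * N_R, 512, 256, 128, 64, N_R * N_R_RF], nn.ReLU())   # phases of W_RF
dp_re = mlp([2 * N_T_RF * N_R_RF, 20, 40, 20, N_T_RF * N_S])
dp_im = mlp([2 * N_T_RF * N_R_RF, 20, 40, 20, N_T_RF * N_S])
dc_re = mlp([2 * N_T_RF * N_R_RF, 20, 40, 20, N_R_RF * N_S])
dc_im = mlp([2 * N_T_RF * N_R_RF, 20, 40, 20, N_R_RF * N_S])
demod = mlp([2 * N_S, 20, 50, 20, N_S * BITS], nn.Sigmoid())

def ri(x):
    # Stack real and imaginary parts into one real-valued feature vector per sample.
    return torch.cat([x.real.flatten(1), x.imag.flatten(1)], dim=1)

def forward(H, x, noise):
    # H: (B, N_R, N_T) complex, x: (B, N_S) complex symbols, noise: (B, N_R) complex.
    B = H.shape[0]
    F_rf = torch.polar(torch.full((B, N_T * N_T_RF), N_T ** -0.5),
                       pp_nn(ri(H))).view(B, N_T, N_T_RF)
    W_rf = torch.polar(torch.full((B, N_R * N_R_RF), N_R ** -0.5),
                       cp_nn(ri(H))).view(B, N_R, N_R_RF)
    H_eq = W_rf.conj().transpose(1, 2) @ H @ F_rf                # equivalent channel
    F_bb = torch.complex(dp_re(ri(H_eq)), dp_im(ri(H_eq))).view(B, N_T_RF, N_S)
    F_bb = F_bb * (N_S ** 0.5 / torch.linalg.matrix_norm(F_rf @ F_bb).view(B, 1, 1))
    W_bb = torch.complex(dc_re(ri(H_eq)), dc_im(ri(H_eq))).view(B, N_R_RF, N_S)
    y = H @ F_rf @ F_bb @ x.unsqueeze(-1) + noise.unsqueeze(-1)  # signal flow simulator
    r = (W_bb.conj().transpose(1, 2) @ W_rf.conj().transpose(1, 2) @ y).squeeze(-1)
    return demod(ri(r))                                          # soft bit estimates in [0, 1]

# One training step (X_b_flat: (B, N_S * BITS) float labels):
#   loss = nn.functional.binary_cross_entropy(forward(H, x, noise), X_b_flat); loss.backward()
\end{verbatim}

Because every operation from the channel input to the soft bits is differentiable, the BCE gradient can propagate back through the custom complex-valued layers to all six designer NNs and the demodulator, which is the property formalized next.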
The binary cross-entropy (BCE) loss between $\mathbf{X}_{\textrm{b}}$ and $\hat{\mathbf{X}}_{\textrm{b}}$ is calculated as
\begin{eqnarray}
\label{eqn_loss}
\mathcal{L}&&=-\frac{1}{N_{\textrm{tr}}}\sum_{n=1}^{N_{\textrm{tr}}}\sum_{i=1}^{N_{\textrm{s}}}\sum_{j=1}^{\log_2 M}\biggl( [\mathbf{X}_{\textrm{b}}^{n}]_{i,j}\ln([\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)]_{i,j})\nonumber\\
&&\quad+(1-[\mathbf{X}_{\textrm{b}}^{n}]_{i,j})\ln(1-[\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)]_{i,j})\biggr),
\end{eqnarray}
where $N_{\textrm{tr}}$ denotes the number of training samples, the superscript $n$ indicates the index of the training sample, and $\hat{\mathbf{X}}_{\textrm{b}}^{n}$ is expressed as a function of the parameter set of all NNs in DL-JHPF, i.e., $\boldsymbol\Theta$.

Recalling the optimization problem in (\ref{eqn_optim_problem}), the BER over the training set can be written as
\begin{eqnarray}
\label{eqn_BER}
&&P_{\textrm{e,tr}}\left(\mathbf{F}_{\textrm{RF}},\mathbf{F}_{\textrm{BB}},\mathbf{W}_{\textrm{RF}},\mathbf{W}_{\textrm{BB}}\right) =P_{\textrm{e,tr}}\left(\boldsymbol\Theta\right) \nonumber\\
&&\quad=\frac{\sum_{n=1}^{N_{\textrm{tr}}}\sum_{i=1}^{N_{\textrm{s}}}\sum_{j=1}^{\log_2 M}|[\mathbf{X}_{\textrm{b}}^{n}]_{i,j}-[\hat{\mathbf{X}}_{\textrm{b,bin}}^{n}(\boldsymbol\Theta)]_{i,j}|}{N_{\textrm{tr}}N_{\textrm{s}}\log_2 M},\;\;\;\;
\end{eqnarray}
where $\hat{\mathbf{X}}_{\textrm{b,bin}}^{n}(\boldsymbol\Theta)$ is the binary demodulated bit matrix with $[\hat{\mathbf{X}}_{\textrm{b,bin}}^{n}(\boldsymbol\Theta)]_{i,j}=0$ for $[\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)]_{i,j}<0.5$ and $1$ otherwise. With $\mathbf{X}_{\textrm{b}}^{n}$ fixed, minimizing $\mathcal{L}$ in (\ref{eqn_loss}) with respect to $\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)$ yields $\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)=\mathbf{X}_{\textrm{b}}^{n}$, which also minimizes $P_{\textrm{e,tr}}\left(\boldsymbol\Theta\right)$ in (\ref{eqn_BER}). Therefore, DL-JHPF can directly minimize the BER over the training set by minimizing the BCE loss, and the feasibility is guaranteed by the following theorem.

\textbf{Theorem 1.} \emph{The proposed DL-JHPF is trainable and can minimize the BCE loss through the BP algorithm.}
\begin{IEEEproof}
Considering mini-batch training, the BCE loss over a batch is written as
\begin{eqnarray}
\label{eqn_loss_batch}
\mathcal{L}_{\textrm{bat}}&&=-\frac{1}{N_{\textrm{bat}}}\sum_{n=1}^{N_{\textrm{bat}}}\sum_{i=1}^{N_{\textrm{s}}}\sum_{j=1}^{\log_2 M}\biggl( [\mathbf{X}_{\textrm{b}}^{n}]_{i,j}\ln([\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)]_{i,j})\nonumber\\
&&\quad+(1-[\mathbf{X}_{\textrm{b}}^{n}]_{i,j})\ln(1-[\hat{\mathbf{X}}_{\textrm{b}}^{n}(\boldsymbol\Theta)]_{i,j})\biggr),
\end{eqnarray}
where $N_{\textrm{bat}}$ denotes the batch size. Then $\boldsymbol\Theta$ will be updated $\lceil\frac{N_{\textrm{tr}}}{N_{\textrm{bat}}}\rceil$ times in each epoch.

To prove Theorem 1, we need to show that $\mathcal{L}_{\textrm{bat}}$ is differentiable with respect to each parameter in $\boldsymbol\Theta$. According to \cite{Y. LeCun}, the outputs of each NN in DL-JHPF are differentiable with respect to the corresponding weights and inputs.
Since DL-JHPF can be viewed as an integrated DNN consisting of neuron layers and custom layers, the proof can be further simplified to prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to the outputs of each NN due to chain rule. In the following, we prove the differentiability of $\\mathcal{L}_{\\textrm{bat}}$ with respect to the outputs of each NN by incorporating the custom layers.\n\n\\emph{NN demodulator:} From (\\ref{eqn_loss_batch}), $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\hat{\\mathbf{X}}_{\\textrm{b}}^{n}(\\boldsymbol\\Theta)]_{i,j}, \\forall i, j$.\n\n\\emph{Re\/ImDC-NN:} As mentioned in Section III.B, $\\bar{\\mathbf{w}}_{\\textrm{BB,re}}$ and $\\bar{\\mathbf{w}}_{\\textrm{BB,im}}$ are the outputs of ReDC-NN and ImDC-NN, respectively. Without loss of generality, we will prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}$. According to (\\ref{eqn_r}) and (\\ref{eqn_WBB}), $[\\mathbf{r}]_{1}$ is the function of $[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}$, that is\n\\begin{eqnarray}\n\\label{eqn_r1}\n\\!\\!\\!\\!\\![\\mathbf{r}]_{1}&&=([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}-j[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1})[\\mathbf{z}]_{1}+C_1\\nonumber\\\\\n&&=([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}-j[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1})([\\mathbf{z}]_{1,\\textrm{re}}+j[\\mathbf{z}]_{1,\\textrm{im}})+C_1\\nonumber\\\\\n&&=([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}[\\mathbf{z}]_{1,\\textrm{re}}+[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}[\\mathbf{z}]_{1,\\textrm{im}}+C_{1,\\textrm{re}})\\nonumber\\\\\n&&\\quad+j([\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}[\\mathbf{z}]_{1,\\textrm{im}}-[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}[\\mathbf{z}]_{1,\\textrm{re}}+C_{1,\\textrm{im}})\\nonumber\\\\\n&&=[\\mathbf{r}]_{1,\\textrm{re}}+j[\\mathbf{r}]_{1,\\textrm{im}},\n\\end{eqnarray}\nwhere $\\mathbf{z}=\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{y}$ and $C_1$ denotes the component of $[\\mathbf{r}]_{1}$ independent of $[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}$ with the subscripts `re' and `im' indicating the real and imaginary parts, respectively. Since $[\\mathbf{r}]_{1,\\textrm{re}}$ and $[\\mathbf{r}]_{1,\\textrm{im}}$ are a part of inputs of NN demodulator, $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\mathbf{r}]_{1,\\textrm{re}}$ and $[\\mathbf{r}]_{1,\\textrm{im}}$. 
Then we have\n\\begin{eqnarray}\n\\label{eqn_derL_wre_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}} &&=\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,re}}]_{1}}\\nonumber\\\\\n&&=[\\mathbf{z}]_{1,\\textrm{re}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} +[\\mathbf{z}]_{1,\\textrm{im}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_derL_wim_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}} &&=\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{w}}_{\\textrm{BB,im}}]_{1}}\\nonumber\\\\\n&&=[\\mathbf{z}]_{1,\\textrm{im}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{re}}} -[\\mathbf{z}]_{1,\\textrm{re}}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{1,\\textrm{im}}}.\n\\end{eqnarray}\n\n\\emph{Re\/ImDP-NN:} Since $\\bar{\\mathbf{f}}_{\\textrm{BB,re}}$ and $\\bar{\\mathbf{f}}_{\\textrm{BB,im}}$ are the outputs of ReDP-NN and ImDP-NN, respectively, we also aim to prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}$. Considering the normalization in (\\ref{eqn_DP_norm}), we first calculate the derivatives of $\\mathcal{L}_{\\textrm{bat}}$ with respect to the real and imaginary parts of $[\\mathbf{F}_{\\textrm{BB}}]_{1,1}$, i.e., $\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}}$ and $\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}}$, which can be obtained similarly to (\\ref{eqn_derL_wre_1}) and (\\ref{eqn_derL_wim_1}). 
According to (\\ref{eqn_DP_norm}), we have\n\\begin{eqnarray}\n\\label{eqn_FBB11re}\n[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}=\\frac{[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}}{f([\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}, [\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1})},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_FBB11im}\n[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}=\\frac{[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}}{f([\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}, [\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1})},\n\\end{eqnarray}\nwhere $f([\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}, [\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1})\\!=\\![([\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{re}}[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1} \\!-\\![\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{im}}[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}+C_{2,\\textrm{re}})^2+([\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{re}}[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1} \\!+\\![\\mathbf{F}_{\\textrm{RF}}]_{1,1,\\textrm{im}}[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}+C_{2,\\textrm{im}})^2+C_3]^{\\frac{1}{2}}$ with $C_2$ and $C_3$ independent of $[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}$. Then we can find that $[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}$ and $[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}$ are differentiable with respect to $[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}$ and $[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}$, which leads to\n\\begin{eqnarray}\n\\label{eqn_derL_fBBre_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}} &&\\!=\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,re}}]_{1}},\\nonumber\\\\\n&&\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_derL_fBBim_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}} &&\\!=\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{F}_{\\textrm{BB}}]_{1,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{BB,im}}]_{1}}.\\nonumber\\\\\n&&\n\\end{eqnarray}\n\n\\emph{PP-NN:} We still aim to prove that $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to $[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}$ that is one of the output of PP-NN and generates $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}$ and $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}$ as\n\\begin{eqnarray}\n\\label{eqn_fRF_reim}\n\\frac{1}{\\sqrt{N_{\\textrm{T}}}}e^{j[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}}&&=\\frac{1}{\\sqrt{N_{\\textrm{T}}}}(\\cos[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1} +j\\sin[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1})\\nonumber\\\\\n&&=[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}+j[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}.\n\\end{eqnarray}\nFrom (\\ref{eqn_r}) and (\\ref{eqn_Heq}), 
$[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}$ and $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}$ influence the values of $[\\mathbf{r}]_{i}, i=1,\\ldots,N_{\\textrm{s}}$ and $[\\mathbf{H}_{\\textrm{eq}}]_{j,1}, j=1,\\ldots,N_{\\textrm{R}}^{\\textrm{RF}}$. According to the previous proof, $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to the real and imaginary parts of each element in $\\mathbf{r}$ and $\\mathbf{H}_{\\textrm{eq}}$. $[\\mathbf{r}]_{i,\\textrm{re}}$, $[\\mathbf{r}]_{i,\\textrm{im}}$, $[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}$, and $[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}$, $i=1,\\ldots,N_{\\textrm{s}}, j=1,\\ldots,N_{\\textrm{R}}^{\\textrm{RF}}$ are also differentiable with respect to $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}$ and $[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}$. Resorting to chain rule, we have\n\\begin{eqnarray}\n\\label{eqn_derL_fRF_1re}\n&&\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} =\\sum_{i=1}^{N_{\\textrm{s}}}\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}}\\right)\\nonumber\\\\\n&&+\\!\\sum_{j=1}^{N_{\\textrm{R}}^{\\textrm{RF}}}\\!\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}}\\right)\\!,\\nonumber\\\\\n&&\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{eqn_derL_fRF_1im}\n&&\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} =\\sum_{i=1}^{N_{\\textrm{s}}}\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{re}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{r}]_{i,\\textrm{im}}} \\frac{\\partial[\\mathbf{r}]_{i,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}\\right)\\nonumber\\\\\n&&+\\!\\sum_{j=1}^{N_{\\textrm{R}}^{\\textrm{RF}}}\\!\\left(\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{re}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} \\!+\\!\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}} \\frac{\\partial[\\mathbf{H}_{\\textrm{eq}}]_{j,1,\\textrm{im}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}\\right)\\!.\\nonumber\\\\\n&&\n\\end{eqnarray}\nBy considering (\\ref{eqn_fRF_reim})$-$(\\ref{eqn_derL_fRF_1im}), we arrive at\n\\begin{eqnarray}\n\\label{eqn_derL_pha_1}\n\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}} 
&&=\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} \\frac{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}}{\\partial[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}} +\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}} \\frac{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}}{\\partial[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}}\\nonumber\\\\\n&&=-\\sin[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{re}}} +\\cos[\\boldsymbol{\\phi}_{\\textrm{P}}]_{1}\\frac{\\partial\\mathcal{L}_{\\textrm{bat}}}{\\partial[\\bar{\\mathbf{f}}_{\\textrm{RF}}]_{1,\\textrm{im}}},\n\\end{eqnarray}\n\n\\emph{CP-NN:} The proof is similar to that of PP-NN and thus is omitted for simplicity.\n\nNow we have shown $\\mathcal{L}_{\\textrm{bat}}$ is differentiable with respect to each parameter in $\\boldsymbol\\Theta$, which completes the proof.\n\\end{IEEEproof}\n\n\nIt can be seen that the proposed DL-JHPF is abstracted into an integrated DNN, where the hybrid processing matrices, $\\mathbf{F}_{\\textrm{RF}}$, $\\mathbf{F}_{\\textrm{BB}}$, $\\mathbf{W}_{\\textrm{RF}}$, and $\\mathbf{W}_{\\textrm{BB}}$, are essentially the trainable weights therein. From the proof of Theorem 1, each weight of this integrated DNN can be optimized iteratively through BP algorithm by minimizing the BCE loss. Therefore, the optimal precoding and combining matrices on training set are obtained.\n\nFor the NNs in Fig.~\\ref{visual_model}, each dense layer is with rectified linear unit (ReLU) activation function and followed by a BN layer to avoid gradient diffusion and overfitting. The number of dense layers and the number of neurons in each dense layer need to be adjusted according to the input and output dimensions. Since the outputs of the NNs will be used for hybrid processing at the transmitter and the reciever, the activation functions of the output layers should be carefully designed and are elaborated as follows.\n\n\\emph{PP-NN and CP-NN:} The two NNs generate the phases for $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$, respectively. Since (\\ref{eqn_fRF}) and (\\ref{eqn_wRF}) are periodic functions, ReLU activation function is used in the output layer to provide the unbiased output for all possible phases. We may also use Sigmoid or hyperbolic tangent as the activation function, after which the outputs are multiplied by $2\\pi$ or $\\pi$ to obtain the final phases with the range of $[0,2\\pi]$ or $[-\\pi,\\pi]$. According to the simulation trails, ReLU and hyperbolic tangent achieve almost the same performance while Sigmoid performs worse. Therefore, ReLU is preferable since it is simple and free of the operation of exponential functions.\n\n\\emph{Re\/ImDP-NN and Re\/ImDC-NN:} The four NNs generate the real and imaginary parts for $\\mathbf{F}_{\\textrm{BB}}$ and $\\mathbf{W}_{\\textrm{BB}}$, respectively. Since $\\mathbf{F}_{\\textrm{BB}}$ can be normalized by (\\ref{eqn_DP_norm}) while $\\mathbf{W}_{\\textrm{BB}}$ has no constraint, the output layers do not apply any activation function to impose constraints and directly output the values that are input into the neurons.\n\n\\emph{NN demodulator:} This NN approximates the original bits, $\\mathbf{X}_{\\textrm{b}}$, based on $\\mathbf{r}$. 
The approximation for each element in $\mathbf{X}_{\textrm{b}}$ is a binary classification and thus the Sigmoid activation function is used for the output layer of the NN demodulator.

\vspace{-0.2cm}
\subsection{Deployment and Testing}

In this subsection, we elaborate on the deployment and testing of the trained DL-JHPF for practical implementation, where $\mathbf{H}$ is assumed to be available at both the transmitter and the receiver.\footnote{Although only the estimated channel is available in practical implementation, it has been shown by the simulation results that a relatively accurate channel estimate hardly causes performance loss and is almost equivalent to $\mathbf{H}$.}

The practical deployment of DL-JHPF includes the following three parts:

\emph{Deployment of hybrid processing designer:} PP-NN and CP-NN will be deployed together at \emph{both the transmitter and the receiver} to output the analog processing matrices, $\mathbf{F}_{\textrm{RF}}$ and $\mathbf{W}_{\textrm{RF}}$, based on which the equivalent channel, $\mathbf{H}_{\textrm{eq}}$, can be generated via (\ref{eqn_Heq}). ReDP-NN and ImDP-NN are equipped at the \emph{transmitter} to generate the digital precoder, $\mathbf{F}_{\textrm{BB}}$, while ReDC-NN and ImDC-NN are equipped at the \emph{receiver} to generate the digital combiner, $\mathbf{W}_{\textrm{BB}}$, both based on $\mathbf{H}_{\textrm{eq}}$.

\emph{Deployment of signal flow simulator:} It is only used in the training stage and will be replaced by the actual transceiver and wireless fading channel in the deployment and testing stage.

\emph{Deployment of NN demodulator:} It will be deployed at the \emph{receiver} to output the recovered bits, $\hat{\mathbf{X}}_{\textrm{b}}$, based on the detected signal, $\mathbf{r}$, after compensating for the impact of the fading channel.

When testing the trained DL-JHPF in the real world, the channel may change rapidly due to the relative motion of the transceiver and scatterers, in which case DL-JHPF will be faced with new propagation scenarios whose channel statistics differ from those in the training stage. This channel scenario discrepancy poses a high requirement on the robustness of DL-JHPF. Fortunately, the offline trained framework in Section III.C is quite robust to new channel scenarios that were not observed before, as shown by our simulation results (Figs.~\ref{HBD_BS_proDL_UMaipCSI} and \ref{HBD_BS_proDL_ofdm_UMaipCSI}). Further online fine-tuning may only provide marginal performance improvement but requires a relatively large overhead and needs to be performed frequently in rapidly changing channel scenarios. In addition, only the NNs at the receiver can be fine-tuned and thus the performance after fine-tuning will still have an intrinsic loss compared to the end-to-end training in Section III.C. To sum up, the proposed framework can cope with the mismatch of the channel scenario without relying on fine-tuning in most cases.

\subsection{Complexity Analysis}

In this subsection, we analyze the computational complexity of the proposed DL-JHPF in the testing stage by using the metric of the required number of floating point operations (FLOPs).
According to Fig.~\\ref{visual_model}, the total required FLOPs of all neural layers in DL-JHPF is given by\n\\begin{eqnarray}\n\\label{eqn_Complexity_NN}\n\\mathcal{C}_{\\textrm{NN}}\\sim\\mathcal{O}\\left(\\sum_{\\Delta\\in\\mathcal{N}}\\sum_{i=1}^{L^{\\Delta}-1}N^{\\Delta}_{i}N^{\\Delta}_{i+1}\\right),\n\\end{eqnarray}\nwhere $\\mathcal{N}$ denotes the set including all NNs in DL-JHPF, $L^{\\Delta}$ and $N^{\\Delta}_{i}$ represent the number of neural layers and the number of neurons of the $i$th neural layer of the NN $\\Delta$.\n\nIn addition, the complexity of matrix multiplications in the framework is given by\n\\begin{eqnarray}\n\\label{eqn_Complexity_matrix}\n\\mathcal{C}_{\\textrm{Mat}}\\sim\\mathcal{O}\\bigl(N_{\\textrm{R}}^{\\textrm{RF}}N_{\\textrm{T}}N_{\\textrm{R}}).\n\\end{eqnarray}\n\nThen, the total complexity of the proposed DL-JHPF can be expressed as\n\\begin{eqnarray}\n\\label{eqn_Complexity_DL_JHPF}\n\\mathcal{C}_{\\textrm{DL-JHPF}}\\sim\\mathcal{C}_{\\textrm{NN}}+\\mathcal{C}_{\\textrm{Mat}}.\n\\end{eqnarray}\nIt is noted that the NNs can be run efficiently via parallel computing on the graphic processing unit (GPU) and the simple matrix multiplications only cause negligible computational load for the central processing unit (CPU) compared with the existing schemes. Therefore, the proposed DL-JHPF is with low complexity and consumes the very limited runtime.\n\n\\subsection{Extension to OFDM Systems}\n\nIn this subsection, we extend the proposed DL-JHPF to the wideband OFDM systems. Two key issues need to be considered for the extension:\n\\begin{itemize}[\\IEEEsetlabelwidth{Z}]\n\\item[1)] In the OFDM systems, the digital precoder and combiner can be designed independently for different subcarriers while the analog precoder and combiner must be shared by all subcarriers. It is critical to design the unified analog precoder and combiner performing well for all subcarriers.\n\n\\item[2)] It is important to maintain the relatively small size, i.e., the number of hidden layers and the number of neurons in each layer in the NNs, and short training time for DL-JHPF when the number of subcarriers is large.\n\\end{itemize}\n\nIn the following, we study how to address the two issues when extending DL-JHPF to the OFDM systems.\n\nAccording to \\cite{P. 
Dong_b}, the $N_{\\textrm{R}}\\times N_{\\textrm{T}}$ channel matrix between the receiver and the transmitter of the $k$th subcarrier can be expressed as\n\\begin{eqnarray}\n\\label{eqn_Hk}\n\\mathbf{H}[k]&&=\\beta\\sum_{n=1}^{N_{\\textrm{cl}}}\\sum_{m=1}^{N_{\\textrm{ray}}} \\alpha_{n,m}e^{-j2\\pi\\tau_n f_s\\frac{k}{K}}\\mathbf{a}_{\\textrm{R}}(\\varphi_{n,m})\\mathbf{a}_{\\textrm{T}}^{H}(\\phi_{n,m}),\n\\end{eqnarray}\nwhere $\\beta=\\sqrt{\\frac{N_{\\textrm{T}}N_{\\textrm{R}}}{N_{\\textrm{cl}}N_{\\textrm{ray}}}}$, $\\tau_n$, $f_s$, and $K$ denote the delay of the $n$th cluster, the sampling rate, and the number of OFDM subcarriers, respectively.\nThe signal transmission model in (\\ref{eqn_r}) becomes subcarrier dependent and the detected signal of the $k$th subcarrier is given by\\footnote{Although $\\mathbf{x}$ and $\\mathbf{n}$ are also different for different subcarriers, they are independent of the channel and thus the index $k$ in them is omitted.}\n\\setlength{\\arraycolsep}{0.05em}\n\\begin{eqnarray}\n\\label{eqn_rk}\n\\mathbf{r}[k]&&=\\sqrt{P}\\mathbf{W}_{\\textrm{BB}}^H[k]\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{H}[k]\\mathbf{F}_{\\textrm{RF}}\\mathbf{F}_{\\textrm{BB}}[k]\\mathbf{x}\\nonumber\\\\\n&&\\quad+\\mathbf{W}_{\\textrm{BB}}^H[k]\\mathbf{W}_{\\textrm{RF}}^H\\mathbf{n}.\n\\end{eqnarray}\n\nIn the following, we propose a simple method to design the structure of training data so that the DL-JHPF in Section III.C can be flexibly extended to OFDM systems without changing the framework architecture. That is, both the framework size and training time will not be increased. The process of training and testing is detailed as follows.\n\n\\emph{Training:} Compared to the training sample with the input tuple $\\langle\\mathbf{H}, \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$ in Section III.C, we modify the input tuple as $\\langle\\bar{\\mathbf{H}}, \\mathbf{H}[i], \\mathbf{X}_{\\textrm{b}}, \\mathbf{n}\\rangle$, where $\\bar{\\mathbf{H}}$ is the channel matrix of a given subcarrier, e.g., the $q$th subcarrier, same for all training samples while $\\mathbf{H}[i]$ is the channel matrix of an uncertain subcarrier with $i$ randomly generated from the set $\\{1,2,\\ldots,K\\}$ for each training sample. As shown in Fig.~\\ref{DL_framework_ofdm}, when inputting each training sample into the framework, $\\bar{\\mathbf{H}}$ will be used to generate $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ via PP-NN and CP-NN. Then $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ along with $\\mathbf{H}[i]$ are used to generate the equivalent channel of the $i$th subcarrier, $\\mathbf{H}_{\\textrm{eq}}[i]$, based on which, $\\mathbf{F}_{\\textrm{BB}}[i]$ and $\\mathbf{W}_{\\textrm{BB}}[i]$ can be obtained through Re\/ImDP-NN and Re\/ImDC-NN. On the other hand, $\\mathbf{H}[i]$ is also input into the signal flow simulator to act as the fading channel since this training sample is used to simulate the transmission of the $i$th subcarrier. Then the end-to-end training can be performed by minimizing the BCE loss between $\\mathbf{X}_{\\textrm{b}}$ and $\\hat{\\mathbf{X}}_{\\textrm{b}}$. 
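The modified sample structure can be sketched as follows: every sample pairs the fixed reference channel $\bar{\mathbf{H}}$ (the $q$th subcarrier) with the channel of one randomly drawn subcarrier, so the framework size stays unchanged. The snippet is an illustrative Python sketch with our own helper name; the per-subcarrier channels are assumed to be precomputed.

\begin{verbatim}
import numpy as np

def draw_ofdm_sample(H_subc, q, n_s=3, bits_per_sym=2, rng=None):
    # H_subc: per-subcarrier channels, shape (K, N_R, N_T); q: reference subcarrier index.
    if rng is None:
        rng = np.random.default_rng()
    K, n_r, _ = H_subc.shape
    i = int(rng.integers(0, K))                      # subcarrier simulated by this sample
    X_b = rng.integers(0, 2, (n_s, bits_per_sym))
    noise = (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r)) / np.sqrt(2)
    return (H_subc[q], H_subc[i], X_b, noise), X_b   # tuple <H_bar, H[i], X_b, n> and label
\end{verbatim}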
Through training, we can obtain the unified analog precoder and combiner that match the channel of each subcarrier well without complicating the architecture of DL-JHPF.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.5in]{Fig4}\n\\caption{Extension of the proposed DL-JHPF to OFDM systems.}\\label{DL_framework_ofdm}\n\\end{figure}\n\n\\emph{Testing:} With $\\mathbf{H}[k], k=1,2,\\ldots,K$, available at the transceiver, choose the channel matrix of the $q$th subcarrier as $\\bar{\\mathbf{H}}$. Input $\\bar{\\mathbf{H}}$ into PP-NN and CP-NN to generate the unified $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ for all subcarriers. The unified $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ along with the channel of each subcarrier, $\\mathbf{H}[k], k=1,2,\\ldots,K$, are used to generate the corresponding equivalent channel, which will be input into Re\/ImDP-NN and Re\/ImDC-NN to generate $\\mathbf{F}_{\\textrm{BB}}[k]$ and $\\mathbf{W}_{\\textrm{BB}}[k]$ for channel equalization in each subcarrier. The NN demodulator will be used to recover the original bits for each subcarrier based on the detected signal, $\\mathbf{r}[k]$.\n\n\\section{Simulation Results}\n\nIn this section, the effectiveness of the proposed DL-JHPF is verified in several cases. Six hybrid processing schemes and the fully-digital transceiver architecture are used as the baseline schemes for comparison: 1) HBD scheme in \\cite{W. Ni}; 2) Beam sweeping (BeS) scheme in \\cite{O. E. Ayach}; 3) Discrete Fourier transform (DFT) codebook based joint digital beamforming (DCJDB) scheme, where the analog precoder and combiner are searched from the DFT codebook by the method in \\cite{A. M. Elbir} while the digital precoder and combiner are jointly optimized according to \\cite{D. P. Palomar}; 4) Joint digital beamforming with alternating minimization (JDB-AltMin), where the optimal precoding and combining matrices are first designed according to \\cite{D. P. Palomar}, based on which the hybrid precoding and combining matrices are constructed according to the PE-AltMin algorithm in \\cite{X. Yu}; 5) Hybrid beamforming via deep learning (HBDL) scheme in \\cite{A. M. Elbir}; 6) Deep learning for direct hybrid precoding (DLDHP) scheme in \\cite{X. 
Li}; 7) Fully-digital transceiver architecture.\n\n\\subsection{Simulation Settings}\n\n\\begin{table}[!t]\n\\centering\n\\caption{Architectures of DNNs in Proposed DL-HPF}\n\\label{table_1}\n\\begin{tabular}{p{1.4cm}<{\\centering}|c|c|c}\n\\hline\n~ & Layer type & \\makecell{Number of \\\\neurons} & \\makecell{Activation \\\\function}\\\\\n\\hline\n\\multirow{5}{*}{PP-NN} & Input & $2N_{\\textrm{T}}N_{\\textrm{R}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 512 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 256 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 128 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{T}}N_{\\textrm{T}}^{\\textrm{RF}}$ & ReLU\\\\\n\\hline\n\\multirow{6}{*}{CP-NN} & Input & $2N_{\\textrm{T}}N_{\\textrm{R}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 512 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 256 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 128 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 64 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{R}}N_{\\textrm{R}}^{\\textrm{RF}}$ & ReLU\\\\\n\\hline\n\\multirow{5}{*}{\\makecell{Re\/ImDP-\\\\NN}} & Input & $2N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{R}}^{\\textrm{RF}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 40 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{s}}$ & - \\\\\n\\hline\n\\multirow{5}{*}{\\makecell{Re\/ImDC-\\\\NN}} & Input & $2N_{\\textrm{T}}^{\\textrm{RF}}N_{\\textrm{R}}^{\\textrm{RF}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 40 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{R}}^{\\textrm{RF}}N_{\\textrm{s}}$ & - \\\\\n\\hline\n\\multirow{5}{*}{\\makecell{NN\\\\demodulator}} & Input & $2N_{\\textrm{s}}$ & - \\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 50 & ReLU\\\\\n\\cline{2-4}\n~ & Dense & 20 & ReLU\\\\\n\\cline{2-4}\n~ & Output & $N_{\\textrm{s}}\\log_{2} M$ & Sigmoid \\\\\n\\hline\n\\end{tabular}\n\\vspace{-0.3cm}\n\\end{table}\n\n\\emph{1) System Settings:} We set $N_{\\textrm{T}}=32$ and $N_{\\textrm{T}}^{\\textrm{RF}}=3$ for the transmitter and $N_{\\textrm{R}}=16$ and $N_{\\textrm{R}}^{\\textrm{RF}}=3$ for the receiver. The number of data streams is set as $N_{\\textrm{s}}=3$. The channel data are generated according to the 3GPP TR 38.901 Release 15 channel model \\cite{3GPP}. Specifically, we use the clustered delay line models with $N_{\\textrm{cl}}=3$ clusters and $N_{\\textrm{ray}}=20$ rays in each cluster. The carrier frequency is $f_c=28$ GHz. For OFDM systems, the sampling rate is $f_s=100$ MHz and the number of subcarriers is $K=64$. Two channel scenarios, urban micro (UMi) street non-line of sight (NLOS) scenario and urban macro (UMa) NLOS scenario, are considered.\\footnote{According to the parameters for UMi NLOS scenario and UMa NLOS scenario defined by \\cite{3GPP}, we use the system object, nr5gCDLChannel, embedded in 5G Library for LTE System Toolbox in MATLAB to generate the corresponding channel data.} Quadrature phase shift keying (QPSK) is used as the modulation method.\n\n\\emph{2) Proposed DL-JHPF Settings:} The training set, validation set, and testing set contain $261$,$000$, $29$,$000$, and $10$,$000$ samples, respectively. The training set and validation set are generated in UMi NLOS scenario while the testing set is generated in both UMi NLOS and UMa NLOS scenarios. Adam is used as the optimizer. 
The number of epochs in the training stage is set as $800$ while the corresponding learning rates are $10^{-3}$ for the first $500$ epochs and $10^{-4}$ for the rest $300$ epochs, respectively. The batch size is $256$. The architecture of each NN in DL-JHPF is listed in Table~\\ref{table_1}, where the BN layer is added after each dense layer and thus is not listed in the table for simplicity.\n\n\n\\subsection{Performance Evaluation}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig5}\n\\caption{BER performance of the proposed DL-JHPF and the existing hybrid processing schemes.}\\label{HBD_BS_proDL}\n\\end{figure}\n\nIn Figs.~\\ref{HBD_BS_proDL}$-$\\ref{HBD_BS_proDL_UMaipCSI}, the proposed DL-JHPF is first evaluated in narrowband systems while the performance in wideband OFDM systems is presented in Figs.~\\ref{HBD_BS_proDL_ofdm} and \\ref{HBD_BS_proDL_ofdm_UMaipCSI}.\n\nFig.~\\ref{HBD_BS_proDL} shows the BER performance of HBD, BeS, DCJDB, JDB-AltMin, HBDL, DLDHP, the proposed DL-JHPF, and the fully-digital architecture versus signal-to-noise ratio (SNR) in UMi NLOS scenario with perfect CSI. From the figure, DL-JHPF has a larger slope for the BER curve and outperforms the other six hybrid processing schemes after $\\textrm{SNR}=0$ dB although it performs not very well in the low SNR regime. When $\\textrm{BER}=10^{-2}$, the proposed DL-JHPF achieves about $0.2$ dB, $1$ dB, $1.2$ dB, $2$ dB, $6$ dB, and $8$ dB gains compared to JDB-AltMin, DLDHP, DCJDB, BeS, HBDL, and HBD, respectively. The advantage of DL-JHPF becomes more obvious as SNR increases and the BER is smaller than $10^{-4}$ when $\\textrm{SNR}=10$ dB while the performance of other four schemes is larger than $10^{-3}$. With the significantly increased number of RF chains, the fully-digital beamforming obtains substantial diversity gains, which directly leads to the better BER performance than all the hybrid processing schemes. The performance gap between the proposed DL-JHPF and the fully-digital beamforming is about 4dB.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig6}\n\\caption{Robustness of the proposed DL-JHPF with mismatched CSI.}\\label{HBD_BS_proDL_ipCSI}\n\\end{figure}\n\nPerfect CSI is used in framework training while only estimated CSI is available in the practical transmitter and receiver, which leads to the CSI mismatch. In Fig.~\\ref{HBD_BS_proDL_ipCSI}, we investigate the robustness of the proposed DL-JHPF with mismatched CSI, where the BER curve tested with perfect CSI in Fig.~\\ref{HBD_BS_proDL} is also plotted as the lower bound. We use the approach in \\cite{P. Dong_b} to estimate channels at $\\textrm{SNR}=10$ dB and $20$ dB, respectively, for hybrid processing design. From Fig.~\\ref{HBD_BS_proDL_ipCSI}, when tested with the CSI estimated at $20$ dB, DL-JHPF achieves almost the same BER performance as the perfect CSI case and outperforms the other six hybrid processing schemes after $\\textrm{SNR}=0$ dB, indicating that DL-JHPF is hardly impacted by the mismatched CSI estimated at $20$ dB. When tested with the CSI estimated at $10$ dB, performance loss occurs at an acceptable level for DL-JHPF. 
The loss is less than $1$ dB when $\\textrm{BER}=10^{-2}$ and DL-JHPF still has the clear performance superiority after $\\textrm{SNR}=2.5$ dB even compared to other hybrid processing schemes with the CSI estimated at $20$ dB.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig7}\n\\caption{Robustness of the proposed DL-JHPF with mismatched channel scenario and CSI.}\\label{HBD_BS_proDL_UMaipCSI}\n\\end{figure}\n\nAs mentioned in Section III.D, it is very likely to face with different channel scenarios in the practical testing for DL-JHPF. In Fig.~\\ref{HBD_BS_proDL_UMaipCSI}, we further consider this channel scenario mismatch and test the robustness of DL-JHPF to the aggregate impact caused by channel scenario and CSI mismatch. For DL-JHPF, the BER performance is tested in UMi NLOS scenario with perfect CSI, in UMa NLOS scenario with perfect CSI (mismatched channel scenario), and in UMa NLOS scenario with the CSI estimated at $20$ dB (mismatched channel scenario and CSI), respectively. The performance curves of the baseline schemes evaluated in UMa NLOS scenario with the CSI estimated at $20$ dB are also plotted for comparison. From Fig.~\\ref{HBD_BS_proDL_UMaipCSI}, the channel scenario mismatch causes only less than $0.5$ dB performance loss for DL-JHPF. The total loss caused by the aggregate impact of channel scenario and CSI mismatch is only less than $1$ dB. The proposed DL-JHPF has learned the inherent structure of the mmWave channels and thus is able to maintain its advantage even with mismatched channel scenarios and CSI.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig8}\n\\caption{BER performance of the proposed DL-JHPF and the existing hybrid processing schemes in OFDM systems.}\\label{HBD_BS_proDL_ofdm}\n\\end{figure}\n\nFig.~\\ref{HBD_BS_proDL_ofdm} shows the BER performance of HBD, BeS, DCJDB, JDB-AltMin, HBDL, DLDHP, the proposed DL-JHPF, and the fully-digital architecture in OFDM systems with UMi NLOS scenario and perfect CSI, which is similar to that in Fig.~\\ref{HBD_BS_proDL}. In addition, we plot the BER performance of an ideal case with matched analog processing (AP) for DL-JHPF, where different analog processing matrices are designed for different subcarriers to match the corresponding channels. This is impossible to be implemented in practical systems and we just use it to quantify the performance loss caused by using the unified analog processing matrices for all subcarriers. From Fig.~\\ref{HBD_BS_proDL_ofdm}, only about $1$ dB loss is incurred, which proves the effectiveness of DL-JHPF in OFDM systems by simply modifying the structure of training data without changing the framework architecture and increasing the training time.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[trim=0 0 0 0, width=3.6in]{Fig9}\n\\caption{Robustness of the proposed DL-JHPF with mismatched channel scenario and CSI in OFDM systems.}\\label{HBD_BS_proDL_ofdm_UMaipCSI}\n\\end{figure}\n\nIn Fig.~\\ref{HBD_BS_proDL_ofdm_UMaipCSI}, we further test the robustness of DL-JHPF in OFDM systems with the mismatched channel scenario and CSI. The aggregate impact of channel scenario and CSI mismatch is still limited, and DL-JHPF tested in UMa NLOS scenario with the CSI estimated at $20$ dB (mismatched channel scenario and CSI) even outperforms that tested in UMi NLOS scenario with perfect CSI after $\\textrm{SNR}=8$ dB, which verifies the effectiveness and robustness of the proposed DL-JHPF in OFDM systems. 
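\n\nFor clarity, the per-subcarrier testing procedure described for the OFDM extension can be summarized in a few lines of NumPy-style pseudocode. The unified analog matrices are reused for every subcarrier and only the low-dimensional equivalent channel changes; the function name, the convention that the equivalent channel is $\\mathbf{W}_{\\textrm{RF}}^{H}\\mathbf{H}[k]\\mathbf{F}_{\\textrm{RF}}$, and the placeholder callables standing in for the baseband NNs are our assumptions for illustration only.\n\\begin{verbatim}\nimport numpy as np\n\ndef design_baseband_per_subcarrier(H, F_RF, W_RF, dp_nn, dc_nn):\n    # H: (K, N_R, N_T) channel per subcarrier\n    # F_RF: (N_T, N_T_RF), W_RF: (N_R, N_R_RF) unified analog matrices\n    F_BB, W_BB = [], []\n    for k in range(H.shape[0]):\n        H_eq = W_RF.conj().T @ H[k] @ F_RF   # equivalent channel of subcarrier k\n        x = np.concatenate([H_eq.real.ravel(), H_eq.imag.ravel()])\n        F_BB.append(dp_nn(x))                # baseband precoder of subcarrier k\n        W_BB.append(dc_nn(x))                # baseband combiner of subcarrier k\n    return F_BB, W_BB\n\\end{verbatim}\n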
In addition, the performance gap between the proposed DL-JHPF and the fully-digital beamforming remains at about $4$ dB.\n\n\\subsection{Computational Complexity Comparison}\n\n\\begin{table}\n \\centering\n \\caption{Runtime of Hybrid Processing Schemes}\n \\label{runtime}\n \\begin{tabular}{c|c}\n \\hline\n ~ & Runtime (in ms)\\\\\n \\hline\n HBD & $6.98$\\\\\n \\hline\n BeS & $10.61$\\\\\n \\hline\n DCJDB & $338.73$\\\\\n \\hline\n JDB-AltMin & $1.56$\\\\\n \\hline\n HBDL & $0.46$\\\\\n \\hline\n DLDHP & $0.16$\\\\\n \\hline\n Proposed DL-JHPF & $0.06$\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nFor mmWave mobile communications, the coherence time is shorter than in the sub-6 GHz bands and thus the runtime of a hybrid processing scheme is vital. Based on the simulation settings mentioned above, we compare the runtime of the proposed DL-JHPF in the testing stage with the baseline schemes in Table~\\ref{runtime}. The HBD, BeS, DCJDB, and JDB-AltMin schemes are run on the Intel(R) Core(TM) i7-3770 CPU while the proposed DL-JHPF is run on the NVIDIA GeForce GTX 2080 Ti GPU. For HBDL and DLDHP, the predictions of $\\mathbf{F}_{\\textrm{RF}}$ and $\\mathbf{W}_{\\textrm{RF}}$ are implemented via DNN on the GPU while the subsequent design of $\\mathbf{F}_{\\textrm{BB}}$ and $\\mathbf{W}_{\\textrm{BB}}$ is executed on the CPU. By moving the time-consuming design of the analog processing to the GPU, which enables efficient parallel computing, the DL based schemes reduce the runtime significantly compared to the conventional schemes. Through careful design, the proposed DL-JHPF is fully GPU-driven when generating the hybrid processing matrices and thus consumes the least time among the three DL based schemes. Therefore, the proposed DL-JHPF is more suitable for mmWave communications, especially for high-mobility scenarios.\n\n\\section{Conclusion}\n\nIn this paper, DL is applied for joint hybrid processing design at the transceiver in mmWave massive MIMO systems. A novel DL-JHPF is developed to learn the optimal analog and digital processing matrices by minimizing the end-to-end BCE loss between the original and recovered bits. The elaborate architecture of the proposed DL-JHPF guarantees the BP-enabled training of each NN therein. By simply modifying the structure of the training data, DL-JHPF can be flexibly extended to OFDM systems without changing the framework architecture or increasing the training time. Simulation results show the superiority and robustness of DL-JHPF in various non-ideal conditions with significantly reduced runtime.\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{ACKNOWLEDGMENTS}\nThe authors thank Matthew G. Knepley (Rice University) \nfor his invaluable advice. The authors also thank\nthe Los Alamos National Laboratory (LANL)\nInstitutional Computing program. \nJC and KBN acknowledge the financial support from the \nHouston Endowment Fund and from the Department of Energy \nthrough the Subsurface Biogeochemical Research Program. 
\nSK thanks the LANL LDRD program and the LANL Environmental\nPrograms Directorate for their support.\nThe opinions expressed in this paper are those of \nthe authors and do not necessarily reflect that \nof the sponsors.\n\n\\bibliographystyle{plain}\n\n\\section{GOVERNING EQUATIONS AND ASSOCIATED NON-NEGATIVE NUMERICAL METHODOLOGIES}\n\\label{Sec:Governing_Equations}\nLet $\\Omega \\subset \\mathbb{R}^{nd}$ be a bounded open \ndomain, where ``$nd$'' is the number of spatial dimensions. \nThe boundary of the domain is denoted by $\\partial \\Omega \n= \\overline{\\Omega} - \\Omega$, which is assumed to be \npiecewise smooth. A spatial point is denoted by $\\mathbf{x} \n\\in \\overline{\\Omega}$. The gradient and divergence operators \nwith respect to $\\mathbf{x}$ are, respectively, denoted as \n$\\mathrm{grad}[\\cdot]$ and $\\mathrm{div}[\\cdot]$. As usual, \nthe boundary is divided into two parts: $\\Gamma^{\\mathrm{D}}$ \nand $\\Gamma^{\\mathrm{N}}$. $\\Gamma^{\\mathrm{D}}$ is that part \nof the boundary on which Dirichlet boundary conditions \nare prescribed, and $\\Gamma^{\\mathrm{N}}$ is the part of \nthe boundary on which Neumann boundary conditions are \nprescribed. For mathematical well-posedness, we assume \n$\\Gamma^{\\mathrm{D}} \\cup \\Gamma^{\\mathrm{N}} = \\partial \n\\Omega$ and $\\Gamma^{\\mathrm{D}} \\cap \\Gamma^{\\mathrm{N}} \n= \\emptyset$. \nThe unit outward normal to boundary is denoted as \n$\\widehat{\\mathbf{n}}(\\mathbf{x})$. The diffusivity \ntensor is denoted by $\\mathbf{D}(\\mathbf{x})$, which \nis assumed to symmetric, bounded above and uniformly \nelliptic. That is,\n\\begin{align}\n \\mathbf{D}(\\mathbf{x}) = \\mathbf{D}^{\\mathrm{T}}\n (\\mathbf{x}) \\quad \\forall \\mathbf{x} \\in \\Omega\n\\end{align}\nand there exists two constants $\\mathrm{0 < \n \\xi_1 \\leq \\xi_2 < +\\infty}$ such that \n\\begin{align}\n \\label{Eqn:Helmholtz_positive_definiteness_D}\n \\xi_1 \\mathbf{y}^{\\mathrm{T}} \\mathbf{y}\\leq \\mathbf{y}^{\\mathrm{T}} \n \\mathbf{D}(\\mathbf{x}) \\mathbf{y} \\leq \\xi_2 \\mathbf{y}^{\\mathrm{T}} \n \\mathbf{y} \\quad \\forall \\mathbf{x} \\in \\Omega \\; \\mathrm{and} \\; \n \\forall \\mathbf{y} \\in \\mathbb{R}^{nd}\n\\end{align} \n\n\\subsection{Governing equations for steady-state response}\nWe shall denote the steady-state concentration field \nby $c(\\mathbf{x})$. The governing equations can be \nwritten as follows: \n\\begin{subequations}\n \\label{Eqn:LargeScale_BVP}\n \\begin{alignat}{2}\n \\label{Eqn:LargeScale_GE}\n -&\\mathrm{div}[\\mathbf{D}(\\mathbf{x}) \n \\mathrm{grad}[c]] = f(\\mathbf{x}) & \\qquad & \n \\mathrm{in} \\; \\Omega \\\\\n %\n \\label{Eqn:LargeScale_Dirichlet}\n &c(\\mathbf{x}) = c^{\\mathrm{p}} (\\mathbf{x}) & \\qquad &\n \\mathrm{on} \\; \\Gamma^{\\mathrm{D}} \\\\\n %\n \\label{Eqn:LargeScale_Neumann}\n -&\\mathbf{\\widehat{n}}(\\mathbf{x}) \\cdot \\mathbf{D}\n (\\mathbf{x}) \\mathrm{grad}[c] =\n q^{\\mathrm{p}}(\\mathbf{x}) & \\quad & \\mathrm{on} \\;\n \\Gamma^{\\mathrm{N}}\n \\end{alignat}\n\\end{subequations}\nwhere $f(\\mathbf{x})$ is the volumetric \nsource\/sink, $c^{\\mathrm{p}}(\\mathbf{x})$ is the \nprescribed concentration, and $q^{\\mathrm{p}}\n(\\mathbf{x})$ is the prescribed flux. For \nuniqueness, we assume $\\Gamma^{\\mathrm{D}} \n\\neq \\emptyset$. \n\n\\subsubsection{Maximum principle and the non-negative constraint}\nThe above boundary value problem is a self-adjoint \nsecond-order elliptic partial differential equation \n(PDE). 
It is well-known that such PDEs possess an \nimportant mathematical property -- the classical \nmaximum principle \\cite{Evans_PDE}. The mathematical \nstatement of the classical maximum principle can be \nwritten as follows: \nIf $c(\\mathbf{x}) \\in C^{2}(\\Omega) \\cap C^{0}(\\overline{\\Omega})$, \n$\\partial \\Omega = \\Gamma^{\\mathrm{D}}$, and $f(\\mathbf{x}) \\leq 0$ \nin $\\Omega$ then \n\\begin{align}\n \\max_{\\mathbf{x} \\in \\overline{\\Omega}} c(\\mathbf{x}) \n = \\max_{\\mathbf{x} \\in \\partial \\Omega} c^{\\mathrm{p}}(\\mathbf{x})\n\\end{align}\nSimilarly, if $f(\\mathbf{x}) \\geq 0$ in $\\Omega$ then\n\\begin{align}\n \\label{Eqn:LargeScale_MP}\n \\min_{\\mathbf{x} \\in \\overline{\\Omega}} c(\\mathbf{x}) \n = \\min_{\\mathbf{x} \\in \\partial \\Omega} c^{\\mathrm{p}}(\\mathbf{x})\n\\end{align}\nTo make our presentation on maximum principles \nsimple, we have assumed stronger regularity on \nthe solution (i.e., $c(\\mathbf{x}) \\in C^{2} \n\\cap C^{0}(\\overline{\\Omega})$), and assumed that \nDirichlet boundary conditions are prescribed on \nthe entire boundary. However, maximum principles \nrequiring milder regularity conditions on the \nsolution, even for the case when Neumann boundary \nconditions are prescribed on the boundary, can be \nfound in literature (see \n\\cite{Mudunuru_Nakshatrala_arXiv_2015,\nMudunuru_Nakshatrala_ADR_arXiv_2015}). \n\nIf $f(\\mathbf{x}) \\geq 0$ in $\\Omega$ and \n$c^{\\mathrm{p}}(\\mathbf{x}) \\geq 0$ on the \nentire $\\partial \\Omega$ then the maximum \nprinciple implies that $c(\\mathbf{x}) \n\\geq 0$ in the entire domain, which is \nthe non-negativity of the concentration \nfield. \n\n\\subsubsection{Single-field Galerkin weak formulation}\nThe following function spaces will be \nused in the rest of this paper: \n\\begin{align}\n \\mathcal{U} &:= \\left\\{c(\\mathbf{x}) \\in H^{1}(\\Omega) \n \\; \\big| \\; c(\\mathbf{x}) = c^{\\mathrm{p}}(\\mathbf{x}) \\; \n \\mathrm{on} \\; \\Gamma^{\\mathrm{D}}\\right\\} \\\\\n %\n \\mathcal{W} &:= \\left\\{w(\\mathbf{x}) \\in H^{1}(\\Omega) \n \\; \\big| \\; w(\\mathbf{x}) = 0 \\; \\mathrm{on} \\; \n \\Gamma^{\\mathrm{D}}\\right\\} \n\\end{align}\nwhere $H^{1}(\\Omega)$ is a standard Sobolev space \n\\cite{adams2003sobolev}. 
The single-field Galerkin \nweak formulation corresponding to equations \n\\eqref{Eqn:LargeScale_GE}--\\eqref{Eqn:LargeScale_Neumann} \nreads: Find $c(\\mathbf{x}) \\in \\mathcal{U}$ \nsuch that we have \n\\begin{align}\n \\label{Eqn:Single_field_formulation}\n \\mathcal{B}(w;c) = L(w) \\quad \\forall \n w(\\mathbf{x}) \\in \\mathcal{W}\n\\end{align}\nwhere the bilinear form and linear functional \nare, respectively, defined as \n\\begin{subequations}\n \\label{Eqn:functionals_B_L}\n \\begin{align}\n \\mathcal{B}(w;c) &:= \\int_{\\Omega} \\mathrm{grad}[w(\\mathbf{x})] \n \\cdot \\mathbf{D}(\\mathbf{x}) \\mathrm{grad}[c(\\mathbf{x})] \n \\; \\mathrm{d} \\Omega \n \\\\\n L(w) &:= \\int_{\\Omega} w(\\mathbf{x}) f(\\mathbf{x}) \n \\; \\mathrm{d} \\Omega \n + \\int_{\\Gamma^{\\mathrm{N}}} w(\\mathbf{x}) \n q^{\\mathrm{p}}(\\mathbf{x}) \\; \\mathrm{d} \\Gamma \n \\end{align}\n\\end{subequations}\nSince $\\mathbf{D}(\\mathbf{x})$ is symmetric, \nby Vainberg's theorem \\cite{Hjelmstad}, the \nsingle-field Galerkin weak formulation given \nby equation \\eqref{Eqn:Single_field_formulation} \nis equivalent to the following variational \nproblem: \n\\begin{align}\n \\label{Eqn:LargeScale_var_stat}\n \\mathop{\\mathrm{minimize}}_{c(\\mathbf{x}) \\in \\mathcal{U}} \n \\quad \\frac{1}{2} \\mathcal{B}(c;c) - L(c)\n\\end{align} \n\n\\subsubsection{A methodology to enforce the maximum principle \n for steady-state problems}\nOur methodology is based on the finite element method. \nWe decompose the domain into ``$Nele$'' non-overlapping \nopen element sub-domains such that \n\\begin{align}\n \\overline{\\Omega} = \\bigcup_{e = 1}^{Nele} \\overline{\\Omega}^{e}\n\\end{align}\n(Recall that a superposed bar denotes the set closure.) The \nboundary of $\\Omega^e$ is denoted by $\\partial \\Omega^{e} \n:= \\overline{\\Omega}^{e} - \\Omega^{e}$. We shall define the \nfollowing finite dimensional vector spaces of $\\mathcal{U}$ \nand $\\mathcal{W}$:\n\\begin{subequations}\n \\begin{align}\n \\mathcal{U}^{h} &:= \\left\\{c^{h}(\\mathbf{x}) \\in \\mathcal{U} \n \\; \\big| \\; c^{h}(\\mathbf{x}) \\in C^{0}(\\overline{\\Omega}), \n c^{h}(\\mathbf{x}) \\big|_{\\Omega^e} \\in \\mathbb{P}^{k}(\\Omega^{e}), \n e = 1, \\cdots, Nele \\right\\} \\\\\n \n \\mathcal{W}^{h} &:= \\left\\{w^{h}(\\mathbf{x}) \\in \\mathcal{W} \n \\; \\big| \\; w^{h}(\\mathbf{x}) \\in C^{0}(\\overline{\\Omega}), \n w^{h}(\\mathbf{x}) \\big|_{\\Omega^e} \\in \\mathbb{P}^{k}(\\Omega^{e}), \n e = 1, \\cdots, Nele \\right\\} \n \\end{align}\n\\end{subequations}\nwhere $k$ is a non-negative integer, and \n$\\mathbb{P}^{k}(\\Omega^{e})$ denotes the linear \nvector space spanned by polynomials up to $k$-th \norder defined on the sub-domain $\\Omega^{e}$. The \nfinite element formulation for equation \n\\eqref{Eqn:Single_field_formulation} can be \nwritten as: Find $c^{h}(\\mathbf{x}) \\in \\mathcal{P}^{h}$ \nsuch that we have \n\\begin{align}\n \\label{Eqn:FE_formulation}\n \\mathcal{B}(q^{h};c^{h}) = L(q^{h}) \\quad \n \\forall q^{h}(\\mathbf{x}) \\in \\mathcal{Q}^{h}\n\\end{align}\nIt has been documented in the literature that the \nabove finite element formulation violates the \nmaximum principle and the non-negative constraint \\cite{liska2008enforcing,\nNakshatrala_JCP_2009_v228_p6726,Nagarajan_IJNMF_2011_v67_p820}.\n \nWe now outline an optimization-based methodology that \nsatisfies the maximum principle and the non-negative \nconstraint on general computational grids. 
\nTo this end, we shall use the symbols $\\preceq$ \nand $\\succeq$ to denote component-wise inequalities \nfor vectors. That is, for given any two vectors \n$\\boldsymbol{a}$ and $\\boldsymbol{b}$ \n\\begin{align}\n \\boldsymbol{a} \\preceq \\boldsymbol{b} \\quad \n \\mbox{means that } \\quad a_i \\leq b_i \\; \\forall i\n\\end{align}\nThe symbol $\\succeq$ can be similarly defined as well.\nLet $<\\cdot;\\cdot>$ denote the standard inner-product \nin Euclidean space. After finite element discretization, \nthe discrete equations corresponding to equation \n\\eqref{Eqn:FE_formulation} take the form\n\\begin{align}\n \\label{Eqn:Helmholtz_discrete}\n \\boldsymbol{K} \\boldsymbol{c} = \\boldsymbol{f}\n\\end{align}\nwhere $\\boldsymbol{K}$ is a symmetric positive definite matrix, $\\boldsymbol{c}$ is the \nvector containing nodal concentrations, and $\\boldsymbol{f}$ is the force vector. \nEquation~\\eqref{Eqn:Helmholtz_discrete} is equivalent to the following minimization problem \n\\begin{align}\n \\label{Eqn:Helmholtz_minimization}\n \\mathop{\\mbox{minimize}}_{\\boldsymbol{c} \\in \\mathbb{R}^{ndofs}} \\quad \\frac{1}{2} \n \\langle\\boldsymbol{c}; \\boldsymbol{K} \\boldsymbol{c}\\rangle \n - \\langle\\boldsymbol{c}; \\boldsymbol{f}\\rangle\n\\end{align}\nwhere ``$ndofs$'' denotes the number of degrees of freedom \nfor concentration. Equation~\\eqref{Eqn:Helmholtz_discrete} \ncan lead to unphysical negative solutions.\n\nFollowing \\cite{Nagarajan_IJNMF_2011_v67_p820,\nliska2008enforcing}, a methodology corresponding to equation \n\\eqref{Eqn:Helmholtz_minimization} that satisfies the \nnon-negative constraint can be written as follows: \n\\begin{subequations}\n \\label{Eqn:non-negative}\n \\begin{align}\n &\\mathop{\\mbox{minimize}}_{\\boldsymbol{c} \\in \\mathbb{R}^{ndofs}} \\quad \n \\frac{1}{2} <\\boldsymbol{c}; \\boldsymbol{K} \\boldsymbol{c}> - \n <\\boldsymbol{c}; \\boldsymbol{f}> \\\\\n \n \\label{Eqn:non-negative_constraint}\n &\\mbox{subject to} \\quad \\boldsymbol{0} \\preceq \\boldsymbol{c} \n \\end{align}\n\\end{subequations}\nwhere $\\boldsymbol{0}$ is a vector of size $ndofs$ \ncontaining zeros. Since $\\boldsymbol{K}$ is positive \ndefinite, equation \\eqref{Eqn:non-negative} has a \nunique global minimum \\cite{Boyd_convex_optimization}. \nSeveral robust numerical methods can be used to solve \nequation \\eqref{Eqn:non-negative}, which include active \nset strategy, interior point methods \n\\cite{Boyd_convex_optimization}. In this paper, to \nsolve the resulting optimization problems, we shall \nuse the parallel optimization toolkit TAO \n\\cite{tao-user-ref}, which has the active-set Newton trust \nregion (TRON) and quasi-Newton-based bounded limited memory\nvariable metric (BLMVM) algorithms.\n\n\\subsection{Governing equations for transient response}\nWe shall denote the time by $t \\in [0,\\mathcal{I}]$, where \n$\\mathcal{I}$ denotes the length of the time interval of \ninterest. We shall denote the time-dependent concentration \nby $c(\\mathbf{x},t)$. 
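\n\nReturning briefly to the steady-state problem, the bound-constrained program \\eqref{Eqn:non-negative} can be prototyped in a few lines with an off-the-shelf quasi-Newton bound-constrained solver before moving to the parallel setting. The sketch below uses SciPy's L-BFGS-B method, which plays the same role in this prototype that TAO's BLMVM plays in our framework; the small matrix and forcing vector are purely illustrative and do not come from a finite element assembly.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\n# illustrative symmetric positive definite stiffness matrix and force vector\nK = np.array([[ 4., -1.,  0.],\n              [-1.,  4., -1.],\n              [ 0., -1.,  4.]])\nf = np.array([1., -2., 1.])\n\nobj  = lambda c: 0.5 * c @ K @ c - f @ c   # quadratic objective\ngrad = lambda c: K @ c - f                 # its gradient\n\nres = minimize(obj, np.zeros(3), jac=grad, method='L-BFGS-B',\n               bounds=[(0.0, None)] * 3)   # enforce 0 <= c\nc_nonneg = res.x                           # non-negative nodal concentrations\n\\end{verbatim}\n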
The initial boundary value problem \ncan be written as follows:\n\\begin{subequations}\n \\begin{alignat}{2}\n \\label{Eqn:LargeScale_GE_unsteady}\n &\\frac{\\partial c}{\\partial t} \n = \\mathrm{div}[\\mathbf{D}(\\mathbf{x}) \\mathrm{grad}[c]] \n + f(\\mathbf{x},t) \n && \\quad \\mathrm{in} \\; \\Omega \\times (0,\\mathcal{I}) \\\\\n %\n &c(\\mathbf{x},t) = c^{\\mathrm{p}}(\\mathbf{x},t) \n && \\quad \\mathrm{on} \\; \\Gamma^{\\mathrm{D}} \n \\times (0,\\mathcal{I}) \\\\\n %\n -&\\widehat{\\mathbf{n}}(\\mathbf{x}) \\cdot \n \\mathbf{D}(\\mathbf{x}) \\mathrm{grad}[c] \n = q^{\\mathrm{p}}(\\mathbf{x},t) \n && \\quad \\mathrm{on} \\; \\Gamma^{\\mathrm{N}} \n \\times (0,\\mathcal{I}) \\\\ \n %\n \\label{Eqn:LargeScale_IC}\n &c(\\mathbf{x},0) = c_{0}(\\mathbf{x}) \n && \\quad \\mathrm{in} \\; \\Omega \n %\n \\end{alignat}\n\\end{subequations}\nwhere $c_0(\\mathbf{x})$ is the prescribed initial \nconcentration, $f(\\mathbf{x},t)$ is the time-dependent \nvolumetric source\/sink, $c^{\\mathrm{p}}(\\mathbf{x},t)$ is \nthe time-dependent prescribed concentration on the \nboundary, and $q^{\\mathrm{p}}(\\mathbf{x},t)$ is the \nprescribed time-dependent flux on the boundary. \n\n\\subsubsection{Maximum principle and the non-negative constraint}\nThe maximum principle of a transient diffusion equation \nasserts that the maximum can occur only on the boundary \nof the domain or in the initial condition if $f(\\mathbf{x},\nt) \\leq 0$ and $\\Gamma^{\\mathrm{D}} = \\partial \\Omega$. \nMathematically, a solution to equations \n\\eqref{Eqn:LargeScale_GE_unsteady}--\\eqref{Eqn:LargeScale_GE_unsteady} \nwill satisfy:\n\\begin{align}\n c(\\mathbf{x},t) \\leq \\max \\left[\\max_{\\mathbf{x} \\in \\Omega} \n c_0(\\mathbf{x}), \\max_{\\mathbf{x} \\in \\partial \\Omega} \n c^{\\mathrm{p}}(\\mathbf{x},t)\\right] \\quad \\forall t\n\\end{align}\nprovided $f(\\mathbf{x},t) \\leq 0$. Similarly, the \nminimum will occur either on the boundary or in the \ninitial condition if $f(\\mathbf{x},t) \\geq 0$. That \nis, if $f(\\mathbf{x},t) \\geq 0$ then a solution to \nequations \\eqref{Eqn:LargeScale_GE_unsteady}--\\eqref{Eqn:LargeScale_GE_unsteady} \nsatisfies:\n\\begin{align}\n c(\\mathbf{x},t) \\geq \\min \\left[\\min_{\\mathbf{x} \\in \\Omega} \n c_0(\\mathbf{x}), \\min_{\\mathbf{x} \\in \\partial \\Omega} \n c^{\\mathrm{p}}(\\mathbf{x},t)\\right] \\quad \\forall t\n\\end{align}\n\nIf $f(\\mathbf{x},t) \\geq 0$ in $\\Omega$, $c^{\\mathrm{p}}\n(\\mathbf{x},t) \\geq 0$ on the entire $\\partial \\Omega$, \nand $c_0(\\mathbf{x}) \\geq 0$ in $\\Omega$ then the maximum \nprinciple implies that $c(\\mathbf{x},t) \\geq 0$ in the \nentire domain at all times, which is the non-negative \nconstraint for the concentration field for transient \nproblems. \n\n\\subsubsection{A methodology to enforce the \n maximum principle for transient problems}\nWe divide the time interval of interest into \n$\\mathcal{N}$ sub-intervals. That is, \n\\begin{align}\n [0,\\mathcal{I}] := \\bigcup_{n = 0}^{\\mathcal{N}} [t_n,t_{n+1}]\n\\end{align}\nwhere $t_n$ denotes the $n$-th time-level. We \nassume that the time-step is uniform, which \ncan be written as:\n\\begin{align}\n \\Delta t = t_{n+1} - t_{n}\n\\end{align}\nFollowing the recommendation provided in \n\\cite{Nakshatrala_Nagarajan_Shabouei_Arxiv_2013} to \nmeet maximum principles, we employ the backward Euler \nmethod for temporal discretization. We shall denote \nthe nodal concentrations at the $n$-th time-level \nby $\\boldsymbol{c}^{(n)}$. 
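\n\nFor completeness, applying the backward Euler method to the semi-discrete finite element equations $\\boldsymbol{M} \\dot{\\boldsymbol{c}} + \\boldsymbol{K} \\boldsymbol{c} = \\boldsymbol{f}$ (with $\\boldsymbol{M}$ the capacity matrix defined below) gives the unconstrained update\n\\begin{align}\n \\frac{1}{\\Delta t} \\boldsymbol{M} \\left(\\boldsymbol{c}^{(n+1)} - \\boldsymbol{c}^{(n)}\\right) \n + \\boldsymbol{K} \\boldsymbol{c}^{(n+1)} = \\boldsymbol{f}^{(n+1)}\n\\end{align}\nwhich is the system that the constrained program stated next augments with the bounds implied by the maximum principle.\n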
We shall denote the minimum \nand maximum values for the concentration by $c_{\\mathrm{min}}$ \nand $c_{\\mathrm{max}}$, which will be provided by the maximum \nprinciple and the non-negative constraint. At each time-level, \none has to solve the following convex quadratic program:\n\\begin{subequations}\n\\label{Eq:non-negative}\n \\begin{align}\n &\\mathop{\\mathrm{minimize}}_{\\boldsymbol{c}^{(n+1)}} \\quad \n \\frac{1}{2} \\langle {\\boldsymbol{c}^{(n+1)}}; \n \\widetilde{\\boldsymbol{K}} \\boldsymbol{c}^{(n+1)} \\rangle \n - \\langle {\\boldsymbol{c}^{(n+1)}}; \n \\widetilde{\\boldsymbol{f}}^{(n+1)} \\rangle \\\\\n &\\mbox{subject to} \\quad c_{\\mathrm{min}} \\mathbf{1} \\preceq \n \\boldsymbol{c}^{(n+1)} \\preceq c_{\\mathrm{max}} \\mathbf{1}\n \\end{align} \n\\end{subequations}\nwhere\n\\begin{align}\n &\\widetilde{\\boldsymbol{K}} := \\frac{1}{\\Delta t} \\boldsymbol{M} + \n \\boldsymbol{K} \\\\\n &\\widetilde{\\boldsymbol{f}}^{(n+1)} := \\boldsymbol{f}^{(n+1)} + \n \\frac{1}{\\Delta t} \\boldsymbol{M} \\boldsymbol{c}^{(n)}\n\\end{align}\nIn the above equations, $\\boldsymbol{M}$ is the capacity \nmatrix \\cite{Nakshatrala_Nagarajan_Shabouei_Arxiv_2013}. \n\n\\section{CONCLUDING REMARKS}\n\\label{Sec:Concluding_Remarks}\nWe presented a parallel non-negative computational framework \nsuitable for solving large-scale steady-state and transient \nanisotropic diffusion equations. The main contribution is \nthat the proposed parallel computational framework satisfies \nthe discrete maximum principles for large-scale diffusion-type \nequations even on general computational grids. \nThe parallel framework is built upon PETSc's DMPlex\ndata structure, which can handle unstructured meshes, \nand TAO for solving the optimization problems that result \nfrom the discretization. We have conducted \nsystematic performance modeling and strong-scaling \nstudies to demonstrate the efficiency of the computational \nframework, in both the parallel and hardware sense. The robustness of \nthe proposed framework has been illustrated by solving a \nlarge-scale realistic problem involving the transport of \nchromium in the subsurface at Los Alamos, New Mexico.\nFuture areas of research include: \n(a) extending the proposed parallel framework to \nadvective-diffusive and advective-diffusive-reactive \nsystems, and (b) posing the discrete problem as a \nvariational inequality, which will be valid even \nfor non-self-adjoint operators, and using other PETSc\ncapabilities to solve such variational \ninequalities. \n\n\n\n\n\n\\section{REPRESENTATIVE NUMERICAL RESULTS}\n\\label{Sec:Numerical_Results}\nIn this section, we compare the performance of our \nnon-negative methodology using the TAO solver to \nthat of the Galerkin formulation using the Krylov \nSubspace (KSP) solver. We examine the performance \nusing two problems: \n\\begin{enumerate}[(i)]\n\\item a unit cube with a hole under steady-state conditions, and \n\\item a transient chromium transport problem.\n\\end{enumerate}\nThe diffusivity tensor is assumed to depend \non the flow velocity through \n\\begin{align}\n \\mathbf{D}(\\mathbf{x}) = \\left(\\alpha_T \\|\\mathbf{v}\\|+D_{M}\\right) \\mathbf{I} + \n(\\alpha_L - \\alpha_T) \\frac{\\mathbf{v} \\otimes \\mathbf{v}}{\\|\\mathbf{v}\\|} \n\\label{Eqn:S5_dispersion_tensor}\n\\end{align}\nwhere $\\alpha_L$, $\\alpha_T$, and $D_M$ denote the longitudinal dispersivity,\ntransverse dispersivity, and molecular diffusivity, respectively. 
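\n\nA direct NumPy transcription of equation \\eqref{Eqn:S5_dispersion_tensor} is given below; the function name and the guard for a vanishing velocity are ours, and the default parameter values are those used for the cube benchmark described next.\n\\begin{verbatim}\nimport numpy as np\n\ndef dispersion_tensor(v, alpha_L=1.0, alpha_T=0.001, D_M=0.0):\n    # D = (alpha_T*||v|| + D_M) I + (alpha_L - alpha_T) outer(v, v) / ||v||\n    v = np.asarray(v, dtype=float)\n    nrm = np.linalg.norm(v)\n    I = np.eye(v.size)\n    if nrm == 0.0:                 # purely molecular diffusion\n        return D_M * I\n    return ((alpha_T * nrm + D_M) * I\n            + (alpha_L - alpha_T) * np.outer(v, v) / nrm)\n\nD = dispersion_tensor([1.0, 1.0, 1.0])   # velocity of the cube benchmark\n\\end{verbatim}\n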
We employ the \nconjugate gradient method and the block Jacobi\/ILU(0) preconditioner for solving \nthe linear system from the Galerkin formulation and employ TAO's TRON and BLMVM \nmethods for the non-negative methodology. The relative convergence tolerances for \nboth KSP and TAO solvers are set to $10^{-6}$, and $\\Delta t$ for the transient \nresponse in the Chromium problem is initially set to 0.2 days. For strong-scaling \nstudies shown here, we used OpenMPI v1.6.5 for message passing and bound processes to cores \nwhile mapping by sockets. ParaView\\cite{paraview} and VisIt\\cite{VisIt} were \nused to generate all contour and mesh plots.\n\n\\begin{remark}\nThroughout the paper, the non-negative methodology that we refer to, is in fact a \ndiscrete maximum principle preserving methodology, in that, along with the \nnon-negative constraint we also enforce that the concentrations are less than \nor equal to 1.\n\\end{remark}\n\n\\subsection{Anisotropic diffusion in a unit cube with a cubic hole}\nLet the computational domain be a unit cube with a cubic hole of size \n$[4\/9, 5\/9] \\times [4\/9, 5\/9] \\times [4\/9, 5\/9]$. The concentration on the \nouter boundary is taken to be zero and the concentration on the interior \nboundary is taken to be unity. The volumetric source is taken as zero \n(i.e., $f(\\mathbf{x}) = 0$). The velocity vector field for this problem is \nchosen to be\n\\begin{align}\n\\mathbf{v}(\\mathbf{x}) = \\mathbf{e}_x + \\mathbf{e}_y + \\mathbf{e}_z\n\\end{align}\n\\begin{table}[t]\n \\centering\n \\caption{Cube with a hole: list of various mesh type and refinement \n level combinations used \\label{Tab:S5_cube_information}}\n \\begin{tabular}{lcccc}\n \\hline\n Case & Mesh type & Refinement level & Tetrahedrons & Vertices \\\\\n \\hline\n A1 & A & 1 & 199,296 & 36,378 \\\\\n B1 & B & 1 & 409,848 & 75,427 \\\\\n C1 & C & 1 & 793,824 & 140,190 \\\\\n A2 & A & 2 & 1,594,368 & 278,194 \\\\\n B2 & B & 2 & 3,278,784 & 574,524 \\\\\n C2 & C & 2 & 6,350,592 & 1,089,562 \\\\\n A3 & A & 3 & 12,754,994 & 2,175,330 \\\\\n B3 & B & 3 & 26,230,272 & 4,483,126 \\\\\n C3 & C & 3 & 50,804,736 & 9,172,044 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[t]\n \\centering\n \\caption{Cube with a hole: minimum and maximum concentrations \n for each case \\label{Tab:S5_cube_dmp}}\n \\begin{tabular}{lccc}\n \\hline\n Case & Min. concentration & Max. 
concentration & \\% nodes violated \\\\\n \\hline\n A1 & -0.0224825 & 1.00000 & 9,518\/36,378 $\\rightarrow$ 26.2\\% \\\\\n B1 & -0.0139559 & 1.00000 & 32,247\/43,180 $\\rightarrow$ 42.8\\% \\\\\n C1 & -0.0125979 & 1.00000 & 57,272\/140,190 $\\rightarrow$ 40.9\\% \\\\\n A2 & -0.0311518 & 1.00103 & 82,983\/278,194 $\\rightarrow$ 29.2\\% \\\\\n B2 & -0.0143857 & 1.00000 & 255,640\/574,524 $\\rightarrow$ 44.9\\% \\\\\n C2 & -0.0119539 & 1.00972 & 453,766\/1,089,562 $\\rightarrow$ 41.6\\% \\\\\n A3 & -0.0258559 & 1.00646 & 643,083\/2,175,330 $\\rightarrow$ 29.6\\% \\\\\n B3 & -0.0115908 & 1.00192 & 2,073,934\/4,483126 $\\rightarrow$ 46.3\\% \\\\\n C3 & -0.0096186 & 1.00545 & 4,932,551\/9,172,044 $\\rightarrow$ 53.8\\% \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[b]\n \\centering\n \\caption{Cube with a hole: wall-clock times (seconds) on Mustang for each solver \\label{Tab:S5_cube_times_mustang}}\n \\begin{tabular}{lccccc}\n \\hline\n Case & Galerkin & TRON1 & TRON2 & TRON3 & BLMVM \\\\\n \\hline\n A1 & 0.337 & 0.933 & 0.981 & 1.14 & 2.62 \\\\\n B1 & 0.790 & 1.72 & 2.06 & 2.71 & 5.04 \\\\\n C1 & 2.24 & 4.34 & 5.80 & 7.74 & 13.5 \\\\\n A2 & 7.21 & 15.2 & 21.7 & 32.5 & 72.0 \\\\\n B2 & 15.4 & 30.0 & 43.7 & 57.5 & 109 \\\\\n C2 & 40.4 & 67.8 & 113 & 118 & 286 \\\\\n A3 & 121 & 225 & 414 & 599 & 1167 \\\\\n B3 & 315 & 498 & 1061 & 1344 & 2524 \\\\\n C3 & 997 & 1539 & 2490 & 4365 & 9679 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[b]\n \\centering\n \\caption{Cube with a hole: wall-clock times (seconds) on Wolf for each solver \\label{Tab:S5_cube_times_wolf}}\n \\begin{tabular}{lccccc}\n \\hline\n Case & Galerkin & TRON1 & TRON2 & TRON3 & BLMVM \\\\\n \\hline\n A1 & 0.126 & 0.388 & 0.396 & 0.449 & 1.01 \\\\\n B1 & 0.314 & 0.720 & 0.853 & 1.07 & 2.03 \\\\\n C1 & 0.888 & 1.91 & 2.47 & 3.31 & 5.71 \\\\\n A2 & 2.58 & 6.34 & 8.74 & 12.8 & 26.2 \\\\\n B2 & 5.90 & 12.9 & 17.8 & 22.8 & 46 \\\\\n C2 & 16.2 & 30.1 & 47.3 & 48.9 & 133 \\\\\n A3 & 48.0 & 98.4 & 129 & 247 & 609 \\\\\n B3 & 107 & 171 & 342 & 435 & 1060 \\\\\n C3 & 281 & 467 & 870 & 1245 & 3131 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nThe diffusion parameters are set as: $\\alpha_L = 1$, $\\alpha_T = 0.001$, and \n$D_M = 0$. The pictorial description of the computational domain and the three mesh\ntypes composed of 4-node tetrahedrons are shown in Figure \n\\ref{Fig:S5_cube_description}. We consider three unstructured mesh types \nwith three levels of element-wise mesh refinement, giving us nine total case \nstudies of increasing problem size as shown in Table \\ref{Tab:S5_cube_information}. \nWe ran a total of five different simulations for this study: \n\\begin{itemize}\n\\item Galerkin with CG\/block Jacobi\n\\item TRON1: with KSP tolerance of $10^{-1}$\n\\item TRON2: with KSP tolerance of $10^{-2}$\n\\item TRON3: with KSP tolerance of $10^{-3}$\n\\item BLMVM\n\\end{itemize} \nThe TRON solvers also use the CG and block Jacobi preconditioner but with \ndifferent KSP tolerances. Numerical results for both the Galerkin formulation and \nthe non-negative methodologies for some of the mesh cases are shown in Figure \n\\ref{Fig:S5_cube_results}. The top row of figures arise from the Galerkin \nformulation where the white regions denote negative concentrations, and the \nbottom row arise from either TRON or BLMVM. Details concerning the \nviolation of the DMP for each case study can be found in Table \n\\ref{Tab:S5_cube_dmp}. Concentrations both negative\nand greater than one arise for all case studies. 
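\n\nFor reference, the violation percentages reported in Table~\\ref{Tab:S5_cube_dmp} amount to checking the discrete bounds (here $0$ and $1$, inherited from the prescribed boundary concentrations) at every vertex. A minimal NumPy check of this kind, with a stand-in concentration vector in place of the actual Galerkin nodal values, reads:\n\\begin{verbatim}\nimport numpy as np\n\n# stand-in for the nodal concentrations from the Galerkin solve\nc = np.random.uniform(-0.05, 1.05, size=36378)\n\nviolated = (c < 0.0) | (c > 1.0)\nprint(violated.sum(), c.size, 100.0 * violated.mean())\n\\end{verbatim}\n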
Moreover, \nsimply refining the mesh does not resolve these issues; in fact, \nrefinement worsens the violation. These numerical results indicate that our \ncomputational framework can successfully enforce the DMP for diffusion problems \nwith highly anisotropic diffusivity.\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/cube_solver_iterations.eps}}\n\\caption{Cube with a hole: solver iterations needed for Galerkin and BLMVM.}\n\\label{Fig:S5_cube_iterations_galerkin}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/cube_solver_iterations_ksp.eps}}\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/cube_solver_iterations_tao.eps}}\n\\caption{Cube with a hole: KSP (left) and TAO (right) solver iterations needed for TRON.}\n\\label{Fig:S5_cube_iterations_tron}\n\\end{figure}\n\n\\subsubsection{Performance modeling}\nWe first consider the wall-clock time spent in the solvers on a single core. \nTables \\ref{Tab:S5_cube_times_mustang} and \\ref{Tab:S5_cube_times_wolf} list \nthe solver time for each mesh, and we first note that the Mustang system requires \nsignificantly more wall-clock time to obtain a solution than Wolf; \nthis behavior is expected due to the difference in HPC hardware specifications \nlisted in Table \\ref{Tab:S3_HPC}; specifically, Mustang has a lower clock rate \nand lower memory bandwidth (as determined through STREAMS Triad). It can also be seen that \nthe various non-negative solvers consume varying amounts of wall-clock time.\nBLMVM can require as much as ten times the wall-clock time of the standard\nGalerkin method. TRON, on the other hand, does not consume nearly as much time, but \ntightening the KSP tolerances will gradually increase the amount of time. \nWe are interested in determining why these optimization solvers consume more \nwall-clock time, whether it is mostly due to the additional workload \nassociated with optimization-based techniques or due to the presence of \nrelatively more complicated and expensive data structures compared to the \nstandard solvers used for the Galerkin formulation. The first step is noting \nthe total KSP and TAO iterations needed and how they vary with respect to problem size. \nFigure \\ref{Fig:S5_cube_iterations_galerkin} depicts the KSP and TAO iterations for \nthe Galerkin and BLMVM methods, respectively. It is well-known that the block \nJacobi\/ILU(0) preconditioner requires more iterates as the size of the \nproblem increases. In other words, the solver may exhibit poor scaling for \nextremely large problems, but we see that the BLMVM algorithm has an even poorer \ngrowth rate in solver iterates. For the TRON solvers, we document both \nthe KSP and TAO iterates as shown in Figure \\ref{Fig:S5_cube_iterations_tron}. \nWe see that tightening the KSP tolerance increases the number of KSP iterates \nbut requires slightly fewer TAO iterates. 
This behavior indicates that the more \naccurate the computed gradient projection is, the fewer \noptimization loops the solver has to perform.\n\n\\begin{figure}[tp]\n\\centering\n\\subfloat[Mustang]{\\includegraphics[scale=0.55]\n{Figures\/cube_flopss_mustang.eps}}\n\\label{Fig:S5_cube_flopss_mustang}\n\\subfloat[Wolf]{\\includegraphics[scale=0.55]\n{Figures\/cube_flopss_wolf.eps}}\n\\label{Fig:S5_cube_flopss_wolf}\\\\\n\\caption{Cube with a hole: measured floating-point rate (FLOPS\/s) on a single core.}\n\\label{Fig:S5_cube_flopss}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/cube_ai.eps}}\n\\caption{Cube with a hole: arithmetic intensity for all solvers and all cases on a single processor.}\n\\label{Fig:S5_cube_ai}\n\\end{figure}\n\nWe also examine the measured floating-point rate provided by the PETSc performance logs,\nas shown in Figure \\ref{Fig:S5_cube_flopss}, of all five solvers across their respective \nmachines, and the floating point performance decreases as the problem size grows. One \ncould compare these numbers to the TPP and see that the hardware efficiencies are no \ngreater than $5\\%$, but it is difficult to draw any other conclusions with regard to the\ncomputational performance. The calculated AI, based on our proposed performance model, \nis shown in Figure \\ref{Fig:S5_cube_ai}. It is \ninteresting to note that the AI remains largely invariant with problem size \nunlike the wall-clock time, solver iterations, and floating point rates. \nAccording to the perfect cache model, the Galerkin formulation's AI is greater \nthan any of the non-negative methodologies. The optimization-based \nalgorithm based on TAO's BLMVM solver has significantly more streaming\/vector \noperations which explains the relatively lower AI. \nUsing these metrics in equation \\eqref{Eqn:S3_efficiency} as well as the \nSTREAMS Triad bandwidth of one core as shown in Figure \\ref{Fig:S3_streams}, \nthe estimated roofline-based efficiencies are shown in Figure \\ref{Fig:S5_cube_roofline}. \nAlthough the raw floating-point rate of BLMVM is lower than the Galerkin method, the roofline\nmodel suggests that BLMVM is actually more efficient in the hardware sense. The TRON methods\nhave much lower floating-point rates, but these metrics can be improved or ``gamed\" by \ntightening the KSP tolerances. 
This behavior leads us to believe that there is some \nlatency associated with setting up the data structures needed to compute gradient \ndescent projections.\n\n\\begin{figure}[tp]\n\\centering\n\\subfloat[Mustang]{\\includegraphics[scale=0.55]\n{Figures\/cube_efficiency_mustang.eps}}\n\\label{Fig:S5_cube_flopss_mustang}\n\\subfloat[Wolf]{\\includegraphics[scale=0.55]\n{Figures\/cube_efficiency_wolf.eps}}\n\\label{Fig:S5_cube_flopss_wolf}\\\\\n\\caption{Cube with a hole: estimated floating-point efficiency with respect to the \narithmetic intensity and measured memory bandwidth from STREAMS.}\n\\label{Fig:S5_cube_roofline}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/cube_speedup_mustang.eps}}\n\\caption{Cube with a hole: speedup for all 9 mesh cases up to 64 processors on the Mustang system (16 cores per node).}\n\\label{Fig:S5_cube_speedup_mustang}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/cube_speedup_wolf.eps}}\n\\caption{Cube with a hole: speedup for all 9 mesh cases up to 64 cores on the Wolf system (8 cores per node).}\n\\label{Fig:S5_cube_speedup_wolf}\n\\end{figure}\n\\subsubsection{Strong-scaling}\nThe metric of most interest to many computational scientists is the strong-scaling \npotential of any numerical framework. We conduct strong-scaling \nstudies to measure the speedup of all nine case studies over 64 cores. \nFour Mustang nodes with 16 cores each and 8 Wolf nodes with 8 cores \neach are allocated for this study. We do not fully saturate the compute\nnodes because the STREAMS benchmark indicates that there is little or no \ngain in memory performance when using a full node. Figure \\ref{Fig:S5_cube_speedup_mustang} \ndepicts the speedup on the Mustang system, and Figure \\ref{Fig:S5_cube_speedup_wolf}\ndepicts the speedup on the Wolf system. First, we note that the parallel efficiency \n(actual speedup over ideal speedup) increases with problem size due to \nAmdahl's Law. We also note that Wolf exhibits better strong-scaling due to the\nfaster speedups for the same test studies. For all problems and machines, \nthe TRON simulations are slightly less efficient in the parallel sense but can be \nimproved by tightening the KSP tolerances. Interestingly, the BLMVM algorithm not \nonly has the best roofline efficiency but also the best parallel speedup. \nWe can infer from these results that although BLMVM is the more efficient \noptimization in the hardware sense, TRON is more efficient in the algorithmic sense \ndue to its lesser time-to-solution. Our study has shown that one can draw \ncorrelations between the performance models conducted on a single-core and the \nactual speedup across multiple distributed memory nodes. 
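\n\nTo make the preceding estimates concrete, the roofline-style bound we use can be written in a few lines. The byte count below follows the perfect-cache estimate for MatMult() in Table~\\ref{Tab:S3_TBT}, and the peak rate corresponds to one Wolf core (2.6 GHz $\\times$ 8 FLOPs\/cycle from Table~\\ref{Tab:S3_HPC}); the bandwidth and measured rate are placeholders rather than measurements, and the efficiency definition shown is one plausible reading of equation \\eqref{Eqn:S3_efficiency} combined with the STREAMS Triad bandwidth.\n\\begin{verbatim}\ndef arithmetic_intensity(total_flops, total_bytes):\n    return total_flops / total_bytes\n\ndef roofline_efficiency(measured_rate, ai, stream_bw, peak_rate):\n    # attainable rate is capped by the memory system or the core's peak rate\n    attainable = min(peak_rate, ai * stream_bw)\n    return measured_rate / attainable\n\nn, nz = 1_000_000, 7_000_000                   # illustrative SpMV sizes\nflops = 2 * nz                                 # one multiply-add per non-zero\nbytes_moved = 4 * (n + nz) + 8 * (2 * n + nz)  # MatMult() row of the TBT table\nai = arithmetic_intensity(flops, bytes_moved)\nprint(roofline_efficiency(2.0e9, ai, 12.0e9, 2.6e9 * 8))\n\\end{verbatim}\n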
As future solvers and \nalgorithms are implemented within PETSc, we can use this performance model to \nassess how efficient they are in both the hardware and algorithmic sense and how \nefficiently they will scale in a parallel setting.\n\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.45]{Figures\/chromium_perspective.png}}\n\\caption{Chromium plume migration in the subsurface: Permeability field (m$^2$) and the locations of the pumping well (R28) and contaminant source (R42).}\n\\label{Fig:S5_chromium_description}\n\\end{figure}\n\\subsection{Transport of chromium in subsurface}\n\\begin{table}[b]\n \\centering\n \\caption{Chromium plume migration in the subsurface: parameters \\label{Tab:S5_chrom_properties}}\n \\begin{tabular}{ll}\n \\hline\n Parameter & Value \\\\\n \\hline\n $\\alpha_L$ & 100 m \\\\\n $\\alpha_T$ & 0.1 m \\\\\n Contaminant source (R42) & $1\\times 10^{-4}$ kg\/m$^2$s$^2$ \\\\\n $\\Delta t$ & 0.2 days \\\\\n Domain size & 7000 km$\\times$6000 km$\\times$100 m \\\\\n $D_M$ & $1\\times 10^{-9}$ m$^{2}$\/s \\\\\n Permeability & Varies \\\\\n Pumping well (R28) & -0.01 kg\/m$^2$s$^2$\\\\\n Total hexahedrons & 1,984,512 \\\\\n Total vertices & 2,487,765 \\\\\n $\\mathbf{v}$ & Varies with position \\\\\n Viscosity & 3.95$\\times 10^{-5}$ Pa s \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{figure}[htp]\n\\centering\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_orig_001.png}}\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_orig_005.png}}\\\\\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_orig_024.png}}\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_orig_076.png}}\\\\\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_orig_300.png}}\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_orig_900.png}}\n\\caption{Chromium plume migration in the subsurface: concentrations at select times using the Galerkin formulation.}\n\\label{Fig:S5_chromium_orig}\n\\end{figure}\n\\begin{figure}[htp]\n\\centering\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_nonneg_001.png}}\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_nonneg_005.png}}\\\\\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_nonneg_024.png}}\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_nonneg_076.png}}\\\\\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_nonneg_300.png}}\n\\subfloat{\\includegraphics[scale=0.36]{Figures\/chrom_nonneg_900.png}}\n\\caption{Chromium plume migration in the subsurface: concentrations at select times using the non-negative methodology.}\n\\label{Fig:S5_chromium_nonneg}\n\\end{figure}\nSubsurface clean-up due to anthropogenic contamination \nis a big challenge \\cite{us2004cleaning}. \nRemediation studies \\cite{hammond2010field,harp2013contaminant} need accurate predictions \nof transport of the involved chemical species, which are obtained \nusing limited data at monitoring wells and through \nnumerical simulations.\nTo accurately predict the fate of the contaminant,\na transport solver that: a) is robust, in that it will not \ngive unphysical solutions, and b) can handle field-scale \nscenarios, is needed. The computational framework that is \nproposed in this paper is an ideal candidate \nfor such problems. We now consider a realistic \nlarge-scale problem to predict the fate of \nchromium in the Los Alamos, New Mexico area. \nThe chromium was released into the Sandia canyon in the 50s up to \nearly 70s. 
Back then chromium was used as an anti-corrosion\nagent for the cooling towers at a power plant at the \nLos Alamos National Laboratory (see \\cite{heikoop2014isotopic} \nand references therein for details). \n\nHere we study the effectiveness of our proposed framework\nfor this real-world scenario of predicting the extent of \nthe chromium plume. The conceptual model domain\nis of size\n $[496,503]\\mathrm{km} \\times [536,542]\n\\mathrm{km} \\times [0,100]\\mathrm{m}$ with the permeability field (m$^2$) as shown \nin Figure \\ref{Fig:S5_chromium_description}. R42 in \nFigure \\ref{Fig:S5_chromium_description} is estimated to be the contaminant source\nlocation and a pumping well is located at R28. The parameters used for this \nproblem are shown in Table \\ref{Tab:S5_chrom_properties}, and we employ the \nfollowing boundary conditions:\n\\begin{align}\n \\label{Eqn:S5_chrom_bc_conc}\n &c^{\\mathrm{p}}(x=496\\mathrm{km},y,z) = c^{\\mathrm{p}}(x=503\\mathrm{km},y,z) = \nc^{\\mathrm{p}}(x,y=536\\mathrm{km},z) = c^{\\mathrm{p}}(x,y=542\\mathrm{km},z) = 0\n\\end{align}\nFor this highly heterogeneous problem, we \nemploy PETSc's algebraic multi-grid preconditioner (GAMG) and\ncouple it with the TRON algorithm for the non-negative solver. Our goal is \nto examine its strong-scaling potential across 1024 cores. \nWe first solve the steady flow equation (based on mass balance and\nDarcy's model to relate pressure and mass flux) with the pumping well \nlocated at R28. Cell-wise velocity is obtained from the resulting pressure field and \nused to calculate the element-wise dispersion tensor. We then solve \nthe transient diffusion problem (with tensorial dispersion) with a \nconstant contaminant source located at R42 for up to 180 days. \nThe concentrations at select time levels for the Galerkin formulation and \nthe non-negative methodology are shown in Figures \\ref{Fig:S5_chromium_orig} \nand \\ref{Fig:S5_chromium_nonneg}, respectively. Negative concentrations arise with \nthe Galerkin formulation even as the solution approaches steady-state.\n\nFigure \\ref{Fig:S5_chromium_strongscale_first} depicts the amount of wall-clock \ntime with respect to the number of cores at the first time level. We see here \nthat the framework demonstrates good parallel performance across 1024 cores with up to \n35 percent parallel efficiency. Unlike the previous benchmark problem, we consider \na case where we completely saturate a Wolf node by using all 16 cores and notice\nthat the performance is slightly worse than using a partially saturated node \n(8 cores). This change of behavior can be attributed to the lack of memory \nperformance improvement one achieves when using all 16 cores, as shown in \nFigure \\ref{Fig:S3_streams}. Interprocess communication becomes a major component of\nthe Wolf simulations, so the parallel scalability gets worse the more efficiently TRON\nconducts its work. Nonetheless, the higher quality computing resources \nof a Wolf node result in faster solve times than on Mustang, \neven with lower parallel efficiency. \n\nAnother metric of interest is the number of solver iterations required for\nconvergence across various numbers of MPI processes. Figure \n\\ref{Fig:S5_chromium_iterations_first} depicts the number of KSP solver iterations\nand TAO solver iterations across 1024 cores, and we notice that there are \nsignificant fluctuations. This trend is largely attributed to the accumulation \nof numerical round-off errors in the TRON algorithm. 
One can reduce\nthese fluctuations by tightening the solver tolerances, but the strong-scaling \nremains largely unaffected even for the results shown. This study suggests that \nthe proposed non-negative methodology using TRON with GAMG preconditioning \nis suitable for large-scale transient heterogeneious and anisotropic diffusion \nequations.\n\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/chrom_strong_first.eps}}\n\\caption{Chromium plume migration in the subsurface: wall-clock time of the TRON \noptimization solver with multi-grid preconditioner (GAMG) versus number \nof processors after the first time level. Two Wolf cases are considered, where \nwe fully saturate a compute node (16 cores) and where we partially saturate a \ncompute node (8 cores per node). The parallel efficiency with respect to 16 \ncores is shown on the right hand side.}\n\\label{Fig:S5_chromium_strongscale_first}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/chrom_solver_iters.eps}}\n\\caption{Chromium plume migration in the subsurface: number of KSP (left hand side) \nand TAO (right hand side) solver iterations for the TRON optimization solver versus \nnumber of cores after the first time level.}\n\\label{Fig:S5_chromium_iterations_first}\n\\end{figure}\n\n\\section{INTRODUCTION}\n\\label{Sec:Introduction}\nThe modeling of flow and transport in subsurface is vital \nfor energy, climate and environmental applications. Examples \ninclude CO$_2$ migration in carbon-dioxide sequestration, \nenhanced geothermal systems, oil and gas production, \nradio-nuclide transport in a nuclear waste repository, \ngroundwater contamination, and thermo-hydrology in the \nArctic permafrost due to the recent climate change \n\\cite{karra2014three,lichtner2014modeling,\nhammond2010field,kelkar2014simulator}. \nSeveral numerical codes (e.g., FEHM \\cite{zyvoloski2007fehm}, \nTOUGH \\cite{pruess2004tough}, PFLOTRAN \\cite{lichtner2015pflotran}) \nhave been developed to model flow and transport in subsurface \nat reservoir-scale. These codes typically solve unsteady Darcy \nequations for flow and advection-diffusion equation for \ntransport. The predictive capability of a numerical simulator \ndepends on the robustness of the underlying numerical methods. \nA necessary and essential requirement is to satisfy important \nmathematical principles and physical constraints. \nOne such property in transport and reactive-transport problems \nis that the concentration of a chemical species cannot be negative. \nMathematically, this translates to the satisfaction of the \ndiscrete maximum principle (DMP) for diffusion-type equations. \nSubsurface flow and transport applications typically encounter \ngeological media that are highly heterogeneous and anisotropic \nin nature, and it is well-known that the classical finite \nelement (or finite volume and finite difference, for that \nmatter) formulations do not produce non-negative solutions \non arbitrary meshes for such porous media \n\\cite{Ciarlet_Raviart_CMAME_1973_v2_p17,\nliska2008enforcing,\nNakshatrala_JCP_2009_v228_p6726,\nNagarajan_IJNMF_2011_v67_p820}. \n\nSeveral studies over the years have focused on the \ndevelopment of methodologies that enforce the DMP \nand ensure non-negative solutions \\cite{Nakshatrala_JCP_2009_v228_p6726,\nNagarajan_IJNMF_2011_v67_p820,Payette_IJNME_2012_v91_p742,\nNakshatrala_JCP_2013_v253_p278}. 
However, these studies \ndid not address how these methods can be used for \nrealistic large-scale subsurface problems that have millions of grid nodes. \nFurthermore, complex coupling between different physical \nprocesses as well as the presence of multiple species amplify the \ndegrees-of-freedom (i.e., the number of unknowns).\nThe aim of this paper is to develop a parallel \ncomputational framework that solves anisotropic diffusion \nequations on general meshes, ensures non-negative solutions, \nand can be employed to solve large-scale realistic problems.\n\nLarge-scale problems can be tackled by using recent \nadvancements in high-performance computing (HPC) \nmethods and toolkits that can be used on the \nstate-of-the-art supercomputing architecture. \nOne such toolkit is PETSc \\cite{petsc-user-ref}, which \nprovides data structures and subroutines for setting up \nstructured and unstructured grids, parallel communication, \nlinear and non-linear solvers, and parallel I\/O. These \nhigh-level data structures and subroutines help in faster \ndevelopment of parallel application codes and minimize \nthe need to program low-level message passing, so that \nthe domain scientists can focus more on the application. \nTo this end, we develop a non-negative parallel framework\nby leveraging the existing capabilities within PETSc. Our \nframework ensures the DMP for anisotropic diffusion by using \nlower-order finite elements and the optimization-based \napproach in \\cite{Nakshatrala_JCP_2009_v228_p6726,\nliska2008enforcing,Nagarajan_IJNMF_2011_v67_p820}. The TAO \ntoolkit \\cite{tao-user-ref}, which is built on top of PETSc, \nis used for solving the resulting optimization problems. The \nrobustness of the proposed framework will be demonstrated by \nsolving realistic large-scale problems. \n\nThe rest of this paper is organized as follows. In Section \n\\ref{Sec:Governing_Equations}, we present the governing \nequations and the classical single-field Galerkin finite \nelement formulation for steady-state and transient diffusion \nequations. The optimization-based method to ensure non-negative \nconcentrations is also outlined in this section. \nIn Section \\ref{Sec:Parallel_Implementation}, the parallel \nimplementation procedure using PETSc and TAO is presented. \nWe also highlight the relevant data structures used in \nthis study and present a pseudo algorithm describing our parallel framework. \nIn Section \\ref{Sec:Performance_Modeling}, a performance model \nloosely based on the roofline model is outlined. This model is used \nto estimated the efficiency with respect to computing hardware \nof currently available solvers within PETSc and TAO.\nIn Section \\ref{Sec:Numerical_Results}, we first verify our \nimplementation using a 3D benchmark problem from the literature\nand present a detailed performance study using the proposed model. \nThen, we study a large-scale three-dimensional realistic \nproblem involving the transport of chromium in the subsurface\nand document the numerical results of the non-negative \nmethodology with the classical single-field Galerkin formulation. \nConclusions are drawn in Section \\ref{Sec:Concluding_Remarks}.\n\n\\section{PERFORMANCE MODELING}\n\\label{Sec:Performance_Modeling}\nPETSc is a constantly evolving open-source library that\nbrings out new features and algorithms almost every day.\nIt has capabilities to interface with a large number of\nother open-source software and linear algebra packages. 
\nHowever, it is not always known which of these algorithms \nwill have the best performance across multiple distributed \nmemory HPC systems, especially if these packages have little \ndocumentation and have to be used as black-box solvers. \nComputational scientists would like to know which solvers \nor algorithms to use for their specific needs before running \njobs on state-of-the-art HPC systems. The first and most \nobvious metric for answering this question is the \ntime-to-solution for a given solver or optimization method. \nHowever, additional information is needed in order to quantify \nthe hardware and algorithmic efficiency as well as the potential \nscalability across multiple cores in the strong sense.\n\\begin{table}[htp]\n \\centering\n \\caption{List of HPC systems used in this study \\label{Tab:S3_HPC}}\n \\begin{tabular}{lcc}\n \\hline\n & Mustang (MU) & Wolf (WF) \\\\\n \\hline\n Processor & AMD Opteron 6176 & Intel Xeon E5-2670 \\\\\n Clock rate & 2.3 GHz & 2.6 GHz\\\\\n FLOPs\/cycle & 4 & 8\\\\\n Sockets per compute node & 2 & 2\\\\\n NUMA nodes per socket & 2 & 1\\\\\n Cores per socket & 12 & 8\\\\\n Total cores (compute nodes) & 38400 (1600) & 9856 (616)\\\\\n Memory per compute node & 64 GB & 64 GB\\\\\n L1 cache per core & 128 KB & 32 KB\\\\\n L2 cache per core & 512 KB & 256 KB\\\\\n L3 cache per socket & 12 MB & 20 MB\\\\\n Interconnect & 40 Gb\/s & 40 Gb\/s \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nHardware specifications of HPC systems significantly impact the performance of any \nnumerical algorithm. Ideally, we want our simulations to consume as little wall-clock\ntime as possible as the number of processing cores increases (i.e., achieving good \nspeedup), but several other factors including compiler vectorization, cache locality, \nmemory bandwidth, and code implementation may drastically affect the performance. \nTable \\ref{Tab:S3_HPC} lists the hardware specifications \nof the two HPC systems (Mustang and Wolf) that are used \nin our numerical experiments. The Mustang system uses a \nrelatively older generation of processors, so \nit is not expected to perform as well. One could \nsimply measure wall-clock time across multiple compute \nnodes on the respective HPC machines and determine the \nparallel efficiency of a certain algorithm, but we are \ninterested in quantifying how different algorithms behave \nsequentially and what kind of parallel performance to expect \nbefore running numerical simulations on supercomputers. The wall-clock \ntime of any simulation can generally be expressed as a function of three contributions: \nthe computational workload, the transfer of data between memory and the CPU registers, \nand interprocess communication. Hardware efficiency in this \ncontext is the fraction of time spent performing useful work \nrather than waiting for data to be fetched from memory or the caches. \n\nOn modern computing architectures, the performance of numerical methods \nis typically limited by the memory bandwidth. That is, the achieved floating point \nperformance in FLOPS\/s will never reach the theoretical peak performance \n(TPP). This limitation is particularly important for iterative solvers and \noptimization methods that rely on numerous sparse matrix-vector (SpMV) \nmultiplications (see \\cite{may_ptatin} and the references within). The \nfrequent use of SpMV allows for little cache reuse and will result in a \nlarge number of very expensive cache misses. 
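To make this concrete, consider the simple vector update $Y = aX + Y$ (\texttt{VecAXPY()}), anticipating the arithmetic intensity and bandwidth figures developed in the remainder of this section; the numbers below are rough, illustrative estimates based on the perfect-cache byte counts formalized in Table \ref{Tab:S3_TBT}. Counting one multiply and one add per entry, the kernel performs $2N$ floating-point operations while streaming roughly $8(3N)$ bytes (reading $X$ and $Y$ and writing $Y$ as 8-byte doubles), so its arithmetic intensity is about $2N\/24N \approx 0.083$ flops per byte. With the single-core STREAMS Triad bandwidth of roughly 15.5 GB\/s measured later for Wolf, this caps the kernel at about $0.083 \times 15.5 \approx 1.3$ GFLOPS\/s, whereas a single 2.6 GHz Wolf core executing 8 FLOPs per cycle has a theoretical peak of about 20.8 GFLOPS\/s; even a perfect implementation of such a streaming kernel therefore attains only a few percent of peak.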
Such behavior is important to \ndocument when determining how efficient a scientific code is, so performance models \nsuch as the Roofline Model \\cite{williams,Lo_roofline}, which measures memory transfers, \nhave been used to better quantify the efficiency with respect to the hardware. \nPerformance models in general can help application developers \nidentify bottlenecks and indicate which areas of the code can be further optimized. \nIn other words, the code can be designed so that it maximizes the full \nbenefits of the available computing resources. In the next \nsection, we will demonstrate that such models can also be used to predict the \nparallel efficiency of various optimization solvers on the two very different LANL \nHPC systems. The key parameter for these performance models is the Arithmetic \nIntensity (AI) which is defined as:\n\\begin{align}\n &\\mbox{AI} = \\frac{\\mbox{Total FLOPS}}\n {\\mbox{Total Bytes Transferred}}\n \\label{Eqn:S3_efficiency}\n\\end{align}\nwhere the Total Bytes Transferred (TBT) metric denotes\nthe amount of bandwidth needed for a given floating\npoint operation. The AI serves as a multiplier to the\nactual memory bandwidth and creates a ``roofline\" for\nthe estimation of ideal peak performance. A cache model \nis needed in order to properly define the TBT. \n\n\\begin{table}[tp]\n \\centering\n \\caption{Commonly used PETSc operations and their \n respective Total Bytes Transferred. Here we note \n $X,Y,Z$ as vectors with $i = 1,\\cdots,N$ entries, \n $a$ is a scalar value, and $nz$ denotes the total \n number of non-zeros. We assume that the sizes of \n integers and doubles are 4 and 8 bytes respectively. \n \\label{Tab:S3_TBT}}\n \\begin{tabular}{lccc}\n \\hline\n PETSc function & Operation & Total Bytes Transferred \\\\\n \\hline\n VecNorm() & $a = \\sqrt{\\sum^{N}_{i} X(i)^2} $ & $8(N + 1)$ \\\\\n VecDot() & $a = \\sum^{N}_{i}X(i)*Y(i)$ & $8(2N + 1)$ \\\\\n VecCopy() & $Y \\leftarrow X$ & $8(2N)$ \\\\\n VecSet() & $Y(i) = a$ & $8(2N)$ \\\\\n VecScale() & $Y = a*Y$ & $8(2N)$ \\\\\n VecAXPY() & $Y = a*X+Y$ & $8(3N)$ \\\\\n VecAYPX() & $Y = X+a*Y$ & $8(3N)$ \\\\\n VecPointwiseMult() & $Z(i) = X(i)*Y(i)$ & $8(3N)$ \\\\\n MatMult() & SpMV & $4(N + nz) + 8(2N + nz)$ \\\\\n \\hline\n \\end{tabular}\n\\end{table}\nTo this end, we propose a roofline-like performance \nmodel where the TBT assumes a ``perfect cache\" -- each \nbyte of the data needs to be fetched from DRAM only \nonce. This assumption enables us to predict a slightly \nmore realistic upper bound of the peak performance than \nby simply comparing to the TPP. Table \\ref{Tab:S3_TBT} \nlists the key PETSc functions used for the solvers and their \nrespective estimates of TBT based on the perfect cache assumption. \nThe formula for SpMV follows the procedure outlined in \n\\cite{Kaushik99towardrealistic}. We assume that the TBT formula \nfor operations also involving a sparse matrix and vector \nlike the incomplete lower-upper (ILU) factorization to be \nthe same as MatMult(). Estimating the TBT for other \nimportant operations like the sparse matrix-matrix and \ntriple matrix products (which are important for multi-grid methods) \nis an area of future work. In short, our AI formulation \nrelies on the following four key assumptions:\n\\begin{enumerate}[(i)]\n\\item All floating-point operations (add, multiply, square roots, etc.) are treated \nequally and equate to one FLOP count. \n\\item There are no conflict misses. 
That is, each matrix and vector element is \nloaded into cache only once.\n\\item The processor never waits on a memory reference. That is, any number of loads and \nstores are satisfied in a single cycle.\n\\item Compilers are capable of storing scalar multipliers in the register only for \npure streaming computations.\n\\end{enumerate}\nTherefore, the efficiency based on this new roofline-like performance model is defined as:\n\\begin{figure}[htp]\n\\centering\n\\subfloat{\\includegraphics[scale=0.55]{Figures\/streams_benchmark.eps}}\n\\caption{Estimated memory bandwidth of a single Mustang and Wolf compute node based \non the STREAMS Triad Benchmark}\n\\label{Fig:S3_streams}\n\\end{figure}\n\\begin{align}\n &\\mbox{Efficiency (\\%)} = \\frac{\\mbox{Measured FLOPS\/s}}\n {\\mathrm{min}\\left\\{\\begin{array}{l}\n \\mbox{TPP} \\\\\n \\mathrm{AI}\\times\\mbox{STREAMS}\n \\end{array}\n \\right.}\\times100\n \\label{Eqn:S3_efficiency}\n\\end{align}\nwhere the numerator is reported by the PETSc program\nand the denominator is the ideal performance upper-bounded \nby both the TPP and the product of AI and STREAMS bandwidth. \nSTREAMS Triad \\cite{streams} is one of the most \npopular benchmarks for determining the achievable memory \nperformance of a given machine. Figure \\ref{Fig:S3_streams} shows the \nestimated memory bandwidth as a function of the number of cores on a single \nMustang and Wolf node. It is interesting to note that although\nthe Wolf node has a greater bandwidth, there is\nno performance gain past eight cores. This means that the optimal use of a\nWolf compute node for memory-bandwidth bound algorithms would be \neight cores, whereas one would still see some performance gains when\nusing all 24 cores on a Mustang node.\n\nThe performance model that uses equation \\eqref{Eqn:S3_efficiency} \nis a serial model, so the STREAMS metrics for the Mustang and Wolf \nsystems are 5.65 GB\/s and 15.5 GB\/s respectively. It should be \nnoted that this performance model does not account for cache effects. \nThat is, it does not quantify the useful bandwidth sustained at \nany particular level of cache. The true hardware and algorithmic efficiency \nis therefore not fully reflected by this model, so our aim is to show relative \nperformance between select PETSc and TAO solvers. Comparing the \nAI and the measured FLOPS\/s with the STREAMS bandwidth will give \nus a better understanding of how high-performing the PETSc and TAO \nsolvers are for select problems.\n\n\\section{PARALLEL IMPLEMENTATION}\n\\label{Sec:Parallel_Implementation}\n\\subsection{PETSc and TAO}\n\\begin{figure}[htp]\n\\centering\n\\subfloat[Optimal 2D and 3D elements]{\\includegraphics[scale=0.75]{Figures\/DMPlex_simple.eps}}\n\\label{Fig:S3_dmplex_simple}\\\\\n\\subfloat[Interpolated 2D and 3D elements]{\\includegraphics[scale=0.75]{Figures\/DMPlex_interpolated.eps}}\n\\label{Fig:S3_dmplex_interpolated}\n\\caption{Representation of mesh points within the DMPlex data structure and their associated directed acyclic graphs}\n\\label{Fig:S3_dmplex}\n\\end{figure}\nWe leverage existing scientific libraries such as PETSc and TAO to formulate our \nlarge-scale computational framework. PETSc is a suite of data structures and \nroutines for the parallel solution of scientific applications. It also provides \ninterfaces to several other libraries such as Metis\/ParMETIS \\cite{METIS} and HDF5 \n\\cite{hdf5} for mesh partitioning and binary data format handling respectively. 
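To give a flavor of how an application built on these libraries is structured, a minimal sketch of a TAO-based driver for the bound-constrained solve is shown below. The sketch follows the PETSc\/TAO 3.x-era C interface; exact routine names and signatures vary between releases, error checking is omitted, and \texttt{FormFunctionGradient}, \texttt{FormHessian}, and \texttt{AppCtx} are hypothetical user-supplied routines and context whose contents correspond to the objective, gradient, and Hessian defined later in this paper.
\begin{verbatim}
#include <petsctao.h>

/* Hypothetical user context: assembled Jacobian, residual, and the
   solution at the previous time level (see the pseudocode below). */
typedef struct { Mat J; Vec r; Vec c_old; } AppCtx;

extern PetscErrorCode FormFunctionGradient(Tao, Vec, PetscReal*, Vec, void*);
extern PetscErrorCode FormHessian(Tao, Vec, Mat, Mat, void*);

int main(int argc, char **argv)
{
  Tao    tao;
  Vec    c, lb, ub;       /* solution and bound vectors */
  AppCtx user;

  PetscInitialize(&argc, &argv, NULL, NULL);
  /* ... create the DMPlex mesh, discretize, and assemble user.J,
     user.r, user.c_old, and the solution vector c (not shown) ... */

  VecDuplicate(c, &lb);
  VecDuplicate(c, &ub);
  VecSet(lb, 0.0);              /* non-negativity: c >= 0 */
  VecSet(ub, PETSC_INFINITY);   /* no upper bound */

  TaoCreate(PETSC_COMM_WORLD, &tao);
  TaoSetType(tao, TAOTRON);     /* or TAOBLMVM for the Hessian-free variant */
  TaoSetInitialVector(tao, c);
  TaoSetVariableBounds(tao, lb, ub);
  TaoSetObjectiveAndGradientRoutine(tao, FormFunctionGradient, (void*)&user);
  TaoSetHessianRoutine(tao, user.J, user.J, FormHessian, (void*)&user);
  TaoSetFromOptions(tao);

  TaoSolve(tao);                /* bound-constrained solve at this time level */

  TaoDestroy(&tao);
  VecDestroy(&lb);
  VecDestroy(&ub);
  PetscFinalize();
  return 0;
}
\end{verbatim}
Within the time-stepping loop described later in this section, only the residual assembly and the call to \texttt{TaoSolve()} are repeated, since the Jacobian remains fixed for a constant time step.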
The \nData Management (DM) data structure is used to manage all information, including \nvectors and sparse matrices, and is compatible with binary data formats. To handle \nunstructured grids in parallel, a subset of the DM structure called DMPlex \n(see \\cite{Lange_easc,Knepley_scicomp,petsc-user-ref}), as shown in Figure \\ref{Fig:S3_dmplex},\nuses a directed acyclic graph to organize all mesh information. This topology \nprovides the freedom to mix and match various non-vertex-based discretizations such as\nthe two-point flux finite volume method and the classical mixed formulations\nbased on the lowest-order Raviart--Thomas finite element space.\n\nAnother important toolkit built on top of PETSc is TAO. The TAO library has a suite of data \nstructures and routines that enable the solution of large-scale optimization \nproblems. It can support any data structure or solver within PETSc. Our non-negative\nmethodology will use both the Newton Trust Region (TRON) and Bounded Limited-Memory \nVariable-Metric (BLMVM) solvers available within TAO. BLMVM is a quasi-Newton method \nthat uses projected gradients to approximate the Hessian, which is useful for problems where the\nHessian is too complicated or expensive to compute. Other optimization algorithms \nsuch as TRON and the Gradient Projected Conjugate Gradient (GPCG) typically require \nHessian information and more memory, but they are expected to converge more rapidly\nthan BLMVM. Further details regarding the implementation of these various methods \nmay be found in \\cite{tao-user-ref} and the references within.\n \n\\subsection{Finite element implementation}\nPETSc abstractions for finite elements, quadrature rules, and function spaces have \nalso been recently introduced and are suitable for the mesh topology within DMPlex. \nThey are built upon the same framework as the Finite element Automatic Tabulator \n(FIAT) found within the FEniCS Project \\cite{Kirby2012a,Alogg,LoggMardalEtAl2012a}. \nThe finite element discretizations simply need the equations, auxiliary coefficients\n(e.g., permeability, diffusivity, etc.), and boundary conditions specified as \npoint-wise functions. We express all discretizations in nonlinear form, so let \n$\\boldsymbol{r}$ and $\\boldsymbol{J}$ denote the residual and Jacobian respectively.\n\nFollowing the FEM model outlined in \\cite{2013arXiv1309.1204K}, we consider the weak\nform that depends on fields and gradients. The residual evaluation can be expressed\nas: \n\\begin{align}\n&\\boldsymbol{w}^{\\mathrm{T}}\\boldsymbol{r}(\\boldsymbol{c}) \\sim \\int_{\\Omega^{e}} \\left[w\\cdot \n\\mathcal{F}_{0}\\left(c,\\nabla c\\right) + \n\\nabla w \\cdot \\boldsymbol{\\mathcal{F}}_1(c,\\nabla c) \n\\right]\\mathrm{d} \\Omega = 0\n\\end{align}\nwhere $\\mathcal{F}_0(c,\\nabla c)$ and $\\boldsymbol{\\mathcal{F}}_1(c,\\nabla c)$ are \nuser-defined point-wise functions that capture the problem physics. This framework \ndecouples the problem specification from the mesh and degree-of-freedom traversal. \nThat is, the scientist need only focus on providing point-wise function evaluations while\nletting the finite element library take care of meshing, quadrature points, basis \nfunction evaluation, and mixed forms if any. 
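As an illustration, the point-wise functions for the steady diffusion operator considered in this work, $\mathcal{F}_0 = -f(\mathbf{x})$ and $\boldsymbol{\mathcal{F}}_1 = \mathbf{D}(\mathbf{x})\nabla c$, could be sketched in C as follows. The snippet is purely illustrative: it does not reproduce the exact PETSc point-wise call-back signature (which depends on the release), and \texttt{diffusivity()} and \texttt{source()} are hypothetical user-provided coefficient routines.
\begin{verbatim}
#include <petscsys.h>

#define DIM 3

/* Hypothetical user-provided coefficient routines */
extern void diffusivity(const PetscReal x[], PetscReal D[DIM][DIM]); /* D(x) */
extern PetscReal source(const PetscReal x[]);                        /* f(x) */

/* F0(c, grad c): term tested against the weighting function w */
static void f0_diffusion(const PetscReal x[], PetscReal c,
                         const PetscReal grad_c[], PetscReal *f0)
{
  *f0 = -source(x);
}

/* F1(c, grad c): term tested against grad w, here D(x) * grad c */
static void f1_diffusion(const PetscReal x[], PetscReal c,
                         const PetscReal grad_c[], PetscReal f1[])
{
  PetscReal D[DIM][DIM];
  diffusivity(x, D);
  for (PetscInt i = 0; i < DIM; ++i) {
    f1[i] = 0.0;
    for (PetscInt j = 0; j < DIM; ++j) f1[i] += D[i][j] * grad_c[j];
  }
}
\end{verbatim}
The corresponding Jacobian point-wise functions are simply the derivatives of these expressions, as listed in the equations that follow.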
The discretization of the residual is \nwritten as: \n\\begin{align}\n&\\boldsymbol{r}(\\boldsymbol{c}) = \\mathop{\\huge \\boldsymbol{\\mathsf{A}}}_{e=1}^{Nele} \n\\left[\\begin{array}{lr}\\boldsymbol{N}^{\\mathrm{T}} & \\boldsymbol{B}^{\\mathrm{T}}\t\n\\end{array}\\right]\n\\boldsymbol{W}\\left[\\begin{array}{l}\t\\mathcal{F}_0(c_q,\\nabla c_q) \\\\ \n\\boldsymbol{\\mathcal{F}}_1(c_q,\\nabla c_q)\\end{array}\\right]\n\\label{Eqn:S3_residual}\n\\end{align}\nwhere $\\boldsymbol{\\mathsf{A}}$ represents the standard assembly operator, \n$\\boldsymbol{N}$ and $\\boldsymbol{B}$ are matrix forms of basis functions that \nreduce over quadrature points, $\\boldsymbol{W}$ is a diagonal matrix of quadrature \nweights (including the geometric Jacobian determinant of the element), and $c_q$ is \nthe field value at quadrature point $q$. The Jacobian of \\eqref{Eqn:S3_residual} \nneeds only the derivatives of the point-wise functions:\n\\begin{align}\n&\\boldsymbol{J}(\\boldsymbol{c}) = \\mathop{\\huge \\boldsymbol{\\mathsf{A}}}_{e=1}^{Nele} \n\\left[\\begin{array}{lr}\\boldsymbol{N}^{\\mathrm{T}} & \\boldsymbol{B}^{\\mathrm{T}}\t\\end{array}\\right]\n\\boldsymbol{W}\\left[\\begin{array}{ll} \\mathcal{F}_{0,0} & \\mathcal{F}_{0,1} \\\\ \n\\boldsymbol{\\mathcal{F}}_{1,0}& \\boldsymbol{\\mathcal{F}}_{1,1}\\end{array}\\right]\n\\left[\\begin{array}{c} \\boldsymbol{N} \\\\ \\boldsymbol{B}\n\\end{array}\\right], \\quad \\left[\\mathcal{F}_{i,j}\\right] = \n\\left[\\begin{array}{ll}\n\\frac{\\partial\\mathcal{F}_0}{\\partial c} & \\frac{\\partial\\mathcal{F}_0}{\\partial\\nabla c} \\\\\n\\frac{\\partial\\boldsymbol{\\mathcal{F}}_1}{\\partial c} & \\frac{\\partial\\boldsymbol{\\mathcal{F}}_1}{\\partial\\nabla c}\n\\end{array}\\right](c_q,\\nabla c_q)\n\\label{Eqn:S3_jacobian}\n\\end{align}\nThe point-wise functions corresponding to the weak form in \n\\eqref{Eqn:functionals_B_L} would be:\n\\begin{algorithm}[b]\n \\begin{algorithmic}\n \\State Create\/input DAG on rank 0\n \\State Create\/input cell-wise velocity on rank 0\n \\If {size $>$ 1}\n \\State Partition mesh among all processors\n \\EndIf\n \\State Refine distributed mesh if necessary\n \\State Create PetscSection and FE discretization\n \\State Set $n=0$ and $\\boldsymbol{c}^{(0)}=10^{-8} $\n \\State Insert Dirichlet BC constraints into $\\boldsymbol{c}^{(0)} $\n \\State Compute Jacobian $\\boldsymbol{J}$\n \\While {true} \\Comment{Begin time-stepping scheme}\n\t \\State Compute Residual $\\boldsymbol{r}^{(n)}$ \n\t \\If {Classical Galerkin}\n\t \\Comment{Solve without non-negative methodology}\n\t \\State $\\boldsymbol{c}^{(n+1)} = \\boldsymbol{c}^{(n)} - \\boldsymbol{J}\\backslash\\boldsymbol{r}^{(n)}$\n\t \\Else\\Comment{Solve with non-negative methodology}\n\t \\State TaoSolve() for $\\boldsymbol{c}^{(n+1)}$ based on equations \\eqref{Eqn:S3_tao_objective} and \\eqref{Eqn:S3_tao_gradient}\n\t \\EndIf\n\t \\If {steady-state or $(n) == $ total number of time steps}\n\t \\State break\n\t \\Else\n\t \\State $n += 1$\n\t \\EndIf\n \\EndWhile\n \n \\end{algorithmic}\n \\caption{Pseudocode for the large-scale transport solver \\label{Algo:S3_pseudocode}}\n\\end{algorithm}\n\\begin{subequations}\n\\begin{align}\n&\\mathcal{F}_0 = -f(\\mathbf{x}),\\quad \\boldsymbol{\\mathcal{F}}_1 = \\mathbf{D}(\\mathbf{x})\n\\nabla c_q \\\\\n&\\mathcal{F}_{0,0} = 0, \\quad \\mathcal{F}_{0,1} = 0, \\quad \\boldsymbol{\\mathcal{F}}_{1,0} = \\mathbf{0}, \\quad \\boldsymbol{\\mathcal{F}}_{1,1} = \\mathbf{D}(\\mathbf{x})\n\\end{align}\n\\end{subequations}\nSimilarly, the 
point-wise functions for the transient response are:\n\\begin{subequations}\n\\begin{align}\n&\\mathcal{F}_0 = \\dot{c}_q-f(\\mathbf{x},t),\\quad \\boldsymbol{\\mathcal{F}}_1 = \\mathbf{D}(\\mathbf{x})\n\\nabla c_q \\\\\n&\\mathcal{F}_{0,0} = \\frac{1}{\\Delta t}, \\quad \\mathcal{F}_{0,1} = 0, \\quad \\boldsymbol{\\mathcal{F}}_{1,0} = \\mathbf{0}, \\quad \\boldsymbol{\\mathcal{F}}_{1,1} = \\mathbf{D}(\\mathbf{x})\n\\end{align}\n\\end{subequations}\nwhere $\\dot{c}_{q}$ denotes the time derivative. A similar discretization is used \nto project the Neumann boundary conditions into the residual vector. \nAssuming a fixed time-step, $[\\mathcal{F}_{i,j}]$ and the Jacobian in equation \n\\eqref{Eqn:S3_jacobian} do not change with time and have to be computed only once. \nIf $n$ denotes the time level ($n$ = 0 denotes initial condition) then \nthe residual and Jacobian can be defined as:\n\\begin{align}\n&\\boldsymbol{r}^{(n)} \\equiv \\boldsymbol{r}(\\boldsymbol{c}^{(n)})\\\\\n&\\boldsymbol{J} \\equiv \\boldsymbol{J}(\\boldsymbol{c}^{(0)})\n\\end{align}\nTo enforce the non-negative methodology, the following objective function $b$ \nand gradient function $\\boldsymbol{g}$ is provided: \n\n \\begin{align}\n \\label{Eqn:S3_tao_objective}\n &b = \\frac{1}{2}\\boldsymbol{c}^{(n+1)}\\cdot\\boldsymbol{J}\\boldsymbol{c}^{(n+1)} + \\boldsymbol{c}^{(n+1)}\\cdot\\left[\\boldsymbol{r}^{(n)}-\\boldsymbol{J}\\boldsymbol{c}^{(n)}\\right]\\\\\n \\label{Eqn:S3_tao_gradient}\n &\\boldsymbol{g} = \\boldsymbol{J}\\left[\\boldsymbol{c}^{(n+1)}-\\boldsymbol{c}^{(n)}\\right] + \\boldsymbol{r}^{(n)}\n \\end{align}\nBLMVM relies only on the above two equations, whereas TRON needs the Hessian which \nis equivalent to $\\boldsymbol{J}$. Algorithm \\ref{Algo:S3_pseudocode} outlines \nthe steps taken in our computational framework.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\textbf{INTRODUCTION}}\n\n\nCircumstellar disks play a fundamental role in the formation of \nstars and planets. \nA significant fraction of the mass of a star is thought to be \nbuilt up by accretion through the disk.\nThe gas and dust in the inner disk ($r<$10\\,AU) \nalso constitute the likely material from which planets form. \nAs a result, observations of the gaseous \ncomponent of inner disks have the potential to provide critical clues to \nthe physical processes governing star and planet formation.\n\nFrom the planet formation perspective, probing the structure,\ngas content, and dynamics of inner disks is of interest, since\nthey all play important\nroles in establishing the architectures of planetary systems\n(i.e., planetary masses, orbital radii, and eccentricities).\nFor example, the lifetime of gas in the inner disk\n(limited by accretion onto the star, photoevaporation, \nand other processes)\nplaces an upper limit on the timescale for giant planet formation\n(e.g., {\\em Zuckerman et al.}, 1995).\n\nThe evolution of gaseous inner disks may also bear \non the efficiency of \norbital migration and the eccentricity evolution of giant and terrestrial\nplanets. Significant inward orbital migration, induced\nby the interaction of planets with a gaseous disk, is \nimplied by the small orbital radii of extrasolar\ngiant planets compared to their likely formation distances\n(e.g., {\\em Ida and Lin}, 2004). 
\nThe spread in the orbital radii of the planets (0.05--5\\,AU)\nhas been further taken to indicate that the timing of the dissipation \nof the inner disk sets the final orbital radius of the planet\n({\\em Trilling et al.}, 2002).\nThus, understanding \nhow inner disks dissipate may impact \nour understanding of the origin of planetary orbital radii.\nSimilarly, residual gas in the terrestrial planet region may play \na role in defining the final masses and eccentricities of \nterrestrial planets. Such issues have a strong connection\nto the question of the likelihood of solar systems like our own.\n\nAn important issue from the perspective of both star and planet\nformation is the nature of the physical mechanism that is \nresponsible for disk accretion. \nAmong the proposed mechanisms, \nperhaps the foremost is the magnetorotational instability \n({\\em Balbus and Hawley}, 1991) although other possibilities exist. \nDespite the significant theoretical progress that has been made \nin identifying plausible accretion mechanisms \n(e.g., {\\em Stone et al.}, 2000), there is little observational \nevidence that any of these processes are active in disks. \nStudies of the gas in inner disks offer opportunities \nto probe the nature of the accretion process. \n\nFor these reasons, it is of interest to probe \nthe dynamical state, physical and chemical structure, and the \nevolution of the gas content of inner disks.\nWe begin this Chapter with a brief review of the \ndevelopment of this field and an overview of \nhow high resolution spectroscopy can be used to study the \nproperties of inner disks (Section 1). \nPrevious reviews provide additional background on these topics \n(e.g., {\\em Najita et al.}, 2000). \nIn Sections 2 and 3, we review recent observational and theoretical \ndevelopments in this field, first describing \nobservational work to date on the gas in \ninner disks, and then describing theoretical models \nfor the surface and interior regions of disks. \nIn Section 4, we look to the future, highlighting several topics \nthat can be explored using the tools discussed in Sections 2 and 3. \n\n\n\n\\bigskip\n\\noindent\n\\textbf{1.1 Historical Perspective} \n\\bigskip\n\nOne of the earliest studies of gaseous inner disks was \nthe work by Kenyon and Hartmann on FU Orionis objects. \nThey showed that many of the peculiarities of these systems \ncould be explained in terms of \nan accretion outburst in a disk surrounding \na low-mass young stellar object (YSO; cf.\\ {\\em Hartmann and Kenyon},\\ 1996).\nIn particular, the varying spectral type of FU Ori objects in \noptical to near-infrared spectra, evidence for double-peaked \nabsorption line profiles, and the decreasing widths of absorption \nlines from the optical to the near-infrared argued for an origin \nin an optically thick gaseous atmosphere in the inner region of \na rotating disk.\nAround the same time, \nobservations of CO vibrational overtone emission, first in the BN object\n({\\em Scoville et al.},\\ 1983) and later in other high and low mass objects\n({\\em Thompson}, 1985; {\\em Geballe and Persson}, 1987; \n{\\em Carr}, 1989), revealed the \nexistence of hot, dense molecular gas plausibly located in a disk.\nOne of the first models for the CO overtone emission ({\\em Carr}, 1989) placed\nthe emitting gas in an optically-thin inner region of an accretion\ndisk. \nHowever, only the observations of the BN object had sufficient \nspectral resolution to constrain the kinematics of the emitting gas. 
\n\nThe circumstances under which a disk would produce emission or \nabsorption lines of this kind \nwere explored in early models\nof the atmospheres of gaseous accretion disks under the influence\nof external irradiation (e.g., {\\em Calvet et al.},\\ 1991). The models\ninterpreted the FU Ori absorption features \nas a consequence of midplane accretion rates high enough to \noverwhelm external irradiation in establishing a temperature\nprofile that decreases with disk height. At lower accretion rates,\nthe external irradiation of the disk was expected to induce a\ntemperature inversion in the disk atmosphere, producing emission\nrather than absorption features from the disk atmosphere. \nThus the models potentially provided an explanation for the \nFU Ori absorption features and CO emission lines that had been \ndetected.\n\nBy PPIV ({\\em Najita et al.},\\ 2000), high-resolution spectroscopy \nhad demonstrated that CO overtone emission shows the dynamical signature of a\nrotating disk ({\\em Carr et al.},\\ 1993; {\\em Chandler et al.},\\ 1993), thus\nconfirming theoretical expectations and opening the door\nto the detailed study of gaseous inner disks in a larger number of\nYSOs.\nThe detection of CO fundamental emission (Section 2.3) and emission\nlines of hot H$_2$O (Section 2.2) had also added new probes of the \ninner disk gas.\n\nSeven years later, at PPV, we find both a growing\nnumber of diagnostics available to probe gaseous inner disks as\nwell as increasingly detailed information that can be gleaned from\nthese diagnostics. Disk diagnostics familiar from PPIV \nhave been used to infer the intrinsic line broadening of disk\ngas, possibly indicating evidence for turbulence in disks (Section 2.1). \nThey also demonstrate the differential rotation of disks, \nprovide evidence for non-equilibrium molecular abundances (Section 2.2), \nprobe the inner radii of gaseous disks (Section 2.3), \nand are being used to probe the gas dissipation timescale \nin the terrestrial planet region (Section 4.1).\nAlong with these developments, new spectral line diagnostics have \nbeen used as probes of the gas in inner disks. These \ninclude transitions of molecular hydrogen at UV, near-infrared, and \nmid-infrared wavelengths (Sections 2.4, 2.5) and the fundamental ro-vibrational \ntransitions of the OH molecule (Section 2.2). Additional potential \ndiagnostics are discussed in Section 2.6. \n\n\n\n\\bigskip\n\\noindent\n\\textbf{1.2 High Resolution Spectroscopy of Inner Disks}\n\\bigskip\n\nThe growing suite of diagnostics can be used to probe \ninner disks using standard high resolution spectroscopic \ntechniques. \nAlthough inner disks are typically too small to resolve \nspatially at the distance of the nearest star forming regions,\nwe can utilize the likely differential rotation \nof the disk along with high spectral resolution to separate \ndisk radii in velocity. \nAt the warm temperatures ($\\sim$100\\,K--5000\\,K) and high densities of inner\ndisks, molecules are expected to be abundant in the gas phase and\nsufficiently excited to produce rovibrational features in the\ninfrared. Complementary atomic transitions are likely to be good \nprobes of the hot inner disk and the photodissociated surface \nlayers at larger radii. 
\nBy measuring multiple transitions of different species, \nwe should therefore be able\nto probe the temperatures, column densities, and abundances\nof gaseous disks as a function of radius.\n\nWith high spectral \nresolution we can resolve individual lines, which facilitates \nthe detection of weak spectral features. \nWe can also work around \ntelluric absorption features, using the radial velocity of the \nsource to shift its spectral features out of telluric \nabsorption cores. \nThis approach makes it \npossible to study a variety of atomic and molecular \nspecies, including those present in the Earth's atmosphere.\n\n\nGaseous spectral features are expected in a variety of situations. \nAs already mentioned, significant vertical variation in the temperature \nof the disk atmosphere will produce emission (absorption) features if the \ntemperature increases (decreases) with height ({\\em Calvet et al.}, 1991; \n{\\em Malbet and Bertout}, 1991). \nIn the general case, when the disk is optically thick, observed spectral\nfeatures measure only the atmosphere of the disk and are unable to \nprobe directly the entire disk column density, a situation familiar \nfrom the study of stellar atmospheres. \n\n\nGaseous emission features are also expected \nfrom regions of the disk that are optically thin in the continuum. \nSuch regions might arise as a result of dust sublimation \n(e.g., {\\em Carr}, 1989) or as a consequence of grain growth \nand planetesimal formation. \nIn these scenarios, the disk would have a low continuum opacity \ndespite a potentially large gas column density. \nOptically thin regions can also be produced by a significant \nreduction in the total column density of the disk. This situation \nmight occur as a consequence of giant planet formation, in which \nthe orbiting giant planet carves out a ``gap'' in the disk. \nLow column densities would also be characteristic of a dissipating disk. \nThus, we should be able to use gaseous emission lines to probe \nthe properties of inner disks in a variety of interesting \nevolutionary phases. \n\n\n\n\\section{\\textbf{OBSERVATIONS OF GASEOUS INNER DISKS}}\n\n\\bigskip\n\\noindent\n\\textbf{2.1 CO Overtone Emission}\n\\bigskip\n\nThe CO molecule is expected to be \nabundant in the gas phase over a wide range of \ntemperatures, from the temperature at which it condenses on grains \n($\\sim$20\\,K) up to its thermal dissociation temperature \n($\\sim$4000\\,K at the densities of inner disks). \nAs a result, CO transitions are expected to probe \ndisks from their cool outer reaches ($>$100\\,AU) \nto their innermost radii.\nAmong these, the overtone transitions of CO ($\\Delta v$=2, \n$\\lambda$=2.3$\\mu$m) were the emission line diagnostics first \nrecognized to probe the gaseous inner disk.\n\nCO overtone emission is detected in both low and high mass young\nstellar objects, but only in a small fraction of the objects observed.\nIt appears more commonly among higher luminosity objects. 
Among \nthe lower luminosity stars, it is detected from embedded protostars \nor sources with energetic outflows\n({\\em Geballe and Persson}, 1987; {\\em Carr}, 1989; \n{\\em Greene and Lada}, 1996; {\\em Hanson et al.}, 1997;\n{\\em Luhman et al.}, 1998; {\\em Ishii et al.}, 2001; \n{\\em Figueredo et al.}, 2002; {\\em Doppmann et al.}, 2005).\nThe conditions required to excite the overtone emission, \nwarm temperatures ($\\gtrsim 2000$ K) and high densities \n($>$$10^{10}\\rm \\,cm^{-3}$), may be met in disks\n({\\em Scoville et al.}, 1983; {\\em Carr}, 1989; \n{\\em Calvet et al.}, 1991), inner winds ({\\em Carr}, 1989), \nor funnel flows ({\\em Martin}, 1997). \n\nHigh resolution spectroscopy can be used to distinguish among these \npossibilities. The observations typically find strong \nevidence for the disk interpretation. The emission line profiles \nof the $v$=2--0 bandhead in most cases \nshow the characteristic signature of bandhead emission from\nsymmetric, double-peaked line profiles originating in \na rotating disk (e.g., {\\em Carr et al.}, 1993; {\\em Chandler et al.}, 1993; \n{\\em Najita et al.}, 1996; {\\em Blum et al.}, 2004). The symmetry of the observed \nline profiles argues against the likelihood that the emission arises \nin a wind or funnel flow, since inflowing or outflowing gas is \nexpected to produce line profiles with red- or blue-shifted \nabsorption components (alternatively line asymmetries) of the \nkind that are seen in the hydrogen Balmer lines of T Tauri stars \n(TTS).\nThus high resolution spectra provide strong evidence for \nrotating inner disks. \n\nThe velocity profiles of the CO overtone emission are normally\nvery broad ($>$100\\,${\\rm km}\\,{\\rm s}^{-1}$). In lower mass stars \n($\\sim$$1 M_\\odot$), the emission profiles show that the emission \nextends from very close to the star,\n$\\sim$0.05\\,AU, out to $\\sim$0.3\\,AU\n(e.g., {\\em Chandler et al.}, 1993; {\\em Najita et al.}, 2000).\nThe small radii are consistent with the high excitation\ntemperatures measured for the emission ($\\sim$1500--4000\\,K).\nVelocity resolved spectra have also been modeled in a number\nof high mass stars ({\\em Blum et al.}, 2004; {\\em Bik and Thi}, 2004), \nwhere the CO emission is found to arise at radii $\\sim 3$\\,AU.\n\nThe large near-infrared excesses of the sources in which CO \novertone emission is detected imply that the warm emitting gas \nis located in a vertical temperature inversion region in the disk atmosphere. \nPossible heating sources for the temperature inversion include: \nexternal irradiation by the star at optical through UV wavelengths \n(e.g., {\\em Calvet et al.}, 1991; {\\em D'Alessio et al.}, 1998) \nor by stellar X-rays ({\\em Glassgold et al.}, 2004; henceforth GNI04); \nturbulent heating in the disk atmosphere generated by a \nstellar wind flowing over the disk surface ({\\em Carr et al.}, 1993); \nor the dissipation of turbulence generated by disk accretion \n(GNI04). \nDetailed predictions of how these mechanisms heat the gaseous \natmosphere are needed in order to \nuse the observed bandhead emission strengths and profiles to \ninvestigate the origin of the temperature inversion. \n\nThe overtone emission provides an additional clue \nthat suggests a role for turbulent dissipation \nin heating disk atmospheres. 
\nSince the CO overtone bandhead is made up of closely spaced lines \nwith varying inter-line spacing and optical depth, the emission \nis sensitive to the intrinsic line broadening of the emitting \ngas (as long as the gas is not optically thin). \nIt is therefore possible to distinguish intrinsic line \nbroadening from macroscopic motions such as rotation. \nIn this way, \none can deduce from spectral synthesis modeling \nthat the lines are suprathermally broadened, with line widths \napproximately Mach 2 ({\\em Carr et al.}, 2004; {\\em Najita et al.}, 1996). \n{\\em Hartmann et al.}\\ (2004) find further evidence for \nturbulent motions in disks based on high resolution \nspectroscopy of CO overtone absorption in FU Ori objects.\n\nThus disk atmospheres appear to be turbulent. The turbulence \nmay arise as a consequence of turbulent angular momentum \ntransport in disks, as in the magnetorotational instability \n(MRI; {\\em Balbus and Hawley}, 1991) or the global baroclinic instability \n({\\em Klahr and Bodenheimer}, 2003). Turbulence in the upper disk \natmosphere may also be generated by a wind blowing over the \ndisk surface. \n\n\n\\bigskip\n\\noindent\n\\textbf{2.2 Hot Water and OH Fundamental Emission}\n\\bigskip\n\nWater molecules are also expected to be abundant in disks over \na range of disk radii, from the temperature at which water \ncondenses on grains ($\\sim$150 K) up to its thermal dissociation \ntemperature ($\\sim$2500 K). Like the CO overtone \ntransitions, the rovibrational transitions of water are also \nexpected to probe the high density conditions in disks. \nWhile the strong telluric absorption produced by water vapor in the\nEarth's atmosphere will restrict the study of cool water to\nspace or airborne platforms, it is possible to observe from the\nground water that is much hotter than the Earth's atmosphere.\nVery strong emission from hot water can be detected in the near-infrared \neven at low spectral resolution (e.g., SVS-13; {\\em Carr et al.}, 2004). \nMore typically, high resolution spectroscopy of\nindividual lines is required to detect much weaker emission lines.\n\nFor example, emission from individual lines of water in the\n$K$- and $L$-bands have been detected in a few stars (both low \nand high mass) that also show CO overtone emission \n({\\em Carr et al.}, 2004; {\\em Najita et al.}, 2000; {\\em Thi and Bik}, 2005).\nVelocity resolved spectra show that the widths of the\nwater lines are consistently narrower than those of the \nCO emission lines. Spectral synthesis modeling further shows \nthat the excitation temperature of the water emission\n(typically $\\sim$1500\\,K),\nis less than that of the CO emission. These results\nare consistent with both the water and CO originating in a\ndifferentially rotating\ndisk with an outwardly decreasing temperature profile. That is,\ngiven the lower dissociation temperature of water ($\\sim$2500\\,K)\ncompared to CO ($\\sim$4000\\,K),\nCO is expected to extend inward to smaller radii than \nwater, i.e., to higher velocities and temperatures. \n\nThe $\\Delta v$=1 OH fundamental transitions \nat 3.6$\\mu$m have also been detected in the spectra of two actively \naccreting sources, SVS-13 and V1331~Cyg, that also show CO \novertone and hot water emission (Carr et al., in preparation). \nAs shown in Fig.~1, these \nfeatures arise in a region that is crowded with spectral \nlines of water and perhaps other species. 
Determining the strengths of \nthe OH lines will, therefore, require making corrections for \nspectral features that overlap closely in wavelength. \n\nSpectral synthesis modeling of the detected CO, H$_2$O and OH features \nreveals relative abundances \nthat depart significantly from chemical equilibrium \n(cf.\\ {\\em Prinn}, 1993), with the \nrelative abundances of H$_2$O and OH a factor of 2--10 below that of CO \nin the region of the disk probed by both diagnostics \n({\\em Carr et al.}, 2004; Carr et al., in preparation; \nsee also {\\em Thi and Bik}, 2005).\nThese abundance ratios may arise from strong vertical abundance \ngradients produced by the external irradiation of the disk \n(see Section 3.4). \n\n\\begin{figure}[b!]\n\\plotfiddle{najita_fig1.eps}{0.05in}{270}{180}{250}{-8}{0}\n\\vspace{-.7in}\n\\caption{\\small\\baselineskip 10pt\nOH fundamental ro-vibrational emission from SVS-13 \non a relative flux scale.}\n\\end{figure}\n\n\n\\bigskip\n\\noindent\n\\textbf{2.3 CO Fundamental Emission}\n\\bigskip\n\nThe fundamental ($\\Delta v$=1) transitions of CO at 4.6$\\mu$m \nare an important probe of inner disk gas in part because of \ntheir broader applicability compared, e.g., to the CO overtone lines. \nAs a result of their comparatively small A-values, \nthe CO overtone transitions require large column densities of warm gas \n(typically in a disk temperature inversion region) in order to produce \ndetectable emission. \nSuch large column densities of warm gas may be rare \nexcept in sources with the largest accretion rates, i.e., \nthose best able to tap a large accretion energy budget and \nheat a large column density of the disk atmosphere. \nIn contrast, the CO fundamental transitions, with their much larger \nA-values, should be detectable \nin systems with more modest column densities of warm gas, \ni.e., in a broader range of sources. \nThis is borne out in high resolution spectroscopic surveys for CO \nfundamental emission from TTS ({\\em Najita et al.}, 2003) \nand Herbig AeBe stars ({\\em Blake and Boogert}, 2004) \nwhich detect emission from \nessentially all sources with accretion rates typical of these \nclasses of objects. \n\n\n\\begin{figure}[t!]\n\\vspace{-1.6in}\n\\plotfiddle{najita_fig2.eps}{1.50in}{0.}{240}{240}{5}{150}\n\\vspace{-0.3in}\n\\caption{\\small\\baselineskip 10pt\nGaseous inner disk radii for TTS from CO fundamental emission \n(filled squares) compared with corotation radii for the same sources. \nAlso shown are dust inner radii from near-infrared interferometry \n(filled circles; {\\em Akeson et al.}, 2005a,b) or spectral energy \ndistributions \n(open circles; {\\em Muzerolle et al.}, 2003). \nThe solid and dashed lines indicate an inner radius equal to, \ntwice, and 1\/2 the corotation radius.\nThe points for the three stars with measured inner radii\nfor both the gas and dust are connected by dotted lines.\nGas is observed to extend inward of the dust inner radius \nand typically inward of the corotation radius.\n} \n\\end{figure}\n\nIn addition, the lower temperatures\nrequired to excite the CO $v$=1--0 transitions make these transitions \nsensitive to cooler gas at larger disk radii, beyond the \nregion probed by the \nCO overtone lines. Indeed, the measured line profiles for the \nCO fundamental emission are broad (typically 50--100\\,${\\rm km}\\,{\\rm s}^{-1}$ FWHM) and \ncentrally peaked, in contrast to the CO overtone lines which are \ntypically double-peaked. 
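For gas in Keplerian rotation about a star of mass $M_*$ viewed at inclination $i$, an observed line-of-sight velocity $v_{\rm obs}$ corresponds to a disk radius $r = GM_*\sin^2 i\/v_{\rm obs}^2$. As an illustrative example (with representative rather than measured numbers), a line wing extending to a deprojected velocity $v_{\rm obs}\/\sin i \simeq 105\,{\rm km}\,{\rm s}^{-1}$ around a $0.5\,M_\odot$ star corresponds to $r \simeq 0.04$\,AU, while deprojected velocities of $\sim$20\,${\rm km}\,{\rm s}^{-1}$ probe radii near 1\,AU.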
These velocity profiles suggest \nthat the CO fundamental emission arises from a wide range of radii, \nfrom $\\lesssim$0.1\\,AU out to 1--2\\,AU in disks \naround low mass stars, i.e., the terrestrial planet region of \nthe disk ({\\em Najita et al.}, 2003). \n\nCO fundamental emission spectra typically show symmetric emission \nlines from multiple vibrational states (e.g., $v$=1--0, 2--1, 3--2); \nlines of $^{13}$CO can also be detected when the emission is strong \nand optically thick. \nThe ability to study multiple vibrational states as well as \nisotopic species within a limited spectral range makes the CO \nfundamental lines an appealing choice to probe gas in the inner disk \nover a range of temperatures and column densities. \nThe relative strengths of the lines \nalso provide insight into the excitation mechanism for the emission.\n\nIn one source, the Herbig AeBe star HD141569, the excitation\ntemperature of the rotational levels ($\\sim$200\\,K) is much lower\nthan the excitation temperature of the vibrational levels ($v$=6\nis populated), which is suggestive of UV pumping of cold gas \n({\\em Brittain et al.}, 2003). \nThe emission lines from the source are narrow, indicating\nan origin at $\\gtrsim$17\\,AU. The lack of fluorescent emission\nfrom smaller radii strongly suggests that the region within 17\\,AU \nis depleted of gaseous CO. \nThus detailed models of the fluorescence process can be used\nto constrain the gas content in the inner disk region \n(S. Brittain, personal communication).\n\nThus far HD141569 appears to be an unusual case. For the majority \nof sources from which CO fundamental is detected, the relative line \nstrengths are consistent with emission from thermally excited\ngas. They indicate typical excitation temperatures of \n1000--1500\\,K and CO column densities of $\\sim$$10^{18}\\rm \\,cm^{-2}$ \nfor low mass stars. \nThese temperatures are much warmer than the dust temperatures at\nthe same radii implied by spectral energy distributions (SEDs) \nand the expectations of some disk atmosphere models\n(e.g., {\\em D'Alessio et al.}, 1998). \nThe temperature difference can be \naccounted for by \ndisk atmosphere models \nthat allow for the thermal decoupling of the\ngas and dust (Section 3.2).\n\n\nFor CTTS systems in which the inclination is known, we can convert \na measured HWZI velocity for the emission to an inner radius. The CO inner \nradii, thus derived, are typically $\\sim$0.04\\,AU for TTS \n({\\em Najita et al.}, 2003; {\\em Carr et al.}, in preparation), \nsmaller than the inner radii that are measured for the \ndust component either through interferometry \n(e.g., {\\em Eisner et al.}, 2005; {\\em Akeson et al.}, 2005a; \n{\\em Colavita et al.}, 2003; see chapter by {\\em Millan-Gabet et al.})\\ or \nthrough the interpretation \nof SEDs (e.g., {\\em Muzerolle et al.}, 2003).\nThis shows that gaseous disks extend inward \nto smaller radii than dust disks, a result that is not \nsurprising given the relatively low sublimation temperature \nof dust grains ($\\sim$1500--2000 K) compared to the \nCO dissociation temperature ($\\sim$4000 K). 
\nThese results are consistent with the suggestion that \nthe inner radius of the dust disk is defined by \ndust sublimation rather than by physical truncation\n({\\em Muzerolle et al.}, 2003; {\\em Eisner et al.}, 2005).\n\nPerhaps more interestingly, the inner radius of the CO emission \nappears to extend up to and usually within the corotation \nradius (i.e., the radius at which the disk rotates at the same \nangular velocity as the star; Fig.~2). In the current paradigm for \nTTS, a strong stellar magnetic field \ntruncates the disk near the corotation radius. \nThe coupling between the stellar magnetic field and the gaseous inner disk \nregulates the rotation of the star, bringing \nthe star into corotation with the disk at the coupling radius. \nFrom this region emerge both energetic (X-)winds and magnetospheric \naccretion flows (funnel flows; {\\em Shu et al.}, 1994). \nThe velocity extent of the CO fundamental emission shows that gaseous\ncircumstellar disks indeed extend inward beyond the dust destruction\nradius to the corotation radius (and beyond), \nproviding the material that feeds both X-winds and funnel flows.\nSuch small coupling radii are consistent with the rotational rates \nof young stars. \n\n\n\\begin{figure}[t!]\n\\vspace{-0.1in}\n\\plotfiddle{najita_fig3.eps}{0.05in}{0.}{180}{180}{20}{0}\n\\vspace{-0.1in}\n\\caption{\\small\\baselineskip 10pt\nThe distribution of gaseous inner radii, measured with \nthe CO fundamental transitions, compared to the \ndistribution of orbital radii of short-period\nextrasolar planets.\nA minimum planetary orbital radius of $\\sim$0.04\\,AU \nis similar to the minimum gaseous inner radius\ninferred from the CO emission line profiles.\n} \n\\end{figure}\n\n\n\nIt is also interesting to compare the distribution of inner\nradii for the CO emission with the orbital radii of\nthe ``close-in'' extrasolar giant planets (Fig.\\ 3).\nExtra-solar planets discovered by radial velocity surveys\nare known to pile-up near a minimum radius of 0.04 AU.\nThe similarity between these distributions is roughly consistent \nwith the idea that the truncation of the inner disk \ncan halt the inward orbital migration of a giant \nplanet ({\\em Lin et al.}, 1996). In detail, however, the planet \nis expected to migrate slightly inward \nof the truncation radius, to the 2:1 resonance, \nan effect that is not seen in the present \ndata. A possible caveat is that the wings of the CO lines may \nnot trace Keplerian motion or that the innermost gas is not \ndynamically significant. It would be interesting to explore \nthis issue further since the results impact our understanding \nof planet formation and the origin of planetary architectures. \nIn particular, the existence of a stopping mechanism implies \na lower efficiency for giant planet formation, e.g., compared \nto a scenario in which multiple generations of planets form and \nonly the last generation survives \n(e.g., {\\em Trilling et al.}, 2002). \n\n\n\n\\bigskip\n\\noindent\n\\textbf{2.4 UV Transitions of Molecular Hydrogen}\n\\bigskip\n\nAmong the diagnostics of inner disk gas \ndeveloped since PPIV, perhaps the most interesting \nare those of \nH$_2$.\nH$_2$\\ is presumably the dominant gaseous species in disks, due\nto high elemental abundance, low depletion onto grains, and robustness\nagainst dissociation. Despite its expected ubiquity, H$_2$\\ is difficult\nto detect because permitted electronic transitions are in the far\nultraviolet (FUV) and accessible only from space. 
Optical and \nrovibrational IR\ntransitions have radiative rates that are 14 orders of magnitude\nsmaller.\n\n\n\\begin{figure}[t!]\n\\plotfiddle{najita_fig4.eps}{0.05in}{0.}{230}{230}{2}{0}\n\\vspace{-0.1in}\n\\caption{\\small\\baselineskip 10pt\nLy\\,$\\alpha$\\ emission from TW Hya, an accreting T Tauri star,\nand a reconstruction of the Ly\\,$\\alpha$\\ profile seen by the circumstellar H$_2$.\nEach observed H$_2$\\ progression (with a common excited state) yields a\nsingle point in the reconstructed Ly\\,$\\alpha$\\ profile. The wavelength of\neach point in the reconstructed Ly\\,$\\alpha$\\ profile corresponds to the\nwavelength of the upward transition that pumps the progression.\nThe required excitation energies for the H$_2$\\ \\textit{before} the\npumping is indicated in the inset energy level diagram. There are\nno low excitation states of H$_2$\\ with strong transitions that overlap\nLy\\,$\\alpha$. Thus, the H$_2$\\ must be very warm to be preconditioned for\npumping and subsequent fluorescence.} \n\\end{figure}\n\n\n\nConsidering only radiative transitions with spontaneous rates above\n10$^7$ s$^{-1}$, H$_2$\\ has about 9000 possible Lyman-band (B-X)\ntransitions from 850-1650 \\AA\\ and about 5000 possible Werner-band\n(C-X) transitions from 850-1300 \\AA\\ ({\\em Abgrall et al.}, 1993a,b). However,\nonly about 200 FUV transitions have actually been detected in spectra\nof accreting TTS.\nDetected H$_2$\\ emission lines in the FUV all originate from about two\ndozen radiatively pumped states, each more than 11 eV above ground.\nThese pumped states of H$_2$\\ are the only ones connected to the ground\nelectronic configuration by strong radiative transitions that overlap\nthe broad Ly\\,$\\alpha$\\ emission that is characteristic of accreting TTS (see\nFig.\\ 4). Evidently, absorption of broad Ly\\,$\\alpha$\\ emission pumps the H$_2$\\\nfluorescence.\nThe two dozen strong H$_2$\\ transitions that happen to overlap the broad\nLy\\,$\\alpha$\\ emission are all pumped out of high $v$ and\/or high $J$ states at\nleast 1 eV above ground (see inset in Fig.~4). This means some\nmechanism must excite H$_2$\\ in the ground electronic configuration,\nbefore Ly\\,$\\alpha$\\ pumping can be effective. If the excitation mechanism is\nthermal, then the gas must be roughly 10$^3$ K to obtain a significant\nH$_2$\\ population in excited states. \n\n\nH$_2$\\ emission is a ubiquitous feature of accreting TTS. Fluoresced H$_2$\\\nis detected in the spectra of 22 out of 24 accreting TTS observed in the\nFUV by HST\/STIS ({\\em Herczeg et al.}, 2002; {\\em Walter et al.}, 2003; \n{\\em Calvet et al.}, 2004; {\\em Bergin et al.}, 2004; \n{\\em Gizis et al.}, 2005; {\\em Herczeg et al.}, 2005; \nunpublished archival data). Similarly, H$_2$\\ is detected in\nall 8 accreting TTS observed by HST\/GHRS ({\\em Ardila et al.}, 2002) and all\n4 published FUSE spectra ({\\em Wilkinson et al.}, 2002; \n{\\em Herczeg et al.}, 2002; 2004; 2005). \nFluoresced H$_2$\\ was even detected in 13 out of 39\naccreting TTS observed by IUE, despite poor sensitivity \n({\\em Brown et al.}, 1981; {\\em Valenti et al.,} 2000).\nFluoresced H$_2$\\ has not been detected in FUV spectra of \nnon-accreting TTS, despite observations of 14 stars with STIS \n({\\em Calvet et al.}, 2004; unpublished archival data), \n1 star with GHRS ({\\em Ardila et al.}, 2002), and 19 stars with IUE \n({\\em Valenti et al.}, 2000). 
However,\nthe existing observations are not sensitive enough to prove that the\ncircumstellar H$_2$\\ column decreases contemporaneously with the\ndust continuum of the inner disk.\nWhen accretion onto the stellar surface stops, fluorescent pumping\nbecomes less efficient because the strength and breadth of Ly\\,$\\alpha$\\\ndecreases significantly and the H$_2$\\ excitation needed to prime the\npumping mechanism may become less efficient. COS, if installed on\nHST, will have the sensitivity to set interesting limits on H$_2$\\\naround non-accreting TTS in the TW Hya association.\n\n\nThe intrinsic Ly\\,$\\alpha$\\ profile of a TTS is not observable at Earth, except\npossibly in the far wings, due to absorption by neutral hydrogen along\nthe line of sight. However, observations of H$_2$\\ line fluxes constrain\nthe Ly\\,$\\alpha$\\ profile seen by the fluoresced H$_2$. The rate at which a\nparticular H$_2$\\ upward transition absorbs Ly\\,$\\alpha$\\ photons is equal to the\ntotal rate of observed downward transitions out of the pumped state,\ncorrected for missing lines, dissociation losses, and propagation\nlosses. If the total number of excited H$_2$\\ molecules before pumping is\nknown (e.g., by assuming a temperature), then the inferred\npumping rate yields a Ly\\,$\\alpha$\\ flux point at the wavelength of each\npumping transition (Fig.\\ 4).\n\n\n{\\em Herczeg et al.}\\ (2004) applied this type of analysis to TW Hya,\ntreating the circumstellar H$_2$\\ as an isothermal, self-absorbing slab.\nFig.~4 shows reconstructed Ly\\,$\\alpha$\\ flux points for the upward pumping\ntransitions, assuming the fluoresced H$_2$\\ is at 2500 K. The smoothness\nof the reconstructed Ly\\,$\\alpha$\\ flux points implies that the H$_2$\\ level\npopulations are consistent with thermal excitation. Assuming an H$_2$\\\ntemperature warmer or cooler by a few hundred degrees leads to\nunrealistic discontinuities in the reconstructed Ly\\,$\\alpha$\\ flux points.\nThe reconstructed Ly\\,$\\alpha$\\ profile has a narrow absorption component\nthat is blueshifted by $-90~ {\\rm km}\\,{\\rm s}^{-1}$, presumably due to an intervening\nflow.\n\n\nThe spatial morphology of fluoresced H$_2$\\ around TTS is diverse.\n{\\em Herczeg et al.} (2002) used STIS to observe TW Hya with 50 mas\nangular resolution, corresponding to a spatial resolution of 2.8 AU at\na distance of 56 pc, finding no evidence that the fluoresced H$_2$\\ is\nextended. At the other extreme, {\\em Walter et al.} (2003) detected\nfluoresced H$_2$\\ up to 9 arcsec from T Tau N, but only in progressions\npumped by H$_2$\\ transitions near the core of Ly\\,$\\alpha$.\nFluoresced H$_2$\\ lines have a main velocity component at or near the\nstellar radial velocity and perhaps a weaker component that is\nblueshifted by tens of ${\\rm km}\\,{\\rm s}^{-1}$ ({\\em Herczeg et al.}, 2006). These two\ncomponents are attributed to the disk and the outflow, respectively.\nTW Hya has H$_2$\\ lines with no net velocity shift, consistent with\nformation in the face-on disk ({\\em Herczeg et al.}, 2002). On the other\nhand, RU Lup has H$_2$\\ lines that are blueshifted by $12~ {\\rm km}\\,{\\rm s}^{-1}$, suggesting\nformation in an outflow. 
In both of these stars, absorption in\nthe blue wing of the \\ion{C}{2} 1335 \\AA\\ wind feature strongly\nattenuates H$_2$\\ lines that happen to overlap in wavelength, so in\neither case H$_2$\\ forms inside the warm atomic wind \n({\\em Herczeg et al.}, 2002; 2005).\n\n\nThe velocity widths of fluoresced H$_2$\\ lines (after removing\ninstrumental broadening) range from $18~ {\\rm km}\\,{\\rm s}^{-1}$ to $28~ {\\rm km}\\,{\\rm s}^{-1}$ for the 7\naccreting TTS observed at high spectral resolution with STIS \n({\\em Herczeg et al.}, 2006). \nLine width does not correlate well with inclination.\nFor example, TW Hya (nearly face-on disk) and DF Tau (nearly edge-on\ndisk) both have line widths of $18~ {\\rm km}\\,{\\rm s}^{-1}$. Thermal broadening is\nnegligible, even at 2000 K. Keplerian motion, enforced corotation, and\noutflow may all contribute to H$_2$\\ line width in different systems.\nMore data are needed to understand how velocity widths (and shifts)\ndepend on disk inclination, accretion rate, and other factors.\n\n\n\n\\bigskip\n\\noindent\n\\textbf{2.5 Infrared Transitions of Molecular Hydrogen} \n\\bigskip\n\nTransitions of molecular hydrogen have also been studied at longer\nwavelengths, in the near- and mid-infrared. \nThe $v$=1--0 S(1) transition of H$_2$ (at $2\\mu$m) has been detected in\nemission in a small sample of classical T Tauri stars (CTTS) and one weak\nT Tauri star (WTTS; {\\em Bary et al.}, 2003 and references therein). \nThe narrow emission lines ($\\lesssim$$10{\\rm km}\\,{\\rm s}^{-1}$), if arising in a disk, \nindicate an origin at\nlarge radii, probably beyond 10\\,AU. The high temperatures\nrequired to excite these transitions thermally (1000s\\,K), in contrast\nto the low temperatures expected for the outer disk, suggest that\nthe emission is non-thermally excited, possibly by X-rays ({\\em Bary et\nal.}, 2003). \nThe measurement of other rovibrational transitions of H$_2$ is needed\nto confirm this.\n\nThe gas mass detectable by this technique depends\non the depth to which the exciting radiation can penetrate the disk.\nThus, the emission strength may be limited\neither by the strength of the radiation field, if the gas column\ndensity is high, or by the mass of gas present, if the gas column\ndensity is low.\nWhile it is therefore difficult to measure total gas masses with \nthis approach, clearly non-thermal processes can light \nup cold gas, making it easier to detect. \n\nEmission from a WTTS is surprising\nsince WTTS are thought to be largely devoid of circumstellar \ndust and gas, given \nthe lack of infrared excesses and the low accretion rates for these\nsystems. The Bary et al.\\ results call this\nassumption into question and suggest that longer lived gaseous\nreservoirs may be present in systems with low accretion rates. We\nreturn to this issue in Section 4.1.\n\nAt longer wavelengths, the pure rotational transitions of H$_2$ \nare of considerable interest because molecular hydrogen\ncarries most of the mass of the disk, and these mid-infrared\ntransitions are capable of probing the $\\sim$100 K temperatures\nthat are expected for the giant planet region of the disk.\nThese transitions present both advantages\nand challenges as probes of gaseous disks.\nOn the one hand, their small A-values make them sensitive, in\nprinciple, to very large gas masses (i.e., the transitions do not\nbecome optically thick until large gas column densities \n$N_H$=$10^{23}-10^{24}\\rm \\,cm^{-2}$ \nare reached). 
On the other hand, the small A-values also \nimply \nsmall critical densities, which allows the\npossibility of contaminating emission from gas at lower densities\nnot associated with the disk,\nincluding shocks in outflows and UV excitation of ambient gas.\n\nIn considering the detectability of H$_2$ emission from gaseous \ndisks mixed with dust, one issue is that the dust \ncontinuum can become optically thick over column densities \n$N_H \\ll 10^{23}-10^{24}\\rm \\,cm^{-2}$. \nTherefore, in a disk that is optically thick in the continuum \n(i.e., in CTTS), H$_2$ emission may probe smaller \ncolumn densities. \nIn this case, the line-to-continuum contrast may be low unless there \nis a strong temperature inversion in the disk atmosphere, and \nhigh signal-to-noise observations may be required to detect the \nemission. \nIn comparison, in disk systems that are optically thin \nin the continuum (e.g., WTTS), H$_2$ could \nbe a powerful probe as long as there are sufficient heating \nmechanisms (e.g., beyond gas-grain coupling) to heat the H$_2$.\n\nA thought-provoking result from ISO was the report of approximately\nJupiter-masses of warm gas residing in $\\sim$20\\,Myr old debris\ndisk systems ({\\em Thi et al.}, 2001) based on the detection of the \n28~$\\mu$m and 17~$\\mu$m lines of H$_2$. This result was surprising \nbecause of the advanced age of the sources in which the emission \nwas detected; gaseous reservoirs are expected to dissipate on \nmuch shorter timescales (Section 4.1). \nThis intriguing result is, thus far, unconfirmed by either \nground-based studies ({\\em Richter et al.}, 2002; \n{\\em Sheret et al.}, 2003; \n{\\em Sako et al.}, 2005) or studies with Spitzer \n(e.g., {\\em Chen et al.}, 2004). \n\n\nNevertheless, ground-based studies have detected pure rotational \nH$_2$ emission from some sources. Detections to date include AB Aur \n({\\em Richter et al.}, 2002). \nThe narrow width of the emission in AB Aur ($\\sim$$10~{\\rm km}\\,{\\rm s}^{-1}$ FWHM), \nif arising in a disk, locates the emission \nbeyond the giant planet region. \nThus, an important future direction for these studies is to search \nfor H$_2$ emission in a larger number of sources and \nat higher velocities, in the giant planet region of the disk. \nHigh resolution mid-IR spectrographs on $>$3-m telescopes \nwill provide the greater sensitivity needed for such studies.\n\n\n\\bigskip\n\\noindent\n\\textbf{2.6 Potential Disk Diagnostics} \n\\bigskip\n\nIn a recent development, {\\em Acke et al.}\\ (2005) have reported\nhigh resolution spectroscopy of the [OI]\\,6300\\,\\AA\\ line in \nHerbig AeBe stars. The majority of the sources show a\nnarrow ($<$$50~{\\rm km}\\,{\\rm s}^{-1}$ FWHM), fairly symmetric emission component\ncentered at the radial velocity of the star. In some cases,\ndouble-peaked lines are detected. These features are interpreted\nas arising in a surface layer of the disk that is irradiated by the star. \nUV photons incident on the disk surface are thought to photodissociate \nOH and H$_2$O, producing a non-thermal population of excited \nneutral oxygen that decays radiatively, producing the observed emission lines. \nFractional OH abundances of $\\sim$$10^{-7}-10^{-6}$ are needed \nto account for the observed line luminosities.\n\nAnother recent development is the report of strong absorption in the \nrovibrational bands of C$_2$H$_2$, HCN, and CO$_2$ in the 13--15~$\\mu$m \nspectrum of a low-mass class I source in Ophiuchus, IRS\\,46 \n({\\em Lahuis et al.}, 2006). 
The high excitation temperature of \nthe absorbing gas (400-900\\,K) suggests an origin close\nto the star, an interpretation that is consistent with millimeter \nobservations of HCN which indicate a source size $\\ll$100\\,AU.\nSurprisingly, high dispersion observations of rovibrational CO\n(4.7~$\\mu$m) and HCN (3.0~$\\mu$m) show that the molecular absorption \nis {\\it blueshifted} relative to the molecular cloud. If IRS\\,46\nis similarly blueshifted relative to the cloud, the absorption may\narise in the atmosphere of a nearly edge-on disk. A disk origin \nfor the absorption is consistent with the observed relative abundances \nof C$_2$H$_2$, HCN, and CO$_2$ ($10^{-6}$--$10^{-5}$), which are\nclose to those predicted by {\\em Markwick et al.}\\ (2002) for\nthe inner region of gaseous disks ($\\lesssim$2\\,AU; see Section 3).\nAlternatively, if IRS\\,46 has approximately the same velocity as\nthe cloud, then the absorbing gas is blueshifted with respect to the star \nand the absorption may arise in an outflowing wind. \nWinds launched from the disk, at AU distances, may have \nmolecular abundances similar to those observed if the chemical \nproperties of the wind change slowly as the wind is launched. \nDetailed calculations of the chemistry of disk winds are needed to \nexplore this possibility. The molecular abundances in the \ninner disk midplane (Section 3.3) provide the initial conditions for such \nstudies. \n\n\n\n\n\\section{\\textbf{THERMAL-CHEMICAL MODELING}}\n\n\\bigskip\n\\noindent\n\\textbf{3.1 General Discussion}\n\\bigskip\n\nThe results discussed in the previous section illustrate the growing\npotential for observations to probe gaseous inner disks. While,\nas already indicated, some conclusions can be drawn directly from\nthe data coupled with simple spectral synthesis modeling, harnessing\nthe full diagnostic potential of the observations will likely rely\non detailed models of the thermal-chemical structure \n(and dynamics) of disks. Fortunately, the development of \nsuch \nmodels has been an active area of recent research.\nAlthough much of the effort has been devoted to understanding the\nouter regions of disks ($\\sim$100\\,AU; e.g., {\\em Langer et al.}, 2000;\nchapters by {\\em Bergin et al.}\\ and {\\em Dullemond et al.}), recent \nwork has begun to focus on the region within 10\\,AU.\n\nBecause disks are intrinsically complex structures, the models\ninclude a wide array of processes. These encompass heating sources\nsuch as stellar irradiation (including far UV and X-rays) and viscous\naccretion; chemical processes such as photochemistry and grain\nsurface reactions; and mass transport via magnetocentrifugal winds,\nsurface evaporative flows, turbulent mixing, and accretion onto the\nstar. The basic goal of the models is to calculate the density,\ntemperature, and chemical abundance structures that result from\nthese processes. Ideally, the calculation would be fully\nself-consistent, although approximations are made to simplify the\nproblem.\n\nA common simplification is to adopt a specified density distribution \nand then solve the rate equations that define the chemical model. 
\nThis is justifiable where the thermal and chemical\ntimescales are short compared to the dynamical timescale.\nA popular choice is the $\\alpha$-disk model ({\\em Shakura and Sunyaev}, 1973; \n{\\em Lynden-Bell and Pringle}, 1974) in which a \nphenomenological parameter $\\alpha$ characterizes the efficiency of \nangular momentum transport; its vertically averaged \nvalue is estimated to be $\\sim$$10^{-2}$ for \nT Tauri disks on the basis of\nmeasured accretion rates ({\\em Hartmann et al.}, 1998). \nBoth vertically isothermal $\\alpha$-disk models \nand the Hayashi minimum mass solar nebula \n(e.g., {\\em Aikawa et al.}, 1999) \nwere adopted in early studies. \n\nAn improved method removes the assumption of vertical \nisothermality and calculates the vertical thermal structure \nof the disk \nincluding viscous accretion heating at the midplane \n(specified by $\\alpha$) \nand stellar radiative heating\nunder the assumption that the gas and dust temperatures are the\nsame ({\\em Calvet et al.}, 1991; {\\em D'Alessio et al.}, 1999).\nSeveral chemical models have been built using the D'Alessio density\ndistribution (e.g., {\\em Aikawa and Herbst}, 1999; GNI04; \n{\\em Jonkheid et al.}, 2004).\n\nStarting about\n2001, theoretical models showed that the gas temperature can become\nmuch larger than the dust temperature in the \natmospheres of outer \n({\\em Kamp and van Zadelhoff}, 2001) \nand inner ({\\em Glassgold and Najita}, 2001) disks.\nThis suggested the need to treat the gas and dust as two independent\nbut coupled thermodynamic systems. \nAs an example of this approach, {\\em Gorti and Hollenbach} (2004) \nhave iteratively solved a system of chemical rate equations \nalong with the equations of hydrostatic equilibrium\nand thermal balance for both the gas and the dust. \n\nThe chemical models developed so far \nare characterized by diversity as well as uncertainty. \nThere is diversity in \nthe adopted density distribution and external radiation field \n(UV, X-rays, and cosmic rays; the relative importance of these \ndepends on the evolutionary stage) and in \nthe thermal and chemical processes considered.\nThe relevant heating processes are less well understood than \nline cooling. \nOne issue is how UV, X-ray, and cosmic rays heat the gas.\nAnother is the role of mechanical heating associated\nwith various flows in the disk, especially accretion \n(GNI04). \nThe chemical processes are also less certain. \nOur understanding of astrochemistry is based mainly\non the interstellar medium, where densities and temperatures\nare low compared to those of inner disks,\nexcept perhaps in shocks and photon-dominated regions. \nNew reaction pathways or \nprocesses may be important at the higher densities \n($> 10^{7}\\rm \\,cm^{-3}$)\nand higher radiation fields of inner disks. \nA basic challenge is to understand the thermal-chemical \nrole of dust grains and PAHs. Indeed, perhaps the most significant \ndifference between models is the treatment of grain chemistry. \nThe more sophisticated models include adsorption of gas onto \ngrains in cold regions and desorption in warm regions. \nYet another level of complexity is introduced \nby transport processes which can affect the\nchemistry through vertical or radial mixing. \n\nAn important practical issue in thermal-chemical modeling is that \nself-consistent calculations become increasingly\ndifficult as the density, temperature, and the number of species\nincrease. 
Almost all models employ truncated chemistries with \nsomewhere from 25 to 215 species, compared with 396 in the UMIST data\nbase ({\em Le Teuff et al.}, 2000). The truncation\nprocess is arbitrary, determined largely by the goals of the\ncalculations. {\em Wiebe et al.}\ (2003) have developed an\nobjective method for selecting the most important reactions from\nlarge data bases. \nAdditional insights \ninto disk chemistry are offered in the chapter by {\em Bergin et al.}\n\n\bigskip\n\noindent\n\textbf{3.2 The Disk Atmosphere}\n\bigskip\n\nAs noted above, {\em Kamp and van Zadelhoff} (2001) concluded \nin their model of debris disks that \nthe gas and dust temperatures can differ, \nas did {\em Glassgold and Najita} (2001) for T Tauri disks. The\nformer authors developed a comprehensive thermal-chemical model where\nthe heating is primarily from the dissipation of the drift velocity\nof the dust through the gas. For T Tauri disks, stellar X-rays,\nknown to be a universal property of low-mass YSOs,\nheat the gas to temperatures \nthousands of degrees \nhotter than the dust temperature.\n\n\n\begin{figure}[t!]\n\vspace{-0.2in}\n\plotfiddle{najita_fig5.ps}{0.05in}{0.}{220}{220}{10}{0}\n\vspace{-0.3in}\n\caption{\small\baselineskip 10pt\nTemperature profiles from GNI04 for a protoplanetary disk atmosphere. \nThe lower solid line shows the dust \ntemperature of {\em D'Alessio et al.} (1999) \nat a radius of 1\,AU and a mass accretion rate of \n$10^{-8}M_\odot$ yr$^{-1}$. \nThe upper curves show the corresponding gas temperature as a function of the \nphenomenological mechanical heating parameter defined by\nEquation 1, $\alpha_h$ = 1 (solid line), 0.1 (dotted line), and\n0.01 (dashed line). The $\alpha_h = 0.01$ curve closely follows\nthe limiting case of pure X-ray heating. The lower vertical lines\nindicate the major chemical transitions, specifically CO forming\nat $\sim 10^{21} {\rm cm}^{-2}$, H$_2$ forming\nat $\sim 6\times 10^{21} {\rm cm}^{-2}$, and\nwater forming at higher column densities.} \n\end{figure}\n\nFig.~5 shows the vertical temperature profile obtained by {\em\nGlassgold et al.}\ (2004) with a thermal-chemical model\nbased on the dust model of {\em D'Alessio et al.}\ (1999) for a generic\nT Tauri disk. Near the midplane, the densities are high enough\nto strongly couple the dust and gas. \nAt higher altitudes,\nthe disk becomes optically thin to the stellar optical\nand infrared radiation, and the temperature of the (small) grains\nrises, as does the still closely-coupled gas temperature. However,\nat still higher altitudes, the gas responds strongly to the \nless attenuated X-ray flux, and its temperature\nrises much above the dust temperature. \nThe presence of a hot X-ray heated layer above a cold midplane\nlayer was obtained independently by {\em Alexander et al.} (2004).\n\nGNI04 also considered \nthe possibility that the surface layers of protoplanetary disks \nare heated by the dissipation of mechanical energy. \nThis might arise through \nthe interaction of a wind with the upper layers of the disk \nor through disk angular momentum transport. \nSince the theoretical understanding of such processes is incomplete, \na phenomenological\ntreatment is required. 
\nIn the case of angular momentum transport, \nthe most widely accepted mechanism\nis the MRI ({\\em Balbus and Hawley}, 1991; \n{\\em Stone et al.}, 2000), which leads to the local\nheating formula,\n\\begin{equation}\n\\label{accheat}\n\\Gamma_{\\rm acc} = \\frac{9}{4} \\alpha_h \\rho c^2 \\Omega, \n\\end{equation} \nwhere $\\rho$ is the mass density, $c$ is the isothermal sound speed,\n$\\Omega$ is the angular rotation speed, and $\\alpha_h$ is a\nphenomenological parameter \nthat depends on how the turbulence dissipates.\nOne can argue, on the basis of simulations by {\\em\nMiller and Stone} (2000), that midplane turbulence generates Alfv\\'en\nwaves which, on reaching the diffuse surface regions, produce shocks\nand heating. Wind-disk heating can be represented by a similar\nexpression on the basis of dimensional arguments.\nEquation 1 is essentially an adaptation of the expression for \nvolumetric heating in an $\\alpha$-disk model, where $\\alpha$ can in \ngeneral depend on position. GNI04 used the notation $\\alpha_h$ \nto distinguish its value in the disk atmosphere from the usual \nmidplane value. \n\n\nIn the top layers fully exposed to X-rays, the gas temperature\nat 1\\,AU is $\\sim$5000\\,K. Further down, there is a \nwarm transition region (500--2000\\,K) composed mainly of atomic \nhydrogen but with carbon fully associated into CO. \nThe conversion from atomic H to H$_2$ is reached at a column \ndensity of $\\sim$$6\\times 10^{21}\\rm \\,cm^{-2}$, with more complex \nmolecules such as water forming deeper in the disk. \nThe location and thickness of the warm molecular region \ndepends on the strength of the surface heating. The curves \nin Fig.~5 illustrate this dependence for a T Tauri disk at\n$r=1$\\,AU. With $\\alpha_h = 0.01$, X-ray heating dominates this\nregion, whereas with $\\alpha_h > 0.1$, mechanical heating dominates.\n\nGas temperature inversions can also be produced by UV radiation\noperating on small dust grains and PAHs, as demonstrated by\nthe thermal-chemical models of \n{\\em Jonkheid et al.} (2004) \nand {\\em Kamp and Dullemond} (2004). {\\em\nJonkheid et al.}\\ use the {\\em D'Alessio et al.}\\ (1999) model and\nfocus on the disk beyond 50\\,AU. At this radius, the gas temperature\ncan rise to 800\\,K or 200\\,K, depending on whether small grains\nare well mixed or settled. For a thin disk and a high stellar UV\nflux, {\\em Kamp and Dullemond} obtain temperatures that asymptote\nto several 1000\\,K inside 50\\,AU. \nOf course these results are\nsubject to various assumptions that have been made about the stellar\nUV, the abundance of PAHs, and the growth and settling of dust\ngrains.\n\n\nMany of the earlier chemical models, oriented towards outer\ndisks (e.g., {\\em Willacy and Langer}, 2000; {\\em Aikawa\nand Herbst}, 1999; 2001; {\\em Markwick et al.}, 2002), adopt a value\nfor the {\\it stellar} UV radiation field that is $10^4$ times larger\nthan Galactic \nat a distance of 100\\,AU. This choice can be traced\nback to early IUE measurements of the stellar UV beyond 1400\\,\\AA\\, \nfor several TTS ({\\em Herbig and Goodrich}, 1986). 
Although\nthe UV flux from TTS covers a range of values and is\nundoubtedly time-variable, detailed studies with IUE\n(e.g., {\\em Valenti et al.}, 2000; {\\em Johns-Krull et al.}, 2000)\nand FUSE (e.g., {\\em Wilkinson et al.}, 2002; {\\em Bergin et al.}, \n2003) indicate that it decreases into the FUV domain with\na typical value $\\sim$$10^{-15} {\\rm erg} \\, {\\rm cm}^{-2} {\\rm s}^{-1}$\\,\\AA $^{-1}$,\nmuch smaller than earlier estimates.\nA flux of $\\sim$$10^{-15}{\\rm erg} \\, {\\rm cm}^{-2} {\\rm s}^{-1}$\\,\\AA $^{-1}$ at Earth\ntranslates into a value at 100\\,AU of $\\sim$100 times the traditional \nHabing value for the interstellar medium.\nThe data in the FUV range are sparse, unfortunately, as a function \nof age or the evolutionary state of the system. More measurements of \nthis kind are needed since \nit is obviously important to use realistic fluxes in the crucial FUV band\nbetween 912 and 1100\\,\\AA\\ where atomic C can be photoionized and\nH$_2$ and CO photodissociated \n({\\em Bergin et al.}, 2003 and the chapter by {\\em Bergin et al.}). \n\nWhether stellar FUV or X-ray radiation dominates the ionization, \nchemistry, and heating of protoplanetary disks is important because of \nthe vast difference in photon energy. \nThe most direct physical consequence\nis that FUV photons cannot ionize H, and thus the abundance of\ncarbon provides an upper limit to the ionization level produced by\nthe photoionization of heavy atoms, \n$x_e\\sim$$10^{-4}$--$10^{-3}$. Next,\nFUV photons are absorbed much more readily than X-rays, although\nthis depends on the size and spatial distribution of the dust grains,\ni.e, on grain growth and sedimentation. Using realistic numbers for\nthe FUV and X-ray luminosities of TTS, we estimate that\n$L_{\\rm FUV} \\sim L_{\\rm X}$. The rates used in many early chemical\nmodels correspond to $L_{\\rm X} \\ll L_{\\rm FUV} $.\nThis suggests that future chemical modeling of protoplanetary disks \nshould consider both X-rays and FUV in their treatment of ionization,\nheating, and chemistry.\n\n\\bigskip\n\\noindent\n\\textbf{3.3 The Midplane Region}\n\\bigskip\n\nUnlike the warm upper atmosphere of the disk, which is accessible\nto observation, \nthe optically thick midplane is much more difficult to\nstudy. Nonetheless, it is extremely important for understanding the dynamics\nof the basic flows in star formation such as accretion and outflow. \nThe important role of the\nionization level for disk accretion via the MRI was pointed out by\n{\\em Gammie} (1996). The physical reason is that collisional coupling\nbetween electrons and neutrals is required to transfer the turbulence\nin the magnetic field to the neutral material of the disk. Gammie\nfound that Galactic cosmic rays cannot penetrate beyond a \nsurface layer of the disk. He suggested that accretion only occurs\nin the surface of the inner disk (the ``active region'') and not in\nthe much thicker midplane region (the ``dead zone'') where the\nionization level is too small to mediate the MRI. \n\n{\\em Glassgold et al.}\\ (1997) argued that the Galactic cosmic rays never\nreach the inner disk because they are blown away by the stellar\nwind, much as the solar wind excludes Galactic cosmic rays. They\nshowed that YSO X-rays do almost as good a job as cosmic rays in\nionizing surface regions, thus preserving the layered accretion\nmodel of the MRI for YSOs. 
{\\em Igea and Glassgold} (1999) supported\nthis conclusion with a Monte Carlo calculation of X-ray transport\nthrough disks, demonstrating that scattering plays an important\nrole in the MRI by extending the active surface layer to column\ndensities greater than $10^{25} \\, {\\rm cm}^{-2}$, approaching the\nGalactic cosmic ray range used by {\\em Gammie} (1996).\nThis early work showed that the theory of disk ionization and\nchemistry is crucial for understanding the role of the MRI for YSO\ndisk accretion and possibly for planet formation. Indeed, {\\em Glassgold,\nNajita, and Igea} suggested that Gammie's dead zone might provide\na good environment for the formation of planets. \n\nThese challenges\nhave been taken up by several groups (e.g., {\\em Sano et al.}, 2000;\n{\\em Fromang et al.}, 2002; {\\em Semenov et al.}, \n2004; {\\em Kunz and Balbus}, 2004; {\\em Desch}, 2004; {\\em\nMatsumura and Pudritz}, 2003, 2005; and {\\em Ilgner and Nelson}, \n2006a,b). {\\em Fromang et al.}\\ discussed many of the issues that\naffect the size of the dead zone: differences in the disk model,\nsuch as a Hayashi disk or a standard $\\alpha$-disk; temporal evolution\nof the disk; the role of a small abundance of heavy atoms that\nrecombine mainly radiatively; and the value of the magnetic Reynolds\nnumber. {\\em Sano et al.} (2000) explored the role played by small dust\ngrains in reducing the electron fraction when it becomes as small\nas the abundance of dust grains. They showed that the dead zone \ndecreases and eventually vanishes as the grain size increases or\nas sedimentation towards the midplane proceeds.\nMore recently, {\\em Inutsuka and Sano} (2005) have suggested that\na small fraction of the energy dissipated by the MRI leads to the\nproduction of fast electrons with energies sufficient to ionize H$_2$. \nWhen coupled with vertical mixing of highly ionized surface regions,\nInutsuka and Sano argue that the MRI can self generate the ionization \nit needs to be operative throughout the entire disk.\n\n\nRecent chemical modeling ({\\em Semenov et al.}, 2004;\n{\\em Ilgner and Nelson}, 2006a,b) confirms that the level of ionization\nin the midplane is affected by many microphysical\nprocesses. These include the abundances of radiatively-recombining atomic\nions, molecular ions, small grains, and PAHs. The proper treatment of the\nions represents a great challenge for disk chemistry, one made\nparticularly difficult by the lack of observations of the dense gas at the\nmidplane of the inner disk. Thus the uncertainties in inner disk\nchemistry preclude definitive quantitative conclusions about the\nmidplane ionization of protoplanetary disks. Perhaps the biggest\nwild card is the issue of grain growth, emphasized anew by {\\em\nSemenov et al.}, (2004). If the disk grain size\ndistribution were close to interstellar, then the small grains would\nbe effective in reducing the electron fraction and producing dead\nzones. But significant grain growth is expected {\\em and} observed\nin the disks of YSOs, limiting the extent of dead zones (e.g.,\n{\\em Sano et al.}, 2002).\n\n\nThe broader chemical properties of the {\\it inner} midplane region are\nalso of great interest since most of the gas in the disk is within\none or two scale heights. \nThe chemical composition\nof the inner midplane gas is important because it provides the\ninitial conditions for outflows and for the formation of planets\nand other small bodies; it also determines whether the MRI operates. 
\nRelatively little work has been done on the midplane\nchemistry of the inner disk. For example, GNI04 excluded N and S\nspecies and restricted the carbon chemistry to species closely\nrelated to CO. However, {\\em Willacy et al.}\\ (1998), {\\em Markwick\net al.}\\ (2002), and {\\em Ilgner et al.}\\ (2004) have carried out\ninteresting calculations that shed light on a possible rich organic\nchemistry in the inner disk.\n\nUsing essentially the same chemical model, these authors follow\nmass elements in time as they travel in a steady accretion\nflow towards the star. At large distances, the gas is subject to\nadsorption, and at small distances to thermal desorption. In between\nit reacts on the surface of the dust grains; on being liberated\nfrom the dust, it is processed by gas phase chemical reactions. The\ngas and dust are assumed to have the same temperature, and all\neffects of stellar radiation are ignored. The ionizing sources are\ncosmic rays and $^{26}$Al. Since the collisional ionization of low \nionization potential atoms is ignored, a very low ionization level \nresults. {\\em Markwick et al.}\\ improve on {\\em Willacy et al.}\\ by\ncalculating the vertical variation of the temperature, and {\\em\nIlgner et al.}\\ consider the effects of mixing. Near 1\\,AU, H$_2$O\nand CO are very abundant, as predicted by simpler models, but {\\em\nMarkwick et al.}\\ find that CH$_4$ and CO have roughly equal abundances.\nNitrogen-bearing molecules, such as NH$_3$, HCN, and HNC are also predicted\nto be abundant, as are a variety of hydrocarbons such as CH$_4$, C$_2$H$_2$,\nC$_2$H$_3$, C$_2$H$_4$, etc. {\\em Markwick et al.}\\ also simulate\nthe presence of penetrating X-rays and find increased column densities\nof CN and HCO$^+$. \nDespite many uncertainties, these predictions \nare of interest for our future understanding of the midplane region.\n\nInfrared spectroscopic searches for hydrocarbons in disks may be \nable to test these predictions. \nFor example, {\\em Gibb et al.}\\ (2004) searched for CH$_4$ in absorption \ntoward HL Tau. The upper limit on the abundance of \nCH$_4$ relative to CO ($<$1\\%) in the absorbing gas \nmay contradict the predictions of {\\em Markwick et al.}\\ (2002) \nif the absorption arises in the disk atmosphere. \nHowever, some support for the {\\em Markwick et al.}\\ (2002) model comes \nfrom a recent report by {\\em Lahuis et al.}\\ (2006)\nof a rare detection by {\\it Spitzer} of C$_2$H$_2$ and HCN in\nabsorption towards a YSO, with ratios close to\nthose predicted for the inner disk (Section 2.6). \n\n\\bigskip\n\\noindent\n\\textbf{3.4 Modeling Implications}\n\\bigskip\n\nAn interesting implication of the irradiated disk atmosphere models\ndiscussed above is that the region\nof the atmosphere over which the gas and dust temperatures differ \nincludes the region that is accessible\nto observational study. Indeed, the models have interesting \nimplications for some of the observations presented in Section 2. \nThey can account roughly \nfor the unexpectedly warm gas temperatures that have been found for\nthe inner disk region based on the CO fundamental (Section 2.3) and UV\nfluorescent H$_2$ transitions (Section 2.4). \nIn essence, the warm gas temperatures arise from the direct heating \nof the gaseous component and the poor thermal coupling between the \ngas and dust components at the low densities characteristic \nof upper disk atmospheres. 
\nThe role of X-rays in heating disk atmospheres has some support from \nthe results of {\\em Bergin et al.}\\ (2004); they suggested that some of \nthe UV H$_2$ emission from TTS arises \nfrom excitation by fast electrons produced by X-rays. \n\nIn the models, CO is found to form at a column density \n$N_H$$\\simeq$$10^{21}\\rm \\,cm^{-2}$ \nand temperature $\\sim$1000\\,K in the radial range 0.5--2\\,AU \n(GNI04; Fig.~5), \nconditions similar to those deduced for the emitting gas \nfrom the CO fundamental lines ({\\em Najita et al.}, 2003). \nMoreover, CO is abundant in a region \nof the disk that is predominantly atomic hydrogen, a situation \nthat is favorable for exciting the rovibrational transitions \nbecause of the large collisional excitation cross section \nfor H + CO inelastic scattering.\nInterestingly, X-ray irradiation alone is probably \ninsufficient to explain the strength of the CO emission \nobserved in actively-accreting TTS. \nThis suggests that \nother processes may be important in heating disk atmospheres. \nGNI04 have explored the role of mechanical \nheating. Other possible heating processes are FUV irradiation \nof grains and or PAHs. \n\nMolecular hydrogen column densities comparable to the UV \nfluorescent column of $\\sim$$5\\times 10^{18}{\\rm cm}^{-2}$ observed \nfrom TW Hya are reached at 1\\,AU \nat a total vertical hydrogen column density\nof $\\sim$$5\\times 10^{21}{\\rm cm}^{-2}$, where the fractional abundance \nof H$_2$ is $\\sim$$10^{-3}$ (GNI04; Fig.~5). \nSince Ly$\\alpha$ photons must traverse the entire \n$\\sim$$5\\times 10^{21}{\\rm cm}^{-2}$ in order to excite the emission, \nthe line-of-sight dust opacity through this column must be relatively low. \nObservations of this kind, when combined with atmosphere models, \nmay be able to constrain the gas-to-dust ratio in disk atmospheres, \nwith consequent implications for grain growth and settling. \n\nWork in this direction has been carried out by {\\em Nomura and Millar} \n(2005). They have made a detailed thermal model of a\ndisk that includes the formation of H$_2$ on grains, destruction via FUV\nlines, and excitation by Ly$\\alpha$ photons. The gas at the surface is\nheated primarily by the photoelectric effect on dust grains and PAHs,\nwith a dust model \nappropriate for interstellar clouds, i.e., \none that reflects little grain growth. \nBoth interstellar and stellar UV radiation\nare included, the latter based on observations of TW Hya. The gas\ntemperature at the surface of their flaring disk model reaches 1500\\,K\nat 1\\,AU. They are partially successful in accounting for the\nmeasurements of {\\em Herczeg et al.}\\ (2002), but their model\nfluxes fall short by a factor of five or so. A likely defect in\ntheir model is that the calculated temperature of the disk surface is\ntoo low, a problem that might be remedied by reducing the UV attenuation \nby dust and by including X-ray or other surface heating processes.\n\nThe relative molecular abundances that are predicted by \nthese non-turbulent, layered \nmodel atmospheres are also of interest. At a distance of 1\\,AU, the\ncalculations of GNI04 indicate that the relative abundance of \nH$_2$O to CO is $\\sim$$10^{-2}$ in the disk atmosphere for\ncolumn densities $<$$10^{22}\\rm \\,cm^{-2}$; \nonly at column densities $>$$10^{23}\\rm \\,cm^{-2}$ are H$_2$O and CO \ncomparably abundant. 
\nThe abundance ratio in the atmosphere is significantly lower than \nthe few relative abundance measurements to date (0.1--0.5) \nat $<$0.3\,AU \n({\em Carr et al.}, 2004; Section 2.2). \nPerhaps layered\nmodel atmospheres, when extended to these \nsmall radii, will be able to account for the abundant water that is detected. \nIf not, the large water abundance may be evidence of strong vertical \n(turbulent) mixing that carries abundant water from deeper in the \ndisk up to the surface. \nThus, it would be of great interest to develop the modeling for the \nsources and regions where water is observed in the context of \nboth layered models and those with vertical mixing. \nWork in this direction has the potential to place unique constraints \non the dynamical state of the disk. \n\n\n\n\section{\textbf{CURRENT AND FUTURE DIRECTIONS}}\nAs described in the previous sections, significant progress has \nbeen made in developing both observational probes of gaseous \ninner disks as well as the theoretical models that are needed to \ninterpret the observations. In this section, we describe some \nareas of current interest as well as future directions \nfor studies of gaseous inner disks. \n\n\n\bigskip\n\noindent\n\textbf{4.1 Gas Dissipation Timescale} \n\bigskip\n\nThe lifetime of gas in the inner disk is of interest in the context \nof both giant and terrestrial planet formation. \nSince significant gas must be present in the disk in order for a \ngas giant to form, the gas dissipation timescale in the giant \nplanet region of the disk can help to identify dominant pathways \nfor the formation of giant planets. A short dissipation time scale \nfavors processes such as gravitational instabilities which can \nform giant planets on short time scales ($< 1000$ yr; {\em Boss}, 1997; \n{\em Mayer et al.}, 2002). \nA longer dissipation time scale accommodates the more leisurely \nformation of planets in the core accretion scenario (few--10\,Myr; \n{\em Bodenheimer and Lin}, 2002). \n\n\nSimilarly, \nthe outcome of terrestrial \nplanet formation (the masses and eccentricities of the planets and \ntheir consequent habitability) may depend sensitively on \nthe residual gas in the terrestrial planet region of the disk \nat ages of a few Myr. \nFor example, in the picture of terrestrial planet formation \ndescribed by {\em Kominami and Ida} (2002), \nif the gas column density in this region is $\gg$$1 \rm \,g\,cm^{-2}$ \nat the epoch when protoplanets assemble to form terrestrial \nplanets, \ngravitational gas drag is strong enough to circularize the orbits \nof the protoplanets, making it difficult for them to collide and \nbuild Earth-mass planets. In contrast, if the gas column density is \n$\ll$$1 \rm \,g\,cm^{-2}$, Earth-mass planets can be produced, but gravitational \ngas drag is too weak to recircularize their orbits. \nAs a result, only a narrow range of gas column densities around \n$\sim$$1 \rm \,g\,cm^{-2}$ is expected to lead to planets with the Earth-like \nmasses and \nlow eccentricities that we associate with habitability on Earth. \n\n\n\nFrom an observational perspective, \nrelatively little is known about the evolution of the \ngaseous component. Disk lifetimes are typically inferred from \ninfrared excesses that probe the dust component of the disk, \nalthough processes such as grain growth, planetesimal formation, \nand rapid grain inspiraling produced by gas drag ({\em Takeuchi and Lin}, 2005) \ncan compromise dust as a tracer of the gas. 
\nOur understanding of disk lifetimes can be improved \nby directly probing the gas content of disks and using \nindirect probes of disk gas content such as stellar accretion rates \n(see {\em Najita}, 2006 for a review of this topic). \n\nSeveral of the diagnostics described in Section 2 may \nbe suitable as direct probes of disk gas content. \nFor example, transitions of H$_2$ and other molecules and atoms \nat mid- through far-infrared wavelengths are thought to be \npromising probes of the giant planet region of the disk \n({\em Gorti and Hollenbach}, 2004). \nThis is an important area of investigation currently for the \nSpitzer Space Telescope and, in the future, for Herschel \nand 8- to 30-m ground-based telescopes. \n\n\nStudies of the lifetime of gas in the terrestrial planet region are\nalso in progress. The CO transitions are well suited for this purpose\nbecause the transitions of CO and its isotopes probe gas column\ndensities in the range of interest ($10^{-4}-1 \rm \,g\,cm^{-2}$). \nA current study by Najita, Carr, and Mathieu, \nwhich explores the residual gas content of optically thin disks \n({\em Najita}, 2004), \nillustrates some of the challenges \nin probing the residual gas content of disks. \nFirstly,\ngiven the well-known correlation between IR excess and accretion\nrate in young stars (e.g., {\em Kenyon and Hartmann}, 1995), CO emission\nfrom sources with optically thin inner disks may be intrinsically\nweak if accretion contributes significantly to heating disk\natmospheres. \nThus, high signal-to-noise spectra may be \nneeded to detect this emission. \nSecondly, since the line emission may be intrinsically weak, \nstructure in the stellar photosphere may complicate the \nidentification of emission features. \nFig.~6 shows an example in which CO absorption in the stellar \nphotosphere of TW Hya likely veils weak emission from the \ndisk. Correcting for the stellar photosphere would not only amplify \nthe strong $v$=1--0 emission that is clearly present \n(cf. {\em Rettig et al.}, 2004), it would also uncover \nweak emission in the higher vibrational lines, confirming the \npresence of the warmer gas probed by the UV fluorescent lines \nof H$_2$ ({\em Herczeg et al.}, 2002). \n\n\begin{figure}[t!]\n\vspace{-1.2in}\n\plotfiddle{najita_fig6a.eps}{0.05in}{0.}{300}{300}{-30}{0}\n\vspace{-2.6in}\n\plotfiddle{najita_fig6b.ps}{0.05in}{0.}{300}{300}{-30}{0}\n\vspace{-1.5in}\n\caption{\small\baselineskip 10pt\n(Top) Spectrum of the transitional disk system TW Hya at 4.6~$\hbox{$\mu$m}$ \n(histogram). \nThe strong emission in the $v$=1--0 CO fundamental lines extends \nabove the plotted region. \nAlthough the model stellar photospheric spectrum (light solid line) fits \nthe weaker features in the TW Hya spectrum, \nit predicts stronger absorption in the low vibrational CO \ntransitions (indicated by the lower vertical lines) than is \nobserved. This suggests that the stellar \nphotosphere is veiled by CO emission from warm disk gas. \n(Bottom) \nCO fundamental emission from the transitional disk system \nV836 Tau. Vertical lines mark the approximate \nline centers at the velocity of the star. The velocity widths \nof the lines locate the emission within a few AU of the star, \nand the relative strengths of the lines suggest optically thick \nemission. Thus, a large reservoir of gas may be present in the \ninner disk despite the weak infrared excess from this portion \nof the disk. 
\n} \n\end{figure}\n\n\n\n\nStellar accretion rates \nprovide a complementary probe of the gas content of inner disks. \nIn a steady accretion disk, the column density $\Sigma$ is related to \nthe disk accretion rate $\dot M$ \nby a relation of the form $\Sigma\propto \dot M\/\alpha T$, where \n$T$ is the disk temperature. \nA relation of this form allows us to infer $\Sigma$ from $\dot M$ \ngiven a value for the viscosity parameter $\alpha$. \nAlternatively, the relation could be calibrated empirically \nusing measured disk column densities. \n\nAccretion rates are available for many sources in the age \nrange 0.5--10\,Myr (e.g., {\em Gullbring et al.}, 1998; \n{\em Hartmann et al.}, 1998; \n{\em Muzerolle et al.}, 1998, 2000). \nA typical value of $10^{-8}M_{\odot}\,{\rm yr}^{-1}$ for TTS \ncorresponds to an (active) disk column density of \n$\sim$$100 \rm \,g\,cm^{-2}$ at 1\,AU for \n$\alpha$=0.01 ({\em D'Alessio et al.}, 1998). \nThe accretion rates show an overall decline with time \nwith a large dispersion at any given age. \nThe existence of 10\,Myr old sources with accretion rates as large \nas $10^{-8}\,M_{\odot}\,{\rm yr}^{-1}$ ({\em Sicilia-Aguilar et al.}, 2005) suggests \nthat gaseous disks may be long lived in some systems.\n\nEven the lowest measured accretion rates may be dynamically significant. \nFor a system like V836 Tau (Fig.\ 6), a $\sim$3\,Myr old \n({\em Siess et al.}, 1999) system with an optically thin inner disk, \nthe stellar accretion rate of $4\times 10^{-10}M_{\odot}\,{\rm yr}^{-1}$ \n({\em Hartigan et al.}, 1995; {\em Gullbring et al.}, 1998) would correspond\nto $\sim$$4 \rm \,g\,cm^{-2}$ at 1\,AU. Although the accretion rate \nis irrelevant for the buildup of the stellar mass, it corresponds \nto a column density that would \nfavorably impact terrestrial planet formation. \nMore interesting perhaps\nis St34, a TTS with a Li depletion age of 25\,Myr; \nits stellar accretion rate of $2\times 10^{-10}M_{\odot}\,{\rm yr}^{-1}$ \n({\em White and Hillenbrand}, 2005) \nsuggests a dynamically significant reservoir of gas\nin the inner disk region. These examples suggest that dynamically\nsignificant reservoirs of gas may persist even after inner disks \nbecome optically thin and over the timescales \nneeded to influence the outcome of terrestrial planet formation. \n\nThe possibility of long lived gaseous reservoirs can be confirmed \nby using the diagnostics described in Section 2 to measure total disk \ncolumn densities. Equally important, a measured disk \ncolumn density, combined with the stellar accretion rate, would allow \nus to infer a value for the viscosity parameter $\alpha$ for the system. \nThis would be another way of constraining the disk accretion mechanism. \n\n\n\n\bigskip\n\noindent\n\textbf{4.2 Nature of Transitional Disk Systems} \n\bigskip\n\nMeasurements of the gas content and distribution in inner disks \ncan help us to identify systems in various states of planet \nformation. Among the most interesting objects to study in this \ncontext are the transitional disk systems, which possess \noptically thin inner and optically thick outer disks. \nExamples of this class of objects include TW Hya, GM Aur, DM Tau, \nand CoKu Tau\/4 ({\em Calvet et al.}, 2002; {\em Rice et al.}, 2003; \n{\em Bergin et al.}, 2004; {\em D'Alessio et al.}, 2005; \n{\em Calvet et al.}, 2005). 
\nIt was suggested early on that optically thin inner disks might \nbe produced by the dynamical sculpting of the disk by orbiting giant \nplanets ({\em Skrutskie et al.}, 1990; see also {\em Marsh and Mahoney}, 1992). \n\nIndeed, optically thin disks may arise in multiple phases of disk \nevolution.\nFor example, as a first step in planet formation (via core accretion), \ngrains are expected to grow into planetesimals and eventually \nrocky planetary cores, producing a region of the disk that has reduced \ncontinuum opacity but is gas-rich. \nThese regions of the disk may therefore show strong line emission.\nDetermining the fraction of \nsources in this phase of evolution may help to establish the \nrelative time scales for planetary core formation and the accretion of \ngaseous envelopes. \n\nIf a planetary core accretes enough gas to produce a low mass \ngiant planet ($\sim$1$M_J$), it is expected to carve out a gap in its \nvicinity (e.g., {\em Takeuchi et al.}, 1996). Gap crossing streams \ncan replenish an inner disk and allow further accretion onto \nboth the star and planet ({\em Lubow et al.}, 1999). \nThe small solid angle subtended by the accretion streams would \nproduce a deficit in the emission from both gas and dust in the \nvicinity of the planet's orbit. \nWe would also expect to detect the presence of an inner disk. \nPossible examples of systems in this phase of evolution include \nGM Aur and TW Hya in which hot gas is detected close to the star as is \naccretion onto the star ({\em Bergin et al.}, 2004; {\em Herczeg et al.}, 2002; \n{\em Muzerolle et al.}, 2000). The absence of gas in the vicinity of \nthe planet's orbit would help to confirm this interpretation. \n\nOnce the planet accretes enough mass via the accretion streams to \nreach a mass $\sim$5--10$M_J$, it is expected to cut off further accretion \n(e.g., {\em Lubow et al.}, 1999). The inner disk will accrete onto the \nstar, leaving a large inner hole and no trace of stellar accretion. \nCoKu Tau\/4 is a possible example of a system in this phase of \nevolution (cf.\ {\em Quillen et al.}, 2004) since it appears to have a large \ninner hole and a low to negligible accretion rate \n($<$few\,$\times 10^{-10}M_{\odot}\,{\rm yr}^{-1}$). \nThis interpretation predicts little gas anywhere within the orbit \nof the planet. \n\nAt late times, \nwhen the disk column density around 10\,AU has decreased sufficiently \nthat the outer disk is being photoevaporated away faster than \nit can resupply material to the inner disk via accretion, \nthe outer disk will decouple from the inner disk, which will accrete \nonto the star, leaving an inner hole that is devoid of gas and dust \n(the ``UV Switch'' model; {\em Clarke et al.}, 2001). \nMeasurements of the disk gas column density and the stellar \naccretion rate can be used to test this possibility. \nAs an example, TW Hya is in the age range ($\sim$10\,Myr) where \nphotoevaporation is likely to be significant. \nHowever, the accretion rate onto the star, the gas content of the inner disk \n(Sections 2 and 4), \nas well as the column density inferred for the outer disk \n($32 \rm \,g\,cm^{-2}$ at 20\,AU based on the dust SED; \n{\em Calvet et al.}, 2002) \nare all much larger than is expected in the UV switch model. \nAlthough this mechanism is, therefore, unlikely to explain \nthe SED for TW Hya, it may explain the \npresence of inner holes in less massive disk systems of comparable age. 
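\n\nTo make the accretion-rate-based column density estimates of Section 4.1 \nmore concrete before turning to turbulence, it is useful to write out the \n$\Sigma\propto \dot M\/\alpha T$ scaling numerically. The following is only \nan illustrative, order-of-magnitude sketch rather than the detailed \n{\em D'Alessio et al.}\ (1998) calculation: it assumes the standard \nsteady-disk relation $\dot M = 3\pi\nu\Sigma$ with $\nu = \alpha c^{2} \Omega^{-1}$, \nwhere $c$ is the isothermal sound speed (as in Equation 1), together with a \nsolar-mass central star and a representative midplane temperature of \n$\sim$300\,K at 1\,AU. Then\n\begin{displaymath}\n\Sigma \simeq \frac{\dot M \Omega}{3\pi \alpha c^{2}}\n\approx 100 \left(\frac{\dot M}{10^{-8}\,M_{\odot}\,{\rm yr}^{-1}}\right)\n\left(\frac{\alpha}{0.01}\right)^{-1}\n\left(\frac{T}{300\,{\rm K}}\right)^{-1} {\rm g}\,{\rm cm}^{-2}\n\end{displaymath}\nat 1\,AU, consistent with the value quoted in Section 4.1. Read in reverse, \na measured $\Sigma$ combined with $\dot M$ yields an empirical estimate of \n$\alpha$; the precise coefficient, of course, depends on the adopted disk \nmodel and temperature structure.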
\n\n\n\n\\bigskip\n\\noindent\n\\textbf{4.3 Turbulence in Disks} \n\\bigskip\n\nFuture studies of gaseous inner disks may also help to \nclarify the nature of the disk accretion process. As indicated \nin Section 2.1, evidence for suprathermal line broadening \nin disks supports the idea of a turbulent accretion process. \nA turbulent inner disk may have important \nconsequences for the survival of terrestrial planets and the \ncores of giant planets. An intriguing puzzle \nis how these objects avoid Type-I \nmigration, which is expected to cause the object to lose angular \nmomentum and spiral into the star on short timescales (e.g., {\\em Ward}, 1997). \nA recent suggestion is that if disk accretion is turbulent, \nterrestral planets will scatter off \nturbulent fluctuations, executing a ``random walk'' which greatly \nincreases the migration time as well as the chances of \nsurvival ({\\em Nelson et al.}, 2000; see chapter by {\\em Nelson et al.}). \n\nIt would be interesting to explore this possible connection further by extending \nthe approach used for the CO overtone lines to a wider \nrange of diagnostics to probe the intrinsic line width as a \nfunction of radius and disk height. By comparing the results \nto the detailed predictions of theoretical models, it may be \npossible to distinguish between the turbulent signature, produced \ne.g., by the MRI instability, from the turbulence that might \nbe produced by, e.g., a wind blowing over the disk. \n\nA complementary probe of turbulence may come from exploring the\nrelative molecular abundances in disks. As noted in Section 3.4,\nif relative abundances cannot be explained by model\npredictions for non-turbulent, layered accretion flows, a \nsignificant role for strong vertical mixing produced by turbulence\nmay be implied. Although model-dependent, this approach toward \ndiagnosing turbulent accretion appears to be less sensitive \nto confusion from wind-induced turbulence, especially if one can identify \ndiagnostics that require vertical mixing from deep down in the disk. \nAnother complementary approach toward probing the accretion process, \ndiscussed in Section 4.1, is to measure total gas column densities \nin low column density, dissipating disks in order to infer values \nfor the viscosity parameter $\\alpha$. \n\n\n\n\n\n\\section{\\textbf{SUMMARY AND CONCLUSIONS}} \n\nRecent work has lent new insights on the \nstructure, dynamics, and gas content of inner disks \nsurrounding young stars. \nGaseous atmospheres appear to be hotter than the dust in \ninner disks. This is a consequence of irradiative (and \npossibly mechanical) heating of the gas as well as \nthe poor thermal coupling between the gas and dust at the low \ndensities of disk atmospheres.\nIn accreting systems, the gaseous disk appears to be turbulent \nand extends inward beyond the dust sublimation radius\nto the vicinity of the corotation radius.\nThere is also evidence that dynamically significant reservoirs \nof gas can persist even \nafter the inner disk becomes optically thin in the continuum. \nThese results bear on important star and planet formation issues such as the\norigin of winds, funnel flows, and the rotation rates of young\nstars; the mechanism(s) responsible for disk accretion; and the role\nof gas in the determining the architectures of terrestrial and\ngiant planets.\nAlthough significant future work is needed to reach any conclusions \non these issues, the future for such studies is bright. 
\nIncreasingly detailed studies of the inner disk region should be \npossible with the advent of \npowerful spectrographs and interferometers (infrared and submillimeter) \nas well as sophisticated models that describe the coupled \nthermal, chemical, and dynamical state of the disk. \n\n\n\n\n\n\\textbf{ Acknowledgments.} We thank Stephen Strom who contributed \nsignificantly to the discussion on the nature of transitional disk systems. \nWe also thank Fred Lahuis and Matt Richter for sharing \nmanuscripts of their work in advance of publication. \nAEG acknowledges support from the NASA Origins and NSF Astronomy \nprograms. \nJSC and JRN also thank the NASA Origins program for its support. \n\n\\bigskip\n\n\\centerline\\textbf{ REFERENCES }\n\\bigskip\n\\parskip=0pt\n{\\small\n\\baselineskip=11pt\n\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Abgrall H., Roueff E., Launay F., Roncin J. Y., and\nSubtil, J. L. (1993) {\\em Astron. Astrophys. Suppl., 101}, 273-321.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Abgrall H., Roueff E., Launay F., Roncin J. Y., and\nSubtil J. L. (1993) {\\em Astron. Astrophys. Suppl., 101}, 323-362.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Acke B., van den Ancker M. E., and Dullemond C. P. (2005), \n\\aap, 436, 209-230.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Aikawa Y., Miyama S. M., Nakano T., and Umebayashi T. (1996)\n{\\em Astrophys. J., 467}, 684-697. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Aikawa Y., Umebayashi T., Nakano T., and Miyama S. M. (1997)\n{\\em Astrophys. J., 486}, L51-L54. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Aikawa Y., Umebayashi T., Nakano T., and Miyama S. M. (1999)\n{\\em Astrophys. J., 519}, 705-725. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Aikawa Y. and Herbst E. (1999)\n{\\em Astrophys. J., 526}, 314-326. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Aikawa Y. and Herbst E. (2001)\n\\aap, 371, 1107-1117. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Akeson R. L., Walker, C. H., Wood, K., Eisner, J. A., \nScire, E. et al.\\ (2005a) \n\\apj, 622, 440-450.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Akeson R. L., Boden, A. F., Monnier, J. D., Millan-Gabet, R., \nBeichman, C. et al.\\ (2005b) \n\\apj, 635, 1173-1181.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Alexander R. D., Clarke C. J., and Pringle J. E. (2004)\n\\mnras, 354, 71-80.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Ardila D. R., Basri G., Walter F. M., Valenti J. A., and\nJohns-Krull C. M. (2002) {\\em Astrophys. J., 566}, 1100-1123. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Balbus S. A. and Hawley J. F. (1991) \n{\\em Astrophys. J., 376}, 214-222. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Bary J.~S., Weintraub D.~A., and Kastner J.~H. (2003) \n{\\em \\apj, 586}, 1138-1147.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Bergin E., Calvet, N., Sitko M. L., Abgrall H., \nD'Alessio, P. et al. (2004)\n{\\em Astrophys. J., 614}, L133-Ll37. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Bergin E., Calvet N., D'Alessio P., and Herczeg G. J. (2003) \n{\\em \\apj, 591}, L159-L162. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Blake G. A. and Boogert A. C. A. (2004) \n{\\em \\apj, 606}, L73-L76.\n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Blum R. D., Barbosa C. L., Damineli A., Conti P. S., \nand Ridgway S.\\ (2004) {\\em \\apj, 617}, 1167-1176. \n\n\\par\\noindent\\hangindent=1pc\\hangafter=1 Bodenheimer P. and Lin D. N. C. (2002) \n{\\em Ann. Rev. Earth Planet. 
\end{document}