diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpmer" "b/data_all_eng_slimpj/shuffled/split2/finalzzpmer" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpmer" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Algorithm Optimization} \\label{sect: Algorithm Optimization}\nThis section explains a novel TI optimizations tailored for CPU-FPGA platforms. TI has been used for optimizing distance-related problems, but is often on the sequential processing systems. Our design features an innovative way of applying TI to obtain low-overhead distance bounds for unnecessary distance computation elimination while maintaining the computation regularity to ease the hardware acceleration on FPGAs.\n\n\\subsection{TI in Distance-related Algorithm}\nAs a simple but powerful mathematical concept, TI has been used to optimize the distance-related algorithm. Figure~\\ref{fig: TI Optimization.}\\subfig{a} gives an illustration. It states that $d(A, B) \\leq d(A, L_{ref}) + d( L_{ref}, B)$, where $d(A, B)$ represents the distance between point $A$ and $B$ in some metrics (\\textit{e.g.}, Euclidean distance). The assistant point $L_{ref}$ is a landmark point for reference. Directly from the definition, we could compute both the lower bound ($lb(A, B)$) and upper bound ($ub(A, B)$) of the distance between two points A and B. This is the standard and most common usage of TI for deriving bounds of distance.\n\nIn general, bounds can be used as a substitute for the exact distances in the distance-related data analysis. Take N-body simulation as an example. It requires to find target points that are within $R$ (the radius) from each given query point. Suppose we get the $lb(A, B) = 10$ and $10 > R$, then we are 100\\% confident that source point $A$ is not within $R$ of query point $B$. As a result, there is no need to compute the exact distance between point $A$ and $B$. Otherwise, the exact distance computation will still be carried out for direct comparison. \nWhile many previous researches~\\cite{elkan2003using, ding2015yinyang, lin2012k, MakingKMeansFaster, KNNAdaptiveBound} gain success in directly porting the above point-based TI to optimize distance-related algorithms, they usually suffer from memory overhead and computations irregularity, which result in inferior performance. \n\\begin{figure*}[htb] \\small\n \\centering\n \\makebox{\\includegraphics[scale=0.8]{images\/TI-optimization.pdf}}\n \\caption{TI Optimization.}\n \\label{fig: TI Optimization.}\n\\end{figure*}\n\\vspace{-1em}\n\\subsection{Generalized Triangle Inequality (GTI)} \nAccD uses a novel Generalized TI (GTI) to remove redundant distance computation. It generalizes the traditional point-based TI while significantly reducing the overhead of bound computations. The traditional point-based TI focuses on tighter bound (more closer to the exact distance) to remove more distance computations, but it induces the extra bound computations, which could become the new performance bottleneck even after many distance calculations being removed. In contrast, GTI strikes a good balance between distance computation elimination and bound computation overhead. In particular, AccD highlights GTI from three perspectives: \\textit{Two-landmark bound computation}, \\textit{Trace-based bound computation}, and \\textit{Group-level bound computation}.\n\n\\paragraph{Two-landmark Bound Computation} \nTwo-landmark scheme aims at reducing the bound computation through effective distance reuse. 
In this case, the distance bound between two points can be measured through two landmarks serving as reference points. As illustrated in Figure~\ref{fig: TI Optimization.}\subfig{b}, the distance bound between points $A$ and $B$ can be computed from $d(A, A_{ref})$, $d(B, B_{ref})$, and $d(A_{ref}, B_{ref})$ through Equation~\ref{equ: Two-landmark Bound Computation.}, where $A_{ref}$ and $B_{ref}$ are the landmark points of $A$ and $B$, respectively.
\begin{equation} \small
\label{equ: Two-landmark Bound Computation.}
\begin{aligned}
    d(A, B) &\geq d(A_{ref}, B_{ref}) - d(A, A_{ref}) - d(B, B_{ref}) = lb(A, B) \\
    d(A, B) &\leq d(A_{ref}, B_{ref}) + d(A, A_{ref}) + d(B, B_{ref}) = ub(A, B)
\end{aligned}
\end{equation}

One representative application scenario of two-landmark bound computation is KNN-join, where two disjoint sets of landmarks are selected for the query and target point sets. In this case, far fewer bound computations are required than in the one-landmark case (shown in Figure~\ref{fig: TI Optimization.}\subfig{a}). This can also be validated through a simple calculation. Assume that in KNN-join we have $m$ query points, $n$ target points, $z_{qry}$ query landmarks, and $z_{trg}$ target landmarks, with $z_{qry} \ll m$ and $z_{trg} \ll n$. When the group-level lower bound exceeds $d(c, d) + d_{max}(d, d')$, it is impossible for the points inside groups $A'$ and $B'$ to become the closest point of $c$ in the current iteration. Therefore, the distance computation between point $c$ and all points inside these groups can be safely avoided.

In addition to saving distance computations, group-level bound computation offers two further benefits that facilitate the underlying hardware acceleration. First, the computation regularity of the remaining distance computations is higher than with point-level bound computation: points inside each group share commonality in computation, which facilitates parallelization for acceleration. For example, point-level bound computation usually results in a large divergence of distance computation among different points, as shown in Figure~\ref{fig: group-level bound computation -- computation regularity.}\subfig{a}, which is a killer of parallelization and pipelining. In group-level bound computation, by contrast, points inside the same source group always maintain the same groups of target points for distance computation, as shown in Figure~\ref{fig: group-level bound computation -- computation regularity.}\subfig{b}.
\begin{figure} [ht] \small
    \centering
    \makebox{\includegraphics[width=0.75\columnwidth]{images/Data-Grouping.pdf}}
    \caption{Bound Computation at (a) Point-level, (b) Group-level.}
    \label{fig: group-level bound computation -- computation regularity.}
\end{figure}
\vspace*{-0.8em}
Second, group-level bound computation reduces memory overhead. Assume we have $m$ source points, $n$ target points, $z_{src}$ source groups, and $z_{trg}$ target groups. The memory overhead of maintaining distance bounds is $O(m\times n)$ in the point-level case, whereas in the group-level case we only have to maintain distance bounds among groups, for a memory overhead of $O(z_{src}\times z_{trg})$, where $z_{src} \ll m$ and $z_{trg} \ll n$.
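To make these bound derivations concrete, the following sketch (our own illustrative Python rendering, not the AccD implementation; all function and variable names are ours) computes one-landmark and two-landmark bounds and applies a group-level filtering test for a range-$R$ query, storing only one bound per group pair:
\begin{lstlisting}[language=Python]
import numpy as np

def dist(a, b):
    """Euclidean distance between two points."""
    return np.linalg.norm(a - b)

def one_landmark_bounds(d_al, d_bl):
    """Point-based TI bounds from one landmark L (Figure a)."""
    return abs(d_al - d_bl), d_al + d_bl   # lb(A,B), ub(A,B)

def two_landmark_bounds(d_ref, d_a, d_b):
    """Two-landmark bounds (Equation 1), reusing d(A_ref, B_ref)."""
    return max(d_ref - d_a - d_b, 0.0), d_ref + d_a + d_b

def group_filter(src_groups, trg_groups, radius):
    """Group-level filtering: one lower bound per group pair decides
    the fate of all point pairs the two groups span, so only
    len(src_groups) * len(trg_groups) bounds are ever computed."""
    candidates = []
    for s_lm, s_r in src_groups:           # (landmark, group radius)
        kept = [t_id for t_id, (t_lm, t_r) in enumerate(trg_groups)
                if dist(s_lm, t_lm) - s_r - t_r <= radius]
        candidates.append(kept)
    return candidates
\end{lstlisting}
Under these assumptions, each source group ends up with a small candidate list of target groups, and only $z_{src}\times z_{trg}$ bounds are ever materialized.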
Therefore, in terms of memory efficiency, group-level bound computation can outperform point-level bound computation to a great extent.

\section{Hardware Acceleration} \label{sect: Architecture Design}
AccD is built on the CPU-FPGA architecture, which offers significant performance and energy efficiency and has been widely adopted as a modern data center solution for high-performance computing and acceleration. The host side of the AccD design is responsible for data grouping and distance computation filtering, which consist of complex operations and execution dependencies but lack pipelining and parallelism. The FPGA side of the AccD design, on the other hand, is built for accelerating the distance computations, which are composed of simple and vectorizable operations.

While the FPGA accelerator features high computation capability, the memory bandwidth bottleneck constrains the overall design performance. Therefore, optimizing data placement and memory architecture is the key to improving memory performance.
In addition, the OpenCL-based programming model adds a layer of architectural complexity to kernel design and management, which is also critical to the design performance. The AccD framework distinguishes itself by using a novel memory and kernel optimization strategy tailored for TI-optimized distance-related algorithms on CPU-FPGA designs.

\subsection{Memory Optimization}
After the GTI optimization removes redundant distance computations, each source point group has a different set of target groups as candidates for distance computation, as shown in Figure~\ref{table: inter-group memory.}\subfig{a}, where \textbf{Source-grp} is the ID of the source group and \textbf{Target-grp} is the ID of the target group. However, this raises two performance concerns.
\begin{table}[ht] \small
\begin{minipage}[b]{0.5\columnwidth}
\centering
\begin{tabular}{|| c | c ||}
\hline
\textbf{Source-grp}
& \makecell{\ \textbf{Target-grp}\\}
\\
\hline
\hline $s_1$ &	$t_1$, $t_4$, $t_6$
\\
\hline $s_2$ &	$t_8$, $t_{10}$, $t_{12}$
\\
\hline ... &	...
\\
\hline $s_5$ & $t_2$, $t_4$, $t_6$
\\
\hline $s_6$ & $t_8$, $t_{10}$, $t_{12}$
\\
\hline
\end{tabular}
\caption*{(a)}
\end{minipage}
\hspace*{-\textwidth} \hfill
\begin{minipage}[b]{0.5\columnwidth}
\centering
\label{table: Optimized Memory}
\begin{tabular}{|| c | c ||}
\hline
\textbf{Source-grp}
& \makecell{\ \textbf{Target-grp}\\}
\\
\hline
\hline $s_1$ &	$t_2$, $t_4$, $t_6$
\\
\hline $s_5$ &	$t_2$, $t_4$, $t_6$
\\
\hline $s_2$ &	$t_8$, $t_{10}$, $t_{12}$
\\
\hline $s_6$ & $t_8$, $t_{10}$, $t_{12}$
\\
\hline ... &	...
\\
\hline
\end{tabular}
\caption*{(b)}
\end{minipage}
\captionof{figure}{(a) Non-optimized inter-group memory access; (b) Optimized inter-group memory access.}
\label{table: inter-group memory.}
\end{table}
\vspace{-0.8em}

The first issue is inter-group memory irregularity and low data reuse. For example, the target group information ($t_1$, $t_4$, $t_6$) required by source group $s_1$ cannot be reused by $s_2$, since $s_2$ requires quite different target groups ($t_8$, $t_{10}$, and $t_{12}$) for distance computation; thus, additional costly memory accesses have to be carried out. To tackle this problem, AccD places source groups into contiguous memory space whenever they share the same set of candidate target groups, maximizing memory access efficiency. An example is shown in Figure~\ref{table: inter-group memory.}\subfig{b}, where source groups $s_2$ and $s_6$ are placed side by side in memory because they have the same list of target groups ($t_8$, $t_{10}$, and $t_{12}$); this exploits temporal memory locality without issuing additional memory accesses.

The second issue is intra-group memory irregularity. For example, points from \textit{groups 1}, \textit{2}, and \textit{3} occupy memory space at intervals, as shown in Figure~\ref{fig: memory optimization.}\subfig{b}. However, a group of points is usually accessed simultaneously due to the GTI optimization. This causes frequent, inefficient memory accesses to fetch individual points scattered across discontinuous memory addresses.
To solve this issue, AccD offers a second memory optimization that re-organizes the target/source points inside the same target/source group into continuous memory space within the same memory bank, as illustrated in Figure~\ref{fig: memory optimization.}\subfig{c}. This strategy largely benefits memory coalescing and external memory bandwidth while minimizing access contention, since points inside the same bank can be accessed efficiently and points inside different banks can be accessed in parallel.
\begin{figure}[ht]
\begin{minipage}[c]{0.3\columnwidth}
\centering
\begin{tabular}{|| c | c ||}
\hline
\textbf{Group}
& \makecell{\ \textbf{Points}\\}
\\
\hline
\hline $Grp_1$ & $3$, $8$, $9$
\\
\hline $Grp_2$ &	$5$, $6$, $7$
\\
\hline $Grp_3$ &	$1$, $2$, $4$
\\
\hline ... &	...
\\
\hline
\end{tabular}
\caption*{(a)}
\label{table: Group Point Mapping}
\end{minipage}
\qquad
\begin{minipage}[c]{0.3\columnwidth}
    \centering
    \includegraphics[width=0.7\columnwidth]{images/AccD-memory-opt1.pdf}
    \caption*{(b)}
\end{minipage}
\hfill
\begin{minipage}[c]{0.3\columnwidth}
    \centering
    \includegraphics[width=0.92\columnwidth]{images/AccD-memory-opt2.pdf}
    \caption*{(c)}
\end{minipage}
\captionof{figure}{(a) Group-point mapping; (b) Non-aligned intra-group memory; (c) Aligned intra-group memory.}
\label{fig: memory optimization.}
\end{figure}
\vspace*{-1em}
\subsection{Distance Computation Kernel}
Distance computation accounts for the majority of the runtime in distance-related algorithms. In AccD, after TI filtering on the CPU, the remaining distance computations are accelerated on the FPGA. Points involved in the remaining distance computations are organized into two sets, a source set and a target set, represented as two matrices, $Mat_{A}$ ($m\times d$) and $Mat_{B}$ ($n\times d$), respectively, where each row represents a point with $d$ dimensions. The squared distance between a source point $a_i$ (the $i$-th row of $Mat_A$) and a target point $b_j$ (the $j$-th row of $Mat_B$) can be decomposed into three parts, as shown in Equation~\ref{equ: distance computation},
\begin{equation} \small
    \label{equ: distance computation}
    {\|a_i - b_j\|}^2 = \|a_i\|^2 - 2\, a_i \cdot b_j + \|b_j\|^2
\end{equation}
where computing all $\|a_i\|^2$ or $\|b_j\|^2$ terms only takes $O(m\times d)$ and $O(n\times d)$, while the cross terms $Mat_A\cdot Mat_B^{T}$ take $O(m\times n\times d)$, which dominates the overall computation complexity. AccD accelerates $Mat_A\cdot Mat_B^{T}$ through highly efficient matrix-matrix multiplication, which benefits the hardware implementation on FPGA.
\begin{figure} [ht!] \small
    \centering
    \includegraphics[width=0.8\columnwidth]{images/mm-opt.pdf}
    \caption{AccD Matrix-based Distance Computation.}
    \label{fig: AccD Distance Computation.}
\end{figure}

The overall computation process is described in Figure~\ref{fig: AccD Distance Computation.}: the Row-wise Square Sums (RSS) of the source ($Mat_A$) and target ($Mat_B$) sets are pre-computed in a fully parallel manner, and the vector multiplication between each source and target point is mapped to an OpenCL kernel thread for fine-grained parallelization. Moreover, a block of threads, highlighted in the red square box of Figure~\ref{fig: AccD Distance Computation.}, forms a kernel thread workgroup, which shares a part of the source and target points to increase on-chip data locality.
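As a concrete illustration of this decomposition, the sketch below (our own simplified NumPy rendering, not AccD's OpenCL kernel) computes the full pairwise squared-distance matrix from the two RSS vectors and a single matrix-matrix product:
\begin{lstlisting}[language=Python]
import numpy as np

def pairwise_sq_dist(mat_a, mat_b):
    """Pairwise squared distances via the RSS + GEMM decomposition.

    mat_a: (m, d) source points; mat_b: (n, d) target points.
    Returns an (m, n) matrix D with D[i, j] = ||a_i - b_j||^2.
    """
    rss_a = np.sum(mat_a ** 2, axis=1)   # O(m*d), row-wise square sums
    rss_b = np.sum(mat_b ** 2, axis=1)   # O(n*d)
    cross = mat_a @ mat_b.T              # O(m*n*d), the dominant GEMM
    return rss_a[:, None] - 2.0 * cross + rss_b[None, :]

# Sanity check against the naive definition.
a = np.random.rand(64, 8)
b = np.random.rand(96, 8)
naive = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
assert np.allclose(pairwise_sq_dist(a, b), naive)
\end{lstlisting}
On the FPGA, each entry of the GEMM is mapped to a kernel thread, and the threads within one workgroup tile the computation so that a block of rows of $Mat_A$ and columns of $Mat_B^{T}$ can be staged in on-chip memory.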
Based on this kernel organization, AccD hardware architectural design offers several tunable hyperparameters for performance and resource trade-off: the size of kernel block, the number of parallel pipeline in each kernel block, etc. To efficiently find the \"optimal\" parameters that can maximize overall performance while respecting the constraints, we harness the AccD explorer for efficient design space search, which is detailed in Section~\\ref{sect: Design Space Exploration}.\n\n\\section{AccD Compiler} \\label{sect: AccD Compiler}\nIn this section, we detail AccD compiler in two aspects: design parameters and constraints, and design space exploration.\n\\vspace{-0.2em}\n\\subsection{Design Parameters and Constraints}\nAccD uses a parameterized design strategy for better design flexibility and efficiency. It takes the design parameters and constraints from algorithm and hardware to explore and locate the \"optimal\" design point tailored for the specific application scenario. At the algorithm level, the number of groups affects distance computation filtering performance. At the hardware level, there are three parameters: 1) Size of computation block, which decides the size of data shared by a group of computing elements; 2) SIMD factor, which decides the number of computing elements inside each computation block; 3) Unroll factor, which tells the degree of parallelization in each single distance computation. In addition, there are several hardware constraints, such as the on-chip memory size, the number of logic units, and the number of registers. All of these parameters and constraints are included in our analytical model for design exploration.\n\n\\vspace{0.4em}\n\\subsection{Design Space Exploration} \n\\label{sect: Design Space Exploration}\nFinding the best combination of design configurations (a set of hyper-parameters) under the given constraints requires non-trivial efforts in the design space search. Therefore, we incorporate an AccD explorer in our compiler framework for efficient design space exploration (Figure~\\ref{fig: AccD Explorer}). AccD explorer takes a set of raw configurations (hyper-parameters) as the initial input, and generates the optimal configuration as the output through several iterations of the design configuration optimization process. In particular, AccD explorer consists of three major phases: \\textit{Configuration Generation and Selection}, \\textit{Performance and Resource Modeling}, \\textit{Constraints Validation}. \n\\begin{figure} [h] \\small\n \\centering\n \\includegraphics[height=10em]{images\/AccD-Compiler-workflow.pdf}\n \\caption{AccD Explorer.}\n \\label{fig: AccD Explorer}\n\\end{figure}\n\\vspace{-1.3em}\n\\paragraph{Configuration Generation and Selection}\nThe functionality of this phase depends on its input. There are two kinds of inputs: If the input is from the initial configurations, this phase will directly feed these configurations to the modeling phase for performance and resource evaluation; If the input is the result from the constraints validation in the last iteration, this phase will leverage the genetic algorithm to crossover the \"premium\" configurations kept from the last iteration, and generate a new set of configurations for the modeling phase. \n\\paragraph{Performance Modeling}\nPerformance modeling measures the design latency and bandwidth requirement based on the input design configurations. 
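Before presenting the analytical models, a compact sketch of how the explorer's phases fit together may help. This is an illustrative rendering with hypothetical helper names: \code{latency\_model} and \code{resource\_model} stand in for the performance and resource models formulated below, and \code{crossover} for the genetic operator.
\begin{lstlisting}[language=Python]
import random

def satisfies(resources, budget):
    """Constraints validation: every modeled resource within budget."""
    return all(resources[k] <= budget[k] for k in budget)

def crossover(c1, c2):
    """Single-point crossover of two configuration dictionaries."""
    keys = sorted(c1)
    cut = random.randrange(1, len(keys))
    return {k: (c1[k] if i < cut else c2[k]) for i, k in enumerate(keys)}

def explore(init_configs, latency_model, resource_model, budget,
            max_iters=50, eps=1e-3):
    """GA-style design space exploration (illustrative sketch only)."""
    population = list(init_configs)
    best_cfg, best_prev = None, float("inf")
    for _ in range(max_iters):
        # Performance and resource modeling of every candidate.
        scored = [(latency_model(c), resource_model(c), c) for c in population]
        # Constraints validation: keep only feasible configurations.
        feasible = [(lat, c) for lat, res, c in scored
                    if satisfies(res, budget)]
        feasible.sort(key=lambda t: t[0])
        if len(feasible) < 2:
            break
        best_lat, best_cfg = feasible[0]
        if abs(best_prev - best_lat) < eps:   # modeling results converged
            break
        best_prev = best_lat
        # Keep the "premium" half and generate offspring by crossover.
        parents = [c for _, c in feasible[: max(2, len(feasible) // 2)]]
        population = parents + [crossover(*random.sample(parents, 2))
                                for _ in range(len(parents))]
    return best_cfg
\end{lstlisting}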
We formulate the design latency using Equation~\ref{equ: Latency model},
\begin{equation} \small
    \label{equ: Latency model}
    Latency = Latency_{filt} + Latency_{comp}
\end{equation}
where $Latency_{filt}$ and $Latency_{comp}$ are the times of the GTI filtering process and the remaining distance computations, respectively. They can be calculated as in Equation~\ref{equ: Runtime Filtering.},
\begin{equation} \small
\label{equ: Runtime Filtering.}
\begin{aligned}
    Latency_{filt} &=\frac{n_{trg\_grp}\times n_{src\_grp} \times src_{size}\times trg_{size}\times d}{n_{iteration}}
    \\
    Latency_{comp} &=\frac{src_{size}\times trg_{size}\times ratio_{save}\times d}{blk^2\times frequency\times unroll \times simd}
\end{aligned}
\end{equation}
where $n_{src\_grp}$ and $n_{trg\_grp}$ are the numbers of groups for source and target points, respectively; $src_{size}$ and $trg_{size}$ are the numbers of points inside the source and target sets, respectively; $d$ is the data dimensionality; $n_{iteration}$ is the number of grouping iterations; $blk$ is the size of the computation kernel block; $frequency$ is the FPGA design clock frequency; $unroll$ is the distance computation unroll factor; $simd$ is the number of parallel worker threads inside each computation block; and $ratio_{save}$ is the distance saving ratio achieved through GTI filtering (Equation~\ref{equ: Saving Ratio}),
\begin{equation} \small
    \label{equ: Saving Ratio}
    ratio_{save} = \frac{n_{iteration}}{\alpha} \times \sqrt{\frac{src_{size}\times trg_{size}}{n_{src\_grp} \times n_{trg\_grp}}}
\end{equation}
where $\alpha$ is the density of the point distribution. This formula indicates that increasing the number of iterations or the number of points inside each group improves the GTI filtering effectiveness, while a higher point distribution density $\alpha$ (\textit{i.e.}, points closer to each other) lowers it.

To get the required bandwidth $BW$ of the current design, we leverage Equation~\ref{equ: required bandwith},
\begin{equation} \small
\label{equ: required bandwith}
    BW = \frac{(src_{size} + trg_{size})\times d \times size_{data\_type}}{Latency}
\end{equation}
where $size_{data\_type}$ is either 32 bits for \textit{int} and \textit{float} or 64 bits for \textit{double}.

\paragraph{Resource Modeling}
Directly measuring the hardware resource usage of an accelerator design from a high-level algorithm description is challenging because of the hidden transformations and optimizations in the hardware design suite. AccD therefore uses a micro-benchmark-based methodology to estimate hardware resource usage through analytical modeling. The major hardware resource consumption of AccD comes from the distance computation kernel, which depends on several design factors, including the kernel block size, the number of SIMD workers, etc.

In AccD's resource analytical model, the design factors are classified into two categories: \textit{dataset-dependent} and \textit{dataset-independent} factors. The main idea behind AccD resource modeling is to obtain exact hardware resource consumption statistics through micro-benchmarks on hardware designs with different dataset-independent factors. For example, we can benchmark a single distance computation kernel with different computation block sizes to get its resource statistics.
Since this factor is dataset-independent, it can be decided before knowing the dataset details. To estimate the resource consumption for datasets with different sizes and dimensionalities, AccD then leverages a formula-based approach to estimate the overall hardware resource consumption (Equation~\ref{equ: Resource estimation}), which combines online information (\textit{e.g.}, kernel organization and dataset properties) and offline information (\textit{e.g.}, micro-benchmark statistics).
\begin{equation} \small
    \label{equ: Resource estimation}
    Resource_{est} = Resource_{single}\times \mathbf{ceil}\left(\frac{src_{size}}{blk}\right)\times \mathbf{ceil}\left(\frac{trg_{size}}{blk}\right)
\end{equation}
where the types of $Resource$ can be on-chip memory, computing units, and logic operation units; $Resource_{est}$ is the estimated overall usage of a certain type of resource for the whole design; and $Resource_{single}$ is the usage of that type of resource for a single distance computation kernel block.

\paragraph{Constraints Validation}
Constraints validation is the third phase of the AccD explorer; it checks whether the resource consumption of a given configuration is within the budget of the given hardware platform. The inputs of this phase are the resource estimation results from the resource modeling step. The design constraint inequalities are listed in Equation~\ref{eqn: Resource Constraints}, which includes $Mem$ (the size of on-chip memory), $BW$ (the bandwidth of data communication between external memory and on-chip memory), $Computing\_Unit$ (the number of computing units), and $Logic\_Unit$ (the number of logic units):
\begin{equation} \small
\label{eqn: Resource Constraints}
\begin{aligned}
    BW &\leq BW_{max},
    \\
    Mem &\leq Mem_{max},
    \\
    Computing\_Unit &\leq Computing\_Unit_{max},
    \\
    Logic\_Unit &\leq Logic\_Unit_{max}
\end{aligned}
\end{equation}

The constraints validation phase discards configurations that cannot meet the design performance and constraints, and only keeps the ``well-performed'' configurations for further optimization in the next iteration. It also records the modeling information of the best configuration in each iteration, which is used to terminate the optimization process when the difference between the modeling results of the best configurations in two consecutive iterations falls below a predefined threshold. This strategy also helps avoid unnecessary time cost. After the AccD explorer terminates, the ``best'' configuration with the maximum design performance under the given constraints is output as the ``optimal'' solution for the AccD design.


\section{Conclusion}
\vspace{-0.4\baselineskip}
\label{sect: Conclusion}
In this paper, we present our AccD compiler framework to accelerate distance-related algorithms on the CPU-FPGA platform.
Specifically, AccD leverages a simple but expressive language construct (DDSL) to unify the distance-related algorithms, and an optimizing compiler to improve the design performance from algorithmic and hardware perspective systematically and automatically.\nRigorous experiments on three popular algorithms (K-means, KNN-join, and N-body simulation) demonstrate the AccD as a powerful and comprehensive framework for hardware acceleration of distance-related algorithms on the modern CPU-FPGA platforms.\n\\section{\\textbf{D}istance-related Algorithm \\textbf{D}omain-\\textbf{S}pecific \\textbf{L}anguage (DDSL)} \n\\label{sect: DDSL}\nDistance-related algorithms share commonalities across different application domains and scenarios, even though they look different in their high-level algorithmic description. Therefore, it is possible to generalize these distance-related algorithms. AccD framework defines a DDSL, which provides a high-level programming interface to describe distance-related algorithms in a unified manner. Unlike the API-based programming interface used in the TOP framework~\\cite{Topframework}, DDSL is built on C-like language and provides more flexibility in low-level control and performance tuning support, which is crucial for FPGA accelerator design.\n\nSpecifically, DDSL utilizes several constructs to describe the basic components (\\textbf{Definition}, \\textbf{Operation}, and \\textbf{Control}) of the distance-related algorithms, and also identify the potential parallelism and pipeline opportunities during the design time. We detail these constructs in the following part of this section. \n\n\\subsection{Data Construct}\nData construct is a basic \\textbf{Definition Construct}. It leverages \\code{DSet} primitive to indicate the name of the data variable, and the \\code{DType} primitive to notate the type characters of the defined variable. Data construct serves as the basis for AccD compiler to understand the algorithm description input, such as the data points that are used in the distance-related algorithms. An example of data constructs is shown in the code below, where we define the variable and dataset using DDSL data construct.\n\\vspace*{-0.3\\baselineskip}\n\\begin{lstlisting}\n\/* Define a single variable *\/\nDVar [setName] DType [Optional_Initial_Value];\n\/* Define the matrix of dataset *\/\nDSet [setName] DType [size] [dim];\n\\end{lstlisting}\n\\vspace*{-0.3\\baselineskip}\n\nIn most distance-related algorithms, the dataset can be defined as the source set and the target set. For example, in K-means, the source set is the set of data points, and the target set is the set of clusters. Currently, AccD supports several data types including \\textit{int} (32-bit), \\textit{float} (32-bit), \\textit{double} (64-bit) based on the users' requests, algorithm performance, and accuracy trade-offs.\n\n\\subsection{Distance Compute Construct}\nDistance computation is the core \\textbf{Operation Construct} for distance-related algorithms, which measures the exact distance between two different data points. This construct requires several fields, including data dimensionality, distance metrics, and weight matrix (if weighted distance is specified).\n\\begin{lstlisting}\nAccD_Comp_Dist(Input p1, Input p2, Output disMat, Output idMat, Dim dim, Met mtr, Weg mat)\n\\end{lstlisting}\n\\vspace{-1.5em}\n\\begin{table}[ht] \\small\n\\centering\n \\begin{tabular}{|| c | l ||} \n \\hline\n p1, p2 & Input data matrix. 
($n_1 \\times d$, $n_2 \\times d$)\\\\ \n \\hline\n disMat & Output distance matrix. ($n_1 \\times n_2$)\\\\\n \\hline\n idMat & Output id matrix. ($n_1 \\times n_2$) \\\\\n \\hline\n dim & Dimensionality of input data point.\\\\\n \\hline\n mtr & Distance metric:\\code{(Weighted|Unweighted)}\\\\ \\hline\n mat & Weight matrix: Used for weighted distance ($1\\times d$)\\\\\n \\hline\n\\end{tabular}\n\\caption{Distance Compute Construct Parameters.}\n\\label{fig: Distance Compute Construct Parameters.}\n\\end{table}\n\n\\vspace*{-1.3\\baselineskip}\n\\subsection{Distance Selection Construct}\nDistance selection construct is an \\textbf{Operation Construct} for distance value selection and it returns the Top-K smallest or largest distances and their corresponding points ID number from the provided distance and ID list. This construct helps AccD compiler to understand the distances of users' interests.\n\\begin{lstlisting}\nAccD_Dist_Select(Input distMat, Input idMat, Output TopKMat, Range ran, Scope scp)\n\\end{lstlisting}\n\\begin{table}[h] \\small\n\\centering\n \\begin{tabular}{|| c | p{18em}||} \n \\hline\n TopKMat & Top-K id matrix ($n_1 \\times k$)\\\\ \n \\hline\n ran & Scalar value of \\code{K} (\\textit{e.g.}, K-means, KNN) or distance threshold (\\textit{e.g.}, N-body Simulation)\\\\\n \\hline\n scp & Top-K \\code{(smallest|largest)} values\\\\\n \\hline\n\\end{tabular}\n\\caption{Distance Selection Construct Parameters.}\n\\label{fig: Distance Selection Construct Parameters.}\n\\end{table}\n\\vspace*{-1.1\\baselineskip}\n\\subsection{Data Update Construct}\n\\vspace*{-0.2\\baselineskip}\nData update construct is an \\textbf{Operation Construct} for updating the data points based on the results from the prior constructs. For example, K-means updates the cluster centers by averaging the positions of the points inside. This construct requires the variable to be updated and additional information to finish this update, such as the point-to-cluster distances. The status of this data update will be returned after the completion of all its inside operations. The status variable is to tell whether the data update makes a difference or not.\n\\begin{lstlisting} \nAccD_Update(Update var, Input p1 ,..., Input pm, Status s)\n\\end{lstlisting}\n\\vspace*{-1.2\\baselineskip}\n\\begin{table}[ht] \\small\n\\centering\n \\begin{tabular}{|| c | l ||} \n \\hline\n upVar & Input data\/dataset to be updated \\\\\n \\hline\n {p1, ..., pm} & Additional information used in update\\\\ \n \\hline\n $S$ & Status of update operation.\\\\\n \\hline\n\\end{tabular}\n\\caption{Data Update Construct Parameters.}\n\\label{fig: Data Update Construct Parameters.}\n\\end{table}\n\n\\vspace*{-1.5\\baselineskip}\n\\subsection{Iteration Construct}\n\\vspace*{-0.5\\baselineskip}\nIteration construct is a top-level \\textbf{Control Construct}. It is used to describe the distance-related algorithms that require iteration, such as K-means. Iteration construct requires users to provide either the maximum number of iteration or other exit condition.\n\\begin{lstlisting}[mathescape=true]\nAccD_Iter(maxIterNum|exitCond){\n subConstruct $sc_1$;\n subConstruct $sc_2$;\n ...\n subConstruct $sc_n$;\n}\n\\end{lstlisting}\n\n\\vspace*{-0.5\\baselineskip}\n\\subsection{Example: K-means}\n\\vspace*{-0.2em}\nTo show the expressiveness of DDSL, we take K-means as an example. 
As the code below shows, with no more than 20 lines, DDSL captures the key components of a user-defined K-means algorithm, which is what the AccD compiler needs to generate designs for CPU-FPGA platforms.
\vspace{0.2em}
\begin{lstlisting}
DVar K int 10;
DVar D int 20;
DVar psize int 1400;
DVar csize int 200;
DSet pSet float psize D;
DSet cSet float csize D;
DSet distMat float psize csize;
DSet idMat int psize csize;
DSet pkMat int psize K;
AccD_Iter(S){
    S = false;
    /* Compute the inter-dataset distances */
    AccD_Comp_Dist(pSet, cSet, distMat, idMat, D, "Unweighted L1", 0);
    /* Select the distances of interest */
    AccD_Dist_Select(distMat, idMat, pkMat, K, "smallest");
    /* Update the cluster centers */
    AccD_Update(cSet, pSet, pkMat, S)
}
\end{lstlisting}

\section{Evaluation} \label{sect: Evaluation}
In this section, we choose three representative benchmarks (K-means, KNN-join, and N-body simulation) and evaluate their corresponding AccD designs on the CPU-FPGA platform.

\paragraph{K-means}
K-means~\cite{LloydKMeans,dataclustering50, efficientKmeans, coates2012learning, ray1999determination} clusters a set of points into several groups in an iterative manner. At each iteration, it first computes the distances between each point and all clusters, and then updates the clusters based on the average position of the points inside them. We choose it as a benchmark because it shows the benefits of AccD's hierarchical (\textbf{Trace-based + Group-level}) bound computation optimization on iterative algorithms with disjoint source and target sets.

\paragraph{KNN-join}
KNN-join~\cite{altman1992introduction, KNNJoinsHybridApproach, KNNJoinsDataStreams} finds the Top-K nearest neighbors for each point in the source set from the target set. It first computes the distances between each source point and all target points, then ranks the K smallest distances for each source point to obtain its closest Top-K target points. KNN-join demonstrates the effectiveness of AccD's hybrid (\textbf{Two-landmark + Group-level}) bound computation optimization on non-iterative algorithms.

\paragraph{N-body Simulation}
N-body simulation~\cite{nylons2007fast, ida1992n} mimics particle movement within a certain range of 3D space. At each time step, the distances between each particle and its neighbors (within a radius $R$) are first computed, and then the acceleration and new position of each particle are updated based on these distances. While N-body simulation is also iterative, it differs from K-means in several ways: 1) N-body simulation uses the same dataset (particles) for the source and target sets, whereas K-means operates on different source (point) and target (cluster) sets; 2) all points in the N-body simulation change their positions over time, whereas in K-means only the target set (clusters) changes position during the center update; 3) N-body simulation has source and target sets of the same size, whereas the K-means target set (clusters) is generally much smaller than the source set (points).
N-body simulation can help us to show the strength of AccD hybrid bound computation (\\textbf{Two-landmark + Trace-based + Group-level}) on iterative algorithms with the same source and target set.\n\n\n\\vspace{-0.3\\baselineskip}\n\\subsection{Experiment Setup}\n\\vspace{-0.2\\baselineskip}\n\\paragraph{Tools and Metrics} \nIn our evaluation, we use Intel Stratix 10 DE10-Pro~\\cite{DE10-Pro} as the FPGA accelerator and run the host side software program on Intel Xeon Silver 4110 processor~\\cite{intelXeon} (8-core 16-thread, 2.1GHz base clock frequency, 85W TDP). DE10-Pro FPGA has 378,000 Logic elements (LEs), 128,160 adaptive logic modules (ALM), 512,640 ALM registers, 648 DSPs, and 1,537 M20K memory blocks. We implement AccD design on DE10-Pro by using Intel Quartus Prime Software Suite~\\cite{IntelQuartus} with Intel OpenCL SDK included. To measure the system power consumption (Watt) accurately, we use the off-the-shelf Poniie PN2000 as the external power meter to get the runtime power of Xeon CPU and DE10 Pro FPGA.\n\\vspace{-1em}\n\\begin{table}[h] \\small\n\\centering\n\\caption{Implementation Description.}\n\\label{table: Implementation Description.}\n\\begin{tabular}{|| c | C{9em} | L{9em} ||}\n\\hline\n\\textbf{Name}\n& \\makecell{\\ \\textbf{Techniques}}\n& \\makecell{\\ \\textbf{Description}} \n\\\\ \n\\hline\n\\hline \\textbf{Baseline} & Standard Algorithm without any optimization, CPU. & Naive for-loop based implementation on CPU.\n\\\\ \n\\hline \\textbf{TOP} & Point-based Triangle-inequality Optimized Algorithms, CPU. & TOP \\cite{Topframework} optimized distance-related algorithm running on CPU.\n\\\\ \n\\hline \\textbf{CBLAS} & CBLAS library Accelerated Algorithms, CPU. & Standard distance-related algorithm with CBLAS~\\cite{openblas} acceleration.\n\\\\\n\\hline \\textbf{AccD} & Algorithmic-hardware co-design, CPU-FPGA platform. & GTI filtering and FPGA acceleration of distance computations.\n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\begin{table*}[t] \\small\n\\centering\n \\begin{tabular}{|| l c c c || l c c || c c ||} \n\\hline\n \\multicolumn{4}{||c||}{\\textbf{K-means}} \n & \\multicolumn{3}{c||}{\\textbf{KNN-join}} \n & \\multicolumn{2}{c||}{\\textbf{N-body Simulation}}\\\\ \n\\hline\n \\textbf{Dataset} & \\textbf{Size} & \\textbf{Dimension} & \\textbf{\\#Cluster} \n &\n \\textbf{Dataset} & \\textbf{Dimension} & \\textbf{\\#Source} & \n \\textbf{Dataset} & \\textbf{\\#Particle} \\\\\n\\hline\nPoker Hand & 25,010 &\t11\t & 158 & \nHarddrive1\t& 64 & 68,411 &\nP-1 & 16,384 \n \\\\ \nSmartwatch Sens & 58,371\t& 12\t& 242 & \nKegg Net Directed\t & 24 & 53,413\t &\nP-2 & 32,768\n\\\\\nHealthy Older People & 75,128 & \t9\t & 274 & \n3D Spatial Network\t & 3 & 434,874 &\nP-3 & 59,049\n\\\\\nKDD Cup 2004 & 285,409 & \t74\t & 534 & \nKDD Cup 1998\t & 56 & 95,413\t &\nP-4 & 78,125 \n\\\\\nKegg Net Undirected & 65,554 &\t28\t& 256 & \nSkin NonSkin & 4 & 245,057 & \nP-5 & 177,147 \n\\\\\nIpums & 70,187\t& 60 &\t265 & \nProtein\t & 11 & 26,611 &\nP-6 & 262,144 \n \\\\\n \\hline\n \\end{tabular}\n \\caption{Datasets for Evaluation.}\n \\label{table: Evaluation Dataset}\n\\end{table*} \n\n\\begin{figure*}[h] \\small\n \\centering\n \\subfloat[]{\\includegraphics[width=0.33\\textwidth]{images\/speedup_1.pdf}}\n \\subfloat[]{\\includegraphics[width=0.33\\textwidth]{images\/speedup_2.pdf}}\n \\subfloat[]{\\includegraphics[width=0.33\\textwidth]{images\/speedup_3.pdf}}\n \\caption{Performance Comparison (TOP, CBLAS, AccD): (a) K-means (b) KNN-Join (c) N-body Simulation. 
Note: Speedup is normalized w.r.t.\ Baseline.}
    \label{fig: Speedup Performance Comparison.}
\end{figure*}
\begin{figure*}[ht!] \small
    \centering
    \subfloat[]{\includegraphics[width=0.33\textwidth]{images/power-efficiency-1.pdf}}
    \subfloat[]{\includegraphics[width=0.33\textwidth]{images/power-efficiency-2.pdf}}
    \subfloat[]{\includegraphics[width=0.33\textwidth]{images/power-efficiency-3.pdf}}
    \caption{Energy Efficiency Comparison (TOP, CBLAS, AccD): (a) K-means. (b) KNN-Join. (c) N-body Simulation. Note: Energy Efficiency is normalized w.r.t.\ Baseline.}
    \label{fig: Energy Efficiency Comparison}
\end{figure*}
\vspace{-1em}
\paragraph{Implementations}
The CPU-based implementations consist of three types of programs: the naive for-loop sequential implementation without any optimization (selected as our \textbf{Baseline} to normalize speedup and energy efficiency), the algorithm optimized by the \textbf{TOP}~\cite{Topframework} framework, and the algorithm optimized by the \textbf{CBLAS}~\cite{openblas} computing library. Note that a TOP + CBLAS implementation is \textbf{not} included in our evaluation: after applying TOP point-based TI filtering, each point in the source set has a distinctive list of points from the target set for distance computation, whereas CBLAS requires uniformity in the distance computations. Therefore, it is challenging to combine the TOP and CBLAS optimizations.

\paragraph{Dataset}
In the evaluation, we use six datasets for each algorithm. The selected datasets cover a wide spectrum of mainstream datasets, including datasets from the UCI Machine Learning Repository~\cite{UCI-dataset} and datasets used by previous papers~\cite{Topframework, ding2015yinyang, chen2017sweet} in related domains. Details of these datasets are listed in Table~\ref{table: Evaluation Dataset}. Note that the KNN-join algorithm finds the Top-1000 closest neighbors of each query point.

\subsection{Comparison with Software Implementation}
\vspace{-0.2\baselineskip}
\label{sect: Comparison with Software Implementation}
\paragraph{Performance Comparison}
As shown in Figure~\ref{fig: Speedup Performance Comparison.}, TOP, CBLAS, and AccD achieve average $9.12\times$, $9.19\times$, and $31.42\times$ speedups over Baseline across all algorithm and dataset settings, respectively. AccD always maintains the highest speedup among these implementations. This is largely due to the AccD GTI optimization, which reduces distance computation, together with its efficient hardware acceleration of the remaining distance computations on FPGA.

We also observe that the TOP implementation shows its strength on large datasets. For example, on the dataset 3D Spatial Network ($n=434,874$) in KNN-join, the TOP implementation achieves $39.78\times$ speedup: the fine-grained point-based TI optimization of TOP removes most (more than 90\%) of the unnecessary distance computations, which benefits overall performance to a great extent. Note that the intrinsic point distribution of a dataset also affects the filtering performance of TOP, but in general, larger datasets let TOP spot and remove more redundant computations.

We also notice that the CBLAS implementation demonstrates its strength on datasets with relatively high dimensionality.
For example, on the dataset KDD Cup 2004 ($d=74$) in the K-means algorithm, CBLAS achieves $11.78\times$ speedup over Baseline, which is higher than its performance on the other K-means datasets. This is because, on high-dimension datasets, the CBLAS implementation gains more benefit from parallelized computing and more regularized memory access, whereas in low-dimension settings the same optimization yields only minor speedup.

Our AccD design achieves considerable speedup on datasets with large size and high dimensionality. For example, on the datasets KDD Cup 2004 ($n=285,409, d=74$) and Ipums ($n=70,187, d=60$) in K-means, AccD achieves $51.61\times$ and $66.61\times$ speedup over Baseline, also significantly higher than both the TOP and CBLAS implementations. This conclusion extends to KNN-join as well, such as the $88.95\times$ speedup on the dataset KDD Cup 1998 ($n=95,413$, $d=56$). Our AccD design effectively reconciles the benefits of both the GTI optimization and FPGA acceleration, where the former provides the opportunity to reduce distance computation at the algorithm level, and the latter boosts performance from the hardware acceleration perspective. More importantly, our AccD design balances these two benefits to maximize overall performance.


\paragraph{Energy Comparison}
The energy efficiency of the AccD design is also significant. For example, on the K-means algorithm, AccD designs deliver on average $116.85\times$ better energy efficiency than Baseline, which is significantly higher than the TOP and CBLAS implementations. There are two reasons behind these results: 1) much lower power consumption: the AccD CPU-FPGA design only consumes $5W \sim 17.12W$ across all algorithm and dataset settings, whereas the Intel Xeon CPU consumes at least $20.9W$ and $42.49W$ on the TOP and CBLAS implementations, respectively; 2) considerable performance: the AccD design achieves much better speedup (more than $5\times$ on average) than TOP and CBLAS, which contributes to the overall design energy efficiency.

Among these implementations, the CBLAS implementation has the lowest energy efficiency, since it relies on the multi-core parallel processing capability of the CPU, which improves performance at the cost of much higher power consumption ($65.79W$ on average). TOP only leverages the single-core processing capability of the CPU and achieves moderate performance with effective distance computation reduction, which results in less power consumption ($25.59W$ on average) and higher energy efficiency ($9.12\times$ on average) compared with Baseline. Different from the TOP and CBLAS implementations, the AccD design is built upon a low-power platform with considerable performance, which shows a far better energy-performance trade-off.

\vspace{-0.15\baselineskip}
\subsection{Performance Benefits Analysis}
\vspace{-0.15\baselineskip}
To analyze the performance benefits of the AccD CPU-FPGA design in detail, we use K-means as the example algorithm. Specifically, we build four implementations for comparison: 1) TOP K-means on CPU; 2) TOP K-means on the CPU-FPGA platform; 3) AccD K-means on CPU; 4) AccD K-means on the CPU-FPGA platform. Note that TOP K-means is designed for sequential CPUs, and there is no publicly available TOP implementation on CPU-FPGA platforms.
For a fair comparison, we implement TOP K-means on CPU-FPGA platform with memory optimizations (inter-group and intra-group memory optimization) and distance computation kernel optimization (Vector-Matrix multiplication). These optimizations improve the data reuse and memory access performance.\n\\vspace{-0.8\\baselineskip}\n\\begin{figure} [ht] \\small\n \\centering\n \\makebox{\\includegraphics[width=0.85\\columnwidth]{images\/KMeans-speedup-breakdown.pdf}}\n \\caption{AccD Performance Benefits Breakdown.}\n \\label{fig: AccD Performance Benefits Breakdown.}\n\\end{figure}\nWe compute the normalized speedup performance of each implementation w.r.t the naive for-loop based K-means implementation on CPU. \n\nAs shown in Figure~\\ref{fig: AccD Performance Benefits Breakdown.}, AccD K-means on CPU-FPGA platform can always deliver the best overall speedup performance among these implementations. We also observe that TOP K-means can achieve average $3.77\\times$ speedup on CPU, however, directly porting this optimization towards CPU-FPGA platform could even lead to inferior performance (average $2.63\\times$). Even though we manage to add several possible optimizations, applying such fine-grained TI optimization from TOP would still cause a large divergence of computation among points, leading to low data reuse and inefficient memory access. \n\nWe also notice that AccD design on CPU achieves lower speedup (average $2.69\\times$) compared with the TOP (average $3.77\\times$), since its coarse-grained GTI optimization spots a fewer number of unnecessary distance computations. However, when combining AccD design with CPU-FPGA platform, the benefits of AccD GTI optimization become prominent (average $37.37\\times$), since it can maintain computation regularity while reducing memory overhead to facilitate the hardware acceleration on FPGA. Whereas, applying optimization to maximize the algorithm-level benefits while ignoring hardware-level properties would result in poor performance, such as the TOP (CPU-FPGA) implementation. \nMoreover, comparison of AccD (CPU) and AccD (CPU-FPGA) can also demonstrate the effectiveness of using FPGA as the hardware accelerator to boost the performance of the algorithms, which can deliver additional $9.68\\times \\sim 15.71\\times$ speedup compared with the software-only solution.\n\n\n\n\n\n\n\n\\section{Introduction} \nDistance-related algorithm (\\textit{e.g.}, K-means~\\cite{LloydKMeans}, KNN~\\cite{altman1992introduction}, and N-body Simulation~\\cite{NBody-simulation}) plays a vital role in many domains, including machine learning, computational physics, etc. However, these algorithms often come with high computation complexity, leading to poor performance and limited applicability.\nTo improve their performance, FPGA-based acceleration gains lots of interests from both industry and research field, given its great performance and energy-efficiency. However, accelerating distance-related algorithms on FPGAs requires non-trivial efforts, including the hardware expertise, time and monetary cost. While existing works try to ease this process, they inevitably fall in short in one of the following aspects.\n\n\n\n\\textbf{Rely on problem-specific design and optimization while missing effective generalization.} There is no such unified abstraction to formalize the definition and optimization of distance algorithms systematically. 
Most of the previous hardware designs and optimizations~\\cite{KMeansMicroarray,lin2012k,kdtreeKMeanscolorimage, KNNfpgahls} are heavily coded for a specific algorithm (\\textit{e.g.}, K-means), which can not be shared with different distance-related algorithms. Moreover, these \"hard-coded\" strategies could also fail to catch up with the ever-changing upper-level algorithmic optimizations and the underlying hardware settings, which could result in a large cost of re-design and re-implementation during the design evolvement.\n\n \\textbf{Lack of algorithm-hardware co-design.} Previous algorithmic~\\cite{elkan2003using, ding2015yinyang} and hardware optimizations~\\cite{lin2012k, kdtreeKMeanscolorimage, KMeansMicroarray, multicoreKMeans, KNNfpgahls} are usually applied separately instead of being combined collaboratively. Existing algorithmic optimizations, most of which are based on \\textit{Triangle Inequality (TI)}~\\cite{elkan2003using, ding2015yinyang, Topframework, chen2017sweet}, are crafted for sequential-based CPU. Despite removing a large number of distance computations, they also incur high computation irregularity and memory overhead. Therefore, directly applying these algorithmic optimizations to massively parallel platforms without taking appropriate hardware-aware adaption could lead to inferior performance\n\n\\textbf{Count on FPGAs as the only source of acceleration.} \nPrevious works~\\cite{ParallelArchitecturesKNN, IPcoresKNN, ParameterizedKMeans, Lavenier00fpgaimplementation, KMeansMicroarray, KNNfpgahls} place the whole algorithm on the FPGA accelerator without considering the assists from the computing resource on the host CPU. As a result, their designs are usually limited by the on-chip memory and computing elements, and cannot fully exploit the power of the FPGA. Moreover, they miss the full performance benefits from the heterogeneous computing paradigm, such as using the CPU for complex logic and control operations while offloading the compute-intensive tasks to the FPGA.\n\n\\textbf{Lack of well-structured design workflow.} Previous works~\\cite{ParallelArchitecturesKNN, ParameterizedKMeans, kdtreeKMeanscolorimage, lin2012k, KNNfpgahls} follow the traditional way of hardware implementation and require intensive user involvement in hardware design, implementation, and extra manual tuning process, which usually takes long development-to-validation cycles. Also, the problem-specific strategy leads to a case-by-case design process, which cannot be widely applied to handle different problem settings.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.6\\columnwidth]{images\/AccD-overview.pdf}\n \\caption{AccD Overview.}\n \\label{fig: AccD Workflow}\n\\end{figure*}\nTo this end, we present a compiler-based optimization framework, \\textit{AccD}, to automatically accelerate distance-related algorithms on the CPU-FPGA platform (shown in Figure~\\ref{fig: AccD Workflow}). \nFirst, AccD provides a \\textbf{Distance-related Domain-Specific Language} (\\textit{DDSL}) as a problem-independent abstraction to unify the description and optimization of various distance-related algorithms. With the assist of the \\textit{DDSL}, end-user can easily create highly-efficient CPU-FPGA designs by only focusing on high-level problem specification without touching the algorithmic optimization or hardware implementation. \n\nSecond, AccD offers a novel \\textbf{algorithmic-hardware co-optimization} scheme to reconcile the acceleration from both sides. 
At the algorithmic level, AccD incorporates a novel \\textit{Generalized Triangle Inequality (GTI)} optimization to eliminate unnecessary distance computations, while maintaining the computation regularity to a large extent. At the hardware level, AccD employs a \\textit{specialized data layout} to enforce memory coalescing and an \\textit{optimized distance computation kernel} to accelerate the distance computations on the FPGA. \n\nThird, AccD leverages both the host and accelerator side of the \\textbf{CPU-FPGA heterogeneous system for acceleration}. In particular, AccD distributes the algorithm-level optimization (\\textit{e.g.}, data grouping and distance computation filtering) to CPU, which consists of complex operations and execution dependency, but lacks pipeline and parallelism. On the other hand, AccD assigns hardware-level acceleration (\\textit{e.g.}, distance computations) to the FPGA, which is composed of simple and vectorizable operations. Such mapping successfully capitalizes the benefit of CPU for managing control-intensive tasks and the advantage of FPGA for accelerating computation-intensive workloads. \n\nLastly, AccD compiler integrates an intelligent \\textbf{Design Space Explorer} \\textit{(DSE)} to pinpoint the \"optimal\" design for different problem settings. In general, there is no existing \"one size fits all\" solution: the best configuration for algorithmic and hardware optimization would differ across different distance-related algorithms or different inputs of the same distance-related algorithm.\nTo produce a high-quality optimization configuration automatically and efficiently, DSE combines the design modeling (performance and resource) and Genetic Algorithm to facilitate the design space search.\n\nOverall, our contributions are:\n\\begin{itemize} \n \\item We propose the first optimization framework that can automatically optimize and generate high-performance and power-efficient designs of distance-related algorithms on CPU-FPGA heterogeneous computing platforms. \n \\item We develop a Domain-specific Language, DDSL, to unify different distance-related algorithms in an effective and succinct manner, laying the foundation for general optimizations across different problems.\n \n \\item We build an optimizing compiler for the DDSL, which automatically reconciles the benefits from both the algorithmic optimization on CPU and hardware acceleration on FPGA.\n \n \n \n \\item Intensive experiments on several popular algorithms across a wide spectrum of datasets show that AccD-generated CPU-FPGA designs could achieve $31.42\\times$ speedup and $99.63\\times$ better energy-efficiency on average compared with standard CPU-based implementations\n \n\\end{itemize}\n\n\\section{Overview of AccD Framework} \n\n\\section{Related Work} \\label{sect: Related Work}\n\\vspace*{-0.32em}\nPrevious research accelerates distance-related algorithms in two aspects: \\textit{Algorithmic Optimization} and \\textit{Hardware Acceleration}. More details are discussed in the following subsections.\n\n\\subsection{Algorithmic Optimization} \nFrom the algorithmic standpoint, previous research highlights two optimizations. The first one is KD-tree based optimization~\\cite{KD-TreeKMeans, efficientKmeans, KNNJoinsDataStreams, 5952342, Zhong:2013:GEI:2505515.2505749}, which relies on storing points in special data structures to enable nearest neighbor search without computing distances to all target points. 
These methods often deliver $3\times{\sim}6\times$ performance improvements~\cite{KD-TreeKMeans, efficientKmeans, KNNJoinsDataStreams, 5952342, Zhong:2013:GEI:2505515.2505749} over unoptimized versions in low-dimensional spaces, but suffer serious performance degradation when handling large datasets with high dimensionality ($d \geq 20$) due to their exponentially increasing memory and computation overhead.

The second one is TI-based optimization~\cite{elkan2003using, ding2015yinyang, Topframework, chen2017sweet}, which aims at replacing expensive distance computations with cheaper bound computations and demonstrates flexibility and scalability. It can not only reduce the computation complexity at different levels of granularity, but is also more adaptive and robust to datasets with a wide range of sizes and dimensions.
However, most existing works focus on one specific algorithm (\textit{e.g.}, KNN~\cite{chen2017sweet} or K-means~\cite{elkan2003using, ding2015yinyang}), and thus lack extensibility and generality across different distance-related problems.
An exception is a recent work, TOP~\cite{Topframework}, which builds a unified framework to optimize various distance-related problems with pure TI optimization on CPUs. Our work shares a similar high-level motivation, but targets a more challenging scenario: algorithmic and hardware co-optimization on CPU-FPGA platforms.

\subsection{Hardware Acceleration}
From the hardware perspective, several FPGA accelerator designs have been proposed, but they still suffer from some major limitations.

First, previous FPGA designs are generally built for a specific distance-related algorithm and hardware. For example, the works in~\cite{KMeansMicroarray, kdtreeKMeanscolorimage, lin2012k} target K-means FPGA acceleration, while the works in~\cite{KNNfpgahls, IPcoresKNN, ParallelArchitecturesKNN} focus on KNN. Moreover, previous designs~\cite{lin2012k, KMeansMicroarray} usually assume that the dataset fits entirely into the FPGA on-chip memory, and they are only evaluated on a limited number of small datasets; for example, in~\cite{lin2012k}, K-means acceleration is evaluated on a micro-array dataset with only 2,905 points. These designs often encounter portability issues when transferred to different settings. Besides, such hard-coded designs and optimizations make fair comparisons among different designs difficult, which hampers future studies in this direction.

The second problem with previous works is that they fail to incorporate algorithmic optimizations in the hardware design.
For example, the works in~\cite{KMeansMicroarray, ParallelArchitecturesKNN, kdtreeKMeanscolorimage, KNNfpgahls} directly port the standard K-means and KNN algorithms to FPGAs and only apply hardware-level optimizations.
One exception is a recent work~\cite{KPynq}, which proposes combining TI optimization and FPGA acceleration for K-means. It delivers a considerable speedup compared to state-of-the-art methods, showcasing the great opportunity in algorithm-hardware co-optimization. Nevertheless, this idea is far from well explored, possibly because effectively combining the two requires domain knowledge and expertise in both algorithms and hardware.

In addition, previous works largely follow the traditional hardware design flow, which requires a long implementation cycle and huge manual effort.
For example, the works in~\cite{ParameterizedKMeans, KMeansMicroarray, multicoreKMeans, kdtreeKMeanscolorimage, ICSICT2016, Lavenier00fpgaimplementation, adaptiveKNNPartialReconfiguration, adaptiveKNN} build their designs on a VHDL/Verilog design flow, which requires hardware expertise and months of arduous development. In contrast, our AccD design flow brings significant advantages in programmability and flexibility due to its high-level OpenCL-based programming model, which minimizes user involvement in the tedious hardware design process.

\section{I/O Contention and Inherent Noise Errors}\label{sec:data_collection_errors}
With the ability to estimate the amount of application and system modeling error, as well as to detect outlier jobs, the leftover error is caused by system contention or inherent noise. Both of these error classes are caused by aleatory uncertainty, since the model lacks deeper insight into the jobs or the system, as opposed to the OoD case, where the model lacks samples. While, e.g., application error was predictable and explainable in terms of broad application behavior (e.g., this application is slow because it frequently writes to shared files), the impact of contention and noise on I/O throughput is caused by lower-level, transient effects. Though it may be possible to observe and log such effects through microarchitectural hardware counters or network switch logs, such logging would require vast amounts of storage per job and would impact performance. This lack of practical logging ability makes the last two error categories typically \textit{unobservable}. Furthermore, these two classes may only be separated in hindsight: while I/O noise levels may be constant, the amount of I/O contention on the system is unpredictable for a job that is about to run.

The questions we ask in this section are: How can errors due to noise and contention be separated from errors due to poor modeling or epistemic uncertainty? Is there a fundamental limit to how accurate I/O models can become? Can system I/O variability be quantified?

\subsection{Establishing the bounds of I/O modeling}
To separate contention and noise from the first three classes of error, we develop a litmus test based on the test from Section~\ref{sec:stationary_errors}.
There, by observing sets of duplicates, we estimated the performance of an ideal model for which $e_{app} = 0$; comparing real models against this ideal model allowed us to calculate a real model's $e_{app}$.
A similar litmus test can be designed to estimate the sum of contention and noise error, where only concurrent duplicates are observed and both the application behavior $j$ and the global system behavior $\zeta_g$ can be held static for each duplicate set:
\begin{tcolorbox}[left=0mm,right=0mm]
\vspace{-0.1cm}
1. OoD jobs are removed and sets of duplicate jobs run at the same time ($\Delta t=0$) are collected;
2. The mean I/O throughput of each set is calculated;
3. Duplicate error is calculated as before;
4. The median error across all duplicates is reported.
\vspace{-0.1in}
\end{tcolorbox}
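To make the test concrete, the sketch below shows one possible implementation, assuming a hypothetical pandas DataFrame \texttt{jobs} with one row per job and columns \texttt{dup\_set} (duplicate set identifier), \texttt{start\_time}, and \texttt{throughput}; the column names and in-memory layout are illustrative assumptions, not the actual interface of our pipeline.
\begin{verbatim}
import numpy as np
import pandas as pd

def concurrent_duplicate_error(jobs: pd.DataFrame) -> float:
    """Median error of an ideal predictor on concurrent duplicates."""
    # Step 1: keep only duplicate sets whose jobs started at the
    # same time (OoD jobs are assumed to be filtered out already).
    keys = ["dup_set", "start_time"]
    conc = jobs.groupby(keys).filter(lambda g: len(g) >= 2)
    # Steps 2-3: compare each job against its set's mean throughput,
    # using the log-ratio error metric used throughout this work.
    set_mean = conc.groupby(keys)["throughput"].transform("mean")
    dup_err = np.abs(np.log10(conc["throughput"] / set_mean))
    # Step 4: report the median across all duplicates.
    return float(dup_err.median())
\end{verbatim}
Grouping on identical start times holds $j$ and $\zeta_g$ fixed within each set, so any residual spread can come only from local contention $\zeta_l$ and noise $\omega$.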
In the fifth column of Figure~\ref{fig:teaser} we show the distribution of I/O throughput differences $\Delta \phi$ and timing differences $\Delta t$ between all pairs of Cori duplicate jobs, weighted so that large duplicate sets are not overrepresented. The vertical strip on the left contains Cori duplicate jobs that were run simultaneously, largely because they were batched together. These jobs share $j$ and $\zeta_g$, but may differ in $\zeta_l$ and $\omega$. Due to the denser sampling in the 1 minute to 1 hour range, it is not immediately apparent how the I/O difference changes between duplicates run at the same time and duplicates run with a delay. By grouping duplicates from different $\Delta t$ ranges and independently scaling them, a better understanding of duplicate I/O throughput distributions across timescales can be gained, as shown in Figure~\ref{fig:duplicates_over_time_kde}. For both systems (Cori's figure is omitted due to lack of space), the distributions on the right contain jobs run over long periods of time, where the global system impact $\zeta_g$ might have changed, explaining the asymmetric shape of some of them. The left-most distributions are similar, since their variance stems only from contention $\zeta_l$ and noise $\omega$. While some distributions (e.g., the $10^5$ to $10^6$ second range) show complex multimodal behavior, all of the distributions seem to contain the initial zero-second ($0s$ to $1s$) distribution.

\begin{figure}[h]
    \centering
    \vspace{-0.5cm}
    \includegraphics[width=0.8\columnwidth]{figures/kde_duplicates.pdf}
    \caption{Distribution of errors for different periods between duplicate runs.}
    \label{fig:duplicates_over_time_kde}
    \vspace{-0.2cm}
\end{figure}

By fitting a normal distribution to the $\Delta t=0$ distribution ($0s$ to $1s$) in Figure~\ref{fig:duplicates_over_time_kde}, we can both (1) learn the lower limit on total modeling error and (2) learn the system's I/O noise level, i.e., how much I/O throughput variance jobs running on the system should expect. However, upon closer inspection, the $\Delta t=0$ distribution \textit{does not} follow a normal distribution. This is surprising: if noise is normally distributed, independent over time, and its effects are cumulative, the total impact is a sum of normal distributions, which should also be a normal distribution. The answer lies in how the concurrent ($\Delta t=0$) duplicates are sampled. In general, duplicate sets have between 2 and hundreds of thousands of identical jobs in them. However, in duplicate sets with identical start times on Theta, 70\% of the sets have only two identical jobs, and 96\% have 6 jobs or fewer, with similar results on Cori. The issue stems from how the errors of small (sub-30 sample) duplicate sets are calculated: when only a small number of jobs exist in the set, the mean I/O throughput of the set is biased by the sampling, i.e., the estimated mean is closer to the samples than the real mean is. This causes the estimated set I/O throughput variance to decrease, and therefore the duplicate error estimate is reduced as well. Student's \textit{t}-distribution describes this effect: when the true mean of a distribution is known, error calculations follow a normal distribution. When the true mean is not known, the biased mean estimate makes the error follow the \textit{t}-distribution. As the sample count reaches 30 and above, the \textit{t}-distribution approaches the normal distribution.
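This small-sample bias is easy to reproduce in simulation. The toy experiment below (illustrative, not our measurement code) draws duplicate sets of $n$ normally distributed throughputs and shows that errors measured against the estimated set mean understate the true variance by a factor of $(n-1)/n$, which is exactly what Bessel's correction undoes:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
true_var = 1.0  # variance of the simulated I/O noise

for n in (2, 6, 30):
    sets = rng.normal(0.0, np.sqrt(true_var), size=(100_000, n))
    # Each duplicate's error against its own (estimated) set mean.
    errs = sets - sets.mean(axis=1, keepdims=True)
    naive_var = (errs ** 2).mean()
    corrected = naive_var * n / (n - 1)  # Bessel's correction
    print(f"n={n:2d}  naive={naive_var:.3f}  corrected={corrected:.3f}")
# With n=2, the naive estimate hovers around 0.5: half of the variance
# is absorbed by the set mean, mirroring the effect described above.
\end{verbatim}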
Finally, we seek to estimate the I/O noise variance of a given system. Naively taking the variance of the \textit{t}-distribution will produce a biased sample variance $\sigma^2$, but this can be fixed by applying Bessel's correction, $\frac{n}{n-1}\sigma^2$.
In practical terms, a job running on Theta can expect an I/O throughput within $\pm5.71\%$ of the predicted value \textit{68\% of the time}, or within $\pm10.56\%$ 95\% of the time. For Cori, these values are $\pm7.21\%$ and $\pm14.99\%$, respectively. This is a fundamental barrier not just to I/O model improvement, but to predictable system usage in general. Although some insight into contention can be gained through low-level logging tools, noise cannot be overcome. I/O practitioners can use this litmus test to evaluate the noise levels of their systems, and ML practitioners should reconsider how they evaluate models, since some systems may simply be harder to model.

\section{Discussion}\label{sec:discussion}
Developing production-ready machine learning models that analyze HPC jobs and predict I/O throughput is difficult: the space of all application behaviors is large, HPC jobs compete for resources, and the system changes over time. To efficiently improve these models, we present a taxonomy of HPC I/O modeling errors that allows us to study the types of errors independently, quantify their impact, and identify the most promising avenues for model improvement. Our taxonomy breaks errors into five categories: (1) application and (2) system modeling errors, (3) poor generalization, (4) resource contention, and (5) I/O noise. We present litmus tests that quantify what percentage of the existing error belongs to each class, and we show that models improved by using the taxonomy are within several percentage points of an estimated best-case I/O throughput modeling accuracy. We show that a large portion of I/O throughput modeling error is irreducible and stems from I/O variability. We provide tests that quantify this I/O variability and establish an upper bound on how accurate models can become. Our test shows that jobs run on Theta and Cori can expect an I/O throughput standard deviation of 5.7\% and 7.2\%, respectively.

\section{Applying the taxonomy} \label{sec:framework}

\begin{figure*}
    \centering
    \includegraphics[width=.99\linewidth]{figures/framework_updated.pdf}
    \caption{Framework for applying the taxonomy (left), and the results from the ALCF Theta and NERSC Cori systems.}
    \label{fig:framework}
    \vspace{-.2in}
\end{figure*}

We now illustrate how the proposed taxonomy can be used in practice. In Figure~\ref{fig:framework} (left), we show the steps a modeler can follow to evaluate the taxonomy on a new system.
Step 1: The modeler splits the available data into training and test sets, and then trains and evaluates some baseline machine learning model on the task of predicting I/O throughput. This model does not have to be perfect, as the taxonomy should reveal the main sources of error and approximately how much the quality of the model is at fault.
Step 2.1: The modeler estimates application modeling errors by finding duplicate jobs and evaluating the mean predictor's performance on every set of duplicates. Assuming that the distribution of duplicate HPC jobs is representative of the whole population of jobs, this step provides the modeler with a lower bound on the application modeling error.
Step 2.2: By contrasting the baseline model error (Step 1) and the estimated application modeling error, the modeler can estimate the percentage of error that can be attributed to poor modeling. The modeler then performs a hyperparameter or network architecture search and arrives at a good model close to the bound.
Step 3.1: The modeler estimates system modeling errors by exposing the job start time to a golden model. This step requires that the modeler has developed a well-performing model in Step 2.2, that is, one that achieves close to the estimated ideal performance. The test set error of this model serves as an estimate of the application + system modeling lower bound.
Step 3.2: The modeler explores adding sources of system data to improve the performance of the baseline model up to the estimated limit of application and system modeling.
Step 4: The modeler identifies out-of-distribution samples using AutoDEUQ, calculates the OoD error that stems from these samples, and removes them from the dataset.
Step 5: The modeler estimates the error that can be attributed to contention and noise, as well as the I/O variance of the system. This estimate is made by observing the I/O throughput differences between sets of concurrent duplicates, i.e., duplicate jobs run at around the same time (a sketch of the resulting bookkeeping follows the discussion below).

In the middle and right portions of Figure~\ref{fig:framework} we show the average baseline model error (inner pink circle segment) of both the ALCF Theta and NERSC Cori systems, and how that error is broken down into the different classes of error. We do not focus on the cumulative error value of the two systems; instead, we focus on attributing the baseline model error to the five classes of errors in the taxonomy (middle circle segments of the pie chart), and on how much improved application and system modeling can help reduce the cumulative error (outer segments of the pie chart).
The inner blue section of the two pie charts represents the estimated application modeling error, as arrived at in Step 2.1. The outer blue section represents how much of the error can be fixed through hyperparameter exploration, as explored in Step 2.2.
The inner green section represents the estimated system modeling error, derived in Step 3.1. Note that the total percentage of system modeling error is relatively small on both systems; i.e., I/O contention, filesystem health, hardware faults, etc., do not have a dominant impact on I/O throughput. The outer green circle segment represents the percentage of error that can be fixed by including system logs (LMT logs in our case), as described in Step 3.2. Only the Cori pie chart has this segment, as Theta does not collect LMT logs. On Cori, the inclusion of LMT logs helps remove most of the system modeling errors, pointing to the conclusion that including other logs (i.e., topology, networking) may not significantly reduce errors further.
The inner red segment represents the percentage of error that can be attributed to out-of-distribution samples of the two systems, as calculated in Step 4.
Finally, the yellow circle segment represents the percentage of error that can be attributed to aleatory uncertainty. For both Theta and Cori, this is a rather large amount, pointing to the fact that there is a lot of innate noise in the behavior of these systems and setting a relatively high lower bound on ideal model performance.
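To make the bookkeeping behind the pie charts explicit, the sketch below combines hypothetical litmus test outputs into an attribution table; the numbers are placeholders to be replaced with a system's own estimates, not measurements:
\begin{verbatim}
# Hypothetical litmus test outputs, as fractions of the baseline error.
estimates = {
    "application modeling (Step 2.1)": 0.20,
    "system modeling (Step 3.1)":      0.10,
    "out-of-distribution (Step 4)":    0.03,
    "contention + noise (Step 5)":     0.40,
}
unexplained = 1.0 - sum(estimates.values())
for cause, share in estimates.items():
    print(f"{cause:<35} {share:6.1%}")
print(f"{'unexplained remainder':<35} {unexplained:6.1%}")
\end{verbatim}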
The similarity between the modeling error estimates (Steps 2.1 and 3.1) and the actual updated model performance (Steps 2.2 and 3.2) is surprising and serves as evidence for the quality of the error estimates. However, the estimates of the five error classes \textit{do not} add up to 100\%. The first three error estimates are just that: estimates, derived from a subset of data (duplicate HPC jobs) that does not necessarily follow the same distribution as the rest of the dataset and may be biased. If we add up the estimates, we see that on Theta 32.9\% of the error is unexplained, and on Cori 13.5\% of the error is unexplained. Cori's lower unexplained error may be due to the fact that we have collected some 1.1M jobs on it, compared to 100K on Theta.

\section{Introduction}

As scientific applications push to leverage ever more capable computational platforms, there is a critical need to identify and address bottlenecks of all types. Due to the large data movements in these applications, the I/O subsystem is often a major source of performance bottlenecks, and it is common for applications to attain only a small fraction of the peak I/O rates~\cite{luu:behavior}. These performance problems can severely limit the scalability of applications and are difficult to detect, diagnose, and fix. Data-driven machine learning-based models of I/O throughput can help practitioners understand application bottlenecks (e.g.,~\cite{isakov_sc20, moana, 10.1007/978-3-319-92040-5_10, 10.1145/3337821.3337922, isakov_ross20, 10.1145/3369583.3392678}), and have the potential to automate I/O tuning and other tasks. However, current machine learning-based I/O models are not robust enough for production use~\cite{isakov_ross20}. A thorough investigation of \emph{why} these models underperform when deployed on high performance computing (HPC) systems will provide key insights and guidance on how to address their shortcomings. The goal of our study is to help machine learning (ML)-driven I/O modeling techniques make the transition from theory to practice.

There are several reasons why machine learning-based I/O models underperform when deployed: poor modeling choices~\cite{isakov_sc20, 10.1145/3369583.3392678}, concept drift in the data~\cite{10.1145/3337821.3337922}, and weak generalization~\cite{isakov_ross20}, among others. I/O models are often opaque, and there is no established methodology for diagnosing the root cause of model errors. In this work, we present a taxonomy of ML-based I/O modeling errors, as shown in Figure~\ref{fig:teaser}. Through this taxonomy, we show that I/O throughput prediction errors can be separated and quantified into five error classes: inadequate (1) application and (2) system models, (3) novel application or system behaviors, (4) I/O contention, and (5) inherent noise. For each class, we present data-driven litmus tests that estimate the portion of modeling error caused by that class. The taxonomy enables independent study of each source of error and prescribes appropriate ML techniques to tackle the underlying sources of error.
Our contributions in this work are as follows:
\begin{enumerate}[leftmargin=*]
    \item We introduce a taxonomy of ML-based I/O throughput modeling errors, which consists of five causes.

    \item We show that the choice of machine learning model type, scaling of model size, and hyperparameter tuning cannot reduce all potential errors. We present two litmus tests that quantify the error due to poor application and system modeling.

    \item We present a litmus test that estimates what portion of the error is caused by rare jobs with previously unseen behavior, and we apply uncertainty quantification methods to classify those jobs as out-of-distribution jobs.

    \item We present a method for quantifying the impact of I/O contention and noise on I/O throughput, which (1) defines a fundamental limit on how accurate ML models can become, and (2) gives HPC system users and administrators a practical estimate of the I/O throughput variance they should expect. We show that the underlying system noise, and not poor modeling or a lack of application or system data, is the dominant source of errors.

    \item We present a framework for how the taxonomy is practically applied to new systems and evaluate it on two leadership-class supercomputers: Argonne Leadership Computing Facility (ALCF) Theta and National Energy Research Scientific Computing Center (NERSC) Cori.
\end{enumerate}

\section{Global system modeling errors}\label{sec:nonstationary_errors}
The second part of the approximation error in our taxonomy is the global system modeling error. This error refers to I/O climate and I/O weather effects~\cite{lockwood_pdsw17} that affect all jobs running on the system, and it corresponds to the second component in Equation~\ref{eq:phi_breakdown}. While global and local system impacts have complex and overlapping effects, we suggest that separating the impact applied to all jobs from the impact that depends on pairs of concurrent jobs is useful for modeling purposes.
The main difference between the two is that local system impacts cannot be predicted or modeled without knowledge of all jobs running on the system, while global system impacts can. In other words, global system impacts can be expressed as a property of the system at a given time, effectively compressing a part of system behavior.
We now ask: How does I/O contention impact job I/O throughput prediction? What are the limits of global system modeling? How can I/O models approach this limit?

\subsection{Estimating limits of global system modeling}
The global system impact $\zeta_g(t)$ on job $j$ from Equation~\ref{eq:phi_breakdown} can be formalized as some function $\zeta_g(t) = g(J(t))$, where $J(t)$ is the set of jobs running at time $t$. Since jobs have start and end times, given a dataset with a dense enough sampling of $J$, $g(J(t))$ can be calculated for every point in time. During periods when the file system is suffering a service degradation, all jobs on the system will be impacted with varying severity. A model of the system does not need to understand how and why the degradation happened; it only needs to know the period during which it lasted and how different types of jobs were impacted. This time-based model is useless for predicting future performance, and its only utility is in evaluating how much of the degradation can be described as purely a function of time.
A deployed model does not have data on the future and will still need to observe the system.

To evaluate the global system impact, a golden model that exhibits no global modeling error is developed, against which we can compare other, `real' ML models. Since the global system impact $\zeta_g(t)$ depends only on time and does not need the set of all jobs $J$, only the application behavior $j$ and the job start time feature are exposed to the golden model. Here, the golden model is an XGBoost model fine-tuned on a validation set and evaluated on a test set. Assuming that the golden model is exposed to enough jobs throughout the lifetime of the system, it will learn the impact of $\zeta_g(t)$ even without having access to the underlying system features causing that impact. This golden model is used in the following litmus test:
\begin{tcolorbox}[left=0mm,right=0mm]
\vspace{-0.1cm}
1. The timing feature is added to the Darshan-only (no Lustre or Cobalt) dataset; 2. A hyperparameter search is performed on a validation set and the application modeling error is removed; 3. The final model is evaluated on the test set, and its error is reported.
\vspace{-0.1in}
\end{tcolorbox}
If the litmus test is correct, the golden model only suffers from the last three classes of errors: poor generalization, local system impact, and inherent noise.
In Figure~\ref{fig:lustre_model_comparison} we evaluate a baseline model (blue) and a model enriched with the job start time (orange). The addition of this single feature has a large impact on error: on Cori, the error drops $40\%$, from 16.49\% down to 10.02\%, while on Theta the error drops by 30.8\%. Note that to obtain high accuracy on POSIX + start time, a much larger model is needed, i.e., one that can remember the I/O weather throughout the lifetime of the system.

\subsection{Improving modeling through I/O visibility}
With an estimate of the minimal error achievable assuming perfect application and global system modeling, we investigate whether I/O subsystem logs can help models approach this limit. Since Theta does not collect I/O subsystem logs, we analyze Cori, which collects both application and I/O logs. Figure~\ref{fig:lustre_model_comparison} shows the XGBoost performance of three models: a baseline where $e_{app}=0$ (blue), the litmus test's golden model where also $e_{system}=0$ (orange), and a Lustre-enriched model (green). Cori's median absolute error is reduced by 40\%, from 16.49\% down to 9.96\%. The Lustre-enriched results are surprisingly close to the litmus test's predictions and suggest that predictions cannot be improved through further I/O insight, since the litmus test's predicted limit is reached.

\begin{figure}
    \centering
    \includegraphics[width=0.95\columnwidth]{figures/lustre.pdf}
    \caption{Error distribution of models trained on (1) POSIX, (2) POSIX + the start time feature, and (3) Darshan and Lustre.}
    \label{fig:lustre_model_comparison}
    \vspace{-0.5cm}
\end{figure}
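A minimal version of this comparison, assuming a hypothetical feature table \texttt{darshan\_df} in place of our preprocessed Darshan logs and eliding the validation-based hyperparameter tuning, could look as follows (the synthetic data exists only to make the sketch self-contained):
\begin{verbatim}
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # stand-in for a Darshan-derived feature table
darshan_df = pd.DataFrame({
    "bytes_written": rng.lognormal(20, 2, n),
    "nprocs": rng.integers(1, 512, n),
    "start_time": rng.uniform(0, 3e7, n),
    "throughput": rng.lognormal(8, 1, n),
})
feat_cols = ["bytes_written", "nprocs"]
y = np.log10(darshan_df["throughput"].values)

def median_abs_err(cols):
    X = darshan_df[cols].values
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = xgb.XGBRegressor(n_estimators=32, max_depth=21)
    model.fit(X_tr, y_tr)
    # In log10 space, |prediction - truth| is this paper's error metric.
    return np.median(np.abs(model.predict(X_te) - y_te))

print("application only: ", median_abs_err(feat_cols))
print("+ job start time: ", median_abs_err(feat_cols + ["start_time"]))
\end{verbatim}
On random data both errors are, of course, equally poor; on real logs, the second model can additionally memorize the I/O weather $\zeta_g(t)$.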
\section{Generalization errors}\label{sec:generalization_errors}
The remaining three classes of error are caused by a lack of data rather than by poor modeling, as the top branch of the taxonomy shows. While I/O contention and inherent noise errors are examples of aleatory uncertainty and are caused by a lack of insight into specific jobs, generalization errors stem from epistemic uncertainty, i.e., the lack of other logged jobs around a specific job of interest.
To motivate this section, the third graph of Figure~\ref{fig:teaser} shows the error distribution of a model trained on data from January 2018 to July 2019. When evaluated on held-out data from the same period, the median absolute error is low (green line). Once the model is deployed and evaluated on data collected after the training period (July 2019 and after), the median error spikes up (red line).

\subsection{Estimating generalization error}
Estimating the amount of out-of-distribution error $e_{OoD}$ is important because any unaccounted-for OoD error will be classified as noise or contention. This will make systems that run many novel jobs appear noisier than they truly are. Because OoD and ID jobs will likely have similar amounts of I/O and contention noise, it is better to have false positives (ID jobs classified as OoD) than the other way around, since false negatives (OoD jobs classified as ID) contribute to overestimating the I/O noise.
To estimate the impact of out-of-distribution jobs on the error $e_{OoD}$, we aim to quantify how much of the error is epistemic and how much is aleatory in nature, as shown in Figure~\ref{fig:teaser} (upper right).
The leading paradigm for uncertainty quantification works by training an ensemble of models and evaluating all of the models on the test set. If the models make the same error, the sample has high aleatory uncertainty, but if the models disagree, the sample has high epistemic uncertainty~\cite{10.5555/3295222.3295387}. The intuition is that predictions on out-of-distribution samples will vary significantly on the basis of the model architecture, whereas predictions on noisy samples will exhibit the same amount of error. Since this method relies on ensemble model diversity, several works have explored increasing diversity through different model hyperparameters~\cite{Wenzel2020HyperparameterEF}, different architectures~\cite{Zaidi2020NeuralES}, or both~\cite{autodeuq}. We choose to use AutoDEUQ~\cite{autodeuq}, a method that evolves an ensemble of neural network models and jointly optimizes both the architectures and the hyperparameters of the models. AutoDEUQ's Neural Architecture Search (NAS) is compatible with the NAS search from Section~\ref{sec:stationary_errors}, reducing the computational load of applying the taxonomy. Figure~\ref{fig:autodeuq} shows the distribution of epistemic (EU) and aleatory (AU) uncertainties on the Theta and Cori test sets. For both systems, aleatory uncertainty is significantly higher than epistemic uncertainty. Furthermore, \textit{all} jobs seem to have an AU larger than about 0.05, hinting at the inherent noise present in the system. The inverse cumulative distributions on the top and right margins show what percentage of the total error is caused by AU / EU \textit{below} that value. For example, for both systems, 50\% of all error is caused by jobs with EU below 0.04, while in the case of AU, 50\% of the error is below AU = 0.25. The low total EU is expected, since the test set was drawn from the same distribution as the training set; it increases on the 2020 set (omitted due to space concerns).
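The decomposition behind this intuition follows the law of total variance used by deep ensembles: given per-member predictive means and variances, epistemic uncertainty is the variance of the means and aleatory uncertainty is the mean of the variances. The sketch below is a schematic of that split with stand-in numbers, not AutoDEUQ's implementation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, N = 16, 4  # ensemble members, test jobs
# mu[k, i] / var[k, i]: mean and variance predicted by member k for
# job i; stand-in values where real ones come from trained networks.
mu = rng.normal(2.0, 0.3, size=(K, N))
var = rng.uniform(0.05, 0.4, size=(K, N))

aleatory = var.mean(axis=0)   # members agree on how noisy a job is
epistemic = mu.var(axis=0)    # members disagree on the prediction
total = aleatory + epistemic  # law of total variance
print(epistemic, aleatory)
\end{verbatim}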
\begin{figure}[h]
    \vspace{-0.2cm}
    \centering
    \includegraphics[width=0.95\columnwidth]{figures/uncertainty}
    \caption{Distribution of job aleatory and epistemic uncertainties for the two systems, with marginal distributions and inverse cumulative error shown on the margins.}
    \label{fig:autodeuq}
    \vspace{-0.2cm}
\end{figure}

Epistemic uncertainty does not directly translate into the out-of-distribution error $e_{OoD}$ from Equation~\ref{eq:error_breakdown}. When a sample is truly OoD, it may not be possible to separate aleatory and epistemic uncertainty, since a good estimate of AU requires dense sampling around the job of interest. Therefore, we choose to attribute all errors of a sample marked as out-of-distribution to $e_{OoD}$. This error attribution requires classifying every test set sample as either in- or out-of-distribution, but since EU estimates are continuous values, an EU threshold that separates OoD and ID samples is required. Although this threshold is specific to the dataset and may require tuning, the quick drop or `shoulder' in the inverse cumulative error around EU = 0.1 in Figure~\ref{fig:autodeuq} makes the choice of an $e_{OoD}$ threshold robust.
A litmus test that estimates the error due to out-of-distribution samples has the following steps:
\begin{tcolorbox}[left=0mm,right=0mm]
\vspace{-0.1cm}
1. Run NAS and collect the best performing models; 2. Estimate the aleatory and epistemic uncertainty using AutoDEUQ; 3. Find a stable EU threshold and classify the samples as ID and OoD; 4. Calculate $e_{OoD}$ as the sum of OoD sample errors.
\vspace{-0.1in}
\end{tcolorbox}
On Theta, for an EU threshold of 0.24, 0.7\% of the samples are classified as OoD, but they constitute 2.4\% of the errors, while on Cori 2.1\% of the error gets removed for the same EU threshold. In other words, the selected jobs have a $3\times$ larger average error than random samples. By manually exploring the types of jobs that do get removed, we confirm that these are typically rare or novel applications.
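Given per-job epistemic uncertainties \texttt{eu} and absolute errors \texttt{err} from the previous steps (hypothetical arrays here), the final two steps of the litmus test reduce to a few lines:
\begin{verbatim}
import numpy as np

def ood_error_share(eu, err, threshold):
    """Fraction of jobs flagged OoD and the share of error they carry."""
    is_ood = eu > threshold       # step 3: threshold the EU estimates
    e_ood = err[is_ood].sum()     # step 4: all their error goes to e_OoD
    return is_ood.mean(), e_ood / err.sum()

# Placeholder inputs; the threshold matches the one used on Theta.
rng = np.random.default_rng(0)
eu = rng.exponential(0.05, size=10_000)
err = rng.exponential(0.10, size=10_000)
frac_ood, err_share = ood_error_share(eu, err, threshold=0.24)
\end{verbatim}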
\section{Datasets and experimental setup}\label{sec:preliminaries}
This work is evaluated on two datasets, one collected from the ALCF Theta supercomputer in the period from 2017 to 2020, and one collected from the NERSC Cori supercomputer in the period from 2018 to 2019. Theta collects Darshan~\cite{darshan} and Cobalt logs and consists of about 100K jobs with an I/O volume larger than 1GiB, while Cori collects Darshan and Lustre Monitoring Tools (LMT) logs and consists of 1.1M jobs larger than 1GiB.

Darshan is an HPC I/O characterization tool that collects HPC job I/O access patterns on both the POSIX and MPI-IO levels, and it serves as our main insight into application behavior. It collects POSIX aggregate job-level data, e.g., the total number of bytes transferred, accesses made, read/write ratios, unique or shared files opened, the distribution of accesses per access size, etc. MPI-IO is a library built on top of POSIX that offers higher-level primitives for performing I/O operations and may offer the model more insight into application behavior. Darshan collects MPI-IO information for jobs that use it, and all requests made through MPI-IO are also visible on the POSIX level. The Cobalt scheduler logs the number of nodes and cores assigned to a job, job start and end times, job placement, etc. Cobalt logs may be useful to the model since the number of cores allotted to a job is not visible to Darshan. Darshan only collects the number of job processes, which is commonly equal to or greater than the number of cores allocated to a job. LMT collects I/O subsystem information such as storage server load and file system utilization, and it serves as our main insight into the I/O subsystem state over time. LMT records the state of the object storage servers (OSS) and targets (OST), and the metadata servers (MDS) and targets (MDT) of the Lustre scratch filesystem every 5 seconds. Some of the features LMT collects are the CPU and memory utilization of the OSSs and MDSs, the number of bytes transferred to and from the OSTs, the fullness of the system, and the number of metadata operations (e.g., \texttt{open}, \texttt{close}, \texttt{mkdir}, etc.) performed by the metadata targets. LMT collects per-OSS/OST/MDS/MDT logs, but since a job may be served by an arbitrary number of these I/O nodes, only the minimum, maximum, mean, and standard deviation are exposed to the ML model. Overall, models have access to 48 Darshan POSIX, 48 Darshan MPI-IO, 37 LMT, and 5 Cobalt features.

The ML models in this work are trained using supervised learning on the task of predicting the I/O throughput of individual HPC jobs. The error that the models optimize is:
\begin{equation}
    e(y, \hat y) = \frac{1}{n} \sum^n_{i=1}\left|\log_{10} \left(\frac{y_i}{\hat y_i}\right)\right|
\end{equation}
where $y_i$ and $\hat y_i$ are the $i$-th job's measured and predicted I/O throughputs. Because $\log(x) = -\log(1/x)$, if a model overestimates or underestimates the I/O throughput by the same relative amount, the error remains the same. We write errors as percentages, where, e.g., a $-25\%$ error specifies that the model underestimated the real I/O throughput by 25\%; however, some figures show the absolute error when model bias is not important. While the models are trained to lower the mean error, we report median values, since some of the distributions have heavy tails that make mean estimates unreliable.
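In code, the metric and its sign convention could be transcribed as follows (the function names are ours):
\begin{verbatim}
import numpy as np

def signed_error(y_true, y_pred):
    """Per-job log-ratio error; negative means underestimation."""
    return np.log10(y_pred / y_true)

def model_error(y_true, y_pred):
    """Mean absolute log-ratio error optimized by the models."""
    return np.abs(signed_error(y_true, y_pred)).mean()

# Overestimating by 2x and underestimating by 2x are symmetric:
# both contribute |log10(2)| ~= 0.301 to the mean error.
assert np.isclose(model_error(np.array([100.0]), np.array([200.0])),
                  model_error(np.array([100.0]), np.array([50.0])))
\end{verbatim}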
\section{Related work}\label{sec:related}
In recent years, automating HPC I/O system analysis through ML has received significant attention, with two prominent directions: (1) workload clustering, to better understand groups of HPC jobs and automate the handling of whole groups, and (2) I/O subsystem modeling, to make predictions of HPC job I/O time, I/O throughput, optimal scheduling, etc. Clustering HPC job logs has been explored in~\cite{isakov_sc20, gauge, taxonomist} with the goal of better understanding workload distributions, scaling I/O expert effort more efficiently, and revealing hidden trends and I/O workload patterns. ML-based modeling has been used for predicting I/O time~\cite{10.1007/978-3-319-92040-5_10}, I/O throughput~\cite{10.1145/3369583.3392678, isakov_sc20}, and optimal filesystem configuration~\cite{8752835, capes}, as well as for building black boxes of I/O subsystems in order to apply ML model interpretation techniques~\cite{isakov_sc20}. While there have been some attempts at creating analytical models of I/O subsystems~\cite{10.1109/CLUSTER.2015.29}, most attempts are data-driven and rely on HPC system logs to create models of I/O~\cite{luu:behavior, 10.1145/3369583.3392678, 10.1007/978-3-319-92040-5_10, isakov_sc20, Wang2018IOMinerLA}.
Although the challenges of developing accurate machine learning models are well known, the nature of the domain requires special consideration: I/O subsystems have to service multiple competing jobs, their configuration evolves over time, they have periods of increased variability, they experience occasional hardware faults, etc.~\cite{1526010, 10.1109/SC.2018.00077, costa1}. Diagnosing this I/O variability, where the performance of a job depends on factors external to the job itself, has been extensively studied~\cite{moana, 1526010, 10.1145/3322789.3328743, 10.1007/978-3-319-92040-5_10, costa1}. Finally, the deployment of I/O models has been shown to require special consideration, as these models often significantly underperform on new applications~\cite{10.1145/3337821.3337922, isakov_ross20}. While different sources of model error have been studied individually, no prior work characterizes the relative impact of the different sources of error on model accuracy.

\section{Application modeling errors}\label{sec:stationary_errors}
When an ML practitioner is tasked with a regression problem (predicting continuous values), the first model they evaluate will likely underperform on the task, e.g., due to inadequate data preprocessing, architecture, or hyperparameters. Therefore, the model will suffer from \textit{approximation errors}, which can be removed by tuning the model's hyperparameters. Because inappropriate models may be unable to fit even easy tasks, this class of errors should be solved before seeking, e.g., additional samples or sample features.

Figure~\ref{fig:teaser} shows that the approximation error consists of the application and system approximation errors. This section asks whether I/O models build faithful representations of application behavior. The questions we ask are as follows: What are the limits of I/O application modeling? In practice, do I/O models faithfully learn application behavior? Can I/O application modeling benefit from extra hyperparameter fine-tuning or new application features?

\subsection{Estimating limits of application modeling}
We develop an application modeling error litmus test whose objective is to separate the application modeling error $e_{app}$ from the four other error classes in Equation~\ref{eq:error_breakdown}. To do so, we seek a `golden model' that models application behavior as accurately as possible given the inputs. When practical ML models are compared with this golden model, an estimate of the application modeling error can be made.

To build this `golden model', we rely on a property of synthetic datasets, where the data-generating process can be freely and repeatedly sampled. When analyzing HPC logs, it is common to see records of the same application run multiple times on the same data, or on data of the same format. For example, system benchmarks such as IOR~\cite{ior} may be run periodically to evaluate filesystem health and overall performance. We call these sets of repeated jobs \textit{`duplicate jobs'}.
Jobs are duplicates if they belong to the same application and all of their \textit{observable} application features are identical. Note that an application can have many different sets of duplicate jobs, likely run with different input parameters. Because the duplicates' features fed to an ML model are identical, the model cannot distinguish between duplicates from the same set, and it will achieve the best possible accuracy on the training set if it learns to predict the mean I/O throughput value of each set. A model that does not learn to predict a set's mean value suffers from application modeling error.

Sets of duplicate jobs can be used to build a litmus test that evaluates the median absolute error of a model which has $e_{app}=0$. Any practical model with a median error $e^p$ can then learn its application modeling error $e^p_{app}$ by comparing against the median error of the `golden model' $e^g$ as $e^p_{app} = e^p - e^g$. The litmus test is administered as:
\begin{tcolorbox}[left=0mm,right=0mm]
\vspace{-0.1cm}
1. Sets of duplicate jobs in the dataset are found; 2. The mean I/O throughput of each set is calculated;
3. This mean is subtracted from each duplicate's I/O throughput in the set to calculate the duplicate error, and Bessel's correction is applied~\cite{bishop};
4. The median error across all duplicates is reported.
\vspace{-0.1in}
\end{tcolorbox}
This average difference represents the smallest average error that a model $m(j)$ can achieve on \textit{duplicate jobs}. Assuming that duplicate jobs are drawn from the same distribution of applications as the whole dataset, the duplicate median absolute error represents the lower bound on the median absolute error a model can achieve on the whole dataset. Note that different applications may have different distributions of duplicate I/O throughputs, as shown in the fourth column of Figure~\ref{fig:teaser}. For this litmus test to be accurate, a large sample of applications representative of the HPC system workload must be acquired. When applied to Theta, 19,010 duplicates (23.5\% of the dataset) over 3,509 sets show a median absolute error of 10.01\%. On Cori, 504,920 duplicates (54\%) in 77,390 sets show a median absolute error of 14.15\%. If the litmus test is correct, a model that has the same (but not lower!) median absolute error on the general dataset can be found.

\subsection{Minimizing application modeling error}
The next question is whether ML models can practically reach this estimate of the lower bound of the error. Several I/O modeling works have explored different types of ML models: linear regression~\cite{isakov_sc20}, decision trees~\cite{10.1007/978-3-319-58667-0_19}, gradient boosting machines~\cite{isakov_sc20, 10.1007/978-3-319-58667-0_19, xie1}, Gaussian processes~\cite{10.1007/978-3-319-92040-5_10}, neural networks~\cite{10.1145/3337821.3337922}, etc. Here, we explore two types of models: XGBoost~\cite{xgboost}, an implementation of gradient boosting machines, and feedforward neural networks. These model types are chosen for their accuracy, scalability, and previous success in I/O modeling.

Neither type of model achieves ideal performance `out of the box'. XGBoost model performance can be improved through hyperparameter tuning, e.g., by exploring different (1) numbers of decision trees, (2) tree depths, (3) features each tree is exposed to, and (4) parts of the dataset each tree is exposed to, as sketched below.
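The sketch below spells out such a sweep on toy data; the value grids are illustrative and much coarser than the 8,046-configuration search used in our experiments:
\begin{verbatim}
import itertools
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)          # toy stand-in for the real
X = rng.normal(size=(2000, 10))         # Darshan feature matrix
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=2000)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

grid = {
    "n_estimators":     [16, 32, 64],     # (1) number of trees
    "max_depth":        [6, 11, 16, 21],  # (2) tree depth
    "colsample_bytree": [0.5, 1.0],       # (3) columns shown per tree
    "subsample":        [0.5, 1.0],       # (4) rows shown per tree
}

def val_err(cfg):
    model = xgb.XGBRegressor(**cfg).fit(X_tr, y_tr)
    return np.median(np.abs(model.predict(X_val) - y_val))

configs = [dict(zip(grid, v)) for v in itertools.product(*grid.values())]
best = min(configs, key=val_err)  # tuned on validation, never on test
\end{verbatim}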
Neural networks are more complex, since they require tuning hyperparameters (learning rate, weight decay, dropout, etc.) while also exploring different architectures (the number, size, and type of layers, and their connectivity). In the case of XGBoost, we exhaustively explore the four hyperparameters listed above, for a total of 8,046 XGBoost models. In the case of neural networks, exhaustive exploration is not feasible due to state-space explosion, so we use AgEBO~\cite{DBLP:journals/corr/abs-2010-16358}, a Neural Architecture Search (NAS) method that trains populations of neural networks and updates each subsequent generation's hyperparameters and architectures through Bayesian optimization.

The leftmost column of Figure~\ref{fig:teaser} shows a heatmap of an exhaustive search over two parameters on the Theta dataset, with the other two parameters (the \% of columns and rows revealed to the trees) set to the best values found. The best performing model has an error rate of 10.51\%, very close to the predicted bound. The search on Cori data arrives at a similar configuration (omitted due to space).

In the case of neural networks, Figure~\ref{fig:dnn_nas} shows a scatter plot of the test set errors of 10 generations of neural networks on the Cori system, with 30 networks per generation. The networks are evolved using a separate validation set to prevent leakage of the test set into the model parameters. The networks approach the estimated error limit, and the best result achieves a median absolute error of 14.3\%. The facts that (1) after extensive tuning, both types of models asymptotically approach the estimated limit in model accuracy, and (2) NAS does little to improve the models, improving predictions only 6 times (gold stars), suggest that both types of ML models are impeded by the same barrier and that the architecture and tuning of the models are not the fundamental issue in achieving better accuracy, i.e., that the source of error lies elsewhere.

\begin{figure}[h]
    \vspace{-0.3cm}
    \centering
    \includegraphics[width=\columnwidth]{figures/nas}
    \caption{Results of the Neural Architecture Search (NAS), with the estimated error lower bound highlighted in red.}
    \label{fig:dnn_nas}
    \vspace{-0.3cm}
\end{figure}

\subsection{Increasing visibility into applications}
While the hyperparameter and architecture searches approach but do not surpass the litmus test's estimated lower bound on error, this is not conclusive evidence that all application modeling error has been removed and that the remaining error stems from other sources. Possibly, there exist missing application features that might further reduce errors. We explore two such sets of features: MPI-IO logs and Cobalt scheduler logs.

Figure~\ref{fig:stationary_models} shows the absolute error distribution of the tuned models on three Theta datasets: POSIX, POSIX + MPI-IO, and POSIX + Cobalt (Cori is excluded because of its lack of Cobalt logs). None of the dataset enrichments helps reduce the error, corroborating the conclusion that application modeling is not a source of error for these models and that further insight into applications will not help. Adding Cobalt logs does reduce the error on the training set, and our experiments show that the job start and end time features are the cause. Once timing features are present in the dataset, no two jobs are duplicates, due to small timing variations.
Although previously the ML model was not able to overfit the dataset due to the existence of duplicates, this is no longer the case, and the ML model can differentiate and memorize each individual sample. In~\cite{isakov_sc20}, the authors remove timing features for a similar reason: ML models can learn Darshan's implementation of the I/O throughput calculation and achieve good predictions without having to observe job behavior.

\begin{figure}[h]
    \centering
    \includegraphics[width=0.85\columnwidth]{figures/models.pdf}
    \caption{Error distributions for models trained on the POSIX, POSIX + MPI-IO, and POSIX + Cobalt feature sets.}
    \label{fig:stationary_models}
    \vspace{-0.2in}
\end{figure}

\section{Modeling HPC applications and systems}\label{sec:system}
The behavior of an HPC system is governed by both complex rules and inherent noise. By formalizing the system as a mathematical function (or, more generally, a stochastic process) with its inputs and outputs, the process may be decomposed into smaller components more amenable to analysis. The I/O throughput of a system running specific sets of applications may be treated as a data-generating process from which I/O throughput measurements are drawn. While building a perfect model of an HPC system may not be possible, it is useful to understand the inputs to the `true' process and the process's functional properties. The theoretical model of the process must include all causes that might affect a real HPC system, such as how well a job uses the system, the hardware and software configurations over the life of the system, resource contention between concurrent jobs, inherent application-specific and system noise, as well as application-specific noise sensitivity. Although many of these aspects are not observable in practice, the true data-generating process takes them into account.

We adapt the global modeling formulation from Madireddy et al.~\cite{10.1007/978-3-319-92040-5_10} and formulate the I/O throughput $\phi(j)$ of a job $j$ as:
\begin{equation} \label{eq:global_model}
    \phi(j) = f(j, \zeta, \omega)
\end{equation}
Here, $\zeta$ represents the system state (e.g., filesystem health, system configuration, node availability, etc.) and system behavior (e.g., the behavior of other applications co-located with the modeled application during its run, contention from resource sharing, etc.) \textit{at a given time}. $\omega$ represents randomness acting on the system.
The system $\zeta$ can be further decomposed as:
\begin{equation}
    \zeta = \zeta_g(t) + \zeta_l(t, j)
\end{equation}
The component $\zeta_g(t)$ represents the \textit{global system impact} on all jobs running on the system (e.g., a service degradation that equally impacts all jobs) and is only a function of time $t$. The component $\zeta_l(t, j)$ represents the \textit{local system impact} on the I/O throughput of job $j$ caused by resource contention and interactions with other jobs running on the system. Contrary to the $\zeta_g(t)$ component, $\zeta_l(t, j)$ is job-specific and depends on the behavior of the current set of applications running on the system, the sensitivity of $j$ to resource contention and noise, etc.
Without loss of generality, the I/O throughput of a job from Equation~\ref{eq:global_model} can be represented as:
\begin{equation} \label{eq:phi_breakdown}
\begin{split}
    \phi(j) &= f(j,\; \zeta_g(t),\; \zeta_l(t, j),\; \omega) \\
    &= f_a(j) + f_g(j, \zeta_g(t)) + f_l(j, \zeta_l(t, j)) + f_n(j, \zeta, \omega)
\end{split}
\raisetag{28pt}
\end{equation}

Here, $f_a(j)$ represents the job's throughput on an idealized system where the job is alone on the system, the system does not change over time, and there is no resource contention. $f_g(j, \zeta_g(t))$ represents how the evolving configuration of the system (hardware provisioning, software updates, etc.) affects a job's I/O throughput. The $f_l(j, \zeta_l(t, j))$ component represents the impact of resource contention and the sensitivity of $j$ to I/O noise. Finally, $f_n(j, \zeta, \omega)$ represents the impact of inherent system noise (e.g., dropped packets) on the job.

\vspace{-0.17cm}
\subsection{Modeling assumptions}
The task of modeling a system's I/O throughput is to predict the behavior of the system when executing a job from some application on some data. Modeling I/O throughput requires modeling both the HPC system and the jobs running on it. The machine learning models used in this work attempt to learn the true function $\phi$ by mapping observable features of the job $j$ and the system $\zeta$ to measured I/O throughputs $\phi(j)$. A model $m(j_o, \zeta_o)$ is tasked with predicting the throughput $\phi(j)$, where $j_o$ and $\zeta_o$ are the observable job and system features.

When designing ML models, the choice of model architecture and model inputs carries implicit assumptions about the process that generates the data. When incorrect assumptions are made about the domain, the model will suffer errors that cannot be fixed within that modeling framework. We investigate four common assumptions about the HPC domain.

\textbf{All data is in-distribution:}
a common assumption that practitioners make is that all model errors are the product of insufficiently trained models, inadequate model architectures, or missing discriminative \textit{features}. However, some jobs in the dataset may be \textit{Out of Distribution (OoD)}; that is, they may be collected at a different time, in a different environment, or through a different process. The model may underperform on OoD jobs due to the lack of similar jobs in the training set, and not due to a lack of insight (features) into the job. The cause of the problem is \textit{epistemic uncertainty (EU)}: the model suffers from a lack of knowledge, or \textit{reducible uncertainty}, since a broader training set would make the OoD jobs in-distribution (ID). In the HPC domain, epistemic uncertainty is present in cases of rarely run or novel jobs or uncommon system states. Without considering the possibility that a portion of the error is a product of epistemic uncertainty, practitioners will put effort into tuning models instead of collecting more underrepresented jobs.
Referring to Equation~\ref{eq:global_model}, this assumption may be expressed as: the deployment-time $j_d$ and $\zeta_d$ are drawn from a different distribution than the training-time $j_t$ and $\zeta_t$.

\textbf{Noise is absent:}
all systems have some inherent noise that cannot be modeled and will impact predictions. \textit{Aleatory uncertainty} (AU) refers to inherent uncertainty, either due to noise or due to missing features about the jobs in the dataset. Modeling errors due to aleatory uncertainty are different from those due to epistemic uncertainty, because collecting more jobs may not reduce AU. HPC I/O domain experts note that certain systems do have significantly higher or lower I/O noise~\cite{wan1, xie2}, but quantifying noise in modern leadership computing systems is an under-explored subject. Understanding and characterizing a system's inherent I/O noise is key to accounting for the aleatory uncertainty in ML-based I/O models and to better quantifying the effect of this uncertainty on the I/O throughput predictions. I/O modeling works rarely attempt to quantify ML model uncertainty~\cite{madireddy_vae}, even though an estimate of AU significantly helps in model selection. The assumption that noise is not present in the dataset can be expressed as follows: the practitioner assumes that the process has the form $\phi(j) = f(j, \zeta)$ instead of $\phi(j) = f(j, \zeta, \omega)$.

\textbf{Sampling is independent:}
running a job on a system can be viewed as sampling the combination of system state and application behavior and measuring the I/O throughput. Most I/O modeling works implicitly assume that multiple samples taken at the same time are independent of each other. The system is modeled as equally affecting all jobs running on it; that is, the placement of different jobs on nodes, the interactions between neighboring jobs, network contention, etc., \textit{do not affect the job}. This assumption can be expressed as: the process has the form $\phi(j) = f(j, \zeta_g(t), \omega)$, not $\phi(j) = f(j, \zeta, \omega)$.

\textbf{Process is stationary:}
a common assumption is that the data-generating process is stationary, and that the same job run at different times achieves the same I/O throughput. As hardware fails, new nodes are provisioned, and shared libraries get updated, the system evolves over time. This assumption is therefore incorrect, and ignoring that fact, e.g., by not exposing \textit{when} a job is run to the ML model, may cause hard-to-diagnose errors. In other words, the assumption is that the process has the form $\phi(j) = f(j, \omega)$ and that $f_g(j, \zeta_g(t)) = 0$.

\section{Classifying I/O throughput prediction errors}\label{sec:taxonomy}
No matter the problem to which machine learning is applied, a systematic characterization of the sources of errors is crucial to improving model accuracy. While there is no substitute for `looking at the data' to understand the root cause of a problem, this approach does not scale to large datasets. We seek a systematic way to understand the barriers to greater accuracy and to improve ML models applied to system data.

The key questions we ask in this work are: What are the impediments to the successful application of learning algorithms to understanding I/O? Should ML practitioners focus on acquiring more data on HPC applications or on the HPC system? How much of the error stems from poor ML model architectures?
How much of the error can be attributed to the dynamic\nnature of the system and the interactions between concurrent jobs? What fraction of jobs exhibit a truly novel I\/O\nbehavior compared to the jobs observed thus far? At what point are the applications \\textit{too novel}, so much so that\nusers should no longer trust the predictions of the I\/O model? We now describe five classes of errors and\ndive deeper into error attribution in Sections~\\ref{sec:stationary_errors}, \\ref{sec:nonstationary_errors},\n\\ref{sec:data_collection_errors} and \\ref{sec:generalization_errors}.\n\nThe lack of application and system observability, the inherent noise $\\omega$, and \nthe OoD jobs prevent ML models from fully capturing system behavior, causing errors. We define the I\/O\nthroughput prediction error of a model $m$ on a job $j$ as: \n\\begin{equation} \n e(j) = \\phi(j, \\zeta, \\omega) - m(j_o, \\zeta_o)\n\\end{equation}\nFollowing the $\\phi(j)$ terms from Eq.~\\ref{eq:phi_breakdown} and including the out-of-distribution\nerror, the error can be broken down as follows: \n\\begin{equation}\\label{eq:error_breakdown}\n e(j) = e_{app} + e_{system} + e_{OoD} + e_{contention} + e_{noise}\n\\end{equation}\nHere, the application modeling error $e_{app}$ is caused by a poor model fit of application behavior ($f_a(j)$ component),\nthe global system error $e_{system}$ is caused by poor predictions of system dynamics ($f_g(j, \\zeta_g(t))$ component),\nthe out-of-distribution error $e_{OoD}$ is caused by weak model generalization on novel applications or system states,\nthe contention error $e_{contention}$ is caused by poor predictions of job interactions ($f_l(j, \\zeta_l(t, j))$ component),\nand the noise error $e_{noise}$ is caused by the inability of any model to predict inherent noise ($f_n(j, \\zeta, \\omega)$ component). \nThis leaves us with five classes of errors shown at the bottom of Figure~\\ref{fig:teaser}. While attributing such an error\non a per-job basis is difficult, we will show that estimating each component across a whole dataset is possible.\n\n\\input{figures\/taxonomy}\n\n\\vspace{-0.15cm}\n\\subsection{I\/O Model Error Taxonomy and Litmus Tests}\nErrors in Equation~\\ref{eq:error_breakdown} must be estimated in a specific order shown in Figure~\\ref{fig:teaser} due\nto the specifics of individual litmus tests. For example, before the effect of aleatory and epistemic\nuncertainty can be separated, a good model must be found~\\cite{autodeuq}. Similarly, before global and local system\nmodeling errors can be separated, OoD jobs must be identified. \n\n\n\\textbf{Application modeling errors: }\nML models can have varying expressivity and may not always have the correct structure or enough parameters to fit the\navailable data. Models whose structure or training prevents them from learning the shape of the data-generating process\nare said to suffer from \\textit{approximation errors}, which are further divided into \\textit{application} and \\textit{system\nmodeling errors}. \nApplication modeling errors are caused by poor predictions of application behavior, which can be fixed through better I\/O models or hyperparameter searches. \nThe first column of Figure~\\ref{fig:teaser} illustrates the impact of application modeling errors\nwith an example hyperparameter search over two XGBoost parameters on the Theta dataset. 
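\n\nA minimal sketch of such a search, assuming the \\texttt{xgboost} and \\texttt{scikit-learn} Python APIs and synthetic stand-ins for the data (the Theta dataset itself is not reproduced here), could look as follows:\n\\begin{verbatim}\n# Hedged sketch of the two-parameter XGBoost search described above.\n# X and y are synthetic stand-ins for observable job features and\n# measured I\/O throughputs; they are not part of any released dataset.\nimport numpy as np\nfrom sklearn.model_selection import GridSearchCV\nfrom xgboost import XGBRegressor\n\nrng = np.random.default_rng(0)\nX, y = rng.random((256, 8)), rng.random(256)\n\nparam_grid = {\n    'n_estimators': [32, 64, 100, 200],  # number of trees\n    'max_depth': [6, 12, 21],            # depth of each tree\n}\nsearch = GridSearchCV(XGBRegressor(objective='reg:squarederror'),\n                      param_grid, scoring='neg_mean_absolute_error', cv=5)\nsearch.fit(X, y)\nprint(search.best_params_)\n\\end{verbatim}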
\nThe search finds that 32 trees with depth of 21 perform best, while the XGBoost defaults use 100 trees of depth 6.\nApproximation errors cannot be classified\nas epistemic or aleatory in nature, because no new features or jobs are necessary to remove this error. To\nestimate AU and EU in the dataset, methods such as AutoDEUQ~\\cite{autodeuq} require first that an appropriate model\narchitecture is found and trained, so approximation errors are the first branch of the taxonomy. \n\n\\textbf{System modeling errors:}\nsystem behavior changes over time due to transient or long-term changes such as filesystem metadata\nissues, failing components, new provisions, etc.~\\cite{lockwood_pdsw17}. A model that is only aware of application\nbehavior, but not of system state implicitly assumes that the process is stationary. It will be forced to learn the\n\\textit{average} system response to I\/O patterns, and will suffer greater prediction errors during periods when system\nbehavior is perturbed. Errors that occur due to poor modeling of the global system component $\\zeta_g(t)$ are called\n\\textit{system modeling errors}. To illustrate this class of errors, two models are trained to predict I\/O throughput in the second column of Figure~\\ref{fig:teaser}, and each model's weekly average error is plotted against time.\nThe blue model is exposed only to application behavior, while the orange model also knows the \\textit{job start time}.\nDuring service degradations, the blue model has long periods of biased errors while the orange model does not, since it knows when degradations happen.\n\n\\textbf{Generalization errors: }\nML models generally perform well on data drawn from the same distribution from which their training set\nwas collected. When exposed to samples highly dissimilar from their training set, the same models tend to make\nmispredictions. These samples are called `out-of-distribution' (OoD) because they come from new, shifted\ndistributions, or the training set does not have full coverage of the sample space.\nAs an example, the third column of Figure~\\ref{fig:teaser} shows model error before (green) and after (red) deployment, with the error\nsignificantly rising when the model is evaluated on data collected outside the training time span.\n\n\\textbf{Contention errors:}\na diverse and variable number of applications compete for compute, networking, and I\/O\nbandwidth on HPC systems and interact with each other through these shared\nresources~\\cite{10.1145\/3322789.3328743, 7877142}. Although the global system state will impact all jobs equally, the impact of resource sharing is specific to pairs\nof jobs that are interacting and is harder to observe and model. Prediction errors that occur due to lack of\nvisibility into job interactions are called \\textit{contention errors} and are shown in the fourth column of\nFigure~\\ref{fig:teaser}. Here, the I\/O throughputs of a number of identical runs (the same code and data) of different\napplications illustrate that some applications are more sensitive to contention than others, even when accounting for\nglobal system state.
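\n\nA hedged sketch of how such contention sensitivity could be estimated from job logs is given below; it assumes only the \\texttt{pandas} and \\texttt{numpy} Python APIs, and the dataframe is a synthetic stand-in with one row per run, an application identifier, and a measured I\/O throughput:\n\\begin{verbatim}\n# Sketch: per-application sensitivity to contention, estimated as the\n# coefficient of variation of I\/O throughput across identical runs.\n# The dataframe is a synthetic stand-in, not data from a real system.\nimport numpy as np\nimport pandas as pd\n\nrng = np.random.default_rng(0)\ndf = pd.DataFrame({\n    'app': np.repeat(['A', 'B', 'C'], 50),\n    'throughput': np.concatenate([\n        rng.normal(100, 5, 50),   # app A: mildly sensitive\n        rng.normal(100, 25, 50),  # app B: strongly sensitive\n        rng.normal(100, 1, 50),   # app C: nearly insensitive\n    ]),\n})\ncv = df.groupby('app')['throughput'].agg(lambda s: s.std() \/ s.mean())\nprint(cv.sort_values(ascending=False))\n\\end{verbatim}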
\n\n\\textbf{Inherent noise errors:}\nwhile hard to measure, resource sharing errors can potentially be removed through greater insight into the system and\nworkloads. What fundamentally cannot be removed are \\textit{inherent noise errors}: errors due to random behavior by the\nsystem (e.g., dropped packets, randomness introduced through scheduling, etc.). Inherent noise is problematic both\nbecause ML models are bound to make errors on samples affected by noise and because noisy samples may impede model\ntraining. The fifth column of Figure~\\ref{fig:teaser} shows the I\/O throughput and start time differences between pairs\nof identical jobs. The leftmost column contains jobs that ran at exactly the same time, which often experience 5\\%\nor more difference in I\/O throughput.\n\n\\section{Appendix}\n\\label{sec:appendix}\n\nIn this section, we gather a few technical results that are used in the main results of the paper. \n\n\n\\begin{definition}[Type $K$ function]~\\cite{Smith:08}\nLet $g:S\\mapsto\\mathbb{R}^n$ be a function on $S\\subset\\mathbb{R}^n$. $g$ is said to be of type $K$ in $S$ if, for each $i \\in \\until{n}$, $g_i(z_1)\\leq g_i(z_2)$ holds true for any two points $z_1$ and $z_2$ in $S$ satisfying $z_1 \\leq z_2$ and $z_{1,i}=z_{2,i}$.\n\\end{definition}\n\n\\begin{lemma}\n\\label{lem:monotonicity-same-size}\nLet $\\map{g}{S}{\\mathbb{R}^N}$ and $\\map{h}{S}{\\mathbb{R}^N}$ be both of type $K$ over $S \\subset \\mathbb{R}^N$. Let $z_1(t)$ and $z_2(t)$\nbe the solutions to $\\dot{z}=g(z)$ and $\\dot{z}=h(z)$, starting from initial conditions $z_1(0)$ and $z_2(0)$, respectively. \nLet $S$ be positively invariant under $\\dot{z}=g(z)$ and $\\dot{z}=h(z)$.\nIf $g(z) \\leq h(z)$ for all $z \\in S$, and $z_1(0) \\leq z_2(0)$, then $z_1(t) \\leq z_2(t)$ for all $t \\geq 0$. \n\\end{lemma}\n\\begin{proof}\nBy contradiction, let $\\tilde{t} \\geq 0$ be the smallest time at which there exists some $k \\in \\until{N}$ such that $z_1(\\tilde{t}) \\leq z_2(\\tilde{t})$, $z_{1,k}(\\tilde{t}) = z_{2,k}(\\tilde{t})$, and \n\\begin{equation}\n\\label{eq:class-K-contradiction}\ng_k(z_1(\\tilde{t})) > h_k(z_2(\\tilde{t})).\n\\end{equation}\nSince $g(z)$ is of type $K$, $z_1(\\tilde{t}) \\leq z_2(\\tilde{t})$ and $z_{1,k}(\\tilde{t}) = z_{2,k}(\\tilde{t})$ imply that $g(z_1(\\tilde{t})) \\leq g(z_2(\\tilde{t}))$. This, combined with the assumption that $g(z) \\leq h(z)$ for all $z \\in S$ implies that $g(z_1(\\tilde{t})) \\leq h(z_2(\\tilde{t}))$, which contradicts \\eqref{eq:class-K-contradiction}.\n\\qed\n\\end{proof}\n\nLemma~\\ref{lem:monotonicity-same-size} is relevant because the basic dynamical system in our case is of type $K$. \n\n\\begin{lemma}\n\\label{lem:car-following-type-K}\nFor any $L > 0$, $m>0$, and $N \\in \\mathbb{N}$, the right hand side of \\eqref{eq:x-dynamics-in-Rn} is of type $K$ in $\\mathbb{R}_+^N$.\n\\end{lemma}\n\\begin{proof}\nConsider $\\tilde{x}, \\hat{x} \\in \\mathbb{R}_+^N$ such that $\\tilde{x} \\leq \\hat{x}$. \nIf $\\tilde{x}_i = \\hat{x}_i$ for some $i \\in \\until{N}$, then, according to \\eqref{eq:inter-vehicle-distance-Rn}, $y_i(\\tilde{x})-y_i(\\hat{x})= \\left(\\tilde{x}_{i+1} - \\hat{x}_{i+1} \\right) -\\left(\\tilde{x}_i-\\hat{x}_i \\right) = \\tilde{x}_{i+1} - \\hat{x}_{i+1}$ if $i \\in \\until{N-1}$, and is equal to $\\left(\\tilde{x}_{1} - \\hat{x}_{1} \\right) - \\left(\\tilde{x}_N-\\hat{x}_N \\right)= \\tilde{x}_{1} - \\hat{x}_{1}$ if $i=N$. In either case, $y_i(\\tilde{x}) \\leq y_i(\\hat{x})$, which also implies $y_i^m(\\tilde{x}) \\leq y_i^m(\\hat{x})$ for all $m > 0$.\n\\qed\n\\end{proof}\n\n\n\n\n\nIn order to state the next lemma, we need a couple of additional definitions. 
\n\n\\begin{definition}[Monotone Aligned and Monotone Opposite Functions]\nTwo strictly monotone functions $\\map{h}{\\mathbb{R}}{\\mathbb{R}}$ and $\\map{g}{\\mathbb{R}}{\\mathbb{R}}$ are said to be \\emph{monotone-aligned} if they are both either strictly increasing, or strictly decreasing. Similarly, the two functions are called \\emph{monotone-opposite} if one of them is strictly increasing, and the other is strictly decreasing. \n\\end{definition}\n\n\n\\begin{lemma}\n\\label{lem:appendix-general-summation}\nLet $\\map{h}{\\mathbb{R}_+}{\\mathbb{R}}$ and $\\map{g}{\\mathbb{R}_+}{\\mathbb{R}}$ be strictly monotone functions. Then, for every $y \\in \\mathcal S_N^L$, $N \\in \\mathbb{N}$, $L>0$,\n\\begin{equation}\n\\label{eq:summation-general}\n\\sum_{i=1}^N h(y_i) \\left(g(y_{i+1}) - g(y_i) \\right)\n\\end{equation}\nis non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned. Moreover, \\eqref{eq:summation-general} is equal to zero if and only if $y=\\frac{L}{N} \\mathbf{1}$.\n\\end{lemma}\n\\begin{proof}\nFor $i \\in \\until{N}$, let $I_i$ be the interval with end points $g(y_i)$ and $g(y_{i+1})$. \nFor $i \\in \\until{N}$, let $f_i(z):=\\sgn{g(y_{i+1}) - g(y_i)} h(y_i) \\mathbf{1}_{I_i}(z)$. \nLet $g_{\\text{min}}:=\\min_{i \\in \\until{N}} g(y_i)$, and $g_{\\text{max}}:=\\max_{i \\in \\until{N}} g(y_i)$. With $f(z):=\\sum_{i=1}^N f_i(z)$,\n \\eqref{eq:summation-general} can then be written as:\n\\begin{equation}\n\\label{eq:line-integral}\n\\sum_{i=1}^N h(y_i) \\left(g(y_{i+1}) - g(y_i) \\right) = \\int_{g_{\\text{min}}}^{g_{\\text{max}}} f(z) \\, dz.\n\\end{equation}\nWe now show that, for every $z \\in [g_{\\text{min}}, g_{\\text{max}}] \\setminus \\{g(y_i): i \\in \\until{N}\\}$, $f(z)$ is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned. \nThis, together with \\eqref{eq:line-integral}, will then prove the lemma.\n\nIt is easy to see that every $z \\in [g_{\\text{min}}, g_{\\text{max}}] \\setminus \\{g(y_i): i \\in \\until{N}\\}$ belongs to an even number of intervals in $\\{I_i: \\, i \\in \\until{N}\\}$, say $I_{\\ell_1}, I_{\\ell_2}, \\ldots$, with $\\ell_1 < \\ell_2 < \\ldots$ (see Figure \\ref{fig:intervals} for an illustration).\nWe now show that $f_{\\ell_1}(z) + f_{\\ell_2}(z)$ is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned. The same argument holds true for $f_{\\ell_3}(z)+f_{\\ell_4}(z), \\ldots$. \nAssume that $g(y_{\\ell_1}) \\leq g(y_{\\ell_2})$; the other case leads to the same conclusion. By definition of $f_i$'s, $f_{\\ell_1}(z)=h(y_{\\ell_1})$ and $f_{\\ell_2}(z)=-h(y_{\\ell_2})$. $g(y_{\\ell_1}) \\leq g(y_{\\ell_2})$ implies that \n$f_{\\ell_1}(z)+f_{\\ell_2}(z)=h(y_{\\ell_1})-h(y_{\\ell_2})$ is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned, with the equality holding true if and only if $y_{\\ell_1}=y_{\\ell_2}$. \n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=15.5cm]{intervals} \n \\caption{A schematic view of (a) $f_i(z), i=\\{1,2,3,4\\}$ and (b) $f(z)=\\sum_{i=1}^4f_i(z)$ for a $y\\in\\mathcal S_4^L$ ($L=1$) with $y_{\\text{min}} = y_2 < y_4 < y_3 < y_1 = y_{\\mathrm{max}}$ and $m<1$. }\n \\label{fig:intervals}\n\\end{figure}\n\\qed\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lemma:t-n}\nFor $n \\in \\mathbb{N} \\setminus \\{1\\} $, let $\\psi_n$ be the $n$-fold convolution of $\\psi \\in \\Psi$. 
Then, \n$$\\int_{0}^t z \\, \\psi(z)\\psi_{n-1}(t-z)\\mathrm{d} z = \\frac{t}{n}\\psi_n(t) \\qquad \\forall \\, t \\geq 0$$\n\\end{lemma}\n\\begin{proof}\nLet $J_1,\\cdots,J_n$ be $n$ random variables, all with distribution $\\psi$. Therefore, the probability distribution function of the random variable $V:=\\sum_{i=1}^n J_i$ is $\\psi_n$. \nUsing linearity of the expectation, we get that \n\\begin{equation*}\nt=E\\left[\\sum_{i=1}^n J_i|V=t\\right]=\\sum_{i=1}^n E\\left[J_i|V=t\\right]=n \\, E\\left[J_1|V=t \\right]\n\\end{equation*}\ni.e., \n\\begin{equation}\n\\label{eq:D1-cond-sum}\nE\\left[J_1|V=t \\right] = \\frac{t}{n}\n\\end{equation}\nLet $f_{J_1|V}(j_1|t)$ denote the probability distribution function of $J_1|V$. By definition:\n\\begin{equation}\\label{eq:f-d1-z}\nf_{J_1|V}(j_1|t) = \\frac{f_{J_1,V}(j_1,t)}{\\psi_n(t)}=\\frac{\\psi(j_1)\\psi_{n-1}(t-j_1)}{\\psi_n(t)}\n\\end{equation}\nTherefore, using \\eqref{eq:D1-cond-sum} and \\eqref{eq:f-d1-z}, we get that\n\\begin{equation*}\nE[J_1|V=t] = \\int_0^t zf_{J_1|V}(z|t)\\, \\mathrm{d} z = \\int_0^t z\\frac{\\psi(z)\\psi_{n-1}(t-z)}{\\psi_n(t)} \\, \\mathrm{d} z = \\frac{t}{n}\n\\end{equation*}\nSimple rearrangement gives the lemma. \n\\qed\n\\end{proof}\n\n\nThe following is an adaptation of \\cite[Lemma 2.3.4]{Ross:96}. \n\\begin{lemma}\\label{lemma:busy-period-type1-prob}\nLet $a_1, \\cdots, a_{n-1}$ denote the ordered values from a set of $n-1$ independent uniform $(0,t)$ random variables. Let $\\tilde d_0=z \\geq 0$ be a constant and $\\tilde d_1, \\tilde d_2, \\cdots \\tilde d_{n-1}$ be i.i.d. non-negative random variables that are also independent of $\\{a_1, \\cdots, a_{n-1}\\}$, then \n\\begin{align*}\n\\Pr(\\tilde d_{k}+ \\cdots +\\tilde d_{n-1}\\leq a_{n-k},k=1,\\cdots,n-1 |\\tilde d_0+ \\cdots +\\tilde d_{n-1}=t, \\tilde{d}_0=z)\n = \\begin{cases} z\/t & z < t\\\\ 1 & z \\geq t\\end{cases}\n\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}\\label{lemma:busy-period-mean}\nFor any $\\lambda < L\/\\bar \\psi$, $L > 0$, $m=1$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$, the mean value of the busy period duration is equal to $\\bar \\psi \/ (L - \\lambda \\bar \\psi)$.\n \\end{lemma}\n \\begin{proof}\nA busy period, say of duration $B$, is initiated by the arrival of a vehicle, say $j$, when the system is idle. \nLet the number of vehicles that arrive during the busy period be $N_{bn}$. Note that $N_{bn}$ does not include the vehicle initiating the busy period. Therefore, the workload brought into the system during the busy period is equal to $w_B=\\sum_{i=j}^{j+N_{bn}} d_i$.\nThe expected value of $N_{bn}$ can be obtained by conditioning on the duration of the busy period: \n\\begin{equation}\\label{eq:nb-expectation}\nE[N_{bn}]=E\\left[E[N_{bn}|B]\\right]=E[\\lambda B]=\\lambda E[B]\n\\end{equation}\nwhere the second equality follows from the fact that the arrival process is a Poisson process. \nSince the event $\\{N_{bn}+1=n\\}$ is independent of $\\{d_{j+i},i > n\\}$, $N_{bn}+1$ is a stopping time for the sequence $\\{d_{j+i},i\\geq 1\\}$. Therefore, using Wald's equation, e.g., see \\cite[Theorem 3.3.2]{Ross:96}, and \\eqref{eq:nb-expectation}, the expected value of the workload $w_B$ added to the system during the busy period $B$ is given by: \n\\begin{equation}\\label{eq:wb-expectation}\nE[w_B]=(E[N_{bn}]+1) \\, \\bar \\psi=(\\lambda E[B]+1) \\, \\bar \\psi.\n\\end{equation}\n\n\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=5.5cm]{busy-period-new-notation} \n \\caption{(a) Queue length process and (b) workload process during a busy period. 
}\n \\label{fig:busy-period}\n\\end{figure}\n\nSince the workload decreases at a constant rate $L$ during a busy period, we have $B=w_B\/L$ (see Figure \\ref{fig:busy-period} for an illustration). Therefore, $E[B]=E[w_B]\/L$, which when combined with \\eqref{eq:wb-expectation}, establishes the lemma. \n\\qed\n\\end{proof}\n\n\n\n\n\\begin{remark}\n\\label{remark:waiting-time}\nSince the mean busy period duration is an upper bound on the mean waiting time, Lemma \\ref{lemma:busy-period-mean} also gives an upper bound on the mean waiting time. One can then use Little's law~\\cite{Kleinrock:75}\\footnote{Little's law has previously been used in the context of processor sharing queues, e.g., in \\cite{Altman.ea:06}.} to show that the mean queue length is upper bounded by $\\lambda\\bar \\psi \/ (L - \\lambda \\bar \\psi)$.\n\\end{remark}\n\nLet $\\mathcal{I}(t):=\\int_0^t\\delta_{\\{w(s)=0\\}}\\mathrm{d} s$ be the cumulative \\emph{idle time} up to time $t$. The following result characterizes the long run proportion of the idle time in the linear case.\n\\begin{proposition}\n\\label{prop:long-run-idle-time}\nFor any $\\lambda < L\/\\bar \\psi$, $L > 0$, $\\varphi \\in \\Phi, \\psi \\in \\Psi$, the long-run proportion of time in which the HTQ is idle is given by the following:\n$$\\lim_{t\\to\\infty} \\frac{\\mathcal{I}(t)}{t}=1-\\frac{\\lambda \\bar \\psi}{L}>0 \\quad a.s.$$\n\\end{proposition}\n\n\\begin{proof}\nHTQ alternates between busy and idle periods. Let $Z=I+B$ be the duration of a cycle that contains an idle period of length $I$ followed by a busy period of length $B$. The idle period, $I$, has the same distribution as the inter-arrival times, i.e., an exponential random variable with mean $1\/\\lambda$, and the mean value of $B$ is given in Lemma \\ref{lemma:busy-period-mean}. Note that the cycle durations, $Z$, are i.i.d. random variables. Thus, the busy-idle profile of the system is an alternating renewal process where renewals correspond to the moments at which the system gets idle. Suppose the system earns reward at a rate of one per unit of time when it is idle (and thus the reward for a cycle equals the idle time of that cycle, i.e., $I$). Then, the total reward earned up to time $t$ is equal to the total idle time in $[0,t]$ (or $\\mathcal{I}(t)$), and by the result for renewal reward processes (see \\cite{Ross:96}, Theorem 3.6.1), with probability one,\n$\\lim_{t\\to\\infty} \\mathcal{I}(t)\/t=E[I]\/(E[B]+E[I])$. Substituting $E[I]=1\/\\lambda$ and, from Lemma \\ref{lemma:busy-period-mean}, $E[B]=\\bar \\psi\/(L-\\lambda \\bar \\psi)$, gives $E[I]\/(E[B]+E[I])=1-\\lambda \\bar \\psi\/L$, which establishes the proposition. \n\\qed\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{Busy Period Distribution}\n\\label{sec:busy-period}\nIn this section, we compute the cumulative distribution function for the number of new arrivals during a busy period for an HTQ with constant service rate, say $p>0$. This could, e.g., correspond to \\eqref{eq:inter-vehicle-distance-dynamics} for $m=1$. However, our analysis in this section is not restricted to this specific model, but applies to any HTQ with constant service rate $p$. \nThis cumulative distribution for the number of new arrivals during a busy period, while of independent interest, will be used to derive lower bounds on the throughput in the super-linear case in Section~\\ref{subsec:superlinear}. Our analysis is inspired by that of the M\/G\/1 queue, e.g., see \\cite{Ross:96}; our treatment of non-zero initial conditions appears to be novel. \n\nLet us consider an arbitrary busy period spanning time interval $(0,t)$, without loss of generality. For a non-zero initial condition, one has to distinguish between the first and subsequent busy periods. 
Let the workload at the beginning of the arbitrary busy period, denoted as $d_0$, be sampled from $\\theta$. The relationship between $\\theta$ and $\\psi$ is as follows. \nIf the system starts from a non-zero initial condition with initial workload $w_0>0$, then the value of $d_0$ for the first busy period will be deterministic and equal to $w_0$, and hence $\\theta=\\delta_{w_0}$. However, for subsequent busy periods, or if the initial condition is zero, $d_0$ is sampled from $\\theta=\\psi$.\nThe workloads brought to the system by arriving vehicles, $\\{d_i\\}_{i=1}^\\infty$, equal the distances that the vehicles wish to travel, and are sampled independently and identically from the distribution $\\psi$. When the system is busy, the workload decreases at a given constant rate $p > 0$. The busy period ends when the workload becomes zero. \n\\begin{remark}\nWe emphasize that $d_0$ denotes the workload at the beginning of a busy period (see Figure \\ref{fig:normalized-distance} for further illustration), and hence is not equal to zero even when the queue starts from a zero initial condition.\n\\end{remark}\n\n \n\nIn order to align our calculations with the standard M\/G\/1 framework, where the service rate is assumed to be unity, we consider normalized workloads, $\\tilde{d}_i:=d_i\/p$ for all $i \\in \\{0,1,\\cdots\\}$ (see Figure \\ref{fig:normalized-distance} for an illustration). Correspondingly, let the distributions for the normalized distances be denoted as $\\tilde{\\theta}$ and $\\tilde{\\psi}$. Let the arrival time of the $k$-th new vehicle during $(0,t)$ be denoted as $T_k$, and let $N_{bn}$ denote the number of arrivals in $(0,t)$, i.e., the total number of arrivals over the entire duration of the busy period, including the vehicle which initiates the busy period, is $N_{bn} + 1$. \n\n A busy period ends at time $t$, and $N_{bn}=n-1$, if and only if: \n\\begin{enumerate}[(i)]\n\\item $T_k\\leq \\tilde d_0+\\cdots+ \\tilde d_{k-1}, \\qquad k=1,\\cdots,n-1$ \n\\item $\\tilde d_0+\\cdots+ \\tilde d_{n-1}=t$\n\\item There are exactly $n-1$ arrivals in $(0,t)$\n\\end{enumerate} \n\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=8cm]{normalized-distance} \n\\caption{Evolution of the workload during the first two busy periods for an HTQ with constant service rate $p$, starting from a non-zero initial condition. In the first busy period, $d_0$ is equal to the workload $w_0$ associated with the non-zero initial condition. In the second busy period, $d_0$ is equal to the workload brought by the first vehicle that initiates that busy period.}\n\\label{fig:normalized-distance}\n\\end{center}\n\\end{figure}\n\n\n\nBy treating densities as if they are probabilities, we get:\n\\begin{align}\n\\label{eq:prop-busy-des}\n\\Pr & (B =t \\text{ and }N_{bn} = n -1)\\nonumber \\\\\n=& \\Pr(\\tilde d_0+\\cdots+ \\tilde d_{n-1}=t, n-1 \\text{ arrivals in $(0,t)$}, T_k\\leq \\tilde d_0+\\cdots+ \\tilde d_{k-1}, k=1,\\cdots,n-1) \\nonumber \\\\\n=& \\int_0^t\\Pr(T_k\\leq \\tilde d_0+\\cdots+ \\tilde d_{k-1}, k=1,\\cdots,n-1| n-1 \\text{ arrivals in $(0,t)$}, \\tilde d_0+\\cdots+ \\tilde d_{n-1}=t, \\tilde d_0 =z) \\nonumber \\\\\n& \\times \\Pr(n-1 \\text{ arrivals in $(0,t)$}, \\tilde d_1+\\cdots+ \\tilde d_{n-1}=t-z)\\, \\tilde \\theta(z) \\, \\mathrm{d} z \n\\end{align}\nwhere we recall that $B$ is the random variable corresponding to the busy period duration. 
By the independence of normalized distances and the arrival process, the second probability term in the integrand in \\eqref{eq:prop-busy-des} can be expressed as\n\\begin{align}\n\\label{eq:prop-busy-part2}\n\\Pr(n-1 \\text{ arrivals in $(0,t)$}, \\tilde d_1+\\cdots+ \\tilde d_{n-1}=t-z)=e^{-\\lambda t}\\frac{(\\lambda t)^{n-1}}{(n-1)!} \\tilde \\psi_{n-1}(t- z) \n\\end{align}\nwhere $\\tilde\\psi_n$ is the $n$-fold convolution of $\\tilde\\psi$ with itself. \n\nIn the first probability term in \\eqref{eq:prop-busy-des}, it is given that the system receives $n-1$ arrivals in $(0,t)$ and since the arrival process is a Poisson process, the ordered arrival times, $\\{T_1,T_2,\\cdots,T_{n-1}\\}$, are distributed as the ordered values of a set of $n-1$ independent uniform $(0,t)$ random variables $\\{a_1,a_2,\\cdots,a_{n-1}\\}$ (see Theorem 2.3.1 in \\cite{Ross:96}). Thus,\n\\begin{align}\\label{eq:busy-period-given-n-prob}\n\\Pr & (T_k\\leq \\tilde d_0+\\cdots+ \\tilde d_{k-1}, k=1,\\cdots,n-1| n-1 \\text{ arrivals in $(0,t)$}, \\tilde d_0+\\cdots+ \\tilde d_{n-1}=t,\\tilde d_0 =z) \\nonumber \\\\\n& = \\Pr(a_k\\leq \\tilde d_0+\\cdots+\\tilde d_{k-1}, k=1,\\cdots,n-1|\\tilde d_0+\\cdots+\\tilde d_{n-1} =t,\\tilde d_0 =z)\n\\end{align}\n\n\n\n\n\n\n\n\nBy noting that $t-U$ will also be a uniform $(0,t)$ random variable whenever $U$ is, it follows that $a_1,\\cdots,a_{n-1}$ has the same joint distribution as $t-a_{n-1},\\cdots,t-a_1$. Thus, replacing $a_k$ with $a_{n-k}$ for $k\\in\\{1,\\cdots,n-1\\}$ in \\eqref{eq:busy-period-given-n-prob}, we get\n\\begin{align}\\label{eq:prob-manipulation}\n \\Pr&(a_k \\leq \\tilde d_0+\\cdots+\\tilde d_{k-1}, k=1,\\cdots,n-1|\\tilde d_0+\\cdots+\\tilde d_{n-1} =t,\\tilde d_0 =z) \\nonumber \\\\\n & =\\Pr(t-a_{n-k} \\leq \\tilde d_0+\\cdots+\\tilde d_{k-1}, k=1,\\cdots,n-1|\\tilde d_0+\\cdots+\\tilde d_{n-1} =t,\\tilde d_{0} =z) \\nonumber \\\\\n & = \\Pr(t-a_{n-k} \\leq t-(\\tilde d_{k}+\\cdots+\\tilde d_{n-1}), k=1,\\cdots,n-1|\\tilde d_0+\\cdots+\\tilde d_{n-1} =t,\\tilde d_0 =z) \\nonumber \\\\\n & = \\Pr(a_{n-k} \\geq \\tilde d_{k}+\\cdots+\\tilde d_{n-1}, k=1,\\cdots,n-1|\\tilde d_0+\\cdots+\\tilde d_{n-1} =t,\\tilde d_0 =z) = \\begin{cases} z\/t & z < t\\\\ 1 & z \\geq t\\end{cases}\n\\end{align}\nwhere the last equality follows from Lemma \\ref{lemma:busy-period-type1-prob}. Combining \\eqref{eq:prop-busy-part2} and \\eqref{eq:prob-manipulation} with \\eqref{eq:prop-busy-des} characterizes the joint distribution of the busy period duration and the number of arrivals during it. In the sequel, for a constant service rate $p$, we let $C^{(1)}_p(M,\\lambda,w_0)$ denote the probability that at most $M$ new vehicles arrive during a busy period initiated with workload $w_0$, and $C^{(2)}_p(M,\\lambda)$ the probability that at most $M$ vehicles in total are served during a busy period initiated by a single arriving vehicle.\n\n\\subsection{The $m>1$ case}\nIn this section, we assume that the system starts from an empty initial condition, and we derive a probabilistic lower bound on the throughput of the system. Our analysis focuses on the busy period and the number of vehicles served in a busy period. \n\nWe first assume an upper bound on the queue length, i.e., $N(t)\\leq M \\; \\forall t\\geq0$, which by Lemma \\ref{lem:service-rate-bounds} gives a lower bound on the service rate of the system, i.e., $s(y(t))\\geq L^mM^{1-m}$ whenever the system is busy. Then, we use this lower bound on the service rate to compare the system to a system with the same temporal-spatial arrival process but a slower decrease rate of workload. We use this comparison as a tool to find a lower bound on the probability of achieving the assumed upper bound on the queue length. This result leads to a probabilistic lower bound on the throughput of the system, which is stated in the main theorem of this subsection. Our analysis heavily depends on the probability distribution of busy periods, which is described in Section~\\ref{sec:busy-period}.
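\n\nThe busy period distribution derived above can also be checked numerically. The following Monte Carlo sketch, written under the simplifying assumption of a deterministic normalized workload $\\tilde d_i \\equiv D$ (the case worked out in the example at the end of this section), simulates busy periods of a constant-rate queue and compares the empirical distribution of the total number of vehicles served in a busy period against the closed form $e^{-\\lambda n D}(\\lambda n D)^{n-1}\/n!$; all parameter values are illustrative only:\n\\begin{verbatim}\n# Monte Carlo sketch: total number of vehicles served in a busy period\n# of a queue with Poisson(lam) arrivals, unit service rate, and\n# deterministic normalized workload D per vehicle. Compared against\n# exp(-lam*n*D) * (lam*n*D)**(n-1) \/ n!  (illustrative parameters).\nimport numpy as np\nfrom math import exp, factorial\n\nrng = np.random.default_rng(1)\nlam, D, trials = 0.5, 1.0, 100000\n\ndef vehicles_in_busy_period():\n    n, w = 1, D          # one vehicle initiates the busy period\n    while True:\n        gap = rng.exponential(1.0 \/ lam)\n        if gap >= w:     # queue empties before the next arrival\n            return n\n        w = w - gap + D  # next vehicle joins before the queue empties\n        n += 1\n\ncounts = np.array([vehicles_in_busy_period() for _ in range(trials)])\nfor n in range(1, 6):\n    closed_form = exp(-lam*n*D) * (lam*n*D)**(n - 1) \/ factorial(n)\n    print(n, (counts == n).mean(), closed_form)\n\\end{verbatim}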
\n\n\n \n \\subsubsection{From busy period to throughput}\n\n\n \n \\begin{proposition}\\label{prop:nbp-lower-bound}\n For any $m>1$, $w_0=0$, $L>0$, $\\lambda > 0$, $\\psi \\in \\Psi$ and $M\\in\\{2,3,\\cdots\\}$, given $N(t) < M$ for all $t>0$, then \n $$\\Pr(N_{bp}\\leq M)\\geq C^{(2)}_{L^m M^{1-m}}(M,\\lambda)$$\n \\end{proposition}\n \\begin{proof}\n For $m>1$, by Lemma \\ref{lem:service-rate-bounds} and the assumed upper bound on the queue length,\n \\begin{equation}\n s(t)\\geq L^mM^{1-m} = p\n \\end{equation} \n whenever the system is busy.\n \n We consider another system with the same realization of the temporal-spatial arrival process but a fixed service rate of $p$, and call it the \\emph{slower} system. Let $B_1$ and $B_2$ be random variables that denote the length of a busy period in the actual and the slower system, respectively. Since the realization of the arrival process is the same for both systems, and the workload in the slower system decreases at a smaller rate, it is clear that\n \\begin{equation}\\label{eq:workload-actual-slower}\n w_1(t)\\leq w_2(t) \\quad \\forall t>0\n \\end{equation}\nwhere $w_1(t)$ and $w_2(t)$ denote the workload processes in the actual and the slower system, respectively. \n\nNow, let $t_s$ be the time at which a (random) busy period in the slower system ends and $t_f$ be the time at which the next busy period in that system ends (see Figure \\ref{fig:b1-b2} for an illustration). Obviously, $w_2(t_s)=w_2(t_f)=0$. Also, by \\eqref{eq:workload-actual-slower}, $w_1(t_s)=w_1(t_f)=0$. Let $t_1\\in(t_s,t_f)$ be the time of the first arrival after $t=t_s$. Note that this time is the same in both systems. Therefore, in the slower system, the busy period that lies in $[t_s,t_f]$ has the length of $t_f-t_1$. Let $t_2\\in[t_1,t_f]$ be the time at which the actual system gets idle for the first time after $t=t_1$. We want to show that $t_2\\leq t_f$. By contradiction, assume $t_2> t_f$. Thus, $w_1(t_f)>0$, but $w_2(t_f)=0$. This contradicts \\eqref{eq:workload-actual-slower} and proves that $t_2\\leq t_f$. This implies that the longest busy period of the actual system contained in $[t_s,t_f]$ is no longer than the busy period of the slower system contained in $[t_s,t_f]$. This argument holds for any busy period in the slower system. Thus, with probability one, \n \\begin{equation}\\label{eq:relation-b1-b2}\n B_1\\leq B_2\n \\end{equation} \n \n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=9cm]{b1-b2} \n\\caption{Workload process for two systems with the same realization of the temporal-spatial arrival process: a system with a slower service rate (solid line) and a system with a faster service rate (dashed line).}\n\\label{fig:b1-b2}\n\\end{center}\n\\end{figure}\n\nNow, let $N_{bp}$ and $\\tilde N_{bp}$ be the number of vehicles served in a busy period in the actual system and the slower system, respectively. Note that, conditioned on the busy period durations, $N_{bp}$ and $\\tilde{N}_{bp}$ are Poisson random variables with means $\\lambda B_1$ and $\\lambda B_2$, respectively, and by \\eqref{eq:relation-b1-b2}, $\\lambda B_1\\leq \\lambda B_2$. A Poisson random variable is stochastically increasing in its mean (see, e.g., 
\\cite{Ross:96}), therefore\n\\begin{equation}\n\\tilde{N}_{bp}\\geq_{\\text{st}}{N}_{bp}\n\\end{equation} \nwhich establishes this proposition.\n\n\n\n \\end{proof}\n \n \\begin{proposition}\\label{prop:nbn-lower-bound}\n For any $m>1$, $w_0>0$, $n_0\\in\\mathbb{N}$, $L>0$, $\\lambda > 0$, $\\psi \\in \\Psi$ and $M\\in\\{2,3,\\cdots\\}$, given $N(t) < M+n_0$ for all $t>0$, then \n $$\\Pr(N_{bn}\\leq M)\\geq C^{(1)}_{L^m (M+n_0)^{1-m}}(M,\\lambda,w_0)$$\n \\end{proposition}\n \\begin{proof}\n The proof is very similar to the proof of Proposition \\ref{prop:nbp-lower-bound}; however, in this case the upper bound on the queue length is $M+n_0$, and the lower bound on the service rate is $L^m(M+n_0)^{1-m}$. \n \\end{proof}\n\n\n\n \n\n\n\n \n \\begin{lemma} \\label{lemma:st-order-queulength-nbp}\n For any $m>1$, $L>0$, $\\lambda>0$, $\\psi \\in \\Psi$, $w_0=0$,\n $$N_{bp}\\geq_{\\text{st}}N(t)\\quad \\forall t\\geq 0$$\n \\end{lemma}\n \\begin{proof}\n Conditioned on the length of the busy period, $B=b$, the event $\\{N_{bp}\\leq M\\}$ implies the event $\\{N(t)\\leq M\\}$ for all $t\\in[0,b]$,\n where here $t=0$ denotes the time at which the busy period starts. Then,\n \\begin{equation}\n \\Pr(N_{bp}\\leq M | B = b) \\leq \\Pr(N(t)\\leq M | B = b) \\quad \\forall t\\in[0,b]\n \\end{equation}\n and by taking the expectation of the above with respect to the length of the busy period, we get \n \\begin{equation}\n \\Pr(N_{bp}\\leq M) \\leq \\Pr(N(t)\\leq M) \\quad \\forall t>0\n \\end{equation}\n which establishes this lemma. \n \\end{proof} \n\n\n Now, we are ready to state the main result in the following theorem, which gives a probabilistic lower bound on the throughput of the system for $m>1$.\n \\begin{theorem}\n Given $\\delta\\in(0,1)$, $L>0$, $m>1$, $\\psi\\in\\Psi$, $w_0=0$, then\n $$\\lambda_{\\max}(m, L, \\psi,\\delta)\\geq \\max_{M\\geq 2}\\;\\max\\{\\lambda>0 \\;| C^{(2)}_{L^m M^{1-m}}(M,\\lambda)\\geq1-\\delta\\}$$\n \\end{theorem}\n \\begin{proof}\nFor a given upper bound on the queue length, $M\\in\\{2,3,\\cdots\\}$, \n \\begin{equation}\\label{eq:prob-bound-queue}\n \\Pr(N\\leq M) \\geq \\Pr(N_{bp}\\leq M) \\geq C^{(2)}_{L^m M^{1-m}}(M,\\lambda)\n \\end{equation}\n where the first inequality follows by Lemma \\ref{lemma:st-order-queulength-nbp} and the second inequality follows by Proposition \\ref{prop:nbp-lower-bound}. Therefore, based on Definition \\ref{def:throughput}, $\\tilde \\lambda = \\max\\{\\lambda>0 \\;| C^{(2)}_{L^m M^{1-m}}(M,\\lambda)\\geq 1-\\delta\\}$ gives a lower bound on $\\lambda_{\\max}$. In Definition \\ref{def:throughput}, however, only boundedness of the queue length matters. Thus, one can maximize $\\tilde \\lambda$ over $M\\geq 2$ to get a tighter lower bound. \n\\qed\n \\end{proof}
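\n\nThe inner maximization over $\\lambda$ in the theorem above can be carried out numerically for each fixed $M$, e.g., by bisection, since $C^{(2)}_{L^m M^{1-m}}(M,\\lambda)$ is non-increasing in $\\lambda$. A hypothetical sketch for the deterministic travel distance case treated in the example at the end of this section, where $C^{(2)}_{L^m M^{1-m}}(M,\\lambda)$ reduces to the partial sum $\\sum_{n=1}^M e^{-\\lambda nL\/p}(\\lambda nL\/p)^{n-1}\/n!$ with $p=L^mM^{1-m}$, is given below; the parameter values are illustrative only:\n\\begin{verbatim}\n# Sketch: lower bound on throughput from the preceding theorem for\n# deterministic travel distances (see the example below), where\n# C2(M, lam) = sum_{n=1..M} exp(-mu*n)*(mu*n)**(n-1)\/n!, mu = lam*L\/p,\n# and p = L**m * M**(1-m). Parameter values are illustrative only.\nfrom math import exp, factorial\n\nL, m, delta = 1.0, 1.5, 0.05\n\ndef C2(M, lam):\n    mu = lam * L \/ (L ** m * M ** (1 - m))\n    return sum(exp(-mu * n) * (mu * n) ** (n - 1) \/ factorial(n)\n               for n in range(1, M + 1))\n\ndef max_lambda(M, tol=1e-6):\n    lo, hi = 0.0, 10.0   # C2(M, .) is non-increasing in lam\n    while hi - lo > tol:\n        mid = 0.5 * (lo + hi)\n        lo, hi = (mid, hi) if C2(M, mid) >= 1 - delta else (lo, mid)\n    return lo\n\nprint(max(max_lambda(M) for M in range(2, 40)))\n\\end{verbatim}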
\n \n \\begin{theorem}\n Given $\\delta,\\delta_1,\\delta_2\\in(0,1)$ with $(1-\\delta_1)(1-\\delta_2)\\geq 1-\\delta$, $L>0$, $m>1$, $\\psi\\in\\Psi$, $w_0>0$, $n_0\\in\\mathbb{N}$, then\n $$\\lambda_{\\max}(m, L, \\psi,\\delta)\\geq \\max_{M_1,M_2\\geq 2}\\;\\max\\{\\lambda>0 \\;| C^{(1)}_{L^m (M_1+n_0)^{1-m}}(M_1,\\lambda,w_0)\\,C^{(2)}_{L^m M_2^{1-m}}(M_2,\\lambda)\\geq(1-\\delta_1)(1-\\delta_2)\\}$$\n \\end{theorem}\n \n In order to further clarify the analysis in this section, the following example shows the application of these results in order to probabilistically characterize the throughput of the system for a specific spatial distribution. \n \n \\begin{example}[Deterministic traveled distances] Assume that vehicles, upon arrival, wish to travel a deterministic distance of length $L>0$, i.e., $\\psi(x)=\\delta_{\\{x=L\\}}$, and their motion is determined by a car following model with $m>1$. Also, further assume that the queue length does not exceed $M\\in\\{2,3,\\cdots\\}$. In this case, the lower bound on the service rate is $p=L^mM^{1-m}$. Application of \\eqref{eq:busy-period-and-nbp-joint} gives\n \\begin{equation}\\label{eq:busy-period-joint-deterministic}\n D_p(t,N_{bp}=n) = \\int_0^te^{-\\lambda y}\\frac{(\\lambda y)^{n-1}}{n!}\\delta_{\\{y=nL\/p\\}}\\mathrm{d} y=\\begin{cases}0 & t\\leq nL\/p\\\\ e^{-\\lambda nL\/p}\\frac{(\\lambda nL\/p)^{n-1}}{n!}& t> nL\/p\\end{cases}\n \\end{equation}\nand by Lemma \\ref{lemma:st-order-queulength-nbp} and Proposition \\ref{prop:nbp-lower-bound},\n\\begin{equation}\n \\Pr(N\\leq M)\\geq \\Pr(N_{bp}\\leq M)\\geq C^{(2)}_{L^m M^{1-m}}(M,\\lambda) = \\sum_{n=1}^M e^{-\\lambda nL\/p}\\frac{(\\lambda nL\/p)^{n-1}}{n!} \\geq1-\\delta.\n\\end{equation}\nThe above inequality can be used in order to find the maximum throughput for which the queue length remains bounded by $M$ with probability of at least $1-\\delta$.\n \\end{example}\n\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nIn this paper, we formulated and analyzed a novel horizontal traffic queue. A key characteristic of this queue is the state dependence of its service rate. We establish useful properties of the service rate dynamics. We also extend calculations for M\/G\/1 busy period distributions to our setting, even for non-empty initial conditions. These results allow us to provide tight results for throughput in the linear case, and probabilistic bounds on queue length over finite time horizons in the super-linear case. We also study throughput under a batch release control policy, where the additional waiting induced by the control policy is interpreted as a perturbation to the arrival process. We provide a lower bound on the throughput for a given maximum permissible perturbation. In particular, if the allowable perturbation is sufficiently large, then this lower bound grows unbounded as $m \\to 0^+$. Simulation results suggest a sharp phase transition in the throughput as the car-following behavior transitions from the super-linear to the sub-linear regime. \n\nIn the future, we plan to sharpen our analysis to theoretically demonstrate the phase transition behavior. \nThis could include, for example, considering other release control policies with better perturbation properties. 
The increasing nature of the throughput in the super-linear regime for large values of $L$, as illustrated in Figure~\\ref{fig:throughput-simulations}, is possibly because the car-following model considered in this paper does not impose any explicit upper bounds on the speed of the vehicles. We plan to extend our analysis to such practical constraints, as well as to higher order, e.g., second order, car-following models, and models emerging from consideration of inter-vehicle distance beyond the vehicle immediately in front. The connections with processor sharing queues, as highlighted in this paper, suggest the possibility of utilizing the construct of measure-valued state descriptors~\\cite{Grishechkin:94,Gromoll.Puha.ea:02} to derive fluid and diffusion limits of the proposed horizontal traffic queues. In particular, one could interpret the measure-valued state descriptor to play the role of traffic density in the context of traffic flow theory. Along this direction, we plan to investigate connections between the fluid limit of horizontal traffic queues, and PDE models for traffic flow. \n\\section{Introduction}\n\\label{sec:introduction}\nWe consider a horizontal traffic queue (HTQ) on a periodic road segment, where vehicles arrive according to a spatio-temporal Poisson process, and depart the queue after traveling a distance that is sampled independently and identically from a spatial distribution. When inside the queue, the speed of a vehicle is proportional to a power $m>0$ of the distance to the vehicle in front. For a given initial condition, we define the throughput of such a queue as the largest arrival rate under which the queue length remains bounded. We provide rigorous analysis for the service rate, busy period distribution, and throughput of the proposed HTQ. \n\nOur motivation for studying HTQ comes from advancements in connected and autonomous vehicle technologies that allow individual vehicles to be programmed with rules that can optimize system-level performance. Within this application context, one can interpret the results of this paper as rigorously characterizing the impact of a parametric class of car-following behavior on system throughput. \n\n\nIn the linear case ($m=1$), i.e., when the speed of every vehicle is proportional to the distance to the vehicle directly in front, the periodicity of the road segment implies that the sum of the speeds of the vehicles is proportional to the total length of the road segment, i.e., it is constant. This feature allows us to exploit the equivalence between workload and queue length to show that, independent of the initial condition and almost surely, the throughput is the inverse of the time required by a solitary vehicle to travel the average distance. \n\nIn the non-linear case ($m \\neq 1$), the cumulative service rate of the HTQ is constant if and only if all the inter-vehicle distances are equal. For all other inter-vehicle configurations, we show that the service rate is strictly decreasing (resp., strictly increasing) in the super-linear, i.e., $m>1$ (resp., sub-linear, i.e., $m<1$) case. The service rate exhibits another contrasting behavior in the sub- and super-linear regimes. In the super-linear case, the service rate is maximum (resp., minimum) when all the vehicles are co-located (resp., when the inter-vehicle distances are equal), and vice-versa for the sub-linear case. 
Using a combination of these properties, we prove that, when the length of the road segment is at most one, the throughput in the super-linear (resp., sub-linear) case is upper (resp., lower) bounded by the throughput for the linear case.\n\nWe prove the remaining bounds on the throughput for the non-linear case as follows. The standard calculations for joint distributions of duration and number of arrivals during a busy period for the M\/G\/1 queue are extended to the HTQ setting, including for non-empty initial conditions. \nThese joint distributions are used to derive probabilistic upper bounds on queue length over finite time horizons for the HTQ in the $m>1$ case. Such bounds are optimized to get lower bounds on throughput defined over finite time horizons. Simulation results show good agreement between such lower bounds and numerical estimates.\n \nWe also analyze throughput in the sub-linear and super-linear cases under perturbation to the arrival process, which is attributed to the additional expected waiting time induced by a release control policy that adds appropriate delay to the arrival times to ensure a desired minimum inter-vehicle distance $\\triangle>0$ at the time of a vehicle joining the HTQ. Since the minimum inter-vehicle distance is non-decreasing in between jumps, this implies an upper bound on the queue length which is inversely proportional to $\\triangle$. We derive a lower bound on throughput for a given maximum allowable perturbation. In particular, if the allowable perturbation is sufficiently large, then this lower bound grows unbounded as $m \\to 0^+$. \n\n\n \nQueueing models have been used to model and analyze traffic systems. The focus here has been primarily on vertical queues, under which vehicles travel at maximum speed until they hit a congestion spot where all vehicles queue on top of each other.\nThe queue length and waiting time of a minor traffic stream at an unsignalized intersection where the major traffic stream has high priority are studied in \\cite{Tanner:62} and \\cite{Heidemann:91}. \nIn \\cite{Heidemann:94}, a vertical single server queue is utilized to model the queue length distribution at signalized intersections.\nIn \\cite{Jain.Smith:97}, a state-dependent queuing system is used to model vehicular traffic flow where the service rate depends on the number of vehicles on each road link. \n\n\n On the other hand, the \\emph{horizontal traffic queue} terminology has been primarily used to study macroscopic traffic flow, e.g., see \\cite{Helbing:03}. While such models capture the macroscopic relationship between traffic flow and density, a rigorous description and analysis of an underlying queue model is lacking. Indeed, to the best of our knowledge, there is no prior work on the analysis of a traffic queue model that explicitly incorporates car-following behavior. \n\nThe proposed HTQ has an interesting connection with processor sharing (PS) queues, and this connection does not seem to have been documented before.\nA characteristic feature of PS queues is that all the outstanding jobs receive service simultaneously, while keeping the total service rate of the server constant. The simplest model is where the service rate for an individual job is equal to $1\/N$, where $N$ is the number of outstanding jobs. In our proposed system, one can interpret the road segment as a server simultaneously providing service to all the vehicles, with the service rate of an individual vehicle equal to its speed. 
This natural analogy between HTQ and PS queues, to the best of our knowledge, was reported for the first time in our recent work~\\cite{Motie.Savla.CDC15}.\nThe $1\/N$ rule applied to our setting implies that all the vehicles travel with the same speed. Clearly, such a rule, or even the general discriminatory PS disciplines, e.g., see \\cite{Kleinrock:67}, are not applicable to the car following models considered in this paper. Indeed, the proposed HTQ is best described as a state-dependent PS queue. \n\n\nIn the PS queue literature, the focus has been on the sojourn time and queue length distribution. For example, see \\cite{Ott:84} and \\cite{Yashkov:83} for the M\/G\/1-PS queue and \n\\cite{Grishechkin:94} for the G\/G\/1-PS queue. Fluid limit analysis for the PS queue is provided in \\cite{Chen.ea:97} and \\cite{Gromoll.Puha.ea:02}. \nHowever, relatively little attention has been paid to the throughput analysis of state-dependent PS queues.\nIn \\cite{Moyal:08,Kherani.Kumar:02,Chen.Jordan:07}, throughput analysis for state-dependent PS queues is provided, where throughput is defined as the quantity of work achieved by the server per unit of time.\nStability analysis for a single server queue with workload-dependent service and arrival rates is provided in \\cite{Bambos.Walrand:89} and \\cite{Bekker:05}. However, the dependence of the service rate on the system state in the HTQ proposed in the current paper is complex, and hence none of these results are readily applicable. \n\nIn summary, this paper makes several novel contributions. First, we propose a novel horizontal traffic queue and place it in the context of processor-sharing queues and state-dependent queues. We establish monotonicity properties of service rates in between jumps (i.e., arrivals and departures), and derive bounds on change in service rates at jumps. \nSecond, we adapt busy period calculations for the M\/G\/1 queue to our current setup, including for non-empty initial conditions. These results allow us to provide tight results for throughput in the linear case, and probabilistic bounds on queue length over finite time horizons in the super-linear case. We also study throughput under a batch release control policy, whose effect is interpreted as a perturbation to the arrival process. We provide a lower bound on the throughput for a given maximum permissible perturbation for the sub- and super-linear cases. In particular, we show that, for sufficiently large perturbation, this lower bound grows unbounded as $m \\to 0^+$. It is interesting to compare our analytical results with simulation results, which suggest a sharp transition in the throughput from being unbounded in the sub-linear regime to being bounded in the super-linear regime. While our analytical results do not exhibit such a phase transition yet, their novelty is in providing rigorous estimates of any kind on the throughput of horizontal traffic queues under nonlinear car following models.\n\nThe rest of the paper is organized as follows. We conclude this section with key notations to be used throughout the paper. The setting for the proposed horizontal traffic queue and formal definition of throughput are provided in Section~\\ref{sec:problem-formulation}. Section~\\ref{sec:service-rate-monotonicity} contains useful properties on the dynamics of the service rate in between and during jumps. Key busy period properties for the M\/G\/1 queue are extended to the HTQ case in Section~\\ref{sec:busy-period}. Throughput analysis is reported in Section~\\ref{sec:throughput-analysis}. 
Simulations are presented in Section~\\ref{sec:simulations}. Concluding remarks and directions for future work are presented in Section~\\ref{sec:conclusions}. A few technical intermediate results are collected in the appendix. \n\n\\subsection*{Notations}\nLet $\\mathbb{R}$, $\\mathbb{R}_+$, and $\\mathbb{R}_{++}$ denote the set of real, non-negative real, and positive real numbers, respectively. \nLet $\\mathbb{N}$ be the set of natural numbers. \nIf $x_1$ and $x_2$ are of the same size, then $x_1 \\geq x_2$ implies element-wise inequality between $x_1$ and $x_2$. If $x_1$ and $x_2$ are of different sizes, then $x_1 \\geq x_2$ implies inequality only between elements which are common to $x_1$ and $x_2$ -- such a common set of elements will be specified explicitly. \nFor a set $\\mathcal{J}$, let $\\text{int}(\\mathcal{J})$ and $|\\mathcal{J}|$ denote the interior and cardinality of $\\mathcal{J}$, respectively.\nGiven $a \\in\\mathbb{R}$, and $b > 0$, we let $\\mod(a,b):=a-\\lfloor \\frac{a}{b}\\rfloor b$. Let $\\mathcal S_N^L$ be the $(N-1)$-simplex over $L$, i.e., $\\mathcal S_N^L=\\setdef{x \\in \\mathbb{R}_+^N}{\\sum_{i=1}^N x_i = L}$. When $L=1$, we shall use the shorthand notation $\\mathcal S_N$.\nWhen referring to the set $\\until{N}$, for brevity, we let the indices $i=-1$ and $i=N+1$ correspond to $i=N$ and $i=1$, respectively.\n Also, for $p, q \\in \\mathcal S_N$, we let\n$D(p || q)$ denote the K-L divergence of $q$ from $p$, i.e., $D(p || q):= \\sum_{i=1}^N p_i \\log\\left(p_i\/q_i\\right)$.\nWe also define a permutation matrix, $P^- \\in \\{0,1\\}^{N \\times N}$, as follows:\n\\begin{align*}\nP^-:=\\begin{bmatrix}\n\\mathbf{0}_{N-1}^T & 1 \\\\\nI_{N-1} & \\mathbf{0}_{N-1}\n\\end{bmatrix}\n\\end{align*}\nwhere $\\mathbf{0}_N$ and $\\mathbf{1}_N$ stand for vectors of size $N$, all of whose entries are zero and one, respectively. We shall drop $N$ from $\\mathbf{0}_N$ and $\\mathbf{1}_N$ whenever it is clear from the context.\n\n\n\n\n\\subsection{Linear Case: $m=1$}\n\\label{sec:linear}\nIn this section, we provide an exact characterization of throughput for the linear case, i.e., when $m=1$. Recall that, for $m=1$, the service rate $s(y)=\\sum_{i=1}^N y_i \\equiv L$ is constant.\n\n\n\n\\begin{proposition}\\label{prop:unstable}\nFor any $L > 0$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$, $x_0 \\in [0,L]^{n_0}$, and $n_0 \\in \\mathbb{N}$:\n$$\n\\lambda_{\\text{max}}(L,m=1,\\varphi,\\psi,x_0,\\delta=0) \\leq L\/\\bar{\\psi} \\, .\n$$ \n\\end{proposition}\n\\begin{proof}\nBy contradiction, assume $\\lambda_{\\text{max}}>L\/\\bar{\\psi}$. Let $r(t):=\\sum_{i=1}^{A(t)} d_i$ be the workload added to the system by the $A(t)$ vehicles that arrive over $[0,t]$. Therefore, \n\\begin{equation}\\label{eq:workload-linear}\nw(t)=w_0+r(t)-L(t-\\mathcal{I}(t))\n\\end{equation}\nwhere $w_0$ is the initial workload. The process $\\{r(t), \\, t\\geq 0\\}$ is a renewal reward process, where the renewals correspond to arrivals of vehicles and the rewards correspond to the distances $\\{d_i\\}_{i=1}^{\\infty}$ that vehicles wish to travel in the system upon arrival before their departures. \nInter-arrival times are exponential random variables with mean $1\/\\lambda$, and the reward associated with each renewal is independently and identically sampled from $\\psi$, whose mean is $\\bar{\\psi}$. 
\nTherefore, e.g., \\cite[Theorem 3.6.1]{Ross:96} implies that, with probability one, \n\\begin{equation}\n\\label{eq:slln-renewal}\n\\lim_{t\\to\\infty}\\frac{r(t)}{t}=\\lambda \\bar{\\psi}\n\\end{equation}\nThus, for all $\\varepsilon \\in \\left(0,\\lambda \\bar{\\psi} -L\\right)$, there exists a $t_0\\geq0$ such that, with probability one,\n\\begin{equation}\\label{eq:s-t-lowerbound}\n\\frac{r(t)}{t}\\geq\\lambda \\bar \\psi-\\varepsilon\/2> L + \\varepsilon\/2 \\qquad \\forall \\, t \\geq t_0.\n\\end{equation}\nSince $w_0$ and $\\mathcal{I}(t)$ are both non-negative, \\eqref{eq:workload-linear} implies that $w(t)\\geq r(t) - Lt$ for all $t \\geq 0$. \nThis combined with \\eqref{eq:s-t-lowerbound} implies that, with probability one, $w(t) \\geq \\varepsilon t \/2$ for all $t \\geq t_0$, and hence \n$\\lim_{t\\to\\infty}w(t)= + \\infty$. This combined with \\eqref{eq:workload-upperbound} implies that, with probability one, $\\lim_{t\\to\\infty}N(t)=+\\infty$. \n\\qed\n\\end{proof}\n\n\\begin{theorem}\\label{thm:stable}\nFor any $L > 0$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$:\n$$\n\\lambda_{\\text{max}}(L,m=1,\\varphi,\\psi,x_0,\\delta=1) = L\/\\bar{\\psi} \\, .\n$$ \n\\end{theorem}\n\\begin{proof}\nAssume that for some $\\lambda < L\/\\bar{\\psi}$, there exists some initial condition $(x_0,n_0)$ such that the queue length grows unbounded with some positive probability. \nSince the workloads brought by the vehicles are i.i.d., and the inter-arrival times are exponential, without loss of generality, we can assume that the queue length never becomes zero. That is, the idle time satisfies $\\mathcal{I}(t) \\equiv 0$. Moreover, \\eqref{eq:slln-renewal} implies that, for every $\\varepsilon \\in \\left(0, L- \\lambda \\bar{\\psi} \\right)$, there exists $t_0 \\geq 0$ such that, with probability one,\n\\begin{equation}\n\\label{eq:s-t-upperbound}\n\\frac{r(t)}{t} \\leq \\lambda \\bar{\\psi} + \\varepsilon\/2 < L - \\varepsilon\/2 \\qquad \\forall t \\geq t_0\n\\end{equation} \n\nCombining \\eqref{eq:workload-linear} with \\eqref{eq:s-t-upperbound}, and substituting $\\mathcal{I}(t) \\equiv 0$, we get $w(t) < w_0 - \\varepsilon t \/2$ for all $t \\geq t_0$, which implies that the workload, and hence the queue length, goes to zero in finite time after $t_0$, leading to a contradiction. \nCombining this with the upper bound proven in Proposition~\\ref{prop:unstable} gives the result. \n\\qed\n\\end{proof}\n\n\\begin{remark}\nTheorem~\\ref{thm:stable} implies that the throughput in the linear case is equal to the inverse of the \ntime required by a solitary vehicle in the system to travel the average distance. In the linear case, the throughput can be characterized with probability one, independent of the initial condition of the queue.\n\\end{remark}\n\n\n\n\n\n\\subsection{Monotonicity of Throughput in $m$ and $x_0$}\n\\label{sec:non-linear}\n\n\nIn this section, we show the following monotonicity property of $\\lambda_{\\text{max}}$ with respect to $m$ for small values of $L$: for given $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, $L \\in (0,1]$, $\\varphi \\in \\Phi$, and $\\psi \\in \\Psi$, throughput is a monotonically decreasing function of $m$. \nFor this section, we rewrite \\eqref{eq:dynamics-moving-coordinates} in $\\mathbb{R}_+^N$, i.e., without projecting onto $[0,L]^N$. 
Specifically, let the vehicle coordinates be given by the solution of \n\\begin{equation}\n\\label{eq:x-dynamics-in-Rn}\n\\dot{x}_i = y_i^m, \\qquad x_i(0)=x_{0,i}, \\qquad i \\in \\until{N} \n\\end{equation} \nLet $X(t;x_0,m)$ denote the solution to \\eqref{eq:x-dynamics-in-Rn} at $t$ starting from $x_0$ at $t=0$. We will compare $X(t;x_0,m)$ under different values of $m$ and initial conditions $x_0$, over an interval of the kind $[0,\\tau)$, in between arrivals and departures. \nWe recall the notation that, if $x_{0}^1$ and $x_{0}^2$ are vectors of different sizes, then $x_{0}^1 \\leq x_{0}^2$ implies element-wise inequality only for components which are common to $x_{0}^1$ and $x_{0}^2$. In Lemma~\\ref{lemma:two-size-compare} and Proposition~\\ref{prop:queue-length-2m}, this common set of components corresponds to the set of vehicles common between $x_{0}^1$ and $x_{0}^2$.\n \n\n\\begin{lemma}\n\\label{lemma:two-size-compare}\nFor any $L \\in (0,1]$, $x^1_{0} \\in \\mathbb{R}_+^{n_1}$, $x^2_{0} \\in \\mathbb{R}_+^{n_2}$, $n_1, n_2 \\in \\mathbb{N}$,\n$$\nx^1_{0} \\leq x^2_{0}, \\, \\, n_2 \\leq n_1, \\, \\, 0 < m_2 \\leq m_1 \\implies X(t;x^1_{0},m_1) \\leq X(t;x^2_{0},m_2) \\qquad \\forall \\, t \\in [0,\\tau)\n$$ \n\\end{lemma}\n \\begin{proof}\nThe proof is straightforward when $n_1=n_2$. This is because, in this case, since $y_i \\leq L\\leq1$, $m_2 \\leq m_1$ implies $y_i^{m_2} \\geq y_i^{m_1}$ for all $i \\in \\until{n_1}$. Using this with Lemmas~\\ref{lem:monotonicity-same-size} and \\ref{lem:car-following-type-K} gives the result. \n \nIn order to prove the result for $n_2 < n_1$, we show that $X(t; x^1_{0}, m_1) \\leq X(t; x^2_{0}, m_1) \\leq X(t; x^2_{0}, m_2)$. Note that the second inequality follows from the previous case. Therefore, it remains to prove the first inequality. Let $(i_1, \\ldots, i_{n_2})$ be the set of indices of $n_2$ vehicles such that $0 \\leq x^2_{0,i_1} \\leq \\ldots \\leq x^2_{0,i_{n_2}} \\leq L$. Similarly, let $(i_1, i_1+1, \\ldots, i_2, i_2+1, \\ldots)$ be the indices of $n_1$ vehicles in the order of increasing coordinates in $x_0^1$. Our assumption on the initial condition implies that $x^1_{0,i_k} \\leq x^2_{0,i_k}$ for all $k \\in \\until{n_2}$. For brevity, let $x^1(t) \\equiv X(t;x_0^1,m_1)$, and $x^2(t) \\equiv X(t;x_0^2,m_1)$. It is easy to check that, for all $t \\in [0,\\tau)$, and all $k \\in \\until{n_2}$, \n\\begin{equation}\n\\label{eq:xdot-ub}\n\\dot{x}^1_{i_k} = \\left(x^1_{i_{k}+1} - x^1_{i_k}\\right)^{m_1} \\leq \\left(x^1_{i_{k+1}} - x^1_{i_k}\\right)^{m_1}\n\\end{equation}\nLet $t \\in [0,\\tau)$ be the first time instant when $x^1_{i_k}(t)=x^2_{i_k}(t)$ for some $k \\in \\until{n_2}$. Then, recalling $x^1_{i_{k+1}}(t) \\leq x^2_{i_{k+1}}(t)$, \\eqref{eq:xdot-ub} implies that $\\dot{x}_{i_k}^1(t) \\leq \\left(x^2_{i_{k+1}} - x^2_{i_k} \\right)^{m_1} = \\dot{x}^2_{i_k}(t)$. The result then follows from Lemma~\\ref{lem:monotonicity-same-size}. 
\n\\qed\n\\end{proof}\n\nLemma~\\ref{lemma:two-size-compare} is used to establish monotonicity of throughput as follows.\n\\begin{proposition}\n\\label{prop:queue-length-2m}\nFor any $L \\in (0,1]$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$, $\\delta \\in (0,1)$, $x^1_{0} \\in [0,L]^{n_1}$, $x^2_{0} \\in [0,L]^{n_2}$, $n_1, n_2 \\in \\mathbb{N}$:\n$$\nx^1_{0} \\leq x^2_{0}, \\, \\, n_2 \\leq n_1, \\, \\, 0 < m_2 \\leq m_1 \\implies \\lambda_{\\text{max}}(L,m_1,\\varphi,\\psi,x^1_0,\\delta) \\leq \\lambda_{\\text{max}}(L,m_2,\\varphi,\\psi,x^2_0,\\delta)\n$$\n\\end{proposition}\n\\begin{proof}\nFor brevity in notation, we refer to the queue corresponding to $m_1$ and initial condition $x_0^1$ as HTQ-S, and to the other queue as HTQ-F. Let $\\lambda$, $\\varphi$ and $\\psi$ common to HTQ-S and HTQ-F be given. Let $x^1(t) \\equiv X(t; x^1_0, m_1)$ and $x^2(t) \\equiv X(t; x^2_0, m_2)$, and let $N_s(t)$ and $N_f(t)$ be the queue lengths in the two queues at time $t$. It suffices to show that $N_s(t) \\geq N_f(t)$ for a given realization of arrival times, arrival locations, and travel distances. In particular, this implies that the departure locations are the same for every vehicle, including the vehicles present at $t=0$, in both the queues. \n\nIndeed, it is sufficient to show that $x^1(\\tau) \\leq x^2(\\tau)$ and $N_s(\\tau)\\geq N_f(\\tau)$, where $\\tau$ is the time of the first arrival or departure from either HTQ-S or HTQ-F. Accordingly, we consider two cases, corresponding to whether $\\tau$ corresponds to an arrival or a departure. \n\nSince $x^1(t) \\leq x^2(t)$ for all $t \\in [0,\\tau)$ from Lemma~\\ref{lemma:two-size-compare}, and the departure locations of all the vehicles in HTQ-S and HTQ-F are identical, the first departure from HTQ-S cannot happen before the first departure in HTQ-F. Therefore, $N_s(\\tau) \\geq N_f(\\tau)$. Since $x^1(\\tau^-) \\leq x^2(\\tau^-)$, and $x^2(\\tau)$ is a subset of $x^2(\\tau^-)$, we also have $x^1(\\tau) \\leq x^2(\\tau)$. \n\nWhen $\\tau$ corresponds to the time of the first arrival, since the arrivals happen at the same location in HTQ-S and HTQ-F, and since $x^1(\\tau^-) \\leq x^2(\\tau^-)$, rearrangement of the indices of the vehicles to include the new arrival at $t=\\tau$ implies that $x^1(\\tau) \\leq x^2(\\tau)$. Moreover, since $N_s(\\tau^-) \\geq N_f(\\tau^-)$, and the arrivals happen simultaneously in both HTQ-S and HTQ-F, we have $N_s(\\tau) \\geq N_f(\\tau)$. \n\\qed\n\\end{proof}\n\n\\begin{remark}\nProposition \\ref{prop:queue-length-2m} establishes monotonicity of throughput only for $L\\in(0,1]$. This is consistent with our simulation studies, e.g., as reported in Figure \\ref{fig:throughput-simulations}, according to which the throughput is non-monotonic for large $L$. \n\\end{remark}\nFor the analysis of the linear car-following model, we exploited the fact that the total service rate of the system is constant. However, for the nonlinear model, i.e., $m \\neq 1$, the total service rate depends on the number and relative locations of vehicles. The state-dependent service rate of nonlinear models makes the throughput analysis much more complex. In the next section, we find probabilistic bounds on the throughput in the super-linear case.\n\n\\section{The Horizontal Traffic Queue (HTQ) Setup}\n\\label{sec:problem-formulation}\nConsider a periodic road segment of length $L$; without loss of generality, we assume it to be a circle.
Starting from an arbitrary point on the circle, we assign coordinates in $[0,L]$ to the circle in the clock-wise direction (see Figure \\ref{fig:htq}). Vehicles arrive on the circle according to a spatio-temporal process: the arrival process $\\{A(t),t\\geq0\\}$ is assumed to be a Poisson process with rate $\\lambda>0$, and the arrival locations are sampled independently and identically from a spatial distribution $\\varphi$ with mean value $\\bar \\varphi$. Without loss of generality, let the support of $\\varphi$ be $\\text{supp}(\\varphi)=[0,\\ell]$ for some $\\ell\\in[0,L]$. \nUpon arriving, vehicle $i$ travels distance $d_i$ in the clock-wise direction, i.e., in the direction of increasing coordinates, after which it departs the system. \nThe travel distances $\\{d_i\\}_{i=1}^\\infty$ are sampled independently and identically from a spatial distribution $\\psi$ with support $[0,R]$ and mean value $\\bar \\psi$.\nLet the sets of $\\varphi$ and $\\psi$ satisfying the above conditions be denoted by $\\Phi$ and $\\Psi$, respectively.\nThe stochastic processes for arrival times, arrival locations, and travel distances are all assumed to be independent of each other.\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=5cm]{htq-colorful} \n \\caption{Illustration of the proposed HTQ with three vehicles.}\n \\label{fig:htq}\n\\end{figure}\n\n\\subsection{Dynamics of vehicle coordinates between jumps}\nLet the time epochs corresponding to arrival and departure of vehicles be denoted as $\\{\\tau_1, \\tau_2, \\ldots\\}$. We shall refer to these events succinctly as \\emph{jumps}. We now formally state the dynamics under the car-following model. \nWe describe the dynamics over an arbitrary time interval of the kind $[\\tau_j,\\tau_{j+1})$. Let $N \\in \\mathbb{N}$ be the fixed number of vehicles in the system during this time interval. \nDefine the inter-vehicle distances associated with vehicle coordinates $x \\in [0,L]^N$ as follows:\n\\begin{equation}\n\\label{eq:inter-vehicle-distance-Rn}\ny_i(x) = \\mod \\left(x_{i+1} - x_i, L\\right), \\quad i \\in \\until{N} \n\\end{equation}\nwhere we implicitly let $x_{N+1} \\equiv x_1$ (see Figure \\ref{fig:htq} for an illustration). Note that the normalized inter-vehicle distances $y\/L$ are probability vectors. \nWhen inside the queue, the speed of every vehicle is proportional to a power $m >0$ of the distance to the vehicle directly in front of it. \nWe assume that this power $m > 0$ is the same for every vehicle at all times. Then, starting with $x(\\tau_{j}) \\in [0,L]^N$, the vehicle coordinates over $[\\tau_{j},\\tau_{j+1})$ are given by:\n\\begin{equation}\n\\label{eq:dynamics-moving-coordinates}\nx_i(t) = \\mod \\left( x_i(\\tau_{j}) + \\int_{\\tau_{j}}^t y_i^m(x(z)) \\, dz, L \\right), \\qquad \\forall \\, i \\in \\until{N}, \\quad \\forall \\, t \\in [\\tau_{j},\\tau_{j+1}) \\, .\n\\end{equation}\n\n\\begin{remark}\nIt is easy to see that the clock-wise ordering of the vehicles is invariant under \\eqref{eq:inter-vehicle-distance-Rn}-\\eqref{eq:dynamics-moving-coordinates}. \n\\end{remark}\n\nThe dynamics in inter-vehicle distances is given by:\n\\begin{equation}\n\\label{eq:inter-vehicle-distance-dynamics}\n\\dot{y}_i = y^m_{i+1} - y_i^m, \\qquad i \\in \\until{N} \n\\end{equation}\nwhere we implicitly let $y_{N+1} \\equiv y_1$.\n\n\\subsection{Change in vehicle coordinates during jumps}\nLet $x(\\tau^-_{j}) =\\left(x_1(\\tau^-_{j}), \\ldots, x_N(\\tau^-_{j})\\right) \\in [0,L]^N$ be the vehicle coordinates just before the jump at $\\tau_{j}$.
If the jump corresponds to the departure of vehicle $k \\in \\until{N}$, then the coordinates of the vehicles $x(\\tau_{j}) =\\left(x_1(\\tau_{j}), \\ldots, x_{N-1}(\\tau_{j})\\right) \\in [0,L]^{N-1}$ after re-ordering due to the jump, for $i \\in \\until{N-1}$, are given by:\n \\begin{equation*}\n\t\tx_i(\\tau_{j})=\\left\\{\\begin{array}{ll}\n\t\t\\displaystyle x_i(\\tau^-_{j}) \t& i\\in \\until{k-1} \\\\[15pt]\n\t\t\\displaystyle x_{i+1}(\\tau^-_{j})\t& i\\in \\{k, \\ldots, N-1\\}\\,.\\end{array}\\right.\n\t\\end{equation*}\n\n Analogously, if the jump corresponds to arrival of a vehicle at location $z \\in [0,\\ell]$ in between the locations of the $k$-th and $k+1$-th vehicles at time $\\tau_{j}^-$, then the coordinates of the vehicles $x(\\tau_{j}) =\\left(x_1(\\tau_{j}), \\ldots, x_{N+1}(\\tau_{j})\\right) \\in [0,L]^{N+1}$ after re-ordering due to the jump, for $i \\in \\until{N+1}$, are given by:\n \\begin{equation*}\n \\begin{split}\n x_{k+1}(\\tau_{j}) & = z \\\\\n\t\tx_i(\\tau_{j}) & =\\left\\{\\begin{array}{ll}\n\t\t\\displaystyle x_i(\\tau^-_{j}) \t& i\\in \\until{k} \\\\[15pt]\n\t\t\\displaystyle x_{i-1}(\\tau^-_{j})\t& i\\in \\{k+2, \\ldots, N+1\\}\\,.\\end{array}\\right.\n\t\t\\end{split}\n\t\\end{equation*}\n\n\\subsection{Problem statement}\nLet $x_0 \\in [0,L]^{n_0}$ be the initial coordinates of the $n_0$ vehicles present at $t=0$. An HTQ is described by the tuple $\\left(L, m, \\lambda, \\varphi, \\psi, x_0\\right)$. \nLet \n$N(t;L, m, \\lambda, \\varphi, \\psi, x_0)$ be the corresponding queue length, i.e., the number of vehicles at time $t$ for an HTQ $\\left(L, m, \\lambda, \\varphi, \\psi, x_0\\right)$. For brevity in notation, at times, we shall not show the dependence of $N$ on parameters which are clear from the context.\n\nIn this paper, our objective is to provide rigorous characterizations of the dynamics of the proposed HTQ. A key quantity that we study is the throughput, defined below. \n \n\\begin{definition}[Throughput of HTQ]\\label{def:throughput}\nGiven $L > 0$, $m > 0$, $\\varphi \\in \\Phi , \\psi \\in \\Psi$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$ and $\\delta \\in [0,1)$, \nthe throughput of HTQ is defined as:\n\\begin{equation}\n\\label{eq:throughput-def}\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta):= \\sup \\left\\{\\lambda \\geq 0: \\, \\Pr \\left( N(t;L, m, \\lambda, \\varphi, \\psi, x_0) < + \\infty, \\quad \\forall t \\geq 0 \\right) \\geq 1 - \\delta \\right\\}.\n\\end{equation}\n\\end{definition}\n\nFigure~\\ref{fig:throughput-simulations} shows the complex dependency of throughput on key queue parameters such as $m$ and $L$. In particular, it shows that for every $L$, $\\varphi$, $\\psi$, and $x_0$, the throughput exhibits a phase transition from being unbounded for $m \\in (0,1)$ to being bounded for $m>1$. Moreover, Figure~\\ref{fig:throughput-simulations} also suggests that the throughput is monotonically non-increasing in $m$ for sufficiently small $L$, and monotonically non-decreasing in $m$ over $m>1$ for sufficiently large $L$. It can also be observed that the initial condition affects the throughput. We now develop analytical results that match the throughput profile in Figure~\\ref{fig:throughput-simulations} as closely as possible. To that end, we will make extensive use of novel properties of the \\emph{service rate} and \\emph{busy period} of the proposed HTQ, which could be of independent interest.
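\n\nTo make the setup concrete before we proceed, the following Python sketch numerically integrates the dynamics \\eqref{eq:dynamics-moving-coordinates} with a simple Euler scheme and tracks the queue length for a given arrival rate $\\lambda$, assuming $\\varphi=U_{[0,L]}$ and $\\psi=U_{[0,L]}$. It is a minimal illustration of how numerical throughput estimates such as those in Figure~\\ref{fig:throughput-simulations} can be obtained; the function and variable names are ours, and this is not the exact code used for the figures.\n\\begin{verbatim}\nimport numpy as np\n\ndef simulate_htq(L=1.0, m=1.0, lam=0.8, T=500.0, dt=1e-2, seed=0):\n    # Euler simulation of the HTQ: the speed of vehicle i is y_i^m,\n    # where y_i is the circular distance to the vehicle in front.\n    rng = np.random.default_rng(seed)\n    x, rem = [], []                  # coordinates, remaining distances\n    next_arr = rng.exponential(1.0 \/ lam)\n    t, peak = 0.0, 0\n    while t < T:\n        if x:\n            order = np.argsort(x)\n            xs = np.array(x)[order]\n            rs = np.array(rem)[order]\n            y = np.mod(np.roll(xs, -1) - xs, L)\n            if len(xs) == 1:\n                y[:] = L             # a solitary vehicle sees headway L\n            v = y ** m               # car-following law\n            xs = np.mod(xs + v * dt, L)\n            rs = rs - v * dt\n            keep = rs > 0            # depart once the distance is covered\n            x, rem = list(xs[keep]), list(rs[keep])\n        t += dt\n        while t >= next_arr:         # Poisson arrivals, phi = U[0,L]\n            x.append(rng.uniform(0.0, L))\n            rem.append(rng.uniform(0.0, L))   # psi = U[0,L]\n            next_arr += rng.exponential(1.0 \/ lam)\n        peak = max(peak, len(x))\n    return len(x), peak\n\nif __name__ == '__main__':\n    for m in (0.5, 1.0, 2.0):\n        print(m, simulate_htq(m=m, lam=0.8))\n\\end{verbatim}\nWith these illustrative values ($L=1$, $\\bar{\\psi}=0.5$), the linear case $m=1$ is expected to remain stable for all $\\lambda < L\/\\bar{\\psi} = 2$, in line with Theorem~\\ref{thm:stable}.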
\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-4L-n0}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-5L-n100}}\\\\\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-4L-n0-unif}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-5L-n100-unif}}\n\\caption{Throughput for various combinations of $m$, $L$, and $n_0$. The parameters used in the individual cases are: (a) $\\varphi=\\delta_{0}$, $\\psi = \\delta_L$, and $n_0=0$; (b) $\\varphi=\\delta_{0}$, $\\psi = \\delta_L$, and $n_0=100$; (c) $\\varphi=U_{[0,L]}$, $\\psi = U_{[0,L]}$, and $n_0=0$; (d) $\\varphi=U_{[0,L]}$, $\\psi = U_{[0,L]}$, and $n_0=100$. In all the cases, the locations of the initial $n_0$ vehicles were chosen at equal spacing in $[0,L]$.\n \\label{fig:throughput-simulations}}\n\\end{figure}\n\n\\section{Service Rate Properties of the Horizontal Traffic Queue}\n\\label{sec:service-rate-monotonicity}\nFor every $y \\in \\mathcal S_N^L$, $N \\in \\mathbb{N}$, $L>0$, we let \n$y_{\\text{min}}:=\\min_{i \\in \\until{N}} y_i$ and $y_{\\mathrm{max}}:=\\max_{i \\in \\until{N}} y_i$ denote the minimum and maximum inter-vehicle distances, respectively. It is easy to establish the following monotonicity properties of $y_{\\text{min}}$ and $y_{\\mathrm{max}}$.\n\n\\begin{lemma}[Inter-vehicle Distance Monotonicity Between Jumps]\n\\label{lem:vehicle-distance-monotonicity}\nFor any $y \\in \\mathcal S^L_N$, $N \\in \\mathbb{N}$, $L>0$, under the dynamics in \\eqref{eq:inter-vehicle-distance-dynamics}, for all $m>0$\n$$\n\\frac{d}{dt} y_{\\text{min}} \\geq 0 \\qquad \\& \\qquad \\frac{d}{dt} y_{\\mathrm{max}} \\leq 0.\n$$\n\\end{lemma}\n\\begin{proof}\nLet $y_{\\text{min}}(t) = y_j(t)$, i.e., the $j$-th vehicle has the minimum inter-vehicle distance at time $t\\geq 0$.\nTherefore, \\eqref{eq:inter-vehicle-distance-dynamics} implies that $\\dot{y}_{\\min}(t) = \\dot{y}_j(t)=y_{j+1}^m(t) - y_j^m(t)\\geq 0$. One can similarly show that $y_{\\mathrm{max}}$ is non-increasing.\n\\qed\n\\end{proof}\n\nDue to the complex state-dependence of the departure process, the queue length process is difficult to analyze. We propose to study a related scalar quantity, called the \\emph{workload}, formally defined as follows, where we recall the notations introduced in \nSection~\\ref{sec:problem-formulation}.\n\n\\begin{definition}[Workload]\nThe workload associated with the HTQ at any instant is the sum of the distances remaining to be travelled by all the vehicles present at that instant. That is, if the current coordinates and departure coordinates of all vehicles are $x \\in [0,L]^N$ and $q \\in \\mathbb{R}_+^N$ respectively, with $q \\geq x$, then the workload is given by:\n$$\nw(x,q):= \\sum_{i=1}^N (q_i-x_i).\n$$ \n\\end{definition}\n\nSince the maximum distance to be travelled by any vehicle from the time of arrival to the time of departure is upper bounded by $R$, we have the following simple relationship between workload and queue length at any time instant:\n\\begin{equation}\\label{eq:workload-upperbound}\nw(t) \\leq N(t)\\, R \\, , \\qquad \\forall \\, t \\geq 0 \\, .\n\\end{equation}\nAn implication of \\eqref{eq:workload-upperbound} is that unbounded workload implies unbounded queue length in our setting. We shall use this relationship to establish an upper bound on the throughput.\nHowever, a finite workload does not necessarily imply finite queue length.
In order to see this, consider the state of the queue with $N$ vehicles, all of whom have distance $1\/N$ remaining to be travelled. Therefore, the workload at this instant is $1\/N \\times N= 1$, which is independent of $N$. \n\nWhen the workload is positive, its rate of decrease is equal to \\emph{service rate} in between jumps, defined next.\n\n\\begin{definition}[Service Rate]\nWhen the HTQ is not idle, its instantaneous service rate is equal to the sum of the speeds of the vehicles present in the system at that time instant, i.e., $s(x)=\\sum_{i=1}^N y_i^m(x)$.\n\\end{definition}\n\n\nSince the service rate depends only on the inter-vehicle distances, we shall alternately denote it as $s(y)$. For $m=1$, $s(y)=\\sum_{i=1}^N y_i \\equiv L$, i.e., the service rate is independent of the state of the system, and is constant in between and during jumps. This property does not hold true in the nonlinear ($m \\neq 1$) case. Nevertheless, one can prove interesting properties for the service rate dynamics. We start by deriving bounds on service rate in between jumps. \n\n\\begin{lemma}[Bounds on Service Rates]\n\\label{lem:service-rate-bounds}\nFor any $y \\in \\mathcal S_N^L$, $N \\in \\mathbb{N}$, $L>0$, under the dynamics in \\eqref{eq:inter-vehicle-distance-dynamics},\n\\begin{enumerate}\n\\item $L^m N^{1-m} \\leq s(y) \\leq L^m$ if $m > 1$;\n\\item $L^m \\leq s(y) \\leq L^m N^{1-m}$ if $m \\in (0,1)$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nNormalizing the inter-vehicular distances by $L$, the service rate can be rewritten as \n\\begin{equation}\n\\label{eq:service-rate-normalized}\ns(y)=L^m \\sum_{i=1}^N \\left(\\frac{y_i}{L}\\right)^m. \n\\end{equation}\nTherefore, for $m > 1$, $s(y) \\leq L^m \\sum_{i=1}^N \\frac{y_i}{L}=L^m$. One can similarly show that, for $m \\in (0,1)$, $s(y) \\geq L^m$. In order to prove the remaining bounds, we note that $\\sum_{i=1}^N z_i^m$ is strictly convex in $z=[z_1, \\ldots, z_N]$ for $m > 1$, and that the minimum of $\\sum_{i=1}^N z_i^m$ over $z \\in \\mathcal S_N$ occurs at $z= \\mathbf{1}\/N$, and is equal to $N^{1-m}$. Similarly, for $m \\in (0,1)$, $\\sum_{i=1}^N z_i^m$ is strictly concave in $z$, and its maximum over $z \\in \\mathcal S_N$ occurs at $z= \\mathbf{1}\/N$, and is equal to $N^{1-m}$. Combining these facts with \\eqref{eq:service-rate-normalized}, and noting that $y\/L \\in \\mathcal S_N$, gives the lemma.\n\\qed\n\\end{proof}\n\n\\begin{lemma}[Service Rate Monotonicity Between Jumps]\n\\label{lem:service-rate-dynamics-between-jumps}\nFor any $y \\in \\mathcal S^L_N$, $N \\in \\mathbb{N}$, $L>0$, under the dynamics in \\eqref{eq:inter-vehicle-distance-dynamics}, \n$$\n\\frac{d}{dt} s(y) \\leq 0 \\quad \\text{ if } m > 1 \\qquad \\& \\qquad \\frac{d}{dt} s(y) \\geq 0\\quad \\text{ if } m \\in (0,1) \\, ,\n$$\nwhere the equality holds true if and only if $y=\\frac{L}{N}\\mathbf{1}$.\n\\end{lemma}\n\\begin{proof}\nThe time derivative of service rate is given by:\n\\begin{align}\n\\frac{d}{dt} s(y) & = \\frac{d}{dt} \\sum_{i=1}^N y_i^m = m \\sum_{i=1}^N y_i^{m-1} \\dot{y}_i \\nonumber \\\\ \n\\label{eq:service-rate-derivative}\n& = m \\sum_{i=1}^N y_i^{m-1} \\left( y_{i+1}^m - y_i^m\\right) \n\\end{align}\nwhere the second equality follows by \\eqref{eq:inter-vehicle-distance-dynamics}. 
The result then follows by application of Lemma~\\ref{lem:appendix-general-summation}, and by noting that $g(z)=z^m$ is a strictly increasing function for all $m>0$, and $h(z)=z^{m-1}$ is strictly decreasing if $m \\in (0,1)$, and strictly increasing if $m>1$.\n\\qed\n\\end{proof}\n\n\nThe following lemma quantifies the change in service rate due to departure of a vehicle. \n\n\\begin{lemma}[Change in Service Rate at Departures]\n\\label{lem:service-rate-jumps}\nConsider the departure of a vehicle that changes inter-vehicle distances from $y \\in \\mathcal S_N^L$ to $y^- \\in \\mathcal S_{N-1}^L$, for some $N \\in \\mathbb{N} \\setminus \\{1\\}$, $L > 0$. If $y_1 \\geq 0$ and $y_2 \\geq 0$ denote the inter-vehicle distances behind and in front of the departing vehicle respectively, at the moment of departure, then the change in service rate due to the departure satisfies the following bounds:\n\\begin{enumerate}\n\\item if $m > 1$, then $0 \\leq s(y^-) - s(y) \\leq (y_1+y_2)^m \\left(1-2^{1-m} \\right)$;\n\\item if $m \\in (0,1)$, then $0 \\leq s(y) - s(y^-) \\leq \\min\\{y_1^m,y_2^m\\}$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nIf $m>1$, then $\\left(\\frac{y_1}{y_1+y_2}\\right)^m + \\left(\\frac{y_2}{y_1+y_2}\\right)^m \\leq \\frac{y_1}{y_1+y_2} + \\frac{y_2}{y_1+y_2} = 1$, i.e., $s(y^-)-s(y) = (y_1 + y_2)^m - y_1^m - y_2^m \\geq 0$. One can similarly show that $s(y)-s(y^-) \\geq 0$ if $m \\in (0,1)$.\n\nIn order to show the upper bound on $s(y^-)-s(y)$ for $m>1$, we note that the minimum value of $z^m + (1-z)^m$ over $z \\in [0,1]$ for $m > 1$ is $2^{1-m}$, and it occurs at $z=1\/2$. Therefore, \n\\begin{align*}\ns(y^-)-s(y) = (y_1 + y_2)^m - y_1^m - y_2^m & = (y_1 + y_2)^m \\left(1 - \\left(\\frac{y_1}{y_1+y_2}\\right)^m - \\left(\\frac{y_2}{y_1+y_2}\\right)^m \\right) \\\\\n& \\leq (y_1+y_2)^m \\left(1-2^{1-m} \\right)\n\\end{align*}\n\nThe upper bound on $s(y)-s(y^-)$ for $m \\in (0,1)$ can be proven as follows. Since $y_1^m \\leq (y_1 + y_2)^m$, $s(y) - s(y^-) = y_1^m + y_2 ^m - (y_1 + y_2)^m \\leq y_2^m$. \nSimilarly, $s(y)-s(y^-) \\leq y_1^m$. Combining, we get $s(y)-s(y^-) \\leq \\min \\{y_1^m, y_2^m\\}$. Note that, in proving this, we nowhere used the fact that $m \\in (0,1)$. However, this bound is useful only for $m \\in (0,1)$. \n\\qed\n\\end{proof}\n\n\\begin{remark}[Change in Service Rate at Arrivals]\n\\label{rem:service-rate-jump-arrival}\nThe bounds derived in Lemma~\\ref{lem:service-rate-jumps} can be trivially used to prove the following bounds for change in service rate at arrivals:\n\\begin{enumerate}\n\\item if $m > 1$, then $0 \\leq s(y) - s(y^+) \\leq (y_1+y_2)^m \\left(1-2^{1-m} \\right)$;\n\\item if $m \\in (0,1)$, then $0 \\leq s(y^+) - s(y) \\leq \\min\\{y_1^m,y_2^m\\}$,\n\\end{enumerate}\nwhere $y_1$ and $y_2$ are the inter-vehicle distances behind and in front of the arriving vehicle respectively, at the moment of arrival.\n\\end{remark}\n\n\nThe following lemma will facilitate generalization of Lemma~\\ref{lem:service-rate-dynamics-between-jumps}. 
In preparation for the lemma, let $f(y,m):=m \\sum_{i=1}^N y_i^{m-1} \\left( y_{i+1}^m - y_i^m\\right)$ be the time derivative of service rate, as given in \\eqref{eq:service-rate-derivative}.\n\n\\begin{lemma}\n\\label{lem:service-rate-dot-lower-bound}\nFor all $y \\in \\text{int}(\\mathcal S_N^L)$, $N \\in \\mathbb{N} \\setminus \\{1\\}$, $L > 0$:\n\\begin{equation}\n\\label{eq:service-rate-der-m}\n\\frac{\\partial}{\\partial m} f(y,m)|_{m=1} = - L D\\left(\\frac{y}{L} || P^- \\frac{y}{L}\\right) \\leq 0\n\\end{equation}\nAdditionally, if $L < e^{-2}$, then\n\\begin{equation}\n\\label{eq:service-rate-twice-der-m}\n\\frac{\\partial^2}{\\partial m^2} f(y,m)|_{m=1} \\geq 0\n\\end{equation}\nMoreover, equality holds true in \\eqref{eq:service-rate-der-m} and \\eqref{eq:service-rate-twice-der-m} if and only if $y = \\frac{L}{N}\\mathbf{1}$.\n\\end{lemma}\n\\begin{proof}\nTaking the partial derivative of $f(y,m)$ with respect to $m$, we get that \n\\begin{align*}\n\\frac{\\partial}{\\partial m} f(y,m) & = \\frac{f(y,m)}{m} + m \\sum_{i=1}^N \\left( y_i^{m-1} y_{i+1}^m \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i^{2m-1} \\log y_{i} \\right)\n\\end{align*}\nIn particular, for $m=1$: \n\\begin{align*}\n\\frac{\\partial}{\\partial m} f(y,m) |_{m=1} & = f(y,1) + \\sum_{i=1}^N \\left(y_{i+1} \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i \\log y_i \\right) \\\\\n& = L \\sum_{i=1}^N \\frac{y_i}{L} \\log \\left(\\frac{y_{i-1}\/L}{y_i\/L} \\right) \\\\\n& = - L D\\left(\\frac{y}{L} || P^- \\frac{y}{L}\\right)\n\\end{align*}\nwhere, for the second equality, we used the trivial fact that $f(y,1)=0$. \nTaking second partial derivative of $f(y,m)$ w.r.t. $m$ gives:\n\\begin{align*}\n\\frac{\\partial^2}{\\partial m^2} f(y,m) = & \\sum_{i=1}^N y_i^{m-1}\\log y_i \\left(y_{i+1}^m - y_i^m \\right) + \\sum_{i=1}^N y_i^{m-1} \\left(y_{i+1}^m \\log y_{i+1} - y_i^m \\log y_i \\right)\\\\\n& + \\sum_{i=1}^N \\left( y_i^{m-1} y_{i+1}^m \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i^{2m-1} \\log y_{i} \\right) \\\\\n& + m \\sum_{i=1}^N \\left( y_i^{m-1} y_{i+1}^m \\left(\\log y_i + \\log y_{i+1} \\right)^2 - 4 y_i^{2m-1} \\log^2 y_i \\right)\n\\end{align*}\nIn particular, for $m=1$:\n\\begin{align}\n\\frac{\\partial^2}{\\partial m^2} f(y,m) |_{m=1} = & \\sum_{i=1}^N \\left(y_{i+1} - y_i \\right) \\log y_i + \\sum_{i=1}^N \\left(y_{i+1} \\log y_{i+1} - y_i \\log y_i \\right) \\nonumber \\\\\n& + \\sum_{i=1}^N \\left(y_{i+1} \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i \\log y_i \\right) \\nonumber \\\\\n& + \\sum_{i=1}^N \\left( y_{i+1} (\\log y_i + \\log y_{i+1})^2 - 4 y_i \\log^2 y_i\\right) \\nonumber \\\\\n\\label{eq:second-derivative-m-final-eq}\n= & \\sum_{i=1}^N \\log^2 y_i \\left(y_{i+1} - y_i \\right) + 2 \\sum_{i=1}^N \\log y_i \\left(y_{i+1} \\log y_{i+1} + y_{i+1} - y_i \\log y_i - y_i \\right) \\\\\n\\nonumber\n\\geq & \\, 0\n\\end{align}\nIt is easy to check that, $\\log z$, $\\log^2 z$ and $z + z \\log z$ are strictly increasing, strictly decreasing and strictly decreasing functions, respectively, for $z \\in (0,e^{-2})$. 
Therefore, Lemma~\\ref{lem:appendix-general-summation} implies that each of the two terms in \\eqref{eq:second-derivative-m-final-eq} is non-negative, and hence the lemma.\n\\qed\n\\end{proof}\n\nLemma~\\ref{lem:service-rate-dot-lower-bound} implies that, for sufficiently small $L$, $f(y,m)$ is locally convex in $m$.
One can use this property along with the exact expression for $\\frac{\\partial}{\\partial m}f(y,m)$ at $m=1$ in Lemma~\\ref{lem:service-rate-dot-lower-bound}, and the fact that $f(y,1)=0$ for all $y$, to develop a linear approximation in $m$ of $f(y,m)$ around $m=1$. The following lemma derives this approximation, as also suggested by Figure~\\ref{fig:service-rate-dot-vs-m}.\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=5cm]{service-rate-dot-vs-m} \n \\caption{$f(y,m)$ vs. $m$ for a typical $y \\in \\mathcal S_{10}$.}\n \\label{fig:service-rate-dot-vs-m}\n\\end{figure}\n\n\\begin{lemma}\n\\label{lem:service-rate-dot-deltay-bound}\nFor a given $y \\in \\text{int}(\\mathcal S_N^L)$, $N \\in \\mathbb{N}$, $L \\in (0,e^{-2})$, there exists $\\underbar{m}(y) \\in [0,1)$ such that \n$$\n\\frac{d}{dt} s(y) \\geq 2 \\frac{(1-m)}{L} \\left(y_{\\mathrm{max}} - y_{\\text{min}} \\right)^2 \\, , \\qquad \\forall \\, m \\in [\\underbar{m}(y),1]\n$$\n\\end{lemma}\n\\begin{proof}\nFor a given $y \\in \\text{int}(\\mathcal S_N^L)$, the local convexity of $f(y,m):=\\frac{d}{dt} s(y)$ in $m$, and the expression for $\\frac{\\partial}{\\partial m}f(y,m)$ at $m=1$ in Lemma~\\ref{lem:service-rate-dot-lower-bound}, imply that $\\frac{d}{dt} s(y) \\geq (1-m) L D\\left(\\frac{y}{L} || P^-\\frac{y}{L} \\right)$ for all $m < 1$ sufficiently close to $1$. Pinsker's inequality implies $D\\left(\\frac{y}{L} || P^-\\frac{y}{L} \\right) \\geq \\frac{\\|y-P^-y\\|_1^2}{2 L^2}$. This, combined with the fact that $\\|y-P^-y\\|_1 \\geq 2(y_{\\mathrm{max}}-y_{\\text{min}})$ for all $y \\in \\text{int}(\\mathcal S_N^L)$, gives the lemma.\n\\qed\n\\end{proof}\n\n\\section{Simulations}\n\\label{sec:simulations}\nIn this section, we present simulation results on throughput, and compare them with our theoretical results from the previous sections.\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\\includegraphics[width=5cm]{lambda-m-lowerUpper-w-phase-transition-separated-v2}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=5cm]{lambda-m-lowerUpper-uniform-w-phase-transition-separated-v2}}\n\\caption{Comparison between theoretical estimates of throughput from Theorem~\\ref{thm:superlinear-bound-empty}, and the range of numerical estimates from simulations, for zero initial condition. The parameters used for this case are: $L = 1$, $\\delta = 0.1$, and (a) $\\varphi = \\delta_0$, $\\psi = \\delta_L$, (b) $\\varphi = U_{[0,L]}$, $\\psi=U_{[0,L]}$.\n \\label{fig:lambda-m-empty}}\n\\end{figure}\n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{lambda-m-busy-period-L100-v2} \n\\caption{Comparison between theoretical estimates of throughput from Theorem \\ref{thm:superlinear-bound-empty}, and the range of numerical estimates from simulations, for zero initial condition. The parameters used for this case are: $L = 100$, $\\delta = 0.1$, $T=10$, and $\\varphi = \\delta_0$, $\\psi = \\delta_L$.}\n\\label{fig:superlinear-large-L}\n\\end{center}\n\\end{figure} \n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{phase-transition-non-empty-separated-v2} \n\\caption{Comparison between theoretical estimates of throughput from Theorem~\\ref{thm:superlinear-bound-non-empty}, and the range of numerical estimates from simulations.
The parameters used for this case are: $L = 1$, $\\delta = 0.1$, $\\varphi = \\delta_0$, $\\psi = \\delta_L$, $w_0=1$, $n_0=4$, and $x_1(0) =0.6, x_2(0) =0.7, x_3(0) =0.8, x_4(0) =0.9$.}\n\\label{fig:phase-transition-non-empty}\n\\end{center}\n\\end{figure}\n\nFigures~\\ref{fig:lambda-m-empty}, \\ref{fig:superlinear-large-L} and \\ref{fig:phase-transition-non-empty} show comparisons between the lower bounds on throughput over finite time horizons, as given by Theorems~\\ref{thm:superlinear-bound-empty} and \\ref{thm:superlinear-bound-non-empty}, and the corresponding numerical estimates from simulations. \n\n Figures~\\ref{fig:lambda-m-empty} and \\ref{fig:superlinear-large-L} are for zero initial condition, and Figure~\\ref{fig:phase-transition-non-empty} is for non-zero initial condition.\n\nFigures~\\ref{fig:sublinear-lowerbound} and \\ref{fig:superlinear-batch} show comparisons between the lower bounds on throughput given by the batch release control policy, as per Theorems~\\ref{thm:main-sub-linear} and \\ref{thm:main-batch-super-linear}, respectively, under a couple of representative values of the maximum permissible perturbation $\\eta$. In particular, Figure~\\ref{fig:sublinear-lowerbound} demonstrates that the lower bound achieved from Theorem \\ref{thm:main-sub-linear} increases drastically as $m \\to 0^+$. Both figures also confirm that the throughput indeed increases with increasing maximum permissible perturbation $\\eta$.\n\nIt is instructive to compare Figures~\\ref{fig:lambda-m-empty}(b) and \\ref{fig:superlinear-batch}(a), both of which depict throughput estimates for the sub-linear case, however obtained from different methods, namely the busy period distribution and the batch release control policy. Accordingly, one should bear in mind that the two bounds have different qualifiers attached to them: the bound in Figure~\\ref{fig:lambda-m-empty}(b) is valid probabilistically only over a finite time horizon, whereas the bound in Figure~\\ref{fig:superlinear-batch}(a) is valid with probability one, although under a perturbation to the arrival process. \n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{sublinear-lowerbound} \n\\caption{Theoretical estimates of throughput from Theorem~\\ref{thm:main-sub-linear} for different values of $\\eta$. The parameters used for this case are: $L = 1$, $\\varphi = U_{[0,L]}$, $\\psi = U_{[0,L]}$, and $w_0=0$. Note that the vertical axis is in logarithmic scale.}\n\\label{fig:sublinear-lowerbound}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\\includegraphics[width=5cm]{sublinear-small-L}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=5cm]{superlinear-batch-new-v2}}\n\\caption{Theoretical estimates of throughput from Theorem \\ref{thm:main-batch-super-linear}, and numerical estimates from simulations, for different values of $\\eta$. The parameters used for this case are: $\\varphi=U_{[0,L]}$, $\\psi=U_{[0,L]}$, and (a) $L = 1$, (b) $L = 100$. \\label{fig:superlinear-batch}}\n\\end{figure}\n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{upper-bound-queue-length-linear-large-lambdas} \n\\caption{Comparison between the empirical expectation of the queue length and the upper bound suggested by Remark~\\ref{remark:waiting-time}. We let the simulations run up to time $t=80,000$. The parameters used for this case are: $L=1$, $m=1$, $\\varphi=\\delta_0$, $\\psi=\\delta_L$.
For these values, we have $\\lambda_{\\text{max}}=1$.}\n\\label{fig:queue-length-linear}\n\\end{center}\n\\end{figure} \n\nFinally, Figure~\\ref{fig:queue-length-linear} shows a good agreement between the queue length bound suggested by Remark~\\ref{remark:waiting-time} and the corresponding numerical estimates in the linear case. \n\n\\subsection{Throughput Bounds under Batch Release Control Policy}\nIn this section, we consider a \\emph{time-perturbed} version of the arrival process. \nFor a given realization of arrival times, $\\{t_1,t_2,\\cdots\\}$, consider a perturbation map $t_i^{\\prime} \\equiv t_i^{\\prime}(t_1,\\ldots,t_i)$ satisfying $t_i^{\\prime}\\geq t_i$ for all $i$, which prescribes the perturbed arrival times. The magnitude of perturbation is defined as $\\eta := E\\left(t_i^{\\prime}-t_i\\right)$, where the expectation is with respect to the Poisson process with rate $\\lambda$ that generates the arrival times.\n\nWe prove boundedness of the queue length under a specific perturbation map. This perturbation map is best understood in terms of a control policy that governs the release of arrived vehicles into HTQ. In order to clarify the implementation of the control policy, we decompose the proposed HTQ into two queues in series, denoted as HTQ1 and HTQ2, both of which have the same geometric characteristics as HTQ, i.e., a circular road segment of length $L$ (see Figure \\ref{fig:htq1-htq2} for illustrations). The original arrival process for HTQ, i.e., the spatio-temporal Poisson process with rate $\\lambda$ and spatial distribution $\\varphi$, is now the arrival process for HTQ1. Vehicles remain stationary at their arrival locations in HTQ1, until released by the control policy into HTQ2. Upon release into HTQ2, vehicles travel according to \\eqref{eq:dynamics-moving-coordinates} until they depart after traveling a distance that is sampled from $\\psi$, as in the case of HTQ. The times of release of the vehicles into HTQ2 correspond to their perturbed arrival times $t_1^{\\prime}, t_2^{\\prime}, \\ldots$. The average waiting time in HTQ1 under the given release control policy is then the magnitude of perturbation in the arrival times. \n\n\\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=14cm]{htq1-htq2-decomposed} \n\\caption{Decomposition of HTQ into HTQ1 and HTQ2 in series.}\n\\label{fig:htq1-htq2}\n\\end{center}\n\\end{figure}\n\nWe consider the following class of release control policies, for which we recall from the problem setup in Section~\\ref{sec:problem-formulation} that $\\text{supp}(\\varphi)=[0,\\ell]$ for some $\\ell \\in [0,L]$. \n\n\\begin{definition}[Batch Release Control Policy $\\pi^b_\\triangle$]\n\\label{def:batch-release-control-policy}\nDivide $[0,\\ell]$ into sub-intervals, each of length $\\triangle$, enumerated as $1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil$. Let $T_1$ be the first time instant when HTQ2 is empty. At time $T_1$, release one vehicle each, if present, from all odd-numbered sub-intervals in $\\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$ simultaneously into HTQ2. Let $T_2$ be the next time instant when HTQ2 is empty. At time $T_2$, release one vehicle each, if present, from all even-numbered sub-intervals in $\\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$ simultaneously into HTQ2.
Repeat this process of alternating between releasing one vehicle each from the odd and even-numbered sub-intervals every time that HTQ2 is empty.\n\\end{definition}\n\n\\begin{remark}\n\\begin{enumerate}\n\\item Under $\\pi^b_{\\triangle}$, when vehicles are released into HTQ2, the inter-vehicle distances in the front and rear of each vehicle being released are at least equal to $\\triangle$. \n\\item The order in which vehicles are released into HTQ2 from HTQ1 under $\\pi^b_{\\triangle}$ may not be the same as the order of arrivals into HTQ1.\n\\end{enumerate}\n\\end{remark}\n\nIn the next two sub-sections, we analyze the performance of the batch release control policy for the sub-linear and super-linear cases. \n\\subsubsection{The Sub-linear Case}\nIn this section, we derive a lower bound on throughput when $m\\in(0,1)$. We first derive a trivial lower bound in Proposition \\ref{prop:throughput-bound-sublinear}, implied by Lemma \\ref{lem:service-rate-jumps} and Remark \\ref{rem:service-rate-jump-arrival}. Next, we improve this lower bound in Theorem \\ref{thm:main-sub-linear} under a batch release control policy, $\\pi_{\\triangle}^b$. \n\\begin{proposition}\n\\label{prop:throughput-bound-sublinear}\nFor any $L > 0$, $m\\in(0,1)$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$:\n$$\n \\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0) \\geq L^m\/\\bar{\\psi} \n$$\n\\end{proposition}\n\\begin{proof}\nRemark \\ref{rem:service-rate-jump-arrival} implies that, for $m \\in (0,1)$, the service rate does not decrease due to arrivals. Therefore, a simple lower bound on the service rate for any state is the service rate when there is only one vehicle in the system, i.e., $L^m$. Hence, \nthe workload process is upper bounded as\n$w(t) = w_0+r(t)-\\int_0^t s(z)\\,\\mathrm{d} z \\leq w_0+r(t) -L^m(t-\\mathcal{I}(t)), \\quad \\forall t\\geq 0$, \nwhere $r(t)$ and $\\mathcal{I}(t)$ denote the renewal reward and the idle time processes, respectively, as introduced in the proof of Proposition \\ref{prop:unstable}. Similar to the proof of Proposition \\ref{prop:unstable}, it can be shown that, if $\\lambda < L^m\/\\bar{\\psi}$, then the workload, and hence the queue length, goes to zero in finite time with probability one. \n\\qed\n\\end{proof}\n\nNext, we establish better throughput guarantees than Proposition~\\ref{prop:throughput-bound-sublinear}, under a batch release control policy, $\\pi_{\\triangle}^b$.\n\nThe next result characterizes the time interval between release of successive batches into HTQ2 under $\\pi_\\triangle^b$.\n\n\\begin{lemma}\n\\label{eq:successive-release-time-difference}\nFor given $\\lambda>0$, $\\triangle > 0$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m\\in(0,1)$, $x_0\\in[0,L]^{n_0}$, $L>0$, $n_0\\in \\mathbb{N}$, let $T_1$, $T_2$, $\\ldots$ denote the random variables corresponding to the times of successive batch releases into HTQ2 under $\\pi_{\\triangle}^b$. Then, $T_1 \\leq \\frac{n_0R}{L^m}$, $T_{i+1}-T_i \\leq R\/\\triangle^m$ for all $i \\geq 1$, and $y_{\\text{min}}(t) \\geq \\triangle$ for all $t \\geq T_1$.\n\\end{lemma}\n\\begin{proof}\nSince the maximum distance to be traveled by every vehicle is upper bounded by $R$, the initial workload satisfies $w_0\\leq n_0R$.
Since the minimum service rate for $m \\in (0,1)$ is $L^m$ (see the proof of Proposition~\\ref{prop:throughput-bound-sublinear}), with no new arrivals, it takes at most $w_0\/L^m \\leq n_0 R\/L^m$ time for the system to become empty. This establishes the bound on $T_1$. \n\nLemma~\\ref{lem:vehicle-distance-monotonicity} implies that, under $\\pi^b_{\\triangle}$, the minimum inter-vehicle distance in HTQ2 is at least $\\triangle$ after $T_1$. \nThis implies that $y_{\\text{min}}(t) \\geq \\triangle$ for all $t \\geq T_1$, and hence the minimum speed of every vehicle in HTQ2 is at least $\\triangle^m$ after $T_1$. Since the maximum distance to be traveled by every vehicle is $R$, this implies that the time between release of a vehicle into HTQ2 and its departure is upper bounded by $R\/\\triangle^m$, which in turn is also an upper bound on the time required by all the vehicles released in one batch to depart from the system. \n\\qed\n\\end{proof}\nLet $N_1(t)$ and $N_2(t)$ denote the queue lengths in HTQ1 and HTQ2, respectively, at time $t$. Lemma~\\ref{eq:successive-release-time-difference} implies that, for every $\\triangle>0$, $N_2(t)$ is upper bounded for all $t \\geq T_1$. The next result identifies conditions under which $N_1(t)$ is upper bounded. \n\nFor $F>0$, let $\\Phi_F:=\\setdef{\\varphi \\in \\Phi}{\\sup_{x \\in [0,\\ell]} \\varphi(x) \\leq F}$. For subsequent analysis, we now derive an upper bound on the \\emph{load factor}, i.e., the ratio of the arrival and departure rates, associated with a typical sub-queue of HTQ1 among $\\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$. It is easy to see that, for every $\\varphi \\in \\Phi_F$, $F>0$, the arrival process into every sub-queue is Poisson with arrival rate upper bounded by $\\lambda F \\triangle$. Lemma~\\ref{eq:successive-release-time-difference} implies that the departure rate is at least $\\triangle^m\/(2R)$. Therefore, the load factor for every sub-queue is upper bounded as \n\\begin{equation}\n\\label{eq:load-factor-upper-bound}\n\\rho \\leq \\frac{2 R \\lambda F \\triangle}{\\triangle^m}=2 R \\lambda F \\triangle^{1-m}\n\\end{equation}\nIn particular, if \n\\begin{equation}\\label{eq:triangle-star}\n\\triangle < \\triangle^*(\\lambda):=\\left(2 R \\lambda F \\right)^{-\\frac{1}{1-m}},\n\\end{equation} \nthen $\\rho<1$. It should be noted that, for $n_0<+\\infty$, by Lemma \\ref{eq:successive-release-time-difference}, $T_1<+\\infty$. The service rate is zero during $[0,T_1]$; however, since $T_1$ is finite, this does not affect the computation of the load factor. \n\n\\begin{proposition}\n\\label{prop:pi1-policy}\nFor any $\\lambda>0$, $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m\\in(0,1)$, $x_0\\in[0,L]^{n_0}$, $L>0$, $n_0\\in \\mathbb{N}$, for sufficiently small $\\triangle$, $N_1(t)$ is bounded for all $t \\geq 0$ under $\\pi_{\\triangle}^b$, almost surely.\n\\end{proposition}\n\\begin{proof}\nBy contradiction, assume that $N_1(t)$ grows unbounded. This implies that there exists at least one sub-queue, say $ j \\in \\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$, such that its queue length, say $N_{1,j}(t)$, grows unbounded. In particular, this implies that there exists $t_0 \\geq T_1$ such that $N_{1,j}(t) \\geq 2$ for all $t \\geq t_0$.
Therefore, for all $t \\geq t_0$, the ratio of arrival rate to departure rate for the $j$-th sub-queue is upper bounded as in \\eqref{eq:load-factor-upper-bound}; since $m \\in (0,1)$, this bound vanishes as $\\triangle \\to 0^+$, and hence becomes strictly less than one for sufficiently small $\\triangle$. A simple application of the law of large numbers then implies that, almost surely, $N_{1,j}(t)=0$ at some finite time, leading to a contradiction.\n\\qed \n\\end{proof}\n\nThe following result gives an estimate of the mean waiting time in a typical sub-queue in HTQ1 under the $\\pi_{\\triangle}^b$ policy.\n\n\\begin{proposition}\n\\label{eq:waiting-time}\nFor $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$, $m \\in (0,1)$, there exists a sufficiently small $\\triangle$ such that the average waiting time in HTQ1 under $\\pi_{\\triangle}^b$ is upper bounded as: \n\\begin{equation}\n\\label{eq:W-upper-bound}\nW \\leq R (2 R \\lambda F )^{\\frac{m}{1-m}} \\left(\\frac{2}{m^{\\frac{m}{1-m}}} + \\frac{m}{m^{\\frac{m}{1-m}}-m^{\\frac{1}{1-m}}} \\right).\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nIt is easy to see that the desired waiting time corresponds to the system time of an M\/D\/1 queue whose arrival and departure rates are those underlying \\eqref{eq:load-factor-upper-bound}, and hence whose load factor is bounded as in \\eqref{eq:load-factor-upper-bound}. Note that, by Lemma \\ref{eq:successive-release-time-difference}, for finite $n_0$, the value of $T_1$ is finite and does not affect the average waiting time. Therefore, using standard expressions for the M\/D\/1 queue~\\cite{Kleinrock:75}, we get that the waiting time in HTQ1 is upper bounded as follows for $\\rho <1$:\n\\begin{align}\nW \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m} \\frac{\\rho}{1-\\rho} & \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m} \\frac{1}{1-\\rho} \\nonumber \\\\ \n\\label{eq:waiting-time-upper-bound}\n& \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m -2 R \\lambda F \\triangle} \n\\end{align} \nIt is easy to check that the minimum of the second term in \\eqref{eq:waiting-time-upper-bound} over $\\big(0,\\triangle^*(\\lambda)\\big)$ occurs at $\\triangle = \\left(\\frac{m}{2 R \\lambda F} \\right)^{\\frac{1}{1-m}}$. Substitution in the right hand side of the first inequality in \\eqref{eq:waiting-time-upper-bound} gives the result.\n\\qed\n\\end{proof}\n\n\\begin{remark}\n\\label{rem:W-limit}\n\\eqref{eq:W-upper-bound} implies that, for every $R>0$, $F>0$, $\\lambda>0$, we have $W \\to 2R$ as $m \\to 0^+$. \n\\end{remark}\n\nWe extend the notation introduced in \\eqref{eq:throughput-def} to $\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta,\\eta)$ to also show the dependence on the maximum allowable perturbation $\\eta$. This is not to be confused with the notation for $\\lambda_{\\text{max}}$ used in Theorems~\\ref{thm:superlinear-bound-empty} and \\ref{thm:superlinear-bound-non-empty}, where we used the notion of throughput over finite time horizons. We use the same notation for brevity. \n\nIn order to state the next result, for given $R>0$, $F>0$, $m \\in (0,1)$ and $\\eta \\geq 0$, let $\\tilde{W}(m,F,R,\\eta)$ \nbe the value of $\\lambda$ for which the right hand side of \\eqref{eq:W-upper-bound} is equal to $\\eta$, if such a $\\lambda$ exists and is at least $L^m\/\\bar{\\psi}$, and let it be equal to $L^m\/\\bar{\\psi}$, otherwise. The lower bound of $L^m\/\\bar{\\psi}$ in the definition of $\\tilde{W}$ is inspired by Proposition~\\ref{prop:throughput-bound-sublinear}.
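\n\nTo illustrate how $\\tilde{W}$ can be evaluated numerically, the following Python sketch (with helper names of our own choosing) inverts the right hand side of \\eqref{eq:W-upper-bound} in $\\lambda$ by bisection, using its monotonicity in $\\lambda$, and floors the result at $L^m\/\\bar{\\psi}$:\n\\begin{verbatim}\ndef waiting_time_bound(lam, m, F, R):\n    # Right hand side of the waiting-time bound; increasing in lam.\n    a = m \/ (1.0 - m)\n    C = 2.0 \/ m ** a + m \/ (m ** a - m ** (1.0 \/ (1.0 - m)))\n    return R * (2.0 * R * lam * F) ** a * C\n\ndef tilde_W(m, F, R, eta, L, psi_bar, lam_hi=1e9):\n    lo, hi = 0.0, lam_hi              # bisection on the monotone bound\n    for _ in range(200):\n        mid = 0.5 * (lo + hi)\n        if waiting_time_bound(mid, m, F, R) <= eta:\n            lo = mid\n        else:\n            hi = mid\n    return max(lo, L ** m \/ psi_bar)  # floor from the trivial lower bound\n\nif __name__ == '__main__':\n    for m in (0.5, 0.1, 0.01):\n        print(m, tilde_W(m, F=1.0, R=1.0, eta=2.5, L=1.0, psi_bar=0.5))\n\\end{verbatim}\nWith these illustrative values, $\\eta = 2.5 > 2R$, and the computed lower bound grows rapidly as $m \\to 0^+$.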
The next result formally states $\\tilde{W}$ as a lower bound on $\\lambda_{\\text{max}}$.\n\n\\begin{theorem}\\label{thm:main-sub-linear}\nFor any $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m \\in (0,1)$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, $L>0$, and maximum permissible perturbation $\\eta \\geq 0$, \n$$\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0,\\eta) \\geq \\tilde{W}(m,F,R,\\eta).\n$$\nIn particular, if $\\eta > 2R$, then $\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0,\\eta) \\to + \\infty$ as $m \\to 0^+$.\n\\end{theorem}\n\\begin{proof}\nConsider any $\\lambda \\leq \\tilde{W}(m,F,R,\\eta)$, and $\\triangle \\leq \\left(\\frac{m}{2 R \\lambda F} \\right)^{\\frac{1}{1-m}}$. Under policy $\\pi_\\triangle^b$, Lemma \\ref{eq:successive-release-time-difference} and Proposition \\ref{prop:pi1-policy} imply that, for finite $n_0$, $N_2(t)$ and $N_1(t)$ remain bounded for all times, with probability one. Also, for $\\lambda = \\tilde{W}(m,F,R,\\eta)$, by Proposition \\ref{eq:waiting-time} and the definition of $\\tilde{W}(m,F,R,\\eta)$, the introduced perturbation remains upper bounded by $\\eta$. Since the right hand side of \\eqref{eq:W-upper-bound} is monotonically increasing in $\\lambda$, perturbations remain bounded by $\\eta$ for all $\\lambda \\leq \\tilde{W}(m,F,R,\\eta)$. \nIn particular, by Remark \\ref{rem:W-limit}, we have $W\\to 2R$ as $m\\to 0^+$. In other words, as $m\\to 0^+$, the magnitude of the introduced perturbation becomes independent of $\\lambda$. \nTherefore, when $\\eta >2R$ and $m\\to 0^+$, the throughput can grow unbounded while the perturbation and the queue length remain bounded.\n\\qed\n\\end{proof}\n\n\\begin{remark}\nWe emphasize that the only feature required in a batch release control policy is that, at the moment of release, the front and rear distances for the vehicles being released should be greater than $\\triangle$. The requirement of the policy in Definition~\\ref{def:batch-release-control-policy} for the road to be empty at the moment of release makes the control policy conservative, and hence affects the maximum permissible perturbation. In fact, for special spatial distributions, e.g., when $\\varphi$ is a Dirac delta function and the support of $\\psi$ is $[0,L-\\triangle]$, one can relax the conservatism to guarantee unbounded throughput for arbitrarily small permissible perturbation. \n\\end{remark}\n\n\\subsubsection{The Super-linear Case}\n\nIn this section, we study the throughput for the super-linear case under the perturbed arrival process with a maximum permissible perturbation of $\\eta$. For this purpose, we consider the batch release control policy $\\pi_{\\triangle}^b$, defined in Definition \\ref{def:batch-release-control-policy}, for our analysis. Time intervals between release of successive batches under $\\pi_{\\triangle}^b$ are characterized as in Lemma \\ref{eq:successive-release-time-difference}. However, in the super-linear case, by Lemma \\ref{lem:service-rate-bounds}, the initial minimum service rate is $L^mn_0^{1-m}$. Therefore, the time of first release is bounded as $T_1 \\leq n_0 R\/(L^m n_0^{1-m}) = n_0^m R\/L^m$. Moreover, for $m>1$, the upper bound on the load factor in \\eqref{eq:load-factor-upper-bound} is strictly less than one if and only if\n\\begin{equation}\\label{eq:triangle-star-superlinear}\n\\triangle > \\triangle^*(\\lambda).\n\\end{equation}\nIt should be noted that since the batch release control policy iteratively releases from odd and even sub-queues, we need at least two sub-queues to be able to implement this policy.
As a result, $\\triangle$ cannot be arbitrarily large; we require $\\triangle \\leq \\ell\/2$. This constraint gives the following bound on the admissible arrival rate under this policy:\n\\begin{equation}\\label{eq:lambda-star-superlinear}\n\\lambda<\\lambda^*:=(\\ell\/2)^{m-1}\/(2RF).\n\\end{equation}\n The following result shows that, for the above range of arrival rates, the queue length in HTQ1, $N_1(t)$, remains bounded at all times. \n \\begin{proposition}\n\\label{prop:pi1-policy-super-linear}\nFor any $\\lambda<\\lambda^*$, $\\triangle\\in\\big(\\triangle^*(\\lambda),\\ell\/2\\big]$, $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m>1$, $x_0\\in[0,L]^{n_0}$, $L>0$, $n_0\\in \\mathbb{N}$, $N_1(t)$ is bounded for all $t \\geq 0$ under $\\pi_{\\triangle}^b$, almost surely.\n\\end{proposition}\n\n\\begin{proof}\nThe proof is similar to the proof of Proposition \\ref{prop:pi1-policy}. In particular, by \\eqref{eq:triangle-star-superlinear} and \\eqref{eq:lambda-star-superlinear}, one can show that the load factor bound in \\eqref{eq:load-factor-upper-bound} remains strictly smaller than one. This implies that no sub-queue in HTQ1 can grow unbounded, and $N_1(t)$ remains bounded for all times, with probability one. \n\\qed\n\\end{proof}\n\n\\begin{proposition}\n\\label{eq:waiting-time-super-linear}\nFor any $\\lambda<\\lambda^*$, $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$, $m> 1$, the average waiting time in HTQ1 under $\\pi_{\\triangle}^b$ for $\\triangle = \\ell\/2$ is upper bounded as: \n\\begin{equation}\n\\label{eq:waiting-time-super-linear-eq}\nW \\leq \\frac{2R}{(\\ell\/2)^m}+\\frac{R}{(\\ell\/2)^m}\\frac{2R\\lambda F(\\ell\/2)^{1-m}}{1-2R\\lambda F(\\ell\/2)^{1-m}}\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nThe proof is very similar to the proof of Proposition \\ref{eq:waiting-time}. Thus, we get the following bounds:\n\\begin{align*}\nW \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m} \\frac{\\rho}{1-\\rho} \\leq \\frac{2R}{\\triangle^m}+\\frac{R}{\\triangle^m}\\frac{2R\\lambda F\\triangle^{1-m}}{1-2R\\lambda F\\triangle^{1-m}}\n\\end{align*} \nThe right hand side of the above inequality is a decreasing function of $\\triangle$; therefore, $\\triangle = \\ell\/2$ minimizes it, and gives \\eqref{eq:waiting-time-super-linear-eq}. \n\\qed\n\\end{proof}\n\nLet $\\hat{W}(m,F,R,\\eta)$ \nbe the value of $\\lambda$ for which the right hand side of \\eqref{eq:waiting-time-super-linear-eq} is equal to $\\eta$, if such a $\\lambda\\leq \\lambda^*$ exists, and let it be equal to $\\lambda^*$ otherwise. Note that, since the right hand side of \\eqref{eq:waiting-time-super-linear-eq} is monotonically increasing in $\\lambda$, for all $\\lambda\\leq \\hat{W}(m,F,R,\\eta)$ the introduced perturbation remains upper bounded by $\\eta$. \n\\begin{theorem}\\label{thm:main-batch-super-linear}\nFor any $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m >1$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, $L>0$, and maximum permissible perturbation $\\eta \\geq 0$, \n$$\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0,\\eta) \\geq \\hat{W}(m,F,R,\\eta).\n$$\n\\end{theorem}\n\\begin{proof}\nFor any $\\lambda <\\hat{W}(m,F,R,\\eta)$, under $\\pi_\\triangle^b$, Lemma \\ref{eq:successive-release-time-difference} and Proposition \\ref{prop:pi1-policy-super-linear} imply that, for finite $n_0$, $N_2(t)$ and $N_1(t)$ remain bounded for all times, with probability one.
Also, by Proposition \\ref{eq:waiting-time-super-linear} and the definition of $\\hat{W}(m,F,R,\\eta)$, the introduced perturbation remains upper bounded by $\\eta$. \n\\qed\n\\end{proof}\n\n\\subsection{Throughput Bounds for the Super-linear Case from Busy Period Calculations}\\label{subsec:superlinear}\nIn this section, we derive a lower bound on the throughput for the super-linear case. The next result computes a bound on the probability that the queue length of the HTQ satisfies a given upper bound over a given time interval, using the probability distribution functions from \\eqref{eq:G-def}.\nIn Propositions~\\ref{prop:nbp-lower-bound} and \\ref{prop:nbn-lower-bound}, for the sake of clarity, we add explicit dependence on $\\lambda$ to this probability distribution function. \n\n\\begin{proposition}\\label{prop:nbp-lower-bound}\nFor any $m>1$, $M\\in \\mathbb{N}$, $L>0$, $\\lambda >0$, $\\varphi\\in \\Phi$, $\\psi \\in \\Psi$, and zero initial condition $x_0=0$, the probability that the queue length is upper bounded by $M$ over a given time interval $[0,T]$ satisfies the following bound: \n\\begin{equation}\n\\label{eq:nbp-lower-bound}\n \\Pr \\big(N(t) \\leq M \\quad \\forall t \\in [0,T] \\big)\\geq \\sup_{r \\in \\mathbb{N}} \\, \\sum_{n=1}^M \\int_T^{\\infty} G_{r,L^m M^{1-m}}(t,n,\\psi,\\lambda) \\, \\mathrm{d} t\n \\end{equation}\n \\end{proposition}\n \\begin{proof}\nLet us denote the current queueing system as HTQ-f. We shall compare queue lengths between HTQ-f and a slower queueing system HTQ-s, which starts from the same (zero) initial condition, and experiences the same realizations of arrival times, locations and travel distances. Let every incoming vehicle into HTQ-s and HTQ-f be tagged with a unique identifier. At time $t$, let $\\mathcal J(t)$ be the set of identifiers of vehicles present both in HTQ-s and HTQ-f, $\\mathcal J_{s\/f}(t)$ be the set of identifiers of vehicles present only in HTQ-s, and $\\mathcal J_{f\/s}(t)$ be the set of identifiers of vehicles present only in HTQ-f.\nLet $v^f_i$ denote the speed of the vehicle in HTQ-f with identifier $i \\in \\mathcal J(t) \\cup \\mathcal J_{f\/s}(t)$, as determined by the car-following behavior underlying \\eqref{eq:dynamics-moving-coordinates}. The vehicle speeds in HTQ-s are not governed by the car-following behavior, but are rather related to the speeds of vehicles in HTQ-f as: \n \\begin{equation}\n \\label{eq:htq-s-speed}\n\t\tv^s_i(t) =\\left\\{\\begin{array}{ll}\n\t\t\\displaystyle v^f_i(t) \\frac{p}{v^f(t)} \\frac{|\\mathcal J(t)|}{|\\mathcal J(t)| + |\\mathcal J_{s\/f}(t)|} \t&i \\in \\mathcal J(t)\\\\[15pt]\n\t\t\\displaystyle \\frac{p}{|\\mathcal J(t)| + |\\mathcal J_{s\/f}(t)|}& i \\in \\mathcal J_{s\/f}(t)\\,\\end{array}\\right.\n\t\\end{equation}\nwhere $v^f(t):=\\sum_{i \\in \\mathcal J(t)}v_i^f(t)$ is the sum of speeds of vehicles in HTQ-f that are also present in HTQ-s at time $t$, and $p$ is a parameter to be specified. Indeed, note that $\\sum_{i \\in \\mathcal J(t) \\cup \\mathcal J_{s\/f}(t)} v_i^s(t) \\equiv p$, i.e., $p$ is the (constant) service rate of HTQ-s. \n\nConsider a realization where the number of arrivals into HTQ-s with $p=L^m M^{1-m}$ during any busy period overlapping with $[0,T]$ does not exceed $M$. We refer to such a realization as the \\emph{event} in the rest of the proof.
Since the maximum queue length during a busy period is trivially upper bounded by the number of arrivals during that busy period, conditioned on the event, we have
\begin{equation}
\label{eq:slow-queue-queue-length-bound}
N_s(t) \leq M, \qquad t \in [0,T]
\end{equation}

Consider the union of the departure epochs from HTQ-s and HTQ-f in $[0,T]$: $0=\tau_0 \leq \tau_1 \leq \ldots$. If $\mathcal J_{f\/s}(\tau_k)=\emptyset$ for some $k \geq 0$, then $\mathcal J_{f\/s}(t)=\emptyset$ for all $t \in (\tau_k,\tau_{k+1})$. Hence, the service rate for HTQ-f over the interval $(\tau_k,\tau_{k+1})$ is $v^f(t)$, which, conditioned on the event, is lower bounded by $L^m M^{1-m}=p$ by Lemma \ref{lem:service-rate-bounds}.
Therefore, $p\/v^f(t) \leq 1$ over $(\tau_k,\tau_{k+1})$, and hence \eqref{eq:htq-s-speed} implies that all the vehicles with identifiers in $\mathcal J(t)$ travel slower in HTQ-s than in HTQ-f. In particular, this implies that $\mathcal J_{f\/s}(\tau_{k+1})=\emptyset$. Combining this with the fact that $\mathcal J_{f\/s}(\tau_0)=\emptyset$ (both queues start from the same initial condition), we get that, conditioned on the event, $\mathcal J_{f\/s}(t) \equiv \emptyset$, and hence $N(t) \leq N_s(t)$ over $[0,T]$. Combining this with \eqref{eq:slow-queue-queue-length-bound} gives that, conditioned on the event, $N(t) \leq M$ over $[0,T]$.

We now compute the probability of the occurrence of the event using the busy period calculations from Section~\ref{sec:busy-period}. The event can be parameterized by the number of busy periods, say $r \in \mathbb{N}$, that overlap with $[0,T]$, i.e., the $r$-th busy period ends after time $T$ (and each of these busy periods has at most $M$ arrivals). Since these busy periods are interlaced with idle periods, the probability of the $r$-th busy period ending after time $T$ is lower bounded by the probability that the sum of the durations of $r$ busy periods is at least $T$. \eqref{eq:G-def} implies that the latter quantity is equal to $\sum_{n=1}^M \int_T^{\infty} G_{r,L^m M^{1-m}}(t,n,\psi,\lambda) \, \mathrm{d} t$. The proposition then follows by noting that this is true for any $r \in \mathbb{N}$.
\qed
 \end{proof}

 \begin{remark}
In the proof of Proposition~\ref{prop:nbp-lower-bound}, when deriving the probabilistic upper bound on the queue length over a given time horizon $[0,T]$, we neglected the idle periods in $[0,T]$. This introduces conservatism in the bound on the right hand side of \eqref{eq:nbp-lower-bound}. The idle period durations are independently and identically distributed exponential random variables (since the arrival process is Poisson), and hence one could incorporate them into \eqref{eq:nbp-lower-bound} by taking the convolution of $G$ with the idle period distributions. We do not do so here in order to keep the presentation of the bounds in \eqref{eq:nbp-lower-bound} concise. The resulting conservatism is also present in Proposition~\ref{prop:nbn-lower-bound}, and carries over to Theorems~\ref{thm:superlinear-bound-empty} and \ref{thm:superlinear-bound-non-empty}, as well as to the corresponding simulations reported in Figures~\ref{fig:lambda-m-empty}, \ref{fig:superlinear-large-L} and \ref{fig:phase-transition-non-empty}.
\end{remark}
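The conservatism discussed in the above remark can be gauged numerically by simulating the slower system HTQ-s directly, i.e., a queue whose workload drains at the constant rate $p=L^m M^{1-m}$, and estimating the probability of the event used in the proof of Proposition~\ref{prop:nbp-lower-bound}. The following Python sketch is a simplified Monte Carlo illustration; the distribution $\psi$ (uniform) and all parameter values are assumptions, and busy periods straddling $T$ are only checked up to $T$.
\begin{verbatim}
import random

def event_probability(lam, p, M, T, R=1.0, trials=2000, seed=0):
    """Estimate the probability that every busy period overlapping [0, T]
    of a constant-rate-p queue (HTQ-s in the proof) has at most M arrivals.
    psi is taken uniform on [0, R]; arrivals after T are ignored
    (a simplification)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t, w, n_bp, ok = 0.0, 0.0, 0, True
        while t < T:
            dt = rng.expovariate(lam)
            if w <= p * dt:      # queue empties before the next arrival
                n_bp = 0
            w = max(w - p * dt, 0.0) + rng.uniform(0.0, R)
            n_bp += 1            # arrivals in the current busy period
            if n_bp > M:
                ok = False
                break
            t += dt
        hits += ok
    return hits / trials

# Illustrative values (assumptions): L = 1, m = 2, M = 5, so p = L^m M^(1-m)
L, m, M = 1.0, 2.0, 5
print(event_probability(lam=0.1, p=L**m * M**(1 - m), M=M, T=50.0))
\end{verbatim}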
The next result generalizes Proposition~\ref{prop:nbp-lower-bound} to non-zero initial conditions. Note that the non-zero initial condition only affects the first busy period; all subsequent busy periods necessarily start with zero initial condition.

 \begin{proposition}\label{prop:nbn-lower-bound}
 For any $m>1$, $M\in\mathbb{N}$, $L>0$, $\lambda >0$, $\varphi\in \Phi$, $\psi \in \Psi$, initial condition $x_0 \in [0,L]^{n_0}$, $n_0 \in \mathbb{N}$, with associated workload $w_0>0$, the probability that the queue length is upper bounded by $M+n_0$ over a given time interval $[0,T]$ satisfies the following:

 $$\Pr \big(N(t) \leq M + n_0 \quad \forall t \in [0,T] \big)\geq \sup_{r \in \mathbb{N}} \, \sum_{n=1}^M \int_T^{\infty} G_{L^m (M+n_0)^{1-m}}(\delta_{w_0}) * G_{r-1,L^m M^{1-m}}(\psi) (t,n,\lambda) \, \mathrm{d} t$$
 \end{proposition}
 \begin{proof}
The proof is similar to the proof of Proposition \ref{prop:nbp-lower-bound}; however, since the initial condition contributes $n_0$ additional vehicles to the first busy period, the \emph{event} of interest is now that the queue length in HTQ-s does not exceed $M+n_0$ in the first busy period and $M$ in the subsequent busy periods, while HTQ-s operates with constant service rates $L^m (M+n_0)^{1-m}$ and $L^m M^{1-m}$, respectively.
\qed
 \end{proof}

 We shall use Propositions~\ref{prop:nbp-lower-bound} and \ref{prop:nbn-lower-bound} to establish probabilistic lower bounds for a finite time horizon version of the throughput defined in Definition \ref{def:throughput}: for $T>0$, let
 \begin{equation*}
\lambda_{\text{max}}(L,m,\varphi,\psi,x_0,\delta,T):= \sup \left\{\lambda \geq 0: \, \Pr \left( N(t;L, m, \lambda, \varphi, \psi, x_0) < + \infty, \quad \forall t \in [0,T] \right) \geq 1 - \delta \right\}.
\end{equation*}

\begin{theorem}\label{thm:superlinear-bound-empty}
 For $L>0$, $m>1$, $\varphi\in\Phi$, $\psi\in\Psi$, $\delta\in(0,1)$, $T>0$, and zero initial condition $x_0=0$,
 \begin{equation}
 \label{eq:sublinear-throughput-zero-initial-condition}
 \lambda_{\max}(L, m, \varphi, \psi, x_0,\delta,T)\geq \sup_{M \in \mathbb{N}}\;\sup \Big\{\lambda \geq 0 \; \Big | \sup_{r \in \mathbb{N}} \, \sum_{n=1}^M \int_T^{\infty} G_{r,L^m M^{1-m}}(t,n,\psi,\lambda) \, \mathrm{d} t \geq1-\delta \Big\}
 \end{equation}
 \end{theorem}
 \begin{proof}
Follows by combining Proposition \ref{prop:nbp-lower-bound} with the above definition of $\lambda_{\text{max}}(L,m,\varphi,\psi,x_0,\delta,T)$.
\qed
 \end{proof}

 \begin{theorem}\label{thm:superlinear-bound-non-empty}
 For $L>0$, $m>1$, $\varphi\in\Phi$, $\psi\in\Psi$, $\delta\in(0,1)$, $T>0$, initial condition $x_0 \in [0,L]^{n_0}$, $n_0 \in \mathbb{N}$, with associated workload $w_0>0$,
 \begin{multline*}
 \lambda_{\max}(L, m, \varphi, \psi, x_0,\delta,T) \\ \geq \sup_{M \in \mathbb{N}}\;\sup \Big \{\lambda>0 \; \Big | \sup_{r \in \mathbb{N}} \, \sum_{n=1}^M \int_T^{\infty} G_{L^m (M+n_0)^{1-m}}(\delta_{w_0}) * G_{r-1,L^m M^{1-m}}(\psi) (t,n,\lambda) \, \mathrm{d} t \geq 1-\delta \Big\}
 \end{multline*}
 \end{theorem}
 \begin{proof}
 Follows by combining Proposition \ref{prop:nbn-lower-bound} with the above definition of $\lambda_{\text{max}}(L,m,\varphi,\psi,x_0,\delta,T)$.
 \qed
 \end{proof}

 \begin{remark}
 In Theorems~\ref{thm:superlinear-bound-empty} and \ref{thm:superlinear-bound-non-empty}, we implicitly use the standard convention that the supremum over an empty set is zero.
 \end{remark}
\section{Throughput Analysis}
\label{sec:throughput-analysis}

\input{linear}

\input{nonlinear}

\input{super-linear}

\input{sublinear-new}

\section{Appendix}
\label{sec:appendix}

In this section, we gather a few technical results that are used in the main results of the paper.

\begin{definition}[Type $K$ function]~\cite{Smith:08}
Let $g:S\mapsto\mathbb{R}^n$ be a function on $S\subset\mathbb{R}^n$. $g$ is said to be of type $K$ in $S$ if, for each $i \in \until{n}$, $g_i(z_1)\leq g_i(z_2)$ holds true for any two points $z_1$ and $z_2$ in $S$ satisfying $z_1 \leq z_2$ and $z_{1,i}=z_{2,i}$.
\end{definition}

\begin{lemma}
\label{lem:monotonicity-same-size}
Let $\map{g}{S}{\mathbb{R}^N}$ and $\map{h}{S}{\mathbb{R}^N}$ be both of type $K$ over $S \subset \mathbb{R}^N$. Let $z_1(t)$ and $z_2(t)$ be the solutions to $\dot{z}=g(z)$ and $\dot{z}=h(z)$, starting from initial conditions $z_1(0)$ and $z_2(0)$, respectively. Let $S$ be positively invariant under $\dot{z}=g(z)$ and $\dot{z}=h(z)$. If $g(z) \leq h(z)$ for all $z \in S$, and $z_1(0) \leq z_2(0)$, then $z_1(t) \leq z_2(t)$ for all $t \geq 0$.
\end{lemma}
\begin{proof}
By contradiction, let $\tilde{t} \geq 0$ be the smallest time at which there exists some $k \in \until{N}$ such that $z_1(\tilde{t}) \leq z_2(\tilde{t})$, $z_{1,k}(\tilde{t}) = z_{2,k}(\tilde{t})$, and
\begin{equation}
\label{eq:class-K-contradiction}
g_k(z_1(\tilde{t})) > h_k(z_2(\tilde{t})).
\end{equation}
Since $g$ is of type $K$, $z_1(\tilde{t}) \leq z_2(\tilde{t})$ and $z_{1,k}(\tilde{t}) = z_{2,k}(\tilde{t})$ imply that $g_k(z_1(\tilde{t})) \leq g_k(z_2(\tilde{t}))$. This, combined with the assumption that $g(z) \leq h(z)$ for all $z \in S$, implies that $g_k(z_1(\tilde{t})) \leq h_k(z_2(\tilde{t}))$, which contradicts \eqref{eq:class-K-contradiction}.
\qed
\end{proof}

Lemma~\ref{lem:monotonicity-same-size} is relevant because the basic dynamical system in our case is of type $K$.

\begin{lemma}
\label{lem:car-following-type-K}
For any $L > 0$, $m>0$, and $N \in \mathbb{N}$, the right hand side of \eqref{eq:x-dynamics-in-Rn} is of type $K$ in $\mathbb{R}_+^N$.
\end{lemma}
\begin{proof}
Consider $\tilde{x}, \hat{x} \in \mathbb{R}_+^N$ such that $\tilde{x} \leq \hat{x}$. If $\tilde{x}_i = \hat{x}_i$ for some $i \in \until{N}$, then, according to \eqref{eq:inter-vehicle-distance-Rn}, $y_i(\tilde{x})-y_i(\hat{x})= \left(\tilde{x}_{i+1} - \hat{x}_{i+1} \right) -\left(\tilde{x}_i-\hat{x}_i \right) = \tilde{x}_{i+1} - \hat{x}_{i+1}$ if $i \in \until{N-1}$, and is equal to $\left(\tilde{x}_{1} - \hat{x}_{1} \right) - \left(\tilde{x}_N-\hat{x}_N \right)= \tilde{x}_{1} - \hat{x}_{1}$ if $i=N$. In either case, $y_i(\tilde{x}) \leq y_i(\hat{x})$, which also implies $y_i^m(\tilde{x}) \leq y_i^m(\hat{x})$ for all $m > 0$, i.e., the $i$-th component of the right hand side of \eqref{eq:x-dynamics-in-Rn} satisfies the type $K$ property.
\qed
\end{proof}

In order to state the next lemma, we need a couple of additional definitions.

\begin{definition}[Monotone Aligned and Monotone Opposite Functions]
Two strictly monotone functions $\map{h}{\mathbb{R}}{\mathbb{R}}$ and $\map{g}{\mathbb{R}}{\mathbb{R}}$ are said to be \emph{monotone-aligned} if they are both either strictly increasing, or strictly decreasing. Similarly, the two functions are called \emph{monotone-opposite} if one of them is strictly increasing, and the other is strictly decreasing.
\end{definition}
\begin{lemma}
\label{lem:appendix-general-summation}
Let $\map{h}{\mathbb{R}_+}{\mathbb{R}}$ and $\map{g}{\mathbb{R}_+}{\mathbb{R}}$ be strictly monotone functions. Then, for every $y \in \mathcal S_N^L$, $N \in \mathbb{N}$, $L>0$,
\begin{equation}
\label{eq:summation-general}
\sum_{i=1}^N h(y_i) \left(g(y_{i+1}) - g(y_i) \right)
\end{equation}
is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned. Moreover, \eqref{eq:summation-general} is equal to zero if and only if $y=\frac{L}{N} \mathbf{1}$.
\end{lemma}
\begin{proof}
For $i \in \until{N}$, let $I_i$ be the interval with end points $g(y_i)$ and $g(y_{i+1})$, and let $f_i(z):=\sgn{g(y_{i+1}) - g(y_i)} h(y_i) \mathbf{1}_{I_i}(z)$.
Let $g_{\text{min}}:=\min_{i \in \until{N}} g(y_i)$ and $g_{\text{max}}:=\max_{i \in \until{N}} g(y_i)$. With $f(z):=\sum_{i=1}^N f_i(z)$, \eqref{eq:summation-general} can then be written as:
\begin{equation}
\label{eq:line-integral}
\sum_{i=1}^N h(y_i) \left(g(y_{i+1}) - g(y_i) \right) = \int_{g_{\text{min}}}^{g_{\text{max}}} f(z) \, dz.
\end{equation}
We now show that, for every $z \in [g_{\text{min}}, g_{\text{max}}] \setminus \{g(y_i): i \in \until{N}\}$, $f(z)$ is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned. This, together with \eqref{eq:line-integral}, will then prove the lemma.

It is easy to see that every $z \in [g_{\text{min}}, g_{\text{max}}] \setminus \{g(y_i): i \in \until{N}\}$ belongs to an even number of intervals in $\{I_i: \, i \in \until{N}\}$, say $I_{\ell_1}, I_{\ell_2}, \ldots$, with $\ell_1 < \ell_2 < \ldots$ (see Figure \ref{fig:intervals} for an illustration). We now show that $f_{\ell_1}(z) + f_{\ell_2}(z)$ is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned; the same argument holds true for $f_{\ell_3}(z)+f_{\ell_4}(z), \ldots$. Assume that $g(y_{\ell_1}) \leq g(y_{\ell_2})$; the other case leads to the same conclusion. By the definition of the $f_i$'s, $f_{\ell_1}(z)=h(y_{\ell_1})$ and $f_{\ell_2}(z)=-h(y_{\ell_2})$. $g(y_{\ell_1}) \leq g(y_{\ell_2})$ implies that $f_{\ell_1}(z)+f_{\ell_2}(z)=h(y_{\ell_1})-h(y_{\ell_2})$ is non-negative if $h$ and $g$ are monotone-opposite, and is non-positive if $h$ and $g$ are monotone-aligned, with the equality holding true if and only if $y_{\ell_1}=y_{\ell_2}$.
\begin{figure}[htb!]
 \centering
 \includegraphics[width=15.5cm]{intervals}
 \caption{A schematic view of (a) $f_i(z), i\in\{1,2,3,4\}$, and (b) $f(z)=\sum_{i=1}^4f_i(z)$, for a $y\in\mathcal S_4^L$ ($L=1$) with $y_{\text{min}} = y_2 < y_4 < y_3 < y_1 = y_{\mathrm{max}}$ and $m<1$. }
 \label{fig:intervals}
\end{figure}
\qed
\end{proof}
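As a quick numerical sanity check of Lemma~\ref{lem:appendix-general-summation}, the following Python sketch evaluates \eqref{eq:summation-general} at random points of $\mathcal S_N^L$ for a monotone-opposite and a monotone-aligned pair; the specific choices $h(v)=v$ and $g(v)=\pm v$ are assumptions for illustration.
\begin{verbatim}
import random

def circular_sum(y, h, g):
    """Compute sum_i h(y_i) * (g(y_{i+1}) - g(y_i)) with cyclic indexing."""
    N = len(y)
    return sum(h(y[i]) * (g(y[(i + 1) % N]) - g(y[i])) for i in range(N))

random.seed(1)
L, N = 1.0, 6
for _ in range(5):
    w = [random.random() for _ in range(N)]
    y = [L * wi / sum(w) for wi in w]                      # random point of S_N^L
    opp = circular_sum(y, h=lambda v: v, g=lambda v: -v)   # monotone-opposite
    ali = circular_sum(y, h=lambda v: v, g=lambda v: v)    # monotone-aligned
    assert opp >= -1e-12 and ali <= 1e-12
print("sign pattern of the lemma confirmed on random samples")
\end{verbatim}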
\begin{lemma}\label{lemma:t-n}
For $n \in \mathbb{N} \setminus \{1\} $, let $\psi_n$ be the $n$-fold convolution of $\psi \in \Psi$ with itself. Then,
$$\int_{0}^t z \, \psi(z)\psi_{n-1}(t-z)\mathrm{d} z = \frac{t}{n}\psi_n(t) \qquad \forall \, t \geq 0$$
\end{lemma}
\begin{proof}
Let $J_1,\cdots,J_n$ be $n$ i.i.d. random variables, each with distribution $\psi$, so that the probability distribution function of the random variable $V:=\sum_{i=1}^n J_i$ is $\psi_n$. Using the linearity of expectation and the exchangeability of $J_1, \cdots, J_n$, we get that
\begin{equation*}
t=E\left[\sum_{i=1}^n J_i|V=t\right]=\sum_{i=1}^n E\left[J_i|V=t\right]=n \, E\left[J_1|V=t \right]
\end{equation*}
i.e.,
\begin{equation}
\label{eq:D1-cond-sum}
E\left[J_1|V=t \right] = \frac{t}{n}
\end{equation}
Let $f_{J_1|V}(j_1|t)$ denote the probability distribution function of $J_1|V$. By definition:
\begin{equation}\label{eq:f-d1-z}
f_{J_1|V}(j_1|t) = \frac{f_{J_1,V}(j_1,t)}{\psi_n(t)}=\frac{\psi(j_1)\psi_{n-1}(t-j_1)}{\psi_n(t)}
\end{equation}
Therefore, using \eqref{eq:D1-cond-sum} and \eqref{eq:f-d1-z}, we get that
\begin{equation*}
E[J_1|V=t] = \int_0^t zf_{J_1|V}(z|t)\, \mathrm{d} z = \int_0^t z\frac{\psi(z)\psi_{n-1}(t-z)}{\psi_n(t)} \, \mathrm{d} z = \frac{t}{n}
\end{equation*}
Simple rearrangement gives the lemma.
\qed
\end{proof}
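Lemma~\ref{lemma:t-n} can also be verified numerically. The Python sketch below does so for a unit-rate exponential $\psi$, for which $\psi_n$ has the closed (Erlang) form; the exponential choice is purely for convenience of illustration, since the distributions in $\Psi$ have bounded support.
\begin{verbatim}
import math

def erlang_pdf(n, t):
    """n-fold convolution of a unit-rate exponential density (Erlang)."""
    return math.exp(-t) * t**(n - 1) / math.factorial(n - 1)

def lhs(n, t, steps=200000):
    """Numerically integrate  int_0^t z psi(z) psi_{n-1}(t-z) dz."""
    h = t / steps
    total = 0.0
    for k in range(1, steps):
        z = k * h
        total += z * math.exp(-z) * erlang_pdf(n - 1, t - z)
    return total * h

n, t = 4, 3.0
print(lhs(n, t), (t / n) * erlang_pdf(n, t))  # the two values should agree
\end{verbatim}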
The following is an adaptation of \cite[Lemma 2.3.4]{Ross:96}.
\begin{lemma}\label{lemma:busy-period-type1-prob}
Let $a_1, \cdots, a_{n-1}$ denote the ordered values from a set of $n-1$ independent uniform $(0,t)$ random variables. Let $\tilde d_0=z \geq 0$ be a constant and $\tilde d_1, \tilde d_2, \cdots, \tilde d_{n-1}$ be i.i.d. non-negative random variables that are also independent of $\{a_1, \cdots, a_{n-1}\}$. Then,
\begin{align*}
\Pr(\tilde d_{k}+ \cdots +\tilde d_{n-1}\leq a_{n-k},\, k=1,\cdots,n-1 \,|\,\tilde d_0+ \cdots +\tilde d_{n-1}=t, \tilde{d}_0=z)
 = \begin{cases} z\/t & z\leq t\\ 1 & z> t\end{cases}
\end{align*}
\end{lemma}

The next lemma characterizes the mean busy period duration in the linear case.
 \begin{lemma}\label{lemma:busy-period-mean}
 For any $\lambda < L\/\bar{\psi}$, $L>0$, $m=1$, $\varphi \in \Phi$, $\psi \in \Psi$, the mean value of the busy period duration is equal to $\bar \psi \/ (L - \lambda \bar \psi)$.
 \end{lemma}
 \begin{proof}
A busy period, say of duration $B$, is initiated by the arrival of a vehicle, say $j$, when the system is idle. Let the number of vehicles that arrive during the busy period be $N_{bn}$. Note that $N_{bn}$ does not include the vehicle initiating the busy period. Therefore, the workload brought into the system during the busy period is equal to $w_B=\sum_{i=j}^{j+N_{bn}} d_i$. The expected value of $N_{bn}$ can be obtained by conditioning on the duration of the busy period:
\begin{equation}\label{eq:nb-expectation}
E[N_{bn}]=E\left[E[N_{bn}|B]\right]=E[\lambda B]=\lambda E[B]
\end{equation}
where the second equality follows from the fact that the arrival process is a Poisson process. Since the event $\{N_{bn}+1=n\}$ is independent of $\{d_{j+i},i \geq n\}$, $N_{bn}+1$ is a stopping time for the sequence $\{d_{j+i},i\geq 0\}$. Therefore, using Wald's equation, e.g., see \cite[Theorem 3.3.2]{Ross:96}, and \eqref{eq:nb-expectation}, the expected value of the workload $w_B$ added to the system during the busy period $B$ is given by:
\begin{equation}\label{eq:wb-expectation}
E[w_B]=(E[N_{bn}]+1) \, \bar \psi=(\lambda E[B]+1) \, \bar \psi.
\end{equation}

\begin{figure}[htb!]
 \centering
 \includegraphics[width=5.5cm]{busy-period-new-notation}
 \caption{(a) Queue length process and (b) workload process during a busy period. }
 \label{fig:busy-period}
\end{figure}

Since the workload decreases at a constant rate $L$ during a busy period, we have $B=w_B\/L$ (see Figure \ref{fig:busy-period} for an illustration). Therefore, $E[B]=E[w_B]\/L$, which, when combined with \eqref{eq:wb-expectation} and solved for $E[B]$, establishes the lemma.
\qed
\end{proof}

\begin{remark}
\label{remark:waiting-time}
Since the mean busy period duration is an upper bound on the mean waiting time, Lemma \ref{lemma:busy-period-mean} also gives an upper bound on the mean waiting time. One can then use Little's law~\cite{Kleinrock:75}\footnote{Little's law has previously been used in the context of processor sharing queues, e.g., in \cite{Altman.ea:06}.} to show that the mean queue length is upper bounded by $\lambda\bar \psi \/ (L - \lambda \bar \psi)$.
\end{remark}

Let $\mathcal{I}(t):=\int_0^t\delta_{\{w(s)=0\}}\mathrm{d} s$ be the cumulative \emph{idle time} up to time $t$. The following result characterizes the long-run proportion of the idle time in the linear case.
\begin{proposition}
\label{prop:long-run-idle-time}
For any $\lambda < L\/\bar{\psi}$, $L>0$, $\varphi \in \Phi, \psi \in \Psi$, the long-run proportion of time in which the HTQ is idle is given by the following:
$$\lim_{t\to\infty} \frac{\mathcal{I}(t)}{t}=1-\frac{\lambda \bar \psi}{L}>0 \quad a.s.$$
\end{proposition}

\begin{proof}
The HTQ alternates between busy and idle periods. Let $Z=I+B$ be the duration of a cycle that contains an idle period of length $I$ followed by a busy period of length $B$. The idle period $I$ has the same distribution as the inter-arrival times, i.e., it is an exponential random variable with mean $1\/\lambda$, and the mean value of $B$ is given in Lemma \ref{lemma:busy-period-mean}. Note that the cycle durations $Z$ are i.i.d. random variables. Thus, the busy-idle profile of the system is an alternating renewal process, where renewals correspond to the moments at which the system becomes idle. Suppose the system earns reward at a rate of one per unit of time when it is idle, so that the reward for a cycle equals the idle time $I$ of that cycle. Then, the total reward earned up to time $t$ is equal to the total idle time $\mathcal{I}(t)$ in $[0,t]$, and the result for renewal reward processes (see \cite[Theorem 3.6.1]{Ross:96}) gives, with probability one, $\lim_{t\to\infty} \mathcal{I}(t)\/t=E[I]\/(E[B]+E[I])$. Substituting $E[I]=1\/\lambda$ and $E[B]=\bar\psi\/(L-\lambda\bar\psi)$ from Lemma \ref{lemma:busy-period-mean} gives the claimed expression.
\qed
\end{proof}
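Both Lemma~\ref{lemma:busy-period-mean} and Proposition~\ref{prop:long-run-idle-time} are easy to check by direct simulation of the workload process. The following Python sketch is one such illustration; it assumes, purely for concreteness, that $\psi$ is uniform on $[0,R]$, and the parameter values are not taken from the paper.
\begin{verbatim}
import random

def simulate_linear_htq(lam, L, R, horizon=5.0e4, seed=0):
    """Workload process with constant drain rate L (the m=1 case);
    psi is taken uniform on [0, R] for illustration."""
    rng = random.Random(seed)
    t = w = idle = 0.0
    busy_durations, bp_start = [], None
    while t < horizon:
        dt = rng.expovariate(lam)
        if w <= L * dt:                  # queue empties before the next arrival
            if bp_start is not None:
                busy_durations.append(t + w / L - bp_start)
                bp_start = None
            idle += dt - w / L
            w = 0.0
        else:
            w -= L * dt
        t += dt
        if bp_start is None:
            bp_start = t                 # this arrival initiates a busy period
        w += rng.uniform(0.0, R)
    return sum(busy_durations) / len(busy_durations), idle / t

lam, L, R = 1.5, 1.0, 1.0                # illustrative values; psi_bar = R/2
mean_B, idle_frac = simulate_linear_htq(lam, L, R)
psi_bar = R / 2
print(mean_B, psi_bar / (L - lam * psi_bar))   # ~ Lemma busy-period-mean
print(idle_frac, 1 - lam * psi_bar / L)        # ~ Prop. long-run-idle-time
\end{verbatim}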
\subsection{Busy Period Distribution}
\label{sec:busy-period}
In this section, we compute the cumulative distribution function for the number of new arrivals during a busy period for an HTQ with constant service rate, say $p>0$. This could, e.g., correspond to \eqref{eq:inter-vehicle-distance-dynamics} for $m=1$. However, our analysis in this section is not restricted to this specific model, but applies to any HTQ with constant service rate $p$. This cumulative distribution for the number of new arrivals during a busy period, while of independent interest, will be used to derive lower bounds on the throughput in the super-linear case in Section~\ref{subsec:superlinear}. Our analysis is inspired by that of the M\/G\/1 queue, e.g., see \cite{Ross:96}; our treatment of non-zero initial conditions appears to be novel.

Let us consider an arbitrary busy period spanning the time interval $(0,t)$, without loss of generality. For non-zero initial conditions, one has to distinguish between the first and subsequent busy periods. Let the workload at the beginning of the arbitrary busy period, denoted as $d_0$, be sampled from $\theta$. The relationship between $\theta$ and $\psi$ is as follows. If the system starts from a non-zero initial condition with initial workload $w_0>0$, then the value of $d_0$ for the first busy period is deterministic and equal to $w_0$, and hence $\theta=\delta_{w_0}$. For subsequent busy periods, or if the initial condition is zero, $d_0$ is sampled from $\theta=\psi$. The workloads brought to the system by arriving vehicles, $\{d_i\}_{i=1}^\infty$, equal the distances that the vehicles wish to travel, and are sampled independently and identically from the distribution $\psi$. When the system is busy, the workload decreases at a given constant rate $p > 0$. The busy period ends when the workload becomes zero.
\begin{remark}
We emphasize that $d_0$ denotes the workload at the beginning of a busy period (see Figure \ref{fig:normalized-distance} for further illustration), and hence is not equal to zero even when the queue starts from a zero initial condition.
\end{remark}

In order to align our calculations with the standard M\/G\/1 framework, where the service rate is assumed to be unity, we consider normalized workloads, $\tilde{d}_i:=d_i\/p$ for all $i \in \{0,1,\cdots\}$ (see Figure \ref{fig:normalized-distance} for an illustration). Correspondingly, let the distributions of the normalized distances be denoted as $\tilde{\theta}$ and $\tilde{\psi}$. Let the arrival time of the $k$-th new vehicle during $(0,t)$ be denoted as $T_k$, and let $N_{bn}$ denote the number of new arrivals in $(0,t)$; i.e., the total number of arrivals over the entire duration of the busy period, including the vehicle which initiates it, is $N_{bn} + 1$.

A busy period ends at time $t$ and $N_{bn}=n-1$ if and only if:
\begin{enumerate}[(i)]
\item $T_k\leq \tilde d_0+\cdots+ \tilde d_{k-1}, \qquad k=1,\cdots,n-1$
\item $\tilde d_0+\cdots+ \tilde d_{n-1}=t$
\item there are exactly $n-1$ arrivals in $(0,t)$
\end{enumerate}

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{normalized-distance}
\caption{Evolution of the workload during the first two busy periods for an HTQ with constant service rate $p$, starting from a non-zero initial condition. In the first busy period, $d_0$ is equal to the workload $w_0$ associated with the non-zero initial condition. In the second busy period, $d_0$ is equal to the workload brought by the vehicle that initiates that busy period.}
\label{fig:normalized-distance}
\end{center}
\end{figure}

By treating densities as if they were probabilities, we get:
\begin{align}
\label{eq:prop-busy-des}
\Pr & (B =t \text{ and }N_{bn} = n -1)\nonumber \\
=& \Pr(\tilde d_0+\cdots+ \tilde d_{n-1}=t, n-1 \text{ arrivals in $(0,t)$}, T_k\leq \tilde d_0+\cdots+ \tilde d_{k-1}, k=1,\cdots,n-1) \nonumber \\
=& \int_0^t\Pr(T_k\leq \tilde d_0+\cdots+ \tilde d_{k-1}, k=1,\cdots,n-1| n-1 \text{ arrivals in $(0,t)$}, \tilde d_0+\cdots+ \tilde d_{n-1}=t, \tilde d_0 =z) \nonumber \\
& \times \Pr(n-1 \text{ arrivals in $(0,t)$}, \tilde d_1+\cdots+ \tilde d_{n-1}=t-z)\, \tilde \theta(z) \, \mathrm{d} z
\end{align}
where we recall that $B$ is the random variable corresponding to the busy period duration.
By the independence of the normalized distances and the arrival process, the second probability term in the integrand in \eqref{eq:prop-busy-des} can be expressed as
\begin{align}
\label{eq:prop-busy-part2}
\Pr(n-1 \text{ arrivals in $(0,t)$}, \tilde d_1+\cdots+ \tilde d_{n-1}=t-z)=e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!} \tilde \psi_{n-1}(t- z)
\end{align}
where $\tilde\psi_n$ is the $n$-fold convolution of $\tilde\psi$ with itself.

In the first probability term in \eqref{eq:prop-busy-des}, it is given that the system receives $n-1$ arrivals in $(0,t)$, and, since the arrival process is a Poisson process, the ordered arrival times, $\{T_1,T_2,\cdots,T_{n-1}\}$, are distributed as the ordered values of a set of $n-1$ independent uniform $(0,t)$ random variables $\{a_1,a_2,\cdots,a_{n-1}\}$ (see Theorem 2.3.1 in \cite{Ross:96}). Thus,
\begin{align}\label{eq:busy-period-given-n-prob}
\Pr & (T_k\leq \tilde d_0+\cdots+ \tilde d_{k-1}, k=1,\cdots,n-1| n-1 \text{ arrivals in $(0,t)$}, \tilde d_0+\cdots+ \tilde d_{n-1}=t,\tilde d_0 =z) \nonumber \\
& = \Pr(a_k\leq \tilde d_0+\cdots+\tilde d_{k-1}, k=1,\cdots,n-1|\tilde d_0+\cdots+\tilde d_{n-1} =t,\tilde d_0 =z)
\end{align}

By noting that $t-U$ will also be a uniform $(0,t)$ random variable whenever $U$ is, it follows that $a_1,\cdots,a_{n-1}$ has the same joint distribution as $t-a_{n-1},\cdots,t-a_1$. Thus, replacing $a_k$ with $a_{n-k}$ for $k\in\{1,\cdots,n-1\}$ in \eqref{eq:busy-period-given-n-prob}, we get
\begin{align}\label{eq:prob-manipulation}
 \Pr&(a_k \leq \tilde d_0+\cdots+\tilde d_{k-1}, k=1,\cdots,n-1|\tilde d_0+\cdots+\tilde d_{n-1} =t,\tilde d_0 =z) \nonumber \\
 & =\Pr(t-a_{n-k} \leq \tilde d_0+\cdots+\tilde d_{k-1}, k=1,\cdots,n-1|\tilde d_0+\cdots+\tilde d_{n-1} =t,\tilde d_{0} =z) \nonumber \\
 & = \Pr(t-a_{n-k} \leq t-(\tilde d_{k}+\cdots+\tilde d_{n-1}), k=1,\cdots,n-1|\tilde d_0+\cdots+\tilde d_{n-1} =t,\tilde d_0 =z) \nonumber \\
 & = \Pr(a_{n-k} \geq \tilde d_{k}+\cdots+\tilde d_{n-1}, k=1,\cdots,n-1|\tilde d_0+\cdots+\tilde d_{n-1} =t,\tilde d_0 =z) = \begin{cases} z\/t & z\leq t\\ 1 & z>t\end{cases}
\end{align}
where the last equality follows from Lemma \ref{lemma:busy-period-type1-prob}.

Combining \eqref{eq:prop-busy-des}, \eqref{eq:prop-busy-part2} and \eqref{eq:prob-manipulation}, we obtain the joint probability distribution function of the busy period duration and the number of new arrivals:
\begin{equation}
\label{eq:G-def}
G_{p}(\theta)(t,n,\lambda):=\Pr \big(B =t, N_{bn} = n -1\big)= e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!}\int_0^t \frac{z}{t} \, \tilde \psi_{n-1}(t- z)\, \tilde \theta(z) \, \mathrm{d} z
\end{equation}
where the dependence on the service rate $p$ enters through the normalized distributions $\tilde\theta$ and $\tilde\psi$. For $r \in \mathbb{N}$, we let $G_{r,p}(t,n,\psi,\lambda)$ denote the $r$-fold convolution of $G_p(\psi)$ with itself with respect to the duration variable $t$; it corresponds to $r$ consecutive busy periods, each starting with workload sampled from $\psi$ and served at the constant rate $p$.
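The joint density \eqref{eq:G-def} can be cross-checked by direct simulation of busy periods of a constant-service-rate queue. The Python sketch below tabulates samples of $(B, N_{bn})$; the uniform choice of $\psi$ and the parameter values are illustrative assumptions, and the empirical mean duration can also be compared against Lemma~\ref{lemma:busy-period-mean} with $L$ replaced by $p$.
\begin{verbatim}
import random

def busy_period_sample(lam, p, R, rng):
    """Sample one busy period (duration, N_bn) of a queue whose workload
    drains at constant rate p; psi is uniform on [0, R] (an assumption).
    Terminates almost surely whenever lam * R / 2 < p."""
    w = rng.uniform(0.0, R)          # d_0: workload of the initiating vehicle
    duration, n_new = 0.0, 0
    while True:
        dt = rng.expovariate(lam)
        if w <= p * dt:              # workload empties before the next arrival
            return duration + w / p, n_new
        w += rng.uniform(0.0, R) - p * dt
        duration += dt
        n_new += 1

rng = random.Random(7)
lam, p, R = 0.5, 1.0, 1.0
samples = [busy_period_sample(lam, p, R, rng) for _ in range(100000)]
print("mean duration:", sum(b for b, _ in samples) / len(samples))
print("Pr(N_bn <= 3):", sum(n <= 3 for _, n in samples) / len(samples))
\end{verbatim}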
\section{Conclusions}
\label{sec:conclusions}

In this paper, we formulated and analyzed a novel horizontal traffic queue. A key characteristic of this queue is the state dependence of its service rate. We established useful properties of the service rate dynamics, and extended calculations for M\/G\/1 busy period distributions to our setting, including for non-empty initial conditions. These results allowed us to provide tight results for the throughput in the linear case, and probabilistic bounds on the queue length over finite time horizons in the super-linear case. We also studied the throughput under a batch release control policy, where the additional waiting induced by the control policy is interpreted as a perturbation to the arrival process, and provided a lower bound on the throughput for a given maximum permissible perturbation. In particular, if the allowable perturbation is sufficiently large, then this lower bound grows unbounded as $m \to 0^+$. Simulation results suggest a sharp phase transition in the throughput as the car-following behavior transitions from the super-linear to the sub-linear regime.

In the future, we plan to sharpen our analysis to theoretically demonstrate the phase transition behavior. This could include, for example, considering other release control policies with better perturbation properties.
The increasing nature of the throughput in the super-linear regime for large values of $L$, as illustrated in Figure~\ref{fig:throughput-simulations}, is possibly because the car-following model considered in this paper does not impose any explicit upper bound on the speed of the vehicles. We plan to extend our analysis to such practical constraints, as well as to higher-order, e.g., second-order, car-following models, and to models emerging from consideration of inter-vehicle distances beyond the vehicle immediately in front. The connections with processor sharing queues, as highlighted in this paper, suggest the possibility of utilizing the construct of measure-valued state descriptors~\cite{Grishechkin:94,Gromoll.Puha.ea:02} to derive fluid and diffusion limits of the proposed horizontal traffic queues. In particular, one could interpret the measure-valued state descriptor as playing the role of traffic density in the context of traffic flow theory. Along this direction, we plan to investigate connections between the fluid limit of horizontal traffic queues and PDE models for traffic flow.
\section{Introduction}
\label{sec:introduction}
We consider a horizontal traffic queue (HTQ) on a periodic road segment, where vehicles arrive according to a spatio-temporal Poisson process, and depart the queue after traveling a distance that is sampled independently and identically from a spatial distribution. When inside the queue, the speed of a vehicle is proportional to a power $m>0$ of the distance to the vehicle in front. For a given initial condition, we define the throughput of such a queue as the largest arrival rate under which the queue length remains bounded. We provide a rigorous analysis of the service rate, busy period distribution, and throughput of the proposed HTQ.

Our motivation for studying the HTQ comes from advancements in connected and autonomous vehicle technologies that make it possible to program individual vehicles with rules that can optimize system-level performance. Within this application context, one can interpret the results of this paper as rigorously characterizing the impact of a parametric class of car-following behaviors on system throughput.

In the linear case ($m=1$), i.e., when the speed of every vehicle is proportional to the distance to the vehicle directly in front, the periodicity of the road segment implies that the sum of the speeds of the vehicles is proportional to the total length of the road segment, i.e., it is constant. This feature allows us to exploit the equivalence between workload and queue length to show that, independent of the initial condition and almost surely, the throughput is the inverse of the time required by a solitary vehicle to travel the average distance.

In the non-linear case ($m \neq 1$), the cumulative service rate of the HTQ is constant if and only if all the inter-vehicle distances are equal. For all other inter-vehicle configurations, we show that the service rate is strictly decreasing (resp., strictly increasing) in the super-linear, i.e., $m>1$ (resp., sub-linear, i.e., $m<1$), case. The service rate exhibits another contrasting behavior in the sub- and super-linear regimes. In the super-linear case, the service rate is maximum (resp., minimum) when all the vehicles are co-located (resp., when the inter-vehicle distances are equal), and vice-versa for the sub-linear case.
Using a combination of these properties, we prove that, when the length of the road segment is at most one, the throughput in the super-linear (resp., sub-linear) case is upper (resp., lower) bounded by the throughput for the linear case.

We prove the remaining bounds on the throughput for the non-linear case as follows. The standard calculations for the joint distribution of the duration of, and the number of arrivals during, a busy period of the M\/G\/1 queue are extended to the HTQ setting, including for non-empty initial conditions. These joint distributions are used to derive probabilistic upper bounds on the queue length over finite time horizons for the HTQ in the $m>1$ case. Such bounds are optimized to get lower bounds on the throughput defined over finite time horizons. Simulation results show good agreement between these lower bounds and numerical estimates.

We also analyze the throughput in the sub-linear and super-linear cases under a perturbation to the arrival process, which is attributed to the additional expected waiting time induced by a release control policy that adds appropriate delay to the arrival times to ensure a desired minimum inter-vehicle distance $\triangle>0$ at the time of a vehicle joining the HTQ. Since the minimum inter-vehicle distance is non-decreasing in between arrivals, including at jumps corresponding to departures, this implies an upper bound on the queue length which is inversely proportional to $\triangle$. We derive a lower bound on the throughput for a given maximum allowable perturbation. In particular, if the allowable perturbation is sufficiently large, then this lower bound grows unbounded as $m \to 0^+$.

Queueing models have been used to model and analyze traffic systems. The focus here has been primarily on vertical queues, under which vehicles travel at maximum speed until they hit a congestion spot, where all vehicles queue on top of each other. The queue length and waiting time of a minor traffic stream at an unsignalized intersection, where the major traffic stream has high priority, are studied in \cite{Tanner:62} and \cite{Heidemann:91}. In \cite{Heidemann:94}, a vertical single server queue is utilized to model the queue length distribution at signalized intersections. In \cite{Jain.Smith:97}, a state-dependent queueing system is used to model vehicular traffic flow, where the service rate depends on the number of vehicles on each road link.

On the other hand, the \emph{horizontal traffic queue} terminology has been primarily used to study macroscopic traffic flow, e.g., see \cite{Helbing:03}. While such models capture the macroscopic relationship between traffic flow and density, a rigorous description and analysis of an underlying queue model is lacking. Indeed, to the best of our knowledge, there is no prior work on the analysis of a traffic queue model that explicitly incorporates car-following behavior.

The proposed HTQ has an interesting connection with processor sharing (PS) queues, and this connection does not seem to have been documented before. A characteristic feature of PS queues is that all the outstanding jobs receive service simultaneously, while keeping the total service rate of the server constant. The simplest model is one where the service rate for an individual job is equal to $1\/N$, where $N$ is the number of outstanding jobs. In our proposed system, one can interpret the road segment as a server simultaneously providing service to all the vehicles, with the service rate of an individual vehicle equal to its speed.
This natural analogy between HTQ and PS queues, to the best of our knowledge, was reported for the first time in our recent work~\cite{Motie.Savla.CDC15}. The $1\/N$ rule applied to our setting implies that all the vehicles travel with the same speed. Clearly, neither such a rule nor the more general discriminatory PS disciplines, e.g., see \cite{Kleinrock:67}, are applicable to the car-following models considered in this paper. Indeed, the proposed HTQ is best described as a state-dependent PS queue.

In the PS queue literature, the focus has been on the sojourn time and queue length distribution. For example, see \cite{Ott:84} and \cite{Yashkov:83} for the M\/G\/1-PS queue and \cite{Grishechkin:94} for the G\/G\/1-PS queue. Fluid limit analysis for the PS queue is provided in \cite{Chen.ea:97} and \cite{Gromoll.Puha.ea:02}. However, relatively little attention has been paid to the throughput analysis of state-dependent PS queues. In \cite{Moyal:08,Kherani.Kumar:02,Chen.Jordan:07}, throughput analysis for state-dependent PS queues is provided, where throughput is defined as the quantity of work achieved by the server per unit of time. Stability analysis for a single server queue with workload-dependent service and arrival rates is provided in \cite{Bambos.Walrand:89} and \cite{Bekker:05}. However, the dependence of the service rate on the system state in the HTQ proposed in the current paper is complex, and hence none of these results is readily applicable.

In summary, this paper makes several novel contributions. First, we propose a novel horizontal traffic queue and place it in the context of processor-sharing queues and state-dependent queues. We establish monotonicity properties of the service rate in between jumps (i.e., arrivals and departures), and derive bounds on the change in the service rate at jumps. Second, we adapt busy period calculations for the M\/G\/1 queue to our setup, including for non-empty initial conditions. These results allow us to provide tight results for the throughput in the linear case, and probabilistic bounds on the queue length over finite time horizons in the super-linear case. We also study the throughput under a batch release control policy, whose effect is interpreted as a perturbation to the arrival process, and provide a lower bound on the throughput for a given maximum permissible perturbation in the sub- and super-linear cases. In particular, we show that, for sufficiently large perturbation, this lower bound grows unbounded as $m \to 0^+$. It is interesting to compare our analytical results with simulation results, which suggest a sharp transition in the throughput from being unbounded in the sub-linear regime to being bounded in the super-linear regime. While our analytical results do not yet exhibit such a phase transition, their novelty lies in providing rigorous estimates of the throughput of horizontal traffic queues under nonlinear car-following models.

The rest of the paper is organized as follows. We conclude this section with key notations to be used throughout the paper. The setting for the proposed horizontal traffic queue and the formal definition of throughput are provided in Section~\ref{sec:problem-formulation}. Section~\ref{sec:service-rate-monotonicity} contains useful properties of the service rate dynamics in between and at jumps. Key busy period properties for the M\/G\/1 queue are extended to the HTQ case in Section~\ref{sec:busy-period}. Throughput analysis is reported in Section~\ref{sec:throughput-analysis}.
Simulations are presented in Section~\ref{sec:simulations}. Concluding remarks and directions for future work are presented in Section~\ref{sec:conclusions}. A few technical intermediate results are collected in the appendix.

\subsection*{Notations}
Let $\mathbb{R}$, $\mathbb{R}_+$, and $\mathbb{R}_{++}$ denote the set of real, non-negative real, and positive real numbers, respectively. Let $\mathbb{N}$ be the set of natural numbers. If $x_1$ and $x_2$ are of the same size, then $x_1 \geq x_2$ denotes element-wise inequality between $x_1$ and $x_2$. If $x_1$ and $x_2$ are of different sizes, then $x_1 \geq x_2$ denotes inequality only between elements which are common to $x_1$ and $x_2$ -- such a common set of elements will be specified explicitly. For a set $\mathcal{J}$, let $\text{int}(\mathcal{J})$ and $|\mathcal{J}|$ denote the interior and cardinality of $\mathcal{J}$, respectively. Given $a \in\mathbb{R}$ and $b > 0$, we let $\mod(a,b):=a-\lfloor \frac{a}{b}\rfloor b$. Let $\mathcal S_N^L$ be the $(N-1)$-simplex scaled by $L$, i.e., $\mathcal S_N^L=\setdef{x \in \mathbb{R}_+^N}{\sum_{i=1}^N x_i = L}$. When $L=1$, we shall use the shorthand notation $\mathcal S_N$. When referring to the set $\until{N}$, for brevity, we let the indices $i=-1$ and $i=N+1$ correspond to $i=N$ and $i=1$, respectively. Also, for $p, q \in \mathcal S_N$, we let $D(p || q)$ denote the K-L divergence of $q$ from $p$, i.e., $D(p || q):= \sum_{i=1}^N p_i \log\left(p_i\/q_i\right)$. We also define a permutation matrix, $P^- \in \{0,1\}^{N \times N}$, as follows:
\begin{align*}
P^-:=\begin{bmatrix}
\mathbf{0}_{N-1}^T & 1 \\
I_{N-1} & \mathbf{0}_{N-1}
\end{bmatrix}
\end{align*}
where $\mathbf{0}_N$ and $\mathbf{1}_N$ stand for vectors of size $N$ all of whose entries are zero and one, respectively. We shall drop $N$ from $\mathbf{0}_N$ and $\mathbf{1}_N$ whenever it is clear from the context.

\subsection{Linear Case: $m=1$}
\label{sec:linear}
In this section, we provide an exact characterization of the throughput for the linear case, i.e., when $m=1$. Recall that, for $m=1$, the service rate $s(y)=\sum_{i=1}^N y_i \equiv L$ is constant.

\begin{proposition}\label{prop:unstable}
For any $L > 0$, $\varphi \in \Phi$, $\psi \in \Psi$, $x_0 \in [0,L]^{n_0}$, $n_0 \in \mathbb{N}$:
$$
\lambda_{\text{max}}(L,m=1,\varphi,\psi,x_0,\delta=0) \leq L\/\bar{\psi} \, .
$$
\end{proposition}
\begin{proof}
By contradiction, assume $\lambda_{\text{max}}>L\/\bar{\psi}$. Then there exists $\lambda > L\/\bar{\psi}$ under which the queue length remains bounded for all times with probability one; fix such a $\lambda$. Let $r(t):=\sum_{i=1}^{A(t)} d_i$ be the workload added to the system by the $A(t)$ vehicles that arrive over $[0,t]$. Therefore,
\begin{equation}\label{eq:workload-linear}
w(t)=w_0+r(t)-L(t-\mathcal{I}(t))
\end{equation}
where $w_0$ is the initial workload. The process $\{r(t), \, t\geq 0\}$ is a renewal reward process, where the renewals correspond to arrivals of vehicles, and the rewards correspond to the distances $\{d_i\}_{i=1}^{\infty}$ that vehicles wish to travel in the system upon arrival before their departures. The inter-arrival times are exponential random variables with mean $1\/\lambda$, and the reward associated with each renewal is independently and identically sampled from $\psi$, whose mean is $\bar{\psi}$.
Therefore, e.g., \cite[Theorem 3.6.1]{Ross:96} implies that, with probability one,
\begin{equation}
\label{eq:slln-renewal}
\lim_{t\to\infty}\frac{r(t)}{t}=\lambda \bar{\psi}
\end{equation}
Thus, for all $\varepsilon \in \left(0,\lambda \bar{\psi} -L\right)$, there exists a $t_0\geq0$ such that, with probability one,
\begin{equation}\label{eq:s-t-lowerbound}
\frac{r(t)}{t}\geq\lambda \bar \psi-\varepsilon\/2> L + \varepsilon\/2 \qquad \forall \, t \geq t_0.
\end{equation}
Since $w_0$ and $\mathcal{I}(t)$ are both non-negative, \eqref{eq:workload-linear} implies that $w(t)\geq r(t) - Lt$ for all $t \geq 0$. This, combined with \eqref{eq:s-t-lowerbound}, implies that, with probability one, $w(t) \geq \varepsilon t \/2$ for all $t \geq t_0$, and hence $\lim_{t\to\infty}w(t)= + \infty$. This, combined with \eqref{eq:workload-upperbound}, implies that, with probability one, $\lim_{t\to\infty}N(t)=+\infty$, contradicting the choice of $\lambda$.
\qed
\end{proof}

\begin{theorem}\label{thm:stable}
For any $L > 0$, $\varphi \in \Phi$, $\psi \in \Psi$, $x_0 \in [0,L]^{n_0}$, $n_0 \in \mathbb{N}$:
$$
\lambda_{\text{max}}(L,m=1,\varphi,\psi,x_0,\delta=0) = L\/\bar{\psi} \, .
$$
\end{theorem}
\begin{proof}
Assume that, for some $\lambda < L\/\bar{\psi}$, there exists some initial condition $(x_0,n_0)$ such that the queue length grows unbounded with some positive probability. Since the workload brought by every vehicle is i.i.d., and the inter-arrival times are exponential, without loss of generality, we can assume that the queue length never becomes zero, i.e., that the idle time satisfies $\mathcal{I}(t) \equiv 0$. Moreover, \eqref{eq:slln-renewal} implies that, for every $\varepsilon \in \left(0, L- \lambda \bar{\psi} \right)$, there exists $t_0 \geq 0$ such that, with probability one,
\begin{equation}
\label{eq:s-t-upperbound}
\frac{r(t)}{t} \leq \lambda \bar{\psi} + \varepsilon\/2 < L - \varepsilon\/2 \qquad \forall t \geq t_0
\end{equation}

Combining \eqref{eq:workload-linear} with \eqref{eq:s-t-upperbound}, and substituting $\mathcal{I}(t) \equiv 0$, we get $w(t) < w_0 - \varepsilon t \/2$ for all $t \geq t_0$, which implies that the workload, and hence the queue length, reaches zero in finite time after $t_0$, leading to a contradiction. Therefore, the queue length remains bounded with probability one for every $\lambda < L\/\bar{\psi}$. Combining this with the upper bound proven in Proposition~\ref{prop:unstable} gives the result.
\qed
\end{proof}

\begin{remark}
Theorem~\ref{thm:stable} implies that the throughput in the linear case is equal to the inverse of the time required by a solitary vehicle in the system to travel the average distance $\bar{\psi}$. In the linear case, the throughput can be characterized with probability one, independent of the initial condition of the queue.
\end{remark}
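For a concrete illustration of Theorem~\ref{thm:stable}, suppose, purely as an assumption for this example, that $\psi$ is the uniform distribution on $[0,R]$, so that $\bar{\psi}=R\/2$. Then the throughput equals $\lambda_{\text{max}} = L\/\bar{\psi} = 2L\/R$: doubling the circumference $L$ doubles the throughput, while doubling the typical travel distance $R$ halves it. This is consistent with the interpretation in the remark above, since a solitary vehicle travels at speed $L$ in the linear case, and hence requires time $\bar{\psi}\/L$ to cover the average distance.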
\subsection{Monotonicity of Throughput in $m$ and $x_0$}
\label{sec:non-linear}

In this section, we show the following monotonicity property of $\lambda_{\text{max}}$ with respect to $m$ for small values of $L$: for given $x_0 \in [0,L]^{n_0}$, $n_0 \in \mathbb{N}$, $L \in (0,1)$, $\varphi \in \Phi$, and $\psi \in \Psi$, the throughput is a monotonically decreasing function of $m$. For this section, we rewrite \eqref{eq:dynamics-moving-coordinates} in $\mathbb{R}_+^N$, i.e., without projecting onto $[0,L]^N$. Specifically, let the vehicle coordinates be given by the solution of
\begin{equation}
\label{eq:x-dynamics-in-Rn}
\dot{x}_i = y_i^m, \qquad x_i(0)=x_{0,i}, \qquad i \in \until{N}
\end{equation}
Let $X(t;x_0,m)$ denote the solution to \eqref{eq:x-dynamics-in-Rn} at $t$ starting from $x_0$ at $t=0$. We will compare $X(t;x_0,m)$ under different values of $m$ and initial conditions $x_0$, over an interval of the kind $[0,\tau)$, in between arrivals and departures. We recall the notation that, if $x_{0}^1$ and $x_{0}^2$ are vectors of different sizes, then $x_{0}^1 \leq x_{0}^2$ denotes element-wise inequality only for components which are common to $x_{0}^1$ and $x_{0}^2$. In Lemma~\ref{lemma:two-size-compare} and Proposition~\ref{prop:queue-length-2m}, this common set of components corresponds to the set of vehicles common between $x_{0}^1$ and $x_{0}^2$.

\begin{lemma}
\label{lemma:two-size-compare}
For any $L \in (0,1]$, $x^1_{0} \in \mathbb{R}_+^{n_1}$, $x^2_{0} \in \mathbb{R}_+^{n_2}$, $n_1, n_2 \in \mathbb{N}$,
$$
x^1_{0} \leq x^2_{0}, \, \, n_2 \leq n_1, \, \, 0 < m_2 \leq m_1 \implies X(t;x^1_{0},m_1) \leq X(t;x^2_{0},m_2) \qquad \forall \, t \in [0,\tau)
$$
\end{lemma}
 \begin{proof}
The proof is straightforward when $n_1=n_2$: in this case, since $y_i \leq L\leq1$, $m_2 \leq m_1$ implies $y_i^{m_2} \geq y_i^{m_1}$ for all $i \in \until{n_1}$. Using this with Lemmas~\ref{lem:monotonicity-same-size} and \ref{lem:car-following-type-K} gives the result.

In order to prove the result for $n_2 < n_1$, we show that $X(t; x^1_{0}, m_1) \leq X(t; x^2_{0}, m_1) \leq X(t; x^2_{0}, m_2)$. Note that the second inequality follows from the previous case. Therefore, it remains to prove the first inequality. Let $(i_1, \ldots, i_{n_2})$ be the set of indices of the $n_2$ vehicles such that $0 \leq x^2_{0,i_1} \leq \ldots \leq x^2_{0,i_{n_2}} \leq L$. Similarly, let $(i_1, i_1+1, \ldots, i_2, i_2+1, \ldots)$ be the indices of the $n_1$ vehicles in the order of increasing coordinates in $x_0^1$. Our assumption on the initial condition implies that $x^1_{0,i_k} \leq x^2_{0,i_k}$ for all $k \in \until{n_2}$. For brevity, let $x^1(t) \equiv X(t;x_0^1,m_1)$ and $x^2(t) \equiv X(t;x_0^2,m_1)$. It is easy to check that, for all $t \in [0,\tau)$ and all $k \in \until{n_2}$,
\begin{equation}
\label{eq:xdot-ub}
\dot{x}^1_{i_k} = \left(x^1_{i_{k}+1} - x^1_{i_k}\right)^{m_1} \leq \left(x^1_{i_{k+1}} - x^1_{i_k}\right)^{m_1}
\end{equation}
Let $t \in [0,\tau)$ be the first time instant when $x^1_{i_k}(t)=x^2_{i_k}(t)$ for some $k \in \until{n_2}$. Then, recalling $x^1_{i_{k+1}}(t) \leq x^2_{i_{k+1}}(t)$, \eqref{eq:xdot-ub} implies that $\dot{x}_{i_k}^1(t) \leq \left(x^2_{i_{k+1}} - x^2_{i_k} \right)^{m_1} = \dot{x}^2_{i_k}(t)$. The result then follows from Lemma~\ref{lem:monotonicity-same-size}.
\qed
 \end{proof}
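The ordering in Lemma~\ref{lemma:two-size-compare} can be observed numerically. The following Python sketch integrates \eqref{eq:x-dynamics-in-Rn} by forward Euler for two exponents $m_2 \leq m_1$ from a common initial condition with $L=1$; the initial condition and exponents are assumed values for illustration.
\begin{verbatim}
def integrate(x0, m, L=1.0, T=5.0, steps=50000):
    """Forward-Euler integration of dx_i/dt = y_i^m with cyclic spacings
    y_i = (x_{i+1} - x_i) mod L  (cf. the unprojected dynamics)."""
    x = list(x0)
    dt = T / steps
    N = len(x)
    for _ in range(steps):
        y = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
        x = [xi + dt * yi**m for xi, yi in zip(x, y)]
    return x

# Same initial condition, two exponents m2 <= m1 (illustrative values):
x0 = [0.0, 0.1, 0.5]
x_slow = integrate(x0, m=2.0)   # m1: slower, since y_i <= L <= 1
x_fast = integrate(x0, m=0.5)   # m2: faster
assert all(a <= b + 1e-6 for a, b in zip(x_slow, x_fast))
print(x_slow, x_fast)
\end{verbatim}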
Lemma~\ref{lemma:two-size-compare} is used to establish monotonicity of the throughput as follows.
\begin{proposition}
\label{prop:queue-length-2m}
For any $L \in (0,1]$, $\varphi \in \Phi$, $\psi \in \Psi$, $\delta \in (0,1)$, $x^1_{0} \in [0,L]^{n_1}$, $x^2_{0} \in [0,L]^{n_2}$, $n_1, n_2 \in \mathbb{N}$:
$$
x^1_{0} \leq x^2_{0}, \, \, n_2 \leq n_1, \, \, 0 < m_2 \leq m_1 \implies \lambda_{\text{max}}(L,m_1,\varphi,\psi,x^1_0,\delta) \leq \lambda_{\text{max}}(L,m_2,\varphi,\psi,x^2_0,\delta)
$$
\end{proposition}
\begin{proof}
For brevity in notation, we refer to the queue corresponding to $m_1$ and initial condition $x_0^1$ as HTQ-S, and to the other queue as HTQ-F. Let $\lambda$, $\varphi$ and $\psi$ common to HTQ-S and HTQ-F be given. Let $x^1(t) \equiv X(t; x^1_0, m_1)$ and $x^2(t) \equiv X(t; x^2_0, m_2)$, and let $N_s(t)$ and $N_f(t)$ be the queue lengths in the two queues at time $t$. It suffices to show that $N_s(t) \geq N_f(t)$ for a given realization of arrival times, arrival locations, and travel distances. Note that, for a fixed realization, the departure location of every vehicle, including the vehicles present at $t=0$, is the same in both queues.

It is sufficient to show that $x^1(\tau) \leq x^2(\tau)$ and $N_s(\tau)\geq N_f(\tau)$, where $\tau$ is the time of the first arrival to, or departure from, either HTQ-S or HTQ-F; the claim then follows by repeating the argument over successive jumps. Accordingly, we consider two cases, corresponding to whether $\tau$ is a departure or an arrival.

Since $x^1(t) \leq x^2(t)$ for all $t \in [0,\tau)$ from Lemma~\ref{lemma:two-size-compare}, and the departure locations of all the vehicles in HTQ-S and HTQ-F are identical, the first departure from HTQ-S cannot happen before the first departure from HTQ-F. Therefore, $N_s(\tau) \geq N_f(\tau)$. Since $x^1(\tau^-) \leq x^2(\tau^-)$, and $x^2(\tau)$ is a subset of $x^2(\tau^-)$, we also have $x^1(\tau) \leq x^2(\tau)$.

When $\tau$ corresponds to the time of the first arrival, since the arrivals happen at the same location in HTQ-S and HTQ-F, and since $x^1(\tau^-) \leq x^2(\tau^-)$, rearrangement of the indices of the vehicles to include the new arrival at $t=\tau$ implies that $x^1(\tau) \leq x^2(\tau)$. Moreover, since $N_s(\tau^-) \geq N_f(\tau^-)$, and the arrivals happen simultaneously in both HTQ-S and HTQ-F, we have $N_s(\tau) \geq N_f(\tau)$.
\qed
\end{proof}

\begin{remark}
Proposition \ref{prop:queue-length-2m} establishes monotonicity of the throughput only for $L\in(0,1]$. This is consistent with our simulation studies, e.g., as reported in Figure \ref{fig:throughput-simulations}, according to which the throughput is non-monotonic in $m$ for large $L$.
\end{remark}
For the analysis of the linear car-following model, we exploited the fact that the total service rate of the system is constant. For the nonlinear model, i.e., $m \neq 1$, however, the total service rate depends on the number and relative locations of the vehicles. This state-dependent service rate makes the throughput analysis much more complex. In the next section, we derive probabilistic bounds on the throughput in the super-linear case.

\section{The Horizontal Traffic Queue (HTQ) Setup}
\label{sec:problem-formulation}
Consider a periodic road segment of length $L$; without loss of generality, we assume it to be a circle.
Starting from an arbitrary point on the circle, we assign coordinates in $[0,L]$ to the circle in the clock-wise direction (see Figure \\ref{fig:htq}). Vehicles arrive on the circle according to a spatio-temporal process: the arrival process $\\{A(t),t\\geq0\\}$ is assumed to be a Poisson process with rate $\\lambda>0$, and the arrival locations are sampled independently and identically from a spatial distribution $\\varphi$ with mean value $\\bar \\varphi$. Without loss of generality, let the support of $\\varphi$ be $\\text{supp}(\\varphi)=[0,\\ell]$ for some $\\ell\\in[0,L]$. \nUpon arrival, vehicle $i$ travels a distance $d_i$ in the clock-wise direction, after which it departs the system. \nThe travel distances $\\{d_i\\}_{i=1}^\\infty$ are sampled independently and identically from a spatial distribution $\\psi$ with support $[0,R]$ and mean value $\\bar \\psi$.\nLet the sets of $\\varphi$ and $\\psi$ satisfying the above conditions be denoted by $\\Phi$ and $\\Psi$, respectively.\nThe stochastic processes for arrival times, arrival locations, and travel distances are all assumed to be independent of each other.\n\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=5cm]{htq-colorful} \n \\caption{Illustration of the proposed HTQ with three vehicles.}\n \\label{fig:htq}\n\\end{figure}\n\n\\subsection{Dynamics of vehicle coordinates between jumps}\nLet the time epochs corresponding to arrivals and departures of vehicles be denoted as $\\{\\tau_1, \\tau_2, \\ldots\\}$. We shall refer to these events succinctly as \\emph{jumps}. We now formally state the dynamics under the car-following model. \nWe describe the dynamics over an arbitrary time interval of the kind $[\\tau_j,\\tau_{j+1})$. Let $N \\in \\mathbb{N}$ be the fixed number of vehicles in the system during this time interval. \nDefine the inter-vehicle distances associated with vehicle coordinates $x \\in [0,L]^N$ as follows:\n\\begin{equation}\n\\label{eq:inter-vehicle-distance-Rn}\ny_i(x) = \\mod \\left(x_{i+1} - x_i, L\\right), \\quad i \\in \\until{N} \n\\end{equation}\nwhere we implicitly let $x_{N+1} \\equiv x_1$ (see Figure \\ref{fig:htq} for an illustration). Note that the vector $y\/L$ of normalized inter-vehicle distances is a probability vector. \nWhen inside the queue, the speed of every vehicle is proportional to a power $m >0$ of the distance to the vehicle directly in front of it. \nWe assume that this power is the same for every vehicle at all times. Then, starting with $x(\\tau_{j}) \\in [0,L]^N$, the vehicle coordinates over $[\\tau_{j},\\tau_{j+1})$ are given by:\n\\begin{equation}\n\\label{eq:dynamics-moving-coordinates}\nx_i(t) = \\mod \\left( x_i(\\tau_{j}) + \\int_{\\tau_{j}}^t y_i^m(x(z)) \\, dz, L \\right), \\qquad \\forall \\, i \\in \\until{N}, \\quad \\forall \\, t \\in [\\tau_{j},\\tau_{j+1}) \\, .\n\\end{equation}\n\n\\begin{remark}\nIt is easy to see that the clock-wise ordering of the vehicles is invariant under \\eqref{eq:inter-vehicle-distance-Rn}-\\eqref{eq:dynamics-moving-coordinates}. \n\\end{remark}\n\nThe dynamics of the inter-vehicle distances are given by:\n\\begin{equation}\n\\label{eq:inter-vehicle-distance-dynamics}\n\\dot{y}_i = y^m_{i+1} - y_i^m, \\qquad i \\in \\until{N} \n\\end{equation}\nwhere we implicitly let $y_{N+1} \\equiv y_1$.\n\n\\subsection{Change in vehicle coordinates during jumps}\nLet $x(\\tau^-_{j}) =\\left(x_1(\\tau^-_{j}), \\ldots, x_N(\\tau^-_{j})\\right) \\in [0,L]^N$ be the vehicle coordinates just before the jump at $\\tau_{j}$.
If the jump corresponds to the departure of vehicle $k \\in \\until{N}$, then the coordinates of the vehicles $x(\\tau_{j}) =\\left(x_1(\\tau_{j}), \\ldots, x_{N-1}(\\tau_{j})\\right) \\in [0,L]^{N-1}$ after re-ordering due to the jump, for $i \\in \\until{N-1}$, are given by:\n \\begin{equation*}\n\t\tx_i(\\tau_{j})=\\left\\{\\begin{array}{ll}\n\t\t\\displaystyle x_i(\\tau^-_{j}) \t& i\\in \\until{k-1} \\\\[15pt]\n\t\t\\displaystyle x_{i+1}(\\tau^-_{j})\t& i\\in \\{k, \\ldots, N-1\\}\\,.\\end{array}\\right.\n\t\\end{equation*}\n\n Analogously, if the jump corresponds to the arrival of a vehicle at location $z \\in [0,\\ell]$ in between the locations of the $k$-th and $(k+1)$-th vehicles at time $\\tau_{j}^-$, then the coordinates of the vehicles $x(\\tau_{j}) =\\left(x_1(\\tau_{j}), \\ldots, x_{N+1}(\\tau_{j})\\right) \\in [0,L]^{N+1}$ after re-ordering due to the jump, for $i \\in \\until{N+1}$, are given by:\n \\begin{equation*}\n \\begin{split}\n x_{k+1}(\\tau_{j}) & = z \\\\\n\t\tx_i(\\tau_{j}) & =\\left\\{\\begin{array}{ll}\n\t\t\\displaystyle x_i(\\tau^-_{j}) \t& i\\in \\until{k} \\\\[15pt]\n\t\t\\displaystyle x_{i-1}(\\tau^-_{j})\t& i\\in \\{k+2, \\ldots, N+1\\}\\,.\\end{array}\\right.\n\t\t\\end{split}\n\t\\end{equation*}\n\n\\subsection{Problem statement}\nLet $x_0 \\in [0,L]^{n_0}$ be the initial coordinates of the $n_0$ vehicles present at $t=0$. An HTQ is described by the tuple $\\left(L, m, \\lambda, \\varphi, \\psi, x_0\\right)$. \nLet \n$N(t;L, m, \\lambda, \\varphi, \\psi, x_0)$ be the corresponding queue length, i.e., the number of vehicles at time $t$ for an HTQ $\\left(L, m, \\lambda, \\varphi, \\psi, x_0\\right)$. For brevity in notation, at times, we shall not show the dependence of $N$ on parameters which are clear from the context.\n\n\n\nIn this paper, our objective is to provide rigorous characterizations of the dynamics of the proposed HTQ. A key quantity that we study is the throughput, defined below. \n \n\\begin{definition}[Throughput of HTQ]\\label{def:throughput}\nGiven $L > 0$, $m > 0$, $\\varphi \\in \\Phi , \\psi \\in \\Psi$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$ and $\\delta \\in [0,1)$, \nthe throughput of HTQ is defined as:\n\\begin{equation}\n\\label{eq:throughput-def}\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta):= \\sup \\left\\{\\lambda \\geq 0: \\, \\Pr \\left( N(t;L, m, \\lambda, \\varphi, \\psi, x_0) < + \\infty, \\quad \\forall t \\geq 0 \\right) \\geq 1 - \\delta \\right\\}.\n\\end{equation}\n\\end{definition}\n\n\nFigure~\\ref{fig:throughput-simulations} shows the complex dependence of the throughput on key queue parameters such as $m$ and $L$. In particular, it shows that, for every $L$, $\\varphi$, $\\psi$, and $x_0$, the throughput exhibits a phase transition from being unbounded for $m \\in (0,1)$ to being bounded for $m>1$. Moreover, Figure~\\ref{fig:throughput-simulations} also suggests that the throughput is monotonically non-increasing in $m$ for sufficiently small $L$, and monotonically non-decreasing in $m>1$ for sufficiently large $L$. It can also be observed that the initial condition affects the throughput. We now develop analytical results that match the throughput profile in Figure~\\ref{fig:throughput-simulations} as closely as possible. For this purpose, we will make extensive use of novel properties of the \\emph{service rate} and the \\emph{busy period} of the proposed HTQ, which could be of independent interest.
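\n\nAs a minimal illustration of how the queue-length process, and thereby the throughput in Definition~\\ref{def:throughput}, can be estimated numerically, the following Python sketch integrates \\eqref{eq:dynamics-moving-coordinates} with a forward Euler discretization in between jumps. The step size, the horizon, and the choice $\\varphi = \\psi = U_{[0,L]}$ are arbitrary illustrative assumptions, and the sketch is not necessarily the exact code used to generate the figures below.\n\\begin{verbatim}\nimport numpy as np\n\ndef simulate_htq(lam, L, m, T_end, dt=1e-3, seed=0):\n    # Minimal HTQ sketch: Poisson(lam) arrivals, with arrival locations\n    # and travel distances drawn uniformly on [0, L] (illustrative choice).\n    rng = np.random.default_rng(seed)\n    x = np.empty(0)      # vehicle coordinates, kept sorted in [0, L)\n    rem = np.empty(0)    # remaining travel distance of each vehicle\n    next_arr = rng.exponential(1.0 \/ lam)\n    t, qlen = 0.0, []\n    while t < T_end:\n        if x.size > 0:\n            # cyclic gap to the vehicle ahead; full circle if alone\n            gaps = np.mod(np.roll(x, -1) - x, L) if x.size > 1 else np.array([L])\n            step = np.minimum(gaps, gaps ** m * dt)   # speed = gap^m\n            x = np.mod(x + step, L)\n            rem = rem - step\n            x, rem = x[rem > 0], rem[rem > 0]         # departures\n            order = np.argsort(x)                     # re-sort after wrap-around\n            x, rem = x[order], rem[order]\n        while next_arr <= t:                          # arrivals\n            z = rng.uniform(0, L)\n            k = np.searchsorted(x, z)\n            x = np.insert(x, k, z)\n            rem = np.insert(rem, k, rng.uniform(0, L))\n            next_arr += rng.exponential(1.0 \/ lam)\n        qlen.append(x.size)\n        t += dt\n    return np.array(qlen)\n\\end{verbatim}\nThe throughput can then be estimated empirically by bisecting over $\\lambda$ and checking whether the sampled queue lengths remain bounded over the simulation horizon, in the spirit of \\eqref{eq:throughput-def}.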
\n\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-4L-n0}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-5L-n100}}\\\\\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-4L-n0-unif}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=6cm]{phase-transition-5L-n100-unif}}\n\\caption{Throughput for various combinations of $m$, $L$, and $n_0$. The parameters used in individual cases are: (a) $\\varphi=\\delta_{0}$, $\\psi = \\delta_L$, and $n_0=0$ (b) $\\varphi=\\delta_{0}$, $\\psi = \\delta_L$, and $n_0=100$ (c) $\\varphi=U_{[0,L]}$, $\\psi = U_{[0,L]}$, and $n_0=0$ (d) $\\varphi=U_{[0,L]}$, $\\psi = U_{[0,L]}$, and $n_0=100$. In all the cases, the locations of initial $n_0$ vehicles were chosen at equal spacing in $[0,L]$.\n \\label{fig:throughput-simulations}}\n\\end{figure}\n\n\n\n\n\n\n\\section{Service Rate Properties of the Horizontal Traffic Queue}\n\\label{sec:service-rate-monotonicity}\nFor every $y \\in \\mathcal S_N^L$, $N \\in \\mathbb{N}$, $L>0$, we let \n$y_{\\text{min}}:=\\min_{i \\in \\until{N}} y_i$, and $y_{\\mathrm{max}}:=\\max_{i \\in \\until{N}} y_i$ denote the minimum and maximum inter-vehicle distances respectively. It is easy to establish the following monotonicity properties of $y_{\\text{min}}$ and $y_{\\mathrm{max}}$.\n\n\\begin{lemma}[Inter-vehicle Distance Monotonicity Between Jumps]\n\\label{lem:vehicle-distance-monotonicity}\nFor any $y \\in \\mathcal S^L_N$, $N \\in \\mathbb{N}$, $L>0$, under the dynamics in \\eqref{eq:inter-vehicle-distance-dynamics}, for all $m>0$\n$$\n\\frac{d}{dt} y_{\\text{min}} \\geq 0 \\qquad \\& \\qquad \\qquad \\frac{d}{dt} y_{\\mathrm{max}} \\leq 0.\n$$\n\\end{lemma}\n\\begin{proof}\nLet $y_{\\text{min}}(t) = y_j(t)$, i.e., the $j$-th vehicle has the minimum inter-vehicle distance at time $t\\geq 0$.\nTherefore, \\eqref{eq:inter-vehicle-distance-dynamics} implies that $\\dot{y}_{\\min}(t) = \\dot{y}_j(t)=y_{j+1}^m(t) - y_j^m(t)\\geq 0$. One can similarly show that $y_{\\mathrm{max}}$ is non-increasing.\n\\qed\n\\end{proof}\n\nDue to the complex state-dependence of the departure process, the queue length process is difficult to analyze. We propose to study a related scalar quantity, called \\emph{workload} formally defined as follows, where we recall the notations introduced in \nSection~\\ref{sec:problem-formulation}.\n\n\\begin{definition}[Workload]\nThe workload associated with the HTQ at any instant is the sum of the distances remaining to be travelled by all the vehicles present at that instant. That is, if the current coordinates and departure coordinates of all vehicles are $x \\in [0,L]^N$ and $q \\in \\mathbb{R}_+^N$ respectively, with $q \\geq x$, then the workload is given by:\n$$\nw(x,q):= \\sum_{i=1}^N (q_i-x_i).\n$$ \n\\end{definition}\n\n\nSince the maximum distance to be travelled by any vehicle from the time of arrival to the time of departure is upper bounded by $R$, we have the following simple relationship between workload and queue length at any time instant:\n\\begin{equation}\\label{eq:workload-upperbound}\nw(t) \\leq N(t)\\, R \\, , \\qquad \\forall \\, t \\geq 0 \\, .\n\\end{equation}\nAn implication of \\eqref{eq:workload-upperbound} is that unbounded workload implies unbounded queue length in our setting. We shall use this relationship to establish an upper bound on the throughput.\nHowever, a finite workload does not necessarily imply finite queue length. 
In order to see this, consider the state of the queue with $N$ vehicles, all of whom have distance $1\/N$ remaining to be travelled. Therefore, the workload at this instant is $1\/N \\times N= 1$, which is independent of $N$. \n\nWhen the workload is positive, its rate of decrease is equal to \\emph{service rate} in between jumps, defined next.\n\n\\begin{definition}[Service Rate]\nWhen the HTQ is not idle, its instantaneous service rate is equal to the sum of the speeds of the vehicles present in the system at that time instant, i.e., $s(x)=\\sum_{i=1}^N y_i^m(x)$.\n\\end{definition}\n\n\nSince the service rate depends only on the inter-vehicle distances, we shall alternately denote it as $s(y)$. For $m=1$, $s(y)=\\sum_{i=1}^N y_i \\equiv L$, i.e., the service rate is independent of the state of the system, and is constant in between and during jumps. This property does not hold true in the nonlinear ($m \\neq 1$) case. Nevertheless, one can prove interesting properties for the service rate dynamics. We start by deriving bounds on service rate in between jumps. \n\n\\begin{lemma}[Bounds on Service Rates]\n\\label{lem:service-rate-bounds}\nFor any $y \\in \\mathcal S_N^L$, $N \\in \\mathbb{N}$, $L>0$, under the dynamics in \\eqref{eq:inter-vehicle-distance-dynamics},\n\\begin{enumerate}\n\\item $L^m N^{1-m} \\leq s(y) \\leq L^m$ if $m > 1$;\n\\item $L^m \\leq s(y) \\leq L^m N^{1-m}$ if $m \\in (0,1)$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nNormalizing the inter-vehicular distances by $L$, the service rate can be rewritten as \n\\begin{equation}\n\\label{eq:service-rate-normalized}\ns(y)=L^m \\sum_{i=1}^N \\left(\\frac{y_i}{L}\\right)^m. \n\\end{equation}\nTherefore, for $m > 1$, $s(y) \\leq L^m \\sum_{i=1}^N \\frac{y_i}{L}=L^m$. One can similarly show that, for $m \\in (0,1)$, $s(y) \\geq L^m$. In order to prove the remaining bounds, we note that $\\sum_{i=1}^N z_i^m$ is strictly convex in $z=[z_1, \\ldots, z_N]$ for $m > 1$, and that the minimum of $\\sum_{i=1}^N z_i^m$ over $z \\in \\mathcal S_N$ occurs at $z= \\mathbf{1}\/N$, and is equal to $N^{1-m}$. Similarly, for $m \\in (0,1)$, $\\sum_{i=1}^N z_i^m$ is strictly concave in $z$, and its maximum over $z \\in \\mathcal S_N$ occurs at $z= \\mathbf{1}\/N$, and is equal to $N^{1-m}$. Combining these facts with \\eqref{eq:service-rate-normalized}, and noting that $y\/L \\in \\mathcal S_N$, gives the lemma.\n\\qed\n\\end{proof}\n\n\\begin{lemma}[Service Rate Monotonicity Between Jumps]\n\\label{lem:service-rate-dynamics-between-jumps}\nFor any $y \\in \\mathcal S^L_N$, $N \\in \\mathbb{N}$, $L>0$, under the dynamics in \\eqref{eq:inter-vehicle-distance-dynamics}, \n$$\n\\frac{d}{dt} s(y) \\leq 0 \\quad \\text{ if } m > 1 \\qquad \\& \\qquad \\frac{d}{dt} s(y) \\geq 0\\quad \\text{ if } m \\in (0,1) \\, ,\n$$\nwhere the equality holds true if and only if $y=\\frac{L}{N}\\mathbf{1}$.\n\\end{lemma}\n\\begin{proof}\nThe time derivative of service rate is given by:\n\\begin{align}\n\\frac{d}{dt} s(y) & = \\frac{d}{dt} \\sum_{i=1}^N y_i^m = m \\sum_{i=1}^N y_i^{m-1} \\dot{y}_i \\nonumber \\\\ \n\\label{eq:service-rate-derivative}\n& = m \\sum_{i=1}^N y_i^{m-1} \\left( y_{i+1}^m - y_i^m\\right) \n\\end{align}\nwhere the second equality follows by \\eqref{eq:inter-vehicle-distance-dynamics}. 
The result then follows by application of Lemma~\\ref{lem:appendix-general-summation}, and by noting that $g(z)=z^m$ is a strictly increasing function for all $m>0$, and $h(z)=z^{m-1}$ is strictly decreasing if $m \\in (0,1)$, and strictly increasing if $m>1$.\n\\qed\n\\end{proof}\n\n\nThe following lemma quantifies the change in service rate due to departure of a vehicle. \n\n\\begin{lemma}[Change in Service Rate at Departures]\n\\label{lem:service-rate-jumps}\nConsider the departure of a vehicle that changes inter-vehicle distances from $y \\in \\mathcal S_N^L$ to $y^- \\in \\mathcal S_{N-1}^L$, for some $N \\in \\mathbb{N} \\setminus \\{1\\}$, $L > 0$. If $y_1 \\geq 0$ and $y_2 \\geq 0$ denote the inter-vehicle distances behind and in front of the departing vehicle respectively, at the moment of departure, then the change in service rate due to the departure satisfies the following bounds:\n\\begin{enumerate}\n\\item if $m > 1$, then $0 \\leq s(y^-) - s(y) \\leq (y_1+y_2)^m \\left(1-2^{1-m} \\right)$;\n\\item if $m \\in (0,1)$, then $0 \\leq s(y) - s(y^-) \\leq \\min\\{y_1^m,y_2^m\\}$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nIf $m>1$, then $\\left(\\frac{y_1}{y_1+y_2}\\right)^m + \\left(\\frac{y_2}{y_1+y_2}\\right)^m \\leq \\frac{y_1}{y_1+y_2} + \\frac{y_2}{y_1+y_2} = 1$, i.e., $s(y^-)-s(y) = (y_1 + y_2)^m - y_1^m - y_2^m \\geq 0$. One can similarly show that $s(y)-s(y^-) \\geq 0$ if $m \\in (0,1)$.\n\nIn order to show the upper bound on $s(y^-)-s(y)$ for $m>1$, we note that the minimum value of $z^m + (1-z)^m$ over $z \\in [0,1]$ for $m > 1$ is $2^{1-m}$, and it occurs at $z=1\/2$. Therefore, \n\\begin{align*}\ns(y^-)-s(y) = (y_1 + y_2)^m - y_1^m - y_2^m & = (y_1 + y_2)^m \\left(1 - \\left(\\frac{y_1}{y_1+y_2}\\right)^m - \\left(\\frac{y_2}{y_1+y_2}\\right)^m \\right) \\\\\n& \\leq (y_1+y_2)^m \\left(1-2^{1-m} \\right)\n\\end{align*}\n\nThe upper bound on $s(y)-s(y^-)$ for $m \\in (0,1)$ can be proven as follows. Since $y_1^m \\leq (y_1 + y_2)^m$, $s(y) - s(y^-) = y_1^m + y_2 ^m - (y_1 + y_2)^m \\leq y_2^m$. \nSimilarly, $s(y)-s(y^-) \\leq y_1^m$. Combining, we get $s(y)-s(y^-) \\leq \\min \\{y_1^m, y_2^m\\}$. Note that, in proving this, we nowhere used the fact that $m \\in (0,1)$. However, this bound is useful only for $m \\in (0,1)$. \n\\qed\n\\end{proof}\n\n\\begin{remark}[Change in Service Rate at Arrivals]\n\\label{rem:service-rate-jump-arrival}\nThe bounds derived in Lemma~\\ref{lem:service-rate-jumps} can be trivially used to prove the following bounds for change in service rate at arrivals:\n\\begin{enumerate}\n\\item if $m > 1$, then $0 \\leq s(y) - s(y^+) \\leq (y_1+y_2)^m \\left(1-2^{1-m} \\right)$;\n\\item if $m \\in (0,1)$, then $0 \\leq s(y^+) - s(y) \\leq \\min\\{y_1^m,y_2^m\\}$,\n\\end{enumerate}\nwhere $y_1$ and $y_2$ are the inter-vehicle distances behind and in front of the arriving vehicle respectively, at the moment of arrival.\n\\end{remark}\n\n\nThe following lemma will facilitate generalization of Lemma~\\ref{lem:service-rate-dynamics-between-jumps}. 
In preparation for the lemma, let $f(y,m):=m \\sum_{i=1}^N y_i^{m-1} \\left( y_{i+1}^m - y_i^m\\right)$ be the time derivative of service rate, as given in \\eqref{eq:service-rate-derivative}.\n\n\\begin{lemma}\n\\label{lem:service-rate-dot-lower-bound}\nFor all $y \\in \\text{int}(\\mathcal S_N^L)$, $N \\in \\mathbb{N} \\setminus \\{1\\}$, $L > 0$:\n\\begin{equation}\n\\label{eq:service-rate-der-m}\n\\frac{\\partial}{\\partial m} f(y,m)|_{m=1} = - L D\\left(\\frac{y}{L} || P^- \\frac{y}{L}\\right) \\leq 0\n\\end{equation}\nAdditionally, if $L < e^{-2}$, then\n\\begin{equation}\n\\label{eq:service-rate-twice-der-m}\n\\frac{\\partial^2}{\\partial m^2} f(y,m)|_{m=1} \\geq 0\n\\end{equation}\nMoreover, equality holds true in \\eqref{eq:service-rate-der-m} and \\eqref{eq:service-rate-twice-der-m} if and only if $y = \\frac{L}{N}\\mathbf{1}$.\n\\end{lemma}\n\\begin{proof}\nTaking the partial derivative of $f(y,m)$ with respect to $m$, we get that \n\\begin{align*}\n\\frac{\\partial}{\\partial m} f(y,m) & = \\frac{f(y,m)}{m} + m \\sum_{i=1}^N \\left( y_i^{m-1} y_{i+1}^m \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i^{2m-1} \\log y_{i} \\right)\n\\end{align*}\nIn particular, for $m=1$: \n\\begin{align*}\n\\frac{\\partial}{\\partial m} f(y,m) |_{m=1} & = f(y,1) + \\sum_{i=1}^N \\left(y_{i+1} \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i \\log y_i \\right) \\\\\n& = L \\sum_{i=1}^N \\frac{y_i}{L} \\log \\left(\\frac{y_{i-1}\/L}{y_i\/L} \\right) \\\\\n& = - L D\\left(\\frac{y}{L} || P^- \\frac{y}{L}\\right)\n\\end{align*}\nwhere, for the second equality, we used the trivial fact that $f(y,1)=0$. \nTaking second partial derivative of $f(y,m)$ w.r.t. $m$ gives:\n\\begin{align*}\n\\frac{\\partial^2}{\\partial m^2} f(y,m) = & \\sum_{i=1}^N y_i^{m-1}\\log y_i \\left(y_{i+1}^m - y_i^m \\right) + \\sum_{i=1}^N y_i^{m-1} \\left(y_{i+1}^m \\log y_{i+1} - y_i^m \\log y_i \\right)\\\\\n& + \\sum_{i=1}^N \\left( y_i^{m-1} y_{i+1}^m \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i^{2m-1} \\log y_{i} \\right) \\\\\n& + m \\sum_{i=1}^N \\left( y_i^{m-1} y_{i+1}^m \\left(\\log y_i + \\log y_{i+1} \\right)^2 - 4 y_i^{2m-1} \\log^2 y_i \\right)\n\\end{align*}\nIn particular, for $m=1$:\n\\begin{align}\n\\frac{\\partial^2}{\\partial m^2} f(y,m) |_{m=1} = & \\sum_{i=1}^N \\left(y_{i+1} - y_i \\right) \\log y_i + \\sum_{i=1}^N \\left(y_{i+1} \\log y_{i+1} - y_i \\log y_i \\right) \\nonumber \\\\\n& + \\sum_{i=1}^N \\left(y_{i+1} \\left(\\log y_i + \\log y_{i+1} \\right) - 2 y_i \\log y_i \\right) \\nonumber \\\\\n& + \\sum_{i=1}^N \\left( y_{i+1} (\\log y_i + \\log y_{i+1})^2 - 4 y_i \\log^2 y_i\\right) \\nonumber \\\\\n\\label{eq:second-derivative-m-final-eq}\n= & \\sum_{i=1}^N \\log^2 y_i \\left(y_{i+1} - y_i \\right) + 2 \\sum_{i=1}^N \\log y_i \\left(y_{i+1} \\log y_{i+1} + y_{i+1} - y_i \\log y_i - y_i \\right) \\\\\n\\nonumber\n\\geq & \\, 0\n\\end{align}\nIt is easy to check that, $\\log z$, $\\log^2 z$ and $z + z \\log z$ are strictly increasing, strictly decreasing and strictly decreasing functions, respectively, for $z \\in (0,e^{-2})$. 
Therefore, Lemma~\\ref{lem:appendix-general-summation} implies that each of the two terms in \\eqref{eq:second-derivative-m-final-eq} is non-negative, and hence the lemma.\n\\qed\n\\end{proof}\n\nLemma~\\ref{lem:service-rate-dot-lower-bound} implies that, for sufficiently small $L$, $f(y,m)$ is locally convex in $m$.
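\n\nThe behaviour of $f(y,m)$ around $m=1$ can also be examined numerically. The following Python sketch, in which $N=10$ and $L=1$ are arbitrary illustrative choices, evaluates $f$ on a grid of $m$ for a randomly sampled $y$:\n\\begin{verbatim}\nimport numpy as np\n\n# Evaluate f(y, m) = m * sum_i y_i^(m-1) * (y_{i+1}^m - y_i^m)\n# for a random y in the interior of the simplex (L = 1 here).\nrng = np.random.default_rng(0)\ny = rng.dirichlet(np.ones(10))    # y \/ L with L = 1, N = 10\ny_next = np.roll(y, -1)           # y_{i+1}, cyclically\nfor m in np.linspace(0.5, 1.5, 11):\n    f = m * np.sum(y ** (m - 1) * (y_next ** m - y ** m))\n    print(f"m = {m:.2f},  f(y, m) = {f:+.4e}")\n\\end{verbatim}\nConsistently with Lemma~\\ref{lem:service-rate-dynamics-between-jumps}, the printed values are non-negative for $m<1$, non-positive for $m>1$, and vanish (up to floating-point error) at $m=1$.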
One can use this local convexity property, along with the exact expression for $\\frac{\\partial}{\\partial m}f(y,m)$ at $m=1$ in Lemma~\\ref{lem:service-rate-dot-lower-bound} and the fact that $f(y,1)=0$ for all $y$, to develop a linear approximation in $m$ of $f(y,m)$ around $m=1$. The following lemma derives this approximation, as also suggested by Figure~\\ref{fig:service-rate-dot-vs-m}.\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=5cm]{service-rate-dot-vs-m} \n \\caption{$f(y,m)$ vs. $m$ for a typical $y \\in \\mathcal S_{10}$.}\n \\label{fig:service-rate-dot-vs-m}\n\\end{figure}\n\n\\begin{lemma}\n\\label{lem:service-rate-dot-deltay-bound}\nFor a given $y \\in \\text{int}(\\mathcal S_N^L)$, $N \\in \\mathbb{N}$, $L \\in (0,e^{-2})$, there exists $\\underbar{m}(y) \\in [0,1)$ such that \n$$\n\\frac{d}{dt} s(y) \\geq 2 \\frac{(1-m)}{L} \\left(y_{\\mathrm{max}} - y_{\\text{min}} \\right)^2 \\, , \\qquad \\forall \\, m \\in [\\underbar{m}(y),1]\n$$\n\\end{lemma}\n\\begin{proof}\nFor a given $y \\in \\text{int}(\\mathcal S_N^L)$, the local convexity of $f(y,m):=\\frac{d}{dt} s(y)$ in $m$, and the expression of $\\frac{\\partial}{\\partial m}f(y,m)$ at $m=1$ in Lemma~\\ref{lem:service-rate-dot-lower-bound}, imply that $\\frac{d}{dt} s(y) \\geq (1-m) L D\\left(\\frac{y}{L} || P^-\\frac{y}{L} \\right)$ for all $m<1$ sufficiently close to $1$. Pinsker's inequality implies $D\\left(\\frac{y}{L} || P^-\\frac{y}{L} \\right) \\geq \\frac{\\|y-P^-y\\|_1^2}{2 L^2}$. This, combined with the fact that $\\|y-P^-y\\|_1 \\geq 2(y_{\\mathrm{max}}-y_{\\text{min}})$ for all $y \\in \\text{int}(\\mathcal S_N^L)$, gives the lemma.\n\\qed\n\\end{proof}\n\n\n\\section{Simulations}\n\\label{sec:simulations}\nIn this section, we present simulation results on throughput, and compare them with the theoretical results from the previous sections. \n\n\\begin{figure}\n\\centering\n\\subfigure[]{\\includegraphics[width=5cm]{lambda-m-lowerUpper-w-phase-transition-separated-v2}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=5cm]{lambda-m-lowerUpper-uniform-w-phase-transition-separated-v2}}\n\\caption{Comparison between theoretical estimates of throughput from Theorem~\\ref{thm:superlinear-bound-empty}, and the range of numerical estimates from simulations, for zero initial condition. The parameters used for this case are: $L = 1$, $\\delta = 0.1$, and (a) $\\varphi = \\delta_0$, $\\psi = \\delta_L$, (b) $\\varphi = U_{[0,L]}$, $\\psi=U_{[0,L]}$.\n \\label{fig:lambda-m-empty}}\n\\end{figure}\n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{lambda-m-busy-period-L100-v2} \n\\caption{Comparison between theoretical estimates of throughput from Theorem \\ref{thm:superlinear-bound-empty}, and the range of numerical estimates from simulations, for zero initial condition. The parameters used for this case are: $L = 100$, $\\delta = 0.1$, $T=10$, and $\\varphi = \\delta_0$, $\\psi = \\delta_L$.}\n\\label{fig:superlinear-large-L}\n\\end{center}\n\\end{figure} \n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{phase-transition-non-empty-separated-v2} \n\\caption{Comparison between theoretical estimates of throughput from Theorem~\\ref{thm:superlinear-bound-non-empty}, and the range of numerical estimates from simulations.
The parameters used for this case are: $L = 1$, $\\delta = 0.1$, $\\varphi = \\delta_0$, $\\psi = \\delta_L$, $w_0=1$ and $n_0=4$, $x_1(0) =0.6, x_2(0) =0.7, x_3(0) =0.8, x_4(0) =0.9$.}\n\\label{fig:phase-transition-non-empty}\n\\end{center}\n\\end{figure}\n\nFigures~\\ref{fig:lambda-m-empty}, \\ref{fig:superlinear-large-L} and \\ref{fig:phase-transition-non-empty} show a comparison between the lower bounds on throughput over finite time horizons, as given by Theorems~\\ref{thm:superlinear-bound-empty} and \\ref{thm:superlinear-bound-non-empty}, and the corresponding numerical estimates from simulations. \n\n Figures~\\ref{fig:lambda-m-empty} and \\ref{fig:superlinear-large-L} are for zero initial condition, and Figure~\\ref{fig:phase-transition-non-empty} is for a non-zero initial condition.\n\nFigures~\\ref{fig:sublinear-lowerbound} and \\ref{fig:superlinear-batch} show the lower bounds on throughput guaranteed under the batch release control policy, as per Theorems~\\ref{thm:main-sub-linear} and \\ref{thm:main-batch-super-linear}, respectively, for a couple of representative values of the maximum permissible perturbation $\\eta$. In particular, Figure~\\ref{fig:sublinear-lowerbound} demonstrates that the lower bound obtained from Theorem \\ref{thm:main-sub-linear} increases drastically as $m \\to 0^+$. Both figures also confirm that the throughput indeed increases with increasing maximum permissible perturbation $\\eta$.\n\nIt is instructive to compare Figures~\\ref{fig:lambda-m-empty}(b) and \\ref{fig:superlinear-batch}(a), both of which depict throughput estimates for the super-linear case, obtained however from different methods, namely the busy period distribution and the batch release control policy. Accordingly, one should bear in mind that the two bounds have different qualifiers attached to them: the bound in Figure~\\ref{fig:lambda-m-empty}(b) is valid probabilistically only over a finite time horizon, whereas the bound in Figure~\\ref{fig:superlinear-batch}(a) is valid with probability one, although under a perturbation to the arrival process. \n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{sublinear-lowerbound} \n\\caption{Theoretical estimates of throughput from Theorem~\\ref{thm:main-sub-linear} for different values of $\\eta$. The parameters used for this case are: $L = 1$, $\\varphi = U_{[0,L]}$, $\\psi = U_{[0,L]}$, and $w_0=0$. Note that the vertical axis is in logarithmic scale.}\n\\label{fig:sublinear-lowerbound}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\\includegraphics[width=5cm]{sublinear-small-L}}\n\\centering\n\\hspace{0.3in}\n\\subfigure[]{\\includegraphics[width=5cm]{superlinear-batch-new-v2}}\n\\caption{Theoretical estimates of throughput from Theorem \\ref{thm:main-batch-super-linear}, and numerical estimates from simulations, for different values of $L$. The parameters used for this case are: $\\varphi=U_{[0,L]}$, $\\psi=U_{[0,L]}$, and (a) $L = 1$, (b) $L = 100$. \\label{fig:superlinear-batch}}\n\\end{figure}\n\n \\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=5cm]{upper-bound-queue-length-linear-large-lambdas} \n\\caption{Comparison between the empirical expectation of the queue length and the upper bound suggested by Remark~\\ref{remark:waiting-time}. We let the simulations run up to time $t=80,000$. The parameters used for this case are: $L=1$, $m=1$, $\\varphi=\\delta_0$, $\\psi=\\delta_L$.
For these values, we have $\\lambda_{\\text{max}}=1$.}\n\\label{fig:queue-length-linear}\n\\end{center}\n\\end{figure} \n\nFinally, Figure~\\ref{fig:queue-length-linear} shows a good agreement between the queue-length bound suggested by Remark~\\ref{remark:waiting-time} and the corresponding numerical estimates in the linear case. \n\n\\subsection{Throughput Bounds under Batch Release Control Policy}\nIn this section, we consider a \\emph{time-perturbed} version of the arrival process. \nFor a given realization of arrival times, $\\{t_1,t_2,\\cdots\\}$, consider a perturbation map $t_i^{\\prime} \\equiv t_i^{\\prime}(t_1,\\ldots,t_i)$ satisfying $t_i^{\\prime}\\geq t_i$ for all $i$, which prescribes the perturbed arrival times. The magnitude of perturbation is defined as $\\eta := E\\left(t_i^{\\prime}-t_i\\right)$, where the expectation is with respect to the Poisson process with rate $\\lambda$ that generates the arrival times.\n\nWe prove boundedness of the queue length under a specific perturbation map. This perturbation map is best understood in terms of a control policy that governs the release of arrived vehicles into the HTQ. In order to clarify the implementation of the control policy, we decompose the proposed HTQ into two queues in series, denoted as HTQ1 and HTQ2, both of which have the same geometric characteristics as the HTQ, i.e., a circular road segment of length $L$ (see Figure \\ref{fig:htq1-htq2} for an illustration). The original arrival process for the HTQ, i.e., the spatio-temporal Poisson process with rate $\\lambda$ and spatial distribution $\\varphi$, is now the arrival process for HTQ1. Vehicles remain stationary at their arrival locations in HTQ1, until released by the control policy into HTQ2. Once released into HTQ2, vehicles travel according to \\eqref{eq:dynamics-moving-coordinates} until they depart after traveling a distance that is sampled from $\\psi$, as in the case of the HTQ. The times of release of the vehicles into HTQ2 correspond to their perturbed arrival times $t_1^{\\prime}, t_2^{\\prime}, \\ldots$. The average waiting time in HTQ1 under the given release control policy is then the magnitude of perturbation in the arrival times. \n\n\n\n\n\\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=14cm]{htq1-htq2-decomposed} \n\\caption{Decomposition of HTQ into HTQ1 and HTQ2 in series.}\n\\label{fig:htq1-htq2}\n\\end{center}\n\\end{figure}\n\nWe consider the following class of release control policies, for which we recall from the problem setup in Section~\\ref{sec:problem-formulation} that $\\text{supp}(\\varphi)=[0,\\ell]$ for some $\\ell \\in [0,L]$. \n\n\n\\begin{definition}[Batch Release Control Policy $\\pi^b_\\triangle$]\n\\label{def:batch-release-control-policy}\nDivide $[0,\\ell]$ into sub-intervals, each of length $\\triangle$, enumerated as $1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil$. Let $T_1$ be the first time instant when HTQ2 is empty. At time $T_1$, release one vehicle each, if present, from all odd-numbered sub-intervals in $\\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$ simultaneously into HTQ2. Let $T_2$ be the next time instant when HTQ2 is empty. At time $T_2$, release one vehicle each, if present, from all even-numbered sub-intervals in $\\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$ simultaneously into HTQ2.
Repeat this process of alternating between releasing one vehicle each from the odd- and even-numbered sub-intervals every time that HTQ2 is empty.\n\\end{definition}\n\n\n\n\\begin{remark}\n\\begin{enumerate}\n\\item Under $\\pi^b_{\\triangle}$, when vehicles are released into HTQ2, the inter-vehicle distances in front of and behind each vehicle being released are at least equal to $\\triangle$. \n\\item The order in which vehicles are released into HTQ2 from HTQ1 under $\\pi^b_{\\triangle}$ may not be the same as the order of arrivals into HTQ1.\n\\end{enumerate}\n\\end{remark}\n\nIn the next two sub-sections, we analyze the performance of the batch release control policy for the sub-linear and super-linear cases. \n\\subsubsection{The Sub-linear Case}\nIn this section, we derive a lower bound on throughput when $m\\in(0,1)$. We first derive a trivial lower bound in Proposition \\ref{prop:throughput-bound-sublinear}, implied by Lemma \\ref{lem:service-rate-jumps} and Remark \\ref{rem:service-rate-jump-arrival}. Next, we improve this lower bound in Theorem \\ref{thm:main-sub-linear} under a batch release control policy $\\pi_{\\triangle}^b$. \n\\begin{proposition}\n\\label{prop:throughput-bound-sublinear}\nFor any $L > 0$, $m\\in(0,1)$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$:\n$$\n \\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0) \\geq L^m\/\\bar{\\psi} \n$$\n\\end{proposition}\n\\begin{proof}\nRemark \\ref{rem:service-rate-jump-arrival} implies that, for $m \\in (0,1)$, the service rate does not decrease due to arrivals. Therefore, a simple lower bound on the service rate for any state is the service rate when there is only one vehicle in the system, i.e., $L^m$. Hence, \nthe workload process is upper bounded as\n$w(t) = w_0+r(t)-\\int_0^ts(z)\\mathrm{d} z \\leq w_0+r(t) -L^m(t-\\mathcal{I}(t)), \\quad \\forall t\\geq 0$, \nwhere $r(t)$ and $\\mathcal{I}(t)$ denote the renewal reward and the idle time processes, respectively, as introduced in the proof of Proposition \\ref{prop:unstable}. Similar to the proof of Proposition \\ref{prop:unstable}, it can be shown that, if $\\lambda < L^m\/\\bar{\\psi}$, then the workload, and hence the queue length, goes to zero in finite time with probability one. \n\\qed\n\\end{proof}\n\nNext, we establish better throughput guarantees than Proposition~\\ref{prop:throughput-bound-sublinear}, under the batch release control policy $\\pi_{\\triangle}^b$.\n\nThe next result characterizes the time intervals between releases of successive batches into HTQ2 under $\\pi_\\triangle^b$.\n\n\\begin{lemma}\n\\label{eq:successive-release-time-difference}\nFor given $\\lambda>0$, $\\triangle > 0$, $\\varphi \\in \\Phi$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m\\in(0,1)$, $x_0\\in[0,L]^{n_0}$, $L>0$, $n_0\\in \\mathbb{N}$, let $T_1$, $T_2$, $\\ldots$ denote the random variables corresponding to the times of successive batch releases into HTQ2 under $\\pi_{\\triangle}^b$. Then, $T_1 \\leq \\frac{n_0R}{L^m}$, $T_{i+1}-T_i \\leq R\/\\triangle^m$ for all $i \\geq 1$, and $y_{\\text{min}}(t) \\geq \\triangle$ for all $t \\geq T_1$.\n\\end{lemma}\n\\begin{proof}\nSince the maximum distance to be traveled by every vehicle is upper bounded by $R$, the initial workload satisfies $w_0\\leq n_0R$.
Since the minimum service rate for $m \\in (0,1)$ is $L^m$ (see the proof of Proposition~\\ref{prop:throughput-bound-sublinear}), with no new arrivals, it takes at most $w_0\/L^m\\leq n_0 R\/L^m$ amount of time for the system to become empty. This establishes the bound on $T_1$. \n\nLemma~\\ref{lem:vehicle-distance-monotonicity} implies that, under $\\pi^b_{\\triangle}$, the minimum inter-vehicle distance in HTQ2 is at least $\\triangle$ after $T_1$, i.e., $y_{\\text{min}}(t) \\geq \\triangle$ for all $t \\geq T_1$, and hence the minimum speed of every vehicle in HTQ2 is at least $\\triangle^m$ after $T_1$. Since the maximum distance to be traveled by every vehicle is $R$, this implies that the time between the release of a vehicle into HTQ2 and its departure is upper bounded by $R\/\\triangle^m$, which in turn is also an upper bound on the time required by all the vehicles released in one batch to depart from the system. \n\\qed\n\\end{proof}\nLet $N_1(t)$ and $N_2(t)$ denote the queue lengths in HTQ1 and HTQ2, respectively, at time $t$. Lemma~\\ref{eq:successive-release-time-difference} implies that, for every $\\triangle>0$, $N_2(t)$ is upper bounded for all $t \\geq T_1$. The next result identifies conditions under which $N_1(t)$ is upper bounded. \n\nFor $F>0$, let $\\Phi_F:=\\setdef{\\varphi \\in \\Phi}{\\sup_{x \\in [0,\\ell]} \\varphi(x) \\leq F}$. For the subsequent analysis, we now derive an upper bound on the \\emph{load factor}, i.e., the ratio of the arrival and departure rates, associated with a typical sub-queue of HTQ1 among $\\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$. It is easy to see that, for every $\\varphi \\in \\Phi_F$, $F>0$, the arrival process into every sub-queue is Poisson with arrival rate upper bounded by $\\lambda F \\triangle$. Lemma~\\ref{eq:successive-release-time-difference} implies that the departure rate is at least $\\triangle^m\/(2R)$. Therefore, the load factor for every sub-queue is upper bounded as \n\\begin{equation}\n\\label{eq:load-factor-upper-bound}\n\\rho \\leq \\frac{2 R \\lambda F \\triangle}{\\triangle^m}=2 R \\lambda F \\triangle^{1-m}\n\\end{equation}\nIn particular, if \n\\begin{equation}\\label{eq:triangle-star}\n\\triangle < \\triangle^*(\\lambda):=\\left(2 R \\lambda F \\right)^{-\\frac{1}{1-m}},\n\\end{equation} \nthen $\\rho<1$. It should be noted that, for $n_0<+\\infty$, Lemma \\ref{eq:successive-release-time-difference} gives $T_1<+\\infty$. No vehicles are released from the sub-queues during $[0,T_1]$, so their departure rate is zero over this initial interval; however, since $T_1$ is finite, this does not affect the computation of the load factor. \n\n\n\\begin{proposition}\n\\label{prop:pi1-policy}\nFor any $\\lambda>0$, $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m\\in(0,1)$, $x_0\\in[0,L]^{n_0}$, $L>0$, $n_0\\in \\mathbb{N}$, for sufficiently small $\\triangle$, $N_1(t)$ is bounded for all $t \\geq 0$ under $\\pi_{\\triangle}^b$, almost surely.\n\\end{proposition}\n\\begin{proof}\nBy contradiction, assume that $N_1(t)$ grows unbounded. This implies that there exists at least one sub-queue, say $ j \\in \\{1, 2, \\ldots, \\lceil \\frac{\\ell}{\\triangle} \\rceil\\}$, such that its queue length, say $N_{1,j}(t)$, grows unbounded. In particular, this implies that there exists $t_0 \\geq T_1$ such that $N_{1,j}(t) \\geq 2$ for all $t \\geq t_0$.
Therefore, for all $t \\geq t_0$, the load factor of the $j$-th sub-queue is upper bounded as in \\eqref{eq:load-factor-upper-bound}; since $m \\in (0,1)$, this upper bound vanishes as $\\triangle \\to 0^+$, and hence becomes strictly less than one for sufficiently small $\\triangle$. A simple application of the law of large numbers then implies that, almost surely, $N_{1,j}(t)=0$ at some finite time, leading to a contradiction.\n\\qed \n\\end{proof}\n\nThe following result gives an estimate of the mean waiting time in a typical sub-queue in HTQ1 under the $\\pi_{\\triangle}^b$ policy.\n\n\\begin{proposition}\n\\label{eq:waiting-time}\nFor $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$, $m \\in (0,1)$, there exists a sufficiently small $\\triangle$ such that the average waiting time in HTQ1 under $\\pi_{\\triangle}^b$ is upper bounded as: \n\\begin{equation}\n\\label{eq:W-upper-bound}\nW \\leq R (2 R \\lambda F )^{\\frac{m}{1-m}} \\left(\\frac{2}{m^{\\frac{m}{1-m}}} + \\frac{m}{m^{\\frac{m}{1-m}}-m^{\\frac{1}{1-m}}} \\right).\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nIt is easy to see that the desired waiting time corresponds to the system time of an M\/D\/1 queue whose arrival and departure rates are those leading to the load factor bound in \\eqref{eq:load-factor-upper-bound}. Note that, by Lemma \\ref{eq:successive-release-time-difference}, for finite $n_0$, the value of $T_1$ is finite and does not affect the average waiting time. Therefore, using standard expressions for the M\/D\/1 queue~\\cite{Kleinrock:75}, we get that the waiting time in HTQ1 is upper bounded as follows for $\\rho <1$:\n\\begin{align}\nW \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m} \\frac{\\rho}{1-\\rho} & \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m} \\frac{1}{1-\\rho} \\nonumber \\\\ \n\\label{eq:waiting-time-upper-bound}\n& \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m -2 R \\lambda F \\triangle} \n\\end{align} \nIt is easy to check that the minimum of the second term in \\eqref{eq:waiting-time-upper-bound} over $\\big(0,\\triangle^*(\\lambda)\\big)$ occurs at $\\triangle = \\left(\\frac{m}{2 R \\lambda F} \\right)^{\\frac{1}{1-m}}$. Substituting this choice of $\\triangle$, together with the bound $\\rho \\leq 2 R \\lambda F \\triangle^{1-m}$, in the right hand side of the first inequality in \\eqref{eq:waiting-time-upper-bound} gives the result.\n\\qed\n\\end{proof}\n\n\\begin{remark}\n\\label{rem:W-limit}\n\\eqref{eq:W-upper-bound} implies that, for every $R>0$, $F>0$, $\\lambda>0$, we have $W \\to 2R$ as $m \\to 0^+$. \n\\end{remark}\n\nWe extend the notation introduced in \\eqref{eq:throughput-def} to $\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta,\\eta)$ to also show the dependence on the maximum allowable perturbation $\\eta$. This is not to be confused with the notation for $\\lambda_{\\text{max}}$ used in Theorems~\\ref{thm:superlinear-bound-empty} and \\ref{thm:superlinear-bound-non-empty}, where we use the notion of throughput over finite time horizons; we choose the same notation to maintain brevity. \n\nIn order to state the next result, for given $R>0$, $F>0$, $m \\in (0,1)$ and $\\eta \\geq 0$, let $\\tilde{W}(m,F,R,\\eta)$ \nbe the value of $\\lambda$ for which the right hand side of \\eqref{eq:W-upper-bound} is equal to $\\eta$, if such a $\\lambda$ exists and is at least $L^m\/\\bar{\\psi}$, and let it be equal to $L^m\/\\bar{\\psi}$, otherwise. The lower bound of $L^m\/\\bar{\\psi}$ in the definition of $\\tilde{W}$ is inspired by Proposition~\\ref{prop:throughput-bound-sublinear}.
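\n\nSince the right hand side of \\eqref{eq:W-upper-bound} is monotonically increasing in $\\lambda$, the quantity $\\tilde{W}$ can be evaluated numerically by a standard root-finding step. The following Python sketch is purely illustrative; the function names and the bracketing upper limit \\texttt{lam\\_hi} are our own hypothetical choices.\n\\begin{verbatim}\nfrom scipy.optimize import brentq\n\ndef W_bound(lam, m, F, R):\n    # Right-hand side of the waiting-time bound in eq:W-upper-bound.\n    a = m ** (m \/ (1.0 - m))\n    return (R * (2 * R * lam * F) ** (m \/ (1.0 - m))\n            * (2.0 \/ a + m \/ (a - m ** (1.0 \/ (1.0 - m)))))\n\ndef W_tilde(m, F, R, eta, L, psi_bar, lam_hi=1e9):\n    # tilde{W}: the lambda at which W_bound equals eta, if such a\n    # lambda exists and is at least L^m \/ psi_bar; else L^m \/ psi_bar.\n    lo = L ** m \/ psi_bar\n    if W_bound(lo, m, F, R) >= eta:\n        return lo\n    return brentq(lambda lam: W_bound(lam, m, F, R) - eta, lo, lam_hi)\n\\end{verbatim}\nFor $\\eta > 2R$ and $m$ close to $0$, the computed value grows rapidly, consistently with the limiting behaviour in Remark~\\ref{rem:W-limit} and with Theorem~\\ref{thm:main-sub-linear}.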
The next result formally states $\\tilde{W}$ as a lower bound on $\\lambda_{\\text{max}}$. \n\n\\begin{theorem}\\label{thm:main-sub-linear}\nFor any $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m \\in (0,1)$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, $L>0$, and maximum permissible perturbation $\\eta \\geq 0$, \n$$\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0,\\eta) \\geq \\tilde{W}(m,F,R,\\eta)\n$$\nIn particular, if $\\eta > 2R$, then $\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0,\\eta) \\to + \\infty$ as $m \\to 0^+$.\n\\end{theorem}\n\\begin{proof}\nConsider any $\\lambda \\leq \\tilde{W}(m,F,R,\\eta)$, and $\\triangle \\leq \\left(\\frac{m}{2 R \\lambda F} \\right)^{\\frac{1}{1-m}}$. Under the policy $\\pi_\\triangle^b$, Lemma \\ref{eq:successive-release-time-difference} and Proposition \\ref{prop:pi1-policy} imply that, for finite $n_0$, $N_2(t)$ and $N_1(t)$ remain bounded for all times, with probability one. Also, for $\\lambda = \\tilde{W}(m,F,R,\\eta)$, by Proposition \\ref{eq:waiting-time} and the definition of $\\tilde{W}(m,F,R,\\eta)$, the introduced perturbation remains upper bounded by $\\eta$. Since the right hand side of \\eqref{eq:W-upper-bound} is monotonically increasing in $\\lambda$, the perturbation remains bounded by $\\eta$ for all $\\lambda \\leq \\tilde{W}(m,F,R,\\eta)$. \nIn particular, by Remark \\ref{rem:W-limit}, we have $W\\to 2R$ as $m\\to 0^+$. In other words, as $m\\to 0^+$, the magnitude of the introduced perturbation becomes independent of $\\lambda$. \nTherefore, when $\\eta >2R$, the throughput can grow unbounded as $m\\to 0^+$, while the perturbation and the queue length remain bounded.\n\\qed\n\\end{proof}\n\n\n\n\n\\begin{remark}\nWe emphasize that the only feature required in a batch release control policy is that, at the moment of release, the front and rear distances for the vehicles being released should be greater than $\\triangle$. The requirement of the policy in Definition~\\ref{def:batch-release-control-policy} for the road to be empty at the moment of release makes the control policy conservative, and hence affects the maximum permissible perturbation. In fact, for special spatial distributions, e.g., when $\\varphi$ is a Dirac delta function and the support of $\\psi$ is $[0,L-\\triangle]$, one can relax the conservatism to guarantee unbounded throughput for arbitrarily small permissible perturbation. \n\\end{remark}\n\n\\subsubsection{The Super-linear Case}\n\nIn this section, we study the throughput for the super-linear case under the perturbed arrival process with a maximum permissible perturbation of $\\eta$. For this purpose, we again consider the batch release control policy $\\pi_{\\triangle}^b$ from Definition \\ref{def:batch-release-control-policy}. The time intervals between releases of successive batches under $\\pi_{\\triangle}^b$ are characterized as in Lemma \\ref{eq:successive-release-time-difference}. However, in the super-linear case, by Lemma \\ref{lem:service-rate-bounds}, the initial minimum service rate is $L^m n_0^{1-m}$, and therefore the time of first release is bounded as $T_1 \\leq n_0 R\/(L^m n_0^{1-m}) = n_0^m R\/L^m$. Moreover, the load factor of every sub-queue is again upper bounded as in \\eqref{eq:load-factor-upper-bound}; since $m>1$, this upper bound is now decreasing in $\\triangle$, and hence $\\rho<1$ requires\n\\begin{equation}\\label{eq:triangle-star-superlinear}\n\\triangle > \\triangle^*(\\lambda),\n\\end{equation}\nwith $\\triangle^*(\\lambda)$ as in \\eqref{eq:triangle-star}.\nIt should be noted that, since the batch release control policy iteratively releases from odd and even sub-queues, we need at least two sub-queues to be able to implement this policy.
As a result, $\\triangle$ cannot be arbitrarily large, and we require $\\triangle\\leq\\ell\/2$. This constraint gives the following bound on the admissible arrival rate under this policy:\n\\begin{equation}\\label{eq:lambda-star-superlinear}\n\\lambda<\\lambda^*:=(\\ell\/2)^{m-1}\/(2RF).\n\\end{equation}\n The following result shows that, for the above range of arrival rates, the queue length in HTQ1, $N_1(t)$, remains bounded at all times. \n \\begin{proposition}\n\\label{prop:pi1-policy-super-linear}\nFor any $\\lambda<\\lambda^*$, $\\triangle\\in\\big(\\triangle^*(\\lambda),\\ell\/2\\big]$, $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m>1$, $x_0\\in[0,L]^{n_0}$, $L>0$, $n_0\\in \\mathbb{N}$, $N_1(t)$ is bounded for all $t \\geq 0$ under $\\pi_{\\triangle}^b$, almost surely.\n\\end{proposition}\n\n\\begin{proof}\nThe proof is similar to the proof of Proposition \\ref{prop:pi1-policy}. In particular, by \\eqref{eq:triangle-star-superlinear} and \\eqref{eq:lambda-star-superlinear}, one can show that the load factor bound \\eqref{eq:load-factor-upper-bound} remains strictly smaller than one. This implies that no sub-queue in HTQ1 can grow unbounded, and hence $N_1(t)$ remains bounded for all times, with probability one. \n\\qed\n\\end{proof}\n\n\n\n\\begin{proposition}\n\\label{eq:waiting-time-super-linear}\nFor any $\\lambda<\\lambda^*$, $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$, $m> 1$, the average waiting time in HTQ1 under $\\pi_{\\triangle}^b$ for $\\triangle = \\ell\/2$ is upper bounded as: \n\\begin{equation}\n\\label{eq:waiting-time-super-linear-eq}\nW \\leq \\frac{2R}{(\\ell\/2)^m}+\\frac{R}{(\\ell\/2)^m}\\frac{2R\\lambda F(\\ell\/2)^{1-m}}{1-2R\\lambda F(\\ell\/2)^{1-m}}\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nThe proof is very similar to the proof of Proposition \\ref{eq:waiting-time}. In particular, we get the following bounds:\n\\begin{align*}\nW \\leq \\frac{2R}{\\triangle^m} + \\frac{R}{\\triangle^m} \\frac{\\rho}{1-\\rho} \\leq \\frac{2R}{\\triangle^m}+\\frac{R}{\\triangle^m}\\frac{2R\\lambda F\\triangle^{1-m}}{1-2R\\lambda F\\triangle^{1-m}}\n\\end{align*} \nThe right hand side of the above inequality is a decreasing function of $\\triangle$; therefore, $\\triangle = \\ell\/2$ minimizes it over the admissible range, and gives \\eqref{eq:waiting-time-super-linear-eq}. \n\\qed\n\\end{proof}\n\nLet $\\hat{W}(m,F,R,\\eta)$ \nbe the value of $\\lambda$ for which the right hand side of \\eqref{eq:waiting-time-super-linear-eq} is equal to $\\eta$, if such a $\\lambda\\leq \\lambda^*$ exists, and let it be equal to $\\lambda^*$ otherwise. Note that, since the right hand side of \\eqref{eq:waiting-time-super-linear-eq} is monotonically increasing in $\\lambda$, for all $\\lambda\\leq \\hat{W}(m,F,R,\\eta)$ the introduced perturbation remains upper bounded by $\\eta$. \n\\begin{theorem}\\label{thm:main-batch-super-linear}\nFor any $\\varphi \\in \\Phi_F$, $F>0$, $\\psi \\in \\Psi$ with $\\text{supp}(\\psi)=[0,R]$, $R>0$, $m >1$, $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, $L>0$, and maximum permissible perturbation $\\eta \\geq 0$, \n$$\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta=0,\\eta) \\geq \\hat{W}(m,F,R,\\eta).\n$$\n\\end{theorem}\n\\begin{proof}\nFor any $\\lambda <\\hat{W}(m,F,R,\\eta)$, under $\\pi_\\triangle^b$, Lemma \\ref{eq:successive-release-time-difference} and Proposition \\ref{prop:pi1-policy-super-linear} imply that, for finite $n_0$, $N_2(t)$ and $N_1(t)$ remain bounded for all times, with probability one.
Also, by Proposition \\ref{eq:waiting-time-super-linear} and the definition of $\\hat{W}(m,F,R,\\eta)$, the introduced perturbation remains upper bounded by $\\eta$. \n\\qed\n\\end{proof}\n\n\\subsection{Throughput Bounds for the Super-linear Case from Busy Period Calculations}\\label{subsec:superlinear}\nIn this section, we derive a lower bound on the throughput for the super-linear case. The next result computes a bound on the probability that the queue length of the HTQ satisfies a given upper bound over a given time interval, using the probability distribution functions from \\eqref{eq:G-def}.\nIn Propositions~\\ref{prop:nbp-lower-bound} and \\ref{prop:nbn-lower-bound}, for the sake of clarity, we add explicit dependence on $\\lambda$ to this probability distribution function. \n\n\n\n\\begin{proposition}\\label{prop:nbp-lower-bound}\nFor any $m>1$, $M\\in \\mathbb{N}$, $L>0$, $\\lambda >0$, $\\varphi\\in \\Phi$, $\\psi \\in \\Psi$, and zero initial condition $x_0=0$, the probability that the queue length is upper bounded by $M$ over a given time interval $[0,T]$ satisfies the following bound: \n\\begin{equation}\n\\label{eq:nbp-lower-bound}\n \\Pr \\big(N(t) \\leq M \\quad \\forall t \\in [0,T] \\big)\\geq \\sup_{r \\in \\mathbb{N}} \\, \\sum_{n=1}^M \\int_T^{\\infty} G_{r,L^m M^{1-m}}(t,n,\\psi,\\lambda) \\, \\mathrm{d} t\n \\end{equation}\n \\end{proposition}\n \\begin{proof}\nLet us denote the current queueing system as HTQ-f. We shall compare queue lengths between HTQ-f and a slower queueing system HTQ-s, which starts from the same (zero) initial condition, and experiences the same realizations of arrival times, locations and travel distances. Let every incoming vehicle into HTQ-s and HTQ-f be tagged with a unique identifier. At time $t$, let $\\mathcal J(t)$ be the set of identifiers of vehicles present both in HTQ-s and HTQ-f, $\\mathcal J_{s\/f}(t)$ be the set of identifiers of vehicles present only in HTQ-s, and $\\mathcal J_{f\/s}(t)$ be the set of identifiers of vehicles present only in HTQ-f.\nLet $v^f_i$ denote the speed of the vehicle in HTQ-f with identifier $i \\in \\mathcal J(t) \\cup \\mathcal J_{f\/s}(t)$, as determined by the car-following behavior underlying \\eqref{eq:dynamics-moving-coordinates}. The vehicle speeds in HTQ-s are not governed by the car-following behavior, but are rather related to the speeds of vehicles in HTQ-f as: \n \\begin{equation}\n \\label{eq:htq-s-speed}\n\t\tv^s_i(t) =\\left\\{\\begin{array}{ll}\n\t\t\\displaystyle v^f_i(t) \\frac{p}{v^f(t)} \\frac{|\\mathcal J(t)|}{|\\mathcal J(t)| + |\\mathcal J_{s\/f}(t)|} \t&i \\in \\mathcal J(t)\\\\[15pt]\n\t\t\\displaystyle \\frac{p}{|\\mathcal J(t)| + |\\mathcal J_{s\/f}(t)|}& i \\in \\mathcal J_{s\/f}(t)\\,\\end{array}\\right.\n\t\\end{equation}\nwhere $v^f(t):=\\sum_{i \\in \\mathcal J(t)}v_i^f(t)$ is the sum of the speeds of the vehicles in HTQ-f that are also present in HTQ-s at time $t$, and $p$ is a parameter to be specified. Indeed, note that $\\sum_{i \\in \\mathcal J(t) \\cup \\mathcal J_{s\/f}(t)} v_i^s(t) \\equiv p$, i.e., $p$ is the (constant) service rate of HTQ-s. \n\nConsider a realization where the number of arrivals into HTQ-s with $p=L^m M^{1-m}$ during any busy period overlapping with $[0,T]$ does not exceed $M$. We refer to such a realization as the \\emph{event} in the rest of the proof.
Since the maximum queue length during a busy period is trivially upper bounded by the number of arrivals during that busy period, conditioned on the event, we have \n\\begin{equation}\n\\label{eq:slow-queue-queue-length-bound}\nN_s(t) \\leq M, \\qquad t \\in [0,T]\n\\end{equation}\n\nConsider the union of departure epochs from HTQ-s and HTQ-f in $[0,T]$: $0=\\tau_0 \\leq \\tau_1 \\leq \\ldots$. If $\\mathcal J_{f\/s}(\\tau_k)=\\emptyset$ for some $k \\geq 0$, then $\\mathcal J_{f\/s}(t)=\\emptyset$ for all $t \\in (\\tau_k,\\tau_{k+1})$. Hence, the service rate for HTQ-f over the interval $(\\tau_k,\\tau_{k+1})$ is $v^f(t)$, which, conditioned on the event, is lower bounded by $L^m M^{1-m}=p$ by Lemma \\ref{lem:service-rate-bounds}.\nTherefore, $p\/v^f(t) \\leq 1$ over $(\\tau_k,\\tau_{k+1})$, and hence \\eqref{eq:htq-s-speed} implies that all the vehicles with identifiers in $\\mathcal J(t)$ travel slower in HTQ-s than in HTQ-f. In particular, this implies that $\\mathcal J_{f\/s}(\\tau_{k+1})=\\emptyset$. Combining this with the fact that $\\mathcal J_{f\/s}(\\tau_0)=\\emptyset$ (both queues start from the same initial condition), we get that, conditioned on the event, $\\mathcal J_{f\/s}(t) \\equiv \\emptyset$, and hence $N(t) \\leq N_s(t)$ over $[0,T]$. Combining this with \\eqref{eq:slow-queue-queue-length-bound} gives that, conditioned on the event, $N(t) \\leq M$ over $[0,T]$. \n\n\nWe now compute the probability of the occurrence of the event using the busy period calculations from Section~\\ref{sec:busy-period}. The event can be categorized by the maximum number of busy periods, say $r \\in \\mathbb{N}$, that overlap with $[0,T]$, i.e., the $r$-th busy period ends after time $T$ (and each of these busy periods has at most $M$ arrivals). Since these busy periods are interlaced with idle periods, the probability of the $r$-th busy period ending after time $T$ is lower bounded by the probability that the sum of the durations of $r$ busy periods is at least $T$. \\eqref{eq:G-def} implies that the latter quantity is equal to $\\sum_{n=1}^M \\int_T^{\\infty} G_{r,L^m M^{1-m}}(t,n,\\psi,\\lambda) \\, \\mathrm{d} t$. The proposition then follows by noting that this is true for any $r \\in \\mathbb{N}$.\n\\qed\n \\end{proof}\n \n \\begin{remark}\nIn the proof of Proposition~\\ref{prop:nbp-lower-bound}, when deriving the probabilistic upper bound on the queue length over a given time horizon $[0,T]$, we neglected the idle periods in $[0,T]$. This introduces conservatism in the bound on the right hand side of \\eqref{eq:nbp-lower-bound}. Since the idle period durations are distributed independently and identically according to an exponential random variable (the arrival process being Poisson), one could incorporate them into \\eqref{eq:nbp-lower-bound} by taking the convolution of $G$ with the idle period distributions. We choose not to do so here in order to keep the presentation of the bounds in \\eqref{eq:nbp-lower-bound} concise. The resulting conservatism is also present in Proposition~\\ref{prop:nbn-lower-bound}, and carries over to Theorems~\\ref{thm:superlinear-bound-empty} and \\ref{thm:superlinear-bound-non-empty}, as well as to the corresponding simulations reported in Figures~\\ref{fig:lambda-m-empty}, \\ref{fig:superlinear-large-L} and \\ref{fig:phase-transition-non-empty}. \n\\end{remark}\n \n The next result generalizes Proposition~\\ref{prop:nbp-lower-bound} to non-zero initial conditions.
Note that the non-zero initial condition only affects the first busy period; all subsequent busy periods will necessarily start with zero initial condition.\n\n \\begin{proposition}\\label{prop:nbn-lower-bound}\n For any $m>1$, $M\\in\\mathbb{N}$, $L>0$, $\\lambda >0$, $\\varphi\\in \\Phi$, $\\psi \\in \\Psi$, initial condition $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, with associated workload $w_0>0$, the probability that the queue length is upper bounded by $M+n_0$ over a given time interval $[0,T]$ satisfies the following: \n\n $$\\Pr \\big(N(t) \\leq M + n_0 \\quad \\forall t \\in [0,T] \\big)\\geq \\sup_{r \\in \\mathbb{N}} \\, \\sum_{n=1}^M \\int_T^{\\infty} G_{L^m (M+n_0)^{1-m}}(\\delta_{w_0}) * G_{r-1,L^m M^{1-m}}(\\psi) (t,n,\\lambda) \\, \\mathrm{d} t$$\n \\end{proposition}\n \\begin{proof}\nThe proof is similar to the proof of Proposition \\ref{prop:nbp-lower-bound}; however, since we allow at most $M$ new arrivals in each busy period, the \\emph{event} of interest is that the queue length in HTQ-s does not exceed $M+n_0$ in the first busy period and $M$ in the subsequent ones, while operating with constant service rates $L^m (M+n_0)^{1-m}$ and $L^m M^{1-m}$, respectively. \n\\qed\n \\end{proof}\n\n We shall use Propositions~\\ref{prop:nbp-lower-bound} and \\ref{prop:nbn-lower-bound} to establish a probabilistic lower bound on a finite time horizon version of the throughput defined in Definition \\ref{def:throughput}: for $T>0$, let \n \\begin{equation*}\n\\label{eq:throughput-def-finite-horizon}\n\\lambda_{\\text{max}}(L,m,\\varphi,\\psi,x_0,\\delta,T):= \\sup \\left\\{\\lambda \\geq 0: \\, \\Pr \\left( N(t;L, m, \\lambda, \\varphi, \\psi, x_0) < + \\infty, \\quad \\forall t \\in [0,T] \\right) \\geq 1 - \\delta \\right\\}.\n\\end{equation*}\n\n\\begin{theorem}\\label{thm:superlinear-bound-empty}\n For $L>0$, $m>1$, $\\varphi\\in\\Phi$, $\\psi\\in\\Psi$, $\\delta\\in(0,1)$, $T>0$, zero initial condition $x_0=0$, \n \\begin{equation}\n \\label{eq:sublinear-throughput-zero-initial-condition}\n \\lambda_{\\max}(L, m, \\varphi, \\psi, x_0,\\delta,T)\\geq \\sup_{M \\in \\mathbb{N}}\\;\\sup \\Big\\{\\lambda \\geq 0 \\; \\Big | \\sup_{r \\in \\mathbb{N}} \\, \\sum_{n=1}^M \\int_T^{\\infty} G_{r,L^m M^{1-m}}(t,n,\\psi,\\lambda) \\, \\mathrm{d} t \\geq 1-\\delta \\Big\\} \n \\end{equation}\n \\end{theorem}\n \\begin{proof}\nFollows from Proposition \\ref{prop:nbp-lower-bound}.\n\\qed\n \\end{proof} \n\n \\begin{theorem}\\label{thm:superlinear-bound-non-empty}\n For $L>0$, $m>1$, $\\varphi\\in\\Phi$, $\\psi\\in\\Psi$, $\\delta\\in(0,1)$, $T>0$, initial condition $x_0 \\in [0,L]^{n_0}$, $n_0 \\in \\mathbb{N}$, with associated workload $w_0>0$, \n \\begin{multline*}\n \\lambda_{\\max}(L, m, \\varphi, \\psi, x_0,\\delta,T) \\\\ \\geq \\sup_{M \\in \\mathbb{N}}\\;\\sup \\Big \\{\\lambda>0 \\; \\Big | \\sup_{r \\in \\mathbb{N}} \\, \\sum_{n=1}^M \\int_T^{\\infty} G_{L^m (M+n_0)^{1-m}}(\\delta_{w_0}) * G_{r-1,L^m M^{1-m}}(\\psi) (t,n,\\lambda) \\, \\mathrm{d} t \\geq 1-\\delta \\Big\\} \n \\end{multline*}\n \\end{theorem}\n \\begin{proof}\n Follows from Proposition \\ref{prop:nbn-lower-bound}.\n \\qed\n \\end{proof}\n \n \\begin{remark}\n In Theorems~\\ref{thm:superlinear-bound-empty} and \\ref{thm:superlinear-bound-non-empty}, we implicitly assume the rather standard convention that the supremum over an empty set is zero.\n \\end{remark}\n\n\\section{Throughput
Analysis}\n\\label{sec:throughput-analysis}\n\n\\input{linear}\n\n\\input{nonlinear}\n\n\\input{super-linear}\n\n\\input{sublinear-new}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction: the double side of leptogenesis}\n\nA successful model of baryogenesis cannot be\nrealised within the Standard Model (SM) and, therefore,\nthe observed matter-antimatter asymmetry of the Universe\ncan be regarded as evidence for new physics beyond the SM.\n\nThe discovery of neutrino masses and mixing in neutrino oscillation\nexperiments in 1998 \\cite{superk} has for the first time shown directly,\nin particle physics experiments, that the SM is indeed incomplete, since it \nstrictly predicts, in the limit of infinite cutoff, \nthat neutrinos are massless and, therefore, cannot oscillate.\n\nThis discovery has greatly raised the interest in\nleptogenesis \\cite{Fukugita:1986hr,reviews}, a model of baryogenesis that is a cosmological\nconsequence of the most popular way to extend the SM in order to\nexplain why neutrinos are massive but at the same time much lighter than all the\nother fermions: the seesaw mechanism~\\cite{seesaw}.\nAs a matter of fact, leptogenesis realises a highly non-trivial link\nbetween two completely independent experimental observations:\nthe absence of primordial antimatter in the observable Universe and the\nobservation that neutrinos mix and (therefore) have masses.\nIn fact, leptogenesis has a naturally built-in double-sided nature.\nOn one side, it describes a very early stage in the history of the Universe\ncharacterised by temperatures $T_{\\rm lep} \\gtrsim 100\\,{\\rm GeV}$,\nmuch higher than those probed by Big Bang Nucleosynthesis, $T_{\\rm BBN} \\sim (0.1$--$1)\\,{\\rm MeV}$;\non the other side, it complements low-energy neutrino experiments, providing\na completely independent phenomenological tool\nto test models of new physics embedding the seesaw mechanism.\n\nIn this article we review the main features and results of leptogenesis.\nLet us give a brief outline.\nIn Section 2 we present the status of low-energy neutrino experiments measuring\nneutrino masses and mixing parameters, and we introduce the seesaw mechanism, which provides an elegant framework to explain them.\nIn Section 3 we discuss the vanilla leptogenesis scenario.\nIn Section 4 we show the importance of accounting for flavour effects for a correct calculation of the final asymmetry. In Section 5 we discuss the density matrix formalism, which properly takes into account the decoherence effects that are\ncrucial to describe the transition from a one-flavoured regime to a fully flavoured regime.\nIn Section 6 we relax the assumption of a hierarchical right-handed neutrino mass spectrum, and discuss how the asymmetry can be calculated in the degenerate limit.\nIn Section 7 we discuss different ways to improve the kinetic description\nbeyond the density matrix formalism.\nIn Section 8 we discuss other effects (thermal corrections, spectator processes, scatterings)\nthat have been considered and that can give in some cases important corrections.\nIn Section 9 we show how leptogenesis provides important guidance for testing models of new physics.\nFinally, in Section 10 we conclude by outlining the prospects to test leptogenesis in future years.\n\n\\section{Neutrino masses and mixing}\n\nNeutrino oscillation experiments have established two fundamental\nproperties of neutrinos. 
The first is that neutrinos mix.\nThis means that the neutrino weak eigenfields\n$\\nu_{\\alpha}$ ($\\alpha=e,\\mu,\\tau$)\ndo not coincide with the neutrino mass eigenfields $\\nu_i$ ($i=1,2,3$) but\nare obtained by applying to them a unitary transformation described\nby the $(3\\times 3)$ leptonic mixing matrix $U$,\n\\begin{equation}\n\\nu_{\\alpha} = \\sum_{i} \\, U_{\\alpha i}\\,\\nu_i \\, .\n\\end{equation}\nThe leptonic mixing matrix is usually parameterised in terms of six physical parameters:\nthree mixing angles, $\\theta_{12}$, $\\theta_{13}$ and $\\theta_{23}$, and three phases,\nnamely two Majorana phases, $\\rho$ and $\\sigma$, and one Dirac phase, $\\delta$,\n\\begin{equation}\\label{Umatrix} \\fl\nU=\\left( \\begin{array}{ccc}\nc_{12}\\,c_{13} & s_{12}\\,c_{13} & s_{13}\\,e^{-{\\rm i}\\,\\delta} \\\\\n-s_{12}\\,c_{23}-c_{12}\\,s_{23}\\,s_{13}\\,e^{{\\rm i}\\,\\delta} &\nc_{12}\\,c_{23}-s_{12}\\,s_{23}\\,s_{13}\\,e^{{\\rm i}\\,\\delta} & s_{23}\\,c_{13} \\\\\ns_{12}\\,s_{23}-c_{12}\\,c_{23}\\,s_{13}\\,e^{{\\rm i}\\,\\delta}\n& -c_{12}\\,s_{23}-s_{12}\\,c_{23}\\,s_{13}\\,e^{{\\rm i}\\,\\delta} &\nc_{23}\\,c_{13}\n\\end{array}\\right)\n\\, {\\rm diag}\\left(e^{i\\,\\rho}, 1, e^{i\\,\\sigma}\n\\right)\\, ,\n\\end{equation}\nwhere $s_{ij} \\equiv \\sin\\theta_{ij}$ and $c_{ij}\\equiv\\cos\\theta_{ij}$.\nA global analysis \\cite{Fogli:2011qn} of all existing neutrino data, prior to the measurements\nof a non-vanishing $\\theta_{13}$ at short-baseline reactors, gives\n$\\theta_{12}=34^{\\circ}\\pm 1^{\\circ}$ for the solar mixing angle,\n$\\theta_{23}= 40.4{^{\\circ}}^{+4.6^{\\circ}}_{-1.8^{\\circ}}$\nfor the atmospheric mixing angle, and $\\theta_{13}=9.0^{\\circ}\\pm 1.3^{\\circ}$\nfor the reactor mixing angle, where the latter is mainly dominated by\nthe evidence for a non-vanishing $\\theta_{13}$ found by the T2K experiment \\cite{T2K}\nthat confirmed previous hints \\cite{hints}.\nRecently, the Daya Bay and RENO short-baseline reactor neutrino experiments\nrespectively found $\\theta_{13}= 8.8^{\\circ}\\pm 0.8^{\\circ}\\pm 0.3^{\\circ}$ \\cite{dayabay},\nand $\\theta_{13}=9.8^{\\circ}\\pm 0.6^{\\circ}\\pm 0.85^{\\circ}$ \\cite{reno}, confirming, at more than $5\\sigma$,\nthe previous results.\n\n\nThe second important property established by neutrino oscillation experiments\nis that neutrinos are massive. More specifically, labelling the three neutrino masses\nsuch that $m_1 \\leq m_2 \\leq m_3$, neutrino oscillation experiments\nmeasure two mass-squared differences that we can denote by\n$\\Delta m^2_{\\rm atm}$ and $\\Delta m^2_{\\rm sol}$, since historically the former\nwas first measured in atmospheric neutrino experiments and the latter in\nsolar neutrino experiments.\n Two options are currently allowed by previous\nexperiments. 
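Given the lightest mass $m_1$ and these two measured splittings, the full spectrum follows from elementary arithmetic. The short Python sketch below tabulates it for the two orderings discussed next, using the central values of the atmospheric and solar mass scales quoted below; the function name and the choice of inputs are ours, for illustration only.\n\\begin{verbatim}\nimport numpy as np\n\n# central values of the solar and atmospheric mass scales (eV), as\n# quoted in the text; m_atm**2 = dm2_atm + dm2_sol by definition\nm_sol, m_atm = 0.0087, 0.049\ndm2_sol = m_sol**2\ndm2_atm = m_atm**2 - dm2_sol\n\ndef masses(m1, ordering='NO'):\n    # light neutrino masses (m1 <= m2 <= m3, in eV) from the lightest mass\n    if ordering == 'NO':                    # normal ordering\n        m2 = np.sqrt(m1**2 + dm2_sol)\n        m3 = np.sqrt(m2**2 + dm2_atm)\n    else:                                   # inverted ordering\n        m2 = np.sqrt(m1**2 + dm2_atm)\n        m3 = np.sqrt(m2**2 + dm2_sol)\n    return m1, m2, m3\n\n# hierarchical limits: (0, m_sol, m_atm) for NO and\n# (0, sqrt(m_atm**2 - m_sol**2), m_atm) for IO\nprint(masses(0.0, 'NO'), masses(0.0, 'IO'))\n\\end{verbatim}\nWe now describe the two orderings in turn.\n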
A first option is `normal ordering' (NO) and in this case\n\\begin{equation}\nm_3^2 - m^2_2 = \\Delta m^2_{\\rm atm} \\hspace{5mm} \\mbox{\\rm and}\n\\hspace{5mm}\nm_2^2 - m^2_1 = \\Delta m^2_{\\rm sol} \\, ,\n\\end{equation}\nwhile a second option is represented by `inverted ordering' (IO) and in this case\n\\begin{equation}\nm_3^2 - m^2_2 = \\Delta m^2_{\\rm sol} \\hspace{5mm} \\mbox{\\rm and} \\hspace{5mm}\nm_2^2 - m^2_1 = \\Delta m^2_{\\rm atm} \\, .\n\\end{equation}\nIt is convenient to introduce the atmospheric neutrino mass scale\n$m_{\\rm atm} \\equiv \\sqrt{\\Delta m^2_{\\rm atm}+\\Delta m^2_{\\rm sol}}\n= (0.049\\pm 0.001)\\,{\\rm eV}$ and the solar neutrino mass scale\n$m_{\\rm sol} \\equiv \\sqrt{\\Delta m^2_{\\rm sol}}=(0.0087 \\pm 0.0001)\\,{\\rm eV}$\n\\cite{Fogli:2011qn}.\n\nThe measurements of $m_{\\rm atm}$ and $m_{\\rm sol}$\nare not sufficient to fix all three neutrino masses.\nIf we express them in terms of the lightest neutrino\nmass $m_1$ we can see from Fig.~1 that, while\n $m_2 \\geq m_{\\rm sol}$ and $m_3 \\geq m_{\\rm atm}$, the lightest\nneutrino mass can be arbitrarily small, implying that the lightest neutrino\ncould even be massless.\n\nThe lower limits for $m_2$ and $m_3$\nare saturated when $m_1 \\ll m_{\\rm sol}$. In this case one has hierarchical\nneutrino models, either normal, in which case $m_2 \\simeq m_{\\rm sol}$\nand $m_3\\simeq m_{\\rm atm}$, or inverted, in which case\n$m_2\\simeq \\sqrt {m^2_{\\rm atm}-m^2_{\\rm sol}} \\simeq m_3\\simeq m_{\\rm atm}$.\nOn the other hand, for $m_1\\gg m_{\\rm atm}$ one obtains the limit of quasi-degenerate neutrinos,\nwhen all three masses can be arbitrarily close to each other.\n\nHowever, the lightest neutrino mass is upper bounded by absolute neutrino mass scale\nexperiments. Tritium beta decay experiments \\cite{Kraus:2004zw}\nplace an upper bound on the effective electron neutrino mass\n$m_{\\nu_e} \\lesssim 2\\,{\\rm eV}$ ($95 \\%$ C.L.) that translates\ninto the same upper bound on $m_1$. This is derived from model-independent\nkinematic considerations that apply independently of whether neutrinos have a Dirac or\nMajorana nature.\n\nNeutrinoless double beta decay ($0\\nu\\beta\\beta$) experiments\nplace a more stringent upper bound on the effective $0\\nu\\beta\\beta$ Majorana neutrino mass,\n$m_{ee}\\lesssim (0.34$--$0.78)\\,{\\rm eV}$\\,($95\\%$ C.L.) as obtained by the CUORICINO\nexperiment~\\cite{bb0n}\n\\footnote[1]{At $90\\%$~C.L. the CUORICINO result is $m_{ee}\\lesssim (0.27$--$0.57)\\,{\\rm eV}$. For comparison,\nthe bound from the Heidelberg-Moscow experiment is $m_{ee}\\lesssim (0.21$--$0.53)\\,{\\rm eV}$ ($90\\%$~C.L.) \n\\cite{HDM} while the EXO-200 experiment has recently found the upper \nbound $m_{ee}\\lesssim (0.14$--$0.38)\\,{\\rm eV}$ ($90\\%$~C.L.) \\cite{EXO}.}.\nIt translates into the following upper bound on $m_1$~\\cite{rodejohann}:\n\\begin{equation} \\fl\nm_1 \\leq m_{ee}\/(\\cos 2\\theta_{12}\\,\\cos^2 \\theta_{13}-\\sin^2\\theta_{13}) \n\\lesssim 3.45 \\, m_{ee} \\lesssim (1.2 \\textrm{--}2.7)\\,{\\rm eV} \n\\hspace{3mm} (95\\%\\, {\\rm C.L.}) \\, .\n\\end{equation}\nHere, the wide range is due to\ntheoretical uncertainties in the calculation\nof the involved nuclear matrix elements. 
However, this upper bound applies\nonly if neutrinos are of Majorana nature, which is the relevant case for us, since\nthe seesaw mechanism predicts Majorana neutrinos.\n\\begin{figure}\n\\begin{center}\n\\begin{minipage}{150mm}\n\\begin{center}\n\\centerline{\\psfig{file=numasses.eps,height=8cm,width=11cm}}\n\\caption{Neutrino masses $m_i$ versus the lightest neutrino mass $m_1$.\nThe three upper bounds (at $95\\%$ C.L.) discussed in the text\nfrom absolute neutrino mass scale experiments are also indicated.}\n\\end{center}\n\\end{minipage}\n\\end{center}\n\\end{figure}\n\nFrom cosmological observations,\nwithin the $\\Lambda$CDM model, one obtains a very stringent upper bound on the\nsum of the neutrino masses \\cite{WMAP7}\n\\begin{equation}\\label{WMAP7}\n\\sum_i\\, m_i \\lesssim 0.58\\,{\\rm eV} \\hspace{10mm} (95\\% \\, {\\rm C.L.})\\, .\n\\end{equation}\nThis translates into the\nmost stringent upper bound that we currently have on the\nlightest neutrino mass $m_1 \\lesssim 0.19\\,{\\rm eV}$ ($95\\%$ C.L.), an upper bound that\nalmost excludes quasi-degenerate neutrino models\n\\footnote[2]{For a more general discussion of neutrino mass bounds, \nin particular on theoretical assumptions and uncertainties, see for example \n\\cite{rodejohann,giunti,pastor}.}.\n\nA minimal extension of the SM, able to explain not only why neutrinos\nare massive but also why they are much lighter than all the\nother massive fermions,\nis represented by the seesaw mechanism \\cite{seesaw}.\nIn the minimal type I version, one adds\nright-handed neutrinos $N_{iR}$ to the SM lagrangian with Yukawa couplings $h$\nand a Majorana mass term that violates lepton number\n\\begin{equation}\\label{lagrangian}\n\\mathcal{L} = \\mathcal{L}_{\\rm SM}\n+{\\rm i} \\, \\overline{N_{iR}}\\gamma_{\\mu}\\partial^{\\mu} N_{iR} -\n\\overline{\\ell_{\\alpha L}}\\,h_{\\alpha i}\\, N_{iR}\\, \\tilde{\\Phi} -\n {1\\over 2}\\,M_i \\, \\overline{N_{iR}^c} \\, N_{iR} +h.c.\n\\end{equation}\nFor definiteness we will consider the case of three RH neutrinos ($i=1,2,3$).\nThis is the most attractive case, corresponding to having one RH neutrino for each generation,\nas for example nicely predicted by $SO(10)$ grand-unified models.\nNotice, however, that all current\ndata from low-energy neutrino experiments are also consistent\nwith a more minimal model with only two RH neutrinos that will be discussed in Section 9.\n\nAfter spontaneous symmetry breaking, a Dirac mass term $m_D=v\\,h$\nis generated by the Higgs vev $v$. In the seesaw limit, $M\\gg m_D$, the spectrum\nof neutrino masses splits into a light set, with masses given by the eigenvalues of the\nseesaw formula for the light neutrino mass matrix, and into a heavy set, with masses $M_i$\nessentially given by the eigenvalues of the Majorana mass term. Imposing successful\nleptogenesis, $\\eta_{B} \\geq \\eta_{B}^{\\rm CMB}$,\none obtains the allowed region in the plane $(m_1,M_1)$ shown in the left panel of Fig.~2.\n\\begin{figure}\n\\begin{center}\n\\psfig{file=massbounds.eps,width=0.4\\textwidth} \\hspace{5mm}\n\\psfig{file=BoundFlavoredK1Relaxw32.eps,width=0.4\\textwidth}\n\\end{center}\n\\caption{Left: Neutrino mass bounds in the vanilla scenario. 
Right:\n Relaxation of the lower bound on $M_1$ thanks\n to additional unbounded flavoured $C\\!P$ violating terms~\\cite{bounds}.}\n\\end{figure}\nOne can notice the existence of an upper bound on the light\nneutrino masses, $m_1\\lesssim 0.12\\,{\\rm eV}$ \\cite{Buchmuller:2002jk,window},\nincompatible with quasi-degenerate\nneutrino mass models, and a lower bound\n$M_1\\gtrsim 3\\times 10^9\\,{\\rm GeV}$ \\cite{Davidson:2002qv,Buchmuller:2002jk,window}\nimplying a lower bound on the\nreheat temperature $T_{\\rm reh}\\gtrsim 10^9\\,{\\rm GeV}$ \\cite{pedestrians}.\n\nAn important feature of vanilla leptogenesis is that the final asymmetry does not\ndirectly depend on the parameters of the leptonic mixing matrix $U$.\nThis implies that one cannot establish a direct model-independent connection\nbetween the final asymmetry and low-energy neutrino observables.\nIn particular the discovery\nof $C\\!P$ violation in neutrino mixing would not be a smoking gun for leptogenesis\nand, vice versa, a non-discovery would not rule out leptogenesis.\nHowever, within more restricted scenarios, for example imposing some conditions on\nthe neutrino Dirac mass matrix, links can emerge. In Section 9 we will discuss\nin detail the interesting case of $SO(10)$-inspired models.\n\nIn the past years different directions beyond the\nvanilla leptogenesis scenario were explored,\nusually aiming at evading the above-mentioned bounds.\nFor example, it was noticed that the Davidson-Ibarra bound eq.~(\\ref{CPbound})\nis strictly speaking evaded by an extra-term contribution $\\Delta \\varepsilon_1$ to the total\n$C\\!P$ asymmetry \\cite{hambyestrumia,bounds}. However, this extra term vanishes\nif $M_3 \\simeq M_2$ and is suppressed as $\\Delta \\varepsilon_1 \\propto (M_1\/M_2)^2$. It can\nsignificantly relax the bounds on neutrino masses only when $|\\Omega_{ij}|^2 \\gtrsim (M_2\/M_1)^2$,\nimplying cancellations with a certain degree of fine tuning in the seesaw formula \nfor the neutrino masses. For example,\nin usual models with $|\\Omega_{ij}|^2 \\lesssim 10$, this extra contribution can be safely neglected\nfor $M_2\\gtrsim 10\\,M_1$.\n\nThe development that proved to have the most important\nimpact on the final asymmetry, compared to a calculation within the vanilla scenario,\nis certainly the inclusion of flavour effects. For this reason we discuss\nthem in detail in the next Section.\n\n\\section{The importance of flavour}\n\nThe inclusion of flavour effects provides the most significant modification in\nthe calculation of the final asymmetry compared to the vanilla scenario. \nTwo kinds of flavour\neffects are neglected in the vanilla scenario:\nheavy neutrino flavour effects, i.e., how the heavier RH neutrinos\ncan contribute to the final asymmetry, \nand lepton flavour effects, i.e.,\nhow the flavour composition of the lepton quantum states produced in RH neutrino decays\naffects the calculation of the final asymmetry.\nWe first discuss the two effects separately, and then we show\nhow their interplay can have very interesting consequences.\n\n\\subsection{Heavy neutrino flavour effects}\n\nIn the vanilla scenario the contribution to the final asymmetry from the\nheavier RH neutrinos is negligible because either the $C\\!P$ asymmetries are\nsuppressed in the hierarchical limit compared to\n$\\varepsilon_1^{\\rm max}$ (cf. 
Eq.~(\\ref{CPbound})) and\/or because,\neven assuming that a sizeable asymmetry (compared to the observed one) \nis produced at $T \\sim M_{2,3}$,\nit is later washed out by the lightest RH neutrino inverse processes. \n\nHowever, as we anticipated, there is a particular case when,\neven neglecting the lepton flavour composition and assuming\na hierarchical heavy neutrino mass spectrum, the contribution\nto the final asymmetry from next-to-lightest RH neutrino decays can be\ndominant and explain the observed asymmetry \\cite{DiBari:2005st}. This case corresponds to a particular\nchoice of the orthogonal matrix such that $N_1$ is so weakly coupled\n(corresponding to $K_1 \\ll 1$) that its washout can be neglected.\nFor the same choice of the parameters, the $N_2$ total $C\\!P$ asymmetry\n$\\varepsilon_2$ is unsuppressed if $M_3\\lesssim 10^{15}\\,{\\rm GeV}$.\nIn this case an $N_2$-dominated scenario is realised.\nNotice that in this case the existence of a third (heavier) RH neutrino\nspecies is crucial in order to have a sizeable $\\varepsilon_2$.\n\nThe contribution from the two heavier RH neutrino species is also important\nin the quasi-degenerate limit when $\\delta_{i}\\equiv (M_i-M_1)\/M_1 \\ll 1,~i=2,3$.\nIn this case the $C\\!P$ asymmetries $\\varepsilon_{2,3}$ are not suppressed, and the\nwashout from the lighter RH neutrino species is moderate, with no exponential\nprefactor \\cite{Pilaftsis:1997jf,Blanchet:2006dq}.\n\n\\subsection{Lepton flavour effects}\n\nThe importance and generality of flavour effects in leptogenesis were fully highlighted in \\cite{Abada:2006fw,Nardi:2006fx}.\nTheir role was first discussed in \\cite{bcst} and \nincluded in specific scenarios in \\cite{endoh,Pilaftsis:2005rv,vives}.\n\nFor the time being, let us continue to assume that the final asymmetry is\ndominantly produced from the decays of the lightest RH neutrinos $N_1$,\nneglecting the contribution from the decays of the heavier RH neutrinos $N_2$ and $N_3$.\nIf $M_1\\gg 10^{12}\\,{\\rm GeV}$, the flavour composition\nof the quantum states of the leptons produced from $N_1$ decays\nhas no influence on the final asymmetry and a one-flavour\nregime holds \\cite{bcst,Abada:2006fw,Nardi:2006fx}.\nThis is because the lepton quantum states evolve coherently between the production\nfrom an $N_1$ decay\nand a subsequent inverse decay with a Higgs boson.\nIn this way the lepton flavour composition does not play any role.\n\nHowever, if $10^{12}\\,{\\rm GeV}\\gtrsim M_1 \\gtrsim 10^{9}\\,{\\rm GeV}$,\nduring the relevant period of generation of the asymmetry, the produced lepton\nquantum states will, on average, have an interaction with RH tauons before\nundergoing the subsequent inverse decay. In this way the tauon component of the lepton quantum\nstates is measured by the thermal bath and the coherent evolution breaks down\n\\cite{bcst,Abada:2006fw,Nardi:2006fx}.\nTherefore, at the subsequent inverse decays,\nthe lepton quantum states are an incoherent mixture of a tauon component and\nof a (still coherent) superposition of an electron and a muon component that\nwe can denote by $\\tau^{\\bot}$.\n\nThe fraction of asymmetry stored in each flavour component is in general not proportional\nto the branching ratio of that component. This implies that the two\nflavour asymmetries, the tauon and the $\\tau^{\\bot}$ components,\nevolve differently and have to be calculated separately.\nIn this way the resulting final asymmetry can considerably differ\nfrom the result in the one-flavour regime. 
It can indeed be approximated by \nthe expression\n\\begin{equation}\\label{twofully}\nN^{\\rm f}_{B-L} \\simeq 2\\,\\varepsilon_1\\,\\kappa(K_1) +\n{\\Delta p_{1\\tau}\\over 2}\\,\\left[\\kappa(K_{1\\tau^{\\bot}})-\\kappa(K_{1\\tau})\\right] \\, ,\n\\end{equation}\nwhere $K_{1\\alpha}\\equiv p^0_{1\\alpha}\\,K_1$, the $p^0_{1\\alpha}$'s ($\\alpha=\\tau, \\tau^{\\bot}$) are the \ntree level probabilities that the leptons ${\\ell}_1$ and the anti-leptons $\\bar{\\ell}'_1$ \nproduced in the decays of the $N_1$'s \nare in the flavour $\\alpha$, while $\\Delta p_{1\\tau}$ is the difference \nbetween the probability to find ${\\ell}_1$ in the flavour $\\tau$ and that\nto find $\\bar{\\ell}'_1$ in the flavour $\\bar{\\tau}$. Comparing \nthis expression with the one-flavour regime result, eq.~(\\ref{unflavoured}),\none can see that if $\\Delta p_{1\\tau}=0$ (or if $K_{1\\tau}=K_{1\\tau^{\\bot}}$), \nthen the final asymmetry is enhanced just by a factor $2$. However, in general\nleptons and anti-leptons have a different flavour composition \\cite{bcst,Nardi:2006fx}\nand in this case $\\Delta p_{1\\tau}\\neq 0$ and the final asymmetry can be much higher than in the one-flavour regime.\nThe most extreme case is when the total $B-L$ number is conserved (i.e. $\\varepsilon_1=0$)\nand still the second term in eq.~(\\ref{twofully}) \ncan be non-vanishing \\cite{Nardi:2006fx} and even explain the observed asymmetry \\cite{Blanchet:2006be}. \n\nIf $M_1\\lesssim 10^{9}\\,{\\rm GeV}$, even the coherence of the $\\tau^{\\bot}$\ncomponent is broken by the muon interactions between decays and inverse decays\nand a full three-flavour regime applies. In the intermediate regimes\na density matrix formalism is necessary to properly describe decoherence\n\\cite{Abada:2006fw,Blanchet:2006ch,DeSimone:2006dd,Beneke:2010dz}.\n\nWe can briefly say that \nlepton flavour effects induce three major consequences that are all encoded \nin the expression (\\ref{twofully}), valid in the two-fully-flavoured regime. \ni) The washout can be considerably lower than in the unflavoured regime \\cite{Abada:2006fw,Nardi:2006fx},\nsince the asymmetry can be dominantly produced in a flavour $\\alpha$ with $K_{1\\alpha}\\ll K_1$.\nii) The leptonic mixing matrix enters directly the calculation of the final asymmetry; in particular,\nthe low-energy phases\ncontribute as a second source of $C\\!P$ violation to the flavoured $C\\!P$ asymmetries\n\\cite{Nardi:2006fx,Abada:2006ea,Blanchet:2006be,Pascoli:2006ci}. As an interesting phenomenological \nconsequence, the same source of $C\\!P$\nviolation that could take place in neutrino oscillations could be sufficient\nto explain the observed asymmetry \\cite{Blanchet:2006be,Pascoli:2006ci},\nthough under quite stringent conditions on the RH neutrino mass spectrum \nand with an exact determination requiring a density matrix calculation \\cite{Anisimov:2007mw}.\nNotice that this is a particular case realising the above-mentioned scenario with $\\varepsilon_1=0$ \\cite{Nardi:2006fx}. 
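The relative size of the two terms in eq.~(\\ref{twofully}) is easily explored numerically. In the Python sketch below the efficiency factor is modelled by a rough strong-washout power law, $\\kappa(K)\\simeq 0.5\\,K^{-1.2}$, and all inputs are toy values chosen for illustration only:\n\\begin{verbatim}\nimport numpy as np\n\ndef kappa(K):\n    # rough power-law fit to the strong-washout efficiency factor\n    # (valid for K >> 1); a stand-in for the exact kinetic solution\n    return 0.5 \/ K**1.2\n\neps1   = 1.0e-6    # total CP asymmetry (toy value)\ndp1tau = 1.0e-6    # Delta p_{1 tau}: CP violating, hence O(eps) (toy value)\nK1     = 50.0      # total decay parameter\np0tau  = 0.05      # tree-level probability to be in the tauon flavour\nK1tau  = p0tau * K1\nK1perp = (1.0 - p0tau) * K1\n\none_flavour = eps1 * kappa(K1)\ntwo_flavour = (2.0 * eps1 * kappa(K1)\n               + 0.5 * dp1tau * (kappa(K1perp) - kappa(K1tau)))\nprint(one_flavour, two_flavour)\n\\end{verbatim}\nEven with $\\Delta p_{1\\tau}$ of the same order as $\\varepsilon_1$, the flavoured term dominates in this example because $K_{1\\tau}\\ll K_1$ strongly reduces the washout in the tauon flavour.\n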
\nThe scenario of leptogenesis from the low-energy $C\\!P$ phases becomes particularly interesting\nin the light of the recent discovery of a non-vanishing $\\theta_{13}$ angle \\cite{hints,T2K,dayabay,reno},\na necessary condition to have $C\\!P$ violation in neutrino oscillations.\n In particular, the calculation of the corresponding lower bound on $\\theta_{13}$ necessarily requires\nthe use of density matrix equations.\niii) The flavoured $C\\!P$ asymmetries, given by\n\\begin{equation}\\label{flavouredCP} \\fl\n\\varepsilon_{i\\alpha}=\n{3\\over 16\\pi (h^{\\dagger}h)_{ii}}\\sum_{j\\neq i} \\left\\{ {\\rm Im}\n\\left[h_{\\alpha i}^{\\star}h_{\\alpha j}(h^{\\dagger}h)_{ij}\\right] {\\xi(x_j\/x_i)\\over\n\\sqrt{x_j\/x_i}}\\right.\n\\left.+{2\\over 3(x_j\/x_i-1)}{\\rm Im}\n\\left[h_{\\alpha i}^{\\star}h_{\\alpha j}(h^{\\dagger}h)_{ji}\\right]\\right\\} \\, ,\n\\end{equation}\ncontain a second term that conserves the total lepton number (it cancels exactly\nwhen summing over flavour in the total $C\\!P$ asymmetry and it, therefore, contributes to\n$\\Delta p_{1\\tau}$ in eq.~(\\ref{twofully}) but not to $\\varepsilon_1$),\nand therefore the upper bound in Eq.~(\\ref{CPbound})\ndoes not strictly apply to the flavoured $C\\!P$ asymmetries.\nAs a consequence, allowing for a mild cancellation in the neutrino mass matrix,\ncorresponding to $|\\Omega_{ij}|\\sim 1$, and also\nfor a mild RH neutrino mass hierarchy ($M_2\/M_1 \\sim 10$), the lower bound on $T_{\\rm reh}$\ncan be relaxed by about one order of magnitude, down to $10^8\\,{\\rm GeV}$\n\\cite{bounds}, as shown in the right panel of Fig.~2.\nHowever, for many models such as sequential dominance models \\cite{King:2003jb},\nthese cancellations do not occur, and lepton flavour effects cannot relax the\nlower bound on $T_{\\rm reh}$ \\cite{Blanchet:2006be}. One known exception is given\nby the inverse seesaw model~\\cite{Mohapatra:1986bd}, which naturally explains large\ncancellations in the neutrino\nmass matrix and leads to large $C\\!P$ asymmetries thanks to an underlying\nlepton number symmetry. This leads to the relaxation of the lower bound on $T_{\\rm reh}$ \nby up to three orders of magnitude~\\cite{Antusch:2009gn,Racker:2012vw}\n(see the more detailed discussion in the next Section).\n\nWhether or not the upper bound $m_i\\lesssim 0.1$~eV on neutrino masses found in the vanilla scenario\nstill holds in a flavoured $N_1$-dominated scenario, is relaxed, or even completely evaporates, is a controversial topic. \nIt still surely holds in the one-flavour regime for $M_1 \\gtrsim 10^{12}\\,{\\rm GeV}$ but it certainly does not hold in the two-fully-flavoured regime\n\\cite{Abada:2006fw,JosseMichaux:2007zj,bounds}. \nHowever, it was found that the two-fully-flavoured regime is not respected at large values $m_1\\gtrsim 0.1 \\,{\\rm eV}$\n\\cite{Blanchet:2006ch}.\nIn \\cite{DeSimone:2006dd} it was found \nthat it holds up to $m_1 \\sim 2\\,{\\rm eV}$, \nimplying in any case an upper bound much above current\nexperimental bounds and, therefore, uninteresting. 
\nIn \\cite{bounds}, including the information from low energy neutrino data and accounting for the Higgs asymmetry, it was found again that it \nholds only up to $m_1\\simeq 0.1 \\,{\\rm eV}$; beyond this value a density matrix approach would be required for a definite conclusion.\n\n\\subsection{The interplay between lepton and heavy neutrino flavour effects}\n\nAs we have seen, when lepton flavour effects are neglected, the possibility\nthat the next-to-lightest RH neutrino decays contribute to the final\nasymmetry relies on a special case realising the $N_2$-dominated scenario\n\\cite{DiBari:2005st}. On the other hand,\nwhen lepton flavour effects are taken into account,\nthe contribution from heavier RH neutrinos\ncannot be neglected in a much more general situation. Even the contribution\nfrom the heaviest RH neutrinos can be sizeable (i.e. explain the observed asymmetry)\nand has to be taken into account in general.\n\nAs a result, the calculation of the final asymmetry becomes much more involved.\nAssuming hierarchical mass patterns and that the RH neutrino processes\noccur in one of the three different fully flavoured regimes, one has to consider\nten different mass patterns, shown in Fig.~3, that require specific multi-stage\nsets of classical Boltzmann equations for the calculation of the final asymmetry.\n\\begin{figure}\n\\begin{minipage}{140mm}\n\\begin{center}\n\\psfig{file=f4p1.eps,height=3cm,width=3cm} \\hspace*{2mm}\n\\psfig{file=f5p1.eps,height=3cm,width=3cm} \\hspace*{2mm}\n\\psfig{file=f5p2.eps,height=3cm,width=15mm} \\hspace*{2mm}\n\\psfig{file=f5p3.eps,height=3cm,width=15mm}\n\\end{center}\n\\begin{center}\n\\psfig{file=f6p1.eps,height=3cm,width=3cm}\n\\psfig{file=f6p2.eps,height=3cm,width=15mm}\n\\psfig{file=f6p3.eps,height=3cm,width=15mm}\n\\psfig{file=f6p4.eps,height=3cm,width=3cm}\n\\psfig{file=f6p5.eps,height=3cm,width=15mm}\n\\psfig{file=f6p6.eps,height=3cm,width=15mm}\n\\end{center}\n\\caption{The ten RH neutrino mass patterns corresponding to\nleptogenesis scenarios with\ndifferent sets of classical Boltzmann equations for the\ncalculation of the final asymmetry \\cite{problem}.}\n\\end{minipage}\n\\end{figure}\n\n\\subsubsection{The (flavoured) $N_2$-dominated scenario}\n\nAmong these ten RH neutrino mass patterns,\nin the three where the $N_1$ washout occurs in\nthe three-fully-flavoured regime ($M_1 \\ll 10^{9}\\,{\\rm GeV}$),\nthe final asymmetry has necessarily to be produced by the\n$N_2$, either in the one-flavoured or in the two-fully-flavoured\nregime ($M_2 \\gg 10^9\\,{\\rm GeV}$). 
These patterns have some particularly attractive\nfeatures and realise a `flavoured $N_2$-dominated scenario' \\cite{vives}\n(corresponding to the fifth, sixth and seventh mass patterns in Fig.~3).\n\nWhile in the unflavoured approximation the lightest RH neutrino washout\nyields a global exponential washout factor, when lepton flavour effects are\ntaken into account, the asymmetry produced by the\nheavier RH neutrinos, at the $N_1$ washout, gets distributed\ninto an incoherent mixture of charged lepton flavour eigenstates \\cite{vives}.\nIt turns out that the $N_1$ washout in one of the three flavours is negligible, corresponding to having\nat least one $K_{1\\alpha}\\lesssim 1$,\nin quite a wide region of the parameter space \\cite{bounds}.\nIn this way, accounting for flavour effects, the region of applicability of the\n$N_2$-dominated scenario enlarges considerably, since it is not\nnecessary that $N_1$ fully decouples but it is sufficient that it decouples\njust in a specific lepton flavour \\cite{vives}. \nThe unflavoured $N_2$-dominated scenario is recovered\nin the limit where all three $K_{1\\alpha}\\lesssim 1$ \nand either $M_2\\gtrsim 10^{12}\\,{\\rm GeV}$, or \n$10^{12}\\,{\\rm GeV} \\gtrsim M_2\\gtrsim 10^{9}\\,{\\rm GeV}$ and $K_2 \\lesssim 1$. \n\nRecently, it has also been realised that,\naccounting for the Higgs and for the quark asymmetries, the dynamics of the flavour asymmetries\ncouple and the lightest RH neutrino washout in a particular flavour can be circumvented even when $N_1$ is strongly\ncoupled in that flavour \\cite{Antusch:2010ms}.\nAnother interesting effect arising in the $N_2$-dominated scenario is {\\em phantom leptogenesis}.\nThis is a purely quantum-mechanical effect that allows, for example, parts of the electron and\nmuon asymmetries, the phantom terms,\nto undergo a weaker washout at production than the total asymmetry.\nIt has been recently shown that phantom terms\nassociated with an RH neutrino species $N_i$ with $M_i \\gg 10^{9}\\,{\\rm GeV}$\nare present not just in the $N_2$-dominated scenario \\cite{Blanchet:2011xq}.\nHowever, it should be noticed that phantom terms produced by the\nlightest RH neutrinos\ncancel with each other and thus do not contribute to the final asymmetry, though\nthey can induce flavoured asymmetries much larger than the total asymmetry,\nsomething potentially relevant in active-sterile neutrino oscillations~\\cite{activesterile}.\n\n\\subsubsection{Heavy neutrino flavour projection}\n\nEven assuming a strong RH neutrino mass hierarchy, a coupled $N_1$ in all lepton flavours\n($K_{1\\alpha} \\gg 1$ for any $\\alpha$) and\n$M_1\\gtrsim 10^{12}\\,{\\rm GeV}$ (corresponding to the first mass pattern in Fig.~3), the asymmetry produced by the\nheavier RH neutrino decays at $T\\sim M_i$, in particular by the $N_2$'s decays,\ncan be large enough to explain the observed asymmetry by avoiding most of the washout from \nthe lightest RH neutrino\nprocesses. This is because, in general, there is an `orthogonal' component\nthat escapes the $N_1$ washout \\cite{bcst,Engelhard:2006yg} while the remaining\n`parallel' component undergoes the usual exponential washout.\nFor a mild mass hierarchy, $\\delta_3 \\lesssim 10$,\neven the asymmetry produced by the $N_3$'s decays can be large enough to\nexplain the observed asymmetry and escape\n the $N_1$ and $N_2$ washout. 
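A minimal numerical sketch of this projection mechanism is given below (in Python); the flavour content of the states is represented by real three-vectors for simplicity, the exponential factor $e^{-3\\pi K_1\/8}$ is the standard inverse-decay washout acting on the parallel component only, and all numbers are illustrative assumptions:\n\\begin{verbatim}\nimport numpy as np\n\nK1 = 20.0                                 # N_1 decay parameter (toy value)\nw_par = np.exp(-3.0 * np.pi \/ 8.0 * K1)   # washout of the parallel part\n\nell1 = np.array([0.2, 0.4, 0.9])          # assumed flavour direction of l_1\nell1 = ell1 \/ np.linalg.norm(ell1)\n\nA2 = 1.0e-8 * np.array([0.3, -0.5, 0.9])  # asymmetry left over from N_2 decays\n\nA_par = np.dot(A2, ell1) * ell1           # parallel part: exponentially washed out\nA_perp = A2 - A_par                       # orthogonal part: escapes the N_1 washout\nprint((A_perp + w_par * A_par).sum())     # surviving total B-L asymmetry\n\\end{verbatim}\nUnless the asymmetry left over from the heavier RH neutrinos happens to be aligned with $\\ell_1$, a sizeable orthogonal remnant survives even for $K_1\\gg 1$.\n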
Heavy neutrino flavour projection also occurs\n when $10^{12}\\,{\\rm GeV}\\gtrsim M_1 \\gtrsim 10^9 \\,{\\rm GeV}$, in this case \n in the $e$--$\\mu$ plane.\n\nWhen the effect of heavy neutrino flavour projection is taken into account jointly\n with an additional contribution to the flavoured $C\\!P$ asymmetries $\\varepsilon_{2\\alpha}$\n that is not suppressed when the heaviest RH neutrino mass $M_3\\gtrsim 10^{15}\\,{\\rm GeV}$,\nthis can lead to the possibility of a dominant contribution from the next-to-lightest RH neutrinos\neven in an effective two RH neutrino model that can be regarded \nas a limiting case of a (more appealing) three RH neutrino case with \n$M_3\\gtrsim 10^{15}\\,{\\rm GeV}$ \\cite{Antusch:2010ms}.\n\n\\subsubsection{The problem of the initial conditions in flavoured leptogenesis}\n\nAs we have seen, in the vanilla scenario the unflavoured assumption reduces the problem\nof the dependence on the initial conditions to simply imposing\nthe strong washout condition\n$K_1 \\gg 1$. In other words, there is a full equivalence between\nstrong washout and independence of the initial conditions.\n\nWhen (lepton and heavy neutrino) flavour effects are considered, the situation is much more\ninvolved and, for example, imposing strong washout conditions on all flavoured\nasymmetries ($K_{i\\alpha}\\gg 1$) is not enough to guarantee independence of the initial conditions.\nPerhaps the most striking consequence is that in a traditional\n$N_1$-dominated scenario there is no condition that can guarantee\n independence of the initial conditions. The only possibility to have independence\n of the initial conditions is represented by a tauon $N_2$-dominated scenario~\\cite{problem},\n i.e. a scenario where the asymmetry is dominantly produced\n from the next-to-lightest RH neutrinos, and therefore $M_2\\gg 10^{9}\\,{\\rm GeV}$,\n in the tauon flavour. The condition $M_1 \\ll 10^{9}\\,{\\rm GeV}$\n is also important to have projection onto the orthonormal three-lepton-flavour\n basis before the lightest RH neutrino washout \\cite{Engelhard:2006yg}.\n\n \\section{Density matrix formalism}\n\nAs explained in the previous Section, leptogenesis is sensitive to the temperature\nrange at which the asymmetry is produced. This determines whether the lepton quantum states produced in RH neutrino\ndecays either remain coherent or undergo decoherence and get projected in flavour space before scattering in inverse\nprocesses. Moreover, since the lepton states produced in heavy neutrino decays differ in general\nfrom the lepton flavour eigenstates, lepton flavour\noscillations can also in principle arise in a similar way as neutrino oscillations happen in\nvacuum or in a medium. In order to treat the problem of flavour oscillations and\npartial loss of coherence in a consistent way, one has to extend the classical\nBoltzmann framework to account for these intrinsically quantum effects.\nThe formalism of the density matrix is appropriate for this purpose~\\cite{Sigl:1992fn}. 
The\ndensity matrix for leptons with momentum ${\\mathbf p}$ is defined as\n\\begin{equation}\n\\rho_{\\ell}({\\mathbf p})=\\left(\\begin{array}{cc}\\langle a_{\\alpha}^{\\dagger}({\\mathbf p})\\,a_{\\alpha}({\\mathbf p})\\rangle &\n\\langle a_{\\beta}^{\\dagger}({\\mathbf p})\\, a_{\\alpha}({\\mathbf p})\\rangle \\\\\n\\langle a_{\\alpha}^{\\dagger}({\\mathbf p})\\,a_{\\beta}({\\mathbf p})\\rangle &\n\\langle a_{\\beta}^{\\dagger}({\\mathbf p})\\, a_{\\beta}({\\mathbf p})\\rangle \\end{array}\\right) \\, ,\n\\end{equation}\nwith $a\\,(a^{\\dagger})$ denoting\nthe annihilation (creation) operator, i.e. it is the expectation\nvalue (to be understood in statistical terms) of the generalised number operator.\nFor anti-leptons one has analogously a density matrix $\\rho_{\\bar{\\ell}}({\\mathbf p})$.\nThe diagonal elements of the density matrix are\nnothing else than the occupation numbers of the two flavoured leptons, while the off-diagonal\nelements encode flavour correlations.\n\nLet us consider an $N_1$-dominated scenario for simplicity. Moreover, let us consider\na momentum-integrated description, introducing the matrix of lepton number densities\n$N_{\\ell}\/R^3=g_{\\ell}\\int\\,{d^3p\\over (2\\pi)^3} \\, \\rho_{\\ell}({\\mathbf p})$, where $R$ is the \nscale factor. \nWithin the density matrix formalism,\nthe asymmetry can be calculated from the following\ndensity matrix equation in the flavour space $\\tau$--$\\tau^{\\bot}$\n~\\cite{Abada:2006fw,DeSimone:2006dd,Blanchet:2008hg,\nBeneke:2010dz,Blanchet:2011xq}\n\\begin{eqnarray} \\label{densmateq}\n{dN^{B-L}_{\\alpha\\beta} \\over dz} &=&\n\\varepsilon^{(1)}_{\\alpha\\beta}\\,D_1\\,(N_{N_1}-N_{N_1}^{\\rm eq})-{1\\over 2}\\,W_1\\,\\left\\{{\\mathcal P}^{0(1)} , N^{B-L}\\right\\}_{\\alpha\\beta}\\nonumber \\\\ \\nonumber\n&+& {\\rm i}\\,{{\\rm Re}(\\Lambda_{\\tau})\\over H\\, z}\\left[\\left(\\begin{array}{cc}\n1 & 0 \\\\\n0 & 0\n\\end{array}\\right),N^{\\ell +\\bar{\\ell}}\\right]_{\\alpha\\beta} \\\\\n& - & {{\\rm Im}(\\Lambda_{\\tau})\\over H\\, z}\\left[\\left(\\begin{array}{cc}\n1 & 0 \\\\\n0 & 0\n\\end{array}\\right),\\left[\\left(\\begin{array}{cc}\n1 & 0 \\\\\n0 & 0\n\\end{array}\\right),N^{B-L} \\right]\\right]_{\\alpha\\beta} \\, ,\n\\end{eqnarray}\nwhere we defined $N^{\\ell +\\bar{\\ell}}_{\\alpha\\beta} \\equiv N^{{\\ell}}_{\\alpha\\beta} +N^{{\\bar{\\ell}}}_{\\alpha\\beta}$ and\nwhere $\\mathcal{P}^{0(1)}$ is a matrix projecting the\nlepton quantum states along the flavour `$1$' of a lepton ${\\ell}_1$\nproduced from the decay of an RH neutrino $N_1$. 
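Before moving on, it may help to see Eq.~(\\ref{densmateq}) in action. The following Python sketch integrates it numerically in the $\\tau$--$\\tau^{\\bot}$ space with toy profiles for the decay and washout terms and toy numbers throughout (all inputs are assumptions for illustration, not the actual kinetic functions); the oscillation term is dropped, since, as discussed below, gauge interactions keep $N^{\\ell+\\bar{\\ell}}$ close to its diagonal equilibrium form and thereby suppress it, while the damping term is retained.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nK1 = 10.0\n# toy CP asymmetry matrix and projector |l_1><l_1| in (tau, tau-perp)\neps = np.array([[1.0e-6, 2.0e-7], [2.0e-7, -5.0e-7]])\nu = np.array([np.sqrt(0.3), np.sqrt(0.7)])  # assumed flavour content of l_1\nP = np.outer(u, u)\nT3 = np.diag([1.0, 0.0])\ng_damp = 5.0                                # toy strength of the damping term\n\ndef rhs(z, y):\n    N = y.reshape(2, 2)\n    D = K1 * z * np.exp(-z)                 # toy decay\/source profile\n    W = 0.5 * K1 * z**3 * np.exp(-z)        # toy washout profile\n    dN = eps * D - 0.5 * W * (P @ N + N @ P)\n    # double commutator [T3, [T3, N]]: damps the off-diagonal entries only\n    dN = dN - g_damp * (T3 @ T3 @ N - 2.0 * T3 @ N @ T3 + N @ T3 @ T3)\n    return dN.reshape(4)\n\nsol = solve_ivp(rhs, (0.01, 20.0), np.zeros(4), rtol=1.0e-8, atol=1.0e-14)\nprint(np.trace(sol.y[:, -1].reshape(2, 2)))  # final total B-L asymmetry\n\\end{verbatim}\nFor large damping the off-diagonal correlations are rapidly destroyed and the result approaches that of two decoupled flavoured Boltzmann equations, with flavoured decay parameters $K_{1\\alpha}={\\mathcal P}^{0(1)}_{\\alpha\\alpha}K_1$.\n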
The $C\\!P$ asymmetry\nmatrix is a straightforward generalization of Eq.~(\\ref{flavouredCP})~\\cite{Abada:2006ea,Beneke:2010dz,Blanchet:2011xq}.\nThe real and imaginary parts of the tau-lepton self-energy are respectively given by~\n\\cite{Weldon:1982bn,Cline:1993bd}\n\\begin{equation}\n{\\rm Re}(\\Lambda_{\\tau})= {f_{\\tau}^2\\over 64} T \\, \\hspace{10mm} \\mbox{\\rm and}\n\\hspace{10mm} {\\rm Im}(\\Lambda_{\\tau})= 8\\times 10^{-3}f_{\\tau}^2 T \\, ,\n\\end{equation}\nwhere $f_{\\tau}$ is the tauon Yukawa coupling.\nThe commutator structure in the third term on the RHS of Eq.~(\\ref{densmateq})\naccounts for oscillations in flavour space driven by the real part of\nthe self-energy, and the double commutator accounts for damping of the off-diagonal terms\ndriven by the imaginary part\nof the self-energy.\n\nIn order to close the system of equations, we also need an\nequation for the matrix $N^{\\ell +\\bar{\\ell}}$, which is given by\n\\begin{equation}\n{dN^{\\ell +\\bar{\\ell}}_{\\alpha\\beta} \\over dz} =\n-{{\\rm Re}(\\Lambda_{\\tau})\\over H\\, z}(\\sigma_2)_{\\alpha\\beta} N^{B-L}_{\\alpha\\beta} -S_g \\,\n(N^{\\ell +\\bar{\\ell}}_{\\alpha\\beta}-2\\,N_{\\ell}^{\\rm eq }\\delta_{\\alpha\\beta}) \\, ,\n\\end{equation}\nwhere $S_g\\equiv \\Gamma_g\/(Hz)$ accounts for gauge interactions. As shown in \\cite{Beneke:2010dz}, this term\nhas the effect of damping the flavour oscillations. This can be understood by noticing that gauge\ninteractions force $N^{\\ell +\\bar{\\ell}}_{\\alpha\\beta}=2\\,N_{\\ell}^{\\rm eq }\\delta_{\\alpha\\beta}$, which in turn renders the oscillatory\nterm in Eq.~(\\ref{densmateq}) negligible.\n\nEq.~(\\ref{densmateq}) should be solved in any of the intermediate regimes where lepton states\nare partially coherent. Actually the range of relevance of the density matrix equation might be wider\nthan previously believed. Indeed, it was found in~\\cite{Beneke:2010dz} that in order to\nrecover the unflavoured regime, one should have masses well above $10^{13}$~GeV.\n\nAs already discussed, the contribution from heavier RH neutrinos cannot be neglected in\ngeneral. Therefore, the density matrix equation (\\ref{densmateq}) should be extended to\naccount for such effects; this was done in~\\cite{Blanchet:2011xq}, where a general equation\nwas presented, valid for any RH neutrino decays and any temperature range. Flavour projection effects\nas well as phantom terms are readily taken into account in this framework.\n\n\n\\section{Limit of quasi-degenerate heavy neutrinos}\n\nIf $\\delta_2 \\ll 1$, the $C\\!P$ asymmetries $\\varepsilon_{1,2}$\nget resonantly enhanced as $\\varepsilon_{1,2}\\propto 1\/\\delta_2$ \\cite{flanz,Covi:1996wh,buchplumi1}.\nIf, more stringently, $\\delta_2\\lesssim 10^{-2}$, then\n$\\eta_B \\propto 1\/\\delta_2$ and the degenerate limit is obtained \\cite{Blanchet:2006dq}.\nIn this limit the lower bounds on $M_1$ and on $T_{\\rm reh}$\nget relaxed proportionally to $\\delta_2$ and at the resonance they completely disappear\n\\cite{Pilaftsis:1997jf,Pilaftsis:2003gt}. The upper bound on $m_1$ also \ndisappears in this extreme case. 
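To visualise the size of this resonant enhancement, one can scan the dimensionless self-energy factor $x\\,r\/(x^2+r^2)$, with $x\\equiv (M_2^2-M_1^2)\/M_1^2$ and $r\\equiv \\Gamma_1\/M_1$, corresponding to one common regulator choice (the precise form of the regulator is debated, as discussed below); the width-to-mass ratio in the Python sketch below is a toy value:\n\\begin{verbatim}\nimport numpy as np\n\ndef enhancement(delta2, gamma_over_M=1.0e-6):\n    # resonance factor x*r\/(x**2 + r**2), x = (M2**2 - M1**2)\/M1**2;\n    # it grows like 1\/delta2 and saturates at 1\/2 when x ~ r\n    x = (1.0 + delta2)**2 - 1.0\n    r = gamma_over_M\n    return x * r \/ (x**2 + r**2)\n\nfor d2 in [1.0e-1, 1.0e-2, 1.0e-4, 5.0e-7]:\n    print(d2, enhancement(d2))\n\\end{verbatim}\nThe factor grows like $1\/\\delta_2$ as the splitting decreases and saturates at its maximal value $1\/2$ once the splitting becomes comparable to the decay width.\n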
In a more realistic case where the degeneracy of the RH neutrino masses\nis comparable to the degeneracy of the light neutrino masses, as typically occurs in \nmodels with flavour symmetries, the upper bound $m_1 \\lesssim 0.1\\,{\\rm eV}$ obtained in the hierarchical case\nimposing the validity of the Boltzmann equations gets relaxed to $m_1 \\lesssim 0.4\\,{\\rm eV}$\n\\cite{hambyestrumia,bounds} in the case $M_3\\gg M_2$, while it is basically unchanged\nif $M_3=M_2$ \\cite{bounds}. The difference is due to the fact that the relaxation is mainly \nto be ascribed to the extra term in the $C\\!P$ asymmetry mentioned at the end of Section 3. This \nextra term, subdominant in the hierarchical case, can become dominant (if $M_3\\gg M_2$) in the quasi-degenerate case\n($\\delta_2 \\equiv (M_2-M_1)\/M_1 \\ll 1$)\nand it grows with the absolute neutrino mass scale instead of being suppressed as the usual term \\cite{hambyestrumia}. \nOn the other hand, this extra term vanishes exactly when $M_2=M_3$, \na more natural assumption for $\\delta_2 \\ll 1$. \n\nIn the full three-flavour regime, the contributions from all quasi-degenerate\nRH neutrinos should be taken into account and in this case the final asymmetry can be calculated as\n\\begin{equation}\nN^{\\rm f}_{B-L}=\\sum_{i,\\alpha} \\varepsilon_{i\\alpha}\\, \\kappa\\left(\\sum_j K_{j\\alpha}\\right) \\, .\n\\end{equation}\nNotice that, for each lepton flavour $\\alpha$, the washout in the degenerate limit\nis described by the sum of the\nflavoured decay parameters of all RH neutrino species.\n\nThe simplest way to obtain a quasi-degenerate RH neutrino mass spectrum is to postulate the\nexistence of a slightly broken lepton number symmetry in the lepton sector~\\cite{Branco:1988ex,Shaposhnikov:2006nn,\nKersten:2007vk}. Assuming the existence of\nonly two RH neutrinos at first for simplicity, it is possible to write down Yukawa couplings and\na Majorana mass term in a way that conserves lepton number. In the ``flavour''\nbasis, which we denote by a prime, one RH neutrino can be assigned lepton number $+1$,\nand the other one $-1$, so that the seesaw mass matrix from Eq.~(\\ref{lagrangian}) takes the form\n\\begin{equation}\\label{inverse}\nM_{\\nu}=\\left(\\begin{array}{ccc} 0 & h'_{\\alpha 1}v^2 & 0\\\\\nh^{'T}_{\\alpha 1}v^2 & 0 & M \\\\\n0 & M & 0 \\end{array}\\right) \\, ,\n\\end{equation}\nwhich conserves lepton number. Rotating to the RH neutrino mass basis, one finds that\n\\begin{equation}\nM_{\\nu}=\\left(\\begin{array}{ccc} 0 & h'_{\\alpha 1}v^2 & {\\rm i} h'_{\\alpha 1} v^2\\\\\nh^{'T}_{\\alpha 1}v^2 & M & 0 \\\\\n{\\rm i} h^{'T}_{\\alpha 1} v^2 & 0 & M \\end{array}\\right) \\, .\n\\end{equation}\nIt can be seen that the two RH neutrinos are exactly degenerate in this limit, and one\ncan easily show that the neutrino mass matrix $m_{\\nu}$ in Eq.~(\\ref{seesaw}) vanishes\nidentically. In other words, having non-zero neutrino masses requires a small\nbreaking of the lepton number symmetry, which automatically splits the two RH neutrinos\ninto a quasi-Dirac fermion pair.\n\nThere are different ways to implement the breaking of the\nlepton number symmetry, thus generating non-zero neutrino masses. 
For instance,\nin the above two-RH neutrino model, we can write:\n\\begin{equation}\\label{breaking}\nM_{\\nu}=\\left(\\begin{array}{ccc} 0 & h'_{\\alpha 1}v^2 & \\epsilon_{\\alpha}v^2\\\\\nh^{'T}_{\\alpha 1}v^2 & \\mu_1 & M \\\\\n\\epsilon^{T}_{\\alpha}v^2 & M & \\mu_2 \\end{array}\\right) \\, ,\n\\end{equation}\nwhich implies that the full light neutrino mass matrix is given by~\\cite{Gavela:2009cd,Blanchet:2009kk}\n\\begin{equation}\\label{nuinverse}\nm_{\\nu}\\simeq v^2\\left(\\epsilon {1\\over M} h^{'T} + h' {1\\over M} \\epsilon^T\\right) - v^2\n\\left(h'{1\\over M}\\mu_2 {1\\over M} h^{'T} \\right) \\, ,\n\\end{equation}\nproportional to the breaking parameters $\\epsilon$ and $\\mu_2$, as expected.\nIf the lepton number symmetry is broken as in the first term in Eq.~(\\ref{nuinverse}), it\nis referred to as `linear'~\\cite{Barr:2003nn}; if the second term is at work, it is referred to\nas inverse\/double\nseesaw mechanism~\\cite{Mohapatra:1986bd}. In the following, we\nwill refer to these models for simplicity as\n`inverse seesaw models'.\nNo matter how the lepton number symmetry is broken, the bottom line is that these\nmodels fall in the category of \\emph{low-scale seesaw\nmodels}, where the size of the Yukawa couplings is not necessarily suppressed if the RH neutrino\nmass scale is lowered to the electroweak scale. This can lead to interesting non-unitarity effects in neutrino\noscillation experiments~\\cite{FernandezMartinez:2007ms,Antusch:2006vwa}, as well\nas observable lepton flavour violating rates in experiments looking for $\\mu \\to e\\gamma$,\n$\\tau \\to \\mu (e)\\gamma$, $\\mu \\to eee$ or $\\mu \\to e$ conversion in nuclei~\\cite{Pilaftsis:2005rv}.\nNote that, within the orthogonal parameterisation Eq.~(\\ref{casas}), the inverse seesaw model\nwith two RH neutrinos is obtained in the limit $|\\Omega| \\to \\infty$~\\cite{Asaka:2008bj}.\n\nThe simple model in Eq.~(\\ref{inverse}) can be trivially\nextended to have a third massive RH neutrino of mass still conserving lepton number.\nIt would have zero lepton\nnumber and be decoupled from leptons. Without further assumptions, the other RH neutrino mass\nscale is independent of $M$, and therefore it can be much lower or much higher.\n\nLeptogenesis in the context of these low-scale seesaw models was intensely studied in recent\nyears. It was found in~\\cite{Pilaftsis:2005rv} that it is possible to have at the same time\nsuccessful leptogenesis and low energy observable effects (beyond standard neutrino oscillation phenomenology)\nsuch as, for example, charged lepton flavour violation processes. \nThis is possible in a model with three RH neutrinos and with the help\nof very large flavour effects (hence the name `resonant $\\tau$-leptogenesis').\n\nMore recently, this possibility was examined in the context of a two-RH\nneutrino model, and it was found that, in the limit $M_2-M_1 \\gg \\Gamma_{1,2}$, leptogenesis does\nnot allow large enough Yukawa couplings to have non-trivial consequences at low energies~\\cite{Asaka:2008bj}.\nThis conclusion was re-examined in \\cite{Blanchet:2009kk}\nrelaxing the requirement on the mass splitting, and allowing for more extreme quasi-degeneracies. In the\nlimit $M_2-M_1 \\ll \\Gamma_{1,2}$, it was argued that the decay parameter $K$ should be replaced by\nan effective decay parameter $K^{\\rm eff}_{\\alpha} \\propto K_{\\alpha} (M_2-M_1)^2\/\\Gamma_1^2$\nwhich depends explicitly on the small breaking of the lepton number\nsymmetry. 
As a matter of fact,\nit is expected that the washout of lepton number vanishes in the limit\nof lepton number conservation, and in \\cite{Blanchet:2009kk} it was rigorously derived from the negative interference\nbetween the two RH neutrinos exchanged in the $\\Delta L=2$ process $\\ell \\Phi \\to \\bar{\\ell}\\Phi^{\\dagger}$.\n\nA more controversial issue is the behavior of the $C\\!P$ asymmetry parameter in the limit\n$M_2-M_1 \\ll \\Gamma_{1,2}$, which is directly related to the form of the regulator for the RH neutrino\npropagator in the self-energy diagram. The reason is that the location of the pole for the RH neutrino determines\nthe maximum enhancement of the $C\\!P$ asymmetry. Following \\cite{Anisimov:2005hr}, Ref.~\\cite{Blanchet:2009kk}\nuses\n\\begin{eqnarray}\\label{CPdeg}\n\\varepsilon_{i\\alpha} &\\simeq& {1\\over 8\\pi (h^{\\dagger}h)_{ii}}\\sum_{j\\neq i}\n\\left\\{ {\\rm Im}\\left[\nh_{\\alpha i}^{\\star}h_{\\alpha j}(h^{\\dagger}h)_{ij}\\right] +\n{\\rm Im}\\left[h_{\\alpha i}^{\\star}h_{\\alpha j}(h^{\\dagger}h)_{ji}\\right]\\right\\} \\nonumber\\\\\n&&\\times{M_j^2-M_i^2 \\over (M_j^2-M_i^2)^2\n+(M_i\\Gamma_i -M_j\\Gamma_j)^2}\n\\end{eqnarray}\nwith the regulator given by the difference $M_i\\Gamma_i -M_j\\Gamma_j$,\nwhereas \\cite{Pilaftsis:2005rv} finds the regulator $M_i\\Gamma_i$. Within \nthe inverse seesaw model considered (with two RH neutrinos)\nthe decay rates $\\Gamma_1$ and $\\Gamma_2$ are predicted to be equal\nin the lepton number conserving limit. Therefore, a regulator $M_1\\Gamma_1 -M_2\\Gamma_2$ allows for a much\nlarger enhancement of the $C\\!P$ asymmetry than $M_1\\Gamma_1$, implying that\nleptogenesis is compatible with observable lepton flavour violation rates~\\cite{Blanchet:2009kk}.\nThe precise value of the $C\\!P$ asymmetry in the regime $M_2-M_1 \\ll \\Gamma_{1,2}$, which is especially relevant in inverse\nseesaw models, is currently still an open issue, whose resolution presumably lies beyond the classical Boltzmann approach\n(see next Section).\n\nNote that the Weinberg-Nanopoulos~\\cite{Nanopoulos:1979gx} requirement, that at least\ntwo couplings should violate lepton (or baryon) number to have a generation of asymmetry,\nis satisfied with both regulators in the inverse seesaw model.\nIndeed, when four out of the five lepton-number-violating couplings in Eq.~(\\ref{breaking})\nare turned off, the $C\\!P$ asymmetry vanishes with both regulators,\nas it should~\\cite{Blanchet:2009kk}. However, the limit of all couplings taken\nsimultaneously to zero is not well-behaved for the regulator in Eq.~(\\ref{CPdeg}),\nas noted in \\cite{Deppisch:2010fr}.\n\nA different scenario was considered in \\cite{Antusch:2009gn} but still within the inverse seesaw framework. There,\na third almost decoupled RH neutrino was added, with a mass $M_1\\ll M$. \nLepton number violation was included in the coupling of the lightest\nRH neutrino, such that it can decay producing a lepton asymmetry. Therefore, in this case, the quasi-degenerate\npair of RH neutrinos is not responsible for the generation of asymmetry. 
Nonetheless, their large couplings to leptons\n(leading for instance to non-unitarity effects in neutrino oscillations) imply that the flavoured $C\\!P$ asymmetry, more precisely\nthe lepton number conserving part in Eq.~(\\ref{flavouredCP}), can be large even for TeV-scale RH neutrino masses.\nHowever, this scenario of `non-unitarity driven leptogenesis' has intrinsically large lepton\nflavour violating interactions that lead to flavour equilibration~\\cite{AristizabalSierra:2009mq}. Using the\ncross-sections for flavour violating interactions obtained in~\\cite{Pilaftsis:2005rv}, one finds that\nthe asymmetry cannot be generated in the right amount if $M_1\\lesssim 10^8$~GeV~\\cite{Antusch:2009gn}. \nHowever, these cross-sections were found recently to be significantly more suppressed than in~\\cite{Pilaftsis:2005rv}, \nleading to an interesting\nrelaxation of the bounds down to $M_1\\gtrsim 10^6$~GeV~\\cite{Racker:2012vw}.\n\nIt is worth mentioning that leptogenesis was investigated in the context of the inverse seesaw model also when\nlepton number is exactly conserved \\cite{GonzalezGarcia:2009qd}.\nIn this case, the observed baryon asymmetry can be generated\nif leptogenesis occurs during the electroweak phase transition, when the sphaleron rate progressively\ngoes out of equilibrium. The lepton flavour asymmetries, generated exclusively thanks to flavour effects (again the second\nterm in Eq.~(\\ref{flavouredCP})),\nare then converted into a baryon asymmetry before total washout.\n\nAnother example where leptogenesis occurs in the resonant regime\nis radiative leptogenesis~\\cite{GonzalezFelipe:2003fi,Branco:2005ye}. There, RH neutrinos are assumed exactly\ndegenerate at some high scale (for instance the GUT scale), and\nsmall RH neutrino mass splittings are generated by the running of the seesaw parameters.\n\nIn these low-scale seesaw models with TeV-scale RH neutrinos,\nit is natural to wonder whether some interesting signatures could be observed at collider experiments such as the LHC.\nUnfortunately, it seems that within the simplest type-I seesaw, the prospects are\nrather dim~\\cite{Kersten:2007vk,Ibarra:2011xn}.\nThe main problem is that the mixing of RH neutrinos with light neutrinos has strong upper\nlimits from rare (lepton-flavour-violating) decays~\\cite{FernandezMartinez:2007ms}, which prevents an\nimportant production of RH neutrinos at the LHC.\nHowever, in extended models such as Type II seesaw, Type III seesaw, left-right symmetric\nmodels, or simply an extra $U(1)_{B-L}$, \nthe prospects are much more encouraging (see \\cite{hambye} in this Issue).\n\n\n\\section{Improved kinetic description}\n\nWe have already discussed the density matrix formalism, which goes beyond the traditional\nkinetic treatment with Boltzmann (rate) equations. This Section is devoted to other\nkinetic effects that are important both for an estimation of the\ntheoretical uncertainties in the calculation and for a better conceptual understanding of\nthe minimal leptogenesis framework.\n\n\\subsection{Momentum dependence}\n\nWithin the vanilla scenario, the final asymmetry is computed by solving classical Boltzmann equations\nfor the RH neutrino and lepton \\emph{number densities}, so-called rate equations. 
These\nare obtained from the Boltzmann equations for the \\emph{distribution\nfunctions} by integrating over momenta with some approximations (see below).\nOne can then wonder what theoretical error is introduced by this integrated description.\n\nGiven a particle species $X$, the number density is obtained by integrating\nthe distribution function over momentum,\n\\begin{equation}\nn_X ={g_X\\over(2\\pi)^3}\\, \\int \\, d^3 p_X \\, f_X \\, ,\n\\end{equation}\nwhere $g_X$ is the number of degrees of freedom of particle $X$.\nFor leptogenesis with decays and inverse decays,\nthe system of Boltzmann equations (one for the RH neutrino and one for lepton number)\nin the expanding Friedmann-Robertson-Walker Universe\nis given by\n\\begin{eqnarray}\n{\\partial f_N\\over \\partial t} - |{\\bf p}_N| H {\\partial f_N\\over \\partial |{\\bf p}_N|}&=& \\mathcal{C}_D[f_N]\\\\\n{\\partial f_{\\ell-\\bar{\\ell}}\\over \\partial t} - |{\\bf p}_{\\ell}| H\n{\\partial f_{\\ell-\\bar{\\ell}}\\over \\partial |{\\bf p}_{\\ell}|}&=& \\mathcal{C}_D[f_{\\ell-\\bar{\\ell}}] \\, ,\n\\label{momentumke}\n\\end{eqnarray}\nwhere the collision integrals on the right-hand side are defined as\n\\begin{eqnarray} \\fl\n\\mathcal{C}[f_A, A \\leftrightarrow B\\, C] &=& {1\\over 2 E_A}\\int {d^3 p_B \\over 2E_B (2\\pi)^3}\n{d^3 p_C \\over 2E_C (2\\pi)^3} (2\\pi)^4 \\delta^4(p_A - p_B - p_C) \\nonumber\\\\\n&&\\times \\left[ f_B f_C(1-f_A)|\\mathcal{M}( B\\,C\\to A)|^2 - \\right.\\\\\n&&\\left. f_A(1-f_B)(1-f_C)|\\mathcal{M}(A\\to B\\,C)|^2\\right]\\nonumber \\,.\n\\end{eqnarray}\nNote that the double-counting problem is\nsolved here in the same way as for the integrated Boltzmann equations, by consistently including\nthe resonant part of the $\\Delta L=2$ scatterings. \nIn order to recover the usual Boltzmann equations~(\\ref{dlg1})--(\\ref{unflke}),\none has to introduce three approximations:\n(i) kinetic equilibrium for the RH neutrinos, which can be expressed as $f_N\/f_N^{\\rm eq} = n_N\/n_N^{\\rm eq}$;\n(ii) Maxwell-Boltzmann distributions for RH neutrinos, leptons and Higgs fields;\n(iii) neglecting Pauli blocking and Bose enhancement factors.\n\nRH neutrinos are only coupled to the thermal bath via their Yukawa couplings. It is therefore\nclear that in the weak washout regime, $K_1\\ll 1$, the assumption of kinetic equilibrium is not a very good one,\nand, indeed, it was found that the lepton asymmetry computed with the above equations can differ in the weak\nwashout regime by\nup to 50\\% from the usual treatment with integrated equations~\\cite{Basboll:2006yx,HahnWoernle:2009qn,\nGarayoa:2009my}.\nHowever, as expected, in the strong washout regime, the above-mentioned approximations\nare very good and the integrated rate equations can be used safely.\n\n\\subsection{Non-equilibrium formalism}\n\nLeptogenesis is an intrinsic non-equilibrium problem. One of Sakharov's conditions is indeed that\na departure from thermal equilibrium is necessary to produce the baryon asymmetry. Therefore,\nit does not come as a surprise that recently a huge effort~\\cite{Buchmuller:2000nd,desimone,qke,qke2,garny,beneke,\nBeneke:2010dz,Garny:2011hg,Garbrecht:2011aw}\nwas made to understand leptogenesis\nwithin non-equilibrium quantum field theory, also known as the closed-time-path (CTP) or\nKeldysh-Schwinger formalism. This more rigorous, though far more complex, approach has the advantage\nof taking into account quantum effects that are completely missed by the usual approach,\nlike memory effects and off-shell effects. 
Moreover, it allows for the straightforward inclusion\nof flavour oscillations and decoherence~\\cite{Beneke:2010dz}, it has the advantage of being able to consistently\naccount for finite density corrections, and it has no double-counting problem.\n\nConcretely, in the non-equilibrium framework,\none needs to find the equations of motion for the two-point correlation functions\n(i.e. the propagators or Green's functions) of RH neutrinos\nand leptons from the general Schwinger-Dyson equation on the CTP,\n\\begin{eqnarray}\nS_{\\ell}^{-1}(x,y)&=& S^{-1}_{\\ell 0}(x,y) -\\Sigma_{\\ell}(x,y)\\, ,\\\\\nS^{-1}(x,y)&=& S^{-1}_{0}(x,y) -\\Sigma_{N}(x,y)\\, ,\n\\end{eqnarray}\nwhich are obtained from the variational principle on the effective action, a functional of the\nfull propagators $\\Delta_{\\phi}$, $S_{\\ell}$ and $S$ for the Higgs, lepton and RH neutrino, respectively.\nIn the above equations, the subscript $0$ denotes the free propagators, and $\\Sigma_{\\ell}$ and\n$\\Sigma_N$ are the self-energies of the leptons and RH neutrinos, respectively.\nNote that the self-energies are themselves functions of the propagators. For instance, the one-loop\nlepton self-energy depends on the RH neutrino and Higgs propagators:\n\\begin{equation}\n\\Sigma_{\\ell}^{\\alpha \\beta}(x,y)=-h_{\\alpha i}h_{j \\beta}^{\\dagger} P_R S^{ij}(x,y)P_L \\Delta_{\\phi}(y,x) \\, .\n\\end{equation}\nIt usually proves convenient\nto decompose any two-point function $D(x,y)$ into a \\emph{spectral} component, $D_{\\rho}$, and a \\emph{statistical}\ncomponent, $D_F$:\n\\begin{equation}\nD(x,y)=D_F(x,y)-{i\\over 2} {\\rm sgn}(x^0-y^0)D_{\\rho}(x,y) \\, .\n\\end{equation}\nConvolving the Schwinger-Dyson equations\nwith the full propagator, we finally arrive at a system of two coupled integro-differential\nequations, the so-called Kadanoff-Baym equations:\n\\begin{eqnarray}\ni\\slashed{\\partial}_x {S_{\\ell}}_F^{\\alpha\\beta}(x,y)&=&\\int_0^{x^0} d^4 z {\\Sigma_{\\ell}}_{\\rho}^{\\alpha\\gamma}(x,z)\n{S_{\\ell}}_F^{\\gamma\\beta}(z,y) \\nonumber\\\\\n&& -\\int_0^{y^0} d^4 z {\\Sigma_{\\ell}}_{F}^{\\alpha\\gamma}(x,z)\n{S_{\\ell}}_{\\rho}^{\\gamma\\beta}(z,y)\\, , \\\\\ni\\slashed{\\partial}_x {S_{\\ell}}_{\\rho}^{\\alpha\\beta}(x,y)&=&\\int_{y^0}^{x^0} d^4 z {\\Sigma_{\\ell}}_{\\rho}^{\\alpha\\gamma}(x,z)\n{S_{\\ell}}_{\\rho}^{\\gamma\\beta}(z,y) \\, .\n\\end{eqnarray}\nThe corresponding equations for the RH neutrino two-point function are obtained by changing $\\slashed{\\partial}_x \\to\n\\slashed{\\partial}_x -M$ and $S^{\\alpha \\beta} \\to\nS^{ij}$, where $\\alpha, \\beta= e,\\mu,\\tau$ are lepton flavours, and $i,j=1,2,3$ are RH neutrino flavours.\nNotice that the Kadanoff-Baym equations contain an integration over the entire history of the system, a `memory'\nintegral, which encodes all previous interactions with momentum and spin correlations. An attempt\nat studying memory effects in the context of leptogenesis~\\cite{desimone}\nfound that large effects could arise in the resonant limit in the weak washout regime.\nIn order to recover a Markovian description of\nthe system, characterized by uncorrelated initial states at every timestep, one has to perform a gradient expansion\n(for an alternative approach, see~\\cite{qke2}),\nrelying on the fact that the microscopic timescale $t_{\\rm mic}\\sim 1\/M_i$ is much smaller than the macroscopic timescales\n$t_{\\rm mac}\\sim 1\/\\Gamma_i,1\/H$. 
This is also known as the molecular chaos approximation.\n\nIn leptogenesis, one needs to compute the evolution of the lepton number density. The latter\nis given by the expectation value of the zeroth component of the lepton number current,\n\\begin{equation}\nj^{\\mu}_{L\\alpha\\beta}(x)=-{\\rm tr}\\left[\\gamma^{\\mu}S_{\\ell \\alpha\\beta}(x,x)\\right] \\, .\n\\end{equation}\nOne then obtains for the lepton number density\n\\begin{eqnarray}\nn_{L\\alpha\\beta}(t)&=& i\\int {d^3 p\\over (2\\pi)^3} \\int_0^t dt' \\int_0^{t'} dt'' {\\rm tr}\n\\left[{{\\Sigma_{\\ell}}_{\\rho}}_p^{\\alpha\\beta} (t',t'') {{S_{\\ell}}_F}_p(t'',t') \\right.\\nonumber \\\\\n&& \\left. -{{\\Sigma_{\\ell}}_F}_p^{\\alpha \\beta} (t',t'') {{S_{\\ell}}_{\\rho}}_p(t'',t')\\right] \\, ,\n\\end{eqnarray}\nafter switching to momentum space. Into this master equation, one then needs to\ninsert the equilibrium expressions for the lepton and\nHiggs propagators, as well as the non-equilibrium Majorana neutrino propagator.\nIn order to obtain a Boltzmann-like equation, two further simplifications are required.\nThe first is the quasi-particle ansatz, also known as the on-shell approximation, which states\n\\begin{equation}\nD_{\\rho}(X,p)=2\\pi\\, {\\rm sgn}(p^0)\\delta(p^2-m^2) \\, ,\n\\end{equation}\nwhere $X\\equiv (x+y)\/2$ is the central coordinate. Then, one can express\nthe statistical propagator in terms of the occupation number using the so-called Kadanoff-Baym ansatz,\n\\begin{equation}\nD_F(X,p)=\\left[f(X,p)+{1\\over 2}\\right]D_{\\rho}(X,p) \\, ,\n\\end{equation}\nwhich is chosen such that, in equilibrium, the correct \\emph{fluctuation dissipation relation} between $D_F^{\\rm eq}(p)$ and $D_{\\rho}^{\\rm eq}(p)$ \nis automatically obtained (see \\cite{qke} for more details).\nWith these approximations, one arrives at a Boltzmann equation which includes finite density\neffects. This equation would be similar to the kinetic equation~(\\ref{momentumke}), with however\nthe important new property:\n\\begin{equation}\\label{property}\n|\\mathcal{M}( N\\to \\ell \\Phi)|^2 = |\\mathcal{M}( \\ell \\Phi \\to N)|^2 =\n|\\mathcal{M}_0( N\\leftrightarrow \\ell \\Phi)|^2 (1+\\varepsilon(p,T)) \\,,\n\\end{equation}\nwhere the subscript `0' denotes tree level, and~\\cite{garny}\n\\begin{equation}\n\\varepsilon (p, T)=\\varepsilon \\times \\left(1+\\int {d \\Omega\\over 4\\pi}[f_{\\phi}^{\\rm eq}(E_1)\n-f_{\\ell}^{\\rm eq}(E_2)]\\right) \\, ,\n\\end{equation}\nwhere $\\varepsilon$ is the $C\\!P$ asymmetry defined in Eq.~(\\ref{CPas}), and\n$E_{1,2}={1\\over 2} \\left[(M^2+ p^2)^{1\\over 2} \\pm p \\cos\\theta\\right]$.\nNote that the above property in Eq.~(\\ref{property}) explicitly avoids\nthe double counting problem, which plagues the momentum-dependent description of the\nlast Subsection (as well as the usual integrated one).\nAs can be seen, the\n$C\\!P$ asymmetry parameter now includes finite density effects via a dependence on the Higgs and lepton\ndistribution functions. 
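\nTo get a feeling for the size of this finite-density correction, the ratio $\\varepsilon(p,T)\/\\varepsilon$ can be evaluated numerically. The short Python sketch below assumes, as is standard, massless equilibrium distributions, Bose-Einstein for the Higgs and Fermi-Dirac for the lepton; the values of $M$, $p$ and $T$ are purely illustrative:\n\\begin{verbatim}\n# Sketch: finite-density factor eps(p,T)\/eps for N -> l Phi decays.\nimport numpy as np\n\ndef correction(M, p, T, n=4001):\n    c = np.linspace(-1.0, 1.0, n)               # cos(theta) grid\n    E1 = 0.5 * (np.sqrt(M**2 + p**2) + p * c)   # Higgs energy\n    E2 = 0.5 * (np.sqrt(M**2 + p**2) - p * c)   # lepton energy\n    f_phi = 1.0 \/ (np.exp(E1 \/ T) - 1.0)        # Bose-Einstein\n    f_ell = 1.0 \/ (np.exp(E2 \/ T) + 1.0)        # Fermi-Dirac\n    # int dOmega\/(4 pi) reduces to the average over cos(theta)\n    return 1.0 + np.mean(f_phi - f_ell)\n\nfor T in (0.2, 1.0, 5.0):                       # in units of M = 1\n    print(T, correction(M=1.0, p=1.0, T=T))\n\\end{verbatim}\nFor $T\\ll M$ the factor tends to one, recovering the vacuum result, while for $T\\gtrsim M$ the Bose-enhanced Higgs contribution dominates and the correction grows.\n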
This finite-density result\nagrees with the one found using thermal field theory in the real time formalism, when the right convention\nis used \\cite{garny}, as well as with the one recently obtained in the imaginary time formalism~\\cite{Kiessig:2011fw}.\nIt disagrees, however, with an earlier work based on thermal field theory in the real time formalism, where\na term quadratic in the distribution functions was found~\\cite{giudice}.\n\n\n\nOne may then wonder whether the non-equilibrium formalism described above leads to substantial\ndifferences in the leptogenesis predictions for the baryon asymmetry. It turns out\nthat the modifications are very\nimportant (up to an order of magnitude) in the regime where $M_i\/T\\ll 1$, but they tend to vanish\nin the non-relativistic regime $M_i\/T \\gg 1$. In other words, major changes are expected mainly in the\nweak washout regime for a vanishing initial RH neutrino abundance, because in this case the asymmetry is produced\nin both regimes: the initial asymmetry is produced with the wrong sign at early times and is compensated\nby the right-sign asymmetry at later times. This was confirmed in~\\cite{beneke}, where it was found\nthat the sign of the final asymmetry could even be changed by finite density effects.\nIn all other cases, corrections are quantitatively small, in particular in the strong washout regime,\n$K_i\\gg 1$,\nwhere the asymmetry is produced exclusively in the non-relativistic regime~\\cite{garny,beneke}.\n\n\n\\subsubsection{Resonant limit}\n\nIt is interesting to study whether the above quantum kinetic formalism can provide some insight\ninto the $C\\!P$ asymmetry produced\nin the extreme quasi-degenerate limit, $|M_1 -M_2|\\ll M_{1,2}$. This formalism includes off-shell\neffects, as well as a proper treatment of coherent\ntransitions $N_i \\to N_j$, both of which should be important when $|M_1 -M_2|\\sim \\Gamma_{1,2}$. This\nproblem has recently attracted some attention~\\cite{Garny:2011hg,Garbrecht:2011aw}.\n\nIn \\cite{Garny:2011hg} it was found that the enhancement of the $C\\!P$ asymmetry\nderived within the Kadanoff-Baym (KB) formalism, \n\\begin{equation}\nR^{\\rm KB}={M_1 M_2 (M_2^2-M_1^2)\\over (M_2^2-M_1^2)^2+(M_1\\Gamma_1+M_2\\Gamma_2)^2} \\, ,\n\\end{equation}\ndiffers from the Boltzmann result, \n\\begin{equation}\nR^{\\rm BE}={M_1 M_2 (M_2^2-M_1^2)\\over (M_2^2-M_1^2)^2+(M_1\\Gamma_1-M_2\\Gamma_2)^2} \\, ,\n\\end{equation}\nspecifically because of a contribution from coherent RH neutrino oscillations. \nThis would prevent any additional enhancement when $\\Gamma_1 \\sim \\Gamma_2$, as predicted by\nthe inverse seesaw model. In such a case, the new region in the parameter space of successful\nleptogenesis explored in~\\cite{Blanchet:2009kk} is not available.\n\n\n\\subsubsection{Adding flavour}\n\nWe saw in Section 5 how the first kind of `quantum' effects were included,\nnamely flavour oscillations and decoherence, by switching from classical Boltzmann equations\nfor lepton number densities to evolution equations for the density matrix. For the lightest\nRH neutrino and two relevant lepton flavours, we presented Eq.~(\\ref{densmateq}). The CTP\nformalism can also account for a flavour matrix structure, through the\nlepton propagators and self-energies. Reassuringly, the structure of\nEq.~(\\ref{densmateq}) was also found within this formalism~\\cite{Beneke:2010dz}.\n\n\\section{Other corrections}\n\n\\subsection{Thermal effects}\n\nLeptogenesis occurs in the very hot thermal bath of the early Universe. 
Leptons and Higgs fields\nhave very fast interactions with the thermal bath due to their gauge couplings. This gives them an\neffective mass which is proportional to temperature~\\cite{Weldon:1982bn}. This effective mass allows processes\nwhich were otherwise kinematically forbidden to occur, such as $C\\!P$-violating Higgs decays to\nRH neutrinos, when $T\\gg M$. Thermal effects therefore have a direct impact on leptogenesis, and the\nfirst study to try and quantify them \\cite{giudice} employed a real time formalism and\nhard thermal loop resummation. It was\nfound that the $C\\!P$ violating parameter has a strong temperature dependence, and\nthat in the weak washout regime, there\ncould be important differences with the zero-temperature treatment. On the other hand, in the strong\nwashout regime, $K\\gg 1$, the usual results from a vacuum calculation are recovered with\ngood accuracy. Recently, a new study came out using the imaginary time formalism~\\cite{Kiessig:2011fw},\nand the $C\\!P$\nasymmetry parameter was found to differ from~\\cite{giudice}, and to agree with more recent attempts\nwith non-equilibrium quantum field theory (see previous Section).\n\n\\subsection{Spectator processes}\n\nChemical equilibrium holds among Standard Model particles in the early Universe above the electroweak phase\ntransition thanks to gauge interactions. On the other hand, Yukawa interactions only enforce some new conditions\n(between left- and right-handed fermions)\nwhen the corresponding interaction rates are fast enough, depending on the temperature and on the size of the Yukawa coupling. If one additionally imposes\nhypercharge\nneutrality and the effect of electroweak and strong sphaleron equilibrium, one arrives at a set\nof relations among the chemical potentials (or among the asymmetries) of leptons, Higgs and\nbaryons~\\cite{Nardi:2005hs}.\nThese processes are not directly involved in the generation of the asymmetry, hence the name `spectator\nprocesses'~\\cite{Buchmuller:2001sr},\nbut they have an indirect effect\non the final asymmetry through a modified washout. The first effect is the inclusion of the Higgs asymmetry\nas a new contribution to the washout. The second effect is that the asymmetry originally produced in leptons of a\nparticular flavour gets redistributed into the other flavours following precise relations~\\cite{bcst}.\nThis means\nthat the Boltzmann equations to solve for the different lepton flavours are now coupled to each other\nvia a flavour coupling matrix.\n\nWithin the $N_1$-dominated scenario,\nthe overall effect of spectator processes is usually subdominant compared to flavour effects, but it can change\nthe final result by as much as 40\\% depending on the temperature at which the asymmetry\nis produced~\\cite{Nardi:2005hs,JosseMichaux:2007zj}.\nSupersymmetry introduces new degrees of freedom, and new constraints can be derived. A full study was performed recently\nin~\\cite{Fong:2010qh} and the overall effect was found to be again of order one.\nWithin the $N_2$-dominated scenario, however, potentially much bigger effects are possible \\cite{Antusch:2010ms}.\n\n\\subsection{Scattering processes}\n\nIt can be shown that leptogenesis is well-described in the strong washout regime, $K\\gg 1$, by just\ndecays and inverse decays. The reason is that in this regime\nthe asymmetry is produced at relatively late times, at $T\\ll M$, with no dependence on the dynamics\nhappening at $T\\gtrsim M$ when scattering\nprocesses are important. 
However, they should be included\nin the weak washout regime for a vanishing initial RH neutrino abundance. For instance, $\\Delta L=1$ Higgs-mediated\nscatterings involving top quarks, such as $\\ell N \\leftrightarrow Q_3 \\bar{t}$, contribute to the washout\nand the $C\\!P$ asymmetry~\\cite{Abada:2006ea}, and their inclusion is crucial for a correct estimation\nof the final asymmetry in the weak washout regime~\\cite{Nardi:2007jp}. Scatterings involving gauge\nbosons, such as $\\ell N\\leftrightarrow \\Phi^{\\dagger}A$, have also been included, especially their contribution\nto the $C\\!P$ asymmetric source term~\\cite{Fong:2010bh}. It was found that the factorization of the $C\\!P$ asymmetry\nthat holds for decays and scatterings involving top quarks does not hold for scatterings involving gauge bosons.\nIn particular, there is a new source of lepton-number-conserving $C\\!P$ asymmetry.\n\n\n\\section{Testing new physics with leptogenesis}\n\nThe seesaw mechanism with three RH neutrinos extends the Standard Model by introducing eighteen new parameters.\nOn the other hand, low-energy\nneutrino experiments can only potentially test the nine parameters in the low-energy\nneutrino mass matrix $m_{\\nu}$. The nine high-energy parameters characterising the properties\nof the three RH neutrinos (the three masses and the six parameters\nencoded in the seesaw orthogonal matrix of Eq.~(\\ref{casas}), which, together with the light neutrino masses,\nessentially fix the three lifetimes and the three total $C\\!P$ asymmetries)\nare not tested by low-energy neutrino experiments.\nQuite interestingly, the requirement of successful leptogenesis,\n\\begin{equation}\n\\eta_B(m_{\\nu},\\Omega,M_i)=\\eta_{B}^{\\rm CMB} \\, ,\n\\end{equation}\nprovides an additional constraint on a combination\nof both low-energy and high-energy neutrino parameters.\nHowever, just one additional constraint would not seem sufficient to over-constrain the parameter\nspace and lead to testable predictions.\nIn spite of this observation, as we have seen, in the vanilla leptogenesis scenario\none can derive an upper bound on the neutrino masses. The reason is that, within this scenario,\nthe dependence of $\\eta_B$ on the six parameters related to the properties of the two heavier RH neutrinos\ncancels out. 
In this way the asymmetry depends on a reduced\nsubset of high-energy parameters (just three instead of nine).\nAt the same time, the final asymmetry gets strongly suppressed when\nthe absolute neutrino mass scale is larger than the atmospheric neutrino mass scale.\nFor all these reasons, by maximising the final asymmetry over the high-energy parameters and\nby imposing successful leptogenesis, an upper bound on the neutrino masses is found.\n\nWhen flavour effects are considered, the vanilla leptogenesis\nscenario holds only under very special conditions, as we have seen.\nIn general, the final asymmetry depends also on the parameters in the leptonic mixing matrix.\nTherefore, accounting for flavour effects,\none could naively hope to derive definite predictions on the leptonic mixing matrix as well,\nin addition to the upper bound on the absolute neutrino mass scale.\nHowever, the situation turns out to be quite the opposite.\nThis is because the final asymmetry depends, in general, also on the six parameters\ndescribing the properties of the two heavier RH neutrinos,\nwhich cancelled out in the calculation of the final asymmetry in the vanilla\nscenario, and this comes at the expense of predictability.\n\nFor this reason, in a general scenario with three RH neutrinos and flavour effects included, it is not possible\nto derive any prediction on low-energy neutrino parameters.\nAs we discussed, even whether the upper bound $m_i\\lesssim 0.1$~eV on neutrino masses still holds\nis an open issue; a precise value seems to depend on a careful account of\nmany different subtle effects, and in particular it necessarily requires a density matrix formalism.\n\nIn order to gain predictive power, two possibilities have been explored in the past years.\n\nA first possibility is to consider non-minimal scenarios giving rise to additional phenomenological constraints.\nFor example, as discussed in Section 6, the inverse seesaw model, which technically can still be regarded as part\nof the minimal type I seesaw model, allows for non-trivial phenomenology at low energy beyond standard neutrino\noscillations, such as non-unitarity effects and observable lepton flavour violation. An experimental\nobservation of any of these signatures would be of great value to reduce the freedom in\nthe choice of seesaw parameters.\nIn recent years, during the Large Hadron Collider era, the possibility that, within a non-minimal version of the seesaw mechanism,\none can have successful low-scale leptogenesis together with collider signatures has also been intensively explored.\nIt has also been noticed that\nin the supersymmetric version of the seesaw, the branching ratios of lepton-flavour-violating processes\nor electric dipole moments are typically enhanced, and hence the existing experimental bounds\nfurther constrain the seesaw parameter space \\cite{Pascoli:2003uh,Dutta:2003my}.\n\nA second possibility is to search for a reasonable\nscenario where the final asymmetry depends only on a reduced set of independent parameters\nover-constrained by the successful leptogenesis condition, as in the vanilla scenario. \nFrom this point of view, the account of\nflavour effects has opened very interesting new opportunities, or even re-opened old attempts\nthat fail within a strict unflavoured scenario. 
Let us briefly discuss some\nof the main ideas that have been proposed within this second possibility, the first one being\ncovered elsewhere in this Issue~\\cite{hambye}.\n\n\\subsection{Two-RH neutrino model}\n\nA phenomenological possibility that attracted great attention is the two-RH\nneutrino model \\cite{Frampton:2002qc},\nwhere the third RH neutrino is either absent or\neffectively decoupled in the seesaw formula.\nThis necessarily happens when $M_3\\gg 10^{14}\\,{\\rm GeV}$, implying that the\nlightest left-handed neutrino mass $m_1$ has to vanish. It can be shown that the number of parameters\ndecreases from 18 to 11 in this case.\nIn particular, the orthogonal matrix is parameterised in terms of just one complex angle.\n\nIn leptogenesis the two-RH neutrino model has traditionally been considered as a sort of benchmark case\nfor the $N_1$-dominated scenario, where the final asymmetry is dominated by the contribution\nfrom the lightest RH neutrino \\cite{Abada:2006ea,Blanchet:2006dq,petcovmolinaro}. However, recently, as we anticipated already, it has been\nshown that there are some regions in the one complex angle parameter space\nthat are $N_2$-dominated~\\cite{Antusch:2011nz} and that\ncorrespond to so-called light sequential dominated models \\cite{King:2003jb}.\n\nIt should be said that even though the number of parameters is highly reduced,\nin a general two-RH neutrino model it is still not possible to make predictions on the low-energy neutrino\nparameters. To this end, one should further reduce the parameter space, for example by assuming texture zeros\nin the neutrino Dirac mass matrix.\n\n\\subsection{$SO(10)$-inspired models}\n\n In order to gain predictive power, one can\nimpose conditions within some model of new physics embedding the seesaw mechanism.\nAn interesting example is represented by the `$SO(10)$-inspired leptogenesis scenario' \n\\cite{buchplumi,Branco:2002kt,Akhmedov:2003dg},\nwhere $SO(10)$-inspired conditions are imposed on the neutrino Dirac mass matrix $m_D$.\nIn the basis where the charged lepton mass matrix and the Majorana mass matrix are diagonal,\nin the bi-unitary parametrisation, one has $m_D = V_L^{\\dagger}\\,D_{m_D}\\,U_R$,\nwhere $D_{m_D}\\equiv {\\rm diag}({\\lambda_1,\\lambda_2,\\lambda_3})$ is the diagonalised neutrino Dirac mass matrix\nand the mixing angles in $V_L$ are of the order\nof the mixing angles in the CKM matrix $V_{CKM}$.\nThe $U_R$ and the three $M_i$ can then be calculated from $V_L$, $U$ and $m_i$,\nsince the seesaw formula Eq.~(\\ref{seesaw}) directly leads to the Takagi factorisation of\n$M^{-1} \\equiv D^{-1}_{m_D}\\,V_L\\,U\\,D_m\\,U^T\\,V_L^T\\,D^{-1}_{m_D}$,\nor explicitly $M^{-1} = U_R\\,D_M^{-1}\\,U_R^T$.\n\nIn this way the RH neutrino masses and the matrix $U_R$ are expressed in terms of the\nlow-energy neutrino parameters, of the eigenvalues $\\lambda_i$ and of the parameters in $V_L$.\nTypically one obtains a very hierarchical spectrum, $M_1 \\sim 10^{5}\\,{\\rm GeV}$ and $M_{2}\\sim 10^{11}\\,{\\rm GeV}$,\nso that the asymmetry produced from the lightest RH neutrino decays is by far unable to explain the\nobserved asymmetry \\cite{Branco:2002kt}. \n\nHowever, when the $N_2$-produced asymmetry is taken into account,\nsuccessful ($N_2$-dominated) leptogenesis can be attained \\cite{SO10lep1}. 
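\nAs a concrete illustration of this construction, the heavy-neutrino masses can be obtained numerically: since $M^{-1}$ is complex symmetric, the diagonal of its Takagi factorisation coincides with its singular values. The following Python sketch carries out this step; all input values (the $\\lambda_i$, the light masses and the unitary matrices) are arbitrary placeholders rather than fitted parameters:\n\\begin{verbatim}\n# Sketch: heavy-neutrino masses in the SO(10)-inspired construction.\n# Build Minv = D_mD^-1 VL U Dm U^T VL^T D_mD^-1 (complex symmetric);\n# its Takagi diagonal, i.e. its singular values, gives 1\/M_i.\nimport numpy as np\nnp.random.seed(0)\n\nlam = np.array([1e-3, 0.5, 100.0])       # placeholder lambda_i (GeV)\nm   = np.array([1e-12, 9e-12, 5e-11])    # placeholder light masses (GeV)\nQ, _ = np.linalg.qr(np.random.randn(3, 3) + 1j*np.random.randn(3, 3))\nU, VL = Q, np.eye(3)                     # stand-ins for the PMNS and V_L\n\nDinv = np.diag(1.0 \/ lam)\nMinv = Dinv @ VL @ U @ np.diag(m) @ U.T @ VL.T @ Dinv\n\nsv = np.linalg.svd(Minv, compute_uv=False)\nprint('M_i =', np.sort(1.0 \/ sv))        # hierarchical RH masses\n\\end{verbatim}\nWith hierarchical $\\lambda_i$ of the order of the up-type quark masses, this procedure typically returns the strongly hierarchical spectrum quoted above.\n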
\nIn this case, imposing the leptogenesis bound\nand considering that the final asymmetry does not depend on $\\lambda_1$ and on $\\lambda_3$, one obtains\nconstraints on all low-energy neutrino parameters, which have some dependence on the\nparameter $\\lambda_2$, typically parameterised in terms of $\\alpha_2\\equiv \\lambda_2\/m_c$, where $m_c$\nis the charm quark mass. Some examples of the constraints on the low-energy neutrino parameters\nare shown in Fig.~3. They have been obtained by scanning over the $2\\sigma$ ranges of the allowed values of the\nlow-energy parameters and over the parameters\nin $V_L$, assumed to be $I< V_L < V_{CKM}$, and\nfor three values of $\\alpha_2=5, 4, 1$.\nIt is particularly interesting that when the independence of the initial conditions\nis imposed, negative values of $J_{CP}$ seem to be favoured \\cite{prep}, establishing\na connection between the sign of $J_{CP}$ and that of the matter-antimatter asymmetry.\n\n\nA supersymmetric version of this scenario including the renormalization group evolution\nof all the relevant couplings was also studied in~\\cite{Blanchet:2010td}, and\nincluding a type II contribution to the seesaw mechanism\nfrom a triplet Higgs in left-right symmetric models in~\\cite{Abada:2008gs}.\n\\begin{figure}\n\\begin{center}\n \\mbox{\\epsfig{figure=m1th13NOGLOBAL.eps,width=55mm,height=55mm}}\n \\vspace*{-20pt}\n \\mbox{\\epsfig{figure=m1th23NOGLOBAL.eps,width=55mm,height=55mm}} \\\\\n \\vspace*{15pt}\n \\mbox{\\epsfig{figure=th13th23NOGLOBAL.eps,width=55mm,height=55mm}}\n \\vspace*{-5pt}\n \\mbox{\\epsfig{figure=th13deltaNOGLOBAL.eps,width=55mm,height=55mm}}\n\\caption{Constraints on some of the low-energy neutrino parameters in the $SO(10)$-inspired\n scenario for normal ordering and $I< V_L < V_{CKM}$~\\cite{DiBari:2010ux}. The yellow, green and red points\n correspond respectively to $\\alpha_2=5,4,1$.}\n\\end{center}\n\\end{figure}\n\nTogether with strongly hierarchical RH neutrino mass patterns,\nthere also exist `level crossing' regions where at least two RH neutrino masses \nare arbitrarily close to each other, in a way that the lightest RH neutrino mass is uplifted and \nthe $C\\!P$ asymmetries are resonantly enhanced to some level. Also in this case, imposing the successful\nleptogenesis condition, one then obtains\nconditions, though quite fine-tuned ones, \non the low-energy neutrino parameters \\cite{Akhmedov:2003dg}. \n\n\\subsection{Discrete flavour symmetries}\n\n An account of heavy neutrino flavour effects is also important when leptogenesis is embedded within\ntheories that try to explain tribimaximal mixing for the leptonic\nmixing matrix via flavour symmetries. It has been shown in particular that,\nif the symmetry is unbroken, then the $C\\!P$ asymmetries of the RH neutrinos would exactly\nvanish. On the other hand, when the symmetry is broken, for the naturally expected\nvalues of the symmetry breaking parameters, the observed\nmatter-antimatter asymmetry can be successfully reproduced \n\\cite{Jenkins:2008rb,Bertuzzo:2009im,Hagedorn:2009jy,aristizabal}.\nIt is interesting that in a minimal picture based on an $A4$ symmetry, one has a RH neutrino mass spectrum with\n$10^{15}\\,{\\rm GeV} \\gtrsim M_3 \\gtrsim M_2 \\gtrsim M_1 \\gg 10^{12}\\,{\\rm GeV}$. 
Therefore, all the asymmetry is produced in the unflavoured regime, and the mass spectrum\nis only mildly hierarchical (it actually has the same kind of hierarchy as the light neutrinos).\nAt the same time, the small symmetry breaking imposes\na quasi-orthogonality of the three lepton quantum states produced in the RH neutrino\ndecays. Under these conditions the washout of the asymmetry produced by one RH neutrino species\nfrom the inverse decays of a lighter RH neutrino species is essentially negligible. The final\nasymmetry then receives a non-negligible contribution from the decays of all three RH neutrino species.\n\n\\subsection{Supersymmetric models}\n\nWithin a supersymmetric vanilla framework the final asymmetry\nis only slightly modified compared to the non-supersymmetric calculation \\cite{proceedings}.\nHowever, supersymmetry introduces a conceptually important issue: the stringent\nlower bound on the reheat temperature, $T_{\\rm reh}\\gtrsim 10^{9}\\,{\\rm GeV}$,\nis typically only marginally compatible with the upper bound\nfrom the avoidance of the gravitino problem, $T_{\\rm reh}\\lesssim 10^{6-10}\\,{\\rm GeV}$, with the\nexact value depending on the parameters of the model \\cite{Khlopov:1984pf,Ellis:1984eq,Kawasaki:2008qe}.\nIt is quite remarkable\nthat the solution of such an issue inspired an intense research activity on supersymmetric\nmodels able to reconcile minimal leptogenesis and the gravitino problem. Of course, on the\nleptogenesis side, some of the discussed extensions beyond the vanilla scenario that relax the RH neutrino\nmass bounds also relax the $T_{\\rm reh}$ lower bound. However, notice that in the $N_2$-dominated\nscenario, while the lower bound on $M_1$ simply disappears, there is still a lower bound\non $T_{\\rm reh}$ that is even more stringent, $T_{\\rm reh}\\gtrsim 6\\times 10^{9}\\,{\\rm GeV}$ \\cite{DiBari:2005st}.\n\nAs we mentioned already, with flavour effects one has the possibility of relaxing the lower bound\non $T_{\\rm reh}$ if a mild hierarchy in the RH neutrino masses\nis allowed together with a mild cancellation in the seesaw formula \\cite{bounds}.\nHowever, for most models, such as sequential dominated models \\cite{King:2003jb},\nthis solution does not work. A major modification introduced by supersymmetry\nis that the critical value of the mass of the decaying RH neutrinos\nsetting the transition from an unflavoured regime to a two-flavour regime,\nand from a two-flavour regime to a three-flavour regime, is enhanced by a factor\n$\\tan^2\\beta$~\\cite{Abada:2006fw,Antusch:2006cw}.\nThis has a practical relevance in the calculation of the asymmetry within supersymmetric models,\nand it is quite interesting that leptogenesis becomes sensitive to such a relevant\nsupersymmetric parameter. Recently, a refined analysis,\nmainly discussing how the asymmetry is distributed among all particle species,\nhas revealed several subtle effects in the calculation of the final asymmetry\nwithin supersymmetric models, finding corrections below ${\\cal O}(1)$ \\cite{Fong:2010qh}.\n\n\\section{Future prospects for testing leptogenesis}\n\nIn 2011 two important experimental results were announced that,\nif confirmed, can be interpreted as positive for future tests of leptogenesis.\n\nThe first result is the discovery of a non-vanishing\n$\\theta_{13}\\simeq 9^{\\circ}$ (cf. 
Section 2), now confirmed\nby various experiments (both long-baseline and reactor).\nAn important consequence of the measurement of such a `large'\n$\\theta_{13}$ is the encouraging prospect for the discovery\nof the neutrino mass ordering (either normal or inverted)\nin current or near-future neutrino oscillation experiments such as T2K and NO$\\nu$A. \nFor instance, if the ordering is found\nto be inverted in these experiments, we should expect a signal in $0\\nu\\beta\\beta$ experiments\nin the next decade. If no such signal is found, the Majorana nature of neutrinos, and therefore\nthe seesaw mechanism, would be ruled out.\n\nMoreover, such a large value of $\\theta_{13}$ opens the possibility of a measurement of\nthe neutrino oscillation $C\\!P$-violating invariant\n$J_{CP}\\propto \\sin\\theta_{13}\\,\\sin\\delta$ in the coming years.\nThis would have some direct model-independent consequences.\nIf a non-vanishing and close-to-maximal value is found ($|\\sin\\delta| \\sim 1$),\neven the small contribution to the final asymmetry uniquely stemming\nfrom a Dirac phase could be sufficient to reproduce the observed final\nasymmetry.\nMore generally, the presence of $C\\!P$ violation at low energies would\ncertainly support the presence of $C\\!P$ violation at high energies as well, since,\ngiven a generic theoretical model predicting the neutrino Dirac mass matrix $m_D$,\nthese are in general both present.\n\n\nA more practical relevance of such a measurement of $J_{CP}$, even if in the end it\nindicates a vanishing value within the experimental error, is that it\nwill provide an additional constraint on specific models embedding the seesaw,\nsatisfying successful leptogenesis and able to make predictions on the low-energy neutrino parameters.\nIn this way, the expected improvements in low-energy neutrino experiments, made easier by\na non-vanishing $\\theta_{13}$, will test the models more and more stringently.\n\nThe second important experimental result of 2011 is the hint for\nthe existence of the Higgs boson reported by the ATLAS and CMS collaborations.\nThere are at least two reasons why this hint can also be interpreted as positive for leptogenesis.\nFirst, because the whole leptogenesis mechanism relies on\nthe Yukawa coupling between the Higgs, lepton and RH neutrino. Second,\nbecause the measurement\nof the Higgs boson mass could in the future open opportunities for additional\nphenomenological information to be imposed on leptogenesis scenarios,\nfor example relying on the requirement of Standard Model electroweak \nvacuum stability \\cite{giudiceetal}.\nIn this respect, it is interesting to note that the current value of 125 GeV is compatible \nwith reheating temperatures as high as $10^{15}\\,{\\rm GeV}$,\nas needed by thermal leptogenesis, and it is also compatible with the requirement that the\nYukawa couplings of the RH neutrinos do not destabilise the Higgs potential \nwhen the RH neutrino masses $M_i$, assumed to be quasi-degenerate, are smaller than $10^{14}$~GeV.\n\nWhat are other possible future experimental developments\nthat could further support the idea of leptogenesis?\nImproved information from absolute neutrino mass scale experiments,\nboth on the sum of the neutrino masses from cosmology and on $m_{ee}$\nfrom $0\\nu\\beta\\beta$ experiments, could be crucial. 
For instance,\nif cosmology provides a measurement of the neutrino mass\n$0.01\\,{\\rm eV} \\lesssim m_1 \\lesssim 0.2\\,{\\rm eV}$\nin the next few years, as is reasonable to expect, then a positive signal\nin $0\\nu\\beta\\beta$ experiments must be found; otherwise, Majorana neutrinos and the seesaw\nmechanism will be disfavoured.\nOn the other hand, in case of a positive signal in $0\\nu\\beta\\beta$ experiments, we\nwill be able to say that minimal leptogenesis with hierarchical heavy neutrino\nmasses works in an optimal neutrino mass window\n$10^{-3}\\,{\\rm eV}\\lesssim m_1 \\lesssim 0.1\\,{\\rm eV}$ \\cite{window,bounds,problem}, where\nindependence of the initial conditions is more easily obtained and where we know that successful\nleptogenesis can be safely obtained from existing calculations using Boltzmann equations.\nMoreover, a determination of the allowed region in the plane $m_{ee}$--$\\sum_i\\,m_i$\ncould provide an additional test of specific leptogenesis scenarios, such as, for example, \n$SO(10)$-inspired scenarios.\n\nIn conclusion, we are living in an exciting time in which new experimental information\nis arriving, which is providing and will continue to provide crucial tests for new\nphysics models, in particular leptogenesis, in the near future.\n\n\\section*{Acknowledgements}\n\nWe wish to thank A.~Abada, W.~Buchm\\\"{u}ller, S.~Davidson, M.~Drewes, T.~Hambye, \nD.A.~Jones, S.F.~King, L.~Marzola and A.~Pilaftsis for useful comments and discussions.\nWe also wish to thank A.~Hohenegger for enlightening discussions about the closed-time-path formalism.\nPDB also wishes to thank the Laboratoire de Physique Th\\'{e}orique, Universit\\'{e} de Paris-Sud 11 (Orsay), \nfor the warm hospitality during the completion of this work.\nSB acknowledges support from the Swiss National Science Foundation, under the Ambizione grant\nPZ00P2\\_136947. 
PDB acknowledges financial support from the NExT\/SEPnet Institute, \nfrom the STFC Rolling Grant ST\/G000557\/1 and from the EU FP7 ITN INVISIBLES \n(Marie Curie Actions, PITN- GA-2011- 289442).\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\subsubsection{Computation of $\\PLQ{\\phi_{il}}$}\n\n\n\\xhdr{Variational M-Step}\nIn the E-step, we introduced the variational distribution $Q(F)$ parameterized\nby $\\phi$ and approximated the posterior distribution $P(F| A, \\mu, \\Theta)$ by\nmaximizing $\\mathcal{L}_{Q}(\\mu, \\Theta)$ over $\\phi$.\nIn the M-step, we now fix $Q(F)$,\n\\emph{i.e.}\\xspace, fix the variational parameters $\\phi$, and update the model parameters\n$\\mu$ and $\\Theta$ to maximize $\\mathcal{L}_{Q}$.\n\n\nFirst, in order to maximize $\\mathcal{L}_{Q}(\\mu, \\Theta)$ with respect to $\\mu$, we need to maximize $\\mathcal{L}_{\\mu_{l}} = \\sum_{i} \\EXATTR{il}{\\log P(F_{il} |\n\\mu_{l})}$ for each $\\mu_{l}$.\nBy the definitions in \\EQ{eq:jointlik} and (\\ref{eq:qdef}), we obtain\n\\[\n \\mathcal{L}_{\\mu_{l}} = \\sum_{i} \\left( \\phi_{il} \\log \\mu_{l} + (1-\\phi_{il}) \\log (1-\\mu_{l}) \\right) \\,.\n\\]\nThen $\\mathcal{L}_{\\mu_{l}}$ is maximized when\n\\[\n \\frac{\\partial \\mathcal{L}_{\\mu_{l}}}{\\partial \\mu_{l}} = \\sum_{i} \\left( \\frac{\\phi_{il}}{\\mu_{l}} - \\frac{1-\\phi_{il}}{1-\\mu_{l}} \\right) = 0 \\, ,\n\\]\nwhich gives $\\mu_{l} = \\frac{1}{N} \\sum_{i} \\phi_{il}$.\n\nSecond, to maximize $\\mathcal{L}_{Q}(\\mu, \\Theta)$ with respect to $\\Theta_{l}$, we\nmaximize $\\mathcal{L}_{\\Theta} = \\EXATTR{}{\\log P(A, F | \\mu, \\Theta) - \\log Q(F)}$.\nWe first obtain the gradient\n\\begin{align}\n \\nabla_{\\Theta_{l}} \\mathcal{L}_{\\Theta} = \\sum_{i, j} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(A_{ij} |F_{i}, F_{j}, \\Theta)}\n \\label{eq:mstep-theta}\n\\end{align}\nand then use a gradient-based method to optimize $\\mathcal{L}_{Q}(\\mu, \\Theta)$ with regard to $\\Theta_{l}$.\nAlgorithm~\\ref{alg:mstep} gives the details of optimizing $\\mathcal{L}_{Q}(\\mu, \\Theta)$ over $\\mu$ and $\\Theta$.\n\n\n\\xhdr{Speeding up \\textsc{MagFit}\\xspace}\nSo far we described how to apply the variational EM algorithm to MAG model parameter estimation. However, both the E-step and the M-step are infeasible when the number of nodes $N$ is large. In particular, in the E-step, for each update of $\\phi_{il}$, we have to\ncompute the expected log-likelihood value of every entry in the $i$-th row and\ncolumn of the adjacency matrix $A$.\nIt takes $O(LN)$ time to do this, so overall $O(L^2 N^2)$ time is needed to update all $\\phi_{il}$. Similarly, in\nthe M-step, we need to sum up the gradient of $\\Theta_{l}$ over every pair of\nnodes (as in \\EQ{eq:mstep-theta}). Therefore, the M-step requires $O(L\nN^2)$ time, so it takes $O(L^2 N^2)$ time to run a single iteration of EM. Quadratic dependency on the number of attributes $L$ and the number of nodes $N$ is infeasible for the size of the networks that we aim to work with here.\n\nTo tackle this, we make the following observation. Note that both\n\\EQ{eq:mstep-theta} and the computation of $\\PLQ{\\phi_{il}}$ involve the sum of\n\\rev{expected values of the log-likelihood or the gradient.}\nIf we can quickly approximate this sum of expectations,\nwe can dramatically reduce the computation time. 
As real-world networks are\nsparse in the sense that most of the edges do not exist in the network, we can\nbreak the summation into two parts --- a fixed part that ``pretends'' that the\nnetwork has no edges and the adjustment part \\rev{that} takes into account the edges\nthat actually exist in the network.\n\n\nFor example, in the M-step we can separate \\EQ{eq:mstep-theta} into two parts,\nthe first term\\hide{left part} that considers an empty graph and the second\nterm\\hide{the adjustment part} that accounts for the edges that actually\noccur in the network:\n\\begin{small}\n\\begin{align}\n & \\nabla_{\\Theta_{l}} \\mathcal{L}_{\\Theta} =\n \\sum_{i, j} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(0 |F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n & \\quad + \\sum_{A_{ij} = 1} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(1 |F_{i}, F_{j}, \\Theta) - \\log P(0 |F_{i}, F_{j}, \\Theta)} \\,.\n \\label{eq:fastmstep1}\n\\end{align}\n\\end{small}\nNow we approximate the first term, which computes the gradient pretending that\nthe graph $A$ has no edges:\n\\begin{align}\n & \\sum_{i, j} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n & = \\nabla_{\\Theta_{l}} \\mathbb{E}_{Q_{i, j}}[\\sum_{i, j} \\log P(0 | F_{i}, F_{j}, \\Theta)] \\nonumber \\\\\n & \\approx \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{N(N-1) \\mathbb{E}_{F} [\\log P(0 | F, \\Theta)]} \\nonumber \\\\\n & = \\nabla_{\\Theta_{l}} N(N-1) \\mathbb{E}_{F} [\\log P(0 | F, \\Theta)] \\,.\n \\label{eq:fastmstep2}\n\\end{align}\nSince each $F_{il}$ follows the Bernoulli distribution with parameter\n$\\mu_{l}$, \\EQ{eq:fastmstep2} can be\n\\rev{computed in $O(L)$ time.}\nAs the second term in \\EQ{eq:fastmstep1} requires only $O(LE)$ time, the computation\ntime of the M-step is reduced from $O(LN^2)$ to\n\\rev{$O(LE)$.}\nSimilarly, we reduce the computation time of the E-step from $O(L^2 N^2)$ to\n$O(L^2 E)$\n(see the Appendix for details).\nThus, overall, we reduce the computation time of \\textsc{MagFit}\\xspace from $O(L^2 N^2)$\n\\rev{to $O(L^2 E)$.} \n\n\\subsection{Synthetic Network}\n\n\\xhdr{Convergence of \\textsc{MagFit}\\xspace}\nFirst, we briefly evaluate the convergence of the \\textsc{MagFit}\\xspace algorithm. For this\nexperiment, we use synthetic MAG networks with $N = 1024$ and $L = 4$.\nFigure~\\ref{fig:convll} illustrates that the objective function $\\mathcal{L}_{Q}$, \\emph{i.e.}\\xspace, the\nlower bound of the log-likelihood, nicely converges with the number of EM\niterations. While the log-likelihood converges, the model parameters $\\mu$ and\n$\\Theta$ also nicely converge. Figure~\\ref{fig:convmu} shows the convergence of\n$\\mu_1, \\dots, \\mu_4$, while Fig. \\ref{fig:convtheta} shows the convergence of\nthe entries $\\Theta_l[0,0]$ for $l=1,\\dots,4$. 
Generally, in 100 iterations of EM,\nwe obtain stable parameter estimates.\n\nWe also compare the runtime of the fast \\textsc{MagFit}\\xspace to the naive version where we\ndo not use speedups for the \n\\rev{algorithm.}\nFigure~\\ref{fig:scale} shows the\nruntime as a function of the number of nodes in the network.\nThe runtime of the naive algorithm scales quadratically, $O(N^2)$, while\nthe fast version runs in near-linear time.\n\\rev{For example, on a 4,000 node network, the fast algorithm runs about 100 times faster than the naive one.}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[Convergence of $\\mathcal{L}_{Q}(\\mu, \\Theta)$\\label{fig:convll}]{\\includegraphics[width=0.235\\textwidth]{FIG\/PowL4-Conv-LL.eps}}\n\\subfigure[Convergence of $\\mu_{l}$'s \\label{fig:convmu}]{\\includegraphics[width=0.235\\textwidth]{FIG\/PowL4-Conv-Mu.eps}}\n\\subfigure[Convergence of $\\Theta_{l}{[0, 0]}$'s\\label{fig:convtheta}]{\\includegraphics[width=0.235\\textwidth]{FIG\/PowL4-Conv-Mtx.eps}}\n\\subfigure[\\rev{Run time}\\label{fig:scale}]{\\includegraphics[width=0.235\\textwidth]{FIG\/Scale2.eps}}\n\\vspace{-3mm}\n\\caption{{Parameter convergence and scalability.}}\n\\vspace{-5mm}\n\\end{figure}\n\nBased on these experiments, we conclude that the variational EM gives robust\nparameter estimates. We note that the \\textsc{MagFit}\\xspace optimization problem is\nnon-convex; however, in practice we observe fast convergence and good fits.\nDepending on the initialization, \\textsc{MagFit}\\xspace may converge to different solutions, but\nin practice the solutions tend to have comparable log-likelihoods and consistently\ngood fits. Also, the method nicely scales to networks with up to a hundred\nthousand nodes.\n\n\\xhdr{Experiments on real data}\nWe proceed with experiments on real datasets. We use the LinkedIn social\nnetwork~\\cite{jure08microevol} at the time in its evolution when it had $N =$\n4,096 nodes and $E =$ 10,052 edges.\nWe also use the Yahoo!-Answers question answering social network, again from\nthe time when the network had $N =$ 4,096, $E =$ 5,678~\\cite{jure08microevol}.\nFor our experiments we choose $L = 11$, which is roughly $\\log N$, as it has\nbeen shown that this is the optimal choice for $L$~\\cite{mh10mag}.\n\nNow we proceed as follows. Given a real network $A$, we apply \\textsc{MagFit}\\xspace to\nestimate the MAG model parameters $\\hat{\\Theta}$ and $\\hat{\\mu}$. Then, given these\nparameters, we generate a synthetic network $\\hat{A}$ and compare how well\nthe synthetic $\\hat{A}$ mimics the real network $A$.\n\n\\xhdr{Evaluation} To measure the level of agreement between the synthetic $\\hat{A}$\nand the real $A$, we use several different metrics. First, we evaluate how well\n$\\hat{A}$ captures the structural properties, like degree distribution and\nclustering coefficient, of the real network $A$. 
We consider the following\nnetwork properties:\n\n\\vspace{-3mm}\n\\begin{itemize}\n \\itemsep-3pt \\topsep-5pt \\partopsep-5pt\n \\item {\\em In\/Out-degree distribution (InD\/OutD)} is a histogram of the\n number of incoming and outgoing links of a node.\n \n \\item {\\em Singular values (SVal)} indicate the singular values of the\n adjacency matrix versus their rank.\n \n \\item {\\em Singular vector (SVec)} represents the distribution of\n components in the left singular vector associated with the largest\n singular value.\n \n \\item {\\em Clustering coefficient (CCF)} represents the degree versus the\n average (local) clustering coefficient of nodes of a given\n degree~\\cite{watts98smallworld}.\n \\item {\\em Triad participation (TP)} indicates the number of triangles\n that a node is adjacent to. It measures the transitivity in networks.\n\\end{itemize}\n\\vspace{-3mm}\n\nSince distributions of the above quantities are generally heavy-tailed, we plot\nthem in terms of complementary cumulative distribution functions ($P(X>x)$ as a\nfunction of $x$). Also, to indicate the scale, we do not normalize the\ndistributions to sum to 1.\n\nSecond, to quantify the discrepancy of network properties between real and\nsynthetic networks, we use a variant of the Kolmogorov-Smirnov (KS) statistic and\nthe $L2$ distance between different distributions.\nThe original KS statistic is not appropriate here since, if the distribution\nfollows a power law, it is usually dominated by the\nhead of the distribution. We thus consider the following variant of the KS\nstatistic: $\\textit{KS}(D_{1}, D_{2}) = \\max_{x} | \\log D_{1}(x) - \\log\nD_{2}(x)|$~\\cite{mh11kronem}, where $D_{1}$ and $D_{2}$ are two complementary\ncumulative distribution functions.\nSimilarly, we also define a variant of the $L2$ distance on the log-log scale,\n$\\textit{L2}(D_{1}, D_{2}) = \\sqrt{\\frac{1}{\\log b - \\log a} \\left( \\int_{a}^{b}\n\\left(\\log D_{1}(x) - \\log D_{2}(x)\\right)^{2} \\, d (\\log x)\\right)} $, where\n$[a, b]$ is the support of the distributions ${D_{1}}$ and ${D_{2}}$. Therefore, we\nevaluate the performance with regard to the recovery of the network properties\nin terms of the $\\textit{KS}$ and $\\textit{L2}$ statistics (a short computational\nsketch of all the evaluation metrics is given below, after Table~\\ref{tbl:linkedinks}).\n\nLast, since MAG generates a probabilistic adjacency matrix $P$, we also\nevaluate how well $P$ represents a given network $A$. We use the following two\nmetrics:\n\n\\vspace{-3mm}\n\\begin{itemize}\n \\itemsep-3pt \\topsep-5pt \\partopsep-5pt\n \\item {\\em Log-likelihood (\\textit{LL})} measures the likelihood that\n the probabilistic adjacency matrix $P$ generates network $A$: $LL=\n \\sum_{i j} \\log (P_{ij}^{A_{ij}} (1-P_{ij})^{1-A_{ij}})$.\n \\item {\\em True Positive Rate Improvement (\\textit{TPI}\\xspace)} represents the\n improvement of the true positive rate over a random graph: $TPI =\n \\sum_{A_{ij}=1}P_{ij} \/ \\frac{E^2}{N^2}$. 
\\textit{TPI}\\xspace~indicates how\n much more probability mass is put on the edges compared to a random\n graph (where each edge occurs with probability $E\/N^2$).\n\\end{itemize}\n\\vspace{-3mm}\n\n\n\\begin{figure}[t]\n \\centering\n \\subfigure[In-degree]{\\includegraphics[width=0.235\\textwidth]{FIG\/LinkedIn4K-7-InDeg.eps}}\n \n \\subfigure[Out-degree]{\\includegraphics[width=0.235\\textwidth]{FIG\/LinkedIn4K-7-OutDeg.eps}}\n \n \\subfigure[Singular value]{\\includegraphics[width=0.235\\textwidth]{FIG\/LinkedIn4K-7-Sval.eps}}\n \n \\subfigure[Singular vector]{\\includegraphics[width=0.235\\textwidth]{FIG\/LinkedIn4K-7-Svec.eps}}\n \n \\subfigure[Clustering coefficient]{\\includegraphics[width=0.235\\textwidth]{FIG\/LinkedIn4K-7-Ccf.eps}}\n \n \\subfigure[Triad participation]{\\includegraphics[width=0.235\\textwidth]{FIG\/LinkedIn4K-7-Triad.eps}}\n \n \\caption{The network properties recovered by the MAG model\\xspace~and the Kronecker graphs\n model on the LinkedIn network.\n For every network property, the MAG model\\xspace~outperforms the Kronecker graphs model.}\n \\label{fig:realplot}\n \\vspace{-3mm}\n\\end{figure}\n\n\\begin{table}[t]\n\\caption{\\textit{KS}~and \\textit{L2}~of the MAG and Kronecker graphs models on the LinkedIn\nnetwork. MAG exhibits 50-70\\% better performance than the Kronecker graphs model.}\n\\label{tbl:linkedinks} \\centering \\small\n\\begin{tabular}{c||c|c|c|c|c|c||c}\n\\multicolumn{8}{l}{\\small{}}\\\\\n {\\bf \\textit{KS}} & InD & OutD & SVal & SVec & TP & CCF & Avg \\\\ \\hline \\hline\n MAG & \\small 3.70 & 3.80 & 0.84 & 2.43 & 3.87 & 3.16 & 2.97 \\\\ \\hline\n Kron & 4.00 & 4.32 & 1.15 & 7.22 & 8.08 & 6.90 & 5.28 \\\\\n \n \\multicolumn{8}{l}{\\bf \\textit{L2}} \\\\ \\hline \\hline\n MAG & 1.01 & 1.15 & 0.46 & 0.62 & 1.68 & 1.11 & 1.00 \\\\ \\hline\n Kron & 1.54 & 1.57 & 0.65 & 6.14 & 6.00 & 4.33 & 3.37\n\\end{tabular}\n\\vspace{-3mm}\n\\end{table}\n\n\\xhdr{Recovery of the network structure}\nWe begin our investigation of real networks by comparing the performance of\nthe MAG model to that of the Kronecker graphs model~\\rev{\\cite{jure10kronecker}, which}\noffers a state-of-the-art baseline for\nmodeling the structure of large networks.\nWe use the evaluation methods described in the previous section where \\hide{a given\nreal-world network $A$, }we fit both models to a given real-world network $A$\nand generate synthetic $\\hat{A}_{MAG}$ and $\\hat{A}_{Kron}$. Then we compute\nthe structural properties of all three networks and plot them in\nFigure~\\ref{fig:realplot}. Moreover, for each of the properties we also compute\nthe \\textit{KS}~and \\textit{L2}~statistics and show them in Table~\\ref{tbl:linkedinks}.\n\nFigure~\\ref{fig:realplot} plots the six network properties described above for\nthe LinkedIn network and the synthetic networks generated by fitting the MAG and\nKronecker models to the LinkedIn network.\nWe observe that MAG can successfully produce synthetic networks that match the\nproperties of the real network.\nIn particular, both the MAG and Kronecker graphs models capture the degree\ndistribution of the LinkedIn network well. However, the MAG model\\xspace~performs much better\nin matching the spectral properties of the graph adjacency matrix as well as the local\nclustering of the edges in the network.\n\nTable~\\ref{tbl:linkedinks} shows the \\textit{KS}~and \\textit{L2}~statistics for each of the\nsix structural properties plotted in Figure~\\ref{fig:realplot}. The results confirm\nour previous visual inspection. 
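\nFor reference, the evaluation metrics defined above (\\textit{KS}, \\textit{L2}, \\textit{LL} and \\textit{TPI}\\xspace) can be computed as in the following minimal Python sketch. It assumes the two CCDFs are sampled on a common grid of points $x$, and that $P$ and $A$ are dense arrays with every $P_{ij}$ strictly between 0 and 1; it illustrates the definitions rather than reproducing the exact evaluation code used for the tables:\n\\begin{verbatim}\n# Sketch: KS and L2 variants on log-log scale, plus LL and TPI.\nimport numpy as np\n\ndef ks_stat(D1, D2):          # max | log D1 - log D2 |\n    return np.max(np.abs(np.log(D1) - np.log(D2)))\n\ndef l2_stat(x, D1, D2):       # L2 distance on the log-log scale\n    lx = np.log(x)\n    d2 = (np.log(D1) - np.log(D2))**2\n    integral = np.sum(0.5 * (d2[1:] + d2[:-1]) * np.diff(lx))\n    return np.sqrt(integral \/ (lx[-1] - lx[0]))\n\ndef log_likelihood(P, A):     # LL of adjacency A under probabilities P\n    return np.sum(A * np.log(P) + (1 - A) * np.log(1 - P))\n\ndef tpi(P, A):                # improvement over a random graph\n    N, E = A.shape[0], A.sum()\n    return P[A == 1].sum() \/ (E**2 \/ N**2)\n\\end{verbatim}\n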
The MAG model\\xspace~is able to fit the network structure\nmuch better than the Kronecker graphs model. In terms of the average\n\\textit{KS}~statistic, we observe a 43\\% improvement, and an even greater\nimprovement of 70\\% in the \\textit{L2}~metric. For the degree distributions and the\nsingular values, MAG outperforms Kronecker by about 25\\%, while the improvement\non the singular vector, triad participation and clustering coefficient is 60 $\\sim$\n75\\%.\n\nWe make similar observations on the Yahoo!-Answers network but omit the results\nfor brevity. We include them\nin the Appendix.\n\n\\begin{figure}[t]\n\\centering\n\\begin{tabular}{ccc}\n \\includegraphics[width=0.15\\textwidth]{FIG\/Homophily.eps} &\n \\includegraphics[width=0.13\\textwidth]{FIG\/Heterophily.eps} &\n \\includegraphics[width=0.13\\textwidth]{FIG\/CorePeri.eps} \\\\\n \\includegraphics[width=0.1\\textwidth]{FIG\/HomophilyT.eps} &\n \\includegraphics[width=0.1\\textwidth]{FIG\/HeterophilyT.eps} &\n \\includegraphics[width=0.1\\textwidth]{FIG\/CorePeriT.eps} \\\\\n \\small (a) Homophily & \\small (b) Heterophily & \\small (c) Core-Periphery \\\\\n\\end{tabular}\n \\caption{Structures in which a node attribute can affect link affinity.\n The widths of the arrows\n correspond to the affinities towards link formation.}\n \\label{fig:structure}\n\\vspace{-3mm}\n\\end{figure}\n\nWe interpret the improvement of the MAG model over the Kronecker graphs model in the\nfollowing way. Intuitively, we can think of the Kronecker graphs model as a version\nof the MAG model where all affinity matrices $\\Theta_l$ are the same and all\n$\\mu_l=0.5$.\nHowever, real-world networks may include various types of structures and thus\ndifferent attributes may interact in different ways. For example,\nFigure~\\ref{fig:structure} shows three possible linking affinities of a binary\nattribute. Figure~\\ref{fig:structure}(a) shows a homophily (love of the same)\nattribute affinity and the corresponding affinity matrix $\\Theta$. Notice the large\nvalues on the diagonal entries of $\\Theta$, which means that the link probability\nis high when nodes share the same attribute value.\n\\rev{The top of each figure}\ndemonstrates that there will be many links between nodes that have the value of\nthe attribute set to ``0'' and many links between nodes that have the value\n``1'', but there will be few links between nodes where one has value ``0'' and\nthe other ``1''.\nSimilarly, Figure~\\ref{fig:structure}(b) shows a heterophily (love of the\ndifferent) affinity, where nodes that do not share the value of the attribute\nare more likely to link, which gives rise to near-bipartite networks. Last,\nFigure~\\ref{fig:structure}(c) shows a core-periphery affinity, where links are\nmost likely to form between ``0'' nodes (\\emph{i.e.}\\xspace, members of the core) and least\nlikely to form between ``1'' nodes (\\emph{i.e.}\\xspace, members of the periphery). Notice that\nlinks between the core and the periphery are more likely than links between\nthe nodes of the periphery.\n\nTurning our attention back to the MAG and Kronecker models, we note that\nreal-world networks globally exhibit a nested core-periphery\nstructure~\\cite{jure10kronecker} (Figure \\ref{fig:structure}(c)). While there\nexist the core (densely connected) and the periphery (sparsely connected) parts\nof the network, there is another level of core-periphery structure inside the\ncore itself. 
On the other hand, if we view the network more finely, we may also\nobserve homophily, which produces local community structure. MAG can model\nboth the global core-periphery structure and local homophily communities, while the\nKronecker graphs model cannot express the different affinity types because it\nuses only one initiator matrix.\n\n\nFor example, the LinkedIn network consists of 4 core-periphery affinities, 6\nhomophily affinities, and 1 heterophily affinity matrix. Core-periphery\naffinities model active users who are more likely to connect to others.\nHomophily affinities model people who are more likely to connect to others in\nthe same job area. Interestingly, there is a heterophily affinity which results\nin a bipartite relationship. We believe that the relationships between job\nseekers and recruiters or between employers and employees lead to this\nstructure.\n\n\n\\begin{table}[t]\n \\caption{{\\textit{LL} and \\textit{TPI}\\xspace values for the LinkedIn (\\textit{LI}) and\n Yahoo!-Answers (\\textit{YA}) networks}} \\label{tbl:linkacc}\n\\centering \\small\n \\vspace{-2mm}\n\\begin{tabular}{c||c|c||c|c}\n\\multicolumn{5}{l}{}\\\\\n & \\textit{LL}(\\textit{LI}) & \\textit{TPI}\\xspace(\\textit{LI}) & \\textit{LL}(\\textit{YA}) & \\textit{TPI}\\xspace(\\textit{YA}) \\\\ \\hline \\hline\n MAG & -47663 & 232.8 & -33795 & 192.2 \\\\ \\hline\n Kron & -87520 & 10.0 & -48204 & 5.4\n\\end{tabular}\n \\vspace{-3mm}\n\\end{table}\n\n\\xhdr{TPI and LL}\nWe also compare the \\textit{LL} and \\textit{TPI}\\xspace~values of the MAG and Kronecker models on\nboth the LinkedIn and Yahoo!-Answers networks. Table~\\ref{tbl:linkacc} shows that\nMAG outperforms Kronecker graphs by a surprisingly large margin.\n\\rev{In the \\textit{LL} metric, the MAG model shows a $50\\sim 60$\\% improvement over the Kronecker model.}\nFurthermore, in the \\textit{TPI}\\xspace~metric, the MAG model shows $23\\sim 35$ times better\naccuracy than the Kronecker model. From these results, we conclude that the\nMAG model\\xspace~achieves a superior probabilistic representation of a given network.\n\n\n\\xhdr{Case Study: \\textit{AddHealth} network}\nSo far we considered node attributes as {\\em latent} and we inferred the\naffinity matrices $\\Theta$ as well as the attributes themselves. Now, we\nconsider the setting where the node attributes are already given and we only\nneed to infer the affinities $\\Theta$. Our goal here is to study how real\nattributes explain the underlying network structure.\n\nWe use the largest high-school friendship network ($N =$ 457, $E =$ 2,259) from\nthe National Longitudinal Study of Adolescent Health (\\textit{AddHealth})\ndataset. The dataset includes more than 70 school-related attributes for each\nstudent. Since some attributes do not take binary values, we binarize them\nby taking value 1 if the value of the attribute is less than the\nmedian value. Now we aim to investigate which attributes affect friendship\nformation and how.\n\n\nWe set $L=7$ and consider the following methods for selecting a subset of 7\nattributes:\n\n\\vspace{-3mm}\n\\begin{itemize}\n \\itemsep-3pt \\topsep-5pt \\partopsep-5pt\n \\item {\\em R7}: Randomly choose 7 real attributes and fit the model (\\emph{i.e.}\\xspace,\n only fit $\\Theta$ as the attributes are given).\n \\item {\\em L7}: Regard all 7 attributes as latent (\\emph{i.e.}\\xspace, not given) and\n estimate $\\mu_l$ and $\\Theta_l$ for $l=1,\\dots,7$.\n \\item {\\em F7}: Forward selection. Select attributes one by one. 
At each\n step, select the additional attribute that maximizes the overall\n log-likelihood (\\emph{i.e.}\\xspace, select a real attribute and estimate its\n $\\Theta_l$).\n \\item {\\em F5+L2}: Select 5 real attributes using forward selection.\n Then infer 2 more latent attributes.\n\\end{itemize}\n\\vspace{-3mm}\n\n\nTo make \\textsc{MagFit}\\xspace work with fixed real attributes (\\emph{i.e.}\\xspace, only infer $\\Theta$),\nwe fix $\\phi_{il}$ to the values of the real attributes. In the E-step we\nthen optimize only over the latent set of $\\phi_{il}$, and the M-step remains as\nis.\n\n\n\\xhdr{{\\em AddHealth} network structure}\nWe begin by evaluating the recovery of the network structure.\nFigure~\\ref{fig:addhealth} shows the recovery of six network properties for\neach attribute selection method. We note that each method manages to recover\nthe degree distributions as well as the spectral properties (singular values and\nsingular vectors), but the performance differs for the clustering coefficient\nand triad participation.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.235\\textwidth]{FIG\/School2MedComp-InDeg.eps}\n\\includegraphics[width=0.235\\textwidth]{FIG\/School2MedComp-OutDeg.eps}\n\\includegraphics[width=0.235\\textwidth]{FIG\/School2MedComp-Sval.eps}\n\\includegraphics[width=0.235\\textwidth]{FIG\/School2MedComp-Svec.eps}\n\\includegraphics[width=0.235\\textwidth]{FIG\/School2MedComp-Ccf.eps}\n\\includegraphics[width=0.235\\textwidth]{FIG\/School2MedComp-Triad.eps}\n\\vspace{-8mm}\n\\caption{{Properties of the AddHealth network.}}\n\\vspace{-5mm}\n\\label{fig:addhealth}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering \\small\n \\caption{Performance of the different selection methods.}\n \\label{tbl:addhealthks}\n\\begin{tabular}{l|c|c|c|c|c|c||c}\n\\multicolumn{8}{l}{\\small{}}\\\\\n {\\bf \\textit{KS}} & InD & OutD & SVal & SVec & TP & CCF & Avg \\\\ \\hline \\hline\n R7 & 1.00 & 0.58 & 0.48 & 2.92 & 4.52 & 4.45 & 2.32 \\\\ \\hline\n F7 & 2.32 & 2.80 & 0.30 & 2.68 & 2.60 & 1.58 & 2.05 \\\\ \\hline\n F5+L2 & 3.45 & 4.00 & 0.26 & 0.95 & 1.30 & 3.45 & 2.24 \\\\ \\hline\n L7 & 1.58 & 1.58 & 0.18 & 2.00 & 2.67 & 2.66 & 1.78 \\\\\n \n \\multicolumn{8}{l}{\\bf \\textit{L2}} \\\\ \\hline \\hline\n R7 & 0.25 & 0.16 & 0.25 & 0.96 & 3.18 & 1.74 & 1.09 \\\\ \\hline\n F7 & 0.71 & 0.67 & 0.18 & 0.98 & 1.26 & 0.78 & 0.76 \\\\ \\hline\n F5+L2 & 0.80 & 0.87 & 0.13 & 0.34 & 0.76 & 1.30 & 0.70 \\\\ \\hline\n L7 & 0.29 & 0.27 & 0.10 & 0.64 & 0.75 & 1.22 & 0.54\n \n\\end{tabular}\n \\vspace{-5mm}\n\\end{table}\n\nTable~\\ref{tbl:addhealthks} shows the discrepancies in the 6 network properties\n(\\textit{KS}~and \\textit{L2}~statistics) for each attribute selection method. As expected,\nselecting 7 real attributes at random (R7) performs the worst. Naturally, L7\nperforms the best (23\\% improvement over R7 in \\textit{KS}~and 50\\% in \\textit{L2}) as it has\nthe most degrees of freedom\\hide{ ($(1+4)\\cdot7=35$ parameters)}. It is\nfollowed by F5+L2 (the combination of 5 real and 2 latent attributes) and F7\n(forward selection).\n\nAs a point of comparison, we also experimented with a simple logistic regression\nclassifier which, given the attributes of a pair of nodes, aims to predict the\noccurrence of an edge. Basically, given a network $A$ on $N$ nodes, we have $N^2$\ntraining examples (one for each pair of nodes): $E$ are positive (edges) and $N^2-E$ are negative (non-edges).\nHowever, the model performs poorly as it gives 50\\% worse \\textit{KS}~statistics than\nMAG. 
The average \\textit{KS}~of logistic regression under R7 is 3.24 (vs. 2.32 of MAG) and the same statistic under F7 is 3.00 (vs. 2.05 of MAG). Similarly, logistic regression gives 40\\% worse \\textit{L2}~under R7\nand 50\\% worse \\textit{L2}~under F7.\nThese results demonstrate that using the same\nattributes MAG heavily outperforms logistic regression.\n\\rev{We understand that this performance difference arises because the connectivity between a pair of nodes depends on some factors other than the linear combination of their attribute values.}\n\n\\begin{table}[t]\n \\caption{{\\textit{LL} and \\textit{TPI}\\xspace~for the {\\em AddHealth} network.}}\n\\label{tbl:addhealthacc}\n\\centering \\small\n\\begin{tabular}{c|c|c|c|c}\n\\multicolumn{5}{l}{\\small{}}\\\\\n & R7 & F7 & F5+L2 & \\hide{Kron & }L7 \\\\ \\hline \\hline\n \\textit{LL} & -13651 & -12161 & -12047 & \\hide{-11629 &} -9154 \\\\ \\hline\n \\textit{TPI}\\xspace & 1.0 & 1.1 & 1.9 & \\hide{3.0 &} 10.0\n\\end{tabular}\n\\vspace{-5mm}\n\\end{table}\n\nLast, we also examine the \\textit{LL} and \\textit{TPI}\\xspace values and compare them to the\nrandom attribute selection R7 as a baseline. Table~\\ref{tbl:addhealthacc} gives\nthe results. Somewhat contrary to our previous observations, we note that F7\nonly slightly outperforms R7, while F5+L2 gives a factor 2 better \\textit{TPI}\\xspace than R7.\nAgain, L7 gives a factor 10 improvement in \\textit{TPI}\\xspace and overall best performance.\n\n\\begin{table}[t]\n \\caption{{Affinity matrices of 5 AddHealth attributes.}}\n \\label{tbl:attrs}\n\\centering \\small \\vspace{-2mm}\n\\begin{tabular}{ l | l }\n\\multicolumn{2}{l}{\\small{}}\\\\\n \\hide{$\\mu$ &} Affinity matrix & Attribute description \\\\ \\hline \\hline\n \\hide{0.624 &} [0.572\\,0.146;\\,0.146\\,0.999] & School year (0 if $\\geq$ 2) \\\\ \\hline\n \\hide{0.513 &} [0.845\\,0.332;\\,0.332\\,0.816] & Highest level math (0 if $\\geq$ 6) \\\\ \\hline\n \\hide{0.502 &} [0.788\\,0.377;\\,0.377\\,0.784] & Cumulative GPA (0 if $\\geq$ 2.65) \\\\ \\hline\n \\hide{0.078 &} [0.999\\,0.246;\\,0.246\\,0.352] & AP\/IB English (0 if taken) \\\\ \\hline\n \\hide{0.672 &} [0.794\\,0.407;\\,0.407\\,0.717] & Foreign language (0 if taken)\n \n \n\\end{tabular}\n\\vspace{-5mm}\n\\end{table}\n\n\\xhdr{Attribute affinities}\nLast, we investigate the structure of attribute affinity matrices to illustrate\nhow MAG model can be used to understand the way real attributes interact in\nshaping the network structure. We use forward selection (F7) to select 7 real\nattributes and estimate their affinity matrices. Table~\\ref{tbl:attrs} reports\nfirst 5 attributes selected by the forward selection.\n\nFirst notice that {\\em AddHealth} network is undirected graph and that the\nestimated affinity matrices are all symmetric. This means that without a priori\nbiasing the fitting towards undirected graphs, the recovered parameters obey\nthis structure.\nSecond, we also observe that every attribute forms a homophily structure in a\nsense that each student is more likely to be friends with other students of the\nsame characteristic. For example, people are more likely to make friends of the\nsame school year. Interestingly, students who are freshmen or sophomore are\nmore likely (0.99) to form links among themselves than juniors and seniors (0.57).\nAlso notice that the level of advanced courses that each student takes as well\nas the GPA affect the formation of friendship ties. 
\n\n\\section{Variational EM Algorithm}\n\n\nIn Section~\\ref{sec:problem}, we proposed a version of the MAG model\\xspace~by introducing a generative Bernoulli model for node attributes and formulated the problem to solve.\nIn the following Section~\\ref{sec:algorithm}, we gave a sketch of \\textsc{MagFit}\\xspace~that used the variational EM algorithm to solve the problem.\nHere we show how to compute the gradients with respect to the variational and model parameters ($\\phi$, $\\mu$, and $\\Theta$) for the E-step and M-step, which we omitted in Section~\\ref{sec:algorithm}.\nWe also give the details of the fast \\textsc{MagFit}\\xspace~in the following.\n\n\n\\subsection{Variational E-Step}\n\nIn the E-step, the MAG model\\xspace~parameters $\\mu$ and $\\Theta$ are given and we aim to find the optimal variational parameter $\\phi$\nthat maximizes $\\mathcal{L}_{Q}(\\mu, \\Theta)$ as well as minimizes the mutual information factor $\\mbox{MI}(F)$.\nWe randomly select a batch of entries in $\\phi$ and update the selected entries using the gradient of the objective function $\\mathcal{L}_{Q}(\\mu, \\Theta) - \\lambda \\mbox{MI}(F)$.\nWe repeat this updating procedure until $\\phi$ converges.\n
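\nSchematically, the E-step thus performs stochastic coordinate ascent over the entries of $\\phi$. The following sketch is only an illustration: \\texttt{grad\\_phi} stands for the gradient $\\PLQ{\\phi_{il}} - \\lambda \\PMI{\\phi_{il}}$ derived below, and the step size, batch size, and stopping rule are hypothetical choices.\n\\begin{verbatim}\nimport numpy as np\n\ndef e_step(phi, grad_phi, step=0.05, batch=256, tol=1e-4, iters=1000):\n    # phi: N x L matrix of variational parameters in (0, 1);\n    # grad_phi(i, l, phi) returns d(L_Q - lambda * MI) / d phi[i, l].\n    N, L = phi.shape\n    for _ in range(iters):\n        old = phi.copy()\n        for flat in np.random.randint(0, N * L, size=batch):\n            i, l = divmod(flat, L)\n            phi[i, l] += step * grad_phi(i, l, phi)      # ascent step\n            phi[i, l] = np.clip(phi[i, l], 1e-6, 1 - 1e-6)\n        if np.max(np.abs(phi - old)) < tol:              # converged\n            break\n    return phi\n\\end{verbatim}\n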
\\Theta\\right)} \\nonumber \\\\\n& \\quad \\quad + \\mathcal{H} (Q_{il}) + \\mathcal{H} (Q_{-il})\n\\label{eq:lqsep}\n\\end{align}\nwhere $\\mathcal{H}(P)$ represents the entropy of distribution $P$.\n\nSince we compute the gradient of $\\phi_{il}$, we regard the other variational parameter $\\phi_{-il}$ as a constant so $\\mathcal{H} (Q_{-il})$ is also a constant.\nMoreover, as $\\EXATTR{-il}{\\log P\\left(A, F | \\mu, \\Theta\\right)}$ integrates out all the terms\nwith regard to $\\phi_{-il}$, \nit is a function of $F_{il}$.\nThus, for convenience, we denote $\\EXATTR{-il}{\\log P\\left(A, F | \\mu, \\Theta\\right)}$ as $\\log \\PFUNC{F_{il}}$.\nThen, since $F_{il}$ follows a Bernoulli distribution with parameter $\\phi_{il}$, by \\EQ{eq:lqsep}\n\\begin{align}\n\\mathcal{L}_{Q}(\\mu, \\theta) & = (1 - \\phi_{il}) \\left( \\log \\PFUNC{1} - \\log (1-\\phi_{il}) \\right) \\nonumber \\\\\n& \\quad + \\phi_{il} \\left( \\log \\PFUNC{0} - \\log \\phi_{il} \\right) + const \\, .\n\\label{eq:lqonephi}\n\\end{align}\n\nNote that both $\\PFUNC{0}$ and $\\PFUNC{1}$ are constant. Therefore,\n\\begin{equation}\n\\PLQ{\\phi_{il}} = \\log \\frac{\\PFUNC{0}}{\\phi_{il}} - \\log \\frac{\\PFUNC{1}}{1-\\phi_{il}} \\, .\n\\label{eq:eqonephi}\n\\end{equation}\n\\hide{\nFrom \\EQ{eq:eqonephi}, the optimal $\\phi_{il}$ is as follows:\n\\begin{equation}\n\\phi_{il} = \\frac{\\PFUNC{0}}{\\PFUNC{0} + \\PFUNC{1}} \\, .\n\\label{eq:updatephi}\n\\end{equation}\n}\n\n\nTo complete the computation of $\\PLQ{\\phi_{il}}$,\nnow we focus on the value of $\\PFUNC{F_{il}}$ for $F_{il} = 0, 1$.\nBy \\EQ{eq:jointlik}~and the linearity of expectation,\n$\\log \\PFUNC{F_{il}}$ is separable into small tractable terms as follows:\n\\begin{align}\n\\log \\PFUNC{F_{il}}\n& = \\EXATTR{-il}{\\log P(A, F | \\mu, \\Theta)} \\nonumber \\\\\n& = \\sum_{u, v} \\EXATTR{-il}{\\log P(A_{uv} | F_{u}, F_{v}, \\Theta)} \\nonumber \\\\\n& \\quad + \\sum_{u, k} \\EXATTR{-il}{\\log P(F_{uk} | \\mu_{k})}\n\\label{eq:ptildesep}\n\\end{align}\nwhere $F_{i} = \\{F_{il}: l = 1, 2, \\cdots, L \\}$.\nHowever, if $u, v \\neq i$, then $\\EXATTR{-il}{\\log P(A_{uv} | F_{u}, F_{v}, \\Theta)}$ is a constant,\nbecause the average over $Q_{-il}(F_{-il})$ integrates out all the variables $F_{u}$ and $F_{v}$.\nSimilarly, if $u \\neq i$ and $k \\neq l$, then $\\EXATTR{-il}{\\log P(F_{uk} | \\mu_{k})}$ is a constant.\nSince most of terms in \\EQ{eq:ptildesep}~are irrelevant to $\\phi_{il}$, $\\log \\PFUNC{F_{il}}$~is simplified as\n\\begin{align}\n\\log \\PFUNC{F_{il}}\n& = \\left(\\sum_{j} \\EXATTR{-il}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)}\\right) \\nonumber \\\\\n& \\quad + \\left(\\sum_{j} \\EXATTR{-il}{\\log P(A_{ji} | F_{j}, F_{i}, \\Theta)}\\right) \\nonumber \\\\\n& \\quad + \\log P(F_{il} | \\mu_{l}) + C \n\\label{eq:ptildesim}\n\\end{align}\nfor some constant $C$.\n\nBy definition of $P(F_{il} | \\mu_{l})$ in \\EQ{eq:jointlik}, the last term in \\EQ{eq:ptildesim} is \n\\begin{equation}\n\\log P(F_{il} | \\mu_{l}) = F_{il} \\log \\mu_{l} + (1-F_{il}) \\log (1 - \\mu_{il})\n\\, .\n\\label{eq:logmu}\n\\end{equation}\n\nWith regard to the first two terms in \\EQ{eq:ptildesim},\n\\[\n\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)\n= \\log P(A_{ji} | F_{i}, F_{j}, \\Theta^{T}) \\, .\n\\]\nHence, the methods to compute the two terms are equivalent.\nThus, we now focus on the computation of $\\EXATTR{-il}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)}$.\n\nFirst, in case of $A_{ij} = 1$, by definition of $P(A_{ij} | F_{i}, F_{j})$ in 
\\EQ{eq:jointlik},\n\\begin{align}\n& \\EXATTR{-il}{\\log P(A_{ij} = 1 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& = \\EXATTR{-il}{\\sum_{k} \\log \\Theta_{k} [F_{ik}, F_{jk}]} \\nonumber \\\\\n& = \\EXATTR{jl}{\\log \\Theta_{l}[F_{il}, F_{jl}]} + \\sum_{k \\neq l} \\EXATTR{ik, jk}{\\log \\Theta_{k} [F_{ik}, F_{jk}]} \\nonumber \\\\\n& = \\EXATTR{jl}{\\log \\Theta_{l}[F_{il}, F_{jl}]} + C'\n\\label{eq:logedgeprob1}\n\\end{align}\nfor some constant $C'$ where $Q_{ik, jk}(F_{ik}, F_{jk}) = Q_{ik}(F_{ik}) Q_{jk}(F_{jk})$,\nbecause $\\EXATTR{ik, jk}{\\log \\Theta_{k}[F_{ik}, F_{jk}]}$ is a constant for each $k \\neq l$.\n\nSecond, in the case of $A_{ij} = 0$,\n\\begin{align}\nP(A_{ij} = 0 | F_{i}, F_{j}, \\Theta)\n& = 1 - \\prod_{k} \\Theta_{k} [F_{ik}, F_{jk}]\n\\, .\n\\label{eq:logedgeprob0}\n\\end{align}\nSince $\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)$ is not separable in terms of $\\Theta_{k}$, it takes $O(2^{2L})$ time to compute $\\EXATTR{-il}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)}$ exactly.\nWe can reduce this computation time to $O(L)$ by applying the Taylor expansion $\\log (1 - x) \\approx -x - \\frac{1}{2}x^{2}$ for small $x$:\n\\begin{align}\n& \\EXATTR{-il}{\\log P(A_{ij} = 0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& \\approx \\EXATTR{-il}{-\\prod_{k} \\Theta_{k}[F_{ik}, F_{jk}] - \\frac{1}{2} \\prod_{k} \\Theta^{2}_{k}[F_{ik}, F_{jk}]} \\nonumber \\\\\n& = -\\EXATTR{jl}{\\Theta_{l}[F_{il}, F_{jl}]} \\prod_{k \\neq l} \\EXATTR{ik, jk}{\\Theta_{k}[F_{ik}, F_{jk}]} \\nonumber \\\\\n& \\quad \\quad - \\frac{1}{2} \\EXATTR{jl}{\\Theta^{2}_{l}[F_{il}, F_{jl}]} \\prod_{k \\neq l} \\EXATTR{ik, jk}{\\Theta^{2}_{k}[F_{ik}, F_{jk}]}\n\\label{eq:logedgeprob0approx}\n\\end{align}\nwhere each term can be computed by\n\\begin{align*}\n\\EXATTR{jl}{Y_{l}[F_{il}, F_{jl}]} & = \\phi_{jl} Y_{l}[F_{il}, 0] + (1-\\phi_{jl}) Y_{l}[F_{il}, 1] \\\\\n\\EXATTR{ik, jk}{Y_{k}[F_{ik}, F_{jk}]} & = [\\phi_{ik}~~1-\\phi_{ik}] \\cdot Y_{k} \\cdot [\\phi_{jk} ~~ 1-\\phi_{jk}]^{T}\n\\end{align*}\nfor any matrices $Y_{l}, Y_{k} \\in \\mathbb{R}^{2 \\times 2}$.\n\nIn brief, for fixed $i$ and $l$, we first compute $\\EXATTR{-il}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)}$ for each node $j$ depending on whether or not $i \\rightarrow j$ is an edge.\nBy adding $\\log P(F_{il} | \\mu_{l})$, we then obtain the value of $\\log \\PFUNC{F_{il}}$ for each $F_{il}$.\nOnce we have $\\log \\PFUNC{F_{il}}$, we can finally compute $\\PLQ{\\phi_{il}}$.\n
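\nAs a concrete check of these expectation formulas, the following sketch computes the bilinear form above and the $O(L)$ Taylor approximation of the non-edge term. It is only an illustration with hypothetical $\\phi$ and $\\Theta$ values; note that the formulas above correspond to $Q_{il}(0) = \\phi_{il}$, and for simplicity the sketch averages over all $L$ attributes instead of keeping $F_{il}$ fixed.\n\\begin{verbatim}\nimport numpy as np\n\ndef q_vec(phi):\n    # Distribution over {0, 1} with Q(0) = phi.\n    return np.array([phi, 1.0 - phi])\n\ndef pair_expect(Y, phi_i, phi_j):\n    # E[Y[F_i, F_j]] = [phi_i, 1-phi_i] . Y . [phi_j, 1-phi_j]^T\n    return q_vec(phi_i) @ Y @ q_vec(phi_j)\n\ndef approx_log_nonedge(thetas, phi_i, phi_j):\n    # O(L) approximation of E[log P(A_ij = 0 | F_i, F_j, Theta)]\n    # via log(1 - x) ~ -x - x^2 / 2 (cf. eq:logedgeprob0approx).\n    m1 = np.prod([pair_expect(t, pi, pj)\n                  for t, pi, pj in zip(thetas, phi_i, phi_j)])\n    m2 = np.prod([pair_expect(t ** 2, pi, pj)\n                  for t, pi, pj in zip(thetas, phi_i, phi_j)])\n    return -m1 - 0.5 * m2\n\\end{verbatim}\n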
\n\n\n\\xhdr{Scalable computation}\nHowever, as we analyzed in Section~\\ref{sec:algorithm}, the above E-step algorithm requires $O(LN)$ time for each computation of $\\PLQ{\\phi_{il}}$, so that the total computation time is $O(L^{2}N^{2})$, \nwhich is infeasible when the number of nodes $N$ is large.\n\nHere we propose a scalable algorithm for computing $\\PLQ{\\phi_{il}}$\nbased on further approximation.\nAs described in Section~\\ref{sec:algorithm}, we quickly approximate the value of $\\PLQ{\\phi_{il}}$ as if the network were empty, \nand adjust it by the part where edges actually exist.\nTo approximate $\\PLQ{\\phi_{il}}$ in the empty-network case, \nwe reformulate the first term in \\EQ{eq:ptildesim}:\n\\begin{align}\n& \\sum_{j} \\EXATTR{-il}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)} = \\sum_{j} \\EXATTR{-il}{\\log P(0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& \\quad + \\sum_{A_{ij} = 1} \\EXATTR{-il}{\\log P(1 | F_{i}, F_{j}, \\Theta) - \\log P(0 | F_{i}, F_{j}, \\Theta)}\n\\label{eq:fastestep1_x}\n\\end{align}\nSince the sum of \\textit{i.i.d.} random variables can be approximated in terms of the expectation of a single variable,\nthe first term in \\EQ{eq:fastestep1_x} can be approximated as follows:\n\\begin{align}\n& \\sum_{j} \\EXATTR{-il}{\\log P(0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& = \\EXATTR{-il}{\\sum_{j} \\log P(0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& \\approx \\EXATTR{-il}{(N-1) \\mathbb{E}_{F_{j}}[ \\log P(0 | F_{i}, F_{j}, \\Theta)]} \\nonumber \\\\\n& = (N-1) \\mathbb{E}_{F_{j}} [\\log P(0 | F_{i}, F_{j}, \\Theta)]\n\\label{eq:fastestep2_x}\n\\end{align}\nAs $F_{jl}$ marginally follows a Bernoulli distribution with parameter $\\mu_{l}$, we can compute \\EQ{eq:fastestep2_x} by using \\EQ{eq:logedgeprob0approx} in $O(L)$ time.\nSince the second term of \\EQ{eq:fastestep1_x} takes $O(L N_{i})$ time, where $N_{i}$ represents the number of neighbors of node $i$,\n\\EQ{eq:fastestep1_x} takes only $O(L N_{i})$ time in total.\nAs in the E-step we do this operation by iterating over all $i$'s and $l$'s,\nthe total computation time of the E-step eventually becomes $O(L^2 E)$,\nwhich is feasible for many large-scale networks.\n
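\nIn code, this empty-graph-plus-adjustment pattern looks roughly as follows. This is a sketch under the assumptions above; \\texttt{pair\\_term} and \\texttt{nonedge\\_expect} are hypothetical $O(L)$ helpers built from \\EQ{eq:logedgeprob0approx}.\n\\begin{verbatim}\ndef network_term(i, l, F_il, nbrs, pair_term, nonedge_expect, N):\n    # Approximates sum_j E_{-il}[log P(A_ij | F_i, F_j, Theta)]\n    # (the first term of eq:ptildesim) in O(L * N_i) time, as in\n    # eq:fastestep1_x: pretend the graph is empty, then adjust\n    # only the terms where an edge i -> j actually exists.\n    total = (N - 1) * nonedge_expect(i, l, F_il)   # empty network\n    for j in nbrs[i]:                              # actual edges\n        total += (pair_term(i, j, l, F_il, edge=True)\n                  - pair_term(i, j, l, F_il, edge=False))\n    return total\n\\end{verbatim}\n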
\n\n\\xhdr{Computation of $\\PMI{\\phi_{il}}$}\nNow we turn our attention to the derivative of the mutual information term.\nSince $\\mbox{MI}(F) = \\sum_{l \\neq l'} \\mbox{MI}_{ll'}$,\nwe can separately compute the derivative of each term $\\frac{\\partial \\mbox{MI}_{ll'}}{\\partial \\phi_{il}}$.\nBy the definition in \\EQ{eq:midef} and the chain rule,\n\\begin{align}\n& \\frac{\\partial \\mbox{MI}_{ll'}}{\\partial \\phi_{il}} = \n\\sum_{x, y \\in \\{0, 1\\}} \\frac{\\partial p_{ll'}(x,y)}{\\partial \\phi_{il}} \\log \\frac{p_{ll'}(x, y) }{p_{l}(x)p_{l'}(y)} \\nonumber \\\\\n& \\quad + \\frac{\\partial p_{ll'}(x, y)}{\\partial \\phi_{il}} \n- \\frac{p_{ll'}(x, y)}{p_{l}(x)} \\frac{\\partial p_{l}(x)}{\\partial \\phi_{il}} \n- \\frac{p_{ll'}(x, y)}{p_{l'}(y)} \\frac{\\partial p_{l'}(y)}{\\partial \\phi_{il}} \\,.\n\\label{eq:partialmi}\n\\end{align}\nThe values of $p_{ll'}(x, y)$, $p_{l}(x)$, and $p_{l'}(y)$ are defined in \\EQ{eq:midef}.\nTherefore, in order to compute $\\frac{\\partial \\mbox{MI}_{ll'}}{\\partial \\phi_{il}}$,\nwe need the values of $\\frac{\\partial p_{ll'}(x, y)}{\\partial \\phi_{il}}$, $\\frac{\\partial p_{l}(x)}{\\partial \\phi_{il}}$, and $\\frac{\\partial p_{l'}(y)}{\\partial \\phi_{il}}$.\nBy the definition in \\EQ{eq:midef},\n\\begin{align*}\n& \\frac{\\partial p_{ll'}(x, y)}{\\partial \\phi_{il}} = Q_{il'}(y) \\frac{\\partial Q_{il}}{\\partial \\phi_{il}} \\\\\n& \\frac{\\partial p_{l}(x)}{\\partial \\phi_{il}} = \\frac{\\partial Q_{il}}{\\partial \\phi_{il}} \\\\\n& \\frac{\\partial p_{l'}(y)}{\\partial \\phi_{il}} = 0\n\\end{align*}\nwhere $\\frac{\\partial Q_{il}}{\\partial \\phi_{il}} |_{F_{il}=0} = 1$\nand $\\frac{\\partial Q_{il}}{\\partial \\phi_{il}} |_{F_{il}=1} = -1$.\n\nSince all terms in $\\frac{\\partial \\mbox{MI}_{ll'}}{\\partial \\phi_{il}}$ are tractable, we can eventually compute $\\PMI{\\phi_{il}}$.\n\n\n\\subsection{Variational M-Step}\n\nIn the E-step, \nwith the model parameters $\\mu$ and $\\Theta$ given, we updated the variational parameter $\\phi$\nto maximize $\\mathcal{L}_{Q}(\\mu, \\Theta)$ as well as to minimize the mutual information between every pair of attributes.\nIn the M-step,\nwe fix the approximate posterior distribution $Q(F)$, \\emph{i.e.}\\xspace~we fix the variational parameter $\\phi$,\nand update the model parameters $\\mu$ and $\\Theta$ to maximize $\\mathcal{L}_{Q}(\\mu, \\Theta)$.\n\n\nWe reformulate $\\mathcal{L}_{Q}(\\mu, \\Theta)$\nby \\EQ{eq:jointlik}:\n\\begin{align}\n& \\mathcal{L}_{Q}(\\mu, \\Theta) \\nonumber \\\\\n& = \\EXATTR{}{\\log P(A, F | \\mu, \\Theta) - \\log Q(F)} \\nonumber \\\\\n& = \\EXATTR{}{\\sum_{i, j} \\log P(A_{ij} | F_{i}, F_{j}, \\Theta)\n+ \\sum_{i, l} \\log P(F_{il} | \\mu_{l}) } + \\mathcal{H}(Q) \\nonumber \\\\\n& = \\sum_{i, j} \\EXATTR{i, j}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& \\quad + \\sum_{l} \\left( \\sum_{i} \\EXATTR{il}{\\log P(F_{il} | \\mu_{l})} \\right)\n+ \\mathcal{H}(Q)\n\\label{eq:mstep}\n\\end{align}\nwhere $Q_{i, j}(F_{\\{i\\cdot\\}}, F_{\\{j\\cdot\\}})$ represents $\\prod_{l} Q_{il}(F_{il}) Q_{jl}(F_{jl})$.\n\nIn the end, $\\mathcal{L}_{Q}(\\mu, \\Theta)$ in \\EQ{eq:mstep} decomposes into the following terms:\na function of $\\Theta$, a function of each $\\mu_{l}$, and a constant.\nThus, we can update $\\mu$ and $\\Theta$ separately. Since we already showed how to update $\\mu$ in Section~\\ref{sec:algorithm}, here we focus on the maximization of $\\mathcal{L}_{\\Theta} = \\EXATTR{}{\\log P(A, F | \\mu, \\Theta) - \\log Q(F)}$ using the gradient method.\n\n\n\n\\xhdr{Computation of $\\nabla_{\\Theta_{l}} \\mathcal{L}_{\\Theta}$}\nTo use the gradient method, we need to compute the gradient of $\\mathcal{L}_{\\Theta}$:\n\\begin{align}\n\\nabla_{\\Theta_{l}} \\mathcal{L}_{\\Theta} = \\sum_{i, j} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(A_{ij} |F_{i}, F_{j}, \\Theta)} \\, .\n\\label{eq:mstep-theta_x}\n\\end{align}\nWe separately calculate the gradient of each term in $\\mathcal{L}_{\\Theta}$ as follows.\nFor every $z_{1}, z_{2} \\in \\{0, 1\\}$,\nif $A_{ij} = 1$,\n\\begin{align}\n& \\frac{\\partial \\EXATTR{i, j}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)}}{\\partial \\Theta_{l}[z_{1}, z_{2}]} \\bigg|_{A_{ij} = 1} \\nonumber \\\\\n& = \\frac{\\partial}{\\partial \\Theta_{l}[z_{1}, z_{2}]}\n\\EXATTR{i, j}{\\sum_{k} \\log \\Theta_{k}[F_{ik}, F_{jk}]} \\nonumber \\\\\n& = \\frac{\\partial}{\\partial \\Theta_{l}[z_{1}, z_{2}]} \\EXATTR{i, j}{\\log \\Theta_{l}[F_{il}, F_{jl}]} \\nonumber \\\\\n& = \\frac{Q_{il}(z_{1}) Q_{jl}(z_{2})}{\\Theta_{l}[z_{1}, z_{2}]} \\, .\n\\label{eq:mstep-theta1}\n\\end{align}\nOn the contrary, if $A_{ij} = 0$,\nwe use the Taylor expansion as in \\EQ{eq:logedgeprob0approx}:\n\\begin{align}\n& \\frac{\\partial \\EXATTR{i, j}{\\log P(A_{ij} | F_{i}, F_{j}, \\Theta)}}{\\partial \\Theta_{l}[z_{1}, z_{2}]} \\bigg|_{A_{ij} = 0} \\nonumber \\\\\n& \\approx \\frac{\\partial}{\\partial \\Theta_{l}} \\EXATTR{i, j}{- \\prod_{k} \\Theta_{k}[F_{ik}, F_{jk}] - \\frac{1}{2} \\prod_{k} \\Theta^{2}_{k}[F_{ik}, F_{jk}]} \\nonumber \\\\\n& = -Q_{il}(z_{1})Q_{jl}(z_{2}) \\prod_{k \\neq l} \\EXATTR{ik, jk}{\\Theta_{k}[F_{ik}, F_{jk}]} \\nonumber \\\\\n& \\quad \\quad - Q_{il}(z_{1})Q_{jl}(z_{2})\\Theta_{l}[z_{1}, z_{2}] \\prod_{k \\neq l} \\EXATTR{ik, jk}{\\Theta^{2}_{k}[F_{ik}, F_{jk}]}\n\\label{eq:mstep-theta2}\n\\end{align}\nwhere $Q_{il, jl}(F_{il}, F_{jl}) = Q_{il}(F_{il}) Q_{jl}(F_{jl})$.\n\nSince\n\\[\n\\EXATTR{ik, jk}{f(\\Theta)} = \\sum_{z_{1}, z_{2}} Q_{ik}(z_{1})Q_{jk}(z_{2})f\\left(\\Theta[z_{1}, z_{2}]\\right)\n\\]\nfor any function $f$,\nand we know the values of $Q_{il}(F_{il})$ in terms of $\\phi_{il}$,\nwe can obtain the gradient $\\nabla_{\\Theta_{l}}\\mathcal{L}_{\\Theta}$ by \\EQ{eq:mstep-theta_x}~$\\sim$~(\\ref{eq:mstep-theta2}).\n
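\nAs an illustration, the per-pair contributions of \\EQ{eq:mstep-theta1}~and~(\\ref{eq:mstep-theta2}) can be sketched as follows, reusing \\texttt{q\\_vec} and \\texttt{pair\\_expect} from the E-step sketch. The code is hypothetical and operates elementwise over the four entries of $\\Theta_{l}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef grad_theta_l(A_ij, thetas, l, phi_i, phi_j):\n    # 2 x 2 gradient of E_{i,j}[log P(A_ij | F_i, F_j, Theta)]\n    # with respect to Theta_l (eq:mstep-theta1 / eq:mstep-theta2).\n    Q = np.outer(q_vec(phi_i[l]), q_vec(phi_j[l]))  # Q_il(z1) Q_jl(z2)\n    if A_ij == 1:                                   # edge term\n        return Q / thetas[l]\n    # non-edge: Taylor-approximated log(1 - prod_k Theta_k[.,.])\n    m1 = np.prod([pair_expect(thetas[k], phi_i[k], phi_j[k])\n                  for k in range(len(thetas)) if k != l])\n    m2 = np.prod([pair_expect(thetas[k] ** 2, phi_i[k], phi_j[k])\n                  for k in range(len(thetas)) if k != l])\n    return -Q * m1 - Q * thetas[l] * m2\n\\end{verbatim}\n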
\n\\xhdr{Scalable computation}\nThe M-step requires summing $O(N^{2})$ terms in \\EQ{eq:mstep-theta_x}, where each term takes $O(L)$ time to compute.\nSimilarly to the E-step, here we propose a scalable algorithm by separating \\EQ{eq:mstep-theta_x} into two parts, the fixed part for an empty graph and the adjustment part for the actual edges:\n\\begin{align}\n& \\nabla_{\\Theta_{l}} \\mathcal{L}_{\\Theta} =\n\\sum_{i, j} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(0 |F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& \\quad + \\sum_{A_{ij} = 1} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(1 |F_{i}, F_{j}, \\Theta) - \\log P(0 |F_{i}, F_{j}, \\Theta)} \\, .\n\\label{eq:fastmstep1_x}\n\\end{align}\nWe are able to approximate the first term in \\EQ{eq:fastmstep1_x}, the value for the empty-graph part, as follows:\n\\begin{align}\n& \\sum_{i, j} \\nabla_{\\Theta_{l}} \\EXATTR{i, j}{\\log P(0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& = \\nabla_{\\Theta_{l}} \\EXATTR{}{\\sum_{i, j} \\log P(0 | F_{i}, F_{j}, \\Theta)} \\nonumber \\\\\n& \\approx \\nabla_{\\Theta_{l}} \\EXATTR{}{N(N-1) \\mathbb{E}_{F} [\\log P(0 | F, \\Theta)]} \\nonumber \\\\\n& = \\nabla_{\\Theta_{l}} N(N-1) \\mathbb{E}_{F} [\\log P(0 | F, \\Theta)] \\,.\n\\label{eq:fastmstep2_x}\n\\end{align}\nSince each $F_{il}$ marginally follows the Bernoulli distribution with parameter $\\mu_{l}$, \\EQ{eq:fastmstep2_x} is computed by \\EQ{eq:mstep-theta2} in $O(L)$ time.\nAs the second term in \\EQ{eq:fastmstep1_x} requires only $O(LE)$ time,\nthe computation time of the M-step is finally reduced to $O(LE)$.\n\n\n\n\\section{Experiments}\n\n\\subsection{Yahoo!-Answers Network}\n\nHere we add some experimental results that we omitted in Section~\\ref{sec:experiments}.\nFirst, Figure~\\ref{fig:answerplot} compares the six network properties of the Yahoo!-Answers network and of the synthetic networks generated by the MAG model\\xspace~and the Kronecker graphs model fitted to the real network.\nThe MAG model\\xspace~in general shows better performance than the Kronecker graphs model. 
In particular, the MAG model\\xspace~greatly outperforms the Kronecker graphs model in the local-clustering properties (clustering coefficient and triad participation).\n\nSecond, to quantify the recovery of the network properties, we show the \\textit{KS}~and \\textit{L2}~statistics for the synthetic networks generated by the MAG model\\xspace~and the Kronecker graphs model in Table~\\ref{tbl:answerks}.\nTable~\\ref{tbl:answerks} confirms the visual inspection of Figure~\\ref{fig:answerplot}:\nthe MAG model\\xspace~shows better statistics than the Kronecker graphs model overall,\nand there is a huge improvement in the local-clustering properties.\n\n\n\\begin{figure}[t]\n \\centering\n \\subfigure[In-degree]{\\includegraphics[width=0.235\\textwidth]{FIG\/Answer4K-7-InDeg.eps}}\n \\subfigure[Out-degree]{\\includegraphics[width=0.235\\textwidth]{FIG\/Answer4K-7-OutDeg.eps}}\n \\subfigure[Singular value]{\\includegraphics[width=0.235\\textwidth]{FIG\/Answer4K-7-Sval.eps}}\n \\subfigure[Singular vector]{\\includegraphics[width=0.235\\textwidth]{FIG\/Answer4K-7-Svec.eps}}\n \\subfigure[Clustering coefficient]{\\includegraphics[width=0.235\\textwidth]{FIG\/Answer4K-7-Ccf.eps}}\n \\subfigure[Triad participation]{\\includegraphics[width=0.235\\textwidth]{FIG\/Answer4K-7-Triad.eps}}\n \\caption{The recovered network properties by the MAG model\\xspace~and the Kronecker graphs\n model on the Yahoo!-Answers network.\n For every network property, the MAG model\\xspace~outperforms the Kronecker graphs model.}\n \\label{fig:answerplot}\n\\end{figure}\n\n\\begin{table}\n\\caption{\\textit{KS}~and \\textit{L2}~for the MAG and Kronecker models fitted to the Yahoo!-Answers network}\n\\label{tbl:answerks}\n\\centering \\small\n\\begin{tabular}{c||c|c|c|c|c|c||c}\n\\multicolumn{8}{l}{\\small{}}\\\\\n {\\bf \\textit{KS}} & InD & OutD & SVal & SVec & TP & CCF & Avg \\\\ \\hline \\hline\nMAG & 3.00 & 2.80 & 14.93 & 13.72 & 4.84 & 4.80 & 7.35 \\\\ \\hline\nKron & 2.00 & 5.78 & 13.56 & 15.47 & 7.98 & 7.05 & 8.64 \\\\ \\hline\n \\multicolumn{8}{l}{\\bf \\textit{L2}} \\\\ \\hline \\hline\nMAG & 0.96 & 0.74 & 0.70 & 6.81 & 2.76 & 2.39 & 2.39 \\\\ \\hline\nKron & 0.81 & 2.24 & 0.69 & 7.41 & 6.14 & 4.73 & 3.67 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{AddHealth Network}\n\nWe briefly mentioned the logistic regression method in the AddHealth network experiment.\nHere we provide the details of the logistic regression and its full experimental results.\n\nFor the variables of the logistic regression, we use a set of real attributes in the AddHealth network dataset.\nFor this set of attributes, we used F7 (forward selection) and R7 (random selection) as defined in Section~\\ref{sec:experiments}.\nOnce the set of attributes is fixed, we fit the following logistic model:\n\\begin{align*}\nP(i \\rightarrow j) = \n\\frac{\\exp(c + \\sum_{l} \\alpha_{l} F_{il} + \\sum_{l} \\beta_{l} F_{jl}) }\n{1 + \\exp(c + \\sum_{l} \\alpha_{l} F_{il} + \\sum_{l} \\beta_{l} F_{jl}) } \\,.\n\\end{align*}\n
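\nFor concreteness, evaluating this model amounts to the following (a hypothetical sketch; $c$, \\texttt{alpha}, and \\texttt{beta} are the fitted coefficients):\n\\begin{verbatim}\nimport numpy as np\n\ndef link_prob_logit(c, alpha, beta, F_i, F_j):\n    # P(i -> j) = exp(s) / (1 + exp(s)) with\n    # s = c + sum_l alpha_l F_il + sum_l beta_l F_jl.\n    s = c + np.dot(alpha, F_i) + np.dot(beta, F_j)\n    return 1.0 / (1.0 + np.exp(-s))\n\\end{verbatim}\n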
\nTable~\\ref{tbl:addhealthlogit} shows the \\textit{KS}~and \\textit{L2}~statistics for the logistic regression method under the R7 and F7 attribute sets.\nThe logistic regression succeeds in the recovery of the degree distributions.\nHowever, it fails to recover the local-clustering properties (clustering coefficient and triad participation) for both sets.\n\n\\begin{table}\n\\caption{\\textit{KS}~and \\textit{L2}~for logistic regression methods fitted to the AddHealth network}\n\\label{tbl:addhealthlogit}\n\\centering \\small\n\\begin{tabular}{c||c|c|c|c|c|c||c}\n\\multicolumn{8}{l}{\\small{}}\\\\\n {\\bf \\textit{KS}} & InD & OutD & SVal & SVec & TP & CCF & Avg \\\\ \\hline \\hline\nR7 & 2.00 & 2.58 & 0.58 & 3.03 & 5.39 & 5.91 & 3.24 \\\\ \\hline\nF7 & 1.59 & 1.59 & 0.52 & 3.03 & 5.43 & 5.91 & 3.00 \\\\ \\hline\n \\multicolumn{8}{l}{\\bf \\textit{L2}} \\\\ \\hline \\hline\nR7 & 0.54 & 0.58 & 0.29 & 1.09 & 3.43 & 2.42 & 1.39 \\\\ \\hline\nF7 & 0.42 & 0.24 & 0.27 & 1.12 & 3.55 & 2.09 & 1.28 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{010intro}\n\n\\section{Multiplicative Attribute Graphs}\n\\label{sec:problem}\n\\input{020problem}\n\n\\section{MAG Parameter Estimation}\n\\label{sec:algorithm}\n\\input{030algorithm}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\input{040experiments}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\input{050conclusion}\n\n\\subsubsection*{Acknowledgements}\nResearch was in-part supported by NSF\nCNS-1010921, \nNSF IIS-1016909, \nAFRL FA8650-10-C-7058,\nLLNL DE-AC52-07NA27344,\nAlbert Yu \\& Mary Bechmann Foundation, IBM,\nLightspeed, Yahoo and the Microsoft Faculty Fellowship.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}