diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdbyc" "b/data_all_eng_slimpj/shuffled/split2/finalzzdbyc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdbyc" @@ -0,0 +1,5 @@ +{"text":"\n\n\\section{Introduction}\n\nNowadays, compute-heavy simulation software typically runs on different types of high-end processors to solve large simulation problems in a reasonable amount of time.\nNevertheless, this high-end hardware requires highly specialized code that is precisely adapted to the respective architecture in order to get anywhere near peak performance. \nWhat is more, in some cases different algorithmic variants are better suited to different kinds of hardware. \nFor example, writing applications for a GPU, a massively parallel device with a very peculiar memory hierarchy, is very different from implementing applications for a CPU.\n\nVery large problems even call for distributed systems in which we have to partition the workload among multiple computers within a network.\nThis entails proper data communication between these systems;\ntransfer latencies should be hidden as much as possible.\n\n\nIn this paper we focus on \\ac{MD} simulations.\nThese study the interactions among particles and how these interactions affect their motion.\nTo achieve peak performance on these simulations, the implementation must consider the best data access pattern for the target architecture.\n\nWe base our implementation \\emph{tinyMD} on AnyDSL---a partial evaluation framework to write high-performance applications and libraries.\nWe compare this implementation with miniMD, a parallel and scalable proxy application that also contains GPU support;\nminiMD is written in C++ and is based on Kokkos~\\cite{6805038}.\nAdditionally, we couple our tinyMD application with the waLBerla \\cite{BAUER2020, godenschwager2013framework} multi-physics simulation framework. 
This allows us to exploit its load-balancing implementation \\cite{doi:10.1137\/15M1035240} within tinyMD simulations. We discuss the advantages that tinyMD provides in order to ease the coupling with different technologies.\n\nWe use the Lennard-Jones potential model to calculate the forces on the atoms in the experiments that compare against miniMD, whereas in the load-balancing experiments we rely on the Spring-Dashpot force model, which is common in \\ac{DEM} simulations~\\cite{cundall79}.\nOur goal is to compare and discuss the differences in implementation and performance between both applications. We present experimental results for single-node performance on both CPU and GPU target processors, and we also show experimental results for multi-node CPU processors on the SuperMUC-NG supercomputer, and for multi-node GPU accelerators on the Piz~Daint supercomputer.\n\n\\subsection{Contributions}\n\nIn summary, this paper makes the following contributions beyond our previous work~\\cite{Schmitt18AnyDSL}:\n\\begin{itemize}\n \\item We present our new tinyMD distributed-memory parallel implementation based upon AnyDSL and discuss its differences to miniMD---a typical C++ implementation based upon the Kokkos library to target GPU devices.\n For example, we use higher-order functions to build array abstractions.\n Due to AnyDSL's partial evaluator these abstractions are not accompanied by any overhead (see \\autoref{sec:impl}).\n \\item We demonstrate how flexibly tinyMD can be implemented with AnyDSL, and how its communication code can be coupled with the waLBerla framework to use its load-balancing feature in tinyMD simulations (see \\autoref{sec:coupling}).\n \\item We show performance and scalability results for various CPU and GPU architectures including multi-CPU results on up to 2048 nodes of the SuperMUC-NG cluster (98304 cores), and multi-GPU results on up to 1024 nodes of the Piz~Daint cluster (see \\autoref{sec:eval}).\n\\end{itemize}\nIn 
order to make this paper as self-contained as possible, \\autoref{sec:background} provides necessary background for both AnyDSL and \\ac{MD} simulations after discussing related work in \\autoref{sec:relwork}.\n\n\\section{Related Work}\n\\label{sec:relwork}\n\n\nThere is a broad effort to port \\ac{MD} simulations to different target architectures while delivering good performance and scalability. The majority of the developed frameworks and applications follow the traditional approach of implementing the simulation code in a general-purpose language.\n\nGROMACS~\\cite{DBLP:journals\/jcc\/SpoelLHGMB05, ABRAHAM201519, pall2015} is a versatile \\ac{MD} package used primarily for dynamical simulations of bio-molecules. It is implemented in C\/C++ and supports most commonly used CPUs and GPUs. The package was initially released in 1991 and has been carefully optimized since then. GROMACS supports \\ac{SIMD} to enhance instruction-level parallelism and hence increase CPU throughput by treating elements as clusters of particles instead of individual particles \\cite{PALL20132641}. 
GROMACS provides various hand-written \\ac{SIMD} kernels for different \\ac{SIMD} \\acp{ISA}.\n\n\nLAMMPS~\\cite{PLIMPTON19951, BROWN2012449} is an \\ac{MD} code with a focus on materials modeling.\nIt contains potentials for soft and solid-state materials, as well as coarse-grain systems.\nIt is implemented in C++ and uses \\ac{MPI} for communication when executed on several nodes.\n\nThe Mantevo proxy application miniMD~\\cite{DBLP:conf\/pgas\/LiLLHTP14,DBLP:journals\/bioinformatics\/RieberM17} is based on LAMMPS and performs and scales well (in the weak-scaling sense) on distributed memory systems, albeit with a very restricted set of features.\nSince miniMD provides a very useful benchmark case for \\ac{MD} simulations in terms of performance and scalability, we choose it as the baseline for tinyMD.\n\nMESA-PD~\\cite{Eibl2019a} is a general particle dynamics framework.\nIts design principle of separating code and data allows arbitrary interaction models to be introduced with ease.\nIt can thus also be used for molecular dynamics. 
Via its code generation approach using Jinja templates it can be adapted very effectively to any simulation scenario.\nAs the successor of the pe rigid particle dynamics framework it inherits its scalability~\\cite{Eibl2018} and advanced load balancing functionalities~\\cite{Eibl2019}.\n\nAll these \\ac{MD} applications have dedicated, hand-tuned codes for each supported target architecture.\nThis is the most common approach for today's \\ac{MD} applications~\\cite{doi:10.1002\/wcms.1121,doi:10.5167\/uzh-19245,doi:10.1002\/jcc.20289}.\nThis requires much more effort, as all these different code variants must not only be implemented, but also optimized, tested, debugged, and maintained independently from each other.\n\nOther applications rely on domain-specific languages to generate parallel particle methods \\cite{10.1145\/3175659}, but this approach requires the development of specific compilation tools that are able to generate efficient code.\nIn this paper we explore the benefits of using the AnyDSL framework, where we shallow-embed \\cite{10.1145\/2692915.2628138, leissa2015} our domain-specific library into its front-end Impala and can then abstract over device, memory layout, and communication patterns through higher-order functions.\nThus, we use the compiler tool-chain provided by AnyDSL and do not need to develop specific compilation tools for \\ac{MD} or particle methods.\n\n\\section{Background}\n\\label{sec:background}\n\n\\subsection{AnyDSL}\n\\label{sec:anydsl}\n\nAnyDSL~\\cite{DBLP:journals\/pacmpl\/LeissaBHPMSMS18} is a compiler framework designed to speed up the development of domain-specific libraries.\nIt consists of three major components:\nthe frontend \\emph{Impala}, \nits \\ac{IR} \\emph{Thorin}~\\cite{DBLP:conf\/cgo\/LeissaKH15}, \nand a runtime system.\nThe syntax of Impala is inspired by Rust and allows both imperative and functional programming.\n\n\\subsubsection{Partial Evaluation}\n\nIn contrast to Rust, Impala features a partial evaluator, 
which is controlled via \\emph{filter expressions}~\\cite{DBLP:conf\/esop\/Consel88}:\n\\begin{lstlisting}\nfn @(?n) pow(x: i32, n: i32) -> i32 {\n if n == 1 {\n x\n } else if n % 2 == 0 {\n let y = pow(x, n \/ 2);\n y * y\n } else {\n x * pow(x, n - 1)\n }\n}\n\\end{lstlisting}\nIn this example, the function \\lstinline{pow} is assigned the filter expression \\mbox{\\lstinline{?n}} (introduced via~\\lstinline{@}).\nThe \\lstinline{?}-operator evaluates to \\lstinline{true} whenever its argument is statically known by the compiler.\nNow, at every call site, the compiler will instantiate the callee's filter expression by substituting all parameters with the corresponding arguments of that call site. \nIf the result of this substitution is \\lstinline{true}, the function will get executed and any call sites within will receive the same treatment.\n\nIn the case of the function \\lstinline{pow} above, this means that the following call site will be partially evaluated because the argument provided for \\lstinline{n} is known at compile-time:\n\\begin{lstlisting}\nlet y = pow(z, 5);\n\\end{lstlisting}\nThe result will be recursively expanded to:\n\\begin{lstlisting}\nlet y = z * pow(z, 4);\n\\end{lstlisting}\nThen:\n\\begin{lstlisting}\nlet z2 = pow(z, 2);\nlet z4 = z2 * z2;\nlet y = z * z4;\n\\end{lstlisting}\nAnd finally:\n\\begin{lstlisting}\nlet z1 = z;\nlet z2 = z1 * z1;\nlet z4 = z2 * z2;\nlet y = z * z4;\n\\end{lstlisting}\nNote how this expansion is performed \\emph{symbolically}.\nIn contrast to C++ templates or \\mbox{\\lstinline[morekeywords=constexpr]|constexpr|essions}, the Impala compiler does not need to know the value of the argument \\lstinline{z} to execute the function \\lstinline{pow}.\n\n\\subsubsection{Triggered Code Generation}\n\nAnother important feature of Impala is its ability to perform \\emph{triggered code generation}.\nImpala offers built-in functions that allow the programmer to execute a given function on the GPU, or to vectorize or parallelize it.\nFor 
instance, the syntax to trigger code generation for GPUs that support CUDA is as follows:\n\\begin{lstlisting}\nlet acc = cuda_accelerator(device_index);\nlet grid = (n, 1, 1);\nlet block = (64, 1, 1);\nwith work_item in acc.exec(grid, block) {\n let id = work_item.gidx();\n buffer(id) = pow(buffer(id), 5);\n}\n\\end{lstlisting}\nThis snippet launches a CUDA kernel on the GPU with index \\lstinline{device_index} on a 1D grid and a block size of $64\\times1\\times1$.\n\n\\subsection{Molecular Dynamics}\n\n\\ac{MD} simulations are widely used today to study the behavior of microscopic structures.\nThese simulations reproduce the interactions among the atoms in these structures while allowing us to observe the evolution of the particle system on a time scale that is simpler to analyze.\n\nDifferent areas resort to simulations of \\ac{MD} systems: material science to study the evolution of specific materials, chemistry to analyze the evolution of chemical processes, and biology to reproduce the behavior of certain proteins and bio-molecules.\n\nA system to be simulated consists of a number of atoms, the initial state (such as the atoms' positions and velocities), and the boundary conditions of the system.\nHere we use \\ac{PBC} in all directions, hence when particles cross the domain boundary, they reappear on the opposite side with the same velocity. \nFundamentally, the evolution of \\ac{MD} systems is computed by solving Newton's second law equation, also known as the equation of motion (\\autoref{eq:newton}). 
Knowing the force, it is possible to track the positions and velocities of atoms by integrating the same equation.\n\\begin{equation}\n F = m \\dot{v} = m a \\label{eq:newton}\n\\end{equation}\nThe force on each atom is based on its interactions with neighboring atoms.\nThis computation is usually described by a potential function or force field.\nMany different potentials can be used to calculate the particle forces, and each one is suitable for different types of simulation.\nIn this work, the potential used for the miniMD comparison is the Lennard-Jones potential (\\autoref{eq:lennard_jones})---a pair potential for van-der-Waals forces.\nWith $x_i$ the position of particle $i$, the force for the Lennard-Jones potential can be expressed as:\n\\begin{equation}\n F_{2}^{LJ}(x_i, x_j) = 24\\epsilon \\left( \\frac{\\sigma}{|x_{ij}|} \\right)^{6} \\left[ 2\\left(\\frac{\\sigma}{|x_{ij}|}\\right)^{6} - 1\\right] \\frac{x_{ij}}{|x_{ij}|^{2}}\n \\label{eq:lennard_jones}\n\\end{equation}\nHere, $x_{ij}$ is the distance vector between the particles $i$ and $j$, $\\epsilon$ determines the width of the potential well, and $\\sigma$ specifies at which distance the potential is~$0$.\n\nFor our load balancing experiments, we use the Spring-Dashpot (\\autoref{eq:spring_dashpot}) contact model to provide a static simulation and to demonstrate the flexibility of our application. This contact model is commonly used in \\ac{DEM} to simulate interactions of rigid bodies (instead of the point masses of MD simulations). Therefore, we consider particles as spheres in these experiments. Consider the position $x_i$ and velocity $v_i$ for the particle $i$. 
Its force is defined as:\n\\begin{align}\n F_{2}^{SD}(x_i, v_i, x_j, v_j) &= K\\xi + \\gamma\\dot{\\xi}\n \\label{eq:spring_dashpot} \\\\\n\\intertext{where}\n \\xi &= \\hat{x}_{ij}(\\sigma - |x_{ij}|)\\Theta(\\sigma - |x_{ij}|) \\\\\n \\dot{\\xi} &= -\\hat{x}_{ij}(\\hat{x}_{ij} \\cdot v_{ij})\\Theta(\\sigma - |x_{ij}|)\n\\end{align}\nwith $K$ being the spring constant, $\\gamma$ the dissipation constant, $\\hat{x}_{ij}$ the unit vector $\\frac{x_{ij}}{|x_{ij}|}$, $\\sigma$ the sphere diameter, and $\\Theta$ the Heaviside step function.\n\nNaively, an \\ac{MD} implementation iterates over all atom pairs in the system.\nThis requires considering a quadratic number of pairs.\nFor short-range interactions, however, we can use a Verlet list~\\cite{PhysRev.159.98} instead to keep track of all nearby particles within a certain \\emph{cutoff radius}.\nThen, we compute the potential force for an atom only for the neighbors stored in its list.\nWe regularly update the Verlet list to keep track of new atoms that enter the cutoff radius.\n\nAlthough it is not necessary to build the Verlet list in every time step, it is still a costly procedure since it requires iterating over each pair of atoms.\nTo speed up its construction, cell lists can be used to segregate atoms according to their spatial position.\nAs long as the cell sizes are equal to or greater than the cutoff radius, it is just necessary to iterate over the neighbor cells to build the Verlet list.\n\\autoref{fig:neighborlists} depicts the creation of the neighbor lists for a particle using cell lists.\nNeighbor lists can be created for just half of the particle pairs (known as half neighbor lists).\nThis allows for simultaneous updates of both particle forces within the same iteration (the computed force is subtracted from the neighbor particle) but requires atomic operations to prevent race conditions.\n\n\\begin{figure}[t]\n\\includegraphics[width=4cm]{neighborlists.pdf}\n\\centering\n\\caption{Neighbor 
list creation example. In this case the neighbor list is built for the red particle, and only the blue particles in the neighbor cells are checked for comparison. Particles within the green area with radius $r$ are inserted into the red particle's neighbor list. The cell size $s$ can be greater than or equal to $r$, but not less than it\\protect\\footnotemark. The value for $r$ is usually the cutoff radius plus a small value (called the Verlet buffer) so that neighbor lists do not have to be updated every time step.}\n\\label{fig:neighborlists}\n\\end{figure}\n\n\\footnotetext{In some implementations the cell size can be less than the cutoff radius; however, the cell neighborhood to be checked must then be extended accordingly.}\n\n\n\\section{The tinyMD Library}\n\\label{sec:impl}\n\nIn this section we introduce and discuss tinyMD\\footnote{\\url{https:\/\/github.com\/AnyDSL\/molecular-dynamics}}.\nWe focus on the main differences in writing portable code with AnyDSL as opposed to traditional C\/C++ implementations.\nWe explore the benefits achieved by using higher-order functions to map code to different target devices and data layouts, and to implement flexible code for \\ac{MPI} communication.\nIn the following we use the term \\emph{particle} to refer to atoms.\n\n\\subsection{Device Mapping}\n\\label{sec:device_mapping}\n\nIn order to map code parts for execution on the target device, tinyMD relies on the \\lstinline{Device} abstraction.\nIt contains functions to allocate memory, transfer data, launch a loop on the target device, and perform even more complex device-dependent procedures such as reductions:\n\\begin{lstlisting}\nstruct Device {\n alloc: fn(i64) -> Buffer,\n transfer: fn(Buffer, Buffer) -> (),\n loop_1d: fn(i32, fn(i32) -> ()) -> (),\n \/\/ ...\n}\n\\end{lstlisting}\nSimilar to a Java interface or abstract virtual methods in C++, a \\mbox{\\lstinline|Device|} instance allows tinyMD to abstract from the concrete implementation of these functions---in 
this case device-specific code.\nUnlike Java interfaces or virtual methods however, the partial evaluator will remove these indirections by specializing the appropriate device-specific code into the call sites.\nEach device supported in tinyMD possesses its own implementation: a function that returns a \\lstinline|struct| instance which contains several functions, and hence, \\enquote{implements the \\lstinline|Device| interface}.\nFor example, the CPU implementation looks like this:\n\\begin{lstlisting}\nfn @device() -> Device {\n Device {\n alloc: |size| { alloc_cpu(size) },\n transfer: |from, to| {}, \/\/ no copy required\n loop_1d: @|n, f| {\n vectorized_range(get_vector_width(), 0, n, |i, _| f(i));\n },\n \/\/ ...\n }\n}\n\\end{lstlisting}\nThe \\lstinline{vectorized_range} function iterates over the particles. \nThis function in turn calls the \\lstinline{vectorize} intrinsic that triggers the \\emph{Region Vectorizer}~\\cite{10.1145\/3296979.3192413} to vectorize the LLVM \\ac{IR} generated by the Impala compiler. Although not covered in this paper, tinyMD employs the \\lstinline|parallel| intrinsic to spawn multiple threads for multiple CPU cores.\n\nThe GPU implementation looks like this:\n\\begin{lstlisting}\nfn @device() -> Device {\n Device {\n alloc: |size| { acc.alloc(size) },\n transfer: |from, to| { copy(from, to) },\n loop_1d: @|n, f| {\n \/\/ build grid_size and block_size for n\n acc.exec(grid_size, block_size, |work_item| {\n let i = work_item.bidx() * work_item.bdimx() + work_item.tidx();\n if i < n { f(i); }\n });\n },\n \/\/ ...\n }\n}\n\\end{lstlisting}\n\nThe \\lstinline|Device| abstraction is sufficient to map our application to different targets: it takes care of separating the parallel execution strategy from the compute kernels and provides basic functions for the device. 
Together with the data management abstractions (see \\autoref{sec:data}) we attain performance portability with substantially less effort.\n\n\\subsection{Data Management}\n\\label{sec:data}\n\nTo store our simulation data, we built an \\lstinline{ArrayData} structure, which defines a multi-dimensional array in tinyMD.\nThis data structure takes care of memory allocation on both device and host (if required).\nThe actual accesses are abstracted away with the help of \\lstinline|ArrayLayout|.\nThis idiom is similar to the \\lstinline|Device| abstraction and makes arrays in tinyMD both target- and layout-agnostic. \n\\begin{lstlisting}\nstruct ArrayData {\n buffer: Buffer,\n buffer_host: Buffer,\n size_x: i32,\n size_y: i32,\n host_mirror: bool\n};\n\nstruct ArrayLayout {\n index_2d_fn: fn(ArrayData, i32, i32) -> i32,\n add_fn: fn(ArrayData, i32, real_t) -> ()\n};\n\\end{lstlisting}\nWith this definition, \\lstinline{ArrayData} just contains sizes for the \\lstinline|x| and \\lstinline|y| dimensions. This can be easily extended to more dimensions, but for our application, two dimensions are enough. The following listing demonstrates how to implement a simple row-major order array layout similar to C arrays (again, this is akin to \\enquote{implementing the \\lstinline|ArrayLayout| interface}):\n\\begin{lstlisting}\nfn @row_major_order_array(is_atomic: bool) -> ArrayLayout {\n ArrayLayout {\n index_2d_fn: @|array, x, y| { x * array.size_y + y },\n add_fn: @|array, i, v| { do_add(is_atomic, array, i, v) }\n }\n}\n\\end{lstlisting}\nNote that the layout can also be made atomic via the \\lstinline{do_add} function, which uses atomic addition when its first argument \\lstinline{is_atomic} is \\lstinline{true}. The AnyDSL partial evaluator takes care of generating the proper add instructions in the final code with zero overhead when the \\lstinline{is_atomic} flag is known at compile-time. 
Similarly, we define a column-major order layout as used in Fortran:\n\\begin{lstlisting}\nfn @column_major_order_array(is_atomic: bool) -> ArrayLayout {\n ArrayLayout {\n index_2d_fn: @|array, x, y| { y * array.size_x + x },\n add_fn: @|array, i, v| { do_add(is_atomic, array, i, v) }\n }\n}\n\\end{lstlisting}\n\nFor a more complex data layout, we show the definition of a clustered array, also known as an Array of Struct of Arrays (AoSoA). This is an array of structs whose members hold clusters of elements as opposed to individual elements. With cluster sizes that are a power of two, we get:\n\\begin{lstlisting}\nfn @clustered_array(is_atomic: bool, cluster_size: i32) -> ArrayLayout {\n let mask = cluster_size - 1;\n let shift = get_shift_size(cluster_size) - 1;\n ArrayLayout {\n index_2d_fn: @|array, x, y| {\n let i = x >> shift;\n let j = x & mask;\n cluster_size * (i * array.size_y + y) + j\n },\n add_fn: @|array, i, v| { do_add(is_atomic, array, i, v) }\n }\n}\n\\end{lstlisting}\n\nFor most common use cases, the cluster size is known at compile-time, which allows AnyDSL to specialize this data layout by directly substituting the pre-computed values of \\lstinline{shift} and \\lstinline{mask}. If more than one cluster size is used in the application, different specialized codes are generated.\n\nTo tie both array structures mentioned above together, we write template functions that perform operations such as \\lstinline{get} and \\lstinline{set} on values. We also provide target functions to abstract whether the device or the host buffer must be used. 
The following snippet shows these target functions and the template for reading \\lstinline{real_t} elements in a 2-dimensional array:\n\n\\begin{lstlisting}\nfn @array_dev(array: ArrayData) -> Buffer { array.buffer }\nfn @array_host(array: ArrayData) -> Buffer { array.buffer_host }\ntype ArrayTargetFn = fn(ArrayData) -> Buffer;\n\nfn @array_2d_get_real(\n target_fn: ArrayTargetFn, layout: ArrayLayout, array: ArrayData,\n i: i32, j: i32) -> real_t {\n\n bitcast[&[real_t]](target_fn(array).data)(layout.index_2d_fn(array, i, j))\n}\n\\end{lstlisting}\n\nIn this case, we also resort to the partial evaluator to generate the specialized functions for all used targets and layouts. We can also map non-primitive data types to our arrays; the following example shows how to map our \\lstinline{Vector3D} structure to N$\\times3$ arrays. The structure contains \\lstinline{x}, \\lstinline{y} and \\lstinline{z} elements of type \\lstinline{real_t}:\n\n\\begin{lstlisting}\n\/\/ Get Vector3D value in 2D array with abstract target and layout\nfn @array_2d_get_vec3(\n target_fn: ArrayTargetFn, layout: ArrayLayout, array: ArrayData,\n i: i32) -> Vector3D {\n\n Vector3D {\n x: array_2d_get_real(target_fn, layout, array, i, 0),\n y: array_2d_get_real(target_fn, layout, array, i, 1),\n z: array_2d_get_real(target_fn, layout, array, i, 2)\n }\n}\n\\end{lstlisting}\n\nNotice that through \\lstinline{set()} and \\lstinline{get()} functions it is also possible to abstract data over scratchpad memory (such as shared or texture memory on GPU devices). This could be done by first staging data into these memories and then providing \\lstinline{set} and \\lstinline{get} functions that operate on them.\n\nFinally, we can use these template functions to implement the abstractions for our particle data. 
For this, we created a \\lstinline{Particle} data structure that holds the proper functions to write and read particle information:\n\n\\begin{lstlisting}\nstruct Particle {\n set_position: fn(i32, Vector3D) -> (),\n get_position: fn(i32) -> Vector3D,\n \/\/ ...\n};\n\\end{lstlisting}\n\nThe \\lstinline{Particle} interface takes care of generating easy-to-use functions that map particle information to our \\lstinline{ArrayData} primitives. These functions can set and get particle properties, iterate over the neighbor lists and perform any operation that relies on the data layout and target. Thus, we can generate \\lstinline{Particle} structures for specific targets (host or device) and for distinct layouts. This is done through the \\lstinline{make_particle} function:\n\n\\begin{lstlisting}\nfn @make_particle(\n grid: Grid, target_fn: ArrayTargetFn,\n vec3_layout: ArrayLayout, nb_layout: ArrayLayout) -> Particle {\n\n Particle {\n set_position: @|i, p| array_2d_set_vec3(target_fn, vec3_layout, grid.positions, i, p),\n get_position: @|i| array_2d_get_vec3(target_fn, vec3_layout, grid.positions, i),\n \/\/...\n }\n}\n\\end{lstlisting}\n\nThe structure generated by \\lstinline{make_particle} contains the specialized functions for the layouts we specified; these functions access buffers in the memory space provided by the target function. 
To achieve a full abstraction, we then put it all together with the device loop and the layout definition:\n\n\\begin{lstlisting}\nfn @ParticleVec3Layout() -> ArrayLayout { row_major_order_array(false) }\nfn @particles(grid: Grid, f: fn(i32, Particle) -> ()) -> () {\n device().loop_1d(grid.nparticles, |i| {\n f(i, make_particle(grid, array_dev, ParticleVec3Layout(), NeighborlistLayout()));\n });\n}\n\\end{lstlisting}\n\nData layout definitions such as \\lstinline{ParticleVec3Layout} can also be written in device-specific code.\nConsider the \\lstinline{NeighborlistLayout} layout as an example: For the CPU it is optimal to store data in a particle-major order to enhance locality per thread during access, thereby improving cache utilization.\nFor GPUs, neighbor-major order is preferable because it enhances coalesced memory access.\nSince each GPU thread computes a different particle, we keep the $n$-th neighbors of all particles contiguous in memory, so that threads access this data in the same iteration, which reduces the number of transactions required to load the entire data.\nHence, the \\lstinline{NeighborlistLayout} has the same definition as the \\lstinline{ParticleVec3Layout} shown above for CPU targets, and is defined as a \\lstinline{column_major_order_array} for GPU targets.\n\nTo show how simple it is to use our \\lstinline{particles} abstraction, consider the following example to compute particle-neighbor potentials:\n\\begin{lstlisting}\nparticles(grid, |i, particle| {\n let pos_i = particle.get_position(i);\n particle.neighbors(i, |j| {\n let pos_j = particle.get_position(j);\n let del = vector_sub(pos_i, pos_j);\n let rsq = vector_len2(del);\n if rsq < rsq_cutoff {\n let f = potential(del, rsq);\n \/\/ update force for i (and j if half neighbor lists is being\n \/\/ used) with the computed force\n }\n });\n});\n\\end{lstlisting}\n\nWe also provide a \\lstinline{compute_potential} syntactic sugar that can be used as 
follows to compute the Lennard-Jones potential (the lambda function passed here corresponds to the \\lstinline{potential} function used previously):\n\n\\begin{lstlisting}\nlet sigma6 = pow(sigma, 6);\ncompute_potential(grid, half_nb, rsq_cutoff, @|del, rsq| {\n let sr2 = 1.0 \/ rsq;\n let sr6 = sr2 * sr2 * sr2 * sigma6;\n let f = 48.0 * sr6 * (sr6 - 0.5) * sr2 * epsilon;\n vector_scale(f, del) \/\/ returns the force to be added\n});\n\\end{lstlisting}\n\nAnyDSL can generate specialized variants of \\lstinline{compute_potential} with full and half neighbor lists through the \\lstinline{half_nb} parameter.\nThis avoids extra conditions in full neighbor list kernels that are only required with half neighbor lists.\nThe same specialization is performed when building the neighbor lists.\n\nAll these abstractions provide a very simple and extensible way to work with different devices and data management.\nThe template functions and layout specifications can be extended to support more complex operations and map to different data types.\nFurthermore, these can be used to yield a domain-specific interface (such as the \\lstinline{Particle} abstraction) to improve the usability of our library.\nThis is accomplished with no overhead in the final generated code with AnyDSL.\n\n\\subsection{Communication}\n\\label{sec:comm}\n\nThe communication in our code can be separated into three routines, as is also done in miniMD.\nThese routines are listed below:\n\n\\begin{itemize}\n \\item Exchange: exchanges particles that leave the domain of the current rank; each such particle becomes a local particle on the process it is sent to. This operation is not performed in every time step, but at an interval of $n$ time steps, where $n$ is adjustable by the application.\n \\item Border definition: determines the particles that are at the border of the current rank's domain; these particles are sent as ghost particles to the neighbor processes. 
This routine is also performed every $n$ time steps.\n \\item Synchronization: uses the border particles determined during border definition, and just sends them in every time step of the simulation. Since the number of particles to be sent is known beforehand, it is a less costly routine.\n\\end{itemize}\n\nWe provide a generic way to abstract the communication pattern in these routines through a higher-order function named \\lstinline{communication_ranks}.\nThis function receives conditional functions that check whether particles must be exchanged or sent as ghost particles to neighbor ranks.\n\\ac{PBC} correction can also be properly applied using these conditional functions, which greatly improves the flexibility of our communication code.\n\nThese conditional functions also help when coupling tinyMD with waLBerla because they separate the logic that checks particle conditions from the packing and MPI send\/recv routines, which stay untouched.\nWhen using waLBerla domain partitioning, particle positions must be checked against the supplied waLBerla data structures (block forest domain), whereas for the miniMD pattern we can just write simple comparisons in the six directions.\nThe \\lstinline{communication_ranks} implementation separating both strategies is listed as follows:\n\n\\begin{lstlisting}\n\/\/ Types for condition and communication functions\ntype CondFn = fn(Vector3D, fn(Vector3D) -> ()) -> ();\ntype CommFn = fn(i32, i32, CondFn, CondFn) -> ();\nfn communication_ranks(grid: Grid, body: CommFn) -> () {\n if use_walberla() { \/\/ Use walberla for communication\n \/\/...\n range(0, nranks as i32, |i| {\n let rank = get_neighborhood_rank(i);\n body(rank, rank, \/\/ Conditions to send and receive from rank\n @|pos, f| { \/* Check for border condition with walberla *\/ },\n @|pos, f| { \/\/ Exchange condition\n for j in range(0, get_rank_number_of_aabbs(i)) {\n let p = pbc_correct(pos); \/\/ PBC correction\n if is_within_domain(p, 
get_neighbor_aabb(get_rank_offset(i) + j)) {\n f(p);\n break()\n }\n }\n });\n });\n } else { \/\/ Use 6D stencil communication like miniMD\n body(xnext, xprev, \/\/ Conditions to send to xnext\n @|p, f| { if p.x > aabb.xmax - spacing * 2.0 { f(pbc_correct(p)); }},\n @|p, f| { if p.x > aabb.xmax - spacing { f(pbc_correct(p)); }});\n body(xprev, xnext, \/\/ Conditions to send to xprev\n @|p, f| { if p.x < aabb.xmin + spacing * 2.0 { f(pbc_correct(p)); }},\n @|p, f| { if p.x < aabb.xmin + spacing { f(pbc_correct(p)); }});\n \/\/ Analogous for y and z dimensions\n }\n}\n\\end{lstlisting}\n\nThe \\lstinline{communication_ranks} function can be used to pack particles and to perform the MPI communication since it provides the functions to check which particles must be packed as well as the ranks to send data to and receive data from.\nThe following snippet shows a simple usage to obtain the particles to be exchanged:\n\n\\begin{lstlisting}\ncommunication_ranks(grid, |rank, _, _, exchange_positions| {\n particles(grid, |i, particle| {\n exchange_positions(particle.get_position(i), @|pos| {\n \/\/ Here pos is a particle position that must be exchanged,\n \/\/ it already contains the proper PBC adjustments\n });\n });\n});\n\\end{lstlisting}\n\nIn miniMD, a 6-stencil communication pattern is hard-coded into the simulation, which makes the code inflexible to couple with other technologies.\nThis also shows an important benefit of higher-order functions, as code can be more easily coupled with other technologies by simply replacing functionality.\n\n\\subsection{Summary of Benefits}\n\nThis section summarizes the benefits of using the AnyDSL framework for both tinyMD and \\ac{MD} applications in general.\n\n\\paragraph{Separation of Concerns}\nWe separate our force potential computation logic from device-specific code and rely on higher-order functions to map it to the proper target.\nThis substantially reduces the effort in writing portable applications.\nWe employ the same technique to 
abstract other procedures such as the \\ac{MPI} communication calls.\nHere, we adopt higher-order functions to define communication patterns (see \\autoref{sec:comm}).\n\n\\paragraph{Layers of Abstractions}\nWe hide device-dependent code like our data layout (see \\autoref{sec:data}) or device mapping (see \\autoref{sec:device_mapping}) behind functions.\nThese abstractions have zero overhead due to the partial evaluator (see below).\n\n\\paragraph{Compile-Time Code Specialization}\nWe utilize Impala's partial evaluator to generate faster specialized code when one or more parameters are known at compile-time.\nThis significantly reduces the amount and complexity of the code while still being able to generate all desired variants at compile time.\n\n\\section{Coupling tinyMD with waLBerla}\n\\label{sec:coupling}\n\nIn this section, we briefly present the fundamental concepts behind waLBerla needed to understand its load balancing mechanism.\nThe most important characteristic is its domain partitioning using a forest of octrees called a block forest. This kind of partitioning allows us to refine blocks in order to manage and distribute regions at finer granularity.\nFurthermore, we explain how this block forest feature, written in C++, is integrated into our tinyMD Impala code.\n\nwaLBerla is a modern multi-physics simulation framework that supports the massive parallelism of current petascale and future exascale supercomputers.\nDomain partitioning in waLBerla is done through a forest of octrees. The leaf nodes, so-called blocks, can be coarsened or refined and distributed among different processes. 
\\autoref{fig:block_forest} depicts the waLBerla forest of octrees data structure with its corresponding domain partitioning.\n\nFor each local block, waLBerla keeps track of its neighbor blocks and the process ranks that own them.\nThis information is important to determine with which processes we must communicate and against which blocks\/subdomains we need to compare our particle positions.\n\n\\begin{figure}[t]\n\\includegraphics[width=12cm]{walberla_block_forest.pdf}\n\\centering\n\\caption{Schematic 2D illustration of the waLBerla block forest domain partitioning. On the left, the forest of octrees is depicted, where it is possible to observe the refinement of the blocks. On the right, the domain partitioning for the corresponding block forest is depicted. In 3D simulations, refined blocks produce 8 child blocks in the tree instead of 4.}\n\\label{fig:block_forest}\n\\end{figure}\n\nTo balance the simulation such that work is evenly distributed among the processes, it is first necessary to assign weights to each block in the domain.\nIn this paper we use the number of particles located in a block as the weight of the block.\nThe weight is not only used during the distribution of blocks; it is also used to determine whether a block should be refined (when it reaches an upper threshold) or merged (when it reaches a lower threshold).\nDifferent algorithms are available in waLBerla to distribute the workload. The algorithms can be categorized into space-filling curves \\cite{bader2013space}, graph partitioning and diffusive schemes.\nIn this paper we concentrate on space-filling curves, more specifically the Hilbert \\cite{Campbell2003} and Morton \\cite{Morton1966} (or z-order) curves.\n\nCoupling applications to combine their functionality can save a lot of effort, as one does not have to re-implement the functionality.\nIn this paper, we chose to couple the load balancing implementation from waLBerla with tinyMD. 
As this part of the code does not have to be portable to different devices, we can rely on an external framework.\nOne could try to implement portable code for it, but the amount of work and complexity to do so may not be worth the benefits.\nNevertheless, we can exploit AnyDSL to optimize our simulation kernels, memory management and other parts of the application where this is advantageous, and then rely on existing implementations to avoid redoing work that does not give benefits in the end.\n\nSince our Impala implementation is eventually compiled to LLVM IR, we can link C\/C++ code with it using Clang.\nTherefore, we write an interface between Impala and C++ in order to do the coupling. \\autoref{sec:comm} already presented how our communication code is integrated through the usage of routines that fetch information provided by waLBerla.\nIn this section, we focus on how these routines and other parts of the load balancing are written.\n\nThe first step is to initialize the waLBerla data structures: the block forest is created using the bounding box information from tinyMD.\nWhen the block forest for a process is adjusted due to the balancing, the tinyMD bounding box is also changed, and since tinyMD does not have a block forest implementation, we just transform the waLBerla block forest into a simple AABB structure.\nThis is done by performing a union of all blocks that belong to the current process.\nIn the end, the domain can occupy a larger region than necessary, but this does not affect the results, just the memory allocation.\n\n\\begin{figure}[t]\n\\includegraphics[width=12cm]{walberla_domain_to_tinymd.pdf}\n\\centering\n\\caption{Transformation from the waLBerla block forest domain to the tinyMD regular AABB. A union is performed so that all blocks that belong to the current process are converted to tinyMD's simple AABB structure, which only supports rectangular domains. 
Cropping is performed to remove empty space and reduce the amount of allocated memory. In simulations where the empty space is not significant, the crop operation can be skipped.}\n\\label{fig:aabb_transformation}\n\\end{figure}\n\nIn some extreme cases where we only fill part of the global domain with particles, the whole empty part of the domain can end up being assigned to a single process, causing it to allocate an amount of memory proportional to the global domain.\nTo mitigate this issue, we simply define a process bounding box just large enough to comprise its particles---or in other words, we crop the domain.\nSince the tinyMD bounding box is only used to define cells and distribute the particles over them, this does not affect the correctness of the simulation.\n\\autoref{fig:aabb_transformation} depicts how the union and cropping transform the grid in tinyMD.\n\nTo crop the AABB, a reduction operation is required. The following code shows how this is performed using the \\lstinline{reduce_aabb} function abstracted by the device:\n\n\\begin{lstlisting}\nlet b = AABB {\n xmin: grid.aabb.xmax,\n xmax: grid.aabb.xmin,\n \/\/ ... analogous to y and z\n};\n\nlet red_aabb_fn = @|aabb1: AABB, aabb2: AABB| {\n AABB {\n xmin: select(aabb1.xmin < aabb2.xmin, aabb1.xmin, aabb2.xmin),\n xmax: select(aabb1.xmax > aabb2.xmax, aabb1.xmax, aabb2.xmax),\n \/\/ ... analogous to y and z\n }\n};\n\nlet aabb = device().reduce_aabb(grid.nparticles, b, red_aabb_fn, @|i| {\n let pos = particle.get_position(i);\n AABB {\n xmin: pos.x,\n xmax: pos.x\n \/\/ ... 
analogous to y and z\n }\n});\n\\end{lstlisting}\n\nNote that \\lstinline{reduce_aabb} is device-specific and is optimally implemented on GPU, which also demonstrates the benefits obtained through the \\lstinline{Device} abstraction presented in \\autoref{sec:device_mapping}.\nOne can also observe how simple it is with AnyDSL to execute device code with non-primitive data types, in this case the \\lstinline{AABB} structure.\nFurthermore, reduction can also be used to count the number of particles within a domain, which is useful to obtain the weights for the load-balancing. The following code shows how this is written in tinyMD:\n\n\\begin{lstlisting}\nlet sum = @|a: i32, b: i32| { a + b };\ncomputational_weight = device().reduce_i32(nparticles, 0, sum, |i| {\n select(is_within_domain(particle.get_position(i), aabb), 1, 0)\n}) as u32;\n\ncommunication_weight = device().reduce_i32(nghost, 0, sum, |i| {\n select(is_within_domain(particle.get_position(nparticles+i), aabb), 1, 0)\n}) as u32;\n\\end{lstlisting}\n\nThe next step for the coupling is to provide a function in tinyMD to update the neighborhood of a process.\nWe first build the list of neighbors using the waLBerla API in C++ and then call a tinyMD function to update the neighborhood data in tinyMD.\nAn important part of the process is to perform a conversion from the dynamic C++ container data types to simple arrays of \\lstinline{real_t} (in the case of the boundaries) and integers (in the case of neighbor process ranks and the number of blocks per neighbor process).\nThis conversion is necessary because (a) Impala does not support these dynamic types from C++ and (b) code that executes on GPU, more specifically the particle position checking presented in \\autoref{sec:comm}, also does not support these dynamic data types.\n\nFinally, it is also necessary to perform serialization and de-serialization of our particle data during the balancing step.\nThis is required because the block data must be transferred to their new process 
owners during the distribution.\nSince waLBerla is an extensible framework, it provides means to call custom procedures when a block must be moved from or to another process.\nThis allows us to implement (de)serialization functions in tinyMD, which also update the local particles that are exchanged.\nThe communication in these routines is entirely handled by waLBerla, hence no MPI calls are performed by tinyMD.\n\n\\section{Evaluation}\n\\label{sec:eval}\n\nWe evaluated tinyMD as well as miniMD on several CPU and GPU architectures.\nWe chose the following CPUs:\n\n\\medskip\n\\begin{tabular}{l@{\\phantom{X}}lr}\n \\textbf{Cascade Lake:} & Intel(R) Xeon(R) Gold 6246 CPU & @ 3.30\\,GHz \\\\\n \\textbf{Skylake:} & Intel(R) Xeon(R) Gold 6148 CPU & @ 2.40\\,GHz \\\\\n \\textbf{Broadwell:} & Intel(R) Xeon(R) CPU E5-2697 v4 & @ 2.30\\,GHz\n\\end{tabular}\n\\medskip\n\n\\noindent\nAnd the following GPUs:\n\n\\medskip\n\\begin{tabular}{l@{\\phantom{X}}lr}\n \\textbf{Pascal:} & GeForce GTX 1080 & ( 8\\,GB memory) \\\\\n \\textbf{Turing:} & GeForce RTX 2080 Ti & (11\\,GB memory) \\\\\n \\textbf{Volta:} & Tesla V100-PCIe-32GB & (32\\,GB memory)\n\\end{tabular}\n\\medskip\n\nWe ran each simulation over 100 time steps---each with a step size of 0.005.\nWe performed particle distribution over cells and reneighboring every 20 time-steps with double-precision floating point and full neighbor interaction.\nWe use the Lennard-Jones potential with parameters $\\epsilon = 1$ and $\\sigma = 1$ (see \\autoref{eq:lennard_jones}). 
The particles setup in tinyMD is the same as in miniMD, as are the cutoff radius of 2.5 and the Verlet buffer of 0.3.\n\nFor multi-node benchmarks, particles are exchanged with neighbor ranks every 20 time-steps before distribution over cells and reneighboring, and communication to update the particle positions within the ghost layers is done every time-step with neighbor ranks.\n\nFor CPU, tests were performed on a single core (no parallelism) with fixed frequency; we simulated a system configuration of $32^3$ unit cells with 4~particles per unit cell.\nFor tinyMD, the Impala compiler generates Thorin code that is further compiled with Clang\/LLVM~8.0.0.\nFor miniMD, we use the Intel Compiler (icc) 18.0.5.\n\n\\colorlet{force_color_cpu}{blue!70}\n\\colorlet{neigh_color_cpu}{blue!50}\n\\colorlet{other_color_cpu}{blue!30}\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}[scale=0.95]\n \\pgfplotsset{\n ybar stacked, ymin=0, ymax=18, xmin=0.5, xmax=3.5, xtick=data,\n xtick={1,...,3},\n xticklabels={Broadwell, Cascade Lake, Skylake},\n xticklabel style={yshift=-10pt},\n ylabel={time to solution (s)}, ylabel style={yshift=-1ex},\n legend cell align={left},\n \/pgf\/bar width=8pt,\n scatter\/position=absolute,\n node near coords style={\n font=\\footnotesize,\n at={(axis cs:\\pgfkeysvalueof{\/data point\/x},\\pgfkeysvalueof{\/pgfplots\/ymin})},\n anchor=north,\n yshift={-\\pgfkeysvalueof{\/pgfplots\/major tick length} + 4pt},\n },\n }\n \\begin{axis}[bar shift=-16pt, nodes near coords style={xshift=0pt},\n legend pos = outer north east, legend style = {name = minimd}]\n \\addplot [fill=force_color_cpu, nodes near coords=A] table [x=arch, y=minimd_ref] {single_cpu_force.txt};\n \\addplot [fill=neigh_color_cpu] table [x=arch, y=minimd_ref] {single_cpu_neigh.txt};\n \\addplot [fill=other_color_cpu] table [x=arch, y=minimd_ref] {single_cpu_other.txt};\n \\legend{Force, Neigh, Other}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{A:} 
miniMD (ref)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{B:} miniMD (Kokkos)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{C:} tinyMD (AoS)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{D:} tinyMD (SoA)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{E:} tinyMD (AoSoA)}\n \\end{axis}\n \\begin{axis}[bar shift= -8pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_cpu, nodes near coords=B] table [x=arch, y=minimd_kokkos] {single_cpu_force.txt};\n \\addplot [fill=neigh_color_cpu] table [x=arch, y=minimd_kokkos] {single_cpu_neigh.txt};\n \\addplot [fill=other_color_cpu] table [x=arch, y=minimd_kokkos] {single_cpu_other.txt};\n \\end{axis}\n \\begin{axis}[bar shift= 0pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_cpu, nodes near coords=C] table [x=arch, y=tinymd_aos] {single_cpu_force.txt};\n \\addplot [fill=neigh_color_cpu] table [x=arch, y=tinymd_aos] {single_cpu_neigh.txt};\n \\addplot [fill=other_color_cpu] table [x=arch, y=tinymd_aos] {single_cpu_other.txt};\n \\end{axis}\n \\begin{axis}[bar shift= 8pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_cpu, nodes near coords=D] table [x=arch, y=tinymd_soa] {single_cpu_force.txt};\n \\addplot [fill=neigh_color_cpu] table [x=arch, y=tinymd_soa] {single_cpu_neigh.txt};\n \\addplot [fill=other_color_cpu] table [x=arch, y=tinymd_soa] {single_cpu_other.txt};\n \\end{axis}\n \\begin{axis}[bar shift= 16pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_cpu, nodes near 
coords=E] table [x=arch, y=tinymd_aosoa] {single_cpu_force.txt};\n \\addplot [fill=neigh_color_cpu] table [x=arch, y=tinymd_aosoa] {single_cpu_neigh.txt};\n \\addplot [fill=other_color_cpu] table [x=arch, y=tinymd_aosoa] {single_cpu_other.txt};\n \\end{axis}\n\\end{tikzpicture}\n\\vspace{-3ex}\n\\caption{Execution time in seconds (lower is better) for force calculation and neighbor list creation in a 100 time-step simulation on CPU architectures. Simulations were performed with $32^3$ unit cells with 4~particles per unit cell. Tests were performed on a single core (no parallelism) with fixed frequency. For AVX (Broadwell), AnyDSL emitted code with worse performance due to data gather and scatter operations; therefore, results for scalar instructions are shown.}\n\\vspace{-2ex}\n\\centering\n\\label{fig:cpu_single_node_results}\n\\end{figure}\n\nFor GPU benchmarks we simulate a system configuration of $80^3$ unit cells with 4 particles per unit cell ($2{,}048{,}000$ particles in total). Both versions use the CUDA compilation tools V9.2.148, with CUDA driver version 10.2 and NVRTC version 9.1; for miniMD, the Kokkos variant is required to execute on GPUs.\n\n\\colorlet{force_color_gpu}{green!60!black}\n\\colorlet{neigh_color_gpu}{green!40!black}\n\\colorlet{other_color_gpu}{green!20!black}\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}[scale=0.95]\n \\pgfplotsset{\n ybar stacked, ymin=0, ymax=5, xmin=0.5, xmax=3.5, xtick=data,\n xtick={1,...,3},\n xticklabels={Pascal, Turing, Volta},\n xticklabel style={yshift=-10pt},\n ylabel={time to solution (s)}, ylabel style={yshift=-1ex},\n legend cell align={left},\n \/pgf\/bar width=8pt,\n scatter\/position=absolute,\n node near coords style={\n font=\\footnotesize,\n at={(axis cs:\\pgfkeysvalueof{\/data point\/x},\\pgfkeysvalueof{\/pgfplots\/ymin})},\n anchor=north,\n yshift={-\\pgfkeysvalueof{\/pgfplots\/major tick length} + 4pt},\n },\n }\n \\begin{axis}[bar shift=-12pt, nodes near coords style={xshift=0pt},\n legend pos = 
outer north east, legend style = {name = minimd}]\n \\addplot [fill=force_color_gpu, nodes near coords=A] table [x=arch, y=minimd] {single_gpu_force.txt};\n \\addplot [fill=neigh_color_gpu] table [x=arch, y=minimd] {single_gpu_neigh.txt};\n \\addplot [fill=other_color_gpu] table [x=arch, y=minimd] {single_gpu_other.txt};\n \\legend{Force, Neigh, Other}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{A:} miniMD (Kokkos)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{B:} tinyMD (AoS)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{C:} tinyMD (SoA)}\n \\addlegendimage{empty legend}\n \\addlegendentry{\\hspace{-.325cm}\\textbf{D:} tinyMD (AoSoA)}\n \\end{axis}\n \\begin{axis}[bar shift= -4pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_gpu, nodes near coords=B] table [x=arch, y=tinymd_aos] {single_gpu_force.txt};\n \\addplot [fill=neigh_color_gpu] table [x=arch, y=tinymd_aos] {single_gpu_neigh.txt};\n \\addplot [fill=other_color_gpu] table [x=arch, y=tinymd_aos] {single_gpu_other.txt};\n \\end{axis}\n \\begin{axis}[bar shift= 4pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_gpu, nodes near coords=C] table [x=arch, y=tinymd_soa] {single_gpu_force.txt};\n \\addplot [fill=neigh_color_gpu] table [x=arch, y=tinymd_soa] {single_gpu_neigh.txt};\n \\addplot [fill=other_color_gpu] table [x=arch, y=tinymd_soa] {single_gpu_other.txt};\n \\end{axis}\n \\begin{axis}[bar shift= 12pt, nodes near coords style={xshift=0pt},\n legend style = {at = {([yshift = -1mm]minimd.south west)},\n anchor = north west}]\n \\addplot [fill=force_color_gpu, nodes near coords=D] table [x=arch, y=tinymd_aosoa] {single_gpu_force.txt};\n \\addplot [fill=neigh_color_gpu] table [x=arch, y=tinymd_aosoa] 
{single_gpu_neigh.txt};\n \\addplot [fill=other_color_gpu] table [x=arch, y=tinymd_aosoa] {single_gpu_other.txt};\n \\end{axis}\n\\end{tikzpicture}\n\\vspace{-3ex}\n\\caption{Execution time in seconds (lower is better) for force calculation and neighbor list creation in a 100 time-step simulation on GPU architectures. Simulations were performed with $80^3$ unit cells with 4~particles per unit cell.}\n\\vspace{-2ex}\n\\centering\n\\label{fig:single_gpu_results}\n\\end{figure}\n\n\\autoref{fig:cpu_single_node_results} depicts the force calculation and neighbor list creation times for tinyMD and miniMD on the CPU architectures.\nFor the AVX Broadwell architecture, tinyMD produced poorly vectorized code because of the gather and scatter operations, and therefore the results for scalar operations are used; this is why the miniMD performance is much better on this architecture.\nNote that these pairwise interaction kernels do not provide a straightforward way to vectorize, hence the compiler requires sophisticated vectorization analysis algorithms to achieve good performance within CPU cores.\nFor the AVX512 processors Cascade Lake and Skylake, AnyDSL produced more competitive code compared to miniMD, although the generated code is still inferior to the one generated by the Intel compiler.\n\nThis demonstrates a limitation of AnyDSL, as the auto-vectorizer is not capable of generating the most efficient variant for the potential kernels. This can be due either to (a) the compiler itself (AnyDSL\/Impala code must be compiled with Clang) or to (b) the transformations performed by the AnyDSL compiler.\n\nIn all of the cases, however, it is possible to notice that the neighbor list creation performance is better for tinyMD (on AVX512 processors, it outperforms miniMD by more than a factor of two).\nThis can be a result of both (a) specialized code generation for both half- and full-neighbor list creation and (b) usage of vectorization instructions to check for 
particle distances.\nAs for the data layout, the structure of arrays is the best layout for the particle \\lstinline{Vector3D} data on CPU.\nThis data layout enhances locality for data in the same dimension, and therefore speeds up gathering data into the vector registers.\n\n\\autoref{fig:single_gpu_results} depicts the force calculation and neighbor list creation times for tinyMD and miniMD on the GPU architectures.\nFor the Pascal and Volta architectures, all tinyMD variants outperform miniMD. It is also noticeable that the performance advantage comes from the force compute time, which demonstrates that tinyMD can generate performance-portable MD kernels for different devices.\nFor the Turing architecture, miniMD is slightly faster than the slower tinyMD variants, and the fastest one (array of structures) moderately outperforms miniMD.\nThe time difference between the distinct \\lstinline{Vector3D} array data layouts on GPU is mostly concentrated in the neighbor list creation, where we can observe that the array of structures delivers the best performance.\n\n\\subsection{Weak Scalability}\n\nFor the CPU weak scalability tests we chose the same miniMD setup used for the single core evaluation.\nFor every node involved, a $96^3$ system of unit cells was included in the simulation domain. Hence, the total number of particles simulated is $3{,}538{,}944 \\times \\text{number\\_of\\_nodes}$.\n\nThe tests were executed on the SuperMUC-NG supercomputer. MPI ranks are mapped to the cores of two Intel Skylake Xeon Platinum 8174 processors (24 physical cores each) on each node. The processors are accompanied by a total of \\SI{96}{GB} of main memory. All physical cores were used, resulting in 48 MPI ranks per node.\n\n\\autoref{fig:weak_scaling_cpu} depicts the time to solution for both tinyMD and miniMD executing on different numbers of nodes. 
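For reference, the per-node and per-rank problem sizes in this weak-scaling setup follow from simple arithmetic:

\[
96^3 \times 4 = 3{,}538{,}944 \ \text{particles per node},
\qquad
3{,}538{,}944 \div 48 = 73{,}728 \ \text{particles per MPI rank}.
\]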
Although miniMD delivered superior results in our single core experiments, we can notice that for this configuration tinyMD performs better. This happens because for this configuration the neighbor list creation and communication times outweigh the faster force-field kernel in miniMD. tinyMD keeps perfect scaling for all node counts, whereas miniMD starts to degrade its parallel efficiency beyond 512 nodes. Therefore tinyMD provides very competitive weak-scaling results in comparison to state-of-the-art applications.\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}[scale=0.80]\n \\tikzstyle{every node}=[font=\\small]\n \\begin{axis}[\n width=\\textwidth, height=8cm,\n xmode=log,log basis x={2},\n \n xmin=1, xmax=2048,\n ymin=0, ymax=5,\n log ticks with fixed point,\n xlabel={\\# nodes}, xlabel style={yshift= 1ex},\n xticklabel={\n \\pgfkeys{\/pgf\/fpu=true}\n \\pgfmathparse{int(2^\\tick)}\n \\pgfmathprintnumber[fixed]{\\pgfmathresult}\n },\n ylabel={time to solution (s)}, ylabel style={yshift=-1ex},\n grid=major,\n legend pos=south east,\n \n ]\n \\addplot table[x=nodes,y=tinymd] {scaling_cpu.txt};\n \\addplot table[x=nodes,y=minimd] {scaling_cpu.txt};\n \\legend{tinyMD, miniMD}\n \\end{axis}\n\\end{tikzpicture}\n\\vspace{-3ex}\n\\caption{Weak-scaling comparison between tinyMD and miniMD for CPU on SuperMUC-NG. 
For 256 and 1024 nodes miniMD crashed with memory violation errors and, hence, results were interpolated.}\n\\vspace{-2ex}\n\\label{fig:weak_scaling_cpu}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}[scale=0.80]\n \\tikzstyle{every node}=[font=\\small]\n \\begin{axis}[\n width=\\textwidth, height=8cm,\n xmode=log,log basis x={2},\n \n xmin=1, xmax=1024,\n ymin=0, ymax=1,\n log ticks with fixed point,\n xlabel={\\# nodes}, xlabel style={yshift= 1ex},\n xticklabel={\n \\pgfkeys{\/pgf\/fpu=true}\n \\pgfmathparse{int(2^\\tick)}\n \\pgfmathprintnumber[fixed]{\\pgfmathresult}\n },\n ylabel={time to solution (s)}, ylabel style={yshift=-1ex},\n grid=major,\n legend pos=south east,\n \n ]\n \\addplot table[x=nodes,y=tinymd] {scaling_gpu.txt};\n \\legend{tinyMD}\n \\end{axis}\n\\end{tikzpicture}\n\\vspace{-3ex}\n\\caption{Weak-scaling results for tinyMD on the Piz~Daint supercomputer. Each process is mapped to a node with one NVIDIA Tesla P100 16GB GPU.}\n\\vspace{-2ex}\n\\label{fig:weak_scaling_gpu}\n\\end{figure}\n\nWe performed the experimental tests for GPU weak-scalability on the Piz~Daint supercomputer using the XC50 compute nodes.\nEach node consists of an NVIDIA Tesla P100 16GB GPU, together with an Intel Xeon E5-2690 v3 @ 2.60GHz processor with 12 cores, and \\SI{64}{GB} of main memory.\nEach MPI rank is mapped to a GPU in our simulation---so one rank per node.\nFor each GPU, a $50^3$ system of unit cells was included in the simulation domain, resulting in a total of $500{,}000 \\times \\text{number\\_of\\_gpus}$ particles.\n\n\\autoref{fig:weak_scaling_gpu} depicts the time to solution for tinyMD on GPU. 
From 1 to 4 nodes, tinyMD executes faster than for more nodes, which is expected because with fewer than 8 nodes there is no remote communication in all directions.\nSince GPU compute kernels are much faster than their CPU counterparts, remote communication consumes a larger fraction of the total time, which can affect the scalability.\nFrom 8 to 1024 nodes, where remote communication is performed in all directions, tinyMD presents essentially perfect scalability.\n\nFor miniMD, we were not able to produce weak-scalability results because we could not compile it properly on the Piz~Daint GPU cluster. The current compiler versions available on the cluster delivered error messages during compilation, and the ones that were able to compile generated faulty builds of miniMD, which delivered unclear MPI errors at runtime. Nevertheless, the presented results for tinyMD are enough to demonstrate its weak-scaling capability on GPU clusters.\n\n\\subsection{Load Balancing}\n\nFor the load balancing experiments, we also execute our tests on SuperMUC-NG with 48 CPU cores per node. In these experiments, the Spring-Dashpot contact model was used, with the stiffness and damping values set to zero, which means that particles remain static during the simulation. In order to measure the balancing efficiency, we distribute the particles evenly in half of the domain, using a diagonal axis to separate the regions with and without particles. When more nodes are used in the simulation, the domain is also extended in the same proportion, hence the ratio of particles to nodes remains the same.\n\nWe perform 1000 time steps during the simulation, and the load balancing is performed before the simulation begins. This provides a good way to measure the load balancing efficiency, because we can expect a speedup close to a factor of two. We use a system of $96^3$ unit cells per node (48 cores), with $442{,}368$ particles per node. 
We performed experiments with the Hilbert and Morton space-filling curve methods to balance the simulation.\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}[scale=0.80]\n \\tikzstyle{every node}=[font=\\small]\n \\begin{axis}[\n width=\\textwidth, height=8cm,\n xmode=log,log basis x={2},\n \n xmin=1, xmax=2048,\n ymin=0, ymax=45,\n log ticks with fixed point,\n xlabel={\\# nodes}, xlabel style={yshift= 1ex},\n xticklabel={\n \\pgfkeys{\/pgf\/fpu=true}\n \\pgfmathparse{int(2^\\tick)}\n \\pgfmathprintnumber[fixed]{\\pgfmathresult}\n },\n ylabel={time to solution (s)}, ylabel style={yshift=-1ex},\n grid=major,\n \n legend pos=south east,\n \n \n ]\n \\addplot table[x=nodes,y=morton] {lb_cpu.txt};\n \\addplot table[x=nodes,y=hilbert] {lb_cpu.txt};\n \\addplot table[x=nodes,y=no_lb] {lb_cpu.txt};\n \\legend{Morton, Hilbert, Imbalanced}\n \\end{axis}\n\\end{tikzpicture}\n\\vspace{-3ex}\n\\caption{Load-balancing results for tinyMD on SuperMUC-NG with the Spring-Dashpot contact model. For each node involved, the domain is extended by a $96^3$ system of unit cells, and particles are then distributed through half of the domain using a diagonal axis.}\n\\vspace{-2ex}\n\\label{fig:load_balancing}\n\\end{figure}\n\n\\autoref{fig:load_balancing} depicts the time to solution for the load balancing experiments. Both balanced and imbalanced simulations scale well during the experiments, and it is also possible to notice the performance benefit of the balanced simulations. As previously mentioned, the benefit is close to a factor of two, and the difference between the Morton and Hilbert methods is not significant, with Hilbert being more efficient at some node counts.\n\nAlthough the load balancing feature works in GPU simulations, the communication code using the waLBerla domain partitioning takes a considerable fraction of the total time. 
Therefore, a different strategy for this communication code is necessary on GPU accelerators to reduce the fraction of communication time relative to the potential kernels and thus benefit from load-balancing. For this reason, we only show the experimental results on SuperMUC-NG CPU compute nodes.\n\n\\section{Conclusion}\n\\label{sec:concl}\n\nThis paper presents tinyMD: an efficient, portable, and scalable implementation of an \\ac{MD} application using the AnyDSL partial evaluation framework.\nTo evaluate tinyMD, we compare it with miniMD, a C++ implementation that relies on the Kokkos library for portability to GPU accelerators.\nWe discuss the implementation differences regarding code portability, data layout and MPI communication.\n\nTo achieve performance-portability on the most recent processors and supercomputers, we provide abstractions in AnyDSL that allow our application to be mapped to distinct hardware and be compiled with different data layouts.\nAll this can be done with zero overhead due to partial evaluation, which is one of the main advantages we get when using AnyDSL.\n\nMoreover, we also couple our application with the waLBerla multi-physics framework to use its load balancing mechanism within tinyMD simulations.\nThis emphasizes how our Impala code can be coupled to other implementations, avoiding work that would not yield benefits.\nFurthermore, we show how this coupling can be made easier by using higher-order functions in the tinyMD communication code to abstract the communication pattern.\nThis permits us to insert the waLBerla communication logic into tinyMD by just replacing functionality.\n\nPerformance results show that tinyMD is competitive with miniMD on both single CPU cores and single GPU accelerators, as well as on supercomputers running on several compute nodes.\nWeak scalability results on the SuperMUC-NG and Piz~Daint supercomputers demonstrate that tinyMD achieves perfect 
scaling on top supercomputers for the presented benchmarks.\nThe load-balancing results on SuperMUC-NG demonstrate that our strategy for coupling tinyMD and waLBerla works as expected, since the balanced simulations in our experiments reach a speedup close to a factor of two compared to the imbalanced simulations when filling just half of the domain.\n\n\\section{The double side of leptogenesis}\n\nLeptogenesis \\cite{fy} realizes a highly non-trivial link\nbetween two completely independent experimental observations: a global\nproperty of the Universe, the absence of primordial anti-matter\nin the observable Universe, and the observation that neutrinos mix and (therefore) have masses.\nIn this way leptogenesis has a naturally built-in double-sided nature.\nOn the one hand, it describes a very early stage in the history of the Universe characterized by temperatures\n($T_{\\rm Lep}\\gtrsim 100\\,{\\rm GeV}$)\nmuch higher than those probed by Big Bang Nucleosynthesis ($T_{BBN} \\sim 1\\,{\\rm MeV}$). On the other hand,\nleptogenesis complements low energy neutrino experiments, providing a completely independent phenomenological\ntool for testing the high energy parameters in the seesaw mechanism \\cite{seesaw}.\nIn these proceedings we will mainly focus on this second side of leptogenesis,\nwhere the early Universe history is basically exploited as a neutrino physics experiment.\n\n\\section{Vanilla leptogenesis and beyond}\n\n\\subsection{Vanilla leptogenesis}\n\nLeptogenesis is a (cosmo)logical consequence of the seesaw mechanism\nthat elegantly explains not only\nwhy neutrinos mix and have masses but also why they are so much lighter than all the other\nmassive fermions. 
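As a rough, back-of-the-envelope illustration of this suppression (the numerical values here are our own illustrative choices): with a Dirac mass $m_D = h\,v$ generated by the Higgs vev $v \simeq 174\,{\rm GeV}$ and a heavy Majorana scale $M$, the seesaw formula gives

\[
m_\nu \sim \frac{m_D^2}{M} = \frac{(h\,v)^2}{M}
\approx \frac{(174\,{\rm GeV})^2}{10^{15}\,{\rm GeV}}
\approx 3\times 10^{-2}\,{\rm eV}
\quad \text{for } h \simeq 1,
\]

close to the atmospheric neutrino mass scale, while all other fermions acquire masses of order $h\,v$ itself.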
In a minimal type I\nseesaw mechanism, right-handed neutrinos with neutrino Yukawa coupling $h$\nand a right-right Majorana mass term are added to the Standard Model Lagrangian,\n\begin{eqnarray}\label{lagrangian}\n\mathcal{L} & = & \mathcal{L}_{\rm SM} +i \overline{N_{R i}}\gamma_{\mu}\partial^{\mu} N_{Ri} -\nh_{\alpha i} \overline{\ell_{L\alpha}} N_{R i} \tilde{\Phi} - \\\n& & {1\over 2}\,M_i \overline{N_{R i}^c}N_{R i} +h.c. \nonumber\n\end{eqnarray}\n$(i=1,2,3,\quad \alpha=e,\mu,\tau)$.\nFor definiteness we consider the case of three RH neutrino species.\nThis is also the most attractive option, with\none RH neutrino for each family, as nicely predicted by $SO(10)$ grand unified\nmodels. Notice however that all current data from low energy neutrino experiments\nare consistent with a more minimal two RH neutrino model.\n\nAfter spontaneous symmetry breaking, a Dirac mass term $m_D=v\,h$\nis generated by the Higgs vev $v$. In the seesaw limit, $M\gg m_D$, the spectrum\nof neutrino masses splits into a light set given by the eigenvalues $m_1 \leq m_2 \leq m_3$ of the\nlight neutrino mass matrix provided by the seesaw formula,\n\begin{equation}\label{seesaw}\nm_{\nu} = - m_D\,{1\over M}\,m_D^T \, ,\n\end{equation}\nand into a heavy set $M_1 \leq M_2 \leq M_3$, given by the eigenvalues of the Majorana mass matrix\nand corresponding, in first approximation, to the RH neutrino masses ${\cal O}(M_i)$.\n\nThe final asymmetry has been traditionally calculated in a very simple\nway, neglecting both the flavour composition of the lepton quantum states\nproduced in $N_i$-decays (light flavour effects) and the production of the\nasymmetry from the heavier RH neutrino decays (heavy flavour effects).
In this\noversimplified picture, which we call {\em vanilla leptogenesis},\nthe final asymmetry is then simply given by $N_{B-L}^{\rm f}\simeq \varepsilon_1\,\kappa^{\rm f}(K_1)$,\nwhere $K_1 \equiv (\Gamma + \bar{\Gamma})\/H(T=M_1)$ is the lightest RH neutrino\ndecay parameter and $\kappa^{\rm f}(K_1)$ is the final efficiency factor,\ngiving approximately the number of\n$N_1$'s decaying out-of-equilibrium.\n\nBarring fine-tuned cancellations among the\nterms giving the light neutrino masses in the seesaw formula,\nthe total $C\!P$ asymmetry is upper bounded by \cite{di},\n\begin{equation}\label{CPbound}\n\varepsilon_1 \leq \varepsilon_1^{\rm max} \simeq 10^{-6}\,{M_1\over 10^{10}\,{\rm GeV}}\,{m_{\rm atm}\over m_1+m_3} ,\n\end{equation}\nand, imposing $\eta_B^{\rm max}\simeq 0.01\,\varepsilon_1^{\rm max}\,\kappa_1^{\rm f} > \eta_{B}^{CMB}$,\none obtains, in the plane $(m_1,M_1)$, the allowed region shown in Fig.~1.\n\begin{figure}\n\vspace*{-1mm}\n\begin{center}\n \psfig{file=fig1.pdf,height=65mm,width=75mm}\n \vspace*{-3mm}\n \caption{Neutrino mass bounds in the vanilla scenario.}\n\end{center}\n\vspace*{-7mm}\n\end{figure}\nOne can notice the existence of an upper bound on the light\nneutrino masses $m_1\lesssim 0.12\,{\rm eV}$, incompatible with quasi-degenerate\nneutrino mass models, and a lower bound on\n$M_1\gtrsim 2\times 10^9\,{\rm GeV}$ \cite{di}, implying a lower bound on the\nreheat temperature $T_{\rm RH}\gtrsim 10^9\,{\rm GeV}$ \cite{pedestrians}.\nThese bounds are valid under the following set of assumptions and approximations \cite{bounds}:\ni) the flavour composition of the leptons in the final states is neglected;\nii) the heavy RH neutrino mass spectrum is assumed to be strongly hierarchical,\n with $M_2\gtrsim 10\,M_1$;\niii) there is no interference between the heaviest RH neutrino and the\n next-to-lightest RH neutrino, i.e.
$(m^{\dagger}_D\,m_D)_{23}=0$.\nThe last two conditions guarantee\nthat $\varepsilon_{2,3}^{\rm max}\,\kappa(K_{2,3})\ll \varepsilon_{1}^{\rm max}\,\kappa(K_1)$.\nIn particular, the last condition is always satisfied for $M_3\gg 10^{14}\,{\rm GeV}$,\nwhen an effective two RH neutrino model is recovered.\n\nAn important feature of vanilla leptogenesis is that the final asymmetry does not\ndirectly depend on the parameters of the leptonic mixing matrix and therefore\none cannot establish any kind of direct connection. In particular, a discovery\nof $C\!P$ violation in neutrino mixing would not be a smoking gun for leptogenesis,\nbut on the other hand a non-discovery would not rule out leptogenesis.\nHowever, within more restricted scenarios, where for example some conditions on\nthe neutrino Dirac mass matrix are imposed, links can emerge. We will discuss\nin detail the case of $SO(10)$-inspired models.\n\nMany different directions have been explored in order to go beyond\nthe assumptions and the approximations of the vanilla leptogenesis scenario,\noften with the objective of finding ways to evade the bounds shown in Fig.~1.\nLet us briefly discuss the main results.\n\n\subsection{Beyond a hierarchical RH mass spectrum}\n\nIf $(M_2-M_1)\/M_1 \equiv \delta_2 \ll 1$, the $C\!P$ asymmetries get resonantly\nenhanced as $\varepsilon_{1,2}\propto 1\/\delta_2$.
If, more stringently, $\delta_2\lesssim 10^{-2}$, then\n$\eta_B \propto 1\/\delta_2$ and the degenerate limit is obtained.\nIn this limit the lower bounds on $M_1$ and on $T_{\rm RH}$\nget relaxed $\propto \delta_2$, and at the resonance they completely disappear \cite{beyondHR}.\nHowever, there are few models able to justify such a degenerate limit in a natural way.\nExamples are provided by radiative leptogenesis and by models with extra dimensions\nwhere all RH neutrino masses squeeze together to a common TeV scale \cite{DLmodels}.\n\n\subsection{Non-minimal leptogenesis}\n\nOther proposals to relax the lower bounds on $M_1$ and on $T_{\rm RH}$\nrely on extensions beyond minimal leptogenesis, for example on the\naddition of a left-left Majorana mass term, yielding a type II seesaw mechanism \cite{typeII},\nor on a non-thermal production of the RH neutrinos whose decays produce the asymmetry \cite{nonth}.\nHowever, these non-minimal models somewhat spoil a remarkable coincidence between the measured values\nof the atmospheric and solar neutrino mass scales and the possibility to have successful leptogenesis\neven independently of the initial conditions \cite{pedestrians,bounds}.\nNon-minimal models have also been extensively explored in order to get a\nlow scale leptogenesis testable at colliders \cite{mohapatratalk}.\n\n\subsection{Improved kinetic description}\n\nWithin vanilla leptogenesis the asymmetry is calculated solving simple rate equations,\nclassical Boltzmann equations integrated over the RH neutrino momenta.\nDifferent kinds of extensions have been studied, for example accounting for a full momentum dependence\n\cite{momentum}, for quantum kinetic effects \cite{quantum} or for thermal effects \cite{thermal}.\nAll these analyses find significant changes in the weak wash-out regime but only within\n$\sim 50\%$ in the strong wash-out regime. This result has\nquite a straightforward general explanation.
In the strong wash-out regime the final asymmetry\nis produced by the decays of RH neutrinos in a non-relativistic regime \cite{pedestrians},\nwhen a simple classical momentum-independent kinetic description provides quite a good approximation.\nIt should therefore be borne in mind that the use of a simple kinetic description in leptogenesis\nis not just a simplistic approach but is justified in terms of the\nneutrino oscillation results on the neutrino masses, which\nsupport a strong wash-out regime.\n\n\section{Flavour effects}\n\nIn recent years, flavour effects proved to be the most relevant\nmodification of the vanilla scenario and for this reason we\ndiscuss them in a separate section.\nThere are two kinds of flavour effects that are neglected in the vanilla scenario: heavy\nflavour effects \cite{geometry}, i.e.\ how heavier RH neutrinos influence the final asymmetry,\nand light flavour effects \cite{flavoreffects}, i.e.\ how the\nflavour composition of the lepton quantum states produced in the RH neutrino decays\ninfluences the final asymmetry.\nWe first discuss the two effects separately and then we show\nhow their interplay has a very interesting application \cite{vives}.\n\n\subsection{Light flavour effects}\n\nLet us start by assuming that the final asymmetry is\ndominantly produced by the decays\nof the lightest RH neutrinos, neglecting the contribution from the\ndecays of the heavier RH neutrinos.\nIf $M_1\gtrsim 5\times 10^{11}\,{\rm GeV}$, the flavour composition\nof the quantum states of the leptons produced in $N_1$ decays\nhas no influence on the final asymmetry and the unflavoured regime holds.\nThis is because the lepton quantum states evolve coherently between the production of a lepton from an $N_1$-decay\nand a subsequent inverse decay with a Higgs boson.
In this way\nthe lepton flavour composition does not play any role.\n\nHowever, if $5\times 10^{11}\,{\rm GeV}\gtrsim M_1 \gtrsim 10^{9}\,{\rm GeV}$, then\nbetween one decay and the subsequent inverse decay the produced lepton quantum states, on average,\ninteract with tauons in such a way that the coherent evolution breaks down. Therefore, at\nthe inverse decays, the lepton quantum states are an incoherent mixture of a tauon component and\nof a (still coherent) superposition of an electron and of a muon component, which we will\ndenote by $\gamma$.\nThe fraction of asymmetry stored in each flavour component is in general not proportional\nto the branching ratio of that component. This implies that the dynamics of the two\nflavour asymmetries, the tauon and the $\gamma$ asymmetries, are different and have to be calculated\nseparately. In this way the resulting final asymmetry can differ considerably\nfrom the result in the unflavoured regime.\nIf $M_1\lesssim 10^{9}\,{\rm GeV}$, then even the coherence of the $\gamma$\ncomponent is broken by the muon interactions between decays and inverse decays,\nand a full three-flavour regime applies. In the intermediate regimes\na density matrix formalism is necessary to describe the transition \cite{flavoreffects,densitymatrix}.\n\nThere are three kinds of major modifications induced by flavour effects.\nFirst, the wash-out can be considerably lower than in the unflavoured regime \cite{flavoreffects}.\nSecond, the low energy phases directly affect the final asymmetry, since they\ncontribute a second source of $C\!P$ violation to the flavoured $C\!P$ asymmetries\n\cite{ibarra,flavorlep,ppr}.
As a consequence, the same source of $C\!P$\nviolation that could take place in neutrino oscillations could also be responsible for the observed\nmatter-antimatter asymmetry of the Universe, though under quite stringent\nconditions on the RH neutrino mass spectrum \cite{diraclep}.\nA third modification is that the flavoured $C\!P$ asymmetries contain\nextra terms that evade the upper bound in eq.~(\ref{CPbound}) if some\nmild cancellations in the seesaw formula among the light neutrino mass terms and\njust a mild RH neutrino mass hierarchy ($M_2\/M_1 \lesssim 10$) are allowed. In this way the lower bound on the\nreheat temperature can be relaxed by about one order of magnitude, down to $10^8\,{\rm GeV}$\n\cite{bounds} (see Fig.~2).\n\begin{figure}\n\vspace*{-15mm}\n\begin{center}\n \psfig{file=fig2.pdf,height=95mm,width=105mm}\n \vspace*{-3mm}\n \caption{Relaxation of the lower bound on $M_1$ thanks\n to additional unbounded flavoured $C\!P$ violating terms.}\n\end{center}\n\vspace*{-7mm}\n\end{figure}\n\n\subsection{Heavy flavour effects}\n\nIn the vanilla scenario the contribution to the final asymmetry from the\nheavier RH neutrinos is negligible for two reasons: the $C\!P$ asymmetries of $N_2$ and $N_3$ are\nsuppressed in the hierarchical limit with respect to $\varepsilon_1^{\rm max}$, and,\neven assuming that a sizeable asymmetry is produced around $T \sim M_{2,3}$,\nit is later washed out by the lightest RH neutrino\ninverse processes.
However, it has been realized that the assumptions\nfor the validity of the vanilla scenario are quite restrictive and\nthere are a few reasons why heavy flavour effects have to be taken into account\nin general.\n\nFirst, in the quasi-degenerate limit, when $\delta_{2,3} \ll 1$,\nthe $C\!P$ asymmetries are not suppressed and the wash-out from the lightest RH neutrinos\nis only partial \cite{beyondHR}.\nSecond, even assuming a strong RH neutrino mass hierarchy, there is always\na choice of the parameters such that $N_1$ decouples and its wash-out vanishes.\nFor the same choice of the parameters, the $N_2$ total $C\!P$ asymmetry is unsuppressed\nif $M_3\lesssim 10^{15}\,{\rm GeV}$. In this\ncase an $N_2$-dominated scenario is realized \cite{geometry}.\nNotice that the existence of a third, heavier RH neutrino species is crucial.\nThird, even assuming a strong mass hierarchy, a coupled $N_1$ and $M_1\gtrsim 10^{12}\,{\rm GeV}$,\nthe asymmetry produced by the heavier RH neutrino decays, in particular by the $N_2$'s decays with unsuppressed\ntotal $C\!P$ asymmetry, can be sizeable and in general is not completely washed out by the lightest RH neutrino\nprocesses. This is because there is in general a component\nthat escapes the $N_1$ wash-out \cite{bcst,nardinir}. Notice that for a mild mass hierarchy,\n$\delta_3 \lesssim 10$,\neven the asymmetry produced by the $N_3$'s decays can be sizeable and circumvent\nthe $N_1$ and $N_2$ wash-out.\n\n\subsection{Flavoured $N_2$-dominated scenario}\n\nThere is another interesting scenario where the asymmetry from the $N_2$ decays\ndominates the final asymmetry. This scenario relies on the interplay between\nlight and heavy flavour effects \cite{vives}.\nEven assuming a strong mass hierarchy, a coupled $N_1$ and $M_1\lesssim 10^{12}\,{\rm GeV}$,\nthe $N_1$ wash-out can be circumvented.
Suppose for example that the\nlightest RH neutrino wash-out occurs in the three-flavour regime ($M_1 \ll 10^{9}\,{\rm GeV}$).\nIn this case the asymmetry produced by the\nheavier RH neutrinos is, at the $N_1$ wash-out, distributed into an incoherent mixture of\nlight flavour quantum eigenstates. It turns out that\nthe $N_1$ wash-out in one of the three flavours is negligible in quite a wide region of the parameter space.\nIn this way, accounting for flavour effects, the region of applicability of the\n$N_2$-dominated scenario enlarges considerably, since it is not\nnecessary that $N_1$ fully decouples: it is sufficient that it decouples\njust in one particular light flavour. Recently, it has been realized that,\naccounting for the Higgs and for the quark asymmetries, the dynamics of the flavour asymmetries\ncouple, and the lightest RH neutrino wash-out in a particular flavour can be circumvented even when $N_1$ is strongly\ncoupled in that flavour \cite{flcoupling}.\nAnother interesting effect arising in the $N_2$-dominated scenario is phantom leptogenesis.\nThis is a purely quantum-mechanical effect that, for example,\nallows parts of the electron and of the muon asymmetries, the phantom terms, to completely escape\nthe wash-out at production when $T\sim M_2 \gg 10^{9}\,{\rm GeV}$.\n\n\section{Testing new physics with leptogenesis}\n\nThe seesaw mechanism extends the Standard Model by introducing eighteen new parameters\nwhen three RH neutrinos are considered. On the other hand, low energy\nneutrino experiments can only potentially test nine parameters in the\nneutrino mass matrix $m_{\nu}$.
Nine high energy parameters, those characterizing the properties\nof the three RH neutrinos (three lifetimes, three masses and three total $C\!P$ asymmetries)\nand encoded in the orthogonal matrix $R$ \cite{casas},\nare not tested by low energy neutrino experiments.\nQuite interestingly, leptogenesis gives an additional constraint on a combination\nof both low energy and high energy neutrino parameters,\n$\eta_B(m_{\nu},R)=\eta_{B}^{CMB}$. However,\njust one additional constraint does not yet seem sufficient to over-constrain the parameter\nspace and lead to testable predictions. Despite this, as we have seen, in the vanilla leptogenesis scenario\nthere is an upper bound on the neutrino masses. The reason is that in this case\n$\eta_B$ does not depend on the 6 parameters related to the properties of the two heavier RH neutrinos, and\ntherefore the asymmetry depends on a reduced number of high energy parameters. At the\nsame time, the final asymmetry is strongly suppressed by the absolute neutrino mass scale when this is\nlarger than the atmospheric neutrino mass scale. This is why the leptogenesis\nbound yields an upper bound on the neutrino masses.\n\nWhen flavour effects are considered, the vanilla leptogenesis\nscenario holds only under very special conditions.
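\nFor reference, the nine high energy parameters just mentioned can be made explicit through the orthogonal parametrization of the neutrino Dirac mass matrix \cite{casas}, which in one common convention (the explicit form below is our addition and holds up to convention-dependent phase factors) reads\n\begin{equation}\nm_D = U\,D_m^{1\/2}\,R\,D_M^{1\/2} \, , \qquad R\,R^T = I \, ,\n\end{equation}\nwhere $D_m$ and $D_M$ are the diagonalized light and heavy neutrino mass matrices and the complex orthogonal matrix $R$ contains three complex angles, i.e.\ the six high energy parameters beyond the three masses $M_i$.\n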
More generally,\nthe parameters in the leptonic mixing matrix also\ndirectly affect the final asymmetry and, accounting for flavour effects,\none could hope to derive definite predictions on the leptonic mixing matrix.\nHowever, when flavour effects are taken into account,\nthe 6 parameters associated with the two heavier RH neutrinos contribute in general to the final\nasymmetry, at the expense of predictability.\nFor this reason, in a generic scenario with three RH neutrinos, it is not possible\nto derive any prediction on low energy neutrino parameters.\n\nIn order to gain predictive power, two possibilities have been widely explored in recent years.\nIn the first case one considers non-minimal scenarios giving rise to\nadditional phenomenological constraints.\nWe have already mentioned how, with a non-minimal seesaw mechanism, it is possible to lower\nthe leptogenesis scale and have signatures at colliders. It has also been noticed that\nin supersymmetric models one can enhance the branching ratios of lepton flavour violating processes\nor electric dipole moments, and in this way the existing experimental bounds\nfurther constrain the seesaw parameter space \cite{lfvedm}.\n\nA second possibility is to search again, as within vanilla leptogenesis, for a reasonable\nscenario where the final asymmetry depends on a reduced number of free parameters, in a way that the\nparameter space gets over-constrained by the leptogenesis bound. Let us briefly discuss some\nof the ideas that have been proposed.\n\n\subsection{Two RH neutrino model}\n\nA well-motivated scenario that has attracted great attention is the two\nRH neutrino scenario \cite{2RHN}, where the third RH neutrino is either absent or\neffectively decoupled. This necessarily happens when $M_3\gg 10^{14}\,{\rm GeV}$, implying that the\nlightest LH neutrino mass $m_1$ has to vanish. It can be shown that the number of parameters\ngets reduced from 18 to 11.
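\nOne common bookkeeping behind these numbers (our reconstruction, not spelled out in the text) is\n\begin{equation}\n18 = \underbrace{9}_{m_{\nu}} + \underbrace{3}_{M_i} + \underbrace{6}_{R} \;\longrightarrow\; 11 = 18 - \underbrace{2}_{m_1\to 0} - \underbrace{1}_{M_3} - \underbrace{4}_{R} \, ,\n\end{equation}\nsince decoupling the third RH neutrino removes $M_3$ and reduces $R$ to a single complex angle, while $m_1\to 0$ also renders one Majorana phase unphysical.\n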
It has been shown that in this case\ninverted hierarchical models with\n$\sin\theta_{13}\cos\delta \gtrsim -0.15$ are viable only if there\nis $C\!P$ violation from Majorana phases \cite{mp}.\nHowever, this prediction would be very difficult to test and in any case\nwould be quite unlikely to provide a smoking gun.\n\n\subsection{$SO(10)$-inspired models}\n\nThe only way to gain strong predictive power is by adding\nsome additional conditions within some model\nof new physics embedding the seesaw mechanism. In this respect\nquite an interesting example is represented\nby the '$SO(10)$-inspired scenario' \cite{branco},\nwhere $SO(10)$-inspired conditions are imposed on the neutrino Dirac mass matrix.\nIn the basis where the charged lepton mass matrix and the Majorana mass matrix are diagonal,\nthis is expressed in the bi-unitary parametrization as $m_D = V_L^{\dagger}\,D_{m_D}\,U_R$,\nwhere $D_{m_D}\equiv {\rm diag}({\lambda_1,\lambda_2,\lambda_3})$ is the diagonalized neutrino Dirac mass matrix\nand the mixing angles in the unitary matrix $V_L$ are of the order\nof the mixing angles in the CKM matrix.\nThe matrix $U_R$ can then be calculated from $V_L$, $U$ and $m_i$,\nconsidering that, as can be seen from the seesaw formula (\ref{seesaw}),\nit provides a Takagi factorization of\n$M^{-1} \equiv D^{-1}_{m_D}\,V_L\,U\,D_m\,U^T\,V_L^T\,D^{-1}_{m_D}$,\nor explicitly $M^{-1} = U_R\,D_M^{-1}\,U_R^T$.\nIn this way the RH neutrino masses and the matrix $U_R$ are expressed in terms of the\nlow energy neutrino parameters, of the eigenvalues $\lambda_i$ and of the parameters in $V_L$.\nSince one typically obtains $M_1 \sim 10^{5}\,{\rm GeV}$ and $M_{2}\sim 10^{11}\,{\rm GeV}$,\nthe asymmetry produced from the lightest RH neutrino decays is negligible and the\n$N_2$-dominated scenario is realized \cite{SO10,SO10b}.\n\nImposing the leptogenesis bound\nand considering that the final asymmetry does not depend on $\lambda_1$ and on
$\lambda_3$, one obtains\nconstraints on all low energy neutrino parameters, and some examples are shown in Fig.~3\nfor a scan over the $2\sigma$ ranges of the allowed values of the\nlow energy parameters and over the parameters\nin $V_L$, assumed to be $I< V_L < V_{CKM}$, where $V_{CKM}$ is the Cabibbo-Kobayashi-Maskawa\nmatrix \cite{SO10b}. This scenario has also been studied in a more general context\nincluding a type II contribution to the seesaw mechanism from a triplet Higgs \cite{abada}.\n\begin{figure}\n\vspace*{-1mm}\n\begin{center}\n \psfig{file=fig3a.pdf,height=65mm,width=75mm} \\\n \vspace*{0mm}\n \psfig{file=fig3b.pdf,height=65mm,width=75mm} \\\n \vspace*{0mm}\n \psfig{file=fig3c.pdf,height=65mm,width=75mm}\n \vspace*{0mm}\n \caption{Constraints on some of the low energy neutrino parameters\n in the $SO(10)$-inspired scenario for normal ordering and $I< V_L < V_{CKM}$ \cite{SO10b}.}\n\end{center}\n\vspace*{-1mm}\n\end{figure}\n\n\subsection{Discrete flavour symmetries}\n\nHeavy flavour effects are quite important when leptogenesis is embedded within\ntheories that try to explain the emerging tribimaximal mixing structure in the leptonic\nmixing matrix via flavour symmetries. It has been shown in particular that\nif the symmetry were unbroken then the $C\!P$ asymmetries of the RH neutrinos would exactly\nvanish. On the other hand, when the symmetry is broken, for the naturally expected\nvalues of the symmetry breaking parameters, the observed\nmatter-antimatter asymmetry can be successfully reproduced \cite{manohar,feruglio}.\nIt is interesting that in a minimal picture based on $A_4$ symmetry, one has an RH neutrino mass spectrum with\n$10^{15}\,{\rm GeV} \gtrsim M_3 \gtrsim M_2 \gtrsim M_1 \gg 10^{12}\,{\rm GeV}$.
One therefore finds\nthat all the asymmetry is produced in the unflavoured regime and that the mass spectrum\nis only mildly hierarchical (it actually has the same kind of hierarchy as the light neutrinos).\nAt the same time, the small symmetry breaking imposes\na quasi-orthogonality of the three lepton quantum states produced in the RH neutrino\ndecays. Under these conditions the wash-out of the asymmetry produced by one RH neutrino species\nfrom the inverse decays of a lighter RH neutrino species is essentially negligible. The final\nasymmetry then receives a non-negligible contribution from the decays of all three RH neutrino species.\n\n\subsection{Supersymmetric models}\n\nWithin a supersymmetric framework the final asymmetry in\nthe vanilla leptogenesis scenario undergoes small changes \cite{proceedings}.\nHowever, supersymmetry introduces a conceptually important issue: the stringent\nlower bound on the reheat temperature, $T_{\rm RH}\gtrsim 10^{9}\,{\rm GeV}$,\nis typically only marginally compatible with an upper bound\nfrom the avoidance of the gravitino problem, $T_{\rm RH}\lesssim 10^{6-10}\,{\rm GeV}$, with the\nexact number depending on the parameters of the model \cite{gravitino}. It is quite remarkable\nthat the solution of such an issue inspired an intense research activity on supersymmetric\nmodels able to reconcile minimal leptogenesis and the gravitino problem. Of course, on the\nleptogenesis side, some of the discussed extensions beyond the vanilla scenario that relax the neutrino\nmass bounds also relax the $T_{\rm RH}$ lower bound.
However, notice that in the $N_2$-dominated\nscenario, while the lower bound on $M_1$ is completely evaded, there is still a lower bound\non $T_{\rm RH}$ that is even more stringent, $T_{\rm RH}\gtrsim 6\times 10^{9}\,{\rm GeV}$ \cite{geometry}.\n\nAs we mentioned already, with flavour effects one has the possibility to relax the lower bound\non $T_{\rm RH}$ if a mild hierarchy in the RH neutrino masses\nis allowed together with a mild cancellation in the seesaw formula \cite{bounds}.\nHowever, for most models, such as sequential dominance models \cite{sequential},\nthis solution does not work. A major modification introduced by supersymmetry\nis that the critical value of the mass of the decaying RH neutrinos\nsetting the transition from an unflavoured regime to a two-flavour regime,\nand from a two-flavour regime to a three-flavour regime, is enhanced by a factor $\tan^2\beta$ \cite{antusch}.\nThis has a practical relevance in the calculation of the asymmetry within supersymmetric models,\nand it is quite interesting that leptogenesis becomes sensitive to such a relevant\nsupersymmetric parameter. Recently, a detailed analysis,\nmainly discussing how the asymmetry is distributed among all particle species,\nhas shown different subtle effects in the calculation of the final asymmetry\nwithin supersymmetric models, but it found only ${\cal O}(1)$ corrections\nto the final asymmetry \cite{superequilibration}.\n\n\section{Future prospects}\n\nIn recent years there have been important developments in leptogenesis,\nfirst of all involving a full account of (light and heavy) flavour effects\nand also a deeper kinetic description accounting for quantum kinetic effects.\nMany efforts are currently devoted to exploring possible ways to test the seesaw mechanism\nand leptogenesis.
Models with\na seesaw scale down to the TeV scale are gaining a lot of attention,\nespecially in the light of the LHC and with the prospect of solving the hierarchy problem\n\cite{mohapatratalk,seesawLHC}.\nThis possibility seems necessarily to involve non-minimal leptogenesis models based on a\nseesaw mechanism beyond the minimal type I \cite{petcov}.\n\nEven within traditional high energy scale leptogenesis, flavour effects have\nopened new opportunities, or re-opened old ones, to test leptogenesis.\nIn a minimal leptogenesis scenario, among the many possible mass patterns, a genuine\n$N_2$-dominated scenario with $M_1\ll 10^{9}\,{\rm GeV}$ and $M_2\gtrsim 10^{9}\,{\rm GeV}$\npresents some attractive features: i) the presence of a double\nstage, production from $N_2$ decays and wash-out from $N_1$ inverse processes,\nseems to enhance the predictive power, yielding constraints on the low\nenergy parameters; ii) it provides a solution to the problem\nof the independence of the initial conditions if the final asymmetry is\ntauon-dominated (in this case the constraints on the low energy parameters\nbecome even more meaningful) \cite{preexisting};\niii) it rescues the interesting class of $SO(10)$-inspired\nmodels, leading to testable constraints on the low energy neutrino parameters.\n\nWe can fairly conclude by saying that leptogenesis is experiencing a mature stage, with\nvarious interesting ideas about the possibility to test it.\nLow and high energy scale models lead to\nquite different phenomenological scenarios. In the first case they necessarily predict\nsome novel phenomenology. In the case of more conventional\nhigh energy scale models, the naturally expected experimental progress\nin low energy neutrino experiments could uncover some\nnon-trivial correlations among parameters.
These correlations would be a trace of the\ndynamical processes that led to the generation of the observed matter-antimatter asymmetry\nduring a very early stage in the history of the Universe, and would specifically depend on the model\nof new physics embedding the seesaw mechanism.\n\n\subsection*{Acknowledgments}\n\nI wish to thank S.~Antusch, E.~Bertuzzo, S.~Blanchet, W.~Buchmuller, F.~Feruglio, D.~Jones,\nS.~King, L.~Marzola, M.~Plumacher, G.~Raffelt, A.~Riotto for a fruitful collaboration\non leptogenesis. I acknowledge financial support from the NExT Institute and SEPnet.\n\n\bibliographystyle{elsarticle-num}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction} \label{sec:intro}\n\nThe Sun exhibits many time scales, from the ten-minute lifetimes of granules to multi-millennial\nmagnetic activity modulations. One of the most prominent of these scales is the 11-year sunspot\ncycle, during which the number of magnetically active regions waxes and wanes. The Sun also\npossesses longer-term variability of its magnetic activity, such as the 88-year Gleissberg cycle\n\citep{gleissberg39} and less frequent phenomena commonly described as grand extrema\n\citep{usoskin13}. Other main-sequence stars also exhibit cyclical magnetic phenomena in Ca II,\nphotometric, spectropolarimetric, and X-ray observations \citep[e.g.,][]{baliunas96, hempelmann96,\n favata08, metcalf10, fares13, mathur13}. These observations include solar-mass stars younger than\nthe Sun that also possess magnetic activity cycles, yet they rotate more rapidly than the Sun as a\nconsequence of the low rate of angular momentum loss in such stars \citep{barnes07}. Furthermore,\nthere are hints from both observations and from theory that a star's magnetic cycle period is\nclosely linked to its rotation rate \citep[e.g.,][]{saar09,jouve10,morgenthaler11}.
This may imply\nthat the dynamo regime achieved in our simulation of a young sun, which rotates three times faster\nthan the Sun and has a nearly constant magnetic polarity cycle of 6.2~years, can scale up to the\nsolar rotation rate with a polarity cycle period closer to the 22~year cycle of the Sun.\n\nIn addition to its large range of time scales, observations of the magnetic field at the solar\nsurface reveal complex, hierarchical structures existing on a vast range of spatial scales. Despite\nthese chaotic complexities, large-scale organized spatial patterns such as Maunder's butterfly\ndiagram, Joy's law, and Hale's polarity laws suggest the existence of a structured large-scale\nmagnetic field within the solar convection zone. On the Sun's surface, active regions initially\nemerge at mid-latitudes and appear at increasingly lower latitudes as the cycle progresses, thus\nexhibiting equatorward migration. As the low-latitude field propagates toward the equator, the\ndiffuse field that is composed of small-scale bipolar regions migrates toward the pole, with the\nglobal-scale reversal of the polar magnetic field occurring near solar maximum\n\citep[e.g.,][]{hathaway10,stenflo12}.\n\nConsequently, the large-scale field must vary with the solar cycle, likely being sustained through\ndynamo action deep in the solar interior. It has been suspected for at least 60 years that the\ncrucial ingredients for the solar dynamo are the shear of the differential rotation and the helical\nnature of the small-scale convective flows present in the solar convection zone\n\citep[e.g.,][]{parker55, steenbeck69, parker77}. Yet even with the advancement to fully nonlinear\nglobal-scale 3-D MHD simulations \citep[e.g.,][]{gilman83,glatzmaier85,brun04,browning06}, achieving\ndynamo action that exhibits the basic properties of the Sun's magnetism has been quite\nchallenging.
Nonetheless, recent global-scale simulations of convective dynamos have begun to make\nsubstantial contact with some of the properties of the solar dynamo through a wide variety of\nnumerical methods \citep[e.g.,][]{brown11,racine11, kapyla12, nelson13a}. It is within this vein of\nmodern global-scale modeling that we report on a global-scale 3D MHD convective dynamo simulation,\nutilizing the ASH code, that possesses some features akin to those observed during solar cycles.\n\n\begin{figure*}[t!]\n \begin{center}\n \includegraphics[width=\textwidth]{d3_sld_4panel_300dpi.eps}\n \figcaption{Nature of the toroidal magnetic field $B_{\phi}$. (a) Snapshot of the horizontal\n structure of $B_{\phi}$ at $\Rsun{0.95}$ shown in Mollweide projection, at the time corresponding\n to the vertical dashed line in (c). This illustrates the azimuthal connectivity of the magnetic\n wreaths, with the polarity of the field such that red (blue) tones indicate positive (negative)\n toroidal field. (b) Azimuthally-averaged $\langle B_{\phi} \rangle$ also time-averaged over a single energy\n cycle, depicting the structure of the toroidal field in the meridional plane. (c) Time-latitude\n diagram of $\langle B_{\phi} \rangle$ at $\Rsun{0.95}$ in cylindrical projection, exhibiting the equatorward\n migration of the wreaths from the tangent cylinder and the poleward propagation of the higher\n latitude field. The color is as in (a). (d) A rendering of magnetic field lines in the domain\n colored by the magnitude and sign of $B_{\phi}$, with strong positively oriented field in red, and\n the strong oppositely directed field in blue. \label{fig1}}\n \end{center}\n\end{figure*}\n\n\section{Methods} \label{sec:methods}\n\nThe 3D simulation of convective dynamo action presented here uses the ASH code to evolve the\nanelastic MHD equations for a conductive calorically perfect plasma in a rotating spherical\nshell.
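\nIn the anelastic approximation the mass flux and the magnetic field are both solenoidal (written here for reference),\n\begin{equation}\n\nabla\cdot(\overline{\rho}\,\mathbf{v}) = 0 \, , \qquad \nabla\cdot\mathbf{B} = 0 \, ,\n\end{equation}\nwith $\overline{\rho}$ the spherically symmetric background density; these are the two constraints that the stream function formalism is designed to maintain.\n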
ASH solves the necessary equations with a spherical harmonic decomposition of the entropy, magnetic field, pressure, and mass flux in the horizontal directions \citep{clune99,miesch00}. A fourth-order non-uniform finite difference in the radial direction resolves the radial derivatives \citep{featherstone13}. The solenoidality of the mass flux and magnetic vector fields is maintained through the use of a stream function formalism \citep{brun04}. The radial boundary conditions are impenetrable, with a constant entropy gradient there as well. The magnetic boundary conditions are perfectly conducting at the lower boundary and extrapolated as a potential field at the upper boundary.

The authors have implemented a slope-limited diffusion (SLD) mechanism into the reformulated ASH code, which is similar to the schemes presented in \citet{rempel09} and \citet{fan13}. SLD acts locally to achieve a monotonic solution by limiting the slope in each coordinate direction of a piecewise linear reconstruction of the unfiltered solution. The scheme minimizes the steepest gradient, while the rate of diffusion is regulated by the local velocity. It is further reduced through a function $\phi$ that depends on the eighth power of the ratio of the cell-edge difference $\delta_i q$ and the cell-center difference $\Delta_i q$ in a given direction $i$ for the quantity $q$. This limits the action of the diffusion to regions with large differences in the reconstructed solutions at cell-edges. Since SLD is computed in physical space, it incurs the cost of smaller time steps due to the convergence of the grid at the poles. The resulting diffusion fields are projected back into spectral space and added to the solution.

We simulate the solar convection zone, stretching from the base of the convection zone at $\Rsun{0.72}$ to the upper boundary of our simulation at $\Rsun{0.97}$.
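The limiter just described can be illustrated in one dimension. The sketch below assumes a minmod piecewise-linear reconstruction and an explicit update (neither detail is specified in the text) and is not the ASH implementation; it only shows how the eighth-power ratio of cell-edge to cell-center differences confines the diffusion to poorly resolved regions:

```python
import numpy as np

def sld_flux_1d(q, c):
    """Schematic 1-D slope-limited diffusive flux at interior cell edges,
    loosely following a Rempel (2009)-type scheme.  Illustrative only."""
    dq = np.diff(q)                       # cell-center differences Delta_i q
    # Minmod-limited slopes for a piecewise-linear reconstruction.
    slope = np.zeros_like(q)
    slope[1:-1] = np.where(dq[:-1] * dq[1:] > 0.0,
                           np.sign(dq[1:]) * np.minimum(np.abs(dq[:-1]),
                                                        np.abs(dq[1:])),
                           0.0)
    qL = q[:-1] + 0.5 * slope[:-1]        # left state at edge i+1/2
    qR = q[1:] - 0.5 * slope[1:]          # right state at edge i+1/2
    edge_jump = qR - qL                   # cell-edge differences delta_i q
    # The eighth-power ratio limits diffusion to poorly resolved regions.
    ratio = np.divide(np.abs(edge_jump), np.abs(dq),
                      out=np.zeros_like(edge_jump), where=np.abs(dq) > 1e-300)
    phi = np.minimum(1.0, ratio) ** 8
    c_edge = 0.5 * (c[:-1] + c[1:])       # local velocity scale at the edge
    return -0.5 * c_edge * phi * edge_jump

def sld_step(q, c, dx, dt):
    """One explicit SLD update of q; the two boundary cells are held fixed."""
    F = sld_flux_1d(q, c)
    out = q.copy()
    out[1:-1] -= dt / dx * (F[1:] - F[:-1])
    return out
```

With these choices a well-resolved (linear) profile is left essentially untouched in the interior, while a grid-scale discontinuity is smoothed monotonically.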
This approximation omits the near-surface region and any regions below the convection zone. The SLD has been restricted to act only on the velocity field in this simulation. This mimics a lower thermal and magnetic Prandtl number ($\mathrm{Pr}$, $\mathrm{Pm}$) than otherwise attainable through an elliptic diffusion operator. The entropy and magnetic fields remain under the influence of an anisotropic eddy diffusion, with both a radially dependent entropy diffusion $\kappa_S$ and resistivity $\eta$. These two diffusion coefficients are similar to those of case D3 from \citet{brown10}, with $\kappa_S , \eta \propto \overline{\rho}^{\; -1/2}$, with $\overline{\rho}$ the spherically symmetric density. The stratification in this case has about twice the density contrast across the domain, being 45 rather than 26, and has a resolution of $N_r\times N_{\theta} \times N_{\phi} = 200\times256\times512$.

\section{Cyclical Convective Dynamo Action} \label{sec:cycles}

Global-scale convective dynamo simulations in rotating spherical shells have recently achieved the long-sought goal of cyclical magnetic polarity reversals with a multi-decadal period. Moreover, some of these simulations have illustrated that large-scale dynamo action is possible within the bulk of the convection zone, even in the absence of a tachocline. Global-scale MHD simulations of a more rapidly rotating Sun with the pseudo-spectral Anelastic Spherical Harmonic (ASH) code have produced polarity reversing dynamo action that possesses strong toroidal wreaths of magnetism that propagate poleward as a cycle progresses \citep{brown11}. These fields are seated deep within the convection zone, with the bulk of the magnetic energy near its base.
The perfectly conducting lower boundary condition used here and in those simulations requires the field to be horizontal there, which tends to promote the formation of longitudinal structure in the presence of a differential rotation.

A recent simulation with ASH employs a dynamic Smagorinsky diffusion scheme, whereby a greater level of turbulent complexity is achieved. Those simulations show that the large-scale toroidal wreaths persist despite the greater pummeling they endure from the more complex and vigorous convection \citep{nelson13a}. Not only do the toroids of field persevere, but portions of them can be so amplified that the combination of upward advection and magnetic buoyancy creates loops of magnetic field \citep{nelson13b}. This lends credence to the classical picture of a Babcock-Leighton or Parker interface dynamo \citep{leighton69,parker93}, with semi-buoyant flux structures that rise toward the solar surface, leading to active regions and helicity ejection. There is the caveat that the magnetic fields in the simulation are instead built in the convection zone.

Implicit large-eddy simulations (ILES) have concurrently paved the road toward more orderly long-term cycles in a setting that mimics the solar interior. Indeed, simulations utilizing the Eulerian-Lagrangian (EULAG) code produce regular polarity cycles occurring roughly every 80 years in the presence of a tachocline and with the bulk of the magnetic field existing at higher latitudes \citep{ghizaru10}. This simulation showed radial propagation of structures but little latitudinal variation during a cycle. More recent simulations of a Sun-like star rotating at $\Osun{3}$ also produce low-latitude poleward propagating solutions \citep{charbonneau13}.
Such dynamo action is accomplished first through the reduction of the enthalpy transport of the largest scales through a simple sub-grid-scale (SGS) model that diminishes thermal perturbations over a roughly 1.5~year time scale, which serves to moderate the global Rossby number. The ILES formulation of EULAG also maximizes the complexity of the flows and magnetic fields for a given Eulerian grid resolution.

\begin{figure*}[t!]
  \begin{center}
    \includegraphics[width=\textwidth]{d3_sld_3panel_vrvp_300dpi.eps}
    \figcaption{Convective patterns and differential rotation. (a) Snapshot of the horizontal convective patterns arising in the radial velocity $v_r$ at $\Rsun{0.95}$ shown in Mollweide projection, at the time corresponding to the vertical dashed line in (c). This reveals the larger-scale convection at low latitudes and the smaller scales at higher latitudes, with downflows dark and upflows in lighter tones. (b) Time and azimuthally-averaged angular velocity $\avg{\langle\Omega\rangle}$ (double brackets indicating dual averages), showing a fast equator in red and slower high-latitudes in blue. (c) A time-latitude diagram of azimuthally-averaged $\avg{\Delta\Omega}=\langle\Omega\rangle-\avg{\langle\Omega\rangle}$ in cylindrical projection, elucidating the propagation of equatorial and polar branches of a torsional oscillation arising from strong Lorentz-force feedback. The color indicates enhanced differential rotation in red and periods of slower rotation in blue, with variations of up to $\pm 10$\% of the bulk rotation rate.\label{fig2}}
  \end{center}
\end{figure*}

Inspired by these recent ASH and EULAG results, we have attempted to splice the two together by incorporating SLD into ASH with the express goal of achieving a low effective $\mathrm{Pr}$ and $\mathrm{Pm}$ dynamo.
Thus an attempt is made to better mimic the low Prandtl number solar setting, while keeping the eddy-diffusive approximation for entropy mixing and treating the reconnection of small-scale magnetic field as diffusive. This effort minimizes the effects of viscosity, and so extends the inertial range as far as possible for a given resolution. Thus SLD permits more scales to be captured before entering the dissipation range, allowing more scale separation between the larger magnetic and smaller kinetic scales participating in the low $\mathrm{Pm}$ dynamo \citep{ponty05, schekochihin07, brandenburg09}. Subsequently, the kinetic helicity is also greater at small scales than otherwise would be achieved, which has been shown to have a large influence on the dynamo efficiency \citep{malyshkin10}. Indeed, with this newly implemented diffusion minimization scheme, we have happened upon a solution that possesses four fundamental features of the solar dynamo: a regular magnetic energy cycle, an orderly magnetic polarity cycle of $\tau_C=6.2$~years, equatorward propagation of magnetic features, and poleward migration of oppositely signed flux. Furthermore, this equilibrium is punctuated by an interval of relative quiescence, after which the cycle is recovered. In keeping with the ASH nomenclature for cases as in \citep{brown10, brown11, nelson13a}, this dynamo solution has been called D3S.

Figure \ref{fig1} illustrates the morphology of the toroidal fields in space and time. The presence of large-scale and azimuthally-connected structures is evident in Figures \ref{fig1}(a, d). Such toroidal structures have been dubbed wreaths \citep{brown10}. In D3S, there are two counter-polarized, lower-latitude wreaths that form near the point where the tangent cylinder intersects a given spherical shell. This point is also where the peak in the latitudinal gradient of the differential rotation exists for much of a magnetic energy cycle.
There are also polar caps of magnetism of the opposite sense of those at lower latitudes. These caps serve to moderate the polar differential rotation, which would otherwise tend to accelerate and hence establish fast polar vortices. The average structure of the wreaths and caps is apparent in Figure \ref{fig1}(b), which is averaged over a single energy cycle or 3.1~years. The wreaths appear rooted at the base of the convection zone, whereas the caps have the bulk of their energy in the lower convection zone above its base. This is somewhat deceptive as the wreaths are initially generated higher in the convection zone, while the wreath generation mechanism (primarily the $\Omega$-effect) migrates equatorward and toward the base of the convection zone over the course of the cycle. The wreaths obtain their greatest amplitude at the base of the convection zone and thus appear seated there.

Figure \ref{fig2}(a) shows a typical convective pattern during a cycle, with elongated and north-south aligned flows at low latitudes and smaller scales at higher latitudes. In aggregate, the spatial structure and flow directions along these cells produce strong Reynolds stresses acting to accelerate the equator and slow the poles. In concert with a thermal wind, such stresses serve to rebuild and maintain the differential rotation during each cycle. While the variable nature of the convective patterns over a cycle is not shown, it is an important piece of the story. Indeed, the magnetic fields disrupt the alignment and correlations of these cells through Lorentz forces. Particularly, as the field gathers strength during a cycle, the strong azimuthally-connected toroidal fields tend to create a thermal shadow that weakens the thermal driving of the equatorial cells. Thus their angular momentum transport is also diminished, which explains why the differential rotation seen in Figure \ref{fig2}(b) cannot be fully maintained during the cycle.
This is captured in the ebb and flow of the kinetic energy contained in the fluctuating velocity field, which here varies by about 50\%. Such a mechanism is in keeping with the impacts of strong toroidal fields in the convection zone suggested by \citet{parker87}. Moreover, strong nonlinear Lorentz force feedbacks have been seen in other convective dynamo simulations as well \citep{brown11}, and they have been theoretically realized for quite some time in mean-field theory \citep[e.g.,][]{malkus75}.

\section{Cycle Periods} \label{sec:periods}

There is a large set of possible and often interlinked time scales that could be relevant to the processes setting the pace of the cyclical dynamo established in D3S. For instance, there are resistive time scales that depend upon the length scale chosen. One such time scale is the resistive decay of the poloidal field at the upper boundary as it propagates from the tangent cylinder to the equator, which would imply that the length scale is $\ell = r_2 \Delta\theta$ and so $\tau_{\eta} = \ell^2/\eta_2 \approx 6.7$~years, which is close to the polarity cycle period, where the subscript two denotes the value of a quantity at the outer boundary of the simulation. However, this is likely not dynamically dominant as the polarity reversal occurs in half that time. The same is true of the diffusion time across the convection zone, being $4.6$~years. Since the cycle is likely not resistively controlled, it must be set by the interplay of dynamical processes. Another mechanism to consider is flux transport by the meridional flow, in which case the transit time of a magnetic element along its circuit could be relevant. In D3S, the mean meridional flow is anti-symmetric about the equator and has two cells, with a polar branch and a lower latitude cell that are split by the tangent cylinder.
The circulation time of the polar branch is about 0.7~years, whereas that of the equatorial cell is about a year. So it is also unlikely that the meridional flow is setting the cycle period.

\begin{figure}[t!]
  \begin{center}
    \includegraphics[width=0.45\textwidth]{d3_ac_cc_300dpi.eps}
    \caption[Auto and cross-correlation of Lorentz-force and toroidal field production by mean-shear]{(a) Volume-averaged temporal auto-correlation of toroidal magnetic energy generation by mean shear ($\mathrm{S} = \lambda \avg{\mathbf{B}_{P}} \boldsymbol{\cdot}\boldsymbol{\nabla} \langle\Omega\rangle$, blue curve) and the same for the mean Lorentz force impacting the mean angular velocity ($\mathrm{L}_{\phi}$, red curve) plotted against temporal lags $\Delta \mathrm{t}$ normalized by the polarity cycle period $\tau_C=6.2$~years. Confidence intervals are shown as shaded gray regions, with the 67\% interval in darker gray and 95\% in lighter gray. (b) Cross-correlation of the mean poloidal energy production ($\mathrm{P} = \mathbf{B}_P\boldsymbol{\cdot}\boldsymbol{\nabla}\boldsymbol{\times}\mathcal{E}'_{\phi}$) through the fluctuating EMF and the toroidal magnetic energy production due to the mean shear ($\mathrm{T} = \langle B_{\phi} \rangle \mathrm{S}$), showing the nonlinear dynamo wave character of the solution. \label{fig3}}
  \end{center}
\end{figure}

The dynamical coupling of azimuthally-averaged magnetic fields $\avg{\mathbf{B}}$ and the mean angular velocity $\langle\Omega\rangle$ (Figure \ref{fig2}(b)) plays a crucial role in regulating the cycle, though it alone cannot be the sole actor, as is well known from Cowling's anti-dynamo theorem.
The significant anti-correlation of $\langle B_{\phi} \rangle$ and angular velocity variations $\avg{\Delta\Omega}$ during reversals becomes apparent when comparing Figures \ref{fig1}(c) and \ref{fig2}(c), revealing the strong nonlinear coupling of the magnetic field and the large-scale flows. The dynamics that couples these two fields is the toroidal field generation through the mean shear ($\mathrm{S} = \lambda \avg{\mathbf{B}_{P}} \boldsymbol{\cdot}\boldsymbol{\nabla} \langle\Omega\rangle$, with $\avg{\mathbf{B}_{P}}$ the mean poloidal field) and the mean azimuthal Lorentz-force ($L_{\phi} = \boldsymbol{\hat{\phi}\cdot} \avg{\mathbf{J}} \boldsymbol{\times} \avg{\mathbf{B}}$), which acts to decrease $\langle\Omega\rangle$. The auto-correlation of each of these components of the MHD system reveals that $L_{\phi}$ varies with a period corresponding to the magnetic energy cycle, whereas $\mathrm{S}$ varies on the polarity cycle period (Figure \ref{fig3}). It also shows the high degree of temporal self-similarity between cycles, with the auto-correlation of both quantities remaining significant with 95\% confidence for a single polarity cycle and with 67\% confidence for three such cycles.

Appealing to Figure \ref{fig1}(c), it is evident that $\mathbf{B}$ exhibits a high degree of spatial and temporal self-similarity, though with reversing polarity. Thus the period apparent in the auto-correlation for $L_{\phi}$ might be expected. Furthermore, if we simply let $\avg{\mathbf{B}} \approx \mathbf{B}_0(r,\theta) \exp(i\omega_C t)$, the Lorentz forces could be characterized very roughly as $L_{\phi} \propto {L_{\phi}}_{, 0} \exp(i \omega_L t) \sim \mathbf{B}_0\cdot\mathbf{B}_0/\ell \exp(2 i \omega_C t)$, with cycle frequency $\omega_C = 2\pi/\tau_C$ and some length scale $\ell$.
Hence, the magnetic energy or Lorentz cycle frequency $\omega_L = 2\pi/\tau_{L}$ implies that $2 \tau_L = \tau_C$. What is potentially more curious is that $\mathrm{S}$ varies on the polarity cycle period. While Figure \ref{fig2}(c) might suggest a reversal of the solar-like character of the differential rotation, this in fact does not occur. Rather, the shear is significantly weakened but maintains the positive latitudinal gradient that sustains the toroidal magnetic field, which renders the sign of $\boldsymbol{\nabla}\Omega$ independent of time. Therefore, the polarity reversals in $\avg{\mathbf{B}_P}$ require that $\mathrm{S}$ vary with the polarity cycle period $\tau_C$.

\begin{figure}[t!]
  \begin{center}
    \includegraphics[width=0.45\textwidth]{d3_sld_figure3_300dpi.eps}
    \figcaption{An interval of magnetic quiescence. (a) Time-latitude diagram of $\langle B_{\phi} \rangle$ at $\Rsun{0.95}$ in cylindrical projection, picturing the loss and reappearance of cyclical polarity reversals as well as the lower amplitude of the wreaths. Strong positive toroidal field is shown in red, negative in blue. (b) Normalized magnetic dipole moment (red) and quadrupole moment (blue). The quadrupole moment peaks near reversals, indicating its importance. \label{fig4}}
  \end{center}
\end{figure}

\section{Equatorward Propagation} \label{sec:propagate}

As with ASH and EULAG, simulations in spherical segments that employ the Pencil code also obtain regular cyclical magnetic behavior. Some of these polarity reversing solutions exhibit equatorward propagating magnetic features \citep{kapyla12}, magnetic flux ejection \citep{warnecke12}, and 33-year magnetic polarity cycles \citep{warnecke13}. Currently, however, the mechanism for the equatorward propagation of the magnetic structures in those simulations remains unclear.
Perhaps the mechanism is similar to that seen here.

The equatorward propagation of magnetic features observed in this case, as in Figures \ref{fig1}(c) and \ref{fig4}(a), arises through two mechanisms. The first process is the nonlinear feedback of the Lorentz force that acts to quench the differential rotation, disrupting the convective patterns and the shear-sustaining Reynolds stresses they possess. Since the latitudinal shear serves to build and maintain the magnetic wreaths, the latitude of peak magnetic energy corresponds to that of the greatest shear. So the region with available shear moves progressively closer to the equator as the Lorentz forces of the wreaths locally weaken the shear. Such a mechanism explains the periodic modifications of the differential rotation seen in Figure \ref{fig2}(c). However, it does not explain how this propagation is initiated and sustained, as one might instead expect an equilibrium to be established with the magnetic energy generation balancing the production of shear, further moderated by cross-equatorial magnetic flux cancellation as the distance between the wreaths declines.

There are two possibilities for the second mechanism that promotes the equatorward propagation of toroidal magnetic field structures. If we may consider the dynamo action in this case as a dynamo wave, the velocity of the dynamo wave propagation is sensitive to the gradients in the angular velocity and the kinetic helicity in the context of an $\alpha\Omega$ dynamo \citep[e.g.,][]{parker55,yoshimura75}. A simple analysis indicates that near and poleward of the edge of the low-latitude wreaths the Parker-Yoshimura mechanism has the correct sign to push the dynamo wave toward the equator, but the effect is marginal elsewhere.
The second possibility is that the spatial and temporal offsets between the fluctuating EMF and the mean-shear production of toroidal field lead to a nonlinear inducement to move equatorward. This mechanism relies on the concurrent movement of the turbulent production of the poloidal field, which continues to destroy gradients in angular velocity through the production of toroidal magnetic field by the action of the differential rotation on the renewed poloidal field. Nonetheless, the wreaths eventually lose their azimuthal coherence because of cross-equatorial flux cancellation and the lack of sufficient differential rotation to sustain them, which leads to a rapid dissemination of the remaining flux by the convection. This is evident in Figure \ref{fig1}(a), where at the end of each cycle the wreaths converge on the equator and their resulting destruction leads to the poleward advection of field. This advected field is of the opposite sense of the previous cycle's polar cap and, being of greater amplitude compared to the remaining polar field, establishes the sense of the subsequent cycle's polar field. Furthermore, in D3S, as a cycle progresses the centroid of the greatest dynamo action propagates both equatorward and downward in radius, as might be deduced from the successful reversals visible in Figure \ref{fig4}(b), though it is more evident in a time-radius diagram. Hence, the equatorial migration begun at the surface makes its way deeper into the domain as the cycle progresses.

\section{Grand Minima} \label{sec:intermit}

As with some other dynamo simulations \citep[e.g.,][]{brown11,augustson13}, there is also long-term modulation in case D3S. Figure \ref{fig4} shows an interval of about 20 years where the polarity cycles are lost, though the magnetic energy cycles resulting from the nonlinear interaction of the differential rotation and the Lorentz force remain.
During this period, the magnetic energy in the domain is about 25\% lower, whereas the energy in the volume encompassed by the lower latitudes is decreased by 60\%. However, both the spatial and temporal coherency of the cycles are recovered after this interval and persist for the last 40~years of the 100~year-long simulation. Prior to entering this quiescent period, there was an atypical cycle with only the northern hemisphere exhibiting equatorward propagation. This cycle also exhibited a prominent loss of the equatorial anti-symmetry in its magnetic polarity. The subsequent four energy cycles do not reverse their polarity, which is especially evident in the polar regions, whereas the lower latitudes do seem to attempt such reversals.

\section{Conclusions} \label{sec:conclude}

The simulation presented here is the first to self-consistently exhibit four prominent aspects of solar magnetism: regular magnetic energy cycles during which the magnetic polarity reverses, akin to the sunspot cycle; magnetic polarity cycles with a period of 6.2~years, where the orientation of the dipole moment returns to that of the initial condition; the equatorward migration of toroidal field structures during these cycles; and quiescence after which the previous polarity cycle is recovered. Furthermore, this simulation may capture some aspects of the influence of a layer of near-surface shear, with a weak negative gradient in $\langle\Omega\rangle$ within the upper 10\% of the computational domain (3\% by solar radius). The magnetic energy cycles with the time scale $\tau_C/2$ arise through the nonlinear interaction of the differential rotation and the Lorentz force. We find that the nonlinear feedback of the Lorentz force on the differential rotation significantly reduces its role in the generation of toroidal magnetic energy.
The magnetic fields further quench the differential rotation by impacting the convective angular momentum transport during the reversal. Furthermore, despite the nonlinearity of the case, there is a discernible influence of a dynamo wave in the fluctuating production of poloidal magnetic field linked to the shear-produced toroidal field. The mechanisms producing the equatorward propagation of the toroidal fields have been identified, with the location of the greatest latitudinal shear at a given point in the cycle and the weak negative radial shear both playing a role. This simulation has also exhibited a long-lasting minimum, loosely similar to the Maunder minimum. Indeed, there is an interval covering 20\% of the cycles during which the polarity does not reverse and the magnetic energy is substantially reduced. Despite rotating three times faster than the Sun and parameterizing large portions of its vast range of spatio-temporal scales, some of the features of the dynamo that may be active within the Sun's interior have been realized in this global-scale ASH simulation.

\section*{Acknowledgments}

The authors thank Nicholas Featherstone, Brad Hindman, Mark Rast, Matthias Rempel, and Regner Trampedach for helpful and insightful conversations. This research is primarily supported by NASA through the Heliophysics Theory Program grant NNX11AJ36G, with additional support for Augustson through the NASA NESSF program by award NNX10AM74H. The computations were primarily carried out on Pleiades at NASA Ames with SMD grants g26133 and s0943, and also used XSEDE resources for analysis.
This work also utilized the Janus supercomputer, which is supported by the NSF award CNS-0821794 and the University of Colorado Boulder.

\section{Introduction}

The Internet of Things (IoT) is expected to make our physical surroundings accessible by placing sensors on everything in the world and converting the physical information into a digital format. The applications of IoT span numerous verticals, including transportation, environmental detection, and energy scheduling. In such applications, timely message delivery is a necessity. For example, for intelligent vehicles, real-time updates of road information are crucial for safe driving, and in environmental detection, updating environmental information on time is beneficial to the prediction of, as well as the preparation for, natural disasters. Since an outdated message becomes useless, timeliness is one of the critical objectives in IoT networks.

To assess the timeliness of delivered messages, a new metric called the \emph{Age of Information} (AoI) was proposed in \cite{6195689}\cite{6284003}, which is defined as the time elapsed since the most recently received update was generated at its source. Aiming to design systems that can provide fresh information, extensive studies have been conducted to characterize the AoI on the basis of queuing theory. For instance, in \cite{6195689}, the AoI was minimized for first-come-first-served (FCFS) M/M/1, M/D/1, and D/M/1 queues. Multiple sources were considered in \cite{6284003} for the FCFS M/M/1 queue. In \cite{7415972}, the AoI in the M/M/1/1 and M/M/1/2 queues with both FCFS and LCFS was characterized by considering a finite buffer capacity. The effects of the buffer size, packet age deadline, and replacement on the average AoI were further studied in \cite{7795343}.

However, these studies only focused on a point-to-point communication scenario.
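The AoI definition above can be made concrete with a short sketch. The timestamps below are hypothetical and serve only to illustrate the sawtooth behavior of the age process at a monitor:

```python
def age_trajectory(events, horizon, dt=1.0):
    """Sampled instantaneous AoI at a monitor.  `events` is a list of
    (generation_time, delivery_time) pairs; the AoI at time t is t minus
    the generation time of the freshest update delivered by t."""
    times, ages = [], []
    t = 0.0
    freshest = None  # generation time of the freshest received update
    while t <= horizon:
        for gen, dlv in events:
            if dlv <= t and (freshest is None or gen > freshest):
                freshest = gen
        ages.append(t - freshest if freshest is not None else t)
        times.append(t)
        t += dt
    return times, ages

# Hypothetical updates generated at t = 0, 3, 6 and delivered at t = 1, 5, 7.
_, ages = age_trajectory([(0, 1), (3, 5), (6, 7)], horizon=10)
peak_aoi = max(ages)  # = 4.0 here; the age peaks just before a delivery
```

Each delivery resets the age to the delivered packet's own age, producing the familiar sawtooth whose local maxima are the peak AoI values studied below.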
In practice, IoT networks generally consist of a large number of nodes that intend to communicate with their destinations via spectrum, which usually constitutes a multiple access network. Due to the broadcast nature of the wireless medium, transmissions of nodes affect each other via the interference they generate, and the characterization of AoI in such a setting has recently attracted a variety of studies [5]-[14]. Specifically, the AoI was minimized in \cite{8943134}\cite{7492912} by scheduling a group of links that are active at the same time and limiting the interference to an acceptable level. Various scheduling policies, including the Greedy, stationary randomized, Max-Weight, and Whittle's Index policies, were proposed in \cite{8514816} to minimize the AoI for periodic packet arrivals. In addition, the AoI performance under the stationary randomized and Max-Weight policies was analyzed for Bernoulli packet arrivals \cite{8933047}. A joint design of the status sampling and updating process to minimize the average AoI was further proposed in \cite{8778671}. Despite the promise of improving the AoI performance, the overhead of centralized scheduling may be too hefty to be affordable for IoT networks with massive connectivity. In that respect, decentralized schemes were also studied from the perspective of AoI optimization. In particular, the effectiveness of slotted ALOHA in minimizing AoI was studied in \cite{8006544}, where each node initiates a channel access attempt at each time slot with a certain probability. A threshold-based age-dependent random access protocol was proposed in \cite{9162973}\cite{yavascan2020analysis}, where each node accesses the channel only when its instantaneous AoI exceeds a predetermined threshold. A distributed transmission policy was proposed in \cite{9174254} based on the age gain, which is the reduction in instantaneous AoI when a packet is successfully delivered.
An Index-Prioritized Random Access scheme was proposed in \cite{8935400}, where nodes access the radio channel according to indices that reflect the urgency of their updates. The classic collision model was adopted in these studies, where one node can successfully access the channel if and only if there are no other concurrent transmissions. Although significant advances have been achieved, these works did not take into account key physical attributes of wireless systems such as fading, path loss, and interference.

Stochastic geometry, on the other hand, provides an elegant way of capturing macroscopic properties of such networks by averaging over all potential geographical patterns of the nodes, which can help to account for sources of uncertainty such as co-channel interference and channel fading. Therefore, this tool has been widely adopted to evaluate the performance of various types of wireless networks \cite{1} \cite{20160000}. Recently, there have been studies of the AoI performance in large-scale networks that combine queuing theory and stochastic geometry [17]-[23]. In particular, the lower and upper bounds of the average AoI for the Poisson bipolar network were characterized in \cite{001} via the introduction of two auxiliary systems. Based on a dominant system where every transmitter sends out packets in every time slot, \cite{002} devised a locally adaptive channel access scheme for reducing the peak AoI. In these studies, the interference was decoupled from the queue status, i.e., whether the queue is empty or not. To characterize the spatio-temporal interactions of queues, a framework was provided in \cite{003} that captures the peak AoI for large-scale IoT networks with time-triggered (TT) and event-triggered (ET) traffic.
The effects of network parameters on the AoI performance were further studied in \cite{004} \cite{9316915} in the context of random access networks. The spatial moments of the mean AoI of the status update links were characterized in \cite{mankar2020throughput} \cite{mankar2020spatial} based on the moments of the conditional success probability. These studies focused on a static network topology, i.e., the point process pattern is realized at the beginning of time and remains unchanged after that, leaving network scenarios with mobility largely unexplored. In this paper, we study the optimization of AoI over a large-scale random access network with mobility. Interestingly, the expression for the AoI has a concise form in this case, which allows us to obtain the optimal system design parameters in closed form.

In particular, we consider a Poisson bipolar network where each transmitter updates information packets according to an independent Bernoulli process. Similar to \cite{7415972}\cite{004}, we adopt a unit-size buffer at the transmitter side, which avoids the long waiting time caused by the accumulation of data packets in the buffer. To reduce the overhead of centralized scheduling, each transmitter employs an ALOHA random access protocol, i.e., each transmitter accesses the channel with a certain probability at each time slot. The successful transmission of a packet depends on the Signal to Interference plus Noise Ratio (SINR) value at the receiver side. Because of the interference, the buffer states of the transmitters are coupled with each other. By leveraging tools from stochastic geometry and queuing theory, we derive a fixed-point equation for the probability of successful transmission of each transmitter that takes this coupling effect into account. Based on the probability of successful transmission, an analytical expression for the peak AoI is obtained, which is a function of the packet arrival rate and the channel access probability.
Using this expression, we find that when the node deployment density is small, the AoI performance can always be improved by choosing a large packet arrival rate or channel access probability. When the node deployment density becomes large, a very high packet arrival rate or channel access probability can in turn deteriorate the AoI performance owing to the severe interference caused by simultaneous transmissions. The peak AoI is then optimized by tuning the channel access probability for a given packet arrival rate and by tuning the packet arrival rate for a given channel access probability, respectively. It is found that when the packet arrival rate is optimally tuned, a higher channel access probability always leads to better peak AoI performance, whereas when the channel access probability is optimally tuned, the peak AoI benefits from a smaller packet arrival rate only when the node deployment density is high. We then study how to minimize the peak AoI by jointly tuning the packet arrival rate and the channel access probability, and find that the optimal channel access probability is always set to be one. This indicates that to reduce the waiting time in each transmitter's buffer, each packet should be transmitted as soon as possible. The packet arrival rate, i.e., the information update frequency, should instead be lowered so as to alleviate channel contention. For all three cases, i.e., tuning the channel access probability, tuning the packet arrival rate, and joint tuning, the optimal peak AoI grows linearly as the node deployment density increases, which is in sharp contrast to an exponential growth when the system parameters are not properly tuned. This sheds important light on freshness-aware design for large-scale networks.\n\t\n\tThe remainder of this paper is organized as follows. Section \\ref{system model} presents the system model and preliminary analysis. 
Section \\ref{section:p} presents the derivation and analysis of the probability of successful transmission. In Section \\ref{PAoI}, the peak AoI is derived and optimized by tuning system parameters including the channel access probability and the packet arrival rate. Section \\ref{Simulation Results} presents simulation results that validate the above analysis. Finally, Section \\ref{conclusion} summarizes the work and draws final conclusions.\n\t\n\t\\section{System Model and Preliminary Analysis}\\label{system model}\n\tLet us consider a Poisson bipolar network where transmitters are scattered according to a homogeneous Poisson point process (PPP) of density $\\lambda$. As Fig. \\ref{system_model} illustrates, each transmitter is paired with a receiver situated at distance $R$ in a random direction. In this network, time is slotted into equal-length intervals and the transmission of each packet lasts for one slot.\n\tPackets arrive at each transmitter following independent Bernoulli processes of rate $\\xi$. We assume every transmitter is equipped with a unit-size buffer, and hence a newly arriving packet is dropped if an older packet is still in service. At the beginning of each time slot, transmitters with non-empty buffers access the channel with a fixed probability $q$. 
To better illustrate the channel access process of each transmitter, let us define two parameters $\\epsilon^{'}$ and $\\epsilon^{''}$, where $\\epsilon^{''}\\ll\\epsilon^{'}\\ll1$:\n\t1) at $t+\\epsilon^{''}$, a new packet arrives with probability $\\xi$;\n\t2) at $t+\\epsilon^{'}$, each transmitter that has one packet in its buffer accesses the radio channel with probability $q$;\n\t3) if the transmission is successful, the packet departs at $t+1-\\epsilon^{'}$; otherwise, the packet remains in the queue and is retransmitted in subsequent time slots until success.%\n\t\n\t\\subsection{Signal-to-Interference-plus-Noise Ratio}\n\tIn this paper, we consider a radio spectrum that is globally reused, i.e., all the nodes utilize the same spectrum for packet delivery. Moreover, each transmitter employs the same transmit power and thus attains an identical mean received SNR $\\gamma$ at its receiver. As such, for a generic transmitter $i$, the received SINR at time slot $t$ is given by\n\t\\begin{equation}\n\t\\text{SINR}_i(t)=\\frac{h_{ii}(t)R^{-\\alpha}}{\\sum_{j\\neq i}h_{ij}(t)e_j(t)\\mathbf{1}(Q_j(t)>0)d_{ij}(t)^{-\\alpha}+\\gamma^{-1}} ~,\n\t\\label{eq:defineSINR}\n\t\\end{equation}\n\twhere $h_{ij}(t)$ represents the small-scale fading gain between transmitter $j$ and receiver $i$, which is assumed to be exponentially distributed with unit mean and to vary i.i.d. across space and time, $d_{ij}(t)$ is the distance between transmitter $j$ and receiver $i$, and $e_j(t)$ is a binary function, where $e_j(t)=1$ denotes that transmitter $j$ initiates a packet transmission at slot $t$, and $e_j(t)=0$ otherwise.\n\t$Q_j(t)$ is the queue length of transmitter $j$ at slot $t$. Thus, $\\mathbf{1}(Q_j(t)>0)=1$ indicates that transmitter $j$ has a non-empty buffer at time slot $t$, and $\\mathbf{1}(Q_j(t)>0)=0$ otherwise. The parameter $\\alpha$ is the path-loss exponent. 
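As an aside, the SINR expression in \\eqref{eq:defineSINR} is straightforward to evaluate numerically. The following is a minimal sketch; the function name and the convention of passing only the distances of the currently active interferers are illustrative assumptions, not part of the model:

```python
import random

def sinr(r_link, interferer_dists, gamma, alpha, rng):
    """Sample the SINR of one receiver in one slot.

    r_link          : TX-RX distance R of the intended link
    interferer_dists: distances d_ij of the transmitters that are actively
                      sending this slot (i.e., e_j = 1 and Q_j > 0)
    gamma           : mean received SNR
    Rayleigh fading is modeled as i.i.d. unit-mean exponential power gains.
    """
    signal = rng.expovariate(1.0) * r_link ** (-alpha)
    interference = sum(rng.expovariate(1.0) * d ** (-alpha)
                       for d in interferer_dists)
    return signal / (interference + 1.0 / gamma)
```

Since the intended link's fading gain is drawn first, re-seeding the generator and adding interferers can only lower the returned SINR, mirroring the structure of \\eqref{eq:defineSINR}.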
In this work, we consider that a packet is successfully delivered if the received SINR exceeds a decoding threshold $\\theta$. Therefore, the corresponding probability of successful transmission for node $i$ can be written as\n\t\\begin{equation}\n\tp_{i}(t)=P(\\text{SINR}_i (t)>\\theta).\n\t\\label{define_p}\n\t\\end{equation}\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=14cm,height=6cm]{system_model.png}\n\t\t\\caption{Snapshot of the Poisson bipolar network in consideration. The upper-right subfigure illustrates the queueing model of a generic transmitter. The lower-right subfigure illustrates the channel access process.}\n\t\t\\label{system_model}\n\t\\end{figure}\n\tSimilar to \\cite{5601963}, we assume a high-mobility random walk model for the positions of the transmitters. As such, the received $\\text{SINR}_i(t)$ of each transmitter $i$, $i \\in \\mathbb{N}$, can be considered i.i.d. across time $t$. By symmetry, the probability of successful transmission is also identical across all the transmitters. Accordingly, we drop the indices $i$ and $t$ in \\eqref{define_p} and denote by $p$ the probability of successful transmission. Then, the dynamics of packet transmissions over each wireless link can be regarded as a Geo\/Geo\/1\/1 queue with service rate $qp$.\n\t\n\t\\subsection{Performance Metric}\n\tIn this paper, we focus on the performance metric of AoI, which captures the timeliness of the information delivered at the receiver side. In Fig. \\ref{AoIcurve}, we depict the evolution of the AoI $A(t)$ over time for a Geo\/Geo\/1\/1 queue, where $t_k$ denotes the time slot in which the $k^{th}$ packet arrived, $t^{'}_k$ denotes the time slot in which the $k^{th}$ packet is successfully transmitted, and $t^{*}_k$ denotes the time slot in which the $k^{th}$ packet is dropped. 
From this figure, we can see that the AoI $A(t)$ increases linearly over time and plummets at the time slots $t^{'}_1, t^{'}_2, t^{'}_3,\\ldots ,t^{'}_n$ where packets are successfully transmitted. Notably, during the period between $t_2$ and $t^{'}_2$, a packet arrives at slot $t^\\ast$ but is immediately discarded because the buffer can accommodate only one packet. Formally, the evolution of such a process can be written as\n\t\\begin{equation}\\label{define_aoi}\n\tA(t+1)=\\left\\{\n\t\\begin{array}{lr}\n\tA(t)+1~~~~\\text{transmission failure} \\\\\n\tt-t_k+1~~~\\text{transmission successful}.\n\t\\end{array}\n\t\\right.\\end{equation}\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=12cm,height=6cm]{8.png}\n\t\t\\caption{An example of the AoI evolution over time.}\n\t\t\\label{AoIcurve}\n\t\\end{figure}\n\tIn this paper, we adopt as our performance metric the peak AoI, denoted by $A_p$, which is defined as the time average of the age values at the time instants when a packet is successfully transmitted. This metric is given by \\cite{apdefine}\n\t\\begin{equation}\n\tA_p=\\limsup_{T\\to\\infty}\\frac{\\sum_{t=1}^{T}A(t)\\mathbf{1}\\{A(t+1)\\leq A(t)\\}}{\\sum_{t=1}^{T}\\mathbf{1}\\{A(t+1)\\leq A(t)\\}}.\n\t\\end{equation}\n\t\n\t\n\t\\section{Probability of Successful Transmission}\\label{section:p}\n\tThe AoI performance depends on the probability of successful transmission of each update packet. This section is then devoted to the characterization of the probability of successful transmission. 
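Before proceeding, the AoI recursion \\eqref{define_aoi} and the peak-AoI time average defined above can be reproduced in a short Monte Carlo sketch of a single Geo\/Geo\/1\/1 link. Treating the per-attempt success probability $p$ as an i.i.d. coin flip is a simplification of the SINR model, and the slot-level bookkeeping below is one plausible convention, so absolute values may differ from the analytical peak AoI by small constant offsets:

```python
import random

def peak_aoi(xi, q, p, T=200000, seed=1):
    """Average the AoI values recorded at successful-delivery instants for
    one link: Bernoulli(xi) arrivals, ALOHA access probability q, i.i.d.
    per-attempt success probability p, unit-size buffer (arrivals that see
    a buffered packet are dropped)."""
    rng = random.Random(seed)
    age = 1          # current AoI at the receiver
    gen = None       # generation slot of the buffered packet, if any
    total, deliveries = 0, 0
    for t in range(T):
        if gen is None and rng.random() < xi:             # arrival
            gen = t
        delivered = (gen is not None and rng.random() < q
                     and rng.random() < p)                # access + success
        if delivered:
            total += age                                  # pre-reset (peak) value
            deliveries += 1
            age = t - gen + 1                             # AoI resets
            gen = None
        else:
            age += 1
    return total / max(deliveries, 1)
```

Under this convention, a lower success probability inflates the recorded peak values, in line with the analysis that follows.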
First of all, the following lemma shows that the probability of successful transmission can be written in the form of a fixed-point equation.\n\t\\begin{lemma}\\label{lemma:p}\n\t\tThe probability of successful transmission of a generic transmitter can be obtained as\n\t\t\\begin{equation}\n\t\tp=\\exp{\\left\\{-\\lambda cR^2\\frac{q \\xi}{\\xi+pq(1-\\xi)}-\\theta R^{\\alpha}\\gamma^{-1}\\right\\}}, \\label{eq:p}\n\t\t\\end{equation}\n\t\twhere $c=\\pi\\theta^{\\frac{2}{\\alpha}}\/\\operatorname{sinc}(\\frac{2}{\\alpha})$.\n\t\t\\begin{proof}\n\t\t\tSee Appendix \\ref{prooflemma1}.\n\t\t\\end{proof}\n\t\\end{lemma}\n\t\n\tLemma \\ref{lemma:p} indicates that the probability of successful transmission $p$ is determined by the channel access probability $q$, the packet arrival rate $\\xi$, the node deployment density $\\lambda$, and the TX-RX distance $R$. The following result further characterizes the distribution of the roots of \\eqref{eq:p}.\n\t\\begin{theorem}\\label{Theorem_p_root}\n\t\tThe fixed-point equation \\eqref{eq:p} has three non-zero roots $0<p_3<p_2<p_1\\leq 1$ if $\\frac{4}{q}<\\lambda cR^2<\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}$ and $\\xi_l<\\xi<\\xi_h$, where $\\xi_l$ and $\\xi_h$ are given in \\eqref{arrival_low} and \\eqref{arrival_high}, respectively; otherwise, it has a unique non-zero root $0<p\\leq 1$.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tSee Appendix \\ref{studyProot}.\n\t\\end{proof}\n\t\n\t\\section{Peak Age of Information}\\label{PAoI}\n\tBased on the probability of successful transmission, the peak AoI of a generic link is characterized as follows.\n\t\\begin{theorem}\\label{PAoIdef}\n\t\tThe peak AoI of a generic link is given by\n\t\t\\begin{equation}\\label{eq:PAoIexpr}\n\t\tA_p=\\frac{1}{\\xi}+\\frac{2}{qp}-1,\n\t\t\\end{equation}\n\t\twhere $p$ is the probability of successful transmission given in Lemma \\ref{lemma:p}.\n\t\\end{theorem}\n\t\\subsection{Optimal Tuning of Channel Access Probability $q$}\n\tThe following theorem presents the optimal channel access probability $q^\\ast_\\xi$ that minimizes the peak AoI $A_p$, i.e., $A^{q=q^\\ast_\\xi}_p=\\underset{q}{\\min}~A_p$.\n\t\\begin{theorem}\\label{Theorem_OPtimalQ}\n\t\tGiven a packet arrival rate $\\xi$, the optimal peak AoI $A^{q=q^\\ast_\\xi}_p$ is given by\n\t\t\\begin{equation}\\label{eq:OptimalQAp}\n\t\tA_{p}^{q=q^{*}_{\\xi}} = \\begin{cases}\n\t\t2\\lambda cR^2\\exp{\\left\\{\\theta R^\\alpha \\gamma^{-1}+1\\right\\}}+1-\\frac{1}{\\xi} \\quad &\\text{if } \\lambda c R^2>1+\\frac{{p}_{*}(1-\\xi)}{\\xi} \\\\\n\t\t\\frac{1}{\\xi}+\\frac{2}{p_{*}}-1 \\quad &\\text{otherwise},\\end{cases}\n\t\t\\end{equation}\n\t\twhich is achieved when the channel access probability $q$ is set to be\n\t\t\\begin{equation}\\label{eq:OptimalQ}%\n\t\tq=q^\\ast_\\xi = \\begin{cases}\n\t\t\\frac{1}{\\lambda cR^2-\\frac{1-\\xi}{\\xi}\\exp{\\left\\{-\\theta R^\\alpha \\gamma^{-1}-1\\right\\}}} \\quad &\\text{if } \\lambda c R^2>1+\\frac{{p}_{*}(1-\\xi)}{\\xi} \\\\\n\t\t1 \\quad &\\text{otherwise},\n\t\t\\end{cases}\n\t\t\\end{equation}\n\t\twhere $p_{*}$ is the non-zero root of the following equation\n\t\t\\begin{equation}\n\t\tp_{*}=\\exp\\left\\{-\\lambda c R^2 \\frac{\\xi}{\\xi+p_{*}(1-\\xi)}-\\theta R^\\alpha \\gamma^{-1}\\right\\}.\\label{eq:q1valuep}\n\t\t\\end{equation}\n\t\t\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tSee Appendix \\ref{prooftheorem2}.\n\t\\end{proof}\n\tTheorem \\ref{Theorem_OPtimalQ} shows that 
the optimal channel access probability $q^\\ast_\\xi=1$ when $\\lambda c R^2\\leq 1+\\frac{{p}_{*}(1-\\xi)}{\\xi}$, indicating that in this case, each node would transmit its packet as long as the buffer is non-empty. As the node deployment density $\\lambda$, the TX-RX distance $R$, or the decoding threshold $\\theta$ (equivalently, $c$) grows, we have $q^\\ast_\\xi<1$ due to either mounting channel contention or a lower chance of successful packet decoding.%\n\t\n\t\n\tTo take a closer look at Theorem \\ref{Theorem_OPtimalQ}, Fig. \\ref{fig:OptimalQandPage} demonstrates how the optimal channel access probability $q^\\ast_\\xi$ and the corresponding peak AoI $A^{q=q^\\ast_\\xi}_p$ vary with the node deployment density $\\lambda$ under different values of the packet arrival rate $\\xi$. It can be seen that when $\\lambda$ is small, e.g., $\\lambda=0.02$, the optimal channel access probability $q^\\ast_\\xi=1$ regardless of the value of the packet arrival rate $\\xi$. Yet, the peak AoI $A^{q=q^\\ast_\\xi}_p$ crucially depends on $\\xi$. Intuitively, a smaller node deployment density reduces the interference among the transmitter-receiver pairs, which improves the probability of successful packet transmission. Accordingly, the age performance can be effectively improved with more frequent updates, i.e., a larger packet arrival rate $\\xi$. Therefore, as shown in Fig. \\ref{fig:OptimalQandPage}b, $A^{q=q^\\ast_\\xi}_p$ with $\\xi=0.9$ is lower than those with $\\xi=0.6$ or $\\xi=0.3$ when the node deployment density $\\lambda$ is small. On the other hand, if the node deployment density $\\lambda$ grows, then to relieve the channel contention, the system should reduce the channel access probability. Thus, we can see that $q^\\ast_\\xi$ decreases with $\\lambda$, and the descent point, i.e., the density at which $q^\\ast_\\xi$ falls below one, is positively correlated with the packet arrival rate $\\xi$. 
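The fixed point in \\eqref{eq:p} that underlies these curves has no elementary closed form in general, but it is easily obtained by iteration. A minimal sketch follows; the function name, the default parameter values, and the choice of starting from $p=1$ (which converges to the largest root) are assumptions for illustration:

```python
import math

def solve_p(lam, q, xi, R=3.0, alpha=3.0, theta=0.2, gamma=20.0, iters=1000):
    """Iterate p <- exp(-lam*c*R^2*q*xi/(xi + p*q*(1-xi)) - theta*R^alpha/gamma),
    with c = pi*theta^(2/alpha)/sinc(2/alpha) and sinc(x) = sin(pi*x)/(pi*x)."""
    sinc = lambda x: math.sin(math.pi * x) / (math.pi * x)
    c = math.pi * theta ** (2.0 / alpha) / sinc(2.0 / alpha)
    K = theta * R ** alpha / gamma                 # mean-SNR term
    p = 1.0
    for _ in range(iters):
        p = math.exp(-lam * c * R * R * q * xi / (xi + p * q * (1.0 - xi)) - K)
    return p
```

Two sanity checks follow directly from \\eqref{eq:p}: with $\\lambda=0$ the interference term vanishes and $p=e^{-\\theta R^{\\alpha}\\gamma^{-1}}$, while with $\\xi=1$ the offered load equals one and $p=e^{-\\lambda cR^{2}q-\\theta R^{\\alpha}\\gamma^{-1}}$.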
In this case, it is interesting to observe that the peak AoI $A^{q=q^\\ast_\\xi}_p$ benefits from a lower packet arrival rate $\\xi$, which is in sharp contrast to the case where the node deployment density $\\lambda$ is small.\n\t\\begin{figure}[t]%\n\t\t\\begin{minipage}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=8cm,height=6.73cm]{2.png}\n\t\t\t\\centering{(a)}\n\t\t\t\\label{fig:optimal:q:lambda}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=8cm,height=6.7cm]{1.png}\n\t\t\t\\centering{(b)}\n\t\t\t\\label{fig:optimal:q:pAoi}\n\t\t\\end{minipage}\n\t\t\\caption{Optimal channel access probability $q^\\ast_\\xi$ and the corresponding peak AoI $A^{q=q^\\ast_\\xi}_p$ versus the node deployment density $\\lambda$. $\\alpha=3$, $\\theta=0.2$, $\\gamma=20$, $R=3$. $\\xi\\in\\{0.3,0.6,0.9\\}$. (a) $q^\\ast_\\xi$ versus $\\lambda$. (b) $A^{q=q^\\ast_\\xi}_p$ versus $\\lambda$.} \\label{fig:OptimalQandPage}\n\t\\end{figure}\n\t\n\t\n\t\\begin{figure}[t]%\n\t\t\\begin{minipage}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=8cm,height=6.85cm]{3.png}\n\t\t\t\\label{fig:optimal:alpha:lambda}\n\t\t\t\\centering{(a)}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=8.2cm,height=6.85cm]{4.png}\n\t\t\t\\label{fig:optimal:alpha:lambda:PAoI}\n\t\t\t\\centering{(b)}\n\t\t\\end{minipage}\n\t\t\\caption{Optimal packet arrival rate $\\xi^\\ast_q$ and the corresponding peak AoI $A^{\\xi=\\xi^\\ast_q}_p$ versus the node deployment density $\\lambda$. $\\alpha=3$, $\\theta=0.5$, $\\gamma=20$, $R=3$. $q\\in\\{0.2,0.4,0.6,0.8\\}$. (a) $\\xi^\\ast_q$ versus $\\lambda$. 
(b) $A^{\\xi=\\xi^\\ast_q}_p$ versus $\\lambda$.}\n\t\t\\label{fig:optimal_q_Ap}\n\t\\end{figure}%\n\t\\subsection{Optimal Tuning of Packet Arrival Rate $\\xi$}\n\tThe following theorem presents the optimal packet arrival rate $\\xi^\\ast_q$ that minimizes the peak AoI $A_p$, i.e., $A^{\\xi=\\xi^\\ast_q}_p=\\underset{\\xi}{\\min}~A_p$.\n\t\\begin{theorem}\\label{Theorem_OPtimalAlpha}\n\t\tGiven a channel access probability $q$, the optimal peak AoI $A^{\\xi=\\xi^\\ast_q}_p$ is given by\n\t\t\\begin{equation}\\label{eq:OPtimalAlpha2}\n\t\tA_{p}^{\\xi=\\xi^{*}_{q}} = \\begin{cases}\n\t\t\\frac{q\\lambda cR^2\\left(\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1\\right)+2}{2q \\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}} \\quad &\\text{if } \\lambda c R^2>\\frac{1}{2q} \\\\\n\t\t\\frac{2}{q}\\exp{\\left\\{\\lambda cR^2q+\\theta R^\\alpha \\gamma^{-1}\\right\\}} \\quad &\\text{otherwise},\\end{cases}\t\t\n\t\t\\end{equation}\n\t\twhich is achieved when the packet arrival rate $\\xi$ is set to be\n\t\t\\begin{equation}\\label{eq:OPtimalAlpha}%\n\t\t\\xi=\\xi_q^{*} = \\begin{cases}\n\t\t\\frac{2q \\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}}{q\\lambda cR^2\\left(\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1\\right)+2q\\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}-2} &\\text{if } \\lambda c R^2>\\frac{1}{2q} \\\\\n\t\t1 \\quad &\\text{otherwise}.\n\t\t\\end{cases}\n\t\t\\end{equation}\t\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tSee Appendix \\ref{prooftheorem3}.\n\t\\end{proof}%\n\t\n\tTheorem \\ref{Theorem_OPtimalAlpha} reveals that the optimal packet arrival rate $\\xi^\\ast_q=1$ when $\\lambda c R^2<\\frac{1}{2 q}$, indicating that in this case, to minimize the peak AoI, new packets shall be generated as frequently as possible. 
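The two-case expression for $\\xi^\\ast_q$ above is simple to evaluate; the sketch below mirrors it directly (the helper name and the default parameter values, chosen to match Fig. \\ref{fig:optimal_q_Ap}, are illustrative):

```python
import math

def optimal_xi(lam, q, R=3.0, alpha=3.0, theta=0.5, gamma=20.0):
    """Closed-form optimal packet arrival rate xi*_q for a given channel
    access probability q, per the two-case expression above."""
    sinc = lambda x: math.sin(math.pi * x) / (math.pi * x)
    c = math.pi * theta ** (2.0 / alpha) / sinc(2.0 / alpha)
    L = lam * c * R * R                    # lambda * c * R^2
    if L <= 1.0 / (2.0 * q):
        return 1.0                         # update as frequently as possible
    K = theta * R ** alpha / gamma
    s = math.sqrt(1.0 + 4.0 / (q * L)) + 1.0
    e = 2.0 * q * math.exp(-2.0 / s - K)
    return e / (q * L * s + e - 2.0)
```

For small densities the function returns $\\xi^\\ast_q=1$, and beyond the threshold $\\lambda cR^2=\\frac{1}{2q}$ it falls strictly below one, as stated in the theorem.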
Similar to Theorem \\ref{Theorem_OPtimalQ}, as $\\lambda$, $R$, or $c$ grows, due to mounting channel contention or a lower probability of successful transmission, the optimal packet arrival rate satisfies $\\xi^\\ast_q<1$.%\n\t\n\tFig. \\ref{fig:optimal_q_Ap} demonstrates how the optimal packet arrival rate $\\xi^\\ast_q$ and the corresponding peak AoI $A^{\\xi=\\xi^\\ast_q}_p$ vary with the node deployment density $\\lambda$ under various values of the channel access probability $q$.\n\tIntuitively, as the node deployment density $\\lambda$ increases, to reduce the interference, the system should reduce either the channel access probability $q$ or the packet arrival rate $\\xi$. Accordingly, we can see from Fig. \\ref{fig:optimal_q_Ap}a that as $\\lambda$ increases, the optimal packet arrival rate $\\xi^\\ast_q$ declines, and it is further reduced under a larger channel access probability $q$. On the other hand, as shown in Fig. \\ref{fig:optimal_q_Ap}b, the corresponding peak AoI $A^{\\xi=\\xi^\\ast_q}_p$ grows with the node deployment density $\\lambda$, which is intuitively clear. Yet, in contrast to that in Fig. 
\\ref{fig:OptimalQandPage}b, a larger channel access probability $q$ always leads to a smaller $A^{\\xi=\\xi^\\ast_q}_p$.\n\t\\subsection{Joint Tuning of $q$ and $\\xi$}\n\t\n\tThe following theorem presents the result of jointly tuning the channel access probability $q$ and the packet arrival rate $\\xi$ for minimizing the peak AoI $A_p$, i.e., $A^{*}_p=\\underset{\\{q, \\xi\\}}{\\min} A_p.$\n\t\n\t\\begin{theorem}\\label{Theorem_OPtimalqAlphaboth}\n\t\tThe optimal peak AoI $A^{*}_p=\\underset{\\{q, \\xi\\}}{\\min}~A_p$ is given by\n\t\t\\begin{equation}\\label{eq:OptimalJointAp}\n\t\tA_{p}^{*} = \\begin{cases}\\frac{\\lambda cR^2\\left(\\sqrt{1+\\frac{4}{\\lambda cR^2}}+1\\right)+2}{2\\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}},\n\t\t\\quad &\\text{if } \\lambda c R^2>\\frac{1}{2} \\\\\n\t\t2\\exp{\\left\\{\\lambda cR^2+\\theta R^\\alpha \\gamma^{-1}\\right\\}} \\quad &\\text{otherwise},\\end{cases}\n\t\t\\end{equation}\n\t\twhich is achieved when the channel access probability $q$ is set to be\n\t\t\\begin{equation}\n\t\tq=q^{*}=1,\n\t\t\\end{equation}\n\t\tand the packet arrival rate $\\xi$ is set to be\n\t\t\\begin{equation}\\label{eq:OptimalJointXi}\n\t\t\\xi=\\xi^{*} = \\begin{cases}\n\t\t\\frac{2 \\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}}{\\lambda cR^2\\left(\\sqrt{1+\\frac{4}{\\lambda cR^2}}+1\\right)+2\\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}-2} &\\text{if } \\lambda c R^2>\\frac{1}{2} \\\\\n\t\t1 \\quad &\\text{otherwise}.\n\t\t\\end{cases}\n\t\t\\end{equation}\n\t\t\\begin{proof}\n\t\t\tSee Appendix 
\\ref{prooftheorem4}.\n\t\t\\end{proof}%\n\t\\end{theorem}\n\t\n\t\\begin{figure}[t]\n\t\t\\begin{minipage}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=7.8cm,height=7.10cm]{BAoptimalalphaq.png}\n\t\t\t\\label{fig:optimal:alphaq:lambda}\n\t\t\t\\centering{(a)}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=8.5cm,height=7.08cm]{BAoptimalAp.png}\n\t\t\t\\label{fig:optimal:alphaq:lambda:PAoI}\n\t\t\t\\centering{(b)}\n\t\t\\end{minipage}\n\t\t\\caption{Optimal channel access probability $q^{\\ast}$, optimal packet arrival rate $\\xi^{\\ast}$, and the corresponding peak AoI $A^{\\xi=\\xi^{\\ast},q=q^{\\ast}}_p$ versus the node deployment density $\\lambda$. $\\alpha=3$, $\\gamma=20$, $R=3$, $\\theta\\in\\{0.2,0.5,0.8\\}$. (a) $\\xi^{\\ast}$, $q^{\\ast}$ versus $\\lambda$. (b) $A^{\\xi=\\xi^{\\ast},q=q^{\\ast}}_p$ versus $\\lambda$.}\n\t\t\\label{fig:optimal_q_alpha_both}\n\t\\end{figure}\n\tFig. \\ref{fig:optimal_q_alpha_both} demonstrates how the optimal channel access probability $q^\\ast$, the optimal packet arrival rate $\\xi^\\ast$, and the corresponding minimum peak AoI $A^\\ast_p$ vary with the node deployment density $\\lambda$.\n\tIt is clear from Fig. \\ref{fig:optimal_q_alpha_both} that the optimal packet arrival rate $\\xi^\\ast=1$ when $\\lambda c R^2<\\frac{1}{2}$, indicating that in this case, to minimize the peak AoI, packets should arrive as frequently as possible. Similar to Theorem \\ref{Theorem_OPtimalQ}, as $\\lambda$, $R$, or $c$ grows, due to mounting channel contention or a lower probability of successful transmission, the optimal packet arrival rate satisfies $\\xi^\\ast<1$. Under such joint optimization, it is interesting to see that we always have $q^{\\ast}=1$ regardless of the value of $\\lambda$. Intuitively, as $\\lambda$ increases, to reduce the channel contention, the system should decrease both $q$ and $\\xi$. 
Yet, the shorter the period a packet stays in the buffer, the lower the AoI will be upon its delivery. Accordingly, the system keeps the optimal channel access probability at $q^\\ast=1$ and reduces only the packet arrival rate $\\xi^\\ast$.\n\t\n\t\n\t\\section{Simulation Results and Discussions}\\label{Simulation Results}\n\tIn this section, we provide simulation results to validate the analysis and further shed light on AoI-minimal network designs. Specifically, at the beginning of each simulation run, we realize the locations of the transmitter-receiver pairs over a $100\\times100~\\text{m}^2$ square area according to independent PPPs and place the typical link such that its receiver is located at the center of the area. In each time slot, the location of each pair is shifted, except for the typical link, and each simulation lasts for $10^5$ time slots. In each realization, the simulated peak AoI is calculated as the sum of the peak values of the AoI curve divided by the number of successful transmissions of the typical link. To obtain the simulated mean peak AoI, we average over 20 realizations.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=8.66cm,height=7cm]{peak_density.png}\n\t\t\\caption{Peak Age of Information $A_p$ versus the node deployment density $\\lambda$. $\\alpha=3$, $\\theta=0.2$, $q=1$, $\\xi=1$, $\\gamma=20$, $R\\in\\{1,2,3\\}$.}\n\t\t\\label{PAoI_density}\n\t\\end{figure}\n\t\n\tFig. \\ref{PAoI_density} illustrates how the peak AoI $A_p$ varies with the node deployment density $\\lambda$ under different TX-RX distances. From this figure, we can see that the simulation results match well with the analysis, which verifies the accuracy of Theorem \\ref{PAoIdef}. Moreover, following the developed analysis, we know that as the node deployment density goes up, the interference among the wireless links becomes more severe, leading to a smaller probability of successful transmission, which deteriorates the age performance. Accordingly, we can see from Fig. 
\\ref{PAoI_density} that the peak AoI $A_p$ increases as\n\t$\\lambda$ increases. By reducing the TX-RX distance of each pair, the SINR can be improved. As Fig. \\ref{PAoI_density} illustrates, when the distance of each pair is reduced to $R=1$, the peak AoI $A_p$ becomes less sensitive to the variation of the node deployment density $\\lambda$.\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=8.2cm,height=6.9cm]{peak_access_lambda.png}\n\t\t\\caption{Peak Age of Information $A_p$ versus the channel access probability $q$. $R=3$, $\\alpha=3$, $\\theta=0.8$, $ \\xi =1$, $\\gamma=20$, $\\lambda\\in\\{0.01,0.03,0.05\\}$.}\n\t\t\\label{PAoI_transmission}\n\t\\end{figure}\n\t\n\tFig. \\ref{PAoI_transmission} depicts the peak AoI $A_p$ as a function of the channel access probability $q$ under various values of the node deployment density $\\lambda$. It can be seen that when the node deployment density $\\lambda$ is small, $A_p$ monotonically decreases with respect to $q$ and reaches its smallest value at $q=1$. Note that an increase in the channel access probability has two opposite effects on the peak AoI. On the one hand, a larger channel access probability $q$ results in a shorter waiting time in the buffer, which reduces the staleness of the information packets. On the other hand, a larger channel access probability $q$ can also incur severe interference due to a high offered load at each transmitter's queue, leading to a lower probability of successful transmission. As Fig. \\ref{PAoI_transmission} illustrates, the node deployment density $\\lambda$ determines how these two effects trade off with each other. In particular, with a small node deployment density, the interference does not become severe even if the channel access probability $q$ is large. 
With a large $\\lambda$, on the other hand, the interference can deteriorate the AoI if $q$ is large.\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=8cm,height=6.5cm]{peak_alpha_lambda.png}\n\t\t\\caption{Peak Age of Information $A_p$ versus the packet arrival rate $\\xi$. $R=2$, $\\alpha=3$, $\\theta=0.8$, $q=1$, $\\gamma=20$, $\\lambda\\in\\{0.01,0.03,0.05\\}$.}\n\t\t\\label{PAoI_arrival_rate}\n\t\\end{figure}\n\tSimilar observations can be made from Fig. \\ref{PAoI_arrival_rate}, which demonstrates how the peak AoI $A_p$ varies with the packet arrival rate $\\xi$ under different values of the node deployment density.\n\t\n\tThe optimal channel access probability $q_{\\xi}^{*}$ under a given packet arrival rate $\\xi$ and the optimal packet arrival rate $\\xi^{*}_{q}$ for a given channel access probability $q$ are illustrated in Fig. \\ref{PAoI_transmission} and Fig. \\ref{PAoI_arrival_rate}, respectively. It can be seen that by optimally tuning $q$ and $\\xi$, the peak AoI can be substantially reduced. To further investigate the performance gain brought by jointly tuning the channel access probability $q$ and the packet arrival rate $\\xi$, Fig. \\ref{fig:optimal_q_alpha} demonstrates how the peak AoI $A_p$ varies with the node deployment density $\\lambda$ in four cases: 1) fixed $\\xi$ and $q$, 2) optimal $q$ with fixed $\\xi$, 3) optimal $\\xi$ with fixed $q$, and 4) joint optimal tuning of $q$ and $\\xi$. We can clearly see that with fixed $\\xi$ and $q$, the peak AoI $A_p$ increases exponentially with $\\lambda$. In sharp contrast, with a joint optimal tuning of $q$ and $\\xi$, the peak AoI $A_p$ increases linearly with $\\lambda$. This implies that the performance gain becomes significant when $\\lambda$ is large.\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=8cm,height=6.8cm]{optimal.png}\n\t\t\\caption{Optimal Peak Age of Information $A_p$ versus the node deployment density. 
1) fixed parameters: $\\xi=1$, $q=0.6$, 2) optimal $q$ with fixed $\\xi=1$, 3) optimal $\\xi$ with fixed $q=0.6$, 4) joint optimal tuning of $q$ and $\\xi$. $\\alpha=3$, $\\theta=0.8$, $R=3$.}\n\t\t\\label{fig:optimal_q_alpha}\n\t\\end{figure}\n\t\\section{Conclusion}\\label{conclusion}\n\tIn this paper, we conducted an analytical study of optimizing the AoI in a random access network by tuning system parameters, namely the channel access probability and the packet arrival rate.\n\tAnalytical expressions for the optimal peak AoI, as well as the corresponding system parameters, are derived for the cases of separate and joint tuning. In the separate tuning case, when the node deployment density is small, information packets should be generated as frequently as possible so as to achieve the optimal AoI performance. The same applies to the optimal channel access probability: transmitters should access the channel in every time slot. When the node deployment density becomes large, the optimal packet arrival rate and the optimal channel access probability should decrease as the node deployment density increases. In the joint tuning case, in contrast, the optimal channel access probability is always set to be one, and the optimal packet arrival rate shall decrease as the node deployment density increases. In all cases, i.e., separate or joint tuning of the channel access probability and the packet arrival rate, the optimal peak AoI grows linearly with the node deployment density, as opposed to the exponential growth observed with fixed channel access probability and packet arrival rate. 
It is therefore of crucial importance to properly tune these parameters toward a satisfactory AoI performance, especially in dense networks.\n\t\\appendices\n\t\\section{Proof of Lemma \\ref{lemma:p}} \\label{prooflemma1}\n\tNote that according to (9) and (10) in \\cite{5226957}, the probability of successful transmission $p$ is determined by the following equation\n\t\\begin{equation}\\label{eq:Fixed-Eq1}\n\tp=\\exp{\\{-\\lambda cR^2\\rho q-\\theta R^\\alpha\\gamma^{-1}\\}},\n\t\\end{equation}\n\twhere $c=\\frac{\\pi\\theta^{\\frac{2}{\\alpha}}}{\\text{sinc}(\\frac{2}{\\alpha})}$\n\tand $\\rho$ denotes the offered load of each transmitter. To derive the offered load $\\rho$, let us define the state of each transmitter at time $t$ as the number of packets in its buffer at the beginning of the time slot. As the buffer size of each transmitter is one, the state transition process of each transmitter can be modeled as a Markov chain with the state space $\\textbf{X}=\\{0,1\\}$, whose transition matrix is given by\n\t\\begin{equation}\\label{eq:StateTrans}\n\t\\textbf{P} =\\left[\\begin{array}{cc} p_{0,0} & p_{0,1} \\\\ p_{1,0} & p_{1,1} \\\\ \\end{array}\\right]=\n\t\\left[\\begin{array}{cc} 1-\\xi+\\xi qp & \\xi(1- qp) \\\\ qp & 1- qp \\\\ \\end{array}\\right],\n\t\\end{equation}\n\twhere $ p_{i,j}$ is the probability of transitioning from state $i\\in \\textbf{X}$ to state $j\\in \\textbf{X}$. According to \\eqref{eq:StateTrans}, the steady-state distribution can be derived as\n\t\\begin{equation}\\label{eq:steady_pro}\n\t\\left\\{\n\t\\begin{array}{lr}\n\t\\pi_{0}=\\frac{ qp}{\\xi+ qp-\\xi qp}, \\\\\n\t\\pi_{1}=\\frac{\\xi-\\xi qp}{\\xi+ qp-\\xi qp}.\n\t\\end{array}\n\t\\right.\\end{equation}\n\tThe offered load $\\rho$ can then be written as\n\t\\begin{equation}\\label{eq:rho1}\n\t\\rho=\\frac{r}{qp},\n\t\\end{equation}\n\twhere $r$ is the effective packet arrival rate. 
As one incoming packet would be dropped if it sees a full buffer, the effective packet arrival rate $r$ is given by\n\t\\begin{equation}\\label{eq:effective_arrival}\n\tr=\\xi\\pi_0=\\frac{\\xi qp}{\\xi+qp-\\xi qp}.\n\t\\end{equation}\n\tBy combining \\eqref{eq:rho1} and \\eqref{eq:effective_arrival}, the offered load $\\rho$ of each transmitter can be obtained as\n\t\\begin{equation}\\label{eq:rho}\n\t\\rho=\\frac{\\xi}{\\xi+qp-\\xi qp}.\n\t\\end{equation}\n\tFinally, \\eqref{eq:p} can be obtained by substituting \\eqref{eq:rho} into \\eqref{eq:Fixed-Eq1}.\n\t\n\t\\section{Proof of Theorem \\ref{Theorem_p_root}}\\label{studyProot}\n\tLet\n\t\\begin{equation}\n\tf(p)= -\\ln p-\\frac{M}{N+p}-K,\n\t\\label{eq:lnp}\n\t\\end{equation}\n\twhere\n\t\\begin{equation}\n\tM=\\lambda cR^2\\frac{\\xi}{1-\\xi},~~\n\tN=\\frac{\\xi}{q(1-\\xi)},~~\n\tK=\\theta R^\\alpha\\gamma^{-1}.\n\t\\label{MNK}\n\t\\end{equation}\n\tIt can be seen that $f(p)=0$ has the same non-zero roots as the fixed-point equation \\eqref{eq:p}. The first derivative of $f(p)$ can be written as $f^{'}(p)=\\frac{g(p)}{p(N+p)^2}$, where\n\t\\begin{equation}\n\tg(p)=-\\left(p+N-\\frac{M}{2}\\right)^2+\\frac{{M}^2}{4}-MN.\n\t\\label{eq:gp}\n\t\\end{equation}\n\tLemma \\ref{lemma:gptfp} shows that the number of non-zero roots of $f(p)=0$ for $p\\in(0,1]$ is crucially related to the number of non-zero roots of $g(p)=0$ for\n\t$p\\in(0,1]$.\n\t\n\t\\begin{lemma}\\label{lemma:gptfp}\n\t\t$f(p)=0$ has three non-zero roots $0<p_3<p_2<p_1\\leq 1$ if $g(p)=0$ has two non-zero roots $0<p_1^{'}<p_2^{'}\\leq 1$ with $f(p_1^{'})<0$ and $f(p_2^{'})>0$; otherwise, $f(p)=0$ has only one non-zero root $0<p\\leq 1$.\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tNote that $\\lim_{p\\to 0}f(p)>0$, $f(1)=-\\frac{M}{N+1}-K<0$, and $f(p)$ is a continuous function. According to the zero-point theorem, $f(p)=0$ has a non-zero root. As $p(N+p)^2>0$ when $0<p\\leq 1$, the sign of $f^{'}(p)$ coincides with that of $g(p)$. If $g(p)=0$ has one non-zero root $p_1^{'}$ for $p\\in(0,1]$, then $g(p)<0$ for $p\\in(0,p_1^{'})$ and $g(p)>0$ for $p\\in(p_1^{'},1)$. As a result, $f^{'}(p)<0$ for $p\\in(0,p_{1}^{'})$ and $f^{'}(p)>0$ for $p\\in(p_{1}^{'},1)$, indicating that $f(p)$ monotonically decreases for $p\\in(0,p_{1}^{'})$, and increases for $p\\in(p_{1}^{'},1]$. 
Since $f(1)<0$, we can conclude that in this case, $f(p)=0$ only one non-zero root $00$ for $p\\in (p^{'}_1, p^{'}_2)$. As a result, $f^{'}(p)<0$ for $p\\in(0, p^{'}_1)\\cup(p^{'}_2, 1)$, and $f^{'}(p)>0$ for $p\\in (p^{'}_1, p^{'}_2)$, indicating that $f(p)$ monotonically decreases for $p\\in(0, p^{'}_1)\\cup(p^{'}_2, 1)$, and increases for $p\\in (p^{'}_1, p^{'}_2)$.Then, we have if $f(p_{1}^{'})>0$ or $f(p_{2}^{'})<0$, $f(p)=0$ has one zero root\n\t\t$00$.\n\t\\end{proof}\n\tlemma \\ref{lemma:3roots} further presents the necessary and sufficient condition for $g(p)=0$ has two non-zero roots $00$.\n\t\\begin{lemma}\n\t\t$g(p)=0$ has two non-zero roots $00$ if and only if $\\frac{4}{q}<\\lambda cR^2<\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}$ and $\\xi_l<\\xi<\\xi_h$, where $\\xi_l$ and $\\xi_h$ are given in \\eqref{arrival_low}\n\t\tand \\eqref{arrival_high},\n\t\trespectively.\n\t\t\\label{lemma:3roots}\n\t\\end{lemma}\n\t\\begin{proof}\n\t\t$g(p)=0$ has two non-zero roots, when $\\lim_{p\\to 0} g(p)<0$, $g(1)< 0$ and peak value of $g(p)$ is larger than zero, and find that $g(p)=0$ has two non-zero roots $p^{'}_1$ and $p^{'}_2$, if\n\t\t\\begin{equation}\\frac{4}{q}<\\lambda cR^2<\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)},\\text{and} \\quad 0 0,\n\t\t\\label{eq:fp2'MN}\n\t\t\\end{align}\n\t\twhere $M=\\lambda cR^2\\frac{\\xi}{1-\\xi}$,\n\t\t$N=\\frac{\\xi}{q(1-\\xi)}.$\n\t\t\n\t\tFirstly, we simplify \\eqref{eq:lambdacRcondition}, which means\n\t\t$\n\t\t\\frac{4}{q}<\\frac{2}{q}+\\frac{2}{\\xi}-2$ and $\n\t\t\\frac{4}{q}<\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}$.\n\t\tFrom that, we can get $q>\\frac{\\xi}{1-\\xi}$. Then, we prove that $\\frac{2}{q}+\\frac{2}{\\xi}-2>\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}$ in this case. 
We have\n\t\t\\begin{equation}\n\t\t\\frac{2}{q}+\\frac{2}{\\xi}-2-\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}=\\frac{q^2(1-\\xi)^2-\\xi^2}{q^2\\xi(1-\\xi)}.\n\t\t\\end{equation}\n\t\tAs $q>\\frac{\\xi}{1-\\xi}$, we get $\\xi<1-\\frac{1}{1+q}$. Since $q\\in(0,1]$, we have $\\xi \\in(0,0.5)$. Then, we get $q^2(1-\\xi)^2>\\xi^2$, and we have $\\frac{2}{q}+\\frac{2}{\\xi}-2-\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}>0$, which means \\eqref{eq:lambdacRcondition} can be written as\n\t\t\\begin{equation}\n\t\t\\frac{4}{q}<\\lambda cR^2<\\frac{((1-\\xi)q+\\xi)^2}{q^2\\xi(1-\\xi)}.\n\t\t\\label{eq:lcr2final}\n\t\t\\end{equation}\n\t\t\n\t\tMoreover, by combining \\eqref{MNK}, \\eqref{eq:fp1'MN} and \\eqref{eq:fp2'MN}, we can obtain \\eqref{arrival_low} and \\eqref{arrival_high}.\n\t\\end{proof}\n\tFinally, Theorem \\ref{Theorem_p_root} can be obtained by combining Lemma \\ref{lemma:gptfp} and Lemma \\ref{lemma:3roots}.\n\t\n\t\\section{Proof of Monotonicity of $p_A$ and $p_L$}\\label{studyPLtrend}\n\tAccording to \\eqref{eq:p}, we have\n\t\\begin{align}\n\t\\frac{\\partial{p}}{\\partial{\\xi}}&=\\frac{\\lambda cR^2p^2}{\\lambda cR^2 p \\xi(1-\\xi)-(\\frac{\\xi}{q}+p(1-\\xi))^2}=\\frac{\\lambda cR^2p^2}{g(p)}\\frac{1}{(1-\\xi)^2},\n\t\\label{pvsalpha}\n\t\\\\\n\t\\frac{\\partial{p}}{\\partial{q}}&=\\frac{\\lambda cR^2(\\frac{\\xi}{q})^2p}{\\lambda cR^2 p \\xi(1-\\xi)-(\\frac{\\xi}{q}+p(1-\\xi))^2}=\\frac{\\lambda cR^2(\\frac{\\xi}{q})^2p}{g(p)}\\frac{1}{(1-\\xi)^2},\n\t\\label{pvsq}\n\t\\\\\n\t\\frac{\\partial{p}}{\\partial{\\lambda}}&=\\frac{cR^2\\frac{\\xi}{q}(\\xi+p q(1-\\xi))}{\\lambda cR^2 p \\xi(1-\\xi)-(\\frac{\\xi}{q}+p(1-\\xi))^2}=\\frac{cR^2\\frac{\\xi}{q}(\\xi+p q(1-\\xi))}{g(p)}\\frac{1}{(1-\\xi)^2},\n\t\\label{pvslambda}\n\t\\end{align}\n\twhere $g(p)$ is given in \\eqref{eq:gp}. Let us consider the following scenarios:\n\t\n\t1) If $g(p)=0$ has no non-zero root for $p\\in(0,1]$, then $g(p)<0$ for $p\\in (0,1]$.
In this case, \\eqref{eq:p} has one non-zero root $p_L$, which is a steady-state point according to the approximate trajectory analysis in \\cite{6205590}. We then have $g(p_L)<0$.\n\t\n\t2) If $g(p)=0$ has one non-zero root for $p\\in(0,1]$, we further distinguish two subcases:\n\t\n\t~~a) If $g(p)=0$ has two distinct roots $p^{'}_1$ and $p^{'}_2$ with $0<p^{'}_1\\le 1<p^{'}_2$, then $g(p)<0$ for $p\\in(0,p^{'}_1)$ and $g(p)>0$ for $p\\in(p^{'}_1,1)$. \\eqref{eq:p} has one non-zero root $p_L$, and $p_L< p^{'}_1$, which is a steady-state point according to the approximate trajectory analysis in \\cite{6205590}. We then have $g(p_L)<0$.\n\t\n\t~~b) If $g(p)=0$ has one root when $p\\in (-\\infty,\\infty)$, and this root lies in $(0,1)$, then $g(p)<0$ for $p\\in (0,1]$ except at this root. \\eqref{eq:p} has one non-zero root $p_L$, which is a steady-state point according to the approximate trajectory analysis in \\cite{6205590}. We then have $g(p_L)<0$.\n\t\n\t3) If $g(p)=0$ has two non-zero roots $p^{'}_1$ and $p^{'}_2$ for $p\\in(0,1]$, then $g(p)<0$ for $p\\in(0, p^{'}_1)\\cup(p^{'}_2, 1)$ and $g(p)>0$ for $p\\in (p^{'}_1, p^{'}_2)$. In this case, \\eqref{eq:p} may have one non-zero root $p_L\\in(0, p^{'}_1)\\cup(p^{'}_2, 1)$, or three non-zero roots $p_A<p_B<p_L$ with $p_A\\in(0, p^{'}_1)$, $p_B\\in(p^{'}_1, p^{'}_2)$ and $p_L\\in(p^{'}_2, 1)$, among which $p_A$ and $p_L$ are steady-state points according to the approximate trajectory analysis in \\cite{6205590}. We then have $g(p_A)<0$ and $g(p_L)<0$.\n\t\n\tIn all the above scenarios, $g(p)<0$ holds at the steady-state points $p_A$ and $p_L$. According to \\eqref{pvsalpha}--\\eqref{pvslambda}, the partial derivatives of $p$ with respect to $\\xi$, $q$ and $\\lambda$ are all negative when $g(p)<0$, indicating that both $p_A$ and $p_L$ monotonically decrease as $\\xi$, $q$ or $\\lambda$ increases.\n\t\n\t\\section{Proof of Theorem 2}\\label{prooftheorem2}\n\tAccording to \\eqref{eq:p} and \\eqref{eq:PeakAge}, we have\n\t\\begin{equation}\n\t\\frac{\\partial{A_p}}{\\partial{q}}=-\\frac{2}{q^2p}-\\frac{2}{qp^2}\\frac{\\partial{p}}{\\partial{q}}.\n\t\\end{equation}\n\tWe then have $\\lim_{q\\to 0}\\frac{\\partial{A_p}}{\\partial{q}}<0$. When\n\t\\begin{equation}\n\t\\lambda c R^2>\\frac{(\\xi+p_{*}(1-\\xi))^2}{\\xi^2+p_{*}\\xi(1-\\xi)}=1+\\frac{p_{*}(1-\\xi)}{\\xi},\n\t\\end{equation}\n\twhere $p_{*}$ denotes the non-zero root of \\eqref{eq:p} at $q=1$,\n\twe have $\\lim_{q\\to 1}\\frac{\\partial{A_p}}{\\partial{q}}>0$.\n\tThe peak AoI $A_p$ can then be optimized when $q \\in (0,1)$.
By combining $\\frac{\\partial{A_p}}{\\partial{q}}=0$ and \\eqref{eq:p}, the optimal channel access probability $q$ can be obtained as\n\t\\begin{equation}\n\tq=\\frac{1}{\\lambda cR^2-\\exp{\\left\\{-\\theta R^\\alpha\\gamma^{-1}-1\\right\\}}\\frac{1-\\xi}{\\xi}}.\n\t\\label{eq:accessrate}\n\t\\end{equation}\n\tThe optimal peak AoI can be obtained by substituting \\eqref{eq:accessrate} into \\eqref{eq:PeakAge}.\n\t\n\tWhen $\\lambda c R^2\\leq 1+\\frac{p_{*}(1-\\xi)}{\\xi}$, on the other hand, the optimal channel access probability is given by $q=1$, and the corresponding optimal peak AoI can be obtained by combining $q=1$ and \\eqref{eq:PeakAge}.\n\t\n\t\\section{Proof of Theorem \\ref{Theorem_OPtimalAlpha}}\\label{prooftheorem3}\n\tAccording to \\eqref{eq:p} and \\eqref{eq:PeakAge}, we have\n\t\\begin{equation}\\label{eq:opalpha}\n\t\\frac{\\partial{A_p}}{\\partial{\\xi}}=-\\left(\\frac{1}{\\xi^2}+\\frac{2\\frac{\\partial{p}}{\\partial{\\xi}}}{q p^2}\\right)=-\\frac{1}{\\xi^2}-\\frac{2\\lambda cR^2\\frac{1}{q}}{\\lambda cR^2 p \\xi(1-\\xi)-(\\frac{\\xi}{q}+p(1-\\xi))^2}.\n\t\\end{equation}\n\tWe then have\n\t\\begin{equation}\n\t\\lim_{\\xi \\to 0}\\frac{\\partial{A_p}}{\\partial{\\xi}}<0\n\t\\end{equation}\n\tand\n\t\\begin{equation}\n\t\\lim_{\\xi \\to 1}\\frac{\\partial{A_p}}{\\partial{\\xi}}=2\\lambda c R^2 q-1.\n\t\\end{equation}\n\tWhen\n\t$\\lambda c R^2>\\frac{1}{2 q}$, we have $\\lim_{\\xi \\to 1}\\frac{\\partial{A_p}}{\\partial{\\xi}}>0$.\n\tThe peak AoI $A_p$ can then be optimized when $\\xi \\in (0,1)$.
By combining $\\frac{\\partial{A_p}}{\\partial{\\xi}}=0$ and \\eqref{eq:p}, the optimal packet arrival rate $\\xi$ can be obtained as\n\t\\begin{equation}\n\t\\label{eq:arrivalrate}\n\t\\xi=\\frac{2q \\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}}{q\\lambda cR^2\\left(\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1\\right)+2q\\exp{\\left\\{-\\frac{2}{\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1}-\\theta R^\\alpha \\gamma^{-1}\\right\\}}-2}.\n\t\\end{equation}\n\tThe optimal peak AoI can be obtained by substituting \\eqref{eq:arrivalrate} into \\eqref{eq:PeakAge}.\n\t\n\tWhen $\\lambda cR^2\\leq\\frac{1}{2q}$, on the other hand, the optimal packet arrival rate is given by $\\xi=1$, and the corresponding optimal peak AoI can be obtained by combining $\\xi=1$ and \\eqref{eq:PeakAge}.\n\t\n\t\\section{Proof of Theorem \\ref{Theorem_OPtimalqAlphaboth}}\\label{prooftheorem4}\n\tWe define\n\t\\begin{equation}\n\tA_p^{*}=\\underset{\\{q\\}}{\\min}~A_{p}^{\\xi=\\xi_q^{*}}.\n\t\\end{equation}\n\tFrom Theorem \\ref{Theorem_OPtimalAlpha}, for $q<\\frac{1}{2\\lambda c R^2}$, we have\n\t\\begin{equation}\n\t\\frac{\\mathrm{d}{A_{p}^{\\xi=\\xi_q^{*}}}}{\\mathrm{d}{q}}=\\frac{2(\\lambda cR^2-\\frac{1}{q})}{q}\\exp{\\left\\{\\lambda cR^2q+\\theta R^\\alpha \\gamma^{-1}\\right\\}}.\n\t\\end{equation}\n\tAs $q<\\frac{1}{2\\lambda c R^2}$, we have $\\frac{\\mathrm{d}{A_{p}^{\\xi=\\xi_q^{*}}}}{\\mathrm{d}{q}}<0$, which means that $A_{p}^{\\xi=\\xi^{*}_{q}}$ decreases monotonically when $q<\\frac{1}{2\\lambda c R^2}$.\n\tOn the other hand, for $q>\\frac{1}{2\\lambda c R^2}$, we denote $k=-\\frac{2}{\\sqrt{1+\\frac{4}{q\\lambda cR^2}}+1}$, $k\\in(-1,-\\frac{1}{2})$, then\n\t\\begin{equation}\n\tA_{p}^{\\xi=\\xi_q^{*}}=\\frac{1}{qe^{k}e^{-\\theta R^\\alpha \\gamma^{-1}}(k+1)}.\n\t\\end{equation}\n\tNoting that $q\\lambda cR^2=\\frac{k^2}{1+k}$, we can rewrite $A_{p}^{\\xi=\\xi_q^{*}}=\\lambda cR^2 e^{\\theta R^\\alpha \\gamma^{-1}}\\frac{e^{-k}}{k^2}$, which increases with $k$ for $k\\in(-1,-\\frac{1}{2})$. Since $k$ decreases with the increase of $q$,
$A_{p}^{\\xi=\\xi_q^{*}}$ decrease monotonically with the increase of $q$ when $q>\\frac{1}{2\\lambda c R^2}$.\n\t\n\tMoreover, when $\\lambda cR^2=\\frac{1}{2q}$, $A_{p}^{\\xi_q^{*}=1}=A_{p}^{\\xi_q^{*}<1}$, which means $A_{p}^{\\xi=\\xi_q^{*}}$ is a coutinuous function for $q$, thus $A_{p}^{\\xi=\\xi_q^{*}}$ decrease monotonically for $q\\in (0,1]$.\n\t\n\tTherefore, $A_p^{*}=A_{p}^{\\xi=\\xi_q^{*}}$ when $q=1$ and Theorem \\ref{Theorem_OPtimalqAlphaboth} can be obtained by combining $q=1$ and Theorem \\ref{Theorem_OPtimalAlpha}.\n\t\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\nIncreasing demand for high data rate and live video streaming in cellular networks has attracted researchers' attention to cache-enabled cellular network architectures \\cite{ref1}. These networks exploit D2D communications as a promising technology of 5G heterogeneous networks, for cellular video distribution. In a cellular content delivery network assisted by D2D communications and similarly in peer-assisted networks \\cite{Nasreen}, user devices can capture their desired contents either via cellular infrastructure or via D2D links from other devices in their vicinity. Recently, several studies in both content placement policies and delivery strategies are conducted to minimize the downloading time, and to maximize the overall network throughput in terms of rate and area spectral efficiency. 
From the content placement point of view, contents can be placed on collaborative nodes beforehand, either according to a predefined caching policy (reactive caching) \\cite{ref2}, or more intelligently, according to statistics of user devices' interests (proactive caching) \\cite{ref3}.\nThe theoretical bounds for D2D caching networks proposed in \\cite{Theoretical2} indicate that caching the most popular contents in users' devices is optimal in almost all system regimes.\nCross-layer resource allocation methods are also investigated for supporting video over wireless in multiuser scenarios \\cite{ref5}. It is shown that quality-aware resource allocation can improve video services in wireless networks. However, the conventional architectures of content delivery in both wireless cellular and D2D networks are based on half-duplex (HD) transmission and, to the best of our knowledge, full-duplex (FD) capability and its advantages have not yet been investigated in either wireless cellular video distribution or D2D caching systems.\nRecent advances in FD radio design \\cite{ref6}, materialized by advanced signal processing techniques that can suppress self-interference (SI) at the receiver, have enabled simultaneous transmission and reception over the same frequency band. From a theoretical point of view, FD communication can potentially double the spectral efficiency of a point-to-point communication link, provided that SI is entirely canceled.\nIn this paper, we propose an FD-based scheme for D2D wireless video distribution. Details along with the main contributions are as follows:\n \n\\begin{itemize}\n\\item The proposed scheme has been investigated in two different scenarios: a) user devices operate in bidirectional FD mode, in which two users can exchange data simultaneously at the same frequency, and b) user devices can concurrently transmit to and receive data from two different nodes at the same frequency,
i.e., an intermediate node can receive its desired content from one node and simultaneously serve another user's demand at the same frequency.\n\n\\item We have analyzed throughput and delay in both scenarios and compared them against conventional HD systems.\n\n\\item In contrast with the works in the literature \\cite{ref2}, where only one active node per cluster is considered, we consider D2D communication among multiple nodes in our proposed scheme. \n\\item We have derived closed-form expressions for FD\/HD-D2D collaboration probabilities, which were previously obtained by numerical evaluations in \\cite{MyIET}. \n\n\\end{itemize}\nThe remainder of the paper is structured as follows. In Section II, the system model is introduced. In Section III, throughput analysis for the proposed FD-enabled cellular system is provided. Simulation results are explained in Section IV and conclusions are presented in Section V.\n\n\\section{System model}\nWe consider a cellular network with a single cell, one base station (BS) and $n$ users randomly distributed according to a uniform distribution (Fig. 1(a)). Assuming that inter-cell interference is negligible or canceled out, the analysis can be extended to multi-cell scenarios. We divide the whole cell area into logical, equally sized square clusters (Fig. 1(a)) and, for the sake of simplicity, neglect co-channel interference and the influence of neighboring cells' users. We consider an in-band overlay spectrum access strategy for D2D communications \\cite{in-band}. Thus, there is no interference between cellular and D2D communications. All D2D communications are under full control of the BS. We also assume that SI cancellation allows the FD radios to transmit and receive simultaneously over the same frequency band. However, since all D2D pairs in all clusters share the same resource blocks, inter- and intra-cluster interference is taken into account.
\n\\begin{figure}[t]\n\t\\centering\n\t\\subfloat[Square cell with equal-sized clusters within the cell]{\t\t\\includegraphics[clip, width=0.2 \\textwidth]{SysModel-eps-converted-to.pdf}}\n\t\\subfloat[Schematic of D2D communications]{\n\t\t\\includegraphics[clip, width=0.23 \\textwidth]{NewRandomGraph.pdf}\\quad\\quad\n\t} \\caption{System model and D2D communications graph}\n\\end{figure}\n\n\n\n\nDenote the set of popular video files as $\\bm{V} = \\{ {v_1},{v_2},...,{v_m}\\}$ with size $m$. We use the Zipf distribution for modeling the popularity of video files; thus, the popularity of the cached video file $v_s$ in user $u_{\\omega}$, denoted by ${f_{\\omega s}}$, is inversely proportional to its rank, i.e., $f_{\\omega s}=\\left( s^{\\gamma_r} \\sum\\nolimits_{g=1}^{m} g^{-\\gamma_r} \\right)^{-1}, \\begin{array}{*{20}{c}}\n{}&{1 \\le s \\le m}\n\\end{array}$. The Zipf exponent $\\gamma _r$ characterizes the distribution by controlling the popularity of files for a given library size $m$. Contents are placed in users' caches in advance according to a caching policy in which each user with a considerable storage capacity can cache a subset of files $\\bm{F_{\\ell} \\subset V}$ from the library, i.e., $\\bm{F_{\\ell}} = \\{ {f_{\\ell1}},{f_{\\ell2}},...,{f_{\\ell h}}\\}$, $h \\le m$. We assume that there is no overlap between users' caches, i.e., ${\\bm{F}_p}\\mathop \\cap \\limits_{p \\ne q} {\\bm{F}_q} = \\phi $. Each user randomly requests a video file from the library according to the Zipf distribution. Technically, to schedule and establish a D2D connection, necessary signaling messages need to be exchanged between D2D pairs and the BS \\cite{D2DDiscovery}. However, the signaling mechanisms do not affect our analysis in this work.
Hence, we adopt the protocol model of \\cite{ProtocolModel} to set up D2D communications, which is based on a distance threshold; a pair of users\/devices $(u_i, u_j)$ can potentially initiate a D2D communication for video file transfer provided that the distance between $u_i$ and $u_j$ is less than a threshold ($l$ in Fig. 1(a)) and one of them finds its desired video file in the other device.\nFig. 1(b) illustrates the schematic of typical D2D communication graphs inside a cluster. Each user generates a random request according to the Zipf distribution. The BS is assumed to be aware of all contents in the users' caches. We define a directed edge from $u_i$ pointing to $u_j$ if the user $u_j$ requests a file that has been previously cached by $u_i$. Since we assume that each user can make only one request (as shown in Fig. 1(b)), there will be at most one incoming link to a user node and one or multiple outgoing links from it. In this system, no data is relayed over multiple hops, which means any transmission(s) from one node to another node(s) corresponds to delivering a different video content. It is also possible that some users demand the same video content which is previously cached by one user. For instance, the users in set $\\bm{Z}$ demand the same video content from user $u_6$ (Fig. 1(b)). The number of users in set $\\bm{Z}$ depends on the popularity of the video content which is desired by these users. As can be seen in Fig. 1(b), there are two different possible configurations for FD collaboration: i) bi-directional full-duplex (BFD) mode, in which two users exchange their desired video contents, and ii) three node full-duplex (TNFD) mode, in which an intermediate node can receive its desired video content from one node and simultaneously serve another user's (or users') demand (see $u_6$ in Fig. 1(b)).\n\n\\section{Analysis}\nBoth analog and digital SI cancellation methods can be used to partially cancel the SI.
However, in practice, it is difficult or even impossible to cancel the SI perfectly. We assume that all users transmit with power ${P_t}$. The SI in FD nodes is assumed to be canceled imperfectly with residual self-interference-to-power ratio $\\beta$ and hence, the residual SI is $\\beta P_t$. The parameter $\\beta$ denotes the amount of SI cancellation, and $10{\\log _{10}}\\beta$ is the SI cancellation in dB. When $\\beta = 0$, there is perfect SI cancellation, while for $\\beta = 1$, there is no SI cancellation. Thus, the signal-to-interference-plus-noise ratio (SINR) at receiver $u_j$ due to the transmitted signal from $u_i$ can be written as \n\n\\begin{equation}\n\\label{SINR formula}\n\\textup{SINR}_{i \\to j} = \\frac{{{P_t}{h_{ij}}d_{ij}^{ - \\alpha }}}{{{\\sigma ^2} + \\sum\\nolimits_{z \\in \\Phi \\backslash \\{ i\\} } {{P_t}{h_{zj}}d_{zj}^{ - \\alpha }} + \\chi \\beta {P_t}}},\n\\end{equation}\nwhere ${\\sum\\nolimits_{z \\in \\Phi \\backslash \\{ i\\} } {{P_t}{h_{zj}}d_{zj}^{ - \\alpha }} }$ is the total inter- and intra-cluster interference due to the nodes in set $\\Phi$, which is the set of concurrently transmitting nodes. The backslash in eq. (\\ref{SINR formula}) indicates that node $u_i$ is excluded from the set of transmitters. ${{h_{ij}}}$ and ${{h_{zj}}}$ are the fading power coefficients, with exponential distribution of mean one, corresponding to the channels from transmitter $u_i$ and interferer $u_z$ to receiver $u_j$, respectively. $d_{ij}$ denotes the Euclidean distance between transmitter $u_i$ and receiver $u_j$ inside the cluster. $\\alpha$ is the path loss exponent. A white Gaussian noise with power ${{\\sigma ^2}}$ is added to the received signal.
$\\chi$ denotes the collaboration mode; $\\chi=0$ when user $u_i$ operates in HD mode, and $\\chi=1$ when it operates in FD mode.\n\n\\subsection{Collaboration Probability}\nFor given $k$ users which randomly fall inside a cluster and a given number $h$ of cached contents for each user inside the random cluster $c$, we define the popularity of cached contents within the cluster as ${{\\rho}_{c}} = \\sum\\nolimits_{i = 1}^k {{\\rho_{{u_i}}}}$, where $\\rho_{u_i} = \\sum\\nolimits_{{f_{is}} \\in {F_i }} {{f_{is}}} $ is the popularity of the contents cached by user $u_i$. For the $i$th user, $u_i$, we define two parameters $P_{ai}$ and $P_{bi}$ as follows; $P_{ai}$: the probability that $u_i$ cannot find its desired content within the cluster. $P_{bi}$: the probability that user $u_i$ can serve other users' requests. Since all requests are independent and identically distributed (i.i.d.) at each user, given $k$ users inside the cluster, the probability that $u_i$ operates in HD mode is\n\\begin{equation}\n\\label{PHD ui|k}\nP_{{u_i}|k}^{\\textup{HD}} = {P_{ai}}{P_{bi}}.\n\\end{equation}\nSimilarly, the probability that $u_i$ operates in FD mode is\n\\begin{equation}\n\\label{PFD ui|k}\nP_{{u_i}|k}^{\\textup{FD}} = {(1-P_{ai})}{P_{bi}}.\n\\end{equation}\nHowever, the probability of making HD-D2D and FD-D2D connections depends on the parameter $k$. The probability that $u_i$ can collaborate in HD or FD mode is\n\\begin{equation}\n\\label{P_ui^delta}\n\\mathcal{P}_{{u_i}}^{\\delta} = \\sum\\limits_{k = 0}^n {P_{{u_i}|k}^{\\delta}\\Pr [K=k]}, \n\\end{equation} \nwhere $\\delta \\in \\{ \\textup{HD}, \\textup{FD}\\}$ is the operation mode and $\\Pr [K=k]$ is the probability that there are $k$ users in the cluster.
Since the distribution of users is assumed to be uniform within the cell area, the number of users in the cluster is a binomial random variable with parameters $n$ and $\\frac{{{l^2}}}{{2{a^2}}}$, i.e., $K = B(n,\\frac{{{l^2}}}{{2{a^2}}})$, where ${\\frac{{{l^2}}}{{2{a^2}}}}$ is the ratio of the cluster area to the cell area. Hence, the probability that $k$ users fall inside the cluster is\n\\begin{equation}\n\\label{Pr[K=k]}\n\\Pr [K = k] = \\left( {\\begin{array}{*{20}{c}}\n\tn\\\\\n\tk\n\t\\end{array}} \\right){\\left( {\\frac{{{l^2}}}{{2{a^2}}}} \\right)^k}{\\left( {1 - \\frac{{{l^2}}}{{2{a^2}}}} \\right)^{n - k}}.\n\\end{equation}\nThe probability that $u_i$ can find its desired file inside the cluster but not in its own cache (i.e., we exclude self-request\\footnote{self-request takes place when the user finds its desired file in its own cache.} from user $u_i$) is ${\\rho_{c}} - {\\rho_{{u_i}}}$, and hence\n\\begin{equation}\n\\label{Pai}\n{P_{ai}} = 1-\\left({\\rho_{c}} - {\\rho_{{u_i}}}\\right).\n\\end{equation}\nWe define $Q_{{u_i}}(x)$, which determines the probability that $u_i$ can serve $x$ users' requests inside the cluster. The number of users demanding a content which is cached by $u_i$ is a binomial random variable with parameters $k-1$ and $\\rho_{u_i}$, i.e.,\n\\begin{equation}\n\\label{Pserve(x)}\nQ_{{u_i}}(x) = \\left( {\\begin{array}{*{20}{c}}\n\t{k - 1}\\\\\n\tx\n\t\\end{array}} \\right){\\left( {{\\rho_{{u_i}}}} \\right)^x}{\\left( {1 - {\\rho_{{u_i}}}} \\right)^{k - 1 - x}}. \\begin{array}{*{20}{c}}\n{}&{k \\ge 2}\n\\end{array}\n\\end{equation} \nIt is clear that for $k < 2$, $Q_{{u_i}}(x)=0$, which implies that there is no user demand for the contents cached by $u_i$. Finally, $P_{bi}$ can be written as\n\\begin{equation}\n\\label{Pbi}\n{P_{bi}} = \\sum\\limits_{x = 1}^{k - 1} {Q_{{u_i}}(x)}.\n\\end{equation}\nBy substituting eqs. (\\ref{Pai}, \\ref{Pbi}) in eqs. (\\ref{PHD ui|k}, \\ref{PFD ui|k}), and eqs. (\\ref{PHD ui|k}, \\ref{PFD ui|k}) in eq.
(\\ref{P_ui^delta}), respectively, we get the final mathematical expressions for the HD and FD collaboration probabilities: \n\\begin{align}\n\\label{HD final expression}\n\\mathcal{P}_{{u_i}}^{\\textup{HD}} = \\sum\\limits_{k = 0}^n {\\left( {\\left( {1 - \\left( {{\\rho_{c}} - {\\rho _{{u_i}}}} \\right)} \\right)\\sum\\limits_{x = 1}^{k - 1} {Q_{{u_i}}(x)} } \\right)} \\Pr [K = k].\n\\end{align}\n\\begin{equation}\n\\label{FD final expression}\n\\mathcal{P}_{{u_i}}^{\\textup{FD}} = \\sum\\limits_{k = 0}^n {\\left( {\\left( {{\\rho_{c}} - {\\rho_{{u_i}}}} \\right)\\sum\\limits_{x = 1}^{k - 1} {Q_{{u_i}}(x)} } \\right)} \\Pr [K = k].\n\\end{equation}\nDenoting by $\\mathcal{P}_{{u_i}}^{\\textup{self}}$ the probability that user $u_i$ finds its desired content in its own cache, and substituting eqs. (\\ref{HD final expression}) and (\\ref{FD final expression}) in $\\mathcal{P}_{{u_i}}^{\\textup{FD}} + \\mathcal{P}_{{u_i}}^{\\textup{HD}} + \\mathcal{P}_{{u_i}}^{\\textup{self}} = 1$, the probability that node $u_i$ demands a file which is cached by itself is \n\\begin{equation}\n\\mathcal{P}_{{u_i}}^{\\textup{self}} = 1 - \\sum\\limits_{k = 0}^n {\\left( {\\sum\\limits_{x = 1}^{k - 1} {Q_{{u_i}}(x)}} \\right)} \\Pr [K = k].\n\\end{equation}\n\n\n\\subsection{Throughput Analysis}\nWe focus on a typical random cluster $c$ (representative cluster) and derive the system sum throughput for this cluster. We obtain the ergodic capacity of the link associated with the D2D pair ($u_i$,$u_j$), which is defined by ${C_{i \\to j}} = WE[{\\log _2}(1 + \\textup{SINR}_{i \\to j})]$, where $W$ is the bandwidth for the D2D link.
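The closed-form collaboration probabilities of eqs. (\ref{HD final expression}) and (\ref{FD final expression}) can be sanity-checked with a short numerical sketch: their sum must equal the probability that $u_i$ serves at least one request, with the HD/FD split governed by whether $u_i$ finds its own desired file in the cluster. The number of users, the area ratio and the popularity values below are illustrative assumptions, not the simulation parameters of Section IV.

```python
from math import comb

# Toy numerical check of the collaboration probabilities.
# n, area_ratio, rho_ui, rho_c are illustrative assumptions only.
n, area_ratio = 50, 0.05           # users in the cell, cluster-to-cell ratio
rho_ui, rho_c = 0.10, 0.55         # popularity of u_i's cache / whole cluster

def pr_K(k):                       # binomial cluster occupancy
    return comb(n, k) * area_ratio**k * (1 - area_ratio)**(n - k)

def P_bi(k):                       # u_i serves at least one request
    return sum(comb(k - 1, x) * rho_ui**x * (1 - rho_ui)**(k - 1 - x)
               for x in range(1, k))

P_find = rho_c - rho_ui            # u_i finds its file in another user's cache
P_HD = sum((1 - P_find) * P_bi(k) * pr_K(k) for k in range(n + 1))
P_FD = sum(P_find * P_bi(k) * pr_K(k) for k in range(n + 1))
P_serve = sum(P_bi(k) * pr_K(k) for k in range(n + 1))

# HD and FD events partition the event that u_i serves at least one request
assert abs(P_HD + P_FD - P_serve) < 1e-12
assert 0 < P_FD < P_HD < 1         # here P_find < 0.5, so HD dominates
```

Increasing $\rho_c-\rho_{u_i}$ (richer cluster caches) shifts mass from the HD to the FD probability, which is the qualitative trend reported in the numerical evaluations.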
For the wireless D2D network described in Section II, the expected value of the throughput of the system due to establishing node $u_i$ in $\\delta$ mode can be written as\n\\begin{equation}\nT_{{u_i}}^{\\delta} = \\mathcal{P}_{{u_i}}^{\\delta}C_{{u_i}}^{\\delta},\n\\end{equation}\nwhere $\\mathcal{P}_{{u_i}}^{\\delta}$ is the collaboration probability for mode $\\delta$, which is derived in equations (\\ref{HD final expression}) and (\\ref{FD final expression}), and $C_{{u_i}}^{\\delta}$ is the achievable capacity of establishing node $u_i$ in $\\delta$ mode, which can be calculated as\n\\begin{equation}\n\\label{C HD ui}\nC_{{u_i}}^{\\textup{HD}} = \\sum\\limits_{{u_j} \\in A} {WE[{{\\log }_2}(1 + \\textup{SINR}_{i \\to j})]},\n\\end{equation}\n\\begin{align}\n\\label{C FD ui}\nC_{{u_i}}^{\\textup{FD}} =& WE[{\\log _2}(1 + \\textup{SINR}_{o \\to i})] \\notag\\\\&+ \\sum\\limits_{{u_{j}} \\in B} {WE[{{\\log }_2}(1 + \\textup{SINR}_{i \\to j})]}, \n\\end{align}\nwhere $A$ and $B$ are the sets of users which are connected to $u_i$ in HD and FD modes, respectively. The first term in eq. (\\ref{C FD ui}), i.e., $WE[{\\log _2}(1 + \\textup{SINR}_{o \\to i})]$, determines the ergodic capacity of the link through which $u_i$ receives its desired file in FD mode from $u_o$ (TNFD mode). Denoting the set of established nodes inside the random cluster $c$ by $\\bm{\\Psi} = \\{ {u_1},{u_2},...,{u_{\\tau}}\\}$, the sum throughput of the respective cluster can be written as\n\\begin{equation}\n\\eta _c^\\delta = \\sum\\limits_{{u_i} \\in \\Psi } {T_{{u_i}}^\\delta }.\n\\end{equation}\n\n\n\\subsection {Download Time}\nAs described in Section II, there are two full-duplex collaboration modes: TNFD and BFD. To better understand the concept of download time in HD and FD modes, we use the D2D communication graphs shown in Fig. 1(b).
\n\\subsubsection{TNFD Mode}\nConsider $u_i$, $u_j$ and $\\bm{Z} = \\left\\{ {{u_1},{u_2},...{u_k}} \\right\\}$, in which ${u_i} \\notin \\bm{Z}$ and ${u_j} \\notin \\bm{Z}$. \nFor a typical link between $u_i$ and $u_j$, and assuming that $u_i$ is transmitting video file $v_j$ to $u_j$, the experienced average download time $\\theta_{i \\to j}$ at $u_j$ can be defined as $\\theta_{i \\to j} = \\frac{{{b_{v_j}}}}{{{C_{i \\to j}}}}$, where $b_{v_j}$ is the number of bits of video file $v_j$ and $C_{i \\to j}$ is the achievable ergodic capacity of the transmitting link from $u_i$ to $u_j$. Similarly, for the set $\\bm{Z}$, we have ${\\Theta} = \\{\\theta_{j \\to 1}, \\theta_{j \\to 2},...,\\theta_{j \\to k}\\}$, where ${\\theta_{j \\to k}} = \\frac{{{b_{{v_k}}}}}{{{C_{j \\to k}}}}$. Due to the random distribution of the users' locations, the ergodic capacity for all links associated with all users in set $\\bm{Z}$ is not necessarily the same; hence in general ${\\theta_{j \\to p}} \\ne {\\theta_{j \\to q}}$ for $p \\ne q$. \nSince all users in set $\\bm{Z}$ are demanding the same video content from $u_j$, the total average download time due to one transmission of user $u_j$ can be defined as \n\\begin{equation}\n\\varpi = \\mathop {\\max }\\limits_{1 \\le \\lambda \\le k} ({\\theta _{j \\to \\lambda }}),~{\\theta _{j \\to \\lambda }} \\in \\Theta. \n\\end{equation}\nDenoting $D_{u_j}^{HD}$ and $D_{u_j}^{FD}$ as the total experienced average download times by establishing $u_j$ in HD and FD modes, respectively, we have \n\\begin{equation}\nD_{u_j}^{HD} = {\\theta_{i \\to j}} + \\varpi, \\quad D_{u_j}^{FD} = \\max(\\theta_{i \\to j},\\varpi).\n\\end{equation}\n\\subsubsection{BFD Mode}\nIn this mode, both users (i.e., $u_3$ and $u_4$ in Fig. 1(b)) exchange data simultaneously.
Denoting ${\\theta_{j \\to i}}$ and ${\\theta_{i \\to j}}$ as the experienced download times for $u_i$ and $u_j$, respectively, the total average download time can be calculated as \n\\begin{equation}\nD^{HD} = {\\theta_{j \\to i}} + {\\theta_{i \\to j}}, \\quad D^{FD} = \\max(\\theta_{j \\to i},\\theta_{i \\to j}).\n\\end{equation}\nIn practice, the received and transmitted packets may have different lengths; therefore, the transmissions of the two nodes will not end at the same time.\nDue to such asymmetric data packets at the transmitter and receiver, this situation is referred to as ``the residual hidden node problem''. However, the node that finishes data transmission earlier can resolve this issue by transmitting busy tone signals until the other node completes its transmission \\cite{HidenNodeProblem}. \n\n\\section {Numerical Evaluations}\nIn this section, we provide Monte-Carlo simulations to evaluate the performance of our proposed FD-D2D caching system. We assume a single square cell as shown in Fig. 1(a).\nSimulation parameters are shown in Table 1. The proposed FD scheme is simulated based on the following scenarios:\n\n\\textit{Caching procedure}:\nEach user caches multiple files from the library, according to the caching policy described in Section II. This procedure can be launched in the off-peak hours of the cellular network to avoid traffic load. \n\n\\textit{Delivery procedure}: Users make and send their requests to the BS randomly according to the Zipf distribution and consequently the BS recognizes the users' interests. Moreover, the users' locations are known to the BS in advance via the channel state information (CSI) procedure. Hence, the BS can predict the potential D2D communication graphs (such as in Fig. 1(b)) for all clusters by having knowledge of the users' caches, interests and locations. In each cluster, the BS determines and establishes $\\tau$ nodes associated with the most popular cached contents.
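As a quick toy illustration of the TNFD download-time expressions above, the sketch below compares $D^{HD}_{u_j}=\theta_{i\to j}+\varpi$ against $D^{FD}_{u_j}=\max(\theta_{i\to j},\varpi)$. The file sizes and link capacities are arbitrary illustrative values, not the parameters of Table 1.

```python
# Toy illustration of the TNFD download-time expressions:
#   D_HD = theta_{i->j} + varpi,   D_FD = max(theta_{i->j}, varpi).
# File sizes and capacities are illustrative assumptions only.
b_vj = 40e6 * 8                   # u_j's desired file: 40 MB in bits
C_i_to_j = 10e6                   # ergodic capacity u_i -> u_j, bit/s
theta_ij = b_vj / C_i_to_j        # u_j's own download time, seconds

# Users in Z all request the same file from u_j over links of different quality
b_vk = 20e6 * 8
capacities_Z = [8e6, 12e6, 6e6]   # bit/s, one entry per user in Z
varpi = max(b_vk / c for c in capacities_Z)

D_HD = theta_ij + varpi           # in HD, u_j serves Z only after receiving
D_FD = max(theta_ij, varpi)       # in FD, u_j receives and serves at once

assert D_FD <= D_HD               # FD never increases the total download time
```

In this example $\theta_{i\to j}$ dominates $\varpi$, so full-duplex operation hides the serving time of set $\bm{Z}$ entirely, which is the zero-waiting-time effect discussed in the numerical evaluations.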
Since all D2D communications in all clusters use the same time-frequency resources, inter- and intra-cluster interference is taken into account. Fig. 2 shows the probability that a node inside a cluster is in FD, HD or self-request mode. By increasing the D2D collaboration threshold $l$, the expected number of users inside a cluster increases and, consequently, the expected number of nodes that collaborate in FD mode increases.\nAs can be seen in Fig. 2, the probability that users can find their desired content in their own caches decreases as $l$ increases, because for lower values of $l$, there are few users inside a cluster and these users have previously stored highly popular files. Hence, when the users inside a cluster make requests according to the Zipf distribution, there is a high probability that they request a file that they have previously stored in their own caches.\nIn contrast, as the density of users inside a cluster increases (i.e., for higher values of $l$), the number of self-request users decreases.\nFig. 3 shows the impact of the number of users ($n$) within the cell on the aforementioned probabilities. As can be seen from Fig. 3, the higher the density of users within a cell, the higher the FD collaboration probability. Fig. 4 and Fig. 5 show the total average rate for the FD-D2D and HD-D2D systems. Although the number of clusters increases at lower ranges of $l$ (so we expect the frequency reuse to increase as well), the probability that clusters are of low density, or that no D2D candidates are found therein, also increases. This reflects the fact that the probability of finding a user's desired file inside the cluster decreases when the node density decreases. As the number of clusters in the cell decreases, the frequency reuse decreases too. However, the probability that a user can find its desired file inside the cluster increases and hence the probability of making a D2D communication increases.
\nThe impact of the parameter $\\tau$ (Fig. 5) demonstrates that incorporating FD-enabled nodes, with multiple nodes established inside a cluster, can improve the average gain in sum throughput by increasing the number of active D2D links. Alongside the considerable improvement in system sum throughput, the gain in frequency reuse of the FD-enabled system is more accentuated for higher values of $l$. Fig. 6 illustrates the total average download time versus $l$. As discussed in Section II, there are three possible ways for users to access their desired file: through the conventional cellular infrastructure, via D2D collaboration, or by self-request. We define the download time as the delay incurred in downloading a file, i.e., the time between the user sending the request and completely receiving the file. The download time in the self-request case is zero, so we exclude this case from the download-time calculations. In the proposed FD-D2D system, each D2D receiver can download its desired file with zero waiting time. Fig. 
6 shows the major impact of FD collaboration on decreasing the latency in downloading video files.\n\\begin{table}[b]\n\t\\centering\n\t\\caption {Simulation parameters} \n\t\\centering\n\t\\begin{tabular}{|l|l|}\n\t\t\\hline\n\t\tParameter & Values\\\\\n\t\t\\hline \\hline\n\t\t$\\bm{V}$ & Video Content Library\\\\\n\t\t$v_i$ & $i$th video content in library\\\\\n\t\tNumber of users ($n$) & [10 1000]\\\\\n\t\tCached contents per node ($h$) & 1, 3, 5\\\\\n\t\tSize of library ($m$) & 1000\\\\\n\t\tZipf exponent ($\\gamma_r$) &1, 1.6\\\\\n\t\tSI cancellation factor ($10{\\log _{10}}\\beta$) & -70 dB\\\\\n\t\tNumber of established nodes ($\\tau$) & 1, 2, 3\\\\\n\t\tD2D link bandwidth ($W$) & 1.2 MHz\\\\\n\t\tBackground noise (${\\sigma ^2}$) & -174 dBm\/Hz\\\\\n\t\tPath loss exponent ($\\alpha$) & 2.6\\\\\n\t\tSize of files & [5 50] MB\\\\\n\t\tUser transmit power ($P_t$) & 23 dBm\\\\\n\t\tCell size ($a$) & 1 km \\\\\n\t\tLog-normal shadow fading & 4 dB standard deviation\\\\\n\t\tMonte-Carlo iterations & 1000\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{notations}\n\\end{table}\n \n\\begin{figure}[t]{\\vspace{+3mm}}\n\t\\centering\n\t\\includegraphics[width=0.37 \\textwidth]{Coll_Probe1-eps-converted-to.pdf}{\\vspace{0mm}}\n\t\\caption{Collaboration Probability versus $l$ for $n=500$ and $h=1$.}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.36 \\textwidth]{Coll_Probe2-eps-converted-to.pdf}{\\vspace{0mm}}\n\t\\caption{Collaboration Probability versus $n$ for $\\gamma_r=1.6$, $h=5$ and $l=0.2$.}\n\\end{figure}\n\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=0.38 \\textwidth]{AveRate-eps-converted-to.pdf}{\\vspace{-3mm}}\n\t\\caption{Average rate versus $l$ for $n=500$ and $h=1$ and $\\tau=1$.}\n\\end{figure}\n\n\n\n\\section {CONCLUSION}\nIn this paper, we used full duplex radios on user devices to increase the throughput of video caching in cellular systems with D2D collaboration. 
We investigated FD-enabled networks by enabling FD radios only for D2D communications. Simulation results show that the achievable throughput gain can increase even under high intra- and inter-cluster interference conditions. We also showed that allowing full duplex collaboration can have a major effect on the quality of video content distribution by reducing the download time compared to HD-only collaboration.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=0.36 \\textwidth]{R_tau-eps-converted-to.pdf}{\\vspace{0mm}}\n\t\\caption{Average rate versus $l$ for $h=3$, $\\gamma_r=1$ and $n=1000$.}\n\\end{figure} \n\n\n\n\\begin{figure}[h]\\vspace{0mm}\n\t\\centering\n\t\\includegraphics[width=0.38 \\textwidth]{CustomDLTime-eps-converted-to.pdf}{\\vspace{0mm}}\n\t\\caption{Total average download time versus $l$ for $h=1$ and $\\tau=1$.}\n\\end{figure}\n\n\\bibliographystyle{IEEEtran}