\\section{Introduction}\n\\label{Sec:Intro}\n{ASIPs} (\\emph{Application Specific Instruction-set Processors}) play a central role in contemporary embedded systems-on-a-chip (SoCs), replacing hardwired solutions which offer no programmability for enabling reuse or accommodating late specification changes. ASIPs are tuned for cost-effective execution of targeted application sets. An ASIP design flow involves profiling, architecture exploration, generation\/selection of instruction-set extensions (ISEs) and synthesis of the corresponding hardware, while enabling the user to take certain decisions. \n\nCustom processors either adhere to the configurable\/extensible processor paradigm \\cite{ARC,Gonzalez00} or can be ASIPs designed completely from scratch. Configurability lies in tuning architectural parameters (e.g. cache sizes) and enabling\/disabling features \\cite{Yiannacouras06}, while extensibility comes from modifying the instruction set architecture by adding forms of custom functionality. Designing a custom processor from scratch is a more aggressive approach, requiring a significant investment of effort in developing all the necessary software development tools (compiler, binary utilities, debugger\/simulator) and possibly a real-time OS; in the configurable processor case, the RTOS is usually targeted to the base ISA and the software toolchain can be incrementally updated. There exist two basic themes for architecture extension: tight integration of custom functional units and storage \\cite{NiosII} or loose coupling of hardware accelerators \nto the processor through a bus interface \\cite{MicroBlaze}. 
Recent works \\cite{Sirowy07} \nadvocate in favor of both approaches, showing that both techniques can be considered \nsimultaneously by formalizing the problem as a form of two-level partitioning. \n\nIn ASIP\/custom processor design, certain practical issues arising from seemingly invariant elements of the design flow often remain unaddressed:\n\\begin{enumerate}\n\\item[a)] {Assumptions of the intermediate representation (IR) to which the application code is mapped directly affect solution quality, as in the case of ISE synthesis.} \n\\item[b)] {The exploration infrastructure is tied to the conventions of the software development tools.} \n\\item[c)] {Adaptability to different compilers\/simulators.}\n\\item[d)] {Support for low-level entry for application migration within a processor family and reverse engineering.}\n\\end{enumerate}\n\nIn this paper, all these issues are successfully addressed by integrating custom instruction (CI) generation and selection techniques with a flexible IR infrastructure that can reflect certain designer decisions that are cumbersome to apply otherwise. Our approach is substantiated in the form of the YARDstick prototype tool \\cite{Kavvadias07}. For example, using an IR with intrinsic support for bit-level operations may yield significantly different ISEs from those obtained with an unaugmented IR. Also, in YARDstick it is possible to directly measure the effect of certain machine-dependent compiler transformations, such as register allocation, on the quality and impact of the generated ISEs, an issue recognized but never quantified in other works \\cite{ClarkN03,Castro04}. Further, YARDstick provides profiling facilities for determining static and dynamic application metrics such as data types, memory hierarchy statistics, and execution frequencies. Application entry can be either high-level (e.g. ANSI C) or low-level (assembly code for a target architecture or virtual machine). 
A number of recent custom functionality identification and selection techniques have been implemented, while hardware estimators (speedup, area) and bindings to third-party tools for hardware synthesis from CDFGs are provided. \n\nIt is important to note that the interpretation of custom functionalities depends on the context; they can represent instruction-set extensions (ISEs) to a baseline ISA, which must be accounted for in the control path of the processor (decoding logic, extending the interrupt services), custom instructions of an ASIP enabled by a programmable controller, or hardwired functions meant to be used as non-programmable hardware accelerators, loosely connected to the processor (i.e. accessible through the local bus).\n\n\\section{Related work}\n\\label{Sec:RelatedWork}\nIn recent years, a number of research efforts have addressed the automated application-specific \nextension of embedded processors \\cite{Alippi99,Yu04a,ClarkN05,Goodwin03,Pozzi06,Biswas07,Pothineni07}. \nA few open instruction generation frameworks exist \\cite{Pattlib}; an \nadvantage of this work is that it delivers a format for storing, manipulating and \nexchanging instruction patterns. In order to use their pattern library (Pattlib), the \npotential user must adapt their compiler to generate and manipulate patterns in \nthe cumbersome GCC RTL (Register Transfer Language) \\cite{GCC} intermediate representation. Some issues with the Pattlib approach concern the significant effort required for adapting the GCC compiler to emit information in ``pattlib'' format, and the fact that the IR for their selected backend (SPARC V8) is not architecture-neutral and cannot be easily altered. \n \nApplication-specific instructions have been generated for the Xtensa configurable \nprocessor \\cite{Goodwin03} that may comprise VLIW (Very Long Instruction Word), SIMD \n(Single-Instruction Multiple-Data) or fused (chained) RTL operations. 
However, as dictated by the architecture template of Xtensa, control-transfer instructions ({\\it cti}) cannot be included in the resulting complex instructions. A sophisticated framework for the design of tightly-coupled custom coprocessing datapaths and their integration to existing processors has been presented in \\cite{ClarkN05}. While providing a complete solution to programmable acceleration, their work still has some drawbacks: the possibility \nof direct communication to fast local data memory is excluded and, for this reason, \nbeneficial addressing modes cannot be identified. In \\cite{Atasu03,Pozzi06} a multi-output \ninstruction generation algorithm is presented which selects maximal-speedup convex \nsubgraphs for each basic block data-dependence graph (DDG), with worst-case exponential \ncomplexity, while \\cite{Yu04a} added path profiling to extend beyond basic block scope.\nAn important conclusion was that the useful instruction identification scope does not extend \nfurther than 2 or 3 consecutive basic blocks. Still, memory operations are not considered \nin the formation of custom instructions, while pattern identification can only take \nplace after register allocation.\n\n\\section{YARDstick}\n\\label{Sec:YARDstick}\nThe main role of YARDstick is to facilitate design space exploration (DSE) in heterogeneous flows for ASIP design, where the development tools (compiler, binary utilities, simulator\/debugger) in many cases lack DSE capabilities and\/or have been designed with different interfaces in mind. 
Thus, significant development effort is often required for adding features as afterthoughts and for dealing with interoperability issues, especially at the {\\it compiler} and {\\it simulator} boundaries.\n\n\\subsection{The YARDstick kernel}\n\\label{Sec:YARDstickKernel}\nThe current YARDstick infrastructure, as illustrated in Fig.~\\ref{Fig:1}, comprises three kernel components ({\\it libByoX, libPatCUTE, libmachine}), the target architecture specification tools (the BXIR frontend) and a set of backends for exporting control-flow graphs, basic blocks and custom instructions for visualization, simulation and RTL synthesis purposes. {\\it libByoX} and {\\it libPatCUTE} are target-independent, and only {\\it libmachine} has to be retargeted for different IR specifications.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=8.0cm]{fig1-c.eps}\n \\caption{The YARDstick infrastructure.}\n \\label{Fig:1}\n \\vspace{-0.25cm}\n\\end{figure}\n\n\\subsubsection{{\\it libByoX}}\n\\label{Sec:libbyox}\n{\\it libByoX} implements the core YARDstick API and provides frontends\/manipulators for internal data structures. \nThe ByoX (Bring Your Own Compiler and Simulator) library provides:\n\\begin{itemize}\n\\item {The ISeqinfo parser for ISeq (historical name for ``Instruction Sequence'') entries. ISeq is a flat CDFG (with\/without SSA) format of application IR that is used for recording the data-dependence graphs of the application basic blocks.}\n\\item {The CFGinfo parser for control-flow graph (CFG) files that attribute the corresponding ISeq files with typed control-flow edges.}\n\\item {A simple file interface for the ISeq and CFG formats as well as for results of compiler analyses, e.g. 
control\/data flow analyses evaluating register liveness and natural loops, which can be passed to ByoX as defined by their corresponding BNFs.}\n\\item {An IR manipulation API for writing external analyses and optimizations.}\n\\item {Parameterization for a template machine context without inherent restrictions on its ISA.}\n\\end{itemize}\n\nIn ISeq, the following application information is recorded:\n\\begin{itemize}\n\\item {The global symbol table.}\n\\item {The procedure list, consisting of data dependence entries, the local symbol table and a statement list per procedure. It is possible to generate different facets of the local symbol table, e.g. a single reference per direction (input or output) for each variable versus allowing multiple definition points for the same variable.}\n\\end{itemize}\n\n\\subsubsection{{\\it libPatCUTE}}\n\\label{Sec:libpatcute}\nFurther, a number of custom instruction generation\/selection methods have been implemented as part of the PatCUTE (Pattern-based Custom UniT Exploration) library. CI generation involves the identification of MIMO (Multiple-Input Multiple-Output) or MISO (Multiple-Input Single-Output) ISeq patterns under user-defined constraints. The CI generation methods available in {\\it libPatCUTE} are:\n\\begin{itemize}\n\\item {MAXMISO \\cite{Alippi99} for identifying maximal subgraphs with a single-output node using a linear complexity algorithm.}\n\\item {MISO exploration under constraints for the maximum number of input\/output operands, and for two types of operation node-related constraints \\cite{Kavvadias05}.} \n\\item {MIMO CI generation. In our case, we do not search for maximal MIMO patterns \\cite{Pothineni07}; instead, we employ a fast heuristic which assumes, similarly to \\cite{Pothineni07}, that the performance gain provided by a pattern $P$ is higher than that of any pattern that is a subgraph of $P$. 
The user could disable the heuristic and apply an exponential complexity algorithm as well.}\n\\end{itemize}\n\nWhen CI generation is invoked, a CI list is constructed from the resulting ISeq patterns, which can be filtered via graph or graph-subgraph isomorphism tests \\cite{VFLib2} during the process of removing redundant cases. A subset of the library can be selected by using either a configurable greedy selector (supporting cycle-gain and cycle-gain per area priority metrics) or a 0-1 knapsack-based one. An important YARDstick characteristic is that CIs can be expressed in ISeq in the same way as application CFGs or subregions thereof; thus, existing data structures and analyses can be reused for further manipulation of the generated CIs. For example, pattern libraries can be imported to YARDstick.\n\n\\subsubsection{{\\it libmachine}}\n\\label{Sec:libmachine}\nThe {\\it libmachine} library is the only core YARDstick component that needs retargeting for a user-defined target architecture. Target architectures are specified in the BXIR (ByoX IR) \nformat which supports semantics for defining global-scope (data types, operation grouping) and operation-level information (operands, interpretation semantics for each IR operator, area\/latency cost for corresponding hardware implementations and cycle timings). 
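The greedy CI selection step mentioned above can be sketched as follows. This is a minimal illustration under an assumed area budget; the candidate record, its fields and the function name are hypothetical, not the actual YARDstick API:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical candidate custom instruction: estimated (frequency-weighted)
// cycle gain and hardware area in multiplier area units (MAU).
struct CICandidate {
    int id;
    double cycle_gain;
    double area;  // assumed > 0
};

// Greedy selection under an area budget, prioritized either by raw
// cycle gain or by cycle gain per unit area.
std::vector<int> select_greedy(std::vector<CICandidate> cands,
                               double area_budget,
                               bool per_area_metric) {
    std::sort(cands.begin(), cands.end(),
              [per_area_metric](const CICandidate& a, const CICandidate& b) {
                  double ka = per_area_metric ? a.cycle_gain / a.area : a.cycle_gain;
                  double kb = per_area_metric ? b.cycle_gain / b.area : b.cycle_gain;
                  return ka > kb;  // highest-priority candidates first
              });
    std::vector<int> picked;
    double used = 0.0;
    for (const auto& c : cands) {
        if (used + c.area <= area_budget) {  // take a candidate if it still fits
            picked.push_back(c.id);
            used += c.area;
        }
    }
    return picked;
}
```

A 0-1 knapsack-based selector would replace this sort-and-scan with a dynamic program over discretized area, trading run time for an optimal subset under the same budget.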
\n\n\\subsubsection{Backend engines}\n\\label{Sec:backends}\nApplication CFGs, basic blocks (BBs) and patterns can be processed by a number of backends for exporting to:\n\\begin{itemize}\n\\item {ANSI C subset code for incorporation into user tools (simulators, validators, etc.).}\n\\item {GDL (VCG) \\cite{VCG} and dot (Graphviz) \\cite{Graphviz} files for visualization.}\n\\item {An extended CDFG \\cite{CDFGtool} format for scheduling and translation to synthesizable VHDL (applicable to BBs and CI patterns).}\n\\item {GGX XML \\cite{AGG} files for algebraic graph transformation.}\n\\end{itemize}\n\n\\subsection{Structure of a YARDstick environment}\n\\label{Sec:YARDstickStructure}\nThe YARDstick kernel can be utilized as an infrastructure for application analysis and exploration of custom functionality extensions. Fig.~\\ref{Fig:2} shows our YARDstick framework, which reuses third-party compilation and simulation tools. The compiler frontend ({\\it gcc} is such an example) accepts input in C\/C++ or other high-level languages of interest. The application program is compiled to a low-level representation that can be expressed as a form of ``assembly'' code after frontend processing, conversion to its internal IR, application of machine-independent optimizations and a set of compiler backend processes, of which only code selection is obligatory. The assembly-level code can then be macro-expanded, instrumented for profiling and converted to ISeq by an appropriate SALTO pass \\cite{SALTO}. This flow assumes that a working SALTO backend library has been ported for the target architecture. Assembly code can be assembled and linked by the target machine binary utilities ({\\it binutils} or equivalent tools) and the resulting ELF executables can be evaluated on an instruction- or cycle-accurate simulator. Alternatively, ISeq files can be generated as compiler IR dumps directly from the compiler for the target machine. 
This is the case for a modified version of Machine-SUIF \\cite{MachSUIF}, for which the basic block profile is automatically obtained by converting the IR to a C subset and executing the low-level C code on a native machine.\n\nAt the simulation boundary, YARDstick expects information on the dynamic profile of the application (basic block execution frequencies, program trace, cache memory access statistics) on a target machine. From within YARDstick, static and dynamic application metrics can be evaluated and visualized. An application analyzer ({\\it iseqtool}) and CI generator ({\\it igensel}) linked to {\\it libByoX} and {\\it libPatCUTE} are used to obtain the application profile and custom instructions, respectively. \n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=8.0cm]{fig2-c.eps}\n \\caption{A high-level look at a YARDstick-based framework.}\n \\label{Fig:2}\n \\vspace{-0.25cm}\n\\end{figure}\n\n\\subsection{Usage of the YARDstick API}\n\\label{Sec:APIUsage}\nThe YARDstick API provides methods for manipulation of ISeq entities and extraction of useful information into internal data structures such as local operand lists, operation-level \nanalysis (e.g. finding zero-predecessor\/-successor instruction nodes) and application of backend processing. Fig.~\\ref{Fig:3} shows an example of API usage for updating the necessary data structures for basic-block-based CI generation. \n\nIn more detail, a basic block ISeq cluster is denoted by `bb'. First, `init\\_opnd\\_library' initializes an empty operand list container, named {\\it Lopnd}, which is updated by calls to the `find\\_{\\it type}\\_opnds' functions, where {\\it type} denotes the operand type and can be one of \\{input,output,cnst\\}. When the unique register operands and constants option is enabled, operands are treated as in SSA form and have a single representation per input and output sublist by applying `collapse\\_to\\_unique\\_opnds'. 
After clearing temporary storage for the best cut to be identified in the specific iteration of the CI generation algorithm, either a MIMO or a MISO-based method can be selected for performing the actual process.\n\n\\begin{figure}[tb]\n\\centering\n{\n{\\footnotesize\n\\begin{verbatim}\nvoid evaluate_bb_ci(ISeq bb)\n{\n ... \n \/\/ Setup operand list\n UIOCList Lopnd = init_opnd_library();\n\n \/\/ Find unique i\/o registers and constants\n find_input_opnds(bb, Lopnd, input_opnds);\n find_output_opnds(bb, Lopnd, output_opnds, \n instr_has_successor);\n find_cnst_opnds(bb, Lopnd, cnst_opnds);\n\n if (unique i\/o instances for operands\/constants)\n collapse_to_unique_opnds();\n \n clear_best_cut();\n \n \/\/ CI generation for the BB\n if (MIMO method)\n MIMO_identification(bb);\n else if (MaxMISO or constrained MISO method)\n MAXMISO_identification(bb);\n}\n\\end{verbatim}\n}\n}\n\\vspace{-0.125cm}\n\\caption{Updating internal data structures for BB-level CI generation.}\n\\label{Fig:3}\n\\end{figure}\n\n\\section{Case studies of design space exploration with YARDstick}\n\\label{Sec:DSE}\nFor proof-of-concept, we have evaluated YARDstick under various scenarios that reflect realistic problems in evaluating and exploring the design space when developing new ASIPs or enhancing customizable architectures. For the case studies we have used three different target architectures: \n\\begin{enumerate}\n\\item [1)] {The SUIFvm IR \\cite{MachSUIF} augmented by a set of incremental extensions to it, called {\\it SUIFvmenh}.}\n\\item [2)] {The {\\it SUIFrmenh} architecture (SUIF `real machine' enhanced) supported by an in-house backend written for Machine-SUIF, that introduces a finite register set of configurable size to {\\it SUIFvmenh}. 
{\\it SUIFrmenh} also resolves type casting (conversion) operations, mapping the {\\it cvt} SUIFvm instructions to the proper instructions accepted by the {\\it SUIFrmenh} backend: zero- and sign-extension, truncation, and {\\it mov} explicitly denoting the source and destination data types.}\n\\item [3)] {The DLX integer subset ({\\it iDLX}), for which the formatted assembly dumps are viewed as a kind of human-readable machine-level IR.}\n\\end{enumerate}\n\nThe target IR architectures are summarized in Table~\\ref{Tab:1}. In the experiments of the following subsections, all control transfer instructions ({\\it beqz, bnez, j, jr, jal, jalr}) were forbidden from CI pattern formation for the {\\it iDLX} IR, while for the SUIFvm-based IRs, branch operations were permitted. The two different forbidden instruction constraint sets were chosen in order to highlight distinct potential requirements and were not meant to be directly contrasted. The DLX-based IR would be a choice when the objective is to optimize pre-existing DLX legacy assembly (or binaries). In contrast, {\\it SUIFvmenh} implements a representative RISC-like IR not restricting the processor template, which is more suitable for developing ASIPs from scratch. 
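The forbidden-instruction constraint used for the {\\it iDLX} IR above amounts to a validity check applied to candidate patterns during generation; a node carrying a forbidden opcode acts as a boundary for pattern growth. A minimal sketch follows, where representing a pattern as a plain opcode list is an illustration, not YARDstick's internal data structure:

```cpp
#include <set>
#include <string>
#include <vector>

// Control-transfer opcodes excluded from CI formation for the iDLX IR,
// as listed in the text.
const std::set<std::string> kForbidden = {"beqz", "bnez", "j", "jr", "jal", "jalr"};

// A candidate pattern is valid only if none of its nodes carries a
// forbidden opcode.
bool pattern_is_valid(const std::vector<std::string>& opcodes) {
    for (const auto& op : opcodes)
        if (kForbidden.count(op))
            return false;
    return true;
}
```

For the SUIFvm-based IRs the set would simply be left empty, since branch operations were permitted there.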
\n\n\\begin{table}\n \\renewcommand{\\arraystretch}{0.925}\n \\vspace{-0.25cm}\n \\caption{Different IR settings for CI generation.}\n \\centering\n {\\footnotesize\n \\begin{tabular}{|l|l|}\n \\hline\n \\multicolumn{1}{|m{1.5cm}|}{\\centering IR}\n &\\multicolumn{1}{m{5.0cm}|}{\\centering Operations}\\\\\n \\hline\n SUIFvmenh & SUIFvm plus: type conversion (sxt, zxt),\\\\\n & partial predication (select),\\\\ \n & bit manipulation (bitinsert, bitextract, concat) \\\\\n \\hline\n SUIFrmenh & SUIFvmenh with finite register set \\\\\n & (12, 16, 32 or 64 registers); here 32 is used \\\\\n \\hline\n iDLX & The DLX integer instruction set \\\\\n \\hline\n \\end{tabular}\n }\n \\label{Tab:1}\n\\end{table}\n\nFor the experiments we used applications from a set of embedded benchmarks consisting of 5 cryptographic ({\\it crc32, deraiden, enraiden, idea, sha}) and 5 media-oriented applications ({\\it adpcmdec, adpcmenc, fir, fsme, mc}), which are shown in Table~\\ref{Tab:2}. \n\n\\begin{table}\n \\centering\n \\renewcommand{\\arraystretch}{0.925}\n \\caption{Summary of examined benchmarks.} \n {\\footnotesize\n \\begin{tabular}{|l|l|}\n \\hline\n \\multicolumn{1}{|m{2.0cm}|}{\\centering Benchmark}\n &\\multicolumn{1}{m{5.0cm}|}{\\centering Description}\\\\\n \\hline\n {\\it crc32} & Cyclic redundancy check \\\\ \\hline\n {\\it deraiden} \\cite{Raiden} & Decoding raiden cipher \\\\ \\hline\n {\\it enraiden} \\cite{Raiden} & Encoding raiden cipher \\\\ \\hline \n {\\it idea} & IDEA cryptographic kernel \\\\ \\hline\n {\\it sha} & Secure Hash Algorithm producing a 160-bit \\\\ \n & message digest for a given input \\\\ \\hline\n {\\it adpcmdec} & Adaptive Differential Pulse Code Modulation \\\\ \n & (ADPCM) decoder \\\\ \\hline\n {\\it adpcmenc} & Adaptive Differential Pulse Code Modulation \\\\ \n & (ADPCM) encoder \\\\ \\hline\n {\\it fir} & FIR filter \\\\ \\hline\n {\\it fsme} & Full-search motion estimation \\\\ \\hline\n {\\it mc} & Motion compensation \\\\ 
\\hline\n \\end{tabular}\n }\n \\label{Tab:2}\n\\end{table}\n\n\\subsection{Effect of compilation specifics: Case study of media processing kernels}\n\\label{Sec:CaseStudy}\nIt has been argued recently \\cite{Bonzini06} that traditional compiler transformations and the trivial solution of applying CI identification at the end of the optimization phase pipeline do not necessarily yield the best performance when targeting a custom processor. On the contrary, source code and IR-level transformations have to be specially tuned for exposing beneficial application-specific hardware extensions. \n\nIn this subsection, the effect of the choice of compilation specifics is highlighted for popular case-study applications: the ADPCM codec, an FIR filter, and typical implementations of motion estimation\/compensation. We investigate specific effects that the compiler imposes when used for exploring the potential for custom instructions:\n\\begin{enumerate}\n\\item[a)] {The effect of register allocation on the quality of the generated CIs. For this purpose, we have targeted the {\\it SUIFrmenh} backend. A 32-entry register file was assumed, while the procedure calling convention for {\\it SUIFrmenh} was the same as that of an in-house GCC-based DLX backend.}\n\\item[b)] {The suitability of using a highly optimizing compiler aimed at general-purpose processors, such as {\\it gcc} targeted to DLX in our case, against a well-known research compiler ({\\it MachSUIF} targeted to SUIFvm) which has been extensively used for exploring the transformation space for new CIs.}\n\\end{enumerate}\n\nTo account only for the true data dependencies amongst operations, it is necessary to remove all false dependencies. This can be achieved by a simple IR-level transformation pass (for example, such a pass was implemented in the Machine-SUIF compiler for the {\\it SUIFvmenh} target) which follows the pseudocode of Fig.~\\ref{Fig:4}. 
The algorithm in Fig.~\\ref{Fig:4} assumes an in-order instruction schedule, i.e. no backward data dependence edges exist within a basic block. For a given set of dependence edges $\\bigcup\\{(i \\rightarrow j)_k\\}$ between instructions $mi(i),mi(j)$ of instruction IDs $i,j$ respectively, the range $[i,j]$ is considered. The destination operands of the machine instructions in this range are iterated over and compared to the operand ($opnd$) whose false dependencies we want to remove. If $opnd$ is written at least once, a false dependency is identified (marked as $TRUE$) and the corresponding data dependence edge is annulled.\n\n\\begin{figure}[tb]\n\\centering\n{\n{\\footnotesize\n\\begin{verbatim}\nboolean is_false_dependency(BB* bb, InstrID mi_lpos, \n                            InstrID mi_hpos, LOpnd opnd)\n{\n  boolean false_dependency_f = FALSE;\n  ...\n  \/\/ Iterate through the [mi_lpos..mi_hpos] range\n  foreach machine instruction (mi) in range do \n    if the current mi is within the specified range\n      get destination operand dstop of mi \n      if dstop is ((a base register or address symbol) \n                   and writes memory)\n        if dstop is equal to opnd\n          \/\/ a false dependency has been found\n          false_dependency_f |= TRUE;\n        fi\n      fi\n    fi\n  od\n  \n  return false_dependency_f;\n}\n\\end{verbatim}\n}\n}\n\\vspace{-0.125cm}\n\\caption{Removing false data dependence edges from basic blocks.}\n\\label{Fig:4}\n\\end{figure}\n\nApplication speedups obtained before and after register allocation (the latter indicated by a `ra' suffix to the benchmark name) are shown in Fig.~\\ref{Fig:5}. In contrast to common belief \\cite{ClarkN03,Castro04}, the introduction of a finite register set and the mapping of the instruction selection temporaries to this set does not always have a negative impact on the evaluated speedups. 
While it is clear that there is a measurable effect (an overhead of 22.15\\%) due to register allocation for a single output ($N_{o}=1$), the extent of this overhead is reduced for larger numbers of register outputs. Thus, for $N_{o}=\\{2,4,\\infty\\}$ the corresponding overheads have been calculated as 17\\%, 2.5\\% and -21.3\\%, the latter meaning that the register-allocated IR enables higher speedups than the IR obtained prior to register allocation under the constraint of an unlimited number of register outputs. This important outcome implies that the overhead of spills and fills occurring due to register pressure can be efficiently hidden when multi-output (MIMO) instructions are used for the estimations. In addition, CIs have the side effect of eliminating the need for certain temporary variables within a CI pattern, given that they need not be live outside the pattern. \n\n\\begin{figure}[tb]\n \\SetFigLayout{2}{1}\n \\centering\n \\subfigure[$N_{i}$ = 4]{\n \\includegraphics[width=8.0cm]{fig5a.eps}\n \\label{Fig:5:a}}\n \\subfigure[$N_{i}$ = 8]{\n \\includegraphics[width=8.0cm]{fig5b.eps}\n \\label{Fig:5:b}}\n \\caption{Effect of register allocation on application speedup for different numbers of input\/output register operands.}\n \\label{Fig:5}\n \\vspace{-0.5cm}\n\\end{figure}\n\nFig.~\\ref{Fig:6} shows the results on relative application speedups for different numbers of input\/output register operands for the two selected compiler targets. An unlimited number of inputs was also set, but the corresponding results were within 0.4\\% of the $N_{i}=8$ case.\nThe difference in the average speedup achieved for the given numbers of inputs for the same application is about 44\\% (ranging from 20\\% to 61\\%). This is partially due to the fact that stack argument allocation, applied for {\\it iDLX} only, adds memory access operations for saving and restoring function arguments that are not usually included in new CIs. 
Even when MIMO instructions are identified incorporating the callee-saved sequence (a series of {\\it sw} instructions), the obtained speedups are severely limited by the data memory bandwidth assumed in the estimations, which is 1R\/1W port for all target architectures.\n\n\\begin{figure}[tb]\n \\SetFigLayout{2}{1}\n \\centering\n \\subfigure[$N_{i}$ = 4]{\n \\includegraphics[width=8.0cm]{fig6a.eps}\n \\label{Fig:6:a}}\n \\subfigure[$N_{i}$ = 8]{\n \\includegraphics[width=8.0cm]{fig6b.eps}\n \\label{Fig:6:b}}\n \\caption{Application speedup for different numbers of input\/output register operands on {\\it SUIFvmenh} and {\\it iDLX}.}\n \\label{Fig:6}\n \\vspace{-0.125cm}\n\\end{figure}\n\n\\subsection{Transformation to more suitable IRs}\n\\label{Sec:IRTransformations}\nAlthough not explicitly stated in related works, the choice of IR significantly affects the quality of the CI generation results. In YARDstick, GGX XML graph representations of ISeq patterns can be automatically generated and then transformed by hand-written AGG rules to use different IR operators for implementing equivalent functionality.\n\nMost compilers (one exception is the commercial CoSy \\cite{ACE}) do not account for bit-level manipulations that are desirable in application domains such as network processing and genetic algorithms (GAs). To highlight this issue we have defined three custom IR operators, namely {\\it bitinsert, bitextract} and {\\it concat}, with the semantics of Table~\\ref{Tab:3}. As motivational examples, we have used the well-known single- ({\\it crcsp}) and double-point ({\\it crcdp}) crossover operators, which are encountered in typical GAs such as the SGA \\cite{Goldberg89}. It should be noted that the ANSI C implementations of the crossover operators were hand-tuned, with optimizations including the conversion of all function calls inside the {\\it crcsp} and {\\it crcdp} functions to macro-inclusions. 
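The semantics of the three bit-level operators given in Table~\\ref{Tab:3} can be sketched directly as bit-field manipulations. The C++ below is an illustrative model only, assuming 32-bit registers, $hpos \\geq lpos$, and an explicit (value, width) encoding for the variadic {\\it concat}:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Mask for a [lpos..hpos] bit field (hpos >= lpos assumed).
static uint32_t field_mask(unsigned lpos, unsigned hpos) {
    unsigned width = hpos - lpos + 1;
    return (width >= 32) ? 0xFFFFFFFFu : ((1u << width) - 1u);
}

// bitextract: rd <= rs[lpos..hpos], field moved to the low-order bits.
uint32_t bitextract(uint32_t rs, unsigned lpos, unsigned hpos) {
    return (rs >> lpos) & field_mask(lpos, hpos);
}

// bitinsert: rd[lpos..hpos] <= rs, all other bits of rd preserved.
uint32_t bitinsert(uint32_t rd, uint32_t rs, unsigned lpos, unsigned hpos) {
    uint32_t m = field_mask(lpos, hpos);
    return (rd & ~(m << lpos)) | ((rs & m) << lpos);
}

// concat: rd <= rs(0) & rs(1) & ... & rs(n-1); each (value, width) field
// is appended, the first operand ending up most significant.
uint32_t concat(const std::vector<std::pair<uint32_t, unsigned>>& fields) {
    uint32_t rd = 0;
    for (const auto& f : fields)
        rd = (rd << f.second) | (f.first & field_mask(0, f.second - 1));
    return rd;
}
```

A single {\\it bitextract} node thus stands in for an entire shift-and-mask subgraph, which is essentially what the AGG rewriting rules capture at the IR level.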
Fig.~\\ref{Fig:7} shows the result of applying a rule-based transformation in AGG \\cite{AGG} for replacing a {\\it SUIFvmenh} IR segment (Fig.~\\ref{Fig:7:a}) with a use of the {\\it bitextract} IR operator, as seen in the resulting graph (Fig.~\\ref{Fig:7:b}). To highlight the importance of the right choice of compiler IR, Fig.~\\ref{Fig:8} visualizes the VCG representations of the custom instruction generated for the {\\it crcsp} genetic operator, without (Fig.~\\ref{Fig:8:a}) and with the use of the bit-level IR operators (Fig.~\\ref{Fig:8:b}).\n\n\\begin{table}\n \\renewcommand{\\arraystretch}{0.95}\n \\caption{Custom IR operators improving bit-level compiler support. $r_{d}$, $r_{s}$ are register operands, {\\it hpos,lpos} denote a bit range, and {\\it n} is the number of arguments for a variadic operator.} \n \\centering\n {\\footnotesize\n \\begin{tabular}{|l|c|}\n \\hline\n \\multicolumn{1}{|m{2.0cm}|}{\\centering Operator}\n &\\multicolumn{1}{m{5.0cm}|}{\\centering Semantics}\\\\\n \\hline\n {\\it bitinsert} & $r_{d}[lpos..hpos] \\Leftarrow r_{s}$ \\\\ \n {\\it bitextract} & $r_{d} \\Leftarrow r_{s}[lpos..hpos]$ \\\\ \n {\\it concat} & $r_{d} \\Leftarrow r_{s(0)} \\& r_{s(1)} \\& \\ldots \\& r_{s(n-1)}$ \\\\\n \\hline\n \\end{tabular}\n }\n \\label{Tab:3}\n \\vspace{-0.125cm}\n\\end{table}\n\n\\begin{figure}[tb]\n \\SetFigLayout{2}{1}\n \\centering\n \\subfigure[Visualization of an example host {\\it SUIFvmenh} IR graph.]{\n \\includegraphics[width=8.0cm]{fig7a.eps}\n \\label{Fig:7:a}}\n \\subfigure[The resulting graph after the application of a transformation rule for `bitextract'.]{\n \\includegraphics[width=8.0cm]{fig7b.eps}\n \\label{Fig:7:b}}\n \\caption{An example of IR graph rewriting via AGG transformation rules.}\n \\label{Fig:7}\n \\vspace{-0.125cm}\n\\end{figure}\n\n\\begin{figure}[tb]\n \\SetFigLayout{2}{1}\n \\centering\n \\subfigure[The {\\it crcsp}-induced CI without the use of bit-level operators.]{\n 
\\includegraphics[width=8.0cm]{fig8a-c.eps}\n \\label{Fig:8:a}}\n \\subfigure[The {\\it crcsp}-induced CI using bit-level operators.]{\n \\includegraphics[width=8.0cm]{fig8b-c.eps}\n \\label{Fig:8:b}}\n \\caption{Visualization of the {\\it crcsp} genetic operator CI for different compiler IRs.}\n \\label{Fig:8}\n \\vspace{-0.125cm}\n\\end{figure}\n\nThe performance gains for the generated hardware depend heavily on the target IR used for mapping the application code, as can be clearly seen from the results of Table~\\ref{Tab:4}. In Table~\\ref{Tab:4}, the first three columns are self-explanatory. Column `Cycles...' shows the cycles required for a sequential schedule of the corresponding GA operator assuming the usage of the generated CIs. The last two columns indicate the number of cycles and the area of the CI. The area requirement is calculated relative to the area (multiplier area unit or MAU) of a 32-bit single-cycle multiplier producing a 64-bit result.\n\nFor computing schedules with unlimited resources, the generated ISeq files of the custom instructions were automatically converted with YARDstick to CDFGs compatible with an extended version of the CDFG toolset \\cite{CDFGtool} and processed by an ASAP scheduler. If the bit-level operators are not used, the minimum number of cycles required for the {\\it crcsp} operator is 76 for a sequential schedule and 12 for scheduling with unlimited resources, while for {\\it crcdp} these limits are 111 and 14, respectively. When the bit-level operators are used, the sequential schedules prior to the inclusion of custom instructions require 13 and 18 cycles for {\\it crcsp} and {\\it crcdp} respectively, with an ASAP schedule of 5 cycles for both. In the latter case, a single-cycle MIMO custom instruction is identified for each genetic operator when an $N_{i}\/N_{o}=8\/2$ constraint is used, with impressive area benefits as well. 
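The unlimited-resource cycle counts quoted above correspond to ASAP schedules of the CI data-dependence graphs: each operation starts as soon as all of its predecessors have finished, so the schedule length equals the longest latency-weighted path. A minimal sketch of such a scheduler follows (assuming nodes are given in topological order with per-node latencies; this is illustrative, not the extended CDFG toolset implementation):

```cpp
#include <algorithm>
#include <vector>

// ASAP schedule length for a DAG with unlimited resources.
// A node's start time is the max over its predecessors of
// (pred start + pred latency); nodes are assumed topologically ordered.
int asap_length(const std::vector<int>& latency,
                const std::vector<std::vector<int>>& preds) {
    int n = static_cast<int>(latency.size());
    std::vector<int> start(n, 0);
    int length = 0;
    for (int v = 0; v < n; ++v) {
        for (int p : preds[v])
            start[v] = std::max(start[v], start[p] + latency[p]);
        // the schedule length is the latest finish time seen so far
        length = std::max(length, start[v] + latency[v]);
    }
    return length;
}
```

A sequential schedule, by contrast, simply sums the latencies of all operations in the pattern, which is why the gap between the two grows with the parallelism exposed by the IR.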
\n\n\\begin{table}\n \\renewcommand{\\arraystretch}{0.975}\n \\caption{CI characteristics for hand-optimized ANSI C implementations of {\\it crcsp} and {\\it crcdp}.} \n \\centering\n {\\footnotesize\n \\begin{tabular}{|l|l|r|r|r|r|}\n \\hline\n \\multicolumn{1}{|m{1.0cm}|}{\\centering \\footnotesize GA operator}\n &\\multicolumn{1}{m{1.0cm}|}{\\centering \\footnotesize Bit-level operations}\n &\\multicolumn{1}{m{1.25cm}|}{\\centering \\footnotesize CI gen. constraints \\\\ $N_{i}\/N_{o}$}\n &\\multicolumn{1}{m{1.25cm}|}{\\centering \\footnotesize Cycles \\\\ (seq. schedule)}\n &\\multicolumn{1}{m{0.75cm}|}{\\centering \\footnotesize CI cycles}\n &\\multicolumn{1}{m{1.0cm}|}{\\centering \\footnotesize CI area (MAU)}\\\\\n \\hline\n \n {\\it crcsp} & No & 4\/1 & 76 & -- & -- \\\\ \n {\\it crcsp} & No & 8\/1 & 41 & 3 & 0.977 \\\\ \n {\\it crcsp} & No & 8\/2 & 5 & 3 & 1.867 \\\\ \n \\hline\n \n {\\it crcsp} & Yes & 4\/1 & 13 & -- & -- \\\\ \n {\\it crcsp} & Yes & 8\/1 & 6 & 1 & 0.142 \\\\ \n {\\it crcsp} & Yes & 8\/2 & 1 & 1 & 0.153 \\\\ \n \\hline\n \n {\\it crcdp} & No & 4\/1 & 111 & -- & -- \\\\ \n {\\it crcdp} & No & 8\/1 & 58 & 3 & 1.466 \\\\ \n {\\it crcdp} & No & 8\/2 & 5 & 3 & 2.800 \\\\ \n \\hline\n \n {\\it crcdp} & Yes & 4\/1 & 18 & -- & -- \\\\ \n {\\it crcdp} & Yes & 8\/1 & 8 & 1 & 0.147 \\\\ \n {\\it crcdp} & Yes & 8\/2 & 1 & 1 & 0.164 \\\\ \n \\hline\n \\end{tabular}\n }\n \\label{Tab:4}\n \\vspace{-0.25cm}\n\\end{table}\n\n\\subsection{Effect of data memory access model}\n\\label{Sec:MemoryAccessModel}\nThe extent and scope of using custom instructions are constrained by the data bandwidth to the data memory unit and local storage (register file), as defined by the number of input\/output ports and the resolution of dependencies for load\/store operations. In certain approaches \\cite{ClarkN05,Leupers06} which deal with predefined architectures such as the MIPS CorExtend and ARM OptimoDE systems, this limitation is a decisive constraint. 
However, for exploration purposes when developing an ASIP from scratch, it is useful to consider different storage consistency models. Following the notation introduced in \\cite{Biswas07} for state consistency between application-specific functional units (AFUs) with local storage and data memory, it is possible to consider two such models in YARDstick:\n\\begin{itemize}\n\\item {{\\it Consistent data memory}, where the AFU directly accesses data in the on-chip data memory and there is no need for local AFU storage. We also make the conservative assumption that loads and stores must always be serialized.}\n\\item {{\\it Ideal consistent AFU memory}, where each load\/store to main memory is transformed to an access to local AFU memory. We assume that data memory status is updated by DMA accesses occurring in parallel to processor instructions.}\n\\end{itemize} \n\nTo investigate the effect of memory model choice on application speedup due to CIs, we first generated CIs without allowing memory inclusion (``noMEM''), then allowed local memory and performed estimations for the consistent data memory model (``CDM''), and subsequently we assumed an ideal consistent AFU memory (``idealCAM''). The corresponding results are illustrated in Fig.~\\ref{Fig:9} for indicative $N_{i}\/N_{o}$ combinations and for a single-issue processor. \n\n\\begin{figure}[tb]\n \\SetFigLayout{2}{1}\n \\centering\n \\subfigure[$N_{i}\/N_{o}$ = 4\/2]{\n \\includegraphics[width=8.0cm]{fig9a.eps}\n \\label{Fig:9:a}}\n \\subfigure[$N_{i}\/N_{o}$ = 8\/4]{\n \\includegraphics[width=8.0cm]{fig9b.eps}\n \\label{Fig:9:b}}\n \\caption{Effect of data memory accesses on the speedup induced by CIs. 
Accesses to data memory are assumed to require a single clock cycle overhead.}\n \\label{Fig:9}\n \\vspace{-0.25cm}\n\\end{figure}\n\nAs can be seen from the collected results, the inclusion of data memory access operations in CIs has a significant positive impact on the attained speedups: from 15.5\\% to 33.3\\% for the given input\/output constraints. Especially for the {\\it SUIFvmenh} target, the speedup improvements were up to 43.4\\%. Another important observation is that the ideal consistent AFU memory model has a limited effect, with improvements of up to 6.3\\% on average and 8.9\\% for the {\\it SUIFvmenh} applications alone. However, for a larger cycle overhead for accessing data storage, the speedup improvements are more considerable. For another exploration example, we have estimated that 2- and 5-cycle load\/store accesses to a data memory module through the local bus (an address cycle followed by either one word data access or consecutive byte data access cycles) result in higher speedups. More specifically, the ``CDM'' case performs better than ``noMEM'' by 34.7\\% and 49.9\\%, respectively, for the 2- and 5-cycle overheads. When comparing the two different models that allow memory accesses to be part of CIs, the corresponding values are 7\\% and 20.9\\% in favor of ``idealCAM'' for the given cycle overheads. \n\n\\subsection{Greedy CI selection under priority metrics}\n\\label{Sec:GreedySelection}\nFor implementing a greedy CI selector, the key idea is to assign priorities to the CI patterns and then choose the most proficient instances, starting with the highest-priority one. 
We have used the following two priority functions: \n\n\\begin{equation}\n\\label{Eq:1}\n\\text{Cycle gain}: Priority(\\sum_{j} C_{i,j}) = \\sum_{j} \\{P_{i,j} \\times f_{i,j}\\}\n\\end{equation}\n\nwhich pushes for best performance regardless of AFU area requirements, and: \n\n\\begin{equation}\n\\label{Eq:2}\n\\text{Cycle gain\/Area}: Priority(\\sum_{j} C_{i,j}) = \\sum_{j} \\{(P_{i,j} \\times f_{i,j})\\}\/A_{i}\n\\end{equation}\n\nwhere $C_{i,j}$ denotes the $i$-th candidate instruction with $j$ different instances in the entire program, $P_{i,j}$ the cycle gain for the specific instance, $f_{i,j}$ the basic block execution frequency metric associated with it, and $A_{i}$ the area cost for the candidate. These priority functions enforce different objectives: equation~\\ref{Eq:1} maximizes performance gain for each isomorphic candidate CI over the entire program when area is not an issue, while equation~\\ref{Eq:2} takes the area cost into account as well.\n\nA summary of the measurements for the application set is given in Table~\\ref{Tab:5}. \nTaking {\\it sha.dlx} for example, although tens of candidate instructions are identified, \nonly a few (7 for achieving 95\\% of the maximum speedup, compared to 20 for achieving the full speedup) contribute significantly to the execution time for either priority function. The \nnumber of required extensions for reaching the 95\\% speedup levels ranges from 2 ({\\it fir.dlx}) to 40 ({\\it idea.dlx}), while the area requirement is less than 3.4 multiples of the area of a 32-bit single-cycle multiplier for all applications with the exception of {\\it idea.dlx}, which demands up to 10.23 MAU. \n\nFinally, Fig.~\\ref{Fig:10} compares the pros and cons of the priority functions \nused in the custom instruction selection process for the {\\it sha.dlx} application example. 
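A greedy selector driven by these priority functions can be sketched as follows (our own illustration; the candidate tuples in the usage example are hypothetical and not taken from the benchmark data):

```python
def greedy_select(candidates, area_budget=None, use_area=False):
    """Greedy CI selection under a priority metric.
    Each candidate is (name, cycle_gains, frequencies, area), where
    cycle_gains[j] and frequencies[j] refer to the j-th instance of the
    candidate pattern in the program (cf. Eqs. (1)-(2))."""
    def priority(c):
        name, gains, freqs, area = c
        gain = sum(p * f for p, f in zip(gains, freqs))   # Eq. (1)
        return gain / area if use_area else gain          # Eq. (2)
    selected, used_area = [], 0.0
    for c in sorted(candidates, key=priority, reverse=True):
        if area_budget is not None and used_area + c[3] > area_budget:
            continue                  # skip CIs that exceed the budget
        selected.append(c[0])
        used_area += c[3]
    return selected, used_area

# Hypothetical candidates: (name, per-instance cycle gains,
# per-instance execution frequencies, area in MAU).
cands = [("ci_a", [10], [100], 1.0),
         ("ci_b", [2], [1000], 0.5),
         ("ci_c", [50], [30], 4.0)]
```

Under the `Cycle gain' metric the hypothetical candidates are ranked by total cycle savings alone, while `Cycle gain\/Area' demotes the large `ci_c' in favour of the cheaper `ci_a'.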
For the {\\it sha} application, CI selection under the `Cycle gain' priority function reaches 95\\% of the maximum speedup with one instruction fewer and a slight area increase (0.04 MAU) compared to `Cycle gain\/Area'.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=8.0cm]{fig10.eps}\n \\caption{Custom instruction selection under priority metrics for {\\it sha}\n ($N_{i}\/N_{o}=\\infty\/\\infty$).}\n \\label{Fig:10}\n \\vspace{-0.175cm}\n\\end{figure}\n\n\\begin{table}\n \\renewcommand{\\arraystretch}{1.0}\n \\caption{Speedup-AFU area for `Cycle gain'\/`Cycle gain\/Area' \n for the input\/output constraint $N_{i}\/N_{o}=\\{8\/4\\}$.} \n \\centering\n {\\footnotesize\n \\begin{tabular}{|l|r|r|r|r|}\n \\hline\n \\multicolumn{1}{|m{1.2cm}|}{\\centering Benchmark}\n &\\multicolumn{1}{m{1.0cm}|}{\\centering 0.95$\\times$ max. speedup}\n &\\multicolumn{1}{m{1.2cm}|}{\\centering Area (MAU)}\n &\\multicolumn{1}{m{1.0cm}|}{\\centering At max. speedup}\n &\\multicolumn{1}{m{1.2cm}|}{\\centering Area (MAU)}\\\\\n \\hline\n \n {\\it adpcmdec} & 4\/4 & 0.895\/0.895 & 6 & 0.983 \\\\ \n \\hline\n \n {\\it adpcmdec.dlx} & 11\/11 & 1.123\/1.123 & 17 & 1.721 \\\\ \n \\hline\n \n {\\it adpcmenc} & 4\/4 & 0.998\/0.998 & 6 & 1.086 \\\\\n \\hline\n \n {\\it adpcmenc.dlx} & 16\/16 & 1.475\/1.375 & 22 & 2.074 \\\\\n \\hline\n \n {\\it crc32.dlx} & 3\/3 & 0.12\/0.12 & 3 & 0.12 \\\\\n \\hline\n \n {\\it deraiden} & 4\/4 & 2.657\/2.657 & 4 & 2.657 \\\\\n \\hline\n \n {\\it enraiden} & 3\/3 & 1.949\/1.949 & 3 & 1.949 \\\\\n \\hline\n \n {\\it fir} & 4\/4 & 1.398\/1.398 & 5 & 1.398 \\\\\n \\hline\n \n {\\it fir.dlx} & 2\/2 & 0.155\/0.155 & 2 & 0.155 \\\\\n \\hline\n \n {\\it fsme} & 9\/9 & 1.143\/1.143 & 11 & 1.546 \\\\\n \\hline\n \n {\\it fsme.dlx} & 6\/6 & 1.03\/1.03 & 10 & 1.65 \\\\\n \\hline\n \n {\\it idea.dlx} & 40\/50 & 10.23\/9.325 & 69 & 13.002 \\\\\n \\hline\n \n {\\it mc} & 5\/5 & 1.824\/1.824 & 7 & 2.53 \\\\\n \\hline\n \n {\\it mc.dlx} & 7\/7 & 1.489\/1.489 
& 12 & 2.516 \\\\\n \\hline\n \n {\\it sha.dlx} & 7\/7 & 1.671\/1.671 & 20 & 3.378 \\\\\n \\hline\n \\end{tabular}\n }\n \\label{Tab:5}\n \\vspace{-0.25cm}\n\\end{table}\n\n\\section{Usage environment}\n\\label{Sec:Usage}\nYARDstick has been used along with the SUIF\/Machine-SUIF \\cite{MachSUIF}, GCC \\cite{GCC}, and COINS \\cite{COINS} compilers and the ArchC \\cite{ArchC} simulation framework. Functional and cycle-accurate simulators generated by version 1.5.1 of ArchC can be used with YARDstick without any modifications. Most of the YARDstick functionality is also accessible through a cross-platform GUI \\cite{Kavvadias07} compatible with recent Tcl\/Tk versions (8.5.a5 and newer). \n\nSupported platforms include GNU\/Linux (RedHat 9.0), Cygwin and Win32 (Windows\/XP SP2) on x86-compatible processors.\n\n\\section{Conclusions}\n\\label{Sec:Conclusions}\nYARDstick is a retargetable application analysis and custom instruction generation\/selection environment providing a compiler-\/simulator-agnostic infrastructure. YARDstick aims at separating design space exploration from compiler\/simulator idiosyncrasies. Different compilers\/simulators can be plugged in via file-based interfaces; further, both high- (e.g. ANSI C) and low-level (assembly for an architecture or a virtual machine) input can be analyzed by the infrastructure.\n\nIn order to prove the applicability and usefulness of YARDstick in ASIP development, we have evaluated a variety of exploration scenarios on a benchmark set consisting of well-known embedded applications and kernels. In this context, we have investigated effects of the compilation process, such as the selection of the target IR and the impact of register allocation, on the characteristics of the identified hardware extensions. 
Also, different memory models involving local storage for application-specific functional units were examined and quantified, and for the entire set of applications, custom instructions were generated under different input\/output constraints.\n\n\n\n\n\\nocite{*}\n\n\\bibliographystyle{compj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{PSO Introduction}\n\nParticle Swarm Optimisation (PSO,~\\cite{kennedy1995particle}) is a metaheuristic algorithm \nwhich is widely used to solve search and optimisation tasks. \nIt employs a number of particles as a swarm of potential solutions.\nEach particle shares knowledge about the current overall best solution and also retains a memory of \nthe best solution it has encountered itself previously. Otherwise the particles, after random initialisation,\nobey a linear dynamics of the following form\n\\begin{eqnarray}\n\\label{eq:PSOVelocityUpdate}\n\\mathbf{v}_{i,t+1}&=&\\omega\\mathbf{v}_{i,t}+\\alpha_{1} \\mathbf{R}_{1}(\\mathbf{p}_{i}-\\mathbf{x}_{i,t})+\\alpha_{2} \\mathbf{R}_{2}(\\mathbf{g}-\\mathbf{x}_{i,t})\n\\cr\n\\mathbf{x}_{i,t+1}&=&\\mathbf{x}_{i,t}+\\mathbf{v}_{i,t+1}\n\\label{eq:PSOPositionUpdate}\n\\end{eqnarray}\nHere $\\mathbf{x}_{i,t}$ and $\\mathbf{v}_{i,t}$, $i=1,\\dots,N$, $t=0,1,2,\\dots$,\nrepresent, respectively, the $d$-dimensional position in the search space and the velocity \nvector of the $i$-th particle in the swarm at time $t$. \nThe velocity update\ncontains an inertial term parameterised by $\\omega$ and includes attractive forces \ntowards the personal best location $\\mathbf{p}_{i}$ and towards \nthe globally\nbest location $\\mathbf{g}$, which are parameterised by $\\alpha_{1}$ \nand $\\alpha_{2}$, respectively. \nThe symbols $\\mathbf{R}_{1}$ and $\\mathbf{R}_{2}$ denote diagonal matrices whose non-zero entries \nare uniformly distributed in the unit interval. 
The number of particles $N$ is quite low in \nmost applications, usually amounting to a few dozen.\n\nIn order to function as an optimiser, the algorithm uses a nonnegative cost function \n$F:\\mathbb{R}^d \\to \\mathbb{R}$, where without loss of generality $F(\\mathbf{x}^*)=0$ \nis assumed at an optimal solution $\\mathbf{x}^*$. In many problems where PSO is applied, there are also states with near-zero costs that can be considered good solutions.\nThe cost function is evaluated for the state of each particle at each time step. \nIf $F(\\mathbf{x}_{i,t})$ is better than $F(\\mathbf{p}_{i})$, then the personal\nbest $\\mathbf{p}_{i}$ is replaced by $\\mathbf{x}_{i,t}$. \nSimilarly, if one of the particles arrives at a state\nwith a cost less than $F(\\mathbf{g})$, then $\\mathbf{g}$ is replaced in all particles by the \nposition of the particle that has discovered the new solution. \nIf its velocity is non-zero, a particle will depart from the current best location,\nbut it may still have a chance to return, guided by the force terms in the dynamics.\n\nNumerous modifications and variants have been \nproposed since the algorithm's inception \\cite{kennedy1995particle} and it continues to enjoy \nwidespread usage. Ref.~\\cite{poli2008analysis} groups around 700 PSO papers into 26 discernible\napplication areas. Google Scholar reveals over 150,000 results for ``Particle Swarm Optimisation''\nin total and 24,000 for the year 2014.\n\nIn the next section we will report observations from a simulation of a particle swarm and move on to\na standard matrix formulation of the swarm dynamics in order to describe some of the existing analytical \nwork on PSO. 
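Condensing the update rule (Eq.~\\ref{eq:PSOVelocityUpdate}) and the bookkeeping of personal and global bests described above, a minimal sketch of the algorithm might look as follows (our own illustration; swarm size, parameter values and search domain are arbitrary choices, not recommendations from this paper):

```python
import numpy as np

def pso(cost, d=2, n_particles=25, omega=0.7, alpha1=1.4, alpha2=1.4,
        iters=200, seed=0):
    """Minimal PSO sketch following Eq. (1); no clamping, no variants."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, d))    # positions
    v = np.zeros((n_particles, d))                  # velocities
    p = x.copy()                                    # personal bests
    pcost = np.array([cost(xi) for xi in x])
    g = p[pcost.argmin()].copy()                    # global best
    gcost = pcost.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, d))           # diagonal of R1
        r2 = rng.random((n_particles, d))           # diagonal of R2
        v = omega * v + alpha1 * r1 * (p - x) + alpha2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(xi) for xi in x])
        better = c < pcost                          # update personal bests
        p[better], pcost[better] = x[better], c[better]
        if c.min() < gcost:                         # update global best
            gcost, g = c.min(), x[c.argmin()].copy()
    return g, gcost
```

Here `cost` is any nonnegative function; running the sketch on a simple quadratic bowl with moderately damped parameter settings yields a near-optimal best cost within a few hundred iterations.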
In Sect.~\\ref{Critical} we will argue for a formulation of PSO as a random dynamical system\nwhich will enable us to derive a novel exact characterisation of the dynamics of a one-particle system, \nwhich will then be generalised towards the more realistic case of a multi-particle swarm.\nIn Sect.~\\ref{Simulations} we will compare the theoretical predictions with simulations on\na representative set of benchmark functions. Finally, in Sect.~\\ref{discussion} we will discuss the\nassumptions we have made in the theoretical solution in Sect.~\\ref{Critical} and address the applicability\nof our results to other metaheuristic algorithms and to practical optimisation problems.\n\n\\section{Swarm dynamics}\n\n\\subsection{Empirical properties}\n\nThe success of the algorithm in locating good solutions depends on the dynamics of the particles \nin the state space of the problem. In contrast to many evolution strategies, it is not straightforward to interpret the particle swarm as following a landscape defined by the cost function. \nUnless the current best\npositions $\\mathbf{p}$ or $\\mathbf{g}$ change, the particles do not interact with each other and \nfollow an intrinsic dynamics that does not even indirectly obtain any gradient information.\n\nThe particle dynamics depends on the parameterisation of Eq.~\\ref{eq:PSOVelocityUpdate}.\nTo obtain the best result one needs to select parameter settings that \nachieve a balance between the particles exploiting the knowledge of good known locations and exploring regions of the problem space that have not been visited before. Parameter values often need to be experimentally determined, and poor selection may result in premature convergence of the swarm to poor local minima or in a divergence of the particles towards regions that are irrelevant for the problem.\n\nEmpirically we can execute PSO against a variety of problem functions with a range of $\\omega$ and \n$\\alpha_{1,2}$ values. 
Typically the algorithm shows performance of the form depicted \nin Fig.~\\ref{fig:PSOPerformance}. The best solutions found show a curved relationship \nbetween $\\omega$ and $\\alpha=\\alpha_1+\\alpha_2$, with $\\omega\\approx 1$ at small $\\alpha$, and\n$\\alpha\\gtrapprox 4$ at small $\\omega$. \nLarge values of both $\\alpha$ and $\\omega$ are found \nto cause the particles to diverge, leading to results far from optimality, while at small values for both\nparameters the particles converge to a nearby solution, which sometimes is acceptable. \nFor other cost functions similar relationships are observed in numerical tests (see Sect.~\\ref{Simulations})\nunless no good solutions are found due to problem complexity or runtime limits (see Sect.~\\ref{runtime}).\nFor simple cost functions, such as a single well potential, there are also parameter combinations with \nsmall $\\omega$ and small $\\alpha$ that will usually lead to good results.\nThe choice of $\\alpha_1$ and $\\alpha_2$ at constant $\\alpha$ may\nhave an effect for some cost functions, but the effect does not seem to be large in most cases.\n\n\\begin{figure}[th]\n\\noindent \\begin{centering}\n\\includegraphics[width=0.75\\linewidth]{fn13_2000_valley_F}\n\\par\\end{centering}\n\\vspace{-5mm}\n\\caption{Typical PSO performance as a function of its $\\omega$ and $\\alpha$ parameters. \nHere a 25-particle swarm was run for pairs of $\\omega$ and $\\alpha$ values ($\\alpha_1=\\alpha_2=\\alpha\/2$). \nThe cost function here was the $d=10$ non-continuous rotated Rastrigin function~\\cite{CEC2013}. \nEach parameter pair was repeated 25 times and the minimal costs after 2000 iterations were \naveraged. 
\\label{fig:PSOPerformance}}\n\\end{figure}\n\n\\subsection{Matrix formulation}\n\nIn order to analyse the behaviour of the algorithm it is convenient to use a matrix formulation \nby inserting the velocity explicitly into the second equation (\\ref{eq:PSOPositionUpdate}).\n\\begin{equation}\n\\mathbf{z}_{t+1}=M \\mathbf{z}_t + \\alpha_1 \\mathbf{R}_1 (\\mathbf{p},\\mathbf{p})^{\\top}+\n\\alpha_2 \\mathbf{R}_2 (\\mathbf{g},\\mathbf{g})^{\\top} \\label{eq:matrix_from}\n\\end{equation}\nwith $\\mathbf{z}=(\\mathbf{v},\\mathbf{x})^{\\top}$ and \n\\begin{equation}\nM=\\left(\\begin{array}{cc} \n\t\t\t\\omega \\mathbf{I}_d & -\\alpha_1 \\mathbf{R}_1- \\alpha_2 \\mathbf{R}_2 \\\\\n\t\t\t\\omega \\mathbf{I}_d & \\mathbf{I}_d-\\alpha_1 \\mathbf{R}_1- \\alpha_2 \\mathbf{R}_2 \n\t\\end{array}\\right),\n\\label{matrix}\n\\end{equation}\nwhere $\\mathbf{I}_d$ is the unit matrix in $d$ dimensions. Note that the two occurrences of $\\mathbf{R}_1$ in \nEq.~\\ref{matrix} refer to the same realisation of the random variable. Similarly, the two $\\mathbf{R}_2$'s are \nthe same realisation, but different from $\\mathbf{R}_1$.\nSince the second and third terms on the right in Eq.~\\ref{eq:matrix_from} are \nconstant most of the time, the analysis of the algorithm can focus on the properties of the matrix $M$.\nIn spite of its wide applicability, PSO has not been subject to deeper theoretical study, which may\nbe due to the multiplicative noise in the simple quasi-linear, quasi-decoupled dynamics. In previous\nstudies the effect of the noise has largely been ignored.\n\n\\subsection{Analytical results}\n\nAn early exploration of the PSO dynamics~\\cite{kennedy1998behavior} considered a single particle in \na one-dimensional space where the personal and global best locations were taken to be the same. \nThe random components were replaced by their averages such that, apart from random initialisation,\nthe algorithm was deterministic. 
\nVarying the parameters was shown to result \nin a range of periodic motions and divergent behaviour for the case of $\\alpha_1+\\alpha_2\\ge4$. \nThe addition of the random vectors was seen as beneficial as it adds noise to the deterministic search.\n\nControl of velocity, not requiring the enforcement of an arbitrary maximum value as in \nRef.~\\cite{kennedy1998behavior}, is derived analytically in Ref.~\\cite{clerc2002particle}. \nHere eigenvalues derived from the dynamic matrix of a simplified version of the PSO algorithm \nare used to infer various search behaviours. Thus, again the $\\alpha_1+\\alpha_2\\ge4$ case is\nexpected to diverge. For $\\alpha_1+\\alpha_2<4$ various cyclic and quasi-cyclic motions are shown \nto exist for a non-random version of the algorithm.\n\nIn Ref.~\\cite{trelea2003particle} again a single particle was considered in a one-dimensional \nproblem space, using a deterministic version of PSO, setting $\\mathbf{R}_{1}=\\mathbf{R}_{2}=0.5$. \nThe eigenvalues of the system were determined as functions of $\\omega$ and a combined $\\alpha$, which\nleads to three conditions: the particle is shown to converge when $\\omega<1$, $\\alpha>0$ and \n$2\\omega-\\alpha+2>0$. Harmonic oscillations occur for \n$\\omega^2+\\alpha^2-2\\omega\\alpha-2\\omega-2\\alpha+1<0$ \nand a zigzag motion is expected if \n$\\omega<0$ and $\\omega-\\alpha+1<0$. \nAs with the preceding papers, the discussion of the random numbers in the algorithm views \nthem purely as enhancing the search capabilities by adding a \\emph{drunken walk} \nto the particle motions. Their replacement by expectation values was thus believed to \nsimplify the analysis with no loss of generality. \n\nWe show in this contribution that the \niterated use of these random factors $\\mathbf{R}_{1}$ and $\\mathbf{R}_{2}$\nin fact adds a further level of complexity to the \ndynamics of the swarm, which affects the behaviour of the algorithm in a non-trivial way. 
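The deterministic analyses reviewed above reduce to an eigenvalue computation: replacing the random factors by their expectation value $0.5$ turns the one-particle dynamics into a fixed $2\times2$ matrix whose spectral radius decides convergence. A small sketch (our own illustration; we write $a$ for the averaged coupling $(\alpha_1+\alpha_2)/2$, which may differ by this factor of two from the combined $\alpha$ used in the conditions quoted above):

```python
import numpy as np

def spectral_radius(omega, alpha):
    # Deterministic one-particle PSO matrix with R1 = R2 = 0.5,
    # i.e. the random factors replaced by their expectation value.
    a = alpha / 2.0
    M = np.array([[omega, -a],
                  [omega, 1.0 - a]])
    return max(abs(np.linalg.eigvals(M)))

# The deterministic particle converges iff the spectral radius is below
# one, e.g. for (omega, alpha) = (0.7, 1.4) but not for (1.05, 1.4).
```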
In Ref.~\\cite{jiang2007stagnation} these factors were given some consideration. Regions of convergence and divergence separated by a curved line were predicted. This line (an equation for which is given in Ref.~\\cite{cleghorn2014generalized}) fails, however, to include some parameter settings that lead to convergent swarms. Our analytical solution of the stability problem for the swarm dynamics explains\nwhy parameter settings derived from the deterministic approaches are not in line with\nexperiences from practical tests. For this purpose we will now formulate the PSO algorithm as\na random dynamical system and present an analytical solution for the swarm dynamics in a\nsimplified but representative case.\n\n\\section{Critical swarm conditions for a single particle\\label{Critical}}\n\n\\subsection{PSO as a random dynamical system}\n\nAs in Refs.~\\cite{kennedy1998behavior,trelea2003particle}, the dynamics of the particle swarm will \nalso be studied here in the single-particle case. This can be justified\nbecause the particles interact only\nvia the global best position such that, while $\\mathbf{g}$ (\\ref{eq:PSOVelocityUpdate}) is unchanged,\nsingle particles exhibit qualitatively the same dynamics as in the swarm. 
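In contrast, retaining the multiplicative randomness calls for a stochastic stability analysis; numerically, this amounts to a Monte Carlo estimate of the largest Lyapunov exponent of the random one-particle iteration, whose sign separates convergent from divergent parameter settings. A sketch (our own illustration; sample sizes and parameter values are arbitrary choices):

```python
import numpy as np

def lyapunov_estimate(omega, alpha1, alpha2, steps=20000, seed=0):
    # Top Lyapunov exponent of the one-particle map z_{t+1} = M_t z_t
    # (p = g = 0, d = 1), with the combined random coupling
    # r = alpha1*R1 + alpha2*R2 and R1, R2 uniform in [0, 1].
    rng = np.random.default_rng(seed)
    z = np.array([1.0, 1.0])
    log_growth = 0.0
    for _ in range(steps):
        r = alpha1 * rng.random() + alpha2 * rng.random()
        M = np.array([[omega, -r],
                      [omega, 1.0 - r]])
        z = M @ z
        n = np.linalg.norm(z)
        log_growth += np.log(n)
        z /= n                        # renormalise to avoid overflow
    return log_growth / steps

# A negative estimate indicates convergence, a positive one divergence.
```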
\nFor the one-particle case we have necessarily $\\mathbf{p}=\\mathbf{g}$,\nsuch that shift invariance allows us to set both to zero, which leads us to the following\nis given by the stochastic-map formulation of the PSO dynamics (\\ref{eq:matrix_from}).\n\\begin{equation}\n\\mathbf{z}_{t+1}=M \\mathbf{z}_t \n\t\\label{eq:pso_expl-1}\n\\end{equation}\nExtending earlier approaches we will explicitly consider the randomness of the dynamics,\ni.e.~instead of averages over $\\mathbf{R}_1$ and $\\mathbf{R}_2$ we consider a random\ndynamical system with dynamical matrices $M$ chosen from the set\n\\begin{equation}\n\t{\\cal M}_{\\alpha,\\omega}=\\left\\{\n\\left(\\begin{array}{cc} \n\t\t\t\\omega \\mathbf{I}_d & -\\alpha \\mathbf{R} \\\\\n\t\t\t\\omega \\mathbf{I}_d & \\mathbf{I}_d-\\alpha \\mathbf{R} \n\t\\end{array}\\right)\\!,\\,\\,\n\t\\mathbf{R}_{ij}=0 \\mbox{ for } i\\ne j \\mbox{ and } \\mathbf{R}_{ii}\\in\\left[0,1\\right], \n\\right\\} \n\\label{eq:matrix_set}\n\\end{equation}\nwith $\\mathbf{R}$ being in both rows the same realisation of a random diagonal matrix \nthat combines the effects of $\\mathbf{R}_1$ and $\\mathbf{R}_2$ (\\ref{eq:PSOPositionUpdate}).\nThe parameter $\\alpha$ is the sum $\\alpha_1+\\alpha_2$ with $\\alpha_1,\\alpha_2\\ge0$ and $\\alpha >0$.\nAs the diagonal elements of $\\mathbf{R}_1$ and $\\mathbf{R}_2$ are \nuniformly distributed in $\\left[0,1\\right]$, the distribution of the random variable \n$\\mathbf{R}_{ii} = \\frac{\\alpha_1}{\\alpha} \\mathbf{R}_{1,ii} + \\frac{\\alpha_2}{\\alpha} \\mathbf{R}_{2,ii}$ \nin Eq.~\\ref{eq:pso_expl-1} is given by a convolution\nof two uniform random variables, namely\n\\begin{equation}\nP_{\\alpha_1,\\alpha_2}(r)=\\begin{cases}\n\t\\frac{\\alpha r}{\\max\\{\\alpha_1,\\alpha_2\\}}&\\mbox{if } 0\\le r\\le \\min\\{\\frac{\\alpha_1}{\\alpha},\\frac{\\alpha_2}{\\alpha}\\}\\\\\n\t\\frac{\\alpha}{\\max\\{\\alpha_1,\\alpha_2\\}} & \\mbox{if } 
\\min\\{\\frac{\\alpha_1}{\\alpha},\\frac{\\alpha_2}{\\alpha}\\}\\le r\\le \\max\\{\\frac{\\alpha_1}{\\alpha},\\frac{\\alpha_2}{\\alpha}\\}\\\\\n\t\\frac{\\alpha (1-r)}{\\max\\{\\alpha_1,\\alpha_2\\}} & \\mbox{if } \\max\\{\\frac{\\alpha_1}{\\alpha},\\frac{\\alpha_2}{\\alpha}\\}\\le r\\le 1\n\\end{cases}\n\\end{equation}\n\nSuppose now that the best position is scaled by a factor $\\kappa>0$. The finite-time approximation of the\nLyapunov exponent (see Eq.~\\ref{eq:lyapunov})\n\\begin{equation}\n\t\\lambda(t)=\\frac{1}{t} \\log \\left\\langle\\left\\| \\left(\\mathbf{x}_t,\\mathbf{v}_t\\right)\\right\\| \\right\\rangle\n\\end{equation}\nwill be changed by an amount of $\\frac{1}{t} \\log \\kappa $ by the scaling.\nAlthough this has no effect on the asymptotic behaviour, we will have to expect \nan effect on the stability of the swarm for finite times, which may be relevant for\npractical applications. For the same parameters, the swarm will be more stable\nif $\\kappa<1$ and less stable for $\\kappa>1$, \nprovided that the initial conditions are scaled in the same way. \nLikewise, if $\\|\\mathbf{p}\\|$ is increased,\nthen the critical contour will move inwards, see Fig.~\\ref{fig:PSOAverageValley_p}.\nNote that in this figure, the low number of iterations leads\nto a few erroneous trials at parameter pairs outside the outer contour, which have been omitted here.\nWe also do not consider the behaviour near $\\alpha=0$, which is complex but irrelevant for PSO.\nThe contour (\\ref{eq:implicite_a_w}) can be seen as the limit $\\kappa \\to 0$ such that\nonly an increase of $\\|\\mathbf{p}\\|$ is relevant for comparison with the theoretical\nstability result. When comparing the stability results with numerical simulations for\nreal optimisation problems, we will need to take into account the effects caused by differences between $\\mathbf{p}$ and $\\mathbf{g}$ in a multi-particle swarm with finite runtimes.\n\n\\begin{figure}[th]\n\\begin{center}\n\\includegraphics[width=0.65\\linewidth]{th2_and_emp_200_2000_20000_a.eps}\n\\end{center}\n\\vspace{-5mm}\n\\caption{\nBest parameter regions for 200 (blue), 2000 (green), and 20000 (magenta) iterations: For more\niterations the region shifts towards the critical line.\nCost averaged over 100 runs and 28 CEC benchmark functions. 
\nThe red (outer) curve represents the zero Lyapunov exponent for $N=1$, $d=1$, $\\alpha_{1}=\\alpha_{2}$.\n\\label{fig:PSOAverageValley}}\n\\end{figure}\n\n\\begin{figure}[th]\n\\begin{center}\n\\includegraphics[width=0.65\\linewidth,trim=0cm 3.5mm 0cm 0cm]{pso_p100.eps}\n\\end{center}\n\\vspace{-5mm}\n\\caption{\nFor $\\mathbf{p}\\ne\\mathbf{g}$ we define neutral stability as the equilibrium between\ndivergence and convergence. Convergence means here that the particle approaches the line\nconnecting $\\mathbf{p}$ and $\\mathbf{g}$.\nCurves are for a one-dimensional problem with $\\mathbf{p}=0.1$ and $\\mathbf{g}=0$ scaled (see \nSect.~\\ref{Personalvsglobal}) by $\\kappa=1$ (outer curve), $\\kappa=0.1$, and $\\kappa=0.04$ (inner curve).\nResults are for 200 iterations and averaged over 100000 repetitions. \n\\label{fig:PSOAverageValley_p}}\n\\end{figure}\n\n\n\\section{Optimisation of benchmark functions\\label{Simulations}}\n\nMetaheuristic algorithms are often tested in competition against benchmark functions designed to \npresent different problem space characteristics. \nThe 28 functions~\\cite{CEC2013} contain a mix of unimodal, basic multimodal and composite functions. \nThe domains of the functions in this test set are all defined to be $[-100, 100]^d$ \nwhere $d$ is the dimensionality of the problem. Particles were initialised within the same domain.\nWe use 10-dimensional problems throughout. \nOur implementation of PSO performed no spatial or velocity clamping. In all trials a swarm of 25 particles was used. \nWe repeated the algorithm 100 times, on each occasion allowing 200, 2000, or 20000 iterations to pass before \nrecording the best solution found by the swarm. For the competition, 50000 fitness evaluations were allowed, which\ncorresponds to 2000 iterations with 25 particles. 
Other iteration numbers were included for comparison.\nThis protocol was carried out for pairs of $\\omega\\in[-1.1,1.1]$ and $\\alpha\\in[0,5]$. \nThis was repeated for all 28 functions. \nThe averaged solution costs as a function of the two parameters \nshowed curved valleys similar to the one in Fig.~\\ref{fig:PSOPerformance} for all problems. \nFor each function we obtain different best values along (or near) the theoretical curve \n(\\ref{eq:implicite_a_w}). There appears to be no\npreferable location within the valley. \nSome individual functions yield their best performance \nnear $\\omega=1$ but not near $\\omega=0$, although the global average performance \nover all test functions is better in the valley near $\\omega=0$ than near $\\omega=1$, see\nFig.~\\ref{fig:PSOAverageValley}.\n\nAt medium values of $\\omega$ \nthe difference between the analytical solutions for the cases \n$\\alpha_1=\\alpha_2$ and $\\alpha_1=0$ is strongest, see Fig.~\\ref{fig:PSOAverageValley}. \nIn simulations this shows to a lesser extent, thus revealing a shortcoming of the one-particle\napproximation. Because in the multi-particle case $\\mathbf{p}$ and $\\mathbf{g}$ are often\ndifferent, the resulting vector will have a smaller norm than in the one-particle case, where \n$\\mathbf{p}=\\mathbf{g}$. The case $\\mathbf{p}\\ne\\mathbf{g}$ violates the assumption of\nthe theory that the dynamics can be described based on unit vectors. 
While a particle far away from\nboth $\\mathbf{p}$ and $\\mathbf{g}$ will behave as predicted from the one-particle case,\nat length scales smaller than $\\Vert \\mathbf{p}-\\mathbf{g}\\Vert$ the retractive forces will\ntend to be reduced such that the inertia becomes more effective and the particle is locally less\nstable, which shows up numerically as optimal parameter values that are smaller than predicted.\n\n\n\\section{Discussion\\label{discussion}}\n\n\\subsection{Relevance of criticality}\n\nOur analytical approach predicts a locus of $\\alpha$ and $\\omega$ pairings that maintain the critical\nbehaviour of the PSO swarm.\nOutside this line the swarm will diverge unless steps are taken to constrain it. Inside, the swarm \nwill eventually converge to a single solution.\nIn order to locate a solution within the search space, the swarm needs to converge at some point, so the line represents an upper bound on the exploration-exploitation mix that a swarm manifests. \nFor parameters on the critical line, fluctuations are still arbitrarily large. Therefore, subcritical \nparameter values can be preferable if the settling time is of the same order as the scheduled \nruntime of the algorithm. If, in addition, a typical length scale of the problem is known, then the\nfinite standard deviation of the particles in the stable parameter region \ncan be used to decide about the distance of the \nparameter values from the critical curve. These dynamical quantities can be approximately set, \nbased on the theory presented here, such that a precise control of the behaviour of the algorithm is in\nprinciple possible. \n\nThe observation of the distribution of empirically optimal parameter values along the critical curve\nconfirms the expectation that critical or near-critical behaviour is the main reason for the success\nof the algorithm. 
Critical fluctuations are a plausible tool in search problems if, apart from\ncertain smoothness assumptions, nothing is known about the cost landscape: The majority of excursions\nwill exploit the smoothness of the cost function by local search, whereas the fat tails of the \ndistribution allow the particles to escape from local minima.\n\n\n\n\\subsection{Switching dynamics at discovery of better solutions\\label{sub:The-role-of}}\n\nEq.~\\ref{eq:matrix_from} shows that the discovery of a better solution\naffects only the constant terms of the linear dynamics of a particle, whereas its\ndynamical properties are governed by the linear coefficient matrices. \nHowever, in the time step after a particle has found a new solution the corresponding force term \nin the dynamics is zero (see Eq.~\\ref{eq:PSOPositionUpdate}) such that the particle dynamics \nslows down compared to the theoretical solution, which assumes a finite\ndistance from the best position at all (finite) times. As this usually affects only one particle\nat a time and because new discoveries tend to become rarer over time, this effect will be small\nin the asymptotic dynamics, although it could justify the empirical optimality of parameters \nin the unstable region for some test cases.\n\nThe question is, nevertheless, how often these changes occur. A weakly\nconverging swarm can still produce good results if it often discovers\nbetter solutions by means of the fluctuations it performs before settling\ninto the current best position. 
For cost functions that are not `deceptive',\ni.e.~where local optima tend to be near better optima, parameter\nvalues far inside the critical contour (see Fig.~\\ref{fig:Theory_and_Numerical})\nmay give good results, while in other cases more exploration is needed.\n\n\\subsection{The role of personal best and global best\\label{current_best}\\label{runtime}}\n\nA numerical scan of the $(\\alpha_1,\\alpha_2)$ plane shows \na valley of good fitness values, which, at small fixed positive $\\omega$, is roughly linear and described \nby the relation $\\alpha_1+\\alpha_2= \\mbox{const}$, i.e.~only the joint parameter\n$\\alpha=\\alpha_1+\\alpha_2$ matters.\nFor large $\\omega$, and accordingly small predicted optimal $\\alpha$ values, \nthe valley is less straight. This may be \nbecause the effect of the known solutions is \nrelatively weak, so the interaction of the two components becomes more important.\nIn other words, if the movement of the particles is mainly due to inertia, \nthen the relation between the global and local best is non-trivial, \nwhile at low inertia the particles can adjust their $\\mathbf{p}$ vectors\nquickly towards the $\\mathbf{g}$ vector such that both terms become interchangeable.\n\nFinally, we should mention that more particles, longer runtimes, as well as lower search-space\ndimensions increase the potential for \nexploration. They all lead to the empirically determined optimal parameters being closer \nto the critical curve.\n\n\n\\section{Conclusion}\n\nPSO is a widely used optimisation scheme which is theoretically not well understood. Existing theory \nconcentrates on a deterministic version of the algorithm which does not possess useful exploration\ncapabilities. 
We have studied the algorithm by means of a product of random matrices which allows us to\npredict useful parameter ranges and may allow for more precise settings \nif a typical length scale of the problem is known.\nA weakness of the current approach is that it focuses on the standard PSO~\\cite{kennedy1995particle}, which\nis known to include biases~\\cite{clerc2006confinments,spears2010biases} \nthat are not necessarily justifiable,\nand to be outperformed on benchmark sets and in practical applications by many of the existing PSO variants.\nSimilar analyses are certainly possible and are expected to be carried out for some of the variants, \neven though the field of metaheuristic search is often portrayed as largely inert to theoretical advances.\nIf the dynamics of particle swarms is better understood, the algorithms may become useful as\nefficient particle filters which have many applications beyond heuristic optimisation.\n\n\\subsection*{Acknowledgments}\n\nThis work was supported by the Engineering and Physical Sciences Research Council (EPSRC), grant number EP\/K503034\/1.\n\n\\bibliographystyle{unsrt}\n\n\\section{Introduction}\n\nLow-density parity-check (LDPC) convolutional codes \\cite{JimenezLDPCCC}, also known as spatially coupled LDPC (SC-LDPC) codes \\cite{Kudekar_ThresholdSaturation}, can be obtained from a sequence of individual LDPC block codes by distributing the edges of their Tanner graphs over several adjacent blocks \\cite{LentmaierTransITOct2010}.\nThe resulting spatially coupled codes exhibit a \\emph{threshold saturation} phenomenon, which has attracted a lot of interest in the past few years: The threshold of an iterative belief propagation (BP) decoder, obtained by density evolution (DE), can be improved to that of the optimal maximum-a-posteriori (MAP) decoder, for properly chosen parameters.\nIt follows from threshold saturation that it is possible to achieve capacity by 
spatial coupling of simple regular LDPC codes, which show a significant gap between BP and MAP threshold in the uncoupled case.\nA first analytical proof of threshold saturation was given in \\cite{Kudekar_ThresholdSaturation} for the binary erasure channel (BEC), considering a specific ensemble with uniform random coupling. An alternative proof based on potential functions was then presented in \\cite{Yedla2012,Yedla2014,KudekarWaveLike}, which was extended from scalar recursions to vector recursions in \\cite{Yedla2012vector}.\nBy means of vector recursions, the proof of threshold saturation can be extended to spatially coupled ensembles with structure, such as SC-LDPC codes based on protographs \\cite{Mitchell_ProtographSCLDPC}.\n\nThe concept of spatial coupling is not limited to LDPC codes.\nAlso codes on graphs with stronger component codes can be considered.\nIn this case the structure of the component codes has to be taken into account in a DE analysis.\nInstead of a simple check node update, a constraint node update within BP decoding of a generalized LDPC code involves an a-posteriori probability (APP) decoder applied to the associated component encoder.\nIn general, the input\/output transfer functions of the APP decoder are multi-dimensional because the output bits of the component encoder have different protection.\nFor the BEC, however, it is possible to analytically derive explicit transfer functions \\cite{AshikhminEXIT} by means of a Markov chain analysis of the decoder metric values in a trellis representation of the considered code \\cite{LentDE_PG-LDPC}.\nThis technique was applied in \\cite{LentDE_BBC, Lent_DE_SCGLDPC} to perform a DE analysis of braided block codes (BBCs) \\cite{JimenezBBC} and other spatially coupled generalized LDPC codes.\nThreshold saturation could be observed numerically in all the considered cases.\nBBCs can be seen as a spatially coupled version of product codes, and are closely related to staircase codes 
\\cite{Smith12}, which have been proposed for high-speed optical communications.\nIt was demonstrated in \\cite{PfisterISIT12,PfisterGC13} that BBCs show excellent performance even with the iterative hard decision decoding that is proposed for such scenarios.\nThe recently presented spatially coupled split-component codes \\cite{Truhachev_SplitComponent} demonstrate the connections between BBCs and staircase codes.\n\nIn this paper, we study codes on graphs whose constraint nodes represent convolutional codes \\cite{Wiberg95, WibergPhD96,Loeliger}.\nWe denote such codes as turbo-like codes (TCs).\nWe consider three particular concatenated convolutional coding schemes: Parallel concatenated codes (PCCs) \\cite{BerrouTC}, serially concatenated codes (SCCs) \\cite{Benedetto98Serial}, and braided convolutional codes (BCCs) \\cite{ZhangBCC}.\nOur aim is to investigate the impact of spatial coupling on the BP threshold of these TCs.\nFor this purpose we introduce some special block-wise spatially coupled ensembles of PCCs (SC-PCCs) and SCCs (SC-SCCs) \\cite{Moloudi_SCTurbo}.\nIn the case of BCCs, which are inherently spatially coupled, we consider the original block-wise ensemble from \\cite{ZhangBCC,Moloudi_DEBCC} and generalize it to larger coupling memories.\nFurthermore, we introduce a novel BCC ensemble in which not only the parity bits but also the information bits are coupled over several time instants \\cite{Moloudi_SPCOM14}.\n\n\nFor these spatially coupled turbo-like codes (SC-TCs), we perform a threshold analysis for the BEC analogously to \\cite{LentmaierTransITOct2010,LentDE_BBC, Lent_DE_SCGLDPC}.\nWe derive their exact DE equations from the transfer functions of the convolutional component decoders \\cite{Kurkoski, tenBrinkEXITConv}, whose computation is similar to that for generalized LDPC codes in \\cite{LentDE_PG-LDPC}.\nIn order to evaluate and compare the ensembles at different rates, we also derive DE equations for the punctured ensembles.\nUsing 
these equations, we compute BP thresholds for both coupled and uncoupled TCs \\cite{Moloudi_ISTW14} and compare them with the corresponding MAP thresholds \\cite{Measson2009,Measson_Turbo}.\nOur numerical results indicate that threshold saturation occurs if the coupling memory is chosen sufficiently large.\nThe improvement of the BP threshold is especially significant for SCCs and BCCs, whose uncoupled ensembles suffer from a poor BP threshold.\nWe then consider the construction of families of rate-compatible SC-TCs which achieve close-to-capacity performance for a wide range of code rates.\n\nMotivated by the numerical results, we prove threshold saturation analytically.\nWe show that, under a few assumptions on the ensembles of uncoupled TCs, in particular considering identical component encoders, it is possible to rewrite their DE recursions in a form that corresponds to the recursion of a scalar admissible system.\nThis representation allows us to apply the proof technique based on potential functions for scalar admissible systems proposed in \\cite{Yedla2012,Yedla2014}, which simplifies the analysis.\nFor the general case, the analysis is significantly more complicated and requires the coupled vector recursion framework of \\cite{Yedla2012vector}.\nFinally, for the example of PCCs, we generalize the proof to non-symmetric ensembles with different component encoders by using the framework in \\cite{Yedla2012vector}.\n\nThe remainder of the paper is organized as follows. \nIn Section~\\ref{sec:CompactGraphCC}, we introduce a compact graph representation for the trellis of a convolutional code that is amenable to a DE analysis.\nFurthermore, we derive explicit input\/output transfer functions of the BCJR decoder for transmission over the BEC.\nThen, in Section~\\ref{sec:CompactGraphTCs}, we describe uncoupled ensembles of PCCs, SCCs and BCCs by means of the compact graph representation. 
SC-TCs, their spatially coupled counterparts, are introduced in Section~\\ref{sec:SCTCs}.\nIn Section~\\ref{sec:DE}, we derive exact DE equations for uncoupled and coupled ensembles of TCs.\nIn Section~\\ref{sec:RandomP}, we consider random puncturing, derive the corresponding DE equations, and analyze SC-TCs as a family of rate-compatible codes.\nNumerical results are presented and discussed in Section~\\ref{Sec6}.\nThreshold saturation, which is observed numerically in the results section, is proved analytically in Section \\ref{Sec7}. \nFinally, the paper is concluded in Section~\\ref{Sec8}.\n\n\\section{Compact Graph Representation and Transfer Functions of Convolutional Codes}\n\\label{sec:CompactGraphCC}\nIn this section, we introduce a graphical representation of a convolutional code, which can be seen as a compact form of its corresponding factor graph \\cite{Loeliger}. \nThis compact graph representation makes the illustration of SC-TCs simpler and is convenient for the DE analysis. \nWe also generalize the method in \\cite{Kurkoski, tenBrinkEXITConv} to derive explicit input\/output transfer functions of the BCJR decoder of rate-$k\/n$ convolutional codes on the BEC, which will be used in Section~\\ref{sec:DE} to derive the exact DE for SC-TCs.\n\n\n\\subsection{Compact Graph Representation}\nConsider a rate-$k\/n$ systematic convolutional encoder of code length $nN$ bits, i.e., its corresponding trellis has $N$ trellis sections. At each time instant $\\tau=1,\\ldots,N$, corresponding to a trellis section, the encoder encodes $k$ input bits and generates $n-k$ parity bits.\nLet $\\bs{u}^{(i)}=(u_{1}^{(i)}, u_{2}^{(i)}, \\dots, u_{N}^{(i)})$, $i=1,\\dots,k$, and $\\bs{v}_{\\text{p}}^{(i)}=(v_{\\text{p},1}^{(i)}, v_{\\text{p},2}^{(i)}, \\dots, v_{\\text{p},N}^{(i)})$, $i=1,\\dots,n-k$, denote the $k$ input sequences and the $n-k$ parity sequences, respectively. 
We also denote by $\\bs{v}^{(i)}=(v_{1}^{(i)}, v_{2}^{(i)}, \\dots, v_{N}^{(i)})$, $i=1,\\dots,n$, the $i$th code sequence, with $\\bs{v} ^{(i)}=\\bs{u}^{(i)}$ for $i=1,\\dots,k$ and $\\bs{v} ^{(i)}=\\bs{v}_{\\text{p}}^{(i-k)}$ for $i=k+1,\\dots,n$. The conventional factor graph of a convolutional encoder is shown in Fig.~\\ref{factorgraph}(a), where black circles represent code bits, each black square corresponds to the code constraints (allowed combinations of input state, input bits, output bits, and output state) of one trellis section, and the double circles are (hidden) state variable nodes. \n\nFor convenience, we will represent a convolutional encoder with the more compact graph representation depicted in Fig.~\\ref{factorgraph}(b). \nIn this compact graph representation, each input sequence $\\bs{u}^{(i)}$ and each parity sequence $\\bs{v}^{(i)}_{\\text{p}}$ is represented by a single black circle, referred to as variable node, i.e., each circle represents $N$ bits. Furthermore, the code trellis is represented by a single empty square, referred to as factor node. The factor node is labeled by the length $N$ of the trellis. \n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{Fig1.pdf}\n\\caption{(a) Factor graph representation of a rate-$k\/n$ systematic convolutional code. 
(b) Compact graph representation of the same code.}\n\\label{factorgraph}\n\\end{figure} \nEach node in the compact graph represents a sequence of nodes belonging to the same type, similar to the nodes in a protograph of an LDPC code.\nVariable nodes in the original factor graph may represent different bit values, even if they belong to the same type in the compact graph.\nHowever, assuming a tailbiting trellis, the probability distribution of these values after decoding will be equal for all variables that correspond to the same node type.\nAs a consequence, a DE analysis can be performed in the compact graph, independently of the trellis length $N$, which plays a similar role to the lifting factor of a protograph ensemble.\nIf a terminated convolutional encoder, which starts and ends in the zero state, is used instead, the bits that are close to the start and end of the trellis will have a slightly stronger protection. Since this effect will not have a significant impact on the performance, we will neglect this throughout this paper and assume equal output distributions for all bits of the trellis, even when termination is used.\n\n\\subsection{Transfer Function of the BCJR Decoder of a Convolutional Code} \\label{TransferFunction}\n\nConsider the BCJR decoder of a memory-$\\nu$, rate-$k\/n$ convolutional encoder and transmission over the BEC. \nWithout loss of generality, we restrict ourselves within this paper to encoders with $k=1$ or $n-k=1$, which can be implemented with $2^\\nu$ states in controller canonical form or observer canonical form, respectively. \nWe would like to characterize the transfer function between the input erasure probabilities (i.e., prior to decoding) and output erasure probabilities (i.e., after decoding) on both the input bits and the output bits of the convolutional encoder. 
Note that the erasure probabilities at the input of the decoder depend on both the channel erasure probability and the a-priori erasure probabilities on systematic and parity bits (provided, for example, by another decoder). Thus, in the more general case, we consider non-equal erasure probabilities at the input of the decoder.\n\nConsider the extrinsic erasure probability of the $l$th code bit, $l=1,2, \\dots, n$, which is the erasure probability of the $l$th code bit when it is estimated based on the other code bits\\footnote{Without loss of generality we assume that the first $k$ bits are the systematic bits.}. This extrinsic erasure probability, at the output of the decoder, is denoted by $p_{l}^{\\text {ext}}$. The probabilities $p_{l}^{\\text {ext}}$ depend on the erasure probabilities of all code bits (systematic and parity) at the input of the decoder, \n\\begin{align}\n\\label{eq:Transfer1}\np_ {l}^{\\text {ext}}=f_l(p_1,p_2, \\dots,p_n),\n\\end{align}\nwhere $p_l$ is the erasure probability of the $l$th code bit at the input of the decoder and $f_l(p_1,p_2, \\dots,p_n)$ is the transfer function of the BCJR decoder for the $l$th code bit. For notational simplicity, we will often omit the argument of $f_l(p_1,p_2, \\dots,p_n)$ and write simply $f_l$.\n\n\nLet $\\bs{r}^{(i)}=(r_{1}^{(i)}, r_{2}^{(i)}, \\dots, r_{N}^{(i)})$,\n$i=1,\\dots,n$, be the vectors of received symbols at the output of the channel, with $r_{j}^{(i)}\\in\\{0,1,?\\}$, where $?$ denotes an erasure. 
The branch metric of the trellis edge departing from state $\\sigma'$ at time $\\tau-1$ and ending in state $\\sigma$ at time $\\tau$, $\\tau=1,\\dots,N$, is\n\\begin{align}\n\\gamma_{\\tau}(\\sigma',\\sigma)=\\prod_{l=1}^{n} p\\left(r_{\\tau}^{(l)}\\; \\big| \\; v_{\\tau}^{(l)}\\right)\\cdot p\\left(v_{\\tau}^{(l)}\\right),\n\\end{align}\nwhere $p\\big(v_{\\tau}^{(l)}\\big)$ is the a-priori probability on symbol $v_{\\tau}^{(l)}$.\n\nThe forward and backward metrics of the BCJR decoder are\n\\begin{align}\n&\\alpha_{\\tau}(\\sigma)=\\sum_{\\sigma'}\n\\gamma_{\\tau}(\\sigma',\\sigma) \\cdot \\alpha_{\\tau-1}(\\sigma')\\\\\n&\\beta_{\\tau-1}(\\sigma')=\\sum _{\\sigma}\\gamma_{\\tau}(\\sigma',\\sigma)\n\\cdot \\beta_{\\tau}(\\sigma).\n\\end{align}\n\nFinally, the extrinsic output likelihood ratio is given by \n\\begin{align*}\n&L^{(l)}_{\\text{out},\\tau}=\\\\\n&\\frac{\\sum\\limits_{(\\sigma',\\sigma):v_{\\tau}^{(l)}=0}\\alpha_{\\tau-1}(\\sigma')\\cdot\n\\gamma_{\\tau}(\\sigma',\\sigma)\\cdot\n\\beta_{\\tau}(\\sigma)}{\\sum\\limits_{(\\sigma',\\sigma):v_{\\tau}^{(l)}=1}\n\\alpha_{\\tau-1}(\\sigma')\\cdot \\gamma_{\\tau}(\\sigma',\\sigma)\\cdot \\beta_{\\tau}(\\sigma)}\\cdot\\frac{p\\left(v_{\\tau}^{(l)}=1\\right)}{p\\left(v_{\\tau}^{(l)}=0\\right)}.\n\\end{align*}\n\nLet the $2^\\nu$ trellis states be $s_1,s_2,\\ldots,s_{2^\\nu}$. Then, we define the forward and backward metric vectors as $\\boldsymbol{\\alpha}_{\\tau}=(\\alpha_{\\tau}(s_1),\\ldots,\\alpha_{\\tau}(s_{2^\\nu}))$ and $\\boldsymbol{\\beta}_{\\tau}=(\\beta_{\\tau}(s_1),\\ldots,\\beta_{\\tau}(s_{2^\\nu}))$, respectively. \nFor transmission on the BEC, the nonzero entries of vectors $\\boldsymbol{\\alpha}_{\\tau}$ and $\\boldsymbol{\\beta}_{\\tau}$ are all equal. Thus, we can normalize them to $1$.\n\nWe consider transmission of the all-zero codeword. 
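As a small illustration of the recursions above, the following sketch runs the forward recursion on the BEC for a toy 2-state accumulator trellis ($v_t = v_{t-1} \oplus u_t$, state equal to $v_t$); the trellis and the received sequence are hypothetical stand-ins for the rate-$k/n$ encoders in the text, and the nonzero metric entries are normalized to 1 as described above.

```python
# Sketch: BCJR forward recursion on the BEC for a toy 2-state
# accumulator trellis (hypothetical stand-in for the text's trellises).

def branch_metric(r, v):
    """BEC likelihood (up to scaling): 1 if consistent with r, else 0."""
    return 1 if r == '?' or r == v else 0

def forward_recursion(received):
    """received: list of (r_u, r_v) pairs with symbols in {0, 1, '?'}."""
    alpha = [1, 0]  # start in the zero state
    trajectory = [tuple(alpha)]
    for r_u, r_v in received:
        new = [0, 0]
        for s_prev in (0, 1):
            if alpha[s_prev] == 0:
                continue
            for u in (0, 1):
                s = s_prev ^ u          # next state of the accumulator
                v = s                   # parity output equals the state
                g = branch_metric(r_u, u) * branch_metric(r_v, v)
                new[s] += g * alpha[s_prev]
        # on the BEC all nonzero entries are equal: normalize them to 1
        alpha = [1 if x > 0 else 0 for x in new]
        trajectory.append(tuple(alpha))
    return trajectory

# All-zero codeword with some erasures: alpha only visits the finite set
# {(1,0), (1,1)}, which is what the Markov-chain analysis exploits.
print(forward_recursion([(0, 0), ('?', '?'), (0, '?'), ('?', 0)]))
```

The point of the sketch is that, after normalization, the metric vector takes values in a finite set, so its evolution over trellis sections is a finite Markov chain.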
The sets of values that vectors $\\boldsymbol{\\alpha}_{\\tau}$ and\n $\\boldsymbol{\\beta}_{\\tau}$ can take on are denoted by $\\mathcal{M}_{\\alpha}=\\{\\boldsymbol{m}_{\\alpha}^{(1)},\\ldots,\\boldsymbol{m}_{\\alpha}^{(|\\mathcal{M}_{\\alpha}|)}\\}$ and $\\mathcal{M}_{\\beta}=\\{\\boldsymbol{m}_{\\beta}^{(1)},\\ldots,\\boldsymbol{m}_{\\beta}^{(|\\mathcal{M}_{\\beta}|)}\\}$, respectively. It is important to remark that these sets are finite. Furthermore, the sequence $\\dots,\\boldsymbol{\\alpha}_{\\tau-1},\\boldsymbol{\\alpha}_{\\tau},\\boldsymbol{\\alpha}_{\\tau+1},\\dots$\nforms a Markov chain, which can be properly described by a probability transition matrix, denoted by $\\boldsymbol{M}_{\\alpha}$.\nThe $(i,j)$ entry of $\\boldsymbol{M}_{\\alpha}$ is the probability of transition from state\n$\\boldsymbol{m}_{\\alpha}^{(i)}$ to state $\\boldsymbol{m}_{\\alpha}^{(j)}$. Denote the steady\nstate distribution vector of the Markov chain by $\\boldsymbol{\\pi}_{\\alpha}$, which can be computed as the solution to\n\\begin{equation}\n\\boldsymbol{\\pi}_{\\alpha}=\\boldsymbol{M}_{\\alpha}\\cdot \\boldsymbol{\\pi}_{\\alpha}. 
\n\\end{equation}\nSimilarly, we can define the transition matrix for the sequence of backward metrics $\\dots,\\boldsymbol{\\beta}_{\\tau+1},\\boldsymbol{\\beta}_{\\tau},$ $\\boldsymbol{\\beta}_{\\tau-1},\\dots$, denoted by $\\boldsymbol{M}_{\\beta}$, and compute the steady state distribution vector $\\bs{\\pi}_{\\beta}$.\n\n\\begin{example}\nConsider the rate-$2\/3$, $4$-state convolutional encoder with generator\nmatrix \n\\begin{equation*}\n\\boldsymbol{G} (D)=\\left( \\begin{array}{ccc}1&0&\\frac{1}{1+D+D^2}\\\\0&1&\\frac{1+D^2}{1+D+D^2}\\end{array}\\right).\n\\end{equation*}\n$\\mathcal{M}_{\\alpha}$\nand $\\mathcal{M}_{\\beta}$ are equal and have cardinality $5$,\n\\begin{align*}\n&\\mathcal{M}_{\\alpha}=\\mathcal{M}_{\\beta}=\\\\\n&\\{(1,0,0,0),(1,1,0,0),(1,0,0,1),(1,0,1,0),(1,1,1,1)\\}.\n\\end{align*}\n\nConsider equal erasure probability for all code bits at the input of the decoder, i.e., $p_1=p_2=p_3 = p$. Then,\n\\begin{align*}\n\\resizebox{\\hsize}{!}{$\n\\boldsymbol{M}_{\\alpha}=\\left[\\begin{array}{ccccc}\n(1-p)^2(2p+1)&(1-p)^2&(1-p)^3&0&0\\\\\np^2(1-p)&0&p(1-p)^2&p^3-2p^2+1&(1-p)^2\\\\\np^2(1-p)&p(1-p)&p(1-p)^2&0&0\\\\\np^2(1-p)&p(1-p)&p(1-p)^2&0&0\\\\\np^3&p^2&p^2(3-2p)&p^2(2-p)&p(2-p)\n\\end{array}\\right] \\; .\n$ \n}\n\\end{align*} \\hfill $\\triangle$\n\\end{example}\n\\vspace{3mm}\n\nIn order to compute the erasure probability of the $l$th bit at the output of the decoder, we have to\ncompute the probability of $L^{(l)}_{\\text{out},\\tau}=1$. 
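The steady-state vectors $\boldsymbol{\pi}_{\alpha}$ and $\boldsymbol{\pi}_{\beta}$ used in this computation can be obtained numerically as the eigenvector of the corresponding transition matrix for eigenvalue 1. A minimal sketch, where the small column-stochastic matrix below is a hypothetical stand-in for $\boldsymbol{M}_{\alpha}$ evaluated at a fixed erasure probability $p$:

```python
import numpy as np

# Sketch: solve pi = M . pi for the steady-state distribution of the
# metric Markov chain. M is a hypothetical 3x3 column-stochastic matrix
# standing in for M_alpha at a fixed erasure probability p.
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.4],
              [0.1, 0.3, 0.5]])

# eigenvector of M for eigenvalue 1, rescaled to a probability vector
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()

assert np.allclose(M @ pi, pi)  # pi is indeed a fixed point of M
print(pi)
```

For the actual ensembles, the same computation is repeated for every value of $p$ that appears in the density evolution recursion.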
Define the \nmatrices $\\boldsymbol{T}_{l}$, $l=1,2,\\dots,n$, where the $(i,j)$ entry of $\\boldsymbol{T}_{l}$ is computed as\n\\[\nT_{l}(i,j)=p\\left(L^{(l)}_{\\text{out},\\tau}=1\\; | \\;\\boldsymbol{\\alpha}_{\\tau}=\\boldsymbol{m}_{\\alpha}^{(i)},\\boldsymbol{\\beta}_{\\tau+1}=\\boldsymbol{m}_{\\beta}^{(j)}\\right).\n\\]\nThen, the extrinsic erasure probability of the $l$th output, $p_{l}^{\\text{ext}}$, introduced in~\\eqref{eq:Transfer1}, is obtained as\n\\begin{align}\np_{l}^{\\text {ext}}&=f_l(p_1,p_2,\\dots,p_n)=p\\left(L^{(l)}_{\\text{out},\\tau}=1\\right) \\nonumber\\\\\n&=\\sum^{|\\mathcal{M}_{\\alpha}|}_{i=1}\\sum^{|\\mathcal{M}_{\\beta}|}_{j=1}p\\left(\nL^{(l)}_{\\text{out},\\tau}=1\\; | \\; \\boldsymbol{\\alpha}_{\\tau}=\\boldsymbol{m}_{\\alpha}^{(i)},\\boldsymbol{\\beta}_{\\tau+1}=\\boldsymbol{m}_{\\beta}^{(j)}\\right)\\nonumber\\\\ \n&\\quad\\quad\\quad\\quad\\quad \\cdot p\\left(\\boldsymbol{\\alpha}_{\\tau}=\\boldsymbol{m}_{\\alpha}^{(i)}\\right)\\cdot p\\left(\\boldsymbol{\\beta}_{\\tau+1}=\\boldsymbol{m}_{\\beta}^{(j)}\\right)\\nonumber\\\\\n& =\\boldsymbol{\\pi}_{\\alpha} \\cdot\\boldsymbol{T}_{l}\\cdot \\boldsymbol{\\pi}_{\\beta} . \n\\end{align}\n\n\\begin{example}\nConsider the rate-$2\/3$ convolutional encoder with generator matrix\n\\begin{equation*}\n\\boldsymbol{G} (D)=\\left( \\begin{array}{ccc}1&0&\\frac{1}{1+D}\\\\0&1&\\frac{D}{1+D}\\end{array}\\right).\n\\end{equation*}\n\nAssuming $p_1=p_2=p_3\\triangleq p$, the transfer functions for the corresponding decoder are\n\\[\nf_1=f_2=\\frac{p(p^5-4p^4+6p^3-5p^2+2p+1)}{p^6-4p^5+6p^4-6p^3+5p^2-2p+1},\n\\]\n\\[\nf_3=\\frac{p^2(p^2-4p+4)}{p^6-4p^5+6p^4-6p^3+5p^2-2p+1}.\n\\] \\hfill $\\triangle$\n\\end{example}\n\n\\begin{lemma}\n\\label{Lemma1}\nConsider a terminated convolutional encoder where all distinct input\nsequences have distinct encoded sequences. 
\nFor such a system, the transfer function $f(p_1,p_2,\\dots,p_n)$ of a BCJR decoder with input erasure probabilities $p_1,p_2,\\dots,p_n$, or any convex combination of such transfer functions, is increasing in all its arguments.\n\\end{lemma}\n\\begin{IEEEproof}\nWe prove the statement by contradiction. Recall that the BCJR decoder\nis an optimal APP decoder. \nNow, consider the transmission of the same\ncodeword over two channels, called channels 1 and 2. The erasure probabilities of the $i$th bit\nat the input of the decoder are denoted by $p_i^{(1)}$ and $p_i^{(2)}$ for\ntransmission over channel 1 and 2, respectively. These erasure probabilities are equal for all\n$i=1,\\ldots,n$ except for the $j$th bit, for which\n$p_j^{(1)} < p_j^{(2)}=1$, i.e., the $j$th bit is punctured in channel 2. Assume, for the sake of contradiction, that the transfer function is not increasing in its $j$th argument, i.e.,\n\\begin{align}\n\\label{contradiction}\nf(p_1^{(1)},\\dots,p_j^{(1)},\\ldots,p_n^{(1)})\\geq f(p_1^{(2)},\\dots,p_j^{(2)},\\ldots,p_n^{(2)}).\n\\end{align}\nNow, puncture the $j$th bit of the codeword transmitted over channel 1, i.e., increase its erasure probability at the input of the decoder to $p_{j,\\text{punc}}^{(1)}=1$. Since the BCJR decoder is an optimal APP decoder and, by assumption, all distinct input sequences have distinct encoded sequences, removing the information carried by the $j$th bit strictly increases the erasure probabilities at the output of the decoder,\n\\begin{align}\n\\label{eq:Contr2}\nf(p_1^{(1)},\\dots,p_{j,\\text{punc}}^{(1)},\\ldots,p_n^{(1)})>f(p_1^{(1)},\\dots,p_j^{(1)},\\ldots,p_n^{(1)}).\n\\end{align}\nSince after puncturing $p_i^{(1)}$ and $p_i^{(2)}$ are equal for all $i$, then \n$f(p_1^{(1)},\\dots,p_{j,\\text{punc}}^{(1)},\\ldots,p_n^{(1)}) =f(p_1^{(2)},\\dots,p_j^{(2)},\\ldots,p_n^{(2)})$. Then, we can rewrite the inequality \\eqref{eq:Contr2} as\n\\begin{align}\n\\label{eq:Contr3}\nf(p_1^{(2)},\\ldots,p_j^{(2)},\\ldots,p_n^{(2)})&>f(p_1^{(1)},\\dots,p_j^{(1)},\\ldots,p_n^{(1)}).\n\\end{align}\nHowever, the inequality \\eqref{eq:Contr3} is in contradiction with \\eqref{contradiction}.\n\\end{IEEEproof}\n\n\\section{Compact Graph Representation of\\\\ Uncoupled Turbo-like Codes}\n\\label{sec:CompactGraphTCs}\n\n\nIn this section, we describe PCCs, SCCs and BCCs using the compact graph representation introduced in the previous section. In Section~\\ref{sec:SCTCs} we then introduce the corresponding spatially coupled ensembles.\n\n\\subsection{Parallel Concatenated Codes}\n\nWe consider a rate $R=1\/3$ PCC built from two rate-$1\/2$ recursive systematic convolutional encoders, referred to as the upper and lower component encoder. 
\nIts conventional factor graph is shown in Fig.~\\ref{Uncoupled}(a), where $\\Pi$ denotes the permutation.\nThe trellises corresponding to the upper and lower encoders are denoted by $\\text{T}^{\\text{U}}$ and $\\text{T}^{\\text{L}}$, respectively. The information sequence $\\bs{u}$, of length $N$ bits, and a reordered copy are encoded by the upper and lower encoder, respectively, to produce the parity sequences $\\bs{v}^{\\text{U}}$ and $\\bs{v}^{\\text{L}}$. The code sequence is denoted by $\\bs{v}=(\\bs{u},\\bs{v}^{\\text{U}},\\bs{v}^{\\text{L}})$. The compact graph representation of the PCC is shown in Fig.~\\ref{Uncoupled}(b), where\neach of the sequences $\\bs{u}$, $\\bs{v}^{\\text{U}}$ and $\\bs{v}^{\\text{L}}$ is represented by a single variable node and the trellises are replaced by factor nodes $\\text{T}^{\\text{U}}$ and $\\text{T}^{\\text{L}}$ (cf. Fig.~\\ref{factorgraph}).\nIn order to emphasize that a reordered copy of the input sequence is used in $\\text{T}^{\\text{L}}$, the permutation is depicted by a line that crosses the edge which connects $\\bs{u}$ to $\\text{T}^{\\text{L}}$.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{Fig2.pdf}\n\\caption{(a) Conventional factor graph of a PCC. Compact graph representation of a (b) PCC, (c) SCC, (d) BCC.}\n\\label{Uncoupled}\n\\end{figure}\n\n\\subsection{Serially Concatenated Codes}\n\nWe consider a rate $R=1\/4$ SCC built from the serial concatenation of two rate-$1\/2$ recursive systematic component encoders, referred to as the outer and inner component encoder. Its compact graph representation is shown in Fig.~\\ref{Uncoupled}(c), where $\\text{T}^{\\text{O}}$ and $\\text{T}^{\\text{I}}$ are the factor nodes corresponding to the outer and inner encoder, respectively, and the rectangle illustrates a multiplexer\/demultiplexer. The information sequence $\\bs{u}$, of length $N$, is encoded by the outer encoder to produce the parity sequence $\\bs{v}^{\\text{O}}$. 
Then, the sequences $\\bs{u}$ and $\\bs{v}^{\\text{O}}$ are multiplexed and reordered to create the intermediate sequence $\\tilde{\\bs{v}}^{\\text{O}}$, of length $2N$ (not shown in the graph). Finally, $\\tilde{\\bs{v}}^{\\text{O}}$ is encoded by the inner encoder to produce the parity sequence $\\bs{v}^{\\text{I}}$. The transmitted sequence is $\\bs{v}=(\\bs{u},\\bs{v}^{\\text{O}},\\bs{v}^{\\text{I}})$. \n\n \\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Fig3.pdf}\n\t\\caption{Block diagram of the encoder of a SC-PCC ensemble with $m=1$. }\n\t\\label{CoupledPCC}\n\\end{figure*}\n\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{Fig4.pdf}\n\t\\caption{Compact graph representation of (a) PCC, (b) SC-PCC at time instant $t$, (c) SC-PCC.}\n\t\\label{SC-PCC}\n\\end{figure*}\n\n\\subsection{Braided Convolutional Codes}\n\nWe consider a rate $R=1\/3$ BCC built from two rate-$2\/3$ recursive systematic convolutional encoders, referred to as upper and lower encoders. The corresponding trellises are denoted by $\\text{T}^{\\text{U}}$ and $\\text{T}^{\\text{L}}$. The compact graph representation of this code is shown in Fig.~\\ref{Uncoupled}(d). The parity sequences of the upper and lower encoder are denoted by $\\bs{v}^{\\text{U}}$ and $\\bs{v}^{\\text{L}}$, respectively.\nTo produce the parity sequence $\\bs{v}^{\\text{U}}$, the information sequence $\\bs{u}$ and a reordered copy of $\\bs{v}^{\\text{L}}$ are encoded by $\\text{T}^{\\text{U}}$.\nLikewise, a reordered copy of $\\bs{u}$ and a reordered copy of $\\bs{v}^{\\text{U}}$ are encoded by $\\text{T}^{\\text{L}}$ in order to produce the parity sequence $\\bs{v}^{\\text{L}}$.\nSimilarly to PCCs, the transmitted sequence is $\\bs{v}=(\\bs{u},\\bs{v}^{\\text{U}},\\bs{v}^{\\text{L}})$.\n\n\\section{Spatially Coupled Turbo-like Codes} \n\\label{sec:SCTCs}\nIn this section, we introduce SC-TCs. We first describe the spatial coupling for both PCCs and SCCs. 
Then, we generalize the original block-wise BCC ensemble \\cite{ZhangBCC} in order to obtain ensembles with larger coupling memories. \n\n\\subsection{Spatially Coupled Parallel Concatenated Codes}\n\nWe consider the spatial coupling of rate-$1\/3$ PCCs, described in the previous section.\nFor simplicity, we first describe the SC-PCC ensemble with coupling memory $m=1$.\nThen we show the coupling for higher coupling memories.\nThe block diagram of the encoder for the SC-PCC ensemble is shown in Fig.~\\ref{CoupledPCC}.\nIn addition, its compact graph representation and the coupling are illustrated in Fig.~\\ref{SC-PCC}.\n\n\nAs shown in Fig.~\\ref{CoupledPCC} and Fig.~\\ref{SC-PCC}(a), we denote by $\\bs{u}_t$ the information sequence, and by $\\bs{v}_t^{\\text{U}}$ and $\\bs{v}_t^{\\text{L}}$ the parity sequences of the upper and lower encoder, respectively, at time $t$.\nThe code sequence of the PCC at time $t$ is given by the triple $\\bs{v}_t = \n(\\bs{u}_t,\\bs{v}^{\\text{U}}_t ,\\bs{v}^{\\text{L}}_t )$. With reference to Fig.~\\ref{CoupledPCC} and Fig.~\\ref{SC-PCC}(b),\nin order to obtain the coupled sequence, the information sequence $\\bs{u}_t$ is divided into two sequences of equal size, $\\bs{u}_{t,0}$ and $\\bs{u}_{t,1}$, by a multiplexer. 
\nThen, the sequence $\\bs{u}_{t,0}$ is used as a part of the input to the upper encoder at time $t$ and $\\bs{u}_{t,1}$ is used as a part of the input to the upper encoder at time $t+1$.\nLikewise, a reordered copy of the information sequence, $\\tilde{\\bs{u}}_t$, is divided into two sequences $\\tilde{\\bs{u}}_{t,0}$ and $\\tilde{\\bs{u}}_{t,1}$.\n\nTherefore, the input to the upper encoder at time $t$ is a reordered copy of $(\\bs{u}_{t,0},\\bs{u}_{t-1,1})$, and likewise the input to the lower encoder at time $t$ is a reordered copy of $(\\tilde{\\bs{u}}_{t,0},\\tilde{\\bs{u}}_{t-1,1})$.\nIn this ensemble, the coupling memory is $m=1$ as $\\bs{u}_t$ is used only at the time instants $t$ and $t+1$.\n\n\nFinally, an SC-PCC with $m=1$ is obtained by considering a collection of $L$ PCCs at time instants $t=1,\\ldots,L$, where $L$ is referred to as the coupling length, and coupling them as described above, see Fig.~\\ref{SC-PCC}(c).\n\n \nAn SC-PCC ensemble with coupling memory $m$ is obtained by dividing each of the sequences $\\bs{u}_t$ and $\\tilde{\\bs{u}}_t$ into $m+1$ sequences of equal size and spreading these sequences to the inputs of the upper and the lower encoder, respectively, at time slots $t$ to $t+m$. The compact graph representation of the SC-PCC with coupling memory $m$ is shown in Fig.~\\ref{Coupled}(a) for a given time instant $t$.\n\nThe coupling is performed as follows. Divide the information sequence $\\u_t$ into $m+1$ sequences of equal size $N\/(m+1)$, denoted by $\\bs{u}_{t,j}$, $j=0,\\dots,m$. Likewise, divide $\\tilde{\\bs{u}}_t$, the information sequence $\\bs{u}_t$ reordered by a permutation, into $m+1$ sequences of equal size, denoted by $\\tilde{\\bs{u}}_{t,j}$, $j=0,\\dots,m$. At time $t$, the information sequence at the input of the upper encoder is $(\\u_{t,0},\\u_{t-1,1},\\ldots,\\u_{t-m,m})$, properly reordered by a permutation. 
Likewise, the information sequence at the input of the lower encoder is $(\\tilde{\\u}_{t,0},\\tilde{\\u}_{t-1,1},\\ldots,\\tilde{\\u}_{t-m,m})$, reordered by a permutation. \nUsing the procedure described above, a coupled chain (a convolutional\nstructure over time) of $L$ PCCs with coupling memory\n$m$ is obtained. \n\nIn order to terminate the encoder of the SC-PCC to the zero state, the information sequences at the end of the chain are chosen in such a way that the code sequences become\n$\\bs{v}_{t}=\\bs{0}$ at time $t=L+1,\\dots,L+m$, and $\\u_t$ is set to $\\bs{0}$ for $t>L$. Analogously to conventional convolutional codes, this results in a rate loss that becomes smaller as $L$ increases. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{Fig5.pdf}\n\\caption{Compact graph representation of (a) SC-PCCs, and (b) SC-SCCs of coupling memory $m$ for time instant $t$.}\n\\label{Coupled}\n\\end{figure}\n\n\n\\subsection{Spatially Coupled Serially Concatenated Codes}\n\nAn SC-SCC is constructed similarly to SC-PCCs. Consider a collection of $L$ SCCs at time instants $t=1,\\ldots,L$,\nand let $\\bs{u}_t$ be the information sequence at time $t$. Also, denote by $\\bs{v}_t^{\\mathrm{O}}$ and $\\bs{v}_t^{\\mathrm{I}}$ the parity sequence at the output of the outer and inner encoder, respectively. The information sequence $\\bs{u}_t$ and the parity sequence $\\bs{v}_t^{\\mathrm{O}}$ are multiplexed and reordered into the sequence $\\tilde{\\bs{v}}_t^{\\mathrm{O}}$. The sequence $\\tilde{\\bs{v}}_t^{\\mathrm{O}}$ is divided into $m+1$ sequences of equal length, denoted by $\\tilde{\\bs{v}}_{t,j}^{\\text{O}}$, $j=0,\\dots,m$. Then, at time instant $t$, the\nsequence at the input of the inner encoder is\n$(\\tilde{\\bs{v}}_{t,0}^{\\text{O}},\\tilde{\\bs{v}}_{t-1,1}^{\\text{O}},\\ldots,\\tilde{\\bs{v}}_{t-m,m}^{\\text{O}})$, properly reordered by a permutation. This sequence is encoded by the inner encoder into $\\bs{v}_t^{\\mathrm{I}}$. 
Finally, the code sequence at time $t$ is $\\bs{v}_t=(\\bs{u}_t,\\bs{v}_t^{\\text{O}},\\bs{v}_t^{\\text{I}})$. Using this construction method, a coupled chain of $L$ SCCs with coupling memory $m$ is obtained. The compact graph representation of SC-SCCs with coupling memory $m$ is shown in Fig.~\\ref{Coupled}(b) for time instant $t$. \n\nIn order to terminate the encoder of the SC-SCC, the information sequences at the end of the chain are chosen in such a way that the code sequences become $\\bs{v}_{t}=\\bs{0}$ at time $t=L+1,\\dots,L+m$. A simple and practical way to terminate SC-SCCs is to set $\\u_t=\\boldsymbol{0}$ for $t=L-m+1,\\dots,L$. This enforces $\\bs{v}_{t}=\\bs{0}$ for $t=L+1,\\dots,L+m$, since we can assume that $\\u_t=\\boldsymbol{0}$ for $t>L$. Using this termination technique, only the parity sequence $\\bs{v}_t^{\\text{I}}$ needs to be transmitted at time instants $t=L-m+1,\\dots,L$.\n\n\n\\subsection{Braided Convolutional Codes}\n\nThe compact graph representation of the original BCCs is depicted in Fig.~\\ref{BCCSC}.\nAs for SC-PCCs, let $\\bs{u}_t$, $\\bs{v}_t^{\\text{U}}$ and $\\bs{v}_t^{\\text{L}}$ denote the information sequence, the parity sequence at the output of the upper encoder, and the parity sequence at the output of the lower encoder, respectively, at time $t$. \nAt time $t$, the information sequence \n $\\bs{u}_t$ and a reordered copy of $\\bs{v}_{t-1}^{\\text{L}}$ are encoded by the upper encoder to generate the parity sequence $\\bs{v}_t^{\\text{U}}$.\nLikewise, a reordered copy of the information sequence, denoted by $\\tilde{\\bs{u}}_t$, and a reordered copy of $\\bs{v}_{t-1}^{\\text{U}}$ are encoded by the lower encoder to produce the parity sequence $\\bs{v}_t^{\\text{L}}$. 
The code sequence at time $t$ is $\\bs{v}_t=(\\bs{u}_t,\\bs{v}_t^{\\text{U}},\\bs{v}_t^{\\text{L}})$.\n\nAs can be seen from Fig.~\\ref{BCCSC}, the original BCCs are inherently spatially coupled codes\\footnote{The uncoupled ensemble, discussed in the previous section, can be defined by tailbiting a coupled chain of length $L=1$.} with coupling memory one. \nIn the following, we introduce two extensions of BCCs, referred to as Type-I and Type-II, with increased coupling memory, $m>1$.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.6\\linewidth]{Fig6.pdf}\n\\caption{Compact graph representation of the original BCCs.}\n\\label{BCCSC}\n\\end{figure}\n\nThe compact graph of Type-I BCCs is shown in\nFig.~\\ref{BCCI-II}(a) for time instant $t$.\nThe parity sequence $\\boldsymbol{v}_t^{\\text{U}}$ is randomly divided into $m$ sequences $\\boldsymbol{v}_{t,j}^{\\text{U}}$, $j=1,\\dots,m$, of the same length.\nLikewise, the parity sequence $\\boldsymbol{v}_t^{\\text{L}}$ is randomly divided into $m$ sequences $\\boldsymbol{v}_{t,j}^{\\text{L}}$, $j=1,\\dots,m$. At time $t$, the information sequence $\\bs{u}_t$ and the sequence $(\\boldsymbol{v}_{t-1,1}^{\\text{L}},\\boldsymbol{v}_{t-2,2}^{\\text{L}},\\ldots,\\boldsymbol{v}_{t-m,m}^{\\text{L}})$, properly reordered, are used as input sequences to the upper encoder to produce the parity sequence $\\boldsymbol{v}_t^{\\text{U}}$. Likewise, a reordered copy of the information sequence $\\bs{u}_t$ and the sequence $(\\boldsymbol{v}_{t-1,1}^{\\text{U}},\\boldsymbol{v}_{t-2,2}^{\\text{U}},\\ldots,\\boldsymbol{v}_{t-m,m}^{\\text{U}})$, properly reordered, are encoded by the lower encoder to produce the parity sequence $\\boldsymbol{v}_t^{\\text{L}}$. 
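The constructions above all follow the same chunk-and-spread rule: a block at time $t$ is split into equal chunks, and chunk $j$ enters an encoder at time $t+j$. The sketch below (our illustration, not code from the paper; the permutations are omitted and all function names and toy sizes are ours) shows the index bookkeeping for the $(m+1)$-chunk variant used by SC-PCCs:

```python
# Illustrative sketch of the chunk-and-spread coupling rule (permutations omitted):
# a length-N block u_t is split into m+1 equal chunks u_{t,0},...,u_{t,m}, and the
# encoder input at time t collects chunk j from block t-j.

def split(block, m):
    """Split a block into m+1 equal chunks."""
    n = len(block) // (m + 1)
    return [block[i * n:(i + 1) * n] for i in range(m + 1)]

def encoder_input(blocks, t, m):
    """Input at time t: (u_{t,0}, u_{t-1,1}, ..., u_{t-m,m}).
    Blocks with negative index are all-zero (initialization of the chain)."""
    n = len(blocks[0]) // (m + 1)
    parts = []
    for j in range(m + 1):
        if t - j >= 0:
            parts.append(split(blocks[t - j], m)[j])
        else:
            parts.append([0] * n)          # before the chain starts
    return [b for part in parts for b in part]

# Toy example: L = 3 blocks of length N = 6, coupling memory m = 2.
blocks = [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
print(encoder_input(blocks, 2, 2))  # -> [13, 14, 9, 10, 5, 6]
```

Running the example shows how the input at time $t$ mixes chunks from blocks $t$, $t-1$ and $t-2$, with all-zero chunks at the start of the chain, mirroring the initialization of the coupled encoder.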
\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{Fig7.pdf}\n\\caption{Compact graph representation of (a) Type-I BCCs, and (b) Type-II BCCs of coupling memory $m$ at time instant $t$.}\n\\label{BCCI-II}\n\\end{figure}\n\nThe compact graph of Type-II BCCs is shown in\nFig.~\\ref{BCCI-II}(b) for time instant $t$. In contrast to Type-I BCCs, for Type-II BCCs the information bits are coupled in addition to the parity bits. At time $t$, divide the information sequence $\\u_t $ into $m+1$ sequences $\\boldsymbol{u}_{t,j}$, $j=0,\\dots,m$, of equal length. Furthermore, divide the reordered copy of the information sequence, $\\tilde{\\bs{u}}_t$, into $m+1$ sequences $\\tilde{\\boldsymbol{u}}_{t,j}$, $j=0,\\dots,m$. The first inputs of the upper and lower encoders are now the sequences $(\\bs{u}_{t-0,0},\\bs{u}_{t-1,1},\\ldots,\\bs{u}_{t-m,m})$ and $(\\tilde{\\bs{u}}_{t-0,0},\\tilde{\\bs{u}}_{t-1,1},\\ldots,\\tilde{\\bs{u}}_{t-m,m})$, respectively, properly reordered.\n\n\n\\section{Density Evolution Analysis for SC-TCs over the Binary Erasure Channel}\\label{sec:DE}\n\nIn this section, we derive the exact DE for SC-TCs. For the three considered code ensembles, we first derive the DE equations for the uncoupled ensembles and then extend them to the coupled ones.\n\\subsection{Density Evolution Equations and Decoding Thresholds} \nFor transmission over the BEC, it is possible to analyze the asymptotic behavior of TCs and SC-TCs by tracking the evolution of the erasure probability with the number of decoding iterations.\nThis evolution can be formalized in a compact way as a set of equations called DE equations. 
For the BEC, it is possible to derive exact DE equations for TCs and SC-TCs.\nUsing these equations, the BP decoding threshold can be computed.\nThe BP threshold is the largest channel erasure probability $\\varepsilon$ for which the erasure probability at the output of the BP decoder converges to zero as the block length and number of iterations grow to infinity.\n\nIt is also possible to compute the threshold of the MAP decoder, $\\varepsilon_{\\text{MAP}}$, by the use of the area theorem \\cite{Measson_Turbo}.\nAccording to the area theorem, the MAP threshold\\footnote{The threshold given by the area theorem is actually an upper bound on the MAP threshold. However, the numerical results show that the thresholds of the coupled ensembles converge to this upper bound, which indicates that the bound is tight.} \ncan be obtained from the following equation,\n\\[\n\\int_{\\varepsilon_{\\text{MAP}}}^1\\bar{p}_{\\text{extr}}(\\varepsilon)d\\varepsilon=R \\ ,\n\\]\nwhere $R$ is the rate of the code and $\\bar{p}_{\\text{extr}}(\\varepsilon)$ is the average extrinsic erasure probability for all transmitted bits.\n\n\n\\subsection{Parallel Concatenated Codes}\n\\subsubsection{Uncoupled}\nConsider the compact graph of a PCC in Fig.~\\ref{Uncoupled}(b). \nLet $p_{\\text{U},\\text{s}}^{(i)}$ and $p_{\\text{U},\\text{p}}^{(i)}$ denote \nthe average extrinsic erasure probability from factor node $T^{\\text{U}}$ to $\\bs{u}$ and $\\bs{v}^{\\text{U}}$, respectively, in the $i$th iteration.\\footnote{With some abuse of language, we sometimes refer to a variable node representing a sequence (e.g., $\\u$) as the sequence itself ($\\u$ in this case).} Likewise, denote by $p_{\\text{L},\\text{s}}^{(i)}$ and $p_{\\text{L},\\text{p}}^{(i)}$ the extrinsic erasure probabilities from $T^{\\text{L}}$ to $\\bs{u}$ and $\\bs{v}^{\\text{L}}$, respectively. 
It is easy to see that the erasure probability from $\\bs{u}$ and $\\bs{v}^{\\text{U}}$ to $T^{\\text{U}}$ is $\\varepsilon \\cdot p_{\\text{L},\\text{s}}^{(i-1)}$ and $\\varepsilon$, respectively. Therefore, the DE updates for $T^{\\text{U}}$ can be written as\n\\begin{align}\n\\label{DEPCC1}\np_{\\text{U},\\text{s}}^{(i)}=f_{\\text{U,s}}\\left(\nq_{\\text{L}}^{(i)},\\varepsilon\\right),\\\\\n\\label{DEPCC2}\np_{\\text{U},\\text{p}}^{(i)}=f_{\\text{U,p}}\\left(\nq_{\\text{L}}^{(i)},\\varepsilon\\right),\n\\end{align}\nwhere\n\\begin{equation}\n\\label{DEPCC3}\nq_{\\text{L}}^{(i)}=\\varepsilon \\cdot p_{\\text{L},\\text{s}}^{(i-1)},\n\\end{equation}\nand $f_{\\text{U,s}}$ and $f_{\\text{U,p}}$ denote the transfer functions of $T^{\\text{U}}$ for the systematic and parity bits, respectively. \n\nSimilarly, the DE updates for $T^{\\text{L}}$ can be written as\n\\begin{align}\n\\label{DEPCC4}\np_{\\text{L},\\text{s}}^{(i)}=f_{\\text{L,s}}\\left(\nq_{\\text{U}}^{(i)},\\varepsilon\\right),\\\\ \\label{DEPCC5}\np_{\\text{L},\\text{p}}^{(i)}=f_{\\text{L,p}}\\left(\nq_{\\text{U}}^{(i)},\\varepsilon\\right),\n\\end{align}\nwhere\n\\begin{equation}\n\\label{DEPCC6}\nq_{\\text{U}}^{(i)}=\\varepsilon \\cdot p_{\\text{U},\\text{s}}^{(i-1)},\n\\end{equation}\nand $f_{\\text{L,s}}$ and $f_{\\text{L,p}}$ are the transfer functions of $T^{\\text{L}}$ for the systematic and parity bits, respectively.\n\\subsubsection{Coupled}\nConsider the compact graph of an SC-PCC ensemble in Fig.~\\ref{Coupled}(a).\nThe variable node $\\bs{u}_t$ is connected to factor nodes $T^{\\text{U}}_{t'}$ and $T^{\\text{L}}_{t'}$, at time instants $t'=t,\\ldots,t+m$. We denote by $p_{\\text{U},\\text{s}}^{(i,t')}$ and $p_{\\text{U},\\text{p}}^{(i,t')}$ \nthe average extrinsic erasure probability from factor node $T^{\\text{U}}_{t'}$ at time instant $t'$ to $\\bs{u}$ and $\\bs{v}^{\\text{U}}$, respectively, computed in the $i$th iteration. 
We also denote by $\\bar{q}^{(i-1,t)}_{\\text{U}}$ the input erasure probability to variable node $\\bs{u}_t$ in the $i$th iteration, received from its neighbors $T^{\\text{U}}_{t'}$. It can be written as\n\\begin{equation}\\label{DESCPCC1}\n\\bar{q}_{\\text{U}}^{(i-1,t)}=\\frac{1}{m+1} \\sum_{j=0}^{m} p_{\\text{U},\\text{s}}^{(i-1,t+j)}.\n\\end{equation}\n\nSimilarly, the average erasure probability from factor nodes $T^{\\text{L}}_{t'}$, $t'=t,\\ldots,t+m$, to $\\bs{u}_t$, denoted by $\\bar{q}_{\\text{L}}^{(i-1,t)}$, can be written as\n\\begin{equation}\\label{DESCPCC2}\n\\bar{q}_{\\text{L}}^{(i-1,t)}=\\frac{1}{m+1} \\sum_{j=0}^{m} p_{\\text{L},\\text{s}}^{(i-1,t+j)}.\n\\end{equation} \n\nThe erasure probabilities from variable node $\\bs{u}_t$ to its neighbors $T^{\\text{U}}_{t'}$ and $T^{\\text{L}}_{t'}$ are\n $\\varepsilon \\cdot \\bar{q}^{(i-1,t)}_{\\text{L}}$ and $\\varepsilon \\cdot \\bar{q}^{(i-1,t)}_{\\text{U}}$, respectively.\n\nOn the other hand, $T^{\\text{U}}_t$ at time $t$ is connected to the set of $\\bs{u}_{t'}$s for $t'=t-m, \\dots, t$. 
\nThe erasure probability to $T^{\\text{U}}_t$ from this set, denoted by $q_{\\text{L}}^{(i,t)}$, is given by\n\\begin{align}\n\\label{DESCPCC3}\nq_{\\text{L}}^{(i,t)}&=\\varepsilon \\cdot \\frac{1}{m+1} \\sum_{k=0}^{m}\n\\bar{q}_{\\text{L}}^{(i-1,t-k)}\\nonumber\\\\\n&=\\varepsilon \\cdot \\frac{1}{(m+1)^2} \\sum_{k=0}^{m}\\sum_{j=0}^{m} p_{\\text{L},\\text{s}}^{(i-1,t+j-k)}.\n\\end{align}\n\nThus, the DE updates of $T^{\\text{U}}_t$ are\n\\begin{align}\n\\label{DESCPCC4}\np_{\\text{U},\\text{s}}^{(i,t)}=f_{\\text{U,s}}\\left(\nq_{\\text{L}}^{(i,t)},\\varepsilon\\right),\\\\\n\\label{DESCPCC5}\np_{\\text{U},\\text{p}}^{(i,t)}=f_{\\text{U,p}}\\left(\nq_{\\text{L}}^{(i,t)},\\varepsilon\\right).\n\\end{align}\n\nSimilarly, the input erasure probability to $T^{\\text{L}}_t$ from the set of connected $\\bs{u}_{t'}$s at time instants $t'=t-m, \\dots, t$, is\n\\begin{align}\n\\label{DESCPCC6}\n&q_{\\text{U}}^{(i,t)}=\\varepsilon \\cdot \\frac{1}{m+1} \\sum_{k=0}^{m}\n\\bar{q}_{\\text{U}}^{(i-1,t-k)} \\nonumber\\\\\n&=\\varepsilon \\cdot \\frac{1}{(m+1)^2} \\sum_{k=0}^{m}\\sum_{j=0}^{m} p_{\\text{U},\\text{s}}^{(i-1,t+j-k)},\n\\end{align} \nand the DE updates of $T^{\\text{L}}_t$ are\n\\begin{align}\n\\label{DESCPCC7}\np_{\\text{L},\\text{s}}^{(i,t)}=f_{\\text{L,s}}\\left(\nq_{\\text{U}}^{(i,t)},\\varepsilon\\right),\\\\\n\\label{DESCPCC8}\np_{\\text{L},\\text{p}}^{(i,t)}=f_{\\text{L,p}}\\left(\nq_{\\text{U}}^{(i,t)},\\varepsilon\\right).\n\\end{align}\n\nFinally the a-posteriori erasure probability on $\\bs{u}_t$ at time $t$ and iteration $i$ is\n\\begin{equation}\np_a^{(i,t)}=\\varepsilon \\cdot \\bar{q}_{\\text{U}}^{(i,t)} \\cdot \\bar{q}_{\\text{L}}^{(i,t)}.\n\\end{equation}\n DE is performed by tracking the evolution of the a-posteriori erasure probability\n with the number of iterations.\n\\subsection{Serially Concatenated Codes}\n\\subsubsection{Uncoupled}\nConsider the compact graph of the SCC ensemble in Fig.~\\ref{Uncoupled}(c).\nLet 
$p_{\\text{O},\\text{s}}^{(i)}$ and $p_{\\text{O},\\text{p}}^{(i)}$ denote the erasure probability from $T^{\\text{O}}$ to $\\bs{u}$ and $\\bs{v}^{\\text{O}}$, respectively, computed in the $i$th iteration.\nLikewise, $p_{\\text{I},\\text{s}}^{(i)}$ and $p_{\\text{I},\\text{p}}^{(i)}$ denote the extrinsic erasure probability from $T^{\\text{I}}$ to $\\tilde{\\bs{v}}^{\\text{O}}=(\\bs{u},\\bs{v}^{\\text{O}})$ and $\\bs{v}^{\\text{I}}$. \n\nBoth $\\bs{u}$ and $\\bs{v}^{\\text{O}}$ receive the same erasure probability, $p_{\\text{I},\\text{s}}^{(i-1)}$, from $T^{\\text{I}}$. Therefore, the erasure probabilities that $T^{\\text{O}}$ receives from these two variable nodes are equal and given by\n\\begin{align}\nq_{\\text{I}}^{(i)}=\\varepsilon \\cdot p_{\\text{I},\\text{s}}^{(i-1)}.\\label{DESCC1}\n\\end{align}\nThe DE equations for $T^{\\text{O}}$ can then be written as\n\\begin{align}\n\\label{DESCC2}\np_{\\text{O},\\text{s}}^{(i)}=f_{\\text{O,s}}\\left(\nq_{\\text{I}}^{(i)},q_{\\text{I}}^{(i)}\\right),\\\\\n\\label{DESCC3}\np_{\\text{O},\\text{p}}^{(i)}=f_{\\text{O,p}}\\left(\nq_{\\text{I}}^{(i)},q_{\\text{I}}^{(i)}\\right),\n\\end{align}\nwhere $f_{\\text{O,s}}$ and $f_{\\text{O,p}} $ are the transfer functions of $T^{\\text{O}}$ for the systematic and parity bits, respectively. \n\n The erasure probability that $T^{\\text{I}}$ receives from $\\tilde{\\bs{v}}^{\\text{O}}=(\\bs{u},\\bs{v}^{\\text{O}})$ is the average of the erasure probabilities from $\\bs{u}$ and $\\bs{v}^{\\text{O}}$,\n\\begin{equation}\n\\label{DESCC4}\nq_{\\text{O}}^{(i)}=\\varepsilon \\cdot\\frac{p_{\\text{O},\\text{s}}^{(i)}+p_{\\text{O},\\text{p}}^{(i)}}{2}.\n\\end{equation}\nOn the other hand, the erasure probability to $T^{\\text{I}}$ from $\\bs{v}^{\\text{I}}$ is $\\varepsilon$. 
Therefore, the DE equations for $T^{\\text{I}}$ can be written as\n\\begin{align}\n\\label{DESCC5}\np_{\\text{I},\\text{s}}^{(i)}=f_{\\text{I,s}}\\left(\nq_{\\text{O}}^{(i)},\\varepsilon\\right),\n\\\\\n\\label{DESCC6}\np_{\\text{I},\\text{p}}^{(i)}=f_{\\text{I,p}}\\left(\nq_{\\text{O}}^{(i)},\\varepsilon\\right).\n\\end{align}\n\\subsubsection{Coupled}\nConsider the compact graph representation of SC-SCCs in Fig.~\\ref{Coupled}(b). \nVariable nodes $\\bs{u}_t$ and $\\bs{v}_t^{\\text{O}}$ are connected to factor nodes $T^{\\text{I}}_{t'}$ at time instants $t'=t,\\ldots,t+m$. \nThe input erasure probability to variable nodes $\\bs{u}_t$ and $\\bs{v}_t^{\\text{O}}$ from these factor nodes, denoted by $\\bar{q}_{\\text{I}}^{(i-1,t)}$, is the same for both $\\bs{u}_t$ and $\\bs{v}_t^{\\text{O}}$ and is obtained as the average of the erasure probabilities from each of the factor nodes $T^{\\text{I}}_{t'}$,\n\\begin{equation}\n\\label{DESCSCC1}\n\\bar{q}_{\\text{I}}^{(i-1,t)}=\\frac{1}{m+1}\\sum_{j=0}^{m}p_{\\text{I},\\text{s}}^{(i-1,t+j)} \\ .\n\\end{equation}\nThe erasure probability to $T^{\\text{O}}_{t}$ from $\\bs{u}_t$ and $\\bs{v}_t^{\\text{O}}$ is\n\\begin{align}\n\\label{DESCSCC2}\nq_{\\text{I}}^{(i,t)}=\\varepsilon \\cdot \\bar{q}_{\\text{I}}^{(i-1,t)}= \\frac{\\varepsilon}{m+1}\\sum_{j=0}^{m}p_{\\text{I},\\text{s}}^{(i-1,t+j)} \\ .\n\\end{align}\nThus, the DE updates of $T^{\\text{O}}_t$ are\n\\begin{align}\n\\label{DESCSCC3}\np_{\\text{O},\\text{s}}^{(i,t)}=f_{\\text{O,s}}\\left(\nq_{\\text{I}}^{(i,t)},q_{\\text{I}}^{(i,t)}\\right) \\ ,\\\\\n\\label{DESCSCC4}\np_{\\text{O},\\text{p}}^{(i,t)}=f_{\\text{O,p}}\\left(\nq_{\\text{I}}^{(i,t)},q_{\\text{I}}^{(i,t)}\\right) \\ . \n\\end{align}\nAt time $t$, $T^{\\text{I}}_t$ is connected to a set of $\\tilde{\\bs{v}}_{t'}^{\\text{O}}$s at time instants $t'=t-m,\\ldots,t$. 
\nThe erasure probability that $T^{\\text{I}}_t$ receives from this set is the average of the erasure probabilities of all $\\bs{u}_{t'}$s and $\\bs{v}_{t'}^{\\text{O}}$s at times $t'=t-m,\\ldots,t$. This erasure probability can be written as\n \\begin{align}\n\\label{DESCSCC5}\nq_{\\text{O}}^{(i,t)}=\\frac{\\varepsilon}{m+1}\\sum_{k=0}^{m}\\frac{p_{\\text{O},\\text{s}}^{(i,t-k)}+p_{\\text{O},\\text{p}}^{(i,t-k)}}{2} \\ .\n\\end{align}\nHence, the DE updates for the inner encoder are given by\n\\begin{align}\n\\label{DESCSCC6}\np_{\\text{I},\\text{s}}^{(i,t)}=f_{\\text{I,s}}\\left(\nq_{\\text{O}}^{(i,t)},\\varepsilon\\right) \\ ,\n\\\\\n\\label{DESCSCC7}\np_{\\text{I},\\text{p}}^{(i,t)}=f_{\\text{I,p}}\\left(\nq_{\\text{O}}^{(i,t)},\\varepsilon\\right) \\ .\n\\end{align}\nFinally, the a-posteriori erasure probability on information bits at time $t$ and iteration $i$ is\n\\begin{equation}\np_{a}^{(i,t)}=\\varepsilon \\cdot p_{\\text{O},\\text{s}}^{(i,t)}\\cdot \\bar{q}_{\\text{I}}^{(i,t)} \\ .\n\\end{equation}\n\\subsection{Braided Convolutional Codes}\n\\subsubsection{Uncoupled}\nConsider the compact graph of uncoupled BCCs in Fig.~\\ref{Uncoupled}(d). 
These can be obtained by tailbiting BCCs, as shown in Fig.~\\ref{BCCSC}, with coupling length $L=1$.\nLet $p_{\\text{U},k}^{(i)}$ and $p_{\\text{L},k}^{(i)}$ denote the erasure probabilities of messages from $T^{\\text{U}}$ and $T^{\\text{L}}$ through their $k$th connected edge, $k=1,2,3$, respectively.\nThe erasure probability of messages that $T^{\\text{U}}$ receives through its edges are\n\\begin{align}\nq_{\\text{L},1}^{(i)}=\\varepsilon \\cdot p_{\\text{L},1}^{(i-1)} \\ ,\\label{DEBCC1}\\\\\nq_{\\text{L},2}^{(i)}=\\varepsilon \\cdot p_{\\text{L},3}^{(i-1)} \\ ,\\label{DEBCC2}\\\\\nq_{\\text{L},3}^{(i)}=\\varepsilon \\cdot\np_{\\text{L},2}^{(i-1)} \\ .\\label{DEBCC3}\n\\end{align}\nThe exact DE equations of $T^{\\text{U}}$ can be written as\n\\begin{align}\np_{\\text{U},1}^{(i)}=&f_{\\text{U},1}\\left(q_{\\text{L},1}^{(i)} ,q_{\\text{L},2}^{(i)},q_{\\text{L},3}^{(i)}\\right) \\label{DEBCC4}\\ ,\\\\\np_{\\text{U},2}^{(i)}=&f_{\\text{U},2}\\left(q_{\\text{L},1}^{(i)} ,q_{\\text{L},2}^{(i)},q_{\\text{L},3}^{(i)}\\right) \\label{DEBCC5}\\ ,\\\\\np_{\\text{U},3}^{(i)}=&f_{\\text{U},3}\\left(q_{\\text{L},1}^{(i)} ,q_{\\text{L},2}^{(i)},q_{\\text{L},3}^{(i)}\\right) \\label{DEBCC6}\\ , \n\\end{align}\nwhere $f_{\\text{U},k}$ denotes the transfer function of $T^{\\text{U}}$ for its $k$th connected edge.\nSimilarly, the DE equations for $T^{\\text{L}}$ can be written by swapping indexes $\\text{U}$ and $\\text{L}$ in \\eqref{DEBCC1}--\\eqref{DEBCC6}.\n\\subsubsection{Coupled}\nConsider the compact graph representation of Type-I BCCs in Fig.~\\ref{BCCI-II}(a).\nAs in the uncoupled case, the DE updates of factor nodes $T^{\\text{U}}_t$ and $T^{\\text{L}}_t$ are similar due to the symmetric structure of the coupled construction. 
Therefore, for simplicity, we only describe the DE equations of $T^{\\text{U}}_t$ and the equations for $T^{\\text{L}}_t$ are obtained by swapping indexes $\\text{U}$ and $\\text{L}$ in the equations.\n\nThe first edge of $T^{\\text{U}}_t$ is connected to $\\bs{u}_t$. Thus, the erasure probability that $T^{\\text{U}}_t$ receives through this edge is\n\\begin{equation}\n\\label{eq:T1BCCDE1}\nq_{\\text{L},1}^{(i,t)}=\\varepsilon \\cdot p_{\\text{L},1}^{(i-1,t)}.\n\\end{equation}\nThe second edge of $T^{\\text{U}}_t$ is connected to variable nodes $\\bs{v}_{t'}^{\\text{L}}$ at time instants $t'=t-m,\\ldots,t-1$.\n The erasure probability that $T^{\\text{U}}_t$ receives through its second edge is therefore the average of the erasure probabilities from the variable nodes $\\bs{v}_{t'}^{\\text{L}}$ that are connected to this edge. \nThis erasure probability can be written as\n\\begin{equation}\n\\label{eq:T1BCCDE2}\nq_{\\text{L},2}^{(i,t)}=\\frac{\\varepsilon}{m}\\sum_{j=1}^{m} p_{\\text{L},3}^{(i-1,t-j)}.\n\\end{equation}\nThe third edge of $T^{\\text{U}}_t$ is connected to $\\bs{v}_t^{\\text{U}}$, which is in turn connected to the second edges of factor nodes $T^{\\text{L}}_{t'}$ at time instants $t'=t+1,\\ldots,t+m$. 
\nThe erasure probability that $\\bs{v}_t^{\\text{U}}$ receives from the set of connected nodes $T^{\\text{L}}_{t'}$ is the average of erasure probabilities from these nodes through their second edges.\nThe erasure probability from $\\bs{v}_t^{\\text{U}}$ to $T^{\\text{U}}_t$ is\n\\begin{equation}\n\\label{eq:T1BCCDE3}\nq_{\\text{L},3}^{(i,t)}=\\frac{\\varepsilon}{m}\\sum_{j=1}^{m} p_{\\text{L},2}^{(i-1,t+j)}.\n\\end{equation}\nThe DE equations of $T^{\\text{U}}_t$ can then be written as\\footnote{The DE equations of the original BCCs are obtained by setting $m=1$ in the DE equations of Type-I BCCs.}\n\\begin{align}\n\\label{eq:T1BCCDE4}\np_{\\text{U},1}^{(i,t)}=&f_{\\text{U},1}\\left(q_{\\text{L},1}^{(i,t)},q_{\\text{L},2}^{(i,t)},q_{\\text{L},3}^{(i,t)}\\right),\\\\ \\label{eq:T1BCCDE5}\np_{\\text{U},2}^{(i,t)}=&f_{\\text{U},2}\\left(q_{\\text{L},1}^{(i,t)},q_{\\text{L},2}^{(i,t)},q_{\\text{L},3}^{(i,t)}\\right),\\\\ \\label{eq:T1BCCDE6}\np_{\\text{U},3}^{(i,t)}=&f_{\\text{U},3}\\left(q_{\\text{L},1}^{(i,t)},q_{\\text{L},2}^{(i,t)},q_{\\text{L},3}^{(i,t)}\\right).\n\\end{align} \nThe a-posteriori erasure probability on $\\bs{u}_t$ at time $t$ and iteration $i$ for Type-I BCCs is\n\\begin{equation}\np_a^{(i,t)}=\\varepsilon \\cdot p_{\\text{U},1}^{(i,t)} \\cdot p_{\\text{L},1}^{(i,t)}.\n\\end{equation}\n\n\nAs we discussed in the previous section, the difference between Type-I and Type-II BCCs is that $\\bs{u}_t$ is also coupled in the latter. Variable node\n$\\bs{u}_t$ in Type-II BCCs is connected to a set of factor nodes $T^{\\text{U}}_{t'}$ and $T^{\\text{L}}_{t'}$ at time instants $t'=t,\\ldots,t+m$.\nThe DE equations of Type-II BCCs are identical to those of Type-I BCCs except for equation \\eqref{eq:T1BCCDE1}. 
Denote by $\\bar{q}_{\\text{L},1}^{(i-1,t)}$ the input erasure probability to $\\bs{u}_t$ from the connected factor nodes $T^{\\text{L}}_{t'}$ in the $i$th iteration.\nAccording to Fig.~\\ref{BCCI-II}(b), $\\bar{q}_{\\text{L},1}^{(i-1,t)}$ is the average of erasure probabilities\nfrom $T^{\\text{L}}_{t'}$ at time instants $t'=t,\\ldots,t+m$, \n\\begin{equation}\n\\bar{q}_{\\text{L},1}^{(i-1,t)}=\\frac{1}{m+1} \\sum_{j=0}^{m} p_{\\text{L},1}^{(i-1,t+j)}.\n\\end{equation}\nFactor node $T^{\\text{U}}_{t}$ is connected to variable nodes $\\bs{u}_{t'}$ at time instants $t'=t-m,\\ldots,t$. \nThe incoming erasure probability to $T^{\\text{U}}_t$ through its first edge, denoted by $q_{\\text{L},1}^{(i,t)}$, is therefore the average of the erasure probabilities from $\\bs{u}_{t'}$ at times $t'=t-m,\\ldots,t$, \n \\begin{align}\n&q_{\\text{L},1}^{(i,t)}=\\varepsilon \\cdot \\frac{1}{m+1} \\sum_{k=0}^{m}\n\\bar{q}_{\\text{L},1}^{(i-1,t-k)}\\\\\n&=\\varepsilon \\cdot \\frac{1}{(m+1)^2} \\sum_{k=0}^{m}\\sum_{j=0}^{m} p_{\\text{L},1}^{(i-1,t+j-k)}.\\nonumber\n\\end{align}\nFinally, the a-posteriori erasure probability on $\\bs{u}_t$ at time $t$ and iteration $i$ for Type-II BCCs is\n\\begin{equation}\np_a^{(i,t)}=\\varepsilon \\cdot \\bar{q}_{\\text{U},1}^{(i,t)} \\cdot \\bar{q}_{\\text{L},1}^{(i,t)},\n\\end{equation}\nwhere $\\bar{q}_{\\text{U},1}^{(i,t)}$ is defined analogously to $\\bar{q}_{\\text{L},1}^{(i,t)}$ by swapping the roles of the upper and lower encoders.\n\n\n\\section{Rate-compatible SC-TCs via Random Puncturing}\\label{sec:RandomP}\n\n\nHigher-rate codes can be obtained by applying puncturing. For analysis purposes, we consider random puncturing. Random puncturing has been considered, e.g., for LDPC codes in \\cite{PishroNik, LDPCPuncture} and for turbo-like codes in \\cite{AGiAa,Kol12}. In \\cite{LDPCPuncture}, the authors introduced a parameter $\\theta$ that allows comparing the strength of codes with different rates. 
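The rate-compatible constructions that follow rest on a simple equivalence: randomly puncturing a sequence with permeability rate $\\rho$ (the fraction of surviving bits) and transmitting the survivors over BEC$(\\varepsilon)$ behaves like transmitting over a single BEC with erasure probability $1-(1-\\varepsilon)\\rho$. The following Monte Carlo check (our illustration; all names and parameter values are ours, not from the paper) verifies this numerically:

```python
# Illustrative check: puncturing with permeability rho followed by BEC(eps)
# behaves like a single BEC(1 - (1 - eps) * rho): a bit survives only if it
# is transmitted (prob. rho) AND not erased by the channel (prob. 1 - eps).
import random

def survival_rate(eps, rho, n=200_000, seed=0):
    """Empirical fraction of bits that survive puncturing and the BEC."""
    rng = random.Random(seed)
    survived = sum(1 for _ in range(n)
                   if rng.random() < rho and rng.random() < 1 - eps)
    return survived / n

eps, rho = 0.3, 0.8
empirical = 1 - survival_rate(eps, rho)   # empirical erasure probability
predicted = 1 - (1 - eps) * rho           # eps_rho from the equivalence
print(empirical, predicted)               # the two values agree closely
```

The same equivalence is what allows the DE equations to absorb puncturing by the substitution $\\varepsilon\\leftarrow\\varepsilon_{\\rho}$ used below.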
In this section, we consider the construction of rate-compatible SC-TCs by means of random puncturing.\n\n\nWe denote by $\\rho\\in[0,1]$ the fraction of surviving bits after puncturing, referred to as the permeability rate. Consider a code sequence $\\boldsymbol{v}$ that is randomly punctured with permeability rate $\\rho$ and transmitted over a BEC with erasure probability $\\varepsilon$, BEC$(\\varepsilon)$. For the BEC, applying puncturing is equivalent to transmitting $\\bs{v}$ over a BEC with erasure probability $\\varepsilon_{\\rho}=1-(1-\\varepsilon)\\rho$, resulting from the concatenation of two BECs, BEC$(\\varepsilon)$ and BEC$(1-\\rho)$. The DE equations of SC-TCs in the previous section can then be easily modified to account for random puncturing.\n\nFor SC-PCCs, we consider puncturing of parity bits only, i.e., the overall code is systematic. The rate of the punctured code (without considering termination of the coupled chain) is\n$R=\\frac{1}{1+2\\rho}$. The DE equations of punctured SC-PCCs are obtained\nby substituting $\\varepsilon\\leftarrow \\varepsilon_{\\rho}$ in \\eqref{DESCPCC4}, \\eqref{DESCPCC5}, \\eqref{DESCPCC7} and \\eqref{DESCPCC8}. \n\nFor punctured SC-SCCs, we consider the coupling of the punctured SCCs proposed in \\cite{AGiAa,AGiAb}\\footnote{ In contrast to standard SCCs, which are characterized by a rate-$1$ inner code and achieve higher rates by heavily puncturing the outer code, the SCCs proposed in \\cite{AGiAa,AGiAb} achieve higher rates by moving the puncturing from the outer code to the inner code, which is punctured beyond the unitary rate. This makes it possible to preserve the interleaving gain for high rates and yields a larger minimum distance, which results in codes that significantly outperform standard SCCs, especially for high rates. 
Furthermore, the SCCs in \\cite{AGiAa,AGiAb} yield better MAP thresholds than standard SCCs.}, where $\\rho_0$ and $\\rho_1$ are the permeability rates of the systematic and\nparity bits, respectively, of the outer code (see\n\\cite[Fig.~1]{AGiAb}), and $\\rho_2$ is the permeability rate of the\nparity bits of the inner code. The code rate of the \npunctured SC-SCC is\n$R=\\frac{1}{\\rho_0+\\rho_1+2\\rho_2}$ (neglecting the rate loss due to termination). The DE for punctured SC-SCCs is\nobtained by substituting $\\varepsilon\\leftarrow\\varepsilon_{\\rho_2}$\nin \\eqref{DESCSCC6} and \\eqref{DESCSCC7}, and modifying \\eqref{DESCSCC5} to\n\\begin{align*}\nq_{\\text{O}}^{(i,t)}=\\frac{1}{m+1}\\sum_{k=0}^{m}\\frac{\\varepsilon\n \\cdot p_{\\text{O},\\text{s}}^{(i,t-k)}\n+\\varepsilon_{\\rho_1} \\cdot p_{\\text{O},\\text{p}}^{(i,t-k)}}{2}\n\\end{align*}\nand \\eqref{DESCSCC3}, \\eqref{DESCSCC4} to\n\\begin{align}\np_{\\text{O},\\text{s}}^{(i,t)}=f_{\\text{O,s}}\\left(\nq_{\\text{I}}^{(i,t)},\\tilde{q}_{\\text{I}}^{(i,t)}\\right),\\\\\np_{\\text{O},\\text{p}}^{(i,t)}=f_{\\text{O,p}}\\left(\nq_{\\text{I}}^{(i,t)},\\tilde{q}_{\\text{I}}^{(i,t)}\\right),\n\\end{align}\nwhere $q_{\\text{I}}^{(i,t)}$ is given in \\eqref{DESCSCC2} and\n\\begin{equation}\n\\tilde{q}_{\\text{I}}^{(i,t)}=\\frac{\\varepsilon_{\\rho_1}}{m+1}\\sum_{j=0}^{m}p_{\\text{I},\\text{s}}^{(i-1,t+j)}.\n\\end{equation}\n\nFor both Type-I and Type-II BCCs, similarly to SC-PCCs, we consider only puncturing of parity bits with permeability rate $\\rho$.\nThe DE equations of punctured SC-BCCs are obtained by substituting $\\varepsilon\\leftarrow \\varepsilon_{\\rho}$ in \\eqref{eq:T1BCCDE2} and \\eqref{eq:T1BCCDE3} and the corresponding equations for $q_{\\text{U},2}^{(i,t)}$ and $q_{\\text{U},3}^{(i,t)}$.\n\n\\section{Numerical Results}\\label{Sec6}\n\nIn Table~\\ref{Tab:BPThresholds}, we give DE results for the SC-TC ensembles and their uncoupled counterparts for rate $R=1\/2$.\nIn particular, we consider SC-PCC 
and SC-SCC ensembles with identical 4-state and 8-state component encoders with generator matrix ${\\bf{G}}=(1,5\/7)$ and ${\\bf{G}}=(1,11\/13)$, respectively, in octal notation.\nFor the BCC ensemble, we consider two identical 4-state component encoders and generator matrix\n\\begin{equation}\\label{eqG}\n\\bs{G}_1 (D)= \\left( \\begin{array}{ccc}1&0&1\/7\\\\0&1&5\/7\\end{array}\\right) \\ .\n\\end{equation}\nThe BP thresholds ($\\varepsilon_{\\text{BP}}$) and MAP thresholds ($\\varepsilon_{\\text{MAP}}$) for the uncoupled ensembles are reported in Table~\\ref{Tab:BPThresholds}.\nThe MAP threshold is obtained using the area theorem \\cite{AshikhminEXIT,Measson2009}.\nWe also give the BP thresholds of SC-TCs for coupling memory $m=1$, denoted by $\\varepsilon_{\\text{SC}}^{1}$.\n\\begin{table}[t]\n\n\t\\caption{Thresholds for rate-$1\/2$ TCs, and SC-TCs}\n\n\t\\begin{center}\n\t\t\\begin{tabular}{lcccc}\n\t\t\t\\hline\n\t\t\tEnsemble&states&$\\varepsilon_{\\text{BP}}$ & $\\varepsilon_{\\text{MAP}}$ &$\\varepsilon^1_{\\mathrm{SC}}$ \\\\\n\t\t\t\\hline\n\t\t\n\t\t\n\t\t\n\t\t\t$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$&4&0.4606&0.4689&0.4689\\\\[0.5mm]\n\t\t\t$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$&4&0.3594&0.4981& 0.4708\\\\[0.5mm]\n\t\t\t$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$&8&0.4651&0.4863&0.4862\\\\[0.5mm]\n\t\t\t$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$&8&0.3120&0.4993& 0.4507\\\\\n\t\t\n\t\t\n\t\t\tType-I $\\mathcal{C}_{\\mathrm{BCC}}$&4&0.3013 &0.4993&0.4932\\\\[0.5mm]\n\t\t\tType-II $\\mathcal{C}_{\\mathrm{BCC}}$&4& 0.3013&0.4993& 0.4988\\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular} \n\t\\end{center}\n\t\\label{Tab:BPThresholds}\n\n\\end{table}\n\nAs expected, PCC ensembles yield better BP thresholds than SCC ensembles. However, SCCs have better MAP threshold. 
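The BP thresholds in Table~\\ref{Tab:BPThresholds} are obtained by iterating the DE equations of Section~\\ref{sec:DE} at a fixed $\\varepsilon$ and bisecting over $\\varepsilon$. The driver below sketches this search (our illustration, not the code used for the tables). Since the exact transfer functions of the trellis factor nodes require a separate BCJR analysis of each component encoder, we substitute a closed-form stand-in update (the BEC DE of a regular $(3,6)$ LDPC ensemble), so the numbers it produces are not those of the table:

```python
# Illustrative BP-threshold search: bisect over the channel erasure probability
# eps, declaring success when DE drives the erasure probability to ~0.
# The update below is a STAND-IN for the ensemble-specific DE updates:
# p <- eps * (1 - (1 - p)^5)^2, the BEC DE of a regular (3,6) LDPC ensemble,
# chosen only because it is in closed form.

def de_converges(eps, iters=5000, tol=1e-10):
    """Run the stand-in DE at channel parameter eps; True if it converges to 0."""
    p = eps
    for _ in range(iters):
        p = eps * (1 - (1 - p) ** 5) ** 2
        if p < tol:
            return True
    return False

def bp_threshold(lo=0.0, hi=1.0, steps=40):
    """Bisection: largest eps for which DE converges."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if de_converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(round(bp_threshold(), 4))  # (3,6) BEC threshold, approx. 0.4294
```

Swapping `de_converges` for the coupled recursions \\eqref{DESCPCC3}--\\eqref{DESCPCC8}, run jointly over all positions $t$ of the chain, would in principle produce the $\\varepsilon_{\\text{SC}}^{m}$ values reported here.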
The BP decoder works poorly for uncoupled BCCs and the BP thresholds are worse than those of PCCs and SCCs. On the other hand, the MAP thresholds of BCCs are better than those of both PCCs and SCCs. By applying coupling, the BP threshold improves and for $m=1$, the Type-II BCC ensemble has the best coupling threshold.\n\nTable \\ref{Tab:BPThresholdsSCC} shows the thresholds of TCs and SC-TCs for several rates. In the table, for the ensembles $\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$, $\\rho_2$ is the permeability rate of the parity bits of the upper encoder and the lower encoder. For example, $\\rho_2=0.5$ means that half of the bits of $\\bs{v}^{\\text{U}}$ and $\\bs{v}^{\\text{L}}$ are punctured (thus, the resulting code rate is $R=1\/2$). Note that $\\rho_2$ corresponds to permeability $\\rho$ defined in Section~\\ref{sec:RandomP}. Here, we use $\\rho_2$ instead to unify notation with that of SCCs. For the ensembles $\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ (based on the SCCs introduced in \\cite{AGiAa,AGiAb}), for a given code rate $R$ the puncturing rates $\\rho_0$, $\\rho_1$ and $\\rho_2$ (see Section~\\ref{sec:RandomP}) may be optimized. In this paper, we consider $\\rho_0=1$, i.e., the overall code is systematic, and we optimize $\\rho_1$ and $\\rho_2$ such that the MAP threshold of the (uncoupled) SCC is maximized.\\footnote{We remark that nonsystematic codes, i.e., $\\rho_0<1$, lead to better MAP thresholds. 
In this case, the optimum is to puncture the parity bits of the inner encoder last, i.e., for $R<1\/2$, $\\rho_2=1$, and for $R\\ge 1\/2$, $\\rho_0=\\rho_1=0$ and $\\rho_2=1\/(2R)$.} Note that, if $\\rho_0=1$, for a given $R$ the optimization simplifies to the optimization of a single parameter, say $\\rho_2$, since $\\rho_1$ and $\\rho_2$ are related by $\\rho_1=\\frac{1}{R}-1-2\\rho_2$.\\footnote{Alternatively, one may optimize $\\rho_1$ and $\\rho_2$ such that the BP threshold of the SC-SCC is optimized for a given coupling memory $m$.} Rate-compatibility can be guaranteed by choosing $\\rho_1$ and $\\rho_2$ to be decreasing functions of $R$. In the table, we report the coupling thresholds for coupling memories $m=1$, $3$, and $5$, denoted by $\\varepsilon_{\\text{SC}}^{1}$, $\\varepsilon_{\\text{SC}}^{3}$, and $\\varepsilon_{\\text{SC}}^{5}$, respectively. The gap to the Shannon limit is given by $\\delta_{\\text{SH}}=(1-R)-\\varepsilon_{\\text{MAP}}$.\n\n\\begin{table*}[t]\n\\caption{Thresholds for punctured spatially coupled turbo codes}\n\\begin{center}\n\\begin{tabular}{ccccccccccc}\n\\hline\nEnsemble& Rate &states& $\\rho_2$ & $\\varepsilon_{\\text{BP}}$ & $\\varepsilon_{\\text{MAP}}$ &$\\varepsilon^1_{\\mathrm{SC}}$ & $\\varepsilon^3_{\\mathrm{SC}}$ & $\\varepsilon^5_{\\mathrm{SC}}$ &$m_{\\text{min}}$ &$\\delta_{\\text{SH}}$ \\\\\n\\hline\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $1\/3$ &4& 1.0 & 0.6428 & 0.6553 & 0.6553 & 0.6553 & 0.6553 &1 &0.0113\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $1\/3$ &4&1.0 & 0.5405 & 0.6654 & 0.6437 & 0.6650 & 0.6654 &4 &0.0012\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $1\/3$ &8& 1.0 &0.6368& 0.6621 & 0.6617 & 0.6621 & 0.6621&2&0.0045\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $1\/3$ &8&1.0 &0.5026&0.6663&
0.6313&0.6647&0.6662&6&0.0003\\\\[0.5mm]\n\\hline\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $1\/2$ & 4&0.5 & 0.4606 & 0.4689 & 0.4689 & 0.4689 & 0.4689 &1& 0.0311\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $1\/2$ & 4&0.5 & 0.3594 & 0.4981 & 0.4708 & 0.4975 & 0.4981 & 5&0.0019\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $1\/2$ & 8&0.5 &0.4651 & 0.4863&0.4862&0.4863&0.4863&2&0.0137\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $1\/2$ & 8&0.5 &0.3120 & 0.4993&0.4507 &0.4970&0.4992&7&0.0007\\\\[0.5mm]\n\\hline\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $2\/3$ & 4&0.25 & 0.2732 & 0.2772 & 0.2772 & 0.2772 & 0.2772 &1&0.0561\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $2\/3$ & 4&0.25 & 0.2038 & 0.3316 & 0.3303 & 0.3305 & 0.3315 & 6&0.0018\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $2\/3$ & 8&0.25 &0.2945& 0.3080&0.3080& 0.3080& 0.3080&1 &0.0253\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $2\/3$ & 8&0.25 &0.1507&0.3326&0.2710&0.3278 &0.3323&7&0.0007\\\\[0.5mm]\n\\hline\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $3\/4$ & 4&0.166 & 0.1854 & 0.1876 & 0.1876 & 0.1876 & 0.1876 & 1&0.0624\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $3\/4$ & 4&0.166 & 0.1337 & 0.2486 & 0.2155 & 0.2471 & 0.2486 & 5&0.0014\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $3\/4$ & 8&0.166 &0.2103&0.2196&0.2196&0.2196&0.2196&1&0.0304\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $3\/4$ & 8&0.166 &0.0865 &0.2495&0.1827&0.2416&0.2488&8&0.0005\\\\[0.5mm]\n\\hline\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $4\/5$ & 4&0.125 & 0.1376 & 0.1391 & 0.1391 & 0.1391 & 0.1391 & 
1&0.0609\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $4\/5$ & 4&0.125 & 0.0942 & 0.1990 & 0.1644 & 0.1968 & 0.1989 & 7&0.0011\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $4\/5$ & 8&0.125 & 0.1628& 0.1698& 0.1698 &0.1698 &0.1698&1 &0.0302\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $4\/5$ & 8&0.125 &0.0517&0.1996&0.1302 &0.1885&0.1982&8&0.0004\\\\[0.5mm]\n\\hline\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $9\/10$ & 4&0.055 & 0.0578 & 0.0582 & 0.0582 & 0.0582 & 0.0582 & 1&0.0418\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $9\/10$ & 4&0.055 & 0.0269 & 0.0996 & 0.0624 & 0.0930 & 0.0988 & 8&0.0012\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$ & $9\/10$ & 8&0.055 &0.0732&0.0761&0.0761& 0.0761&0.0761&1&0.0239\\\\[0.5mm]\n$\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$ & $9\/10$ & 8&0.055 &0.0128& 0.0999 & 0.0384& 0.0765 &0.0931 &16&0.0001\\\\[0.5mm]\n\\hline\n\n\\end{tabular} \n\\end{center}\n\\label{Tab:BPThresholdsSCC}\n\\end{table*}\n\n\n\\begin{table*}[t]\n\n\t\\caption{Thresholds for punctured Braided Convolutional Codes}\n\n\t\\begin{center}\n\t\t\\begin{tabular}{cccccccccc}\n\t\t\t\\hline\n\t\t\tEnsemble& Rate &states& $\\rho_2$ & $\\varepsilon_{\\text{BP}}$ & $\\varepsilon_{\\text{MAP}}$ &$\\varepsilon^1_{\\mathrm{SC}}$ & $\\varepsilon^3_{\\mathrm{SC}}$ & $\\varepsilon^5_{\\mathrm{SC}}$ & $\\delta_{\\text{SH}}$ \\\\\n\t\t\t\\hline\n\t\t\tType-I& $1\/3$ &4& 1.0 & 0.5541 & 0.6653 & 0.6609&0.6644 & 0.6650& 0.0013\\\\[0.5mm]\n\t\t\tType-II & $1\/3$ &4&1.0 &0.5541 & 0.6653& 0.6651 &0.6653 &0.6653 &0.0013\\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\tType-I & $1\/2$ & 4&0.5 &0.3013&0.4993&0.4932&0.4980& 0.4988&0.0007 \\\\[0.5mm]\n\t\t\tType-II & $1\/2$ & 4&0.5 &0.3013&0.4993& 0.4988&0.4993&0.4993&0.0007 \\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\tType-I & $2\/3$ & 
4&0.25 & -- &0.3331&0.3257&0.3315&0.3325&0.0002\\\\[0.5mm]\n\t\t\tType-II & $2\/3$ & 4&0.25 & -- &0.3331& 0.3323&0.3331 &0.3331&0.0002\\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\tType-I & $3\/4$ & 4&0.166& -- &0.2491&0.2411&0.2473&0.2484&0.0009\\\\[0.5mm]\n\t\t\tType-II & $3\/4$ & 4&0.166 & -- &0.2491&0.2481&0.2491 &0.2491 &0.0009\\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\tType-I & $4\/5$ & 4&0.125 & -- &0.1999&0.1915 &0.1979&0.1991 &0.0001\\\\[0.5mm]\n\t\t\tType-II & $4\/5$ & 4&0.125 & -- &0.1999&0.1986&0.1999&0.1999&0.0001\\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\tType-I & $9\/10$ & 4&0.055 & -- &0.0990&0.0893&0.0966&0.0980&0.0010 \\\\[0.5mm]\n\t\t\tType-II & $9\/10$ & 4&0.055& -- &0.0990&0.0954& 0.0990&0.0990&0.0010\\\\[0.5mm]\n\t\t\n\t\t\t\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular} \n\t\\end{center}\n\t\\label{BPThresholdsBCC}\n\n\\end{table*}\n\nFor large enough coupling memory, we observe threshold saturation for both SC-PCCs and SC-SCCs. The value $m_{\\text{min}}$ in Table~\\ref{Tab:BPThresholdsSCC} denotes the smallest coupling memory for which threshold saturation is observed numerically. Interestingly, thanks to the threshold saturation phenomenon, for large enough coupling memory SC-SCCs achieve better BP threshold than SC-PCCs. We remark that SCCs yield better minimum Hamming distance than PCCs \\cite{Benedetto98Serial}.\n\nComparing ensembles with 8-state component encoders and ensembles with 4-state component encoders, we observe that the MAP threshold improves for all the considered cases, since the overall codes become stronger. For PCCs, the BP threshold also improves for 8-state component encoders, but only with puncturing, i.e., for $R>1\/3$. For SCCs, on the other hand, the BP threshold gets worse if higher memory component encoders are used. Due to this fact, a higher coupling memory $m_{\\text{min}}$ is needed for SC-SCCs with 8-state component encoders until threshold saturation is observed, and this effect becomes more pronounced for larger rates. 
However, the achievable BP thresholds of SC-SCCs are better than those of SC-PCCs for all rates.\n\n\n\nIn Table~\\ref{BPThresholdsBCC}, we give BP thresholds for Type-I and Type-II SC-BCCs with different coupling memories and several rates.\\footnote{The BP threshold of the Type-I BCC with $m=1$ corresponds to the BP threshold of the original BCC.} As for PCCs, $\\rho_2$ is the permeability rate of the parity bits of the upper and lower encoders. We also report the BP threshold and MAP threshold of the uncoupled ensembles.\nFor almost all rates, the BP decoder works poorly for uncoupled BCCs and the BP thresholds are worse than those of PCCs and SCCs (the exception being SCCs with $R=1\/3$). This is especially significant for rates $R\\ge 2\/3$, for which the BP thresholds of uncoupled BCCs are very close to zero.\nOn the other hand, the MAP thresholds of BCCs are better than those of both PCCs and SCCs for all rates. As for SC-PCCs and SC-SCCs, the BP thresholds improve if coupling is applied.\nType-II BCCs yield better thresholds than Type-I BCCs and achieve threshold saturation for small coupling memories.\nIn contrast, for the coupling memories considered, threshold saturation is not observed for Type-I BCCs.\n\n\n\nFor comparison purposes, in Table~\\ref{BPThresholdsLDPC} we report $\\varepsilon_{\\text{BP}}$, $\\varepsilon_{\\text{MAP}}$, and $\\varepsilon_{\\text{SC}}^{1}$ for three rate-$1\/2$ LDPC code ensembles.\nAs is well known, increasing the variable node degree improves the MAP threshold but degrades the BP threshold.\nSimilarly to TCs, applying coupling improves the BP threshold.\nAmong all the ensembles shown in Table \\ref{BPThresholdsLDPC}, the $(5,10)$ LDPC ensemble has the best MAP threshold.\nHowever, for this ensemble the gap between the BP and MAP thresholds is larger than for the other LDPC code ensembles, and coupling (with $m=1$) is not able to completely close this gap; therefore, 
$\\varepsilon_{\\text{SC}}^{1}$ is worse than that of the other two SC-LDPC code ensembles. Among all codes in Table~\\ref{BPThresholdsLDPC}, the best $\\varepsilon_{\\text{SC}}^{1}$ is achieved by the Type-II BCC ensemble. Similarly to the $(5,10)$ LDPC code ensemble, the gap between the BP and the MAP threshold is relatively large for BCCs. However, for BCCs the BP threshold increases significantly after applying coupling with $m=1$.\nIn addition, the only way to increase the MAP threshold of LDPC codes is to increase their variable node degree, whereas for TCs the MAP threshold can be improved in several different ways, e.g., by increasing the component encoder memory or by selecting a stronger ensemble.\n\n\n\\begin{table}[t]\n\n\t\\caption{Thresholds for rate-$1\/2$ TCs, SC-TCs, LDPC and SC-LDPC codes}\n\n\t\\begin{center}\n\t\t\\begin{tabular}{lcccc}\n\t\t\t\\hline\n\t\t\tEnsemble&states&$\\varepsilon_{\\text{BP}}$ & $\\varepsilon_{\\text{MAP}}$ &$\\varepsilon^1_{\\mathrm{SC}}$ \\\\\n\t\t\t\\hline\n\t\t \n\t\t\tLDPC $(3,6)$&-& 0.4294& 0.4881& 0.4880\\\\[0.5mm]\n\t\t\tLDPC $(4,8)$&-&0.3834& 0.4977& 0.4944\\\\[0.5mm]\n\t\t\tLDPC $(5,10)$&-&0.3415&0.4994& 0.4826\\\\[0.5mm]\n\t\t\n\t\t\t\\hline\n\t\t\t$\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$&4&0.4606&0.4689&0.4689\\\\[0.5mm]\n $\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$&4&0.3594&0.4981& 0.4708\\\\[0.5mm]\n $\\mathcal{C}_{\\mathrm{PCC}}$\/$\\mathcal{C}_{\\mathrm{SC-PCC}}$&8&0.4651&0.4863&0.4862\\\\[0.5mm]\n $\\mathcal{C}_{\\mathrm{SCC}}$\/$\\mathcal{C}_{\\mathrm{SC-SCC}}$&8&0.3120&0.4993& 0.4507\\\\\n Type-I $\\mathcal{C}_{\\mathrm{BCC}}$&4&0.3013 &0.4993&0.4932\\\\[0.5mm]\n Type-II $\\mathcal{C}_{\\mathrm{BCC}}$&4& 0.3013&0.4993& 0.4988\\\\[0.5mm]\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular} 
\n\t\\end{center}\n\t\\label{BPThresholdsLDPC}\n\n\\end{table}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{Fig8.pdf}\n\t\\caption{BER results for SC-SCCs with $L=100$ and $m=1$ on the binary erasure channel.}\n\t\\label{SimSCC}\n\\end{figure}\n\nFig.~\\ref{SimSCC} shows the bit error rate (BER) for SC-SCCs with $L=100$ and $m=1$ on the binary erasure channel for two different rates, $R=1\/4$ (solid blue line) and $R=1\/3$ (solid red line). \nHere, we consider the coupling of SCCs with block length $K=1024$; hence, the overall information block length of the SC-SCC ensemble is $101376$ bits.\nIn addition, we plot in the figure the BER curves for the uncoupled ensemble (dotted lines) with $K=3072$.\nFor comparison, we also plot the BER using a sliding window decoder with window size $W=3$ and $K=1024$ (dashed lines), which has a decoding latency equal to that of the uncoupled ensemble.\nFor both rates, the BER improves significantly by applying coupling, and the use of the window decoder entails only a slight performance degradation with respect to the full decoder\n\\footnote{In this work, we focus on the BER of TC and SC-TC ensembles in the waterfall region.\n\t\t\t However, spatial coupling also preserves, or even improves, the error floor performance.\n\t\t\t For example, the minimum distance of each SC-TC ensemble is lower bounded by the minimum distance of the corresponding uncoupled TC ensemble. This can be shown by extending the results for BCCs derived in \\cite{MoloudiISITA}.}. 
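The BP thresholds of the regular LDPC ensembles in Table~\\ref{BPThresholdsLDPC} can be reproduced with a few lines of code, which serves as a sanity check on the DE setup used throughout this section. The following sketch (illustrative helper names, not part of the paper) iterates the standard BEC density evolution update $x^{(i)}=\\varepsilon\\big(1-(1-x^{(i-1)})^{d_{\\mathrm{c}}-1}\\big)^{d_{\\mathrm{v}}-1}$ of a regular $(d_{\\mathrm{v}},d_{\\mathrm{c}})$ ensemble and locates the BP threshold by bisection on the channel erasure probability:

```python
def de_converges(eps, dv, dc, iters=5000, tol=1e-9):
    # BEC density evolution for a regular (dv, dc) LDPC ensemble:
    # x <- eps * (1 - (1 - x)^(dc-1))^(dv-1), starting from x = 1.
    x = 1.0
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:  # erasure probability has vanished: decoding succeeds
            return True
    return False

def bp_threshold(dv, dc, steps=40):
    # Bisect for the largest erasure rate at which DE still converges.
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, dv, dc) else (lo, mid)
    return lo
```

For the $(3,6)$ ensemble this returns a value close to the tabulated $\\varepsilon_{\\text{BP}}=0.4294$ (slightly below it, since the iteration count is finite); the same bisection wrapper applies unchanged to the TC recursions once their transfer functions are available.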
\nWe remark that the comparison between SC-TCs and other types of codes is a new and ongoing area of research.\nIn \\cite{zhu2015window}, the authors compare BCCs and SC-LDPC codes of rate $1\/2$ under the assumption of similar decoding latency for both.\nThe results in \\cite{zhu2015window} show that the considered BCC ensemble outperforms the SC-LDPC code ensemble.\n\n\\section{Threshold Saturation}\\label{Sec7}\n\nThe numerical results in the previous section suggest that threshold\nsaturation occurs for SC-TCs. In this section, for some relevant\nensembles, we prove that, indeed, threshold saturation occurs.\nTo prove threshold saturation we use the proof technique based on potential functions introduced in \\cite{Yedla2012,Yedla2012vector}.\nIn the general case, the DE equations of TCs form a vector recursion.\nHowever, we show that, for some relevant TC ensembles, it is possible to rewrite the DE vector recursion in a form which corresponds to the recursion of a scalar admissible system.\nWe can then prove threshold saturation using the framework in \\cite{Yedla2012} for scalar recursions.\nSince the proof for scalar recursions is easier to describe, we first address this case, and we then highlight the proof for the general case of TCs with a vector recursion based on the framework in \\cite{Yedla2012vector}.\n\n\\begin{definition}[\\cite{Yedla2012,Yedla2014}] \\label{def1} A scalar admissible system $(f,g)$ is defined by the recursion\n\\begin{equation}\n\\label{recursion}\nx^{(i)}=f\\Big( g(x^{(i-1)});\\varepsilon\\Big),\n\\end{equation}\nwhere $f : [0,1] \\times [0,1] \\rightarrow [0,1]$ and $g : [0,1] \\rightarrow [0,1]$ satisfy the following conditions.\n\\begin{enumerate}\n\\item $f$ is increasing in both arguments $x,\\varepsilon \\in (0,1]$; \n\\item $g$ is increasing in $x \\in (0,1]$; \n\\item $f(0;\\varepsilon)=f(x;0)=g(0)=0$;\n\\item $f$ and $g$ have continuous second derivatives.\n\\end{enumerate}\n\\end{definition}\n\n\nIn the following we 
show that the DE equations for some relevant TCs form a scalar admissible system.\n\n\\subsection{Turbo-like codes as Scalar Admissible Systems}\n\\subsubsection{PCC}\nThe DE equations \\eqref{DEPCC1}--\\eqref{DEPCC6} form a vector recursion. However, if the code is built from identical component encoders, i.e., $f_{\\text{U},\\text{s}}=f_{\\text{L},\\text{s}}\\triangleq f_{\\text{s}}$, it follows that\n\\[\np_{\\text{U},\\text{s}}^{(i)}=p_{\\text{L},\\text{s}}^{(i)}\\triangleq x^{(i)}.\n\\]\nUsing this and substituting \\eqref{DEPCC3} into \\eqref{DEPCC1} and \\eqref{DEPCC6} into \\eqref{DEPCC4}, the DE can then be written as\n\\begin{equation}\n\\label{recursionPCC}\nx^{(i)}=f_{\\text{s}}(\\varepsilon x^{(i-1)},\\varepsilon ),\n\\end{equation}\nwith initialization $x^{(0)}=1$.\n\\begin{lemma}\n\\label{LemmaPCC}\nThe DE recursion of a PCC with identical component encoders, given in \\eqref{recursionPCC}, forms a scalar admissible system with $f(x;\\varepsilon)=f_{\\text{s}}(\\varepsilon\\cdot x,\\varepsilon)$ and $g(x)=x$.\n\\end{lemma}\n\\begin{IEEEproof}\n It is easy to show that all conditions in Definition~\\ref{def1} are satisfied for $g(x)=x$.\nWe now prove that $f(x;\\varepsilon)$ satisfies Conditions 1, 3 and 4. Note that $f(x;\\varepsilon)$ is the transfer function of a rate-$1\/2$\nconvolutional encoder. According to equation \\eqref{eq:Transfer1},\nthis function can be written as $f(p_1,p_2)$, where $p_1=\\varepsilon\n\\cdot x$ and $p_2=\\varepsilon$. By Lemma~\\ref{Lemma1}, $f(p_1,p_2)$ is increasing in $p_1$ and $p_2$; therefore, $f(x;\\varepsilon)$ is increasing in $x$ and $\\varepsilon$, and Condition 1 is satisfied.\n\nTo show that Condition 3 holds, it is enough to realize that for $\\varepsilon=0$ the input sequence can be recovered perfectly\nfrom the received sequence, i.e., $f(x;0)=0$, as there is a one-to-one mapping between input sequences and coded\nsequences. 
Furthermore, when $x=0$, the input sequence is fully known from the a-priori information and the erasure probability at the output of the decoder is zero, i.e., $f(0;\\varepsilon)=0$.\n\nFinally, $f(x;\\varepsilon)$ is a rational function and its poles are outside the interval $x,\\varepsilon \\in [0,1]$ (otherwise we may get infinite output erasure probability for a finite input erasure probability), hence it has continuous first and second derivatives inside this interval.\n\\end{IEEEproof} \n\n\\subsubsection{SCC}\nConsider the DE equations of the SCC ensemble in \\eqref{DESCC1}--\\eqref{DESCC6}, which form a vector recursion. For identical component encoders, $f_{\\text{I},\\text{s}}=f_{\\text{O},\\text{s}}\\triangleq f_{\\text{s}}$ and $f_{\\text{I},\\text{p}}=f_{\\text{O},\\text{p}}\\triangleq f_{\\text{p}}$. \nUsing this and $q_{\\text{I}}^{(i)}\\triangleq x^{(i)}$, by substituting \\eqref{DESCC2}--\\eqref{DESCC6} into \\eqref{DESCC1}, the DE recursion can be rewritten as \n\\begin{equation}\n\\label{eq:SCCrec}\nx^{(i)}=\\varepsilon \\cdot f_{\\text{s}}\\Big(\\varepsilon g(x^{(i-1)}),\\varepsilon\\Big),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq:gSCC}\ng(x^{(i)})=\\frac{f_{\\text{s}}\\Big(x^{(i)},x^{(i)}\\Big)+f_{\\text{p}}\\Big(x^{(i)},x^{(i)}\\Big)}{2},\n\\end{equation}\nand the initial condition is $x^{(0)}=1$.\n\n\\begin{lemma}\n\\label{LemmaSCC}\nThe DE recursion of a SCC with identical component encoders, given in \\eqref{eq:SCCrec} and \\eqref{eq:gSCC}, forms a scalar admissible system with $f(x;\\varepsilon)=\\varepsilon \\cdot f_{\\text{s}}(\\varepsilon \\cdot x, \\varepsilon)$ and\n\\begin{align*}\ng(x)=\\frac{f_{\\text{s}}(x,x)+f_{\\text{p}}(x,x)}{2}.\n\\end{align*}\n\\end{lemma}\n\\begin{IEEEproof}\nThe proof follows the same arguments as the proof of Lemma~\\ref{LemmaPCC}.\n\\end{IEEEproof}\n\n\n\\subsubsection{BCC}\nSimilarly to PCCs and SCCs, the DE equations of BCCs (see \\eqref{DEBCC4}--\\eqref{DEBCC6}) form a vector recursion. 
With identical component encoders, due to the symmetric structure of the code, $f_{\\text{U},k}=f_{\\text{L},k}\\triangleq f_k$ and $p_{\\text{U},k}^{(i)}=p_{\\text{L},k}^{(i)}\\triangleq x_k^{(i)}$ for $k=1,2,3$.\nUsing this, \\eqref{DEBCC4}--\\eqref{DEBCC6} can be rewritten as\n\\begin{align}\n\\label{BCC1}\nx_1^{(i)}&=f_1\\Big(\\varepsilon \\cdot x_1^{(i-1)},\\varepsilon \\cdot x_3^{(i-1)},\\varepsilon\\cdot x_2^{(i-1)}\\Big)\\\\\n\\label{BCC2}\nx_2^{(i)}&=f_2\\Big(\\varepsilon \\cdot x_1^{(i-1)},\\varepsilon \\cdot x_3^{(i-1)},\\varepsilon \\cdot x_2^{(i-1)}\\Big)\\\\\n\\label{BCC3}\nx_3^{(i)}&=f_3\\Big(\\varepsilon \\cdot x_1^{(i-1)},\\varepsilon \\cdot x_3^{(i-1)},\\varepsilon \\cdot x_2^{(i-1)}\\Big).\n\\end{align}\n\nThe above DE equations are still a vector recursion.\nTo write the recursion in scalar form, it is necessary to have identical transfer functions for all the edges connected to the factor nodes $T^{\\text{U}}$ and $T^{\\text{L}}$. This is needed because all variable nodes in a BCC receive a-priori information.\nIn order to achieve this property, we can apply some averaging over the different types of code symbols. In particular, we can randomly permute the order of the encoder outputs $v_\\tau^{(l)}$, $l=1,\\dots,n$. 
For each trellis section $\\tau$ the order of these $n$ symbols is chosen independently according to a uniform distribution.\nEquivalently, instead of performing this permutation on the encoder outputs we can define a corresponding component encoder with a time-varying trellis in which the branch labels are permuted accordingly.\nThen, it follows that $x_1^{(i)}=x_2^{(i)}=x_3^{(i)}\\triangleq x^{(i)}$\nand all transfer functions are equal to the average of the transfer functions $f_1,f_2,f_3$,\n\\[\nf_{\\text{ave}}=\\frac{f_1+f_2+f_3}{3}.\n\\] \nUsing this, the DE equations can be simplified as\n\\begin{equation}\n\\label{eq:BCCScalar}\nx^{(i)}=f_{\\text{ave}}(\\varepsilon \\cdot x^{(i-1)},\\varepsilon \\cdot x^{(i-1)},\\varepsilon \\cdot x^{(i-1)}).\n\\end{equation}\n\\begin{lemma}\n\\label{LemmaBCC}\nThe DE recursion of a BCC with identical component encoders and time-varying trellises, given in \\eqref{eq:BCCScalar}, forms a scalar admissible system with $f(x;\\varepsilon)=f_{\\text{ave}}(\\varepsilon \\cdot x,\\varepsilon \\cdot x,\\varepsilon \\cdot x)$ and $g(x)=x$.\n\\end{lemma}\n\\begin{IEEEproof}\nThe proof follows the same arguments as the proof of Lemma~\\ref{LemmaPCC}.\n\\end{IEEEproof}\n\n\\subsection{Single System Potential}\n\\begin{definition}[\\cite{Yedla2012,Yedla2014}] \\label{def2}\nFor a scalar admissible system, defined in Definition~\\ref{def1}, the potential function $U(x;\\varepsilon)$ is\n\\begin{align}\n\\label{Potential}\nU(x;\\varepsilon)&=\\int_{0}^{x}\\big{(}z-f(g(z);\\varepsilon)\\big{)}g'(z)dz \\\\\n&=xg(x)-G(x)-F(g(x);\\varepsilon),\\nonumber\n\\end{align}\nwhere $F(x;\\varepsilon)=\\int_{0}^{x}f(z;\\varepsilon) dz$ and $G(x)=\\int_{0}^{x}g(z) dz$.\n\\end{definition}\n\\begin{proposition}[\\cite{Yedla2012,Yedla2014}]\nThe potential function has the following properties.\n\\begin{enumerate}\n\\item $U(x;\\varepsilon)$ is strictly decreasing in $\\varepsilon \\in (0,1]$;\n\\item An $x\\in [0,1]$ is a fixed point of the 
recursion\n(\\ref{recursion}) if and only if it is a stationary point of the corresponding potential function.\n\\end{enumerate}\n\\end{proposition}\n\\begin{definition}[\\cite{Yedla2012,Yedla2014}] \\label{defBP} If the DE recursion is the recursion of a BP decoder, the BP threshold is \\cite{Yedla2012}\n\\[\n\\varepsilon^{\\text{BP}}=\\sup\\Big\\{\\varepsilon\n \\in[0,1] : U'(x;\\varepsilon)>0,\\; \\forall x\\in (0,1]\\Big\\} \\ .\n\\] \n\\end{definition}\n According to Definition~\\ref{defBP}, for $\\varepsilon < \\varepsilon^{\\text{BP}}$, the derivative of the potential function is strictly positive for $x\\in (0,1]$, i.e., the potential function has no stationary point in $x\\in (0,1]$. \n\\begin{definition}[\\cite{Yedla2012,Yedla2014}]\n\\label{defPotth}\nFor $\\varepsilon >\\varepsilon^{\\text{BP}} $, the minimum unstable fixed point is $u(\\varepsilon)=\\sup\\big\\{\\tilde{x} \\in [0,1] : f(g(x);\\varepsilon)<x,\\; \\forall x\\in (0,\\tilde{x})\\big\\}$, and the potential threshold is\n\\begin{align*}\n\\varepsilon^{*}=\\sup\\Big\\{\\varepsilon \\in [0,1] : u(\\varepsilon)>0, \\min_{x \\in [u(\\varepsilon),1]} U(x;\\varepsilon)> 0 \\Big\\} \\ .\n\\end{align*}\n\\end{definition}\nThe potential threshold depends on the functions $f(x;\\varepsilon)$ and $g(x)$.\n\n\\begin{example}\nConsider rate-$1\/3$ PCCs with identical 2-state component encoders with generator matrix ${\\bs{G}}=(1,1\/3)$.\nFor this code ensemble,\n\\[\nf_{\\text{s}}(\\varepsilon\\cdot x,\\varepsilon)=\\frac{x\\varepsilon^2(2-2\\varepsilon+x\\varepsilon^2)}{(1-\\varepsilon+x\\varepsilon^2)^2} \\ .\n\\]\nTherefore,\n\\[\nF_{\\text{s}}(x;\\varepsilon)=\\frac{x^2\\varepsilon^2}{1-\\varepsilon+x\\varepsilon^2} \\ ,\n\\]\nand\n\\[\nU(x;\\varepsilon)=\\frac{x^3\\varepsilon^2+(1-\\varepsilon-2\\varepsilon^2)x^2}{2(1-\\varepsilon+x\\varepsilon^2)} \\ . 
\n\\] \\hfill $\\triangle$\n\\end{example}\n\\begin{example}\nConsider the PCC ensemble in Fig.~\\ref{Uncoupled}(b) with identical component encoders with generator matrix $\\bs{G}=(1,5\/7)$.\nThe DE recursion of this ensemble is given in \\eqref{recursionPCC}, where $f_{\\text{s}}$ is the transfer function of the $(1,5\/7)$ component encoder. The corresponding potential function is\n\\begin{equation}\nU(x;\\varepsilon)=x^2-G(x)-F_{\\text{s}}(x;\\varepsilon)=\\frac{x^2}{2}-F_{\\text{s}}(x;\\varepsilon) \\ ,\n\\end{equation} \nwhere $F_{\\text{s}}(x;\\varepsilon)=\\int_{0}^{x}f_{\\text{s}}(\\varepsilon\\cdot z,\\varepsilon) dz$ and $G(x)=\\int_{0}^{x}g(z)dz=\\frac{x^2}{2}$. The potential function is shown in Fig.~\\ref{PotPCC} for several values of $\\varepsilon$.\nAs illustrated, for $\\varepsilon<0.6428$ the potential function has no stationary point. The BP threshold and the potential threshold are $\\varepsilon=0.6428$ and $\\varepsilon=0.6553$, respectively (see Definitions~\\ref{defBP} and \\ref{defPotth}).\nThese results match the DE results in Table~\\ref{Tab:BPThresholdsSCC}. \\hfill $\\triangle$\n\\end{example}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{Fig9V3.pdf}\n\\caption{Potential function of a PCC ensemble.}\n\\label{PotPCC}\n\\end{figure}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{Fig10V3.pdf}\n\\caption{Potential function of a SCC ensemble.}\n\\label{PotSCC}\n\\end{figure}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{Fig11V3.pdf}\n\\caption{Potential function of a BCC ensemble.}\n\\label{PotBCC}\n\\end{figure}\n\\begin{example} The potential function of the SCC ensemble in Fig.~\\ref{Uncoupled}(c) with identical component encoders with generator matrix $\\bs{G}=(1,5\/7)$ is shown in Fig.~\\ref{PotSCC}. 
The BP threshold and the potential threshold are \n$\\varepsilon=0.689$ and $\\varepsilon=0.748$, respectively, which match the DE results in Table~\\ref{Tab:BPThresholdsSCC}. \\hfill $\\triangle$\n\\end{example}\n\\begin{example} Consider the BCC ensemble in\n Fig.~\\ref{Uncoupled}(d) with identical component encoders with generator matrix given in \\eqref{eqG} and time-varying trellises. The potential function of this code is depicted in Fig.~\\ref{PotBCC}. The BP threshold and the potential threshold are $\\varepsilon=0.5522$ and $\\varepsilon=0.6654$, respectively. Note that these values are slightly different from the values in\nTable~\\ref{BPThresholdsBCC}. This is because we consider an ensemble with time-varying trellises, which can be modeled by means of a scalar recursion. The ensemble considered in Table~\\ref{BPThresholdsBCC} needs to be analyzed by means of a vector recursion. \\hfill $\\triangle$\n\\end{example}\n\n\\subsection{Coupled System and Threshold Saturation}\n\\begin{theorem}\\label{thm1}\nConsider a spatially coupled system defined by the following recursion at time $t$,\n\\begin{align}\n\\label{SCrecursion}\nx_t^{(i)}=\\frac{1}{1+m}\\sum_{j=0}^{m}f_{t+j}\\Big(\\frac{1}{1+m}\\sum_{k=0}^{m}g(x_{t+j-k}^{(i-1)});\\varepsilon\\Big).\n\\end{align}\nIf $f(x;\\varepsilon)$ and $g(x)$ form a scalar admissible system, then for large enough coupling memory $m$ and $\\varepsilon < \\varepsilon^*$, the only fixed point of the recursion is\n$x=0$.\n\\end{theorem}\n\\begin{IEEEproof}\nThe proof follows from \\cite{Yedla2012}.\n\\end{IEEEproof}\n\nIn the following we show that the DE recursions of SC-TCs (with identical component encoders) can be written in the form \\eqref{SCrecursion}. As a result, threshold saturation occurs for these ensembles.\n\n\\subsubsection{PCCs}\nConsider the SC-PCC ensemble in Fig.~\\ref{Coupled}(a) with identical component encoders.\nDue to the symmetric coupling structure, it follows that (cf. 
\\eqref{DESCPCC1} and \\eqref{DESCPCC2})\n\\[\n\\bar{q}_{\\text{U}}^{(i,t)}=\\bar{q}_{\\text{L}}^{(i,t)}\\triangleq x_{t}^{(i)}.\n\\]\nNow, using $x_t^{(i)}$ in \\eqref{DESCPCC3} and \\eqref{DESCPCC6}, we can write\n\\begin{align}\n\\label{eq:QLi}\nq_{\\text{L}}^{(i,t)}=q_{\\text{U}}^{(i,t)}=\\varepsilon \\cdot \\frac{1}{m+1}\\sum_{k=0}^{m}x_{t-k}^{(i-1)}.\n\\end{align}\nFinally, by substituting \\eqref{eq:QLi} into \\eqref{DESCPCC4} and \\eqref{DESCPCC5} and the results into \\eqref{DESCPCC1} and \\eqref{DESCPCC2}, the recursion of SC-PCCs can be rewritten as\n\\begin{equation}\n\\label{SCPCCrecursion}\nx_t^{(i)}=\\frac{1}{1+m}\\sum_{j=0}^{m}f_{\\text{s},t+j}\\Big(\\frac{\\varepsilon}{m+1}\\cdot\\sum_{k=0}^{m}x_{t+j-k}^{(i-1)},\\varepsilon\\Big).\n\\end{equation}\nNote that the recursion in \\eqref{SCPCCrecursion} is identical to the recursion in \\eqref{SCrecursion}.\n\n\\subsubsection{SCCs}\n\nConsider the SC-SCC ensemble in Fig.~\\ref{Coupled}(b) with identical component encoders. Define $x_t^{(i)} \\triangleq q_{\\text{I}}^{(i,t)}$ (see \\eqref{DESCSCC2}). Then, using it in \\eqref{DESCSCC3}--\\eqref{DESCSCC6} and substituting the result into \\eqref{DESCSCC2}, the recursion of a SC-SCC can be rewritten as\n\\begin{equation}\n\\label{SCSCCrecursion}\nx_t^{(i)}=\\frac{1}{1+m}\\sum_{j=0}^{m}\\varepsilon \\cdot f_{\\text{s},t+j}\\Big(\\frac{\\varepsilon}{m+1}\\cdot\\sum_{k=0}^{m}g(x_{t+j-k}^{(i-1)}),\\varepsilon\\Big),\n\\end{equation}\nwhere $g(x)$ is given in \\eqref{eq:gSCC}. The recursion in \\eqref{SCSCCrecursion} is identical to the recursion in Theorem \\ref{thm1}. 
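The coupled recursion in Theorem~\\ref{thm1} is also easy to evaluate numerically for any scalar admissible pair $(f,g)$. The sketch below (illustrative names, not from the paper) uses the regular $(3,6)$ LDPC pair $f(x;\\varepsilon)=\\varepsilon(1-(1-x)^{5})^{2}$, $g(x)=x$ from Table~\\ref{BPThresholdsLDPC} as a stand-in, since the TC transfer functions $f_{\\text{s}}$ are not available here in closed form. It iterates a terminated chain of $L$ sections, with $x_t=0$ outside the chain, and locates the BP threshold of the coupled system by bisection:

```python
def coupled_de_converges(eps, m=1, L=20, iters=3000, tol=1e-9):
    # Stand-in scalar admissible pair: regular (3,6) LDPC ensemble on the BEC.
    f = lambda x, e: e * (1.0 - (1.0 - x) ** 5) ** 2
    g = lambda x: x
    x = [1.0] * L  # chain positions 0..L-1; x_t = 0 outside (termination)
    for _ in range(iters):
        # s[u] = (1/(m+1)) * sum_{k=0..m} g(x[u-k]), zeros outside the chain
        s = [sum(g(x[u - k]) for k in range(m + 1) if 0 <= u - k < L) / (m + 1)
             for u in range(L + m)]
        # x_t = (1/(m+1)) * sum_{j=0..m} f(s[t+j]; eps)
        x = [sum(f(s[t + j], eps) for j in range(m + 1)) / (m + 1)
             for t in range(L)]
        if max(x) < tol:
            return True
    return False

def coupled_threshold(m=1, steps=18):
    lo, hi = 0.0, 1.0  # bisection on the channel erasure probability
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if coupled_de_converges(mid, m=m) else (lo, mid)
    return lo
```

Already with $m=1$, the terminated chain decodes well beyond the uncoupled BP threshold $\\varepsilon_{\\text{BP}}\\approx 0.4294$ of the same $(f,g)$, illustrating the threshold improvement predicted by Theorem~\\ref{thm1}; for a short finite chain the estimate also includes a small gain due to the rate loss at the boundaries.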
\n\n\\subsubsection{BCCs}\n\nConsider a coupling for BCCs slightly different from the one for Type-II BCCs.\nAt time $t$, each of the parity sequences $\\bs{v}^{\\text{U}}_t$ and $\\bs{v}^{\\text{L}}_t$ is divided into $m+1$ sequences, $\\bs{v}_{t,j}^{\\text{U}}$, $j=0,\\dots,m$, and $\\bs{v}_{t,j}^{\\text{L}}$, $j=0,\\dots,m$, respectively (in Type-II BCCs they are divided into $m$ sequences).\nThe sequences $\\bs{v}_{t-j,j}^{\\text{U}}$ and $\\bs{v}_{t-j,j}^{\\text{L}}$ are multiplexed and reordered, and are used as the second input of the lower and upper encoder, respectively. Note that with this coupling, part of the parity bits at time $t$ are used as input at the same time instant $t$. Now, similarly to uncoupled BCCs, consider identical time-varying trellises. Let $x^{(i)}_t$ denote the extrinsic erasure probability from $T^{\\text{U}}_t$ through all its edges in the $i$th iteration. The erasure probabilities to $T^{\\text{U}}_t$ through all its incoming edges are equal and are given by the average of the erasure probabilities from the variable nodes $\\bs{v}_{t'}$, $t'= t-m,\\dots,t$,\n\\[\nq_t^{(i)}=\\frac{\\varepsilon}{1+m}\\sum_{k=0}^{m}x_{t-k}^{(i-1)}.\n\\]\nThus, the erasure probabilities from $T^{\\text{U}}_t$ and $T^{\\text{L}}_t$ are identical and equal to $f_{\\text{ave},t}(q_t^{(i)},q_t^{(i)},q_t^{(i)})$. Finally, the recursion at time slot $t$ is\n\\begin{equation}\n\\label{eq:SCBCC}\nx_t^{(i)}=\\frac{1}{1+m}\\sum_{j=0}^{m}f_{\\text{ave},t+j}(q_{t+j}^{(i)},q_{t+j}^{(i)},q_{t+j}^{(i)}).\n\\end{equation}\nThe recursion in \\eqref{eq:SCBCC} is identical to \\eqref{SCrecursion}.\n\n\\subsection{Random Puncturing and Scalar Admissible System}\n\nIn the following, we show that the DE recursion of punctured TC ensembles can also be rewritten as a scalar admissible system for some particular cases. 
Then, threshold saturation follows from the discussion in the previous subsection.\n\\subsubsection{PCC} Consider the PCC ensemble with\nidentical component encoders and random puncturing of the parity bits with\npermeability rate $\\rho$. The DE recursion can be rewritten as\n\\[\nx^{(i)}=f_{\\text{s}}(\\varepsilon x^{(i-1)},1-(1-\\varepsilon)\\rho).\n\\]\nThe above equation is the recursion of a scalar admissible system and satisfies the\nconditions in Definition \\ref{def1}, where $g(x)=x$ and\n$f(x;\\varepsilon)=f_{\\text{s}}(\\varepsilon\\cdot x, 1-(1-\\varepsilon)\\rho)$.\n\n\\subsubsection{SCC} \n\nConsider random puncturing of the SCC ensemble\nwith identical component encoders. Assuming $\\rho_0=\\rho_1$ (i.e., we also puncture the systematic bits of the outer code),\nwe can rewrite the DE recursion as\n\\[\nx^{(i)}=\\varepsilon_{\\rho_1} \\cdot f_{\\text{s}}(\\varepsilon_{\\rho_1} x^{(i-1)},\\varepsilon_{\\rho_2}),\n\\]\nwhere $\\varepsilon_{\\rho_1}=1-(1-\\varepsilon)\\rho_1$ and\n$\\varepsilon_{\\rho_2}=1-(1-\\varepsilon)\\rho_2$. The above equation is\nthe recursion of a scalar admissible system, where\n$f(x;\\varepsilon)=\\varepsilon_{\\rho_1} f_{\\text{s}}(\\varepsilon_{\\rho_1}\\cdot\nx,\\varepsilon_{\\rho_2} )$ and $g(x)$ is given by \\eqref{eq:gSCC}.\n\n\\subsubsection{BCC} \n\nConsider random puncturing of the BCC ensemble with identical\ntime-varying trellises. Assume that the systematic bits and the parity bits of the upper and lower encoders are punctured with the same permeability rate $\\rho$. Then, the DE recursion can be\nrewritten as \\eqref{eq:BCCScalar}, where $\\varepsilon$ should\nbe replaced by $\\varepsilon_{\\rho}=1-(1-\\varepsilon)\\rho$.\n\n\\subsection{Turbo-like Codes as Vector Admissible Systems}\nIn general, the DE recursions of TCs are vector recursions. In this case, it is\npossible to prove threshold saturation using the technique proposed in \\cite{Yedla2012vector} for vector recursions. 
The proof is similar to that of scalar recursions, albeit more involved. In the following, we show how to rewrite the recursion of punctured PCCs as a vector admissible system recursion. Then, following \cite{Yedla2012vector}, we can prove threshold saturation. \nUsing the same technique, it is possible to prove threshold saturation for SCCs and BCCs as well.\n\n\nConsider the DE equations of the PCC ensemble in \eqref{DEPCC1}--\eqref{DEPCC6}. To reduce the number of equations, substitute \n\eqref{DEPCC3} and \eqref{DEPCC6} into \eqref{DEPCC1} and \eqref{DEPCC4},\nrespectively.\nConsider random puncturing of information bits, upper encoder parity bits and lower encoder parity bits with permeability rates\n$\rho_0$, $\rho_1$ and $\rho_2$, respectively. By considering\n$x_1^{(i)}\triangleq p_{\text{U,s}} $ and $x_2^{(i)}\triangleq\np_{\text{L,s}} $, the DE recursion can be simplified to \n\[\nx_1^{(i)}=f_{\text{U,s}}(\varepsilon_{\rho_0}\cdot x_2^{(i-1)},\varepsilon_{\rho_1})\n\]\n\[\nx_2^{(i)}=f_{\text{L,s}}(\varepsilon_{\rho_0}\cdot x_1^{(i-1)},\varepsilon_{\rho_2}).\n\]\nThe above equations can be written in vector format as\n\begin{equation}\n\label{PCCvector}\n\bs{x}^{(i)}=\bs{f}(\bs{g}(\bs{x}^{(i-1)});\varepsilon),\n\end{equation}\nwhere $\bs{x}=[x_1, x_2]$, $\bs{f}(\bs{x};\varepsilon)=[f_{\text{U,s}}(\varepsilon_{\rho_0} \cdot x_1,\varepsilon_{\rho_1}),f_{\text{L,s}}(\varepsilon_{\rho_0} \cdot x_2,\n\varepsilon_{\rho_2})]$ and $\bs{g}(\bs{x})=[x_2,x_1]$.\nIt is easy to verify that the recursion in \eqref{PCCvector} satisfies the\nconditions in \cite[Def.~1]{Yedla2012vector}, hence \eqref{PCCvector} is the recursion of a vector\nadmissible system.
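As a quick numerical illustration of the recursion in \eqref{PCCvector}, the sketch below iterates $\bs{x}^{(i)}=\bs{f}(\bs{g}(\bs{x}^{(i-1)});\varepsilon)$ with $\bs{g}$ swapping the two components. The component transfer functions are passed in as arguments (their closed forms are not repeated here), and the toy function in the usage example is an assumption for illustration only:

```python
def pcc_vector_de(f_U, f_L, eps, rho0=1.0, rho1=1.0, rho2=1.0, n_iter=200):
    """Iterate the vector recursion x^{(i)} = f(g(x^{(i-1)}); eps) of
    Eq. (PCCvector), where g(x) = [x2, x1] swaps the components."""
    e0 = 1.0 - (1.0 - eps) * rho0  # eps_rho0
    e1 = 1.0 - (1.0 - eps) * rho1  # eps_rho1
    e2 = 1.0 - (1.0 - eps) * rho2  # eps_rho2
    x1 = x2 = 1.0                  # worst-case initialization
    for _ in range(n_iter):
        # simultaneous update: f is applied to the swapped vector g(x)
        x1, x2 = f_U(e0 * x2, e1), f_L(e0 * x1, e2)
    return x1, x2
```

With identical component transfer functions the trajectory stays symmetric, $x_1^{(i)}=x_2^{(i)}$, mirroring the symmetric ensembles considered in the scalar case.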
\nFor this vector admissible system,\nthe line integral in \cite[Eq.~(2)]{Yedla2012vector} is path independent and the potential function is well defined.\nThus, we can define (see \cite{Yedla2012vector}) $\bs{D}=I_{2\times 2}$, $G=x_1\cdot x_2$ and\n\[\nF=\int_0^{x_1} f_{\text{U,s}}(\varepsilon_{\rho_0} \cdot\nz,\varepsilon_{\rho_1}) \; dz+\int_0^{x_2} f_{\text{L,s}}(\varepsilon_{\rho_0} \cdot z,\varepsilon_{\rho_2}) \; dz.\n\]\nIt is possible to show that the DE recursion of SC-PCCs can be\nrewritten in the same form as \cite[Eq.~(5)]{Yedla2012vector} and by\nusing \cite[Th.~1]{Yedla2012vector}, threshold saturation can be proven.\n\section{Conclusion}\label{Sec8}\nIn this paper we investigated the impact of spatial coupling on the BP decoding threshold of turbo-like codes. We introduced the concept of spatial coupling for PCCs and SCCs, and generalized the concept of coupling for BCCs.\nConsidering transmission over the BEC, we derived the exact DE equations for uncoupled and coupled ensembles. \nFor all spatially coupled ensembles, the BP threshold improves and our numerical results suggest that threshold saturation occurs if the coupling memory is chosen sufficiently large. We therefore constructed rate-compatible families of SC-TCs that achieve close-to-capacity performance for a wide range of code rates.\n\nWe showed that the DE equations of SC-TC ensembles with identical component encoders can be properly rewritten as a scalar recursion. \nFor SC-PCCs, SC-SCCs and BCCs we then proved threshold saturation analytically, using the proof technique based on potential functions proposed in \cite{Yedla2012,Yedla2014}. Finally, we demonstrated how vector recursions can be used to extend the proof to more general ensembles.\n\nA generalization of our results to general binary-input memoryless channels is challenging, because the transfer functions of the component decoders can no longer be obtained in closed form.
Even a numerical computation of the exact thresholds is difficult, but Monte Carlo methods and Gaussian approximation techniques could be helpful tools for finding approximate thresholds. EXIT charts, for example, have been widely used for analyzing uncoupled TCs and may be useful for estimating the thresholds of SC-TCs. A connection between EXIT functions and potential functions of spatially coupled systems is given in \cite{KudekarWaveLike}. An investigation of SC-TC ensembles along this line may be an interesting direction for future work. The simulation results for SC-BCCs over the AWGN channel in \cite{ZhangBCC} and \cite{zhu2015window} clearly show that spatial coupling significantly improves the performance, suggesting that threshold saturation also occurs for this channel.\n\nThe invention of turbo codes and the rediscovery of LDPC codes made it possible to approach capacity with practical codes.\nToday, both turbo-like codes and LDPC codes are ubiquitous in communication standards.\nIn the academic arena, however, interest in turbo-like codes has been declining in recent years in favor of the (arguably) more mathematically appealing LDPC codes.\nThe invention of spatially coupled LDPC codes has exacerbated this situation.\nWithout spatial coupling, it is well known that PCCs yield good BP thresholds but poor error floors, while SCCs and BCCs show low error floors but poor BP thresholds.\nOur SC-TCs, however, demonstrate that turbo-like codes can also greatly benefit from spatial coupling.\nThe concept of spatial coupling opens some new degrees of freedom in the design of codes on graphs: designing a concatenated coding scheme for achieving the best BP threshold in the uncoupled case may not necessarily lead to the best overall performance.\nInstead of optimizing the component encoder characteristics for BP decoding, it is possible to optimize the MAP decoding threshold and rely on the threshold saturation effect of spatial coupling.\nPowerful code
ensembles with strong distance properties such as SCCs and BCCs can then perform close to capacity with low-complexity iterative decoding.\nWe hope that our work on spatially coupled turbo-like codes will trigger some new interest in turbo-like coding structures.\n\n\n\n\n\n\n\n\n\input{SCTCsITTransaction.bbl}\n\n\balance\n\begin{IEEEbiographynophoto}{Saeedeh Moloudi}\nreceived the Master's degree in Wireless Communications from Shiraz University, Iran, in 2012. Since September 2013, she has been a PhD candidate at the Department of Electrical and Information Technology, Lund University. Her main research interests include design and analysis of coding systems and graph based iterative algorithms.\n\end{IEEEbiographynophoto}\n\n\begin{IEEEbiographynophoto}{Michael Lentmaier}\n received the Dipl. Ing. degree in electrical engineering from University of Ulm, Germany in 1998, and the Ph.D. degree in telecommunication theory from Lund University, Sweden in 2003. He then worked as a Post-Doctoral Research Associate at University of Notre Dame, Indiana and at University of Ulm. From 2005 to 2007 he was with the Institute of Communications and Navigation of the German Aerospace Center (DLR) in Oberpfaffenhofen, where he worked on signal processing techniques in satellite navigation receivers. From 2008 to 2012 he was a senior researcher and lecturer at the Vodafone Chair Mobile Communications Systems at TU Dresden, where he was heading the Algorithms and Coding research group. Since January 2013 he has been an Associate Professor at the Department of Electrical and Information Technology at Lund University. His research interests include design and analysis of coding systems, graph based iterative algorithms and Bayesian methods applied to decoding, detection and estimation in communication systems.
He is a senior member of the IEEE and served as an editor for IEEE Communications Letters (2010-2013), IEEE Transactions on Communications (2014-2017), and IEEE Transactions on Information Theory (since April 2017). He was awarded the Communications Society and Information Theory Society Joint Paper Award (2012) for his paper \"Iterative Decoding Threshold Analysis for LDPC Convolutional Codes\".\n\\end{IEEEbiographynophoto}\n\n\\begin{IEEEbiographynophoto}{Alexandre Graell i Amat}\nreceived the MSc degree in Telecommunications Engineering from the Universitat Polit\u00e8cnica de Catalunya, Barcelona, Catalonia, Spain, in 2001, and the MSc and the PhD degrees in Electrical Engineering from the Politecnico di Torino, Turin, Italy, in 2000 and 2004, respectively. From September 2001 to May 2002, he was a Visiting Scholar at the University of California San Diego, La Jolla, CA. From September 2002 to May 2003, he held a Visiting Appointment at the Universitat Pompeu Fabra and at the Telecommunications Technological Center of Catalonia, both in Barcelona. From 2001 to 2004, he also held a part-time appointment at STMicroelectronics Data Storage Division, Milan, Italy, as a Consultant on coding for magnetic recording channels. From 2004 to 2005, he was a Visiting Professor at the Universitat Pompeu Fabra, Barcelona, Spain. From 2006 to 2010, he was with the Department of Electronics, Telecom Bretagne (former ENST Bretagne), Brest, France. In 2011, he joined the Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden, where he is currently a Professor. His research interests include the areas of modern coding theory, distributed storage, and optical communications.\nProf. Graell i Amat is currently Editor-at-Large of the IEEE TRANSACTIONS ON COMMUNICATIONS. He was an Associate Editor of the IEEE TRANSACTIONS ON COMMUNICATIONS from 2011 to 2015, and the IEEE COMMUNICATIONS LETTERS from 2011 to 2013. 
He was the General Co-Chair of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, 2012. He received the postdoctoral Juan de la Cierva Fellowship from the Spanish Ministry of Education and Science and the Marie Curie Intra-European Fellowship from the European Commission. He received the IEEE Communications Society 2010 Europe, Middle East, and Africa Region Outstanding Young Researcher Award.\n\\end{IEEEbiographynophoto}\n\\end{document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{Int}\n\nGeneral Theory of Relativity (GR) is the cornerstone of\nastrophysics and cosmology, giving predictions with unprecedented success. \nAt astrophysical scales GR has been tested in, for example, the solar system, stellar dynamics, black hole formation and evolution, among others (see for instance\\cite{FischbachTalmadge1999,Will1993,Kamionkowski:2007wv,*Matts,*Peebles:2013hla}). \nHowever, GR is being currently tested with various phenomena that can be \nsignificant challenges to the GR theory, generating important changes never seen before. Ones of the major challenges of modern cosmology are undoubtedly dark matter (DM) and dark energy. They comprise approximately $27\\%$ for DM and $68\\%$ for dark energy, of our \nuniverse\\cite{PlanckCollaboration2013} allowing the formation of large scale structures\\cite{FW,Diaferio:2008jy}.\nDark matter has been invoked as the mechanism to stabilize spiral galaxies and to provide with a matter distribution component to explain the observed rotation curves.\nNowadays the best model of the universe we have is the \n\\emph{concordance} or $\\Lambda$CDM model that has been successful in explaining the very large-scale structure formation, the statistics of the distribution of galaxy clusters, the temperature anisotropies of the cosmic microwave background radiation (CMB) and many other astronomical observations. 
\nIn spite of all the successes we have mentioned, \nthis model has several problems; for example, it predicts too much power on small scales\cite{Rodriguez-Meza:2012}, it overpredicts the number of observed satellite galaxies\cite{Klypin1999,Moore1999,Papastergis2011}, it predicts halo profiles that are denser and \ncuspier than those inferred observationally\cite{Navarro1997,Subramanian2000,Salucci}, and it also predicts a population of massive and concentrated subhaloes that is inconsistent \nwith observations of the kinematics of Milky Way satellites\cite{BoylanKolchin2012}.\n\nOne of the first astronomical observations that brought attention to DM was the observation of rotation curves of spiral galaxies by Rubin and coworkers\cite{Rubin2001}; \nthese observations turned out to be the main tool to investigate the role of DM at galactic scales: its role in determining the structure, how the mass is distributed, and the dynamics, evolution, and formation of spiral galaxies.\nRemarkably, the corresponding rotation velocities of galaxies can be explained with the density profiles of different Newtonian DM models like the Pseudo Isothermal profile (PISO)\cite{piso}, the Navarro-Frenk-White profile (NFW)\cite{Navarro1997} \nor the Burkert profile\cite{Burkert}, among \nothers\cite{Einasto}; except for the fact that the cusp-core problem of the density profiles remains unsolved. In this way, none of them have the last word, because the main questions, about the density distribution and of course the \emph{nature} of DM, have not been resolved.\n\nAlternative theories of gravity have been used to model DM. For instance, a scalar field has been proposed to model DM\cite{Dick1996,Cho\/Keum:1998}, \nand has been used to study rotation curves of spiral galaxies\cite{Guzman\/Matos:2000}.
This scalar field is minimally coupled to the metric; however, scalar fields non-minimally coupled to the metric have also been used to study DM\cite{Rodriguez-Meza:2012,RodriguezMeza\/Others:2001,RodriguezMeza\/CervantesCota:2004,Rodriguez-Meza:2012b}. Similarly, $F(R)$ models exist in the literature that analyze rotation curves\cite{Martins\/Salucci:2007}.\n\nOn the other hand, one of the best candidates to extend GR is brane theory, whose main characteristic is the addition of an extra dimension: a five-dimensional bulk in which a four-dimensional manifold, called the brane, is embedded\cite{Randall-I,*Randall-II}. This model is characterized by the fact that the standard model of particles is confined to the brane and only the gravitational interaction can travel through the bulk\cite{Randall-I,*Randall-II}. The assumption that the five-dimensional Einstein's equations are valid generates corrections in the four-dimensional Einstein's equations confined to the brane, bringing information from the extra dimension\cite{sms}. \nThese extra corrections in the Einstein's equations can help us to elucidate and solve the problems that afflict modern cosmology and astrophysics\cite{m2000,*yo2,*Casadio2012251,*jf1,*gm,*Garcia-Aspeitia:2013jea,*langlois2001large,*Garcia-Aspeitia:2014pna,*jf2,*PerezLorenzana:2005iv,*Ovalle:2014uwa,*Garcia-Aspeitia:2014jda,*Linares:2015fsa,*Casadio:2004nz}.\n\nBefore we start, let us mention some experimental constraints on braneworld models, most of them concerning the so-called brane tension, $\lambda$, which appears explicitly as a free parameter in the corrections to the gravitational equations mentioned above. As a first example we have the measurements of deviations from Newton's law of gravitation at small distances.
It is reported that no deviation is observed for distances $l \gtrsim 0.1 \, {\rm mm}$, which then implies a lower limit on the brane tension in the Randall-Sundrum II model (RSII): $\lambda> 1 \, {\rm TeV}^{4}$\cite{Kapner:2006si,*Alexeyev:2015gja}; it is important to mention that these limits do not apply to the two-branes case of the Randall-Sundrum I model (RSI) (see \cite{mk} for details). \nAstrophysical studies, related to gravitational waves and stellar stability, constrain\nthe brane tension to be $\lambda > 5\times10^{8} \, {\rm MeV}^{4}$\cite{gm,Sakstein:2014nfa}, whereas the existence of black hole X-ray binaries suggests that $l\lesssim 10^{-2} {\rm mm}$\cite{mk,Kudoh:2003xz,*Cavaglia:2002si}. Finally, from cosmological observations, the requirement of successful nucleosynthesis provides the lower limit $\lambda> 1\, {\rm MeV}^{4}$, which is a much weaker limit as compared to other experiments (other cosmological tests can be found in Refs.~\cite{Holanda:2013doa,*Barrow:2001pi,*Brax:2003fv}).\n \nThis paper is devoted to studying the main observable of brane theory, the brane tension, which marks the boundary between four-dimensional GR and its high-energy corrections. We undertake the task of performing a Newtonian approximation of the modified Tolman-Oppenheimer-Volkoff (TOV) equations, keeping the effective terms provided by branes, which cause subtle differences with respect to the traditional dynamics.
\nIn this way, we test the theory at galactic scales using high resolution measurements of rotation curves of a sample of low surface brightness (LSB) galaxies with no photometry\cite{deBlok\/etal:2001}\nand a synthetic rotation curve built from 40 rotation curves of spirals of magnitude around $M_I=-18.5$, for which it was found that the baryonic component has a very small contribution\cite{Salucci1};\nassuming the PISO, NFW and Burkert DM profiles, respectively, we constrain the preferred value of the brane tension with these observables. \nThat the sample has no photometry means that the galaxies are DM dominated, so we have only two parameters related to the distribution of DM, a density scale and a length scale; adding the brane tension, we have three parameters in total to fit. \nThe fitted brane tension values are compared among the traditional DM density profile models of spiral galaxies \n(PISO, NFW and Burkert) and against the same models without the presence of branes, and are confronted with other values of the tension parameter coming from cosmological and astrophysical observational data.\n\nThis paper is organized as follows: In Sec.\ \ref{EM} we show the equations of motion (modified TOV equations) for spherical symmetry and the appropriate initial conditions. In Sec.\ \ref{TOV MOD} we explore the Newtonian limit and show the mathematical expression for the rotation velocity with brane modifications; in particular, we show the modifications to the rotation velocity expressions of the PISO, NFW and Burkert DM profiles, which are compared with the models without branes.
\nIn Sec.\ \ref{Results} we test the DM models plus branes with observations: we use a sample of high resolution measurements of rotation curves of LSB galaxies and a synthetic rotation curve representative of 40 rotation curves of spirals in which the baryonic component has a very small contribution.\nFinally, in Sec.\ \ref{Disc}, we discuss the results obtained in the paper and draw some conclusions.\n\nIn what follows, we work in units in which $c=\hbar=1$, unless explicitly written.\n\n\section{Review of equations of motion for branes} \label{EM}\n\nLet us start by writing the equations of motion for galactic stability in a brane embedded in a five-dimensional bulk according to the RSII model\cite{Randall-II}. Following an appropriate computation (for details see\cite{mk,sms}), it is possible to demonstrate that the modified four-dimensional Einstein's equations can be written as \n\begin{equation}\n G_{\mu\nu} + \xi_{\mu\nu} + \Lambda_{(4)}g_{\mu\nu} = \kappa^{2}_{(4)} T_{\mu\nu} + \kappa^{4}_{(5)} \Pi_{\mu\nu} +\n \kappa^{2}_{(5)} F_{\mu\nu} , \label{Eins}\n\end{equation}\nwhere $\kappa_{(4)}$ and $\kappa_{(5)}$ are, respectively, the four- and five-dimensional coupling constants, which are related in the form: $\kappa^{2}_{(4)}=8\pi G_{N}=\kappa^{4}_{(5)} \lambda\/6$, where $\lambda$ is defined as the brane tension, and $G_{N}$ is the Newton constant. For purposes of simplicity, we will not consider bulk matter, which translates into $F_{\mu\nu}=0$, and we discard the presence of the four-dimensional cosmological constant, $\Lambda_{(4)}=0$, \nas we do not expect it to have any important effect at galactic scales (for a recent discussion about it see\cite{Pavlidou:2013zha}).
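The relation $\kappa^{2}_{(4)}=8\pi G_{N}=\kappa^{4}_{(5)}\lambda/6$ between the couplings and the brane tension can be encoded directly. The following minimal sketch (hypothetical function names, Newton's constant set to unity in arbitrary consistent units) simply inverts it to obtain the five-dimensional coupling from $\lambda$:

```python
import math

G_N = 1.0  # Newton's constant in arbitrary consistent units (an assumption of this sketch)

def kappa4_sq(G=G_N):
    """Four-dimensional coupling: kappa_4^2 = 8*pi*G_N."""
    return 8.0 * math.pi * G

def kappa5_sq(lam, G=G_N):
    """Five-dimensional coupling from kappa_4^2 = kappa_5^4 * lam / 6,
    i.e. kappa_5^2 = sqrt(6 * kappa_4^2 / lam)."""
    return math.sqrt(6.0 * kappa4_sq(G) / lam)
```

As expected from the relation, the larger the brane tension, the weaker the five-dimensional coupling, and GR is recovered as $\lambda\to\infty$.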
Additionally, we will neglect any nonlocal energy flux, which is allowed by the static spherically symmetric solutions we will study below\cite{gm}.\n\nThe energy-momentum tensor, the quadratic energy-momentum tensor, and the Weyl (traceless) contribution have the explicit forms\n\begin{subequations}\n\label{eq:4}\n\begin{eqnarray}\n\label{Tmunu}\nT_{\mu\nu} &=& \rho u_{\mu}u_{\nu} + p h_{\mu\nu} \, , \\\n\label{Pimunu}\n\Pi_{\mu\nu} &=& \frac{1}{12} \rho \left[ \rho u_{\mu}u_{\nu} + (\rho+2p) h_{\mu\nu} \right] \, , \\\n\label{ximunu}\n\xi_{\mu\nu} &=& - \frac{\kappa^4_{(5)}}{\kappa^4_{(4)}} \left[ \mathcal{U} u_{\mu}u_{\nu} + \mathcal{P}r_{\mu}r_{\nu}+ \frac{ h_{\mu\nu} }{3} (\mathcal{U}-\mathcal{P} ) \right] \, .\n\end{eqnarray}\n\end{subequations}\nHere, $p$ and $\rho$ are, respectively, the pressure and energy density of the stellar matter of interest, $\mathcal{U}$ is the nonlocal energy density, and $\mathcal{P}$ is the nonlocal anisotropic stress. Also, $u_{\alpha}$ is the four-velocity (that also satisfies the condition $g_{\mu\nu}u^{\mu}u^{\nu}=-1$), $r_{\mu}$ is a unit radial vector, and $h_{\mu\nu} = g_{\mu\nu} + u_{\mu} u_{\nu}$ is the projection operator orthogonal to $u_{\mu}$.\n\nSpherical symmetry indicates that the metric can be written as:\n\begin{equation}\n{ds}^{2}= - B(r){dt}^{2} + A(r){dr}^{2} + r^{2} (d\theta^{2} + \sin^{2} \theta d\varphi^{2}) \, .\label{metric}\n\end{equation}\nWe define the reduced Weyl functions $\mathcal{V} = 6 \mathcal{U}\/\kappa^4_{(4)}$ and $\mathcal{N} = 4 \mathcal{P}\/\kappa^4_{(4)}$. First, we define the effective mass as:\n\begin{equation}\n\mathcal{M}^\prime_{eff} = 4\pi{r}^{2}\rho_{eff}. \label{eq:7a}\n\end{equation}\nThen, from Eqs. 
\eqref{Eins} and \eqref{eq:4} and after straightforward calculations we have the following equations of motion:\n\begin{subequations}\n \label{eq:7}\n\begin{eqnarray}\n p^\prime &=& -\frac{G_N}{r^{2}} \frac{4 \pi \, p_{eff} \, r^3 + \mathcal{M}_{eff}}{1 - 2G_N \mathcal{M}_{eff}\/r} ( p + \rho ) \, , \label{eq:7b} \\\n \mathcal{V}^{\prime} + 3 \mathcal{N}^{\prime} &=& - \frac{2G_N}{r^{2}} \frac{4 \pi \, p_{eff} \, r^3 + \mathcal{M}_{eff}}{1 - 2G_N \mathcal{M}_{eff}\/r} \left( 2 \mathcal{V} + 3 \mathcal{N} \right)\nonumber\\ \n && - \frac{9}{r} \mathcal{N} - 3 (\rho+p) \rho^{\prime} \, , \label{eq:7c}\n\end{eqnarray}\n\end{subequations}\nwhere a prime indicates derivative with respect to $r$, $A(r) = [1 - 2G_N \mathcal{M}_{eff}(r)\/r]^{-1}$, and the effective energy density and pressure, respectively, are given as:\n\begin{subequations}\n\label{eq:3}\n\begin{eqnarray}\n\rho_{eff} &=& \rho \left( 1 + \frac{\rho}{2\lambda} \right) + \frac{\mathcal{V}}{\lambda} \, , \label{eq:3a} \\\np_{eff} &=& p \left(1 + \frac{\rho}{\lambda} \right) + \frac{\rho^{2}}{2\lambda} + \frac{\mathcal{V}}{3\lambda} + \frac{\mathcal{N}}{\lambda} \, . \label{eq:3b}\n\end{eqnarray}\n\end{subequations}\nEven though we will not consider exterior galaxy solutions, we must anyway take into account the information provided by the Israel-Darmois (ID) matching condition, which for the case under study can be written as\cite{gm}:\n\begin{equation}\n \label{eq:28}\n (3\/2) \rho^2(R) + \mathcal{V}^-(R) + 3 \mathcal{N}^-(R) = 0 \, .\n\end{equation}\nIn this case, the superscript ($-$) indicates the interior value of the quantity at the halo surface\footnote{We denote the surface of the galaxy as the region where no DM or baryons exist, \emph{i.e.}, the intergalactic space.} of the galaxy, assuming that $\rho(r>R)=0$, where $R$ denotes the maximum size of the galaxy.
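The effective quantities \eqref{eq:3a} and \eqref{eq:3b} are simple algebraic maps of the local and nonlocal fields. A minimal transcription (hypothetical function names) makes the GR limit $\lambda\to\infty$ explicit:

```python
def rho_eff(rho, V, lam):
    """Effective energy density, Eq. (eq:3a): rho*(1 + rho/(2*lam)) + V/lam."""
    return rho * (1.0 + rho / (2.0 * lam)) + V / lam

def p_eff(p, rho, V, N, lam):
    """Effective pressure, Eq. (eq:3b):
    p*(1 + rho/lam) + rho^2/(2*lam) + V/(3*lam) + N/lam."""
    return p * (1.0 + rho / lam) + rho ** 2 / (2.0 * lam) + V / (3.0 * lam) + N / lam
```

For $\lambda$ much larger than all local densities the quadratic and nonlocal corrections vanish and the standard GR quantities are recovered.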
Also, the previous equation takes into consideration the fact that the exterior must be Schwarzschild, for which in general the following condition must be fulfilled: $\mathcal{V}(r \geq R) = 0 =\mathcal{N}(r\geq R)$ (see\cite{Garcia-Aspeitia:2014pna} for details).\n\nFor completeness, we just note that the exterior solutions of the metric functions are given by the well known expressions $B(r) = A^{-1}(r) = 1 - 2G_N M_{eff}\/r$.\n\nFinally, we impose $\mathcal{N}=0$ (see\cite{Garcia-Aspeitia:2014pna}), implying that Eq. \eqref{eq:28} is restricted to:\n\begin{equation}\n \label{eq:29}\n -(3\/2) \rho^2(R) = \mathcal{V}^-(R) \, ,\n\end{equation}\nwith the aim of maintaining a Schwarzschild exterior for the galaxy.\n\n\section{Low energy limit and rotation curves} \label{TOV MOD}\n\nTo begin with, we observe, from Eq.\ \eqref{eq:7b} in the low energy (Newtonian) limit, that we have: $r^{2}p^{\prime}=-G_{N}\mathcal{M}_{eff}\rho$. Differentiating, we find\n\begin{equation}\n\frac{d}{dr}\left(\frac{r^{2}}{\rho}\frac{dp}{dr}\right)=-4\pi r^{2}G_{N}\rho_{\rm eff}. \label{eqdiff9}\n\end{equation}\nFrom here it is possible to note that $d\Phi\/dr=-\rho^{-1}(dp\/dr)$, resulting in\n\begin{equation}\n\nabla^{2}\Phi_{\rm eff}=\frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{d\Phi_{\rm eff}}{dr}\right)=4\pi G_{N}\rho_{\rm eff}, \label{Poisson}\n\end{equation}\nit being necessary to define the energy density of DM together with the nonlocal energy density.
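The Newtonian machinery above is easy to check numerically: integrate $\mathcal{M}^\prime_{eff} = 4\pi r^{2}\rho_{eff}$ on a radial grid and form $V(r)=\sqrt{G_N\mathcal{M}_{eff}/r}$. The sketch below is a simplified illustration: it keeps only the local quadratic correction $\rho^2/2\lambda$ and sets the nonlocal Weyl term to zero (an assumption of this snippet, not of the paper's formulas), so it reproduces the closed-form rotation curves only in the GR limit, which is what the check exercises:

```python
import numpy as np

def rotation_velocity(r, rho, lam, G=1.0):
    """Integrate M'_eff = 4*pi*r^2*rho_eff on the grid r and return
    V(r) = sqrt(G*M_eff/r).  Only the local correction rho^2/(2*lam) is
    kept; the nonlocal Weyl term V/lam is set to zero in this sketch."""
    rho_e = rho * (1.0 + rho / (2.0 * lam))
    integrand = 4.0 * np.pi * r ** 2 * rho_e
    # cumulative trapezoidal rule for the enclosed effective mass
    M = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    return np.sqrt(G * M / r)
```

Feeding it the PISO density with a very large $\lambda$ recovers the classical PISO rotation curve to numerical accuracy.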
Notice that the nonlocal energy density can be easily obtained from Eq.\ \eqref{eq:7c} in the galaxy interior, where the fluid behaves like dust, implying the condition $p=0$; the low energy conditions $4\pi r^3p_{eff}\ll\mathcal{M}_{eff}$ and $2G_{N}\mathcal{M}_{eff}\/r\ll1$ between effective quantities are always fulfilled, and in consequence the term $4G_{N}\mathcal{M}_{eff}\mathcal{V}\/r^2\sim0$ is negligible.\n\nIn addition, the rotation curve is obtained from the contribution of the effective potential; this expression can be written as:\n\begin{eqnarray}\nV^2(r) &=& r\left\vert\frac{d\Phi_{\rm eff}}{dr}\right\vert=\frac{G_N \mathcal{M}_{eff}(r)}{r} \n\nonumber \\\n&=& \n\frac{G_N }{r} \n\left[\n\mathcal{M}_{DM}(r) + \mathcal{M}_{Brane}(r)\n\right]\n, \label{rotvel}\n\end{eqnarray}\nwhere $\mathcal{M}_{DM}(r)$ is the contribution to the mass from DM, $\mathcal{M}_{Brane}(r)$ gives the modification to the DM mass that comes from the brane, and $\mathcal{M}_{eff}(r)$ must be greater than zero. From here, it is possible to study the rotation velocities of the DM, assuming a variety of density profiles.\n\nBefore we start, let us define the following dimensionless variables: $\bar{r}\equiv r\/r_{\rm s}$, $v_{0}^{2}\equiv4\pi G_{N}r_{\rm s}^{2}\rho_{\rm s}$ and $\bar{\rho}\equiv\rho_{\rm s}\/2\lambda$, where $\rho_{\rm s}$ is the central density of the halo and $r_{s}$ is associated with the central radius of the halo. \n\n\subsection{Pseudo isothermal profile for dark matter}\n\nHere we consider that the DM density profile is given by the PISO profile\cite{piso}, written as:\n\begin{equation}\n\rho_{\rm PISO}(r)=\frac{\rho_{\rm s}}{1+\bar{r}^{2}}. \label{PIP}\n\end{equation}\nFrom Eq. \eqref{rotvel}, together with Eq. 
\eqref{PIP}, it is possible to obtain:\n\begin{eqnarray}\nV_{\rm PISO}^{2}(\bar{r}) &=& v_{0}^{2}\n\left\lbrace\n\left(\n1-\frac{1}{\bar{r}}\arctan\bar{r}\n\right) \n\right.\n\nonumber \\\n&& \n\left. + \bar{\rho}\n\left(\n\frac{1}{1+\bar{r}^2}- \frac{1}{\bar{r}}\arctan\bar{r}\n\right)\n\right\rbrace.\n\label{RCPISO}\n\end{eqnarray}\nIn the limit $\bar{\rho}\to0$, we recover the classical rotation velocity associated with PISO for DM.\nThe effective density must be positive definite, so $\lambda > \rho_s$ must be fulfilled. The first right-hand term in parentheses in Eq.\ \eqref{RCPISO} is the PISO dark matter contribution and the second \nis the brane's contribution.\n\n\subsection{Navarro-Frenk-White profile for dark matter}\nAnother interesting case (motivated by cosmological $N$-body simulations) is the NFW density profile, which is given by\cite{NFW}:\n\begin{equation}\n\rho_{\rm NFW}(r)=\frac{\rho_{\rm s}}{\bar{r}(1+\bar{r})^{2}}. \label{NFW}\n\end{equation}\nThis is a density profile that diverges as $r \rightarrow 0$, \nand \nit is not possible to say\nthat $\rho_s$ is related to the central density of the DM distribution.\nAlso, the density goes as $1\/\bar{r}^3$ when $\bar{r} \gg 1$.\nHowever, in this particular case, we will still call them the \emph{central} density and radius of the NFW matter distribution.\nFrom Eq.\ \eqref{rotvel}, together with Eq.\ \eqref{NFW} we obtain the following rotation curve:\n\begin{eqnarray}\nV_{\rm NFW}^{2}(\bar{r}) &=& v_{0}^{2}\left\lbrace\left(\frac{(1+\bar{r})\ln(1+\bar{r})-\bar{r}}{\bar{r}(1+\bar{r})}\right)\right.\nonumber\\&&\n\left.+\frac{2\bar{\rho}}{3\bar{r}}\left(\frac{1}{(1+\bar{r})^{3}}-1\right) \right\rbrace.\n\label{RCNFW}\n\end{eqnarray}\nThe first right-hand term in parentheses in Eq.\ \eqref{RCNFW} is the NFW dark matter contribution and the second one is the brane's contribution.
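Eqs.\ \eqref{RCPISO} and \eqref{RCNFW} translate directly into code. The sketch below (hypothetical function names) verifies the classical limit $\bar{\rho}\to0$ and the sign of the brane term, which in both expressions is negative for $\bar{r}>0$:

```python
import numpy as np

def v2_piso(rbar, v0_sq, rho_bar):
    """Squared rotation velocity of Eq. (RCPISO): PISO + brane correction."""
    atan = np.arctan(rbar) / rbar
    return v0_sq * ((1.0 - atan) + rho_bar * (1.0 / (1.0 + rbar ** 2) - atan))

def v2_nfw(rbar, v0_sq, rho_bar):
    """Squared rotation velocity of Eq. (RCNFW): NFW + brane correction."""
    dm = ((1.0 + rbar) * np.log(1.0 + rbar) - rbar) / (rbar * (1.0 + rbar))
    brane = (2.0 * rho_bar / (3.0 * rbar)) * (1.0 / (1.0 + rbar) ** 3 - 1.0)
    return v0_sq * (dm + brane)
```

At fixed $(\rho_s, r_s)$ the quadratic brane correction therefore lowers the squared rotation curve relative to the purely Newtonian profile.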
Notice that we also recover the classical limit when $\bar{\rho}\to0$.\n\nIn addition, it is important to remark that the effective density must be positive definite, which requires $\lambda > \rho_s r_s \/r$. Also, since $\mathcal{M}(r)$ must be greater than zero, we need $r > r_{min}$, where $r_{min}$ is given by solving the following equation:\n\begin{equation}\n\frac{2}{3}\bar{\rho}=\frac{(\alpha+1)^2[(\alpha+1)\ln(\alpha+1)-\alpha]}{(\alpha+1)^3-1},\label{comp}\n\end{equation}\nwhere we define $\alpha\equiv r_{min}\/r_s$ as a dimensionless quantity.\n\n\subsection{Burkert density profile for dark matter}\n\nAnother density profile was proposed by Burkert\cite{Burkert}, which has the form:\n\begin{equation}\n\rho_{\rm Burk}=\frac{\rho_{\rm s}}{(1+\bar{r})(1+\bar{r}^{2})}. \label{Burk}\n\end{equation}\nAgain, from Eq.\ \eqref{rotvel}, together with Eq.\ \eqref{Burk} we obtain the following rotation curve:\n\begin{eqnarray}\nV_{\rm Burk}^{2}(\bar{r}) &=&\frac{v_{0}^{2}}{4\bar{r}} \left\lbrace \left( \ln[(1+\bar{r}^{2})(1+\bar{r})^{2}]-2\arctan(\bar{r}) \right)\right.\n\label{RCBurkert}\n\\&&\n\left.+ \frac{1}{2}\bar{\rho}\left( \frac{1}{1+\bar{r}}+\frac{1}{1+\bar{r}^{2}}+\arctan(\bar{r})-2 \right)\right\rbrace.\n\nonumber\n\end{eqnarray}\nIn the limit $\bar{\rho}\to0$, we recover the classical rotation velocity associated with the Burkert density profile\cite{Burkert}.\nThe effective density must be positive definite, so $\lambda > \rho_s$ must hold.
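Eq.\ \eqref{RCBurkert} in the same style (hypothetical function name), with the classical Burkert curve recovered at $\bar{\rho}=0$ and a brane term that is again negative for $\bar{r}>0$:

```python
import numpy as np

def v2_burkert(rbar, v0_sq, rho_bar):
    """Squared rotation velocity of Eq. (RCBurkert): Burkert + brane term."""
    dm = np.log((1.0 + rbar ** 2) * (1.0 + rbar) ** 2) - 2.0 * np.arctan(rbar)
    brane = 0.5 * rho_bar * (1.0 / (1.0 + rbar) + 1.0 / (1.0 + rbar ** 2)
                             + np.arctan(rbar) - 2.0)
    return v0_sq / (4.0 * rbar) * (dm + brane)
```

For small $\bar{r}$ the classical part behaves as $V^2 \approx v_0^2\,\bar{r}^2/3$, the expected solid-body rise of a cored profile with central density $\rho_s$.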
\nAgain, the first right-hand term in parentheses in Eq.\ \eqref{RCBurkert} is\nthe Burkert DM contribution and the second one comes from the\n brane's contribution.\n\n\section{Constraints from galaxies without photometry} \label{Results}\n\nTo start the analysis, we perform a $\chi^{2}$ best fit of the observational rotation curves of the sample with:\n\begin{equation}\n\chi^{2}=\sum_{i=1}^{N}\left(\frac{V_{theo}-V_{exp \; i}}{\delta V_{exp\; i}}\right)^{2},\n\label{chi2Eq}\n\end{equation}\nwhere $i$ runs from one up to the number of points in the data, $N$; $V_{theo}$ is computed according to the velocity profile under consideration \nand $\delta V_{exp\; i}$ is the error in the measurement of the rotational velocity. Notice that \nthe only free parameters are those of DM+branes: $r_{s}$, $\rho_{s}$ and $\lambda$. \nIn the tables below we show $\chi_{red}^{2} \equiv \chi^{2}\/(N - n_p -1)$, where $n_p$ is the number of parameters to fit; in our case, $n_p=3$.\n\nThe analyzed sample of galaxies consists of twelve high resolution rotation curves of LSB galaxies with no photometry (visible components, such as gas and stars, are negligible), as given in Ref.\cite{deBlok\/etal:2001}. This sample was used to study the DM equation of state (EoS) in Ref.\cite{Barranco\/etal:2015}. We remark that in this part we use units such that $4 \pi G_{N}=1$, velocities are in km\/s, and distances are given in kpc.\n\n\subsection{Results: PISO profile + Branes}\n\nWe have estimated the parameters of the PISO+branes model \nand compared them with those of the PISO model without brane contribution, minimizing the appropriate $\chi^2$ for the sample of observed rotation curves, using Eq.\ (\ref{chi2Eq}) with Eq.\ (\ref{RCPISO}) and taking into account that $\lambda > \rho_s$ must be fulfilled. 
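Eq.\ \eqref{chi2Eq} and the reduced statistic are one-liners; a plain-Python transcription (hypothetical function names) reads:

```python
def chi2(v_theo, v_exp, dv_exp):
    """Eq. (chi2Eq): sum over data points of ((V_theo - V_exp)/dV_exp)^2."""
    return sum(((t - e) / d) ** 2 for t, e, d in zip(v_theo, v_exp, dv_exp))

def chi2_red(v_theo, v_exp, dv_exp, n_params=3):
    """Reduced chi^2 with N - n_p - 1 degrees of freedom (n_p = 3 here)."""
    return chi2(v_theo, v_exp, dv_exp) / (len(v_exp) - n_params - 1)
```

Minimizing `chi2` over $(r_s, \rho_s, \lambda)$, with a grid search or any generic optimizer, reproduces the fitting procedure described here.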
\n\nIn Fig.\\ \\ref{PISO1} we show, for each one of the galaxies in the sample, the PISO theoretical rotation curve (solid line) that best fits the corresponding observational data (orange symbols), together with the errors of the measurements (brown symbols). \nFor each galaxy we have plotted the contribution to the rotation velocity due only to the brane (red long-dashed curve) and due only to the dark matter PISO density profile (blue short-dashed curve), see Eq.\\ (\\ref{RCPISO}).\nBrane effects are very clear in the galaxies: \nESO 2060140,\nESO 3020120,\nU 11616,\nU 11648,\nU 11748,\nU 11819.\nTable \\ref{TablePiso} shows the central density, the central radius and the brane tension, which is the free parameter of the brane theory (only in PISO+branes). For comparison, it also shows the central density and radius without the brane contribution.\nThe worst fitted galaxies (high $\\chi_{red}^2$ values) were \nU 11648 and\nU 11748.\nThe fitted brane tension values present a large dispersion, from the lowest value, \n$0.167\\; M_{\\odot}\/\\rm pc^3$ (ESO 3020120), to the highest, \n$108.096\\; M_{\\odot}\/\\rm pc^3$ (ESO 4880049).\nIt is useful to express $\\lambda$ in eV units; the conversion from solar masses to eV is $1 M_{\\odot}\/\\rm pc^3 \\sim 2.915\\times10^{-4}eV^4$. \nThe brane tension parameter has an average value of $\\langle\\lambda\\rangle_{\\rm PISO} = 33.178 \\; M_{\\odot}\/\\rm pc^3$ with a standard deviation $\\sigma_{\\rm PISO} = 40.935 \\; M_{\\odot}\/\\rm pc^3$.
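The conversion factor quoted above can be reproduced directly from physical constants: with $\\hbar=c=1$, an energy density of $1\\,{\\rm eV}^4$ corresponds to one eV per $(\\hbar c\/{\\rm eV})^3$ of volume. A quick numerical check:

```python
# Check of the quoted conversion 1 M_sun/pc^3 ~ 2.915e-4 eV^4.
M_SUN  = 1.98892e30       # kg
PC     = 3.085677e16      # m
C      = 2.99792458e8     # m/s
EV     = 1.602176634e-19  # J
HBAR_C = 197.3269804e-9   # eV*m

# 1 M_sun/pc^3 as an energy density in J/m^3 (mass density times c^2)
j_per_m3 = M_SUN * C**2 / PC**3

# 1 eV^4 expressed in J/m^3: one eV per (hbar*c/eV)^3 of volume
ev4_in_j_per_m3 = EV / HBAR_C**3

factor = j_per_m3 / ev4_in_j_per_m3   # ~2.92e-4, matching the text
```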
Notice that no clear tendency toward a particular value or range of values of $\\lambda$ is observed.\n\n\\begin{figure}\n\\includegraphics[scale=0.33]{vceso30500900PISO+Branes}\n\\includegraphics[scale=0.33]{vceso0140040PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vceso2060140PISO+Branes} \n\\includegraphics[scale=0.33]{vcESO3020120PISO+Branes} \\\\ \n\\includegraphics[scale=0.33]{vceso4250180PISO+Branes} \n\\includegraphics[scale=0.33]{vceso4880049PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vcf570_v1PISO+Branes} \n\\includegraphics[scale=0.33]{vcu11454PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11616PISO+Branes} \n\\includegraphics[scale=0.33]{vcu11648PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11748PISO+Branes} \n\\includegraphics[scale=0.33]{vcu11819PISO+Branes}\n\\caption{Group of analyzed galaxies using the modified rotation velocity for the PISO profile: ESO 3050090,\nESO 0140040, ESO 2060140, ESO 3020120, ESO 4250180, ESO 4880049, 570\\_V1, U11454, U11616, U11648, U11748, U11819.
We show in the plots the total rotation curve (solid black line), the PISO-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{PISO1}\n\\end{figure}\n\n\n\n\n\\subsection{Results: NFW profile + Branes}\nFor the NFW density profile we have the following results.\nWe have estimated the parameters with and without the brane contribution by\nminimizing the corresponding $\\chi^2$, Eq.\\ (\\ref{chi2Eq}) with Eq.\\ (\\ref{RCNFW}), for the sample of observed rotation curves, \ntaking into account that\n$\\lambda > \\rho_s r_s \/r$ in order to have a positive definite effective density, always fulfilling Eq.\\ (\\ref{comp}).\n\nIn Fig.\\ \\ref{NFW1} we show, for each galaxy in the LSB sample, the theoretical curve fitted to a preferred brane tension value (solid line), \nthe NFW curve and the rotation curve associated with the mass lost by the effects of branes, see Eq.\\ (\\ref{RCNFW}). \nTable \\ref{TableNFW} shows, for the sample,\nthe central density, central radius and $\\chi_{red}^2$ values without branes, and\nthe central density, central radius, brane tension\nand $\\chi_{red}^2$ values with the brane contribution.\nGalaxy U 11748 is the worst fitted case, with $\\chi_{red}^2 = 2.163$.\nFor the galaxies \nESO 4250180,\nESO 4880049\nand U 11648\nthere are no clear brane effects. \nGalaxy U 11648 is an \\emph{outlier}, with a brane tension value of $4323.28\\; M_{\\odot}\/\\rm pc^3$ that is out of the range of preferred values of the other galaxies in the sample.\nNotice that we have found a preferred range of tension values, from $0.487$ to $9.232$ $M_{\\odot}\/\\rm pc^3$.
Without the outlier, the brane tension parameter has an average value of \n$\\langle\\lambda\\rangle_{\\rm NFW}\\simeq 2.51 \\; M_{\\odot}\/\\rm pc^3$ \nwith a standard deviation $\\sigma_{\\rm NFW}\\simeq 3.015 \\; M_{\\odot}\/\\rm pc^3$.\n\n\\begin{figure}\n\\includegraphics[scale=0.33]{vceso30500900NFW+Branes} \n\\includegraphics[scale=0.33]{vceso0140040NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vceso2060140NFW+Branes} \n\\includegraphics[scale=0.33]{vcESO3020120NFW+Branes} \\\\ \n\\includegraphics[scale=0.33]{vceso4250180NFW+Branes} \n\\includegraphics[scale=0.33]{vceso4880049NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vcf570_v1NFW+Branes} \n\\includegraphics[scale=0.33]{vcu11454NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11616NFW+Branes} \n\\includegraphics[scale=0.33]{vcu11648NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11748NFW+Branes} \n\\includegraphics[scale=0.33]{vcu11819NFW+Branes} \n\\caption{Group of analyzed galaxies using the modified rotation velocity for the NFW profile: ESO 3050090,\nESO 0140040, ESO 2060140, ESO 3020120, ESO 4250180, ESO 4880049, 570\\_V1, U11454, U11616, U11648, U11748, U11819. We show in the plots the total rotation curve (solid black line), the NFW-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{NFW1}\n\\end{figure}\n\n\n\n\\subsection{Results: Burkert+Branes profile}\n\nIn the case of the Burkert DM density profile, we have also estimated the parameters of the Burkert+branes model \nand compared them with those of the Burkert model without branes, minimizing the appropriate $\\chi^2_{red}$, Eq.\\ (\\ref{chi2Eq}) with Eq.\\ (\\ref{RCBurkert}), for the sample of observed rotation curves.
We have considered that $\\lambda > \\rho_s$ must be fulfilled.\n\nThe results are shown in Fig.\\ \\ref{Burkert1}, where we plot the fit to a preferred brane tension value, showing the total rotation curve (solid line), the Burkert DM density contribution curve (blue short-dashed line) and the rotation curve associated with the mass lost by the effects of branes (red dashed line), see Eq.\\ (\\ref{RCBurkert}). \nTable \\ref{TableBurkert} shows the fitted values of the central density, the central radius and the corresponding $\\chi_{red}^2$ without the brane contribution, and the fitted values of\nthe central density, central radius and brane tension, with their $\\chi_{red}^2$ values, with the brane contribution.\nThe worst fitted galaxies (high values of $\\chi_{red}^2$) are U 11648 and U 11748.\nGalaxies ESO 3020120, U 11748 and\nU 11819 show clear brane effects and are also outliers. The main tendency is that $\\lambda$ takes values of the order of $10^3 \\;M_{\\odot}\/\\rm pc^3$ or above.\nThe brane tension parameter, without the outliers, for the DM Burkert profile case has an average value of \n$\\langle\\lambda\\rangle_{\\rm Burk}\\simeq 3192.02 \\;M_{\\odot}\/\\rm pc^3$ \nand a standard deviation of \n$\\sigma_{\\rm Burk}\\simeq 2174.97 \\; M_{\\odot}\/\\rm pc^3$.\n\n\\begin{figure}\n\\includegraphics[scale=0.33]{vceso30500900Burkert+Branes} \n\\includegraphics[scale=0.33]{vceso0140040Burkert+Branes} \\\\\n\\includegraphics[scale=0.33]{vceso2060140Burkert+Branes} \n\\includegraphics[scale=0.33]{vcESO3020120Burkert+Branes} \\\\ \n\\includegraphics[scale=0.33]{vceso4250180Burkert+Branes} \n\\includegraphics[scale=0.33]{vceso4880049Burkert+Branes} \\\\\n\\includegraphics[scale=0.33]{vcf570_v1Burkert+Branes} \n\\includegraphics[scale=0.33]{vcu11454Burkert+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11616Burkert+Branes} \n\\includegraphics[scale=0.33]{vcu11648Burkert+Branes} 
\\\\\n\\includegraphics[scale=0.33]{vcu11748Burkert+Branes} \n\\includegraphics[scale=0.33]{vcu11819Burkert+Branes}\n\\caption{Group of analyzed galaxies using the modified rotation velocity for the Burkert profile: ESO 3050090,\nESO 0140040, ESO 2060140, ESO 3020120, ESO 4250180, ESO 4880049, 570\\_V1, U11454, U11616, U11648, U11748, U11819. We show in the plots the total rotation curve (solid black line), the Burkert-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{Burkert1}\n\\end{figure}\n\n\n\n\\subsection{Results: a synthetic rotation curve}\n\nFinally, we show the fitting results of the DM models plus the brane contribution for a synthetic rotation curve. This synthetic rotation curve was built from 40 rotation curves of galaxies with magnitudes around $M_I = -18.5$\\cite{Salucci1}.\nThese 40 rotation curves come from the 1100 galaxies that gave the universal rotation curve for spirals. For this sample of low luminosity galaxies, with $M_I = -18.5$, it was shown that the baryonic disk has a very small contribution (for details see Ref.\\cite{Salucci1}).\n\nIn this subsection we use units such that $G=R_{opt}=V(R_{opt})=1$, where $R_{opt}$ and $V(R_{opt})$ are the optical radius and the velocity at the optical radius, respectively. $R_{opt}$ is the radius encompassing 83 per cent of the total integrated light.
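For an exponential disk the fraction of light enclosed within $r=xR_D$ is $\\int_0^x 2\\pi t\\,e^{-t}dt \\propto 1-(1+x)e^{-x}$, and requiring 83 per cent gives $x\\simeq3.2$, i.e. $R_{opt}\\simeq3.2R_D$, as used below. A short numerical check:

```python
import math

def enclosed_light_fraction(x):
    """Fraction of the total light of an exponential disk
    I(r) ~ exp(-r/R_D) enclosed within r = x*R_D."""
    return 1.0 - (1.0 + x) * math.exp(-x)

# Bisection for the radius enclosing 83% of the integrated light
lo, hi = 0.1, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if enclosed_light_fraction(mid) < 0.83:
        lo = mid
    else:
        hi = mid
x_opt = 0.5 * (lo + hi)  # ~3.2, i.e. R_opt ~ 3.2 R_D
```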
For an exponential disk with a surface brightness given by $I(r) \\propto \\exp(-r\/R_D)$, we have $R_{opt}=3.2 R_D$\\cite{Salucci1}.\n\nIn Fig.\\ \\ref{SM185} we show the synthetic rotation curve and the fitting results using the PISO, NFW and Burkert profiles with and without the brane contribution.\n\\begin{figure}\n\\includegraphics[scale=0.33]{vcM185PISO} \n\\includegraphics[scale=0.33]{vcM185PISO+Branes} \n\\includegraphics[scale=0.33]{vcM185NFW} \n\\includegraphics[scale=0.33]{vcM185NFW+Branes} \n\\includegraphics[scale=0.33]{vcM185Burkert} \n\\includegraphics[scale=0.33]{vcM185Burkert+Branes} \n\\caption{Synthetic rotation curve of galaxies with magnitude $M_I=-18.5$.\nLeft panels: rotation curves fitted without branes. Right panels: rotation curves fitted with branes.\nThe first row is for the PISO model, the second row for the NFW model and the third row for the Burkert model. \nWe show in the plots the total rotation curve (solid black line), the DM-model-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{SM185}\n\\end{figure}\nAs we can see in Table \\ref{TableSynthetic}, the same trend is observed in the brane tension values as for the LSB catalog analyzed above using PISO, NFW and Burkert as DM profiles: the lower value is obtained for the NFW model and the higher values for the Burkert density profile.\n\nGiven that this synthetic rotation curve is built from 40 rotation curves of real spirals, the values of the brane tension in Table \\ref{TableSynthetic} are representative of all these rotation curves. \nThen, for the PISO model $\\lambda=60.692$ $M_{\\odot}\/\\rm pc^3$, a value that is greater than the average value of the tension shown in Table \\ref{TablePiso} but inside the interval marked by the standard deviation.
\nFor the NFW model $\\lambda=226.054$ $M_{\\odot}\/\\rm pc^3$; this value is lower than the average value reported in Table \\ref{TableNFW} and inside the range marked by the standard deviation. \nFor the Burkert model $\\lambda=1.58\\times 10^5$ $M_{\\odot}\/\\rm pc^3$; this value is well above the average value shown in Table \\ref{TableBurkert}, outside the range marked by the standard deviation. \n\n\n\\section{Discussion and conclusions} \\label{Disc}\n\nWe have presented in this paper the effects coming from the presence of branes on galaxy rotation curves for \nthree density profiles used to study the behavior of DM at galactic scales. \nWith this in mind, we undertook the study of a sample of \nhigh resolution measurements of rotation curves of galaxies without photometry\\cite{deBlok\/etal:2001} \nand a synthetic rotation curve built from 40 rotation curves of galaxies of magnitude around $M_I=-18.5$,\nfitting the values of $\\rho_{s}$, $r_{s}$ and $\\lambda$ by minimizing the $\\chi^{2}_{red}$ value, and we have compared them with the standard results for $\\rho_{s}$, $r_{s}$ of each DM density profile without branes.
\nThe results for every observable in the three different profiles were summarized and compared in Tables \\ref{TablePiso}-\\ref{TableSynthetic}.\n\nFrom here, it is possible to observe that the results give a weaker limit on the value of the brane tension \n($\\sim10^{-3}$--$46$ eV$^4$) for the three models, in comparison with other astrophysical and cosmological studies\\cite{Kapner:2006si,Alexeyev:2015gja,mk,gm,Sakstein:2014nfa,Kudoh:2003xz,Cavaglia:2002si,Holanda:2013doa,Barrow:2001pi,Brax:2003fv}; for example, Linares \\emph{et al.}\\cite{Linares:2015fsa} show that values weaker than $\\lambda \\simeq 10^{4}$ MeV$^{4}$ produce an anomalous behavior in the compactness of a dwarf star with a polytropic EoS, so a wide region of the bounds found here would yield non-compact stellar configurations if applied to the study of Ref.\\cite{Linares:2015fsa}.\n \nIt is important to notice that choosing a value of the brane tension that does not fulfill the bounds imposed throughout the paper generates an anomalous behavior in the center of the galaxy, which is characteristic of the model. Remarkably, for values above this bound, the modified rotation curves are in good agreement with \nthe observed rotation curves of the sample that we use,\npresenting only the distinctive features of each density profile. For example,\nthe NFW dark matter density profile prefers lower values of the brane tension (on average $\\lambda \\sim 0.73\\times 10^{-3}$ eV$^4$), implying clear effects of the brane;\nthe PISO dark matter\n case has an average value of $\\lambda \\sim 0.96\\times 10^{-2}$ eV$^4$ and shows the largest dispersion in the fitted values of the brane tension;\nwhereas the Burkert DM density profile shows negligible brane effects, with averages of the order $\\lambda \\sim 0.93$ eV$^4$ -- $46$ eV$^4$.\n\nIn addition, it is important to discuss briefly the changes caused by the presence of branes in the cusp\/core problem.
Notice that in this case the relevant quantity is the effective density, which in terms of brane corrections is written as $\\rho_{eff}=\\rho(1-\\rho\/\\lambda)$; it is notable that small corrections of this kind alleviate the cusp problem which afflicts NFW, although an excessive contribution of these terms could generate a negative effective density profile; PISO and Burkert also show modifications when $r\\to0$, but their core behavior is not compromised as long as the brane tension term remains small. In this way, the possibility of having a core behavior helps us to constrain the value of the brane tension and still keeps the NFW profile in the game.\n\nSummarizing, it is really challenging to establish bounds from dynamical systems like galaxy rotation curves, due to the\nlow densities found in the galactic medium, which yield only weaker limits in comparison with studies of more energetic systems. \nOur most important conclusion is that, despite the efforts, it is not straightforward to reconcile these results with other astrophysical and cosmological studies, making it impractical and unfeasible to\nfind evidence of extra dimensions in galactic dynamics through the determination of the brane tension value; \nmoreover, there is too much dispersion in the fitted values of the brane tension obtained with this method for some DM density profile models. \nIt is also important to note that the value of the brane tension is strongly dependent on the characteristics of the galaxy studied, suggesting an average for the preferred value of the brane tension in each case.
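The cusp\/core behavior just described is easy to visualize numerically. The sketch below (with illustrative parameter values) evaluates $\\rho_{eff}=\\rho(1-\\rho\/\\lambda)$ near the center for the three profiles: the PISO and Burkert effective densities stay close to $\\rho_s$, while the diverging NFW density drives $\\rho_{eff}$ negative once $\\rho>\\lambda$:

```python
def rho_eff(rho, lam):
    """Effective density with the leading brane correction."""
    return rho * (1.0 - rho / lam)

rho_s, lam = 1.0, 5.0   # illustrative values with rho_s < lambda

for rb in (1e-1, 1e-2, 1e-3):          # dimensionless radius r/r_s
    piso = rho_s / (1.0 + rb**2)
    nfw  = rho_s / (rb * (1.0 + rb)**2)
    burk = rho_s / ((1.0 + rb) * (1.0 + rb**2))
    # PISO and Burkert stay bounded near the center; NFW does not.
    print(rb, rho_eff(piso, lam), rho_eff(nfw, lam), rho_eff(burk, lam))
```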
\nIn addition, we notice that the effects of extra dimensions are stronger in the galactic core, suggesting that the NFW model is not appropriate for the search for constraints on brane theory, due to its divergence at the center of the galaxy (see Eq.\\ \\eqref{comp}); PISO and Burkert could be good candidates to explore the galactic core in this framework; however, a more extensive study is necessary before we can reach a definitive conclusion. \n\nAs a final note, we know that it is necessary to collect more observational data to constrain \nthe models, or even to reach a final conclusion about extra dimensions (for or against), supporting the brane constraints shown throughout this paper with a deeper study of galactic dynamics or with other tests, such as the cosmological evidence present in CMB anisotropies. \nHowever, this work is in progress and will be reported elsewhere.\n\n\n\\begin{acknowledgements}\nMAG-A acknowledges support from SNI-M\\'exico and a CONACyT research fellowship, and the Instituto Avanzado de Cosmolog\\'ia (IAC) collaboration.\n\\end{acknowledgements}\n\n\n\\section{Introduction} \\label{Int}\n\nThe General Theory of Relativity (GR) is the cornerstone of\nastrophysics and cosmology, giving predictions with unprecedented success. \nAt astrophysical scales GR has been tested in, for example, the solar system, stellar dynamics, and black hole formation and evolution, among others (see for instance\\cite{FischbachTalmadge1999,Will1993,Kamionkowski:2007wv,*Matts,*Peebles:2013hla}). \nHowever, GR is currently being tested with various phenomena that pose significant challenges to the theory. Among the major challenges of modern cosmology are undoubtedly dark matter (DM) and dark energy.
They comprise approximately $27\\%$ (DM) and $68\\%$ (dark energy) of our \nuniverse\\cite{PlanckCollaboration2013}, allowing the formation of large scale structures\\cite{FW,Diaferio:2008jy}.\nDark matter has been invoked as the mechanism that stabilizes spiral galaxies and provides the matter distribution component needed to explain the observed rotation curves.\nNowadays the best model of the universe we have is the \n\\emph{concordance} or $\\Lambda$CDM model, which has been successful in explaining the very large-scale structure formation, the statistics of the distribution of galaxy clusters, the temperature anisotropies of the cosmic microwave background radiation (CMB) and many other astronomical observations. \nIn spite of all the successes we have mentioned, \nthis model has several problems. For example, it predicts too much power on small scales\\cite{Rodriguez-Meza:2012}, so it overpredicts the number of observed satellite galaxies\\cite{Klypin1999,Moore1999,Papastergis2011}; it predicts halo profiles that are denser and \ncuspier than those inferred observationally\\cite{Navarro1997,Subramanian2000,Salucci}; and it predicts a population of massive and concentrated subhalos that is inconsistent \nwith observations of the kinematics of Milky Way satellites\\cite{BoylanKolchin2012}.\n\nOne of the first astronomical observations that brought attention to DM was the observation of rotation curves of spiral galaxies by Rubin and coworkers\\cite{Rubin2001}; \nthese observations turned out to be the main tool to investigate the role of DM at galactic scales: its role in determining the structure, how the mass is distributed, and the dynamics, evolution and formation of spiral galaxies.\nRemarkably, the corresponding rotation velocities of galaxies can be explained with the density profiles of different Newtonian DM models, like the Pseudo Isothermal profile (PISO)\\cite{piso}, the Navarro-Frenk-White profile (NFW)\\cite{Navarro1997} \nor the Burkert profile\\cite{Burkert},
among \nothers\\cite{Einasto}; except for the fact that the cusp\/core problem of the density profiles remains unsolved. In this sense, none of them has the last word, because the main questions, about the density distribution and of course the \\emph{nature} of DM, have not been resolved.\n\nAlternative theories of gravity have been used to model DM. For instance, a scalar field has been proposed to model DM\\cite{Dick1996,Cho\/Keum:1998}, \nand has been used to study rotation curves of spiral galaxies\\cite{Guzman\/Matos:2000}. This scalar field is coupled minimally to the metric; however, scalar fields coupled non-minimally to the metric have also been used to study DM\\cite{Rodriguez-Meza:2012,RodriguezMeza\/Others:2001,RodriguezMeza\/CervantesCota:2004,Rodriguez-Meza:2012b}. Similarly, $F(R)$ models that analyze rotation curves exist in the literature\\cite{Martins\/Salucci:2007}.\n\nOn the other hand, one of the best candidates to extend GR is brane theory, whose main characteristic is to add another dimension, giving a five dimensional bulk in which a four dimensional manifold, called the brane, is embedded\\cite{Randall-I,*Randall-II}. This model is characterized by the fact that the standard model of particles is confined to the brane and only the gravitational interaction can travel in the bulk\\cite{Randall-I,*Randall-II}. The assumption that the five dimensional Einstein's equations are valid generates corrections in the four dimensional Einstein's equations confined to the brane, bringing information from the extra dimension\\cite{sms}.
\nThese extra corrections in Einstein's equations can help us to elucidate and solve the problems that afflict modern cosmology and astrophysics\\cite{m2000,*yo2,*Casadio2012251,*jf1,*gm,*Garcia-Aspeitia:2013jea,*langlois2001large,*Garcia-Aspeitia:2014pna,*jf2,*PerezLorenzana:2005iv,*Ovalle:2014uwa,*Garcia-Aspeitia:2014jda,*Linares:2015fsa,*Casadio:2004nz}.\n\nBefore we start, let us mention some experimental constraints on braneworld models, most of them on the so-called brane tension, $\\lambda$, which appears explicitly as a free parameter in the corrections of the gravitational equations mentioned above. As a first example we have the measurements of deviations from Newton's law of gravitation at small distances. It is reported that no deviation is observed for distances $l \\gtrsim 0.1 \\, {\\rm mm}$, which then implies a lower limit on the brane tension in the Randall-Sundrum II model (RSII): $\\lambda> 1 \\, {\\rm TeV}^{4}$\\cite{Kapner:2006si,*Alexeyev:2015gja}; it is important to mention that these limits do not apply to the two-brane case of the Randall-Sundrum I model (RSI) (see \\cite{mk} for details). \nAstrophysical studies, related to gravitational waves and stellar stability, constrain\nthe brane tension to $\\lambda > 5\\times10^{8} \\, {\\rm MeV}^{4}$\\cite{gm,Sakstein:2014nfa}, whereas the existence of black hole X-ray binaries suggests that $l\\lesssim 10^{-2} {\\rm mm}$\\cite{mk,Kudoh:2003xz,*Cavaglia:2002si}. Finally, from cosmological observations, the requirement of successful nucleosynthesis provides the lower limit $\\lambda> 1\\, {\\rm MeV}^{4}$, which is a much weaker limit compared to other experiments (other cosmological tests can be seen in Ref.
\\cite{Holanda:2013doa,*Barrow:2001pi,*Brax:2003fv}).\n \nThis paper is devoted to studying the main observable of brane theory, the brane tension, whose value delimits the boundary between four dimensional GR and its high energy corrections. We undertake the task of performing a Newtonian approximation of the modified Tolman-Oppenheimer-Volkoff (TOV) equation, keeping the effective terms provided by branes, which cause subtle differences with respect to the traditional dynamics. \nIn this way we test the theory at galactic scales, using high resolution measurements of rotation curves of a sample of low surface brightness (LSB) galaxies with no photometry\\cite{deBlok\/etal:2001}\nand a synthetic rotation curve built from 40 rotation curves of spirals of magnitude around $M_I=-18.5$, for which it was found that the baryonic components have a very small contribution\\cite{Salucci1},\nassuming PISO, NFW and Burkert DM profiles respectively; with that, we constrain the preferred value of the brane tension with observables. \nThat the sample has no photometry means that the galaxies are DM dominated, and then we have only two parameters related to the distribution of DM, a density scale and a length scale; adding the brane tension, we have three parameters in total to fit. \nThe fitted brane tension values are compared among the traditional DM density profile models of spiral galaxies \n(PISO, NFW and Burkert) and against the same models without the presence of branes, and confronted with other values of the tension parameter coming from cosmological and astrophysical observational data.\n\nThis paper is organized as follows: In Sec.\\ \\ref{EM} we show the equations of motion (modified TOV equations) for spherical symmetry and the appropriate initial conditions.
In Sec.\\ \\ref{TOV MOD} we explore the Newtonian limit and show the mathematical expression for the rotation velocity with brane modifications; in particular, we show the modifications to the rotation velocity expressions of the PISO, NFW and Burkert DM profiles, which are compared with the models without branes. \nIn Sec.\\ \\ref{Results} we test the DM models plus branes against observations: we use a sample of high resolution measurements of rotation curves of LSB galaxies and a synthetic rotation curve representative of 40 rotation curves of spirals in which the baryonic component has a very small contribution.\nFinally, in Sec.\\ \\ref{Disc}, we discuss the results obtained in the paper and draw some conclusions.\n\nIn what follows, we work in units in which $c=\\hbar=1$, unless explicitly written.\n\n\\section{Review of equations of motion for branes} \\label{EM}\n\nLet us start by writing the equations of motion for galactic stability in a brane embedded in a five-dimensional bulk, according to the RSII model\\cite{Randall-II}. Following an appropriate computation (for details see\\cite{mk,sms}), it is possible to demonstrate that the modified four-dimensional Einstein's equations can be written as \n\\begin{equation}\n G_{\\mu\\nu} + \\xi_{\\mu\\nu} + \\Lambda_{(4)}g_{\\mu\\nu} = \\kappa^{2}_{(4)} T_{\\mu\\nu} + \\kappa^{4}_{(5)} \\Pi_{\\mu\\nu} +\n \\kappa^{2}_{(5)} F_{\\mu\\nu} , \\label{Eins}\n\\end{equation}\nwhere $\\kappa_{(4)}$ and $\\kappa_{(5)}$ are, respectively, the four- and five-dimensional coupling constants, which are related in the form $\\kappa^{2}_{(4)}=8\\pi G_{N}=\\kappa^{4}_{(5)} \\lambda\/6$, where $\\lambda$ is the brane tension and $G_{N}$ is the Newton constant.
For simplicity, we will not consider bulk matter, which translates into $F_{\\mu\\nu}=0$, and we discard the presence of the four-dimensional cosmological constant, $\\Lambda_{(4)}=0$, \nas we do not expect it to have any important effect at galactic scales (for a recent discussion see\\cite{Pavlidou:2013zha}). Additionally, we will neglect any nonlocal energy flux, which is allowed by the static spherically symmetric solutions we will study below\\cite{gm}.\n\nThe energy-momentum tensor, the quadratic energy-momentum tensor, and the Weyl (traceless) contribution have the explicit forms\n\\begin{subequations}\n\\label{eq:4}\n\\begin{eqnarray}\n\\label{Tmunu}\nT_{\\mu\\nu} &=& \\rho u_{\\mu}u_{\\nu} + p h_{\\mu\\nu} \\, , \\\\\n\\label{Pimunu}\n\\Pi_{\\mu\\nu} &=& \\frac{1}{12} \\rho \\left[ \\rho u_{\\mu}u_{\\nu} + (\\rho+2p) h_{\\mu\\nu} \\right] \\, , \\\\\n\\label{ximunu}\n\\xi_{\\mu\\nu} &=& - \\frac{\\kappa^4_{(5)}}{\\kappa^4_{(4)}} \\left[ \\mathcal{U} u_{\\mu}u_{\\nu} + \\mathcal{P}r_{\\mu}r_{\\nu}+ \\frac{ h_{\\mu\\nu} }{3} (\\mathcal{U}-\\mathcal{P} ) \\right] \\, .\n\\end{eqnarray}\n\\end{subequations}\nHere, $p$ and $\\rho$ are, respectively, the pressure and energy density of the stellar matter of interest, $\\mathcal{U}$ is the nonlocal energy density, and $\\mathcal{P}$ is the nonlocal anisotropic stress. Also, $u_{\\alpha}$ is the four-velocity (which satisfies the condition $g_{\\mu\\nu}u^{\\mu}u^{\\nu}=-1$), $r_{\\mu}$ is a unit radial vector, and $h_{\\mu\\nu} = g_{\\mu\\nu} + u_{\\mu} u_{\\nu}$ is the projection operator orthogonal to $u_{\\mu}$.\n\nSpherical symmetry indicates that the metric can be written as:\n\\begin{equation}\n{ds}^{2}= - B(r){dt}^{2} + A(r){dr}^{2} + r^{2} (d\\theta^{2} + \\sin^{2} \\theta d\\varphi^{2}) \\, .\\label{metric}\n\\end{equation}\nWe define the reduced Weyl functions $\\mathcal{V} = 6 \\mathcal{U}\/\\kappa^4_{(4)}$ and $\\mathcal{N} = 4 \\mathcal{P}\/\\kappa^4_{(4)}$.
First, we define the effective mass as:\n\\begin{equation}\n\\mathcal{M}^\\prime_{eff} = 4\\pi{r}^{2}\\rho_{eff}. \\label{eq:7a}\n\\end{equation}\nThen, from Eqs. \\eqref{Eins} and \\eqref{eq:4} and after straightforward calculations we have the following equations of motion:\n\\begin{subequations}\n \\label{eq:7}\n\\begin{eqnarray}\n p^\\prime &=& -\\frac{G_N}{r^{2}} \\frac{4 \\pi \\, p_{eff} \\, r^3 + \\mathcal{M}_{eff}}{1 - 2G_N \\mathcal{M}_{eff}\/r} ( p + \\rho ) \\, , \\label{eq:7b} \\\\\n \\mathcal{V}^{\\prime} + 3 \\mathcal{N}^{\\prime} &=& - \\frac{2G_N}{r^{2}} \\frac{4 \\pi \\, p_{eff} \\, r^3 + \\mathcal{M}_{eff}}{1 - 2G_N \\mathcal{M}_{eff}\/r} \\left( 2 \\mathcal{V} + 3 \\mathcal{N} \\right)\\nonumber\\\\ \n && - \\frac{9}{r} \\mathcal{N} - 3 (\\rho+p) \\rho^{\\prime} \\, , \\label{eq:7c}\n\\end{eqnarray}\n\\end{subequations}\nwhere a prime indicates derivative with respect to $r$, $A(r) = [1 - 2G_N \\mathcal{M}(r)_{eff}\/r]^{-1}$, and the effective energy density and pressure, respectively, are given as:\n\\begin{subequations}\n\\label{eq:3}\n\\begin{eqnarray}\n\\rho_{eff} &=& \\rho \\left( 1 + \\frac{\\rho}{2\\lambda} \\right) + \\frac{\\mathcal{V}}{\\lambda} \\, , \\label{eq:3a} \\\\\np_{eff} &=& p \\left(1 + \\frac{\\rho}{\\lambda} \\right) + \\frac{\\rho^{2}}{2\\lambda} + \\frac{\\mathcal{V}}{3\\lambda} + \\frac{\\mathcal{N}}{\\lambda} \\, . 
\\label{eq:3b}\n\\end{eqnarray}\n\\end{subequations}\nEven though we will not consider exterior galaxy solutions, we must nevertheless take into account the information provided by the Israel-Darmois (ID) matching condition, which for the case under study can be written as\\cite{gm}:\n\\begin{equation}\n \\label{eq:28}\n (3\/2) \\rho^2(R) + \\mathcal{V}^-(R) + 3 \\mathcal{N}^-(R) = 0 \\, .\n\\end{equation}\nIn this case, the superscript ($-$) indicates the interior value of the quantity at the halo surface\\footnote{We denote the surface of the galaxy as the region where neither DM nor baryons exist, \\emph{i.e.}, the intergalactic space.} of the galaxy, assuming that $\\rho(r>R)=0$, where $R$ denotes the maximum size of the galaxy. The previous equation also takes into consideration the fact that the exterior must be Schwarzschild, for which in general the condition $\\mathcal{V}(r \\geq R) = 0 =\\mathcal{N}(r\\geq R)$ must be fulfilled (see\\cite{Garcia-Aspeitia:2014pna} for details).\n\nFor completeness, we just note that the exterior solutions of the metric functions are given by the well known expressions $B(r) = A^{-1}(r) = 1 - 2G_N M_{eff}\/r$.\n\nFinally, we impose $\\mathcal{N}=0$ (see\\cite{Garcia-Aspeitia:2014pna}), implying that Eq.\\ \\eqref{eq:28} reduces to:\n\\begin{equation}\n \\label{eq:29}\n -(3\/2) \\rho^2(R) = \\mathcal{V}^-(R) \\, ,\n\\end{equation}\nwith the aim of maintaining a Schwarzschild exterior for the galaxy.\n\n\\section{Low energy limit and rotation curves} \\label{TOV MOD}\n\nTo begin with, we observe from Eq.\\ \\eqref{eq:7b} that in the low energy (Newtonian) limit we have $r^{2}p^{\\prime}=-G_{N}\\mathcal{M}_{eff}\\rho$. Differentiating, we find\n\\begin{equation}\n\\frac{d}{dr}\\left(\\frac{r^{2}}{\\rho}\\frac{dp}{dr}\\right)=-4\\pi r^{2}G_{N}\\rho_{\\rm eff}.
\\label{eqdiff9}\n\\end{equation}\nFrom here it is possible to note that $d\\Phi\/dr=-\\rho^{-1}(dp\/dr)$, resulting in\n\\begin{equation}\n\\nabla^{2}\\Phi_{\\rm eff}=\\frac{1}{r^{2}}\\frac{d}{dr}\\left(r^{2}\\frac{d\\Phi_{\\rm eff}}{dr}\\right)=4\\pi G_{N}\\rho_{\\rm eff}, \\label{Poisson}\n\\end{equation}\nand it is then necessary to define the energy density of DM together with the nonlocal energy density. Notice that the nonlocal energy density can be obtained easily from Eq.\\ \\eqref{eq:7c} in the galaxy interior, where the fluid behaves like dust, implying the condition $p=0$; the low energy conditions $4\\pi r^3p_{eff}\\ll\\mathcal{M}_{eff}$ and $2G_{N}\\mathcal{M}_{eff}\/r\\ll1$ between effective quantities are always fulfilled, and in consequence $4G_{N}\\mathcal{M}_{eff}\\mathcal{V}\/r^2\\sim0$ is negligible.\n\nIn addition, the rotation curve is obtained from the contribution of the effective potential; this expression can be written as:\n\\begin{eqnarray}\nV^2(r) &=& r\\left\\vert\\frac{d\\Phi_{\\rm eff}}{dr}\\right\\vert=\\frac{G_N \\mathcal{M}_{eff}(r)}{r} \n\\nonumber \\\\\n&=& \n\\frac{G_N }{r} \n\\left[\n\\mathcal{M}_{DM}(r) + \\mathcal{M}_{Brane}(r)\n\\right]\n, \\label{rotvel}\n\\end{eqnarray}\nwhere $\\mathcal{M}_{DM}(r)$ is the contribution to the mass from DM, $\\mathcal{M}_{Brane}(r)$ gives the modification to the DM mass that comes from the brane, and $\\mathcal{M}_{eff}(r)$ must be greater than zero. From here, it is possible to study the rotation velocities of the DM, assuming a variety of density profiles.\n\nBefore we start, let us define the following dimensionless variables: $\\bar{r}\\equiv r\/r_{\\rm s}$, $v_{0}^{2}\\equiv4\\pi G_{N}r_{\\rm s}^{2}\\rho_{\\rm s}$ and $\\bar{\\rho}\\equiv\\rho_{\\rm s}\/2\\lambda$, where $\\rho_{\\rm s}$ is the central density of the halo and $r_{s}$ is associated with the central radius of the halo.
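As a consistency check of Eq.\\ (\\ref{rotvel}), the effective mass can be integrated numerically for a given profile and compared with the closed forms derived in the following subsections. The sketch below does this for the PISO profile, taking $\\rho_{eff}=\\rho(1-\\rho\/\\lambda)$ (the quadratic correction combined with the Weyl term fixed by the matching condition, as discussed in the conclusions); the trapezoidal quadrature and parameter values are purely illustrative:

```python
import math

def rho_piso(r, r_s, rho_s):
    return rho_s / (1.0 + (r / r_s)**2)

def v2_numeric(r, r_s, rho_s, lam, n=20000):
    """V^2 = G_N*M_eff/r with M_eff the integral of 4*pi*x^2*rho_eff dx
    from 0 to r, rho_eff = rho*(1 - rho/lam); units with 4*pi*G_N = 1."""
    h = r / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        rho = rho_piso(x, r_s, rho_s)
        f = 4.0 * math.pi * x * x * rho * (1.0 - rho / lam)
        total += f if 0 < i < n else 0.5 * f   # trapezoidal rule
    m_eff = h * total
    return m_eff / (4.0 * math.pi * r)         # G_N = 1/(4*pi)

def v2_closed(r, r_s, rho_s, lam):
    """Closed form of the brane-modified PISO curve, Eq. (RCPISO)."""
    rb = r / r_s
    v0sq = r_s**2 * rho_s
    rho_bar = rho_s / (2.0 * lam)
    return v0sq * ((1.0 - math.atan(rb) / rb)
                   + rho_bar * (1.0 / (1.0 + rb**2) - math.atan(rb) / rb))
```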
\n\n\\subsection{Pseudo isothermal profile for dark matter}\n\nHere we consider the pseudo isothermal (PISO) DM density profile\\cite{piso}, written as:\n\\begin{equation}\n\\rho_{\\rm PISO}(r)=\\frac{\\rho_{\\rm s}}{1+\\bar{r}^{2}}. \\label{PIP}\n\\end{equation}\nFrom Eq. \\eqref{rotvel}, together with Eq. \\eqref{PIP}, it is possible to obtain:\n\\begin{eqnarray}\nV_{\\rm PISO}^{2}(\\bar{r}) &=& v_{0}^{2}\n\\left\\lbrace\n\\left(\n1-\\frac{1}{\\bar{r}}\\arctan\\bar{r}\n\\right) \n\\right.\n\\nonumber \\\\\n&& \n\\left. + \\bar{\\rho}\n\\left(\n\\frac{1}{1+\\bar{r}^2}- \\frac{1}{\\bar{r}}\\arctan\\bar{r}\n\\right)\n\\right\\rbrace.\n\\label{RCPISO}\n\\end{eqnarray}\nIn the limit $\\bar{\\rho}\\to0$, we recover the classical rotation velocity associated with PISO for DM.\nThe effective density must be positive definite, so $\\lambda > \\rho_s$ must be fulfilled. The first right-hand term in parentheses in Eq.\\ \\eqref{RCPISO} is the PISO dark matter contribution and the second \nis the brane contribution.\n\n\\subsection{Navarro-Frenk-White profile for dark matter}\nAnother interesting case (motivated by cosmological $N$-body simulations) is the NFW density profile, which is given by\\cite{NFW}:\n\\begin{equation}\n\\rho_{\\rm NFW}(r)=\\frac{\\rho_{\\rm s}}{\\bar{r}(1+\\bar{r})^{2}}.
\\label{NFW}\n\\end{equation}\nThis density profile diverges as $r \\rightarrow 0$, so it is not possible to say that $\\rho_s$ corresponds to the central density of the DM distribution; moreover, the density falls off as $1\/\\bar{r}^3$ when $\\bar{r} \\gg 1$.\nNevertheless, in this particular case we will still call $\\rho_s$ and $r_s$ the \\emph{central} density and radius of the NFW matter distribution.\nFrom Eq.\\ \\eqref{rotvel}, together with Eq.\\ \\eqref{NFW}, we obtain the following rotation curve:\n\\begin{eqnarray}\nV_{\\rm NFW}^{2}(\\bar{r}) &=& v_{0}^{2}\\left\\lbrace\\left(\\frac{(1+\\bar{r})\\ln(1+\\bar{r})-\\bar{r}}{\\bar{r}(1+\\bar{r})}\\right)\\right.\\nonumber\\\\&&\n\\left.+\\frac{2\\bar{\\rho}}{3\\bar{r}}\\left(\\frac{1}{(1+\\bar{r})^{3}}-1\\right) \\right\\rbrace.\n\\label{RCNFW}\n\\end{eqnarray}\nThe first right-hand term in parentheses in Eq.\\ \\eqref{RCNFW} is the NFW dark matter contribution and the second one is the brane contribution. Notice that we also recover the classical limit when $\\bar{\\rho}\\to0$.\n\nIn addition, it is important to remark that the effective density must be positive definite, so $\\lambda > \\rho_s r_s \/r$. Moreover, requiring $\\mathcal{M}_{eff}(r)$ to be greater than zero implies $r > r_{min}$, where $r_{min}$ is obtained by solving the following equation:\n\\begin{equation}\n\\frac{2}{3}\\bar{\\rho}=\\frac{(\\alpha+1)^2[(\\alpha+1)\\ln(\\alpha+1)-\\alpha]}{(\\alpha+1)^3-1},\\label{comp}\n\\end{equation}\nwhere we define $\\alpha\\equiv r_{min}\/r_s$ as a dimensionless quantity.\n\n\\subsection{Burkert density profile for dark matter}\n\nAnother density profile was proposed by Burkert\\cite{Burkert}, which has the form:\n\\begin{equation}\n\\rho_{\\rm Burk}=\\frac{\\rho_{\\rm s}}{(1+\\bar{r})(1+\\bar{r}^{2})}.
\\label{Burk}\n\\end{equation}\nAgain, from Eq.\\ \\eqref{rotvel}, together with Eq.\\ \\eqref{Burk}, we obtain the following rotation curve:\n\\begin{eqnarray}\nV_{\\rm Burk}^{2}(\\bar{r}) &=&\\frac{v_{0}^{2}}{4\\bar{r}} \\left\\lbrace \\left( \\ln[(1+\\bar{r}^{2})(1+\\bar{r})^{2}]-2\\arctan(\\bar{r}) \\right)\\right.\n\\label{RCBurkert}\n\\\\&&\n\\left.+ \\frac{1}{2}\\bar{\\rho}\\left( \\frac{1}{1+\\bar{r}}+\\frac{1}{1+\\bar{r}^{2}}+\\arctan(\\bar{r})-2 \\right)\\right\\rbrace.\n\\nonumber\n\\end{eqnarray}\nIn the limit $\\bar{\\rho}\\to0$, we recover the classical rotation velocity associated with the Burkert density profile\\cite{Burkert}.\nThe effective density must be positive definite, so $\\lambda > \\rho_s$. \nAgain, the first right-hand term in parentheses in Eq.\\ \\eqref{RCBurkert} is\nthe Burkert DM contribution and the second one comes from the\n brane.\n\n\\section{Constraints from galaxies without photometry} \\label{Results}\n\nTo start the analysis, we perform a $\\chi^{2}$ best fit of the observational rotation curves of the sample with:\n\\begin{equation}\n\\chi^{2}=\\sum_{i=1}^{N}\\left(\\frac{V_{theo}-V_{exp \\; i}}{\\delta V_{exp\\; i}}\\right)^{2},\n\\label{chi2Eq}\n\\end{equation}\nwhere $i$ runs from one up to the number of points in the data, $N$; $V_{theo}$ is computed according to the velocity profile under consideration \nand $\\delta V_{exp\\; i}$ is the error in the measurement of the rotational velocity. Notice that \nthe only free parameters are those of the DM+branes model: $r_{s}$, $\\rho_{s}$ and $\\lambda$. \nIn the tables below we show $\\chi_{red}^{2} \\equiv \\chi^{2}\/(N - n_p -1)$, where $n_p$ is the number of parameters to fit, in our case $n_p=3$.\n\nThe analyzed sample consists of twelve high resolution rotation curves of LSB galaxies with no photometry (visible components, such as gas and stars, are negligible), as given in Ref.\\cite{deBlok\/etal:2001}.
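The modified rotation curves of Eqs.\ \eqref{RCPISO}, \eqref{RCNFW} and \eqref{RCBurkert}, together with the reduced $\chi^2$ of Eq.\ \eqref{chi2Eq}, can be collected in a short computational sketch. Function and variable names are ours: `v0_sq` stands for $v_0^2$ and `rho_bar` for $\bar\rho$.

```python
import math

def v2_piso(rbar, v0_sq, rho_bar):
    """Eq. (RCPISO): PISO + brane rotation curve, squared."""
    at = math.atan(rbar) / rbar
    dm = 1.0 - at
    brane = rho_bar * (1.0 / (1.0 + rbar ** 2) - at)
    return v0_sq * (dm + brane)

def v2_nfw(rbar, v0_sq, rho_bar):
    """Eq. (RCNFW): NFW + brane rotation curve, squared."""
    dm = ((1.0 + rbar) * math.log(1.0 + rbar) - rbar) / (rbar * (1.0 + rbar))
    brane = (2.0 * rho_bar / (3.0 * rbar)) * (1.0 / (1.0 + rbar) ** 3 - 1.0)
    return v0_sq * (dm + brane)

def v2_burkert(rbar, v0_sq, rho_bar):
    """Eq. (RCBurkert): Burkert + brane rotation curve, squared."""
    dm = math.log((1.0 + rbar ** 2) * (1.0 + rbar) ** 2) - 2.0 * math.atan(rbar)
    brane = 0.5 * rho_bar * (1.0 / (1.0 + rbar) + 1.0 / (1.0 + rbar ** 2)
                             + math.atan(rbar) - 2.0)
    return v0_sq / (4.0 * rbar) * (dm + brane)

def chi2_red(v_theo, v_exp, dv_exp, n_params=3):
    """Eq. (chi2Eq) divided by (N - n_p - 1)."""
    chi2 = sum(((t - e) / d) ** 2 for t, e, d in zip(v_theo, v_exp, dv_exp))
    return chi2 / (len(v_exp) - n_params - 1)
```

Setting `rho_bar = 0` in any of the three functions recovers the corresponding classical (brane-free) rotation curve, which is a convenient sanity check on a fitting code.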
This sample was used to study the DM equation of state (EoS) in Ref.\\cite{Barranco\/etal:2015}. We remark that in this part we use units such that $4 \\pi G_{N}=1$, velocities are in km\/s, and distances are given in kpc.\n\n\\subsection{Results: PISO profile + Branes}\n\nWe have estimated the parameters of the PISO+branes model \nand compared them with those of the PISO model without brane contribution, minimizing the appropriate $\\chi^2$ for the sample of observed rotation curves, using Eq.\\ (\\ref{chi2Eq}) with Eq.\\ (\\ref{RCPISO}) and taking into account that $\\lambda > \\rho_s$ must be fulfilled. \n\nIn Fig.\\ \\ref{PISO1} we show, for each one of the galaxies in the sample,\nthe plot of the PISO theoretical rotation curve (solid line) that best fits the corresponding observational data (orange symbols); also shown are the errors of the estimation (brown symbols). \nFor each galaxy we have plotted the contribution to the rotation velocity due only to the brane (red long-dashed curve) and only to the dark matter PISO density profile (blue short-dashed curve), see Eq.\\ (\\ref{RCPISO}).\nBrane effects are very clear in the galaxies: \nESO 2060140,\nESO 3020120,\nU 11616,\nU 11648,\nU 11748,\nU 11819.\nTable \\ref{TablePiso} shows the central density, the central radius and the brane tension, which is the free parameter of the brane theory (only in PISO+branes). For comparison, the central density and radius without brane contribution are also shown.\nThe worst fitted galaxies (high $\\chi_{red}^2$ values) were: \nU 11648,\nU 11748.\nThe fitted brane tension values present a large dispersion, from the lowest value, \n$0.167\\; M_{\\odot}\/\\rm pc^3$ (ESO 3020120), to the highest, \n$108.096\\; M_{\\odot}\/\\rm pc^3$ (ESO 4880049).\nIt is useful to express $\\lambda$ in eV units; the conversion from solar masses to eV is $1\\, M_{\\odot}\/\\rm pc^3 \\sim 2.915\\times10^{-4}\\,eV^4$.
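The unit conversion quoted above can be wrapped in a small helper; this is an illustrative sketch (the constant is the conversion factor stated in the text, and the helper name is ours):

```python
# Conversion factor quoted in the text: 1 M_sun/pc^3 ~ 2.915e-4 eV^4
MSUN_PER_PC3_IN_EV4 = 2.915e-4

def to_ev4(lambda_msun_pc3):
    """Convert a brane tension from M_sun/pc^3 to eV^4."""
    return lambda_msun_pc3 * MSUN_PER_PC3_IN_EV4

# Example: the PISO sample average quoted below, <lambda>_PISO = 33.178 M_sun/pc^3,
# converts to roughly 0.96e-2 eV^4 (the value cited in the discussion).
lam_piso_ev4 = to_ev4(33.178)
```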
\nThe brane tension parameter has an average value of $\\langle\\lambda\\rangle_{\\rm PISO} = 33.178 \\; M_{\\odot}\/\\rm pc^3$ with a standard deviation $\\sigma_{\\rm PISO} = 40.935 \\; M_{\\odot}\/\\rm pc^3$. Notice that we cannot see a clear tendency towards a particular $\\lambda$ value or range of values.\n\n\\begin{figure}\n\\includegraphics[scale=0.33]{vceso30500900PISO+Branes}\n\\includegraphics[scale=0.33]{vceso0140040PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vceso2060140PISO+Branes} \n\\includegraphics[scale=0.33]{vcESO3020120PISO+Branes} \\\\ \n\\includegraphics[scale=0.33]{vceso4250180PISO+Branes} \n\\includegraphics[scale=0.33]{vceso4880049PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vcf570_v1PISO+Branes} \n\\includegraphics[scale=0.33]{vcu11454PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11616PISO+Branes} \n\\includegraphics[scale=0.33]{vcu11648PISO+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11748PISO+Branes} \n\\includegraphics[scale=0.33]{vcu11819PISO+Branes}\n\\caption{Group of analyzed galaxies using the modified rotation velocity for the PISO profile: ESO 3050090,\nESO 0140040, ESO 2060140, ESO 3020120, ESO 4250180, ESO 4880049, 570\\_V1, U11454, U11616, U11648, U11748, U11819.
We show in the plots: the total rotation curve (solid black line), the PISO-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{PISO1}\n\\end{figure}\n\n\n\n\n\\subsection{Results: NFW profile + Branes}\nFor the NFW density profile we have the following results.\nWe have estimated the parameters with and without brane contribution by\nminimizing the corresponding $\\chi^2$, Eq.\\ (\\ref{chi2Eq}) with Eq.\\ (\\ref{RCNFW}), for the sample of observed rotation curves, \ntaking into account that\n$\\lambda > \\rho_s r_s \/r$ in order to have a positive definite effective density, always fulfilling Eq.\\ (\\ref{comp}).\n\nFig.\\ \\ref{NFW1} shows, for each galaxy in the LSB sample, the theoretical curve fitted to a preferred brane tension value (solid line), \nthe NFW curve and the rotation curve associated with the mass lost by the effects of branes, see Eq.\\ (\\ref{RCNFW}). \nTable \\ref{TableNFW} lists, for the sample,\nthe central density, central radius and $\\chi_{red}^2$ values without branes, and\nthe central density, central radius, brane tension\nand $\\chi_{red}^2$ values with brane contribution.\nGalaxy U 11748 is the worst fitted case, with $\\chi_{red}^2 = 2.163$.\nFor the galaxies \nESO 4250180,\nESO 4880049,\nand U 11648,\nthere are no clear brane effects. \nGalaxy U 11648 is an \\emph{outlier}, with a brane tension value of $4323.28\\; M_{\\odot}\/\\rm pc^3$ that is out of the range of values preferred by the other galaxies in the sample.\nNotice that we have found a preferred range of tension values, from $0.487$ to $9.232$ $M_{\\odot}\/\\rm pc^3$.
Without the outlier, the brane tension parameter has an average value of \n$\\langle\\lambda\\rangle_{\\rm NFW}\\simeq 2.51 \\; M_{\\odot}\/\\rm pc^3$ \nwith a standard deviation $\\sigma_{\\rm NFW}\\simeq 3.015 \\; M_{\\odot}\/\\rm pc^3$.\n\n\\begin{figure}\n\\includegraphics[scale=0.33]{vceso30500900NFW+Branes} \n\\includegraphics[scale=0.33]{vceso0140040NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vceso2060140NFW+Branes} \n\\includegraphics[scale=0.33]{vcESO3020120NFW+Branes} \\\\ \n\\includegraphics[scale=0.33]{vceso4250180NFW+Branes} \n\\includegraphics[scale=0.33]{vceso4880049NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vcf570_v1NFW+Branes} \n\\includegraphics[scale=0.33]{vcu11454NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11616NFW+Branes} \n\\includegraphics[scale=0.33]{vcu11648NFW+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11748NFW+Branes} \n\\includegraphics[scale=0.33]{vcu11819NFW+Branes} \n\\caption{Group of analyzed galaxies using the modified rotation velocity for the NFW profile: ESO 3050090,\nESO 0140040, ESO 2060140, ESO 3020120, ESO 4250180, ESO 4880049, 570\\_V1, U11454, U11616, U11648, U11748, U11819. We show in the plots: the total rotation curve (solid black line), the NFW-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{NFW1}\n\\end{figure}\n\n\n\n\\subsection{Results: Burkert+Branes profile}\n\nIn the case of the Burkert DM density profile, we have also estimated the parameters of the Burkert+branes model \nand compared them with those of the Burkert model without branes, minimizing the appropriate $\\chi^2$, Eq.\\ (\\ref{chi2Eq}) with Eq.\\ (\\ref{RCBurkert}), for the sample of observed rotation curves.
We have considered that $\\lambda > \\rho_s$ must be fulfilled.\n\nThe results are shown in Fig.\\ \\ref{Burkert1}, where we plot the fit to a preferred brane tension value, showing the total rotation curve (solid line), the Burkert DM density contribution curve (blue short-dashed line) and the rotation curve associated with the mass lost by the effects of branes (red dashed line), see Eq.\\ (\\ref{RCBurkert}). \nTable \\ref{TableBurkert} shows the fitted values for the central density, the central radius and the corresponding value of $\\chi_{red}^2$ without brane contribution, and the fitted values for\nthe central density, central radius and brane tension, together with their $\\chi_{red}^2$ values, with brane contribution.\nThe worst fitted galaxies (high values of $\\chi_{red}^2$) are: U 11648 and U 11748.\nGalaxies ESO 3020120, U 11748, and\nU 11819 show clear brane effects and are also outliers. The main tendency is that $\\lambda$ takes values of the order of $10^3 \\;M_{\\odot}\/\\rm pc^3$ or above.\nThe brane tension parameter, without the outliers, for the DM Burkert profile case has an average value of \n$\\langle\\lambda\\rangle_{\\rm Burk}\\simeq 3192.02 \\;M_{\\odot}\/\\rm pc^3$, \nand a standard deviation of \n$\\sigma_{\\rm Burk}\\simeq 2174.97 \\; M_{\\odot}\/\\rm pc^3$.\n\n\\begin{figure}\n\\includegraphics[scale=0.33]{vceso30500900Burkert+Branes} \n\\includegraphics[scale=0.33]{vceso0140040Burkert+Branes} \\\\\n\\includegraphics[scale=0.33]{vceso2060140Burkert+Branes} \n\\includegraphics[scale=0.33]{vcESO3020120Burkert+Branes} \\\\ \n\\includegraphics[scale=0.33]{vceso4250180Burkert+Branes} \n\\includegraphics[scale=0.33]{vceso4880049Burkert+Branes} \\\\\n\\includegraphics[scale=0.33]{vcf570_v1Burkert+Branes} \n\\includegraphics[scale=0.33]{vcu11454Burkert+Branes} \\\\\n\\includegraphics[scale=0.33]{vcu11616Burkert+Branes} \n\\includegraphics[scale=0.33]{vcu11648Burkert+Branes} 
\\\\\n\\includegraphics[scale=0.33]{vcu11748Burkert+Branes} \n\\includegraphics[scale=0.33]{vcu11819Burkert+Branes}\n\\caption{Group of analyzed galaxies using the modified rotation velocity for the Burkert profile: ESO 3050090,\nESO 0140040, ESO 2060140, ESO 3020120, ESO 4250180, ESO 4880049, 570\\_V1, U11454, U11616, U11648, U11748, U11819. We show in the plots: the total rotation curve (solid black line), the Burkert-only curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{Burkert1}\n\\end{figure}\n\n\n\n\\subsection{Results: a synthetic rotation curve}\n\nFinally, we show the fitting results of the DM models plus the brane contribution for a synthetic rotation curve. This synthetic rotation curve was built from 40 rotation curves of galaxies with magnitudes around $M_I = -18.5$\\cite{Salucci1}.\nThese 40 rotation curves were drawn from the 1100 galaxies that gave the universal rotation curve for spirals. For this sample of low luminosity galaxies, of $M_I = -18.5$, it was shown that the baryonic disk has a very small contribution (for details see reference\\cite{Salucci1}).\n\nIn this subsection we now use units such that $G=R_{opt}=V(R_{opt})=1$, where $R_{opt}$ and $V(R_{opt})$ are the optical radius and the velocity at the optical radius, respectively. $R_{opt}$ is the radius encompassing 83 per cent of the total integrated light.
For an exponential disk with a surface brightness given by $I(r) \\propto \\exp(-r\/R_D)$, we have $R_{opt}=3.2 R_D$\\cite{Salucci1}.\n\nIn figure \\ref{SM185} we show the synthetic rotation curve and the fitting results using the PISO, NFW and Burkert profiles with and without the brane contribution.\n\\begin{figure}\n\\includegraphics[scale=0.33]{vcM185PISO} \n\\includegraphics[scale=0.33]{vcM185PISO+Branes} \n\\includegraphics[scale=0.33]{vcM185NFW} \n\\includegraphics[scale=0.33]{vcM185NFW+Branes} \n\\includegraphics[scale=0.33]{vcM185Burkert} \n\\includegraphics[scale=0.33]{vcM185Burkert+Branes} \n\\caption{Synthetic rotation curve of galaxies with magnitude $M_I=-18.5$.\nLeft panels: rotation curves fitted without branes. Right panels: rotation curves fitted with branes.\nThe first row is for the PISO model; the second row is for the NFW model and the third row is for the Burkert model. \nWe show in the plots: the total rotation curve (solid black line), the DM-only model curve (short-dashed blue curve) and the rotation curve associated with the mass lost by the effect of the brane (red dashed curve).} \n\\label{SM185}\n\\end{figure}\nAs we can see in Table \\ref{TableSynthetic}, the brane tension values follow the same trend observed in the results for the LSB catalog analyzed above using PISO, NFW and Burkert as DM profiles: the lowest value is obtained for the NFW model and the highest for the Burkert density profile.\n\nGiven that this synthetic rotation curve is built from 40 rotation curves of real spirals, the values of the brane tension in Table \\ref{TableSynthetic} are representative of all these rotation curves. \nFor the PISO model, $\\lambda=60.692$ $M_{\\odot}\/\\rm pc^3$, a value that is greater than the average value of the tension shown in Table \\ref{TablePiso} but inside the interval marked by the standard deviation.
\nFor the NFW model, $\\lambda=226.054$ $M_{\\odot}\/\\rm pc^3$; this value is lower than the average value reported in Table \\ref{TableNFW} and inside the range marked by the standard deviation. \nFor the Burkert model, $\\lambda=1.58\\times 10^5$ $M_{\\odot}\/\\rm pc^3$; this value is well above the average value shown in Table \\ref{TableBurkert}, outside the range marked by the standard deviation. \n\n\n\\section{Discussion and conclusions} \\label{Disc}\n\nWe have presented in this paper the effects coming from the presence of branes on galaxy rotation curves for \nthree density profiles used to study the behavior of DM at galactic scales. \nWith this in mind, we studied a sample of \nhigh resolution measurements of rotation curves of galaxies without photometry\\cite{deBlok\/etal:2001} \nand a synthetic rotation curve built from 40 rotation curves of galaxies of magnitude around $M_I=-18.5$,\nfitting\n the values of $\\rho_{s}$, $r_{s}$ and $\\lambda$ by minimizing the $\\chi^{2}_{red}$ value, and we compared them with the standard results for $\\rho_{s}$, $r_{s}$ of each DM density profile without branes.
\n The results for every observable in the three different profiles were summarized and compared in Tables \\ref{TablePiso}-\\ref{TableSynthetic}.\n\nFrom here, it is possible to observe that the results yield a weaker limit for the value of the brane tension \n($\\sim10^{-3}\\; \\rm eV^4-46$ eV$^4$) for the three models, in comparison with other astrophysical and cosmological studies\\cite{Kapner:2006si,Alexeyev:2015gja,mk,gm,Sakstein:2014nfa,Kudoh:2003xz,Cavaglia:2002si,Holanda:2013doa,Barrow:2001pi,Brax:2003fv}; for example, Linares \\emph{et al.}\\cite{Linares:2015fsa} show that values weaker than $\\lambda \\simeq 10^{4}$ MeV$^{4}$ produce an anomalous behavior in the compactness of a dwarf star composed of a polytropic EoS, which implies that a wide region of our bound \nwould yield non compact stellar configurations if applied to the study shown in\\cite{Linares:2015fsa}.\n \nIt is important to notice that choosing a value of the brane tension that does not fulfill the bounds imposed throughout the paper generates an anomalous behavior in the center of the galaxy which is characteristic of the model. Remarkably, for values above this bound, the modified rotation curves are in good agreement with \nthe observed rotation curves of the sample that we use,\npresenting only the distinctive features of each density profile. For example,\nthe NFW dark matter density profile prefers lower values of the brane tension (on average $\\lambda \\sim 0.73\\times 10^{-3}$ eV$^4$), implying clear effects of the brane;\nthe PISO dark matter\n case has an average value of $\\lambda \\sim 0.96\\times 10^{-2}$ eV$^4$ and shows the largest dispersion in the fitted values of the brane tension;\nwhereas the Burkert DM density profile shows negligible brane effects, with averages in the range $\\lambda \\sim 0.93$ eV$^4$ -- $46$ eV$^4$.\n\n In addition, it is important to briefly discuss the changes caused by the presence of branes in the cusp\/core problem.
Notice that in this case the quantity that plays a role is the effective density, which is written in terms of brane corrections as $\\rho_{eff}=\\rho(1-\\rho\/\\lambda)$; it is noteworthy that small perturbations alleviate the cusp problem which afflicts NFW, although an excessive contribution of these terms could generate a negative effective density profile; PISO and Burkert also show modifications when $r\\to0$, but these do not compromise their core behavior as long as the brane tension does not take very small values. In this way, the possibility of having a core behavior helps us to constrain the value of the brane tension and still keeps the NFW profile in the game.\n\nSummarizing, it is really challenging to establish bounds from dynamical systems like galaxy rotation curves due to the\nlow densities found in the galactic medium, which give only weak limits in comparison with other studies of more energetic systems. \nOur most important conclusion is that, despite these efforts, it is not straightforward to reconcile the results with other astrophysical and cosmological studies, making it impractical to\nfind evidence of extra dimensions in galactic dynamics through the determination of the brane tension value; \nmoreover, there is too much dispersion in the fitted values of the brane tension obtained with this method for some DM density profile models. \nIt is also important to note that the value of the brane tension is strongly dependent on the characteristics of the galaxy studied, suggesting the use of an average as the preferred value of the brane tension in each case.
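The cusp\/core observation can be made concrete with a small sketch of $\rho_{eff}=\rho(1-\rho\/\lambda)$; the value of $\lambda$ and the helper names below are illustrative, not taken from the paper.

```python
def rho_eff(rho, lam):
    """Brane-corrected effective density, rho_eff = rho * (1 - rho/lam)."""
    return rho * (1.0 - rho / lam)

def rho_nfw(rbar, rho_s=1.0):
    """NFW profile in dimensionless radius, Eq. (NFW)."""
    return rho_s / (rbar * (1.0 + rbar) ** 2)

def rho_piso(rbar, rho_s=1.0):
    """PISO profile in dimensionless radius, Eq. (PIP)."""
    return rho_s / (1.0 + rbar ** 2)

lam = 5.0  # hypothetical brane tension in units of rho_s
# NFW diverges at the centre, so rho > lam near rbar -> 0 and rho_eff turns negative:
nfw_centre = rho_eff(rho_nfw(1e-3), lam)
# PISO is cored with central density rho_s < lam, so rho_eff stays positive:
piso_centre = rho_eff(rho_piso(1e-3), lam)
```

This is exactly the behavior described above: the quadratic brane correction softens a cusp but, if it dominates, drives the effective density negative near the centre.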
\nIn addition, we notice that the effects of extra dimensions are stronger in the galactic core, suggesting that the NFW model is not appropriate for the search of constraints on brane theory due to the divergence at the center of the galaxy (see Eq.\\ \\eqref{comp}); PISO and Burkert could be good candidates to explore the galactic core in this framework; however, a more extensive study is necessary before we can reach a definitive conclusion. \n\nAs a final note, it is necessary to collect more observational data to constrain \nthe models or even reach a final conclusion about extra dimensions (for or against), supporting the brane constraints shown throughout this paper with a deeper study of galactic dynamics or other tests, such as the cosmological evidence present in the CMB anisotropies. \nThis work is in progress and will be reported elsewhere.\n\n\n\\begin{acknowledgements}\nMAG-A acknowledges support from SNI-M\\'exico, a CONACyT research fellowship, and the Instituto Avanzado de Cosmolog\\'ia (IAC) collaboration.\n\\end{acknowledgements}\n\n\n\\section{Introduction}\n\nThe construction of the free exact category over a category with finite limits was introduced in~\\cite{FECLEO}. It was later improved to the construction of the free exact category over a category with finite weak limits (\\emph{weakly lex}) in \\cite{REC}. This followed from the fact that the uniqueness of the finite limits of the original category is never used in the construction; only the existence. In~\\cite{REC}, the authors also considered the free regular category over a weakly lex one.\n\nAn important property of the free exact (or regular) construction is that such categories always have enough (regular) projectives. In fact, an exact category $\\mathbb{A}$ may be seen as the exact completion of a weakly lex category if and only if it has enough projectives.
If so, then $\\mathbb{A}$ is the exact completion of any of its \\emph{projective covers}. Such a phenomenon is captured by varieties of universal algebras: they are the exact completions of their full subcategory of free algebras.\n\nHaving this link in mind, our main interest in studying this subject is to characterize projective covers of certain algebraic categories through simpler properties involving projectives, and to relate those properties to the known varietal characterizations in terms of the existence of operations in their varietal theories (when applicable). Studies of this kind have been done for the projective covers of categories which are: Mal'tsev~\\cite{ECRAC}, protomodular and semi-abelian~\\cite{G}, (strongly) unital and subtractive~\\cite{compl}.\n\nThe aim of this work is to obtain characterizations of the weakly lex categories whose regular completion is a Goursat (=$3$-permutable) category (Propositions~\\ref{cover} and ~\\ref{weaksquare}). We then relate them to the existence of the quaternary operations which characterize the varieties of universal algebras that are $3$-permutable (Remark~\\ref{vars}).\n\n\n\\section{Preliminaries}\\label{pre}\n\nIn this section, we briefly recall some elementary categorical notions needed in what follows.\n\nA category with finite limits is \\textbf{regular} if regular epimorphisms are stable under pullbacks, and kernel pairs have coequalizers. Equivalently, any arrow $f: A\\longrightarrow B$ has a unique factorisation $f=i r $ (up to isomorphism), where $r$ is a regular epimorphism and $i$ is a monomorphism, and this factorisation is pullback stable.\n\nA \\textbf{relation} $R$ from $X$ to $Y$ is a subobject $\\langle r_1,r_2 \\rangle : R \\rightarrowtail X \\times Y $. The opposite relation of $R$, denoted $R^o$, is the relation from $Y$ to $X$ given by the subobject $\\langle r_2,r_1 \\rangle : R \\rightarrowtail Y \\times X $. A relation $R$ from $X$ to $X$ is called a relation on $X$.
We shall identify a morphism $f: X \\longrightarrow Y$ with the relation $\\langle 1_X,f \\rangle: X \\rightarrowtail X \\times Y$ and write $f^o$ for its opposite relation. Given two relations $R \\rightarrowtail X \\times Y $ and $S \\rightarrowtail Y \\times Z $ in a regular category, we write $SR \\rightarrowtail X \\times Z $ for their relational composite. With the above notations, any relation $\\langle r_1, r_2 \\rangle: R \\rightarrowtail X \\times Y$ can be seen as the relational composite $r_2r_1^o$.\nThe following properties are well known and easy to prove (see \\cite{ckp} for instance); we collect them in the following lemma:\n\\begin{lemma}\n\\label{lem}\nLet $f: X \\longrightarrow Y$ be an arrow in a regular category $\\mathbb{C}$, and let $ f=i r $ be its (regular epimorphism, monomorphism) factorisation. Then:\n\\begin{enumerate}\n \\item $f^of$ is the kernel pair of $f$, thus $1_X \\leqslant f^of $; moreover, $1_X = f^of$ if and only if $f$ is a monomorphism;\n \\item $ff^o$ is $(i,i)$, thus $ff^o \\leqslant 1_Y$; moreover, $ff^o = 1_Y$ if and only if $f$ is a regular epimorphism;\n \\item $ff^of = f $ and $f^off^o = f^o$.\n\\end{enumerate}\n\\end{lemma}\n\nA relation $R$ on $X$ is \\textbf{reflexive} if $1_X \\leqslant R$, \\textbf{symmetric} if $R^o \\leqslant R$, and \\textbf{transitive} if $RR \\leqslant R$. As usual, a relation $R$ on $X$ is an \\textbf{equivalence relation} when it is reflexive, symmetric and transitive. In particular, a kernel pair $\\langle f_1,f_2 \\rangle: \\Eq(f)\\rightarrowtail X \\times X$ of a morphism $f: X \\longrightarrow Y$ is an equivalence relation.\n\nBy dropping the assumption of uniqueness of the factorization in the definition of a limit, one obtains the definition of a weak limit. 
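In the category of finite sets, where a relation is simply a set of pairs, the identities of Lemma~\ref{lem} can be verified directly; the following is a small sketch with helper names of our own choosing.

```python
def compose(S, R):
    """Relational composite SR: apply R first, then S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def op(R):
    """Opposite relation R^o."""
    return {(y, x) for (x, y) in R}

def graph(f, X):
    """A map f: X -> Y viewed as the relation {(x, f(x))}."""
    return {(x, f(x)) for x in X}

X = {0, 1, 2, 3}
f = lambda x: x % 2        # a surjective, non-injective map onto Y = {0, 1}
F = graph(f, X)

kernel_pair = compose(op(F), F)  # f^o f: the kernel pair Eq(f), so 1_X <= f^o f
image = compose(F, op(F))        # f f^o: here the identity on Y, as f is surjective
```

Each assertion below instantiates one item of the lemma: `kernel_pair` is exactly the pairs identified by `f`, `image` is $1_Y$ because `f` is a regular epimorphism in Set, and $ff^of=f$.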
We call \\textbf{weakly lex} a category with weak finite limits.\n\nAn object $P$ in a category is (regular) \\textbf{projective} if, for any arrow $f: P \\longrightarrow X$ and for any regular epimorphism $g: Y \\twoheadrightarrow X$ there exists an arrow $h: P \\longrightarrow Y$ such that $g h = f $. We say that a full subcategory $\\mathbb{C}$ of $\\mathbb{A}$ is a \\textbf{projective cover} of $\\mathbb{A}$ if two conditions are satisfied:\n\\begin{itemize}\n\\item any object of $\\mathbb{C}$ is regular projective in $\\mathbb{A}$;\n\\item for any object $X$ in $\\mathbb{A}$, there exists a ($\\mathbb{C}$-)cover of $X$, that is an object $C$ in $\\mathbb{C}$ and a regular epimorphism $C\\twoheadrightarrow X$.\n\\end{itemize}\n\nWhen $\\mathbb{A}$ admits a projective cover, one says that $\\mathbb{A}$ has \\textit{enough projectives}.\n\\begin{remark}\\label{weaklims}\nIf $\\mathbb{C}$ is a projective cover of a weakly lex category $\\mathbb{A}$, then $\\mathbb{C}$ is also weakly lex~\\cite{REC}. For example, let $X$ and $Y$ be objects in $\\mathbb{C}$ and $\\xymatrix@C=15pt{ X & W \\ar[l] \\ar[r] & Y}$ a weak product of $X$ and $Y$ in $\\mathbb{A}$. Then, for any cover $\\bar{W}\\twoheadrightarrow W$ of $W$, $\\xymatrix@C=15pt{ X & \\bar{W} \\ar[l] \\ar[r] & Y}$ is a weak product of $X$ and $Y$ in $\\mathbb{C}$. Furthermore, if $\\mathbb{A}$ is a regular category, then the induced morphism $W\\twoheadrightarrow X\\times Y$ is a regular epimorphism.\nSimilar remarks apply to all weak finite limits.\n\\end{remark}\n\n\n\n\\section{Goursat categories}\nIn this section we review the notion of Goursat category and the characterizations of Goursat categories through regular images of equivalence relations and through Goursat pushouts.\n\n\\begin{definition} \\emph{\\cite{CLP,ckp}}\nA regular category $\\mathbb{C}$ is called a \\textbf{Goursat category} when the equivalence relations in $\\mathbb{C}$ are $3$-permutable, i.e. 
$RSR = SRS$ for any pair of equivalence relations $R$ and $S$ on the same object.\n\\end{definition}\n\nWhen $\\mathbb{C}$ is a regular category, $(R,r_1,r_2)$ is an equivalence relation on $X$ and $f: X \\twoheadrightarrow Y$ is a regular epimorphism, we define the \\textbf{regular image of $R$ along $f$} to be the relation $f(R)$ on $Y$ induced by the (regular epimorphism, monomorphism) factorization $\\langle s_1, s_2 \\rangle \\psi$ of the composite $(f\\times f) \\langle r_1,r_2\\rangle$:\n\\[\n \\xymatrix{\nR \\ar@{.>>}[r]^{\\psi} \\ar@{ >->}[d]_{\\langle r_1,r_2\\rangle} & f(R) \\ar@{ >.>}[d]^{\\langle s_1,s_2\\rangle}\\\\\nX \\times X \\ar@{>>}[r]_{f \\times f} & Y \\times Y.\n }\n\\]\nNote that the regular image $f(R)$ can be obtained as the relational composite $f(R)=fRf^o=fr_2r_1^of^o$. When $R$ is an equivalence relation, $f(R)$ is also reflexive and symmetric. In a general regular category $f(R)$ is not necessarily an equivalence relation.\nThis is the case in a \\emph{Goursat category} according to the following theorem.\n\n\\begin{theorem}\\label{CKP} \\emph{\\cite{ckp}} A regular category $\\mathbb{C}$ is a Goursat category if and only if for any regular epimorphism $f: X \\twoheadrightarrow Y$ and any equivalence relation $R$ on $X$, the regular image $f(R)= fRf^o$ of $R$ along $f$ is an equivalence relation.\n\\end{theorem}\n\nGoursat categories are well known in Universal Algebra. In fact, by a classical theorem in \\cite{hm}, a variety of universal algebras is a Goursat category precisely when its theory has two quaternary operations $p$ and $q$ such that the identities $p(x,y,y,z)= x$, $q(x,y,y,z)= z$ and $p(x,x,y,y)= q(x,x,y,y)$ hold. Accordingly, the varieties of groups, Heyting algebras and implication algebras are Goursat categories. 
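For the variety of groups, $3$-permutability can be checked directly on a small example; here is a sketch over congruences of the cyclic group $\mathbb{Z}_{12}$ (helper names are ours, and congruences are represented as sets of pairs).

```python
n = 12  # work with congruences on the cyclic group Z_12

def comp(S, R):
    """Relational composite SR: apply R first, then S."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

def cong(k):
    """Congruence modulo the subgroup generated by k (for k dividing 12)."""
    return {(a, b) for a in range(n) for b in range(n) if (a - b) % k == 0}

R, S = cong(4), cong(6)
RSR = comp(R, comp(S, R))
SRS = comp(S, comp(R, S))
```

Since congruences in a group even $2$-permute ($RS=SR$), both composites collapse to the congruence modulo $\gcd(4,6)=2$, so $RSR=SRS$ as the Goursat condition requires.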
The categories of topological groups, Hausdorff groups and right complemented semigroups are also Goursat categories.\n\nThere are many known characterizations of Goursat categories (see \\cite{ckp,gr,grod,grt} for instance). In particular the following characterization, through Goursat pushouts, will be useful:\n\\begin{theorem}\\label{Goursatpushout}\\emph{\\cite{gr}}\nLet $\\mathbb{C}$ be a regular category. The following conditions are equivalent:\n\\begin{enumerate}\n \\item[(i)] $\\mathbb{C}$ is a Goursat category;\n \\item[(ii)] any commutative diagram of type \\emph{($\\mathrm{I}$)} in $\\mathbb{C}$, where $\\alpha$ and $\\beta$ are regular epimorphisms and $f$ and $g$ are split epimorphisms\n \\[\n \\xymatrix@C=2cm{\n X \\ar@{}[dr]|{(\\mathrm{I})} \\ar@{->>}[r]^{\\alpha} \\ar@<3pt>[d]^{f} & U \\ar@<3pt>[d]^{g} \\ar@<40pt>@{}[d]^(.3){g\\alpha=\\beta f} \\ar@<40pt>@{}[d]^(.7){\\alpha s=t\\beta} \\\\ Y \\ar@{->>}[r]_{\\beta} \\ar@<3pt>[u]^{s}& W, \\ar@<3pt>[u]^t\n }\n\\]\n(which is necessarily a pushout) is a \\textbf{Goursat pushout}: the morphism $ \\lambda : \\Eq(f) \\longrightarrow \\Eq(g)$, induced by the universal property of the kernel pair $\\Eq(g)$ of $g$, is a regular epimorphism.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{remark}\n\\label{variant} Diagram ($\\mathrm{I}$) is a Goursat pushout precisely when the regular image of $\\Eq(f)$ along $\\alpha$ is (isomorphic to) $\\Eq(g)$.
From Theorem~\\ref{Goursatpushout}, it then follows that a regular category $\\mathbb{C}$ is a Goursat category if and only if for any commutative diagram of type ($\\mathrm{I}$) one has $\\alpha(\\Eq(f))= \\Eq(g)$.\n\\end{remark}\n\nNote that Theorem~\\ref{CKP} characterizes Goursat categories through the property that regular images of equivalence relations are equivalence relations, while Theorem~\\ref{Goursatpushout} characterizes them through the property that regular images of certain kernel pairs are kernel pairs.\n\n\n\n\\section{Projective covers of Goursat categories}\nIn this section, we characterize the categories with weak finite limits whose regular completions are Goursat categories.\n\\begin{definition} Let $\\mathbb{C}$ be a weakly lex category:\n\\begin{enumerate}\n\\item a \\textbf{pseudo-relation} on an object $X$ of $\\mathbb{C}$ is a pair of parallel arrows $\\xymatrix {R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X;}$ a pseudo-relation is a relation if $r_1$ and $r_2$ are jointly monomorphic;\n\\item a pseudo-relation $ \\xymatrix {R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ on $X$ is said to be:\n\\begin{itemize}\n \\item \\textbf{reflexive} when there is an arrow $ r: X \\longrightarrow R $ such that $ r_1 r = 1_X = r_2 r $;\n \\item \\textbf{symmetric} when there is an arrow $ \\sigma : R \\longrightarrow R $ such that $ r_2 = r_1 \\sigma $ and $r_1 = r_2 \\sigma $;\n \\item \\textbf{transitive} when, given a weak pullback\n $$\n \\xymatrix{\n W \\ar[r]^-{p_2} \\ar[d]_{p_1} & R \\ar[d]^{r_1} \\\\ R \\ar[r]_{r_2}& X,\n }\n$$\nthere is an arrow $ t : W \\longrightarrow R$ such that $ r_1 t = r_1 p_1 $ and $r_2 t = r_2 p_2$;\n \\item a \\textbf{pseudo-equivalence relation} if it is reflexive, symmetric and transitive.\n\\end{itemize}\n\\end{enumerate}\n\\end{definition}\n\nNote that the transitivity of a pseudo-relation $\\xymatrix {R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ does not depend on the choice of the weak pullback 
of $r_1$ and $r_2$; in fact, if\n$$\n \\xymatrix{\n \\bar{W} \\ar[r]^-{\\bar{p_2}} \\ar[d]_{\\bar{p_1}} & R \\ar[d]^{r_1} \\\\ R \\ar[r]_{r_2}& X,\n }\n$$\nis another weak pullback, the factorization $\\bar{W} \\longrightarrow W$ composed with the morphism $t: W \\longrightarrow R$ given by transitivity ensures that the pseudo-relation is transitive also with respect to the second weak pullback.\n\\vspace{0.5cm}\n\nThe following property from \\cite{Enrico} (Proposition 1.1.9) will be useful in the sequel:\n\n\\begin{proposition}\\emph{\\cite{Enrico}}\\label{EV}\nLet $\\mathbb{C}$ be a projective cover of a regular category $\\mathbb{A}$. Let $ \\xymatrix {R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ be a pseudo-relation in $\\mathbb{C}$ and consider its (regular epimorphism, monomorphism) factorization in $\\mathbb{A}$\n$$\n \\xymatrix{\n R \\ar[rr]^{\\langle r_1, r_2\\rangle} \\ar@{->>}[rd]_p & {} & X \\times X. \\\\ {} & E \\ar@{ >->}[ru]_{\\langle e_1, e_2\\rangle} & {}\n }\n$$\nThen, $R$ is a pseudo-equivalence relation in $\\mathbb{C}$ if and only if $E$ is an equivalence relation in $\\mathbb{A}$.\n\\end{proposition}\n\n\n\\begin{definition}\nLet $\\mathbb{C}$ be a weakly lex category. We call $\\mathbb{C}$ a \\textbf{weak Goursat category} if, for any pseudo-equivalence relation $ \\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ and any regular epimorphism $f: X \\twoheadrightarrow Y$, the composite $\\xymatrix { R \\ar@<2pt>[r]^{fr_1} \\ar@<-2pt>[r]_{fr_2} & Y}$ is also a pseudo-equivalence relation.\n\\end{definition}\n\n\nWe use Remark~\\ref{weaklims} repeatedly in the next results.\n\n\\begin{proposition}\\label{cover}\nLet $\\mathbb{C}$ be a projective cover of a regular category $\\mathbb{A}$. 
Then $\\mathbb{A}$ is a Goursat category if and only if $\\mathbb{C}$ is a weak Goursat category.\n\\end{proposition}\n\n\\begin{proof}\nSince $\\mathbb{C}$ is a projective cover of the regular category $\\mathbb{A}$, it is in particular weakly lex.\n\nSuppose that $\\mathbb{A}$ is a Goursat category. Let $ \\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ be a pseudo-equiva\\-lence relation in $\\mathbb{C}$ and let $f: X \\twoheadrightarrow Y$ be a regular epimorphism in $\\mathbb{C}$. For the (regular epimorphism, monomorphism) factorizations of $\\langle r_1, r_2\\rangle$ and $\\langle fr_1, fr_2\\rangle$ we get the following diagram\n\\begin{equation}\n\\label{proj}\n \\vcenter{\\xymatrix {\n R \\ar[rr]^(0.4){\\langle r_1, r_2\\rangle} \\ar@{=}[ddd] \\ar@{->>}[rd]_{p} & {} & X\\times X \\ar@{->>}[ddd]^{f \\times f} \\\\\n {} & E \\ar@{ >->}[ru]_(0.4){\\langle e_1, e_2\\rangle} \\ar@{.>}[d]_{w} & {} \\\\\n {} & S \\ar@{ >->}[rd]^(0.4){\\langle s_1, s_2\\rangle} & {} \\\\\n R \\ar@{->>}[ru]^{q} \\ar[rr]_(0.4){\\langle fr_1, fr_2\\rangle} & {} & Y \\times Y,\n}}\n\\end{equation}\n\nwhere $w:E\\longrightarrow S$ is induced by the strong epimorphism $p$\n\\[\n \\xymatrix@C=1cm {\n R \\ar@{->>}[r]^{p} \\ar@{->>}[d]_{q} & E \\ar[d]^{(f \\times f) \\langle e_1, e_2\\rangle} \\ar@{.>}[dl]_{w} \\\\ S \\ar@{ >->}[r]_{\\langle s_1,s_2\\rangle} & Y \\times Y.\n}\n\\]\nThen $w$ is a regular epimorphism and, by the commutativity of the right side of \\eqref{proj}, one has $S = f(E)$.\nBy Proposition \\ref{EV}, we know that $E$ is an equivalence relation in $\\mathbb{A}$. Since $\\mathbb{A}$ is a Goursat category, $S = f(E)$ is also an equivalence relation in $\\mathbb{A}$ and, by Proposition \\ref{EV}, we can conclude that $ \\xymatrix { R \\ar@<2pt>[r]^{fr_1} \\ar@<-2pt>[r]_{fr_2} & Y}$ is a pseudo-equivalence relation in $\\mathbb{C}$.\n\nConversely, suppose that $\\mathbb{C}$ is a weak Goursat category. 
Let $ \\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ be an equivalence relation in $\\mathbb{A}$ and $f : X \\twoheadrightarrow Y$ a regular epimorphism. We are going to show that $f(R) = S$\n$$ \\xymatrix {\n R \\ar@{->>}[r]^-{h} \\ar@<.5ex>[d]^{r_2} \\ar@<-.5ex>[d]_{r_1} & f(R)=S \\ar@<.5ex>[d]^{s_2} \\ar@<-.5ex>[d]_{s_1} \\\\ X \\ar@{->>}[r]_{f} & Y\n}\n$$\nis an equivalence relation; it is obviously reflexive and symmetric. In order to conclude that $\\mathbb{A}$ is a Goursat category, we must prove that $S$ is transitive, i.e.\\ that $S$ is an equivalence relation.\n\nWe begin by covering the regular epimorphism $f$ in $\\mathbb{A}$ with a regular epimorphism $\\bar{f}$ in $\\mathbb{C}$. For that, we take the cover $y : \\bar{Y} \\twoheadrightarrow Y$, consider the pullback of $y$ and $f$ in $\\mathbb{A}$ and take its cover $\\alpha: \\bar{X} \\twoheadrightarrow X \\times_Y \\bar{Y}$\n\\[\n \\xymatrix@C=1cm{\n \\bar{X} \\ar@{->>}[rd]^{\\alpha} \\ar@\/^2pc\/@{->>}[rrd]^{\\bar{f}} \\ar@\/_2pc\/@{->>}[ddr]_{x} & {} & {}\\\\ {} & X \\times_Y \\bar{Y} \\ar@{->>}[r]^{f'} \\ar@{->>}[d]_{y'}& \\bar{Y} \\ar@{->>}[d]^{y}\\\\ {} & X \\ar@{->>}[r]_{f} & Y.\n }\n\\]\nNote that the above outer diagram is a \\emph{regular pushout}, so that\n\\begin{equation}\\label{push}f^o y = x \\bar{f}^o\\;\\;\\mathrm{and}\\;\\; y^o f=\\bar{f}x^o \\end{equation}\n(Proposition 2.1 in~\\cite{ckp}).\n\nNext, we take the inverse image $x^{-1}(R)$ in $\\mathbb{A}$, which is an equivalence relation since $R$ is, and cover it to obtain a pseudo-equivalence relation $W \\rightrightarrows \\bar{X}$ in $\\mathbb{C}$. By assumption $ \\xymatrix { W \\ar@<2pt>[r] \\ar@<-2pt>[r] & \\bar{X} \\ar@{->>}[r]^{\\bar{f}} & \\bar{Y}}$ is a pseudo-equivalence relation in $\\mathbb{C}$, so it factors through an equivalence relation, say $ \\xymatrix { V \\ar@<2pt>[r]^{v_1} \\ar@<-2pt>[r]_{v_2} & \\bar{Y},}$ in $\\mathbb{A}$. 
We have\n\n$$\n\\xymatrix@C=25pt@R=15pt{\nW \\ar@{=}[rrr] \\ar@{->>}[d]_{w} &{} &{} & W \\ar[ddd] \\ar@{->>}[rd]^v &{} & {} \\\\\nx^{-1}(R)\\ar@{ >->}[dd]_{\\langle\\rho_1, \\rho_2\\rangle} \\ar@{->>}[dr]_{\\pi_R} \\ar@{.>}[rrrr]^-{\\gamma} & {} & {}& {} & V \\ar@{ >->}[ddl]_(0.3){\\langle v_1, v_2\\rangle} \\ar@{.>>}[dr]^{\\lambda} & {} \\\\\n{} & R \\ar@{ >->}[dd]_(.3){\\langle r_1,r_2\\rangle} \\ar@{->>}[rrrr]^(.2){h} & {} & {} & {} & S \\ar@{ >->}[dd]^-{\\langle s_1,s_2\\rangle} \\\\\n\\bar{X} \\times \\bar{X} \\ar@{->>}[dr]_-{x \\times x} \\ar@{-->>}[rrr]^(.7){\\bar{f} \\times \\bar{f}} & {} & {} & \\bar{Y} \\times \\bar{Y} \\ar@{->>}[rrd]^{y \\times y} & {} & {} \\\\\n{} & X \\times X \\ar@{->>}[rrrr]_{f \\times f} & {} & {} &{} & Y \\times Y, }\n$$\nwhere $\\gamma$ and $\\lambda$ are induced by the strong epimorphisms $w$ and $v$, respectively\\\\\n\\begin{center}\n$\n \\xymatrix@C=2cm {\n W \\ar@{->>}[r]^w \\ar@{->>}[dd]_v & x^{-1}(R) \\ar@{ >->}[d]^{\\langle\\rho_1, \\rho_2\\rangle} \\ar@{.>}[ddl]_{\\gamma} \\\\ {} & \\bar{X} \\times \\bar{X} \\ar@{->>}[d]^{\\bar{f} \\times \\bar{f}} \\\\ V \\ar@{ >->}[r]_{\\langle v_1,v_2\\rangle} & \\bar{Y} \\times \\bar{Y}\n}\n$\nand\n$\n \\xymatrix@C=2cm {\n W \\ar@{->>}[r]^v \\ar@{->>}[dd]_{h\\pi_R w} & V \\ar@{ >->}[d]^{\\langle v_1, v_2\\rangle} \\ar@{.>}[ddl]_{\\lambda} \\\\ {} & \\bar{Y} \\times \\bar{Y} \\ar@{->>}[d]^{y \\times y} \\\\ S \\ar@{ >->}[r]_{\\langle s_1,s_2\\rangle} & Y \\times Y.\n}\n$\n\\end{center}\n\nSince $\\gamma$ is a regular epimorphism, we have $V = \\bar{f}(x^{-1}(R))$.\nSince $\\lambda$ is a regular epimorphism, we have $S = y(V)$. 
One also has $V = y^{-1}(S)$ because\n$$\n\\begin{matrix}\n y^{-1}(S) &=& y^o S y & \\\\\n {} &=& y^o f(R) y &\\\\\n{} &=& y^o f R f^o y &\\\\\n{} &=& \\bar{f} x^o R x \\bar{f}^o & \\text{(by \\eqref{push})}\\\\\n{} &=& \\bar{f}(x^{-1}(R))&\\\\\n{} &=& V.&\n\\end{matrix}\n$$\n\nFinally, $S$ is transitive since\n$$\n\\begin{matrix}\n SS &=& yy^o S yy^o S yy^o &\\text{(Lemma~\\ref{lem}(2))} \\\\\n {} &=& yy^{-1}(S) y^{-1}(S) y^o &\\\\\n{} &=& y VV y^o &\\\\\n{} &=& y V y^o & \\text{(since $V$ is an equivalence relation)}\\\\\n{} &=& y(V)&\\\\\n{} &=& S.&\n\\end{matrix}\n$$\n\n\\end{proof}\n\nWe may also consider weak Goursat categories through a property which is more similar to the one mentioned in Theorem~\\ref{CKP}:\n\n\\begin{lemma} Let $\\mathbb{C}$ be a projective cover of a regular category $\\mathbb{A}$. The following conditions are equivalent:\n\\begin{enumerate}\n \\item[(i)] $\\mathbb{C}$ is a weak Goursat category;\n \\item[(ii)] for any commutative diagram in $\\mathbb{C}$\n\\begin{equation}\n\\label{equivdef}\n \\vcenter{ \\xymatrix {\n R \\ar@{->>}[r]^{\\varphi} \\ar@<.5ex>[d]^{r_2} \\ar@<-.5ex>[d]_{r_1} & S \\ar@<.5ex>[d]^{s_2} \\ar@<-.5ex>[d]_{s_1} \\\\ X \\ar@{->>}[r]_{f} & Y\n}}\n\\end{equation}\nsuch that $f$ and $\\varphi$ are regular epimorphisms and $R$ is a pseudo-equivalence relation, $S$ is a pseudo-equivalence relation.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof} $(i) \\Rightarrow (ii)$ Since $\\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ is a pseudo-equivalence relation, by assumption $\\xymatrix { R \\ar@<2pt>[r]^{fr_1} \\ar@<-2pt>[r]_{fr_2} & Y}$ is also a pseudo-equivalence relation and its (regular epimorphism, monomorphism) factorization gives an equivalence relation $\\xymatrix { E \\ar@<2pt>[r]^{e_1} \\ar@<-2pt>[r]_{e_2} & Y}$ in $\\mathbb{A}$ (Proposition~\\ref{EV}). 
We have the following commutative diagram\n$$ \\xymatrix {\n R \\ar@{=}[r] \\ar@<.5ex>[dd]^{r_2} \\ar@<-.5ex>[dd]_{r_1} & R \\ar@<.5ex>[dd]^(.4){f r_2} \\ar@<-.5ex>[dd]_(.4){f r_1} \\ar@{->>}[rr]^{\\varphi} \\ar@{->>}[rd]^{\\rho} & {} & S \\ar@{.>}[dl]^{\\sigma} \\ar@<.5ex>@\/^4pc\/[ddll]^{s_2} \\ar@<-.5ex>@\/^4pc\/[ddll]_{s_1} \\\\\n {} & {} & E \\ar@<.5ex>[dl]^{e_2} \\ar@<-.5ex>[dl]_{e_1} & {} \\\\\n X \\ar@{->>}[r]_{f} & Y & {} & {}\n}\n$$\nwhere $\\sigma: S \\longrightarrow E$ is induced by the strong epimorphism $\\varphi$\n\\[\n \\xymatrix@C=1cm {\n R \\ar@{->>}[r]^{\\varphi} \\ar@{->>}[d]_{\\rho} & S \\ar[d]^{\\langle s_1, s_2\\rangle} \\ar@{.>}[dl]_{\\sigma} \\\\ E \\ar@{ >->}[r]_{\\langle e_1,e_2\\rangle} & Y \\times Y.\n}\n\\]\nThen $\\sigma$ is a regular epimorphism and $\\xymatrix { S \\ar@<2pt>[r]^{s_1} \\ar@<-2pt>[r]_{s_2} & Y}$ is a pseudo-equivalence relation (Proposition~\\ref{EV}).\n\n $(ii) \\Rightarrow (i)$ Let $ \\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ be a pseudo-equivalence relation in $\\mathbb{C}$ and $f: X \\twoheadrightarrow Y$ a regular epimorphism. The following diagram is of type \\eqref{equivdef}\n$$\n\\xymatrix {\n R \\ar@{=}[r] \\ar@<.5ex>[d]^{r_2} \\ar@<-.5ex>[d]_{r_1} & R \\ar@<.5ex>[d]^{fr_2} \\ar@<-.5ex>[d]_{fr_1} \\\\ X \\ar@{->>}[r]_{f} & Y.\n}\n$$\nSince $\\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ is a pseudo-equivalence relation, by assumption\\\\ $\\xymatrix { R \\ar@<2pt>[r]^{f r_1} \\ar@<-2pt>[r]_{f r_2} & Y}$ is also a pseudo-equivalence relation.\n\\end{proof}\n\nAlternatively, weak Goursat categories may be characterized through a property more similar to the one mentioned in Remark~\\ref{variant}:\n\n\\begin{proposition}\\label{weaksquare} Let $\\mathbb{C}$ be a projective cover of a regular category $\\mathbb{A}$. 
The following conditions are equivalent:\n\\begin{enumerate}\n \\item[(i)] $\\mathbb{A}$ is a Goursat category;\n \\item[(ii)] $\\mathbb{C}$ is a weak Goursat category;\n \\item[(iii)] for any commutative diagram of type \\emph{($\\mathrm{I}$)} in $\\mathbb{C}$\n$$\\xymatrix@C=2cm{\n F \\ar@<.5ex>[d]^{\\beta_2} \\ar@<-.5ex>[d]_{\\beta_1} \\ar@{->>}[r]^{\\lambda}& G \\ar@<.5ex>[d]^{\\rho_2} \\ar@<-.5ex>[d]_{\\rho_1} \\\\\n X \\ar@{}[dr]|{\\mathrm{(I)}} \\ar@{->>}[r]^{\\alpha} \\ar@<3pt>[d]^{f} & U \\ar@<3pt>[d]^{g} \\\\\n Y \\ar@{->>}[r]_{\\beta} \\ar@<3pt>[u]^{s} & W \\ar@<3pt>[u]^t\n }\n$$\nwhere $F$ is a weak kernel pair of $f$ and $\\lambda$ is a regular epimorphism (in $\\mathbb{C}$), $G$ is a weak kernel pair of $g$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n$(i) \\Leftrightarrow (ii)$ By Proposition \\ref{cover}.\n\n$(i) \\Rightarrow (iii)$ If we take the kernel pairs of $f$ and $g$, then the induced morphism $\\bar{\\alpha}:\\Eq(f)\\longrightarrow \\Eq(g)$ is a regular epimorphism by Theorem~\\ref{Goursatpushout}. Moreover, the induced morphism $\\varphi:F\\longrightarrow \\Eq(f)$ is also a regular epimorphism. 
We get\n\\[\n \\vcenter{\\xymatrix{\n F \\ar@{->>}[r]^{\\lambda} \\ar@{->>}[d]^{\\varphi} & G \\ar@{.>}[d]^{\\omega} \\ar@<.5ex>@\/^4pc\/[dd]^{\\rho_2} \\ar@<-.5ex>@\/^4pc\/[dd]_{\\rho_1} \\\\\n \\Eq(f) \\ar@<.5ex>[d]^{f_2} \\ar@<-.5ex>[d]_{f_1} \\ar@{->>}[r]^{\\bar{\\alpha}} & \\Eq(g) \\ar@<.5ex>[d]^{g_2} \\ar@<-.5ex>[d]_{g_1} \\\\\n X \\ar@{->>}[r]^{\\alpha} \\ar@<3pt>[d]^{f} & U \\ar@<3pt>[d]^{g}\n \\\\ Y \\ar@{->>}[r]_{\\beta} \\ar@<3pt>[u]^{s} & W, \\ar@<3pt>[u]^t\n }}\n\\]\nwhere $\\omega:G\\longrightarrow \\Eq(g)$ is induced by the strong epimorphism $\\lambda$\n\\[\n \\xymatrix@C=1cm {\n F \\ar@{->>}[r]^{\\lambda} \\ar@{->>}[d]_{\\bar{\\alpha}\\varphi} & G \\ar[d]^{\\langle\\rho_1, \\rho_2\\rangle} \\ar@{.>}[dl]_-{\\omega} \\\\ \\Eq(g) \\ar@{ >->}[r]_{\\langle g_1,g_2\\rangle} & U \\times U.\n}\n\\]\nThis implies that $\\omega$ is a regular epimorphism ($\\omega \\lambda = \\bar{\\alpha} \\varphi$) and hence $ \\xymatrix@C=1cm {G \\ar@<2pt>[r]^{\\rho_1} \\ar@<-2pt>[r]_{\\rho_2} & U}$ is a weak kernel pair of $g$.\n\n$(iii) \\Rightarrow (ii)$\nConsider the diagram \\eqref{equivdef} in $\\mathbb{C}$ where $ \\xymatrix { R \\ar@<2pt>[r]^{r_1} \\ar@<-2pt>[r]_{r_2} & X}$ is a pseudo-equivalence relation. We want to prove that $ \\xymatrix { S \\ar@<2pt>[r]^{s_1} \\ar@<-2pt>[r]_{s_2} & Y}$ is also a pseudo-equivalence relation. Take the (regular epimorphism, monomorphism) factorizations of $R$ and $S$ in $\\mathbb{A}$ and the induced morphism $\\mu$ making the following diagram commutative\n$$\n\\xymatrix {\n R \\ar@{->>}[rr]^{\\varphi} \\ar@<.5ex>[dd]^(.4){r_2} \\ar@<-.5ex>[dd]_(.4){r_1} \\ar@{->>}[rd]^{\\rho} & {} & S \\ar@{->>}[rd]^{\\sigma} \\ar@<.5ex>[dd]^(.3){s_2} \\ar@<-.5ex>[dd]_(.3){s_1} & {} \\\\\n {} & U \\ar@{.>}[rr]_(0.3){\\mu} \\ar@<.5ex>[dl]^{u_2} \\ar@<-.5ex>[dl]_{u_1} & {} & V \\ar@<.5ex>[dl]^{v_2} \\ar@<-.5ex>[dl]_{v_1} \\\\\n X \\ar@{->>}[rr]_{f}& {} & Y. 
& {}\n}\n$$\nSince $\\mu$ is a regular epimorphism, $V = f(U)$ and, consequently, $V$ is reflexive and symmetric.\n\nSince $S$ is a pseudo-relation associated to $V$, $S$ is also a reflexive and symmetric pseudo-relation. We just need to prove that $V$ is transitive. To do so, we apply our assumption to the diagram\n\n$$\n\\xymatrix@C=25pt@R=20pt{\nF \\ar@{->>}[rr]^{\\lambda} \\ar@{->>}[dd]_{\\delta} \\ar@{->>}[rd] & {} & G \\ar@{->>}[dd]^{\\alpha} \\\\\n{} & \\Eq(r_1) \\times_{\\varphi(\\Eq(r_1))} G \\ar@{->>}[ru] \\ar@{->>}[dl] & {} \\\\\n \\Eq(r_1) \\ar@{->>}[rr]_{\\chi} \\ar@<-.5ex>[d] \\ar@<.5ex>[d] & {} & \\varphi(\\Eq(r_1)) \\ar@<.5ex>[d] \\ar@<-.5ex>[d] \\\\\n R \\ar@{}[drr]|{\\mathrm{(I)}} \\ar@{->>}[rr]^{\\varphi} \\ar@<3pt>[d]^{r_1}& {} & S \\ar@<3pt>[d]^{s_1} \\\\\n X \\ar@{->>}[rr]_{f} \\ar@<3pt>[u]^{e_R} & {} & Y \\ar@<3pt>[u]^{e_S}\n }\n$$\nwhere $G$ is a cover of the regular image $\\varphi(\\Eq(r_1))$ and $F$ is a cover of the pullback $\\Eq(r_1) \\times_{\\varphi(\\Eq(r_1))} G$. Since $\\delta$ is a regular epimorphism, $ \\xymatrix { F \\ar@<2pt>[r] \\ar@<-2pt>[r] & R}$ is a weak kernel pair of $r_1$. By assumption $ \\xymatrix { G \\ar@<2pt>[r] \\ar@<-2pt>[r] & S}$ is a weak kernel pair of $s_1$, thus $\\varphi(\\Eq(r_1)) = \\Eq(s_1)$. 
We then have\n\\[\n\\begin{matrix}\nVV &=& v_2 v_1^o v_1 v_2^o & \\text{(since $V$ is symmetric)} \\\\\n{} &=& v_2 \\sigma \\sigma^o v_1^o v_1 \\sigma \\sigma^o v_2^o & \\text{(Lemma~\\ref{lem}(2))} \\\\\n{} &=& s_2 s_1^o s_1 s_2^o & \\text{($v_i\\sigma=s_i$)}\\\\\n{} &=& s_2 \\varphi r_1^o r_1 \\varphi^o s_2^o &\\text{($\\varphi(\\Eq(r_1)) = \\Eq(s_1)$)}\\\\\n{} &=& f r_2 r_1^o r_1 r_2^o f^o & \\text{($s_i \\varphi = f r_i$)}\\\\\n{} &=& f u_2 \\rho \\rho^o u_1^o u_1 \\rho \\rho^o u_2^o f^o & \\text{($u_i\\rho=r_i$)} \\\\\n{} &=& f u_2 u_1^o u_1 u_2^o f^o & \\text{(Lemma~\\ref{lem}(2))} \\\\\n{} &=& f UU f^o & \\text{(since $U$ is an equivalence relation in $\\mathbb{A}$)} \\\\\n{} &=& f U f^o & \\\\\n{} &=& V. &\n\\end{matrix}\n\\]\n\\end{proof}\n\\begin{remark}\\label{vars}\nWhen $\\mathbb{A}$ is a $3$-permutable variety and $\\mathbb{C}$ its subcategory of free algebras, then the property stated in Proposition \\ref{weaksquare} (iii) is precisely what is needed to obtain the existence of the quaternary operations $p$ and $q$ which characterize $3$-permutable varieties. Let $X$ denote the free algebra on one element. Diagram $\\mathrm{(I)}$ below belongs to $\\mathbb{C}$\n$$\n\\xymatrix@C=2cm{\n F \\ar@{=}[r] \\ar@{->>}[d]^{\\mu} & F \\ar@{->>}[d]^{\\lambda \\mu} \\\\\n \\Eq(\\nabla_2 + \\nabla_2) \\ar@<.5ex>[d]^{\\pi_2} \\ar@<-.5ex>[d]_{\\pi_1} \\ar[r]^-{\\lambda} & \\Eq(\\nabla_3) \\ar@<.5ex>[d] \\ar@<-.5ex>[d] \\\\\n 4X \\ar@{}[dr]|{\\mathrm{(I)}} \\ar@{->>}[r]^{1+\\nabla_2+1} \\ar@<3pt>[d]^{\\nabla_2 +\\nabla_2} & 3X \\ar@<3pt>[d]^{\\nabla_3} \\\\\n 2X \\ar@{->>}[r] \\ar@<3pt>[u]^{\\iota_2 + \\iota_1} & X. \\ar@<3pt>[u]^{\\iota_2}\n }\n$$\nIf $F$ is a cover of $\\Eq(\\nabla_2+\\nabla_2)$, then $ \\xymatrix { F \\ar@<2pt>[r] \\ar@<-2pt>[r] & 4X}$ is a weak kernel pair of $\\nabla_2+\\nabla_2$.\nBy assumption $ \\xymatrix { F \\ar@<2pt>[r] \\ar@<-2pt>[r] & 3X}$ is a weak kernel pair of $\\nabla_3$, so that $\\lambda \\mu$ is surjective. 
We then conclude that $\\lambda$ is surjective and the existence of the quaternary operations $p$ and $q$ follows from Theorem $3$ in \\cite{gr}.\n\\end{remark}\n\n\n\\section*{Acknowledgements}\nThe first author acknowledges partial financial assistance by Centro de Matem\\'{a}tica da Universidade de Coimbra---UID\\/MAT\\/00324\\/2013, funded by the Portuguese Government through FCT\\/MCTES and co-funded by the European Regional Development Fund through the Partnership Agreement PT2020.\\\\\n The second author acknowledges financial assistance by Fonds de la Recherche Scientifique-FNRS Cr\\'edit Bref S\\'ejour \\`a l'\\'etranger 2018\\/V 3\\/5\\/033 - IB\\/JN - 11440, which supported his stay at the University of Algarve, where this paper was partially written.\n\n