\\chapter{Hardware}\n\\label{app:hardware}\n\nThis appendix gives an overview of the processors used throughout this work and\ntheir relevant properties.\n\nNote that, while the single-threaded peak performance is, where appropriate,\nbased on the processors' maximum turbo frequency, the multi-threaded peak\nperformance is instead computed from the base frequency. Furthermore, we only\nlist the vector instructions required to reach a processor's theoretical peak\nperformance.\n\n\\section{\\hwstyle Harpertown E5450}\n\\label{hardware:E5450}\n\n\\href{http:\/\/ark.intel.com\/products\/33083\/Intel-Xeon-Processor-E5450-12M-Cache-3_00-GHz-1333-MHz-FSB}{\\nolinkurl{http:\/\/ark.intel.com\/products\/33083\/Intel-Xeon-Processor-E5450-}\\\\\\nolinkurl{12M-Cache-3_00-GHz-1333-MHz-FSB}}\n\nOur {\\namestyle Harpertown E5450}s were part of our compute cluster. Because\nthey were disposed of in mid~2016, they are used in only part of this work's\nperformance analyses.\n\n\\begin{hwtable}\n    Name                &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5450 \\\\\n    Codename            &\\namestyle Harpertown \\\\\n    Lithography         &\\SI{45}{\\nano\\meter} \\\\\n    Release             &Q4 2007 \\\\\n    Cores \/ Threads     &4 \/ 4 \\\\\n    Base Frequency      &\\SI{3.00}{\\giga\\hertz} \\\\\n    Peak Performance    &\\SI{12}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n                        &\\SI{48}{\\giga\\flops\\per\\second} (all cores) \\\\\n    Peak Bandwidth      &\\SI{10.6}{\\giga\\byte\\per\\second} \\\\\n    L2~cache            &\\SI6{\\mebi\\byte} {\\em per 2~cores}, 24-way set associative\\\\\n    L1d~cache           &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n    Vector Instructions &1 SSE FMUL + 1 SSE FADD per cycle \\\\\\nopagebreak\n                        &$= \\SI4{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\section{\\hwstyle Sandy Bridge-EP 
E5-2670}\n\\label{hardware:E5-2670}\n\n\\href{http:\/\/ark.intel.com\/products\/64595\/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI}{\\nolinkurl{http:\/\/ark.intel.com\/products\/64595\/Intel-Xeon-Processor-E5-2670-}\\\\\\nolinkurl{20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI}}\n\nOur {\\namestyle Sandy Bridge-EP E5-2670}s are part of our compute cluster.\n\\intel{} \\turboboost is disabled on these machines unless otherwise stated.\n\n\\begin{hwtable}\n    Name                &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5-2670 \\\\\n    Codename            &\\namestyle Sandy Bridge-EP \\\\\n    Lithography         &\\SI{32}{\\nano\\meter} \\\\\n    Release             &Q1 2012 \\\\\n    Cores \/ Threads     &8 \/ 16 \\\\\n    Base Frequency      &\\SI{2.60}{\\giga\\hertz} \\\\\n    Max Turbo Frequency &\\SI{3.30}{\\giga\\hertz} ({\\em disabled unless otherwise stated})\\\\\n    Peak Performance    &\\SI{20.8}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n                        &\\SI{166.4}{\\giga\\flops\\per\\second} (all cores) \\\\\n    Peak Bandwidth      &\\SI{51.2}{\\giga\\byte\\per\\second} \\\\\n    L3~cache            &\\SI{20}{\\mebi\\byte} shared, 20-way set associative \\\\\n    L2~cache            &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n    L1d~cache           &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n    Vector Instructions &1 AVX FMUL + 1 AVX FADD per cycle \\\\\\nopagebreak\n                        &$= \\SI8{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\n\\section{\\hwstyle Ivy Bridge-EP E5-2680 v2}\n\\label{hardware:E5-2680 v2}\n\n\\href{http:\/\/ark.intel.com\/products\/75277\/Intel-Xeon-Processor-E5-2680-v2-25M-Cache-2_80-GHz}{\\nolinkurl{http:\/\/ark.intel.com\/products\/75277\/Intel-Xeon-Processor-E5-2680-}\\\\\\nolinkurl{v2-25M-Cache-2_80-GHz}}\n\nOur {\\namestyle Ivy Bridge-EP E5-2680 v2}s are part of our compute cluster.\n\n\\begin{hwtable}\n    Name                &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5-2680 v2\\\\\n    Codename 
&\\namestyle Ivy Bridge-EP \\\\\n Lithography &\\SI{22}{\\nano\\meter} \\\\\n Release &Q3 2013 \\\\\n Cores \/ Threads &10 \/ 20 \\\\\n Base Frequency &\\SI{2.80}{\\giga\\hertz} \\\\\n Max Turbo Frequency &\\SI{3.60}{\\giga\\hertz} \\\\\n Peak Performance &\\SI{28.8}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{224}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{59.7}{\\giga\\byte\\per\\second} \\\\\n L3~cache &\\SI{25}{\\mebi\\byte} shared, 20-way set associative \\\\\n L2~cache &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n Vector Instructions &1 AVX FMUL + 1 AVX FADD per cycle \\\\\\nopagebreak\n &$= \\SI8{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\section{\\hwstyle Haswell-EP E5-2680 v3}\n\\label{hardware:E5-2680 v3}\n\n\\href{http:\/\/ark.intel.com\/products\/81908\/Intel-Xeon-Processor-E5-2680-v3-30M-Cache-2_50-GHz}{\\nolinkurl{http:\/\/ark.intel.com\/products\/81908\/Intel-Xeon-Processor-E5-2680-}\\\\\\nolinkurl{v3-30M-Cache-2_50-GHz}}\n\nOur {\\namestyle Haswell-EP E5-2680 v3}s are part of our compute cluster.\n\n\\begin{hwtable}\n Name &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5-2680 v3\\\\\n Codename &\\namestyle Haswell-EP \\\\\n Lithography &\\SI{22}{\\nano\\meter} \\\\\n Release &Q3 2014 \\\\\n Cores \/ Threads &12 \/ 24 \\\\\n Base Frequency &\\SI{2.50}{\\giga\\hertz} \\\\\n Max Turbo Frequency &\\SI{3.30}{\\giga\\hertz} \\\\\n Peak Performance &\\SI{52.8}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{480}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{68}{\\giga\\byte\\per\\second} \\\\\n L3~cache &\\SI{30}{\\mebi\\byte} shared, 20-way set associative \\\\\n L2~cache &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n 
Vector Instructions &2 AVX FMA per cycle \\\\\\nopagebreak\n                        &$= \\SI{16}{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\section{\\hwstyle Broadwell i7-5557U}\n\\label{hardware:i7-5557U}\n\n\\href{https:\/\/ark.intel.com\/products\/84993\/Intel-Core-i7-5557U-Processor-4M-Cache-up-to-3_40-GHz}{\\nolinkurl{https:\/\/ark.intel.com\/products\/84993\/Intel-Core-i7-5557U-}\\\\\\nolinkurl{Processor-4M-Cache-up-to-3_40-GHz}}\n\nOur {\\namestyle Broadwell i7-5557U} is part of a {\\namestyle MacBook Pro}.\n\n\\begin{hwtable}\n    Name                &\\namestyle Intel\\textsuperscript\\textregistered{} Core\\texttrademark{} i7-5557U Processor \\\\\n    Codename            &\\namestyle Broadwell-U \\\\\n    Lithography         &\\SI{14}{\\nano\\meter} \\\\\n    Release             &Q1 2015 \\\\\n    Cores \/ Threads     &2 \/ 4 \\\\\n    Base Frequency      &\\SI{3.10}{\\giga\\hertz} \\\\\n    Max Turbo Frequency &\\SI{3.40}{\\giga\\hertz} \\\\\n    Peak Performance    &\\SI{54.4}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n                        &\\SI{99.2}{\\giga\\flops\\per\\second} (all cores) \\\\\n    Peak Bandwidth      &\\SI{25.6}{\\giga\\byte\\per\\second} \\\\\n    L3~cache            &\\SI4{\\mebi\\byte} shared, 16-way set associative \\\\\n    L2~cache            &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n    L1d~cache           &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n    Vector Instructions &2 AVX FMA per cycle \\\\\\nopagebreak\n                        &$= \\SI{16}{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\subsection{\\blasl1}\n\n\\routinedoc{dcopy,\n    arguments={\n        n=dimension $n$,\n        x=vector $\\dv x \\in \\R^n$,\n        incx=increment for \\dv x,\n        y=vector $\\dv y \\in \\R^n$,\n        incy=increment for \\dv y\n    },\n    description={double-precision vector copy},\n    operations={$\\dv y \\coloneqq \\dv x$},\n    flops=0,\n    datavol=$2 n$,\n    datamov=$2 n$,\n}\n\n\\routinedoc{dswap,\n    arguments={\n        n=dimension $n$,\n        x=vector $\\dv x \\in \\R^n$,\n        incx=increment for \\dv x,\n        y=vector $\\dv y \\in \\R^n$,\n        incy=increment for \\dv y\n    },\n    description={double-precision vector 
swap},\n    operations={${\\dv x, \\dv y \\coloneqq \\dv y, \\dv x}$},\n    flops=0,\n    datavol=$2 n$,\n    datamov=$4 n$,\n}\n\n\\routinedoc{daxpy,\n    arguments={\n        n=dimension $n$,\n        alpha=scalar $\\alpha$,\n        x=vector $\\dv x \\in \\R^n$,\n        incx=increment for \\dv x,\n        y=vector $\\dv y \\in \\R^n$,\n        incy=increment for \\dv y\n    },\n    description={double-precision scaled vector addition},\n    operations={$\\dv y \\coloneqq \\alpha \\dv x + \\dv y$},\n    flops=$2 n$,\n    datavol=$2 n$,\n    datamov=$3 n$,\n}\n\n\\routinedoc{ddot,\n    arguments={\n        n=dimension $n$,\n        x=vector $\\dv x \\in \\R^n$,\n        incx=increment for \\dv x,\n        y=vector $\\dv y \\in \\R^n$,\n        incy=increment for \\dv y\n    },\n    description={double-precision inner vector product},\n    operations={${\\alpha \\coloneqq \\dm[height=0, ']x \\dv y}$},\n    flops=$2 n$,\n    datavol=$2 n$,\n    datamov=$2 n$,\n}\n\n\n\\subsection{\\blasl2}\n\n\\routinedoc{dgemv,\n    arguments={\n        trans=\\dm A is transposed,\n        m=dimension $m$,\n        n=dimension $n$,\n        alpha=scalar $\\alpha$,\n        A=matrix $\\dm A \\in \\R^{m \\times n}$,\n        ldA=leading dimension for \\dm A,\n        x={vector $\\dv x \\in \\begin{cases}\n            \\R^n &\\text{if } \\code{trans} = \\code N\\\\\n            \\R^m &\\text{else}\n        \\end{cases}$},\n        incx=increment for \\dv x,\n        beta=scalar $\\beta$,\n        y={vector $\\dv y \\in \\begin{cases}\n            \\R^m &\\text{if } \\code{trans} = \\code N\\\\\n            \\R^n &\\text{else}\n        \\end{cases}$},\n        incy=increment for \\dv y\n    },\n    description={double-precision matrix-vector product},\n    operations={\n        {$\\dv y \\coloneqq \\alpha \\dm A \\matmatsep \\dv x + \\beta\\dv y$},\n        {$\\dv y \\coloneqq \\alpha \\dm[']A \\dv x + \\beta\\dv y$}\n    },\n    flops=$2 m n$,\n    datavol=$m n + m + n$,\n    datamov={$\\begin{array}{ll}\n        m n + 2 m + n &\\text{if } \\code{trans} = \\code N\\\\\n        m n + m + 2 n &\\text{else}\n    \\end{array}$},\n}\n\n\\routinedoc{dger,\n    arguments={\n        m=dimension $m$,\n        n=dimension $n$,\n        
alpha=scalar $\\alpha$,\n x=vector $\\dv x \\in \\R^m$,\n incx=increment for \\dv x,\n y=vector $\\dv y \\in \\R^n$,\n incy=increment for \\dv y,\n A=matrix $\\dm A \\in \\R^{m \\times n}$,\n ldA=leading dimension for \\dm A\n },\n description={double-precision vector outer product},\n operations={${\\dm A \\coloneqq \\alpha \\dv x \\dm[height=0, ']y + \\dm A}$},\n flops=$2 m n$,\n datavol=$m n + m + n$,\n datamov=$2 m n + m + n$,\n}\n\n\\routinedoc{dtrsv,\n arguments={\n uplo=\\dm[lower]A is lower- or upper-triangular,\n trans=\\dm[lower]A is transposed,\n diag=\\dm[lower]A is unit triangular,\n n=dimension $n$,\n A=matrix $\\dm[lower]A \\in \\R^{n \\times n}$,\n ldA=leading dimension for \\dm[lower]A,\n x=vector $\\dv x \\in \\R^n$,\n incX=increment for \\dv x\n },\n description={double-precision triangular linear system solve},\n operations={\n {$\\dv x \\coloneqq \\dm[lower, inv]A \\dv x$},\n {$\\dv x \\coloneqq \\dm[lower, inv']A \\dv x$}\n },\n flops=$n^2$,\n datavol={$\\frac12 n (n + 1) + n$},\n datamov={$\\frac12 n (n + 1) + 2 n$}\n}\n\n\n\\subsection{\\blasl3}\n\n\\routinedoc{dgemm,\n arguments={\n transA=\\dm A is transposed,\n transB=\\dm B is transposed,\n m=dimension $m$,\n n=dimension $n$,\n k=dimension $k$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm A \\in \\begin{cases}\n \\R^{m \\times k} &\\text{if } \\code{transA} = \\code N\\\\\n \\R^{k \\times m} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm A,\n B={matrix $\\dm B \\in \\begin{cases}\n \\R^{k \\times n} &\\text{if } \\code{transB} = \\code N\\\\\n \\R^{n \\times k} &\\text{else}\n \\end{cases}$},\n ldB=leading dimension for \\dm B,\n beta=scalar $\\beta$,\n C={matrix $\\dm C \\in \\R^{m \\times n}$},\n ldC=leading dimension for \\dm C\n },\n description={double-precision matrix-matrix product},\n operations={\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm[']B + \\beta \\dm C$},\n {$\\dm C 
\\coloneqq \\alpha \\dm[']A \\matmatsep \\dm B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm[']A \\matmatsep \\dm[']B + \\beta \\dm C$}\n },\n flops=$2 m n k$,\n datavol=$m k + k n + m n$,\n datamov=$m k + k n + 2 m n$,\n}\n\n\\routinedoc{dsymm,\n arguments={\n side=\\dm A is on the left or right of \\dm B,\n uplo=\\dm A is in lower- or upper-triangular storage,\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm A \\in \\begin{cases}\n \\R^{m \\times m} &\\text{if } \\code{side} = \\code L\\\\\n \\R^{n \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm A,\n B={matrix $\\dm B \\in \\R^{m \\times n}$},\n ldB=leading dimension for \\dm B,\n beta=scalar $\\beta$,\n C={matrix $\\dm C \\in \\R^{m \\times n}$},\n ldC=leading dimension for \\dm C\n },\n description={double-precision symmetric matrix-matrix product},\n operations={\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm B \\matmatsep \\dm A + \\beta \\dm C$}\n },\n flops={$\\begin{array}{ll}\n 2 m^2 n &\\text{if } \\code{side} = \\code L\\\\\n 2 m n^2 &\\text{else}\n \\end{array}$},\n datavol={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 2 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 2 m n &\\text{else}\n \\end{array}$},\n datamov={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 3 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 3 m n &\\text{else}\n \\end{array}$},\n}\n\n\\routinedoc{dtrmm,\n arguments={\n side=\\dm[lower]A is on the left or right of \\dm B,\n uplo=\\dm[lower]A is lower- or upper-triangular,\n transA=\\dm[lower]A is transposed,\n diag=\\dm[lower]A is unit triangular,\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm[lower]A \\in \\begin{cases}\n \\R^{m \\times m} &\\text{if } \\code{side} = \\code L\\\\\n \\R^{n \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm[lower]A,\n B={matrix 
$\\dm B \\in \\R^{m \\times n}$},\n        ldB=leading dimension for \\dm B\n    },\n    description={double-precision triangular matrix-matrix product},\n    operations={\n        {$\\dm B \\coloneqq \\alpha \\dm[lower]A \\matmatsep \\dm B$},\n        {$\\dm B \\coloneqq \\alpha \\dm[lower, ']A \\dm B$},\n        {$\\dm B \\coloneqq \\alpha \\dm[upper]A \\matmatsep \\dm B$},\n        {$\\dm B \\coloneqq \\alpha \\dm[upper, ']A \\dm B$},\n        {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower]A$},\n        {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower, ']A$},\n        {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper]A$},\n        {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper, ']A$}\n    },\n    flops={$\\begin{array}{ll}\n        m^2 n &\\text{if } \\code{side} = \\code L\\\\\n        m n^2 &\\text{else}\n    \\end{array}$},\n    datavol={$\\begin{array}{ll}\n        \\frac12 m (m + 1) + m n &\\text{if } \\code{side} = \\code L\\\\\n        \\frac12 n (n + 1) + m n &\\text{else}\n    \\end{array}$},\n    datamov={$\\begin{array}{ll}\n        \\frac12 m (m + 1) + 2 m n &\\text{if } \\code{side} = \\code L\\\\\n        \\frac12 n (n + 1) + 2 m n &\\text{else}\n    \\end{array}$},\n}\n\n\\routinedocforward\\dsyrk{ssyrk,\n    arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=},\n    description={single-precision symmetric rank-k update},\n}\n\n\\routinedoc{dsyrk,\n    arguments={\n        uplo=\\dm C has lower- or upper-triangular storage,\n        trans=\\dm A is transposed,\n        n=dimension $n$,\n        k=dimension $k$,\n        alpha=scalar $\\alpha$,\n        A={matrix $\\dm A \\in \\begin{cases}\n            \\R^{n \\times k} &\\text{if } \\code{trans} = \\code N\\\\\n            \\R^{k \\times n} &\\text{else}\n        \\end{cases}$},\n        ldA=leading dimension for \\dm A,\n        beta=scalar $\\beta$,\n        C={symmetric matrix $\\dm C \\in \\R^{n \\times n}$},\n        ldC=leading dimension for \\dm C\n    },\n    description={double-precision symmetric rank-k update},\n    operations={\n        {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm[']A + \\beta \\dm C$},\n        {$\\dm C \\coloneqq \\alpha \\dm[']A \\dm A + \\beta \\dm C$},\n    },\n    flops={$n (n + 1) k$},\n    datavol={$\\frac12 n (n + 1) + n k$},\n    datamov={$n (n + 1) + n k$},\n}\n\n\\routinedocforward\\dsyrk{cherk,\n    arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=},\n    description={single-precision complex Hermitian rank-k update},\n}\n\n\\routinedocforward\\dsyrk{zherk,\n    arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=},\n    description={double-precision complex Hermitian rank-k update},\n}\n\n\\routinedoc{dsyr2k,\n    arguments={\n        uplo=\\dm C has lower- or upper-triangular storage,\n        trans=\\dm A is transposed,\n        n=dimension $n$,\n        k=dimension $k$,\n        alpha=scalar $\\alpha$,\n        A={matrix $\\dm A \\in \\begin{cases}\n            \\R^{n \\times k} &\\text{if } \\code{trans} = \\code N\\\\\n            \\R^{k \\times n} &\\text{else}\n        \\end{cases}$},\n        ldA=leading dimension for \\dm A,\n        B={matrix $\\dm B \\in \\begin{cases}\n            \\R^{n \\times k} &\\text{if } \\code{trans} = \\code N\\\\\n            \\R^{k \\times n} &\\text{else}\n        \\end{cases}$},\n        ldB=leading dimension for \\dm B,\n        beta=scalar $\\beta$,\n        C={symmetric matrix $\\dm C \\in \\R^{n \\times n}$},\n        ldC=leading dimension for \\dm C\n    },\n    description={double-precision symmetric rank-2k update},\n    operations={\n        {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm[']B + \\alpha \\dm B \\matmatsep \\dm[']A + \\beta \\dm C$},\n        {$\\dm C \\coloneqq \\alpha \\dm[']A \\dm B + \\alpha \\dm[']B \\dm A + \\beta \\dm C$}\n    },\n    flops={$2 n (n + 1) k$},\n    datavol={$\\frac12 n (n + 1) + 2 n k$},\n    datamov={$n (n + 1) + 2 n k$},\n}\n\n\\routinedocforward\\dtrsm{strsm,\n    arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=},\n    description={single-precision triangular linear system solve with multiple\n    right hand sides},\n}\n\n\\routinedoc{dtrsm,\n    arguments={\n        side=\\dm[lower]A is on the left or right of \\dm B,\n        uplo=\\dm[lower]A is lower- or upper-triangular,\n        transA=\\dm[lower]A is transposed,\n        diag=\\dm[lower]A is unit triangular,\n        m=dimension $m$,\n        n=dimension $n$,\n        alpha=scalar $\\alpha$,\n        
A={matrix $\\dm[lower]A \\in \\begin{cases}\n \\R^{m \\times m} &\\text{if } \\code{side} = \\code L\\\\\n \\R^{n \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm[lower]A,\n B={matrix $\\dm B \\in \\R^{m \\times n}$},\n ldB=leading dimension for \\dm B\n },\n description={double-precision triangular linear system solve with multiple\n right hand sides},\n operations={\n {$\\dm B \\coloneqq \\alpha \\dm[lower, inv]A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[lower, inv']A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[upper, inv]A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[upper, inv']A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower, inv]A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower, inv']A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper, inv]A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper, inv']A$}\n },\n flops={$\\begin{array}{ll}\n m^2 n &\\text{if } \\code{side} = \\code L\\\\\n m n^2 &\\text{else}\n \\end{array}$},\n datavol={$\\begin{array}{ll}\n \\frac12 m (m + 1) + m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + m n &\\text{else}\n \\end{array}$},\n datamov={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 2 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 2 m n &\\text{else}\n \\end{array}$},\n}\n\n\\routinedocforward\\dtrsm{ctrsm,\n arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=},\n description={single-precision complex triangular linear system solve with\n multiple right hand sides},\n}\n\n\\routinedocforward\\dtrsm{ztrsm,\n arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=},\n description={double-precision complex triangular linear system solve with\n multiple right hand sides},\n}\n\n\\subsection*{\\codestyle\\bf\\llap{\\routine(}\\arglist)}\n \\label{routine:\\routine}\n {\\it\\description}\n ]\n \\def\\empty{}\\small\\singlespacing\n 
\\expandafter\\ifx\\note\\pgfkeysnovalue\\else\n \\paragraph{Note\\strut}\n \\note\n \\fi\n\n \\ifx\\operations\\empty\\else\n \\paragraph{Operations\\strut}\n \\operations\n \\fi\n\n {\n \\raggedright\n \\hbadness=10000\n \\hangafter=1\n \\renewcommand\\newline{\n \\par\n \\settowidth\\hangindent{\\hspace\\argwidth: }\n \\makebox[\\hangindent]{}%\n }\n \\paragraph{Arguments\\strut}\n \\arguments\n }\n\n \\expandafter\\ifx\\flops\\pgfkeysnovalue\\else\n \\paragraph{Minimal FLOP-count\\strut}\n \\flops\n \\fi\n\n \\expandafter\\ifx\\datavol\\pgfkeysnovalue\\else\n \\paragraph{Data volume\\strut}\n \\datavol\n \\fi\n\n \\expandafter\\ifx\\datamov\\pgfkeysnovalue\\else\n \\paragraph{Minimal data movement\\strut}\n \\datamov\n \\fi\n \\end{multicols}\n\n \\filbreak\n}}\n\n\\newcommand\\routinedocforward[2]{\n \\pgfkeys{\n \/routine,\n #2,\n name\/.get=\\routine,\n arglist\/.get=\\arglist,\n description\/.get=\\description,\n }\n \\subsection*{\\codestyle\\bf\\routine(\\arglist)}\n \\label{routine:\\routine}\n {\\it\\description.}\n See #1.\n\n \\filbreak\n}\n\n\n\n\\subsection*{Reference Implementations}\n\nThe \\blas and \\lapack reference implementations~\\cite{blasweb, lapackweb} are\nfully functional and well-documented and thus of great value as references for\nroutine interfaces and semantics. However, on their own they only attain poor\nperformance, and should therefore not be used in production codes.\n\nAll routines in the \\blas reference implementation are single-threaded\nand unoptimized. The central kernel \\dgemm, for instance, is realized as a\nsimple triple loop that reaches around \\SI6{\\percent} of modern processors'\nsingle-threaded theoretical peak performance---optimized implementations are\ncommonly $15\\times$~faster on a single core and provide excellent multi-threaded\nscalability.\n\nSince \\lapack primarily relies on a tuned \\blas implementation for speed,\nthe reference implementation can in principle reach good performance. 
However,\nas its documentation states, this requires careful tuning of its block sizes,\nwhose default values are generally too low on contemporary processors.\nOptimized implementations may further improve \\lapack's performance through\nfaster algorithms, tuned unblocked kernels (e.g., \\dtrti2, \\dpotf2), and\nalgorithm-level parallelism (e.g., task-based algorithms-by-blocks).\n\nThroughout this work, we use reference \\blas and \\lapack version~3.5.0.\n\n\n\\subsection*{\\namestyle OpenBLAS}\n\n{\\namestyle OpenBLAS}~\\cite{openblasweb} is a high-performance open-source \\blas\nand \\lapack implementation that is currently developed and maintained at the\n{\\namestyle Massachusetts Institute of Technology}. It provides optimized and\nmulti-threaded \\blas kernels for a wide range of architectures, and offers tuned\nversions of core \\lapack routines, such as \\dlauum, \\dtrtri, \\dpotrf, and\n\\dgetrf. {\\namestyle OpenBLAS} is based on the discontinued {\\namestyle\nGotoBLAS2}, adopting its approach and much of its source code; it includes\nassembly kernels for more recent architectures, such as \\sandybridgeshort and\n\\haswellshort, as well as {\\namestyle AMD} processors.\n\nThroughout this work, we use {\\namestyle OpenBLAS} version~0.2.15.\n\n\n\\subsection*{\\namestyle BLIS}\n\nThe {\\namestyle BLAS-like Library Instantiation Software} ({\\namestyle\nBLIS})~\\cite{blis1, blis2, blis3, blisweb} is a fairly recent framework for\ndense linear algebra libraries that is actively developed at the {\\namestyle\nUniversity of Texas at Austin}. While it comes with its own API, which is a\nsuperset, generalization, and extension of the \\blas, it contains a\ncompatibility layer offering the original de-facto standard \\blas interface.\n{\\namestyle BLIS} builds upon the {\\namestyle GotoBLAS} approach, yet\nrestructures and solidifies it to make all but a tiny ``micro-kernel''\narchitecture-independent. 
While its performance is so far generally lower than\nthat of \\openblas (see examples in \\cref{sec:model:args}), its ambitious goal is\nto significantly speed up both the development of new application-specific\nkernels and the adaptation to other architectures.\n\nAlthough multi-threading was introduced into {\\namestyle BLIS}~\\cite{blis3} soon\nafter its inception, its flexible threading model lacked a simple end-user\ninterface (such as honoring the environment variable \\code{OMP\\_NUM\\_THREADS})\nuntil November~2016 (commit\n\\href{https:\/\/github.com\/flame\/blis\/commit\/6b5a4032d2e3ed29a272c7f738b7e3ed6657e556}{\\sf\n6b5a403}). As a result, we only present single-threaded results for\n{\\namestyle BLIS}.\n\nThroughout this work we use {\\namestyle BLIS} version~0.2.0.\n\n\n\\subsection*{\\namestyle MKL}\n\n\\intel's {\\namestyle Math Kernel Library} ({\\namestyle MKL})~\\cite{mklweb} is a\nhigh-performance library for \\intel processors that covers \\blas and\n\\lapack, as well as other high-performance computations, such as Fast\nFourier Transforms (FFT) and Deep Neural Networks (DNN). While {\\namestyle MKL}\nis a closed-source library, it recently began offering free developer licenses.\nIn terms of performance, it is in most scenarios superior to open-source\nlibraries such as \\openblas and \\blis (see examples in \\cref{sec:model:args}).\n\nThroughout this work we use {\\namestyle MKL} version~11.3.\n\n\n\\subsection*{\\namestyle Accelerate}\n\n\\apple's framework {\\namestyle Accelerate}~\\cite{accelerateweb} is a\nhigh-performance library that ships with {\\namestyle macOS} and, among others,\nprovides full \\blas and \\lapack functionality. 
Its performance is in many\ncases comparable to, or slightly better than, that of \\openblas.\n\n\n\\subsection*{Other Implementations}\n\nThe following notable \\blas and \\lapack implementations are not used throughout\nthis work:\n\\begin{itemize}\n    \\item The {\\namestyle Automatically Tuned Linear Algebra Software}\n        ({\\namestyle ATLAS})~\\cite{atlas1, atlas2, atlas3, atlasweb} is a\n        high-performance \\blas implementation that relies on auto-tuning. While\n        {\\namestyle ATLAS} kernels typically do not reach the performance of\n        hand-tuned implementations such as \\openblas, \\blis, and \\mkl, it\n        provides good performance for new and exotic architectures with little\n        effort.\n\n    \\item {\\namestyle GotoBLAS2}~\\cite{gotoblas1, gotoblas2, gotoblasweb} is a\n        high-performance \\blas implementation that was developed at the\n        {\\namestyle Texas Advanced Computing Center}. Since its\n        discontinuation, much of its code base was picked up by its successor\n        \\openblas in~2011, and its approach was refined and generalized in\n        \\blis.\n\n    \\item {\\namestyle IBM}'s {\\namestyle Engineering and Scientific Subroutine\n        Library} ({\\namestyle ESSL}) \\cite{esslweb} provides a high-performance\n        \\blas implementation and parts of \\lapack for {\\namestyle POWER}-based\n        systems, such as {\\namestyle Blue Gene} supercomputers.\n\\end{itemize}\n\n\\section{Storage Format}\n    \\label{app:libs:store}\n    \\input{applibs\/store}\n\n    \\section{\\namestyle Basic Linear Algebra Subprograms}\n    \\label{app:libs:blas}\n    \\input{applibs\/blas}\n\n    \\section{\\namestyle Linear Algebra PACKage}\n    \\label{app:libs:lapack}\n    \\input{applibs\/lapack}\n\n    \\section{Implementations}\n    \\label{app:libs:libs}\n    \\input{applibs\/libs}\n}\n\n\\subsection{Scalars}\nEach scalar operand (e.g., $\\alpha \\in \\R$) is passed as a single argument\n(e.g., \\code{double *alpha}). 
Complex scalars are stored as two consecutive\n    elements of the base data type (\\code{float} or \\code{double}) that represent\n    the real and imaginary parts.\n\n\n\\subsection{Vectors}\nEach vector operand (e.g., $\\dv x \\in \\R^n$) is specified by three arguments:\n\\begin{itemize}\n    \\item A size argument (e.g., \\code{int *n}) determines the length of the\n        vector.  One size argument can describe multiple vectors (and\/or\n        matrices) with the same size.\n\n    \\item A data argument (e.g., \\code{double *x}) points to the vector's first\n        element in memory.\n\n    \\item An increment argument (e.g., \\code{int *incx}) identifies the stride\n        between consecutive elements of the vector.  For instance, a\n        contiguously stored vector has an increment of~1.\n\n        Note that most routines allow negative increments.  In this case, the\n        vector is stored in reverse, and the data argument points to the\n        vector's last element---the first memory location.\n\\end{itemize}\nTo summarize, vector element~$x_i$ is stored at \\code{x[i * incx]} if\n\\code{incx} is positive and \\code{x[(i - n + 1) * incx]} otherwise.\n\n\n\\subsection{Matrices}\nEach matrix (e.g., $\\dm[width=.7]A \\in \\R^{m \\times n}$) is specified by four\narguments:\n\\begin{itemize}\n    \\item Two size arguments (e.g., \\code{int *m} and \\code{int *n}) determine\n        the matrix height~($m$) and width~($n$).  One size argument can describe\n        the dimensions of multiple matrices (and\/or vectors), or both dimensions\n        of a square matrix.\n\n    \\item A data argument (e.g., \\code{double *A}) points to the first matrix\n        element in memory (e.g., $a_{00}$).  The following elements of the\n        first column (e.g., $a_{i0}$) are stored consecutively in memory as a\n        vector with increment~1.\n\n    \\item A leading dimension argument (e.g., \\code{int *ldA}) describes the\n        distance in memory between matrix columns.  It can hence be understood\n        and used as the increment argument for the matrix rows viewed as vectors. 
The\n        term ``leading dimension'' comes from the concept that a referenced\n        matrix is part of a larger, contiguously stored ``leading'' matrix.  It\n        allows one to operate on sub-matrices or tensor panels as shown throughout\n        this work.\n\n        Leading dimensions must be at least equal to the height of the matrix\n        (e.g., $m$).\n\\end{itemize}\nTo summarize, matrix element~$a_{ij}$ is stored at \\code{A[i + j * ldA]}.\n\n\n\\subsection{Compute-Bound Efficiency}\n\\label{sec:term:eff:compbound}\n\nA computation is compute-bound on a hardware platform if the memory operations\nto load and store the involved data can be amortized by floating-point\noperations, i.e., the available memory bandwidth is sufficient for all transfers\nand the speed at which the processor performs \\flops is the bottleneck.  An\noperation is theoretically compute bound when\n\\[\n    \\text{arithmetic intensity} \\geq \\frac\\pperf\\pbw \\enspace.\n\\]\nFurthermore, a computation's\n\\definition[(compute-bound)\\\\efficiency]{compute-bound efficiency} (or simply\n{\\em efficiency}) is given by\n\\begin{equation}\\label{eq:term:eff}\n    \\text{compute-bound efficiency}\n    \\defeqq \\frac{\\text{attained performance}}\\pperf \\enspace.\n\\end{equation}\nThis unit-less metric between 0 and~1 indicates how well the available hardware\nresources are utilized:  While a value close to~1 corresponds to near-optimal\nutilization, lower values indicate untapped resource potential.\n\n\\begin{example}{Compute-bound efficiency}{term:eff}\n    The matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B +\n    \\dm C$ (\\dgemm[NN]) with $\\dm A,\\allowbreak \\dm B,\\allowbreak \\dm C \\in\n    \\R^{1000 \\times 1000}$ has an arithmetic intensity of (see\n    \\cref{ex:term:ai})\n    \\[\n        \\SIvar{1000 \\times \\frac1{16}}{\\flops\\per\\Byte} \n        = \\SI{62.5}{\\flops\\per\\Byte} \\enspace.\n    \\]\n    On a single core of a \\sandybridge with a peak floating-point performance of\n    \\SI{20.8}{\\giga\\flops\\per\\second} 
(\\turboboost disabled) and a measured peak bandwidth\nof \\SI{16.25}{\\gibi\\byte\\per\\second}, this operation is clearly compute bound:\n\\[\n \\frac\n {\\SI{20.8}{\\giga\\flops\\per\\second}}\n {\\SI{16.25}{\\gibi\\byte\\per\\second}}\n \\approx \\SI{1.28}{\\flops\\per\\Byte}\n < \\SI{62.5}{\\flops\\per\\Byte} \\enspace.\n \\]\n If the \\dgemm[NN] runs at \\SI{19.61}{\\giga\\flops\\per\\second}\n (\\cref{ex:term:perf}), it reaches an efficiency of\n \\[\n \\frac{\\text{attained performance}}\\pperf\n = \\frac\n {\\SI{19.61}{\\giga\\flops\\per\\second}}\n {\\SI{20.8}{\\giga\\flops\\per\\second}}\n \\approx \\SI{94.27}\\percent \\enspace.\n \\]\n\\end{example}\n\nThere are many different ways to look at efficiency other than the ratio of\nattained performance to peak performance. Rewriting the definition of\nefficiency as\n\\begin{align*}\n \\text{efficiency}\n &= \\frac{\\text{attained performance}}\\pperf \\\\\n &= \\frac\n {\\text{cost} \/ \\text{runtime}}\n {\\text{cost} \/ \\text{optimal runtime}} \\\\\n &= \\frac{\\text{optimal runtime}}{\\text{runtime}} \\enspace,\n\\end{align*}\nit is expressed as the ratio of the minimum time required to perform the\noperation's minimal \\flops on the given hardware to the computation's runtime.\nIf we reorganize it as\n\\begin{align*}\n \\text{efficiency}\n &= \\frac{\\text{attained performance}}\\pperf \\\\\n &= \\frac{\\text{cost} \/ \\text{runtime}}\\pperf \\\\\n &= \\frac{\\text{cost}}{\\text{runtime} \\times \\pperf} \\\\\n &= \\frac{\\text{cost}}{\\text{available \\flops}} \\enspace,\n\\end{align*}\nit can be seen as the ratio of the operation's minimal \\flop-count to how many\n\\flops the processor could theoretically perform during the computation's\nruntime.\n\n\\begin{example}{Expressing compute-bound efficiency}{term:eff2}\n In \\cref{ex:term:eff} the \\dgemm[NN] took \\SI{102}\\ms, while the\n \\sandybridge with a peak performance of \\SI{20.8}{\\giga\\flops\\per\\second}\n (\\turboboost disabled) could have performed 
the required $\\SIvar{2 \\times\n 1000^3}\\flops = \\SI{2e9}\\flops$ in\n \\[\n \\frac{\\SI{2e9}\\flops}{\\SI{20.8}{\\giga\\flops\\per\\second}}\n \\approx \\SI{96.15}\\ms \\enspace .\n \\]\n Hence, the computation's efficiency can be computed as\n \\[\n \\frac{\\text{optimal runtime}}{\\text{runtime}}\n = \\frac{\\SI{96.15}\\ms}{\\SI{102}\\ms} \\approx \\SI{94.26}\\percent \\enspace.\n \\]\n\n We can also consider that in the \\SI{102}{\\ms} that the \\dgemm[NN] took, the\n \\sandybridgeshort core could have performed\n \\[\n \\SI{102}\\ms \\times \\SI{20.8}{\\giga\\flops\\per\\second}\n \\approx \\SI{2.12e9}\\flops \\enspace.\n \\]\n Once again we obtain the same efficiency, as a \\flop-count ratio:\n \\[\n \\frac{\\text{cost}}{\\text{available \\flops}}\n = \\frac{\\SI{2e9}\\flops}{\\SI{2.12e9}\\flops}\n \\approx \\SI{94.26}\\percent\n \\enspace.\n \\]\n\\end{example}\n\n\n\\subsection{Bandwidth-Bound Efficiency}\n\\label{sec:term:eff:bwbound}\n\nA computation is bandwidth-bound on a hardware platform if the memory operations\ncannot load and store the involved data as fast as the processor's\nfloating-point units can process it, i.e., the memory bandwidth is the\nbottleneck and the compute units are partially idle. 
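In code, this classification amounts to a single comparison of an operation's arithmetic intensity with the machine balance, i.e., peak performance divided by peak bandwidth in consistent units. A minimal sketch (the function name is illustrative, not from any library):

```c
#include <assert.h>
#include <stdbool.h>

/* True if an operation with the given arithmetic intensity (flops/byte)
 * is theoretically bandwidth-bound on a machine with the given peak
 * floating-point performance (flops/s) and peak bandwidth (bytes/s). */
bool is_bandwidth_bound(double intensity, double peak_perf, double peak_bw)
{
    return intensity <= peak_perf / peak_bw;
}
```

For the single-core Sandy Bridge numbers used in the examples (20.8 Gflops/s and a measured 16.25 GiB/s), the inner product (intensity 1/8 flops/byte) falls on the bandwidth-bound side, while the large matrix-matrix multiplication (intensity 62.5 flops/byte) does not.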
An operation is\ntheoretically bandwidth-bound when\n\\[\n \\text{arithmetic intensity} \\leq \\frac\\pperf\\pbw \\enspace.\n\\]\nFurthermore, a computation's \\definition{bandwidth-bound efficiency} is defined\nas\n\\begin{equation}\n \\label{eq:term:eff:bwbound}\n \\text{bandwidth-bound efficiency} \\defeqq\n \\frac{\\text{attained bandwidth}}\\pbw \\enspace.\n\\end{equation}\nA bandwidth-bound efficiency close to~1 indicates a good utilization of the\nprocessor's main-memory bandwidth, while smaller values signal underutilization.\n\n\\begin{example}{Bandwidth-bound efficiency}{term:bwbeff}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv\n y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^{\\num{100000}}$ has an arithmetic\n intensity of \\SIvar{\\frac18}{\\flops\\per\\Byte} (\\cref{ex:term:ai}) and is\n thus clearly bandwidth-bound. If on one core of a \\sandybridge, it attains\n a bandwidth of \\SI{11.49}{\\gibi\\byte\\per\\second} (\\cref{ex:term:bw}),\n relative to the processor's empirical peak bandwidth of\n \\SI{16.25}{\\gibi\\byte\\per\\second} (\\cref{ex:term:peakbw}), it performed at a\n bandwidth-bound efficiency of\n \\[\n \\frac{\\text{attained bandwidth}}\\pbw\n = \\frac\n {\\SI{11.49}{\\gibi\\byte\\per\\second}}\n {\\SI{16.25}{\\gibi\\byte\\per\\second}}\n \\approx \\SI{70.71}\\percent \\enspace.\n \\]\n\\end{example}\n\n\n\\subsection{The Roofline Model}\n\\label{sec:term:roofline}\n\nThe \\definition{Roofline model}~\\cite{roofline1} plots the performance of\ncomputations (in \\si{\\giga\\flops\\per\\second}) against their arithmetic intensity\n(in \\si{\\flops\\per\\Byte}). 
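The attainable-performance ``roof'' drawn into such plots (formalized below) is simply the pointwise minimum of two hardware limits. A minimal sketch (illustrative helper; units are bytes per second and flops per second, so a peak bandwidth given in GiB/s must first be scaled by $2^{30}$):

```c
#include <assert.h>

/* Attainable performance (flops/s) under the roofline model: the minimum
 * of the bandwidth-bound limit (peak bandwidth times arithmetic intensity)
 * and the peak floating-point performance. */
double roofline_limit(double intensity, double peak_bw, double peak_perf)
{
    double bw_limit = peak_bw * intensity; /* bytes/s * flops/byte = flops/s */
    return bw_limit < peak_perf ? bw_limit : peak_perf;
}
```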
In addition to data-points from measurements, two\nlines are added to such a plot to indicate the theoretically attainable\nperformance depending on the arithmetic intensity: The product of peak bandwidth\nand arithmetic intensity (in units: $\\si{\\gibi\\byte\\per\\second} \\times\n\\si{\\flops\\per\\Byte} = \\si{\\gibi\\flops\\per\\second} \\approx\n\\SI{1.07}{\\giga\\flops\\per\\second}$) constitutes a straight line through the\norigin with the bandwidth as a gradient (visually: \\tikz\\draw[thick, darkred]\n(0, 0) -- (1.5ex, 1.5ex);) that represents the bandwidth-bound performance limit;\nand the peak floating-point performance is a constant line (\\tikz\\draw[thick,\ndarkred] (0,0) (0, 1.5ex) -- (3ex, 1.5ex);). Together these two lines form the\nroofline-shaped performance limit (\\tikz\\draw[thick, darkred] (0, 0) -- (1.5ex,\n1.5ex) -- (4.5ex, 1.5ex);) that gives the visualization its name:\n\\begin{equation}\\label{eq:term:roofline}\n \\text{performance limit} =\n \\min\\left(\\begin{array}c\n \\pbw \\times \\text{intensity},\\\\\n \\pperf\n \\end{array}\\right) \\enspace.\n\\end{equation}\nComparing the attained performance of a computation to this limit yields the\ncomputation's efficiency---bandwidth-bound below the left part of the ``roof''\nand compute-bound below the right part.\n\n\\input{appterm\/figures\/roofline}\n\n\\begin{example}{The roofline model}{term:roofline}\n \\Cref{fig:term:roofline} presents the Roofline model for one core of a\n \\sandybridge. This processor has a single-core peak performance of\n \\SI{20.8}{\\giga\\flops\\per\\second} (\\turboboost disabled), and we use the\n measured single-core peak bandwidth of \\SI{16.25}{\\gibi\\byte\\per\\second}\n (\\cref{ex:term:peakbw}). 
Together these two factors impose the performance\n limit~(\\ref*{plt:term:roofline:peak})\n \\[\n \\min(\\SI{16.25}{\\gibi\\byte\\per\\second} \\times \\text{arithmetic\n intensity}, \\SI{20.8}{\\giga\\flops\\per\\second}) \\enspace.\n \\]\n\n \\Cref{fig:term:roofline} also contains the measured performance of\n representative \\blasl1, 2, and~3 operations, whose arithmetic intensity was\n determined in \\cref{ex:term:ai}.\n \\begin{itemize}\n \\item The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x\n \\matvecsep \\dv y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^n$\n (\\ref*{plt:term:roofline:ddot}) has an arithmetic intensity of\n \\SIvar{\\frac18}{\\flops\\per\\Byte}, making it clearly bandwidth-bound\n below the left part of the ``roofline''. The attained\n (bandwidth-bound) efficiency, which is given by the ratio of the\n measured performance~(\\ref*{plt:term:roofline:ddot}) to the\n attainable peak performance~(\\ref*{plt:term:roofline:peak}), is\n quite high at~\\SI{87.93}\\percent.\n\n \\item The matrix-vector multiplication $\\dv y \\coloneqq \\dm A \\matvecsep\n \\dv x + \\dm y$ (\\dgemv) with $\\dm A \\in \\R^{n \\times n}$ and $\\dv x,\n \\dv y \\in \\R^n$ (\\ref*{plt:term:roofline:dgemv}) has an arithmetic\n intensity of $\\approx \\SIvar{\\frac14}{\\flops\\per\\Byte}$, making it\n also bandwidth-bound. The (bandwidth-bound) efficiency\n (\\ref*{plt:term:roofline:dgemv} divided by\n \\ref*{plt:term:roofline:peak}) is between~\\SI{45.32}{\\percent} (for\n $n = 100$) and \\SI{76.66}{\\percent} (for~$n = 2000$).\n\n \\item The matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep\n \\dm B + \\dm C$ (\\dgemm[NN]) with $\\dm A, \\dm B, \\dm C \\in \\R^{n\n \\times n}$ (\\ref*{plt:term:roofline:dgemm}) has a higher arithmetic\n intensity of \\SIvar{\\frac n{16}}{\\flops\\per\\Byte}, which makes it\n theoretically compute-bound on our system for~$n \\geq 21$. 
In the\n memory-bound domain it reaches its peak (memory-bound) efficiency\n (\\ref*{plt:term:roofline:dgemm} divided\n by~\\ref*{plt:term:roofline:peak}) of \\SI{50.15}{\\percent} at~$n =\n 20$. Within the compute-bound domain, its (compute-bound)\n efficiency grows towards \\SI{74.32}{\\percent} for problem\n size~$n = 100$. Beyond this size the efficiency keeps\n growing and converges to its peak of \\SI{93.70}{\\percent} for\n matrices of size~$n = 2000$.\n \\end{itemize}\n\\end{example}\n\n\n\n\n\\section{Workload}\n \\label{sec:term:workload}\n \\input{appterm\/workload}\n\n \\section{Runtime}\n \\label{sec:term:time}\n \\input{appterm\/time}\n\n \\section{Performance and Attained Bandwidth}\n \\label{sec:term:perf}\n \\input{appterm\/perf}\n\n \\section{Hardware Constraints}\n \\label{sec:term:hw}\n \\input{appterm\/hardware}\n\n \\section{Efficiency}\n \\label{sec:term:eff}\n \\input{appterm\/eff}\n\n \\section{Other Metrics}\n \\label{sec:term:other}\n \\input{appterm\/othermetrics}\n}\n\n\n\n\n\\subsection{Floating-Point Operations}\n\\label{sec:term:flops}\n\nMost scientific computations, as complex as they may be, perform their work\nthrough a small set of elementary arithmetic operations on floating-point\nrepresentations of real numbers, such as scalar additions or\nmultiplications\\footnote{%\n Exceptions that work on integer data or other structures include graph\n algorithms and discrete optimization.\n}---these are the so-called \\definition[\\flops: floating-point\noperations\\\\single- and double-precision]{floating-point operations}\n({\\em\\flops}).\\footnote{%\n Not to be confused with floating-point operations {\\em per second}\n (\\si{\\flops\\per\\second}).\n}\n\nContemporary hardware offers two floating-point precisions standardized in\nIEEE~754~\\cite{ieee754}: {\\em single-precision} and {\\em double-precision}.\nThey differ in the range of representable numbers, their representation\naccuracy, and their implementation in 
hardware. While we distinguish between\nsingle-precision \\flops and double-precision \\flops, throughout this work we are\nmostly concerned with double-precision computations. Hence, ``\\flops''\nwithout further specification refers to double-precision floating-point operations,\nand \\R is used to denote double-precision numbers.\n\nAs commonly practiced in dense linear algebra, we assume that the multiplication\nof two $n \\times n$ matrices requires \\SIvar{2 n^3}\\flops{}---it has an\nasymptotic \\definition[matrix-matrix multiplication: $O(n^3)$]{complexity} of\n$O(n^3)$. While algorithms with lower asymptotic complexities (such as the {\\em\nStrassen algorithm} with a complexity of $O(n^{2.807})$~\\cite{strassen} or the\n{\\em Coppersmith-Winograd algorithm} with a complexity of\n$O(n^{2.376})$~\\cite{coppersmith}) were already known in the 1970s, due to\nconsiderably higher constant factors they found little to no application in\nhigh-performance computing until recently~\\cite{blisstrassen}.\n\nThe \\flop-count of most dense linear algebra operations such as the matrix-matrix\nmultiplication is \\definition[data-independence]{data-independent}, i.e., the\noperand entries do not affect what arithmetic operations are\nperformed.\\footnote{%\n Exceptions may be caused by corrupted input, such as \\code{NaN}s, or\n floating-point exceptions, such as division by~0 or under-\/overflows.\n} In particular, this means that all multiplications with 0's are explicitly\nperformed no matter how sparse an operand is (i.e., how few non-zero entries\nit has). 
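This data-independence is easy to see in a textbook triple-loop matrix-matrix multiplication, which performs one multiplication and one addition per innermost iteration---exactly $2n^3$ flops in total---regardless of the entries. An illustrative sketch (not a tuned implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Naive column-major C := A*B + C for n-by-n matrices. Returns the number
 * of floating-point operations performed: one multiplication and one
 * addition per innermost iteration, i.e. exactly 2*n^3, independent of
 * the matrix entries (multiplications with zeros are performed as well). */
long long naive_dgemm(size_t n, const double *A, const double *B, double *C)
{
    long long flops = 0;
    for (size_t j = 0; j < n; ++j)
        for (size_t p = 0; p < n; ++p)
            for (size_t i = 0; i < n; ++i) {
                C[i + j * n] += A[i + p * n] * B[p + j * n];
                flops += 2;
            }
    return flops;
}
```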
Notable exceptions to the data-independence are numerical\neigensolvers, whose \\flop-counts depend on the eigenspectrum of the input matrix;\nhowever, we do not study eigensolvers in further detail in this work.\n\nAssuming the cubic complexity of the matrix-matrix multiplication, the\ndata-independence allows us to compute the \\definition[cost = minimal\nFLOP-count]{minimal \\flop-count}---also referred to as {\\em cost}---for most\noperations solely based on their operands' sizes.\n\n\\begin{example}{Minimal \\flop-counts}{term:flops}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv\n y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^n$ costs \\SIvar{2 n}\\flops: one\n multiplication and one addition per vector entry.\n\n The solution of a triangular linear system with multiple right-hand-sides\n $\\dm[width=.4]B \\coloneqq \\dm[lower, inv]A \\dm[width=.4]B$ (\\dtrsm) with\n $\\dm[lower]A\\lowerpostsep \\in \\R^{n \\times n}$ and $\\dm[width=.4]B \\in \\R^{n\n \\times m}$ requires \\SIvar{n^2 m}\\flops.\n\n The Cholesky decomposition of a symmetric positive definite (SPD) matrix\n $\\dm[lower]L \\dm[upper, ']L \\coloneqq \\dm A$ (\\dpotrf) with $\\dm A \\in \\R^{n\n \\times n}$ costs\n \\[\n \\SIvar{\\frac16 n (n + 1) (2 n + 1)}\\flops\n \\approx \\SIvar{\\frac13 n^3}\\flops \\enspace.\n \\]\n\\end{example}\n\nNote that an operation's minimal \\flop-count only provides a lower bound for\nroutines implementing it; reasons for exceeding this bound range from technical\nlimitations to cache-aware data movement patterns and algorithmic schemes that\nperform extra \\flops to use faster compute kernels.\n\n\n\\subsection{Data Volume and Movement}\n\\label{sec:term:datamovement}\n\nThe largest portion of a scientific computation's memory footprint is typically\noccupied by its numerical data consisting of floating-point numbers. 
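The per-number storage sizes stated next are not mandated by the C standard in general, but they hold on every IEEE-754 platform considered in this work and can be verified at compile time; C's `_Complex` types also match the two-consecutive-reals layout used by the BLAS:

```c
#include <complex.h>

/* Storage sizes assumed throughout this work (IEEE-754 platforms). */
_Static_assert(sizeof(float) == 4, "single precision: 4 bytes");
_Static_assert(sizeof(double) == 8, "double precision: 8 bytes");
/* Complex numbers are two consecutive reals, hence twice the size. */
_Static_assert(sizeof(float _Complex) == 2 * sizeof(float), "complex single");
_Static_assert(sizeof(double _Complex) == 2 * sizeof(double), "complex double");
```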
A real\nnumber in single- and double-precision requires, respectively, 4 and~\\SI8\\bytes,\nwhereas complex numbers are represented as two consecutive real numbers\nand thus require twice the space. Since throughout this work we mostly use\ndouble-precision numbers---conventionally called ``\\definition[$\\SI1\\double =\n\\SI8\\bytes$]{doubles}''---we can proceed with the assumption that each number\ntakes up \\SI8\\bytes.\n\nIn dense linear algebra, the \\definition[data volume in \\bytes]{data volume}\n(in~\\bytes) involved in a computation is determined almost exclusively by the\ninvolved matrix operands. For instance, a square matrix of size $1000 \\times\n1000$ consists of $\\SI{e6}\\doubles = \\SI{8e6}\\bytes \\approx\n\\SI{7.63}{\\mebi\\byte}$;\\footnote{%\n We use the 1024-based binary prefixes for data volumes: $\\SI{1024}\\bytes =\n \\SI1{\\kibi\\byte}$ (``kibibyte''), $\\SI{1024}{\\kibi\\byte} = \\SI1{\\mebi\\byte}$\n (``mebibyte''), and $\\SI{1024}{\\mebi\\byte} = \\SI1{\\gibi\\byte}$\n (``gibibyte'').\n} vector and scalar operands in comparison take up little space: A vector of\nsize~1000 requires $\\SI{8000}\\bytes = \\SI{7.81}{\\kibi\\byte}$, and a scalar fits\nin just \\SI8\\bytes.\n\nWhile a computation's data volume describes how much data is involved in an\noperation, it says nothing about how often it is accessed. For this purpose we\nintroduce the concept of \\definition{data movement} that quantifies how much\ndata is read from or written to memory. 
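For the operations used as running examples below, these counts reduce to simple closed forms. A sketch (hypothetical helpers, assuming 8-byte doubles; an operand that is both read and updated is counted twice, as formalized next):

```c
#include <assert.h>

#define BYTES_PER_DOUBLE 8LL

/* Minimal data movement (bytes) of the inner product x'y: both vectors
 * of length n are read exactly once; the scalar result is negligible. */
long long ddot_min_movement(long long n)
{
    return 2 * n * BYTES_PER_DOUBLE;
}

/* Minimal data movement (bytes) of C := A*B + C with n-by-n operands:
 * A and B are read once; C is both read and written, so it counts twice. */
long long dgemm_min_movement(long long n)
{
    return (1 + 1 + 2) * n * n * BYTES_PER_DOUBLE;
}
```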
A computation's data movement is\ncommonly higher than its data volume, because (parts of) the data are accessed\nmultiple times.\n\nWhile the actual data movement of any dense linear algebra operation is highly\nimplementation dependent, we can easily derive the \\definition{minimal data\nmovement} from the operation's mathematical formulation by summing the size of\nall input and output operands, counting the operands that are both input and\noutput twice.\n\n\\begin{example}{Data volume and movement}{term:datamov}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv\n y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^n$ involves a data volume of $\\SIvar{2\n n}\\doubles = \\SIvar{16 n}\\bytes$ (ignoring the scalar $\\alpha$); since both\n \\dv x and \\dv y need only be read once, the data movement is also \\SIvar{16\n n}\\bytes.\n\n The matrix-matrix product $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B + \\dm C$\n (\\dgemm[NN]) with $\\dm A,\\allowbreak \\dm B,\\allowbreak \\dm C \\in \\R^{n\n \\times n}$ involves a data volume of $\\SIvar{3 n^2}\\doubles = \\SIvar{24\n n^2}\\bytes$; however, since $\\dm C$ is updated, the minimal data movement is\n $\\SIvar{4 n^2}\\doubles = \\SIvar{32 n^2}\\bytes$.\n\n The Cholesky decomposition $\\dm[lower]L \\dm[upper, ']L \\coloneqq \\dm A$\n (\\dpotrf) with $\\dm A \\in \\R^{n \\times n}$ uses only the lower-triangular\n part of the symmetric matrix \\dm A,\\footnotemark{} and \\dm A is decomposed\n in place, i.e., it is overwritten by \\dm[lower]L\\lowerpostsep upon\n completion. 
Hence the data volume is $\\SIvar{\\frac12 n (n + 1)}\\doubles\n \\approx \\SIvar{4 n^2}\\bytes$, while the minimal data movement is at least\n $\\SIvar{2 \\cdot \\frac12 n (n + 1)}\\doubles \\approx \\SIvar{8 n^2}\\bytes$.\n\\end{example}\n\\footnotetext{%\n Space for the whole matrix is allocated, but the strictly upper-triangular\n part is not accessed.\n}\n\nNote that the minimal data movement is a strict lower bound when none of the\ninvolved data is in any of the processor's caches. Furthermore, depending on\nthe operation and the cache sizes, it may not be attainable in implementations.\n\n\n\\subsection{Arithmetic Intensity}\n\\label{sec:term:ai}\n\nDividing an operation's minimal \\flop-count by its minimal data movement yields\nits \\definition{arithmetic intensity}:\n\\begin{equation}\n \\label{eq:term:ai}\n \\text{arithmetic intensity}\n \\defeqq \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n \\enspace.\n\\end{equation}\nA low arithmetic intensity means that few operations are performed per memory\naccess, thus making the data movement a likely bottleneck; a high arithmetic\nintensity on the other hand indicates that a lot of work is performed per data\nelement, thus making the floating-point computations the potential bottleneck.\nArithmetic intensity divides dense linear algebra operations into two groups:\nWhile for \\blasl1 (vector-vector) and~2 (matrix-vector) operations the intensity\nis quite small and independent of the problem size, it is considerably larger\nfor \\blasl3 (matrix-matrix) and dense \\lapack-level operations, for which it\nincreases linearly with the problem size.\n\n\\begin{example}{Arithmetic intensity}{term:ai}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv y$\n (\\ddot) with $\\dv x, \\dv y \\in \\R^n$ is a \\blasl1 operation that performs \\SIvar{2\n n}{\\flops} over \\SIvar{2 n}{\\doubles} of data movement. 
Hence its\n arithmetic intensity is\n \\[\n \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n = \\frac{\\SIvar{2 n}\\flops}{\\SIvar{2 n}\\doubles}\n = \\SIvar{\\frac18}{\\flops\\per\\Byte} \\enspace.\n \\]\n\n The matrix-vector multiplication $\\dv y \\coloneqq \\dm A \\matvecsep \\dv x +\n \\dm y$ (\\dgemv[N]) with $\\dm A \\in \\R^{n \\times n}$ and $\\dv x, \\dv y \\in\n \\R^n$ is a \\blasl2 operation that performs \\SIvar{2 n^2}{\\flops} over\n \\SIvar{n^2 + 3 n}{\\doubles} of data movement ($\\dv y$ is both read and\n written). Therefore, its arithmetic intensity is\n \\[\n \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n = \\frac{\\SIvar{2 n^2}\\flops}{\\SIvar{n^2 + 3 n}\\doubles}\n \\approx \\SIvar{\\frac14}{\\flops\\per\\Byte} \\enspace.\n \\]\n\n The matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B +\n \\dm C$ (\\dgemm[NN]) with $\\dm A,\\allowbreak \\dm B,\\allowbreak \\dm C \\in\n \\R^{n \\times n}$ is a \\blasl3 operation that performs \\SIvar{2 n^3}{\\flops} over\n \\SIvar{4 n^2}{\\doubles} of data movement ($\\dm C$ is both read and written).\n Hence, its arithmetic intensity\n \\[\n \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n = \\frac{\\SIvar{2 n^3}\\flops}{\\SIvar{4 n^2}\\doubles}\n = \\SIvar{\\frac n{16}}{\\flops\\per\\Byte}\n \\]\n grows linearly with the problem size~$n$ and already exceeds the intensity\n of \\dgemv for matrices as small as $5 \\times 5$.\n\\end{example}\n\nWe revisit the arithmetic intensity in \\cref{sec:term:eff}, where it determines\nwhether a computation's performance is limited by the processor's memory\nsubsystem or its floating-point units.\n\n\\section*{About This Document}\n\n\\def\\gettexliveversion#1, #2 (#3)#4\\relax{#2}\n\\newcommand\\pdftexver{\\expandafter\\gettexliveversion\\pdftexbanner\\relax\\xspace}\n\nThis document was written in \\href{https:\/\/www.latex-project.org\/}{\\LaTeXe} and\ntypeset with 
\\href{http:\/\/www.tug.org\/applications\/pdftex\/}{pdfTeX} \\pdftexver\non \\today.\n\nIt relies on the following packages:\n\\href{http:\/\/ctan.org\/pkg\/microtype}{\\code{microtype}} for micro-typography;\n\\href{http:\/\/ctan.org\/pkg\/listings}{\\code{listings}} and\n\\href{http:\/\/ctan.org\/pkg\/tcolorbox}{\\code{tcolorbox}} for algorithms, listings,\nand examples; \\href{http:\/\/ctan.org\/pkg\/pgf}{\\code{tikz}} and\n\\href{http:\/\/ctan.org\/pkg\/pgfplots}{\\code{pgfplots}} for graphics and plots;\n\\href{http:\/\/ctan.org\/pkg\/drawmatrix}{\\code{drawmatrix}} for matrix\nvisualizations; \\href{http:\/\/ctan.org\/pkg\/cleveref}{\\code{cleveref}} and\n\\href{http:\/\/ctan.org\/pkg\/hyperref}{\\code{hyperref}} for references and\nhyperlinks; and \\href{http:\/\/ctan.org\/pkg\/biblatex}{\\code{biblatex}} for the\nbibliography.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Timing Kernels in \\lapack's \\texorpdfstring\\dgeqrf{dgeqrf}}\n \\label{sec:cache:qr:alg}\n \\input{cache\/alg}\n\n \\subsection{Cache-Aware Timings}\n \\label{sec:cache:qr:timings}\n \\input{cache\/timings}\n\n \\subsection{Modeling the Cache}\n \\label{sec:cache:qr:cache}\n \\input{cache\/cache}\n\n \\subsection{Varying the Setup}\n \\label{sec:cache:qr:res}\n \\input{cache\/qrresults}\n\n \\section{Application to Other Algorithms}\n \\label{sec:cache:algs}\n \\input{cache\/otheralgs}\n\n \\section{Feasibility on Modern Hardware}\n \\label{sec:cache:new}\n \\input{cache\/new}\n\n \\section{Summary}\n \\label{sec:cache:conclusion}\n \\input{cache\/conclusion}\n}\n\n\\subsection{In- and Out-of-Cache Timings}\n\\label{sec:cache:icoc}\n\n\\input{cache\/figures\/ooc}\n\nOut-of-cache timings are hardware independent and, just as on the\n\\harpertownshort, serve as an upper bound on the \\sandybridgeshort and\n\\haswellshort. 
This is illustrated in \\cref{fig:cache:ooc} for the inversion of\na lower-triangular matrix $\\dm[lower]A \\in \\R^{3200 \\times 3200}$ with\n\\dtrtri[LN] (\\cref{alg:dtrtriLN2}) and block size~$b = 64$ on the \\haswellshort,\nand the QR~decomposition of $\\dm A \\in \\R^{2400 \\times 2400}$ with \\dgeqrf\n(\\cref{alg:dgeqrf}) and $b = 32$ on the \\sandybridgeshort{}---the chosen\nmatrices comprise around \\SI{40}{\\mebi\\byte} and thus exceed the\n\\sandybridgeshort's and \\haswellshort's last-level cache~(L3) of, respectively,\n\\SIlist{20;30}{\\mebi\\byte}. The out-of-cache timings indeed consistently\noverestimate the in-algorithm timings---by up to~\\SI{347}{\\percent} for the last\ncall to \\refdtrmmRUNN in the QR~decomposition \\dgeqrf on the \\sandybridgeshort\n(\\cref{fig:cache:ooc:dgeqrf:sandybridge} is clipped at~\\SI{175}\\percent). As\nsuch, these measurements serve well as an upper bound on the in-algorithm\ntimings.\n\n\\input{cache\/figures\/ic}\n\nFor the same scenarios, \\cref{fig:cache:ic} presents the error of our previous\nin-cache setup with respect to the in-algorithm timings: While we expect\nour setup to yield faster kernel executions than the in-algorithm timings, on\nthe \\sandybridge (with \\turboboost disabled) the in-cache timings are still up to~\\SI{.51}{\\percent} slower than\nthe in-algorithm timings (not accounting for the small unblocked \\dgeqr2); on\nthe \\haswell (with \\turboboost enabled), the relative errors for \\dtrtri[LN] and\n\\dgeqrf reach, respectively, \\SIlist{1.67;3.44}\\percent.\n\n\\input{cache\/figures\/ictb}\n\nFurther investigation reveals that the processor's \\intel{} \\turboboost is a source of\ncomplication for our measurements: As \\cref{fig:cache:ictb} shows, enabling\n\\turboboost on the \\sandybridge leads to overestimations of the \\dtrtri[LN]'s\nand \\dgeqrf's most compute-intensive operations (i.e., the \\refdtrmmLLNN and the\ntwo \\dgemm{}s (\\ref*{plt:dgemmTN}, \\ref*{plt:dgemmNT})), by up 
to, respectively,\n\\SIlist{3.20;2.79}\\percent.\n\nWhile \\turboboost increases the overestimation of individual kernels, this\nphenomenon's origin lies in the processor's cache hierarchy: Within an\nalgorithm, each kernel is invoked with a distinct cache precondition, i.e., with\nonly portions of its operands in the processor's caches. Since our\nalgorithm-independent measurements clearly do not match such preconditions, we\nattempted to construct conditions in which the kernel executes at its absolute\npeak performance with different cache setups:\n\\begin{itemize}\n \\item First, we used simple repeated execution of the kernel without any\n modification of the cache in between, as before.\n\n \\item Next, we accessed the kernel operands in various\n orders prior to the invocation. E.g., for a \\dgemm $\\dm[width=.25]C\n \\coloneqq \\dm[width=.8]A \\matvecsep \\dm[width=.25, height=.8]B +\n \\dm[width=.25]C$, we attempted all permutations of access orders, such\n as \\dm[width=.8]A--\\dm[width=.25, height=.8]B--\\dm[width=.25]C and\n \\dm[width=.25]C--\\dm[width=.8]A--\\dm[width=.25, height=.8]B.\n\n \\item Finally, we refined the access granularity and attempted to bring\n operands into cache not as a whole but only partially: For a kernel\n with one operand larger than the cache and the other operand(s) only a\n fraction of that size (e.g., the \\dgemm[TN] (\\ref*{plt:dgemmTN}) in\n \\dgeqrf: $\\dm[width=.25]C \\coloneqq \\dm[width=.8]A \\matvecsep \\dm\n [width=.25, height=.8]B + \\dm[width=.25]C$ where \\dm[width=.25]B and\n \\dm[width=.25]C are of width~$b$ and close to the problem size~$n$ in\n height), we bring the entire small operand(s) into cache but only\n portions of the large one.\n\n \\input{cache\/figures\/acc}\n\n \\Cref{fig:cache:acc} presents which operand portions we chose to load\n into the cache. 
These choices are based on the assumption that any\n kernel implementation likely traverses the input matrix somehow\n from the top-left \\tsearrow to the bottom-right.\\footnote{%\n Exceptions are, e.g., \\dtrsm[RLNN] ($B \\coloneqq B A^{-1}$) and\n \\dtrsm[LUNN] ($B \\coloneqq A^{-1} B$), which must traverse the\n triangular~$A$ from the bottom-right to the top-left---in these\n cases the accessed matrix portions are mirrored accordingly.\n } Therefore, we bring a column panel of the operand, a row panel, a\n square block, or any combination of these into the processor's caches.\n While doing so, we varied the sizes~$s_1$, $s_2$, and~$s_3$ of the\n accessed operand portions.\n\\end{itemize}\n\nWhile in some scenarios changing the in-cache setup for kernel invocations\nreduced the runtime overestimation, the effects were not consistent across\ndifferent algorithms, kernels, processors, and \\blas implementations.\nAltogether, it was not possible to determine general, algorithm-independent\nin-cache setups that yield a clear lower bound on the in-algorithm timings.\n\n\n\\subsection{Algorithm-Aware Timings}\n\\label{sec:cache:algaware}\n\nSince our above attempts at algorithm-independent in-cache timings did not yield\nthe required lower bound on in-algorithm timings, the only alternative is to\ntailor the timing setups to individual algorithms. We might, for instance, set up\neach kernel timing with several preceding kernel invocations from within the\nalgorithms. 
The \\definition{algorithm-aware timings} obtained in this way yield accurate\nestimates for the in-algorithm timings and rid us of the need to combine in-\nand out-of-cache estimates.\n\n\\input{cache\/figures\/exact}\n\n\\begin{example}{Algorithm-aware timings}{cache:algaware}\n \\Cref{fig:cache:exact} presents the accuracy of algorithm-aware timings as\n estimates for in-algorithm timings for the inversion of a lower-triangular\n matrix (\\dtrtri[LN]) and the QR~decomposition (\\dgeqrf) on a \\sandybridge\n (with \\turboboost enabled) using single-threaded \\openblas. The\n algorithm-aware timings were created by preceding each measured kernel\n invocation with the calls from the corresponding blocked algorithm that were\n executed since that kernel's last invocation.\n\n \\Cref{fig:cache:exact:dtrtri} shows that for \\dtrtri[LN] the algorithm-aware\n timings are with few exceptions within~\\SI1{\\percent} of the in-algorithm\n timings with an average absolute relative error (ARE) of~\\SI{.54}\\percent.\n As seen in \\Cref{fig:cache:exact:dgeqrf}, for \\dgeqrf the relative error\n is overall larger yet similarly spread around~\\SI0{\\percent} with an average\n ARE of~\\SI{.84}\\percent.\n\\end{example}\n\nWhile this approach yields accurate estimates, when the kernel invocations for\neach algorithm execution are timed separately and each measurement is preceded\nwith a setup of one or more kernels, the timing procedure takes effectively\nlonger than executing and measuring the target algorithm repeatedly. 
As a\nresult, this method is at the same time highly accurate and impractical, which\nis why we do not further pursue it.\n\n\\subsection{Cholesky Decomposition: \\texorpdfstring{\\dpotrf[U]}{dpotrf}}\n\\label{sec:cache:dpotrfU}\n\n\\input{cache\/figures\/cholUalg}\n\nFirst, we consider \\lapack's upper triangular Cholesky decomposition \\dpotrf[U]\n\\[\n \\dm[lower, ']U \\dm[upper]U \\coloneqq \\dm A\n\\]\nof a symmetric positive definite $\\dm A \\in \\R^{n \\times n}$ in upper triangular\nstorage. \\Cref{alg:dpotrfU} presents the blocked algorithm employed in this\nroutine, which is the transpose of \\dpotrf's algorithm for the lower-triangular\ncase (\\cref{alg:chol2} on \\cpageref{algs:chol}). As the algorithm traverses\n\\dm A, both the size and shape of~\\dm[mat02, width=1.25]{A_{02}} (the largest\noperand) change noticeably: It starts as a row panel, then grows to a square\nmatrix, and finally shrinks to a column panel. \\dm[mat02, width=1.25]{A_{02}}'s\nsize determines the workload performed by the algorithm's large \\refdgemmTN,\nwhich is reflected in the in-algorithm timings in\n\\cref{fig:cache:dpotrfU:instr}.\n\n\\input{cache\/figures\/cholres}\n\nIn our experiments, we execute \\dpotrf[U] on a \\harpertown with single-threaded\n\\openblas, $\\dm A \\in \\R^{2400 \\times 2400}$,\\footnote{%\n For $n = 2400$, the upper-triangular portion of~$A$ takes up about\n \\SI{12}{\\mebi\\byte}---twice the size of the L2~cache.\n} and block size $b = 32$. \\Cref{fig:cache:dpotrf:res} presents the relative\nperformance difference with respect to in-algorithm timings for both repeated\nexecution timings and our final estimates. Our estimates yield improvements for\nthe \\refdsyrkUT and \\refdpotfU involving large matrices in the middle of \\dm A's\ntraversal. At the beginning of the traversal, the estimates are generally too\npessimistic because some matrices are (partially) brought into cache by\nprefetching, which is not accounted for in our estimates. 
On average, the\nrelative error is reduced from~\\SIrange{11.11}{7.87}{\\percent}, i.e., by a\nfactor of~1.41.\n\nHowever, note that the improvement is only visible in the averaged per-kernel\nrelative error: Since the runtime of the large \\dgemm[TN]~(\\ref*{plt:dgemmTN}) is\noverestimated, the accumulated runtime estimate for the entire algorithm\nactually becomes less accurate.\n\n\n\\subsection{Inversion of a Triangular Matrix:\n\\texorpdfstring{\\dtrtri[LN]}{dtrtri}}\n\\label{sec:cache:dtrtriLN}\n\n\\input{cache\/figures\/trinvalg}\n\nWe now take a closer look at \\lapack's inversion of a lower-triangular matrix\n\\dtrtri[LN]\n\\[\n \\dm[lower]A \\coloneqq \\dm[lower, inv]A\n\\]\nwith $\\dm A \\in \\R^{n \\times n}$, whose blocked algorithm is presented in\n\\cref{alg:dtrtriLN2}. In contrast to the previous operations, this algorithm\ntraverses \\dm A \\tnwarrow from the bottom-right to the top-left, thereby\noperating on sub-matrices of increasing size. \\Cref{fig:cache:dtrtriLN:instr}\nshows the in-algorithm timings for the algorithm, which are dominated by\n\\refdtrmmLLNN.\n\n\\input{cache\/figures\/trinvres}\n\nWe execute \\dtrtri[LN] on a \\harpertown with single-threaded \\openblas, $\\dm A\n\\in \\R^{2400 \\times 2400}$, and block size $b = 32$.\n\\Cref{fig:cache:dtrtriLN:res} compares the performance measurements from\nrepeated execution and our final estimates to in-algorithm timings: The\nimprovements of our estimates are most significant in \\refdtrmmLLNN (which\nperforms the most computation) and \\refdtrtiLN; the error is reduced from an\naverage of~\\SIrange{6.70}{3.37}{\\percent}---a total improvement of~$1.99\\times$.\n\n\n\\subsection{Summary}\n\nWe have seen that, on a \\harpertown, the accuracy of our runtime estimates for\nkernels within blocked algorithms is increased by taking the state of the\nL2~cache throughout the algorithm execution into consideration. 
For different\nalgorithms, problem sizes, block sizes, \\blas implementations, and thread\ncounts, we have seen improvements between~$1.15\\times$ (with all 4~cores)\nand~$2.99\\times$.\n\n\n\n\n\n\n\n\\chapter{Conclusion}\n\\label{ch:conclusion}\n\nThis dissertation set out to predict the performance of dense linear algebra\nalgorithms. It targeted two types of algorithms that require different\nprediction approaches: blocked algorithms and tensor contractions.\n\nFor blocked algorithms, we accomplished accurate performance predictions through\nautomatically generated performance models for compute kernels. Our\npredictions both reliably identify the fastest blocked algorithm from\npotentially large numbers of available alternatives, and select a block size\nfor near-optimal algorithm performance. Our approach's main advantage is its\nseparation of the model generation and the performance prediction: While the\ngeneration may take several hours, thousands of algorithm executions are\nafterwards predicted within seconds. A discussed downside to the approach,\nhowever, is that it does not account for algorithm-dependent caching effects.\n\nFor tensor contractions, we established performance predictions that identify\nthe fastest among potentially hundreds of alternative \\blas-based contraction\nalgorithms. By using cache-aware micro-benchmarks instead of our performance\nmodels, our solution is highly accurate even for contractions with severely\nskewed dimensions. 
Furthermore, since these micro-benchmarks only execute a\ntiny fraction of each tensor contraction, they provide performance predictions\norders of magnitude faster than empirical measurements.\n\nTogether, our model generation framework and micro-benchmarks form a solid\nfoundation for accurate and fast performance prediction for dense linear algebra\nalgorithms.\n\n\n\\section{Outlook}\nThe techniques presented in this dissertation offer numerous opportunities for\napplications and extensions:\n\\begin{itemize}\n    \\item Our methods can be applied to predict the performance of various\n        types of algorithms and operations, such as recursive algorithms and\n        algorithms-by-blocks.\n\n    \\item For dense eigenvalue solvers, our models can predict the two most\n        computationally intensive stages: The reduction to tridiagonal form and\n        the back-transformation. By additionally estimating the data-dependent\n        performance of tridiagonal eigensolvers, one can predict the solution of\n        complete eigenproblems.\n\n    \\item Beyond individual operations, our predictions can be applied to\n        composite operations and algorithms, such as matrix chain\n        multiplications or least squares solvers.\n\n    \\item Our models were designed to provide estimates for configurable yet\n        limited ranges of problem sizes. For extrapolations to larger problems\n        they should be revised to ensure that local performance phenomena do not\n        distort faraway estimates.\n\n    \\item Computations on distributed memory systems, accelerators, and graphics\n        cards can be predicted by combining our techniques with models for data\n        movement and communication.\n\\end{itemize}\n\n\\chapter*{Abstract}\n\n\\input{abstract\/abstract}\n\n\n\\chapter*{Acknowledgments}\n\nFirst and foremost, I would like to express my sincere gratitude to my advisor\nPaolo Bientinesi. While guiding me through my studies, he always embraced my\nown ideas and helped me shape and develop them in countless discussions. 
While\nhe granted me freedom in many aspects of my work, he always had time for\nanything between a quick exchange of thoughts and extensive brainstorming\nsessions. Beyond our professional relationship, we enjoyed twisty puzzles and\nboard games in breaks from work, long game nights, and annual trips to SPIEL. I\nconsider myself lucky to have spent my time as a doctoral student with him and\nhis research group.\n\nThe HPAC group proved to be much more than a collection of researchers working\non remotely associated projects; my colleagues were not only a source of\nincredibly valuable discussions and feedback regarding my work; we also indulged\nin various unrelated debates and exchanges over lunch and on many other\noccasions. My thanks go to Edoardo Di~Napoli, Diego Fabregat-Traver, Paul\nSpringer, Jan Winkelmann, Henrik Barthels, Markus H\u00f6hnerbach, Sebastian\nAchilles, William McDoniel, and Caterina Fenu, as well as our former group\nmembers Matthias Petschow, Roman Iakymchuk, Daniel Tameling, and Lucas Beyer.\n\nI am grateful for financial support from the {\\namestyle Deutsche\nForschungs\\-gemeinschaft} (DFG) through grant GSC~111 (the graduate school\nAICES) and the {\\namestyle Deutsche Telekomstiftung}. Their programs not only\nfunded my work, but opened further opportunities in the form of seminars and\nworkshops, and connected me with like-minded students from various disciplines.\n\nThe {\\namestyle\\rwth IT Center} provided and maintained an extremely reliable\ninfrastructure central to my work: the {\\namestyle \\rwth Compute Cluster}. I\nthank its staff not only for ensuring smooth operations but also for their\ncompetent and detailed responses to my many inquiries and requests regarding our\ninstitute's cluster partition.\n\nThe AICES service team did their best to shield me from the bureaucracy of\ncontracts, stipends, and reimbursements. 
I am grateful they allowed me to focus\nsolely on my research.\n\nEven more important than a gratifying work environment is forgetting about it\nevery once in a while. My friends played a bigger role in this effort than\nprobably most of them know, whether we were simply hanging out or playing\ngames, going swimming, climbing, or playing badminton, or teaching swimming\nand working as lifeguards. You are too many to enumerate, but you know who you\nare.\n\nFinally, but most importantly, none of this would have been possible without the\nendless and uncompromising support of my parents. You are the reason I grew\ninto the person I am today. Danke!\n\n\\tableofcontents\n\n\\subsection{Motivation: Blocked Algorithms}\n\\label{sec:intro:blocked:algs}\n\n\\definitionp[blocked algorithm]{Blocked algorithms} are commonly used to exploit\nthe performance of optimized \\blasl3 kernels\\footnote{%\n    The {\\namestyle Basic Linear Algebra Subprograms} (BLAS) form the basis for\n    high performance in dense linear algebra. See \\cref{app:term,app:libs}.\n} in other matrix operations, such as decompositions, inversions, and\nreductions. Every blocked algorithm traverses its input matrix (or matrices) in\nsteps of a fixed \\definition{block size}; in each step of this traversal, it\nexposes a set of \\definition[sub-matrices\\\\updates]{sub-matrices} to which it\napplies a series of {\\em updates}. Through these updates, it progresses with\nthe computation and obtains a portion of the operation's result; once the matrix\ntraversal completes, the entire result is computed.\n\n\\input{intro\/figures\/blocked}\n\n\\footnotetextbefore{%\n    \\Cref{app:libs} gives an overview of the \\blas and \\lapack routines used\n    throughout this work. 
When specified, the subscripts indicate the values of\n the flag arguments, which identify the variant of the operation; e.g., in\n \\dpotrf[L] the \\code L corresponds to the argument \\code{uplo} indicating\n a lower-triangular decomposition.\n}\n\\begin{example}{Blocked algorithms for the Cholesky decomposition}{intro:chol}\n \\newcommand\\Azz{\\dm[mat00, lower]{A_{00}}\\xspace}%\n \\newcommand\\Aoz{\\dm[mat10, height=.5]{A_{10}}\\xspace}%\n \\newcommand\\Aoo{\\dm[mat11, size=.5, lower]{A_{11}}\\xspace}%\n \\newcommand\\Atz{\\dm[mat20, height=1.25]{A_{20}}\\xspace}%\n \\Cref{algs:chol} illustrates blocked algorithms for a simple yet\n representative operation: the lower-triangular Cholesky decomposition\n \\[\n \\dm[lower]L \\dm[upper, ']L \\coloneqq \\dm A\n \\]\n of a symmetric positive definite (SPD) matrix $\\dm A \\in \\R^{n \\times n}$ in\n lower-triangular storage (\\lapack: \\dpotrf[L]\\footnotemark). For this\n operation there exist three different blocked algorithms. Each algorithm\n traverses \\dm A diagonally from the top-left to the bottom-right \\tsearrow\n and computes the Cholesky factor~\\dm[lower]L in place. At each step of the\n traversal, the algorithm exposes the sub-matrices shown in\n \\cref{algs:chol:traversal} and makes progress by applying the\n algorithm-dependent updates in \\cref{alg:chol1,alg:chol2,alg:chol3}. Before\n these updates, the sub-matrix~\\Azz, which in the first step is of size $0\n \\times 0$, already contains a portion of the Cholesky factor~\\dm[lower]L;\n after the updates, the sub-matrices~\\Aoz and~\\Aoo also contain their\n portions of~\\dm[lower]L, and in the next step become part of~\\Azz. 
Once the\n    traversal reaches the bottom-right corner (i.e., \\Azz is now of size $n\n    \\times n$), the entire matrix is factorized.\n\\end{example}\n\nBlocked algorithms pose two \\definition[optimization challenges:\\\\alternative\nalgorithms]{optimization challenges}:\n\\begin{itemize}\n    \\item For each operation there typically exist several {\\em alternative\n        algorithms}, which are mathematically equivalent in exact arithmetic;\n        however, even if such algorithms perform the same number of floating\n        point operations, they may differ significantly in performance.\n\n    \\item For each algorithm, the \\definition{block size} influences the number\n        of traversal steps and the sizes and shapes of the exposed sub-matrices,\n        and thus the performance of the kernels applied to them.\n\\end{itemize}\nWhat makes matters more complicated is that the optimal choice depends on\nvarious factors, such as the hardware, the number of threads, the kernel\nimplementations, and the problem size.\n\n\\input{intro\/figures\/chol_vars}\n\n\\footnotetextbefore{%\n    \\Cref{app:hardware} provides an overview of the processors used throughout\n    this work.\n}\n\\begin{example}{Performance of alternative algorithms}{intro:chol:var}\n    \\Cref{fig:intro:chol:vars} shows the performance of the three blocked\n    Cholesky decompositions from \\cref{algs:chol} with block size~$b = 128$ and\n    increasing problem size~$n$ on a 12-core \\haswell\\footnotemark{} with\n    single- and multi-threaded \\openblas.\n\n    In both the single- and multi-threaded scenarios,\n    algorithm~3~(\\ref*{plt:chol3}) is the fastest among the three alternatives\n    for all problem sizes. On a single core and for problem size $n = 4152$, it\n    is \\SIlist{27.40;12.89}{\\percent} faster than, respectively,\n    algorithms~1~(\\ref*{plt:chol1}) and~2~(\\ref*{plt:chol2}), and it reaches up\n    to \\SI{91.01}{\\percent} of the processor's theoretical peak performance (red\n    line \\legendline[very thick, darkred] at the top of the plot). 
On all 12~of\n    the processor's cores, algorithm~3~(\\ref*{plt:chol3}) still reaches an\n    efficiency of~\\SI{69.70}\\percent, and outperforms\n    algorithms~1~(\\ref*{plt:chol1}) and~2~(\\ref*{plt:chol2}) by, respectively,\n    $5.21\\times$ and~$1.92\\times$.\n\n    Although algorithm~3~(\\ref*{plt:chol3}) is clearly the fastest in this and\n    many other scenarios, \\lapack's \\dpotrf[L] implements\n    algorithm~2~(\\ref*{plt:chol2}).\n\n    For other operations, the choice becomes more complicated, since no single\n    algorithm is the fastest for all problem sizes and scenarios. For instance,\n    for the single-threaded inversion of a lower-triangular matrix $\\dm[lower]A\n    \\coloneqq \\dm[lower, inv]A$, two different algorithms are the fastest for\n    small and large matrices, respectively, with the performance differing by up\n    to~\\SI{13}{\\percent} in either direction (\\cref{sec:pred:var:trinv}).\n\\end{example}\n\n\\input{intro\/figures\/chol_b}\n\n\\begin{example}{Influence of the block size on performance}{intro:chol:b}\n    Let us consider the blocked Cholesky decomposition\n    algorithm~3~(\\ref*{plt:chol3} in \\cref{fig:intro:chol:vars}) with fixed\n    problem sizes~$n = 1000$, 2000, 3000, and~4000 and varying block size~$b$.\n    \\Cref{fig:intro:chol:b} presents the performance of these algorithm\n    executions for 1 and 12~threads on the \\haswell using \\openblas:\n    Single-threaded, the optimal block size increases from~$b = 96$ for~$n =\n    1000$ to~$b = 184$ for~$n = 4000$. On 12~cores, on the other hand, the\n    performance is less smooth and the optimal choices for~$b$ are between~56\n    and~112.\n\n    \\Cref{fig:intro:chol:b} demonstrates the importance of selecting the block\n    size dynamically: If we use~$b = 184$, which is optimal for~$n = 4000$ on\n    one core, for~$n = 1000$ on 12~cores we only reach \\SI{77.62}{\\percent} of\n    the algorithm's optimal performance. 
On the other hand, \\lapack's default\n    block size~$b = 64$ (which is close to the optimal~$b = 56$ for~$n = 1000$\n    on 12~cores) would reach \\SI{95.95}{\\percent} of the optimal single-threaded\n    performance for~$n = 4000$.\n\\end{example}\n\n\n\\subsection{Prediction through Performance Models}\n\\label{sec:intro:blocked:pred}\n\nNaturally, both the best algorithm and its optimal block size for a given\nscenario (operation, problem size, hardware, kernel library, multi-threading)\ncan be determined through exhaustive performance measurements; however, this is\nextremely time-consuming and thus often impractical. Instead, we aim to\ndetermine the optimal configuration {\\em without executing} any of the\nalternative algorithms. For this purpose, we use the hierarchical structure of\nblocked algorithms: Their entire computation is performed in a series of calls\nto a few kernel routines; hence, by accurately estimating the runtime of these\nkernels, we can predict an entire algorithm's runtime and performance.\n\nIn order to estimate the kernel runtimes, let us study how these kernels are\nused: In each algorithm execution, the same set of kernels is invoked\nrepeatedly---once for each step of the blocked matrix traversal. Each\ninvocation, however, works on operands of different size depending on the\nprogress of the algorithm's traversal, the input problem size, and the block\nsize. In short, we need to estimate the performance of only a few kernels, yet\nwith potentially wide ranges of operand sizes.\n\nOur solution is \\definition{performance modeling}, as detailed in\n\\cref{ch:model}: Based on a detailed study of how a kernel's arguments (i.e.,\nflags, operand sizes, etc.) affect its performance, we design performance models\nin the form of piecewise multi-variate polynomials. 
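To convey the flavor of this approach, the following sketch fits a runtime model for a gemm-like kernel by least squares and uses it to rank block sizes for a blocked Cholesky traversal without executing it. All numbers are synthetic, and the one-term model is a deliberately tiny stand-in; it is not one of the piecewise multi-variate models developed in \\cref{ch:model}:

```python
import numpy as np

# Toy illustration of measurement-based runtime modeling (all numbers
# are made up): synthesize "benchmark" timings for a gemm-like kernel
# with operand sizes (m, n, k) from an assumed 3 GFLOPS rate plus a
# fixed call overhead of 5 microseconds.
shapes = np.array([(m, n, k) for m in (256, 512, 1024)
                             for n in (256, 512, 1024)
                             for k in (32, 64, 128)], dtype=float)
times = 2.0 * shapes.prod(axis=1) / 3e9 + 5e-6

# Least-squares fit of a deliberately tiny runtime model
#   t(m, n, k) ~ c0 + c1 * m * n * k,
# a one-piece, flop-count-linear cousin of the piecewise models.
X = np.column_stack([np.ones(len(shapes)), shapes.prod(axis=1)])
(c0, c1), *_ = np.linalg.lstsq(X, times, rcond=None)

def predict_kernel(m, n, k):
    """Estimated kernel runtime in seconds."""
    return c0 + c1 * m * n * k

def predict_blocked_cholesky(n, b):
    """Predict a blocked Cholesky run by summing the modeled runtime
    of its dominant trailing update, here collapsed to one kernel of
    shape (n-j-b) x (n-j-b) x b per step of the traversal."""
    return sum(predict_kernel(n - j - b, n - j - b, b)
               for j in range(0, n - b, b))

# Rank candidate block sizes without executing the algorithm once.
# (This overhead-only toy model always favors the largest block; real
# models must also capture the cache effects that favor smaller b.)
candidates = [64, 96, 128, 184, 256]
best = min(candidates, key=lambda b: predict_blocked_cholesky(4000, b))
```

In our actual framework, the kernel models are piecewise and multi-variate, but the prediction principle---summing per-step kernel estimates over the traversal---is the same.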
These models are generated\nautomatically once for each hardware and software setup and subsequently provide\naccurate performance estimates at a tiny fraction of the kernel's runtime.\n\nUsing such estimates, we \\definition[performance prediction]{predict} the {\\em\nperformance} of blocked algorithms, as presented in \\cref{ch:pred}. These fast\npredictions prove to be highly accurate, and allow us to both rank the blocked\nalgorithms for a given operation according to their performance, and find\nnear-optimal values for the algorithmic block sizes.\n\nWhile our models yield accurate performance estimates for individual kernel\nexecutions, they do not capture the performance influence of\n\\definition{caching} between kernels. Prior to the invocation of each compute\nkernel in an algorithm, typically only a portion of its operands is in cache,\nand loading operands from main memory increases the kernel runtime.\n\\Cref{ch:cache} investigates how caching effects can be accounted for in blocked\nalgorithms, and attempts to combine pure in- and out-of-cache estimates into\nmore accurate predictions. 
However, while the results look promising on a rather\nold \\harpertown, the analysis reveals that on modern processors the effect of\ncaching on kernel performance is so complex that accounting for it in\nalgorithm-independent performance models to further improve our prediction\naccuracy is infeasible.\n\n\n\n\n\n\n\n\\chapter{Introduction}\n\\chapterlabel{intro}\n{\n    \\tikzsetexternalprefix{externalized\/intro-}\n\n    \\input{intro\/intro.tex}\n\n    \\section[Performance Modeling for Blocked Algorithms]\n        {Performance Modeling\\newline for Blocked Algorithms}\n    \\label{sec:intro:blocked}\n    \\input{intro\/blocked}\n\n    \\section{Micro-Benchmarks for Tensor Contractions}\n    \\label{sec:intro:tensor}\n    \\input{intro\/tensors}\n\n    \\section{Related Work}\n    \\label{sec:intro:relwork}\n    \\input{intro\/relwork}\n}\n\n\\subsection{Dense Linear Algebra Libraries and Algorithms}\n\\label{sec:relwork:libsalgs}\n\nWe begin with a brief history of the fundamental DLA libraries \\blas and \\lapack\nand prominent implementations in \\cref{sec:relwork:libs}. We then focus on\nblocked algorithms and their tuning opportunities in \\cref{sec:relwork:blocked},\nand finally give an overview of alternative algorithms and libraries for\ndistributed-memory and accelerator hardware in, respectively,\n\\cref{sec:relwork:altalgs,sec:relwork:dist}.\n\n\n\\subsubsection{\\blas and \\lapack}\n\\label{sec:relwork:libs}\n\nThe development of standardized DLA libraries began in~1979 with the inception\nof the {\\namestyle Basic Linear Algebra Subprograms}\n(\\definition{\\blas})~\\cite{blasl1}, a \\fortran interface specification for,\ninitially, various ``Level~1'' scalar and vector operations. It was\nsubsequently extended to kernels for ``Level~2'' matrix-vector~\\cite{blasl2} and\n``Level~3'' matrix-matrix~\\cite{blasl3} operations in, respectively, 1988\nand~1990. 
The aim of the \\blas specification is to enable performance-portable\napplications: DLA codes reach high performance on different hardware by using\narchitecture-specific \\blas implementations. Although computer architectures\nhave evolved dramatically in the last~40 years, this principle of performance\nportability is still at the core of all current DLA libraries.\n\nThe \\blas specification is accompanied by a reference\nimplementation~\\cite{blasweb} that, while fully functional and well documented,\nis deliberately simple and thus slow; to reach high performance, users instead\nlink with optimized \\definition[open-source implementations]{\\blas\nimplementations}. The oldest {\\em open-source} implementation still in\nuse is the {\\namestyle Automatically Tuned Linear Algebra Software}\n(\\atlas)~\\cite{atlas1, atlas3, atlas2, atlasweb}, first released in 1997; this\nauto-tuning-based library's main strength is that it yields decent performance on a\nwide range of hardware platforms with little developer and user effort. The\nfirst major open-source implementation hand-tuned for modern processors with\ncache hierarchies was {\\swstyle GotoBLAS}~\\cite{gotoblas1, gotoblas2,\ngotoblasweb}. It reaches up to around \\SI{90}{\\percent} of a processor's peak\nfloating-point performance for both sequential and multi-threaded Level~3\nkernels and good bandwidth-bound performance for Level~1 and~2 operations.\nAfter {\\swstyle GotoBLAS}'s discontinuation in~2010, its code-base and approach\nwere picked up and extended to more recent processors in the \\openblas\nlibrary~\\cite{openblasweb}, which is currently the fastest open-source\nimplementation for many architectures. 
Also inspired by {\\swstyle GotoBLAS}'s\napproach is the fairly recent {\\namestyle \\blas-like Library Instantiation\nSoftware} (\\blis)~\\cite{blis3, blis1, blis2, blisweb}, an open-source framework\nthat provides optimized kernels for basic DLA operations, such as the \\blas,\nbased on one hand-tuned micro-kernel per architecture.\n\nIn addition to open-source implementations, many hardware \\definition[vendor\nimplementations]{vendors} maintain and distribute their own high-performance\n{\\em\\blas}, e.g., \\intel's {\\namestyle Math Kernel Library}\n(\\mkl)~\\cite{mklweb}, \\apple's framework \\accelerate~\\cite{accelerateweb}, and\n{\\namestyle IBM}'s {\\namestyle Engineering and Scientific Subroutine Library}\n(\\essl)~\\cite{esslweb}.\n\n\\blas forms the basis for DLA libraries covering more advanced operations. The\nearliest such library, built first on top of \\blasl1 and later also Level~2, was\n{\\swstyle LINPACK}~\\cite{linpack, linpackweb}, a package of solvers for linear equations\nand least-squares problems from the~1970s and~1980s. {\\swstyle LINPACK},\ntogether with {\\swstyle EISPACK}~\\cite{eispack, eispackweb}, a collection of\neigenvalue solvers, was superseded by the {\\namestyle Linear Algebra PACKage}\n(\\definition{\\lapack})~\\cite{lapack, lapackweb} in~1992. \\lapack has since been\nextended with new features and algorithms, and is still under active\ndevelopment. 
Just like \\blas, \\lapack functions as a de-facto standard\ninterface specification for many advanced DLA operations; libraries such as\n\\openblas and \\mkl adopt its interface and provide tuned implementations of\nvarious routines.\n\nFor more details on \\blas and \\lapack, and their kernels and implementations\nused throughout this work, see \\cref{app:libs}.\n\n\n\\subsubsection{Blocked Algorithms}\n\\label{sec:relwork:blocked}\n\n\\lapack uses \\definition{blocked algorithms} for most of its dense operations.\nThe core idea behind these algorithms is to leverage a processor's cache\nhierarchy by increasing the spatial and temporal locality of operands, as well\nas casting most of an operation's computation in terms of \\blasl3 kernels. As a\nresult, complex operations can reach performance levels close to the hardware's\ntheoretical peak.\n\nHowever, for each operation, there typically exist multiple\n\\definition{alternative blocked algorithms}, of which \\lapack offers only one,\nbut not always the fastest. The alternative algorithms for a given operation\ncan be derived from its mathematical formulation\nsystematically~\\cite{derivingbalgs} and automatically~\\cite{loopgen, pmegen}.\nBased on these principles, \\libflame~\\cite{libflameref, libflame, libflameweb}\noffers many alternative algorithms for each operation, and for several\noperations provides more efficient default algorithms than \\lapack. In this\nwork we consider \\libflame's blocked algorithms for various operations, and aim\nto predict which of them is most efficient for given scenarios.\n\nAnother challenge posed by blocked algorithms is their \\definition[block size\ntuning]{block sizes}, which need to be carefully {\\em tuned} to maximize\nperformance. 
Since this is a well-known aspect of blocked\nalgorithms~\\cite{rooflinedla, blocksizetuning}, \\lapack encapsulates and exposes\nall its tuning parameters in \\ilaenv, a central routine that is used to\nconfigure the library at compile time; for many operations the block\nsizes used by \\lapack's reference implementation of \\ilaenv (64~for most\nalgorithms) have been too small on recent hardware for quite some time.\nAlthough the necessity of optimizing block sizes is well understood and taken\ncare of by implementations such as \\mkl, it remains non-trivial, and in fact few\nend-users and application-developers are aware of it. The automated model-based\noptimization of the block size for blocked algorithms is the second major goal\nof this work.\n\n\n\\subsubsection{Alternatives to Blocked Algorithms}\n\\label{sec:relwork:altalgs}\n\nAn alternative to blocked algorithms is \\definition{recursive algorithms},\nwhich avoid both the algorithm selection and block-size optimization. They are\nalso known as ``cache oblivious'' algorithms~\\cite{cacheoblivious2,\ncacheoblivious1} since they minimize the data-movement between cache\nlevels~\\cite{dlarec}. Recursion has been suggested for many DLA operations,\nsuch as the LU~decomposition~\\cite{lurec, lurec2}, the Cholesky\ndecomposition~\\cite{cholrec}, triangular matrix inversion~\\cite{trinvrec},\ntwo-sided linear systems~\\cite{sygstrec}, tall-and-skinny\nQR~factorization~\\cite{qrrec}, and Sylvester-type equation solvers~\\cite{recsy,\nrecsyweb}.\n\nHowever, since no readily-available recursion-based library comparable to\n\\lapack existed, we developed the {\\namestyle Recursive \\lapack collection}\n(\\definition{\\relapack})~\\cite{relapack, relapackweb}. 
\\relapack provides\nrecursive implementations for 48~\\lapack routines, and outperforms not only the\nreference implementation but in many cases also optimized libraries such as\n\\openblas and \\mkl.\n\nA second alternative to blocked algorithms, tailored to shared-memory systems, is\ntask-based \\definition{algorithms-by-blocks}, also known as ``block algorithms''\nor ``tiled algorithms''. However, these algorithms not only introduce a\nspecialized storage scheme of matrices ``by block'', but also require custom\ntask scheduling mechanisms. Implementations of such schedulers include\n{\\namestyle QUARK}~\\cite{quark} as part of {\\namestyle PLASMA}~\\cite{plasma},\n{\\namestyle DAGuE}~\\cite{dague}, {\\namestyle SMPSs}~\\cite{smpssdla}, and\n{\\namestyle SuperMatrix}~\\cite{supermatrix}.\n\n\n\\subsubsection{Distributed-Memory and Accelerators}\n\\label{sec:relwork:dist}\n\n\\definition[distributed memory]{Distributed-memory} systems and super-computers\nare indispensable for large-scale DLA computations. The first noteworthy\nextension of \\blas and \\lapack to this domain was the {\\namestyle\nScalable Linear Algebra PACKage} (\\scalapack)~\\cite{scalapack, scalapackweb},\nwritten in \\fortran and based on \\blas, \\lapack, and the {\\namestyle Message\nPassing Interface} (MPI). However, {\\namestyle ScaLAPACK} is only sparingly\nupdated (last in~2012); instead, the state of the art for\ndistributed-memory DLA is {\\namestyle Elemental}~\\cite{elemental, elementalweb},\nan actively developed \\cpplang~library based on \\libflame's methodology and\nobject-oriented, templated programming techniques.\n\nSince \\definition{accelerators} such as {\\namestyle Xeon Phi} coprocessors and\ngraphics processors lend themselves well to compute-intensive operations, they\nare a natural target for DLA codes. 
While some classic \\blas implementations\nsuch as \\atlas, \\blis, and \\mkl can be used on the x86-based {\\namestyle Xeon\nPhi}s, separate libraries are required for graphics processors: {\\namestyle\nNVIDIA}'s {\\namestyle cuBLAS}~\\cite{cublasweb} provides high-performance \\blas\nkernels for {\\langstyle CUDA}-enabled graphics cards, and {\\namestyle\nclBLAS}~\\cite{clblasweb} targets {\\langstyle OpenCL}-capable devices.\nFurthermore, {\\namestyle Matrix Algebra on GPU and Multicore Architectures}\n({\\namestyle MAGMA})~\\cite{magma, magmaweb} targets \\blas and \\lapack operations\non heterogeneous systems (e.g., CPU + GPU).\n\n\n\\subsection{Performance Measurements and Profiling}\n\\label{sec:relwork:meas}\n\nRuntime measurements of both application codes and algorithms are crucial in the\ninvestigation of performance behaviors and bottlenecks, as well as in optimization\nand tuning in general; hence, numerous tools facilitate such\nmeasurements. Simple timers are accessible in virtually any language and\nenvironment: e.g., \\code{time} in Unix, \\code{rdtsc} in x86~assembly,\n\\code{gettimeofday()} in~\\clang, \\code{omp\\_get\\_wtime()} in {\\namestyle\nOpenMP}, \\code{tic} and \\code{toc} in \\matlab, and \\code{timeit} in \\python.\nSeveral more advanced tools \\definition[profiling]{profile} executions of\nfunctions and communications in applications by tracing or sampling: e.g.,\n{\\namestyle gprof}~\\cite{gprof, gprofweb}, {\\namestyle VAMPIR}~\\cite{vampirweb},\n{\\namestyle TAU}~\\cite{tau, tauweb}, {\\namestyle Scalasca}~\\cite{scalasca,\nscalascaweb}, and \\intel's {\\namestyle VTune}~\\cite{vtuneweb}. While such tools\nare invaluable in the performance analysis of application codes, their\ngenerality makes them somewhat unwieldy for our purposes of investigating DLA\nkernel performance. 
Therefore, we designed {\\namestyle Experimental Linear\nAlgebra Performance Studies} (\\definition{\\elaps})~\\cite{elaps, elapsweb}, a\nframework for performance measurements and analysis of DLA routines and\nalgorithms, further detailed in \\cref{sec:meas:elaps}.\n\n\n\\subsection{Performance Modeling and Predictions}\n\\label{sec:relwork:model}\n\nPredicting and modeling application performance is an important aspect of\nhigh-performance computing, and the term ``performance modeling'' is used to\ndescribe many different techniques and approaches. This section gives a brief\noverview of such approaches with a focus on methods for DLA algorithms.\n\nThe well-established \\definition{Roofline model}~\\cite{roofline1} does not\npredict performance, but relates an algorithm's attained performance to the\nhardware's potential: As detailed in \\cref{sec:term:roofline}, it allows one to\nevaluate an execution's resource efficiency by relating the algorithm's\narithmetic intensity and performance to the hardware's peak\nmain-memory bandwidth and floating-point performance. It has been applied,\nimplemented, and extended in numerous publications, such as~\\cite{rooflinecache,\nrooflinetoolkit, roofline2}. Notably, \\citeauthor*{rooflinedla} use the\nroofline model (the arithmetic intensity in particular) to optimize the block\nsize for a blocked matrix inversion algorithm~\\cite{rooflinedla}.\n\nModel-based performance tuning of \\blas implementations was suggested for both\n\\atlas~\\cite{atlasmodel} and \\blis~\\cite{blismodel}, showing that near-optimal\n\\blas performance can be reached without measurement-based autotuning: Instead,\nthey select blocking sizes according to, e.g., the \\blas implementation and the\ntarget processor's cache sizes. 
Note that these approaches are used to tune\n\\blas kernels, and do not actually predict their performance; hence they cannot\nserve as a basis for our predictions.\n\nPrevious work in our research group by \\citeauthor*{roman1} constructed accurate\n\\definition[analytical models]{analytical performance models} for small DLA\nkernels~\\cite{romandis, roman1}. These models target problems that fit within a\n\\harpertown's last-level cache (L2), and are based on the number of\nmemory-stalls and arithmetic operations as well as their overlap incurred by\nspecific kernel implementations. As such, they require not only a deep\nunderstanding of the processor architecture, but also a detailed analysis of the\nkernel implementation. While the resulting models yield accurate predictions\nwithin a few percent of reference measurements, they are not easily extended to\nlarger problems and other operations. Therefore, this work instead\nconsiders automatically generated, measurement-based models.\n\n\\Citeauthor*{blis3model} construct \\definition[piecewise models]{piecewise}\nruntime and energy {\\em models}---somewhat similar to those presented in this\nwork---for the \\blis implementations of \\dgemm and \\dtrsm~\\cite{blis3model} on a\n{\\hwstyle Sandy Bridge-EP E5-2620}. However, their approach is based on\nextensive knowledge of \\blis~\\cite{blismodel}, and their models only represent\none degree of freedom (by considering only square matrices or operations on\npanel matrices with fixed width\/height). 
Their average runtime model accuracy\nfor \\dgemm and \\dtrsm is, respectively, \\SIlist{1.5;4.5}\\percent, with local\nerrors of up to, respectively, \\SIlist{4.5;7}\\percent.\n\\Citeauthor*{blischolmodel} extend this work to multi-threaded \\dgemm, \\dtrsm,\nand \\dsyrk in order to predict the performance of a blocked Cholesky\ndecomposition algorithm with fixed block size~\\cite{blischolmodel}; their\naverage runtime prediction errors are \\SIlist{3.7;2.4}\\percent, depending on the\nparallelization within \\blis. In contrast to these publications, the modeling\nframework presented in this work, which was developed around the same time, is\nfully automated, applicable to any \\blas- or \\lapack-like routine, not limited\nto one implementation and hardware, and offers models with multiple degrees of\nfreedom.\n\nIn a separate effort, \\citeauthor*{tridiagmodel} constructs measurement-based,\nyet hardware- and \\definition{implementation-independent models} in the form of\na series of univariate polynomials (one kernel argument is represented by the\npolynomial, the other varied in the series) for several \\blasl3\nkernels~\\cite{tridiagmodel, qrmodel}. These models are used to predict the\nperformance of both a blocked reduction to tridiagonal form~\\cite{tridiagmodel}\nand a blocked multishift QR~algorithm~\\cite{qrmodel}. The resulting prediction\nerror on an unspecified {\\namestyle AMD Opteron} is reported to be\nbelow~\\SI{10}{\\percent} for the single-threaded tridiagonalization, and is on\naverage around~\\SI{10}{\\percent} for the QR~algorithm using multi-threaded\n\\blas. In contrast, the more general piecewise models proposed in this work\nyield considerably smaller prediction errors for various blocked algorithms.\n\nSeveral research projects model the performance of \\definition[distributed\nmemory]{distributed-memory} applications. 
A general-purpose approach by\n\\citeauthor*{alex1} builds basic performance models for kernels in application\ncodes based on performance profiling~\\cite{alex2, alex1}, which makes it\npossible to investigate the complexity and scalability of application\ncomponents. In the field of distributed-memory DLA, most modeling efforts\ntarget \\scalapack using domain-specific knowledge through, e.g., polynomial\nfitting~\\cite{scalapackpolfit} or hierarchical modeling of\nkernels~\\cite{scalapckhierarchmodel}.\n\n\n\\subsection{Tensor Contractions}\n\\label{sec:relwork:tensor}\n\nTensor contractions are at the core of scientific computations in fields such as\nmachine learning~\\cite{tensorml}, general relativity~\\cite{generalrelativity,\ngeneralrelativity2}, and quantum chemistry~\\cite{ccd2, ccd1}. Since, generally\nspeaking, such contractions are high-dimensional analogues of matrix-matrix\nmultiplications, they are closely related to \\blasl3 operations; in fact, most\ncontractions can be cast in terms of one or more calls to \\dgemm, either by\nadding loops or transpositions. This approach is implemented in many\nframeworks, such as the {\\namestyle Tensor Contraction Engine} (TCE)~\\cite{tce,\ntceweb}, the {\\namestyle Cyclops Tensor Framework} (CTF)~\\cite{cyclops,\ncyclopsweb}, the \\matlab{} {\\namestyle Tensor Toolbox}~\\cite{matlabtt,\nmatlabttweb}, and {\\namestyle libtensor}~\\cite{libtensor, libtensorweb}.\n\nIn contrast to these implementations, which rely on a single algorithm for each\ncontraction (potentially selected through heuristics), previous work in our\ngroup by \\citeauthor*{tensorgen} investigated the automated generation of all\nalternative \\blas-based algorithms~\\cite{tensorgen}. 
\\Cref{ch:tensor} picks up\nthis work and presents a performance prediction framework for such algorithms\nthat allows us to automatically identify the fastest\nalgorithm~\\cite{tensorpred}.\n\nMore recent and ongoing work in our group by \\citeauthor*{gett} attempts to\nbreak the barrier between contraction algorithms and \\dgemm implementations.\nFollowing the structured design of \\blis~\\cite{blis1}, they propose code\ngenerators that provide high-performance algorithms tailored to specific\ncontraction problems and reach close-to-optimal performance~\\cite{gett}. Their\ntools construct numerous alternative implementations, and identify the fastest\nthrough a combination of heuristics and micro-benchmarks.\n\n\n\\subsection{Motivation: Tensor Contraction Algorithms}\n\\label{sec:intro:tensor:algs}\n\nComputationally, tensor contractions are generalizations of matrix-vector and\nmatrix-matrix products to operands of higher dimensionality. While\n\\blas covers contractions of up to two-dimensional operands (i.e., matrices),\nthere are no equivalently established and standardized high-performance\nlibraries for general tensor contractions. Fortunately, just as matrix-matrix\nproducts can be decomposed into sequences of matrix-vector products,\nhigher-dimensional tensor contractions can be cast in terms of matrix-matrix or\nmatrix-vector kernels. 
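To make this reduction concrete, the following minimal sketch (in Python with NumPy, where the \verb|@| operator stands in for a \\dgemm call; the helper name is illustrative and not part of any framework discussed here) computes a three-dimensional contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$ by looping over one free dimension and multiplying two-dimensional slices:

```python
import numpy as np

# Sketch: the contraction C[a,b,c] := sum_i A[a,i] * B[i,b,c] reduces to
# one loop around a matrix-matrix kernel ("@" stands in for dgemm)
# applied to 2D slices of B and C.
def contract_via_gemm(A, B):
    na, ni = A.shape
    _, nb, nc = B.shape
    C = np.empty((na, nb, nc))
    for c in range(nc):              # one for-loop around the kernel:
        C[:, :, c] = A @ B[:, :, c]  # C[:,:,c] := A * B[:,:,c]
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5, 6)
C = contract_via_gemm(A, B)
# reference: direct summation in Einstein notation
assert np.allclose(C, np.einsum("ai,ibc->abc", A, B))
```

Looping over the other free dimensions instead, or slicing down to matrix-vector products, yields further alternative algorithms for the same contraction.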
(A broader overview of alternative approaches is given\nin \\cref{sec:relwork:tensor}.)\n\n\\input{intro\/figures\/tensor_algs}\n\n\\begin{example}{Tensor contraction algorithms}{intro:tensor:algs}\n Let us consider the contraction $C_{abc} \\coloneqq A_{ai} B_{ibc}$ (in\n Einstein notation), which is visualized as follows:\n \\[\n \\begin{tikzpicture}[baseline=(c.base)]\n \\begin{drawcube}\n \\node[anchor=east] at (-1, 0, 1) {$\\scriptstyle a$};\n \\node[anchor=north] at (0, -1, 1) {$\\scriptstyle b$};\n \\node[anchor=north west] at (1, -1, 0) {$\\scriptstyle c$};\n \\node (c) {$C$};\n \\end{drawcube}\n \\end{tikzpicture}\n \\coloneqq\n \\begin{tikzpicture}[baseline=(c.base)]\n \\begin{drawsquare}\n \\node[anchor=east] at (-1, 0, 0) {$\\scriptstyle a$};\n \\node[anchor=north] at (0, -1, 0) {$\\scriptstyle i$};\n \\node {$A$};\n \\end{drawsquare}\n \\end{tikzpicture}\n \\matmatsep\n \\begin{tikzpicture}[baseline=(c.base)]\n \\begin{drawcube}\n \\node[anchor=east] at (-1, 0, 1) {$\\scriptstyle i$};\n \\node[anchor=north] at (0, -1, 1) {$\\scriptstyle b$};\n \\node[anchor=north west] at (1, -1, 0) {$\\scriptstyle c$};\n \\node {$B$};\n \\end{drawcube}\n \\end{tikzpicture}\n \\enspace.\n \\]\n The entries~$C$\\code{[a,b,c]} of the resulting three-dimensional tensor $C\n \\in \\R^{a \\times b \\times c}$ are computed as\n \\[\n \\forall \\code a \\forall \\code b \\forall \\code c :\\\n C\\text{\\code{[a,b,c]}} \\coloneqq \\sum_\\code i A\\text{\\code{[a,i]}}\n B\\text{\\code{[i,b,c]}}\n \\enspace.\n \\]\n As further described in \\cref{sec:tensor:alggen}, this contraction can be\n performed by a total of 36~alternative algorithms, each consisting of one or\n more \\code{\\bf for}-loops with a single \\blas kernel at its core. Three\n examples of such algorithms using \\blasl1, 2, and~3 kernels are shown in\n \\cref{fig:intro:tensor:algs}. 
These algorithms use \\matlab's ``\\code:''\n slicing notation\\footnotemark{} to access matrices and vectors within the\n tensors~$A$, $B$, and~$C$; the resulting shapes of the operands within the\n tensors passed to the \\blas kernel are shown alongside the algorithms.\n\\end{example}\n\\footnotetext{%\n The index ``\\code:'' in a tensor refers to all elements along that\n dimension, e.g., $A$\\code{[a,:]} is the \\code a-th row of~$A$.\n}\n\nEach tensor contraction can be computed via \\blas kernels through many---even\nhundreds---of algorithms, each with its own performance behavior. The\n\\definition[optimization challenge:\\\\alternative algorithms\\\\skewed\ndimensions]{optimization challenge} of identifying the fastest among such a set\nof {\\em alternative algorithms} is especially difficult due to {\\em skewed\ndimensions} (i.e., one or more dimensions are extremely small), which are\ncommonly encountered in practice and for which most \\blas implementations are\ntypically not optimized.\n\n\\input{intro\/figures\/tensor_perf}\n\n\\begin{example}{Performance of contraction algorithms}{intro:tensor:perf}\n Let us consider the tensor contraction $C_{abc} \\coloneqq A_{ai} B_{ibc}$\n from \\cref{ex:intro:tensor:algs} with tensors $A \\in \\R^{n \\times 8}$, $B\n \\in \\R^{8 \\times n \\times n}$, and thus $C \\in \\R^{n \\times n \\times n}$;\n for~$n = 100$, this can be visualized as follows:\n \\[\n \\begin{tikzpicture}[baseline=(c.base)]\n \\begin{drawcube}\n \\node[anchor=east] at (-1, 0, 1) {$\\scriptstyle a$};\n \\node[anchor=north] at (0, -1, 1) {$\\scriptstyle b$};\n \\node[anchor=north west] at (1, -1, 0) {$\\scriptstyle c$};\n \\node (c) {$C$};\n \\end{drawcube}\n \\end{tikzpicture}\n \\coloneqq\n \\begin{tikzpicture}[baseline=(a.base), x={(.08, 0)}]\n \\begin{drawsquare}\n \\node[anchor=east] at (-1, 0, 0) {$\\scriptstyle a$};\n \\node[anchor=north] at (0, -1, 0) {$\\scriptstyle i$};\n \\node (a) {$A$};\n \\end{drawsquare}\n \\end{tikzpicture}\n 
\\begin{tikzpicture}[baseline=(a.base), y={(0, .08)}]\n \\begin{drawcube}\n \\node[anchor=east] at (-1, 0, 1) {$\\scriptstyle i$};\n \\node[anchor=north] at (0, -1, 1) {$\\scriptstyle b$};\n \\node[anchor=north west] at (1, -1, 0) {$\\scriptstyle c$};\n \\node (b) {$B$};\n \\end{drawcube}\n \\end{tikzpicture}\n \\enspace.\n \\]\n\n \\Cref{fig:intro:tensor:perf1} presents the performance of all 36~algorithms\n for this contraction on a \\harpertown with single-threaded \\openblas. While\n the two \\dgemm-based algorithms~(\\ref*{plt:intro:tensor:dgemm}) are clearly\n faster than the others, they differ in performance by up to\n \\SI{23.32}\\percent; with other kernels the differences are even more\n extreme, exceeding a factor of~60 for the \\daxpy-based\n algorithms~(\\ref*{plt:intro:tensor:daxpy}).\n\n \\Cref{fig:intro:tensor:perf2} showcases the performance of algorithms for\n the more complex contraction $C_{abc} \\coloneqq A_{ija} B_{jbic}$ on all\n 10~cores of an \\ivybridge using multi-threaded \\openblas. In this scenario,\n the performance of the \\dgemm-based algorithms alone differs by up\n to~$3\\times$.\n\\end{example}\n\nOne could argue that only \\dgemm-based algorithms are viable candidates to\nachieve the best performance; while this observation is true for the most part,\ndue to skewed dimensions even the performance of these algorithms alone can\ndiffer dramatically. 
Furthermore, some contractions (e.g., $C_a \\coloneqq\nA_{iaj} B_{ji}$) cannot be implemented via \\dgemm in the first place.\nTherefore, we aim at the accurate prediction of any \\blas-based contraction,\nirrespective of which kernel is used.\n\n\n\\subsection{Prediction through Micro-Benchmarks}\n\\label{sec:intro:tensor:pred}\n\nAt first sight, the situation seems similar to the selection of blocked\nalgorithms: We want to avoid exhaustive performance measurements and select the\nbest algorithm {\\em without executing} any of the alternatives; our strategy is\nonce again to predict each algorithm's performance by estimating its invoked\nkernel's runtime. However, while performance models accurately estimate the\nperformance of such kernels for many operand sizes, they perform rather poorly\nfor operations with skewed dimensions: For extremely thin or small operands,\n\\blas kernels exhibit strong size-dependent performance fluctuations, which are\nimpractical to capture and represent in performance models.\n\nWhile we cannot rely on performance models, analyzing the structure of tensor\ncontraction algorithms suggests a different approach: In contrast to blocked\nalgorithms, a contraction algorithm performs its entire computation in a series\nof calls to a \\definition[single kernel\\\\fixed size\\\\micro-benchmarks]{single\n\\blas kernel} with operands of {\\em fixed size}. Based on this observation,\nwe estimate the performance of such calls by constructing a small set of {\\em\nmicro-benchmarks} that execute the kernel only a few times, and thus perform\nonly a fraction of the algorithm's computation. 
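The core of this idea can be sketched as follows (a simplified illustration in Python; the \verb|@| operator stands in for the fixed-size \\blas kernel, the names are hypothetical, and the careful cache-state preparation is omitted here):

```python
import time
import numpy as np

# Simplified sketch of the micro-benchmark idea: time a kernel call with
# fixed-size operands only a few times, then extrapolate to the
# algorithm's full number of identical calls.
def microbenchmark(kernel, reps=5):
    times = []
    for _ in range(reps):
        start = time.perf_counter()
        kernel()
        times.append(time.perf_counter() - start)
    return min(times)  # summary statistic: minimum over repetitions

A = np.random.rand(100, 8)
B = np.random.rand(8, 100)
t_call = microbenchmark(lambda: A @ B)

# a hypothetical contraction algorithm performing 100 such calls:
n_calls = 100
t_predicted = n_calls * t_call
```

Only `reps` kernel executions are timed, yet the estimate `t_predicted` covers all `n_calls` invocations of the algorithm.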
Since memory locality plays an\nespecially important role in contractions with skewed dimensions, we carefully\nrecreate the state of the processor's caches within the micro-benchmarks to\ntime the kernel in conditions analogous to those in the actual algorithm.\n\nBased on such micro-benchmarks, we can predict the total runtime of contraction\nalgorithms for tensors of various shapes and sizes. These predictions reliably\nsingle out the fastest algorithm from a set of alternatives several orders of\nmagnitude faster than a single algorithm execution.\n\n\n\\subsubsection{Background and System Noise}\n\\label{sec:meas:fluct:noise}\n\nThe potentially most disruptive, yet also most easily avoidable, source of\nfluctuations is interference from other \\definition{background processes}\ncompeting for the processor's resources.\n\n\\input{meas\/figures\/fluct}\n\n\\begin{example}{Influence of background noise}{meas:fluct}\n \\Cref{fig:meas:fluct} presents the runtime of 1000~repetitions of the\n matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B + \\dm\n C$ (\\dgemm[NN]) with $\\dm A, \\dm B, \\dm C \\in \\R^{100 \\times 100}$ on a\n \\broadwell (as part of a {\\namestyle MacBook Pro}) with \\apple's\n \\accelerate framework, and on a \\sandybridge (as part of \\rwth's computing\n cluster) with \\mkl.\n\n On the \\broadwell~(\\ref*{plt:ibacc:circ}) with various other applications\n running in the background (e.g., browser and music player), the fluctuations\n are enormous: The measurement standard deviation is over $4\\times$~the mean\n runtime. On the \\sandybridge~(\\ref*{plt:sbmkl:circ}) with no other user\n applications running during measurements, the fluctuations are already much\n smaller at \\SI{2.36}{\\percent}~of the average time. 
For larger problem\n sizes, the fluctuations are considerably smaller, and quickly fall below\n \\SI{.1}\\percent.\n\\end{example}\n\nWhile this type of fluctuation can be avoided to some extent by ensuring that\nno other applications run during measurements, it cannot be avoided altogether\neven with exclusive access to dedicated high-performance hardware---the\nremaining fluctuations are known as \\definition{system noise}. Hence, for our\nexperiments, models, and micro-benchmarks, all our measurements are repeated at\nleast five times and \\definition{summary statistics} of the runtime (or\nperformance), such as the minimum or median, are presented.\n\n\n\\subsubsection{\\intel{} \\turboboost}\n\\label{sec:meas:fluct:turbo}\n\nCompute-bound dense linear algebra computations, such as \\blasl3 and\n\\lapack-level routines, benefit directly from increased processing frequencies.\nTherefore, they usually trigger \\intel{} \\turboboost and constantly run at the\nmaximum turbo frequency if possible. 
Since this frequency cannot be sustained\nindefinitely on most machines, the processor frequency is eventually lowered and\nhenceforth fluctuates to keep the hardware within its power and thermal limits.\n\n\\input{meas\/figures\/turbo}\n\n\\begin{example}{\\turboboost}{meas:turbo}\n \\Cref{fig:meas:turbo} presents the runtime of repeated matrix-matrix\n multiplications $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B + \\dm C$\n (\\dgemm[NN]) with $\\dm A, \\dm B, \\dm C \\in \\R^{1300 \\times 1300}$ alongside\n the processor's temperature and frequency\\footnotemark{} on both cores of a\n \\broadwell with multi-threaded \\accelerate; in this experiment, no other\n resource intensive programs run in the background.\n\n In the beginning, the processor is at a cool\n \\SI{53}{\\celsius}~(\\ref*{plt:meas:turbo:temp}) and each \\dgemm[NN] takes\n about \\SI{60}{\\ms}~(\\ref*{plt:meas:turbo:time}) at the maximum turbo\n frequency of \\SI{3.4}{\\GHz}~(\\ref*{plt:meas:turbo:freq}). The processor\n temperature increases steadily up to \\SI{105}{\\celsius} around\n repetition~200 (\\SI{12}{\\second} into the experiment); at this point the\n frequency is reduced and continuously adjusted between \\SIlist{3;3.2}{\\GHz}\n such that this temperature threshold is not exceeded. 
This change in\n frequency, as well as its fluctuations towards the end, has a direct effect\n on the \\dgemm[NN]'s runtime: It increases by about~\\SI{10}{\\percent} to\n roughly~\\SI{67}\\ms.\n\\end{example}\n\\footnotetext{%\n Obtained through the \\intel {\\namestyle Power Gadget}.\n}\n\nThe behavior of \\turboboost depends enormously on the computation environment:\nWhile on a workstation or laptop the processor temperature increases\nrapidly and the maximum turbo frequency is not sustained for long, on dedicated\nhigh-performance compute clusters, efficient cooling allows the processor to\noperate at the maximum turbo frequency for much longer, if not indefinitely.\nHowever, even in our main computing facilities at the {\\namestyle\\rwth IT\nCenter}, we observed notable fluctuations of the frequency below its maximum,\nwith negative impacts on our measurement quality and stability.\n\nThroughout this work, we consider processors both with and without \\turboboost\nenabled. While the performance of these two cases is not directly comparable,\nwe apply our methodologies to both scenarios. 
In particular,\n\\turboboost is disabled on our \\sandybridge (unless otherwise stated) and\nenabled on our \\haswell---an overview of all hardware configurations is given in\n\\cref{app:hardware}.\n\n\n\\subsubsection{Distinct Long-Term Performance Levels}\n\\label{sec:meas:fluct:longterm}\n\nEven with \\turboboost disabled, a processor's speed is not always fixed to its\nbase frequency; instead, we observed jumps between two or more\n\\definition{performance levels}.\n\n\\input{meas\/figures\/longterm}\n\n\\begin{example}{Performance levels}{meas:longterm}\n \\Cref{fig:meas:longterm} presents the runtime of 1000~repetitions of the\n matrix-matrix multiplication $\\dm[width=.05]C \\coloneqq \\dm A \\matmatsep\n \\dm[width=.05]B + \\dm[width=.05]C$ (\\dgemm[NN]) with $\\dm A \\in \\R^{4000\n \\times 4000}$ and $\\dm[width=.05]B, \\dm[width=.05]C \\in \\R^{4000 \\times\n 200}$ on a \\sandybridge and a \\haswell (both with \\turboboost disabled) with\n single-threaded \\openblas.\n\n On both systems, we can clearly make out two distinct runtime levels: on the\n \\sandybridgeshort, the measurements jump between \\SIlist{354;359}\\ms, which\n are \\SI{1.4}{\\percent}~apart, and on the \\haswellshort with twice the\n floating-point performance per cycle, the two levels\n at~\\SIlist{205;213}{\\ms} differ by~\\SI{3.9}\\percent. There is no\n discernible pattern to the jumps between these levels, and the processors\n commonly stay at the same level for~\\SI{10}{\\second} or longer\n (50~repetitions at \\SI{200}{\\ms} each).\n\\end{example}\n\nSince we found no means to eradicate this type of fluctuation, we adapt our\nmeasurement setups to account for it: Whenever we have more than one\nmeasurement point (e.g., varying the routines or problem sizes), we not only\nrepeat each measurement several times in isolation, but also shuffle the\nrepetitions. 
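Such a shuffled measurement schedule can be sketched as follows (a minimal illustration in Python; the names are hypothetical and not part of our measurement framework):

```python
import random

# Instead of running all repetitions of one data point back-to-back, build
# a single schedule of (point, repetition) pairs and shuffle it, so that
# each point's repetitions spread across the whole experiment duration.
def shuffled_schedule(points, reps, seed=0):
    schedule = [(p, r) for p in points for r in range(reps)]
    random.Random(seed).shuffle(schedule)
    return schedule

schedule = shuffled_schedule(points=[128, 256, 384], reps=5)
assert len(schedule) == 3 * 5
# every data point still occurs exactly `reps` times:
assert all(sum(1 for p, _ in schedule if p == q) == 5
           for q in (128, 256, 384))
```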
As a result, the repetitions for each data point are spread across\nthe entire experiment duration, and summary statistics such as the minimum and\nmedian yield a stable runtime estimate for a single performance level.\n\nIn summary, we can avoid or account for various types of fluctuations within our\nmeasurements.\n\n\n\n\\section{Performance Effects for Dense Linear Algebra Kernels}\n \\label{sec:meas:effects}\n \\input{meas\/effects}\n\n \\subsection{Library Initialization Overhead}\n \\label{sec:meas:effects:init}\n \\input{meas\/init}\n\n \\subsection{Fluctuations}\n \\label{sec:meas:effects:fluct}\n \\input{meas\/fluct}\n\n \\subsection{Thread Pinning}\n \\label{sec:meas:effects:pin}\n \\input{meas\/pin}\n\n \\subsection{Caching}\n \\label{sec:meas:effects:caching}\n \\input{meas\/caching}\n\n \\subsection{Summary}\n \\label{sec:meas:effects:sum}\n \\input{meas\/effectssum}\n\n \\section{Measurements and Experiments: \\elaps}\n \\label{sec:meas:elaps}\n \\input{meas\/elapsintro}\n\n \\subsection{The \\sampler}\n \\label{sec:meas:sampler}\n \\input{meas\/sampler}\n\n \\subsection{The \\elaps{} \\python Framework}\n \\label{sec:meas:elapslib}\n \\input{meas\/elaps}\n\n \\section{Summary}\n \\label{sec:meas:conclusion}\n \\input{meas\/conclusion}\n}\n\n\n\\subsubsection{Alignment to Cache-Lines}\n\nData is moved through the memory hierarchy in blocks of \\SI{64}{\\bytes} ($=\n\\SI8\\doubles$) called \\definition{cache-lines}.\\footnote{%\n The cache-line size is not fixed in general, but for most processors it is\n \\SI{64}{\\Byte}.\n} Hence, using multiples of the cache-line size as memory-access strides\ntypically yields more regular and often better performance than other\nstrides.\n\n\\input{model\/figures\/ld8}\n\n\\footnotetextbefore{%\n Since $A$ and~$B$ have 256~rows, the leading dimensions are at least~256.\n}\n\\begin{example}{Aligning leading dimensions to cache-lines}{model:args:ld:8}\n \\Cref{fig:model:ld8} shows the runtime of\n 
\\displaycall\\dtrsm{\n \\arg{side}L, \\arg{uplo}L,\n \\arg{transA}N, \\arg{diag}N,\n \\arg m{256}, \\arg n{256},\n \\arg{alpha}{1.0},\n \\arg AA, \\arg{ldA}{\\it\\color{blue}ld},\n \\arg BB, \\arg{ldB}{\\it\\color{blue}ld}\n }\n i.e., $\\dmB \\coloneqq \\dmAi \\dmB$ with $\\dmA\\lowerpostsep, \\dmB \\in \\R^{256\n \\times 256}$, for leading dimensions\\footnotemark{} $ld = 256, \\ldots, 320$\n in steps of~1 on a \\sandybridge and a \\haswell with single-threaded\n \\openblas, \\blis, and \\mkl.\n\n For all setups, the \\dtrsm[LLNN]'s runtime exhibits some regular pattern in\n terms of the leading dimension arguments---with an average amplitude\n of~\\SI{2.19}\\percent. However the patterns are quite different: While\n \\openblas's runtime on the \\sandybridgeshort~(\\ref*{plt:sbopen}) drops\n equally at every even leading dimension, \\mkl on the\n \\haswellshort~(\\ref*{plt:hwmkl}) dips only at multiples of~4, and on the\n \\sandybridgeshort~(\\ref*{plt:sbmkl}) it has stronger dips at multiples of~8.\n \\blis on the other hand shows the exact opposite behavior: On both\n platforms~(\\ref*{plt:sbblis}, \\ref*{plt:hwblis}) its runtime spikes slightly\n at multiples of~8.\n\n Independent of the specific behavior of each setup, a smooth runtime curve\n is obtained when only multiples of~8 are considered as leading dimensions.\n\\end{example}\n\nTo avoid small performance irregularities, we will generate our models using\n\\definition[use multiples of the cache-line size]{multiples of the cache-line\nsize} for leading dimensions---in double-precision: multiples of~8.\n\n\n\\subsubsection{Set-Associative Cache Conflicts}\n\\label{sec:model:args:ld512}\n\nThe Level~1 and~2 caches in our processors are \\definition{8-way\nset-associative}: They are divided into sets of 8~cache-lines, and when a\ncache-line is loaded, its address's least significant bits determine which of the\nsets it is assigned to; within the set, an architecture-dependent cache\nreplacement policy 
determines in which of the 8~slots it is stored. When the\naddress space is accessed contiguously, consecutive cache-lines are loaded into\nconsecutive sets, and the cache is filled evenly. In the worst case, however,\nthe address space is accessed with a stride equal to the number of sets, and\nall loaded cache-lines are associated to the same set: Only 8~cache-lines are\ncached, and each additional line results in a \\definition{cache conflict miss}\ncausing a recently loaded line to be evicted. This effect should be avoided\nwhenever possible.\n\nOn recent \\intel{} {\\namestyle Xeon} processors, the Level~1 data cache~(L1d)\nfits \\SI{32}{\\kibi\\byte} organized as 64~sets of 8~cache-lines. A memory\nlocation with address~$a$ is a part of cache-line~$\\lfloor a \/ 64 \\rfloor$ (due\nto the size of \\SI{64}{\\Byte} per line) and assigned to set $\\lfloor a \/ 64\n\\rfloor \\bmod 64$ (due to the capacity of 64~sets). The Level~2 cache (L2) in\nturn fits \\SI{256}{\\kibi\\byte} in 1024~sets; here address~$a$ is assigned to set\n$\\lfloor a \/ 64 \\rfloor \\bmod 1024$.\n\nIn a double-precision matrix stored with leading dimension~$ld$, consecutive\nelements in each row are $8 ld$~\\bytes apart ($\\SI1\\double = \\SI8\\bytes$).\nHence, for $ld = 512$, the consecutive row elements starting at address~$a_0$\nare stored at~$a_i = a_0 + 8 ld \\cdot i = a_0 + 4096 i$, and associated to the\nsame set in the L1d~cache:\n\\begin{align*}\n \\left\\lfloor \\frac{a_i}{64} \\right\\rfloor \\bmod 64\n &= \\left\\lfloor \\frac{a_0 + 4096 i}{64} \\right\\rfloor \\bmod 64 \\\\\n &= \\left(\\left\\lfloor \\frac{a_0}{64} \\right\\rfloor + 64 i \\right) \\bmod 64 \\\\\n &= \\left\\lfloor \\frac{a_0}{64} \\right\\rfloor \\bmod 64.\n\\end{align*}\nThe same problem occurs for leading dimensions that are multiples of~512, and\neven below~512 powers of~2 have a similar effect: E.g., with $ld = 256$ the\nelements of a row are associated to only two of the cache's 64~sets. 
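This set arithmetic can be checked numerically; the following small Python sketch (with illustrative helper names) reproduces the L1d mapping described above:

```python
# L1d cache model from above: 64-byte lines, 64 sets; address a belongs to
# cache-line floor(a / 64) and is assigned to set floor(a / 64) mod 64.
def l1d_set(addr, line_size=64, num_sets=64):
    return (addr // line_size) % num_sets

def row_sets(a0, ld, n_elements):
    # consecutive row elements of a double-precision matrix with leading
    # dimension ld lie 8 * ld bytes apart (8 bytes per double)
    return {l1d_set(a0 + 8 * ld * i) for i in range(n_elements)}

# ld = 512: all row elements map to a single set -> conflict misses
assert len(row_sets(a0=0, ld=512, n_elements=32)) == 1
# ld = 256: only two of the 64 sets are used
assert len(row_sets(a0=0, ld=256, n_elements=32)) == 2
```

With a benign leading dimension such as $ld = 8$, the same 32 row elements spread over 32 different sets, filling the cache evenly.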
Similarly,\nfor the L2~cache with 1024~sets, consecutive row-elements are associated to the\nsame cache set for leading dimensions that are multiples of~8192, and multiples\nof~4096 utilize only two sets.\n\n\\input{model\/figures\/ld512}\n\n\\begin{example}{Cache conflict misses caused by leading\n dimensions}{model:args:ld:512}\n \\Cref{fig:model:ld512} shows the runtime of\n \\displaycall\\dtrsm{\n \\arg{side}L, \\arg{uplo}L, \\arg{transA}N, \\arg{diag}N,\n \\arg m{256}, \\arg n{256}, \\arg{alpha}{1.0},\n \\arg AA, \\varg{ldA}{ld}, \\arg BB, \\varg{ldB}{ld}\n }\n i.e., $\\dmB \\coloneqq \\dmAi \\dmB$ with $\\dmA\\lowerpostsep, \\dmB \\in \\R^{256\n \\times 256}$, for leading dimensions $ld = 256, \\ldots, 8320$ in steps\n of~128 on a \\sandybridge and a \\haswell with single-threaded \\openblas,\n \\blis, and \\mkl.\n\n For most setups the runtime spikes above the baseline at multiples of~512.\n However, the average magnitude of these spikes ranges\n from~\\SI{.14}{\\percent} for \\blis on the\n \\sandybridgeshort~(\\ref*{plt:sbblis}) to~\\SI{8.37}{\\percent} for \\openblas\n on the \\haswellshort~(\\ref*{plt:hwopen}). Especially for\n \\openblas~(\\ref*{plt:sbopen}, \\ref*{plt:hwopen}), there are additional, yet\n lower spikes of \\SI{1.40}{\\percent} at multiples of~256. Furthermore, on\n the \\haswellshort for both \\openblas~(\\ref*{plt:hwopen}) and\n \\blis~(\\ref*{plt:hwblis}) the spikes are especially high at $ld = 4096$\n and~8192, exceeding the baseline by, respectively,\n \\SIlist{6.55;11.24}\\percent.\n\\end{example}\n\nTo prevent distortions from unfortunate leading dimensions in our model\ngeneration altogether, we will \\definition{avoid multiples of~256} for these\narguments.\n\nNote that by using leading dimensions that are multiples of~8, yet not of~256 in\nour measurements, our models will not yield accurate predictions for kernel\ninvocations that do not follow this pattern. 
However, predicting the\nperformance of such unfortunate invocations, which can be systematically\navoided, is not part of our models' purpose and would exceed the scope of this\nwork.\n\n\n\\subsubsection{Small-Scale Behavior}\n\\label{sec:model:args:size:small}\n\nOptimizations of compute kernels commonly involve vectorization and loop\nunrolling with factors of~4 or~8. These optimizations typically have a direct\ninfluence on a kernel's runtime for small variations of the size arguments.\n\n\\input{model\/figures\/size8}\n\n\\begin{example}{Small variations of size arguments}{model:args:size:8}\n \\Cref{fig:model:size8} shows the runtime of\n \\displaycall\\dtrsm{\n \\arg{side}L, \\arg{uplo}L, \\arg{transA}N, \\arg{diag}N,\n \\varg mn, \\varg nn, \\arg{alpha}{1.0},\n \\arg AA, \\arg{ldA}{400}, \\arg BB, \\arg{ldB}{400}\n }\n i.e., $\\dmB \\coloneqq \\dmAi \\dmB$ with $\\dmA\\lowerpostsep, \\dmB \\in \\R^{n\n \\times n}$, for $n = 256, \\ldots, 320$ in steps of~1 on a \\sandybridge and a\n \\haswell with single-threaded \\openblas, \\blis, and \\mkl.\n\n All setups show periodic patterns in their runtimes. While these patterns\n differ between the implementations, most have local runtime minima at\n multiples of~4, and all of them have minima at multiples of~8.\n\\end{example}\n\nTo avoid runtime artifacts introduced by vectorization and loop unrolling, we\nwill build our models on measurements that \\definition{use multiples of~8} for\nall size arguments.\n\n\n\\subsubsection{Piecewise Polynomial Behavior}\n\\label{sec:model:args:size:large}\n\nSince an operation's minimal \\flop-count is generally a (multivariate)\npolynomial function of the size arguments, one might expect that (for\ncompute-bound kernels) it translates directly into an equally polynomial\nruntime. 
However, since a kernel's performance is generally not constant for\nvarying operand sizes, a single polynomial is often insufficient to accurately\nrepresent a kernel's runtime for large ranges of problem sizes.\n\n\\input{model\/figures\/size}\n\n\\begin{example}{Polynomial fitting for size arguments}{model:args:size}\n \\Cref{fig:model:size} shows the runtime of\n \\displaycall\\dtrsm{\n \\arg{side}L, \\arg{uplo}L, \\arg{transA}N, \\arg{diag}N,\n \\varg m{n}, \\varg n{n}, \\arg{alpha}{1.0},\n \\arg AA, \\arg{ldA}{1000}, \\arg BB, \\arg{ldB}{1000}\n }\n i.e., $\\dmB \\coloneqq \\dmAi \\dmB$ with $\\dmA\\lowerpostsep, \\dmB \\in \\R^{n\n \\times n}$, with $n = 24, \\ldots, 536$ in steps of~16 on a \\sandybridge and\n a \\haswell with single-threaded \\openblas, \\blis, and \\mkl.\n\n At first sight, the runtime for all setups follows a smooth cubic\n behavior---perfectly in line with the operation's minimal cost of\n \\SIvar{n^3}\\flops. However, if for each setup we fit the measurements with\n a single cubic polynomial that minimizes the least-squares relative error\n (details in~\\cref{sec:model:fit}), we are left with the approximation error\n shown in~\\cref{fig:model:size:err1}. The absolute relative approximation\n error\\footnotemark{} lies between \\SI{.86}{\\percent} for \\blis on the\n \\sandybridgeshort~(\\ref*{plt:sbblis}) and \\SI{11.22}{\\percent} for \\openblas\n on the \\haswellshort~(\\ref*{plt:hwopen}); on average it\n is~\\SI{5.30}\\percent.\n\n If we look closer at the approximation errors in\n \\cref{fig:model:size:err1}---especially for \\openblas on the\n \\haswellshort~(\\ref*{plt:hwopen})---we observe a piecewise smooth(er)\n behavior. Motivated by this observation, we now fit not one polynomial to\n each data-set but two: one for the first half ($n \\leq 280$) and one for the\n second half ($n \\geq 280$). 
For this two-split polynomial fit, the\n approximation error is shown in~\\cref{fig:model:size:err2}: The largest\n error is now reduced to~\\SI{5.25}{\\percent} for \\mkl on the\n \\haswellshort~(\\ref*{plt:hwmkl}), and the average error\n is~\\SI{2.55}{\\percent}---less than half of the original approximation error.\n (Based on a more detailed analysis, a better splitting point than\n $\\frac{24+536}2 = 280$ could have been chosen, but as\n \\cref{fig:model:size:err1} shows, such choices would be notably different\n for each setup.) Within the new approximation, the error for the second\n polynomial ($n \\geq 280$) is already quite low---on\n average~\\SI{.38}\\percent. Hence, in a second step, we further subdivide\n only the first half of the domain ($n \\leq 280$) at~$n = 152$, and generate\n a new approximation consisting of three polynomials. As\n \\cref{fig:model:size:err3} shows, the error of this approximation is\n below~\\SI{1.28}{\\percent}~(\\ref*{plt:hwmkl}) in all cases and on\n average~\\SI{.71}\\percent.\n\\end{example}\n\\footnotetext{%\n For a polynomial~$p(x)$ fit to measurements~$y_1, \\ldots, y_N$ in\n points~$x_1, \\ldots, x_N$ we consider the error $1 \/ N \\sum_{i=1}^N \\lvert\n y_i - p(x_i) \\rvert \/ y_i$. Note that the least-squares fitting minimizes\n not this sum of absolute relative errors but the sum of squared relative\n errors.\n}\n\nTo account for the influence of a kernel's size arguments on its runtime, which\nis not purely polynomial, we will represent the runtime in our models through\n\\definition{piecewise polynomials}. Details on such piecewise polynomial\nrepresentations and their automated generation are given in\n\\cref{sec:model:fit,sec:model:adaptive,sec:model:config}.\n\n\n\\subsection{Configuration Parameters}\n\nThe adaptive refinement is controlled by a total of eight\n\\definition{configuration parameters}. They allow us to control the model\naccuracy, but also affect the time spent on the required measurements. 
The\neight parameters regulate the model generation as follows:\n\\begin{itemize}\n \\item To represent the runtime of a kernel, the monomial basis for the\n fitted polynomials needs to at least cover the kernel's asymptotic\n complexity (i.e., its minimal \\flop-count). To better represent\n performance variations, however, the maximum degree of the monomials can\n be increased in each dimension (i.e., size argument). We refer to\n this increase as \\definition[overfitting:\\\\between 0\n and~2]{overfitting}; practical values are {\\em between 0 and~2}.\n\n \\item To fit a polynomial to a routine's runtime, the number of sampling\n points along each dimension needs to be at least one more than the\n corresponding polynomial degree. However, since this minimal number of\n points yields a polynomial that fits the measurements perfectly, we\n cannot use it to compute an approximation error. We hence increase the\n number of sampling points per dimension by at least one; to further\n improve the approximation accuracy, more points can be added. We\n refer to the total number of points added as\n \\definition[oversampling:\\\\between 1 and~10]{oversampling}; practical\n values are {\\em between 1 and~10}.\n\n \\item We introduced two alternatives to \\definition[distribution\n grid:\\\\Cartesian or Chebyshev]{distribute} sampling points on {\\em\n grids} that cover the domains of problem sizes: a {\\em Cartesian} grid\n and a {\\em Chebyshev} grid.\n\n \\item For each sampling point, we perform several \\definition[measurement\n repetitions:\\\\between 5 and~20]{measurement repetitions}; practical\n values are {\\em between 5 and~20}.\n\n \\item From the repetitions, we compute several runtime summary statistics:\n minimum, median, maximum, average, and standard deviation. 
One of these\n is selected as the \\definition[reference statistic:\\\\minimum or\n median]{reference statistic}; practical choices are the {\\em minimum and\n median}.\n\n \\item From the absolute relative errors in the reference statistic for all\n sampling points, we compute the \\definition[error measure:\\\\average,\n maximum, or 90th~percentile]{error measure}, which is these relative\n errors' {\\em average, maximum, or 90th~percentile}.\n\n \\item The first termination criterion for the adaptive refinement process is\n the approximation accuracy: The refinement stops when the computed\n error measure is below a \\definition[target error bound:\\\\between\n {\\SIlist[detect-all=true]{1;5}\\percent}]{target error bound}; practical\n values for this bound are {\\em between\n \\SIlist[detect-all=true]{1;5}\\percent}.\n\n \\item The second termination criterion is the size of the domains: The\n refinement stops when a new domain is smaller than a \\definition[minimum\n width:\\\\32 or~64]{minimum width} along all dimensions; typical values\n are {\\em 32 and~64}.\n\\end{itemize}\n\n\n\\subsection{Trade-Off and Configuration Selection}\n\nIn the following, we analyze the accuracy of our models and their generation\ncost, and select a configuration to generate the models for the performance\npredictions in \\cref{ch:pred}.\n\nWe consider the model generation for\n\\displaycall\\dtrsm{\n \\arg{side}L, \\arg{uplo}L, \\arg{transA}N, \\arg{diag}N,\n \\arg m{\\it\\color{blue}m}, \\arg n{\\it\\color{blue}n}, \\arg{alpha}{1.0},\n \\arg AA, \\arg{ldA}{5000}, \\arg BB, \\arg{ldB}{5000}\n}\ni.e., $\\dmB[height=.5] \\coloneqq \\dmAi[size=.5] \\dmB[height=.5]$ with\n$\\dmA[size=.5] \\in \\R^{m \\times m}$ and $\\dmB[height=.5] \\in \\R^{m \\times n}$,\nfor sizes $m \\in [24, 536]$ and $n \\in [24, 4152]$ on a \\sandybridge and a\n\\haswell using single-threaded \\openblas, \\blis, and \\mkl.\n\nFor each setup, our first step is to exhaustively measure 
\\dtrsm[LLNN]'s\nruntime 15~times in all points $(m, n)$ in the domain $[24, 536] \\times [24,\n4152]$ at which both~$m$ and~$n$ are multiples of~8---a total of \\num{504075}\nmeasurements. These measurements are used both as the basis for our model\ngeneration and to evaluate the model accuracy across the entire domain (in\ncontrast to the model generation, which can only evaluate the error in its\nsampling points).\n\n\\input{model\/tables\/config}\n\nWe generate models for all 2880~configurations obtained from combining the\nparameter values shown in \\cref{tbl:model:config}. These configurations result\nin a wide range of models with significantly different accuracies and generation\ncosts. To evaluate them, we quantify the \\definition{model error} as the\naverage relative error of the predicted minimum runtime~$p(\\x_i)$ with respect\nto the measured minimum~$y_i$ across all $N = \\num{33605}$ points~$\\x_i$ of the\ndomain:\n\\[\n \\text{model error} \\defeqq\n \\frac1N \\sum_{i=1}^N \\frac{\\lvert p(\\x_i) - y_i \\rvert}{y_i} \\enspace;\n\\]\nfurthermore, we define the \\definition{model cost} as the total runtime of the\nrequired measurements used as samples.\n\n\\input{model\/figures\/modelplots}\n\\input{model\/tables\/modelplots}\n\n\\begin{example}{Model accuracy}{model:acc}\n \\Cref{fig:model:modelplots} shows the structure and point-wise accuracy of\n the four models with minimum and maximum accuracy and cost for\n single-threaded \\openblas on a \\sandybridge; \\cref{tbl:model:modelplots}\n lists the corresponding configurations. Both the cheapest and the least\n accurate model cover the entire domain with only a single polynomial and\n consequently offer poor accuracy. 
The expensive and accurate models, on the other\n hand, subdivide the domain repeatedly, and thus find a better-fitting\n piecewise polynomial.\n\\end{example}\n\n\\input{model\/figures\/tradeoff}\n\nThe accuracy and cost of all 2880~generated models for each setup are presented\nin \\cref{fig:model:tradeoff:full}; in this plot, the preferable models with low\nerror and cost are found close to the origin. All setups share the same general\ntrend: Models with low accuracy are quite cheap, while models with high\naccuracy are more expensive. Hence we are faced with a\n\\definition[trade-off:\\\\accuracy vs.~cost]{trade-off between accuracy and cost}.\nHowever, the configuration selection is not straightforward: Models with\npractically identical accuracy are up to a factor of~16 apart in generation\ncost, and a cheap and accurate configuration for one setup may be neither for\nother setups. In the following, we describe how we approach the search space of\nall considered configurations, and identify a desirable default configuration\nthat we subsequently use to generate the models for all setups and kernels\nneeded for our performance predictions in \\cref{ch:pred}.\n\nBefore we begin to reduce our search space, we notice that on the \\haswellshort,\nthe models for both \\blis~(\\ref*{plt:hwblis:circ}) and\n\\mkl~(\\ref*{plt:hwmkl:circ}) are on average less than half as accurate as for\nthe other setups. The cause is a rather jagged performance behavior, which is\ndifficult to represent accurately. 
Hence, to identify a good default\nconfiguration, we consider only the \\sandybridgeshort~(\\ref*{plt:sbopen:circ},\n\\ref*{plt:sbblis:circ}, \\ref*{plt:sbmkl:circ}) and \\openblas on the\n\\haswellshort~(\\ref*{plt:hwopen:circ}).\n\nOur first step is to \\definition{prune by accuracy}: We discard any\nconfiguration that for any of the considered setups yields a model error larger\nthan $1.5\\times$ the minimum error for that setup; in other words, all\nremaining configurations generate models that are at most \\SI{50}{\\percent}\nless accurate than the most accurate model. This step reduces the number of\npotential configurations from 2880 to~163; all remaining configurations use an\noversampling value of~3 or higher, and a target error bound of~\\SI1\\percent.\n\\Cref{fig:model:tradeoff:within2err} shows the 163~remaining models' accuracy\nand cost.\n\n\\input{model\/tables\/tradeofffinal}\n\nOur second step is to similarly \\definition{prune by cost}: We discard any\nconfiguration that for any considered setup takes longer than the first quartile\nin generation time for that setup; in other words, the remaining models are all\nwithin the \\SI{25}{\\percent} that are generated the fastest. This step further\nreduces the number of potential configurations from 163 to~14, as shown in\n\\cref{fig:model:tradeoff:belowmedcost}.\n\nThe parameter values for the 14~remaining configurations are shown in\n\\cref{tbl:model:tradeoff:final}. For each parameter, we can find one value that\nis common to at least 8~of the 14~configurations (highlighted in {\\bf bold}).\nWe choose our \\definition{default configuration} by selecting this most common\nvalue for each parameter. It corresponds to line~(10) in\n\\cref{tbl:model:tradeoff:final} (highlighted in {\\bf\\color{blue}blue}), and is\nmarked for each setup in \\cref{fig:model:tradeoff:belowmedcost}. 
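In pseudocode, the two-stage pruning and the majority-vote choice of the default configuration can be sketched as follows (a simplified illustration with hypothetical data structures, not the actual generator code; the quartile is taken here over the surviving configurations):

```python
from collections import Counter

def select_default(configs, errors, costs):
    """Pick a default configuration from per-setup model errors and costs.

    configs: list of dicts mapping parameter name -> value
    errors, costs: errors[i][s] / costs[i][s] for configuration i, setup s
    """
    setups = range(len(errors[0]))

    # 1) Prune by accuracy: keep configurations whose model error stays
    #    within 1.5x of the minimum error for *every* considered setup.
    min_err = [min(e[s] for e in errors) for s in setups]
    keep = [i for i, e in enumerate(errors)
            if all(e[s] <= 1.5 * min_err[s] for s in setups)]

    # 2) Prune by cost: among the survivors, keep configurations within the
    #    first quartile of generation cost for every setup.
    def first_quartile(values):          # rough quartile: lower fourth
        values = sorted(values)
        return values[(len(values) - 1) // 4]
    q1 = [first_quartile([costs[i][s] for i in keep]) for s in setups]
    keep = [i for i in keep if all(costs[i][s] <= q1[s] for s in setups)]

    # 3) Default configuration: the most common value of each parameter
    #    among the remaining configurations.
    return {p: Counter(configs[i][p] for i in keep).most_common(1)[0][0]
            for p in configs[0]}
```

Applied to the 2880~configurations, step~1 corresponds to the reduction to~163, step~2 to the reduction to~14, and step~3 to the per-parameter majority vote.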
Note that it\nalso serves as a good compromise between accuracy and cost for\n\\blis~(\\ref*{plt:hwblis:circ}) and \\mkl~(\\ref*{plt:hwmkl:circ}) on the\n\\haswellshort, which were not included in the pruning process.\n\n\n\\subsection{Variations of the Default Configuration}\n\nWhile the default configuration was found to yield good accuracies at\nreasonable costs for almost all encountered kernels, it proves to be quite\nexpensive for kernels with \\definition[3D case (\\dgemm)]{three degrees of\nfreedom}, which for the predictions in \\cref{ch:pred} only applies to\n{\\em\\dgemm} with its three size arguments~\\code m, \\code n, and~\\code k. To\nreduce the modeling cost for this kernel, we adjust the default configuration\nby reducing the overfitting from~2\nto~0 and increasing the minimum width from~32 to~64.\n\nFurthermore, the performance of \\blas kernels becomes less smooth when we bring\n\\definition{multi-threading} into the picture. Hence, to avoid excessive\npartitioning as seen in \\cref{fig:model:modelplots:maxcost}, we increase the\nminimum width for all models to~64, and for \\dgemm to~256.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\chapter{Performance Modeling}\n\\chapterlabel{model}\n{\n \\input{model\/commands}\n\n \\input{model\/intro}\n\n \\section{Kernel Argument Analysis}\n \\label{sec:model:args}\n \\input{model\/args}\n\n \\subsection{Flag Arguments}\n \\label{sec:model:args:flag}\n \\input{model\/arg-flag}\n\n \\subsection{Scalar Arguments}\n \\label{sec:model:args:scalar}\n \\input{model\/arg-scalar}\n\n \\subsection{Leading Dimension Arguments}\n \\label{sec:model:args:ld}\n \\input{model\/arg-ld}\n\n \\subsection{Increment Arguments}\n \\label{sec:model:args:inc}\n \\input{model\/arg-inc}\n\n \\subsection{Size Arguments}\n \\label{sec:model:args:size}\n \\input{model\/arg-size}\n\n \\subsection{Data Arguments}\n \\label{sec:model:args:data}\n \\input{model\/arg-data}\n\n \\subsection{Summary}\n \\label{sec:model:args:sum}\n \\input{model\/arg-sum}\n\n 
\\section{Model Generation}\n \\label{sec:model:generation}\n \\input{model\/generation}\n\n \\subsection{Model Structure}\n \\label{sec:model:structure}\n \\input{model\/structure}\n\n \\subsection{Sample Distribution}\n \\label{sec:model:grids}\n \\input{model\/grids}\n\n \\subsection{Repeated Measurements and Summary Statistics}\n \\label{sec:model:stat}\n \\input{model\/stat}\n\n \\subsection{Relative Least-Squares Polynomial Fitting}\n \\label{sec:model:fit}\n \\input{model\/fit}\n\n \\subsection{Adaptive Refinement}\n \\label{sec:model:adaptive}\n \\input{model\/adaptive}\n\n \\section{Model Generator Configuration}\n \\label{sec:model:config}\n \\input{model\/config}\n\n \\section{Summary}\n \\label{sec:model:sum}\n \\input{model\/model-sum}\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Varying Problem Size}\n\\label{sec:pred:chol:n}\n\n\\input{pred\/figures\/cholperf}\n\nIn our first analysis, we use only one of the \\sandybridgeshort's 8~cores and\nvary the problem size between~$n = 56$ and~4152 in steps of~64 while keeping the\nblock size fixed at~$b = 128$. \\Cref{fig:pred:chol:time_perf} shows the runtime\nand performance of predictions and measurements for this setup side-by-side.\n(Since the red line \\legendline[very thick, darkred] at the top of the\nperformance plots indicates the processor's theoretical peak performance, such\nplots can also be interpreted as compute-bound efficiencies with\n\\SI0{\\percent}~at the bottom and \\SI{100}{\\percent}~at the top.) The\npredictions give a good idea of the algorithm behavior: While the runtime\nincreases cubically with the problem size~$n$, the performance is low for small\nmatrices and increases steadily towards \\SI{18}{\\giga\\flops\\per\\second}. At\nfirst sight, the predictions match the measurements well.\n\n\\input{pred\/figures\/cholerr}\n\nTo further study the accuracy of our predictions, the top half of\n\\cref{fig:pred:chol:err} presents the prediction errors. 
As one might expect,\n\\cref{fig:pred:chol:err:time} indicates that with increasing problem size, the\nmagnitude of the runtime prediction error increases for all summary\nstatistics---most notably for the maximum~(\\ref*{plt:max}). Since in contrast\nthe performance prediction error~(\\cref{fig:pred:chol:err:perf}) is not affected\nby the decomposition's cubic runtime, we instead observe the largest prediction\nerrors for the smallest problem size~$n = 56$. Furthermore, we find that the\nminimum performance prediction error~(\\ref*{plt:min}) seems to alternate between\ntwo separate levels: one around \\SI0{\\mega\\flops\\per\\second} and one close to\n\\SI{200}{\\mega\\flops\\per\\second}. This behavior, which is already somewhat\nvisible in \\cref{fig:pred:chol:perf:meas,fig:pred:chol:err:time}, is caused by\nmeasurement fluctuations as discussed in \\cref{sec:meas:fluct:longterm}.\n\nWe gain more insight from the prediction errors when we compare them to the\npredicted quantities. For this purpose, the bottom half of\n\\cref{fig:pred:chol:err} presents the relative runtime and performance\nprediction errors. The relative errors for these two metrics are almost\nidentical up to a change in sign---since the runtime is generally slightly\nunderestimated, the performance is somewhat overestimated. Focusing on the\nruntime in \\cref{fig:pred:chol:re:time}, we notice that the average standard\ndeviation ARE is~\\SI{194.70}\\percent~(\\ref*{plt:std}), which, as in\n\\cref{ex:pred:err}, exceeds the error of the other prediction statistics by far.\nFurthermore, the previously addressed measurement fluctuations are also clearly\nvisible in the maximum~(\\ref*{plt:max}) as variations with a magnitude\nof~\\SI{1.5}\\percent. 
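The near-mirror symmetry between the relative runtime and performance errors noted above follows directly from the definition of performance as \#flops\,\slash\,runtime; as a sketch, writing the predicted runtime as $\hat t = t\,(1 + \varepsilon)$ for a predicted performance~$\hat P$ and measured performance~$P$,

```latex
\[
  \frac{\hat P - P}{P}
    = \frac{t}{\hat t} - 1
    = \frac{1}{1 + \varepsilon} - 1
    = -\varepsilon + \varepsilon^2 - \ldots
    \approx -\frac{\hat t - t}{t} \enspace;
\]
```

to first order, a slightly underestimated runtime ($\varepsilon < 0$) thus overestimates the performance by the same relative amount.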
The minimum~(\\ref*{plt:min}), median~(\\ref*{plt:med}), and\nmean~(\\ref*{plt:avg}) AREs, on the other hand, quickly fall below~\\SI2{\\percent}\nfor matrices larger than~$n = 200$ and further below~\\SI1{\\percent}\nbeyond~$n \\approx 1000$; across all chosen problem sizes, the average AREs for\nthe minimum, median, and mean runtime are, respectively,\n\\SIlist{.78;.91;.90}\\percent.\n\nAmong the eight metrics presented in\n\\cref{fig:pred:chol:time_perf,fig:pred:chol:err}, we gained the most insight\nfrom 1)~the performance prediction (\\cref{fig:pred:chol:perf:pred}), which gives\na good idea of both the algorithm's performance and efficiency, and 2)~the\nrelative runtime prediction error (\\cref{fig:pred:chol:re:time}), which provides\nnot only an accuracy measure independent of the operation, the algorithm, and\nthe actual performance, but also indicates whether the runtime is under- or\noverestimated. Hence, we use these two types of plots in our following\nanalyses.\n\n\n\\subsection{Varying Block Size}\n\\label{sec:pred:chol:b}\n\n\\input{pred\/figures\/cholnb}\n\nIn our next analysis, we fix the problem size to~$n = 3000$ and vary the block\nsize between~$b = 24$ and~536 in steps of~8. \\Cref{fig:pred:chol:b} presents\nthe performance prediction and the relative runtime prediction error for this\nscenario using single-threaded \\openblas on the \\sandybridgeshort.\n\nThe performance prediction (\\cref{fig:pred:chol:b:perf}) exhibits the typical\ntrade-off for any blocked algorithm: While for both small and large block sizes\nthe algorithm attains rather poor performance, in between it reaches up to\n\\SI{17.91}{\\giga\\flops\\per\\second}, which corresponds to an efficiency\nof~\\SI{85.10}\\percent. 
The cause for this trade-off and the selection of block\nsizes are addressed in detail in \\cref{sec:pred:b}.\n\nCompared to our previous performance predictions\n(\\cref{fig:pred:chol:perf:pred}), \\cref{fig:pred:chol:b:perf} exhibits a far\nwider spread of the summary statistics for large block sizes. In particular,\nthe predicted minimum performance~(\\ref*{plt:min}) drops drastically, which\nimmediately causes the mean performance~(\\ref*{plt:avg}) to decrease and the\npredicted standard deviation~(\\ref*{plt:stdf}) to increase enormously.\n\nThe relative runtime prediction error (\\cref{fig:pred:chol:b:re}) indicates that\nthe predicted performance fluctuations are not present in the performance\nmeasurements: The maximum and mean relative errors (\\ref*{plt:max} and\n\\ref*{plt:avg}) increase drastically for large block sizes, suggesting that the\nmodel generation was influenced by large outlier measurements. (A repetition of\nthe generation process would likely encounter different outliers and distort\nthese statistics for other block sizes.) The minimum~(\\ref*{plt:min})\nand median~(\\ref*{plt:med}), on the other hand, are, with few exceptions,\npredicted within~\\SI1\\percent; their average prediction AREs are\n\\SI{.36}{\\percent} (minimum \\ref*{plt:min}) and \\SI{.42}{\\percent} (median\n\\ref*{plt:med}).\n\n\n\\subsection{Varying Problem Size and Block Size}\n\\label{sec:pred:chol:nb}\n\n\\input{pred\/figures\/cholheatmap}\n\nIf we vary both the problem size~$n$ and the block size~$b$, we can visualize\nthe runtime prediction ARE as a set of heat-maps as shown in\n\\cref{fig:pred:chol:heatmap}. Note that these plots are based on a total of\n\\num{39690}~measurements of the algorithm's runtime (65~problem sizes, $\\approx\n65$~block sizes, 10~repetitions) that took over 4~hours. 
The performance models\nfor the kernels needed for the predictions (\\dpotf[L]2, \\dtrsm[RLTN], and\n\\dsyrk[LN]), on the other hand, were generated in just under 10~minutes and\nproduced our predictions in under \\SI{20}\\second.\n\nThe standard deviation ARE is once again too large to fit the chosen scale and\nis hence not shown. Furthermore, as already seen in \\cref{fig:pred:chol:b}, the\nmaximum prediction becomes rather inaccurate for large~$n$ and~$b$, which also\nhas a negative impact on the mean prediction. On the other hand, both the\nminimum and median predictions are overall quite accurate with an average ARE of\nonly~\\SI{.45}\\percent.\n\nSince in the following we compare multiple alternative algorithms and\nhardware\/software setups, we limit our focus to a single statistic.\nWhile in the previous analysis the runtime minimum and median were predicted with\nequivalent accuracy, in practice the expected performance is better represented\nby the median runtime.\\footnote{%\n In scenarios other than our considered single-node computations, different\n measures might be preferable; e.g., the 90th~percentile runtime.\n} Hence, from now on we use the \\definition[accuracy\nmeasure: relative median runtime prediction error]{relative median runtime\nprediction error}~\\Q t{med}{RE} as our {\\em prediction accuracy measure}.\n\n\n\\subsection{Other Data-Types}\n\\label{sec:pred:chol:dt}\n\n\\input{pred\/tables\/cholfp}\n\\input{pred\/figures\/cholfp}\n\nSo far, we have considered the Cholesky decomposition of real double-precision\nmatrices; however, the same algorithm is also applicable to other data-types.\nFor the four de-facto standard numerical data-types (real and complex\\footnote{%\n For the complex cases, the Cholesky decomposition is of the form $L L^H\n \\coloneqq A$, where $A$~must be Hermitian positive definite (HPD).\n} floating-point numbers in single- and double-precision)\n\\cref{tab:pred:chol:fp} summarizes the algorithm's \\blas and \\lapack 
kernels,\nand \\Cref{fig:pred:chol:fp} presents our models' performance predictions and\ntheir accuracy. (For each data-type, we generated a separate set of performance\nmodels.)\n\nIn the performance predictions (\\cref{fig:pred:chol:fp:perf}), we observe that\nthe real double-precision version~(\\ref*{plt:dt:d}) is most efficient (with\nrespect to its theoretical peak performance); this was to be expected because\n\\openblas is most optimized for this data-type. In contrast, it is somewhat\nsurprising that, while single-precision complex~(\\ref*{plt:dt:c}) is noticeably\nmore performant than single-precision real~(\\ref*{plt:dt:s}), double-precision\ncomplex~(\\ref*{plt:dt:z}) does not exceed an efficiency of~\\SI{50}\\percent.\n\nAlthough the algorithm's performance for the four data-types differs\nsignificantly, \\cref{fig:pred:chol:fp:perf} reveals that our models predict the\nruntime for all of them equally well. Moreover, for the comparatively\ninefficient double-precision complex variant~(\\ref*{plt:dt:z}), the prediction\nis already notably accurate for small problem sizes below~$n = 1000$.\n\nWith equally accurate predictions demonstrated for four data-types, in the\nfollowing we focus on real operations in double-precision.\n\n\n\\subsection{Multi-Threaded \\blas}\n\\label{sec:pred:chol:mt}\n\n\\input{pred\/figures\/cholp}\n\nFinally, we consider how multi-threading (through \\openblas) impacts the\nalgorithm's performance and our predictions' accuracy. For this purpose,\n\\cref{fig:pred:cholp} presents the predicted performance of the Cholesky\ndecomposition and the prediction accuracy with 1, 2, 4, and 8~threads on the\n8-core \\sandybridgeshort. 
(For each of these four levels of parallelism, a\nseparate set of performance models was generated.)\n\nThe predictions show that, while the performance grows with the number of\nthreads, the efficiency decreases from~\\SI{87.74}{\\percent} with one thread to\nat most~\\SI{70.78}{\\percent} with eight threads. Furthermore, the\nperformance curves become less smooth with increased parallelism.\n\nConsidering our predictions' accuracy, we notice that for small problem sizes\nbelow~$n = 500$, the prediction ARE increases significantly when more threads\nare added. Beyond this point, however, the predictions for 1~(\\ref*{plt:nt:1})\nand 2~threads~(\\ref*{plt:nt:2}) are both highly accurate with an average ARE\nof~\\SI{.46}{\\percent}; the predictions for 4~(\\ref*{plt:nt:4}) and\n8~threads~(\\ref*{plt:nt:8}) are slightly less accurate and the AREs fluctuate\naround~\\SI1\\percent. Note that the large fluctuations within the ARE for the\nmulti-threaded algorithms are caused by the combination of the block size~$b =\n128$ and the chosen problem sizes in steps of~64. While with\n8~threads~(\\ref*{plt:nt:8}) these fluctuations are represented by our\npredictions to some degree, with 2~(\\ref*{plt:nt:2}) and\n4~threads~(\\ref*{plt:nt:4}), they are most striking for large problem sizes,\nwhere our models do not predict such fluctuations.\n\n\n\\subsection{Summary}\n\\label{sec:pred:chol:sum}\n\nWe studied the blocked Cholesky decomposition algorithm~3 on a \\sandybridge\nusing \\openblas with varying problem and block sizes, data-types, and kernel\nparallelism. 
We analyzed this algorithm's measured and predicted runtime and\nperformance to evaluate the accuracy of our predictions, and selected the\nrelative median runtime prediction error~\\Q t{med}{RE} as our primary accuracy\nmeasure.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Single-Threaded \\blas}\n\\label{sec:pred:acc:st}\n\nWe begin with a study of the single-threaded prediction accuracy with \\lapack's\ndefault block size ($b = 64$, except for \\dgeqrf with~$b = 32$). While these\nare generally sub-optimal configurations and often even sub-optimal algorithms\nfor the performed operations, this configuration is unfortunately still\nencountered frequently in application codes that use the reference \\lapack\nimplementation. As such, it forms a quite canonical reference for the\nevaluation of our predictions.\n\n\\input{pred\/figures\/accst}\n\\input{pred\/tables\/accst}\n\n\\Cref{fig:pred:lapack:st} presents the relative runtime prediction error~\\Q\nt{med}{RE} for this scenario. For all algorithms and setups, our\npredictions are mostly within \\SI5{\\percent}~of the measured runtime, and in\nmany situations considerably closer. The runtime prediction ARE averaged across\nall problem sizes for each routine and setup is summarized in\n\\cref{tbl:pred:acc:st}: It ranges from~\\SIrange{.71}{3.93}\\percent, and its\naverage and median are, respectively, \\SIlist{1.91;1.69}\\percent. 
Overall, the\npredictions are slightly more accurate on the \\sandybridge (average $\\Q\nt{med}{ARE} = \\SI{1.66}\\percent$) with the lowest average $\\Q t{med}{ARE} =\n\\SI{1.22}\\percent$ for \\openblas~(\\ref*{plt:sbopen}); on the \\haswell (average\n$\\Q t{med}{ARE} = \\SI{2.16}\\percent$), the predictions are least accurate for\n\\mkl~(\\ref*{plt:hwmkl}) with an average of $\\Q t{med}{ARE} = \\SI{2.26}\\percent$.\n\nMost routines are predicted equally well (with an average \\Q t{med}{ARE} around\n\\SI{1.5}\\percent) with two exceptions: \\dsygst[1L] (average $\\Q t{med}{ARE} =\n\\SI{2.63}\\percent$) and \\dgeqrf (average $\\Q t{med}{ARE} = \\SI{2.87}\\percent$).\n\\begin{itemize}\n \\item For the two-sided linear system solver \\dsygst,\n \\cref{fig:pred:accst:dsygst} reveals that for most setups, the\n predictions consistently underestimate the algorithm runtime for large\n problem sizes~$n$.\n\n A quick calculation shows that this effect is related to the size of the\n last-level cache~(L3): On the \\haswellshort, the problem emerges\n beyond~$n \\approx 2000$ at which point the two operands~\\dm A (symmetric\n in lower-triangular storage) and~\\dm[lower]L\\lowerpostsep take up\n $\\SIvar{2 \\times \\frac{2000^2}2}\\doubles \\approx\n \\SI{30.52}{\\mebi\\byte}$---slightly more than the L3~cache of\n \\SI{30}{\\mebi\\byte}. On the \\sandybridgeshort with \\SI{20}{\\mebi\\byte}\n of L3~cache, the effect is accordingly already visible beyond~$n \\approx\n 1600$.\n\n The cause for the underestimation of large problems is as follows: Our\n models are based on repeated kernel measurements, which operate on\n cached (``warm'') data as long as all of the kernel's arguments fit in\n the cache. 
However, each traversal step of \\dsygst[1L]\n (\\cref{alg:dsygst}) uses two separate kernels (namely \\dsyrk[LN] and\n \\dtrsm[LLNN]) that operate on the trailing parts of \\dm A and\n \\dm[lower]L\\lowerpostsep{}---since these do not fit in the cache\n simultaneously, they are mutually evicted by these kernels, and hence\n have to be loaded from main memory repeatedly (``cold'' data). To\n summarize, our models estimate fast operations on cached data, while in\n the algorithm the operations are slower due to cache misses.\n\n A more detailed study of caching effects within blocked algorithms and\n attempts to account for them are presented in \\cref{ch:cache}.\n\n Note that only \\dsygst is affected by caching effects on this scale\n because all other routines involve only one dense operand.\n\n \\item For the QR~decomposition \\dgeqrf, \\cref{fig:pred:accst:dgeqrf} reports\n that the runtime for almost all setups is consistently\n underestimated---especially for small problems.\n\n The cause is the transposed matrix copy and addition (see\n \\cref{alg:dgeqrf}), which account for about~\\SI4{\\percent} of the\n runtime for small problems ($n \\approx 250$) and \\SI1{\\percent} for\n large problems ($n \\approx 4000$): The copy, performed by a sequence of\n $b = 32$~\\dcopy{}s, is underestimated by~$2\\times$ to~$7\\times$ because\n our models do not account for caching effects; the addition, which is\n inlined as two nested loops, is not accounted for at all.\n\\end{itemize}\n\n\n\\subsection{Multi-Threaded \\blas}\n\\label{sec:pred:acc:mt}\n\nWe study the multi-threaded prediction accuracy for the same six \\lapack\nalgorithms using all available cores of the processors, i.e., 8~threads on the\n\\sandybridge and 12~threads on the \\haswell. 
In contrast to the single-threaded\npredictions, we use a block size of~$b = 128$ for all algorithms---while this\nconfiguration is certainly not optimal for all algorithms and problem sizes, it\ngenerally yields better performance than \\lapack's default values.\n\n\\input{pred\/figures\/accmt}\n\\input{pred\/tables\/accmt}\n\n\\Cref{fig:pred:lapack:mt} presents the relative runtime prediction errors~\\Q\nt{med}{RE} for this scenario, and \\cref{tbl:pred:acc:mt} summarizes their\naveraged AREs~\\Q t{med}{ARE}. Compared to the single-threaded case, the\nprediction errors are across the board around $2.5\\times$~larger with a total\naverage of $\\Q t{med}{ARE} = \\SI{4.85}\\percent$. The predictions are roughly\nequally accurate across the two architectures and the two \\blas implementations.\n\nConsidering \\cref{fig:pred:lapack:mt}, we note fluctuation patterns in the\nprediction errors by up to~\\SI{10}\\percent, most notably for \\dsygst[1L] and\n\\dtrtri[LN] using \\mkl on the \\haswellshort~(\\ref*{plt:hwmkl}). As observed in\n\\cref{sec:pred:chol:mt}, these fluctuations are an artefact of the block size~$b\n= 128$ interacting with the considered problem sizes in steps of~64: Between\nconsecutive problem sizes, the remaining matrix portions in the last step of the\nmatrix traversal alternate between widths~56 and~120.\n\nAs in the single-threaded case, the QR~decomposition's runtime is\nunderestimated by on average~\\SI{8.00}\\percent, due to the \\dcopy{}s and the\ninlined matrix addition. Since especially the latter cannot make any use of the\nmulti-threaded parallelism, their impact increases significantly with the number\nof available cores.\n\nFurthermore, several individual algorithms and setups are consistently under- or\noverestimated: e.g., \\openblas on the \\sandybridge~(\\ref*{plt:sbopen}) for\n\\dlauum[L] and \\dpotrf[L]. 
These problems arise from the multi-threaded\nimplementations of \\dgemm, whose irregular performance is not well represented\nin our models: Since \\blas implementations distribute computations among\nthreads along a certain dimension of the operation, for a small dimension (such as\nthe block size), only a subset of the available threads is used. When the small\ndimension is increased, more threads are activated and the performance increases\nsuddenly.\n\n\n\\subsection{Summary}\n\\label{sec:pred:acc:sum}\n\nThis section has shown that across experiments on two processor architectures,\nthree \\blas implementations, and six blocked \\lapack algorithms, our models\nyield accurate predictions that are on average within~\\SI{1.91}{\\percent}\n(single-threaded) and \\SI{4.85}{\\percent} (multi-threaded) of reference\nmeasurements. Encouraged by these accuracy results, the following sections use\nperformance predictions to target our main goals of algorithm selection and\nblock-size optimization.\n\n\\section{Performance Prediction}\n \\label{sec:pred:pred}\n \\input{pred\/pred}\n\n \\section{Accuracy Quantification}\n \\label{sec:pred:acc}\n \\input{pred\/acc}\n\n \\section[Accuracy Case Study: Cholesky Decomposition]\n {Accuracy Case Study:\\newline Cholesky Decomposition}\n \\label{sec:pred:chol}\n \\input{pred\/chol}\n\n \\section[Accuracy Study: Blocked \\lapack Algorithms]\n {Accuracy Study:\\newline Blocked \\lapack Algorithms}\n \\label{sec:pred:lapack}\n \\input{pred\/lapack}\n\n \\section{Algorithm Selection}\n \\label{sec:pred:var}\n \\input{pred\/var}\n\n \\subsection{Cholesky Decomposition}\n \\label{sec:pred:var:chol}\n \\input{pred\/varchol}\n\n \\subsection{Triangular Inversion}\n \\label{sec:pred:var:trinv}\n \\input{pred\/vartrinv}\n\n \\subsection{Sylvester Equation Solver}\n \\label{sec:pred:var:sylv}\n \\input{pred\/varsylv}\n\n \\subsection{Summary}\n \\label{sec:pred:var:sum}\n \\input{pred\/varsum}\n\n \\section{Block Size Optimization}\n 
\\label{sec:pred:b}\n \\input{pred\/b}\n\n \\subsection{Cholesky Decomposition}\n \\label{sec:pred:b:chol}\n \\input{pred\/bchol}\n\n \\subsection{Triangular Inversion}\n \\label{sec:pred:b:trinv}\n \\input{pred\/btrinv}\n\n \\subsection{\\lapack Algorithms}\n \\label{sec:pred:b:lapack}\n \\input{pred\/blapack}\n\n \\section{Summary}\n \\label{sec:pred:conclusion}\n \\input{pred\/conclusion}\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Algorithms}\nThe solution to the triangular Sylvester equation is computed by traversing \\dmC\nfrom the bottom left to the top right. However, in contrast to the previous\noperations, this traversal does not need to follow \\dmC's diagonal; in fact \\dmC\ncan be traversed in various ways: Two algorithms traverse \\dmC\nvertically, two horizontally (using $3 \\times 1$ and $1 \\times 3$ partitions),\nand 14~diagonally (exposing $3 \\times 3$ sub-matrices), making a total of\n18~algorithms. Furthermore, as detailed in the following, the Sylvester\nequation requires two layers of blocked algorithms, resulting in a total of\n\\definition[Sylvester equation:\\\\64~``complete'' algorithms]{64~``complete''\nalgorithms}.\n\n\\input{pred\/figures\/sylv1dalgs}\n\n\\Cref{algs:sylv1d} presents the four algorithms that traverse \\dmC vertically or\nhorizontally, thereby exposing $3 \\times 1$ or $1 \\times 3$ sub-matrices; each\nof these algorithms consists of one call to \\dgemm[NN] and the solution of a\nsub-problem (another triangular Sylvester equation). To obtain a ``complete''\nalgorithm, two of these algorithms with orthogonal traversals are combined---the\nfirst traverses the full~\\dmC and invokes the second to solve the sub-problem in\neach iteration; the second, in turn, solves its small $b \\times b$ sub-problem\nusing \\lapack's unblocked \\dtrsyl[NN1]. 
E.g., one can use algorithm~$m1$ to\ntraverse \\dmC vertically and in each step apply algorithm~$n2$ to traverse the\nmiddle panel~\\dm[mat11, height=.2, width=.8]{C_1} horizontally. We call the\nresulting ``complete'' algorithm~$m1n2$, and see that eight such combinations\nare possible: $m1n1$, $m1n2$, $m2n1$, $m2n2$, $n1m1$, $n1m2$, $n2m1$,\nand~$n2m2$. Note that in principle the block sizes for the two layered blocked\nalgorithms can be chosen independently; however, we limit our study to a single\nblock size for both layers.\n\n\\input{pred\/figures\/sylv2dalgs}\n\nBeyond the combination of the vertically and horizontally traversing algorithms\nabove, an additional 14~algorithms traverse the matrix diagonally (with\npotentially different block sizes~$b_m$ and~$b_n$ for dimensions~$m$ and~$n$),\nand operate on a set of $3 \\times 3$ sub-matrices in each iteration;\n\\cref{algs:sylv2d} presents a sample of two of these algorithms (all\n14~algorithms are found in \\libflame~\\cite{libflameweb}). Each algorithm\nconsists of a sequence of \\dgemm[NN]{}s and three solutions of sub-problems that\nare also triangular Sylvester equations. While the sub-problem involving\n\\dm[mat11, size=.5]{B_{11}} of size $b_m \\times b_n$ is directly solved by the\nunblocked \\dtrsyl[NN1], the other two involve potentially large yet thin panels\nof~\\dmC. 
Complete algorithms are constructed by solving each of these\nsub-problems with an appropriate vertical or horizontal traversal\nalgorithm.\footnote{%\n Setting one of the block sizes of a diagonally traversing algorithm to the\n corresponding matrix size results in one of the vertical or horizontal\n traversal algorithms.\n} Since each of the 14~algorithms has\ntwo such sub-problems, for each of which we can choose from two algorithms, we\nend up with a total of $14 \cdot 2 \cdot 2 = 56$~possible combinations.\nTogether with the eight combinations of only vertical and horizontal traversal\nalgorithms, this results in a grand total of 64~different ``complete'' blocked\nalgorithms.\n\n\n\subsubsection{Algorithm Selection}\n\n\input{pred\/figures\/varsylv}\n\n\cref{fig:pred:var:sylv} presents performance predictions and measurements for\nthe Sylvester equation solver for problem sizes between~$n = 56$ and~4152 in\nsteps of~64 and block size~$b = 64$ on a \haswell using \openblas. Since the\nexecutions for this setup take between 40~minutes and 2~hours for each\nalgorithm, we only measured the eight algorithms based exclusively on orthogonal\nmatrix traversals. Our predictions, which are generated up to\n$1500\times$~faster at roughly \SI5{\second}~per algorithm, indicate that in\nterms of performance these eight algorithms are evenly spread across the entire\nrange of the 64~``complete'' algorithms.\n\nFor the single-threaded scenario, the predictions in\n\cref{fig:pred:var:sylv:pred:1} suggest that\nalgorithms~$n2m2$~(\ref*{plt:sylvn2m2}) and $m1n1$~(\ref*{plt:sylvm1n1}) are,\nrespectively, the fastest and slowest, and differ in performance\nby~\SI{9.99}\percent. The measurements in \cref{fig:pred:var:sylv:meas:1}\nconfirm that, while algorithm~$n2m2$~(\ref*{plt:sylvn2m2}) is indeed the\nfastest, algorithm~$n1m1$~(\ref*{plt:sylvn1m1}) is the slowest. 
While the\nperformance of algorithms~$m1n1$~(\ref*{plt:sylvm1n1}) and\n$n1m1$~(\ref*{plt:sylvn1m1}) is predicted to be almost identical, the\nmeasurements show that $m1n1$~(\ref*{plt:sylvm1n1}) is in fact up to\n\SI{3.00}{\percent} faster than $n1m1$~(\ref*{plt:sylvn1m1}). Furthermore,\nwhile the remaining algorithms are correctly placed between the fastest and the\nslowest, they are not accurately ranked.\n\nThe predictions and measurements for the multi-threaded scenario in\n\cref{fig:pred:var:sylv:pred:12,fig:pred:var:sylv:meas:12} are at first sight\nsurprising: Compared to the single-threaded case the attained performance is\nconsiderably lower. For matrices of size~$n = 4000$, the algorithms reach\nroughly \SI8{\giga\flops\per\second}, which corresponds to\nmerely~\SI{1.67}{\percent} of the processor's 12-core peak performance of\n\SI{480}{\giga\flops\per\second} (without \turboboost). An analysis revealed\nthat the source of the drastic increase in runtime is the \blasl1 kernel \dswap,\nwhich the unblocked \dtrsyl\footnote{%\n Technically within \code{dlasy2}, which is called from \dtrsyl.\n} uses to swap two vectors of length~4: Although the workload for this\noperation is tiny, with multiple threads \openblas (version~0.2.15) activates\nits parallelisation, which for a copy operation on only~\SI{64}{\byte}\nintroduces an overhead of over~$200\times$ the kernel's single-threaded runtime.\n(The problem was subsequently fixed in \openblas version~0.2.16 (March 2016) and\nis not present in \mkl.)\n\nWhile the multi-threaded predictions for all 64~algorithms indicate virtually\nidentical performance and thus do not allow a meaningful performance ranking,\nthey support the crucial revelation that using \openblas~0.2.15 the triangular\nSylvester equation is solved considerably faster on a single core than on\n12~cores without exception.\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\section{Algorithm Generation}\n 
\\label{sec:tensor:alggen}\n \\input{tensor\/alggen}\n\n \\section{Runtime Prediction}\n \\label{sec:tensor:pred}\n \\input{tensor\/pred}\n\n \\subsection{Example Contraction: \\texorpdfstring{$C_{abc} \\coloneqq\n A_{ai} B_{ibc}$}{C\\_abc := A\\_ai B\\_ibc}}\n \\label{sec:tensor:extc}\n \\input{tensor\/predex}\n\n \\subsection{Repeated Execution}\n \\label{sec:tensor:repeat}\n \\input{tensor\/repeat}\n\n \\subsection{Operand Access Distance}\n \\label{sec:tensor:accdist}\n \\input{tensor\/accdist}\n\n \\subsection{Cache Prefetching}\n \\label{sec:tensor:prefetch}\n \\input{tensor\/prefetch}\n\n \\subsection{Prefetching Failures}\n \\label{sec:tensor:prefetchfail}\n \\input{tensor\/prefetchfail}\n\n \\subsection{First Loop Iterations}\n \\label{sec:tensor:firstiter}\n \\input{tensor\/firstiter}\n\n \\section{Results}\n \\label{sec:tensor:results}\n \\input{tensor\/results}\n\n \\section{Summary}\n \\label{sec:tensor:conclusion}\n \\input{tensor\/conclusion}\n}\n\n\n\n\n\n\n\\subsection{Changing the Setup for \\texorpdfstring{$C_{abc} \\coloneqq A_{ai}\nB_{ibc}$}{C\\_abc := A\\_ai B\\_ibc}}\n\\label{sec:ai_ibc2}\n\n\\input{tensor\/figures\/ai_ibc2}\n\nWe consider the previously studied contraction with an entirely different setup:\nWe use $a = b = c = 128$ and $i = 8, \\ldots, 1000$ in steps of~8 on an\n\\ivybridge with single-threaded \\mkl. For this scenario,\n\\cref{fig:tensor:ai_ibc2} presents the performance predictions and measurements\nfor all 36~algorithms (see \\cref{sec:tensor:extc}). 
Although everything,\nranging from the problem sizes to the machine and \blas library, was changed in\nthis setup, the predictions are of equivalent quality and our tool correctly\ndetermines that the \dgemm-based algorithms (\ref*{plt:ai_ibc:c_gemm},\n\ref*{plt:ai_ibc:b_gemm}) not only perform best and equally well but also reach\nover~\SI{75}{\percent} of the \ivybridgeshort's theoretical peak performance of\n\SI{28.8}{\giga\flops\per\second}.\n\n\n\subsection{Vector Contraction: \texorpdfstring{$C_a \coloneqq A_{iaj}\nB_{ji}$}{C\_a := A\_iaj B\_ji}}\n\label{sec:noblas3}\n\n\input{tensor\/algs\/iaj_ji}\n\input{tensor\/figures\/iaj_ji}\n\nFor certain contractions (e.g., those involving vectors), \dgemm cannot be\nused as a compute kernel, and algorithms can only be based on \blasl1 or~2\nkernels. One such scenario is encountered in the contraction $C_a \coloneqq\nA_{iaj} B_{ji}$, for which our generator yields 8~algorithms:\n\begin{itemize}\n \item 4 \ddot-based:\n \tensoralgname{aj}{dot}~(\ref*{plt:iaj_ji:aj_dot}),\n \tensoralgname{ja}{dot}~(\ref*{plt:iaj_ji:ja_dot}),\\\n \tensoralgname{ai}{dot}~(\ref*{plt:iaj_ji:ai_dot}),\n \tensoralgname{ia}{dot}~(\ref*{plt:iaj_ji:ia_dot});\n \item 2 \daxpy-based:\n \tensoralgname{ij}{axpy}~(\ref*{plt:iaj_ji:ij_axpy}),\n \tensoralgname{ji}{axpy}~(\ref*{plt:iaj_ji:ji_axpy}), and\n \item 2 \dgemv-based (see \cref{algs:iaj_ji}):\n \tensoralgname j{gemv}~(\ref*{plt:iaj_ji:j_gemv}),\n \tensoralgname{i'}{gemv}~(\ref*{plt:iaj_ji:i'_gemv}).\n\end{itemize}\nNote that since the last algorithm operates on slices \tind A{i,:,:}, which do not\nhave a contiguously-stored dimension, a \code{copy} kernel (indicated by the\napostrophe in the algorithm name) is required before each \dgemv[N]\n(\cref{alg:iaj_ji:i'-gemv}).\n\n\Cref{fig:tensor:iaj_ji} presents the predicted and measured performance for\nthese algorithms. 
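As an illustration of the $j$-\dgemv algorithm (a NumPy sketch of our own, not the generated code), the contraction is sliced into one matrix--vector product per value of $j$, with a direct index contraction as reference:

```python
import numpy as np

def contract_j_gemv(A, B):
    """C_a = sum_{i,j} A_{iaj} B_{ji}, computed as one GEMV per value of j.

    Each slice A[:, :, j] is a matrix over (i, a); multiplying its
    transpose with the vector B[j, :] corresponds to one dgemv call.
    """
    I, a, J = A.shape
    C = np.zeros(a)
    for j in range(J):
        C += A[:, :, j].T @ B[j, :]     # gemv: (a x i) times (i,)
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6, 5))      # indices (i, a, j)
B = rng.standard_normal((5, 4))         # indices (j, i)
C = contract_j_gemv(A, B)
C_ref = np.einsum('iaj,ji->a', A, B)    # reference contraction
```

NumPy hides the storage-order constraints of the \blas interface here; in the $i'$-\dgemv variant the slices \tind A{i,:,:} would first have to be copied so that one dimension is stored contiguously, which is exactly the copy overhead discussed above.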
Our predictions clearly identify the fastest algorithm\n\tensoralgname j{gemv}~(\ref*{plt:iaj_ji:j_gemv}) across the board.\nFurthermore, the next group of four algorithms is also correctly recognized, and\nthe low performance of the second \dgemv[N]-based algorithm\n\tensoralgname{i'}{gemv}~(\ref*{plt:iaj_ji:i'_gemv}) (due to the overhead of the\ninvolved copy operation) is correctly predicted as well.\n\n\n\subsection{Challenging Contraction: \texorpdfstring{$C_{abc} \coloneqq A_{ija}\nB_{jbic}$}{C\_abc := A\_ija B\_jbic}}\n\label{sec:ijb_jcid}\n\n\input{tensor\/algs\/ijb_jcid}\n\nWe now turn to a more complex example inspired by space-time continuum\ncomputations in the field of general relativity~\cite{generalrelativity}: $C_{abc}\n\coloneqq A_{ija} B_{jbic}$. For this contraction, we generated a total of\n176~different algorithms:\n\begin{itemize}\n \item 48 \ddot-based~(\ref*{plt:ijb_jcid:dot}),\n \item 72 \daxpy-based~(\ref*{plt:ijb_jcid:axpy}),\n \item 36 \dgemv-based~(\ref*{plt:ijb_jcid:gemv}),\n \item 12 \dger-based~(\ref*{plt:ijb_jcid:ger}), and\n \item 8 \dgemm-based:\\\n \tensoralgname{cj'}{gemm}~(\ref*{plt:ijb_jcid:cj'_gemm}),\n \tensoralgname{jc'}{gemm}~(\ref*{plt:ijb_jcid:jc'_gemm}),\n \tensoralgname{ci'}{gemm}~(\ref*{plt:ijb_jcid:ci'_gemm}),\n \tensoralgname{i'c}{gemm}~(\ref*{plt:ijb_jcid:i'c_gemm}),\\\n \tensoralgname{bj'}{gemm}~(\ref*{plt:ijb_jcid:bj'_gemm}),\n \tensoralgname{jb'}{gemm}~(\ref*{plt:ijb_jcid:jb'_gemm}),\n \tensoralgname{bi'}{gemm}~(\ref*{plt:ijb_jcid:bi'_gemm}),\n \tensoralgname{i'b}{gemm}~(\ref*{plt:ijb_jcid:i'b_gemm}).\n\end{itemize}\nAll \dgemm-based (see \cref{algs:ijb_jcid}) and several of the \dgemv-based\nalgorithms involve copy operations to ensure that each matrix has a\ncontiguously-stored dimension as required by the \blas interface. 
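To illustrate the role of these copies, here is a hedged NumPy sketch of one \dgemm-based variant (our reconstruction, not the generated code): for each value of $c$, the corresponding slice of $B$ is transpose-copied so that the contracted index pair $(i, j)$ forms a single fused dimension, after which one \dgemm computes a full slice of $C$.

```python
import numpy as np

def contract_c_gemm(A, B):
    """C_abc = sum_{i,j} A_{ija} B_{jbic}: per value of c, one
    transpose-copy of the B slice plus one GEMM over the fused (i, j)."""
    I, J, a = A.shape
    _, b, _, c = B.shape
    A2 = A.reshape(I * J, a)                      # rows ordered as (i, j)
    C = np.empty((a, b, c))
    for k in range(c):
        # copy: reorder B[:, :, :, k] from (j, b, i) to (i, j, b)
        B2 = np.ascontiguousarray(B[:, :, :, k].transpose(2, 0, 1))
        C[:, :, k] = A2.T @ B2.reshape(I * J, b)  # gemm over fused (i, j)
    return C
```

The copy is linear in the slice size while the \dgemm is the dominant cost for large free indices, which is why the \dgemm-based variants remain the fastest despite the extra data movement.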
Once again,\nwe consider a challenging scenario where both contracted indices are of size $i\n= j = 8$ and the free indices $a = b = c$ vary between~8 and~1000.\n\n\\input{tensor\/figures\/ijb_jcid}\n\n\\Cref{fig:tensor:ijb_jcid:pred} presents the predicted performance of the\n176~algorithms, where algorithms based on \\blasl1 and~2 are grouped by kernel.\nEven with the copy operations, the \\dgemm-based algorithms are the fastest.\nHowever, within these 8~algorithms, the performance differs by more\nthan~\\SI{20}\\percent. \\Cref{fig:tensor:ijb_jcid:meas} compares our predictions\nwith corresponding performance measurements\\footnote{%\n Slow tensor contraction algorithms were stopped before reaching the largest\n problem size by limiting the total measurement time per algorithm\n to~\\SI{15}\\min.\n}: Among the \\dgemm-based algorithms, our predictions clearly separate the bulk\nof fast algorithms from the slightly less efficient ones.\n\n\\input{tensor\/figures\/ijb_jcid10}\n\n\\paragraph{Multi-Threading}\nOur contraction algorithms can profit from shared memory parallelism through\nmulti-threaded \\blas kernels. To focus on the impact of parallelism, we\nincrease the contracted tensor dimension sizes to~$i = j = 32$ and use all\n10~cores of the \\ivybridge with multi-threaded \\openblas.\n\\Cref{fig:tensor:ijb_jcid10} presents performance predictions and measurements\nfor this setup: Our predictions accurately distinguish the three groups of\n\\dgemm-based implementations, and algorithms\n\\tensoralgname{i'c}{gemm}~(\\ref*{plt:ijb_jcid:i'c_gemm}) and\n\\tensoralgname{i'b}{gemm}~(\\ref*{plt:ijb_jcid:i'b_gemm}) (see\n\\cref{algs:ijb_jcid}), which reach \\SI{170}{\\giga\\flops\\per\\second}, are\ncorrectly identified as the fastest.\n\\tensoralgname{jb'}{gemm}~(\\ref*{plt:ijb_jcid:jb'_gemm}) on the other hand\nmerely reaches \\SI{60}{\\giga\\flops\\per\\second}. 
This $3\\times$~difference in\nperformance among \\dgemm-based algorithms emphasizes the importance of selecting\nthe right algorithm.\n\n\n\\subsection{Efficiency Study}\n\n\\input{tensor\/figures\/eff}\n\nThe above study provided evidence that our automated approach successfully\nidentifies the most efficient algorithm(s). In the following we show how much\nfaster this approach is compared to empirical measurements. For this purpose, we\nonce more consider the contraction $C_{abc} \\coloneqq A_{ai} B_{ibc}$ with $i =\n8$ and varying $a = b = c$ on a \\harpertown with \\openblas.\n\\Cref{fig:tensor:eff} presents the speedup of our micro-benchmark over\ncorresponding algorithm measurements: Generally our predictions are several\norders of magnitude faster than such algorithm executions. For $a = b = c =\n1000$, this relative improvement is smallest for the \\dgemm-based\nalgorithms~(\\ref*{plt:eff:gemm}) at $1000\\times$, because each \\dgemm performs a\nsignificant portion of the computation; for the \\dger-based\nalgorithms~(\\ref*{plt:eff:ger}), it lies between 6000 and \\num{10000} and for\nthe \\dgemv-based algorithms~(\\ref*{plt:eff:gemv}) the gain is $\\num{5e5}\\times$\nto $\\num{e6}\\times$; finally, for the \\blasl1-based\nalgorithms~(\\ref*{plt:eff:axpy}, \\ref*{plt:eff:dot}), where each kernel\ninvocation only performs a tiny fraction of the contraction, our predictions are\n\\num{1e6} to \\num{1e9}~times faster than the algorithm executions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Affine and non-affine deformation}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figure1.pdf}\n\\caption{(a) The deformation of infinitesimal spherical fluid elements after $\\Delta t=3\\tau_\\eta$ (size not to scale). 
(b) The time evolution of the mean curvature $\langle\kappa_1\rangle$ (black solid curve); the black dashed line represents a linear relationship, and the cyan dashed line represents the prediction based on an Eulerian quantity $\langle|\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1|\rangle$ with $\boldsymbol{\hat{e}}_1$ being the eigenvector corresponding to the maximum eigenvalue of the rate-of-strain tensor. Inset: the same figure for $\langle\kappa_1\rangle$ (black solid line) with a linear scale in time. The black dashed line represents an exponential growth over time.\label{fig_element_deformation}}\n\end{figure}\n\n\nTo build a framework that makes that connection, we consider the folding of infinitesimal fluid elements. Fig. \ref{fig_element_deformation}(a) shows a number of infinitesimal spherical fluid elements being deformed after a time $3\tau_\eta$ ($\tau_\eta$ is the Kolmogorov time scale) in 3D homogeneous and isotropic turbulence \cite{li2008public,perlman2007data} (details of the direct numerical simulation (DNS) of the turbulence can be found in Supplemental Material). It is clear that the deformed fluid elements show complex geometry involving both stretching and folding. To mathematically describe this high-order deformation, we consider each point $\boldsymbol{X}$ at $t_0$ within an infinitesimal fluid element mapped to another point $\boldsymbol{x}$ within the deformed element after a finite time $\Delta t$, where $\boldsymbol{x}$ and $\boldsymbol{X}$ are the relative positions with respect to the center of the fluid elements. 
The non-linear mapping function between $\boldsymbol{X}$ and $\boldsymbol{x}$, to the leading orders, follows\n\begin{equation}\label{eqn_mapping}\n\boldsymbol{x}=\boldsymbol{F}(t_0+\Delta t)\cdot\boldsymbol{X}+\boldsymbol{X}\cdot\boldsymbol{G}(t_0+\Delta t)\cdot\boldsymbol{X},\n\end{equation}\nwhere $F_{ij}=\partial x_i\/\partial X_j$ is the deformation gradient tensor and $G_{ijk}=\partial^2 x_i\/\partial X_j\partial X_k$ is the deformation Hessian tensor. The tensors $F_{ij}$ and $G_{ijk}$ can then be determined by integrating $dF_{ij}(t)\/dt=A_{im}F_{mj}(t)$ and $dG_{ijk}(t)\/dt=A_{im}G_{mjk}(t)+H_{imn}F_{mj}(t)F_{nk}(t)\/2$ along the trajectories of fluid elements, with $A_{ij}=\partial u_i\/\partial x_j$ and $H_{ijk}=\partial^2 u_i\/\partial x_j \partial x_k$ being the velocity gradient and velocity Hessian tensors, respectively. Details of these equations can be found in Supplemental Material. \n\nTo further simplify Eq. (\ref{eqn_mapping}), we consider the deformation of an arbitrary straight material line passing through the center of a fluid element, represented by a set of positions $\boldsymbol{X}$ parametrized according to $\boldsymbol{X}(\lambda)=\boldsymbol{\hat{e}}\lambda$. Here, $\boldsymbol{\hat{e}}$ is a selected unit vector and the parameter $\lambda\rightarrow0$ indicates the distance from the center of the fluid element. Substituting $\boldsymbol{X}(\lambda)=\boldsymbol{\hat{e}}\lambda$ into Eq. 
(\ref{eqn_mapping}) yields the expression for the deformed material line at $t_0+\Delta t$, \n\begin{equation}\label{eqn_mapping_expand}\n \boldsymbol{x}(\lambda)=\boldsymbol{F}\cdot \boldsymbol{\hat{e}} \lambda+\boldsymbol{\hat{e}}\cdot\boldsymbol{G}\cdot\boldsymbol{\hat{e}}\lambda^2= \boldsymbol{r}^s \lambda+\boldsymbol{r}^b \lambda^2,\n\end{equation}\nwhere $\boldsymbol{r}^s=\boldsymbol{F}\cdot \boldsymbol{\hat{e}}$ and $\boldsymbol{r}^b=\boldsymbol{\hat{e}}\cdot\boldsymbol{G}\cdot\boldsymbol{\hat{e}}$ are defined as the stretching vector and the bending vector, respectively. \n\n\n\n\nA highly relevant material line is the one that gets stretched the most, written as $\boldsymbol{X}(\lambda)=\boldsymbol{\hat{e}}_{R1}\lambda$. Here, $\boldsymbol{\hat{e}}_{R1}$ is the unit eigenvector associated with the greatest eigenvalue of the right Cauchy-Green strain tensor $\boldsymbol{C}^R=\boldsymbol{F}^T\boldsymbol{F}$. This special material line, as the ``skeleton'' of the fluid element, can be used to reflect the overall geometry of the fluid element. Substituting $\boldsymbol{\hat{e}}=\boldsymbol{\hat{e}}_{R1}$ in Eq. (\ref{eqn_mapping_expand}) results in the quadratic equation $\boldsymbol{x}(\lambda)= \boldsymbol{r}^s_1 \lambda+\boldsymbol{r}^b_1 \lambda^2$ where $\boldsymbol{r}^s_1=\boldsymbol{F}\cdot\boldsymbol{\hat{e}}_{R1}$ and $\boldsymbol{r}^b_1=\boldsymbol{\hat{e}}_{R1}\cdot\boldsymbol{G}\cdot\boldsymbol{\hat{e}}_{R1}$. An example of this material line is shown as the inset of Fig. \ref{fig_element_deformation}(a) (black dashed line). Given this quadratic equation, the curvature of the material line $\kappa_1$ can be found using $\kappa_1=2r^b_{1\perp}\/(r_1^s)^2$, where $\boldsymbol{r}^b_{1\perp}$ represents the component of $\boldsymbol{r}^b_1$ that is perpendicular to $\boldsymbol{r}^s_1$. 
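The computation of $\kappa_1$ from $\boldsymbol{F}$ and $\boldsymbol{G}$ can be sketched as follows (a Python/NumPy illustration of our own that, unlike the DNS-based integration, freezes $\boldsymbol{A}$ and $\boldsymbol{H}$ along the trajectory and uses a plain Euler integrator):

```python
import numpy as np

def advance_F_G(A, H, dt, steps):
    """Euler-integrate dF/dt = A.F and
    dG_ijk/dt = A_im G_mjk + (1/2) H_imn F_mj F_nk
    (A and H held constant here for simplicity)."""
    F = np.eye(3)
    G = np.zeros((3, 3, 3))
    for _ in range(steps):
        F_new = F + dt * (A @ F)
        G = G + dt * (np.einsum('im,mjk->ijk', A, G)
                      + 0.5 * np.einsum('imn,mj,nk->ijk', H, F, F))
        F = F_new
    return F, G

def kappa1(F, G):
    """kappa_1 = 2 |r^b_perp| / |r^s|^2 along e_R1, the eigenvector of
    the right Cauchy-Green tensor C^R = F^T F with largest eigenvalue."""
    w, V = np.linalg.eigh(F.T @ F)
    e1 = V[:, -1]                             # largest eigenvalue (ascending)
    rs = F @ e1                               # stretching vector
    rb = np.einsum('j,ijk,k->i', e1, G, e1)   # bending vector
    rb_perp = rb - (rb @ rs) / (rs @ rs) * rs
    return 2.0 * np.linalg.norm(rb_perp) / (rs @ rs)
```

With $\boldsymbol{H}=0$ the element stays straight ($\kappa_1 = 0$): bending enters only through the deformation Hessian $\boldsymbol{G}$.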
Although $\\kappa_1$ is not sufficient to describe the complete deformation, it does reflect the overall folding of the fluid element. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe curvature $\\kappa_1$ can therefore be obtained by computing $\\boldsymbol{F}$ and $\\boldsymbol{G}$ and their associated $\\boldsymbol{r}^b_1$ and $\\boldsymbol{r}^s_1$ along with each fluid trajectory. Fig. \\ref{fig_element_deformation}(b) shows the time evolution of the mean curvature $\\langle\\kappa_1\\rangle$, averaged over $10^5$ fluid elements, as a function of the integration time $\\Delta t$ using the DNS data. It is evident that, for the available simulation duration, the mean curvature of the fluid elements grows continuously, but the growth rate changes appreciably between two regimes. In early times, $\\langle\\kappa_1\\rangle$ increases linearly. The linear regime lasts until about the Kolmogorov timescale $\\tau_\\eta$ when the length scale $1\/\\langle\\kappa_1\\rangle$ is around 25$\\eta$ ($\\eta$ is the Kolmogorov length scale), and the growth of $\\langle\\kappa_1\\rangle$ slows down, marking the transition of the curvature dynamics. Soon after $\\tau_\\eta$, the growth of $\\langle\\kappa_1\\rangle$ accelerates again, and this late stage behavior is better fitted with an exponential function, which is illustrated in a semi-logarithmic plot in the inset of Fig. \\ref{fig_element_deformation}(b). \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{curv_pdf.pdf}\n \\caption{(a) The PDFs of the curvature $p(\\kappa_1)$ at different time instants in the early stage. Inset of (a): the same PDFs but for the normalized curvature $p(\\kappa_1\/\\langle\\kappa_1\\rangle)$. (b) The PDFs of the curvature $p(\\kappa_1)$ at different time instants in the late stage with the solid curves representing the data and the dashed curves representing the prediction by the model (Eq. (\\ref{eqn_pdf_evolution})). 
Inset of (b): the time evolution of the kurtosis of $\kappa_1$. }\n \label{fig_curv_pdf}\n\end{figure}\n\n\n\n\n\n\n\n\n\nThe transition from the linear to the exponential growth of $\langle \kappa_1 \rangle$ indicates different mechanisms at play, which can be better understood using local curvature. Here, the probability density functions (PDFs) of $\kappa_1$, i.e. $p(\kappa_1)$, at different times are shown in Fig. \ref{fig_curv_pdf} for the early (a) and late (b) stages. In the early stage, the curvature grows systematically, but follows a self-similar behavior as indicated by the collapsed PDFs of the normalized curvature $p(\kappa_1\/\langle \kappa_1 \rangle)$ in the inset of Fig. \ref{fig_curv_pdf}(a). In the late stage, the tail of the PDF still rises over time, whereas the peak location remains constant. This distinct behavior suggests that the curvature distribution becomes more intermittent over time, which is confirmed by the growing kurtosis as shown in Fig. \ref{fig_curv_pdf}(b) inset. This result highlights the growing inhomogeneity of local mixing as locations with extreme curvature should reach a well-mixed stage much sooner than what is implied by the mean.\n\n\n\n\n\n\n\n\nTo model the multi-stage growth behavior of curvature, we consider an arbitrary deforming infinitesimal material line as in Eq. (\ref{eqn_mapping_expand}). 
The equation for this material line can therefore be decomposed along two directions, $\boldsymbol{\hat{e}}_\parallel=\boldsymbol{r}^s\/r^s$ and $\boldsymbol{\hat{e}}_\perp=\boldsymbol{r}^b_\perp\/r^b_\perp$, following:\n\begin{equation}\label{eqn_geometry}\n \boldsymbol{x}(\lambda)=\left(r^s\lambda+r^b_\parallel\lambda^2\right)\boldsymbol{\hat{e}}_\parallel+r^b_\perp\lambda^2\boldsymbol{\hat{e}}_\perp,\n\end{equation}\nwhere $\boldsymbol{r}^b_\parallel=(\boldsymbol{r}^b\cdot\boldsymbol{\hat{e}}_\parallel)\boldsymbol{\hat{e}}_\parallel$ and $\boldsymbol{r}^b_\perp=\boldsymbol{r}^b-\boldsymbol{r}^b_\parallel$.\n\nThe velocity of any arbitrary material point on the material line, $\boldsymbol{u}(\lambda)$, can then be expressed in the frame spanned by ($\boldsymbol{\hat{e}}_\parallel$, $\boldsymbol{\hat{e}}_\perp$) in two different ways by taking either the direct time derivative of Eq. (\ref{eqn_geometry}) or the Taylor expansion based on the velocity information (see Supplemental Material). Comparing these two expressions for $\boldsymbol{u}(\lambda)$ leads to evolution equations for $r^s$ and $r^b_\perp$, which then yields the evolution equation for the curvature of the material line\n\begin{equation}\label{eqn_curv_evolution}\n\begin{split}\n \frac{d\kappa}{dt}=&\left(\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_\parallel\right)\cdot\boldsymbol{\hat{e}}_\perp\\\n &+\left(\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp-2\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel\right)\kappa.\n\end{split}\n\end{equation}\nHere $\boldsymbol{S}$ and $\boldsymbol{H}$ are the rate-of-strain tensor and the velocity Hessian tensor following the trajectories of fluid elements, respectively.\n\nEq. 
(\ref{eqn_curv_evolution}) holds for an arbitrary material line, so it also works for the curvature along the largest stretching ($\boldsymbol{\hat{e}}_{R1}$) direction $\kappa_1$. The first term on the right side of Eq. (\ref{eqn_curv_evolution}) represents the contribution from the velocity Hessian, which can directly bend the fluid element as shown in Fig. \ref{fig_alignment}(a). Here, the thick blue arrows indicate the primary velocity Hessian that bends the element (i.e., the velocity gradient that changes along the $\boldsymbol{\hat{e}}_\parallel$ direction). In the short time limit, $\kappa_1\rightarrow0$, all the terms multiplied by $\kappa_1$ in Eq. (\ref{eqn_curv_evolution}) are negligible, \nso Eq. (\ref{eqn_curv_evolution}) can be simplified to $d\kappa_1\/dt=\left(\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_\parallel\right)\cdot\boldsymbol{\hat{e}}_\perp$, which corresponds to the linear growth in the early stage as in Fig. \ref{fig_element_deformation}(b). At later times ($\Delta t>\tau_\eta$), this contribution of the velocity Hessian approaches zero as shown in Fig. \ref{fig_alignment}(d) (blue solid line) because $\left(\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_\parallel\right)$ may not be perfectly aligned with $\boldsymbol{\hat{e}}_\perp$. Since the velocity Hessian is a small-scale quantity, it is not surprising that the transition in Fig. \ref{fig_element_deformation}(b) begins at a small $\Delta t$ as the velocity Hessian decorrelates \cite{schumacher2007asymptotic}.\n\n\n\n\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.9\linewidth]{alignment.pdf}\n \caption{(a-c) Schematics illustrating how (a) velocity Hessian, (b) strain along $\boldsymbol{\hat{e}}_\perp$, and (c) strain along $\boldsymbol{\hat{e}}_\parallel$ contribute to the curvature change, respectively. 
For all cases, the black dashed curves represent the special material line (skeleton) while the gray dashed curves indicate the same material line at a later time deformed by the surrounding flows indicated by the thick arrows. (d) The time evolution of the contribution to the mean curvature growth by each term in Eq. (\ref{eqn_curv_evolution}), conditioned on $\kappa_1>3\langle\kappa_1\rangle$. All the terms are normalized by the Kolmogorov scales. }\n \label{fig_alignment}\n\end{figure}\n\n\n\n \nIn addition to the Hessian term, the other two terms in Eq. (\ref{eqn_curv_evolution}), both proportional to $\kappa_1$, represent how the strain affects the curvature of an already-bent fluid element. Here, $\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp$ represents the stretching along $\boldsymbol{\hat{e}}_\perp$, which tends to increase the curvature (as shown in Fig. \ref{fig_alignment}(b)); $\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel$ represents the stretching along $\boldsymbol{\hat{e}}_\parallel$, which straightens an already-bent fluid element and reduces the curvature (as shown in Fig. \ref{fig_alignment}(c)). At later times, the mean curvature $\langle\kappa_1\rangle$ is large so both terms associated with $\kappa_1$ become dominant, leading to $d\kappa_1\/dt\propto \kappa_1$. As a result, the late stage growth of curvature exhibits an exponential trend, consistent with the results in Fig. \ref{fig_element_deformation}(b) inset.\n\nThe contributions from strain by each of the two terms (dashed line) and their combination (red solid line) are shown in Fig. \ref{fig_alignment}(d). The statistics were collected by only using the fluid elements with $\kappa_1>3\langle\kappa_1\rangle$ because the late stage is dominated by the large-curvature cases as indicated by Eq. (\ref{eqn_curv_evolution}). 
It is evident that, as the velocity Hessian contribution approaches zero, the total contribution by the strain grows significantly, signaling the transition of the roles between these two mechanisms. This growing contribution by the strain is dominated by $(\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp)\kappa$ which enhances the folding, whereas the other term $(-\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel)\kappa$ that reduces the curvature plateaus close to zero.\n\n\nTo understand the enhanced curvature intermittency at the late stage, the time evolution of the PDF of $\kappa_1$, i.e. $p(\kappa_1,t)$ as shown in Fig. \ref{fig_curv_pdf}(b), is modelled by assuming that $p(\kappa_1,t)d\kappa_1=p(\kappa_1',t+dt)d\kappa_1'$, where $\kappa_1'=\kappa_1+(d\kappa_1\/dt)dt$ is the curvature of the fluid elements with an initial curvature $\kappa_1$ after $dt$. Substituting $\kappa_1'$ into the equation for the PDF leads to,\n\begin{equation}\label{eqn_pdf_evolution}\n \frac{\partial p}{\partial t}+(d\kappa_1\/dt)\cdot\frac{\partial p}{\partial \kappa_1}+p\cdot\frac{d(d\kappa_1\/dt)}{d\kappa_1}=0.\n\end{equation}\nHere we approximate $d\kappa_1\/dt\approx\langle\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp-2\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel\rangle\kappa_1$ because (i) the strain is the dominant mechanism in the late stage and (ii) the contribution by the velocity Hessian will only result in a self-similar distribution of curvature as shown in Fig. \ref{fig_curv_pdf}(a), whereas the PDFs in the late stage exhibit longer tails over time. Eq. (\ref{eqn_pdf_evolution}) is then solved numerically with $p(\kappa_1)$ at $t\/\tau_\eta=3$ obtained from the DNS data serving as the initial condition. 
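A minimal numerical sketch of this step (our illustration, with an assumed constant coefficient $c$ for the bracketed mean strain and a synthetic Gaussian initial shape instead of the DNS $p(\kappa_1)$): with $d\kappa_1\/dt = c\kappa_1$, Eq. (\ref{eqn_pdf_evolution}) becomes $\partial_t p + c\kappa_1\partial_{\kappa_1} p + cp = 0$, which can be advanced with first-order upwinding; the exact solution $p(\kappa,t)=e^{-ct}p_0(\kappa e^{-ct})$ provides a check.

```python
import numpy as np

def evolve_pdf(p0, kappa, c, dt, steps):
    """Advance dp/dt + (c*kappa) dp/dkappa + c*p = 0 with first-order
    upwinding in kappa (valid for c >= 0 on a uniform grid)."""
    p = p0.copy()
    dk = kappa[1] - kappa[0]
    for _ in range(steps):
        dpdk = np.zeros_like(p)
        dpdk[1:] = (p[1:] - p[:-1]) / dk        # upwind difference
        p = p - dt * (c * kappa * dpdk + c * p)
    return p

kappa = np.linspace(0.0, 10.0, 2001)
p0 = np.exp(-((kappa - 2.0) ** 2) / 0.5)        # synthetic initial PDF shape
c, t, steps = 0.3, 1.0, 4000
p_num = evolve_pdf(p0, kappa, c, t / steps, steps)
p_exact = np.exp(-c * t) * np.exp(-((kappa * np.exp(-c * t) - 2.0) ** 2) / 0.5)
```

The stretched, lowered profile reproduces the qualitative behavior of the modelled tails: the distribution spreads toward larger $\kappa_1$ while its integral is conserved.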
\n\n\n\n\n\n\n\n\nThe predicted PDFs at different times are shown as the dashed curves in Fig. \ref{fig_curv_pdf}(b). An overall good agreement between the prediction and the data is achieved up to $t\approx10\tau_\eta$, particularly in the tail region extending beyond $\kappa_1\eta\approx 0.2$ in Fig. \ref{fig_curv_pdf}(b), which corresponds to a length scale smaller than 5$\eta$. This suggests that the intermittency shown here is related to the curved elements being stretched even further by small-scale straining motions in the dissipative range. Note that the range of $\kappa_1\eta$ is limited because of the exceedingly low probability of finding fluid elements with $\kappa_1\eta$ greater than 0.25. We also note that the model following Eq. (\ref{eqn_pdf_evolution}) is simplified and it only holds when $d\kappa_1\/dt$ increases with $\kappa_1$, i.e., more curved elements are being bent at a faster rate, which can only be satisfied at the late stage given the overall positive magnitude of $\langle\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp-2\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel\rangle$ in Eq. (\ref{eqn_curv_evolution}). Furthermore, the model is intended only for the tail region because the peak region with smaller $\kappa_1$ is dominated by the velocity Hessian. As a result, a mismatch between model predictions and simulation results is not unexpected for smaller $\kappa_1\eta$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\begin{figure}\n \centering\n \includegraphics[width=0.9\linewidth]{joint_pdf.pdf}\n \caption{The joint PDF of the normalized curvature along $\boldsymbol{\hat{e}}_1$ and $\boldsymbol{\hat{e}}_2$ directions. Two schematics show an initially spherical fluid element deforming to a bowl shape (top) and a saddle shape (bottom) after a short time, respectively. }\n \label{fig_joint_pdf}\n\end{figure}\n\nEq. 
(\\ref{eqn_curv_evolution}) also enables us to use simple Eulerian quantities to understand folding in the early stage. As $\\Delta t\\rightarrow0$, $\\boldsymbol{\\hat{e}}_\\parallel$ approaches $\\boldsymbol{\\hat{e}}_1$, which is the eigenvector among the three [$\\boldsymbol{\\hat{e}}_i$ ($i=1,2,3$)] that corresponds to the maximum eigenvalue of the rate-of-strain tensor $\\boldsymbol{S}$. The early growth of the material curvature can therefore be determined by an Eulerian quantity $\\langle|\\boldsymbol{\\hat{e}}_1\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_1|\\rangle$ following $d\\langle\\kappa_1\\rangle\/dt\\approx\\langle\\left(\\boldsymbol{\\hat{e}}_\\parallel\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_\\parallel\\right)\\cdot\\boldsymbol{\\hat{e}}_\\perp\\rangle\\approx \\langle|\\boldsymbol{\\hat{e}}_1\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_1|\\rangle\\beta$, where $\\beta\\approx0.85$ is the mean cosine of the angle between $\\boldsymbol{\\hat{e}}_\\parallel\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_\\parallel$ and $\\boldsymbol{\\hat{e}}_\\perp$ obtained from the DNS data. The predicted result is shown as the cyan dashed line in Fig. \\ref{fig_element_deformation}(b), and it overlaps with the DNS data perfectly. \n\n\n\n\nThis Eulerian quantity $\\langle|\\boldsymbol{\\hat{e}}_1\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_1|\\rangle$ also helps to establish a better physical picture of the deformed fluid elements in the short time limit beyond a simple flat sheet that extends along the $\\boldsymbol{\\hat{e}}_1$ and $\\boldsymbol{\\hat{e}}_2$ directions considered in the classical framework \\cite{lund1994improved}. As illustrated in the schematics of Fig. 
\\ref{fig_joint_pdf}, such a sheet could be curved along the $\\boldsymbol{\\hat{e}}_3$ direction, and its geometry can be described by two curvatures, whose growth is controlled by $(\\boldsymbol{\\hat{e}}_1\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_1)\\cdot\\boldsymbol{\\hat{e}}_3$ and $(\\boldsymbol{\\hat{e}}_2\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_2)\\cdot\\boldsymbol{\\hat{e}}_3$, respectively. \n\n\nThe joint PDF of $(\\boldsymbol{\\hat{e}}_1\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_1)\\cdot\\boldsymbol{\\hat{e}}_3$ and $(\\boldsymbol{\\hat{e}}_2\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_2)\\cdot\\boldsymbol{\\hat{e}}_3$ normalized by Kolmogorov scales is shown in Fig. \\ref{fig_joint_pdf}. Here, the direction of $\\boldsymbol{\\hat{e}}_3$ is chosen such that $(\\boldsymbol{\\hat{e}}_1\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_1)\\cdot\\boldsymbol{\\hat{e}}_3>0$, while $(\\boldsymbol{\\hat{e}}_2\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_2)\\cdot\\boldsymbol{\\hat{e}}_3$ can be either positive (bowl shape) or negative (saddle shape). The joint PDF suggests a nearly symmetric probability for either shape, skewing only slightly towards the bowl case. Nevertheless, for a given curvature in one direction, the most likely curvature in the other direction is zero, so there appears to be some preference for cigar-like shapes. This is confirmed in Fig. \\ref{fig_element_deformation}(a), where the bending occurs mostly in one direction (although various other bending configurations can be seen). Note that large values of the velocity Hessian may be the result of local instabilities (e.g. shear instabilities that are responsible for rolling up the vortex sheets into tubes \\citep{vincent1994dynamics}). 
Connecting the dynamics\nof instabilities to the velocity Hessian and curvature requires further investigation.\n\n\n\n\n\n\n\n\nIn sum, our work establishes a new framework to connect folding dynamics to the velocity Hessian and deformation Hessian tensors, similar to the way stretching is connected to the velocity gradient and Cauchy-Green strain tensors. As the stretching can be well described by the Lyapunov exponents based on strain, such a relationship may inspire the development of new ways to formulate the dynamical system for folding. Our framework also provides a new insight into flow intermittency, namely that sharp-turning points in the flow become even more curved due to strain, which could deepen our understanding of the intermittency and inhomogeneity of turbulent mixing. Future work could extend our framework to finite-sized fluid elements by considering the coarse-graining effect at the corresponding length scale. This extension will help develop improved models for length-scale reduction in the energy cascade process.\n\n\nWe acknowledge financial support from the National Science Foundation under award number CAREER-1905103. This project was also partially supported by ONR award N00014-21-1-2083. 
\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{${\\cal F}$-transversality}\\label{sF}\n\n\nIn this section, we study properties\nof morphisms of schemes\nwith respect to\ncomplexes on the \\'etale site\nof a scheme.\nThe transversality is defined \nas a condition for a canonical morphism\nfor extraordinary pull-back to be\nan isomorphism.\nIn Section \\ref{ssFtr},\nafter preparing some sorites on\nthe canonical morphism,\nwe establish basic properties\non the transversality.\nIn Section \\ref{ssla},\nafter recalling basic properties\nof local acyclicity,\nwe study the relation between\nthe local acyclicity and\nthe transversality.\n\nIn this section\nand Section \\ref{sms},\n$\\Lambda$ denotes\na finite field of characteristic $\\ell$\ninvertible on relevant \nnoetherian schemes.\nThe derived categories\n$D^+(-,\\Lambda)$ of \nbounded below complexes\nand \n$D^b_c(-,\\Lambda)$ of \nconstructible complexes\nare defined as usual.\n\n\\subsection{${\\cal F}$-transversality}\\label{ssFtr}\n\n\n\nLet $h\\colon W\\to X$\nbe a separated morphism\nof finite type of noetherian schemes\nand $\\Lambda$ be a finite field\nof characteristic $\\ell$ invertible on $X$.\nThe functor\n$Rh^!\\colon D^+(X,\\Lambda)\n\\to D^+(W,\\Lambda)$\nis defined as the adjoint\nof\n$Rh_!\\colon D(W,\\Lambda)\n\\to D(X,\\Lambda)$\nin \\cite[Th\\'eor\\`eme 3.1.4.]{DP}.\nIf $X$ is quasi-excellent,\nby the finiteness theorem\n\\cite[{\\sc Th\\'eor\\`eme} 1.1.1]{fini},\nwe have a functor\n$Rh^!\\colon D^b_c(X,\\Lambda)\n\\to D^b_c(W,\\Lambda)$\nsee also \\cite[Corollaire 1.5]{TF}.\nRecall that a scheme of\nfinite type over\na Dedekind domain with\nfraction field of characteristic 0\nis quasi-excellent\nby \\cite[Scholie (7.8.3)]{EGA4}.\n\n\nLet ${\\cal F}\n\\in D^+(X,\\Lambda)$ and \n${\\cal G}\\in D^+(W,\\Lambda)$ .\nThen, the adjoint of the morphism\n$h^*{\\cal F}\\otimes 
h^*Rh_*{\\cal G}\n\\to h^*{\\cal F}\\otimes {\\cal G}$\ninduced by the adjunction\n$h^*Rh_*{\\cal G} \\to{\\cal G}$\ndefines a canonical morphism\n\\begin{equation}\n{\\cal F}\\otimes Rh_*{\\cal G}\n\\to Rh_*(h^*{\\cal F}\\otimes {\\cal G}).\n\\label{eqpr0}\n\\end{equation}\nIf $h$ is an open immersion\nand if ${\\cal G}=h^*{\\cal G}_X$\nfor some extension of ${\\cal G}$\non $X$,\n(\\ref{eqpr0}) is identified with\nthe morphism\n${\\cal F}\\otimes R{\\cal H}om(h_!\\Lambda,\n{\\cal G}_X)\n\\to R{\\cal H}om(h_!\\Lambda,{\\cal F}\\otimes {\\cal G}_X)$ defined\nby the product.\n\nApplying the construction\n(\\ref{eqpr0})\nto a compactification of $h$\nand the extension by $0$,\na canonical isomorphism\n\\begin{equation}\n{\\cal F}\\otimes\nRh_!{\\cal G} \n\\to Rh_!(h^*{\\cal F}\\otimes{\\cal G} ),\n\\label{eqprj}\n\\end{equation}\nthe projection formula\n\\cite[(4.9.1)]{Rapport}\nis defined.\n\\if{This is defined as the adjoint\n$h^*{\\cal F}\\otimes^L_\\Lambda\nh^*Rh_!{\\cal G} \n\\to h^*{\\cal F}\\otimes^L_\\Lambda{\\cal G}$\nof the morphism induced\nby the adjunction\n$h^*Rh_!{\\cal G} \n\\to {\\cal G}$\nif $h$ is proper.\nIt is defined as the inverse of\nthe isomorphism\n$h^*{\\cal F}\\otimes^L_\\Lambda\nh^*Rh_!{\\cal G} \n\\gets h^*{\\cal F}\\otimes^L_\\Lambda{\\cal G}$\nif $h$ is an open immersion.}\\fi\n\n\\begin{df}\\label{dfAB}\nLet $h\\colon W\\to X$\nbe a separated morphism of finite\ntype of quasi-excellent noetherian schemes.\nLet ${\\cal F}\\in D^+(X,\\Lambda)$.\n\n{\\rm 1.}\nLet ${\\cal G}\\in D^+(X,\\Lambda)$. 
\nWe define a canonical morphism\n\\begin{equation}\nc_{{\\cal F},{\\cal G},h}\\colon\nh^*{\\cal F}\n\\otimes\nRh^!{\\cal G}\n\\to \nRh^!({\\cal F}\n\\otimes\n{\\cal G})\n\\label{eqAB}\n\\end{equation}\nto be the adjoint of the composition\n$$\nRh_!(h^*{\\cal F}\n\\otimes\nRh^!{\\cal G})\n\\to \n{\\cal F}\n\\otimes\nRh_!Rh^!{\\cal G}\n\\to {\\cal F}\n\\otimes\n{\\cal G}$$\nof the inverse \nof the isomorphism {\\rm (\\ref{eqprj})}\nand the morphism induced\nby the adjunction\n$Rh_!Rh^!{\\cal G}\n\\to\n{\\cal G}$.\nFor ${\\cal G}=\\Lambda$,\nwe define a canonical morphism\n\\begin{equation}\nc_{{\\cal F},h}\n\\colon \nh^*{\\cal F}\n\\otimes^L\nRh^!\\Lambda\n\\to Rh^!{\\cal F}\n\\label{eqcF}\n\\end{equation}\nto be \n$c_{{\\cal F},\\Lambda,h}$.\n\\end{df}\n\n\n\\begin{lm}\\label{lmcF}\nLet $h\\colon W\\to X$\nbe a separated morphism of finite\ntype of noetherian schemes.\nLet ${\\cal F}\\in D^+(X,\\Lambda)$.\n\n{\\rm 1.}\nLet ${\\cal G},{\\cal H}\\in D^+(X,\\Lambda)$.\nThen, the diagram \n\\begin{equation}\n\\begin{CD}\nh^*{\\cal F}\n\\otimes \nRh^!({\\cal G}\\otimes {\\cal H})\n@>{c_{{\\cal F},\n{\\cal G}\\otimes {\\cal H},h}}>>\nRh^!({\\cal F}\n\\otimes \n{\\cal G}\\otimes {\\cal H})\\\\\n@A{1\\otimes c_{{\\cal G},\n{\\cal H},h}}AA\n@AA{c_{{\\cal F},\n{\\cal G},h}\\otimes 1}A\\\\\nh^*{\\cal F}\n\\otimes \nRh^!{\\cal G}\\otimes h^*{\\cal H}\n@>{c_{{\\cal F},\n{\\cal G},h}\\otimes 1}>>\nRh^!({\\cal F}\\otimes {\\cal G})\n\\otimes \nh^*{\\cal H}\n\\end{CD}\n\\end{equation}\nis commutative.\n\n\n{\\rm 2.}\nLet $g\\colon V\\to W$\nbe a separated morphism of finite\ntype of schemes\nand \nlet ${\\cal G}\\in D^+(X,\\Lambda)$.\nThen, the diagram\n\\begin{equation}\n\\xymatrix{\n(hg)^*{\\cal F}\n\\otimes\nR(hg)^!{\\cal G}\n\\ar[r]^{c_{{\\cal F},{\\cal G},hg}}\n&\nR(hg)^!({\\cal F}\\otimes\n{\\cal G})\n\\\\\ng^*h^*{\\cal F}\n\\otimes\nRg^!Rh^!{\\cal G}\n\\ar[u]\n\\ar[rd]^{c_{h^*{\\cal F},\nRh^!{\\cal G},g}}\n&\nRg^!Rh^!({\\cal F}\\otimes\n{\\cal 
G})\n\\ar[u]\n\\\\\ng^*h^*{\\cal F}\n\\otimes\nRg^!\\Lambda\n\\otimes\ng^*Rh^!{\\cal G}\n\\ar[u]^{1\\otimes\nc_{Rh^!{\\cal G},g}}\n\\ar[dr]_{\nc_{h^*{\\cal F},g}\\otimes 1}\n&\nRg^!(h^*{\\cal F}\n\\otimes Rh^!{\\cal G})\n\\ar[u]_{Rg^!(c_{{\\cal F},{\\cal G},h})}\n\\\\\n&\nRg^!h^*{\\cal F}\n\\otimes g^*Rh^!{\\cal G}.\n\\ar[u]_{c_{h^*{\\cal F},Rh^!{\\cal G},g}}\n}\n\\label{eqcgh}\n\\end{equation}\nwhere the upper vertical arrows\nare canonical isomorphisms\n{\\rm \\cite[(3.1.13.1)]{DP}}\nis commutative.\n\n\n{\\rm 3.}\nLet $$\\begin{CD}\nX@{c_{Rf_*{\\cal F},g}}>>\nRg^!Rf_*{\\cal F}\n\\\\\n@VVV@VVV\\\\\nRf'_*h^*{\\cal F}\n\\otimes\nRg^!\\Lambda\n@.\nRf'_*Rh^!{\\cal F}\n\\\\\n@V\n{\\rm (\\ref{eqpr0})}VV\n@AA{Rf'_*(c_{{\\cal F},h})}A\\\\\nRf'_*(h^*{\\cal F}\n\\otimes\nf'^*Rg^!\\Lambda)\n@>>>\nRf'_*(h^*{\\cal F}\n\\otimes\nRh^!\\Lambda)\n\\end{CD}\n\\label{eqcfg}\n\\end{equation}\nwhere the arrows without\ntags are defined by\nbase change morphisms\nis commutative.\n\\end{lm}\n\n\\proof{\n1.\nThe diagram\n$$\\begin{CD}\nRh_!Rh^!({\\cal G}\\otimes {\\cal H})\n@>>>\n{\\cal G}\\otimes {\\cal H}\\\\\n@A\n{Rh_!(c_{{\\cal G},{\\cal H},h})}AA\n@AAA\\\\\nRh_!(Rh^!{\\cal G}\\otimes h^*{\\cal H})\n@<{{\\rm (\\ref{eqprj})}}<<\nRh_!Rh^!{\\cal G}\\otimes {\\cal H}\n\\end{CD}$$\nwhere the arrows without\ntags are defined by the adjunction\nis commutative\nby the definition of\n$c_{{\\cal G},{\\cal H},h}$.\nTaking the tensor products with ${\\cal F}$,\napplying the projection formula\n(\\ref{eqprj}) and\ntaking the adjoint,\nwe see that the upper triangle in\n\\begin{equation*}\n\\xymatrix{\nh^*{\\cal F}\n\\otimes \nRh^!({\\cal G}\\otimes {\\cal H})\n\\ar[r]^{c_{{\\cal F},\n{\\cal G}\\otimes {\\cal H},h}}&\nRh^!({\\cal F}\n\\otimes \n{\\cal G}\\otimes {\\cal H})\\\\\nh^*{\\cal F}\n\\otimes \nRh^!{\\cal G}\\otimes h^*{\\cal H}\n\\ar[u]^{1\\otimes c_{{\\cal G},\n{\\cal H},h}}\n\\ar[ru]^\n{c_{{\\cal F}\\otimes{\\cal H},\n{\\cal G},h}}\n\\ar[r]^{c_{{\\cal F},\n{\\cal G},h}\\otimes 
1}\n&\nRh^!({\\cal F}\\otimes {\\cal G})\n\\otimes \nh^*{\\cal H}\n\\ar[u]_{c_{{\\cal F},\n{\\cal G},h}\\otimes 1}\n}\n\\end{equation*}\nis commutative.\nThe lower triangle is similarly\ncommutative\nand the assertion follows.\n\n2.\nThe lower quadrangle\nis commutative by 1.\nThe composition \n$g^*h^*{\\cal F}\n\\otimes\nRg^!Rh^!{\\cal G}\n\\to\nRg^!Rh^!({\\cal F}\\otimes {\\cal G})$\nthrough\n$\nRg^!(h^*{\\cal F}\n\\otimes Rh^!{\\cal G})$\nis the adjoint of\n$Rh_!Rg_!\n(g^*h^*{\\cal F}\n\\otimes\nRg^!Rh^!{\\cal G})\n\\to\n{\\cal F}\\otimes\nRh_!Rg_!\nRg^!Rh^!{\\cal G}$\ninduced by\nthe adjunction\n$Rh_!Rg_!\nRg^!Rh^!{\\cal G}\n\\to\nRh_!Rh^!{\\cal G}\n\\to {\\cal G}$.\nSince the last morphism\nis identified\nwith \nthe adjunction\n$R(hg)_!\nR(hg)^!{\\cal G}\n\\to {\\cal G}$,\nthe upper pentagon is also commutative.\n\n\n3.\nFor ${\\cal G}\\in D^+(V,\\Lambda)$,\nwe consider the diagram\n\\begin{equation}\n\\begin{CD}\nf^*Rg_!(g^*Rf_*{\\cal F}\n\\otimes\n{\\cal G})\n@<{f^*{\\rm (\\ref{eqprj})}}<<\nf^*Rf_*{\\cal F}\n\\otimes\nf^*Rg_!{\\cal G}\n@>>>\n{\\cal F}\n\\otimes\nf^*Rg_!{\\cal G}\n\\\\\n@VVV@.@VVV\\\\\nRh_!f'^*(Rf'_*h^*{\\cal F}\n\\otimes\n{\\cal G})\n@>>>\nRh_!(h^*{\\cal F}\n\\otimes\nf^*{\\cal G})\n@<{\\rm (\\ref{eqprj})}<<\n{\\cal F}\n\\otimes\nRh_!f'^*{\\cal G}\n\\end{CD}\n\\label{eqcfga}\n\\end{equation}\ndefined as follows.\nThe vertical arrows are\ndefined by the base change morphisms\nand the horizontal arrows\nwithout labels are\ndefined by adjunction.\nWe see that the diagram is commutative\nby reducing to the case\nwhere $g$ is proper and\ngoing back to the definition\nof (\\ref{eqprj}).\n\nWe apply (\\ref{eqcfga}) to\n${\\cal G}=Rg^!\\Lambda$.\nSince the composition\n$f^*Rg_!Rg^!\\Lambda\n\\to\nRh_!f'^*Rg^!\\Lambda\n\\to Rh_!Rh^!\\Lambda\n\\to \\Lambda$\nof the base change morphisms\nwith the adjunction\nis induced by the adjunction\n$Rg_!Rg^!\\Lambda\\to \\Lambda$,\nwe obtain a commutative diagram\n\\begin{equation}\n\\begin{CD}\nf^*Rg_!(g^*Rf_*{\\cal 
F}\n\\otimes\nRg^!\\Lambda)\n@<{f^*{\\rm (\\ref{eqprj})}}<<\nf^*Rf_*{\\cal F}\n\\otimes\nf^*Rg_!Rg^!\\Lambda\n@>>>\n{\\cal F}\n\\\\\n@VVV@.@AAA\\\\\nRh_!f'^*(Rf'_*h^*{\\cal F}\n\\otimes\nRg^!\\Lambda)\n@>>>\nRh_!(h^*{\\cal F}\n\\otimes\nRh^!\\Lambda)\n@<{\\rm (\\ref{eqprj})}<<\n{\\cal F}\n\\otimes\nRh_!Rh^!\\Lambda\n\\end{CD}\n\\label{eqcfgb}\n\\end{equation}\nSince the canonical morphism\n(\\ref{eqcF}) is defined as\nthe adjoint of (\\ref{eqprj}),\nwe obtain (\\ref{eqcfg})\nby taking the adjoint of (\\ref{eqcfgb}).\n\\qed\n\n}\n\n\n\\begin{lm}\\label{lmij}\nLet $i\\colon Z\\to X$ be a closed\nimmersion of noetherian schemes\nand let ${\\cal F},{\\cal G}\n\\in D^+(X,\\Lambda)$.\n\n{\\rm 1.}\nWe define the slant arrow\nand the vertical arrow\nin the diagram\n\\begin{equation}\n\\xymatrix{\n{\\cal F}\\otimes\ni_*Ri^!{\\cal G}\n\\ar[r]^-{\\rm(\\ref{eqprj})}\n\\ar[rd]\n&\ni_*(i^*{\\cal F}\\otimes\nRi^!{\\cal G})\n\\ar[r]^-{i_*(c_{{\\cal F},{\\cal G},i})}\n&\ni_*Ri^!({\\cal F}\\otimes {\\cal G})\n\\ar[d]\n\\\\\n&\n{\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n\\ar[r]\n&\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\n\\otimes{\\cal G})\n}\n\\label{eqic}\n\\end{equation}\nby the canonical isomorphism\n$i_*Ri^!\\to R{\\cal H}om(i_*\\Lambda,-)$\nand the lower horizontal arrow\nby the product.\nThen, the diagram\n{\\rm (\\ref{eqic})}\nis commutative.\n\n{\\rm 2.}\nLet $j\\colon U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z\\to X$\nbe the open immersion of the complement.\nThen, \nthe exact sequence\n$0\\to j_!\\Lambda\n\\to \\Lambda\\to i_*\\Lambda\\to 0$\ndefines a commutative diagram\n\\begin{equation}\n\\begin{CD}\n{\\cal F}\\otimes\ni_*Ri^!{\\cal G}\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\n{\\cal F}\\otimes\nRj_*j^*{\\cal G}\n@>>>\\\\\n@V{c_{{\\cal F},{\\cal G},i}}VV@|\n@VV{\\rm (\\ref{eqpr0})}V@.\\\\\ni_*Ri^!({\\cal F}\\otimes\n{\\cal G})\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\nRj_*j^*({\\cal F}\\otimes\n{\\cal 
G})\n@>>>\n\\end{CD}\n\\label{eqij}\n\\end{equation}\nof distinguished triangles.\n\\end{lm}\n\n\n\\proof{\n1.\nBy the definition of\n$c_{{\\cal F},{\\cal G},i}$,\nthe morphism\n$i_*(c_{{\\cal F},{\\cal G},i})\\colon\ni_*(i^*{\\cal F}\\otimes\nRi^!{\\cal G})\n\\to i_*Ri^!({\\cal F}\n\\otimes {\\cal G})$\nis the unique morphism\nsuch that the diagram\n$$\n\\begin{CD}\n{\\cal F}\\otimes\ni^*Ri^!{\\cal G}\n@>>> {\\cal F}\\otimes{\\cal G}\n\\\\\n@V{\\rm (\\ref{eqprj})}VV@AAA\\\\\ni_*(i^*{\\cal F}\\otimes\nRi^!{\\cal G})\n@>{i_*(c_{{\\cal F},{\\cal G},i})}>>\ni_*Ri^!({\\cal F}\\otimes{\\cal G})\n\\end{CD}$$\nis commutative.\nHere the arrows without tag\nare defined by the\nadjunction $i_*Ri^!\\to 1$.\nSimilarly, the lower horizontal arrow\n${\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n\\to\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\n\\otimes{\\cal G})$\nis the unique morphism\nsuch that the diagram\n$$\n\\begin{CD}\n{\\cal F}\\otimes\ni^*Ri^!{\\cal G}\n@>>> {\\cal F}\\otimes{\\cal G}\n\\\\\n@VVV@AAA\\\\\n{\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n@>>>\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\n\\otimes{\\cal G})\n\\end{CD}$$\nis commutative.\nHere the left vertical arrow\nis the slant arrow in (\\ref{eqic})\nand the right vertical arrow\nis induced by $\\Lambda\\to i_*\\Lambda$.\nHence\nthe assertion follows.\n\n2.\nThe exact sequence\n$0\\to j_!\\Lambda\n\\to \\Lambda\\to i_*\\Lambda\\to 0$\ndefines a commutative diagram\n\\begin{equation}\n\\begin{CD}\n{\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\n{\\cal F}\\otimes\nR{\\cal H}om(j_!\\Lambda,{\\cal G})\n@>>>\\\\\n@VVV@VVV@VVV@.\\\\\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\\otimes\n{\\cal G})\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\nR{\\cal H}om(j_!\\Lambda,{\\cal F}\\otimes\n{\\cal G})\n@>>>\n\\end{CD}\n\\label{eqij2}\n\\end{equation}\nof distinguished triangles.\nBy 1.,\nthe left vertical arrow \nof (\\ref{eqij}) is\nidentified with\nthat of (\\ref{eqij2})\nand 
similarly for the\nright vertical arrows.\n\\qed\n\n}\n\n\\begin{lm}\\label{lmtrbc}\nLet\n$$\\begin{CD}\nX@{i'_s}>>X\\times_YY_{(s)}@<{j'_t}<< X_t\\\\\n@V{f_s}VV@V{f_{(s)}}VV @VV{f_t}V\\\\\ns@>{i_s}>>Y_{(s)}@<{j_t}<{i'_s}>>X\\times_YY_{(s)}@<{j'_t}<< X_t\\\\\n@V{p_s}VV@V{p_{(s)}}VV @VV{p_t}V\\\\\nP_s@>{i''_s}>>P\\times_YY_{(s)}@<{j''_t}<{\\rm (\\ref{eqpr0})}>>\nR(p'j')_*(p'j')^*{\\cal F}\n\\end{CD}\n\\label{eqYU}\n\\end{equation}\nwhere the first morphism is\ninduced by the base change morphism\nis an isomorphism\non a neighborhood of $Z$.\n\n\n\n{\\rm 2.}\n$f$ is universally ${\\cal F}$-acyclic\nalong $Z$.\n\\end{pr}\n\nFor the sake of completeness,\nwe record the proof in \\cite{CC}\nwith more detail.\n\n\\proof{\n1. \nLet \n$D_1,\\ldots,D_n$\nbe the irreducible components\nof $D$. For a subset $I\\subset \\{1,\\ldots,n\\}$,\nlet $X'_I=X'\\times_{Y'}(\\bigcap_{i\\in I}D_i)$\nand let $i'_I\\colon X'_I\\to X'$\nbe the closed immersion.\nBy the assumption,\n$p'\\colon X'\\to X$\nand $p'i'_I\\colon X'_I\\to X$\nare ${\\cal F}$-transversal\non neighborhoods of\nthe inverse images of $Z$.\n\nLet ${\\cal F}'=p'^*{\\cal F}$.\nSince the assumption\non $Rh^!\\Lambda$\nin Proposition \\ref{prhF}.1\nis satisfied by the absolute\npurity \\cite[{\\sc Th\\'eor\\`eme 3.1.1}]{purete},\nthe immersions \n$i'_I\\colon X'_I\\to X'$\nare ${\\cal F}'$-transversal\non neighborhoods of\nthe inverse images of $Z$\nby Proposition \\ref{prhF}.1.\nHence by Lemma \\ref{lmiZ},\nthe canonical morphism\n${\\cal F}'\n\\otimes Rj'_*\\Lambda\n\\to Rj'_*j'^*{\\cal F}'$ (\\ref{eqpr0}) is an\nisomorphism \non a neighborhood of $p'^{-1}(Z)$.\nSince $p'$ is proper,\nwe obtain an isomorphism\n$Rp'_*({\\cal F}'\n\\otimes Rj'_*\\Lambda)\n\\to R(pj')_*(pj')^*{\\cal F}$\non a neighborhood of $Z$.\n\n\nBy the projection formula\n(\\ref{eqprj}),\nwe have a canonical isomorphism\n${\\cal F}\n\\otimes Rp'_*Rj'_*\\Lambda\n\\to Rp'_*({\\cal F}'\n\\otimes Rj'_*\\Lambda)$.\nThe base change 
morphism\n$f^*R(pj)_*\\Lambda\\to\nRp'_*Rj'_*\\Lambda$ is an isomorphism\nby the smooth base change\ntheorem\n\\cite[Corollaire 1.2]{smbc}.\nHence the morphism (\\ref{eqYU})\nis an isomorphism on a neighborhood of $Z$.\n\n{\\rm 2.}\nIt suffices to show that\nfor a smooth morphism\n$Y'\\to Y$,\nthe base change\n$X'\\to Y'$ of $f$\nis locally acyclic with respect to the \npull-back of ${\\cal F}$ by Lemma \\ref{lmlac}.3.\nAs in the proof of 1.,\nthe assumption is satisfied\nfor the pull-back $Y'\\to Y$.\nHence, \nby replacing $Y$ by $Y'$,\nit suffices to show\nthat\n$f$ is locally acyclic with respect to ${\\cal F}$.\n\n\nLet $s\\gets t$ be a specialization\nof geometric points of $Y$\nas in Lemma \\ref{lmlac}.1\nand let the notation be as loc.~cit.\nBy \\cite[Theorem 4.1,\nTheorem 8.2]{dJ},\nwe may write $t$ as a limit\n$\\varprojlim_\\lambda\nU_\\lambda$\nof the complements $U_\\lambda=Y_\\lambda\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nD_\\lambda$\nof divisors $D_\\lambda\\subset\nY_\\lambda$ with simple normal crossings\nin regular schemes $Y_\\lambda$\nendowed with a\nproper, surjective\nand generically finite\nmorphism $p_\\lambda\\colon Y_\\lambda\n\\to Y$.\nThen, as the limit of\n(\\ref{eqYU}), the canonical morphism\n\\begin{equation}\n{\\cal F}\\otimes f_{(s)}^*\nRj_{t*}j^*_t\\Lambda\n\\to \nRj'_{t*}j^{\\prime*}_t{\\cal F}\n\\label{eqijst2}\n\\end{equation}\nis an isomorphism\non the inverse image of $Z$.\nSince $Y$ is normal,\nthe canonical morphism\n$\\Lambda\\to\ni_s^*Rj_{t*}j^*_t\\Lambda$\nis an isomorphism.\nHence the isomorphism\n(\\ref{eqijst2})\ninduces an isomorphism\n(\\ref{eqijst})\non the inverse image of $Z$.\n\\qed\n\n}\n\n\\begin{cor}\\label{corlc}\nLet $X$\nbe a regular scheme\nof finite type over \na discrete valuation ring\n${\\cal O}_K$\nand\n$Z\\subset X$ be a closed subset.\nLet\n${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules\non $X$.\nAssume that every separated morphism\n$h\\colon W\\to X$ of\nregular 
schemes\nof finite type over ${\\cal O}_K$\nis ${\\cal F}$-transversal\non a neighborhood of\nthe inverse image $h^{-1}(Z)$.\nThen ${\\cal F}$ is locally constant\non a neighborhood of $Z$.\n\\end{cor}\n\n\\proof{\nBy Proposition \\ref{prla1}\napplied to $1_X\\colon X\\to X$,\nthe identity $1_X\\colon X\\to X$\nis ${\\cal F}$-acyclic\nalong $Z$.\nHence ${\\cal F}$ is locally constant\non a neighborhood of $Z$\nby Lemma \\ref{lmla}.2.\n\\qed\n\n}\n\n\\medskip\n\nWe have a partial converse of\nProposition \\ref{prla1}\nnot used in the article.\n\n\\begin{pr}[{\\cite[Corollary 8.10]{CC}}]\\label{prla2}\nLet $f\\colon X\\to Y$\nbe a smooth morphism\nof noetherian schemes\nand let\n${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules\non $X$.\nLet $i\\colon Z\\to Y$ be an immersion\nand let\n$$\\begin{CD}\nX@h>>X@<{j'}<< U\\\\\n@VgVV @VfVV @VV{f_V}V\\\\\nZ@>i>>Y@{g'}>>X'@>{f'}>>Y'\\\\\n@V{h_V}VV@VhVV@VV{h'}V\\\\\nV@>g>> X@>f>>Y\n\\end{CD}$$\nbe a cartesian diagram of\nmorphisms of finite type of schemes\nsuch that \n$f\\colon X\\to Y$ is smooth\nand that the vertical arrows are\nseparated.\nAssume that\n$Rh^!\\Lambda$ is locally constant\nof support $X'$\nand that the base change\nmorphism\n$g'^*Rh^!\\Lambda\n\\to Rh_V^!\\Lambda$\nis an isomorphism.\n\nLet ${\\cal G}$\nbe a constructible complex\nof $\\Lambda$-modules on $V$\nand \nassume that $f$\nis $Rg_*{\\cal G}$-acyclic\nand that $fg$ is\n${\\cal G}$-acyclic.\nThen, \nthe base change morphism\n\\begin{equation}\nh^*Rg_*{\\cal G}\n\\to \nRg'_*h_V^*{\\cal G}\n\\label{eqbcj}\n\\end{equation}\nis an isomorphism.\n\\end{cor}\n\n\\proof{\nSince $f$ is $Rg_*{\\cal G}$-acyclic\nand $fg$ is ${\\cal G}$-acyclic,\nby Proposition \\ref{prla2},\n$h$ is $Rg_*{\\cal G}$-transversal\nand\n$h_V$ is ${\\cal G}$-transversal.\nHence the assertion follows\nfrom Proposition \\ref{prhF}.2.\n\\qed\n\n}\n\n\\section{$C$-transversality}\\label{sTX}\n\n\nIn this section,\nfirst we define \nthe FW-cotangent bundle\nof a regular 
scheme,\nas a vector bundle\non the closed subscheme \ndefined by $p=0$.\nThen, \nwe study properties\nof morphisms with respect to\nits closed conical subsets\ncorresponding to the transversality\nand the local\nacyclicity studied in Section \\ref{sF}.\n\nFirst in Section \\ref{ssFW},\nwe recall basic properties\nof the sheaf $F\\Omega^1_X$ \nof Frobenius-Witt differentials\nfrom \\cite{FW}.\nIn particular, if $X$ is regular,\nunder a certain finiteness condition,\nthe sheaf $F\\Omega^1_X$\nis a locally free \n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\nof rank $\\dim X$ on\n$X_{{\\mathbf F}_p}=\nX\\times_{{\\rm Spec}\\, {\\mathbf Z}}\n{\\rm Spec}\\, {\\mathbf F}_p$.\nUnder this condition,\nwe define the FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$ on $X_{{\\mathbf F}_p}$\nas the vector bundle\nassociated to the locally free\n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\n$F\\Omega^1_X$.\n\n\nWe study properties\nof morphisms with respect to \na given closed conical subset in\nSections \\ref{ssCtr} and \\ref{ssCac}.\nIn Section \\ref{ssCtr},\nwe study the transversality\nfor morphisms to $X$.\nIn Section \\ref{ssCac},\nwe study the acyclicity,\nwhich was also called transversality,\nfor morphisms from $X$.\n\n\n\n\n\\subsection{FW-cotangent bundle}\n\\label{ssFW}\n\n\n\\begin{df}[{\\rm \\cite[Definition 1.1]{FW}}]\\label{dfFW}\nLet $p$ be a prime number.\n\n{\\rm 1.}\nDefine a polynomial\n$P\\in {\\mathbf Z}[X,Y]$\nby\n\\begin{equation}\nP=\n\\sum_{i=1}^{p-1}\n\\dfrac{(p-1)!}{i!(p-i)!}\\cdot\nX^iY^{p-i}.\n\\label{eqP}\n\\end{equation}\n\n{\\rm 2.}\nLet $A$ be a ring\nand $M$ be an $A$-module.\nWe say that a mapping\n$w\\colon A\\to M$\nis a Frobenius-Witt derivation\nor FW-derivation for short\nif the following condition is\nsatisfied:\nFor any $a,b\\in A$, we have\n\\begin{align}\nw(a+b)\\, &=\nw(a)+\nw(b)\n-P(a,b)\n\\cdot w(p),\n\\label{eqadd}\\\\\nw(ab)\\, &=\nb^p\\cdot w(a)+\na^p\\cdot w(b).\n\\label{eqLb}\n\\end{align}\n\\end{df}\n\n\nDefinition 
\\ref{dfFW}.2\nis essentially the same\nas \\cite[Definition 2.1.1]{DKRZ}.\nWe recall some results from \\cite{FW}.\n\n\\begin{lm}\\label{lmOm}\nLet $p$ be a prime number and\n$A$ be a ring.\n\n{\\rm 1.\n(\\cite[Lemma 2.1.1]{FW})}\nThere exists a universal pair\nof an $A$-module\n$F\\Omega^1_A$\nand an FW-derivation\n$w\\colon A\n\\to F\\Omega^1_A$.\n\n{\\rm 2.\n(\\cite[Corollary 2.3.1]{FW})}\nIf $A$ is a ring over ${\\mathbf Z}_{(p)}$,\nwe have $p\\cdot F\\Omega^1_A=0$.\n\n{\\rm 3.\n(\\cite[Corollary 2.3.2]{FW})}\nIf $A$ is a ring over ${\\mathbf F}_{p}$,\nthen there exists a canonical\nisomorphism $F\\Omega^1_A\n\\to F^*\\Omega^1_A\n=\\Omega^1_A\\otimes_AA$\nto the tensor product\nwith respect to the absolute\nFrobenius morphism $A\\to A$.\n\\end{lm}\n\n\n\nWe call $F\\Omega^1_A$\nthe module of FW-differentials of $A$\nand $w(a)\\in F\\Omega^1_A$\nthe FW-differential of $a\\in A$.\nFor a morphism $A\\to B$ of rings,\nwe have a canonical $B$-linear morphism\n$F\\Omega^1_A\\otimes_AB\n\\to\nF\\Omega^1_B$.\n\nWe may sheafify the construction\nand define $F\\Omega^1$\nas a quasi-coherent ${\\cal O}_X$-module\nfor a scheme $X$. 
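As a side check (ours, not from \\cite{FW}): the coefficients $(p-1)!\/(i!(p-i)!)$ of $P$ in {\\rm (\\ref{eqP})} are the integers $\\binom{p}{i}\/p$, so $p\\cdot P(X,Y)=(X+Y)^p-X^p-Y^p$, and the map $w(a)=((a-a^p)\/p)\\bmod p$ on ${\\mathbf Z}$ satisfies both axioms {\\rm (\\ref{eqadd})} and {\\rm (\\ref{eqLb})} of Definition {\\rm \\ref{dfFW}}, with $w(p)=1$ in ${\\mathbf F}_p$. A short numerical verification for $p=5$:

```python
# Verify, for p = 5, that w(a) = ((a - a^p)/p) mod p is an FW-derivation
# on the integers in the sense of Definition dfFW (our illustration,
# not taken from the paper).
from math import factorial
from itertools import product

p = 5

def w(a):
    assert (a - a**p) % p == 0     # Fermat's little theorem
    return ((a - a**p) // p) % p

def P(x, y):
    # the polynomial of (eqP); the coefficients are integers since p is prime
    return sum((factorial(p - 1) // (factorial(i) * factorial(p - i)))
               * x**i * y**(p - i) for i in range(1, p))

for a, b in product(range(-6, 7), repeat=2):
    assert w(a * b) == (b**p * w(a) + a**p * w(b)) % p        # (eqLb)
    assert w(a + b) == (w(a) + w(b) - P(a, b) * w(p)) % p     # (eqadd)
```

Both axioms reduce to the congruence $(a-a^p)(b-b^p)\\equiv 0 \\pmod {p^2}$ and the identity $p\\cdot P(X,Y)=(X+Y)^p-X^p-Y^p$, which hold for every prime $p$.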
\nWe call $F\\Omega^1_X$\nthe sheaf of FW-differentials on $X$.\nIf $X$ is a scheme over\n${\\mathbf Z}_{(p)}$,\nthe ${\\cal O}_X$-module\n$F\\Omega^1_X$ is\nan ${\\cal O}_{X_{{\\mathbf F}_p}}$-module\nwhere\n$X_{{\\mathbf F}_p}\n=X\\times_{{\\rm Spec}\\, {\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$.\nFurther if $X$ is noetherian\nand if $X_{{\\mathbf F}_p}$\nis of finite type over a field\nof finite $p$-basis,\nthen $F\\Omega^1_X$ is\na coherent \n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\nby \\cite[Lemma 4.1.2]{FW}.\nIf $X$ is a scheme over\n${\\mathbf F}_p$,\nwe have a canonical isomorphism\n\\begin{equation}\nF\\Omega^1_X\n\\to F^*\\Omega^1_X\n\\label{eqFFX}\n\\end{equation}\nto the pull-back by\nthe absolute Frobenius morphism\n$F\\colon X\\to X$,\nsending $w(a)$ to $da$.\n\n\nFor a morphism \n$f\\colon X\\to Y$ of schemes,\nwe have a canonical morphism\n\\begin{equation}\nf^*F\\Omega^1_Y\\to\nF\\Omega^1_X\n\\label{eqFXY}\n\\end{equation}\n\n\n\n\n\n\n\n\\begin{pr}[{\\rm \\cite[Proposition 2.4]{FW}}]\\label{prdx}\nLet $X$ be a scheme\nand $x\\in X$\nbe a point such that\nthe residue field $k(x)={\\cal O}_{X,x}\/\n{\\mathfrak m}_{X,x}$\nis of characteristic $p$.\nFor a $k(x)$-vector space $M$,\nlet $F^*M$\ndenote the tensor product\n$M\\otimes_{k(x)}k(x)$\nwith respect to the Frobenius\n$F\\colon k(x)\\to k(x)$.\nThen, we have an exact\nsequence \n\\begin{equation}\n\\begin{CD}\n0@>>>\nF^*({\\mathfrak m}_{X,x}\/\n{\\mathfrak m}_{X,x}^2)\n@>{w}>>\nF\\Omega^1_{X,x}\n\\otimes_{{\\cal O}_{X,x}} k(x)\n@>{\\rm (\\ref{eqFFX})}>>\nF^*\\Omega^1_{k(x)}\n@>>>0\n\\end{CD}\n\\label{eqdx}\n\\end{equation}\nof $k(x)$-vector spaces.\n\\end{pr}\n\n\n\n\\begin{pr}[{\\rm \\cite[Proposition 2.8]{FW}}]\\label{prsm}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of\nregular noetherian schemes\nover ${\\mathbf Z}_{(p)}$.\nThen the following conditions\nare equivalent:\n\n{\\rm (1)}\n$f\\colon X\\to Y$ is smooth\non a neighborhood of\n$X_{{\\mathbf F}_p}$.\n\n{\\rm 
(2)}\nThe sequence \n\\begin{equation}\n\\begin{CD}\n0@>>>\nf^*F\\Omega^1_Y\n@>{\\rm (\\ref{eqFXY})}>>\nF\\Omega^1_X\n@>{\\rm (\\ref{eqFFX})}>>F^*\\Omega^1_{\nX_{{\\mathbf F}_p}\/\nY_{{\\mathbf F}_p}}\n@>>>\n0\n\\end{CD}\n\\end{equation}\nof ${\\cal O}_{X_{{\\mathbf F}_p}}$-modules\nis a locally split exact sequence.\n\\end{pr}\n\n\n\\begin{thm}[{\\rm \\cite[Theorem 3.1]{FW}}]\n\\label{thmreg}\nLet $X$ be a noetherian scheme\nover ${\\mathbf Z}_{(p)}$\nand $X_{{\\mathbf F}_p}=\nX\\times_{{\\rm Spec}\\, \n{\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$\nbe the closed subscheme.\nAssume that the reduced\npart $X_{{\\mathbf F}_p,{\\rm red}}$ is\na scheme of finite type over\na field $k$ with finite $p$-basis.\nIf $X$ is regular and \nis equi-dimensional\nof dimension $n$ and if\n$[k:k^p]=p^r$, then \nthe ${\\cal O}_{X_{{\\mathbf F}_p}}$-module\n$F\\Omega^1_X$\nis locally free of rank $n+r$.\n\\end{thm}\n\n\n\n\n\\begin{cor}[{\\rm \\cite[Corollary 2.6,\nCorollary 3.2]{FW}}]\n\\label{corXZ}\nLet $X$ be a regular noetherian scheme\nover ${\\mathbf Z}_{(p)}$\nsuch that the reduced\npart $X_{{\\mathbf F}_p,{\\rm red}}$ \nof\n$X_{{\\mathbf F}_p}=X\\times_{{\\rm Spec}\\, \n{\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$\nis\na scheme of finite type over\na field $k$ of finite $p$-basis.\nLet $Z\\subset X$ be a closed subscheme.\n\nWe consider the following conditions:\n\n{\\rm (1)}\n$Z$ is regular on a neighborhood of\n$Z_{{\\mathbf F}_p}=Z\\times_{{\\rm Spec}\\, \n{\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$.\n\n{\\rm (1$'$)}\nAt every point $x\\in Z\n_{{\\mathbf F}_p}$,\nthe local ring\n${\\cal O}_{Z,x}$ is regular.\n\n{\\rm (2)}\nThe sequence \n\\begin{equation}\n\\begin{CD}\n0@>>>\nF^*(N_{Z\/X}\\otimes_{{\\cal O}_Z}\n{\\cal O}_{Z_{{\\mathbf F}_p}})\n@>w>>\nF\\Omega^1_X\\otimes_{{\\cal O}_X}\n{\\cal O}_{Z_{{\\mathbf F}_p}}\n\\longrightarrow\nF\\Omega^1_Z\n@>>>\n0\n\\end{CD}\n\\label{eqXZ}\n\\end{equation}\nof \n${\\cal O}_{Z_{{\\mathbf F}_p}}$-modules\nis a 
locally split exact sequence.\n\nThen, we have\n{\\rm (1)}$\\Rightarrow${\\rm (2)}$\\Rightarrow${\\rm (1$'$)}.\nConsequently \nif the subset\n${\\rm Reg}(Z)\n\\subset Z$ consisting\nof regular points is an open subset,\nthe three conditions are equivalent.\n\\end{cor}\n\n\n\\proof{\nThe implications\n{\\rm (1)}$\\Rightarrow${\\rm (2)} and\n{\\rm (2)}$\\Rightarrow${\\rm (1$'$)}\nare proved in \n\\cite[Corollary 3.2]{FW} and in\n\\cite[Corollary 2.6.1]{FW},\nrespectively.\nSince (1$'$) means\n$Z_{{\\mathbf F}_p}\n\\subset {\\rm Reg}(Z)$,\nthe last assertion follows.\n\\qed\n\n}\n\n\n\n\n\n\\begin{df}\\label{dfFTX}\nLet $k$ be a perfect field \nof characteristic $p>0$\nand let $X$ be\na regular noetherian scheme\nsatisfying the following condition:\n\n{\\rm (F)}\n$X_{{\\mathbf F}_p}\n=X\\times_{{\\rm Spec}\\, {\\mathbf Z}}\n{\\rm Spec}\\, {\\mathbf F}_p$\nis a scheme of\nfinite type over $k$.\n\n\\noindent\nThen, we define\nthe FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$\nof $X$ to be the vector bundle\non $X_{{\\mathbf F}_p}$\nassociated with the locally free\n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\n$F\\Omega^1_X$\nof rank $\\dim X$.\n\\end{df}\n\nLet $x\\in X_{{\\mathbf F}_p}$\nbe a closed point \nand let\n$T^*_xX$\ndenote the cotangent space\nat $x$ defined as a scheme\n${\\rm Spec}\\,\nS_{k(x)}({\\mathfrak m}_x\/\n{\\mathfrak m}_x^2)^\\vee$\nassociated to the\n$k(x)$-vector space\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$.\nSince $k(x)$ is perfect,\nthe exact sequence\n(\\ref{eqdx}) defines a canonical\nisomorphism\n\\begin{equation}\nF^*T^*_xX\n\\to \nFT^*X|_x\n\\label{eqTx}\n\\end{equation}\nto the fiber of \nthe FW-cotangent bundle\nat $x$ \nfrom the pull-back by Frobenius\n$F\\colon x\\to x$ of \n$T^*_xX$.\nIf $X=X_{{\\mathbf F}_p}$,\nthen the FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$\nis the pull-back of\nthe cotangent bundle\n$T^*X$ \nby the Frobenius morphism\n$F\\colon X\\to X$\nby {\\rm (\\ref{eqFFX})}.\n\n\nLet $X\\to Y$ be a morphism\nof 
finite type\nof regular noetherian schemes\nsatisfying the condition (F)\nin Definition \ref{dfFTX}.\nThen, the morphism (\ref{eqFXY})\ndefines morphisms\n\begin{equation}\n\begin{CD}\nFT^*X|_{X_{{\mathbf F}_p}}\n@<{f^*}<<\nFT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nX_{{\mathbf F}_p}\n@>>>\nFT^*Y|_{Y_{{\mathbf F}_p}}\n\end{CD}\n\label{eqdXY}\n\end{equation}\n of\nschemes.\n\nAssume that $X\to Y$ is\nsmooth and let \n$F^*T^*X\/Y|_{\nX_{{\mathbf F}_p}}$ denote the\npull-back by the Frobenius\n$F\colon X_{{\mathbf F}_p}\n\to X_{{\mathbf F}_p}$\nof the restriction to\n$X_{{\mathbf F}_p}$ of \nthe vector\nbundle $T^*X\/Y$ defined by\nthe locally free ${\cal O}_X$-module\n$\Omega^1_{X\/Y}$.\nThen, by Proposition \ref{prsm},\nwe have an exact sequence\n\begin{equation}\n0\to FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nX_{{\mathbf F}_p}\n\to FT^*X|_{X_{{\mathbf F}_p}}\n\to F^*T^*X\/Y|_{\nX_{{\mathbf F}_p}}\to 0\n\label{eqTEf}\n\end{equation}\nof vector bundles on $X_{{\mathbf F}_p}$.\n\nSimilarly,\nlet $Z\to X$ be a closed immersion\nof regular noetherian schemes\nsatisfying the condition (F).\nLet ${\cal I}_Z\subset {\cal O}_X$\nbe the ideal sheaf\nand let $T^*_ZX$ be\nthe conormal bundle\ndefined by\nthe locally free ${\cal O}_Z$-module\n${\cal I}_Z\/{\cal I}_Z^2$.\nLet \n$F^*T^*_ZX|_{\nZ_{{\mathbf F}_p}}$ denote the\npull-back by the Frobenius\n$F\colon Z_{{\mathbf F}_p}\n\to Z_{{\mathbf F}_p}$\nof the restriction to\n$Z_{{\mathbf F}_p}$ of $T^*_ZX$.\nThen, by Corollary \ref{corXZ},\nwe have an exact sequence\n\begin{equation}\n0\to F^*T^*_ZX|_{\nZ_{{\mathbf F}_p}}\to \nFT^*X|_{Z_{{\mathbf F}_p}}\n\to \nFT^*Z|_{Z_{{\mathbf F}_p}}\n\to 0\n\label{eqTEi}\n\end{equation}\nof vector bundles on $Z_{{\mathbf F}_p}$.\n\n\n\subsection{$C$-transversality}\label{ssCtr}\n\nIn the rest of this section,\nwe fix a perfect field $k$\nof characteristic $p>0$.\n\nWe fix some terminology\non closed conical 
subsets of\na vector bundle on a scheme.\nLet $V$ be a vector bundle\nover a scheme $Y$.\nWe say that a closed subset\nof $V$ is conical if it is\nstable under the action of\n${\mathbf G}_{m,Y}$.\nFor a closed conical subset\n$C\subset V$,\nthe intersection\n$B=C\cap Y$ with the\n$0$-section $Y\subset V$ regarded\nas a closed subset of $Y$\nis called the base of $C$.\nThe base $B$ equals the\nimage of $C$ by\nthe projection $V\to Y$.\n\n\nWe say that a separated\nmorphism $f\colon X\to Y$\nof finite type of schemes\nis proper on a closed subset $Z\subset X$\nif for every base change\n$f'\colon X'\to Y'$ of $f$\nits restriction to\nthe inverse image $Z'\n\subset X'$ is a closed mapping.\nFor a morphism\n$V\to V'$ of vector bundles\non a scheme $Y$\nand a closed conical subset\n$C$ of $V$,\nthe morphism $V\to V'$\nis proper on $C$ if and only\nif the intersection\n$C\cap {\rm Ker}(V\to V')$\nis a subset of the $0$-section of $V$\nby \cite[Lemma 1.2(ii)]{Be}.\n\n\n\begin{df}\label{dfhC}\nLet $X$ be a \nregular noetherian scheme\nsatisfying the condition {\rm (F)}\nin Definition {\rm \ref{dfFTX}}\nand let $C\subset FT^*X|_{X_{{\mathbf F}_p}}\n$ be a closed\nconical subset\nof the FW-cotangent bundle.\nLet $h\colon W\to X$\nbe a morphism of finite type\nof regular schemes.\n\n\n{\rm 1.}\n{\rm (\cite[1.2]{Be}, \cite[Definition 3.3]{CC})}\nWe say that \n$h\colon W\to X$ is $C$-transversal\nif \nthe intersection of\n$h^*C=C\times_XW\n\subset \nFT^*X|_{X_{{\mathbf F}_p}}\n\times_{X_{{\mathbf F}_p}}\nW_{{\mathbf F}_p}$\nwith the kernel \n${\rm Ker}(\nFT^*X|_{X_{{\mathbf F}_p}}\n\times_{X_{{\mathbf F}_p}}\nW_{{\mathbf F}_p}\n\to\nFT^*W|_{W_{{\mathbf F}_p}})$\nis a subset of the $0$-section.\n\n{\rm 2.}\nAssume that $h$ is\n$C$-transversal.\nThen we define\na closed conical subset \n$h^\circ C\n\subset FT^*W|_{W_{{\mathbf F}_p}}$\nto be the image\nof $h^*C$ by\n$\nFT^*X|_{X_{{\mathbf F}_p}}\n\times_{X_{{\mathbf 
F}_p}}\nW_{{\\mathbf F}_p}\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}}$.\n\\end{df}\n\nExample.\nLet $Z\\subset X$ be a regular closed subscheme.\nThen a closed\nconical subset \n$C=F^*T^*_ZX|_{Z_{{\\mathbf F}_p}}\n\\subset \nFT^*X|_{X_{{\\mathbf F}_p}}$\nis defined by (\\ref{eqTEi}).\nIn particular,\nfor $Z=X$,\nthe $0$-section\n$F^*T^*_XX|_{X_{{\\mathbf F}_p}}\n=X_{{\\mathbf F}_p}$ is\na closed conical subset of\n$FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n\n\n\\begin{lm}\\label{lmTXC}\nLet $X$ be a \nregular noetherian scheme\nsatisfying the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}}\nand let $C\\subset FT^*X|_{X_{{\\mathbf F}_p}}\n$ be a closed\nconical subset.\nLet $h\\colon W\\to X$\nbe a morphism of finite type\nof regular schemes.\n\n{\\rm 1.}\nLet $C=FT^*X|_Z$\nbe the restriction to \na closed subset\n$Z\\subset X_{{\\mathbf F}_p}$\nof the closed fiber.\nIf $h$ is $C$-transversal,\nthen $h$ is smooth\non a neighborhood of\nthe inverse image $h^{-1}(Z)$.\n\n{\\rm 2.}\nIf $C$ is the $0$-section\nof $FT^*X|_{X_{{\\mathbf F}_p}}$,\nthen $h$ is $C$-transversal.\n\n{\\rm 3.}\nIf $h$ is smooth,\nfor any closed conical subset\n$C$ of $FT^*X|_{X_{{\\mathbf F}_p}}$,\nthe morphism\n$h$ is $C$-transversal.\n\\end{lm}\n\n\n\\proof{\n1.\nThe condition that the\nintersection of\n$h^*C=FT^*X|_Z\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n=FT^*X\n\\times_{X_{{\\mathbf F}_p}}\nh^{-1}(Z)$\nwith the kernel\n${\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section\nmeans that\n$F\\Omega^1_X\n\\otimes_{{\\cal O}_X}{\\cal O}_W\n\\to F\\Omega^1_W$\nis a locally splitting injection\non a neighborhood of $h^{-1}(Z)$.\nBy Proposition \\ref{prsm},\nthis means that\n$W\\to X$\nis smooth on a neighborhood of\nthe inverse image $h^{-1}(Z)$.\n\n\n2.\nIf $C$ is the $0$-section,\nits intersection with the\nkernel \n${\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf 
F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis also the $0$-section.\n\n\n\n3.\nIf $h$ is smooth,\nthe morphism $FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}}$ is an injection\nby Proposition \\ref{prsm}.\nHence \nfor any subset\n$C\\subset FT^*X|_{X_{{\\mathbf F}_p}}$,\nits intersection with the\nkernel \n${\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\n\\qed\n\n}\n\n\\begin{lm}\\label{lmhC}\nLet $h\\colon W\\to X$\nbe a morphism of\nfinite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}\nand let $C$ be a closed\nconical subset of $FT^*X|_{X_{{\\mathbf F}_p}}$.\nAssume that $h$ is\n$C$-transversal.\nThen, for a morphism \n$g\\colon V\\to W$ of finite type of\nregular noetherian schemes\nthe following conditions\nare equivalent:\n\n{\\rm (1)}\nThe morphism\n$g$ is $h^\\circ C$-transversal.\n\n{\\rm (2)}\nThe composition\n$hg$ is $C$-transversal.\n\n\\noindent\nIf these equivalent conditions\nare satisfied,\nwe have $(hg)^\\circ C=g^\\circ h^\\circ C$.\n\\end{lm}\n\n\\proof{\nThe condition (1)\nmeans that\nthe intersection\n$h^*C\\cap {\\rm Ker}(\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section\nand further that\nfor $h^\\circ C\n\\subset FT^*W|_{W_{{\\mathbf F}_p}}$,\nthe intersection\n$g^*h^\\circ C\\cap {\\rm Ker}(\nFT^*W|_{W_{{\\mathbf F}_p}}\n\\times_{W_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\nThis means that\n$(hg)^*C\\cap {\\rm Ker}(\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}})$\nis a subset of the $0$-section,\nnamely the condition (2).\n\nThe image of \n$(hg)^*C$ by\n$FT^*X|_{X_{{\\mathbf 
F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}}$\nequals \nthe image of $g^*h^\\circ C$\nby \n$FT^*W|_{W_{{\\mathbf F}_p}}\n\\times_{W_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}}$.\n\\qed\n\n}\n\n\\medskip\nThe terminology transversality\nis related to the transversality\nof morphisms of regular\nschemes defined as follows.\n\n\n\n\n\n\\begin{df}\\label{dftrans}\nLet $f\\colon X\\to Y$\nand $g\\colon V\\to Y$\nbe morphisms of finite type of\nregular schemes\nand set $W\n=X\\times_YV$.\n\n\n\n{\\rm 1.}\nLet $w\\in W$\nand $x\\in X,y\\in Y,v\\in V$\nbe the images.\nWe say that \n$f$ and $g$ are transversal\nat $w$, if ${\\cal O}_{W,w}$\nis regular and if\n$Tor_q^{{\\cal O}_{Y,y}}\n({\\cal O}_{X,x},{\\cal O}_{V,v})=0$\nfor $q>0$.\n\n{\\rm 2.}\nLet $W_1\\subset W$\nbe an open subscheme.\nWe say that \n$f$ and $g$ are transversal on\n$W_1$ if \n$f$ and $g$ are transversal \nat every point of $W_1$.\n\\end{df}\n\nExample.\nLet $Z\\subset X$ be a regular closed subscheme\nand \n$C=F^*T^*_ZX|_{Z_{{\\mathbf F}_p}}\n\\subset \nFT^*X|_{Z_{{\\mathbf F}_p}}$\nbe the closed\nconical subset defined by the conormal bundle.\nThen, as we will see\nin Corollary \\ref{corfC},\na morphism\n$h\\colon W\\to X$ of\nfinite type \nof regular quasi-excellent\nnoetherian schemes\nis $C$-transversal\nif and only if\n$h\\colon W\\to X$\nis transversal to $Z\\subset X$\non a neighborhood of\nthe closed fiber $W_{{\\mathbf F}_p}$.\n\nIn particular, \nif $X$ is smooth over \na discrete valuation \nring ${\\cal O}_K$ of mixed\ncharacteristic with residue field $k$\nand if $C=F^*T^*_{X_k}X|_{X_k}$\nfor the closed fiber $Z=X_k$,\nthen the condition that\n$h\\colon W\\to X$ is\n$C$-transversal\nmeans that\n$W$ is smooth over ${\\cal O}_K$ \non a neighborhood of\nthe closed fiber $W_k$.\n\n\n\n\n\\begin{lm}\\label{lmtrreg}\nLet $f\\colon X\\to Y$\nand $g\\colon V\\to Y$\nbe morphisms of finite type of\nregular schemes\nand set 
$W\n=X\times_YV$.\nLet $w\in W$\nand $x\in X,y\in Y,v\in V$\nbe the images.\n\n{\rm 1.}\nSuppose that $g\colon V\to Y$\nis an immersion.\nThen, the following conditions\nare equivalent:\n\n{\rm (1)}\n$f$ and $g$ are transversal\nat $w$.\n\n{\rm (2)}\nThe morphism\n$T^*_yY\times_yx\to T^*_xX$\non the cotangent space\ninduces an injection\non the subspace\n$T^*_VY\times_Vy\n\subset\nT^*_yY\times_yx$.\n\n{\rm 2.}\nSuppose that the\nsubset ${\rm Reg}(W)\n\subset W$\nconsisting of regular points\nis an open subset.\nIf\n$f$ and $g$ are transversal\nat $w\in W$,\nthen\n$f$ and $g$ are transversal\non a neighborhood of $w$.\n\end{lm}\n\n\n\nThe condition that\n${\rm Reg}(W)\n\subset W$\nis an open subset is satisfied\nif $W$ is of finite type\nover a Dedekind domain\nsuch that the fraction field\nis of characteristic $0$\nor a semi-local ring of dimension\nat most $1$\nby \cite[Corollaire (6.12.6)]{EGA4}.\n\n\n\proof{\n1.\nLet $a_1,\ldots,a_r\in {\cal O}_{Y,y}$\nbe a minimal system of generators\nof ${\rm Ker}({\cal O}_{Y,y}\n\to {\cal O}_{V,y})$.\nThen, both conditions are\nequivalent to the condition\nthat $a_1,\ldots,a_r\in {\cal O}_{X,x}$\nform part of a regular system of\nparameters.\n\n\n2.\nSince the ${\cal O}_W$-modules\n${\cal T}or_q^{{\cal O}_Y}\n({\cal O}_X,{\cal O}_V)$\nfor $q>0$\nare coherent and vanish at $w$,\nthey vanish on a neighborhood of $w$.\nSince $w$ is an element\nof the open subset ${\rm Reg}(W)$,\nthe assertion follows.\n\qed\n\n}\n\n\n\medskip\n\n\nLet $f\colon X\to Y$\nbe a morphism of finite type of\nregular noetherian schemes\nsuch that $Y_{{\mathbf F}_p}$\nis of finite type\nover $k$\nand consider the\nmorphisms\n(\ref{eqdXY}).\nLet $C$ be a closed conical subset\nof $FT^*X|_{X_{{\mathbf F}_p}}$\nsuch that \n$f\colon X\to Y$ is proper\non the base $B(C)$.\nThen we define a closed conical subset\n$f_\circ C$ of $FT^*Y|_{Y_{{\mathbf F}_p}}$\nto be\nthe image by\n$FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nX_{{\mathbf F}_p}\n\to\nFT^*Y|_{Y_{{\mathbf F}_p}}$\nof the inverse image of\n$C$ by\n$FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nX_{{\mathbf F}_p}\n\to\nFT^*X|_{X_{{\mathbf F}_p}}$.\n\n\nFor a closed immersion\n$i\colon Z\to X$\nof regular noetherian schemes\nsuch that $X_{{\mathbf F}_p}$\nis of finite type\nover $k$,\nthe closed conical subset\n$F^*T^*_ZX|_{X_{{\mathbf F}_p}}$\ndefined by the conormal bundle\nequals $i_\circ C$\nfor the $0$-section\n$C=FT^*_ZZ|_{Z_{{\mathbf F}_p}}$\nof $FT^*Z|_{Z_{{\mathbf F}_p}}$.\n\n\n\n\begin{pr}\label{prfC}\nLet $X,Y$ and $V$ be regular\nnoetherian schemes \nsatisfying the condition {\rm (F)} and\n\begin{equation}\n\begin{CD}\nX@>> Q@>>> V\\\n@VVV@VVV@VVV@VVV\\\nX@<{\supset}<>> P@>>>Y\n\end{CD}$$\nsuch that $P\to Y$\nis smooth and \n$U\to P$ is a closed immersion.\nLet $w\in W$ be a closed point\nabove $x$ and $v=f'(w)\in V$.\nWe may also assume that\nthe morphisms \n$k(y)\to k(v)$\nand hence $k(x)\to k(w)$\nare isomorphisms.\nWe consider the cartesian\ndiagram\n$$\begin{CD}\n@.\nT^*_wQ\n@<<<\nT^*_vV\n\\\n@.@AAA@AAA\\\nT^*_xX\n@<<<\nT^*_xP\n@<<<\nT^*_yY\n\end{CD}$$\nof cotangent spaces\nand identify their Frobenius\npull-backs with the fibers\nof FW-cotangent bundles\nby the isomorphism (\ref{eqTx}).\n\nLet $\widetilde C_x\n\subset F^*T^*_xP$\nand $A_x \subset F^*T^*_yY$\nbe the inverse images of\n$C_x\subset F^*T^*_xX$.\nThen, by the condition (1),\nthe intersection \n$A_x\cap {\rm Ker}(F^*T^*_yY \to \nF^*T^*_vV)$\nis a subset of the $0$-section.\nSince\n$T^*_yY \to T^*_xP$\ninduces an isomorphism\n${\rm Ker}(T^*_yY\to T^*_vV)\n\to \n{\rm Ker}(T^*_xP\to T^*_wQ)$,\nthe intersection \n$\widetilde C_x\cap\n{\rm Ker}(F^*T^*_xP\to F^*T^*_wQ)$\nis a subset of the $0$-section.\n\nBy the exact sequence\n$0\to T^*_XP|_x\n\to T^*_xP\to T^*_xX\to 0$\nand $x\in B(C)$,\nwe have\n$F^*T^*_XP|_x\subset \widetilde C_x$.\nHence\n$T^*_xP\to T^*_wQ$\ninduces an injection 
on\n$T^*_XP|_x$.\nNamely, \nthe morphism\n$Q\to P$ and the immersion\n$U\to P$ are transversal\non a neighborhood of $w$\nby Lemma \ref{lmtrreg}.\n\nHence\nthe horizontal arrows\nof the commutative diagram\n\begin{equation}\n\begin{CD}\nT^*_wW\n@<<<\nT^*_vV\n\\\n@AAA@AAA\\\nT^*_xX\n@<<<\nT^*_yY\n\end{CD}\n\label{eqUVW}\n\end{equation}\ninduce isomorphisms\non the kernels and cokernels\nof the vertical arrows.\nSince the intersection of\nthe inverse image $A_x$ with \n${\rm Ker}(F^*T^*_yY\n\to F^*T^*_vV)$\nis a subset of the $0$-section,\nthe intersection of\n$C_x$ with \n${\rm Ker}(F^*T^*_xX\n\to F^*T^*_wW)$\nis also a subset of the $0$-section.\nNamely,\n$h$ is $C$-transversal\non a neighborhood of $w$.\nThus $h$ is $C$-transversal on\na neighborhood of \nthe inverse image of $B(C)$.\n\nFurther, an elementary\ndiagram chase shows\nthat the inverse image of\n$h^\circ C|_w$\nby $F^*T^*_wW\gets F^*T^*_vV$\nequals the image of\n$A_x$ by\n$F^*T^*_yY\to F^*T^*_vV$.\nHence we have\n$g^\circ f_\circ C=\nf'_{1\circ} h_1^\circ C$.\n\n\n\n(2)$\Rightarrow$(1):\nLet $w\in B(h_1^\circ C)$\nbe a closed point\nand let $v\in V, x\in X$\nand $y\in Y$ be the images.\nThen, the commutative diagram\n(\ref{eqUVW})\ninduces an isomorphism\n${\rm Ker}(T^*_yY\to T^*_vV)\n\to\n{\rm Ker}(T^*_xX\to T^*_wW)$\non the kernels.\nIn the same notation,\nsince the intersection of\n$C_x$ with \n${\rm Ker}(F^*T^*_xX\to F^*T^*_wW)$\nis a subset of the $0$-section,\nthe intersection of\n$A_x$ with \n${\rm Ker}(F^*T^*_yY\to F^*T^*_vV)$\nis also a subset of the $0$-section.\n\qed\n\n}\n\n\begin{cor}\label{corfC}\nLet $X,Y$ and $V$\nbe regular noetherian schemes \nsatisfying the condition {\rm (F)}\nand let\n{\rm (\ref{eqprfC})}\nbe a cartesian diagram\nof morphisms of finite type.\nAssume that the\nsubset ${\rm Reg}(W)\n\subset W$\nconsisting of regular points\nis an open subset\nand that\n$f\colon X\to Y$ is an immersion.\nThen,\nthe following 
conditions\nare equivalent:\n\n{\rm (1)}\nThe morphism\n$g$ is $F^*T^*_XY|_{Y_{{\mathbf F}_p}}$-transversal.\n\n{\rm (2)}\nThe morphism $g\colon V\to Y$\nis transversal to\nthe immersion\n$f\colon X\to Y$ on\na neighborhood of $W_{{\mathbf F}_p}\n=V\times_YX_{{\mathbf F}_p}$.\n\end{cor}\n\n\proof{\nIt suffices to apply\nProposition \ref{prfC} \ntogether with Lemma \ref{lmTXC}.2\nto\nthe $0$-section $C$\nof $FT^*X|_{X_{{\mathbf F}_p}}$.\n\qed\n\n}\n\n\begin{df}\label{dfCet}\nLet $f\colon U\to X$ be an \'etale morphism\nof regular noetherian schemes \nsatisfying the condition {\rm (F)}\nand let $C'$ be a closed\nconical subset of $FT^*U$.\nWe identify $FT^*U$ with\nthe pull-back\n$FT^*X\times_{X_{{\mathbf F}_p}}\nU_{{\mathbf F}_p}$ by the\ncanonical isomorphism\ninduced by\n$F\Omega^1_X\otimes_{{\cal O}_X}\n{\cal O}_U\to\nF\Omega^1_U$ \nand let\n${\rm pr}_1\colon \nFT^*X\times_{X_{{\mathbf F}_p}}\nU_{{\mathbf F}_p} \to \nFT^*X$ be the projection.\nThen, we define\na closed conical subset $f_*C'$ of \n$FT^*X$ to be the union of\nthe closure $\overline{{\rm pr}_1(C')}$\nand the restriction\n$FT^*X|_{X_{{\mathbf F}_p}\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~} \nf(U_{{\mathbf F}_p})}$ \nto the complement of the image.\n\end{df}\n\n\begin{lm}\label{lmCet}\nLet $$\n\begin{CD}\nV@>>>W\\\n@VgVV@VVhV\\\nU@>f>>X\n\end{CD}\n$$ be a cartesian diagram\nof regular noetherian schemes \nsatisfying the condition {\rm (F)}\nsuch that $f$ is\nan \'etale morphism of finite type.\nLet $C'$ be a closed\nconical subset of $FT^*U$\nand set $C=f_*C'\subset FT^*X$\nas in Definition {\rm \ref{dfCet}}.\n\nIf $h$ is $C$-transversal,\nthen $h$ is smooth on a neighborhood\nof $h^{-1}(X_{{\mathbf F}_p}\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~} \nf(U_{{\mathbf F}_p}))$\nand $g$ is $C'$-transversal.\n\end{lm}\n\n\proof{\nAssume that\n$h$ is $C$-transversal.\nSince \n$C\supset \nFT^*X|_{X_{{\mathbf 
F}_p}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nh(U_{{\\mathbf F}_p})}$,\nthe morphism \n$h$ is smooth on a neighborhood\nof $h^{-1}(X_{{\\mathbf F}_p}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nf(W_{{\\mathbf F}_p}))$\nby Lemma \\ref{lmTXC}.1.\nSince \n$f^\\circ C\\supset C'$,\nthe morphism \n$g$ is $C'$-transversal\nby Lemma \\ref{lmhC}.\n\\qed\n\n}\n\n\\subsection{$C$-acyclicity}\\label{ssCac}\n\nWe keep fixing a perfect field\n$k$ of characteristic $p>0$.\n\n\\begin{df}\\label{dffC}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}}\nand \nlet $C$\nbe a closed conical subset\nof the FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$.\nWe say that $f$\nis $C$-acyclic if the inverse image of\n$C$ by the morphism\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to FT^*X|_{{\\mathbf F}_p}$\nis a subset of the $0$-section.\n\\end{df}\n\n\nThe corresponding notion is\ncalled $C$-transversality\nin \\cite[1.2]{Be} and\n\\cite[Definition 3.5]{CC}.\nHere to avoid confusion with\nthe $C$-transversality \nfor morphisms to $X$ in\nDefinition \\ref{dfhC}.1,\n\\cite[1.2]{Be} and\n\\cite[Definition 3.3]{CC},\nwe introduce another terminology.\nWe will show in Lemma \\ref{lmhf}.2\nthat\nfor a morphism\n$f\\colon X\\to Y$\nof regular schemes and\na closed immersion\n$i\\colon Z\\to X$\nof regular schemes,\nthe morphism\n$f$ is $F^*T^*_ZX|_{X_{{\\mathbf F}_p}}$-acyclic\nif and only if\nthe composition\n$fi$ is smooth on a neighborhood\nof $Z_{{\\mathbf F}_p}$.\n\n\n\\begin{lm}\\label{lmftr}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}\nand \nlet $C$\nbe a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n{\\rm 1.}\nThe following conditions\nare equivalent:\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\n$f$ is smooth\non a neighborhood of\nthe base 
$B(C)\\subset \nX_{{\\mathbf F}_p}$\nand \nthe intersection of\n$C\\subset FT^*X|_{X_{{\\mathbf F}_p}}$ \nwith the image of the morphism\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}X_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\nis a subset of the $0$-section.\n\n\n{\\rm 2.}\nIf $C$ is the $0$-section\n$F^*T^*_XX|_{X_{{\\mathbf F}_p}}$,\nthe following conditions\nare equivalent:\n\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\n$f$ is smooth\non the neighborhood of\n$X_{{\\mathbf F}_p}$.\n\\end{lm}\n\n\\proof{\n1.\nThe condition (1)\nis equivalent to the conjunction\nof the following (1$'$) and (1$''$):\n\n(1$'$)\nThe inverse image of\nthe $0$-section by\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}X_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\non the base $B(C)\\subset X_{{\\mathbf F}_p}$\nis a subset of the $0$-sections.\n\n(1$''$)\nThe intersection of\n$C\\subset FT^*X|_{X_{{\\mathbf F}_p}}$ \nwith the image of the morphism\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}X_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\nis a subset of the $0$-sections.\n\n\\noindent\nThe condition (1$'$)\nmeans that\nthe morphism\n$f^*F\\Omega^1_Y\n\\to\nF\\Omega^1_X$\nis a locally splitting injection\non a neighborhood\nof the base $B(C)\\subset X_{{\\mathbf F}_p}$.\nHence the assertion follows\nfrom Proposition \\ref{prsm}.\n\n2.\nFor the $0$-section\n$C=F^*T^*_XX|_{X_{{\\mathbf F}_p}}$,\nthe base\n$B(C)$ is $X_{{\\mathbf F}_p}$\nand the condition \n(1$''$) in the proof of\n1 is satisfied.\nHence the assertion follows from 1.\n\\qed\n\n}\n\n\n\\begin{pr}\\label{prfCX}\nLet $X,Y,V$\nbe regular noetherian schemes\nsatisfying the condition {\\rm (F)}\nand let\n$$\\begin{CD}\nX@>> FT^*W|_{W_{{\\mathbf F}_p}}\n@>>> F^*T^*W\/V|_{W_{{\\mathbf F}_p}}\n&\\to 0\\\\\n&@AAA@AAA@AA{\\cong}A&\\\\\n0\\to& FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n@>>> FT^*X|_{X_{{\\mathbf 
F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n@>>> F^*T^*X\/Y|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n&\\to0\\\\\n\\end{CD}$$\nof exact sequences\nof vector bundles on $W_{{\\mathbf F}_p}$.\nLet $C'\\subset FT^*W|_{W_{{\\mathbf F}_p}}$\nbe the image\nof $h^*C=C\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\subset FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$\nand let\n$A\\subset FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$\nand $A'\\subset FT^*V|_{V_{{\\mathbf F}_p}}\n\\times_{V_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$\nbe their inverse images.\n\nSince the right vertical arrow is an\nisomorphism,\nthe lower left arrow induces\nan isomorphism\n${\\rm Ker}(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to \nFT^*V|_{V_{{\\mathbf F}_p}}\n\\times_{V_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\to\n{\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$.\nHence $A\n\\subset FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$ is a subset\nof the $0$-section\nif and only if\n$A'\\subset\nFT^*V|_{V_{{\\mathbf F}_p}}\n\\times_{V_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$ and\n$h^*C\\cap \n{\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nare subsets\nof the $0$-sections\nand the assertion follows.\n\\qed\n\n}\n\n\n\\begin{lm}\\label{lmhf}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}.\n\n{\\rm 1.}\nLet $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$\nand \nassume that $f$ is proper\non the base $B(C)$.\nLet $g\\colon Y\\to Z$\nbe a morphism of finite type of \nregular noetherian schemes\nsuch that $Z_{{\\mathbf F}_p}$\nis of finite type over $k$.\nThen the following 
conditions are\nequivalent:\n\n{\rm (1)}\n$g$ is $f_\circ C$-acyclic.\n\n{\rm (2)}\n$gf$ is $C$-acyclic.\n\n{\rm 2.}\nLet $p\colon V\to X$\nbe a proper morphism of\nregular schemes\nand let $C=p_\circ F^*T^*_VV|_{V_{{\mathbf F}_p}}\n\subset FT^*X|_{X_{{\mathbf F}_p}}$.\nThen, the following conditions\nare equivalent:\n\n{\rm (1)}\n$f$ is $C$-acyclic.\n\n{\rm (2)}\nThe composition\n$fp$ is smooth\non a neighborhood of $V_{{\mathbf F}_p}$.\n\end{lm}\n\n\proof{\n1.\nLet $x\in X_{{\mathbf F}_p}$ be a closed\npoint and $y\in Y_{{\mathbf F}_p}$ and $z\in Z_{{\mathbf F}_p}$\nbe the images.\nSince the assertion is \'etale local,\nwe may also assume that\nthe morphisms \n$k(z)\to k(y)\to k(x)$\nare isomorphisms.\n\nLet $A_x$ be the inverse image of\n$C_x$ by $F^*T^*_xX\gets F^*T^*_yY$.\nThen, the inverse image $A'_x$\nof $C_x$ by $F^*T^*_xX\gets F^*T^*_zZ$\nequals the inverse image $A''_x$\nof $A_x$\nby $F^*T^*_yY\gets F^*T^*_zZ$.\nSince the condition (1) \n(resp.\ (2)) is equivalent to\nthe condition that $A'_x$ (resp.\ $A''_x$)\nis a subset of the $0$-section\nfor any $x$,\nthe assertion follows.\n\n2.\nBy 1.~applied to\n$p_\circ F^*T^*_VV|_{V_{{\mathbf F}_p}}\n=F^*T^*_VX|_{X_{{\mathbf F}_p}}$,\nthe condition (1)\nis equivalent to the condition that\nthe composition $fp$\nis $F^*T^*_VV|_{V_{{\mathbf F}_p}}$-acyclic.\nHence the assertion follows from\nLemma \ref{lmftr}.2.\n\qed\n\n}\n\n\n\begin{df}\label{dfhfC}\nLet $X$\nbe a regular noetherian scheme\nsatisfying the condition {\rm (F)}\nand let $C$ be a closed conical subset\nof $FT^*X|_{X_{{\mathbf F}_p}}$.\nWe say that a pair\n$(h,f)$ of morphisms\n$h\colon W\to X$,\n$f\colon W\to Y$\nof finite type\nof regular noetherian schemes\nsuch that $Y_{{\mathbf F}_p}$\nis of finite type over $k$\nis $C$-acyclic\nif the intersection of\n$(C\times_{X_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})\n\times_{W_{{\mathbf F}_p}} \n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})\n\subset 
\n(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})$\nwith the kernel \n${\\rm Ker}((h^*,f^*)\\colon$\n$(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\n\\end{df}\n\n\n\\begin{lm}\\label{lmhfC}\nLet $X$\nbe a regular noetherian scheme\nsatisfying the condition {\\rm (F)}\nand let $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n\n{\\rm 1.}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type\nof regular noetherian schemes\nsatisfying the condition {\\rm (F)}.\nThen, the following conditions are\nequivalent:\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\n$(1_X,f)$ is $C$-acyclic.\n\n{\\rm 2.}\nLet $h\\colon W\\to X$\nand $f\\colon W\\to Y$\nbe morphisms of finite type\nof regular noetherian schemes\nsatisfying the condition {\\rm (F)}.\nThen the following conditions are\nequivalent:\n\n{\\rm (1)}\n$(h,f)$ is $C$-acyclic.\n\n{\\rm (2)}\n$h$ is $C$-transversal\nand \n$f$ is $h^\\circ C$-acyclic.\n\\end{lm}\n\n\\proof{1.\nIdentify the kernel of\n$(1,f^*)\\colon\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p})\n\\to\nFT^*X|_{X_{{\\mathbf F}_p}}$\nwith the image of\nthe injection\n$(f^*,-1)\\colon \nFT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p})$.\nThen the inverse image\nin $\nFT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}$ of\n$C\n\\times_{X_{{\\mathbf F}_p}}\n(FT^*Y|_{Y_{{\\mathbf 
F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nX_{{\mathbf F}_p})\n\subset\nFT^*X|_{X_{{\mathbf F}_p}}\n\times_{X_{{\mathbf F}_p}}\n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nX_{{\mathbf F}_p})$\nis the same as\nthe inverse image of\n$C\subset FT^*X|_{X_{{\mathbf F}_p}}$\nand the assertion follows.\n\n2.\nSince ${\rm Ker}(h^*\colon\nFT^*X|_{X_{{\mathbf F}_p}}\n\times_{X_{{\mathbf F}_p}}\nW_{{\mathbf F}_p}\n\to\nFT^*W|_{W_{{\mathbf F}_p}})\n\times 0\n\subset\n{\rm Ker}((h^*,f^*)\colon\n(FT^*X|_{X_{{\mathbf F}_p}}\n\times_{X_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})\n\times_{W_{{\mathbf F}_p}} \n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})\n\to\nFT^*W|_{W_{{\mathbf F}_p}})$,\nthe $C$-acyclicity\nof $(h,f)$ implies\nthe $C$-transversality of\n$h$.\nBy 1.,\nthe $h^\circ C$-acyclicity\nof $f$ is equivalent to the condition that\nthe intersection of\n$h^\circ C\times_{W_{{\mathbf F}_p}} \n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})$\nwith \n${\rm Ker}(\nFT^*W|_{W_{{\mathbf F}_p}}\n\times_{W_{{\mathbf F}_p}}\n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})\n\to \nFT^*W|_{W_{{\mathbf F}_p}})$\nis a subset of the\n$0$-section.\nThis condition is equivalent to the\n$C$-acyclicity\nof $(h,f)$\nsince \n$h^\circ C\times_{W_{{\mathbf F}_p}} \n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})$\nis the image of \n$h^*C\times_{W_{{\mathbf F}_p}} \n(FT^*Y|_{Y_{{\mathbf F}_p}}\n\times_{Y_{{\mathbf F}_p}}\nW_{{\mathbf F}_p})$.\n\qed\n\n}\n\n\n\section{Micro-support}\label{sms}\n\nWe fix a perfect field $k$ \nof characteristic $p>0$\nand a finite field \n$\Lambda$ of characteristic $\ell\neq p$.\nWe will assume\nthat a regular noetherian scheme $X$\nover ${\mathbf Z}_{(p)}$\nsatisfies the condition {\rm (F)}\nin Definition {\rm \ref{dfFTX}}.\n\n\subsection{Micro-support}\n\begin{df}\label{dfms}\nLet $X$ be a regular 
noetherian scheme \nover ${\mathbf Z}_{(p)}$\nsatisfying the condition {\rm (F)}\nin Definition {\rm \ref{dfFTX}} and\nlet $C$ be a closed conical subset\nof the FW-cotangent bundle \n$FT^*X|_{X_{{\mathbf F}_p}}$.\nLet ${\cal F}$\nbe a constructible complex\nof $\Lambda$-modules\non $X$.\nWe say that ${\cal F}$\nis micro-supported on $C$\nif the following conditions\n{\rm (1)} and {\rm (2)}\nare satisfied:\n\n{\rm (1)}\nThe intersection of\nthe support ${\rm supp}\, {\cal F}$\nwith the closed fiber $X_{{\mathbf F}_p}$ is \na subset of the base $B(C)$.\n\n{\rm (2)}\nEvery $C$-transversal separated morphism\n$h\colon W\to X$ of finite type of\nregular schemes\nis ${\cal F}$-transversal\non a neighborhood of the closed\nfiber $W_{{\mathbf F}_p}$.\n\end{df}\n\nThis definition of micro-support\nis related to\n\cite[Proposition 8.13]{CC}\nbut is different from\n\cite[1.3]{Be}.\nWe discuss this point in\nthe Remark after Proposition \ref{prtrla}.\nIt is a property on a neighborhood\nof $X_{{\mathbf F}_p}$.\nIf $X_{\mathbf Q}\n=X\times_{{\rm Spec}\, {\mathbf Z}_{(p)}}\n{\rm Spec}\, {\mathbf Q}$\nis smooth over a field $K$\nof characteristic $0$,\nto cover $X_{\mathbf Q}$,\none can use the micro-support\nof the restriction of ${\cal F}$ to $X_{\mathbf Q}$\ndefined as a closed conical subset\nof the cotangent bundle\n$T^*X_{\mathbf Q}\/K$.\n\n\n\begin{lm}\label{lmTX}\nLet $X$ be a regular noetherian scheme \nover ${\mathbf Z}_{(p)}$\nsatisfying the condition {\rm (F)}\nand let ${\cal F}$\nbe a constructible complex\nof $\Lambda$-modules.\n\n{\rm 1.}\n${\cal F}$ is micro-supported\non $FT^*X|_{X_{{\mathbf F}_p}}$.\n\n{\rm 2.}\nIf ${\cal F}$ is locally constant\non a neighborhood of\nthe closed fiber $X_{{\mathbf F}_p}$,\nthen \n${\cal F}$ is micro-supported\non the $0$-section $F^*T^*_XX|_{X_{{\mathbf F}_p}}$.\n\n{\rm 3.}\nAssume that $X$ is a\nsmooth scheme over $k$.\nLet $C\subset T^*X$ be\na closed conical subset\nand let 
$F^*C\\subset F^*T^*X\n=FT^*X$ \nbe the pull-back of $C$\nThen, ${\\cal F}$ is micro-supported\non $C$ in the sense of\n{\\rm (\\cite[1.3]{Be}, \\cite[Definition 4.1]{CC})}\nif and only if \n${\\cal F}$ is micro-supported\non $F^*C$.\n\\end{lm}\n\nWe show the converse of\n2 in Corollary \\ref{cortrla}.\n\n\n\\proof{\n1.\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of finite type of regular schemes.\nIf $h$ is $FT^*X|_{X_{{\\mathbf F}_p}}$-transversal,\nthen $h$ is smooth\non a neighborhood\nof $W_{{\\mathbf F}_p}$ by Lemma \\ref{lmTXC}.1.\nHence $h$ is ${\\cal F}$-transversal\non a neighborhood\nof $W_{{\\mathbf F}_p}$ by Lemma \\ref{lmPoi}.1.\n\n2.\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of finite type \nof regular schemes.\nThen, since \n${\\cal F}$ is locally constant\non a neighborhood of\nthe closed fiber $X_{{\\mathbf F}_p}$,\n$h$ is ${\\cal F}$-transversal\non a neighborhood of $W_{{\\mathbf F}_p}$\nby Lemma \\ref{lmPoi}.2.\n\n3.\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of finite type of regular schemes.\nThen, $h\\colon W\\to X$\nis a separated morphism \nof smooth schemes\nof finite type over $k$.\nThe morphism\n$h\\colon W\\to X$ is $F^*C$-transversal\nif and only if\n$h\\colon W\\to X$ is $C$-transversal.\nHence\nthe equivalence follows\nfrom \\cite[Proposition 8.13]{CC}.\n\\qed\n\n}\n\n\n\\begin{pr}\\label{prmcf}\nLet $X$ be a regular scheme \nover ${\\mathbf Z}_{(p)}$\nsatisfying the condition {\\rm (F)}\nand let\n${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules.\nLet $C$ be a closed conical\nsubset of $FT^*X|_{X_{{\\mathbf F}_p}}$\nsuch that ${\\cal F}$\nis micro-supported on $C$.\n\n{\\rm 1.}\nLet $h\\colon W\\to X$\nbe a separated morphism\nof finite type of regular schemes.\nIf $h$ is $C$-transversal,\nthen $h$ is ${\\cal F}$-transversal\non a neighborhood of $W_{{\\mathbf F}_p}$\nand $h^*{\\cal F}$\nis micro-supported on $h^\\circ C$.\n\n{\\rm 2.}\nLet $f\\colon X\\to Y$\nbe a separated\nmorphism of finite type 
\nproper on the base $B(C)$\nof regular quasi-excellent\nnoetherian schemes\nsatisfying the condition {\\rm (F)}.\nThen $Rf_*{\\cal F}$\nis micro-supported on $f_\\circ C$.\n\\end{pr}\n\n\\proof{\n1.\nLet $g\\colon V\\to W$\nbe an $h^\\circ C$-transversal\nseparated morphism of finite type of\nregular noetherian schemes.\nThen, by Lemma \\ref{lmhC},\n$hg$ and $h$ are $C$-transversal.\nSince ${\\cal F}$ is\nmicro-supported on $C$,\n$hg$ and $h$ are \n${\\cal F}$-transversal\non neighborhoods of\n$V_{{\\mathbf F}_p}$ and of $W_{{\\mathbf F}_p}$ respectively.\nHence by Proposition \\ref{prhF}.1,\n$g$ is $h^*{\\cal F}$-transversal\non a neighborhood of\n$V_{{\\mathbf F}_p}$.\n\n2.\nLet $g\\colon V\\to Y$\nbe an $f_\\circ C$-transversal \nseparated morphism of finite type \nof regular noetherian schemes\nand let\n$$\\begin{CD}\nX@0$.\nLet $h\\colon W\\to X$ be a \nfinite surjective morphism of\nregular flat schemes\nof finite type over\n${\\cal O}_K$\nsuch that the morphism\n$W_K\\to X_K$\non the generic fiber is \\'etale.\nAssume that the reduced parts\n$D=X_{k,{\\rm red}}$\nand\n$E=W_{k,{\\rm red}}$\nof the closed fibers\nare irreducible and are smooth \nof dimension $\\geqq 1$\nover the residue field $k$.\n\nAssume that the following condition\nis satisfied:\n\n{\\rm (1)}\nThe cokernel of the canonical morphism\n$F\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n\\to\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E$\nof locally free \n${\\cal O}_E$-modules\nis locally free of rank $1$.\n\n{\\rm 1.}\nThe direct image\n$C= \\pi_\\circ FT^*_WW|_E\n\\subset FT^*X|_D$\nof the $0$-section\nis the image of the sub line bundle\n${\\rm Ker}(FT^*X|_D\\times_DE\n\\to FT^*W|_E)$\nof\n$FT^*X|_D\\times_DE$.\n\n{\\rm 2.}\nFurther assume \nthat the following condition is satisfied:\n\n{\\rm (2)}\nThe finite morphism\n$E\\to D$ is purely inseparable\nof degree $\\geqq 1$.\n\n\\noindent\nThen, for each closed point $x\\in D$\nand for the point $w\\in E$\nabove $x$,\nthere exists\na 
regular subscheme\n$Z\\subset W$ \nof codimension $1$\ncontaining $w$ and\nflat over ${\\cal O}_K$\nsatisfying the following conditions:\n\nThe composition\n$Z\\to W\\to X$ is unramified.\nThe pull-back $C\\times_{X_{{\\mathbf F}_p}}w\n\\subset FT^*X\\times_{X_{{\\mathbf F}_p}}\nw$ \nof the fiber at $x$\nequals the fiber of the\nkernel of the surjection\n$FT^*X\\times_{X_{{\\mathbf F}_p}}Z\n\\to FT^*Z$.\n\\end{lm}\n\n\\proof{\n1. \nSince the ${\\cal O}_E$-linear morphism\n$F\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n\\to\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E$\nof locally free \n${\\cal O}_E$-modules\nof the same rank has\nthe cokernel of rank 1,\nthe kernel is also locally free of\nrank 1.\nHence the assertion follows.\n\n\n2.\nLet $n=\\dim {\\cal O}_{X,x}$.\nSince $E\\to D$ is assumed\nto be purely inseparable,\nthe residue field\n$k(w)$ is a purely inseparable\nextension of a perfect field\n$k(x)$ and hence \nthe morphism $k(x)\\to k(w)$ is an isomorphism.\nBy the assumption on the\nrank of the cokernel\nand by Proposition \\ref{prdx},\nthe $k(x)$-linear mapping\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$\ninduced by\n${\\cal O}_{X,x}\\to\n{\\cal O}_{W,w}$ is of rank $n-1$.\n\nTake an element of\n${\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$\nnot contained in the image\nof ${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$\nand take its lifting\n$f\\in {\\mathfrak m}_w$\nnot divisible by\na prime element $t$ defining \nthe divisor $E\\subset W$.\nThen,\na regular closed subscheme $Z$\nof codimension $1$ \nof a neighborhood \nof $w$ is defined \nby $f$.\nLet $z$ denote $w\\in W$\nregarded as a point of $Z$.\nSince $f$ is not divisible by $t$,\nwe may assume that $Z$ is flat\nover ${\\cal O}_K$.\n\nSince $\\bar f\\in\n{\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$ is \nnot contained in the image\nof ${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$,\nthe induced morphism\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak 
m}_w\/((f)+\n{\\mathfrak m}_w^2)\n=\n{\\mathfrak m}_z\/\n{\\mathfrak m}_z^2$\nis a surjection.\nHence further shrinking \n$Z$ if necessary,\nwe may assume that\n$Z\\to X$ is unramified.\nSince the kernel of the surjection\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_z\/\n{\\mathfrak m}_z^2$\nequals the kernel of\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$,\nthe last condition \non the fibers is satisfied.\n\\qed\n\n}\n\n\\medskip\nWe show that some concrete\nexamples of Kummer coverings\nsatisfy the assumptions\nin Lemma \\ref{lmXW}.\nLet $K$ be a discrete valuation\nfield as in Lemma \\ref{lmXW}\ncontaining a primitive\n$p$-th root of 1.\nLet $X$ be a regular flat\nscheme of finite type\nover ${\\cal O}_K$\nand assume that the reduced part\n$D=X_{k,{\\rm red}}$\nis smooth over the residue field $k$.\nLet $L$ be the local field \nat the generic point of $D$\nand let $e={\\rm ord}_Lp\\geqq p-1$\nbe the absolute ramification index.\n\n\\begin{lm}\\label{lmKum}\nLet $\\pi \\in \\Gamma(X,{\\cal O}_X)$\nbe a uniformizer of the divisor $D\n=X_{k,{\\rm red}}\\subset X$\nand let $u \\in \\Gamma(X,{\\cal O}_X^\\times)$\nbe a unit.\nLet $1\\leqq n< \\dfrac{pe}{p-1}$\nbe an integer\ncongruent to $0$ or $1$\nmodulo $p$\nand set $n=pm$ or $n=pm+1$\nrespectively.\nIn the case $n=pm$,\nassume that $du$ defines locally\na part of a basis of $\\Omega^1_D$.\nDefine a Kummer covering\n$V\\to U=X_K$\nby $v^p=1+u\\pi^n$.\n\n\n{\\rm 1.}\nThe normalization $\\pi\\colon\nW\\to X$\nin $V$ is regular.\nThe reduced closed fiber\n$E=W_{k,{\\rm red}}$\nis smooth over $k$\nand the finite morphism\n$E\\to D$ is purely inseparable.\n\n\n{\\rm 2.}\nThe cokernel\n${\\rm Coker}(F\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n\\to\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E)$\nis an invertible ${\\cal O}_E$-module.\n\n{\\rm 3.}\nAssume $n=pm$.\nIf $e=m+1$,\nlet $\\pi'$ denote the uniformizer\n$p\/\\pi^m$.\nThen, the kernel of the 
canonical morphism\n$FT^*X|_D\\times_DE\\to\nFT^*W|_E$\nis a line bundle spanned\nby \n$$\\begin{cases}\nw(u)-u\\cdot w(\\pi')\n&\n\\text{ if $p=2$ and $e=m+1$},\n\\\\\nw(u)&\n\\text{ otherwise}.\n\\end{cases}$$\n\\end{lm}\n\n\\proof{\n{\\rm 1.}\nSince the assertion is local,\nwe may assume that\n$X={\\rm Spec}\\, A$ is affine.\nWe show that the normalization\n$B$ of $A$ is generated by \n$t=(v-1)\/\\pi^m$.\nBy the assumption\n$n<\\dfrac{ep}{p-1}$,\nwe have\n$e+m>pm$ and\nthe polynomial\n$(1+\\pi^mT)^p-1\n\\in A[T]$\nis divisible by $\\pi^{pm}$.\nDefine a monic polynomial\n$F\\in A[T]$\nby $1+\\pi^{pm}F=(1+\\pi^mT)^p$.\nSince\n$F\\equiv T^p\\bmod \n\\pi A$ and since $u$ is a unit,\nin the case $n=pm+1$,\nthe equation\n$F=\\pi u$ is an Eisenstein equation.\nIn the case $n=pm$,\nthe reduction of the equation\n$F=u$\nmodulo $\\pi A$\ngives $T^p=u$.\nIn this case $du$ is a part of\na basis of $\\Omega^1_D$\nby the assumption.\nHence \nby setting $v=1+\\pi^mt$\nwhere $t\\in B$ denotes the class of\n$T$,\nwe obtain\n$B=A[T]\/(F-u\\pi)$ \nor $B=A[T]\/(F-u)$ \nrespectively.\n\nThe reduced part $E$\nis defined by $t$ or $\\pi$\naccording to $n=pm+1$ or $n=pm$\nrespectively.\nHence $E$ is smooth over $k$\nand the finite morphism\n$E\\to D$ is purely inseparable\nof degree 1 or $p$\nrespectively.\n\n2.\nBy Corollary \\ref{corXZ},\nwe have a commutative diagram\n$$\\begin{CD}\n0@>>>\nF^*N_{D\/X}\n\\otimes_{{\\cal O}_D}\n{\\cal O}_E\n@>>>\nF\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n@>>>\nF^*\\Omega^1_D\n\\otimes_{{\\cal O}_D}\n{\\cal O}_E\n@>>>\n0\\\\\n@.@VVV@VVV@VVV@.\\\\\n0@>>>\nF^*N_{E\/W}\n@>>>\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E\n@>>>\nF^*\\Omega^1_E\n@>>>\n0\n\\end{CD}$$\nof exact sequences of\nlocally free ${\\cal O}_E$-modules.\nIn the case $n=pm+1$,\nthe right vertical arrow\nis an isomorphism\nsince $E\\to D$ is an isomorphism.\nFurther\nthe left vertical arrow is $0$\nsince the ramification index is $p$.\nIn the case $n=pm$,\nthe left vertical 
arrow\nis an isomorphism\nsince the ramification index is $1$.\nFurther\nthe cokernel of the right vertical arrow is \nlocally free of rank 1\nsince $E\\to D$ is a purely inseparable\ncovering defined by $T^p=u$\nand $du$ is a part of a basis\nof $\\Omega^1_D$.\nHence the assertion follows.\n\n\n\n3.\nWe compute\nthe polynomial $F\n\\bmod \\pi^2$.\nRecall that we have\n$e+m>pm$.\nSince $e$ is divisible by\n$p-1$, the equality\n$e+m=pm+1$ holds\nif and only if\n$p=2$ and $e=m+1$.\nHence \nthe coefficients of $T^i$ \nfor $i=1,\\ldots, p-1$ in \nthe polynomial $F$\nare divisible by $\\pi^2$\nexcept $F=T^2+\n2\/\\pi^m\\cdot T$\nin the exceptional case.\n\nThus, except the exceptional case,\nwe have a congruence\n$F\\equiv T^p\n\\bmod \\pi^2$\nand hence the kernel is\nspanned by \n$w(u)$.\nIn the exceptional case,\nwe have\n$t^2+\\pi't=u$ for\n$\\pi'=2\/\\pi^m$.\nHence $w(u)$\nis sent to $t^2\\cdot w(\\pi')\n=u\\cdot w(\\pi')$.\n\\qed\n\n}\n\n\n\\begin{pr}\\label{prKum}\nLet $K$ be a discrete\nvaluation field of characteristic $0$\nsuch that\nthe residue field $k$\nis a perfect field of characteristic $p>0$.\nLet $X$ be a regular flat scheme\nof finite type over\n${\\cal O}_K$\nsuch that the reduced part\n$D=X_{k,{\\rm red}}$\nis irreducible and is smooth\nover the residue field $k$.\n\nLet ${\\cal F}_U$ be a locally constant\nconstructible sheaf of\n$\\Lambda$-modules on\nthe generic fiber $U=X_K$\nand let ${\\cal F}=j_!{\\cal F}_U$\nbe the $0$-extension\nfor the open immersion\n$j\\colon U\\to X$.\nLet $V\\to U$ be a finite\n\\'etale Galois covering \nof Galois group $G$ such that\nthe pull-back ${\\cal F}_V$\nis constant \nand let $\\pi\\colon W\\to X$ be\nthe normalization \nof $X$ in $V$.\n\nAssume that $W$ is regular\nand that\nthe reduced part\n$E=W_{k,{\\rm red}}$\nis also irreducible and smooth\nover the residue field $k$.\nAssume that the order of $G$\nis invertible in $\\Lambda$\nand that ${\\cal F}_U$ corresponds\nto a non-trivial\nirreducible representation 
$M$ of $G$.\n\n{\\rm 1.}\nThe canonical morphism\n${\\cal F}=j_!{\\cal F}_U\n\\to Rj_*{\\cal F}_U$\nis an isomorphism.\n\n{\\rm 2.}\nAssume that conditions\n{\\rm (1)} and {\\rm (2)} \nin Lemma {\\rm \\ref{lmXW}}\nare satisfied.\nThen, the singular support $SS{\\cal F}$\nequals the direct image\n$C=\\pi_\\circ FT^*_WW|_{W_k}$\nof the $0$-section.\n\\end{pr}\n\n\n\\proof{\n1.\nBy the assumption \nthat the order of $G$\nis invertible in $\\Lambda$\nand that $M$ is an irreducible\nrepresentation,\nthe locally constant sheaf\n${\\cal F}_U$\nis isomorphic to a direct summand\nof $\\pi_{K*}\\Lambda$\nwhere $\\pi_K\\colon V=W_K\\to U=X_K$\nis the restriction of $\\pi$.\n\nLet $j_W\\colon W_K\\to W$\nbe the open immersion\nof the generic fiber.\nSince $W$ is regular\nand the reduced part\nof the closed fiber\n$W_k$ is a regular divisor,\nwe have isomorphisms\n$\\Lambda\\to j_{W*}\\Lambda$,\n$\\Lambda_E(-1)\\to R^1 j_{W*}\\Lambda$\nand $R^qj_{W*}\\Lambda=0$\nfor $q\\neq 0,1$\nby the absolute purity \n\\cite[{\\sc Th\\'eor\\`eme 3.1.1}]{purete}.\nSimilarly,\nwe have isomorphisms\n$\\Lambda\\to j_*\\Lambda$,\n$\\Lambda_D(-1)\\to R^1 j_*\\Lambda$\nand $R^qj_*\\Lambda=0$\nfor $q\\neq 0,1$.\nSince $E\\to D$ induces a homeomorphism\non the \\'etale site by the assumption,\nthe canonical morphism\n$\\Lambda_D\\to \\pi_*\\Lambda_E$\nis an isomorphism.\nHence, for the cokernel\n${\\cal G}={\\rm Coker}\n(\\Lambda_X\\to \\pi_*\\Lambda_W)$,\nthe canonical morphisms\n$j_!j^*{\\cal G}\\to\n{\\cal G}\\to Rj_*j^*{\\cal G}$\nare isomorphisms.\n\nSince\n$M$ is a non-trivial irreducible\nrepresentation\nof a semi-simple algebra\n$\\Lambda[G]$,\nthe corresponding sheaf\n${\\cal F}$ is a direct summand\nof $j^*{\\cal G}$.\nHence the canonical morphism\n${\\cal F}=j_!{\\cal F}_U\n\\to Rj_*{\\cal F}_U$\nis an isomorphism.\n\n2.\nSince ${\\cal F}$ is a direct summand\nof $\\pi_*\\Lambda_W=\\Lambda_X\n\\oplus {\\cal G}$,\nby Proposition \\ref{prmcf}.2,\nthe constructible sheaf\n${\\cal F}$ is 
micro-supported on\n$C=\\pi_\\circ FT^*_WW|_{W_k}$.\n\nSuppose ${\\cal F}$\nis micro-supported on \na closed conical subset $C'$.\nIt suffices to prove $C\\subset C'$.\nLet $x\\in X_{{\\mathbf F}_p}$\nbe a closed point,\nlet $h\\colon Z\\to X$\nbe an unramified morphism\nas in Lemma \\ref{lmXW}\nand let $z\\in Z$ be\nthe unique point above $x$.\nSince $Z\\to X$ factors through\n$Z\\to W$,\nthe restriction \n${\\cal F}_{Z\\cap U}$\nis constant.\nHence the morphism\n$h$ is not\n${\\cal F}$-transversal\nby the contraposition\nof Proposition \\ref{prhF}.2\n(1)$\\Rightarrow$(2)\nand 1.\nSince ${\\cal F}$\nis micro-supported on $C'$,\nthe morphism\n$h$ is not $C'$-transversal,\non any open neighborhood of $z\\in Z$.\n\nThe kernel $L={\\rm Ker}\n(FT^*X\\times_{X_{{\\mathbf F}_p}}\nZ_{{\\mathbf F}_p}\\to FT^*Z)$\nis a line bundle\non $Z_{{\\mathbf F}_p}$.\nThe intersection $C'_1=h^*C'\n\\cap L\n\\subset FT^*X\\times_{X_{{\\mathbf F}_p}}\nZ_{{\\mathbf F}_p}$\nis a closed conical subset\nof $L$.\nLet $Z_1\n=\\{y\\in Z_{{\\mathbf F}_p}\\mid\nC'_{1,y}=L_y\\}$\nbe the image by the projection \nof the complement\n$C'_1\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (C'_1\\cap Z_{{\\mathbf F}_p})$\nof the $0$-section.\nSince $C'_1\\subset L$ is a closed\nconical subset,\nthe image\n$Z_1\\subset\nZ_{{\\mathbf F}_p}$ is a closed subset.\nSince the restriction \n$Z\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z_1\\to X$ of $h$\nis $C'$-transversal,\nthe complement\n$Z\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z_1$ is not\nan open neighborhood of $z$.\nNamely,\nwe have $z\\in Z_1$\nand hence \n$C'_{1,z}=L_z$ \nis a subset of $C'_z$.\n\nSince \n$L_z=C_z=C_x\\times_xz$\nby the condition on $Z$, we get\n$C_x\\subset C'_x$\nfor each closed point $x\\in X_k$.\nThus we have $C\\subset C'$\nas required.\n\\qed\n\n}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Preliminaries}\n\n\\subsection{Cayley algebra and compact Lie group of type ${\\rm G}_2$} \n\nLet 
$\\mathfrak{C}=\\{e_0 =1, e_1, e_2, e_3, e_4, e_5, e_6, e_7 \\}_{\\sR}$ be the division Cayley algebra. In $\\mathfrak{C}$, since the multiplication and the inner product are well known, these are omitted.\n\\vspace{1mm}\n\nThe connected compact Lie group of type ${\\rm G_2}$ is given by\n$$\nG_2 =\\{\\alpha \\in \\Iso_{\\sR}(\\mathfrak{C})\\,|\\, \\alpha(xy)=(\\alpha x) (\\alpha y) \\}\\vspace{1mm}.\n$$ \n\\subsection{Exceptional Jordan algebra and compact Lie group of type ${\\rm F}_4$} \n\nLet \n$\\mathfrak{J}(3,\\mathfrak{C} ) = \\{ X \\in M(3, \\mathfrak{C}) \\, | \\, X^* = X \\}$ be the \nexceptional Jordan algebra. \nIn $\\mathfrak{J}(3,\\mathfrak{C} )$, the Jordan multiplication $X \\circ Y$, the \ninner product $(X,Y)$ and a cross multiplication $X \\times Y$, called the Freudenthal multiplication, are defined by\n$$\n\\begin{array}{c}\nX \\circ Y = \\dfrac{1}{2}(XY + YX), \\quad (X,Y) = \\tr(X \\circ Y),\n\\vspace{1mm}\\\\\nX \\times Y = \\dfrac{1}{2}(2X \\circ Y-\\tr(X)Y - \\tr(Y)X + (\\tr(X)\\tr(Y) \n- (X, Y))E), \n\\end{array}$$\nrespectively, where $E$ is the $3 \\times 3$ unit matrix. Moreover, we define the trilinear form $(X, Y, Z)$, the determinant $\\det \\,X$ by\n$$\n(X, Y, Z)=(X, Y \\times Z),\\quad \\det \\,X=\\dfrac{1}{3}(X, X, X),\n$$\nrespectively, and briefly denote $\\mathfrak{J}(3, \\mathfrak{C})$\nby $\\mathfrak{J}$.\n\\vspace{1mm}\n\nThe connected compact Lie group of type ${\\rm F_4}$ is given by\n\\begin{align*}\n\tF_4 &= \\{\\alpha \\in \\Iso_{\\sR}(\\mathfrak{J}) \\, | \\, \\alpha(X \\circ Y) = \\alpha X \\circ \\alpha Y \\}\n\t\\\\[1mm]\n\t&= \\{\\alpha \\in \\Iso_{\\sR}(\\mathfrak{J}) \\, | \\, \\alpha(X \\times Y) = \\alpha X \\times \\alpha Y \\}. 
\n\\end{align*}\nThen we have naturally the inclusion $G_2 \\subset F_4$ as follows:\n\\begin{align*}\n\\varphi:G_2 \\to F_4,\\,\\,\\varphi(\\alpha)X=\\begin{pmatrix}\n\\xi_1 & \\alpha x_3 & \\ov{\\alpha x_2} \\\\\n\\ov{\\alpha x_3} & \\xi_2 & \\alpha x_1 \\\\ \n\\alpha x_2 & \\ov{\\alpha x_1} & \\xi_3\n\\end{pmatrix},\\,\\, X \\in \\mathfrak{J}.\n\\end{align*} \n\\subsection{Complex exceptional Jordan algebra and compact Lie group of type ${\\rm E}_6$} \nLet $\\mathfrak{J}(3,\\mathfrak{C})^C = \\{ X \\in M(3, \\mathfrak{C})^C \\, | \\, X^* = X \\}$ be the complexification of the exceptional Jordan algebra $\\mathfrak{J}$. In $\\mathfrak{J}(3,\\mathfrak{C})^C$, as in $\\mathfrak{J}$, we can also define the multiplication $X \\circ Y, X \\times Y$, the inner product $(X, Y)$, the trilinear forms $(X, Y, Z)$ and the determinant $\\det \\, X$ in the same manner, and those have the same properties. The algebra $\\mathfrak{J}(3,\\mathfrak{C})^C$ is called the complex exceptional Jordan algebra, and we briefly denote it by $\\mathfrak{J}^C$. 
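\nAs a quick check of these definitions, for the $3 \\times 3$ unit matrix $E$ we have $E \\circ E=E$, $\\tr(E)=3$ and $(E, E)=3$, so that\n$$\nE \\times E = \\dfrac{1}{2}(2E \\circ E-\\tr(E)E - \\tr(E)E + (\\tr(E)\\tr(E) - (E, E))E)=\\dfrac{1}{2}(2E-6E+6E)=E,\n$$\nand hence $\\det \\,E=\\dfrac{1}{3}(E, E, E)=\\dfrac{1}{3}(E, E \\times E)=\\dfrac{1}{3}(E, E)=1$, as expected.\n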
\n\\vspace{1mm}\n\nThe connected compact Lie group of type ${\\rm E_6}$ is given by\n\\begin{align*}\n\t\tE_6 &= \\{\\alpha \\in \\Iso_C(\\mathfrak{J}^C) \\, | \\, \\det\\, \\alpha X = \\det\\, X, \\langle \\alpha X, \\alpha Y \\rangle = \\langle X, Y \\rangle \\}\n\t\t\\\\ \n\t\t &=\\{\\alpha \\in \\Iso_C(\\mathfrak{J}^C) \\, | \\,\\alpha X \\times \\alpha Y=\\tau\\alpha\\tau(X \\times Y) , \\langle \\alpha X, \\alpha Y \\rangle = \\langle X, Y \\rangle \\}\n\\end{align*}\nwhere $\\tau$ is a complex conjugation in $\\mathfrak{J}^C$: $\\tau(X+iY)=X-iY, \\,X, Y \\in \\mathfrak{J}$ and the Hermite inner product $\\langle X, Y \\rangle$ is defined by $(\\tau X, Y)$.\n\n\\noindent Then we have naturally the inclusion $F_4 \\subset E_6$ as follows:\n\\begin{align*}\n \\varphi:F_4 \\to E_6,\\,\\,\\varphi(\\alpha)(X_1+iX_2)=(\\alpha X_1)+i(\\alpha X_2),\\,\\,X_1+iX_2 \\in \\mathfrak{J}^C, X_i \\in \\mathfrak{J}.\n\\end{align*}\n\n\n\\if\nIn the last of this section, we state useful lemma. \n\n\\begin{lemma}\\label{lemma 2.3.}\n\tFor Lie groups $G, G' $, let a mapping $\\varphi : G \\to G'$ be a homomorphism of Lie groups. When $G'$ is connected, if $\\Ker\\,\\varphi$ is discrete and $\\dim(\\mathfrak{g})=\\dim(\\mathfrak{g}')$, $\\varphi$ is surjective.\n\\end{lemma}\n\\begin{proof}\n\tThe proof is omitted (see \\cite[Proposition 8.2 (1)]{iy4} in detail).\n\\end{proof}\n\n\\begin{lemma}[E. Cartan-Ra\\v{s}evskii]\\label{lemma 2.3.1}\n\tLet $G$ be a simply connected Lie group with a finite order automorphism $\\sigma$\n\tof $G$. Then $G^\\sigma$ is connected.\n\\end{lemma}\n\\begin{proof}\n\tThe proof is omitted (cf. 
\\cite[Lemma 0.7]{realization G_2}).\n\\end{proof}\n\\noindent Hereafter, we often use these lemmas without mentioning them each time when proving a lemma, proposition or theorem.\n\nWe use almost the same notation as \\cite{iy0}; in particular, the complex fields $\\C, C$ are as follows.\n\\begin{align*}\n \\C=\\{x+ye_1 \\,|\\, x,y \\in \\R \\},\\quad C=\\{x+yi \\,|\\, x,y \\in \\R \\}(=\\R^C).\n\\end{align*}\n\\f\n\n\\section{The inner automorphisms of order $3$ and the fixed-point subgroups by them}\\label{section 3}\n\nIn this section, we will rewrite the inner automorphisms of order $3$ on $G=G_2, F_4, E_6$ and the fixed-point subgroups of $G$ by them, which were realized and determined in \\cite{iy1}, in association with the involutive inner automorphisms. However, the detailed proofs are omitted.\n\n\\subsection{In $G_2$}\\label{subsection 3.1}\n\nLet $\\mathfrak{C}=\\H \\oplus \\H e_4$ be the Cayley division algebra, where $\\H$ is the field of quaternions. Since the multiplication, the conjugation and the inner product in $\\mathfrak{C}=\\H \\oplus \\H e_4$ are well known, these are omitted. If necessary, refer to \\cite{miya1},\\cite{realization G_2} and \\cite{iy0}.\n\nWe define an $\\R$-linear transformation $\\gamma$ of $\\mathfrak{C}$ by \n\\begin{align*}\n\t\t\\gamma(m+ne_4)=m-ne_4, \\,\\, m+ne_4 \\in \\H \\oplus \\H e_4 = \\mathfrak{C}.\n\\end{align*}\nThen we have that $\\gamma \\in G_2$ and $\\gamma^2 =1$. 
Hence $\\gamma$ induces the involutive inner automorphism $\\tilde{\\gamma}$ on $G_2: \\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\alpha \\in G_2$, so we have the following well-known result.\n\n\\begin{proposition}\\label{proposition 3.1.1}\n\tThe group $(G_2)^\\gamma$ is isomorphic to the group $(Sp(1) \\times Sp(1))\/\\Z_2${\\rm:} $(G_2)^\\gamma \\cong (Sp(1) \\times Sp(1))\/\\Z_2, $ $ \\Z_2=\\{ (1,1), (-1,-1) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{{}_{G_2,\\gamma}}: Sp(1) \\times Sp(1) \\to (G_2)^\\gamma$ by \n\t\\begin{align*}\n\t\\varphi_{{}_{G_2,\\gamma}}(p, q)(m+n e_4)=qm \\ov{q}+(pn \\ov{q}) e_4, \\,\\,\\,m+n e_4 \\in \\H \\oplus \\H e_4 =\\mathfrak{C}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite [Theorem 1.10.1]{iy0} in detail).\n\\end{proof}\n\nLet $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1) \\subset \\C \\subset \\H \\subset \\mathfrak{C}$. We define an $\\R$-linear transformation $\\gamma_3$ of $\\mathfrak{C}$ by\n\\begin{align*}\n\t\t\\gamma_3(m+ne_4)=m+(\\bm{\\omega} n)e_4, \\,\\,m+ne_4 \\in \\H \\oplus \\H e_4=\\mathfrak{C}.\n\\end{align*}\nThen, using the mapping $\\varphi_{{}_{G_2, \\gamma}}$ above, since $\\gamma_3$ is expressed by $\\varphi_{{}_{G_2,\\gamma}}(\\bm{\\omega},1)$: $\\gamma_3=\\varphi_{{}_{G_2,\\gamma}}(\\bm{\\omega},1)$, it is clear that $\\gamma_3 \\in G_2$ and $(\\gamma_3)^3=1$. Hence $\\gamma_3$ induces the inner automorphism $\\tilde{\\gamma}_3$ of order $3$ on $G_2: \\tilde{\\gamma}_3(\\alpha)={\\gamma_3}^{-1}\\alpha\\gamma_3, \\alpha \\in G_2$. 
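\nIndeed, one checks directly from its definition that the mapping $\\varphi_{{}_{G_2,\\gamma}}$ is multiplicative, i.e. $\\varphi_{{}_{G_2,\\gamma}}(p, q)\\varphi_{{}_{G_2,\\gamma}}(p', q')=\\varphi_{{}_{G_2,\\gamma}}(pp', qq')$, so the relation $(\\gamma_3)^3=1$ also follows from\n$$\n(\\gamma_3)^3=\\varphi_{{}_{G_2,\\gamma}}(\\bm{\\omega},1)^3=\\varphi_{{}_{G_2,\\gamma}}(\\bm{\\omega}^3,1)=\\varphi_{{}_{G_2,\\gamma}}(1,1)=1,\n$$\nsince $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1$ is a primitive cube root of $1$ in $\\C$.\n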
\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.1.2}\n\tThe group $(G_2)^{\\gamma_3}$ is isomorphic to the group $(U(1) \\times Sp(1))\/\\Z_2${\\rm:} $(G_2)^{\\gamma_3} \\cong (U(1) \\times Sp(1))\/\\Z_2, \\Z_2=\\{ (1,1), (-1,-1) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1)=\\{a \\in \\C \\,|\\,\\ov{a}a=1 \\} \\subset Sp(1)$, where $\\C=\\{x+ye_1\\,|\\, x,y \\in \\R \\}$. Then we define a mapping $\\varphi_{{}_{G_2,\\gamma_3}}:U(1) \\times Sp(1) \\to (G_2)^{\\gamma_3}$ by the restriction of the mapping $\\varphi_{{}_{G_2,\\gamma}}$ (Proposition \\ref{proposition 3.1.1}). This mapping induces the required isomorphism (see \\cite [Theorem 1.2]{iy1} in detail).\n\\end{proof}\n\nThus, since the group $(G_2)^{\\gamma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.1.2}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $G_2\/((U(1) \\times Sp(1))\/\\Z_2)$.\n\\vspace{2mm}\n\nLet $x = m_0 + m_1e_2 + m_2e_4 + m_3e_6 \\in \\mathfrak{C}, m_i \\in \\C$. Then we associate such elements $x$ of $\\mathfrak{C}$ with the elements \n\\begin{align*}\n\t\t\tm_0 + \\begin{pmatrix}\n\t\t\tm_1 \\\\\n\t\t\tm_2 \\\\\n\t\t\tm_3\n\t\t\t\\end{pmatrix}(=:m_0+\\m)\n\\end{align*}\nof $\\C \\oplus \\C^3$ and we can define a multiplication, a conjugation and an inner product in $\\C \\oplus \\C^3$ corresponding to the same ones in $\\mathfrak{C}$ (see \\cite[Subsection 1.5]{iy0} in detail). Hence we have that $\\C \\oplus \\C^3$ is isomorphic to $\\mathfrak{C}$ as algebra. Hereafter, if necessary, we identify $\\mathfrak{C}$ with $\\C \\oplus \\C^3$: $\\mathfrak{C}=\\C \\oplus \\C^3$. 
\n\n\n\\if\nWe will rewrite alternative definition of Cayley algebra $\\mathfrak{C}$ according to \\cite[Subsection 1.5]{iy0}.\n\nAny element $x \\in \\mathfrak{C}$ is expressed by\n\\begin{align*}\n\tx &= x_0 + x_1e_1 + x_2e_2 + x_3e_3 + x_4e_4 + x_5e_5 + \n\tx_6e_6 + x_7e_7 \\quad (x_i \\in \\R) \n\t\\\\\n\t&= (x_0 + x _1e_1) + (x_2 + x_3e_1)e_2 + (x_4 + x_5e_1)e_4\n\t+ (x_6 + x_7e_1)e_6,\n\\end{align*}\nthat is,\n$$\nx = m_0 + m_1e_2 + m_2e_4 + m_3e_6, \\quad m_i \\in \\C.\n$$\n\nWe associate such element $x$ of $\\mathfrak{C}$ with the element $m_0 + \\begin{pmatrix}\nm_1 \\\\\nm_2 \\\\\nm_3\n\\end{pmatrix}$ of $\\C \\oplus \\C^3$. \n\n\\noindent In $\\C \\oplus \\C^3$, we define a multiplication, an inner product $(\\;\\;,\\;\\,)$ and a conjugation $\\overline{{\\;}^{\\;}\\;\\;}$ respectively by\n\\begin{align*}\n\t(m_0 + \\m)(n_0 + \\n) &= (m_0 n_0 - \\langle \\m, \\n \\rangle ) + \n\t(m_0\\n + \\ov{n_0}\\m - \\ov{\\m \\times \\n}), \n\t\\\\\n\t(m_0 + \\m, n_0 + \\n) &= (m_0, n_0) + (\\m, \\n), \n\t\\\\\n\t\\ov{m_0 + \\m} &= \\ov{m_0} - \\m, \n\\end{align*}\nwhere the real valued symmetric inner product $(\\m, \\n)$, the Hermitian inner \nproduct $\\langle \\m, \\n \\rangle$ and the exterior product $\\m \\times \\n$ are \nusually defined respectively by\n\\begin{align*}\n(\\m, \\n) = \\frac{1}{2}(\\m^{*}\\n + \\n^{*}\\m) = \\sum_{i=1}^3(m_i,n_i), \\,\\, \\langle \\m, \\n \\rangle = \\sum_{i=1}^3m_i\\ov{n}_i, \\,\\,\\m \\times \\n = \n\\begin{pmatrix} \nm_2n_3 - n_2m_3 \\\\\nm_3n_1 - n_3m_1 \\\\ \nm_1n_2 - n_1m_2\n\\end{pmatrix}\n\\end{align*}\n\\noindent for $\\m = \\begin{pmatrix}m_1 \\\\ m_2 \\\\ m_3\\end{pmatrix}$, $\\n = \\begin{pmatrix}n_1 \\\\ n_2 \\\\ n_3\\end{pmatrix} \n\\in \\C^3$. Since these operations correspond to their respective operations in \n$\\mathfrak{C}$. 
From now on, we also identify $\\C \\oplus \\C^3$ with $\\mathfrak{C}$ \n: $\\mathfrak{C}=\\C \\oplus \\C^3$.\n\\vspace{1mm}\n\\f\n\nAgain let $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1) \\subset \\C \\subset \\H \\subset \\mathfrak{C}$. We define an $\\R$-linear transformation $w_3$ of $\\mathfrak{C}=\\C \\oplus \\C^3$ by\n\\begin{align*}\n\t\tw_3(m_0+\\m)=m_0+\\bm{\\omega} \\m, \\,\\,m_0+\\m \\in \\C \\oplus \\C^3=\\mathfrak{C}.\n\\end{align*}\nThen we have that $w_3 \\in G_2$ (\\cite[Proposition 1.4]{iy1}) and $(w_3)^3=1$. Hence $w_3$ induces the inner automorphism $\\tilde{w}_3$ of order $3$ on $G_2$: $\\tilde{w}_3(\\alpha)={w_3}^{-1}\\alpha w_3, \\alpha \\in G_2$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.1.3}\n\tThe group $(G_2)^{w_3}$ is isomorphic to the group $SU(3)${\\rm :} $(G_2)^{w_3} \\cong SU(3)$.\n\\end{theorem}\n\\begin{proof}\nWe define a mapping $\\varphi_{{}_{G_2,w_3}}: SU(3) \\to (G_2)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{G_2,w_3}}(A)(m_0+\\m)=m_0+A\\m, \\,\\,m_0+\\m \\in \\C \\oplus \\C^3=\\mathfrak{C}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 1.6]{iy1} in detail).\n\\end{proof}\n\nThus, since the group $(G_2)^{w_3}$ is connected, together with the result of Theorem \\ref{theorem 3.1.3}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $G_2\/SU(3)$. As is well known, this space is homeomorphic to a $6$-dimensional sphere $S^6$: $G_2\/SU(3) \\simeq S^6$. 
\n\\vspace{2mm}\n\nThe following lemma is useful for determining the structure of the groups $G^{\\sigma_3} \\cap G^{\\tau_3}$ in $G_2$.\n\n\\begin{lemma}\\label{lemma 3.1.4}\n\t{\\rm (1)} The mapping $\\varphi_{{}_{G_2,\\gamma_3}}:U(1) \\times Sp(1) \\to (G_2)^{\\gamma_3}$ of \\,Theorem {\\rm \\ref{theorem 3.1.2}} satisfies the relational formulas \n\t\\begin{align*}\n \t\\gamma_3&=\\varphi_{{}_{G_2,\\gamma_3}}(\\bm{\\omega},1),\n \t\\\\\n \tw_3&=\\varphi_{{}_{G_2,\\gamma_3}}(1, \\ov{\\bm{\\omega}}),\n\t\\end{align*}\n \twhere $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$.\n\\vspace{1mm}\n\n\t{\\rm (2)} The mapping $\\varphi_{{}_{G_2,w_3}}:SU(3) \\to (G_2)^{w_3}$ of \\,Theorem {\\rm \\ref{theorem 3.1.3}} satisfies the relational formulas\n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{G_2,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})),\n\t\\\\\n\tw_3&=\\varphi_{{}_{G_2,w_3}}(\\bm{\\omega}E),\n\t\\end{align*}\n\twhere $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$. \n\\end{lemma}\n\\begin{proof}\n\t(1), (2) By straightforward computation we obtain the results above. \n\\end{proof}\n\n\\subsection{In $F_4$}\\label{subsection 3.2}\n\nLet $\\mathfrak{J}$ be the exceptional Jordan algebra. 
As is well known, the elements $X$ of $\\mathfrak{J}$ take the form \n$$\nX = \\begin{pmatrix}\n\\xi_1 & x_3 & \\ov{x_2} \\\\\n\\ov{x_3} & \\xi_2 & x_1 \\\\ \nx_2 & \\ov{x_1} & \\xi_3\n\\end{pmatrix},\\,\\, \\xi_i \\in \\R,\\, x_i \\in \\mathfrak{C},\\, i=1, 2, 3.\n$$\nHereafter, in $\\mathfrak{J}$, we use the following notations:\n\\begin{align*}\nE_1 &= \\left(\\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{array}\n\\right), \\,\\,\\,\\,\\,\\,\\,\\,\nE_2 = \\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 0\n\\end{array}\n\\right), \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\nE_3 = \\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 1\n\\end{array}\n\\right), \n\\\\[2mm]\nF_1 (x) &= \\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 0 & x \\\\\n0 & \\ov{x} & 0\n\\end{array}\n\\right), \\,\\,\nF_2(x) = \\left(\\begin{array}{ccc}\n0 & 0 & \\ov{x} \\\\\n0 & 0 & 0 \\\\\nx & 0 & 0\n\\end{array}\n\\right), \\,\\,\nF_3 (x) = \\left(\\begin{array}{ccc}\n0 & x & 0 \\\\\n\\ov{x} & 0 & 0 \\\\\n0 & 0 & 0\n\\end{array}\n\\right).\n\\end{align*}\n\n\n\\if\nThen $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ has the Freudenthal multiplication and the inner product \n\\begin{align*}\n(M + \\a) \\times (N + \\b) &= \\Big(M \\times N - \\dfrac{1}{2} (\\a^*\\b + \\b^*\\a)\\Big) - \\dfrac{1}{2}(\\a N + \\b M), \n\\\\[1mm]\n(M + \\a, N + \\b) &= (M, N) + 2(\\a, \\b) \n\\end{align*}\ncorresponding to those of $\\mathfrak{J}$, where $(\\a, \\b) = (1\/2)(\\a\\b^* + \\b\\a^*)$. Hence $\\mathfrak{J}$ is isomorphic to $\\mathfrak{J}(3, \\H) $ $\\oplus \\,\\H^3$ as algebra. 
From now on, we identify $\\mathfrak{J}$ with $\\mathfrak{J}(3, \\H) \\oplus \\H^3$: $\\mathfrak{J}=\\mathfrak{J}(3, \\H) \\oplus \\H^3$.\n\\f\n\\vspace{1mm}\n\nWe define an $\\R$-linear transformation $\\gamma$ of $\\mathfrak{J}$ by\n$$\n\\gamma X= \\begin{pmatrix} \\xi_1 & \\gamma x_3 & \\ov{\\gamma x_2} \\\\\n\\ov{\\gamma x_3} & \\xi_2 & \\gamma x_1 \\\\\n\\gamma x_2 & \\ov{\\gamma x_1} & \\xi_3 \\end{pmatrix}\n,\\,\\,X \\in \\mathfrak{J},\n$$\nwhere $\\gamma$ on the right hand side is the same one as $\\gamma \\in G_2$. Then we have that $\\gamma \\in F_4$ and $\\gamma^2 =1$. Hence $\\gamma$ induces the involutive inner automorphism $\\tilde{\\gamma}$ of $F_4{\\rm :}\\,\\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\alpha \\in F_4$.\n\\vspace{1mm}\n\nHere, we associate the elements $X$ of $\\mathfrak{J}$ with the elements \n\\begin{align*}\n\\begin{pmatrix}\n\\xi_1 & m_3 & \\ov{m_2} \\\\\n\\ov{m_3} & \\xi_2 & m_1 \\\\ \nm_2 & \\ov{m_1} & \\xi_3\n\\end{pmatrix}\n+ (\\a_1, \\a_2, \\a_3)(=:M + \\a) \n\\end{align*}\nof $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ and we can define a multiplication, a conjugation and an inner product in $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ corresponding to the same ones in $\\mathfrak{J}$ (see \\cite[Subsection 2.11]{iy0} in detail). \nHence we have that $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ is isomorphic to the exceptional Jordan algebra $\\mathfrak{J}$ as algebra. 
From now on, if necessary, we identify $\\mathfrak{J}$ with $\\mathfrak{J}(3, \\H) \\oplus \\H^3$: $\\mathfrak{J}=\\mathfrak{J}(3, \\H) \\oplus \\H^3$.\nNote that the action of $\\gamma$ on $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ is as follows.\n\\begin{align*}\n\t\t\\gamma(M+\\a)=M-\\a,\\,\\,M+\\a \\in \\mathfrak{J}(3, \\H) \\oplus \\H^3=\\mathfrak{J}.\n\\end{align*}\n\nThen we have the following well-known result.\n\n\\begin{proposition}\\label{proposition 3.2.1}\n\tThe group $(F_4)^\\gamma$ is isomorphic to the group $(Sp(1) \\times Sp(3))\/\\Z_2${\\rm:} $(F_4)^\\gamma \\cong (Sp(1) \\times Sp(3))\/\\Z_2, \\,$ $\\Z_2 =\\{(1, E), (-1, -E) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{{}_{F_4,\\gamma}}: Sp(1) \\times Sp(3) \\to (F_4)^\\gamma$ by\n\t$$\n\t\\varphi_{{}_{F_4,\\gamma}}(p, A)(M+\\a)=AMA^* +p\\a A^*,\\,\\,\\, M+\\a \\in \\mathfrak{J}(3, \\H) \\oplus \\H^3=\\mathfrak{J}.\n\t$$\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 2.11.2]{iy0} in detail).\n\\end{proof}\n\\vspace{1mm}\n\nLet $\\gamma_3 \\in G_2$ be the $\\R$-linear transformation of $\\mathfrak{C}$ defined above. Using the inclusion $G_2 \\subset F_4$, $\\gamma_3$ is naturally extended to the $\\R$-linear transformation of $\\mathfrak{J}$. The explicit form of \nthe action of $\\gamma_3$ on $\\mathfrak{J}$ is as follows.\n\\begin{align*}\n\t\t\\gamma_3 X=\n\t\t\\begin{pmatrix} \\xi_1 & \\gamma_3 x_3 & \\ov{\\gamma_3 x_2} \\\\\n\t\t\\ov{\\gamma_3 x_3} & \\xi_2 & \\gamma_3 x_1 \\\\\n\t\t\\gamma_3 x_2 & \\ov{\\gamma_3 x_1} & \\xi_3 \n\t\t\\end{pmatrix},\\,\\,X \\in \\mathfrak{J},\n\\end{align*}\nwhere $\\gamma_3$ on the right hand side is the same one as $\\gamma_3 \\in G_2$. Needless to say, $\\gamma_3 \\in F_4$ and $(\\gamma_3)^3=1$. Hence $\\gamma_3$ induces the automorphism $\\tilde{\\gamma}_3$ of order $3$ on $F_4$: $\\tilde{\\gamma}_3(\\alpha)={\\gamma_3}^{-1}\\alpha\\gamma_3, \\alpha \\in F_4$. 
Note that the action of $\\gamma_3$ on $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ is as follows.\n\\begin{align*}\n\\gamma_3(M+\\a)=M+\\bm{\\omega}\\a,\\,\\,M+\\a \\in \\mathfrak{J}(3, \\H) \\oplus \\H^3=\\mathfrak{J}.\n\\end{align*}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.2.2}\n\tThe group $(F_4)^{\\gamma_3}$ is isomorphic to the group $(U(1) \\times Sp(3))\/\\Z_2$ {\\rm :} $(F_4)^{\\gamma_3} \\cong (U(1) \\times Sp(3))\/\\Z_2, \\Z_2=\\{(1,E), \n\t(-1,-E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tAs in the proof of Theorem \\ref{theorem 3.1.2}, let $U(1)=\\{a \\in \\C \\,|\\,\\ov{a}a=1 \\} \\subset Sp(1)$. We define a mapping $\\varphi_{{}_{F_4,\\gamma_3}}:U(1) \\times Sp(3) \\to (F_4)^{\\gamma_3}$ by the restriction of the mapping $\\varphi_{{}_{F_4,\\gamma}}$ (Proposition \\ref{proposition 3.2.1}). This mapping induces the required isomorphism (see \\cite[Theorem 2.2]{iy1} for details).\t\n\\end{proof}\n\nThus, since the group $(F_4)^{\\gamma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.2.2}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $F_4\/((U(1) \\times Sp(3))\/\\Z_2)$.\n\\vspace{1mm}\n\nWe define an $\\R$-linear transformation $\\sigma$ of $\\mathfrak{J}$ by\n\\begin{align*}\n\\sigma X= \\begin{pmatrix} \\xi_1 & -x_3 & -\\ov{x_2} \\\\\n-\\ov{x_3} & \\xi_2 & x_1 \\\\\n-x_2 & \\ov{x_1} & \\xi_3 \\end{pmatrix}\n,\\,\\,X \\in \\mathfrak{J}.\n\\end{align*}\nThen we have that $\\sigma \\in F_4$ and $\\sigma^2 =1$. 
Hence $\\sigma$ induces the involutive inner automorphism $\\tilde{\\sigma}$ on $F_4{\\rm :}\\,\\tilde{\\sigma}(\\alpha)=\\sigma\\alpha\\sigma, \\alpha \\in F_4$.\n\\vspace{1mm}\n\nThen we have the following well-known result.\n\n\\begin{proposition}\\label{proposition 3.2.3}\n\tThe group $(F_4)^\\sigma$ is isomorphic to the group $Spin(9)${\\rm:} $(F_4)^\\sigma \\!\\cong \\!Spin(9)$.\n\\end{proposition}\n\\begin{proof}\n\tFrom \\cite[Theorem 2.7.4]{iy0}, we have $(F_4)_{E_1} \\cong Spin(9)$, so by proving that $(F_4)^\\sigma \\cong (F_4)_{E_1}$ (\\cite[Theorem 2.9.1]{iy0}) we have the required isomorphism (see \\cite[Sections 2.7, 2.9]{iy0} for details).\n\\end{proof}\n\\vspace{1mm}\n\nLet $U(1)=\\{a \\in \\C \\,|\\,\\ov{a}a=1 \\}$. For $a \\in U(1)$, we define an $\\R$-linear transformation $D_a$ of $\\mathfrak{J}$ by\n\\begin{align*}\n\t\tD_a X= \n\t\t\\begin{pmatrix} \\xi_1 & x_3 a & \\ov{ax_2} \\\\\n\t\t\\ov{x_3 a} & \\xi_2 & \\ov{a}x_1\\ov{a} \\\\\n\t\ta x_2 & a\\ov{x_1}a & \\xi_3 \n\t\t\\end{pmatrix},\\,\\, X \\in \\mathfrak{J}.\n\\end{align*}\nThen, since $D_a=\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{a},a))$, we have that $D_a \\in F_4$. Hence, by the correspondence $a \\in U(1) \\mapsto D_a \\in F_4$, $U(1)$ is embedded into $F_4$.\nIn addition, the transformation $\\sigma$ defined above can be expressed as $D_{-1}$: $\\sigma=D_{-1}$.\n\nLet $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$. Then we define an $\\R$-linear transformation $\\sigma_3$ of $\\mathfrak{J}$ by\n\\begin{align*}\n\t\t\\sigma_3X= \n\t\t\\begin{pmatrix} \\xi_1 & x_3 \\bm{\\omega} & \\ov{\\bm{\\omega} x_2} \\\\\n\t\t\\ov{x_3 \\bm{\\omega}} & \\xi_2 & \\ov{\\bm{\\omega}}x_1\\ov{\\bm{\\omega}} \\\\\n\t\t\\bm{\\omega} x_2 & \\bm{\\omega}\\ov{x_1}\\bm{\\omega} & \\xi_3 \n\t\t\\end{pmatrix},\\,\\, X \\in \\mathfrak{J}.\n\\end{align*}\nNeedless to say, since $\\sigma_3=D_{\\bm{\\omega}}=\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$, we have that $\\sigma_3 \\in F_4$. 
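Since $\\varphi_{{}_{F_4,\\gamma}}$ is a homomorphism and the matrices $\\diag(1,\\ov{a},a)$, $a \\in U(1)$, commute with one another, the correspondence $a \\mapsto D_a$ is itself a homomorphism; as a sketch of the order-$3$ claim below, note that\n\\begin{align*}\nD_a D_b=\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{a},a))\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{b},b))=\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{ab},ab))=D_{ab},\n\\end{align*}\nso that $(\\sigma_3)^3=(D_{\\bm{\\omega}})^3=D_{{\\bm{\\omega}}^3}=D_1=1$.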
Hence $\\sigma_3$ induces the automorphism $\\tilde{\\sigma}_3$ of order $3$ on $F_4$: $\\tilde{\\sigma}_3(\\alpha)={\\sigma_3}^{-1}\\alpha\\sigma_3, \\alpha \\in F_4$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.2.4}\n\tThe group $(F_4)^{\\sigma_3}$ is isomorphic to the group $(Spin(2) \\times Spin(7))\/\\Z_2${\\rm:} $(F_4)^{\\sigma_3} \\cong (Spin(2) \\times Spin(7))\/\\Z_2, \\Z_2=\\{(1,1), (\\sigma,\\sigma)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $Spin(2)$ be the group $\\{D_a \\in F_4 \\,|\\,a \\in U(1) \\}$ defined above, which is isomorphic to the group $U(1)$, and let $Spin(7)$ be the subgroup $(F_4)_{E_1, F_1(1),F_1(e_1)}$ of $F_4$ (cf. \\cite[Proposition 2.9 (1)]{iy2}, \\cite[Subsection 2.2]{iy1}). We define a mapping $\\varphi_{{}_{F_4,\\sigma_3}}: Spin(2) \\times Spin(7) \\to (F_4)^{\\sigma_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{F_4,\\sigma_3}}(D_a, \\beta)=D_a \\beta.\n\t\\end{align*}\t\n\tThis mapping induces the required isomorphism (see \\cite[Lemmas 2.5, 2.6, Theorem 2.7]{iy1} for details).\n\\end{proof}\n\nThus, since the group $(F_4)^{\\sigma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.2.4}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $F_4\/((Spin(2) \\times Spin(7))\/\\Z_2)$.\n\\vspace{2mm}\t\n\nWe define an $\\R$-linear transformation $w_3$ of $\\mathfrak{J}$ by\n\\begin{align*}\nw_3X= \n\\begin{pmatrix} \\xi_1 & w_3 x_3 & \\ov{w_3 x_2} \\\\\n\\ov{w_3 x_3} & \\xi_2 & w_3 x_1 \\\\\nw_3 x_2 & \\ov{w_3 x_1} & \\xi_3 \n\\end{pmatrix},\\,\\, X \\in \\mathfrak{J},\n\\end{align*}\nwhere $w_3$ on the right hand side is the same one as $w_3 \\in G_2$. Needless to say, $w_3 \\in F_4$ and $(w_3)^3=1$. 
Hence $w_3$ induces the automorphism $\\tilde{w}_3$ of order $3$ on $F_4$: $\\tilde{w}_3(\\alpha)={w_3}^{-1}\\alpha w_3, \\alpha \\in F_4$.\n\nWe associate the elements $X$ of $\\mathfrak{J}$ with the elements \n\\begin{align*}\n\t\t\\begin{pmatrix}\n\t\t\\xi_1 & c_3 & \\ov{c_2} \\\\\n\t\t\\ov{c_3} & \\xi_2 & c_1 \\\\ \n\t\tc_2 & \\ov{c_1} & \\xi_3\n\t\t\\end{pmatrix} +\n\t\t\\begin{pmatrix}\n\t\t & & \\\\\n\t\t\\m_1 \\!\\!\\!& \\m_2 \\!\\!\\!& \\m_3 \\\\ \n\t\t & & \n\t\t\\end{pmatrix}(=:X_{\\bm{C}}+M)\n\\end{align*}\nof $\\mathfrak{J}(3,\\C) \\oplus M(3,\\C)$, where $\\m_i \\in \\C^3$, \nand we can define a multiplication, a conjugation and an inner product in \n$\\mathfrak{J}(3, \\C) \\oplus M(3,\\C)$ corresponding to the same ones in $\\mathfrak{J}$ (see \\cite[Subsection 2.12]{iy0} for details). Hence we have that $\\mathfrak{J}(3, \\C) \\oplus M(3,\\C)$ is isomorphic to $\\mathfrak{J}$ as an algebra. Hereafter, if necessary we identify $\\mathfrak{J}$ with $\\mathfrak{J}(3, \\C) \\oplus M(3,\\C)$: $\\mathfrak{J}=\\mathfrak{J}(3, \\C) \\oplus M(3,\\C)$. 
Note that using $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in \\C$, the action of $w_3$ on $\\mathfrak{J}=\\mathfrak{J}(3, \\C) \\oplus M(3,\\C)$ is as follows.\n\\begin{align*}\nw_3(X_{\\bm{C}}+M)=X_{\\bm{C}}+\\bm{\\omega} M,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3, \\C) \\oplus M(3,\\C)=\\mathfrak{J}.\n\\end{align*}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.2.5}\n\t\tThe group $(F_4)^{w_3}$ is isomorphic to the group $(SU(3) \\times SU(3))\/\\Z_3${\\rm :} $(F_4)^{w_3} \\cong (SU(3) \\times SU(3))\/\\Z_3, \\Z_3=\\{(E,E),(\\bm{\\omega} E,\\bm{\\omega} E),({\\bm{\\omega}}^{-1}E,{\\bm{\\omega}}^{-1}E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{{}_{F_4,w_3}}:SU(3) \\times SU(3) \\to (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\t\\varphi_{{}_{F_4,w_3}}(B, A)(X_{\\bm{C}}+M)=AX_{\\bm{C}}A^* + BMA^*,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3, \\C) \\oplus M(3,\\C)=\\mathfrak{J}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 2.9]{iy1} for details).\n\\end{proof}\n\nThus, since the group $(F_4)^{w_3}$ is connected, together with the result of Theorem \\ref{theorem 3.2.5}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $F_4\/((SU(3) \\times SU(3))\/\\Z_3)$.\n\\vspace{1mm}\n\nAs in Subsection 3.1, the following lemma is useful for determining the structure of the groups $G^{\\sigma_3} \\cap G^{\\tau_3}$ in $F_4$.\n\n\\begin{lemma}\\label{lemma 3.2.6}\n\t{\\rm (1)} The mapping $\\varphi_{{}_{F_4,\\gamma_3}}:U(1) \\times Sp(3) \\to (F_4)^{\\gamma_3}$ of \\,Theorem {\\rm \\ref{theorem 3.2.2}} satisfies the relational formulas \n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{F_4,\\gamma_3}}(\\bm{\\omega},E), \n\t\\\\\n\t\\sigma_3&=\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\n\t\\\\\n\tw_3&=\\varphi_{{}_{F_4,\\gamma_3}}(1, \\ov{\\bm{\\omega}}E), \n\t\\end{align*}\n where $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$.\n\t\\vspace{1mm}\n\t\n\t{\\rm (2)} The 
mapping $\\varphi_{{}_{F_4,w_3}}:SU(3)\\times SU(3) \\to (F_4)^{w_3}$ of \\,Theorem {\\rm \\ref{theorem 3.2.5}} satisfies the relational formulas\n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}),E), \n\t\\\\\n\t\\sigma_3&=\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\n\t\\\\\n\tw_3&=\\varphi_{{}_{F_4,w_3}}(\\bm{\\omega}E,E), \n\t\\end{align*}\n where $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$. \t\n\\end{lemma}\n\\begin{proof}\n\t(1), (2) By straightforward computation we obtain the results above. \n\\end{proof}\n\n\\subsection{In $E_6$}\\label{subsection 3.3}\n\nLet $\\gamma, \\gamma_3 \\in G_2 \\subset F_4$, and using the inclusion $F_4 \\subset E_6$, \n$\\gamma, \\gamma_3$ are naturally extended to $C$-linear \ntransformations of $\\mathfrak{J}^C$. Needless to say, $\\gamma, \\gamma_3 \\in E_6$ and $\\gamma^2=(\\gamma_3)^3=1$. Hence $\\gamma, \\gamma_3$ induce the involutive automorphism $\\tilde{\\gamma}$, the automorphism $\\tilde{\\gamma}_3$ of order $3$ on $E_6$, respectively: $\\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\tilde{\\gamma}_3(\\alpha)={\\gamma_3}^{-1}\\alpha\\gamma_3, \\alpha \\in E_6$. 
\n\\vspace{1mm}\n\nThen we have the following proposition and theorem.\n\n\\begin{proposition}\\label{proposition 3.3.1}\n\tThe group $(E_6)^\\gamma$ is isomorphic to the group $(Sp(1) \\times SU(6))\/\\Z_2${\\rm:}\n\t$(E_6)^\\gamma \\cong (Sp(1) \\times SU(6))\/\\Z_2,\\Z_2 =\\{(1, E), (-1, -E) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tLet $SU(6)=\\{A \\in M(6, C)\\,|\\,(\\tau\\,{}^t A) A=E, \\det\\, A=1 \\}$, where $\\tau$ is the complex conjugation of $C=\\{x+iy \\,|\\,x,y \\in \\R \\}$, that is, $\\tau(x+iy)=x-iy, x,y \\in \\R$.\n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma}}:Sp(1) \\times SU(6) \\to (E_6)^\\gamma $ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma}}(p, A)(M+\\a)={k_J}^{-1}(A(k_J M){}^t\\!A)+p\\a k^{-1}(\\tau \\,{}^t\\!A),\\,\\,M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C,\n\t\\end{align*}\n\twhere $k_J:\\mathfrak{J}(3, \\H)^C \\to \\mathfrak{S}(6, C)$ and $k:M(3, \\H)^C \\to M(6, C)$ are $C$-linear isomorphisms.\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 3.11.4]{iy0} for details).\n\\end{proof}\n\n\\begin{theorem}\\label{theorem 3.3.2}\n\tThe group $(E_6)^{\\gamma_3}$ is isomorphic to the group $(U(1) \\times SU(6))\/\\Z_2${\\rm :} $(E_6)^{\\gamma_3} \\cong (U(1) \\times SU(6))\/\\Z_2, \\Z_2=\\{(1,E),\n\t(-1,-E) \\}$.\t\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1)=\\{a \\in \\C\\,|\\, \\ov{a}a=1 \\} \\subset Sp(1)$. We define a mapping $\\varphi_{{}_{E_6,\\gamma_3}}: U(1) \\times SU(6) \\to (E_6)^{\\gamma_3}$ by the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma}}$ (Proposition \\ref{proposition 3.3.1}). This mapping induces the required isomorphism (see \\cite[Theorem 3.2]{iy1} for details). \n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.2}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((U(1) \\times SU(6))\/\\Z_2)$.\n\\vspace{2mm}\n\nLet $\\sigma, \\sigma_3 \\in F_4$. 
Then, as in the case above, using the inclusion $F_4 \\subset E_6$, $\\sigma, \\sigma_3$ are naturally extended to\ntransformations of $\\mathfrak{J}^C$. Needless to say, $\\sigma, \\sigma_3 \\in E_6$ and $\\sigma^2=(\\sigma_3)^3=1$. Hence $\\sigma$ and $\\sigma_3$ induce the involutive automorphism $\\tilde{\\sigma}$ and the automorphism $\\tilde{\\sigma}_3$ of order $3$ on $E_6$, respectively: $\\tilde{\\sigma}(\\alpha)=\\sigma\\alpha\\sigma, \\tilde{\\sigma}_3(\\alpha)={\\sigma_3}^{-1}\\alpha\\sigma_3, \\alpha \\in E_6$. \n\\vspace{1mm}\n\nThen we have the following proposition and theorem.\n\n\\begin{proposition}\\label{proposition 3.3.3}\n\tThe group $(E_6)^\\sigma$ is isomorphic to the group $(U(1) \\times Spin(10))\/\\Z_4${\\rm:}\\,\n\t$(E_6)^\\sigma \\!\\cong (U(1) \\times Spin(10))\/\\Z_4,\\Z_4=\\{ (1, \\phi_{{}_{6,\\sigma}}(1)), (-1, \\phi_{{}_{6,\\sigma}}(-1)), (i, \\phi_{{}_{6,\\sigma}}(-i)), (-i, \\phi_{{}_{6,\\sigma}}(i)) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tLet $Spin(10)$ be the group $(E_6)_{E_1}=\\{\\alpha \\in E_6\\,|\\,\\alpha E_1=E_1 \\}$ (\\cite[Theorem 3.10.4]{iy0}).\n\tWe define a mapping $\\varphi_{{}_{E_6,\\sigma}}:U(1) \\times Spin(10) \\to (E_6)^\\sigma $ by\n\t$$\n\t\\varphi_{{}_{E_6,\\sigma}}(\\theta, \\delta)=\\phi_{{}_{6,\\sigma}}(\\theta)\\delta,\n\t$$\n\twhere $\\phi_{{}_{6,\\sigma}}:U(1) \\to E_6$ is defined by\n\t\\begin{align*}\n\t\\phi_{{}_{6,\\sigma}}(\\theta)X=\\begin{pmatrix}\n\t\\theta^4 \\xi_1 & \\theta x_3 & \\theta \\ov{x_2} \\\\\n\t\\theta \\ov{x_3} & {\\theta}^{-2}\\xi_2 & {\\theta}^{-2}x_1 \\\\ \n\t\\theta x_2 & {\\theta}^{-2}\\ov{x_1} & {\\theta}^{-2}\\xi_3\n\t\\end{pmatrix}, \\,\\, X \\in \\mathfrak{J}^C.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 3.10.7]{iy0} for details).\n\\end{proof}\n\n\\begin{theorem}\\label{theorem 3.3.4}\n\tThe group $(E_6)^{\\sigma_3}$ is isomorphic to the group $(U(1) \\times Spin(2) \\times Spin(8))\/(\\Z_2 \\allowbreak \\times \\Z_4)${\\rm 
:} $(E_6)^{\\sigma_3} \\cong (U(1) \\times Spin(2) \\times Spin(8))\/(\\Z_2 \\times \\Z_4), \\Z_2=\\{(1,1,1),(1,\\sigma,\\sigma) \\}, \\Z_4=\\{(1,1,1),(i,D_{e_1},\\phi_{{}_{6,\\sigma}}(-i)D_{-e_1}),(-1,\\allowbreak\\sigma,1),(-i,D_{-e_1},\\phi_{{}_{6,\\sigma}}(i)D_{e_1}) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1)=\\{\\theta \\in C\\,|\\,(\\tau \\theta)\\theta=1 \\}$, let $Spin(2)$ be the group $\\{D_a \\in F_4 \\,|\\,a \\in U(1) \\}$ defined in $F_4$, which is isomorphic to the group $U(1)$, and let $Spin(8)$ be the group $(E_6)_{E_1, F_1(1),F_1(e_1)}=\\{ \\alpha \\in E_6 \\,|\\,\\alpha E_1=E_1, \\alpha F_1(1)=F_1(1), \\alpha F_1(e_1)=F_1(e_1)\\}$ (cf. \\cite[Proposition 3.22]{iy2}, \\cite[Subsection 3.2]{iy1}). We define a mapping $\\varphi_{{}_{E_6,\\sigma_3}}: U(1) \\times Spin(2) \\times Spin(8) \\to (E_6)^{\\sigma_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{E_6,\\sigma_3}}(\\theta, D_a, \\beta)=\\phi_{{}_{6,\\sigma}}(\\theta)D_a \\beta.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 3.9]{iy1} for details).\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.4}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((U(1) \\times Spin(2) \\times Spin(8))\/(\\Z_2 \\times \\Z_4))$.\n\\vspace{2mm}\n\nLet $\\nu=\\exp(2\\pi i\/9) \\in U(1)=\\{ \\theta \\in C \\,|\\, (\\tau \\theta)\\theta=1\\} \\subset C$. We consider the element $A_\\nu \\in SU(6) \\subset M(6, C)$ defined by\n\\begin{align*}\n\t\tA_\\nu=\\diag(\\nu^5, \\nu^{-1}, \\nu^{-1}, \\nu^{-1}, \\nu^{-1},\\nu^{-1}),\n\\end{align*} \nand using this $A_\\nu$, set $\\nu_3=\\varphi_{{}_{E_6,\\gamma}}(1,A_\\nu)$. Then we have that $\\nu_3 \\in (E_6)^\\gamma \\subset E_6$ and $(\\nu_3)^9=1$. 
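Indeed, the order of $\\nu_3$ can be read off directly from $A_\\nu$: since $\\nu^9=1$ and $\\varphi_{{}_{E_6,\\gamma}}$ is a homomorphism,\n\\begin{align*}\n{A_\\nu}^9=\\diag(\\nu^{45}, \\nu^{-9}, \\nu^{-9}, \\nu^{-9}, \\nu^{-9}, \\nu^{-9})=E, \\quad \\text{so} \\quad (\\nu_3)^9=\\varphi_{{}_{E_6,\\gamma}}(1,{A_\\nu}^9)=1.\n\\end{align*}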
Since ${A_\\nu}^3= \\nu^6 E \\in z(SU(6))$ (the center of $SU(6)$) and $(\\nu_3)^3=\\varphi_{{}_{E_6,\\gamma}}(1, {A_\\nu}^3)=\\omega 1$, where $\\omega= -(1\/2)+(\\sqrt{3}\/2)i \\in C$, $\\nu_3$ induces the automorphism $\\tilde{\\nu}_3$ of order $3$ on $E_6$: $\\tilde{\\nu}_3(\\alpha)={\\nu_3}^{-1}\\alpha\\nu_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.3.5}\n\tThe group $(E_6)^{\\nu_3}$ is isomorphic to the group $(Sp(1) \\times S(U(1) \\times U(5)))\/\\Z_2${\\rm :} $(E_6)^{\\nu_3} \\cong (Sp(1) \\times S(U(1) \\times U(5)))\/\\Z_2, \\Z_2=\\{(1,E), (-1,-E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1) \\times U(5)) \\subset SU(6)$. We define a mapping $\\varphi_{{}_{E_6, \\nu_3}}:Sp(1) \\times S(U(1) \\times U(5)) \\to (E_6)^{\\nu_3}$ by the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma}}$. This mapping induces the required isomorphism (see \\cite[Theorem 3.4]{iy1} for details).\n\\end{proof}\n\nThus, since the group $(E_6)^{\\nu_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.5}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((Sp(1) \\times S(U(1) \\times U(5)))\/\\Z_2)$.\n\\vspace{2mm}\n\nLet $\\phi_{{}_{6,\\sigma}}:U(1) \\to E_6$ be the embedding defined in the proof of Proposition \\ref{proposition 3.3.3}, and again let $\\nu=\\exp(2\\pi i\/9) \\in U(1) \\subset C$. Set $\\mu_3=\\phi_{{}_{6,\\sigma}}(\\nu)$. Then, needless to say, $\\mu_3 \\in E_6$ and $(\\mu_3)^9=1$. 
\nHence, since ${\\mu_3}^3=\\omega 1 \\in z(E_6)$ (the center of $E_6$), $\\mu_3$ induces the automorphism $\\tilde{\\mu}_3$ of order $3$ on $E_6$: $\\tilde{\\mu}_3(\\alpha)={\\mu_3}^{-1}\\alpha\\mu_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.3.6}\n\tThe group $(E_6)^{\\mu_3}$ coincides with the group $(E_6)^\\sigma$, that is, this group is isomorphic to the group $(U(1) \\times Spin(10))\/\\Z_4${\\rm :} $(E_6)^{\\mu_3} \\cong (U(1) \\times Spin(10))\/\\Z_4, \\Z_4=\\{ (1, 1), (-1, \\sigma), (i, \\phi_{{}_{6,\\sigma}}(-i)), (-i, \\phi_{{}_{6,\\sigma}}(i)) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tWe have to prove that $(E_6)^{\\mu_3}=(E_6)^\\sigma$.\n\n\tHowever, the details of the proof are omitted (see \\cite[Theorem 3.11]{iy1} for details).\n\\end{proof}\n\\vspace{2mm}\n\nLet $w_3 \\in G_2 \\subset F_4$. Then, as in the cases above, using the inclusion $F_4 \\subset E_6$, $w_3$ is naturally extended to a\ntransformation of $\\mathfrak{J}^C$.\nNeedless to say, $w_3 \\in E_6$ and $(w_3)^3=1$. 
Hence $w_3$ induces the automorphism $\\tilde{w}_3$ of order $3$ on $E_6$: $\\tilde{w}_3(\\alpha)={w_3}^{-1}\\alpha w_3, \\alpha \\in E_6$.\nNote that using $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in \\C$, the action of $w_3$ on $\\mathfrak{J}^C=\\mathfrak{J}(3, \\C)^C \\oplus M(3,\\C)^C$ is as follows.\n\\begin{align*}\n\t\tw_3(X_{\\bm{C}}+M)=X_{\\bm{C}}+\\bm{\\omega}M,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3,\\C)^C \\oplus M(3, \\C)^C=\\mathfrak{J}^C.\n\\end{align*}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.3.7}\n\tThe group $(E_6)^{w_3}$ is isomorphic to the group $(SU(3) \\times SU(3) \\times SU(3))\/\\Z_3${\\rm:} $(E_6)^{w_3} \\cong (SU(3) \\times SU(3) \\times SU(3))\/\\Z_3, \\Z_3=\\{(E,E,E),(\\bm{\\omega}E,\\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E,\\allowbreak \\bm{\\omega}^{-1}E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{{}_{E_6,w_3}}:SU(3) \\times SU(3) \\times SU(3) \\to (E_6)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{E_6,w_3}}(L,A,B)(X_{\\bm{C}}+M)&=h(A,B)X_{\\bm{C}}h(A,B)^*+LM\\tau h(A,B)^*, \n\t\t\t\\\\\n\t\t\t&\\hspace*{20mm} X_{\\bm{C}}+M \\in \\mathfrak{J}(3, \\C)^C \\oplus \n\t\t\tM(3,\\C)^C=\\mathfrak{J}^C,\n\t\\end{align*}\n\twhere $h:M(3,\\C) \\times M(3,\\C) \\to M(3,\\C)^C$ is defined by \n\t\\begin{align*}\n\t\t\th(A,B)=\\dfrac{A+B}{2}+i\\dfrac{(B-A)e_1}{2}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 13]{iy0} for details). 
Note that there is a mistake in the numbering of theorems in \\cite{iy0}; Theorem 13 above corresponds to the last theorem there.\n\\end{proof}\n\nThus, since the group $(E_6)^{w_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.7}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((SU(3) \\times SU(3) \\times SU(3))\/\\Z_3)$.\n\nAs in Subsections 3.1, 3.2, the following lemma is useful for determining the structure of the groups $G^{\\sigma_3} \\cap G^{\\tau_3}$ in $E_6$.\n\n\\begin{lemma}\\label{lemma 3.3.8}\n\t{\\rm (1)} The mapping $\\varphi_{{}_{E_6,\\gamma_3}}:U(1) \\times SU(6) \\to (E_6)^{\\gamma_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.2}} satisfies the relational formulas \n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E), \n\t\\\\\n\t\\sigma_3&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\tau\\omega,\\omega,\\omega,\\tau\\omega)), \n\t\\\\\n\t\\nu_3&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1})),\n\t\\\\\n\t\\mu_3&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^{-2},\\nu^2,\\nu^{-1},\\nu,\\nu^{-1},\\nu)),\n\t\\\\\n\tw_3&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\omega,\\tau\\omega,\\omega,\\tau\\omega,\\omega)),\n\t\\end{align*}\n where $\\omega=-(1\/2)+(\\sqrt{3}\/2)i \\in U(1), \\nu=\\exp(2\\pi i\/9)$.\n\t\\vspace{1mm}\n\t\n\t{\\rm (2)} The mapping $\\varphi_{{}_{E_6,w_3}}:SU(3)\\times SU(3) \\times SU(3) \\to (E_6)^{w_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.7}} satisfies the relational 
formulas\n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{E_6,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}),E,E),\n\t\\\\\n\t\\sigma_3&=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\n\t\\\\\n\t\\mu_3&=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}),\\diag({\\bm{\\varepsilon}}^2,{\\bm{\\varepsilon}}^{-1},{\\bm{\\varepsilon}}^{-1})),\n\t\\\\\n\tw_3&=\\varphi_{{}_{E_6,w_3}}(\\bm{\\omega}E,E,E),\n\t\\end{align*}\n\t where $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1), \\bm{\\varepsilon}=\\exp(2\\pi e_1\/9)$. \t\n\\end{lemma}\n\\begin{proof}\n\t(1), (2) By straightforward computation we obtain the results above. \n\\end{proof}\n\n\\section{Globally exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric spaces}\n\nIn this section, we construct a finite abelian group $\\varGamma=\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$ by using the inner automorphisms $\\tilde{\\sigma}_3, \\tilde{\\tau}_3$ of order $3$ on $G=G_2, F_4,E_6$ as in the cases below and determine the structure of the groups $G^{\\sigma_3} \\cap G^{\\tau_3}$.\n\n\\subsection{Case 1: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, w_3$ be the $\\R$-linear transformations of $\\mathfrak{C}$ defined in Subsection \\ref{subsection 3.1}. 
\n\n\\noindent From Lemma \\ref{lemma 3.1.4} (1), since we can easily confirm that $\\gamma_3$ and $w_3$ commute, $\\tilde{\\gamma}_3$ and $\\tilde{w}_3$ commute in $\\Aut(G_2)$: $\\tilde{\\gamma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nNow, we will determine the structure of the group $(G_2)^{\\gamma_3} \\cap (G_2)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.1.1}\n\tThe group $(G_2)^{\\gamma_3} \\cap (G_2)^{w_3}$ is isomorphic to the group $(U(1) \\times U(1))\/\\Z_2${\\rm :} $(G_2)^{\\gamma_3} \\cap (G_2)^{w_3} \\cong (U(1) \\times U(1))\/\\Z_2, \\Z_2=\\{(1,1), (-1,-1) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1) \\subset Sp(1)$. \n\tWe define a mapping $\\varphi_{{}_{G_2,\\gamma_3, w_3}}: U(1) \\times U(1) \\to (G_2)^{\\gamma_3} \\cap (G_2)^{w_3}$ by \n\t\\begin{align*}\n\t\t\t\t\\varphi_{{}_{G_2,\\gamma_3, w_3}}(s,t)(m+ne_4)=tm\\ov{t}+(sn\\ov{t})e_4,\\,\\,m+ne_4 \\in \\H \\oplus \\H e_4=\\mathfrak{C}.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{G_2,\\gamma_3}}$ (Theorem \\ref{theorem 3.1.2}).\n\t\n\tFirst, we will prove that $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is well-defined. Since this mapping is also the restriction of the mapping $\\varphi_{{}_{G_2,\\gamma_3}}$, it is trivial that $\\varphi_{{}_{G_2,\\gamma_3, w_3}}(s,t) \\in (G_2)^{\\gamma_3}$, and from $w_3=\\varphi_{{}_{G_2,\\gamma_3}}(1,\\ov{\\bm{\\omega}})$ (Lemma \\ref{lemma 3.1.4} (1)), it is almost clear that $\\varphi_{{}_{G_2,\\gamma_3, w_3}}(s,t) \\in (G_2)^{w_3}$. Hence $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is the restriction of $\\varphi_{{}_{G_2,\\gamma_3}}$, we easily see that $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is surjective. Let $\\alpha \\in (G_2)^{\\gamma_3} \\cap (G_2)^{w_3} \\subset (G_2)^{\\gamma_3}$. 
There exist $s \\in U(1)$ and $q \\in Sp(1)$ such that $\\alpha=\\varphi_{{}_{G_2,\\gamma_3}}(s,q)$ (Theorem \\ref{theorem 3.1.2}). Moreover, since $\\alpha=\\varphi_{{}_{G_2,\\gamma_3}}(s,q)$ commutes with $w_3$, again using $w_3=\\varphi_{{}_{G_2,\\gamma_3}}(1,\\ov{\\bm{\\omega}})$, we have that \n\t\\begin{align*}\n\t\t\t\\left\\{ \\begin{array}{l}\n\t\t\ts=s \\\\\n\t\t\t\\bm{\\omega}q\\ov{\\bm{\\omega}}=q \n\t\t\t\\end{array} \\right. \n\t\t\t\\quad \\text{or}\\quad\n\t\t\t\\left\\{ \\begin{array}{l}\n\t\t\ts=-s \\\\\n\t\t\t\\bm{\\omega}q\\ov{\\bm{\\omega}}=-q.\n\t\t\t\\end{array} \\right.\n\t\\end{align*}\n\tThe latter case is impossible because $s \\not=0$. As for the former case, from the relational formula $\\bm{\\omega}q\\ov{\\bm{\\omega}}=q$ we easily see that $q \\in U(1)$, and needless to say, $s \\in U(1)$. Hence there exist $s,t \\in U(1)$ such that $\\alpha=\\varphi_{{}_{G_2,\\gamma_3}}(s,t)$. Namely, there exist $s,t \\in U(1)$ such that $\\alpha=\\varphi_{{}_{G_2,\\gamma_3,w_3}}(s,t)$. The proof of surjectivity is complete.\n\t\n\tFinally, we determine $\\Ker \\,\\varphi_{{}_{G_2,\\gamma_3, w_3}}$. 
However, since $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is the restriction of $\\varphi_{{}_{G_2,\\gamma_3}}$, we easily obtain that $\\Ker \\,\\varphi_{{}_{G_2,\\gamma_3, w_3}}=\\{(1,1),(-1,-1) \\} \\cong \\Z_2$.\n\t\n\tTherefore we have the required isomorphism\n\t\\begin{align*}\n\t\t\t(G_2)^{\\gamma_3} \\cap (G_2)^{w_3} \\cong (U(1) \\times U(1))\/\\Z_2.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(G_2)^{\\gamma_3} \\cap (G_2)^{w_3}$ is connected from Theorem \\ref{theorem 4.1.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\n\t\t\t\t\t\t\tG_2\/((U(1) \\times U(1))\/\\Z_2).\n\\end{align*}\n\n\\subsection{Case 2: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, \\sigma_3$ be the $\\R$-linear transformations of $\\mathfrak{J}$ defined in Subsection \\ref{subsection 3.2}. \n\n\\noindent From Lemma \\ref{lemma 3.2.6} (1), since we can easily confirm that $\\gamma_3$ and $\\sigma_3$ commute, $\\tilde{\\gamma}_3$ and $\\tilde{\\sigma}_3$ commute in $\\Aut(F_4)$: $\\tilde{\\gamma}_3\\tilde{\\sigma}_3=\\tilde{\\sigma}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$, we prove a proposition needed in the proof of the theorem below.\n\\vspace{1mm}\n\nWe define subgroups $G_{1,2}$ and $G'_{1,2}$ of the group $Sp(3)$ by\n\\begin{align*}\n\tG_{1,2}&=\\left\\{ A=\\begin{pmatrix}\n\t h & 0 & 0 \\\\\n\t 0 & a & c \\\\\n\t 0 & d & b\n\t \\end{pmatrix} \\in Sp(3)\\,\\left|\\,h \\in Sp(1), \\begin{pmatrix}\t \n\t a & c \\\\\n\t d & b\n\t \\end{pmatrix} \\in U(2) \\subset Sp(2) \n\t \\right. 
\\right\\}, \n\t\\\\\n\tG'_{1,2}&=\\left\\{ A'=\\begin{pmatrix}\n\th' & 0 & 0 \\\\\n\t0 & a' & c'e_2 \\\\\n\t0 & \\ov{e_2}d' & b'\n\t\\end{pmatrix} \\in Sp(3)\\,\\left|\\,h' \\in Sp(1), \n\t\\begin{array}{l}\n\t(c'e_2)(\\ov{c'e_2})+a'\\ov{a'}=1\\\\\n\tb'\\ov{b'}+(\\ov{e_2}d')(\\ov{\\ov{e_2}d'})=1\\\\\n\t(c'e_2)\\ov{b'}+a'(\\ov{\\ov{e_2}d'})=0\\\\\n\ta',b',c',d' \\in \\C\n\t\\end{array}\n\t\\right. \\right\\}, \n\\end{align*}\nwhere $e_2$ is one of the basis elements of $\\mathfrak{C}$.\n\n It goes without saying that $\\begin{pmatrix}\n a & c \\\\\n d & b\n \\end{pmatrix} \\in U(2)$ is equivalent to the conditions\n\\begin{align*}\n\t\tc\\ov{c}+a\\ov{a}=1, \\,\\,b\\ov{b}+d\\ov{d}=1,\\,\\,c\\ov{b}+a\\ov{d}=0,\n\\end{align*}\nmoreover, $(c'e_2)(\\ov{c'e_2})+a'\\ov{a'}=1$ above is the same as $c'\\ov{c'}+a'\\ov{a'}=1$, and similarly for the others.\n\\vspace{1mm}\n\n\\begin{proposition}\\label{proposition 4.2.1}\n\tThe group $G'_{1,2}$ is isomorphic to the group $Sp(1) \\times U(2)${\\rm :} $G'_{1,2} \\cong Sp(1) \\times U(2)$.\n\\end{proposition}\n\\begin{proof}\n\tFirst, we will prove that the group $G'_{1,2}$ is isomorphic to the group $G_{1,2}$. \n\tWe define a mapping $g_{{}_{421}}: G_{1,2} \\to G'_{1,2}$ by\n\t\\begin{align*}\n\t\t\tg_{{}_{421}}(\\begin{pmatrix}\n\t\t\th & 0 & 0 \\\\\n\t\t\t0 & a & c \\\\\n\t\t\t0 & d & b\n\t\t\t\\end{pmatrix})\n\t\t\t&=\\begin{pmatrix}\n\t\t\t1 & 0 & 0 \\\\\n\t\t\t0 & 1 & 0 \\\\\n\t\t\t0 & 0 & \\ov{e_2}\n\t\t\t\\end{pmatrix}\\begin{pmatrix}\n\t\t\th & 0 & 0 \\\\\n\t\t\t0 & a & c \\\\\n\t\t\t0 & d & b\n\t\t\t\\end{pmatrix}\\begin{pmatrix}\n\t\t\t1 & 0 & 0 \\\\\n\t\t\t0 & 1 & 0 \\\\\n\t\t\t0 & 0 & e_2\n\t\t\t\\end{pmatrix}\\left(=\\begin{pmatrix}\n\t\t\th & 0 & 0 \\\\\n\t\t\t0 & a & ce_2 \\\\\n\t\t\t0 & \\ov{e_2}d & b\n\t\t\t\\end{pmatrix} \\right).\n\t\\end{align*}\n\tIt is clear that $g_{{}_{421}}$ is well-defined and a homomorphism. Moreover, it is easy to verify that $g_{{}_{421}}$ is bijective. 
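In fact, since $\\ov{e_2}e_2=e_2\\ov{e_2}=1$, an explicit inverse can be written down (a direct check, conjugating back by the same diagonal matrices):\n\\begin{align*}\n\t\t{g_{{}_{421}}}^{-1}(A')=\\begin{pmatrix}\n\t\t1 & 0 & 0 \\\\\n\t\t0 & 1 & 0 \\\\\n\t\t0 & 0 & e_2\n\t\t\\end{pmatrix}A'\\begin{pmatrix}\n\t\t1 & 0 & 0 \\\\\n\t\t0 & 1 & 0 \\\\\n\t\t0 & 0 & \\ov{e_2}\n\t\t\\end{pmatrix},\\,\\,A' \\in G'_{1,2}.\n\\end{align*}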
Thus we have the isomorphism $G'_{1,2} \\cong G_{1,2}$. \n\t\n\tHere, by defining a mapping $f_{{}_{421}}:Sp(1) \\times U(2) \\to G_{1,2}$ as follows:\n\t\\begin{align*}\n\t\tf_{{}_{421}}(p,U)=\\scalebox{0.8}{$\n\t\t\t\\left( \\begin{array}{cccccccc@{\\!}}\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large$p$}}&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\t\\\\\n\t\t\t&&&&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-18pt}[0pt][0pt]{\\huge $U$}}&\n\t\t\t\\\\\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&&&&\n\t\t\t\\\\[-2mm]\n\t\t\t&&&&&&&\n\t\t\t\\end{array}\\right)$},\n\t\t\\end{align*}\n\twe have the isomorphism $G_{1,2} \\cong Sp(1) \\times U(2)$.\n\t\n\tTherefore, together with the result of $G'_{1,2} \\cong G_{1,2}$, we have the required isomorphism \n\t\\begin{align*}\n\t\t\tG'_{1,2} \\cong Sp(1) \\times U(2).\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$.\n\n\\begin{theorem} \\label{theorem 4.2.2}\n\tThe group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$ is isomorphic to the group $(U(1) \\times Sp(1) \\times U(2))\/\\Z_2$ {\\rm: } $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3} \\cong (U(1) \\times Sp(1) \\times U(2))\/\\Z_2, \\Z_2=\\{(1,1,E),(-1,-1,-E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tFirst, we denote by $h$ the composition of $f_{{}_{421}}$ and $g_{{}_{421}}$ (in the proof of Proposition \\ref{proposition 4.2.1}): $h=g_{{}_{421}} \\circ f_{{}_{421}}$. 
Then we define a mapping $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}:U(1) \\times Sp(1) \\times U(2) \\to (F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$ by\n\t\t\\begin{align*}\n\t\t\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U)(M+\\a)=h(p,U)Mh(p,U)^*+s\\a h(p,U)^*,\\,\n\t\tM+\\a \\in \\mathfrak{J}(3,\\H) \\oplus \\H^3=\\mathfrak{J}.\n\t\t\\end{align*}\n\t\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{F_4,\\gamma_3}}$, that is, $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U) \\allowbreak =\\varphi_{{}_{F_4,\\gamma_3}}(s,h(p,U))$ (Theorem \\ref{theorem 3.2.2}).\n\t\t\n\t\tFirst, we will prove that $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}$ is well-defined. It is clear that $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U) \\in (F_4)^{\\gamma_3}$, and using $\\sigma_3=\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega}))$ (Lemma \\ref{lemma 3.2.6} (1)), it follows that\n\t\t\\begin{align*}\n\t\t{\\sigma_3}^{-1}\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U)\\sigma_3\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))^{-1}\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U)\n\t\t\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\t\\\\\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}))\n\t\t\\varphi_{{}_{F_4,\\gamma_3}}(s,h(p,U))\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\t\\\\\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3}}(s,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})h(p,U)\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})), h(p,U)\\!=\n\t\t\\begin{pmatrix}\n\t\tp & 0 & 0 \\\\\n\t\t0 & a & ce_2 \\\\\n\t\t0 & \\ov{e_2}d & b\n\t\t\\end{pmatrix}\n\t\t\\\\\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3}}(s,\n\t\t\\begin{pmatrix}\n\t\tp & 0 & 0 \\\\\n\t\t0 & \\bm{\\omega}a\\ov{\\bm{\\omega}} & \\bm{\\omega}(ce_2)\\bm{\\omega} \\\\\n\t\t0 & \\ov{\\bm{\\omega}}(\\ov{e_2}d)\\ov{\\bm{\\omega}} & 
\\ov{\\bm{\\omega}}b\\bm{\\omega}\n\t\t\\end{pmatrix})\n\t\t\\\\\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3}}(s,\n\t\t\\begin{pmatrix}\n\t\tp & 0 & 0 \\\\\n\t\t0 & a & c e_2\\\\\n\t\t0 & \\ov{e_2}d & b\n\t\t\\end{pmatrix})\n\t\t\\\\\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3}}(s,h(p,U))\n\t\t\\\\\n\t\t&=\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U).\n\t\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}(s,p,U) \\in (F_4)^{\\sigma_3}$. Thus $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}$ is well-defined.\n\tSubsequently, since $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}$ is the restriction of the mapping $\\varphi_{{}_{F_4,\\gamma_3}}$, we easily see that $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}$ is a homomorphism. \n\n\tNext, we will prove that $\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}$ is surjective. Let $\\alpha \\in (F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3} \\subset (F_4)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in Sp(3)$ such that $\\alpha=\\varphi_{{}_{F_4,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.2.2}). Moreover, from the condition $\\alpha\\in\t(F_4)^{\\sigma_3}$, that is, ${\\sigma_3}^{-1}\\varphi_{{}_{F_4,\\gamma_3}}(s,A)\\sigma_3=\\varphi_{{}_{F_4,\\gamma_3}}(s,A)$, and using ${\\sigma_3}^{-1}\\varphi_{{}_{F_4,\\gamma_3}}(s,A)\\sigma_3\\!=\\!\\varphi_{{}_{F_4,\\gamma_3}}(s,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$ (Lemma \\ref{lemma 3.2.6} (1)), we have that\n\t\\begin{align*}\n\t\\left\\{ \n\t\\begin{array}{l}\n\ts=s \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=A \n\t\\end{array} \\right.\n\t\\quad {\\text{or}}\\quad\n\t\\left\\{ \n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=-A. \n\t\\end{array} \\right.\n\t\\end{align*}\n\tThe latter case is impossible because of $s\\not=0$. 
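Here, the second condition in the former case can be checked entrywise; the following sketch uses only that $\\bm{\\omega} \\in \\C \\subset \\H$ and $e_2\\bm{\\omega}=\\ov{\\bm{\\omega}}e_2$. Writing $A=(a_{ij})$, the condition $\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=A$ reads\n\t\\begin{align*}\n\td_i\\,a_{ij}\\,d'_j=a_{ij}, \\quad (d_1,d_2,d_3)=(1,\\bm{\\omega},\\ov{\\bm{\\omega}}),\\,\\,(d'_1,d'_2,d'_3)=(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),\n\t\\end{align*}\n\tso that $a_{12}=a_{13}=a_{21}=a_{31}=0$, no condition is imposed on $a_{11}$, the entries $a_{22}, a_{33}$ commute with $\\bm{\\omega}$ and hence lie in $\\C$, and the remaining entries take the forms $a_{23}=ce_2, a_{32}=\\ov{e_2}d, c,d \\in \\C$. 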
As for the former case, from the second condition, a straightforward computation shows that $A$ takes the form $\\begin{pmatrix}\n\tp & 0 & 0 \\\\\n\t0 & a & c e_2\\\\\n\t0 & \\ov{e_2}d & b\n\t\\end{pmatrix} \\allowbreak \\in Sp(3)$, that is, $A \\in G'_{1,2}$. \n\tMoreover, from Proposition \\ref{proposition 4.2.1} there exist $p \\in Sp(1)$ and $U \\in U(2)$ such that $A=h(p,U)$. Needless to say, $s \\in U(1)$.\n Thus, there exist $s \\in U(1),p \\in Sp(1)$ and $U \\in U(2)$ such that $\\alpha=\\varphi_{{}_{F_4,\\gamma_3}}(s,h(p,U))=\\varphi_{{}_{F_4,\\gamma_3, \\sigma_3}}(s,p,U)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}$. From $\\Ker\\,\\varphi_{{}_{F_4,\\gamma_3}}=\\{(1,E), (-1,-E) \\}$ we easily obtain that $\\Ker\\,\\varphi_{{}_{F_4,\\gamma_3,\\sigma_3}}=\\{(1,1,E), (-1,-1,-E) \\} \\cong \\Z_2$. \n\n Therefore we have the required isomorphism\n \\begin{align*}\n (F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3} \\cong (U(1) \\times Sp(1) \\times U(2))\/\\Z_2.\n \\end{align*}\n\\end{proof}\n\\vspace{1mm}\n\nThus, since the group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$ is connected by Theorem \\ref{theorem 4.2.2}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\n F_4\/((U(1) \\times Sp(1) \\times U(2))\/\\Z_2).\n\\end{align*}\n\n\\subsection{Case 3: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, w_3$ be the $\\R$-linear transformations of $\\mathfrak{J}$ defined in Subsection \\ref{subsection 3.2}. 
\n\n\\noindent From Lemma \\ref{lemma 3.2.6} (2), since we can easily confirm that $\\gamma_3$ and $w_3$ are commutative, $\\tilde{\\gamma}_3$ and $\\tilde{w}_3$ are commutative in $\\Aut(F_4)$: $\\tilde{\\gamma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$, we prove a lemma needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.3.1}\n\tThe group $S(U(1)\\times U(1)\\times U(1))$ is isomorphic to the group $U(1)\\times U(1)${\\rm :} $S(U(1)\\times U(1)\\times U(1)) \\cong U(1)\\times U(1)$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{431}}: U(1)\\times U(1) \\to S(U(1)\\times U(1)\\times U(1))$ by \n\t\\begin{align*}\n\tf_{{}_{431}}(s,t)=\\left( \n\t\\begin{array}{ccc}\n\ts & & {\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\\\\[2mm]\n\t& t & \n\t\\\\[2mm]\n\t{\\raisebox{1pt}[0pt]{\\large $0$}}&& (st)^{-1}\n\t\\end{array}\\right) \\in SU(3).\n\t\\end{align*}\n\tThen $f_{{}_{431}}$ is clearly an injective homomorphism, and it is surjective because any element of $S(U(1)\\times U(1)\\times U(1))$ takes the form $\\diag(s,t,u)$ with $stu=1$, that is, $u=(st)^{-1}$. Hence this mapping induces the required isomorphism.\n\\end{proof}\n\nNow, we will determine the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.3.2}\n\tThe group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$ is isomorphic to the group $(U(1) \\times U(1) \\times SU(3))\/\\Z_3${\\rm :} $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3} \\cong (U(1) \\times U(1)\\times SU(3))\/\\Z_3, \\Z_3=\\{(1,1,E), (\\bm{\\omega},\\bm{\\omega}, \\bm{\\omega}E),(\\bm{\\omega}^{-1},\\allowbreak\\bm{\\omega}^{-1}, \\bm{\\omega}^{-1}E)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1) \\times U(1) \\times U(1)) \\subset SU(3)$.\n\tWe define a mapping $\\varphi_{{}_{F_4,\\gamma_3, w_3}}: S(U(1) \\times U(1) \\times U(1)) \\times SU(3) \\to (F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)(X_{\\bm{C}}+M)=AX_{\\bm{C}}A^*+LMA^*,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3,\\C)\\oplus 
M(3,\\C)=\\mathfrak{J}.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{F_4,w_3}}$, that is, $\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)=\\varphi_{{}_{F_4,w_3}}(L,A)$ (Theorem \\ref{theorem 3.2.5}).\n\t\n\tAs usual, we will prove that $\\varphi_{{}_{F_4,\\gamma_3, w_3}}$ is well-defined. It is clear that $\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A) \\in (F_4)^{w_3}$, and using $\\gamma_3=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)$ (Lemma \\ref{lemma 3.2.6} (2)), it follows that \n\t\\begin{align*}\n\t {\\gamma_3}^{-1}\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)\\gamma_3\n\t &=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)^{-1}\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)\n\t \\\\\n\t &=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}), E)\\varphi_{{}_{F_4,w_3}}(L,A)\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)\n\t \\\\\n\t &=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})L\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),A),L=\\diag(a,b,c), abc=1\n\t \\\\\n\t &=\\varphi_{{}_{F_4,w_3}}(L,A)\n\t \\\\\n\t &=\\varphi_{{}_{F_4,\\gamma_3,w_3}}(L,A).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A) \\in (F_4)^{\\gamma_3}$. Thus $\\varphi_{{}_{F_4,\\gamma_3, w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{F_4,\\gamma_3,w_3}}$ is the restriction of the mapping $\\varphi_{{}_{F_4,w_3}}$, we easily see that $\\varphi_{{}_{F_4,\\gamma_3,w_3}}$ is a homomorphism. \n\t\n\tNext, we will prove that $\\varphi_{{}_{F_4,\\gamma_3,w_3}}$ is surjective. Let $\\alpha \\in (F_4)^{\\gamma_3} \\cap (F_4)^{w_3} \\subset (F_4)^{w_3}$. There exist $P, A \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{F_4,w_3}}(P,A)$ (Theorem \\ref{theorem 3.2.5}). 
Moreover, from the condition $\\alpha \\in (F_4)^{\\gamma_3}$, that is, ${\\gamma_3}^{-1}\\varphi_{{}_{F_4,w_3}}(P,A)\\gamma_3=\\varphi_{{}_{F_4,w_3}}(P,A)$, and using ${\\gamma_3}^{-1}\\varphi_{{}_{F_4,w_3}}(P,A)\\gamma_3=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})P\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),A)$ (Lemma \\ref{lemma 3.2.6} (2)), we have that\n\t\\begin{align*}\n\t&\\,\\,\\,{\\rm(i)}\\,\\left\\{\n\t\\begin{array}{l}\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})P\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=P \\\\\n\tA=A,\n\t\\end{array} \\right.\n\t\\qquad\n\t {\\rm(ii)}\\,\\left\\{\n \t\\begin{array}{l}\n \t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})P\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=\\bm{\\omega}P \\\\\n\tA=\\bm{\\omega}A,\n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(iii)}\\,\\left\\{\n\t\\begin{array}{l}\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})P\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}^{-1}P \\\\\n\tA=\\bm{\\omega}^{-1}A.\n\t\\end{array} \\right.\n\t\\end{align*}\n\tThe Cases (ii) and (iii) are impossible because of $A\\not=0$. As for the Case (i), from the first condition, a straightforward computation shows that $P$ takes the form $\\diag(a,b,c) \\in SU(3)$ (the off-diagonal entries of $P$ must vanish), that is, $P \\in S(U(1)\\times U(1)\\times U(1))$. Needless to say, $A \\in SU(3)$. Hence there exist $L \\in S(U(1)\\times U(1) \\times U(1))$ and $A \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{F_4,w_3}}(L,A)$. Namely, there exist $L \\in S(U(1)\\times U(1) \\times U(1))$ and $A \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{F_4,\\gamma_3,w_3}}(L,A)$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{F_4,\\gamma_3,w_3}}$. 
From $\\Ker\\,\\varphi_{{}_{F_4,w_3}}=\\{(E,E),(\\bm{\\omega}E,\\bm{\\omega}E), \\allowbreak (\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E)\\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{F_4,\\gamma_3,w_3}}=\\{(E,E),(\\bm{\\omega}E,\\bm{\\omega}E), (\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E)\\} \\cong \\Z_3$. Thus we have the isomorphism $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3} \\cong (S(U(1) \\times U(1) \\times U(1))\\times SU(3))\/\\Z_3$.\n\t\n\tTherefore, by Lemma \\ref{lemma 4.3.1} we have the required isomorphism \n\t\\begin{align*}\n\t(F_4)^{\\gamma_3} \\cap (F_4)^{w_3} \\cong (U(1) \\times U(1)\\times SU(3))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(1,1,E), (\\bm{\\omega},\\bm{\\omega}, \\bm{\\omega}E),(\\bm{\\omega}^{-1},\\bm{\\omega}^{-1}, \\bm{\\omega}^{-1}E)\\}$.\n\\end{proof}\n\\vspace{1mm}\n\nThus, since the group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$ is connected by Theorem \\ref{theorem 4.3.2}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nF_4\/((U(1) \\times U(1) \\times SU(3))\/\\Z_3).\n\\end{align*}\n\n\\subsection{Case 4: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\\label{case 4}\n\nLet $\\sigma_3, w_3$ be the $\\R$-linear transformations of $\\mathfrak{J}$ defined in Subsection \\ref{subsection 3.2}. \n\n\\noindent From Lemma \\ref{lemma 3.2.6} (2), since we can easily confirm that $\\sigma_3$ and $w_3$ are commutative, $\\tilde{\\sigma}_3$ and $\\tilde{w}_3$ are commutative in $\\Aut(F_4)$: $\\tilde{\\sigma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\sigma}_3$.\n\\vspace{1mm}\n\nNow, we will determine the structure of the group $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$. 
Note that the theorem below can be proved in the same way as Theorem \\ref{theorem 4.3.2}; however, we give the proof in as much detail as possible.\n\n\\begin{theorem}\\label{theorem 4.4.1}\n\tThe group $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$ is isomorphic to the group $(SU(3)\\times U(1) \\times U(1))\/\\Z_3${\\rm :} $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (SU(3)\\times U(1) \\times U(1))\/\\Z_3, \\Z_3=\\{(E,1,1), (\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega}),( \\bm{\\omega}^{-1}E,\\allowbreak \\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1) \\times U(1) \\times U(1)) \\subset SU(3)$.\n\tWe define a mapping $\\varphi_{{}_{F_4,\\sigma_3, w_3}}: SU(3) \\times S(U(1) \\times U(1) \\times U(1)) \\to (F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{F_4,\\sigma_3, w_3}}(P,L)(X_{\\bm{C}}+M)=LX_{\\bm{C}}L^*+PML^*,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3,\\C)\\oplus M(3,\\C)=\\mathfrak{J}.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{F_4,w_3}}$, that is, $\\varphi_{{}_{F_4,\\sigma_3, w_3}}(P,L)=\\varphi_{{}_{F_4,w_3}}(P,L)$ (Theorem \\ref{theorem 3.2.5}).\n\t\n\tAs usual, we will prove that $\\varphi_{{}_{F_4,\\sigma_3, w_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{F_4,\\sigma_3, w_3}}(P,L) \\in (F_4)^{w_3}$, and using $\\sigma_3=\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$ (Lemma \\ref{lemma 3.2.6} (2)), it follows that \n\t\\begin{align*}\n\t{\\sigma_3}^{-1}\\varphi_{{}_{F_4,\\sigma_3, w_3}}(P,L)\\sigma_3\n\t&=\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))^{-1}\\varphi_{{}_{F_4,\\sigma_3, w_3}}(P,L)\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\\\\\n\t&=\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}))\\varphi_{{}_{F_4,w_3}}(P,L)\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\\\\\n\t&=\\varphi_{{}_{F_4,w_3}}(P,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})L\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\\,\\,L=\\diag(a,b,c)\n\t\\\\\n\t&=\\varphi_{{}_{F_4,w_3}}(P,L)\n\t\\\\\n\t&=\\varphi_{{}_{F_4,\\sigma_3,w_3}}(P,L).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{F_4,\\sigma_3, w_3}}(P,L) \\in (F_4)^{\\sigma_3}$. Thus $\\varphi_{{}_{F_4,\\sigma_3, w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{F_4,\\sigma_3,w_3}}$ is the restriction of the mapping $\\varphi_{{}_{F_4,w_3}}$, we easily see that $\\varphi_{{}_{F_4,\\sigma_3,w_3}}$ is a homomorphism. \n\t\n\tNext, we will prove that $\\varphi_{{}_{F_4,\\sigma_3,w_3}}$ is surjective. Let $\\alpha \\in (F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\subset (F_4)^{w_3}$.\n\t There exist $P, A \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{F_4,w_3}}(P,A)$ (Theorem \\ref{theorem 3.2.5}). 
Moreover, from the condition $\\alpha \\in (F_4)^{\\sigma_3}$, that is, ${\\sigma_3}^{-1}\\varphi_{{}_{F_4,w_3}}(P,A)\\sigma_3=\\varphi_{{}_{F_4,w_3}}(P,A)$, and using ${\\sigma_3}^{-1}\\varphi_{{}_{F_4,w_3}}(P,A)\\sigma_3\\allowbreak=\\varphi_{{}_{F_4,w_3}}(P,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$ (Lemma \\ref{lemma 3.2.6} (2)), we have that\n\t\\begin{align*}\n\t&\\,\\,\\,{\\rm(i)}\\,\\left\\{\n\t\\begin{array}{l}\n\tP=P\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=A, \n\t\\end{array} \\right.\n\t\\qquad\n\t{\\rm(ii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tP=\\bm{\\omega}P\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}A, \n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(iii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tP=\\bm{\\omega}^{-1}P\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}^{-1}A.\n\t\\end{array} \\right.\n\t\\end{align*}\n\tThe Cases (ii) and (iii) are impossible because of $P\\not=0$. As for the Case (i), from the second condition, a straightforward computation shows that $A$ takes the form $\\diag(a,b,c), a,b,c \\in U(1), abc=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(1))$. Needless to say, $P \\in SU(3)$. Hence there exist $P \\in SU(3)$ and $A \\in S(U(1)\\times U(1) \\times U(1))$ such that $\\alpha=\\varphi_{{}_{F_4,w_3}}(P,A)$. Namely, there exist $P \\in SU(3)$ and $A \\in S(U(1)\\times U(1) \\times U(1))$ such that $\\alpha=\\varphi_{{}_{F_4,\\sigma_3,w_3}}(P,A)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{F_4,\\sigma_3,w_3}}$. 
From $\\Ker\\,\\varphi_{{}_{F_4,w_3}}=\\{(E,E),(\\bm{\\omega}E,\\bm{\\omega}E), \\allowbreak (\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E)\\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{F_4,\\sigma_3,w_3}}=\\{(E,E),(\\bm{\\omega}E,\\bm{\\omega}E), (\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E)\\} \\cong \\Z_3$. Thus we have the isomorphism $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (SU(3)\\times S(U(1) \\times U(1) \\times U(1)))\/\\Z_3$.\n\t\n\tHere, as in the proof of Theorem \\ref{theorem 4.3.2} we have the isomorphism $U(1) \\times U(1) \\cong S(U(1) \\times U(1) \\times U(1))$.\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (SU(3)\\times U(1) \\times U(1))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(E,1,1), (\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega}),( \\bm{\\omega}^{-1}E,\\allowbreak \\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{proof}\n\\vspace{1mm}\n\nThus, since the group $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$ is connected by Theorem \\ref{theorem 4.4.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nF_4\/((SU(3)\\times U(1) \\times U(1))\/\\Z_3).\n\\end{align*} \n\n\\begin{assertion}\\label{assertion}\n\tOn Theorem \\ref{theorem 4.4.1} from a different viewpoint. \n\\end{assertion}\n\n\tFirst, let $U(3) \\subset Sp(3)$. Then, we can embed $U(3)$ into $F_4$ using the mapping $\\varphi_{{}_{F_4,\\gamma_3}}$ as follows:\n\t\\begin{align*}\n\t\t\\varphi_{{}_{F_4,\\gamma_3}}(1,U)(M+\\a)=UMU^*+\\a U^*,\\,\\,M+\\a \\in \\mathfrak{J}(3,\\H) \\oplus \\H^3=\\mathfrak{J}.\n\t\\end{align*}\n\tIn more detail, since $w_3$ induces an automorphism of the group $(F_4)_{E_1, F_1(1),F_1(e_1)}$, it follows that $\\varphi_{{}_{F_4,\\gamma_3}}(1,U) \\in ((F_4)_{E_1, F_1(1),F_1(e_1)})^{w_3} \\cong (Spin(7))^{w_3}$, where $Spin(7)$ is defined in Theorem \\ref{theorem 3.2.4}. 
Here, we denote $\\varphi_{{}_{F_4,\\gamma_3}}(1,U)$ by $\\varphi(U)$: $\\varphi(U)=\\varphi_{{}_{F_4,\\gamma_3}}(1,U)$, and we define a mapping $\\psi: U(1) \\times U(3) \\to (F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\t\\psi(a,U)=D_a\\varphi(U),\n\t\\end{align*}\n\twhere $D_a$ is defined in Subsection 3.2. Then the mapping $\\psi$ induces the isomorphism $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (U(1)\\times U(3))\/\\Z_3$, where $\\Z_3=\\{(1,E), (\\bm{\\omega},\\bm{\\omega}^{-1}E), (\\bm{\\omega}^{-1}, \\bm{\\omega}E) \\}$.\n\n\\subsection{Case 5: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, \\sigma_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}.\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\gamma_3$ and $\\sigma_3$ are commutative, $\\tilde{\\gamma}_3$ and $\\tilde{\\sigma}_3$ are commutative in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{\\sigma}_3=\\tilde{\\sigma}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$, we prove a proposition and a lemma needed in the proof of the theorem below.\n\\vspace{1mm}\n\nWe define a $C$-linear transformation $\\sigma'_3$ of $\\mathfrak{J}^C$ by\n\\begin{align*}\n\t\t\t\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)) \\in (E_6)^{\\gamma_3} \\subset E_6,\n\\end{align*}\nwhere $\\omega=-(1\/2)+(\\sqrt{3}\/2)i \\in C$.\n\nLet $R$ denote the element \n\\begin{align*}R:=\\scalebox{0.8}{$\\begin{pmatrix}\n\t1&&&&&\\\\\n\t&1&&&&\\\\\n\t&&&&1&\\\\\n\t&&&1&&\\\\\n\t&&-1&&&\\\\\n\t&&&&&1\n\t\\end{pmatrix}$} \\in SO(6) \\subset SU(6), \n\\end{align*}\nwhere the blanks are $0$, and consider the element $\\varphi_{{}_{E_6,\\gamma_3}}(1,R) \\in (E_6)^{\\gamma_3} \\subset E_6$. 
Here, we denote this element by $\\delta_R$: $\\delta_R=\\varphi_{{}_{E_6,\\gamma_3}}(1,R)$.\nThen by doing straightforward computation, we have that $\\sigma_3\\delta_R=\\delta_R\\sigma'_3$, that is, $\\sigma_3$ is conjugate to $\\sigma'_3$ under $\\delta_R \\in (E_6)^{\\gamma_3} \\subset E_6$: $\\sigma_3 \\sim \\sigma'_3$. Moreover, $\\sigma'_3$ induces the automorphism $\\tilde{\\sigma'}_3$ of order $3$ on $E_6$: $\\tilde{\\sigma'}_3(\\alpha)={\\sigma'_3}^{-1}\\alpha\\sigma'_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.5.1}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$ is isomorphic to the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\cong (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{451}}: (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}$ by\n\t\\begin{align*}\n\t\t\tg_{{}_{451}}(\\alpha)={\\delta_R}^{-1}\\alpha\\delta_R.\t\n\t\\end{align*}\n\tIn order to prove this isomorphism, it is sufficient to show that $g_{{}_{451}}$ is well-defined. \n\t\n\t\\noindent First, we will show that $g_{{}_{451}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Since it follows from $\\delta_R=\\varphi_{{}_{E_6,\\gamma_3}}(1,R)$ and $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$ that $\\delta_R\\gamma_3=\\gamma_3\\delta_R$, we have that $g_{{}_{451}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Similarly, from $\\sigma_3\\delta_R=\\delta_R\\sigma'_3$ we have that $g_{{}_{451}}(\\alpha) \\in (E_6)^{\\sigma'_3}$. Hence $g_{{}_{451}}$ is well-defined. With the above, the proof of this proposition is completed.\t\n\\end{proof}\n\\vspace{1mm}\n\nSubsequently, we will prove the following lemma. 
\n\n\\begin{lemma}\\label{lemma 4.5.2}\n\tThe group $S(U(2)\\times U(2)\\times U(2))$ is isomorphic to the group $(U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2)${\\rm :} $S(U(2)\\times U(2)\\times U(2)) \\cong (U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2), \\Z_2=\\!\\{(1,1,E,E,E), (1,-1,E,-E,E) \\}, \\Z_2=\\!\\{(1,1,E,E,E), (-1,1,-E,\\allowbreak E,E) \\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{452}}:U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2) \\to S(U(2)\\times U(2)\\times U(2))$ by\n\t\\begin{align*}\n\t\t\tf_{{}_{452}}(a,b,A,B,C)=\\left( \n\t\t\t\\begin{array}{ccc}\n\t\t\t a\\mbox{\\large {$A$}} & & {\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\t\t \\\\[2mm]\n\t\t\t & b\\mbox{\\large {$B$}} & \n\t\t\t \\\\[2mm]\n\t\t\t {\\raisebox{1pt}[0pt]{\\large $0$}}&& (ab)^{-2}\\mbox{\\large {$C$}}\n\t\t\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{452}}$ is well-defined and a homomorphism. \n\t\n\tWe will prove that $f_{{}_{452}}$ is surjective. Let $P \\in S(U(2)\\times U(2)\\times U(2))$. Then $P$ takes the form of $\\diag(P_1,P_2,P_3),P_j \\in U(2), (\\det\\,P_1)(\\det\\,P_2)(\\det\\,P_3)=1$. Here, since $P_1 \\in U(2)$, we see that $\\det\\,P_1 \\in U(1)$. We choose $a \\in U(1)$ such that $a^2=\\det\\,P_1$, and set $A=(1\/a)P_1$. Then we have that $ A \\in SU(2)$. Similarly, for $P_2 \\in U(2)$, there exist $b \\in U(1)$ and $B \\in SU(2)$ such that $P_2=bB, b^2=\\det\\,P_2$. From $(\\det\\,P_1)(\\det\\,P_2)(\\det\\,P_3)=1$, we have that $\\det\\,P_3=(ab)^{-2}$. Set $C=(ab)^2P_3$. Then we have that $C \\in SU(2)$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{452}}$. 
It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\t\t\\Ker\\,f_{{}_{452}}&=\\{(a,b,A,B,C)\\in U(1)^{\\times 2}\\times SU(2)^{\\times 3} \\,|\\,f_{{}_{452}}(a,b,A,B,C)=E \\}\n\t\t\t\\\\\n\t\t\t&=\\{(a,b,a^{-1}E,b^{-1}E,(ab)^2E)\\in U(1)^{\\times 2}\\times SU(2)^{\\times 3} \\,|\\,a^2=b^2=1 \\}\n\t\t\t\\\\\n\t\t\t&=\\{(1,1,E,E,E), (1,-1,E,-E,E),(-1,1,-E,E,E), (-1,-1,-E,-E,E) \\}\n\t\t\t\\\\\n\t\t\t&=\\{(1,1,E,E,E), (1,-1,E,-E,E) \\} \\times \\{(1,1,E,E,E), (-1,1,-E,E,E) \\}\n\t\t\t\\\\\n\t\t\t& \\cong \\Z_2 \\times \\Z_2.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t\t\t\tS(U(2)\\times U(2)\\times U(2)) \\cong (U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2).\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$.\n\n\\begin{theorem}\\label{theorem 4.5.3}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$ is isomorphic to the group $(U(1)\\times U(1) \\times U(1)\\allowbreak \\times SU(2) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\cong (U(1)\\times U(1) \\times U(1)\\times SU(2) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2), \\Z_2=\\{(1,1,1,E,E,E), (-1,1,1,-E,-E,E) \\},\\,\\Z_2=\\{(1,1,1,E,E,E), (-1,1,-1,-E,E,E) \\},\\Z_2\\!=\\!\\{(1,1,1,E,E,E), (-1,-1,1,-E,-E,E) \\},\\!\\Z_2\\allowbreak=\\{(1,1,1,E,E,E), (-1,-1,-1,E,E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(2)\\times U(2)\\times U(2)) \\subset SU(6)$. 
\n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}: U(1)\\times S(U(2)\\times U(2)\\times U(2)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n \\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C\\!\\!=\\!\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\t\n\tFirst, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P) \\in (E_6)^{\\gamma_3}$, and it follows from $\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$ that\n\t\\begin{align*}\n\t\t\t&\\quad {\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P)\\sigma'_3\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega))\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)P\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)),P=\\diag(P_1,P_2,P_3)\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(P_1,(\\tau\\omega E) P_2(\\omega E),(\\omega E) P_3(\\tau\\omega E)))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P).\n\t\\end{align*}\n\tHence 
we have that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P) \\in (E_6)^{\\sigma'_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3} \\subset (E_6)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in SU(6)$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.3.2}). Moreover, from the condition $\\alpha \\in (E_6)^{\\sigma'_3}$, that is, ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$, and using ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$ (Lemma \\ref{lemma 3.3.8} (1)), we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t \\begin{array}{l}\n\t s=s \\\\\n\t \\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=A \n\t \\end{array}\\right. \n\t \\\\\n\t&\\hspace*{45mm}{\\text{or}}\n\t \\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because of $s\\not=0$. As for the former case, from the second condition, by doing straightforward computation $A$ takes the following form $\\diag(A_1, A_2, A_3), A_j \\in U(2), (\\det\\,A_1)(\\det\\,A_2)(\\det\\,A_3)=1$, that is, $A \\in S(U(2)\\times U(2)\\times U(2))$.\n\tNeedless to say, $s \\in U(1)$. 
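Here, the ``straightforward computation'' above can be made explicit: since all the entries involved are complex numbers, writing $A=(a_{ij})$, the second condition reads\n\t\\begin{align*}\n\t\\lambda_i\\mu_j\\,a_{ij}=a_{ij}, \\quad \\lambda=(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega),\\,\\,\\mu=(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega),\n\t\\end{align*}\n\tand $\\lambda_i\\mu_j=1$ holds exactly when $(i,j)$ lies in one of the three diagonal $2 \\times 2$ blocks, so that all the other entries of $A$ vanish. 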
Hence there exist $s \\in U(1)$ and $P \\in S(U(2)\\times U(2) \\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(2)\\times U(2) \\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$. From $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3} \\cong (U(1)\\times S(U(2)\\times U(2)\\times U(2)))\/\\Z_2$. Here, from Proposition \\ref{proposition 4.5.1} we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\cong (U(1)\\times S(U(2)\\times U(2)\\times U(2)))\/\\Z_2$. Moreover, by Lemma \\ref{lemma 4.5.2} we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\!\\cong (U(1)\\times U(1) \\times U(1)\\times SU(2) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,1,1,-E,-E,E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,1,-1,-E,E,E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,-1,1,-E,-E,E) \\},\n\t\\\\\n\t&\\Z_2\\allowbreak=\\{(1,1,1,E,E,E), (-1,-1,-1,E,E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$ is connected by Theorem \\ref{theorem 4.5.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\n\t\t\t\t\tE_6\/((U(1)\\times U(1) \\times U(1)\\times SU(2) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2)).\n\\end{align*}\n\n\\subsection{Case 6: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\nu}_3, 
\\tilde{\\nu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, \\nu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}.\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), together with $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$, since we can easily confirm that $\\gamma_3$ and $\\nu_3$ are commutative, $\\tilde{\\gamma}_3$ and $\\tilde{\\nu}_3$ are commutative in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{\\nu}_3=\\tilde{\\nu}_3\\tilde{\\gamma}_3$. \n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$, we prove a lemma needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.6.1}\n\tThe group $S(U(1)\\times U(5))$ is isomorphic to the group $(U(1)\\times SU(5))\/\\Z_5${\\rm :} $S(U(1)\\times U(5))\\! \\cong\\! (U(1)\\times SU(5))\/\\Z_5, \\Z_5\\!=\\!\\{(\\varepsilon_k, {\\varepsilon_k}^{-1}E) | \\varepsilon_k\\!=\\!\\exp((2\\pi i\/5)k), k\\!=0,1,2,3,4\\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{461}}:U(1) \\times SU(5) \\to S(U(1)\\times U(5))$ by\n\t\\begin{align*}\n\tf_{{}_{461}}(t, T)=\\scalebox{0.7}{$\n\t\t\\left(\\begin{array}{cccccccc@{\\!}}\n\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large$t^{-5}$}}&&&&\n\t\t\\\\\n\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\\\\\n\t\t&&&&&&&\n\t\t\\\\\n\t\t&&&&\\multicolumn{2}{c}\n\t\t{\\raisebox{-15pt}[0pt][0pt]{\\Large $t$}\\,\\raisebox{-18pt}[0pt][0pt]{\\huge $T$}}&\n\t\t\\\\\n\t\t&\\multicolumn{2}{c}{\\raisebox{0pt}[0pt]{\\Large $0$}}&&&&\n\t\t\\\\[-2mm]\n\t\t&&&&&&&\n\t\t\\end{array}\\right)$}.\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{461}}$ is well-defined and a homomorphism. \n\t\n\tNow, we will prove that $f_{{}_{461}}$ is surjective. Let $P \\in S(U(1) \\times U(5))$. 
Then $P$ takes the form of \n\t\\scalebox{0.6}\n\t{$\\left(\\begin{array}{cccccccc@{\\!}}\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large $s$}}&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\t\\\\\n\t\t\t&&&&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-18pt}[0pt][0pt]{\\huge $S$}}&\n\t\t\t\\\\\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{0pt}[0pt]{\\Large $0$}}&&&&\n\t\t\t\\\\[-2mm]\n\t\t\t&&&&&&&\n\t\t\\end{array}\\right)$},\\,\\,$s \\in U(1), S \\in U(5), s(\\det S)=1$.\n\tHere, since $S \\in U(5)$, we see that $\\det\\,S \\in U(1)$, and so we choose $t \\in U(1)$ such that $t^5=\\det\\,S$. Set $T=t^{-1}S$, then we have that $T \\in SU(5)$ and $s=t^{-5}$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{461}}$. It follows from the definition of the kernel that \n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{461}}&=\\{(t,T) \\in U(1)\\times SU(5)\\,|\\, f_{{}_{461}}(t,T)=E \\}\n\t\\\\\n\t&=\\{(t,T) \\in U(1)\\times SU(5)\\,|\\,t^5=1, T=t^{-1}E \\}\n\t\\\\\n\t&=\\{(\\varepsilon_k, {\\varepsilon_k}^{-1}E) \\,|\\, \\varepsilon_k=\\exp((2\\pi i\/5)k), k=0,1,2,3,4\\}\n\t\\\\\n\t& \\cong \\Z_5.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\tS(U(1) \\times U(5)) \\cong (U(1)\\times SU(5))\/\\Z_5.\n\t\\end{align*}\n\\end{proof}\n \nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$.\n\n\\begin{theorem}\\label{theorem 4.6.2}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$ is isomorphic to the group $(U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\cong (U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5), \\Z_2=\\{(1,1,E), (-1,-1,\\allowbreak -E) \\}, \\Z_5=\\{(1,\\varepsilon_k, {\\varepsilon_k}^{-1}E) \\,|\\, \\varepsilon_k=\\exp ((2\\pi i\/5)k), k=0,1,2,3,4\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(5)) \\subset SU(6)$. 
Then we define a mapping $\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}: U(1)\\times S(U(1)\\times U(5)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\n\tFirst, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s, P) \\in (E_6)^{\\gamma_3}$, and using $\\nu_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}))$ (Lemma \\ref{lemma 3.3.8} (1)), it follows that \n\t\\begin{align*}\n\t\t\t&\\quad {\\nu_3}^{-1}\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P)\\nu_3\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1}))^{-1}\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1}))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^{-5}, \\nu,\\ldots,\\nu))\\varphi_{{}_{E_6, \\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1}))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6, \\gamma_3}}(s,\\diag(\\nu^{-5}, \\nu,\\ldots,\\nu)P\\,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1})), P=\\scalebox{0.6}{$\n\t\t\t\t\\left( \\begin{array}{cccccccc@{\\!}}\n\t\t\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large$t$}}&&&&\n\t\t\t\t\\\\\n\t\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\t\t\\\\\n\t\t\t\t&&&&&&&\n\t\t\t\t\\\\\n\t\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-18pt}[0pt][0pt]{\\huge $U$}}&\n\t\t\t\t\\\\\n\t\t\t\t&\\multicolumn{2}{c}{\\raisebox{0pt}[0pt]{\\Large $0$}}&&&&\n\t\t\t\t\\\\[-2mm]\n\t\t\t\t&&&&&&&\n\t\t\t\t\\end{array}\\right)$}\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6, \\gamma_3}}(s,P)\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P) \\in (E_6)^{\\nu_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\subset (E_6)^{\\nu_3}$. There exist $ q \\in Sp(1)$ and $P \\in S(U(1) \\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6, \\nu_3}}(q, P)$ (Theorem \\ref{theorem 3.3.5}). Moreover, consider the condition $\\alpha \\in (E_6)^{\\gamma_3}$, that is, ${\\gamma_3}^{-1}\\varphi_{{}_{E_6, \\nu_3}}(q, P)\\gamma_3=\\varphi_{{}_{E_6, \\nu_3}}(q, P)$. Noting that $\\gamma_3=\\varphi_{{}_{E_6, \\nu_3}}(\\omega,E)(=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E))$ (Lemma \\ref{lemma 3.3.8} (1)), it follows that\n\t${\\gamma_3}^{-1}\\varphi_{{}_{E_6, \\nu_3}}(q, P)\\gamma_3=\\varphi_{{}_{E_6,\\nu_3}}(\\omega^{-1}q\\omega, P)$, so we have that\n\t\\begin{align*}\n\t\t\t\t\\left\\{\n\t\t\t\t\\begin{array}{l}\n\t\t\t\t\\omega^{-1}q\\omega=q \\\\\n\t\t\t\tP=P\n\t\t\t\t\\end{array}\\right.\n\t\t\t\t\\quad {\\text{or}}\\quad\n\t\t\t\t\\left\\{\n\t\t\t\t\\begin{array}{l}\n\t\t\t\t\\omega^{-1}q\\omega=-q \\\\\n\t\t\t\tP=-P.\n\t\t\t\t\\end{array}\\right.\n\t\\end{align*}\n\tThe latter case is impossible because of $P\\not=0$. As for the former case, from the first condition, we easily see that $q \\in U(1)$, and needless to say, $P \\in S(U(1)\\times U(5))$. 
Hence there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}$. Indeed, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}=\\{(1,(1,E)),(-1,(-1,-E)) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\cong (U(1)\\times S(U(1)\\times U(5)))\/\\Z_2$.\n\n\tTherefore, by Lemma \\ref{lemma 4.6.1} we have the required isomorphism\n\t\\begin{align*}\n\t\t\t\t(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\cong (U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5),\n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t\t\t&\\Z_2=\\{(1,1,E), (-1,-1,-E) \\},\n\t\t\t\\\\\n\t\t\t&\\Z_5=\\{(1,\\varepsilon_k, {\\varepsilon_k}^{-1}E) \\,|\\, \\varepsilon_k=\\exp ((2\\pi i\/5)k), k=0,1,2,3,4\\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$ is connected from Theorem \\ref{theorem 4.6.2}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5)).\n\\end{align*}\n\n\\subsection{Case 7: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, \\mu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. 
\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), we can easily confirm that $\\gamma_3$ and $\\mu_3$ commute, so that $\\tilde{\\gamma}_3$ and $\\tilde{\\mu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{\\mu}_3=\\tilde{\\mu}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$, we prove a proposition and a lemma needed in the proof of the theorem below.\n\\vspace{1mm}\n\nWe define a $C$-linear transformation $\\mu'_3$ of $\\mathfrak{J}^C$ by\n\\begin{align*}\n\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^{-2},\\nu^2,\\nu^{-1},\\nu^{-1},\\nu,\\nu))\\in (E_6)^{\\gamma_3} \\subset E_6,\n\\end{align*}\nwhere ${\\nu}=\\exp(2\\pi i\/9)\\in C$.\n\nTake the element \n\\begin{align*}Q:=\\scalebox{0.8}{$\\begin{pmatrix}\n\t1&&&&&\\\\\n\t&1&&&&\\\\\n\t&&1&&&\\\\\n\t&&&&1&\\\\\n\t&&&-1&&\\\\\n\t&&&&&1\n\t\\end{pmatrix}$} \\in SO(6) \\subset SU(6), \n\\end{align*}\nwhere the blanks are $0$, and consider the element $\\varphi_{{}_{E_6,\\gamma_3}}(1,Q) \\in (E_6)^{\\gamma_3} \\subset E_6$. Here, we denote this element by $\\delta_Q$: $\\delta_Q=\\varphi_{{}_{E_6,\\gamma_3}}(1,Q)$.\nThen, by a straightforward computation, we have that $\\mu_3\\delta_Q=\\delta_Q\\mu'_3$, that is, $\\mu_3$ is conjugate to $\\mu'_3$ under $\\delta_Q \\in (E_6)^{\\gamma_3} \\subset E_6$: $\\mu_3 \\sim \\mu'_3$. 
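The matrix mechanics behind this conjugation can be sanity-checked numerically (a sketch only, not part of the proof; it uses nothing beyond the matrices $Q$ and $\diag(\nu^{-2},\nu^2,\nu^{-1},\nu^{-1},\nu,\nu)$ displayed above): $Q$ lies in $SO(6)$, and conjugation by $Q$ merely exchanges the 4th and 5th diagonal entries of a diagonal matrix, which is what lets $\delta_Q$ intertwine $\mu_3$ and $\mu'_3$.

```python
import numpy as np

# nu = exp(2*pi*i/9) and the diagonal representative of mu'_3 displayed above.
nu = np.exp(2j * np.pi / 9)
Dmu = np.diag([nu**-2, nu**2, nu**-1, nu**-1, nu, nu])

# The matrix Q from the text: identity except for the 4/5 block [[0,1],[-1,0]].
Q = np.eye(6)
Q[3, 3] = Q[4, 4] = 0.0
Q[3, 4] = 1.0
Q[4, 3] = -1.0

# Q is orthogonal with determinant 1, i.e. Q lies in SO(6).
assert np.allclose(Q @ Q.T, np.eye(6))
assert np.isclose(np.linalg.det(Q), 1.0)

# Conjugation by Q just swaps the 4th and 5th diagonal entries.
swapped = Q @ Dmu @ Q.T
assert np.allclose(swapped, np.diag([nu**-2, nu**2, nu**-1, nu, nu**-1, nu]))
```

The same permutation argument is why the fixed-point groups of $\mu_3$ and $\mu'_3$ are carried into each other by $\delta_Q$.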
Moreover, $\\mu'_3$ induces the automorphism $\\tilde{\\mu'}_3$ of order $3$ on $E_6$: $\\tilde{\\mu'}_3(\\alpha)={\\mu'_3}^{-1}\\alpha\\mu'_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.7.1}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3}${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\cong (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{471}}: (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3} \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ by\n\t\\begin{align*}\n\tg_{{}_{471}}(\\alpha)=\\delta_Q\\alpha{\\delta_Q}^{-1}.\t\n\t\\end{align*}\n\tIn order to prove this isomorphism, it is sufficient to show that $g_{{}_{471}}$ is well-defined. \n\t\n\t\\noindent First, we will show that $g_{{}_{471}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Since it follows from $\\delta_Q=\\varphi_{{}_{E_6,\\gamma_3}}(1,Q)$ and $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$ that $\\delta_Q\\gamma_3=\\gamma_3\\delta_Q$, we have that $g_{{}_{471}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Similarly, from $\\mu_3\\delta_Q=\\delta_Q\\mu'_3$ we have that $g_{{}_{471}}(\\alpha) \\in (E_6)^{\\mu_3}$. Hence $g_{{}_{471}}$ is well-defined. With the above, the proof of this proposition is completed.\t\n\\end{proof}\n\\vspace{1mm}\n\nSubsequently, we will prove the following lemma. 
\n\n\\begin{lemma}\\label{lemma 4.7.2}\n\tThe group $S(U(1)\\times U(1)\\times U(2)\\times U(2))$ is isomorphic to the group $(U(1) \\times U(1)\\times U(1)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times \\Z_2)${\\rm :} $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\cong (U(1)\\times U(1)\\times U(1)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times \\Z_2), \\Z_2=\\{(1,1,1,E,E), (1,-1,1,E,-E) \\}, \\Z_2=\\{(1,1,1,E,E), (1,-1,-1,-E,E) \\}, \\Z_2=\\{ (1,1,1,E,E), (-1,1,1,E,-E)\\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{472}}:U(1)\\times U(1) \\times U(1)\\times SU(2)\\times SU(2) \\to S(U(1)\\times U(1)\\times U(2)\\times U(2))$ by\n\t\\begin{align*}\n\tf_{{}_{472}}(a,b,c,A,B)=\\left( \n\t\\begin{array}{cccc}\n\ta^{-2} && &{\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\\\\[2mm]\n\t& b^{-2} &&\n\t\\\\[2mm]\n\t&& c^{-1}\\mbox{\\large {$A$}}&\n\t\\\\[2mm]\n\t{\\raisebox{3pt}[0pt]{\\large $0$}}&&&(abc)\\mbox{\\large {$B$}}\n\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{472}}$ is well-defined and a homomorphism. \n\t\n\tNow, we will prove that $f_{{}_{472}}$ is surjective. Let $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. Then $P$ takes the form of $\\diag(s,t,P_1,P_2),s,t \\in U(1),P_j \\in U(2), (st)(\\det\\,P_1)(\\det\\,P_2)=1$. Here, first we choose $a, b \\in U(1)$ such that $s=a^{-2}$ and $t=b^{-2}$. \n\tMoreover, since $P_1 \\in U(2)$, we see that $\\det\\,P_1 \\in U(1)$, and so we choose $c \\in U(1)$ such that $c^{-2}=\\det\\,P_1$. Set $A=cP_1$, then we have that $A \\in SU(2)$ and $c^{-1}A=P_1$. Finally, set $B=(abc)^{-1}P_2$. Since $(abc)^2=\\det\\,P_2$ from the condition $(st)(\\det\\,P_1)(\\det\\,P_2)=1$, we have that $B \\in SU(2)$ and $(abc)B=P_2$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{472}}$. 
It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{472}}&=\\{(a,b,c,A,B)\\in U(1)^{\\times 3}\\times SU(2)^{\\times 2} \\,|\\,f_{{}_{472}}(a,b,c,A,B)=E \\}\n\t\\\\\n\t&=\\{(a,b,c,A,B)\\in U(1)^{\\times 3}\\times SU(2)^{\\times 2} \\,|\\,a^2=b^2=1,A=cE, B=(abc)^{-1}E \\}\n\t\\\\\n\t&=\\{(1,1,1,E,E), (1,1,-1,-E,-E),(1,-1,1,E,-E), (1,-1,-1,-E,E) \\}\n\t\\\\\n\t& \\quad \\cup \\{ (-1,1,1,E,-E), (-1,1,-1,-E,E),(-1,-1,1,E,E), (-1,-1,-1,-E,-E)\\}\n\t\\\\\n\t&=\\{(1,1,1,E,E), (1,-1,1,E,-E) \\}\\times \\{(1,1,1,E,E), (1,-1,-1,-E,E) \\}\n\t\\\\\n\t&\\quad \\times \\{(1,1,1,E,E), (-1,1,1,E,-E) \\}\n\t\\\\\n\t& \\cong \\Z_2 \\times \\Z_2 \\times\\Z_2.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t&\\quad S(U(1)\\times U(1)\\times U(2)\\times U(2)) \n\t\\\\\n\t&\\cong (U(1)\\times U(1) \\times U(1)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times \\Z_2).\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$.\n\n\\begin{theorem}\\label{theorem 4.7.3}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(U(1)\\times U(1) \\times U(1)\\allowbreak \\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\cong (U(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,\\allowbreak 1,-E,E), (-1,-i,-i,1,-E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\subset SU(6)$. 
\n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}: U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P) \\in (E_6)^{\\gamma_3}$, and it follows from $\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$ that\n\t\\begin{align*}\n\t&\\quad {\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P)\\mu'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}))\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1})P\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)),P=\\diag(a,b,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(\\nu^2 a \\nu^{-2}, {\\nu}^{-2} b \\nu^2, (\\nu E) P_1(\\nu^{-1}E), ({\\nu}^{-1}E) P_2 (\\nu E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P) \\in (E_6)^{\\mu'_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3} \\subset (E_6)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in SU(6)$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.3.2}). Moreover, from the condition $\\alpha \\in (E_6)^{\\mu'_3}$, that is, ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$, and using ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=s \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because of $s\\not=0$. As for the former case, from the second condition, by a straightforward computation $A$ takes the form $\\diag(a, b, C, D), a,b \\in U(1),C, D \\in U(2), (ab)(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. 
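The ``straightforward computation'' invoked here can be illustrated numerically (a sketch only, not part of the proof): conjugating $A$ by $\diag({\nu}^2,\nu^{-2},\nu,\nu,{\nu}^{-1},{\nu}^{-1})$ multiplies the $(i,j)$ entry of $A$ by $d_i{d_j}^{-1}$, so a fixed $A$ must vanish at every entry with $d_i \ne d_j$, and the surviving entries form exactly the block pattern $\diag(a,b,C,D)$.

```python
import numpy as np

# nu = exp(2*pi*i/9); the conjugating diagonal from the former case above.
nu = np.exp(2j * np.pi / 9)
d = np.array([nu**2, nu**-2, nu, nu, nu**-1, nu**-1])
D = np.diag(d)

# (D A D^{-1})_{ij} = (d_i / d_j) A_{ij}, so D A D^{-1} = A forces A_{ij} = 0
# wherever d_i != d_j; the surviving entries form 1x1, 1x1, 2x2, 2x2 blocks.
mask = np.isclose(d[:, None], d[None, :])
print(mask.astype(int))
# [[1 0 0 0 0 0]
#  [0 1 0 0 0 0]
#  [0 0 1 1 0 0]
#  [0 0 1 1 0 0]
#  [0 0 0 0 1 1]
#  [0 0 0 0 1 1]]

# Conversely, any matrix supported on this pattern is fixed by the conjugation.
rng = np.random.default_rng(0)
A = (rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))) * mask
assert np.allclose(D @ A @ np.linalg.inv(D), A)
```

The unitarity and determinant conditions on the surviving blocks then give precisely $A \in S(U(1)\times U(1)\times U(2)\times U(2))$.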
Needless to say, $s \\in U(1)$.\n\tHence there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P)$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$. Indeed, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3} \\cong (U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. In addition, from Proposition \\ref{proposition 4.7.1} we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\cong (U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. Here, using the mapping $f_{{}_{472}}$ in the proof of Lemma \\ref{lemma 4.7.2}, we define a homomorphism $h_{{}_{473}}:U(1)\\times (U(1)\\times U(1)\\times U(1)\\times SU(2)\\times SU(2)) \\to U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2))$ by\n\t\\begin{align*}\n\t\t\th_{{}_{473}}(s,(a,b,c,A,B))=(s,f_{{}_{472}}(a,b,c,A,B)).\n\t\\end{align*}\n\tThen, the elements $(s,(a,b,c,A,B))$ corresponding to the elements $(1,E), (-1,-E) \\in \\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ under the mapping $h_{{}_{473}}$ are as follows.\n\t\\begin{align*}\n\t&(1,(1,1,1,E,E)),(1,(1,1,-1,-E,-E)),(1,(1,-1,1,E,-E)),(1,(1,-1,-1,-E,E)),\n\t\\\\\n\t&(1,(-1,1,1,E,-E)),(1,(-1,1,-1,-E,E)),(1,(-1,-1,1,E,E)),(1,(-1,-1,-1,-E,-E)),\n\t\\\\\n\t&(-1,(i,i,1,-E,E)),(-1,(i,i,-1,E,-E)),(-1,(i,-i,1,-E,-E)),(-1,(i,-i,-1,E,E)),\n\t\\\\\n\t&(-1,(-i,i,1,-E,-E)),(-1,(-i,i,-1,E,\\!E)),(-1,(-i,-i,1,-E,E)),(-1,(-i,-i,-1,E,-E)).\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism 
\n\t\\begin{align*}\n\t(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\!\\cong (U(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\\\\n\t&\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,1,-E,E), (-1,-i,-i,1,-E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ is connected from Theorem \\ref{theorem 4.7.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)).\n\\end{align*}\n\n\\subsection{Case 8: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. 
\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), we can easily confirm that $\\gamma_3$ and $w_3$ commute, so that $\\tilde{\\gamma}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$, we prove a proposition and a lemma needed in the proof of the theorem below.\n\\vspace{1mm}\n\nWe define a $C$-linear transformation $w'_3$ of $\\mathfrak{J}^C$ by\n\\begin{align*}\n\t\tw'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)) \\in (E_6)^{\\gamma_3} \\subset E_6.\n\\end{align*}\n\nTake the element \n\\begin{align*}N:=\\scalebox{0.8}{$\\begin{pmatrix}\n\t1&&&&&\\\\\n\t&&&&1&\\\\\n\t&&1&&&\\\\\n\t&&&1&&\\\\\n\t&-1&&&&\\\\\n\t&&&&&1\n\t\\end{pmatrix}$} \\in SO(6) \\subset SU(6), \n\\end{align*}\nwhere the blanks are $0$, and consider the element $\\varphi_{{}_{E_6,\\gamma_3}}(1,N) \\in (E_6)^{\\gamma_3} \\subset E_6$. Here, we denote this element by $\\delta_N$: $\\delta_N=\\varphi_{{}_{E_6,\\gamma_3}}(1,N)$.\nThen, by a straightforward computation, we have that $w_3\\delta_N=\\delta_N w'_3$, that is, $w_3$ is conjugate to $w'_3$ under $\\delta_N \\in (E_6)^{\\gamma_3} \\subset E_6$: $w_3 \\sim w'_3$. 
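As in the previous case, the conjugation mechanism can be sanity-checked numerically (a sketch only, not part of the proof; it assumes that $\tau$ acts on the scalar $\omega$ as complex conjugation, as elsewhere in the text): $N$ lies in $SO(6)$, and conjugation by $N$ exchanges the 2nd and 5th diagonal entries of a diagonal matrix, which is what lets $\delta_N$ intertwine $w_3$ and $w'_3$.

```python
import numpy as np

# omega = exp(2*pi*i/3); assuming tau acts on scalars as complex conjugation,
# the diagonal representative of w'_3 is diag(conj(omega) x3, omega x3).
omega = np.exp(2j * np.pi / 3)
Dw = np.diag([omega.conjugate()] * 3 + [omega] * 3)

# The matrix N from the text: identity except for the 2/5 block [[0,1],[-1,0]].
N = np.eye(6)
N[1, 1] = N[4, 4] = 0.0
N[1, 4] = 1.0
N[4, 1] = -1.0

# N is orthogonal with determinant 1, i.e. N lies in SO(6).
assert np.allclose(N @ N.T, np.eye(6))
assert np.isclose(np.linalg.det(N), 1.0)

# Conjugation by N exchanges the 2nd and 5th diagonal entries.
swapped = N @ Dw @ N.T
expected = np.diag([omega.conjugate(), omega, omega.conjugate(),
                    omega, omega.conjugate(), omega])
assert np.allclose(swapped, expected)
```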
Moreover, $w'_3$ induces the automorphism $\\tilde{w'}_3$ of order $3$ on $E_6$: $\\tilde{w'}_3(\\alpha)={w'_3}^{-1}\\alpha w'_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.8.1}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w'_3}${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{481}}: (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3} \\to (E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\tg_{{}_{481}}(\\alpha)=\\delta_N\\alpha{\\delta_N}^{-1}.\t\n\t\\end{align*}\n\tIn order to prove this isomorphism, it is sufficient to show that $g_{{}_{481}}$ is well-defined. \n\t\n\t\\noindent First, we will show that $g_{{}_{481}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Since it follows from $\\delta_N=\\varphi_{{}_{E_6,\\gamma_3}}(1,N)$ and $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$ that $\\delta_N\\gamma_3=\\gamma_3\\delta_N$, we have that $g_{{}_{481}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Similarly, from $w_3\\delta_N=\\delta_N w'_3$ we have that $g_{{}_{481}}(\\alpha) \\in (E_6)^{w_3}$. Hence $g_{{}_{481}}$ is well-defined. With the above, the proof of this proposition is completed.\t\n\\end{proof}\n\nSubsequently, we will prove the following lemma. 
\n\n\\begin{lemma}\\label{lemma 4.8.2}\n\tThe group $S(U(3)\\times U(3))$ is isomorphic to the group $(U(1) \\times SU(3)\\times SU(3))\/\\Z_3${\\rm :} $S(U(3)\\times U(3)) \\cong (U(1) \\times SU(3)\\times SU(3))\/\\Z_3, \\Z_3=\\{(1,E,E), (\\omega,{\\omega}^{-1} E,\\omega E),\\allowbreak ({\\omega}^{-1},\\omega E,{\\omega}^{-1} E)\\}$, where $\\omega=(-1\/2)+(\\sqrt{3}\/2)i \\in C$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{482}}:U(1)\\times SU(3)\\times SU(3) \\to S(U(3)\\times U(3))$ by\n\t\\begin{align*}\n\tf_{{}_{482}}(a,A,B)=\\left( \n\t\\begin{array}{cc}\n\taA &{\\raisebox{-5pt}[0pt]{\\large $0$}}\n\t\\\\[4mm]\n\t{\\raisebox{1pt}[0pt]{\\large $0$}}& a^{-1}B\n\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{482}}$ is well-defined and a homomorphism. \n\t\n\tNow, we will prove that $f_{{}_{482}}$ is surjective. Let $P \\in S(U(3)\\times U(3))$. Then $P$ takes the form of $\\diag(P_1,P_2),P_j \\in U(3), (\\det\\,P_1)(\\det\\,P_2)=1$. Here, since $P_1 \\in U(3)$, we see that $\\det\\,P_1 \\in U(1)$, and so we choose $a \\in U(1)$ such that $a^3=\\det\\,P_1$. Set $A=a^{-1}P_1$, then we have that $A \\in SU(3)$. Similarly, for $P_2 \\in U(3)$, set $B=aP_2$; then we have that $B \\in SU(3)$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{482}}$. 
It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{482}}&=\\{(a,A,B)\\in U(1)\\times SU(3)\\times SU(3) \\,|\\,f_{{}_{482}}(a,A,B)=E \\}\n\t\\\\\n\t&=\\{(a,A,B)\\in U(1)\\times SU(3)\\times SU(3) \\,|\\,a^3=1,A=a^{-1}E, B=aE \\}\n\t\\\\\n\t&=\\{(1,E,E), (\\omega,{\\omega}^{-1}E,\\omega E),({\\omega}^{-1},\\omega E,{\\omega}^{-1}E) \\}\n\t\\\\\n\t& \\cong \\Z_3.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\tS(U(3)\\times U(3)) \\cong (U(1) \\times SU(3)\\times SU(3))\/\\Z_3.\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.8.3}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3), \\Z_2=\\{(1,1,E,E), (-1,-1,E,E)\\},\n\t\\Z_3=\\{(1,1,E,E), (1,\\omega,{\\omega}^{-1}E,\\omega E),(1,{\\omega}^{-1},\\omega E,{\\omega}^{-1}E)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(3)\\times U(3)) \\subset SU(6)$. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}: U(1)\\times S(U(3)\\times U(3)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\t\n\tFirst, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P) \\in (E_6)^{\\gamma_3}$, and it follows from $w'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$ that\n\t\\begin{align*}\n\t&\\quad {w'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P) w'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))^{-1}\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega))\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)P\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)),P=\\diag(P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag((\\omega E)P_1(\\tau\\omega E), (\\tau\\omega E) P_2 (\\omega E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P) \\in (E_6)^{w'_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3} \\subset (E_6)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in SU(6)$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.3.2}). 
Moreover, from the condition $\\alpha \\in (E_6)^{w'_3}$, that is, ${w'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)w'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$, and using ${w'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)w'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=s \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because of $s\\not=0$. As for the former case, from the second condition, by a straightforward computation $A$ takes the form $\\diag(C, D), C, D \\in U(3), (\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(3)\\times U(3))$. Needless to say, $s \\in U(1)$.\n\tHence there exist $s \\in U(1)$ and $P \\in S(U(3)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(3)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$. Indeed, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,w'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{w'_3} \\cong (U(1)\\times S(U(3)\\times U(3)))\/\\Z_2$. 
In addition, from Proposition \\ref{proposition 4.8.1} we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (U(1)\\times S(U(3)\\times U(3)))\/\\Z_2$. Here, using the mapping $f_{{}_{482}}$ in the proof of Lemma \\ref{lemma 4.8.2}, we define a homomorphism $h_{{}_{483}}:U(1)\\times (U(1)\\times SU(3)\\times SU(3)) \\to U(1)\\times S(U(3)\\times U(3))$ by\n\t\\begin{align*}\n\th_{{}_{483}}(s,(a,A,B))=(s,f_{{}_{482}}(a,A,B)).\n\t\\end{align*}\n\tThen, the elements $(s,(a,A,B))$ corresponding to the elements $(1,E), (-1,-E) \\in \\\\\n\t\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ under the mapping $h_{{}_{483}}$ are as follows.\n\t\\begin{align*}\n\t& (1,(1,E,E)), (1,(\\omega,{\\omega}^{-1}E,\\omega E)),(1,({\\omega}^{-1},\\omega E,{\\omega}^{-1}E)),\n\t\\\\\n\t& (-1,(-1,E,E)), (-1,(-\\omega,{\\omega}^{-1}E,\\omega E)),(-1,(-{\\omega}^{-1},\\omega E,{\\omega}^{-1}E)).\n\t\\end{align*}\n\tTherefore we have the required isomorphism\n\t\\begin{align*}\n\t (E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3), \n\t \\end{align*}\n\t where\n\t \\begin{align*}\n\t &\\Z_2=\\{(1,1,E,E), (-1,-1,E,E)\\},\n\t \\\\\n\t &\\Z_3=\\{(1,1,E,E), (1,\\omega,{\\omega}^{-1}E,\\omega E),(1,{\\omega}^{-1},\\omega E,{\\omega}^{-1}E)\\}.\n\t \\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ is connected from Theorem \\ref{theorem 4.8.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3)).\n\\end{align*}\n\n\\subsection{Case 9: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\sigma_3, \\nu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. 
\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\sigma_3$ and $\\nu_3$ commute, $\\tilde{\\sigma}_3$ and $\\tilde{\\nu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\sigma}_3\\tilde{\\nu}_3=\\tilde{\\nu}_3\\tilde{\\sigma}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$, we confirm that a useful lemma holds and prove a proposition needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.9.1}\n\tThe mapping $\\varphi_{{}_{E_6,\\nu_3}}:Sp(1) \\times S(U(1)\\times U(5)) \\to (E_6)^{\\nu_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.5}} satisfies the relational formulas \n\t\\begin{align*}\n\t\\sigma_3&=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(1,1,\\tau\\omega,\\omega,\\omega,\\tau\\omega)),\n\t\\\\\n\t\\nu_3&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1})),\n\t\\end{align*}\n\twhere ${\\omega}=-(1\/2)+(\\sqrt{3}\/2)i \\in U(1)$ and $\\nu=\\exp(2\\pi i\/9) \\in U(1)$.\n\\end{lemma}\n\\begin{proof}\n\tFrom Lemma \\ref{lemma 3.3.8} (1), these results are trivial. 
\n\\end{proof}\n\nThe $C$-linear transformation $\\sigma'_3$ defined in Case 5 is expressed by\n\\begin{align*}\n\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)),\n\\end{align*}\nand note that $\\delta_R=\\varphi_{{}_{E_6, \\nu_3}}(1,R)(=\\varphi_{{}_{E_6, \\gamma_3}}(1,R))$, where $\\delta_R$ is also defined in Case 5; moreover, needless to say, $\\sigma_3$ is conjugate to $\\sigma'_3$ under $\\delta_R=\\varphi_{{}_{E_6, \\nu_3}}(1,R)$.\n\n\\begin{proposition}\\label{proposition 4.9.2}\n\tThe group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$ is isomorphic to the group $(E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}${\\rm :} $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\cong (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{492}}: (E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\to (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}$ by\n\t\\begin{align*}\n\tg_{{}_{492}}(\\alpha)={\\delta_R}^{-1}\\alpha\\delta_R,\n\t\\end{align*}\n\twhere $\\delta_R$ is the same one as above. 
Since it is easy to verify that $\\delta_R\\nu_3=\\nu_3\\delta_R$ using $\\nu_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}))$ (Lemma \\ref{lemma 4.9.1}), we can prove this proposition as in the proof of Proposition \\ref{proposition 4.5.1}.\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$.\n\n\\begin{theorem}\\label{theorem 4.9.3}\n\tThe group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$ is isomorphic to the group $(Sp(1)\\times U(1) \\times U(1)\\allowbreak \\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)${\\rm :} $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,\\allowbreak 1,-E,E), (-1,-i,-i,1,-E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\subset S(U(1)\\times U(5))$ as in the proof of Theorem \\ref{theorem 4.7.3}. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}: Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\to (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+q\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, that is, $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q, P)=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$ (Theorem \\ref{theorem 3.3.5}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P) \\in (E_6)^{\\nu_3}$, and it follows from $\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$ that\n\t\\begin{align*}\n\t&\\quad {\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P)\\sigma'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))^{-1}\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega))\\varphi_{{}_{E_6,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)P\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)),P=\\diag(a,b,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(a, b, (\\tau\\omega E)P_1(\\omega E), (\\omega E) P_2 (\\tau\\omega E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P) \\in (E_6)^{\\sigma'_3}$. Thus $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, we easily see that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3} \\subset (E_6)^{\\nu_3}$. There exist $q \\in Sp(1)$ and $A \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$ (Theorem \\ref{theorem 3.3.5}). 
Moreover, from the condition $\\alpha \\in (E_6)^{\\sigma'_3}$, that is, ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$, and using ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=q \\\\\n\t\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=-q \\\\\n\t\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $q\\not=0$. As for the former case, a straightforward computation using the second condition shows that $A$ takes the form $\\diag(a, b, C, D), a,b \\in U(1),C, D \\in U(2), (ab)(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. Needless to say, $q \\in Sp(1)$.\n\tHence there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$. Namely, there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. 
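As a quick consistency check on the finite subgroups appearing in Theorem \ref{theorem 4.9.3}, the element $(-1,i,i,1,-E,E)$ listed there generates a cyclic group of order four. The sketch below (outside the proof) multiplies componentwise, representing $\pm E$ by the scalars $\pm 1$:

```python
# Sanity check (outside the proof): g = (-1, i, i, 1, -E, E) has order 4 in
# Sp(1) x U(1)^3 x SU(2)^2; every component is a scalar or a scalar multiple
# of the 2x2 identity, so componentwise multiplication suffices.
def mult(x, y):
    return tuple(a * b for a, b in zip(x, y))

E = 1  # stands for the 2x2 identity matrix
g = (-1, 1j, 1j, 1, -E, E)

powers = [(1, 1, 1, 1, E, E)]  # identity, g, g^2, g^3
for _ in range(3):
    powers.append(mult(powers[-1], g))

assert powers[2] == (1, -1, -1, 1, E, E)          # g^2
assert powers[3] == (-1, -1j, -1j, 1, -E, E)      # g^3
assert mult(powers[3], g) == (1, 1, 1, 1, E, E)   # g^4 = identity
print("g generates a cyclic group of order 4")
```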
In addition, from Proposition \\ref{proposition 4.9.2} we have the isomorphism $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. \n\t\n\tTherefore, as in the proof of Theorem \\ref{theorem 4.7.3}, we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\!\\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\\\\n\t&\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,1,-E,E), (-1,-i,-i,1,-E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$ is connected from Theorem \\ref{theorem 4.9.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((Sp(1)\\times U(1) \\times U(1)\\times U(1) \\times SU(2)\\times 
SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)).\n\\end{align*}\n\n\\subsection{Case 10: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\sigma_3, \\mu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\sigma_3$ and $\\mu_3$ commute, $\\tilde{\\sigma}_3$ and $\\tilde{\\mu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\sigma}_3\\tilde{\\mu}_3=\\tilde{\\mu}_3\\tilde{\\sigma}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$, we prove a proposition needed in the proof of the theorem below.\n\n\\begin{proposition}\\label{proposition 4.10.1}\n\tThe group $(E_6)^{\\sigma_3}$ is a subgroup of the group $(E_6)^\\sigma${\\rm: } $(E_6)^{\\sigma_3} \\subset (E_6)^\\sigma$.\n\\end{proposition}\n\\begin{proof}\n\tLet $\\alpha \\in (E_6)^{\\sigma_3}$. Then, from Theorem \\ref{theorem 3.3.4}, there exist $\\theta \\in U(1), D_a \\in Spin(2)$ and $\\beta \\in Spin(8)$ such that $\\alpha=\\phi_{{}_{6,\\sigma}}(\\theta) D_a \\beta$. 
Here, note that $(E_6)_{E_1} \\subset (E_6)^\\sigma$ (\\cite[Theorem 3.10.2]{iy0}), and so, since $Spin(8)$ is realized as the group $(E_6)_{E_1,F_1(1),F_1(e_1)} \\subset (E_6)_{E_1} \\subset (E_6)^\\sigma$, it follows that\n\t\\begin{align*}\n\t\t\t\\sigma\\alpha=\\sigma(\\phi_{{}_{6,\\sigma}}(\\theta) D_a \\beta)=\\phi_{{}_{6,\\sigma}}(\\theta)\\sigma D_a\\beta=\\phi_{{}_{6,\\sigma}}(\\theta)D_a \\sigma\\beta=(\\phi_{{}_{6,\\sigma}}(\\theta) D_a \\beta)\\sigma=\\alpha\\sigma.\n\t\\end{align*}\n\tHence we have that $\\alpha \\in (E_6)^\\sigma$, that is, $(E_6)^{\\sigma_3} \\subset (E_6)^\\sigma$.\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$.\n\n\\begin{theorem}\\label{theorem 4.10.2}\n\tThe group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ coincides with the group $(E_6)^{\\sigma_3}$, that is, the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(U(1)\\times Spin(2)\\times Spin(8))\/(\\Z_2 \\times \\Z_4),\\\\ \\Z_2=\\!\\{(1,1,1),(1,\\sigma,\\sigma) \\}, \\Z_4\\!=\\!\\{(1,1,1),(i,D_{e_1},\\phi_{{}_{6,\\sigma}}(-i)D_{-e_1}),(-1,\\allowbreak \\sigma,-1),(-i,D_{-e_1}, \\phi_{{}_{6,\\sigma}}(i) \\allowbreak D_{e_1}) \\}$. \n\\end{theorem}\n\\begin{proof}\n\tFrom Proposition \\ref{proposition 3.3.3} and Theorem \\ref{theorem 3.3.6}, we have that the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ coincides with the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\sigma}$: $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}=(E_6)^{\\sigma_3} \\cap (E_6)^{\\sigma}$. 
In addition, from Proposition \\ref{proposition 4.10.1} above, we have that \n\t\\begin{align*}\n\t (E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}=(E_6)^{\\sigma_3} \\cap (E_6)^{\\sigma}=(E_6)^{\\sigma_3}.\n\t\\end{align*}\n\tTherefore, by Theorem \\ref{theorem 3.3.4}, we have the required isomorphism \n\t\\begin{align*}\n\t \t\t (E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3} \\cong (U(1)\\times Spin(2)\\times Spin(8))\/(\\Z_2\\times \\Z_4).\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ is connected from Theorem \\ref{theorem 4.10.2}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1)\\times Spin(2)\\times Spin(8))\/(\\Z_2\\times \\Z_4)).\n\\end{align*}\n\n\\subsection{Case 11: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\sigma_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (2), since we can easily confirm that $\\sigma_3$ and $w_3$ commute, $\\tilde{\\sigma}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\sigma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\sigma}_3$.\n\nNow, we will determine the structure of the group $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.11.1}\n\tThe group $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3}$ is isomorphic to the group $(SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3${\\rm :} $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3, \\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(1)) \\subset SU(3)$. 
We define a mapping $\\varphi_{{}_{E_6,\\sigma_3,w_3}}: SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)) \\to (E_6)^{\\sigma_3}\\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q)(X_{C}+M)&=h(P,Q)X_{C}h(P,Q)^*+LM\\tau h(P,Q)^*, \n\t\t\t\\\\\n\t\t\t&\\hspace*{20mm} X_{C}+M \\in \\mathfrak{J}(3, \\C)^C \\oplus \n\t\t\tM(3,\\C)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, that is, $\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,\\allowbreak Q)=\\varphi_{{}_{E_6,w_3}}(L,P,Q)$ (Theorem \\ref{theorem 3.3.7}). \n\t\n\tWe will prove that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q) \\in (E_6)^{w_3}$, and it follows from $\\sigma_3=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$ (Lemma \\ref{lemma 3.3.8} (2)) that \n\t\\begin{align*}\n\t&\\quad {\\sigma_3}^{-1}\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q)\\sigma_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))^{-1}\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}), \\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}))\\varphi_{{}_{E_6,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})P\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})Q\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\n\t\\\\\n\t&\\hspace*{85mm}P=\\diag(a,b,c), 
Q=\\diag(s,t,v)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,P,Q)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q) \\in (E_6)^{\\sigma_3}$. Thus $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, we easily see that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is a homomorphism.\n\t\n\tNext we will prove that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\subset (E_6)^{w_3}$. There exist $L, A, B \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,A,B)$ (Theorem \\ref{theorem 3.3.7}). Moreover, from the condition $\\alpha \\in (E_6)^{\\sigma_3}$, that is, ${\\sigma_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\sigma_3=\\varphi_{{}_{E_6,w_3}}(L,A,B)$, and using \n\t\\begin{align*}\n\t\t\t\t&\\quad {\\sigma_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\sigma_3\n\t\t\t\t\\\\\n\t\t\t\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega}))\n\t\\end{align*}\n\t(Lemma \\ref{lemma 3.3.8} (2)) we have that \n\t\\begin{align*}\n\t&\\,\\,\\,{\\rm(i)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=L\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=A \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=B,\n\t\\end{array} \\right.\n\t\\qquad\n\t{\\rm(ii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}L\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}A \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=\\bm{\\omega}B,\n\t\\end{array} 
\\right.\n\t\\\\[2mm]\n\t&{\\rm(iii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}^{-1}L\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}^{-1}A \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=\\bm{\\omega}^{-1}B.\n\t\\end{array} \\right.\n\t\\end{align*}\n\tThe Cases (ii) and (iii) are impossible because $L\\not=0$. As for the Case (i), from the second and third conditions, it is easy to see that $A,B \\in S(U(1)\\times U(1) \\times U(1))$. Needless to say, $L \\in SU(3)$. \n\tHence there exist $L \\in SU(3)$ and $A,B \\in S(U(1)\\times U(1)\\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,A,B)$. Namely, there exist $L \\in SU(3)$ and $A,B \\in S(U(1)\\times U(1)\\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,A,B)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\sigma_3,w_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\sigma_3,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\} \\cong \\Z_3$.\n\tThus we have the isomorphism $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)))\/\\Z_3$. 
\n\t\n\tTherefore, by Lemma \\ref{lemma 4.3.1}, we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{w_3}$ is connected from Theorem \\ref{theorem 4.11.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3).\n\\end{align*}\n\n\\subsection{Case 12: $\\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\} \\times \\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\nu_3, \\mu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. 
\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\nu_3$ and $\\mu_3$ commute, $\\tilde{\\nu}_3$ and $\\tilde{\\mu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\nu}_3\\tilde{\\mu}_3=\\tilde{\\mu}_3\\tilde{\\nu}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$, we confirm that a useful lemma holds and prove a proposition needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.12.1}\n\tThe mapping $\\varphi_{{}_{E_6,\\nu_3}}:Sp(1) \\times S(U(1)\\times U(5)) \\to (E_6)^{\\nu_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.5}} satisfies the relational formulas \n\t\\begin{align*}\n\t\t\t\\nu_3&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1})),\n\t\t\t\\\\\n\t\t\t\\mu_3&=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\nu^{-2},\\nu^{2},\\nu^{-1},\\nu,\\nu^{-1},\\nu)),\n\t\\end{align*}\n\t where $\\nu=\\exp(2\\pi i\/9)\\in U(1)$.\n\\end{lemma}\n\\begin{proof}\n\t From Lemma \\ref{lemma 3.3.8} (1), these results are trivial. 
\n\\end{proof}\n\nIt goes without saying that $\\delta_Q=\\varphi_{{}_{E_6, \\nu_3}}(1,Q)(=\\varphi_{{}_{E_6, \\gamma_3}}(1,Q))$, where $\\delta_Q$ is defined in Case 7, and so from Lemma \\ref{lemma 3.3.8} (1) the $C$-linear transformation $\\mu'_3$ which is conjugate to $\\mu_3$ under $\\delta_Q \\in (E_6)^{\\nu_3}$ is also expressed by\n\\begin{align*}\n\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\nu^{-2},\\nu^{2},\\nu^{-1},\\nu^{-1},\\nu,\\nu)).\n\\end{align*}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.12.2}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3}${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\cong (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{4122}}: (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3} \\to (E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ by\n\t\\begin{align*}\n\tg_{{}_{4122}}(\\alpha)=\\delta_Q\\alpha{\\delta_Q}^{-1}.\t\n\t\\end{align*}\n\tSince it is easy to verify that $\\delta_Q\\nu_3=\\nu_3\\delta_Q$ using $\\nu_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}, \\allowbreak \\nu^{-1}))$ (Lemma \\ref{lemma 4.12.1}), we can prove this proposition as in the proof of Proposition \\ref{proposition 4.7.1}.\n\\end{proof}\n\\vspace{1mm}\n\nNow, we will determine the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$.\n\n\\begin{theorem}\\label{theorem 4.12.3}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(Sp(1)\\times U(1) \\times U(1)\\allowbreak \\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\Z_4=\\{(1,1,1,1,E,E), 
(1,-1,-1,1,E,E), (-1,i,i,\\allowbreak 1,-E,E), (-1,-i,-i,1,-E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\subset S(U(1)\\times U(5))$. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}: Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\to (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+q\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, that is, $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q, P)=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$ (Theorem \\ref{theorem 3.3.5}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P) \\in (E_6)^{\\nu_3}$, and it follows from $\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$ that\n\t\\begin{align*}\n\t&\\quad {\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P)\\mu'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))^{-1}\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}))\\varphi_{{}_{E_6,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1})P\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)),P=\\diag(a,b,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\nu^2 a \\nu^{-2}, {\\nu}^{-2} b \\nu^2, (\\nu E) P_1(\\nu^{-1}E), ({\\nu}^{-1}E) P_2 (\\nu E) 
))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P) \\in (E_6)^{\\mu'_3}$. Thus $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, we easily see that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3} \\subset (E_6)^{\\nu_3}$. There exist $q \\in Sp(1)$ and $A \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$ (Theorem \\ref{theorem 3.3.5}). Moreover, from the condition $\\alpha \\in (E_6)^{\\mu'_3}$, that is, ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$, and using ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=q \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A\\, \\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=-q \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $q\\not=0$. As for the former case, a straightforward computation using the second condition shows that $A$ takes the form $\\diag(a, b, C, D), a,b \\in U(1),C, D \\in U(2), (ab)(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. 
Needless to say, $q \\in Sp(1)$.\n\tHence there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$. Namely, there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. In addition, by Proposition \\ref{proposition 4.12.2} we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. \n\t\n\tTherefore, as in the proof of Theorem \\ref{theorem 4.7.3}, we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\!\\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\\\\n\t&\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,1,-E,E), (-1,-i,-i,1,-E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ is connected from Theorem \\ref{theorem 4.12.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)).\n\\end{align*}\n\n\\subsection{Case 13: $\\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, 
\\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\nu_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\nu_3$ and $w_3$ commute, $\\tilde{\\nu}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\nu}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\nu}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$, we confirm that a useful lemma holds, and we prove a proposition and a lemma needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.13.1}\n\tThe mapping $\\varphi_{{}_{E_6,\\nu_3}}:Sp(1) \\times S(U(1)\\times U(5)) \\to (E_6)^{\\nu_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.5}} satisfies the relational formula \n\t\\begin{align*}\n\tw_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\tau\\omega,\\omega,\\tau\\omega,\\omega,\\tau\\omega,\\omega)),\n\t\\end{align*}\n\twhere ${\\omega}=-(1\/2)+(\\sqrt{3}\/2)i \\in U(1)$.\n\\end{lemma}\n\\begin{proof}\n\tFrom Lemma \\ref{lemma 3.3.8} (1), this result is trivial. 
\n\\end{proof}\n\nThe $C$-linear transformation $w'_3$ defined in Case 8 is expressed by\n\\begin{align*}\nw'_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)),\n\\end{align*}\nand note that $\\delta_N=\\varphi_{{}_{E_6, \\nu_3}}(1,N)(=\\varphi_{{}_{E_6, \\gamma_3}}(1,N))$, where $\\delta_N$ is also defined in Case 8; needless to say, $w_3$ is conjugate to $w'_3$ under $\\delta_N=\\varphi_{{}_{E_6, \\nu_3}}(1,N)$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.13.2}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(E_6)^{\\nu_3} \\cap (E_6)^{w'_3}${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (E_6)^{\\nu_3} \\cap (E_6)^{w'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{4132}}: (E_6)^{\\nu_3} \\cap (E_6)^{w'_3} \\to (E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\tg_{{}_{4132}}(\\alpha)=\\delta_N\\alpha{\\delta_N}^{-1},\t\n\t\\end{align*}\n\twhere $\\delta_N$ is the same one as above. 
Since it is easy to verify that $\\delta_N \\nu_3=\\nu_3\\delta_N$ using $\\nu_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}))$ (Lemma \\ref{lemma 4.9.1}) and $w_3\\delta_N=\\delta_N w'_3$ (Lemma \\ref{lemma 4.13.1}), we can prove this proposition as in the proof of Proposition \\ref{proposition 4.8.1}.\n\\end{proof}\n\nSubsequently, we will prove the following lemma.\n\n\\begin{lemma}\\label{lemma 4.13.3}\n\tThe group $S(U(1)\\times U(2)\\times U(3))$ is isomorphic to the group $(U(1)\\times U(1)\\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_3)${\\rm :} $S(U(1)\\times U(2)\\times U(3)) \\cong (U(1)\\times U(1)\\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_3), \\Z_2\\!=\\{(1,1,E,E),(-1,1,-E,E) \\},\\Z_3\\!=\\!\\{(1,1,E,E),(1,\\omega,E,{\\omega}^{-1}E),(1,{\\omega}^{-1},E,\\omega E) \\}$.\t\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{4133}}:U(1) \\times U(1)\\times SU(2)\\times SU(3) \\to S(U(1)\\times U(2)\\times U(3))$ by\n\t\\begin{align*}\n\tf_{{}_{4133}}(a,b,A,B)=\\left( \n\t\\begin{array}{ccc}\n\ta^{-2}b^{-3} & & {\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\\\\[2mm]\n\t& a\\mbox{\\large {$A$}} & \n\t\\\\[2mm]\n\t{\\raisebox{1pt}[0pt]{\\large $0$}}&& b\\mbox{\\large {$B$}}\n\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{4133}}$ is well-defined and a homomorphism. \n\t\n\tWe will prove that $f_{{}_{4133}}$ is surjective. Let $P \\in S(U(1)\\times U(2)\\times U(3))$. Then $P$ takes the form $\\diag(s,P_1,P_2),s \\in U(1),P_1 \\in U(2), P_2 \\in U(3),s(\\det\\,P_1)(\\det\\,P_2)=1$. Here, since $P_1 \\in U(2), P_2 \\in U(3)$, we see that $\\det\\,P_1, \\det\\,P_2 \\in U(1)$. We choose $a,b \\in U(1)$ such that $a^2=\\det\\,P_1, b^3=\\det\\,P_2$, respectively, and set $A=(1\/a)P_1, B=(1\/b)P_2$. Then we have that $ A \\in SU(2), B \\in SU(3)$, and since $s(\\det\\,P_1)(\\det\\,P_2)=1$ gives $s=a^{-2}b^{-3}$, we obtain $f_{{}_{4133}}(a,b,A,B)=\\diag(s,P_1,P_2)=P$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{4133}}$. 
It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{4133}}&=\\{(a,b,A,B)\\in U(1)\\times U(1)\\times SU(2) \\times SU(3) \\,|\\,f_{{}_{4133}}(a,b,A,B)=E \\}\n\t\\\\\n\t&=\\{(a,b,A,B)\\in U(1)\\times U(1)\\times SU(2) \\times SU(3)\\,|\\,a^2b^3=1,aA=bB=E \\}\n\t\\\\\n\t&=\\{(a,b,a^{-1}E,b^{-1}E)\\in U(1)\\times U(1)\\times SU(2) \\times SU(3) \\,|\\,a^2=b^3=1 \\}\n\t\\\\\n\t&=\\{(1,1,E,E), (1,\\omega,E,{\\omega}^{-1}E),(1,{\\omega}^{-1},E,\\omega E), \n\t\\\\\n\t&\\hspace*{20mm}(-1,1,-E,E), (-1,\\omega,-E,{\\omega}^{-1}E),(-1,{\\omega}^{-1},-E,\\omega E)\\}\n\t\\\\\n\t&=\\{(1,1,E,E), (-1,1,-E,E) \\} \\times \\{(1,1,E,E), (1,\\omega,E,{\\omega}^{-1}E),(1,{\\omega}^{-1},E,\\omega E) \\}\n\t\\\\\n\t& \\cong \\Z_2 \\times \\Z_3.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\tS(U(1)\\times U(2)\\times U(3)) \\cong (U(1) \\times U(1)\\times SU(2)\\times SU(3))\/(\\Z_2\\times\\Z_3).\n\t\\end{align*}\n\\end{proof} \n\nNow, we will determine the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.13.4}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(Sp(1)\\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2 \\times \\Z_2 \\times \\Z_3)${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (Sp(1)\\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2 \\times \\Z_2 \\times \\Z_3), \\Z_2=\\{(1,1,1,E,E), (1,-1,1,-E,E)\\},\\Z_2=\\{(1,1,1,E,E), (-1,-1,-1,E,E)\\},\n\t\\Z_3=\\{(1,\\allowbreak 1,1,E,E), (1,1,\\omega,E,{\\omega}^{-1}E),(1,1,{\\omega}^{-1},E,\\omega E)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(2)\\times U(3)) \\subset S(U(1) \\times U(5))$. 
\n\tWe define a mapping $\\varphi_{{}_{E_6,\\nu_3,w'_3}}: Sp(1)\\times S(U(1)\\times U(2)\\times U(3)) \\to (E_6)^{\\nu_3} \\cap (E_6)^{w'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+q\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, that is, $\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q, P)=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$ (Theorem \\ref{theorem 3.3.5}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P) \\in (E_6)^{\\nu_3}$, and it follows from $w'_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$ that\n\t\\begin{align*}\n\t&\\quad {w'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P) w'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))^{-1}\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega))\\varphi_{{}_{E_6,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)P\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)),P=\\diag(s,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\omega s (\\tau\\omega),(\\omega E)P_1(\\tau\\omega E), (\\tau\\omega E) P_2 (\\omega E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P).\n\t\\end{align*}\n\tHence we have that 
$\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P) \\in (E_6)^{w'_3}$. Thus $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, we easily see that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\nu_3} \\cap (E_6)^{w'_3} \\subset (E_6)^{\\nu_3}$. There exist $q \\in Sp(1)$ and $A \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$ (Theorem \\ref{theorem 3.3.5}). Moreover, from the condition $\\alpha \\in (E_6)^{w'_3}$, that is, ${w'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)w'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$, and using ${w'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)w'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=q \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=-q \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $q\\not=0$. As for the former case, from the second condition, a straightforward computation shows that $A$ takes the form $\\diag(s,C, D), C \\in U(2),D \\in U(3), s(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(2)\\times U(3))$. Needless to say, $q \\in Sp(1)$.\n\tHence there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(2)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$. 
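As an illustrative aside (not part of the proof), the straightforward computation just mentioned can be checked numerically. The sketch below takes $\\tau\\omega$ to be the complex conjugate of $\\omega=\\exp(2\\pi i\/3)$, consistent with $\\omega\\cdot\\tau\\omega=1$ as used in the computation for $w'_3$ above, and verifies that a matrix of the form $\\diag(s,C,D)$ satisfies the fixed-point condition while a matrix coupling the two diagonal blocks does not:

```python
import numpy as np

# omega = exp(2*pi*i/3); tau(omega) is taken as its complex conjugate,
# so that omega * tau(omega) = 1 (an assumption consistent with the
# cancellation used in the computation above).
omega = np.exp(2j * np.pi / 3)
DL = np.diag([omega] * 3 + [omega.conjugate()] * 3)   # diag(w,w,w,tw,tw,tw)
DR = np.diag([omega.conjugate()] * 3 + [omega] * 3)   # diag(tw,tw,tw,w,w,w)

def is_fixed(A):
    """Check the condition diag(w,w,w,tw,tw,tw) A diag(tw,tw,tw,w,w,w) = A."""
    return np.allclose(DL @ A @ DR, A)

rng = np.random.default_rng(0)

def random_unitary(n):
    """Random unitary matrix via QR decomposition of a complex Gaussian."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.linalg.qr(z)[0]

# A = diag(s, C, D) with s in U(1), C in U(2), D in U(3) is fixed.
C, D = random_unitary(2), random_unitary(3)
s = 1.0 / (np.linalg.det(C) * np.linalg.det(D))   # |s| = 1, so det A = 1
A = np.zeros((6, 6), dtype=complex)
A[0, 0] = s
A[1:3, 1:3] = C
A[3:6, 3:6] = D

# An entry coupling the two eigenspaces of DL breaks the condition.
B = A.copy()
B[2, 3] = 0.5
```

Combined with the constraint $A \\in S(U(1)\\times U(5))$ from Theorem \\ref{theorem 3.3.5}, this block structure is exactly the form $\\diag(s,C,D)$ asserted above.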
Namely, there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(2)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,w'_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,w'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{w'_3} \\cong (Sp(1)\\times S(U(1)\\times U(2)\\times U(3)))\/\\Z_2$. In addition, by Proposition \\ref{proposition 4.13.2} we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (Sp(1)\\times S(U(1)\\times U(2)\\times U(3)))\/\\Z_2$. Here, using the mapping $f_{{}_{4133}}$ in the proof of Lemma \\ref{lemma 4.13.3}, we define a homomorphism $h_{{}_{4134}}:Sp(1)\\times (U(1)\\times U(1)\\times SU(2)\\times SU(3)) \\to Sp(1)\\times S(U(1)\\times U(2)\\times U(3))$ by\n\t\\begin{align*}\n\th_{{}_{4134}}(q,(a,b,A,B))=(q,f_{{}_{4133}}(a,b,A,B)).\n\t\\end{align*}\n\tThen the elements $(q,(a,b,A,B))$ corresponding to the elements \n\t $(1,E), (-1,-E) \\in \\Ker\\,\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ under the mapping $h_{{}_{4134}}$ are as follows.\n\t\\begin{align*}\n\t& (1,1,1,E,E), (1,1,\\omega,E,{\\omega}^{-1}E),(1,1,{\\omega}^{-1},E,\\omega E), (1,-1,1,-E,E),\n\t\\\\\n\t& (1,-1,\\omega,-E,{\\omega}^{-1}E),(1,-1,{\\omega}^{-1},-E,\\omega E),\n\t\\\\\n\t& (-1,1,-1,-E,E), (-1,1,-\\omega,-E,{\\omega}^{-1}E),(-1,1,-{\\omega}^{-1},-E,\\omega E), (-1,-1,-1,E,E),\n\t\\\\\n\t& (-1,-1,-\\omega,E,{\\omega}^{-1}E),(-1,-1,-{\\omega}^{-1},E,\\omega E).\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism\n\t\\begin{align*}\n\t(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (Sp(1) \\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_2\\times \\Z_3), \n\t\\end{align*}\n\twhere\n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,E,E), (1,-1,1,-E,E)\\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E), 
(-1,-1,-1,E,E)\\},\n\t\\\\\n\t&\\Z_3=\\{(1,1,1,E,E), (1,1,\\omega,E,{\\omega}^{-1}E),(1,1,{\\omega}^{-1},E,\\omega E)\\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ is connected by Theorem \\ref{theorem 4.13.4}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((Sp(1) \\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_2\\times \\Z_3)).\n\\end{align*}\n\n\\subsection{Case 14: $\\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\mu_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (2), since we can easily confirm that $\\mu_3$ and $w_3$ commute, $\\tilde{\\mu}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\mu}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\mu}_3$.\n\nNow, we will determine the structure of the group $(E_6)^{\\mu_3}\\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.14.1}\n\tThe group $(E_6)^{\\mu_3}\\cap (E_6)^{w_3}$ is isomorphic to the group $(SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3${\\rm :} $(E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3, \\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(1)) \\subset SU(3)$. 
We define a mapping $\\varphi_{{}_{E_6,\\mu_3,w_3}}: SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)) \\to (E_6)^{\\mu_3}\\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)(X_{C}+M)&=h(P,Q)X_{C}h(P,Q)^*+LM\\tau h(P,Q)^*, \n\t\\\\\n\t&\\hspace*{20mm} X_{C}+M \\in \\mathfrak{J}(3, \\C)^C \\oplus \n\tM(3,\\C)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, that is, $\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,\\allowbreak Q)\\allowbreak=\\varphi_{{}_{E_6,w_3}}(L,P,Q)$ (Theorem \\ref{theorem 3.3.7}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q) \\in (E_6)^{w_3}$, and it follows from $\\mu_3=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))$ (Lemma \\ref{lemma 3.3.8} (2)) that \n\t\\begin{align*}\n\t&\\quad {\\mu_3}^{-1}\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)\\mu_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))^{-1}\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}), \\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}))\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), 
\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})P\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}),\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})Q \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})),\n\t\\\\\n\t&\\hspace*{90mm}P=\\diag(a,b,c), Q=\\diag(s,t,v)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,P,Q)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q) \\in (E_6)^{\\mu_3}$. Thus $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, we easily see that $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is a homomorphism.\n\t\n\tNext we will prove that $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\subset (E_6)^{w_3}$. There exist $L, A, B \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,A,B)$ (Theorem \\ref{theorem 3.3.7}). 
Moreover, from the condition $\\alpha \\in (E_6)^{\\mu_3}$, that is, ${\\mu_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\mu_3=\\varphi_{{}_{E_6,w_3}}(L,A,B)$, and using \n\t\\begin{align*}\n\t&\\quad {\\mu_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\mu_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}),\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))\n\t\\end{align*}\n\t(Lemma \\ref{lemma 3.3.8} (2)) we have that \n\t\\begin{align*}\n\t&\\,\\,\\,{\\rm(i)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=L\n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})=A \n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})=B,\n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(ii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}L\n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})=\\bm{\\omega}A \n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})=\\bm{\\omega}B,\n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(iii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}^{-1}L\n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})=\\bm{\\omega}^{-1}A \n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B 
\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})=\\bm{\\omega}^{-1}B.\n\t\\end{array} \\right.\n\t\\end{align*}\n\tCases (ii) and (iii) are impossible because $L\\not=0$. As for Case (i), from the second and third conditions, it is easy to see that $A,B \\in S(U(1)\\times U(1) \\times U(1))$. Needless to say, $L \\in SU(3)$. Hence there exist $L \\in SU(3)$ and $P,Q \\in S(U(1)\\times U(1) \\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,P,Q)$. Namely, there exist $L \\in SU(3)$ and $P,Q \\in S(U(1)\\times U(1) \\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,\\mu_3, w_3}}(L,P,Q)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\mu_3,w_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\mu_3,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\} \\cong \\Z_3$.\n\tThus we have the isomorphism $(E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)))\/\\Z_3$. 
\n\t\n\tTherefore, by Lemma \\ref{lemma 4.3.1} we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{proof}\n\nThus, since the group $(E_6)^{\\mu_3} \\cap (E_6)^{w_3}$ is connected by Theorem \\ref{theorem 4.14.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3).\n\\end{align*}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nUpcoming 21~cm surveys are poised to make a first detection of redshifted\n21~cm fluctuations from the EoR within the next several years\n\\citep{DeBoer:2016tnn}. These measurements will provide a direct probe of the\ndistribution of neutral hydrogen in the IGM, revealing the spatial structure\nof the reionization process, and its redshift evolution. Along with these\nmeasurements, several other ``line-intensity'' mapping surveys are planned to\nmap out large-scale structure in the galaxy distribution using convenient\nemission lines with current targets including [C~\\textsc{ii}], CO, Ly-$\\alpha$, and\nH-$\\alpha$ \\citep[see e.g.][and references therein]{kovetz2017:im_review}.\nThese surveys study the spatial fluctuations in the collective emission from\nmany individually unresolved sources (e.g.\n\\citealt{Suginohara:1998ti,Righi:2008br,Visbal10}). These measurements should\nnicely complement 21~cm observations (e.g. 
\\citealt{Lidz11,gong11:probing}):\nwhile the 21~cm fluctuations trace-out remaining neutral hydrogen residing\nmostly in the low-density IGM, the galactic emission lines track the galaxies\nthemselves, which presumably lie within ``bubbles'' of mostly ionized hydrogen\n\\citep{Lidz:2008ry}.\n\nIn fact, recent work has led to detections in various lines at low redshift\n\\citep{2010Natur.466..463C,2001defi.conf..241D,Keating:2016pka,2018MNRAS.478.1911P,2016MNRAS.457.3541C,Croft:2018rwv},\nbolstering efforts to employ the line-intensity mapping technique at earlier\ntimes during the EoR. It is hence timely to explore the scientific benefits of\ncombining 21~cm observations of the EoR with line-intensity mapping surveys in\nother emission lines.\n\nHere we consider, for the first time, one potential advantage of combining\n21~cm surveys of the EoR with line-intensity mapping surveys in {\\em two\nadditional lines.} Specifically, we show that the linear bias factor of the\n21~cm field may be extracted solely from cross-power spectra between the 21~cm\nfluctuations and those in each of two separate lines. This can provide an\nimportant cross-check on inferences from the 21~cm auto-power spectrum since\ncross-power spectra should be less prone to bias from residual foregrounds\n\\citep[e.g.][]{Furlanetto:2006pg,Lidz:2008ry}; only shared foregrounds\ncontribute to the average cross spectrum signal. \n\nThe foreground problem is especially daunting in the case of redshifted\n21 cm surveys, where the expected foreground-to-signal strength is on the\norder of $\\sim 10^5$\n\\citep[e.g.][]{2009A&A...500..965B,2013ApJ...768L..36P,Dillon:2013rfa}. The\nbasic strategy for extracting the signal is to exploit the fact that the\nforegrounds should be smooth functions of frequency, while the reionization\nsignal has a great deal of spectral structure. 
In practice, this is\nchallenging because the instrument, calibration errors, and other effects may\nimprint artificial spectral variations. Cross-spectrum measurements should be\nless sensitive to such systematic effects and can therefore help confirm early\ndetections. For instance, \\citet{2015JCAP...03..034V} show that cross-spectra\ncan be robustly measured even in the presence of polarized synchrotron\nforegrounds; this is a troublesome case for auto-spectrum analyses because\nFaraday rotation leads to frequency structure.\n\nThe amplitude of the 21~cm power spectrum evolves with redshift in a\ndistinctive way as reionization proceeds \\citep[e.g.][]{Lidz08}, and recent\nwork has demonstrated that linear biasing describes the large-scale 21~cm\npower spectrum rather well\n\\citep{McQuinn:2018zwa,Hoffmann:2018clb,2018ApJ...867...26B}. Therefore, if\nour three-field method may be employed over a range of redshifts, it can be\nused to extract key and robust information regarding the reionization history\nof the Universe.\n\nIn recent related work we showed that the large-scale 21~cm bias factor may be\nrecovered using suitable cross-bispectra between the 21~cm fluctuations and\nthe [C~\\textsc{ii}] emission field \\citep{2018ApJ...867...26B}. While the\ncross-bispectra method requires only the 21~cm fluctuations and one additional\ntracer field, the technique we propose here should be vastly simpler to\nimplement in practice (provided two additional tracers are available with\ncommon sky and redshift coverage). This is the case because our present method\nrelies only on two-point statistics, and it therefore avoids practical\ndifficulties in carrying out cross-bispectrum analyses. For example, it is\nchallenging to estimate the bispectrum covariance as this involves computing a\nsix-point function. In addition, we will show that our present technique\nallows for a more faithful extraction of the 21~cm bias factor. 
Ultimately,\nboth analyses may be carried out for additional cross-checks.\n\nThere are a broad range of possible lines that may be combined with the 21~cm\nsurveys. Currently, there are projects -- either ramping-up or in the planning\nstages -- to perform EoR-era line-intensity surveys in: [C~\\textsc{ii}]~$158\\,\\mu\n\\text{m}$ \\citep{Crites14,Lagache:2018hmk,Vavagiakis:2018gen}, rotational\ntransitions from CO molecules \\citep{Chung:2017uot}, Ly-$\\alpha$\n\\citep{Dore16}, and H-$\\alpha$ \\citep{Cooray:2016hro}. Additional\nfine-structure lines such as [O~\\textsc{iii}]~$88\\,\\mu \\text{m}$ \\citep{Moriwaki18} and\n[N \\textsc{ii}]~$122\\,\\mu \\text{m}$ \\citep{Serra:2016jzs} may also be suitable\n--- in some cases, these lines will land in the proposed frequency bands of\nthe planned [C~\\textsc{ii}] surveys. The [O~\\textsc{iii}]~$88\\,\\mu \\text{m}$ line appears\nespecially promising since targeted ALMA observations around $z \\sim 7-9$\ngalaxies have found that this line is {\\em brighter} at high redshift than\nexpected based on local correlations between line-luminosity and\nstar-formation rate \\citep[e.g.][and references therein]{Moriwaki18}.\n\nIn principle, one could extract the 21~cm bias using the cross-spectrum with a\ntraditional galaxy survey, in which case the galaxy bias may be measured\nrobustly from the auto-power spectrum. In practice, this is extremely\nchallenging because one needs {\\em spectroscopic redshifts} for the galaxy\nsurvey over a huge sky area at $z \\sim 8$. If only photometric redshifts are\navailable, then one only accesses long-wavelength line-of-sight modes (with\nsmall or vanishing line-of-sight wavenumbers) in the galaxy survey but\nprecisely these modes are lost to foreground cleaning\/avoidance in the 21~cm\nsurveys (e.g. \\citealt{Lidz:2008ry}). 
Fortunately, multi-line intensity\nmapping provides a promising way forward here and our approach avoids\nmeasuring bias factors from auto-spectra.\n\nIn Section~\\ref{sec:approach}, we describe our three cross-spectra approach in\ndetail. In Section~\\ref{sec:simulations} we briefly discuss the radiative\ntransfer simulations of reionization \\citep{2007MNRAS.377.1043M,Lidz08} used\nin our analysis, the reionization model assumed, and our method for generating\nmock line-intensity mapping data cubes. We then quantify the accuracy of our\ntechnique in Section~\\ref{sec:results}. The survey specifications required to\nextract bias factors with this method are discussed briefly in\nSection~\\ref{sec:detectability}. We conclude in Section~\\ref{sec:conclusions}.\nWe assume a $\\Lambda$CDM cosmology, parameterized by $(\\Omega_m,\n\\Omega_{\\Lambda}, \\Omega_b, h, \\sigma_8, n_s) = (0.27, 0.73, 0.046, 0.7, 0.8,\n1)$ as in the simulations used in this work \\citep{McQuinn:2007dy}. While\nthese parameters differ slightly from presently favored values (e.g.\n\\citealt{2018arXiv180706209P}), this should not impact our conclusions.\n\n\\section{Approach}\\label{sec:approach}\nHere we define terms and describe our three cross-spectra approach. Ignoring\nredshift-space distortions and spin-temperature fluctuations, the 21~cm\nbrightness temperature contrast between neutral hydrogen gas and the cosmic\nmicrowave background is:\n\\begin{equation}\\label{eq:brightness_temp}\nT_{21}(\\bm{x}) = T_0 X_{\\text{HI}}(\\bm{x})[1+\\delta_\\rho(\\bm{x})]\\text{.}\n\\end{equation}\nHere $T_0 = 28\\,\\text{mK}[(1+z)\/10]^{1\/2}$ \\citep[e.g.][]{Zaldarriaga:2003du},\n$X_{\\text{HI}}(\\bm{x})$ is the neutral hydrogen fraction at position $\\bm{x}$, and\n$\\delta_\\rho(\\bm{x})$ is the gas density contrast, which is assumed to follow\nthe overall matter density field on the large scales of interest. 
Although\nionized regions imprint large-scale fluctuations in the 21~cm field, on scales\nmuch larger than the size of the ionized regions, the 21~cm fluctuations\nshould nevertheless follow a linear biasing relation\n\\begin{equation}\\label{eq:21cm_bias}\nT_{21}(\\bm{k}) = \\pm \\avg{T_{21}} b_{21} \\delta_{\\text{lin}}(\\bm{k})\\text{,}\n\\end{equation}\nwhere the $\\pm$ indicates that the fields are either correlated ($+$) or\nanti-correlated ($-$) --- during the bulk of the EoR, the 21~cm and density\nfields are anti-correlated on large scales in most models\n\\citep[e.g.][]{Lidz:2008ry}. Here $T_{21}(\\bm{k})$ is the Fourier transform of\nthe brightness temperature field (Equation~\\ref{eq:brightness_temp}) and\n$\\delta_{\\text{lin}}(\\bm{k})$ is the Fourier transform of the linear density\ncontrast.\\footnote{Our Fourier convention is: $T_{21}(\\bm{k}) = \\int\n\\text{d}^3x\\, T_{21}(\\bm{x}) e^{i \\bm{k} \\cdot \\bm{x}}$ and $T_{21}(\\bm{x}) =\n\\int \\frac{\\text{d}^3k}{(2\\pi)^3}\\, T_{21}(\\bm{k}) e^{-i \\bm{k} \\cdot\n\\bm{x}}$.} The quantity $b_{21}$ is the dimensionless, and scale-independent,\nlinear bias factor of the 21~cm fluctuation contrast, $\\delta_{21}(\\bm{x}) =\n\\left(T_{21}(\\bm{x}) - \\avg{T_{21}}\\right)\/\\avg{T_{21}}$, while the\n$\\avg{T_{21}}$ factor reverts to brightness temperature units (since the\naverage brightness temperature is not itself observable from interferometric\nmeasurements.) In this work when we refer to the ``bias'' we mean\n$\\avg{T_{21}}b_{21}$ (and likewise for the intensity mapping surveys.)\n\nLikewise, we can consider additional tracer lines, such as [C~\\textsc{ii}]. 
On large\nscales, the Fourier transform of the specific intensity of each of these lines\nshould be well-described by\n\\begin{equation}\\label{eq:linear_biasing}\nI_{i}(\\bm{k}) = \\avg{I_{i}} b_{i} \\delta_{\\text{lin}}(\\bm{k})\\text{,}\n\\end{equation}\nwhere $\\avg{I_{i}}$ is the mean specific intensity of the emission\nline.\\footnote{We follow standard conventions in expressing 21~cm fluctuations\nin brightness temperature units, i.e. in $\\text{mK}$, while we use specific\nintensity units for the other tracer lines, i.e. $I_{i}$ is the specific\nintensity in $\\text{Jy\/str}$.} For the case of emission lines sourced by gas\nwithin galaxies, the relevant bias factor is the luminosity-weighted bias of\nthe line-emitting host halos (e.g. \\citealt{Lidz11}). To be completely general\nwe should also include a $\\pm$ here (as in Equation~\\ref{eq:21cm_bias}), but\nfor the galactic emission lines we generally expect brighter line emission in\noverdense regions.\n\nOn sufficiently large scales, the auto-power spectrum of the fluctuations in\neach tracer line (Equation~\\ref{eq:linear_biasing}) will be\n\\begin{equation}\\label{eq:bias_ps}\n\\begin{split}\nP_{i, i}(k, z) &\\equiv \\avg{I_{i}(k, z) I_{i}^{*}(k, z)} \\\\\n&= \\left[\\avg{I_{i}}(z) b_{i}(z)\\right]^2 P_{\\text{lin}}(k, z)\\text{,}\n\\end{split}\n\\end{equation}\nwhere $P_{\\text{lin}}(k,z)$ is the linear matter power spectrum. Similarly, on\nlarge scales the 21~cm auto-power spectrum should follow $P_{21,21}(k,z) =\n\\left[\\avg{T_{21}}(z) b_{21}(z)\\right]^2 P_{\\text{lin}}(k,z)$. In principle,\none can infer the bias factors $\\avg{I_i} b_i$ and $\\avg{T_{21}} b_{21}$ from\nauto-power spectrum measurements (assuming a model for the linear power\nspectrum). 
However, foreground cleaning\/avoidance present significant\nchallenges here\n\\citep[e.g.][]{2012MNRAS.419.3491L,2013ApJ...769..154M,2015ApJ...804...14T,2016ApJ...819....8P,Ewall-Wice:2016bhu}\nand residual foregrounds may bias such inferences.\n\nAnother approach is to measure the cross-power spectrum between two lines $i$\nand $j$. In this case, one measures\n\\begin{equation}\\label{eq:xps}\nP_{i,j} = r_{i, j} \\avg{I_{i}} \\avg{I_{j}} b_{i} b_{j} P_{\\text{lin}}\\text{,}\n\\end{equation}\nwhere $r_{i,j}$ is the cross-correlation coefficient which ranges from $-1$ to\n$1$.\\footnote{Note that here we adopt the convention that the bias factors are\nalways positive and that the sign of the cross-spectrum is determined solely\nby that of the correlation coefficient. This convention differs from our\nprevious work \\citep{2018ApJ...867...26B}.} In the above equation and in what\nfollows, we generally suppress redshift and wavenumber labels for brevity. In\ngeneral, $r_{i,j}$ is scale-dependent, but asymptotes to $-1$ (for\nanticorrelated fields) or $1$ (for correlated fields) on large\nscales.\\footnote{Note that we neglect shot-noise contributions to the\nauto-spectrum in Equation~\\ref{eq:bias_ps}, as well as correlated shot-noise\nterms in the cross-power spectrum. This should be a very good approximation on\nthe scales of interest unless the line-emitting sources are quite rare (e.g.\n\\citealt{lidz2016:remove_interloper}). Even in the case of rare sources, the\nshot-noise term should be a white-noise contribution on scales much larger\nthan the size of the host halos. 
In this case, one can perform a joint fit for\nthe shot-noise along with the clustering terms.} If one of the lines is the\n21~cm field, we replace $\\avg{I_i}$ with $\\avg{T_{21}}$ in\nEquation~\\ref{eq:xps}.\n\nHowever, in the presence of a third line $k$, and with $P_{j,k}$ and $P_{k,i}$\ndefined analogously as in Equation~\\ref{eq:xps}, we can simply write\n\\begin{equation}\\label{eq:threefields}\n\\begin{split}\nP_{i,i}=(\\avg{I_i} b_i)^2 P_{\\text{lin}} &= \\frac{r_{j,k}}{r_{i,j} r_{k,i}} \\frac{P_{i,j} P_{k, i}}{P_{j,k}} \\\\\n&\\equiv R_{i,j,k} P_{i,j,k}\\text{,}\n\\end{split}\n\\end{equation}\nwhere we have defined $R_{i,j,k} \\equiv r_{j,k}\/(r_{i,j} r_{k,i})$ and\n$P_{i,j,k} \\equiv (P_{i,j}P_{k,i})\/P_{j,k}$. On sufficiently large scales,\n$R_{i,j,k} \\rightarrow 1$, but on intermediate scales $R_{i,j,k} > 1$ for most\nreasonable cases when the various $r$'s are close in magnitude.\nEquation~\\ref{eq:threefields} shows that (on sufficiently large scales where\nlinear biasing holds and $R_{i,j,k} \\sim 1$) we can recover the linear bias\nfactor of field $i$ from a suitable ratio of cross-spectra. Here we suppose\nthat the underlying density power spectrum is well known.\nEquation~\\ref{eq:threefields} is the main point of this paper; in the\nremainder of this work we consider an application to the EoR and quantify its\naccuracy. Specifically, we will test the range of validity -- in spatial scale\nand redshift\/ionization fraction -- of the assumption that $R_{i,j,k}=1$,\nalong with the linear biasing approximations of\nEquations~\\ref{eq:21cm_bias}~\\&~\\ref{eq:linear_biasing}. Note that testing the\nassumption that $R_{i,j,k} = 1$ directly from upcoming data will require\nreliable auto-spectra.\n\nWe turn now to the specific case of EoR surveys with the goal of extracting\nthe 21~cm bias factor using only cross-power spectra. 
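The algebra behind Equation~\ref{eq:threefields} is easy to verify numerically. The sketch below (not part of any survey pipeline; the amplitudes $\avg{I_i} b_i$ are purely illustrative) draws Gaussian density modes, constructs three linearly biased tracers, and recovers the auto-spectrum of the first field from the ratio of three cross-spectra:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian linear density modes with unit power, and three linearly
# biased tracers.  The amplitudes <I_i> b_i are illustrative only; the
# negative value folds the sign of the anti-correlated 21 cm field into
# the prefactor, following the convention adopted around Eq. (eq:xps).
delta = rng.normal(size=100_000)
amp = {"21": -7.0, "CII": 3.0, "OIII": 2.0}
field = {name: a * delta for name, a in amp.items()}

def cross(a, b):
    """Cross-power <I_a I_b*>, estimated by averaging over modes."""
    return np.mean(field[a] * field[b])

# The three cross-spectra combination of Eq. (eq:threefields):
P21_est = cross("21", "CII") * cross("21", "OIII") / cross("CII", "OIII")
P21_true = cross("21", "21")
print(round(P21_est / P21_true, 6))   # -> 1.0 (exact for pure linear biasing)
```

With purely linear biasing the combination is an identity; the interesting question, addressed below with the simulations, is how quickly it degrades once the fields decorrelate.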
For further specificity\nwe suppose that the two additional tracer lines are [C~\\textsc{ii}] and [O~\\textsc{iii}],\nalthough little of the analysis that follows depends on the choice of these\ntwo lines --- any of the lines mentioned in Section~\\ref{sec:intro} can be\nused instead of [C~\\textsc{ii}] or [O~\\textsc{iii}]. In this case,\nEquation~\\ref{eq:threefields} may be applied as\n\\begin{equation}\\label{eq:threefields_specific}\n\\begin{split}\nP_{21,21} &= (\\avg{T_{21}} b_{21})^2 P_{\\text{lin}}\\\\\n&= \\frac{P_{21,\\text{C~\\textsc{ii}}} P_{\\text{O~\\textsc{iii}}, 21}}{P_{\\text{C~\\textsc{ii}}, \\text{O~\\textsc{iii}}}}\\text{,}\n\\end{split}\n\\end{equation}\ni.e. assuming $R_{21,\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}} = 1$.\n\nWe expect this approach to break down on small scales. First, the three fields\nwill be well-correlated (or anti-correlated) only on large scales, with the\n21~cm field and the [C~\\textsc{ii}], [O~\\textsc{iii}] fields decorrelating on scales smaller\nthan the size of the ionized regions \\citep{Lidz11}. Second, we assume linear\nbiasing which should break down on scales where second-order bias terms become\nsignificant \\citep{McQuinn:2018zwa}.\n\nOne caveat here is that we neglect redshift space distortions throughout.\nIncluding these effects will make the power spectra in\nEquation~\\ref{eq:threefields_specific} angle-dependent. Although these effects\nare well studied in the case of the 21~cm auto-spectrum (e.g.\n\\citealt{Mao12}), an extension of our three cross-spectra method may be needed\nto account for these distortions.\n\n\\section{Simulations}\\label{sec:simulations}\n\nIn order to investigate the accuracy of Equation~\\ref{eq:threefields_specific}\nwe turn to $(186\\,\\text{Mpc})^3$ radiative transfer simulations of the EoR\n\\citep{2007MNRAS.377.1043M,McQuinn:2007dy,Lidz08}. 
In these calculations,\nradiative transfer is post-processed onto a $(1024)^3$ dark-matter-only\nsimulation run with \\texttt{GADGET-2} \\citep{Springel:2005mi}. The dark matter\nsimulation resolves halos only down to $10^{10}\\,\\text{M}_\\odot$; however, halos down to\n$10^8\\,\\text{M}_\\odot$ are added manually in post-processing with the correct\nstatistical properties \\citep{2007MNRAS.377.1043M}. Halos resolved directly in\nthe simulation (i.e. $>10^{10}\\,\\text{M}_\\odot$) are identified with a\nFriends-of-Friends algorithm with a linking length of 0.2 (in units of the\nmean interparticle spacing).\n\nIn what follows, we adopt the abundant mini-halo sink scenario\n\\citep{2007MNRAS.377.1043M,Lidz08} as our baseline reionization model.\nAlthough the detailed model for photon sinks implemented in these simulations\nmay not be fully realistic, the smaller ionized regions in ``abundant sink''\nscenarios may, in fact, be more plausible than the other cases considered in\nthis previous work \\citep{McQuinn:2018zwa}. In any case, the accuracy of our\nmethod does not depend strongly on the precise reionization model assumed.\n\nIn order to model the [C~\\textsc{ii}] and [O~\\textsc{iii}] emission fluctuations, we assume that\nthe luminosity in each line is correlated with the host halo mass.\nSpecifically, we adopt a power-law average relation between line luminosity\nand halo mass:\n\\begin{equation}\\label{eq:im_form}\n\\avg{L_i}(M) = L_{i,0} \\left[\\frac{M}{M_0}\\right]^{\\alpha_i},\n\\end{equation}\nwhere $M$ is the mass of the halo, $\\avg{L_i}$ is the average luminosity, and\n$L_{i,0}$ is the luminosity at the characteristic mass $M_0$. In order to account\nfor scatter in this relation, we add a random term so that each halo's\nluminosity is $L_i = \\avg{L_i}(1 + \\epsilon)$, where $\\epsilon$ is drawn from a\nzero-mean lognormal distribution of width 0.4 dex.\n\nIn what follows we assume that each host halo in the simulation hosts a [C~\\textsc{ii}]\nand [O~\\textsc{iii}] emitter. 
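As an illustration of this prescription (the power law of Equation~\ref{eq:im_form} plus lognormal scatter), halo luminosities can be assigned as in the following sketch. The normalization of the scatter factor is one reading of the ``zero-mean'' $\epsilon$ above, chosen here so that the mean luminosity at fixed mass is preserved; $L_{i,0}$ and $M_0$ are arbitrary placeholders since they cancel in the cross-spectra ratio:

```python
import numpy as np

rng = np.random.default_rng(1)

def halo_luminosity(M, alpha, sigma_dex=0.4, L0=1.0, M0=1e10):
    """Power-law mean luminosity-mass relation with lognormal scatter.

    L0 and M0 are arbitrary here.  The scatter factor is divided by its
    analytic mean so that <L> at fixed mass is preserved (an assumption
    about the intent of the 'zero-mean' epsilon in the text).
    """
    L_mean = L0 * (M / M0) ** alpha
    sigma_ln = sigma_dex * np.log(10.0)
    scatter = np.exp(sigma_ln * rng.normal(size=np.shape(M)))
    scatter /= np.exp(0.5 * sigma_ln ** 2)   # enforce <scatter> = 1
    return L_mean * scatter

# Sanity check: the scatter preserves the mean luminosity at fixed mass.
M = np.full(200_000, 1e9)
L = halo_luminosity(M, alpha=1.0)
print(round(float(np.mean(L)) / 0.1, 2))   # close to 1
```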
If only a random fraction $f$ of halos hosts active [C~\\textsc{ii}]\nand\/or [O~\\textsc{iii}] emitters while $L_{i,0}$ is boosted to fix the average\nspecific intensity in each line, this does not change the 21~cm-[C~\\textsc{ii}] or\n21~cm-[O~\\textsc{iii}] cross-power spectra. This represents the case in which\nstar-formation activity has a short duty cycle, yet the total star-formation\nrate density is fixed to the observed value. If the same random fraction emits\nin both [C~\\textsc{ii}] and [O~\\textsc{iii}], this can boost the cross-shot noise contribution to\n$P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}$, but this is highly sub-dominant on the scales of interest\n($k \\leq 0.4\\,\\text{Mpc}^{-1}$) even for $f=10^{-2}$.\n\nIn order to estimate the specific intensity of the two fields, we use nearest\ngrid-point interpolation to estimate the emissivity on a $512^3$ Cartesian\ngrid, matching the resolution of the density and 21~cm fields from\n\\citet{Lidz08}. Note that we can test the accuracy of\nEquation~\\ref{eq:threefields_specific} without specifying the numerical value\nof $L_{i,0}$ or $M_0$ since they cancel in the ratio. The value of $\\alpha_i$,\non the other hand, controls which host halos (and galactic star-formation\nrates) produce most of the specific intensity in line $i$.\\footnote{Note that we\nassume that the minimum host halo mass of the [C~\\textsc{ii}] and [O~\\textsc{iii}] emitters is\n$10^8\\,\\text{M}_\\odot$, comparable to the atomic cooling mass. The true minimum host\nmass of the emitters may, in fact, be larger. However, note that the average\nspecific intensity may be fixed by the total star-formation rate density and\nthe line-luminosity--star-formation rate correlation. Provided these quantities\nare fixed, the main impact of boosting the minimum host halo mass will be\nto slightly increase the bias factors, $b_i$, and the signal strength. See\ne.g. 
\\citet{lidz2016:remove_interloper} for more details regarding\nline-intensity fluctuation models.} If the value of $\\alpha_i$ is the same for\n[C~\\textsc{ii}] and [O~\\textsc{iii}], then the two fields differ only by an overall\nmultiplicative factor and Equation~\\ref{eq:threefields_specific} reduces to a\nsimple ratio between a single cross-spectrum and an\nauto-spectrum.\\footnote{This assumes, as we do here, that the scatter in the\nluminosity-mass relation is perfectly correlated between [C~\\textsc{ii}] and [O~\\textsc{iii}] at\nfixed $\\alpha_i$.}\n\nWe consider three different values for $\\alpha_i$: $2\/3$, $1$, and $4\/3$. We\nrefer to these as L, M, and H since they give the most weight to low-, medium-,\nand high-mass host halos, respectively. We allow for the case in which the two\nlines have different values of $\\alpha_i$: i.e., we consider 21~cm-L-M,\n21~cm-M-H, and 21~cm-H-L, with L, M, or H standing in for [C~\\textsc{ii}] or [O~\\textsc{iii}] in\nEquation~\\ref{eq:threefields_specific}. We then measure the various\ncross-spectra using a slightly modified version of the power spectrum\ncalculator in \\texttt{21cmFAST} \\citep{Mesinger11,2018arXiv180908995P}.\n\n\\section{Results}\\label{sec:results}\n\nWe first investigate how well our three cross-spectra approach for measuring\nthe large-scale 21~cm bias agrees with the true bias. We measure the true bias\nas\n\\begin{equation}\\label{eq:truebias}\n\\avg{T_{21}} b_{21}(k) \\equiv \\sqrt{\\frac{P_{21,21}(k)}{P_{\\delta,\\delta}(k)}}\\text{,}\n\\end{equation}\nand also estimate the bias as\n\\begin{equation}\\label{eq:truebias_cross}\n\\avg{T_{21}} b_{21}(k) \\simeq\n\\left|\\frac{P_{21,\\delta}(k)}{P_{\\delta,\\delta}(k)} \\right| \\text{,}\n\\end{equation}\nwhere $P_{\\delta,\\delta}(k)$ is the auto-power spectrum of the simulated\ndensity field and $P_{21,\\delta}(k)$ is the 21~cm-density cross-power\nspectrum. 
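Estimators like Equations~\ref{eq:truebias}~and~\ref{eq:truebias_cross} only require binned auto- and cross-power spectra of gridded fields. A minimal spherically averaged estimator in this spirit (a simplified stand-in for the modified \texttt{21cmFAST} calculator used here, with an assumed FFT normalization convention) is:

```python
import numpy as np

def cross_power_1d(f1, f2, box_size, n_bins=8):
    """Spherically averaged cross-power of two fields on an n^3 grid.

    Normalization: P(k) = <Re[F1(k) F2*(k)]> * box_size^3 / n^6, so a
    unit-variance white-noise field on a unit-cell grid has P(k) = 1.
    """
    n = f1.shape[0]
    F1, F2 = np.fft.rfftn(f1), np.fft.rfftn(f2)
    spacing = box_size / n
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=spacing)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=spacing)
    kmag = np.sqrt(k1d[:, None, None] ** 2
                   + k1d[None, :, None] ** 2 + kz[None, None, :] ** 2)
    raw = (F1 * np.conj(F2)).real * box_size ** 3 / n ** 6
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), edges[1:-1])   # bin indices 0..n_bins-1
    good = kmag.ravel() > 0                        # drop the k = 0 mode
    k_cen, P = np.zeros(n_bins), np.zeros(n_bins)
    for b in range(n_bins):
        sel = good & (idx == b)
        k_cen[b] = kmag.ravel()[sel].mean()
        P[b] = raw.ravel()[sel].mean()
    return k_cen, P

# Sanity check: unit-variance white noise on unit cells -> P(k) = 1.
rng = np.random.default_rng(2)
f = rng.normal(size=(32, 32, 32))
_, P = cross_power_1d(f, f, box_size=32.0)
print(round(float(P.mean()), 1))   # close to 1
```

Feeding the 21~cm, density, and tracer grids through such a routine pairwise yields all of the spectra entering Equations~\ref{eq:threefields_specific}, \ref{eq:truebias}, and~\ref{eq:truebias_cross}.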
Note that Equation~\\ref{eq:truebias_cross} assumes that the\ncorrelation coefficient $\\left|r_{21,\\delta}\\right| = 1$ and so will depart\nfrom Equation \\ref{eq:truebias} on small scales, but the two should converge\non large scales (see Section~\\ref{sec:intro}). The absolute value in\nEquation~\\ref{eq:truebias_cross} comes about from the convention adopted in\nSection~\\ref{sec:approach}. On large scales where the 21~cm, [C~\\textsc{ii}], and\n[O~\\textsc{iii}] fields are each well correlated or anti-correlated with the density\nfield and linear theory applies, we expect all estimates of $\\avg{T_{21}}\nb_{21}$ to agree. When we estimate the bias factors using our three\ncross-spectra method (Equation~\\ref{eq:threefields_specific}) we use the\nsimulated density power-spectrum, since this is extremely close to the linear\ntheory prediction on the relevant scales and redshifts.\n\nThe bias factors inferred from Equation~\\ref{eq:threefields_specific} are\nshown in Figure~\\ref{fig:b21_vs_k} for each of the three combinations of our\nluminosity-mass relation models (L-M, M-H, H-L) at $z=8.34$ when the model\nvolume-averaged ionization fraction is $\\avg{x_i}=0.36$. These are compared\nwith the bias inferred from the 21~cm auto-spectrum\n(Equation~\\ref{eq:truebias}) and the 21~cm-density cross-spectrum\n(Equation~\\ref{eq:truebias_cross}). On large scales ($k \\lesssim\n0.3\\,\\text{Mpc}^{-1}$), the methods converge to very nearly the same value. We find\nthat on a scale of $k=0.1\\,\\text{Mpc}^{-1}$ at $\\avg{x_i}=0.36$ the three methods\nagree with the true value to within $0.6\\%$. In the case of 21~cm-L-L,\n21~cm-M-M, or 21~cm-H-H models the agreement is slightly worse but still at\nthe percent-level. 
Note that another approach for estimating the 21~cm bias\nwould use only the 21~cm-[C~\\textsc{ii}] cross-spectrum and the [C~\\textsc{ii}] auto-spectrum.\nThis requires measuring the [C~\\textsc{ii}] auto-spectrum, which is subject to\ncontamination from interloping line emission, and so we pursue only the more\nrobust three-field technique here.\n\nThis success comes about because the ionized regions are sufficiently small\ncompared to this scale ($k=0.1\\,\\text{Mpc}^{-1}$), ensuring that the 21~cm and line-intensity\nfields are highly anti-correlated and that second-order biasing contributions\nare small. For example, the cross-correlation coefficient between the 21~cm\nfield and the density field is $r_{21,\\delta} = -0.99$ at $k=0.1\\,\\text{Mpc}^{-1}$\nfor $\\avg{x_i}=0.36$, $z=8.34$.\n\nOn smaller scales, our approach breaks down. At $\\avg{x_i}=0.36$, the\ndifferent bias factor estimates begin diverging at the $\\geq 10\\%$ level near\n$k \\sim 0.4\\,\\text{Mpc}^{-1}$. This occurs because the fields start to decorrelate\nand second-order biasing terms become more important. As anticipated after\nEquation~\\ref{eq:threefields}, the three cross-spectra approach underestimates\nthe bias factor in this regime. This underestimation may allow one to place\nrobust lower limits on $P_{21,21}$ that are only $\\sim50\\%$ smaller than the\ntrue value down to $k\\sim2\\,\\text{Mpc}^{-1}$ at this stage of the EoR, although the\nmodel-dependence of such limits warrants further investigation.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{{b21_k_z8.34}.pdf}\n\\caption{{\\em Upper:} The simulated, dimensionless 21~cm auto-power spectrum\n(gray) compared to that inferred from our three cross-spectra approach\nassuming linear biasing at $\\avg{x_i}=0.36$, $z=8.34$. The different colors\ncorrespond to various possible line-luminosity--mass relations (L, M, H), as\ndescribed in Section~\\ref{sec:simulations}. 
The shaded area shows the\n$1\\,\\sigma$ expected errors for the 21~cm-L-M survey described in\nSection~\\ref{sec:detectability}. {\\em Middle:} The 21~cm bias factor extracted\nfrom our three cross-spectra approach in the different line-luminosity models.\nThese are compared with that inferred from the 21~cm auto-spectrum (solid\ngray) and the 21~cm-density cross-spectrum (dashed gray). {\\em Bottom:} The\nrelative difference between the different bias-factor estimates. On large scales\nall inferences agree.}\n\\label{fig:b21_vs_k}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{{b21_z}.pdf}\n\\caption{{\\em Upper:} The inferred 21~cm bias factor as a function of\nredshift\/volume-averaged ionization fraction at $k=0.1\\,\\text{Mpc}^{-1}$. The\ndifferent colored lines show inferences from our three cross-spectra approach\nin the different line-luminosity models (see Figure~\\ref{fig:b21_vs_k} and\ntext). The gray line shows the true bias factor measured from the\n21~cm-$\\delta$ cross-spectrum. {\\em Lower:} The relative error in the three\ncross-spectra approach. We find better than $5\\%$ agreement for most of the\nEoR, with sub-percent accuracy achieved near $\\avg{x_i}=0.36$ at $z=8.34$. At\n$\\avg{x_i} \\sim 0.15$ the fields decorrelate on large scales and so the\napproach breaks down (see text).}\n\\label{fig:b21_vs_z}\n\\end{figure}\n\nAt later times, the average ionization fraction and the bubble sizes increase\nand so the scale at which the linear biasing approximation breaks down moves\nto larger scales. For example, at $\\avg{x_i} = 0.7$ ($z = 7.32$), the approach\nbreaks down at the $\\sim 10\\%$ level at $k\\sim0.3\\,\\text{Mpc}^{-1}$, though an\naccuracy of only a few percent is achieved at the largest scales considered\nhere. 
We expect even better agreement on larger scales than those probed by our\nrelatively small simulation volume.\n\nIn Figure~\\ref{fig:b21_vs_z} we turn to consider the redshift evolution of the\n21~cm bias factor at $k=0.1\\,\\text{Mpc}^{-1}$. As emphasized earlier, the redshift\nevolution of the 21~cm bias factor encodes interesting information about how\nreionization proceeds. The three cross-spectra method generally recovers the\noverall evolution of the 21~cm bias factor with redshift and volume-averaged\nionization fraction quite accurately. This suggests that our technique may\nhelp in reconstructing the reionization history of the Universe, or in\nverifying the results from 21~cm auto-spectrum measurements.\n\nThe one exception is near $\\avg{x_i} \\sim 0.15$, where our technique is\nrelatively inaccurate. This occurs because large-scale overdense regions are\ninitially brighter in 21~cm than typical regions in our model, and so the 21~cm\nfield is {\\em positively correlated} with the density fluctuations at early\ntimes. As reionization begins, the large-scale overdense regions ionize first,\nwhich causes the correlation coefficient between the 21~cm and density fields to\nreverse sign. Consequently, there is an intermediate period (near $\\avg{x_i}\n\\sim 0.15$ in this model) where the two fields are roughly {\\em uncorrelated}\non large scales \\citep{Lidz08}. This causes our method to break down, although\nwe caution that incorporating spin-temperature fluctuations into the modeling\nmay modify this conclusion. Note also that it will be challenging to perform\nline-intensity mapping observations at very early times before, e.g.,\nsufficient metal enrichment occurs.\n\nWhile our baseline model assumes the abundant mini-halo sinks scenario, we have\nalso investigated the fiducial model used in \\citet{Lidz08}. 
Although this\nlatter model has a different ionization history and bias factor evolution, the\naccuracy of our three cross-spectra method is broadly similar in this case.\nFor example, near the midpoint of reionization in this model ($z=7.32,\n\\avg{x_i}=0.54$), the 21~cm bias extraction also reaches sub-percent accuracy.\n\n\\section{Detectability}\\label{sec:detectability}\n\nEncouraged by the success of our approach in simulations, we briefly describe\nthe survey specifications required to infer 21~cm bias factors using this\ntechnique. Here we consider only rough estimates and defer an in-depth\ntreatment of noise power spectra, variance from residual foregrounds, and a\nfull probabilistic, multi-field framework to future work.\n\nWe first describe the relevant variance and covariance formulae (for\nderivations, see e.g. \\citealt{2015JCAP...03..034V}):\n\\begin{equation}\\label{eq:var_covar}\n\\begin{split}\n\\Var{P_{i,j}} &= P_{i,j}^2 + P_{i,\\text{tot}}P_{j,\\text{tot}} \\\\\n\\Cov{P_{i,j}}{P_{i,k}} &= P_{i,\\text{tot}}P_{j,k} + P_{i,j}P_{i,k}\\text{,}\n\\end{split}\n\\end{equation}\nwhere $P_{i,\\text{tot}} = P_{i} + N_{i}$ and $N_{i}$ is the instrumental noise\npower spectrum of line $i$. For simplicity, we neglect the shot-noise\ncontribution to each field. 
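The per-mode variance in Equation~\ref{eq:var_covar} follows from Wick's theorem for Gaussian fields, and can be spot-checked with a Monte Carlo over a single mode (illustrative power values, noiseless case $N_i = 0$):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative single-mode powers with |r_ij| < 1 (our own choices).
P_i, P_j, P_ij = 2.0, 1.5, -1.2
cov = np.array([[P_i, P_ij], [P_ij, P_j]])
a = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

P_hat = a[:, 0] * a[:, 1]            # per-mode cross-power estimate
var_mc = P_hat.var()
var_analytic = P_ij ** 2 + P_i * P_j  # Var{P_ij} from Eq. (eq:var_covar)
print(round(var_mc / var_analytic, 2))   # close to 1
```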
We note that Equation~\\ref{eq:var_covar} is only\nvalid in the Gaussian approximation, but this is suitable for the large scales\nof interest in our approach.\n\nWe can now apply the standard propagation of errors formula to\nEquation~\\ref{eq:threefields_specific} and substitute\nEquation~\\ref{eq:var_covar}, yielding:\n\\begin{equation}\\label{eq:noise_P21}\n\\begin{split}\n&\\Var{P_{21}} = \\\\\n& \\left(\\frac{P_{21,\\text{C~\\textsc{ii}}}}{P_{21,\\text{O~\\textsc{iii}}}}\\right)^2\\left(P_{21,\\text{O~\\textsc{iii}}}^2 + P_{21,\\text{tot}}P_{\\text{O~\\textsc{iii}},\\text{tot}}\\right) \\\\\n&+ \\left(\\frac{P_{21,\\text{O~\\textsc{iii}}}}{P_{21,\\text{C~\\textsc{ii}}}}\\right)^2\\left(P_{21,\\text{C~\\textsc{ii}}}^2 + P_{21,\\text{tot}}P_{\\text{C~\\textsc{ii}},\\text{tot}}\\right) \\\\\n&+ \\left(\\frac{P_{21,\\text{C~\\textsc{ii}}}P_{21,\\text{O~\\textsc{iii}}}}{P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}^2}\\right)^2\\left(P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}^2 + P_{\\text{C~\\textsc{ii}},\\text{tot}}P_{\\text{O~\\textsc{iii}},\\text{tot}}\\right) \\\\\n&+ \\frac{P_{21,\\text{C~\\textsc{ii}}}P_{21,\\text{O~\\textsc{iii}}}}{P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}^2}\\left(P_{21,\\text{tot}}P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}} + P_{21,\\text{C~\\textsc{ii}}}P_{21,\\text{O~\\textsc{iii}}} \\right) \\\\\n&- \\frac{P_{21,\\text{C~\\textsc{ii}}}^2P_{21,\\text{O~\\textsc{iii}}}}{P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}^3}\\left(P_{\\text{O~\\textsc{iii}},\\text{tot}}P_{21,\\text{C~\\textsc{ii}}} + P_{21,\\text{O~\\textsc{iii}}}P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}} \\right) \\\\\n&- \\frac{P_{21,\\text{C~\\textsc{ii}}}P_{21,\\text{O~\\textsc{iii}}}^2}{P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}^3}\\left(P_{\\text{C~\\textsc{ii}},\\text{tot}}P_{21,\\text{O~\\textsc{iii}}} + P_{21,\\text{C~\\textsc{ii}}}P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}} 
\\right)\\text{.}\n\\end{split}\n\\end{equation}\n\nThe number of modes in a bin of width $\\delta k$ centered on $k$ is,\n\\begin{equation}\\label{eq:num_modes}\nN_m = \\frac{4\\pi k^2 \\delta k}{V_{\\text{fund}}}\\text{,}\n\\end{equation}\nwhere $V_{\\text{fund}}$ is the volume of a fundamental mode. We assume a\nsquare survey area and therefore compute,\n\\begin{equation}\\label{eq:vfund}\nV_{\\text{fund}} = \\frac{(2\\pi)^3}{L_{\\bot}^2L_{\\parallel}}\\text{,}\n\\end{equation}\nwhere $L_{\\bot}$ is the side length of the survey area and $L_{\\parallel}$ is\nthe length of the redshift bin $\\Delta z$.\n\nWe assume a joint survey area of $100\\,\\text{deg}^2$ and bin widths of $\\delta\nk = 0.03\\,\\text{Mpc}^{-1}$ and $\\Delta z = 0.25$. In order to make a rough estimate,\nwe assume that each experiment reaches sample-variance-limited sensitivity at\n$k=0.1\\,\\text{Mpc}^{-1}$, with $N_i=P_i$ at this wavenumber, and adopt a pure,\nisotropic white-noise power spectrum. In the case of [C~\\textsc{ii}], the required\nnoise depends on the uncertain average specific intensity, which determines, in\npart, the signal strength, $P_i$. A plausible value is $\\avg{I_{\\text{C~\\textsc{ii}}}}=5\n\\times 10^2\\,\\text{Jy\/sr}$ at $z=8.34$ \\citep{2018ApJ...867...26B}. In this\ncase, $N_{\\text{C~\\textsc{ii}}} = 1.6 \\times 10^9$, $2.5 \\times 10^9$, $3.9 \\times\n10^9\\,(\\text{Jy}\/\\text{sr})^2\\,\\text{Mpc}^3$ for the L, M, and H models of the\n[C~\\textsc{ii}] line, respectively at $z=8.34$. These noise requirements are comparable\nto the values forecast for the Stage-II [C~\\textsc{ii}] line-intensity mapping\nexperiments considered in \\citet{silva15:prospects,lidz2016:remove_interloper}. We expect broadly\nsimilar noise requirements for hypothetical future [O~\\textsc{iii}] surveys but defer\ndetailed forecasts to future work. 
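For concreteness, Equations~\ref{eq:num_modes}~and~\ref{eq:vfund} can be evaluated for the fiducial survey above. The cosmological parameters and the flat-sky conversion from $100\,\text{deg}^2$ to $L_{\bot}$ below are our own rough assumptions (Planck-like values), not numbers taken from this work:

```python
import numpy as np

c = 2.998e5             # speed of light, km/s
H0, Om = 67.7, 0.31     # assumed Planck-like flat LCDM parameters

def comoving_distance(z, n_step=20_000):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    zz = np.linspace(0.0, z, n_step)
    integrand = c / (H0 * np.sqrt(Om * (1 + zz) ** 3 + (1 - Om)))
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2) * (zz[1] - zz[0]))

z, delta_z = 8.34, 0.25
theta = np.sqrt(100.0) * np.pi / 180.0        # 100 deg^2 -> side, in radians
L_perp = theta * comoving_distance(z)         # flat-sky side length, Mpc
E_z = np.sqrt(Om * (1 + z) ** 3 + (1 - Om))
L_par = c * delta_z / (H0 * E_z)              # depth of the Delta z bin, Mpc

V_fund = (2 * np.pi) ** 3 / (L_perp ** 2 * L_par)   # Eq. (eq:vfund)
k, dk = 0.1, 0.03                                   # Mpc^-1
N_m = 4 * np.pi * k ** 2 * dk / V_fund              # Eq. (eq:num_modes)
print(round(N_m))   # a few thousand modes in this bin
```

The fractional statistical error on a band power then scales roughly as $1/\sqrt{N_m}$, which is why the wide common survey area matters.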
As we discussed previously\n\\citep{2018ApJ...867...26B}, the 21~cm sensitivity requirement assumed here\nseems plausible considering HERA-350 will {\\em image} some large scale modes\n\\citep{DeBoer:2016tnn} --- although the white noise approximation is rather\ncrude and should be refined in future work.\n\nWe caution that the strength of the [C~\\textsc{ii}] signal at the redshifts of interest\nis quite uncertain. A broad range of estimates appear in the current\nliterature, depending on assumptions about: the correlation between [C~\\textsc{ii}]\nluminosity and SFR at high redshift, the total star-formation rate density\n(estimates from UV luminosity functions are sensitive to whether and how one\nextrapolates to faint luminosities beyond current detection limits), and the\nhost-halo masses of [C~\\textsc{ii}] emitters. For example, our model values for\n$\\avg{I_{C~\\textsc{ii}}}$ are similar to a number of recent forecasts\n\\citep{2018arXiv180204804D, 2015ApJ...806..209S}, but are more than an order\nof magnitude larger than some more pessimistic estimates in\n\\citet{2015ApJ...806..209S,2018arXiv181208135C}. In any case, at fixed\nluminosity-weighted bias, the required noise scales quadratically with the\naverage specific intensity and so the reader can rescale our results according\nto their preferred specific intensity model. For instance, in the case of\n$\\avg{I_{C~\\textsc{ii}}} = 20\\,\\text{Jy}\/\\text{sr}$ \\citep{2018arXiv181208135C}, one\nwould require that $N_{C~\\textsc{ii}} = 2.6 \\times 10^6$, $4 \\times 10^6$, $6.2 \\times\n10^6\\,(\\text{Jy}\/\\text{sr})^2\\,\\text{Mpc}^3$ for the L, M, and H models of the\n[C~\\textsc{ii}] line, respectively at $z=8.34$. 
On the other hand, a more moderate\nestimate of $\\avg{I_{\\text{C~\\textsc{ii}}}} = 100 \\,\\text{Jy}\/\\text{sr}$ \\citep{\n2015ApJ...806..209S,2018arXiv180204804D}, requires $N_{C~\\textsc{ii}} = 6.4\n\\times 10^7$, $1 \\times 10^8$, $1.6 \\times\n10^8\\,(\\text{Jy}\/\\text{sr})^2\\,\\text{Mpc}^3$ for the L, M, and H models of the\n[C~\\textsc{ii}] line, respectively at $z=8.34$.\n\n\\begin{deluxetable}{cCCC}\n\\tablecaption{The noise power-spectrum for upcoming [C~\\textsc{ii}] surveys at $z=7.4$.\n\\label{tab:noise}}\n\\tablehead{\\colhead{survey} & \\colhead{$A_{\\text{survey}}$} & \\colhead{$A_{\\text{pix}}$} & \\colhead{$N_{\\text{C~\\textsc{ii}}}$} \\\\ \n\\colhead{} & \\colhead{$(\\text{deg}^2)$} & \\colhead{$(\\text{deg}^2)$} & \\colhead{$((\\text{Jy}\/\\text{sr})^2\\,\\text{Mpc}^3)$} } \n\\startdata\nCCAT-p & 2 & 2.5\\times10^{-4} & 2.66\\times10^{9} \\\\\nCONCERTO & 1.4 & 6.7\\times10^{-5} & 2.04 \\times 10^{9} \\\\\nTIME & 1.3 \\times 0.0084 & 6.7\\times10^{-5} & 1.04\\times10^9 \\\\\n\\enddata\n\\tablerefs{See \\citet{2018arXiv181208135C} for more details.}\n\\end{deluxetable}\n\nWith the assumed noise and survey requirements for our fiducial model, we show\nthe resulting error bars in the {\\em Upper} and {\\em Middle} panels of\nFigure~\\ref{fig:b21_vs_k} for our particular choice of binning. At least for\nthe hypothetical surveys considered here, the 21~cm bias factor may be\nrecovered with good statistical precision. In other words, if sample-variance\nlimited sensitivity may be reached at $k=0.1\\,\\text{Mpc}^{-1}$ in each line over a\ncommon survey area of $\\sim 100\\,\\text{deg}^2$, then a strong detection\nappears feasible. Of course we have neglected sample variance contributions\nfrom residual foregrounds among other complications, and so this should be\ninterpreted as a best-case scenario. 
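The quadratic rescaling of the noise requirement with the assumed mean [C~\textsc{ii}] intensity quoted above is simple enough to reproduce directly:

```python
import numpy as np

# Noise requirements scale as <I_CII>^2 at fixed luminosity-weighted bias;
# fiducial values are quoted at <I_CII> = 500 Jy/sr.
N_fiducial = np.array([1.6e9, 2.5e9, 3.9e9])   # L, M, H, (Jy/sr)^2 Mpc^3
I_fiducial = 500.0                              # Jy/sr

for I_mean in (20.0, 100.0):
    print(I_mean, N_fiducial * (I_mean / I_fiducial) ** 2)
# 20  Jy/sr -> 2.56e6, 4.0e6, 6.24e6, i.e. the quoted 2.6e6, 4e6, 6.2e6
# 100 Jy/sr -> 6.4e7, 1.0e8, 1.56e8 (Jy/sr)^2 Mpc^3
```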
On the other hand, increasing the common\nsurvey area above $100\\,\\text{deg}^2$, for example, could help shrink the\nerror bars.\n\nWhile our fiducial [C~\\textsc{ii}] survey is somewhat futuristic, we can also consider\nthe prospects with current, shortly upcoming surveys, specifically\nCCAT-prime\\footnote{\\url{http:\/\/www.ccatobservatory.org}}\n\\citep{2018SPIE10700E..1MS},\nCONCERTO\\footnote{\\url{https:\/\/people.lam.fr\/lagache.guilaine\/CONCERTO.html}}\n\\citep{Lagache:2018hmk}, and\nTIME\\footnote{\\url{https:\/\/cosmology.caltech.edu\/projects\/TIME}}\n\\citep{Crites14}. We use the pixel noise values, $\\sigma_{\\text{pix}}\nt_{\\text{pix}}^{-1\/2}$, for each survey from \\citet{2018arXiv181208135C}. We\nreport the noise power spectrum at $z=7.4$ (assuming a pure white-noise\nspectrum) in Table~\\ref{tab:noise}. We generically find that\n$N\\sim2\\times10^9\\, (\\text{Jy}\/\\text{sr})^2\\,\\text{Mpc}^3$. If we assume a model\nwith $\\avg{I_{\\text{C~\\textsc{ii}}}}\\sim500\\,\\text{Jy}\/\\text{sr}$ then even the\nfirst-generation surveys reach our requisite noise. However, deeper surveys\nwill be needed in the case of the more pessimistic estimates of\n$\\avg{I_{\\text{C~\\textsc{ii}}}}\\sim100$ or $\\sim20\\,\\text{Jy}\/\\text{sr}$. That being said,\nour fiducial calculations also assume a larger survey area of\n$100\\,\\text{deg}^2$. At $z=8.34$ we find a $\\text{S}\/\\text{N}$ of $3.3$,\n$2.7$, and $2.9$ for the L-M, M-H, and H-L models, respectively at\n$k=0.1\\,\\text{Mpc}^{-1}$ and bin width of $\\Delta k=0.03\\,\\text{Mpc}^{-1}$. Since the\nnumber of modes scale with the square root of the survey area, we estimate\nthat CCAT-p might be able to recover a $\\text{S}\/\\text{N}$ of $0.5$, $0.4$,\nand $0.4$ for the L-M, M-H, and H-L models, respectively at\n$k=0.1\\,\\text{Mpc}^{-1}$. 
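The CCAT-p estimates above follow from the fiducial forecast by mode counting alone, since the number of modes (Equation~\ref{eq:num_modes}) scales linearly with survey area:

```python
import numpy as np

# S/N scales as sqrt(N_modes), i.e. as sqrt(A_survey) at fixed depth,
# so rescaling the fiducial 100 deg^2 forecast to 2 deg^2 gives:
sn_fiducial = np.array([3.3, 2.7, 2.9])   # L-M, M-H, H-L at 100 deg^2
sn_ccatp = sn_fiducial * np.sqrt(2.0 / 100.0)
print(np.round(sn_ccatp, 1))   # -> [0.5 0.4 0.4]
```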
Including some higher $k$-modes, even this\nfirst-generation survey might be capable of a marginal detection (if [O~\\textsc{iii}]\ncan be surveyed as well), but this is only for our optimistic signal strength\nmodel.\n\nSince the strength of the [C~\\textsc{ii}] signal is likely a strong function of\nredshift, the survey requirements should be less stringent at $z \\sim 7$ than\nthe $z \\sim 8$ case considered above. The main effect here should be from\nredshift evolution in the average specific intensity; again, the noise\nrequirements scale with the average intensity squared. The required noise can\ntherefore be adjusted according to one's preferred model for redshift\nevolution in the signal strength.\n\n\\section{Conclusions}\\label{sec:conclusions}\n\nWe have shown that the amplitude of large-scale 21 cm fluctuations may be\ninferred from measuring cross-power spectra between the 21 cm fluctuations and\neach of two separate line-intensity maps, such as [C~\\textsc{ii}] or [O~\\textsc{iii}]. Although\nit has long been recognized that the cross-power spectrum between two fields\nis more robust to foreground contamination than the auto-power spectrum of\neither field alone, the amplitude of a single cross-power spectrum provides\nonly a product of two bias factors. We found that using a suitable combination\nof three cross-power spectra\n(Equations~\\ref{eq:threefields}~and~\\ref{eq:threefields_specific}) one can\ninstead infer the 21~cm bias alone to high accuracy.\n\nQuantitatively, in the reionization model we considered, the accuracy reaches\npercent-level on large scales ($k \\sim 0.1-0.3\\,\\text{Mpc}^{-1}$) during much of the\nEoR. The inferred bias factor evolution can then be compared to that extracted\nfrom the 21~cm auto spectrum. In principle, checking whether the 21~cm\nauto-power spectrum follows linear-biasing on large scales might itself be a\ngood systematics check. 
However, linear biasing holds only over a limited span\nof wavenumbers and early measurements may probe a small dynamic range in\nspatial scale. Hence we believe that our three cross-spectra approach might\nplay an important role in confirming initial detections. Since our method\nunderestimates $P_{21,21}$ on intermediate scales, it can place informative\nlower limits (i.e. $\\sim 50\\%$ of the true value) down to $k\\sim1\\,\\text{Mpc}^{-1}$,\ndepending on the stage of reionization. More work is necessary, however, to\nsee if there are some allowed reionization and line-intensity models where our\ntechnique actually overestimates $P_{21,21}$.\n\nAlthough we focused here on the case of 21~cm fluctuations during the EoR, the\nmethod has broader applicability. For example, one can also estimate the bias\nof the [C~\\textsc{ii}] and [O~\\textsc{iii}] fluctuations by using a similar ratio of\ncross-spectra. This should help circumvent the line-interloper problem that\npresents a challenge for such surveys \\citep[e.g.][]{kovetz2017:im_review}.\nSince the ionized bubbles lead to scale-dependent biasing in the 21~cm field\non large spatial scales, the 21~cm case is an especially demanding\napplication, and we expect even better performance for [C~\\textsc{ii}], [O~\\textsc{iii}], and\nrelated lines.\n\nIn order to implement the strategy proposed here, there must be a coordinated\neffort to probe the same regions on the sky over common redshifts in multiple\nlines of interest. Ultimately, we envision line-intensity mapping surveys in\n$N$ different lines, all probing the same cosmological volume. Among other\nbenefits, this will provide $N(N-1)\/2$ measurements of the bias factor in each\nline using the same basic technique outlined here.\n\n\\acknowledgments\nWe thank the anonymous referee for providing helpful comments. We thank Matt\nMcQuinn for the simulations used in this analysis. A.B. would like to thank\nTodd Phillips for helpful discussions. A.B. 
was supported in part by the Roy\n\\& Diana Vagelos Program in the Molecular Life Sciences and the Roy \\& Diana\nVagelos Challenge Award. The work of A.B. and F.V.-N. is supported by the\nSimons Foundation.\n\n\\software{\\texttt{colossus} \\citep{2018ApJS..239...35D}, \\texttt{matplotlib}\n\\citep{Hunter:2007}, \\texttt{numpy} \\citep{numpy:2011}, and \\texttt{scipy}\n\\citep{scipy:2001}.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbfyb b/data_all_eng_slimpj/shuffled/split2/finalzzbfyb new file mode 100644 index 0000000000000000000000000000000000000000..b009920646978ae863c0cf8e46942d69bdd3173c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbfyb @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIt is known that at least half of the stars in the Galaxy are multiple systems containing two or more stars orbiting each other \\citep{2001ASPC..229...91K, 2017ApJ...836..139F}, thus in many surveys and large samples of stars, binaries are ubiquitous. This is in contrast with the Sun, which is a single star, and attempts to find a faint stellar companion orbiting it rendered no results thus far \\citep[e.g.,][]{2014ApJ...781....4L}. Many studies avoid contamination by binaries in their samples, the main reasons being because we do not understand well how binaries evolve and how the presence of a companion affects the primary star. 
However, with the development of instruments with higher spatial and spectral resolution and coronagraphs, it is now possible to better probe the secondary components of such systems.\n\n\\defcitealias{2014A&A...572A..48R}{Paper~I}\n\\defcitealias{2015A&A...581A..34B}{Paper~II}\n\\defcitealias{2016A&A...590A..32T}{Paper~III} \\defcitealias{2016A&A...592A.156D}{Paper~IV}\n\\defcitealias{2017A&A...597A..34M}{Paper~V}\n\nWe have been carrying out a radial velocity planet search focused on solar twins using HARPS \\citep[][hereafter Papers I, II, III, IV and V, respectively]{2014A&A...572A..48R, 2015A&A...581A..34B, 2016A&A...590A..32T, 2016A&A...592A.156D, 2017A&A...597A..34M}. The definition of solar twin we use is a star with stellar parameters inside the ranges $5777 \\pm 100$ K, $4.44 \\pm 0.10$ dex(cgs) and $0.0 \\pm 0.1$ dex, respectively, for $T_{\\mathrm{eff}}$, $\\log{g}$ and [Fe\/H]. In total, 81 solar twins\\footnote{\\footnotesize{Some of the stars in our sample do not fit the strict definition of solar twins because one or more parameters fall outside the definition intervals, but they are still very close solar analogues.}} were observed with HARPS. As part of our survey we previously identified 16 clear spectroscopic binaries (SB) \\citepalias{2016A&A...592A.156D}. We report here the identification of four additional SBs (HIP 14501, HIP 18844, HIP 65708 and HIP 83276) and the withdrawal of HIP 43297 and HIP 64673, which are unlikely to host stellar-mass companions, bringing the number of solar twin SBs to 18. Most of these SBs are single-lined -- their spectra do not show lines from a second component -- meaning that their companions are either faint stars or located outside the $\\sim$$1\\arcsec$ aperture of the HARPS spectrograph. We confirm that there are three solar twins with spectra contaminated by a relatively bright companion (see discussion in Section \\ref{peculiar}). 
In our sample there are an additional 18 visual binaries\\footnote{\\footnotesize{We define as visual companions those with separations larger than $1\\arcsec$.}} or multiple systems, of which HIP 6407 and HIP 18844 have wide companions \\citep[see table 5 in][]{2014AJ....147...86T} as well as the spectroscopic companions reported here.\n\nIn \\citetalias{2016A&A...592A.156D} we saw that the single or visual binary solar twins display a rotational evolution that can be described by a relation between stellar age $t$ and rotational velocity $v_{\\mathrm{rot}}$ in the form of a power law plus a constant: $v_{\\mathrm{rot}} = v_{\\mathrm{f}} + m\\ t^{-b}$, where $v_{\\mathrm{f}}$, $m$ and $b$ are free parameters to be fit to observations. This relation is explained by loss of angular momentum due to magnetized winds \\citep[e.g.,][]{1984RSPTA.313...19M, 1992ASPC...26..416C, 2003ApJ...586..464B, 2013A&A...556A..36G}, and the index $b$ reflects the geometry of the stellar magnetic field \\citep{1988ApJ...333..236K}. There are at least two solar twin binaries that display rotational velocities -- more than $2\\sigma$ above the expected values -- and activity levels enhanced for their ages: HIP 19911 and HIP 67620; if we consider the revised age for HIP 103983 (Spina et al., in preparation), it can also be considered a fast rotator for its age.\n\nBesides the enhanced rotational velocities and higher chromospheric activity (\\citetalias{2014A&A...572A..48R}; \\citealt{F16sub}), some of these binaries also display peculiar chemical abundances (\\citetalias{2016A&A...590A..32T}; \\citealt{2016A&A...587A..46D}). As pointed out by \\citeauthor{2016A&A...587A..46D}, the ultra-depletion of beryllium, which is observed in HIP 64150, could be explained by the interaction of the main star with the progenitor of the white dwarf companion. In addition to HIP 64150, the confirmed binaries HIP 19911 and HIP 67620 also display clearly enhanced $\\mathrm{[Y\/Mg]}$ abundances \\citepalias[see fig. 
9 in][and the discussion in Section \\ref{peculiar} of this paper]{2016A&A...590A..32T}.\n\nOne interesting aspect of stars with enhanced activity and rotation is that these characteristics were hypothesized to be the result of dynamo action from close-in giant planets \\citep[see][and references therein]{2016ApJ...830L...7K}. In fact, some of our early results pointed out that the star HIP 68468, for which we inferred two exoplanet candidates \\citepalias{2017A&A...597A..34M}, had an enhanced rotational velocity when compared to other solar twins of the same age. However, a more careful analysis showed that the enhancement was instead a contribution from macroturbulence. Another possible explanation for these enhancements is magnetic interaction with either a close-in or an eccentric giant planet \\citep{2000ApJ...533L.151C}, but recent results obtained by, e.g., \\citet{2016A&A...592A.143F} and \\citet{2017MNRAS.465.2734M} show that such interactions cannot explain these anomalies.\n\n\\defcitealias{2010exop.book...15M}{MC10}\n\nIn light of these intriguing results, we sought to better understand the nature of these solar twin multiple systems by studying their orbital parameters, and to use them to search for explanations of the observed anomalies, especially stellar rotation. 
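As an aside, the rotational-evolution relation $v_{\mathrm{rot}} = v_{\mathrm{f}} + m\,t^{-b}$ discussed above can be fit with standard least-squares tools. The sketch below is only illustrative: the ages, velocities, noise level and parameter values are invented for the example and are not taken from the survey.

```python
import numpy as np
from scipy.optimize import curve_fit

def v_rot(t, v_f, m, b):
    """Rotational velocity (km/s) as a function of age (Gyr):
    power law plus a constant, v_rot = v_f + m * t**(-b)."""
    return v_f + m * t**(-b)

# Synthetic ages (Gyr) and velocities (km/s), for illustration only
rng = np.random.default_rng(42)
t = np.linspace(0.5, 10.0, 30)
v_true = v_rot(t, 1.2, 1.8, 0.6)                # assumed "true" parameters
v_obs = v_true + rng.normal(0.0, 0.05, t.size)  # add measurement noise

# Fit the three free parameters (v_f, m, b) to the observations
popt, pcov = curve_fit(v_rot, t, v_obs, p0=[1.0, 1.0, 0.5])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on the parameters

# A star rotating more than 2 sigma faster than v_rot(age) would be
# flagged as having an enhanced rotational velocity for its age.
```

A star is then an outlier in the sense used above when its measured velocity exceeds the fitted `v_rot(age)` by more than twice the combined uncertainty.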
The orbital parameters can be estimated from the radial velocity data of the stars \\citep[see, e.g.,][hereafter MC10]{2010exop.book...15M}, with the quality of the results depending strongly on the time coverage of the data.\n\n\\section{Radial velocities}\n\nOur HARPS data for the solar twins\\footnote{\\footnotesize{Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programs 188.C-0265, 183.D-0729, 292.C-5004, 077.C-0364, 072.C-0488, 092.C-0721, 093.C-0409, 183.C-0972, 192.C-0852, 091.C-0936, 089.C-0732, 091.C-0034, 076.C-0155, 185.D-0056, 074.C-0364, 075.C-0332, 089.C-0415, 60.A-9036, 075.C-0202, 192.C-0224, 090.C-0421 and 088.C-0323.}} are completely described in \\citetalias{2014A&A...572A..48R}. Their radial velocities (RV) are automatically measured by the HARPS Data Reduction Software \\textbf{(see Table \\ref{HARPS_rvs})}, and the noise limit of the instrument generally remains around 1 m s$^{-1}$. In order to broaden the time coverage of our RV data, we also obtained additional datasets available in the literature and public databases, including the HARPS archival data from other programs.\n\nThe mass and other stellar parameters of the solar twins were estimated with high precision using the combined HARPS spectra and differential analysis, owing to their similarity with the Sun \\citep[see, e.g.,][]{2014ApJ...795...23B, 2010A&A...519A..87B, 2016A&A...589A..17Y}. The ages for the solar twins were obtained using Yonsei-Yale isochrones \\citep{2001ApJS..136..417Y} and probability distribution functions as described in \\citet{2013ApJ...764...78R} and in \\citetalias{2014A&A...572A..48R}. The full description and discussion of the stellar parameters of the HARPS sample will be presented in a forthcoming publication (Spina et al., in preparation).\n\nThe additional radial velocity data obtained from online databases and the literature are summarized in Table \\ref{add_rvs}. 
These are necessary to increase the time span of the observations to include as many orbital phases as possible at the cost of additional parameters to optimize for (see Section \\ref{short}).\n\n\\begin{table}\n\\begin{center}\n\\caption{HARPS radial velocities for stars in the Solar Twin Planet Search program. This table is presented for guidance regarding the form and content of the online supplementary data, which is available in its entirety in machine-readable format.}\n\\begin{tabular}{ccc}\n\\toprule[\\heavyrulewidth]\nJulian Date & RV & $\\sigma_{\\mathrm{RV}}$ \\\\\n(d) & (km s$^{-1}$) & (km s$^{-1}$) \\\\\n\\bottomrule[\\heavyrulewidth]\n\\multicolumn{3}{c}{HIP 6407} \\\\\n2455846.750257 & 6.816873 & 0.001020 \\\\\n2455850.716200 & 6.811186 & 0.000997 \\\\\n2455851.710847 & 6.806352 & 0.000886 \\\\\n2455852.703837 & 6.801079 & 0.001140 \\\\\n2456164.849296 & 6.204053 & 0.001059 \\\\\n2456165.853256 & 6.203359 & 0.001049 \\\\\n2456298.564449 & 5.954045 & 0.001029 \\\\\n\\hline\n\\multicolumn{3}{c}{HIP 14501} \\\\\n2452937.683821 & 7.024814 & 0.000475 \\\\\n2452940.727854 & 7.025289 & 0.000516 \\\\\n2453001.575609 & 7.026322 & 0.000658 \\\\\n2453946.941856 & 7.023707 & 0.000708 \\\\\n\\midrule[\\heavyrulewidth]\n\\label{HARPS_rvs}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table*}\n\\begin{center}\n\\caption{Additional radial velocities from other programs and instruments.}\n\\begin{tabular}{lll}\n\\toprule[\\heavyrulewidth]\nInstrument\/Program & References & Data available for (HIP numbers) \\\\\n\\bottomrule[\\heavyrulewidth]\n CfA Digital Speedometers & \\citet{2002AJ....124.1144L} & 65708 \\\\\n ELODIE\\textsuperscript{a} & \\citet{1996AandAS..119..373B, 2004PASP..116..693M} & 43297, 54582, 62039, 64150, 72043, 87769 \\\\\n SOPHIE\\textsuperscript{b} & \\citet{2011SPIE.8151E..15P} & 6407, 43297, 54582, 62039, 64150, 87769 \\\\\n Lick Planet Search & \\citet{2014ApJS..210....5F} & 54582, 65708 \\\\\n AAT Planet Search & 
\\citet{2015MNRAS.453.1439J} & 18844, 67620, 73241, 79578, 81746 \\\\\n Various & \\citet{2016AJ....152...46W} & 67620 \\\\\n HIRES\/Keck RV Survey\\textsuperscript{c} & \\citet{2017AJ....153..208B} & 14501, 19911, 62039, 64150, 72043, 103983 \\\\\n\\bottomrule[\\heavyrulewidth]\n\\multicolumn{3}{l}{\\textsuperscript{a}\\footnotesize{Available at \\url{http:\/\/atlas.obs-hp.fr\/elodie\/}.}}\\\\\n\\multicolumn{3}{l}{\\textsuperscript{b}\\footnotesize{Available at \\url{http:\/\/www.obs-hp.fr\/guide\/sophie\/data_products.shtml}.}}\\\\\n\\multicolumn{3}{l}{\\textsuperscript{c}\\footnotesize{Available at \\url{http:\/\/home.dtm.ciw.edu\/ebps\/data\/}.}}\\\\\n\\label{add_rvs}\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\section{Methods}\n\nThe variation of the radial velocities of a star in a binary or multiple system stems from the gravitational interaction between the observed star and its companions. For systems with stellar or substellar masses, the variation of radial velocities can be completely explained by Kepler's laws of motion. For the sake of consistency, we will use here the definitions of orbital parameters as presented in \\citetalias{2010exop.book...15M}.\n\nTo completely characterize the orbital motion of a binary system from the measured radial velocities of the main star, we need to obtain the following parameters: the semi-amplitude of the radial velocities $K$, the orbital period $T$, the time of periastron passage $t_0$, the argument of periapse $\\omega$ and the eccentricity $e$ of the orbit. In order to estimate the minimum mass $m \\sin{i}$ of the companion and the semi-major axis $a$ of the orbit, we also need to know the mass $M$ of the main star.\n\nBecause they cannot be negative, the parameters $K$ and $T$ are usually estimated on a logarithmic scale, which eliminates the need for search bounds. Additionally, for orbits that are approximately circular, the value of $\\omega$ may become poorly defined. 
In these cases, a change of parametrization may be necessary to better constrain them. \\citet{2013PASP..125...83E}, for instance, suggest using $\\sqrt{e} \\cos{\\omega}$ and $\\sqrt{e} \\sin{\\omega}$ (which we refer to as the EXOFAST parametrization) instead of $\\omega$ and $e$ to circumvent this problem, which can also improve convergence.\n\nOne issue that affects the radial velocity method is contamination by stellar activity \\citep[see, e.g.,][]{2016MNRAS.457.3637H}. This activity distorts the spectral lines \\citep{2005oasp.book.....G}, which in turn produces artificial RV variations that can mimic the presence of a massive companion orbiting the star. More active stars are expected to have RV variations with larger amplitudes and a shorter activity cycle period \\citep{2011arXiv1107.5325L}. For most binaries in our sample, the contamination by activity in the estimation of orbital parameters is negligible; the cases where this is not applicable are discussed in detail.\n\n\\subsection{Binaries with well-sampled orbits}\\label{short}\n\nFor binaries with orbital periods $T \\lesssim 15$ yr, there are usually enough RV measurements to cover a complete orbital phase. 
In these cases, the natural logarithm of the likelihood of observing radial velocities $\\mathbf{y}$ on a specific instrument, given the Julian dates $\\mathbf{x}$ of the observations, their uncertainties $\\mathbf{\\sigma}$ and the orbital parameters $\\mathbf{p}_{\\mathrm{orb}}$, is defined as:\n\n\\begin{equation}\n \\ln{p \\left(\\mathbf{y} \\mid \\mathbf{x},\\mathbf{\\sigma}, \\mathbf{p}_{\\mathrm{orb}} \\right)} = -\\frac{1}{2} \\sum_n \\left[ \\frac{ \\left(y_\\mathrm{n} - y_{\\mathrm{model}} \\right)^2}{\\sigma_\\mathrm{n}^2} + \\ln{ \\left(2 \\pi \\sigma_\\mathrm{n}^2 \\right)} \\right] \\mathrm{,}\n\\label{likelihood}\n\\end{equation}where $y_\\mathrm{n}$ are the RV datapoints, $y_{\\mathrm{model}}$ are the model RV points for a given set of orbital parameters, and $\\sigma_\\mathrm{n}$ are the RV point-by-point uncertainties. The RV models are computed from Eq. 65 in \\citetalias{2010exop.book...15M}:\n\n\\begin{equation}\n v_\\mathrm{r} = \\gamma + K (\\cos{(\\omega + f)} + e \\cos{\\omega}) \\mathrm{,}\n\\end{equation}where $f$ is the true anomaly, and $\\gamma$ is the systemic velocity (usually including the instrumental offset). The true anomaly depends on $e$ and the eccentric anomaly $E$:\n\n\\begin{equation}\n e \\cos{f} = \\frac{1 - e^2}{1 - e \\cos{E}} - 1 \\mathrm{;}\n\\end{equation}the eccentric anomaly, in turn, depends on $T$, $t_0$ and time $t$ through the so-called Kepler's equation:\n\n\\begin{equation}\n E - e \\sin{E} = \\frac{2 \\pi}{T} \\left(t - t_0 \\right) \\mathrm{.}\n\\end{equation}\nEq.~\\ref{likelihood} is maximized (in practice, its negative is minimized) using the Nelder-Mead algorithm implementation from \\texttt{lmfit} \\citep[][version 0.9.5]{2016ascl.soft06014N} to obtain the best-fit orbital parameters for the observed data. 
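As a concrete illustration, the RV model above can be evaluated by solving Kepler's equation numerically. The sketch below uses a Newton-Raphson iteration, which is one common choice; it is not the actual implementation used in \texttt{radial}, and all function names are our own.

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e sin E = M for the eccentric anomaly E
    by Newton-Raphson iteration (M is the mean anomaly, in radians)."""
    E = np.where(e < 0.8, M, np.pi + 0.0 * M)  # standard starting guess
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def rv_model(t, gamma, K, T, t0, omega, e):
    """Radial velocity v_r = gamma + K [cos(omega + f) + e cos(omega)],
    with the true anomaly f computed from the eccentric anomaly E."""
    M = 2.0 * np.pi * (t - t0) / T  # right-hand side of Kepler's equation
    E = kepler_E(np.atleast_1d(M), e)
    # half-angle relation between true and eccentric anomaly
    f = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                         np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return gamma + K * (np.cos(omega + f) + e * np.cos(omega))
```

A useful sanity check before handing such a model to a Nelder-Mead or MCMC fit: for a circular orbit ($e = 0$) the model reduces to $\gamma + K \cos(\omega + 2\pi(t - t_0)/T)$.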
Because different instruments have different instrumental offsets, the use of additional RV data from other programs requires the estimation of an extra value of $\\gamma$ for each instrument.\n\nThe uncertainties of the orbital parameters are estimated using \\texttt{emcee}, an implementation of the Affine Invariant Markov chain Monte Carlo Ensemble sampler \\citep[][version 2.2.1]{2013PASP..125..306F}, using flat priors for all parameters in both the \\citetalias{2010exop.book...15M} and EXOFAST parametrizations. These routines were implemented in the Python package \\texttt{radial}\\footnote{\\footnotesize{Available at \\url{https:\/\/github.com\/RogueAstro\/radial}.}}, which is openly available online. The uncertainties in $m \\sin{i}$ and $a$ quoted in our results already take into account the uncertainties in the stellar masses of the solar twins.\n\n\\subsection{Binaries with partial orbits}\\label{methods_long_period}\n\nFor binary systems with long periods (typically 20 years or more), it is possible that the time span of the observations does not allow for full coverage of even one phase of the orbital motion. In these cases, the estimation of the orbital parameters renders a number of possible solutions, which precludes us from firmly constraining the configuration of the system. Nevertheless, RV data containing a curvature or one inflection allow us to place lower limits on $K$, $T$ and $m \\sin{i}$, whilst leaving $e$ and $\\omega$ completely unconstrained. When the RV data are limited but comprise two inflections, it may be possible to use the methods from Section \\ref{short} to constrain the orbital parameters, albeit with large uncertainties.\n\nFor stars with very long orbital periods ($T \\gtrsim 100$ yr), the variation of radial velocities may be present in the form of a simple linear trend. 
In these cases, it is still possible to obtain an estimate of the mass of the companion -- a valuable piece of information about it: \\citet{1999PASP..111..169T} describes a statistical approach to extract the sub-stellar companion mass when the only information available from the radial velocities is the slope of the linear trend, provided information about the angular separation of the system is also available. In this approach, we need to adopt reasonable prior probability density functions (PDF) for the eccentricity $e$, the longitude of periastron $\\varpi$, the phase $\\phi$ and the inclination $i$ of the orbital plane. As in \\citeauthor{1999PASP..111..169T}, we adopt the following PDFs: $p(i) = \\sin{i}$, $p(e) = 2e$ and flat distributions for $\\varpi$ and $\\phi$.\n\nWe sample the PDFs using \\texttt{emcee} with 20 walkers and 10000 steps; the first 500 burn-in steps are discarded. From these samples, we compute the corresponding companion masses and their posterior distribution. This posterior usually displays a very strong peak and long tails towards low and high masses, which can be attributed to highly unlikely orbital parameters (see Fig. \\ref{64150_pdf} for an example). In our results, we consider that the best estimates for the companion masses are given by the central bin of the highest peak of the distribution in a histogram with log-space bin widths of about 0.145 dex(M$_\\odot$).\n\nWhen no adaptive optics (AO) imaging data are available for the stars with a linear trend in their RVs, the most conservative approach is to provide the minimum mass for the putative companion. 
In the case of a linear trend, the lowest mass is produced when $e = 0.5$, $\\omega = \\pi\/2$ and $\\sin{i} = 1$ \\citep{2015ApJ...800...22F}, yielding\n\n\\begin{equation}\\label{feng_eq}\n m_{\\mathrm{min}} \\approx \\left( 0.0164\\ \\mathrm{M}_{\\mathrm{Jup}}\\right) \\left( \\frac{\\tau}{\\mathrm{yr}} \\right)^{4\/3} \\left| \\frac{dv\/dt}{\\mathrm{m\\ s}^{-1}\\ \\mathrm{yr}^{-1}} \\right| \\left( \\frac{M}{\\mathrm{M}_\\odot} \\right)^{2\/3} \\mathrm{,}\n\\end{equation}where $\\tau$ is 1.25 multiplied by the time span of the radial velocities and $dv\/dt$ is the slope of the linear trend.\n\n\\section{Results}\n\nWe discovered new, short-period companions for the stars HIP 6407 and HIP 30037 (see Fig. \\ref{orbits_new}) and new long-period companions for HIP 54582 and HIP 62039, and updated or reproduced the parameters of several other known binaries that were observed in our program (see Figs. \\ref{orbits_updated} and \\ref{long_period_rvs}). We briefly discuss below each star, pointing out the most interesting results, inconsistencies and questions that are still open about each of them. The orbital parameters of the binaries with well-sampled orbits in their RV data are presented in Table \\ref{short_params} and the systems with partial orbits are reported in Tables \\ref{curvature_results} and \\ref{linear_trend_results}.\n\n\\subsection{Withdrawn binary candidates}\n\nIn \\citetalias{2016A&A...592A.156D}, we showed that HIP 43297 had a rotational velocity $v \\sin{i}$ higher than expected for its age. Moreover, its radial velocities had variations that hinted at one or more companions orbiting it. We carefully analyzed the RVs and concluded that the periodic ($T = 3.8$ yr) signal observed is highly correlated (Pearson $R = 0.893$) with the activity \\textit{S}-index of the star \\citep{F16sub}. 
In addition, we tentatively fitted a linear trend to the combined RVs from HARPS, ELODIE and SOPHIE, and obtained a slope of $4.53 \\pm 0.04$ m s$^{-1}$ yr$^{-1}$, but further monitoring of the system is required to infer the presence of a long-period spectroscopic companion. The revised stellar age for HIP 43297 yields $1.85 \\pm 0.50$ Gyr (Spina et al., in preparation), which explains the high rotational velocity and activity.\n\nThe solar twin HIP 64673 displays significant fluctuations in its radial velocities, but they do not correlate with its activity index; the data cover approximately 5 years of RV monitoring and display an amplitude $> 20$ m s$^{-1}$. If confirmed to be caused by massive companions, the RV variations of both HIP 43297 and HIP 64673 suggest substellar masses for the most likely orbital configurations. These stars are, thus, removed from the binaries sample of the Solar Twin Planet Search program.\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.47\\textwidth]{images\/6407_rvs_lmfit.pdf} & \\includegraphics[width=0.47\\textwidth]{images\/30037_rvs_lmfit.pdf} \\\\\n\\end{tabular}\n\\caption{The radial velocities and orbital solutions of the newly discovered companions for solar twins with short orbital periods.}\n\\label{orbits_new}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.45\\textwidth]{images\/19911_rvs_folded.pdf} & \\includegraphics[width=0.45\\textwidth]{images\/65708_rvs_lmfit.pdf} \\\\\n\\includegraphics[width=0.45\\textwidth]{images\/67620_rvs_folded.pdf} & \\includegraphics[width=0.45\\textwidth]{images\/79578_rvs_lmfit.pdf} \\\\\n\\includegraphics[width=0.45\\textwidth]{images\/81746_rvs_lmfit.pdf} & \\includegraphics[width=0.45\\textwidth]{images\/103983_rvs_lmfit.pdf} \\\\\n\\end{tabular}\n\\caption{The radial velocities and updated orbital solutions of the known solar twin binaries with at least one complete or near-complete 
orbital phase. The acronyms CfADS, W16 and AATPS correspond to data from, respectively, the Center for Astrophysics Digital Speedometers, \\citet{2016AJ....152...46W} and the Anglo-Australian Telescope Planet Search program.}\n\\label{orbits_updated}\n\\end{figure*}\n\n\\begin{table*}\n\\begin{center}\n\\caption{Orbital parameters of the binaries with complete or near-complete orbital phases in their RV data.}\n\\begin{tabular}{llccccccc}\n\\toprule[\\heavyrulewidth]\n\\multirow{2}{*}{HIP} & \\multirow{2}{*}{HD} & $K$ & $T$ & $t_0$ & $\\omega$ & $e$ & $m\\sin{i}$ & $a$ \\\\\n& & (km s$^{-1}$) & (days) & (JD-$2.45E6$ days) & ($^o$) & & (M$_\\odot$) & (AU) \\\\\n\\bottomrule[\\heavyrulewidth]\n\\multirow{2}{*}{6407\\textsuperscript{$\\dag$}} & \\multirow{2}{*}{8291} & $2.614$ & $1852.3$ & $5076.7$ & $-57.1$ & $0.682$ & $0.119$ & $3.070$ \\\\\n& & $\\pm 0.084$ & $^{+3.3}_{-3.1}$ & $^{+1.1}_{-1.3}$ & $\\pm 0.9$ & $^{+0.009}_{-0.010}$ & $\\pm 0.002$ & $\\pm 0.005$ \\\\\n\\hline\n\\multirow{2}{*}{19911} & \\multirow{2}{*}{26990} & $7.707$ & $2074.15$ & $4627.4$ & $40.92$ & $0.8188$ & $0.313$ & $6.16$ \\\\\n& & $\\pm 0.007$ & $\\pm 0.09$ & $\\pm 0.1$ & $\\pm 0.06$ & $\\pm 0.0003$ & $\\pm 0.002$ & $\\pm 0.02$ \\\\\n\\hline\n\\multirow{2}{*}{30037} & \\multirow{2}{*}{45021} & $4.246$ & $31.61112$ & $5999.413$ & $-133.60$ & $0.30205$ & $0.0610$ & $0.1971$ \\\\\n& & $\\pm 0.003$ & $\\pm 0.00006$ & $\\pm 0.001$ & $\\pm 0.02$ & $\\pm 0.00008$ & $\\pm 0.0002$ & $\\pm 0.0003$ \\\\\n\\hline\n\\multirow{2}{*}{65708} & \\multirow{2}{*}{117126} & $5.754$ & $207.273$ & $-3675.8$ & $-137.9$ & $0.311$ & $0.170$ & $0.851$ \\\\\n& & $\\pm 0.007$ & $\\pm 0.004$ & $\\pm 0.3$ & $\\pm 0.2$ & $\\pm 0.002$ & $\\pm 0.001$ & $\\pm 0.001$ \\\\\n\\hline\n\\multirow{2}{*}{67620} & \\multirow{2}{*}{120690} & $6.311$ & $3803.3$ & $4945.7$ & $145.10$ & $0.3428$ & $0.578$ & $5.50$ \\\\\n& & $\\pm 0.002$ & $\\pm 0.4$ & $\\pm 0.7$ & $\\pm 0.07$ & $\\pm 0.0002$ & $\\pm 0.002$ & $\\pm 0.01$ 
\\\\\n\\hline\n\\multirow{2}{*}{79578} & \\multirow{2}{*}{145825} & $1.125$ & $6681.8$ & $879.2$ & $138.9$ & $0.3322$ & $0.1014$ & $7.216$ \\\\\n& & $\\pm 0.007$ & $\\pm 1.5$ & $\\pm 1.9$ & $\\pm 0.1$ & $\\pm 0.0003$ & $\\pm 0.0002$ & $\\pm 0.007$ \\\\\n\\hline\n\\multirow{2}{*}{81746} & \\multirow{2}{*}{150248} & $1.987$ & $3246.5$ & $5623.8$ & $-2.49$ & $0.6644$ & $0.1079$ & $4.387$ \\\\\n& & $\\pm 0.001$ & $\\pm 0.7$ & $\\pm 0.6$ & $\\pm 0.06$ & $\\pm 0.0005$ & $\\pm 0.0002$ & $\\pm 0.003$ \\\\\n\\hline\n\\multirow{2}{*}{103983} & \\multirow{2}{*}{200565} & $2.100$ & $10278$ & $6659$ & $-51.6$ & $0.50$ & $0.210$ & $9.8$ \\\\\n& & $\\pm 0.018$ & $^{+274}_{-247}$ & $\\pm 11$ & $\\pm 1.5$ & $\\pm 0.01$ & $\\pm 0.005$ & $\\pm 0.2$ \\\\\n\\midrule[\\heavyrulewidth]\n\\multicolumn{9}{l}{\\textsuperscript{$\\dag$}\\footnotesize{Triple or higher-order system. The orbital parameters correspond to the closer-in companion.}}\\\\\n\\end{tabular}\n\\label{short_params}\n\\end{center}\n\\end{table*}\n\n\\subsection{Solar twins with new companions}\n\n\\textbf{HIP 6407:} This is a known binary system located 58 pc away from the solar system \\citep{2007A&A...474..653V}, possessing a very low-mass (0.073 M$_\\odot$) L2-type companion separated by $44.8\\arcsec$ (2222 AU), as reported by \\citet[][and references therein]{2015ApJ...802...37B}. In this study, we report the detection of a new close-in low-mass companion with $m\\sin{i} = 0.12$ M$_\\odot$ on a very eccentric orbit ($e = 0.67$) with $a = 3$ AU and an orbital period of approximately 5 years. As expected, the long-period companion does not appear in the RV data as a linear trend.\n\n\\textbf{HIP 30037:} The most compact binary system in our sample, hosting a brown dwarf companion orbiting the main star with a period of 31 days. The high precision of its parameters owes to the wide time span of observations, which covered several orbits. 
This is one of the first detections of a close-in brown dwarf orbiting a confirmed solar twin\\footnote{\\footnotesize{There are at least 4 solar twin candidates with a close-in brown dwarf companion listed in table A.1 in \\citet{2016A&A...588A.144W}.}}. HIP 30037 is a very quiet star, displaying no excessive jitter noise in its radial velocities. We ran stellar evolution models with \\texttt{MESA}\\footnote{\\footnotesize{Modules for Experiments in Stellar Astrophysics, available at \\url{http:\/\/mesa.sourceforge.net}}} \\citep{2011ApJS..192....3P, 2015ApJS..220...15P} to test whether tidal acceleration caused by the companion on such a tight orbit could affect the primary, and found that, for the mass and period of the companion, we should expect no influence on the rotational velocity.\n\n\\textbf{HIP 54582:} \\textit{RV Curvature only}. There are no reports of binarity in the literature. The slight curvature in the RVs of this star is only visible when we combine the HARPS data and the Lick Planet Search archival data. Owing to the absence of an inflection point, the orbital parameters of this system are highly unconstrained. We found that an orbit with $e \\approx 0.2$ produces the least massive companion and the shortest orbital period ($m \\sin{i} = 0.03$ M$_\\odot$ and $T = 102$ yr).\n\n\\textbf{HIP 62039:} \\textit{Linear trend}. There are no reports of visually detected close-in ($\\rho < 2\\arcsec$) companions around it. This can be attributed to: i) a low-luminosity companion, which is possible if it is a white dwarf or a giant planet, or ii) an unfavorable longitude of periapse during the observation windows. By using Eq. 
\\ref{feng_eq}, we estimate that the minimum mass of the companion is 19 M$_{\\mathrm{Jup}}$.\n\n\\subsection{The peculiar binaries}\\label{peculiar}\n\n\\subsubsection{HIP 19911}\\label{19911_results}\n\nThis is one of the main outlier stars in the overall sample of solar twins with regard to its rotation and activity, which are visibly enhanced for both the previous and revised ages (\\citetalias{2016A&A...592A.156D}; \\citealt{F16sub}; Spina et al., in preparation). For the estimation of orbital parameters reported below, we used only the LCES HIRES\/Keck radial velocities, because there are too few HARPS data points to justify the introduction of an extra source of uncertainties (the HARPS points are, however, plotted in Fig. \\ref{orbits_updated} for reference). When using the HARPS data, although the solution changes slightly, our conclusions about the system remain the same.\n\nThe orbital solution of HIP 19911 renders a 0.31 M$_\\odot$ companion in a highly eccentric orbit ($e = 0.82$, the highest in our sample), with period $T = 5.7$ yr. Visual scrutiny reveals what seems to be another large-amplitude signal in the residuals of this fit ($> 250$ m s$^{-1}$, see Fig. \\ref{orbits_updated}); the periodogram of the residuals shows a very clear peak near the orbital period of the stellar companion.\n\nThe cross-correlation function (CCF) plots for the HARPS spectra of HIP 19911 display a significant asymmetry -- a longer tail on the blue side -- for the observations between October 2011 and February 2012, which suggests that the companion is contaminating the spectra. Upon visual inspection of the archival HIRES spectra\\footnote{\\footnotesize{Available at \\url{http:\/\/nexsci.caltech.edu\/archives\/koa\/}.}} taken on 17 January 2014, which is when we expect the largest RV difference between the main star and its companion, we saw a clear contamination of the spectrum by the companion (see Fig. \\ref{SBII}). 
This contamination could explain the large residuals of the orbital solution, as it introduces noise to the measured radial velocities. The double lines also explain the high rotational velocity inferred for HIP 19911, since they introduce extra broadening to the spectral lines used to measure rotation. The presence of a bright companion may also affect estimates of chemical abundances, which could explain the yttrium abundance anomaly \\citepalias{2016A&A...590A..32T}. The double-lined nature of this system is not observed in the HARPS spectra due to an unfavorable observation window.\n\nEven at the largest RV separation, we did not detect the Li I line at 6707.75 \\AA\\ in the HIRES spectrum of the companion. This is expected because M dwarf stars have deeper convection zones, which means they deplete lithium much faster than Sun-like stars. This leads us to conclude that estimates of Li abundance in solar twin binaries using this line do not suffer from strong contamination by their companions; consequently, age estimates based on lithium abundances may be more reliable for such binaries than isochronal or gyrochronological ages.\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.47\\textwidth]{images\/19911_SBII.pdf} & \\includegraphics[width=0.47\\textwidth]{images\/103983_SBII.pdf} \\\\\n\\end{tabular}\n\\caption{The spectra of HIP 19911 and HIP 103983 are contaminated by bright companions. The double-lined nature of these stars is clear only at the maximum RV separation (solid purple curves). The dashed green curves show spectra during minimum RV separation, where there is no obvious contamination. 
The spectra in this figure are not Doppler-corrected for their systemic velocities or instrumental RV offset.}\n\\label{SBII}\n\\end{figure*}\n\n\\defcitealias{2014AJ....147...86T}{T14}\n\nAnother observation conundrum for this system is that \\citet{2015ApJ...799....4R}, using AO imaging without a coronagraph, reports the detection of a visual companion with orbital period $\\sim 12.4$ yr (roughly twice the one we estimated), a lower eccentricity ($e = 0.1677$) and similar semi-major axis of the orbit ($a = 6.17$ AU, if we consider a distance of 30.6 pc). Moreover, \\citet[][hereafter T14]{2014AJ....147...86T} reports that this visual companion has $m = 0.85$ M$_\\odot$. The most likely explanation is that the observations of \\citeauthor{2015ApJ...799....4R} did in fact detect the spectroscopic companion, but the coarse timing of the observations produced a larger period; the lower eccentricity could be explained by a strong covariance between $e$ and the inclination $i$. If $i$ is lower, that means the mass of the companion is significantly higher than $m \\sin{i} = 0.316$ M$_\\odot$, and that would explain the value obtained by \\citetalias{2014AJ....147...86T}. A companion with a mass as large as 0.85 M$_\\odot$ would likely pollute the spectra of HIP 19911, which agrees with our observation that this is a SB II system. If confirmed, this prominent $\\sim$0.85 M$_\\odot$ red dwarf companion could explain the observed activity levels for HIP 19911, since red dwarf stars are expected to be more active than Sun-like stars.\n\n\\defcitealias{2014ApJ...783L..25M}{M14}\n\n\\subsubsection{HIP 67620}\\label{67620_results}\n\nThis is a well-known binary and the target with the largest amount of RV data available (see Fig. \\ref{orbits_updated}). Its orbital parameters have been previously determined by \\citet{2006ApJS..162..207A} and more recently by \\citet{2015MNRAS.453.1439J} and \\citet{2016AJ....152...46W}. 
The orbital parameters we obtained are in good agreement with \\citet{2016AJ....152...46W}. It has one of the most peculiar rotation rates in our sample (2.77 km s$^{-1}$ for an age of 7.18 Gyr), an enhanced chromospheric activity \\citep{F16sub} and an anomalous [Y\/Mg] abundance \\citepalias{2016A&A...590A..32T}. The orbital period of the system is far too long for gravitational interaction to enhance the rotation of the main star through tidal acceleration; we should thus expect the system to evolve similarly to single stars from this point of view.\n\nHigh-resolution imaging of HIP 67620 revealed a companion with $V_{\\mathrm{mag}} \\approx 10$ and separations which are consistent with the spectroscopic companion (\\citealt{2012AJ....143...42H}; \\citetalias{2014AJ....147...86T}). As explained by \\citet{2015MNRAS.453.1439J}, the presence of a companion with $m > 0.55$ M$_\\odot$ can contaminate the spectra, introducing noise into the measured RVs; our estimate of $m \\sin{i}$ for this system is 0.58 M$_\\odot$. These results suggest that, similarly to HIP 19911 but to a lesser degree, the companion of HIP 67620 may be offsetting our estimates for rotational velocity, stellar activity, chemical abundances and isochronal age.\n\nWe were unable to discern double lines in the HARPS spectra, likely resulting from unfavorable Doppler separations (observations range from February 2012 to March 2013). However, an analysis of the CCF of this star shows slight asymmetries in the line profiles of the HARPS spectra, which indicates a possible contamination by the companion. \\citet{2017ApJ...836..139F} reported HIP 67620 as a double-lined binary using spectra taken at high resolution ($R \\approx 60\\,000$) in February 2014 and July 2015. 
As expected due to the short time coverage of the HARPS spectra, we did not see any correlation between the bisector inverse slope \\citep[as defined in][]{2001A&A...379..279Q} and the radial velocities of HIP 67620.\n\n\\citet{2015MNRAS.453.1439J} found an additional signal in the periodogram of HIP 67620 at 532 d, which could be fit with a 1 M$_{\\mathrm{Jup}}$ planet, bringing down the rms of the fit by a factor of 2. However, we did not find any significant peak in the periodogram of the residuals of the radial velocities for HIP 67620.\n\n\\subsubsection{HIP 103983}\\label{103983_results}\n\nThe revised age for HIP 103983 ($4.9 \\pm 0.9$ Gyr; Spina et al., in preparation) renders this system an abnormally fast rotator ($3.38$ km s$^{-1}$) for its age. However, upon a careful inspection of the HARPS data obtained at different dates, we identified that the spectrum from 2015 July 27 displays clearly visible double lines, albeit not as well separated as those observed in the HIRES spectra of HIP 19911 (see Fig. \\ref{SBII}). No other anomalies besides enhanced rotation were inferred for this system. The CCF plots of the HARPS spectra show clearly longer tails towards the blue side for most observations.\n\nIn \\citetalias{2016A&A...592A.156D} we reported distortions in the combined spectra of HIP 103983; this likely results from combining spectra taken at orbital phases in which the Doppler separation between the two components is large. Since the observing windows of the HARPS spectra of HIP 19911 and HIP 67620 do not cover large RV separations (see Fig. \\ref{orbits_updated}), the same effect is not seen in the combined spectra of these stars. This effect also explains why HIP 103983 is an outlier in fig. 4 of \\citetalias{2016A&A...592A.156D}.\n\nAlthough we have limited RV data, the \\texttt{emcee} simulations converge towards a well-defined solution instead of allowing longer periods, as these produce larger residuals. 
It is important, however, to keep monitoring the radial velocities of this system in order to confirm that the most recent data points are in fact a second inflection in the radial velocities. The residuals of the fit to the HIRES spectra are on the order of 100 m s$^{-1}$, which is likely a result of contamination by a bright companion. \\citetalias{2014AJ....147...86T} reported a $0.91$ M$_\\odot$ visual companion at a separation of $0.093 \\arcsec$, which is consistent with the spectroscopic semi-major axis we estimated: $0.149 \\arcsec$ for a distance of 65.7 pc \\citep{2007A&A...474..653V}.\n\n\\subsection{Other binaries with updated orbital parameters}\n\nAmong the known binaries in the solar twins sample, five of them display curvature in their RV data, which allows us to estimate limits on their orbital parameters (see Table \\ref{curvature_results} and Fig. \\ref{long_period_rvs}). Some of the linear trend binaries observed in our HARPS Solar Twin Planet Search program are targets with large potential for follow-up studies. For the companions with visual detection, we were able to estimate their most likely mass (see Table \\ref{linear_trend_results}).\n\n\\textbf{HIP 14501:} \\textit{Linear trend.} Its companion is reported by \\citet{2014ApJ...781...29C} as the first directly imaged T dwarf that produces a measurable Doppler acceleration in the primary star. Using a low-resolution direct spectrum of the companion, \\citet{2015ApJ...798L..43C} estimated a model-dependent mass of 56.7 M$_{\\mathrm{Jup}}$. Using the HARPS and HIRES\/Keck RV data and the observed separation of $1.653\\arcsec$ \\citep{2014ApJ...781...29C}, we found that the most likely value of the companion mass is 0.043 M$_\\odot$ (45 M$_{\\mathrm{Jup}}$), which agrees with the mass obtained by \\citet{2015ApJ...798L..43C}. The most recent HARPS data hint at an inflection point in the orbit of HIP 14501 B (see Fig. 
\\ref{rv_14501}), but further RV monitoring of the system is necessary to confirm it.\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.45\\textwidth]{images\/54102_rvs_lmfit.pdf} & \\includegraphics[width=0.45\\textwidth]{images\/54582_rvs_lmfit.pdf} \\\\\n\\includegraphics[width=0.45\\textwidth]{images\/72043_rvs_lmfit.pdf} & \\includegraphics[width=0.45\\textwidth]{images\/87769_rvs_lmfit.pdf} \\\\\n\\end{tabular}\n\\caption{The radial velocities and the solutions that produced the lower limits on the orbital parameters of the solar twin binaries with RV curvature. A similar plot for HIP 73241 can be found in \\citet{2015MNRAS.453.1439J}. Time is given in $\\mathrm{JD} - 2.45 \\times 10^{6}$ d.}\n\\label{long_period_rvs}\n\\end{figure*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.47\\textwidth]{images\/14501_rvs.pdf}\n\\caption{Radial velocities of HIP 14501. The RV shift in the $y$-axis is arbitrary. Time is given in $\\mathrm{JD} - 2.45 \\times 10^{6}$ d.}\n\\label{rv_14501}\n\\end{figure}\n\n\\textbf{HIP 18844:} \\textit{Linear trend.} It is listed as a multiple system containing a closer-in low-mass stellar companion (estimated 0.06 M$_\\odot$, which agrees with our most likely mass) and an orbital period $T = 6.5$ yr \\citepalias[][and references therein]{2014AJ....147...86T}. For the companion farther away, \\citet{2015MNRAS.453.1439J} reported a minimum orbital period of $\\sim 195$ yr and $m\\sin{i} = 0.33$ M$_\\odot$, with a separation of $29\\arcsec$ in 1941 ($\\sim$$750$ AU for a distance of 26 pc).\n\n\\textbf{HIP 54102:} \\textit{RV curvature only.} It is listed as a proper motion binary by \\citet{2005AJ....129.2420M}, but there is no other information about the companions in the literature. Its eccentricity is completely unconstrained due to lack of RV coverage. 
We estimate that its companion's minimum mass is $12.6$ M$_\\mathrm{Jup}$, with an orbital period larger than 14 years.\n\n\\textbf{HIP 64150:} \\textit{Linear trend.} The most likely companion mass obtained by the method explained in Section \\ref{methods_long_period} is 0.26 M$_\\odot$, as seen in Fig. \\ref{64150_pdf}. The higher mass (0.54 M$_\\odot$) obtained by \\citet{2013ApJ...774....1C} and \\citetalias{2014ApJ...783L..25M} can be attributed to less likely orbital configurations, but it is still inside the 1-$\\sigma$ confidence interval of the RV+imaging mass estimate. The main star displays clear signals of atmospheric pollution caused by mass transfer from its companion during the red giant phase \\citep{2011PASJ...63..697T}, making it the only confirmed blue straggler in our sample. The measured projected separation of the binary system is 18.1 AU \\citepalias{2014ApJ...783L..25M}, which indicates that even for such a wide system the amount of mass transferred is still large enough to produce measurable differences in chemical abundances. It seems, however, that the amount of angular momentum transfer was not enough to produce significant enhancement in the rotation rate and activity of the solar twin. It is also important to note that the isochronal age measured for this system \\citepalias{2016A&A...590A..32T} agrees better with the white dwarf (WD) cooling age estimated by \\citetalias{2014ApJ...783L..25M} than previous estimates did, illustrating the importance of studying these Sirius-like systems to test the various methods of age estimation.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.47\\textwidth]{images\/64150_torres.pdf}\n\\caption{Posterior probability distribution of the companion mass for HIP 64150. 
The mass obtained by \\citet{2014ApJ...783L..25M} using SED fitting to the spectrum of the WD companion is shown as a red vertical line.}\n\\label{64150_pdf}\n\\end{figure}\n\n\\textbf{HIP 65708:} This star has previously been reported as a single-lined spectroscopic binary with an orbital solution \\citep{2002AJ....124.1144L}. Here we update this solution by leveraging the extremely precise radial velocities measured in the Lick Planet Search program and with the HARPS spectrograph. The minimum mass of the companion is 0.167 M$_\\odot$, indicating it is a red dwarf, orbiting at less than 1 AU with a moderate eccentricity of 0.31. Our results agree with the previous orbital solution, which was based solely on data with uncertainties two orders of magnitude higher than the most recent data from HARPS and the Lick Planet Search.\n\n\\textbf{HIP 72043:} \\textit{RV curvature only.} Similarly to HIP 54102, it is listed as a proper motion binary and we could not constrain its eccentricity. A fairly massive ($> 0.5$ M$_\\odot$) companion is inferred at a very large period; this fit suggests that the longitude of periapse of the companion of HIP 72043 is currently at an unfavorable position for visual detection.\n\n\\textbf{HIP 73241:} \\textit{RV curvature only.} The companion's orbit is eccentric enough to allow an estimation of the minimum eccentricity; its companion has previously been confirmed by \\citet{2010ApJS..190....1R} and visually detected by \\citetalias{2014AJ....147...86T} with a separation of $0.318\\arcsec$. In \\citetalias{2016A&A...592A.156D} we listed this star as having an unusually high rotation, but here we revise this conclusion and list HIP 73241 as a candidate peculiar rotator because its $v \\sin{i}$ is less than 2$\\sigma$ above the expected value for its age. 
Similarly to HIP 67620, this peculiarity, if real, could also be explained by contamination by a bright companion, since we determined a minimum companion mass of $m \\sin{i} > 0.49$ M$_\\odot$.\n\n\\textbf{HIP 79578:} The companion is a well-defined 0.10 M$_\\odot$ red dwarf orbiting the main star approximately every 18 years in a fairly eccentric orbit ($e = 0.33$). The orbital parameters we obtained differ from the ones obtained by \\citet{2015MNRAS.453.1439J} by more than $10\\%$, except for the eccentricity; also in contrast, \\citeauthor{2015MNRAS.453.1439J} report it as a brown dwarf companion. The fit for this binary displays residuals of up to 30 m s$^{-1}$ for the AATPS radial velocities, and the periodogram of these residuals shows a peak near a period of 725 days. When we fit an extra object with $m\\sin{i} = 0.70$ M$_{\\mathrm{Jup}}$ at this period ($a = 1.62$ AU and $e = 0.87$), it improves the general fit of the RVs by a factor of 7. It is important to mention, however, that there are only 17 data points for the AATPS dataset, and the HARPS dataset does not display large residuals for a single companion fit. We thus need more observations to securely infer the configuration of this binary system and to determine whether it truly hosts an additional substellar companion at a shorter period.\n\n\\textbf{HIP 81746:} This is another high-eccentricity ($e = 0.7$) binary that does not display clear anomalies in its rotation and activity. Its companion is a 0.1 M$_\\odot$ red dwarf orbiting the main star every 9 years. The orbital parameters we obtained are in good agreement with the ones reported by \\citet{2015MNRAS.453.1439J}.\n\n\\textbf{HIP 83276:} \\textit{RV curvature only.} Although the HARPS radial velocities suggest the presence of a stellar-mass companion, we do not have enough RV data points to infer any information about the orbital parameters of the system. 
Using radial velocities measured with the CORAVEL spectrograph, \\citet{1991A&A...248..485D} found the companion has $m\\sin{i}=0.24$ M$_\\odot$, $e=0.185$ and an orbital period of 386.72 days.\n\n\\textbf{HIP 87769:} \\textit{RV curvature only.} It is reported as a binary system by \\citetalias{2014AJ....147...86T} but, similarly to HIP 54102, lacks an inflection point in its RV data from HARPS, which spans 3.3 yr. There is a wide range of possible orbital solutions that suggest $m \\sin{i}$ varying from brown dwarf masses to $\\sim 1$ M$_\\odot$. Higher eccentricities ($e > 0.8$) can be ruled out as unlikely because they suggest a companion with $m \\sin{i} \\approx 1$ M$_\\odot$ at an orbital period of more than 500 yr and $a > 80$ AU.\n\n\\begin{table}\n\\begin{center}\n\\caption{Lower limits of the orbital parameters of the spectroscopic binaries with curvature in their RV data.}\n\\begin{tabular}{llcccc}\n\\toprule[\\heavyrulewidth]\n\\multirow{2}{*}{HIP} & \\multirow{2}{*}{HD} & $K$ & $T$ & $m \\sin{i}$ & $e$\\\\\n & & (km s$^{-1}$) & (yr) & (M$_\\odot$) & \\\\\n\\bottomrule[\\heavyrulewidth]\n 54102 & 96116 & $> 0.182$ & $> 14$ & $> 0.012$ & $\\dots$ \\\\\n 54582 & 97037 & $> 0.193$ & $> 102$ & $> 0.03$ & $\\dots$ \\\\\n 72043 & 129814 & $> 2.11$ & $> 104$ & $> 0.40$ & $\\dots$ \\\\\n 73241 & 131923 & $> 5.93$ & $> 21.0$ & $> 0.49$ & $> 0.72$ \\\\\n 87769 & 163441 & $> 1.90$ & $> 81.5$ & $> 0.30$ & $\\dots$ \\\\\n\\bottomrule[\\heavyrulewidth]\n\\label{curvature_results}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}\n\\begin{center}\n\\caption{Measured RV slopes of the linear trend binaries. The most likely mass for the spectroscopic companion is estimated when their separation is available. Otherwise a minimum mass is provided.}\n\\begin{tabular}{llcccc}\n\\toprule[\\heavyrulewidth]\n\\multirow{2}{*}{HIP} & \\multirow{2}{*}{HD} & $dv_r \/ dt$ & $\\rho$ & Dist. 
& $m$ \\\\\n & & (m s$^{-1}$ yr$^{-1}$) & (arcsec) & (pc)\\textsuperscript{d} & (M$_{\\mathrm{Jup}}$) \\\\\n\\bottomrule[\\heavyrulewidth]\n14501 & 19467 & $-1.30 \\pm 0.01$ & 1.653\\textsuperscript{a} & 30.86 & 45 \\\\\n18844\\textsuperscript{$\\dag$} & 25874 & $424 \\pm 3$ & 0.140\\textsuperscript{b} & 25.91 & 79 \\\\\n62039 & 110537 & $7.25 \\pm 0.03$ & $\\dots$ & 42.68 & $> 19$ \\\\\n64150 & 114174 & $61.72 \\pm 0.02$ & 0.675\\textsuperscript{c} & 26.14 & 270 \\\\\n\\bottomrule[\\heavyrulewidth]\n\\multicolumn{6}{l}{\\textsuperscript{a}\\footnotesize{\\citet{2014ApJ...781...29C}.}}\\\\\n\\multicolumn{6}{l}{\\textsuperscript{b}\\footnotesize{\\citet{2014AJ....147...86T}.}}\\\\\n\\multicolumn{6}{l}{\\textsuperscript{c}\\footnotesize{\\citet{2014ApJ...783L..25M}.}}\\\\\n\\multicolumn{6}{l}{\\textsuperscript{d}\\footnotesize{\\citet{2007A&A...474..653V}.}}\n\\\\\n\\multicolumn{6}{l}{\\textsuperscript{$\\dag$}\\footnotesize{Triple or higher-order system.}}\\\\\n\\label{linear_trend_results}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Considerations on multiplicity statistics}\n\nAlthough planet search surveys are generally biased against the presence of binaries because they avoid known compact multiple systems, the fraction of binary or higher-order systems in the whole sample of the Solar Twin Planet Search program is $42\\% \\pm 6\\%$\\footnote{\\footnotesize{Counting stellar and brown dwarf companions. The uncertainty is computed using a bootstrap resampling analysis with 10,000 iterations, similarly to \\citet{2010ApJS..190....1R}. In each iteration, a new set of 81 solar twins is randomly drawn from the original sample, allowing stars to be selected more than once.}}. 
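The bootstrap resampling described in the footnote can be sketched as follows (an illustrative example, not the authors' code; the flag array is hypothetical, built so that 34 of the 81 stars, roughly 42\%, are multiple):

```python
import numpy as np

rng = np.random.default_rng(0)

# 81 solar twins; 1 flags a binary or higher-order system
# (hypothetical array: 42% of 81 is roughly 34)
is_multiple = np.zeros(81, dtype=int)
is_multiple[:34] = 1

# 10,000 bootstrap iterations: redraw 81 stars with replacement
# and recompute the multiplicity fraction each time
fractions = np.array([
    rng.choice(is_multiple, size=81, replace=True).mean()
    for _ in range(10_000)
])

print(f"fraction = {is_multiple.mean():.0%} +/- {fractions.std():.0%}")
```

The standard deviation of the resampled fractions, about 5--6\% here, is what is quoted as the uncertainty on the multiplicity fraction.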
This value agrees with previous multiplicity fractions reported by, e.g., \\citet{2010ApJS..190....1R} and \\citetalias{2014AJ....147...86T}; however, it is significantly lower than the $58\\%$ multiplicity fraction for solar-type stars reported by \\citet{2017ApJ...836..139F}, who argues that previous results are subject to selection effects and are thus biased against the presence of multiple systems.\n\nThe orbital period vs. mass ratio plot of companions in the Solar Twin Planet Search is shown in Fig. \\ref{mratio}. A comparison with the sample of solar-type stars from \\citetalias{2014AJ....147...86T} reveals two important biases in our sample: i) Mass ratios are mostly below 0.3 because of the selection of targets that do not show large radial velocity variations in previous studies; ii) Orbital periods are mostly shorter than 30 yr because longer periods cannot be constrained by the recent RV surveys targeting solar-type stars with low-mass companions. In such cases, further monitoring of linear trend and RV curvature-only binaries may prove useful to understand the origins of the brown dwarf desert \\citep{2006ApJ...640.1051G}. These targets are particularly appealing because the long periods mean that the separation from the main star is large enough to allow us to observe them directly using high-resolution imaging.\n\nPrevious studies on the period-eccentricity relation for binary stars found that systems with orbital periods below 10 days tend to have eccentricities near zero, while those between 10 and 1000 days follow a roughly flat distribution of eccentricities \\citep[][and references therein]{2016AJ....152..189K}, an effect that is due to the timescales for circularization of orbits. Regarding our sample, with the exception of HIP 30037, HIP 65708 and HIP 83276, all of the binaries we observed have periods longer than 1000 days and eccentricities higher than 0.3, which agrees with the aforementioned findings. 
According to \\citet{1991A&A...248..485D}, the distribution of eccentricities in systems with $T > 1000$ d is a function of energy only, and does not depend on $T$ (see fig. 5 in \\citeauthor{1991A&A...248..485D}). Interestingly, HIP 30037, which hosts a brown dwarf companion with $T = 31.6$ d, falls inside the 25--35 day interval of orbital periods found by \\citeauthor{2016AJ....152..189K} that corresponds to a short stage of evolution of binaries undergoing a fast change in their orbits.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.47\\textwidth]{images\/mratio_period.pdf}\n\\caption{Mass ratios as a function of the orbital periods of binary stars or higher-order systems in the solar neighborhood. The purple circles are binaries in our sample with well-defined periods and $m \\sin{i}$; the blue triangles correspond to the binaries in our sample for which we only have lower limits on the periods and $m \\sin{i}$. The stars from \\citetalias{2014AJ....147...86T} are plotted as black dots (the darker ones are those with main star masses between 0.9 and 1.1 M$_\\odot$).}\n\\label{mratio}\n\\end{figure}\n\n\\section{Conclusions}\n\nThe Solar Twin Planet Search and several other programs observed 81 solar twins using the HARPS spectrograph. In total, 18 of these solar twins are spectroscopic binaries, 18 are visual binaries, and two fall in both categories. We found a multiplicity fraction of $42\\% \\pm 6\\%$ in the whole sample, which is lower than the expected fraction ($\\sim$$58\\%$) because of selection effects that are generally seen in exoplanet search surveys.\n\nWe updated or reproduced the solutions of several known binaries, and determined all the orbital parameters of HIP 19911, HIP 65708, HIP 67620, HIP 79578, HIP 81746 and HIP 103983. The stars HIP 43297 and HIP 64673, which we previously reported as binaries, are likely to host long-period giant planets instead of stellar companions. 
For binaries with partial orbits, we were able to place lower limits on some of their orbital parameters owing to the presence of curvature or an inflection point in their RV data. We estimated the most likely mass of the companions of the binaries that display only linear trends in their RV data. Future work should focus on studying the long-period binaries with photometric data and high-resolution imaging in order to constrain the nature of their companions. These wide solar twin binaries are prime targets for detailed physical characterization of their companions owing to the favorable separation for AO imaging and the precision with which we can measure the stellar parameters of the main star -- this is particularly important for fully convective red dwarf stars and very low-mass companions such as the T dwarf HIP 14501 B, whose evolution and structure are still poorly constrained.\n\nAdditionally, we reported the detailed discovery of new companions to the following solar twins: HIP 6407, HIP 30037, HIP 54582, and HIP 62039; we were able to determine orbital solutions for the first two using radial velocities. The latter two do not have enough RV data to obtain precise orbital parameters, but we can nonetheless estimate their minimum companion masses. We found that these new companions are likely very low-mass, ranging from 0.02 to 0.12 M$_\\odot$ (although we stress that these are lower limits), which should be useful in understanding the origins of the brown dwarf desert in future research.\n\nThe anomalies and RV residuals observed in HIP 19911, HIP 67620 and HIP 103983 are likely due to contamination of the spectra of the main star by the companion. 
Although the peculiar stars in our sample are no longer considered blue straggler candidates, the detection of WD companions remains particularly valuable for the study of field Sun-like stars because it allows the estimation of their cooling ages; these are more reliable than isochronal and chromospheric ages in some cases, thus providing robust tests for other age estimation methods. We do not expect the presence of M dwarf companions to contaminate the lithium spectral lines of Sun-like stars; thus, stellar ages derived from Li abundances may be more reliable for double-lined solar twins. We recommend a revision of the stellar parameters of the peculiar binary stars by analyzing high-resolution spectra at the highest Doppler separations possible, or using Gaussian processes to disentangle the contaminated spectra \\citep[see, e.g.,][]{2017ApJ...840...49C}.\n\nWe conclude that single-lined solar twin binaries with orbital periods larger than several months and moderate to low eccentricities do not display signals of distinct rotational evolution when compared to single solar twins. The most compact system in our sample, HIP 30037, which hosts a 0.06 M$_\\odot$ brown dwarf companion at an orbital period of 31 days, is in fact one of the quietest stars in the sample (in regard to its activity levels), and is thus a viable target for further efforts in detecting moderate- to long-period circumbinary planets.\n\n\\section*{Acknowledgements}\n\nLdS acknowledges the financial support from FAPESP grants no. 2016\/01684-9 and 2014\/26908-1. JM thanks FAPESP (2012\/24392-2) for support. LS acknowledges support by FAPESP (2014\/15706-9). This research made use of SciPy \\citep{scipy_ref}, Astropy \\citep{2013A&A...558A..33A}, Matplotlib \\citep{Hunter:2007}, and the SIMBAD and VizieR databases \\citep{2000A&AS..143....9W, 2000A&AS..143...23O}, operated at CDS, Strasbourg, France. We thank R. P. Butler, S. Vogt, G. Laughlin and J. 
Burt for allowing us to analyze the LCES HIRES\/Keck data prior to publication. LdS also thanks B. Montet, J. St\\\"urmer and A. Seifahrt for the fruitful discussions on the results and code implementation. We would also like to thank the anonymous referee for providing valuable suggestions to improve this manuscript.\n\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nPolarimetric imagery consists of forming an image of the state of polarization of the light backscattered by a scene. We consider in this paper that the scene is artificially illuminated with coherent light (laser). For example, this illumination is used in active imagery in order to combine night-vision capability with improved image resolution for a given aperture size. In practice, using coherent illumination produces speckle noise that deteriorates the image \\cite{goo85}. However, the backscattered light carries information about the capability of the scene to polarize or depolarize the emitted light and thus allows one to determine the media that compose the scene. This information can be described by a scalar parameter: the degree of polarization of the light. This quantity is obtained in standard configurations of polarimetric systems using four pairs of angular rotations of both a compensator and a polarizer. Four transmittances are thus recorded \\cite{bro98}, which lead to the estimation of the degree of polarization. However, this system is complex and it is interesting to develop methods to estimate the degree of polarization that could reduce the number of images to record. In \\cite{gou01}, the authors proposed to estimate the degree of polarization with only two intensity images; however, this method relies on the assumption that the measurements of the two components are uncorrelated, which can in some cases be too restrictive a hypothesis. 
This paper extends the work of \\cite{gou01} by taking into account the correlation of the different components.\\\\\n Let us first introduce the context of the study.\\\\\n\n\n\\section{Background}\nThe electric field of the light at a point of coordinates ${\\bf r}$ (a vector with 3 components) in a 3D space and at time $t$ can be written, if we assume the light to propagate in a homogeneous and isotropic medium, as\n\\begin{equation}\n {\\bf E}({\\bf r},t) = \\left[ A^{X}({\\bf r},t) {\\bf e_x} + A^{Y}({\\bf r},t) {\\bf e_y}\\right] e^{- i 2 \\pi \\nu t}\n\\end{equation}\nwhere $\\nu$ is the central frequency of the field and $\\bf{e_x}, \\bf{e_y}$ are orthogonal unit vectors (in the following, bold letters represent vectors).\\\\\nThe terms $A^{X}({\\bf r},t)$ and $ A^{Y}({\\bf r},t)$ are complex and define the random vector known as the Jones vector\n\\begin{equation}\n\\bf{A} = \\left[\\begin{array}{l}\n A^{X}({\\bf r},t) \\\\\n A^{Y}({\\bf r},t)\n\\end{array}\\right].\n\\end{equation}\n\n\nThe state of polarization of light corresponds to the properties of ${\\bf{E}}({\\bf r},t)$ at a particular point of space. It can be described by the covariance matrix $\\Gamma$\n\n\\begin{equation}\n\\Gamma = \\left[\\begin{array}{ll}\n  &  \\\\\n  &  \\\\\n \\end{array}\\right]\n\\end{equation}\nwhere $<.>$ and $.^*$ define respectively the statistical average and the complex conjugate. For the sake of brevity, the following notations for $\\Gamma$ are introduced\n\\begin{equation}\\label{cov}\n\\Gamma = \\left[\\begin{array}{ll}\n a_1 & a_2 \\\\\n a_2^* & a_4 \\\\\n \\end{array}\\right] = \\frac{1}{c_1c_4-|c_2|^2} \\left[\\begin{array}{ll}\n c_4 & -c_2 \\\\\n -c_2^* & c_1 \\\\\n \\end{array}\\right]; \n\\end{equation}\n\\begin{equation}\n\\hspace{0.6cm} \\Gamma^{-1} = \\left[\\begin{array}{ll}\n c_1 & c_2 \\\\\n c_2^* & c_4 \\\\\n \\end{array}\\right] \n\\end{equation}\nLet us note that this matrix can be diagonalized since it is Hermitian. 
\\\n In the case of coherent light, the electric field is represented by the complex Jones vector $\\bf{A}$ which follows a Gaussian circular law \\cite{goo85}\n\\begin{equation}\n p_{\\Gamma}({\\bf{A}}) = \\frac{1}{\\pi^2 \\det(\\Gamma)} e^{-{\\bf{A}}^{\\dagger} \\Gamma^{-1}{\\bf{A}}} \n\\end{equation}\nwhere ${\\bf A}^{\\dagger}$ stands for the adjoint of the vector ${\\bf{A}}$.\\\\\n\nThe degree of polarization is defined by \\cite{goo85}\n\n\\begin{equation}\nP = \\frac{\\mu_1 - \\mu_2}{\\mu_1 + \\mu_2}\n\\end{equation}\nwhere $\\mu_1$ and $\\mu_2$ are the eigenvalues of $\\Gamma$ ($\\mu_1 \\geq \\mu_2 \\geq 0$).\\\\\nThe degree of polarization is a scalar parameter that characterizes the state of polarization of the light: if $P = 0$, the light is said to be totally depolarized, and if $P = 1$, the light is said to be totally polarized. In the intermediate cases, the light is partially polarized.\\\\\nWith the notations introduced in (\\ref{cov}) one can show that \\cite{goo85}\n\\begin{equation}\\label{deg_pol}\nP^2= 1 - \\frac{4 (a_1 a_4 - |a_2|^2)}{(a_1 + a_4)^2}.\n\\end{equation}\nThe knowledge of this quantity allows one to study the way the illuminated scene \npolarizes or depolarizes the emitted light. This degree gives information about the nature of the medium in the scene. In the standard configuration, four measurements are needed to estimate it. In the case of uncorrelated measurements, two intensity images give a good estimation of this degree using the OSCI \\cite{gou01}. However, in some cases the assumption of uncorrelated measurements may not be valid.\\\\\nIn this paper, an original estimation method is proposed that uses only a pair of images while accounting for the correlation between the components. We recall in the following the method proposed in \\cite{gou01} and we extend it to correlated measurements. We then compare them through statistical measures using simulated data. 
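As a quick numerical check of the two expressions for the degree of polarization given above (a minimal sketch; the covariance matrix is an arbitrary Hermitian positive-definite example):

```python
import numpy as np

# An arbitrary Hermitian, positive-definite covariance matrix Gamma
G = np.array([[15.0, 0.2 + 0.5j],
              [0.2 - 0.5j, 6.0]])

# P from the eigenvalue definition (eigvalsh returns ascending eigenvalues)
mu2, mu1 = np.linalg.eigvalsh(G)
P_eig = (mu1 - mu2) / (mu1 + mu2)

# P^2 from the closed form in terms of the matrix entries:
# P^2 = 1 - 4 (a1*a4 - |a2|^2) / (a1 + a4)^2
a1, a4, a2 = G[0, 0].real, G[1, 1].real, G[0, 1]
P2_entries = 1.0 - 4.0 * (a1 * a4 - abs(a2) ** 2) / (a1 + a4) ** 2

print(P_eig ** 2, P2_entries)  # the two values coincide
```

Both routes give the same value, since the trace and determinant of $\Gamma$ are invariant under diagonalization.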
The results are also presented when the standard estimation of $P^2$ is used (four images), as this case is expected to give the best results. Finally, we conclude and give the perspectives of this work.\n\\section{The OSCI}\nIn the case of two uncorrelated images, the Orthogonal State Contrast Image (OSCI) is determined from \\cite{gou01}, \\cite{bre99}, \\cite{gou04b}\n\\begin{equation}\n\\rho(i) = \\frac{I_1(i) - I_2(i)}{I_1(i) + I_2(i)}\n\\end{equation}\nwhere $I_1(i)$ and $I_2(i)$ are intensity measurements at the pixel site $(i)$, assuming a lexicographic order for the pixels.\\\\\nThese two images are obtained with simple polarimetric systems. First, the scene is illuminated by coherent light with a single elliptical polarization state. Then the backscattered light is analysed in the polarization state parallel (which leads to $I_1(i) = |A_X(i)|^2$) and orthogonal (which leads to $I_2(i) = |A_Y(i)|^2$) to the incident one. \\\\\nThe OSCI is an estimation of $P^2$ in each pixel provided that the materials of the scene modify the degree of polarization of incident light without modifying its principal polarization state\\footnote{The state represented by the eigenvector associated with the eigenvalue $\\mu_1$.}. Let us recall that this kind of material is called a pure depolarizer. For such materials, the covariance matrix $\\Gamma$ is diagonal and, since the diagonal terms represent the intensity images, the OSCI gives an estimation of $P^2$ with \n\\begin{equation}\\label{posci}\n{\\hat P}_{OSCI}^2(i) = \\left(\\frac{I_1(i) - I_2(i)}{I_1(i) + I_2(i)}\\right)^2 = \\eta^2(i).\n\\end{equation}\nIn the case of objects that are not pure depolarizers (i.e. $\\Gamma$ is non-diagonal), the OSCI still reveals an interesting contrast image but no longer estimates $P^2$. This leads us to a new method that considers the case of correlated measurements. 
This is the subject of the following section.\n\n\\section{Correlated measurements}\nIn the case of correlated measurements, the covariance matrix is non-diagonal and of the form (\\ref{cov}). In its standard estimation, the degree of polarization requires four measurements; however, two images are sufficient to get an estimation of $P^2$. Indeed, the coefficient $a_1$ is obtained from one measurement, as is the coefficient $a_4$, and the squared modulus of $a_2$ can be estimated from the cross-correlation coefficient $\\delta_{12}$ between the two measurements $I_1$ and $I_2$. We have\n\\begin{equation}\n\\delta_{12}= \\int \\int I_1 I_2 p(I_1,I_2) dI_1 dI_2 = .\n\\end{equation}\n$\\delta_{12}$ can be calculated by using the joint probability density function $p(I_1,I_2)$, assuming that ${\\bf A}$ is Gaussian circular (i.e. the speckle is supposed to be fully developed).\nIt can be shown that the correlation coefficient is obtained with\n\\begin{equation}\n\\delta_{12}= \\frac{1}{\\det(\\Gamma)\\, c_1^2 c_4^2} \\left( \\frac{1 + \\frac{|c_2|^{2}}{c_1 c_4}}{\\left( 1 - \\frac{|c_2|^{2}}{c_1 c_4} \\right )^3 }\\right )\n\\end{equation}\nwhere $|c_2|$ stands for the modulus of $c_2$.\nCalculating the centered correlation coefficient defined by\n\\begin{equation}\\label{cross_coef0}\n\\Delta_{12}= \\delta_{12} - \n\\end{equation}\nand performing some simple algebra, we get\n\\begin{equation}\\label{cross_coef}\n\\Delta_{12}= |a_2|^2\n\\end{equation}\n\nThus the coefficient $|a_2|^2$ can be obtained from two measurements with $ - $. 
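The identity between the centered intensity correlation and the squared cross-covariance term can be checked by a short simulation (a minimal sketch, not from the paper; it assumes fully developed speckle, i.e. circular Gaussian Jones vectors, and an arbitrary Hermitian positive-definite covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary Hermitian, positive-definite covariance matrix; here |a2|^2 = 320
G = np.array([[30.0, 16.0 - 8.0j],
              [16.0 + 8.0j, 14.0]])

# Draw N circular Gaussian Jones vectors with covariance G: A = L z,
# where G = L L^H (Cholesky) and z is a standard circular Gaussian vector
N = 200_000
L = np.linalg.cholesky(G)
z = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
A = L @ z

I1, I2 = np.abs(A[0]) ** 2, np.abs(A[1]) ** 2

# Centered intensity cross-correlation: Delta_12 = <I1 I2> - <I1><I2>
delta12 = (I1 * I2).mean() - I1.mean() * I2.mean()

print(delta12)  # approaches |a2|^2 = |16 - 8i|^2 = 320 for large N
```

With the seed fixed, the estimate falls within a few percent of $|a_2|^2$, as expected for fully developed speckle.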
This remark leads to the following property.\\\\\n\n{\\bf Property A:} {\\it For fully developed speckle fluctuations, the degree of polarization can be obtained from only two intensity images.}\n\\\\\\\\\nOne can easily note that the degree of polarization can be written as a function of the OSCI with \n\\begin{equation}\\label{cor}\nP^2(i,j)= \\eta^2(i,j) + 4 \\frac{\\Delta_{12}}{(\\langle I_1 \\rangle + \\langle I_2 \\rangle)^2}.\n\\end{equation}\nFrom (\\ref{posci}), the degree of polarization estimated from the OSCI is clearly under-estimated. Thus, we can correct the OSCI in order to get an estimation of the degree of polarization using the correlation coefficient $\\Delta_{12}$. \\\\\nIn the following section, the different estimations of the degree of polarization are compared to the estimation with four images through simulated data.\n\n\\section{Comparison with numerical experiments}\nWe generated $R$ experiments of $N$ samples of complex Jones vectors which follow a Gaussian circular law. The covariance matrix is known and thus $P^2$ is also known. Under the assumption that the statistical averages can be estimated by spatial averages in homogeneous regions, the coefficient $a_1$ can be estimated from a single image ($I_1 = |A_X|^2$), like the coefficient $a_4$ ($I_2 = |A_Y|^2$), since\n\\begin{equation}\\label{estima_1}\n{\\hat a}_1 = \\frac{1}{N}\\sum ^{N}_{i=1}|A_X(i)|^2\n\\end{equation}\n\\begin{equation}\\label{estima_2}\n{\\hat a}_4 = \\frac{1}{N}\\sum ^{N}_{i=1}|A_Y(i)|^2\n\\end{equation}\nwhere $\\{ A_X(i), A_Y(i) \\}$ represent the components of the Jones vector for the sample $i$.\\\\\nThe estimation of $P^2$ differs in the studied cases by the way $|a_2|^2$ is estimated. Three different methods are used:\n\\begin{itemize}\n\n\\item Case of four images\\\\\nIn this situation, we have both the real and the imaginary part of the coefficient $a_2$.
The quantity $|a_2|^2$ is estimated by\n\n\\begin{equation}\\label{estim_a2_A}\n{\\hat \\rho}_{A}= \\left|\\frac{1}{N}\\sum ^{N}_{i=1}A_X(i)A^{*}_Y(i) \\right|^2 \n\\end{equation}\n\n\\item Case of two images with the OSCI\\\\\nIn this case $a_2$ is assumed to be equal to zero.\n\n\\item Case of two images with the proposed approach\\\\\nThe coefficient $|a_2|^2$ is estimated using (\\ref{cross_coef0}) with\n\n\\begin{equation}\\label{estim_a2_I}\n\\begin{array}{lll}\n{\\hat \\rho}_{I} & = & \\frac{1}{N}\\sum ^{N}_{i=1}|A_X(i)|^2|A_Y(i)|^2 \\\\\n& &\\\\\n& &- \\left( \\frac{1}{N}\\sum ^{N}_{i=1}|A_X(i)|^2 \\right) \\left(\\frac{1}{N}\\sum ^{N}_{i=1}|A_Y(i)|^2\\right)\n\\end{array}\n\\end{equation}\n\n\\end{itemize}\n\nIn the three cases, $P^2$ was estimated from the relation (\\ref{deg_pol}) with the estimated parameters.\\\\\nIn order to characterize the precision of the estimation, one considers six examples of the matrix $\\Gamma$:\n\n\\begin{equation}\n\\Gamma_1=\\left[ \\begin{array}{cc}\n15 & 0.2 + 0.5i\\\\\n0.2 - 0.5i & 6\n\\end{array}\\right]; \\Gamma_2=\\left[ \\begin{array}{cc}\n16 & 0\\\\\n0 & 3.6\n\\end{array}\\right];\n\\end{equation}\n\n\\begin{equation}\n \\Gamma_3=\\left[ \\begin{array}{cc}\n82 & 13i\\\\\n-13i & 17\n\\end{array}\\right]; \\Gamma_4=\\left[ \\begin{array}{cc}\n18 & 7 + 8i\\\\\n7 - 8i & 11\n\\end{array}\\right];\n\\end{equation}\n\n\\begin{equation}\n\\Gamma_5=\\left[ \\begin{array}{cc}\n30 & 16 - 8i\\\\\n16 + 8i & 14\n\\end{array}\\right]; \\Gamma_6=\\left[ \\begin{array}{cc}\n1.25 & 5.5i\\\\\n-5.5i & 26\n\\end{array}\\right];\n\\end{equation}\nThese matrices were chosen such that the degrees of polarization are approximately in $\\{ 0.2, 0.4, 0.5, 0.6, 0.8, 1 \\}$.\\\\\n\nThe simulations are performed for $R = 1000$ realisations of $N = 10000$ samples for the six covariance matrices. For the two matrices $\\Gamma_1$ and $\\Gamma_5$, supplementary cases have been studied when $N$ $\\in$ $\\{ 100, 500, 1000, 5000, 10000 \\}$.
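The three estimators can be compared in a small Monte Carlo sketch. It assumes the usual definition $P^2 = 1 - 4\\det\\Gamma \/ ({\\rm tr}\\,\\Gamma)^2$ for relation (\\ref{deg_pol}), uses $\\Gamma_5$, and reduces $R$ with respect to the text to keep the run short:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gamma_5 from the text; assumed definition P^2 = 1 - 4 det(Gamma) / tr(Gamma)^2
Gamma = np.array([[30.0, 16.0 - 8.0j], [16.0 + 8.0j, 14.0]])
P2_true = 1 - 4 * np.linalg.det(Gamma).real / np.trace(Gamma).real**2

L = np.linalg.cholesky(Gamma)
N, R = 10_000, 200
est_A, est_I, est_OSCI = [], [], []
for _ in range(R):
    # N circular complex Gaussian Jones vectors with covariance Gamma
    z = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
    A_X, A_Y = L @ z
    I1, I2 = np.abs(A_X)**2, np.abs(A_Y)**2
    a1, a4 = I1.mean(), I2.mean()
    a2sq_A = np.abs(np.mean(A_X * np.conj(A_Y)))**2   # four images: complex cross term
    a2sq_I = np.mean(I1 * I2) - a1 * a4               # two images: Delta_12 = |a2|^2
    for a2sq, out in ((a2sq_A, est_A), (a2sq_I, est_I), (0.0, est_OSCI)):
        out.append(1 - 4 * (a1 * a4 - a2sq) / (a1 + a4)**2)

print(P2_true, np.mean(est_A), np.mean(est_I), np.mean(est_OSCI))
```

The four-image and two-image estimators both recover the true $P^2$ on average, while the OSCI (which sets $a_2 = 0$) is strongly biased low for this non-diagonal $\\Gamma$, consistent with the discussion above.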
The results are presented in Figures \\ref{fig1}--\\ref{fig6}. Several points are important to notice.\nFirst of all, as expected, the best estimations of the degree of polarization, regarding all the cases tested, were achieved with four images. However, the proposed approach that relies on two correlated images produces good estimations of the degree of polarization whatever covariance matrix is used (Fig.~\\ref{fig1}), as soon as $N > 1000$ (Figs.~\\ref{fig3} and \\ref{fig5}). Note that the estimation with the OSCI gives results which cannot be used if the term $\\Delta_{12}\/(\\langle I_1 \\rangle + \\langle I_2 \\rangle)^2$ is too high (for example, if $|a_2|^2$ is non-negligible). Figs.~\\ref{fig2}, \\ref{fig4} and \\ref{fig6} show that the experimental variances using four measurements or the OSCI are comparable whatever $\\Gamma$ and $N$ are, whereas the variance obtained with the proposed approach is larger. However, this precision should be sufficient for some practical applications. This point should be studied in detail in a future work.\n\n\\begin{figure}\n\\centerline{\\epsfxsize=9cm\\epsfbox{P_gamma.eps}}\n\\caption{The degree of polarization is plotted as a function of the six covariance matrices for $R = 1000$ and $N = 10000$. $P^2$ is the true degree of polarization, $P^2$ - A is the degree estimated from four measurements, $P^2$ - I is the degree obtained with the numerical simulations using the proposed method for evaluating $|a_2|^2$, $P^2$ - OSCI is the degree estimated from the OSCI.}\n\\label{fig1}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\epsfxsize=9cm\\epsfbox{sigma_gamma3.eps}}\n\\caption{The experimental variance of the degree of polarization is plotted as a function of the six covariance matrices for $R = 1000$ and $N = 10000$.
$\\sigma$ - A is the standard deviation of the degree of polarization estimated from four measurements, $\\sigma$ - I is the standard deviation of $P^2$ obtained with the numerical simulations using the proposed method for evaluating $|a_2|^2$, $\\sigma$ - OSCI is the standard deviation of $P^2$ estimated from the OSCI.}\n\\label{fig2}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\epsfxsize=9cm\\epsfbox{P_gamma1_samples.eps}}\n\\caption{The degree of polarization is plotted as a function of the number of samples $N$ for $R = 1000$ for the covariance matrix $\\Gamma_1$. $P^2$ is the true degree of polarization, $P^2$ - A is the degree estimated from four measurements, $P^2$ - I is the degree obtained with the numerical simulations using the proposed method for evaluating $|a_2|^2$, $P^2$ - OSCI is the degree estimated from the OSCI.}\n\\label{fig3}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\epsfxsize=9cm\\epsfbox{sigma_gamma1_samples4.eps}}\n\\caption{The product of $N$ and the experimental variance of the degree of polarization ($\\sigma^2$) is plotted as a function of $N$ for $R = 1000$ for the covariance matrix $\\Gamma_1$. $N \\sigma^2$ - A is the product obtained when $P^2$ is estimated from four measurements, $N \\sigma^2$ - I is the product obtained when $P^2$ is estimated with the numerical simulations using the proposed method for evaluating $|a_2|^2$, $N \\sigma^2$ - OSCI is the product obtained when $P^2$ is estimated from the OSCI.}\n\\label{fig4}\n\\end{figure}\n\\begin{figure} \n\\centerline{\\epsfxsize=9cm\\epsfbox{P_gamma5_samples.eps}}\n\\caption{Same as Figure \\ref{fig3}, but using the covariance matrix $\\Gamma_5$ instead of $\\Gamma_1$.}\n\\label{fig5}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\epsfxsize=9cm\\epsfbox{sigmagamma5samples4.eps}}\n\\caption{Same as Figure \\ref{fig4}, but 
using the covariance matrix $\\Gamma_5$ instead of $\\Gamma_1$.}\n \\label{fig6}\n\\end{figure}\n\n\\section{Conclusion}\n\nWe have proposed a new approach to estimate the degree of polarization in polarimetric images degraded by speckle noise. Assuming that the speckle is fully developed, this method allows one to estimate this degree with only two intensity images, whereas four images are needed in a standard experimental setup. This is of great interest in terms of reducing the cost of the imaging system, since the original setup can be simplified. The proposed approach has been tested on simulated data and compared to the standard estimation techniques that require either four images or two independent images (OSCI). The results show that the proposed method gives a good approximation of the degree of polarization.\nThis study needs to be extended with a theoretical analysis in order to specify the conditions of validity of the proposed approach. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\nIt has been known that the observed number density of satellite galaxies in the Local Group and our own Milky Way is orders of magnitude lower than the predictions from cosmological simulations, the so-called `missing satellite problem' \\citep{kly99,moo99}. One of the popular solutions to this problem is that low-mass halos fail to form stars efficiently such that they are below the detection limit of most imaging surveys. Under this paradigm, `dark' galaxies are gas-rich galaxies that do not emit sufficient optical light due to their low efficiency in forming stars, but are thought to be building blocks of normal star-forming galaxies. This type of galaxy is an ideal laboratory to study the early stages of star formation, which could lead to understanding how star formation is triggered. \n\nIt is in general very difficult to identify dark galaxies since they are extremely faint in the optical.
Very few dark-galaxy candidates have been identified and confirmed to date. The ALFALFA (Arecibo Legacy Fast ALFA) HI survey \\citep{gio05} has discovered on the order of $\\sim 200$ HI sources not associated with apparent optical counterparts, but most of them are likely to have tidal origins \\citep{hay11}. A pilot study suggested that none of the remaining objects they explored is a dark galaxy after cross-checking with data at other wavelengths \\citep{can15}. \n\nRecently, \\citet{van15a} have identified a new class of low surface brightness (LSB) dwarf galaxies in the Coma cluster, often referred to as `ultra-diffuse' galaxies (UDGs), using the Dragonfly Telephoto Array \\citep{abr14}. These UDGs not only have a low surface brightness (24 -- 26 mag arcsec$^{-2}$), but also show extended structures with sizes comparable to those of $L^{*}$ galaxies. Although these UDGs are found in special environments and may not necessarily represent the global dwarf population, the discovery of this type of galaxy suggests we might have missed numerous faint dwarfs due to observational limitations in the past.\n\nIn addition to the aforementioned efforts, optical Integral Field Unit (IFU) observations open a plausible window to probe the dwarf populations. Normally, emission lines from gas ionized by star-forming regions, AGNs, or shocks are stronger than the stellar continuum and hence can be easily detected with reasonable integration times. With the large area covered by IFUs, the structure of ionized gas can be probed out to several tens of kpc. Isolated ionized gas clouds that are separate from a nearby galaxy or a quasar have been readily studied in IFU as well as spectroscopic observations \\citep{fu07a,fu07b,fu08,lin09,hus10,kee12,che16b}.
Although the majority of those ionized gas clouds are suggested or inferred to have external origins, such as the result of gas accretion or minor mergers, their nature remains unknown.\n\nHere we report the discovery of a giant H$\\alpha$~blob which does not have any optical counterpart in deep CFHT $gri$ images. However, the morphology and emission line analyses suggest that it could either be ejected gas due to past AGN activity or a special type of UDG (or `dark' galaxy). \nIn \\S2, we describe the multi-wavelength data for this system. We present the main results in \\S3. Section 4 discusses the plausible origins of this system and the important implications of our results. Conclusions are given in \\S5. Throughout this paper we adopt the following cosmology: \\textit{H}$_0$ = 100$h$~${\\rm km~s^{-1}}$ Mpc$^{-1}$, $\\Omega_{\\rm m} = 0.3$ and $\\Omega_{\\Lambda } = 0.7$. We use a Salpeter IMF when deriving the star formation rate from various observables.\nWe adopt the Hubble constant $h$ = 0.7 when calculating rest-frame magnitudes. All magnitudes are given in the AB system.\n\n\\section{DATA \\label{sec:data}}\n\\subsection{Optical integral field data}\nThis system, MaNGA 1-24145 ($z$ = 0.0322, RA = 258.84693, DEC = 57.43288, M$_{*}$ $\\sim 10^{11}$ $\\rm M_{\\odot}$\\footnote{Based on the NASA-Sloan Atlas catalog: http:\/\/www.nsatlas.org}), was observed among the first 1392 galaxies of the on-going SDSS-IV\/MaNGA survey \\citep{bun15,dro15,law16,yan16a,yan16b,sdss16}. MaNGA is an IFU program to survey 10k nearby galaxies with a spectral resolution varying from R $\\sim$ 1400 at 4000 \\AA~ to R $\\sim$ 2600 at 9000 \\AA. The survey uses the BOSS spectrographs (Smee et al. 2013) on the 2.5m Sloan Foundation Telescope (Gunn et al. 2006).
The median full width at half maximum (FWHM) of the MaNGA point spread function (PSF) of the datacube is $\\sim$ 2.5\".\nThe MaNGA data used were reduced using the MPL-4 version of the MaNGA data reduction pipeline \\citep{law16}.\nThe spectral line fitting is carried out using the Pipe3D pipeline \\citep{san16a}. The stellar continuum was first modelled with a linear combination of 12 single stellar population (SSP) templates that were extracted from the MILES project \\citep{san06,vaz10,fal11}. The best-fit stellar continuum is then subtracted from the reduced data spectrum for the emission line measurements. Details of the fitting procedures are described in \\citet{san16b}. To ensure reliable measurements, we restrict our subsequent analyses to spaxels where the error-to-flux ratio of the line fitting is less than 1.\n\nTo correct for dust reddening, we follow the method described in the Appendix of \\citet{vog13} to compute the reddening correction by using the Balmer decrement at each spaxel of the IFU cube. An extinction law with $R_V = 4.5$ \\citep{fis05} is used. \nThe star formation rate (SFR) is then estimated based on this extinction-corrected H$\\alpha$~flux. \n\n\\begin{figure*}\n\\includegraphics[angle=-90,width=17cm]{manga-7991-12702_SDSSimage_Ha_edit.eps}\n\\caption{Left: The SDSS $gri$ composite image of MaNGA 1-24145 with the MaNGA hexagonal bundle field of view (FoV) overlaid. This system was observed with the 127-fibre bundle of MaNGA, so this hexagon is $\\sim$32.5\" in diameter. The data extend to regions slightly outside the hexagon because of the dithering. Three distinct objects are visible within the bundle, including two elliptical galaxies (Satsuki and Mei) and one foreground star. Right: the observed H$\\alpha$~flux map from the MaNGA observations.
\\label{fig:ha}}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[angle=0,width=8.5cm]{SDSS_manga-7991-12702_colorcomposite_pipe3D_enhanced_crop.eps}\n\\caption{Left: The $ugi$ composite image reconstructed using the MaNGA continuum with the MaNGA hexagonal bundle field of view (FoV) overlaid. Right: The [OIII]+H$\\alpha$+[NII] composite image. The flux scales of the three lines are adjusted in order to highlight the H$\\alpha$~blob. \\label{fig:pipe3Dimage}}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[angle=-90,width=17cm]{manga-7991-12702_pipe3d_spectra.eps}\n\\caption{Upper panels: The black curves represent the MaNGA spectra of the central spaxel of the main galaxy `Satsuki' (left), the southern companion `Mei' (middle), and the H$\\alpha$~blob `Totoro' (right). The red curves are the best-fit SSP model spectra for the stellar continuum. The blue curves show the residual spectra that are used for the emission line fitting. Lower panels: The zoom-in spectra around the H$\\alpha$~line. The five dashed lines denote the observed wavelengths of the [NII] 6548, H$\\alpha$, [NII] 6584, [SII] 6717, and [SII] 6731 lines, respectively. \n\\label{fig:spectra}}\n\\end{figure*}\nFigure \\ref{fig:ha} shows the SDSS $gri$ composite image (left panel) and the H$\\alpha$~flux map (right panel) of this system (MaNGA 1-24145). The SDSS image indicates that MaNGA 1-24145 (nicknamed `Satsuki') has a companion galaxy (nicknamed `Mei') located to the lower left of MaNGA 1-24145. The stellar absorption lines suggest that these two galaxies are at similar redshifts, with line-of-sight velocities differing by $\\sim$ 200 ${\\rm km~s^{-1}}$. In addition, both galaxies are round and red according to the SDSS images. Therefore, these two galaxies likely form a dry (gas-poor) merger system.
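The Balmer-decrement reddening correction described in Section 2.1 can be sketched as follows. The extinction-curve values $k({\\rm H}\\alpha)$ and $k({\\rm H}\\beta)$ below are generic Calzetti-like placeholders; the paper instead uses the \\citet{fis05} law with $R_V = 4.5$:

```python
import numpy as np

# Hypothetical extinction-curve values at Halpha and Hbeta (Calzetti-like
# placeholders; the paper uses the Fischera 2005 curve with R_V = 4.5 instead)
K_HA, K_HB = 2.53, 3.61
BALMER_INTRINSIC = 2.86   # case-B intrinsic Halpha/Hbeta ratio

def dered_halpha(f_ha_obs, f_hb_obs):
    """Extinction-correct an observed Halpha flux via the Balmer decrement."""
    ebv = 2.5 / (K_HB - K_HA) * np.log10((f_ha_obs / f_hb_obs) / BALMER_INTRINSIC)
    ebv = max(ebv, 0.0)   # clip unphysical negative extinction
    return f_ha_obs * 10**(0.4 * K_HA * ebv)

# An observed decrement of 4.0 implies E(B-V) ~ 0.34 and a ~2.2x correction
print(dered_halpha(4.0, 1.0))
```

The corrected flux is then converted to a SFR in each spaxel, as described in the text.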
The compact source in the upper-left corner of the image is a foreground star according to the MaNGA spectrum, and hence it is irrelevant to the dry merger system we are probing in this work.\n\nWhat is striking about this system is that in the upper-right corner of the H$\\alpha$~map, there exists a giant H$\\alpha$~blob (RA = 258.84314; DEC = 57.43529) which, however, does not have any optical counterpart in the SDSS images. The size of the H$\\alpha$~blob is $\\sim3.2$ kpc in radius. This H$\\alpha$~blob (nicknamed `Totoro') is 7.7 kpc away from Satsuki, and is connected to the H$\\alpha$~emission of Satsuki through tail-like structures. To ensure that the H$\\alpha$~emission at the position of Totoro is not due to artifacts in the data, we checked the MaNGA spectra at various spaxels in the region of Totoro. We found that multiple emission lines, including the H$\\alpha$, [NII] 6584, [SII] 6717,6731, and [OIII] 5007~ lines, are clearly detected in those regions, suggesting that the H$\\alpha$~blob is a real feature. Figure \\ref{fig:pipe3Dimage} displays the reconstructed $ugr$ image from the MaNGA continuum (left panel) and the [OIII] + H$\\alpha$ + [NII] composite image (right panel). It can be seen that the reconstructed continuum map from the MaNGA data is as smooth as SDSS. The three upper panels of Figure \\ref{fig:spectra} show the MaNGA spectra of the central pixels for Satsuki, Mei, and Totoro, respectively. The two elliptical galaxies are mainly composed of old stellar populations, consistent with their morphological classification as early-type galaxies, whereas the blob is dominated by emission lines. \n\n\\subsection{Optical imaging data \\label{sec:optical}}\nThe imaging data come from two sources: the DR12 release of the SDSS photometric survey \\citep{yor00}, which reaches $r\\sim22$, and deeper observations with CFHT\/MegaCam in $g$, $r$ and $i$.
The latter combines archival data downloaded from the CADC server and data taken in summer 2015 (PI: Lihwai Lin) with a Director's Discretionary Time (DDT) program. All the MegaCam data were processed and stacked via MegaPipe \\citep{gwy08}. The final images have 5$\\sigma$ limiting magnitudes of 25.7, 26.2, and 25.2 mag (1\" aperture in radius) and surface brightness 5$\\sigma$ limits of 26.4, 26.9, and 25.9 mag arcsec$^{-2}$ in $g$, $r$, and $i$, respectively.\n\n\\subsection{Radio Continuum}\n \nWe observed this system with the Karl G. Jansky Very Large Array (VLA) in the A configuration on 2015 August 20 using the C-band receiver tuned to $4-6$ GHz ($\\lambda = 7.5-5.0$\\,cm). The on-source time is 42 minutes; we observed 3C147 for flux and bandpass calibrations, and J0920+4441 for phase calibration. Data reduction was carried out with CASA \\citep{McMullin07} using the following steps: (1) standard calibration using the VLA Data Reduction Pipeline (Chandler et al., in prep); (2) removal of any portions of the data corrupted by strong radio frequency interference; and (3) imaging with the task {\\tt CLEAN}. The imaging parameters are the following: MT-MFS deconvolver with nterms of 2, $0\\farcs06$ pixel size, and Briggs weighting with robust parameter of 0.5. The final image has a $0\\farcs39 \\times 0\\farcs35$ synthesized beam and rms noise at the pointing center of 7 $\\mu$Jy beam$^{-1}$.\n\nThe radio flux of this source is about 37 $\\pm$ 13 $\\mu$Jy at 5 GHz.\nAssuming that this emission is synchrotron-dominated and so follows a power law $S \\propto \\nu^{-\\alpha}$ with a spectral index $\\alpha$ = 0.7, we derive the radio luminosity of Satsuki to be 2.2 $\\times$ 10$^{20}$ W~Hz$^{-1}$ at 1.4 GHz.
This luminosity implies a star-formation rate of 0.12 $\\rm M_{\\odot}$ yr$^{-1}$ using the \\citet{bel03} radio SFR indicator, converted to the Salpeter IMF, which is a factor of 2.5 greater than the limit implied by H$\\alpha$~emission at the location of the radio source (see Table \\ref{tab:property}). Therefore, it is likely that this radio point source is a faint AGN.\n\n\\subsection{HI}\nThis source was observed as part of the HI-MaNGA programme at the Robert C. Byrd Green Bank Telescope (GBT), which is obtaining HI 21cm observations of a large sample of MaNGA galaxies (AGBT16A\\_095, PI: K. Masters). This target was observed on 2016 Feb. 5 for 3 sets of 5 min ON\/OFF pairs using the VEGAS spectrometer with a bandwidth of 23.44 MHz, centred on the frequency of 21cm emission redshifted to $cz = 9653$ km~s$^{-1}$. At this frequency the FWHM of the GBT beam is 9\\arcmin. No HI emission was detected in this volume to a rms of 1.58 mJy (after smoothing to 5.15 km~s$^{-1}$ velocity resolution). Assuming a velocity spread of 100-400 ${\\rm km~s^{-1}}$, this non-detection sets a 1-$\\sigma$ upper limit of 8.9--9.2 $\\times 10^{8}$ $\\rm M_{\\odot}$~on the HI mass of this system \\footnote{It is intended that HI-MaNGA data will be released as an SDSS Value Added Catalogue in a future data release from SDSS. In addition the raw data will be publicly available via the NRAO Data Archive at https:\/\/archive.nrao.edu a year following observations.}.\n \n\n\\subsection{X-ray}\nSatsuki is located within the field of view of a $\\sim$ 47 ks \\textit{Chandra} ACIS observation, OBSID 4194 (PI: Trevor Ponman), which occurred on 2003 September 17. This observation was centered on the nearby galaxy NGC 6338 (258.84256, +57.407) [J2000], $\\sim$ 2' south of the MaNGA source of interest. Before analysis, this dataset was reprocessed using the Chandra$\\_$repro task in CIAO v4.8 \\citep{fru06} using CALDB v4.6.3.
\n\nThe exposure-corrected image of this field (Figure \\ref{fig:JVLA}) was generated using the `fluximage' command, and indicates diffuse X-ray emission coincident with the dry merger system, as also shown in \\citet{pan12}, which performed a study of a nearby BCG based on the same X-ray dataset. The spectrum of the X-ray emission coincident with the H$\\alpha$~blob of interest was then extracted using the CIAO script specextract with a source region of a 22\" $\\times$ 18\" ellipse centered at 17:15:23.7, +57:26:05, which encompasses all of the emission. We fit this X-ray spectrum with a single absorbed APEC model -- the emission spectrum of a collisionally-ionized diffuse gas -- assuming the redshift $z$ = 0.032202 measured from optical spectroscopy, using XSPEC v12.8.2e \\citep{arn96}. This fit results in a reasonable reduced $\\chi^{2}$ (1.45 for 88 d.o.f.), though it somewhat under-predicts the flux at $>$ 5 keV. As we will mention in Section 3.5, this dry merger system is part of a small group. The derived X-ray temperature is 1.26$\\pm$0.06 keV, consistent with the temperature on group scales \\citep{ket13}. No point-like source is found within Satsuki or Totoro, indicating that there is no strong X-ray AGN present in this system.\n\n\\section{RESULTS}\n\n\\subsection{The optical morphology \\label{sec:obs}}\n\nTo ensure that the absence of an optical counterpart at the position of Totoro as shown in Figure \\ref{fig:ha} is not due to the relatively shallow depth of the SDSS imaging, we carried out a follow-up observation of this system with CFHT\/MegaCam and combined it with the archival data (see section \\ref{sec:optical}). Figure \\ref{fig:cfht} displays the $gri$ composite image of this galaxy. It is clear that there are extended stellar halos surrounding the two galaxies. Again, at the position of the H$\\alpha$~blob, no apparent optical continuum is revealed (see the right panel of Figure \\ref{fig:cfht}).
\n\n\\begin{figure}\n\\includegraphics[angle=0,width=8.5cm]{Halpha_Xray_VLA_SDSSr_crop_axistitle.eps}\n\\caption{The Chandra X-ray contours (magenta) and extinction-corrected MaNGA H$\\alpha$~contours (cyan) overlaid on the SDSS $r$-band image (background image) of MaNGA 1-24145. The red cross marks the position of the VLA point-source detection. The X-ray contours correspond to 44\\%, 30\\%, 21\\%, and 15\\% of the peak value, respectively, whereas the H$\\alpha$~contours correspond to 100\\%, 60\\%, 37\\%, 23\\%, 14\\%, 8\\%, 5\\%, 3\\%, and 2\\% of the peak value, respectively.\n \\label{fig:JVLA}}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[angle=0,width=17cm]{7991-12702_3color_region_combined.eps}\n\\caption{\n(A) CFHT $gri$ composite color image for MaNGA 1-24145 with the MaNGA hexagonal FoV overlaid. A bright BCG is $\\sim$ 43 kpc away to the South. (B) A zoom-in picture of (A).\nThe white circle marks the location of Totoro. In both panels, North is up and East is left. \\label{fig:cfht}}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[angle=0,width=8cm]{show_galfit_g_result_resize.eps}\n\\caption{\n(A) The original CFHT MegaCam $g$-band image, (B) the $g$-band image after subtracting nearby satellite galaxies, (C) the model image for (B), and (D) the residual image produced by GALFIT. The dashed white circles mark the position of Totoro. The white bars in the lower-right corner of each panel correspond to a scale of 5\". \\label{fig:song}}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[angle=-90,width=17cm]{manga-7991-12702_kinematics.eps}\n\\caption{\nVelocity fields of MaNGA 1-24145 based on the H$\\alpha$~line (upper panels) and stellar components (lower panels). The left panels show the radial velocities and the right panels show the velocity dispersions.
\\label{fig:velocity}}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[angle=-90,width=17cm]{manga-7991-12702_lineratio_elines.eps}\n\\caption{Left: The extinction-corrected H$\\alpha$~map of MaNGA 1-24145. Right: The [OIII] 5007\/H$\\beta$~ratio as a function of distance from the center of Satsuki. The letters (A, B, C, etc.) mark different locations of this system.\n\\label{fig:o3hb}}\n\\end{figure*}\n\nTo more clearly see the underlying surface brightness structure at the location of Totoro, we build detailed photometric models for this merging system using \\texttt{GALFIT\\ v3.0.2} (Peng\\ et al.\\ 2002, 2010). The MegaCam $g$-band image is used here for this purpose because it may better reveal the continuum of Totoro if it has had recent star formation (see \\S 4). Since we are not interested in the overall structures of this complex merging system, we only focus on a 400$\\times$400 pixel region centered on the MaNGA bundle. We construct a mask image to exclude isolated objects around the merging system from the fitting. The PSF of the image was modelled using the \\texttt{SExtractor} and \\texttt{PSFex} routines, and the PSF image used in the modelling was extracted at the central coordinate of the MaNGA bundle. Meanwhile, on top of the extended envelope, there are 11 objects (most are galaxies) that cannot be easily masked out. We model them separately, then subtract them from the image. All of these smaller objects can be well modelled by a single- or double-{S\\'{e}rsic\\ } model locally with the help of additional {S\\'{e}rsic\\ } and sky components to account for the envelope in the background. As shown in panel (B) of Figure \\ref{fig:song}, they have been removed smoothly from the input image without any significant residual pattern.
Using this ``cleaned'' image as input, we model the two merging galaxies along with their ``common envelope'' together using different combinations of {S\\'{e}rsic\\ } components. We started with three {S\\'{e}rsic\\ } components (one for each galaxy, an additional one for the envelope), and gradually built up the complexity by adding more {S\\'{e}rsic\\ } components. As we simply want to achieve a smooth residual map to study the underlying structure, the number of {S\\'{e}rsic\\ } components and the detailed parameters are not a concern as long as each component behaves normally (e.g. Gu\\ et al.\\ 2013). For each object (including the envelope), all {S\\'{e}rsic\\ } components are constrained to have the same center, and only symmetric {S\\'{e}rsic\\ } components are used here. After visualizing the residual of the initial model, it became clear that, on the lower part of the image, there is an additional surface brightness enhancement (caused by the merging process) that is not well fit by the simple model. Eventually, the best model we achieved includes 7 {S\\'{e}rsic\\ } components plus a tilted-plane sky background component. Each of the merging galaxies and the extended envelope is described by 2 {S\\'{e}rsic\\ } components, while the surface brightness enhancement in the south is modelled using a single {S\\'{e}rsic\\ } component. All components behave regularly in terms of size and shape; except for the central component of the brighter galaxy, no component has a {S\\'{e}rsic\\ } index larger than 2.0. \n\nPanels (C) and (D) of Figure \\ref{fig:song} show the model image and the residual map of this model reconstruction, respectively. It clearly reveals a rich system of shells and tidal tails around the main MaNGA galaxy, indicating an on-going interaction between these two galaxies. Although the residual map is not perfectly smooth, no optical counterpart is found at the position of Totoro.
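The multi-component {S\\'{e}rsic\\ } modelling can be illustrated with a minimal sketch. The profile below uses the common approximation $b_n \\approx 2n - 1\/3$, and the component parameters are purely hypothetical, not the fitted values:

```python
import numpy as np

def sersic_2d(x, y, amp, r_e, n, x0, y0, ellip=0.0, theta=0.0):
    """Elliptical Sersic surface-brightness profile (b_n approximated)."""
    bn = 2.0 * n - 1.0 / 3.0                      # good for n >~ 0.5
    dx, dy = x - x0, y - y0
    xm = dx * np.cos(theta) + dy * np.sin(theta)  # rotate into the major axis
    ym = -dx * np.sin(theta) + dy * np.cos(theta)
    r = np.sqrt(xm**2 + (ym / (1.0 - ellip))**2)  # elliptical radius
    return amp * np.exp(-bn * ((r / r_e)**(1.0 / n) - 1.0))

# A full model is just a sum of such components sharing a center per object
# (two per galaxy, two for the envelope, one for the southern enhancement);
# here only two hypothetical components on a 400x400 grid, as in the text.
y, x = np.mgrid[0:400, 0:400]
model = sersic_2d(x, y, 50, 12, 1.5, 200, 200) + sersic_2d(x, y, 5, 60, 0.8, 200, 200)
print(model[200, 200])  # surface brightness at the shared center
```

In practice \\texttt{GALFIT} optimizes all component parameters jointly against the image and PSF; the sketch only shows how the composite model is assembled.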
\n\nReducing the number of components used, or changing the initial guess, does not affect the above conclusion. Adding more components does not further improve the residual map, but results in ill-behaved {S\\'{e}rsic\\ } components. Given the complex nature of this merging system, we also tried invoking the asymmetric features in \\texttt{GALFIT}, especially the 1st (global lopsidedness) and 4th (boxiness of the isophote) Fourier components (see Peng\\ et al.\\ 2010 for details). The residual map shows improvements around the tidal features, but does not change the conclusion that there is no apparent optical counterpart for Totoro \\footnote{The results still hold if we repeat the analysis using the $i$-band image, which is more sensitive to the stellar mass distributions.}. \n\n\\subsection{Kinematics from the MaNGA Observations\\label{sec:velocity}}\nFigure \\ref{fig:velocity} displays the velocity and dispersion maps for this system. \nWhile the stellar velocity field indicates that the main galaxy Satsuki is primarily pressure supported, the gas component reveals more complex structures. The central galaxy shows a weak rotation structure, while there is strong variation in the line-of-sight velocity across the Totoro region, redshifted in the left tail and blueshifted in the right tail. The inconsistency between the stellar and gas velocity fields suggests that some part of the gas of Satsuki might have been either accreted or ejected recently. In the former case, it is similar to those early-type galaxies that exhibit misaligned gas and stellar kinematics, the so-called early-type `counter rotators' \\citep{sar06,dav11,che16,jin16}, although counter rotators in general are defined for systems with a rotating stellar component. The gas inflow scenario is also consistent with the H$\\alpha$~morphology, which shows bridge (tail)-like structures that connect Totoro and Satsuki.
On the other hand, the complicated velocity field can also be explained if the main galaxy Satsuki underwent a strong outburst phase, during which Totoro was expelled from Satsuki.\n\n\\subsection{Excitation State \\label{sec:bpt}}\n\nIn addition to the H$\\alpha$~and [NII] 6548,6584~lines, some other weak lines such as [SII] 6717,6731~ and [OIII] 5007~ are also detected in both Satsuki and Totoro, allowing us to probe the ionization state of this system. Figure \\ref{fig:o3hb} shows the [OIII] 5007\/H$\\beta$~ratio, one of the frequently used ionization parameters, as a function of the distance from the main galaxy. There exists a strong [OIII] 5007\/H$\\beta$~gradient, decreasing from the main galaxy Satsuki (location A) to the left bridge (locations B, C, and D) that connects to Totoro (locations E, F, G, H, and I). On the other hand, the [OIII] 5007\/H$\\beta$~ratio is nearly constant across Totoro. \n\nThe multiple line detections also allow us to classify the emission line regions into HII or AGN regimes using the standard Baldwin-Phillips-Terlevich (BPT; Baldwin, Phillips \\& Terlevich 1981; Veilleux \\& Osterbrock 1987; Kauffmann et al. 2003; Kewley et al. 2006) excitation diagnostic diagrams. Here we apply three types of line diagnostics based on four line ratios: [OIII] 5007\/H$\\beta$, [NII] 6584\/H$\\alpha$, [SII] 6717,6731\/H$\\alpha$, and [OI] 6300\/H$\\alpha$. Figures \\ref{fig:bpt} and \\ref{fig:bptmap} display the line ratio diagrams and classification maps for this system, respectively. We adopt the dividing curves suggested in the literature \\citep[e.g.,][]{kew01,kau03,cid10} to separate the various regions. All three classifications indicate `LINER'-type excitations for Satsuki.
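The [NII]-based classification can be sketched with the \\citet{kau03} and \\citet{kew01} dividing curves (a minimal sketch; spaxels between the two curves are `composite'):

```python
def bpt_nii_class(log_nii_ha, log_oiii_hb):
    """Classify a spaxel on the [NII] BPT diagram.

    Below the Kauffmann et al. (2003) curve -> star-forming (HII);
    between it and the Kewley et al. (2001) maximum-starburst curve ->
    composite; above -> AGN/LINER (separating Seyferts from LINERs needs
    additional diagnostics, e.g. the [SII] or [OI] diagrams).
    """
    if log_nii_ha < 0.05 and log_oiii_hb < 0.61 / (log_nii_ha - 0.05) + 1.30:
        return "HII"
    if log_nii_ha < 0.47 and log_oiii_hb < 0.61 / (log_nii_ha - 0.47) + 1.19:
        return "composite"
    return "AGN/LINER"

print(bpt_nii_class(-0.5, -0.3), bpt_nii_class(-0.2, 0.0), bpt_nii_class(0.3, 1.0))
```

Applying such a function spaxel by spaxel to the dereddened line-ratio maps yields classification maps like those in Figure \\ref{fig:bptmap}.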
As the LINER emission is extended across the entire main galaxy, this object falls into the extended LIER (eLIER) category according to the classification scheme of \citet{bel16a,bel16b}.\n\nOn the other hand, in the regions of Totoro, the line ratios are consistent with the `composite', `HII', and `LINER' regions when using the [NII], [SII], and [OI] diagnostics, respectively. In the remaining part of this paper, we treat Totoro as a `composite' region based on the [NII] diagnostic, as it allows for an intermediate excitation state as opposed to the other two methods.\n\n\n\n\n\begin{figure*}\n\includegraphics[angle=-90,width=17cm]{manga-7991-12702_BPT_elines.eps}\n\caption{Upper panels: BPT diagnostic diagrams for MaNGA 1-24145\n based on the [OIII] 5007\/H$\beta$~ vs. [NII] 6584\/H$\alpha$~ (left panels), [OIII] 5007\/H$\beta$~ vs. [SII] 6717,6731\/H$\alpha$~(middle panels), and [OIII] 5007\/H$\beta$~ vs. [OI] 6300\/H$\alpha$~(right panels). Various colors on the data points indicate the physical separation from the center of Satsuki (from near to far: red, orange, green, blue, purple). The solid, dashed, and dotted lines show the classification lines suggested by \citet{cid10}, \citet{kew01}, and \citet{kau03}, respectively. The blue curves display the model predictions of the shock and photoionization mixing sequences by \citet{ho14}. Each line corresponds to a certain shock fraction (from bottom to top: 20\% to 100\%). The shock velocity ranges from 100 to 300 ${\rm km~s^{-1}}$ (from left to right). The MAPPINGS III shock+precursor models (gray grid) with $n$ = 1 cm$^{-3}$ from \citet{all08} are also shown for comparison. The thick lines represent constant magnetic parameter while the thin lines display constant shock velocity ranging from 200 to 1000 ${\rm km~s^{-1}}$ (bottom to top; with 25 ${\rm km~s^{-1}}$ intervals).\nBottom panels: A zoomed-in view of the upper panels. 
The letters (A, B, C, etc.) mark different locations in this system, following the definitions of Figure \ref{fig:o3hb}.\n\label{fig:bpt}}\n\end{figure*}\n\n\n\begin{figure*}\n\includegraphics[angle=-90,width=17cm]{manga-7991-12702_BPTmap_elines.eps}\n\caption{BPT classification maps for MaNGA 1-24145\n based on the [OIII] 5007\/H$\beta$~ vs. [NII] 6584\/H$\alpha$~ (left panel) and [OIII] 5007\/H$\beta$~ vs. [SII] 6717,6731\/H$\alpha$~(right panel). The [OI] diagnostic map is not included here as all spaxels are classified as LINER.\n\label{fig:bptmap}}\n\end{figure*}\n\n\begin{figure}\n\includegraphics[angle=-90,width=11cm]{manga-7991-12702_gasmetal_N2S2Ha.eps}\n\caption{The gas metallicity map of MaNGA 1-24145\n, computed using the method described in \citet{dop16} based on the emission line ratios among [NII], [SII], and H$\alpha$.\n\label{fig:metal}}\n\end{figure}\n\n\subsection{Gas metallicity \label{sec:metal}}\nTo further understand the properties of Totoro, we also measure the gas-phase metallicity (Z) for this system. Conventionally, there are many ways to estimate the gas metallicity through emission line ratios \citep[see][]{kew08}. However, most of those tracers are calibrated against HII regions, and may not be applicable to systems with ionization parameters or interstellar medium (ISM) pressure different from typical HII regions. Here, we adopt the `N2S2H$\alpha$'~ calibrator that is suggested to be less sensitive to the ionization parameters \citep{dop16} to estimate the metallicity. The N2S2H$\alpha$~calibration can be expressed as follows:\n\n\begin{equation}\label{eq:N2S2ha}\n12 + \mathrm{log (O\/H)} = 8.77 + \mathrm{N2S2H}\alpha\n\end{equation}\nwhere N2S2H$\alpha$ = Log([NII]\/[SII]) + 0.264 Log([NII]\/H$\alpha$). The [NII] to [SII] ratio is found to be nearly independent of the AGN luminosity and hence provides a good metallicity indicator even for AGNs \citep{ste13}. 
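To illustrate the behavior of this calibrator (with hypothetical line ratios chosen only to show the arithmetic, not values measured for this system), a region with [NII]\/[SII] = 1.0 and [NII]\/H$\alpha$ = 0.5 would give N2S2H$\alpha$ = Log(1.0) + 0.264 Log(0.5) $\approx -0.08$, and hence 12 + log(O\/H) $\approx 8.77 - 0.08 \approx 8.69$, i.e., close to the solar value. 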
However, we note with caution that the derived metallicity may still be subject to large systematic uncertainty since the line ratios of the main galaxy and Totoro are located in the LINER and composite regions, respectively. \n\nFigure \ref{fig:metal} shows the Z(N2S2H$\alpha$) map for this system. The metallicity of the gas around Satsuki is close to solar and is $\sim$ 0.3 dex lower than the expected value (12 + log(O\/H) = 9.1) for a massive galaxy with a stellar mass of $10^{11}$ $\rm M_{\odot}$~ based on the local mass--metallicity relation \citep{tre04}. The offset we see here, however, may be easily accounted for by the different metallicity calibrators adopted. On the other hand, the averaged metallicity in the H$\alpha$~ region is greater for Totoro than for Satsuki by 0.3 dex and is more consistent with the gas metallicity of high mass galaxies. Although the difference is significantly larger than the statistical uncertainty (0.03--0.1 dex) in the metallicity measurement, it is difficult to interpret this result given the systematic uncertainty in the metallicity measurement due to their different ionization states.\n\n\n\subsection{Environment \label{sec:environment}}\nThe environment of this system\n is rather complex. As already mentioned, it has an early-type companion (Mei) just 4 kpc away to the south, which makes them a possible dry merger candidate. Moreover, Satsuki\n is also part of the system MCG+10+24-117, a small group falling onto a galaxy cluster with NGC 6338 as the central brightest cluster galaxy, 43 kpc away to the south (see Figure \ref{fig:cfht}; Pandge et al. 2012; Dupke et al. 2013). \n\n\n\n\n\n\n\n\n\section{DISCUSSION}\label{sec:discussion}\n\n\n\n\n\n\n\n\n\subsection{The origin of the H$\alpha$~ blob}\nUsing the first-year MaNGA data, we discover a giant H$\alpha$~blob, Totoro, associated with a dry merger system. 
This object, however, does not have any optical counterparts down to 26.9 mag arcsec$^{-2}$ in deep CFHT\/MegaCam $g,r$, and $i$ images. \nThere are several possible scenarios to explain the origin of Totoro: \n\n\n\n\n\n\nScenario 1: Totoro is associated with gas tidally stripped from Satsuki during\nthe interaction between Satsuki and Mei. \n\nScenario 2: Totoro is associated with the gas ram-pressure stripped during the infall of\nSatsuki\n toward the center of the NGC 6338 galaxy cluster, similar to NGC 4569 located in the Virgo cluster (Boselli et al. 2016).\n \nScenario 3: Totoro is associated with gas ejected from Satsuki by an AGN outflow during\nthe merger between Satsuki and Mei. If the central black hole of\nSatsuki is turned on during the merger, the energy can ionize the stripped gas, resulting in\nthe H$\alpha$~emission, similar to the known `Hanny's Voorwerp' phenomenon (Lintott et al. 2009).\n\nScenario 4: Totoro is a UDG (or, alternatively, an LSB galaxy), which falls\nbelow the detection limit (26.9 mag arcsec$^{-2}$ in $r$-band) of CFHT imaging data, making it a `dark' galaxy. The morphology of Totoro (Figure 1), especially the (tail) bridge-like structures that connect Satsuki and Totoro, indicates that Satsuki\n is likely interacting with the host galaxy of\nTotoro, which has a physical size comparable to that of Satsuki. \n\n\n\n\nTo estimate the mass of the warm gas component, we follow the approach adopted by \citet{che16b}. We first estimate the electron density $n_{e}$ to be $\sim$ 260 cm$^{-3}$ based on the median value of the [SII]6717\/[SII]6731 ratio in the region of Totoro following Equation 3 of \citet{pro14}, which assumes the electron temperature $T_{e} = 10,000$~K. Next, we calculate the extinction-corrected H$\beta$~luminosity, $L_{\mathrm{H}\beta}$, to be 3.1$\times 10^{39}$ erg s$^{-1}$. 
The ionized gas mass can then be derived using the following equation \citep{ost06,fu12}:\n\begin{equation}\label{eq:mass}\n\frac{M_{\rm HII}} {6.8\times 10^{7}~ \rm M_{\odot}} = \bigg(\frac{L_{\mathrm{H}\beta}}{10^{40}~\mathrm{erg~s^{-1}}}\bigg)\bigg(\frac{n_{e}}{1~ \mathrm{cm}^{-3}}\bigg)^{-1}.\n\end{equation}\nWe obtain an ionized gas mass of $\sim$ 8.2$\times$ 10$^{4}$ $\rm M_{\odot}$.\n\nThe HI observation of this system provides an upper limit of (8.9--9.2) $\times 10^{8}$ $\rm M_{\odot}$~for the HI mass due to the null detection. We account for He gas by applying a factor of 1.33 to this upper limit. We can also infer the H$_{2}$ content of Totoro using the H$\alpha$~emission. The integrated H$\alpha$~flux over spaxels within the Totoro region is 3.3$\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$, corresponding to a luminosity of 7.8$\times 10^{39}$ erg s$^{-1}$. Assuming all the H$\alpha$~flux results from star formation, we obtain the total star formation rate (SFR) of Totoro to be 0.059 $\rm M_{\odot}$ yr$^{-1}$ using the conversion between the H$\alpha$~luminosity and star formation rate from \citet{ken98}.\nWe then estimate the total H$_2$ gas mass ($=$ SFR $\times$ t$_{dep}$) of the H$\alpha$ blob to be $\sim1.2\times 10^8\,M_\odot$ assuming a typical gas depletion time (t$_{dep}$) of 2 Gyr. This value is also an upper limit since we have assumed that all the H$\alpha$~flux is contributed by star formation. This implies an upper limit on the entire cold gas (HI + He + H$_{2}$) of $\sim$ 1.3 $\times 10^{9}$ $\rm M_{\odot}$.\n\nAlthough Satsuki and its companion Mei are consistent with being early-type galaxies (ETGs), the amount of the cold gas in Totoro is not totally unexpected if it is part of Satsuki. For example, recent works have found that a significant fraction of massive ETGs that show signs of star formation possess cold gas with gas mass comparable to that of Totoro \citep{osu15,dav16}. 
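As a quick consistency check, both mass estimates follow directly from the numbers quoted above:\n\begin{align*}\nM_{\rm HII} &\approx 6.8\times 10^{7}~{\rm M_{\odot}} \times \frac{3.1\times 10^{39}}{10^{40}} \times \frac{1}{260} \approx 8.1\times 10^{4}~{\rm M_{\odot}},\\\nM_{\rm H_{2}} &\approx 0.059~{\rm M_{\odot}~yr^{-1}} \times 2\times 10^{9}~{\rm yr} \approx 1.2\times 10^{8}~{\rm M_{\odot}},\n\end{align*}\nin agreement, to rounding, with the values adopted in the text. 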
Therefore, it is reasonable to speculate that Totoro may originally be part (if not all) of the cold gas contained in Satsuki that was then ejected during the galaxy-galaxy interaction, by ram-pressure stripping, or by an AGN outflow (scenarios 1 to 3, respectively). \n\n\n\n\nTo test the galaxy interaction scenario (scenario 1), we have performed and examined a set of N-body\nsimulations of the interaction of two elliptical galaxies (E-E) with the\ncode Gadget2 (Springel 2005). Different initial orbital configurations\n(e.g., pericentric distance) have been considered in a similar way to Peirani\net al. (2010). Although we did not include any gas component in our dry merger simulations, the features in the stellar components can still be useful for understanding the system, as the stellar streams trace the gas streams well during the first 1 Gyr of the interaction. \n\nAmong all the cases we have looked at, we did not see the formation of any clear stellar stream or centrally concentrated blob-like structure during the interaction. This strongly suggests that Totoro is unlikely to be produced during an E-E interaction. In addition, based on other existing gas-rich merger simulations (see\nfor instance Barnes \& Hernquist 1996; Di Matteo et al. 2007; Peirani et\nal. 2010), the prominent gas (or stellar) streams are expected to be\nformed when at least one of the merger progenitors has extended disk structures, unlikely to be the case for this dry merger system even if these two ellipticals have a small amount of gas. Furthermore, these simulations also suggest that it is very\ndifficult to produce such a blob-like structure. Although tidal dwarf\ngalaxies can be formed during the interaction of disk galaxies (see for\ninstance Bournaud, Duc \& Masset 2003; Duc, Bournaud \& Masset 2004), in these cases the\nblob-like structure is more extended, with no clear star formation\nactivity. 
Nevertheless, we note that the simulations we have examined have the following caveats: 1) the possible parameter space is extensive and thus impossible to explore fully, and 2) stellar\/AGN feedback, which could drive galactic-scale outflows, is not included. Such advanced simulations are beyond the scope of this work. We defer more detailed comparisons with simulations to future work.\n\n\nOn the other hand, the centrally concentrated blob-like structure is not expected in the ram-pressure stripped gas (scenario 2), either, which often shows tail and\/or `jellyfish'-like structures (e.g., Boselli et al. 2016). Moreover, galaxies that show ram-pressure stripping phenomena are mostly gas-rich late-type galaxies, unlike Satsuki.\n\n\nAnother explanation for Totoro is material ejected by an AGN located in Satsuki (scenario 3), similar to the extended emission-line regions found around quasars \citep{fu07a, fu07b}. The outflow scenario is also consistent with the complicated velocity field seen in the gas component of this system. On the other hand,\nas discussed in \S 2, we do not detect an X-ray point source or an extended radio jet in the main galaxy, suggesting that there is no on-going strong AGN activity. However, it has been known that the AGN brightness can vary over timescales of several to 10$^{5}$ years \citep{den14,mce16}, either due to a change in the black-hole accretion rate or Tidal-Disruption Events \citep[TDEs, e.g.][]{sax12,mer15}. Therefore, we cannot rule out the possibility of a recent past AGN outflow, similar to the notable phenomenon of Hanny's Voorwerp, in which the ionizing source has already diminished whereas a clump of ionized gas is found several kpc away. 
Nevertheless, it is unclear why the ejected gas would have a higher metallicity than the gas remaining in Satsuki (although it is unclear whether the metallicity difference is real given the potential systematics in our metallicity estimate).\n\n \nAn alternative explanation of the H$\alpha$~blob is a separate faint galaxy, interacting with the dry merger system (scenario 4). \nA few morphological features, for example, the centrally concentrated H$\alpha$~and the bridges extended from Totoro toward Satsuki, are most consistent with Totoro being a separate gas component. According to numerical simulations, these features can indeed be explained by the interaction between the main galaxy and a faint disk galaxy (scenario 4) (see for instance, Peirani et al. 2010; Cheung\net al. 2016).\n\nRecently, a class of UDGs has been identified in the Coma cluster and several low-redshift clusters \citep{van15a,van16}. These UDGs are surprisingly large in size ($r_{eff}$ = 1.5 -- 4.6 kpc) despite their low stellar masses ($< 10^{8}$ $\rm M_{\odot}$). The majority of these UDGs are found to lie on the red sequence, indicating old stellar populations and quenched star formation activities \citep{van16}. Possible formation mechanisms include that gas is stripped by the ram pressure when falling into the cluster, which prevents subsequent star formation, or that gas is expelled due to strong stellar or supernova feedback for galaxies with halo mass between $10^{10}$--$10^{11}$ $\rm M_{\odot}$~\citep{dicin16}. Nevertheless, the origin of UDGs remains an open question, as does whether these UDGs are already quenched before falling into the cluster environments. Totoro, identified in this work, has a size ($\sim$ 3.2 kpc) comparable to the UDGs. However, the averaged surface brightness of Totoro has an upper limit of 26.9 mag arcsec$^{-2}$, at least 1--3 mag fainter than that of the known UDGs, which ranges from 24 to 26 mag arcsec$^{-2}$. 
Another aspect of Totoro that is distinct from the known UDGs is that the latter generally have old stellar populations, as indicated by their red colors, whereas Totoro is likely to have a small amount of on-going star formation based on the BPT diagnostics but little old stellar population. Therefore, Totoro may represent a different category of LSB galaxy from the quiescent UDGs.\n\n\n\nThe non-detection of optical light associated with Totoro can provide an upper limit on the amount of star formation. Assuming that this H$\alpha$~cloud is a young star-forming system with age $< 0.1$ Gyr, the flux in the optical regime is expected to be comparable to that in the UV (1500--2800 \AA) in the case of no dust extinction. Following the conversion between the UV luminosity, H$\alpha$~luminosity, and the star formation rate given by \citet{ken98} and adopting the surface brightness limit (26.9 mag arcsec$^{-2}$) from our CFHT MegaCam $r$-band observation, we estimate that the corresponding H$\alpha$~surface brightness of Totoro is of the order of 1.1 $\times10^{-17}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$, which is an order of magnitude lower than the peak value of Totoro. By integrating it over the H$\alpha$~region, we estimate that star formation contributes less than 37\% of the total H$\alpha$~flux, otherwise we should be able to detect the optical counterpart.\n\nGiven the very low amount of star formation that can possibly occur in Totoro, we speculate that Totoro is a gas cloud that fails to form stars efficiently, and is thus a `dark' gas cloud. The origin of this gas cloud, whether it is associated with a dark matter satellite or is a pure gas cloud, is difficult to pin down. However, the latter scenario is unlikely since it would be difficult to maintain the kpc-scale gas cloud against gravity without invoking the existence of a dark matter halo. 
If Totoro is indeed hosted by a subhalo, it would be strong evidence for the existence of `dark' subhalos, which would help to alleviate the missing satellite problem. Since there is no stellar component in the region of Totoro, we can roughly estimate the halo mass of Totoro based on the gas velocity dispersion using the virial theorem. Taking R = 3.2 kpc and $\sigma_{gas} = 50$ ${\rm km~s^{-1}}$, we obtain the halo mass M$_{halo}\sim 5.6\times10^{9}$ $\rm M_{\odot}$.\n\nOn the other hand, the high metallicity of Totoro (see Section 3.4) is unexpected for a typical dwarf galaxy. One possible explanation is that the gas cloud has been enriched by the surrounding environment, and may have a different evolutionary history from typical known dwarf galaxies. However, we caution that the metallicity measurement presented in this work is subject to large uncertainty since the ionization source is not well constrained. \n\n\n\n\subsection{Sources of ionization}\n\n\begin{deluxetable*}{llllllllll}\n\tabletypesize{\scriptsize}\n\tablewidth{0pt}\n\tablecaption{Properties of MaNGA 1-24145 (Satsuki), its southern companion (Mei), and the H$\alpha$~blob (Totoro).\label{tab:property}}\n\tablehead{\n\t\colhead{Object} &\n \colhead{$z$} &\n \colhead{RA} &\n \colhead{DEC} &\n \colhead{M$_{*}$ ($\rm M_{\odot}$)} &\n \colhead{M$_{\rm HII}$ ($\rm M_{\odot}$)} &\n \colhead{M$_{H_{2}}$ ($\rm M_{\odot}$)} &\n \colhead{M$_{HI}$ ($\rm M_{\odot}$)} &\n \colhead{M$_{halo}$ ($\rm M_{\odot}$)} &\n \colhead{SFR ($\rm M_{\odot}$ yr$^{-1}$)} \n }\n\n\startdata\nSatsuki & 0.0322 & 258.84695 & 57.43288 & $1.2 \times 10^{11}$ & \nodata & \nodata & $< 9.2 \times 10^{8}$ & \nodata & $<$ 0.049$^{b}$\\\nMei & 0.0322 & 258.84750 & 57.43133 & $4.0 \times 10^{10}\,^{a}$ & \nodata & \nodata & $< 9.2 \times 10^{8}$ & \nodata & \nodata\\\nTotoro & 0.0322 & 258.84314 & 57.43529 & \nodata & $8.2 \times 10^{4}$ & $< 1.2 \times 10^{8}$ & $< 9.2 \times 10^{8}$ & 
$5.6 \times 10^{9}$ & $<$ 0.059$^{b}$\n\enddata\n\n\tablecomments{$^{(a)}$ This is scaled from the stellar mass of Satsuki by using the difference in their SDSS $r$-band magnitudes; $^{(b)}$ This upper limit is derived assuming all the H$\alpha$~flux comes from star formation.}\n\n\end{deluxetable*}\n\nShocks can often lead to line ratios similar to those occupying the composite regions \citep{ho14}. In the presence of shocks, the gas velocity dispersion ($\sigma_{gas}$) is expected to be as high as several hundred ${\rm km~s^{-1}}$. As revealed in Figure \ref{fig:velocity}, $\sigma_{gas}$ in the region of Totoro is $\sim 50$ ${\rm km~s^{-1}}$. Although this is close to the lower end of the velocity dispersion distribution found in typical shocked regions, we cannot rule out shock excitation of the emission lines. To gain insight into whether shocks are responsible for producing the line ratios seen in this system, we show the shock and photoionization mixing models from \citet{ho14} as blue curves in the upper panels of Figure \ref{fig:bpt}. These models are produced based on the \texttt{MAPPINGS IV} code \citep{dop13} and span a wide range of shock fractions (from 20\% to 100\%) and shock velocities (from 100 to 300 ${\rm km~s^{-1}}$). As can be seen, the models predict a greater [OIII] 5007\/H$\beta$~ratio than is observed in the data and do not cover the majority of the regions occupied by the data points even at shock velocities up to $\sim300$ ${\rm km~s^{-1}}$. In addition to the pure shock models, we also compare the data to the shock + precursor models of Allen et al. (2008) with $n$ = 1 cm$^{-3}$ and solar metallicity, shown as the gray grid in Figure \ref{fig:bpt}. Although the grid starts at a shock velocity of 200 ${\rm km~s^{-1}}$, it is expected by extrapolation that models with lower velocity values still cannot reproduce the [NII]\/H$\alpha$~ratios of Totoro. 
These comparisons suggest that shocks are unlikely to be the dominant mechanism that is responsible for ionizing the gas blob. \n\n\nThere have been studies showing that AGNs are able to ionize gas clouds extending to several kpc \citep{lin09,fu08}, and the effect from an AGN may persist even $\sim10^{5}$ yr after the central engines shut off \citep[e.g., the `Hanny's Voorwerp';][]{lin09}. If Totoro is indeed a `dark' galaxy or a cloud interacting with Satsuki, the tidal field would cause a gas inflow that fuels the central black hole of Satsuki and possibly trigger the AGN activity. \nAlthough no X-ray point source is detected in the position of either Satsuki or Totoro, the detection of a point-like radio source (see Sec. 2.3) in the center of Satsuki indicates the presence of a low-activity AGN. Therefore, an alternative explanation for the line ratios seen in Totoro is the star formation -- AGN mixing effect \citep{dav14a,dav14b}. \n\nObservationally, a starburst--AGN mixing sequence is often found in starbursting galaxies that host a central AGN, in which case the line ratio moves from Seyfert to composite to HII regions as the distance from the central AGN increases. The position of the line ratios depends on the fractional contributions of the AGN and star formation. According to the mixing model by \citet{dav14b}, when the star-forming cloud is photoionized by an AGN, the line ratios can fall into the `composite' region on BPT diagrams. In the case of 100\% contribution from the AGN, the emission line ratio falls into the `AGN' region, instead of the `composite' region where Totoro lies. This implies that the emission line ratios seen in Totoro cannot be fully attributed to pure AGN photoionization, and some level of star formation may be required. 
Unlike the typical star formation -- AGN mixing \citep{dav14a,dav14b}, our case is analogous to the star formation -- LINER sequence, in which the LINER excitation is due to the low-activity AGN located in Satsuki.\n\n\n\n\n\n\subsection{Comparison to similar objects in the literature}\nThere are several similar systems reported in the literature that show offset ionized gas components, and hence it is intriguing to compare Totoro with previous cases. For example, the famous Hanny's Voorwerp \citep{lin09} also exhibits a separate warm gas component that is offset from the main galaxy by several kpc. However, there are several differences between Hanny's Voorwerp and the Totoro presented in this work. First, the ionized gas of Hanny's Voorwerp has a much higher excitation state and its line ratios are consistent with Seyfert, as opposed to the `composite' region for Totoro. Moreover, the H$\alpha$~morphology of Hanny's Voorwerp is lumpy and irregular, unlike the disk-like structure seen in our case. Another well-known example is an H$\alpha$~emission line component (often referred to as the `cap') at a projected distance of 11 kpc northwest of M82 \citep{dev99,leh99,ste03}. The cap has a shell-like structure and may possibly be a bow shock formed by the starburst-driven superwind. In both cases, the nearby galaxy is a late-type star-forming galaxy, different from Satsuki\n, which is an early-type galaxy. Totoro may thus represent a different category of offset ionized gas in nearby galaxies. \n\n\section{CONCLUSION}\label{sec:conclusion}\n\n\n\nHere we report the discovery of a puzzling giant H$\alpha$~blob, Totoro, identified from the first-year MaNGA data. The data disfavor the scenario that Totoro is tidally-stripped gas from MaNGA 1-24145 (Satsuki) that is interacting with the southern companion (Mei), or the ram-pressure stripped gas when Satsuki falls into the center of the cluster in which it is located. 
Despite there being no X-ray point source or radio jet detected in this system, we cannot rule out the possibility that Totoro was ejected by past AGN activity in Satsuki, which likely hosts a faint AGN given its radio luminosity. On the other hand, the H$\alpha$~morphology and the lack of stellar tidal streams suggest that Totoro could also be a separate `dark' galaxy (or an extremely LSB galaxy) interacting with Satsuki. The non-detection of the stellar continuum suggests Totoro is different from known dwarf populations or UDGs: it is either completely `dark' or has a star formation rate that contributes $<37\%$ of the H$\alpha$~flux. \n\nAs for the source that powers the line excitation for Totoro, the `composite' line excitation can be explained either by a star-forming cloud being excited by a low-velocity shock or by the star formation -- LINER mixing effect. The shock scenario, however, is less favoured because of the low velocity dispersion observed in the Totoro region. The decrease in the [OIII] 5007\/H$\beta$~ratio away from Satsuki indicates that the ionizing source is possibly located inside Satsuki. Thus, the star formation -- LINER mixing effect seems to be the most probable ionizing mechanism. In this scenario, the hypothesis is that the ionizing source is the low-activity AGN residing in Satsuki, being triggered by the gas inflow induced during the interaction between Satsuki and the `dark' galaxy (or gas cloud). The AGN subsequently photoionizes the `dark' gas cloud, which then emits the H$\alpha$~photons. \n\nHowever, we have not considered the case where Totoro is tidally-stripped from Satsuki while falling into the cluster center at the same time, which results in the non-typical tidally or ram-pressure stripped gas morphology of Totoro. 
More sophisticated modelling considering both the effects of tidal disruption and the orbital motion of Satsuki relative to the cluster environment, as well as future resolved atomic and molecular gas observations, such as HI and CO, are required to further understand the origin of Totoro. \n\n\n\n\n\acknowledgments\n\nWe thank the anonymous referee for constructive suggestions which significantly improved the clarity of this paper. L. Lin thanks I-Ting Ho, Michal Michalowski, Yen-Ting Lin, You-Hua Chu, Lisa Kewley, Tomo Goto, Jorge Barrera-Ballesteros, and Christy Tremonti for useful discussions. The work is supported by the Ministry of Science \& Technology of Taiwan\nunder the grant MOST 103-2112-M-001-031-MY3. H.F. acknowledges support from the NSF grant AST-1614326 and funds from the University of Iowa. S. Peirani acknowledges support from the Japan Society for the Promotion of\nScience (JSPS long-term invitation fellowship). J.G.F-T is currently supported by Centre National d'Etudes Spatiales (CNES) through PhD grant 0101973 and the R\'egion de Franche-Comt\'e and by the French Programme National de Cosmologie et Galaxies (PNCG). \n\nFunding for the Sloan Digital Sky Survey IV has been\nprovided by the Alfred P. Sloan Foundation, the U.S.\nDepartment of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support\nand resources from the Center for High-Performance\nComputing at the University of Utah. The SDSS web\nsite is www.sdss.org. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating\nInstitutions of the SDSS Collaboration including the\nBrazilian Participation Group, the Carnegie Institution\nfor Science, Carnegie Mellon University, the Chilean\nParticipation Group, the French Participation Group,\nHarvard-Smithsonian Center for Astrophysics, Instituto\nde Astrof\\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) \/ University of Tokyo, Lawrence\nBerkeley National Laboratory, Leibniz Institut f\\\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\\\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\\\"ur\nAstrophysik (MPA Garching), Max-Planck-Institut f\\\"ur\nExtraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University,\nNew York University, University of Notre Dame, Observat\\'ario Nacional \/ MCTI, The Ohio State University,\nPennsylvania State University, Shanghai Astronomical\nObservatory, United Kingdom Participation Group, Universidad Nacional Aut\\'onoma de M\\'exico, University of\nArizona, University of Colorado Boulder, University of\nOxford, University of Portsmouth, University of Utah,\nUniversity of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \n\nThe National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work used data from project AGBT16A\\_095: `HI-MaNGA: HI Followup of MaNGA galaxies, PI Karen L. Masters. 
\n\nThis work is partly based on observations obtained with MegaPrime\/MegaCam, a joint project of CFHT and CEA\/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l\'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii.\n\nThis research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO, ChIPS, and Sherpa.\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\subsection{Review of Crater Detection}\nPlanetary rovers to date have all used stereo cameras for terrain imaging and 3D perception \cite{goldberg2002stereo}. Future rovers (to Moon and Mars) might have LIDAR, either by itself or in addition to one or more cameras. Thus, any combination of these sensors could be used for crater detection. To cover this set of options and to create a foundation for identifying the best approach in the future, crater detection algorithms were developed for three classes of technical approach: (1) Using 3D point clouds from LIDAR, (2) Using 3D point clouds from stereo vision, and (3) Using deep-learning based pattern recognition with monocular images. \n\nThe original plan was to treat crater detection as a process done independently of any knowledge of the crater landmark map and rover position; however, work last year showed that more reliable crater detection results were achieved by assuming approximate prior knowledge of rover position, which is realistic in practice, and using that to allow the crater detection process to invoke 3D models of craters expected to be around the rover based on this prior knowledge. Such approximate prior knowledge was used for the LIDAR-based approach developed, but has not yet been carried over to the other approaches. 
Geometric analysis methods were applied to the point clouds from LIDAR and stereo vision; machine learning with neural nets was used for monocular images.\n\nResults of quantitative performance evaluation with geometric analysis applied to simulated 3D point clouds from LIDAR show high reliability for detecting craters with a leading edge within about 15 m from the rover. The results also suggested that rover localization with an error less than 5 m is highly probable. Somewhat simpler geometric analysis methods were applied to simulated 3D point clouds from stereo vision, which are noisier than LIDAR-based point clouds. Stereo-based detection degraded at shorter range than for LIDAR and obtained significantly higher crater position estimation error; nevertheless, rover localization with error in the 5 m range still appears to be possible. Monocular appearance-based detection was done with a CNN-based machine learning algorithm; this produced detection results in image space, but did not produce 3D crater position and size estimates. Detection performance exceeded that of the other two methods, making this a very promising approach for future crater-based localization systems. See \cite{matthies2022lunar} for details on the algorithms and performance characterization of crater detection.\n\n\begin{figure}\n \centering\n \includegraphics[width=\linewidth]{figures\/particle-filter-overview.pdf}\n \vspace{2mm}\n \caption{\textbf{Overview of the Particle Filter framework. Steps associated with a particle filter approach. (1) The particles associated with the a posteriori function at t-1 are (2) sampled; and (3) propagated, leading to the a priori function P($X_t|Z_{(t-1)}$). 
Then, upon (4) observation at t, we have the (5) a posteriori function P($X_t|Z_t$).}}\n \\label{fig:partcile_filter}\n\\end{figure}\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=0.7\\linewidth]{figures\/figure2_annotated.png}\n \\caption{\\textbf{Formulating rover and orbital observations as Gaussian mixture models. Optimization is used to find the best match between two closed-form terrain models for localization.}}\n \\vspace{5mm}\n \\label{fig:gmm}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/figure3.png}\n \\caption{\\textbf{Parametric loss function landscape of the Gaussian mixture models.}}\n \\vspace{2mm}\n \\label{fig:loss_landscape}\n\\end{figure*}\n\n\\subsection{Crater-based Localization using Particle Filters}\nFor non-parametric methods, we implemented a crater-based localization algorithm based on Particle Filters \\cite{thrun2002particle,fox2001particle}, where each particle represents a hypothesis for the rover location, and the strength of the hypothesis is represented by a measurement probability. Figure \\ref{fig:partcile_filter} illustrates a generalized overview of the particle filter's cycle, in which the first layer shows particles represented by ellipses of various sizes; the size denotes the weight of a particle. The second layer illustrates the result of the sampling process, which can lead to repeated particles. Upon sampling, the weights of the particles lose their meaning and new measurements are necessary, as the third layer shows. In this layer, the particles' weights are adjusted according to the motion model used and a noise model applied to the sorted particles. Upon adjustment, we have an estimate for the next frame in the sequence. The fourth layer shows the function representing the updated particles. In this function, the height denotes the weight of the measurement at a given point. 
The fifth layer shows the a posteriori probability function as the result of the measurement step. In this step, the particles are properly weighted and prepared for the next iteration of the localization cycle.\n\nIn our implementation, a large number of particles can be used to approximate the distribution of the rover location as the rover moves and detects the craters. To estimate the location, the particles are moved according to the motion model, then the measurement probabilities are updated by comparing the ground craters with the orbital craters. Lastly, the particles are re-sampled according to their measurement probabilities to represent the new location distribution after motion. \n\nThere are many possible formulations on how to update the measurement probability given certain observations; we reasoned that the geometric relationship between a set of crater observations can be used to improve the accuracy of localization, instead of modeling each crater as an independent observation. 
Therefore, we constructed a spatial formulation where the measurement probability of each particle is represented by the average area Intersection Over Union (IoU) distance \\cite{rezatofighi2019generalized} between the orbital and ground craters:\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/iou.pdf}\n\\end{figure}\n\nTo account for missed detections, we adjust for the probability of new crater detection by using the prior probability of detection estimated from previous data.\n\n\\begin{figure*}[!t]\n \\centering\n \\begin{tabular}{cc}\n \\includegraphics[width=0.5\\linewidth]{figures\/exp1.pdf} &\n \\includegraphics[width=0.5\\linewidth]{figures\/exp2.pdf}\n \\end{tabular}\n \\caption{\\textbf{(Left) Example of particle filter performance on a simulated scenario, and (Right) example of parametric model performance on a simulated scenario.}}\n \\label{fig:lidar_res1}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n \\centering\n \\begin{tabular}{ccc}\n \\includegraphics[width=0.31\\linewidth]{figures\/exp3.pdf} &\n \\includegraphics[width=0.31\\linewidth]{figures\/exp4.pdf} &\n \\includegraphics[width=0.31\\linewidth]{figures\/exp5.pdf}\n \\end{tabular}\n \\caption{\\textbf{(Left) Performance of the particle filter and parametric model over different (n=50) simulated environments; the red line indicates the 2$\\%$ motion model noise baseline. Distribution of the relative error at the end of the traversal for the particle filter (Middle) and parametric model (Right), with the 2$\\%$ motion model baseline in red.}}\n \\label{fig:lidar_res2}\n\\end{figure*}\n\n\\subsection{Crater-based Localization using Parametric Matching}\nFor parametric method-based localization, we assumed that each crater is represented by a Gaussian distribution, with the location of the crater as the mean and half of the radius as the standard deviation. 
For orbital craters, the set of craters can then be expressed using a Gaussian mixture; this can also be independently formulated for ground craters. Parametric matching based localization is formulated as the problem of finding the translation between the two Gaussian mixtures that best matches these two distributions, as shown in Figure \\ref{fig:gmm}. The KL-divergence \\cite{goldberger2003efficient} measures the distance between two distributions and can be used as the loss function. We minimize the loss function with gradient-descent based methods to find the best translation, which represents the location (as illustrated in Figure \\ref{fig:loss_landscape}). Further, the Hessian around the optimal solution can be used to estimate the standard error, since the KL-divergence is equivalent to the negative log-likelihood.\n\n\\subsection{Blender-based Lunar Scene Simulator}\nThe Blender-based image scene simulation takes a lunar digital elevation map (DEM), applies a custom texture map to it, and creates a world model with a simulated sun as a light source; within this model, a stereo camera pair is added and the simulation generates locations for the cameras and the sun. For each camera and sun location, Blender's render engine produces a synthetic image. This image is rendered using a custom implementation of the Hapke radiometric model \\cite{hapke1963theoretical,hapke1981bidirectional,hapke1993opposition} incorporated into Blender's path-tracing algorithm to simulate accurate lighting across the surface. The simulation has multiple parameters that can be controlled, including sun position, camera extrinsic and intrinsic parameters, image size, and a few others. Images were generated for craters with diameters ranging from 5 to 20 meters, with four different approach angles for each crater. 
For each crater and approach angle, the stereo camera was positioned at distances between 5 and 20 meters from the crater's near rim, at 1 meter spacing; for each location, an image was rendered with a varied set of sun angles from 0$^{\\circ}$ to 80$^{\\circ}$ from nadir. The final dataset consisted of 1,792 stereo pairs.\n\n\\begin{figure}[!tbh]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/mc-sim-environment.pdf}\n \\caption{\\textbf{Illustration of the simulated environment used for development and testing of localization algorithms. Shown here is an example for a 600m x 600m region with known orbital craters.}}\n \\label{fig:mc_sim_env}\n\\end{figure}\n\n\\subsection{2.5D Simulation Environment for Long-range Traverse}\nTo facilitate development of algorithms for crater-matching based localization, we constructed a 2.5D simulation environment that allows for independent unit testing of the different localization algorithms. This allowed us to quickly iterate through different versions of the localization algorithms and validate their effectiveness with offline Monte Carlo simulations. In the simulated environment, the localization algorithm receives two sets of craters as input: the known craters detected from orbit and the ones observed by the rover from the ground. The orbital craters are parameterized by their size and location (x, y, diameter). The crater sizes are drawn from a truncated power-law distribution ($\\alpha$ = 1, diameter $<$ 20), whereas their locations are drawn from a uniform distribution. The craters observed on the surface by the rover are generated by perturbing the known orbital craters according to a similar distribution of noise from crater detection, with a zero-centered Gaussian ($\\sigma$ = 3m) error in the crater location and a zero-centered Gaussian ($\\sigma$ = 1m) error in the crater size. 
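The crater-field generation just described can be sketched in a few lines. The function names and the inverse-CDF power-law sampler below are illustrative assumptions (the text does not give an implementation), and the minimum diameter of 5 m is borrowed from the crater sizes used elsewhere in the paper; the remaining parameter values ($\alpha$ = 1, diameter $<$ 20 m, $\sigma$ = 3 m position noise, $\sigma$ = 1 m size noise, uniform locations) follow the text.

```python
# Sketch (not the paper's code) of the 2.5D crater-field generation.
import numpy as np

rng = np.random.default_rng(0)

def sample_orbital_craters(n, extent=600.0, d_min=5.0, d_max=20.0, alpha=1.0):
    """Sample n orbital craters as rows of (x, y, diameter)."""
    xy = rng.uniform(0.0, extent, size=(n, 2))          # uniform locations
    # Inverse-CDF sampling of p(d) ~ d^-alpha truncated to [d_min, d_max].
    u = rng.uniform(size=n)
    if alpha == 1.0:
        d = d_min * (d_max / d_min) ** u
    else:
        a = 1.0 - alpha
        d = (d_min**a + u * (d_max**a - d_min**a)) ** (1.0 / a)
    return np.column_stack([xy, d])

def perturb_to_ground(orbital, sigma_pos=3.0, sigma_size=1.0):
    """Simulate ground observations by perturbing the orbital craters."""
    ground = orbital.copy()
    ground[:, :2] += rng.normal(0.0, sigma_pos, size=(len(orbital), 2))
    # Keep perturbed diameters positive (floor value is an assumption).
    ground[:, 2] = np.maximum(
        ground[:, 2] + rng.normal(0.0, sigma_size, size=len(orbital)), 0.1)
    return ground

orbital = sample_orbital_craters(100)
ground = perturb_to_ground(orbital)
```

Masking a random subset of either array would then emulate the false positives and false negatives mentioned next.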
Further, we also added the capability to mask a percentage of either orbital or ground craters to simulate false positives and false negatives in crater detection. Figure \\ref{fig:mc_sim_env} shows an illustration of the simulated environment that was used. Assuming a motion model with 2$\\%$ noise, the rover traverses within this simulated environment and observes craters within a field-of-view range ($<$ 40m). The localization algorithms then match the observed ground craters to the orbital craters for localization. The estimated route is then compared against the ground truth route for evaluation. The simulated environment can be extended to long-range traversals and can be used to validate the effectiveness of the algorithms in overcoming the baseline $2\\%$ motion model noise.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/dataset-fig2.pdf}\n \\caption{\\textbf{Illustration of an instance from the LiDAR simulation in RSIM.}}\n \\label{fig:lidar_sim}\n\\end{figure}\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/dataset-fig3.pdf}\n \\caption{\\textbf{Example Chang'e 3 Yutu rover images (left), an example stereo disparity map and 3D rendering (middle), and orbital image showing the rover traverse (right). 
Stereo disparity is inversely proportional to range; in the disparity map, bright pixels are close and dark pixels are far.}}\n \\label{fig:change-stereo}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{figures\/lcam.pdf}\n \\caption{\\textbf{The Chang'e 4 EDL flight trajectory was recovered between 1000m and 100m above ground, using a TRN algorithm on the LCAM images.}}\n \\label{fig:change-lcam}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{figures\/change-improved.pdf}\n \\caption{\\textbf{Comparison of (Left) original resolution - 1.4 m\/pixel vs (Right) improved resolution - 0.2 m\/pixel for the orbital maps for the Chang'e 4 landing site.}}\n \\label{fig:change-improved}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/dataset-fig4.pdf}\n \\caption{\\textbf{Crater landmark map generation combining LRO-NAC images with higher resolution Chang'e LCAM descent imagery. The large image is from LRO-NAC; the smaller overlay and the expanded view on the right are from LCAM.}}\n \\label{fig:change-crater}\n\\end{figure*}\n\n\\subsection{Long-range LiDAR simulation using RSIM}\nTo further simulate long-range traverses from real lunar terrain, we added a LiDAR simulation capability in RSIM \\cite{allan2019planetary}, a ROS2 and Gazebo based high-fidelity simulation environment developed at NASA Ames to support the development of the VIPER rover \\cite{colaprete2019overview}. RSIM models the VIPER lunar rover structure, with the full CAD including rover kinematics, dynamics, and 3D model. RSIM also contains a lunar terrain model of the Nobile landing site, with 8k resolution. The terrain includes rock and crater distributions, shadowing, and sun lighting. The LiDAR sensor (also referred to as the Velodyne Simulator) is forked from the Toyota Research Institute Velodyne Simulator. 
This package contains a URDF description and Gazebo plugins to simulate Velodyne laser scanners (our model is the HDL-32E). The Gazebo plugin is based on the ROS block laser from the gazebo plugins package; it publishes PointCloud2 messages with the same structure (x, y, z, intensity, ring) and simulates Gaussian noise. To capture the point clouds from the LiDAR sensor, we created a package that provides a ROS2 node that subscribes to the Velodyne lidar sensor mounted on the VIPER rover and saves a snapshot to disk. In addition to capturing the point cloud, this node also outputs the position and orientation of the rover with respect to the world origin. It also contains scripts to visualize the point cloud and export it to other formats. The final datasets used were two discrete drives of the VIPER rover in RSIM with LiDAR on the Nobile landing site; both drives started the rover at the bottom-left corner of the map (320m x 320m) and traversed diagonally along the hypotenuse toward the upper-right corner, taking approximately 30-32 steps of 10m each, with a change in orientation chosen randomly between -5 and 5 degrees at each step. Because these random orientation changes kept the rover from driving a purely straight path, it could not always complete the nominal plan (step length: 10m, total steps: 35, orientation change: up to 5 degrees at every waypoint) before reaching the edge of the map. Figure \\ref{fig:lidar_sim} shows an illustration of an instance of the LiDAR simulation from RSIM.\n\n\\subsection{Real Lunar data from Chang'e missions}\nFor real sensor data, the original plan was to acquire a large-scale dataset of LIDAR and stereo image data from an analog site at the Cinder Lake crater field near Flagstaff, Arizona. 
This had to be postponed due to COVID-19 travel restrictions and forest fires; in lieu of this data, we decided to use real lunar stereo images that were available from the \"panoramic\" cameras (PCAM) on the Chang'e 3 \\cite{li2015chang} and Chang'e 4 \\cite{li2021overview} lunar rover missions. Since this is actual lunar data, in important ways it is better than the analog stereo images that were originally planned. The Chang'e missions' rover data has been publicly released\\footnote{https:\/\/moon.bao.ac.cn\/}. The cameras for both missions are identical; their FOV is 19.7$^{\\circ}$x14.5$^{\\circ}$, their resolutions are 2352 x 1728 pixels (color) and 1176 x 864 pixels (monochrome), and the stereo baseline length is 27 cm. A total of 168 pairs of Chang'e 3 and 1174 pairs of Chang'e 4 PCAM images were processed. A stereo camera self-calibration applied to this data yielded good camera models for both data sets. Stereo depth maps were computed from these images with good results, as shown in Figure \\ref{fig:change-stereo}.\n\nGround-truth labeling of craters in this imagery and registration of this imagery against orbital imagery were done to prepare a dataset that was used for performance evaluations. Towards this end, we studied the usefulness of the Chang'e lander camera (LCAM) to obtain a high-resolution, high-precision crater database. The LCAM image sequences (5000 images) were downloaded, and the Chang'e EDL flight trajectory was recovered between 1000m and 100m above ground, using terrain relative navigation (TRN) and structure-from-motion algorithms. Figure \\ref{fig:change-lcam} shows a visualization of the recovered LCAM trajectory. Then, the LCAM images were ortho-rectified to a coarser resolution LRO-NAC image map (1 m\/pixel) to obtain an ortho-image with higher resolution of up to 20 cm\/pixel around the lander. 
This process was used to improve the resolution for a 100m x 100m region around the Chang'e 4 lander, as shown in Figure \\ref{fig:change-improved}. Craters were detected from this high-resolution map and their geographic locations, diameters, and depths were extracted into a crater database, as shown in Figure \\ref{fig:change-crater}. This database was used for rover localization algorithm development.\n\n\\subsection{Remaining Gaps, Risks and Challenges to Flight Infusion}\nAs mentioned in Section \\ref{sec:intro}, the capability developed in this project is intended for use in lunar rovers. No such missions are currently in development, but robotic lunar science rover mission concepts were strongly recommended by the PSADS report; in particular, the Endurance-A robotic lunar science rover mission concept was recommended as the highest priority medium-sized mission for the Moon. This concept involves a traverse of roughly 2,000 km over several Earth years, which requires onboard absolute position estimation; LunarNav capability would be enabling for this mission. The Lunar Terrain Vehicle (LTV) envisioned for transporting astronauts may also benefit from having an option for autonomous position estimation as developed by LunarNav. Some of the potential challenges to flight infusion fall into the following main categories:\n\\begin{itemize}\n \\item The state of maturity of the LunarNav algorithms is at intermediate TRLs; more maturation is required before they are fully ready for infusion. In particular, the performance of stereo-based crater localization needs to be improved by adding robustness to a wide range of operational scenarios (partial crater visibility, variable crater shapes). \\\\\n \\item LunarNav requires stereo cameras and\/or a lidar onboard the rover. For night operation, either headlights for the stereo cameras or a lidar is required. These sensors are not fully developed at present. 
Furthermore, the majority of the work done in LunarNav focused on day-time driving; further work needs to be done to extend this capability to night-time driving and operations in permanently shadowed regions. \\\\\n \\item The computing load associated with crater-based localization is expected to be less than that required for rover obstacle avoidance. Since any autonomous lunar rover would need obstacle avoidance, it is expected that the necessary computing capability for LunarNav would be available. Nevertheless, this aspect of the system architecture must be verified as being sufficient. \\\\\n \\item Validation and verification (V\\&V) of crater-based localization requires datasets with craters, either from lunar analog terrain on Earth or from high-fidelity lunar terrain simulators. There is a very limited amount of suitable analog terrain available. The Cinder Lake Crater Field planned for use in the LunarNav project is the only location of any reasonable size in the U.S.; it is nevertheless fairly small for this purpose, it has not been maintained (i.e., it is partly overgrown with vegetation), and access to it is limited to the spring and fall seasons, due to snow in the winter, heat in the summer, and the possibility of nearby forest fires in the summer through early fall.\n\\end{itemize}\n\\subsection{LIDAR-based Localization on the Simulation Environment}\nWe evaluated the particle filter and parametric localization algorithms on a 400m x 400m simulated environment with 100 craters. Examples of the traversal and localization results are shown in Figures \\ref{fig:lidar_res1}-\\ref{fig:lidar_res3}. Over a large number of simulated scenarios (n=50), both localization methods were able to perform better than the relative localization baseline of the 2\\% noise motion model, with the particle filter performing slightly better. 
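For reference, a minimal, self-contained version of the particle-filter update exercised in these evaluations is sketched below. It is an illustrative toy, not the paper's evaluation code: the disc-overlap IoU used for weighting is a simplified stand-in for the paper's average area IoU distance, and the scenario (three craters, one static update, no motion step) is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def disc_iou(a, b):
    """IoU of two craters modeled as discs (x, y, radius)."""
    (x1, y1, r1), (x2, y2, r2) = a, b
    d = float(np.hypot(x2 - x1, y2 - y1))
    if d >= r1 + r2:                      # disjoint discs
        inter = 0.0
    elif d <= abs(r1 - r2):               # one disc inside the other
        inter = np.pi * min(r1, r2) ** 2
    else:                                 # lens-shaped intersection
        a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        a3 = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                           * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - a3
    union = np.pi * (r1**2 + r2**2) - inter
    return inter / union

def pf_update(particles, ground, orbital):
    """Weight pose hypotheses by mean best-match IoU, then resample."""
    w = np.empty(len(particles))
    for i, (px, py) in enumerate(particles):
        # Map rover-relative ground craters into the orbital frame
        # under this pose hypothesis, then score the best match.
        scores = [max(disc_iou((gx + px, gy + py, gr), tuple(o)) for o in orbital)
                  for gx, gy, gr in ground]
        w[i] = np.mean(scores) + 1e-12    # floor avoids all-zero weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], w

# Toy scenario: three mapped orbital craters (x, y, radius), rover truly at (20, 20).
orbital = np.array([[10.0, 10.0, 3.0], [30.0, 25.0, 4.0], [15.0, 35.0, 2.5]])
true_pose = np.array([20.0, 20.0])
ground = np.column_stack([orbital[:, :2] - true_pose, orbital[:, 2]])

particles = true_pose + rng.normal(0.0, 5.0, size=(500, 2))
resampled, w = pf_update(particles, ground, orbital)
estimate = (particles * w[:, None]).sum(axis=0)   # weighted-mean pose estimate
```

In the full pipeline, this update would be interleaved with a noisy motion-propagation step and repeated along the traverse.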
Further, the particle filter converged at around 2m of error over long ranges, suggesting that crater landmarks are valid features for accurate localization.\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\includegraphics[width=0.65\\linewidth]{figures\/exp8.pdf} &\n \\includegraphics[width=0.3\\linewidth]{figures\/exp9.pdf}\n \\end{tabular}\n \\caption{\\textbf{Illustration of the rover traverse for the Apollo 2 landing site showing the (Left) simulated LiDAR point cloud and (Middle) DEM overlaid with the rover traverse. Note: the point cloud is color coded using elevation data. (Right) An instance of the particle filter in operation. Blue circles indicate craters, the green arrow indicates the estimated position, the red arrow indicates the GT position, and black dots represent the distribution of particles.}}\n \\label{fig:lidar_apollo}\n\\end{figure*}\n\n\\subsection{LIDAR-based Localization from the Apollo 2 Landing Site}\nWe performed preliminary tests of the particle filter and parametric localization on craters detected from simulated LIDAR point clouds. Here, we used the elevation map from the Apollo 2 landing site to simulate a traversal of 20 steps (with 1m\/step) for crater detection. In this scenario, two craters (with diameters 6.20m and 14.76m) were manually labeled as orbital craters. The crater detection algorithm matches observed candidate craters on the ground with the two orbital craters, and the localization algorithm uses the locations of the observed craters for localization, as shown in Figure \\ref{fig:lidar_apollo}.\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\includegraphics[width=0.5\\linewidth]{figures\/exp10.pdf} &\n \\includegraphics[width=0.5\\linewidth]{figures\/exp11.pdf}\n \\end{tabular}\n \\caption{\\textbf{Rumker Dome site localization algorithm performance. The particle filter converged at around 0.75m of error, which is lower than the expected 2$\\%$ (1m) motion model noise. 
The parametric method has an average error of 1.6m.}}\n \\label{fig:lidar_rumker}\n\\end{figure*}\n\n\\subsection{LIDAR-based Localization from the Rumker Dome Landing Site}\nFurther, we evaluated the particle filter and parametric localization on craters detected from LIDAR data along a traverse through the Rumker Dome region. This is a more complex scenario compared to the Apollo 2 landing site, with 4 manually labeled craters (with diameters 21.94m, 5.60m, 6.98m, and 8.58m) and 50 traversal steps (1m\/step). According to the 2\\% motion model baseline, our model is expected to deviate 1m from the ground truth. Our results (Figure \\ref{fig:lidar_rumker}) indicate an average error of 0.75m throughout the traverse, indicating that crater landmarks are a reliable feature for more accurate localization.\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\includegraphics[width=0.45\\linewidth]{figures\/exp12.pdf} &\n \\includegraphics[width=0.45\\linewidth]{figures\/exp13.pdf}\n \\end{tabular}\n \\caption{\\textbf{An example of a successful crater detection. (Left) Original image; (Right) disparity map overlaid with a white ellipse showing the crater detection.}}\n \\label{fig:stereo-change}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{ccc}\n \\includegraphics[width=0.31\\linewidth]{figures\/exp14.pdf} &\n \\includegraphics[width=0.31\\linewidth]{figures\/exp15.pdf} &\n \\includegraphics[width=0.31\\linewidth]{figures\/exp16.pdf}\n \\end{tabular}\n \\caption{\\textbf{Example images where the stereo-based crater detection algorithm fails.}}\n \\label{fig:stereo-fail}\n\\end{figure*}\n\n\\subsection{Stereo-based Localization on the Chang'e 4 Landing Site}\nWe integrated the particle-filter-based localization algorithm with the stereo-based crater detection from FY21, and tested it with the Chang'e rover stereo data. 
The stereo crater detection algorithm succeeds in detecting craters in the real data when the assumptions of the algorithm \\cite{matthies2022lunar} hold true; Figure \\ref{fig:stereo-change}, for example, demonstrates a successful case of stereo-based crater detection on real lunar data.\n\nHowever, the original crater detection makes two critical assumptions which regularly do not hold in the Chang'e 4 dataset. It first assumes that the entire crater is visible in the image. Due to the small field of view of the PCAM, this is often not the case in the Chang'e imagery. The second assumption is that the crater comprises a relatively small section of the image, since otherwise the ground plane will not be detected correctly. This is also not the case with nearby craters, in part due once again to the small camera field of view.\n\nFigure \\ref{fig:stereo-fail} shows three example images where crater detection currently fails. In the leftmost image, the algorithm successfully detects the crater as a candidate in the penultimate step of the algorithm. However, it is filtered out as not a crater because the entire crater is not visible in the image, violating an assumption behind the algorithm. In the middle image, the crater is also not entirely within the image. This case fails in step three of the algorithm, earlier than the previous case, since the critical near-rim edge is not visible. Furthermore, since the crater takes up almost the entirety of the image, the back wall of the crater is detected as the ground plane. In the rightmost image, the crater takes up a large section of the image, and the crater's back wall is detected as the ground plane, leading the algorithm to fail. Additionally, the left and right edges of the crater are not visible, so this would likely fail in a later step as well. Thus, the performance of the crater detection did not generalize to the current dataset of craters visible in Chang'e imagery. 
As a result, we were unable to benchmark the performance of the particle filter on the stereo modality.\n\n\\section{Introduction}\n\\input{intro.tex}\n\\vspace{2mm}\n\\label{sec:intro}\n\n\\section{Related Work}\n\\vspace{2mm}\n\\input{related.tex}\n\n\\section{System Overview}\n\\vspace{2mm}\n\\input{system.tex}\n\n\\section{Real and Simulated Datasets}\n\\vspace{2mm}\n\\input{datasets.tex}\n\\label{sec:data}\n\n\\section{Technical Approach}\n\\vspace{2mm}\n\\input{approach.tex}\n\n\\section{Performance Evaluation}\n\\input{experiments.tex}\n\\label{sec:exp}\n\n\\section{Discussion}\n\\vspace{2mm}\n\\input{discussion}\n\n\\acknowledgments\nThe research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).\n\n\\bibliographystyle{IEEEtran}\n\n\\subsection{System Concept}\nThe LunarNav system concept requires creating a database of crater landmarks from orbital images, as shown in Figure \\ref{fig:system_concept}. This database would contain the position, diameter, and estimated depth of each crater. Craters with diameters $> 5$ meters are mappable with LRO-NAC imagery under most orbital imaging conditions. Even in the youngest terrain on the Moon, such craters occur with a frequency $> 10^3$ per $km^2$, or on average about 30 meters apart, and they occur more frequently in older terrain \\cite{hiesinger2012old,minton2019equilibrium}. This would provide frequent landmarks that should be detectable at distances of roughly 10 to 20 meters from the rover.\n\nRobotic lunar rovers will carry a sensor suite for relative localization and obstacle detection that includes wheel odometry, an IMU, either stereo cameras or a lidar, and either a sun sensor or a star camera for absolute heading measurement. 
A star camera, which is usable when driving both in sunlight and in shadow, gives 3-axis attitude knowledge to a small fraction of a degree. With this, the relative navigation sensor suite enables dead-reckoning with position error that typically can be $< 2\\%$ of distance traveled. This provides a prior estimate of position that at all times strongly constrains which crater(s) from the landmark database are expected to be near the rover. Craters can then be detected near the rover with a combination of 3D point cloud data from stereo cameras or a lidar, image data from a camera, or reflectance image data from a lidar. This enables detecting craters with diameters roughly between 5 and 20 meters whose near rims are roughly less than 20 meters from the rover.\n\nOverall, these methods should enable reliable absolute localization; given typical resolution characteristics of cameras, stereo vision, and lidar, we estimate that it should be possible to maintain a rover absolute position estimate with $3 \\sigma$ error $< 5$ meters at all times. Furthermore, in terms of computational feasibility, crater-based localization is less expensive than obstacle detection and needs to be done much less frequently, so any onboard computing system that can do obstacle detection would also be able to do crater-based localization.\n\n\\begin{table*}[!tbh]\n \\centering\n \\caption{\\bf{Key Performance Parameters (KPP)}}\n \\includegraphics[width=0.9\\linewidth]{figures\/kpp.pdf}\n \\label{tab:kpp}\n\\end{table*}\n\n\\subsection{Key Performance Parameters}\nThorough performance evaluation of the LunarNav framework was a function of many parameters, and depended fundamentally on the ability to detect and localize individual craters and to estimate their positions and diameters. 
This depends on (1) crater size and distance from the rover, (2) rover camera\/lidar sensor parameters, including angular resolution, range resolution, field of view, and sensor height above the ground, (3) lighting conditions, and (4) other characteristics of terrain geometry, like slope. We distilled key performance parameters (KPPs) in terms of the crater distribution, as defined in Table \\ref{tab:kpp} and anticipated that craters with diameters $>$ 5 meters will be readily detectable from distances of at least 15 meters, in many but not necessarily all lighting conditions. For example, detection probability ($P_d$) of 0.5 for 5 m craters at 15 m range should be a conservative estimate. With a notional stereo camera system with angular resolution of 1 milliradian\/pixel and binocular camera baseline of 30 cm, 3 $\\sigma$ errors in estimating the position and diameter of such craters should be $<$ 2 m each.\n\nUpdates to rover position and heading will be obtained every time an onboard crater map is registered to the orbital map. The precision of these rover position and heading estimates will be a function of the precision of crater positions and diameters in the onboard map, as well as detection and false alarm probabilities ($P_d$ and $P_f$) for onboard crater recognition. Quality of the onboard map will also depend on accuracy and precision of heading estimation and dead reckoning between crater detections. Past experience suggests that dead reckoned position error can be better than $2\\%$ of distance traveled with visual odometry (2 m per 100 m) \\cite{rankin2021mars}. Rover heading error should be bounded by about $5\\degree$ by a sun sensor, $<1\\degree$ per 100 m using visual odometry, or $<0.01 \\degree$ at all times using a star camera. A star camera is the preferred heading estimation solution for performance, but a sun sensor may offer a lower cost solution with adequate performance for predominantly sunlit scenarios. 
Combining this with errors in crater center positions relative to the rover and crater position errors of $<$ 2 m in the orbital map should enable rover position and heading estimation error to be conservatively bounded by about 10 m and 5$\\degree$ at all times. Erroneous crater detections (false alarms) will be filtered out through a combination of several techniques applied at different stages of the estimation pipeline. Since the effect of false detections is ultimately captured in rover position estimation error, a separate key performance parameter is not specified for false alarms. Performance modeling and evaluation throughout the course of the project characterized how performance varies as a function of these sensor parameters (See Section \\ref{sec:exp}). \n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nNew physics (NP), or physics beyond the standard model, involves various models\nthat extend the well verified standard model (SM) of particle physics by\nintroducing a number of new particles with novel properties and interactions.\nThough various aspects of many of these particles and interactions are\nconstrained by existing experimental data, we are yet to detect any definitive\nsignature of new physics in our experiments. Nevertheless, recent experimental\nstudies in $B$ meson decays, such as $B \\to K^{(*)} \\ell^-\\ell^+$\n\\cite{B2KorKstLL}, $B_s \\to \\phi \\ell^-\\ell^+$ \\cite{Aaij:2015esa}, $B \\to\nD^{(*)}\\ell\\nu$ \\cite{B2DorDstLN} and $B_c \\to J\/\\psi \\ell\\nu$\n\\cite{Aaij:2017tyk} (where $\\ell$ can be $e,\\mu$ or $\\tau$) have reported\nanomalous observations raising the expectation of discovery of new physics with\nmore statistical significance. 
In this context, model-independent studies of\nsuch semi-leptonic three-body meson decay processes become important as they can\nidentify generic signatures of new physics which can be probed experimentally.\nIn this paper, we have analyzed the effects of new physics, in a\nmodel-independent manner, on the angular distribution of a general semi-hadronic\nthree-body meson decay of the type $P_i \\to P_f f_1 f_2$, where $P_i$ and $P_f$\nare the initial and final pseudo-scalar mesons respectively, and $f_{1,2}$\ndenote fermions (which may or may not be leptons, but are not quarks) of which at\nleast one is detected experimentally. The presence of new interactions, or of new\nparticles such as fermionic dark matter (DM), heavy sterile\nneutrinos or long-lived particles (LLPs), would leave a signature in the\nangular distribution, and we show by example how the new physics contribution can be\nquantified from angular asymmetries. Our methodology can be used for the detection\nof new physics in experimental studies of various three-body pseudo-scalar meson\ndecays at collider experiments such as LHCb and Belle~II.\n\nThe structure of our paper is as follows. In\nSec.~\\ref{sec:Lagrangian-and-amplitude} we discuss the most general Lagrangian\nand amplitude which include all probable NP contributions to our process under\nconsideration. The relevant kinematics is then described in\nSec.~\\ref{sec:kinematics}. This is followed by a discussion of the angular\ndistribution and the various angular asymmetries in\nSec.~\\ref{sec:ang-dist-asymmetries}. In Sec.~\\ref{sec:example} we present a few\nwell-chosen examples illustrating the effects of new physics on the angular\ndistribution.
In Sec.~\\ref{sec:conclusion} we conclude by summarizing the\nimportant aspects of our methodology and its possible experimental realization.\n\n\\section{Most general Lagrangian and Amplitude}\\label{sec:Lagrangian-and-amplitude}\n\nFollowing the model-independent analysis of the decay $B \\to D \\ell^-\\ell^+$ as\ngiven in Ref.~\\cite{Kim:2016zbg} and generalizing it for our process $P_i \\to\nP_f f_1 f_2$ where $P_{i,f}$ can be $B, B_s, B_c, D, K, \\pi$ etc.\\ as\nappropriate and $f_1 f_2$ can be $\\ell^- \\ell^+$, $\\ell \\bar{\\ell'}$, $\\ell\n\\nu_{\\ell}$, $\\ell \\nu_S$, $\\ell f^{DM}$, $\\nu_{\\ell}\\ensuremath{\\overline{\\nu}}_{\\ell}$,\n$\\nu_S\\ensuremath{\\overline{\\nu}}_{\\ell}$, $\\nu_{\\ell}\\ensuremath{\\overline{\\nu}}_S$, $\\nu_S \\ensuremath{\\overline{\\nu}}_S$, $f^{DM}\n\\bar{f}^{DM}$, $f_1^{DM} f_2^{DM}$, $f_1^{LLP} f_2^{LLP}$ (with\n$\\ell,\\ell'=e,\\mu,\\tau$ denoting leptons, $\\nu_S$ being sterile neutrino,\n$f^{DM}_{1,2}$ as fermionic dark matter and $f_{1,2}^{LLP}$ as long lived\nfermions)\\footnote{It is clear that we can not only analyze processes allowed in\n\tthe SM but also those NP contributions from fermionic dark matter in the final\n\tstate as well as including flavor violation. 
Our analysis as presented in this\n\tpaper is fully model-independent and general in nature.}, we can write down the\neffective Lagrangian facilitating the decay under consideration as follows,\n\\begin{eqnarray}\n\\mathcal{L}_{\\textrm{eff}} &=& J_S \\left( \\bar{f}_1 f_2 \\right)\n+ J_P \\left( \\bar{f}_1~\\gamma^5~f_2 \\right) + \\left(J_V\\right)_{\\alpha}\n\\left( \\bar{f}_1~\\gamma^{\\alpha}~f_2 \\right) \\nonumber\\\\%\n&& + \\left(J_A\\right)_{\\alpha} \\left( \\bar{f}_1~\\gamma^{\\alpha}\\gamma^5~f_2\n\\right) + \\left(J_{T_1}\\right)_{\\alpha\\beta} \\left(\n\\bar{f}_1~\\sigma^{\\alpha\\beta}~f_2 \\right) \\nonumber\\\\%\n&& + \\left(J_{T_2}\\right)_{\\alpha\\beta} \\left(\n\\bar{f}_1~\\sigma^{\\alpha\\beta}\\gamma^5~f_2 \\right) + \\text{h.c.},\n\\label{eq:Effective-Lagrangian}\n\\end{eqnarray}\nwhere $J_S$, $J_P$, $\\left(J_V\\right)_{\\alpha}$, $\\left(J_A\\right)_{\\alpha}$,\n$\\left(J_{T_1}\\right)_{\\alpha\\beta}$, $\\left(J_{T_2}\\right)_{\\alpha\\beta}$ are\nthe different hadronic currents which effectively describe the quark level\ntransitions from $P_i$ to $P_f$ meson. It should be noted that we have kept both\n$\\sigma^{\\alpha\\beta}$ and $\\sigma^{\\alpha\\beta}\\gamma^5$ terms. This is because\nof the fact that the currents $\\bar{f}_1 \\, \\sigma^{\\alpha\\beta} \\, f_2$ and\n$\\bar{f}_1 \\, \\sigma^{\\alpha\\beta}\\gamma^5 \\, f_2$ describe two different\nphysics aspects namely the magnetic dipole and electric dipole contributions\nrespectively. In the SM, vector and axial-vector currents (mediated by photon,\n$W^{\\pm}$ and $Z^0$ bosons) and the scalar current (mediated by Higgs boson)\ncontribute. So every other term in Eq.~\\eqref{eq:Effective-Lagrangian} except\nthe ones with $J_S$, $\\left(J_V\\right)_{\\alpha}$ and $\\left(J_A\\right)_{\\alpha}$\ncan appear in some specific NP model. 
Since, in this paper, we\nconcentrate on a fully model-independent analysis aimed at generic signatures of\nnew physics, we shall refrain from venturing into the details of any specific NP\nmodel, though such model-specific studies are also useful. It is important to note that $J_S$,\n$\\left(J_V\\right)_{\\alpha}$ and $\\left(J_A\\right)_{\\alpha}$ can also get\nmodified by NP contributions.\n\n\\begin{figure}[hbtp]\n\\centering%\n\\includegraphics[scale=1]{fig_Feynman_diagram.pdf} \\caption{Feynman diagram\n\tfor $P_i \\to P_f f_1 f_2$ considering $f_1$ as a particle and $f_2$ as an\n\tanti-particle. Here the blob denotes the effective vertex and includes\n\tcontributions from all the form factors defined in\n\tEq.~\\eqref{eq:form-factors}.}%\n\\label{fig:Feynman_diagram}\n\\end{figure}\n\nIn order to get the most general amplitude for our process under consideration,\nwe need to go from the effective quark-level description of\nEq.~\\eqref{eq:Effective-Lagrangian} to the meson-level description by defining\nappropriate form factors.
It is easy to write down the most general form of the\namplitude for the process $P_i \\to P_f f_1 f_2$ depicted in\nFig.~\\ref{fig:Feynman_diagram} as follows,\n\\begin{align}\n\\mathcal{M} \\left( P_i \\to P_f f_1 f_2 \\right) &= F_S \\left(\n\\bar{f}_1 f_2 \\right) + F_P \\left( \\bar{f}_1~\\gamma^5~f_2 \\right)\n\\nonumber\\\\*%\n&\\quad + \\left( F_V^+ p_{\\alpha} + F_V^- q_{\\alpha} \\right) \\left(\n\\bar{f}_1~\\gamma^{\\alpha}~f_2 \\right) \\nonumber\\\\* %\n&\\quad + \\left( F_A^+ p_{\\alpha} + F_A^- q_{\\alpha} \\right) \\left(\n\\bar{f}_1~\\gamma^{\\alpha}~\\gamma^5~f_2 \\right) \\nonumber\\\\* %\n&\\quad + F_{T_1}~p_{\\alpha}~q_{\\beta} \\left( \\bar{f}_1~\\sigma^{\\alpha\\beta}~f_2\n\\right) \\nonumber\\\\* %\n&\\quad + F_{T_2}~p_{\\alpha}~q_{\\beta} \\left(\n\\bar{f}_1~\\sigma^{\\alpha\\beta}~\\gamma^5~f_2 \\right), \\label{eq:amplitude}\n\\end{align}\nwhere $F_{S}$, $F_{P}$, $F_{V}^{\\pm}$, $F_{A}^{\\pm}$, $F_{T_1}$ and $F_{T_2}$\nare the relevant form factors, and are defined as follows,\n\\begin{subequations}\\label{eq:form-factors}\n\\begin{align}\n\\bracket{P_f}{J_S}{P_i} &= F_S,\\\\%\n\\bracket{P_f}{J_P}{P_i} &= F_P,\\\\%\n\\bracket{P_f}{\\left(J_V\\right)_{\\alpha}}{P_i} &= F_V^+ p_{\\alpha} + F_V^-\nq_{\\alpha},\\\\%\n\\bracket{P_f}{\\left(J_A\\right)_{\\alpha}}{P_i} &= F_A^+ p_{\\alpha} + F_A^-\nq_{\\alpha},\\\\%\n\\bracket{P_f}{\\left(J_{T_1}\\right)_{\\alpha\\beta}}{P_i} &=\nF_{T_1}~p_{\\alpha}~q_{\\beta},\\\\%\n\\bracket{P_f}{\\left(J_{T_2}\\right)_{\\alpha\\beta}}{P_i} &=\nF_{T_2}~p_{\\alpha}~q_{\\beta},\n\\end{align}\n\\end{subequations}\nwith $p \\equiv k + k_3$ and $q \\equiv k - k_3 = k_1 + k_2$, in which $k, k_1,\nk_2, k_3$ are the 4-momenta of the $P_i, f_1, f_2 $ and $P_f$ respectively (see\nFig.~\\ref{fig:Feynman_diagram}). All the form factors appearing in the amplitude\nin Eq.~\\eqref{eq:amplitude} and as defined in Eq.~\\eqref{eq:form-factors} are,\nin general, complex and contain all NP information. 
It should be noted that, for\nsimplicity, we have implicitly put all the relevant Cabibbo-Kobayashi-Maskawa\nmatrix elements as well as coupling constants and propagators inside the\ndefinitions of these form factors. In the SM only $F_S$, $F_V^{\\pm}$ and $F_A^{\\pm}$\nare present. The presence of NP can modify these as well as introduce other form\nfactors\\footnote{It should be noted that the form factors, especially the ones\n\tdescribing semi-leptonic $B$ meson decays, can be obtained by using heavy\n\tquark effective theory \\cite{HQET}, lattice QCD \\cite{Lattice}, QCD\n\tlight-cone sum rules \\cite{Light-cone} or the covariant confined quark model\n\t\\cite{CCQM}, etc. In this paper we present a very general analysis which is\n\tapplicable to a diverse set of meson decays. Hence we do not discuss any\n\tspecifics of the form factors used in our analysis. Moreover, we shall show, by\n\tusing certain examples and in a few specific cases, that one can also probe new\n\tphysics without worrying about the details of the form factors. Nevertheless,\n\twhen one concentrates on a specific decay mode, considering the form factors in\n\tdetail is always useful.}. These various NP contributions would leave behind\ntheir signatures in the angular distribution, for which we need to specify the\nkinematics in a chosen frame of reference.\n\n\n\\section{Decay Kinematics}\\label{sec:kinematics}\n\n\\begin{figure}[hbtp]\n\\centering%\n\\includegraphics[scale=1]{fig_GJframe.pdf}%\n\\caption{Decay of $P_i \\to P_f f_1 f_2$ in the Gottfried-Jackson frame.}%\n\\label{fig:GJ-frame}\n\\end{figure}\n\nWe shall consider the decay $P_i \\to P_f f_1 f_2$ in the Gottfried-Jackson\nframe, namely the center-of-momentum frame of the $f_1,f_2$ system, which is\nshown in Fig.~\\ref{fig:GJ-frame}.
In this frame the parent meson $P_i$ flies\nalong the positive $z$-direction with 4-momentum $k = \\left(E, \\mathbf{k}\n\\right) = \\left(E,0,0,\\modulus{\\mathbf{k}}\\right)$ and decays to the daughter\nmeson $P_f$ which also flies along the positive $z$-direction with 4-momentum\n$k_3 = \\left( E_3, \\mathbf{k}_3 \\right) = \\left(E_3, 0, 0,\n\\modulus{\\mathbf{k}_3}\\right)$ and to $f_1$, $f_2$ which fly away back-to-back\nwith 4-momenta $k_1 = \\left( E_1, \\mathbf{k}_1 \\right)$ and $k_2 = \\left( E_2,\n\\mathbf{k}_2 \\right)$ respectively, such that by conservation of 4-momentum we\nget, $\\mathbf{k}_1 + \\mathbf{k}_2 = \\mathbf{0}$, $\\mathbf{k} = \\mathbf{k}_3$,\nand $E = E_1 + E_2 + E_3$. The fermion $f_1$ (which we assume can be observed\nexperimentally) flies out subtending an angle $\\theta$ with respect to the\ndirection of flight of the $P_i$ meson, in this Gottfried-Jackson frame. The\nthree invariant mass-squares involved in the decay under consideration are\ndefined as follows,\n\\begin{subequations}\\label{eq:stu}\n\\begin{align}\ns &= (k_1 + k_2)^2 = (k - k_3)^2,\\\\%\nt &= (k_1 + k_3)^2 = (k - k_2)^2,\\\\%\nu &= (k_2 + k_3)^2 = (k - k_1)^2.\n\\end{align}\n\\end{subequations}\nIt is easy to show that $s + t + u = m_i^2 + m_f^2 + m_1^2 + m_2^2$, where\n$m_i, m_f,m_1$ and $m_2$ denote the masses of particles $P_i,P_f,f_1$ and $f_2$\nrespectively. 
In the Gottfried-Jackson frame, the expressions for $t$ and $u$\nare given by\n\\begin{subequations}\\label{eq:tu}\n\\begin{align}\nt &= a_t - b \\cos\\theta,\\label{eq:t}\\\\%\nu &= a_u + b \\cos\\theta,\\label{eq:u}\n\\end{align}\n\\end{subequations}\nwhere\n\\begin{subequations}\\label{eq:ab}\n\\begin{align}\na_t &= m_1^2 + m_f^2 + \\frac{1}{2s} \\left( s + m_1^2 - m_2^2 \\right) \\left(\nm_i^2 - m_f^2 -s \\right),\\label{eq:at}\\\\%\na_u &= m_2^2 + m_f^2 + \\frac{1}{2s} \\left( s - m_1^2 + m_2^2 \\right) \\left(\nm_i^2 - m_f^2 -s \\right),\\label{eq:au}\\\\%\nb &= \\frac{1}{2s} \\sqrt{\\lambda\\left( s, m_1^2, m_2^2 \\right)~\\lambda \\left( s,\n\tm_i^2, m_f^2 \\right)},\\label{eq:b}\n\\end{align}\n\\end{subequations}\nwith the K\\\"{a}ll\\'{e}n function $\\lambda(x,y,z)$ defined as,\n\\begin{equation*}\n\\lambda\\left( x,y,z \\right) = x^2 + y^2 + z^2 - 2 \\left( xy + yz + zx \\right).\n\\end{equation*}\nIt is clear that $a_t$, $a_u$ and $b$ are functions of $s$ only. For the special\ncase of $m_1 = m_2 = m$ (say) we have $a_t = a_u = \\tfrac{1}{2} \\left(m_i^2 +\nm_f^2 + 2m^2 -s\\right)$ and $b = \\tfrac{1}{2} \\sqrt{\\left(1-4m^2\/s\\right)\n\t\\lambda\\left(s,m_i^2,m_f^2\\right)}$. 
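The kinematic relations above translate directly into code. The following Python sketch (function and variable names are our own) implements $a_t(s)$, $a_u(s)$ and $b(s)$, and can be used to verify numerically that $s + t + u = m_i^2 + m_f^2 + m_1^2 + m_2^2$ for any $\cos\theta$:

```python
import math

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def gj_coefficients(s, mi, mf, m1, m2):
    """Coefficients a_t(s), a_u(s) and b(s) of t = a_t - b cos(theta)
    and u = a_u + b cos(theta) in the Gottfried-Jackson frame."""
    dif = mi * mi - mf * mf - s
    a_t = m1 * m1 + mf * mf + (s + m1 * m1 - m2 * m2) * dif / (2.0 * s)
    a_u = m2 * m2 + mf * mf + (s - m1 * m1 + m2 * m2) * dif / (2.0 * s)
    b = math.sqrt(kallen(s, m1 * m1, m2 * m2) * kallen(s, mi * mi, mf * mf)) / (2.0 * s)
    return a_t, a_u, b

def mandelstam_t_u(s, cos_theta, mi, mf, m1, m2):
    """t and u for a given s and cos(theta)."""
    a_t, a_u, b = gj_coefficients(s, mi, mf, m1, m2)
    return a_t - b * cos_theta, a_u + b * cos_theta
```

Note that $b(s)$ is real only in the physical region $(m_1+m_2)^2 \leqslant s \leqslant (m_i-m_f)^2$, and that in the equal-mass special case $a_t = a_u$, as stated above.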
It is important to note that we shall use\nthe angle $\\theta$ in our angular distribution.\n\n\\section{Most general angular distribution and angular asymmetries}\\label{sec:ang-dist-asymmetries}\n\nConsidering the amplitude as given in Eq.~\\eqref{eq:amplitude}, the most general\nangular distribution in the Gottfried-Jackson frame is given by,\n\\begin{equation}\\label{eq:gen-angular-dist}\n\\frac{d^2\\Gamma}{ds \\, d\\cos\\theta} = \\frac{b\\sqrt{s} \\left( C_0 + C_1\n\t\\cos\\theta + C_2 \\cos^2\\theta \\right)}{128 \\, \\pi^3 \\, m_i^2 \\left(m_i^2 - m_f^2\n\t+ s \\right)},\n\\end{equation}\nwhere $C_0$, $C_1$ and $C_2$ are functions of $s$ and are given by,\n\\begin{subequations}\\label{eq:gen-C012}\n\\begin{align}\nC_0 &= 2 \\Bigg(-\\modulus{F_{T_1}}^2 \\bigg(-\\Sigma m_{12}^2 s^2 + 2 \\Sigma\nm_{12}^2 \\left( \\Sigma m^2 \\right)_{if} s \\nonumber\\\\%\n& \\hspace{1.5cm} + \\left( \\Delta m^2 \\right)_{12}^2 s -\\Delta a_{tu}^2 s - 2\n\\left( \\Delta m^2 \\right)_{12}^2 \\left( \\Sigma m^2 \\right)_{if} \\nonumber\\\\%\n& \\hspace{1.5cm} - \\left( \\Delta m^2 \\right)_{if}^2 \\Sigma m_{12}^2 + 2 \\Delta\na_{tu} \\left( \\Delta m^2 \\right)_{12} \\left( \\Delta m^2 \\right)_{if} \\bigg)\n\\nonumber\\\\%\n& \\quad - 2 \\Im\\left( F_V^+ F_{T_1}^* \\right) \\bigg( -\\Sigma m_{12} s^2 + 2\n\\Sigma m_{12} \\left( \\Sigma m^2 \\right)_{if} s \\nonumber\\\\%\n& \\hspace{1.5cm} + \\Delta m_{12} \\left( \\Delta m^2 \\right)_{12} s - 2 \\Delta\nm_{12} \\left( \\Delta m^2 \\right)_{12} \\left( \\Sigma m^2 \\right)_{if}\n\\nonumber\\\\%\n& \\hspace{1.5cm} - \\left( \\Delta m^2 \\right)_{if}^2 \\Sigma m_{12} +\\Delta a_{tu}\n\\Delta m_{12} \\left( \\Delta m^2 \\right)_{if} \\bigg) \\nonumber\\\\%\n& \\quad + \\modulus{F_{T_2}}^2 \\bigg( \\Delta m_{12}^2 s^2 - 2 \\Delta m_{12}^2\n\\left( \\Sigma m^2 \\right)_{if} s - \\left( \\Delta m^2 \\right)_{12}^2 s\n\\nonumber\\\\%\n& \\hspace{1.5cm} + \\Delta a_{tu}^2 s + 2 \\left( \\Delta m^2 \\right)_{12}^2 \\left(\n\\Sigma m^2 
\\right)_{if} + \\Delta m_{12}^2 \\left( \\Delta m^2 \\right)_{if}^2\n\\nonumber\\\\%\n& \\hspace{1.5cm} - 2 \\Delta a_{tu} \\left( \\Delta m^2 \\right)_{12} \\left( \\Delta\nm^2 \\right)_{if} \\bigg) \\nonumber\\\\%\n& \\quad - 2 \\Im\\left( F_A^+ F_{T_2}^* \\right) \\bigg(\\Delta m_{12} s^2 - 2 \\Delta\nm_{12} \\left( \\Sigma m^2 \\right)_{if} s \\nonumber\\\\%\n& \\hspace{1.5cm} - \\left( \\Delta m^2 \\right)_{12} \\Sigma m_{12} s + 2 \\left(\n\\Delta m^2 \\right)_{12} \\Sigma m_{12} \\left( \\Sigma m^2 \\right)_{if}\n\\nonumber\\\\%\n& \\hspace{1.5cm} - \\Delta a_{tu} \\left( \\Delta m^2 \\right)_{if} \\Sigma m_{12} +\n\\Delta m_{12} \\left( \\Delta m^2 \\right)_{if}^2 \\bigg) \\nonumber\\\\%\n& \\quad + \\modulus{F_A^+}^2 \\bigg( s^2 - 2\\left( \\Sigma m^2 \\right)_{if} s -\n\\Sigma m_{12}^2 s \\nonumber\\\\%\n& \\hspace{1.5cm} + 2 \\Sigma m_{12}^2 \\left( \\Sigma m^2 \\right)_{if} + \\left(\n\\Delta m^2 \\right)_{if}^2-\\Delta a_{tu}^2 \\bigg) \\nonumber\\\\%\n& \\quad + \\modulus{F_V^+}^2 \\bigg( s^2 - 2 \\left( \\Sigma m^2 \\right)_{if} s\n-\\Delta m_{12}^2 s \\nonumber\\\\%\n& \\hspace{1.5cm} + 2 \\Delta m_{12}^2 \\left( \\Sigma m^2 \\right)_{if} + \\left(\n\\Delta m^2 \\right)_{if}^2 - \\Delta a_{tu}^2 \\bigg) \\nonumber\\\\%\n& \\quad + \\modulus{F_A^-}^2 \\left(\\Sigma m_{12}^2 s - \\left( \\Delta m^2\n\\right)_{12}^2 \\right) \\nonumber\\\\%\n& \\quad - 2 \\Re\\left( F_P F_A^{-*} \\right) \\left(\\Sigma m_{12} s - \\Delta m_{12}\n\\left( \\Delta m^2 \\right)_{12} \\right) \\nonumber\\\\%\n& \\quad - \\modulus{F_V^-}^2 \\left( \\left( \\Delta m^2 \\right)_{12}^2-\\Delta\nm_{12}^2 s \\right) \\nonumber\\\\%\n& \\quad - 2\\Re\\left( F_S F_V^{-*} \\right) \\left(\\left( \\Delta m^2 \\right)_{12}\n\\Sigma m_{12}-\\Delta m_{12} s \\right) \\nonumber\\\\%\n& \\quad -\\modulus{F_S}^2 \\left( \\Sigma m_{12}^2-s \\right) -\\modulus{F_P}^2\n\\left( \\Delta m_{12}^2-s \\right) \\nonumber\\\\%\n& \\quad + 2 \\Re\\left( F_A^+ F_A^{-*} \\right) \\left( \\left( 
\\Delta m^2\n\\right)_{if} \\Sigma m_{12}^2 - \\Delta a_{tu} \\left( \\Delta m^2 \\right)_{12}\n\\right) \\nonumber\\\\%\n& \\quad - 2 \\Re\\left( F_P F_A^{+*} \\right) \\left( \\left( \\Delta m^2 \\right)_{if}\n\\Sigma m_{12} - \\Delta a_{tu} \\Delta m_{12} \\right) \\nonumber\\\\%\n& \\quad - 2 \\Re\\left( F_S F_V^{+*} \\right) \\left( \\Delta a_{tu} \\Sigma m_{12} -\n\\Delta m_{12} \\left( \\Delta m^2 \\right)_{if} \\right) \\nonumber\\\\%\n& \\quad + 2 \\Re\\left( F_V^+ F_V^{-*} \\right) \\left( \\Delta m_{12}^2 \\left(\n\\Delta m^2 \\right)_{if} - \\Delta a_{tu} \\left( \\Delta m^2 \\right)_{12} \\right)\n\\Bigg),\\\\%\nC_1 &= 8 b \\Bigg( \\Delta m_{12} \\left( \\Im\\left( F_V^- F_{T_1}^* \\right)\ns-\\Re\\left( F_P F_A^{+*} \\right) \\right) \\nonumber\\\\%\n& \\hspace{1.5cm} + \\Sigma m_{12} \\Big(-\\Im\\left( F_A^- F_{T_2}^* \\right) s +\n\\Re\\left( F_S F_V^{+*} \\right) \\nonumber\\\\%\n& \\hspace{4cm} - \\left( \\Delta m^2 \\right)_{if} \\Im\\left( F_A^+ F_{T_2}^*\n\\right) \\Big) \\nonumber\\\\%\n& \\hspace{1.5cm} + \\Delta a_{tu} \\left( \\modulus{F_V^+}^2 + \\modulus{F_A^+}^2\n\\right) \\nonumber\\\\%\n& \\hspace{1.5cm} + \\left( \\Im\\left( F_S F_{T_1}^* \\right) + \\Im\\left( F_P\nF_{T_2}^* \\right) \\right)s \\nonumber\\\\%\n& \\hspace{1.5cm} + \\left( \\Delta m^2 \\right)_{12} \\left( \\Re\\left( F_V^+\nF_V^{-*} \\right) + \\Re\\left( F_A^+ F_A^{-*} \\right) \\right) \\nonumber\\\\%\n& \\hspace{1.5cm} + \\left( \\Delta m^2 \\right)_{if} \\Delta m_{12} \\Im\\left( F_V^+\nF_{T_1}^* \\right) \\Bigg),\\\\%\nC_2 &= 8 b^2 \\left( \\left( \\modulus{F_{T_2}}^2 + \\modulus{F_{T_1}}^2 \\right) s -\n\\modulus{F_V^+}^2 - \\modulus{F_A^+}^2 \\right),\n\\end{align}\n\\end{subequations}\nwith\n\\begin{subequations}\n\\begin{align}\n\\Delta a_{tu} &= a_t - a_u,\\\\%\n\\Delta m_{12} &= m_1 - m_2,\\\\%\n\\Delta m_{if} &= m_i - m_f,\\\\%\n\\Sigma m_{12} &= m_1 + m_2,\\\\%\n\\Sigma m_{if} &= m_i + m_f,\\\\%\n\\left(\\Delta m^2\\right)_{12} &= \\Delta m_{12} \\Sigma 
m_{12} = m_1^2 - m_2^2,\\\\%\n\\left(\\Delta m^2\\right)_{if} &= \\Delta m_{if} \\Sigma m_{if} = m_i^2 - m_f^2,\\\\%\n\\left(\\Sigma m^2\\right)_{if} &= m_i^2 + m_f^2.\n\\end{align}\n\\end{subequations}\n\nIn the limit $m_1=m_2$, which happens when $f_1 f_2 = \\ell^-\\ell^+, \\nu\\ensuremath{\\overline{\\nu}},$\nor $f^{DM} \\bar{f}^{DM}$ etc., our expressions for the angular distribution\nmatches with the corresponding expression in Ref.~\\cite{Kim:2016zbg}. It is\nimportant to remember that in the SM we come across scalar, vector and axial\nvector currents only. Therefore, in the SM, $F_P^{\\text{SM}} =\nF_{T_1}^{\\text{SM}} = F_{T_2}^{\\text{SM}} = 0$, which implies that,\n\\begin{subequations}\\label{eq:SM-C012}\n\\begin{align}\nC_0^{\\text{SM}} =& 2 \\Bigg( \\modulus{\\left(F_A^+\\right)_{\\text{SM}}}^2 \\bigg(\ns^2 - 2\\left( \\Sigma m^2 \\right)_{if} s - \\Sigma m_{12}^2 s \\nonumber\\\\%\n& \\hspace{2cm} + 2 \\Sigma m_{12}^2 \\left( \\Sigma m^2 \\right)_{if} + \\left(\n\\Delta m^2 \\right)_{if}^2-\\Delta a_{tu}^2 \\bigg) \\nonumber\\\\%\n& \\quad + \\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 \\bigg( s^2 - 2 \\left(\n\\Sigma m^2 \\right)_{if} s -\\Delta m_{12}^2 s \\nonumber\\\\%\n& \\hspace{2.25cm} + 2 \\Delta m_{12}^2 \\left( \\Sigma m^2 \\right)_{if} + \\left(\n\\Delta m^2 \\right)_{if}^2 - \\Delta a_{tu}^2 \\bigg) \\nonumber\\\\%\n& \\quad + \\modulus{\\left(F_A^-\\right)_{\\text{SM}}}^2 \\left(\\Sigma m_{12}^2 s -\n\\left( \\Delta m^2 \\right)_{12}^2 \\right) \\nonumber\\\\%\n& \\quad - \\modulus{\\left(F_V^-\\right)_{\\text{SM}}}^2 \\left( \\left( \\Delta m^2\n\\right)_{12}^2-\\Delta m_{12}^2 s \\right) \\nonumber\\\\%\n& \\quad - \\modulus{\\left(F_S\\right)_{\\text{SM}}}^2 \\left( \\Sigma m_{12}^2 -s\n\\right) \\nonumber\\\\%\n& \\quad + 2 \\Re\\left( \\left(F_A^+\\right)_{\\text{SM}}\n\\left(F_A^-\\right)_{\\text{SM}}^* \\right) \\bigg( \\left( \\Delta m^2 \\right)_{if}\n\\Sigma m_{12}^2 \\nonumber\\\\*%\n& \\hspace{4cm} - \\Delta a_{tu} \\left( 
\\Delta m^2 \\right)_{12} \\bigg)\n\\nonumber\\\\%\n& \\quad + 2 \\Re\\left( \\left(F_V^+\\right)_{\\text{SM}}\n\\left(F_V^-\\right)_{\\text{SM}}^* \\right) \\bigg( \\left( \\Delta m^2 \\right)_{if}\n\\Delta m_{12}^2 \\nonumber\\\\%\n& \\hspace{4cm} - \\Delta a_{tu} \\left( \\Delta m^2 \\right)_{12} \\bigg) \\Bigg),\\\\%\nC_1^{\\text{SM}} =& 8 b \\Bigg( \\Delta a_{tu} \\left(\n\\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 +\n\\modulus{\\left(F_A^+\\right)_{\\text{SM}}}^2 \\right) \\nonumber\\\\%\n& \\qquad + \\left( \\Delta m^2 \\right)_{12} \\bigg( \\Re\\left(\n\\left(F_V^+\\right)_{\\text{SM}} \\left(F_V^-\\right)_{\\text{SM}}^* \\right)\n\\nonumber\\\\%\n& \\hspace{2.5cm} + \\Re\\left( \\left(F_A^+\\right)_{\\text{SM}}\n\\left(F_A^-\\right)_{\\text{SM}}^* \\right) \\bigg) \\Bigg),\\\\%\nC_2^{\\text{SM}} =& - 8 b^2 \\left( \\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 +\n\\modulus{\\left(F_A^+\\right)_{\\text{SM}}}^2 \\right).\n\\end{align}\n\\end{subequations}\n\nIt is interesting to note that in the special case of $m_1 = m_2$, such as in\n$P_i \\to P_f \\ell^+ \\ell^-$, we always have $C_1^{\\text{SM}}=0$. For specific\nmeson decays of the form $P_i \\to P_f f_1 f_2$ allowed in the SM, one can write\ndown $\\left(F_S\\right)_{\\text{SM}}$ , $\\left(F_V^{\\pm}\\right)_{\\text{SM}}$ and\n$\\left(F_A^{\\pm}\\right)_{\\text{SM}}$, at least in principle. The SM prediction\nfor the angular distribution can thus be compared with corresponding\nexperimental measurement. 
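The vanishing of $C_1^{\text{SM}}$ for $m_1 = m_2$ noted above is easy to confirm numerically. The following sketch codes $C_1^{\text{SM}}$ from Eq. (SM-C012); the form-factor values used below are arbitrary complex placeholders, not taken from any model:

```python
import math

def c1_sm(s, mi, mf, m1, m2, FVp, FVm, FAp, FAm):
    """C_1 in the SM limit (F_P = F_T1 = F_T2 = 0), Eq. (SM-C012).
    FVp, FVm, FAp, FAm stand for F_V^+, F_V^-, F_A^+, F_A^-; they may be complex."""
    lam = lambda x, y, z: x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)
    dif = mi * mi - mf * mf - s
    a_t = m1 * m1 + mf * mf + (s + m1 * m1 - m2 * m2) * dif / (2.0 * s)
    a_u = m2 * m2 + mf * mf + (s - m1 * m1 + m2 * m2) * dif / (2.0 * s)
    b = math.sqrt(lam(s, m1 * m1, m2 * m2) * lam(s, mi * mi, mf * mf)) / (2.0 * s)
    d_atu = a_t - a_u                      # Delta a_tu
    dm2_12 = m1 * m1 - m2 * m2             # (Delta m^2)_12
    vv = (FVp * FVm.conjugate()).real      # Re(F_V^+ F_V^-*)
    aa = (FAp * FAm.conjugate()).real      # Re(F_A^+ F_A^-*)
    return 8.0 * b * (d_atu * (abs(FVp) ** 2 + abs(FAp) ** 2) + dm2_12 * (vv + aa))
```

For $m_1 = m_2$ both $\Delta a_{tu}$ and $\left(\Delta m^2\right)_{12}$ vanish, so $C_1^{\text{SM}} = 0$ irrespective of the form-factor values, while for $m_1 \neq m_2$ the result is generically non-zero.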
In order to quantitatively compare the theoretical\nprediction with experimental measurement, we define the following three angular\nasymmetries which can precisely probe $C_0$, $C_1$ and $C_2$ individually,\n\\begin{subequations}\\label{eq:ang-asymmetries}\n\\begin{align}\nA_0 \\equiv A_0(s) &= \\frac{- \\frac{1}{6} \\left( \\int_{-1}^{-1\/2} - 7\n\t\\int_{-1\/2}^{+1\/2} + \\int_{+1\/2}^{+1} \\right) \\dfrac{d^2 \\Gamma}{ds \\,\n\t\td\\cos\\theta} d\\cos\\theta}{d\\Gamma\/ds} \\nonumber\\\\%\n&= 3C_0\/\\left(6C_0+2C_2\\right),\\\\%\nA_1 \\equiv A_1(s) &= \\frac{- \\left( \\int_{-1}^{0} - \\int_{0}^{+1} \\right)\n\t\\dfrac{d^2 \\Gamma}{ds \\, d\\cos\\theta} d\\cos\\theta}{d\\Gamma\/ds} \\nonumber\\\\%\n&= 3C_1\/\\left(6C_0+2C_2\\right),\\\\%\nA_2 \\equiv A_2(s) &= \\frac{2 \\left( \\int_{-1}^{-1\/2} - \\int_{-1\/2}^{+1\/2} +\n\t\\int_{+1\/2}^{+1} \\right) \\dfrac{d^2 \\Gamma}{ds \\, d\\cos\\theta}\n\td\\cos\\theta}{d\\Gamma\/ds} \\nonumber\\\\%\n&= 3C_2\/\\left(6C_0+2C_2\\right).\n\\end{align}\n\\end{subequations}\nThe angular asymmetries of Eq.~\\eqref{eq:ang-asymmetries} are functions of $s$\nand it is easy to show that $A_2 = 3 \\left(1\/2 - A_0 \\right)$. We can do the\nintegration over $s$ in Eq.~\\eqref{eq:gen-angular-dist} and define the following\nnormalized angular distribution,\n\\begin{equation}\\label{eq:Gen-ang-dist}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = T_0 + T_1 \\cos\\theta + T_2 \\cos^2\\theta,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:Def-T012}\nT_j = 3 c_j\/\\left(6c_0 + 2 c_2\\right),\n\\end{equation}\nfor $j=0,1,2$ and with\n\\begin{equation}\\label{eq:cj}\nc_j = \\int_{(m_1+m_2)^2}^{(m_i-m_f)^2} \\frac{b\\sqrt{s} \\, C_j}{128 \\pi^3 m_i^2 \\left(m_i^2 - m_f^2 + s\\right)} ds.\n\\end{equation}\nFrom Eq.~\\eqref{eq:Def-T012} it is easy to show that $T_2 = 3 \\left(1\/2 -\nT_0\\right)$ which also ensures that integration over $\\cos\\theta$ on\nEq.~\\eqref{eq:Gen-ang-dist} is equal to $1$. 
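The equivalence between the window-integral definitions of $A_0$, $A_1$, $A_2$ and their closed forms in $C_0$, $C_1$, $C_2$ can be checked with a few lines of Python (the coefficient values in the test are arbitrary; the integrals are done analytically via the antiderivative of the quadratic shape):

```python
def asymmetries_closed(C0, C1, C2):
    """A_0, A_1, A_2 from the closed forms A_j = 3 C_j / (6 C_0 + 2 C_2)."""
    n = 6.0 * C0 + 2.0 * C2
    return 3.0 * C0 / n, 3.0 * C1 / n, 3.0 * C2 / n

def asymmetries_from_windows(C0, C1, C2):
    """The same asymmetries built from the defining cos(theta) window integrals."""
    F = lambda x: C0 * x + C1 * x * x / 2.0 + C2 * x ** 3 / 3.0  # antiderivative
    I = lambda a, b: F(b) - F(a)
    total = I(-1.0, 1.0)                                         # plays the role of dGamma/ds
    A0 = -(I(-1.0, -0.5) - 7.0 * I(-0.5, 0.5) + I(0.5, 1.0)) / (6.0 * total)
    A1 = -(I(-1.0, 0.0) - I(0.0, 1.0)) / total
    A2 = 2.0 * (I(-1.0, -0.5) - I(-0.5, 0.5) + I(0.5, 1.0)) / total
    return A0, A1, A2
```

The relation $A_2 = 3\left(1/2 - A_0\right)$ also follows immediately from the closed forms, since $A_0 + A_2/3 = (3C_0 + C_2)/(6C_0 + 2C_2) = 1/2$.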
It is interesting to note that the\nangular distribution of Eq.~\\eqref{eq:Gen-ang-dist} can be written in terms of\nthe orthogonal Legendre polynomials of $\\cos\\theta$ as well,\n\\begin{equation}\\label{eq:Gen-ang-dist-Legendre}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\sum_{i=0}^{2} \\langle G^{(i)} \\rangle P_i \\left(\\cos\\theta\\right).\n\\end{equation}\nHere we have followed the notation of Ref.~\\cite{Gratrex:2015hna} which also\nanalyzes decays of the type $P_i \\to P_f f_1 f_2$, with only leptons for\n$f_{1,2}$, in a model-independent manner but using a generalized helicity\namplitude method. The observables $\\langle G^{(i)} \\rangle$ of\nEq.~\\eqref{eq:Gen-ang-dist-Legendre} are related to $T_0$, $T_1$ and $T_2$ of\nEq.~\\eqref{eq:Gen-ang-dist} as follows,\n\\begin{subequations}\n\\begin{align}\n\\langle G^{(0)} \\rangle &= T_0 + T_2\/3 = 1\/2,\\\\%\n\\langle G^{(1)} \\rangle &= T_1,\\\\%\n\\langle G^{(2)} \\rangle &= 2 T_2\/3.\n\\end{align}\n\\end{subequations}\nThese angular observables $\\langle G^{(i)} \\rangle$'s can be obtained by using\nthe method of moments \\cite{Gratrex:2015hna, Beaujean:2015xea}. Another\nimportant way to describe the normalized angular distribution is by using a flat\nterm $F_H\/2$ and the forward-backward asymmetry $A_{FB}$ \\cite{AngDist:Hiller}\nas follows,\n\\begin{equation}\\label{eq:Gen-ang-dist-expt}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\frac{1}{2} F_H + A_{FB}\n\\cos\\theta + \\frac{3}{4} \\left(1-F_H\\right) \\left(1 - \\cos^2\\theta\\right).\n\\end{equation} \nThis form of the angular distribution has also been used in the experimental\ncommunity \\cite{AngDist:Expt} in the study of $B \\to K \\ell^+ \\ell^-$. 
The parameters $F_H$ and $A_{FB}$ are related to $T_0$, $T_1$ and $T_2$ as follows,\n\\begin{subequations}\n\\begin{align}\nF_H &= 2 \\left( T_0 + T_2 \\right) = 3 - 4 T_0,\\\\%\nA_{FB} &= T_1.\n\\end{align}\n\\end{subequations}\nThus we have shown that Eqs.~\\eqref{eq:Gen-ang-dist},\n\\eqref{eq:Gen-ang-dist-Legendre} and \\eqref{eq:Gen-ang-dist-expt} are equivalent\nto one another. In this paper, we choose to work with the normalized angular\ndistribution in terms of $T_0$, $T_1$ and $T_2$ as shown in\nEq.~\\eqref{eq:Gen-ang-dist}. This is because the terms $T_0$, $T_1$ and $T_2$\ncan be easily determined experimentally by using the $t$-vs-$u$ Dalitz plot,\nwhich does not depend on any specific frame of reference. This Dalitz plot can\nbe easily divided into four segments $I$, $II$, $III$ and $IV$ as shown in\nFig.~\\ref{fig:Dalitz-plot-region}. The segments are defined as follows,\n\\begin{center}\n\\begin{tabular}{lcl}\nSegment $I$ & : & $-1 \\leqslant \\cos\\theta \\leqslant -0.5$,\\\\%\nSegment $II$ & : & $-0.5 < \\cos\\theta \\leqslant 0$,\\\\%\nSegment $III$ & : & $0 < \\cos\\theta \\leqslant 0.5$,\\\\%\nSegment $IV$ & : & $0.5 < \\cos\\theta \\leqslant 1$.%\n\\end{tabular}\n\\end{center}\n\\begin{figure}[hbtp]\n\\centering%\n\\includegraphics[scale=0.8]{fig_Dalitz_plot_region.pdf}%\n\\caption{Two examples depicting the variation of $\\cos\\theta$ in the interior\n\tregion of the $t$-vs-$u$ Dalitz plot.
The interior of the Dalitz plot can be\n\tdivided into four segments, $I$, $II$, $III$ and $IV$, as shown here.}%\n\\label{fig:Dalitz-plot-region}\n\\end{figure}\nThe terms $T_0$, $T_1$ and $T_2$ can thus be expressed in terms of the following\nasymmetries,\n\\begin{subequations}\\label{eq:T012}\n\\begin{align}\nT_0 &= - \\frac{1}{6} \\left( \\frac{N_I - 7 \\left( N_{II} + N_{III}\\right) +\n\tN_{IV}}{N_I + N_{II} + N_{III} + N_{IV}} \\right),\\\\%\nT_1 &= \\frac{\\left( N_{III} + N_{IV} \\right) - \\left( N_I + N_{II} \\right)}{N_I\n\t+ N_{II} + N_{III} + N_{IV}},\\\\%\nT_2 &= 2 \\left( \\frac{N_I - \\left( N_{II} + N_{III}\\right) + N_{IV}}{N_I +\n\tN_{II} + N_{III} + N_{IV}} \\right),\n\\end{align}\n\\end{subequations}\nwhere $N_j$ denotes the number of events contained in the segment $j$, and the\nsign of $T_1$ follows the forward-minus-backward convention implied by the\ndefinition of $A_1$. Since the\n$t$-vs-$u$ Dalitz plot does not depend on the frame of reference, we need not\nconstrain ourselves to the Gottfried-Jackson frame of Fig.~\\ref{fig:GJ-frame}\nand can work in the laboratory frame as well. Furthermore, we can use the\nexpressions in Eq.~\\eqref{eq:T012} to search for NP.\n\n\n\\section{Illustrating the effects of new physics on the angular distribution}\\label{sec:example}\n\n\\subsection{Classification of the \\texorpdfstring{$P_i \\to P_f f_1 f_2$}{Pi -> Pf + f1 + f2} decays}%\nIt should be emphasized that for our methodology to work, we need to know the\nangle $\\theta$ in the Gottfried-Jackson frame, or equivalently the $t$-vs-$u$\nDalitz plot, both of which demand that the 4-momenta of the final particles be fully known.\nUsually, the 4-momenta of the initial and final pseudo-scalar mesons are\ndirectly measured experimentally. However, depending on the detection\npossibilities of $f_1$ and $f_2$ we can identify three distinct scenarios for\nour process $P_i \\to P_f f_1 f_2$.
We introduce the notations $f_i^{\\textrm{\\ding{51}}}$\nand $f_i^{\\textrm{\\ding{55}}}$ to denote whether the fermion $f_i$ gets detected\n(\\textrm{\\ding{51}}) or not (\\textrm{\\ding{55}}) by the detector. Using this notation the three\nscenarios are described as follows.\n\\begin{enumerate}\n\\item[(S1)] $P_i \\to P_f + f_1^{\\textrm{\\ding{51}}} + f_2^{\\textrm{\\ding{51}}} \\equiv P_f +\n\\textrm{`visible'}$. Here both $f_1$ and $f_2$ are detected, e.g.\\ when $f_1 f_2\n= \\ell^-\\ell^+$ or $\\ell \\bar{\\ell'}$.%\n\\item[(S2)] $P_i \\to\n\\begin{Bmatrix}\nP_f + f_1^{\\textrm{\\ding{51}}} + f_2^{\\textrm{\\ding{55}}}\\\\%\nP_f + f_1^{\\textrm{\\ding{55}}} + f_2^{\\textrm{\\ding{51}}}\n\\end{Bmatrix} \\equiv P_f + \\textrm{`visible'} + \\text{`invisible'}$. Here either\n$f_1$ or $f_2$ gets detected, e.g.\\ when $f_1 f_2 = \\ell \\nu_{\\ell}$, $\\ell\n\\nu_S$, $\\ell f^{DM}$, $\\ell f^{LLP}$.%\n\\item[(S3)] $P_i \\to P_f + f_1^{\\textrm{\\ding{55}}} + f_2^{\\textrm{\\ding{55}}} \\equiv P_f +\n\\textrm{`invisible'}$. Here neither $f_1$ nor $f_2$ gets detected, e.g.\\ when\n$f_1 f_2 = \\nu_{\\ell}\\ensuremath{\\overline{\\nu}}_{\\ell}$, $\\nu_{\\ell}\\ensuremath{\\overline{\\nu}}_S$, $\\nu_S\\ensuremath{\\overline{\\nu}}_{\\ell}$,\n$\\nu_S\\ensuremath{\\overline{\\nu}}_S$, $f^{DM} \\bar{f}^{DM}$, $f_1^{DM} f_2^{DM}$, $f_1^{LLP}\nf_2^{LLP}$ etc.\n\\end{enumerate}\nIt should be noted that the above classification is based on our existing\nexperimental explorations. What is undetected today might get detected in future\nwith advanced detectors. In such a case we can imagine that, in future, the\nmodes grouped in S2 might migrate to S1 and those in S3 might be grouped under\nS2. 
Below we explore each of the above scenarios in more detail.\n\n\\subsection{Exploration of new physics effects in each scenario}\n\nThe first scenario (S1) is an experimenter's delight, as in this case all final\n4-momenta can be easily measured and the $t$-vs-$u$ Dalitz plot can be obtained.\nHere, our methodology can be used to look for the possible signature of new\nphysics in rare decays such as $B \\to D \\ell^- \\ell^+$ (which can be found in\n\\cite{Kim:2016zbg}) or to study the nature of new physics contributing to\nlepton-flavor violating processes such as $B \\to P \\ell^{\\pm} \\ell^{\\prime\\mp}$\nwhere $P=\\pi,K,D$, $\\ell\\neq\\ell'$ and $\\ell,\\ell'=e,\\mu,\\tau$. Let us consider\na few NP possibilities mediating this lepton-flavor violating decay. There is no\ncontribution within the SM to such decays. Therefore, all contributions to these\ndecays come from NP alone. It is easy to see that for the decay $B \\to P\n\\ell^{-} \\ell^{\\prime+}$, from Eqs.~\\eqref{eq:gen-C012} and\n\\eqref{eq:Gen-ang-dist} we get,\n\\begin{equation}\\label{eq:NPinB2PLl}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \n\\begin{cases}\n\\dfrac{1}{2}, & \\left(\\parbox{0.23\\linewidth}{\\textrm{\\centering only scalar or pseudo-scalar interaction}}\\right)\\\\[5mm]%\nT_0 + T_2 \\cos^2\\theta, & \\left(\\parbox{0.23\\linewidth}{\\textrm{\\centering only tensorial interaction}}\\right)\\\\[3mm]%\nT_0 + T_1 \\cos\\theta + T_2 \\cos^2\\theta, & \\left(\\parbox{0.23\\linewidth}{\\textrm{\\centering only vector or axial-vector interaction}}\\right)\n\\end{cases}\n\\end{equation}\nwhere $T_2 = 3\\left(1\/2 - T_0\\right)$ with the quantities $T_0$, $T_1$ and $T_2$\nbeing easily obtainable from the Dalitz plot distribution by using\nEq.~\\eqref{eq:T012}.
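The extraction of $T_0$, $T_1$, $T_2$ from segment counts and the case analysis of Eq. (NPinB2PLl) can be sketched as follows. Note that $T_1$ is computed here as forward minus backward ($\cos\theta > 0$ minus $\cos\theta < 0$), the sign convention implied by the definition of $A_1$, and the classification tolerance is an arbitrary illustrative choice, not a statistically motivated cut:

```python
def T_from_segment_counts(NI, NII, NIII, NIV):
    """T_0, T_1, T_2 from event counts in the Dalitz-plot segments I-IV.
    T_1 is forward-minus-backward (cos(theta) > 0 minus cos(theta) < 0)."""
    N = float(NI + NII + NIII + NIV)
    T0 = -(NI - 7.0 * (NII + NIII) + NIV) / (6.0 * N)
    T1 = ((NIII + NIV) - (NI + NII)) / N
    T2 = 2.0 * (NI - (NII + NIII) + NIV) / N
    return T0, T1, T2

def diagnose_interaction(T0, T1, T2, tol=0.02):
    """Crude classification following the three cases of Eq. (NPinB2PLl):
    a flat distribution points to (pseudo-)scalar couplings, a symmetric
    cos^2(theta) bulge with T2 > 0 to tensor ones (C2 > 0 then), and
    T1 != 0 or T2 < 0 to vector/axial-vector ones (C2 < 0 then)."""
    if abs(T1) > tol or T2 < -tol:
        return "vector/axial-vector"
    if T2 > tol:
        return "tensor"
    return "scalar/pseudo-scalar"
```

In a real analysis the tolerance would be replaced by a proper statistical test on the binned counts; the sketch only illustrates how the three cases of Eq. (NPinB2PLl) separate in the $(T_1, T_2)$ plane.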
It is clear from Eq.~\\eqref{eq:NPinB2PLl} that scalar or\npseudo-scalar interaction would give rise to a uniform (or constant) angular\ndistribution, while tensorial interaction gives a non-uniform distribution which\nis symmetric under $\\cos\\theta \\leftrightarrow -\\cos\\theta$ and for which $T_0\n\\leqslant 1\/2$. On the other hand vector or axial-vector interaction can only be\ndescribed by the most general form of the angular distribution, with its\nsignature being $T_1 \\neq 0$. However, if vector or axial-vector\ninteraction contributes to the flavor violating processes $B \\to P \\ell^{-}\n\\ell^{\\prime+}$, it is important to note that $T_1 \\propto \\left(m_{\\ell}^2 -\nm_{\\ell'}^2\\right)$, where $m_{\\ell}$, $m_{\\ell'}$ denote the masses of the\ncharged leptons $\\ell^-$ and $\\ell^{\\prime+}$ respectively. Therefore, we should\nobserve an increase in the value of $T_1$ when going from $B \\to P \\mu^- e^+$ to\n$B \\to P \\tau^- \\mu^+$ to $B \\to P \\tau^- e^+$. This would nail down the vector\nor axial-vector nature of the NP, if it is the only NP contributing to these\ndecays. Thus far we have analyzed the first scenario (S1) in which the relevant\ndecays can be easily probed with existing detectors.\n\nThe second scenario (S2) can also be studied experimentally with existing\ndetectors. In this case, the missing 4-momentum can be fully deduced using\nconservation of 4-momentum. Thus the $t$-vs-$u$ Dalitz plot can readily be\nobtained. Using our methodology the signatures of NP can then be extracted. One\npromising candidate for NP searches in this kind of scenario is the decay\n$B \\to P \\ell N$ where $P=\\pi$, $K$ or $D$ and $N$ can be an active neutrino\n($\\nu_{\\ell}$) or sterile neutrino ($\\nu_S$) or a neutral dark fermion\n($f^{DM}$) or a long-lived neutral fermion ($f^{LLP}$) which decays outside the\ndetector. 
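As a concrete illustration of this 4-momentum bookkeeping, the following minimal Python sketch builds a self-consistent toy $B \to P \ell N$ event in the $B$ rest frame and recovers the invisible 4-momentum from conservation alone. All masses and 3-momenta here are illustrative choices of ours (including a hypothetical undetected-fermion mass of 1 GeV), not data.

```python
import math

def mass2(p):
    """Minkowski invariant mass squared, metric (+,-,-,-)."""
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

def sub(p, q):
    """Componentwise difference of two 4-vectors."""
    return tuple(a - b for a, b in zip(p, q))

# Toy event (GeV units, B-meson rest frame); m_N is a hypothetical
# undetected-fermion mass, e.g. a sterile neutrino.
m_B, m_P, m_N = 5.279, 0.1396, 1.0
p_B = (m_B, 0.0, 0.0, 0.0)

# Build a consistent event: pick 3-momenta for P and the invisible N ...
pP3 = (0.3, 0.2, -0.1)
pN3 = (-0.5, 0.1, 0.4)
p_P = (math.sqrt(m_P**2 + sum(x * x for x in pP3)),) + pP3
p_N = (math.sqrt(m_N**2 + sum(x * x for x in pN3)),) + pN3
# ... the charged lepton carries the remainder, by 4-momentum conservation.
p_l = sub(sub(p_B, p_P), p_N)

# "Experiment": only p_P and p_l are measured; the missing 4-momentum
# and the invisible mass follow from conservation.
p_miss = sub(sub(p_B, p_P), p_l)
m_miss = math.sqrt(mass2(p_miss))
assert all(abs(a - b) < 1e-12 for a, b in zip(p_miss, p_N))
assert abs(m_miss - m_N) < 1e-9
```

With the full 4-momenta in hand, the Dalitz variables and the angle $\theta$ follow directly, which is what makes S2 modes accessible to the present analysis.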
These S2 decay modes offer an exciting opportunity for the study of NP\neffects.\n\nThe third scenario (S3), which has the maximum number of NP possibilities, is\nalso the most challenging one for the current generation of experimental\nfacilities, due to the lack of information about the individual 4-momenta of $f_1$\nand $f_2$. This implies that we cannot do any angular analysis for these kinds\nof decays unless, by some technological advancement such as by using displaced\nvertex detectors\\footnote{There are many existing proposals for such displaced\n\tvertex studies from other theoretical and experimental considerations (see\n\tRefs.~\\cite{DV:Theory,DV:Experiments} and references therein for\n\tfurther information). }, we can manage to measure the 4-momentum or\nthe angular information of at least one of the final fermions. Getting the 4-momenta\nof both fermions would be ideal, but knowing the 4-momentum of either one of\nthem would suffice for our purpose. We are optimistic that advancements in\ndetector technology will allow the current S3 decay modes to be labelled as S2\nmodes in the foreseeable future. It is important to note that once the current\nS3 modes enter the S2 category, we can cover the whole spectrum of NP\npossibilities in the $P_i \\to P_f f_1 f_2$ decays. 
Below we make a comprehensive\nexploration of NP possibilities in the generalized S2 decay modes, which\ninclude the current S2 and S3 modes together.\n\n\\subsection{Probing effects of new physics in the (S2)\\\\and generalized (S2)\tscenarios}\n\nIn the generalized S2 (GS2) scenario we have decays of the type $P_i \\to\n\\begin{Bmatrix}\nP_f + f_1^{\\textrm{\\ding{51}}} + f_2^{\\textrm{\\ding{55}}}\\\\%\nP_f + f_1^{\\textrm{\\ding{55}}} + f_2^{\\textrm{\\ding{51}}}\n\\end{Bmatrix} \\equiv P_f + \\textrm{`visible'} + \\text{`invisible'}$, where the detected (\\textrm{\\ding{51}}) or undetected (\\textrm{\\ding{55}}) nature is not constrained by our existing detector technology. In some cases, even with advanced detectors, either of the fermions $f_1$, $f_2$ might not get detected simply because its direction of flight lies outside the finite detector coverage, especially when the detector is located far from the place of origin of the particle. Such possibilities are also included here. As noted before, measuring the 4-momentum of either of the final fermions would suffice to carry out the angular analysis following our approach. \n\nIn this context let us analyze the following decays.\n\\begin{enumerate}\n\\item[(i)] S2 decay: $B \\to P\\ell^- f^{\\textrm{\\ding{55}}}$ where $P$ can be $\\pi$ or\n$D$ and $f^{\\textrm{\\ding{55}}}$ is a neutral fermion. In the SM this process is\nmediated by the $W^-$ boson and we have $f^{\\textrm{\\ding{55}}} = \\ensuremath{\\overline{\\nu}}_{\\ell}$. The presence\nof NP can imply that $f^{\\textrm{\\ding{55}}}$ is a sterile neutrino $\\nu_S$ or a fermionic\ndark matter particle $f^{DM}$ or a long-lived fermion $f^{LLP}$, with additional\nnon-SM interactions.%\n\\item[(ii)] GS2 decay: $B \\to K f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$ where\n$f_1^{\\textrm{\\ding{51}}}$ and $f_2^{\\textrm{\\ding{55}}}$ are both neutral fermions. 
In the SM\nthis process is mediated by the $Z^0$ boson and we have $f_1 f_2 = \\nu_{\\ell}\n\\ensuremath{\\overline{\\nu}}_{\\ell}$. However, in the case of NP contributions we can get pairs of sterile\nneutrinos, fermionic dark matter, or fermionic long-lived particles etc.\\ along\nwith nonstandard interactions as well. Here we are assuming that either of the\nfinal neutral fermions leaves a displaced vertex signature in an advanced\ndetector so that its 4-momentum or angular information could be obtained.%\n\\end{enumerate}\n\n\\subsubsection{New physics effects in the S2 decay \\texorpdfstring{$B \\to P\\ell^- f^{\\textrm{\\ding{55}}}$}{B -> P + l- + fX}}\nAnalyzing the $B \\to P\\ell^- f^{\\textrm{\\ding{55}}}$ decay in the SM we find that only\nvector and axial-vector currents contribute and $F_A^{\\pm} = -F_V^{\\pm}$ while\nother form factors are zero. Also considering the anti-neutrino to be massless,\ni.e.\\ $m_2 = 0$, we find that \n\\begin{align*}\na_t &= m_{\\ell}^2 + m_P^2 + \\left(s + m_{\\ell}^2\\right) \\left(m_B^2 - m_P^2 -\ns\\right)\/(2s),\\\\%\na_u &= m_P^2 + \\left(s - m_{\\ell}^2\\right) \\left(m_B^2 - m_P^2 -\ns\\right)\/(2s),\\\\%\nb &= \\left(s-m_{\\ell}^2\\right) \\sqrt{\\lambda\\left(s, m_B^2, m_P^2\\right)}\/(2s),\n\\end{align*}\nwhere $m_{\\ell}$, $m_P$ and $m_B$ denote the masses of the charged lepton\n$\\ell^-$ and of the mesons $P$ and $B$, respectively. 
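As a quick consistency check on these kinematic quantities, the sketch below (with illustrative $B$ and $\pi$ mass values of ours) confirms that in the limit $m_\ell \to 0$ the expressions collapse to $a_t = a_u = (m_B^2 + m_P^2 - s)/2$ and $b = \sqrt{\lambda}/2$, the same massless-fermion forms used later in this section for $B \to K \nu \bar\nu$:

```python
import math

def lam(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * (a * b + b * c + c * a)

def kin(s, mB2, mP2, ml2):
    """a_t, a_u, b for B -> P l f with a massless invisible fermion (m2 = 0)."""
    a_t = ml2 + mP2 + (s + ml2) * (mB2 - mP2 - s) / (2 * s)
    a_u = mP2 + (s - ml2) * (mB2 - mP2 - s) / (2 * s)
    b = (s - ml2) * math.sqrt(lam(s, mB2, mP2)) / (2 * s)
    return a_t, a_u, b

# Illustrative values (GeV^2): B and pion masses squared, s inside the Dalitz range
mB2, mP2, s = 5.279**2, 0.1396**2, 4.0
a_t, a_u, b = kin(s, mB2, mP2, 0.0)           # m_l -> 0 limit
assert abs(a_t - a_u) < 1e-12                  # the two coincide
assert abs(a_t - (mB2 + mP2 - s) / 2) < 1e-9   # symmetric massless form
assert abs(b - math.sqrt(lam(s, mB2, mP2)) / 2) < 1e-9
```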
Substituting this information into\nEqs.~\\eqref{eq:SM-C012} and \\eqref{eq:gen-angular-dist} we get,\n\\begin{equation}\\label{eq:B2Plnu-dist-gen}\n\\frac{d^2\\Gamma^{\\textrm{SM}}}{ds \\, d\\cos\\theta} = \\frac{b\\sqrt{s} \\left(\n\tC_0^{\\textrm{SM}} + C_1^{\\textrm{SM}} \\cos\\theta + C_2^{\\textrm{SM}}\n\t\\cos^2\\theta \\right)}{128 \\, \\pi^3 \\, m_B^2 \\left(m_B^2 - m_P^2 + s \\right)},\n\\end{equation}\nwhere\n\\begin{subequations}\\label{eq:C012-in-B2Plnu}\n\\begin{align}\nC_0^{\\text{SM}} =& 4 \\Bigg( \\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 \\bigg(\n\\lambda\\left(s, m_B^2, m_P^2\\right) - m_{\\ell}^2 \\left(s - 2 \\left(m_B^2 -\nm_P^2\\right) \\right) \\nonumber\\\\%\n& \\hspace{2cm} - m_{\\ell}^4 \\left(m_B^2 - m_P^2\\right)^2\/s^2 \\bigg)\n\\nonumber\\\\%\n& \\quad + \\modulus{\\left(F_V^-\\right)_{\\text{SM}}}^2 m_{\\ell}^2 \\left( s -\nm_{\\ell}^2 \\right) \\nonumber\\\\%\n& \\quad + 2 \\Re\\left( \\left(F_V^+\\right)_{\\text{SM}}\n\\left(F_V^-\\right)_{\\text{SM}}^* \\right) m_{\\ell}^2 \\left(m_B^2 - m_P^2\\right)\n\\left(1- \\frac{m_{\\ell}^2}{s}\\right) \\Bigg),\\\\%\nC_1^{\\text{SM}} =& 16 m_{\\ell}^2 b \\Bigg( \\left(\\frac{m_B^2 - m_P^2}{s}\\right)\n\\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 + \\Re\\left(\n\\left(F_V^+\\right)_{\\text{SM}} \\left(F_V^-\\right)_{\\text{SM}}^* \\right)\n\\Bigg),\\\\%\nC_2^{\\text{SM}} =& - 16 b^2 \\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 .\n\\end{align}\n\\end{subequations}\nIt is important to notice that in Eq.~\\eqref{eq:C012-in-B2Plnu} we have many\nterms in the expression for $C_0^{\\textrm{SM}}$ that are proportional to some\npower of the lepton mass, while the entire $C_1^{\\textrm{SM}}$ is directly\nproportional to $m_{\\ell}^2$. 
If we compare the $m_{\\ell}$-dependent and\n$m_{\\ell}$-independent contributions in $C_0^{\\textrm{SM}}$ we find that the\ndependent terms are suppressed by a factor of\n$\\mathcal{O}\\left(2m_{\\ell}^2\/m_B^2\\right)$ which is roughly $8\\times 10^{-4}$\nfor the muon and $2\\times 10^{-8}$ for the electron. Thus we can neglect these\n$m_{\\ell}$-dependent terms in comparison with the mass-independent terms.\nEquivalently, we can treat charged leptons such as the electron and muon as\nmassless fermions when compared with the $B$ meson mass scale. In the limit\n$m_{\\ell} \\to 0$ the expression for the angular distribution as given in\nEq.~\\eqref{eq:B2Plnu-dist-gen} becomes much simpler,\n\\begin{equation}\n\\frac{d^2\\Gamma^{\\text{SM}}}{ds \\, d\\cos\\theta} = \\frac{b^3\\sqrt{s}}{8 \\, \\pi^3\n\t\\, m_B^2 \\left( m_B^2 - m_P^2 + s \\right)}\n\\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2 \\sin^2\\theta.\n\\end{equation}\nIndependent of the expression for $\\left(F_V^+\\right)_{\\text{SM}}$, it is easy\nto show that the normalized angular distribution is given by,\n\\begin{equation}\\label{eq:SM-Dist-B2Plnu-massless}\n\\frac{1}{\\Gamma^{\\text{SM}}} \\frac{d\\Gamma^{\\text{SM}}\n}{d\\cos\\theta} = \\frac{3}{4} \\sin^2\\theta,\n\\end{equation}\nwhich implies that $T_0 = 3\/4 = -T_2$, $T_1 = 0$. Since the distribution of\nevents in the Dalitz plot is symmetric under $\\cos\\theta \\leftrightarrow -\n\\cos\\theta$, we have $N_I = N_{IV}$ and $N_{II} = N_{III}$ which automatically\nsatisfies the condition $T_1 = 0$. If we solve $T_0 = 3\/4 = -T_2$, we find that\nthe number of events in the different segments of the Dalitz plot (equivalently\nthe number of events in the four distinct bins of $\\cos\\theta$) are related to\none another by\n\\begin{equation}\\label{eq:SM-bins-B2Plnu}\n\\frac{N_I}{N_{II}} = \\frac{5}{11} = \\frac{N_{IV}}{N_{III}}.\n\\end{equation}\nAny significant deviation from this would imply the presence of NP effects. 
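Both numerical claims above are easy to reproduce. The short sketch below uses PDG-like mass values (our inputs) and takes bins I--IV to be the four equal-width $\cos\theta$ bins with bin I at $\cos\theta \in [1/2, 1]$, which reproduces the quoted ratio:

```python
# Lepton-mass suppression of the m_l-dependent terms in C0 (masses in GeV,
# PDG-like values; illustrative inputs)
m_B, m_mu, m_e = 5.279, 0.1057, 0.000511
assert abs(2 * m_mu**2 / m_B**2 - 8e-4) < 1e-4   # ~8 x 10^-4 for the muon
assert 2 * m_e**2 / m_B**2 < 3e-8                 # ~2 x 10^-8 for the electron

# Bin ratio N_I/N_II for the (3/4) sin^2(theta) distribution:
# antiderivative of (3/4)(1 - c^2) in c = cos(theta) is F(c) = (3/4)(c - c^3/3)
F = lambda c: 0.75 * (c - c**3 / 3)
N_I = F(1.0) - F(0.5)    # cos(theta) in [1/2, 1]
N_II = F(0.5) - F(0.0)   # cos(theta) in [0, 1/2]
assert abs(N_I / N_II - 5 / 11) < 1e-12
```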
To\nillustrate the effects of NP on the angular distribution in these types of\ndecays, we consider two simple and specific NP possibilities. Here we assume the\ncharged lepton to be massless ($m_{\\ell}=0$) and the undetected fermion\n($f^{\\textrm{\\ding{55}}}$) to have mass $m\\neq 0$.\n\n\\paragraph{\\textbf{Scalar type new physics:}} Considering the simplest scalar type NP\nscenario, with $F_S \\neq 0$, $F_P = F_V^{\\pm} = F_A^{\\pm} = F_{T_1} = F_{T_2} =\n0$, we get\n\\begin{align*}\nC_0^{\\text{NP}} =& 2 \\left(s - m^2\\right) \\modulus{F_S}^2,\\\\%\nC_1^{\\text{NP}} =& 0 = C_2^{\\text{NP}}.\n\\end{align*}\nIn other words, there is no angular dependence at all here, i.e.\\\n\\begin{equation*}\n\\frac{d^2\\Gamma^{\\text{NP}}}{ds \\, d\\cos\\theta} = \\frac{b\\sqrt{s}}{64 \\, \\pi^3 \\,\n\tm_B^2 \\left(m_B^2 - m_P^2 + s \\right)} \\left(s - m^2\\right) \\modulus{F_S}^2,\n\\end{equation*}\nwhere $b = \\left(s-m^2\\right) \\sqrt{\\lambda\\left(s,m_B^2,m_P^2\\right)}\/(2s)$ and\n$m^2 \\leqslant s \\leqslant \\left(m_B-m_P\\right)^2$. 
If we do the integration\nover $s$, then the normalized angular distribution is given by,\n\\begin{equation*}\n\\frac{1}{\\Gamma^{\\text{NP}}} \\frac{d\\Gamma^{\\text{NP}}}{d\\cos\\theta} =\n\\frac{1}{2}.\n\\end{equation*}\nIn fact, if such new physics were present, our observation of $B \\to P +\n\\ell^- + f^{\\textrm{\\ding{55}}}$ would have the following angular distribution,\n\\begin{equation*}\n\\frac{d\\Gamma}{d\\cos\\theta} = \\Gamma^{\\text{SM}} \\left(\\frac{3}{4} \\sin^2\\theta\n+ \\frac{1}{2} \\epsilon_0 \\right),\n\\end{equation*}\nwhere we have parametrized the new physics contribution in terms of $\\epsilon_0$,\n\\begin{equation*}\n\\epsilon_0 = \\Gamma^{\\text{NP}}\/\\Gamma^{\\text{SM}}.\n\\end{equation*}\nDoing integration over $\\cos\\theta$ we get,\n\\begin{equation*}\n\\Gamma = \\Gamma^{\\text{SM}} \\left(1+\\epsilon_0\\right) = \\Gamma^{\\text{SM}} + \\Gamma^{\\text{NP}}.\n\\end{equation*}\nThis implies \n\\begin{equation}\\label{eq:Scalar-NP-Dist-B2Plnu}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\frac{3\\sin^2\\theta + 2\n\t\\epsilon_0}{4 \\left(1+\\epsilon_0\\right)}.\n\\end{equation}\nThis angular distribution is shown in Fig.~\\ref{fig:Scalar-NP-B2Plnu} where we\nhave varied $\\epsilon_0$ in the range $[0,1]$, i.e.\\ we have allowed the\npossibility that the NP contribution might be as large as that of the SM. It is\ninteresting to find that in Fig.~\\ref{fig:Scalar-NP-B2Plnu} at two specific\nvalues of $\\cos\\theta$ there is no difference between the standard model\nprediction alone and the combination of standard model and new physics\ncontributions. These two points can be easily obtained by equating\nEqs.~\\eqref{eq:SM-Dist-B2Plnu-massless} and \\eqref{eq:Scalar-NP-Dist-B2Plnu};\nsolving for $\\cos\\theta$ gives us\n\\begin{equation}\n\\cos\\theta = \\pm 1\/\\sqrt{3} \\approx \\pm 0.57735.\n\\end{equation}\nThis corresponds to the angle $\\theta \\approx 54.74^{\\circ}$. 
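The algebra behind these two special points is short; writing out the equating step explicitly (an expanded version of the argument above, nothing more):

```latex
% Equating the SM-only and SM+NP normalized distributions: the epsilon_0
% dependence cancels, so the crossing points are universal.
\begin{equation*}
\frac{3}{4} \sin^2\theta
  = \frac{3 \sin^2\theta + 2\epsilon_0}{4 \left(1+\epsilon_0\right)}
\;\Longrightarrow\;
3 \epsilon_0 \sin^2\theta = 2 \epsilon_0
\;\Longrightarrow\;
\sin^2\theta = \frac{2}{3},
\quad
\cos\theta = \pm \frac{1}{\sqrt{3}}.
\end{equation*}
```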
At these two\npoints in $\\cos\\theta$, the normalized uni-angular distribution always has the\nvalue $0.5$, even if there is some scalar new physics contributing to our\nprocess under consideration.\n\n\\begin{figure}[hbtp]\n\\centering%\n\\includegraphics[scale=0.8]{fig_Scalar-NP.pdf}%\n\\caption{Normalized uni-angular distribution showing the effect of a scalar new\n\tphysics contribution to $B \\to P \\ell^- f^{\\textrm{\\ding{55}}}$ where we have neglected\n\tthe mass of the charged lepton $\\ell =e,\\mu$. The same plot also describes the\n\teffect of a scalar new physics contribution to $B \\to K f_1^{\\textrm{\\ding{51}}}\n\tf_2^{\\textrm{\\ding{55}}}$ for the $m_1 = m_2$ case.}%\n\\label{fig:Scalar-NP-B2Plnu}\n\\end{figure}\n\nFrom Eq.~\\eqref{eq:Scalar-NP-Dist-B2Plnu} it is clear that despite the scalar NP\neffect, the distribution is still symmetric under $\\cos\\theta \\leftrightarrow\n-\\cos\\theta$, and solving for the number of events in the four segments of the\nDalitz plot (equivalently the four $\\cos\\theta$ bins) we get,\n\\begin{equation}\\label{eq:Scalar-NP}\n\\frac{N_I}{N_{II}} = \\frac{5+8\\epsilon_0}{11 + 8\\epsilon_0} =\n\\frac{N_{IV}}{N_{III}}.\n\\end{equation}\nIt is easy to see that when $\\epsilon_0=0$ we get back the SM prediction of\nEq.~\\eqref{eq:SM-bins-B2Plnu} as expected.\n\n\\paragraph{\\textbf{Tensor type new physics:}} \n\n\\begin{figure*}[hbtp]\n\\centering%\n\\includegraphics[scale=0.8]{fig_B2Plnu-Tensor-NP.pdf}%\n\\caption{Normalized uni-angular distribution showing the effect of a tensor new\n\tphysics contribution to $B \\to P \\ell^- f^{\\textrm{\\ding{55}}}$ where we have neglected\n\tthe mass of the charged lepton $\\ell=e,\\mu$. 
This set of plots can also\n\tdescribe the effect of a vector new physics contribution to $B \\to K\n\tf_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$ when the final fermions have equal\n\tmasses.}%\n\\label{fig:Tensor-NP}\n\\end{figure*}\n\nLet us consider a tensor type of new physics\npossibility in which $F_{T_1} \\neq 0$ and all other form factors are zero. In\nsuch a case we get,\n\\begin{align*}\nC_0^{\\textrm{NP}} &=2 m^2 \\left(s-m^2\\right)\n\\frac{\\lambda\\left(s,m_B^2,m_P^2\\right)}{s} \\modulus{F_{T_1}}^2,\\\\%\nC_1^{\\textrm{NP}} &=0,\\\\%\nC_2^{\\textrm{NP}} &= 2 \\left(s-m^2\\right)^2 \\frac{\\lambda\\left(s, m_B^2,\n\tm_P^2\\right)}{s} \\modulus{F_{T_1}}^2.\n\\end{align*}\nIt is easy to notice that in the limit $m \\to 0$ we have $C_0 \\to 0$ but $C_2\n\\not\\to 0$. If we do the integration over $s$, then the normalized angular\ndistribution is given by,\n\\begin{equation}\n\\frac{1}{\\Gamma^{\\textrm{NP}}} \\frac{d\\Gamma^{\\textrm{NP}}}{d\\cos\\theta} =\nT_0^{\\textrm{NP}} + T_2^{\\textrm{NP}} \\cos^2\\theta,\n\\end{equation}\nwhere $T_2^{\\textrm{NP}} = 3\\left(1\/2-T_0^{\\textrm{NP}}\\right)$ and\n$T_0^{\\textrm{NP}} = 3c_0\/\\left(6c_0 + 2c_2\\right)$ with\n\\begin{equation*}\nc_j = \\int_{m^2}^{\\left(m_B-m_P\\right)^2} \\frac{b\\sqrt{s} \\\n\tC_j^{\\textrm{NP}}}{128 \\pi^3 m_B^2 \\left(m_B^2 - m_P^2 + s\\right)} ds.\n\\end{equation*}\nThus in the limit $m\\to 0$ we have $T_0^{\\textrm{NP}} = 0$. 
If such new physics were\npresent, our observation of $B \\to P \\ell^- f^{\\textrm{\\ding{55}}}$ would have the\nfollowing angular distribution,\n\\begin{equation}\n\\frac{d\\Gamma}{d\\cos\\theta} = \\Gamma^{\\text{SM}} \\left(\\frac{3}{4} \\sin^2\\theta + \\left(T_0^{\\textrm{NP}} + 3 \\left(\\frac{1}{2} - T_0^{\\textrm{NP}}\\right) \\cos^2\\theta\\right) \\epsilon \\right),\n\\end{equation}\nwhere $\\epsilon=\\Gamma^{\\textrm{NP}}\/\\Gamma^{\\textrm{SM}}$ is the NP parameter\nwhich can vary in the range $\\left[0,1\\right]$ denoting the possibility that the\nNP contribution can be as large as that of the SM, and $T_0^{\\textrm{NP}}$ acts\nas a free parameter here which can vary in the range $\\left[0,3\/4\\right]$, for\nwhich $d\\Gamma^{\\textrm{NP}}\/d\\cos\\theta \\geqslant 0$ for all values of\n$\\cos\\theta$. Doing integration over $\\cos\\theta$ we get $\\Gamma =\n\\Gamma^{\\textrm{SM}} \\left(1 + \\epsilon\\right) = \\Gamma^{\\textrm{SM}} +\n\\Gamma^{\\textrm{NP}}$. This implies\n\\begin{equation}\\label{eq:Tensor-NP-Dist-B2Plnu}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\frac{3 + 4 T_0^{\\textrm{NP}}\n\t\\epsilon - 3 \\left(4T_0^{\\textrm{NP}} \\epsilon -2\\epsilon +\n\t1\\right)\\cos^2\\theta}{4\\left(1+\\epsilon\\right)}.\n\\end{equation}\n\nThis angular distribution is shown in Fig.~\\ref{fig:Tensor-NP} in which we have\nconsidered nine values of $T_0^{\\textrm{NP}}$ and varied $\\epsilon$ in the range\n$[0,1]$. It is clearly evident in Fig.~\\ref{fig:Tensor-NP} that the\n$T_0^{\\textrm{NP}} = 3\/4$ case is always indistinguishable from the SM case, as\nit should be. Just like the scalar-type new physics case, we observe that there\nare two values of $\\cos\\theta$ at which there is no difference between the SM\nprediction alone and the combination of SM and NP contributions. 
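The combined distribution of Eq.~\eqref{eq:Tensor-NP-Dist-B2Plnu} can be checked numerically; the sketch below (our own verification, scanning a small grid of allowed parameter values) confirms unit normalization and that the distribution takes the value $1/2$ at $\cos\theta = \pm 1/\sqrt{3}$ for any $(\epsilon, T_0^{\textrm{NP}})$:

```python
# f(c) is the normalized SM + tensor-NP distribution in c = cos(theta):
# f(c) = (3 + 4*T0*eps - 3*(4*T0*eps - 2*eps + 1)*c^2) / (4*(1 + eps))
def f(c, eps, T0):
    return (3 + 4 * T0 * eps
            - 3 * (4 * T0 * eps - 2 * eps + 1) * c * c) / (4 * (1 + eps))

def norm(eps, T0):
    """Exact integral of f over c in [-1, 1] via the antiderivative c^2 -> c^3/3."""
    return (2 * (3 + 4 * T0 * eps)
            - 2 * (4 * T0 * eps - 2 * eps + 1)) / (4 * (1 + eps))

for eps in (0.0, 0.3, 1.0):          # NP strength in [0, 1]
    for T0 in (0.0, 0.25, 0.75):     # free parameter in [0, 3/4]
        assert abs(norm(eps, T0) - 1.0) < 1e-12    # properly normalized
        c = 1 / 3**0.5                             # the NP-blind points
        assert abs(f(c, eps, T0) - 0.5) < 1e-12
        assert abs(f(-c, eps, T0) - 0.5) < 1e-12
```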
These two\npoints can be easily computed by equating\nEqs.~\\eqref{eq:SM-Dist-B2Plnu-massless} and \\eqref{eq:Tensor-NP-Dist-B2Plnu},\nand then solving for $\\cos\\theta$ we once again find that,\n\\begin{equation}\n\\cos\\theta = \\pm 1\/\\sqrt{3} \\approx \\pm 0.57735,\n\\end{equation}\nwhich corresponds to the angle $\\theta \\approx 54.74^{\\circ}$. At these two\npoints in $\\cos\\theta$, the normalized uni-angular distribution always has the\nvalue $0.5$, even if there is some tensor new physics contributing to our\nprocess under consideration. It should be noted that these are also the same\npoints where the scalar new physics contribution shows a similar effect.\n\nIt is also easy to notice that the angular distribution as given in\nEq.~\\eqref{eq:Tensor-NP-Dist-B2Plnu} is symmetric under $\\cos\\theta\n\\leftrightarrow -\\cos\\theta$, and solving for the number of events in the four\nsegments of the Dalitz plot (equivalently the four $\\cos\\theta$ bins) we get,\n\\begin{equation}\n\\frac{N_{I}}{N_{II}} = \\frac{5 + 2 \\epsilon \\left(7 - 6\n\tT_0^{\\textrm{NP}}\\right)}{11 + 2 \\epsilon \\left(1 + 6 T_0^{\\textrm{NP}}\\right)}\n= \\frac{N_{IV}}{N_{III}}.\n\\end{equation}\nIt is easy to see that when $\\epsilon=0$ or $T_0^{\\textrm{NP}}=3\/4$ we get back\nthe SM prediction of Eq.~\\eqref{eq:SM-bins-B2Plnu} as expected.\n\nFinally, we analyze new physics possibilities in the decays belonging to the GS2\ncategory. Due to the very nature of the GS2 decay modes, the following\ndiscussion of NP effects presumes the use of advanced detector technology to obtain\nangular information.\n\n\\subsubsection{New physics effects in the GS2 decay \\texorpdfstring{$B \\to K f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$}{B -> K + f1V + f2X}}\n\nAs mentioned before, the GS2 decay modes are originally part of S3, i.e.\\ it is\nextremely difficult to obtain the angular distribution for these cases unless we\ninnovate on detector technology. 
Here we consider such a decay mode $B \\to K\nf_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$ in which both $f_1$, $f_2$ are neutral\nfermions which have evaded, till now, all our attempts to detect them near their\nplace of origin. With displaced vertex detectors or some other advanced\ndetector, however, we could probably bring at least one of these fermions (say $f_1$)\nunder the purview of experimental study and measure its 4-momentum or angular\ninformation. The missing fermion (which is $f_2$ in our example here) might have\nflown in a direction along which there is no detector coverage. To increase the\nsample size we should also include $B \\to K f_1^{\\textrm{\\ding{55}}} f_2^{\\textrm{\\ding{51}}}$ events,\nprovided we know how to ascertain the particle or anti-particle nature of\n$f_1$ and $f_2$. To illustrate this point, let us consider the possibility $f_1\nf_2 = \\nu_S \\ensuremath{\\overline{\\nu}}_S$. In a displaced vertex detector if we see $\\pi^+ \\mu^-$\nevents, they can be attributed to the decay of $\\nu_S$ and similarly $\\pi^-\n\\mu^+$ events would appear from the decay of $\\ensuremath{\\overline{\\nu}}_S$. In this case, we can\ninfer the angle $\\theta$ by knowing the 4-momentum of either $f_1 = \\nu_S$ or\n$f_2 = \\ensuremath{\\overline{\\nu}}_S$ (see Fig.~\\ref{fig:GJ-frame}). If we find that both $f_1$ and\n$f_2$ leave behind their signature tracks in the detector (i.e.\\\n$f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{51}}}$), it would be the ideal situation. 
But\nas we have already stressed before, measuring the 4-momentum of either of the\nfermions would suffice for our angular studies.\n\nIn the SM the only contribution to $B \\to K f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$\nand $B \\to K f_1^{\\textrm{\\ding{55}}} f_2^{\\textrm{\\ding{51}}}$ would come from $B \\to K\n\\nu_{\\ell} \\ensuremath{\\overline{\\nu}}_{\\ell}$, whereas in the case of NP we have a number of\npossibilities that include sterile neutrinos, dark matter particles, or some\nlong-lived particles in the final state, $f_1 f_2 = \\nu_{\\ell} \\ensuremath{\\overline{\\nu}}_S$, $\\nu_S\n\\ensuremath{\\overline{\\nu}}_{\\ell}$, $\\nu_S \\ensuremath{\\overline{\\nu}}_S$, $f^{\\textrm{DM}} \\bar{f}^{\\textrm{DM}}$,\n$f_1^{\\textrm{DM}} f_2^{\\textrm{DM}}$, $f^{\\textrm{LLP}}\n\\bar{f}^{\\textrm{LLP}}$, $f_1^{\\textrm{LLP}} f_2^{\\textrm{LLP}}$\netc.\\footnote{In addition to the new physics possibilities considered here,\n\tthere can be additional contributions to the $B \\to K + \\text{`invisible'}$\n\tdecay, e.g.\\ from SM singlet scalars contributing to the `invisible' part as\n\tdiscussed in Ref.~\\cite{Kim:2009qc}. As is evident, our analysis is instead\n\tfocused on a pair of fermions contributing to the `invisible' part.} One can\nalso consider non-standard neutrino interactions contributing in these\ncases. To demonstrate our methodology, we shall analyze only a subset of these\nvarious NP possibilities in which $f_1$ and $f_2$ have the same mass, i.e.\\ $m_1\n= m_2 = m$ (say), as this greatly simplifies the calculation. As we shall\nillustrate below, we can not only detect the presence of NP but also ascertain whether\nit is of scalar type or vector type, for example, by analyzing the angular\ndistribution.\n\nBefore we go into new physics contributions, let us analyze the SM contribution\n$B \\to K \\nu_{\\ell} \\ensuremath{\\overline{\\nu}}_{\\ell}$. 
Here only vector and axial-vector currents\ncontribute, and $F_A^{\\pm} = - F_V^{\\pm}$. Also the neutrino and\nanti-neutrino are massless, i.e.\\ $m_1 = 0 = m_2$, which implies $a_t = a_u =\n\\tfrac{1}{2} \\left(m_B^2 + m_K^2 -s\\right)$ and $b = \\tfrac{1}{2}\n\\sqrt{\\lambda\\left(s,m_B^2,m_K^2\\right)}$, where $m_B$ and $m_K$ denote the\nmasses of the $B$ and $K$ mesons respectively. Substituting this information into\nEqs.~\\eqref{eq:SM-C012} and \\eqref{eq:gen-angular-dist} we get,\n\\begin{equation}\n\\frac{d^2\\Gamma^{\\text{SM}}}{ds \\, d\\cos\\theta} = \\frac{b^3\\sqrt{s}}{8 \\, \\pi^3 \\, m_B^2\n\t\\left( m_B^2 - m_K^2 + s \\right)} \\modulus{\\left(F_V^+\\right)_{\\text{SM}}}^2\n\\sin^2\\theta.\n\\end{equation}\nIrrespective of the expression for $\\left(F_V^+\\right)_{\\text{SM}}$, it can be\neasily shown that the normalized angular distribution is given by,\n\\begin{equation}\\label{eq:SM-Dist-B2Knn}\n\\frac{1}{\\Gamma^{\\text{SM}}} \\frac{d\\Gamma^{\\text{SM}}\n}{d\\cos\\theta} = \\frac{3}{4} \\sin^2\\theta,\n\\end{equation}\nwhich implies that $T_0 = 3\/4 = -T_2$, $T_1 = 0$. Following the same logic as\nthe one given after Eq.~\\eqref{eq:SM-Dist-B2Plnu-massless}, we find that the\nnumber of events in the different segments of the Dalitz plot (equivalently the\nnumber of events in the four distinct bins of $\\cos\\theta$) are related to one\nanother by,\n\\begin{equation}\\label{eq:SM-bins}\n\\frac{N_I}{N_{II}} = \\frac{5}{11} = \\frac{N_{IV}}{N_{III}}.\n\\end{equation}\nThis sets the stage for us to explore (i) a scalar type and (ii) a vector type\nof NP possibility, with final fermions for which $m_1 = m_2 = m \\neq 0$. \n\n\\paragraph{\\textbf{Scalar type new physics:}}\n\nOnce again we consider the simplest scalar type NP scenario, with $F_S \\neq 0$,\nand other form factors being zero. 
This leads us to,\n\\begin{align*}\n\tC_0^{\\text{NP}} =& 2 \\left(s - 4m^2\\right) \\modulus{F_S}^2,\\\\%\n\tC_1^{\\text{NP}} =& 0 = C_2^{\\text{NP}}.\n\\end{align*}\nIn other words, there is no angular dependence at all here, i.e.\\\n\\begin{equation}\n\\frac{d^2\\Gamma^{\\text{NP}}}{ds \\, d\\cos\\theta} = \\frac{b\\sqrt{s}}{64 \\, \\pi^3 \\,\n\tm_B^2 \\left(m_B^2 - m_K^2 + s \\right)} \\left(s - 4m^2\\right) \\modulus{F_S}^2,\n\\end{equation}\nwhere $b = \\left(\\sqrt{\\left(s-4m^2\\right)} \\,\n\\sqrt{\\lambda\\left(s,m_B^2,m_K^2\\right)}\\right)\/(2\\sqrt{s})$ and $4m^2 \\leqslant\ns \\leqslant \\left(m_B-m_K\\right)^2$. If we do the integration over $s$, then\nthe normalized angular distribution for the NP contribution alone is given by,\n\\begin{equation*}\n\\frac{1}{\\Gamma^{\\text{NP}}} \\frac{d\\Gamma^{\\text{NP}}}{d\\cos\\theta} =\n\\frac{1}{2}.\n\\end{equation*}\nAssuming such a NP contribution in addition to the SM, the experimentally\nobserved angular distribution can be written as,\n\\begin{equation*}\n\\frac{d\\Gamma}{d\\cos\\theta} = \\Gamma^{\\text{SM}} \\left(\\frac{3}{4} \\sin^2\\theta + \\frac{1}{2} \\epsilon_0 \\right),\n\\end{equation*}\nwhere $\\epsilon_0 = \\Gamma^{\\text{NP}}\/\\Gamma^{\\text{SM}}$ is the new physics\nparameter which can vary in the range $\\left[0,1\\right]$ if we allow the NP\ncontribution to be as large as that from the SM. Doing integration over\n$\\cos\\theta$ we get, $\\Gamma = \\Gamma^{\\text{SM}} \\left(1+\\epsilon_0\\right) =\n\\Gamma^{\\text{SM}} + \\Gamma^{\\text{NP}}$. This implies\n\\begin{equation}\\label{eq:Scalar-NP-Dist-B2Knn}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\frac{3\\sin^2\\theta + 2\n\t\\epsilon_0}{4 \\left(1+\\epsilon_0\\right)}.\n\\end{equation}\nSince Eq.~\\eqref{eq:Scalar-NP-Dist-B2Knn} is identical to\nEq.~\\eqref{eq:Scalar-NP-Dist-B2Plnu}, the angular distribution for this case is\nalso as shown in Fig.~\\ref{fig:Scalar-NP-B2Plnu} where we have varied\n$\\epsilon_0$ in the range $[0,1]$. 
Once again at two specific values of\n$\\cos\\theta$, namely $\\cos\\theta = \\pm 1\/\\sqrt{3} \\approx \\pm 0.57735$\ncorresponding to the angle $\\theta \\approx 54.74^{\\circ}$, there is no\ndifference between the standard model prediction alone and the combination of\nstandard model and scalar new physics contribution. At these two points in\n$\\cos\\theta$, the normalized uni-angular distribution always has the value\n$0.5$, even if there is some scalar new physics contributing to our process\nunder consideration.\n\nSince the angular distribution as shown in Eq.~\\eqref{eq:Scalar-NP-Dist-B2Knn}\nis fully symmetric under $\\cos\\theta \\leftrightarrow -\\cos\\theta$, the number of\nevents in the four segments of the Dalitz plot (equivalently in the four\n$\\cos\\theta$ bins) satisfy the following relationship,\n\\begin{equation}\\label{eq:Scalar-NP-bins}\n\\frac{N_I}{N_{II}} = \\frac{5+8\\epsilon_0}{11 + 8\\epsilon_0} =\n\\frac{N_{IV}}{N_{III}}.\n\\end{equation}\nIt is easy to see that $\\epsilon_0=0$ gives the SM prediction of\nEq.~\\eqref{eq:SM-bins} as expected.\n\n\\paragraph{\\textbf{Vector type new physics:}}\n\nLet us now discuss another new physics scenario, such as the case of a\nflavor-changing $Z'$ or a dark photon $\\gamma_D$ giving rise to the final pair\nof fermions $f_1 f_2$. We assume that for this kind of new physics scenario,\n$F_V^+ = F_V^{\\text{NP}} \\neq 0$ and other form factors are zero. For this kind\nof new physics we get,\n\\begin{align*}\nC_0^{\\text{NP}} =& 2 \\modulus{F_V^{\\text{NP}}}^2\n\\lambda\\left(s,m_B^2,m_K^2\\right),\\\\%\nC_1^{\\text{NP}} =& 0,\\\\%\nC_2^{\\text{NP}} =& -8 b^2 \\modulus{F_V^{\\text{NP}}}^2,\n\\end{align*}\nwhere $b = \\left(\\sqrt{\\left(s-4m^2\\right)} \\,\n\\sqrt{\\lambda\\left(s,m_B^2,m_K^2\\right)}\\right)\/\\left(2\\sqrt{s}\\right)$ and\n$4m^2 \\leqslant s \\leqslant \\left(m_B-m_K\\right)^2$. 
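One can verify numerically that these coefficients combine as $C_0^{\text{NP}} + C_2^{\text{NP}} \cos^2\theta = 2\modulus{F_V^{\text{NP}}}^2 \left(\lambda/s\right)\left(s\sin^2\theta + 4m^2\cos^2\theta\right)$, which is the structure used in the next step. The sketch below does so with illustrative mass values (our inputs), setting $F_V^{\text{NP}} = 1$ since it factors out of the angular shape:

```python
import math

def lam(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * (a * b + b * c + c * a)

# Illustrative values (GeV^2 units): B and K masses squared, a common
# final-fermion mass m = 0.5 GeV, and s inside [4m^2, (m_B - m_K)^2]
mB2, mK2, m2, s = 5.279**2, 0.4937**2, 0.5**2, 9.0
L = lam(s, mB2, mK2)
b = math.sqrt(s - 4 * m2) * math.sqrt(L) / (2 * math.sqrt(s))
C0 = 2 * L            # C_0^NP / |F_V^NP|^2
C2 = -8 * b * b       # C_2^NP / |F_V^NP|^2

for c in (-0.9, -0.3, 0.0, 0.4, 1.0):   # c = cos(theta)
    lhs = C0 + C2 * c * c
    rhs = 2 * (L / s) * (s * (1 - c * c) + 4 * m2 * c * c)
    assert abs(lhs - rhs) < 1e-6 * abs(lhs)
```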
The angular distribution\nfor the NP-alone contribution can, therefore, be written in terms of\n$T_0^{\\textrm{NP}}$ and $T_2^{\\textrm{NP}}$ which are directly proportional to\n$C_0^{\\text{NP}}$ and $C_2^{\\text{NP}}$ respectively. This would lead us to\ndescribe the complete angular distribution in terms of $T_0^{\\text{NP}}$ and\n$\\epsilon=\\Gamma^{\\textrm{NP}}\/\\Gamma^{\\textrm{SM}}$ using\nEq.~\\eqref{eq:Tensor-NP-Dist-B2Plnu} and the angular distribution would look\nlike the one shown in Fig.~\\ref{fig:Tensor-NP}. However, it is possible to\ndescribe the effects of NP on the angular distribution using a different set of\nparameters as well. For this we start afresh with the angular distribution for\nthe NP contribution alone, which in our case is given by\n\\begin{equation*}\n\\frac{d^2\\Gamma^{\\text{NP}}}{ds \\, d\\cos\\theta} = \\frac{b \\modulus{F_V^{\\text{NP}}}^2 \\lambda\\left(s,m_B^2,m_K^2\\right) \\, \\left( s \\sin^2\\theta + 4m^2 \\cos^2\\theta \\right)}{64 \\, \\pi^3 \\, m_B^2 \\left(m_B^2 - m_K^2\n\t+ s \\right) \\sqrt{s}}.\n\\end{equation*}\nDoing integration over $\\cos\\theta$ we obtain,\n\\begin{equation*}\n\\frac{d\\Gamma^{\\text{NP}}}{ds} = \\frac{b \\modulus{F_V^{\\text{NP}}}^2\n\t\\lambda\\left(s,m_B^2,m_K^2\\right)}{64 \\, \\pi^3 \\, m_B^2 \\left(m_B^2 - m_K^2 + s\n\t\\right) \\sqrt{s}} \\left( \\frac{4s + 8m^2}{3} \\right).\n\\end{equation*}\nTherefore, the normalized uni-angular distribution is given by\n\\begin{equation}\\label{eq:Vector-NP-Dist-s-B2Knn}\n\\frac{1}{d\\Gamma^{\\text{NP}}\/ds} \\frac{d^2\\Gamma^{\\text{NP}}}{ds \\, d\\cos\\theta} = \\frac{3}{4} \\left(\\frac{s \\sin^2\\theta + 4m^2 \\cos^2\\theta}{s + 2m^2}\\right).\n\\end{equation}\nIt is interesting to compare this with the standard model expression,\n\\begin{equation}\\label{eq:SM-Dist-s-B2Knn}\n\\frac{1}{d\\Gamma^{\\text{SM}}\/ds} \\frac{d^2\\Gamma^{\\text{SM}}}{ds \\, d\\cos\\theta} = \\frac{3}{4} 
\\sin^2\\theta.\n\\end{equation}\n\n\\begin{figure*}[hbtp]\n\\centering%\n\\includegraphics[scale=0.8]{fig_B2Knunu-Vector-NP.pdf}%\n\\caption{Normalized uni-angular distribution showing the effect of a vector new\n\tphysics contribution to $B \\to K f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$. }%\n\\label{fig:Vector-NP}\n\\end{figure*}\n\nSince the range for $s$ is different in the SM and the NP scenarios, we can not\nadd Eqs.~\\eqref{eq:Vector-NP-Dist-s-B2Knn} and \\eqref{eq:SM-Dist-s-B2Knn}\ndirectly. Carrying out the integration over $s$ we get,\n\\begin{equation*}\n\\frac{d\\Gamma^{\\text{NP}}}{d\\cos\\theta} = \\frac{3}{4} \\Big( \\mathcal{S}\n\\sin^2\\theta + \\mathcal{C} \\cos^2\\theta \\Big),\n\\end{equation*}\nwhere\n\\begin{align*}\n\\mathcal{S} &= \\int_{4m^2}^{(m_B-m_K)^2} \\frac{d\\Gamma^{\\text{NP}}}{ds}\n\\left(\\frac{s}{s+2m^2}\\right) ds,\\\\%\n\\mathcal{C} &= \\int_{4m^2}^{(m_B-m_K)^2} \\frac{d\\Gamma^{\\text{NP}}}{ds}\n\\left(\\frac{4m^2}{s+2m^2}\\right) ds.\n\\end{align*}\nDoing integration over $\\cos\\theta$ we get,\n\\begin{equation*}\n\\Gamma^{\\text{NP}} = \\mathcal{S} + \\mathcal{C}\/2,\n\\end{equation*}\nand hence\n\\begin{equation*}\n\\frac{1}{\\Gamma^{\\text{NP}}} \\frac{d\\Gamma^{\\text{NP}}}{d\\cos\\theta} = \\frac{3\n\t\\left(\\mathcal{S} \\sin^2\\theta + \\mathcal{C} \\cos^2\\theta\\right)}{2\n\t(2\\mathcal{S} + \\mathcal{C})}.\n\\end{equation*}\nFor the SM contribution we know that\n\\begin{equation*}\n\\frac{1}{\\Gamma^{\\text{SM}}} \\frac{d\\Gamma^{\\text{SM}}}{d\\cos\\theta} = \\frac{3}{4} \\sin^2\\theta.\n\\end{equation*}\nNow the uni-angular distribution for the process $B \\to K f_1^{\\textrm{\\ding{51}}}\nf_2^{\\textrm{\\ding{55}}}$ is given by,\n\\begin{equation*}\n\\frac{d\\Gamma}{d\\cos\\theta} =\\frac{3}{4} \\Gamma^{\\text{SM}} \\left( \\left(1 +\\epsilon_s \\right) \\sin^2\\theta + \\epsilon_c \\cos^2\\theta \\right),\n\\end{equation*}\nwhere $\\epsilon_s = \\mathcal{S}\/\\Gamma^{\\text{SM}}$ and $\\epsilon_c 
=\n\\mathcal{C}\/\\Gamma^{\\text{SM}}$, are the two parameters which describe the\neffect of vector type NP. It is easy to check that,\n\\begin{equation*}\n\\Gamma = \\frac{3}{4} \\Gamma^{\\text{SM}} \\left( \\frac{4}{3} \\left(1+\\epsilon_s\\right) + \\frac{2\\epsilon_c}{3} \\right) = \\Gamma^{\\text{SM}} + \\Gamma^{\\text{NP}}.\n\\end{equation*}\nTherefore, the normalized angular distribution is given by,\n\\begin{equation}\\label{eq:Vector-NP-Dist-B2Knn}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\frac{3 \\left(1 + \\epsilon_s\\right) \\sin^2\\theta + 3\\epsilon_c \\cos^2\\theta}{4 \\left(1+\\epsilon_s\\right) + 2 \\epsilon_c}.\n\\end{equation}\nIt is important to note that, if we consider the mass of the fermion $f$ to be\nzero, i.e.\\ $m=0$, then $\\epsilon_c = 0$, since $\\mathcal{C} =0$. In such a case\nthe uni-angular distribution is given by,\n\\begin{equation*}\n\\frac{1}{\\Gamma} \\frac{d\\Gamma}{d\\cos\\theta} = \\frac{3}{4} \\sin^2\\theta, \\qquad \\left(\\text{here } \\epsilon_c=0\\right)\n\\end{equation*}\nwhich is same as that of the SM case. This is plausible, as in the SM case also\none has $m=0$ for the neutrino mass and only vector and axial-vector currents\ncontribute.\n\nAssuming that the NP contribution can be smaller than or as large as the SM\ncontribution, i.e.\\ $0 \\leqslant \\Gamma^{\\text{NP}} \\leqslant\n\\Gamma^{\\text{SM}}$, we get\n\\begin{equation*}\n0 \\leqslant \\epsilon_s + \\epsilon_c\/2 \\leqslant 1.\n\\end{equation*}\nThus $0 \\leqslant \\epsilon_s \\leqslant 1$ implies that $0 \\leqslant \\epsilon_c\n\\leqslant 2(1-\\epsilon_s)$.\n\nIn Fig.~\\ref{fig:Vector-NP} we have considered nine values of $\\epsilon_s$ and\nvaried $\\epsilon_c$ in the range $[0,2\\left(1-\\epsilon_s\\right)]$, to obtain the\nuni-angular distribution. It is clearly evident in Fig.~\\ref{fig:Vector-NP} that\n$\\epsilon_c=0$ case is always indistinguishable from the SM case, as it should\nbe. 
Just like the scalar-type new physics case, we observe that at $\\cos\\theta =\n\\pm 1\/\\sqrt{3} \\approx \\pm 0.57735$, there is no difference between the SM\nprediction alone and the combination of SM and NP contributions.\n\nIt is also easy to notice that the angular distribution as given in\nEq.~\\eqref{eq:Vector-NP-Dist-B2Knn} is symmetric under $\\cos\\theta\n\\leftrightarrow -\\cos\\theta$, and solving for the number of events in the four\nsegments of the Dalitz plot (equivalently the four $\\cos\\theta$ bins) we get,\n\\begin{equation}\n\\frac{N_{I}}{N_{II}} = \\frac{5 \\left(1+\\epsilon_s\\right) +\n\t7\\epsilon_c}{11\\left(1+\\epsilon_s\\right) + \\epsilon_c} = \\frac{N_{IV}}{N_{III}}.\n\\end{equation}\nIt is easy to see that when $\\epsilon_c = 0 = \\epsilon_s$ we get back the SM\nprediction of Eq.~\\eqref{eq:SM-bins} as expected.\n\n\\subsection{Discussion}\n\nIt should be noted that our discussions on the types of NP contributions to the\nS2 and GS2 modes, specifically $B \\to P \\ell^- f^{\\textrm{\\ding{55}}}$ and $B \\to K\nf_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$ respectively, have been fully general. There\nare no complications arising from hadronic form factors since we have\nconsidered the normalized angular distribution. Our analysis\nalso does not depend on how large or small the masses of the fermions\n$f,f_{1,2}$ are, as long as they are non-zero.\n\nIt is also very interesting to note that both the scalar and tensor types of NP\nfor the $B \\to P \\ell^- f^{\\textrm{\\ding{55}}}$ decays and both the scalar and vector\ntypes of NP for the $B \\to K f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$ decays exhibit\nsimilar behaviour at $\\cos\\theta = \\pm 1\/\\sqrt{3}$. To understand the real\nreason behind this, we must perform a very general analysis. 
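The bin ratio above can be checked by direct integration; the labelling of bins I--IV as $\cos\theta \in [1/2,1]$, $[0,1/2]$, $[-1/2,0]$, $[-1,-1/2]$ is our assumption (the $\cos\theta \to -\cos\theta$ symmetry makes $N_{IV}/N_{III}$ automatic), as is the pinch of the distribution to the SM value $1/2$ at $\cos\theta = \pm 1/\sqrt{3}$. A short sketch:

```python
import math

def bin_count(a, b, eps_s, eps_c):
    """Integral over [a, b] of the unnormalized numerator
    3[(1 + eps_s) sin^2(theta) + eps_c cos^2(theta)]."""
    F = lambda c: 3.0 * (1.0 + eps_s) * (c - c**3 / 3.0) + eps_c * c**3
    return F(b) - F(a)

def ratio(eps_s, eps_c):
    # bins I and II taken as cos(theta) in [1/2, 1] and [0, 1/2] (assumed)
    return bin_count(0.5, 1.0, eps_s, eps_c) / bin_count(0.0, 0.5, eps_s, eps_c)

for eps_s in (0.0, 0.4, 1.0):
    for eps_c in (0.0, 0.3, 2.0 * (1.0 - eps_s)):
        closed = (5.0 * (1.0 + eps_s) + 7.0 * eps_c) \
               / (11.0 * (1.0 + eps_s) + eps_c)
        assert abs(ratio(eps_s, eps_c) - closed) < 1e-12

# the NP distribution equals the SM value 1/2 at cos(theta) = +-1/sqrt(3)
c0 = 1.0 / math.sqrt(3.0)
for eps_s, eps_c in ((0.0, 0.0), (0.5, 0.2), (0.9, 0.1)):
    num = 3.0 * (1.0 + eps_s) * (1.0 - c0**2) + 3.0 * eps_c * c0**2
    assert abs(num / (4.0 * (1.0 + eps_s) + 2.0 * eps_c) - 0.5) < 1e-12
```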
Let us assume that the\nmost general angular distribution for the processes $B \\to P \\ell^-\nf^{\\textrm{\\ding{55}}}$ and $B \\to K f_1^{\\textrm{\\ding{51}}} f_2^{\\textrm{\\ding{55}}}$ is given by\nEq.~\\eqref{eq:Gen-ang-dist}. If we now equate this distribution to the SM\nprediction of Eq.~\\eqref{eq:SM-Dist-B2Plnu-massless} or\nEq.~\\eqref{eq:SM-Dist-B2Knn}, and solve for $\\cos\\theta$ after substituting\nEq.~\\eqref{eq:Def-T012} we find that,\n\\begin{equation}\\label{eq:costheta-gen-sol}\n\\cos\\theta = \\frac{-c_1 \\pm \\sqrt{c_1^2 + 3\n\t\t\\left(c_0+c_2\\right)^2}}{3\\left(c_0+c_2\\right)},\n\\end{equation}\nwhere the $c_j$'s (for $j=0,1,2$) are obtained from Eq.~\\eqref{eq:cj} with\nappropriate substitutions of masses and form factors. Thus\nEq.~\\eqref{eq:costheta-gen-sol} is the most general solution that we can get for\nthe two specific values of $\\cos\\theta$. However, let us look at the specific\ncase when $c_1=0$. Only in this situation do we get\n\\begin{equation}\n\\cos\\theta = \\pm 1\/\\sqrt{3}.\n\\end{equation}\nNow it is clear that since, in both the scalar and tensor type of NP\nconsiderations for the $B \\to P \\ell^- f^{\\textrm{\\ding{55}}}$ decays and in both the\nscalar and vector types of NP considerations for the $B \\to K f_1^{\\textrm{\\ding{51}}}\nf_2^{\\textrm{\\ding{55}}}$ decays, the angular distribution did not have any term\ndirectly proportional to $\\cos\\theta$ (i.e.\\ $c_1=0$), we obtained the same\n$\\cos\\theta = \\pm 1\/\\sqrt{3}$ result in both the cases. 
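Eq.~\eqref{eq:costheta-gen-sol} is the quadratic-formula solution of $\tfrac{3}{2}(c_0+c_2)\cos^2\theta + c_1\cos\theta - \tfrac{1}{2}(c_0+c_2) = 0$; this intermediate quadratic is our reconstruction from the quoted roots, not spelled out in the text. A quick numerical sanity check, including the $c_1 = 0$ collapse to $\pm 1/\sqrt{3}$:

```python
import math

def roots(c0, c1, c2):
    """The two values of cos(theta) given by Eq. (costheta-gen-sol)."""
    s = c0 + c2
    d = math.sqrt(c1 * c1 + 3.0 * s * s)
    return ((-c1 + d) / (3.0 * s), (-c1 - d) / (3.0 * s))

def quad(x, c0, c1, c2):
    # reconstructed quadratic: (3/2)(c0 + c2) x^2 + c1 x - (c0 + c2)/2
    s = c0 + c2
    return 1.5 * s * x * x + c1 * x - 0.5 * s

for c0, c1, c2 in ((1.0, 0.0, 0.5), (2.0, -1.3, 0.7), (0.4, 2.2, 1.1)):
    for x in roots(c0, c1, c2):
        assert abs(quad(x, c0, c1, c2)) < 1e-12

# c1 = 0 forces cos(theta) = +-1/sqrt(3), whatever c0 and c2 are
refs = (1.0 / math.sqrt(3.0), -1.0 / math.sqrt(3.0))
for x, ref in zip(roots(5.0, 0.0, 2.0), refs):
    assert abs(x - ref) < 1e-12
```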
Therefore, if the\nobserved normalized uni-angular distribution does not have the value $0.5$ at\n$\\cos\\theta = \\pm 1\/\\sqrt{3}$, it implies that $c_1 \\neq 0$.\n\nAnother interesting aspect of the two specific NP contributions we have\nconsidered is that from Figs.~\\ref{fig:Scalar-NP-B2Plnu}, \\ref{fig:Tensor-NP}\nand \\ref{fig:Vector-NP} one can clearly see that the vector and tensor types of\nNP can accommodate a much larger variation in the angular distribution than the\nscalar type NP. However, there is also a certain part of the angular\ndistribution for which both scalar and vector (or tensor) types of NP give\nidentical results. This happens when\n\\begin{equation}\n\\epsilon_0 = \\frac{3\\epsilon_c}{2\\left(1 + \\epsilon_s - \\epsilon_c\\right)} = \\frac{\\epsilon \\left( 3 - 4T_0^{\\textrm{NP}} \\right)}{1 - \\epsilon \\left( 2 - 4T_0^{\\textrm{NP}} \\right)}.\n\\end{equation}\nIn order for $\\epsilon_0$ to vary in the range $[0,1]$ we find that (i) for\n$0\\leqslant \\epsilon_s \\leqslant 1$ we have $0 \\leqslant \\epsilon_c \\leqslant\n2\\left(1+\\epsilon_s\\right)\/5$ and (ii) for $0 \\leqslant \\epsilon \\leqslant 1$ we\nhave $\\frac{1}{2} \\leqslant T_0^{\\textrm{NP}} \\leqslant \\frac{3}{4}$. In these\nspecific regions, therefore, it would not be possible to clearly distinguish\nwhether scalar, vector, or tensor type NP is contributing to our process under\nconsideration. Nevertheless, our approach can be used to constrain these NP\nhypotheses without any hadronic uncertainties.\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nWe have shown that all NP contributions to three-body semi-hadronic decays of\nthe type $P_i \\to P_f f_1 f_2$, where $P_{i(f)}$ denotes an appropriate initial\n(final) pseudo-scalar meson and $f_{1,2}$ are a pair of fermions, can be\ncodified into the most general Lagrangian which gives rise to a very general\nangular distribution. 
The relevant NP information can be obtained by using\nvarious angular asymmetries, provided at least one of the final pair of fermions\nhas some detectable signature, such as a displaced vertex, in the detector.\nDepending on the detection feasibility of the final fermions we have grouped the\n$P_i \\to P_f f_1 f_2$ decays into three distinct categories: (i) S1 where both\n$f_1$ and $f_2$ are detected, (ii) S2 where either $f_1$ or $f_2$ gets detected,\nand (ii) S3 where neither $f_1$ nor $f_2$ gets detected. We consider the\npossibility that with advancement in detector technology S3 decays could, in\nfuture, be grouped under S2 category. We analyze some specific NP scenarios in\neach of these categories to illustrate how NP affects the angular distribution.\nSpecifically we have analyzed (a) lepton-flavor violating S1 decay $B \\to P\n\\ell^- \\ell'^+$ (with $P = \\pi, K, D$ and $\\ell,\\ell'=e,\\mu,\\tau$) showing\nangular signatures of all generic NP possibilities, (b) S2 decays of the type $B\n\\to P \\ell^- f$ (where $f$ is not detected in the laboratory) showing the effect\nof a scalar type and a tensor type NP on the angular distribution, and finally\n(c) S3 decays (more correctly generalized S2 decays) of the type $B \\to K f\n\\bar{f}$ (where either $f$ or $\\bar{f}$ gets detected in an advanced detector)\nshowing the effects of a scalar type and a vector type NP on the angular\ndistribution. The effects on the angular distribution can be easily estimated\nfrom Dalitz plot asymmetries. The signatures of NP in angular distribution are\ndistinct once the process is chosen carefully. Moreover, as shown in our\nexamples it can be possible to do the identification and quantification of NP\neffects without worrying about hadronic uncertainties. 
We are optimistic that\nour methodology can be put to use in LHCb, Belle II in the study of appropriate\n$B$ meson decays furthering our search for NP.\n\n\n\n\\acknowledgments \n\nThis work was supported in part by the National Research Foundation of Korea\n(NRF) grant funded by the Korean government (MSIP) (No.2016R1A2B2016112) and\n(NRF-2018R1A4A1025334). This work of D.S. was also supported (in part) by the\nYonsei University Research Fund (Post Doc. Researcher Supporting Program) of\n2018 (project no.: 2018-12-0145).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcgdw b/data_all_eng_slimpj/shuffled/split2/finalzzcgdw new file mode 100644 index 0000000000000000000000000000000000000000..0d67e68a73bcc508846a8293360af7c4bfd6b10a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcgdw @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn this somewhat long introduction,we start from general analogies between the theory of Artin Presentations (AP Theory) with modern physics, before giving more specific ones related to the actual Clay YM problem,\\cite {JW}, \\cite{D}, \\cite{V}, \\cite{F},in section 3. 
\n\nIn the following $X^4$ will denote a connected,compact,smooth,simply-connected 4-manifold with a connected boundary,$\\partial X^4$; if $\\partial X^4$ = $S^3$,we will also denote the so determined closed manifold by $X^4$.Although all the $X^4$ we discuss are so-called '2-handle bodies' ,(see, e.g., \\cite{GS})\nthey form a very large class of smooth 4-manifolds: every connected, closed,orientable 3-manifold \nappears as a $\\partial X^4$ and at least every complex elliptic surface (e.g.the Kummer surface) is \npart of this class, \\cite{CW}.(It is unknown whether every {\\it closed}, smooth,simply-connected 4-manifold can be so obtained, although it seems very likely that this is so).\\cite{GS},p.344.\n\nIn AP Theory (see ahead) such a $X^4$ is already determined by a certain type of (group) presentation, an Artin Presentation, $r$, on $n$ generators with $n$ relations, of the fundamental group of its boundary,which we denote by $\\pi(r)$ and \nwe write, $X^4$ = $W^4(r)$.\n\nEven at this early stage,this relation of pure discrete group theory with 4D smooth structures has physical relevance:\n\nIt is the most radical,mathematically rigorous,universal form of $3+1$ holography:\nDiscrete Group theory on the boundary of $X^4$ determines the whole smooth 4-manifold $X^4$,up to diffeomorphism.\n\nFor the meaning and importance in physics of holography ('t Hooft, Susskind, Maldacena,\\dots,) see \\cite{Bo}.\n\nThe discrete,purely group-theoretic Artin presentation $r$ is a {\\it hologram} of the {\\it smooth} 4D spacetime universe $W^4(r)$.Compare to \\cite{M2},p.63.\n\n\nThis is the more far reaching AP-analogue of Witten's most general version of holography in \nmodern physics:\n\n{\\it \"Gravity on the bulk is built from Gauge Theory on the boundary\"},see p.13\nof \\cite{Wi1}.\n\n{\\it This analogy of the rigorous, radical, universal AP-holography with,e.g.the more heuristic, restrictive, AdS\/CFT holography,(\\cite{M1},\\cite{M2}, \\cite{Re},\\cite{Wi2}, 
\\cite{SW}, \\cite{S1},\\dots), is the prototype of the analogies exhibited and studied in this paper.}\n\nWe can say: the compact,smooth $3+1$ spacetime universe $W^4(r)$ {\\it emerges} from the discrete group-theoretic 'vacuum', via the purely group theoretic Artin presentation $r$; in other words,4D gravity,in its most abstract Einsteinian sense,emerges purely group-theoretically,{\\it not} metrically,and it does so in a non-infinitesimal,non-local manner.Compare to \\cite{AJL},\\cite {Lo},\\cite{Se},\\cite{Wi1},p.5,\\cite{Sm}, \\cite{M4}.\n\nThe 4D smooth {\\it 'continuum picture arises from this fundamental discreteness'}, \\cite {As}, p.174.\n\nThis {\\it 'immense gauge symmetry'} (\\cite{G},p.A98) already makes AP Theory a candidate for\n a topological {\\it 'quantum gauge theory in four dimensional space-time'}, \\cite{JW},p.3, although it is, apriori, 'exterior',not confined to the Differential Geometry of a single fixed manifold,in the same sense\nthat Cobordism Theory is not so confined. In AP Theory,gauge theory has exterior,autonomous, {\\it intrinsic}, absolute meaning,not related to any particular Lie group ,just as Cobordism Theory has exterior,intrinsic,absolute homological meaning.We note here that,nevertheless, cobordism theory has led to the solutions of important 'internal' problems on a given fixed manifold (Hirzebruch formula,Atiyah-Singer theorems,\nSteenrod's conjecture,etc. \\cite {A1}, \\cite{Su}).\n\nAs has been pointed out,even the more classical AdS\/CFT holographic principle {\\it 'calls into question not only the status of field theory but the\nvery notion of locality'}, \\cite{Bo},p.2, \\cite{Wi3},p.1579, \\cite{S1},p.42, \\cite{S2}, p.10. Thus infinitesimal, analytic, smooth continuum using differential geometric methods are not, apriori, used in AP Theory, {\\it although} the mathematical results of such classic analytic gauge theoretic methods,e.g. 
Donaldson\/Seiberg-Witten Theory,can still lead to purely group-theoretic analogs in AP theory, {\\it despite} the absence of moduli spaces in AP Theory. See \\cite {W1}, p.240, \\cite{R}, p.621 and section 2 ahead.\n\nIn other words,the non-local radical AP-holography forces the substitution of Field Theories by discrete group theory. We do not consider this a bad thing, as, e.g. the ultraviolet catastrophe in QFT:to AP discrete group theory it does not matter 'whether a field at a point is not well-defined' and has to be remedied 'by smearing over test functions to tame the UV divergences'....On the contrary, AP theory creates our intrinsic gauge theory,\nwithout obtaining the gauge theory analytically from vector bundles\/fields,connections,on a particular given manifold. In the 4D AP Theory no field redefinition or renormalization is needed.\n(compare to \\cite{Wi2},p.2). Furthermore, our non-local construction of 4D smooth structures is still so subtle and metamathematically \"local\" so to speak, that, as mentioned above, an analogue of Donaldson\/Seiberg-Witten Theory is still obtained. This AP-theoretic non-locality is evidently the conceptually \nsimplest way to obtain universal, {\\it intrinsic quantum} gauge theory, not based on a particular Lie group. Thus AP Theory takes the place of {\\it \"the large N limit of YM theory\"}, \\cite{M3}, \\cite{M4}.\n\n{\\it In AP Theory the UV problem and other quantum field-theoretic problems are bypassed rigorously with discrete group theory}. (Compare to \\cite{Fe}, \\cite{P}).\n\nIn a more abstract vein, our metamathematical maximalization of group theory, which does not use infinite-dimensional spaces (i.e. does not rely on SUSY), but which nevertheless still contains a very general, at each stage, {\\it infinitely generated} graded group of topology-changing transitions (the\nAP-analogue of Morse Theory) relates the symmetries of our 'particles',i.e. 
the discrete Artin Presentations $r$, with the exterior symmetries of the smooth 4D spacetime universes $W^4(r)$.\nThis is an analogue of the much-desired property of unifying the symmetries of the actual Standard Model with those of classical General Relativity. For a promising relation between braids and the Standard Model, see \\cite{BT}.\n\nOne can say, in AP Theory, SUSY is already 'broken' metamathematically by its canonical, natural grading by the positive \nintegers; SUSY's reliance on an infinite number of dimensions is substituted by the infinite number of generators of our group of topology-changing transitions, whose existence is required for any sensible Quantum Gravity theory, see \\cite{Wi2}, p.4. There is no incompatibility problem between SUSY and the gauge transitions and other symmetries of AP Theory (\\cite{Wi5}, p.5, \\cite{Wi6}, p.362, \\cite{Wi18}). The 'vacuum' of AP Theory, i.e.\\ the purely discrete part, gets enough 'rigidity' from its own graded group theory. A priori, SUSY is merely the stabilization theory of AP Theory, when $n$ approaches infinity. \n\nThese topology-changing transitions, which we call Torelli transitions (and which, for each $n$, form a group isomorphic to the commutator subgroup of the pure braid group on $n$ strands, compare to \\cite{N}, p.43), are more universal, versatile, and sharper than those caused by any classical Morse Theory or $3+1$ TQFT, \\cite{A2}, in the sense that, e.g.\\ they can also just leave the underlying {\\it topological} structure of the manifold $X^4$ invariant and just change the smooth structure, or in fact also leave it fixed (see section 2 ahead). 
It is pure discrete group theory that, in a general, systematic, exterior manner (thus avoiding symmetry-destroying 'skein' and\/or ad hoc 'by hand' internal surgery methods), generates new smooth 4D structures, i.e.\\ structures that are at the foundations of modern 4D gravitational physics, \\cite{C1}, \\cite{C2}.\n\nIn order to further explain and augment the above, it is instructive to analyze more explicitly how the {\\it smooth} 4D manifold $W^4(r)$ is obtained holographically in a non-infinitesimal, non-local manner from the {\\it purely discrete} Artin presentation $r$ of the\nfundamental group of its boundary.\n\nLet $\\Omega_n$ denote the compact 2-disk with $n$ holes in the plane and $\\partial \\Omega_n$ its boundary. To obtain $W^4(r)$ from $r$, proceed as follows: an Artin presentation $r$ on $n$ generators defines a unique, {\\it framed}, pure braid on $n$ strands (and conversely). This framing (i.e.\\ an assignment of an integer to each strand, sometimes also called a 'coloring' of the braid) is not obtained 'by hand', but is obtained canonically by representing the pure, framed braid uniquely by an Artin presentation $r$, which then defines a smooth diffeomorphism $h(r):\\Omega_n\\to \\Omega_n$,\nwhich restricts to the identity on $\\partial \\Omega_n$. With $h(r)$, the smooth 4-manifold $W^4(r)$ is constructed by means of a relative open book construction (see \\cite{W1}, p.250, \\cite{CW}, \\cite{C1}, \\cite{C2}, and references therein for the rigorous technical details). 
This is a construction which is structurally similar to the fundamental Lefschetz Hyperplane Theorem for non-singular complex algebraic varieties and goes beyond the mere surgery prescriptions of the Kirby Calculus, see section 2 ahead.\n\n{\\it The diffeomorphism $h(r):\\Omega_n\\to \\Omega_n$ is determined by $r$ only up to isotopies keeping it fixed as the identity on the boundary $\\partial \\Omega_n$.}\n\nThus (the still well-defined) smooth 4D diffeomorphism class of the 4-manifold $W^4(r)$ is obtained in a non-infinitesimal,non-local manner from the 2D diffeomorphism $h(r)$, which up to isotopy is determined by the discrete Artin presentation $r$.\n\nThis is a {\\it sporadic} 4D smooth analogue, in a metamathematical sense, to the celebrated Hilbert Vth Problem, which cleared up Lie Group Theory's 'infinitesimal mess' (\\cite{H}) by showing that the existence of the {\\it smooth} structure of a Lie group need not be postulated a priori, since its existence already follows from non-infinitesimal, non-analytic concepts.\n\n{\\it This is the fundamental smooth 4D topological construction of AP Theory.}\n\nIt is indeed universal $3+1$ {\\it'gravitational holography'},very different from the very restricted field theoretic version of\nMaldacena ,Rehren,et al. and is definitely not {\\it 'a metaphoric illusion'}. 
(\\cite{S2},p.9).\n\nAs far as gauge theory is concerned,it is in the reverse order of the dimensional reduction of \\cite{Be},which is a cornerstone of so-called Geometric Langlands Theory,\\cite{GW}, \\cite{W4}.\n\nIt is natural to conjecture that,just as in the mathematically analogous case of Lie Groups,our\nsporadic holographic 4D gravitational Hilbert Vth-like construction,where 4D smooth structures emerge from pure discrete group theory,will help clear up the current conceptual mess in Quantum Gravity,(see \\cite{Sm}, \\cite{N}) beyond the results of this paper.\n\n\nIn order to place our main section 3 ahead in its proper metamathematical setting,we point out some more similarities of our fundamental construction to more heuristic ideas and concepts of modern physics in the literature,(\\cite{Wi7}, \\cite{Wi8}, \\cite{A2}, \\cite{G1}, \\cite{G2}, \\cite{GW},\\cite{Wi9},\\dots )\n\ni) In the above construction of the 4D smooth manifold $W^4(r)$ from the 2D diffeomorphism $h(r):\\Omega_n\\to \\Omega_n$,the boundary components of $\\partial \\Omega_n$ define $n+1$ knots in the 3D boundary of $W^4(r)$ on which the,in general topology-changing and knot-changing, Torelli transitions\/interactions act.\\cite{W1},p.226.\n\nThis is a more direct, non-categorical analogue of {\\it 'surface operators...realizing knot invariants'} of the 4D topological gauge theories of Gukov,Witten, et al., \\cite{G1}, \\cite{GW}, \\cite{Wi9}, p.5, which lead to Geometric Langlands Theory. 
The following quote of p.9 of \\cite{G1} also seems relevant here:{\\it \"..every topological gauge theory,which admits surface operators is,in a sense,a factory that produces examples of braid group actions on branes..\"}.\n\nii) The Artin presentation $r$ on $n$ generators, which is obtained by infinite intersection via the {\\it rigorous} Cayley-von Dyck process (see section 2), instead of the infinite 'sum' (as in the non-rigorous Feynman process for achieving independence of the metric),defines a pure framed (i.e. colored) braid on $n$ strands,i.e.intuitively a back-ground independent, {\\it macroscopic} 'string',{\\it which is immediately related (without relying on SUSY), to 4D 'gravity' via the 4D smooth structure of $W^4(r)$, by the fundamental Hilbert Vth-like construction of AP Theory}, compare to \\cite{Wi11}, p.25, \\cite{Wi16}. These are the strings in AP Theory, when it is considered as {\\it 'the infinite limit of $SU(N)$ YM Theory'},\\cite{M3},p.10, \\cite{M4}, \\cite{M5}.\n\n(A priori,they do not seem to be related to so-called 'string topology',\\cite{Su1}.)\n\nThus AP Theory can be considered to be a graded, background independent, non-perturbative, parameter-free, macroscopic 'string theory' where holography and topology changing transistions and interactions are as strong as possible, (compare to p.411 of \\cite{Gr} and p.8 of \\cite{We}). The intrinsic 'world-sheet' of the string $r$ consists simply of the iterates of the canonically associated planar diffeomorphisms $h(r):\\Omega_n\\to \\Omega_n$ .The incredibly rich Iteration and Covering theories (Nielsen, Thurston,...) of these $h(r):\\Omega_n\\to \\Omega_n$ makes this 'string theory' a very strong one indeed; in fact,in should also be relevant to \nLoop Quantum Gravity,\\cite{Sm}, \\cite{N}, \\cite{P},\\cite{BT}, thus uniting these two,a priori different approaches of modern physics to Quantum Gravity. 
The true LQG, after it accommodates universal holography and topology change and is freed from the graph-theoretic combinatorics of 'spin networks', etc., will reveal itself as the \nCovering Theory of String\/M Theory in AP Theory. Then some of the basic difficult problems of LQG (see, e.g.\\ \\cite{DT}) will be bypassed or solved intrinsically {\\it ab initio}.\n\niii) Similarly, AP-holography, our gauge-gravity correspondence, is so sharp and general that it can also be considered to be a particle-wave, particle-field 'duality': the discrete Artin presentation $r$, an 'extended' particle, i.e.\\ a crystallic, non-topological 'quantum string', determines the smooth 4D spacetime universe $W^4(r)$. We can say:\n\n{\\it The 'particle'\/'quantum string' $r$ is a hologram of the 4D spacetime universe $W^4(r)$; compare again to \\cite{M2}, p.63.}\n\n\nThis makes the fundamental AP construction of $W^4(r)$ from $r$ a rigorous, background-independent analogue of 'The Wave Function of the Universe',\n\\cite{HH}, without using path integrals.\nThese analogies and the ones in section 3 already support many of the fundamental concepts of String\/M theory as well as QCD (see \\cite{Wi3}, p.1577, \\cite{Wil1}, \\cite{M4}) in a conceptually very simple, rigorous, purely mathematical way: these concepts are {\\it \"here to stay\"}, \\cite{L}, p.9; the concepts of radical holography, universal topological change, etc. 
have to be present in any Quantum Geometry\/ Gravity theory.\n\niv) We consider $h(r):\\Omega_n\\to \\Omega_n$ to be the AP-analogue (where the rigorous Cayley-von Dyck intersection procedure is substituted for the non-rigorous Feynman summation procedure) of 't Hooft's 'planar dominant Feynman diagrams' on the sphere $S^2$ (see \\cite{Wi3},p.1577,\\cite{M2},p.8, Ashtekar's remark on p.10 of \\cite{Au}).\nThus AP Theory in its abstract,but rigorous,universal way,realizes 't Hooft's 'bold' Conjecture of\nrelating 4D Quantum Gauge Theory to String Theory,\\cite{Wi3},p.1577, \\cite{Wi10},p.25,\\cite{Sm},p.44,\\cite{S3}.\n\nv) It is interesting to observe (compare to,e.g.,\\cite{Mo}) that the truth of the 3D Poincar\\'e Conjecture shows that\nthere is no analogue of 4D Black Hole singularities in AP Theory; although it is obvious from our Hilbert Vth-like construction,that 4D smooth singularities are avoided,they could have\n'perversely' re-appeared as follows: if the Poincar\\'e Conjecture were false,i.e.,if there existed an Artin Presentation $r$ presenting the trivial group,but such that the boundary\nof the corresponding 4D smooth manifold $W^4(r)$ were not homeomorphic to the 3-sphere $S^3$,then this fact would imply that this smooth $W^4(r)$ would have a serious,unremovable\nsingularity: one would not be able to 'close' this smooth 4-manifold {\\it smoothly}.See also \\cite{Wi15}.\n\nvi) AP Theory has rigorous analogues to all the features indicated in Witten's Fig.1 d) on p.25 of \\cite{Wi11}. 
\n\nAll the above make AP Theory a strong candidate for contributing to: {\\it \"..the core geometrical ideas which underlie string theory, the way Riemannian geometry underlies general relativity\"}, see \\cite{Wi12}.\n\n\nIn fact, perhaps it is not an exaggeration to call AP Theory, due to its discrete, group-theoretic conceptual simplicity, an \"Erlanger Program\" for Quantum Geometry and Quantum Gravity. It emerges as a new geometry {\\it \"..one that aligns with the new physics of string theory; \"}, see \\cite{Gr}, pp.231,232.\n\nIn AP Theory, pure discrete group theory, i.e.\\ 'symmetry' in its purest form, has been 'maximized' metamathematically; compare to \\cite{Wil5}, \\cite{Wil4}.\n\n{\\it Metamathematically it is the most basic, simplest, 'outermost' enveloping {\\it framing} of Quantum Gravity}.\n\nThis basic intrinsic, autonomous mathematical consistency should also, a fortiori, encompass the logic, if rigorous, of any {\\it empirical} physical evidence. \\cite{FRS}, p.24.\n\nAP Theory in a way solves Atiyah's \"joker in the pack\" mystery of why low-dimensional manifold theory should be relevant to modern physics, \\cite{A3}, p.15: the, for each $n$, {\\it infinitely generated} graded groups of topology-changing transitions and interactions in AP Theory take the place of the classical infinite-dimensional Hilbert space of Quantum Theory.\n\nThe 4D metamathematical 'sporadicity' of AP Theory is much more universal than the important known ones: $Spin(4)=SU(2)\\times SU(2)$, \\cite{DK}, p.7, or that the braid group $B_n$ has a non-trivial amalgamation only when $n=3,4$, \\cite{KPS}.\n\n\nIn our main section 3 we point out more rigorous specific analogies with the Clay YM problem and how these might impede its solution as stated and\/or lessen its importance in modern physics.\n\nWe thank J.S. Calcut and C.E. Hough for very helpful conversations.\n\n\\section{The pure Mathematics of AP Theory}\n\nIn this section, we list, reference, and explain some of the main, purely 
mathematical, but physically relevant, facts of\nAP Theory that have been rigorously proven in the refereed literature (\\cite{W1}, \\cite{CW}, \\cite{C2}, \\dots).\n\nIn a purely algebraic sense, AP Theory starts off as a sub-theory of the very basic, discrete\ntheory of Finitely Presented Groups, which begins with the concept of a presentation (see, e.g., \\cite{KMS}, chapter 1) of a discrete group $G$, whose meaning is the following:\n\nLet $F_n$ denote the free group on the $n$ generators $x_1,\\dots,x_n$ and let $w_1,\\dots,w_m$ be $m$ words in $F_n$; let $N$ be the normal subgroup of $F_n$ which is the intersection of all normal subgroups of $F_n$ which contain all the $w_1,\\dots,w_m$;\nthen one says $\\langle x_1,\\dots,x_n \\mid w_1,\\dots,w_m \\rangle$ presents $G$, and calls it a {\\it presentation} of $G$, if the factor group $F_n\/N=G$. This is the Cayley-von Dyck process, \\cite{KMS}, p.12.\n\n\nIt is important to notice here that a presentation of the trivial group, i.e.\\ the case when it happens that $N=F_n$, can be, a priori, as complicated as a presentation of any arbitrary group, and that the concept of infinity is used here, when saying: \"intersection of {\\it all} normal subgroups\".\n\nEvidently in Group Theory, Presentation Theory is more basic, canonical, and intrinsic than Representation Theory.\n\nIf $m=n$, a presentation $r=\\langle x_1,\\dots,x_n \\mid r_1,\\dots,r_n \\rangle$ is called an {\\it Artin presentation} if, in $F_n$, the following group-theoretic equation holds: $$x_1\\dots x_n=r_1^{-1}x_1r_1\\dots r_n^{-1}x_nr_n.$$\n\nAs already realized by Artin himself, {\\it it is an equation in the free group $F_n$ that actually defines and characterizes pure, framed (colored) braids.} See also \\cite{MS}.\n\n\nSee \\cite{W1}, \\cite{W2}, \\cite{W3}, \\cite{CW}, \\cite{C1}, \\cite{C2}, \\cite{C3}, \\cite{C4}, \\cite{Ar}, and \\cite{R}, Appendix, for many examples.\n\nThe set of Artin presentations on $n$ generators is denoted by $R_n$, the group so presented by $\\pi(r)$; $A(r)$ denotes the $n\\times n$ integer matrix obtained by abelianizing the presentation $r$.\n\n$A(r)$ is 
always symmetric and determines the integer quadratic form of the compact 4-manifold $W^4(r)$; every symmetric, integer $n\\times n$ matrix is an $A(r)$, where $r$ lies in $R_n$; $A(r)$ determines the $Z$-homology of both $W^4(r)$ and its boundary, $M^3(r)$.\n\nWe consider the integer, symmetric matrix $A(r)$ to be the analogue of the Hilbert space binary forms (of QM) in AP Theory; it is all that remains of them under the radical reductivity of AP Theory; compare to \\cite{Wi17}, p.9, \\cite{Wi2}, p.4.\n\nWe say $r$ is 'a Torelli' if $A(r)$ is the zero matrix.\n\nThe $r$ in $R_n$ can be multiplied in a very non-trivial way, $r\\cdot r'$ again being an Artin presentation, so that $R_n$ is canonically isomorphic to $P_n\\times Z^n$, where $P_n$ denotes the pure braid group on $n$ strands (see \\cite{W1}, p.227). The Torelli form a subgroup isomorphic to the (infinitely generated if $n>2$) commutator subgroup of $P_n$, and it is indeed a subgroup of the classical Torelli group of a 2D closed, orientable surface of genus $n$, hence the name. Multiplication of $r$ by a Torelli does not change $A(r)$ and hence preserves the $Z$-homology of both $W^4(r)$ and $M^3(r)$.\n\nThe fact that AP Theory is discrete and is graded by the positive integers, i.e.\\ is 'cone-like', allows one to use mathematical induction, to stabilize, and to use computer-based methods. See \\cite{W1}, \\cite{W2}, \\cite{W3}, \\cite{CW}, \\cite{C1}, \\cite{C2}, \\cite{C3}, \\cite{C4}, for many examples of such computations.\n\nRecall that a group is called perfect if its abelianization is trivial.\nFrom work of Milnor, see \\cite{W1}, p.227, we have the following 'Triality' fact:\n{\\it If the group $\\pi(r)$ is finite and perfect, then it is either trivial, or isomorphic to I(120), the binary icosahedral group, i.e.\\ the fundamental group of Poincar\\'e's Z-homology 3-sphere}. 
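As a toy illustration of the definitions above (our own example, not taken from the cited literature): in $F_2$, the choice $r_1 = r_2 = (x_1x_2)^k$ satisfies Artin's equation, since $(x_1x_2)^k$ commutes with $x_1x_2$ in the free group, and its abelianization matrix $A(r)$ is symmetric, as claimed. A minimal free-group check:

```python
# Free-group words as lists of (generator index, exponent +-1) letters.
def inv(w):
    return [(g, -e) for (g, e) in reversed(w)]

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent inverse letters."""
    out = []
    for g, e in w:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

x1, x2 = [(1, 1)], [(2, 1)]
k = 3
r1 = (x1 + x2) * k          # r1 = r2 = (x1 x2)^k: a toy Artin presentation
r2 = list(r1)

# Artin's equation in F_2:  x1 x2 = r1^{-1} x1 r1 r2^{-1} x2 r2
lhs = reduce_word(x1 + x2)
rhs = reduce_word(inv(r1) + x1 + r1 + inv(r2) + x2 + r2)
assert lhs == rhs

# A(r): exponent sum of each generator in each relator; symmetric, as claimed
def abelianize(relators, n):
    A = [[0] * n for _ in relators]
    for i, w in enumerate(relators):
        for g, e in w:
            A[i][g - 1] += e
    return A

A = abelianize([r1, r2], 2)
assert A == [[k, k], [k, k]]          # in particular A[0][1] == A[1][0]
```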
\n\nThe smooth 4D topological part of AP Theory starts as explained in the Introduction\n(see \\cite{W1},\\cite{CW},\\cite{C1}, \\cite{C2}, for the rigorous technical mathematical details) and we add the following in order to stress its topological 3D and 4D importance: \n\ni) Let $M^3(r)$ denote the boundary of $W^4(r)$, the smooth 4-manifold determined by the Artin\npresentation $r$ via the 2D diffeomorphism $h(r):\\Omega_n\\to \\Omega_n$; it is always a connected, orientable, closed 3-manifold, and every such 3-manifold can be so obtained; \\cite{W1}, \\cite{W3}, \\cite{R}, Appendix. To possess an Artin presentation characterizes the fundamental groups of such $M^3$, and the Artin presentation {\\it actually determines the 3-manifold} up to diffeomorphism, not just its fundamental group. \n\nIn particular, the theory of closed, orientable 3-manifolds is, strictly speaking, {\\it not} an autonomous 3D theory, since at the same time as its definition, the smooth 4D manifold $W^4(r)$ is defined, amalgamating this 3D theory with the much more physically relevant theory of 4D smooth manifolds. Thus the Hamilton-Thurston-Perelman program acquires more physical importance, and hence perhaps a simpler solution.\n\nii) The $n+1$ boundary components of $\\Omega_n$ define a link $L(r)$ of $n+1$ knots, $k_i(r)$, ($i=0,1,2,\\dots,n$), in $M^3(r)$; Gonz\\'alez-Acu\\~na showed (see \\cite{C1}): {\\it given any link $L$ in any closed, orientable 3D manifold $M^3$, there exists an Artin presentation $r$ such that $M^3 = M^3(r)$ and $L$ is a sublink of $L(r)$. 
In particular, any knot in any $M^3$ can be obtained this way}.\n\niii) When $A(r)$ is unimodular, then $M^3(r)$ is a Z-homology 3-sphere, and the groups and peripheral structures of the above knots have very simple, computer friendly presentations, \\cite{W1}, p.226 (where framings do not have to be 'put in by hand'), avoiding self-linking problems and heeding the admonitions of Penrose, Witten, et al. against symmetry-destroying 'skein methods' in Knot and Linking theory, when it is used in physics.\n\niv) The Torelli, denote one by $t$, by transforming $W^4(r)$, $M^3(r)$ into $W^4(t\\cdot r)$, $M^3(t\\cdot r)$, provide a very general, unpredictable and subtle theory of {\\it topology changing} transitions, and form the analogue of Morse Theory in AP Theory, which is much sharper than that of any known $3+1$ TQFT.\n\nMultiplying by a Torelli always preserves the $Z$-homology of $W^4(r)$ and $M^3(r)$, but usually changes the topology of $W^4(r)$ and $M^3(r)$, and the knots $k_i(r)$ in $M^3(r)$; however, such transitions can also just change certain things and leave others invariant:\n\n\n\n{\\bf Example:} Consider $s\\in R_4$ and the Torelli $t\\in R_4$ given by $s_1=(x_1x_3)^2x_2s_2$, $s_2=(x_1(x_2x_3)^2x_2^2)^{-1}$, $s_3=(x_2x_3x_2)^{-1}x_4s_4$, $s_4=x_4^{-2}x_2x_3x_2(x_2x_3)^{-2}$, and $t_1=(x_4^{-1},x_1(x_1x_2x_3)^{-1})$, $t_2=(x_1,x_4)=t_3$, $t_4=(x_1^{-1},x_4x_1x_2x_3)t_3$; here $(x,y)=x^{-1}y^{-1}xy$, and we use the computer algebra system MAGMA to do the group-theoretic computations.\n\nMAGMA shows that $\\pi(s)=1$, $M^3(s)=S^3$ and that all the knot groups of the $k_i(s)$ are isomorphic to $Z$, except that of $k_3(s)$, which is isomorphic to that of the trefoil in $S^3$.\n\nMAGMA, and some simplification by hand, gives $r=t\\cdot s$ as: $$r_1=(x_4^{-1},x_1(x_1x_2x_3)^{-1})(x_4,x_1)(x_2x_3)^2x_2r_2,$$ $$r_2=(x_2x_3x_2^2)^{-1}((x_1^{-1},x_4x_1x_2x_3),x_4^{-1})(x_4^{-1},x_1)(x_1x_2x_3)^{-1},$$ $$r_3=(x_2x_3x_2)^{-1}(x_4x_1x_2x_3,x_1^{-1})x_4r_4,$$ 
$$r_4=x_4^{-2}(x_1^{-1},x_4x_1x_2x_3)x_2x_3x_2(x_2x_3)^{-2}(x_1,x_4).$$\n\nNow we again have $\\pi(r)=1$ and $M^3(r)=S^3$; however, the (non-amphicheiral) trefoil $k_3(s)$ of $M^3(s)$ has been transformed by the Torelli $t$ into an (amphicheiral) figure-8 knot $k_3(r)$ in $M^3(r)$; all the other knots stay trivial. \n\n\n\nv) Another very important, physically relevant property of the Torelli transitions is the following:\n\n{\\it One can change the smooth structure of a smooth 4-manifold, but leave the underlying topological structure intact. The discrete pure group theory of AP Theory has the energy and power to juggle different 4D smooth structures on the same underlying 4D topological manifold}. \\cite{C1},\\cite{C2}.\n\nThis phenomenon should be considered to be the last vestige (in the radical reductivity of AP Theory) of any hypothetical gravitational Schroedinger Wave Equation, where now 4D smooth structures, 'powered' by the Torelli transitions, are the analogues of gravitational waves. 
This also seems to solve the so-called Hierarchy Problem (\"Why is gravity so weak?\") in AP Theory.\n\nvi) The symmetric integer matrix $A(r)$ determines the quadratic form of $W^4(r)$, in particular its integer homology, as well as the integer homology of $M^3(r)$. Any integer quadratic form can be so obtained, which implies the very non-trivial fact that AP Theory has a discrete, purely group-theoretic analogue of the fundamental, physically relevant Donaldson Theorem, \\cite{DK},\\cite{GS},\\cite{FM},\\cite{Wi4}, \\cite{Wi7}, \\cite{Wi8}, \\cite{Wi13}, {\\it despite the fact that there are no moduli spaces nor deRham theory} in AP Theory:\n\n{\\bf THEOREM \\cite{W1}, p.240, \\cite{R}, p.621}: {\\it If $A(r)$ is a symmetric, integer, unimodular matrix, prevented by Donaldson's theorem from representing the quadratic form of a closed, smooth, simply-connected 4-manifold, then the group $\\pi(r)$ is non-trivial; in fact, it has a non-trivial representation into the Lie group SU(2).} \n\nThere exist (necessarily non-Artin) presentations $w$ of the trivial group where $A(w)=E_8$ (see p.11 of \\cite{C4}), and hence it is the Artin Equation above, in the free group $F_n$, for the presentation $r$, that makes this theorem true. \n\nThus, in particular, the discreteness of AP Theory is related to the complex numbers in a very non-trivial way.\n\nThe 'modularity' hinted at by this {\\it purely group-theoretic} theorem should be related to that of Borcherds, \\cite{B}, and to that of Tomita-Takesaki theory as in Algebraic QFT, \\cite{S2},\\cite{Sum}.\n\nWe remark that at this point in time (although, e.g., the Casson invariant can be described in purely AP-theoretic fashion, \\cite{Ar}, \\cite{C3},\\cite{C4}), we have no purely AP-theoretic proof of this 'Langlands-ian' theorem, which relates the Number Theory of Integer Quadratic Forms with Group Representations of the $\\pi(r)$ into $SU(2)$; compare to \\cite{GW}, \\cite{W4}. 
We still need the actual classic analytic field-theoretic methods for this; \\cite{T},\\cite{DS}.\n\n\nOn the other hand, this also shows that the non-analytic, non-local, smooth 4D Hilbert Vth Problem-like construction above\nis so subtle, sharp and metamathematically 'local', that certain important analytic, {\\it field-theoretic}, physically relevant results, pertaining to a single, given smooth 4-manifold, are still present\nand are actually {\\it detected} by the discrete AP Theory.\n\nThis goes well beyond the mere ad hoc Tietze-like methods of the Kirby Calculus.\n\n\nFurthermore (see \\cite{CW}), any complex elliptic surface, e.g., the Kummer surface, can be obtained smoothly as a $W^4(r)$, with boundary $S^3$, thus also proving the existence in AP Theory\nof a discrete, purely group-theoretic analogue of Donaldson\/Seiberg-Witten {\\it invariants}; \\cite{FM}, p.4, \\cite{DK}, p.376,\n\\cite{Wi4},\\cite{Wi6}, p.375.\n\n\nFinally, although in this paper we will not resort to it, we note that a very non-trivial Covering Theory exists in AP Theory: the covering and lifting theory of the 2D diffeomorphisms $h(r):\\Omega_n\\to \\Omega_n$, {\\it which is non-trivial even if the group $\\pi(r)$ is trivial}. In particular, the covering theory of the so-called 'class surface', i.e. 
the regular covering corresponding to the commutator subgroup of the fundamental group of $\\Omega_n$, and that corresponding to the normal closure in $F_n$ of the $r_i$, should be specially relevant here for obtaining rigorous proofs in String Theory and LQG.\n\nThis makes AP Theory also a very sophisticated mathematical theory indeed.\n\n\n\\section{Intrinsic 4D Quantum YM Existence and Mass Gap}\n\nIn Sections 1 and 2, we exhibited the existence of a mathematically rigorous, sporadic, purely group-theoretic, smooth $3+1$ theory, which, unlike the classical gauge-theoretic approach (which uses the space of connections on a particular, fixed manifold), is, a priori, exterior (as cobordism theory is), autonomous, intrinsic, graded by the positive integers, and which is not related to any particular Lie group in the usual gauge-theoretic sense.\n\nAs mentioned in the introduction, due to the radical, universal holography described above, this theory can not, a priori, be described by a conventional analytic, infinitesimal Field Theory.\n\nThis is the AP-analogue of {\\it 'producing a mathematically complete example of quantum gauge field theory in four dimensional space time'}, \\cite{JW}, p.5, i.e. 
the first part of the Clay YM Existence and Mass Gap problem.\n\nAlthough our '4D Quantum Gauge Theory' is not {\\it quantitative}, we can still ask the other fundamental question:\n\n{\\it Do there exist natural qualitative analogues to the quantitative mass gap condition of the Clay YM Millennium problem?}\n\nWe quote \\cite{JW}, p.3: {\\it \"..it must have a \"mass gap\": namely there must be some constant $\\Delta > 0$ such that every excitation of the vacuum has energy at least $\\Delta$\".}\n\nFurthermore, if such qualitative analogues of this quantitative mass gap do exist, how do they affect the rigorous solution, if it exists, of the actual {\\it quantitative} Clay mass gap problem as stated?\n\nWe start with the questions of Ashtekar, \\cite{As}, p.174, regarding Quantum Geometry: {\\it \"What are the atoms of geometry? What are the fundamental excitations?\"}.\n\nWe show that by considering the $h(r):\\Omega_n\\to \\Omega_n$ as the analogue of 'vacuum fluctuations\/excitations' and their generation of the 4D smooth manifolds $W^4(r)$ as a 'giving gravitational mass' Higgs-like phenomenon, we obtain some of the most desired and important hypothetical {\\it qualitative} consequences of the {\\it quantitative} Clay mass gap, if it were true.\n\nIn other words, the fundamental Hilbert Vth Problem-like construction, from the topology-less Artin presentation $r$ (i.e. $r$ is of the {\\it discrete purely group-theoretic} 'vacuum' with zero 'mass'), gives non-zero 'gravitational mass', i.e. 4D smooth structures, to the $W^4(r)$ in a universal Higgs-like way. (Compare to the heuristic quantitative arguments in \\cite{K} and references therein.)\n\n{\\it In AP Theory, 'mass gap' is just its radical, vacuum based, universal holography dressed in physical jargon}.\n\n{\\bf In AP Theory, holography and mass gap are defined in unison; they are the same mathematical phenomenon.}\n\nWe point out some AP-analogies with the most important {\\it qualitative} consequences of a positive solution 
to the actual quantitative Clay YM problem:\n\nI. Our mass gap immediately gives a sharper version of the 'clustering property' (\\cite{JW}, p.6, or \\cite{Wi10}, p.125) {\\it \"of the principle of exponential decay of correlations at long distances that makes it possible to deduce global results about four manifolds from a knowledge how the theory behaves on $R^4$. ..the mass gap is closely related to the behaviour of the Donaldson invariants on algebraic surfaces\"}\n(see also \\cite{Wi13}, p.291, \\cite{Wi6}).\nIn AP Theory, global, non-local 4D results are deduced not from $R^4$, {\\it but already, more holographically, from $R^2$}, the plane, and Pure Braid Theory, via the planar $h(r):\\Omega_n\\to \\Omega_n$.\nAs pointed out above, the results of \\cite{CW} show that there even exists a non-trivial, purely\nAP-theoretic theory of Donaldson\/Seiberg-Witten invariants.\n\nII. The above AP-analogy (in iv) of the Introduction) with 't Hooft's 'bold' conjecture, which according to \\cite{Wi10}, p.25, {\\it \".. if valid, it might give an effective way to demonstrate the mass gap\"} and which {\\it \"..seems like much the most plausible known approach to the problem, but an answer along these lines is not yet in sight, even at a heuristic level\"}.\n\n\nIII. However, the strongest analogy is with the most important desideratum of the Clay problem: namely, relating the mass gap with Quantum Chromodynamics (QCD), i.e. 4D $SU(N)$ Quantum Gauge Theory, \\cite{Wi3}, p.1577, ('YM Theory without SUSY') and its important properties such as {\\it confinement} and {\\it asymptotic freedom}. 
\\cite{Wil1},\\cite{Wil2},\\cite{H1},\\cite{H2}.\n\nFirst we note that the $h(r):\\Omega_n\\to \\Omega_n$, our 'vacuum fluctuations\/excitations', are sophisticated not only purely mathematically, but also {\\it physically}:\n\nSince $h(r)$ is determined only up to isotopies of $\\Omega_n$ keeping it fixed as the identity on the boundary, we obtain a {\\it topological} analogue of {\\it uncertainty} in AP Theory (instead of minimal length 'uncertainty')\nfor our vacuum fluctuations. Compare to \\cite{AY},\\cite{SW}.\n\nThus, unlike as usual in quantum physics, 'uncertainty' in AP Theory is {\\it deduced} from the vacuum fluctuations, not {\\it used} to 'prove' their existence.\n\nIn AP Theory, holography and 'uncertainty' are related; compare to \\cite{SW}.\n\nIf we dare call the generic fixed points of $h(r):\\Omega_n\\to \\Omega_n$ 'quarks' (compare to Wilczek, \\cite{Wil3}, Kondo, \\cite{K}, and Susskind's 'partons' in \\cite{S}), we immediately obtain a topological analogue of 'confinement':\n\n{\\it Although generically these fixed points do not disappear under isotopies, they can not be individually determined due to the uncertainty above}.\n\nThis is a topologically more sophisticated explanation of {\\it confinement} ('quarks can not be individually determined') than the classical string-theoretic one: that an open string has to have two inseparable end points, i.e. quarks. \\cite{Wi3}, p.1577.\n\nThis should also explain why the phenomenon of confinement resists being proven analytically by field-theoretic methods. \\cite{Wil4}, p.7, \\cite{S1}, pp.40-41.\n\n\n\nSimilarly, if we also dare call our 4D smooth manifolds $W^4(r)$ 'gluons', then we have a topological resemblance to {\\it asymptotic freedom} ('that in very high energy reactions quarks and gluons interact very weakly'):\n\n\nIf we iterate $h(r)$, it is natural to suppose that the mathematical relations between the fixed point theory (Nielsen, Thurston, ...) 
of $h(r)^m$ and the 4D smooth topology of the corresponding $W^4(r^m)$ will grow weaker. This hints at the existence of an abstract intrinsic 'non-linear Fourier transform', \\cite{A3}, p.14, and should be compared to Taubes, \\cite{T}, p.367, Wilczek, \\cite{Wil3}, Kondo, \\cite{K}, and \\cite{GW}, \\cite{W4}.\n\nRelating the generic fixed points of $h(r):\\Omega_n\\to \\Omega_n$ (i.e. 'quarks') to the smooth structures of the 4D manifolds $W^4(r)$ is a sporadic, {\\it 4D smooth topological}, i.e. gravitational, version of the original Yang-Mills program of relating particles to the differential geometry of connections.\n\n\nIt seems natural to conjecture that AP Theory and QCD, due to their conceptual simplicity, are related, in the sense that AP Theory has rigorous mathematical features that the still hypothetical 'highest temperature QCD' should have. \\cite{Wil1}, p.25; see also \\cite{We}, pp.13,14.\n\nIn conclusion, we have exhibited the existence of an absolute, intrinsic '4D Quantum YM Theory' with a qualitative analogue of 'mass gap', with all the physical analogies above. Due to the universality and rigorous mathematical conceptual simplicity of this theory, perhaps the actual quantitative Clay YM problem should be substituted by the more general one of the existence of a $3+1$ axiomatic, 'constructive' QFT. 
\\cite{J}, \\cite{Ri}, \\cite{FRS}, \\cite{S2},\\cite{Wi14},\\cite{We}.\n\nDue to the crucial mathematical fact that AP Theory is not a model (in particular, it does not introduce any new axioms), and due to its radical holography and strong Torelli transitions and interactions, which bypass difficult UV problems with pure group theory and which, a priori, do not seem to mix well with, e.g., the Wightman axioms (see \\cite{S2}, p.10), the fate of a mathematically rigorous axiomatic $3+1$ QFT is a more serious and pressing problem in physics than the more particular Clay YM problem. It seems reasonable to conjecture that any such $3+1$ QFT would have to be 'modular' with respect to the groups $\\pi(r)$ and therefore its rigorous construction, if indeed it exists, would be difficult.\n\nFor the Geometric Langlands Theory corresponding to Intrinsic 4D Quantum YM Theory, instead of $N=4$ Super YM Theory, see \\cite{W4}. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nRecently, in a series\nof publications\\cite{neuberger,hasenfratz,hasenfratz2,hasenfratz3,\nluscher,chandra}, \nit has become clear that, if one modifies\nthe chiral transformation laws away from their canonical form in\nthe continuum, chiral symmetries can be preserved on the lattice without\nthe problems of fermion doubling.\nThe lattice Dirac operator $D$ for \nthese fermions satisfies the Ginsparg-Wilson \\cite{ginsparg}\nrelation\\footnote{They are therefore named Ginsparg-Wilson fermions.}, \n\\begin{equation}\n\\label{eq:ginsparg}\n\\gamma_5 D + D\\gamma_5 = aD \\gamma_5 D \\;\\;,\n\\end{equation}\nwhere $a$ is the lattice spacing. 
\nAs a consequence of the \nGinsparg-Wilson relation~(\\ref{eq:ginsparg}), it is easy \nto show that the fermion action,\n\\begin{equation}\nS_F=\\sum_{x,y}\\bar{\\psi}(x) D_{x,y} \\psi(y) \\;\\;,\n\\end{equation}\nis invariant under lattice chiral transformations, and \nchiral symmetry will protect the quark masses from \nadditive renormalizations.\n\nThe chiral properties of the Ginsparg-Wilson fermions are\na direct result of the Ginsparg-Wilson relation. In particular, several\ntypes of fermion actions can be written down which all fulfill the\ncondition. For definiteness, one particular choice \\cite{neuberger2,luscher} \nis adopted in this paper, namely: \n\\begin{eqnarray}\n\\label{eq:ferm_matrix}\naD&=&1-H_W(H^{\\dagger}_W H_W)^{-1\/2} \\;\\;, \\nonumber \\\\\nH_W&=& 1+s-a\\sum_{\\mu}{1 \\over 2}\n[\\gamma_\\mu(\\nabla_\\mu+\\nabla^*_\\mu)-a\\nabla^*_\\mu\\nabla_\\mu] \\;\\;,\n \\nonumber \\\\\n&=&(-3+s)\\delta_{x,y}+\n{1 \\over 2}\\sum_{\\mu}\\left[(1-\\gamma_\\mu)U_{\\mu}(x)\\delta_{x+\\mu,y}\n+(1+\\gamma_\\mu)U^{\\dagger}_{\\mu}(x-\\mu)\\delta_{x-\\mu,y}\\right]\n\\end{eqnarray}\nwhere $s$ is a parameter satisfying $|s|<1$. \nThe lattice covariant derivatives\n$\\nabla^*_\\mu$ and $\\nabla_\\mu$ are defined as usual,\n\\begin{eqnarray}\n\\nabla_\\mu \\psi&=& U_\\mu(x)\\psi(x+\\mu)-\\psi(x) \\;\\;, \\nonumber \\\\\n\\nabla^*_\\mu \\psi&=& \\psi(x)-U^{\\dagger}_{\\mu}(x-\\mu)\\psi(x-\\mu) \\;\\;. \n\\end{eqnarray}\n\nDue to their decent chiral properties, it is quite tempting to investigate\nthe possibility of performing Monte Carlo simulations using \nGinsparg-Wilson fermions, replacing the conventional\nWilson fermions. Despite its\nseemingly non-local appearance, the fermion matrix~(\\ref{eq:ferm_matrix})\nis in fact local (with exponentially decaying tails), \nas long as the parameter $s$\nin the matrix is chosen appropriately\n\\cite{luscher2,chiu}. 
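The construction above can be checked numerically in a stripped-down setting. The following sketch (our own illustration, not code from the references) builds the free-field ($U_\mu=1$) analogue of $D=1-H_W(H_W^{\dagger}H_W)^{-1/2}$ in one dimension with two-component spinors, where the diagonal term $-3+s$ becomes $s$, and verifies the Ginsparg-Wilson relation at $a=1$; the lattice size and the value of $s$ are arbitrary choices:

```python
import numpy as np

# 1D free-field toy model of the overlap-type operator:
# gamma_1 = sigma_x, "gamma_5" = sigma_z, L sites, periodic b.c., a = 1.
L, s = 8, 0.5
g1 = np.array([[0., 1.], [1., 0.]])
g5 = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

S = np.roll(np.eye(L), 1, axis=1)            # (S psi)(x) = psi(x+1)

# H_W = s*delta_{x,y} + (1/2)[(1-gamma_1)delta_{x+1,y} + (1+gamma_1)delta_{x-1,y}]
H = (s * np.kron(np.eye(L), I2)
     + 0.5 * np.kron(S, I2 - g1)
     + 0.5 * np.kron(S.T, I2 + g1))

G5 = np.kron(np.eye(L), g5)
assert np.allclose(G5 @ H @ G5, H.T)         # gamma_5-hermiticity (U = 1, H real)

# D = 1 - H_W (H_W^dag H_W)^{-1/2}, via an eigendecomposition of H^T H
w, U = np.linalg.eigh(H.T @ H)
D = np.eye(2 * L) - H @ (U @ np.diag(w ** -0.5) @ U.T)

# Ginsparg-Wilson relation at a = 1: gamma_5 D + D gamma_5 = D gamma_5 D
print(np.max(np.abs(G5 @ D + D @ G5 - D @ G5 @ D)))   # machine-precision zero
```

The relation holds exactly because $V=1-D$ is unitary and $\gamma_5 V\gamma_5=V^{\dagger}$, which in turn only needs the $\gamma_5$-hermiticity of $H_W$; the eigendecomposition plays the role that the fractional-inverter methods below play on realistic lattices.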
The locality property of the\nfermion matrix enables us to use iterative methods in\nKrylov space whenever inversion of the matrix becomes necessary.\n\nWhen performing the inversion of the fermion matrix~(\\ref{eq:ferm_matrix}), \none encounters the problem of yet another matrix \ninversion of a fractional power.\nRecently, proposals have been put forward \\cite{neuberger2,bunk} which\nmake such inversions possible. \nSome numerical calculations in quenched QCD \\cite{edwards}\nfor Ginsparg-Wilson fermions already indicate that these fermions\nindeed have the anticipated chiral properties.\nHowever, it is also realized that quenched calculations with Ginsparg-Wilson\nfermions are more costly than with conventional Wilson fermions,\nprimarily due to the fractional inversion of \nthe matrix $H^{\\dagger}_WH_W$.\nTwo specific types of methods will be discussed in this paper. One is the\nmethod proposed by Bunk \\cite{bunk}, which will be called the \nfractional inverter method, or FIM. The other method is the\noptimal rational approximation method, or ORAM,\nproposed in Ref.\\cite{edwards}.\nIt was reported in Ref.~\\cite{edwards} that ORAM converges faster\nthan FIM for a desired accuracy and a given condition number of\nthe matrix $H^{\\dagger}_WH_W$.\nNow, each multiplication with the fermion matrix $D$\nfor Ginsparg-Wilson fermions is\nequivalent to $2N+1$ multiplications with the matrix $H^{\\dagger}_W$ or\n$H_W$, where $N$ is some integer.\nWith FIM, $N$ is the order of the \nhighest Legendre polynomial kept in the iteration procedure.\nWith ORAM, $N=N_{cg}$ is the number of conjugate gradient \\footnote{\nBy the phrase \"conjugate gradient\", we mean all possible iterative\nalgorithms in Krylov space: conjugate gradient, minimal residue,\nBi-conjugate gradient, etc.} \niterations needed to perform the multi-shift matrix inversions.\nThis number is determined by\nthe condition number of the matrix $H^{\\dagger}_WH_W$ and\nthe accuracy desired.\nTherefore, the 
calculations with Ginsparg-Wilson fermions are\nat least a factor of $2N+1$ more costly than those with conventional Wilson fermions.\n\nAlthough it is already quite costly in the quenched case, it remains \na tempting problem to simulate {\\em dynamical} \nGinsparg-Wilson fermions. No algorithms including dynamical fermions\nhave been tested on these newly proposed fermions.\nIn this paper, it is shown that a Hybrid Monte Carlo algorithm would do\nthe job; however, just as in the quenched case, the simulation is\nmore costly than simulating {\\em dynamical} Wilson fermions.\nAlso, in the calculation of \nthe fermionic force with respect to the gauge links, using\ntwo different methods for the fractional inversion\nresults in very different memory and CPU time consumptions.\nFor FIM, it seems that\n$O(N)$ pseudofermion fields have\nto be stored in order to make the simulation tractable.\nFor ORAM, only a moderate number of\npseudofermion fields have to be stored, and the number of matrix\nmultiplications also increases more slowly than in FIM.\nIn this paper, we will concentrate on a Hybrid Monte Carlo\nalgorithm for the simulation of dynamical Ginsparg-Wilson fermions.\nThe fermionic force for the gauge links, which is the crucial\npart of the dynamical fermion simulation, will be calculated \nfor both ORAM and FIM. General properties of the two methods \nare compared. Test runs on small lattices with gauge group\n$SU(3)$ are now being investigated \\cite{chuan2} and detailed\nresults will be reported later.\n\nThis paper is organized in the following manner. In Section 2, \nthe Hybrid Monte Carlo algorithm suitable for \nsimulating dynamical Ginsparg-Wilson fermions\nis described and the formula for the force on the gauge link is \nderived, in both ORAM and FIM. The two methods are compared\nfor the dynamical simulation in terms of CPU time\nand memory consumption. Possible improvement methods are\nalso addressed. 
Some concluding remarks are given in Section 3.\n\n\\section{The Hybrid Monte Carlo algorithm}\n\nThe basic formalism of the Hybrid Monte Carlo\nalgorithm \\cite{kennedy} remains the same\nas in the conventional Wilson case \\cite{chuan}.\nOnly the fermionic force has to be re-derived for the\nGinsparg-Wilson case, which will be dealt with below in detail.\nThe effective action with the pseudofermion contribution now reads:\n\\begin{equation}\nS_{eff}=S_g[U_{\\mu}(x)]+\\phi^{\\dagger}Q^{-2}\\phi \\;\\;,\n\\end{equation}\nwhere the fermion matrix $Q \\equiv \\gamma_5(D+m)$ is hermitian and\n$\\phi(x)$ is the pseudofermion field generated at the beginning of\na Hybrid Monte Carlo trajectory from Gaussian noise.\nWe have also assumed that two flavors of quarks have degenerate masses. \nAt each molecular dynamics step in a Hybrid Monte Carlo trajectory,\none has to find the solution vectors \n$X_1 \\equiv Q^{-2}\\phi$ and $X_2 \\equiv Q^{-1}\\phi=QX_1$ from \nan iterative algorithm (conjugate gradient, for example) in Krylov space.\nThen, the total force with respect to the gauge fields can be found\nby investigating the variation of the action under infinitesimal \nchanges of the gauge fields:\n\\begin{eqnarray}\n\\delta S_{eff}&=&\\sum_{x,\\mu} Tr[V_{\\mu}(x)\\delta U_{\\mu}(x)+h.c.]\n+ \\delta S_f \\;\\;, \n\\nonumber \\\\\n\\delta S_f&=& \\delta[\\phi^{\\dagger}Q^{-2}\\phi]\n= \\sum_{x,\\mu} Tr[F_{\\mu}(x)\\delta U_{\\mu}(x)+h.c.] \\;\\;. \n\\end{eqnarray}\nThe gauge staple $V_{\\mu}(x)$ comes solely from\nthe pure gauge part of the action and can be\nobtained with little cost. The fermionic forces $F_{\\mu}(x)$, \nhowever, are much more costly. 
Once the fermionic forces\nare obtained, the standard Hybrid Monte Carlo updating procedure\ncan be carried out just as in the conventional Wilson case.\n\nTo derive the formula for the fermionic force, we take the variation\nof the fermionic part of the action and get,\n\\begin{equation}\n\\delta S_f= X^{\\dagger}_1(-\\delta Q) X_2 + X^{\\dagger}_2(-\\delta Q) X_1 \\;\\;.\n\\end{equation}\nThe variation of the matrix $Q$ contains two parts, one being \nsimple, namely \n\\begin{equation}\n-\\delta_1 Q = \\gamma_5 (\\delta H_W) (H^{\\dagger}_WH_W)^{-1\/2} \\;\\;,\n\\end{equation}\nthe other being quite complicated, i.e.\n\\begin{equation}\n-\\delta_2 Q = \\gamma_5 H_W \\delta (H^{\\dagger}_WH_W)^{-1\/2} \\;\\;,\n\\end{equation}\nwhich depends on the detailed implementation of the fractional\ninversion of the matrix. We now proceed to discuss the fermionic\nforces in ORAM and FIM respectively.\n\n\n\\subsection{Fermionic force in ORAM}\n\nWe first present the force in ORAM, which is more straightforward.\nWe recall that this approximation \namounts to approximating the function $z(z^2)^{-1\/2}$ in the\ninterval $[0,1]$ by a ratio of two polynomials:\n\\begin{equation}\nz(z^2)^{-1\/2} = z\\left(c_0+\\sum^N_{k=1}{c_k \\over z^2+q_k}\\right) \\;\\;.\n\\end{equation}\nIt is an approximation similar to the Pad\\'e approximation\nused in Ref.\\cite{ying}. 
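To make the structure of such a rational approximation concrete, the following sketch evaluates a partial-fraction sum of exactly this $z(c_0+\sum_k c_k/(z^2+q_k))$ form. The coefficients used below come from the polar ($\tanh$-based) approximation of the sign function and are an illustrative assumption on our part; they are not the optimal ORAM coefficients of the text (and here $c_0=0$):

```python
import numpy as np

def sign_rational(z, N):
    """Approximate z*(z^2)^{-1/2} = sign(z) by z * sum_k c_k/(z^2 + q_k),
    using polar-decomposition coefficients (illustrative, with c_0 = 0):
    q_k = tan^2(theta_k), c_k = 1/(N cos^2(theta_k)), theta_k = (2k-1)pi/(4N)."""
    k = np.arange(1, N + 1)
    theta = (2 * k - 1) * np.pi / (4 * N)
    q = np.tan(theta) ** 2
    c = 1.0 / (N * np.cos(theta) ** 2)
    z = np.asarray(z, dtype=float)
    return z * np.sum(c / np.add.outer(z ** 2, q), axis=-1)

x = np.linspace(0.1, 1.0, 200)
for N in (4, 8, 16):
    print(N, np.max(np.abs(sign_rational(x, N) - 1.0)))   # error shrinks with N
```

Applied to the hermitian matrix $\gamma_5 H_W$, each term $1/(z^2+q_k)$ becomes a shifted inversion $((\gamma_5 H_W)^2+q_k)^{-1}$, and it is these shifted systems that a multi-shift conjugate gradient solves for all $k$ simultaneously.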
For details about this approximation\nand how to get coefficients $c_k$, consult~\\cite{edwards}\nand references therein.\nApplying this method to the hermitian matrix $\\gamma_5H_W$,\nwe immediately obtain the following expression for\nthe variation of the fermionic action:\n\\begin{eqnarray}\n\\delta S_f &=& c_0Tr(X_2 \\otimes X^{\\dagger}_1\\gamma_5\\delta H_W)\n+\\sum^N_{k=1}c_k Tr(\\zeta_{2,k} \\otimes X^{\\dagger}_1 \\gamma_5 \\delta H_W)\n\\nonumber \\\\\n&+&c_0Tr(X_1 \\otimes X^{\\dagger}_2\\gamma_5\\delta H_W)\n+\\sum^N_{k=1}c_k Tr(\\zeta_{1,k} \\otimes X^{\\dagger}_2 \\gamma_5 \\delta H_W)\n\\nonumber \\\\\n&-&\\sum^N_{k=1}c_k Tr\\left( \\zeta_{2,k} \\otimes \\xi^{\\dagger}_{1,k} H^{\\dagger}_W \\delta H_W\n+ \\xi_{2,k} \\otimes \\xi^{\\dagger}_{1,k} \\gamma_5 \\delta H_W\n\\right) \\;\\;,\n\\nonumber \\\\\n&-&\\sum^N_{k=1}c_k Tr\\left( \\zeta_{1,k} \\otimes \\xi^{\\dagger}_{2,k} H^{\\dagger}_W \\delta H_W\n+ \\xi_{1,k} \\otimes \\xi^{\\dagger}_{2,k} \\gamma_5 \\delta H_W\n\\right) \\;\\;,\n\\nonumber \\\\\n\\zeta_{i,k}&=& {1 \\over (\\gamma_5H_W)^2+q_k}X_i \\;\\;, \\;\\;\\;\\;\n\\xi_{i,k}= \\gamma_5H_W {1 \\over (\\gamma_5H_W)^2+q_k}X_i \\;\\;.\n\\end{eqnarray}\nIn the above formula, ``$Tr$'' stands for taking trace in both Dirac and color\nspace and a summation over the whole lattice points.\nThe symbol $\\otimes$ stands for direct product of two vectors in\ncolor space. 
Therefore, the fermionic force is obtained as\n\\begin{eqnarray}\nF_{\\mu}(x)&=&{c_0 \\over 2} tr_{Dirac}[X_2(x+\\mu) \\otimes X^{\\dagger}_1(x)\\gamma_5(1-\\gamma_{\\mu}) \n+X_1(x+\\mu) \\otimes X^{\\dagger}_2(x)\\gamma_5(1-\\gamma_{\\mu})]\n\\nonumber \\\\\n&+&\\sum^N_{k=1}tr_{Dirac}{c_k \\over 2}[\\zeta_{2,k}(x+\\mu) \\otimes X^{\\dagger}_1(x)\n\\gamma_5(1-\\gamma_{\\mu})]\n\\nonumber \\\\\n&+&\\sum^N_{k=1}tr_{Dirac}{c_k \\over 2}[\\zeta_{1,k}(x+\\mu) \\otimes X^{\\dagger}_2(x)\n\\gamma_5(1-\\gamma_{\\mu})]\n\\nonumber \\\\\n&-&\\sum^N_{k=1}tr_{Dirac}{c_k \\over 2}[\\zeta_{2,k}(x+\\mu) \\otimes \n[H_W\\xi_{1,k}]^{\\dagger}(x)(1-\\gamma_{\\mu})]\n\\nonumber \\\\\n&-&\\sum^N_{k=1}tr_{Dirac}{c_k \\over 2}[\\xi_{2,k}(x+\\mu) \\otimes \\xi^{\\dagger}_{1,k}(x)\n\\gamma_5(1-\\gamma_{\\mu})]\n\\nonumber \\\\\n&-&\\sum^N_{k=1}tr_{Dirac}{c_k \\over 2}[\\zeta_{1,k}(x+\\mu) \\otimes \n[H_W\\xi_{2,k}]^{\\dagger}(x)(1-\\gamma_{\\mu})]\n\\nonumber \\\\\n&-&\\sum^N_{k=1}tr_{Dirac}{c_k \\over 2}[\\xi_{1,k}(x+\\mu) \\otimes \\xi^{\\dagger}_{2,k}(x)\n\\gamma_5(1-\\gamma_{\\mu})] \\;\\;,\n\\end{eqnarray}\nwhere the trace $tr_{Dirac}$ is taken within the Dirac space only.\n\n\n\\subsection{Fermionic force in FIM}\n\nIn FIM, we would like to solve for $\\xi$ satisfying:\n\\begin{equation}\n\\label{eq:solve}\nM^{1\/2}\\xi=X \\;\\;\\;\\;, {\\rm given} \\;\\;\\; X,\n\\end{equation}\nwhere $M \\equiv H^{\\dagger}_W H_W$. This is done by setting:\n\\begin{equation}\nM = c (1+t^2-2tA) \\;\\;,\n\\end{equation}\nwith the parameters $c$ and $t$ chosen in such a way \nthat all eigenvalues of the matrix $A$ lie within $[-1,1]$. 
\nTo be more specific, we choose,\n\\begin{equation}\nt = {\\sqrt{\\kappa}-1 \\over\\sqrt{\\kappa}+1} \\;\\;,\\;\\;\\;\\;\nc = {(\\sqrt{\\kappa}+1)^2 \\over 4 \\lambda_{min}} \\;\\;,\n\\end{equation}\nwhere $\\lambda_{min}$ ($\\lambda_{max}$) is the lowest\n(highest) eigenvalue of the matrix $H^{\\dagger}_WH_W$ and\n$\\kappa \\equiv \\lambda_{max}\/\\lambda_{min}$ is the condition number.\nThen, the solution to eq.~(\\ref{eq:solve}) may be written as:\n\\begin{equation}\n\\xi=c^{-1\/2} \\sum^{\\infty}_{k=0} t^k P_k(A) \\cdot X\n=\\sum^{\\infty}_{k=0} s_k \\;\\;,\n\\end{equation}\nwhere $P_k(z)$ are Legendre polynomials. Therefore, an approximant for the\nsolution at the $n$-th level is\n\\begin{equation}\n\\xi_n =\\sum^{n}_{k=0} s_k \\;\\;.\n\\end{equation}\nThe shifts \\footnote{An extra factor $t^k$ has been included\nin the definition of $s_k$ as compared with Ref.\\cite{bunk}.}, \n$s_k$, defined as\n\\begin{equation}\ns_k =c^{-1\/2}t^kP_k(A)\\cdot X \\;\\;,\n\\end{equation}\nsatisfy the following recursion relations:\n\\begin{eqnarray}\ns_{-1}&=&0 \\;\\;, s_0=c^{-1\/2}X \\;\\;, \\nonumber \\\\\ns_{k}&=&(2-1\/k)tAs_{k-1}-(1-1\/k)t^2s_{k-2} \\;\\;.\n\\end{eqnarray}\nFor the case of Legendre polynomials, it is claimed that the following\nbound for the residue holds \\cite{bunk}:\n\\begin{equation}\n|\\xi-\\xi_n|\/|\\xi|=|R_n(A)| \\le t^{n+1}=\n ({\\sqrt{\\kappa}-1 \\over\\sqrt{\\kappa}+1})^{n+1} \\;\\;,\n\\end{equation}\nwhich asserts the exponential convergence of the iteration.\n\nFor the vectors $\\delta (H^{\\dagger}_WH_W)^{-1\/2}X_i$,\na similar strategy can be applied, \n\\begin{equation}\n\\delta(M^{-1\/2})\\eta = c^{-1\/2}\n\\sum^{N_{cut}}_{n=0} t^n \\delta P_n(A)\\eta \\;\\;, \\;\\;\\;\\;\\; {\\rm given} \\;\\;\\; \\eta,\n\\end{equation}\nwhere $\\eta$ represents either $X_1$ or $X_2$ and $N_{cut}$ is\nthe highest order of the Legendre polynomials kept in the approximation.\nIn an analogous manner, we define,\n\\begin{equation}\n\\delta_k \\equiv 
c^{-1\/2} t^k (\\delta P_k(A)) \\cdot \\eta \\;\\;,\n\\end{equation}\nwhich satisfy the following recursion relation:\n\\begin{eqnarray}\n&&(k+1) \\delta_{k+1}+kt^2 \\delta_{k-1}=(2k+1)t(\\delta As_k+A\\delta_k) \\;\\;.\n\\nonumber \\\\\n&& \\delta_{-1}=0 \\;\\;\\;\\;, \\delta_0=0\\;\\;,\n\\end{eqnarray}\nUsing this relation, $\\delta_k$ could be expressed as:\n\\begin{equation}\n\\delta_k = \\sum^{k-1}_{l=0} tL^l_k(A)\\delta A s_l \\;\\;.\n\\end{equation}\nThe coefficients $L^l_k(A)$ are polynomials in $A$ with degree\n$(k-l-1)$\nand can be expressed as Legendre polynomials,\n\\begin{equation}\nL^l_k(A)= {2l+1 \\over l+1}t^{k-l-1}P_{k-l-1}({2l+3 \\over l+2}A) \\;\\;.\n\\end{equation}\nAfter rearranging the double summation and some trivial\nalgebra, we get the following formula for the variation of\nthe fermionic action:\n\\begin{eqnarray}\n\\delta S_f&=&\nTr(Z_1 \\otimes X_2^{\\dagger} \\gamma_5 \\delta H_W)\n+Tr(Z_2 \\otimes X_1^{\\dagger} \\gamma_5 \\delta H_W)\n\\nonumber \\\\\n&-&{1 \\over 2c} \\sum^{N_{cut}}_{l=0} ({2l+1 \\over l+1})Tr[(H_Wx_{2,l} \\otimes \nT^{\\dagger}_{1,l})\\gamma_5\\delta H^{\\dagger}_W]\n\\nonumber \\\\\n&-&{1 \\over 2c} \\sum^{N_{cut}}_{l=0} ({ 2l+1 \\over l+1})Tr[x_{2,l} \\otimes \n(H_W\\gamma_5T_{1,l})^{\\dagger}\\delta H_W] \n\\nonumber \\\\\n&-&{1 \\over 2c} \\sum^{N_{cut}}_{l=0} ({2l+1 \\over l+1})Tr[(H_Wx_{1,l} \\otimes \nT^{\\dagger}_{2,l})\\gamma_5\\delta H^{\\dagger}_W]\n\\nonumber \\\\\n&-&{1 \\over 2c} \\sum^{N_{cut}}_{l=0} ({ 2l+1 \\over l+1})Tr[x_{1,l} \\otimes \n(H_W\\gamma_5T_{2,l})^{\\dagger}\\delta H_W] \\;\\;,\n\\nonumber \\\\\nZ_i&=& (H^{\\dagger}_WH_W)^{-1\/2}X_i \\;\\;,\\;\\;\\;\\;\nx_{i,l}= t^l(P_l(A))X_i \\;\\;,\n\\nonumber \\\\\nT_{i,l}&=& \\sum^{N-l-1}_{m=0} {\\cal S}^{(m)}_{i,l} \\;\\;,\\;\\;\\;\\;\n{\\cal S}^{(m)}_{i,l} =\nt^m P_{m}(({2l+3 \\over l+2})\\gamma_5A\\gamma_5)H_W X_i \\;\\;.\n\\end{eqnarray}\nTherefore, the following expression for the fermionic force is 
obtained:\n\begin{eqnarray}\nF_{\mu}(x)&=&\ntr_{Dirac}(Z_1(x+\mu) \otimes X_2^{\dagger}(x) \gamma_5(1-\gamma_{\mu})\n+(Z_2(x+\mu) \otimes X_1^{\dagger}(x) \gamma_5(1-\gamma_{\mu}))\n\nonumber \\\n&-&{1 \over 2c} \sum^N_{l=0} ({ 2l+1 \over l+1}) tr_{Dirac}[(H_Wx_{2,l}(x+\mu) \n\otimes T_{1,l}(x))\gamma_5(1+\gamma_{\mu})]\n\nonumber \\\n&-&{1 \over 2c} \sum^N_{l=0} ({ 2l+1 \over l+1}) tr_{Dirac}[(x_{2,l}(x+\mu) \n\otimes [H^{\dagger}_WT_{1,l}]^{\dagger}(x))\gamma_5(1-\gamma_{\mu})] \n\nonumber \\\n&-&{1 \over 2c} \sum^N_{l=0} ({ 2l+1 \over l+1}) tr_{Dirac}[(H_Wx_{1,l}(x+\mu) \n\otimes T_{2,l}(x))\gamma_5(1+\gamma_{\mu})]\n\nonumber \\\n&-&{1 \over 2c} \sum^N_{l=0} ({ 2l+1 \over l+1}) tr_{Dirac}[(x_{1,l}(x+\mu) \n\otimes [H^{\dagger}_WT_{2,l}]^{\dagger}(x))\gamma_5(1-\gamma_{\mu})] \n\;\;.\n\end{eqnarray}\nSince the CPU cost of a simulation program with dynamical\nfermions is dominated by fermion-matrix-times-vector operations,\nit becomes clear that \nthe above formula for the fermionic force is not very useful\nfrom a practical point of view.\nThe most CPU-consuming part is the calculation of\nthe vectors $T_{i,l}(x)$, for all values of $l$,\neach requiring an iterative procedure, i.e. the calculation\nof the quantities ${\cal S}^{(m)}_{i,l}$. This implies that, in order to\ncalculate the fermionic force, \nthe multiplication of the matrix $H^{\dagger}_WH_W$ \nhas to be performed $O(N^2_{cut})$ times, where $N_{cut}$ \ncan become large. This would make the calculation of \nthe fermionic force too costly.\n\nHowever, there is a way around this difficulty. \nThe price to pay will be some extra\nstorage. 
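As a sanity check on the shift recursion given above, one can verify numerically that it reproduces $s_k = c^{-1/2}t^kP_k(a)x$ term by term. The following sketch is our own illustration, not part of the paper: the matrix $A$ is replaced by a scalar $a$, and `shifts` is a hypothetical helper name.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def shifts(a, x, t, c, n):
    """Generate s_0..s_n from the recursion
    s_k = (2 - 1/k) t a s_{k-1} - (1 - 1/k) t^2 s_{k-2}, with s_{-1} = 0."""
    s_prev, s = 0.0, x / np.sqrt(c)
    out = [s]
    for k in range(1, n + 1):
        s_prev, s = s, (2 - 1 / k) * t * a * s - (1 - 1 / k) * t**2 * s_prev
        out.append(s)
    return out

# compare against the closed form s_k = c^{-1/2} t^k P_k(a) x
a, x, t, c = 0.3, 1.0, 0.5, 2.0
for k, s_k in enumerate(shifts(a, x, t, c, 8)):
    p_k = leg.legval(a, [0] * k + [1])   # evaluates the single polynomial P_k
    assert abs(s_k - t**k * p_k * x / np.sqrt(c)) < 1e-12
```

The same recursion applies verbatim when $a$ is a matrix and $x$ a vector; only the products become matrix-vector operations.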
Note that Legendre polynomials $P_m(\beta(l)z)$, $\beta(l)$\nbeing the constant $(2l+1)\/(l+1)$, could be\nexpressed as a linear combination of Legendre polynomials \nof lower or equal degrees with argument changed to $z$, i.e.\n\begin{equation}\nP_m(\beta z)= \sum^m_{j=0} \sigma_{m,j}(\beta) P_j(z) \;\;.\n\end{equation}\nWith this, we could express the quantities $T_{i,l}$ in the following way:\n\begin{eqnarray}\nT_{i,l}&=&\sum^{N-l-1}_{m=0} f_m(t,l)t^mP_m(\gamma_5A\gamma_5)H_WX_i \;\;,\n\nonumber \\\nf_m(t,l)&=&\sum^{N-l-1}_{j=m} t^{j-m} \sigma_{j,m}(\beta(l)) \;\;.\n\end{eqnarray}\nThe functions $f_m(t,l)$ are just c-numbers and can\nbe calculated at the beginning of the simulation.\nTherefore, after the vectors $X_i$ are obtained,\none can calculate the vectors $P_m(\gamma_5A\gamma_5)H_WX_i$ \nonce for all values of $m$\nand store the resulting vectors. \nThus, $T_{i,l}$ could be obtained easily without\nfurther iteration of matrix multiplications.\nThe coefficients $\sigma_{m,j}(\beta(l))$ satisfy the following recursion relation:\n\begin{equation}\n\sigma_{m,j}=({2m-1 \over m})\beta(l)[({j \over 2j-1})\sigma_{m-1,j-1}\n+({j+1 \over 2j+3})\sigma_{m-1,j+1}]-({m-1 \over m}) \sigma_{m-2,j} \;\;,\n\end{equation}\nwhere the subscripts $m$ and $j$ should satisfy $0\le j \le m$ and \na zero value is understood whenever an out-of-range subscript is\nencountered. Together with $\sigma_{0,0} \equiv 1$, the above recursion\nrelation completely determines all coefficients $\sigma_{m,j}(\beta(l))$\nand therefore the functions $f_m(t,l)$.\nNow the calculation of the quantity $T_{i,l}$ only requires a \nlinear combination of vectors, which costs little CPU time.\n\n\subsection{Comparison of the two methods}\n\nWe now compare the CPU time consumption and memory consumption of the\ntwo methods discussed so far for the simulation of dynamical \nGinsparg-Wilson fermions. 
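The re-expansion coefficients $\sigma_{m,j}$ can be tabulated directly from the recursion and checked against a direct Legendre evaluation. The sketch below is our own illustration (the helper name `sigma_table` is hypothetical); it verifies $P_m(\beta z)=\sum_j \sigma_{m,j}P_j(z)$ numerically.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def sigma_table(beta, m_max):
    """sigma[m, j] such that P_m(beta*z) = sum_j sigma[m, j] * P_j(z),
    built from the recursion with sigma_{0,0} = 1 and zeros out of range."""
    sig = np.zeros((m_max + 1, m_max + 2))   # spare column for the j+1 lookup
    sig[0, 0] = 1.0
    for m in range(1, m_max + 1):
        for j in range(m + 1):
            left = sig[m - 1, j - 1] * j / (2 * j - 1) if j >= 1 else 0.0
            right = sig[m - 1, j + 1] * (j + 1) / (2 * j + 3)
            prev = sig[m - 2, j] if m >= 2 else 0.0
            sig[m, j] = (2 * m - 1) / m * beta * (left + right) - (m - 1) / m * prev
    return sig[:, :m_max + 1]

# check the identity P_m(beta*z) == sum_j sigma[m, j] P_j(z)
beta, z = 1.5, 0.37
sig = sigma_table(beta, 6)
for m in range(7):
    direct = leg.legval(beta * z, [0] * m + [1])   # P_m(beta*z)
    expanded = leg.legval(z, sig[m])               # the re-expanded series
    assert abs(direct - expanded) < 1e-12
```

Since the table depends only on $\beta(l)$ and the truncation order, it indeed needs to be computed only once at the start of the simulation.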
As is well known, the CPU time \nconsumption is proportional to the number of operations\nof the matrix $H_W$ on vectors. \nFor each molecular dynamics step in the Hybrid Monte Carlo,\nORAM requires $2N_{CG}(2N_{cg}+1)$ matrix multiplications\nto obtain the solution vector $X_1$ and $4N_{cg}+4N_r$ more matrix\nmultiplications to obtain the fermionic force. Here,\n$N_{CG}$ is the number of conjugate gradient iterations\nneeded to obtain the solution $X_1$ and $N_{cg}$ is the\nnumber of conjugate gradient iterations needed to obtain\nthe vector $(H^{\dagger}_WH_W+q_{min})^{-1}X_i$ for the\nsmallest shift $q_{min}$.\nThe parameter $N_r$ is the order of the polynomials in the optimal\nrational approximation.\nORAM also requires storing $N_r$ pseudofermion field vectors.\nAs a comparison, \nFIM requires $2N_{CG}(2N_{cut}+1)$ matrix multiplications\nto obtain the solution vector $X_1$ and $12N_{cut}$ more matrix\nmultiplications to obtain the fermionic force, where $N_{cut}$\nis the highest order of Legendre polynomials kept in the series\nexpansion. FIM also needs to store $2N_{cut}$ pseudofermion field vectors.\n\nConcerning the CPU time consumption, both methods are more costly\nthan dynamical simulations with Wilson fermions by a factor\nof $2N+1$, where $N=N_{cg}$ for ORAM and $N=N_{cut}$ for FIM.\nFrom the theoretical upper bound of the error, the two methods\nbehave in a similar manner, $N_{cg} \sim N_{cut}$. Practically,\nhowever, according\nto the experience in \cite{edwards}, $N_{cg}$ is usually less\nthan $N_{cut}$ because the theoretical bound is saturated for\nFIM while it is not for ORAM. Therefore, ORAM is more favorable\ncompared with FIM when doing simulations \nwith dynamical Ginsparg-Wilson\nfermions. 
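The operation counts quoted above are straightforward to tabulate. The following sketch (function and variable names are ours, not from the paper) encodes the per-step matrix-multiplication and pseudofermion-storage costs of the two methods:

```python
def oram_cost(n_CG, n_cg, n_r):
    """ORAM cost per molecular dynamics step: matrix multiplications
    for the solve plus the force, and pseudofermion vectors to store."""
    matmuls = 2 * n_CG * (2 * n_cg + 1) + 4 * n_cg + 4 * n_r
    return matmuls, n_r

def fim_cost(n_CG, n_cut):
    """FIM cost per molecular dynamics step, same conventions."""
    matmuls = 2 * n_CG * (2 * n_cut + 1) + 12 * n_cut
    return matmuls, 2 * n_cut

# example with comparable truncation orders, N_cg ~ N_cut
assert oram_cost(100, 50, 20) == (2 * 100 * 101 + 200 + 80, 20)
assert fim_cost(100, 50) == (2 * 100 * 101 + 600, 100)
```

With $N_{cg}\sim N_{cut}$ the leading $2N_{CG}(2N+1)$ terms coincide, so the comparison is decided by the subleading force cost and by the storage, both of which favor ORAM.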
Concerning the memory consumption, $N_r$ is usually much\nless than $N_{cut}$, which would \nagain put ORAM in a more favorable place.\n\nIt is clear from the above discussion that, if one would like\nto accelerate the simulation with dynamical Ginsparg-Wilson fermions,\none has to find ways to decrease $N_{cg}$ in ORAM or $N_{cut}$ in FIM.\nThese two parameters are mainly determined by the condition number\nof the matrix $H^{\dagger}_WH_W$. \nAny preconditioning method that decreases the condition\nnumber of the matrix (while still maintaining the\nshifted nature of the matrix in ORAM)\n will bring an improvement to the simulation of\ndynamical Ginsparg-Wilson fermions.\nIt should be pointed out that other improvements, for example using \nbetter integration schemes, would apply to both methods.\nTest runs on small lattices are now under \ninvestigation \cite{chuan2}, where these \nalgorithmic issues will be further studied.\n\n\section{Conclusions}\n\nIn this paper, possibilities of simulating dynamical Ginsparg-Wilson fermions\nare discussed. The formula for\nthe fermionic force is derived for two specific\nimplementations of the algorithm, the optimal rational\napproximation method (ORAM) and the fractional inverter method (FIM).\nIt turns out that, in both methods, \nsimulating dynamical Ginsparg-Wilson fermions is more costly than\nsimulating dynamical Wilson fermions. The extra CPU time\nconsumption mainly comes from \nthe fractional inversion of the matrix.\nIn quenched simulations, it has been reported \cite{edwards} \nthat ORAM performs better than FIM. 
In dynamical simulations,\nthis conclusion still holds, both for CPU time consumption and \nmemory consumption.\nIt should be emphasized that, though it is more costly, \nthe advantage of simulating dynamical Ginsparg-Wilson\nfermions over dynamical Wilson fermions or\nquenched Ginsparg-Wilson fermions is a much better\nbehavior towards the chiral limit.\nThe feasibility of such a simulation has been demonstrated in\nthis paper using a Hybrid Monte Carlo algorithm. \n\n\n\section*{Acknowledgments}\nThis work is supported by the Chinese National Science Foundation \nand by the Natural Science Fund from Peking University.\n\n\input refs.tex \n\n\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\label{sec:intro}\n \n \n In models extended from the standard model (SM) Higgs sector, a strongly first order phase transition (1stOPT) can be realized.\n We emphasize that the nature of the electroweak phase transition (EWPT) can be probed by exploring the Higgs sector at ongoing and future experiments.\n Models predicting significant deviations in various Higgs boson couplings can be tested at the LHC~\cite{CMS:2013xfa} as well as at future lepton colliders including the International Linear Collider (ILC)~\cite{ILC}, the Compact Linear Collider (CLIC)~\cite{CLIC} and the Future Circular Collider of electrons and positrons (FCC-ee)~\cite{FCC-ee}.\n\n On the cosmological side, a strongly 1stOPT that occurs in the early Universe produces stochastic gravitational waves (GWs).\n In the future, planned space-based interferometers such as LISA~\cite{Seoane:2013qna}, DECIGO~\cite{Kawamura:2011zz} and BBO~\cite{Corbin:2005ny} will survey GWs in the millihertz to decihertz range, which is the typical frequency of GWs from the 1stOPT at the electroweak scale.\n\n Among various extensions of the Higgs sector, we here focus on a model with gauged dark $U(1)_X^{}$ symmetry including the $U(1)$ gauge kinetic mixing term.\n In 
general, $U(1)$ extended models are also testable at various experiments for the dark photon search.\n\n\section{Model with dark $U(1)_X$ gauge symmetry}\n\label{sec:model}\n\n We consider a model with a dark sector where the $U(1)_X^{}$ Abelian gauge symmetry is spontaneously broken by the so-called dark Higgs mechanism. \n We introduce a complex scalar $S$ with $U(1)_X^{}$-charge $Q_X^{}$ and the $U(1)_X^{}$ gauge field (dark photon) $X_\mu^0$. \n In general, there appears a gauge kinetic mixing term between the $U(1)_X^{}$ gauge boson $X_\mu^0$ and the hypercharge $U(1)_Y^{}$ gauge boson $B_\mu^{}$~\cite{Holdom:1985ag}, and the Lagrangian is given by (e.g. Ref.~\cite{Addazi:2017gpt})\n\n\begin{align}\n{\cal L} = - \frac{1}{4} X_{\mu\nu} X^{\mu\nu} - \frac{\epsilon}{2} X_{\mu \nu} B^{\mu \nu} + |D_\mu S|^2 - V_0,\n\label{eq:lagrangian}\n\end{align}\nwhere $X_{\mu\nu}=\partial_\mu X_\nu^0 - \partial_\nu X_\mu^0$ and\n$B_{\mu\nu}=\partial_\mu B_{\nu} - \partial_\nu B_{\mu}$,\n and the covariant derivative is defined as $D_\mu = \partial_\mu + i g_X Q_X X_\mu^0$.\nHere, the Higgs potential is given by \n\begin{align}\nV_0^{} =\n-\mu_\Phi^2|\Phi|^2\n-\mu_S^2 |S|^2\n+\lambda_\Phi^{} |\Phi|^4\n+\lambda_S^{} |S|^4\n+\lambda_{\Phi S}^{} |\Phi|^2 |S|^2.\n\label{eq:full_theory}\n\end{align}\n We normalize the $U(1)_X^{}$ charge of $S$ as $Q_S^{} \equiv Q_X^{}(S) = 1$.\n Since the viable parameter range for $\epsilon$ is too small to affect the PT~\cite{Addazi:2017gpt}, we focus on the remaining six parameters, i.e. $\mu^2_\Phi, \mu^2_S, \lambda_\Phi^{}, \lambda_S^{}$, $\lambda_{\Phi S}^{}$ and $g_X^{}$.\n\n After the EW symmetry breaking, the two Higgs multiplets can be expanded as\n$\Phi=(\nw^+, \frac{1}{\sqrt{2}}(v_\Phi+\phi_\Phi+i z^0)\n)^T, \nS=\frac{1}{\sqrt{2}}(v_S+\phi_S+i x^0)$, \nwhere $v_\Phi$ and $v_S$ are the corresponding vacuum expectation values (VEVs). 
\n The Nambu-Goldstone modes $w^\pm$, $z^0$ and $x^0$ are absorbed by the gauge bosons $W^\pm_\mu$, $Z^0_\mu$ and $X_\mu^0$.\n The mass of $X_\mu^0$ is $m_{X} = g_X |Q_S| v_S$ (see also Ref.~\cite{Farzan:2012hh}).\n The interaction basis states ($\phi_\Phi^{}$, $\phi_S^{}$) are diagonalized into the mass eigenstates ($h$, $H$) through the rotation matrix with $c_\theta\equiv\cos\theta$ and $s_\theta\equiv\sin\theta$.\n We take $v_\Phi^{}(= 246~{\rm GeV})$, $m_h^{}(=125~{\rm GeV})$, $m_H^{}$, $\theta$, $g_X$ and $m_X$ as input parameters.\n The tree-level interactions of $h$ and $H$ with the SM gauge bosons $V$(=$W_\mu^\pm$, $Z_\mu^0$) and with the SM fermions $F$ are given by\n${\cal L}_{\Phi V V, \Phi FF} = (h c_\theta-H s_\theta) \{(2 m_W^2\/v_\Phi) W^+_\mu W^{- \mu}+ (m_Z^2\/v_\Phi) Z^0_\mu Z^{0 \mu} -\sum_F (m_F\/v_\Phi) \bar{F} F \}$. \n The couplings of $h$ normalized by the corresponding SM ones are universally given by\n\begin{align}\n\kappa \equiv \n\frac{g_{h VV}}{g_{h VV}^{\rm SM}} =\frac{g_{h FF}}{g_{h FF}^{\rm SM}}=c_\theta. 
\n\end{align}\n\n\section{Numerical results}\n\label{sec:results}\n\n In Fig.~\ref{fig:200+1_1}, various types of multi-step PT that predict a first order EWPT are marked with colored plots.\n The gray plots are insensitive to future GW observations, LISA~\cite{Caprini:2015zlo} and DECIGO~\cite{Kawamura:2011zz}.\n We find that there are still detectable regions satisfying the current collider constraints shown below.\n\n\begin{figure}[t]\n\centering\n\includegraphics[width=0.57\textwidth]{gx2+1mx200_1.eps}\n \caption{\label{fig:200+1_1}\n Types of multi-step PT on the $(m_H^{},\theta)$ plane for the benchmark point $m_X=200~{\rm GeV}$ and $g_X=2$.\n Parameter sets predicting a one-step PT of 1st order are marked with blue closed squares, a one-step PT of 2nd order with blue open squares, a two-step PT where both transitions are 1st order with green closed stars, a two-step PT where the latter one is 1st order with green closed triangles, and a two-step PT where the former one is 1st order with green open triangles.\n The gray plots are insensitive to future GW observations.\n The colored regions are excluded by perturbative unitarity (black) and vacuum stability (brown).\n The black lines show the combined exclusion limit obtained by the $\kappa_Z^{}$ measurement (orange) and direct searches for $H$ (gray).\n The black dashed lines and dotted lines show the expected accuracy of $\kappa$ measurements at the HL-LHC (14~TeV, 3~${\rm ab}^{-1}$) and the ILC (250~GeV, 2~${\rm ab}^{-1}$).\n }\n\end{figure}\n\n The measurements of the Higgs boson decay into weak gauge bosons give constraints on the $hVV$ couplings as $\kappa_Z^{}=1.03^{+0.11}_{-0.11}$ and $\kappa_W^{}=0.91^{+0.10}_{-0.10}$ from the ATLAS and CMS combination of the LHC Run-I data (68\% CL)~\cite{TheATLASandCMSCollaborations:2015bln}.\n In our numerical analysis, we take the 68\% CL bound $\kappa_Z^{}>0.92$, which translates into an upper bound on the mixing angle, namely $|\theta| \leq 23.1^\circ$. 
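Since $\kappa=c_\theta$ at tree level, the quoted angle bound follows directly from the $\kappa_Z$ constraint; a quick numerical check (our own illustration, with a hypothetical helper name for the dark photon mass relation $m_X=g_X|Q_S|v_S$):

```python
import math

# kappa_Z = cos(theta) at tree level, so kappa_Z > 0.92 bounds |theta| from above
theta_max_deg = math.degrees(math.acos(0.92))
assert round(theta_max_deg, 1) == 23.1

def dark_photon_mass(g_X, Q_S, v_S):
    """m_X = g_X |Q_S| v_S, the tree-level dark photon mass."""
    return g_X * abs(Q_S) * v_S
```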
\n The exclusion limits from the direct searches for the $H$ boson at the LEP and LHC Run-II are examined in Ref.~\cite{Robens:2015gla}. \n We will show that a large portion of the model parameter space where a strongly 1stOPT and detectable GW signals are possible is excluded by the collider bounds on the Higgs bosons discussed above.\n\n The expected accuracies of the measurements of the Higgs boson couplings are also displayed in Fig.~\ref{fig:200+1_1} as follows. \n The high-luminosity (HL)-LHC with $\sqrt{s}=14$~TeV and $L=3~{\rm ab}^{-1}$ can constrain $\Delta \kappa_V$ with an accuracy of $2\%$~\cite{CMS:2013xfa}.\n Future $e^+e^-$ colliders can considerably improve the precision.\n The ILC stage with $\sqrt{s}=250$~GeV and $L=2~{\rm ab}^{-1}$ can limit $\Delta \kappa_W^{}$ to 1.8\% and $\Delta \kappa_Z^{}$ to 0.38\%~\cite{Fujii:2017vwa}.\n In addition, the limits obtained from direct searches for the $H$ boson at future colliders are discussed in Ref.~\cite{Chang:2017ynj} for the small mass region and in Ref.~\cite{Carena:2018vpt} for the large mass region.\n\n\n\n In Fig.~\ref{fig:benchmark}, our numerical results on the EWPT and GW signals for the six benchmark points are presented.\n Scanning the parameter region, taking the collider bounds on the Higgs boson properties into consideration, we have found that GW signals are detectable only for larger dark photon masses, say $m_X^{}\gtrsim {\cal O} (25-100)~{\rm GeV}$.\n As shown in Ref.~\cite{He:2017zzr}, the recent data from LHCb~\cite{Aaij:2017rft} and LHC Run-II~\cite{Aaboud:2017buh} give constraints on $\epsilon$, which is roughly smaller than $10^{-2}$ at least, for the mass regions $10.6~{\rm GeV} $, and $P_\tau^{FB}$. 
Other inputs and \nassumptions are as follows:\n\\begin{tabbing}\n Anomalous Trilinear Gauge Couplings Fun \\= 8.Fun and games with Extended \nTechnicolor models \\kill\n ~~~~~~~e,$\\mu$,$\\tau$ universality \\> ISR with $\\sqrt {s'}\/\\sqrt {s} >0.7$\\\\\n ~~~~~~~$P=90\\%$, $\\delta P\/P=0.3\\%$ \\> $\\delta {\\cal L}\/ {\\cal L}=0.25\\%$\\\\\n ~~~~~~~$\\epsilon_b=50\\%$, $\\Pi_b=100\\%$ \\> $|\\theta|>10^\\circ$\\\\\n ~~~~~~~$\\epsilon_{e,\\mu,\\tau}(\\sigma)=100\\%$, $\\epsilon_\\tau(P_\\tau)=50\\%$ \n \\> Neglect $t$-channel exchange in $e^+e^-\\rightarrow e^+e^-$ \n\\end{tabbing}\nOf special note on this list are ($i$) a $b$-tagging \nefficiency($\\epsilon_b$) of $50\\%$ for a purity($\\Pi_b$) of 100$\\%$, ($ii$) \nthe efficiency for identifying all leptons is assumed to be 100$\\%$, although \nonly $50\\%$ of $\\tau$ decays are assumed to be polarization analyzed, ($iii$) \na $10^\\circ$ angle cut has been applied to all final state fermions, and \n($iv$) a strong cut to events with an excess of initial state \nradiation(ISR) has also been made. \nIn addition to the above, final state QED as well as QCD corrections are \nincluded, the \n$b$-quark and $\\tau$ masses have been neglected, and the possibility of \n$Z-Z'$ mixing has been ignored. Since \nour results will generally be statistics limited, the role played by the \nsystematic uncertainties associated with the parameter choices above will \ngenerally be rather minimal. \n\nTo insure model-independence, the values of the $Z'$ couplings, {\\it i.e.}, \n$(v,a)_{\\ell,b}$, as well as $M_{Z'}$, are chosen {\\it randomly} and \n{\\it anonymously} from \nrather large ranges representative of a number of extended gauge models. \nMonte Carlo data representing the above observables \nis then generated for several different values of $\\sqrt s$. 
At this point, the \nvalues of the mass and couplings are not `known' \n{\it a priori}, but will later be compared with what is extracted \nfrom the Monte Carlo generated event sample. Following this approach \nthere is no particular relationship between any of the couplings and \nthere is no dependence upon any particular $Z'$ model. (We normalize our \ncouplings so that \nfor the SM $Z$, $a_\ell=-1\/2$.) Performing this analysis for a wide range of \npossible mass and coupling choices then shows the power as well as the \nlimitations of this technique. \n\nTo get an understanding of how this procedure works in general, we will make \ntwo case studies for the $Z'$ mass and couplings, labelled here by I and II. \nThere is nothing special about these two choices, and several other parameter \nsets have been analyzed in comparable detail to show that the results that \nwe display below are rather typical. To begin, \nwe generate Monte Carlo data at $\sqrt {s}=$0.5, 0.75 and 1 TeV \nwith associated integrated luminosities of 70, 100, and 150 \n$fb^{-1}$, respectively, and subsequently \ndetermine the 5-dimensional $95\%$ CL allowed region for the mass and \ncouplings from a simultaneous fit using the assumptions listed above. This \n5-dimensional region is then \nprojected into a series of 2-dimensional plots which we can examine in detail. \nFigs. 1 and 2 show the results of our analysis for these two case studies \ncompared \nwith the expectations of a number of well-known $Z'$ models{\cite {rev}}. \nSeveral things are immediately apparent: the most obvious being that two \ndistinct allowed regions are obtained from the fit in both cases. (The input \nvalues are seen to lie nicely inside one of them.) This two-fold ambiguity \nresults from our inability to make a \ndetermination of the overall sign of {\it one} of the couplings, {\it e.g.}, \n$a_\ell$. If the sign \nof $a_\ell$ were known, only a single allowed region would appear in \nFigs. 
1a-b and 2a-b and a unique coupling determination would be obtained. \nNote that this {\\it same} sign ambiguity arises in SLD\/LEP data for the \nSM $Z$ and is only removed through the examination of low-energy neutrino \nscattering. Secondly, we see that the leptonic couplings \nare somewhat better determined than are those of the $b$-quark, which is \ndue to the fact that the leptonic \nobservables involve only leptonic couplings, while \nthose for $b$-quarks involve both \ntypes. In addition, there is more statistical power available in the lepton \nchannels due to the assumption of universality and the \nleptonic results employ two additional observables related to $\\tau$ \npolarization. Thirdly, we see from \nFigs. 1a-b the importance in obtaining coupling information for a number of \ndifferent fermion species. If only the Fig. 1a results were available, one \nmight draw the hasty conclusion that an $E_6$-type $Z'$ had been found. Fig. 1b \nclearly shows us that this is not the case. Evidently {\\it neither} $Z'$ \ncorresponds to any well-known model. Lastly, as promised, the $Z'$ mass is \ndetermined in both cases, although with somewhat smaller uncertainties in \ncase II. We remind the reader that there is nothing special about these two \nparticular cases. \n\n\\vspace*{-0.5cm}\n\\noindent\n\\begin{figure}[htbp]\n\\centerline{\n\\psfig{figure=zdrzpfig1a.ps,height=9.1cm,width=9.1cm,angle=-90}\n\\hspace*{-5mm}\n\\psfig{figure=zdrzpfig1b.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-0.75cm}\n\\centerline{\n\\psfig{figure=zdrzpfig1c.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-1cm}\n\\caption{\\small $95\\%$ CL allowed regions for the extracted values of the \n(a) lepton and (b) $b$-quark couplings \nfor the $Z'$ of case I compared with the predictions of the $E_6$ \nmodel(dotted), the Left-Right Model(dashed), and the Un-unified \nModel(dash-dot), \nas well as the Sequential SM and Alternative LR Models(labeled by `S' and `A', \nrespectively.) 
(c) Extracted $Z'$ mass; only the $a_\\ell >0$ branch is shown. \nIn all cases the diamond represents the corresponding input values.}\n\\end{figure}\n\n\n\\vspace*{-0.5cm}\n\\noindent\n\\begin{figure}[htbp]\n\\centerline{\n\\psfig{figure=zdrzpfig2a.ps,height=9.1cm,width=9.1cm,angle=-90}\n\\hspace*{-5mm}\n\\psfig{figure=zdrzpfig2b.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-0.75cm}\n\\centerline{\n\\psfig{figure=zdrzpfig2c.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-1cm}\n\\caption{\\small Same as Fig. 1 but for a different choice of $Z'$ mass and \ncouplings referred to as case II in the text.}\n\\end{figure}\n\n\nOf course, the clever reader must now be asking the question `why use 3 \ndifferent \nvalues of $\\sqrt s$, why not 2 or 5?' This is a very important issue which we \ncan only begin to address here. Let us return to the mass and couplings of \ncase I and generate Monte Carlo `data' for \nonly $\\sqrt s$=0.5 and 1 TeV with $\\cal L$= 100 and 220 $fb^{-1}$, \nrespectively, thus keeping the total $\\cal L$ the {\\it same} as in the \ndiscussion above. Repeating our analysis we then \narrive at the `2-point' fit as shown in Fig. 3a; unlike Fig. 1a, the \nallowed region does not \nclose and extends outward to ever larger values of \n$v_\\ell,a_\\ell$. The corresponding \n$Z'$ mass contour also does not close, again extending outwards to ever larger \nvalues. We realize immediately that this is just what happens when data at \nonly a single $\\sqrt s$ is available. For our fixed $\\cal L$, distributed as we \nhave done, we see that there is not enough of a lever arm to simultaneously \ndisentangle the $Z'$ mass and \ncouplings. Of course the reverse situation can also be just as bad. We \nnow generate Monte Carlo `data' for the case I mass and couplings in 100 GeV \nsteps in $\\sqrt s$ over the 0.5 to 1 TeV interval with the same total \n$\\cal L$ as above but now distributed as 30, 30, 50, 50, 60, and 100 $fb^{-1}$, \nrespectively. 
We then arrive at the `6-point' fit shown in Fig. 3b \nwhich suffers \na problem similar to Fig. 3a. What has happened now is that we have spread \nthe fixed $\\cal L$ too thinly over too many points for the \nanalysis to work. This brief study indicates that a proper balance is \nrequired to simultaneously achieve the desired statistics as well as an \neffective lever arm to obtain the $Z'$ mass and couplings. It is important \nto remember that we \nhave {\\it not} demonstrated that the `2-point' fit will never work. We note \nonly that it fails with our specific fixed luminosity distribution for the \nmasses and couplings associated with cases I and II. It is possible that for \n`lucky' combinations of masses and couplings a 2-point fit will suffice. \nClearly, more work is required to further address this issue. \n\n\\vspace*{-0.5cm}\n\\noindent\n\\begin{figure}[htbp]\n\\centerline{\n\\psfig{figure=zdrzpfig3a.ps,height=9.1cm,width=9.1cm,angle=-90}\n\\hspace*{-5mm}\n\\psfig{figure=zdrzpfig3b.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-1cm}\n\\caption{\\small Failure of the method in case I when data is taken at \n(a) too few (`2-point' fit) or (b) too many (`6-point' fit) different \ncenter of mass energies for the same total integrated \nluminosity as in Figs. 1 and 2. The luminosities are distributed as discussed \nin the text.}\n\\end{figure}\n\\vspace*{0.4mm}\n\n\nHow do these results change if $M_{Z'}$ {\\it were} known or if our input \nassumptions were modified? Let us return to case I and concentrate on the \nallowed \ncoupling regions corresponding to a choice of negative values of \n$v_{\\ell,b}$; these are \nexpanded to the solid curves shown in Figs. 4a and 4c. The large dashed curve \nin Fig. 4a corresponds to a reduction of the polarization to $80\\%$ with the \nsame relative error as before. While the allowed region expands the \ndegradation is not severe. 
If the $Z'$ mass were known, the `large' \nellipses shrink to \nthe small ovals in Fig. 4a; these are expanded in Fig. 4b. This is clearly a \nradical reduction in the size of the allowed region! We see that when the \nmass is known, varying the polarization or its uncertainty over a reasonable \nrange has very little influence on the resulting size of the allowed \nregions. From Fig. 4c we see that while knowing the $Z'$ mass significantly \nreduces the size of the allowed region for the $b$ couplings, the impact is \nfar less than in the leptonic case \nfor the reasons discussed above. Figs. 5a and 5b show that case I is not \nspecial in that similar results are seen to hold for case II.\n\n\\vspace*{-0.5cm}\n\\noindent\n\\begin{figure}[htbp]\n\\centerline{\n\\psfig{figure=zdrzpfig4a.ps,height=9.1cm,width=9.1cm,angle=-90}\n\\hspace*{-5mm}\n\\psfig{figure=zdrzpfig4b.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-0.75cm}\n\\centerline{\n\\psfig{figure=zdrzpfig4c.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-1cm}\n\\caption{\\small (a) Expanded lobe(solid) from Fig. 1a; the dashed curve shows \nthe same result but for $P=80\\%$. The smaller ovals, expanded in (b) apply \nwhen the $Z'$ mass is known. Here, in (b), $P=90(80)\\%$ corresponds to the \ndash-dot(dotted) curve while the case of $P=90\\%$ with $\\delta P\/P=5\\%$ \ncorresponds to the square-dotted curve. (c) Expanded lobe(solid) from Fig.1b; \nthe dotted curve corresponds to the case when $M_{Z'}$ is known.} \n\\end{figure}\n\nWhat happens for larger $Z'$ masses or when data at larger values of $\\sqrt s$ \nbecomes available? Let us assume that the `data' from the above three center \nof mass \nenergies is already existent, with the luminosities as given. We now imagine \nthat the NLC increases its center of mass energy to $\\sqrt s$= 1.5 TeV and \ncollects an additional \n200 $fb^{-1}$ of integrated luminosity. 
Clearly for $Z'$ masses near or \nbelow 1.5 TeV our \nproblems are solved since an on-shell $Z'$ can now be produced. Thus we \nshall concern ourselves with $Z'$ masses in excess of 2 TeV. \nFigs. 6a-d show the result of extending our procedure--now using 4 \ndifferent $\\sqrt s$ values, again for two distinct choices of the \n$Z'$ mass and \ncouplings. These `4-point' results are a combined fit to the data at \nall four center of mass energies. \n(Only one of the allowed pair of ellipses resulting from the overall sign \nambiguity is shown for simplicity.) \nNote that the $Z'$ input masses we have chosen are well in excess of 2 TeV \nwhere the LHC may provide only very minimal information on the fermion \ncouplings{\\cite {rev}}. Clearly by using the additional data from a \nrun at $\\sqrt s$=1.5 TeV this technique can be extended to perform coupling \nextraction for $Z'$ masses in excess of 2.5 TeV. The maximum `reach' for the \ntype of coupling analysis we have done is not yet known. It seems likely, \nbased on these initial studies, that the extraction of interesting coupling \ninformation for $Z'$ masses in excess of 3 TeV seems possible for a reasonable \nrange of parameters. \n\n\n\\vspace*{-0.5cm}\n\\noindent\n\\begin{figure}[htbp]\n\\centerline{\n\\psfig{figure=zdrzpfig5a.ps,height=9.1cm,width=9.1cm,angle=-90}\n\\hspace*{-5mm}\n\\psfig{figure=zdrzpfig5b.ps,height=9.1cm,width=9.1cm,angle=-90}}\n\\vspace*{-1cm}\n\\caption{\\small (a) Expanded lobe(solid) from Fig. 2a; the dashed curve shows \nthe same result but for $P=80\\%$. The smaller dotted oval, applies \nwhen the $Z'$ mass is known and $P=90\\%$. (b) Expanded lobe(solid) from \nFig. 2b; \nthe dotted curve corresponds to the case when $M_{Z'}$ is known. 
}\n\end{figure}\n\vspace*{0.4mm}\n\n\n\section{Outlook and Conclusions}\n\nIn this paper we have shown that it is possible for the NLC to extract \ninformation on the $Z'$ couplings to leptons and $b$-quarks even when the $Z'$ \nmass is not {\it a priori} known. The critical step for the success of the \nanalysis was to combine the data available from measurements performed \nat several different center of mass energies. For reasonable luminosities \nthe specific results we have obtained suggest, but do not prove, that data \nsets at at least 3 different energies are necessary for the procedure to be \nsuccessful. \n\n\n\n\bigskip \bigskip \begin{center} {\large \bf Acknowledgements} \end{center}\n\nThe author would like to thank J.L. Hewett and S. Godfrey for discussions \nrelated to this work.\n\n\def\MPL #1 #2 #3 {Mod.~Phys.~Lett.~{\bf#1},\ #2 (#3)}\n\def\NPB #1 #2 #3 {Nucl.~Phys.~{\bf#1},\ #2 (#3)}\n\def\PLB #1 #2 #3 {Phys.~Lett.~{\bf#1},\ #2 (#3)}\n\def\PR #1 #2 #3 {Phys.~Rep.~{\bf#1},\ #2 (#3)}\n\def\PRD #1 #2 #3 {Phys.~Rev.~{\bf#1},\ #2 (#3)}\n\def\PRL #1 #2 #3 {Phys.~Rev.~Lett.~{\bf#1},\ #2 (#3)}\n\def\RMP #1 #2 #3 {Rev.~Mod.~Phys.~{\bf#1},\ #2 (#3)}\n\def\ZP #1 #2 #3 {Z.~Phys.~{\bf#1},\ #2 (#3)}\n\def\IJMP #1 #2 #3 {Int.~J.~Mod.~Phys.~{\bf#1},\ #2 (#3)}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nUnmanned Underwater Vehicles (UUVs) have the capability to perform various operations including surveillance, exploration, and object detection in underwater environments. While traditional propeller-based propulsion enables UUVs to conduct deep water diving and high-speed traversal, flapping fin propulsion--inspired by the high levels of controllability observed in aquatic animals--offers high maneuverability at low speeds \cite{Blake1979}. 
Flapping fin propulsion provides a solution for UUVs to effectively navigate near-shore, obstacle-ridden terrain.\n\nFlapping fin UUV control systems regulate vehicle propulsion by modifying the vehicle gait, the specific set of fin kinematics applied to the current flapping cycle. UUV control systems have been extensively explored \\cite{He2020}; however, there is sparse literature on flapping fin control. While the effect of various kinematics on propulsion has been studied through experimental \\cite{Santo2017}, computational fluid dynamics \\cite{Liu2017}, and surrogate model \\cite{Viswanath2019} approaches, prior UUV flapping fin control systems do not embed a full understanding of how gait affects propulsion. For example, they focus on experimentally determining a small set of high-propulsion gaits \\cite{Shan2019}, restrict chosen gaits to a line in the kinematic space \\cite{Bi2014}, or incrementally change kinematics that have a known positive or negative correlation with thrust to eventually reach the target propulsion \\cite{Palmisano2008}. We use a neural network model to embed a more comprehensive understanding of the relationship between gait and propulsion within a gait selection model; as a result, the control system can generate gaits that not only meet a target trajectory, but also optimize for other measures of performance such as a smooth transition between gaits and energy efficiency.\n\nWe propose gait generation using a search-based inverse model that invokes a forward surrogate model. Our work focuses on a control system for thrust, which is the forward propulsion of the vehicle. The inverse model determines the subsequent gait from the desired thrust, current gait, and relative performance metric weights for thrust accuracy and kinematic smoothness. Inverse model gait evaluation uses a neural network forward model to predict the thrust of a given gait. 
Figure \ref{fig:config} shows the integration of the inverse model within the control system.\n\nSearch-based methods are frequently used for offline inverse problems as they require invoking the forward model multiple times \cite{Zhou2012,Hansen2017}. Unlike other aerial and underwater vehicle control systems, our flapping fin control system only needs to generate one gait per flapping cycle, allowing for a more flexible time constraint. Therefore, search-based methods are a viable onboard approach that additionally allows for the incorporation of modifiable optimization parameters without retraining the forward model.\n\nAs an alternative to search-based methods, autonomous aerial vehicle literature often uses neural networks in control systems to develop direct inverse models \cite{Muliadi2018,ElHamidi2020}. In this approach, the inverse model consists of a neural network that is directly trained from a forward model or plant. While producing fast predictions, this method does not allow for a flexible optimizer that can be changed cycle-to-cycle by the controller to prioritize different performance metrics.\n\nWe demonstrate that our forward gait-to-thrust model accurately interpolates gaits, and we show that our thrust-to-gait search-based inverse model generates gaits with high thrust accuracy while embedding adjustable performance weights. These weights allow a control system to make cycle-to-cycle trade-offs between performance metrics based on current system objectives. We compare different sampling and search-based techniques to improve upon our inverse model performance. 
Through inverse model integration on the Raspberry Pi, we demonstrate that the inverse model fulfills the one-prediction-per-cycle time constraint, even with the use of an expensive time-series forward model.\n\n\begin{figure}[t]\n \centering\n\t\includegraphics[width=1.0\linewidth]{NN_Control_System_Design4.png}\n\t\caption{\n Integration of the inverse model (shown in light blue) within a control system for position. Orange arrows indicate one request per thrust cycle.\n\t}\n\t\label{fig:config}\n\end{figure}\n\n\n\section{Methods}\n\nOur inverse model uses the input from the controller---desired thrust, current gait, and desired performance metric weights---to output a new gait with the goal of minimizing the inverse model loss function. Proposed gaits are generated using a sampling or direct search method. These gaits are then evaluated using a multi-objective loss function that invokes the forward gait-to-thrust neural network model.\n\n\subsection{Loss Function}\n\nProposed gaits from our inverse model are evaluated with a loss function that is defined as the weighted sum of the loss functions for the different performance metrics. Equation \ref{eq:loss} describes the overall loss function. The losses $L_{t}$, $L_{k}$, and $L_{e}$ correspond to the thrust accuracy, kinematic smoothness, and efficiency loss functions, while $w_{t}$, $w_{k}$, and $w_{e}$ serve as the corresponding performance metric weights.\n\begin{equation}\n\fontsize{10}{0} L = w_{t} * L_{t} + w_{k} * L_{k} + w_{e} * L_{e}\n\label{eq:loss}\n\end{equation}\n\nThe thrust accuracy loss is computed as the difference between $T_{target}$, the desired thrust, and $T_{pred}$, the thrust from the proposed gait of the inverse model (Eq. \ref{eq:loss_t}). 
$T_{pred}$ is calculated using the gait-to-thrust neural network.\n\\begin{equation}\n \\fontsize{10}{0} L_{t} = \\left|T_{target} - T_{pred}\\right|\n \\label{eq:loss_t}\n\\end{equation}\n\nThe kinematic smoothness loss accounts for the detrimental effect of frequently moving between gaits with highly deviant kinematics between flapping cycles. Transitioning between similar gaits allows the UUV to undergo a smoother motion, promoting system stability. We define a user-selected equivalent step size such that a change of $s_i$ units for kinematic $i$ has the same kinematic smoothness loss to the system as a change of $s_j$ units for kinematic $j$. The kinematic space is normalized by scaling each dimension based on its equivalent step size; then, kinematic loss is calculated as the Euclidean distance between the current and proposed gait. Equation \\ref{eq:loss_k} defines the kinematic smoothness loss function where $n_{k}$ is the number of kinematics, and $x_{i}$ and $y_{i}$ are the values of kinematic $i$ for the current and proposed gait.\n \\begin{equation}\n \\fontsize{10}{0} L_{k} = \\left(\\sum_{i=1}^{n_{k}} \\left(\\frac{|y_{i} - x_{i}|}{s_{i}}\\right)^2\\right)^{\\frac{1}{2}}\n \\label{eq:loss_k}\n \\end{equation}\n\nOur efficiency loss will be based upon the propulsive efficiency of the gait, which is equal to the product of output thrust and current velocity divided by power. As we do not currently have experimental positive flow cases available for forward gait-to-thrust training, our results set the efficiency weight to 0 and evaluate the trade-offs between thrust accuracy and kinematic smoothness.\n\n\\subsection{The Forward Model}\n\nThe forward model predicts average UUV thrust for a flapping cycle from the gait. Reduced-order analytic models can produce fast predictions, but they struggle to maintain accuracy when generalized beyond a small parameter space \\cite{Muscutt2017}. 
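For concreteness, the weighted loss of Equation \ref{eq:loss}, with the thrust term of Equation \ref{eq:loss_t} and the step-size-normalized kinematic term of Equation \ref{eq:loss_k}, can be sketched in a few lines. This is a minimal illustration; the function names and the example weights are ours, not part of the paper's implementation:

```python
import math

def kinematic_loss(current, proposed, step_sizes):
    # Eq. (3): Euclidean distance after scaling each kinematic i
    # by its user-selected equivalent step size s_i
    return math.sqrt(sum(((y - x) / s) ** 2
                         for x, y, s in zip(current, proposed, step_sizes)))

def total_loss(t_target, t_pred, current, proposed, step_sizes,
               w_t=0.95, w_k=0.05, w_e=0.0, l_e=0.0):
    # Eq. (1): weighted sum of the performance metric losses; the
    # efficiency weight w_e is held at 0 until positive-flow data exists
    l_t = abs(t_target - t_pred)  # Eq. (2): thrust accuracy loss
    return (w_t * l_t
            + w_k * kinematic_loss(current, proposed, step_sizes)
            + w_e * l_e)
```

A controller can reweight `w_t` and `w_k` cycle-to-cycle without touching the forward model, which is the flexibility argued for above.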
A model supporting a higher-order input space will allow future forward gait-to-propulsion models to incorporate fluid dynamics-related parameters such as flow speed as well as multi-fin kinematic parameters such as the flapping phase offset between front and rear fins.\n\nNeural network surrogate models support higher-order input spaces, and prior flapping fin propulsion research on fin design has developed neural network surrogate models for thrust prediction \cite{Viswanath2019,Lee2021}. Both works demonstrate that time series models can predict the time history of thrust generation for a flapping cycle. Therefore, we implement a similar long short-term memory (LSTM) network for our gait-to-thrust forward model using the inputs in Table \ref{tab:parameters}. A single pectoral fin setup is shown in Figure \ref{fig:single_fin_setup}. LSTM networks process sequential data by generating an output at each time step and using information from past outputs to inform subsequent results. Compared to traditional recurrent neural networks, LSTM networks include a cell state to retain a long-term memory accumulated from multiple past time steps that influences the next output.\n\n\begin{figure}[t]\n \centering\n\t\includegraphics[width=0.65\linewidth]{Single_Fin_Setup.png}\n\t\caption{\n Example single pectoral fin setup.\n\t}\n\t\label{fig:single_fin_setup}\n\end{figure}\n\nOur control system requests one gait per flapping cycle, so only the average thrust over a cycle is used by our system. While the LSTM model computes average thrust as the mean of the output thrusts, we also test the viability of a simple dense neural network (DNN) model that uses the static kinematic values in Table \ref{tab:parameters} to directly predict average thrust. DNN networks consist of layers of nodes such that each node in layer $l$ is connected to every node in layer $l-1$. 
Both models are discussed in our results.\n\n\\begin{table}[ht]\n\\footnotesize\n\\begin{center}\n \\caption{Parameters that compose a UUV gait}\n \\label{tab:parameters}\n \\begin{tabular}{p{3.1cm} p{4.5cm}}\n \\hline\n \\hline\n Kinematic & Description \\\\\n \\hline\n \\hline\n \\textit{Static kinematics} & \\\\\n \\hline\n Stroke Amplitude (\u00b0) & Maximum gait stroke angle \\\\\n \\hline\n Pitch Amplitude (\u00b0) & Maximum gait pitch angle \\\\\n \\hline\n Flap Frequency (Hz) & Frequency of a stroke cycle \\\\\n \\hline\n Stroke-Pitch Offset & Phase offset of the pitch cycle relative to the stroke cycle, calculated as a fraction of one cycle \\\\\n \\hline\n \\textit{Time-varying kinematics} & \\\\\n \\hline\n Stroke Angle (\u00b0) & Flapping angle as a function of time \\\\\n \\hline\n Pitch Angle (\u00b0) & Pitching angle as a function of time \\\\\n\\end{tabular}\n\\end{center}\n\\end{table}%\n\n\\subsection{Sampling and Direct Search Methods}\n\nMonte Carlo sampling and direct search methods are common derivative-free approaches for optimization based on an objective function \\cite{Kroese2014,Audet2014}. Monte-Carlo sampling involves randomly sampling the domain of the objective function based on a probability distribution, while direct search methods use a heuristic approach where past searches influence future attempts. A popular and effective class of direct search methods is the generalized pattern search (GPS) family of algorithms defined by Torczon \\cite{Torczon1997}. When applied to optimization problems, GPS often provides fast convergence and high accuracy solutions \\cite{Herrera2015,Javed2016}.\n\nFor our flapping fin inverse model, each static kinematic listed in Table \\ref{tab:parameters} serves as an input for the objective function, and the loss function in Equation \\ref{eq:loss} acts as the objective function output. 
We restrict our input space to the domain of provided experimental data as well as the approximate set of attainable gaits as described in the Experimental Data section. The input space is normalized through compressions in the direction of each static kinematic setting $i$ by the corresponding equivalent step size of $s_{i}$ units; in the normalized input space, any movement between two points of distance $d$ has the same kinematic loss. Both GPS-based and Monte Carlo-based inverse models are implemented as specified below.\n\n\subsubsection{Monte Carlo Approach}\n\nThe Monte Carlo method evaluates points from a hyper-sphere centered around the current gait in the input space, where the hyper-sphere radius $a$ determines the search scale. The search scale is calculated as $a = d_t\/10$, where $d_t$ is the difference between the thrust of the current gait and the target thrust. A uniform random sample of $n$ points satisfying the inequality in Equation \ref{eq:MCIneq} is evaluated, and the gait with the lowest loss is selected. The variables $x_{i}$ and $y_{i}$ are the values of static kinematic $i$ for the current and new gaits respectively.\n\begin{equation}\n\fontsize{10}{0}\n\sum_{i=1}^{n_{k}} \left(\frac{|y_{i} - x_{i}|}{a}\right)^2 < 1\n\label{eq:MCIneq}\n\end{equation}\n\n\subsubsection{Generalized Pattern Search Approaches}\n\nGeneralized pattern search algorithms provide a fast, derivative-free method for optimization. The objective function is evaluated at points on a mesh generated from a positive spanning set. GPS consists of a search step and a poll step. During the search step, a finite number of mesh points are evaluated from the objective function using a user-defined procedure; if an improvement is found for any proposed point, then the solution is accepted. If no new point is found, GPS polls neighboring mesh points to the current solution $x_i$ and accepts new points with improved objective function values. 
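The Monte Carlo step above can be sketched as follows. This is an illustrative transcription, assuming gaits are lists of normalized static kinematics and `loss_fn` stands in for the full loss of Equation \ref{eq:loss}; the rejection-sampling helper is our own:

```python
import random

def sample_in_hypersphere(center, radius, rng):
    # Rejection sampling: draw uniformly from the bounding cube and keep
    # only points satisfying the hyper-sphere inequality of Eq. (4)
    if radius == 0:
        return list(center)
    while True:
        offset = [rng.uniform(-radius, radius) for _ in center]
        if sum((o / radius) ** 2 for o in offset) < 1:
            return [c + o for c, o in zip(center, offset)]

def monte_carlo_step(current_gait, target_thrust, current_thrust, loss_fn,
                     n=50, rng=random):
    # Search scale a = d_t / 10, where d_t is the thrust gap, as in the text
    a = abs(target_thrust - current_thrust) / 10
    candidates = [sample_in_hypersphere(current_gait, a, rng) for _ in range(n)]
    return min(candidates, key=loss_fn)  # gait with the lowest loss is selected
```

Because the search radius shrinks with the thrust gap, small thrust corrections automatically stay close to the current gait in the kinematic space.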
If both steps are unable to generate an improved solution, the mesh size $m$ is divided by a mesh size divider $d_m$ and the process is repeated until a certain precision $p$ is obtained. An outline of GPS is provided.\n\n\begin{algorithm}[tb]\n\caption{Outline of GPS}\n\begin{algorithmic}\n \State Set initial solution $x_{prev}$, initial mesh size $m$, precision $p$;\n \While{$m \ge p$}\n \State $x_{new} \gets x_{prev}$;\n \State SEARCH for new solution in mesh, update $x_{new}$;\n \If{$x_{new}$ == $x_{prev}$}\n \State POLL for new solution in mesh, update $x_{new}$;\n \If{$x_{new}$ == $x_{prev}$}\n \State $m \/= d_m$;\n \EndIf\n \EndIf\n \EndWhile\n\end{algorithmic}\n\end{algorithm}\n\nHooke-Jeeves pattern search (HJPS) is a commonly used pattern search algorithm \cite{Hooke1961}. The mesh is created from the positive spanning set consisting of the standard basis vectors. For each input dimension, the poll step evaluates a neighboring mesh point in both the positive and negative directions, and $x_{prev}$ is updated if an improved solution is found. From this step, the vector $x_{new} - x_{prev}$ offers a promising direction for a continued search. 
The search step leverages this tactic by attempting to move the current solution in the direction $x_{new} - x_{prev}$ until no further improvement can be made.\n\n\n\begin{algorithm}[tb]\n\caption{Search Upon Failure for Our GPS Algorithm}\n\begin{algorithmic}\n \State Set current gait and loss as $\vec{g}_{curr}, l_{curr}$\n \State Set direction $\vec{u}_k$ and magnitude $i_k$ of minimum loss increase for each kinematic $k$ during polling\n \State Sort $\vec{u}_k$ by $i_k$ in ascending order\n \State $\vec{v} \gets \sum_{k=1}^{n_k} \vec{u}_k$\n \State $j \gets n_k - 1$\n \While{$j \ge 1$}\n \While{$(l_{pred} \gets loss(\vec{g}_{curr} + \vec{v})) < l_{curr}$}\n \State $\vec{g}_{curr} \gets \vec{g}_{curr} + \vec{v}$\n \State $l_{curr} \gets l_{pred}$\n \State $j \gets 0$\n \EndWhile\n \State $\vec{v} \gets \vec{v} - \vec{u}_j$\n \State $j \gets j - 1$\n \EndWhile\n\end{algorithmic}\n\end{algorithm}\n\nIn cases with a high kinematic smoothness weight, movement in any one coordinate direction results in a larger increase to kinematic loss than reduction in thrust accuracy loss, leading to a failed poll step and a resulting failed search step by HJPS. In these situations, HJPS is unable to escape the local minimum until the kinematic smoothness weight is lowered. With the goal of escaping these minima, our proposed generalized pattern search algorithm modifies HJPS to strategically search upon polling failure. Pseudocode for this additional search step is provided. Our GPS algorithm records the direction $\vec{u}_k$ and magnitude $i_k$ of minimum loss increase for each kinematic $k$ during the poll step. Until failure, searches are conducted in the direction of vector $\vec{v}$, where $\vec{v}$ is equal to the sum of all $\vec{u}_k$ and spans all $n_k$ kinematic directions. 
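The search-upon-failure step of Algorithm 2 can be transcribed almost line for line in Python. This is a sketch under the assumption that gaits are coordinate lists and `loss` is the objective of Equation \ref{eq:loss}; the helper name `add` is ours:

```python
def add(a, b):
    # elementwise vector addition for coordinate lists
    return [x + y for x, y in zip(a, b)]

def search_upon_failure(g_curr, l_curr, u, i, loss):
    # u[k]: minimum-loss-increase direction for kinematic k from polling
    # i[k]: the corresponding loss increase
    order = sorted(range(len(u)), key=lambda k: i[k])  # sort ascending by i_k
    u = [u[k] for k in order]
    v = [sum(col) for col in zip(*u)]                  # v spans all n_k directions
    j = len(u) - 1
    while j >= 1:
        while (l_pred := loss(add(g_curr, v))) < l_curr:
            g_curr = add(g_curr, v)                    # accept the improving move
            l_curr = l_pred
            j = 0                                      # signal success; exit outer loop
        v = [x - y for x, y in zip(v, u[j])]           # drop highest-increase component
        j -= 1
    return g_curr, l_curr
```

On each failure the search direction is progressively stripped of its costliest component, so the method degrades gracefully toward a single-dimension search before giving up.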
If the search fails, the component $\vec{u}_k$ associated with the highest loss increase $i_k$ is removed from $\vec{v}$; this process repeats until a search succeeds, or until $\vec{v}$ only spans one dimension.\n\n\subsection{Experimental Data}\n\nTo train and evaluate our forward model, experimental thrust data for various gaits was collected for a setup consisting of rigid rectangular-shaped pectoral fins, one on each side of the UUV. Experiments were run for 864 gaits, which are combinations of the kinematics listed in Table \ref{tab:expData}. Time-series data for five stroke cycles was collected for each gait.\n\n\begin{table}[ht]\n\footnotesize\n\begin{center}\n \caption{Provided experimental gaits}\n \label{tab:expData}\n \begin{tabular}{p{3.2cm} p{4.1cm}}\n \hline\n \hline\n Kinematic & Provided Values \\\n \hline\n \hline\n Stroke Amplitude (\u00b0) & 0, 15, 25, 32, 40, 55 \\\n \hline\n Pitch Amplitude (\u00b0) & 0, 15, 25, 32, 38, 55 \\\n \hline\n Flap Frequency (Hz) & 0.75, 1, 1.25, 1.5, 1.75, 2 \\\n \hline\n Stroke-Pitch Offset & -0.0625, 0, 0.0625, 0.125 \\\n\end{tabular}\n\end{center}\n\end{table}%\n\nIn order to invoke the LSTM network on interpolated gaits, the expected motor stroke and pitch angle time histories must be generated as an input for the network. The motors are commanded 16 values per cycle, where the commanded stroke follows a sinusoidal curve and commanded pitch oscillates between $-p_a$ and $p_a$, where $p_a$ is the pitch amplitude. Time histories are generated using a simple model of motor dynamics accounting for maximum velocity and acceleration. Across the provided experimental data with attainable gaits, the mean difference between the generated and experimental time histories was 2.94\u00b0 for stroke angle and 4.02\u00b0 for pitch angle. 
An example of the generated and experimental time histories is shown in Figure \ref{fig:strokePitchTimeHist}.\n\nDue to physical system limitations, certain high commanded amplitudes are unattainable at high flap frequencies, $ff$, as the motor is unable to realize the full commanded amplitude within the provided time frame. Therefore, our inverse model excludes all gaits where the stroke amplitude exceeds $97 - ff * 30$ or the pitch amplitude exceeds $75 - ff * 26$. The above inequalities were experimentally determined.\n\n\begin{figure}[t]\n \centering\n\t\includegraphics[width=0.6\linewidth]{Stroke_Pitch_Time_Histories_2.png}\n\t\caption{\n Sample stroke and pitch time history.\n\t}\n\t\label{fig:strokePitchTimeHist}\n\end{figure}\n\n\section{Results and Discussion}\n\n\n\subsection{Forward Model Performance}\n\nBoth the DNN and LSTM were evaluated using the experimental data described in Table \ref{tab:expData}. Forward passes of the models during evaluation are run using the TensorFlow Lite library \cite{Tensorflow} to increase prediction speed. Z-score normalization was applied to each kinematic input. The DNN directly outputs average thrust over one flapping cycle while the LSTM outputs the full thrust time history for a cycle. The input stroke and pitch time histories for LSTM training and evaluation were generated based on the motor dynamics as described in the Experimental Data section. The LSTM contains 100 hidden units, and time histories consisted of 50 points that were evenly spaced over a flapping cycle. The DNN contains 3 layers with 100 nodes per layer. The LSTM was trained for 150 epochs, and the DNN was trained for 500 epochs.\n\nWhen trained and evaluated on all experimental gaits, the LSTM and DNN reached mean average thrust errors of 0.0174N and 0.0364N respectively, where the average thrust error refers to the difference between the predicted and experimental average thrust for a specific gait. 
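The attainability filter described above reduces to a one-line check. The function name is ours; the bounds are the experimentally determined inequalities quoted in the text:

```python
def attainable(stroke_amp, pitch_amp, flap_freq):
    # Exclude gaits the motors cannot realize: stroke amplitude must not
    # exceed 97 - 30*ff and pitch amplitude must not exceed 75 - 26*ff
    return (stroke_amp <= 97 - flap_freq * 30
            and pitch_amp <= 75 - flap_freq * 26)
```

Such a predicate lets any of the inverse-model search methods reject an unattainable candidate before spending a forward-model evaluation on it.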
The DNN produces a significantly higher error, signifying that it is unable to effectively learn the gait-thrust relationship across the full space of experimental gaits. The increase in error is concentrated in the high flap frequency, high amplitude gaits: physically unattainable gaits described in our Experimental Data section have a mean average thrust error of 0.0915N for the DNN. The inability of the DNN to memorize the full gait-thrust relationship poses a concern since the modeling task will grow more complex in the future through the addition of inputs accounting for flow speed and multi-fin kinematic interactions. Therefore, our inverse model will implement the LSTM model.\n\nTo test LSTM gait interpolation, a holdout set of gaits was excluded from training. Our holdout set consisted of all experimental gaits fulfilling one or more of the following criteria: a flap frequency of 1.25 Hz, a stroke-pitch offset of 0, or a stroke or pitch amplitude of 25\u00b0. The LSTM successfully interpolated kinematics for the excluded gaits with a mean average thrust error of 0.0344N. The worst performing subset of excluded gaits--gaits with a stroke-pitch offset of 0--still obtained a mean average thrust error of 0.0374N.\n\nFigure \ref{fig:LSTMThrustProf} shows example thrust time histories generated by the LSTM for interpolated gaits. The LSTM embeds an understanding of how thrust changes over the course of a flapping cycle, capturing the peaks and troughs of the thrust time history; this understanding offers an explanation for the high-accuracy LSTM average thrust predictions for interpolated kinematics.\n\n\begin{figure}[t]\n \centering\n\t\includegraphics[width=1\linewidth]{LSTM_Thrust_Profs.png}\n\t\caption{\n Sample thrust time histories for interpolated kinematics. 
The left graph involves interpolation to an unseen stroke and pitch angle, while the right graph involves interpolation to an unseen flap frequency and stroke-pitch offset.\n\t}\n\t\label{fig:LSTMThrustProf}\n\end{figure}\n\n\subsection{Inverse Model Performance}\n\nThree search-based methods for the inverse model were evaluated--Monte Carlo, Hooke-Jeeves Pattern Search (HJPS), and our Generalized Pattern Search (GPS) algorithm that builds upon HJPS. Monte Carlo generates 50 trial gaits ($n = 50$). The mesh size $m$ and precision $p$ for both pattern search algorithms are set to 3 and 0.375 respectively. The inverse models implement the LSTM forward model for gait-to-thrust prediction.\n\nSynthetic and simulated thrust requests were used to evaluate the inverse models. Each synthetic data set consists of a sequential list of 100 pseudo-randomly generated thrust requests with a difference between 0 and $\Delta T_{max}$ for adjacent thrusts. Thrust requests were restricted to the range 0.2N to 1.2N, and the value of $\Delta T_{max}$ was varied from 0.1N to 1.0N in increments of 0.1N to produce 10 data sets.\n\nInverse model performance on synthetically generated thrust requests is shown in Figure \ref{fig:SynDataInvModelResults}. As the maximum step size increases, the average kinematic loss generally trends upwards for all three models: the inverse models can obtain gaits closer in the kinematic input space for smaller changes in thrust. An increase in thrust weight for our inverse models results in a lower thrust loss and a higher kinematic loss. 
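The synthetic data sets described above can be generated with a clamped random walk. This sketch assumes that representation; the function name and the seedable `rng` parameter are ours:

```python
import random

def synthetic_requests(n=100, t_min=0.2, t_max=1.2, dt_max=0.5, rng=random):
    # Sequential thrust requests whose adjacent differences stay within
    # dt_max and whose values stay within [t_min, t_max]
    requests = [rng.uniform(t_min, t_max)]
    for _ in range(n - 1):
        lo = max(t_min, requests[-1] - dt_max)
        hi = min(t_max, requests[-1] + dt_max)
        requests.append(rng.uniform(lo, hi))
    return requests
```

Sweeping `dt_max` from 0.1N to 1.0N reproduces the ten data sets used in the evaluation.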
Therefore, priorities on different objectives--thrust accuracy and kinematic smoothness--can be directly regulated by a control system through a change in weights.\n\n\\begin{figure}[t]\n \\centering\n\t\\includegraphics[width=1\\linewidth]{Synthetic_Data_Inverse_Model_Results.png}\n\t\\caption{\n Inverse model performance for synthetic data, measured by thrust accuracy loss (left) and kinematic smoothness loss (right). The dotted, dashed, and solid lines have thrust weights of 0.9, 0.95, and 1. Kinematic weights are set to $1 - w_{t}$.\n\t}\n\t\\label{fig:SynDataInvModelResults}\n\\end{figure}\n\nThe models do not optimize for kinematic smoothness in the cases where the models have a thrust weight of 1, yet Monte Carlo still exhibits a low kinematic smoothness loss across the tested input thrust data. Compared to HJPS and GPS, the MC model has a tendency to find closer gaits in the input space without implicitly embedding kinematic smoothness into the loss function. However, HJPS and GPS successfully obtain similar kinematic smoothness values to MC for higher kinematic smoothness weights.\n\nThe GPS and MC inverse models demonstrate a high thrust accuracy performance across all three weights, achieving an average thrust loss of less than 0.04N. While the HJPS inverse model performs similarly to the GPS model for higher thrust weights, this model shows a significant deterioration in thrust accuracy when a higher kinematic smoothness weight of 0.1 is applied. The higher kinematic smoothness weight increases the overall cost of moving to a new gait; in these cases, the HJPS inverse model reaches a gait where movement in any coordinate direction results in a higher increase in kinematic smoothness loss than reduction in thrust accuracy loss. At this point, the algorithm becomes trapped at a local minimum for all subsequent iterations unless the thrust accuracy weight is increased. 
Our GPS method strategically searches promising points in situations where the algorithm is potentially trapped in a local minimum, namely during polling failure. These additional searches enable the GPS inverse model to maintain a low thrust loss for lower thrust weights.\n\nTo evaluate our inverse models on more realistic thrust requests, a PID controller for vehicle position was used for thrust request generation. The PID controller outputs a target thrust to control a simulated thrust-to-position plant that models the UUV based on the translational equations of motion for a rigid body. We simulate UUV movement to 100 randomly generated positions between 0 and 10m. For each location, the controller commands a thrust between -1.2N and 1.2N for 15 flapping cycles to reach and then remain at the position. A sample of the change in positional values and corresponding thrust requests from the simulation is provided in Figure \ref{fig:PID_Pos_Thrust}. The kinematics generated by the inverse model accurately track the target trajectory in the given timeframe. This experiment uses small positional changes to simulate the start and stop conditions of the UUV: for intermediary movement, the UUV thrust is held constant at the maximum or minimum value.\n\n\begin{figure}[ht]\n \centering\n\t\includegraphics[width=1.0\linewidth]{PID_Pos_Thrust.png}\n\t\caption{\n Sample position (left) and thrust request (right) time histories from the simulated PID controller for position.\n\t}\n\t\label{fig:PID_Pos_Thrust}\n\end{figure}\n\nAs the provided experimental data does not cover the domain of gaits producing negative thrusts, the inverse model temporarily assumes symmetry in the gait landscape. A gait with a negative stroke-pitch offset, stroke amplitude, and pitch amplitude generates a thrust $-T$, where its positive counterpart generates a thrust $T$. 
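The temporary symmetry assumption amounts to a thin wrapper around the forward model. In this sketch the tuple layout and the `forward_model` callable (a stand-in for the LSTM) are our own conventions:

```python
def thrust_with_symmetry(gait, forward_model):
    # gait = (stroke_amp, pitch_amp, flap_freq, stroke_pitch_offset)
    stroke_amp, pitch_amp, freq, offset = gait
    if stroke_amp < 0:
        # Mirror a "negative" gait onto its positive counterpart and
        # flip the sign of the predicted thrust
        return -forward_model((-stroke_amp, -pitch_amp, freq, -offset))
    return forward_model(gait)
```

Once experimental negative-thrust data is collected, this wrapper can simply be dropped in favor of a retrained forward model.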
For this symmetry assumption to hold true, it would physically require a fin rotation of 180\u00b0 along the pitch axis during every transition between a positive and negative gait such that the leading edge of the fin faces the direction of movement. This temporary solution allows for the inverse model to access negative-thrust gaits for simulated data testing, and the kinematic landscape will be extended to incorporate negative-thrust gaits in the future.\n\nThe inverse models are evaluated on the 1500 thrust requests generated by the simulation. To emulate the onboard system, inverse models were run on a Raspberry Pi 4 Model B. Model performance across various weight settings is shown in Figure \\ref{fig:SimDataInvModelResults}. Table \\ref{tab:simInvModelResults} summarizes the performance of MC, HJPS, and GPS across weight settings: losses are averaged across weight settings, and the worst-case run time across all calls to each algorithm is provided.\n\n\\begin{figure}[t]\n \\centering\n\t\\includegraphics[width=1\\linewidth]{Simulated_Data_Inverse_Model_Results.png}\n\t\\caption{\n Inverse model performance for simulated data, measured by thrust accuracy loss (left) and kinematic smoothness loss (right). The kinematic smoothness weight was set to $1-w_t$.\n\t}\n\t\\label{fig:SimDataInvModelResults}\n\\end{figure}\n\n\\begin{table}[ht]\n\\footnotesize\n\\begin{center}\n \\caption{Inverse Model Simulation Performance Summary}\n \\label{tab:simInvModelResults}\n \\begin{tabular}{p{2.8 cm} p{1.2cm} p{1.2cm} p{1.2cm}}\n \\hline\n \\hline\n & MC & HJPS & GPS \\\\\n \\hline\n \\hline\n Thrust Loss (N) & 0.219 & 0.224 & 0.157 \\\\\n \\hline\n Kinematic Loss & 4.693 & 4.747 & 4.940 \\\\\n \\hline\n Overall Loss & 0.484 & 0.469 & 0.435 \\\\\n \\hline\n Maximum Time (s) & 0.300 & 0.393 & 0.467 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table}%\n\nOnboard predictions must meet the time constraints of 1 generated gait per flapping cycle. 
As the onboard UUV will not exceed a flapping frequency of 2 Hz, the inverse model must generate a gait within 0.5 seconds. All inverse models consistently run within 0.5 seconds on the Pi: the GPS algorithm has the slowest maximum run time of 0.467 seconds. Therefore, these inverse models are all viable for an onboard flapping fin UUV control system.\n\nAll algorithms were responsive to changes to performance metric weights: higher thrust weights resulted in lower thrust losses and higher kinematic losses. GPS reached the lowest mean overall loss of 0.435, as well as a mean thrust accuracy of less than 0.01N when a thrust weight of 1 is applied. While the inverse models demonstrate similar performances when minimizing kinematic loss, GPS consistently outperforms MC and HJPS in terms of minimizing thrust loss as seen in Figure \\ref{fig:SimDataInvModelResults}. GPS obtains a mean thrust accuracy loss of 0.157N averaged across the tested weights, while MC and HJPS obtain thrust accuracy losses of 0.219N and 0.224N. The inverse models demonstrate similar performances when minimizing $L_k$, obtaining mean kinematic smoothness losses of 4.693, 4.747, and 4.940 respectively.\n\n\\section{Conclusion and Deployment Strategy}\n\nOur work uses neural networks to embed a deep kinematic-thrust relationship in a flapping fin UUV control system with the goal of multi-objective optimization: more specifically, we design a search-based inverse model that invokes a gait-to-thrust forward model to select gaits for the controller.\n\nWe demonstrate that our LSTM forward model effectively learned the full space of kinematic-thrust mappings and accurately interpolated to unseen gaits; the DNN model, which does not use time-series data, was unable to match the performance of the LSTM. 
We implemented inverse models incorporating the LSTM forward model using three approaches: Monte Carlo sampling, Hooke-Jeeves Pattern Search, and our new Generalized Pattern Search method, which is an extension of HJPS. Upon integration with onboard hardware, the inverse models consistently ran within the time constraint of 0.5 seconds per iteration. When evaluated with simulated PID controller thrust requests, our GPS algorithm yielded the best performance for minimizing both thrust loss and overall loss across weight settings.\n\nAll three inverse models successfully made trade-offs between thrust accuracy and kinematic smoothness based on the applied performance metric weights. Our flexible inverse model framework enables future UUV control systems to incrementally adjust the emphasis placed on different measures of performance based on the current task and vehicle status. For example, the emphasis on thrust accuracy can be dynamically changed by the controller based on the degree of precise maneuvering required for the task at hand. Our inverse model framework also allows for the incorporation of additional performance metrics such as efficiency.\n\nPrior to deployment, we will collect experimental data for gaits producing negative thrusts and subsequently retrain the forward model based on this new data. Since our inverse model is already integrated with onboard hardware, the Raspberry Pi 4, the final deployment step consists of establishing onboard communication between the Raspberry Pi containing our inverse model and a PID-based microcontroller; at this point, physical experiments can be conducted using the inverse model. 
Following deployment, our inverse model will be extended to incorporate the propulsive efficiency performance metric, which is calculated as the product of output thrust and current velocity divided by power; inverse model performance for thrust accuracy and efficiency trade-offs will be evaluated once experimental data for positive flow speeds becomes available.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcicp b/data_all_eng_slimpj/shuffled/split2/finalzzcicp new file mode 100644 index 0000000000000000000000000000000000000000..cef8f673da896475467459d348e5f8d4373c51fe --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcicp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\IEEEPARstart{F}{uzzy} logic addresses reasoning about vague perceptions of true or false \\cite{zadeh1965fuzzy,zadeh1968probability,zadeh1988fuzzy}. Moreover, it represents very well our linguistic description about everyday dynamic processes \\cite{zadeh1976fuzzy, zadeh1975conceptI, zadeh1975conceptII, zadeh1975conceptIII, zadeh1996fuzzy,zadeh1983computational, efstathiou1979multiattribute}. Thus, the introduction of tense operators becomes a necessity \\cite{chajda2015tense, thiele1993fuzzy, moon2004fuzzy, cardenas2006sound, mukherjee2013fuzzy} because they are useful to represent the temporal indeterminism contained in some propositions \\cite{prior2003time,macfarlane2003future}. For example, the hypothesis ``The weather is very hot'' has its sense of truth revealed when we say ``The weather will always be very hot''. 
The quantifier ``always'' communicates the belief that the hypothesis will be confirmed in the future.\n\nThe task of developing a general theory for decision-making in a fuzzy environment was started in \cite{bellman1970decision}, but a temporal approach still needs to be discussed.\nFor example, do you prefer to receive 7 dollars today or to receive the same amount 5 years later?\nIf you prefer today, then you distinguished the same value at different points in time, even though you used vague reasoning about value changes, because we do not retrieve our equity value from memory to perform calculations. \n\nIn decision-making, we evaluate not only fuzzy goals but also time. The change caused by 7 dollars is strongly distinguishable between a short time and a distant future. This suggests the existence of temporal arguments in fuzzy logic, where ``little change in wealth today'' is different from ``little change in wealth 5 years later''. The linguistic value ``little'' is irrelevant in this problem, but the arguments ``today'' and ``5 years later'' are decisive in the judgment.\n\n\n\nThe proposal here is to connect different fuzzy sets through time attenuators and intensifiers. Hence, it is possible to simulate two figures of thought in hypotheses about dynamic systems: meiosis to argue for underestimated changes and hyperbole to argue for overestimated changes.\n\n\nThrough meiosis it is possible to formulate a concave expected change curve because it argues for minor changes in order to maximize the sense of truth. 
Through this figure of thought within fuzzy temporal logic, it becomes noticeable that hyperbolic discounting \cite{benzion1989discount, chapman1996temporal, chapman1995valuing, pender1996discount, redelmeier1993time, frederick2002time} and its subadditivity \cite{read2001time, read2003subadditive}, despite their numerous expected utility theory anomalies \cite{loewenstein1992anomalies,ainslie2016cardinal,loewenstein1989anomalies}, are an intuitive dynamic evaluation of wealth, where ergodicity is relevant to the problem \cite{peters2016evaluating, peters2011time}. \nSimilarly, a concave expected change curve in lotteries \cite{bernoulli2011exposition} makes certain outcomes more attractive than lotteries with uncertain outcomes.\n\nHyperbole follows a reasoning inverse to meiosis and produces a convex expected change curve. Similarly, risky losses in Prospect Theory have the same characteristic in its subjective value function \cite{tversky1986rational,tversky1992advances, kahneman2013prospect}. It is then shown here that risk seeking, a preference for risky losses over certain losses \cite{kahneman2013choices}, can be described in a fuzzy environment by an imprecise perception of small losses. \nThus, the indistinguishability between small negative changes leads to a preference for hopes when only these are perfectly distinguishable.\nOn the other hand, when the losses are high, the risk seeking disappears and a kind of ruin aversion prevails \cite{taleb2018skin}, where it is better to lose a lot and keep a little than to risk losing all means of survival after the lottery. \nIn addition, the loss aversion behavior, where people prefer not losing a given amount to winning the same amount \cite{kahneman2013prospect}, is interpreted through a disjunction between gain and loss hypotheses, leading to the conclusion that such behavior is also amount dependent. 
\n\n\nIn essence, all the behaviors analyzed here are examples of speculation in dynamic systems, where we evaluate hypotheses and commit to outcomes before they emerge.\nThis paper shows, by modeling Time Preference and Prospect Theory, that fuzzy temporal logic allows us to construct a rhetoric for intertemporal and probabilistic choices in a fuzzy environment. The first problem leads us to focus on values and time, while the second focuses on values and probabilities. However, if the future is uncertain, there is no reason for time and uncertainty to be studied as separate matters \cite{lu2010many}. \nIn addition, feelings about judgments are amount dependent, and fuzziness can be decisive in some situations. Therefore, time, uncertainty and fuzziness are concepts that can be properly studied by fuzzy temporal logic in order to elaborate the decision-making rhetoric.\n\n\n\section{Theoretical model}\nThis section provides a theoretical framework for the reader to understand how the figures of thought, meiosis and hyperbole, can be elaborated in fuzzy temporal logic. In short, we discuss the need for a many-valued temporal logic and the existence of different temporal propositions with similar goals over time; finally, we show how to perform the rhetorical development that judges between two different hypotheses about the future. \n\n\subsection{Temporal and many-valued logic}\n\label{TIL}\n Usually we make decisions about dynamic processes whose states are unknown in the future. An amusing example can be found in one of Aesop's fables, where the Grasshopper and the Ant have different outlooks on an intertemporal decision dilemma \cite{perry1965babrius}. In short, while the Ant is helping to lay up food for the winter, the Grasshopper is enjoying the summer without worrying about the food stock. \n \nThe narrative teaches about hard work, collaboration, and planning by presenting temporal propositions. 
These propositions have invariant meaning in time, but sometimes they are true and sometimes false, yet never simultaneously true and false \cite{ohrstrom2007temporal}. This property can be noted in the statement:\n\[\n\begin{array}{l}\nD_1 = \text{``We have got plenty of food at present'' \cite{perry1965babrius}.} \n\end{array}\n\]\nAlthough $D_1$ has constant meaning over time, its logical value is not constant. According to the fable, this statement is true in the summer, but it is false in the winter, hence there is a need to stock food.\n \n \nIf the logical value varies in time according to random circumstances, such as the weather across the seasons, how can we make inferences about the future? There seems to be a natural uncertainty that sets a haze over the vision of what will happen. For instance, consider the following statements about the Grasshopper: \n\[\centering\n\begin{array}{rl}\nD_2 = \text{``The Grasshopper stored a lot of food }\\ \n \text{during the summer'';}\\\n D_3 = \text{``The Grasshopper stores food for winter''.}\n\end{array}\n\] \nAccording to the fable, we know that the statement $D_2$ is false, \nbut at the end of the fable, when winter comes, \nthe Grasshopper says ``It is best to prepare for days of need'' \cite{perry1965babrius}. In this way, the truth value of $D_3$ is ambiguous. We cannot say with certainty that it is false, but \nwe do not know how true it can be. Thus, we can only propose hypotheses to speculate about the Grasshopper's behavior.\n\n\n A hypothesis is a proposition (or a group of propositions) provisionally anticipated as an explanation of facts, behaviors, or natural phenomena that must be later verified by deduction or experience. They should always be spoken or written in the present tense because they refer to the research being conducted. In the scientific method, this is done independently of whether they are true or false and, depending on rigor, no logical value is attributed to them. 
However, in everyday language this rigor does not exist. Hypotheses are guesses that argue for the decision-making before the facts are verified, so they carry some degree of belief about the logical value: a sense of contingency that supports a sound practical judgment concerning future events, actions, or whatever is at stake.\n\n \n\nSince there is no rigor in everyday speculations, different propositions may arise about the same fact. If they are analyzed by binary logic, then we may have unnecessary redundancy that can lead to contradictions.\n However, the redundancy of propositions is not a problem within the speculation process, which suggests a many-valued temporal logic.\n \n We can discuss the limitations of a binary temporal logic by speculating about the Grasshopper's behavior. For example, is it possible to guarantee that the two hypotheses below are completely true simultaneously?\n\[\n\begin{array}{l}\n\Theta = \text{``The Grasshopper stores a lot of food'';}\\\n\theta = \text{``The Grasshopper stores little food''.}\n\end{array}\n\]\nThe hypotheses $\Theta$ and $\theta$ propose changes to different states. If $S_0$ is the initial state for stored food, the new state after storing a lot of food $M$ is $S_{\Theta}=S_0+M$. Analogously, the next state after storing little food is \n$S_{\theta}=S_0+m$ for $m<M$, so that\n\[ S_\Theta > S_\theta. \]\nTherefore, affirming both as true leads to a contradiction because the same subject cannot simultaneously produce two different results on the same object. However, in a speculation about the Grasshopper's behavior, none of the propositions can be discarded.\n\n\n\n\n\nEvaluating by fuzzy logic, the linguistic variable ``stored food'' has values ``a lot of'' and ``little'' in the propositions $\Theta$ and $\theta$. According to Bellman and Zadeh's model \cite{bellman1970decision}, these linguistic values are the fuzzy constraints for inputs $M$ and $m$ about the food supply. 
Meanwhile, the target states $S_\theta$ and $S_\Theta$ can be the fuzzy goals.\n\nAn alternative but similar development can be done through changes, which is in accordance with human psychophysical reality \cite{kahneman2013choices}. Thus, in this paper, the goals are the factors $ S_{\Theta}\/S_0= 1+X$ and $S_{\theta}\/S_0= 1+x$, where $X=M\/S_0$ and $x=m\/S_0$ are changes. For example, in \cite{mukherjee2017loss} the respondents were asked how they would feel gaining (or losing) a certain amount. It was noted that the emotional effects gradually increased as the amounts grew. Therefore, the intensities of gains and losses are easily ordered by our perception, $x < X$. \n\nThe hypotheses can be affirmed in the future through the temporal operators $F$ (``sometime in the future'') and $G$ (``always in the future''); for example, $F\Theta$ = ``The Grasshopper will sometime store a lot of food'' and $G\theta$ = ``The Grasshopper will always store little food''.\n\n\n\nRegarding the sense of certainty, $F\Theta$ is known as the weak operator because $\Theta$ is true only once in the future, while $G\theta$ is known as the strong operator because it is true in all future periods.\nIf $\Theta$ does not always come true, then we can investigate its sense of truth through the affirmative: \n\[\n\begin{array}{rl}\nGF\Theta = \text{``The Grasshopper will frequently store}\\\n\text{ a lot of food''.}\\\n\end{array}\n\]\nHere the quantifier ``frequently'' better argues for the sense of truth of $\Theta$ because we have undefined repetitions in the same period in which $G\theta$ is true.\n\n \n\nThe frequency with which the propositions $\Theta$ and $\theta$ are true and the changes proposed by them determine the outcomes over time. \nThe affirmative $GF\Theta$ communicates that the Grasshopper will frequently produce a strong change in the stored food stock (change factor $1+X$). On the other hand, $G\theta$ proposes a small change factor, $1+x$, but continuously over time. So what is the relation between $X$ and $x$ that generates a similarity of states between the two hypotheses over time? The relation that constructs this similarity can be obtained by the time average. 
Therefore, let us consider $\\tau(t)$ as the total time where $GF\\Theta$ is true and $t$ as the observation time. If $\\Theta$ is true with a frequency given by\n\\begin{equation}\n\\lim\\limits_{t\\to \\infty} \\frac{\\tau(t)}{t}=s,\n\\end{equation} \nthen, the relation between change factors $1+x =(1+X)^s$, estimated by time average \\cite{peters2016evaluating}, indicates that the sentences $GF\\Theta$ and $G\\theta$ have similar goals in the long run. This similarity is denoted in this work by\n\n\\begin{equation}\nGF\\Theta \\sim G {\\theta}.\n\\end{equation} \n\n\nThe sense of truth for the sentence $GF\\Theta$ is quantified in the parameter $s$. It can be a stationary probability when\n $t$ is big enough, but we do not have much time to calculate this probability in practice.\nIn this way, we assume that the sense of truth is an imprecise suggestion (intuition) for the time probability.\nFigure \\ref{fig1} presents some adverbs of frequency that can suggest the sense of truth in future sentences.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.8]{Gfrequency}\n\t\t\\caption{Adverbs of frequency that can suggest the sense of truth of $GF$. The quantifier ``always'' indicates certainty, while ``never'' indicates impossibility.}\t\n\t\\label{fig1}\n\\end{figure}\n\n\n\n\nThe axiomatic system of temporal logic proposes that \n $G {\\theta} \\Rightarrow GN {\\theta}$, where $N\\theta$ stands for $\\theta$ is true at the next instant. Therefore, the similarity $GF\\Theta \\sim G {\\theta}$ can be written by\n\\begin{equation}\n F\\Theta \\sim N {\\theta},\n\\end{equation}\nwhen $1+x \\approx (1+X)^s$. 
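The similarity $1+x \approx (1+X)^s$ can be illustrated numerically. The sketch below (with arbitrary values for $X$ and $s$) simulates a long run in which $\Theta$ comes true with frequency $s$ and compares the time-average growth with that of the sure small change $x$:

```python
import math
import random

# Simulate GFΘ: a large change X occurs in a fraction s of the periods.
# Its long-run growth should match Gθ: the sure change x = (1+X)**s - 1
# applied every period. X and s are arbitrary example values.
random.seed(0)
X, s, T = 0.5, 0.3, 200_000

hits = sum(random.random() < s for _ in range(T))  # periods where Θ is true
x = (1 + X) ** s - 1                               # meiotic sure change

growth_F = hits * math.log(1 + X) / T  # per-period log growth under GFΘ
growth_G = math.log(1 + x)             # per-period log growth under Gθ
```

With `T` large, `growth_F` and `growth_G` agree up to sampling noise, which is the sense in which $GF\Theta \sim G\theta$.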
\nThus, the statement ``the Grasshopper will sometime store a lot of food'', which has a change factor $1+X$, is similar to the statement ``the Grasshopper will store little food at the next moment'', which has a change factor $(1+X)^s$.\n\n\n\n\subsection{Rhetoric: Meiosis and Hyperbole}\n\label{MH}\nIn general, different hypotheses do not have similar changes in the future. For this reason, a rhetorical development is required to judge between them. In this section, two figures of thought are presented, meiosis and hyperbole, which can be used to compare hypotheses with different changes and senses of truth. \n\nStill using Aesop's fable, imagine that we want to compare the Ant's and the Grasshopper's performance with the following hypotheses:\n\[\n\begin{array}{l}\n\phi = \text{``The Ant stores little food'';}\\\n\Theta = \text{``The Grasshopper stores a lot of food''.}\\\n\end{array}\n\]\nAssume that the Ant produces a change $y$ in its food stock while the Grasshopper produces a change $X$. If we think that the Ant is more diligent in its work than the Grasshopper, i.e., the sense of truth for $\phi$ is maximum while the sense of truth for $\Theta$ is ambiguous, then we can affirm $N\phi$ and $F\Theta$ in order to develop the following argumentation process:\n\begin{enumerate}\n\item elaborate a proposition $\theta$, similar to $\Theta$, which proposes a lower outcome with a change $x$ (for example, $\theta $ = ``The Grasshopper stores little food'');\n\item express $\theta$ with maximum certainty in the future, $N\theta$ = ``The Grasshopper will store little food at the next moment'';\n\item calculate the relation $1+x \approx (1+X)^s$ to match the average changes between $N\theta$ and $F\Theta$ in order to obtain the similarity $F\Theta \sim N {\theta}$, where $X>x$ and $s\in [0,1]$;\n\item finally, judge the affirmations $N\phi$ and $N\theta$ through fuzzy logic. 
In this specific problem we have\n\[N\theta \text{ or } N\phi = \max\left\{\mu \left((1+X)^s-1\right),\mu(y)\right\}.\]\n\end{enumerate}\n\n\nThe above argument uses meiosis, and the upper part of Figure \ref{figMH} summarizes this procedure. \nIn linguistic meiosis, the meaning of something is reduced to simultaneously increase something else in its place.\nIn the case above, proposing $\theta$ means reducing the change in food stored by the Grasshopper.\nAt the same time, this suggests greater certainty because it makes the process more feasible in the future.\nHowever, it is only a figure of thought to make an easy comparison, because judging two sentences with the same sense of truth, looking only at the change goals, is much simpler.\n\n The meiosis for the Grasshopper's case has a membership composite function given by\n\[ \mu_{\text{ Grasshopper's goal}}= \mu \circ \mu_{\text{meiotic change}}(X)=\mu \left((1+X)^s-1\right).\] \nIn general, $\mu_{\text{ Grasshopper's goal}}$ refers to the fuzzy goal ``the bigger the better is the change $(1+X)^s-1$ at the next moment''. In this way, we can evaluate it for decision-making in real time. \n\n\begin{figure}[!ht]\n\t\centering\n\t\begin{tikzpicture}[node distance=2cm and 1cm,>=stealth',auto, every place\/.style={draw}]\n \n \node [state,initial text=,accepting by double] (1) [] {$\begin{array}{l}\n F\Theta \\\n \\\n{\color{blue} F\Phi}\n\end{array}$}; \n \node [state,initial text=,accepting by double] (0) [right=of 1] {$\begin{array}{l}\n{\color{blue} N\theta} \\\n\\\n N\phi\n\end{array}$};\n \path[->] (1) edge [bend left] node {$\underbrace{(1+X)^s}_{\text{Meiosis}}$} (0);\n \path[->] (0) edge [bend left] node {$\overbrace{(1+y)^\frac{1}{s}}^{\text{Hyperbole}}$} (1); \n\end{tikzpicture}\n\t\t\caption{Diagram representing the meiosis and hyperbole procedure for the judgment of hypotheses. 
The blue sentences $F\Phi$ and $N\theta$ are the figures of thought.}\t \n\t\label{figMH}\n\end{figure}\n\n\n \n\n\n\n\nOn the other hand, there is an inverse process to meiosis that is called hyperbole. Basically, it exaggerates a change in the outcome to reduce its sense of truth. In everyday language, such statements are commonplace, such as ``the bag weighed a ton''.\nFrom this statement, we realize that the bag is really heavy. However, is there a bag really weighing a ton? This is just a figure of speech.\n\n\nIn order to understand how to judge two hypotheses through a process of hyperbolic argumentation, consider again that we can make the future statements $F\Theta$ and $N\phi$ pass through the following steps:\n\begin{enumerate}\n\item elaborate a proposition $\Phi$, similar to $\phi$, which proposes a larger outcome with a change $Y$ (for instance, $\Phi $ = ``The Ant stores a lot of food'');\n\item affirm $\Phi$ in the future with the same sense of truth as the proposition $\Theta$, that is, $F\Phi$ = ``The Ant will sometime store a lot of food'';\n\item calculate the relation $(1+y)^{\frac{1}{s}}\approx 1+Y$ to match the average changes between $N\phi$ and $F\Phi$ in order to obtain the similarity $N\phi \sim F\Phi$, where $Y>y$ and $s\in [0,1]$; \n\item finally, judge the fuzzy change goals of the affirmatives $F\Phi$ and $F\Theta$. In this specific problem we have\n\[F\Theta \text{ or } F\Phi = \max\left\{\mu(X), \mu\left((1+y)^\frac{1}{s}-1\right)\right\}.\]\n\end{enumerate}\n\n\n\nThe hyperbole for the Ant's case has a membership composite function given by\n\[ \mu_{\text{ Ant's goal}}= \mu \circ \mu_{\text{hyperbolic change}}(y)=\mu \left((1+y)^\frac{1}{s} -1\right).\] \nThus, $\mu_{\text{ Ant's goal}}$ refers to the fuzzy goal ``the bigger the better is the change $ (1+y)^\frac{1}{s} -1 $ sometime in the future''. 
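Both argumentation procedures can be sketched in a few lines. The membership function `mu` below is an illustrative saturating choice for ``the bigger the better'', not one prescribed by the text; `X`, `s`, and `y` take arbitrary example values:

```python
def mu(change: float) -> float:
    # Illustrative membership for "the bigger the better is the change",
    # saturating at 1 (an assumed shape, not prescribed by the text).
    return max(0.0, min(1.0, change))

def judge_by_meiosis(X: float, s: float, y: float) -> str:
    # Reduce Θ's change to (1+X)**s - 1, then judge Nθ or Nφ.
    return "Theta" if mu((1 + X) ** s - 1) >= mu(y) else "phi"

def judge_by_hyperbole(X: float, s: float, y: float) -> str:
    # Inflate φ's change to (1+y)**(1/s) - 1, then judge FΘ or FΦ.
    return "Theta" if mu(X) >= mu((1 + y) ** (1 / s) - 1) else "phi"
```

Since $(1+X)^s \geq 1+y$ exactly when $1+X \geq (1+y)^{1/s}$, both functions pick the same winner whenever the memberships do not saturate.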
\n\n\n The bottom part of Figure \ref{figMH} summarizes the hyperbolic argumentation procedure. Note that the arguments by meiosis and hyperbole lead to the same conclusion. Therefore, they may be just two ways to solve the same problem. However, in section \ref{PT}, where Prospect Theory is evaluated, there may be a preference for one of the methods according to the frame in which the hypotheses are inserted.\n\n\n\n\n\section{Time Preference}\n\label{PTemp}\n\nTime preference is the valuation placed on receiving a good at an earlier date compared with receiving it at a later date. A typical situation is choosing to receive a monetary amount $m$ after a brief period (a day or an hour) or to receive $M>m$ at a distant time (after some months or years).\n\n\nThe time preference choice is a problem of logic about the future and, in order to model it, consider the following hypotheses:\n\n\begin{itemize}\n\item $\Theta_m =$ ``to receive $m$'' represents the receipt of the amount $m$ in a short period, $t_m=t_0+\delta t$;\n\item $\Theta_M =$ ``to receive $M$'' represents the receipt of the amount $M$ over a longer time horizon, $t_M=t_0+\Delta t$. \n\end{itemize}\nEach proposition has a change factor for the individual's wealth. The proposition $\Theta_M$ has the change factor $(1+M\/W_0)$, while $\Theta_m$ has the change factor $(1+m\/W_0)$. Now, we perform the meiosis procedure for both hypotheses, reducing the changes and maximizing the sense of truth:\n\begin{itemize}\n\item $N\theta_m =$ ``to receive an amount less than $m$ at the next moment''. This affirmative proposes a change factor $1+x_m$ in the individual's wealth;\n\item $N\theta_M =$ ``to receive an amount less than $M$ at the next moment''. Similarly, this affirmative proposes a change factor $1+x_M$.\n\end{itemize}\n\nThe senses of truth for the hypotheses $\Theta_M$ and $\Theta_m$ are revealed when they are affirmed in the future. 
Therefore, we have the following similarities:\n\begin{itemize}\n\item $F\Theta_M \sim N\theta_M$, if $(1+x_M)\approx \left(1+\frac{M}{W_0}\right)^{s_M}$ ;\n\item $F\Theta_m \sim N\theta_m$, if $(1+x_m)\approx \left(1+\frac{m}{W_0}\right)^{s_m}$ .\n\end{itemize}\nHere $s_M$ and $s_m$ are the senses of truth regarding the receipt of the values $M$ and $m$.\nIn this problem, they cannot be time probabilities since the probabilistic investigation is not convenient in this case. However, intuitions about the realization of the hypotheses $\Theta_M$ and $\Theta_m$ are feasible for individuals and they should be represented.\n\nNow suppose, without loss of generality, that $M$ is large enough so that the individual prefers to receive it in the distant future. If we consider $n=\Delta t\/\delta t$ periods in which $n$ attempts to receive $m$ until $M$'s receipt date are allowed, then we have\n\begin{eqnarray}\n\nonumber (1+x_M) &>& (1+x_m)^n \\\n\Rightarrow\left(1+\frac{M}{W_0} \right)^{s_M} &>& \left(1+\frac{m}{W_0} \right)^{ns_m}.\n\label{BhomBawerk}\n\end{eqnarray}\nJudging the two hypotheses in the future by fuzzy logic, the ``or'' operation between change goals is indicated to finalize the meiosis procedure, \n\begin{eqnarray}\n\nonumber \mu(x_M) &=& \max \left\{\mu(x_M), \mu \left( (1+x_m)^n-1 \right)\right\}\\\n&=& \mu\left(\left(1+\frac{M}{W_0} \right)^{s_M}-1\right).\n\end{eqnarray}\n\nIn general, the time preference solution is presented through a discount function. In order to use this strategy, it is necessary to develop the same form on both sides of the inequality \ref{BhomBawerk}. 
For this, there is a value $\kappa$ such that $\kappa s_M > m$, for which we can write\n\begin{equation}\n\left(1+\frac{M}{W_0} \right)^{s_M} = \left(1+\frac{\kappa {s_M}}{W_0} \right)^{ns_m}>\left(1+\frac{m}{W_0} \right)^{ns_m}.\n\label{Passagem}\n\end{equation}\nThe discount function undoes exactly the change caused by the proposition $\Theta_M$, that is,\n\begin{equation}\n \frac{1}{{\left(1+\frac{M}{W_0} \right)}} =\left(1+\frac{\kappa s_M}{W_0} \right)^{-\frac{s_m}{s_M}n} .\n \label{DescontoHiperbolico}\n\end{equation}\n\n\nEquation \ref{DescontoHiperbolico} describes the hyperbolic discount, the most well documented empirical observation of discounted utility \cite{frederick2002time}. When mathematical functions are explicitly fitted to experimental data, a hyperbolic shape fits the data better than the exponential form \cite{kirby1997bidding, kirby1995modeling, myerson1995discounting,rachlin1991subjective}. Among the functions proposed for the adjustment of experimental data, the discount function proposed by Benhabib, Bisin and Schotter \cite{benhabib2004hyperbolic}\n\[e_h^{-\rho n}\equiv \left(1-h \rho n \right)^{\frac{1}{h}}\]\nallows for greater flexibility of fit for exponential, hyperbolic, and quasi-hyperbolic discounting \cite{laibson1997golden} (see Figure \ref{figDiscounting}). In order to obtain it, we must reparametrize equation \ref{DescontoHiperbolico} by setting \n\begin{eqnarray}\n\frac{1}{h} &=& -\frac{s_m}{s_M}n, \label{eqh} \\\n\rho &=& \frac{\kappa s_m}{W_0}.\n\end{eqnarray}\n\nThe parameter $h$ denotes hyperbolicity and it gives the curve shape over time. For instance, $e_h^{-\rho x}$ equals the exponential function $e^{-\rho x}$ when $h \to 0^-$. This means that there is plenty of time for possible trials with a higher sense of truth until the date of the great reward. On the other hand, $h\ll 0$ indicates time shortage for trials. 
In these cases, only the first periods have strong declines in the discount function. In short, equation \ref{eqh} shows that the senses of truth and the time between rewards determine the value of $h$.\n\n\n Furthermore, the discount rate $\rho$ quantifies the preference for goods and it is influenced by individual states of scarcity and abundance of goods. For instance, let us consider an individual called Bob. If an object is scarce for him (small $W_0$), then he places a higher preference on it (large $\rho$). Analogously, if $W_0$ represents an abundance state for him, he has a lower preference (small $\rho$). This may cause great variability in experiments because the wealth distribution follows a power law \cite{levy1997new,druagulescu2001exponential,sinha2006evidence,klass2006forbes}. This means that $W_0$ can vary abruptly from one individual to another in the same intertemporal arbitrage experiment. \n\n\begin{figure}[!ht]\n\t\centering\n\t\includegraphics[scale=0.65]{Discount}\n\t\t\caption{Discount function $e_h^{-\rho n}$ versus number of delayed periods: the dashed black curve is the exponential discounting for $\rho = 0.005$; the $\circ$-blue curve is quasi-hyperbolic discounting for $\rho = 0.7$ and $h = -3$; \nand the $\ast$-red and cyan curves are hyperbolic discounting for $\rho_1 = 0.0175$ and $h_1 = -3$, and $\rho_2 = 0.05$ and $h_2 = -5$, respectively. }\t\n\t\label{figDiscounting}\n\end{figure} \n\n\subsection{Discussion about time preference behaviors}\n\n\nDaniel Read pointed out that common evidence in time preference is ``subadditive discounting'', in other words, the discounting over a delay is greater when it is divided into subintervals than when it is left undivided \cite{read2001time}. 
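The discount function $e_h^{-\rho n}=(1-h\rho n)^{1/h}$ and its subadditivity for $h<0$ can be checked with a short numerical sketch (parameter values are arbitrary examples):

```python
import math

def e_h(z: float, h: float) -> float:
    """Generalized exponential e_h^{-z} = (1 - h*z)**(1/h)."""
    return (1.0 - h * z) ** (1.0 / h)

h = -3.0        # h < 0: hyperbolic regime
x, y = 0.1, 0.2

# Discounting a delay split into subintervals vs. the undivided delay:
divided = e_h(x, h) * e_h(y, h)
undivided = e_h(x + y, h)   # divided < undivided when h < 0
```

The remaining value after the divided delay is smaller, i.e. the total discounting is greater, and `e_h` approaches the ordinary exponential `exp(-z)` as `h` tends to zero from below.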
For example, in \\cite{takahashi2006time,takahashi2008psychophysics} it has been argued that abstinent drug addicts may more readily relapse into addiction if an abstinence period is presented as a series of shorter periods, rather than a undivided long period. This property is present in the function $e_h^{-x}$, because if $h<0$, then\n\\[e_h^{-x} \\; e_h^{-y}=e_h^{-x-y+h xy}0.\\]\n\nThe hyperbolicity, due to intertemporal arbitrage, is always negative, $\\frac{1}{h} = -\\frac{s_m}{s_M}n$. Therefore, we have a subadditivity as a mandatory property in the time preference for positive awards. \n\n\n\nHowever, the main experimentally observed behavior, hereinafter referred to as ``time effect'', concerns the observation that the discount rate will vary inversely with the time period to be waited. Here, we can verify this effect as a consequence of the subadditivity found in the function $e_h^{-\\rho n}$. After all, when we want to find the discount rate through a given discount $D$ after $n$ periods, then we calculate $-\\frac{1}{n}\\ln D $. For example, if we have an exponential discounting $e^{-\\rho n}$, then $\\rho= -\\frac{1}{n}\\ln e^{-\\rho n} $. Thus, using the subadditive property we can develop \n\\begin{eqnarray}\n\\nonumber \\left(e_h^{-\\rho}\\right)^n\\leq e_h^{-\\rho n} & \\Rightarrow & \n-\\frac{1}{n}\\ln \\left(e_h^{-\\rho}\\right)^n\\geq -\\frac{1}{n}\\ln e_h^{-\\rho n} \\\\\n\\nonumber & \\Rightarrow & \\left\\langle -\\ln e_h^{-\\rho} \\right\\rangle \\geq \\left\\langle -\\frac{1}{n}\\ln e_h^{-\\rho n} \\right\\rangle .\n\\end{eqnarray}\nTherefore, if $h$ does not tend to zero from the left side, then the average discount rate over shorter time is higher than the average discount rate over longer time horizons. \nFor example, in \\cite{thaler1981some} it was asked to respondents how much money they would require to make waiting one month, one year and ten years just as attractive as getting the \\$ 250 now. 
The median responses (US \$ 300, US \$ 400 and US \$ 1000) had an average (annual) discount rate of 219\% over one month, 120\% over one year and 19\% over ten years. Other experiments presented a similar pattern \cite{benzion1989discount, chapman1996temporal, chapman1995valuing, redelmeier1993time, pender1996discount}. Therefore, the time effect is a consequence of subadditivity when there is not plenty of time for trials of one of the hypotheses. \n\n\n\nA second behavior, referred to as the ``magnitude effect'', is also a consequence of subadditivity. The reason for this is that the magnitude and time effects are mathematically similar, because the discount rate is $\rho=\kappa s_m\/W_0$ and $ \kappa $ grows for large values of $ M $ (see equation \ref{Passagem}). In order to understand the similarity, note that the function $e_h^{-\rho n}$ varies with $ n \rho $. If we fix the value of $ n $, for example $ n = 1 $, and vary only the rate $ \rho = r \rho_0 $, where $ \rho_0 $ is constant and $ r>1 $ is a multiplier which grows with $ M $, then we have the function $e_h^{-r\rho_0}$ analogous to $e_h^{-\rho n}$. Therefore, the magnitude effect produced by $r$, similarly to the time effect, results in\n\[\left\langle -\ln e_h^{-\rho_0} \right\rangle \geq \left\langle -\frac{1}{r}\ln e_h^{-\rho_0 r} \right\rangle .\]\n\nFor example, in Thaler's investigation \cite{thaler1981some}, the respondents preferred, on average, \$ 30 in 3 months rather than \$ 15 now, \$ 300 in 3 months rather than \$ 250 now, and \$ 3500 in 3 months rather than \$ 3000 now, where the discount rates are 277\%, 73\% and 62\%, respectively. 
Other experiments have found similar results \cite{ainslie1983motives, benzion1989discount,green1994temporal,holcomb1992another,kirby1997bidding, kirby1995modeling, kirby1999heroin, loewenstein1987anticipation,raineri1993effect,shelley1993outcome, green1997rate}.\n\n\nAnother experimentally observed behavior is the ``preference reversal''. Initially, when individuals are asked to choose between one apple today and two apples tomorrow, they may be tempted to prefer only one apple today. However, when the same options are long delayed, for example, choosing between one apple in one year and two apples in one year plus one day, then adding one day to receive two apples becomes acceptable \cite{thaler1981some}. \n\nThe preference reversal shows how we evaluate value and time in the same hypothesis. For example, if $M_1 < M_2$ are the two rewards, the choice of the larger delayed reward $M_2$ corresponds to\n\begin{equation}\n\left( 1+\frac{M_1}{W_0}\right)^{s_1} < \left( 1+\frac{M_2}{W_0}\right)^{s_2},\n\label{today}\n\end{equation}\nwhereas, when the smaller reward can be attempted twice for each attempt at $M_2$, the choice of the immediate reward $M_1$ corresponds to\n\begin{equation}\n\left( 1+\frac{M_1}{W_0}\right)^{2s_1} > \left( 1+\frac{M_2}{W_0}\right)^{s_2}.\n\label{Twice}\n\end{equation}\n\n\nOn the other hand, when one has to choose between the hypotheses $ H_1=$``to receive $M_1$ in $n$ days'' and $ H_2=$ ``to receive $M_2$ in $n+1$ days'', then a similar judgment can be made by evaluating the repeated execution of the proposed actions over time. Since the waiting time to receive $ M_1 $ is shorter, we can realize that the number of attempts to receive $M_1$ will be greater in the future ($(n+1)\/n$ trials to receive $M_1$ for each trial to receive $M_2$). By fuzzy temporal logic, the choice between $ H_1$ and $ H_2$ depends on the following result: \n\[ \max \left\{\left( 1+\frac{M_1}{W_0}\right)^{\frac{n+1}{n} s_1},\left( 1+\frac{M_2}{W_0}\right)^{s_2}\right\}.\]\nWhen $n=1$, it will be preferable to receive the reward $M_1$ (see equation \ref{Twice}). This can also happen for other small values of $n$, for example, $n$ equal to 2 or 3. \nHowever, when $n$ is large enough, the relation $(n+1)\/n$ tends to 1 and makes $M_2$ a preferable reward (see equation \ref{today}). 
Thus, the preference between the rewards is reversed when they are shifted in time. In similar experiments, this behavior can be observed in humans \cite{kirby1995preference,green1994temporal,millar1984self,solnick1980experimental} and in pigeons \cite{ainslie1981preference,green1981preference}.\n\n\n\nHence, the time and magnitude effects on the discount rates, preference reversal and subadditivity are strong empirical evidence for the application of fuzzy temporal logic in intertemporal choices. \n\n\n\n\n\n\n \n\n\section{Lotteries}\n\label{PT}\nIn a more realistic descriptive analysis of human behavior consistent with psychophysics, the subjective value of a lottery must be related to the change in wealth \cite{kahneman2013choices}. However, changes after lotteries have incomplete information because we avoid calculating them using equity values, premiums and probabilities. Therefore, fuzzy sets are good candidates for representing these hypothetical changes. In addition, more realistic expectations should consider the evolution of outcomes over time \cite{peters2016evaluating}. Thus, the changes may have their expected values attenuated or intensified by the sense of truth (intuitive time probability).\n\n \n\subsection{Meiosis and risk aversion}\n In order to model lotteries, consider the hypothesis $\Theta_2$ = ``to win $M $'' and a probability $p \in [0,1]$. If $M>0$, then by fuzzy temporal logic we can have:\n\begin{itemize}\n\item $l_1$ = to win $Mp$ (at the next moment);\n\item $l_2$ = to win $M$ (at the next moment) with probability $p$.\n\end{itemize}\nThe expression ``at the next moment'' does not appear in the experiments, but it is implicit because the low waiting time for the two lotteries seems to be the same. Moreover, note that both lotteries are equivalent in the ensemble average ($E=pM$ for the two lotteries). \n\nAgain, let us consider an individual called Bob who may repeat similar lotteries in the future. 
Therefore, this repetition can affect his decision \\cite{peters2016evaluating}. If the lottery $l_2$ is repeated several times until he wins $M$, then $l_2$ is similar to affirming\n\\[F\\Theta_2=\\text{ ``will sometime win } M \\text{'',}\\]\nwhere $p$ is the time probability (or sense of truth for the lottery). Thus, if we perform the meiotic argumentation procedure (see section \\ref{MH}), then the similar sentence which has equivalent outcomes to $F\\Theta_2$ is\n\\[N\\theta_2=\\text{ ``will win } W_0\\left(1+\\frac{M}{W_0}\\right)^p - W_0 \\text{ at the next moment'',}\\]\nwhich is a future affirmation of the hypothesis \\[\\theta_2= \\text{ ``to win } W_0\\left(1+\\frac{M}{W_0}\\right)^p - W_0 \\text{''}.\\]\n\nNow, the difference between $N\\theta_2$ and $Nl_1$ lies only in the value of the award. Although values are reported in lotteries, variations in wealth are unknown because the cognitive effort to perform the division $M\/W_0$ is avoided. Thus, taking $x\\geq 0$ such that $x=M\/W_0$, we can only evaluate changes, \n\\begin{eqnarray}\n\\nonumber N l_1 \\text{ or } N\\theta_2 &=& \\max \\{\\mu(px),\\mu\\left((1+x)^p-1\\right)\\}\\\\\n&=& \\mu (px) \\text{ for all } x\\geq 0.\n\\label{MeioseComp}\n\\end{eqnarray}\n\n\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{PT1}\n\t\t\\caption{Function ${\\cal M}_p^+(x)$ representing meiosis. The blue dashed line $x\/2$ is tangent to the $\\circ$-blue curve given by ${\\cal M}_{1\/2}^+(x)$. Analogously, the red dashed line $x\/10$ is tangent to the $\\ast$-red curve given by ${\\cal M}_{1\/10}^+(x)$. Note that in the vicinity of zero the curves are close, so this is a region of low distinguishability for the changes.}\t\n\t\\label{figPT1}\n\\end{figure} \n\nThe line $px$ is tangent to the concave curve $(1+x)^p-1$ at the point $x=0$, which results in $px \\geq (1+x)^p-1$ for $x\\geq 0$.
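The inequality between the line $px$ and the concave curve can be checked numerically; the grids of $p$ and $x$ values below are arbitrary assumptions for the sketch.

```python
def meiosis_change(x, p):
    # expected positive change of the meiotic sentence: (1 + x)**p - 1
    return (1 + x) ** p - 1

# the tangent line p*x stays above the concave curve for every x >= 0
for p in (0.1, 0.5, 0.9):
    for i in range(501):
        x = i / 100.0                      # x in [0, 5]
        assert p * x >= meiosis_change(x, p)
```

Equality holds only at the tangency point $x=0$, which is why small changes near zero are hard to distinguish.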
An example can be seen in Figure \\ref{figPT1}, where the dashed blue line $x\/2$ is above the $\\circ$-blue curve $(1+x)^\\frac{1}{2}-1$. A similar illustration is given for $p=1\/10$. Thus, the lottery $l_1$ is preferable for any values of $M$ and $p$, because the line $px$ is always above the curve $(1+x)^p-1$. This result is consistent with the respondents' choices in the Kahneman and Tversky experiments \\cite{kahneman2013prospect}. Thus, the concave curve representing the expected positive change at the next moment can be described by\n\\begin{eqnarray}\n\\nonumber {\\cal M}^+_p(x)&=& (1+x)^p -1 \\\\\n&=& p\\ \\text{ln}_p (1+x) \\text{ for all } x\\geq 0.\n\\label{MeioseMais}\n\\end{eqnarray}\nThe function $\\text{ln}_p (x)\\equiv (x^p -1)\/p$ is defined here as in \\cite{nivanen2003generalized} and is commonly used in nonextensive statistics \\cite{tsallis1988possible,tsallis1999nonextensive}.\n\n\n\n\n\n \n\n\n\n\n\\subsection{Hyperbole and risk seeking}\n Kahneman and Tversky also show that the subjective value function is not always concave \\cite{kahneman2013choices}. They noted that in a loss scenario there is a convexity revealing a preference for uncertain loss rather than for certain loss.\n If we replace the word ``win'' with ``lose'' in the lotteries $l_1$ and $l_2$, then we have the following lotteries that result in a wealth decrease: \n\\begin{itemize}\n\\item $l_3$ = to lose $Mp$ (at the next moment);\n\\item $l_4$ = to lose $M$ (at the next moment) with probability $p$.\n\\end{itemize}\n\nNow consider the hypothesis $\\Theta_4$ = ``to lose $M$''. If $l_4$ is repeated until the individual loses $M$, then this lottery becomes similar to \n\\[F\\Theta_4=\\text{ ``will sometime lose } M \\text{'',}\\] \nwhere $p$ is the time probability (sense of truth for $F\\Theta_4$). \n\nSimultaneously, the lottery $l_3$ proposes a certain loss. Then we can reduce its sense of truth to $p$ in order to compare it with $F\\Theta_4$.
For this, let us consider the following hyperbole\n \\[L_3 = \\text{``to lose } W_0-W_0\\left(1-\\frac{pM}{W_0}\\right)^\\frac{1}{p}\\text{'',}\\]\nin which the future affirmation\n\\[\\centering\n\\begin{array}{rl}\nFL_3 = \\text{``will sometime lose } W_0-W_0\\left(1-\\frac{pM}{W_0}\\right)^\\frac{1}{p}\\text{''}\n \\end{array}\n\\]\nargues for an expected change $(1+px)^\\frac{1}{p}-1$ for $-1\\leq x<0$.\n\n\n\n The line $x$ is tangent to the convex curve $(1+px)^\\frac{1}{p}-1$ at the point $x=0$ for any $p$, which results in $(1+px)^\\frac{1}{p}-1 \\geq x$ for $-1\\leq x<0$. In Figure \\ref{figPT2} the dashed black line $x$ represents the proposed change in $l_4$ and the curves $\\ast$-red and $\\circ$-blue, \nbelonging to the family of curves $(1+px)^\\frac{1}{p}-1$, represent the hyperbolic argumentation for $l_3$.\n \\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{PT2}\n\t\t\\caption{The hyperbolic curves $(1+px)^{\\frac{1}{p}}-1$ for $p=1\/2$ and $p=1\/10$. Note that the curves are tangent to the black dashed line $x$ at the point zero. The interval $-0.1\\leq x <0$ is a region of low distinguishability. }\t\n\t\\label{figPT2}\n\\end{figure} \nTherefore, note that the family of curves $(1+px)^\\frac{1}{p}-1$ stays very close to the line $x$ for losses of magnitude up to 0.1. \nThis means that they have low distinguishability in this region; in other words, uncertain and certain losses can be imperceptible changes when the losses are small.\nIn fuzzy logic, this is equivalent to choosing between ``small decrease in wealth with certainty'' and ``small decrease in wealth with probability $p$''. \nThe decreases in wealth are almost the same and undesirable, but uncertainty offers hope of escaping the loss, which is desirable. Thus, the uncertain option for losses will be more attractive in this situation.
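The convexity inequality on the losses side can likewise be verified numerically; the parameter grid below is an arbitrary assumption.

```python
def hyperbole_change(x, p):
    # expected negative change argued by the hyperbole: (1 + p*x)**(1/p) - 1
    return (1 + p * x) ** (1 / p) - 1

# the convex curve stays above the tangent line x for -1 <= x < 0
for p in (0.1, 0.5):
    for i in range(1, 100):
        x = -i / 100.0                     # x in (-1, 0)
        assert hyperbole_change(x, p) >= x
```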
Then, in order to simulate risk seeking in the losses region, we must insert a rate $\\rho$ into the hyperbolic argumentation process, so that\n\\begin{eqnarray}\n\\nonumber {\\cal H}^-_p(\\rho x)&\\equiv &(1+p\\rho x)^\\frac{1}{p}-1\\\\\n\\nonumber &=& e_p^{\\rho x}-1.\n\\end{eqnarray}\n\n\nThe rate $\\rho$ makes the curve ${\\cal H}^-_p(\\rho x) $ more convex. Thus, its first values pass below the line $x$ to simulate risk seeking. In Figure \\ref{figPT3} the red curve has $\\rho=1.2$ and $p=1\/2$ to simulate this effect in the interval $-0.55<x<0$.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[scale=0.65]{PT3}\n\t\t\\caption{Function ${\\cal S}_{0.5} (x)$ for $\\rho=1.2$. When the red curve is below the dashed black line we have risk seeking (interval $-0.55<x<0$), while on the gains side we have risk aversion (dashed black line $x\/2$ above the red curve). }\t\n\t\\label{figPT3}\n\\end{figure} \n\\subsection{Loss aversion and disjunction between hypotheses}\n\\label{averisco}\nThe loss aversion principle, ${\\cal{S}}_p(x)<-{\\cal{S}}_p(-x)$, refers to the tendency to avoid losses rather than to acquire equivalent gains; in other words, it is better not to lose \\$ 1,000 than to win \\$ 1,000. \n\nIn order to understand why the curve is steeper on the losses side, consider that Bob has only \\$10,000 (all of his money). If he now loses \\$ 1,000, then the variation is -10\\% and his new wealth is \\$ 9,000. However, if he wins \\$ 1,000, then the positive variation is 10\\% and his new wealth is \\$ 11,000. So far this process seems fair, but we need to look at it dynamically.\n If he has \\$ 9,000 at the next moment, then he will need to gain 11.11\\% to get back to \\$ 10,000. On the other hand, if he has \\$ 11,000 at the next moment, then the required variation is -9.09\\% to get back to \\$ 10,000. So which is the most difficult change to happen? Exactly: the 11.11\\% gain that restores the previous state after losing 10\\%.
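The recovery percentages in this example follow from simple wealth ratios, which a short sketch makes explicit:

```python
W0 = 10_000.0
after_loss, after_gain = 0.9 * W0, 1.1 * W0   # lose or win $1,000

# variation required at the next moment to restore the original wealth
recover_from_loss = W0 / after_loss - 1       # +11.11...%
recover_from_gain = W0 / after_gain - 1       # -9.09...%

assert abs(recover_from_loss - 1 / 9) < 1e-12
assert abs(recover_from_gain + 1 / 11) < 1e-12
assert recover_from_loss > abs(recover_from_gain)   # losses are harder to undo
```

The asymmetry of the two recovery rates is exactly what makes the value curve steeper on the losses side.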
Therefore, it is better not to lose 10\\% than to gain 10\\% in the long-term gamble.\n\n\nThis behavior can be modeled in fuzzy temporal logic through the disjunction operator. In order to understand the details of the disjunction between loss and gain hypotheses, consider the lottery \n\\begin{eqnarray}\n\\nonumber L_{wl} &=& \\text{``to win } M_1 \\text{ with probability } p\\\\\n\\nonumber &\\text{ }&\\text{ or to lose }M_2\\text{ with probability } q\\text{''.}\n\\label{AversoRisco}\n\\end{eqnarray}\nIf winning $M_1$ produces a gain $x_1>0$ and losing $M_2$ produces a loss $-1\\leq x_2 <0$, then we have the following atomic hypotheses\n\\[\n\\begin{array}{l}\nH_1 = \\text{``to win }M_1\\text{'' and } H_2 = \\text{``to lose }M_2\\text{'',}\n\\end{array}\n\\]\nwhere the sense of truth for $H_1$ is $p$ and the sense of truth for $H_2$ is $q$. The future statement for the disjunction is \n\\begin{eqnarray}\n\\nonumber F (H_1 \\vee H_2) &=& \\text{``to win } M_1 \\text{ or to lose }M_2\\\\\\nonumber &\\text{ }&\\text{once in the future''.}\n\\label{LoteriaDisjunta}\n\\end{eqnarray}\nThe average change in this disjunction is $(1+x_1)^p(1+x_2)^q$ for $p+q<1$. This means that one of the hypotheses may be true at the next instant, or none, because only one will sometime be true in the future.\n \n Another way of affirming a disjunction of losses and gains is ensuring that one or the other will be true at the next instant, $N (H_1 \\vee H_2)$. The average change in this case is $(1+x_1)^p(1+x_2)^q$ for $p+q=1$. This means that $H_1$ or $H_2$ will be true at the next moment with absolute certainty. The uncertainty is just ``which hypothesis is true?''.
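A minimal sketch of this average change makes the fairness of such a lottery easy to probe; the probability and change values below are illustrative assumptions.

```python
def disjunction_change(x1, x2, p, q):
    # average change of the disjunction: (1 + x1)**p * (1 + x2)**q - 1
    return (1 + x1) ** p * (1 + x2) ** q - 1

# symmetric lottery: win x or lose x, each with sense of truth 1/2;
# the change equals sqrt(1 - x**2) - 1 < 0, so the lottery is unfair
for x in (0.1, 0.5, 0.9):
    assert disjunction_change(x, -x, 0.5, 0.5) < 0

# the gain side must outweigh the loss side for a fair lottery
assert disjunction_change(0.5, -0.2, 0.5, 0.5) > 0
```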
Therefore, the judgment preceding the decision on whether or not to participate in this lottery, $N(H_1 \\vee H_2)$ or nothing, is given by\n\\begin{equation}\n\\nonumber \\max \\{\\mu\\left((1+x_1)^p(1+x_2)^q-1\\right),\\mu(0)\\}.\n\\end{equation}\nThe lottery $L_{wl}$, which is a loss and gain disjunction, will be considered fair if the parameters $x_1$, $x_2$, $p$ and $q$ guarantee $(1+x_1)^p(1+x_2)^q-1>0$. In the experiment described in \\cite{kahneman2013prospect}, the values $p=q=1\/2$ and $x_1=-x_2=x$ generate the inequality $\\sqrt{1-x^2}-1<0$, which makes the lottery unfair. Thus, the respondents' choice not to bet seems to reveal a perception of the lottery dynamics. In addition, it can be noted that the expected negative change has its intensity increased with $x$. Therefore, the intensity of loss aversion depends on the magnitude of the amounts. This means that the feeling of aversion to the lottery increases with the growth of the amount. Empirical evidence of this behavior is presented in \\cite{mukherjee2017loss}. \n\n\n \n\\section{Conclusion}\nHeuristics are cognitive processes that ignore part of the information and use simple rules to make a decision quicker and easier. In general, they are defined as strategies that seek alternatives, stop searches in a short time, and make a decision soon after.\n\n\nWithin heuristic processes, some decisions require judging hypotheses about dynamic processes before they take place in time. The Time Preference Problem and Prospect Theory are famous examples. The first evaluates the receipt of goods at different future dates and the second requires lotteries to be valued before their outcomes are known. The common characteristics between these two problems noted here are magnitude dependence and the inseparability between time and uncertainty.
On the magnitude dependence it can be concluded that:\n\\begin{itemize}\n\\item the magnitude effect in time preference is a consequence of subadditivity;\n\\item risk seeking can disappear in Prospect Theory if high-magnitude losses are considered. In addition, risk aversion increases with the growth of the amounts.\n\\end{itemize}\nOn the other hand, on the inseparability between time and uncertainty it can be concluded that:\n\\begin{itemize}\n\\item in the time preference problem, the number of uncertain trials for the short-term hypotheses until verification of the long-term hypothesis produces the subadditive discounting, and consequently, higher annual average rates as the waiting time decreases. In addition, the preference reversal occurs because the number of allowed trials is changed when the hypothesis deadlines are shifted in time;\n\\item the probabilities of lotteries represent the temporal indeterminism about the future. Thus, the S-shaped curve in Prospect Theory can be described by expected fuzzy changes of temporal hypotheses.\n\\end{itemize}\n\nIf the future is uncertain, then time and uncertainty about changes cannot be two independent matters. For this reason, choice under uncertainty and intertemporal choice, traditionally treated as separate subjects, are unified in the same framework in this paper to elaborate the rhetoric for decision-making. \n\nIn addition, it is shown that fuzziness can change prospective judgments about magnitude-dependent gains and losses. This means that a given problem may have different decisions simply by changing the values of the rewards, even if the time and uncertainty context is not changed.
\nExactly in these situations, fuzzy environment modeling will be essential to represent the decision-making.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nUplift modeling involves a set of methods for estimating the expected causal impact of taking an action or a treatment at an individual or subgroup level, which could lead to an increase in their conversion probability \\cite{Zhao2019UpliftMF}.\nTypically for financial services and commercial companies looking to provide additional value-added services and products to their customers, marketers may be interested in evaluating the effectiveness of numerous marketing techniques, such as sending promotional coupons.\nWith the change in customer conversion possibilities, marketers are able to efficiently target prospects.\nMore than marketing campaigns, uplift modeling can be applied to a variety of real-world scenarios related to personalization, such as online advertising, insurance, or healthcare, where patients with varying levels of response to a new drug are identified, including the discovery of adverse effects on specific subgroups \\cite{Jaskowski2012UpliftMF}.\n\nIn essence, uplift modeling is a problem that combines causal inference and machine learning. For the former, it is mutually exclusive to estimate the change between two outcomes for the same individual. To overcome this counterfactual problem, samples are randomly assigned to a treatment group (receiving online advertisement or marketing campaign) and a control group (not receiving online advertisement nor marketing campaign). For the latter, the task is to train a model that predicts the difference in the probability of belonging to a given class on the two groups.\nAt present, two major categories of estimation techniques have been proposed in the literature, namely meta-learners and tailored methods \\cite{Zhang2022AUS}. 
The first includes the Two-Model approach \\cite{Radcliffe2007UsingCG}, the X-learner \\cite{Knzel2017MetalearnersFE} and the transformed outcome methods \\cite{Athey2015MachineLM}, which extend classical machine learning techniques. The second refers to direct uplift modeling such as uplift trees \\cite{5693998} and various neural network based methods \\cite{ Louizos2017CausalEI,Yoon2018GANITEEO}, which modify the existing machine learning algorithms to estimate treatment effects. \nAlso, uplift trees can be extended to more general ensemble tree models, such as causal forests \\cite{doi:10.1080\/01621459.2017.1319839,10.1214\/18-AOS1709}, at the cost of losing true interpretability.\n\nIn order to take advantage of decision trees and the ensemble approach, we propose causal inference based single-branch ensemble trees for uplift modeling (CIET) with two completely different partition criteria that directly maximize the difference between outcome distributions of the treatment and control groups. When building a single-branch tree, we employ lift gain and lift gain ratio as loss functions or partition criteria for node splitting in a recursive manner. \nSince our proposed splitting criteria are highly related to the incremental impact, the performance of CIET is expected to be reflected in the uplift estimation. Meanwhile, \nthe splitting logic of all nodes along the path from root to leaf is combined to form a single rule to ensure the interpretability of CIET. Moreover, the dataset not covered by the rule is then censored and the above tree-building process continues on the censored data. \nDue to this divide and conquer learning strategy, the dependencies between the formed rules can be effectively avoided. This leads to the formation of single-branch ensemble trees and a set of decorrelated inference rules. \n\nNote that our CIET is essentially different from decision trees for uplift modeling and causal forests.
There are three major differences: (1) single-branch tree $vs$ standard binary tree; (2) lift gain and its ratio as loss function or splitting criterion $vs$ Kullback-Leibler divergence and squared Euclidean distance; and (3) decorrelated inference rules $vs$ correlated inference rules or even no inference rules. \nIt is demonstrated through empirical experiments that CIET can achieve better uplift estimation compared with the existing models. Extensive experimental results on synthetic data and the public credit card data show the success of CIET in uplift modeling. We also train an ensemble model and evaluate its performance on a large real-world online loan application dataset from a national financial holdings group in China. As expected, the corresponding results show a significant improvement in the evaluation metrics in terms of both AUUC and the Qini coefficient.\n\n\n\n\nThe rest of this paper is organized as follows. First, causal inference based single-branch ensemble trees for uplift modeling is introduced. Next, full details of our experimental results on synthetic data, credit card data and real-world online loan application data are given. \nIt is demonstrated that CIET performs well in estimating causal effects compared to decision trees for uplift modeling. Finally, conclusions are presented.\n\n\n\n\\section{Causal Inference Based Single-Branch Ensemble Trees (CIET) for Uplift Modeling}\\label{sec:Alg}\nThis section consists of three parts. We first present the two splitting criteria, the single-branch ensemble approach and the pruning strategy specially designed for the uplift estimation problem. Evaluation metrics for uplift modeling are then discussed.
Three key algorithms of CIET are further described in detail.\n\n\\subsection{Splitting Criteria, Single-Branch Ensemble Method and Pruning Strategy}\nTwo distinguishing characteristics of CIET are the splitting criteria for tree generation and the single-branch ensemble method.\n\nThe splitting criteria for estimating uplift are motivated by our expectation of achieving the maximum difference between the distributions of the treatment and control groups. \nGiven a class-labeled dataset with $N$ samples, $N^{T}$ and $N^{C}$ are the sample sizes of the treatment and control groups (recall that $N = N^{T} + N^{C}$, where $T$ and $C$ represent the treatment and control groups).\nFormally, in the case of a balanced randomized experiment, the estimator of the difference in sample average outcomes between the two groups is given by:\n\\begin{align}\n \\label{tau} \\tau = (P^{T} - P^{C})(N^{T} + N^{C})\n\\end{align}\nwhere $P^{T}$ and $P^{C}$ are the outcome probability distributions of the two groups. Motivated by Eq. (\\ref{tau}), the divergence measures for uplift modeling we propose are lift gain and its ratio, namely LG and LGR for short. The corresponding mathematical forms of LG and LGR can thus be expressed as\n\\begin{align}\n \\label{LG} LG &=(P_{R}^{T} - P_{R}^{C})N_{R} - \\tau_{0} = \\tau_{R} - \\tau_{0}\\\\\n \\label{LGR} LGR &=\\frac{(P_{R}^{T} - P_{R}^{C})}{(P_{0}^{T} - P_{0}^{C})} \\propto (P_{R}^{T} - P_{R}^{C}) = \\frac{\\tau_{R}}{N_{R}}\n\\end{align}\nwhere $P_{0}^{T}$ and $P_{0}^{C}$ are the initial outcome probability distributions of the two groups, and $\\tau_{0} = (P_{0}^{T} - P_{0}^{C})N_{R}$. For a node logic $R$, $N_{R}$ and $Y_{R}$ represent the coverage and the correct classifications, while $P_{R}^{T}$ and $P_{R}^{C}$ are the corresponding outcome probability distributions of the two groups. Evidently, both Eq. (\\ref{LG}) and Eq. 
(\\ref{LGR}) represent the estimator for uplift, and they are proposed as two splitting criteria in this paper. Compared to the standard binary tree with left and right branches, only one branch is created after each node splitting in this work. It is characterized by the fact that both LG and LGR are calculated using the single-branch observations that remain following a node split. Accordingly, a subscript $k$ indicating binary branches does not appear in the above equations. Furthermore, the second term of LG ensures that every node partition is better than randomization, while LGR has advantages identical to those of the information gain ratio. \n\n\n\nThe proposed splitting criterion for a test attribute $A$ is then defined for any divergence measure $D$ as \n\\begin{equation}\\label{CRITERION}\n \\Delta = D(P^T(Y):P^C(Y)|A) - D(P^T(Y):P^C(Y)) \n\\end{equation}\nwhere $D(P^T(Y):P^C(Y)|A)$ is the conditional divergence measure. Thus, $\\Delta$ is the incremental gain in divergence following a node splitting. Substituting $LG$ and $LGR$ for $D$, \nwe obtain our proposed splitting criteria $\\Delta_{LG}$ and $\\Delta_{LGR}$. \nThe intuition behind these splitting criteria is as follows: we want to build a single-branch tree such that the distribution divergence between the treatment and control groups before and after splitting an attribute differs as much as possible. \nThus, the attribute with the highest $\\Delta$ is chosen as the best splitting one. In order to achieve this, we need to calculate and find the best splitting point for each attribute. In particular, an attribute is sorted in descending order by value when it is numerical. \nFor categorical attributes, some encoding methods are adopted for numerical type conversion. The average of each pair of adjacent values in an attribute with $n$ values forms $n-1$ candidate splitting points. \nFor this attribute, the point with the highest $\\Delta$ is taken as the best partition point.
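To make the two measures concrete, the following sketch evaluates LG and LGR for one candidate node; the counts and root response rates are invented for illustration.

```python
def lift_gain(y_t, n_t, y_c, n_c, p0_t, p0_c):
    # LG = (P_R^T - P_R^C) * N_R - (P_0^T - P_0^C) * N_R = tau_R - tau_0
    n_r = n_t + n_c
    return (y_t / n_t - y_c / n_c) * n_r - (p0_t - p0_c) * n_r

def uplift_rate(y_t, n_t, y_c, n_c):
    # LGR is proportional to P_R^T - P_R^C = tau_R / N_R
    return y_t / n_t - y_c / n_c

# invented node: 100 treated (60 responders) and 100 controls (40 responders),
# against root response rates of 0.50 (treatment) and 0.45 (control)
lg = lift_gain(60, 100, 40, 100, 0.50, 0.45)
assert abs(lg - 30.0) < 1e-9          # (0.6 - 0.4)*200 - 0.05*200
assert abs(uplift_rate(60, 100, 40, 100) - 0.2) < 1e-9
```

A split is only worthwhile when LG is positive, i.e. when the node improves on the root's uplift scaled by its coverage.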
\nFurthermore, the best splitting attribute with the highest $\\Delta$ can be achieved by traversing all attributes. \nAs for the best splitting attribute, the instances are thus divided into two subsets at the best splitting point. One feeds into a single-branch node, while the other is censored. \nNote that the top\u2013down, recursive partition will continue unless there is no attribute that explains the incremental estimation with statistical significance. Also, histogram-based method can be employed to select the best splitting for each feature, which can reduce the time complexity effectively.\n\nDue to noise and outliers in the dataset, a node may merely represent these abnormal points, resulting in model overfitting. Pruning can often effectively deal with this problem. That is, using statistics to cut off unreliable branches. Since none of the pruning methods is essentially better than others, we use a relatively simple pre-pruning strategy. If $\\Delta$ gain is less than a certain threshold, node partition would stop. Thus, a smaller and simpler tree is constructed after pruning. Naturally, decision-makers prefer less complex inference rules, since they are considered to be more comprehensible and robust from business perspective.\n\n\n\\subsection{Evaluation Metrics for Uplift Modeling}\nAs noted above, it is impossible to observe both the control and treatment outcomes for an individual, which makes it difficult to find measure of loss for each observation.\nIt leads that uplift evaluation should differ drastically from the traditional machine learning model evaluation. \nThat is, improving the predictive accuracy of the outcome \ndoes not necessarily indicate that the models will have better performance in identifying targets with higher uplift. In practice, most of the uplift literature resort to aggregated\nmeasures such as uplift bins or curves. 
Two key metrics involved are area under uplift curve (AUUC) and Qini coefficient \\cite{Gutierrez2016CausalIA}, respectively. In order to define AUUC, binned uplift predictions are sorted from largest to smallest. For each $t$, the cumulative sum of the observations statistic is formulated as below,\n\\begin{equation}\\label{AUUC}\n f(t) = (\\frac{Y_t^{T}}{N_t^{T}} - \\frac{Y_t^{C}}{N_t^{C}})(N_t^{T} + N_t^{C}) \n\\end{equation}\nwhere the $t$ subscript implies that the quantity is calculated on the first or top $t$ observations.\nThe higher this value, the better the uplift model. The continuity of the uplift curves makes it \npossible to calculate AUUC, i.e. area under the real uplift curve, which can be used to evaluate and compare different models. As for Qini coefficient, it represents a natural generalization of \nGini coefficient to uplift modeling. Qini curve is introduced with the following equation,\n\\begin{equation}\\label{QINI_Curve}\n g(t) = {Y_t^{T}} - \\frac{Y_t^{C}N_t^{T}}{N_t^{C}}\n\\end{equation}\nThere is an obvious parallelism with the uplift curve since $f(t)=g(t)(N_t^{T}+N_t^{C})\/N_t^{T}$.\nThe difference between the area under the actual Qini curve and that under the diagonal corresponding to random targeting can be obtained. \nIt is further normalized by the area between the random and the optimal targeting curves, which is defined as Qini coefficient. \n\n\n\\subsection{Algorithm Modules}\nThe following representation of three algorithms includes: selecting the best split for each feature using the splitting criteria described above, learning a top-down induced single-branch tree and forming ensemble trees with each resulting tree progressively.\n\nAlgorithm \\ref{single1:algorithm} depicts how to find the best split of a single feature $F$ on a given dataset $D[group\\_key, feature,$ $ target]$ using a histogram-based method with the proposed two splitting criteria. 
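The parallelism between the uplift and Qini curves stated above, $f(t)=g(t)(N_t^{T}+N_t^{C})\/N_t^{T}$, can be checked with invented cumulative counts for the top-$t$ observations:

```python
def uplift_point(y_t, n_t, y_c, n_c):
    # f(t): cumulative uplift over the top-t observations
    return (y_t / n_t - y_c / n_c) * (n_t + n_c)

def qini_point(y_t, n_t, y_c, n_c):
    # g(t): treated responders minus rescaled control responders
    return y_t - y_c * n_t / n_c

# invented cumulative counts on the top-t observations
y_t, n_t, y_c, n_c = 30, 50, 18, 45
f, g = uplift_point(y_t, n_t, y_c, n_c), qini_point(y_t, n_t, y_c, n_c)
# the two curves differ only by the factor (n_t + n_c) / n_t
assert abs(f - g * (n_t + n_c) / n_t) < 1e-9
```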
Gain\\_left and Gain\\_right are the uplift gains for the child nodes after each node partition. If the maximum value of Gain\\_left is greater than that of Gain\\_right, the right branch is censored and vice versa. Thus, the best split with its corresponding splitting logic, threshold and uplift gain is found, which is denoted by Best$\\_$Direction, Best$\\_$Threshold and Best$\\_\\Delta$. Besides, there are several thresholds to be initialized before training a CIET model, including minimum number of samples at a inner node $min\\_samples$, minimum recall $min\\_recall$ and minimum uplift gain required for splitting $min\\_\\Delta$. The top-down process would continue only when the restrictions are satisfied.\n\n\\begin{algorithm}[htbp]\n\\caption{Selecting the Best Split for One Feature}\n\\label{single1:algorithm}\n\\textbf{Input}: D, the given class-labeled dataset, including the group key (treatment\/control);\\\\\n\\textbf{Parameter}: feature $F$, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$\\\\\n\\textbf{Output}: the best split that maximizes the lift gain or lift gain ratio on a feature\n\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Set} Best$\\_$Value = 0, Best$\\_$Direction = \"\", Best$\\_\\Delta$ = 0, Best$\\_$Threshold = None\n\\STATE calculate $Y^T$, $Y^C$, $N^T$, $N^C$ on D\n\\STATE For each feature value $v$ , calculate the values of $Y_{F\\leq v}^T$, $Y_{F\\leq v}^C$, $N_{F\\leq v}^T$, $N_{F\\leq v}^C$, and then the Gain\\_left($v$) and Gain\\_right($v$) with LG \\eqref{LG} or LGR \\eqref{LGR}.\n\\STATE set the Gain\\_left($v$) and Gain\\_right($v$) to their minimum value on the $v$, whose split does not satisfy the restrictions on number of samples\/recall rate\/divergence measure gain. 
\n\\STATE $v_1$ = argmax(Gain\\_left($v$)), $v_2$ = argmax(Gain\\_right($v$))\n\\IF {max(Gain\\_left) $\\geq$ max(Gain\\_right)}\n\\STATE Best$\\_$Value = max(Gain\\_left)\n\\STATE Best$\\_$Direction = \"$\\leq$\"\n\\STATE Best$\\_$Threshold = $v_1$\n\\ELSE\n\\STATE Best$\\_$Value = max(Gain\\_right)\n\\STATE Best$\\_$Direction = \"$>$\"\n\\STATE Best$\\_$Threshold = $v_2$\n\\ENDIF\n\\STATE \\textbf{return} Best$\\_$Value, Best$\\_$Direction, Best$\\_$Threshold\n\\end{algorithmic} \n\\end{algorithm}\n\nAlgorithm \\ref{single:algorithm} presents a typical algorithmic framework for top\u2013down induction of a single-branch uplift tree, which is built in a recursive manner using a greedy depth-first strategy. \nThe parameter $max\\_depth$ represents the depth of the tree and $cost$ indicates the threshold of LG or LGR.\nAs the tree grows deep, more instances are censored since they are not covered by the node partition logic of each layer. \nAs a result, each child node subdivides the original dataset hierarchically into a smaller subset until the stopping criterion is satisfied. Tracing the splitting logics on the path from the root to leaf nodes in the tree, an \"IF-THEN\" inference rule is thus extracted. 
\n\nFinally, adopting a divide-and-conquer strategy, the above tree-building process is repeated on the censored samples to form ensemble trees, resulting in the formation of a set of inference rules as shown in Algorithm \\ref{multi:algorithm}.\n\n\n\\begin{algorithm}[htbp]\n\\caption{Learning An \"IF-THEN\" Uplift Rule of A Single-branch Tree}\n\\label{single:algorithm}\n\\textbf{Input}: D, the given class-labeled dataset, including the group key (treatment\/control);\\\\\n\\textbf{Parameter}: max$\\_$depth, cost, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$\\\\\n\\textbf{Output}: an \"IF-THEN\" uplift rule\n\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Set} Rule$\\_$Single = [], Max$\\_$Gain = 0.0\n\\STATE \\textbf{Set} Add\\_Rule = \\textbf{True}\n\\WHILE{depth $\\leq$ max$\\_$depth \\textbf{and} Add\\_Rule}\n\\IF {the treatment group or control group in D is empty}\n\\STATE break\n\\ENDIF\n\\STATE \\textbf{Set} Keep = \\{ \\}, Best$\\_$Split = \\{ \\}\n\\STATE depth $\\leftarrow $ depth + 1\n\\STATE Add\\_Rule = \\textbf{False}\n\\FOR{feature in features}\n\\STATE Keep[feature] = Best\\_Split\\_for\\_One\\_Feature(D, feature, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$) (Algorithm \\ref{single1:algorithm})\n\\ENDFOR\n\\FOR{feature in Keep}\n\\IF {feature's best gain $>$ Max\\_Gain + cost}\n\\STATE Max$\\_$Gain = feature's best gain\n\\STATE \\textbf{Add} Keep[feature] \\textbf{to} Best$\\_$Split\n\\STATE Add\\_Rule = \\textbf{True}\n\\ELSE\n\\STATE continue\n\\ENDIF\n\\ENDFOR\n\\STATE \\textbf{Add} Best$\\_$Split \\textbf{to} Rule$\\_$Single\n\\STATE D $\\leftarrow $ D $\\setminus$ \\{\nSamples covered by Rule$\\_$Single\\} \n\\ENDWHILE\n\\STATE \\textbf{return} Rule$\\_$Single\n\\end{algorithmic} \n\\end{algorithm}\n\n\n\\begin{algorithm}[htbp]\n\\caption{Learning A Set of \"IF-THEN\" Uplift Rules}\n\\label{multi:algorithm}\n\\textbf{Input}: D, the given class-labeled dataset, including the group key (treatment\/control);\\\\\n\\textbf{Parameter}: 
max$\\_$depth, rule$\\_$count, cost, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$\\\\\n\\textbf{Output}: a set of \"IF-THEN\" uplift rules\n\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Set} Rule$\\_$Set = \\{\\}, number = 0\n\\WHILE{number $\\leq$ rule$\\_$count}\n\\STATE rule = Single$\\_$Uplift$\\_$Rule(D, cost, max$\\_$depth, min$\\_$samples, min$\\_$recall, min$\\_\\Delta$) (Algorithm \\ref{single:algorithm})\\\\\n\\STATE \\textbf{Add} rule \\textbf{to} Rule$\\_$Set \n\\STATE D $\\leftarrow $ D $\\setminus$ dataset covered by rule \n\\STATE number $\\leftarrow$ number + 1\n\\ENDWHILE\n\\STATE \\textbf{return} Rule$\\_$Set\n\\end{algorithmic} \n\\end{algorithm}\n\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figs\/AUUC.png} \n\\caption{The uplift curves of four analyzed classifiers with four different colors for the synthetic dataset, while the dashed line corresponding to random targeting.}\n\\label{fig:AUUC_SSD} \n\\end{figure*}\n\n\\section{Experiments}\\label{sec:er}\nIn this section, the effectiveness of our CIET is evaluated on synthetic and real-world business datasets. 
Since CIET fundamentally stems from tree-based approaches, we implement it and compare it with uplift decision trees based on squared Euclidean distance and Kullback-Leibler divergence \\cite{5693998}, which are referred to as baselines.\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{llll}\n\\hline\nrule number &\"1\" &\"2\" &\"3\"\\\\\n\\hline\nnode logic &x9$\\_$uplift $\\leq$ 0.17 &x3$\\_$informative $>$ -1.04 &x6$\\_$informative $>$ 0.95 \\\\\nnode logic &x10$\\_$uplift $\\leq$ 2.59 &x1$\\_$informative $\\leq$ 2.71 &x1$\\_$informative $\\leq$ 1.58\\\\\nnode logic &null &x2$\\_$informative $\\leq$ 1.28 &x9$\\_$uplift $\\leq$ 2.09 \\\\\n$N_{before}$ &3000 &2210 &675 \\\\\n$N_{before}^T$ &1500 &1117 &348 \\\\\n$N_{before}^C$ &1500 &1093 &327 \\\\\n$N_{rule}$ &790 &1535 &180 \\\\\n$N_{rule}^T$ &383 &769 &76 \\\\\n$N_{rule}^C$ &407 &766 &104 \\\\\nnet gain &195.99 &87.14 &37.66 \\\\\n$recall_{treatment}$ &36.62$\\%$ &70.42$\\%$ &42.11$\\%$ \\\\\n$recall_{control}$ &28.00$\\%$ &63.70$\\%$ &44.90$\\%$ \\\\\n\\hline\n\\end{tabular}\n\\caption{A set of inference rules found by CIET and their corresponding statistical indicators with $criterion\\_type = $\"LG\", $rule\\_count = 3$ and $max\\_depth = 3$.}\n\\label{tab:RS_CIET}\n\\end{table*}\n\n\n\\subsection{Experiments on Synthetic Data}\\label{subsec: SD}\n\\textbf{Dataset} We test the methodology with numerical simulations, generating synthetic datasets with known causal and non-causal relationships between the outcome, the action (treatment\/control) and some confounding variables. More specifically, both the outcome and the action\/treatment variables are binary. A synthetic dataset is generated with the $make\\_uplift\\_classification$ function of the \"Causal ML\" package, based on the algorithm in \\cite{Guyon2003DesignOE}.\nThere are 3,000 instances across the treatment and control groups, with response rates of 0.6 and 0.5, respectively. The input consists of 11 features in three categories. 
Eight of them are used for base classification, comprising 6 informative and 2 irrelevant variables. Two positive uplift variables are created to verify a positive treatment effect. The remaining one is a mixed variable, defined as a linear superposition of a randomly selected informative classification variable and a randomly selected positive uplift variable.\n\n\n\n\n\\textbf{Parameters and Results} The main hyper-parameters of CIET are $criterion\\_type$, $max\\_depth$ and $rule\\_count$. $criterion\\_type$ includes two options, LG and LGR. \nParameter assignment is determined mainly by two factors: business complexity and the difficulty of online deployment.\nTo balance model generalization and interpretability, $max\\_depth$ is set to 3; that is, a single inference rule always contains at most three conditions. $rule\\_count$ is also set to 3, indicating that a set of no more than three rules is used to model the causal effect of a treatment on the outcome. Meanwhile, the default values for $min\\_samples$, $min\\_recall$, $cost$ and $min\\_\\Delta$ are 50, 0.1, 0.01 and 0, respectively.\n\n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n\\hline\ndataset &KL &Euclid &LG &LGR\\\\\n\\hline\ntraining &0.187 &0.189 &0.239 &0.235 \\\\\ntest &0.176 &0.178 &0.210 &0.225 \\\\\n\\hline\n\\end{tabular}\n\\caption{Qini coefficients of the four analyzed classifiers on the training and test sets of the synthetic data.}\n\\label{tab:QC_SSD}\n\\end{table}\n\n\nStratified sampling is used to divide the synthetic dataset into equally sized training and test sets. Figure \\ref{fig:AUUC_SSD} shows the uplift curves of the four analyzed classifiers. The AUUC values of CIET with LG and LGR are 294 and 292 on the training set, significantly greater than the 266 and 265 achieved by the uplift decision trees with KL divergence and squared Euclidean distance. 
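For reference, the uplift curve underlying the AUUC figures quoted here can be computed from model scores roughly as follows. This is a simplified sketch with our own variable names; conventions differ between packages (e.g. in how the control group is rescaled), so the exact numbers depend on the implementation used.

```python
import numpy as np

def uplift_curve(scores, treated, outcome):
    """Cumulative incremental gain when targeting samples by descending
    uplift score; treated/outcome are 0-1 arrays."""
    order = np.argsort(-np.asarray(scores))
    t = np.asarray(treated, float)[order]
    y = np.asarray(outcome, float)[order]
    n_t, n_c = np.cumsum(t), np.cumsum(1 - t)          # group sizes at each cutoff
    y_t, y_c = np.cumsum(y * t), np.cumsum(y * (1 - t))  # responders at each cutoff
    # treated responders minus control responders, with the control count
    # rescaled to the treated-group size (a Qini-style convention)
    return y_t - y_c * n_t / np.maximum(n_c, 1)

gain = uplift_curve([0.9, 0.8, 0.2, 0.1], [1, 0, 1, 0], [1, 0, 0, 0])
# AUUC is then the area under this curve, e.g. gain.sum() / len(gain)
```

The Qini coefficient reported in the tables is, analogously, the area between such a curve and the random-targeting diagonal, normalized.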
\nAt the 36th percentile of the population, the cumulative profit increases reach 303 and 354 for LG and LGR, a growth of more than 18$\\%$ and 37$\\%$ over the baselines. Moreover, the AUUC varies little between the training and test datasets, indicating that CIET is also very stable. According to Table \\ref{tab:QC_SSD}, the Qini coefficients of CIET are also clearly greater, with increases of more than 24.5$\\%$ and 17.8$\\%$. Furthermore, all three rules are determined by uplift and informative variables as expected, as can be seen from Table \\ref{tab:RS_CIET}. \n\n\n\n\n\n\\subsection{Experiments on Credit Card Data}\n\n\\textbf{Dataset} We use the publicly available dataset $Credit$ $Approval$ from the UCI repository as one of the real-world examples; it contains 690 credit card applications. All 15 attributes and the outcome are encoded as meaningless symbols, and $A7 \\neq v$ \nis applied as the condition for dividing the dataset into treatment and control groups. \nThere are 291 and 399 observations in the two groups, with response rates of 0.47 and 0.42, respectively.\nAttributes with more than a 25$\\%$ difference in distribution between the two groups are removed before any experiments are performed, leaving 12 attributes as input variables. 
For further preprocessing, categorical features are binarized through one-hot encoding.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{figs\/CC.png} \n\\caption{The uplift curves of the four analyzed classifiers, shown in four different colors, for the $Credit\\ Approval$ dataset; the dashed curve corresponds to random targeting.}\n\\label{fig:FIG_CAD} \n\\end{figure}\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n\\hline\nmetrics &KL &Euclid &LG &LGR\\\\\n\\hline\nAUUC &37.337 &40.887 &42.893 &48.222 \\\\\nQini &0.201 &0.236 &0.257 &0.310 \\\\\n\\hline\n\\end{tabular}\n\\caption{Model performance of the four analyzed classifiers on the $Credit\\ Approval$ dataset.}\n\\label{tab:CAD}\n\\end{table}\n\n\n\\textbf{Parameters and Results} From the business decision-making perspective, the parameters are kept the same as above.\n\n\n\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figs\/ROP.png} \n\\caption{The uplift curves of the four analyzed classifiers, shown in four different colors, for the real-world online loan application dataset; the dashed line corresponds to random targeting.}\n\\label{fig:AUUC_ROP} \n\\end{figure*}\n\nTo avoid the distributional bias that splitting such a small dataset would introduce, $Credit\\ Approval$ is not divided into training and test parts. Figure \\ref{fig:FIG_CAD} shows the uplift curves for the four analyzed classifiers, from which we can see that CIET obtains higher AUUC and Qini coefficients. As shown in Table \\ref{tab:CAD}, the former increases from approximately 37$\\sim$40 for the baselines to 42$\\sim$48 for CIET, while the latter also improves significantly from 0.20$\\sim$0.23 to 0.25$\\sim$0.31. 
In particular, when LGR serves as the splitting criterion, the cumulative profit reaches a distinct peak of 74.5 while only 48.4$\\%$ of the samples are covered.\n\n\n\n\n\\subsection{Experiments on Online Loan Application Data}\n\n\n\n\\textbf{Dataset} We further extend our CIET to precision marketing for new customer applications. A telephone marketing campaign is designed to encourage customers to apply for personal credit loans at a national financial holdings group in China via its official mobile app. The target is 1\/0, indicating whether or not a customer submits an application. \nThe data contains 53,629 individuals, consisting of a treated group of 32,984 (receiving marketing calls) and a control group of 20,645 (not receiving marketing calls). These two groups have 300 and 124 credit loan applications, with response rates of 0.9\\% and 0.6\\%, which are typical values in real-world marketing practice. There are 24 variables in all, covering credit card-related information, loan history, customer demographics, and so on.\n\n\n\n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n\\hline\ndataset &KL &Euclid &LG &LGR\\\\\n\\hline\ntraining &0.173 &0.168 &0.414 &0.302 \\\\\ntest &0.108 &0.124 &0.385 &0.319 \\\\\n\\hline\n\\end{tabular}\n\\caption{Qini coefficients of the four analyzed classifiers on the training and test sets of the real-world online loan application data.}\n\\label{tab:QC_ROP}\n\\end{table}\n\n\n\n\\textbf{Parameters and Results} All parameters are the same as in the above experiments. The dataset is first divided into training and test sets in a 60:40 ratio. The response rates are consistent across the two sets for both groups. Figure \\ref{fig:AUUC_ROP} displays the results graphically. On the training dataset, CIET based on LG and LGR reaches AUUC values of about 104 and 89, while the decision trees based on KL divergence and squared Euclidean distance reach only 73 and 72. 
It can be seen that CIET achieves a significant improvement over the baselines on this real-world dataset, even with a very low response rate. Moreover, as can be seen in Table \\ref{tab:QC_ROP}, the Qini coefficient of our approaches increases from 0.16$\\sim$0.17 to 0.30$\\sim$0.41 \non the training dataset. Meanwhile, the Qini coefficient changes little on the test dataset, indicating better stability. Consequently, CIET both improves and stabilizes classifiers for precision marketing in terms of AUUC and Qini coefficient. At present, CIET has already been applied to personal credit telemarketing. \n\n\n\n\\section{Conclusion}\\label{sec:con}\nIn this paper, we propose new methods for constructing causal inference based single-branch ensemble trees for uplift estimation, CIET for short. Our methods provide two partition criteria for node splitting and a strategy for generating ensemble trees. The corresponding outputs are the uplift gain between the two outcomes and a set of interpretable inference rules, respectively. Compared with classical decision trees for uplift modeling, CIET not only avoids dependencies among inference rules, but also improves model performance in terms of AUUC and Qini coefficient. It is widely applicable to any randomized controlled trial, such as medical studies and precision marketing. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum teleportation experiments have shown that quantum coherence can be \nmaintained over ever-increasing distances. Indeed, the factor that hinders \ncoherence (breaking the required entanglement for teleportation) is the \nloss of signal to the medium, mainly the atmosphere. This obstacle is no \nlonger present in space, which hints at the possibility of performing \nsuch experiments at interstellar distances, or even detecting quantum \nsignals from astrophysical sources. 
In this context, one of us showed \nrecently that the quantum state of a photon could indeed be maintained \nat galactic distances, at least for a range of the electromagnetic \nspectrum \\cite{Berera2020}. The reason for this is that the mean free \npaths associated with the different interactions the photon could have are \nmany orders of magnitude larger than galactic scales \n(or even the observable Universe). \nAs an outcome of this observation, one seminal suggestion that paper \nmade was the possibility of interstellar quantum communication, owing to \nthe viability of maintaining quantum coherence over these distances for \ncertain frequency bands. Another possibility suggested in that paper was \nthat, if there were any natural quantum-coherent sources, such signals \ncould maintain their coherence over interstellar distances. Extending \nthese ideas, that paper also noted that this (lack of) effect can most \nlikely be extrapolated to cosmological distances. \n\nThis work will explore that possibility. Here we consider \na wider variety of decoherence factors, such as the expansion of the Universe itself. However, even in this case we do not give up on the philosophy that decoherence takes place due to the interaction of the quantum state with some environment. To this end, we consider the environment to be constituted by particles produced by the expansion of the Universe at different epochs. The mechanism achieving this is squeezing, which has been widely studied in quantum optics and, in cosmology, in the theory of inflationary perturbations. Borrowing from this mechanism, we compute the number of scalar particles produced through squeezing, and argue that this effect is essentially absent for fermions and $U(1)$ gauge bosons. Moreover, we identify the scalar field (interacting with photons) with axion-like particles (ALPs), as a natural extension of the Standard Model. 
With these considerations, we are able to look at different interactions of the photon with the ALPs (or their decay products) in order to estimate the probability of interactions, which we find to be basically null. Thus, in practice, the expansion is not a decoherence factor for photons (at the energies we shall consider). We also look at other potential sources of decoherence, like interaction with CMB radiation or with electrons after reionization. The latter is more likely to be a source of decoherence, although the probabilities remain low enough to consider that the quantum state could remain undisturbed after decoupling. This opens up a new window to look for quantum signals from certain astrophysical objects or even from cosmic strings. \n\n\\section{Expansion-induced decoherence}\n\n\\subsection{Scalar fields}\n\nIn order to learn how the expansion of the Universe can lead to decoherence, let us look at the theory of cosmic inflation for guidance. \nCosmological perturbations during inflation undergo a process known as {\\it squeezing}, where states of the type $|n_{\\bf k}, n_{-{\\bf k}}\\rangle$ are created at superhorizon scales. This is an effect purely due to expansion, whose basic principles can be grasped just by studying a massless scalar field minimally coupled to gravity, as follows:\n\\begin{align}\n\t{\\cal S} & = \\frac{1}{2} \\int dt\\ d^3 x\\ \\sqrt{-g} \\partial_{\\mu} \\phi \\partial^{\\mu} \\phi \\nonumber \\\\\n\t& = \\frac{1}{2} \\int d\\tau\\ d^3 x\\ a^2 \\left[(\\phi')^2 - (\\nabla \\phi)^2 \\right],\n\\end{align}\nwhere primes denote derivative w.r.t. the conformal time $\\tau$. 
It is convenient to introduce the change of variable $\\varphi \\equiv a \\phi$, such that\n\\begin{equation}\\label{act1}\n\t{\\cal S} = \\frac{1}{2}\\int d\\tau\\ d^3 x\\ \\left[ \\left(\\varphi' - \\frac{a'}{a}\\varphi\\right)^2 - (\\nabla \\varphi)^2 \\right].\n\\end{equation}\nUsing the Euler-Lagrange equations, and going to Fourier space, one gets the mode equations\n\\begin{equation}\\label{eom1}\n\t\\varphi_k'' + \\left(k^2 - \\frac{a''}{a}\\right) \\varphi_k = 0.\n\\end{equation} \nIn the case of a perfect de Sitter expansion, $a''\/a = 2\/\\tau^2$, the equation of motion becomes\n\\begin{equation}\n\t\\varphi_k'' + \\left(k^2 - \\frac{2}{\\tau^2} \\right) \\varphi_k = \\varphi_k'' + \\left(k^2 - 2 (aH)^2 \\right) \\varphi_k = 0\\,.\n\\end{equation}\nClearly, the solutions are oscillatory for $k^2 > 2 (aH)^2$, whereas for $k^2 < 2(aH)^2$ there is a growing and a decaying-mode solution. The question which then arises is what should be the right initial state for solving this equation. 
For inflation, one usually takes Bunch-Davies initial states\n\\begin{equation}\n\t\\varphi_k (\\tau) = \\frac{e^{-i k \\tau}}{\\sqrt{2k}} \\left(1 - \\frac{i}{k\\tau}\\right)\\,,\n\\end{equation}\nsuch that the time-dependent field operator and the canonical momentum are given by\n\\begin{widetext}\n\\begin{equation}\n\t\\hat{\\varphi} (\\tau,\\vt{x}) = \\int \\frac{d^3k}{(2\\pi)^3} \\frac{1}{\\sqrt{2k}} \\left[e^{-ik\\tau}\\left(1 - \\frac{i}{k\\tau}\\right) \\cm{k} (\\tau_0) + e^{ik\\tau} \\left(1 + \\frac{i}{k\\tau}\\right) \\cpp{k} (\\tau_0) \\right] e^{i \\dpr{k}{x}},\n\\end{equation}\n\\begin{equation}\n\\hat{\\pi}(\\tau, \\vt{x}) = \\varphi' - \\frac{a'}{a}\\varphi = -i \\int \\frac{d^3k}{(2\\pi)^3} \\sqrt{\\frac{k}{2}} \\left[e^{-ik\\tau} \\cm{k} (\\tau_0) - e^{ik\\tau} \\cpp{k} (\\tau_0)\\right] e^{i \\dpr{k}{x}}\\,.\n\\end{equation}\n\\end{widetext}\n\nThe creation and annihilation operators at later times can be found through a Bogolyubov transformation, such that\n\\begin{align}\n\t\\cm{k} (\\tau) & = \\alpha_k (\\tau) \\cm{k} (\\tau_0) + \\beta_k (\\tau) \\cpp{k}(\\tau_0)\\,, \\nonumber\\\\\n\t\\cpp{k} (\\tau) & = \\alpha_k^* (\\tau) \\cpp{k} (\\tau_0) + \\beta_k^* (\\tau) \\cm{k} (\\tau_0)\\,,\n\\end{align}\nwhere $|\\alpha_k|^2 - |\\beta_k|^2 = 1$.\n\nConsidering this, one can parametrize these coefficients as\n\\begin{equation}\n\t\\alpha_k = \\cosh (r_k) e^{-i \\Theta_k}, \\quad \\beta_k = -\\sinh (r_k) e^{i (\\Theta_k + 2\\phi_k)}\\,,\n\\end{equation}\nwhich renders\n\\begin{widetext}\n\\begin{align}\\label{sqpar}\n\t\\hat{\\varphi}_k (\\tau) =& \\frac{1}{\\sqrt{2k}} \\left\\{\\left[ \\cosh (r_k) e^{-i \\Theta_k} - \\sinh(r_k) e^{-i(\\Theta_k + 2\\phi_k)}\\right] \\cm{k} + \\left[\\cosh(r_k) e^{i\\Theta_k} - \\sinh(r_k) e^{i(\\Theta_k + 2\\phi_k)}\\right] \\cpp{k} \\right\\}, \\nonumber \\\\\n\t\\hat{\\pi}_k (\\tau) = & -i \\sqrt{\\frac{k}{2}} \\left\\{\\left[ \\cosh (r_k) e^{-i \\Theta_k} + \\sinh(r_k) e^{-i(\\Theta_k + 2\\phi_k)}\\right] 
\\cm{k} - \\left[\\cosh(r_k) e^{i\\Theta_k} + \\sinh(r_k) e^{i(\\Theta_k + 2\\phi_k)}\\right] \\cpp{k} \\right\\}\\,.\n\\end{align}\n\\end{widetext}\n\nComparing with the equations above (depending on Bunch-Davies functions), one readily finds the parameters\n\\begin{align}\n\tr_k & = \\sinh^{-1} \\left(\\frac{1}{2k\\tau}\\right), \\qquad \\Theta_k = k\\tau + \\tan^{-1} \\left(\\frac{1}{2k\\tau}\\right)\\,, \\nonumber \\\\\n\t\\phi_k & = -\\frac{\\pi}{4} - \\frac{1}{2} \\tan^{-1} \\left(\\frac{1}{2k\\tau}\\right).\n\\end{align}\nThe vacuum expectation value of the number of particles for the new vacuum in the $k$ mode is given by\n\\begin{equation}\n\t\\left\\langle N_k \\right\\rangle = |\\beta_k|^2 = \\sinh^2 (r_k) = \\left(\\frac{1}{2k\\tau}\\right)^2.\n\\end{equation}\nThus, for $k < 2\/(aH)$ the expectation number is bigger than $1$. In practice, this matches the region for which the equation of motion has the exponential solutions, and in particular, where squeezing takes place. \n\n\\subsubsection{Particle production during the standard cosmological expansion}\n\nIn order to find the density of particles created during the expansion history, it is convenient to have at hand the evolution of the scale factor as a function of the conformal time, starting from the inflationary era until the matter dominated era. 
We shall assume the transitions between epochs to be instantaneous, commonly known as the sudden approximation.\\footnote{Relaxing this assumption does not change our main findings.} Using the sudden approximation between the (quasi)-de Sitter expansion and the hot big bang phase, the scale factor is given by\n\n\\begin{equation*}\n\ta(\\tau)=\\left\\{\\begin{array}{l}\n(H_{\\rm Inf}|\\tau|)^{-1}, \\quad \\tau<\\tau_{e}<0 \\\\\n\\alpha_M (\\tau-\\tau_e)^2 + \\alpha_R (\\tau - \\tau_e) + \\alpha_I, \\quad \\tau>\\tau_{e}\n\\end{array}\\right.\\,,\n\\end{equation*}\n\\begin{align}\n\t\\alpha_M & = \\frac{\\pi G}{3} \\rho_{eq} a_{eq}^3, \\quad \\alpha_I = \\frac{1}{H_{\\rm Inf} |\\tau_e|}, \\nonumber \\\\\n\t \\alpha_R & = \\left[ \\frac{4\\pi G}{3} \\rho_{eq} a_{eq}^3 \\left(\\frac{1}{H_{\\rm Inf} |\\tau_e|} + a_{eq} \\right)\\right]^{1\/2}\\,,\n\\end{align}\nwhere $\\tau_e$ denotes the conformal time at the end of inflation and ``$eq$'' refers to the time of matter-radiation equality. The quadratic term corresponds to the evolution during matter domination, whereas the linear term to radiation domination. \n\nThen, the equation of motion Eq. \\eqref{eom1} (which is for a completely general cosmological background) during this epoch(s) is given by\n\\begin{equation}\n\t\\varphi_k'' + \\left(k^2 - \\frac{2\\alpha_M}{\\alpha_M(\\tau-\\tau_e)^2 + \\alpha_R(\\tau-\\tau_e) + \\alpha_I} \\right) \\varphi_k = 0\\,.\n\\end{equation}\nNaturally, one can identify regions where the equation gets simplified. For the radiation-dominated era, the e.o.m. is\n\\begin{equation}\n\t\\varphi_k'' + k^2 \\varphi_k = 0\\,,\n\\end{equation}\nwhereas for the matter-dominated era, it is given by\n\\begin{equation}\n\t\\varphi_k'' + \\left(k^2 - \\frac{2}{\\tau^2} \\right) \\varphi_k = 0\\,,\n\\end{equation}\ni.e., the same equation as for the inflationary era. 
In principle, there can be small changes to the usual (positive-frequency) vacuum state coming from effects of gravitational phase transitions. However, the corrections to the positive-frequency vacuum are small or, in other words, the number of particles created due to these phase transitions quickly dilute. Therefore, it is reasonable to consider the Bunch Davies-like initial states, such that the solutions to these equations are, respectively, given by\n\\begin{eqnarray}\n{}^{\\rm Rad}\\varphi_k (\\tau) = \\frac{1}{\\sqrt{2k}} e^{-ik\/\\mathcal{H}}\\,,\n\\end{eqnarray}\nand the basis for the matter-dominated era as\n\\begin{eqnarray}\\label{solm}\n\t{}^{\\rm Mat}\\varphi_k (\\tau) = \\frac{1}{\\sqrt{2k}} \\left(1 - \\frac{i \\mathcal{H}}{2k}\\right) e^{-2ik\/\\mathcal{H}}\\,,\n\\end{eqnarray}\nwhere ${\\cal H}$ is the comoving rate of expansion.\\footnote{We here ignore any\n squeezing which takes place before the epoch of radiation domination.} As mentioned, the matching of the solutions during the different epochs will lead to excited states that will increase the number of generated particles. However, for now it will be enough to concentrate on this simple form of solutions. Moreover, notice that Eq. \\eqref{solm} has the same functional form as the Bunch-Davies solution for de Sitter spacetime, and thus the squeezing formalism derived for that case also applies here. In particular, the vacuum expectation of the number of particles is \n\\begin{equation}\n\t\\langle N_k \\rangle = |\\beta_k|^2 = \\left(\\frac{1}{2k\\tau}\\right)^2\\,.\n\\end{equation}\nOn the contrary, during radiation domination there is no mass term in the e.o.m., so there is no squeezing and particle production during this era. Therefore, expansion induces particle excitations of a scalar field only during the de Sitter and matter-dominated eras. 
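As an illustrative numerical cross-check of the squeezing results above (our own sketch, not part of the original derivation), the occupation number extracted directly from the de Sitter Bunch-Davies mode function and its canonical momentum reproduces $\langle N_k \rangle = (2k\tau)^{-2}$:

```python
import numpy as np

k, tau = 0.7, -3.2   # arbitrary comoving mode and conformal time (de Sitter, tau < 0)

# Bunch-Davies mode function and its canonical momentum pi = phi' - (a'/a) phi,
# with a'/a = -1/tau during de Sitter expansion
phi = np.exp(-1j * k * tau) / np.sqrt(2 * k) * (1 - 1j / (k * tau))
pi = -1j * np.sqrt(k / 2) * np.exp(-1j * k * tau)

# occupation number relative to the instantaneous Minkowski vacuum
N = 0.5 * (k * abs(phi) ** 2 + abs(pi) ** 2 / k) - 0.5

assert np.isclose(N, (1 / (2 * k * tau)) ** 2)
```

The same check applies verbatim to the matter-dominated solution of Eq. \eqref{solm}, since it has the same functional form with $\tau \to 2/\mathcal{H}$.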
Before moving on, we will cover the case of massive scalar fields during inflation, and similar calculations can be done for standard expansion.\n\n\\subsubsection{The massive scalar case}\n\nHere we will cover (although somewhat superficially) the case of a massive scalar field. A priori, one would expect particle production for massive fields to be less efficient, so we need to quantify the required corrections to the functions displayed above.\n\nLet us start with a generalization of the action Eq. \\eqref{act1},\n\\begin{equation}\n\t{\\cal S} = \\frac{1}{2} \\int d\\tau d^3 x \\left[ \\left(\\varphi' - \\frac{a'}{a}\\varphi \\right)^2 - (\\nabla \\varphi)^2 - a^2 m^2 \\varphi^2 \\right]\\,,\n\\end{equation}\nwhich renders the following equation of motion for the field\n\\begin{equation}\n\t\\varphi_k'' + \\left[k^2 - \\left( \\frac{a''}{a} - a^2 m^2 \\right) \\right] \\varphi_k = 0\\,.\n\\end{equation}\nOnce again, this equation is completely general for any cosmological epoch. For inflation this becomes\n\\begin{equation}\n\t\\varphi_k'' + \\left[k^2 - \\frac{1}{\\tau^2} \\left(2 - \\frac{m^2}{H^2}\\right) \\right]\\varphi_k = 0\\,,\n\\end{equation}\nwhere the Bunch-Davies solution is\n\\begin{equation}\\label{fim}\n\t\\varphi_k = \\frac{e^{i (2\\nu+1)\\frac{\\pi}{4}}}{\\sqrt{2k}} \\sqrt{\\frac{\\pi}{2}} w^{1\/2} H_{\\nu}^{(1)} (w)\\,,\n\\end{equation}\nwhere $w = |k \\tau|$ and $\\nu^2 = 9\/4 - (m\/H)^2$. The conjugate momentum is\n\\begin{align}\\label{pim}\n\t\\pi_k = & -i \\sqrt{\\frac{k}{2}} \\bigg\\{-i \\sqrt{\\frac{\\pi}{2}} e^{i (2\\nu+1)\\frac{\\pi}{4}} \\bigg[ w^{1\/2} H_{\\nu-1}^{(1)} (w) \\nonumber \\\\\n\t& + w^{-1\/2} \\left(3\/2 - \\nu \\right) H_{\\nu}^{(1)} (w) \\bigg] \\bigg\\}\\,.\n\\end{align}\nNotice that negative values of $\\nu$ lead to exponentially suppressed solutions. Thus, as expected, there is no particle production for $m \\gtrsim H$. 
Then, assuming that the mass term is small enough so that $\\nu$ is safely larger than 0, one can compute the number of generated particles due to squeezing by comparing the equations above with Eq. \\eqref{sqpar}. In order to have analytical expressions, one can expand eqs.\\eqref{fim},\\eqref{pim} in powers of $m\/H$, which yields\n\\begin{align}\n\t|\\beta_k|^2 \\simeq \\left(\\frac{1}{2k\\tau}\\right)^2 \\left[1 + \\frac{2}{3} \\frac{m^2}{H^2} \\left( -1 + \\gamma_E + \\ln (2w) \\right)\\right]\\,,\n\\end{align}\nwhere $\\gamma_E$ denotes the Euler-Mascheroni constant. Naturally, during radiation domination this type of mass term does not enhance squeezing, whereas during matter dominance it is more subdominant than in the other eras ($\\tau^{-4}$ vs. $\\tau^{-2}$). \n\n\\subsection{Setting up an environment}\n\nWhat we have covered so far is valid for a scalar field, so the natural next step is to try and reproduce this for photons. However, in this case there is no induced time-dependent mass-term and thus no squeezing (similarly to the scalar case during radiation domination). Naturally, there can be particle production due to interactions with other fields, but such processes are not linked to the background dynamics. In fact, in some cases the expansion just dilutes whatever number of particles are produced through these couplings. Consequently, in order to grasp the effects of decoherence of photons due to expansion alone, the next best thing is to look at the interactions between the quantum state (of a test photon) and an environment encompassed by either pseudoscalar particles produced by the squeezing of super-horizon states, or by decay products of these scalars, in particular, into photons. Arguably, the preeminent example of a scalar field in such scenario is the axion, which has a well-known interaction with $U(1)$ fields. 
Moreover, the interactions between axions and photons through other means have been widely explored in the literature, where the search of this particle is largely based on this interaction. \nThe interaction between axions and $U(1)$ gauge fields is described by the Lagrangian\n\\begin{equation}\n\t{\\cal L}_{A\\gamma\\gamma} = - \\frac{g_{A \\gamma\\gamma}}{4} F_{\\mu\\nu} \\tilde{F}^{\\mu\\nu} \\phi_A = g_{A\\gamma\\gamma} \\vt{E}\\cdot \\vt{B}\\ \\phi_A\\,,\n\\end{equation}\nwith\n\\begin{align}\n\tg_{A\\gamma\\gamma} & = \\frac{\\alpha}{2\\pi f_A} \\left(\\frac{E}{N} - 1.92(4)\\right) \\nonumber \\\\\n\t& = \\left(0.203(3) \\frac{E}{N} - 0.39(1) \\right) \\frac{m_A}{{\\rm GeV}^2}\\,,\n\\end{align}\nwhere $E$ and $N$ are the electromagnetic and color anomalies of the axial current \\cite{RRR2018}. \n\n\\subsubsection{Number density}\n\nLet us estimate the number density of $\\phi_A$-particles created during inflation (just by squeezing). For this, we need to compute the total number of particles. Assuming the states are homogeneously distributed, the amount of states within a ``radius'' $k$ is \n\\begin{equation}\n\tG(k) = \\frac{V}{(2\\pi)^3} \\frac{4\\pi k^3}{3}\\,,\n\\end{equation}\nwhere $V$ stands for a comoving volume. In this way, the density of states is given by\n\\begin{equation}\n\tg(k) = \\frac{\\partial G}{\\partial k} = \\frac{V}{2\\pi^2} k^2\\,.\n\\end{equation}\nThus, the total number of particles is\n\\begin{widetext}\n\\begin{equation}\n\tN = \\sum_k N_k = \\int dk\\ g(k) f(k) \\nonumber = \\frac{V}{2\\pi^2} \\int_{0}^{-1\/\\tau_e} dk\\ k^2 \\left(\\frac{1}{2k\\tau_e}\\right)^2 \\left[1 + \\frac{2}{3} \\frac{m^2}{H_{\\rm Inf}^2} \\left( -1 + \\gamma_E + \\ln (-2k\\tau_e) \\right)\\right]\\,,\n\\end{equation}\n\\end{widetext}\nwhere we have used the formula for the average number of particles on a mode $k$ created due to squeezing, which we identified with $f(k)$. 
The integration limits correspond only to modes that have been superhorizon at some point during inflation, as those are the ones that undergo squeezing. Performing the integral we get\n\\begin{eqnarray}\n\tN & = & \\frac{V}{8\\pi^2} \\left. \\frac{k}{\\tau_e^2} \\left[1 + \\frac{2}{3}\\frac{m_a^2}{H_{\\rm Inf}^2} \\left( -2 + \\gamma_E + \\ln (-2k\\tau_e)\\right) \\right]\\right|_{0}^{-1\/\\tau_e} \\nonumber \\\\\n\t& = & -\\frac{V}{8\\pi^2} \\frac{1}{\\tau_e^3} \\left[1 + \\frac{2}{3}\\frac{m_a^2}{H_{\\rm Inf}^2} \\left( -2 + \\gamma_E + \\ln 2 \\right) \\right]\\,.\n\\end{eqnarray}\nThen, one can obtain the density of these particles at any given time through\n\\begin{equation*}\n\tn = \\frac{N}{V_{\\rm phys}} = - \\frac{1}{8\\pi^2} \\left(\\frac{1}{a \\tau_e}\\right)^3 \\left[1 + \\frac{2}{3} \\frac{m_a^2}{H_{\\rm Inf}^2} \\left( -2 + \\gamma_E + \\ln 2 \\right) \\right]\\,.\n\\end{equation*}\n\nThis is a good point to make some estimates. First, one can get away (for now) with not choosing a value of $m_a$, as it will be subdominant. Thus, we are left to find $\\tau_e$. To do so, notice that \n\\begin{equation}\n\t\\frac{1}{k_0 |\\tau_e|} = \\frac{k_0|\\tau_*|}{k_0 |\\tau_e|} = \\frac{a_e}{a_*} \\sim e^{60}\\,,\n\\end{equation}\nwhere `$0$' and `$*$' stand for present-day and horizon-crossing magnitudes. In particular, $k_0$ can be identified with the current horizon length. 
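The massless ($m_a \to 0$) limit of the number-density integral above can be verified numerically; this is our own quick check, with an arbitrary (unphysical) value of $\tau_e$ chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

tau_e = -2.5          # arbitrary negative conformal time at the end of inflation
k_max = -1.0 / tau_e  # last comoving mode to exit the horizon

# massless limit: occupation f(k) = (2 k tau_e)^{-2}, density of states g(k)/V = k^2/(2 pi^2)
n_comoving, _ = quad(lambda k: k**2 / (2 * np.pi**2) * (2 * k * tau_e) ** -2,
                     0.0, k_max)

# leading term of N/V derived above: -1/(8 pi^2 tau_e^3), positive since tau_e < 0
closed_form = -1.0 / (8 * np.pi**2 * tau_e**3)
assert np.isclose(n_comoving, closed_form)
```

The mass correction only shifts this result at order $(m_a/H_{\rm Inf})^2$, which is why the estimates below can ignore $m_a$ at first pass.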
As it is widely known, inflation had to last at least 60 $e-$folds after this mode crossed the horizon in order to solve the horizon problem.\\footnote{Actually, the number of $e-$folds needed to solve the horizon problem depends on the energy scale of inflation, but we are only interested in rough estimates here.} Then, the above is equivalent to \n\\begin{equation}\n\t\\frac{(aH)_0^{-1}}{|\\tau_e|} = \\frac{H_0^{-1}}{|\\tau_e|} \\sim e^{60}\\,,\n\\end{equation}\nrendering a conformal time at the end of inflation,\n\\begin{equation}\n\t\\tau_e \\sim - 4 \\times 10^{-9}\\ {\\rm sec} = - 1.465\\times 10^{34} \\Mp^{-1}\\,.\n\\end{equation}\nWith this, we have the necessary values to estimate the number density of squeezing-generated ALPs at any given era. The free parameters are the energy scale of inflation and the mass of the particles. However, if the latter is small in comparison to the former, the contribution from the ratio will be negligible and one can get away with working with the first term. \n\n\\subsection{$\\phi_A \\gamma_t \\rightarrow x \\overline{x}$}\n\nFermion production from the interaction of an ALP and a (test) photon $\\gamma_t$ is mediated by the Lagrangian\n\\begin{equation}\n\tq_x A_{\\mu} \\overline{\\psi} \\gamma^{\\mu} \\psi\\,.\n\\end{equation}\n\nLet us take the initial momenta of the particles to be\n\\begin{equation}\n\tk_a = E_a(1, \\cos \\theta, \\sin \\theta ,0), \\qquad k_{\\gamma} = E_{\\gamma}(1, 1, 0, 0)\\,.\n\\end{equation}\nWith a center of mass energy given by $E^2_{\\rm com} = 2 E_a E_{\\gamma} (1-\\cos \\theta)$, one can find the cross section of the interaction to be\n\\begin{equation}\n\t\\sigma = \\frac{1}{4E_a E_{\\gamma} |v_a - v_{\\gamma}|} \\frac{q_x^2 m_x^2}{2\\pi f_a^2} \\ln \\left(\\frac{E' + p'}{E'-p'}\\right)\\,,\n\\end{equation}\nwhere $E' = E_{\\rm com}\/2$, $p' = \\sqrt{(E')^2 - m_x^2}$ and $m_x$ denotes the mass of the fermions. 
Now, we introduce the variables $\\lambda = 2 m_x^2\/(E_a E_{\\gamma})$ and $y = \\cos \\theta$, such that the average over the initial axion momentum is \\cite{Conlon:2013isa}\n\\begin{widetext}\n\\begin{align}\n\t\\langle \\sigma v \\rangle & = \\frac{q_x^2 \\lambda}{16\\pi f_a^2} \\int_{-1}^{1-\\lambda} dy\\ \\ln \\left(\\frac{\\sqrt{1-y} + \\sqrt{1-y-\\lambda}}{\\sqrt{1-y} - \\sqrt{1-y-\\lambda}} \\right) \\nonumber \\\\\n\t& = \\frac{q_x^2 \\lambda}{16\\pi f_a^2} \\left[ - \\sqrt{4-2\\lambda} + (\\lambda - 2) \\ln (\\sqrt{2} - \\sqrt{2-\\lambda}) + 2 \\ln (\\sqrt{2-\\lambda} + \\sqrt{2}) - \\frac{1}{2} \\lambda \\ln \\lambda \\right] \\nonumber \\\\\n\t& = \\frac{q_x^2 \\lambda}{16\\pi f_a^2} \\left[ - \\sqrt{4-2\\lambda} + (4-\\lambda) \\ln (\\sqrt{2} + \\sqrt{2-\\lambda}) + \\left(\\frac{\\lambda}{2} - 2\\right) \\ln \\lambda \\right]\\,.\n\\end{align}\n\\end{widetext}\nNotice this expression tells us that $0 < \\lambda \\leq 2$. \n\nNext, we identify two contributions to the ALP number density during the matter dominated era: those produced during inflation and those produced during matter domination itself. 
\n\\begin{equation*}\n\tn = \\int dn = \\frac{1}{8\\pi^2} \\left[ \\int_{aH}^{-1\/\\tau_e} \\frac{dk}{a^3 \\tau_e^2} + \\int_{aH}^{(aH)_{eq}} \\frac{dk}{a^3 \\tau^2} \\right]\\,.\n\\end{equation*}\nThen, it is convenient to write every expression in terms of the variable $\\lambda$ introduced above, \n\\begin{equation}\nk = \\frac{2 a m_x^2}{\\lambda E_{\\gamma}} \\implies d k= - \\frac{2am_x^2}{E_{\\gamma}} \\frac{d\\lambda}{\\lambda^2}\\,,\n\\end{equation}\nsuch that the interaction rate is given by\n\\begin{widetext}\n\\begin{align}\n\t\\langle n \\sigma v \\rangle = & \\frac{q_x^2}{16\\pi f_a^2} \\frac{1}{8\\pi^2 a^3} \\frac{2am_x^2}{E_{\\gamma}} \\bigg\\{ \\int_{\\lambda_e}^{\\lambda_{\\tau}} \\frac{d\\lambda}{\\lambda \\tau_e^2} \\left[ - \\sqrt{4-2\\lambda} + (4-\\lambda) \\ln (\\sqrt{2} + \\sqrt{2-\\lambda}) + \\left(\\frac{\\lambda}{2} - 2\\right) \\ln \\lambda \\right] \\nonumber \\\\\n\t& + \\int_{\\lambda_{\\tau}}^{\\lambda_{eq}} \\frac{d\\lambda}{\\lambda \\tau^2} \\left[ - \\sqrt{4-2\\lambda} + (4-\\lambda) \\ln (\\sqrt{2} + \\sqrt{2-\\lambda}) + \\left(\\frac{\\lambda}{2} - 2\\right) \\ln \\lambda \\right] \\bigg\\}\\,,\n\\end{align}\n\\end{widetext}\nwhere \n\\begin{equation*}\n\t\\lambda_e = \\frac{2m_x^2}{E_{\\gamma}} a|\\tau_e|\\,, \\qquad \\lambda_{\\tau} = \\frac{2m_x^2}{H E_{\\gamma}}\\,, \\qquad \\lambda_{eq} = \\frac{2m_x^2}{H_{eq} E_{\\gamma}}\\,.\n\\end{equation*}\nThe kinematic constraints on $\\lambda$ place stringent bounds on the allowed values of the parameters of the model, in particular on the ratio $m_x^2\/E_{\\gamma}$. To see this, take the values at matter-radiation equality, where\n\\begin{equation*}\n\t\\lambda_e(a_{eq}) \\sim 10^{30} \\Mp^{-1} \\frac{2m_x^2}{E_{\\gamma}}\\,, \\quad \\lambda_{\\tau} (a_{eq}) = \\lambda_{eq} \\sim 10^{55} \\Mp^{-1} \\frac{2m_x^2}{E_{\\gamma}}\\,.\n\\end{equation*}\nSo, taking the maximum allowed value of $\\lambda$, we conclude that $m_x^2\/E_{\\gamma} \\sim 10^{-55} \\Mp$. 
This could be satisfied only for extremely light fermions (even for not-so-realistic values of the photon energy). Assuming these rather implausible conditions are satisfied, note that the first integral will dominate ($\\tau_e^{-2} \\gg \\tau^{-2}$), so we will focus on this one (for now). Then, the interaction rate is\n\\begin{equation}\n\t\\langle n \\sigma v \\rangle \\approx \\frac{3356 q_x^2}{16\\pi f_a^2} \\frac{(1+z_{eq})^2}{8\\pi^2} \\frac{2\\times 10^{-55}\\Mp}{\\tau_e^2}\\,, \n\\end{equation} \nwhere we have solved the integral numerically. Plugging in the numerical values of $\\tau_e$ and $z_{eq}$, we have that \n\\begin{equation}\n\t\\langle n \\sigma v \\rangle \\sim 10^{-112} \\Mp^3 \\frac{q_x^2}{128\\pi^2 f_a^2}\\,.\n\\end{equation}\nClearly $f_a$ would need to be abnormally small in order to have a non-negligible interaction rate. The only way to obtain non-negligible values would be to further suppress the ratio $m_x^2\/E_{\\gamma}$, such that the corresponding versions of $\\lambda$ approach $0$, where the integral actually diverges. Needless to say, even considering very light fermions, the energy of the photon would be out of reach (and can even become trans-Planckian). Indeed, for axions coming from string theory, we generically expect $f_a>\\Mp$ from the Weak Gravity Conjecture (WGC) \\cite{Heidenreich:2015nta, Rudelius:2014wla, Bachlechner:2015qja}. Interestingly, the WGC also constrains the charge-to-mass ratio of fermions to satisfy $q_x\/m_x < 1$ in Planck units. Excluding trans-Planckian photons on physical grounds, this means that $q_x$ is naturally suppressed when considering very small values of $m_x^2\/E_{\\gamma}$. 
Therefore, it seems that the WGC highly disfavours having a non-negligible value for this interaction rate.\n\n\\subsection{$\\phi_A \\rightarrow \\gamma \\gamma \\implies \\gamma_t \\gamma \\rightarrow \\gamma \\gamma$}\n\nIn this case, we will check how likely it is for the photon to interact with an environment composed of photons which are produced from the decay of an ALP. For this, we need the decay width of the process, which is \n\\begin{equation}\n\t\\Gamma_{A\\rightarrow \\gamma\\gamma} = \\frac{g_{A\\gamma\\gamma}^2 m_A^3}{64\\pi}\\,,\n\\end{equation}\nand, assuming $E\/N = 0$, this becomes\n\\begin{equation}\n\t\\Gamma_{A\\rightarrow \\gamma\\gamma} = 1.1 \\times 10^{-24}\\ {\\rm s}^{-1} \\left(\\frac{m_A}{\\rm eV}\\right)^5\\,.\n\\end{equation}\nWithout any further calculations, one can see that for masses $m_A \\sim {\\cal O}(1)\\ {\\rm eV}$ or less, the decay width is too small even considering the age of the Universe ($\\sim 10^{17}\\ {\\rm s}$), and so essentially no photons would be produced. Current bounds on the mass of the axion highly disfavour higher masses. This is why it is more appropriate to talk about ALPs, as they are more generic and well suited as a test laboratory. \n\nNaturally, the photons resulting from the decay of the ALP will not have the same momentum as the ALP itself. We label the resulting photons as $1'$ and $2'$, with an angle $\\theta'$ between their momenta. 
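The quoted rate can be reproduced from the width formula; in this sketch we additionally assume the QCD-axion-like relations $g_{A\gamma\gamma} = (\alpha/2\pi f_a)\,|E/N - 1.92|$ and $m_A f_a \simeq 5.7\times 10^{-3}\ {\rm GeV}^2$, which are our inputs rather than statements made in the text:

```python
import math

ALPHA = 1.0 / 137.036
HBAR_GEV_S = 6.582e-25     # hbar in GeV s
M_TIMES_FA = 5.7e-3        # m_A * f_a in GeV^2 (QCD-axion relation; an assumption here)

def width_a_to_gg(m_a_ev):
    """Gamma(A -> gamma gamma) in 1/s for a mass m_a_ev in eV, taking E/N = 0."""
    m = m_a_ev * 1e-9                          # eV -> GeV
    f_a = M_TIMES_FA / m                       # GeV
    g = ALPHA / (2.0 * math.pi * f_a) * 1.92   # |E/N - 1.92| = 1.92, in GeV^-1
    return g**2 * m**3 / (64.0 * math.pi) / HBAR_GEV_S

# width_a_to_gg(1.0) ~ 1.1e-24 s^-1, and the rate scales as m_A^5
```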
Then, one can easily show that \n\\begin{equation}\n\t\\langle \\cos \\theta' \\rangle = - \\frac{m_A^2}{4 p_{1'} p_{2'}}\\,, \\qquad \\langle p_{1'} p_{2'} \\rangle = \\frac{m_A^2}{4}\\,,\n\\end{equation}\nso that\n\\begin{equation}\n\tp_{1'}^2 + p_{2'}^2 = p_A^2 + \\frac{m_A^2}{2}\\,,\n\\end{equation}\nleading to the following direction-averaged momenta\n\\begin{align}\n\tp_{1'}^2 &= \\frac{m_A^2}{4} + \\frac{p_A^2}{2} \\left[ 1 + \\sqrt{1+\\frac{m_A^2}{p_A^2}} \\right]\\,, \\nonumber \\\\\n\tp_{2'}^2 &= \\frac{m_A^2}{4} + \\frac{p_A^2}{2} \\left[ 1 - \\sqrt{1+\\frac{m_A^2}{p_A^2}} \\right]\\,.\n\\end{align}\nThis leads to a not-so-simple distribution of photons. However, considering the range of masses that render a photon population at matter domination, the distribution can be somewhat simplified. To see this, first notice that the comoving momentum lies in the range $(aH)_{eq} \\lesssim k \\lesssim (aH)_{e}$, or plugging in numbers, $10^{-59}\\ \\Mp \\lesssim k \\lesssim 10^{-34}\\ \\Mp$. The physical momentum of massive particles varies with expansion in the same way as for massless particles ($p \\propto a^{-1}$). Thus, the physical momentum of ALPs should be in the range $10^{-56}\\ \\Mp \\lesssim p \\lesssim 10^{-31}\\ \\Mp$ (or $ 10^{-38}\\ {\\rm GeV} \\lesssim p \\lesssim 10^{-13}\\ {\\rm GeV}$). Even for the upper limit, the physical momentum of ALPs is rather negligible in comparison with the rest mass required for it to decay by the matter dominated era (${\\cal O}(10^3)\\ {\\rm eV}$). Thus, it is a good approximation to treat the ALPs as non-relativistic. Then, the momenta of the resulting photons are roughly\n\\begin{equation}\n\tp_{1'} \\approx \\frac{m_A + p_A}{2}\\,, \\qquad p_{2'} \\approx \\frac{m_A - p_A}{2}\\,,\n\\end{equation}\nwhere, for the sake of simplicity, we take $p_{1'} \\approx p_{2'} \\approx m_A\/2$.\n\nWith these considerations, one can compute the mean free path of a test photon interacting with an environment of photons produced by decaying ALPs. 
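A quick numerical check of the non-relativistic approximation used above, comparing the exact direction-averaged momenta with $(m_A \pm p_A)/2$:

```python
import math

def daughter_momenta(m_a, p_a):
    """Exact direction-averaged photon momenta for A -> gamma gamma (formulas above)."""
    s = math.sqrt(1.0 + m_a**2 / p_a**2)
    p1 = math.sqrt(m_a**2 / 4.0 + p_a**2 / 2.0 * (1.0 + s))
    p2 = math.sqrt(m_a**2 / 4.0 + p_a**2 / 2.0 * (1.0 - s))
    return p1, p2

m_a, p_a = 1.0, 1e-3        # p_A << m_A, as for ALPs by matter domination
p1, p2 = daughter_momenta(m_a, p_a)
# p1 ~ (m_a + p_a)/2 and p2 ~ (m_a - p_a)/2, i.e. both ~ m_a/2
```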
For starters, Euler and Kockel computed the cross section for photon-photon interactions \\cite{Euler:1935zz, Liang:2011sj}, \n\\begin{equation}\n\t\\sigma(\\gamma\\gamma \\rightarrow \\gamma\\gamma) = \\frac{937 \\alpha^4 \\omega^6}{10125 \\pi m^8},\n\\end{equation} \nwhere $\\alpha \\simeq 1\/137$ is the fine structure constant, $\\omega$ is the energy of the photons in the center-of-momentum frame, and $m$ is the mass of the electron. The momentum of each photon in the lab-frame can be written as\n\\begin{equation}\np_1^{\\mu} = E_{1} (1,1,0,0), \\qquad p_2^{\\mu} = E_2 (1, -\\cos \\theta, \\sin \\theta, 0),\n\\end{equation}\nsuch that\n\\begin{equation}\n\t\\omega = \\sqrt{E_1 E_2}\\; \\cos \\frac{\\theta}{2} .\n\\end{equation}\nNext, recalling the number density of ALPs (which translates into the number density of photons up to a factor of $2$), and considering that their mass is negligible in comparison to the energy scale of inflation, we have\n\\begin{equation}\\label{npeq}\n\tn = - \\frac{1}{4\\pi^2} \\left(\\frac{1}{a \\tau_e}\\right)^3\\,,\n\\end{equation}\nsuch that\n\\begin{equation}\n\t\\sigma n \\sim \\frac{937 \\alpha^4}{10125\\pi m^8} E_{\\gamma}^3 E_{1}^3 \\frac{2}{\\pi} \\frac{(1+z_{eq})^3}{4\\pi^2} |\\tau_e|^{-3}\\,,\n\\end{equation}\nwhere $E_{\\gamma}$ denotes the energy of the test photon (quantum state) and $E_{1}$ the energy of the environment photon. Then, taking $E_{\\gamma} = 10^{-17}\\ \\Mp$ and $E_{1} \\sim m_A = 10^{-24}\\ \\Mp\\ (1\\ {\\rm keV})$, the resulting mean free path is\n\\begin{equation}\n\t\\ell = (\\sigma n)^{-1} \t\\sim 10^{21}\\ {\\rm cm}\\,,\n\\end{equation}\nwhich should be compared to $H_{eq}^{-1} \\sim 10^{50}\\ {\\rm cm}$. Nevertheless, notice that we have taken a rather high energy for the test photon, so much so that the cross section formula may be invalid due to other processes being predominant. 
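Putting numbers into this estimate (the electron mass and Planck-unit conversion factors are standard inputs; everything else is read off from the text):

```python
import math

ALPHA = 1.0 / 137.036
MP_GEV = 2.435e18                  # reduced Planck mass in GeV
M_E = 0.511e-3 / MP_GEV            # electron mass in units of Mp
MP_INV_CM = 1.9733e-14 / MP_GEV    # one Mp^-1 of length in cm (~8.1e-33 cm)

E_gamma, E_1 = 1e-17, 1e-24        # test- and environment-photon energies [Mp]
z_eq, tau_e = 3400.0, 1.465e34     # redshift of equality, |tau_e| in Mp^-1

# sigma * n as written in the text, all in Planck units
sigma_n = (937.0 * ALPHA**4 / (10125.0 * math.pi * M_E**8)
           * E_gamma**3 * E_1**3
           * (2.0 / math.pi) * (1.0 + z_eq)**3 / (4.0 * math.pi**2)
           / tau_e**3)

mfp_cm = (1.0 / sigma_n) * MP_INV_CM   # ~ 1e21 cm, as quoted
```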
A more sensible value would be $E_{\\gamma} = 10^{-24}\\ \\Mp$, which yields\n\\begin{equation}\n\t\\ell = (\\sigma n)^{-1} \t\\sim 10^{42}\\ {\\rm cm}\\,.\n\\end{equation}\nThus, in principle photons could interact with other photons emerging from the decay of ALPs (we will check this more carefully below). However, it is instructive to compare the possibility of these interactions to the interaction with CMB photons. According to our estimation for the number density of photons created through the process $\\phi_A \\rightarrow \\gamma\\gamma$, by the time of photon decoupling we have $ n \\sim 20 \\ {\\rm cm}^{-3}$ ($600\\ {\\rm cm}^{-3}$ by matter-radiation equality), whereas for CMB photons $n_{pd} \\approx n_{\\gamma,0} (1+z_{pd})^3 \\sim 4 \\times 10^{11}\\ {\\rm cm}^{-3}$. Thus, the number density of ALP photons is negligible in comparison to CMB photons, so the latter are in principle a more important source of decoherence than the former after $z \\sim 1000$. Let us compute next the mean free path due to this interaction. \n\n\\subsection*{Mean free path}\n\nIn order to compute the mean free path (or redshift in a cosmological setting), we will use the optical depth, defined as \n\\begin{equation}\n\t{\\cal T} = \\int \\sigma j_{\\mu} dx^{\\mu}\\,,\n\\end{equation}\nwhere $\\sigma$ is the cross section of the interaction and $j_{\\mu}$ is the four-current \\cite{Ruffini:2015oha}. The integrals over the spatial dimensions vanish due to isotropy and homogeneity. \nThis will be used to compute in a more robust manner the mean free path for the interaction of a photon with others produced by the decay of an ALP. Moreover, we will incorporate the time dependence from the decay width. 
With these considerations, the optical depth is written as \n\\begin{widetext}\n\\begin{equation}\\label{opd}\n\t{\\cal T} = \\int_{t}^{t_0} dt\\ (1-e^{-\\Gamma t}) \\frac{937\\alpha^4 E_{\\gamma}^3 m_A^3}{10125\\pi m^8} \\int_{-1}^{1} d(\\cos \\theta) \\cos^6 \\frac{\\theta}{2}\\ \\frac{1}{4\\pi^2} \\left[\\int_{aH}^{-1\/\\tau_e} \\frac{dk}{\\tau_e^2} + \\int_{aH}^{(aH)_{eq}} \\frac{dk}{\\tau^2} \\right]\\,.\n\\end{equation}\n\\end{widetext}\n\nNext, we shall assume a matter dominated Universe throughout the entire propagation of the photon. This will be convenient in order to deal with the explicit time dependence in the expression. Thus, we have that\n\\begin{equation}\n\tt = \\frac{2}{3H_0} (1+z)^{-3\/2}\\,.\n\\end{equation}\n\nWe shall focus on the first term inside the brackets of \\eqref{opd}, which is dominant (by many orders of magnitude). In doing so, the optical depth is written as\n\\begin{align}\n{\\cal T} & = \\frac{937\\alpha^4 E_{\\gamma,0}^3 m_A^3}{10125\\pi m^8} \\frac{1}{8\\pi^2} \\times \\nonumber \\\\ & \\int_0^z \\frac{dz'\\ (1+z')^3}{H_0 (1+z')^{5\/2}} (1-e^{-\\Gamma t(z')})\\frac{1}{\\tau_e^2}\\left[-\\frac{1}{\\tau_e} - H_0 (1+z')^{1\/2}\\right]\\,. \\nonumber\n\\end{align}\nIn order to have numerical estimates we take $E_{\\gamma,0} = m_A = 10^{-24}\\ \\Mp$, such that\n\\begin{align}\n\t\\frac{937\\alpha^4 E_{\\gamma,0}^3 m_A^3}{10125\\pi m^8} \\frac{1}{8\\pi^2 H_0} \\simeq 10^{78}\\,.\n\\end{align}\nThe probability of the photon travelling without interacting with the environment is given by $P(z) = e^{- {\\cal T}(z)}$. For $z = 3400$, one gets ${\\cal T} \\sim 10^{-20}$, meaning that $P \\simeq 1$, and so there is no decoherence due to the interaction between the photon in some quantum state and the photons produced by the decay of expansion-generated ALPs. 
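The quoted prefactor $\simeq 10^{78}$ can be reproduced directly; in this sketch $H_0$ is converted to Planck units with standard conversion factors:

```python
import math

ALPHA = 1.0 / 137.036
MP_GEV = 2.435e18
M_E = 0.511e-3 / MP_GEV                     # electron mass [Mp]
H0 = 2.19e-18 * 6.582e-25 / MP_GEV          # H0 ~ 2.19e-18 s^-1, in Mp units

E_gamma0 = m_A = 1e-24                      # both 1e-24 Mp, as in the text

prefactor = (937.0 * ALPHA**4 * E_gamma0**3 * m_A**3
             / (10125.0 * math.pi * M_E**8)
             / (8.0 * math.pi**2 * H0))
# prefactor ~ 5e77, of the order of the 10^{78} quoted above
```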
One could entertain the idea of going further into the past (higher redshift) in order to obtain non-trivial probabilities (even though the single-fluid approximation would break down in the realistic setup). However, even for redshifts as high as $10^{20}$, the optical depth is just around $10^{-13}$, so that interactions remain highly unlikely. One could also argue that different input parameters could change this conclusion; however, smaller masses only lead to less efficient interactions and a slower decay, effectively increasing the mean free path. \n\nLet us emphasize that we have studied the potential interactions with particles that have been produced directly or indirectly due to the dynamics of the expansion of the Universe. In this sense, one could also ask if there can be interactions with a primordial population of ALPs (or their offspring). Such interactions can be potentially more important than the ones we have considered; however, it has been found that for realistic values of the parameters the growth of the photon field in particular is strongly suppressed \\cite{Garretson1992, Arza2020}, and thus by the time of decoupling this scenario should not be considered a source of decoherence. \n\nAn interesting thing to note is that the strength of the interaction, which we have considered in this work, has recently been constrained from the observation of the birefringence angle in the CMB data \\cite{Minami:2020odp}. It is also well-known that photons travelling significantly large distances, and interacting with magnetic fields, can lead to the production of ALPs (see, for instance, \\cite{DeAngelis:2008sk}). Conversions between photons and ALPs, in the presence of primordial magnetic fields, can also leave observable signatures in the CMB \\cite{Mirizzi:2009nq}, which, together with other cosmological considerations, has been used to constrain a considerable region of the parameter space \\cite{Irastorza:2018dyq}. 
In the future, we plan to combine the estimate coming from polarization data, and the requirement that ALPs from the early-universe do not decohere, to find new probes for the so-called cosmological axion background \\cite{Dror:2021nyr}.\n\n\n\n\\section{Decoherence through the cosmological medium}\n\nIn this section we will look at the potential sources of decoherence of a photon in some quantum state due to the interaction with other particles in the cosmological medium. Unlike for the estimates in the previous section, we know from observations the number density of the other species, with values that make interactions more likely. We already had a first glance at such interactions, like photon-photon scattering with CMB radiation. \n\n\\subsection{Abundance of particles}\\label{abpar}\n\nFirst, we shall compute the number density of photons. This is given by\n\\begin{align}\n& n_{\\gamma}=\\frac{8 \\pi}{c^{3}} \\int_{0}^{\\infty}\\left(\\frac{k T}{h}\\right)^{3} \\frac{x^{2} d x}{e^{x}-1} \\nonumber \\\\\n& \\implies n_\\gamma = 4.11 \\times 10^8 \\, (1+z)^3 \n\\, m^{-3}\\,,\n\\end{align}\nwhere the temperature of the CMB is $T_0 = 2.72548\\pm 0.00057$K. Other sources give far fewer photons. \n\nNext, we look at the abundance of baryons. The baryon-to-photon density is \n\\begin{equation}\n\\eta = \\frac{n_{\\rm b}}{n_\\gamma} = 2.75 \\times 10^{-8} \\Omega_b h^2\\,.\n\\end{equation}\nWith Planck's (2018) value of $\\Omega_{\\rm b} h^2 = 0.02237 \\pm 0.00015$ \\cite{Planck:2018vyg}, this gives an average baryon density today (if fully ionized) of\n\\begin{equation}\nn_{\\rm b,0} = 0.2526 \\, m^{-3}.\n\\end{equation}\nPrimordial nucleosynthesis and the CMB tell us that the Helium-4 mass fraction is about $Y_{\\rm P} = 0.246$. To a good approximation, all the mass is in protons and Helium --- everything else is negligible in terms of number density.\n\nThe number density of Helium is given by $Y_{\\rm P} = 4n_{\\rm He}\/(4n_{\\rm He}+n_{\\rm p})$. 
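The arithmetic in this subsection can be bundled into one short check (standard constants assumed; the variable names are ours):

```python
import math

ZETA3 = 1.2020569
K_B_EV = 8.617e-5          # Boltzmann constant, eV/K
HBARC_EV_M = 1.9733e-7     # hbar*c, eV m

T0 = 2.72548               # CMB temperature today, K
# n_gamma = (2 zeta(3) / pi^2) (k T / hbar c)^3
n_gamma = 2.0 * ZETA3 / math.pi**2 * (K_B_EV * T0 / HBARC_EV_M)**3  # m^-3, ~4.11e8

eta = 2.75e-8 * 0.02237    # baryon-to-photon ratio with Omega_b h^2 = 0.02237
n_b0 = eta * n_gamma       # m^-3, ~0.253

Y_P = 0.246
ratio_p_He = 4.0 * (1.0 - Y_P) / Y_P       # n_p / n_He, ~12.26
n_p0 = n_b0 / (1.0 + 4.0 / ratio_p_He)     # m^-3, ~0.190
```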
With $Y_{\\rm P}=0.246$, $n_{\\rm p}\/n_{\\rm He} = 12.26$. This means that the fraction of baryonic nuclei that is Helium-4 is 0.0754. We also have $n_{\\rm b} = n_{\\rm p} + 4 n_{\\rm He} = n_{\\rm p}(1+4\/12.26) = 1.33 n_{\\rm p}$.\n\nNext, the abundance of protons is related to that of baryons by\n\\begin{equation}\nn_{\\rm p,0} = \\frac{n_{\\rm b,0}}{1.33} \\implies n_{\\rm p} = 0.190 \\,(1+z)^3\\, m^{-3}\\,.\n\\end{equation}\n\nKarp and Sipser \\cite{KS} proposed two greedy algorithms for finding large matchings in sparse random graphs $G_{n,m}$, $m=cn\/2$, $c>0$. Let us call them Algorithms 1 and 2 as they are in \\cite{KS}. Algorithm 2 is simpler than Algorithm 1 and has been intensely studied: see for example Aronson, Frieze and Pittel \\cite{AFP}, Bohman and Frieze \\cite{BF}, Balister and Gerke \\cite{BG} or Bordenave and Lelarge \\cite{BL}. In particular, \\cite{AFP} together with Frieze and Pittel \\cite{FP} shows that w.h.p. Algorithm 2 finds a matching that is within $\\tilde{\\Theta}(n^{1\/5})$ of the optimum, when applied to $G_{n,m}$. Subsequently, Chebolu, Frieze and Melsted \\cite{CFP} showed how to use Algorithm 2 as a basis for a linear expected time algorithm, when $c$ is sufficiently large.\n\nAlgorithm 2 proceeds as follows (a formal definition of Algorithm 1 is given in the next section). While there are isolated vertices it deletes them. After that, while there are vertices of degree one in the graph, it chooses one at random and adds the edge incident with it to the matching and deletes the endpoints of the edge. Otherwise, if the current graph has minimum degree at least two, then it adds a random edge to the matching and deletes the endpoints of the edge. \n\nIn the same paper Karp and Sipser proposed another algorithm for finding a matching that also runs in linear time. This was Algorithm 1. The algorithm sequentially reduces the graph until it reaches the empty graph. Then it unwinds some of the actions that it has taken and grows a matching which is then output. Even though it was shown empirically to outperform Algorithm 2, it has not been rigorously analyzed. 
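A compact sketch of Algorithm 2 as just described (our own helper names; simple graphs, adjacency stored as sets):

```python
import random

def karp_sipser_2(adj, seed=0):
    """Greedy Karp-Sipser matching (Algorithm 2). adj: {v: set of neighbours}."""
    rng = random.Random(seed)
    adj = {v: set(ns) for v, ns in adj.items()}   # local mutable copy
    matching = []

    def remove(v):
        # delete v and all edges incident with it
        for u in adj.pop(v, set()):
            adj[u].discard(v)

    while adj:
        isolated = [v for v, ns in adj.items() if not ns]
        if isolated:                              # delete isolated vertices
            for v in isolated:
                adj.pop(v)
            continue
        pendants = [v for v, ns in adj.items() if len(ns) == 1]
        if pendants:                              # degree-1 vertex: its edge is safe
            v = rng.choice(pendants)
            u = next(iter(adj[v]))
        else:                                     # min degree >= 2: random edge
            v = rng.choice(sorted(adj))
            u = rng.choice(sorted(adj[v]))
        matching.append((v, u))
        remove(v)
        remove(u)
    return matching

# On the path 1-2-3-4 the pendant rule always yields a perfect matching.
```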
In this paper we analyze Algorithm 1 in the special case where the graph is random with a fixed degree sequence $3\\leq d(i)\\leq 4$ for $i=1,2,\\ldots,n$. We prove the following:\n\\begin{thm}\\label{main}\nLet $G$ be a random graph with degree sequence $3\\leq d(i)\\leq 4$ for $i=1,2,\\ldots,n$. Then \n\\begin{enumerate}[(a)]\n\\item Algorithm 1 finds a matching of size $n\/2-O(\\log n)$, w.h.p.\n\\item Algorithm 1 can be modified to find a (near) perfect matching in $O(n)$ time w.h.p. and in expectation.\n\\end{enumerate}\n\\end{thm}\nA (near) perfect matching is one of size $\\rdown{n\/2}$. Note that in the case of cubic graphs, it is known that they have (near) perfect matchings w.h.p., see Bollob\\'as \\cite{Bol}. Note also that it was shown by Frieze, Radcliffe and Suen \\cite{FRS} that w.h.p. Algorithm 2 finds a matching of size $n\/2-\\tilde{\\Theta}(n^{1\/5})$. \\footnote{Recently, the junior author has extended Theorem \\ref{main} to random $r$-regular graphs for all $3\\leq r=O(1)$.}\n\\section{The Algorithm}\nThe algorithm that is given in \\cite{KS} can be split into two parts. The first part sequentially reduces the graph until it reaches the empty graph. Then the second part reverses part of this reduction and grows a matching which is then output. \n\nTo reduce the graph, \n\\begin{enumerate}[(1)]\n\\item First, while there are vertices of degree 0 or degree 1 the algorithm removes them along with any edge incident to them. The edges removed at this stage will be part of the output matching. \n\\item Second, while there are vertices of degree 2 the algorithm contracts them along with their two neighbors. That is, the induced path $(x,y,z)$ is replaced by a single contracted vertex $y_c$ whose neighbors are those of $x,z$ other than $y$. The description in \\cite{KS} does not explicitly say what to do with loops or multiple edges created by this process. In any case, such creations are very rare. 
We say a little more on this in Section \\ref{details}.\n\nIn the unwinding, if we have so far constructed a matching containing an edge $\\set{y_c,\\xi}$ incident with $y_c$ and $\\xi$ is a neighbor of $x$ then in our matching we replace this edge by $\\set{x,\\xi}$ and $\\set{y,z}$. If there is no matching edge so far chosen incident with $y_c$ then we add an arbitrary one of $\\set{x,y}$ or $\\set{y,z}$ to our matching.\n\\item Finally, if the graph has minimum degree 3 then a random vertex is chosen among those of maximum degree and then a random edge incident to that vertex is deleted. These edges will not be used in the unwinding.\n\\end{enumerate} \n\\subsection{Idea of proof}\nNo mistakes are made while handling vertices of degree 0, 1 or 2. Each mistake decreases the size of the final matching produced by one from the maximum size. We will show that mistakes occur only at parts of the graph that have become denser than is likely. \n\nWe show that w.h.p. the maximum degree remains $O(\\log^{2}\\nu)$ where $\\nu$ is the number of vertices remaining, and so as long as $\\nu\\geq \\log n$, say, w.h.p. there will be no dense subgraphs and the algorithm will not make any mistakes. This explains the $O(\\log n)$ error term. Finally, we assert that removing an edge incident to a vertex of maximum degree will help to control the maximum degree, explaining this choice of edge to delete.\n\n\\subsection{Details}\\label{details}\nThe precise algorithm that we analyze is called {\\sc reduce-construct}. The algorithm description given in \\cite{KS} is not explicit about how to deal with loops and multiple edges, as they arise. We will remove loops immediately, but keep the multiple edges until removed by other operations.\n\nWe assume that our input (multi-)graph $G = G([n],E)$ has degree sequence $\\bd$ and is generated by the configuration model of Bollob\\'as \\cite{Bol}. 
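A minimal sketch of sampling from the configuration model just mentioned (our own function name; loops and multiple edges may appear, as discussed):

```python
import random

def configuration_multigraph(d, seed=0):
    """Sample a multigraph with degree sequence d via the configuration model:
    lay out d(v) 'points' per vertex and pair them up uniformly at random."""
    rng = random.Random(seed)
    points = [v for v, deg in enumerate(d) for _ in range(deg)]   # the point set W
    rng.shuffle(points)                # a uniform random pairing of the 2*nu points
    return [(points[2 * j], points[2 * j + 1]) for j in range(len(points) // 2)]

edges = configuration_multigraph([3, 3, 4, 3, 3])   # degrees in {3,4}, even sum
```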
Let $W=[2\\nu]$, $2\\nu=\\sum_{i=1}^n d(i)$, be our set of {\\em configuration points} and let $\\Phi$ be the set of {\\em configurations}, i.e. functions $\\phi:W \\mapsto [n]$ such that $|\\f^{-1}(i)|=d(i)$ for every $i \\in [n]$. Given $\\phi \\in \\Phi$ we define the graph $G_\\phi=([n],E_\\phi)$ where $E_\\phi=\\{\\{\\phi(2j-1),\\phi(2j)\\}: j\\in [\\nu] \\}$. Choosing a function $\\phi \\in \\Phi$ uniformly at random yields a random (multi-)graph $G_\\phi$ with degree sequence $\\bd$. \n\nIt is known that conditional on $G_\\f$ being simple, i.e. having no loops or multiple edges, it is equally likely to be any graph that has degree sequence $\\bd$. Also, if the maximum degree is $O(1)$ then the probability that $G_\\f$ is simple is bounded below by a positive quantity that is independent of $n$. Thus results on this model can be translated immediately to random simple graphs.\n\nWe split the {\\sc reduce-construct} Algorithm into the {\\sc reduce} and {\\sc construct} algorithms which we present separately.\n\n\\textbf{Algorithm} \\textsc{Reduce}:\n\nThe input is $G_0=G_\\f$, where we condition on there being no loops.\\\\ \n$i=\\hat{\\tau}=0$.\n\\\\ \\textbf{While} $G_i=(V_i,E_i) \\neq (\\emptyset,\\emptyset)$ do: \n\\begin{itemize}\n\\item[]\\textbf{If} $\\delta(G_i)=0$: Perform a {\\bf vertex-0 removal}: choose a vertex of degree 0 and remove it from $V_i$.\n\\item[]\\textbf{Else if} $\\delta(G_i)=1$: Perform a {\\bf vertex-1 removal}: choose a random vertex $v$ of degree 1 and remove it along with its neighbor $w$ and any edge incident to $w$. \n\\item[]\\textbf{Else if} $\\delta(G_i)=2$: Perform a {\\bf contraction}: choose a random vertex $v$ of degree 2. Then replace $\\{v\\} \\cup N(v)$ ($v$ and its neighbors $N(v)$) by a single vertex $v_c$. For $u\\in V \\setminus (\\{v\\} \\cup N(v))$, $u$ is joined to $v_c$ by as many edges as there are in $G_i$ from $u$ to $\\{v\\} \\cup N(v)$. 
Remove any loops created.\n\\item[]\\textbf{Else if } $\\delta(G_i)\\geq 3$: Perform a {\\bf max-edge removal}: choose a random vertex of maximum degree and remove a random edge incident with it.\n\\\\ \\textbf{End if}\n\\item[]\\textbf{If} the last action was a max-edge removal, say the removal of edge $\\{u,v\\}$ and in the current graph we have $d(u)=2$ and $u$ is joined to a single vertex $w$ by a pair of parallel edges then perform an {\\bf auto correction contraction}: contract $u,v$ and $w$ into a single vertex. Remove any loops created.\n\\\\ \\textbf{End If}\n\\item[] Set $i=i+1$ and let $G_i$ be the current graph.\n\\end{itemize}\n\\textbf{End While}\n\\\\Set $\\hat{\\tau}=i$.\n\nObserve that we only reveal edges (pairs of the form $(\\phi(2j-1),\\phi(2j)): j\\in [\\nu]$) of $G_\\phi$ as the need arises in the algorithm. Moreover the algorithm removes any edges that are revealed. Thus if we let $\\bd(i)$ be the degree sequence of $G_i$ then, given $\\bd(i)$ and the actions performed by {\\sc reduce} until it generates $G_i$ we have that $G_i$ is uniformly distributed among all configurations with degree sequence $\\bd(i)$ and no loops.\n\nCall a contraction that is performed by {\\sc reduce} and involves only 2 vertices \\emph{bad} i.e. one where we contract $u,v$ to a single vertex given that $G$ contains a parallel pair of the edge $\\set{u,v}$ and $u$ has degree 2. Otherwise call it \\emph{good}. Observe that a bad contraction can potentially be a mistake while a good contraction is never a mistake. By introducing the auto correction contraction we replace the bad contraction of the vertex set $\\{u,w\\}$, as presented in the description of {\\sc reduce}, with the good contraction of the vertex set $\\{v,u,w\\}$. Note that we do not claim that all bad contractions can be dealt with in this way. 
We only show later that other instances of bad contractions are very unlikely.\n\nWe now describe how we unwind the operations of {\\sc reduce} to provide us with a matching.\n\n\\textbf{Algorithm} {\\sc construct}:\n\n\\textbf{Input:} $G_0,G_1,...,G_{\\hat{\\tau}}$, \nthe graph sequence produced by {\\sc reduce},\nan integer $j\\in \\{0,1,...,\\hat{\\tau}\\}$ and a matching $M_{j}$ of $G_j$. (We allow the possibility of stopping {\\sc reduce} before it has finished.\nIf we do so when $|V(G_j)|=\\Theta(n^{2\/3})$ then, given that $G_j$ has a perfect matching w.h.p., we can use the $O(|E||V|^{1\/2})$ algorithm of \\cite{MV} applied to $G_j$ to find a perfect matching $M_j$ of $G_j$ in $O(n)$ time. Thereafter we can use {\\sc construct} to extend $M_j$ to a matching of $G_0$.)\n\\\\ \\textbf{For $i=1$ to $j$ do}: \n\\begin{itemize}\n\\item[]\\textbf{If} $\\delta(G_{j-i})=0$: Set $M_{j-i}=M_{j-i+1}$.\n\\item[]\\textbf{Else if} $\\delta(G_{j-i})=1$: Let $v$ be the vertex of degree 1 chosen at the $(j-i)$th step of {\\sc reduce} and let $e$ be the edge that is incident to $v$ in $G_{j-i}$. Then set $M_{j-i}=M_{j-i+1}\\cup\\{e\\}$.\n\\item[]\\textbf{Else if} $\\delta(G_{j-i})=2$: Let $v$ be the vertex of degree 2 selected in $G_{j-i}$. If $|N(v)|=1$, i.e. $v$ is joined to a single vertex by a double edge in $G_{j-i}$, set $M_{j-i}=M_{j-i+1}$. Else let $N(v)=\\{u,w\\}$ and $v_c$ be the new vertex resulting from the contraction of $\\{v,u,w\\}$. If $v_c$ is not covered by $M_{j-i+1}$ then set $M_{j-i}=M_{j-i+1}\\cup\\{\\{v,u\\}\\}$. Otherwise assume that $\\{v_c,z\\}\\in M_{j-i+1}$ for some $z\\in V(G_{j-i})$. Without loss of generality assume that in $G_{j-i}$, $z$ is connected to $u$. 
Set\n$M_{j-i}=(M_{j-i+1}\\cup \\{\\{v,w\\},\\{u,z\\}\\})\\setminus \\{\\{v_c,z\\}\\}$.\n\\item[]\\textbf{Else if} $\\delta(G_{j-i})\\geq 3$: Set $M_{j-i}=M_{j-i+1}$.\n\\end{itemize}\n\\textbf{End For}\n\nFor a graph $G$ and $j\\in \\{0,1,...,\\hat{\\tau}\\}$ denote by $R_0(G,j)$ and $R_{2b}(G,j)$ the number of times that {\\sc reduce} has performed a vertex-0 removal and a bad contraction respectively until it generates $G_j$. For a graph $G$ and a matching $M$ denote by $\\kappa(G,M)$ the number of vertices that are not covered by $M$. The following lemma determines the quality of the output of the {\\sc reduce-construct} algorithm.\n\\begin{lem}\\label{correction}\nLet $G$ be a graph and $M$ be the output of the {\\sc reduce-construct} algorithm applied to $G$. Then, for $j\\geq 0$,\n\\beq{true}{\n\\kappa(G,M)=R_0(G,j)+R_{2b}(G,j)+\\kappa(G_j,M_j).\n}\n\\end{lem}\n\\begin{proof}\nLet $G=G_0,G_1,...,G_{\\hat{\\tau}}$ be the sequence of graphs produced by {\\sc reduce} and let $M_j,M_{j-1},...,M_0=M$ be the sequence of matchings produced by {\\sc construct}. Let $R_0(G,j,i)$ and $R_{2b}(G,j,i)$ be the number of vertex-0 removals and bad contractions performed by {\\sc reduce} going from $G_{j-i}$ to $G_j$. We will prove that for every $0\\leq i\\leq j$, \n\\beq{troo}{\n\\kappa(G_{j-i},M_{j-i})=R_0(G,j,i)+R_{2b}(G,j,i)+\\kappa(G_j,M_j).\n} \nTaking $i=j$ yields the desired result.\n\nFor $i=0$, equation \\eqref{troo} holds as $R_0(G,j,0)=R_{2b}(G,j,0)=0$. Assume inductively that \\eqref{troo} holds for $i=k-1$ where $k$ satisfies $0<k\\leq j$. The action taken by {\\sc reduce} in passing from $G_{j-k}$ to $G_{j-k+1}$ leaves the number of uncovered vertices unchanged unless it is a vertex-0 removal or a bad contraction, in which case the number of uncovered vertices increases by exactly one; hence \\eqref{troo} also holds for $i=k$, and the induction is complete.\n\\end{proof}\n\n{\\bf Hyperactions of Interest}\\\\\nFor the analysis of {\\sc reduce} we consider 7 distinct hyperactions (sequences of actions) which we call hyperactions of Type 1,2,3,4,5,33 and 34 respectively. 
In the case that the maximum degree is larger than 3 we consider the following hyperactions (we have put some diagrams of these hyperactions at the end of the paper):\n\\begin{itemize}\n\\item[]\\textbf{ Type 1}: A single max-edge removal.\n\\item[]\\textbf{ Type 2}: A max-edge removal followed by an auto correction contraction.\n\\item[]\\textbf{ Type 3}: A single max-edge removal followed by a good contraction. \n\\item[]\\textbf{ Type 4}: A single max-edge removal followed by 2 good contractions.\nIn this case we add the restriction that there are exactly 6 distinct vertices $v,u,x_1,x_2,w_1,w_2$ involved in this hyperaction and they satisfy the following:\n(i) $v$ is a vertex of maximum degree, it is adjacent to $u$ and $\\{u,v\\}$ is removed during the max-edge removal, (ii) $d(u)=d(x_1)=d(x_2)=3$, (iii) $N(u)=\\{v,x_1,x_2\\}$, $N(x_1)=\\{u,x_2,w_1\\}$ and $N(x_2)=\\{u,x_1,w_2\\}$. (Thus $\\set{u,x_1,x_2}$ form a triangle.) The two contractions have the same effect as contracting $\\{u,x_1,x_2,w_1,w_2\\}$ into a single vertex.\n\\end{itemize}\n In the case that the maximum degree equals 3 we also consider the following hyperactions:\n\\begin{itemize}\n\\item[] \\textbf{Type 5}: A max-edge removal followed by 2 good contractions that interact.\nIn this case the 5 vertices $u,v,x_1,x_2,z$ involved in the hyperaction satisfy the following:\n(i) $\\{u,v\\}$ is the edge removed by the max-edge removal, (ii) $N(v)=\\{u,x_1,x_2\\}$, $N(u)=\\{v,x_1,z\\}$, (so $\\set{u,v,x_1}$ form a triangle), (iii) $|(N(x_1) \\cup N(x_2) \\cup N(z)) \\setminus \\set{u,v,x_1,x_2,z}|\\geq 3$. This hyperaction has the same effect as contracting all of $\\set{u,v,x_1,x_2,z}$ into a single vertex.\n\\item[] \\textbf{Type 33}: A max-edge removal followed by 2 good contractions that do not interact. There are 6 distinct vertices involved $v,v_1,v_2,u,u_1,u_2$. During the max-edge removal $\\{u,v\\}$ is removed. 
Thereafter each of the 2 sets of vertices $\\{v,v_1,v_2\\}$ and $\\{u,u_1,u_2\\}$ is contracted to a single vertex.\n\\item[] \\textbf{Type 34}: A max-edge removal followed by 3 good contractions. There are 8 distinct vertices involved $v,v_1,v_2,u,u_1,u_2,w_1,w_2$. During the max-edge removal $\\{u,v\\}$ is removed. The conditions satisfied by $v,u,u_1,u_2,w_1,w_2$ and the actions that are performed on them are similar to the ones in a hyperaction of Type 4. The difference now is that $v$ has degree 3 before the hyperaction. In addition $\\{v,v_1,v_2\\}$ is contracted into a single vertex.\n\\end{itemize}\nWe divide Hyperactions of Type 3 into three classes. Assume that during a Hyperaction of Type 3 the set $\\{v,a,b\\}$ is contracted, $v$ is the contracted vertex and $v_c$ is the new vertex. We say that such a Hyperaction is of \\textbf{Type 3a} if $d(v_c)=d(a)+d(b)-2$, is of \\textbf{Type 3b} if $d(v_c)=d(a)+d(b)-4$ and is of \\textbf{Type 3c} if $d(v_c)<d(a)+d(b)-4$.\n\nFor a (multi-)graph $G$ we use the following notation:\n\\begin{itemize}\n\\item $n(G):=|V(G)|$ and $e(G):=|E(G)|$,\n\\vspace{-2mm}\n\\item $n_j(G)$: the number of vertices of degree $j$ in $G$, and $p_j(G):=n_j(G)\/n(G)$,\n\\vspace{-2mm}\n\\item $p_{>j}(G):=\\sum_{h>j} p_h(G)$,\n\\vspace{-2mm}\n\\item $ ex_\\ell(G):=\\sum_{v \\in V(G)} [d(v)-\\ell]\\mathbb{I}(d(v)>\\ell)$. \n\\end{itemize}\nWe denote by $n_i,e_i, n_{j,i},p_{j,i},p_{>j,i}$ and $ex_{\\ell,i}$ the corresponding quantities in relation to $\\Gamma_i$. For a vertex $v$, an integer $K$ and $b\\in\\{0,1\\}$, we let $\\cB_K(G,v,b)$ denote the event that $G$ contains a set of $a\\leq K$ vertices that includes $v$ and spans at least $a+b$ edges.\n\\begin{lem}\\label{dence}\nLet $K$ be an arbitrary fixed positive integer. Let $\\bd$ be a degree sequence of length $n$ that satisfies $ex_\\ell(G)\\leq \\log^2 n$ for some $3\\leq \\ell=O(1)$. Let $G$ be a random graph with degree sequence $\\bd$ and no loops, let $v$ be a vertex of $G$ and let $b \\in \\{0,1\\}$. Then $\\mathbf{Pr}(\\cB_K(G,v,b))=o(n^{-0.9-b})$.\n\\end{lem}\n\\begin{proof}\nLet $G$ be a random graph with degree sequence $\\bd$. The fact that $ex_\\ell(G)\\leq \\log^2 n$ implies that $G$ has no loops with probability bounded below by a positive constant (see for example \\cite{FK}). Hence the condition of having no loops can be ignored in the proof that events have probability $o(1)$. 
Also it implies that $\\Delta=\\Delta(G)\\leq \\ell+ ex_\\ell(G) \\leq \\ell+\\log^2 n$.\n\nLet $2m=\\sum_{i=1}^n d(i)\\leq \\ell n+ex_{\\ell}(G) =\\Theta(n)$. Then for a vertex $v$ and for $b=0,1$ the probability that $G$ spans a subgraph that covers $v$, spans $a\\leq K$ vertices and $a+b$ edges satisfies\n\\begin{align*}\n&\\leq_O \\sum_{a=2}^{K} \\binom{n}{a-1} (\\Delta a)^{2(a+b)} \\frac{(2m-2(a+b))!}{2^{m-(a+b)}[m-(a+b)]!}\\times \\frac{2^{m}m!}{(2m)!}\\\\ \n&\\leq_O \\sum_{a=2}^{K} n^{a-1} \\Delta^{2(a+b)} \\frac{m(m-1)...(m-(a+b)+1)}{2m(2m-1)...(2m-2(a+b)+1)}\\\\ \n&\\leq_O \\sum_{a=2}^{K} n^{a-1} \\Delta^{2(a+b)} m^{-(a+b)}\\\\\n&=o(n^{-0.9-b}).\n\\end{align*}\n\\end{proof}\nWe will drop the subscript $K$ from $\\cB$. Taking $K=20$ will easily suffice for the rest of the proof. Thus $\\cB(G,v,b)=\\cB_{20}(G,v,b)$.\n\\subsection{Proof of Lemma \\ref{hyper}}\nLet $v$ be the vertex of maximum degree chosen by {\\sc reduce} and let $u$ be the vertex adjacent to $v$ such that $\\{u,v\\}$ is chosen for removal. We will show that if $\\cB(\\Gamma_i,v,1)$ does not occur then {\\sc reduce} performs one of the hyperactions given in Section \\ref{list}.\nAlso observe that if a hyperaction of Type 2, 3b, 4, 5 or 34 occurs then $\\cB(\\Gamma_i,v,0)$ occurs. Lemma \\ref{dence} states that $\\mathbf{Pr}(\\cB(\\Gamma_i,v,0))=o(|V(\\Gamma_i)|^{-0.9})$, thus proving the second part of Lemma \\ref{hyper}. Also note that if a hyperaction of Type 3c occurs, corresponding to a bad hyperaction, then $\\cB(\\Gamma_i,v,1)$ occurs. Lemma \\ref{dence} states that $\\mathbf{Pr}(\\cB(\\Gamma_i,v,1))=o(|V(\\Gamma_i)|^{-1.9})$.\n\n{\\bf Case A: $d(v) \\geq 4$.}\\\\\nIf $d(u)\\geq 4$ then a hyperaction of Type 1 is performed. \nThus assume $d(u)=3$ and consider the cases where $|N(u)|=1,2,3$ (recall that we allow parallel edges but not self-loops). 
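The union bound at the end of the proof of Lemma \ref{dence} above rests on the sum $\sum_{a=2}^{K} n^{a-1}\Delta^{2(a+b)}m^{-(a+b)}$ being $o(n^{-0.9-b})$ when $\Delta$ is polylogarithmic. A small log-space sketch illustrates this numerically; the concrete choices $\Delta=(\ln n)^2$, $m=1.5n$ and $K=20$ are illustrative assumptions only, not values fixed by the text:

```python
import math

def log_bound(ln_n, K=20, b=0):
    """ln of sum_{a=2}^K n^(a-1) * Delta^(2(a+b)) * m^(-(a+b)),
    with the illustrative assumptions Delta = (ln n)^2 and m = 1.5 n
    (e.g. a 3-regular degree sequence); computed entirely in log space."""
    ln_delta = 2.0 * math.log(ln_n)      # ln Delta
    ln_m = math.log(1.5) + ln_n          # ln m
    terms = [(a - 1) * ln_n + 2 * (a + b) * ln_delta - (a + b) * ln_m
             for a in range(2, K + 1)]
    mx = max(terms)                      # log-sum-exp for numerical stability
    return mx + math.log(sum(math.exp(t - mx) for t in terms))

# gap = ln(bound) - ln(n^{-0.9-b}) should tend to -infinity; the polylog
# factors die out extremely slowly, hence the astronomically large ln n values
for b in (0, 1):
    gaps = [log_bound(ln_n, b=b) + (0.9 + b) * ln_n for ln_n in (1e4, 1e5, 1e6)]
    assert gaps[2] < gaps[1] < gaps[0] < 0
```

The check also makes visible why such $o(\cdot)$ claims cannot be eyeballed at moderate $n$: the polylogarithmic factors dominate until $\ln n$ is in the thousands.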
\n\n{\\textbf{Case A1: $|N(u)|=1$}}.\\\\\n$u$ is connected to $v$ by 3 parallel edges and so $\\cB(\\Gamma_i,v,1)$ occurs.\n\n{\\textbf{Case A2: $|N(u)|=2$}}.\\\\\nLet $N(u)=\\{v,u'\\}$ and $S=\\set{u,u',v}$. Let $T=(N(u')\\cup N(v) )\\setminus S$. If $|T|\\leq 2$ then we have $d(S)\\geq 10$ and either $S$ spans more than 3 edges or $S \\cup T$ spans at least 7 edges. In both cases $\\cB(\\Gamma_i,v,1)$ occurs. Assume then that $|T|\\geq 3$. Now exactly one of $\\set{u,u'}$, $\\set{u,v}$ is repeated, else $\\cB(\\Gamma_i,v,1)$ occurs. If $\\set{u,u'}$ is repeated then we perform an auto correction contraction resulting in a hyperaction of Type 2. If $\\set{u,v}$ is repeated then we contract the remaining path $(u',u,v)$. Hence we have performed a hyperaction of Type 3b.\n\n{\\textbf{Case A3: $|N(u)|=3$.}}\\\\\nLet $N(u)=\\set{v,x_1,x_2}$ and $T=(N(x_1)\\cup N(x_2))\\setminus\\{u,x_1,x_2\\}$. \n\n{\\textbf{Sub-case A3a: $|T|\\leq 1$}}.\\\\\n$\\{u,x_1,x_2\\} \\cup T$ spans at least $(d(u)-1+d(x_1)+d(x_2)+|T|)\/2 \\geq 4+|T|\/2$ edges and the event $\\cB(\\Gamma_i,v,1)$ occurs.\n\n{\\textbf{Sub-case A3b: $T=\\{w_1,w_2\\}$}}.\\\\\nIf $v$ is at distance less than 6 from $\\{u\\} \\cup N(u)$ in $\\Gamma_i\\setminus \\{u,v\\}$ then $\\cB(\\Gamma_i,v,1)$ occurs. To see this consider the subgraph $H$ spanned by $\\{u,v,x_1,x_2,w_1,w_2,y\\}$ and the vertices on the shortest path $P$ from $v$ to $\\{u\\} \\cup N(u)$ in $\\Gamma_i\\setminus \\{u,v\\}$. Here $y$ is the neighbor of $v$ on $P$. $H$ must contain at least two distinct cycles: one contained in $\\set{u,x_1,x_2,w_1,w_2}$ and one containing the edge $\\{u,v\\}$ and the path $P$. If there is no edge from $x_1$ to $x_2$ then $\\{v,u,x_1,x_2,w_1,w_2\\}$ spans at least 7 edges and so $\\cB(\\Gamma_i,v,1)$ occurs.\n\nThus we may assume that $N(x_1)=\\{u,x_2,w_1\\}$, $N(x_2)=\\{u,x_1,w_2\\}$ and $v\\notin \\{w_1,w_2\\}\\cup N(w_1) \\cup N(w_2)$. 
We may also assume that $\\set{w_1,w_2}$ is not an edge of $\\Gamma_i$, for otherwise $\\{u,x_1,x_2,w_1,w_2\\}$ contains two distinct cycles and $\\cB(\\Gamma_i,v,1)$ occurs. The algorithm {\\sc reduce} proceeds by contracting $u,x_1,x_2$ into a single vertex $x'$. $x'$ has degree 2 and the algorithm proceeds by performing a contraction of $x',w_1,w_2$ into a new vertex $w'$. Let $S=(N(w_1)\\cup N(w_2))\\setminus \\{x_1,x_2\\}$. If $|S|\\leq 3$ then $\\cB(\\Gamma_i,u,1)$ occurs. To see this observe that $w_1,w_2$ must then have a common neighbor, $w_3$ say. Consider the subgraph $H$ spanned by $\\{u,x_1,x_2,w_1,w_2,w_3\\}$. $H$ contains at least 7 edges and 6 vertices. If $|S|\\geq 4$ then the new vertex has degree 4 and the sequence of actions taken by {\\sc reduce} corresponds to a hyperaction of Type 4.\n\\vspace{3mm}\n\\\\{\\textbf{Sub-case A3c: $|T| \\geq 3$.}}\nAfter the removal of $\\{v,u\\}$ we contract $\\{u,x_1,x_2\\}$ into a single vertex of degree at least 3, hence a hyperaction of Type 3 is performed.\n\\vspace{3mm}\n\\\\\n{\\bf Case B: $d(v)=d(u)=3$.}\\\\\nLet $\\Gamma'=\\Gamma_i\\setminus \\{e\\}$ where $e=\\{v,u\\}$.\n\\vspace{3mm}\n\\\\{\\textbf{Case B1: In $\\Gamma'$, $u$ and $v$ are at distance at least 4.}}\\\\ \nIf $|N(N(u))|\\leq 3$ and $|N(N(v))| \\leq 3$ then $\\cB(\\Gamma_i,v,1)$ occurs. Thus we can assume that $|N(N(u))|=4$ and\/or $|N(N(v))|=4$. If $|N(N(u))|=|N(N(v))|=4$ then {\\sc reduce} will perform 2 good contractions and this amounts to a hyperaction of Type 33. Assume then that $|N(N(u))|=4$ and that $|N(N(v))|\\leq 3$. If $|N(N(v))|=3$ then again {\\sc reduce} will perform 2 good contractions amounting to a hyperaction of Type 33. If $|N(N(v))|=2$, so that $v$ is in a triangle, then {\\sc reduce} will perform a hyperaction of Type 34. Finally, if $|N(N(v))|=1$ then $\\cB(\\Gamma_i,v,1)$ occurs.\n\n{\\textbf{Case B2: In $\\Gamma'$, $u$ and $v$ are at distance 3.}}\\\\\nIn $\\Gamma_i$ there is a cycle $C$ of length 4 containing $u,v$. 
If in $\\Gamma'$ we find that $|N(N(u))|\\leq 3$ or $|N(N(v))| \\leq 3$ or $|N(u)\\cap N(N(v))|>1$ or $|N(v)\\cap N(N(u))|>1$ then $\\cB(\\Gamma_i,v,1)$ occurs. This is because the graph spanned by $\\{u,v\\} \\cup N(u)\\cup N(v) \\cup N(N(u)) \\cup N(N(v))$ in $\\Gamma_i$ will contain a cycle distinct from $C$. Assume this is not the case. Then after the max-edge removal of $\\{u,v\\}$ we have a contraction of $\\{u\\} \\cup N(u)$ followed by a contraction of $\\{v\\} \\cup N(v)$. Observe that neither contraction reduces the size of $N(N(u))$ or $N(N(v))$. Thus {\\sc reduce} performs a hyperaction of Type 33.\n\\vspace{3mm}\n\\\\{\\textbf{Case B3: In $\\Gamma'$, $u$ and $v$ are at distance 2.}}\\\\\nIn the case that $u,v$ have 2 common neighbors in $\\Gamma'$ we see that $\\cB(\\Gamma_i,v,1)$ occurs. Assume then that they have a single common neighbor $x_1$. Let $z,x_2$ be the other neighbors of $u,v$ respectively. Then either $\\cB(\\Gamma_i,v,1)$ occurs or {\\sc reduce} performs a hyperaction of Type 5.\n\\vspace{3mm}\n\\\\{\\textbf{Case B4: In $\\Gamma'$, $u$ and $v$ are at distance 1.}}\\\\\nSo here we have that $\\set{u,v}$ is a double edge in $\\Gamma_i$. Let $x,y$ be the other neighbors of $u,v$ respectively in $\\Gamma_i$. Assuming that $\\cB(\\Gamma_i,v,1)$ does not occur, {\\sc reduce} performs a max-edge removal followed by a single good contraction and this will be equivalent to a hyperaction of Type 3, involving the contraction of one of $\\{x,u,v\\}$ or $\\{u,v,y\\}$.\n\\qed\n\\subsection{Proof of Lemma \\ref{drift}}\nThe inequality $ex_{4,i} \\leq \\log^2 n_i$ implies that $p_{j,i}\\leq \\log^2n_i\/2e_i =o(n_i^{-0.95})$ for $5\\leq j \\leq \\Delta$. It also implies that the maximum degree $\\Delta$ of $\\Gamma_i$ satisfies $\\Delta=O(\\log^2 n_i)$. \n\nIn the case that $\\cB(\\Gamma_i,v,1)$ occurs, i.e. 
with probability $O(n_i^{-1.9})$ (see Lemma \\ref{hyper}) \n$$|ex_{4,i}-ex_{4,i+1}|\\leq 2e_i \\leq 4n_i+ex_{4,i} \\leq 5n_i.$$ \nObserve that if a hyperaction of Type 2, 3b, 4, 5 or 34 takes place then $v$ lies on a subgraph with $a<12$ vertices and $a$ edges. Lemma \\ref{hyper} states that this occurs with probability $o(n_i^{-0.9})$. For all the above hyperactions we have $|ex_{4,i+1}-ex_{4,i}|\\leq 2$. This follows from the fact that performing a contraction can increase $ex_4$ by at most 2. This is because the initial vertices with degrees say $2,d_1,d_2$ contributed $\\max\\{0,d_1-4\\}+\\max\\{0,d_2-4\\}$ to $ex_{4,i}$ while the new contracted vertex has degree $d_1+d_2-2$ and contributes $\\max\\{ 0, d_1+d_2-6 \\}$ to $ex_{4,i+1}$. Moreover observe that if a hyperaction of Type 5, 33 or 34 occurs then all the vertices involved have degree 3. If $\\cB(\\Gamma_i,v,1)$ does not occur then a hyperaction of Type 5 will increase $ex_{4,i}$ by 1 (since there will be one new vertex of degree 5). Hyperactions of Type 33 or Type 34 do not change $ex_4$. Thus it remains to examine the effects of a hyperaction of Type 1 or of Type 3a. \n\nIf $ex_{4,i}=0$ then a hyperaction of Type 1 does not increase $ex_{4,i}$ while a hyperaction of Type 3 could increase it by 2. \n\nIf $ex_{4,i}>0$ then the $i^{th}$ hyperaction starts with a max-edge removal. \\\\\n{\\bf Case 1:} If the smaller degree vertex involved is of degree larger than 3, then this results in a hyperaction of Type 1. This happens with probability $(1+o(1))(1-p_{3,i})$. Furthermore in this case $ex_{4,i}-2\\leq ex_{4,i+1}\\leq ex_{4,i}-1$.\n (The $(1+o(1))$ factor arises because of $O(1)$ degree changes during the hyperaction.)\\\\\n{\\bf Case 2:} If the smaller degree vertex $v$ involved is of degree 3 then a contraction is performed. This occurs with probability $(1+o(1))p_{3,i}$. If the contraction results in a vertex of degree at least 3 then we have a hyperaction of Type 3a, and not of Type 1. 
Also this is the only way for a hyperaction of Type 3a to occur. Let the other two vertices involved in the contraction be $a,b$ and have degrees $d_a,d_b$ respectively. Now $d_a=d_b=3$ with probability $(1+o(1))p_{3,i}^2$, resulting in a new vertex that has degree at most 4. In this case, $ex_{4,i+1}-ex_{4,i}= -1$ (the $-1$ here originates from the max-edge removal). Else if $d_a=3,d_b=4$ then we have $-1\\leq ex_{4,i+1}-ex_{4,i}\\leq -1+1=0$, and this occurs with probability $(1+o(1))2p_{3,i}p_{4,i}$. Else if $d_a=d_b=4$ then we have $-1\\leq ex_{4,i+1}-ex_{4,i}\\leq -1+2= 1$ (this occurs with probability $(1+o(1))p_{3,i}p_{4,i}^2$). Otherwise a vertex of degree at least 5 is involved and given our upper bound on $ex_{4,i}$, this happens with probability $o(1)$. If the event $\\cB(\\Gamma_i,v,1)$ does not occur then the new contracted vertex has degree $d_a+d_b-2$. Hence $-1\\leq ex_{4,i+1} - ex_{4,i} \\leq -1+2=1.$ \n\nThus in all cases $|ex_{4,i+1} - ex_{4,i}|\\leq 2$ and if $ex_{4,i}>0$ then\n\\begin{align*}\n\\ex[ex_{4,i+1}-ex_{4,i}| \\Gamma_i] &\\leq (5n_i)\\cdot o(n_i^{-1.9})+ 2n_i^{-0.95} - (1-p_{3,i}) -p_{3,i}^3 + p_{3,i}p_{4,i}^2+o(1)\n\\\\&\\leq - (1-p_{3,i}) -p_{3,i}^3 +p_{3,i}(1-p_{3,i})^2+o(1)\n \\leq -\\frac13\n\\end{align*} \n(The final expression in $p_{3,i}$ is maximized at $p_{3,i}=1\/2$, for $p_{3,i}\\in [0,1]$.)\n\\qed\n\\subsection{Proof of Lemma \\ref{4inf}} \nWe start by proving the following lemma:\n\\begin{lem}\\label{intervals}\nLet $\\Gamma_h$ be such that $ex_{4,h}=0$. Then with probability $1-o(n_h^{-1.8})$ there exists $1\\leq j\\leq 0.25 \\log^2n_h$ satisfying $ex_{4,h+j}=0$. Furthermore\n$ex_{4,h+i}\\leq \\log^2n_{h+i}$ for $i\\leq j$. \n\\end{lem}\n\\begin{proof}\nIf $ex_{4,h+1}=0$ then we are done. Otherwise Lemma \\ref{drift} implies that \n$ex_{4,h+1}\\in\\{1, 2\\}$ with probability $1-o(n_h^{-1.9})$.\n\nLet $\\cE_T$ be the event that for $j\\leq 0.25\\log^2 n_h$ {\\sc reduce} performs only hyperactions of Type 1, 2, 3, 4, 5, 33 or 34. 
Such a hyperaction reduces the vertex set by at most 8. Lemma \\ref{hyper} implies that $\\cE_T$ occurs with probability $1-o(n^{-1.8}_h)$. Moreover if $\\cE_T$ occurs then for $j<0.25\\log^2 n_h$ we have $n_{h+j} \\geq n_h-8\\cdot 0.25 \\log^2 n_h$. In addition from Lemma \\ref{drift} we have that with probability $1-o(n_h^{-1.8})$, $|ex_{4,h+j}-ex_{4,h+j-1}|\\leq 2$ for $j<0.25 \\log^2 n_h$, hence $ex_{4,h+j} \\leq 2\\cdot 0.25 \\log^2 n_h \\leq \\log^2 n_{h+j}$. Finally, conditioned on $\\cE_T$, since $ex_{4,h+1}=1$ or 2, the probability that there is no $2\\leq j\\leq 0.25\\log^2 n_h$ satisfying $ex_{4,h+j}=0$ is bounded by the probability that the sum of $0.25 \\log^2 n_h-2$ independent random variables with magnitude at most 2 and expected value smaller than $-1\/3$ (see Lemma \\ref{drift}) is positive. From Hoeffding's inequality \\cite{Hoeff} we have that the latter probability is bounded by $\\exp\\set{-\\frac{2(\\frac13\\log^2n_h-3)^2}{\\log^2 n_h}}=o(n^{-2}_h)$.\n\\end{proof}\nNow let $\\Gamma_0,\\Gamma_{i_1},...,\\Gamma_{i_\\ell}$ be the subsequence of $\\Gamma_{0},\\Gamma_{1},...,\\Gamma_{\\tau}$ that includes all the graphs that have $ex_4=0$ and at least $\\omega$ vertices. Then, since $\\Gamma_i$ has minimum degree 3 and $e_i$ is decreasing with respect to $i$, using Lemma \\ref{intervals} we have that with probability \n$$1-\\sum_{i:n_i\\geq \\omega}o( n_i^{-1.8} )\\geq 1-\\sum_{i:n_i\\geq \\omega} 2e_i^{-1.8}\/3 \\geq 1-\\sum_{i=\\omega}^{\\infty} i^{-1.8}=1-O(\\omega^{-0.8})$$\nfor every $j<\\ell$ we have $n_{i_j} -n_{i_{j+1}} \\leq 8\\cdot 0.25\\log ^2 n_{i_j}=2\\log ^2 n_{i_j}$ and all the graphs $\\Gamma_i$ preceding $\\Gamma_{i_\\ell}$ in $\\Gamma_0,\\Gamma_1,...,\\Gamma_\\tau$ satisfy $ex_{4,i} \\leq \\log^2 n_i$. Suppose now that $n_{i_\\ell}> 2\\omega$. Then the above argument implies that w.h.p. there is $j>i_\\ell$ such that $ex_{4,j}=0$ and $n_j\\geq n_{i_\\ell}-2\\log^2n_{i_\\ell}\\geq \\omega$ and this contradicts the definition of $i_\\ell$. 
Thus, w.h.p., $\\omega\\leq n_{i_\\ell}\\leq 2\\omega$ and this completes the proof of Lemma \\ref{4inf}.\n\\qed\n\\section{Existence of a Perfect Matching}\\label{Proofs Matchings}\nWe devote this section to the proof of Lemma \\ref{34match}. As discussed in the previous section it is enough to prove that given a degree sequence $\\bd=(d(1),...,d(n))$ consisting only of 3's and 4's, if we let $G$ be a random configuration (multi)-graph with degree sequence $\\bd$, then w.h.p. $G$ has a (near)-perfect matching (i.e.\\ we can lift the condition that $G$ has no loops). We will first assume that $n$ is even and verify Tutte's condition. That is, for every $W\\subset V$ the number of odd components induced by $V \\setminus W$, $q(V\\setminus W)$, is not larger than $|W|$. We split the verification of Tutte's condition into two lemmas. \n\\begin{lem}\\label{minimal}\nLet $W\\subset V$ be a set of minimum size that satisfies $q(V\\setminus W)>|W|$. Then with probability $1-O(n^{-3\/4})$, $|W| > 10^{-5}n$. \n\\end{lem}\n\\begin{lem}\\label{maximal}\nLet $W\\subset V$ be a set of maximum size that satisfies $q(V\\setminus W)>|W|$. Then with probability $1-O(n^{-3\/4})$, $|W| < 10^{-5}n$. \n\\end{lem}\nLemmas \\ref{minimal} and \\ref{maximal} together imply Tutte's condition, as no set can have size that is simultaneously strictly larger and strictly smaller than $10^{-5}n$. In the proof of these lemmas we use the following estimates.\n\\begin{lem}\\label{estimates}\nThe number of distinct partitions of a set of size $2r$ into 2-element subsets, denoted by $\\phi(2r)$, satisfies $\\phi(2r)=\\Theta\\left(\\bfrac{2r}{e}^r\\right)$. Also for $\\ell\\leq r$, the number of such partitions in which a fixed set of $2\\ell$ elements is partitioned into $\\ell$ of the 2-element subsets is $\\phi(2\\ell)\\phi(2r-2\\ell)\\leq_O \\bfrac{2\\ell}{e}^{\\ell}r^{r-\\ell}$.\n\\end{lem}\n\\subsection{Proof of Lemma \\ref{minimal}:}\nLet $W$ be a set satisfying $q(V\\setminus W)>|W|$ of minimum size and assume $2\\leq w=|W|\\leq 10^{-5}n$. We can rule out the case $w=1$ from the fact that with probability $1-O(1\/n)$, $G$ will be 3-connected, see e.g. the proof of Theorem 10.8 in \\cite{FK}. Let $C_z$ be a component spanned by $V\\setminus W$ of maximum size and let $r=|C_z|$. 
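The first estimate of Lemma \ref{estimates} above is the standard double-factorial count $\phi(2r)=(2r-1)!!=(2r)!/(2^r r!)$. As a quick numeric illustration (not part of the proof), the logarithmic gap to $(2r/e)^r$ indeed stays bounded (it tends to $\log\sqrt{2}$):

```python
import math

def phi(two_r):
    # phi(2r): number of partitions of a 2r-set into 2-element subsets,
    # i.e. (2r-1)!! = (2r)! / (2^r * r!)
    r = two_r // 2
    return math.factorial(two_r) // (2 ** r * math.factorial(r))

# ln(phi(2r)) - r*ln(2r/e) should stay bounded, confirming
# phi(2r) = Theta((2r/e)^r); by Stirling it converges to ln(sqrt(2)) ~ 0.3466
for r in (10, 100, 1000):
    gap = math.log(phi(2 * r)) - r * math.log(2 * r / math.e)
    assert 0.3 < gap < 0.4
```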
\n\n\\textbf{Case 1: $r=|C_z|\\leq 0.997n$.} In this case we can partition $V\\setminus W$ into two parts $V_1,V_2$ such that (i) each $V_l$, $l=1,2$, is the union of components of $V\\setminus W$, (ii) $|V_1|\\geq |V_2|$, and (iii) $|V_2|\\geq (n-(r+w))\/2\\geq 10^{-3}n$. \n\nLet $d_2=d(V_2)$ and $d_W=d(W)$. Out of the $d_W$ endpoints in $W$ (i.e. configuration points that correspond to vertices in $W$), $\\ell\\leq d_W$ are matched with endpoints in $V_2$ and the rest with endpoints in $V_1$. \n\nFor fixed $i,w,d_W$ the probability that there are sets $V_1,V_2,W$ with $w=|W|, d(W)=d_W$ and $|V_2|=i$ satisfying $2\\leq w\\leq 10^{-5}n, 10^{-3}n\\leq i\\leq 0.5n$ and $d_W\\leq 4w\\leq 0.04i$, such that there are no edges between $V_1$ and $V_2$ is bounded by \n\\begin{align*}\np_1 &\\leq \\sum_{\\ell=0}^{d_W}\n\\binom{n}{i} \\binom{n-i}{w} \\binom{d_W}{\\ell} \\frac{\\phi(d_2+\\ell)\\cdot \\phi(2m-d_2-\\ell) }{\\phi(2m)}\\\\ \n&\\leq_O \\sum_{\\ell=0}^{d_W}\n\\bfrac{en}{i}^i \\bfrac{en}{w}^w 2^{d_W} \\frac{\\bfrac{d_2+\\ell}{e}^{(d_2+\\ell)\/2} \n\\bfrac{2m-d_2-\\ell}{e}^{(2m-d_2-\\ell)\/2} }{\\bfrac{2m}{e}^m}\\\\ \n&\\leq\\sum_{\\ell=0}^{d_W}\n \\bfrac{en}{i}^i \\bfrac{100en}{i}^{i\/100} 2^{i\/25} \\bfrac{d_2+\\ell}{2m}^{(d_2+\\ell)\/2} \n\\bigg(1-\\frac{d_2+\\ell}{2m}\\bigg)^{(2m-d_2-\\ell)\/2} \\\\ \n& \\leq_O \\sum_{\\ell=0}^{d_W}\n \\bfrac{1600(en)^{101}}{i^{101}}^{i\/100} \\bfrac{d_2+\\ell}{2m}^{(d_2+\\ell)\/2} \\exp\\set{- \\frac{d_2+\\ell}{2}\\brac{1-\\frac{d_2+\\ell}{2m}}} \n\\end{align*}\nFor the third line we used the fact that $w\\leq i\/100$ and $d_W\\leq 4w \\leq i\/25$. \n\nLet $f(x)=x^xe^{-x(1-x)}$ and $L(x)=\\log f(x)$. Then $L''(x)=x^{-1}+2$ and so $L$, and hence $f$, is convex for $x>0$. Now $d_2+\\ell \\in J= [3i,4.04i]$ and since $\\bfrac{d_2+\\ell}{2m}^{(d_2+\\ell)\/2}\\exp\\set{- \\frac{d_2+\\ell}{2}\\brac{1-\\frac{d_2+\\ell}{2m}}}=f\\bfrac{d_2+\\ell}{2m}^m$ we see that its maximum over $J$ is attained at an endpoint of $J$. In general $3i\\leq 3n\/2 \\leq m$. 
However when $d_2+\\ell=4.04i$ we have that \n\\beq{2m}{\n2m\\geq 4.04i+3(n-i-w)\\geq 4.04i+ 3(n-1.01i)= 3n+1.01i.\n}\n{\\bf Case 1a: $d_2+\\ell=3i$.}\\\\\nWe have $\\frac{d_2+\\ell}{2m} \\leq \\frac{3i}{3n}\\leq \\frac{1}{2}$ and $(d_2+\\ell)(1-\\frac{d_2+\\ell}{2m})\\geq 3i\/2$. Therefore,\n\\begin{align*}\np_1&\\leq_O w \\bfrac{1600(en)^{101}}{i^{101}}^{i\/100}\\bfrac{i}{n}^{3i\/2} e^{-3i\/4} \\\\\n&= w \\bigg[\\frac{1600e^{26}}{2^{49}} \\bfrac{2i}{n}^{49}\\bigg]^{i\/100} \\\\ \n&\\leq w \\bigg[ e^{-1\/2} \\bfrac{2i}{n}^{49}\\bigg]^{i\/100}\\\\\n&\\leq w e^{-i\/200}.\n\\end{align*}\n{\\bf Case 1b: $d_2+\\ell=4.04i$.}\\\\\nIt follows from \\eqref{2m} that $\\frac{d_2+\\ell}{2m} \\leq \\frac{4.04i}{3n+1.01i} \\leq 0.577$ where the second inequality uses $i\\leq n\/2$. It follows from this that $(d_2+\\ell)(1-\\frac{d_2+\\ell}{2m})\/2\\geq 0.85i$. Hence,\n\\begin{align*}\np_1&\\leq_O w \\bfrac{1600(en)^{101}}{i^{101}}^{i\/100} \\bfrac{4.04i}{3n+1.01i}^{2.02i} e^{-0.85i}\\\\\n&\\leq_O w \\bigg[1600e^{16} \\bigg(\\frac{n}{i}\\bigg)^{101} \\bigg( \\frac{4.04i}{3n}\\bigg)^{101} \\bigg(\\frac{4.04i}{3n+1.01i}\\bigg) ^{101}\\bigg]^{i\/100}\\\\ \n&\\leq_O w \\bigg[1600e^{16} \\bigg(\\frac{4.04}{3} \\cdot 0.577\\bigg)^{101}\\bigg]^{i\/100}\\\\\n& \\leq_O w e^{-i\/100}.\n\\end{align*}\nTherefore the probability that Case 1 is satisfied is bounded by a constant times\n\\begin{align*}\n& \\sum_{w=1}^{10^{-5}n} \\sum_{i=10^{-3}n}^{0.5n} w e^{-i\/200}=O(n^{-3\/4}).\n\\end{align*}\n\\vspace{3mm}\n\\\\ \\textbf{Case 2:} $r=|C_z|\\geq 0.997n$. \nLet $V_1=V(C_z)$, \n$V_2=V\\setminus (V_1\\cup W)$. First note that $V_2$ spans at least $w$ components. Therefore $|V_2|\\geq w$. 
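The numerical constants appearing in Cases 1a and 1b above can be checked directly; the following small script simply re-evaluates the inequalities used in the two cases (an illustration, not part of the proof):

```python
import math

# Case 1a: the bracketed constant satisfies 1600 e^26 / 2^49 <= e^(-1/2)
assert 1600 * math.e ** 26 / 2 ** 49 < math.exp(-0.5)

# Case 1b: 4.04i/(3n + 1.01i) <= 0.577, with the extreme ratio at i = n/2
ratio = 4.04 * 0.5 / (3 + 1.01 * 0.5)
assert ratio < 0.577

# Case 1b: the bracketed constant 1600 e^16 (4.04/3 * 0.577)^101 is below e^(-1)
assert 1600 * math.e ** 16 * (4.04 / 3 * 0.577) ** 101 < math.exp(-1)

# Case 1b exponent: (d2+l)(1 - (d2+l)/2m)/2 >= 0.85 i when d2+l = 4.04 i
assert 4.04 * (1 - 0.577) / 2 > 0.85
```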
To lower bound $e(V_2:W)$ we use the following claim.\n\\vspace{3mm}\n\\\\ \\textbf{Claim 1:} Every vertex in $W$ is adjacent to at least 3 distinct components in $V \\setminus W$, and hence to at least 2 vertices in $V_2$.\n\\vspace{3mm}\n\\\\ \\textbf{Proof of Claim 1:} Suppose that some $v \\in W$ is adjacent to only $t\\in \\{0,1,2\\}$ components in $V\\setminus W$. Consider $W'=W\\setminus\\{v\\}$. Thus $|W'|=|W|-1$. If $t=0$ then $q(V\\setminus W')=q(V\\setminus W)+1$. If $t=1$ then $q(V\\setminus W')\\geq q(V\\setminus W)-1$. If $t=2$ then: if both of the corresponding components have odd size then the new component also has odd size, while if only one of them has odd size then the new one has even size. Finally if both have even size the new one has odd size. In each of these sub-cases the inequality $q(V\\setminus W')\\geq q(V\\setminus W)-1$ is satisfied. Therefore $q(V\\setminus W') \\geq q(V\\setminus W) -1 >|W|-1=|W'|$, contradicting the minimality of $W$.\n\\qed \n\\vspace{3mm}\n\\\\ From Claim 1 it follows that $W:V_2$ spans at least $2w$ edges. \nWe also have that $|V_2|\\leq n-r-w \\leq 0.003n$. 
For fixed $2\\leq w\\leq 10^{-5}n$, $3w\\leq d_W \\leq 4w$ and $w\\leq i$ the probability that there exist such sets $V_1,V_2,W$, $|V_2|=i$, $w=|W|$, $d(W)=d_W$ and $2w\\leq \\ell= e(V_2:W)\\leq 4w$ is bounded by \n\\begin{align*}\n&\\sum_{\\ell=2w}^{d_W} \n \\binom{n}{i} \\binom{n-i}{w} \\binom{d_W}{\\ell} \\frac{\\phi(d_2+\\ell)\\cdot \\phi(2m-d_2-\\ell) }{\\phi(2m)}\\\\\n& \\leq_O \\sum_{\\ell=2w}^{d_W} \\bfrac{en}{i}^i \\bfrac{en}{w}^w 2^{4w} \\frac{\\bfrac{d_2+\\ell}{2}^{(d_2+\\ell-2w)\/2} \\bfrac{2w}{e}^{w} \\bfrac{2m-d_2-\\ell}{e}^{(2m-d_2-\\ell)\/2} }{\\bfrac{2m}{e}^m}\\\\ \n&= \\sum_{\\ell=2w}^{d_W} \\bfrac{en}{i}^i \\bfrac{en}{w}^w 2^{4w} \\bfrac{2w}{2m}^{w} \\brac{\\frac{d_2+\\ell}{2m}\\cdot \\frac{e}2}^{(d_2+\\ell-2w)\/2} \n\\bigg(1-\\frac{d_2+\\ell}{2m}\\bigg)^{(2m-d_2-\\ell)\/2} \\\\ \n& \\leq_O w \\bfrac{en}{i}^i \\bfrac{16e}{3}^{w} \\bigg( \\frac{5i}{3n} \\cdot \\frac{e}{2} \\bigg)^{3i\/2}\\\\\n&\\leq_O w \\brac{e^2 \\bfrac{16e}{3}^{2w\/i} \\frac{5^3i}{3^3n}}^{i\/2}.\n\\end{align*} \nFor the second line we used the second inequality of Lemma \\ref{estimates}. For the fourth line we used that $2w\\leq \\ell $, $d_2+\\ell \\leq 4|V_2|+4w \\leq 0.01204n$ and so $\\brac{\\frac{d_2+\\ell}{2m}\\cdot\\frac{e}2}^{(d_2+\\ell-2w)\/2}$ is maximized when $d_2,\\ell$ are as small as possible, that is $d_2=3i,\\ell=2w$. Furthermore note that $d_2+\\ell-2w\\geq d_2\\geq 3i$ and $i\\geq q(V\\setminus W)-1\\geq w$. 
Therefore the probability that Case 2 is satisfied is bounded by a constant times\n\\begin{align*}\n&\\sum_{w=2}^{10^{-5}n} \\sum_{i=w}^{0.003n} \\brac{e^2\\bfrac{16e}{3}^{2w\/i} \\frac{5^3i}{3^3n}}^{i\/2}\\\\\n&\\leq_O \\sum_{w=2}^{10^{-5}n} \\sum_{i=w}^{2w}\\bfrac{C_1i}{n}^{i\/2}+\\sum_{w=2}^{10^{-5}n} \\sum_{i=2w}^{0.003n}\\bfrac{C_2i}{n}^{i\/2}\\\\\n\\noalign{where $C_1=16^25^3e^4\/3^5,C_2=16\\cdot 5^3 e^3\/3^4$,}\n&\\leq \\sum_{i=2}^{n^{1\/4}}i\\brac{\\bfrac{C_1}{n^{3\/4}}^{i\/2}+\\bfrac{C_2}{n^{3\/4}}^{i\/2}} +\\sum_{i=n^{1\/4}}^{2\\cdot 10^{-5}n}i\\bfrac{2C_1}{10^5}^{i\/2}+\\sum_{i=n^{1\/4}}^{0.003n}i\\bfrac{6C_2}{10^3}^{i\/2}\\\\\n&=O(n^{-3\/4}).\n\\end{align*} \nFinally consider $W=\\emptyset$. Since $G$ has an even number of vertices, $q(V\\setminus W)>0$ would require at least two odd components, which w.h.p.\\ does not happen as $G$ is w.h.p.\\ connected. Hence $|W|=q(V\\setminus W)=0$.\n\\qed\n\\subsection{Proof of Lemma \\ref{maximal}:}\nLet $W$ be a set satisfying $q(V \\setminus W)>|W|$ of maximum size and assume $w=|W|\\geq 10^{-5}n$.\n\\vspace{3mm}\n\\\\\n{\\textbf{Claim 2:}} No component induced by $V\\setminus W$ is a tree with more than one vertex.\n\\vspace{3mm}\n\\\\ \\textbf{Proof of Claim 2:} Indeed, assume that $C_i$ is such a component. If $|C_i|$ is even then\nlet $v$ be a leaf of $C_i$ and define $W'=W \\cup \\{v\\}$. Then $C_i \\setminus \\{v\\}$ is an odd component in $V \\setminus W'$ and $q(V \\setminus W')=q(V \\setminus W)+1>|W|+1=|W'|$, contradicting the maximality of $W$.\n\nThus assume that $|C_i|$ is odd. Let $L_1$ be the set of leaves of $C_i$ and $L_2$ be the set of neighbors of $L_1$. Set $W'=W \\cup L_2$ and note that $|L_1| \\geq |L_2|$. Furthermore every vertex in $L_1$ is an odd component in $V \\setminus W'$ and in the case $|L_1|=|L_2|$ the set $C_i \\setminus (L_1\\cup L_2)$ is also an odd component in $V \\setminus W'$. Therefore,\n\\begin{align*}\nq(V\\setminus W') &\\geq q(V\\setminus W) -1 +|L_1|+\\mathbb{I}(|L_1|=|L_2|) \n\\\\&\\geq q(V\\setminus W) +|L_2| +|L_1|-|L_2| +\\mathbb{I}(|L_1|=|L_2|)-1\n\\\\&> |W|+|L_2|=|W'|,\n\\end{align*} \ncontradicting the maximality of $W$. 
\\qed\n\\vspace{3mm}\n\\\\\nWe partition $V \\setminus W$ into three sets, $W_1,W_2$ and $W_3$, as follows. With the underlying graph being the one spanned by $V \\setminus W$, $W_1$ consists of the isolated vertices in $V \\setminus W$, $W_2$ consists of the vertices spanned by components that contain a cycle and have size at most $\\frac{1}{10}\\log n$, and $W_3$ consists of the vertices that are spanned by a component of size at least $\\frac1{10}\\log n$. Finally let $W_4=W_2 \\cup W_3$. To lower bound $|W_1|$ we use the following claim.\n\n\\textbf{Claim 3:} W.h.p.\\@ $W_4$ spans at most $\\frac{11w}{\\log n}$ components in $V\\setminus W$.\n\n\\textbf{Proof of Claim 3:} First observe that the number of components spanned by $W_2$ is smaller than the number of cycles of size at most $\\frac1{10} \\log n$ in $G$, which we denote by $r$. \n\\begin{align*}\n\\mathbf{Pr}(r\\geq n^{0.3}) &\\leq n^{-0.3} \\sum_{q=1}^{0.1 \\log n} \\binom{n}{q} 4^q q! \\frac{\\phi(2q) \\phi(2m-2q)}{\\phi(2m)}\\\\\n& \\leq_O n^{-0.3} \\sum_{q=1}^{0.1 \\log n} \\bfrac{en}{q}^q 4^q \\bfrac{2q}{e}^q \\bfrac{e}{2m}^q\n\\\\& \\leq_O n^{-0.3} \\sum_{q=1}^{0.1 \\log n} \\bfrac{8e}{3}^q \n\\leq_O n^{-0.3} (\\log n) 8^{0.1 \\log n}=o(1). \n\\end{align*}\nHence w.h.p.\\@ $W_2$ spans at most $n^{0.3}$ components. Moreover every component spanned by $W_3$ has size at least $\\frac{1}{10}\\log n$. Therefore $W_4$ spans at most $n^{0.3}+\\frac{10w}{\\log n}= \\frac{(1+o(1))10w}{\\log n}\\leq \\frac{11w}{\\log n}$ components in $V\\setminus W$.\\qed\n\nSince $W_4$ spans at most $u=\\frac{11w}{\\log n}$ components in $V\\setminus W$ and no component is a tree it follows that the rest of the components consist of at least $q(V\\setminus W) -u > w-u$ isolated vertices that lie in $W_1$.\n\nFor convenience, we move $|W_1|-(w-u)$ vertices from $W_1$ to $W_4$. Therefore $|W_1|= w-u$.\nLet $k_1$ be the number of vertices of degree 4 in $W_1$ and $d=d(W)-d(W_1)$.\nThen $0\\leq d\\leq 4w-(3(w-u)+k_1)=w+3u-k_1$. 
For fixed $10^{-5}n\\leq w\\leq 0.5n$ the probability that there exist such sets $W,W_1,W_4$ is bounded by \n\n\\begin{align}\np_2&\\leq \\sum_{k_1=0}^{w-u} \\sum_{d=0}^{w+3u-k_1}\n\\binom{n}{2w} \\binom{2w}{w}\\binom{w}{u} \\binom{4w}{d} \\mathbf{Pr}(d(W)-d(W_1)=d) \\label{2}\\\\ \n&\\times (3(w-u)+k_1)! \\times \\frac{ [2m- [6(w-u)+2k_1]]!}{2^{m-[3(w-u)+k_1]}[m-[3(w-u)+k_1]]!} \\times \\frac{2^m m!}{(2m)!}.\\label{3}\n\\end{align}\n\\textbf{Explanation:} We first choose the sets $W,W_1$ and $W_4$ of size $w,w-u$ and $n-2w+u$ respectively. This can be done in $\\binom{n}{2w} \\binom{2w}{w}\\binom{w}{u} \\binom{n-2w+u}{u}^{-1} $ ways, but we ignore the final factor. \n\nFrom the at most $4w$ copies of vertices in $W$ we choose a set $W''$ of size $d$. \nWe let $W'$ consist of the remaining copies. These are the copies of vertices that will be matched with those in $W_1$. \n\nIn the calculations that follow we let $a=w\/n\\geq 10^{-5}$. We also let $k_4$ be the number of vertices of degree 4 that lie in $W_4$. We first bound the binomial coefficients, found in the first line.\n\\begin{align}\n &\\binom{n}{2w} \\binom{2w}{w}\\binom{w}{u} \\binom{4w}{d} \n= \\binom{n}{2an} \\binom{2an}{an}\\binom{an}{u} \\binom{4an}{d} \\nonumber\\\\\n&\\leq 2^{o(n)} \\bfrac{1}{2a}^{2an} \\bfrac{1}{1-2a}^{(1-2a)n} 2^{2an} \\bfrac{4ean}{d}^{d} \\nonumber\\\\\n& =2^{o(n)}\\bfrac{1}{a}^{2an} \\bfrac{1}{1-2a}^{(1-2a)n} \\bfrac{4ean}{d}^{d}.\\label{f1}\n\\end{align}\nFor the second line we used that $u=O(n\/\\log n)$, which implies that $\\binom{an}{u}=2^{o(n)}$. Observe that \n\\begin{align}\\label{2m2}\n2m=6(w-u)+2k_1+d+3(n-2w+u)+k_4=3n+d+2k_1+k_4-3u.\n\\end{align}\nLet $m_0=d+2k_1+k_4-3u$. 
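The identity \eqref{2m2} above is a purely mechanical count of configuration points; a trivial spot-check on random integer values (illustrative only):

```python
import random

# spot-check the point-count identity (2m2):
# 6(w-u) + 2k1 + d + 3(n-2w+u) + k4 = 3n + d + 2k1 + k4 - 3u
for _ in range(1000):
    n, w, u, k1, k4, d = (random.randrange(10 ** 6) for _ in range(6))
    lhs = 6 * (w - u) + 2 * k1 + d + 3 * (n - 2 * w + u) + k4
    assert lhs == 3 * n + d + 2 * k1 + k4 - 3 * u
```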
For the terms in line \\eqref{3} we have\n\\begin{align*}\n\\frac{(2m)!}{2^m m!}&=\\frac{(3n)!}{2^{1.5n}(1.5n)!} \n\\frac{\\prod_{i=1}^{m_0}(3n+i)}{2^{m_0\/2} \\prod_{i=1}^{m_0\/2}(1.5n+i)}\n\\geq_O\\bfrac{3n}{e}^{1.5n} \\prod_{i=1}^{m_0\/2}[3n+(2i-1)]\\\\\n& \\geq \\bfrac{3n}{e}^{1.5n} e^{-o(n)}(3n)^{-3u\/2} \\prod_{i=1}^{d\/2+k_1+k_4\/2}[3n+(2i-1)]\n\\end{align*}\nEquation \\eqref{2m2} implies that \n$$2m-[6(w-u)+2k_1]=3(1-2a)n+3u+k_4+d.$$ \nThus, \n\\begin{align*}\n& \\frac{ [2m- [6(w-u)+2k_1]]!}{2^{m-[3(w-u)+k_1]}[m-[3(w-u)+k_1]]!}=\\frac{ [3(1-2a)n]!} {2^{1.5(1-2a)n}[1.5(1-2a)n]!} \\cdot \\frac{\\prod_{i=1}^{d}[3(1-2a)n+i]}{2^{d\/2}\\prod_{j=1}^{\\frac{d}{2}} [1.5(1-2a)n+j]} \n\\\\&\\hspace{5mm} \\times \\frac{\\prod_{i=1}^{k_4}[3(1-2a)n+d+i]}{2^{k_4\/2}\\prod_{j=1}^{k_4\/2} [1.5(1-2a)n+d\/2 +j] } \\cdot \\frac{\\prod_{i=1}^{3u}[3(1-2a)n+d+k_4+i]}{2^{3u\/2}\\prod_{j=1}^{\\frac{3u}{2}} [1.5(1-2a)n+d\/2+k_4\/2 +j]}\n\\\\ &\\hspace{5mm}\\leq_O \\bfrac{3(1-2a)n}{e}^{1.5(1-2a)n} \\prod_{i=1}^{d\/2} [3(1-2a)n+(2i-1)]\n \\prod_{j=1}^{k_4\/2} [3(1-2a)n+d+(2j-1) ] \\cdot (2m)^{3u\/2}\n\\\\ &\\hspace{5mm} \\leq_O \\bfrac{3(1-2a)n}{e}^{1.5(1-2a)n} [3(1-2a)n+an\/2]^{d\/2} (2m)^{3u\/2} \\prod_{j=1}^{k_4\/2} [3(1-2a)n+d+(2j-1) ]\n\\end{align*}\nFor the last inequality we used the Arithmetic Mean-Geometric Mean inequality and the fact that $d\/2\\leq an\/2+ o(n)$, which follows from $d\\leq w+3u-k_1$. \n\nFor the first term of \\eqref{3} we have\n\\begin{align*}\n[3(w-u)+k_1]! 
\\leq \\frac{(3w)!}{ (3(w-u))^{3u}} \\prod_{i=1}^{k_1}(3(w-u)+i)\\leq \\bfrac{3an}{e}^{3an} \\frac{2^{o(n)}}{n^{3u}} \\prod_{i=1}^{k_1}(3(w-u)+i).\n\\end{align*}\nThus the expression in \\eqref{3} is bounded by\n\\begin{align}\n&2^{o(n)} \\bfrac{3an}{e}^{3an} \\frac{1}{n^{3u}} \\prod_{i=1}^{k_1}(3(w-u)+i) \\nonumber\\\\ \n&\\times \\bfrac{3(1-2a)n}{e}^{1.5(1-2a)n} [3(1-2a)n+an\/2]^{d\/2} (2m)^{3u\/2} \\prod_{j=1}^{k_4\/2} [3(1-2a)n+d+2j-1 ]\\nonumber\\\\ \n&\\times \\bigg[ \\bfrac{3n}{e}^{1.5n} (3n)^{-3u\/2} \\prod_{i=1}^{d\/2+k_1+k_4\/2}[3n+(2i-1)] \\bigg]^{-1}\\nonumber\\\\\n&=2^{o(n)} a^{3an} (1-2a)^{1.5(1-2a)n} \\bfrac{6m}{n}^{3u\/2} \\prod_{i=1}^{d\/2}\\frac{ 3(1-2a)n+an\/2}{3n+(2i-1)} \\nonumber\\\\\n&\\times \n\\prod_{i=1}^{k_1}\\frac{ 3(w-u)+i}{3n+d+(2i-1)} \n\\prod_{i=1}^{k_4\/2} \\frac{ 3(1-2a)n+d+2i-1}{3n+d+2k_1+2i-1}\\nonumber\\\\\n&\\leq_O 2^{o(n)} a^{3an} (1-2a)^{1.5(1-2a)n}\n\\prod_{i=1}^{d\/2}\\frac{ 3(1-2a)n+an\/2}{3n} \n\\prod_{i=1}^{k_1}\\frac{ 3(w-u)+i}{3n+d+(2i-1)} \\prod_{i=1}^{k_4\/2} 1 \\nonumber\\\\ \n&\\leq_O 2^{o(n)} a^{3an} (1-2a)^{1.5(1-2a)n}[(1-2a)+a\/6]^{d\/2} 2^{-k_1} \\label{f2}\n\\end{align}\nFinally we consider the term $\\mathbf{Pr}(d(W)-d(W_1)=d)$ and assume that $h$ vertices of degree 4 were chosen to be included in $W\\cup W_1$, so that $d=h+3u-2k_1$. Then, because there are $\\binom{h}{k_1}\\binom{2w-u-h}{(w-u)-k_1}$ out of $\\binom{2w-u}{w-u}$ ways to distribute the $k_1$ vertices of degree 4,\n\\begin{align*}\np_3&=\\mathbf{Pr}(d(W)-d(W_1)=d) =\\binom{h}{k_1}\\binom{2w-u-h}{(w-u)-k_1} \\bigg\/ \\binom{2w-u}{w-u}\\\\\n&\\leq \\binom{h}{k_1}\\binom{2w-u-h}{w-u}\\bigg\/\\binom{2w-u}{w-u} \\leq \\binom{h}{k_1} \\prod_{i=0}^{h-1} \\frac{w-i}{2w-i}\\\\\n&\\leq 2^{hH(k_1\/h) -h}=2^{k_1}2^{-k_1+h\\cdot H(k_1\/h)-h}.\n\\end{align*}\nHere $H(x)=-x\\log_2(x) -(1-x) \\log_2(1-x)$ is the entropy function. For fixed $d$ we have $h=d+2k_1+o(n)$. 
Thus \n$$p_3 \\leq 2^{o(n)+k_1+df(k_1\/d)},\\text{ where }f(x)= -x + (1+2x) H\\brac{\\frac{x}{1+2x}}-(1+2x).$$ \n$f(x)$ has a unique maximum at $x^*$, the solution to $8x(1+x)=(1+2x)^2$, and $f(x^*) \\leq -0.771$. Hence \n\\beq{f3}{\np_3\\leq 2^{-0.771d+k_1+o(n)}.\n} \nMultiplying the bounds in \\eqref{f1}, \\eqref{f2}, \\eqref{f3} together we have a bound\n\\begin{align*}\np_2 & \\leq 2^{o(n)-0.771d+k_1} \\bfrac{1}{a}^{2an} \\bfrac{1}{1-2a}^{(1-2a)n} \\bfrac{4ean}{d}^{d} \n\\\\ &\\times a^{3an} (1-2a)^{1.5(1-2a)n} \\bigg(1-2a+ \\frac{a}{6}\\bigg)^{d\/2} 2^{-k_1}\n\\\\ & = 2^{o(n)} \\bfrac{2^{1.229}ean}{d}^{d} a^{an} (1-2a)^{0.5(1-2a)n} \\bigg(1-\\frac{11a}{6}\\bigg)^{d\/2}\n\\end{align*}\nThus $p_2=o(1)$ when $d=o(n)$. Let $d=ban$ for some $0<b\\leq 1+o(1)$ and set $g(a)=2^{1.229}e\\brac{1-\\frac{11a}{6}}^{0.5}$. If $g(a)>e$ then $\\bfrac{g(a)}{b}^b$ is maximized at $b=1$. Hence\n$$ p_2 \\leq \\bigg\\{ 2^{o(1)} \\ 2^{1.229} e \\brac{1-\\frac{11a}{6} }^{0.5} a (1-2a)^{0.5(1-2a)\/a} \\bigg\\}^{an} \\leq \\bfrac{19}{20}^{an}.$$\nThe last inequality is most easily verified numerically. Thus the probability that there exists a set $W$ satisfying $q(V \\setminus W)>|W|$ of size $w=|W|\\geq 10^{-5}n$\nis bounded by \n\\begin{align*}\n\\sum_{w=10^{-5}n}^{0.5n} \\bfrac{99}{100}^w =o(1). \n\\end{align*}\nThis only leaves the case of $n$ odd. The reader will notice that in none of the calculations above did we use the fact that $n$ was even. The Tutte--Berge formula for the maximum size of a matching $\\nu(G)$ is\n$$\\nu(G)=\\min_{W\\subseteq V}\\frac12(|V|+|W|-q(V\\setminus W)).$$\nWe have shown that the above expression is at least $|V|\/2$ for $W\\neq\\emptyset$ and so the case of $n$ odd is handled by putting $W=\\emptyset$ and $q(V\\setminus W)=1$.\n\\section{Conclusions and open questions}\nThe paper of Karp and Sipser \\cite{KS} has been a springboard for research on matching algorithms in random graphs. 
Algorithm 1 of that paper has not been the subject of a great deal of analysis, mainly because of the way it disturbs the degree sequences of the graphs that it produces along the way. In this paper we have shown that if the original graph has small maximum degree then the maximum degree is controllable and the great efficiency of the algorithm can be verified.\n\nIt is natural to try to extend the analysis to random regular graphs with degree more than four and we are in the process of trying to overcome some technical problems. It would also be of interest to analyse the algorithm on $G_{n,p}$, as originally intended.\n\n\\section{Diagrams of Hyperactions of interest}\n{\\bf Type 2.}\n\n\\begin{center}\n\\pic{\n\\node at (0,0.2) {$w$};\n\\node at (1,0.2) {$u$};\n\\node at (2,0.2) {$v$};\n\\node at (3,1.2) {$x$};\n\\node at (3,0.2) {$y$};\n\\node at (3,-.8) {$z$};\n\\node at (-1,1.2) {$a$};\n\\node at (-1,-0.8) {$b$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (2,0) -- (3,0);\n\\draw (0,0) -- (-1,1);\n\\draw (0,0) -- (-1,-1);\n\\draw (0,0) to [out=45,in=135] (1,0);\n\\draw (0,0) to [out=-45,in=-135] (1,0);\n\\draw [fill=black] (0,0) circle [radius=.05];\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\draw [fill=black] (2,0) circle [radius=.05];\n\\draw [fill=black] (3,0) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [->] [ultra thick] (4,0) -- (5,0);\n\\draw (7,0) ellipse (.6 and .3);\n\\node at (7.05,0) {$wuv$};\n\\draw (7.6,0) -- (8.6,1);\n\\draw (7.6,0) -- (8.6,-1);\n\\draw (7.6,0) -- (8.6,0);\n\\draw (5.6,1) -- (6.5,0);\n\\draw (5.6,-1) -- (6.5,0);\n\\draw [fill=black] (8.6,1) circle [radius=.05];\n\\draw [fill=black] (8.6,-1) circle [radius=.05];\n\\draw [fill=black] (8.6,0) circle [radius=.05];\n\\draw [fill=black] (5.6,-1) circle 
[radius=.05];\n\\draw [fill=black] (5.6,1) circle [radius=.05];\n\\node at (5.6,1.2) {$a$};\n\\node at (5.6,-0.8) {$b$};\n\\node at (8.6,1.2) {$x$};\n\\node at (8.6,0.2) {$y$};\n\\node at (8.6,-.8) {$z$};\n}\n\\end{center}\n\n{\\bf Type 3.}\n\n\\begin{center}\n\\pic{\n\\node at (1,0.2) {$u$};\n\\node at (2,0.2) {$v$};\n\\node at (3,1.2) {$x$};\n\\node at (3,0.2) {$y$};\n\\node at (3,-0.8) {$z$};\n\\node at (0,1.2) {$a$};\n\\node at (0,-.8) {$b$};\n\\node at (-1,2.2) {$c$};\n\\node at (-1,1.2) {$d$};\n\\node at (-1,-.8) {$e$};\n\\node at (-1,-1.8) {$f$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (2,0) -- (3,0);\n\\draw (0,1) -- (1,0);\n\\draw (0,1) -- (-1,2);\n\\draw (0,1) -- (-1,1);\n\\draw (0,-1) -- (-1,-1);\n\\draw (0,-1) -- (-1,-2);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (-1,2) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-2) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\draw [fill=black] (2,0) circle [radius=.05];\n\\draw [fill=black] (3,0) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\draw [->] [ultra thick] (4,0) -- (5,0);\n\\draw (7.5,0) ellipse (.6 and .3);\n\\node at (7.55,0) {$avb$};\n\\draw (9.1,0) -- (10.1,1);\n\\draw (9.1,0) -- (10.1,-1);\n\\draw (9.1,0) -- (10.1,0);\n\\draw [fill=black] (9.1,0) circle [radius=.05];\n\\draw [fill=black] (10.1,1) circle [radius=.05];\n\\draw [fill=black] (10.1,-1) circle [radius=.05];\n\\draw [fill=black] (10.1,0) circle [radius=.05];\n\\node at (9.1,0.2) {$u$};\n\\node at (10.1,1.2) {$x$};\n\\node at (10.1,0.2) {$y$};\n\\node at (10.1,-.8) {$z$};\n\\node at (6,2.2) {$c$};\n\\node at (6,1.2) {$d$};\n\\node at (6,-.8) 
{$e$};\n\\node at (6,-1.8) {$f$};\n\\draw [fill=black] (6,2) circle [radius=.05];\n\\draw [fill=black] (6,1) circle [radius=.05];\n\\draw [fill=black] (6,-1) circle [radius=.05];\n\\draw [fill=black] (6,-2) circle [radius=.05];\n\\draw (6,2) -- (7,0);\n\\draw (6,1) -- (7,0);\n\\draw (6,-2) -- (7,0);\n\\draw (6,-1) -- (7,0);\n}\n\\end{center}\nWe allow the edge $\\set{a,b}$ to be a single edge in this construction. This gives us a Type 3b hyperaction.\n\n{\\bf Type 4.}\n\n\\begin{center}\n\\pic{\n\\node at (0,0.2) {$v$};\n\\draw [fill=black] (0,0) circle [radius=.05];\n\\node at (-1,1.2) {$a$};\n\\node at (-1,0.2) {$b$};\n\\node at (-1,-0.8) {$c$};\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,0) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw (-1,1) -- (0,0);\n\\draw (-1,0) -- (0,0);\n\\draw (-1,-1) -- (0,0);\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,0.2) {$u$};\n\\draw (0,0) -- (1,0);\n\\node at (2,1.2) {$x_1$};\n\\node at (2,-1.2) {$x_2$};\n\\draw [fill=black] (2,1) circle [radius=.05];\n\\draw [fill=black] (2,-1) circle [radius=.05];\n\\draw (1,0) -- (2,1);\n\\draw (1,0) -- (2,-1);\n\\draw (2,-1) -- (2,1);\n\\node at (3,1.2) {$w_1$};\n\\node at (3,-1.2) {$w_2$};\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\draw (2,1) -- (3,1);\n\\draw (2,-1) -- (3,-1);\n\\node at (4,2.2) {$p$};\n\\node at (4,1.2) {$q$};\n\\node at (4,-0.8) {$r$};\n\\node at (4,-1.8) {$s$};\n\\draw [fill=black] (4,2) circle [radius=.05];\n\\draw [fill=black] (4,1) circle [radius=.05];\n\\draw [fill=black] (4,-1) circle [radius=.05];\n\\draw [fill=black] (4,-2) circle [radius=.05];\n\\draw (3,1) -- (4,2);\n\\draw (3,1) -- (4,1);\n\\draw (3,-1) -- (4,-2);\n\\draw (3,-1) -- (4,-1);\n\\draw [->] [ultra thick] (5,0) -- (6,0);\n\\node at (8,0.2) {$v$};\n\\draw [fill=black] (0,0) circle [radius=.05];\n\\node at (7,1.2) {$a$};\n\\node at (7,0.2) {$b$};\n\\node at (7,-0.8) 
{$c$};\n\\draw [fill=black] (7,1) circle [radius=.05];\n\\draw [fill=black] (7,0) circle [radius=.05];\n\\draw [fill=black] (7,-1) circle [radius=.05];\n\\draw [fill=black] (8,0) circle [radius=.05];\n\\draw (8,0) -- (7,1);\n\\draw (8,0) -- (7,0);\n\\draw (8,0) -- (7,-1);\n\\draw (10,0) ellipse (1.4 and .3);\n\\node at (10,0) {$u,x_1,x_2,w_1,w_2$};\n\\node at (13,2.2) {$p$};\n\\node at (13,1.2) {$q$};\n\\node at (13,-0.8) {$r$};\n\\node at (13,-1.8) {$s$};\n\\draw [fill=black] (13,2) circle [radius=.05];\n\\draw [fill=black] (13,1) circle [radius=.05];\n\\draw [fill=black] (13,-1) circle [radius=.05];\n\\draw [fill=black] (13,-2) circle [radius=.05];\n\\draw (11.3,0) -- (13,2);\n\\draw (11.3,0) -- (13,1);\n\\draw (11.3,0) -- (13,-1);\n\\draw (11.3,0) -- (13,-2);\n}\n\\end{center}\n\n{\\bf Type 5}.\n\n\\begin{center}\n\\pic{\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\node at (0,1.2) {$a$};\n\\node at (0,-.8) {$b$};\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,.2) {$z$};\n\\draw (0,1) -- (1,0);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (2,0) circle [radius=.05];\n\\draw (1,0) -- (2,0);\n\\node at (2,.2) {$u$};\n\\draw [fill=black] (3,0) circle [radius=.05];\n\\draw (3,0) -- (2,0);\n\\node at (3,-.2) {$v$};\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\node at (3,1.2) {$x_1$};\n\\draw (3,1) -- (2,0);\n\\draw (3,1) -- (3,0);\n\\draw [fill=black] (4,0) circle [radius=.05];\n\\node at (4,.2) {$x_2$};\n\\draw (3,0) -- (4,0);\n\\draw [fill=black] (5,0) circle [radius=.05];\n\\draw [fill=black] (5,1) circle [radius=.05];\n\\draw [fill=black] (5,-1) circle [radius=.05];\n\\draw (4,0) -- (5,0);\n\\draw (3,1) -- (5,1);\n\\draw (4,0) -- (5,-1);\n\\node at (5,1.2) {$p$};\n\\node at (5,0.2) {$q$};\n\\node at (5,-.8) {$r$};\n\\draw [->] [ultra thick] (6,0) -- (7,0);\n\\draw (10,0) ellipse (1.4 and .3);\n\\node at (10,0) {$z,u,v,x_1,x_2$};\n\\draw [fill=black] (8,1) circle [radius=.05];\n\\draw 
[fill=black] (8,-1) circle [radius=.05];\n\\node at (8,-0.8) {$b$};\n\\node at (8,1.2) {$a$};\n\\draw [fill=black] (12.5,0) circle [radius=.05];\n\\draw [fill=black] (12.5,1) circle [radius=.05];\n\\draw [fill=black] (12.5,-1) circle [radius=.05];\n\\draw (11.4,0) -- (12.5,1);\n\\draw (11.4,0) -- (12.5,-1);\n\\draw (11.4,0) -- (12.5,0);\n\\node at (12.5,1.2) {$p$};\n\\node at (12.5,-0.8) {$r$};\n\\node at (12.5,0.2) {$q$};\n\\draw (8,1) -- (8.6,0);\n\\draw (8,-1) -- (8.6,0);\n}\n\\end{center}\n\n\n{\\bf Type 33.}\n\\begin{center}\n\\pic{\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\node at (0,1.2) {$u_1$};\n\\node at (0,-.8) {$u_2$};\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,.2) {$u$};\n\\node at (2,.2) {$v$};\n\\draw (0,1) -- (1,0);\n\\draw (0,-1) -- (1,0);\n\\node at (-1,2.2) {$a$};\n\\node at (-1,1.2) {$b$};\n\\node at (-1,-.8) {$c$};\n\\node at (-1,-1.8) {$d$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (0,1) -- (1,0);\n\\draw (0,1) -- (-1,2);\n\\draw (0,1) -- (-1,1);\n\\draw (0,-1) -- (-1,-1);\n\\draw (0,-1) -- (-1,-2);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (-1,2) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-2) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\node at (3,-.8) {$v_2$};\n\\node at (3,1.2) {$v_1$};\n\\draw [fill=black] (4,1) circle [radius=.05];\n\\draw [fill=black] (4,-1) circle [radius=.05];\n\\node at (4,2.2) {$p$};\n\\node at (4,1.2) {$q$};\n\\node at (4,-.8) {$r$};\n\\node at (4,-1.8) {$s$};\n\\draw [fill=black] (4,2) circle [radius=.05];\n\\draw [fill=black] (4,-2) circle [radius=.05];\n\\draw (3,1) -- (4,1);\n\\draw (3,1) -- (4,2);\n\\draw (3,-1) -- 
(4,-1);\n\\draw (3,-1) -- (4,-2);\n\\draw [->] [ultra thick] (5,0) -- (6,0);\n\\draw (9,0) ellipse (1 and .2);\n\\draw (12,0) ellipse (1 and .2);\n\\node at (14,2.2) {$p$};\n\\node at (14,1.2) {$q$};\n\\node at (14,-.8) {$r$};\n\\node at (14,-1.8) {$s$};\n\\node at (7,2.2) {$a$};\n\\node at (7,1.2) {$b$};\n\\node at (7,-.8) {$c$};\n\\node at (7,-1.8) {$d$};\n\\draw [fill=black] (14,2) circle [radius=.05];\n\\draw [fill=black] (14,1) circle [radius=.05];\n\\draw [fill=black] (14,-2) circle [radius=.05];\n\\draw [fill=black] (14,-1) circle [radius=.05];\n\\draw [fill=black] (7,1) circle [radius=.05];\n\\draw [fill=black] (7,-1) circle [radius=.05];\n\\draw [fill=black] (7,2) circle [radius=.05];\n\\draw [fill=black] (7,-2) circle [radius=.05];\n\\draw (14,2) -- (13,0);\n\\draw (14,1) -- (13,0);\n\\draw (14,-1) -- (13,0);\n\\draw (14,-2) -- (13,0);\n\\draw (7,2) -- (8,0);\n\\draw (7,1) -- (8,0);\n\\draw (7,-2) -- (8,0);\n\\draw (7,-1) -- (8,0);\n\\node at (9,0) {$u,u_1,u_2$};\n\\node at (12,0) {$v,v_1,v_2$};\n}\n\\end{center}\n\n{\\bf Type 34.}\n\n\\hspace{1in}\n\\pic{\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle [radius=.05];\n\\node at (0,1.2) {$v_1$};\n\\node at (0,-.8) {$v_2$};\n\\draw [fill=black] (1,0) circle [radius=.05];\n\\node at (1,.2) {$v$};\n\\node at (2,.2) {$u$};\n\\draw (0,1) -- (1,0);\n\\draw (0,-1) -- (1,0);\n\\node at (-1,2.2) {$a$};\n\\node at (-1,1.2) {$b$};\n\\node at (-1,-.8) {$c$};\n\\node at (-1,-1.8) {$d$};\n\\draw (1,0) -- (2,0);\n\\draw (2,0) -- (3,1);\n\\draw (2,0) -- (3,-1);\n\\draw (0,1) -- (1,0);\n\\draw (0,1) -- (-1,2);\n\\draw (0,1) -- (-1,1);\n\\draw (0,-1) -- (-1,-1);\n\\draw (0,-1) -- (-1,-2);\n\\draw (0,-1) -- (1,0);\n\\draw [fill=black] (-1,2) circle [radius=.05];\n\\draw [fill=black] (-1,1) circle [radius=.05];\n\\draw [fill=black] (-1,-2) circle [radius=.05];\n\\draw [fill=black] (-1,-1) circle [radius=.05];\n\\draw [fill=black] (0,1) circle [radius=.05];\n\\draw [fill=black] (0,-1) circle 
[radius=.05];\n\\draw [fill=black] (3,1) circle [radius=.05];\n\\draw [fill=black] (3,-1) circle [radius=.05];\n\\node at (3,-1.2) {$u_2$};\n\\node at (3,1.2) {$u_1$};\n\\draw (3,1) -- (3,-1);\n\\draw [fill=black] (4,1) circle [radius=.05];\n\\draw [fill=black] (4,-1) circle [radius=.05];\n\\node at (4,-1.2) {$w_2$};\n\\node at (4,1.2) {$w_1$};\n\\draw (3,1) -- (4,1);\n\\draw (3,-1) -- (4,-1);\n\\node at (5,2.2) {$p$};\n\\node at (5,1.2) {$q$};\n\\node at (5,-.8) {$r$};\n\\node at (5,-1.8) {$s$};\n\\draw [fill=black] (5,2) circle [radius=.05];\n\\draw [fill=black] (5,1) circle [radius=.05];\n\\draw [fill=black] (5,-2) circle [radius=.05];\n\\draw [fill=black] (5,-1) circle [radius=.05];\n\\draw (4,1) -- (5,1);\n\\draw (4,1) -- (5,2);\n\\draw (4,-1) -- (5,-1);\n\\draw (4,-1) -- (5,-2);\n\\draw [->] [ultra thick] (6,0) -- (7,0);\n}\n\n\\hspace{3in}\n\\pic{\n\\draw (10,0) ellipse (1 and .2);\n\\draw (13,0) ellipse (1.4 and .3);\n\\node at (10,0) {$v,v_1,v_2$};\n\\node at (13,0) {$u,u_1,u_2,w_1,w_2$};\n\\node at (8,2.2) {$a$};\n\\node at (8,1.2) {$b$};\n\\node at (8,-.8) {$c$};\n\\node at (8,-1.8) {$d$};\n\\draw [fill=black] (8,2) circle [radius=.05];\n\\draw [fill=black] (8,1) circle [radius=.05];\n\\draw [fill=black] (8,-2) circle [radius=.05];\n\\draw [fill=black] (8,-1) circle [radius=.05];\n\\draw (9,0) -- (8,1);\n\\draw (9,0) -- (8,2);\n\\draw (9,0) -- (8,-1);\n\\draw (9,0) -- (8,-2);\n\\node at (15.4,2.2) {$p$};\n\\node at (15.4,1.2) {$q$};\n\\node at (15.4,-.8) {$r$};\n\\node at (15.4,-1.8) {$s$};\n\\draw [fill=black] (15.4,2) circle [radius=.05];\n\\draw [fill=black] (15.4,1) circle [radius=.05];\n\\draw [fill=black] (15.4,-2) circle [radius=.05];\n\\draw [fill=black] (15.4,-1) circle [radius=.05];\n\\draw (14.4,0) -- (15.4,1);\n\\draw (14.4,0) -- (15.4,2);\n\\draw (14.4,0) -- (15.4,-1);\n\\draw (14.4,0) -- (15.4,-2);\n}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn the past years, {\\it Integral}--IBIS (Ubertini 
et al. 2003)\nand {\\it Swift}--BAT (Barthelmy et al. 2005) catalogs of hard X--ray selected \nextragalactic sources have been published (Sambruna et al. 2007; Bird et al. 2010), opening \na new window in the study of blazars.\n\nThe hard X--ray selection produces a set of diverse sources, including extremely ``red'' \nflat spectrum radio quasars (FSRQs), with Inverse Compton (IC) peak at keV-MeV\nenergies, or ``blue'' BL Lacs with synchrotron peak at these frequencies,\nand provides a way to test the validity of the blazar sequence, \nlooking for outliers not predicted by the sequence itself (Giommi et al. 2007).\nIn particular, by selecting hard X--ray luminous FSRQs at high redshift, i.e. with the highest intrinsic \nbolometric luminosities, it is possible to collect samples of \nsupermassive black holes (SMBHs) in the early Universe, thus introducing important observational \nconstraints on the number density of \nheavy black holes at high redshift and hence on their formation processes and timing.\n\nIGR J22517+2217 was first reported by Krivonos et al.\n(2007) as an unidentified object detected by {\\it Integral}--IBIS (150 ks of total IBIS exposure).\n{\\it Swift} follow-up observations were used to associate the source \nwith MG3 J225155+2217 (Bassani et al. 2007). MG3 J225155+2217 has been optically identified as a QSO by Falco et al. (1998) in a redshift survey of \n177 FSRQs, on the basis of S$_{IV}$, Ly$\\alpha$, C$_{II}$ and C$_{IV}$ emission lines.\nIGR J22517+2217 is the highest redshift ($z=3.668$) blazar detected in the fourth {\\it Integral}--IBIS hard X--ray catalog (Bird et al. 2010).\nThe source has a {\\it Swift}--BAT counterpart (SWIFT J2252.0+2218) in the 3-year BAT survey (Ajello et al. 2009) and is present in the multifrequency ``Roma--BZCAT'' \ncatalog of blazars (Massaro et al. 2009).\n\nUsing XRT, IBIS and archival optical\/IR data, Bassani et al. 
(2007) constructed a non-simultaneous SED \nof the source, showing an extremely bright X--ray emission with respect to the\noptical emission ($\\alpha_{OX}<0.75$), \nand suggested that IGR J22517+2217 could be a rare FSRQ with synchrotron peak at X--ray frequencies,\nor a more canonical FSRQ, i.e. with the synchrotron peak at radio--mm frequencies and IC peak at MeV-GeV energies, \nbut with an exceptionally strong Compton dominance.\n\nThis ``controversial'' blazar has been studied also by Maraschi et al. (2008). \nThey reanalyzed the existing {\\it Swift} (XRT and BAT) and {\\it Integral}--IBIS data, \nand proposed a ``standard leptonic one-zone emission model'' (Ghisellini \\& Tavecchio 2009, see Sect. 3) with the peak of the synchrotron component at microwave\/radio frequencies, \nand a high luminosity external Compton (EC) component peaking in hard X--rays to reproduce the SED of the source.\nThis model ruled out both a synchrotron and a synchrotron self--Compton (SSC) interpretation for the \nX--ray emission. \n\nGhisellini et al. (2010) included IGR J22517+2217 in their sample of 10 X--ray selected blazars at $z>2$:\nthe intent of the paper was to characterize the physical properties of these powerful sources, and to confirm\nthe capability of the hard X--ray selection in finding \nextreme blazars with massive SMBH, powerful jets and luminous accretion disks. 
\nIGR J22517+2217 is the highest redshift FSRQ in their sample and shows the highest total jet power (P$_{Jet}=1.5\\times10^{48}$ erg s$^{-1}$).\n\nAll these previous studies have been performed through the analysis of the ``average'' X--ray spectra obtained with the INTEGRAL and {\\it Swift}-BAT surveys, \nwithout taking into account any possible flux variation of the source during the period of monitoring (5 years for INTEGRAL and BAT).\nIn this paper we present the discovery of strong flaring activity, \nin X--ray (IBIS and BAT) archival data, of this extremely bright and peculiar\nFSRQ, and the modelling of both its flaring and quiescent SEDs.\nNew {\\it Suzaku} and {\\it Fermi} data are used for characterizing the \nquiescent state.\nOur goal is to investigate the evolution \nof the SED and obtain information on the physical conditions of the source in the two different states.\n\nThe paper is organized as follows. In \\S2 we report the multiwavelength data analysis of the instruments involved in the SED building. \nIn \\S3 we describe the model adopted to reproduce the broad band SED, while in \\S4 we discuss the SED fitting of both the flaring and quiescent state. \nThe summary of our results is presented in \\S5.\nThroughout the paper, a $\\Lambda$CDM cosmology with $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$, \n$\\Omega_\\Lambda = 0.73$, and $\\Omega_m = 0.27$ is adopted.\n\n\n\n\\section{Data reduction}\n\n\n\\subsection{New {\\it Suzaku} data}\n\n{\\it Suzaku} observed the source on 2009 Nov 26 (ID 704060010, PI A. De Rosa), \nfor a net exposure of $\\sim$40 ks, with both the X--ray Imaging Spectrometer (XIS; Koyama et al. 2007) and the \nHard X--ray Detector (HXD; Takahashi et al. 
2007).\nThe XIS instrument had three operating detectors at the time of the observation: the front-illuminated XIS 0 and XIS 3, sensitive in the 0.5--10 keV band, and the \nback-illuminated XIS 1, extending the low energy range to 0.2 keV.\nThe HXD is instead composed of GSO scintillators and silicon PIN diodes.\nThe PIN detectors observe in the 12--60 keV energy band, while the GSO ones can observe up to 600 keV.\nData reduction and processing were performed using HEASOFT $v6.9$ and {\\it Suzaku ftools v16}.\nOnly XIS and PIN data have been used in this analysis, since the source is below the \nsensitivity limit of the GSO scintillators.\nThe cleaned event files produced by the data processing with the standard selection criteria were used.\n\nXIS source events were extracted from a region of radius 200 arcsec, and the \nbackground events from an annulus (external radius 400 arcsec) outside the source region.\nThe response matrix and effective area were calculated for each detector using {\\it Suzaku} \nftools tasks {\\it xisrmfgen} and {\\it xissimarfgen}.\nGiven that XIS 0 and XIS 3 have similar responses, their event files were summed.\nOnly data in the 0.5--8 keV band were considered, and the spectral counts were \nrebinned using at least 30 counts per bin, in order to allow the use of $\\chi^2$ statistics.\n\nPIN data were extracted from the HXD cleaned event files after standard screening. \nThe tuned background model supplied\nby the {\\it Suzaku} team (Fukazawa et al. 2009) was used for the ``Non X--ray Background'' (NXB) events. \nThe background light curve was corrected for the 10x oversampling rate. \nThe source spectra were corrected for deadtime using {\\it hxddtcor}. \nWe estimated the cosmic X--ray background (CXB) contribution to the\nPIN background using the model given in Gruber et al. 
(1999),\nwhich is folded with the PIN response to estimate the CXB rate.\n\nThe XIS and PIN data were fitted with a simple power-law, modified by neutral absorption at the source redshift, plus \ngalactic absorption, fixed to the value measured by Kalberla et al. (2005) at the source \ncoordinates\\footnote{http:\/\/heasarc.nasa.gov\/cgi-bin\/Tools\/w3nh\/w3nh.pl} ($N_{\\rm H, Gal} = 5\\times10^{20}$ cm$^{-2}$). \nThe best fit values are reported in Table 1.\nThe XIS and PIN data show a flat spectrum, with $\\Gamma=1.5\\pm0.1$ and $\\Gamma=1.5\\pm0.8$ \nrespectively, with the PIN power law normalization being $\\sim1.2$ times the XIS one, a known cross-calibration issue\n\\footnote{http:\/\/heasarc.gsfc.nasa.gov\/docs\/suzaku\/analysis\/abc sec 5.4}.\nThe XIS data show some curvature below 1 keV, which can be reproduced either by an intrinsic \ncolumn density of $N_H=(1.5\\pm1.1)\\times10^{22}$ cm$^{-2}$ at the source redshift (compatible with the value found in Bassani et al. using XRT data)\nor with a broken power-law with break energy of $0.8\\pm0.1$ keV and $\\Delta\\Gamma=0.8$.\nBoth models give a comparable reduced $\\chi^2$ of 1.03 and 1.04, respectively; thus the \nquality of the data does not allow us to disentangle the two possibilities.\nIn Sect. 4 we will attempt to disentangle these two different models using broadband data. \n\nThe hard X--ray flux obtained by fitting the PIN data is a factor of $\\sim$10 (4) lower than the one\nreported in the literature from {\\it Integral}-IBIS ({\\it Swift}-BAT) survey data (Bassani et al. 2007; Maraschi et al. 2008; Ghisellini et al. 
2010).\nThis suggests that the source was in a much less active state during the 2009 {\\it Suzaku} observation, compared to previous measurements.\nThis led us to reanalyze the archival IBIS and BAT data in order to check for the presence of variability in the hard X--ray source flux.\nIn the top panel of Figure 1 we show the {\\it Suzaku} XIS (red, black) and PIN (green) unfolded spectra and best-fit model found for IGR J22517+2217. \nFor comparison the IBIS spectrum (blue) is also shown, to emphasize the different flux state.\n\n\n\\subsection{{\\it Integral}-IBIS and {\\it Swift}-BAT survey data}\n\n\\begin{figure}\n\\begin{center}\n\\psfig{figure=plot_xis_integral_def.ps,width=8cm,height=6.5cm}\\hspace{1cm}\\psfig{figure=igr22_lc.ps,width=8cm,height=7cm}\n\\caption{\n{\\it a) Top panel:} {\\it Suzaku} XIS (red, black) and PIN (green) unfolded spectrum and model of IGR J22517+2217. \nFor comparison the IBIS cat4 spectrum is shown (blue). \n{\\it b) Bottom panel:} Hard X--ray light curve of IGR J22517+2217. \nRed squares (black circles) represent BAT (IBIS) 15--55 keV flux. \nThe blue empty (green filled) triangles show the time of XRT\/UVOT ({\\it Suzaku}) observation. \nThe cyan solid lines represent the time intervals used in the BAT spectra extraction.\nThe orange dashed line represents the period of observation of {\\it Fermi}\/LAT.\n}\n\\end{center}\n\\label{fig:light}\n\\end{figure}\n\n\nIBIS data are taken from the Fourth IBIS\/ISGRI Soft $\\gamma$--ray Survey Catalog (Bird et al. 2010), \ncovering the period Feb 2003 -- Apr 2008.\nImages from the ISGRI detector (Lebrun et al. 2003) for each pointing have been generated\nusing the Offline Scientific Analysis software (OSA) version 7.0 (Goldwurm et al. 2003). \nFive primary energy bands (20--40, 30--60, 20--100, 17--30, and\n18--60 keV) were used to maximize the detection sensitivity\nfor sources with various energy spectra (for details see Bird et al. 
2010).\n\nThe BAT analysis results presented in this paper were derived with all the available data \nduring the time interval 2005 Jan 19 -- 2010 Apr 14.\nThe 15--55 keV spectra and light curve were extracted following the recipes presented in Ajello \net al. (2008, 2009). \nThe spectra are constructed by weighted averaging of the source spectra extracted from short \nexposures (e.g., 300s) and are accurate to the mCrab level. \nThe reader is referred to Ajello et al. (2009) for more details. \n\nThe total IBIS and BAT spectra can be reproduced with a simple power law having $\\Gamma=1.6\\pm 0.6$ and \n$1.6\\pm0.5$, respectively;\nthe 15--55 keV flux is F$_{15-55 \\, {\\rm keV}}=(2.5\\pm0.9)\\times10^{-11}$ and \nF$_{15-55 \\, {\\rm keV}}=(2.1\\pm0.8)\\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$, respectively.\n\nThe bottom panel of Fig. 1 shows the BAT (red squares) and IBIS (black diamonds) historical light curve \nof IGR J22517+2217 from 2003 Dec 04 to 2010 Mar 23.\nThe 15--55 keV IBIS light curve was extracted from the ISDC Science Products Archive\\footnote{http:\/\/www.isdc.unige.ch\/heavens\\_webapp\/integral} \nadopting a nominal bin size of 100 days. 
\nWe converted the observed IBIS and BAT count rates into 15--55 keV observed flux, using the \nWebPimms HEASARC tool\\footnote{http:\/\/heasarc.nasa.gov\/Tools\/w3pimms.html}, \nassuming as underlying model a power law with photon index $\\Gamma=1.6$, consistent with \nthe values observed in the BAT and IBIS spectra, and assuming a constant cross-calibration between IBIS and BAT, equal to one.\n\n\\noindent Fig. 1 (bottom panel) shows that: \n\n-- the source displays quite strong long-term variability in hard X--rays;\n\n-- a strong flare episode occurred around Jan 2005, and the source reached a 15--55 keV flux maximum of $(8\\pm2) \\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$\n(a factor of 20 higher than the flux measured by {\\it Suzaku}-PIN in 2009); \n\n-- after the flare, the source faded into a quiescent state, reaching a flux that is at or below the detection limit of both BAT and IBIS instruments.\nAs can be seen, the IBIS light curve is completely dominated by the flare, \ni.e. the source flux is below the IBIS detection limit after \nMJD 53550 and the total spectrum extracted from the entire period can be considered representative of the flare.\n\nAs a result of its different pointing strategy, BAT has a much more regular and extended coverage of the source, \nand we were able to characterize the source in both states.\nWe extracted a BAT spectrum from the period around the flare of 2005 (2004 Dec 11--2005 Mar 21)\nand also one from the remaining quiescent period (2005 May 10--2009 Jun 18, \nsolid cyan lines in the bottom panel of Fig. 1). 
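The band-to-band scaling behind a count-rate-to-flux conversion of this kind follows from the energy flux integral of the assumed power law; a simplified numerical sketch (it ignores the instrument responses that WebPIMMS folds in, and the normalization and band edges below are illustrative):

```python
from math import log

def powerlaw_energy_flux(norm, gamma, e1, e2):
    # Energy flux of a photon power law N(E) = norm * E**(-gamma),
    # integrated over [e1, e2] (keV); units follow those of norm.
    if abs(gamma - 2.0) < 1e-9:
        return norm * log(e2 / e1)
    return norm * (e2 ** (2.0 - gamma) - e1 ** (2.0 - gamma)) / (2.0 - gamma)

# Scale a 15--55 keV flux to the 20--40 keV band for the
# Gamma = 1.6 power law assumed in the text.
gamma = 1.6
ratio = (powerlaw_energy_flux(1.0, gamma, 20.0, 40.0)
         / powerlaw_energy_flux(1.0, gamma, 15.0, 55.0))  # ~0.53
f_20_40 = 2.5e-11 * ratio  # erg cm^-2 s^-1, starting from F(15-55) = 2.5e-11
```

The normalization cancels in the ratio, so only the photon index and the band edges matter for the conversion between bands.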
\n\n\nThe BAT flux relative to the flare state is the highest one measured in the hard X-ray energy range\n(F$_{15-55 \\, {\\rm keV}}=(3.7\\pm0.8)\\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$).\nDuring this state the source is detected up to 200 keV, and the spectrum is characterized by photon index of 1.5$\\pm$0.5.\nIn the quiescent state the source is detected, with significance $\\sim3\\sigma$, only up to $\\sim75$ keV, and \nthe spectrum has a flux F$_{15-55 \\, {\\rm keV}}$ a factor of $\\sim15$ lower than the flaring one, while the photon index is 1.7$\\pm$1.1.\nConsidering the large uncertainties on $\\Gamma$ we can consider the spectra (in flaring and quiescent state) comparable (see\nTable 1 for IBIS and BAT spectral analysis results).\n\n\\begin{table*}\n\\begin{center}\n\\label{tab:xray}\n\\begin{tabular}{ccccccccccccccc}\n\\hline\\hline\\\\\n\\multicolumn{1}{c} {Date}&\n\\multicolumn{1}{c} {Inst.}&\n\\multicolumn{1}{c} {Exp.}&\n\\multicolumn{1}{c} {$N_{\\rm H}$}&\n\\multicolumn{1}{c} {$\\Gamma$}&\n\\multicolumn{1}{c} {F$_{2-10}$}&\n\\multicolumn{1}{c} {$\\log L_{2-10}$}&\n\\multicolumn{1}{c} {F$_{15-55}$}& \n\\multicolumn{1}{c} {$\\log L_{15-55}$}\\\\\n (1) & (2) &(3) & (4) & (5) & (6) & (7) & (8) & (9) \\\\\n\\hline\\\\ \n2003 Dec 04--2007 Nov 17 & IBIS & 191 & - & 1.6$\\pm$0.6 & - & - &25.1$\\pm$8.9 & 48.1 \\\\\n2004 Dec 11--2005 Mar 21 & BAT Flare & 160 & - & 1.5$\\pm$0.5 & - & - &36.9$\\pm$8.1 & 48.3 \\\\\n2005 May 10--2009 Jun 18 & BAT quiesc.&1610 & - & 1.7$\\pm$1.1 & - & - &2.6$\\pm$1.6 & 47.3 \\\\\n2007 May 01 & XRT & 40 & $2.0\\pm1.5$ & 1.4$\\pm$0.1 & 2.4$\\pm$0.4& 47.1 & - & - \\\\\n2009 Nov 01 & XIS & 40 & $1.5\\pm1.1$ & 1.5$\\pm$0.1 & 1.2$\\pm$0.1& 46.7 & - & - \\\\\n2009 Nov 01 & PIN & 40 & - & 1.5$\\pm$0.8 & - & - &3.8$\\pm$1.8 & 47.4\\\\\n\\hline \n\\hline \n\\end{tabular}\n\\end{center} \n\\caption{Best fit model parameters for the different observations of IGR 22517+2217 analyzed in this work.\nThe broadband continuum is reproduced with a 
simple power-law, modified by intrinsic absorption at the source redshift, \nplus galactic absorption N$_{\\rm H, Gal}$ = 5$\\times$10$^{20}$ cm$^{-2}$.\nColumns: \n(1) Date;\n(2) Instrument; \n(3) Effective exposure (ks);\n(4) Column density (in $10^{22}$ cm$^{-2}$ units);\n(5) Photon index; \n(6) 2--10 keV observed flux (in $10^{-12}$ erg s$^{-1}$ cm$^{-2}$ units); \n(7) Log 2--10 keV deabsorbed luminosity (erg s$^{-1}$);\n(8) 15--55 keV flux (in $10^{-12}$ erg s$^{-1}$ cm$^{-2}$ units);\n(9) Log 15--55 keV luminosity (erg s$^{-1}$).}\n\\end{table*}\n \n\n\n\n\\subsection{{\\it Fermi}\/LAT data}\n\n{\\it Fermi} Large Area Telescope (LAT; Atwood et al. 2009) data were collected from Aug 2008 (MJD 54679) to Aug 2010 (MJD 55409). \nDuring this period, the {\\it Fermi}\/LAT instrument operated mostly in survey mode, \nscanning the entire $\\gamma$-ray sky every 3 hours.\nThe analysis was performed with the ScienceTools software package version {\\it v9r17p0}, which\nis available from the Fermi Science Support Center. \nOnly events having a high probability of being photons\n-- those in the ``diffuse class'' -- were used. \n\nThe energy range used was 100 MeV -- 100 GeV, and the maximum zenith angle was $105^{\\circ}$.\nWe adopted the unbinned likelihood analysis, using the standard P6\\_V3\\_DIFFUSE response functions.\nThe diffuse isotropic background used is {\\it isotropic\\_iem\\_v02} and the galactic diffuse emission model used is \n{\\it gll\\_iem\\_v02}\\footnote{http:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/access\/lat\/BackgroundModels.html}.\nWe considered a region of interest (RoI) of $15^{\\circ}$ from the IGR J22517+2217 position.\nAll sources from the 1FGL catalog (Abdo et al. 
2010) within the RoI of the source were included in the fit, \nwith their photon indices and integral fluxes free to vary, \nplus a point source with power law spectrum at the IGR J22517+2217 position, having photon index fixed to 2.2 (a value typical of FSRQs in the GeV band).\nThe normalization of the background was also left free to vary.\nWe also repeated the analysis using the source list of the 2FGL catalog (Ackermann et al. 2011), obtaining consistent results.\n\nIGR J22517+2217 is located $\\sim6^{\\circ}$ from 3C 454.3, a bright, extremely variable source in \nthe $\\gamma$--ray sky (Ackermann et al. 2010). It contributes more than 90\\% of the total counts in the RoI.\nWe tried to exclude the period of flaring activity of 3C 454.3 from the data set, in order to minimize the contamination by this source.\nThe results obtained for IGR J22517+2217, however, did not change significantly.\n\nIGR J22517+2217 is not detected in the 2-year observation, and the computed Test Statistic (TS, Mattox et al. 1996)\nis $TS \\simeq3.5$ in the full band.\nWe therefore calculated the 95\\% upper limits in 5 energy bands (i.e. 0.1--0.3, 0.3--1, 1--3, 3--10, 10--100 GeV), using the profile likelihood method.\nThe upper limits were corrected for attenuation due to extragalactic background light (EBL) through $\\gamma-\\gamma$ interactions (Chen et al. 2004, Razzaque et al. 
2009), \nalthough only the 10-100 GeV band was found to be affected by significant attenuation.\nBecause it is close to 3C 454.3, the background around the position \nof IGR J22517+2217 is higher than in a typical extragalactic field, making it difficult to place more stringent upper limits at GeV energies.\nThe upper limits are plotted in Figure 2, together with all the other data discussed in this paper.\n\nGiven that {\it Fermi} data were collected starting from Aug 2008, they fall entirely within\nthe quiescent period of the source, so it is not surprising that IGR J22517+2217 was not detected.\nTherefore, these data are used to characterize the \nsource in the quiescent SED only, while no $\gamma$-ray data are available \nfor the flaring-state SED.\n\n\n\subsection{Archival Data}\n\nWe collected archival radio and optical data for the source from the NASA\/IPAC Extragalactic Database\footnote{http:\/\/ned.ipac.caltech.edu\/}.\nRadio data at 1.4, 4.8, and 8.4 GHz come from the NRAO\/VLA.\nOptical data in the J, H, and K bands were taken with the UFTI instrument at UKIRT (Kuhn 2004).\nThe UV data are taken from Ghisellini et al. (2010), who corrected the {\it Swift}-UVOT observed magnitudes\nfor the absorption of neutral hydrogen in intervening Lyman$-\alpha$ absorption systems. \nWe also reanalyzed the archival XRT spectrum (average of 4 contiguous observations), obtaining results consistent with those found in Bassani et al. (2007)\nusing the same set of data, and with the results reported in \S2.1 for the XIS spectra.\nThe only difference is in the XRT observed 2--10 keV flux, which is a factor of 2 higher than the flux measured by XIS.\n\n\begin{figure*}\n\begin{center}\n\psfig{figure=sed_tot2.ps,width=16cm,height=12cm}\n\hspace{0.3cm}\n\vspace{-0.5cm}\n\psfig{figure=zoom.ps,width=10cm,height=8cm}\n\caption{\n{\it a) Top panel:} \nSpectral energy distribution of IGR J22517+2217. 
\nGray circles and arrows represent archival radio\/optical\/UV data from NED. \nEmpty red triangles and magenta \nsquares represent XIS 0 and XIS 3 data, \nempty green circles and orange pentagons represent PIN and BAT quiescent data, respectively, \nwhile black arrows are {\it Fermi} upper limits in 5 bands.\nFilled violet squares represent XRT data, filled cyan diamonds and blue pentagons represent \nIBIS and BAT flare data, respectively.\nThe solid cyan and orange curves are the results of the modeling of the quiescent \nand flaring states, respectively.\nWith gray lines we show the different components of the non--thermal emission:\nsynchrotron (dotted), synchrotron self--Compton (long dashed) and\nexternal Compton (dot--dashed).\nThe black dashed line corresponds to the thermal emission of the disk, the IR torus and the X--ray disk corona. \nThe model does not account for the radio emission, which is produced in much larger regions of the jet.\n{\it b) Bottom panel:} Zoom on the X--ray energy range for the two SEDs. 
Symbols as in top panel.\n}\n\end{center}\n\label{fig:sed2}\n\end{figure*}\n\n\section{The SED model}\n\nThe model adopted to fit the SED is a leptonic, one--zone synchrotron and inverse Compton model, \nfully discussed in Ghisellini \& Tavecchio (2009).\nThe assumptions can be summarized as follows:\n\n\begin{table*}\n\label{tab:sed}\n\begin{center}\n\begin{tabular}{ccccccccc}\n\hline\hline\\\n\multicolumn{1}{c} {State}&\n\multicolumn{1}{c} {$R_{\rm diss}$}&\n\multicolumn{1}{c} {$P'_{\rm inj}$}&\n\multicolumn{1}{c} {$B$}&\n\multicolumn{1}{c} {$\Gamma$}&\n\multicolumn{1}{c} {$\gamma_{\rm b}$}&\n\multicolumn{1}{c} {$\gamma_{\rm max}$}&\n\multicolumn{1}{c} {$s_1$}&\n\multicolumn{1}{c} {$s_2$}\\\n (1) & (2) &(3) & (4) & (5) & (6) & (7) & (8) & (9) \\\n\hline\\ \nLow & 570 (1900) & 0.045 & 1.06 & 16 & 70 & 2e3 & -1 & 4 \\\nHigh & 990 (3300) & 0.30 & 0.61 & 16* & 70* & 2e3* & -1* & 4* \\\n\hline \n\end{tabular}\end{center} \n\caption{Input parameters of the SED fitting for the low and high states of IGR J22517+2217.\nColumn: \n(1) State; \n(2) dissipation radius in units of $10^{15}$ cm and, in parentheses, in units of\nSchwarzschild radii;\n(3) intrinsic injected power ($10^{45}$ erg s$^{-1}$) in the form of relativistic electrons;\n(4) magnetic field intensity (Gauss);\n(5) bulk Lorentz factor at $R_{\rm diss}$; \n(6) and (7) break and maximum random Lorentz factors of the injected electrons; \n(8) and (9) slopes of the injected electron distribution [Q($\gamma$)] below and above $\gamma_{\rm b}$.\nValues marked with an asterisk are kept fixed at their low-state values.}\n\end{table*}\n\n-- The emitting region is assumed to be spherical \n(with radius $R$) and at a distance $R_{\rm diss}$ (dissipation radius) from the central black hole.\nThe emitting electrons are injected at a rate Q($\gamma$) [cm$^{-3}$ s$^{-1}$] for a finite time equal to the\nlight crossing time $R\/c$. 
\nThe adopted function Q($\gamma$) is a smoothly broken power law with a break at $\gamma_{\rm b}$\nand slopes $s_1$ and $s_2$ below and above $\gamma_{\rm b}$, respectively.\nThe emitting region moves with a velocity $\beta c$ corresponding to a bulk Lorentz factor $\Gamma$. \nWe observe the source at the viewing angle $\theta_{\rm v}$.\n\n-- The external radiation sources taken into account are:\nthe broad line region (BLR) photons, assumed to re--emit 10\% of the \naccretion disk luminosity from a shell--like distribution of clouds located at a distance \n$R_{\rm BLR}=10^{17}L_{\rm d, 45}^{1\/2}$ cm (Kaspi et al. 2005), where $L_{\rm d}$ is the disk luminosity; \nthe IR emission from a dusty torus located at a distance \n$R_{\rm IR}=2.5\times10^{18}L_{d, 45}^{1\/2}$ cm (Elitzur 2006); \nand the direct emission from the accretion disk, including its X--ray corona. \nThe starlight contribution from the inner region of the host galaxy and the cosmic\nbackground radiation are also taken into account, but these photon sources are unimportant in\nthe case of IGR J22517+2217.\n\n-- The accretion disk is a standard Shakura \& Syunyaev (1973) disk, emitting as a blackbody at each radius.\nThe maximum temperature ($T_{\rm max}$), i.e. the peak of the disk luminosity, is assumed to occur at $\sim5$ \nSchwarzschild radii ($R_S$).\nThus, from the position of the peak of the disk emission and the total luminosity of the accretion \ndisk ($L_d$), it is possible to derive $M_{\rm BH}$ and $\dot M$, once a value for the efficiency $\eta$ is assumed ($\eta=0.08$ for a Schwarzschild black hole). \nSee Ghisellini et al. 
(2010a) for a discussion of the caveats of this black hole mass estimate.\n\nWe can estimate the black hole mass $M_{\rm BH}$ and the accretion luminosity \n$L_{\rm d}$ of IGR J22517+2217 using optical and UV data and upper limits corrected for the absorption\nof neutral hydrogen in intervening Lyman $\alpha$ systems along the line of sight \n(see Ghisellini et al. 2010a for details).\nGiven the uncertainties in the amount of intervening Ly$\alpha$ systems \nand the paucity of data, the results must be considered approximate.\nWe find $M_{\rm BH}=10^9 M_{\sun}$ and $L_{\rm d}=6.8\times10^{46}$ erg s$^{-1}$.\nThese values correspond to a disk radiating at 45\% of the Eddington level.\n\nThe BLR is located at $R_{\rm BLR}= 8\times 10^{17}$ cm and the \nIR emitting torus at $R_{\rm IR}=2.5\times10^{19}$ cm.\nThe X--ray corona is assumed to emit 30\% of $L_{\rm d}$. \nIts spectral shape is assumed to be $\propto \nu^{-1} \exp(-h\nu\/150 \, {\rm keV})$. \n\n\n\section{Discussion}\n\nOur analysis shows that the extremely Compton dominated FSRQ IGR J22517+2217\nexperienced a strong flare in the high energy hump in Jan 2005, and then faded into a quiescent state.\nIn order to investigate the physical properties of the source, we built two SEDs for the two different states, and fitted \nthe data with the leptonic, one--zone synchrotron and inverse Compton model described in \S3.\n\nIn order to build the SED for the quiescent state of IGR J22517+2217, we used the X-ray data from {\it Suzaku} (XIS and PIN), \nthe BAT spectrum extracted from the quiescent period and the {\it Fermi}\/LAT 24-month upper limits.\nThe archival non--simultaneous optical\/UV data were added to our data.\n\nFor the flaring SED we used hard X-ray data from {\it Integral}--IBIS and the {\it Swift}--BAT \nspectrum extracted from the Jan 2005 flare.\nWe do not have soft X-ray data available for the flaring period. 
\nHowever, in order to place some constraints at soft X-ray frequencies, we chose to include the XRT spectrum in \nthe flaring SED. This has a factor of 2 higher normalization with respect to the XIS data.\nWe stress that, as observed in other samples of bright, red blazars (Ghisellini et al. 2010a), \nthe flux variability is larger at higher energies (i.e. in hard X--rays)\nbut modest at a few keV. This supports our choice of including the XRT data in the flaring SED.\n\nAll data points and SED model components of the flaring and quiescent SEDs are plotted in Fig. 2, top panel. Fig. 2, bottom\npanel, shows a zoom on the region of the X-ray data.\nThe strong and very bright hard X-ray spectrum, together with the upper limits in the {\it Fermi}\/LAT\nenergy range, constrains the peak of the high energy hump of the quiescent SED to be located \nat $\sim10^{20}$--$10^{21}$ Hz, and thus constrains the energy spectrum of the radiating electrons.\nThe same electron energy spectrum is then assumed in the flaring SED, for which no MeV\/GeV data are available.\nThe corresponding synchrotron peak falls around \n$10^{11}$ Hz for both SEDs.\n\nIn our single--zone leptonic model, the synchrotron and the high \nenergy humps are produced by the same population of electrons.\nFurthermore, if the high energy emission is due to the external Compton process,\nthe energies of the electrons emitting at the $\sim$MeV peak are rather modest,\nimplying a correspondingly low synchrotron peak frequency.\nWe therefore require that the synchrotron component peaks at low energies,\nclose to the self--absorption frequency, and furthermore that the\nthin synchrotron emission has a steep spectrum, whose flux lies below\nthe archival optical data (characterized instead by a rather hard slope,\nwhich we interpret as emission from the accretion disk).\n\nGiven the relative paucity of the observational data, the choice of the model parameters is not unique,\nand some further 
assumption has to be made.\nWe therefore fix the viewing angle $\theta_{\rm v}$ to 3$^\circ$, close to the\n$\theta_{\rm v}\sim 1\/\Gamma$ condition.\n\nNote that we model both states of\nthe source by changing the minimum number of parameters, since fitting \ntwo data sets with the same model allows the model parameters\nto be better constrained.\nIn particular, we assume that the accretion luminosity does not change \nbetween the two states, and require the bulk Lorentz factor and the parameters\nof the distribution of the injected electrons (break and maximum random Lorentz factors\nand slopes below and above $\gamma_{\rm b}$) to be the same for the two SEDs.\nThus, the parameters that are left free to vary from one state to the other are \nthe dissipation radius $R_{\rm diss}$, the injected power $P'_{\rm inj}$ and the \nmagnetic field $B$, which is proportional to $1\/R_{\rm diss}$ (the Poynting flux is assumed to be constant).\n\nThe results of our modeling are shown in Fig. 2, where we show the total flux together\nwith the contributions of the non--thermal (synchrotron, self Compton, external Compton)\nand thermal (accretion disk, IR torus, X--ray corona) components.\nAs can be seen, the curvature around 1 keV observed in the XRT and XIS spectra\ncan be well reproduced by an EC component changing (softening) slope from \n$\sim10^{17}$ to $\sim10^{19}$ Hz, disfavoring the intrinsic obscuration scenario\nfor the shape of the X-ray emission of IGR J22517+2217.\n\nTable 2 reports the parameters of the SED fitting for the quiescent and flaring states. 
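The internal consistency of the Table 2 values can be checked with a few lines of arithmetic. The following is an illustrative sketch, not part of the original analysis; it assumes the emitting-region radius scales as $R\propto R_{\rm diss}$, so that a constant Poynting flux ($\propto B^2R^2$) implies $B\propto 1\/R_{\rm diss}$:

```python
# Illustrative consistency check of the Table 2 parameters (not the paper's own code).
# Constant Poynting flux with R proportional to R_diss implies B ∝ 1/R_diss.

low  = {"R_diss": 570.0, "P_inj": 0.045, "B": 1.06}   # R_diss in 1e15 cm, P'_inj in 1e45 erg/s, B in G
high = {"R_diss": 990.0, "P_inj": 0.30,  "B": 0.61}

# B * R_diss should match between the two states if the Poynting flux is constant.
print(low["B"] * low["R_diss"], high["B"] * high["R_diss"])   # both ≈ 604

# The injected power changes by the quoted factor of ~7.
print(high["P_inj"] / low["P_inj"])                           # ≈ 6.7
```

The two products agree to better than one part in a thousand, confirming that the quoted $B$ values follow directly from the scaling assumption.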
\nThe main difference between the two SEDs is the power $P'_{\rm inj}$ \ninjected in the source in the form of relativistic electrons, which changes by a factor of $\sim$7.\nThe increase of $P'_{\rm inj}$ accounts for the enhanced X--ray flux in the high state.\nThe other difference is the location of the emitting region $R_{\rm diss}$, \nwhich becomes larger in the high state.\nThis is dictated by the detailed modeling of the slope of the soft to hard X--ray spectrum,\nwhich requires an electron distribution with a break at low energies, and another break\nat somewhat larger energies.\nThis is accounted for, in our modeling, by requiring that electrons of very low energies\n(corresponding to random Lorentz factors $\gamma<5$) do not cool in one light crossing time.\nThis can be achieved if the emitting region is located slightly beyond the BLR,\nin a zone where the BLR radiation energy density is somewhat smaller.\nThis leads to a larger $R_{\rm diss}$ in the high state.\nAs a consequence of assuming a larger region, the magnetic field is lower,\nfollowing the assumption of a constant Poynting flux ($\propto B^2 R^2$).\nThe large Compton dominance constrains the value of the magnetic field,\nand in turn the relevance of the self Compton flux, which is found to be almost negligible.\n\nThis is also in agreement with the results of Sikora et al. (2009):\nbright blazars with very hard ($\alpha_x<0.5$) X--ray spectra and a high luminosity ratio \nbetween the high and low frequency spectral components\nchallenge both the standard synchrotron self-Compton and \nhadronic models, while EC can easily account for these observed properties.\n\nIn the analysis described above, the bulk Lorentz factor is assumed to be constant. 
\nHowever, a change of the bulk Lorentz factor is often invoked to explain the variability of FSRQs.\nAs a further check, we performed a new fit of both the low and high states, \nleaving $\Gamma$ as a free parameter, in addition to $R_{\rm diss}$ and $P'_{\rm inj}$, although the fit is limited by the small number of data points.\n\nBoth SEDs are well reproduced with these new parameters for the low (high) state: $\Gamma$=15 (20); $R_{\rm diss}$=1700 (3500) $R_S$ and \nLog($P'_{\rm inj}$)= 43.48 (44.17) erg s$^{-1}$.\nIn this case the values of $P'_{\rm inj}$ are slightly lower for both the low and high states,\nand their ratio is smaller (a factor of $\sim5$), while the change \nin $R_{\rm diss}$ is slightly larger: \nfrom 1700 to 3500 $R_S$ instead of from 1900 to 3300 $R_S$.\nThus the change in $\Gamma$ has the main effect of slightly decreasing the variation of the total injected power\nneeded to account for the observed variability, but no other substantial differences are introduced.\nThis new fit also gives an idea of the degree of degeneracy in the fit parameters, due to the incompleteness of the data set,\nespecially in the $\gamma$-ray band.\n\n\n\nIn Table 3 we report the logarithm of the jet power in the form of radiation, \nPoynting flux and bulk motion of electrons and protons, \nin erg s$^{-1}$, calculated for both SEDs.\nThe powers have been calculated from\n\begin{equation}\nP_{\rm i} = \pi R^2 \Gamma^2 c U^\prime_{\rm i} \n\end{equation}\nwhere $U^\prime_{\rm i}$ is the energy density of the $i$-th component, calculated in the comoving frame\n(see e.g. 
Celotti \& Ghisellini 2008).\nAs discussed in Ghisellini et al. (2011), the power $P_{\rm r}$ dissipated by the jet\nto produce the radiation we see is almost model--independent, since it depends\nonly on the observed luminosity and on the bulk Lorentz factor $\Gamma$.\nThe power dissipated in other forms, on the other hand, depends on the amount of electrons ($P_{\rm e}$),\nprotons ($P_{\rm p}$), and magnetic field ($P_{\rm B}$) carried by the jet,\nwhich have been estimated by applying our specific model.\nFurthermore, the power carried in the bulk motion of protons requires \nknowledge of how many protons there are per emitting lepton.\nThe values given in Table 3 assume one proton per emitting lepton.\n\n\begin{table}\n\label{tab:power}\n\begin{center}\n\begin{tabular}{ccccc}\n\hline\hline\\\n\multicolumn{1}{c} {State}&\n\multicolumn{1}{c} {$\log P_{\rm r}$}&\n\multicolumn{1}{c} {$\log P_{\rm B}$}&\n\multicolumn{1}{c} {$\log P_{\rm e}$}&\n\multicolumn{1}{c} {$\log P_{\rm p}$}\\\n (1) & (2) &(3) & (4) & (5) \\\n\hline\\ \nLow & 46.04 & 45.54 & 45.00 & 47.56 \\\nHigh & 46.83 & 45.54 & 46.17 & 48.41 \\\n\hline \n\end{tabular}\end{center} \n\caption{Jet powers derived from the SED model for the low and high states of IGR J22517+2217. \nColumn: \n(1) State; \n(2)--(5) logarithm of the jet power in the form of radiation ($P_{\rm r}$), \nPoynting flux ($P_{\rm B}$), bulk motion of electrons ($P_{\rm e}$) and protons ($P_{\rm p}$,\nassuming one proton per emitting electron). Powers are in erg s$^{-1}$.}\n\end{table}\n\nFrom Table 3 we can see that the power\n$P_{\rm r}$ changes from $\sim0.15\times L_{\rm d}$ in the quiescent state \nto $P_{\rm r} \sim L_{\rm d}$ in the flaring state, i.e. in the latter case the jet requires a power comparable \nto the disk luminosity to produce the radiation we see.\n\nTanaka et al. 
(2011) reported a similar behavior, based on {\it Fermi}-LAT data, for the strong 2010 GeV flare of the blazar 4C+21.35:\nassuming similar efficiencies for the accretion disk and the jet, they estimated a jet intrinsic power $L_{jet}$\nchanging from $\sim0.1 L_{acc}$ in the quiescent state to $\sim1 L_{acc}$ ($L_{acc}$ being the intrinsic accretion power).\nThey argued that these results, combined with the findings of Abdo et al. (2010b) on several FSRQs detected by {\it Fermi}-LAT,\nsuggest a scenario in which the observed $\gamma$-ray variability of blazars is due to a varying jet power,\nwhich normally represents only a small fraction of the accretion power but, during major flares, can \ncarry away almost all the available accretion power.\nOur findings are quantitatively similar, and thus agree with this view.\n\n$P_{\rm r}$, however, is a {\it lower limit} to the total jet power.\nThe total jet power, dominated by the bulk motion of the protons associated with the emitting \nelectrons, is $P_{\rm jet} = P_{\rm B} + P_{\rm e} + P_{\rm p} = 3.6\times10^{47}$ \nand $2.6\times10^{48}$ erg s$^{-1}$, in the low and high state, respectively. \n$P_{\rm jet}$ is dominated by $P_{\rm p}$: if there is indeed one proton per\nemitting lepton, then the jet power is 3 to 30 times larger than the \naccretion luminosity. \n\nIt has been proposed that jets in luminous blazars may well be numerically dominated by pairs,\nwhile still being dynamically dominated by protons (Sikora \& Madejski 2000, Kataoka et al. 
2008).\nWe recomputed the jet power due to protons assuming an upper limit of 20 pairs per proton for this ratio.\nAbove this limit the jet is too ``light'' and Compton drag produces significant\ndeceleration (Ghisellini \& Tavecchio 2010).\nThe one-to-one ratio can be taken as the corresponding lower limit.\n\nThe lower limits on $P_{\rm p}$ obtained in this way are $1.8\times10^{46}$ and $1.3\times10^{47}$ erg s$^{-1}$ for the low and high states, respectively.\nTherefore, with this assumption, the jet power in electrons, protons and magnetic field becomes comparable to the radiation power.\nThis translates into a total jet power of $P_{\rm jet} = 2.2\times10^{46}$ and $1.4\times10^{47}$ erg s$^{-1}$, respectively.\n\nThe values obtained assuming one proton per lepton are extreme, even when compared with the distribution of $P_{\rm jet}$ and $L_{\rm d}$ \ncomputed, with the same assumptions, for a sample of high redshift {\it Fermi}\/LAT and BAT blazars in Ghisellini et al. (2011, Fig. 9), \nwhich lie in the ranges $P_{\rm jet}\simeq10^{46}-2\times10^{48}$ and $L_{\rm d}\simeq8\times10^{45}-2\times10^{47}$ erg s$^{-1}$. \nTo put the remarkable behavior of IGR J22517+2217 in context, \nwe note that similar values of jet power were reached only during the exceptional flare \nof 3C 454.3 in December 2009 (Ackermann et al. 2010; Bonnoli et al. 2011). 
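The jet-power totals quoted above follow from simple sums over the Table 3 entries; as an illustrative arithmetic sketch (not the paper's own code), one can verify both the one-proton-per-lepton totals and the pair-corrected values:

```python
# Illustrative check of the jet-power sums quoted in the text (not the paper's code).
# Table 3 gives log10 of the powers in erg/s.
logs = {
    "low":  {"P_B": 45.54, "P_e": 45.00, "P_p": 47.56},
    "high": {"P_B": 45.54, "P_e": 46.17, "P_p": 48.41},
}

for state, p in logs.items():
    P_B, P_e, P_p = 10**p["P_B"], 10**p["P_e"], 10**p["P_p"]
    P_jet = P_B + P_e + P_p             # one proton per emitting lepton
    P_jet_pairs = P_B + P_e + P_p / 20  # upper limit of 20 pairs per proton
    print(f"{state}: P_jet = {P_jet:.2e} erg/s, with pairs = {P_jet_pairs:.2e} erg/s")
```

The sums reproduce the quoted $3.6\times10^{47}$ and $2.6\times10^{48}$ erg s$^{-1}$ (and $2.2\times10^{46}$, $1.4\times10^{47}$ erg s$^{-1}$ with the pair correction) to within the rounding of the tabulated logarithms.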
Therefore, IGR J22517+2217 represents one of the ``monsters'' of the high-z Universe.\n\n\n\n\n\section{Summary and conclusion}\n\n\nThanks to a new {\it Suzaku} observation in the X-ray energy band, the {\it Fermi} upper limits in the 0.1-100 GeV band,\nthe flux selected spectra obtained through a re-analysis of the IBIS and BAT hard X-ray data, and other optical and radio archival data sets, \nwe were able to identify a strong flare episode in the high redshift, hard X-ray selected blazar IGR J22517+2217, which occurred in Jan 2005,\nfollowed by a period of quiescence extending up to the present day.\nTo model the overall SEDs of the source in the flare and quiescent states, we adopted a leptonic, one-zone synchrotron and inverse Compton model. \nThe optical\/UV emission is interpreted as thermal emission from the accretion disk, plus IC from the corona, and reprocessed emission from the BLR. \n\nThe curvature observed in the X-ray spectra had been proposed to be due to intrinsic, moderate absorption (N$_H\sim2\times10^{22}$ cm$^{-2}$).\nHowever, in the context of the broad band SED modeling proposed in this paper, it is naturally accounted for by an intrinsic softening of the \nEC component around $\sim10^{18}$ Hz. 
\n\nIn both states a very strong Compton dominance is observed, with the high energy hump (produced by EC) at least two orders of magnitude \nhigher than the low energy (synchrotron) one.\nThe high energy peak flux varies by a factor of 10 between the two states, while the high energy peak frequency remains almost constant, between $10^{20}$ and $10^{22}$ Hz.\nThe observed large Compton dominance constrains the value of the magnetic field,\nand hence the relevance of the self--Compton component, which is found to be negligible in both states.\nThe model explains the observed variability as a variation of the total number of emitting electrons (by a factor of $\sim7$) \ncombined with a change in the dissipation radius, which moves from within to outside the broad line region as the luminosity increases.\n\nIn the flaring state, the lower limit to the jet power, represented by the radiative component $P_{\rm r}$,\nis comparable to the disk luminosity.\nThe upper limit to the total jet power, dominated by the bulk motion of protons and estimated assuming one proton per electron, \nis more than $\sim$30 times larger than the accretion luminosity ($2.6\times10^{48}$ erg s$^{-1}$). \nSuch extreme values have been derived only recently for a handful of extreme, high redshift, hard-X\/soft-$\gamma$ ray selected FSRQs \nshowing similarly strong Compton dominance,\nand are comparable to the value reached by 3C 454.3 during its exceptional 2009 flare.\n\n\n\n\section*{Acknowledgements}\n\nWe thank the referee for useful comments that improved the paper.\n\nPartial support from the Italian Space Agency (ASI-INAF contract \nASI\/INAF\/I\/009\/10\/0) is acknowledged. 
\n\nThis research has made use of the NASA\/IPAC Extragalactic Database (NED) \nwhich is operated by the Jet Propulsion Laboratory, California Institute of Technology, \nunder contract with the National Aeronautics and Space Administration.\n\nThe \\textit{Fermi} LAT Collaboration acknowledges generous ongoing support\nfrom a number of agencies and institutes that have supported both the\ndevelopment and the operation of the LAT as well as scientific data analysis.\nThese include the National Aeronautics and Space Administration and the\nDepartment of Energy in the United States, the Commissariat \\`a l'Energie Atomique\nand the Centre National de la Recherche Scientifique \/ Institut National de Physique\nNucl\\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana\nand the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,\nCulture, Sports, Science and Technology (MEXT), High Energy Accelerator Research\nOrganization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and\nthe K.~A.~Wallenberg Foundation, the Swedish Research Council and the\nSwedish National Space Board in Sweden.\n\nAdditional support for science analysis during the operations phase is gratefully\nacknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\\'Etudes Spatiales in France.\n\n\n\n\n\n\n\n\\begin{appendix}\n\n\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeapf b/data_all_eng_slimpj/shuffled/split2/finalzzeapf new file mode 100644 index 0000000000000000000000000000000000000000..81b7acc2ee2a54b8b6ce8757dcb2dc18460ea7bb --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeapf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nFor a conventional orthogonal class system, it is believed that an arbitrarily weakly uncorrelated diagonal disorder in one and two dimension \\cite{Economou} can result in the 
Anderson localization \cite{Anderson1957}.\nIn three dimensions, there exists a mobility edge $E_c$ which separates localized states from extended states \cite{Economou1972}. When the eigenenergies approach the mobility edge $E_c$, the localization length of the localized states diverges. Interestingly, in the presence of off-diagonal uncorrelated disorder, a one-dimensional system can have a singular density of states near zero energy \cite{Dyson,Eggarter1978,Balents1997}, which also results in an anomalous localization \cite{Theodorou1976,Antoaiou1977} in which the localization length is proportional to the square root of the system size \cite{Fleishman1977,Inui1992,Izrailev2012}.\nIf the energy deviates from zero, the eigenstates are usually localized.\nIf the off-diagonal disorder is correlated, the system can have a localized-extended transition \cite{Cheraghchi2005}.\nIn the presence of both diagonal and correlated off-diagonal disorders, a one-dimensional system can also have extended states \cite{Zhangwei2004}.\nRecently, a so-called mosaic lattice model with diagonal quasiperiodic disorder has been proposed \cite{Wangyucheng2020}. It is found that this model has mobility edges, which can be exactly obtained with Avila's theory \cite{Liu2021,Avila2015}.\n\n\n\n\nGiven the above anomalous localization properties of the model with purely off-diagonal uncorrelated disorder,\na natural question arises: what happens if the off-diagonal hopping is quasiperiodic?\n One may wonder whether there exist mobility edges for off-diagonal quasiperiodic disorder (hopping), and what the localization properties of the eigenstates are.\n\n\n\n\n\n\n\n\n\n\n\n\nIn this work, we try to answer the above questions by exploring a quasiperiodic off-diagonal disorder model with mosaic modulation. 
The model is\n \begin{align}\label{2}\nV_{i,i+1}\psi(i+1)+V_{i,i-1}\psi(i-1)=E\psi(i),\n\end{align}\nwhere\n \begin{align}\nV_{i,i+1}=V_{i+1,i}=\n\left\{\begin{array}{cc}\nt, & for \ i\neq 0 \ mod \ \kappa\\\n\frac{2\lambda \cos(2\pi \beta i+\phi)}{\sqrt{1-\tau \cos^2(2\pi \beta i+\phi)}},& for \ i= 0 \ mod \ \kappa\n \end{array}\right.\n\end{align}\nHere $t>0$ is a constant hopping strength, $\lambda$ describes the quasiperiodic hopping strength, the positive integer $\kappa$ is the mosaic period, $\beta$ is an irrational number, and the parameter $\tau$ is a real number.\nIn this work, we only consider $\tau\leq 1$.\nIn addition, we remark that there are no extended states in the above model Eq.(1). This is because, by \cite{Barry1989} (or the mechanism in \cite{xwzl}), the absolutely continuous spectrum, which corresponds to extended states, is empty, since there exists a sequence $\{n_k\}$ such that $V_{n_k,n_k+1}\rightarrow0$. Thus the mobility edges (if any) would separate localized states from critical states.\nThroughout this paper, we take $\beta=(\sqrt{5}-1)\/2$ and use units of $t=1$ for $\kappa>1$ (or $\lambda=1$ for $\kappa=1$).\n\n\n\n\n\nIt is found that the parity of the mosaic period $\kappa$ has an important influence on the localization of eigenstates near zero energy. Specifically, if the mosaic period $\kappa$ is odd, there is no Anderson localization even for arbitrarily strong hopping strength. 
For an even mosaic period, in contrast, the system undergoes Anderson localization as the quasiperiodic hopping strength increases.\nIn addition, the Lyapunov exponent $\gamma(E)$ and the mobility edges are exactly obtained with Avila's theory.\nFrom the Lyapunov exponent, we find that critical regions appear in the parameter plane.\n In comparison with the localized states, the spatial extensions of the eigenstates and their fluctuations in the critical region are much larger.\nNear the localized-critical transition points ($E_c$), the localization length diverges, i.e.,\n\begin{align}\label{10}\n\xi(E)\equiv1\/\gamma(E)\propto|E-E_c|^{-\nu}\rightarrow\infty, \ \ as \ E\rightarrow E_c,\n\end{align}\nwhere the critical index \cite{Huckestein1990} $\nu=1$.\nFinally, we show that the states with different energies $E$ can be systematically classified by the Lyapunov exponent and Avila's acceleration.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe work is organized as follows. First, we discuss the localization properties of zero-energy states for both odd and even $\kappa$ in Sec.\textbf{II}. In Sec.\textbf{III}, the Lyapunov exponent is calculated. Next, with the Lyapunov exponent, we determine the mobility edges and the critical region in Sec.\textbf{IV}. In addition, Avila's acceleration is also calculated.\n Finally, a summary is given in Sec.\textbf{V}.\n\n\n\n\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=1.0\columnwidth]{bu0.eps}\n\end{center}\n\caption{ The average growth\/decay rate $f$ of the zero-energy wave function for $\kappa=2$ and $\tau=-2$. At the critical hopping strengths $\lambda=\lambda_c\simeq\pm1.366t$, $f$ is exactly zero (indicated by black arrows in the figure). }\n\label{bu0}\n\end{figure}\n\n\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=1.0\columnwidth]{bu1.eps}\n\end{center}\n\caption{Several typical zero-energy wave functions of critical states and localized edge states. 
Panels (a) and (c) are critical zero-energy wave functions where $f=0$. Panel(b) [(d)] is localized right (left)-hand edge states where $f>0$ ($f<0$).}\n\\end{figure}\n\n\n\\section{localization of zero-energy state}\nIn this section, we discuss the influences of parity of integer $\\kappa$ on the localization properties of the zero energy states.\nFor the quasi-periodic model Eq.(1), we note that if one applies a transform $\\psi(n)\\rightarrow(-1)^n\\psi(n)$ in the Eq.(1), the energy would change a sign, i.e., $E\\rightarrow -E$.\nDue to the chiral (sublattice) symmetry, the energy $E_n$ and $-E_n$ appear in pairs \\cite{Cheraghchi2005}.\nIn addition, the number of eigenenergies is same with the lattice site number. If the total lattice site number $N$ is an odd number, then there would be one zero energy state at least.\nIn the following, we find that when $N$ is even, usually there is no zero-energy eigenstates. In this section, we assume the total lattice number $N$ is odd, then the zero-energy state always exists.\n\n\n\nFurthermore, we assume the lattice sites of system are labeled with number $i=1,2,3,...,2m, N=2m+1$, where $m$ is a positive integer.\nStarting from wave functions of left-hand end site $\\psi(i=1)=1$ [and $\\psi(i=0)=0$], by Eq.(1),\nthe wave function of zero-energy state can be written as\n\\begin{align}\n&\\psi(N=2m+1)=\\frac{V_{2m,2m-1}V_{2m-2,2m-3}...V_{4,3}V_{2,1}}{V_{2N,2N+1}V_{2m-2,2m-1}...V_{4,5}V_{2,3}}\\psi(i=1),\\notag\\\\\n&=\\frac{v(2m-1)v(2m-3)...v(3)v(1)}{v(2m)v(2m-2)...v(4)v(2)}.\n\\end{align}\nIn the above equation, we set $v(i)\\equiv v(i,\\phi)\\equiv V_{i,i+1}$ and use the relation $V_{i+1,i}=V_{i,i+1}$.\nThe average growth\/decreasing ratio of wave function is\n\\begin{align}\n&f=\\lim_{m\\rightarrow\\infty}\\frac{1}{2m}ln(|\\frac{\\psi(N=2m+1)}{\\psi(i=1)}|)\\notag\\\\\n&=\\lim_{m\\rightarrow\\infty}\\frac{1}{2m}ln(|\\frac{v(2m-1)v(2m-3)...v(3)v(1)}{v(2m)v(2m-2)...v(4)v(2)}|).\n\\end{align}\nIf $f\\geq0$, $f$ would be 
the Lyapunov exponent $\\gamma(E)$ (see Sec. \\textbf{III}).\n\n\n\n\\subsection{$\\kappa$ is a positive even integer}\n When $\\kappa$ is a positive even integer, we can assume $N=n\\kappa+1$, where $n$ is an integer.\nBased on Eqs. (1) and (5), due to the ergodicity of the map $\\phi\\longrightarrow 2\\pi\\beta i+\\phi$, the average growth\/decreasing rate of the wave function reduces to\n\\begin{align}\n&f=\\lim_{n\\rightarrow\\infty}\\frac{1}{n\\kappa}ln(\\frac{|v(\\kappa-1)v(2\\kappa-1)...v(n\\kappa-1)|}{|v(\\kappa)v(2\\kappa)...v(n\\kappa)|}),\\notag\\\\\n&=\\frac{-1}{\\kappa\\times 2\\pi}[\\int_{0}^{2\\pi}d\\phi ln(|v(\\kappa,\\phi)\/t|)],\\notag\\\\\n&=\\frac{-1}{\\kappa}ln(\\frac{2|\\lambda\/t|}{1+\\sqrt{1-\\tau}}).\n\\end{align}\n\n\nIt is shown that when $\\kappa$ is a positive even integer, for a generic $\\lambda$, $f$ is usually not zero. Then the zero-energy state is a localized state situated at the right-hand edge ($f>0$) or the left-hand edge ($f<0$) of the lattice (see Figs.1 and 2). So for general parameters, the zero-energy state would be a localized edge state.\nOnly when $f$ is exactly vanishing, i.e., $f=0$,\n\\begin{align}\n&\\rightarrow f=\\frac{-1}{\\kappa}ln(\\frac{2|\\lambda\/t|}{1+\\sqrt{1-\\tau}})=0\\notag\\\\\n&\\rightarrow |\\lambda\/t|=|\\lambda_c\/t|\\equiv\\frac{1+\\sqrt{1-\\tau}}{2},\n\\end{align}\n the zero-energy state would be a critical state (see Figs.1 and 2).\nThe average growth\/decreasing rate $f$ for $\\kappa=2$ and $\\tau=-2$ is reported in Fig.1. 
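The closed form in the last line of the equation above can be checked by evaluating the phase integral with a simple quadrature; a minimal Python sketch (the midpoint grid and the test values are our own choices, with $t=1$):

```python
import numpy as np

def f_closed(lam, tau, kappa=2, t=1.0):
    # closed form: f = -(1/kappa) ln( 2|lam/t| / (1 + sqrt(1 - tau)) )
    return -np.log(2.0 * abs(lam / t) / (1.0 + np.sqrt(1.0 - tau))) / kappa

def f_quadrature(lam, tau, kappa=2, t=1.0, n=400001):
    # midpoint evaluation of -(1/(kappa*2*pi)) * int_0^{2pi} ln|v(kappa,phi)/t| dphi
    phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    v = 2.0 * lam * np.cos(phi) / np.sqrt(1.0 - tau * np.cos(phi) ** 2)
    return -np.mean(np.log(np.abs(v / t))) / kappa

tau = -2.0
lam_c = (1.0 + np.sqrt(1.0 - tau)) / 2.0   # predicted critical hopping, ~1.366
print(f_closed(lam_c, tau))                 # vanishes at the critical point
print(f_quadrature(0.8, tau) - f_closed(0.8, tau))  # quadrature agrees with closed form
```

The sign of $f$ flips as $|\\lambda\/t|$ crosses $(1+\\sqrt{1-\\tau})\/2$, which is exactly the edge-state\/critical-state boundary discussed here.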
At the critical strength $\\lambda_c=\\pm 1.366t$, $f=0$.\nWhen $\\lambda$ approaches the critical $\\lambda_c$, the localization length can be arbitrarily large, i.e., $\\xi\\equiv 1\/|f|\\propto 1\/|\\lambda-\\lambda_c|\\rightarrow\\infty$.\n\n\n\n\nIn order to investigate the properties of zero-energy states, we also numerically solve Eq.(1) for $\\kappa=2$, $\\tau=-2$, and lattice size $N=2\\times500+1$.\n Several typical wave functions for localized states and critical states are reported in Fig.2.\n We know that the wave function of an extended state usually spreads over the whole lattice, while a localized state occupies only a finite number of lattice sites.\nThe critical state consists of several disconnected patches and interpolates between the localized and extended states \\cite{yicai,Liu2022}.\nFrom Fig.2, we see that the zero-energy wave function with $f=0$ is a critical state. The wave functions with non-vanishing $f$ correspond to localized edge states.\nWhen $f>0$, the state is at the right-hand edge, while for $f<0$, it is at the left-hand edge.\n\n\n\n\n\\subsection{$\\kappa$ is a positive odd integer}\nWhen $\\kappa$ is a positive odd integer, we can assume $N=2n\\kappa+1$, where $n$ is an integer.\n Similarly, $f$ can be written as\n\\begin{align}\n&f=\\lim_{n\\rightarrow\\infty}\\frac{1}{2n\\kappa}ln(\\frac{|v(\\kappa)v(3\\kappa)...v((2n-1)\\kappa)|}{|v(2\\kappa)v(4\\kappa)...v(2n\\kappa)|}),\\notag\\\\\n&=\\frac{1}{2\\kappa\\times 2\\pi}[\\int_{0}^{2\\pi}d\\phi ln(|v(\\kappa,\\phi)|)-\\int_{0}^{2\\pi}d\\phi ln(|v(2\\kappa,\\phi)|)],\\notag\\\\\n&=0.\n\\end{align}\nIt is shown that when $\\kappa$ is a positive odd integer, the average growth\/decreasing ratio of the zero-energy wave function is exactly zero.\nSo if $\\kappa$ is odd, all the zero-energy states are critical states.\n\n\n Some interesting even-odd effects of the lattice site number $N$ have been investigated in random off-diagonal disorder models.\nIt is found that there exists a delocalization 
transition only when the lattice size $N$ is odd \\cite{Brouwer1998}. In addition, the localization length of the zero-energy state depends sensitively on boundary conditions \\cite{Brouwer2002}, and it can be arbitrarily large.\n\n\n\n\n The above discussions show that the parity of the integer $\\kappa$ has important influences on the localization of zero-energy states.\n An odd integer $\\kappa$ results in a critical zero-energy state, while an even $\\kappa$ usually gives localized edge states (see Fig.2).\n We notice that the above discussion can also be applied to other forms of quasiperiodic hopping with mosaic modulations.\n\n\nIn the following text, we will show that the above influences of the parity of $\\kappa$ are also transmitted to other eigenstates near zero energy.\nTo be specific, when $\\kappa$ is a positive odd integer, for a given hopping strength $\\lambda$, the eigenstates near zero energy are always critical (see Sec.IV). So if the energy is sufficiently close to zero, there is no Anderson localization transition for odd $\\kappa$.\nWhen $\\kappa$ is a positive even integer, the eigenstates near zero energy undergo an Anderson localization transition as the quasiperiodic hopping strength increases.\nThen the system has localized states near zero energy.\n\n\n\n\n\\section{The Lyapunov exponent }\n When $E\\neq0$, the localization properties of eigenstates can be characterized by the Lyapunov exponent. 
In this section, we calculate the Lyapunov exponent with the transfer matrix method \\cite{Sorets1991,Davids1995}.\n\nFirst of all, we assume the system is a half-infinite lattice with left-hand end sites $i=0$ and $i=1$.\nFurther using Eq.(1), starting with $\\psi(0)$ and $\\psi(1)$ at the left-hand end sites, the wave function can be obtained with the relation\n\\begin{align}\n\\Psi(i)=T(i)T(i-1)...T(2)T(1)\\Psi(0),\n\\end{align}\nwhere the transfer matrix is\n\\begin{align}\\label{V}\nT(n)\\equiv\\left[\\begin{array}{ccc}\n\\frac{E}{V_{n,n+1}} &-\\frac{V_{n,n-1}}{V_{n,n+1}} \\\\\n1&0\\\\\n \\end{array}\\right],\n\\end{align}\nand\n\\begin{align}\n\\Psi(n)\\equiv\\left[\\begin{array}{ccc}\n\\psi(n+1) \\\\\n\\psi(n)\\\\\n \\end{array}\\right].\n\\end{align}\n\nFor a given parameter $E$, as $n$ increases, we can assume that the wave function grows roughly according to an exponential law \\cite{Ishii,Furstenberg}, i.e.,\n\\begin{align}\n\\psi(n)\\sim e^{\\gamma(E) n}, &\\ as \\ n\\rightarrow \\infty,\n\\end{align}\nwhere $\\gamma(E)\\geq0$ is the Lyapunov exponent, which measures the average growth rate of the wave function. If the parameter $E$ is not an eigen-energy of $H$, the Lyapunov exponent would be positive, i.e., $\\gamma(E)>0$ \\cite{Jonhnson1986}.\nWhen $E$ is an eigen-energy of the system, the Lyapunov exponent can be zero or positive \\cite{yicai}.\nFor critical states, the Lyapunov exponent $\\gamma(E)\\equiv0$, while for localized states $\\gamma(E)>0$.\n\nConsequently, the Lyapunov exponent can be written as\n\\begin{align}\n&\\gamma(E)=\\lim_{L \\rightarrow \\infty }\\frac{\\log(|\\Psi(L)|\/|\\Psi(0)|)}{L}\\notag\\\\\n&=\\lim_{L\\rightarrow \\infty}\\frac{\\log(|T(L)T(L-1)...T(2)T(1)\\Psi(0)|\/|\\Psi(0)|)}{L},\n\\end{align}\nwhere $L$ is a positive integer and\n\\begin{align}\n|\\Psi(n)|=\\sqrt{|\\psi(n+1)|^2+|\\psi(n)|^2}.\n\\end{align}\n\n\n\nIn the following, we view the adjacent $\\kappa$ lattice sites as a ``super unit cell\". 
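The limit defining $\\gamma(E)$ above can also be evaluated directly by renormalizing the transfer-matrix product at every step. A minimal Python sketch follows (our own choices: $t=1$, $\\phi=0.3$, golden-ratio $\\beta$; as an independent check we use $E=0$, $\\tau=0$, $\\kappa=2$, $\\lambda=2t$, for which the zero-energy analysis of Sec. II gives a growth rate $|f|=\\frac{1}{2}ln|\\lambda\/t|\\approx0.347$):

```python
import numpy as np

BETA = (np.sqrt(5.0) - 1.0) / 2.0   # inverse golden ratio

def V(n, lam, tau, kappa, phi, t=1.0):
    # hopping V_{n,n+1}: quasiperiodic on every kappa-th bond, constant t otherwise
    if n % kappa == 0:
        c = np.cos(2.0 * np.pi * BETA * n + phi)
        return 2.0 * lam * c / np.sqrt(1.0 - tau * c ** 2)
    return t

def lyapunov(E, lam, tau, kappa, L=20000, phi=0.3):
    # renormalized product of transfer matrices T(L)...T(1) acting on a generic Psi(0)
    psi = np.array([1.0, 1.0])
    log_norm = 0.0
    for n in range(1, L + 1):
        Vn = V(n, lam, tau, kappa, phi)
        T = np.array([[E / Vn, -V(n - 1, lam, tau, kappa, phi) / Vn],
                      [1.0, 0.0]])
        psi = T @ psi
        s = np.linalg.norm(psi)
        log_norm += np.log(s)    # renormalize to avoid overflow/underflow
        psi /= s
    return log_norm / L

print(lyapunov(0.0, 2.0, 0.0, 2))   # close to 0.5*ln(2) ~ 0.347
```

Keeping the running log-norm instead of the raw product is what makes the evaluation stable for large $L$.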
Next we assume $L=m\\kappa+1$ ($m$ is an integer) and $|\\Psi(0)|\/|\\Psi(1)|$ is a finite non-zero real number, the Lyapunov exponent can be reduced into\n\\begin{align}\n&\\gamma(E)=\\lim_{L \\rightarrow \\infty }\\frac{\\log(|\\Psi(L)|\/|\\Psi(0)|)}{m\\kappa+1}\\notag\\\\\n&=\\lim_{m\\rightarrow \\infty}\\frac{\\log(|T(m\\kappa+1)T(m\\kappa)...T(2)\\Psi(1)|\/|\\Psi(0)|)}{m \\kappa}\\notag\\\\\n&=\\lim_{m\\rightarrow \\infty}\\frac{\\log(|T(m\\kappa+1)T(m\\kappa)...T(2)\\Psi(0)|\/|\\Psi(0)|)}{m \\kappa}\\notag\\\\\n&=\\frac{1}{\\kappa}\\lim_{m\\rightarrow \\infty}\\frac{\\log(|CT_m.CT(m-1)...CT_1\\Psi(0)|\/|\\Psi(0)|)}{m}\\notag\\\\\n\\end{align}\nwhere cluster transfer matrix $CT_n$ for $n-th$ ``super unit cell\" is defined as\n\\begin{align}\n&CT_n\\equiv T(n\\kappa+1)T(n\\kappa)...T((n-1)\\kappa+3)T((n-1)\\kappa+2)\\notag\\\\\n&=\\left[\\begin{array}{ccc}\nE &-v(n\\kappa) \\\\\n1&0\\\\\n \\end{array}\\right]\\left[\\begin{array}{ccc}\n\\frac{E}{v(n\\kappa)} &-\\frac{1}{v(n\\kappa)} \\\\\n1&0\\\\\n \\end{array}\\right]\\left[\\begin{array}{ccc}\nE &-1 \\\\\n1&0\\\\\n \\end{array}\\right]^{\\kappa-2}\\notag\\\\\n &=\\frac{\\left[\\begin{array}{ccc}\nE^2-(E^2\\tau+4\\lambda^2)cos^2(\\theta_n) &-E(1-\\tau cos^2(\\theta_n) ) \\\\\nE(1-\\tau cos^2(\\theta_n) )&-1+\\tau cos^2(\\theta_n) \\\\\n \\end{array}\\right]}{2\\lambda cos(\\theta_n)\\sqrt{1-\\tau cos^2(\\theta_n)}}\\notag\\\\\n &\\times \\left[\\begin{array}{ccc}\nE &-1 \\\\\n1&0\\\\\n \\end{array}\\right]^{\\kappa-2}\n\\end{align}\nwhere $\\theta_n=2\\pi\\beta n\\kappa +\\phi$.\n\n\n\nThe cluster transfer matrix Eq.(16) can be further\nwritten as a product of two parts, i.e., $CT_n= A_nB_n$, where\n \\begin{align}\n&A_n=\\frac{1}{2\\lambda cos(\\theta_n)\\sqrt{1-\\tau cos^2(\\theta_n)}},\\notag\\\\\n&B_n=\\left[\\begin{array}{ccc}\nB_{11} &B_{12} \\\\\nB_{21}&B_{22}\\\\\n \\end{array}\\right]\\left[\\begin{array}{ccc}\nE &-1 \\\\\n1&0\\\\\n \\end{array}\\right]^{\\kappa-2},\n\\end{align}\nwith 
$B_{11}=E^2-(E^2\\tau+4\\lambda^2)cos^2(\\theta_n)$, $B_{21}=-B_{12}=E(1-\\tau cos^2(\\theta_n))$ and $B_{22}=-1+\\tau cos^2(\\theta_n)$.\nNow the Lyapunov exponent is\n\\begin{align}\n\\gamma(E)=\\frac{1}{\\kappa}[\\gamma_A(E)+\\gamma_B(E)],\n\\end{align}\nwhere\n\\begin{align}\n&\\gamma_A(E)=\\lim_{m\\rightarrow \\infty}\\frac{\\log(|A(m)A(m-1)...A(2)A(1)|)}{m}.\n\\end{align}\nand $\\gamma_B(E)$ are given by\n\\begin{align}\n&\\gamma_B(E)=\\lim_{m\\rightarrow \\infty}\\frac{\\log(|B(m)B(m-1)...B(2)B(1)\\Psi(0)|\/|\\Psi(0)|)}{m}.\n\\end{align}\n\n\nIn the following, we would use Avila's global theory \\cite{Avila2015} to get the Lyapunov exponent and the Avila's acceleration (see next section).\nFollowing Refs.\\cite{Liu2021,YONGJIAN1}, first of all, we complexify the phase $\\phi\\rightarrow \\phi+i \\epsilon$ with $\\epsilon >0$ , e.g., $A_n=\\frac{1}{2\\lambda cos(2\\pi \\beta n\\kappa +\\phi+i\\epsilon)\\sqrt{1-\\tau cos^2(2\\pi\\beta n\\kappa +\\phi+i\\epsilon)}}$.\n In addition, due to the ergodicity of the map $\\phi\\longrightarrow 2\\pi\\beta n+\\phi$, we can write $\\gamma_A(E)$ as an integral over phase $\\phi$ \\cite{Longhi2019}, consequently\n\\begin{align}\n&\\gamma_A(E,\\epsilon)\\notag\\\\\n&=\\frac{1}{2\\pi}\\int_{0}^{2\\pi} d\\phi \\ln(|\\frac{1}{2\\lambda cos(\\phi+i\\epsilon)\\sqrt{1-\\tau cos^2(\\phi+i\\epsilon)}}|)\\notag\\\\\n&=-\\epsilon+\\ln(|\\frac{2}{\\lambda(1+\\sqrt{1-\\tau})}|),\n\\end{align}\nfor $\\epsilon< \\ln |\\frac{2+2\\sqrt{1-\\tau}-\\tau}{\\tau}|$.\n\n\n\nNext we take $\\epsilon\\rightarrow\\infty$\n \\begin{align}\n&B_n=\\frac{e^{-i(4\\pi\\beta n \\kappa+\\phi)+2\\epsilon}}{4}\\left[\\begin{array}{ccc}\n-( E^2 \\tau+4\\lambda^2) &E\\tau \\\\\n-E\\tau & \\tau\\\\\n \\end{array}\\right]\\left[\\begin{array}{ccc}\nE & -1 \\\\\n1& 0\\\\\n \\end{array}\\right]^{\\kappa-2}\\notag\\\\\n &+O(1).\n\\end{align}\nThen for large $\\epsilon$, i.e., $\\epsilon\\gg1$, $\\gamma_B(E,\\epsilon)$ is determined by the largest eigenvalue (in 
absolute value) of $B_n$, i.e.,\n\\begin{align}\n\\gamma_B(E,\\epsilon)=2\\epsilon+\\ln(|\\frac{|P|+\\sqrt{P^2+16\\lambda^2\\tau}}{8}|),\n\\end{align}\nwhere\n\\begin{align}\nP=(\\tau E^2+4\\lambda^2)a_{\\kappa}+\\tau a_{\\kappa-2}-2\\tau E a_{\\kappa-1}.\n\\end{align}\nand $a_{\\kappa}$, is given by\n\\begin{align}\na_\\kappa=\\frac{1}{\\sqrt{E^2-4}}[(\\frac{E+\\sqrt{E^2-4}}{2})^{\\kappa-1}-(\\frac{E-\\sqrt{E^2-4}}{2})^{\\kappa-1}].\n\\end{align}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F211.eps}\n\\end{center}\n\\caption{ Lyapunov exponents for $\\kappa=3$, $\\tau=-2$, and $\\lambda\/t=0.5,1.5, 2.5$. The discrete points are the numerical results for all the eigenenergies. The solid lines are given by Eq.(27). The mobility edges for $\\lambda\/t=2.5$ are indicated by black arrows. Near mobility edges of the localized-critical transition, the Lyapunov exponent $\\gamma(E)\\propto |E-E_c|$ approaches zero (as $E\\rightarrow E_c$). The critical index of the localization length $\\nu=1$. }\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F221.eps}\n\\end{center}\n\\caption{ Lyapunov exponents for $\\kappa=3$ and $\\tau=1\/2$, and $\\lambda\/t=0.5,1.0, 1.5$. The discrete points are the numerical results for all the eigenenergies. The solid lines are given by Eq.(27). The mobility edges for $\\lambda\/t=0.5$ are indicated by black arrows. Near mobility edges of the localized-critical transition (e.g., $E_c\/t\\simeq\\pm \\sqrt{2}$ for $\\lambda\/t=0.5$), the Lyapunov exponent $\\gamma(E)\\propto |E-E_c|$ approaches zero (as $E\\rightarrow E_c$). 
The critical index of the localization length is $\\nu=1$.}\n\\end{figure}\nWhen $\\epsilon$ is very small, using the facts that $\\gamma(E,\\epsilon)\\geq0$ and $\\gamma_B(E,\\epsilon)$ is a convex and piecewise linear function of $\\epsilon$ \\cite{Avila2015,YONGJIAN1}, one can get\n\\begin{align}\n&\\gamma(E,\\epsilon)=\\frac{1}{\\kappa}Max\\{0,\\gamma_A(E,\\epsilon)+\\gamma_B(E,\\epsilon)\\},\\notag\\\\\n&=\n\\frac{1}{\\kappa}Max\\{0,\\epsilon+\\ln(|\\frac{|P|+\\sqrt{P^2+16\\lambda^2\\tau}}{4\\lambda(1+\\sqrt{1-\\tau})}|)\\}.\n\\end{align}\nFurthermore, when $\\epsilon=0$, the Lyapunov exponent $\\gamma(E)\\equiv\\gamma(E,\\epsilon=0)$ is\n\\begin{align}\n\\gamma(E)=\\frac{1}{\\kappa}Max\\{\\ln|\\frac{|P(E)|+\\sqrt{P^2(E)+16\\lambda^2\\tau}}{4\\lambda(1+\\sqrt{1-\\tau})}|,0 \\},\n\\end{align}\nwhere\n\\begin{align}\nP(E)=(\\tau E^2+4\\lambda^2)a_{\\kappa}+\\tau a_{\\kappa-2}-2\\tau E a_{\\kappa-1}\n\\end{align}\nand\n\\begin{align}\na_\\kappa=\\frac{1}{\\sqrt{E^2-4}}[(\\frac{E+\\sqrt{E^2-4}}{2})^{\\kappa-1}-(\\frac{E-\\sqrt{E^2-4}}{2})^{\\kappa-1}].\n\\end{align}\n\nWhen $\\tau=0$, then $P(E)=4\\lambda^2a_{\\kappa}$, and\n\\begin{align}\n\\gamma(E)=\\frac{1}{\\kappa}Max\\{\\ln|\\lambda a_{\\kappa}|,0 \\}.\n\\end{align}\n\n\n\nThe above formula Eq.(27) has been verified by our numerical results (see Figs.3 and 4).\nIn our numerical calculations, in order to get the correct Lyapunov exponents, on the one hand, the integer $L$ should be sufficiently large.\nOn the other hand, $L$ should also be much smaller than the system size $N$, i.e., $1\\ll L\\ll N$.\n\n To be specific, taking $\\kappa=3$, $\\tau=-2,1\/2$, and system size $N=3\\times1000$, we get the $N=3\\times1000$ eigenenergies and eigenstates.\n Then we calculate the Lyapunov exponents numerically for all the eigenenergies [see the several sets of discrete points in Figs.3 and 4].\n In our numerical calculation, we take $L=200$, phase $\\phi=0$, $\\psi(0)=0$ and $\\psi(1)=1$ in Eq.(13).\n The solid lines of Figs.3 and 4 
are given by Eq.(27) with the same parameters. It is shown that almost all discrete points fall onto the solid lines.\n\nHowever, we also note that there are some discrete points of localized states which are not on the solid lines. This is because these localized wave functions are too near the left-hand boundary of the system.\n\n\n\\section{mobility edge and critical region }\nIn this section, based on the Lyapunov exponent formula Eq.(27), we determine the mobility edges and the critical region.\n By Eq.(27), the mobility edges $E_c$, which separate the localized states from the critical states, are determined by\n\\begin{align}\n\\gamma(E=E_c)=\\frac{1}{\\kappa}\\ln|\\frac{|P(E)|+\\sqrt{P^2(E)+16\\lambda^2\\tau}}{4\\lambda(1+\\sqrt{1-\\tau})}|=0,\n\\end{align}\nthen\n\\begin{align}\n|P(E=E_c)|=4|\\lambda|\\sqrt{1-\\tau}.\n\\end{align}\nThe critical region, which consists of critical states, is given by\n\\begin{align}\n|P(E)|<4|\\lambda|\\sqrt{1-\\tau}.\n\\end{align}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F11.eps}\n\\end{center}\n\\caption{ Phase diagram in the $(\\lambda, E)$ plane for $\\kappa=2$ and $\\tau=-2$. When $E$ is near zero, there exist localized-critical transitions. The blue solid lines are the phase boundaries (mobility edges $E_c$), which are given by Eq.(32).\n Standard deviations are represented with different colors. 
 }\n\\end{figure}\n\n\nBy expanding the Lyapunov exponent near the mobility edges $E_c$, we get\n\\begin{align}\n\\gamma(E)\\propto |E-E_c|\\rightarrow0, \\ as \\ E \\rightarrow E_c.\n\\end{align}\nThen the localization length is\n\\begin{align}\n\\xi(E)\\equiv1\/\\gamma(E)\\propto |E-E_c|^{-1}\\rightarrow\\infty,\\ as \\ E \\rightarrow E_c.\n\\end{align}\nIts critical index is $1$ [see the finite slopes of the solid lines near $E_c$ in Figs.3 and 4].\n\n\n\nIn order to further distinguish the localized states from the critical states, we also numerically calculate the standard deviation of the coordinates of the eigenstates \\cite{Boers2007}\n\\begin{align}\\label{37}\n&\\sigma=\\sqrt{\\sum_{i}(i-\\bar{i})^2|\\psi(i)|^2},\n\\end{align}\nwhere the average value of the coordinate $\\bar{i}$ is\n\\begin{align}\n\\bar{i}=\\sum_{i}i|\\psi(i)|^2.\n\\end{align}\nThe standard deviation $\\sigma$ describes the spatial extension of the wave function in the lattice.\nThe phase diagrams in the $[\\lambda (\\tau)-E]$ plane are reported in Figs.5, 6, 7 and 8, where the standard deviations of the coordinates are represented with different colors.\nWe can see that when the states are localized, the standard deviations of the coordinates are very small, while for critical states the standard deviations are very large.\n\n\n\nFrom Figs. 5, 7, and 8, we see that when $\\kappa>1$, there are $\\kappa-1$ loops for small hopping strength $\\lambda\/t$ in the phase diagram.\nWithin the loops, the Lyapunov exponent is positive, i.e., $\\gamma(E)>0$. Thus, if there were eigenstates in the loops, these states would be localized.\nHowever, numerical results show that there are no eigenenergies falling into the loops.\n\nWhen $\\lambda\/t=0$, we see the system has $\\kappa$ eigenenergies with large degeneracies. This is because when $\\lambda\/t=0$, the system is in fact composed of many identical independent unit cells with $\\kappa$ lattice sites. 
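As a sanity check on this diagnostic, the standard deviation $\\sigma$ defined above separates localized from extended profiles by orders of magnitude, which is what the color scales of the phase diagrams exploit; a toy Python sketch (the two profiles are illustrative, not eigenstates of Eq.(1)):

```python
import numpy as np

def coord_std(psi):
    # sigma = sqrt( sum_i (i - ibar)^2 |psi(i)|^2 ), with ibar = sum_i i |psi(i)|^2
    p = np.abs(psi) ** 2
    p = p / p.sum()               # normalize the state
    i = np.arange(1, len(psi) + 1)
    ibar = np.sum(i * p)
    return np.sqrt(np.sum((i - ibar) ** 2 * p))

N = 3000
i = np.arange(1, N + 1)
localized = np.exp(-np.abs(i - N // 2) / 10.0)   # exponential profile, xi = 10
extended = np.ones(N)                             # uniform profile
print(coord_std(localized))   # of order the localization length (a few sites)
print(coord_std(extended))    # ~ N/sqrt(12), of order the lattice size
```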
Within a unit cell, there are exactly $\\kappa$ eigenenergies. As $\\lambda\/t$ increases, the degeneracies are lifted and the system enters the critical region.\nIn addition, we also find that the parity of $\\kappa$ has important effects on the phase diagram.\n\n\\subsection{$\\kappa$ is an even number}\nWhen $\\kappa$ is an even number and the energy $E$ is very near zero, there exist Anderson localizations if the hopping strength $\\lambda$ is sufficiently large (see Fig.5 for $\\kappa=2$).\nEspecially when $E\\rightarrow0$, we get two critical hopping strengths\n\\begin{align}\n\\lambda_{c1}=\\pm \\frac{1-\\sqrt{1-\\tau}}{2} \\ \\ \\& \\ \\ \\lambda_{c2}=\\pm \\frac{1+\\sqrt{1-\\tau}}{2},\n\\end{align}\nwhich are independent of $\\kappa$.\nHere $\\lambda_{c1}$ ($\\lambda_{c2}$) corresponds to the point $A$ ($B$) in Fig.5.\nIt is noticed that $\\lambda_{c2}$ coincides with the critical $\\lambda_c$ in Sec.II, where the average growth rate of the zero-energy wave function is zero.\n\nFrom Fig.5, we can see that when the hopping $\\lambda$ is in the interval $|\\lambda_{c1}|<|\\lambda|<|\\lambda_{c2}|$, the eigenstates are critical states. Outside the interval, the eigenstates\nbecome localized states. So there exist Anderson localization transitions for even $\\kappa$.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F1.eps}\n\\end{center}\n\\caption{Phase diagram for $\\kappa=1$. When the energy is near zero, the states are always critical states with vanishing Lyapunov exponent.}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F21.eps}\n\\end{center}\n\\caption{ Phase diagram in the $(\\lambda, E)$ plane for $\\kappa=3$ and $\\tau=-2$. When $E$ is near zero, there are no localized-critical transitions. The blue solid lines are the phase boundaries (mobility edges $E_c$), which are given by Eq.(32).\n Standard deviations are represented with different colors. 
 }\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F22.eps}\n\\end{center}\n\\caption{ Phase diagram in the $(\\lambda, E)$ plane for $\\kappa=3$ and $\\tau=1\/2$. There exist localized-critical transitions for nonzero-energy states. The blue solid lines are the phase boundaries (mobility edges $E_c$), which are given by Eq.(32).\n Standard deviations are represented with different colors.}\n\\end{figure}\n\n\n\n\\subsection{$\\kappa$ is an odd number}\nWhen $\\kappa=1$, the system has no energy scale $t$; we therefore use units in which $\\lambda=1$. In Fig.6, we report the phase diagram in the $\\tau-E$ plane.\nFrom Fig.6, we see that if $\\tau<0$, all the eigenstates are in the critical region. Only when $\\tau$ is positive and sufficiently large does the system have localized states.\n As $\\tau$ approaches $1$, i.e., $\\tau\\rightarrow 1^{-}$, the range of the energy spectrum grows and eventually diverges. This is because when $\\tau=1$, the Hamiltonian defined by Eq.(1) becomes an unbounded operator, e.g., $\\frac{2\\lambda \\cos(2\\pi \\beta i+\\phi)}{\\sqrt{1-\\tau \\cos^2(2\\pi \\beta i+\\phi)}}=\\frac{2\\lambda \\cos(2\\pi \\beta i+\\phi)}{|\\sin(2\\pi \\beta i+\\phi)|}$ diverges for some lattice site index $i$.\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\columnwidth]{F5.eps}\n\\end{center}\n\\caption{ Standard deviations of localized states and critical states for parameters $\\kappa=3$, $\\tau=-2$ and $\\lambda\/t=1.5$. 
The eigenenergy $E_n$ increases gradually as the eigenstate index $n$ runs from $1$ to $3000$ (along the black dashed line of Fig.7).}\n\\end{figure}\n\n\n\nWhen $\\kappa>1$ and the energy $E$ is very near zero, there are no Anderson localizations for any hopping strength $\\lambda$ (see Figs. 7 and 8).\nIf $\\lambda$ is very large, i.e., $\\lambda\\rightarrow\\pm\\infty$, the mobility edge can be obtained by Eq.(32), i.e.,\n\\begin{align}\n&E_c=\\pm\\frac{2\\sqrt{1-\\tau}}{(\\kappa-1)|\\lambda|}, \\ as \\ \\lambda\\rightarrow\\pm\\infty.\n\\end{align}\nIt is shown that for an arbitrarily strong quasiperiodic hopping strength $\\lambda$, there always exists an energy window, i.e., $-\\frac{2\\sqrt{1-\\tau}}{(\\kappa-1)|\\lambda|}<E<\\frac{2\\sqrt{1-\\tau}}{(\\kappa-1)|\\lambda|}$, of critical states.\n\nFor localized states [$\\gamma(E)>0$], the Lyapunov exponents are different for three different $\\epsilon=0,0.1,0.2$. Their differences are linearly proportional to $\\Delta\\epsilon=0.1$ in Fig. 11.\n\nBy taking $\\epsilon=0.1$, we also approximately calculate Avila's acceleration $\\omega(E)$ by\n\\begin{align}\n\\kappa\\omega(E)\\simeq\\frac{\\gamma(E,\\epsilon)-\\gamma(E,0)}{\\epsilon},\n\\end{align}\n[see panel (b) of Fig.11]. It shows that when $E$ is an eigenenergy of a localized state [$\\gamma(E)>0$], Avila's acceleration is 1. 
When $E$ is an eigenenergy of a critical state [$\\gamma(E)=0$], Avila's acceleration is 0.\nWe also notice that if $E$ is not an eigenenergy, Avila's acceleration is $-1$.\n\n\n\n Further combining Eq.(27) and Eq.(43), one can classify the systems with different real parameter $E$ (different phases) by the Lyapunov exponent and the quantized acceleration, i.e.,\n\\begin{align}\n &(a): \\gamma(E)>0 \\ \\ \\& \\ \\kappa\\omega(E)=-1, \\ if \\ E \\ is \\ not\\ an\\ eigenvalue \\notag\\\\\n &(b): \\gamma(E)>0 \\ \\ \\& \\ \\kappa\\omega(E)=1, \\ for \\ localized \\ state\\notag\\\\\n &(c): \\gamma(E)=0 \\ \\ \\& \\ \\kappa\\omega(E)=0, \\ for \\ critical \\ state.\n\\end{align}\n\n\n\n\\section{summary}\nIn conclusion, we investigate the localization properties of the one-dimensional lattice model with off-diagonal mosaic quasiperiodic hopping.\n The parity of the mosaic period has important effects on the localization of zero-energy states.\n When the mosaic period is odd, there always exists an energy window for critical states regardless of the hopping strength.\n For an even period, the states near zero energy become localized edge states for a sufficiently large hopping strength.\n It is found that there exist mobility edges which separate the localized states from critical states.\n Within the critical region, the spatial extensions of eigenstates have large fluctuations.\n\n The Lyapunov exponents and mobility edges are exactly obtained with Avila's theory.\n Furthermore, it is found that the critical index of the localization length is $\\nu=1$.\nFor $\\kappa=3$ and $\\tau=1\/2$, the numerical results show that the scaling exponent of the inverse participation ratio (IPR) of critical states is $x\\simeq0.47$. 
It is shown that these states indeed are critical states.\nIn addition, it is shown that the Lyapunov exponent and Avila's acceleration can be used to classify the systems with different $E$.\n\n\n\n\\section*{Acknowledgements}\nThis work was supported by the NSFC under Grants Nos.\n11874127, 12061031, 12171039, 11871007, the Joint Fund with\nGuangzhou Municipality under No.\n202201020137, and the Starting Research Fund from\nGuangzhou University under Grant No.\nRQ 2020083.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\thispagestyle{empty}\nA common problem faced in machine learning is the lack of sufficient training data. For colonoscopy, the majority of readily-available public image data is limited to individual frames or short sequences for benchmarking CAD-based polyp detection. Public colonoscopy videos of the entire colon structure are limited to rather low-quality capsule endoscopy video footage. The lack of ground-truth camera poses further hampers the training of models for applications other than polyp detection, such as anatomical segment classification, visual place recognition (VPR), simultaneous localization and mapping (SLAM) and structure from motion (SfM). These applications require high-quality colonoscopy videos of entire examinations covering all phases of the intervention.\n\\begin{center}\n \\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{domain.jpeg}\n \\caption{Examples of synthetic colonoscopy images.}\n \\label{fig:synthetic_images}\n\\end{figure}\n\\end{center}\n\nA common solution to this is the rendering of virtual endoscopy (VE) videos based on CT colonography data. VE provides both image sequences and ground-truth poses of varying anatomy, but (without further investigation) differs substantially from the visual appearance of real colonoscopy images. 
This entails gaps that have to be addressed by proper domain adaptation methods as demonstrated in \\cite{mathew2020augmenting}. This, however, implies that synthesized images resemble colonoscopy images (and their anatomical locations) of small datasets, which likely do not generalize well to unseen or less observed colon regions. \nDomain randomization, in contrast, utilizes a large amount of data which is randomly sampled over the entire configuration space with the variables being carefully predefined. It is important to note that domain randomization is practically applicable only to simulated data, as some of the parameters such as textures, materials, occlusions and coat masks have to be properly controlled in a simulated environment and have to be more elaborate than for generating VE images in order to achieve a visual appearance close to real colonoscopy images (see Fig. \\ref{fig:synthetic_images}). Powerful engines such as \\textit{Unity} have gained particular interest in the computer vision and robotics communities \\cite{borkman2021unity,tremblay2018training}, but have rarely been investigated in medical imaging \\cite{incetan2020vrcaps,billot2021synthseg}. \n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{Architecture.png}\n \\caption{Overview of the utilized processing pipeline for generating synthetic images.}\n \\label{fig:architecture}\n\\end{figure*}\n\nGiven sufficient capabilities of simulation, models can be trained solely on domain-randomized data while still achieving high generalization performance for inference on real-world test data.\n\nThis paper presents an exemplary implementation of domain randomization for colonoscopy with all required algorithmic components. 
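To make this concrete, a domain-randomized recording can draw each rendering variable from a predefined range before every traversal; a minimal Python sketch (the parameter names, ranges and texture labels are illustrative assumptions, not the values of an actual engine setup):

```python
import random

# illustrative randomization ranges; in practice the configuration space
# covers textures, materials, lighting, occlusions, coat masks, etc.
RANGES = {
    "light_intensity": (0.5, 2.0),
    "coat_mask": (0.0, 1.0),
    "surface_smoothness": (0.4, 0.9),
}
TEXTURES = ["mucosa_a", "mucosa_b", "random_noise"]

def sample_configuration(rng=random):
    # one randomly drawn rendering configuration, e.g. per recorded traversal
    cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}
    cfg["texture"] = rng.choice(TEXTURES)
    return cfg

print(sample_configuration())
```

Sampling uniformly over the whole predefined configuration space, rather than matching a small real dataset, is precisely what distinguishes domain randomization from domain adaptation here.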
It builds on prior work \\cite{incetan2020vrcaps} and supplements it with automated domain-randomized video recording by following waypoints along the colon's interior centerline.\n\n\n\\section{Material and Methods} \n\n\\subsection{Colon segmentation}\nAt first, a CT colonography (CT with radiocontrast material) obtained from TCIA is imported into \\textit{3D Slicer} for semi-automatic colon segmentation, which is carried out as follows. An ROI around the colon is set manually and its image content is thresholded. Subsequently we apply region-based segmentation on the (thresholded) mask to further delineate the colon structure. The segmentation mask is manually curated to ensure optimal results for successive steps.\n\n\\subsection{Centerline extraction}\nFor automated image collection we require an appropriate camera path through the interior colon. For this purpose, we estimate the centerline within the colon structure based on the prior work of \\cite{wan2002automatic}. The key idea is to plan an obstacle-free (w.r.t. the colon wall) path from the anus (colon entry) to the caecum. Since the intuitive approach based on shortest-path estimation tends to get too close to corners in turns, Wan et al. propose to explicitly incorporate the inverse map of distances to the colon wall \\cite{wan2002automatic}, which was demonstrated to achieve optimal results with paths being exactly centered. Subsequently we sample equidistant waypoints along the extracted centerline, which will be utilized within the simulation. Currently, we manually pick start and end points of the centerline extraction which, however, could be replaced by automatic anatomical landmark prediction through heatmap regression.\n\n\\subsection{3D model preparation}\nNext, the colon segmentation is imported into \\textit{Blender} for UV editing. Generally, a mesh is created surrounding the organ that can be edited along the vertices of the object. 
This mesh allows \\textit{UV mapping}, which is a method for projecting a 3D model surface onto a 2D plane for texture mapping. A UV editing tool as part of \\textit{Blender} offers the possibility of unwrapping the 3D object onto a 2D plane where textures can be applied seamlessly throughout the region of the colon. This texture gives a realistic pattern to the object. Default shaders in \\textit{Blender} enable changing material properties of the colon such as surface IOR, specular tint and anisotropy to further enhance the realism. \n\n\\subsection{Photorealistic rendering}\nThe 3D model prepared in \\textit{Blender} is subsequently imported into \\textit{Unity}, which provides high definition render pipelines for our simulation environment that can produce photorealistic colonoscopy images. This virtual engine is commonly used for game development and has drawn particular interest in computer vision research due to its powerful graphical simulation platform for generating synthetic images. Using \\textit{Unity} we are able to synthesize images where parameters such as lighting, materials, occlusions, transparency and coat mask are altered to give them a more realistic appearance. These parameters are carefully selected such that real-world characteristics are optimally mimicked. As a starting base we utilize parts of the \\textit{VR-Caps} project simulating a capsule endoscopic camera within \\textit{Unity} \\cite{incetan2020vrcaps}. A 3D model of this capsule with predefined attributes of an attached camera is placed inside the colon and used for data collection. Adjusting these parameters is crucial for both mimicking real endoscopy and augmenting the data. The table below shows the camera parameters and post-processing effects required to achieve a fully synthetic model of the colon. For potential navigation tasks it is possible to additionally store corresponding depth images. 
\n\n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\nAttribute & Value\\\\\n\\hline\nSurface Metallic & 0.3 \\\\\nSurface Smoothness & 0.7 \\\\\nLens Intensity & 0.1\\\\\nChromatic Aberration & 0.5 \\\\\nCoat Mask & 0.435 \\\\\nCamera's Field of View & 91.375 \\\\\nFocal Length & 159.45\\\\\nISO & 200 \\\\\nAperture & 16\\\\\nAnisotropy & 1 \\\\\n\\hline\n\\end{tabular}\n\\caption{Camera parameters and post-processing effects}\n\\label{tab:fonts}\n\\end{table}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{DR-Samples_Colon.png}\n \\caption{Synthesized, domain-randomized images captured at the same pose inside the colon. Textures are obtained from random patterns as well as synthetic patterns mimicking mucosa appearance.}\n \\label{fig:dr-samples}\n\\end{figure*}\n\n\\subsection{Automated Video Rendering}\nManually collecting data for endoscopy becomes highly time-consuming when creating synthetic datasets with all the required variation and diversity. For domain randomization we need to record sequences of images, each time with different textures and materials, which entails substantial individual setup. Thus, an approach for automating the process of data collection is introduced, which allows us to collect numerous samples inside the colon with different parameters. For this purpose we make use of the \\textit{scripting API} offered by \\textit{Unity}, which gives access to the simulation environment and interactive components via executable scripts. First, the simulated capsule is introduced into the colon and then automatically steered along the waypoints of the centerline (see Fig. \\ref{fig:waypoints}). The \\textit{Unity engine} is set up such that it enables smooth camera motion while following the waypoints. 
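The automated, waypoint-based recording described above can be sketched as follows. This is a minimal illustration in Python, not the project's actual implementation (which scripts \textit{Unity} via its C# API); the class name, method names and parameter ranges here are hypothetical.

```python
import random


class WaypointFollower:
    """Hypothetical sketch of automated capsule steering along equidistant
    centerline waypoints with domain randomization (names and ranges are
    illustrative, not taken from the actual Unity project)."""

    def __init__(self, waypoints, step=0.5):
        # Initialization: parameter setup and initial capsule pose.
        self.waypoints = waypoints      # equidistant points on the centerline
        self.step = step                # distance moved per update tick
        self.pos = list(waypoints[0])   # current capsule position
        self.target = 1                 # index of the next waypoint
        self.params = self.randomize_params()

    @staticmethod
    def randomize_params():
        # Domain randomization: draw a fresh texture/lighting configuration.
        return {
            "texture_id": random.randrange(100),
            "light_intensity": random.uniform(0.5, 2.0),
            "surface_smoothness": random.uniform(0.4, 0.9),
        }

    def reset(self):
        # Start a new traversal with a freshly randomized parameter set.
        self.pos = list(self.waypoints[0])
        self.target = 1
        self.params = self.randomize_params()

    def update(self):
        # Per-frame update: move the capsule towards the next waypoint;
        # returns False once the traversal is complete.  In the simulator,
        # an image would be captured after each call.
        if self.target >= len(self.waypoints):
            return False
        tx, ty, tz = self.waypoints[self.target]
        dx, dy, dz = tx - self.pos[0], ty - self.pos[1], tz - self.pos[2]
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        if dist <= self.step:           # waypoint reached, head to the next one
            self.pos = [tx, ty, tz]
            self.target += 1
        else:                           # smooth motion along the current segment
            s = self.step / dist
            self.pos = [self.pos[0] + s * dx,
                        self.pos[1] + s * dy,
                        self.pos[2] + s * dz]
        return True
```

Calling \texttt{update()} once per rendered frame and \texttt{reset()} between traversals mirrors the per-traversal parameter changes described above.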
\nOur path following script consists of two parts: an \\textit{initialization} function which performs all required initial setup (parameter setup, initial capsule pose) and an \\textit{update} function which continuously controls the movement of the capsule (along the waypoints) and triggers actions such as changing parameters (e.g., lighting, texture). All images captured by the camera of the capsule are recorded. The parameters can either be adjusted on the fly, allowing us to capture images at the same pose under varying conditions, or alternatively the parameter set can be altered only between entire traversals. \\textit{Unity} also allows configuring the capsule's speed, the camera's field of view and the targeted frame rate (FPS). \n\n\\section{Results and Discussion} \n\nWe evaluate our simulation qualitatively based on image renderings for varying parameters; the effect becomes particularly apparent when randomizing surface material and textural patterns. This is illustrated by Fig. \\ref{fig:dr-samples}, which shows different renderings captured from the same camera pose inside the colon. \nFig. \\ref{fig:waypoints} shows an example of an extracted centerline and generated waypoints being followed for automated video recording. For comparison, Fig. \\ref{fig:real_samples} shows exemplary real and synthetic images, respectively.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{cenerline.png}\n \\caption{Path following the centerline of the colon. 
The green line visualizes the path and the red circles mark the waypoints traced by the simulated capsule.}\n \\label{fig:waypoints}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{RE-VE-comparison.jpeg}\n \\caption{Comparison of synthetic (top) and real (bottom) colonoscopy images.}\n \\label{fig:real_samples}\n\\end{figure}\n\\section{Conclusion}\nThis paper presented a pipeline for generating synthetic colonoscopy videos that can be used to improve the training of deep learning models. By controlling environment (e.g., texture, reflectance) as well as virtual camera (e.g., lighting) properties, we are able to simulate conditions observed at inference time but rarely present in training data, which is particularly the case for small-scale datasets. Inspired by the substantial improvements reported for computer vision and robotics applications, and by the limited prior work (VR-Caps) \\cite{incetan2020vrcaps}, we advocate domain-randomized synthesis for video colonoscopy. In our future work, we will incorporate this additional data for training deep learning-based approaches to SfM, SLAM and 3D reconstruction. To more easily handle the variation in patient anatomy, we will investigate (fully) automatic segmentation of the colon in CT scans as well as an alternative to the 3D model preparation in \\textit{Blender}. 
\n\\pagebreak\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction and main results}\n\n\n\nConsider the one-dimensional Schr\\\"odinger equation\n\\begin{equation}\\label{eq_Schrodinger}\n {\\rm i}\\partial_t u=\\frac{\\nu(E)}{2}H_0u + W(E,\\omega t, x, -{\\rm i} \\partial_x)u,\\qquad x\\in{\\mathbb R} ,\n\\end{equation}\nwhere we assume that\n\\begin{itemize}\n \\item the frequencies $\\omega\\in {\\mathbb R}^d$, $d\\geq 1$, satisfy the {\\it Diophantine} condition (denoted by $\\omega\\in {\\rm DC}_d(\\gamma,\\tau)$ for $\\gamma>0$, $\\tau>d-1$):\n$$\\inf_{j\\in{\\mathbb Z}}|\\langle n,\\omega\\rangle-j|>\\frac{\\gamma}{|n|^\\tau},\\qquad \\forall \\ n\\in{\\mathbb Z}^d\\setminus\\{0\\},$$\n \\item the parameter $E\\in {{\\mathcal I}}$, an interval $\\subset {\\mathbb R}$, and $\\nu\\in C^2({{\\mathcal I}},{\\mathbb R})$ satisfies\n $$|\\nu'(E)|\\geq l_1,\\quad |\\nu''(E)|\\leq l_2,\\qquad \\forall \\ E\\in{{\\mathcal I}},$$\nfor some $l_1,l_2>0$,\n \\item $H_0$ is the {\\it one-dimensional quantum harmonic operator}, i.e.\n$$(H_0u)(x):=-(\\partial_{x}^2 u)(x)+x^2\\cdot u(x),\\qquad \\forall \\ u\\in L^2({\\mathbb R}),$$\n \\item $W(E,\\theta, x,\\xi)$ is a quadratic form of $(x,\\xi)$:\n$$W(E, \\theta, x,\\xi)=\\frac12 \\big(a(E,\\theta) x^2+2b(E,\\theta) x\\cdot\\xi+c(E,\\theta) \\xi^2\\big),$$\nwith $a,b,c:{{\\mathcal I}}\\times {\\mathbb T}^d\\to {\\mathbb R}$, all of which are $C^2$ w.r.t. $E\\in{{\\mathcal I}}$ and $C^\\omega$ w.r.t. $\\theta\\in{\\mathbb T}^d:=({\\mathbb R}\/{\\mathbb Z})^d$, and for every $E\\in{{\\mathcal I}}$, for $m=0,1,2$,\n$|\\partial^m_E a(E,\\cdot)|_r:=\\sup_{|\\Im z|<r}|\\partial^m_E a(E,z)|<\\infty$, and similarly for $b$ and $c$.\n\\end{itemize}\n\n\\begin{Theorem}\\label{thm_Schro}\nThere exists $\\varepsilon_*>0$ such that if\n$$\\max_{m=0,1,2}\\left\\{\\left|\\partial_E^m a\\right|_r, \\ \\left|\\partial_E^m b\\right|_r, \\ \\left|\\partial_E^m c\\right|_r\\right\\}=:\\varepsilon_0\\leq \\varepsilon_*,\\qquad \\forall \\ E\\in{{\\mathcal I}},$$\nthen for a.e. $E\\in{{\\mathcal I}}$, Eq. 
(\\ref{eq_Schrodinger}) is reducible, i.e., there exists a time quasi-periodic transformation $U(\\omega t)$, unitary in $L^2$ and analytically depending on $t$, such that\nEq. (\\ref{eq_Schrodinger}) is conjugated to\n${\\rm i}\\partial_t v= G v$ by the transformation $u=U(\\omega t) \\, v$, with $G$ a linear operator independent of $t$.\n\nMore precisely,\nthere exists a subset\n$${{\\mathcal O}}_{\\varepsilon_0}=\\bigcup_{j\\in{\\mathbb N}}\\Lambda_j\\subset \\overline{{\\mathcal I}}$$ with $\\Lambda_j$'s being closed intervals \\footnote{In this paper, the ``closed interval\" is interpreted in a more general sense, i.e., it can degenerate to a point instead of being a positive-measure subset of ${\\mathbb R}$.}\nand ${\\rm Leb}({{\\mathcal O}}_{\\varepsilon_0})<\\varepsilon_0^{\\frac{1}{40}}$,\nsuch that the following holds.\n\\begin{enumerate}\n\\item For a.e. $E\\in{{\\mathcal I}}\\setminus{{\\mathcal O}}_{\\varepsilon_0}$, $G$ is unitarily equivalent to $\\varrho H_0$ for some $\\varrho=\\varrho_E\\geq 0$;\n\\item If ${\\rm Leb}(\\Lambda_j)>0$, then\n\\begin{itemize}\n \\item for $E\\in{\\rm int}\\Lambda_j$, $G$ is unitarily equivalent to $-\\frac{\\lambda {\\rm i}}{2}(x\\cdot \\partial_x+ \\partial_x \\cdot x)$ for some $\\lambda=\\lambda_E> 0$;\n \\item for $E\\in\\partial\\Lambda_j\\setminus\\partial{{\\mathcal I}}$, $G$ is unitarily equivalent to $-\\frac{\\kappa}2 x^2$ for some $\\kappa=\\kappa_E\\in{\\mathbb R}\\setminus\\{0\\}$.\n\\end{itemize}\nIf ${\\rm Leb}(\\Lambda_j)=0$, then $G=0$ for $E\\in\\Lambda_j$.\n\\end{enumerate}\n\\end{Theorem}\n\nBefore giving its application to the growth of Sobolev norms, let us first review previous work on the reducibility of harmonic oscillators as well as the related KAM theory.\n\nFor 1-d harmonic oscillators with time periodic smooth perturbations, Combescure \\cite{Com87} first showed the pure point nature of the Floquet operator (see also \\cite{DLSV2002, EV83, Kuk1993}).\nFor 1-d harmonic oscillators 
with time quasi-periodic bounded perturbations, we can refer to \\cite{GreTho11, Wang08, WLiang17} for the reducibility and the pure point spectrum of the Floquet operator.\nFor 1-d harmonic oscillators with unbounded time quasi-periodic perturbations, similar results can be found in \\cite{Bam2018, Bam2017, BM2018, Liangluo19}.\nIn investigating these reducibility problems, KAM theory for 1-d PDEs has been well developed by Bambusi-Graffi \\cite{BG2001} and Liu-Yuan \\cite{LY2010} in order to deal with unbounded perturbations.\n\nReducibility for PDEs in the higher-dimensional case was initiated by Eliasson-Kuksin \\cite{EK2009}, based on their KAM theory \\cite{EK2010}.\nWe refer to \\cite{GrePat16} and \\cite{LiangWang19} for the harmonic oscillator in any dimension with bounded potential. \nWe mention that some higher-dimensional results with unbounded perturbations have recently been obtained\n\\cite{BLM18, FGMP19, FG19, FGN19, Mon19}. \nHowever, a general KAM theorem for higher-dimensional PDEs with unbounded perturbations is still out of reach.\n\n\nRecently, Bambusi-Gr\\'ebert-Maspero-Robert \\cite{BGMR2018} established a reducibility result for the harmonic oscillators on ${\\mathbb R}^n$, $n\\geq 1$, in which the perturbation is a polynomial of degree at most two in $x$ and $-{\\rm i}\\partial_x$ with coefficients quasi-periodically depending on time.\nThe proof in \\cite{BGMR2018} exploits the fact that for polynomial Hamiltonians of degree at most $2$, there is an exact correspondence between classical and quantum mechanics, so that the result can be proved by exact quantization of the classical KAM theory which ensures reducibility of the classical Hamiltonian system.\nThe exact correspondence between classical and quantum dynamics of quadratic Hamiltonians was already exploited in \\cite{HLS86} to prove stability and instability results for one degree of freedom time periodic quadratic Hamiltonians.\nTo prove our main result, we use the same strategy as 
\\cite{BGMR2018} and the reducibility result for the classical Hamiltonian by Eliasson \\cite{Eli1992}.\n\n\n\n\\subsection{Growth of Sobolev norms}\n\nBesides reducibility, the construction of unbounded solutions in Sobolev space for Schr\\\"odinger equations has attracted even more attention.\n\nAs an application of Theorem \\ref{thm_Schro}, we can study the long-time behaviour of the solution $u(t)$ to Eq. (\\ref{eq_Schrodinger}) in Sobolev space.\nFor $s\\geq 0$, we define the Sobolev space\n$${{\\mathcal H}}^s:=\\left\\{\\psi\\in L^2({\\mathbb R}):H_0^{\\frac{s}2}\\psi \\in L^2({\\mathbb R})\\right\\}$$\nand the Sobolev norm\n$\\|\\psi\\|_{s}:=\\|H_0^{\\frac{s}2} \\psi\\|_{L^2({\\mathbb R})}$.\nIt is well known that, for $s\\in {\\mathbb N}$, the above norm is equivalent to\n$$\n\\sum\\limits_{\\alpha+\\beta\\leq s\\atop{\\alpha,\\beta\\in{\\mathbb N}} }\\|x^{\\alpha}\\cdot\\partial^{\\beta} \\psi\\|_{L^2({\\mathbb R})}.\n$$\n\\begin{remark}\\label{remark_norm_equiv}\n In view of Remark 2.2 of \\cite{BM2018}, we get that, for a given $\\psi\\in {{\\mathcal H}}^s$,\n\\begin{equation}\\label{norm_equiv}\n\\|\\psi\\|_{s}\\simeq \\|\\psi\\|_{H^s}+ \\|x^{s} \\psi\\|_{L^2},\n\\end{equation}\nreplacing $K_0=H_0$ in that remark by $K_0=H_0^{\\frac12}$, where $H^s$ means the standard Sobolev space and $\\|\\cdot\\|_{H^s}$ is the corresponding norm. Hence, to calculate the norm $\\|\\psi\\|_s$, $s\\geq 0$, it is sufficient to focus on $\\|x^{s} \\psi\\|_{L^2}$ for $s\\geq0$ and $\\|\\psi^{(s)}\\|_{L^2}$ for $s\\in{\\mathbb N}$.\n\\end{remark}\n\n\nFor different types of reduced systems, the Sobolev norm of the solution exhibits different behaviors.\n\n\n\n\\begin{Theorem}\\label{thm_Schro_sobolev} Under the assumption of Theorem \\ref{thm_Schro}, for any $s\\geq 0$, and any non-vanishing initial condition $u(0)\\in {{\\mathcal H}}^s$, the following holds true for the solution $u(t)$ to Eq. (\\ref{eq_Schrodinger}) for $t\\geq0$.\n\\begin{enumerate}\n\\item For a.e. 
$E\\in{{\\mathcal I}}\\setminus{{\\mathcal O}}_{\\varepsilon_0}$,\n$\nc \\leq \\|u(t)\\|_{s}\\leq C\n$.\n\\item If ${\\rm Leb}(\\Lambda_j)>0$, then\n\\begin{itemize}\n \\item for $E\\in{\\rm int}\\Lambda_j$,\n$c e^{\\lambda st} \\leq \\|u(t)\\|_{s} \\leq C e^{\\lambda st}$,\n \\item for $E\\in\\partial \\Lambda_j\\setminus \\partial{{\\mathcal I}}$,\n$\nc |\\kappa|^s t^s \\leq \\|u(t)\\|_{s}\\leq C |\\kappa|^s (1+ t^2)^{\\frac{s}2}\n$.\n\\end{itemize}\nIf ${\\rm Leb}(\\Lambda_j)=0$, then for $E\\in \\Lambda_j$, $c \\leq \\|u(t)\\|_{s} \\leq C$.\n\\end{enumerate}\nHere $\\lambda=\\lambda_E$ and $\\kappa=\\kappa_E$ are the same as in Theorem \\ref{thm_Schro}, and $c, \\, C>0$ are two constants depending on $s$, $E$ and $u(0)$.\n\\end{Theorem}\n\n\nLet us comment further on the construction of solutions growing in time for Schr\\\"odinger equations.\nBourgain \\cite{Bou99} built logarithmic lower and upper growth bounds for the linear Schr\\\"odinger equation on ${\\mathbb T}$ by\nexploiting resonance effects. The optimal polynomial growth example was given by Delort \\cite{Del2014} for the 1-d harmonic oscillator with a time periodic order-zero perturbation. Maspero \\cite{Mas2018} reproved the result of Delort by exploiting the idea in \\cite{GY00}. In \\cite{BGMR2018}, the authors also considered the higher-dimensional harmonic oscillator with a linear perturbation in $x$ and $-{\\rm i}\\partial_x$ with time quasi-periodic coefficients. Under the Diophantine condition on the frequencies, the time-dependent equation can be reduced to a special ``normal form\" independent of time (see Theorem 3.3 of \\cite{BGMR2018}), which implies polynomial growth of the Sobolev norm. 
There are also many works, e.g., \\cite{BGMR2019, MR2017}, concerning the upper growth bound of the solution in Sobolev space.\n\nFrom the works mentioned above, we can see that almost all lower growth bounds for the solution are closely related to the resonance phenomenon. However, it is not clear to us which kind of parameter set is connected to the growth of the Sobolev norm.\nIn comparison with the above results, we introduce, following \\cite{Eli1992}, the parameter set $\\bigcup_{j\\in{\\mathbb N}}\\Lambda_j$, on which the solutions have exponential lower and upper growth bounds, while on the boundaries of this set the solutions have polynomial lower and upper growth bounds. In the following, we will present several concrete examples to show that the set $\\bigcup_{j\\in{\\mathbb N}}\\Lambda_j$ is of positive measure.\n\n\n\\subsection{Examples with ${\\rm Leb}({{\\mathcal O}}_{\\varepsilon_0})>0$}\n\nIn view of Theorems \\ref{thm_Schro} and \\ref{thm_Schro_sobolev}, the growth of the Sobolev norm can be obtained via reducibility if ${\\rm Leb}({{\\mathcal O}}_{\\varepsilon_0})>0$.\nWe need to point out that time-dependent quadratic perturbations $W(E,\\omega t, x, -{\\rm i} \\partial_x)$ with ${\\rm Leb}({{\\mathcal O}}_{\\varepsilon_0})>0$ exist universally. In other words, it is quite an ``extreme\" case that\n$${\\rm Leb}(\\Lambda_j)=0, \\qquad \\forall \\ j\\in{\\mathbb N}.$$\nWe have the following concrete examples.\n\n\\\n\nFor ${{\\mathcal I}}={\\mathbb R}$, $\\nu(E)=E$, the equation\n\\begin{equation}\\label{example_1}\n {\\rm i}\\partial_t u=\\frac{E}{2}H_0u + \\left(\\frac{a(\\omega t)}{2} x^2-\\frac{b(\\omega t)}2\\left(x\\cdot{\\rm i}\\partial_x+{\\rm i}\\partial_x\\cdot x\\right)-\\frac{c(\\omega t)}2 \\partial_x^2 \\right) u,\n\\end{equation}\nsatisfies the assumptions of Theorem \\ref{thm_Schro} if $a,b,c\\in C^\\omega({\\mathbb T}^d,{\\mathbb R})$ are small enough. Hence, for Eq. 
(\\ref{example_1}), the reducibility and the behaviors of the ${{\\mathcal H}}^s$ norm of solutions described in Theorem \\ref{thm_Schro_sobolev} can be obtained.\n\\begin{Theorem}\\label{thm_example_1}\nFor generic $a,b,c\\in C^\\omega({\\mathbb T}^d,{\\mathbb R})$ with $|a|_r, |b|_r, |c|_r$ small enough (depending on $r,\\gamma,\\tau,d$), the conclusions of Theorems \\ref{thm_Schro} and \\ref{thm_Schro_sobolev} hold for Eq. (\\ref{example_1}) for ${{\\mathcal I}}={\\mathbb R}$ with ${\\rm Leb}({{\\mathcal O}}_{\\varepsilon_0})>0$.\n\\end{Theorem}\n\n\\\n\n\n\nFor $\\nu(E)=\\sqrt{E}$, consider the equation\n\\begin{equation}\\label{eq_Schrodinger-example}\n {\\rm i}\\partial_t u=\\frac{\\sqrt{E}}{2} H_0 u -\\frac{q(\\omega t)}{2\\sqrt{E}}\\left(x^2-x\\cdot{\\rm i}\\partial_x-{\\rm i}\\partial_x\\cdot x-\\partial^2_x \\right)u,\n\\end{equation}\nwith $q\\in C_r^\\omega({\\mathbb T}^d,{\\mathbb R})$. This equation is important since, as we will show later, it is closely related to the quasi-periodic Schr\\\"odinger operator.\n\\begin{Theorem}\\label{thm_example_schro}\nFor generic $q\\in C^\\omega({\\mathbb T}^d,{\\mathbb R})$, the conclusions of Theorems \\ref{thm_Schro} and \\ref{thm_Schro_sobolev} hold for Eq. (\\ref{eq_Schrodinger-example}) for ${{\\mathcal I}}=[E_0,E_1]$ with ${\\rm Leb}(\\Lambda_j)>0$ for infinitely many $j$'s, where $E_0>0$ is large enough (depending on $|q|_r$) and $E_1<\\infty$.\n\\end{Theorem}\n\n\\\n\n\nTheorem \\ref{thm_example_1} gives an example where ${\\rm Leb}(\\Lambda_j)>0$ for at least one $j$, while Theorem \\ref{thm_example_schro} gives an example where ${\\rm Leb}(\\Lambda_j)>0$ for infinitely many $j$'s. Indeed, if the dimension of the frequency is $d=2$, we can even obtain ${\\rm Leb}(\\Lambda_j)>0$ for every $j$. 
To construct such an example, we consider\n \\begin{equation}\\label{eq_AMO}\n {\\rm i}\\partial_t u=\\frac{\\nu(E)}{2} H_0 u+ \\left(\\frac{a(E,\\omega t)}{2} x^2-\\frac{b(E,\\omega t)}2\\left(x\\cdot{\\rm i}\\partial_x+{\\rm i}\\partial_x\\cdot x\\right)-\\frac{c(E,\\omega t)}2 \\partial_x^2 \\right) u,\n \\end{equation}\nwhere $\\nu(E)=\\cos^{-1}(-\\frac{E}{2})$, ${{\\mathcal I}}\\subset[-2+\\delta,2-\\delta]$ with $\\delta$ a small numerical constant (e.g., $\\delta=10^{-6}$). Then our result is the following:\n\n\\begin{Theorem}\\label{thm_AMO} There exist a sub-interval ${{\\mathcal I}}\\subset[-2+\\delta,2-\\delta]$ and $a,b,c:{{\\mathcal I}}\\times {\\mathbb T}^2\\to{\\mathbb R}$ with $a(E,\\cdot), \\, b(E,\\cdot), \\, c(E,\\cdot)\\in C^{\\omega}({\\mathbb T}^2,{\\mathbb R})$ for every $E\\in{{\\mathcal I}}$,\nsuch that the conclusions of Theorems \\ref{thm_Schro} and \\ref{thm_Schro_sobolev} hold for Eq. (\\ref{eq_AMO}). Moreover, ${\\rm Leb}(\\Lambda_j)> 0$ for every $j\\in{\\mathbb N}$.\n \\end{Theorem}\n\n\\begin{remark} One can even further obtain the precise size of ${\\rm Leb}(\\Lambda_j)$ according to \\cite{LYZZ}.\n\\end{remark}\n\n\\\n\nThe rest of the paper is organised as follows. 
In Section \\ref{sec_weyl}, which serves as a preliminary section, we recall the definition of the Weyl quantization and some known results on the relation between classical and quantum Hamiltonians.\nWe give an abstract theorem in Section \\ref{sec_abstract} on the reducibility of the quantum Hamiltonian, provided that the reducibility of the corresponding classical Hamiltonian is known.\nBy applying this abstract theorem, we exploit the connection between reducibility and the behaviour of Sobolev norms.\nThe abstract theorem is proved in Section \\ref{sec_reduc}.\nIn Section \\ref{sec_proof}, we prove the main result by verifying the hypotheses of the abstract theorem.\nIn Section \\ref{sec_pr_examples}, the proofs of Theorems \\ref{thm_example_1} -- \\ref{thm_AMO} are given.\n\n\n\\section{Classical Hamiltonian and quantum Hamiltonian}\\label{sec_weyl}\n\nTo give some preliminary knowledge,\nlet us recall the definition of the Weyl quantization, which relates classical and quantum mechanics, and its properties. 
The conclusions listed in this section can also be found in \\cite{BGMR2018}.\n\n\nThe Weyl quantization is the operator ${\\rm Op}^W:f\\mapsto f^W$ for any symbol $f=f(x,\\xi)$, with $x,\\xi\\in{\\mathbb R}^n$, where $f^{W}$ is the Weyl operator of $f$:\n$$\\left(f^{W} u\\right)(x)=\\frac{1}{(2\\pi)^n}\\int_{y, \\, \\xi\\in{\\mathbb R}^n} e^{{\\rm i}\\langle x-y,\\xi\\rangle} f\\left(\\frac{x+y}{2},\\xi\\right) u(y) \\, dy \\, d\\xi,\\qquad \\forall \\ u\\in L^2({\\mathbb R}^n).$$\nIn particular, if $f$ is a polynomial of degree at most $2$ in $(x,\\xi)$, then $f^W$ is exactly $f(x,-{\\rm i}\\partial_x)$.\n\n\n\nFor the $1-$parameter family of Hamiltonians $\\chi(t, x, \\xi )$, with $t$ an external parameter, let $\\phi^\\tau(t,x,\\xi)$ be the time $\\tau-$flow it generates, precisely the\nsolution of\n$$\\frac{dx}{d\\tau}=\\frac{\\partial\\chi}{\\partial\\xi}(t, x, \\xi ),\\qquad \\frac{d\\xi}{d\\tau}=-\\frac{\\partial\\chi}{\\partial x}(t, x, \\xi).$$\nThe time-dependent coordinate transformation\n\\begin{equation}\\label{time1}\n(x,\\xi)=\\phi^1\\left(t,\\tilde x,{\\tilde\\xi}\\right)=\\left.\\phi^{\\tau}\\left(t,\\tilde x,{\\tilde\\xi}\\right)\\right|_{\\tau=1}\n\\end{equation}\ntransforms a Hamiltonian system with\nHamiltonian $h$ into a system with Hamiltonian $g$ given by\n$$g(t,\\tilde x,\\tilde\\xi)=h(\\phi^1(t,\\tilde x,\\tilde\\xi))-\\int_0^1 \\frac{\\partial\\chi}{\\partial t}(t,\\phi^{\\tau}(t,\\tilde x,\\tilde\\xi) ) d\\tau.$$\n\n\n\\begin{Lemma} [Remark 2.6 of \\cite{BGMR2018}] If the Weyl operator $\\chi^W(t, x, -{\\rm i}\\partial_x)$ is self-adjoint for any fixed $t$, then the transformation\n\\begin{equation}\\label{tran}\n\\psi=e^{{\\rm i}\\chi^W(t, x, -{\\rm i}\\partial_x)}\\tilde\\psi\n\\end{equation}\ntransforms the equation ${\\rm i}\\partial_t\\psi=H\\psi$ into ${\\rm i}\\partial_t\\tilde\\psi=G\\tilde\\psi$ with\n\\begin{eqnarray*}\nG&:=&e^{{\\rm i}\\chi^W(t, x, -{\\rm i}\\partial_x)}He^{-{\\rm i}\\chi^W(t, x, -{\\rm i}\\partial_x)}\\\\\n& & - \\, \\int_0^1 e^{{\\rm i}\\tau\\chi^W(t, x, -{\\rm i}\\partial_x)}\\left(\\partial_t \\chi^W(t, x, -{\\rm i}\\partial_x)\\right)e^{-{\\rm i}\\tau\\chi^W(t, x, -{\\rm i}\\partial_x)}d\\tau.\n\\end{eqnarray*}\n\\end{Lemma}\n\n\\begin{Proposition} [Proposition 2.9 of \\cite{BGMR2018}]\\label{Prop_hami} Let $\\chi(t, x, \\xi )$ be a polynomial of degree at most $2$ in $(x,\\xi)$ with smooth time-dependent coefficients.\nIf the transformation (\\ref{time1}) transforms a classical system with Hamiltonian $h$ into\na system with Hamiltonian $g$, then the transformation (\\ref{tran}) transforms the quantum Hamiltonian system\n$h^W$ into $g^W$.\n\\end{Proposition}\n\n\nNow, let us focus on the case $n=1$.\n\n\\begin{Lemma} [Lemma 2.8 of \\cite{BGMR2018}]\\label{lem_Sobolev}\nLet $\\chi(\\theta,x,\\xi)$ be a polynomial of degree at most $2$ in $(x,\\xi)$ with real coefficients depending in a $C^\\infty-$way on $\\theta\\in {\\mathbb T}^d$.\nFor every $\\theta\\in {\\mathbb T}^d$, the Weyl operator $\\chi^W(\\theta,x, -{\\rm i}\\partial_x)$ is self-adjoint in $L^2({\\mathbb R})$ and $e^{-{\\rm i}\\tau\\chi^W(\\theta,x, -{\\rm i}\\partial_x)}$ is unitary in $L^2({\\mathbb R})$ for every $\\tau\\in{\\mathbb R}$.\nFurthermore, if the coefficients of $\\chi(\\theta,x,\\xi)$ are uniformly bounded w.r.t. 
$\\theta\\in {\\mathbb T}^d$, then for any $s\\geq 0$, there exist $c'$, $C' > 0$ depending on $\\|[H_0^s,\\chi^W(\\theta,x, -{\\rm i}\\partial_x)]H_0^{-s}\\|_{L^2\\mapsto L^2}$ and $s$, such that\n \\begin{equation}\\label{change_Sobolevnorm}\n c'\\|\\psi\\|_{s}\\leq \\|e^{-{\\rm i}\\tau\\chi^W(\\theta,x,-{\\rm i}\\partial_x)}\\psi\\|_{s}\\leq C'\\|\\psi\\|_{s},\\qquad \\tau\\in [0,1], \\quad \\theta\\in{\\mathbb T}^d.\n \\end{equation}\n\\end{Lemma}\n\n\n\n\n\\section{Reducibility and growth of Sobolev norm}\\label{sec_abstract}\n\n\n\n\\subsection{An abstract theorem on reducibility}\n\n\nConsider the 1-d time-dependent equation\n\\begin{equation}\\label{eq_abs}\n {\\rm i}\\partial_t u=L^{W}(\\omega t, x, -{\\rm i} \\partial_x)u,\\qquad x\\in{\\mathbb R} ,\n\\end{equation}\nwhere $L^{W}(\\omega t, x, -{\\rm i} \\partial_x)$ is a linear differential operator, $\\omega\\in{\\mathbb R}^d$, $d\\geq 1$, and the symbol $L(\\theta, x,\\xi)$ is a quadratic form of $(x,\\xi)$ with coefficients\nanalytically depending on $\\theta\\in{\\mathbb T}^d$. 
More precisely, we assume that\n\\begin{equation}\\label{op_L}\nL(\\theta, x,\\xi)=\\frac12 \\big(a(\\theta) x^2+ b(\\theta) x\\cdot \\xi + b(\\theta) \\xi\\cdot x + c(\\theta) \\xi^2\\big),\n\\end{equation}\nwith coefficients $a,b,c\\in C^\\omega({\\mathbb T}^d,{\\mathbb R})$.\n\n\n\nThrough Weyl quantization, the reducibility for the time-dependent PDE can be related to the reducibility for the ${\\rm sl}(2,{\\mathbb R})-$linear system $(\\omega, \\, A(\\cdot))$:\n$$X'=A(\\omega t)X,\\qquad A\\in C^{\\omega}({\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R})).$$\nGiven $A_1, A_2 \\in C^{\\omega}({\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))$, if there exists $Y\\in C^{\\omega}(2{\\mathbb T}^d,{\\rm SL}(2,{\\mathbb R}))$ such that\n$$\\frac{d}{dt}Y(\\omega t)=A_1(\\omega t)Y(\\omega t)-Y(\\omega t)A_2(\\omega t),$$\n we say that $(\\omega, \\, A_1(\\cdot))$ is conjugated to $(\\omega, \\, A_2(\\cdot))$ by $Y$.\nIf $(\\omega, \\, A(\\cdot))$ can be conjugated to $(\\omega, \\, B)$ with $B\\in{\\rm sl}(2,{\\mathbb R})$, we say that $(\\omega, \\, A(\\cdot))$ is {\\it reducible}.\n\n\\smallskip\n\nNow let $A(\\cdot):=\\left(\\begin{array}{cc}\n b(\\cdot) & c(\\cdot) \\\\[1mm]\n -a(\\cdot) & -b(\\cdot)\n \\end{array}\\right) \\in C^{\\omega}({\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R})) $ with $a,b,c$ coefficients given in (\\ref{op_L}).\n\\begin{Theorem}\\label{thm_redu}\nAssume that there exist $B\\in{\\rm sl}(2,{\\mathbb R})$ and $Z_j\\in C^\\omega(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R}))$, $j=0, \\cdots,K$,\nsuch that $(\\omega, \\, A(\\cdot))$ is conjugated to $(\\omega, \\, B)$ by $\\prod_{j=0}^K e^{Z_j}$. Then Eq. 
(\\ref{eq_abs}) is reducible, i.e., there exists a time quasi-periodic map $U(\\omega t)$, unitary in $L^2$ and analytic in $t$, satisfying\n\\begin{equation}\\label{norm_U}\nc'\\|\\psi\\|_{s}\\leq \\|U(\\omega t)\\psi\\|_{s}\\leq C'\\|\\psi\\|_{s}, \\quad \\forall \\ \\psi\\in{{\\mathcal H}}^s,\n\\end{equation}\nfor constants $c', \\, C'>0$ depending on $s$, such that\nEq. (\\ref{eq_abs}) is conjugated to\n\\begin{equation}\\label{Ham_G}\n{\\rm i}\\partial_t v= G v\n\\end{equation}\nby the transformation $u=U(\\omega t) v$, with $G$ an operator independent of time.\n\n\nMore precisely,\n\\begin{itemize}\n\\item [(\\uppercase\\expandafter{\\romannumeral1})] $G$ is unitarily equivalent to $\\frac{\\sqrt{{\\rm det} B}}{2} H_0$ if\n\\begin{equation}\\label{type1}\n{\\rm det}B>0 \\ or \\ B=\\left(\\begin{array}{cc}\n 0 & 0 \\\\[1mm]\n 0 & 0\n \\end{array}\n\\right).\n\\end{equation}\n\n\n\\item [(\\uppercase\\expandafter{\\romannumeral2})]\n$G$ is unitarily equivalent to $-\\frac{{\\rm i}\\sqrt{-{\\rm det}B}}{2}(x\\cdot \\partial_x+ \\partial_x \\cdot x)$ if\n\\begin{equation}\\label{type2}\n{\\rm det}B<0.\\end{equation}\n\n\\item [(\\uppercase\\expandafter{\\romannumeral3})]\n$G$ is unitarily equivalent to $-\\frac{\\kappa}{2} x^2$ if\n\\begin{equation}\\label{type3}\nB \\ is \\ similar \\ to \\ \\left(\\begin{array}{cc}\n 0 & 0\\\\[1mm]\n \\kappa & 0\n \\end{array}\n\\right) \\ with \\ \\kappa\\neq 0.\n\\end{equation}\n\\end{itemize}\n\\end{Theorem}\n\n\n\n\n\n\\subsection{Growth of Sobolev norm via reducibility}\n\nAs a corollary of Theorem \\ref{thm_redu}, we have:\n\n\\begin{Theorem}\\label{thm_sobo} Under the assumption of Theorem \\ref{thm_redu}, we consider the solution $u(t)=u(t,\\cdot)$ to Eq. (\\ref{eq_abs}) with the non-vanishing initial condition $u(0)\\in {{\\mathcal H}}^s$, $s\\geq 0$. 
There exist $c, C>0$, depending on $s$ and $u(0)$, such that, for any $t\\geq 0$,\n\\begin{itemize}\n\\item If (\\ref{type1}) holds, then\n$c \\leq \\|u(t)\\|_{s}\\leq C$.\n\\item If (\\ref{type2}) holds, then\n$c e^{\\sqrt{-{\\rm det}B}st} \\leq \\|u(t)\\|_{s}\\leq C e^{\\sqrt{-{\\rm det}B}st}$.\n\\item If (\\ref{type3}) holds, then\n$\nc |\\kappa|^s t^s \\leq \\|u(t)\\|_{s}\\leq C |\\kappa|^s (1+t^2)^{\\frac{s}2}\n$.\n\\end{itemize}\n\\end{Theorem}\n\n\n\nAccording to (\\ref{norm_U}), to make precise the growth of Sobolev norms for the solution to Eq. (\\ref{eq_abs}), it is sufficient to study the reduced quantum Hamiltonian $G(x, -{\\rm i}\\partial_x)$ obtained in (\\ref{Ham_G}), or more simply, the unitarily equivalent forms of types (\\uppercase\\expandafter{\\romannumeral1})$-$(\\uppercase\\expandafter{\\romannumeral3}) listed in Theorem \\ref{thm_redu}.\n\n\\smallskip\n\nIf (\\ref{type1}) holds, then $G$ is unitarily equivalent to $\\frac{\\sqrt{{\\rm det}B}}{2} H_0$. Since the ${{\\mathcal H}}^s-$norm of $e^{-{\\rm i}t\\frac{\\sqrt{{\\rm det}B}}{2} H_0}\\psi_0$ is conserved for any $\\psi_0\\in{{\\mathcal H}}^s$, the boundedness of the Sobolev norm is shown.\nWe focus on the cases where (\\ref{type2}) and (\\ref{type3}) hold, in which the growth of the Sobolev norm occurs.\n\n\n\\begin{Proposition} \\label{prop6}\nFor the equation\n\\begin{equation}\\label{eq_hyper}\n\\partial_t v(t,x)=-\\frac\\lambda2 x\\cdot\\partial_x v (t,x)-\\frac\\lambda2 \\partial_x(x\\cdot v (t,x)), \\qquad \\lambda>0,\n\\end{equation}\n with non-vanishing initial condition $v(0, \\cdot)= v_0\\in {{\\mathcal H}}^s$, $s\\geq 0$, there exist two constants $\\tilde c, \\, \\tilde C>0$, depending on $s$, $\\lambda$ and $v_0$, such that the solution satisfies\n\\begin{equation}\\label{bounds_hyper}\n\\tilde c e^{\\lambda st} \\leq \\|v(t,\\cdot )\\|_{s}\\leq \\tilde C e^{\\lambda st}, \\qquad \\forall \\ t\\geq0.\n\\end{equation}\n\\end{Proposition}\n\n\\begin{remark}\nThis conclusion is 
also given in Remark 1.4 of \\cite{MR2017}.\n\\end{remark}\n\n\n\n\\proof Through a straightforward computation, we can verify that, for the initial condition $ v (0,\\cdot)= v _0(\\cdot)\\in {{\\mathcal H}}^s$, the solution to Eq. (\\ref{eq_hyper}) satisfies\n$$ v (t,x)=e^{-\\frac\\lambda2 t} v _0(e^{-\\lambda t} x).$$\nFor any $s\\geq 0$,\n\\begin{eqnarray}\n\\int_{\\mathbb R} x^{2s } | v (t,x)|^2 \\, dx &=& \\int_{\\mathbb R} x^{2s } | v _0(e^{-\\lambda t} x)|^2 \\, d (e^{-\\lambda t}x) \\nonumber\\\\\n&=& e^{2\\lambda s t}\\int_{\\mathbb R} (e^{-\\lambda t} x)^{2s}| v _0(e^{-\\lambda t} x)|^2 \\, d (e^{-\\lambda t}x)\\nonumber\\\\\n&=& e^{2\\lambda s t}\\int_{\\mathbb R} x^{2s} | v _0(x)|^2 \\, dx.\\label{sobolev_hyper}\n\\end{eqnarray}\nand for $s\\in{\\mathbb N}$,\n\\begin{equation}\\label{sobolev_hyper-ds}\n\\int_{\\mathbb R} |\\partial_x^s v (t,x)|^2 \\, dx = e^{-2\\lambda s t} \\int_{\\mathbb R} | v _0^{(s)}(e^{-\\lambda t} x)|^2 \\, d (e^{-\\lambda t}x)\n= e^{-2\\lambda s t}\\int_{\\mathbb R} | v _0^{(s)}(x)|^2 \\, dx.\n\\end{equation}\nIn view of the equivalent definition (\\ref{norm_equiv}) of the ${{\\mathcal H}}^s-$norm given in Remark \\ref{remark_norm_equiv}, we get (\\ref{bounds_hyper}) by combining (\\ref{sobolev_hyper}) and (\\ref{sobolev_hyper-ds}).\\qed\n\n\n\n\n\n\n\n\\begin{Proposition}\\label{prop_para}\nFor the equation\n\\begin{equation}\\label{eq_para}\n{\\rm i}\\partial_t v (t,x)=-\\frac{\\kappa}{2} x^2\\cdot v (t,x), \\qquad \\kappa\\in{\\mathbb R},\n\\end{equation}\nwith non-vanishing initial condition $ v _0\\in {{\\mathcal H}}^s$, $s\\geq 0$, there exists constants $\\tilde c, \\tilde C>0$, depending on $s$, $\\kappa$ and $ v _0$, such that the solution satisfies\n\\begin{equation}\\label{sobo_para}\n\\tilde c |\\kappa|^s |t|^s \\leq \\| v (t,\\cdot)\\|_{s}\\leq \\tilde C |\\kappa|^s (1+ t^2)^\\frac{s}2,\\qquad \\forall \\ t\\in{\\mathbb R}.\n\\end{equation}\n\\end{Proposition}\n\\proof\nWith the initial condition $ v (0,\\cdot)= v 
_0(\\cdot)\\in {{\\mathcal H}}^s$, the solution to Eq. (\\ref{eq_para}) is\n$$ v (t,x)=e^{{\\rm i}\\frac{\\kappa}{2} x^2 t} v _0(x).$$\nFor any $s\\geq 0$,\n$$\\|x^s v (t,x)\\|_{L^2}=\\|x^s e^{{\\rm i}\\frac{\\kappa}{2} x^2 t} v _0(x)\\|_{L^2}=\\|x^s v _0(x)\\|_{L^2},$$\nand for $s\\in{\\mathbb N}$,\n\\begin{eqnarray*}\n\\partial_{x}^{s}( v (t,x))\n&=&\\partial_{x}^{s}(e^{{\\rm i}\\frac{\\kappa}{2} x^2 t} v _0(x))\\\\\n&=& \\sum_{\\alpha=0}^{s} C_s^\\alpha (e^{{\\rm i}\\frac{\\kappa}{2} x^2 t})^{(\\alpha)} v _0^{(s-\\alpha)}(x)\\\\\n&=& e^{{\\rm i}\\frac{\\kappa}{2} x^2 t} \\sum_{\\alpha=0}^{s} C_s^\\alpha \\left(({\\rm i}\\kappa t)^{\\alpha} x^{\\alpha}+P_{\\alpha}({\\rm i}\\kappa t,x)\\right) v _0^{(s-\\alpha)}(x) \\\\\n&=&({\\rm i} \\kappa t)^{s} x^{s} e^{{\\rm i}\\frac{\\kappa}{2} x^2 t}\\cdot v _0(x) +P_{s}({\\rm i}\\kappa t,x)e^{{\\rm i}\\frac{\\kappa}{2} x^2 t}\\cdot v _0(x)\\\\\n& &+ \\, e^{{\\rm i}\\frac{\\kappa}{2} x^2 t} \\sum_{\\alpha=0}^{s-1} C_s^\\alpha \\left(({\\rm i}\\kappa t)^{\\alpha} x^{\\alpha}+P_{\\alpha}({\\rm i}\\kappa t,x)\\right) v _0^{(s-\\alpha)}(x),\n\\end{eqnarray*}\nwhere, for $\\alpha\\geq 2$, $P_{\\alpha}({\\rm i}\\kappa t,x)$ is a polynomial of degree $\\alpha-2$ in $x$, with coefficients that are monomials in ${\\rm i}\\kappa t$ of degree $\\leq \\alpha-1$, and $P_{1}=P_0=0$. Then, there exists a constant $D>0$ such that\n$$\\left|\\|\\partial_{x}^{s}( v (t,x))\\|_{L^2}-|\\kappa t |^{s}\\|x^{s} v _0(x)\\|_{L^2}\\right|\\leq D (1+|\\kappa t |)^{s-1} \\| v _0(x)\\|_s.$$\nIn view of the equivalent definition (\\ref{norm_equiv}) of the norm in Remark \\ref{remark_norm_equiv},\nwe get (\\ref{sobo_para}). \\qed\n\n\\smallskip\n\n\\noindent{\\bf Proof of Theorem \\ref{thm_sobo}.}\nFrom Theorem \\ref{thm_redu}, we know that\nEq. 
(\\ref{eq_abs}) is conjugated to\n${\\rm i}\\partial_t v= G v$ by the transformation $u=U(\\omega t) v$, with $G=G(x,-{\\rm i}\\partial_x)$ the $t-$independent operator given in (\\ref{Ham_G_pr}).\n\nRecall Propositions \\ref{prop6} and \\ref{prop_para}.\nGiven $s\\geq 0$, for any non-vanishing $v_0\\in{{\\mathcal H}}^s$, for the three types of unitary equivalence of $G$, there are three different behaviours of the solution to the equation ${\\rm i}\\partial_t v= G v$ as $t\\to \\infty$.\n\\begin{itemize}\n\\item If $G$ is unitarily equivalent to $\\frac{\\sqrt{{\\rm det} B}}{2} H_0$ (under (\\ref{type1})), then\n$\\|e^{-{\\rm i}Gt}v_0\\|_s=O(1)$.\n\\item If $G$ is unitarily equivalent to $-\\frac{{\\rm i}\\sqrt{-{\\rm det} B}}{2} (x\\cdot \\partial_x +\\partial_x\\cdot x)$ (under (\\ref{type2})), then\n$\\|e^{-{\\rm i}Gt}v_0\\|_s=O(e^{\\sqrt{-{\\rm det} B} s t}).$\n\\item If $G$ is unitarily equivalent to $-\\frac{\\kappa}{2} x^2$ (under (\\ref{type3})), then $\\|e^{-{\\rm i}Gt}v_0\\|_s=O(|\\kappa|^st^s)$.\n\\end{itemize}\nMoreover, according to (\\ref{norm_U}), for $s\\geq 0$, there exist constants $c', C'>0$ such that\n$$ c'\\|v\\|_{s}\\leq \\|U(\\omega t)v\\|_{s}\\leq C'\\|v\\|_{s},\\qquad \\forall \\ v\\in {{\\mathcal H}}^s.$$\nHence Theorem \\ref{thm_sobo} is shown.\\qed\n\n\n\\section{Reducibility in classical Hamiltonian systems and Proof of Theorem \\ref{thm_redu}}\\label{sec_reduc}\n\n\\subsection{Conjugation between classical Hamiltonians}\n\nConsider two quadratic classical Hamiltonians\n$$h_j(\\omega t, x, \\xi)=\\frac12 \\big(a_j(\\omega t)x^2+ 2b_j(\\omega t) x\\cdot \\xi+ c_j(\\omega t) \\xi^2 \\big), \\qquad j=1,2,$$\nwhich can be presented as\n$$h_j(\\omega t, x,\\xi)=\\frac12\\left(\n \\begin{array}{c}\n x \\\\\n \\xi \\\\\n \\end{array}\n \\right)^{\\top}J A_j(\\omega t)\\left(\n \\begin{array}{c}\n x \\\\\n \\xi \\\\\n \\end{array}\n \\right), \\qquad j=1,2$$\nwith $J:=\\left(\\begin{array}{cc}\n0 & -1 \\\\1 & 0 \\end{array}\\right)$ 
and\n$A_j(\\cdot)=\\left(\\begin{array}{cc}\nb_j(\\cdot) & c_j(\\cdot) \\\\ -a_j(\\cdot) & -b_j(\\cdot) \\end{array}\\right)\\in C^{\\omega}({\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R}))$.\nThe corresponding equations of motion are given by\n$$x'=\\frac{\\partial h_j}{\\partial\\xi},\\quad \\xi'=-\\frac{\\partial h_j}{\\partial x},\\qquad j=1,2,$$\nwhich are the linear systems $(\\omega, \\, A_j)$:\n$$\\left(\\begin{array}{c}\n x(t) \\\\\n \\xi(t)\n \\end{array}\n\\right)'=A_j(\\omega t)\\left(\\begin{array}{c}\n x(t) \\\\\n \\xi(t)\n \\end{array}\n\\right).$$\n\n\n\n\\begin{Proposition}\\label{prop_ham_cl}\nIf the linear system $(\\omega, \\, A_1(\\cdot))$ is conjugated to $(\\omega, \\, A_2(\\cdot))$ by a time quasi-periodic ${\\rm SL}(2,{\\mathbb R})-$transformation, i.e.,\n\\begin{equation}\\label{conj_ode}\n\\frac{d}{dt} e^{Z(\\omega t)}=A_1(\\omega t)e^{Z(\\omega t)}-e^{Z(\\omega t)} A_2(\\omega t),\\qquad Z \\in C^\\omega(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R})),\n\\end{equation}\nthen the classical Hamiltonian $h_1(\\omega t,x,\\xi)$ is conjugated to $h_2(\\omega t,x,\\xi)$ via the time$-1$ flow $\\phi_{\\chi}^1(t,x,\\xi)$\n generated by the Hamiltonian\n \\begin{equation}\\label{chi_eZ}\n \\chi(\\omega t,x,\\xi)=\\frac12\\left(\n \\begin{array}{c}\n x \\\\\n \\xi \\\\\n \\end{array}\n \\right)^{\\top}J Z(\\omega t)\\left(\n \\begin{array}{c}\n x \\\\\n \\xi \\\\\n \\end{array}\n \\right).\n \\end{equation}\n\\end{Proposition}\n\n\\proof Note that the equation of motion of the classical Hamiltonian $h_1$ is the linear system $(\\omega, \\, A_1(\\cdot))$:\n$$ \\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\\right)'=A_1(\\omega t)\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\\right).$$\nIn view of (\\ref{conj_ode}), the transformation\n\\begin{equation}\\label{tramSL}\n \\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\\right)= e^{Z(\\omega t)}\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde \\xi\n \\end{array}\\right),\\qquad Z\\in 
C^{\\omega}(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R})),\n\\end{equation}\nconjugates $(\\omega, \\, A_1(\\cdot))$ to $(\\omega, \\, A_2(\\cdot))$. More precisely,\n\\begin{eqnarray*}\n\\left(\\begin{array}{c}\n\\tilde x \\\\\n\\tilde \\xi\n\\end{array}\\right)'&=&e^{-Z(\\omega t)}A_1(\\omega t)\\left(\\begin{array}{c}\nx \\\\\n\\xi\n\\end{array}\\right)-e^{-Z(\\omega t)}\\frac{d}{dt}e^{Z(\\omega t)}\\left(\\begin{array}{c}\n\\tilde x\\\\\n\\tilde \\xi\n\\end{array}\\right) \\\\\n&=& e^{-Z(\\omega t)}A_1(\\omega t)e^{Z(\\omega t)}\\left(\\begin{array}{c}\n{\\tilde x} \\\\\n{\\tilde \\xi}\n\\end{array}\\right)-e^{-Z(\\omega t)}\\frac{d}{dt}e^{Z(\\omega t)}\\left(\\begin{array}{c}\n\\tilde x\\\\\n\\tilde \\xi\n\\end{array}\\right)\\\\\n&=&A_2(\\omega t)\\left(\\begin{array}{c}\n\\tilde x\\\\\n\\tilde \\xi\n\\end{array}\\right),\n\\end{eqnarray*}\nfor which the corresponding Hamiltonian is $h_2(\\omega t,\\tilde x,\\tilde\\xi)$.\nAs in (3-35) of \\cite{BGMR2018}, the time$-1$ map between the two Hamiltonians is generated by (\\ref{chi_eZ}) since there are only quadratic terms in the Hamiltonian in our case.\\qed\n\n\n\n\n\\subsection{Proof of Theorem \\ref{thm_redu}}\n\n\nWe consider the classical Hamiltonian\n\\begin{eqnarray*}\nL(\\omega t,x,\\xi)&=&\\frac{a(\\omega t)}{2}x^2+\\frac{b(\\omega t)}{2}(x\\cdot\\xi+\\xi \\cdot x)+\\frac{c(\\omega t)}{2}\\xi^2 \\nonumber \\\\\n&=&\\frac12X^{\\top}J A(\\omega t)X,\\qquad X:= \\left(\n \\begin{array}{c}\n x \\\\\n \\xi \\\\\n \\end{array}\n \\right),\n\\end{eqnarray*}\nwith $a,b,c\\in C^{\\omega}({\\mathbb T}^d)$ given in Eq. 
(\\ref{eq_abs}), and $A:=\\left(\\begin{array}{cc}\n b & c \\\\\n -a & -b\n \\end{array}\n\\right)\\in C^{\\omega}({\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R}))$.\n\n\n\nBy the hypothesis of Theorem \\ref{thm_redu}, the linear system $(\\omega, \\, A(\\cdot))$ can be reduced to the constant system $(\\omega, \\, B)$, with $B=\\left(\\begin{array}{cc}\n B_{11} & B_{12} \\\\\n -B_{21} & -B_{11}\n \\end{array}\n\\right)\\in{\\rm sl}(2,{\\mathbb R})$, via finitely many transformations $(e^{Z_j})_{j=0}^K$ with $Z_j\\in C^{\\omega}(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R}))$. Hence the reduced classical Hamiltonian is\n$$g(x,\\xi)=\\frac12X^{\\top}J B X= \\frac{B_{21}}2 x^2+\\frac{B_{11}}{2}(x\\cdot \\xi+ \\xi\\cdot x)+\\frac{B_{12}}2 \\xi^2.$$\nBy Proposition \\ref{Prop_hami}, we see that $L^W(\\omega t, x, -{\\rm i}\\partial_x)$ is conjugated to\n\\begin{equation}\\label{Ham_G_pr}\nG(x, -{\\rm i}\\partial_x):=g^W(x, -{\\rm i}\\partial_x)=\\frac{B_{21}}2 x^2-\\frac{B_{11}}{2}(x\\cdot{\\rm i}\\partial_x+{\\rm i}\\partial_x\\cdot x)-\\frac{B_{12}}2 \\partial_x^2\n\\end{equation}\nvia the product of unitary (in $L^2({\\mathbb R})$) transformations\n$$\nU(\\omega t):= \\prod_{j=0}^K e^{-{\\rm i}\\chi^W_j(\\omega t,x,-{\\rm i}\\partial_x)}\n$$\nwhere $\\chi^W_j$ is the Weyl quantization of\n$$\\chi_j(\\omega t,x,\\xi)=\\frac12X^{\\top} J Z_j(\\omega t)X.$$\nThen (\\ref{norm_U}) is deduced from (\\ref{change_Sobolevnorm}) in Lemma \\ref{lem_Sobolev}.\nThe following diagram gives a straightforward explanation for the above proof.\n$$\n\\begin{array}{rcccl}\n& X'=A(\\omega t)X &\\stackrel{\\prod_{j=0}^K e^{Z_j(\\omega t)}}{\\longrightarrow} & X'=BX & \\ \\ Z_j\\in C^\\omega(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R})) \\\\\n & & & & \\\\\n & \\big\\updownarrow & & \\big\\updownarrow & \\\\\n & & & & \\\\\n & L(\\omega t)=\\frac12X^{\\top} J A(\\omega t)X & \\stackrel{ \\Phi^1_{\\chi_0(\\omega t)}\\circ \\cdots \\circ \\Phi^1_{\\chi_K(\\omega t)}}{\\longrightarrow} & 
g=\\frac12X^{\\top} J B X & \\ \\ \\chi_j =\\frac12X^{\\top}J Z_j X\\\\\n & & & & \\\\\n & \\big\\updownarrow & & \\big\\updownarrow & \\\\\n & & & & \\\\\n & {\\rm i}\\partial_t u=L^W(\\omega t)u & \\stackrel{\\prod_{j=0}^{K} e^{-{\\rm i} \\chi_j^W(\\omega t)}}{\\longrightarrow} & {\\rm i}\\partial_t u = g^W u &\n \\end{array}\n$$\n\nIf (\\ref{type1}) holds, i.e., ${\\rm det}B>0$ or $B=\\left(\\begin{array}{cc}\n 0 & 0 \\\\\n 0 & 0\n \\end{array}\\right)$,\nthen there exists $C_B\\in {\\rm sl}(2,{\\mathbb R})$ such that\n\\begin{equation}\\label{elliptic}\nB=e^{C_B}\\left(\\begin{array}{cc}\n 0 & \\sqrt{{\\rm det}B} \\\\\n -\\sqrt{{\\rm det}B} & 0\n \\end{array}\\right)e^{-C_B}.\n\\end{equation}\nIf (\\ref{type2}) holds, i.e., ${\\rm det}B<0$, then there exists $C_B\\in {\\rm sl}(2,{\\mathbb R})$ such that\n\\begin{equation}\\label{hyerbolic}\nB=e^{C_B}\\left(\\begin{array}{cc}\n \\sqrt{-{\\rm det}B} & 0 \\\\\n 0 & -\\sqrt{-{\\rm det}B}\n \\end{array}\\right)e^{-C_B}.\n\\end{equation}\nIf (\\ref{type3}) holds,\nthen there exists $C_B\\in {\\rm sl}(2,{\\mathbb R})$ such that\n\\begin{equation}\\label{parapolic}\nB=e^{C_B}\\left(\\begin{array}{cc}\n 0 & 0 \\\\\n \\kappa & 0\n \\end{array}\\right)e^{-C_B}.\n\\end{equation}\nTherefore, for Eq. (\\ref{eq_abs}), the three types of unitary equivalence of $G=G(x,-{\\rm i}\\partial_x)$ are shown by (\\ref{elliptic})$-$(\\ref{parapolic}) respectively. \\qed\n\n\n\n\n\n\n\n\n\n\\section{Proof of Theorem \\ref{thm_Schro} and \\ref{thm_Schro_sobolev}}\\label{sec_proof}\n\n\nIn view of Theorem \\ref{thm_redu}, to show the reducibility of Eq. 
(\\ref{eq_Schrodinger}), it is sufficient to show the reducibility of the corresponding ${\\rm sl}(2,{\\mathbb R})-$linear system.\n\nFor $E\\in{{\\mathcal I}}$, the symbol of the quantum Hamiltonian (\\ref{eq_Schrodinger}) is\n$$h_E(\\omega t, x,\\xi)=\\frac{\\nu(E)}{2}(\\xi^2+x^2)+W(E,\\omega t,x,\\xi)$$\nwhich corresponds to the quasi-periodic linear system $(\\omega, \\, A_0+F_0)$\n\\begin{equation}\\label{linear_system_pr}\n\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right)'=\\left[\\left(\\begin{array}{cc}\n 0 & \\nu(E) \\\\\n -\\nu(E) & 0\n \\end{array}\\right)\n +\\left(\\begin{array}{cc}\n b(E,\\omega t) & c(E,\\omega t) \\\\\n -a(E,\\omega t) & -b(E,\\omega t)\n \\end{array}\\right)\\right] \\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right),\n\\end{equation}\nwhere, for every $E\\in{{\\mathcal I}}$,\n\\begin{align*}\nA_0(E):= & \\left(\\begin{array}{cc}\n 0 & \\nu(E) \\\\\n -\\nu(E) & 0\n \\end{array}\\right)\\in {\\rm sl}(2,{\\mathbb R}), \\\\\n F_0(E,\\cdot):=& \\left(\\begin{array}{cc}\n b(E,\\cdot) & c(E,\\cdot) \\\\\n -a(E,\\cdot) & -b(E,\\cdot)\n \\end{array}\\right)\\in C^{\\omega}_r({\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))\n\\end{align*}\nwith $|\\partial_E^m F_0|_r<\\varepsilon_0$, $m=0,1,2$, sufficiently small.\n\n\nThe reducibility of the linear system (\\ref{linear_system_pr}) was established by Eliasson \\cite{Eli1992} (see also \\cite{HA} for results about ${\\rm SL}(2,{\\mathbb R})$-cocycles). We summarise the needed results in the following proposition. To make the paper as self-contained as possible, we give a short proof without adding too many details on known facts.\nSince every quantity depends on $E$, we do not always write this dependence explicitly in the statement of the proposition.\n\n\nBefore stating the precise result, we introduce the concept of rotation number. 
The {\\it rotation number}\nof quasi-periodic ${\\rm sl}(2,{\\mathbb R})-$linear system (\\ref{linear_system_pr})\n is defined as\n$$\\rho(E)=\\rho(\\omega, \\, A_0(E)+F(E,\\omega t))=\\lim_{t\\to\\infty}\\frac{\\arg(\\Phi_E^t X)}{t},\\qquad \\forall \\ X\\in \\mathbb{R}^2\\setminus\\{0\\}$$\nwhere $\\Phi_E^t$ is the basic\nmatrix solution and $\\arg$ denotes the angle. The rotation number\n$\\rho$ is well-defined and it does not depend on $X$\n\\cite{JM82}.\n\n\n\\begin{Proposition}\\label{prop_eliasson} There exists $\\varepsilon_*=\\varepsilon_*(r,\\gamma,\\tau,d,l_1,l_2)>0$ such that if\n\\begin{equation}\\label{small_F_0}\n\\max_{m=0,1,2}|\\partial_E^m F_0|_r=:\\varepsilon_0<\\varepsilon_*,\n\\end{equation}\nthen the following holds for the quasi-periodic linear system $(\\omega, \\, A_0+F_0)$.\n\\begin{enumerate}\n \\item [(1)] For a.e. $E\\in{{\\mathcal I}}$, $(\\omega, \\, A_0+F_0(\\cdot))$ is reducible. More precisely,\n there exist $B\\in{\\rm sl}(2,{\\mathbb R})$ and $Z_j\\in C^\\omega(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R}))$, $j=0,1,\\cdots,K$, such that\n \\begin{equation}\\label{reducibility_sl2R}\n \\frac{d}{dt}\\left(\\prod_{j=0}^K e^{Z_j(\\omega t)}\\right)=\\left(A_0+F_0(\\omega t)\\right)\\left(\\prod_{j=0}^K e^{Z_j(\\omega t)}\\right)-\\left(\\prod_{j=0}^K e^{Z_j(\\omega t)}\\right)B.\n \\end{equation}\n \\item [(2)] The rotation number $\\rho=\\rho(E)$ is monotonic on ${\\mathcal I}$. 
For any $k\\in{\\mathbb Z}^d$,\n $$\\tilde\\Lambda_k:=\\left\\{E\\in\\overline{{\\mathcal I}}:\\rho(E)=\\frac{\\langle k,\\omega\\rangle}{2}\\right\\} \\ \\footnote{$\\tilde\\Lambda_k$ can be empty for some $k\\in{\\mathbb Z}^d$ if the closed interval $\\rho^{-1}\\left(\\frac{\\langle k,\\omega\\rangle}{2}\\right)$ does not intersect ${{\\mathcal I}}$.} $$ is a closed interval, and we have\n\\begin{equation}\\label{measure_esti}\n\\sum_{k\\in{\\mathbb Z}^d}{\\rm Leb}(\\tilde\\Lambda_k)<\\varepsilon_0^{\\frac{1}{40}}.\n\\end{equation}\n \\item [(3)] For every $E\\in \\tilde\\Lambda_k=:[a_k,b_k]$, $(\\omega, \\, A_0+F_0(\\cdot))$ is reducible and the matrix $B\\in {\\rm sl}(2,{\\mathbb R})$ in (\\ref{reducibility_sl2R}) satisfies\n \\begin{itemize}\n \\item if $a_k=b_k$, then $B=\\left(\\begin{array}{cc}\n 0 & 0 \\\\\n 0 & 0\n \\end{array}\\right)$;\n \\item if $a_k<b_k$, then, for $E\\in\\{a_k, \\, b_k\\}$, $B$ is conjugated in ${\\rm sl}(2,{\\mathbb R})$ to $\\left(\\begin{array}{cc}\n 0 & 0 \\\\\n \\kappa & 0\n \\end{array}\\right)$ for some $\\kappa\\neq 0$, while, for $E\\in \\, ]a_k, b_k[$, ${\\rm det}B<0$.\n \\end{itemize}\n \\item [(4)] For a.e. $E\\in{{\\mathcal I}}\\setminus\\bigcup_{k\\in{\\mathbb Z}^d}\\tilde\\Lambda_k$, the matrix $B$ in (\\ref{reducibility_sl2R}) satisfies ${\\rm det}B>0$.\n\\end{enumerate}\n\\end{Proposition}\n\n\\proof Since $\\nu$ is a strictly monotonic real-valued function of $E\\in{{\\mathcal I}}$ and $|\\nu'|\\geq l_1$, $|\\nu''|\\leq l_2$, (\\ref{small_F_0}) implies that\n$|\\partial^m_E F_0(\\nu^{-1}(E),\\cdot)|_r$, $m=0,1,2$, is also small enough.\n Hence, to prove the above statements, we can simply consider the case where $\\nu(E)=E\\in {{\\mathcal I}}= {\\mathbb R}$ and then obtain Proposition \\ref{prop_eliasson} by replacing $E$ by $\\nu(E)$.\n\n\n\n\n\n\n\n\n\\smallskip\n\n\\noindent {\\it Proof of (1).}\nThe almost reducibility has already been shown by Eliasson \\cite{Eli1992} for every $E\\in{\\mathbb R}$.\nIndeed, if $\\max_{m=0,1,2}|\\partial_E^m F_0|_r$ is small enough (depending on $r,\\gamma,\\tau,d$), then there exist sequences $(Y_j)_{j\\in{\\mathbb N}}\\subset C^\\omega(2{\\mathbb T}^d, {\\rm SL}(2,{\\mathbb R}))$, $(A_j)_{j\\in{\\mathbb N}}\\subset {\\rm sl}(2,{\\mathbb R})$, and $(F_j)_{j\\in{\\mathbb N}}\\subset C^\\omega(2{\\mathbb T}^d, {\\rm sl}(2,{\\mathbb R}))$, all of which are piecewise $C^2$ w.r.t. 
$E$,\n with $\\max_{m=0,1,2}|\\partial^m_E F_j|_{{\\mathbb T}^d}<\\varepsilon_j:=\\varepsilon_0^{(1+\\sigma)^j}$ for $\\sigma=\\frac1{33}$, such that\n$$\\frac{d}{dt}Y_{j}(\\omega t)=\\left(A_j+F_j(\\omega t)\\right)Y_{j}(\\omega t)-Y_{j}(\\omega t) \\left(A_{j+1}+F_{j+1}(\\omega t)\\right).$$\nMore precisely, at the $j-$th step, let $\\pm{\\rm i}\\xi_j\\in{\\mathbb R}\\cup{\\rm i}{\\mathbb R}$ denote the two eigenvalues of $A_j$, and set\n$$N_j:=\\frac{2\\sigma}{r_j-r_{j+1}}\\ln\\left(\\frac{1}{\\varepsilon_j}\\right)$$\nwith $(r_j)_{j\\in{\\mathbb N}}$ a decreasing sequence of positive numbers such that $r_j-r_{j+1}\\geq 2^{-(j+1)}r$ for each $j$. Then:\n\\begin{itemize}\n \\item (non-resonant case) if for every $n\\in{\\mathbb Z}^d$ with $0<|n|\\leq N_j$, we have\n \\begin{equation}\\label{non_resonant}\n \\left|2\\xi_j-\\langle n,\\omega\\rangle\\right|\\geq \\varepsilon_j^{\\sigma},\n \\end{equation}\n then $Y_{j}=e^{\\tilde Z_{j}}$ for some $\\tilde Z_{j}\\in C^{\\omega}(2{\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))$ with $|\\tilde Z_{j}|_{2{\\mathbb T}^d}<\\varepsilon_j^{\\frac23}$, and $|A_{j+1}-A_j|<\\varepsilon_j^{\\frac23}$;\n \\item (resonant case) if for some $n_j\\in{\\mathbb Z}^d$ with $0<|n_j|\\leq N_j$, we have\n \\begin{equation}\\label{resonant}\n \\left|2\\xi_j-\\langle n_j,\\omega\\rangle\\right|< \\varepsilon_j^{\\sigma},\n \\end{equation}\n then $Y_{j}(\\cdot)=e^{\\frac{\\langle n_j ,\\cdot\\rangle}{2\\xi_j}A_j} e^{\\tilde Z_{j}}$ for some $\\tilde Z_{j}\\in C^{\\omega}(2{\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))$ with $|\\tilde Z_{j}|_{2{\\mathbb T}^d}<\\varepsilon_j^{\\frac23}$ and $|A_{j+1}|<\\varepsilon_j^{\\frac{\\sigma}2}$.\n\\end{itemize}\nAs $j$ goes to $\\infty$, the time-dependent part $F_{j}$ tends to vanish. Hence $(\\omega,\\, A_0(E)+F_0)$ is almost reducible. 
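The quantitative behaviour of this scheme can be illustrated numerically. The sketch below uses illustrative values of $\varepsilon_0$ and $r$ (only $\sigma=\frac1{33}$ is taken from the text): it shows that $\varepsilon_j$ decays super-exponentially, so that $\sum_j\varepsilon_j^{2/3}$ converges — which is what makes the infinite product of the conjugations $Y_j=e^{\tilde Z_j}$ convergent — while the truncation orders $N_j$ grow along the scheme.

```python
import math

# Hedged numerical sketch of the KAM scheme's quantitative behaviour.
# eps0 and r are illustrative choices, not the paper's constants;
# sigma = 1/33 is as in the text.
eps0, sigma, r = 1e-3, 1.0 / 33.0, 1.0

# eps_j = eps0^((1+sigma)^j): super-exponential decay.
eps = [eps0 ** ((1 + sigma) ** j) for j in range(40)]

# Truncation orders N_j = 2*sigma/(r_j - r_{j+1}) * ln(1/eps_j),
# with the slowest admissible decay r_j - r_{j+1} = 2^{-(j+1)} r.
N = [2 * sigma / (2 ** -(j + 1) * r) * math.log(1 / eps[j]) for j in range(40)]

# Because eps_j decays super-exponentially, sum_j eps_j^(2/3) is finite:
# this is the bound controlling the product of the Y_j = exp(Z_j),
# |Z_j| < eps_j^(2/3).
total = sum(e ** (2 / 3) for e in eps)

print(total)                 # finite: the product of conjugations converges
print(N[0] < N[5] < N[10])   # True: the truncation order grows
```
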
For the details, we refer to Lemma 2 of \\cite{Eli1992} and its proof.\n\nIn view of Lemma 3 b) of \\cite{Eli1992}, if the rotation number $\\rho(E)$ of $(\\omega, \\, A_0(E)+F_0)$ is Diophantine or rational w.r.t. $\\omega$, which corresponds to a.e. $E\\in{\\mathbb R}$, then the resonant case occurs only finitely many times.\nTherefore, for a.e. $E\\in{\\mathbb R}$, there exists a large enough $J_*\\in{\\mathbb N}^*$, depending on $E$, such that\n\\begin{equation}\\label{J_large}\nY_{j}=e^{\\tilde Z_{j}} \\ \\ {\\rm with} \\ \\ |\\tilde Z_{j}|_{2{\\mathbb T}^d}<\\varepsilon^{\\frac23}_{j},\\qquad \\forall \\ j\\geq J_*.\n\\end{equation}\nThis implies that $\\prod_{j=0}^\\infty|Y_j|_{2{\\mathbb T}^d}$ is convergent.\nAs explained in the proof of Lemma 3.5 of \\cite{BGMR2018}, (\\ref{J_large}) also implies that there exists $S\\in C^{\\omega}(2{\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))$ such that\n$\\prod_{j=J_*}^\\infty Y_{j}=e^S$,\nsince $\\varepsilon_0$ is sufficiently small.\nHence (\\ref{reducibility_sl2R}) is shown, i.e., the reducibility is realized via finitely many transformations of the form $e^{Z_j(\\omega t)}$ with $Z_j\\in C^{\\omega}(2{\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))$.\n\n\n\n\n\n\n\n\\\n\n\\noindent {\\it Proof of (2).}\nFor $k\\in{\\mathbb Z}^d$, $\\tilde\\Lambda_k$ is obtained after several resonant KAM steps, say $j_1$, $\\cdots$, $j_L$, where $n_{j_i}\\in{\\mathbb Z}^d$ with $0<|n_{j_i}|\\leq N_{j_i}$, $i=1,\\cdots,L$, satisfies\n$$ \\left|2\\xi_{j_i}-\\langle n_{j_i},\\omega\\rangle\\right|< \\varepsilon_{j_i}^{\\sigma},$$\nand $k=n_{j_1}+\\cdots + n_{j_L}$. We will show that\n\\begin{equation}\\label{k_n_j_L}\n\\frac{10|n_{j_L}|}{11}\\leq |k|\\leq \\frac{12|n_{j_L}|}{11}.\n\\end{equation}\n Assume that $L\\geq 2$ (otherwise we have already $k=n_{j_L}$). 
After the $(j_{i-1}+1)-$th step, $i=2,\\cdots,L$, the eigenvalues $\\pm{\\rm i}\\xi_{j_{i-1}+1}$ satisfy $|\\xi_{j_{i-1}+1}|<2\\varepsilon_{j_{i-1}}^{\\frac\\sigma2}$. On the other hand, before the $(j_{i}+1)-$th step, the resonant condition (\\ref{resonant}) implies that the eigenvalues $\\pm{\\rm i}\\xi_{j_{i}}$ satisfy\n$$|2\\xi_{j_{i}}-\\langle n_{j_i},\\omega\\rangle|\\leq \\varepsilon_{j_{i}}^\\sigma.$$\nSince the steps between these two successive resonant steps are all non-resonant, and $\\omega\\in {\\rm DC}_{d}(\\gamma,\\tau)$,\nwe have\n$$\\frac{\\gamma}{|n_{j_i}|^\\tau}\\leq|\\langle n_{j_i},\\omega\\rangle|\\leq 2|\\xi_{j_{i-1}+1}|+2\\varepsilon^{\\frac13}_{j_{i-1}+1}+\\varepsilon_{j_{i}}^\\sigma<3\\varepsilon_{j_{i-1}}^{\\frac\\sigma2},$$\nwhich implies that\n$$|n_{j_i}|>\\left(\\frac\\gamma3\\right)^{\\frac{1}{\\tau}}\\varepsilon^{-\\frac{\\sigma}{2\\tau}}_{j_{i-1}}>12 N_{j_{i-1}}\\geq 12|n_{j_{i-1}}|.$$\nHence $|k-n_{j_L}|\\leq\\sum_{i=1}^{L-1}|n_{j_i}|<|n_{j_L}|\\sum_{m\\geq 1}12^{-m}=\\frac{|n_{j_L}|}{11}$, and we get (\\ref{k_n_j_L}).\n\n\n\n$\\tilde\\Lambda_k$ is first formed at the $j_L-$th step, with the initial measure smaller than $\\varepsilon_{j_L}^{2\\sigma}$.\nSince all the subsequent steps are non-resonant, the measure of $\\tilde\\Lambda_k$ varies by at most $\\varepsilon_{j_L}^{2\\sigma}$. 
Then, for $\\varsigma:=\\frac{\\ln(1+\\sigma)}{\\ln(8+8\\sigma)}$, we have\n$$\n{\\rm Leb}(\\tilde\\Lambda_k)< 2\\varepsilon_{j_L}^{2\\sigma}< 2\\varepsilon_0^{\\sigma} e^{-\\left(\\frac{12}{11}\\right)^\\varsigma N_{j_L}^\\varsigma}\\leq 2\\varepsilon_0^{\\sigma} e^{-\\left(\\frac{12}{11}\\right)^\\varsigma |n_{j_L}|^\\varsigma}.\n$$\nIndeed, recalling that $r_j-r_{j+1}\\geq 2^{-(j+1)}r$ for every $j$, we have\n\\begin{eqnarray*}\n\\varepsilon_{j_L} &=& \\exp\\{-|\\ln\\varepsilon_0|(1+\\sigma)^{j_L}\\} \\\\\n &=& \\exp\\left\\{- \\frac{|\\ln\\varepsilon_0|^{1-\\varsigma}(1+\\sigma)^{j_L(1- \\varsigma)}(r_{j_L}-r_{j_L+1})^{\\varsigma}}{(2\\sigma)^{\\varsigma}} N_{j_L}^\\varsigma\\right\\}\\\\\n &\\leq&\\exp\\left\\{- \\frac{|\\ln\\varepsilon_0|^{1-\\varsigma} r^\\varsigma}{(4\\sigma)^{\\varsigma}} \\left(\\frac{(1+\\sigma)^{1- \\varsigma}}{2^\\varsigma}\\right)^{j_L} N_{j_L}^\\varsigma\\right\\}\\\\\n &<&\\exp\\left\\{-\\left(\\frac{12}{11}\\right)^\\varsigma \\frac{N_{j_L}^\\varsigma}{\\sigma}\\right\\},\n\\end{eqnarray*}\nsince $\\varepsilon_0$ is small enough and\n$$\\frac{(1+\\sigma)^{1- \\varsigma}}{2^\\varsigma}=\\exp\\left\\{\\frac{\\ln(1+\\sigma)}{\\ln(8+8\\sigma)}\\left(\\ln 8-\\ln 2\\right)\\right\\}>1.$$\nTherefore, by (\\ref{k_n_j_L}), we get\n${\\rm Leb}(\\tilde\\Lambda_k)<2\\varepsilon_0^{\\sigma} e^{- |k|^\\varsigma}$,\nwhich implies (\\ref{measure_esti}). For detailed proof of the measure estimate of $\\tilde\\Lambda_k$, we can also refer to Corollary 1 of \\cite{HA}.\n\n\\\n\n\n\\noindent {\\it Proof of (3) and (4).} It can be deduced from Lemma 5 of \\cite{Eli1992}. \\qed\n\n\n\n\\\n\n\\noindent{\\bf Proof of Theorem \\ref{thm_Schro} and \\ref{thm_Schro_sobolev}.} Theorem \\ref{thm_Schro_sobolev} can be seen as a corollary of Theorem \\ref{thm_sobo}.\nAccording to Theorem \\ref{thm_redu}, the reducibility of Eq. (\\ref{eq_Schrodinger}) for a.e. 
$E\\in{{\\mathcal I}}$ is deduced from Proposition \\ref{prop_eliasson}-(1).\nLet $\\{\\Lambda_j\\}_{j\\in{\\mathbb N}}$ be the intervals $\\tilde\\Lambda_k$ intersecting ${{\\mathcal I}}$ and let\n$${{\\mathcal O}}_{\\varepsilon_0}:=\\bigcup_{j\\in{\\mathbb N}}\\Lambda_j=\\bigcup_{k\\in{\\mathbb Z}^d}\\tilde\\Lambda_k.$$\nProposition \\ref{prop_eliasson}-(2) gives the measure estimate of ${{\\mathcal O}}_{\\varepsilon_0}$.\nThe unitary equivalences of the reduced quantum Hamiltonian follow from Proposition \\ref{prop_eliasson}-(3) and (4). Hence Theorem \\ref{thm_Schro} is shown. \\qed\n\n\\section{Proof of Theorem \\ref{thm_example_1} -- \\ref{thm_AMO}}\\label{sec_pr_examples}\n\n\nIn this section, we show that the measure of the subset ${{\\mathcal O}}_{\\varepsilon_0}$ is positive for the equations (\\ref{example_1}) -- (\\ref{eq_AMO}),\nwhich implies the growth of Sobolev norms.\n\n\\subsection{Proof of Theorem \\ref{thm_example_1}}\n\nFor Eq. (\\ref{example_1}), $E\\in{\\mathbb R}$, the corresponding linear system is\n$$\n\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right)'=\\left[\\left(\\begin{array}{cc}\n 0 & E \\\\\n -E & 0\n \\end{array}\\right)\n +\\left(\\begin{array}{cc}\n b(\\omega t) & c(\\omega t) \\\\\n -a(\\omega t) & -b(\\omega t)\n \\end{array}\\right)\\right] \\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right).$$\nIn view of Lemma 5 of \\cite{Eli1992}, for ``generic\" $a,b,c\\in C^\\omega({\\mathbb T}^d,{\\mathbb R})$, there is at least one non-degenerate $\\tilde\\Lambda_k$, $k\\in{\\mathbb Z}^d$.\nMore precisely, at a resonant step of the KAM scheme described in the proof of Proposition \\ref{prop_eliasson}-(1),\nthe condition (\\ref{resonant}) defines a resonant interval of $E$, on which the two eigenvalues $\\pm{\\rm i}\\xi_j$ of $A_j$ are purely imaginary since $\\xi_j$ is bounded from below. 
After this resonant step, the two new eigenvalues $\\pm{\\rm i}\\xi_{j+1}$ of $A_{j+1}$ can be real or still purely imaginary for $E$ in this resonant interval, since $|\\xi_{j+1}|$ is close to zero.\nWe say that $a,b,c\\in C^\\omega({\\mathbb T}^d,{\\mathbb R})$ are {\\it generic} if, for at least one resonant step in the KAM scheme, the two new eigenvalues $\\pm{\\rm i}\\xi_{j+1}$ become real on a sub-interval of the resonant interval.\n\n\n\n\n\n\n\\subsection{Proof of Theorem \\ref{thm_example_schro}}\n\n\n\nFor Eq. (\\ref{eq_Schrodinger-example}) with $E\\in{{\\mathcal I}}=[E_0,E_1]$, where $E_0>0$ is large enough and $E_1<\\infty$, Theorem \\ref{thm_Schro} and \\ref{thm_Schro_sobolev} hold.\nThe corresponding linear system $(\\omega, \\, A_0+F_0)$ of Eq. (\\ref{eq_Schrodinger-example}) is\n$$\n\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right)'=\\left[\\left(\\begin{array}{cc}\n 0 & \\sqrt{E} \\\\\n -\\sqrt{E} & 0\n \\end{array}\\right)\n +\\frac{q(\\omega t)}{2\\sqrt{E}}\\left(\\begin{array}{cc}\n -1 & -1 \\\\\n 1 & 1\n \\end{array}\\right)\\right] \\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right).$$\nThen, through the change of variables\n$$\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\\right)=\\frac{1}{2\\sqrt{E}}\\left(\\begin{array}{cc}\n \\sqrt{E} & -1 \\\\\n \\sqrt{E} & 1\n \\end{array}\\right)\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\\right),$$\n$(\\omega, \\, A_0+F_0)$ is conjugated to\n$$\n\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\\right)'=C^E_q(\\omega t)\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\\right):=\\left(\\begin{array}{cc}\n 0 & 1 \\\\\n -E+q(\\omega t) & 0\n \\end{array}\\right)\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\\right).\n$$\nThe quasi-periodic linear system $(\\omega, \\, C^E_q(\\cdot))$ corresponds exactly to the eigenvalue problem of the quasi-periodic continuous Schr\\\"odinger 
operator ${{\\mathcal L}}_{\\omega, q}$:\n $$({{\\mathcal L}}_{\\omega, q}y)(t)=-y''(t)+q(\\omega t) y(t).$$\nBy the Gap Labeling Theorem \\cite{JM82}, if $\\tilde\\Lambda_k$ is not empty for $k\\in{\\mathbb Z}^d$, then it is indeed a ``spectral gap\" of ${{\\mathcal L}}_{\\omega, q}$ intersecting $[E_0,E_1]$, i.e., a connected component of $[E_0,E_1]\\setminus\\Sigma_{\\omega, q}$ with $\\Sigma_{\\omega, q}$ denoting the spectrum of ${{\\mathcal L}}_{\\omega, q}$.\nIn view of Theorem C of \\cite{Eli1992}, for a generic potential $q$ (in the $|q|_r$-topology), for $E_0>0$ large enough, $[E_0,\\infty[ \\, \\cap \\, \\Sigma_{\\omega, q}$ is a Cantor set.\nHence there are infinitely many $\\tilde\\Lambda_k$'s satisfying ${\\rm Leb}(\\tilde\\Lambda_k)>0$.\n\n\n\\subsection{Proof of Theorem \\ref{thm_AMO}}\n\nFor Eq. (\\ref{eq_AMO}) with $\\nu(E)=\\cos^{-1}(-\\frac{E}{2})$, $E\\in[-2+\\delta,2-\\delta]$ with $\\delta>0$ a sufficiently small numerical constant (e.g. $\\delta:=10^{-6}$), we can apply Theorem \\ref{thm_Schro} and \\ref{thm_Schro_sobolev} if $a,b,c:[-2+\\delta,2-\\delta]\\times{\\mathbb T}^2\\to{\\mathbb R}$ are small enough as assumed in Theorem \\ref{thm_Schro}.\n\n\n\nConsider the quasi-periodic Schr\\\"odinger cocycle $(\\alpha, \\, S_E^\\lambda)$\n$$\nX_{n+1} =S_E^\\lambda(\\theta+n\\alpha) X_n= \\left[\\left(\\begin{array}{cc}\n -E & -1 \\\\\n 1 & 0\n \\end{array}\\right)+\\left(\\begin{array}{cc}\n 2\\lambda \\cos(\\theta+n\\alpha) & 0 \\\\\n 0 & 0\n \\end{array}\\right)\\right] X_n,\n$$\nwith $\\alpha\\in{\\rm DC}_1(\\gamma,\\tau)$ and $|\\lambda|$ small enough. It can be written as\n$$X_{n+1}=e^{B(E)}e^{G(E,\\theta+n\\alpha)} X_n,$$\nfor $e^{B(E)}:=\\left(\\begin{array}{cc}\n -E & -1 \\\\\n 1 & 0\n \\end{array}\\right)$ and some $G(E,\\cdot)\\in{\\rm sl}(2,{\\mathbb R})$.\nThis cocycle is related to the almost-Mathieu operator $H_{\\lambda,\\alpha,\\theta}$ on $\\ell^2({\\mathbb 
Z})$:\n$$(H_{\\lambda,\\alpha,\\theta}\\psi)_n=-(\\psi_{n+1}+\\psi_{n-1})+2\\lambda \\cos(\\theta+n\\alpha)\\psi_n,\\qquad n\\in{\\mathbb Z}.$$\nIt is known that its spectrum, denoted by $\\Sigma_{\\lambda,\\alpha}$, is a Cantor set \\cite{AvilaJito1}; this is the well-known Ten Martini Problem. In fact, Avila-Jitomirskaya\n \\cite{AvilaJito} further showed that all spectral gaps are ``open\", which means that, for every $k\\in{\\mathbb Z}$,\n$$\\tilde\\Lambda_k:=\\left\\{E\\in{\\mathbb R}: \\tilde\\rho_{(\\alpha, \\, S_E^\\lambda)}=\\frac{k\\alpha}{2} \\mod {\\mathbb Z} \\right\\}$$\nhas positive measure. Indeed, the size of $\\tilde\\Lambda_k$ decays exponentially with respect to $|k|$, as was shown in \\cite{LYZZ}.\nHere, $\\tilde\\rho_{(\\alpha, \\, S_E^\\lambda)}$ is the fibered rotation number of the cocycle $(\\alpha, \\, S_E^\\lambda)$.\nRecall that, for any continuous $A:{\\mathbb T}^d \\to {\\rm SL}(2,{\\mathbb R})$ homotopic to the identity, the \\textit{fibered rotation number} of\n$(\\alpha,A)$ is defined as\n\\begin{equation*}\n\\tilde\\rho(\\alpha,A)=\\int \\psi \\, d \\tilde{\\mu} \\mod {\\mathbb Z}\n\\end{equation*}\nwhere $\\psi:{\\mathbb T}^{d+1} \\to {\\mathbb R}$ is a lift of $A$ such that\n$$\nA(x) \\cdot \\left (\\begin{matrix} \\cos 2 \\pi y \\\\ \\sin 2 \\pi y \\end{matrix} \\right )=u(x,y)\n\\left (\\begin{matrix} \\cos 2 \\pi (y+\\psi(x,y)) \\\\ \\sin 2 \\pi (y+\\psi(x,y)) \\end{matrix} \\right),\n$$\nand $\\tilde{\\mu}$ is an\ninvariant probability measure of $(x,y) \\mapsto (x+\\alpha,y+\\psi(x,y))$ (according to \\cite{Her}, it does not depend on the choices of $\\psi, \\tilde\\mu$).\n\n\nNote that $(\\alpha, \\, S_E^\\lambda)$ is a discrete dynamical system; however, with the help of the Local Embedding Theorem (Theorem \\ref{localemb-sl}), we can embed the cocycle $(\\alpha, \\, S_E^\\lambda)$ into a quasi-periodic linear system $(\\omega, \\, B(E)+F(E,\\cdot))$. 
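As a numerical illustration of the fibered rotation number (a hedged sketch for the constant case $\lambda=0$ only, not used in the proof): the free cocycle is the constant matrix $\left(\begin{smallmatrix} -E & -1 \\ 1 & 0 \end{smallmatrix}\right)$, which for $E\in(-2,2)$ is conjugate to the rotation by $\nu(E)=\cos^{-1}(-\frac E2)$, so its fibered rotation number is $\nu(E)/2\pi$. One can estimate it by tracking the winding of a vector orbit:

```python
import math

def fibered_rotation_number(E, n_iter=20000):
    """Estimate the fibered rotation number of the constant cocycle
    S = [[-E, -1], [1, 0]] by tracking the winding of a vector orbit.
    (Illustrative numerics; this agrees with the invariant-measure
    definition in the constant, conjugate-to-rotation case.)"""
    x, y = 1.0, 0.0
    prev = math.atan2(y, x)
    total = 0.0
    for _ in range(n_iter):
        x, y = -E * x - y, x          # apply S to the vector (x, y)
        ang = math.atan2(y, x)
        d = ang - prev
        # unwrap so each angular increment lies in (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
        prev = ang
    return total / (2 * math.pi * n_iter)

# At E = 0 the matrix is the rotation by pi/2, so the estimate is 1/4.
print(fibered_rotation_number(0.0))   # 0.25
```

For other $E\in(-2,2)$ the estimate approaches $\nu(E)/2\pi$, e.g. $1/3$ at $E=1$.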
\nFor an individual cocycle, the Local Embedding Theorem was already shown in \\cite{YZ2013}.\nNevertheless, the crucial point here is that we need a parameterized version of the Local Embedding Theorem, which means that the embedded system $(\\omega, \\, B(E)+F(E,\\cdot))$ should depend smoothly on $E$.\n\n\n\nTo state the parameterized version of the Local Embedding Theorem, let us first introduce some more notation.\n Given $f \\in C^2({\\mathcal I})$, define\n$$|f|_{*}= \\sum_\n{0\\leq m\\leq 2} \\sup_{E\\in{\\mathcal I}}|f^{(m)}|.$$\nFor any $ f(E,\\theta)=\\sum_{k\\in {\\mathbb Z}^d}\\widehat f_k(E)e^{2\\pi {\\rm i}\n\\langle k,\\theta\\rangle}$ which is $C^2$ w.r.t. $E\\in{{\\mathcal I}}$ and $C^\\omega$ w.r.t. $\\theta\\in{\\mathbb T}^d$, denote\n$$\\|f\\|_h:=\\sum_{k\\in {\\mathbb Z}^d}|\\widehat f_k(E)|_{*}e^{2\\pi |k|h},$$\nand denote by $C_h^\\omega( {\\mathcal I} \\times {\\mathbb T}^d,{\\mathbb C})$ the set of all such functions with $\\|f\\|_h<\\infty$. Then our result is the following:\n\n\n\n\\begin{Theorem}[Local Embedding Theorem]\\label{localemb-sl}\nGiven $d\\geq 2$, $h>0$ and $G\\in C^\\omega_{h}({{\\mathcal I}} \\times {\\mathbb T}^{d-1}, {\\rm sl}(2,{\\mathbb R}))$, suppose that $\\mu\\in {\\mathbb T}^{d-1}$ is such that $(1,\\mu)$ is rationally independent. 
Then, for any $\\nu\\in C^2({\\mathcal I})$ satisfying\n \\begin{equation}\\label{varition}\n \\sup_{E\\in{\\mathcal I}}|\\nu'(E)|\\cdot |{\\mathcal I}|< \\frac{1}{6},\n \\end{equation}\nthere exist $\\epsilon=\\epsilon(|\\nu|_{*},h,|\\mu|)>0,$\n$c=c(|\\nu|_*,h,|\\mu|)>0,$ and $F\\in\nC^\\omega_{\\frac{h}{1+|\\mu|}}( {\\mathcal I} \\times {\\mathbb T}^d,{\\rm sl}(2,{\\mathbb R}))$ such that the cocycle $(\\mu,e^{2\\pi \\nu J}\ne^{G(\\cdot)})$ is the Poincar\\'e map of linear system\n\\begin{eqnarray}\\label{al-ref1}\n\\left(\\begin{array}{c}x\\\\ \\xi\n \\end{array}\n \\right)'=\\left(\\nu J+F(\\omega t)\\right)\\left(\\begin{array}{c}x\\\\ \\xi\n \\end{array}\n \\right) , \\qquad \\omega=(1,\\mu)\n\\end{eqnarray}\nprovided that $\\|G\\|_{h}<\\epsilon.$ Moreover, we have $\\|F\\|_{\\frac{h}{1+|\\mu|}}\\leq 2c \\|G\\|_{h}$.\n\\end{Theorem}\nWe postpone the proof of Theorem \\ref{localemb-sl} to Appendix \\ref{app_proof}.\n\n\\smallskip\n\nNow let us show how we can apply Theorem \\ref{localemb-sl} to finish the proof of Theorem \\ref{thm_AMO}. 
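Before applying the theorem, it is worth recording where the rotation angle $\nu(E)$ comes from: the constant matrix $e^{B}$ considered next has determinant $1$ and trace $-E$, so away from the spectral edges it is elliptic. A one-line check, consistent with the formulas used below:

```latex
% Eigenvalues of the free (constant) cocycle matrix e^{B}:
\det e^{B}=1,\qquad \operatorname{tr} e^{B}=-E=2\cos\nu(E)
\;\Longrightarrow\;
\cos\nu(E)=-\frac{E}{2},\quad
\sin\nu(E)=\frac{\sqrt{4-E^{2}}}{2},\qquad E\in[-2+\delta,\,2-\delta],
% so the eigenvalues of e^{B} are e^{\pm i\nu(E)} (elliptic case).
```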
First note that the constant matrix $e^{B}$ can be rewritten as\n\n$$e^{B}:=\\left(\\begin{array}{cc}\n -E & -1 \\\\\n 1 & 0\n \\end{array}\\right) = M\\left(\\begin{array}{cc}\n \\cos(\\nu) & -\\sin(\\nu) \\\\\n \\sin(\\nu) & \\cos(\\nu)\n \\end{array}\\right)M^{-1} , $$\nwhere\n$$M:=\\frac{1}{\\sqrt{\\sin(\\nu)}}\\left(\\begin{array}{cc}\n \\cos(\\nu) & -\\sin(\\nu) \\\\\n 1 & 0\n \\end{array}\\right), $$\nrecalling that\n$$\\cos(\\nu(E))=-\\frac{E}{2},\\quad \\sin(\\nu(E))=\\frac{\\sqrt{4-E^2}}{2},\\qquad E\\in[-2+\\delta,\\ 2-\\delta].$$\nHence, by noting\n$$\\left(\\begin{array}{cc}\n \\cos(\\nu) & -\\sin(\\nu) \\\\\n \\sin(\\nu) & \\cos(\\nu)\n \\end{array}\\right)=\\exp\\left\\{\\left(\\begin{array}{cc}\n 0 & -\\nu \\\\\n \\nu & 0\n \\end{array}\\right)\\right\\},$$\n we see that $B$ can be written as\n$B=M \\cdot ( \\nu J ) \\cdot M^{-1}.$\n\n\n\n\\smallskip\n\nFor $\\nu(E)=\\cos^{-1}(-\\frac{E}{2})$, there exists ${{\\mathcal I}}\\subset [-2+\\delta, 2-\\delta]$ such that (\\ref{varition}) is satisfied.\nFor example, we can take ${{\\mathcal I}}= ]-\\frac{2}{\\sqrt{37}},\\frac{2}{\\sqrt{37}}[$.\nTherefore, according to Theorem \\ref{localemb-sl}, for $\\omega = (1,\\alpha)$, we obtain a quasi-periodic linear system $(\\omega, \\, B(E)+F(E,\\cdot))$ from the quasi-periodic Schr\\\"odinger cocycle $(\\alpha, \\, S_E^\\lambda)$:\n\\begin{equation}\\label{Schrodinger_cocycle-conti}\n\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right)'=(B(E)+F(E,\\omega t)) \\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\n\\right).\n\\end{equation}\nThrough the change of variables\n$$\\left(\\begin{array}{c}\n x \\\\\n \\xi\n \\end{array}\\right)=M\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\\right),$$\n$(\\omega, \\, B(E)+F(E,\\cdot))$ is conjugated to\n$$\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\n\\right)'=\\left(\\left(\\begin{array}{cc}\n 0 & -\\nu \\\\\n \\nu & 0\n \\end{array}\\right)+MF(E,\\omega 
t)M^{-1}\\right)\\left(\\begin{array}{c}\n \\tilde x \\\\\n \\tilde\\xi\n \\end{array}\n\\right).$$\nThen by Theorems \\ref{thm_Schro} and \\ref{thm_Schro_sobolev}, Theorem \\ref{thm_AMO} follows with\n$$\\left(\\begin{array}{cc}\n b(E,\\cdot) & c(E,\\cdot) \\\\\n -a(E,\\cdot) & -b(E,\\cdot)\n \\end{array} \\right)=MF(E,\\cdot)M^{-1}.$$\n\nFinally, we point out that $\\rho_{(\\omega, \\, B(E)+F(E,\\cdot))}=\\tilde\\rho_{(\\alpha, \\, S_E^\\lambda(\\cdot))}$, since $(\\alpha, \\, S_E^\\lambda)$ is the Poincar\\'e map of the linear system\n$(\\omega, \\, B(E)+F(E,\\cdot))$. Let\n$$\\tilde\\Lambda_{(-p, k)}:=\\left\\{E\\in\\overline{{\\mathcal I}}: \\rho_{(\\omega, \\, B(E)+F(E,\\cdot))} = \\frac{ k\\alpha-p}{2} = \\min_{j\\in {\\mathbb Z}} \\left| \\frac{ k\\alpha}{2} -j\\right| \\right\\} ,$$\nthen by the well-known result of Avila-Jitomirskaya \\cite{AvilaJito}, ${\\rm Leb}(\\tilde\\Lambda_{(-p, k)})>0$ for every $k\\in{\\mathbb Z}$ such that $\\tilde\\Lambda_k$ intersects ${{\\mathcal I}}$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\nIn a world in which many pressing global issues require large scale cooperation, understanding the group size effect on cooperative behavior is a topic of central importance. Yet, the nature of this effect remains largely unknown, with lab experiments insisting that it is either positive or negative or null, and field experiments suggesting that it is instead curvilinear. Here we shed light on this apparent contradiction by considering a novel class of public goods games inspired by the realistic scenario in which the natural output limits of the public good imply that the benefit of cooperation increases fast for early contributions and then decelerates. 
We report on a large lab experiment providing evidence that, in this case, group size has a curvilinear effect on cooperation, according to which intermediate-size groups cooperate more than smaller groups and more than larger groups. In doing so, our findings help fill the gap between lab experiments and field experiments and suggest concrete ways to promote large scale cooperation among people.\n\n\n\n\n\\section*{Introduction}\n\nCooperation has played a fundamental role in the early evolution of our societies \\cite{KG,Tomasello14natural} and continues to play a major role today. From the individual level, where we cooperate with our romantic partner, friends, and co-workers in order to handle our individual problems, up to the global level, where countries cooperate with other countries in order to handle global problems, our entire life is based on cooperation.\n\nGiven its importance, it is not surprising that cooperation has inspired an enormous amount of research across all biological and social sciences, spanning from theoretical accounts \\cite{Tr,Ax-Ha,FF03,nowak2006five,Perc10coevolutionary,press2012iterated,perc2013evolutionary,Ca,hilbe2013evolution,Ra-No,capraro2014translucent} to experimental studies \\cite{Andreoni1988why,Fischbacher2001people,milinski2002reputation,Frey2004social,Fischbacher2010social,traulsen2010human,apicella2012social,capraro2014heuristics,capraro2014benevolent,capraro2014good,hauser2014cooperating,gallo2015effects} and numerical simulations \\cite{Nowak92evolutionary,boyd2003evolution,santos2005scale,perc2008social,roca2009evolutionary,gardenes2012evolution,jiang2013spreading}.\n\nSince the resolution of many pressing global issues, such as global climate change and depletion of natural resources, requires cooperation among many actors, one of the most relevant questions about cooperation regards the effect of the size of the group on cooperative behavior. 
Indeed, since the influential work by Olson \\cite{olson1965logic}, scholars have recognized that the size of a group can have an effect on cooperative decision-making. However, the nature of this effect remains one of the most mysterious areas in the literature, with some scholars arguing that it is negative \\cite{olson1965logic,dawes1977behavior,komorita1982cooperative,baland1999ambiguous,ostrom2005understanding,grujic2012three,vilone2014partner,nosenzo2015cooperation}, others that it is positive \\cite{mcguire1974group,isaac1994group,haan2002free,agrawal2006explaining,masel2007bayesian,zhang2011group,szolnoki2011group}, and yet others that it is ambiguous \\cite{esteban2001collective,pecorino2008group,oliver1988paradox,chamberlin1974provision} or non-significant \\cite{todd1992collective,gautam2007group,rustagi2010conditional}. Interestingly, the majority of field experiments seem to agree on yet another possibility, that is, that group size has a curvilinear effect on cooperative behavior, according to which intermediate-size groups cooperate more than smaller groups and more than larger groups \\cite{poteete2004heterogeneity,agrawal2001group,agrawal2000small,yang2013nonlinear,cinner2013looking}.\nThe emergence of a curvilinear effect of the group size on cooperation in real life situations is also supported by data concerning academic research, which in fact support the hypothesis that the research quality of a research group is optimized for medium-sized groups \\cite{kenna2011critical,kenna2011critical2,kenna2012managing}.\n\nHere we aim to shed light on this debate by providing evidence that a single parameter can be responsible for all the different and apparently contradictory effects that have been reported in the literature. 
Specifically, we show that the effect of the size of the group on cooperative decision-making depends critically on a parameter taking into account different ways in which the notion of cooperation itself can be defined when there are more than two agents.\n\nIndeed, while in the case of only two agents a cooperator can be simply defined as a person willing to pay a cost $c$ to give a greater benefit $b$ to the other person \\cite{nowak2006five}, the same definition, when transferred to situations where there are more than two agents, is subject to multiple interpretations. If cooperation, from the point of view of the cooperator, means paying a cost $c$ to create a benefit $b$, what does it mean from the point of view of the \\emph{other} player\\emph{s}? Does $b$ get earned by each of the other players or does it get shared among all other players, or none of them? In other words, what is the marginal return for cooperation?\n\nOf course, there is no general answer and, in fact, previous studies have considered different possibilities. 
For instance, in the standard Public Goods game it is assumed that $b$ gets earned by each player (including the cooperator); instead, in the N-person Prisoner's dilemma (as defined in \\cite{barcelo2015group}) it is assumed that $b$ gets shared among all players; yet, the Volunteer's dilemma \\cite{diekmann1985volunteer} and its variants using critical mass \\cite{szolnoki2010impact} sit somewhere in between: one or more cooperators are needed to generate a benefit that gets earned by each player, but, after the critical mass is reached, new cooperators do not generate any more benefit; finally, it has been pointed out \\cite{marwell1993critical,heckathorn1996dynamics} that a number of realistic situations can be characterized by a marginal return which increases linearly for early contributions and then decelerates, reflecting the natural decrease of marginal returns that occurs when output limits are approached.\n\nIn order to take into account this variety of possibilities, we consider a class of \\emph{social dilemmas} parametrized by a function $\\beta=\\beta(\\Gamma,N)$ describing the marginal return for cooperation when $\\Gamma$ people cooperate in a group of size $N$. More precisely, our \\emph{general Public Goods game} is the N-person game in which N people have to simultaneously decide whether to cooperate (C) or defect (D). In the presence of a total of $\\Gamma$ cooperators, the payoff of a cooperator is defined as $\\beta(\\Gamma,N)-c$ ($c>0$ represents the cost of cooperation) and the payoff of a defector is defined as $\\beta(\\Gamma,N)$. 
In order to have a social dilemma (i.e., a tension between individual benefit and the benefit of the group as a whole) we require that:\n\\begin{itemize}\n\\item Full cooperation pays more than full defection, that is, $\\beta(N,N) - c > \\beta(0,N)$, for all $N$; \n\\item Defecting is individually optimal, regardless of the number of cooperators, that is, for all $0 < \\Gamma \\leq N$, one has $\\beta(\\Gamma,N)-c < \\beta(\\Gamma-1,N)$.\n\\end{itemize}\n\nThe aim of this paper is to provide further evidence that the function $\\beta$ might be responsible for the confusion in the literature about the group size effect on cooperation. In particular, we focus on the situation, inspired by realistic scenarios, in which the natural output limits of the public good imply that $\\beta(\\Gamma,N)$ increases fast for small $\\Gamma$'s and then stabilizes. \n\nIndeed, in our previous work \\cite{barcelo2015group}, we have shown that the size of the group has a positive effect on cooperation in the standard Public Goods game and a negative effect on cooperation in the N-person Prisoner's dilemma. A reinterpretation of these results is that, if $\\beta(N,N)$ increases linearly with $N$ (standard Public Goods game), then the size of the group has a positive effect on cooperation; and, if $\\beta(N,N)$ is constant with $N$ (N-person Prisoner's dilemma), then the size of the group has a negative effect on cooperation. 
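As a sanity check, the two defining conditions can be verified mechanically for any concrete choice of $\beta$ and $c$. A minimal sketch (the function names and the illustrative parameters `slope`, `cap`, and `base` are ours, not the paper's):

```python
def is_social_dilemma(beta, n, c):
    """Check the two conditions of the general Public Goods game for a
    marginal-return function beta(gamma, n) and a cooperation cost c > 0."""
    # (1) Full cooperation pays more than full defection.
    full_coop_pays = beta(n, n) - c > beta(0, n)
    # (2) Defecting is individually optimal at every cooperation level.
    defect_dominates = all(beta(g, n) - c < beta(g - 1, n)
                           for g in range(1, n + 1))
    return full_coop_pays and defect_dominates

# A piecewise beta in the spirit of the paper: linear up to a cap, then
# constant (the public good's output limit).  Parameters are illustrative.
def beta_piecewise(gamma, n, slope=5, cap=10, base=10):
    return base + slope * min(gamma, cap)

# A beta that never rewards cooperation fails condition (1).
beta_constant = lambda gamma, n: 20
```

With these parameters, `is_social_dilemma(beta_piecewise, 15, 10)` holds, because each extra cooperator adds at most 5 to the common benefit while cooperating costs 10.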
This reinterpretation suggests that, in the more realistic situations in which the benefit for full cooperation increases fast for early contributions and then decelerates once the output limits of the public good are approached, we may observe a curvilinear effect of the group size, according to which intermediate-size groups cooperate more than smaller groups and more than larger groups.\n\nTo test this hypothesis, we have conducted a lab experiment using a general public goods game with a piecewise function $\\beta$, which increases linearly up to a certain number of cooperators, after which it remains constant. While it is likely that realistic scenarios would be better described by a smoother function, this is a good approximation of all those situations in which the natural output limits of a public good imply that the increase in the marginal return for cooperation tends to zero as the number of contributors grows very large. The upside of choosing a piecewise function $\\beta$ is that, in this way, we could present the instructions of the experiment in a very simple way, thus minimizing random noise due to participants not understanding the decision problem at hand (see Method).\n\nOur results indeed support the hypothesis of a curvilinear effect of the size of the group on cooperative decision-making. Taken together with our previous work \\cite{barcelo2015group}, our findings thus (i) shed light on the confusion regarding the group size effect on cooperation, by pointing out that different values of a single parameter might give rise to qualitatively different group size effects, including positive, negative, and even curvilinear; and (ii) they help fill the gap between lab experiments and field experiments. 
Indeed, while lab experiments use either the standard Public Goods game or the N-person Prisoner's dilemma, \\emph{real} public goods games are mostly characterized by a marginal return of cooperation that increases fast for early contributions and then approaches a constant function as the number of cooperators grows very large - and our results provide evidence that these three situations give rise to three different group size effects.\n\n\\section*{Method}\n\nWe have recruited participants through the online labour market Amazon Mechanical Turk (AMT) \\cite{paolacci2010running,horton2011online,mason2012conducting}. After entering their TurkID, participants were directed to the following instruction screen.\n\n\\emph{Welcome to this HIT.}\n \n\\emph{This HIT will take about 5 minutes and you will earn 20c for participating.} \n \n\\emph{This HIT consists of a decision problem followed by a few demographic questions.} \n \n\\emph{You can earn an additional bonus depending on the decisions that you and the participants in your cohort will make.} \n\n\\emph{We will tell you the exact number of participants in your cohort later.} \n\n\\emph{Each one of you will have to decide to join either Group A or Group B.} \n \n\\emph{Your bonus depends on the group you decide to join and on the size of the two groups, A and B, as follows:}\n\\begin{itemize}\n\\item \\emph{If the size of Group A is 0 (that is, everybody chooses to join Group B), then everybody gets 10c}\n\\item \\emph{If the size of Group A is 1, then the person in Group A gets 5c and each person in Group B gets 15c}\n\\item \\emph{If the size of Group A is 2, then each person in Group A gets 10c and each person in Group B gets 20c}\n\\item \\emph{If the size of Group A is 3, then each person in Group A gets 15c and each person in Group B gets 25c}\n\\item \\emph{If the size of Group A is 4, then each person in Group A gets 20c and each person in Group B gets 30c}\n\\item \\emph{And so on, up to 10: If the size 
of Group A is 10, then each person in Group A gets 50c and each person in Group B gets 60c}\n\\item \\emph{However, if the size of Group A is larger than 10, then, independently of the size of the two groups, each person in group A will still get 50c and each person in group B will still get 60c.}\n\\end{itemize}\n\nAfter reading the instructions, participants were randomly assigned to one of 12 conditions, differing only on the size of the cohort ($N=3,5,10,15,20,25,30,40,50,60,80,100$). For instance, the decision screen for the participants in the condition where the size of the cohort is 3 was:\n\n\\emph{You are part of a cohort of 3 participants.}\n\n\\emph{Which group do you want to join?}\n\nBy using appropriate buttons, participants could select either Group A or Group B. \n\nWe opted for not asking any comprehension questions. We made this choice for two reasons. First, with the current design, it is impossible to ask general comprehension questions such as ``what is the strategy that benefits the group as a whole'', since this strategy depends on the strategy played by the other players. Second, we did not want to ask particular questions about the payoff structure, since this might anchor the participants' reasoning on the examples presented. Of course, a downside of our choice is that we could not avoid random noise. However, as will be discussed in the Results section, random noise cannot be responsible for our findings. Instead, our results would have been even cleaner if we had not had random noise, since the initial increase of cooperation and its subsequent decline would have been more pronounced (see the Results section for more details).\n\nAfter making their decisions, participants were asked to fill in a standard demographic questionnaire (in which we asked for their age, gender, and level of education), after which they received the ``survey code'' needed to claim their payment. 
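The bulleted payoff schedule from the instruction screen has a simple closed form (amounts in cents; the function name is ours):

```python
def bonuses(group_a_size):
    """Bonus (in cents) of each member of Group A and of Group B when
    `group_a_size` participants join Group A, per the instruction screen."""
    k = min(group_a_size, 10)   # payoffs saturate once Group A reaches 10
    bonus_a = 5 * k             # each person in Group A (the cooperators)
    bonus_b = 5 * k + 10        # each person in Group B (the defectors)
    return bonus_a, bonus_b
```

When Group A is empty, `bonus_b` is 10, matching the ``everybody gets 10c'' case; the Group A value is then vacuous since nobody receives it.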
After collecting all the results, bonuses were computed and paid on top of the participation fee, which was \\$0.20. In case the number of participants in a particular condition was not divisible by the size of the cohort (it is virtually impossible, in AMT experiments, to decide the exact number of participants playing a particular condition), in order to compute the bonus of the remaining people we formed an additional cohort where these people were grouped with a random choice of people for whom the bonus had already been computed. Additionally, we note that only 98 subjects participated in the condition with N=100. This does not generate deception in the computation of the bonuses since the payoff structure of the game does not depend on $N$ (as long as $N>10$). As a consequence of these observations, no deception was used in our experiment. \n\nAccording to the Dutch legislation, this is a non-WMO study, that is (i) it does not involve medical research and (ii) participants are not asked to follow rules of behavior. See http:\/\/www.ccmo.nl\/attachments\/files\/wmo-engelse-vertaling-29-7-2013-afkomstig-van-vws.pdf, Section 1, Article 1b, for an English translation of the Medical Research Act. Thus (see http:\/\/www.ccmo.nl\/en\/non-wmo-research) the only laws which apply are the Agreement on Medical Treatment Act, from the Dutch Civil Code (Book 7, title 7, section 5), and the Personal Data Protection Act (a link to which can be found in the previous webpage). The current study conforms to both. In particular, anonymity was preserved because AMT ``requesters'' (i.e., the experimenters) have access only to the so-called TurkID of a participant, an anonymous ID that AMT assigns to a subject when he or she registers to AMT. Additionally, as demographic questions we only asked for age, gender, and level of education. \n\n\\section*{Results}\n\nA total of 1,195 \\emph{distinct} subjects located in the US participated in our experiment. 
\\emph{Distinct} subjects means that, in case two or more subjects were characterized by either the same TurkID or the same IP address, we kept only the first decision made by the corresponding participant and eliminated the rest. These multiple identities usually represent a minor problem in AMT experiments (only 2\\% of the participants in the current dataset). Participants were distributed across conditions as follows: 101 participants played with $N=3$, 99 with $N=5$, 102 with $N=10$, 101 with $N=15$, 98 with $N=20$, 103 with $N=25$, 97 with $N=30$, 99 with $N=40$, 97 with $N=50$, 101 with $N=60$, 99 with $N=80$, 98 with $N=100$.\n\nFig. 1 summarizes the main result. The rate of cooperation, that is, the proportion of people opting to join Group A, first increases as the size of the group increases from $N=3$ to $N=15$, and then it starts decreasing. The figure suggests that the relation between the size of the group and the rate of cooperation is \\emph{not} quadratic: while the initial increase of cooperation is relatively fast, the subsequent decrease of cooperation seems extremely slow. This is confirmed by a linear regression predicting the rate of cooperation as a function of $N$ and $N^2$, which shows that neither the coefficient of $N$ nor that of $N^2$ is significant ($p=0.4692, p=0.2003$, resp.). For this reason we use a more flexible econometric model than the quadratic model, consisting of two linear regressions, one with a positive slope (for small $N$'s) and the other one with a negative slope (for large $N$'s). As the switching point, we use $N=15$, corresponding to the size of the group which reached maximum cooperation. Doing so, we find that both the initial increase of cooperation and its subsequent decline are highly significant (from $N=3$ to $N=15$: coeff $= 0.0187553$, $p=0.00042$; from $N=15$ to $N=100$: coeff $= -0.00177618$, $p=0.00390$). 
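The two-segment fit can be sketched as follows. Note that the paper's regressions are run on individual binary decisions; this simplified version fits the per-condition cooperation rates, and the rates used here are hypothetical tent-shaped data, not the experimental values:

```python
def slope(xs, ys):
    # Ordinary least-squares slope of y on x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def two_segment_slopes(sizes, rates, switch=15):
    """One linear fit for group sizes up to `switch`, another from
    `switch` on; the switching point belongs to both segments."""
    left = [(s, r) for s, r in zip(sizes, rates) if s <= switch]
    right = [(s, r) for s, r in zip(sizes, rates) if s >= switch]
    return slope(*zip(*left)), slope(*zip(*right))

# Hypothetical per-condition cooperation rates peaking at N = 15:
sizes = [3, 5, 10, 15, 20, 25, 30, 40, 50, 60, 80, 100]
rates = [0.30, 0.34, 0.43, 0.52, 0.50, 0.49, 0.48, 0.46, 0.44, 0.43, 0.40, 0.37]
up, down = two_segment_slopes(sizes, rates)   # up > 0, down < 0
```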
\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.80]{Fig1.jpg} \n \\caption{\\emph{Proportion of cooperators (people choosing to join Group A) for each group size. Error bars represent the standard errors of the means. Group size has initially a positive effect on cooperation, which increases and reaches its maximum in groups of size 15, followed by a gradual decrease. Linear regression predicting cooperation using group size as independent variable confirms that both the initial increase of cooperation and its subsequent decline are highly significant (from $N=3$ to $N=15$: coeff $= 0.0187553$, $p=0.00042$; from $N=15$ to $N=100$: coeff $= -0.00177618$, $p=0.00390$).}}\n \\label{fig:intermediate}\n\\end{figure}\n\nWe conclude by observing that not only can random noise not explain our results, but, without random noise, the effect would have been even stronger. Indeed, first we observe that there is no a priori reason to expect that random noise would interact with any condition, and so we can assume that it is randomly distributed across conditions. Then we observe that subtracting a binary distribution with average $0.5$ from a binary distribution with average $\\mu>0.5$, one would obtain a distribution with average $\\mu_0>\\mu$. Similarly, subtracting a binary distribution with average $0.5$ from a binary distribution with average $\\mu<0.5$, one would obtain a distribution with average $\\mu_0<\\mu$. 
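One way to make this subtraction argument explicit (a sketch in our own notation, not the paper's: write $\varepsilon$ for the unknown fraction of coin-flip decisions, $\mu$ for the observed average, and $\mu_0$ for the true average, so that the observed distribution is a mixture):

```latex
\mu=(1-\varepsilon)\,\mu_0+\frac{\varepsilon}{2}
\quad\Longrightarrow\quad
\mu_0-\mu=\frac{\varepsilon}{1-\varepsilon}\left(\mu-\frac{1}{2}\right),
```

so $\mu_0>\mu$ exactly when $\mu>1/2$ and $\mu_0<\mu$ exactly when $\mu<1/2$: removing the noise pushes every average away from $1/2$.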
Thus, if the $\\mu$'s are the averages that we have found (containing random noise) and the $\\mu_0$'s are the \\emph{true} averages (without random noise), the previous inequalities allow us to conclude that the initial increase of cooperation and its subsequent decrease would have been stronger in the absence of random noise.\n\n\\section*{Discussion}\n\nHere we have reported on a lab experiment providing evidence that the size of a group can have a curvilinear effect on cooperation in one-shot social dilemmas, with intermediate-size groups cooperating more than smaller groups and more than larger groups. Joining the current results with those of a previously published study of ours \\cite{barcelo2015group}, we can conclude that group size can have qualitatively different effects on cooperation, ranging from positive to negative to curvilinear, depending on the particular decision problem at hand. Interestingly, our findings suggest that different group size effects might ultimately be due to different values of a single parameter, the number $\\beta(N,N)$, describing the benefit for full cooperation. If $\\beta(N,N)$ is constant in $N$, then group size has a negative effect on cooperation; if $\\beta(N,N)$ increases linearly with $N$, then group size has a positive effect on cooperation; in the \\emph{middle}, all sorts of things may a priori happen. In particular, in the realistic situation in which $\\beta(N,N)$ is a piecewise function that increases linearly with $N$ up to a certain $N_0$ and then remains constant, group size has a curvilinear effect, according to which intermediate-size groups cooperate more than smaller groups and more than larger groups. 
See Table 1.\n\n\\begin{center}\n\\begin{table}\n\\begin{tabular}{| l | c | c| }\n \\hline \n shape of $\\beta(N,N)$ & group size effect on cooperation & paper \\\\\n\\hline\n linear & positive & Barcelo \\& Capraro (2015) \\\\\n constant & negative & Barcelo \\& Capraro (2015) \\\\\n linear-then-constant & curvilinear & this paper\\\\\n \\hline \n\\end{tabular}\n\\caption{Summary of the different group size effects on cooperation depending on how the benefit for full cooperation varies as a function of the group size.}\n\\end{table}\n\\end{center}\nTo the best of our knowledge, ours is the first study reporting a curvilinear effect of the group size on cooperation in an experiment conducted in the ideal setting of a lab, in which confounding factors are minimized. Previous studies reporting a qualitatively similar effect \\cite{poteete2004heterogeneity,agrawal2001group,agrawal2000small,yang2013nonlinear} used field experiments, in which it is difficult to isolate the effect of the group size from possibly confounding effects. In our case, the only possibly confounding factor is random noise due to a proportion of people that may have not understood the rules of the decision problem. As we have shown, our results cannot be driven by random noise and, in fact, the curvilinear effect would have been even stronger, without random noise. 
Moreover, since our experimental design was inspired by an attempt to mimic all those \\emph{real} public goods games in which the natural output limits of the public good imply that the increase of the marginal return for cooperation, when the number of cooperators diverges, tends to zero, our results might explain the apparent contradiction that field experiments tend to converge on the fact that the effect of the group size is curvilinear, while lab experiments tend to converge on either of the two linear effects.\n\nOur contribution is also conceptual, since we have provided evidence that a single parameter might be responsible for different group size effects: the parameter $\\beta(N,N)$, describing the way the benefit for full cooperation varies as a function of the size of the group. Of course, we do not claim that this is the only ultimate explanation of why different group size effects have been reported in experimental studies. In particular, in real-life situations, which are typically repeated and in which communication among players is allowed, other factors, such as within-group enforcement, may favor the emergence of a curvilinear effect of the group size on cooperation, as highlighted in \\cite{yang2013nonlinear}. If anything, our results provide evidence that the curvilinear effect on cooperation goes beyond contingent factors and can be found also in the ideal setting of a lab experiment using one-shot anonymous games. We believe that this is a relevant contribution in light of possible applications of our work. Indeed, the difference between $\\beta(N,N)$ and the total cost of full cooperation $cN$ can be interpreted as the incentive that an institution needs to pay to the contributors in order to make them cooperate. 
Since institutions are interested in minimizing their costs and, at the same time, maximizing the number of cooperators, it is crucial to understand what is the ``lowest'' $\\beta$ such that the resulting effect of the group size on cooperation is positive. This seems to be a non-trivial question. For instance, does $\\beta(\\Gamma,N)=\\frac{\\Gamma}{N}\\log_2(N+1)$ give rise to a positive effect or is it still curvilinear or even negative? The technical difficulty here is that it is hard to design an experiment to test people's behavior in these situations, since one cannot expect that an average person would understand the rules of the game when presented using a logarithmic function. \n\nIn terms of economic models, our results are consistent with utilitarian models such as the Charness \\& Rabin model \\cite{charness2002understanding} and the novel cooperative equilibrium model \\cite{Ca,capraro2013cooperative,barcelo2015group}. Both these models indeed predict that, in our experiment, cooperation initially (i.e., for $N\\leq10$) increases with $N$ (see \\cite{barcelo2015group} for the details), and then starts decreasing. This behavioral transition follows from the simple observation that free riding when there are more than 10 cooperators costs zero to each of the other players and benefits the free-rider. Thus, cooperation in larger groups is not supported by utilitarian models, which then predict a decrease in cooperative behavior whose speed depends on the particular parameters of the model, such as the extent to which people care about the group payoff versus their individual payoff, and people's beliefs about the behavior of the other players. 
Thus our results add to the growing body of literature showing that utilitarian models are qualitatively good descriptors of cooperative behavior in social dilemmas.\n\nHowever, we note that while theoretical models predict that the rate of cooperation should start decreasing at $N=10$, our results show that the rate of cooperation for $N=15$ is marginally significantly higher than the rate of cooperation for $N=10$ (Rank sum, $p=0.0588$). Although ours is a between-subjects experiment, this finding seems to hint at the fact that there is a proportion of subjects who would defect for $N=10$ and cooperate for $N=15$. This is not easy to explain: why should a subject cooperate with $N=15$ and defect with $N=10$? One possibility is that there is a proportion of ``inverse conditional cooperators'', who cooperate only if a small percentage of people cooperate: if these subjects believe that the rate of cooperation decreases quickly after $N=10$, they would be more motivated to cooperate for $N=15$ than for $N=10$. Another possibility, of course, is that this discrepancy is just a false positive. In any case, unfortunately our experiment is not powerful enough to detect the reason for this discrepancy between theoretical predictions and experimental results, and thus we leave this interesting question for future research.\n\n\\section*{Acknowledgements}\n\nV.C. is supported by the Dutch Research Organization (NWO) Grant No. 612.001.352. This material is based upon work supported by the National Science Foundation under Grant No. 0932078000 while the first author was in residence at the Mathematical Science Research Institute in Berkeley, California, during the Spring 2015 semester.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n In 1982, Feynman showed that a classical Turing machine would not be able to efficiently simulate quantum mechanical systems \\cite{feynman1982simulating}. 
Feynman went on to propose a model of computation based on quantum mechanics, which would not suffer the same limitations. Feynman's ideas were later refined by Deutsch, who proposed a \\textit{universal quantum computer} \\cite{Deutsch97}. In this scheme, computation is performed by a series of \\textit{quantum gates}, which are the quantum analogs of classical binary logic gates. A series of gates is called a \\textit{quantum circuit} \\cite{nielsen-book}. Quantum gates act on \\textit{qubits}, the quantum analog of bits.\n \\\\\n \\\\\n Lloyd later proved that a quantum computer would be able to simulate any quantum mechanical system efficiently \\cite{lloyd1996universal}. Equivalently, this can be stated as follows: given some special unitary operation \\( U \\in \\mathrm{SU}(2^n)\\), \\( U^{\\dagger} U = I\\), there exists some quantum circuit that approximates \\(U\\), where \\(n\\) is the number of qubits. One pertinent question that remains is how to find the circuit which implements this \\(U\\). In certain situations the circuit implementing \\(U\\) can be found exactly; in general, however, this is a difficult problem, and it is acceptable to approximate \\(U\\). Previously, \\(U\\) has been found via expensive algebraic means \\cite{qcompiler,opt-qcompiler,cosine-sekigawa, Mottonen2004}. Another novel approach to finding an approximate \\(U\\) has been to use the tools of \\textit{Riemannian geometry}. \n \\\\\n \\\\\n Nielsen originally proposed calculating special curves called \\textit{geodesics} between two points, \\(I\\) and \\(U\\), in \\(\\mathrm{SU}(2^n)\\). Geodesics are fixed points of the energy functional \\cite{wolfgang}. Nielsen claimed that when an energy-minimising geodesic is discretised into a quantum circuit, this circuit would efficiently simulate \\(U\\) \\cite{nielsen-geom-1,nielsen-geom-2,nielsen-geom-3,nielsen-geom-4,nielsen-geom-5}. In practice, however, finding the geodesics is a difficult task. 
Computing geodesics requires one to solve a boundary value problem in a high-dimensional space. Furthermore, Nielsen originally formulated the problem on a Riemannian manifold equipped with a so-called \\textit{penalty} metric, where the penalty was made large. This complicated solving the boundary value problem \\cite{brachistochrone}.\n \\\\\n \\\\\n The Nielsen approach can be refined by considering subRiemannian geodesics. A subRiemannian geodesic is only allowed to evolve in directions from a \\textit{horizontal subspace} of the tangent space \\cite{montgomery}. This approach still involves solving a complicated boundary value problem. For a practical tool, a much faster methodology to synthesise a \\(U\\) is required. With recent advances in computing power, \\textit{neural networks} (NN) are an attractive option.\n \\\\\n \\\\\n The problem is to find \\(U\\) approximately as a product of exponentials\n \\begin{align} U \\approx \\, &\\mathbf{E}(c) = \\exp(c^1_1 \\tau_{1}) \\dots \\exp( c^1_m \\tau_m ) \\nonumber \\\\\n &\\dots \\exp( c^N_1 \\tau_1 ) \\dots \\exp( c^N_m \\tau_{m}), \\label{eqn:U} \\end{align}\n where \\( \\mathbf{E} \\) is what we call the \\textit{embedding} function, \\( c = (c_1^1 ,\\dots, c^N_m )\\), and the \\(\\tau_i\\) are a basis for a \\textit{bracket generating} subset of the Lie algebra \\( \\Delta \\subset \\mathfrak{su}(2^n)\\) of dimension \\(m\\). Bracket generating means that repeated Lie brackets of terms in \\(\\Delta \\) can generate any term in \\( \\mathfrak{su}(2^n)\\). Because products of matrix exponentials generate Lie bracket terms,\n \\[ \\exp(A) \\exp(B) = \\exp(A+ B+ \\frac{1}{2}[A,B] + \\dots ),\\]\nany \\(U \\in \\mathrm{SU}(2^n) \\) can be written as Equation (\\ref{eqn:U}) with sufficiently many products. We restrict ourselves to \\(U\\) which can be written as a product with a number of terms polynomial in \\(n\\). An example of such a \\(\\Delta\\) could be the matrix logarithms of universal gates. 
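As a concrete illustration of evaluating the embedding \( \mathbf{E} \), the following sketch computes a product of exponentials for the single-qubit case \(n=1\). This is our own minimal example, not the paper's code: the basis choice, helper names, and coefficient values are assumptions made purely for illustration.

```python
import numpy as np

# Pauli matrices; for n = 1 the scaled set {i/sqrt(2) sigma_k} spans su(2).
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
TAUS = [1j / np.sqrt(2) * s for s in (SX, SY, SZ)]

def expm_antihermitian(A):
    """exp(A) for anti-Hermitian A, via the eigendecomposition of iA."""
    w, V = np.linalg.eigh(1j * A)  # iA is Hermitian
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def embedding(c, taus=TAUS):
    """E(c): the ordered product of exp(c[k][i] * tau_i) over segments k."""
    U = np.eye(taus[0].shape[0], dtype=complex)
    for segment in c:
        for coeff, tau in zip(segment, taus):
            U = U @ expm_antihermitian(coeff * tau)
    return U

# Two segments of m = 3 exponentials each; each factor is the exponential of
# an anti-Hermitian traceless matrix, so the product is special unitary.
c = [[0.3, -0.1, 0.2], [0.05, 0.4, -0.2]]
U = embedding(c)
```

Since every \(\tau_i\) is anti-Hermitian and traceless, each factor (and hence \(\mathbf{E}(c)\)) lies in \(\mathrm{SU}(2)\) by construction.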
For convenience, we work with all permutations of Kronecker products of one and two Pauli matrices, so \n \\[ \\Delta = \\mathrm{span} \\{\\frac{\\mathrm{i}}{\\sqrt{2^n}} \\sigma_i^j , \\frac{\\mathrm{i}}{\\sqrt{2^n}} \\sigma_i^k \\sigma_j^l \\} ,\\]\n where \\( \\sigma^j_i \\) represents the \\(n\\)-fold Kronecker product \\( I \\otimes \\dots \\otimes \\sigma_i \\otimes \\dots \\otimes I\\), with a \\( \\sigma_i \\) inserted in the \\(j\\)-th slot and \\(I\\) representing the \\( 2 \\times 2 \\) identity matrix. Exponentials of these basis elements have very simple circuits; see Appendix A for more detail.\n \\\\\n \\\\\n We propose that a neural network be trained to learn \\( \\mathbf{E}^{-1}\\). The neural network will try to find all the coefficients \\( c^k_{i} \\) so that the product approximates \\(U\\). In this approach, the neural network takes a unitary matrix \\(U\\) as an input and returns the list \\(c\\) of coefficients \\( c^k_{i}\\). A segment is a product of \\( m \\) exponentials, one of each basis element. In total there are \\(N\\) segments. We only examine \\(U\\) which are implementable in a reasonable number of segments. We found that we required two neural networks to achieve this. The first is a Gated Recurrent Unit (GRU) network \\cite{gru_paper_1,gru_paper_2} which factors a \\(U\\) into a product of \\(U_j\\),\n \\[U \\approx U_1 U_{2} \\dots U_j \\dots U_N, \\]\n where each \\(U_j\\) is implementable in polynomially many gates; we call this \\textit{global decomposition}.\n The second is simply several dense fully connected layers, which decompose the \\(U_j\\) into products of exponentials\n \\[ U_j \\approx \\exp( c^j_1 \\tau_1 ) \\dots \\exp( c^j_m \\tau_m ), \\]\n which we term \\textit{local decomposition}. These procedures can also be carried out with traditional optimisation methods, but the lack of a good initial guess meant that this took on the order of an hour in \\(\\mathrm{SU}(8)\\). 
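The Pauli-word basis for \(\Delta\) introduced above can be assembled programmatically with Kronecker products. The sketch below is our own illustration (helper names are assumptions, not from the paper); it builds the scaled one- and two-body terms for \(n\) qubits.

```python
import numpy as np
from itertools import combinations

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
          np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
          np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def embed(ops, slots, n):
    """n-fold Kronecker product with the given Paulis placed in the given slots."""
    factors = [I2] * n
    for op, j in zip(ops, slots):
        factors[j] = op
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

def delta_basis(n):
    """Basis of Delta: i/sqrt(2^n) times all one- and two-body Pauli words."""
    scale = 1j / np.sqrt(2 ** n)
    one_body = [scale * embed([s], [j], n)
                for j in range(n) for s in PAULIS]
    two_body = [scale * embed([s1, s2], [j, k], n)
                for j, k in combinations(range(n), 2)
                for s1 in PAULIS for s2 in PAULIS]
    return one_body + two_body

# dim(Delta) = 3n + 9n(n-1)/2 = O(n^2); for n = 3 this gives 36 < 63 = dim su(8),
# so Delta is a proper bracket-generating subset of su(8).
```

Note the \(1/\sqrt{2^n}\) scaling makes these elements orthonormal under the Hilbert–Schmidt inner product, which is convenient for the projection used later.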
While the output from the neural network may not implement \\(U\\) to a required tolerance, it does provide a good initial guess, as the error will be small. The output from the neural network could then be refined with another optimisation algorithm.\n\\section{Training data \\label{training_data}}\n \\noindent To generate the training data, the \\(c\\) should not be chosen randomly. If there is no structure to how \\(c\\) is chosen, it will introduce extra redundancy. More seriously, \\(\\mathbf{E}^{-1} \\) will not be well defined. There are infinitely many ways to factor a \\(U\\) into some unordered product of matrix exponentials. Geometrically this can be visualised as taking any path from \\(I\\) to \\(U\\) on \\(\\mathrm{SU}(2^n)\\). Randomly generating data may give two different decompositions for a \\(U\\), so that \\(\\mathbf{E}\\) is not one-to-one. To ensure the training data is unique, we propose that these paths should be chosen to be, at least approximately, minimal normal subRiemannian geodesics. \n \\\\\n \\\\\n The choice of using geodesics is not particularly special. Other types of curves could be used, as long as they uniquely join \\(I\\) and \\(U\\). This is so \\( \\mathbf{E}^{-1} \\) is well defined. Generating random geodesics can be done simply by generating random initial conditions. However, the geodesics must also be minimal. The first way to try to ensure they are minimal is to bound the norms of the initial conditions. \n \\\\\n \\\\\n The normal subRiemannian geodesics in \\(\\mathrm{SU}(2^n)\\) can be found via the Pontryagin Maximum Principle \\cite{pmp-intro,pmp-book} by minimising the energy functional\n \\[ \\mathcal{E}[x] = \\int_0^1 dt \\langle \\dot{x}, \\dot{x} \\rangle, \\]\n where \\( \\langle, \\rangle \\) is the restriction of the bi-invariant norm to \\(\\Delta \\subset \\mathfrak{su}(2^n)\\), and \\( x :[0,1] \\rightarrow \\mathrm{SU}(2^n) \\). See Chapter 7 of \\cite{opt-control-lie-group} for a review. 
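One way to sketch the random, norm-bounded initial conditions discussed above is to draw \(\Lambda_0\) with random coefficients in an orthonormal basis of \(\mathfrak{su}(2^n)\) and rescale. The function name and the sampling distribution below are our own assumptions, shown only as one possible scheme.

```python
import numpy as np

def random_lambda0(basis_full, rng, bound):
    """Sample Lambda_0 = sum_i a_i b_i with ||a|| <= bound. Since the b_i are
    orthonormal under <X, Y> = tr(X^dag Y), this bounds the
    Hilbert-Schmidt norm of Lambda_0 itself."""
    coeffs = rng.standard_normal(len(basis_full))
    coeffs *= rng.uniform(0.0, bound) / np.linalg.norm(coeffs)
    return sum(c * b for c, b in zip(coeffs, basis_full))

# Single-qubit example with the orthonormal basis {i/sqrt(2) sigma_k} of su(2).
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [1j / np.sqrt(2) * s for s in (SX, SY, SZ)]
rng = np.random.default_rng(0)
lam0 = random_lambda0(basis, rng, bound=3.0)
```

Each sample is anti-Hermitian by construction, so it is a valid element of \(\mathfrak{su}(2)\) with norm at most the chosen bound.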
The normal subRiemannian geodesic equations can be written as\n \\begin{align*}\n \\dot{x} &= u x, \\\\\n \\dot{\\Lambda} &= [\\Lambda, u ],\\\\\n u &= \\mathrm{proj}_{\\Delta}( \\Lambda),\n \\end{align*}\n where \\( \\Lambda : [0,1] \\rightarrow \\mathfrak{su}(2^n) \\), \\( u : [ 0,1] \\rightarrow \\Delta \\subset \\mathfrak{su}(2^n) \\) and \\( \\mathrm{proj}_{\\Delta} \\) is projection onto \\( \\Delta\\).\n This can be re-written as the single equation\n \\begin{equation} \\dot{x} = \\mathrm{proj}_{\\Delta}( x \\Lambda_0 x^{\\dagger} ) x, \\label{eqn:geod} \\end{equation}\n where \\( \\Lambda_0 = \\Lambda(0) \\).\n Choosing \\( \\Lambda_0 \\) completely determines the geodesic. To generate the training data for the \\(U_j\\), first randomly choose a \\( \\Lambda_0\\). The \\(U_j \\) are then the matrices which forward solve the geodesic equations,\n \\[ x(t_{j+1} ) = U_j x(t_j),\\]\n where \\( [0,1] \\) has been divided into \\(N\\) segments of width \\(h\\). For this paper we utilised the simple first order integrator\n \\[ U_j = \\exp\\big( h \\, \\mathrm{proj}_{\\Delta}(x_j \\Lambda_0 x_j^{\\dagger}) \\big), \\]\n since approximating the geodesic is sufficient. There are infinitely many bi-invariant Riemannian geodesics joining \\(I \\) and \\(U\\), one for each branch of \\( \\log(U) \\). SubRiemannian geodesics behave similarly, but their multiplicity depends on the norm of \\( \\Lambda_0\\). To generate the training data we bounded the norms by \\( \\mathrm{dim}(\\Delta) = \\mathcal{O}(n^2) \\), to try to ensure the geodesics are unique. \n \\\\\n \\\\\n Further, the norm \\( || \\mathrm{proj}_{\\Delta} (\\Lambda_0 ) || = || u_0 ||\\) determines the distance between \\(I \\) and a \\(U\\). Nielsen showed that this distance can be thought of as approximately the complexity of implementing \\(U\\). Lemma (3) in \\cite{nielsen-geom-1} shows that a \\(U\\) further away from \\(I\\) requires more gates. The distance, however, is likely to scale exponentially. 
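The first-order integration scheme above can be sketched as follows. This is a minimal single-qubit illustration with our own helper names (the paper works in \(\mathrm{SU}(8)\)); the projection relies on the scaled Pauli words being orthonormal under the Hilbert–Schmidt inner product.

```python
import numpy as np

def expm_antihermitian(A):
    """exp(A) for anti-Hermitian A, via the eigendecomposition of iA."""
    w, V = np.linalg.eigh(1j * A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def project(A, basis):
    """proj_Delta(A): orthogonal projection w.r.t. <X, Y> = tr(X^dag Y),
    assuming the basis elements are orthonormal under this inner product."""
    return sum(np.trace(t.conj().T @ A) * t for t in basis)

def geodesic_segments(lam0, basis, N):
    """First-order integrator U_j = exp(h proj_Delta(x_j Lam0 x_j^dag)),
    with x(t_{j+1}) = U_j x(t_j) and step h = 1/N on [0, 1]."""
    h = 1.0 / N
    x = np.eye(lam0.shape[0], dtype=complex)
    segments = []
    for _ in range(N):
        Uj = expm_antihermitian(h * project(x @ lam0 @ x.conj().T, basis))
        segments.append(Uj)
        x = Uj @ x
    return segments

# Single-qubit example: Delta = span{i/sqrt(2) sigma_x, i/sqrt(2) sigma_y},
# which is bracket generating since [sigma_x, sigma_y] gives sigma_z.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [1j / np.sqrt(2) * SX, 1j / np.sqrt(2) * SY]
lam0 = 1j * (0.4 * SX - 0.2 * SY + 0.3 * SZ)  # example initial covector
Us = geodesic_segments(lam0, basis, N=10)
```

Each \(U_j\) is unitary by construction, and composing them in order recovers the endpoint \(x(1)\) of the approximate geodesic.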
By bounding the norm by a polynomial, we ensure the training data only contains \\(U\\) which are reachable with a polynomial number of quantum gates. \n\\section{Network Design - SU(8) } \n \\subsection{Global decomposition}\n The neural network for the global decomposition takes an input of \\(U\\) and returns a list of \\(U_j\\). To do this, \\(U\\) is decomposed into rows of length \\(2^n\\). This makes \\( 2^n\\) real vectors. Each row is treated as a single timestep in the GRU layer. The output \\(U_j\\) are also decomposed into their rows, and these rows are treated as timesteps in the output. This gives \\( 2^n N \\) output vectors of length \\( 2^n\\). In particular we examined the \\(n=3\\) qubit case. For \\(\\mathrm{SU}(8) \\) we found \\(10\\) stacked GRU layers were sufficient to give reasonable results. In \\(\\mathrm{SU}(8)\\) we chose \\(N=10\\), so there were \\( 8 \\) input vectors of length \\( 8\\) and \\( 80 \\) output vectors of length \\( 8\\). The network was implemented in the Keras Python library \\cite{keras} with the TensorFlow backend, on an Nvidia GTX 1080.\n \\subsection{Local decomposition}\n For \\(\\mathrm{SU}(8)\\) a network with \\(2\\) fully connected dense hidden layers of \\(2000\\) neurons, with the ReLU activation function, was found to be sufficient. The network took a vectorised \\(U_j\\) as input, and output \\( \\dim(\\Delta)\\) values. The network was implemented in the Keras Python library with the TensorFlow backend, on an Nvidia GTX 1080.\n\\section{Results - SU(8)}\n \\subsection{Global decomposition}\n The global decomposition network was trained on \\(U_j\\) taken from \\(5000\\) randomly generated geodesics in \\(\\mathrm{SU}(8)\\); \\(500\\) were used for validation data. The loss function used was the standard Euclidean distance between the output vector and the desired output. After \\( 1500 \\) training epochs the validation loss reached \\( \\sim 0.9 \\) and did not decrease further. 
This was found to be sufficient to generate \\(U_j\\) close to the training data. Figure (\\ref{fig:gruLoss}) shows the validation and training loss. Figures (\\ref{fig:outUi}) and (\\ref{fig:validUi}) show a randomly chosen \\( U_j\\) from a list of \\(U_j\\) generated by the network, and from the training data respectively, for some random \\(U\\). Most \\(U_j\\) appeared to be very similar. Figures (\\ref{fig:u34}) and (\\ref{fig:u25}) show the same entry in consecutive \\( U_i\\) for validation data. Again the network was able to output values very close to the values in the validation dataset. This similarity was typical, showing that the network is able to reasonably approximate the \\(U_j\\). \n \\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{GRU_loss.pdf}\n \\caption{The loss and validation loss from training the global decomposition.}\n \\label{fig:gruLoss}\n \\end{figure}\n \\begin{figure}[h!]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.4\\linewidth]{out.pdf}\n \\caption{Real components of a \\(U_j\\) generated by the NN.}\n \\label{fig:outUi}\n \\end{subfigure}\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.4\\linewidth]{valid.pdf}\n \\caption{The respective known real components of a \\(U_j\\) from the validation dataset.}\n \\label{fig:validUi}\n \\end{subfigure}\n \\caption{A known \\(U_j\\) from the validation data and the \\(U_j\\) generated by the NN in \\(\\mathrm{SU}(8)\\) for global decomposition. Each \\(U_j\\) is close to the identity matrix. 
The shading from blue to orange represents \\( [-1,1] \\).}\n \\end{figure}\n \\begin{figure}[h!]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{u14_25.pdf}\n \\caption{The same real entry from the \\(10\\) \\(U_i\\) from the validation data set (blue), vs the predicted output (red).}\n \\label{fig:u34}\n \\end{subfigure}\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{u34_40.pdf}\n \\caption{The same real entry from the \\(10\\) \\(U_i\\) from the validation data set (blue), vs the predicted output (red).}\n \\label{fig:u25}\n \\end{subfigure}\n \\caption{Real entries of validation \\(U_i\\) vs the \\(U_i\\) generated by the NN. Recall the \\(U_i\\) are not constant, and solve equation (\\ref{eqn:geod}). The behaviour displayed here was typical in other entries.}\n \\end{figure}\n\n\n\\subsection{Local decomposition}\nThe network to implement the local decomposition was trained on \\(U_j\\) generated by choosing a random \\(m\\)-vector of the coefficients \\(c^j_i\\), where each \\(c^j_i \\) was of order \\( 1\/N \\). In total there were \\(5000\\) pairs in the training set, and \\(500\\) in the validation set. Figure (\\ref{fig:denseLoss}) shows the validation and training loss. After \\( 500\\) epochs the network was able to compute the local decomposition to reasonable error (on average 0.16). Figures (\\ref{fig:localoutUi}) and (\\ref{fig:localvalidUi}) show a matrix generated by the neural network and the target matrix.\n \\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Dense_loss.pdf}\n \\caption{The loss and the validation loss from training the local decomposition. 
There was no significant improvement after \\(500\\) epochs.}\n \\label{fig:denseLoss}\n \\end{figure}\n \\begin{figure}[h!]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.4\\linewidth]{outMat.pdf}\n \\caption{Real components of a \\(U_j\\) generated by the NN.}\n \\label{fig:localoutUi}\n \\end{subfigure}\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.4\\linewidth]{localValidM.pdf}\n \\caption{The respective known real components of a \\(U_j\\) from the validation dataset.}\n \\label{fig:localvalidUi}\n \\end{subfigure}\n \\caption{A known \\(U_j\\) from the validation data and the \\(U_j\\) generated by the NN in \\(\\mathrm{SU}(8)\\). These figures are for the local decomposition network. The shading from blue to orange represents \\( [-1,1] \\).}\n \\end{figure}\n\n\\section{Conclusion}\nTraining two neural networks to jointly decompose \\(U\\) into the \\(c^j_i \\) via a two-step approach (global decomposition followed by local decomposition) was found to be successful when the training data was restricted to paths which approximate minimal normal subRiemannian geodesics. This restriction made the training data pairs one-to-one, eliminating redundancy. For the global decomposition, using a neural network consisting of stacked GRU layers allowed for efficient training of the network, with the validation loss of the network approaching its minimum at 500 epochs for \\(\\mathrm{SU}(8)\\). A simple dense network with two hidden layers proved sufficient for the local decomposition. In \\(\\mathrm{SU}(8)\\), the networks were small enough that both were able to be trained on a desktop machine with a single NVidia GTX 1080 GPU. The two-stage decomposition proved more successful than single-stage attempts to form a solution, with the decomposition of a given \\(U\\) into \\(U_j\\) being crucial for this increase in effectiveness. 
This approach to the solution of this problem demonstrates a novel use of neural networks.\n\\\\\n\\\\\nAlthough this approach works well for systems with small numbers of qubits (such as the \\(\\mathrm{SU}(8)\\) case used as an example), it does not scale well with increasing numbers of qubits. This is because the size of the network scales with the number of entries in matrices in \\(\\mathrm{SU}(2^n)\\). Although this is not a significant problem for currently realisable quantum computers, or those in the near future, it will increasingly become problematic as quantum computing continues to advance. To somewhat counteract this, the complexity of the problem can be decreased by restricting the set of \\(U\\) on which the neural network is trained. For example, if the \\(U\\) are sparse, some savings in the size of the network may be made. Investigating this will be increasingly significant, as it will increase the practical usefulness of this approach. \n\\\\\n\\\\\nAs noted in section \\ref{training_data}, the choice of using geodesics to restrict the training data is fairly arbitrary, and as such, there may be different ways of restricting the training data which, while still ensuring the input\/output pairing is one-to-one, may produce a better dataset, improving the accuracy of the networks. This is heavily related to the nature of \\( \\Lambda_0 \\), which is currently not fully understood. Exploring this problem is a possible future avenue of investigation, which may improve the effectiveness of the approach described in this paper. \n\\\\\n\\\\\nFinally, note that training the network is the most computationally expensive part of this approach. Once the network is trained, propagating an input through the network is much more efficient than the conventional optimisation techniques for compiling \\(U\\). 
\n\\\\\n\\\\\nAll data and programs used to produce this work can be found at \\href{https:\/\/github.com\/Swaddle\/nnQcompiler}{\\url{https:\/\/github.com\/Swaddle\/nnQcompiler}}. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia \\footnote{\\url{https:\/\/www.pawsey.org.au\/}}.\n\n\\section{Nomenclature}\n\n{\\renewcommand\\arraystretch{1.0}\n\\noindent\\begin{longtable*}{@{}l @{\\quad=\\quad} l@{}}\n$altitude$ & height above WGS84 ellipsoid \\\\\n$ecef$ & Earth-centered, Earth-fixed coordinate system\\\\\n$ENU$ & East, North, Up coordinate system\\\\\n$\\Delta t_{imu}$ & time between IMU measurements, s\\\\\n$\\bm{\\theta}$ & gyroscope measurements $[\\theta_x, \\theta_y, \\theta_z]^T$, $\\frac{rad}{s}$ \\\\\n$\\omega$& $\\| \\bm{\\theta} \\| $ \\\\\n$q_{1}^{2}$ & unit quaternion describing the rotation from frame 1 to frame 2 \\\\\n$P$ & camera projection matrix \\\\\n$\\pi_{WGS84}$ & projection of a pixel coordinate to a 3D point on the surface of the WGS84 model \\\\\n$\\alpha_{max}$ & max acceptable angle between camera boresight and normal of a landmark \\\\\n$\\delta x$ & amount to shift a point by in pixel space \\\\\n$surface\\_normal()$ & function that finds normal vector at a point on the WGS84 model \\\\\n$angle\\_between()$ & function that finds the angle between a camera boresight and a vector \\\\\n\\end{longtable*}} \\clearpage\n\n\\section{Introduction}\n\\label{sec:intro}\n\nTerrain Relative Navigation (TRN) is a method for absolute pose estimation in a GPS-denied 
environment using a prior map of the environment and onboard sensors such as a camera. TRN \nis commonly desired %\nfor applications requiring accurate pose estimation, such as planetary landings and airdrops, where GPS \nis either unavailable or cannot be relied upon. Due to the high altitude of planetary TRN missions, acquiring non-simulation test data oftentimes proves difficult, \nand thus many datasets used to test TRN systems are from lower altitudes than what the system would actually be used at during a mission. Additionally, \nfor vision-based TRN systems, the large distance between the camera and features on the ground can make position changes of the camera \ndifficult to accurately observe due to the high ratio of meters per pixel in the image plane.\n\nThis paper presents an experimental analysis of performing TRN using a camera-based approach, aided by a gyroscope, for high-altitude navigation by associating mapped landmarks from satellite\nimagery with camera images. We evaluate the performance of both a sideways-tilted and a downward-facing camera on data collected from a World View Enterprises high-altitude balloon (\\cref{fig:balloon_launch}), \nwith data beginning at an altitude of 33 km and descending to ground level with almost 1.5 hours of flight time (\\cref{fig:overview}), and on data collected at speeds up to \n880 km\/h (550 mph) from two sideways-tilted cameras mounted inside the capsule of Blue Origin's New Shepard rocket (\\cref{fig:rocket_all}) during payload mission NS-23. We also demonstrate the \nrobustness of the TRN system to rapid motions of the balloon, which cause fast attitude changes (\\cref{fig:challenges_a})\nand can cause image blur (\\cref{fig:challenges_b}). Additionally, we demonstrate performance in the presence of dynamic camera obstructions \ncaused by cords dangling below the balloon (\\cref{fig:challenges_c}) and clouds obstructing sections of \nthe image (\\cref{fig:challenges_d}). 
\n\nSideways-angled cameras are a common choice for TRN applications when mounting a downward camera is either infeasible due to vehicle constraints or \nwould be occluded by exhaust from an engine on vehicles such as a lander or a rocket. Additionally, for \nplanetary landings, a sideways-angled camera allows for a single camera to be used \nduring both the braking phase when the side of the lander faces the surface and during the \nfinal descent phase when the bottom of the lander faces the surface (\\cref{fig:landing}). We thus use both a \nsideways-angled camera and downward-facing camera during our high-altitude balloon flight \nto separately evaluate the performance of TRN using a camera from each orientation.\n\nWe use Draper's Image-Based Absolute Localization (IBAL) \\cite{Denver17aiaa-airdrop} software for our analysis. \nWhile our dataset has images at a rate of 20Hz, we subsample images by a factor of 10 and hence post-process images at 2Hz in real-time.\nIBAL could additionally be combined with a nonlinear estimator such as an Extended Kalman Filter (EKF) or a fixed-lag smoother through either a loosely coupled approach using IBAL's pose estimate or a tightly-coupled approach using landmark matches~\\cite{Forster17tro}. \nSince the quality of the feature matches generated by IBAL would affect all these methods, here we limit ourselves to evaluating IBAL as an independent system and also analyze the quality of the\nfeature matches. At the same time, we investigate the impact of using a gyroscope in conjunction with IBAL to aid with the challenges of our balloon dataset and show the advantage that \neven a simple sensor fusion method can provide. 
\nFinally, we extend IBAL to incorporate methods to \nefficiently process images when a camera views above the horizon.\n\n\\begin{figure}[hbt!]\n \\centering\n \\begin{subfigure}[t]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=.5\\textwidth]{fig\/balloon2.png}\n \\caption{Release of high-altitude balloon for data collection. \\\\\\ Image: courtesy of World View\\textregistered Enterprises}\n \\label{fig:balloon_launch}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.418\\textwidth}\n \\centering\n \\includegraphics[width=.50\\textwidth]{fig\/rocket_all.png}\n \\caption{Blue Origin's New Shepard rocket carrying Draper experimental payload in the capsule. Image: courtesy of Blue Origin}\n \\label{fig:rocket_all}\n \\end{subfigure}\n \\caption{Data collection platforms used for experimental analysis.}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fig\/image_overview.png}\n \\caption{Example of images collected at different altitudes (32, 23, 14, and 4 km) from the balloon dataset with the downward-facing camera (top)\n and sideways-facing camera (bottom).}\n \\label{fig:overview}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_b.png}\n \\caption{Rapid rotations, here over $90^\\circ$ in 4 seconds. Red dots show ground reference points between top image and bottom image.}\n \\label{fig:challenges_a}\n \\end{subfigure}\n \\unskip\\ \\vrule\\ \n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_c.png}\n \\caption{Image blur (top) due to rapid motion compared to crisp image (bottom).}\n \\label{fig:challenges_b}\n \\end{subfigure}\n \\unskip\\ \\vrule\\ \n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_a.png}\n \\caption{Moving cords in the image. 
Top and bottom images showing example range of cord motion.}\n \\label{fig:challenges_c}\n \\end{subfigure}\n \\unskip\\ \\vrule\\ \n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_d.png}\n \\caption{Images partially occluded by clouds.}\n \\label{fig:challenges_d}\n \\end{subfigure}\n \\caption{Different types of TRN challenges in the balloon dataset.}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{fig\/landing.png}\n \\caption{Demonstration of a sideways-angled camera viewing the terrain and being used \n during the braking phase, pitch-up maneuver, and terminal descent phase.}\n \\label{fig:landing}\n\\end{figure}\n %\n\\newpage\n\n\\section{Related Work}\n\nWe present an overview of existing Terrain Relative Navigation approaches and experiments, noting that our primary contributions are two experiments that\nallow us to perform in-depth analysis of vision-based terrain relative navigation on challenging high-altitude data and on data from a high-speed vehicle. \nTRN methods primarily use either cameras, radar, or lidar as an exteroceptive sensor. The majority of \nearly TRN methods, such as the Mars Science Laboratory \\cite{Katake10-landingNav} and NASA's ALHAT Project (\\cite{Brady11gnc-alhat}, \\cite{Amzajerdian12ac-lidarTRN}), \nuse radar or lidar. However, due to the high power \nand weight budget of radar and lidar, cameras have become an active area of exploration for more recent TRN systems.\n\nThe seminal work of Mourikis \\textit{et al.} \\cite{Mourikis09tro-EdlSoundingRocket} describes a visual-inertial navigation method for \nEntry, Descent, and Landing (EDL) using an Extended Kalman Filter (EKF) with matched landmarks and\ntracked feature points in an image. 
They use inertial navigation results from their entire sounding rocket launch with an apogee of 123 km, and leverage visual methods after the vehicle reaches altitudes below 3800 m. Johnson and Montgomery~\\cite{Johnson09ac-trnReview}\npresent a survey of TRN methods that use either image or lidar data to detect the location of known landmarks.\n\nSingh and Lim~\\cite{Singh12aiaa-trnEKF} \ndemonstrate a visual TRN approach leveraging an EKF for lunar navigation using known crater locations as landmarks. Recently, Downes \\textit{et al.} \\cite{Downes20aiaa-lunarTRN} \npresent a deep learning method for lunar crater detection to improve TRN landmark tracking. \nThe Lander Vision System (LVS) \\cite{Johnson17-lvs} used for the Mars 2020 mission uses vision-based landmark matching starting at an altitude of 4200 m above the \nmartian surface with the objective of achieving less than 40 m error with respect to the landing site. \nOur analysis focuses on higher altitudes and on a larger span of altitudes (4.5 km to 33 km for the \nballoon dataset).\n\nDever \\textit{et al.}\\cite{Denver17aiaa-airdrop} demonstrate visual navigation for guided parachute airdrops using IBAL and a \nMulti-State Constraint Kalman Filter (MSCKF). Additionally, the work incorporates\na lost-robot approach to recover from a diverged pose estimate and to initialize the system if the pose is unknown. \nSteffes \\textit{et al.} \\cite{Steffes19aiaa-trnEDL} present a theoretical analysis of three types of visual terrain navigation \napproaches, namely template matching, SIFT \\cite{lowe2004ijcv-distinctive} descriptor matching, and crater matching. \nThe work of Lorenz \\textit{et al.} \\cite{Lorenz17ac-osirisrex} demonstrates \nvision-based terrain relative navigation for a touch-and-go landing on an asteroid for the OSIRIS-REx mission. Due to extreme computation limits,\nthey used a maximum of five\n manually selected mapped template features per frame. 
Mario \\textit{et al.} \\cite{Mario22psj-osirisRexTesting} provide additional discussion of the ground tests \n used to prepare the TRN system for the OSIRIS-REx mission. Our balloon dataset has much faster rotational motion than \n what was present during the OSIRIS-REx mission, along with camera obstructions.\n\nSteiner \\textit{et al.} \\cite{Steiner15ac-landmarkSelection} present a utility-based approach for optimal landmark selection and demonstrate performance \non a rocket testbed flight up to 500 m. As shadows and variable lighting conditions are a well-known challenge for TRN, \nSmith \\textit{et al.} \\cite{Smith22aiaa-blenderTRN} demonstrate the ability to use Blender to enhance a satellite database for different lighting conditions. %\n\n\\section{Data Collection}\n\nThe collection of both datasets used in this paper was supported by the NASA Flight Opportunities Program. The high-altitude balloon dataset \nwas designed to test TRN on a wide range of high-altitude data and occurred in April of 2019. The New Shepard dataset was intended to \ntest TRN on a high-speed vehicle with a flight profile similar to that of a precision landing and occurred in August of 2022.\n\n\\subsection{Balloon Flight}\n\nWe captured downward and sideways camera images along with data from a GPS and an inertial measurement unit (IMU) on board a World View \nEnterprises high-altitude balloon, shown in \\cref{fig:balloon_launch}, \nwith data recorded up to an altitude of 33 km.\nWe used FLIR Blackfly S Color 3.2 MP cameras for both the downward and sideways facing views, using a 12 mm EFL lens and a 4.5 mm EFL lens, respectively. \nThe field of view (FOV) of the downward and sideways cameras with their respective lenses is $32^{\\circ}$ and $76^{\\circ}$.\nBoth cameras, along with the IMU (Analog Devices ADIS16448) \nand data logging computer, are self-contained inside the Draper Multi-Environment Navigator (DMEN) package, shown in \\cref{fig:hardware}. 
\nBoth cameras generated images at 20 Hz with a resolution of $1024 \\times 768$. The IMU logged data at 820 Hz. \n\nAs mentioned in \\cref{sec:intro}, some TRN applications ---such as \nplanetary landing--- might prefer using a sideways-angled camera, while other applications \n---such as \nhigh-altitude drone flights--- may prefer a downward-facing camera. Therefore, we collect data from \nboth a downward and a sideways-angled camera to allow IBAL to be evaluated at both camera \nangles. Some planetary landings may also desire a downward-facing camera since it allows the boresight of the camera \nto be normal to the surface during the terminal descent phase, \nsuch as was done for OSIRIS-REx \\cite{Lorenz17ac-osirisrex}.\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=.4\\textwidth]{fig\/hardware.png}\n \\caption{Draper Multi-Environment Navigator (DMEN) package: data collection package containing sideways and downward facing cameras, IMU, and logging computer.}\n \\label{fig:hardware}\n\\end{figure}\n\n\\subsection{Blue Origin New Shepard Flight}\n\nWe captured images from two sideways-angled cameras with 12.5 mm lenses on opposite sides inside the New Shepard capsule that \nlook out the capsule windows. Having two cameras was intended to allow us to study the effects of different cloud cover, terrain, and angle to the sun. \nWe will refer to these cameras as camera 1 and camera 2. \nWe additionally log IMU data from an Analog Devices ADIS16448, and \ntelemetry from the capsule, which served as ground truth for our experiment. Data was logged with a NUC mounted inside a payload locker in the capsule.\nBoth cameras generated images at 20 Hz with a resolution of $1024 \\times 768$ and FOV of $31^{\\circ}$. \nThe IMU logged data at 820 Hz. 
The rocket reached speeds up to 880 km\/h and an altitude of 8.5 km before an anomaly occurred during the NS-23 flight \nwhich triggered the capsule escape system.\n\n\\Cref{fig:payload_blue} shows our payload locker, mounted inside the New Shepard capsule, containing the NUC, IMU, and a power converter. An Ethernet cable and two USB cables transfer \ntelemetry data from the capsule and data from the cameras to the NUC, respectively.\n\n\\Cref{fig:cam_mount_a} shows camera 2 mounted inside the capsule at a sideways angle and \n\\cref{fig:cam_mount_b} shows the location of both cameras inside the capsule on opposite sides while New Shepard \nis on the launch pad. Both cameras are mounted at the same tilt angle such that they can view the terrain while not \nhaving their FOVs obstructed by components on the rocket. Additionally, the mounting angle was selected to reduce \nthe effects of distortion caused by the windows, and to ensure the cameras did not come in direct \ncontact with the windows.\n\nDistortion effects from the windows were addressed by calibrating the intrinsic parameters \nof the camera while the camera was mounted in the capsule (i.e., a calibration board was positioned outside \nthe capsule window). We used the Brown-Conrady model \\cite{Brown66-brownConrady}, which helps account for the decentering distortion caused by the window \nin addition to distortion from the camera lens. Further evaluation of the effects of distortion caused \nby the window of the capsule is left as a topic for future work.\n\n\\begin{figure}[hbt!]\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/payload1.png}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/payload2.JPG}\n \\end{subfigure}\n \\caption{Payload locker inside the New Shepard capsule containing a NUC, IMU, and DC\/DC Converter. 
Images courtesy of Blue Origin.}\n \\label{fig:payload_blue}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam_capsule.png}\n \\caption{}\n \\label{fig:cam_mount_a}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/rocket_cams.png}\n \\caption{}\n \\label{fig:cam_mount_b}\n \\end{subfigure}\n \\caption{Cameras 1 and 2 mounted inside the New Shepard capsule looking out the capsule windows. Images courtesy of Blue Origin.}\n \\label{fig:cameras_in_window}\n\\end{figure} %\n\n\\section{Terrain Relative Navigation Method}\n\\label{sec:method}\n\nWe use Draper's IBAL software~\\cite{Denver17aiaa-airdrop} to perform TRN \nfor our datasets. A database of image templates is created in advance from satellite imagery and stored with known \npixel correspondence to the world frame. Using satellite images and elevation maps from USGS~\\cite{usgs}, we automatically select patches of interest \nfrom the satellite images and create a collection of templates that serve as 3D landmarks. For each camera image, IBAL uses an initial guess of \nthe camera pose to predict \nwhich templates from the database are in the field of view (FOV) of the camera using a projection from \nthe image plane to an ellipsoidal model of the planet. The templates are then matched to the camera \nimage using cross correlation. The resulting match locations are passed to a 3-point RANSAC \\cite{Fischler81} (using a Perspective-Three-Point method as a minimal solver) to reject outliers. \nThe output is a list of the inlier matches, their pixel location in the image, and their known location \nin the world frame that can be passed to a nonlinear estimator or fixed-lag smoother for tightly-coupled pose estimation. 
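IBAL's correlator is proprietary and is not reproduced here; purely as an illustrative sketch of the matching step described above, a zero-mean normalized cross-correlation search over candidate offsets can be written as follows (a brute-force scan; IBAL's actual implementation, template warping, and search strategy may differ):

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size 2D patches."""
    n = len(template) * len(template[0])
    mp = sum(map(sum, patch)) / n      # patch mean
    mt = sum(map(sum, template)) / n   # template mean
    num = dp = dt = 0.0
    for prow, trow in zip(patch, template):
        for p, t in zip(prow, trow):
            num += (p - mp) * (t - mt)
            dp += (p - mp) ** 2
            dt += (t - mt) ** 2
    return num / math.sqrt(dp * dt) if dp > 0.0 and dt > 0.0 else 0.0

def match_template(image, template):
    """Exhaustively score every offset; return (row, col, score) of the best."""
    th, tw = len(template), len(template[0])
    best = (0, 0, -2.0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(patch, template)
            if score > best[2]:
                best = (r, c, score)
    return best
```

Because the score is zero-mean and normalized, it is invariant to brightness offset and positive gain, which is what makes correlating satellite templates against flight imagery workable under changing illumination. The best-scoring location per template is the kind of measurement that is then handed to the RANSAC stage.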
\nA secondary output of RANSAC is an absolute pose estimate found by using the Perspective-n-Point (PnP) \nalgorithm on the set of inliers. \n\nInstead of a tightly-coupled approach, we use a simpler method to evaluate performance on the balloon and New Shepard datasets. \nFor the balloon dataset, we take the PnP absolute pose estimate directly from IBAL, \n forward propagate it with the gyroscope measurements, and use it at the next time step as a pose guess for IBAL. \n We do not use accelerometer data since, \nover a short time span, most scene changes in the image frame for the balloon dataset \nare due to rotations. This\nis a consequence of the high altitude and hence the large distance between the camera and the Earth's surface. Using the gyroscope to propagate the rotation also allows for\nreduced computation since we are able to down-sample our camera data by a factor of 10 (2 Hz image input to IBAL). \nAdditionally, the gyro allows for robust handling of rapid motions of the balloon and of images with large obstructions\nfrom cords, which make generating landmark matches unreliable. An ablation study on incorporating the gyroscope with IBAL is provided in \\cref{sec:gyro_ablation}.\n Since the New Shepard capsule does not experience rapid rotations like the balloon, we did not find it \nnecessary to use the gyroscope to forward propagate the pose estimate for the New Shepard dataset.\n\nWe propagate the rotation estimate of the vehicle, $q^{cam_T}_{ecef}$ (i.e., the orientation of the earth-centered, earth-fixed frame \nw.r.t. the camera frame at time $T$, represented as a unit quaternion), to the time of the next processed image ($T+1$) \nwith the gyro using a second-order strapdown quaternion expansion \\cite{Mckern68mit-transforms}. 
\nUsing the 3-axis gyro rate measurements $\\bm{\\theta}$ (rad\/s) and their magnitude $\\omega = \\|\\bm{\\theta}\\|$, we compute the orientation $q_{IMU_{t+1}}^{IMU_t}$ between gyro measurements \nusing the following equation\n\\begin{equation}\n \\label{eq:quat_gyro1}\n q_{IMU_{t+1}}^{IMU_t}= \\left[1 - \\frac{\\omega^2 \\Delta t_{IMU}^2}{8}, \\; \\frac{\\bm{\\theta}^T \\Delta t_{IMU}}{2}\\right]\n\\end{equation}\nwhere $t+1$ and $t$ represent the times of consecutive IMU measurements occurring $\\Delta t_{IMU}$ seconds apart.\n\n\nUsing the rotations $q_{IMU_{t+1}}^{IMU_t}$ between consecutive IMU timestamps, we \ncan compute the relative rotation $q_{cam_{T+1}}^{cam_T}$ between the camera poses at consecutive images collected at times $T$ and $T+1$:\n\\begin{equation}\n \\label{eq:quat_gyro2}\n q_{cam_{T+1}}^{cam_T} = \\prod_{t = T}^{T+1} q_{IMU}^{cam} \\otimes q_{IMU_{t+1}}^{IMU_t} \\otimes (q_{IMU}^{cam})^{-1}\n\\end{equation}\nwhere $\\otimes$ is the quaternion product and $q_{IMU}^{cam}$ is the static transform from the IMU frame to the camera frame.\n\nFinally, we can compute the rotation estimate $q^{cam_{T+1}}_{ecef}$ of the vehicle at time $T+1$:\n\\begin{equation}\n \\label{eq:quat_gyro3}\n q^{cam_{T+1}}_{ecef} = (q_{cam_{T+1}}^{cam_T})^{-1} \\otimes q^{cam_{T}}_{ecef}\n\\end{equation}\n\nWe use simple yet effective logic for handling short segments in our datasets when PnP is unable to produce a reliable pose, which can be caused by image obstructions \nor by motion blur from rapid vehicle motion. If PnP RANSAC selects a small set of inliers (i.e., fewer than 8) or if the pose is clearly infeasible (i.e., an altitude change between \nprocessed images greater than 450 m for the balloon dataset), we reject the \npose estimate, keep forward propagating the pose using gyroscope data, and run IBAL with the next available image, ignoring the down-sampling rate. 
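As a concrete illustration, the propagation in \cref{eq:quat_gyro1,eq:quat_gyro2} can be sketched as follows. For brevity the sketch assumes the IMU-to-camera extrinsic $q_{IMU}^{cam}$ is the identity, so \cref{eq:quat_gyro2} reduces to a running product of gyro increments; it is a minimal sketch, not the flight implementation.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return [aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw]

def gyro_step(theta, dt):
    """Second-order strapdown increment of eq. (quat_gyro1) for a rate
    vector theta (rad/s) over one IMU period dt: [1 - (w*dt)^2/8, theta*dt/2]."""
    w = math.sqrt(sum(v * v for v in theta))
    return [1.0 - (w * dt) ** 2 / 8.0] + [v * dt / 2.0 for v in theta]

def propagate(q, rates, dt):
    """Accumulate gyro increments between two processed-image times.
    With an identity IMU-to-camera extrinsic, eq. (quat_gyro2) reduces
    to this running quaternion product."""
    for theta in rates:
        q = qmul(q, gyro_step(theta, dt))
    return q
```

For example, propagating a constant 0.5 rad/s rate about one axis at the 820 Hz IMU rate for one second recovers a 0.5 rad rotation to within the truncation error of the second-order expansion.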
%\n\n\\section{Addressing Challenges of High-Altitude Images}\n\nWe apply simple and effective methods to address two common challenges we encountered with high-altitude images: determining the projection\nto the ellipsoid when the camera views the horizon, and reducing the number of potential landmarks from the database that have a lower probability of \ngenerating good matches when a large number of landmarks is in view of the camera. \n\nWhen the horizon is in view of the camera, as is true for the higher-altitude images from the sideways camera for the balloon dataset \n(\\cref{fig:overview}), our \nbaseline method of determining the camera's viewing bounds of the planet's surface is insufficient. Our baseline method is to use an \ninitial estimate of the camera's pose to project each corner of the image to the ellipsoid model. From this, we can create a bounding box on the \nellipsoid defined by a minimum and maximum latitude and longitude. However, this is ill-defined if at least one corner of the image falls \nabove the horizon. \nTo resolve this case, if the projection of a corner point does not intersect the ellipsoid, we incrementally \nmove the point (in image space) towards the opposite corner of the image until it intersects the ellipsoid (\\cref{fig:works}). This process is summarized in \\cref{alg:horizon_detection}. \nThis process proved effective for our dataset, despite the fact that the approach could fail (see line 15 in~\\cref{alg:horizon_detection}) when the projection of the ellipsoid does not intersect the main diagonals of the image (e.g., when the camera is too far away from Earth or has a large tilt angle).\n\n\n\n\\begin{figure}[H]\n %\n \\centering\n \\includegraphics[width=.3\\textwidth]{fig\/works.png}\n \\caption{Example of our horizon detection method finding the horizon of an ellipsoidal body. 
Each corner point of the image \n is incremented towards the opposite corner until the ellipsoid body is intersected.}\n \\label{fig:works}\n %\n %\n %\n %\n %\n %\n %\n %\n %\n %\n\\end{figure}\n\n\\begin{algorithm}\n \\caption{Horizon Detection} \n \\label{alg:horizon_detection}\n \\small\n \\begin{algorithmic}[1]\n \\State \\textbf{Inputs:} \n \\State \\indent \\indent P \\Comment{estimate of camera projection matrix (containing intrinsic and extrinsic parameters)}\n \\State \\indent \\indent $\\pi_{WGS84}$ \\Comment{projection of a pixel coordinate to a 3D point on the surface of the WGS84 model}\n \\State \\indent \\indent $\\delta x$ \\Comment{amount to shift a point by in pixel space (default 10 pixels)}\n\n \\State \\textbf{Output:} $image\\_corners$ \\Comment{set of four pixel coordinates bounding image}\n \n \\For{$x_{corner} \\in image\\_corners$}\n \\While{True}\n \\State X $\\gets \\pi_{WGS84}(P, x_{corner})$\n \\If{X intersects ellipsoid} \n %\n \\State break \\Comment{found valid image boundary}\n \\Else\n \\State increment $x_{corner}$ towards opposite corner by $\\delta x$\n \\EndIf\n \\If{$x_{corner}$ outside image}\n \\State \\textbf{return} error \\Comment{failed to find horizon boundary}\n \\EndIf\n \\EndWhile\n \\EndFor\n \\State \\textbf{return} $image\\_corners$\n \\end{algorithmic}\n\\end{algorithm}\n\nSince we select a maximum number of landmarks based on the landmarks in our satellite database that are in view of the camera, we need additional logic to\navoid the possibility of selecting landmarks that mostly fall near the horizon, since these are unlikely to lead to good matches. \nThe ratio\nof meters per pixel grows rapidly as we approach the horizon, and image matching becomes difficult or impossible near\nthe horizon line due to glare or the heavy warping needed to match a shallow surface angle. Additionally, there \nis significant atmospheric distortion. 
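The corner-marching loop of \cref{alg:horizon_detection} can be sketched as follows. The \texttt{intersects} predicate stands in for the ray--ellipsoid test behind $\pi_{WGS84}$ and is supplied by the caller; the flat "horizon" used in the usage example below is purely illustrative, not a real ellipsoid test.

```python
import math

def march_corner(corner, opposite, intersects, delta=10.0, width=1024, height=768):
    """March a corner pixel toward the opposite corner in steps of `delta`
    until `intersects((x, y))` reports that the projected ray hits the
    ellipsoid (the loop of Alg. 1). Returns the adjusted pixel, or None if
    the point leaves the image first (the error return on line 15)."""
    x, y = corner
    ox, oy = opposite
    dist = math.hypot(ox - x, oy - y)
    if dist == 0.0:
        return corner if intersects(corner) else None
    ux, uy = (ox - x) / dist, (oy - y) / dist  # unit step direction
    while not intersects((x, y)):
        x, y = x + delta * ux, y + delta * uy
        if not (0.0 <= x < width and 0.0 <= y < height):
            return None  # failed to find a horizon boundary
    return (x, y)

# Illustrative stand-in predicate: pretend rays below image row 400 hit the ground.
ground = lambda p: p[1] > 400.0
adjusted = march_corner((0.0, 0.0), (1023.0, 767.0), ground)
```

A corner that already views the surface is returned unchanged, so the same routine handles all four corners uniformly.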
\nRemoving landmarks near the horizon helps avoid unnecessary computation and reduces the number of outliers we pass to RANSAC. Towards this goal,\n we set a maximum acceptable angle between the boresight of the \ncamera and the surface normal of \na landmark and reject landmarks that exceed this threshold. To increase the number of potential landmarks that \nmeet our angle requirement, we filter out sections of the camera's FOV projection to the ellipsoid that are unlikely \nto produce landmarks that meet the angle threshold. \nThis filtering method follows our prior method for intersecting the ellipsoid and uses \nsimilar logic. Starting at the first point near each image corner that views the ellipsoid, we find the surface normal by projecting \nfrom the image plane to the ellipsoid and move towards the opposite corner of the image\nuntil the angle requirement is met. This process is summarized in \\cref{alg:landmark_angle} and a corresponding ablation \nis shown in \\cref{fig:angle_ablation}. Notice that \nwithout \\cref{alg:landmark_angle}, more landmarks are selected near the horizon (\\cref{fig:landmark_angle_no_angle}), \nwhere template matching is more difficult, resulting in more outliers. 
Using \\cref{alg:landmark_angle} allows IBAL to target \nregions of the image with more distinguishable features for matching, which results in a higher concentration of inliers \n(\\cref{fig:landmark_angle_with_angle}).\n\n\n\n\\begin{algorithm}\n \\small\n \\caption{Landmark Angle Filter} \n \\label{alg:landmark_angle}\n \\begin{algorithmic}[1]\n \\State \\textbf{Inputs:} \n \\State \\indent \\indent P \\Comment{estimate of camera projection matrix (containing intrinsic and extrinsic parameters)}\n \\State \\indent \\indent $\\pi_{WGS84}$ \\Comment{projection of a pixel coordinate to a 3D point on the surface of the WGS84 model}\n \\State \\indent \\indent $\\alpha_{max}$ \\Comment{max acceptable angle between camera boresight and normal of a landmark}\n \\State \\indent \\indent $\\delta x$ \\Comment{amount to shift a point by in pixel space (default 10 pixels)}\n \n \\State \\textbf{Output:} $image\\_corners$ \\Comment{set of four pixel coordinates bounding image}\n\n \\State surface\\_normal() $\\gets$ function that finds normal vector at a point on the WGS84 model\n \\State angle\\_between() $\\gets$ function that finds the angle between a camera boresight and a vector \n \\For{$x_{corner} \\in image\\_corners$}\n \\While{True}\n \\State X $\\gets \\pi_{WGS84}(P, x_{corner})$\n \\State $x_n \\gets surface\\_normal(X)$\n \\State $\\alpha \\gets angle\\_between(P, x_n)$\n \\If{$\\alpha \\leq \\alpha_{max}$}\n \\State break \\Comment{found valid image boundary}\n \\Else\n \\State increment $x_{corner}$ towards opposite corner by $\\delta x$\n \\EndIf\n \\If{$x_{corner}$ outside image}\n \\State \\textbf{return} error \\Comment{failed to meet landmark angle requirement}\n \\EndIf\n \\EndWhile\n \\EndFor\n \\State \\textbf{return} $image\\_corners$\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/without_angle.png}\n \\caption{Higher 
concentration of outliers near the horizon without using the landmark angle filter. Ratio of inliers to outliers: 0.3}\n \\label{fig:landmark_angle_no_angle}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/with_angle.png}\n \\caption{Higher concentration of inliers using the landmark angle filter. Ratio of inliers to outliers: 1.3}\n \\label{fig:landmark_angle_with_angle}\n \\end{subfigure}\n \\caption{Ablation study for \\cref{alg:landmark_angle}, which filters regions of the image for landmark matching \n based on the angle between the surface and the camera boresight. This leads to a higher ratio of inliers to outliers, reducing computation and improving accuracy. \n Inlier matches are shown in green and outliers are shown in red. \n Blue shows the initial estimate of each landmark location based on the initial pose estimate before utilizing cross correlation. Images are from the sideways camera \n from the balloon dataset.}\n \\label{fig:angle_ablation}\n\\end{figure} %\n\n\\section{Experiment Results}\n\n\\subsection{Balloon Flight}\n\nWe present results from running IBAL with both a sideways-tilted and a downward-facing camera aided by gyroscope measurements at altitudes ranging from 33 km to 4.5 km. \nNote that we use the term altitude to mean height above the WGS84 ellipsoid. \nDuring this time, the system is descending under a parachute. \nWe split our data into \n7 segments, each about 15 minutes long, and evaluate our estimated TRN position by comparing with GPS. We manually reseed IBAL at the start of each segment. \nResults are defined with respect to an East North Up (ENU) frame centered at the landing site of the balloon. \n\\Cref{fig:all_trajectory} shows the ground truth trajectory from GPS compared to the trajectory estimates from IBAL with downward- and \nsideways-facing cameras. 
The corresponding plot of absolute position \nerror is shown in \\cref{fig:all_trajectory_error} for each of the East, North, and Up axes. \nIBAL is able to achieve an average position error along the Up axis of 78 m and 66 m for the entire trajectory with the downward-facing and sideways-tilted camera, \nrespectively, while the balloon travels almost 30 km in elevation.\nIBAL achieves 207 m and 124 m of average position error for the East and North axes across the entire trajectory for the \ndownward-facing camera, and likewise an average error of 177 m and 164 m along the East and North axes for the sideways camera, \nwhile the balloon traverses well over 100 km laterally. \n\\Cref{fig:all_trajectory_total_error} shows total absolute error (defined as the Euclidean distance between the estimate and the GPS position) with respect to flight time and with respect to height above ground level. \nAverage absolute position error for the entire trajectory is 287 m and 284 m for the downward-facing and sideways-tilted camera, respectively. \nSpikes in position estimates could be diminished \nusing filtering methods such as coupling with an accelerometer or with visual odometry as mentioned in \\cref{sec:method}. \nWe run IBAL in real-time on a laptop with an Intel Xeon 10885M CPU. \nWhile IBAL is designed to run in real-time \non flight hardware, we do not make showcasing run-time performance a focus of this paper.\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth, height=0.40\\textheight]{fig\/traj_all.png}\n \\caption{IBAL+gyro trajectory estimate vs. GPS for altitude range of 33 km to 4.5 km on balloon dataset. \n Vertical lines show start of each new data segment.}\n \\label{fig:all_trajectory}\n\\end{figure}\n\n\\newpage\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth, height=0.40\\textheight]{fig\/error_all.png}\n \\caption{IBAL+gyro absolute position error for altitude range of 33 km to 4.5 km on balloon dataset. 
Vertical lines show start of each new data segment.}\n \\label{fig:all_trajectory_error}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\textwidth, height=0.37\\textheight]{fig\/total_error.png}\n \\caption{IBAL+gyro total trajectory error vs. time and vs. height above ground level on balloon dataset. Error tends to \n show slight decrease in magnitude at lower altitudes. Vertical lines show start of each new data segment.}\n \\label{fig:all_trajectory_total_error}\n\\end{figure}\n\n\\newpage\n\n\nWe also provide an analysis of the match correlation for both cameras for the entire balloon dataset. \n\\Cref{fig:all_matches_down} and \\cref{fig:all_matches_side} show the number of inliers and outliers \nfor the downward and sideways facing cameras. After estimating the location of a landmark in the image \nwith cross correlation and peak finding, inliers and outliers are labeled using PnP and RANSAC. \nThere are generally more inliers than outliers, which shows the effectiveness of the correlation approach and \nthat IBAL is able to perform well in the presence of outliers. \nWe observe a greater number of inliers with the \ndownward-facing camera than with the sideways-tilted camera.\n\nAdditionally, \\cref{fig:histograms} shows a histogram of the pixel error for the inliers and outliers \ndetermined by PnP and RANSAC for both the downward and sideways-tilted cameras. Inlier pixel error is distributed such that \nmost inliers have between 0 and 1 pixel of error as determined by PnP and RANSAC. \nThere is an increase in the ratio of outliers to inliers at lower altitudes. This is due in part to shadows, lack of distinct texture on the ground, and \nregions with sparse landmark coverage in our database. 
\nDepending on mission requirements, this issue can be greatly reduced \nduring the landmark database creation process, such as by optimizing for landmark template size, ensuring sufficient landmark coverage at low altitudes \nfor all phases of a flight, and by baking shadows into the database as was demonstrated in \\cite{Smith22aiaa-blenderTRN}. \nHowever, for the purposes of the balloon experiment in this paper, we determined our database to be sufficient.\n\nLastly, we provide visual examples of IBAL matches on a selected subset of frames from the downward and sideways facing cameras. \n\\Cref{fig:down_match_135} shows landmark matches for the downward camera at 13.5 km with inliers shown in green and \noutliers shown in red. Blue dots show the initial estimate of the landmark locations in the image obtained from IBAL's \nprior pose estimate propagated with the gyro, before matching with cross correlation. \n\\Cref{fig:down_match_23} shows matches for the downward camera at 23 km. \nCords from the high-altitude balloon are partially in view, but incorrect matches caused by the cords are correctly \nrejected as outliers. 
\\Cref{fig:side_match_135} and \\cref{fig:side_match_23} show results for the sideways-tilted camera \nat 13.5 km and 23 km.\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{1.0\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.195\\textheight]{fig\/matches_all_down.png}\n \\caption{IBAL landmark matching results for downward-facing camera}\n \\label{fig:all_matches_down}\n \\end{subfigure}\n\n \n \\begin{subfigure}[t]{1.0\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.195\\textheight]{fig\/matches_all_side.png}\n \\caption{IBAL landmark matching results for sideways-tilted camera}\n \\label{fig:all_matches_side}\n \\end{subfigure}\n \\caption{IBAL+gyro number of inliers and outliers for sideways-tilted and downward-facing cameras on balloon dataset for altitude range of 33 km to 4.5 km \n as determined by PnP and RANSAC. Vertical lines show start of each new data segment. The downward camera tends to have more matches than the sideways-tilted camera.}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\textbf{Downward Camera}\\par\\medskip\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_1.png}\n \\caption{altitude range: 33 km to 32.5 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\textbf{Sideways Camera}\\par\\medskip\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_1.png}\n \\caption{altitude range: 33 km to 32.5 km}\n \\end{subfigure}\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_2.png}\n \\caption{altitude range: 32.5 km to 29 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_2.png}\n \\caption{altitude range: 32.5 km to 29 km}\n 
\\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_3.png}\n \\caption{altitude range: 29 km to 23 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_3.png}\n \\caption{altitude range: 29 km to 23 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_4.png}\n \\caption{altitude range: 23 km to 18 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_4.png}\n \\caption{altitude range: 23 km to 18 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_5.png}\n \\caption{altitude range: 18 km to 14 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_5.png}\n \\caption{altitude range: 18 km to 14 km}\n \\end{subfigure}\n\n \n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_6.png}\n \\caption{altitude range: 14 km to 9 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_6.png}\n \\caption{altitude range: 14 km to 9 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_7.png}\n \\caption{altitude range: 9 km to 4.5 km}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, 
 height=0.1\\textheight]{fig\/hist_side_7.png}\n \\caption{altitude range: 9 km to 4.5 km}\n \\end{subfigure}\n\n\\caption{Inlier and outlier pixel error for each segment of balloon dataset. Error is the reprojection error determined by PnP and RANSAC. \nLeft column: downward camera; right column: sideways camera. \nRows correspond to different altitude ranges.\n}\n\\label{fig:histograms}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/chords_outlier_rejection_13_5.png}\n \\caption{Downward Camera, altitude 13.5 km}\n \\label{fig:down_match_135}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/chord_outlier_rejection_23.png}\n \\caption{Downward Camera, altitude 23 km}\n \\label{fig:down_match_23}\n \\end{subfigure}\n\n \n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/side_cam_matches_13_5.png}\n \\caption{Sideways Camera, altitude 13.5 km}\n \\label{fig:side_match_135}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/side_cam_matches_23.png}\n \\caption{Sideways Camera, altitude 23 km}\n \\label{fig:side_match_23}\n \\end{subfigure}\n\n \\caption{IBAL landmark match analysis on balloon dataset. Inlier matches are shown in green and outliers are shown in red. \n Points in blue show the initial estimate of each landmark location based on the initial pose estimate before utilizing cross correlation. Lines connect the blue estimate \n to the calculated match location. Landmark locations covered \n by the cords are correctly rejected as outliers (top row).}\n\\end{figure}\n\n\\subsection{Blue Origin New Shepard Flight}\n\nWe present results from running IBAL with two cameras (referred to as camera 1 and camera 2) mounted inside the Blue Origin New Shepard capsule. 
\nWe only show results up to an altitude of approximately 8.5 km \nsince there was an anomaly that occurred during flight NS-23 which triggered the capsule escape system. \nNevertheless, we are still able to show IBAL working while the rocket achieves \nnominal speeds up to 880 km\/h (550 mph). We seed the initial input image to IBAL using telemetry from New Shepard and then use the previous IBAL pose estimate \nas the initial pose guess for the next timestep. Unlike the balloon experiment, we do not incorporate the gyroscope measurement to forward propagate the \npose estimate since the capsule does not experience significant rotations during its ascent.\n\nWe show a similar series of analyses of trajectory error and landmark matches as was presented for the high-altitude balloon experiment. \nResults are defined with respect to an ENU frame centered at the launch pad. \n\\Cref{fig:blue_error} shows absolute error for each of the East, North, and Up axes by comparing the position estimate of IBAL with GPS. \n\\Cref{fig:blue_total_error} shows total absolute error with respect to flight time and with respect to height above ground level. IBAL's total position \nerror is below 120 m for the duration of the dataset, and the error with camera 2 is as low as 10 m when the rocket is at an altitude of 3.5 km. \nAverage absolute position error for the entire trajectory is 54 m and 34 m for camera 1 and camera 2, respectively. Both cameras show similar performance with IBAL, and \nslight differences in performance can be explained by the cameras being located on opposite sides of the capsule (and thus viewing different terrain) \nand by potential unaccounted-for distortion effects in the camera calibration. 
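For concreteness, the ENU error metric used in these plots can be sketched as follows; the reference latitude and longitude would be those of the launch pad (for the balloon results, the landing site), and the rotation is the standard ECEF-to-ENU direction cosine matrix. This is a minimal sketch of the metric, not the analysis code used for the figures.

```python
import math

def ecef_to_enu(dx, dy, dz, lat, lon):
    """Rotate an ECEF offset (meters) into East-North-Up axes at the
    reference latitude/longitude (radians), via the standard DCM."""
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    e = -so * dx + co * dy
    n = -sl * co * dx - sl * so * dy + cl * dz
    u = cl * co * dx + cl * so * dy + sl * dz
    return e, n, u

def total_error(est_ecef, gps_ecef, lat, lon):
    """Total absolute error: Euclidean distance between the TRN estimate
    and the GPS position, expressed in the ENU frame."""
    dx, dy, dz = (a - b for a, b in zip(est_ecef, gps_ecef))
    e, n, u = ecef_to_enu(dx, dy, dz, lat, lon)
    return math.sqrt(e * e + n * n + u * u)
```

Since the ENU rotation is orthonormal, the total error is the same in ECEF and ENU; the per-axis East, North, and Up errors are what the axis-wise plots report.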
\n\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fig\/blue_error.png}\n \\caption{IBAL absolute position error on New Shepard dataset: altitude range of 3.5 km to 8.5 km.}\n \\label{fig:blue_error}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fig\/blue_error_time_alt}\n \\caption{IBAL total trajectory error vs. time and height above ground level on New Shepard dataset. Total error is less than 120 m while reaching \n speeds up to 880 km\/h and a peak altitude of 8.5 km.}\n \\label{fig:blue_total_error}\n\\end{figure}\n\nWe also provide an analysis of match correlation for both cameras. Since each processed frame had at most two matches identified as outliers \nby PnP and RANSAC, we do not include match analysis for outliers in our results. \n\\Cref{fig:blue_inliers_outliers_1} and \\cref{fig:blue_inliers_outliers_2}\nshow the number of inliers for both cameras. \n\\Cref{fig:blue_histogram} shows a histogram of the pixel error for the inliers determined by PnP and \nRANSAC for both cameras. Similarly to the results from the balloon flight, the pixel error for a majority of the inliers is less than two pixels.\n\nWe provide visual examples of IBAL matches on a frame from both cameras in \\cref{fig:blue_match_visualize}. \nMatches labeled as inliers are shown in green, while outliers are shown in red. There is only one outlier present in the processed image from \ncamera 1 (\\cref{fig:blue_match_visualize_1}) and no outliers in the image from camera 2 (\\cref{fig:blue_match_visualize_2}).\n\nLastly, we remark on one difficulty of the New Shepard dataset.\nA mountain range is in view of camera 2, which makes landmark matching more difficult near the latter portion of the dataset as the mountain comes into the camera's \nFOV (\\cref{fig:mountain}). 
This is due to the presence of shadows on the mountain that may not be consistent with the shadows present at the \ntime of day the database imagery was collected. Additionally, the 2D-2D homography assumption that we use to warp landmark templates into the image for \ncorrelation begins to break down when 3D structures \nsuch as mountains are viewed from low altitudes. Work on database creation such as \\cite{Smith22aiaa-blenderTRN}, along with advances in IBAL \nnot mentioned in the paper, can be used to reduce these issues for low-altitude navigation over mountains. \n\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam1_blue_matches.png}\n \\caption{IBAL landmark matching results for camera 1}\n \\label{fig:blue_inliers_outliers_1}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam2_blue_matches.png}\n \\caption{IBAL landmark matching results for camera 2}\n \\label{fig:blue_inliers_outliers_2}\n \\end{subfigure}\n \\caption{IBAL number of inliers and outliers for cameras 1 and 2 on New Shepard dataset as determined by PnP and RANSAC. \n The data corresponds to an altitude range between 3.5 km and 8.5 km.}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/blue_hist_cam1.png}\n \\caption{Camera 1}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/blue_hist_cam2.png}\n \\caption{Camera 2}\n \\end{subfigure}\n \\caption{Inlier pixel error distribution for cameras 1 and 2 on New Shepard dataset. 
\n %\n }\n \\label{fig:blue_histogram}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam1_54.png}\n \\caption{IBAL inlier and outlier matches for camera 1 on New Shepard dataset at an altitude of 6.4 km}\n \\label{fig:blue_match_visualize_1}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.476\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam2_54.png}\n \\caption{IBAL inlier and outlier matches for camera 2 on New Shepard dataset at an altitude of 6.4 km}\n \\label{fig:blue_match_visualize_2}\n \\end{subfigure}\n \\caption{IBAL inlier and outlier matches for cameras 1 and 2 on New Shepard dataset. Inlier matches are shown in green and outliers are shown in red. \n Blue shows the initial estimate of the landmark location based on the initial pose estimate before utilizing cross-correlation. Lines connect the blue estimate \n to the calculated match location. Images have been rotated by \n $180^{\\circ}$ for visual appeal.}\n \\label{fig:blue_match_visualize}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{fig\/mountain.png}\n \\caption{IBAL Camera 2 viewing a mountain range on New Shepard dataset. Inlier matches are shown in green. \n Blue shows the initial estimate of the landmark location based on the initial pose estimate before utilizing cross-correlation. Lines connect the blue estimate to the calculated match location. \n Image has been rotated by \n $180^{\\circ}$ for visual appeal.}\n \\label{fig:mountain}\n\\end{figure} %\n\n\\section{Gyroscope Incorporation Ablation Study}\n\\label{sec:gyro_ablation}\n\nWe provide an ablation study of forward propagating the IBAL pose estimate with a gyroscope for the high-altitude balloon dataset \nas mentioned in \\cref{sec:method}. The benefits of incorporating the gyroscope data are twofold. 
First, since the balloon experiences \nrapid rotations, in some cases exceeding $20^\\circ$ per second, the gyro provides a more accurate initial guess of the balloon's pose for IBAL, which \nreduces the frequency at which images must be used to estimate the pose, hence reducing computation. Additionally, if landmark match quality is temporarily insufficient \n(typically on the order of 1 to 3 seconds) for PnP and RANSAC, which can be caused, for example, by significant obstruction by the cords below the balloon, the gyro allows the pose estimate to be carried over until good landmark matches can be found.\n\n\\Cref{table:gyro_ablation} shows the benefits of using the gyro with our balloon dataset. Using the downward-facing camera, we show the percentage \nof each of the seven data segments IBAL is able to successfully complete with and without incorporating the gyroscope. We also test on two different rates of \nimage processing, noting that while one could partially compensate for the lack of gyroscope measurements by increasing the rate of image processing, that strategy is only effective at high altitudes in our dataset.\n\n\n\\begin{table}[H]\n \\begin{tabular}{ | l | l | l | l | l | l | l | l |}\n \\hline\n & 33-32.5 km & 32.5-29 km & 29-23 km & 23-18 km & 18-14 km & 14-9 km & 9-4.5 km\\\\ \\hline\n 4 Hz w\/ gyro & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\\\ \\hline\n 2 Hz w\/ gyro & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\\\ \\hline\n 4 Hz w\/o gyro & 100 & 100 & 96 & 3 & 3 & 1 & 1 \\\\ \\hline\n 2 Hz w\/o gyro & 100 & 100 & 63 & 0 & 0 & 1 & 1 \\\\ \n \\hline\n \\end{tabular}\n \\caption{Ablation study showing the benefit of incorporating gyroscope measurements with IBAL on each of the seven altitude segments of the balloon dataset \n for different rates of image processing. 
\n Results show the percent of each dataset segment IBAL successfully processes using images from the downward camera.}\n \\label{table:gyro_ablation}\n\\end{table} %\n\n\\section{Conclusion}\n\nThis paper reports on the performance of a vision-based terrain relative navigation method on data ranging from 4.5 km to 33 km on a high-altitude \nballoon dataset and on data collected onboard Blue Origin's New Shepard rocket. We evaluate \nperformance \nof both a sideways-tilted and downward-facing camera for the balloon dataset and two sideways-tilted \ncameras on the New Shepard dataset. We observe less than 290 meters of \naverage position error on the balloon data over a trajectory of 150 kilometers and \nwith the presence of rapid motions and dynamic obstructions in the field of view of the camera. Additionally, we report less than 55 m of \naverage position error on the \nNew Shepard dataset while reaching an altitude of 8.5 km and a max nominal speed of 880 km\/h. As future work, we plan to fly again onboard the New Shepard \nrocket and capture camera data from ground level to an altitude of over 100 km. \n\\section*{Acknowledgments}\nWe would like to gratefully acknowledge Andrew Olguin, Carlos Cruz, Alanna Ferri, Laura Henderson, and\neveryone else at Draper who supported IBAL and data collection for the balloon flight and New Shepard flight. This\nwork was authored by employees of The Charles Stark Draper Laboratory, Inc. under Contract No. 80NSSC21K0348\nwith the National Aeronautics and Space Administration. The United States Government retains and the publisher, by\naccepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up,\nirrevocable, worldwide license to reproduce, prepare derivative works, distribute copies to the public, and perform\npublicly and display publicly, or allow others to do so, for United States Government purposes. 
All other rights are\nreserved by the copyright owner.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nStochastic or simulation models are only approximations to the reality. A conjectured model may not align with the true system because of unobserved complexity. Moreover, some highly accurate models, even if formulable, may not be implementable due to computational barriers and time constraints, in which case a simpler, lower-fidelity model is adopted. In all these cases, there is a discrepancy between the model and the reality, which we call \\emph{model discrepancy}. This article describes a data-processing framework to integrate data from both a simulated response and the real system of interest, under the presence of model discrepancy, to reliably predict stochastic outputs of interest.\n\nOur objective is motivated from everyday practice of simulation analysis. For example, this article describes a major manufacturer that is interested in assessing the impact of the staffing level of support workers on a production line via discrete-event simulation. Twelve weeks were spent carefully designing and tuning the simulation model and the final report included seventy-five realizations of the simulation model at each potential staffing level. The limited amount of realizations gives rise to a simulation error (also termed a Monte Carlo error). In addition, when data at the current staffing level was compared to the simulation model realizations, it was clear the simulation model was inaccurate. Yet, given the resources already invested, the manufacturer was interested if the simulation model could still be used to guide the staffing level decision. 
An approach that can account for both sources of error can save significant costs and improve the decisions in situations like these.\n\nDifferences between simulation and real data are traditionally addressed during the important practice of \\emph{model validation} and \\emph{calibration} in the simulation literature, which refers to the joint task of checking whether a developed stochastic model sufficiently reflects reality (validation), and if not, re-developing the model until it matches (calibration) (e.g., \\cite{Sargent2013}, \\cite{banks2000dm} Chapter 10, \\cite{kelton2000simulation} Chapter 5). Conventional validation methods compare relevant outputs from simulation models and real-world data via statistical or Turing tests (e.g. \\cite{schruben1980establishing} and \\cite{balci1982some}). In the case of a mismatch, guided expert opinions, together with possibly more data collection, are used to re-calibrate the model recursively until acceptable accuracy is reached \\citep{sargent1998verification}. While these tools are fundamentally critical to the practice of simulation, there can be two deficiencies when using calibration in an ad-hoc way:\n\\begin{enumerate}\n\\item It necessitates building increasingly sophisticated models after unsatisfactory conclusions. This process potentially places a heavy burden on a simulation modeler\/software, consumes time, and, moreover, may end up in non-convergence to an acceptable ultimate model.\n\\item The recursive refinement of the model to align it with the real data along the development process involves hidden parameter choices and simultaneous estimations. These details, which are often overlooked and unaccounted for, complicate statistically justified uncertainty quantification alongside prediction.\n\\end{enumerate}\n\n\nOur goal is thus to investigate a framework that systematically offers predictive bounds using a simulation model without the traditionally encountered recursive efforts. 
Our framework is a stochastic version of model calibration that is similar in spirit to deterministic model calibration \\citep{kennedy2001bayesian}. The basic idea is to view potential model discrepancy as an object that can be inferred statistically, or plainly put, to ``model\" this potential error. To conduct feasible inference, often the model discrepancy is assumed to have some structure decided a priori of observing data, and data are used to update the uncertainty on predictions of the true system. Since \\cite{kennedy2001bayesian}, this idea has been extended and widely applied in various scientific areas, e.g., \\cite{tuo2015efficient,higdon2004combining,plumlee2016bayesian}. In the stochastic simulation literature, similar machinery has appeared under the heading of stochastic kriging (\\cite{ankenman2010stochastic,Staum2009,chen2013enhancing,chen2012effects,chen2014stochastic,chen2016efficient}). In the stochastic kriging literature, the oracle benchmark is the simulation model and stochastic kriging is used to reduce simulation effort by borrowing information from the simulation outputs at a collection of design values. In the model discrepancy setting, the oracle benchmark is the real system's probabilistic generating mechanism and our goal is to improve the prediction accuracy and quantification of uncertainties associated with the simulation model.\n\nOne challenge in bringing the deterministic model discrepancy machinery to stochastic simulation is that in the latter case, the inference objects are themselves embedded in probability spaces. The stochastic simulation model and the real system are naturally represented as probability distributions (think of the output distributions of a queueing or a stochastic inventory model), which constitute the basis of calculation in many decision-making tasks (for example, computing the chance that the outcome is in some region that indicates poor performance). 
Consequently, the learning and the uncertainty quantification of the discrepancies need to take into account the resulting probabilistic constraints. This is beyond the scope of the established inference and computation tools in the deterministic model discrepancy literature.\n\nAs our main contribution, we develop a framework to infer stochastic model discrepancies that is statistically justified and computationally tractable under the constraints discussed above. On the statistical aspect, we build a Bayesian learning framework that operates on the space of likelihood ratios as the representation of model discrepancies between simulation and reality. We study how this representation satisfies the constraints necessarily imposed in capturing stochastic model discrepancies and leads to desirable asymptotic behavior. On the computational aspect, we propose an optimization approach to obtain prediction bounds. Though sampling techniques such as Markov chain Monte Carlo \\cite[Chapters 11 and 12]{gelman2014bayesian} are widely used in Bayesian computation, they encounter difficulties in our setting due to the constraints and high-dimensionality. Our approach, inspired from the recent literature in robust optimization (\\cite{ben2002robust,ben2009robust,bertsimas2011theory}), alleviates this issue via the imposition of suitable optimization formulations over posterior high probability regions. We study the statistical properties of these formulations and demonstrate that they are equally tight in terms of asymptotic guarantees to traditional Bayesian inference.\n\nWe close this introduction by briefly reviewing two other lines of related work. First, in stochastic simulation, the majority of work in handling model uncertainty focuses on input uncertainty; see, e.g. the surveys \\cite{barton2002panel,henderson2003input,chick2006bayesian,barton2012tutorial,song2014advanced,lam2016advancedtutorial}, \\cite{nelson2013foundations} Chapter 7. 
They quantify the impacts on simulation outputs due to the statistical uncertainty in specifying the input models (distributions, stochastic assumptions etc.), assuming input data are available. Approaches include the delta method (\\cite{cheng1997sensitivity}) and its variants such as the two-point method (\\cite{cheng1998two,cheng2004calculation}), the bootstrap (\\cite{barton1993uniform,barton2001resampling,cheng1997sensitivity}) which can be assisted with stochastic kriging-based meta-models (\\cite{barton2013quantifying,xie2014bayesian}), and Bayesian methods (\\cite{chick2001input,zouaoui2003accounting,zouaoui2004accounting,xie2014bayesian,biller2011accounting}). Added to these approaches are recent perspectives of model risks and robust optimization that do not necessarily directly utilize data (\\cite{glasserman2014robust,lam2013robust,lam2011sensitivity,ghosh2015computing}). The second line of related work is queueing inference that investigates the calibration of input processes and system performances from partially observed queueing outputs such as congestion or transaction data (e.g., the queue inference engine; \\cite{larson1990queue}). This literature utilizes specific queueing structures that can be approximated either analytically or via diffusion limits, and as such allow tractable inference. Techniques include maximum likelihood estimation (\\cite{basawa1996maximum,pickands1997estimation}), nonparametric approaches (\\cite{bingham1999non,hall2004nonparametric}) and point processes (\\cite{whitt1981approximating}).\nRecently, \\cite{goeva2014reconstructing} study calibration of input distributions under more general simulation models. 
Like the input uncertainty literature, however, these studies assume correctly specified system logics that imply perfect matches of the simulation models with real-world outputs.\n\n\n\\section{Stochastic Model Discrepancy: Setting and Notations} \\label{sec:setting}\nThis section describes our setting and notations throughout this paper. We consider a system of interest that outputs a discrete random response over the space $\\mathcal{Y}$ with cardinality $m$. For notational simplicity, we will use the space $\\mathcal Y=\\{1,\\ldots,m\\}$. This response depends on a vector of design variables, denoted $x$, which can be broadly defined to include input variables that are not necessarily controllable. We presume a finite set of design points or design values $x_j$, $j = 1,\\ldots,s$. The probability mass function $\\pi_j=\\{\\pi_j(i)\\}_{i=1,\\ldots,m}$ describes the distribution of the response of the real system on $\\mathcal Y$ under $x_j$. Examples of the response include the waiting times in call centers \\citep{brown2005statistical} and hospitals \\citep{helm2014design}. In the first example, design variables could be the number of servers, the system capacity, and the arrival rate. In the second example, the design variable could be the rate of elective admissions.\n\nThe objective is to draw conclusions about $\\pi_j$ for several $j$'s. These distributions form the basis in evaluating quantities of interest used for decision-making. When responses are independently observed from the real system (e.g., from a designed experiment \\citep{li2015value}), $n_j(i)\/n_j$ is a reasonable estimate of $\\pi_j$, where $n_j(i)$ counts the number of outcomes equal to $i$ and $n_j$ is the total number of recorded responses at $x_j$. 
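As a minimal sketch (with hypothetical counts), the empirical estimate $n_j(i)\/n_j$ at a single design point can be formed directly from the outcome counts:

```python
import numpy as np

def empirical_pmf(counts):
    """Empirical estimate n_j(i)/n_j of the response distribution pi_j at one
    design point, given the vector of outcome counts {n_j(i), i = 1,...,m}."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    if n == 0:
        raise ValueError("no responses observed at this design point")
    return counts / n

# Hypothetical counts over m = 4 response outcomes, with n_j = 100 in total.
pi_hat = empirical_pmf([30, 45, 15, 10])
```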
In the setting of simulation modeling, however, these empirical estimates are often inadequate because typical decision-making tasks, like feasibility or sensitivity tests, are applied to system configurations that are sparsely sampled or even never observed. This means that accurate empirical estimates for the $x_j$ values of interest are not available. In fact, for these $j$'s, $n_j$ can oftentimes be $0$.\n\nIn contrast, using state-of-the-art understanding of the system, possibly simplified for computational concerns, an operations researcher builds a simulation model (typically based on discrete-event simulation) to estimate $\\tilde{\\pi}_{j}=\\{\\tilde\\pi_j(i)\\}_{i=1,\\ldots,m}$, the simulated distribution of the response at the design point $x_j$. In parallel to the real responses, we denote $\\tilde{n}_j(i)$ as the count of outcome $i$ and $\\tilde n_j$ as the total number of replications in a simulation experiment at $x_j$, and $\\tilde{n}_j(i)\/\\tilde{n}_j$ is hence an estimate of $\\tilde\\pi_j(i)$. Unlike the real responses, it is often affordable to generate a larger number of replications $\\tilde n_j$ and hence a more accurate estimate of $\\tilde\\pi_j$. However, the difference between $\\tilde{n}_j(i)\/\\tilde{n}_j$ and $\\tilde\\pi_j(i)$ remains a source of uncertainty.\n\nOur premise is that the real response distribution $\\pi_j$ and the simulated distribution $\\tilde\\pi_j$ differ. Thus, in order to make conclusions about $\\pi_j$, we must conjecture about the potential gap between $\\pi_j$ and $\\tilde\\pi_j$ with the limited simulation and real-world data. 
The remainder of this section describes our framework for defining the discrepancy between $\\pi_j$ and $\\tilde\\pi_j$.\n\nFirst note that both $\\pi_j$ and $\\tilde\\pi_j$ obviously must satisfy the criteria of a probability distribution:\n\\begin{defination} \\label{def:valid_distribution}\nAny mapping $p: \\mathcal{Y} \\rightarrow \\mathbb{R}$ is a \\emph{valid distribution} if\n\\begin{enumerate}[(i)]\n\\item $p(i)\\geq0$ for all $i = 1,\\ldots,m$ and\n\\item $\\sum_{i=1}^m p(i) = 1$.\n\\end{enumerate}\n\\end{defination}\n\nWe define the discrepancy between $\\pi_j$ and $\\tilde\\pi_j$ as $\\delta_j=\\{\\delta_j(i)\\}_{i=1,\\ldots,m}$ where\n\\begin{equation}\n\\delta_j(i) = \\frac{\\pi_j(i)}{\\tilde{\\pi}_j(i)}. \\label{def discrepancy}\n\\end{equation}\nIn other words, $\\delta_j$ reflects the ratio between the probabilities of the true responses and the simulated responses. If $\\delta_j (i) = 1$ for all $i$, the simulation model is correctly specified. The definition in \\eqref{def discrepancy} is analogous to that of a likelihood ratio in the context of importance sampling (e.g., \\cite{mcbook}, Chapter 9; \\cite{asmussen2007stochastic}, Chapter IV; \\cite{glasserman2003monte}, Chapter 4). In the model risk literature, a similar object to \\eqref{def discrepancy} also appears as a decision variable in worst-case optimization problems used to bound performance measures subject to the uncertainty on the true model relative to a conjectured stochastic model (often known as the baseline model). Examples include Gaussian models with mean and covariance uncertainty represented by linear matrix inequalities (\\cite{hu2012robust}), and nonparametric uncertainty measured by Kullback-Leibler divergence (e.g., \\cite{glasserman2014robust,lam2013robust}). 
Our definition \\eqref{def discrepancy} is in a similar vein to these works, but rather than using it as a tool to speed up simulation (in importance sampling) or an optimization decision variable (in model risk), our $\\delta_j$ is an object to be \\emph{inferred} from data.\n\nNote that \\eqref{def discrepancy} is not the only way to define stochastic model discrepancy. Another natural choice, which more closely mimics the established deterministic counterpart \\citep{kennedy2001bayesian}, is via\n\\[\\pi_j(i)-\\tilde{\\pi}_j(i).\\]\nThe choice of which version of discrepancy to use relates to convenience in statistical modeling. We adopt the multiplicative version in \\eqref{def discrepancy} based on its analog with the likelihood ratio, which facilitates our inference.\n\n\n\nSince $\\pi_j$ and $\\tilde\\pi_j$ are valid distributions, the model discrepancy $\\delta_j$ defined in \\eqref{def discrepancy} must satisfy the following criteria with respect to $\\tilde\\pi_j$:\n\\begin{defination} \\label{def:valid}\nSay $p$ is a valid distribution with $p(i)> 0$ for all $i$. $d$ is a \\emph{valid discrepancy} with respect to $p$ if\n\\begin{enumerate}[(i)]\n\\item $d(i)\\geq0$ for all $i = 1,\\ldots,m$ and\n\\item $\\sum_{i=1}^m d(i) p(i) = 1$.\n\\end{enumerate}\n\\end{defination}\nClearly, if $d$ is a valid discrepancy and $p$ is a valid distribution then $\\{d(i) p(i)\\}_{i=1,\\ldots,m}$ will also be a valid distribution.\n\nDefinition \\ref{def:valid} plays a vital role in our subsequent analysis as it characterizes the properties of our inference targets. Unlike deterministic model discrepancies, these conditions come from the probabilistic structure that arises uniquely in stochastic model discrepancies. 
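The two conditions in Definition \ref{def:valid} are straightforward to check numerically. The sketch below, using a hypothetical simulated distribution and discrepancy, verifies validity and reconstructs the real distribution via $\pi_j(i) = \delta_j(i)\tilde\pi_j(i)$:

```python
import numpy as np

def is_valid_discrepancy(d, p, tol=1e-9):
    """Check the valid-discrepancy conditions: d(i) >= 0 and sum_i d(i) p(i) = 1."""
    d = np.asarray(d, dtype=float)
    p = np.asarray(p, dtype=float)
    return bool(np.all(d >= 0) and abs(np.dot(d, p) - 1.0) <= tol)

# Hypothetical simulated distribution tilde-pi_j and discrepancy delta_j.
p_sim = np.array([0.5, 0.3, 0.2])   # a valid distribution with p(i) > 0
d = np.array([0.8, 1.0, 1.5])       # 0.8*0.5 + 1.0*0.3 + 1.5*0.2 = 1
p_real = d * p_sim                  # reconstructed pi_j, itself a valid distribution
```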
Note that Definition \\ref{def:valid} coincides with that of a likelihood ratio (e.g., \\cite{asmussen2007stochastic}).\n\nLastly, in addition to model discrepancy, simulation noise and experimental noise also contribute to the uncertainty in estimating $\\pi_j$, i.e., the noise of the estimator $n_j(i)\/n_j$ for $\\pi_j(i)$ and $\\tilde n_j(i)\/\\tilde n_j$ for $\\tilde\\pi_j(i)$. Our analysis will also incorporate these sources of uncertainty.\n\\section{A Bayesian Framework} \\label{sec:learn}\nWe propose a Bayesian framework to infer the discrepancy $\\delta_j$. The framework has the capability to quantify uncertainty in limited-data environments (common in our setting, where observed responses from the real system may be sparse or absent for some design points), and to incorporate prior information that anticipates similar discrepancies for similar design points, where the similarity is measured by the distance between the design values. We will also see how the framework can account for the notion of a valid discrepancy provided in Definition \\ref{def:valid}.\n\n\nThe term \\emph{data} stands for the collection of all observed responses from the real system and the simulation model, which is sufficiently represented as\n\\[\\text{data} = \\left\\{n_j(i), i = 1,\\ldots,m, j = 1,\\ldots, s \\text{ and } \\tilde{n}_j(i), i = 1,\\ldots,m, j = 1,\\ldots, s \\right\\}.\\]\nOur main inference procedure is Bayes' rule, summarized as\n\\begin{equation}\n\\operatorname{post}\\left(d,\\tilde{p},\\text{data}\\right) \\propto \\operatorname{likelihood}(d,\\tilde{p},\\text{data} ) \\operatorname{prior}(d,\\tilde{p}), \\label{Bayesian update}\n\\end{equation}\nwhere $d$ and $\\tilde{p}$ are the locations at which the density is evaluated for $\\delta=(\\delta_j)_{j=1,\\ldots,s}$ and $\\tilde{\\pi}=(\\tilde\\pi_j)_{j=1,\\ldots,s}$. 
The notations ``$\\operatorname{post}$\", ``$\\operatorname{likelihood}$\" and ``$\\operatorname{prior}$\" stand for the posterior, likelihood and prior distribution of $(\\delta,\\tilde\\pi)$. Note that we have defined $\\tilde\\pi$ as an inference target in addition to the discrepancy $\\delta$, in order to handle the simulation noise (as we will describe momentarily). The relationship\n$$p_j(i) = d_j(i) \\tilde{p}_j(i) $$\ncan be used to define the posterior distribution of $\\pi_j(i)$ at $p_j(i) $.\n\nThe likelihood for \\eqref{Bayesian update} is straightforward to compute as\n\\begin{align}\n\\operatorname{likelihood}(d,\\tilde{p},\\text{data}) \\propto \\exp &\\left( \\sum_{j=1}^{s} \\sum_{i=1}^{m} n_{j}(i) \\log\\left( d_{j}(i) \\tilde{p}_j(i) \\right) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} \\tilde{n}_j(i) \\log \\tilde{p}_j(i)\\right). \\label{eq:likelihood}\n \\end{align}\n\n\nWe now discuss the prior for \\eqref{Bayesian update}. We restrict ourselves to independent priors for the discrepancy and the simulation model. The prior on the simulation model needs to exhibit the properties of a valid distribution. These properties can be enforced by conditioning an arbitrary prior distribution on a vector which takes real values in a space $\\mathbb{R}^{sm}$ on the constrained region associated with Definition \\ref{def:valid_distribution}. Similarly, the properties of a valid discrepancy can be enforced by conditioning an arbitrary prior distribution on the constrained region associated with Definition \\ref{def:valid}. More precisely, let the logarithm of this arbitrary prior mass function for the simulation model be denoted with $f$ and the discrepancy with $g$. 
Our construction leads to\n\\begin{equation}\n\\text{prior}(d,\\tilde{p}) \\propto \\begin{cases} \\exp\\left(f(\\tilde{p}) + g(d) \\right) &\\text{ if } \\begin{cases} \\tilde{p}_j(i) \\geq 0, & 1\\leq i \\leq m,1\\leq j \\leq s \\\\\n d_j(i) \\geq 0, & 1\\leq i \\leq m,1\\leq j \\leq s \\\\\n\\sum_{i=1}^m \\tilde{p}_j(i) =1, & 1\\leq j \\leq s\\\\\n\\sum_{i=1}^m \\tilde{p}_j(i) d_j(i) =1, & 1\\leq j \\leq s\n\\end{cases} \\\\\n0 &\\text{otherwise}. \\end{cases} \\label{eq:prior} \\end{equation}\n\nThe choices of $f(\\cdot)$ and $g(\\cdot)$ are open to the investigator. For computational reasons that will be detailed in Section \\ref{sec:optim}, we prefer $g(\\cdot)$ that is concave. One widely used option that exhibits this property will be a multivariate Gaussian with a mean $\\mu$ and correlation matrix $R$ that borrows information across design points and observation points. It is recommended that one uses a vector of $1$s as the prior mean for $\\delta$ and $(1\/m)$s for $\\tilde{\\pi}$. $R$ should be built with domain specific logic, e.g., similar design points and\/or similar responses should have similar discrepancies. For more detailed ideas toward constructing correlation structures for responses, see \\cite{ankenman2010stochastic} on the topic of stochastic kriging. 
In general, this approach leads to\n\\begin{align}\n \\exp\\left(f(\\tilde{p}) + g(d) \\right) \\propto \\exp & \\left( -\\lambda_{\\tilde{p}} (\\tilde{p} - 1\/m)^\\mathsf{T} R_{\\tilde{p}}^{-1} (\\tilde{p} - 1\/m) -\\lambda_d (d - 1)^\\mathsf{T} R_d^{-1} (d - 1) \\right), \\label{eq:Gaussian}\n\\end{align}\nwhere the $\\tilde{p}$ and the $d$ are understood to be vectorizations of the probability masses they represent, and the $\\lambda$s are positive constants that scale the correlation matrices $R$s.\n\nNote that, in settings where simulation is cheap and $\\tilde\\pi$ is estimated with negligible error, one can drop the parameter $\\tilde p$ in the likelihood \\eqref{Bayesian update} and correspondingly the second terms in the likelihood \\eqref{eq:likelihood} and the prior \\eqref{eq:prior}.\n\n\\section{Optimization-based Procedure for Bayesian Inference} \\label{sec:optim}\nThis section presents our computation procedure to make conclusions about $\\pi_j$ based on \\eqref{Bayesian update}. In particular, we propose an optimization-based approach. There are two reasons for considering this inference package in place of the more traditional Markov chain Monte Carlo. First, a typical decision-making task in simulation analysis often boils down to the estimation of expectation-type quantities of interest evaluated at $\\pi_j$. The optimization we study will provide efficiently computable bounds on these expectations. Second, because of the constrained structure of the prior distribution \\eqref{eq:prior}, standard sampling-based Bayesian computation tools are deemed to be inefficient, and optimization serves as a competitive alternative.\n\n\nTo elaborate on the second rationale, note that common solution mechanisms in Bayesian inference consist of drawing samples from the posterior of the parameters of interest. 
However, because the posterior is often not a standard distribution like the Normal (and there is an unknown proportionality constant), direct Monte Carlo sampling is not possible. Sophisticated Markov chain Monte Carlo samplers were designed explicitly for this purpose \\cite[Chapters 11 and 12]{gelman2014bayesian}. Popular samplers include the classic Metropolis-Hastings algorithm with a symmetric proposal \\cite[pp 278-280]{gelman2014bayesian}, and other useful methods such as Hamiltonian Monte Carlo \\citep{duane1987hybrid} and slice sampling \\citep{neal2003slice}. The latter two methods are specifically designed to alleviate the problems faced by classical samplers. But there are still many practical issues for these new samplers regarding their execution and choices of parameters in constrained and high-dimensional spaces, which is the setting we encounter in the posterior induced from \\eqref{eq:prior} (probabilistically constrained and with dimension $sm$). See, for example, \\cite{betancourt2017conceptual} for an intuitive history and theoretical summary of these conclusions. It should be acknowledged that theoretical results do not always reveal these practical issues; see, for example, the positive results from \\cite{dyer1991random}. However, numerical tests in \\cite{plumlee2016learning} demonstrate these issues in a closely related setting.\n\n\nIn the following subsections, we will present our optimization formulation, the statistical guarantees, and a discussion of computational tractability. 
The summaries of the sections are: 1) We use an uncertainty set in place of a typical Bayesian integration; 2) The method is guaranteed to produce tight bounds that will contain the truth with the typical desired confidence; and 3) Given we simulate enough, the optimization problem can be reformulated into a convex problem.\n\n\\subsection{Optimization Formulation}\\label{sec:formulation}\nSuppose we are interested in estimating quantities of interest in the form $E[z(Y_j)]$ where $Y_j\\sim\\pi_j$ and $z:\\mathcal Y\\to\\mathbb R$ is some function. We can write this in terms of $\\delta$ and $\\tilde\\pi$ as $$\\zeta(\\delta,\\tilde\\pi)= \\sum_{i=1}^m z(i) \\pi_j(i) =\\sum_{i=1}^mz(i)\\delta_j(i)\\tilde\\pi_j(i).$$ Our procedure consists of solving the optimization pairs\n\\begin{equation}\\begin{array}{ll}\n\\max \\text{ or } \\min_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) = \\sum_{i=1}^m z(i) d_j(i) \\tilde{p}_j(i), \\\\\n\\text{subject to}& \\operatorname{post}(d,\\tilde{p},\\text{data}) \\geq c \\end{array} \\label{obj:optim}\n\\end{equation}\nwhere $c$ is chosen such that\n\\begin{equation}\n c = \\exp \\left(-\\frac{1}{2} \\Phi^{-1}(q)^2 + \\max_{d,\\tilde{p}} \\log \\text{post}(d,\\tilde{p},\\text{data})\\right),\\label{choice}\n\\end{equation}\nand $ \\Phi^{-1}(q)$ is the standard Normal quantile at level $q$. The optimal values of these optimization problems form an approximate confidence interval for $E[z(Y_j)]$ at a confidence level in the frequentist sense, as we will describe in Section \\ref{sec:theo}.\n\nOptimization problems \\eqref{obj:optim} can be motivated from a robust optimization viewpoint. This literature uses deterministic sets, the so-called ambiguity or uncertainty sets, to represent the probabilistic uncertainty in the parameters (e.g., \\cite{ben2002robust,ben2009robust,bertsimas2011theory}). Typically, these sets are chosen as prediction sets that contains the truth with a prescribed confidence. 
The optimal values of the resulting robust optimizations then bound the true quantity of interest with at least the same confidence level. This approach has been applied in many contexts, such as approximating chance-constrained programs (e.g., \\cite{ben2002robust}, Chapter 2) and performance measures driven by complex stochastic models (e.g., \\cite{bandi2012tractable,bandi2014robust}). Here, we consider using a prediction set given by a posterior high probability region\n\\begin{equation}\n\\mathcal{U}(c) = \\left\\{d,\\tilde{p} \\left| \\operatorname{post}(d,\\tilde{p},\\text{data}) \\geq c \\right.\\right\\} \\label{uncertainty set}\n\\end{equation}\nas the set of points $(d,\\tilde p)$ with posterior probability higher than level $c$. From the view of robust optimization, if $c$ is chosen such that $\\mathcal U(c)$ contains $1-\\alpha$ posterior content of $(\\delta,\\tilde\\pi)$, the optimal values of \\eqref{obj:optim} will form an interval covering at least $1-\\alpha$ posterior content of $\\zeta(\\delta,\\tilde\\pi)$.\n\nInstead of looking for an exact $(1-\\alpha)$-content prediction set, we choose our $c$ based on asymptotic theory that guarantees an asymptotically exact coverage of the true value of $E[z(Y_j)]$, which in general can be different from the choice discussed above. Our result that justifies this approach has a similar spirit to some recent studies in calibrating uncertainty sets in distributionally robust optimization, a setting in which the uncertainty is on the underlying distribution in a stochastic problem, via asymptotic analysis based on empirical likelihood (\\cite{lam2016recovering,duchi2016statistics,blanchet2016sample,lam2017empirical}) and Bayesian methods \\citep{gupta2015near}. 
Despite these connections, to the best of our knowledge, there has been no direct attempt to use robust optimization as a principled Bayesian computation tool.\n\nOur procedure essentially recovers the quantiles of the quantity of interest directly from the posterior distribution, which is the aforementioned goal of our Bayesian analysis and is conventionally obtained from sampling (e.g., Markov chain Monte Carlo). To intuitively explain the connection, consider the case when the posterior is normalized such that\n\\[\\int_{d,\\tilde p} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p = 1.\\]\nThe described quantile is defined as\n\\begin{equation}\n\\min\\left\\{a\\in\\mathbb R: \\int_{\\zeta(d,\\tilde p)\\leq a} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p \\geq1-\\alpha\\right\\}\\label{quantile def}\n\\end{equation}\nAssume that for every $a$ in consideration, there exists $(d,\\tilde p)$ such that $\\zeta(d,\\tilde p)=a$. Then \\eqref{quantile def} is equal to\n\\begin{equation}\\begin{array}{ll}\n\\min_{a}& \\left\\{\\begin{array}{ll}\\max_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) \\\\\n\\text{subject to}&\\zeta(d,\\tilde{p}) \\leq a\\end{array}\\right\\}\\\\\n\\text{subject to}& \\int_{\\zeta(d,\\tilde p)\\leq a} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p \\geq1-\\alpha\n\\end{array} \\label{quantile reformulation}\n\\end{equation}\nDenote $\\mathcal U_a=\\left\\{d,\\tilde{p} \\left| \\zeta(d,\\tilde{p}) \\leq a \\right. \\right\\}$. 
We can further rewrite \\eqref{quantile reformulation} as an optimization over the collection of sets in the form $\\mathcal U_a$, given by\n\\begin{equation}\\begin{array}{ll}\n\\min_{\\mathcal U_a}& \\left\\{\\begin{array}{ll}\\max_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) \\\\\n\\text{subject to}&(d,\\tilde{p}) \\in\\mathcal U_a\\end{array}\\right\\}\\\\\n\\text{subject to}& \\int_{\\mathcal U_a} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p \\geq1-\\alpha\n\\end{array} \\label{MC_sampler}\n\\end{equation}\nSuppose there exists an optimal solution $\\mathcal U^*$ to the outer optimization in \\eqref{MC_sampler}. We conclude that the $(1-\\alpha)$-quantile of $\\zeta(d,\\tilde p)$ under $\\text{post}(\\cdot,\\text{data})$ is equal to $\\max_{(d,\\tilde{p})\\in \\mathcal U^*} \\zeta(d,\\tilde{p}) $. Our chosen uncertainty set $\\mathcal U(c)$ turns out to perform similarly to $\\mathcal U^*$ in bounding the quantity of interest, despite the potentially vast difference in their geometries.\n\n\nTo graphically illustrate the difference between sampling quantiles and the optimization approach, suppose we are trying to find the $97.5\\%$ confidence level upper bounds for the sum of two probabilities in our system. Figure \\ref{fig:graphical_ROvSAMPLE} illustrates this with samples imposed on top of the projection of the uncertainty set in \\eqref{uncertainty set}, and it shows the similarity of the bounds provided by the two approaches. Clearly, $\\mathcal{U}(c)$ is much smaller than $\\mathcal{U}^*$, yet the resulting bounds are quite similar. The next subsection investigates the properties of $\\mathcal U(c)$ and explains such a phenomenon.\n\\begin{figure}[htb]\n{\n\\centering\n\\includegraphics[width=5in]{graphical_ROvSAMPLE-eps-converted-to.pdf}\n\\caption{Graphical description of the differences between optimization- and sampling-based approaches where the objective is to bound the sum of the probabilities. 
The $1000$ dots are samples from the posterior. The $97.5 \\%$ quantile of the sum is indicated by the dashed line, where $\\mathcal{U}^*$ is the solution to the outer optimization in (\\ref{MC_sampler}) given that these samples constitute the entirety of the posterior distribution. The region labeled $\\mathcal{U}$ is the projection of $\\mathcal{U}(c)$ onto this two-dimensional plane, and the maximum of the optimization is determined by the solid line with $c=2$. \\label{fig:graphical_ROvSAMPLE}}\n}\n\\end{figure}\n\\subsection{Theoretical Guarantees} \\label{sec:theo}\nWe first study the asymptotic behavior of the optimal values in \\eqref{obj:optim}. We will consider a more general setting in which the objective function is $ \\sum_{j=1}^s\\sum_{i=1}^m z_j(i) p_j(i)$ for some functions $z_j:\\mathcal Y\\to\\mathbb R$, i.e., a linear combination of individual expectations at $x_j$. Evidently, taking $z_j(i)=0$ for all but one $j$ recovers the setting in \\eqref{obj:optim}. For ease of exposition, define\n\\[Z_j \\stackrel{\\text{dist}}{=}z_j(Y_j),\\]\nwhere $Y_j\\sim\\pi_j$. Let $\\mathcal{U}_n (c) $ be defined as in \\eqref{uncertainty set}, with the subscript $n=\\sum_{j=1}^sn_j$ indicating the total number of observed responses on the real system. Similarly, let $\\text{post}_n(d,\\tilde p)$ represent the posterior function when the data contains $n$ observations.\n\nWe have the following result (which is shown as Lemma \\ref{lem:op_consistency} in the appendix):\n\\begin{theorem} \\label{thm:op_consistency}\nSuppose that $\\tilde{\\pi}_j(i)>0$ and $\\pi_j(i)>0$ for all $i = 1,\\ldots,m$ and $j = 1,\\ldots,s$. 
For each observation, the design point is an independent random variable with sample space $\\{x_1,\\ldots,x_s\\}$ and respective positive probabilities $\\xi_1,\\ldots, \\xi_s$.\n\nLet $\\text{post}_n^* = \\max_{d,\\tilde{p}} \\text{post}_n (d,\\tilde{p})$ and $\\hat{\\pi}_j^n (i) = n_j(i)\/n_j .$ Then for all $\\ell >0$,\n\\begin{equation}\n\\lim_{n \\rightarrow \\infty} \\sqrt{n} \\left(\\max_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i) d_j(i)\\tilde p_j(i) - \\sum_{j=1}^s \\sum_{i=1}^m z_j(i) \\hat{\\pi}_j^n(i) \\right) = \\ell \\sqrt{\\sum_{j=1}^s \\xi_j^{-1} \\mathbb{V} Z_j} \\label{eq:optim1}\n\\end{equation}\nand\n\\begin{equation}\n\\lim_{n \\rightarrow \\infty} \\sqrt{n} \\left( \\min_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) - \\sum_{j=1}^s \\sum_{i=1}^m z_j(i) \\hat{\\pi}_j^n(i) \\right) = -\\ell \\sqrt{\\sum_{j=1}^s \\xi_j^{-1} \\mathbb{V} Z_j} \\label{eq:optim2}\n\\end{equation}\n\nalmost surely, where $\\mathbb{V} $ represents the variance.\n\\end{theorem}\n\nAn immediate observation of Theorem \\ref{thm:op_consistency} is that the simulation replication size $\\tilde n_j$ plays no role in the asymptotic behavior of the optimization output as $n$ gets large. Thus, with enough real data, the accuracy of the simulation runs is inconsequential, as the values of the real data dominate the results.\nThe same observation also holds for the prior choices made for $f(\\cdot)$ and $g(\\cdot)$. This asymptotic independence of the prior resembles the classic Bernstein-von Mises theorem. In summary, our optimization approach generates bounds in tight asymptotic agreement with those obtained from the typical data-only inference approaches.\n\nIt is known that not every posterior distribution is guaranteed to have appropriate consistency properties; see the works of \\cite{freedman1963asymptotic} and \\cite{diaconis1986consistency}. 
Bayesian credible sets resembling the form of \\eqref{uncertainty set} are not guaranteed to produce rational inference; for more information on the general properties of Bayesian credible sets, see \\cite{cox1993analysis} or \\cite{szabo2015frequentist}. In particular, two complications arise in proving Theorem \\ref{thm:op_consistency}. First, the measure associated with the likelihood function only concentrates on a lower dimensional manifold (dimension $sm-s$) of the parameter space (dimension $sm$). This issue is by-and-large a technical one and is addressed in Lemmas \\ref{lemma:str_consis} and \\ref{lemma:weak_consis} proved in the appendix. Second, the optimization problem requires a particular shape of the uncertainty set to yield the desired asymptotic properties. As a main observation, the uncertainty set $\\mathcal U_n(\\text{post}_n^*-\\ell^2\/2)$ can be shown to asymptotically become an ellipsoid, and optimization problem \\eqref{obj:optim} therefore reduces to a quadratic program with an elliptical constraint, which can be analyzed and elicits the convergence behavior in Theorem \\ref{thm:op_consistency}.\n\n\nThere are several implications of Theorem \\ref{thm:op_consistency}. 
The first is that both the upper and lower limits provided by the optimization converge to the true value almost surely as the data gets large, as described below:\n\\begin{corollary}\nUnder the same assumptions in Theorem \\ref{thm:op_consistency}, we have, for all $\\ell>0$,\n$$\\max_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i)\\to\\sum_{j=1}^s\\sum_{i=1}^m z_j(i)\\pi_j(i)$$\nand\n$$\\min_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i)\\to\\sum_{j=1}^s\\sum_{i=1}^m z_j(i)\\pi_j(i)$$\nalmost surely as $n\\to\\infty$.\\label{consistency}\n\\end{corollary}\n\nCorollary \\ref{consistency} shows that with enough data the proposed posterior estimate is a good representation of the truth. It is a basic property that is in line with Bayesian consistency results studied traditionally by statisticians \\citep{schwartz1965bayes}.\n\nFurthermore, Theorem \\ref{thm:op_consistency} also implies that, as $n$ gets large,\n\\begin{equation}\n\\max_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) \\approx\\sum_{j=1}^s \\sum_{i=1}^m z_j(i) \\hat{\\pi}_j^n(i) + \\ell \\sqrt{\\frac{\\sum_{j=1}^s \\xi_j^{-1} \\mathbb{V} Z_j}{n}} \\label{asymptotic max}\n\\end{equation}\nand\n\\begin{equation}\\min_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) \\approx\\sum_{j=1}^s \\sum_{i=1}^m z_j(i) \\hat{\\pi}_j^n(i) - \\ell \\sqrt{\\frac{\\sum_{j=1}^s \\xi_j^{-1} \\mathbb{V} Z_j}{n}} \\label{asymptotic min}\n\\end{equation}\nNote that the right-hand sides of \\eqref{asymptotic max} and \\eqref{asymptotic min} are precisely the classical confidence bounds on $\\sum_{j=1}^s\\sum_{i=1}^m z_j(i) \\pi_j(i)$ generated from the central limit theorem with $n_j \\approx \\xi_j n$. 
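To make the classical CLT bounds appearing in \eqref{asymptotic max} and \eqref{asymptotic min} concrete, here is a hypothetical numerical sketch (synthetic data, $s=2$ design points, made-up truths and $z_j$ values) that computes $\sum_j z_j\cdot\hat\pi_j^n \pm \ell\sqrt{\sum_j \xi_j^{-1}\mathbb{V}Z_j / n}$, with the variances estimated by plug-in from $\hat\pi_j^n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: s = 2 design points, m = 3 categories (made-up truths)
pi_true = np.array([[0.3, 0.5, 0.2],
                    [0.6, 0.3, 0.1]])
z = np.array([[1.0, 2.0, 3.0],     # z_1(i)
              [0.0, 1.0, 0.0]])    # z_2(i)
xi = np.array([0.5, 0.5])          # design-point sampling probabilities
n = 2000
ell = 1.96

# Sample a design point for each observation, then a categorical response
j_idx = rng.choice(2, size=n, p=xi)
pi_hat = np.zeros((2, 3))
for j in range(2):
    nj = (j_idx == j).sum()
    y = rng.choice(3, size=nj, p=pi_true[j])
    pi_hat[j] = np.bincount(y, minlength=3) / nj

point = (z * pi_hat).sum()
# Plug-in variance of Z_j = z_j(Y_j) under pi_hat_j
var_z = (pi_hat * z**2).sum(axis=1) - ((pi_hat * z).sum(axis=1))**2
half_width = ell * np.sqrt((var_z / xi).sum() / n)
lower, upper = point - half_width, point + half_width
print(lower, point, upper)
```

With $\ell = 1.96 \approx \Phi^{-1}(0.975)$, these are the familiar two-sided 95% bounds around the empirical estimate.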
This hints at a proper coverage in large samples at the level $1-\\alpha$. In fact, we have the following result:\n\\begin{corollary}\nUnder the same assumptions in Theorem \\ref{thm:op_consistency}, we have\n$$\\mathbb P\\left(\\max_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i)\\geq\\sum_{j=1}^s\\sum_{i=1}^m z_j(i)\\pi_j(i) \\right)\\to \\Phi(\\ell)$$\nand\n$$\\mathbb P\\left(\\min_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i)\\leq\\sum_{j=1}^s\\sum_{i=1}^m z_j(i)\\pi_j(i)\\right)\\to \\Phi(\\ell)$$\nas $n\\to\\infty$, where $\\mathbb P$ denotes the probability generated from a data set of size $n$.\\label{CI}\n\\end{corollary}\n\n\n\nThe above results reveal that the proposed inference differs from purely empirical estimates only when data is sparsely collected. If data from the real system is abundant, our simulation models $\\tilde{\\pi}_1(\\cdot),\\ldots,\\tilde{\\pi}_s(\\cdot)$ will have very little impact on our resulting conclusions. In a sense, the Bayesian approach automatically balances the influences from the empirical data versus the simulation model. Complementing the asymptotic results in this section, our numerical examples in Section \\ref{sec:illustration} will demonstrate that the difference in inference between our approach and one that ignores the simulation model can be sizable in sparse data environments.\n\n\nWe conclude this section by presenting a result on the consistency of a ``ranking and selection'' task:\n\\begin{corollary} \\label{corr:op_consistency}\nSuppose the conditions and definitions of Theorem \\ref{thm:op_consistency} hold. 
For all $\\ell>0$, if $j$ and $k$ are such that $\\mathbb{E} Z_j > \\mathbb{E} Z_k $, then\n\\[\\lim_{n \\rightarrow \\infty} \\mathbb P \\left( \\min_{(\\tilde{p},d) \\in \\mathcal{U}_n(\\text{post}_n^*-\\ell)} \\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) < \\max_{(\\tilde{p},d) \\in\\mathcal{U}_n(\\text{post}_n^*-\\ell)} \\sum_{i=1}^m z_k(i)d_k(i)\\tilde p_k(i) \\right) = 0 .\\]\\label{rs}\n\\end{corollary}\nCorollary \\ref{rs} implies that the intervals for the quantities of interest at different design points do not overlap as the data gets large, if their values are truly different. Thus, in practice, a user who notes that the two intervals generated from the optimization problems do not overlap can reasonably conclude there is a difference between the two values.\n\n\\subsection{Solvability of the optimization} \\label{sec:optim}\nThis subsection discusses the tractability of the optimization problems posed in Section \\ref{sec:formulation}. We focus on the convexity of the problems which, in contrast to the previous section, will depend on the replication size from the simulation model.\n\n\n To begin, we write optimization (\\ref{obj:optim}) in full (focusing only on the minimization problem) as\n\\begin{equation}\\begin{array}{ll}\n\\min_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) = \\sum_{i=1}^m z_j(i) d_j(i) \\tilde{p}_j(i), \\\\\n\\text{subject to}& f(\\tilde{p}) + g(d) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} n_{j} (i) \\log( d_j(i) \\tilde{p}_j(i)) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} \\tilde{n}_{j}(i) \\log \\tilde{p}_j(i) \\geq \\log(c)\\\\\n&\\tilde{p}_j(i) \\geq 0 , d_j(i) \\geq 0, \\text{ for all } i,j \\\\\n&\\sum_{i=1}^m \\tilde{p}_j(i) =1, \\sum_{i=1}^m \\tilde{p}_j(i) d_j(i) =1, \\text{ for all } j\n\\end{array} \\label{obj:optim1}\n\\end{equation}\nThis formulation is generally non-convex because of the non-convex objective function and the non-convex constraint $\\sum_{i=1}^m \\tilde{p}_j(i) d_j(i) =1$, regardless of the sample sizes and the priors 
$f(\\cdot)$ and $g(\\cdot)$. However, noting that the program is individually convex in $d$ and $\\tilde p$, one approach is to use alternating minimization, by sequentially optimizing $d$ fixing $\\tilde p$ and $\\tilde p$ fixing $d$ until no improvement is detected. Though it does not guarantee a global solution, this approach has been shown to be effective for certain chance-constrained programs (see, e.g., \\cite{chen2010cvar,zymler2013distributionally,jiang2016data}).\n\nOn the other hand, supposing that there is no simulation error in estimating $\\tilde\\pi$, then the prior on $\\tilde p$ and the associated calculations can be removed, resulting in\n\\begin{equation}\\begin{array}{ll}\n\\max _{d} & \\sum_{i=1}^m z(i)\\tilde{\\pi}_j(i)d_j(i) , \\\\\n\\text{subject to}& g(d) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} n_{j}(i) \\log d_{j}(i) \\geq \\log(c) \\\\\n&d_j(i) \\geq 0, \\text{ for all } i,j \\\\\n &\\sum_{i=1}^m \\tilde{\\pi}_j(i) d_j(i) =1, \\text{ for all } j\n\\end{array} \\label{optimization no MC}\n\\end{equation}\nIf in addition the function $g(\\cdot)$ is a concave function, then \\eqref{optimization no MC} is a convex optimization problem. We summarize this as:\n\\begin{proposition}\nProblem (\\ref{optimization no MC}) is a convex program if $g(\\cdot)$ is a concave function on $\\mathbb{R}^{sm}$.\n\\end{proposition}\n\nRecalling our discussion in Section \\ref{sec:learn}, one example of a concave $g(\\cdot)$ corresponds to the multivariate Gaussian prior of (\\ref{eq:Gaussian}).\n\nFormulation \\eqref{optimization no MC} can be reasonably used in situations where simulation replications are abundant, so that the simulation outputs are very close to $\\tilde\\pi_j$. Our next result shows that, in the case that $\\tilde n_j$ is sufficiently large and $g(\\cdot)$ satisfies a slightly stronger condition, using \\eqref{obj:optim1} also leads to a convex problem. 
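Returning to the alternating minimization heuristic discussed above, here is a hypothetical sketch on a single-design-point instance ($s=1$, $m=3$); the counts, the Gaussian-style $g$, and the slack defining $\log c$ are all made up, and a generic SLSQP solver stands in for the convex subproblem solvers:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: s = 1, m = 3 (all numbers made up)
n_real = np.array([12.0, 25.0, 8.0])    # real-system counts n_j(i)
n_sim = np.array([80.0, 90.0, 80.0])    # simulation counts  tilde n_j(i)
z = np.array([1.0, 2.0, 3.0])
lam = 2.0

def g(d):                                # concave Gaussian-style log-prior on d
    return -0.5 * lam * np.sum((d - 1.0) ** 2)

def log_post(d, p):                      # log post(d, p, data), up to a constant
    return g(d) + n_real @ np.log(d * p) + n_sim @ np.log(p)

# A feasible starting point and the threshold log c
p0 = (n_sim + n_real) / (n_sim + n_real).sum()
d0 = np.ones(3)                          # sum(p0 * d0) = 1 since sum(p0) = 1
log_c = log_post(d0, p0) - 2.0           # made-up slack below the starting value

def solve_d(p, d_init):                  # minimize z.(d*p) over d with p fixed
    cons = [{"type": "eq", "fun": lambda d: p @ d - 1.0},
            {"type": "ineq", "fun": lambda d: log_post(d, p) - log_c}]
    res = minimize(lambda d: z @ (d * p), d_init, method="SLSQP",
                   bounds=[(1e-9, None)] * 3, constraints=cons)
    return res.x

def solve_p(d, p_init):                  # minimize z.(d*p) over p with d fixed
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq", "fun": lambda p: p @ d - 1.0},
            {"type": "ineq", "fun": lambda p: log_post(d, p) - log_c}]
    res = minimize(lambda p: z @ (d * p), p_init, method="SLSQP",
                   bounds=[(1e-9, 1.0)] * 3, constraints=cons)
    return res.x

d, p = d0, p0
start_obj = float(z @ (d * p))
for _ in range(20):                      # alternate until iterations run out
    d = solve_d(p, d)
    p = solve_p(d, p)
obj = float(z @ (d * p))
print(start_obj, obj)
```

Each subproblem is convex (a linear objective over a convex set, since `log_post` is concave in each block separately), and the objective is non-increasing across the alternation because the current iterate is feasible for the next subproblem.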
To prepare for this result, we rewrite the decision variables in \\eqref{obj:optim1} to get\n\\begin{equation}\\begin{array}{ll}\n\\max _{p,\\tilde{p}} & \\sum_{i=1}^m z(i) p_j(i), \\\\\n\\text{subject to}& f(\\tilde{p}) + h(p,\\tilde{p}) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} n_{j} (i) \\log p_j(i) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} \\tilde{n}_{j}(i) \\log \\tilde{p}_j(i) \\geq \\log(c)\\\\\n&\\tilde{p}_j(i) \\geq 0 , p_j(i) \\geq 0, \\text{ for all } i,j \\\\\n&\\sum_{i=1}^m \\tilde{p}_j(i) =1, \\sum_{i=1}^m p_j(i) =1, \\text{ for all } j\n\\end{array} \\label{optimization reformulation}\n\\end{equation}\nwhere $h(p,\\tilde{p}) = g(p\/\\tilde{p})$ with the division $p\/\\tilde p$ defined component-wise. We recall the definition that a function $r(\\cdot)$ is strongly concave if for all $a$ and $b$ and $0 \\leq \\lambda \\leq 1$,\n\\[r(\\lambda a+ (1-\\lambda) b ) \\geq \\lambda r(a)+ (1-\\lambda) r(b) + \\beta \\lambda (1-\\lambda) \\|a-b\\|^2, \\]\nwhere $\\|a-b\\|$ is the Euclidean norm and $\\beta$ is some positive constant \\cite[p.~60]{nesterov2003introductory}. Our result is:\n\n\\begin{theorem} \\label{thm:convex}\nAssume that $g(\\cdot)$ is strongly concave and differentiable on $\\mathbb R_+^{sm}$ and the derivative is bounded on all compact sets in $\\mathbb R_+^{sm}$, $f(\\cdot)$ is bounded from above, and $\\tilde\\pi_j(i)>0$ for all $i,j$.\n\nLet $\\mathcal{U}_{\\tilde{n}} (c_{\\tilde{n}})$ be the set of feasible solutions for (\\ref{optimization reformulation}) where\n\\[\\log c_{\\tilde{n}} = -\\frac{\\ell^2}{2} + \\max_{p,\\tilde{p}}\\left\\{ f(\\tilde{p}) + h(p,\\tilde{p}) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} n_j(i) \\log p_j(i) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} \\tilde{n}_{j}(i) \\log \\tilde{p}_j(i)\\right\\},\\]\nfor some constant $\\ell > 0$. Then as $\\tilde{n} \\rightarrow \\infty$, $\\mathbb{P} \\left( \\mathcal{U}_{\\tilde{n}} (c_{\\tilde{n}}) \\text{ is convex}\\right) \\rightarrow 1. 
$\n\\end{theorem}\n\nThus, given access to sufficient computing resources and properly choosing $g(\\cdot)$, one can use a convex optimization solver to carry out our proposed approach, no matter how little or how much data was collected from the real system. Note that this observation holds even when $f(\\cdot)$ is not concave. Theorem \\ref{thm:convex} hinges on a joint convexity argument with respect to $(d,\\tilde{p})$ in the asymptotic regime as $\\tilde n$ grows but $n$ is fixed.\n\n\\section{Numerical Illustrations} \\label{sec:illustration}\nWe demonstrate our approach with two real-data examples. The first is a proof-of-concept investigation of a call center model. The second supports a staffing decision for the manufacturing production line discussed in the introduction of this article.\n\n\\subsection*{Call center example}\\label{sec:call center}\nConsider the call center data originally analyzed in \\cite{brown2005statistical}. This dataset is associated with a call center where a customer calls in and is placed in a queue until one of $x$ servers is available. From these data, the sample mean of the waiting time (from entry to service for a customer) from 9:00 to 10:00 am is calculated. In this narrow time period, the arrival rate and service rate, which are time-inhomogeneous according to \\cite{brown2005statistical}, should be approximately homogeneous. Here, we also account for the number of servers operating in the system at any given time, which appears to differ between days (see the appendix for details). To our reading, this subset of the dataset was by-and-large ignored in \\cite{brown2005statistical}'s original analysis.\n\nOur model for this call center will be an $x$-server first-come-first-served queue. Following common practice, both the interarrival and service times are modeled as exponentially distributed. After a warm-up period, the sample average of the waiting time is measured over the course of a one-hour window. 
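A minimal sketch of such a simulation, assuming an $x$-server FCFS queue with exponential interarrival and service times; the rates, warm-up, and window lengths below are placeholders rather than the calibrated values from the study:

```python
import heapq
import random

def mean_wait(servers, lam, mu, warmup=60.0, window=60.0, seed=1):
    """Average waiting time (minutes) of customers arriving in the
    one-hour measurement window of an M/M/c FCFS queue."""
    rng = random.Random(seed)
    free_at = [0.0] * servers          # times at which each server frees up
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while t < warmup + window:
        t += rng.expovariate(lam)      # next arrival
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)       # FCFS: wait for the earliest free server
        heapq.heappush(free_at, start + rng.expovariate(mu))
        if t >= warmup:
            waits.append(start - t)
    return sum(waits) / len(waits)

# Placeholder rates: ~1.8 arrivals/min, 3-minute mean service time
w5 = mean_wait(5, lam=1.8, mu=1.0 / 3.0)
w9 = mean_wait(9, lam=1.8, mu=1.0 / 3.0)
print(w5, w9)
```

Because the same seed produces the same arrival and service sequences for both runs, staffing more servers cannot lengthen any customer's wait, so `w9` is no larger than `w5`.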
This, in principle, agrees with \\cite{brown2005statistical}. In the spirit of ad-hoc calibration, two additional features were added: (i) the arrival rate is randomly generated each day from a log-normal distribution with associated mean $1.8$ and variance $0.4$, and (ii) a customer will abandon the queue if the waiting time is longer than an exponential random variable with mean $5$. Adding both of these features resulted in a simulation model that was closer to the observed data.\n\nThe response is discretized into the four categories $<1$, $1-2$, $2-3$, and $> 3$ minutes ($m=4$) and we study $5-9$ servers ($s = 5$). No data from the real system is observed at either $5$ or $9$ servers. The simulation model was evaluated 250 times at each design point.\n\\begin{figure}[htb]\n{\n\\centering\n\\includegraphics[width=\\textwidth]{orig_data_calibration-eps-converted-to.pdf}\n\\caption{The predictive intervals for $\\pi$ used for the call center study described in Section \\ref{sec:illustration}. The subplots from left to right show the results with 5 to 9 servers, respectively. The solid line is the observed frequency from the data when it is available at $6-8$ server levels. The dashed line is the observed frequency from the simulation model. The rectangles represent interval predictions from either traditional sampling (left rectangles) or the optimization approach (right rectangles). } \\label{fig:orig_data_calibration}\n}\n\\end{figure}\n\nFigure \\ref{fig:orig_data_calibration} shows the intervals implied by the proposed posterior distribution using a sampling-based approach and our optimization approach using \\eqref{obj:optim} and \\eqref{choice}. The functions $f$ and $g$ were of Gaussian form (\\ref{eq:Gaussian}), with correlation $0.75^{|x_i-x_j| } \\cdot 0.75^{|k-l| }$ between the $i$th and $j$th staffing levels and the $k$th and $l$th outputs with $\\lambda_d = 1\/4$ and $\\lambda_p = 1\/100$. 
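The separable correlation structure just described (correlation $0.75^{|x_i-x_j|}\cdot0.75^{|k-l|}$ across staffing levels and output categories) can be assembled as a Kronecker product; a small sketch, with the grids hardcoded to this example's $s=5$ staffing levels and $m=4$ categories:

```python
import numpy as np

levels = np.arange(5, 10)        # staffing levels x_1..x_5 (s = 5)
outputs = np.arange(4)           # discretized output categories (m = 4)

# Separable correlations across inputs and outputs
R_x = 0.75 ** np.abs(levels[:, None] - levels[None, :])
R_k = 0.75 ** np.abs(outputs[:, None] - outputs[None, :])
R = np.kron(R_x, R_k)            # (s*m) x (s*m) correlation matrix

print(R.shape)                   # -> (20, 20)
```

Both factors are AR(1)-type correlation matrices, so the Kronecker product is a valid (positive definite) correlation matrix with unit diagonal.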
The key is the ability to answer questions such as: how likely is it that the average waiting time when there are $9$ servers is between $2$ and $3$ minutes? There is no data, but the simulation model combined with the observed responses and our prior information gives us an estimate of somewhere less than $15 \\%$. This accounts for the discrepancy that we observed based on the recorded responses at $6-8$ servers as well as the potential Monte Carlo error from running a finite number of simulations. Overall, the ranges at other staffing levels appear to agree with both the data and the simulation outputs. We are not confident, for example, that staffing $5$ servers will produce the same results as the simulation model, which has average waiting times over $3$ minutes about $15 \\%$ of the time. Based on the recorded responses, this could be $30 \\%$, but it could also be as low as about $1\\%$.\n\nThe above discussion offers a preliminary validity check on the practical implementation. Next we illustrate the theoretical discussions in Section \\ref{sec:theo}. For this purpose, consider an example where the true model is specified by us, some data is generated from this model, and an inexact simulation model is specified.\n\nWe use the same simulation model. In the dataset we found that waiting times are underestimated by the simulation model when many servers are present. To replicate this, we add onto our ``true'' model an event (according to a Poisson process, average 5 min between events) in which if there are $5$ idle servers, all idle servers will take a break (average 30 minutes, exponentially distributed) and if there are more than $7$ idle servers, these additional servers will stop servicing for the remainder of the hour. This will naturally inflate the waiting times. While not exactly mimicking the real system, this reflects the general phenomenon that all operators may not be working at all times in a call center. 
Thus even though $8$ servers may be ``working'', because of miscellaneous personnel reasons, the queue behaves differently than the simulation model. All other features of the true model are exactly the same as the simulation model, including the arrival rates and departure rates.\n\nIn this numerical experiment there are either $5$, $10$, $20$, $200$ or $2000$ total observations. Two observation schemes are examined: in the first, each observation comes from one of the staffing levels 6, 7 and 8 with equal probabilities of $1\/3$; in the second, each observation comes from one of the staffing levels 5, 6, 7, 8 and 9 with equal probabilities of $1\/5$.\n\n Figure \\ref{fig:ROvSAMPLE_p} shows the prediction of the probability that the average waiting time will be less than $1$ minute. We compare to a data-only approach which consists of bounds based on the classic confidence interval with binomial responses (either less than one minute or not). All approaches behave similarly when the amount of data is large, agreeing with Theorem \\ref{thm:op_consistency}. But there are differences when data is scarce. Consider the first observation scheme, where no data is collected at $5$ and $9$ servers. The proposed approach correctly predicts the chance of a short average waiting time with $9$ servers to be large, while the data-only approach does not have access to the simulation model and thus predicts the chance of a short average waiting time with $9$ servers to be possibly small (the prediction covers all possibilities). Moreover, the data-only approach can be quite poor when only a few data points exist. The conclusions reached from using the posterior with either traditional sampling or our optimization approach are comparable in the large data cases, but do differ in the small data cases. 
The computation time of the optimization approach was orders of magnitude smaller for this example compared to sampling.\n\\begin{figure}[htb]\n{\n\\centering\n\\includegraphics[width=\\textwidth]{ROvSAMPLE_p-eps-converted-to.pdf}\n\\caption{The bounds produced in the call center example in Section \\ref{sec:illustration}. The rectangles in the six panels are decided by the data-only approach (left), the sampling-based calibration approach with $2.5\\%$ and $97.5 \\%$ quantiles (middle), and the proposed optimization-based calibration approach (right). The set of $5$ rectangles for each number of servers represents $5$, $10$, $20$, $200$ and $2000$ observations, respectively. The long horizontal line represents the true value. The top set of panels refers to the case where data is observed only at 6, 7, and 8 and the bottom set of panels refers to the case where data is observed at 5, 6, 7, 8 and 9. \\label{fig:ROvSAMPLE_p}}\n}\n\\end{figure}\n\n\n\\subsection*{Manufacturing line example}\nThis subsection uses our calibration framework to assist a decision process for staffing a real production line. A major manufacturer of automobiles has two parallel production lines, labeled box and closure, that suffer from frequent failures. These failures are predominantly handled by a group of workers trained to quickly identify and resolve small issues. Due to the time needed to traverse the line combined with the relative frequency of failures, four workers are currently staffed in this support position. The manufacturer is interested in the impact of this staffing level on the throughput of the line, measured in units per hour. The lines' behavior is classified into thirteen categories from 46 to 74 in increments of $2$ units per hour.\n\nThe two lines have different criteria for poor performance. The box line will starve the next line if the throughput drops below 60. The closure line will starve the next line if the throughput drops below 56. 
The goal is thus to ensure that the chance of starving the next line remains near the current level when there are $4$ workers. Since experiments on the real system would be extremely costly and potentially dangerous, an outside company was hired to design a discrete-event simulation model to investigate potential staffing reconfigurations for this group of workers. Additionally, a two-person internal team was tasked with refining and adjusting the simulation model via ad-hoc calibration, including detailed input analysis that broke down failure rates by stations along the line. Despite these extensive and costly efforts, the simulation model did not perfectly agree with the data collected in the current four-worker configuration (see Figure \\ref{fig:illustration}) due to several assumptions made during the model development process. These included typical input assumptions like independent and exponentially distributed inter-failure times as well as more complicated structural assumptions such as workers returning to their station in between maintenance calls. Roughly $75$ realizations from the simulation were completed at each design point, a number judged sufficient toward the end of the project.\n\n\\begin{figure}[t]\n{\n\\centering\n\\includegraphics[width=6.5in]{manufacturing_example-eps-converted-to.pdf}\n\\caption{Illustration for the manufacturing case study presented in Section \\ref{sec:illustration}. The far-left and middle-right subplots compare the model histogram with the frequency histogram from the data for the box and closure lines, respectively. The middle-left and far-right subplots show the predictive intervals from the case study for the box and closure lines, respectively. \\label{fig:illustration}}\n}\n\\end{figure}\n\nKnowing this simulation model is not perfect, what would be a reasonable estimate for the mean throughput of the line at each staffing level from $1$ worker to $6$ workers? 
If we can define the priors $f$ and $g$, then this becomes an answerable question using the method described in this article. As in the previous example, the functions $f$ and $g$ were of Gaussian form (\\ref{eq:Gaussian}), with correlation $0.75^{|x_i-x_j| } \\cdot 0.9^{|k-l| }$ between the $i$th and $j$th staffing levels and the $k$th and $l$th possible throughputs with $\\lambda_d = 1\/4$ and $\\lambda_p = 1\/100$. This agreed, as closely as possible, with the expectations of the builders of the simulation model, who believe that there is a large correlation across outputs (i.e. a similar likelihood ratio at places close in the sample space) and smaller amounts of correlation across the inputs (i.e. the builders are unsure of the behavior of the likelihood ratio across the input variables, but generally anticipate it is close for similar staffing levels).\n\nFigure \\ref{fig:illustration} displays the lower and upper bounds on the probabilities of low production for each line constructed from our method. As we move away from our observations at a staffing level of $4$, the predictive bounds on the mean throughput get larger and become closer to the simulation model. This expansion of predictive intervals and the regression to the simulation model mimic what is seen in calibration of deterministic models \\citep{kennedy2001bayesian} and stochastic kriging \\citep{ankenman2010stochastic}. Around $4$ workers, the assumption is that there is some correlation between staffing levels that decays as we move away from a staffing level of $4$, thus expanding our predictive intervals.\n\n In terms of comparison to a data-only alternative, there is clearly no ability to distinguish between different staffing levels using data alone. In terms of an answer to the fundamental question posed by the manufacturer, a few things can be gleaned from these bounds. 
For example, it becomes clear from this analysis that staffing a single worker would very likely starve the next lines, which is the core problem the manufacturer would like to avoid. The ultimate decision from the manufacturer was to do a field study of the three-worker staffing level. This was based on both the feasibility assurance provided by the simulation model and the potential benefit of redeploying a worker into a different position.\n\n\n\\section*{Acknowledgements}\nWe thank Ilan Guedj for the data organization and Avi Mandelbaum for continuing to place the data on the website \\href{http:\/\/ie.technion.ac.il\/serveng\/}{http:\/\/ie.technion.ac.il\/serveng\/}. Additional thanks are due to the\nTauber Institute for Global Operations at the University of Michigan, Anthony Sciuto, Anusuya Ramdass and Brian Talbot. We also gratefully acknowledge support from the National Science Foundation under grants CMMI-1542020, CMMI-1523453 and CAREER CMMI-1653339.\n\n\n\n\\bibliographystyle{informs2014}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Submission of conference papers to ICLR 2021}\n\nICLR requires electronic submissions, processed by\n\\url{https:\/\/openreview.net\/}. See ICLR's website for more instructions.\n\nIf your paper is ultimately accepted, the statement {\\textbackslash}iclrfinalcopy should be inserted to adjust the\nformat to the camera ready requirements.\n\nThe format for the submissions is a variant of the NeurIPS format.\nPlease read carefully the instructions below, and follow them\nfaithfully.\n\n\\subsection{Style}\n\nPapers to be submitted to ICLR 2021 must be prepared according to the\ninstructions presented here.\n\n\nAuthors are required to use the ICLR \\LaTeX{} style files obtainable at the\nICLR website. Please make sure you use the current files and\nnot previous versions. 
Tweaking the style files may be grounds for rejection.\n\n\\subsection{Retrieval of style files}\n\nThe style files for ICLR and other conference information are available online at:\n\\begin{center}\n \\url{http:\/\/www.iclr.cc\/}\n\\end{center}\nThe file \\verb+iclr2021_conference.pdf+ contains these\ninstructions and illustrates the\nvarious formatting requirements your ICLR paper must satisfy.\nSubmissions must be made using \\LaTeX{} and the style files\n\\verb+iclr2021_conference.sty+ and \\verb+iclr2021_conference.bst+ (to be used with \\LaTeX{}2e). The file\n\\verb+iclr2021_conference.tex+ may be used as a ``shell'' for writing your paper. All you\nhave to do is replace the author, title, abstract, and text of the paper with\nyour own.\n\nThe formatting instructions contained in these style files are summarized in\nsections \\ref{gen_inst}, \\ref{headings}, and \\ref{others} below.\n\n\\section{General formatting instructions}\n\\label{gen_inst}\n\nThe text must be confined within a rectangle 5.5~inches (33~picas) wide and\n9~inches (54~picas) long. The left margin is 1.5~inch (9~picas).\nUse 10~point type with a vertical spacing of 11~points. Times New Roman is the\npreferred typeface throughout. Paragraphs are separated by 1\/2~line space,\nwith no indentation.\n\nPaper title is 17~point, in small caps and left-aligned.\nAll pages should start at 1~inch (6~picas) from the top of the page.\n\nAuthors' names are\nset in boldface, and each name is placed above its corresponding\naddress. The lead author's name is to be listed first, and\nthe co-authors' names are set to follow. Authors sharing the\nsame address can be on the same line.\n\nPlease pay special attention to the instructions in section \\ref{others}\nregarding figures, tables, acknowledgments, and references.\n\n\nThere will be a strict upper limit of 8 pages for the main text of the initial submission, with unlimited additional pages for citations. 
Note that the upper page limit differs from last year! Authors may use as many pages of appendices (after the bibliography) as they wish, but reviewers are not required to read these. During the rebuttal phase and for the camera ready version, authors are allowed one additional page for the main text, for a strict upper limit of 9 pages.\n\n\\section{Headings: first level}\n\\label{headings}\n\nFirst level headings are in small caps,\nflush left and in point size 12. One line space before the first level\nheading and 1\/2~line space after the first level heading.\n\n\\subsection{Headings: second level}\n\nSecond level headings are in small caps,\nflush left and in point size 10. One line space before the second level\nheading and 1\/2~line space after the second level heading.\n\n\\subsubsection{Headings: third level}\n\nThird level headings are in small caps,\nflush left and in point size 10. One line space before the third level\nheading and 1\/2~line space after the third level heading.\n\n\\section{Citations, figures, tables, references}\n\\label{others}\n\nThese instructions apply to everyone, regardless of the formatter being used.\n\n\\subsection{Citations within the text}\n\nCitations within the text should be based on the \\texttt{natbib} package\nand include the authors' last names and year (with the ``et~al.'' construct\nfor more than two authors). When the authors or the publication are\nincluded in the sentence, the citation should not be in parenthesis using \\verb|\\citet{}| (as\nin ``See \\citet{Hinton06} for more information.''). Otherwise, the citation\nshould be in parenthesis using \\verb|\\citep{}| (as in ``Deep learning shows promise to make progress\ntowards AI~\\citep{Bengio+chapter2007}.'').\n\nThe corresponding references are to be listed in alphabetical order of\nauthors, in the \\textsc{References} section. 
As to the format of the\nreferences themselves, any style is acceptable as long as it is used\nconsistently.\n\n\\subsection{Footnotes}\n\nIndicate footnotes with a number\\footnote{Sample of the first footnote} in the\ntext. Place the footnotes at the bottom of the page on which they appear.\nPrecede the footnote with a horizontal rule of 2~inches\n(12~picas).\\footnote{Sample of the second footnote}\n\n\\subsection{Figures}\n\nAll artwork must be neat, clean, and legible. Lines should be dark\nenough for purposes of reproduction; art work should not be\nhand-drawn. The figure number and caption always appear after the\nfigure. Place one line space before the figure caption, and one line\nspace after the figure. The figure caption is lower case (except for\nfirst word and proper nouns); figures are numbered consecutively.\n\nMake sure the figure caption does not get separated from the figure.\nLeave sufficient space to avoid splitting the figure and figure caption.\n\nYou may use color figures.\nHowever, it is best for the\nfigure captions and the paper body to make sense if the paper is printed\neither in black\/white or in color.\n\\begin{figure}[h]\n\\begin{center}\n\\fbox{\\rule[-.5cm]{0cm}{4cm} \\rule[-.5cm]{4cm}{0cm}}\n\\end{center}\n\\caption{Sample figure caption.}\n\\end{figure}\n\n\\subsection{Tables}\n\nAll tables must be centered, neat, clean and legible. Do not use hand-drawn\ntables. The table number and title always appear before the table. See\nTable~\\ref{sample-table}.\n\nPlace one line space before the table title, one line space after the table\ntitle, and one line space after the table. 
The table title must be lower case\n(except for first word and proper nouns); tables are numbered consecutively.\n\n\\begin{table}[t]\n\\caption{Sample table title}\n\\label{sample-table}\n\\begin{center}\n\\begin{tabular}{ll}\n\\multicolumn{1}{c}{\\bf PART} &\\multicolumn{1}{c}{\\bf DESCRIPTION}\n\\\\ \\hline \\\\\nDendrite &Input terminal \\\\\nAxon &Output terminal \\\\\nSoma &Cell body (contains cell nucleus) \\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Default Notation}\n\nIn an attempt to encourage standardized notation, we have included the\nnotation file from the textbook, \\textit{Deep Learning}\n\\cite{goodfellow2016deep} available at\n\\url{https:\/\/github.com\/goodfeli\/dlbook_notation\/}. Use of this style\nis not required and can be disabled by commenting out\n\\texttt{math\\_commands.tex}.\n\n\n\\centerline{\\bf Numbers and Arrays}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1in}p{3.25in}}\n$\\displaystyle a$ & A scalar (integer or real)\\\\\n$\\displaystyle {\\bm{a}}$ & A vector\\\\\n$\\displaystyle {\\bm{A}}$ & A matrix\\\\\n$\\displaystyle {\\tens{A}}$ & A tensor\\\\\n$\\displaystyle {\\bm{I}}_n$ & Identity matrix with $n$ rows and $n$ columns\\\\\n$\\displaystyle {\\bm{I}}$ & Identity matrix with dimensionality implied by context\\\\\n$\\displaystyle {\\bm{e}}^{(i)}$ & Standard basis vector $[0,\\dots,0,1,0,\\dots,0]$ with a 1 at position $i$\\\\\n$\\displaystyle \\text{diag}({\\bm{a}})$ & A square, diagonal matrix with diagonal entries given by ${\\bm{a}}$\\\\\n$\\displaystyle {\\textnormal{a}}$ & A scalar random variable\\\\\n$\\displaystyle {\\mathbf{a}}$ & A vector-valued random variable\\\\\n$\\displaystyle {\\mathbf{A}}$ & A matrix-valued random variable\\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\\centerline{\\bf Sets and Graphs}\n\\bgroup\n\\def1.5{1.5}\n\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle {\\mathbb{A}}$ & A set\\\\\n$\\displaystyle \\mathbb{R}$ & The set of real numbers 
\\\\\n$\\displaystyle \\{0, 1\\}$ & The set containing 0 and 1 \\\\\n$\\displaystyle \\{0, 1, \\dots, n \\}$ & The set of all integers between $0$ and $n$\\\\\n$\\displaystyle [a, b]$ & The real interval including $a$ and $b$\\\\\n$\\displaystyle (a, b]$ & The real interval excluding $a$ but including $b$\\\\\n$\\displaystyle {\\mathbb{A}} \\backslash {\\mathbb{B}}$ & Set subtraction, i.e., the set containing the elements of ${\\mathbb{A}}$ that are not in ${\\mathbb{B}}$\\\\\n$\\displaystyle {\\mathcal{G}}$ & A graph\\\\\n$\\displaystyle \\parents_{\\mathcal{G}}({\\textnormal{x}}_i)$ & The parents of ${\\textnormal{x}}_i$ in ${\\mathcal{G}}$\n\\end{tabular}\n\\vspace{0.25cm}\n\n\n\\centerline{\\bf Indexing}\n\\bgroup\n\\def1.5{1.5}\n\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle {a}_i$ & Element $i$ of vector ${\\bm{a}}$, with indexing starting at 1 \\\\\n$\\displaystyle {a}_{-i}$ & All elements of vector ${\\bm{a}}$ except for element $i$ \\\\\n$\\displaystyle {A}_{i,j}$ & Element $i, j$ of matrix ${\\bm{A}}$ \\\\\n$\\displaystyle {\\bm{A}}_{i, :}$ & Row $i$ of matrix ${\\bm{A}}$ \\\\\n$\\displaystyle {\\bm{A}}_{:, i}$ & Column $i$ of matrix ${\\bm{A}}$ \\\\\n$\\displaystyle {\\etens{A}}_{i, j, k}$ & Element $(i, j, k)$ of a 3-D tensor ${\\tens{A}}$\\\\\n$\\displaystyle {\\tens{A}}_{:, :, i}$ & 2-D slice of a 3-D tensor\\\\\n$\\displaystyle {\\textnormal{a}}_i$ & Element $i$ of the random vector ${\\mathbf{a}}$ \\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\n\\centerline{\\bf Calculus}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle\\frac{d y} {d x}$ & Derivative of $y$ with respect to $x$\\\\ [2ex]\n$\\displaystyle \\frac{\\partial y} {\\partial x} $ & Partial derivative of $y$ with respect to $x$ \\\\\n$\\displaystyle \\nabla_{\\bm{x}} y $ & Gradient of $y$ with respect to ${\\bm{x}}$ \\\\\n$\\displaystyle \\nabla_{\\bm{X}} y $ & Matrix derivatives of $y$ with respect to ${\\bm{X}}$ \\\\\n$\\displaystyle 
\\nabla_{\\tens{X}} y $ & Tensor containing derivatives of $y$ with respect to ${\\tens{X}}$ \\\\\n$\\displaystyle \\frac{\\partial f}{\\partial {\\bm{x}}} $ & Jacobian matrix ${\\bm{J}} \\in \\mathbb{R}^{m\\times n}$ of $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$\\\\\n$\\displaystyle \\nabla_{\\bm{x}}^2 f({\\bm{x}})\\text{ or }{\\bm{H}}( f)({\\bm{x}})$ & The Hessian matrix of $f$ at input point ${\\bm{x}}$\\\\\n$\\displaystyle \\int f({\\bm{x}}) d{\\bm{x}} $ & Definite integral over the entire domain of ${\\bm{x}}$ \\\\\n$\\displaystyle \\int_{\\mathbb{S}} f({\\bm{x}}) d{\\bm{x}}$ & Definite integral with respect to ${\\bm{x}}$ over the set ${\\mathbb{S}}$ \\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\\centerline{\\bf Probability and Information Theory}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle P({\\textnormal{a}})$ & A probability distribution over a discrete variable\\\\\n$\\displaystyle p({\\textnormal{a}})$ & A probability distribution over a continuous variable, or over\na variable whose type has not been specified\\\\\n$\\displaystyle {\\textnormal{a}} \\sim P$ & Random variable ${\\textnormal{a}}$ has distribution $P$\\\\% so thing on left of \\sim should always be a random variable, with name beginning with \\r\n$\\displaystyle \\mathbb{E}_{{\\textnormal{x}}\\sim P} [ f(x) ]\\text{ or } \\mathbb{E} f(x)$ & Expectation of $f(x)$ with respect to $P({\\textnormal{x}})$ \\\\\n$\\displaystyle \\mathrm{Var}(f(x)) $ & Variance of $f(x)$ under $P({\\textnormal{x}})$ \\\\\n$\\displaystyle \\mathrm{Cov}(f(x),g(x)) $ & Covariance of $f(x)$ and $g(x)$ under $P({\\textnormal{x}})$\\\\\n$\\displaystyle H({\\textnormal{x}}) $ & Shannon entropy of the random variable ${\\textnormal{x}}$\\\\\n$\\displaystyle D_{\\mathrm{KL}} ( P \\Vert Q ) $ & Kullback-Leibler divergence of P and Q \\\\\n$\\displaystyle \\mathcal{N} ( {\\bm{x}} ; {\\bm{\\mu}} , {\\bm{\\Sigma}})$ & Gaussian distribution %\nover ${\\bm{x}}$ with mean 
${\\bm{\\mu}}$ and covariance ${\\bm{\\Sigma}}$ \\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\\centerline{\\bf Functions}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle f: {\\mathbb{A}} \\rightarrow {\\mathbb{B}}$ & The function $f$ with domain ${\\mathbb{A}}$ and range ${\\mathbb{B}}$\\\\\n$\\displaystyle f \\circ g $ & Composition of the functions $f$ and $g$ \\\\\n $\\displaystyle f({\\bm{x}} ; {\\bm{\\theta}}) $ & A function of ${\\bm{x}}$ parametrized by ${\\bm{\\theta}}$.\n (Sometimes we write $f({\\bm{x}})$ and omit the argument ${\\bm{\\theta}}$ to lighten notation) \\\\\n$\\displaystyle \\log x$ & Natural logarithm of $x$ \\\\\n$\\displaystyle \\sigma(x)$ & Logistic sigmoid, $\\displaystyle \\frac{1} {1 + \\exp(-x)}$ \\\\\n$\\displaystyle \\zeta(x)$ & Softplus, $\\log(1 + \\exp(x))$ \\\\\n$\\displaystyle || {\\bm{x}} ||_p $ & $L^p$ norm of ${\\bm{x}}$ \\\\\n$\\displaystyle || {\\bm{x}} || $ & $L^2$ norm of ${\\bm{x}}$ \\\\\n$\\displaystyle x^+$ & Positive part of $x$, i.e., $\\max(0,x)$\\\\\n$\\displaystyle \\bm{1}_\\mathrm{condition}$ & is 1 if the condition is true, 0 otherwise\\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\n\n\\section{Final instructions}\nDo not change any aspects of the formatting parameters in the style files.\nIn particular, do not modify the width or length of the rectangle the text\nshould fit into, and do not change font sizes (except perhaps in the\n\\textsc{References} section; see below). Please note that pages should be\nnumbered.\n\n\\section{Preparing PostScript or PDF files}\n\nPlease prepare PostScript or PDF files with paper size ``US Letter'', and\nnot, for example, ``A4''. 
The -t\nletter option on dvips will produce US Letter files.\n\nConsider directly generating PDF files using \\verb+pdflatex+\n(especially if you are a MiKTeX user).\nPDF figures must be substituted for EPS figures, however.\n\nOtherwise, please generate your PostScript and PDF files with the following commands:\n\\begin{verbatim}\ndvips mypaper.dvi -t letter -Ppdf -G0 -o mypaper.ps\nps2pdf mypaper.ps mypaper.pdf\n\\end{verbatim}\n\n\\subsection{Margins in LaTeX}\n\nMost of the margin problems come from figures positioned by hand using\n\\verb+\\special+ or other commands. We suggest using the command\n\\verb+\\includegraphics+\nfrom the graphicx package. Always specify the figure width as a multiple of\nthe line width as in the example below using .eps graphics\n\\begin{verbatim}\n \\usepackage[dvips]{graphicx} ...\n \\includegraphics[width=0.8\\linewidth]{myfile.eps}\n\\end{verbatim}\nor %\n\\begin{verbatim}\n \\usepackage[pdftex]{graphicx} ...\n \\includegraphics[width=0.8\\linewidth]{myfile.pdf}\n\\end{verbatim}\nfor .pdf graphics.\nSee section~4.4 in the graphics bundle documentation (\\url{http:\/\/www.ctan.org\/tex-archive\/macros\/latex\/required\/graphics\/grfguide.ps})\n\nA number of width problems arise when LaTeX cannot properly hyphenate a\nline. Please give LaTeX hyphenation hints using the \\verb+\\-+ command.\n\n\\subsubsection*{Author Contributions}\nIf you'd like to, you may include a section for author contributions as is done\nin many journals. This is optional and at the discretion of the authors.\n\n\\subsubsection*{Acknowledgments}\nUse unnumbered third level headings for the acknowledgments. 
All\nacknowledgments, including those to funding agencies, go at the end of the paper.\n\n\n\n\\section{Introduction}\n\\input{subtex\/intro.tex}\n\n\\section{Preliminaries} \\label{sec:prel}\n\\input{subtex\/preliminary.tex}\n\n\n\\section{Differential Dynamic Programming Neural Optimizer} \\label{sec:ddp-dnn}\n\\input{subtex\/ddp.tex}\n\n\n\\section{The Role of Feedback Policies} \\label{sec:dnn-trajopt}\n\\input{subtex\/role-of-feedback.tex}\n\n\\section{Experiments} \\label{sec:experiment}\n\\input{subtex\/experiment.tex}\n\n\\section{Conclusion} \\label{sec:conclusion}\nIn this work, we introduce DDPNOpt, a new class of optimizers arising from a novel perspective that bridges DNN training with optimal control and trajectory optimization.\nDDPNOpt features {layer-wise feedback policies} which improve convergence and robustness to hyper-parameters\nover existing optimizers.\nIt outperforms other OCP-inspired methods in both training performance and scalability.\nThis work provides new algorithmic insight and builds a bridge between deep learning and optimal control.\n\n\\newpage\n\n\n\\section*{Acknowledgments}\nThe authors would like to thank Chen-Hsuan Lin, Yunpeng Pan, Yen-Cheng Liu, and Chia-Wen Kuo for many helpful discussions on the paper.\nThis research was supported by NSF Award Number 1932288.\n\n\n\n\\subsection{Connection between Pontryagin Maximum Principle and DNN Training} \\label{app:pmp-dev}\n\nThe development of optimality conditions for the OCP dates back to the 1960s,\ncharacterized by both Pontryagin\\textquotesingle s Maximum Principle (PMP)\nand Dynamic Programming (DP).\nHere we review the PMP theorem and its connection to training DNNs.\n\\begin{theorem}[Discrete-time PMP \\citep{pontryagin1962mathematical}] \\label{the:pmp} %\nLet $\\bar{{\\bm{u}}}^*$ be the optimal control trajectory for OCP and\n$\\bar{{\\bm{x}}}^*$ be the corresponding state trajectory.\nThen, there exists a co-state trajectory $\\bar{{\\bm{p}}}^* \\triangleq 
\\{{\\bm{p}}_t^*\\}_{t=1}^{T}$,\nsuch that\n\\begin{subequations} \\label{eq:mf-pmp}\n\\begin{align}\n{{\\bm{x}}}_{t+1}^{*}&= \\nabla_{{\\bm{p}}_{}} H_t\\left({\\bm{x}}_{t}^{*}, {\\bm{p}}_{t+1}^{*}, {\\bm{u}}_{t}^{*}\\right) { \\text{ ,} }\n\\text{ } {\\bm{x}}_{0}^{*}={\\bm{x}}_{0}\n{ \\text{ ,} } \\label{eq:pmp-forward} \\\\\n{{\\bm{p}}}_{t}^{*}&=\\nabla_{{\\bm{x}}} H_t\\left({\\bm{x}}_{t}^{*}, {\\bm{p}}_{t+1}^{*}, {\\bm{u}}_{t}^{*}\\right) { \\text{ ,} }\n\\text{ } {\\bm{p}}_{T}^{*}= \\nabla_{{\\bm{x}}} \\phi\\left({\\bm{x}}_{T}^{*}\\right)\n{ \\text{ ,} } \\label{eq:pmp-backward} \\\\\n{\\bm{u}}_{t}^{*} &= \\argmin_{v\\in {\\mathbb{R}^{m}}}\nH_t\\left({\\bm{x}}_{t}^{*}, {\\bm{p}}_{t+1}^{*}, {\\bm{v}} \\right)\n{ \\text{ .} } \\label{eq:pmp-max-h}\n\\end{align}\n\\end{subequations}\nwhere $H_t: {\\mathbb{R}^{n}} \\times {\\mathbb{R}^{n}} \\times {\\mathbb{R}^{m}} \\mapsto \\mathbb{R} $\nis the discrete-time Hamiltonian given by\n\\begin{align}\nH_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t \\right) \\triangleq \\ell_t({\\bm{x}}_t, {\\bm{u}}_t) + {\\bm{p}}_{t+1}^{\\mathsf{T}} f_t({\\bm{x}}_t, {\\bm{u}}_t) { \\text{ ,} }\n\\end{align}\nand \\eq{\\ref{eq:pmp-backward}} is called the \\textit{adjoint equation}.\n\n\\end{theorem}\nThe discrete-time PMP theorem can be derived using KKT conditions,\nin which the co-state ${\\bm{p}}_t$ is equivalent to the Lagrange multiplier.\nNote that the solution to \\eq{\\ref{eq:pmp-max-h}} admits an open-loop process in the sense that it does not depend on state variables.\nThis is in contrast to the Dynamic Programming principle,\nin which a feedback policy is considered.\n\n\nIt is natural to ask whether the necessary condition in the PMP theorem relates to first-order optimization methods in DNN training.\nThis is indeed the case as pointed out in \\citet{li2017maximum}: %\n\\begin{lemma}[\\cite{li2017maximum}] \\label{lm:bp-gd}\nBack-propagation satisfies \\eq{\\ref{eq:pmp-backward}} and gradient descent iteratively 
solves \\eq{\\ref{eq:pmp-max-h}}.\n\\end{lemma}\nLemma \\ref{lm:bp-gd} follows by first expanding the derivative of the Hamiltonian w.r.t. ${\\bm{x}}_t$,\n\\begin{align}\n \\nabla_{{\\bm{x}}_t} H_t({\\bm{x}}_{t}, {\\bm{p}}_{t+1}, {\\bm{u}}_{t}) &= \\nabla_{{\\bm{x}}_t} \\ell_t({\\bm{x}}_{t}, {\\bm{u}}_{t}) + \\nabla_{{\\bm{x}}_t} f_t({\\bm{x}}_{t}, {\\bm{u}}_{t})^{\\mathsf{T}} {\\bm{p}}_{t+1} \\text{ } = \\nabla_{{\\bm{x}}_t} J({\\bar{{\\bm{u}}}}; {\\bm{x}}_0) { \\text{ .} }\n\\end{align}\nThus, \\eq{\\ref{eq:pmp-backward}} is simply the chain rule used in Back-propagation.\nWhen $H_t$ is differentiable w.r.t. ${\\bm{u}}_t$, one can attempt to solve \\eq{\\ref{eq:pmp-max-h}} by iteratively taking gradient descent steps.\nThis leads to\n\\begin{align} %\n{\\bm{u}}^{(k+1)}_t\n= {\\bm{u}}^{(k)}_t - \\eta \\nabla_{{\\bm{u}}_t} H_t({\\bm{x}}_{t}, {\\bm{p}}_{t+1}, {\\bm{u}}_{t})\n= {\\bm{u}}^{(k)}_t - \\eta \\nabla_{{\\bm{u}}_t} J({\\bar{{\\bm{u}}}};{\\bm{x}}_0) { \\text{ ,} }\n\\end{align}\nwhere $k$ and $\\eta$ denote the update iteration and step size.\nThus, existing optimization methods can be interpreted as iterative processes to match the PMP optimality conditions.\n\n\nInspired by Lemma \\ref{lm:bp-gd}, \\citet{li2017maximum} proposed\na PMP-inspired method, named the Extended Method of Successive Approximations (E-MSA),\nwhich solves the following augmented Hamiltonian\n\\begin{equation}\n\\begin{split}\n\\tilde{H}_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t, {\\bm{x}}_{t+1}, {\\bm{p}}_{t} \\right)\n&\\triangleq\nH_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t \\right) \\\\ &\\quad +\n\\frac{1}{2}\\rho \\norm{{\\bm{x}}_{t+1} - f_t({\\bm{x}}_t,{\\bm{u}}_t)} +\n\\frac{1}{2}\\rho \\norm{{\\bm{p}}_t - \\nabla_{{\\bm{x}}_t} H_t}\n{ \\text{ .} }\n\\end{split} \\label{eq:emsa}\n\\end{equation}\n$\\tilde{H}_t$ is the original Hamiltonian augmented with feasibility constraints on both the forward states and the backward co-states.\nE-MSA solves the 
minimization\n\\begin{align}\n {\\bm{u}}_t^* = \\argmin_{{\\bm{u}}_t \\in \\mathbf{R}^{m_t}} \\tilde{H}_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t, {\\bm{x}}_{t+1}, {\\bm{p}}_{t} \\right)\n\\end{align}\nwith L-BFGS per layer and per training iteration.\nAs a result, we also consider E-MSA a second-order method.\n\n\n\\subsection{Proof of Proposition \\ref{prop:bp2ddp}} \\label{app:c1}\n\\begin{proof}\nWe first prove the following lemma, which connects the backward passes of the two frameworks in the degenerate case.\n\\begin{lemma}\nAssume $Q_{{\\bm{u}} {\\bm{x}}}^t=\\mathbf{0}$ at all stages,\nthen we have\n\\begin{align} \\label{eq:v-dyn-degenerate}\nV_{\\bm{x}}^t = \\nabla_{{\\bm{x}}_t} J { \\text{ ,} } \\text{ and } \\quad\nV_{{\\bm{x}}\\vx}^t = \\nabla_{{\\bm{x}}_t}^2 J { \\text{ ,} } \\quad \\forall t\n{ \\text{ .} }\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nIt is easy to see that \\eq{\\ref{eq:v-dyn-degenerate}} holds at $t=T$.\nNow, assume the relation holds at $t+1$ and observe that at time $t$, the backward passes take the form of\n\\begin{align*} %\n V_{\\bm{x}}^t\n &= Q_{\\bm{x}}^t - Q_{{\\bm{u}} {\\bm{x}}}^{t\\text{ }{\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} Q^t_{{\\bm{u}}}\n = \\ell^t_{{\\bm{x}}} + {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J\n = \\nabla_{{\\bm{x}}_t} J { \\text{ ,} } \\\\\n V_{{\\bm{x}}\\vx}^t &= Q_{{\\bm{x}}\\vx}^t - Q_{{\\bm{u}} {\\bm{x}}}^{t\\text{ }{\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} Q^t_{{\\bm{u}} {\\bm{x}}}\n = \\nabla_{{\\bm{x}}_t} \\{ \\ell^t_{{\\bm{x}}} + {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J \\}\n = \\nabla_{{\\bm{x}}_{t}}^2 J\n{ \\text{ ,} }\n\\end{align*}\nwhere we recall $J_t = \\ell_t + J_{t+1}(f_t) $ in \\eq{\\ref{eq:Jt}}.\n\\end{proof}\nNow, \\eq{\\ref{eq:newton}} follows by substituting \\eq{\\ref{eq:v-dyn-degenerate}} into the definition of $Q_{{\\bm{u}}}^t$ and $Q_{{\\bm{u}}\\vu}^t$\n\\begin{align*} %\n Q_{{\\bm{u}}}^t\n &= 
\\ell^t_{{\\bm{u}}} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} V_{\\bm{x}}^{t+1}\n = \\ell^t_{{\\bm{u}}} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J\n = \\nabla_{{\\bm{u}}_t} J\n{ \\text{ ,} } \\\\\n Q_{{\\bm{u}}\\vu}^t\n &= \\ell^t_{{\\bm{u}}\\vu} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} V_{{\\bm{x}}\\vx}^{t+1} {f}_{{\\bm{u}}}^t + V_{\\bm{x}}^{t+1} \\cdot {f}_{{\\bm{u}}\\vu}^t \\\\\n &= \\ell^t_{{\\bm{u}}\\vu} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} (\\nabla^2_{{\\bm{x}}_{t+1}} J) {f}_{{\\bm{u}}}^t + \\nabla_{{\\bm{x}}_{t+1}} J \\cdot {f}_{{\\bm{u}}\\vu}^t \\\\\n &= \\nabla_{{\\bm{u}}_t} \\{ \\ell^t_{{\\bm{u}}} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J \\}\n = \\nabla_{{\\bm{u}}_t}^2 J\n{ \\text{ .} }\n\\end{align*}\nConsequently, the DDP feedback policy degenerates to layer-wise Newton update.\n\\end{proof}\n\n\n\n\\subsection{Proof of Proposition \\ref{prop:gn-ddp}} \\label{app:prop:gn-ddp}\n\\begin{proof}\nWe will prove Proposition \\ref{prop:gn-ddp} by backward induction.\nSuppose at layer $t+1$, we have ${V^{t+1}_{{\\bm{x}}\\vx}} = {\\bm{z}}_{\\bm{x}}^{t+1} \\otimes {\\bm{z}}_{\\bm{x}}^{t+1}$ and $\\ell_t\\equiv\\ell_t({\\bm{u}}_t)$,\nthen \\eq{\\ref{eq:Qt}} becomes\n\\begin{align*} %\n{Q^t_{{\\bm{x}} {\\bm{x}}}} &= {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} {V^{t+1}_{{\\bm{x}}\\vx}} {{f}^t_{{\\bm{x}}}} = {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} ({\\bm{z}}_{\\bm{x}}^{t+1} \\otimes {\\bm{z}}_{\\bm{x}}^{t+1}) {{f}^t_{{\\bm{x}}}} = ({{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}) \\otimes ({{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}) \\\\\n{Q^t_{{\\bm{u}} {\\bm{x}}}} &= {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} {V^{t+1}_{{\\bm{x}}\\vx}} {{f}^t_{{\\bm{x}}}} = {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} ({\\bm{z}}_{\\bm{x}}^{t+1} \\otimes {\\bm{z}}_{\\bm{x}}^{t+1}) {{f}^t_{{\\bm{x}}}} = ({{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}) \\otimes ({{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1})\n{ 
\\text{ .} }\n\\end{align*}\nSetting ${\\bm{q}}_{\\bm{x}}^t:={{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}$ and ${\\bm{q}}_{\\bm{u}}^t:={{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}$ gives the first part of Proposition \\ref{prop:gn-ddp}.\n\nNext, to show that the same factorization structure is preserved through the preceding layer, it is sufficient to show\n$V_{{\\bm{x}}\\vx}^t = {\\bm{z}}_{\\bm{x}}^{t} \\otimes {\\bm{z}}_{\\bm{x}}^{t}$ for some vector ${\\bm{z}}_{\\bm{x}}^{t}$. This is indeed the case.\n\\begin{align*} %\nV_{{\\bm{x}}\\vx}^t\n&= {Q^t_{{\\bm{x}} {\\bm{x}}}} - Q_{{\\bm{u}} {\\bm{x}}}^{t\\text{ }\\text{ } {\\mathsf{T}}} ({Q^t_{{\\bm{u}} {\\bm{u}}}})^{-1} {Q^t_{{\\bm{u}} {\\bm{x}}}} \\\\\n&= {\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t - ({\\bm{q}}_{\\bm{u}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t)^{\\mathsf{T}} ({Q^t_{{\\bm{u}} {\\bm{u}}}})^{-1} ({\\bm{q}}_{\\bm{u}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t) \\\\\n&= {\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t - ({\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t) ({\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t) { \\text{ ,} }\n\\end{align*}\nwhere the last equality follows by observing that ${\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t$ is a scalar.\n\nSetting ${\\bm{z}}_{\\bm{x}}^t = \\mathpalette\\DHLhksqrt{1-{\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t } \\text{ }{\\bm{q}}_{\\bm{x}}^t$ gives the desired factorization.\n\n\\end{proof}\n\n\n\n\\subsection{Derivation of \\eq{\\ref{eq:q-fc}}}\nFor notational simplicity, we drop the superscript $t$ and denote\n$V^{\\prime}_{{\\bm{x}}^{\\prime}} \\triangleq \\nabla_{{\\bm{x}}} V_{t+1}({\\bm{x}}_{t+1})$\nas the derivative of the value function at the next 
state.\n\\begin{align*}\nQ_{{\\bm{u}}}\n&=\\ell_{{\\bm{u}}}+f_{{\\bm{u}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^{\\prime}}^{\\prime}\n=\\ell_{{\\bm{u}}}+g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^{\\prime}}^{\\prime} { \\text{ ,} } \\\\\n Q_{{\\bm{u}} {\\bm{u}}} &=\\ell_{{\\bm{u}} {\\bm{u}}} + \\frac{\\partial}{\\partial {\\bm{u}}} \\{g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} \\} \\\\\n &=\\ell_{{\\bm{u}} {\\bm{u}}} + g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}} \\frac{\\partial}{\\partial {\\bm{u}}} \\{V_{{\\bm{x}}^\\prime}^{\\prime} \\}\n + g_{{\\bm{u}}}^{{\\mathsf{T}}}(\\frac{\\partial}{\\partial {\\bm{u}}} \\{ \\sigma_{{\\bm{h}}} \\})^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} + (\\frac{\\partial}{\\partial {\\bm{u}}} \\{ g_{{\\bm{u}}}\\})^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} \\\\\n &=\\ell_{{\\bm{u}} {\\bm{u}}} + g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}} V^\\prime_{{\\bm{x}}^\\prime {\\bm{x}}^\\prime} \\sigma_{{\\bm{h}}} g_{{\\bm{u}}}\n + g_{{\\bm{u}}}^{{\\mathsf{T}}}(V_{{\\bm{x}}^\\prime}^{\\prime {\\mathsf{T}}} \\sigma_{{\\bm{h}} {\\bm{h}}} g_{{\\bm{u}}})\n + g_{{\\bm{u}} {\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} \\\\\n &=\\ell_{{\\bm{u}} {\\bm{u}}}+g_{{\\bm{u}}}^{{\\mathsf{T}}} (V_{{\\bm{h}} {\\bm{h}}} + V_{{\\bm{x}}^{\\prime}}^{\\prime} \\cdot \\sigma_{{\\bm{h}} {\\bm{h}}}) g_{{\\bm{u}}}+V_{{\\bm{h}}} \\cdot g_{{\\bm{u}} {\\bm{u}}}\n\\end{align*}\nThe last equation follows by recalling\n$V_{{\\bm{h}}} \\triangleq \\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime}$ and\n$V_{{\\bm{h}}\\vh} \\triangleq \\sigma_{{\\bm{h}}}^{{\\mathsf{T}}} V^\\prime_{{\\bm{x}}^\\prime {\\bm{x}}^\\prime} \\sigma_{{\\bm{h}}}$.\nFollowing a similar derivation, we have\n\\begin{equation}\n\\begin{split} \\label{eq:q-fc2}\n Q_{{\\bm{x}}} 
&=\\ell_{{\\bm{x}}}+g_{{\\bm{x}}}^{{\\mathsf{T}}}V_{{\\bm{h}}}\\\\\n Q_{{\\bm{x}} {\\bm{x}}} &=\\ell_{{\\bm{x}} {\\bm{x}}}+g_{{\\bm{x}}}^{{\\mathsf{T}}} (V_{{\\bm{h}} {\\bm{h}}} + V_{{\\bm{x}}^{\\prime}}^{\\prime} \\cdot \\sigma_{{\\bm{h}} {\\bm{h}}}) g_{{\\bm{x}}}+V_{{\\bm{h}}} \\cdot g_{{\\bm{x}} {\\bm{x}}} \\\\\n Q_{{\\bm{u}} {\\bm{x}}} &=\\ell_{{\\bm{u}} {\\bm{x}}}+g_{{\\bm{u}}}^{{\\mathsf{T}}} (V_{{\\bm{h}} {\\bm{h}}} + V_{{\\bm{x}}^{\\prime}}^{\\prime} \\cdot \\sigma_{{\\bm{h}} {\\bm{h}}}) g_{{\\bm{x}}}+V_{{\\bm{h}}} \\cdot g_{{\\bm{u}} {\\bm{x}}}\n\\end{split}\n\\end{equation}\n\n\\textbf{Remarks.}\nFor feedforward networks, the computational overhead in Eq.~\\ref{eq:q-fc} and \\ref{eq:q-fc2} can be mitigated by leveraging the affine structure.\nSince $g$ is bilinear in ${\\bm{x}}_t$ and ${\\bm{u}}_t$, the terms ${{g}^t_{{\\bm{x}} {\\bm{x}}}}$ and ${{g}^t_{{\\bm{u}} {\\bm{u}}}}$ vanish.\nThe tensor ${{g}^t_{{\\bm{u}} {\\bm{x}}}}$ admits a sparse structure, whose computation can be simplified to\n\\begin{equation} \\begin{split}\n [{{g}^t_{{\\bm{u}} {\\bm{x}}}}&]_{(i,j,k)} = 1 \\quad \\text{iff} \\quad j = (k-1)n_{t+1} + i { \\text{ ,} }\n\\\\ [{V^{t}_{{\\bm{h}}}} \\cdot {{g}^t_{{\\bm{u}} {\\bm{x}}}}&]_{((k-1)n_{t+1}:kn_{t+1},k)} = {V^{t}_{{\\bm{h}}}}\n{ \\text{ .} } \\label{eq:gux-compute}\n\\end{split}\n\\end{equation}\nFor the coordinate-wise nonlinear transform, $\\sigma^t_{{\\bm{h}}}$ and $\\sigma^t_{{\\bm{h}}\\vh}$ are a diagonal matrix and a diagonal tensor, respectively.\nIn most learning instances, the stage-wise losses typically involve weight decay alone; thus the terms ${{\\ell}^t_{{\\bm{x}}}}, {{\\ell}^t_{{\\bm{x}} {\\bm{x}}}}, {{\\ell}^t_{{\\bm{u}} {\\bm{x}}}}$ also vanish.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Derivation of \\eq{\\ref{eq:qux-math}}} \\label{app:c2}\n\\eq{\\ref{eq:qux-math}} follows from the observation that the feedback policy $\\mathbf{K}_t {\\delta{\\bm{x}}}_t = -(Q^t_{{\\bm{u}} {\\bm{u}}})^{-1} Q^t_{{\\bm{u}} {\\bm{x}}} \\delta {\\bm{x}}_t$\nstands as 
the minimizer of the following objective\n\begin{align} \label{eq:Kdx-interpret}\n \mathbf{K}_t {\delta{\bm{x}}}_t =\n \argmin_{\delta {\bm{u}}_t({\delta{\bm{x}}}_t) \in \Gamma^\prime(\delta {\bm{x}}_t )} \norm{\nabla_{{\bm{u}}_t} Q({\bm{x}}_t + \delta {\bm{x}}_t,{\bm{u}}_t + \delta {\bm{u}}_t({\delta{\bm{x}}}_t)) - \nabla_{{\bm{u}}_t} Q({\bm{x}}_t,{\bm{u}}_t)}\n { \text{ ,} }\n\end{align}\nwhere $\Gamma^\prime(\delta {\bm{x}}_t )$ denotes all affine mappings from $\delta {\bm{x}}_t$ to $\delta {\bm{u}}_t$ and\n$\norm{\cdot}$ can be any proper norm in the Euclidean space.\n\eq{\ref{eq:Kdx-interpret}}\nfollows from the first-order Taylor expansion of $Q({\bm{x}}_t + \delta {\bm{x}}_t,{\bm{u}}_t + \delta {\bm{u}}_t)$,\n\begin{align*} %\n \nabla_{{\bm{u}}_t} Q({\bm{x}}_t + \delta {\bm{x}}_t,{\bm{u}}_t + \delta {\bm{u}}_t)\n =\n \nabla_{{\bm{u}}_t} Q({\bm{x}}_t,{\bm{u}}_t) + Q^t_{{\bm{u}} {\bm{x}}} \delta {\bm{x}}_t + Q^t_{{\bm{u}} {\bm{u}}} \delta {\bm{u}}_t\n { \text{ .} }\n\end{align*}\nWhen $Q = J$, we arrive at \eq{\ref{eq:qux-math}}.\nFrom Proposition \ref{prop:bp2ddp}, we know the equality holds when all $Q^s_{{\bm{x}}{\bm{u}}}$ vanish for $s>t$.\nIn other words, the approximation in \eq{\ref{eq:qux-math}} becomes an equality\nwhen all subsequent layer-wise objectives ($s>t$) are expanded only w.r.t. 
${\bm{u}}_s$.\n\n\n\n\subsection{Performance on Classification Datasets}\n\textbf{Networks \& Baselines Setup.}\nWe first validate the performance of training fully-connected (FCN) and convolution networks (CNN) with DDPNOpt on classification datasets.\n{\nFCN consists of $5$ fully-connected layers with the hidden dimension ranging from $10$ to $32$, depending on the size of the dataset.\nCNN consists of $4$ convolution layers (with $3{\times}3$ kernel, $32$ channels), followed by $2$ fully-connected layers.\nWe use ReLU activation on all datasets except WINE and DIGITS, where we use Tanh to better distinguish the differences between optimizers.\nThe batch size is set to $8$-$32$ for datasets trained with FCN, and $128$ for datasets trained with CNN.}\nAs DDPNOpt combines strengths from both standard training methods and the OCP framework, we select baselines from both sides.\nThese include the first-order methods SGD (with tuned momentum), RMSprop, and Adam,\nand the second-order method EKFAC \citep{george2018fast}, a recent extension of the popular KFAC \citep{martens2015optimizing}.\nFor OCP-inspired methods,\nwe compare DDPNOpt with vanilla DDP and E-MSA \citep{li2017maximum},\nwhich is also a second-order method,\nyet one built upon the PMP framework.\nRegarding the curvature approximation used in DDPNOpt (${\bm{M}}_t$ in Table \ref{table:update-rule}),\nwe found that using adaptive diagonal and GN matrices, respectively, for FCNs and CNNs\ngives the best performance in practice.\nWe leave the complete experiment setup and additional results to Appendix \ref{app:experiment}.\n\n\input{subtex\/training_table2.tex}\n\n\begin{figure}[t]\n\vskip -0.1in\n\centering\n\begin{minipage}{0.42\textwidth}\n\centering\n\includegraphics[width=\linewidth]{fig\/complexity.pdf}\n\vskip -0.15in\n\caption{Runtime comparison on MNIST.}\label{fig:runtime}\n\end{minipage}\n\begin{minipage}{0.57\textwidth}\n\centering\n \captionsetup{type=table}\n 
\captionsetup{justification=centering}\n \caption{Computational complexity {in backward pass}. \\\\ ($B$: batch size, $X$: hidden state dim., $L$: \# of layers)}\n \vskip -0.1in\n \begin{small}\n \begin{tabular}{c|cc|c}\n \toprule\n Method & Adam & Vanilla DDP & \textbf{DDPNOpt} \\\\\n \midrule\n {\small Memory} & {\small $\mathcal{O}(X^2L)$} & {\small $\mathcal{O}(BX^3L)$} & {\small$\mathcal{O}(X^2L+BX)$} \\\\\n {\small Speed} & {\small $\mathcal{O}(BX^2L)$} & {\small $\mathcal{O}(B^3X^3L)$} & {\small$\mathcal{O}(BX^2L)$} \\\\\n \bottomrule\n \end{tabular} \label{table:complexity}\n \end{small}\n\end{minipage}\n\vskip -0.2in\n\end{figure}\n\n\textbf{Training Results.}\nTable \ref{table:training} presents the results over $10$ random trials.\nIt is clear\nthat DDPNOpt outperforms both OCP baselines on \emph{all datasets and network types}.\nIn practice, both baselines suffer from unstable training and require careful tuning of the hyper-parameters.\nIn fact, we were not able to obtain results for vanilla DDP with any reasonable amount of computational resources once the problem size goes beyond FC networks.\nThis is in contrast to DDPNOpt, which adapts amortized curvature estimation from widely-used methods\nand thus exhibits much more stable training dynamics with superior convergence.\nIn Table~\ref{table:complexity}, we provide the {analytic} runtime and memory complexity of the different methods.\nWhile the cost of vanilla DDP grows cubically w.r.t. 
$BX$,\nDDPNOpt reduces the computation by orders of magnitude with the efficient approximations presented in Sec.~\ref{sec:ddp-dnn}.\nAs a result,\n{when measuring the actual computational performance with GPU parallelism,}\nDDPNOpt runs nearly as fast as standard methods and outperforms E-MSA by a large margin.\nThe additional memory complexity, when comparing DDP-inspired methods with Back-propagation methods,\ncomes from the layer-wise feedback policies.\nHowever, DDPNOpt is much more memory-efficient than vanilla DDP, as it exploits the factorization in Proposition~\ref{prop:gn-ddp}.\n\n\n\textbf{Ablation Analysis.}\nOn the other hand, the performance gain of DDPNOpt over standard methods appears comparatively small.\nWe conjecture this is due to the inevitable use of similar curvature adaptation,\nas the local geometry of the landscape directly affects the convergence behavior.\nTo identify scenarios where DDPNOpt best shows its effectiveness,\nwe conduct an ablation analysis on the feedback mechanism.\nThis is done by recalling Proposition~\ref{prop:bp2ddp}: when ${Q^t_{{\bm{u}} {\bm{x}}}}$ vanishes,\nDDPNOpt degenerates to the method associated with each precondition matrix.\nFor instance, DDPNOpt with identity (\emph{resp.} adaptive diagonal and GN) precondition ${\bm{M}}_t$ will generate the same updates as SGD (\emph{resp.} RMSprop and EKFAC) when all ${Q^t_{{\bm{u}} {\bm{x}}}}$ are zeroed out.\nIn other words, these DDPNOpt variants can be viewed as the \emph{DDP-extension} of existing baselines.\n\n\nIn Fig.~\ref{fig:exp-grid} we report the performance difference between each baseline and its associated DDPNOpt variant.\nEach grid corresponds to a distinct training configuration that is averaged over $10$ random trials,\nand we keep all hyper-parameters (\emph{e.g.} learning rate and weight decay) the same between baselines and their DDPNOpt variants.\nThus, the performance gap comes only from the feedback policies,\nor equivalently the 
update directions in Table~\ref{table:update-rule}.\nBlue (\emph{resp.} red) indicates an improvement (\emph{resp.} degradation) when the feedback policies are present.\nClearly, the improvement over baselines\nremains consistent across most hyper-parameter setups, and\nthe performance gap tends to become more pronounced as the learning rate increases.\nThis aligns with a previous study on numerical stability \citep{liao1992advantages},\n{\nwhich suggests the feedback can stabilize the optimization when, \emph{e.g.}, larger control updates are taken.\nSince a larger control update corresponds to a larger step size in DNN training, one should expect DDPNOpt to show its robustness as the learning rate increases.\n}\nAs shown in Fig.~\ref{fig:exp-comp},\nsuch a stabilization can also lead to smaller variance and faster convergence.\nThis sheds light on\nthe benefit gained by bridging two seemingly disconnected methodologies, DNN training and trajectory optimization.\n\n\n\begin{figure}[t]\n\vskip -0.34in\n\subfloat{\includegraphics[width=0.78\columnwidth]{fig\/grid-exp3.pdf} \label{fig:exp-grid} }\n\subfloat{\includegraphics[width=0.21\columnwidth]{fig\/comp2-fix.pdf} \label{fig:exp-comp}}\n\vskip -0.1in\n\caption{ %\n(a) Performance difference between DDPNOpt and baselines on DIGITS across the hyper-parameter grid.\nBlue (\emph{resp.} red) indicates an improvement (\emph{resp.} degradation) over baselines.\nWe observe similar behaviors on other datasets.\n(b) Examples of the actual training dynamics.\n}\n\label{fig:grid-exp}\n\end{figure}\n\n\n\begin{figure}[t]\n\vskip -0.1in\n\centering\n\begin{minipage}{0.29\textwidth}\n \centering\n \includegraphics[width=0.7\linewidth]{fig\/K-vis.png}\n \vskip -0.1in\n \caption{Visualization of the feedback policies on MNIST.}\label{fig:K-vis}\n\end{minipage}\n\begin{minipage}{0.05\textwidth}\n\end{minipage}\n\begin{minipage}{0.65\textwidth}\n \centering\n \subfloat{\n 
\includegraphics[width=0.95\linewidth]{fig\/vg-update2.pdf}\n \label{fig:vanish-grad-a}%\n }\n \subfloat{%\n \textcolor{white}{\rule{1pt}{1pt}}\n \label{fig:vanish-grad-b}%\n }%\n \vskip -0.1in\n \caption{Training a $9$-layer sigmoid-activated {FCN} on \\\\ DIGITS using MMC loss.\n DDPNOpt2nd denotes the variant in which the layer dynamics is fully expanded to the second order.\n }\n \label{fig:vanish-grad}\n\end{minipage}\n\vskip -0.2in\n\end{figure}\n\n\n\subsection{Discussion on Feedback Policies}\n\n\textbf{Visualization of Feedback Policies.}\nTo understand the effect of the feedback policies more intuitively,\n{\nin Fig.~\ref{fig:K-vis} we visualize the feedback policy when training CNNs.\nThis is done by first conducting singular-value decomposition on the feedback matrices ${{\bm{K}}_t}$,\nthen projecting the leading right-singular vector back to image space\n(see Alg.~\ref{alg:K-vis} and Fig.~\ref{fig:K-vis-app} in Appendix for the pseudo-code).\nThese feature maps, denoted $\delta x_{\max}$ in Fig.~\ref{fig:K-vis}, correspond to the dominant differential image to which the policy responds during the weight update.\nFig.~\ref{fig:K-vis} shows that the feedback policies indeed capture\nnon-trivial visual features related to the pixel-wise difference between spatially similar classes, \emph{e.g.} $(8,3)$ or $(7,1)$.\nThese differential maps differ from adversarial perturbations \citep{goodfellow2014explaining},\nas the former directly link the parameter update to the change in activation\nand are thus more interpretable.\n}\n\n\n\textbf{Vanishing Gradient.}\nLastly, we present an interesting finding on how the feedback policies help mitigate vanishing gradient (VG),\na notorious effect in which DNNs become impossible to train as gradients vanish during Back-propagation.\nFig.~\ref{fig:vanish-grad-a} reports results on training a sigmoid-activated DNN on DIGITS.\nWe select SGD-VGR, which imposes a specific regularization to mitigate VG 
\citep{pascanu2013difficulty}, and EKFAC as our baselines.\nWhile both baselines fail to make any progress,\nDDPNOpt continues to generate non-trivial updates, as the state-dependent feedback, \emph{i.e.} $\mathbf{K}_t {\delta{\bm{x}}}_t$, remains active.\nThe effect becomes significant when the dynamics is fully expanded to the second order.\nAs shown in Fig.~\ref{fig:vanish-grad-b}, the update norm from DDPNOpt is typically $5$-$10$ times larger.\nWe note that in this experiment, we replace the cross-entropy (CE) loss with the Max-Mahalanobis center (MMC) loss,\na new classification objective that improves robustness on standard vision datasets \citep{pang2019rethinking}.\nMMC casts classification as distributional regression, providing a denser Hessian and making the problem closer to original trajectory optimization.\nNone of the algorithms escape from VG using CE.\nWe highlight that while VG is typically mitigated on an \textit{architecture} basis, by using either unbounded activation functions or residual blocks,\nDDPNOpt provides an alternative from the \textit{algorithmic} perspective.\n\n\n\n\n\subsection{Experiment Details} \label{app:experiment}\n\subsubsection{Setup} \label{app:exp-set}\n\n\begin{minipage}[t]{0.56\textwidth}\n\textbf{Classification Datasets.}\nAll networks in the classification experiments are composed of $5$-$6$ layers.\nFor the intermediate layers, we use ReLU activation on all datasets, except Tanh on WINE and DIGITS.\nWe use an identity mapping at the last prediction layer on all datasets except WINE,\nwhere we use sigmoid instead to help distinguish the performance among optimizers.\nFor feedforward networks, the dimension of the hidden state is set to $10$-$32$.\nOn the other hand, we use standard $3\text{ }\times\text{ }3$ convolution\n\end{minipage}\n\begin{minipage}[t]{0.43\textwidth}\n \captionof{table}{Hyper-parameter search}\n \vskip -0.5in\n \begin{center}\n \begin{small}\n \begin{tabular}{c|c}\n \toprule\n Methods & Learning Rate \\\\\n \midrule\n SGD & $(7\mathrm{e}\text{-}2,5\mathrm{e}\text{-}1)$ \\\\\n Adam \& RMSprop & $(7\mathrm{e}\text{-}4,1\mathrm{e}\text{-}2)$ \\\\\n EKFAC & $(1\mathrm{e}\text{-}2,3\mathrm{e}\text{-}1)$ \\\\\n \bottomrule\n \end{tabular} \label{table:hyper}\n \end{small}\n \end{center}\n \vskip 0.5in\n\end{minipage}\nkernels for all CNNs.\nThe batch size is set to $8$-$32$ for datasets trained with feedforward networks, and $128$ for datasets trained with convolution networks.\nFor each baseline we select its hyper-parameters from an appropriate search space, which we detail in Table~\ref{table:hyper}.\nWe use the implementation at \url{https:\/\/github.com\/Thrandis\/EKFAC-pytorch} for EKFAC\nand implement our own E-MSA in PyTorch, since the official code released by \citet{li2017maximum} does not support GPU execution.\nWe impose the GN factorization presented in Proposition \ref{prop:gn-ddp} for all CNN training.\nRegarding the machine 
information,\nwe conduct our experiments on GTX 1080 TI, RTX TITAN, and four Tesla V100 SXM2 16GB GPUs.\n\n\n\textbf{Procedure to Generate Fig.~\ref{fig:K-vis}.}\nFirst, we perform standard DDPNOpt steps to compute the layer-wise policies. Next, we conduct singular-value decomposition on the feedback matrix ${{\bm{K}}_t}$.\nIn this way, the leading right-singular vector corresponds to the dominant state differential that the feedback policy responds to.\nSince this vector has the same dimension as the hidden state, which is most likely not the same as the image space, we project the vector back to image space using the techniques proposed in \citep{zeiler2014visualizing}. The pseudo-code and computation diagram are included in Alg.~\ref{alg:K-vis} and Fig.~\ref{fig:K-vis-app}.\n\n\n\vspace{-10pt}\n\n\begin{minipage}[t]{0.52\textwidth}\n\vskip -0.in\n\begin{algorithm}[H]\n\small\n \caption{\small Visualizing the Feedback Policies}\n \label{alg:K-vis}\n \begin{algorithmic}[1]\n \STATE {\bfseries Input:}\n Image ${{\bm{x}}}$ (we drop the time subscript for notational simplicity, {\emph{i.e.}} ${\bm{x}} \equiv {\bm{x}}_0$)\n \STATE Perform the backward pass of DDPNOpt; compute $({{\bm{k}}_t},{{\bm{K}}_t})$ backward\n \STATE Perform SVD on ${{\bm{K}}_t}$\n \STATE Extract the right-singular vector corresponding to the largest singular value, denoted $v_{\max} \in \mathbb{R}^{n_t} $\n \STATE Project $v_{\max}$ back to the image space using the deconvolution procedures introduced in \citep{zeiler2014visualizing}\n \end{algorithmic}\n\end{algorithm}\n\end{minipage}\n\hfill\n\begin{minipage}[t]{0.42\textwidth}\n\vskip -0.1in\n\begin{figure}[H]\n\centering\n\includegraphics[width=\linewidth]{fig\/K-viz-procedure.png}\n\caption{Pictorial illustration for Alg. 
\ref{alg:K-vis}.}\n\label{fig:K-vis-app}\n\end{figure}\n\end{minipage}\n\n\n\n\subsubsection{Additional Experiments and Discussion} \label{app:exp-more}\n\n\n\n\n\textbf{Batch trajectory optimization on a synthetic dataset.}\nOne of the differences between DNN training and trajectory optimization is that for the former,\nwe aim to find an ultimate control law that can drive every data point in the training set, or sampled batch, to its designated target.\nThough seemingly trivial from the ML perspective, this is a formulation distinct from standard OCP, since the optimal policy typically varies with the initial state.\nAs such, we validate the performance of DDPNOpt on batch trajectory optimization over a synthetic dataset,\nwhere we sample data from $k\in\{5,8,12,15\}$ Gaussian clusters in $\mathbb{R}^{30}$.\nSince conceptually\na DNN classifier can be thought of as a dynamical system guiding\ntrajectories of samples toward the target regions belonging to their classes,\nwe hypothesize that,\nfor DDPNOpt to show its effectiveness on batch training,\nthe feedback policy must act as an ensemble policy that combines the locally optimal policies of each class.\nFig.~\ref{fig:feedback-spectrum} shows the spectrum distribution, sorted in descending order, of the feedback policy in the prediction layer.\nThe result shows that the number of nontrivial eigenvalues matches exactly the number of classes in each setup (indicated by the vertical dashed line).\nAs the distribution in the prediction layer\nconcentrates into $k$ clusters through training,\nthe eigenvalues also increase, providing stronger feedback to the weight update.\n\n\n\begin{figure}[H]\n\vskip -0.1in\n\begin{center}\n\centerline{\includegraphics[width=\columnwidth]{fig\/feedback-policy-spectrum.pdf}}\n\vskip -0.1in\n\caption{\nSpectrum distribution on the synthetic dataset.\n}\n\label{fig:feedback-spectrum}\n\end{center}\n\vskip -0.2in\n\end{figure}\n\n\n\n\n\textbf{Ablation analysis on 
Adam.}\nFig.~\ref{fig:grid-exp-adam} reports the ablation analysis on Adam using the same setup as in Fig. \ref{fig:exp-grid}, \emph{i.e.}\nwe keep all hyper-parameters the same for each experiment so that the performance difference comes only from the existence of the feedback policies.\nIt is clear that the improvements from the feedback policies remain consistent for the Adam optimizer.\n\n\n\begin{figure}[H]\n\vskip -0.1in\n\begin{center}\n\centerline{\includegraphics[width=0.6\columnwidth]{fig\/grid-exp-adam.pdf}}\n\vskip -0.1in\n\caption{\nAdditional experiment for Fig.~\ref{fig:exp-grid} where we compare the performance difference between DDPNOpt and Adam.\nAgain, all grids report values averaged over $10$ random seeds.\n}\n\label{fig:grid-exp-adam}\n\end{center}\n\vskip -0.2in\n\end{figure}\n\n\n\n\n\n\textbf{Ablation analysis on DIGITS compared with best-tuned baselines.}\nFig.~\ref{fig:grid-exp} reports the performance difference between baselines and DDPNOpt under different hyper-parameter setups.\nHere, we report the numerical values when each baseline uses its best-tuned learning rate (which are the values we report in Table 3) and compare it with its DDPNOpt counterpart using the same learning rate. 
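In this comparison, a baseline and its DDPNOpt counterpart share the precondition matrix and learning rate and differ only in the feedback term of the update rule. A minimal numpy sketch of that relation (toy dimensions and random values chosen purely for illustration; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3      # toy weight / state dimensions (hypothetical)
eta = 0.1        # shared learning rate

Q_u = rng.normal(size=m)         # layer-wise gradient
Q_ux = rng.normal(size=(m, n))   # mixed second derivative
dx = rng.normal(size=n)          # state differential from the forward pass

# Baseline with identity precondition (SGD-like): open-loop update only.
du_baseline = -eta * Q_u

# DDP-style update: same precondition, plus the state feedback K @ dx.
k = -eta * Q_u                   # open-loop gain
K = -eta * Q_ux                  # feedback gain
du_ddp = k + K @ dx

# When Q_ux is zeroed out, the two updates coincide (degeneration to SGD).
du_ddp_no_feedback = k + (-eta * np.zeros((m, n))) @ dx
assert np.allclose(du_ddp_no_feedback, du_baseline)
```

With an adaptive-diagonal or GN precondition in place of the identity, the same degeneration would recover RMSprop or EKFAC, which is exactly the pairing isolated in the tables below.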
As shown in Tables \ref{table:6}, \ref{table:7}, and \ref{table:8}, in most cases extending a baseline with the Bellman framework improves the performance.\n\n\providecommand{\e}[1]{\ensuremath{\times 10^{#1}}}\n\n\bgroup\n\setlength\tabcolsep{0.07in}\n\begin{table}[H]\n\caption{ Learning rate $=0.1$}\n\label{table:6}\n\vspace{-3pt}\n\begin{center}\n\begin{small}\n\vskip -0.1in\n\begin{tabular}{c?cc}\n\toprule\n& {SGD} & {DDPNOpt with {${\bm{M}}_t = {\bm{I}}_t$} } \\\\[2pt]\n\midrule\nTrain Loss &\n$0.035$ & $\textbf{0.032}$ \\\\\nAccuracy (\%) &\n$95.36$ & $\textbf{95.52}$ \\\\\n\bottomrule\n\end{tabular}\n\end{small}\n\end{center}\n\end{table}\n\n\vspace{-20pt}\n\n\begin{table}[H]\n\caption{ Learning rate $=0.001$}\n\label{table:7}\n\vspace{-3pt}\n\begin{center}\n\begin{small}\n\vskip -0.1in\n\begin{tabular}{c?cc}\n\toprule\n& {RMSprop} & { DDPNOpt with {${\bm{M}}_t = \diag(\mathpalette\DHLhksqrt{\mathbb{E}[{Q^t_{{\bm{u}}}} \odot {Q^t_{{\bm{u}}}}]} +\epsilon)$} } \\\\[2pt]\n\midrule\nTrain Loss &\n$0.058$ & $\textbf{0.052}$ \\\\\nAccuracy (\%) &\n$94.33$ & $\textbf{94.63}$ \\\\\n\bottomrule\n\end{tabular}\n\end{small}\n\end{center}\n\end{table}\n\n\vspace{-20pt}\n\n\begin{table}[H]\n\caption{ Learning rate $=0.03$}\n\label{table:8}\n\vspace{-3pt}\n\begin{center}\n\begin{small}\n\vskip -0.1in\n\begin{tabular}{c?cc}\n\toprule\n& {EKFAC} & {DDPNOpt with {${\bm{M}}_t = \mathbb{E}{[{\bm{x}}_t{\bm{x}}_t^{\mathsf{T}}]} \otimes \mathbb{E}{[V_{{\bm{h}}}^tV_{{\bm{h}}}^{t\text{ }{\mathsf{T}}}]}$} } \\\\[2pt]\n\midrule\nTrain Loss &\n$0.074$ & $\textbf{0.067}$ \\\\\nAccuracy (\%) &\n$\textbf{95.24}$ & ${95.19}$ \\\\\n\bottomrule\n\end{tabular}\n\end{small}\n\end{center}\n\end{table}\n\n\egroup\n\n\n\input{subtex\/figure_4_absolute_value}\n\n\textbf{More experiments on vanishing gradient.}\nRecall that Fig.~\ref{fig:vanish-grad} reports the training performance using MMC loss 
on sigmoid-activated networks.\nIn Fig.~\ref{fig:vg-more1-a}, we report the result of training the same networks but using the CE loss (notice the numerical differences in the $y$ axis for the different objectives).\nNone of the presented optimizers were able to escape from vanishing gradient, as evidenced by the vanishing update magnitude.\nOn the other hand, changing to ReLU-activated networks eliminates the vanishing gradient,\nas shown in Fig.~\ref{fig:vg-more1-b}.\n\nFig.~\ref{fig:vg-more1-c} reports the performance with other first-order adaptive optimizers, including Adam and RMSprop.\nIn general, adaptive first-order optimizers are more likely to escape from vanishing gradient since the diagonal precondition matrix (recall ${\bm{M}}_t = \mathbb{E}[{J}_{{\bm{u}}_t} \odot {J}_{{\bm{u}}_t}]$ in Table~\ref{table:update-rule}) rescales the vanishing update to a fixed norm. However, as shown in Fig.~\ref{fig:vg-more1-c}, DDPNOpt* (the variant of DDPNOpt that\nutilizes a similar adaptive first-order precondition matrix) converges faster than these adaptive baselines.\n\nFig.~\ref{fig:vg-more2} illustrates the learning-rate selection process behind Fig.~\ref{fig:vanish-grad}.\nThe training performance for both SGD-VGR and EKFAC remains unchanged when tuning the learning rate. 
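The vanishing-gradient mechanism in these experiments is easy to reproduce in isolation: the sigmoid derivative is bounded by $1/4$, so the back-propagated signal shrinks roughly geometrically with depth. A small self-contained numpy illustration (toy dimensions loosely mirroring the $9$-layer sigmoid FCN; not the paper's training code):

```python
import numpy as np

rng = np.random.default_rng(1)
depth, dim = 9, 32               # hypothetical toy sizes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=dim)
grad = np.ones(dim)              # unit gradient signal to propagate
norms = []
for _ in range(depth):
    W = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # standard init scale
    h = W @ x
    x = sigmoid(h)
    # One layer's backward factor: W^T diag(sigmoid'(h)), with sigmoid' <= 0.25.
    grad = W.T @ (sigmoid(h) * (1.0 - sigmoid(h)) * grad)
    norms.append(float(np.linalg.norm(grad)))

# The norm of the propagated gradient decays by orders of magnitude over depth.
assert norms[-1] < 1e-2 * norms[0]
```

This multiplicative shrinkage affects the open-loop gradient signal, which is what the diagonal rescaling in adaptive optimizers partially compensates for.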
In practice, we observe unstable training with SGD-VGR when the learning rate becomes too large.\nOn the other hand, DDPNOpt and DDPNOpt2nd are able to escape from VG with all tested learning rates.\nHence, Fig.~\ref{fig:vanish-grad} combines Fig.~\ref{fig:vg-more2-a} (SGD-VGR-lr$0.1$) and Fig.~\ref{fig:vg-more2-c} (EKFAC-lr$0.03$, DDPNOpt-lr$0.03$, DDPNOpt2nd-lr$0.03$) for best visualization.\n\n\n\begin{figure}[H]%\n \centering\n \vskip -0.1in\n \subfloat{\n \textcolor{white}{\rule{1pt}{1pt}}\n \label{fig:vg-more1-a}%\n }\n \subfloat{\n \includegraphics[width=0.85\linewidth]{fig\/vg-1115.pdf}\n \label{fig:vg-more1-b}%\n }\n \vskip -0.1in\n \caption{\n Vanishing gradient experiment for different losses and nonlinear activation functions.\n }\n \label{fig:weight-update}%\n \vskip -0.1in\n\end{figure}\n\n\n\begin{figure}[H]%\n \centering\n \subfloat{\n \includegraphics[width=0.8\linewidth]{fig\/vg-1115-2.pdf}\n }\n \vskip -0.1in\n \caption{\n Vanishing gradient experiment for other optimizers.\n The legend ``DDPNOpt*''\n denotes DDPNOpt with the adaptive diagonal matrix.\n }\n \label{fig:vg-more1-c}%\n \vskip -0.1in\n\end{figure}\n\n\begin{figure}[H]%\n \centering\n \subfloat{\n \includegraphics[width=0.8\linewidth]{fig\/vg-lrs-1115.pdf}\n \label{fig:vg-more2-a}%\n }\n \subfloat{\n \textcolor{white}{\rule{1pt}{1pt}}\n \label{fig:vg-more2-b}%\n }\n \subfloat{\n \textcolor{white}{\rule{1pt}{1pt}}\n \label{fig:vg-more2-c}%\n }\n \subfloat{\n \textcolor{white}{\rule{1pt}{1pt}}\n \label{fig:vg-more2-d}%\n }\n \vskip -0.1in\n \caption{\n Vanishing gradient experiment for different learning rate setups.\n }\n \label{fig:vg-more2}%\n \vskip -0.1in\n\end{figure}\n\n\n\n\subsection{Training DNNs as Trajectory Optimization}\n\nRecall that DNNs can be interpreted as dynamical systems in which each layer is viewed as a distinct time step.\nConsider \emph{e.g.} the propagation rule in feedforward 
layers,\n\begin{align} \label{eq:dnn-dyn}\n{\bm{x}}_{t+1} = \sigma_t ({\bm{h}}_t) { \text{ ,} } \quad\n{\bm{h}}_t = g_t({\bm{x}}_{t},{\bm{u}}_t) = {\bm{W}}_t {\bm{x}}_t + {\bm{b}}_t { \text{ .} }\n\end{align}\n${\bm{x}}_t \in \mathbb{R}^{n_t}$ and ${\bm{x}}_{t+1} \in \mathbb{R}^{n_{t+1}}$ represent the activation vectors at layers $t$ and $t+1$, with\n${\bm{h}}_t \in \mathbb{R}^{n_{t+1}}$ being the pre-activation vector.\n$\sigma_t$ and $g_t$ respectively denote\nthe nonlinear activation function and\nthe affine transform parametrized by the vectorized weight ${\bm{u}}_t \triangleq [\mathrm{vec}({\bm{W}}_t), {\bm{b}}_t]^{\mathsf{T}}$.\n\eq{\ref{eq:dnn-dyn}} can be seen as a dynamical system (by setting $f_t \equiv \sigma_t \circ g_t$ in OCP)\npropagating the activation vector ${\bm{x}}_t$ using ${\bm{u}}_t$.\n\n\nNext, notice that the gradient descent (GD) update, denoted\n$\delta {\bar{{\bm{u}}}}^* \equiv -\eta \nabla_{{\bar{{\bm{u}}}}} J$ with $\eta$ being the learning rate,\ncan be broken down layer by layer, \textit{i.e.}\n$\delta {\bar{{\bm{u}}}}^* \triangleq \{\delta {\bm{u}}_t^*\}_{t=0}^{T-1}$,\nand computed backward by\n\begin{align}\n{\delta{\bm{u}}}_t^*\n&= \argmin_{{\delta{\bm{u}}}_t \in \mathbb{R}^{m_t}}\{\n J_t + \nabla_{{\bm{u}}_t} J_t^{\mathsf{T}} {\delta{\bm{u}}}_t + \textstyle \frac{1}{2} {\delta{\bm{u}}}_t^{\mathsf{T}} (\textstyle \frac{1}{\eta}{\bm{I}}_t) {\delta{\bm{u}}}_t\n\} { \text{ ,} } \label{eq:du-star} \\\\\n\text{where } J_t({\bm{x}}_t,{\bm{u}}_t) &\triangleq \ell_t({\bm{u}}_t) + J_{t+1}(f_t({\bm{x}}_t,{\bm{u}}_t),{\bm{u}}_{t+1})\n{ \text{ ,} } \quad J_T({\bm{x}}_T)\triangleq\phi({\bm{x}}_T)\n\label{eq:Jt}\n\end{align}\nis the per-layer objective\footnote{\n Hereafter we drop ${\bm{x}}_t$ in all $\ell_t(\cdot)$\n as the layer-wise loss typically involves weight regularization alone.\n} at layer $t$.\nIt can be readily verified that 
${\bm{p}}_t \equiv \nabla_{{\bm{x}}_t}J_t$ gives the exact Back-propagation dynamics.\n\eq{\ref{eq:Jt}} suggests that\nGD minimizes the quadratic expansion of $J_t$ with the Hessian $\nabla^2_{{\bm{u}}_t}J_t$ replaced by $\frac{1}{\eta}{\bm{I}}_t$.\nSimilarly, adaptive first-order methods, such as RMSprop and Adam,\napproximate the Hessian\nwith the diagonal of the covariance matrix.\nSecond-order methods, such as KFAC and EKFAC \citep{martens2015optimizing,george2018fast},\ncompute the full matrix using the Gauss-Newton (GN) approximation:\n\begin{align} \label{eq:kfac}\n \nabla^2_{{\bm{u}}}J_t \approx\n \mathbb{E}{[J_{{\bm{u}}_t} J_{{\bm{u}}_t}^{\mathsf{T}}]}\n = \mathbb{E}{[({\bm{x}}_t \otimes J_{{\bm{h}}_t}) ({\bm{x}}_t \otimes J_{{\bm{h}}_t})^{\mathsf{T}}]}\n \approx \mathbb{E}{[({\bm{x}}_t {\bm{x}}_t^{\mathsf{T}})]} \otimes \mathbb{E}{[( J_{{\bm{h}}_t} J_{{\bm{h}}_t}^{\mathsf{T}})]}\n { \text{ .} }\n\end{align}\n\n\n\nWe now draw a novel connection between the training procedure of DNNs and DDP.\nLet us first summarize Back-propagation (BP) {with gradient descent} in Alg. \ref{alg:bp} and compare it with DDP (Alg. 
\ref{alg:ddp}).\nAt each training iteration, we treat the current weights as the {control} ${\bar{{\bm{u}}}}$ that simulates the activation sequence ${\bar{{\bm{x}}}}$.\nStarting from this {nominal} trajectory $({\bar{{\bm{x}}}},{\bar{{\bm{u}}}})$,\nboth algorithms\nrecursively define some layer-wise objectives ($J_t$ in \eq{\ref{eq:Jt}} vs $V_t$ in \eq{\ref{eq:bellman}}),\ncompute the weight\/control update from the quadratic expansions (\eq{\ref{eq:du-star}} vs \eq{\ref{eq:du-star-ddp}}),\nand then carry certain information ($\nabla_{{\bm{x}}_t}J_t$ vs $(V_{\bm{x}}^t,V_{{\bm{x}}\vx}^t)$)\nbackward to the preceding layer.\nThe computation graphs of the two approaches are summarized in Fig.~\ref{fig:1}.\nIn the following proposition,\nwe make this connection formal and provide conditions under which the two algorithms become equivalent.\n\n\begin{minipage}[t]{0.49\textwidth}\n\vskip -0.1in\n\begin{proposition} \label{prop:bp2ddp}\nAssume $Q_{{\bm{u}} {\bm{x}}}^t=\mathbf{0}$ at all stages;\nthen the backward dynamics of the value derivatives can be described by Back-propagation,\n\begin{align}\n\begin{split}\n\forall t { \text{ ,} }\nV_{{\bm{x}}}^t = \nabla_{{\bm{x}}_t} J { \text{ ,} } \text{ } Q_{{\bm{u}}}^t = \nabla_{{\bm{u}}_t} J\n{ \text{ ,} } \text{ }\nQ_{{\bm{u}}\vu}^t = \nabla^2_{{\bm{u}}_t}J { \text{ .} }\n\end{split}\n\end{align}\nIn this case, the DDP policy is equivalent to stage-wise Newton,\nin which the gradient is preconditioned by the block-wise inverse Hessian at each layer:\n\begin{align}\n{{\bm{k}}_t} + {{\bm{K}}_t} {\delta{\bm{x}}}_t\n= - (\nabla_{{\bm{u}}_t}^2 J)^{{-1}}\nabla_{{\bm{u}}_t}J { \text{ .} } \label{eq:newton}\n\end{align}\nIf further we have ${Q_{{\bm{u}} {\bm{u}}}^{t}} \approx \frac{1}{\eta}{\mathbf{I}}$,\nthen DDP degenerates to Back-propagation with gradient descent.\n\end{proposition}\n\end{minipage}\n\hfill\n\begin{minipage}[t]{0.48\textwidth}\n 
\vskip -0.27in\n \begin{figure}[H]\n \vspace{-10pt}\n \includegraphics[width=\linewidth]{fig\/compute-graph4.png}\n \caption{\n DDP backward propagates the value derivatives $(V_{\bm{x}},V_{{\bm{x}}\vx})$ instead of $\nabla_{{\bm{x}}_t}J$\n and updates the weights using the layer-wise feedback policy, $\delta {\bm{u}}^{*}_t(\delta {\bm{x}}_t)$, with an additional forward propagation.}\n \label{fig:1}\n \end{figure}\n\end{minipage}\n\nThe proof is given in Appendix \ref{app:c1}.\nProposition \ref{prop:bp2ddp} states that the backward pass in DDP collapses to BP when $Q_{{\bm{u}} {\bm{x}}}$ vanishes at all stages.\nIn other words, existing training methods can be seen as special cases of DDP\nin which the mixed derivatives (\emph{i.e.} $\nabla_{{\bm{x}}_t{\bm{u}}_t}$) of the layer-wise objective are discarded.\n\n\subsection{Efficient Approximation and Factorization} \label{sec:eff-approx}\n\nMotivated by Proposition \ref{prop:bp2ddp},\nwe now present a new class of optimizers, the DDP Neural Optimizer (DDPNOpt), for training feedforward and convolution networks.\nDDPNOpt follows the same procedure as vanilla DDP (Alg.~\ref{alg:ddp})\nyet incorporates several key adaptations arising from DNN training,\nwhich we highlight below.\n\n\textbf{Evaluate derivatives of $Q_t$ with layer dynamics.}\nThe primary computation in DDPNOpt comes from constructing the derivatives of $Q_t$ at each layer.\nWhen the dynamics is represented by the layer propagation (recall \eq{\ref{eq:dnn-dyn}}, where we set $f_t \equiv \sigma_t \circ g_t$), we can rewrite \eq{\ref{eq:Qt}} as:\n\begin{equation}\n\begin{split}\n {Q^t_{{\bm{x}}}} = {{g}^{t \text{ }{\mathsf{T}}}_{{\bm{x}}}} {V^{t}_{{\bm{h}}}} { \text{ ,} } \quad\n {Q^t_{{\bm{u}}}} = {{\ell}^t_{{\bm{u}}}} + {{g}^{t \text{ }{\mathsf{T}}}_{{\bm{u}}}} {V^{t}_{{\bm{h}}}} { \text{ ,} } \quad\n {Q^t_{{\bm{x}} {\bm{x}}}} = {{g}^{t \text{ }{\mathsf{T}}}_{{\bm{x}}}} {V^{t}_{{\bm{h}}\vh}} g^t_{{\bm{x}}} { \text{ ,} } 
\\quad\n {Q^t_{{\\bm{u}} {\\bm{x}}}} = {{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{u}}}} {V^{t}_{{\\bm{h}}\\vh}} g^t_{{\\bm{x}}} { \\text{ ,} }\n \\label{eq:q-fc}\n\\end{split}\n\\end{equation}\nwhere\n${V^{t}_{{\\bm{h}}}} \\triangleq {\\sigma^{t\\text{ } {\\mathsf{T}}}_{{\\bm{h}}}} {V^{t+1}_{{\\bm{x}}}}$ and\n${V^{t}_{{\\bm{h}}\\vh}} \\triangleq {\\sigma^{t\\text{ } {\\mathsf{T}}}_{{\\bm{h}}}} V^{t+1}_{{\\bm{x}} {\\bm{x}}} \\sigma^t_{{\\bm{h}}}$\nabsorb the computation of the non-parametrized activation function $\\sigma$.\n{\n Note that \\eq{\\ref{eq:q-fc}} expands the dynamics only up to first order,\n \\emph{i.e.} we omit the tensor products which involve second-order expansions of the dynamics,\n as the stability obtained by keeping only the linearized dynamics is thoroughly discussed and widely adopted in practical DDP usage \\citep{todorov2005generalized}.\n}\nThe matrix-vector product with the Jacobian of the affine transform (\\emph{i.e.} ${{g}^{t}_{{\\bm{u}}}},{{g}^{t}_{{\\bm{x}}}}$) can be evaluated efficiently for both feedforward (FF) and convolution (Conv) layers:\n\\begin{alignat}{4}\n{\\bm{h}}_t &\\FCeq {\\bm{W}}_t{\\bm{x}}_t+{\\bm{b}}_t\n&&\\Rightarrow\n{g^t_{{\\bm{x}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= {\\bm{W}}_t^{\\mathsf{T}} V^{t}_{{\\bm{h}}}\n{ \\text{ ,} } \\quad\n{g^t_{{\\bm{u}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= {\\bm{x}}_t \\otimes V^{t}_{{\\bm{h}}}\n{ \\text{ ,} } \\\\\n{\\bm{h}}_t &\\CONVeq \\omega_t * {\\bm{x}}_t\n&&\\Rightarrow\n{g^t_{{\\bm{x}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= \\omega_t^{\\mathsf{T}} {\\text{ } \\hat{*} \\text{ }} V^{t}_{{\\bm{h}}}\n{ \\text{ ,} } \\quad\n{g^t_{{\\bm{u}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= {\\bm{x}}_t {\\text{ } \\hat{*} \\text{ }} V^{t}_{{\\bm{h}}}\n{ \\text{ ,} }\n\\end{alignat}\nwhere $\\otimes$, ${\\text{ } \\hat{*} \\text{ }}$, and $*$ denote the Kronecker product, deconvolution, and convolution operators, respectively.\n\n\n\n\n\n\n\n\n\n\\input{subtex\/table-optimizer-relation}\n\n\n\\textbf{Curvature 
approximation.}\n{\n Next, since DNNs are highly over-parametrized models, ${\\bm{u}}_t$ (\\emph{i.e.} the layer weight) lies in a high-dimensional space.}\nThis makes ${Q^t_{{\\bm{u}} {\\bm{u}}}}$ and $({Q^t_{{\\bm{u}} {\\bm{u}}}})^{-1}$ computationally intractable; an approximation is thus required.\nRecall the interpretation we drew in \\eq{\\ref{eq:Jt}},\nwhere existing optimizers differ in approximating the Hessian $\\nabla_{{\\bm{u}}_t}^2J_t$.\nDDPNOpt adopts the same curvature approximations for ${Q^t_{{\\bm{u}} {\\bm{u}}}}$.\nFor instance, we can approximate ${Q^t_{{\\bm{u}} {\\bm{u}}}}$ simply with\nan identity matrix ${\\bm{I}}_t$, an adaptive diagonal matrix $\\diag(\\mathpalette\\DHLhksqrt{\\mathbb{E}[{Q^t_{{\\bm{u}}}} \\odot {Q^t_{{\\bm{u}}}}]})$, or the GN matrix:\n\\begin{align} \\label{eq:ekfac-ddp}\n {Q}^t_{{\\bm{u}} {\\bm{u}}} \\approx\n \\mathbb{E}{[Q^t_{{\\bm{u}}} {Q^t_{{\\bm{u}}}}^{\\mathsf{T}}]}\n =\\mathbb{E}{[({\\bm{x}}_t \\otimes {V^{t}_{{\\bm{h}}}}) ({\\bm{x}}_t \\otimes {V^{t}_{{\\bm{h}}}})^{\\mathsf{T}}]}\n \\approx \\mathbb{E}{[{\\bm{x}}_t {\\bm{x}}_t^{\\mathsf{T}}]} \\otimes \\mathbb{E}{[{V^{t}_{{\\bm{h}}}} {V^{t}_{{\\bm{h}}}}^{\\mathsf{T}}]}\n { \\text{ .} }\n\\end{align}\n\nTable \\ref{table:update-rule} summarizes the difference in curvature approximation\n(\\emph{i.e.} the preconditioner ${\\bm{M}}_t$) for different methods.\nNote that DDPNOpt constructs these approximations using $(V,Q)$ rather than $J$\nsince the two consider different layer-wise objectives. 
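The Kronecker factorization in \eq{\ref{eq:ekfac-ddp}} is what makes the preconditioning tractable: since the inverse of a Kronecker product is the Kronecker product of the inverses, the preconditioned step can be applied factor-wise without ever forming the full matrix. As a rough illustration only (not the authors' implementation; variable names and the damping value are our own), a NumPy sketch:

```python
# Illustrative sketch of a Kronecker-factored curvature as in Eq. (ekfac-ddp):
# Q_uu ~ E[x x^T] (x) E[V_h V_h^T].  Names and damping are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 4, 3, 256

X = rng.standard_normal((batch, n_in))    # layer inputs x_t over a batch
G = rng.standard_normal((batch, n_out))   # back-propagated vectors V_h

A = X.T @ X / batch                       # E[x x^T],      (n_in,  n_in)
S = G.T @ G / batch                       # E[V_h V_h^T],  (n_out, n_out)

damping = 1e-3                            # Tikhonov term for invertibility
A_d = A + damping * np.eye(n_in)
S_d = S + damping * np.eye(n_out)

g = rng.standard_normal(n_in * n_out)     # flattened gradient Q_u

# Naive preconditioned step: build the full (n_in*n_out)^2 Kronecker matrix.
step_full = np.linalg.solve(np.kron(A_d, S_d), g)

# Factored step: for symmetric factors and row-major vec,
# (A (x) S)^{-1} vec(M) = vec(A^{-1} M S^{-1}), so only small systems are solved.
M = g.reshape(n_in, n_out)
step_kron = np.linalg.solve(A_d, M @ np.linalg.inv(S_d)).reshape(-1)

assert np.allclose(step_full, step_kron)
```

The factored solve costs $O(n_{\text{in}}^3 + n_{\text{out}}^3)$ instead of $O((n_{\text{in}} n_{\text{out}})^3)$, which is the point of the factorization.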
%\nAs a direct implication of Proposition \\ref{prop:bp2ddp},\nDDPNOpt degenerates to the optimizer it adopts for curvature approximation\nwhenever all ${Q^t_{{\\bm{u}} {\\bm{x}}}}$ vanish.\n\n\n\\textbf{Outer-product factorization.}\nWhen memory efficiency becomes non-negligible as the problem scales,\nwe make a GN approximation to $\\nabla^2\\phi$,\nsince a low-rank structure at the prediction layer has been observed\nfor the problems considered in this work \\citep{nar2019cross,lezama2018ole}.\nIn the following proposition, we show that\nfor a specific type of OCP, which happens to be the case of DNN training,\nsuch a low-rank structure is preserved throughout the DDP backward pass.\n\\begin{proposition}[Outer-product factorization in DDPNOpt] \\label{prop:gn-ddp}\n Consider the OCP where $\\ell_t \\equiv \\ell_t({\\bm{u}}_t)$ is independent of ${\\bm{x}}_t$.\n If the terminal-stage Hessian can be expressed by the outer product of a vector ${\\bm{z}}_{\\bm{x}}^T$,\n $\\nabla^2\\phi({\\bm{x}}_{T}) = {\\bm{z}}_{\\bm{x}}^T \\otimes {\\bm{z}}_{\\bm{x}}^T$ (for instance, ${\\bm{z}}_{\\bm{x}}^T=\\nabla \\phi$ for GN), then we have the factorization for all $t$:\n \\begin{equation}\n \\begin{split}\n {Q^t_{{\\bm{u}} {\\bm{x}}}} = {\\bm{q}}_{\\bm{u}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t { \\text{ ,} } \\quad\n {Q^t_{{\\bm{x}} {\\bm{x}}}} = {\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t { \\text{ ,} } \\quad\n V_{{\\bm{x}}\\vx}^t = {\\bm{z}}_{\\bm{x}}^t \\otimes {\\bm{z}}_{\\bm{x}}^t { \\text{ .} }\n \\end{split} \\label{eq:qxqu}\n \\end{equation}\n ${\\bm{q}}_{\\bm{u}}^t$, ${\\bm{q}}_{\\bm{x}}^t$, and ${\\bm{z}}_{\\bm{x}}^t$\n are outer-product vectors\n which are also computed along the backward pass:\n \\begin{align}\n {\\bm{q}}_{\\bm{u}}^t = {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} {\\bm{z}}_{\\bm{x}}^{t+1} { \\text{ ,} } \\quad\n {\\bm{q}}_{\\bm{x}}^t = {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} {\\bm{z}}_{\\bm{x}}^{t+1} { \\text{ ,} } \\quad\n {\\bm{z}}_{\\bm{x}}^t = 
\\mathpalette\\DHLhksqrt{1-{\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t } \\text{ }{\\bm{q}}_{\\bm{x}}^t\n { \\text{ .} }\n \\label{eq:vxvx}\n \\end{align}\n\\end{proposition}\n\\vskip -0.08in\nThe derivation is given in Appendix \\ref{app:prop:gn-ddp}.\nIn other words,\nthe outer-product factorization at the final layer can be backward propagated to all preceding layers.\nThus, large matrices, such as ${Q^t_{{\\bm{u}} {\\bm{x}}}}$, ${Q^t_{{\\bm{x}} {\\bm{x}}}}$, $V_{{\\bm{x}}\\vx}^t$, and even feedback policies ${{\\bm{K}}_t}$,\ncan be factorized accordingly, greatly reducing the complexity. %\n\n\n\n\n\\vspace{-8pt}\n\\input{subtex\/pseudo-code}\n\\vspace{-10pt}\n\n{\n \\textbf{Regularization on $V_{{\\bm{x}}\\vx}$.}\n Finally,\n we apply Tikhonov regularization to the value Hessian $V^t_{{\\bm{x}}\\vx}$ (line $12$ in Alg.~\\ref{alg:ddpnopt}).\n This can be seen as placing a quadratic state-cost and has been shown to improve stability when optimizing complex humanoid behavior \\citep{tassa2012synthesis}.\n For DNN applications, where the dimension of the state ({\\emph{i.e.}} the vectorized activation) varies during the forward\/backward pass,\n the Tikhonov regularization prevents the value Hessian from becoming low-rank (through ${{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{u}}}} {V^{t}_{{\\bm{h}}\\vh}} g^t_{{\\bm{x}}}$);\n hence we also observe a similar stabilization effect in practice.\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\nAlthough standard four-dimensional (4D) General Relativity (GR) is\nbelieved to be the correct description of gravity at the classical level,\nits quantization faces many well-known problems. Therefore,\nthree-dimensional (3D) gravity has gained much interest, since classically\nit is much simpler and thus one can investigate its quantization more efficiently. 
Amongst others, in 3D gravity one obtains the\nBanados-Teitelboim-Zanelli (BTZ) black hole~\\cite{BTZ}, which is a\nsolution to the Einstein equations with a negative cosmological constant.\nThis black-hole solution presents interesting properties at both classical\nand quantum levels, and it shares several features of the Kerr black hole\nof 4D GR~\\cite{Carlip}. \n\nFurthermore, remarkable attention has recently been devoted to topologically\nmassive gravity, which is a generalization of 3D GR that amounts to\naugmenting the Einstein-Hilbert action by adding a Chern-Simons gravitational\nterm; the propagating degree of freedom is then a massive\ngraviton, and the theory also admits the BTZ black hole as an exact\nsolution~\\cite{deser}. The renewed interest in topologically\nmassive gravity relies on the possibility of\nconstructing a chiral theory of gravity at a special point of the\nparameter space, as suggested in~\\cite{Li:2008dq}. This idea has\nbeen extensively analyzed in recent years~\\cite{Strominger:2008dp}, leading to a fruitful\ndiscussion that ultimately led to a significantly better understanding of\nthe model~\\cite{Maloney:2009ck}. Moreover, another\n3D massive gravity theory known as new massive gravity \\cite{Bergshoeff:2009hq, Bergshoeff:2009tb} (where the action is given by the\nEinstein-Hilbert term plus a specific\ncombination of square-curvature terms which gives rise to\nfield equations with a second-order trace) has attracted considerable attention; this theory also admits interesting solutions, see for instance \\cite{Clement:2009gq, Clement:2009ka, Oliva:2009ip}. Furthermore, 3D gravity with torsion has been extensively studied in~\\cite{3dgravitywithtorsion} and references therein.\n\n\n\nOn the other hand, hairy black holes are interesting solutions of Einstein's Theory\nof Gravity and also of certain types of Modified Gravity Theories. 
The first attempts to couple a scalar field to gravity were made in\nan asymptotically flat spacetime. Hairy black hole solutions\nwere then found \\cite{BBMB}, but these\nsolutions were not genuine counterexamples to\nthe no-hair theorems because they were not physically\nacceptable: the scalar field was divergent on the horizon, and\nstability analysis showed that they were unstable\n\\cite{bronnikov}. To remedy this, a regularization procedure has to\nbe used to make the scalar field finite on the horizon. Hairy black hole solutions have been extensively studied over the years, mainly in connection\n with the no-hair theorems. The recent developments in string theory and\nespecially the application of the AdS\/CFT principle to condensed\nmatter phenomena like superconductivity (for a review see\n\\cite{Hartnoll:2009sz}) triggered interest in further study\nof the behavior of matter fields outside the black hole horizon\n\\cite{Gubser:2005ih,Gubser:2008px}. There are also very\ninteresting recent developments in Observational Astronomy. High\nprecision astronomical observations of supermassive black\nholes may pave the way to experimentally test the no-hair\nconjecture\n \\cite{Sadeghian:2011ub}. Also, there are numerical investigations\n of single and binary black holes in the presence of scalar fields\n\\cite{Berti:2013gfa}. The above is only a small sample of the current relevance of the study of hairy black holes in physics; for more details see for instance \\cite{Gonzalez:2013aca, Gonzalez:2014tga} and references therein. 
Also, we refer the reader to references \\cite{Martinez:1996gn, Henneaux:2002wm, Zhao:2013isa, Xu:2014uka, Cardenas:2014kaa} and references therein, where black hole solutions in three space-time dimensions with a scalar field (minimally and\/or conformally) coupled to gravity have been investigated.\n\n\n\n\n\nIn the present work we are interested in investigating the existence of 3D hairy black hole solutions for theories based on torsion. In particular, the so-called ``teleparallel\nequivalent of General Relativity\" (TEGR) \\cite{ein28,Hayashi79} is an\nequivalent formulation of gravity, but instead of using the curvature\ndefined via the Levi-Civita connection, it uses the Weitzenb{\\\"o}ck\nconnection, which has no curvature but only torsion. So, we consider a scalar field non-minimally coupled with the torsion scalar, with a self-interacting potential in TEGR, and we find three-dimensional asymptotically AdS hairy black holes. It is worth mentioning that this kind of theory (known as scalar-torsion theory) has been studied in the cosmological context, where the dark energy sector is attributed to the scalar field. It was shown that the minimal case is equivalent to standard quintessence. However, the nonminimal case has a richer structure, exhibiting quintessence-like or phantom-like behavior, or experiencing the phantom-divide crossing \\cite{Geng:2011aj, Geng:2011ka, Gu:2012ww}; see also \\cite{Horvat:2014xwa} for applications of this theory (with a complex scalar field) to boson stars.\n\nIt is also worth mentioning that a natural extension of TEGR is so-called $f(T)$ gravity, in which the Lagrangian density is a function of the torsion scalar $T$ \\cite{Ferraro:2006jd, Ferraro:2008ey, Bengochea:2008gz,Linder:2010py}.\nThe $f(T)$ theories pick up preferred reference frames, which constitute the autoparallel curves of the given manifold. 
\nA genuine advantage of $f(T)$ gravity compared with other deformed gravitational schemes is that the differential equations for the vielbein components are second order differential equations. However, the effects of the additional degrees of freedom that certainly exist in $f(T)$ theories are a consequence of the breaking of local Lorentz invariance that these theories exhibit. Despite this, it was found that, on the flat FRW background with a scalar field, linear perturbation analysis up to second order does not reveal any extra degrees of freedom at all \\cite{Izumi:2012qj}. As such, it is fair to say that the nature of these additional degrees of freedom remains unknown. Remarkably, it is possible to modify $f(T)$ theory in order to make it manifestly Lorentz invariant. However, it will generically have different dynamics and will reduce to $f(T)$ gravity in some local Lorentz frames \\cite{Li:2010cg, Weinberg, Arcos:2010gi}.\nClearly, in extending this\ngeometry sector, one of the goals is to solve the puzzle of dark energy and\ndark matter without asking for new material ingredients that have\nnot yet been detected by experiments \\cite{Capozziello:2007ec,Ghosh:2012pg}. For instance, a Born-Infeld $f(T)$ gravity Lagrangian was used to address the physically inadmissible divergences occurring in the standard cosmological Big Bang model, rendering the spacetime geodesically complete and powering an inflationary stage without the introduction of an inflaton field \\cite{Ferraro:2008ey}. Also, it is believed that $f(T)$ gravity could be a reliable approach to address the shortcomings of general relativity at high energy scales \\cite{Capozziello:2011et}. Furthermore, both inflation and the dark energy dominated stage can be realized in Kaluza-Klein and Randall-Sundrum models, respectively \\cite{Bamba:2013fta}.\nIn this way, $f(T)$ gravity has gained attention and\nhas been proven to exhibit interesting cosmological implications. 
On the other hand, the search for black hole solutions in $f(T)$ gravity is not a trivial problem, and there are only a few exact solutions, see for instance \\cite{G1, solutions,Rodrigues:2013ifa}. Remarkably, it is possible to construct other generalizations, such as the Teleparallel Equivalent of Gauss-Bonnet Gravity \\cite{Kofinas:2014owa, Kofinas:2014daa}, Kaluza-Klein theory for teleparallel gravity \\cite{Geng:2014nfa} and scalar-torsion gravity theories \\cite{Geng:2011aj, Kofinas:2015hla}. \n\n\n\n\n\n\n\n\n\n\n\nThe paper is organized as follows. In Section II we give a brief review of three-dimensional Teleparallel Gravity. Then, in Section III we find asymptotically AdS black holes with scalar hair, and we conclude in Section IV with final remarks.\n\n\\section{3D Teleparallel Gravity}\n\\label{Tel3D}\n\nIn 1928, Einstein proposed the idea of teleparallelism to unify gravity and electromagnetism into a unified field theory; this corresponds to an equivalent formulation of General Relativity (GR), nowadays known as the Teleparallel Equivalent to General Relativity (TEGR) \\cite{ein28, Hayashi79}, where the Weitzenb\\\"{o}ck connection is used to define the covariant derivative (instead of the Levi-Civita connection, which is used to define the covariant derivative in the context of GR). The first \n investigations on teleparallel 3D gravity were\nperformed by Kawai almost twenty years ago \\cite{Kawai1,Kawai2,Kawai3}. The Weitzenb\\\"{o}ck connection mentioned above has non-vanishing torsion. However, it is curvatureless, which implies that this formulation of gravity exhibits only torsion. The Lagrangian density $T$ is constructed from the torsion tensor.\nTo clarify, the torsion scalar $T$ is the result of a very specific quadratic combination of irreducible representations of the torsion tensor under the Lorentz group $SO(1,3)$ \\cite{Hehl:1994ue}. 
In this way, the torsion tensor in TEGR \nincludes all the information concerning the\ngravitational field.\nThe theory is called ``Teleparallel\nEquivalent to General Relativity'' since the field equations are exactly the same as those of GR for every geometry choice.\n\nThe Lagrangian of teleparallel 3D gravity corresponds to\nthe more general quadratic Lagrangian for torsion, under the\nassumption of zero spin-connection. So, the action can be written as \\cite{Muench:1998ay,Itin:1999wi}\n\\begin{equation} \\label{action2}\nS=\\frac{1}{2 \\kappa}\\int \\left( \\rho_{0} \\mathcal{L}_{0}+ \\rho_{1}\n\\mathcal{L%\n}_{1}+ \\rho_{2} \\mathcal{L}_{2}+\\rho_{3} \\mathcal{L}_{3}+ \\rho_{4}\n\\mathcal{L%\n}_{4}\\right)~,\n\\end{equation}%\nwhere $\\kappa$ is the three-dimensional gravitational constant,\n$\\rho_i$ are parameters, and\n\\begin{equation}\n\\mathcal{L}_{0}= \\frac{1}{4}e^{a} \\wedge \\star e_a~,\\quad \\mathcal{L}_{1}=de^{a}\n\\wedge \\star de_{a}~,\\quad \\mathcal{L}_{2}= (de_{a} \\wedge \\star e^a)\n\\wedge \\star (de_b \\wedge e^b)~,\\nonumber\n\\end{equation}\n\\begin{equation}\n\\mathcal{L}_{3}=(de^{a} \\wedge e^{b}) \\wedge \\star (de_{a} \\wedge\ne_{b})~,\\quad \\mathcal{L}_{4}= (de_{a} \\wedge \\star e^b) \\wedge\n\\star (de_b \\wedge e^a)~,\n\\end{equation}\nwhere $e^a$ denotes the vielbein, $d$ is the exterior derivative, $\\star $ denotes the Hodge dual operator and $\\wedge$ the wedge\nproduct. The coupling constant $\\rho_{0}=-\\frac{8}{3} \\Lambda$ represents\nthe cosmological constant term. Moreover, since $\\mathcal{L}_{3}$ can be\nwritten completely in terms of $\\mathcal{L}%\n_{1}$, in the following we set $\\rho_{3}=0$ \\cite{Muench:1998ay}. 
Action (\\ref{action2}) can be written in a more convenient form as\n\\begin{equation}\n\\label{actiontele0}\nS=\\frac{1}{2\\kappa} \\int \\left (T -2\\Lambda \\right )\\star 1~,\n\\end{equation}\nwhere $\\star1=e^{0} \\wedge e^{1} \\wedge e^{2}$, and the torsion\nscalar $T$ is given by\n\\begin{equation} \\label{scalartorsion}\nT= \\star \\left[\\rho_{1}(de^{a} \\wedge \\star de_{a})+\\rho_{2}(de_{a} \\wedge\ne^a) \\wedge \\star (de_b \\wedge e^b)+\\rho_{4}(de_{a} \\wedge\ne^b) \\wedge \\star (de_b \\wedge e^a) \\right]~.\n\\end{equation}\nExpanding this expression in terms of its components, the torsion scalar yields\n\\begin{equation}\n\\label{scalartorsionrho}\nT=\\frac{1}{2} (\\rho_{1}+\\rho_{2}+\\rho_{4})T^{abc}T_{abc}+%\n\\rho_{2}T^{abc}T_{bca}-\\rho_{4}T_{a}^{ac}T^{b}_{bc}~,\n\\end{equation}\nnote that for TEGR $\\rho_{1}=0$, $\\rho_{2}=-\\frac{1}{2}$ and $\\rho_{4}=1$. \nA variation of action (\\ref{actiontele0}) with respect to the\nvielbein provides the following field equations:\n\\begin{eqnarray}\\label{fieldequations}\n&&\\delta \\mathcal{L} =\\delta e^{a}\\wedge \\left\\{\\left\\{\\rho\n_{1}\\left[2d\\star de_{a}+i_{a}(de^{b}\\wedge \\star\nde_{b})-2i_{a}(de^{b})\\wedge\n\\star de_{b}\\right]\n\\right.\\right.\\nonumber\n\\\\ && \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n+\\rho _{2}\\left\\{-2e_{a}\\wedge d\\star (de^{b}\\wedge\ne_{b})\n+2de_{a}\\wedge \\star (de^{b}\\wedge e_{b})+i_{a}\\left[de^{c}\\wedge\ne_{c}\\wedge\n\\star (de^{b}\\wedge e_{b})\\right]\\right.\\nonumber\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\, \\ \\ \\ \\ \\ \\left.\n-2i_{a}(de^{b})\\wedge e_{b}\\wedge \\star\n(de^{c}\\wedge e_{c})\\right\\}\n \\nonumber\\\\\n&& \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ +\\rho_{4}\\left\\{-2e_{b}\\wedge\nd\\star (e_{a}\\wedge de^{b})+2de_{b}\\wedge \\star\n(e_{a}\\wedge de^{b})+i_{a}\\left[e_{c}\\wedge de^{b}\\wedge \\star\n(de^{c}\\wedge\ne_{b})\\right]\\right.\\nonumber \\\\\n&&\\ \\ \\ 
\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\, \\ \\ \\ \\ \\ \\left.\\left.\n -2i_{a}(de^{b})\\wedge e_{c}\\wedge \\star (de^{c}\\wedge\ne_{b})\\right\\}\\right\\}\\nonumber\\\\\n&&\\ \\left. \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n-2\\Lambda \\star e_a \\right\\}\n=0~,\n\\end{eqnarray}\nwhere $i_a$ is the interior product and for generality's sake we have kept the\ngeneral\ncoefficients $\\rho_i$, and we have used $\\epsilon^{012}=+1$. With the choice of coefficients $\\rho_{1}=0$, $\\rho_{2}=-\\frac{1}{2}$ and $\\rho_{4}=1$, Teleparallel Gravity coincides with the\nusual curvature formulation of General Relativity, and therefore the following BTZ metric is a solution of TEGR \n \\begin{equation}\n \\label{metric}\nds^2=N^2dt^2-N^{-2}dr^2-r^2(d\\varphi+N_{\\varphi}dt)^2~,\n\\end{equation}\nwhere the lapse $N$ and shift $N_{\\varphi}$ functions are given by\n\\begin{equation}\nN^2= -8GM+\\frac{r^2}{l^2}+\\frac{16G^2J^2}{r^2}~,\\quad\nN_{\\varphi}=-\\frac{4GJ}{r^2},\n\\label{BTZ0}\n\\end{equation}\nand the two constants of integration $M$ and $J$ are the usual conserved\ncharges associated with the asymptotic invariance under time displacements\n(mass) and rotational invariance (angular momentum),\nrespectively, given by flux integrals\nthrough a large circle at spacelike infinity, and $\\Lambda=-1\/l^2$ is the\ncosmological constant \\cite{BTZ}.\nFinally, note\nthat \nthe torsion scalar can be \ncalculated, leading to\nthe constant value\n\\begin{equation}\n\\label{Tteleresult}\nT=-2\\Lambda,\n\\end{equation}\nwhich exhibits the cosmological constant as the sole source of torsion.\\\\\n\n\n\n\\section{3D Teleparallel Hairy Black Holes}\n\\label{Tel3DH}\n\\subsection{The Model}\n\nIn this section we will extend the above discussion by considering a scalar field $\\phi$ non-minimally coupled with the torsion scalar, with a self-interacting potential $V(\\phi)$, and then we will find hairy black hole solutions. 
\n So, the action can be written as \n\\begin{equation}\n \\label{accionHT}\nS=\\int \\left( \\frac{1}{2 \\kappa} T \\star 1 - \\xi \\phi^2 T \\star 1 + \\frac{1}{2} d\\phi \\wedge \\star d\\phi -V(\\phi)\\star 1\\right)~,\n\\end{equation}\nwhere $T$ is given by (\\ref{scalartorsion}) and $\\xi$ is the non-minimal coupling parameter. \nThus, the variation with respect to the vielbein leads to\n the following field equations:\n\\begin{eqnarray} \\label{fieldeq}\n\\delta_{e^{a}} \\mathcal{L} &=&\\delta e^{a}\\wedge\n\\left\\{\\left(\\frac{1}{2\\kappa}-\\xi\\phi^2\\right)\n\\left\\{\\rho\n_{1}\\left[2d\\star de_{a}+i_{a}(de^{b}\\wedge \\star\nde_{b})-2i_{a}(de^{b})\\wedge\n\\star de_{b}\\right]\\right.\\right.\\nonumber\n\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ +\\rho _{2}\\left\\{-2e_{a}\\wedge d\\star (de^{b}\\wedge\ne_{b})+2de_{a}\\wedge \\star\n(de^{b}\\wedge e_{b})+i_{a}\\left[de^{c}\\wedge e_{c}\\wedge\n\\star (de^{b}\\wedge e_{b})\\right] \\right.\n\\nonumber\\\\\n&&\\left. \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\, -2i_{a}(de^{b})\\wedge\ne_{b}\\wedge \\star\n(de^{c}\\wedge e_{c})\\right\\}\\nonumber\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ \n+\\rho_{4}\\left\\{-2e_{b}\\wedge d\\star (e_{a}\\wedge\nde^{b})+2de_{b}\\wedge \\star\n(e_{a}\\wedge de^{b})\\right.\n\\nonumber \\\\\n&&\\left.\\left.\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\,\n+i_{a}\\left[e_{c}\\wedge de^{b}\\wedge \\star (de^{c}\\wedge\ne_{b})\\right] -2i_{a}(de^{b})\\wedge e_{c}\\wedge \\star (de^{c}\\wedge\ne_{b})\\right\\}\n\\right\\}\\nonumber\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ -4\\xi\\left[\\rho_1\\phi d\\phi \\wedge \\star de_a+\\rho_2 \\phi d\\phi\\wedge\ne_a\\wedge\n\\star(de_b\\wedge e^b)+\\rho_4\\phi d\\phi \\wedge e_b\\wedge \\star(de^b \\wedge\ne_a)\\right]\\nonumber\\\\\n&&\\left. 
\\ \\ \\ \\ \\ \\ \\ \\ - V(\\phi) i_a(\\star 1)-\\frac{1}{2}d\\phi\\wedge i_a(\\star d\\phi)-\\frac{1}{2} i_a(d\\phi)\\wedge\\star d\\phi \\right\\}=0~,\n\\label{fieldeq000}\n\\end{eqnarray}\nand the variation with respect to the scalar field leads to the Klein-Gordon equation \n\\begin{equation}\n\\label{fieldeq001}\n\\delta_{\\phi} \\mathcal{L}=\\delta \\phi \\left(-2\\xi\\phi T \\star 1-d\\star d\\phi - \\frac{dV}{d\\phi}\\star 1 \\right)=0~.\n\\end{equation}\n\n\n\\subsection{Circularly Symmetric Hairy Solutions}\n\\label{circsymmsol}\nLet us now investigate hairy black hole solutions of the theory. In order to\nanalyze static solutions we consider\nthe metric form as\n\\begin{equation}\\label{metric}\nds^{2}=A\\left( r\\right) ^{2}dt^{2}-\\frac{1}{B\\left( r\\right) ^{2}}\ndr^{2}-r^{2}d\\varphi^{2}~,\n\\end{equation}\nwhich arises from the triad \ndiagonal ansatz \n\\begin{equation}\\label{diagonal}\ne^{0}=A\\left( r\\right) dt~,\\text{ \\ }e^{1}=\\frac{1}{B\\left( r\\right) }dr~,\n\\text{ \\ }e^{2}=rd\\varphi~.\n\\end{equation}\nThen, inserting this vielbein in the field equations (\\ref{fieldeq000}), (\\ref{fieldeq001}) yields\n\\begin{equation}\n-\\frac{1}{r}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dB^2}{dr}}+\\frac{4}{r}\\xi B(r)^2\\phi(r)\\frac{d\\phi}{dr}-\\frac{1}{2}B(r)^2(\\frac{d\\phi}{dr})^2-V(\\phi)=0~,\n\\label{q1}\n\\end{equation}\n\\begin{equation}\n\\frac{B(r)^2}{rA(r)^2}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dA^2}{dr}}-\\frac{1}{2}B(r)^2(\\frac{d\\phi}{dr})^2+V(\\phi)=0~, \n\\label{q2}\n\\end{equation}\n\\begin{eqnarray}\n\\notag&& 2 \\xi\\phi(r) \\frac{d\\phi}{dr}\\frac{B(r)^2}{A(r)^2}\\frac{dA^2}{dr}-\\frac{1}{2}B(r)^2(\\frac{d\\phi}{dr})^2\\\\\n\\notag&&+\\frac{1}{2A(r)^4}(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\left(-A(r)^2\\frac{dA^2}{dr}\\frac{dB^2}{dr}+B(r)^2(\\frac{dA^2}{dr})^2-2A(r)^2 
B(r)^2\\frac{d^2A^2}{dr^2}\\right)\\\\\n&&-V(\\phi)=0~,\n\\label{q3}\n\\end{eqnarray}\n\\begin{equation}\n-\\frac{2B(r)^2}{rA(r)^2}\\xi\\phi(r)\\frac{dA^2}{dr}+\\frac{1}{r}B(r)^2\\frac{d\\phi}{dr}+\\frac{1}{2}\\frac{dB(r)^2}{dr}\\frac{d\\phi}{dr}+\\frac{B(r)^2}{2A(r)^2}\\frac{dA(r)^2}{dr}\\frac{d\\phi}{dr}+B(r)^2\\frac{d^2\\phi}{dr^2}-\\frac{dV}{d\\phi}=0~.\n\\label{q4}\n\\end{equation}\n\nIt is worth mentioning that, in the case of a minimally coupled scalar field, \n the above simple, diagonal relation \n between the metric and the vielbeins (\\ref{diagonal}) is\nalways allowed, since in this case the theory is invariant under local Lorentz transformations of the vielbein. In contrast, in the extension with a scalar field non-minimally coupled to the torsion scalar, the theory is not locally Lorentz invariant; therefore, one could have in general a more complicated relation connecting the vielbein with the metric, with the vielbeins being non-diagonal even for a diagonal\nmetric \\cite{fTLorinv0}. 
However, for the three-dimensional solutions considered here, using a preferred diagonal frame is allowed, in the sense that this frame defines a global set of basis vector fields covering the whole tangent bundle, i.e., they parallelize the spacetime \\cite{Fiorini:2013hva}, \\cite{Ferraro:2011us}.\n\nIn the following, and in order to solve the above system of equations, we will consider two cases: first, we analyze the case $A(r)=B(r)$, and then we analyze the more general case $A(r) \\neq B(r)$.\n\\subsubsection{$A(r)=B(r)$}\nIn this case the field equations (\\ref{q1})-(\\ref{q4}) simplify to\n\\begin{equation}\n-\\frac{1}{r}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dA^2}{dr}}+\\frac{4}{r}\\xi A(r)^2\\phi(r)\\frac{d\\phi}{dr}-\\frac{1}{2}A(r)^2(\\frac{d\\phi}{dr})^2-V(\\phi)=0~, \n\\label{fieldequation1}\n\\end{equation}\n\\begin{equation}\n\\frac{1}{r}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dA^2}{dr}}-\\frac{1}{2}A(r)^2(\\frac{d\\phi}{dr})^2+V(\\phi)=0~, \n\\label{fieldequation2}\n\\end{equation}\n\\begin{equation}\n2\\xi\\phi(r) \\frac{d\\phi}{dr}\\frac{dA^2}{dr}-\\frac{1}{2}A(r)^2(\\frac{d\\phi}{dr})^2-(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{d^2A^2}{dr^2}-V(\\phi)=0~,\n\\label{fieldequation3}\n\\end{equation}\n\\begin{equation}\n-\\frac{2}{r}\\xi\\phi(r)\\frac{dA^2}{dr}+\\frac{1}{r}A(r)^2\\frac{d\\phi}{dr}+\\frac{dA(r)^2}{dr}\\frac{d\\phi}{dr}+A(r)^2\\frac{d^2\\phi}{dr^2}-\\frac{dV}{d\\phi}=0~.\n\\label{fieldequation4}\n\\end{equation}\nNow, by adding equations (\\ref{fieldequation1}) and (\\ref{fieldequation2}) we obtain\n\\begin{equation}\nA(r)^2\\frac{d \\phi}{dr}\\left( \\frac{4 \\xi}{r} \\phi-\\frac{d \\phi}{dr}\\right)=0~.\n\\end{equation}\nTherefore, the nontrivial solution for the scalar field is given by\n\\begin{equation}\n\\phi(r)=Br^{4\\xi} ~,\n\\end{equation}\nand by using this profile for the scalar field in the remaining equations, we obtain the solution\n\\begin{equation}\n\\label{A}\nA(r)^2=Gr^{2}+H {}_{2}F_1(1,-\\frac{1}{4\\xi}, 
1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})~,\n\\end{equation}\n\\begin{eqnarray}\nV(\\phi) & = & \\frac{H}{\\kappa}\\left(\\frac{\\phi}{B}\\right)^{-\\frac{1}{2\\xi}}+2G\\left(-\\frac{1}{2\\kappa}+B^2\\xi(1+4\\xi)\\left(\\frac{\\phi}{B}\\right)^2\\right) \\\\ \\notag \n&& -2H\\left(\\frac{\\phi}{B}\\right)^{-\\frac{1}{2\\xi}}\\left(\\frac{1}{2\\kappa}-B^2\\xi(1+4\\xi)\\left(\\frac{\\phi}{B}\\right)^2\\right){}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa \\xi \\phi^2)~,\n\\end{eqnarray}\nwhere $B$, $G$, and $H$ are integration constants and ${}_{2}F_{1}$ is the Gauss hypergeometric function. In the limits $\\xi \\rightarrow 0$ or $B \\rightarrow 0$ the theory reduces to TEGR; therefore, our solution should reduce to the BTZ black hole. This is indeed the case, as we show below. For those limits we obtain:\n\\begin{eqnarray}\n\\lim_{\\xi \\rightarrow 0} \\,\\,{}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})=1~,\\\\\n\\lim_{B \\rightarrow 0} \\,\\,{}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})=1~,\n\\end{eqnarray}\ntherefore,\n\\begin{equation}\n\\lim_{\\xi \\rightarrow 0 \\,\\, or \\,\\, B \\rightarrow 0} \\,\\, A(r)^2=Gr^{2}+H~,\n\\end{equation}\nwhich is the non-rotating BTZ metric.\nIn order to see the asymptotic behavior of $A(r)^2$, we expand the hypergeometric function for large $r$ and $\\xi<0$:\n\\begin{equation}\n{}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})\\approx 1-\\frac{\\kappa B^2 r^{8 \\xi}}{2 \\left( 1-\\frac{1}{4 \\xi}\\right)}+...~.\n\\end{equation}\nThis expansion shows that the hairy black hole is asymptotically AdS.\nOn the other hand, in the limit $\\phi\\rightarrow 0$ the potential goes to a constant (the effective cosmological constant), $V(\\phi)\\rightarrow -\\frac{G}{\\kappa}=\\Lambda$. In Fig.~(\\ref{function}) we plot the behavior of the metric function\n$A\\left( r\\right)^2 $ given by Eq. 
(\\ref{A}) \nfor a choice of parameters $H=-1$, $G=1$, $B=1$, $\\kappa=1$, \nand $\\xi=-0.25,-0.5,-1$. The metric function $A(r)^2$ changes\nsign for low values of $r$, signalling the presence of a horizon,\nwhile the scalar field is\nregular everywhere outside the event horizon (for $\\xi < 0$) and vanishes at large\ndistances. In Fig.~(\\ref{Pot1}) we show the behavior of the potential, and we observe that it tends asymptotically ($\\phi \\rightarrow 0$) to a negative constant (the effective cosmological constant). We also plot the\nbehavior of the Ricci scalar $R(r)$, the principal quadratic invariant of the Ricci tensor $R^{\\mu\\nu}R_{\\mu\\nu}(r)$, and the Kretschmann scalar\n$R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ in Fig.~(\\ref{figuraRR}) by using the Levi-Civita connection, and we observe that there is no\nRiemann curvature singularity outside the horizon for $\\xi=-0.25,-0.5,-1$. Also, we observe a Riemann curvature singularity at $r=0$ for $\\xi=-0.25$ and the torsion scalar is singular at $r=0$ for $\\xi=-0.25$, see Fig.~(\\ref{figuraR}).\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{funcion.eps}\n\\end{center}\n\\caption{The behavior of $A(r)^2$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, \nand $\\xi=-0.25,-0.5,-1$.} \\label{function}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{Pot1.eps}\n\\end{center}\n\\caption{The Potential $V(\\phi)$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, \nand $\\xi=-0.25,-0.5,-1$.} \\label{Pot1}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{epsilon1.eps}\n\\includegraphics[width=0.4\\textwidth]{epsilon2.eps}\n\\includegraphics[width=0.55\\textwidth]{epsilon3.eps}\n\\end{center}\n\\caption{The behavior of $R(r)$, $R^{\\mu\\nu}R_{\\mu\\nu}(r)$ and $R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, and $\\xi=-0.25$ (left figure), $\\xi=-0.5$ (right 
figure), and $\\xi=-1$ (bottom figure).} \\label{figuraRR}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{epsilon.eps}\n\\end{center}\n\\caption{The behavior of the torsion scalar $T$ as a function of $r$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, and $\\xi=-0.25, -0.5, -1$.} \\label{figuraR}\n\\end{figure}\n\n\n\\subsubsection{$A(r) \\neq B(r)$}\n\nNow, by considering the following ansatz for the scalar field\n\\begin{equation}\n\\phi(r)=Br^{\\gamma} ~,\n\\end{equation}\nwe find the following solution to the field equations\n\\begin{equation}\nA(r)^2=Gr^{2}+H {}_{2}F_1(-\\frac{1}{\\gamma},\\frac{\\gamma}{4\\xi}, 1-\\frac{1}{\\gamma}, 2\\kappa B^2\\xi r^{2\\gamma}) ~,\n\\end{equation}\n\\begin{equation}\nB(r)^2= \\left( \\frac{1}{2\\kappa} - r^{2 \\gamma} \\xi B^2\\right)^{-2+\\frac{\\gamma}{2 \\xi}} A(r)^2 ~,\n\\label{horizon}\n\\end{equation}\n\\begin{eqnarray}\n\\notag V(\\phi) & = & 2H \\left( \\frac{ \\phi}{B} \\right)^{-\\frac{2}{ \\gamma}} \\left( \\frac{1}{2\\kappa}-\\xi \\phi ^2 \\right)^{-1+\\frac{\\gamma}{2 \\xi}} \\left( 1-2\\kappa \\xi \\phi ^2\\right)^{-\\frac{\\gamma}{4 \\xi}}\\\\\n&& -\\frac{G}{2}\\left( \\frac{1}{2\\kappa}-\\xi \\phi ^2 \\right)^{-2+\\frac{\\gamma}{2 \\xi}}\n\\left( \\frac{2}{\\kappa}-(\\gamma^2+4\\xi) \\phi^2\\right)\\\\ \\notag \n&& -\\frac{H}{2} \\left( \\frac{ \\phi}{B} \\right)^{-\\frac{2}{ \\gamma}}\\left( \\frac{1}{2\\kappa}-\\xi \\phi ^2 \\right)^{-2+\\frac{\\gamma}{2 \\xi}}\n\\left( \\frac{2}{\\kappa}-(\\gamma^2+4\\xi) \\phi^2\\right) {}_{2}F_1(-\\frac{1}{\\gamma},\\frac{\\gamma}{4\\xi}, 1-\\frac{1}{\\gamma}, 2\\kappa\\xi \\phi ^2)~,\n\\end{eqnarray}\nwhere $B$, $G$ and $H$ are integration constants. \nThis solution is asymptotically AdS and generalizes the previous one, because if we take $\\gamma=4\\xi$ it reduces to the solution of the case $A(r)=B(r)$. Furthermore, for $\\gamma=0$ we recover the static BTZ black hole. 
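The reduction at $\\gamma=4\\xi$ can be checked explicitly; the following short verification is added here as a consistency check and uses only the formulas displayed above.

```latex
% Consistency check for the reduction at \gamma = 4\xi.
% The exponent relating B(r)^2 to A(r)^2 in (\ref{horizon}) vanishes:
%   -2 + \gamma/(2\xi) |_{\gamma = 4\xi} = -2 + 2 = 0
%   \Longrightarrow  B(r)^2 = A(r)^2 ,
% and, since {}_2F_1 is symmetric in its first two parameters, the
% hypergeometric function becomes that of the A(r)=B(r) solution:
{}_{2}F_1\\left(-\\frac{1}{4\\xi},\\,1,\\,1-\\frac{1}{4\\xi},\\,2\\kappa B^2\\xi r^{8\\xi}\\right)
  = {}_{2}F_1\\left(1,\\,-\\frac{1}{4\\xi},\\,1-\\frac{1}{4\\xi},\\,2\\kappa B^2\\xi r^{8\\xi}\\right) .
```

Likewise, the ansatz $\\phi(r)=Br^{\\gamma}$ becomes $\\phi(r)=Br^{4\\xi}$ at $\\gamma=4\\xi$, consistent with the hypergeometric argument $2\\kappa B^2\\xi r^{8\\xi}=2\\kappa\\xi\\phi^2$ of the first solution.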
On the other hand, in the limit $\\phi \\rightarrow 0$ the potential tends to a constant $V(\\phi) \\rightarrow -2G (2 \\kappa)^{1-\\frac{\\gamma}{2 \\xi}}=\\Lambda$. \n\nAs in the previous case, we plot the behavior of the metric function\n$B\\left( r\\right)^2 $ given by (\\ref{horizon}), in Fig.~(\\ref{functionn}) \nfor a choice of parameters $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$\nand $\\gamma=-0.25,-1,-2$. The metric function $B(r)^2$ changes\nsign for low values of $r$, signalling the presence of a horizon,\nwhile for $\\gamma < 0$ the scalar field is\nregular everywhere outside the event horizon and null at large\ndistances. In Fig.~(\\ref{Pot2}) we show the behavior of the potential; asymptotically ($\\phi \\rightarrow 0$) it tends to a negative constant (the effective cosmological constant) as in the previous case. Also, we plot the\nbehavior of $R(r)$, $R^{\\mu\\nu}R_{\\mu\\nu}(r)$, and \n$R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ in Fig.~(\\ref{figureinvariant}) by using the Levi-Civita connection, and we observe that there is no\nRiemann curvature singularity outside the horizon for $\\gamma=-0.25,-1,-2$. However, there is a Riemann curvature singularity at $r=0$, and the torsion scalar is singular at $r=0$, for all the cases considered. Asymptotically, the torsion scalar goes to $-2\\Lambda$ since this spacetime is asymptotically AdS; see Fig.~(\\ref{torsionRR}). 
Therefore, we have shown that there are three-dimensional black hole solutions with scalar hair in Teleparallel Gravity.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{funcionn.eps}\n\\end{center}\n\\caption{The behavior of $B(r)^2$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ \nand $\\gamma=-0.25,-1,-2$.} \\label{functionn}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{Pot2.eps}\n\\end{center}\n\\caption{The potential $V(\\phi)$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ \nand $\\gamma=-0.25,-1,-2$.} \\label{Pot2}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{epsilonn1.eps}\n\\includegraphics[width=0.4\\textwidth]{epsilonn2.eps}\n\\includegraphics[width=0.55\\textwidth]{epsilonn3.eps}\n\\end{center}\n\\caption{The behavior of $R(r)$, $R^{\\mu\\nu}R_{\\mu\\nu}(r)$ and $R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ and $\\gamma=-0.25$ (left figure), $\\gamma=-1$ (right figure), and $\\gamma=-2$ (bottom figure).} \\label{figureinvariant}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{epsilonn.eps}\n\\end{center}\n\\caption{The behavior of the torsion scalar $T$ as a function of $r$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ \nand $\\gamma=-0.25,-1,-2$.} \\label{torsionRR}\n\\end{figure}\n\n\\section{Final Remarks}\n\\label{conclusions}\n\n\nMotivated by the search for hairy black hole solutions in theories based on torsion, we have considered an extension of three-dimensional TEGR with a scalar field non-minimally coupled to the torsion scalar along with a self-interacting potential, and we have found three-dimensional asymptotically AdS black holes with scalar hair. 
These hairy black holes are characterized by a scalar field with a power-law behavior and by a self-interacting potential, which tends to an effective cosmological constant at spatial infinity. We have considered two cases: $A(r)=B(r)$ and $A(r)\\neq B(r)$. In the first case the scalar field depends on the non-minimal coupling parameter $\\xi$, and it is regular everywhere outside the event horizon and null at spatial infinity for $\\xi < 0$, while for $\\xi = 0$ we recover the non-rotating BTZ black hole. In the second case the scalar field depends on a parameter $\\gamma$, and it is regular everywhere outside the event horizon and null at spatial infinity for $\\gamma < 0$. This solution generalizes the solution of the first case, which is recovered for $\\gamma=4\\xi$. Furthermore, for $\\gamma = 0$ we recover the non-rotating BTZ black hole. Moreover, the analysis of the Riemann curvature invariants and the torsion scalar shows that they are all regular outside the event horizon. To further our understanding, it would be interesting to study the thermodynamics of these hairy black hole solutions and their phase transitions. Work in this direction is in progress.\n\n\n\n\\acknowledgments \nThis work was funded by Comisi\\'{o}n\nNacional de Ciencias y Tecnolog\\'{i}a through FONDECYT Grants 11140674 (PAG),\n1110076 (JS) and 11121148 (YV) and by DI-PUCV Grant 123713\n(JS). P.A.G. acknowledges the hospitality of the\nUniversidad de La Serena.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\n\nIt is well known that $7$-dimensional area minimizing hypersurfaces can have isolated singularities. Using work of Hardt--Simon \\cite{HS}, Smale proved in \\cite{Smale} that in an $8$-dimensional manifold $M$ with $H_7(M; \\mathbb{Z}) \\neq 0$, there exists a smooth embedded area minimizing hypersurface for a generic choice of metric. 
In other words, he showed that isolated singularities of an area-minimizing $7$-dimensional hypersurface can generically be perturbed away. \n\nOne may thus seek to find a smooth embedded minimal hypersurface in all $8$-manifolds $M$ equipped with a generic metric $g$, even when $H_7(M;\\mathbb{Z})=0$. Here, we find such a hypersurface in the case of positive Ricci curvature, and give a partial answer in general. We let $\\Met^{2,\\alpha}(M)$ denote the space of Riemannian metrics of regularity $C^{2,\\alpha}$ on $M$ and $\\Met^{2,\\alpha}_{\\Ric>0}(M)\\subset\\Met^{2,\\alpha}(M)$ denote the open subset of Ricci positive metrics. \n\n\\begin{theorem}[Generic regularity with positive Ricci in dimension $8$]\\label{thm:generic_bound_ricci}\nLet $M^8$ be a compact smooth $8$-manifold. There is an open and dense set $\\mathcal G \\subset \\Met_{\\Ric>0}^{2,\\alpha}(M)$ so that for $g\\in \\mathcal G$, there exists a smooth embedded minimal hypersurface $\\Sigma \\subset M$. \n\\end{theorem}\n\nWithout the curvature condition, we have the following partial result. \n\n \\begin{theorem}[Generic almost regularity in dimension $8$]\\label{thm:generic_bound}\n \tLet $M^8$ be a compact smooth $8$-manifold. There exists a dense set $\\mathcal G \\subset \\Met^{2,\\alpha}(M)$ so that for $g \\in \\mathcal G$, there exists a smooth embedded minimal hypersurface $\\Sigma \\subset M$ with at most one singular point. \n \\end{theorem}\n \n We actually prove more general results valid in all dimensions, see Theorem \\ref{thm:generic_stratum} below. \n \n As mentioned above, the principal motivation for such results is to study generic regularity of non-minimizing, high-dimensional minimal submanifolds. 
This contrasts with previous works on generic regularity:\n\\begin{itemize}\n \\item Hardt--Simon \\cite{HS} (resp.\\ Smale \\cite{Smale}), cf.\\ \\cite{Liu}, show that regular singularities of (one-sided) minimizing hypersurfaces can be perturbed away by perturbing the boundary (resp.\\ metric). \n \\item White \\cite{White:85,white2019generic} shows that minimizing integral 2-cycles are smoothly embedded surfaces for a generic metric. \n \\item Moore \\cite{Moore,Moore:book} shows that parametrized minimal (2-dimensional) surfaces are free of branch points for a generic ambient metric. \n\\end{itemize}\n\n \n \\medskip\n\nIn fact, our work proves that generically there exists a minimal hypersurface of optimal regularity avoiding \\emph{certain} singularities in ambient dimensions beyond the singular dimension. Indeed, Theorem \\ref{thm:generic_bound} is a consequence of a more general result stated below.\n\n\\begin{theorem}[Generic removability of isolated singularities]\\label{thm:generic_stratum}\nConsider a compact smooth $(n+1)$-manifold $M$, for $n\\geq 7$. There is a dense set $\\mathcal G\\subset \\Met^{2,\\alpha}(M)$ with the following properties:\n\\begin{itemize}\n\\item If $g\\in\\mathcal G$ then there exists a minimal hypersurface $\\Sigma$, smooth away from a closed singular set of Hausdorff dimension at most $n-7$, so that, denoting by $\\mathcal S_0 \\subset \\sing(\\Sigma)$ the set of singular points with regular tangent cones, we have $\\cH^0(\\mathcal S_0) \\leq 1$. \n\\item If $g \\in \\mathcal G \\cap \\Met^{2,\\alpha}_{\\Ric>0}(M)$ then the same statement holds, except we can conclude that $\\cH^0(\\mathcal S_0) = 0$. \n\\end{itemize}\n\\end{theorem}\n\n\n \\medskip\n\nIn order to remove the topological condition $H_7(M; \\mathbb{Z}) \\neq 0$ of Smale, we will use the Almgren--Pitts min-max construction \\cite{Pitts}, which guarantees the existence of a minimal hypersurface $\\Sigma^n$ in a closed Riemannian manifold $(M^{n+1},g)$. 
As in the area-minimizing case, when the dimension $n$ satisfies $2\\leq n \\leq 6$, the Almgren--Pitts minimal hypersurface is smooth, but for larger values of $n$ there may be an at most $(n-7)$-dimensional singular set (this follows from work of Schoen--Simon \\cite{SS}). However tangent cones to min-max hypersurfaces are \\emph{a priori} only stable, while only area-minimizing cones have complements that are foliated by smooth minimal hypersurfaces (cf.\\ \\cite{BDG, Lawlor}) and it seems that such a foliation is needed (at least on one side) to perturb the singularity away by adjusting the metric \\cite{HS}.\n\nThe key technical result of this paper is that (for one-parameter min-max) at all points---except possibly one---of the singular set with a regular tangent cone, the tangent cone is area minimizing on at least one side. Put another way, we show that tangent cones that are not area minimizing on either side ``contribute to the Morse index'' from the point of view of min-max (and these are precisely the cones that we are unable to perturb away using Hardt--Simon \\cite{HS}). \n\n\\subsection{Detailed description of results} \nLet $(M^{n+1},g)$ be a closed Riemannian manifold. 
By a \\emph{sweepout} of $M$ we will mean a family of (possibly singular) hypersurfaces $\\{ \\Phi(x)=\\partial \\Omega(x)\\}_{x \\in [0,1]}$, where each hypersurface $\\Phi(x)$ is the boundary of an open set\n$\\Omega(x)$ with $\\Omega(0) = \\emptyset$ and $\\Omega(1) = M$, and we denote the family of such sweepouts by $\\mathcal S$ (see Section \\ref{sec:min-max} for the precise definition).\nThe width, $W(M)$, is then defined by \n\\[\nW(M) = \\inf_{\\Phi\\in \\mathcal S} \\left\\{ \\sup_x \\mathbf{M}(\\Phi(x))\\right\\}\\,.\n\\]\n\n\n\n Given a stationary integral varifold $V$, with $\\supp V$ regular outside of a set of Hausdorff dimension at most $n-7$, we define \n \\[\n\\mathfrak{h}_\\textnormal{nm}(V):= \\left\\{ p\\in \\supp(V)\\,:\\, \\begin{gathered}\n\\textrm{for all $r>0$ small, $\\supp V\\cap B_r(p)$ is not one-sided} \\\\\n\\text{homotopy area minimizing on either side (in $B_r(p)$).}\n\\end{gathered}\n\\right\\}\\,\n \\]\n \nIn other words, $p\\in\\mathfrak{h}_\\textrm{nm}(V)$ implies that in any small ball there are one-sided homotopies on both sides of $\\supp V$ that strictly decrease area without ever increasing area. \nLet $\\cR$ denote the set of integral varifolds \nwhose support is a complete embedded minimal hypersurface regular away from a closed singular set of Hausdorff dimension at most $n-7$.\nFinally, we let $\\Index(V)$ denote the Morse index of the regular part of the support of $V$, that is\n$$\n\\Index (V)=\\Index (\\supp(\\reg(V)))\\,.\n$$\nThen the main technical estimate of this paper is the following result.\n\n\\begin{theorem}[Index plus non-area minimizing singularities bound]\\label{thm:nm+index_bound}\nFor $n\\geq 7$, let $(M^{n+1}, g)$ be a closed Riemannian manifold of class $C^2$. 
There exists a stationary integral varifold\n\t$V\\in \\cR$ such that $|V|(M)=W$, which satisfies \n\\begin{equation}\\label{e:bound1}\n \\cH^{0} (\\mathfrak{h}_\\textnormal{nm}(V)) +\\Index(V) \\leq 1\\,.\n\\end{equation}\nIf equality holds in \\eqref{e:bound1}, then for any point $p\\in \\supp V\\setminus \\mathfrak{h}_\\textnormal{nm}(V)$ there is $\\varepsilon>0$ so that $\\supp V$ is area-minimizing to one side in $B_\\varepsilon(p)$. Finally, we can write $V=\\sum_{i}\\kappa_i\\,|\\Sigma_i|$, where $\\Sigma_i$ are finitely many disjoint embedded minimal hypersurfaces smooth away from finitely many points with $\\kappa_i\\leq 2$ for every $i$; if $\\Sigma_i$ is one-sided then $\\kappa_i =2$ and if $\\kappa_j=2$ for some $j$ then each $\\Sigma_i$ is stable. \n\\end{theorem}\n\nThe above bound is valid in all dimensions and can be seen as a generalization of the work of Calabi--Cao concerning min-max on surfaces \\cite{CaCa}. Indeed, if we define $\\mathcal{S}_{\\textnormal{nm}}(V)$ by\\footnote{Here $\\omega$ is a modulus of continuity, and we could take it to be logarithmic, as suggested by the work of \\cite{Simon}. Notice in fact that at all isolated singularities $\\mathcal S_0$, minimal surfaces have a unique tangent cone and are locally a $C^{1,\\log}$ deformation of the cone itself.}\n\\[\n\\mathcal{S}_{\\textnormal{nm}}(V):= \\left\\{ p\\in \\supp(V)\\,:\\, \\begin{gathered}\nV \\text{ is locally a $C^{1,\\omega}$ graph over its \\emph{unique} tangent cone $\\mathcal C$}\\\\\n\\text{at $p$ and both sides of $\\mathcal C$ are not one-sided minimizing} \n\\end{gathered} \n\\right\\}\\,\n\\]\nthen we will see that $\\mathcal{S}_{\\textnormal{nm}}(V) \\subset \\mathfrak{h}_\\textrm{nm}(V)$ in Lemma \\ref{lem:snm-hnm}. 
In particular, \\eqref{e:bound1} implies that\n\\[\n\\cH^0(\\mathcal{S}_{\\textnormal{nm}}(V)) + \\Index(V) \\leq 1.\n\\]\nThus, if we are guaranteed to have $\\Index(V) =1$ (e.g., in positive curvature) we see that $\\mathcal{S}_{\\textnormal{nm}}(V) = \\emptyset$. This is precisely the higher-dimensional analogue of the result of Calabi--Cao (cf. Figure \\ref{fig:starfish} and the remark below). \n\nSee also the more recent work of Mantoulidis \\cite{mantoulidis} which makes a more explicit connection with Morse index, using the Allen--Cahn approach (as developed by Guaraco and Gaspar \\cite{Guaraco,GasparGuaraco}) rather than Almgren--Pitts; it would be interesting to elucidate the relationship between Mantoulidis's Allen--Cahn techniques and our proof of Theorem \\ref{thm:nm+index_bound}. \n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[scale=0.4]{starfish.png}\n\t\\caption{The figure eight geodesic $c$ is an \n\texample of a min-max closed geodesic\n\tthat is stable and has one singularity \n\twith non-area minimizing tangent cone.}\n\t\\label{fig:starfish}\n\\end{figure}\n\n\n\\begin{remark}\nBy the index bound in Theorem \\ref{thm:nm+index_bound}, any tangent cone to $V$ has stable regular part. Moreover, we note that the Simons cones \\cite{Simons} in $\\mathbb{R}^8$ (formed from products of two spheres) are all stable and area minimizing on (at least) one side (cf.\\ \\cite{Lawlor}). We particularly emphasize that the Simons cone \n\\[\n\\mathbf{C}^{1,5}: = \\{(x,y) \\in \\mathbb{R}^2\\times \\mathbb{R}^6 : 5|x|^2 = |y|^2\\}\n\\]\nis one-sided minimizing (and stable), but is not minimizing on the other side. It seems to be an open question whether or not there exists an $n$-dimensional stable cone that does not minimize area on either side, for $n\\geq 7$. \n\nEven assuming the existence of a stable minimal cone which is not area minimizing on either side, it is hard to decide if the above bound is optimal. 
In dimension $n=1$, such an example is provided by the classical starfish example (cf.\\ Figure \\ref{fig:starfish}), whose tangent cone at the singular point (the union of two lines through the origin) is indeed stable and non-area minimizing on either side (and the starfish fails to be one-sided homotopy minimizing on either side).\n\nWe conjecture that if there is a regular stable minimal cone that is not area-minimizing on either side, then it can arise as the tangent cone to a min-max minimal hypersurface (possibly in a manifold geometrically similar to the starfish); note that were this to occur, Theorem \\ref{thm:nm+index_bound} would imply that the resulting hypersurface would necessarily be stable.\n\\end{remark}\n\nTheorem \\ref{thm:nm+index_bound} generalizes the index upper bound of Marques and Neves \\cite{MN16} for Riemannian manifolds $M^{n+1}$, $3\\leq n+1\\leq 7$ (see also \\cite{Zhou-reg-ind}).\nIn recent years there has been tremendous progress in the understanding of the geometry of minimal hypersurfaces constructed using min-max methods in\nthese dimensions\n(see \\cite{DL}, \\cite{MN19}, \\cite{CM}, \\cite{Zh19}, \\cite{So} and references therein).\n\nFor manifolds of dimension $n+1\\geq 8$ much less is known. \nWhen the Ricci curvature is positive,\nZhou obtained index and multiplicity bounds for the one-parameter min-max minimal hypersurface \\cite{Z17} (see also the work of Ram\\'irez-Luna \\cite{RL} and Bellettini \\cite{bellettini}). Upper Morse index bounds are known to hold in arbitrary manifolds of any dimension for hypersurfaces constructed by the Allen--Cahn method, as proven by Hiesmayr and Gaspar \\cite{Hiesmayr,Gaspar} (see also the recent work of Dey showing that the Almgren--Pitts and Allen--Cahn approaches are equivalent \\cite{dey}). 
Li proved \\cite{li2019} existence \nof infinitely many distinct minimal hypersurfaces constructed \nvia min-max methods for a generic set of metrics, using the\nWeyl law of Liokumovich--Marques--Neves \\cite{LMN}.\n\n\n\\subsection{Overview of the proof}\nThe construction of a minimal hypersurface in Almgren-Pitts min-max theory proceeds by considering a sequence of sweepouts $\\{ \\Phi_i(x) \\}$\nwith the supremum of the mass $\\sup_x \\mathbf{M}(\\Phi_i(x)) \\rightarrow W(M)$ as $i \\rightarrow \\infty$.\nIt is then proved that we can find a subsequence $\\{i_k\\}$\nand $\\{\\Phi_{i_k}(x_k) \\}$ with mass tending\nto $W$, so that $|\\Phi_{i_k}|(x_k)$ converges to some $V \\in \\cR$.\n\nWe outline the proof of Theorem \\ref{thm:nm+index_bound}.\nFor the sake of simplicity, let's focus on the non-cancellation case, i.e., when all multiplicities of $V$ are one (in the case of cancellation we must argue slightly differently but the main strategy is the same).\nThe main geometric idea is to show that there cannot be two disjoint open sets $U_1,U_2$ so that $\\Sigma=\\supp V$ fails to be one-sided homotopy minimizing on \nthe same side in both $U_1$ and $U_2$. \nThis property is reminiscent of (but different from) \nalmost minimizing \nproperty introduced by Pitts to prove regularity\nof min-max minimal hypersurfaces.\n\nGranted this fact, it is easy to deduce the bound \\eqref{e:bound1}. For example, if $\\Index(\\Sigma) = 1$ and $\\hnm(\\Sigma) = \\{p\\}$, then we can localize the index in some $U$ disjoint from $p$. Because $\\Sigma$ is unstable in $U$, we can find area decreasing homotopies to both sides there, and we can also find $B_r(p)$ disjoint from $U$ with area decreasing homotopies (by definition). This contradicts the above fact. \n\nAs such, we want to show the one-sided homotopy minimizing property in pairs by using the fact that $V$ is a min-max minimal hypersurface. However, this leads us to a major difficulty. 
Indeed, the approximating currents $\\Phi_{i_k}(x_k)$ might cross $\\Sigma$ many times, making it difficult to glue in one-sided homotopies to push down the mass. \n\nAt a technical level, the main tool used in this paper is the observation that it is possible to \n simplify the one-parameter case of min-max \n theory by constructing a nested optimal sweepout $\\Phi(x)$ with $\\sup \\mathbf{M}(\\Phi(x)) = W$.\nThis allows us to work with one\nsweepout $\\Phi(x)$\ninstead of a sequence of sweepouts. The nested property allows us to directly ``glue in'' the one-sided homotopies to push down the mass. \n\n\nThe existence of a nested optimal\nsweepout follows from a monotonization\ntechnique from \\cite{CL}.\nThere Chambers and Liokumovich proved that\neach sweepout\n$\\Phi_i(x)$ can be replaced by a nested sweepout \n$\\Psi_i(x)$ with $\\sup \\mathbf{M}(\\Psi_i(x)) \\leq \\sup\n\\mathbf{M}(\\Phi_i(x)) + \\frac{1}{i}$. ``Nested'' here means \nthat $\\Psi_i(x) = \\partial \\Omega(x)$ for a family of open sets with $\\Omega(x) \\subset \\Omega(y)$ if $x<y$ (cf.\\ \\cite[Proposition 0.5]{DL}).\n\nWe will switch freely between the equivalent notation $\\mathbf{M}(\\Phi(x))$ and $\\Per(\\Omega(x))$. We now introduce the notion of optimal nested sweepouts and prove their existence.\n\n\n\\begin{definition}[Optimal nested volume parametrized (ONVP) sweepout]\n\tA sweepout $\\{ \\Phi(x) = \\partial \\Omega(x) \\,:\\, x\\in[0,1]\\}$ is called\n\t\\begin{itemize}\n\t\t\\item \\textit{optimal} if $\\sup_{x\\in [0,1]} \\mathbf{M}(\\Phi(x)) = W$;\n\t\t\\item \\textit{nested} if $\\Omega(x_1) \\subset \\Omega(x_2)$, for all $0\\leq x_1 \\leq x_2\\leq 1$;\n\t\t\\item \\textit{volume parametrized} if $\\Vol(\\Omega(x)) = x$, for every $x\\in [0,1]$ (recall that we have assumed $\\Vol(M,g) = 1$). 
\n\t\\end{itemize}\n\\end{definition}\n\nNested volume parametrized sweepouts enjoy nice compactness properties.\n\n\\begin{lemma}[Compactness for nested volume parametrized sweepouts] \\label{nested sequence}\n\tLet $\\left( \\Phi_i\\right)_i$ be a sequence of nested volume-parametrized sweepouts\n\twith mass uniformly bounded, that is\n\t\\begin{equation}\\label{e:mass_bounds}\n \\sup_{i\\in \\mathbb N} \\sup_{x\\in [0, 1]} \\mathbf{M}(\\Phi_i(x))\\leq M<\\infty\\,.\n\t\\end{equation}\n\tThen there exists a subsequence $\\left(\\Phi_{i_k}\\right)_k$ converging uniformly to a nested volume parametrized sweepout $\\Psi$ such that\n\t\\begin{equation}\\label{e:sup_optimal}\n\t\t\\sup_x \\mathbf{M}(\\Psi(x)) \\leq \\liminf_k \\left(\\sup_x \\mathbf{M}( \\Phi_{i_k}(x) )\\right).\n\t\\end{equation}\n\\end{lemma}\n\n\\begin{proof} The sequence of continuous functions $\\Phi_i\\colon [0,1] \\to \\mathcal{Z}_{n-1}(M; \\mathbb{Z}_2)$ is uniformly Lipschitz continuous, since for every $0\\leq x0$ to obtain \\emph{regular} homotopic minimizers in certain situations. 
\n\n\n\\begin{definition}[Homotopic inner and outer minimizers] \\label{homotopy minimizer}\n\tGiven a Caccioppoli set $\\Omega$ we say that a Caccioppoli set $L(\\Omega\\,|\\, U)\\in \\mathcal I(\\Omega\\,|\\, U)$ is a \\emph{homotopic inner minimizer for $\\Omega$ in $U$}, if\n\t\\begin{enumerate}\n\t\t\\item $\\Per(L(\\Omega\\,|\\, U)\\,|\\, U)\\leq \\Per(\\Omega'\\,|\\,U)$, for every $\\Omega'\\in \\mathcal I(\\Omega\\,|\\, U)$ and\n \\item if $E\\in \\mathcal I(\\Omega\\,|\\, U)$ satisfies (1) and $ L(\\Omega\\,|\\, U) \\subset E$ then $E= L(\\Omega\\,|\\, U)$.\n\t\\end{enumerate}\n\tSimilarly, define $R(\\Omega\\,|\\, U)\\in \\mathcal O(\\Omega\\,|\\, U)$ to be a \\emph{homotopic outer minimizer for $\\Omega$ in $U$}, if\n\t\\begin{enumerate}\n\t\t\\item $\\Per( R(\\Omega\\,|\\, U)\\,|\\,U)\\leq \\Per(\\Omega'\\,|\\,U)$, for every $\\Omega'\\in \\mathcal O(\\Omega\\,|\\, U)$;\n\\item if $E\\in \\mathcal O(\\Omega\\,|\\, U)$ satisfies (1) and $E \\subset R(\\Omega\\,|\\, U)$ then $E= R(\\Omega\\,|\\, U)$.\n\t\\end{enumerate}\n\tWe say that a Caccioppoli set $\\Omega$ is an \\emph{inner (resp. outer) homotopic minimizer in $U$} if $\\Omega$ is a homotopic inner (resp.\\ outer) minimizer relative to itself. \n\\end{definition}\n\n\nIt is easy to see that inner and outer homotopic minimizers for a fixed set $\\Omega$ always exist.\n\n\\begin{lemma}[Existence of homotopic minimizers] \\label{l:existence_homotopic}\n\tFor any Caccioppoli set $\\Omega$ and open set $U$\twe can find a homotopic inner (resp.\\ outer) minimizer $L(\\Omega\\,|\\, U)$ (resp.\\ $R(\\Omega\\,|\\, U)$) for $\\Omega$ in $U$. \n\\end{lemma}\n\n\\begin{proof} We consider only the case of inner minimizers as the outer minimizers are handled identically. \n\nThis is once again an application of the Arzel\\`a--Ascoli theorem. 
Indeed, notice that $\\mathcal I(\\Omega\\,|\\, U) \\not = \\emptyset$, since $\\Omega \\in \\mathcal I(\\Omega\\,|\\, U)$, so we can consider a minimizing sequence $(E_j)_j$, that is \n\\[\n\t\\lim_{j}\\Per( E_j\\,|\\,U)=\\inf\\{ \\Per(E\\,|\\,U)\\,:\\, E\\in \\mathcal I(\\Omega\\,|\\, U)\n\t\\}\n\\]\n\tand let $\\{E_j(x): x\\in [0,1]\\} \\in \\mathcal I(\\Omega,E_j \\,|\\, U)$ be the corresponding inner volume non increasing sweepout between $E_j$ and $\\Omega$. We can assume that it is volume parametrized (being nested). Moreover $\\Per(E_j(x)\\,|\\,U)$ is uniformly bounded by $\\Per(\\Omega\\,|\\,U)$, so by Arzel\\`a--Ascoli there is a subsequence converging to $\\{E_\\infty(x)\\} \\in \\mathcal I(\\Omega, E_\\infty\\,|\\, U)$, with $E_\\infty$ satisfying the desired minimality property by lower semi-continuity of the perimeter. \n\t\n\tFinally, again by Arzel\\`a--Ascoli, we can find $L(\\Omega\\,|\\, U)\\subset \\Omega$ in the set of minimizers, which infimizes the flat distance to $\\partial \\Omega$, and so satisfies condition (2) (otherwise there would be a competitor closer to $\\Omega$ in flat norm).\n\\end{proof}\n\nWe recall the definition of one-sided minimizers, which will be useful in the sequel when we perform cut and paste arguments. \n\n\\begin{definition}[One sided minimizers]\n\tLet $E$ be a Caccioppoli set. We say that $E$ is \\emph{locally one-sided inner (resp. outer) area-minimizing} in $U$ if for every $A\\Subset U$ and $V$ with $V\\Delta E \\subset A$, we have\n\\[\n\t\\Per( E\\,|\\, A) \\leq \\Per (V \\,|\\, A)\n\\]\n\twhenever $V\\subset E$ (resp.\\ $E\\subset V$). We say that $E$ is \\emph{strictly locally one-sided inner (resp.\\ outer) area-minimizing} if the inequality holds strictly except when $E=V$ as Caccioppoli sets. \n\\end{definition}\n\nWe show that homotopic minimizers are in fact strict one sided minimizers into the region they sweep out. 
\n\n\\begin{lemma}[Homotopic minimizers are one-sided minimizers in the swept-out region] \\label{strict minimizer}\n\tSuppose $L(\\Omega\\,|\\, U)$ (resp.\\ $R(\\Omega\\,|\\, U)$) is a homotopic inner (resp.\\ outer) minimizer for $\\Omega$ in $U$. Then $L(\\Omega\\,|\\, U)$ (resp.\\ $R(\\Omega\\,|\\, U)$) is strictly locally outer (resp.\\ inner) one-sided minimizing in $U\\cap \\Omega$ (resp.\\ $U\\setminus \\Omega$).\n\\end{lemma}\n\n\\begin{proof}\n\tWe consider homotopic inner minimizers; the case of outer minimizers is similar. \n\t\n\tIf $L(\\Omega\\,|\\, U)$ is not a strict outer minimizer in $U\\cap \\Omega$ then there is $V'$ with $L(\\Omega\\,|\\, U)\\subset V'$ and $L(\\Omega\\,|\\, U) \\Delta V' \\subset A \\Subset U$ and\n\t\\[\n\t\\Per(V'\\,|\\,A) \\leq \\Per(L(\\Omega\\,|\\, U)\\,|\\,A).\n\t\\]\n\tWe can minimize perimeter in $A$ among all such $V'$ to find $V$. Namely,\n\t\\begin{equation}\\label{e:confusing1}\n\t\\Per( V \\,|\\,A )\\leq \\Per(W\\,|\\,A) \n\t\\end{equation} \n\t\tfor all $W$ with $W\\Delta V\\subset A\\setminus L(\\Omega\\,|\\, U)$. Since $L(\\Omega\\,|\\, U) \\in \\mathcal I(\\Omega\\,|\\, U)$, there is $\\{U(x)\\,:\\,x \\in [0,1]\\} \\in \\mathcal I(\\Omega, L(\\Omega\\,|\\,U)\\,|\\,U)$. Set $\\Omega(x) = U(x) \\cup V$. Since $V$ satisfies \\eqref{e:confusing1},\nwe have that \n\t$$\n\t\\Per(\\Omega(x)\\,|\\,A) \\leq \\Per(U(x)\\,|\\,A).\n\t$$\n\tThis implies that $\\Omega(1)=V$ satisfies (1) of Definition \\ref{homotopy minimizer} and $V \\Delta L(\\Omega\\,|\\,U)\\subset A \\setminus L(\\Omega\\,|\\, U)$, therefore by (2) of Definition \\ref{homotopy minimizer}, it follows that $V= L(\\Omega\\,|\\,U)$. This completes the proof. \n\\end{proof}\n\n\nWe have the following lemma that will allow us to find bounded mass homotopies in certain situations. \n\\begin{lemma}[Interpolation lemma] \\label{l:close in flat}\n\tFix $L>0$. For every $\\varepsilon>0$ there exists $\\delta>0$, such that the following holds. 
If $\\Omega_0,\\Omega_1 $ are two sets of finite perimeter, such that $\\Omega_0 \\subset \\Omega_1$, $\\Per(\\Omega_i) \\leq L$, $i=0,1$, and $\\Vol(\\Omega_1 \\setminus \\Omega_0)\\leq \\delta$, then there exists a nested $\\mathcal F$-continuous family $\\{\\partial\\Omega_t \\}_{t \\in [0,1]}$ with \n\\[\n\\Per(\\Omega_t)\\leq \\max\\{\\Per(\\Omega_0),\\Per(\\Omega_1) \\}+\\varepsilon\n\\]\nfor all $t\\in[0,1]$\n\\end{lemma}\n\\begin{proof}\nLet $\\Omega$ be a Caccioppoli set that minimizes\nperimeter among sets $\\Omega'$\nwith $\\Omega_0 \\subset \\Omega' \\subset \\Omega_1$.\n\nFix $r>0$ such that for every $x \\in M$ the ball $B(x,2r)$\nis $2$-bi-Lipschitz diffeomorphic to\nthe Euclidean ball of radius $2r$.\nLet $\\{B(x_i,r) \\}_{i=1}^N$ be a collection of balls covering $M$. By coarea inequality\nwe can find a radius $r_i \\in [r,2r]$,\nso that $\\mathbf{M}(\\partial B(x_i,r_i) \\cap \\Omega \\setminus \\Omega_0) \\leq \\frac{\\delta}{r}$.\n\nLet $U_1 = B(x_1,r_1) \\cap\n\\Omega \\setminus \\Omega_0$.\nBy a result of Falconer (see \\cite{Falconer},\n\\cite[Appendix 6]{Guth}) there exists a family of \nhypersurfaces sweeping out $U_1$ of area bounded by\n$c(n) \\delta^{\\frac{n}{n+1}}$. It follows (see\n\\cite[Lemma 5.3]{CL}) that\nthere exists a nested family \n$\\{ \\Xi^1(t)\\} $ of Caccioppoli sets with\n$\\Xi^1(0) = \\Omega_0$ and $\\Xi^1(1) = \\Omega_0 \\cup U_1$\nand satisfying\n$$\\Per(\\Xi^1(t)) \\leq \\Per(\\Omega_0)+2c(n)\\delta^{\\frac{n}{n+1}} \\, . $$\nLet $\\Omega^1 = \\Omega_0 \\cup U_1$. Observe, that\nthe minimality of $\\Omega$ implies that\n$$\\Per(\\Omega^1) \\leq \\Per(\\Omega_0) + \\frac{2\\delta}{r} $$\n\nInductively, we define $\\Omega^k= \\Omega^{k-1} \\cup U_k$ and\n$U_k = B(x_k,r_k) \\cap\n\\Omega \\setminus \\Omega^{k-1}$. 
As above we can construct a nested homotopy of Caccioppoli sets $\\Xi^k(t)$ from $\\Omega^{k-1}$ to $\\Omega^k$, satisfying\n$$\\Per(\\Xi^k(t)) \\leq \\Per(\\Omega_0)+2c(n)\\delta^{\\frac{n}{n+1}} \n+ \\frac{2N\\delta}{r}$$\n\nWe choose $\\delta>0$ so small that $\\Per(\\Xi^k(t))< \\Per(\\Omega_0) + \\varepsilon$.\nIt follows then that we have obtained a homotopy from $\\Omega_0$ to $\\Omega$ satisfying the\ndesired perimeter bound. Similarly,\nwe construct a homotopy from $\\Omega$ to $\\Omega_1$.\n\\end{proof}\n\n\n\n\n\n\n\nFinally, we have the following result. Recall that White \\cite{W} proves that strictly stable \\emph{smooth} minimal hypersurfaces are locally area-minimizing. A generalization of such a result to the case of hypersurfaces with singularities (i.e., elements of $\\cR$) would be very interesting. The following (weaker) result will suffice for our needs; it can be seen as a result along these lines, except ``stability'' is replaced by a stronger hypothesis: the surface is homotopic minimizing to one side.\\footnote{Note that one certainly needs a condition on the singularities rather than just a condition on the regular part like strict stability, since as we show in Proposition \\ref{p:def_thm}, the existence of (regular) non-minimizing tangent cones implies that the hypersurface is not homotopic minimizing irrespective of any stability condition that might hold on the regular part.}\n\n \\begin{proposition}[Comparing the notions of minimizing vs.\\ homotopic minimizing for minimal surfaces]\\label{prop:min-vs-htpy-min}\nSuppose that $\\Omega$ is a Caccioppoli set and for some strictly convex open set $U \\subset M$ with smooth boundary, the associated varifold $V = |\\partial \\Omega|$ satisfies $V \\in \\cR(U)$. Assume that $\\supp V \\cap U$ is connected. \n\nSuppose that $\\Omega$ is inner (resp.\\ outer) homotopy minimizing in $U$. 
Then, at least one of the following two situations holds:\n\\begin{enumerate}\n\\item for all $p \\in \\supp V \\cap U$, there is $\\rho_0>0$ so that for $\\rho<\\rho_0$, $B_\\rho(p) \\subset U$ and $\\Omega$ is inner (resp.\\ outer) minimizing in $B_\\rho(p)$, or\n\\item there exists a sequence of Caccioppoli sets $E_i\n\\neq \\Omega$\nwith $|\\partial E_i| \\in \\cR(U)$ so that $E_i \\Delta \\Omega \\subset \\Omega \\cap U$ (resp.\\ $\\Omega^c \\cap U$), $|\\partial E_i|$ has stable regular part, and $\\partial E_i \\to \\partial \\Omega$ in the flat norm. \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{remark}\nIt is interesting to ask if the second possibility occurs; it seems possible that one could rule this out in the case where $V$ has regular tangent cones that are all strictly minimizing in the sense of Hardt--Simon \\cite[\\S 3]{HS}. \n\\end{remark}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:min-vs-htpy-min}]\nWe consider the ``inner'' case, as the ``outer'' case is similar. Let $E^\\varepsilon \\in \\mathcal I_\\varepsilon(\\Omega\\, |\\, U)$ minimize perimeter among all sets in $\\mathcal I_\\varepsilon(\\Omega\\,|\\, U)$ (as usual, the existence of $E^\\varepsilon$ follows from Arzel\\`a--Ascoli). We claim that $E^\\varepsilon$ is area-minimizing to the inside of $\\Omega$ in sufficiently small balls. \n\nMore precisely, for $r>0$ sufficiently small, suppose there was a Caccioppoli set $E'$ so that $E'\\Delta E^\\varepsilon \\subset B_r(p) \\cap U \\cap \\Omega$ and $\\Per(E'\\,|\\,U) < \\Per(E^\\varepsilon\\,|\\,U)$. As long as $r$ was chosen sufficiently small, Lemma \\ref{l:close in flat} guarantees\nthat $E' \\in \\mathcal I_\\varepsilon(\\Omega\\, |\\,U)$. This is a contradiction. \n\nNow, consider $p \\in \\reg V \\cap U$. 
We note that $E^\\varepsilon$ is almost minimizing (with no constraint coming from $\\Omega$) in the sense of \\cite{Tam}, and thus has $C^{1,\\alpha}$ boundary in $B_r(p) \\cap U$, thanks to standard results on the obstacle problem; see \\cite[\\S 1.9, \\S1.14(iv)]{Tam}. As such, away from $\\sing V$ (which has Hausdorff dimension at most $n-7$) we can thus conclude that $\\partial^*E^\\varepsilon$ is regular, stationary and stable.\\footnote{Cf.\\ the proof of \\cite[Proposition 2.1]{Liu} for the proof of stability.} A capacity argument then implies that $|\\partial E^\\varepsilon| \\in \\cR(U)$ and $\\partial^*E^\\varepsilon$ is stable. Therefore, the maximum principle for (possibly singular) hypersurfaces \\cite{Ilm} implies that either $E^\\varepsilon = \\Omega$ or $\\partial^*E^\\varepsilon \\cap \\supp V = \\emptyset$. In the first case, we can conclude that $\\Omega$ is inner minimizing in small balls (since $E^\\varepsilon$ is). \n\nWe can thus assume that the latter possibility holds for all $\\varepsilon>0$ sufficiently small. Taking $\\varepsilon_j\\to 0$, there is $E \\in \\mathcal I_0(\\Omega | U)$ so that $E^{\\varepsilon_j}\\to E$ with respect to the flat norm. If $E=\\Omega$, then the second possibility in the conclusion of the proposition holds for $E_j = E^{\\varepsilon_j}$. \n\nThe final case to consider is $E\\neq \\Omega$. 
By curvature estimates for stable minimal hypersurfaces \\cite{SS}, $|\\partial E| \\in \\cR(U)$ and thus $\\partial^*E \\cap \\supp V = \\emptyset$ again by the maximum principle.\n\nWe know that $\\Per(E_i| U) \\leq \\Per(\\Omega| U)$, so in the limit we get\n$\\Per(E| U) \\leq \\Per(\\Omega| U)$.\nBy Arzel\\`a--Ascoli, nested homotopies from $\\Omega$ to $E_i$ will converge to\na nested homotopy $E(t)$ from $\\Omega$ to $E$ that does not increase volume.\nBy the inner homotopy minimizing property of $\\Omega$ we have\n\\[\n\\Per(E\\,|\\,U) = \\Per(\\Omega\\, |\\, U).\n\\]\n\nSuppose we minimize perimeter among Caccioppoli sets $A'$ sandwiched between $E$ and $\\Omega$, $E \\subset A' \\subset \\Omega$. We claim that the\nminimizer $A$ has perimeter equal to that of $\\Omega$. Indeed, if $A$ had strictly\nsmaller perimeter, then the family $E(t) \\cup A$\nwould be an area-decreasing nested homotopy between $\\Omega$ and $A$, contradicting that\n$\\Omega$ is inner homotopic minimizing.\n\nWe thus see that $\\Omega$ is minimizing in $\\Omega \\cap E^c \\cap U$, which implies that it is inner minimizing in small balls, as asserted. \n\\end{proof}\n\n\n\\section{Non-excessive sweepouts}\\label{sec:min-max}\n\nIn this section we introduce the concept of excessive intervals and excessive points for a sweepout and prove that there is a sweepout such that every point in the critical domain is not excessive.\n\n\\begin{definition}[Excessive points and intervals] \\label{def:excessive_interval}\n\tSuppose $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ is a sweepout. 
Given a connected interval $I$ with endpoints $a \\leq b$ (we allow $I$ to be open, closed, or half-open),\n\twe will say that $\\{\\Phi^I(x) = \\partial \\Omega^I(x)\\}_{x\\in \\bar I}$ is an \\emph{$I$-replacement family for $\\Phi$} if\n\t$\\Omega^I(a) = \\Omega(a)$, $\\Omega^I(b)=\\Omega(b)$ and for all $x \\in I$, \n\\[\n\\limsup_{I \\ni y\\to x} \\mathbf{M}(\\Phi^I(y)) < W.\n\\]\nWe say that a connected interval $I$ is an \\emph{excessive interval for $\\Phi$} if there is an $I$-replacement family for $\\Phi$. We say that a point $x$ is \\emph{left (resp.\\ right) excessive for $\\Phi$} if there is an excessive interval $I$ for $\\Phi$ so that $(x-\\varepsilon,x]\\subset I$ (resp.\\ $[x,x+\\varepsilon)\\subset I$) for some $\\varepsilon>0$. \n\\end{definition}\n\n \n The goal of this section is to prove the following result.\n\n\\begin{theorem}[Existence of non-excessive min-max hypersurface]\\label{c:non-excessive_minmax}\nThere exists a (ONVP) sweepout $\\Psi$ such that every $x\\in {\\bf m}_L(\\Psi)$ is not left excessive and every $x \\in {\\bf m}_R(\\Psi)$ is not right excessive. \n\\end{theorem}\n\n\\subsection{Preliminary results} We establish several results that will be used in the proof of Theorem \\ref{c:non-excessive_minmax}. \n\n\n\n\n\n\n\\begin{lemma}[Extension lemma I]\\label{l:union-excessive}\nIf $I,J$ are excessive for $\\Phi$ and $I \\cap J \\not = \\emptyset$, then $I \\cup J$ is excessive for $\\Phi$. 
\n\\end{lemma}\n\\begin{proof}\nLet $\\{\\partial \\Omega^I(x) \\}_{x \\in I}$\nand $\\{\\partial \\Omega^J(x) \\}_{x \\in J}$\nbe $I$ and $J$ replacement families\nfor $\\Phi$.\n\nLet $a_1 = \\inf\\{x \\in I\\} $, $a_2 = \\inf\\{x \\in J\\}$ and $b_1 = \\sup\\{x \\in I\\} $, $b_2 = \\sup\\{x \\in J\\}$.\nAssume without any loss of generality that\n$a_1\\leq a_2$ and $b_1 \\leq b_2$\nand at least one of the two inequalities\nis strict.\n\nLet $K = I \\cap J$; let $a,b$ denote, respectively, left and right\nboundary points of $K$ and $c = \\frac{a+b}{2}\\in K$.\nLet $\\tilde{\\Omega}$ be a Caccioppoli set minimizing perimeter among all $\\Omega'$\nwith $\\Omega(a) \\subset \\Omega' \\subset \\Omega(b)$.\nDefine $\\phi_1:[a_1,c] \\rightarrow [a_1,b_1]$\nand $\\phi_2:[c,b_2] \\rightarrow [a_2,b_2]$\ngiven by $\\phi_1(x)=a_1+\\frac{b_1-a_1}{c-a_1}(x-a_1)$\nand $\\phi_2(x)=a_2 + \\frac{b_2-a_2}{b_2-c}(x-c)$.\nWe define an $I\\cup J$ replacement family for\n$\\Phi$ by setting\n\\[\n\\Phi^{I\\cup J}(x) = \\begin{cases} \n\\partial (\\Omega^I(\\phi_1(x)) \\cap \\tilde{\\Omega}) & x \\in [a_1,c]\\\\\n\\partial (\\Omega^J(\\phi_2(x)) \\cup \\tilde{\\Omega}) & x \\in [c,b_2]\n\\end{cases}\n\\]\nObserve that $\\Phi^{I\\cup J}$\nis continuous since $\\Phi^{I\\cup J}(c) = \\partial \\tilde{\\Omega}$.\nIt follows from our choice of $\\tilde{\\Omega}$ that\n$\\mathbf{M}(\\Phi^{I\\cup J}(x))\\leq \\mathbf{M}(\\Phi^I(\\phi_1^{-1}(x)))\\max\\{n_i,i+1\\}$ so that we can construct $J_n$-replacements $\\{\\Phi^n_{i+1}(x)=\\partial \\Omega^n_{i+1}(x)\\}$ for $n\\geq n_{i+1}$ with \n\\[\n\\Per(\\Omega^n_{i+1}(x)) \\leq A_j\n\\]\nfor $x \\in [a_j',b_j']$ and $1 \\leq j \\leq i+1$. Granted this, we can easily (inductively) complete the proof by passing $\\Phi^{n_{i+1}}_{i+1}$ to a subsequential limit (using Arzel\\`a--Ascoli).\n\n It is useful to introduce the following notation, used in the construction of $\\Phi^n_{i+1}$. 
Given two nested sets of finite perimeter $V \\subset W$, we\n\tlet \n\t\\begin{itemize}\n\t\t\\item $\\mathcal M_{V,W}$ denote an outermost Caccioppoli set minimizing perimeter among all the Caccioppoli sets $\\Omega$ with $V \\subset \\Omega \\subset W$;\n\t\t\\item $\\{\\mathcal V_{(V,W)}(x)\\}_x$ denote the optimal nested homotopy from $V$ to $W$.\n\t\\end{itemize} \nFor $n \\geq n_i$, we set\n\\[\nL_n : = \\mathcal M_{\\Omega(a_n),\\Omega^{n_i}_i(a_{i+1}')}, \\qquad U_n: = \\mathcal M_{\\Omega^{n_i}_i(b_{i+1}'),\\Omega(b_n)}\n\\]\nNote that for $n \\leq m$, $ L_m \\subset L_n$ and $U_n \\subset U_m$. Hence, $L_n$ and $U_n$ have $\\mathcal F$-limits as $n\\to\\infty$. For $\\varepsilon > 0$ fixed so that \n\\[\n\\max\\left\\{\\Per\\left(\\Omega^{n_i}_i(a_{i+1}')\\right),\\Per\\left(\\Omega^{n_i}_i(b_{i+1}')\\right)\\right\\}+\\varepsilon < W,\n\\]\nLemma \\ref{l:close in flat} thus guarantees that there is $n_{i+1}\\geq i+1$ sufficiently large so that for $n\\geq n_{i+1}$, \n\\[\n\\sup_t \\Per\\left(\\mathcal V_{(L_n,L_{n_{i+1}})}(t)\\right) < W, \\qquad \\sup_t \\Per\\left(\\mathcal V_{(U_{n_{i+1}},U_n)}(t)\\right) < W. \n\\]\nFor $n\\geq n_{i+1}$, we define\n\\[\n\\tilde \\Phi^n_{i+1}(x) = \\begin{cases}\n\\partial\\left(\\Omega^n_i(x+1) \\cap L_n\\right) & x \\in [a_n-1,b_n-1]\\\\\n\\partial \\tilde{\\mathcal V}_{(L_n,L_{n_{i+1}})}(x) & x \\in [b_n-1,a_{n_{i+1}}]\\\\\n\\partial\\left((\\Omega^{n_{i+1}}_i(x) \\cup L_{n_{i+1}}) \\cap U_{n_{i+1}} \\right)& x \\in [a_{n_{i+1}},b_{n_{i+1}}]\\\\\n\\partial \\tilde{\\mathcal V}_{(U_{n_{i+1}},U_n)}(x) & x \\in [b_{n_{i+1}},a_n+1]\\\\\n\\partial\\left(\\Omega^n_i(x-1) \\cup U_n \\right)& x \\in[a_n+1,b_n+1]. \n\\end{cases}\n\\]\nHere, the $\\tilde{\\mathcal V}$ are the homotopies $\\mathcal V$ reparametrized to be defined on the given intervals (the exact parametrization is immaterial). It is easy to check that $\\tilde\\Phi^n_{i+1}$ is continuous. \n\nLet $\\Phi_{i+1}^n$ denote the reparametrization of $\\tilde\\Phi_{i+1}^n$ by volume. 
We have arranged that $\\Phi_{i+1}^n$ is a $J_{n}$-replacement. Moreover, for $x \\in [a_{i+1}',b_{i+1}']$, we have that $\\Phi^n_{i+1}(x) = \\Phi^{n_{i+1}}_i(x)$, so \n\\[\n\\mathbf{M}(\\Phi^n_{i+1}(x)) \\leq A_j\n\\]\nfor $x \\in [a_j',b_j']$ and $1\\leq j\\leq i$. Finally, we can set \n\\[\nA_{i+1} : = \\sup_{x\\in[a_{i+1}',b_{i+1}']} \\mathbf{M}(\\Phi^{n_{i+1}}_{i}(x)) < W\n\\]\n(which is independent of $n$). This completes the proof. \n\\end{proof}\n\n\n\\subsection{Proof of Theorem \\ref{c:non-excessive_minmax}}\nWe are now able to complete the proof of Theorem \\ref{c:non-excessive_minmax}.\n\n\t\tLet $\\Phi$ be a nested optimal sweepout. Consider the collection $\\mathcal A$ of the maximal (with respect to inclusion) excessive intervals for $\\Phi$, that is $I\\in \\mathcal A$ if for every excessive interval $I'$ such that $I'\\cap I\\neq \\emptyset$, we have $I\\supset I'$. The existence of maximal intervals follows from Proposition \\ref{p:max_int} proven above. \n\t\n\t Notice that by definition $I \\neq J\\in \\mathcal A$ implies that $I\\cap J=\\emptyset$, so we can define a new sweepout $\\Psi$ in the following way:\n\\[\n\t \\Psi(x)=\\begin{cases}\n\t \\Phi^I(x) & \\quad \\text{if }x \\in I \\in \\mathcal A\\\\\n\t \\Phi(x) & \\quad \\text{otherwise}\\,.\n\t \\end{cases}\n\\]\nNote that $\\Psi$ is a nested optimal sweepout, so up to reparametrization we can assume it is (ONVP), and moreover by construction ${\\bf m}(\\Psi)\\subset {\\bf m}(\\Phi)$. Suppose that $x \\in {\\bf m}_L(\\Psi)$ is left excessive. Then, there is a $\\Psi$-excessive interval $J$ with $(x-\\varepsilon,x]\\subset J$. We claim that there is $I \\in \\mathcal A$ with $J \\subset I$. Indeed, if $J \\cap I = \\emptyset$ for all $I \\in \\mathcal A$, then $J$ is a $\\Phi$-excessive interval, contradicting the definition of $\\mathcal A$. 
On the other hand, if there is $I \\in \\mathcal A$ with $J \\cap I \\not =\\emptyset$, then $J \\cup I$ is excessive by Lemma \\ref{l:union-excessive}. Thus, $J \\subset I$ by definition of $\\mathcal A$ again. Thus, for $y \\in (x-\\varepsilon,x]\\subset I$, $\\Psi(y) = \\Phi^I(y)$. By the definition of replacement family, we know that if $x_i \\in (x-\\varepsilon,x]$ has $x_i\\to x$, then\n\\[\n\\limsup_{i\\to\\infty} \\mathbf{M}(\\Phi^I(x_i)) < W. \n\\]\nHowever, this contradicts the assumption that $x \\in{\\bf m}_L(\\Psi)$. The same argument shows that $x \\in {\\bf m}_R(\\Psi)$ is not right excessive. This finishes the proof. \\qed\n\n\n\\section{Deformation Theorems and Proof of Theorem \\ref{thm:nm+index_bound}}\\label{ss:deformation_thm}\nIn this section we conclude the proof of Theorem \\ref{thm:nm+index_bound}. By Theorem \\ref{c:non-excessive_minmax}, there exists an (ONVP) sweepout $\\Phi$ so that every $x \\in {\\bf m}_L(\\Phi)$ is not left excessive and every $x\\in{\\bf m}_R(\\Phi)$ is not right excessive. By Almgren--Pitts pull-tight and regularity theory \\cite{Pitts}, we find that for some $x_0\\in{\\bf m}(\\Phi)$, there is a min-max sequence $x_i\\to x_0$ so that $|\\Phi(x_i)|$ converges to some $V \\in \\cR$. Indeed, we can pull-tight $\\Phi$ to find a sweepout (in the sense of Almgren--Pitts, not in the (ONVP) sense considered in this paper) $\\tilde\\Phi$; we have that ${\\bf C}(\\tilde\\Phi) \\subset {\\bf C}(\\Phi)$ and some $V\\in{\\bf C}(\\tilde\\Phi)$ is in $\\cR$. 
By replacing\n$\\Phi(x)$ by $\\Phi(1-x)$ if necessary,\nwe can then assume for the rest of this section that:\n\\begin{equation}\\label{e:no_can}\n\\begin{gathered}\n\\text{there is a (ONVP) sweepout $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ and $x_i \\nearrow x_0 \\in {\\bf m}_L(\\Phi)$, so that}\\\\\n\\text{$|\\Phi(x_i)| \\to V\\in\\cR$ and $\\Phi$ is not left excessive at $x_0$}\n\\end{gathered}\n\\end{equation}\n\nWe then consider two cases: $\\mathbf{M}(\\Phi(x_0)) = W$ (no cancellation) and $\\mathbf{M}(\\Phi(x_0)) < W$ (cancellation). We analyze the geometric properties of $V$ in both cases separately,\nproving deformation theorems reminiscent of those in \\cite{MN16}. \n\t\n\\subsection{No cancellation} \nThroughout this subsection we will assume the no cancellation condition\n\\[\n\\mathbf{M}(\\Phi(x_0)) = W \\,.\n\\] \nIn this case we have that $|\\Phi(x_i)| \\to |\\partial \\Omega|$, see for instance \\cite[Proposition A.1]{DL}, so we can rephrase our assumption \\eqref{e:no_can} as\n\\begin{equation}\\label{e:no_canc2}\n\\begin{gathered}\n\\text{there is a (ONVP) sweepout $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ and $x_i \\nearrow x_0 \\in {\\bf m}_L(\\Phi)$, so that} \\\\\n\\text{$|\\Phi(x_i)| \\to |\\Sigma|:=|\\partial \\Omega|\\in\\cR$ and $\\Phi$ is not left excessive at $x_0$.}\n\\end{gathered}\n\\end{equation}\nIn particular, in this case the multiplicity bound of Theorem \\ref{thm:nm+index_bound} follows immediately.\n\n\n\n\n\n\t\n\n\n\\begin{proposition}\n\\label{p:pairs}\nLet $\\Sigma$ be as in \\eqref{e:no_canc2}. \nSuppose $\\Sigma$ is not homotopic minimizing to either side\nin some open set $U$. 
Then the following holds:\n\\begin{enumerate}\n \\item for every $x \\not\\in \\overline{U}$ there exists $r>0$, such that\n$\\Sigma$ is minimizing to one side in $B_r(x)$;\n \\item for every open set $U'$ disjoint from $U$,\n we have that $\\Sigma$ is homotopic minimizing\n to one side in $U'$.\n\\end{enumerate}\n\n\\end{proposition}\t\n\t\\begin{proof} \nWe prove statement (1).\nThere is $\\delta>0$ and Caccioppoli sets $E^-_1 \\in \\mathcal I(\\Omega\\,|\\,U)$ and $ E^+_1\\in \\mathcal O(\\Omega\\,|\\,U)$ with \n\t\\begin{equation}\\label{e:vol_dec}\n\t\\Per(E^\\pm_1\\,|\\, U)\\leq \\Per(\\Omega\\,|\\, U)-\\delta\\,,\n\t\\end{equation}\n\tand nested families $\\{\\Omega^-_1(x)\\,:x\\in [0,1]\\} \\in \\mathcal I(\\Omega, E_1^-\\,|\\,U)$ and $\\{\\Omega^+_1(x)\\,:x\\in [0,1]\\} \\in \\mathcal O(\\Omega, E_1^+\\,|\\,U)$. Furthermore, by Lemma \\ref{l:existence_homotopic}, we can assume that $E_1^+$ is an inner and $E_1^-$ an outer homotopic minimizer in $U$.\n\t\n Let $x \\in \\Sigma \\setminus \\overline{U}$ and assume, for contradiction,\n\tthat $\\Sigma$ is not area minimizing to either side \n\tin every ball $B_r(x)$, $r< {\\rm dist}(x, U)$. 
Let $E^-_2 \\subset \\Omega$, \n\twith $\\Omega \\setminus E^-_2 \\subset B_r(x)$, denote a Caccioppoli set\n\tthat is a strict outer minimizer in $\\Omega \\cap B_r(x)$.\n\tSimilarly, let $ \\Omega \\subset E^+_2$, with\n\t$ E^+_2 \\setminus \\Omega \\subset B_r(x)$, denote a Caccioppoli set\n\tthat is a strict inner minimizer in $\\Omega \\cap B_r(x)$.\n We have $$\\Per(\\Omega)> \\max\\{\\Per(E^+_2),\\Per(E^-_2) \\}.$$\n\tIf we choose $r>0$ sufficiently small,\n\tthen, by Lemma \\ref{l:close in flat}, there exist nested families $\\{\\Omega^-_2(x)\\,:x\\in [0,1]\\}$ and $\\{\\Omega^+_2(x)\\,:x\\in [0,1]\\} $ that interpolate between $E^-_2$ and \n\t$\\Omega$ and between $\\Omega$ and $E^+_2$ and satisfy\n\\begin{equation}\\label{e:vol_dec2}\n\\Per(\\Omega^\\pm_2(x))\\leq \\Per(\\Omega) + \\frac{\\delta}{2}\\,.\n\\end{equation}\n\n\tSince $\\Phi$ is nested, we may let $(x_l,x_r)\\neq \\emptyset$ be the interval of parameters $x$ such that\n\t$$\n\t\\Phi(x) \\cap (\\cup_i E_i^+\\setminus \\cup_i E_i^-)\\neq \\emptyset \\, .\n\t$$\n\tThen we define a family $\\bar \\Psi\\colon [x_l-2, x_r+2] \\to \\mathcal{Z}_{n}(M; \\mathbb{Z}_2)$ by setting\n\t$$\n\t\\bar \\Psi(x):=\n\t\\begin{cases}\n\t\\partial\\left(\\Omega(x+2)\\cap E_1^-\\cap E_2^-\\right) & \\text{if } x\\in (x_l-2,x_0-2]\\\\\n\t\\partial \\left(\\Omega_1^-(x-x_0+2)\\cap E_2^- \\right) & \\text{if } x\\in [x_0-2,x_0-1]\\\\\n\t\\partial \\left(\\Omega_1^+(x-x_0+1)\\cap E_2^-\\right) & \\text{if } x\\in [x_0-1,x_0]\\\\\n\t\\partial \\left(\\Omega_2^-(x-x_0)\\cup E_1^+\\right) & \\text{if } x\\in [x_0,x_0+1]\\\\\n\t\\partial \\left(\\Omega_2^+(x-x_0-1) \\cup E_1^+\\right) & \\text{if } x\\in [x_0+1,x_0+2]\\\\\n\t\\partial\\left(\\Omega(x-2)\\cup E_1^+\\cup E_2^+\\right) & \\text{if } x\\in [x_0+2,x_r+2)\n\t\\end{cases}\n\t$$\n\tIt is easy to see that $\\bar \\Psi$ is continuous, and moreover notice that, since by Lemma \\ref{strict minimizer} $E_1^+$ is a strict inner minimizer in $U$ and $E_{1}^-$ a strict outer minimizer in $U$, we have 
that\n\t$$\n\t\\limsup_{y\\to x} \\mathbf{M} (\\bar \\Psi(y)) < \\limsup_{y\\to x} \\mathbf{M}(\\bar\\Phi(y)) \\leq W \n\t$$\n\tfor $x\\in(x_l-2,x_0-2] \\cup [x_0+2,x_r+2)$, where $\\bar\\Phi(y):=\\Phi(y+2)$ for $y\\leq x_0-2$ and $\\bar\\Phi(y):=\\Phi(y-2)$ for $y\\geq x_0+2$.\n\tSince the families $\\Omega_1^\\pm(x)$ do not increase the volume of $\\Sigma$ in $U$ and using \\eqref{e:vol_dec} and \\eqref{e:vol_dec2}, we also have\n\t$$\n\t\\mathbf{M} (\\bar \\Psi(x)) \\leq W -\\frac{\\delta}{2} \\qquad \\forall \\, x\\in [x_0-2, x_0+2]\\,.\n\t$$\n\tWe let $\\Psi$ be the volume reparametrization of the nested sweepout $\\bar \\Psi$; then $\\Psi$ is a $(x_l,x_r)$-replacement for $\\Phi$, thus giving a contradiction with the fact that $x_0\\in (x_l,x_r)$ and $x_0\\in {\\bf m}_L(\\Phi)$.\n\t\n\tThe proof of statement (2) is completely analogous.\n\\end{proof}\n\n\n\n\n\\begin{proposition}\n\\label{p:def_thm}\nLet $\\Sigma$ be as in \\eqref{e:no_canc2}, then the following holds:\n\\begin{enumerate}\n \\item $\\Index(\\Sigma)\\leq 1$;\n \\item If $\\Index(\\Sigma)=1$, then \n for every point $x \\in \\Sigma$ there exists $r>0$, such that\n$\\Sigma$ is minimizing to one side in $B_r(x)$;\n \\item If $\\hnm(\\Sigma)$ is non-empty, then $\\Sigma$ is stable,\n $\\cH^0(\\hnm(\\Sigma))=1$ and for every point $x \\in \\Sigma\\setminus \\hnm(\\Sigma)$ there exists $r>0$, such that $\\Sigma$ is minimizing to one side in $B_r(x)$.\n\\end{enumerate}\nIn particular, Theorem \\ref{thm:nm+index_bound} holds in the case of no cancellations.\n\\end{proposition}\n\t\n\\begin{proof} \nNote that if $U \\cap \\Sigma$ is smooth and unstable, it is easy to see that $\\Sigma$ is not homotopic minimizing to either side in $U$ (just consider the normal flow generated by a compactly supported unstable variation of fixed sign). Statements (2) and (3)\nof the Proposition now immediately follow from Proposition \\ref{p:pairs}. \nThe upper bound on the index (1) follows from (2) of Proposition \\ref{p:pairs} and Lemma \\ref{l:localizing-index-two-sided} below. 
\n\\end{proof}\t\n\\begin{lemma}[Localizing the index]\\label{l:localizing-index-two-sided}\nSuppose that $\\Sigma \\in \\cR$ is two-sided and has $\\Index(\\Sigma) \\geq 2$. Then, there are smooth hypersurfaces with boundary $\\Sigma^*_1,\\Sigma^*_2 \\subset \\Sigma$ so that the $\\Sigma^*_i$ are both unstable (for variations fixing the boundary). \n\\end{lemma}\n\\begin{proof}\nA standard capacity argument implies that there is a subset $\\Sigma' \\subset \\Sigma$ where $\\Sigma'$ is a smooth minimal surface with smooth boundary and $\\Index(\\Sigma') \\geq 2$ (with Dirichlet boundary conditions). Let $u$ denote the second (Dirichlet) eigenfunction (with eigenvalue $\\lambda <0$) for the stability operator for $\\Sigma'$. Because $u$ must change sign, there are at least two nodal domains $\\Sigma_1,\\Sigma_2\\subset \\Sigma'$. One can find subsets with smooth boundary $\\Sigma^*_i\\subset \\Sigma_i$ so that $\\Sigma^*_i$ are unstable. This follows from the argument in \\cite[p.\\ 21]{Chavel} (namely, by considering $(u|_{\\Sigma_i} - \\varepsilon)_+$ in the stability operator for $\\varepsilon\\to 0$ chosen so that $\\{u|_{\\Sigma_i} > \\varepsilon\\}$ has smooth boundary). \n\\end{proof}\n\n\\begin{lemma}\\label{lem:snm-hnm}\n$\\mathcal{S}_{\\textnormal{nm}}(\\Sigma)\\subset \\mathfrak{h}_\\textrm{nm}(\\Sigma)$.\n\\end{lemma}\n\\begin{proof}\nSuppose that $p \\in \\mathcal{S}_{\\textnormal{nm}}(\\Sigma)$. We claim that $\\Sigma$ is not homotopic minimizing to either side in $B_\\varepsilon(p)$ for any $\\varepsilon>0$ sufficiently small. Indeed, by assumption, the unique tangent cone $\\mathbf{C} = \\partial\\Omega_\\mathbf{C}$ to $\\Sigma$ at $p$ is not minimizing to either side. 
This implies that there are Caccioppoli sets $E_\\mathbf{C}^- \\subset \\Omega_\\mathbf{C} \\subset E_\\mathbf{C}^+$ so that $E_\\mathbf{C}^\\pm \\Delta\\Omega_\\mathbf{C} \\subset B_1 \\subset \\mathbb{R}^{n+1}$ and so that\n\\[\n\\Per_{\\mathbb{R}^{n+1}}(E_\\mathbf{C}^\\pm\\,|\\,B_1) \\leq \\Per_{\\mathbb{R}^{n+1}}(\\Omega_\\mathbf{C}\\,|\\,B_1) - \\delta.\n\\]\nChoose $C^{1,\\omega}$ coordinates on $M$ around $p$ so that $\\Omega = \\Omega_\\mathbf{C}$ in $B_\\varepsilon(p)$ and so that $g_{ij}(p) = \\delta_{ij}$, which we can do since $g\\in C^2$ and $\\Sigma$ is a $C^{1,\\omega}$ deformation of $\\mathbf{C}$ near $p$ by assumption. Then, set \n\\[\nE(x) := \\begin{cases}\n (\\Omega\\setminus B_\\varepsilon) \\cup (|x| E_\\mathbf{C}^- \\cap B_\\varepsilon) & x < 0\\\\\n \\Omega & x = 0\\\\\n (\\Omega\\setminus B_\\varepsilon) \\cup (|x| E_\\mathbf{C}^+ \\cap B_\\varepsilon) & x > 0\\\\\n \\end{cases}\n\\]\nWe have that \n\\[\n\\Per_g(E(x)) - \\Per_g(\\Omega) = - |x|^n \\delta (1+o(1))\n\\]\nas $x\\to 0$ (since the metric $g_{ij}$ converges to the flat metric $\\delta_{ij}$ after rescaling $|x| \\to 1$, by the $C^{1,\\omega}$ regularity of the chart). This shows that $\\Sigma$ is not homotopic minimizing to either side in $B_\\varepsilon(p)$, so $p\\in\\mathfrak{h}_\\textrm{nm}(\\Sigma)$ as claimed. \n\\end{proof}\n\n\\subsection{Cancellation} We will assume the cancellation condition\n\\[\n\\mathbf{M}(\\Phi(x_0)) < W\n\\]\nthroughout this subsection. In particular, we can find $q \\in \\reg V$ so that for all $\\varepsilon>0$ sufficiently small,\n\\[\n\\Per(\\Omega\\,|\\,B_\\varepsilon(q)) < | V |(B_\\varepsilon(q)) \n\\]\nwhere $\\partial \\Omega = \\Phi(x_0)$. Like in the previous section we set $\\Sigma := \\supp V$. 
\n\nFurthermore we set $V=\\sum_{i}\\kappa_i\\,|\\Sigma_i|$, where each $\\Sigma_i$ is a minimal hypersurface with optimal regularity and $\\kappa_i\\in \\mathbb{N}$ are constant multiplicities, by the constancy theorem \\cite[Theorem 41.1]{Sim}. So \\eqref{e:no_can} becomes\n\n\\begin{equation}\n\\label{e:canc}\n\\begin{gathered}\n\\text{there is a (ONVP) sweepout $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ and $x_i \\nearrow x_0 \\in {\\bf m}_L(\\Phi)$, so that} \\\\\n\\text{$|\\Phi(x_i)| \\to V=\\sum_{i} \\kappa_i\\,|\\Sigma_i|\\in\\cR$, $\\Phi$ is not left excessive at $x_0$ and} \\\\\n\\text{there is $q\\in \\Sigma$ such that }\\, \\Per(\\Omega\\,|\\,B_\\varepsilon(q)) \\leq | V |(B_\\varepsilon(q))-\\delta(\\varepsilon) \\quad \\text{for all } \\varepsilon>0\\,.\n\\end{gathered}\n\\end{equation}\n\nWe write $\\Omega=\\Omega(x_0)$ and observe that $\\Sigma\\subset \\overline\\Omega$. We would like to claim that $\\Sigma$ is homotopically minimizing, but this condition might not make sense if $\\Sigma$ is one-sided. 
However, thanks to the cancellation we can actually prove \nthat $\\Sigma$ is area-minimizing in a neighborhood of it in $\\Omega$\naway from a small ball around $q$.\n\n\n\\begin{definition} \\label{def:nbhd minimizing}\nWe will call a set $\\Omega'$ a $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor\nif $$\\big(\\Omega \\setminus B_\\tau(\\Sigma)\\big) \\cup \\big(B_\\varepsilon(q) \\setminus \\Sigma \\big) \n\\subset \\Omega' \\subsetneqq \n\\Omega \\setminus \\Sigma \\, .$$\nA $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor $\\Omega'$ will be called a\nminimizing competitor if \nits perimeter is strictly less than the perimeter of any \n$(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor $\\Omega''$ \nwith $\\Omega' \\subset \\Omega''$.\n(Note that we do not require $\\Per (\\Omega')$ to \nbe less than the perimeter of all competitors, but\nonly of those that contain $\\Omega'$.)\n\\end{definition}\n\n\n\n\\begin{proposition} \\label{thm:no_competitor}\n Suppose \\eqref{e:canc} holds. Then \n for every $\\varepsilon>0$ there is $\\tau>0$,\n such that \n no minimizing $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor\n exists.\n\\end{proposition}\n\n\\begin{proof} \nSuppose, for contradiction, that \nthere exists a minimizing $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor\n$U$. Observe that, by the cancellation\nassumption, for every $\\eta>0$ \nwe can find $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitors \n$\\Omega'$\nwith $\\Per(\\Omega') \\leq W+ \\eta - \\delta(\\varepsilon)$,\nwhere $\\delta(\\varepsilon)$ is from \\eqref{e:canc}. 
\nIt follows that \n$$\\Per(U) \\leq \\Per(\\Omega') \\leq W + \\eta - \\delta(\\varepsilon) \\, . $$\nIf we choose $\\eta>0$ sufficiently small,\nthen by Lemma \\ref{l:close in flat}\nthere exists a nested family $\\{E(x)\\,:\\,x\\in [0,1]\\}$\nwith $E(0)= U$, $E(1) = \\Omega$ and\n$$\\Per(E(x))< W \\, .$$\n\nLet $(x_l,x_0]$ be the connected interval of parameters $x$ such that $\\Omega(x)\\setminus U\\neq \\emptyset$, where $\\{\\Phi(x)=\\partial \\Omega(x)\\}$, and define \na family $\\Psi \\colon (x_l, x_0+1] \\to \\mathcal Z_n(M, \\mathbb Z_2)$ by\n$$\n\\Psi(x):=\n\\begin{cases}\n\\partial(\\Omega(x)\\cap U) & \\text{if }x\\in (x_l,x_0] \\\\\n\\partial E(x-x_0)\n& \\text{if }x\\in [x_0, x_0+1]\n\\end{cases}\n$$\nClearly $\\Psi$ is continuous, since $\\Omega=\\Omega(x_0)$, and moreover we have that\n$$\n\\limsup_{y\\to x} \\mathbf{M}(\\Psi(y)) < \\limsup_{y\\to x} \\mathbf{M}(\\Phi(y)) \\leq W\n$$\nfor every $x\\in (x_l,x_0)$ by\nthe strict minimality condition in Definition \\ref{def:nbhd minimizing}. For every $x\\in [x_0, x_0+1]$\nwe also have $\\mathbf{M}(\\Psi(x)) = \\mathbf{M}(\\partial E(x-x_0)) < W$.\nReparametrizing $\\Psi$ by volume thus yields a replacement family witnessing that $x_0$ is left excessive, contradicting \\eqref{e:canc}.\n\\end{proof}\n\n\\begin{proposition} \\label{p:def-thm-cancel}\nSuppose \\eqref{e:canc} holds. Then the regular part of $V$ is stable and\nfor every $x \\in \\supp V$ there exists $r>0$, such that \nthe support of $V$ is minimizing to one side in $B_r(x)$.\n\\end{proposition}\n\n\n\n\\begin{proof} \nFirst we observe that we can find two points $q_1$ and $q_2$ in $\\reg V$,\nsuch that for all $\\varepsilon>0$ sufficiently small,\n\\[\n\\Per(\\Omega\\,|\\,B_\\varepsilon(q_j)) < | V |(B_\\varepsilon(q_j)) \\, .\n\\]\nBy Proposition \\ref{thm:no_competitor} we have the non-existence\nof minimizing $(q_j,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitors\nfor $j=1,2$. This implies that \n $\\Sigma_i$ is area minimizing to one side\nin a small ball around every point of $V$. 
In particular,\nwe have $\\cH^{0}(\\hnm(V))=0$.\n\nThe stability of the regular part of each $\\Sigma_i$ also follows from the\nnon-existence of minimizing $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitors.\nIndeed, if a component $\\Sigma_i$ has index $\\geq 1$,\nthen for $\\varepsilon>0$ sufficiently small, the minimal\nhypersurface $\\Sigma_i \\setminus B_{\\varepsilon}(q)$ with fixed boundary\nwill be unstable by a standard capacity argument.\nIf $\\Sigma_i$ is two-sided, then by considering a minimization problem\nto one side of $\\Sigma_i$ \nin $B_\\tau(\\Sigma_i) \\setminus B_\\varepsilon(q)$\nwe can find an open set $U \\subset \\Omega$,\nsuch that $\\Omega \\setminus U$ is a minimizing \n$(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor.\n\nSuppose $\\Sigma_i$ is one-sided.\nSince $\\Sigma_i \\subset \\overline{\\Omega}$\nwe have that $B_\\tau(\\Sigma_i)\\setminus \\Sigma_i \\subset \\Omega$\nfor all sufficiently small $\\tau>0$.\nIn particular, for small $\\tau< \\varepsilon$ \nwe can minimize in the class of hypersurfaces\n $\\{S\\subset B_\\tau(\\Sigma_i) : S \\cap B_\\varepsilon(q) = \\Sigma_i \\cap B_\\varepsilon(q)\\}$\n to obtain a minimizer $\\Sigma_i'$ in the same homology class\n and an open set $U \\subset \\Omega$ with $\\partial U = \\Sigma_i \\cup \\Sigma_i'$.\n Then $\\Omega \\setminus U$ is a minimizing \n$(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor.\n\\end{proof}\n\n\\subsection{Multiplicity $2$ bound} In this subsection we show that if $\\kappa_i> 2$ for some $i$, then $x_0$ is excessive, by using simple comparisons with disks. Notice that if any multiplicity satisfies $\\kappa_i\\geq 2$ then we must be in the cancellation case considered above. \n\n\\begin{lemma}[Multiplicity $2$ bound]\\label{l:mult_bound}\nLet $V=\\sum_{i}\\kappa_i\\,| \\Sigma_i|$ be as in \\eqref{e:canc}. Then $\\kappa_i\\leq 2$ for every $i$.\n\\end{lemma}\n\t\n\\begin{proof} Suppose by contradiction $\\kappa_i\\geq 3$ for some $i$. 
Then let $p\\in \\reg(\\Sigma_i)$, $p\\neq q$ (where $q$ is the cancellation point considered above). Consider a ball $B_r(p)$, $r < \\frac{1}{2}{\\rm dist}(p,q)$, sufficiently small so that $\\Sigma_i\\cap B_r(p)$ is two-sided. Let $\\tau(r)>0$ be a small constant\nto be chosen later\nand set $U = B_r(p) \\cap B_\\tau(\\Sigma_i)$.\n\nConsider a sequence $x_j \\nearrow x_0$\nwith $|\\partial \\Omega(x_j)| \\rightarrow V$.\nWe can assume that the radius $r$ was\n chosen sufficiently small, so that\n\\begin{equation}\\label{e:mult_bound1}\n \\mathbf{M}(\\partial \\Omega(x_j) \\cap U)\\geq \\left(\\kappa_i - \\frac{1}{10}\\right) \\omega_n r^n\\,,\n\\end{equation}\nfor all $j$ large enough, where $\\omega_n$ denotes the measure of the $n$-dimensional ball of radius one.\n\nLet $\\Omega_j' \\subset \\Omega(x_j)$, $\\Omega_j' \\setminus U = \\Omega(x_j)\n\\setminus U$, be a strict one-sided outer area minimizer\nin $\\Omega(x_j)\\cap U$.\nObserve that if $\\Omega'_j$ does not converge to $\\Omega(x_0)$,\nthen $\\lim \\Omega_j'$ is a $(q,\\frac{1}{2}{\\rm dist}(p,q), \\tau, \\Sigma, \\Omega(x_0))$-competitor,\nwhich contradicts Proposition \\ref{thm:no_competitor}.\n\nWe conclude that $\\lim \\Omega_j' = \\Omega(x_0)$.\nOn the other hand, \nby comparing $\\Omega(x_j) \\setminus U$ to $\\Omega_j'$ and\nassuming that $\\tau(r)$\nwas chosen sufficiently small, we have that the one-sided \narea minimizing property of $\\Omega_j'$ implies \n\\begin{equation*}\n \\mathbf{M}(\\partial \\Omega_j' \\cap U)\\leq \\Per(U) \\leq \\left(2 + \\frac{1}{10}\\right) \\omega_n r^n\\,.\n\\end{equation*}\nFor $\\tau(r)$\n sufficiently small and $j$ large we can apply\nLemma \\ref{l:close in flat} to find a nested family $E(x)$\ninterpolating between $\\Omega_j'$ and $\\Omega$, such that\n\\begin{align*}\\Per( E(x) ) &\\leq \\max \\{\\mathbf{M}(\\partial \\Omega_j' \\setminus U),\n\\mathbf{M}(\\partial \\Omega(x_0) \\setminus U) \\} + \\left(2 + \\frac{2}{10}\\right) \\omega_n r^n\\\\\n& \\leq W - \\left( 
1-\\frac{3}{10} \\right) \\omega_n r^n.\n\\end{align*}\nBy combining the families $\\Omega(x) \\cap \\Omega_j'$\nand $E(x)$ we obtain that $x_0$ is left excessive.\n\\end{proof}\n\t\n\t\n\t\n\t\n\t\n\t\\subsection{Proof of Theorem \\ref{thm:nm+index_bound}} The result follows immediately by combining Theorem \\ref{c:non-excessive_minmax} with Propositions \\ref{p:pairs}, \\ref{p:def_thm}, \\ref{p:def-thm-cancel} and Lemma \\ref{l:mult_bound}. \\qed\n\t\t\n\t\\section{Proof of Theorems \\ref{thm:generic_bound_ricci}, \\ref{thm:generic_bound}, and \\ref{thm:generic_stratum}}\n In this section we prove Theorem \\ref{thm:generic_stratum} (Theorems \\ref{thm:generic_bound_ricci} and \\ref{thm:generic_bound} follow immediately from Theorem \\ref{thm:generic_stratum} when combined with the facts that when $n=8$ all singularities are regular and that the set of bumpy metrics is open and dense \\cite{White:bumpy,White:bumpy2}). Theorem \\ref{thm:generic_stratum} will follow from Theorem \\ref{thm:nm+index_bound} and Proposition \\ref{prop:min-vs-htpy-min}, together with a simple surgery procedure.\n \n \n \\subsection{Surgery procedure} We show here how to regularize minimal hypersurfaces with regular singularities under the assumption that the hypersurface minimizes area in a small ball around each singularity.\n \\begin{proposition}[Perturbing away regular singularities of locally area minimizing surfaces]\\label{p:surgery}\n\tFor $(M^{n+1},g)$ a compact manifold with a $C^{2,\\alpha}$-Riemannian metric and $\\Sigma\\in\\cR$ a minimal hypersurface, recall that $\\mathcal S_0(\\Sigma)\\subset \\sing \\Sigma$ is defined to be the set of singular points with a regular tangent cone. There is $\\tilde g\\in \\Met^{2,\\alpha}(M)$ arbitrarily close to $g$ and $\\tilde \\Sigma$ arbitrarily close in the Hausdorff sense to $\\Sigma$ so that $\\tilde\\Sigma$ is minimal with respect to $\\tilde g$ and $\\mathcal S_0(\\tilde\\Sigma) \\subset \\hnm(\\tilde\\Sigma) = \\hnm(\\Sigma)$. 
\n\\end{proposition} \n\\begin{proof}\nFor every $p \\in \\mathcal S_0(\\Sigma)\\setminus \\hnm(\\Sigma)$, and $\\varepsilon_0=\\varepsilon_0(p)$ so that $\\Sigma \\cap (B_{\\varepsilon_0}(p)\\setminus p)$ is regular, we will show how to perturb $g$ and $\\Sigma$ so that $p$ becomes regular. We will do this by making an arbitrarily small change to $g$, $\\Sigma$ supported in $B_{\\varepsilon_0}(p)$. Because $\\mathcal S_0$ is discrete (but not necessarily closed when $n\\geq 9$) it is easy to enumerate the elements of $\\mathcal S_0(\\Sigma)\\setminus \\hnm(\\Sigma)$ and make a summably small change around each point. As such, it suffices to consider just the perturbation near $p$.\n\nBy definition, taking $\\varepsilon<\\varepsilon_0$ sufficiently small, $\\Sigma \\cap B_\\varepsilon(p)$ is one-sided homotopy area-minimizing. For concreteness write $\\Sigma \\cap B_\\varepsilon(p) = \\partial \\Omega$ in $B_\\varepsilon(p)$ and assume that $\\Omega$ is inner homotopy minimizing. By Lemma \\ref{lem:snm-hnm}, the tangent cone at $p$ is area-minimizing (to the same side). \n\nWe claim that (after taking $\\varepsilon>0$ smaller if necessary) there is a sequence of $\\Sigma_i\\in\\cR(B_\\varepsilon(p))$ with stable regular part, with $\\Sigma_i \\subset \\Omega$, $\\Sigma_i$ disjoint from $\\Sigma$, and $\\Sigma_i\\to \\Sigma$. Indeed, we can apply Proposition \\ref{prop:min-vs-htpy-min} to conclude that either (after shrinking $\\varepsilon>0$), $\\Omega$ is area-minimizing to the inside, or there are $\\Sigma_i$ as asserted.\n\nIn the case that $\\Omega$ is area-minimizing to the inside, we can still construct the $\\Sigma_i$ by shrinking $\\varepsilon>0$ even further so that $\\Omega$ is strictly area-minimizing to the inside and then minimizing area with respect to a boundary\n$\\Sigma \\cap \\partial B_\\varepsilon(p) + \\delta_i$, for a sequence $\\delta_i\\to 0$;\ni.e., the boundary of $\\Sigma\\cap B_\\varepsilon(p)$ pushed slightly into $\\Omega$. 
By the unique minimizing property, the minimizers will converge back to $\\Sigma$ in $B_\\varepsilon(p)$. \n\nFor $i$ sufficiently large we can write the intersection of $\\Sigma_i$\nwith the annulus $A(p,\\varepsilon\/5,\\varepsilon)$ as a graph of a function $u_i$\nover $\\Sigma$.\n\nReasoning as in Hardt--Simon \\cite[Theorem 5.6]{HS} (cf. \\cite[Theorem 3.1]{Liu}), for $i$ sufficiently large, $\\Sigma_i$ will be regular in $B_{\\varepsilon\/2}(p)$. We now set \n\\[\n\\tilde\\Sigma_i =(\\Sigma_i \\cap B_{\\varepsilon\/5}) \\cup (\\Sigma\\setminus B_\\varepsilon(p)) \\cup ((\\Sigma + \\chi u_i)\\cap A(p,\\varepsilon\/5,\\varepsilon))\n\\]\nwhere $\\chi$ is a smooth cutoff function with $\\chi\\equiv1$ on $B_{\\varepsilon\/5}$ and $\\chi\\equiv 0$ outside $B_{3\\varepsilon\/5}$. Note that\n\\[\nH_g(\\tilde\\Sigma_i) \\quad \\text{is supported in $B_{4\\varepsilon\/5}(p) \\setminus B_{\\varepsilon\/5}(p)$}\n\\]\nand $\\Vert H_g(\\tilde \\Sigma_i) \\Vert_{C^{2,\\alpha}} = o(1)$ as \n$i \\to \\infty$.\n\n\tNow define $\\tilde g = e^{2f} g$. In this new metric, since $\\tilde \\Sigma$ is smooth, the mean curvature transforms as\n\t$$\n\tH_{\\tilde g}(\\tilde \\Sigma)= e^{-f}\\left(H_g(\\tilde\\Sigma)+\\frac{\\partial f}{\\partial \\nu} \\right)\\,,\n\t$$\n\twhere $\\nu$ is the normal direction to $\\Sigma$. \n\tSetting $H_{\\tilde g}(\\tilde \\Sigma)=0$, this reduces to the equation\n\t$$\n\tH_g(\\tilde\\Sigma)+\\frac{\\partial f}{\\partial \\nu}=0\n\t$$\n\twhich has the solution $f=-H_g(\\tilde \\Sigma)\\, \\zeta(\\nu)$, for any function $\\zeta(t)$ such that $\\zeta'(0)=1$ and $\\zeta\\equiv0$ for $|t|\\geq \\varepsilon\/100$. 
Since, as observed, $H_g(\\tilde\\Sigma)$ is supported in $A(p,\\varepsilon\/5,4\\varepsilon\/5)$, so is the metric change, and since $\\|u_i\\|_{C^{4,\\alpha}} = o(1)$ and $\\chi$ is smooth, we have\n\t$$\n\t\\|g-\\tilde g\\|_{2,\\alpha}\\leq\\|e^f-1\\|_{C^{2,\\alpha}} \\|g\\|_{2,\\alpha} \\leq C\\, \\|u_i\\|_{C^{4,\\alpha}}\\, \\|g\\|_{2,\\alpha} = o(1)\n\t$$ \n\tas $i\\to \\infty$. This completes the proof. \n\\end{proof}\n\t\n\t\\subsection{Proof of Theorem \\ref{thm:generic_stratum}} For $g \\in \\Met^{2,\\alpha}(M)$, apply Theorem \\ref{thm:nm+index_bound} to find $V\\in\\cR$ with\n\t\\[\n\t\\cH^0(\\hnm(V)) + \\Index(V) \\leq 1.\n\t\\]\n\tWe can apply Proposition \\ref{p:surgery} to $\\Sigma = \\supp V$ to find a metric $\\tilde g$ that is arbitrarily $C^{2,\\alpha}$-close to $g$ and a $\\tilde g$-minimal hypersurface $\\tilde\\Sigma \\in \\cR$ so that $\\mathcal S_0(\\tilde\\Sigma) \\subset \\hnm(\\tilde\\Sigma)$. (Note that if $\\Index(V) = 1$, then $ \\hnm(\\Sigma) =\\emptyset$, so $\\mathcal S_0(\\tilde\\Sigma) = \\emptyset$.) This completes the first part of the proof. \n\t\nWe now consider $g \\in \\Met^{2,\\alpha}_{\\Ric>0}(M)$.\\footnote{The idea is that positive Ricci curvature rules out stable hypersurfaces but this requires the hypersurface to be two-sided. As such, we must consider two cases, depending on whether $\\Sigma$ is one or two-sided.} If $\\Sigma$ is two-sided, then $\\Index(\\Sigma)\\geq1$, so we can argue as above. On the other hand, if $\\Sigma$ is one-sided, then $[\\Sigma]\\neq 0 \\in H_{n}(M,\\mathbb{Z}_{2})$. We can then find $\\hat\\Sigma \\in [\\Sigma]$ by minimizing area in the homology class. The surface $\\hat\\Sigma$ may have singularities, but they are all locally area minimizing. Thus, we can apply Proposition \\ref{p:surgery} to $\\hat\\Sigma$, yielding $\\tilde\\Sigma$ and $\\tilde g$ with $\\mathcal S_0(\\tilde\\Sigma) = \\emptyset$. 
\n\\qed\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfoqm b/data_all_eng_slimpj/shuffled/split2/finalzzfoqm new file mode 100644 index 0000000000000000000000000000000000000000..24a1e9607cc6f283a0390d6603856bd43051418e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfoqm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\n\n\nIn the series of articles \\cite{FKS, FJK1, FJK2}, degenerate elliptic equations in divergence form with real symmetric coefficients are studied. There, the degeneracy is given in terms of an $A_{2}$ weight or a power of the jacobian of a quasi-conformal map. The first article gives interior estimates, the second article deals with the Wiener test and the third one studies boundary behavior and harmonic measure.\n Further work along these lines, for example \\cite{FS, BM}, has been done. However, the fundamental $L_p$ Dirichlet and Neumann\nproblems in the degenerate setting seem to have received little attention.\nHere, we want to initiate such a study of boundary value problems, with $L_2$ boundary data, for a large class of weights, in the case of domains which are Lipschitz\ndiffeomorphic to the upper half space ${\\mathbb R}^{1+n}_+ := \\sett{(t,x)\\in{\\mathbb R}\\times {\\mathbb R}^n}{t>0}$, $n\\ge 1$. Thus, our work includes the case of special Lipschitz domains.\nAnother difference from earlier work that we want to stress is that we consider general elliptic divergence form systems, and not only scalar equations, as this is the natural setting for the\nmethods used. 
In this generality, interior pointwise regularity estimates may fail, even\nin the uniformly elliptic case.\nHowever, we emphasize that the methods used in this paper do not require such pointwise estimates.\n\n\nWe consider\ndivergence form second order, real and complex, \\textbf{degenerate} elliptic systems\n\\begin{equation} \\label{eq:divform}\n \\sum_{i,j=0}^n\\sum_{\\beta= 1}^m \\partial_i\\Big( A_{i,j}^{\\alpha, \\beta}(t,x) \\partial_j u^{\\beta}(t,x)\\Big) =0,\\ \\alpha=1,\\ldots, m\n\\end{equation}\nin ${\\mathbb R}^{1+n}_+$,\nwhere $\\partial_0= \\tdd{}{t}$ and $\\partial_i= \\tdd{}{x_i}$, $1\\le i\\le n$, which we abbreviate as\n${\\text{{\\rm div}}} A \\nabla u=0$, where\n\\begin{equation} \\label{eq:boundedmatrix}\n A=(A_{i,j}^{\\alpha,\\beta}(t, x))_{i,j=0,\\ldots,n}^{\\alpha,\\beta= 1,\\ldots,m}. \\end{equation}\nWe assume $A$ to be degenerate in the sense that for some $w\\in A_{2}({\\mathbb R}^n)$ and $C<\\infty$,\n \\begin{equation} \\label{eq:bounded}\n |A(t,x)| \\le Cw(x), \\ \\mathrm{for \\ a.e.}\\ (t,x)\\in {\\mathbb R}^{1+n}_{+}\n \\end{equation}\nand degenerate elliptic in the sense that $w^{-1}A$ is accretive on a space ${\\mathcal H}^0$ that we define below. This ellipticity condition means that\nthere exists $\\kappa>0$ such that\n\\begin{equation} \\label{eq:accrassumption}\n \\re \\int_{{\\mathbb R}^n} (Af(x),f(x)) dx\\ge \\kappa\n \\sum_{i=0}^n\\sum_{\\alpha=1}^m \\int_{{\\mathbb R}^n} |f_i^\\alpha(x)|^2 w(x)\\, dx,\n\\end{equation}\nfor all $f\\in {\\mathcal H}^0$ and a.e. $t>0$. We have set $$(A\\xi,\\xi)= \\sum_{i,j=0}^n\\sum_{\\alpha,\\beta=1}^m A_{i,j}^{\\alpha,\\beta}(t,x)\\xi_j^\\beta\\, \\conj{\\xi_i^\\alpha}.\n$$\nThe space ${\\mathcal H}^0$ is the closed subspace of $L^2({\\mathbb R}^n,w;{\\mathbb C}^{m(1+n)})$ consisting of those functions with ${\\text{{\\rm curl}}}_{x}(f_{i}^\\alpha)_{i=1,\\ldots,n}=0$ for all $\\alpha$. 
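A minimal example may help fix ideas (this illustration is ours and is not used in the sequel): the choice $A(t,x)=w(x)\,I$ satisfies both assumptions with $C=\kappa=1$.

```latex
% Example (illustration): A(t,x) = w(x) I with w \in A_2(R^n).
% Boundedness \eqref{eq:bounded} holds with C = 1 since |A(t,x)| = w(x), and
\[
\re \int_{{\mathbb R}^n} (Af(x),f(x))\, dx
  = \sum_{i=0}^n\sum_{\alpha=1}^m \int_{{\mathbb R}^n} |f_i^\alpha(x)|^2\, w(x)\, dx ,
\]
% so \eqref{eq:accrassumption} holds with \kappa = 1, in fact for all
% f \in L^2(R^n, w; C^{m(1+n)}) and not only for f \in H^0.
% The system \eqref{eq:divform} then decouples into m copies of
% div(w \nabla u) = 0, the model degenerate equation studied in \cite{FKS}.
```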
The case of equations is when $m=1$ or, equivalently, when $A_{i,j}^{\\alpha,\\beta}=A_{i,j}\\delta _{\\alpha,\\beta}$. In this case, the accretivity condition becomes the usual pointwise accretivity\n\\begin{equation} \\label{eq:poinwiseaccr}\n \\re \\sum_{i,j=0}^n A_{i,j}\\xi_j \\conj{\\xi_i} \\ge\n \\kappa \\sum_{i=0}^n |\\xi_i|^2 w(x),\n\\end{equation}\nfor all $\\xi\\in{\\mathbb C}^{1+n}$ and a.e. $(t,x)\\in {\\mathbb R}^{1+n}_{+}$.\nObserve that the function $(t,x)\\mapsto w(x)$ is an $A_{2}$ weight in ${\\mathbb R}^{1+n}$ if $w$ is an $A_{2}$ weight in ${\\mathbb R}^n$. So, the degeneracy is a special case of that considered in the works mentioned above. However, for the boundary value problems we wish to consider, this seems a natural class. To our knowledge, this has not been considered before.\n\n A natural question is whether weights could depend on both variables or only on the $t$-variable. Already in the non-degenerate case there are regularity conditions without which the Dirichlet problem is ill-posed. As for the degenerate case, the well known example from \\cite{CaS} when $A= t^{1-2s}I$, $0<s<1$, is a case in point; we do not pursue this direction here.\n\nLet us mention two natural extensions of our results. Firstly, they can be transferred to special Lipschitz domains $\\Omega=\\{(x,t)\\, ; \\, t>\\varphi(x)\\}$ when the corresponding datum is in $L^2(\\partial\\Omega; \\tilde wd\\sigma)$ where $\\sigma$ is the surface measure and $\\tilde w(\\varphi(x)):=w(x)$. Applying the standard pullback $(x,t) \\mapsto (y, t-\\varphi(y))=(y,s)$, we obtain an equation of the form $ {\\text{{\\rm div}}}_{y,s} (A(y)\\nabla_{y,s}v)(y,s)=0$ that is degenerate elliptic.\nSecondly, it is natural to expect results on bounded domains. Bilipschitz invariance implies that one can look at the case of the unit ball as the non-smoothness is carried by the coefficients. There, the setup of \\cite{AA2} applies to radially independent weights and degenerate coefficients, and perturbations of the latter. It will be clear from the present article that it all depends on the weighted quadratic estimate on the boundary. 
This requires a proof that is left to further work.\n\n\n\n\nThe first author was partially supported by the ANR project ``Harmonic analysis at its boundaries'' ANR-12-BS01-0013-01. The second one was supported by the Grant 621-2011-3744 from the Swedish Research Council, VR. The first author wants to thank Yannick Sire for bringing his attention to this problem.\n\n\n\\section{Preliminaries on weights}\n\n\\subsection{Muckenhoupt weights}\n\nRecall that for a weight $w$ on ${\\mathbb R}^n$ and $p>1$, the $A_{p}$ condition reads\n$$\n\\bigg(\\barint_{\\hspace{-6pt}Q} w\\, dx\\bigg)\\,\n\\bigg(\\barint_{\\hspace{-6pt}Q} w^{1-p'} \\, dx\\bigg)^{p\/p'}\\le C,\n$$\n for all cubes $Q$ with $p'$ the conjugate exponent to $p$. The smallest possible $C$ is denoted by $[w]_{A_{p}}$.\nThe notation $\\barint_{\\hspace{-2pt}E}$ means the average with respect to the indicated measure on $E$.\n\nWe identify $w$ with the measure $dw=w(x)\\, dx$ and write $w(E)$ for $\\int_{E} dw$ while $|E|= \\int_{E} dx$.\nRecall that $w\\in A_{p}$ implies $w\\in A_{q}$ for all $q>p-\\varepsilon$ where $\\varepsilon>0$ depends on $[w]_{A_{p}}.$ Every $w\\in A_{p}$ is an $A_{\\infty}$ weight: there exist constants $0< \\sigma \\le 1 \\le \\tau<\\infty$ such that\n\\begin{equation}\n\\label{eq:ainfty}\n\\left(\\frac {|E|}{|Q|}\\right)^\\tau \\lesssim \\frac{w(E)}{w(Q)} \\lesssim \\left(\\frac {|E|}{|Q|}\\right)^\\sigma\n\\end{equation}\nfor all cubes $Q$ and measurable subsets $E$ of $Q$ (actually, $\\tau=p$ if $w\\in A_{p}$ and $(1\/\\sigma)'$ is the reverse H\\\"older exponent of $w$, which can be arbitrary in $(1,\\infty]$). In particular, $dw$ is a doubling measure. 
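As a concrete sanity check of the $A_{2}$ condition (our illustration, not part of the argument), consider the power weight $w(x)=|x|^{s}$ on the real line, which lies in $A_{2}$ exactly when $-1<s<1$. On intervals centered at the origin the product of the two averages equals $1/((1+s)(1-s))$ for every radius, and off-center intervals give smaller values. A short script approximating the averages by the midpoint rule:

```python
import numpy as np

def a2_product(w, a, b, n=100_000):
    """Midpoint-rule approximation of (average of w) * (average of 1/w) on (a, b)."""
    h = (b - a) / n
    x = a + h / 2 + h * np.arange(n)  # midpoints; never lands on x = 0 for even n
    return np.mean(w(x)) * np.mean(1.0 / w(x))

s = 0.5
w = lambda x: np.abs(x) ** s  # A_2 weight on the line since -1 < s < 1

# Centered intervals: the product is 1/((1+s)(1-s)) = 4/3, whatever the radius.
for r in (0.1, 1.0, 10.0):
    print(a2_product(w, -r, r))

# An off-center interval gives a smaller product.
print(a2_product(w, 1.0, 2.0))
```

Running the same script with $s=1$ makes the second average diverge (the numerical product grows without bound as $n$ increases), consistent with $|x|^{s}\notin A_{2}$ for $s\ge 1$.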
There also exists a constant $c_0 > 0$ such that\n\\begin{equation} \\label{reversejensen}\n\\barint_{\\hspace{-6pt}Q} \\ln w(x) dx \\leq \\ln\\bigg(\\barint_{\\hspace{-6pt}Q} w(x) dx\\bigg) \\leq \\barint_{\\hspace{-6pt}Q}\\ln w(x) dx + c_0,\n\\end{equation}\nthe first inequality being Jensen's inequality and the second being a reverse form of it.\n\nDenote $ L^p(w; {\\mathbb C}^d)=L^p({\\mathbb R}^n, w; {\\mathbb C}^d)$ for $d\\ge 1$, and $L^p(w)=L^p(w;{\\mathbb C})$.\nIf $w\\in A_{p}$,\n\\begin{equation}\n\\label{eq:dxdwAp}\n\\barint_{\\hspace{-6pt}Q} \\left|f(y)\\right|\\, dy \\le [w]_{A_{p}}^{1\/p} \\left( \\barint_{\\hspace{-6pt}Q} |f(y)|^p\\, dw(y)\\right)^{1\/p}.\n\\end{equation}\nIn particular, for $w\\in A_{2}$, since $w\\in A_{2\/p}$ for some $p>1$, this implies Muckenhoupt's theorem: the Hardy-Littlewood maximal operator $M$ with respect to $dx$ is bounded on $L^2(w)$.\n\n\\subsection{A corona decomposition for $A_{2}$ weights} \\label{coronasec}\n\n\n\n\n\nWe use the following dyadic decomposition of ${\\mathbb R}^n$. Let\n$\\triangle= \\bigcup_{j=-\\infty}^\\infty\\triangle_{2^j}$ where\n$\\triangle_{2^j}:=\\{ 2^j(k+(0,1]^n) :k\\in{\\mathbb Z}^n \\}$. For a dyadic cube $Q\\in\\triangle_{2^j}$, denote by $\\ell(Q)=2^j$\nits \\emph{sidelength} and by $|Q|= 2^{nj}$ its {\\em Lebesgue volume}. 
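For the reader's convenience (this verification is ours), here is the one-line proof of \eqref{eq:dxdwAp} in the case $p=2$: split $|f| = |f|\,w^{1/2}\cdot w^{-1/2}$ and apply Cauchy--Schwarz,

```latex
\[
\barint_{\hspace{-6pt}Q} |f(y)|\, dy
  \le \frac{1}{|Q|}\Big(\int_{Q} |f|^2\, dw\Big)^{1/2}\Big(\int_{Q} w^{-1}\, dy\Big)^{1/2}
  = \Big(\frac{w(Q)}{|Q|}\,\barint_{\hspace{-6pt}Q} w^{-1}\, dy\Big)^{1/2}
    \Big(\barint_{\hspace{-6pt}Q} |f|^2\, dw\Big)^{1/2},
\]
% and the first factor equals (avg_Q w * avg_Q w^{-1})^{1/2} <= [w]_{A_2}^{1/2}
% by the A_2 condition; the general case uses Holder with exponents p, p'.
```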
We set $\\triangle_t=\\triangle_{2^j}$ if $2^{j-1}<t\\le 2^{j}$. In this section, we use the notation $f_{Q}= (f)_{Q}:= \\barint_{\\hspace{-2pt}Q} f\\, dx$.\n\nGiven a cube $Q\\in\\triangle$ and a parameter $\\sigma_w > 0$, to be chosen later, we consider $B^w(Q)$ the collection of those (``bad for $\\ln w$'') maximal sub-cubes $R$ of $Q$ for which\n\\begin{equation*} \\label{st1}\n|(\\ln w)_R - (\\ln w)_Q| > \\sigma_w.\n\\end{equation*}\nWe can then define $B^w_j(Q)$ inductively as $B^w_{1}(Q)=B^w(Q)$ and for $j = 2,3,\\dots$ by\n\\[\nB^w_j(Q) = \\bigcup_{R \\in B^w_{j-1}(Q)} B^w(R), \\quad \\mbox{and set} \\quad B^w_*(Q) = \\bigcup_{j=1}^\\infty B^w_j(Q).\n\\]\nThe following proposition shows that the ``number'' of cubes on which the average of $\\ln w$ differs too much from the one on some ancestor can be controlled.\n\n\\begin{prop} \\label{prop4}\nFor a cube $R \\in \\triangle$, if $R\\subseteq Q$ is not contained in a cube of $B^w(Q)$ then $|(\\ln w)_R - (\\ln w)_Q| \\leq \\sigma_w$. We also have that\n\\begin{equation} \\label{eqsq}\n\\sum_{R \\in B^w_*(Q)} w(R) \\leq \\frac{C}{\\sigma_w^2} w(Q)\n\\end{equation}\nfor some $C < \\infty$ which depends only on $[w]_{A_2}$.\n\\end{prop}\n\n\\begin{proof}\nThe first statement of the Proposition is clear from the definition of $B^w(Q)$, so we only need to prove \\eqref{eqsq}.\n\n\n\nFor each $R \\in B^w_*(Q)$, there is a unique $R'\\in B^w_*(Q) \\cup \\{Q\\}$ such that $R \\in B^w_1(R')$, that is, $R'$ is the stopping-time parent of $R$. Thus\n\\[\nw(R) \\leq \\int_{R} \\frac{|(\\ln w)_R - (\\ln w)_{R'}|^2}{\\sigma_w^2}\\, dw\n\\]\nand so\n\\begin{align*}\n\\sum_{R \\in B^w_*(Q)} w(R) & \\lesssim \\frac{1}{\\sigma_w^2} \\sum_{R \\in B^w_*(Q)} \\int_{R} |(\\ln w)_R - (\\ln w)_{R'}|^2\\, dw \\\\\n& = \\frac{1}{\\sigma_w^2} \\int_{Q} \\bigg( \\, \\sum_{R \\in B^w_*(Q)} 1_{R} |(\\ln w)_R - (\\ln w)_{R'}|^2 \\bigg)\\, dw,\n\\end{align*}\nwhere $1_R$ is the characteristic function of the set $R$. 
For any $b\\in L^1_{loc}({\\mathbb R}^n, dx)$ consider\n the square function\n\\[\nS(b)(x) := \\bigg(\\, \\sum_{R \\in B^w_*(Q)} 1_{R}(x) |b_R - b_{R'}|^2 \\bigg)^{1\/2}.\n\\]\nFor any $w \\in A_\\infty$ we have (see Lemma 6.4 in \\cite{Gar} for the unweighted estimate and \\cite{HR2} for the weighted one) the estimate\n\\[\n\\int_{{\\mathbb R}^n} |S(b)|^2 dw \\lesssim \\int_{{\\mathbb R}^n} |M(b)|^2 dw,\n\\]\nwhere $M$ is the Hardy-Littlewood maximal operator with respect to $dx$. Applying this to $b(x) := (\\ln w(x) - (\\ln w)_Q)1_Q(x)$ and using Muckenhoupt's theorem, it suffices to show that\n\\begin{equation} \\label{eq:rem}\n\\int_Q |\\ln w - (\\ln w)_Q|^2\\, dw \\lesssim w(Q).\n\\end{equation}\n\n This inequality follows from the fact that $\\ln w \\in BMO({\\mathbb R}^n, dx)$ when $w\\in A_{2}$ and the fact that $ BMO({\\mathbb R}^n, dx)= BMO({\\mathbb R}^n, dw)$ with equivalent norms for $w\\in A_{\\infty}$. The latter statement is a consequence of the John-Nirenberg inequality and reverse H\\\"older inequality for $w$. However, we present a direct proof. 
It follows from Jensen's inequality and the $A_2$ condition on $w$ that\n\\[\n\\int_Q e^{|\\ln w(x) - (\\ln w)_Q|}\\, dx \\lesssim |Q|.\n\\]\n Setting $M_\\lambda = \\{x \\in Q \\, ; \\, |\\ln w(x) - (\\ln w)_Q| > \\lambda \\}$, we can write\n\\[\n\\int_{0 }^\\infty e^\\lambda |M_\\lambda| d\\lambda= \\int_Q \\int_{ 0 }^{|\\ln w(x) - (\\ln w)_Q|} e^{\\lambda} d\\lambda dx \\le \\int_Q e^{|\\ln w(x) - (\\ln w)_Q|} \\, dx ,\n\\]\nso\n\\[\n\\int_{0 }^\\infty e^\\lambda |M_\\lambda| d\\lambda \\lesssim |Q|.\n\\]\nSimilarly,\n\\[\n\\int_Q |\\ln w - (\\ln w)_Q|^2\\, dw = \\int_{0}^\\infty 2\\lambda w(M_\\lambda) d\\lambda.\n\\]\nCondition \\eqref{eq:ainfty}, in which we take $\\sigma<1$ as one can always do, ensures that\n\\begin{align*}\n\\int_{0}^\\infty \\lambda w(M_\\lambda) d\\lambda & \\lesssim \\int_{0}^\\infty \\lambda w(Q)\\left(\\frac{|M_\\lambda|}{|Q|}\\right)^\\sigma d\\lambda \\\\\n& = w(Q) \\int_{0}^\\infty \\left(\\frac{|M_\\lambda|}{|Q|}e^\\lambda \\right)^\\sigma (\\lambda e^{-\\lambda\\sigma})d\\lambda \\\\\n& \\lesssim w(Q) \\left(\\int_{0}^\\infty \\frac{|M_\\lambda|}{|Q|}e^\\lambda d\\lambda \\right)^\\sigma \\left(\\int_{0}^\\infty (\\lambda e^{-\\lambda\\sigma})^{1\/(1-\\sigma)}d\\lambda \\right)^{1-\\sigma} \\\\\n& \\lesssim w(Q),\n\\end{align*}\nwhich proves \\eqref{eq:rem} and completes the proof of the Proposition.\n\\end{proof}\n\n\\subsection{Review of weighted Littlewood-Paley inequalities}\n\nWe recall here a few facts of Littlewood-Paley theory. The treatment in Wilson's book used in \\cite{CR} is not completely adapted to our needs. In particular, we use Calder\\'on's reproducing formula in a different way. Here $w$ denotes a weight in $A_{2}$. We begin with approximation issues.\n\n\n\\begin{lem}\\label{lem:approx} Let $\\varphi\\in L^1({\\mathbb R}^n)$ with a radially decreasing integrable majorant $\\phi$. 
Let $\\varphi_{\\varepsilon}(x) =\\varepsilon^{-n} \\varphi(x\/\\varepsilon)$ for $\\varepsilon>0$.\n\\begin{itemize}\n \\item Convolution with $\\varphi_{\\varepsilon}$ is a bounded operator on $L^2(w)$, uniformly with respect to $\\varepsilon$.\n \\item For every $f\\in L^2(w)$, $\\varphi_{\\varepsilon}\\star f\\to cf$ and $ \\varphi_{1\/\\varepsilon}\\star f \\to 0$ in $L^2(w)$ as $\\varepsilon\\to 0$, where $c=\\int_{{\\mathbb R}^n}\\varphi(x)\\, dx$.\n\n\\end{itemize}\n\n\n\\end{lem}\n\n\n\\begin{proof} For the first point, we observe that $\\sup_{\\varepsilon>0}|\\varphi_{\\varepsilon}\\star f| \\le \\|\\phi\\|_{1} Mf$ almost everywhere (see \\cite[Corollary 2.1.12]{Gra}) and recall that $M$ is bounded on $L^2(w)$.\nFor the second point,\n it is easy to see that $L^\\infty_{c}({\\mathbb R}^n)$, the set of bounded functions with bounded support, is dense in $L^2(w)$. Thus, it is enough to assume $f\\in L^\\infty_{c}({\\mathbb R}^n)$. Then $\\varphi_{\\varepsilon}\\star f- cf$ and $\\varphi_{1\/\\varepsilon}\\star f$ converge almost everywhere (for $dx$, thus for $dw$) to 0 as $\\varepsilon\\to 0$ (see \\cite[Corollary 2.1.19]{Gra}). The conclusion follows using the dominated convergence theorem in $L^2(w)$ as $Mf\\in L^2(w)$.\n\\end{proof}\n\n\n\n\n\n\\begin{cor}\\label{lem:density} $C_{0}^\\infty({\\mathbb R}^n)$ is dense in $L^2(w)$. Also the space ${\\mathcal E}$ of\nfunctions of the form $\\int_{\\varepsilon}^R \\psi_{t}\\star f\\, \\frac{dt}t$, where $f\\in C^\\infty_{0}({\\mathbb R}^n)$, $\\psi_{t}(x)=t^{-n}\\psi(x\/t)$ with $\\psi\\in S({\\mathbb R}^n)$ whose Fourier transform is supported away from 0 and $\\infty$, $\\varepsilon>0, R<\\infty$ and $\\int_{0}^\\infty \\hat \\psi(t\\xi) \\, \\frac {dt} t=1$ for all $\\xi\\ne 0$, forms a dense subspace of $L^2(w)$. 
[Here, $\\hat g$ is the Fourier transform of $g$ in ${\\mathbb R}^n$.]\n\\end{cor}\n\n\\begin{proof} As $L^\\infty_{c}({\\mathbb R}^n)$ is dense in $L^2(w)$,\nchoosing $\\varphi\\in C^\\infty_{0}({\\mathbb R}^n)$ in the previous lemma proves the first density. Next, with $\\psi$ as in the statement, it is easy to see that $\\hat \\varphi(\\xi): = \\int_{1}^\\infty \\hat\\psi(t\\xi)\\, \\, \\frac {dt} t= 1- \\int_{0}^1 \\hat\\psi(t\\xi) \\, \\frac {dt} t$ for $\\xi\\ne 0$ and $\\hat \\varphi(0)=1$ is a Schwartz class function and that $\\int_{\\varepsilon}^{R} \\psi_{t}\\star f\\, \\frac{dt}t= \\varphi_{\\varepsilon}\\star f - \\varphi_{R}\\star f$. So the lemma follows.\n\\end{proof}\n\n\\begin{cor}\\label{cor:Calderon} Assume $\\psi$ and $\\varphi$ are integrable functions, that $\\varphi$ is as in Lemma \\ref{lem:approx} with $\\int \\varphi =1$ and that $\\int_{\\varepsilon}^{R} \\hat\\psi(t\\xi)\\, \\frac{dt}t= \\hat\\varphi(\\varepsilon\\xi)- \\hat\\varphi({R}\\xi)$ for all $\\xi\\in {\\mathbb R}^n$ and $0<\\varepsilon<R<\\infty$. Then, for all $f\\in L^2(w)$, $\\int_{\\varepsilon}^{R} \\psi_{t}\\star f\\, \\frac{dt}t \\to f$ in $L^2(w)$ as $\\varepsilon\\to 0$ and $R\\to\\infty$.\n\\end{cor}\n\n\\begin{lem} Assume that $(\\psi_{t})_{t>0}$ is an $ L^2({{\\mathbb R}_+}, \\frac{dt}t)$-valued Calder\\'on-Zygmund kernel and set $Q_{t}f= \\psi_{t}\\star f$. Then for all $f\\in L^2(w)$,\n$$\\int_{0}^\\infty \\|Q_{t}f\\|^2_{L^2(w)}\\, \\frac{dt}t \\le C_{w,\\psi}\\|f\\|^2_{L^2(w)}.$$\n\\end{lem}\n\n\\begin{proof}\nNote that $T: f\\mapsto (Q_{t}f)_{t>0}$ is a Calder\\'on-Zygmund operator, bounded from $L^2({\\mathbb R}^n)$ to $L^2({\\mathbb R}^n, L^2({{\\mathbb R}_+}, \\frac{dt}t))$. 
Thus weighted $L^2$ theory for $A_{2}$ weights applies (see \\cite{GCRF}, Chapter V).\n\\end{proof}\n\n\n\n\nTo conclude this section, we recall a trick proved by Cruz-Uribe and Rios \\cite{CR} originating from the proof of \\cite[Corollary 4.2]{DR}.\n\n\\begin{lem}\\label{lem:DRF} If $T$ is a linear operator that is bounded on $L^2(w)$ for any $w\\in A_{2}$ with norm depending only on $[w]_{A_{2}}$, then for any fixed $w\\in A_{2}$, there exist $\\theta\\in (0,1)$ and $C>0$ such that\n$\\|T\\|_{{\\mathcal L}(L^2(w))}\\le C\\|T\\|_{{\\mathcal L}(L^2(dx))}^\\theta$.\n\\end{lem}\n\n\n\n\\subsection{Weighted Sobolev spaces and the gradient operator}\n\nFor $1<p<\\infty$ and $w\\in A_{p}$, one can define weighted Sobolev spaces through the gradient operator $\\nabla$, viewed as an unbounded operator on $L^2(w)$ with adjoint $-{\\text{{\\rm div}}}_{w}$; the properties needed here are gathered in Lemma \\ref{lem:gradient}.\n\n\\subsection{Operator theoretic preliminaries}\n\nLet ${\\mathcal H}$ be a Hilbert space, let $D$ be a self-adjoint operator on ${\\mathcal H}$ and let $B$ be a bounded operator on ${\\mathcal H}$ which is accretive on $\\clos{\\textsf{R}(D)}$, in the sense that there exists $\\kappa>0$ such that\n\\begin{equation}\n\\label{accretive}\n\\re (Bv,v) \\ge\\kappa \\|v\\|^2, \\ \\forall v\\in \\clos{\\textsf{R}(D)}.\n\\end{equation}\nIn this case, let\n$$\n \\mu(B):= \\sup_{v\\in \\textsf{R}(D), v\\not = 0} |\\arg(Bv,v)| <\\pi\/2\n$$\ndenote the {\\em angle of accretivity} of $B$ on $\\textsf{R}(D)$.\nNote that $B$ may not be invertible on ${\\mathcal H}$. Still, for $X$ a subspace of ${\\mathcal H}$, we set\n$B^{-1}X= \\{u \\in {\\mathcal H}\\, ; \\, Bu \\in X\\}$.\n\n\\begin{prop} \\label{prop:typeomega} With the above assumptions, we have the following facts.\n\\begin{itemize}\n\\item[{\\rm (i)}]\nThe operator $DB$, with domain $B^{-1}\\textsf{D}(D)$, is $\\mu(B)$-bisectorial, i.e. 
$\\sigma(DB)\\subseteq S_{\\mu(B)}$ and there are resolvent bounds\n$\\|(\\lambda I - DB)^{-1}\\| \\lesssim 1\/ \\text{{\\rm dist}}\\,(\\lambda, S_\\mu)$ when $\\lambda\\notin S_\\mu$, $\\mu(B) <\\mu<\\pi\/2$.\n\\item[{\\rm (ii)}]\nThe operator $DB$ has range $\\textsf{R}(DB)=\\textsf{R}(D)$ and null space $\\textsf{N}(DB)=B^{-1}\\textsf{N}(D)$ such that topologically (but not necessarily orthogonally) one has\n$$\n{\\mathcal H} = \\clos{\\textsf{R}(DB)} \\oplus \\textsf{N}(DB).\n$$\n\\item[{\\rm (iii)}]\nThe restriction of $DB$ to $\\clos{\\textsf{R}(D)}$\nis a closed, injective operator with dense range in\n$\\clos{\\textsf{R}(D)}$. Moreover, the same statements on spectrum and resolvents as in (i) hold.\n\n\\item[{\\rm (iv)}] Statements similar to (i) and (ii) hold for $BD$ with $\\textsf{D}(BD)=\\textsf{D}(D)$, defined as the adjoint of $DB^*$ or equivalently by $BD= B(DB)B^{-1}$ on $\\textsf{R}(BD)\\cap \\textsf{D}(D)$, with $\\textsf{R}(BD):=B\\textsf{R}(D)$ and $BD=0$ on the null space $\\textsf{N}(BD):=\\textsf{N}(D)$.\n\\end{itemize}\n\\end{prop}\n\nFor a proof, see \\cite{ADMc}. Note that the accretivity is only needed on $\\textsf{R}(D)$.\nFor $t\\in {\\mathbb R}$, we set\n\\begin{align*}R_t^B&=(I+itDB)^{-1},\\\\\nQ_t^B&= \\frac{1}{2i} (R_{-t}^B -R_{t}^B)= t DB(I+t^{2}DBDB)^{-1},\\\\\nP_{t}^B&= \\frac{1}{2} (R_{-t}^B +R_{t}^B)= (I+t^{2} DBDB)^{-1}.\n\\end{align*} It follows from the previous result that $R_t^B$, $P_t^B$ and $Q_{t}^B$\nare uniformly bounded operators on ${\\mathcal H}$.\n\n\n\n\n\\subsection{The main theorem}\\label{sec:main}\n\n\n\n\nLet us specify in this section the operator $D$ we consider throughout. 
In ${\\mathbb C}^{m(n+1)}={\\mathbb C}^m\\oplus {\\mathbb C}^{mn}= {\\mathbb C}^m \\oplus ({\\mathbb C}^m\\otimes {\\mathbb C}^n) $, we use the notation $v=\\begin{bmatrix}\n v_{{\\scriptscriptstyle\\perp}} \\\\\n v_{{\\scriptscriptstyle \\parallel}}\n \\end{bmatrix}$, with $v_{{\\scriptscriptstyle\\perp}}\\in {\\mathbb C}^m$ and $v_{{\\scriptscriptstyle \\parallel}}\\in {\\mathbb C}^{mn}={\\mathbb C}^m\\otimes {\\mathbb C}^n$, which we call respectively the normal (or scalar) and tangential parts of $v$. To simplify the exposition, we carry out the detailed proof only in the ``scalar'' case $m=1$, as it carries over to the vector-valued case $m>1$ without change. See Section \\ref{sec:vv} for the notation.\n\n\n\n\\begin{prop}\\label{prop:D} Let $w\\in A_{2}({\\mathbb R}^n)$ and set ${\\mathcal H}=L^2(w;{\\mathbb C}^{n+1})$ with norm $\\|f \\|= (\\int_{{\\mathbb R}^n} |f|^2\\, dw)^{1\/2}$.\n\\begin{enumerate}\n \\item The operator $D:=\n \\begin{bmatrix}\n 0 & {\\text{{\\rm div}}}_{w} \\\\\n - \\nabla & 0\n \\end{bmatrix}$ with domain $ \\begin{bmatrix}\n \\textsf{D}(\\nabla) \\\\\n \\textsf{D}({\\text{{\\rm div}}}_{w})\\end{bmatrix}$ is self-adjoint.\n \\item $\\begin{bmatrix}\n C^\\infty_{0}({\\mathbb R}^n) \\\\\n \\frac 1 w C^\\infty_{0}({\\mathbb R}^n; {\\mathbb C}^n)\n \\end{bmatrix}$ is a dense subspace of $\\textsf{D} (D)$.\n \\item $ \\clos{\\textsf{R}(D)} = \\begin{bmatrix}\n L^2(w) \\\\\n {\\mathcal R}_{w}(L^2(w))\n \\end{bmatrix}.$\n \\item $ \\clos{\\textsf{R}(D)} = \\begin{bmatrix}\n L^2(w) \\\\\n {\\mathcal R}(L^2(w))\n \\end{bmatrix}=\\{g\\in L^2({\\mathbb R}^n,w;{\\mathbb C}^{1+n}); {\\text{{\\rm curl}}}_x(g_{\\scriptscriptstyle \\parallel})=0\\}$, where the closure is taken in ${\\mathcal H}$ and\n $\\clos{\\textsf{R}(D)} \\cap\n C^\\infty_{0}({\\mathbb R}^n; {\\mathbb C}^{n+1})$ is dense in $\\clos{\\textsf{R}(D)}$.\n \\end{enumerate}\n \\end{prop}\n\n The proof is a direct consequence of Lemma \\ref{lem:gradient}. 
We omit the details.\n\n\\bigskip\n\n Let us specify the required assumption on $B$: this is the operator of multiplication by an\n$(n+1)\\times(n+1)$ matrix $B(x)$ which has bounded entries and is accretive on $\\clos{\\textsf{R}(D)}$. Associated to $B$ are the constants $\\|B\\|_{\\infty}$ and (the best) $\\kappa>0$ in \\eqref{accretive}.\n\n\n In this section, $\\|f\\|$ systematically designates the weighted $L^2(w; {\\mathbb C}^d)$ norm with $d=1,n$ or $n+1$ depending on the context.\n\n\n\n\n\n\n\n\n\\begin{thm}\\label{th:main} With the preceding assumptions, one has\n\\begin{equation} \\label{eq:sfBD}\n \\int_0^\\infty\\| Q_{t}^B v \\|^2 \\, \\frac{dt}t \\lesssim \\|v\\|^2, \\qquad \\text{for all }\\ v \\in {\\mathcal H}.\n\\end{equation}\n \\end{thm}\n\n\n\n Theorem \\ref{th:main} differs from previous results in the field. First, $D$, while still a first order differential operator, no longer has constant coefficients. Secondly, the coercivity assumption\n $\\|\\nabla u\\| \\lesssim \\|Du\\|$ for $u\\in \\textsf{R}(D) \\cap \\textsf{D}(D)$, which is of utmost importance in other proofs, cannot be true here; thus we cannot apply the metric measure space (here ${\\mathbb R}^n$ with Euclidean distance and measure $dw$) generalisation made by Bandara \\cite{Ban} of the Euclidean results in \\cite{AKMc} or \\cite{elAAM}.\n\n Note that\n the block-matrix case $B=\\begin{bmatrix}\n 1 & 0 \\\\\n 0 & d\n\\end{bmatrix}$ in the splitting ${\\mathbb C}^{n+1}={\\mathbb C} \\oplus {\\mathbb C}^n$ gives a proof of the Kato conjecture for degenerate operators, first proved by Cruz-Uribe and Rios \\cite{CR}. See Section \\ref{sec:consequences}.\n\n\n\nOur strategy to prove Theorem \\ref{th:main} is to follow the line of argument in \\cite{elAAM} but with some necessary twists. In particular, we will use different spectral decompositions on the scalar and tangential parts. 
Also the stopping-time argument requires the use of the Corona decomposition for $w$ (Proposition \\ref{prop4}).\n\n\n\n\\subsection{Reduction to a Carleson measure estimate}\\label{sec:reduction}\n\n\\begin{lem} [Off-diagonal decay] \\label{lem:odd}\n For every integer $N$ there exists $C_N>0$\nsuch that\n\\begin{equation} \\label{odn}\n\\|1_{E}\\,R_{t}^B u\\|+ \\|1_{E}\\,Q_{t}^B u\\| \\le C_N \\brac{\\text{{\\rm dist}}\\, (E,F)\/|t|}^{-N}\\|u\\|\n\\end{equation}\nfor all $t\\ne 0$,\nwhenever $E,F \\subseteq {\\mathbb R}^n$ are closed sets, $u \\in {\\mathcal H}$\nis such that $\\text{{\\rm supp}}\\, u\\subseteq F$. We have set $\\brac x:=1+|x|$ and\n$\\text{{\\rm dist}}\\,(E,F) :=\\inf\\{|x-y|\\, ; \\, x\\in E,y\\in F\\}$.\n\n\\end{lem}\n\n\\begin{proof}\nRepeat the argument in \\cite[Proposition 5.1]{elAAM}, which only uses that $D$ is of order 1 and that the commutator $[\\chi, D]$ of $D$ with a Lipschitz function $\\chi$ is bounded on the space ${\\mathcal H}$. More precisely, it is the pointwise multiplication by\n \\begin{equation}\n\\label{eq:comm}\n\\begin{bmatrix}\n 0 & -(\\nabla \\chi)^t \\\\\n \\nabla \\chi & 0\n \\end{bmatrix}.\n\\end{equation}\n We omit further details.\n\\end{proof}\n\n\n Let $A_t$ be the dyadic averaging operator with respect to Lebesgue measure and $A_{t}^w$ the one with respect to $dw$. Set\n$$E_{t}u(x):= \\begin{bmatrix}\n A_{t}^w u_{{\\scriptscriptstyle\\perp}}(x) \\\\\n A_{t}u_{{\\scriptscriptstyle \\parallel}}(x) \\end{bmatrix} =\\begin{bmatrix}\n \\barint_{\\hspace{-2pt}Q} u_{{\\scriptscriptstyle\\perp}}(y)\\, dw(y) \\\\\n \\barint_{\\hspace{-2pt}Q} u_{{\\scriptscriptstyle \\parallel}}(y)\\, dy\n \\end{bmatrix},\n $$\n where $Q$ is the unique dyadic cube $Q \\in \\triangle_t$ that contains $x$. The notation for dyadic cubes is the same as in Section \\ref{coronasec}. Recall that $\\barint_{\\hspace{-2pt}Q}$ means the average on $Q$ with respect to the indicated measure. 
We also set\n $$\n E_{Q}u= \\begin{bmatrix}\n \\barint_{\\hspace{-2pt}Q} u_{{\\scriptscriptstyle\\perp}}(y)\\, dw(y) \\\\\n \\barint_{\\hspace{-2pt}Q} u_{{\\scriptscriptstyle \\parallel}}(y)\\, dy\n \\end{bmatrix}.\n $$\nObserve that $E_{t}$ acts as a linear operator componentwise. Using the inequality\n \\eqref{eq:dxdwAp} with $p=2$,\nwe have\n that $E_{t}$ is a bounded operator on ${\\mathcal H}$, uniformly in $t$.\n We also have the pointwise estimate\n $$\n\\sup_{t>0} |E_{t}u| \\le M_{d,w}|u_{{\\scriptscriptstyle\\perp}}| + M_{d}|u_{{\\scriptscriptstyle \\parallel}}|\n $$\n where $M_{d,w}$ is the dyadic maximal function with respect to\n $dw$ and $M_{d}$ the dyadic maximal function with respect to\n $dx$. Both are bounded on $L^2(w)$ by the Hardy-Littlewood theorem for the doubling measure $dw$ and\n by Muckenhoupt's theorem since $w\\in A_{2}$. Thus, $u\\mapsto \\sup_{t>0} |E_{t}u|$ is bounded from ${\\mathcal H}$ into $L^2(w)$.\n\n\n\n\n\\begin{defn} \\label{defn:princpart}\nBy the {\\em principal part} of $(Q_t^B)_{t>0}$,\nwe mean the function $(x,t) \\mapsto \\gamma_t(x)$ defined from ${\\mathbb R}^{n+1}_+$ to $ {\\mathcal L}({\\mathbb C}^{n+1}) $ by\n$$\n \\gamma_t(x)z:= (Q_t^B z)(x)\n$$\nfor every $z\\in {\\mathbb C}^{n+1}$. We view $z$ on the right-hand side\nof the above equation as the constant function valued in ${\\mathbb C}^{n+1}$ defined on ${\\mathbb R}^n$\nby $z(x):=z$. 
We denote by $|\\gamma_t(x)|$ its norm in ${\\mathcal L}({\\mathbb C}^{n+1})$ subordinated to the hermitian structure on ${\\mathbb C}^{n+1}$.\nWe identify $\\gamma_t(x)$ with the (possibly unbounded) multiplication\noperator $\\gamma_t: u(x)\\mapsto \\gamma_t(x)u(x)$, $u\\in {\\mathcal H}$.\n\\end{defn}\n\\begin{lem}\\label{lem:gammat}\nThe operator $Q_t^B$ extends to a bounded operator from\n$L^\\infty(w; {\\mathbb C}^{n+1})$ into $ {\\mathcal H}_{\\text{loc}}=L^2_{\\text{loc}}(w; {\\mathbb C}^{n+1})$.\nIn particular we have\n$\\gamma_t\\in L^2_{\\text{loc}}(w; {\\mathcal L}({\\mathbb C}^{n+1}))$ with bounds\n$$\n \\barint_{\\hspace{-6pt}Q} |\\gamma_t(y)|^2 \\, dw(y) \\lesssim\n 1\n$$\nfor all $Q\\in\\triangle_t$.\nMoreover, $\\gamma_t E_{t}$ are bounded on ${\\mathcal H}$ with $\\|\\gamma_t E_{t}u\\|\\lesssim \\|E_{t}u\\|$ uniformly for all $t>0$ and $u\\in {\\mathcal H}$.\n\\end{lem}\n\n\\begin{proof} Fix a cube $Q \\in \\triangle_t$ and $u \\in L^\\infty(w;{\\mathbb C}^{n+1})$ with $\\|u\\|_\\infty=1$. Then\nwrite $u= u_0+ u_1+u_2+\\ldots$ where $u_0=u$ on $2Q$ and $0$ elsewhere and if $j\\ge 1$, $u_j=u$ on $2^{j+1}Q \\setminus 2^{j}Q$ and $0$ elsewhere. 
Then apply $Q_t^B$ and use Lemma \\ref{lem:odd} for each term $Q_t^B u_j$ with $N$ large enough and sum to obtain\n$$\n \\barint_{\\hspace{-6pt}Q} |(Q_t^Bu)(y)|^2 \\, dw(y) \\le\n C.\n$$\nIf we do this for the constant functions with values describing an orthonormal basis of ${\\mathbb C}^{n+1}$ and sum, we obtain an upper bound for the desired average of $\\gamma_t$.\nNext, for a function $u \\in {\\mathcal H}$ and $Q\\in \\triangle_t$, as $E_{t}u$ is constant on $Q$,\n\\begin{align*}\n \\int_Q \\left|\\gamma_t (y)E_{t}u(y)\\right|^2 \\, dw(y)&\\le \\int_Q \\left|\\gamma_t (y)\\right|^2 \\, dw(y) \\times \\barint_{\\hspace{-6pt}Q} \\left|E_{t}u(y)\\right|^2 \\, dw(y) \\\\\n &\\lesssim \\int_Q \\left|E_{t}u(y)\\right|^2 \\, dw(y).\n \\end{align*} Thus\n$$\n\\|\\gamma_t E_{t}u\\|^2 \\lesssim \\sum_{Q\\in \\triangle_t} \\int_Q \\left|E_{t}u(y)\\right|^2 \\, dw(y) = \\|E_{t}u\\|^2 \\lesssim \\|u\\|^2.\n$$\n\\end{proof}\n\nAs $Q_t^B$ vanishes on $ \\textsf{N}(DB)$, it is enough to prove the quadratic estimate (\\ref{eq:sfBD}) for $v \\in \\clos{\\textsf{R}(DB)}=\\clos{\\textsf{R}(D)}$.\n Our principal part approximation reads as follows.\n\n\\begin{prop} \\label{lem:ppa} We have\n \\begin{equation}\\label{eq:ppa1}\n \\qe{Q_t^B v-\\gamma_t E_{t} v}\n \\lesssim \\|v\\|^2, \\quad v\\in \\clos{\\textsf{R}(D)}.\n\\end{equation}\n\n\n\\end{prop}\n\nThe function from ${\\mathbb R}^{n+1}_+$ to $ {\\mathbb R}_+$ defined by $(x,t) \\mapsto |\\gamma_t(x)|^2$ is\na {\\em weighted dyadic Carleson function} if there exists $C<\\infty$\nsuch that\n$$\n \\iint_{\\carl{Q}} |\\gamma_t(x)|^2 \\, \\frac{dw(x)dt}t \\le C^2 w(Q)\n$$\nfor all dyadic cubes $Q\\subseteq{\\mathbb R}^n$.\nHere $\\carl{Q}:= Q\\times (0,\\ell(Q)]$ is the Carleson box over $Q$. We define the dyadic Carleson norm $\\|\\gamma_t\\|_C$ to be the smallest\nconstant $C$ for which this inequality holds. 
The form of Carleson's lemma that we need and will apply componentwise is the following (see \\cite{AT}, p.168 and references therein).\n\\begin{lem} \\label{lem:Carleson} For all $u\\in {\\mathcal H}$,\n$$\n \\qe{\\gamma_t E_{t}u} \\lesssim \\|\\gamma_t\\|_C^2 \\| \\sup_{t>0}|E_{t}u|\\|^2 \\lesssim \\|\\gamma_t\\|_C^2 \\| u\\|^2.\n$$\n\\end{lem}\n\n\\begin{cor}\\label{cor:cor1} If $|\\gamma_{t}(x)|^2$ is a weighted dyadic Carleson function, then Theorem \\ref{th:main} holds.\n\\end{cor}\n\nThis corollary clearly follows from the above lemma and \\eqref{eq:ppa1}.\n\n\n\n\n\\subsection{Proof of the principal part approximation}\n\nWe begin the proof of the principal part approximation \\eqref{eq:ppa1} with some further notation.\nDefine\n$$P_{t}=\\begin{bmatrix}\n (I-t^2\\Delta_{w})^{-1} & 0 \\\\\n 0 & (I-t^2\\Delta)^{-1} I_{{\\mathbb C}^n}\n \\end{bmatrix}.\n $$\n\n Here $\\Delta_{w}={\\text{{\\rm div}}}_{w}\\nabla$ is the negative self-adjoint operator on $L^2(w)$ defined in Lemma \\ref{lem:gradient} while $\\Delta$ is the usual negative Laplacian on $L^2(dx)$. From Lemma \\ref{lem:approx},\n the convolution operator $(I-t^2\\Delta)^{-1}$ is bounded on $L^2(w)$ uniformly with respect to $t>0$: It is indeed classical that $(I-t^2\\Delta)^{-1}$ is the convolution with $t^{-n}G_{2}(x\/t)$ where the Bessel potential $G_{2}$ is integrable and radially decreasing (see \\cite[Chapter 6]{Gra}). Thus, $P_{t}$ is uniformly bounded on ${\\mathcal H}$.\n\n\n\n\n\n\n\n\\begin{lem}\\label{whatisneeded} For all $ v \\in \\clos{\\textsf{R}(D)}$, one has\n\\begin{equation} \\label{firsttermprop}\n \\qe{Q_t^B(I-P_t) v}\\lesssim \\|v\\|^2.\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\\label{proof:thm} The key point is that $ \\clos{\\textsf{R}(D)}$ is preserved by $P_{t}$. 
This follows from the characterization of $\\clos{\\textsf{R}(D)}$ in Proposition \\ref{prop:D} as $\\begin{bmatrix}\n L^2(w) \\\\\n {\\mathcal R}(L^2(w))\n \\end{bmatrix}$, since the Riesz transforms and $(I-t^2\\Delta)^{-1}$ commute and ${\\mathcal R}(- \\Delta)^{1\/2}=\\nabla$.\n Write $v=\\begin{bmatrix}\n f \\\\\n {\\mathcal R} g\n \\end{bmatrix}$, with $f, g\\in L^2(w)$. Then,\n \\begin{align*}\n (I-P_{t})v&=\\begin{bmatrix}\n -t^2\\Delta_{w} (I-t^2\\Delta_{w})^{-1}f \\\\\n -t^2\\Delta (I-t^2\\Delta)^{-1} {\\mathcal R} g\n \\end{bmatrix}\\\\\n & = \\begin{bmatrix}\n -t^2{\\text{{\\rm div}}}_{w} \\nabla (I-t^2\\Delta_{w})^{-1}f \\\\\n t^2\\nabla (- \\Delta)^{1\/2} (I-t^2\\Delta)^{-1} g\n \\end{bmatrix}\\\\\n & = - tD \\begin{bmatrix}\n t (- \\Delta)^{1\/2} (I-t^2\\Delta)^{-1} g\n \\\\\n t \\nabla (I-t^2\\Delta_{w})^{-1}f\n\\end{bmatrix} .\n\\end{align*}\nThus, using that $tQ_{t}^BD$ is uniformly bounded on ${\\mathcal H}$, it suffices to prove\n$$\n\\qe{t \\nabla (I-t^2\\Delta_{w})^{-1}f}\\lesssim \\|f\\|^2\n$$\nand\n$$\\qe{t (- \\Delta)^{1\/2} (I-t^2\\Delta)^{-1} g} \\lesssim \\|g\\|^2.\n$$\nThe first estimate is a simple consequence of the construction of $-\\Delta_{w}$ as the self-adjoint operator $\\nabla^*\\nabla$ on $L^2(w)$. Indeed, $\\|\\nabla (I-t^2\\Delta_{w})^{-1}f\\|= \\| (-\\Delta_{w})^{1\/2} (I-t^2\\Delta_{w})^{-1}f\\|$ and one concludes using the spectral theorem for the self-adjoint operator $\\Delta_{w}$ that $\\qe{t \\nabla (I-t^2\\Delta_{w})^{-1}f }= c \\|f\\|^2$ with $c= \\int_{0}^\\infty t^2(1+t^2)^{-2} \\frac{dt}{t}=\\frac12$.\nThe second estimate is a consequence of Lemma \\ref{lem:LPw} as soon as the conditions for the function $\\psi$ defined on the Fourier transform side by $\\hat\\psi(\\xi)= |\\xi|(1+|\\xi|^2)^{-1}$ have been checked. 
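For instance, assuming (as we may expect) that the conditions required in Lemma \\ref{lem:LPw} are the usual power decay bounds at the origin and at infinity, they follow from the elementary estimate\n$$\n|\\hat\\psi(\\xi)|= \\frac{|\\xi|}{1+|\\xi|^2} \\le \\min\\big(|\\xi|, |\\xi|^{-1}\\big),\n$$\nwhich gives decay of order $1$ at both ends.\n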
\\end{proof}\n\n\n \\begin{lem}\\label{lem:ppalem2} For all $v \\in \\clos{\\textsf{R}(D)}$, one has\n $$\n \\qe{Q_t^BP_tv -\\gamma_tE_{t}P_tv } \\lesssim \\|v\\|^2.\n $$\n \\end{lem}\n\n \\begin{proof}\n We remark that for $t>0$ fixed and $x\\in {\\mathbb R}^n$, we have\n$$(Q_t^BP_tv -\\gamma_tE_{t}P_t v)(x)= Q_t^B (u- u_Q ) (x),\n$$\nwhere $u=P_tv$, $Q$ is the unique dyadic cube in $\\triangle_t$ containing $x$ and\n$u_{Q}$ is the value of $E_{t}u$ on $Q$.\nDefine $C_0(Q)=2Q$ and $C_j(Q)=2^{j+1}Q\\setminus 2^jQ$ if $j\\in {\\mathbb N}^*$. Then,\n\\begin{align*}\n\\|Q_t^BP_t v-\\gamma_tE_{t}P_t v \\|^2 &= \\sum_{Q\\in \\triangle_t} \\int_Q | Q_t^B(u- u_{Q})|^2 \\, dw\n\\\\\n& \\le \\sum_{Q\\in \\triangle_t} \\left(\\sum_{j\\ge 0} \\bigg(\\int_Q | Q_t^B ({\\bf 1}_{C_j(Q)} (u -u_{Q}))|^2\\, dw \\bigg)^{1\/2}\\right)^{2}\n\\\\\n&\n\\lesssim \\sum_{Q\\in \\triangle_t} \\left(\\sum_{j\\ge 0} 2^{-jN} \\bigg(\\int_{C_j(Q)}|u-u_{Q} |^2\\, dw \\bigg)^{1\/2}\\right)^{2}\n\\\\\n&\n\\lesssim \\sum_{Q\\in \\triangle_t} \\sum_{j\\ge 0} 2^{-jN} \\int_{C_j(Q)} |u - u_{Q} |^2\\, dw\n\\\\\n&\n\\lesssim \\sum_{Q\\in \\triangle_t} \\sum_{j\\ge 0} 2^{-jN} 2^{2j} \\ell(Q)^2 2^{jd}\\int_{2^{j+1}Q} |\\nabla u |^2\n\\, dw \\\\\n&\n\\lesssim t^2 \\sum_{j\\ge 0} 2^{-jN} 2^{2j} 2^{jd} 2^{jn} \\int_{{\\mathbb R}^n} |\\nabla u |^2\\, dw\n\\\\\n&\n\\lesssim t^2 \\|\\nabla u \\|^2.\n\\end{align*}\nWe used the Minkowski inequality on the second line, off-diagonal decay on the third, Cauchy--Schwarz inequality on the fourth,\nPoincar\\'e inequality on the fifth (recalling that one can take the average with respect to either $dx$ or $dw$), and a telescoping argument which produces the doubling exponent $d$ of $w$,\nthe covering inequality $ \\sum_{Q\\in \\triangle_t} {\\bf 1}_{2^{j+1}Q} \\lesssim 2^{jn}$ and $\\ell(Q)\\sim t$ on the sixth. 
Finally, we choose $N> n+d+ 2$ in the last step.\n Hence\n$$\\qe{Q_t^BP_tv-\\gamma_t E_{t} P_t v}\n \\lesssim \\qe{t \\nabla P_t v}.\n $$\n\n\n For the scalar part of $v$, we have to control the weighted quadratic estimate for $t\\nabla (I-t^2\\Delta_{w})^{-1}v_{{\\scriptscriptstyle\\perp}}$, which we have seen already. Using $v\\in \\clos{\\textsf{R}(D)}$, the tangential part $v_{{\\scriptscriptstyle \\parallel}}$ is of the form ${\\mathcal R} g$ for some $g\\in L^2(w)$. Hence we have to control the quadratic estimate of $t\\nabla (I-t^2\\Delta)^{-1}R_{j} g=R_{j} {\\mathcal R} t (-\\Delta)^{1\/2}(I-t^2\\Delta)^{-1}g$ for $j=1, \\ldots, n$. We can eliminate $R_{j}$ and ${\\mathcal R}$ as the Riesz transforms are bounded on $L^2(w)$, and the weighted Littlewood-Paley estimate for $t (-\\Delta)^{1\/2}(I-t^2\\Delta)^{-1}$ has already been seen. \\end{proof}\n\n \\begin{lem}\\label{lem:mean1} There are constants $C<\\infty$ and $\\tau_{1}\\in(0,1)$ such that for all $f\\in \\textsf{D}({\\text{{\\rm div}}}_{w})$ and all dyadic cubes $Q$,\n $$\n\\left| \\barint_{\\hspace{-6pt}Q} {\\text{{\\rm div}}}_{w}f \\, dw \\right| \\le \\frac{C}{\\ell(Q)^{\\tau_{1}}}\n\\left( \\barint_{\\hspace{-6pt}Q} |{\\text{{\\rm div}}}_{w}f|^2 \\, dw \\right)^{\\frac{1-\\tau_{1}}2}\\left( \\barint_{\\hspace{-6pt}Q} |f|^2 \\, dw \\right)^{\\frac{\\tau_{1}}2}.\n$$\n \\end{lem}\n\n\\begin{proof} Observe that if $f$ has support contained in $Q$, then $\\int_{Q} {\\text{{\\rm div}}}_{w}f \\, dw=0$. Thus this lemma follows from \\cite{Ban}. Here is a simple proof in our situation. Let $A=\\left( \\barint_{\\hspace{-2pt}Q} |{\\text{{\\rm div}}}_{w}f|^2 \\, dw \\right)^{1\/2}$ and $B=\\left( \\barint_{\\hspace{-2pt}Q} |f|^2 \\, dw \\right)^{1\/2}$. 
If $B\\ge A\\ell$, a simple application of the Cauchy--Schwarz inequality gives the result.\n\nAssume next that $B< A\\ell$. Choose $p>1$ such that $w\\in A_{2\/p}$ and use H\\\"older's inequality with conjugate exponents $p$ and $p'$, the support properties of $1-\\varphi$, and the fact that $w\\in A_{2\/p}$, to conclude that\n$$\n| II | \\lesssim \\left( \\barint_{\\hspace{-6pt}Q} |\\nabla g|^p \\, dx \\right)^{1\/p} t^{1\/p'} \\le [w]_{A_{2\/p}}^{1\/p} A t^{1\/p'}.\n$$\nHence, choosing $t^{1+1\/p'}=B\/A\\ell$, we obtain the inequality with $\\tau_{2}=\\frac{1\/p'}{1+1\/p'}$.\n\\end{proof}\n\n\n\\begin{lem}\\label{lem:ppalem3} For all $u\\in {\\mathcal H}=L^2(w;{\\mathbb C}^{n+1})$,\n$$\\qe{\\gamma_tE_{t}(P_t-I)u} \\lesssim \\|u\\|^2.$$\n\\end{lem}\n\n\n \\begin{proof} {It follows from Lemma \\ref{lem:gammat} that\n $\\|\\gamma_tE_{t}(P_t-I)u\\| \\lesssim \\|E_{t}(P_t-I)u\\|$. Given the definitions of $E_{t}$ and $P_{t}$, the lemma reduces to the scalar inequalities\n $$\\qe{A_t^w((I-t^2\\Delta_{w})^{-1}-I)f} \\lesssim \\|f\\|^2$$\n and\n $$\\qe{A_t((I-t^2\\Delta)^{-1}-I)f} \\lesssim \\|f\\|^2.$$\n\n For the first one, we follow \\cite{AKMc} with a minor simplification. Let $Q_{s}^w= s^2\\Delta_{w}e^{s^2\\Delta_{w}}$. Then for $f\\in L^2(w)$ we have by the spectral theorem, $f= 8 \\int_{0}^\\infty (Q_{s}^w)^2f\\, \\frac {ds}s$\n and also $\\|f\\|^2= 8 \\int_{0}^\\infty \\|Q_{s}^w f\\|^2 \\, \\frac {ds}s$.\n By Schur's lemma, it is enough to show that the operator norm of $A_t^w((I-t^2\\Delta_{w})^{-1}-I)Q_{s}^w$ in $L^2(w)$ is bounded by $h(s\/t)$ with $h\\ge 0$ and $\\int_{0}^\\infty h(u)\\, \\frac{du}u<\\infty$. We shall find $h(u)= C \\inf (u^{\\tau_{1}}, u^{-2})$.\n\n If $s<t$, one obtains the bound $C(s\/t)^{\\tau_{1}}$ with $\\tau_{1}>0$ depending only on $w$ using Lemma \\ref{lem:DRF}. 
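Note that this $h$ is admissible for Schur's lemma, since\n$$\n\\int_{0}^\\infty \\inf (u^{\\tau_{1}}, u^{-2})\\, \\frac{du}u = \\int_{0}^1 u^{\\tau_{1}}\\, \\frac{du}u + \\int_{1}^\\infty u^{-2}\\, \\frac{du}u = \\frac{1}{\\tau_{1}} + \\frac{1}{2} <\\infty.\n$$\n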
Thus, we can use the fact that the integral $f= 8 \\int_{0}^\\infty (Q_{s})^2f\\, \\frac {ds}s$ converges in $L^2(w)$ from Corollary \\ref{cor:Calderon}, Schur's Lemma and that $ \\int_{0}^\\infty \\|Q_{s} f\\|^2 \\, \\frac {ds}s \\lesssim \\|f\\|^2$ from Lemma \\ref{lem:LPw}.\n }\n \\end{proof}\n\n\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{lem:ppa}]\nIt is enough to write\n$$\nQ_t^B v-\\gamma_t E_{t} v= Q_t^B(I-P_{t}) v + (Q_t^BP_{t}v -\\gamma_t E_{t}P_{t} v) + \\gamma_t E_{t}(P_{t}-I) v\n$$\nand to use successively Lemma \\ref{whatisneeded}, Lemma \\ref{lem:ppalem2} and Lemma \\ref{lem:ppalem3}.\n\\end{proof}\n\n\\subsection{Preamble to the Carleson measure estimate}\n\nWe are now ready to prove that $|\\gamma_t(x)|^2$ is a weighted dyadic Carleson function and so, via Proposition \\ref{lem:ppa}, complete the proof of Theorem \\ref{th:main}. The first step towards this is a compactness argument. As was seen in the solution of the Kato square root problem (\\cite{AHLMcT}), the application of a stopping-time argument was made possible by restricting $\\gamma_t(x)$ so that, once normalised, it is close to a fixed element in the unit sphere of $\\mathcal{L}({\\mathbb C}^{1+n})$, the set of bounded linear transformations on ${\\mathbb C}^{1+n}$. We will make use of the same stopping-time argument, but also require a second stopping-time related to the oscillation of the weight, and with it comes a second compactness argument which restricts our attention to Whitney boxes on which the average of the weight is close to that of the top cube.\n\n A convenient way to define the Whitney box $W_{Q}$ associated to a given dyadic cube $Q$ is\n$W_{Q}=\\{ (x,t)\\, ; \\, x\\in Q, Q\\in \\Delta_{t}\\} $. With our definition, $\\widehat Q$ is the union of all $W_{Q'}$ for which $Q'$ is a dyadic subcube of $Q$.\n\nConsider the compact unit sphere in $\\mathcal{L}({\\mathbb C}^{1+n})$ and the compact interval $[0,c_0]$, where $c_0$ is as in \\eqref{reversejensen}. 
For each $\\nu \\in \\mathcal{L}({\\mathbb C}^{1+n})$ such that $|\\nu| = 1$, $\\tau \\in [0,c_0]$ and $\\sigma_{1},\\sigma_2 > 0$, define $G_{\\tau,\\sigma_2}$ as the union of those Whitney boxes $W_{Q}$ for which $|\\ln(w_Q) - (\\ln w)_Q - \\tau| < \\sigma_2$,\n and\n\\begin{equation}\\label{gammatilde}\n\\widetilde{\\gamma}_t(x)=\n\\begin{cases} \\gamma_t(x) & \\text{if $\\gamma_t(x)\\ne 0$, $\\left|\\frac{\\gamma_t(x)}{|\\gamma_t(x)|} - \\nu \\right| \\leq \\sigma_1$ and $(x,t) \\in G_{\\tau,\\sigma_2}$,}\n\\\\\n0 &\\text{otherwise.}\n\\end{cases}\n\\end{equation}\n We recall the notation $B^w(Q)$ from Section \\ref{coronasec} and set\n$$\\Omega^w(Q) = \\carl{Q} \\setminus \\cup_{R \\in B^w(Q)} \\carl{R}.\n$$\n\n\n\n\n\\begin{lem} \\label{lemma5}\nSuppose that we can show\n\\[\n K= \\sup_{\\nu, \\tau}\\sup_{Q \\in \\triangle} \\frac{1}{w(Q)} \\iint_{\\Omega^w(Q)} |\\widetilde{\\gamma}_t(x)|^2\\, \\frac{dw(x)dt}{t} < \\infty\n\\]\nfor some choice of parameters $\\sigma_1$ and $\\sigma_2$ depending only on $\\|B\\|_{\\infty}$, $\\kappa$, $[w]_{A_2}$ and $n$. Then $|\\gamma_t(x)|^2$ is a weighted dyadic Carleson function.\n\\end{lem}\n\n\\begin{proof} Fix $\\sigma_{1}$ and $\\sigma_{2}$ so that the hypothesis applies. Let $Q\\in \\Delta$. Observe that the sets $\\Omega^w(R)$ form a partition of $\\carl{Q}$ when $R$ runs over elements of $ B^w_*(Q) \\cup \\{Q\\}$. 
Thus,\nby the hypothesis and Proposition \\ref{prop4},\n\\begin{align*}\n\\iint_{\\carl{Q}} |\\widetilde{\\gamma}_t(x)|^2\\, \\frac{dw(x)dt}{t} & = \\sum_{R \\in B^w_*(Q) \\cup \\{Q\\}} \\iint_{\\Omega^w(R)} |\\widetilde{\\gamma}_t(x)|^2\\, \\frac{dw(x)dt}{t} \\\\\n& \\leq K \\sum_{R \\in B^w_*(Q) \\cup \\{Q\\}} w(R) \\leq \\frac{KC}{\\sigma^2_{w}} w(Q) + K w(Q).\n\\end{align*}\n By the compactness of the unit sphere in $\\mathcal{L}({\\mathbb C}^{1+n})$ and the interval $[0,c_0]$, there exist a finite index set $A \\subseteq {\\mathbb N}$ and, for each $j \\in A$, choices of $\\nu_j$ and $\\tau_j$ such that $|\\gamma_t(x)|^2 \\leq \\sum_{j \\in A} |\\widetilde{\\gamma}^j_t(x)|^2$, where $\\widetilde{\\gamma}^j_t(x) = \\widetilde{\\gamma}_t(x)$ with the choice $\\nu = \\nu_j$ and $\\tau = \\tau_j$.\nThis completes the proof.\n\\end{proof}\n\n\\subsection{Stopping-time arguments for test functions}\n\nWe fix an arbitrary vector $\\xi$ in the unit sphere of ${\\mathbb C}^{1+n}$. For any $Q_1 \\in \\triangle$ and $\\sigma_{3}>0$ to be chosen, define a test function\n\\begin{equation}\n\\label{testf}\nf^\\xi_{Q_1} := \\Big(I + \\big(\\sigma_3\\ell(Q_{1})DB\\big)^2\\Big)^{-1}(1_{Q_1}\\xi) = P^B_{\\sigma_3\\ell(Q_{1})}(1_{Q_1}\\xi),\n\\end{equation}\nwhere $1_{Q_1}$ is the indicator of $Q_{1}$.\n Note that $\\|f^\\xi_{Q_1}\\|^2 \\lesssim w(Q_{1})$ and $\\|\\sigma_{3}\\ell(Q_{1})DBf^\\xi_{Q_1}\\|^2\\lesssim w(Q_{1})$ with uniform implicit constants with respect to $|\\xi|=1$, $\\sigma_{3}>0$ and $Q_{1}$, as can be seen using the uniform boundedness in $t>0$ of $Q^B_{t}$ and $P^B_{t}$.\n\n\\begin{lem} \\label{lemma6}\nThere exist a constant $c$ depending only on $\\|B\\|_{\\infty}$, $\\kappa$, $[w]_{A_2}$ and $n$, and a constant $\\delta > 0$ depending only on $[w]_{A_2}$, such that for all such $\\xi$, $Q_{1}$ and $\\sigma_{3}$,\n\\[\n|E_{Q_1}(f^\\xi_{Q_1}) - \\xi| \\leq c\\sigma_3^{\\delta}.\n\\]\n\\end{lem}\n\n\\begin{proof}\nWe have $E_{Q_1}(f^\\xi_{Q_1}) - \\xi = 
E_{Q_1}D u$ with\n\\[\nu: = - (\\sigma_3 \\ell(Q_1))^2BDB \\Big(I + \\big(\\sigma_3\\ell(Q_{1})DB\\big)^2\\Big)^{-1}(1_{Q_1}\\xi)\n\\]\n and notice that $E_{Q_1}D$ acts on $u = \\begin{bmatrix}\nu_1 \\\\\nu_2\n\\end{bmatrix}$ componentwise by averaging ${\\text{{\\rm div}}}_w u_2$ with respect to $dw$ and $\\nabla u_1$ with respect to $dx$.\nLemma \\ref{lem:mean1} says\n\\begin{align*}\n& \\left| \\barint_{\\hspace{-6pt}Q_1} {\\text{{\\rm div}}}_{w}u_2 \\, dw \\right| \\le \\frac{C}{\\ell(Q_1)^{\\tau_{1}}}\n\\left( \\barint_{\\hspace{-6pt}Q_1} |{\\text{{\\rm div}}}_{w}u_2|^2 \\, dw \\right)^{\\frac{1-\\tau_{1}}2}\\left( \\barint_{\\hspace{-6pt}Q_1} |u_2|^2 \\, dw \\right)^{\\frac{\\tau_{1}}2} \\\\\n& \\leq C\\sigma_3^{\\tau_1}\n\\left( \\barint_{\\hspace{-6pt}Q_1} | \\xi - f^\\xi_{Q_1} |^2 \\, dw \\right)^{\\frac{1-\\tau_{1}}2}\\left( \\barint_{\\hspace{-6pt}Q_1} | \\sigma_{3}\\ell(Q_{1})DBf^\\xi_{Q_1}|^2 \\, dw \\right)^{\\frac{\\tau_{1}}2} \\\\\n& \\leq C\\sigma_3^{\\tau_1}\n\\end{align*}\nand Lemma \\ref{lem:mean2} says\n\\begin{align*}\n& \\left| \\barint_{\\hspace{-6pt}Q_1} \\nabla u_1 \\, dx \\right| \\le \\frac{C}{\\ell(Q_1)^{\\tau_{2}}}\n\\left( \\barint_{\\hspace{-6pt}Q_1} |\\nabla u_1|^2 \\, dw \\right)^{\\frac{1-\\tau_{2}}2}\\left( \\barint_{\\hspace{-6pt}Q_1} |u_1|^2 \\, dw \\right)^{\\frac{\\tau_{2}}2} \\\\\n& \\leq C\\sigma_3^{\\tau_2}\n\\left( \\barint_{\\hspace{-6pt}Q_1} |\\xi - f^\\xi_{Q_1} |^2 \\, dw \\right)^{\\frac{1-\\tau_{2}}2}\\left( \\barint_{\\hspace{-6pt}Q_1} | \\sigma_{3}\\ell(Q_{1})DBf^\\xi_{Q_1} |^2 \\, dw \\right)^{\\frac{\\tau_{2}}2} \\\\\n& \\leq C\\sigma_3^{\\tau_2}.\n\\end{align*}\nSo taking $\\delta = \\min(\\tau_1,\\tau_2)$ completes the proof.\n\\end{proof}\n\nRecall that $w_{Q}= \\barint_{\\hspace{-2pt}Q} w\\, dx$ and similarly for $(\\ln w)_{Q}$.\n\n\\begin{lem} \\label{lemma7}\nFix $\\tau\\in [0,c_{0}]$ and $\\xi$ in the unit sphere of ${\\mathbb C}^{1+n}$. 
Let\n\\[\nS_{Q_1}^\\tau=\\begin{bmatrix}\n w_{Q_1} e^{-\\tau - (\\ln w)_{Q_1}} & 0 \\\\\n 0 & I\n \\end{bmatrix},\n\\]\nand define the collection of `bad' cubes $B^{\\tau,\\xi}(Q_1)$ to be the set of maximal $Q' \\in \\triangle$ such that $Q' \\subseteq Q_1$ and\n\\[\n\\mbox{either $|E_{Q'}(f^\\xi_{Q_1})| > \\frac{1}{\\sigma_4}$ or $ \\re \\left(S_{Q_1}^\\tau(\\xi),\\begin{bmatrix}\n w_{Q'}\/w_{Q_1} & 0 \\\\\n 0 & I\n \\end{bmatrix}E_{Q'}(f^\\xi_{Q_1})\\right) < \\sigma_5$.}\n\\]\nWe can then choose positive $\\sigma_3$, $\\sigma_4$ and $\\sigma_5$ depending only on $\\|B\\|_{\\infty}$, $\\kappa$, $[w]_{A_2}$ and $n$, in particular independently of $\\tau$, $\\xi$ and $Q_{1}$, so that\n\\begin{equation} \\label{geometric}\n\\sum_{R \\in B^{\\tau,\\xi}(Q_1)} w(R) \\leq (1-\\sigma_6)w(Q_1),\n\\end{equation}\nwith $0< \\sigma_{6}\\le 1$.\n\\end{lem}\n\n\\begin{proof}\nThere are two sets of cubes to consider. The first is the set of those maximal $Q'$ for which\n\\begin{equation} \\label{stopone}\n \\re \\left(S_{Q_1}^\\tau(\\xi),\\begin{bmatrix}\n w_{Q'}\/w_{Q_1} & 0 \\\\\n 0 & I\n \\end{bmatrix}E_{Q'}(f^\\xi_{Q_1})\\right) < \\sigma_5.\n\\end{equation}\nBy \\eqref{reversejensen}, we know that\n\\[\n-c_0 \\leq \\ln(w_{Q_1}) - \\tau - (\\ln w)_{Q_1} \\leq c_0\n\\]\nso $S_{Q_1}^\\tau$ is a constant self-adjoint matrix with $e^{-c_{0}} I \\le S_{Q_1}^\\tau \\le e^{c_{0}} I$. Applying Lemma \\ref{lemma6},\n\\begin{align}\n \\re \\big(S_{Q_1}^\\tau(\\xi),E_{Q_1}(f^\\xi_{Q_1})\\big) & = \\big(S_{Q_1}^\\tau(\\xi),\\xi\\big) + \\re \\big(S_{Q_1}^\\tau(\\xi), E_{Q_1}(f^\\xi_{Q_1})-\\xi\\big) \\nonumber \\\\\n& \\geq e^{-c_{0}} - ce^{c_{0}}\\sigma_3^{\\delta} \\geq \\frac 1 2 e^{-c_{0}}, \\label{sigma3}\n\\end{align}\non choosing $\\sigma_3$ so that $2c\\sigma_{3}^\\delta \\le e^{-2c_{0}}$. 
Consequently, setting $G = Q_1 \\setminus (\\cup Q')$ and $f^\\xi_{Q_1}= \\begin{bmatrix}\n f_{1} \\\\ f_{2}\n \\end{bmatrix}\n $,\n we have\n \\begin{align*}\n\\label{}\n E_{Q_1}(f^\\xi_{Q_1})&= \\begin{bmatrix}\n \\frac{1}{w(Q_1)} \\int_{Q_{1}} f_1\\, dw \\\\\n \\frac{1}{|Q_1|} \\int_{Q_{1}} f_2\\, dx\n \\end{bmatrix} \\\\\n &\n =\\sum_{Q'} \\frac{|Q'|}{|Q_1|} \\begin{bmatrix}\n w_{Q'}\/w_{Q_{1}} & 0 \\\\\n 0 & I\n \\end{bmatrix}E_{Q'}(f^\\xi_{Q_1}) + \\begin{bmatrix}\n \\frac{1}{w(Q_1)} \\int_G f_1\\, dw \\\\\n \\frac{1}{|Q_1|} \\int_G f_2\\, dx\n \\end{bmatrix} ,\n \\end{align*}\nwhere the subcubes $Q'$ are those of \\eqref{stopone}. Using \\eqref{stopone} and \\eqref{sigma3}, we obtain\n \\begin{align*}\n\\frac 1 2 e^{-c_{0}}\n&\n \\leq \\re \\big(S_{Q_1}^\\tau(\\xi),E_{Q_1}(f^\\xi_{Q_1})\\big) \\\\\n& \\leq \\sigma_5 \\sum_{Q'} \\frac{|Q'|}{|Q_1|} + \\re \\left(S_{Q_1}^\\tau(\\xi),\\begin{bmatrix}\n \\frac{1}{w(Q_1)} \\int_G f_1\\, dw \\\\\n \\frac{1}{|Q_1|} \\int_G f_2\\, dx\n \\end{bmatrix}\\right) \\\\\n& \\leq \\sigma_5 + \\re \\left(S_{Q_1}^\\tau(\\xi),\\begin{bmatrix}\n \\frac{1}{w(Q_1)} \\int_G f_1\\, dw \\\\\n \\frac{1}{|Q_1|} \\int_G f_2\\, dx\n \\end{bmatrix}\\right)\n\\end{align*}\nand, using the estimate $\\int_{Q_{1}} |f^\\xi_{Q_1}|^2\\, dw \\lesssim w(Q_{1}) $ and again the $A_{2}$ condition for $w$,\n\\begin{align*}\n& \\left|\\left(S_{Q_1}^\\tau(\\xi),\\begin{bmatrix}\n \\frac{1}{w(Q_1)} \\int_G f_1\\, dw \\\\\n \\frac{1}{|Q_1|} \\int_G f_2 \\, dx\n \\end{bmatrix}\\right)\\right| \\\\\n& \\leq e^{c_{0}}\\left(\\frac{w(G)}{w(Q_1)^2}\\left(\\int_G |f_1|^2 dw\\right) + \\frac{w^{-1}(G)}{|Q_1|^2}\\left(\\int_G |f_2|^2 dw\\right)\\right)^{1\/2} \\\\\n& \\lesssim \\left(\\frac{w(G)}{w(Q_1)}\\right)^{1\/2} + \\frac{(w^{-1}(G))^{1\/2}w(Q_1)^{1\/2}(w^{-1}(Q_1))^{1\/2}}{(w^{-1}(Q_1))^{1\/2}|Q_1|} \\\\\n& \\lesssim \\left(\\frac{w(G)}{w(Q_1)}\\right)^{1\/2} + \\left(\\frac{w^{-1}(G)}{w^{-1}(Q_1)}\\right)^{1\/2} \\lesssim 
\\left(\\frac{w(G)}{w(Q_1)}\\right)^{\\theta}\n\\end{align*}\nfor some $\\theta>0$ by \\eqref{eq:ainfty} applied to $w^{-1}$ and $w$.\n Therefore, for a small enough choice of $\\sigma_5$, we have that\n\\[\n\\left(\\frac{w(G)}{w(Q_1)}\\right)^{\\theta} \\gtrsim 1,\n\\]\nwhich implies that $w(G) \\geq 2\\sigma_6 w(Q_1)$ for some small $\\sigma_6 > 0$ and so\n\\begin{equation} \\label{firsthalf}\n\\sum_{Q'} w(Q') \\leq (1-2\\sigma_6) w(Q_1),\n\\end{equation}\nwhere the sum is taken over those cubes $Q'$ which satisfy \\eqref{stopone}.\n\nNow we consider the set of maximal dyadic subcubes $Q'$ of $Q_{1}$ for which\n\\begin{equation} \\label{stoptwo}\n|E_{Q'}(f^\\xi_{Q_1})| > \\frac{1}{\\sigma_4}.\n\\end{equation}\n Then $w(Q') \\leq \\sigma_4^2 \\int_{Q'} |f^\\xi_{Q_1}|^2 dw$ and\n\\[\n\\sum_{Q'} w(Q') \\leq \\sum_{Q'} \\sigma_4^2 \\int_{Q'} |f^\\xi_{Q_1}|^2 dw \\leq \\sigma_4^2 \\int_{Q_1} |f^\\xi_{Q_1}|^2 dw \\lesssim \\sigma_4^2\\, w(Q_1)\n\\]\nwhere the sum is now over those cubes $Q'$ which satisfy \\eqref{stoptwo}.\nSo we can choose $\\sigma_4$ so small that\n\\begin{equation} \\label{secondhalf}\n\\sum_{Q'} w(Q') \\leq \\sigma_6\\, w(Q_1).\n\\end{equation}\nCombining \\eqref{firsthalf} and \\eqref{secondhalf} proves the lemma.\n\\end{proof}\n\n\\subsection{Conclusion of the Carleson measure estimate} Consider $\\tilde{\\gamma}_t(x)$ depending on $\\nu$ and $ \\tau$ as defined in \\eqref{gammatilde}.\n Associate to $\\nu$ a vector $\\xi \\in {\\mathbb C}^{1+n}$ such that $|\\nu(\\xi)|=1$ and $|\\xi|=1$. 
Such a $\\xi$ may not be uniquely defined but we pick one.\nFor a cube $Q_1 \\in \\triangle$, consider the test function $f_{Q_{1}}^\\xi$ of \\eqref{testf} and set $\\Omega^{\\tau,\\xi}(Q_1) = \\carl{Q_1} \\setminus \\cup_{R \\in B^{\\tau,\\xi}(Q_1)} \\carl{R}$ with $B^{\\tau,\\xi}(Q_1)$ defined in Lemma \\ref{lemma7}.\n\n\n\\begin{lem} \\label{lem:parameters}\nSuppose that $\\sigma_3$, $\\sigma_4$ and $\\sigma_5$ are chosen as in Lemma \\ref{lemma7} so that \\eqref{geometric} holds.\nThen there exists a choice of $\\sigma_w$, $\\sigma_1$ and $\\sigma_2$ so that for all $Q_{0}, Q_{1}\\in \\triangle$ with $Q_{1}\\subseteq Q_{0}$, and all $\\nu$ and $ \\tau$,\n\\begin{equation} \\label{43}\n|\\tilde{\\gamma}_t(x)| \\leq C|\\gamma_tE_t(f^\\xi_{Q_1})(x)|, \\quad \\mathrm{for}\\,(x,t) \\in \\Omega^w(Q_0) \\cap \\Omega^{\\tau,\\xi}(Q_1),\n\\end{equation}\n where $C > 0$ depends only on the choice of $\\sigma_w$, $\\sigma_1$, $\\sigma_2$, $\\sigma_3$, $\\sigma_4$ and $\\sigma_5$.\n\\end{lem}\n\n\\begin{proof} We assume that $\\Omega^w(Q_0) \\cap \\Omega^{\\tau,\\xi}(Q_1)$ is non-empty; otherwise there is nothing to prove. Recall that it is a union of Whitney boxes $W_{Q'}$ and \\eqref{43} follows from $\\left| \\frac{\\tilde{\\gamma}_t(x)}{|\\tilde{\\gamma}_t(x)|} \\big( E_{Q'}(f^\\xi_{Q_1})\\big) \\right|\\gtrsim 1$ for $(x,t)\\in W_{Q'}$ with $\\tilde{\\gamma}_t(x)\\ne 0$.\nFor a Whitney box $W_{Q'} \\subseteq\\Omega^w(Q_0) \\cap \\Omega^{\\tau,\\xi}(Q_1)$ we have that\n\\begin{align*}\n|(\\ln w)_{Q'} - (\\ln w)_{Q_0}| & \\leq \\sigma_w, \\\\\n|E_{Q'}(f^\\xi_{Q_1})| & \\leq \\frac{1}{\\sigma_4} \\,\\,\\, \\mbox{and} \\\\\n \\re \\left(S_{Q_1}^\\tau(\\xi),\\begin{bmatrix}\n w_{Q'}\/w_{Q_1} & 0 \\\\\n 0 & I\n \\end{bmatrix}E_{Q'}(f^\\xi_{Q_1})\\right) & \\geq \\sigma_5.\n\\end{align*}\n The last two inequalities are the definition of $\\Omega^{\\tau,\\xi}(Q_{1})$. The first comes from the fact that $Q'$ is not contained in a cube of $B^w(Q_{0})$ by Proposition \\ref{prop4}. 
As $Q'\\subseteq Q_{1}\\subseteq Q_{0}$, $Q_{1}$ is also not contained in a cube of $B^w(Q_{0})$ and we also have\n\\[ |(\\ln w)_{Q_{1}} - (\\ln w)_{Q_0}| \\leq \\sigma_w.\n\\]\n Moreover, recall that if $\\tilde{\\gamma}_t(x)\\ne 0$, then\n\\[\n\\left|\\frac{\\tilde{\\gamma}_t(x)}{|\\tilde{\\gamma}_t(x)|} - \\nu\\right| \\le \\sigma_{1}.\n\\]\nFinally, if $\\tilde{\\gamma}_t(x)\\ne 0$ and $(x,t)\\in W_{Q'}$ then\n\\[\n|\\ln(w_{Q'}) - (\\ln w)_{Q'} - \\tau| < \\sigma_2.\n\\]\nClearly then, we may assume the six inequalities above.\n\nWe begin by observing that\n\\[\n \\begin{bmatrix}\n w_{Q'}\/w_{Q_1} & 0 \\\\\n 0 & I\n \\end{bmatrix} S_{Q_1}^\\tau = \\begin{bmatrix}\n e^{\\ln (w_{Q'})-\\tau - (\\ln w)_{Q_1}} & 0 \\\\\n 0 & I\n \\end{bmatrix}\n\\]\nand\n\\begin{align}\n |\\ln(w_{Q'}) - \\tau - (\\ln w)_{Q_1}|\n& \\leq |\\ln(w_{Q'}) - (\\ln w)_{Q'} - \\tau| \\nonumber \\\\\n& + |(\\ln w)_{Q'} - (\\ln w)_{Q_0}| + |(\\ln w)_{Q_1} - (\\ln w)_{Q_0}| \\nonumber\\\\\n& \\leq \\sigma_2 + \\sigma_w + \\sigma_w. \\label{1}\n\\end{align}\nRecall that we chose $\\sigma_3$ in \\eqref{sigma3} so that $0 < c\\sigma_3^{\\delta} \\leq e^{-2c_{0}}\/2 \\leq 1\/2$. 
Therefore, using Lemma \\ref{lemma6},\n\\[\n\\left| \\nu \\left(E_{Q'}(f^\\xi_{Q_1})\\right) \\right| \\geq \\left| \\nu \\left(\\xi\\right) \\right| - \\left| \\nu \\left(E_{Q'}(f^\\xi_{Q_1}) - \\xi\\right) \\right| \\geq 1 - c\\sigma_3^{\\delta} \\geq 1\/2\n\\]\nand\n\\[\n \\re \\left( \\xi, E_{Q'}(f^\\xi_{Q_1})\\right) = \\left( \\xi, \\xi\\right) + \\re \\left( \\xi, E_{Q'}(f^\\xi_{Q_1})-\\xi\\right) \\leq 1 + c\\sigma_3^{\\delta} \\leq 2,\n\\]\nso\n\\[\n\\left| \\nu \\left(E_{Q'}(f^\\xi_{Q_1})\\right) \\right| \\geq \\frac{1}{4} \\re \\left( \\xi, E_{Q'}(f^\\xi_{Q_1})\\right).\n\\]\nIt then follows that for $(x,t)\\in W_{Q'}$ with $\\tilde{\\gamma}_t(x)\\ne 0$,\n\\begin{align*}\n& \\left| \\frac{\\tilde{\\gamma}_t(x)}{|\\tilde{\\gamma}_t(x)|} \\left( E_{Q'}(f^\\xi_{Q_1})\\right) \\right| \\\\\n& \\geq \\left| \\nu \\left(E_{Q'}(f^\\xi_{Q_1})\\right) \\right| - \\left|\\left(\\frac{\\tilde{\\gamma}_t(x)}{|\\tilde{\\gamma}_t(x)|} - \\nu\\right)\\left( E_{Q'}(f^\\xi_{Q_1})\\right) \\right| \\\\\n& \\geq \\frac{1}{4} \\re \\left( \\xi, E_{Q'}(f^\\xi_{Q_1})\\right) - \\frac{\\sigma_1}{\\sigma_4} \\\\\n& = \\frac{1}{4} \\re \\left( \\begin{bmatrix}\n e^{\\ln(w_{Q'} ) -\\tau - (\\ln w)_{Q_1}}& 0 \\\\\n 0 & I\n \\end{bmatrix}\\xi, E_{Q'}(f^\\xi_{Q_1})\\right) \\\\\n& \\quad + \\frac{1}{4} \\re \\left(\\begin{bmatrix}\n 1-e^{\\ln(w_{Q'})-\\tau - (\\ln w)_{Q_1}} & 0 \\\\\n 0 & 0\n \\end{bmatrix}\\xi, E_{Q'}(f^\\xi_{Q_1})\\right) - \\frac{\\sigma_1}{\\sigma_4} \\\\\n& \\geq \\frac{1}{4} \\re \\left( S_{Q_1}^\\tau(\\xi), \\begin{bmatrix}\n w_{Q'}\/w_{Q_1} & 0 \\\\\n 0 & I\n \\end{bmatrix}E_{Q'}(f^\\xi_{Q_1})\\right) - \\frac{e}{\\sigma_4} (\\sigma_2 + 2\\sigma_w) - \\frac{\\sigma_1}{\\sigma_4} \\\\\n& \\geq \\frac{1}{4}\\sigma_5 - \\frac{e}{\\sigma_4} (\\sigma_2 + 2\\sigma_w + \\sigma_1).\n\\end{align*}\nWe have used \\eqref{1} and $|1-e^u|\\le e|u|$ if $u$ is real with $|u|\\le 1$, assuming $\\sigma_2 + 2\\sigma_w\\le 1$.\nWe have already chosen $\\sigma_4$ and 
$\\sigma_5$, but we are still free to choose $\\sigma_w$, $\\sigma_1$ and $\\sigma_2$ small so that \\eqref{43} holds.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{th:main}] By Corollary \\ref{cor:cor1} and Lemma \\ref{lemma5}, we know it is enough to show, for fixed $\\nu$ and $\\tau$, and for any cube $Q_0$, that\n\\begin{equation} \\label{indeed}\n\\iint_{\\Omega^w(Q_0)} |\\widetilde{\\gamma}_t(x)|^2\\, \\frac{dw(x)dt}{t} \\leq Kw(Q_0).\n\\end{equation}\n Fix $Q_{1}\\in \\Delta$ with $Q_{1}\\subseteq Q_{0}$. Having fixed the parameters in Lemma \\ref{lem:parameters}, we apply \\eqref{43} in the first inequality to obtain\n\\begin{align*}\n \\iint_{\\Omega^w(Q_0)\\cap\\Omega^{\\tau,\\xi}(Q_1)}& |\\widetilde{\\gamma}_t(x)|^2\\, \\frac{dw(x)dt}{t} \\\\\n& \\leq \\iint_{\\carl{Q_1}} |\\gamma_tE_t(f^\\xi_{Q_1})(x)|^2\\, \\frac{dw(x)dt}{t} \\\\\n& \\lesssim \\iint_{\\carl{Q_1}} |(Q^B_tf^\\xi_{Q_1})(x)|^2\\, \\frac{dw(x)dt}{t} \\\\\n& \\quad + \\iint_{\\carl{Q_1}} |(Q^B_t - \\gamma_tE_t)(f^\\xi_{Q_1} - 1_{Q_1}\\xi)(x)|^2\\, \\frac{dw(x)dt}{t} \\\\\n& \\quad + \\iint_{\\carl{Q_1}} |(Q^B_t - \\gamma_tE_t)(1_{Q_1}\\xi)(x)|^2 \\, \\frac{dw(x)dt}{t}.\n\\end{align*}\nSince $Q_{t}^Bf^\\xi_{Q_1}= \\frac t{\\sigma_{3}\\ell(Q_{1})}P_{t}^B (\\sigma_{3}\\ell(Q_{1})DBf^\\xi_{Q_1} ) $, one has that\n\\[\n\\iint_{\\carl{Q_1}} |(Q^B_tf^\\xi_{Q_1})(x)|^2\\, \\frac{dw(x)dt}{t} \\lesssim \\int_{0}^{\\ell(Q_{1})} \\frac {t^2\\|\\sigma_{3}\\ell(Q_{1})DBf^\\xi_{Q_1}\\|^2}{(\\sigma_{3}\\ell(Q_{1}))^2} \\, \\frac {dt}t \\lesssim w(Q_1),\n\\]\nand, by Proposition \\ref{lem:ppa} because $f^\\xi_{Q_1} - 1_{Q_1}\\xi \\in \\textsf{R}(D)$,\n\\[\n\\iint_{\\carl{Q_1}} |(Q^B_t - \\gamma_tE_t)(f^\\xi_{Q_1} - 1_{Q_1}\\xi)(x)|^2\\, \\frac{dw(x)dt}{t} \\lesssim \\|f^\\xi_{Q_1} - 1_{Q_1}\\xi\\|^2 \\lesssim w(Q_1).\n\\]\n For the last term, using that by definition $Q^B_t - \\gamma_t E_t$ annihilates constants and $E_t((1_{Q_1} -1)\\xi)(x)=0$ when $(x,t) \\in \\carl{Q_{1}}$, we can 
rewrite\n\\[(Q^B_t - \\gamma_t E_t) (1_{Q_1} \\xi)(x)= (Q^B_t - \\gamma_t E_t) ((1_{Q_1} -1) \\xi)(x) = Q^B_t ((1_{Q_1} -1)\\xi)(x).\\]\nUsing off-diagonal estimates for $Q^B_t$ as in Lemma \\ref{lem:gammat}, one can easily show $\\iint_{\\carl{Q_1}} |Q^B_t ((1_{2Q_{1}}-1)\\xi)(x)|^2 \\frac{dw(x)dt}{t} \\lesssim w(Q_{1})$. Next, decompose $2Q_{1}\\setminus Q_{1} = \\partial_{a(t)} \\cup (2Q_{1}\\setminus \\partial_{a(t)})$ where $a(t)= \\sqrt {t\/\\ell(Q_{1})} \\ell(Q_{1})$ and $\\partial_{a}=\\{y\\notin Q_{1}\\, ; \\, d(y,Q_{1})\\le a\\}$. Again, using the off-diagonal estimates for each $t$, the function $1_{2Q_{1}\\setminus \\partial_{a(t)}}$ contributes $w(Q_{1})$. It remains to control the integral corresponding to $1_{\\partial_{a(t)}}$. From the uniform boundedness of $Q^B_{t}$, one has\n\\begin{align*}\n\\iint_{\\carl{Q_1}} |Q^B_t (1_{\\partial_{a(t)}}\\xi)(x)|^2 \\frac{dw(x)dt}{t} & \\lesssim \\int_{0}^{\\ell(Q_{1})} w(\\partial_{a(t)}) \\frac{dt}{t} \\\\\n & \\lesssim \\int_{0}^{\\ell(Q_{1})} \\bigg(\\frac{|\\partial_{a(t)}|}{|Q_{1}|}\\bigg)^\\sigma \\frac{dt}{t} \\, w(Q_{1})\\\\\n & \\lesssim w(Q_{1})\n\\end{align*}\nusing \\eqref{eq:ainfty} and $\\frac{|\\partial_{a(t)}|}{|Q_{1}|} \\lesssim \\big(\\frac t{\\ell(Q_{1})}\\big)^{1\/2}$ obtained from elementary observations. 
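The elementary observations referred to are the collar bound $|\\partial_{a}| \\lesssim a\\, \\ell(Q_{1})^{n-1}$ for $0<a\\le \\ell(Q_{1})$, which gives\n$$\n\\frac{|\\partial_{a(t)}|}{|Q_{1}|} \\lesssim \\frac{a(t)}{\\ell(Q_{1})} = \\Big(\\frac t{\\ell(Q_{1})}\\Big)^{1\/2},\n$$\nand the convergence of $\\int_{0}^{\\ell(Q_{1})} (t\/\\ell(Q_{1}))^{\\sigma\/2} \\, \\frac{dt}{t}$.\n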
Summarizing the estimates above, we have proved that\n$$\n\iint_{\Omega^w(Q_0)\cap\Omega^{\tau,\xi}(Q_1)} |\widetilde{\gamma}_t(x)|^2\, \frac{dw(x)dt}{t}\n\lesssim w(Q_{1}).\n$$\n\n\n We can now prove \eqref{indeed}.\nDefine\n\begin{align*}\nB^{\tau,\xi}_0(Q_0) & = \{Q_0\}, \quad B^{\tau,\xi}_1(Q_0) = B^{\tau,\xi}(Q_0), \\\nB^{\tau,\xi}_{j+1}(Q_0) & = \cup_{R \in B^{\tau,\xi}_j(Q_0)}B^{\tau,\xi}(R) \,\,\, \mbox{for} \,\,\, j = 1,2,\dots, \\\n\mbox{and} \,\,\, B^{\tau,\xi}_*(Q_0) & = \cup_{j=0}^\infty B^{\tau,\xi}_j(Q_0).\n\end{align*}\nUsing\n\[\n\iint_{\Omega^w(Q_0)} |\widetilde{\gamma}_t(x)|^2\, \frac{dw(x)dt}{t} = \sum_{Q_1 \in B^{\tau,\xi}_*(Q_0)} \iint_{\Omega^w(Q_0) \cap \Omega^{\tau,\xi}(Q_1)} |\widetilde{\gamma}_t(x)|^2 \, \frac{dw(x)dt}{t},\n\]\n and summing the estimate above, an iteration of Lemma \ref{lemma7} implies\n\begin{align*}\n& \iint_{\Omega^w(Q_0)} |\widetilde{\gamma}_t(x)|^2\, \frac{dw(x)dt}{t} \lesssim \sum_{Q_1 \in B^{\tau,\xi}_*(Q_0)} w(Q_1) \\\n& \leq \sum_{j=0}^\infty (1-\sigma_6)^j w(Q_0) \lesssim w(Q_0),\n\end{align*}\nwhich proves \eqref{indeed} and with it Theorem \ref{th:main}.\n\end{proof}\n\n\subsection{The case of block matrices}\nWe show how to simplify the argument in this case.\nRecall that $B$ is an $(n+1) \times (n+1)$ matrix. Assume here that it is block diagonal, namely\n$$\nB(x)= \begin{bmatrix} a(x) & 0 \\ 0 & d(x)\n\end{bmatrix}\n$$\nwith $a(x)$ scalar-valued and $d(x)$ $n\times n$ matrix-valued. Define the normal and tangential spaces\n$${\mathcal H}_{{\scriptscriptstyle\perp}}=\begin{bmatrix} L^2({\mathbb R}^n, w; {\mathbb C}) \\ 0\n\end{bmatrix} \quad \mathrm{and} \quad {\mathcal H}_{{\scriptscriptstyle \parallel}}=\begin{bmatrix} 0\\ L^2({\mathbb R}^n, w; {\mathbb C}^n)\n\end{bmatrix}.$$\nIn this case, both operators $BD$ and $DB$ swap the normal and tangential spaces. 
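Explicitly, since $B$ preserves the splitting ${\mathcal H}={\mathcal H}_{{\scriptscriptstyle\perp}}\oplus {\mathcal H}_{{\scriptscriptstyle \parallel}}$ while $D$ swaps it, a direct computation gives in this block diagonal case\n$$\nDB= \begin{bmatrix} 0 & {\text{{\rm div}}}_{w}\, d \\ -\nabla a & 0\n\end{bmatrix} \quad \mathrm{and} \quad BD= \begin{bmatrix} 0 & a\, {\text{{\rm div}}}_{w} \\ -d\nabla & 0\n\end{bmatrix}.\n$$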
So do $Q_{t}^B$ and multiplication by $\gamma_{t}$. This means that\n$$\n\gamma_{t}(x)= \begin{bmatrix} 0 & \alpha_{t}(x) \\ \beta_{t}(x) & 0\n\end{bmatrix}\n$$\nso that the Carleson function norms\nfor $|\alpha_{t}(x)|^2$ and $|\beta_{t}(x)|^2$ can be estimated separately. The normal and tangential parts of our test functions can be used in two separate stopping-time arguments, which do not require the Corona decomposition (Proposition \ref{prop4}), following the usual proof in the unweighted case, since for each stopping time we use the average against \emph{one} measure: either $dx$ or $dw$.\n\n\n\n\subsection{A vector-valued extension}\label{sec:vv}\n\nThe proof of the main quadratic estimate carries over straightforwardly to the case of systems where\n\begin{itemize}\n \item ${\mathcal H}= L^{2}({\mathbb R}^n, w; {\mathbb C}^{m(1+n)})$, where $ {\mathbb C}^{m(1+n)}= ({\mathbb C}^m)^{1+n}$,\n \item $D$ acts componentwise on ${\mathcal H}$ by $(Du)^\alpha= D(u^\alpha)$ for $\alpha=1, \ldots, m$ (in other words, the new $D$ is $D\otimes I_{{\mathbb C}^m}$ but we shall not use this notation), and\n \item $B(x)$ is an $(n+1)\times (n+1)$ matrix whose entries are $m\times m$ matrices and the multiplication by $B(x)$ is assumed to be bounded on ${\mathcal H}$ and accretive on $\clos{\textsf{R}(D)}$.\n\n\n\end{itemize}\n\n\n\subsection{Consequences}\label{sec:consequences}\n\nWe gather here some consequences for the functional calculus for the convenience of the reader.\n\n\n\n\begin{prop}\label{prop:equi} Let $w,D$ and $B$ be as above. If $T=DB$ or $T=BD$, then one has the equivalence\n\begin{equation}\label{eq:sfT}\n \int_0^\infty\|tT(1+t^2T^2)^{-1} u \|^2 \, \frac{dt}t \sim \|u\|^2, \qquad \text{for all }\ u\in \clos{\textsf{R}(T)}.\n\end{equation}\n\end{prop}\n\n\begin{proof} First, the square function estimate for $BD$ follows from that for $DB$. Indeed, on $\textsf{N}(BD)$,\n$tBD(1+t^2(BD)^2)^{-1} u =0$. 
On $\clos{\textsf{R}(BD)}$, $BD$ is similar to $DB$ so the square function inequality for $BD$ follows. We conclude using the splitting ${\mathcal H}= \textsf{N}(BD) \oplus \clos{\textsf{R}(BD)}$. Now, changing $B$ to $B^*$, we have in fact proved\n\eqref{eq:sfT} for both $T=DB$ (resp. $T=BD$) and its adjoint. It is classical (\cite{ADMc}) that this implies the equivalence on the range.\n\end{proof}\n\n\nThe next result summarizes consequences of quadratic estimates that are needed.\n\n\begin{prop}\label{prop:SFimpliesFC} Let $T$ be an $\omega$-bisectorial operator on a separable Hilbert space ${\mathcal H}$ with $0\le \omega<\pi\/2$. Assume that the quadratic estimate\n\begin{equation}\n \int_0^\infty\|tT(1+t^2T^2)^{-1} u \|^2 \, \frac{dt}t \sim \|u\|^2 \ \text{holds for all }\ u\in \clos{\textsf{R}(T)}.\n\end{equation}\nThen the following statements hold.\n\begin{itemize}\n \item $T$ has a bounded holomorphic functional calculus on $\clos{\textsf{R}(T)}$ on any bisector $|\arg (\pm z)| <\mu$ for any $\omega<\mu<\pi\/2$, which can be extended to all ${\mathcal H}$ by setting $f(T)=f(0)I$ on $\textsf{N}(T)$ whenever $f$ is also defined at 0.\n\item The comparison\n\begin{equation} \label{eq:psiT}\n \int_0^\infty\|\psi(tT) u \|^2 \, \frac{dt}t \sim \|u\|^2\n\end{equation}\nholds for all $u\in \clos{\textsf{R}(T)}$, for any $\omega<\mu<\pi\/2$ and for any holomorphic function $\psi$ in the bisector $|\arg (\pm z)| <\mu$, which is not identically zero on either connected component of the bisector and which satisfies\n$|\psi(z)|\le C\inf (|z|^\alpha, |z|^{-\alpha})$ for some $C<\infty$ and $\alpha>0$.\n \item The operator $\text{{\rm sgn}}(T)$ is a bounded involution on $\clos{\textsf{R}(T)}$.\n \item $\clos{\textsf{R}(T)}$ splits topologically into two spectral subspaces\n\begin{equation} \label{eq:hardysplit}\clos{\textsf{R}(T)}={\mathcal H}^+_{T}\oplus 
{\mathcal H}^-_{T}\n\end{equation}\n with ${\mathcal H}^\pm_{T}=E_{T}^\pm(\clos{\textsf{R}(T)})$ and $E_{T}^\pm=\chi^\pm(T)$ are projections with $\chi^\pm(z)=1$ if $\pm \re z>0$ and $\chi^\pm(z)=0$ otherwise.\n\item The operator $|T|=\text{{\rm sgn}}(T)T = \sqrt {T^2}$ with $\textsf{D}(|T|)=\textsf{D}(T)$ is an $\omega$-sectorial operator and\n$-|T|$ generates an analytic semigroup of operators $(e^{-z|T|})_{|\arg z| <\pi\/2 - \omega}$.\n\item For $h\in \textsf{D}(T)$, $h\in {\mathcal H}^\pm_{T}$ if and only if $|T|h=\pm Th$. As a consequence,\n$e^{\mp zT}$ are well-defined operators on ${\mathcal H}^\pm_{T}$ respectively, and $e^{-zT}E_{T}^+$ and $e^{+zT}E_{T}^-$ are well-defined operators on ${\mathcal H}$ for $|\arg z | <\pi\/2 - \omega$.\n\end{itemize}\end{prop}\n\n\nAs announced in Section \ref{sec:main}, we recall here why this implies the Kato conjecture for block diagonal\n$$\nB= \begin{bmatrix} a & 0 \\ 0 & d\n\end{bmatrix}\n$$\nidentifying the functions $a$ and $d$ with the corresponding multiplication operators.\nWe have\n$$BD= \begin{bmatrix} 0 & a {\text{{\rm div}}}_{w} \\ - d\nabla & 0\n\end{bmatrix} \ , \ (BD)^2= \begin{bmatrix} -a{\text{{\rm div}}}_{w} d\nabla & 0 \\ 0 & - d\nabla a {\text{{\rm div}}}_{w}\n\end{bmatrix}, $$ so that for $u\in H^1({\mathbb R}^n, w;{\mathbb C}^m)$, $v=\begin{bmatrix}\n u \\\n 0\n\end{bmatrix} \in \textsf{D}(BD)= \textsf{D} (|BD|)$ and\n$$\|\sqrt {-a{\text{{\rm div}}}_{w} d\nabla} u\| \sim \| |BD|v \| \sim \|BD v\| \sim \|d\nabla u \| \sim \|\nabla u\|.\n$$\n\n\n\n\n\n\section{Representations for solutions of degenerate elliptic systems}\label{sec:rep}\n\n\nFrom now on, we write points in the upper half-space ${\mathbb R}^{1+n}_+$ as ${\bf x}=(t,x)$, $t>0, x\in {\mathbb R}^n$.\n\n\n\n\n\n\subsection{From second order to first order}\n\nWe shall now follow closely \cite{AA1}, and its extension \cite{R}, but in the weighted setting. 
It is necessary to have these references handy. The estimates of these two articles that are obtained in abstract Hilbert spaces evidently apply here. Some other estimates use harmonic analysis (tent spaces, maximal functions). Thus we shall try to extract the relevant information and give proofs only when the argument uses a particular feature of the weighted situation.\n\nWe recall the notation ${\mathcal H}=L^{2}({\mathbb R}^n,w;{\mathbb C}^{m(1+n)})$ and use ${\mathcal H}^{0}=\clos{\textsf{R}(D)}$ where $D$ was defined in Section \ref{sec:vv}. Beware that in \cite{AA1}, ${\mathcal H}$ was taken as $\clos{\textsf{R}(D)}$.\nWe continue to use $\|\ \|$ to denote the norm in ${\mathcal H}$, and occasionally use other notation when needed.\n\nWe construct solutions $u$ to the divergence form system (\ref{eq:divform})\nby solving the equivalent vector-valued ODE (\ref{eq:firstorderODE}) below for the $w$-normalized conormal gradient\n$$\n f=\nabla_{w^{-1}A} u= \begin{bmatrix} \partial_{\nu_{w^{-1}A}}u \\ \nabla_x u \end{bmatrix},$$\nwhere\n$\partial_{\nu_{w^{-1}A}}u$ denotes the upward (hence inward for $\mathbb{R}^{1+n}_+$) $w$-normalized conormal derivative of $u$.\n\nUsing the normal\/tangential decomposition for ${\mathbb C}^{m(1+n)}= {\mathbb C}^m \oplus {\mathbb C}^{mn}= {\mathbb C}^m \oplus ({\mathbb C}^m \otimes {\mathbb C}^n)$ (see Section \ref{sec:main}), we write matrices acting on ${\mathbb C}^{m(1+n)}$ as\n$$\n M= \begin{bmatrix} M_{{\scriptscriptstyle\perp}\no} & M_{{\scriptscriptstyle\perp}{\scriptscriptstyle \parallel}} \\ M_{{\scriptscriptstyle \parallel}{\scriptscriptstyle\perp}} & M_{{\scriptscriptstyle \parallel}\ta} \end{bmatrix},\n$$\nthe entries being matrices acting from and into the various spaces in the splitting.\n\n\n\n\begin{prop}\label{prop:hat}\n The transformation\n$$\n C\mapsto \widehat C:= \begin{bmatrix} I & 0 \\\n C_{{\scriptscriptstyle \parallel}{\scriptscriptstyle\perp}} & 
C_{{\scriptscriptstyle \parallel}\ta} \end{bmatrix} \begin{bmatrix} C_{{\scriptscriptstyle\perp}\no} & C_{{\scriptscriptstyle\perp}{\scriptscriptstyle \parallel}} \\\n 0 & I \end{bmatrix}^{-1} = \begin{bmatrix} C_{{\scriptscriptstyle\perp}\no}^{-1} & -C_{{\scriptscriptstyle\perp}\no}^{-1} C_{{\scriptscriptstyle\perp}{\scriptscriptstyle \parallel}} \\\n C_{{\scriptscriptstyle \parallel}{\scriptscriptstyle\perp}}C_{{\scriptscriptstyle\perp}\no}^{-1} & C_{{\scriptscriptstyle \parallel}\ta}-C_{{\scriptscriptstyle \parallel}{\scriptscriptstyle\perp}}C_{{\scriptscriptstyle\perp}\no}^{-1}C_{{\scriptscriptstyle\perp}{\scriptscriptstyle \parallel}} \end{bmatrix}\n$$\nis a self-inverse bijective transformation of the set of operator-valued matrices which are bounded on ${\mathcal H}$ and accretive on $\clos{\textsf{R}(D)}$.\n\end{prop}\n\nThe proof is analogous to that of \cite{AA1}.\n\n We set\n $$\n \widehat {w^{-1}A}= B$$ in what follows.\n Our assumption is that as a multiplication operator, $w^{-1}A(t,\cdot)$ is bounded on ${\mathcal H}$ and accretive on $\clos{\textsf{R}(D)}$ for a.e.~$t>0$ with uniform bounds with respect to $t$. In particular, the matrix $w^{-1}A_{{\scriptscriptstyle\perp}\no}(t,\cdot)$ is invertible as an operator acting on $L^2({\mathbb R}^n, w; {\mathbb C}^m)$, hence it is also invertible in $L^\infty({\mathbb R}^n, {\mathcal L}({\mathbb C}^{m}))$, with uniform bounds a.e.~in $t>0$. 
Thus, $B(t,\cdot)$ is also a multiplication operator.\n\nWe now introduce some notation.\nLet\n$$\n {\mathcal D}_{w}=\begin{bmatrix} C_{0}^\infty({\mathbb R}^{1+n}_{+}; {\mathbb C}^m) \\\n w^{-1} C_{0}^\infty({\mathbb R}^{1+n}_{+}; {\mathbb C}^{mn})\end{bmatrix}.\n $$\n\n Let ${\mathcal C} url_{{\scriptscriptstyle \parallel},0}= \{f \in {\mathcal D}'({\mathbb R}^n; {\mathbb C}^{m(1+n)})\, ;\, {\text{{\rm curl}}}_{x} f_{{\scriptscriptstyle \parallel}}=0\}$, where the curl operator is computed componentwise.\n Let ${\mathcal H}_\text{{\rm loc}}=L^2_\text{{\rm loc}}({\mathbb R}^n, w;{\mathbb C}^{m(1+n)})$.\n\n\begin{prop} \label{prop:divformasODE}\nFor a pair of coefficient matrices $A$ and $B$ related by $A= w\widehat B$, or equivalently $B= \widehat {w^{-1}A}$,\nthe pointwise map $g\mapsto f= \begin{bmatrix} (w^{-1} A g)_{\scriptscriptstyle\perp} \\ g_{\scriptscriptstyle \parallel} \end{bmatrix} $ gives\na one-to-one correspondence, with inverse $g= \begin{bmatrix} (B f)_{\scriptscriptstyle\perp} \\ f_{\scriptscriptstyle \parallel} \end{bmatrix} $,\nbetween solutions $g$ to the equations\n\begin{equation} \label{eq:firstorderdiv}\n \begin{cases}\n g\in L^2_\text{{\rm loc}}({\mathbb R}_+;{\mathcal H}_\text{{\rm loc}}), \\\n {\text{{\rm div}}}_{t,x} (Ag)=0, \\\n {\text{{\rm curl}}}_{t,x} g=0,\n \end{cases}\n\end{equation} in the sense of distributions on ${\mathbb R}^{1+n}_{+}$\nand solutions $f$ to the generalized Cauchy--Riemann equations\n\begin{equation} \label{eq:firstorderODE}\n \begin{cases}\n f\in L^2_\text{{\rm loc}}({\mathbb R}_+;{\mathcal H}_\text{{\rm loc}}\cap {\mathcal C} url_{{\scriptscriptstyle \parallel},0}), \\\n \partial_t f+ DB f=0,\n \end{cases}\n\end{equation}\n in the weak sense\n\begin{equation}\n\label{firstorderweak}\n\int_{0}^\infty -(f, \partial_{t}\varphi) + (Bf,D\varphi)\, dt =0 \quad \forall \varphi\in {\mathcal 
D}_{w},\n\end{equation}\nwhere $(\ ,\ )$ is the complex inner product with respect to $dw$. \end{prop}\n\nThe proof is almost identical to the one in \cite{AA1}.\n\n\n\n\begin{proof}\nThe transformation $g\mapsto f= \begin{bmatrix} (w^{-1} A g)_{\scriptscriptstyle\perp} \\ g_{\scriptscriptstyle \parallel} \end{bmatrix} $ is easily seen to be invertible on $L^2_\text{{\rm loc}}({\mathbb R}_+;{\mathcal H}_\text{{\rm loc}})$. Consider a pair of functions $g$ and $f$ in $L^2_\text{{\rm loc}}({\mathbb R}_+;{\mathcal H}_\text{{\rm loc}})$ related in this fashion.\n Equations (\ref{eq:firstorderdiv}) for $g$ are equivalent to\n\begin{equation}\n\begin{cases}\n \partial_t (Ag)_{\scriptscriptstyle\perp} + {\text{{\rm div}}}_x( A_{{\scriptscriptstyle \parallel}{\scriptscriptstyle\perp}} g_{\scriptscriptstyle\perp} + A_{{\scriptscriptstyle \parallel}\ta}g_{\scriptscriptstyle \parallel}) =0, \\\n \partial_t g_{\scriptscriptstyle \parallel} - \nabla_x g_{\scriptscriptstyle\perp} =0, \\\n {\text{{\rm curl}}}_x g_{\scriptscriptstyle \parallel} =0,\n\end{cases}\n\end{equation}\neach in the sense of distributions on ${\mathbb R}^{1+n}_{+}$.\n The last equation is equivalent to $f_t=f(t,\cdot)\in {\mathcal C} url_{{\scriptscriptstyle \parallel},0}$.\n Moreover, using that $(w^{-1}Ag)_{\scriptscriptstyle\perp}= f_{\scriptscriptstyle\perp}$, $g_{\scriptscriptstyle \parallel}= f_{\scriptscriptstyle \parallel}$ and\n $g_{\scriptscriptstyle\perp}= (Bf)_{\scriptscriptstyle\perp}= A_{{\scriptscriptstyle\perp}\no}^{-1}(wf_{\scriptscriptstyle\perp}- A_{{\scriptscriptstyle\perp}{\scriptscriptstyle \parallel}}f_{\scriptscriptstyle \parallel})$,\n the first two equations are seen to be equivalent to the equation $\partial_t f+ DB f=0$ in the prescribed sense.\n \end{proof}\n\n\n\n\n Next, the strategy in \cite{AA1} is to integrate the weak differential equation \eqref{firstorderweak}\n to 
obtain an equivalent formulation in the Duhamel sense. Again, this can be followed almost line by line, once we have the following density lemma.\n\n \\begin{lem} The space ${\\mathcal D}_{w}$ is dense in $H^{1}_\\text{{\\rm c}}({\\mathbb R}_+;{\\mathcal H}) \\cap L^2_\\text{{\\rm c}}({\\mathbb R}_+;\\textsf{D}(D))$, where the subscript $\\text{{\\rm c}}$ means that elements have compact support in ${\\mathbb R}_{+}$. Thus, if $f\\in L^2_\\text{{\\rm loc}}({\\mathbb R}_+;{\\mathcal H}^{0})$, \\eqref{firstorderweak} holds for any $\\varphi\\in H^{1}_\\text{{\\rm c}}({\\mathbb R}_+;{\\mathcal H}) \\cap L^2_\\text{{\\rm c}}({\\mathbb R}_+;\\textsf{D}(D))$ if it does for any $\\varphi\\in{\\mathcal D}_{w}$.\n\\end{lem}\n\n\n \\begin{proof} The density of ${\\mathcal D}_{w}$ in $H^{1}_\\text{{\\rm c}}({\\mathbb R}_+;{\\mathcal H}) \\cap L^2_\\text{{\\rm c}}({\\mathbb R}_+;\\textsf{D}(D))$ can be easily established using (2) in Proposition \\ref{prop:D} and standard truncation and regularization in the $t$-variable. If $f\\in L^2_\\text{{\\rm loc}}({\\mathbb R}_+;{\\mathcal H}^{0})$ and $\\varphi\\in H^{1}_\\text{{\\rm c}}({\\mathbb R}_+;{\\mathcal H}) \\cap L^2_\\text{{\\rm c}}({\\mathbb R}_+;\\textsf{D}(D))$ then the integral in \\eqref{firstorderweak} makes sense and vanishes by approximating $\\varphi$ by elements in ${\\mathcal D}_{w}$.\n\\end{proof}\n\nIn the above proposition, we are mostly interested in having $g\\in L^2_\\text{{\\rm loc}}({\\mathbb R}_+;{\\mathcal H})$. 
Recall that\n $${\mathcal H}^{0}=\clos{\textsf{R}(D)}= L^{2}({\mathbb R}^n,w; {\mathbb C}^{m(1+n)})\cap {\mathcal C} url_{{\scriptscriptstyle \parallel},0}={\mathcal H}\cap {\mathcal C} url_{{\scriptscriptstyle \parallel},0}.$$\n In particular, $g\in L^2_\text{{\rm loc}}({\mathbb R}_+;{\mathcal H})$ if and only if $f\in L^2_\text{{\rm loc}}({\mathbb R}_+;{\mathcal H}^{0})$ and we can apply the above lemma.\nFormally writing $\partial_{t}f+ DB_{0}f= D({\mathcal E} f)$, ${\mathcal E}=B_{0}-B$, where $B_{0}$ is now multiplication by a $t$-independent matrix,\nthe integration of \eqref{firstorderweak} leads to the following equation\n\begin{equation} \label{eq:inteqroadmap}\n f_{t}= e^{-tDB_{0}} E_{0}^+ h^+ + (S_Af)_{t},\n\end{equation}\nfor a unique $h^+\in {\mathcal H}^+_{DB_{0}}$\nand where $S_A$ is the vector-valued singular integral operator given by\n\begin{equation} \label{eq:firstformalSAdefn}\n (S_A f)_t := \int_0^t e^{-(t-s)DB_0} E_0^+ D {\mathcal E}_s f_s \, ds - \int_t^\infty e^{(s-t)DB_0} E_0^- D{\mathcal E}_s f_s\, ds.\n\end{equation}\nHere $E_{0}^\pm=\chi^\pm(DB_{0})$ are the projections defined in Section \ref{sec:consequences}\nand $ {\mathcal H}^\pm_{DB_{0}}:= E_{0}^\pm{\mathcal H} $ are the ranges of the respective projections. We also use the notation $g_{t}=g(t,\cdot)$.\nThis operator can be rigorously defined using the maximal regularity operator for $|DB_0|$ viewed from the operational calculus point of view as $F(|DB_0|)$ with $F(z)$ being the operator-valued analytic function given by\n$$\n(F(z)g)_t:= \int_0^t ze^{-(t-s) z}g_s\, ds, \ \re z>0,\n$$\nso \eqref{eq:firstformalSAdefn} becomes\n\begin{equation} \label{eq:inteqopcalcroadmap}\n S_A= F(|DB_0|) \widehat E_0^+{\mathcal E} + F^*(|DB_0|) \widehat E_0^- {\mathcal E},\n\end{equation}\nwhere $\widehat E_0^\pm$ are bounded operators on ${\mathcal H}$ such that $E_0^\pm D= (DB_0)\widehat E_0^\pm$. 
These two representations and Proposition \ref{prop:equi} allow us to prove most of the relevant boundedness results concerning the regularity and Neumann problems. For the Dirichlet problem, they can be used as well in an appropriate sense (see \cite{R}), but there is another useful representation using the operator \begin{equation} \label{eq:tildeSA}\n (\widetilde S_A f)_t:= \int_0^t e^{-(t-s)B_0D} \widetilde E_0^+ {\mathcal E}_s f_s \, ds - \int_t^\infty e^{(s-t)B_0D} \widetilde E_0^- {\mathcal E}_s f_s \, ds,\n\end{equation} where $\widetilde E_0^\pm=\chi^\pm(B_{0}D)$, and the vector field defined by\n$$\n v_{t} := e^{-tB_{0}D} \widetilde E_{0}^+ \tilde h^+ + (\widetilde S_A f)_{t},\n $$\n for some $\tilde h^+$. From the intertwining property $b(DB_0)D= Db(B_0D)$ of the functional\ncalculi of $DB_0$ and $B_0D$, one has $D\widetilde S_A =S_{A}$, so that the relation $D\tilde h^+=h^+$, which uniquely determines a choice $\tilde h^+\in {\mathcal H}^+_{B_{0}D}$, shows that $Dv=f$. Solutions $u$ to the second order equations are related to $v$ in the sense that there exists a constant $c\in {\mathbb C}^m$ such that\n $\n u =c -v_{\scriptscriptstyle\perp}.\n$\nThis means that the tangential part $-v_{{\scriptscriptstyle \parallel}}$ encodes a conjugate to the solution $u$. This notion of conjugate was further developed in \cite{AA2}.\n\nThese representations are justified provided one has the operator bounds in the next section.\n\n\subsection{Function spaces and operator estimates}\n\n\nHere, we give the definition of the function spaces associated with the BVPs. What changes compared to \cite{AA1} is that\nthe Lebesgue measure $dx$ on ${\mathbb R}^n$ is replaced by $dw$ and the Lebesgue measure $d{\bf x}=dtdx$ on ${\mathbb R}^{1+n}$ by $d{\underline w}=dtdw$, where ${\underline w}$ is the $A_{2}$ weight on ${\mathbb R}^{1+n}$ defined by ${\underline w}(t,x)=w(x)$. 
The only property required for $w$ in this section is the doubling property of $dw$, except when we use the quadratic estimate, which uses the $A_{2}$ property. We also incorporate the subsequent duality and multiplier results of\n \cite{HR} (for $dx$), which can be extended to the weighted setting too. See below.\n\n\n\n\n\begin{defn} \label{defn:NTandC} For an $L^q_{loc}$ function $f$, $1\le q\le \infty$, define\n$W_{q}f(t,x)= \left(\barint_{\hspace{-2pt}W(t,x)} |f|^q \, d{\underline w}\right)^{1\/q}$ with the usual essential supremum definition if $q=\infty$ and where $W(t,x):= (c_0^{-1}t,c_0t)\times B(x;c_1t)$ is a Whitney region, for some fixed constants $c_0>1$, $c_1>0$.\nThe {\em weighted non-tangential maximal function} of an $L^2_{\text{{\rm loc}}}$ function $f$ in ${\mathbb R}^{1+n}_+$ is\n$$\n \widetilde N_*(f)(x):= \sup_{t>0} W_{2}f(t,x), \qquad x\in {\mathbb R}^n.\n$$\nThe {\em weighted Carleson functional} of an $L^1_{\text{{\rm loc}}}$ function $f$ is\n$$\n Cf(x) := \sup_{Q\ni x} \frac 1{w(Q)} \int_{(0, \ell(Q))\times Q} |f(t,y)| \, d{\underline w}(t,y),\qquad x\in{\mathbb R}^n,\n$$\nwhere the supremum is taken over all cubes $Q$ in ${\mathbb R}^n$ containing $x$, with $\ell(Q)$ denoting their side lengths.\nThe {\em modified weighted Carleson norm} of a measurable function $g$ in ${\mathbb R}^{1+n}_+$ is\n$$\n \|g\|_* := \| C(W_{\infty}(|g|^2\/t))\|_\infty^{1\/2}.\n$$\n\end{defn}\n\n\nWe will use the modified Carleson norm to measure the size of perturbations\nof $t$-independent coefficients $A_0$.\n The proof of Lemma 2.2 in \cite{AA1} adapts to show that if there exists $A_{0}(x)$ with\n $\|w^{-1}(A-A_0)\|_* < \infty$, then it is unique and $w^{-1}A_{0}$ is bounded and accretive on $\clos{\textsf{R}(D)}$, so that we may call $A_{0}$ the trace of $A$.\n\n\n \begin{defn} \label{defn:XY}\n Define the Banach\/Hilbert spaces\n\begin{align*}\n {\mathcal X} & := \sett{ f\in L^2_{loc}({\mathbb 
R}^{1+n}_+; {\\mathbb C}^{m(1+n)})}{ \\|\\widetilde N_*(f)\\|<\\infty }, \\\\\n {\\mathcal C} & := \\sett{f\\in L^2_{loc}({\\mathbb R}^{1+n}_+; {\\mathbb C}^{m(1+n)}) }{ \\|C(W_{2}f)\\|<\\infty },\\\\\n {\\mathcal Y} &:= \\sett{f\\in L^2_{loc}({\\mathbb R}^{1+n}_+; {\\mathbb C}^{m(1+n)})}{\\int_0^\\infty \\| f_t \\|^2 \\, tdt < \\infty}, \\\\\n {\\mathcal Y}^* &:= \\sett{f\\in L^2_{loc}({\\mathbb R}^{1+n}_+; {\\mathbb C}^{m(1+n)}) }{\\int_0^\\infty \\| f_t \\|^2\\, \\frac{dt}t < \\infty},\n\\end{align*}\nwith the obvious norms.\n\\end{defn}\n\nWe use the same notation as in \\cite{AA1}, but of course, here all norms are weighted. Note that ${\\mathcal Y}^*$ is the dual space of\n${\\mathcal Y}$ with respect to the inner product $\\langle\\ , \\ \\rangle_{{\\underline w}}$ of ${\\mathcal H}=L^2({\\mathbb R}^{1+n}_+, d{\\underline w} ;{\\mathbb C}^{m(1+n)})$.\n\n\n\n\n\\begin{lem} \\label{lem:XlocL2}\n There are estimates\n$$\n \\sup_{t>0} \\frac 1t\\int_t^{2t} \\| f_s \\|^2\\, ds \\lesssim \\| \\widetilde N_*(f) \\|^2 \\lesssim \\int_0^\\infty \\| f_s \\|^2\\, \\frac {ds}s.\n$$\nIn particular ${\\mathcal Y}^*\\subseteq {\\mathcal X}$.\n\\end{lem}\n\n A fundamental quantity is the norm of multiplication operators mapping ${\\mathcal X}$ into ${\\mathcal Y}^*$ or ${\\mathcal Y}$ into ${\\mathcal C}$.\n\n\n\n\n\n\\begin{lem} \\label{lem:Carleson} The dual of ${\\mathcal X}$ with respect to the pairing $\\langle\\ , \\ \\rangle_{{\\underline w}}$ is ${\\mathcal C}$.\n For functions ${\\mathcal E}: {\\mathbb R}^{1+n}_+\\to {\\mathcal L}({\\mathbb C}^{m(1+n)})$, we have estimates\n$$\n \\|{\\mathcal E}\\|_\\infty \\lesssim \\|{\\mathcal E}\\|_* \\sim \\sup_{\\|f\\|_{\\mathcal X}=1}\\|{\\mathcal E} f\\|_{{\\mathcal Y}^*} \\sim \\sup_{\\|f\\|_{\\mathcal Y}=1}\\|{\\mathcal E} f\\|_{{\\mathcal C}}.\n$$\n\\end{lem}\n\n\n\\begin{proof} When $w=1$, the duality was established in \\cite{HR} and recently another more direct proof was given in \\cite{Huang}. 
This second proof carries over to the doubling measure setting (personal communication of Amenta and Huang). Next,\nthe first inequality is proved in a similar way as in \cite{AA1}. The equivalences for the pointwise multiplier operator norms were also established in \cite{HR}, and reproved in \cite{Huang} when $w=1$, and the latter proof extends to the doubling measure context as well (personal communication of Amenta and Huang).\n \end{proof}\n\n\n\n \begin{prop}\label{prop:X}\n Let $u\in W^{1,2}_{\text{{\rm loc}}}({\mathbb R}^{1+n}_+,w) $ be such that $\|\widetilde N_*(\nabla_{t,x}u)\|<\infty$. Then there exists $u_{0}\in \dot H^1({\mathbb R}^n,w)$ (as defined in the proof of Lemma \ref{lem:gradient}) such that $\|u_{t}-u_{0}\| \lesssim t$, $\|\nabla_{x}u_{0}\| \lesssim \|\widetilde N_*(\nabla_{t,x}u)\|$, and for $dw$ almost every $x_{0} \in {\mathbb R}^n$,\n\begin{equation}\n\label{eq:pointwiset}\n \barint_{\hspace{-6pt}W(t,x_{0})} |u-u_{0}(x_{0})|^2\, d{\underline w} \leq t^2g(x_{0})\n\end{equation}\nwith $g\in L^1({\mathbb R}^n,w)$. Conversely, if $u_{0}\in \dot H^1({\mathbb R}^n,w)$, then $u=e^{t^2\Delta_{w}}u_{0}$ satisfies $\|\widetilde N_*(\nabla_{t,x}u)\| \lesssim \|\nabla_{x} u_{0}\|$.\n \end{prop}\n\n\begin{proof} The first part is the weighted version of a result in \cite{KP}.\nFirst, it is easy to show $\|u_{t}-u_{t'}\| \lesssim |t-t'|$ by using $u_{t}-u_{t'}= \int_{t'}^t \partial_{s}u_{s}\, ds$, the Cauchy--Schwarz inequality and the left-hand inequality in Lemma \ref{lem:XlocL2}. 
This gives the existence of $u_{0}\in L^2_{\text{{\rm loc}}}({\mathbb R}^n,w)$ with $\|u_{t}-u_{0}\| \lesssim t$ (observe that only the difference is in $L^2({\mathbb R}^n,w)$).\nNext, the Poincar\'e inequality and a telescopic sum argument imply\n$$\n\bigg |\barint_{\hspace{-6pt}W(t,x_{0})} u \, d{\underline w} - \barint_{\hspace{-6pt}W(t',x_{0})} u \, d{\underline w}\bigg| \le C\tau \widetilde N_*^1(\nabla_{t,x}u)(x_{0})\n$$\nwhenever $t,t'\le \tau$, up to using a non-tangential maximal function with appropriately large Whitney regions, where $\widetilde N_*^1$ is the analogue of $\widetilde N_*$ with $L^1$-averages. Thus, for every $x_{0}$ where $u_{0}(x_{0})$ exists,\n$$\n\bigg|\barint_{\hspace{-6pt}W(t,x_{0})} u \, d{\underline w} -u_{0}(x_{0})\bigg| \lesssim t \widetilde N_*^1(\nabla_{t,x}u)(x_{0})\n$$\nand $\widetilde N_*^1(\nabla_{t,x}u)\in L^2({\mathbb R}^n,w)$ by equivalences of norms if we change the Whitney regions. By the Poincar\'e inequality again,\n$$\n\barint_{\hspace{-6pt}W(t,x_{0})} \bigg|u-\barint_{\hspace{-6pt}W(t,x_{0})} u\bigg|^2\, d{\underline w} \le Ct^2\widetilde N_*(\nabla_{t,x}u)(x_{0})^2\n$$\nand we deduce \eqref{eq:pointwiset} on combining the last two inequalities.\n Finally, note that if $x_{0}, y_{0}$ are different points and $t = 10 (c_{0}+c_{1}) |x_{0}-y_{0}|$, then\n$$\n\bigg |\barint_{\hspace{-6pt}W(t,x_{0})} u \, d{\underline w} - \barint_{\hspace{-6pt}W(t,y_{0})} u \, d{\underline w}\bigg| \le C |x_{0}-y_{0}| \widetilde N_*^1(\nabla_{t,x}u)(x_{0})\n$$\nagain with slightly larger Whitney regions in the definition of $\widetilde N_*^1$, so that, combining with the\n inequalities above, we obtain\n$$\n|u_{0}(x_{0})-u_{0}(y_{0})| \le C |x_{0}-y_{0}|(\widetilde N_*^1(\nabla_{t,x}u)(x_{0})+ \widetilde N_*^1(\nabla_{t,x}u)(y_{0})).\n$$\nUsing the theory of Sobolev spaces on the complete doubling metric-measure space $({\mathbb R}^n, |\ |, w)$, it follows that $u_{0}\in 
\dot H^1({\mathbb R}^n,w)$ (identified with the Haj\l asz space), see \cite{HajlashKoskela}.\n\nThe converse will be proved after Theorem \ref{thm:NTmaxandaeCV}.\n\end{proof}\n\nAt this stage, we do not know if $\nabla_{t,x}u$ has almost everywhere limits or even strong $L^2(w)$ limits in the above averaged sense (although weak $L^2(w)$ convergence can be shown as in \cite{KP}). This will be the case, however, when $u$ is a solution of our systems.\nWe remark that in comparison, the space defined by $\int_0^\infty \|\nabla_{t,x}u_t\|^2\, tdt<\infty$ does not have a trace on ${\mathbb R}^n$.\n\nWith the above notation, we can state our main theorem for $t$-independent $B_{0}$, thus for semigroups only.\n\n\begin{thm}\label{thm:NTmaxandaeCV} Let $T=DB_{0}$ or $B_{0}D$.\nThen one has the estimate\n\begin{equation}\n\label{eq:Ntmax}\n\|e^{-t|T|}h\|_{{\mathcal X}} \sim \|h\| \sim \|\partial_{t}e^{-t|T|}h\|_{{\mathcal Y}} , \ \forall h\in \clos{\textsf{R}(T)}.\n\end{equation}\nFurthermore, for any $h\in {\mathcal H}$ (not just $\clos{\textsf{R}(T)}$), we have that the Whitney averages of $e^{-t|T|}h$ converge to $h$ in the $L^2$ sense, that is, for $dw$ almost every $x_{0}\in {\mathbb R}^n$,\n\begin{equation}\n\label{eq:CVae}\n\lim_{t\to 0}\ \barint_{\hspace{-6pt}W(t,x_{0})} |e^{-s|T|}h-h(x_{0})|^2\, d{\underline w}=0.\n\end{equation}\nIn particular, this implies the $dw$ almost everywhere convergence of Whitney averages\n\begin{equation}\n\label{eq:CVaew}\n\lim_{t\to 0}\ \barint_{\hspace{-6pt}W(t,x_{0})} e^{-s|T|}h\, d{\underline w}=h(x_{0}).\n\end{equation}\nMore generally, one can replace $e^{-s|T|}$ by any $\varphi(sT)$ where $\varphi$ is holomorphic and bounded in some bisector containing $\sigma(T)$ and satisfies $|\varphi(z)| \lesssim |z|^{-\alpha}$ and $|\varphi(z)-a| \lesssim |z|^\alpha$ for some $\alpha>0$, $a\in {\mathbb C}$. 
In this case, convergence is towards $ah$, and only the upper bound $\|\varphi({tT})h\|_{{\mathcal X}} \lesssim \|h\| $ holds if $a=0$.\n\end{thm}\n\nThe proof of this theorem will be given in Section \ref{sec:NTmax}.\nThe last equivalence in \eqref{eq:Ntmax} is nothing but \eqref{eq:psiT}; we include it here for completeness.\n\nIf $B_{0}=I$, then $T^2=D^2= \begin{bmatrix} -\Delta_{w} & 0 \\ 0 & -\nabla {\text{{\rm div}}}_{w} \end{bmatrix}$, so that $$\nabla_{x} e^{t^2\Delta _{w}}u_{0}= -\bigg(De^{-t^2D^2}\begin{bmatrix} u_{0} \\ 0 \end{bmatrix}\bigg)_{{\scriptscriptstyle \parallel}}= \bigg(e^{-t^2D^2}\begin{bmatrix} 0 \\ \nabla_{x}u_{0} \end{bmatrix}\bigg)_{{\scriptscriptstyle \parallel}}\n$$\nand\n$$\partial_{t} e^{t^2\Delta _{w}}u_{0}= \bigg(2tDe^{-t^2D^2}\begin{bmatrix} 0 \\ \nabla_{x}u_{0} \end{bmatrix}\bigg)_{{\scriptscriptstyle\perp}}.\n$$\nThus, $\|\widetilde N_*(\nabla_{t,x} e^{t^2\Delta _{w}}u_{0})\| \lesssim \|\nabla_{x}u_{0}\|$ follows from this result, proving the converse statement in Proposition \ref{prop:X}.\n\nWe observe that only the weak type bound $\|\widetilde N_*(e^{-t|T|}h) \|_{L^{2,\infty}(w)} \lesssim \|h\|$ holds if $h\in \textsf{N}(T)$. Concerning the convergence \eqref{eq:CVae}, this is new even when $w=1$ for $T=DB_{0}$ in this generality.\nWhat was proved in \cite{AA2} is \eqref{eq:CVae} for $|B_{0}e^{-t|DB_{0}|}h-(B_{0}h)(x_{0})|^2$ (which is also true in this situation), and the removal of $B_{0}$ was done only when $B_{0}^{-1}$ is given by pointwise multiplication. It turns out this is not necessary. This will yield the almost everywhere limits in full generality in Theorem \ref{apriori_HSNeumann} as compared to \cite{AA2}.\n\n\begin{rem}\label{rem} The almost everywhere limit \eqref{eq:CVaew} is stated with respect to $d{\underline w}$, which is natural. 
However, as they are derived from the weighted $L^2({\\underline w})$ limits \\eqref{eq:CVae}, using that ${\\underline w}\\in A_{2}$, the unweighted $L^1$ averages also converge to $0$ almost everywhere. This means that \\eqref{eq:CVaew} also holds with Lebesgue measure replacing $d{\\underline w}$.\n\\end{rem}\n\n The next two theorems are for the $t$-dependent $S_{A}$ and $\\widetilde S_{A}$. Note that we may rewrite $S_{A}f=S({\\mathcal E} f):=S_{{\\mathcal E}}f$ and $\\widetilde S_{A}f= \\widetilde S({\\mathcal E} f):=\\widetilde S_{{\\mathcal E}}f$ where ${\\mathcal E}$ need not be related to $A$. We use this notation in what follows.\n\n\\begin{thm}\\label{thm:estSA} Assume that $ \\|{\\mathcal E}\\|_*<\\infty$. Then we have the following estimates for arbitrary $f\\in {\\mathcal X}$.\n\\begin{equation}\n\\|S_{{\\mathcal E}}f\\|_{{\\mathcal X}}\\lesssim \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal X}}.\n\\end{equation}\nThe function $h^-:= -\\int_0^\\infty e^{sDB_{0}}E_0^-D {\\mathcal E}_s f_s ds$ belongs to $ E_0^-{\\mathcal H}={\\mathcal H}^-_{DB_{0}}$ and\n\\begin{equation}\n\\|h^-\\|\\lesssim \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal X}},\n\\end{equation}\n\\begin{equation}\n\\|S_{{\\mathcal E}}f-e^{tDB_{0}} E_0^- h^-\\|_{{\\mathcal Y}^*}\\lesssim \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal X}},\n\\end{equation}\n\\begin{equation} \\label{eq:SAavlim}\n\\lim_{t\\to 0} t^{-1} \\int_t^{2t} \\| (S_{{\\mathcal E}}f)_{s} -h^- \\|^2 \\,ds =0=\n\\lim_{t\\to \\infty} t^{-1} \\int_t^{2t} \\| (S_{{\\mathcal E}}f)_s \\|^2 ds,\n\\end{equation}\nand\n\\begin{equation}\n\\label{eq:CVaeSA}\n\\lim_{t\\to 0}\\ \\barint_{\\hspace{-6pt}W(t,x_{0})} |S_{{\\mathcal E}}f-h^-(x_{0})|^2\\, d{\\underline w}=0, \\ \\mathrm{for \\ a.e.} \\ x_{0}\\in {\\mathbb R}^n.\n\\end{equation}\nMoreover, $\\tilde h^-:= -\\int_0^\\infty e^{sB_{0}D}\\widetilde E_0^- {\\mathcal E}_s f_s \\, ds$ satisfies $D\\tilde h^-=h^-\\in E_0^-{\\mathcal H}$,\n\\begin{equation} \\label{eq:SAavlimint}\n \\| 
(\\widetilde S_{{\\mathcal E}}f)_{t} -\\tilde h^- \\| \\lesssim t \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal X}}.\n\\end{equation}\nIn addition, if $\\|{\\mathcal E}\\|_*$ is sufficiently small\n and ${\\mathcal E}$ satisfies the $t$-regularity condition\n$\n \\|t\\partial_{t}{\\mathcal E}\\|_{*}<\\infty,\n$\n then\n \\begin{equation}\n \\|\\partial_t (S_{{\\mathcal E}} f)\\|_{\\mathcal Y}\\lesssim (\\|{\\mathcal E}\\|_* + \\|t\\partial_t {\\mathcal E}\\|_*) \\|f\\|_{\\mathcal X} + \\|{\\mathcal E}\\|_\\infty \\|\\partial_t f\\|_{{\\mathcal Y}},\n\\end{equation}\nand the map $t\\mapsto (S_{{\\mathcal E}}f)_{t}$ is continuous into ${\\mathcal H}$ if $\\|f\\|_{\\mathcal X} + \\|\\partial_t f\\|_{{\\mathcal Y}}<\\infty$, with improved limits\n\\begin{equation} \\label{eq:SAavlimimp}\n\\lim_{t\\to 0} \\| (S_{{\\mathcal E}}f)_{t} -h^- \\| =0=\n\\lim_{t\\to \\infty} \\| (S_{{\\mathcal E}}f)_t \\| .\n\\end{equation}\n \\end{thm}\n\n\n\n\n\n\\begin{thm}\\label{thm:esttSA} Assume that $ \\|{\\mathcal E}\\|_*<\\infty$. 
Then we have the following estimates for arbitrary $f\\in {\\mathcal Y}$.\n\\begin{equation}\n\\|S_{{\\mathcal E}}f\\|_{{\\mathcal Y}}\\lesssim \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal Y}}.\n\\end{equation}\nThe operator $\\widetilde S_{{\\mathcal E}}$ maps ${\\mathcal Y}$ into $C([0,\\infty); {\\mathcal H})$ with\n\\begin{equation}\n\\sup_{t\\ge 0}\\|(\\widetilde S_{{\\mathcal E}}f)_{t}\\| \\lesssim \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal Y}}.\n\\end{equation}\nMoreover, $\\tilde h^-:= -\\int_0^\\infty e^{sB_{0}D}\\widetilde E_0^- {\\mathcal E}_s f_s\\, ds \\in \\widetilde E_0^-{\\mathcal H}= {\\mathcal H}^-_{B_{0}D}$,\n\\begin{equation} \\label{eq:tSAavlimint}\n \\lim_{t\\to 0}\\| (\\widetilde S_{{\\mathcal E}}f)_{t} -\\tilde h^- \\|=0 =\\lim_{t\\to\\infty} \\| (\\widetilde S_{{\\mathcal E}}f)_{t} \\|.\n\\end{equation}\nFurthermore, if $p<2$ and $\\widetilde N_*^p$ is the $p$-modified version of $\\widetilde N_*$ obtained by taking $W_{p}$ functionals on Whitney regions, then\n\\begin{equation}\n \\|\\widetilde N_*^p(\\widetilde S_{{\\mathcal E}}f)\\| \\lesssim \\|{\\mathcal E}\\|_*\\|f\\|_{{\\mathcal Y}}\n \\end{equation}\nand\n\\begin{equation}\n\\label{eq:CVaetSAint}\n\\lim_{t\\to 0}\\ \\barint_{\\hspace{-6pt}W(t,x_{0})} |\\widetilde S_{{\\mathcal E}}f-\\tilde h^-(x_{0})|^p\\, d{\\underline w} =0 , \\ \\mathrm{for \\ a.e.} \\ x_{0}\\in {\\mathbb R}^n.\n\\end{equation}\n\n\\end{thm}\n\n\n\nThese two theorems can be proved following the corresponding results in Sections 6,~7,~8,~9 and 10 of \\cite{AA1} (some of the arguments were simplified in \\cite{R}) and, concerning the almost everywhere convergence limits \\eqref{eq:CVaeSA} and \\eqref{eq:CVaetSAint}, in Section 15 of \\cite{AA2}: they hold in a doubling weighted context, as the arguments rely only on maximal functions and Carleson estimates. The inequality \\eqref{eq:SAavlimint} is not proved in \\cite{AA1} and merely sketched in \\cite[Section 13]{AA2}, but is easy to prove following the same decompositions as there. 
We shall not give details. We just mention that $\\tilde h^-$ in Theorem \\ref{thm:estSA} is not an element of ${\\mathcal H}$: it is only defined as a limit of the integrals truncated away from 0 and $\\infty$, in the sense that $-\\int_\\varepsilon^R De^{sB_{0}D}\\widetilde E_0^- {\\mathcal E}_s f_s \\, ds$ converges to $h^-$ in ${\\mathcal H}$. Thus, only the difference $(\\widetilde S_{{\\mathcal E}}f)_{t} -\\tilde h^- $ in \\eqref{eq:SAavlimint} makes sense in ${\\mathcal H}$. The scalar part of $\\tilde h^-$ belongs to the homogeneous Sobolev space $\\dot H^1({\\mathbb R}^n,w;{\\mathbb C}^m)$ (as defined in the proof of Lemma \\ref{lem:gradient}) and as such is also an $L^2_{loc}(w)$ function.\n\n\n\n\n\n\\subsection{{\\em A priori} estimates}\\label{sec:apriori}\n\nIn this subsection, we derive \\textit{a priori} estimates for solutions of ${\\text{{\\rm div}}} A \\nabla u=0$ with $\\nabla u \\in {\\mathcal X}$ or ${\\mathcal Y}$. Again, these are obtained as in \\cite{AA1}, together with \\cite{AA2} for the almost everywhere statements and the improvements noted in Theorem \\ref{thm:NTmaxandaeCV}.\n\n\\begin{thm} \\label{apriori_HSNeumann}\n Consider coefficients $w^{-1}A\\in L^\\infty({\\mathbb R}^{1+n}_+; {\\mathcal L}({\\mathbb C}^{m(1+n)}))$ such that $w^{-1}A$ is accretive on ${\\mathcal H}^{0}$ and assume there exist $t$-independent measurable coefficients $A_0$ such that\n $\\|w^{-1} (A-A_0) \\|_* <\\infty$ or equivalently that $ \\|{\\mathcal E}\\|_*\\sim \\| w^{-1}(A-A_{0})\\|_{*}<\\infty$ where ${\\mathcal E}=B_{0}-B$ and $B=\\widehat {w^{-1}A}, B_{0}=\\widehat{w^{-1}A_{0}}$.\n\n Let $u$\n be a weak solution of\n ${\\text{{\\rm div}}} A\\nabla u=0$ in $\\mathbb{R}^{1+n}_+$ with $\\|\\widetilde N_*(\\nabla_{t,x}u)\\| <\\infty$.\n Then\n$$\n \\lim_{t\\to 0} t^{-1} \\int_t^{2t} \\| \\nabla_{s,x} u_s - g_0 \\|^2 ds =0=\n \\lim_{t\\to \\infty} t^{-1} \\int_t^{2t} \\| \\nabla_{s,x} u_s \\|^2 ds,\n$$\nfor some $g_0 \\in L^2({\\mathbb R}^n ,w;{\\mathbb 
C}^{m(1+n)})$, with estimate\n$\\|g_0\\|\\lesssim\\|\\widetilde N_*(\\nabla_{t,x}u)\\|$, which we call the gradient of $u$ at the boundary, and we set\n$\\nabla_{t,x}u|_{t=0}:=g_{0}$.\nFurthermore, one has that for $dw$ almost every $x_{0}\\in {\\mathbb R}^n$,\n\\begin{equation}\n\\label{eq:CVaewgradsol}\n\\lim_{t\\to 0}\\ \\barint_{\\hspace{-6pt}W(t,x_{0})} \\nabla_{s,x} u \\, d{\\underline w}=g_{0}(x_{0}).\n\\end{equation}\nAll three limits hold with $\\nabla u, g_{0}$ replaced by the $w$-normalized conormal gradient $f= \\nabla_{w^{-1}A}u$ and $f_{0}=\\begin{bmatrix}\n (w^{-1}A_{0}g_{0})_{\\perp} \\\\\n ( g_{0})_{{\\scriptscriptstyle \\parallel}}\n\\end{bmatrix}:=\\nabla_{w^{-1}A}u|_{t=0}$ (in particular, they hold for the $w$-normalized conormal derivative $\\partial_{\\nu_{w^{-1}A}}u$). Moreover, one has the\nrepresentation\n\\begin{equation}\n\\label{eq:representationRegNeu}\n\\nabla_{w^{-1}A}u = e^{- t DB_{0}}h^+ + S_{A}(\\nabla_{w^{-1}A}u),\n\\end{equation}\nfor a unique $h^+\\in {\\mathcal H}^+_{DB_{0}}$ and\n\\begin{equation}\n\\label{eq:representationRegNeu1}\n\\nabla_{w^{-1}A}u|_{t=0} = h^+ + h^-, \\quad h^-=-\\int_0^\\infty e^{sDB_{0}}E_0^-D {\\mathcal E}_s (\\nabla_{w^{-1}A}u)_s ds.\n\\end{equation}\nFinally, there exists $u_{0}\\in \\dot H^1(w)$ (as defined in Lemma \\ref{lem:gradient}) such that $\\nabla_{x}u_{0}= (g_{0})_{{\\scriptscriptstyle \\parallel}}$ and one has $\\|u_{t}-u_{0}\\|\\lesssim t$ and for $dw$ almost every $x_{0}\\in {\\mathbb R}^n$\n\\begin{equation}\n\\label{eq:CVaewsolreg}\n\\lim_{t\\to 0}\\ \\barint_{\\hspace{-6pt}W(t,x_{0})} u \\, d{\\underline w}=u_{0}(x_{0}).\n\\end{equation}\nRemark \\ref{rem} about replacing $d{\\underline w}$ by Lebesgue measure in the almost everywhere limit applies here too.\n\n\\end{thm}\n\n\n\n\n\\begin{thm} \\label{apriori_HSDir}\n Consider coefficients $w^{-1}A\\in L^\\infty({\\mathbb R}^{1+n}_+; {\\mathcal L}({\\mathbb C}^{m(1+n)}))$ such that $w^{-1}A$ is accretive on ${\\mathcal H}^{0}$ and 
assume there exist $t$-independent measurable coefficients $A_0$\n such that $\\| w^{-1}(A-A_0) \\|_* <\\infty$, or equivalently that $ \\|{\\mathcal E}\\|_*\\sim \\| w^{-1}(A-A_{0})\\|_{*}<\\infty$ where ${\\mathcal E}=B_{0}-B$ and $B=\\widehat {w^{-1}A}, B_{0}=\\widehat{w^{-1}A_{0}}$.\n\n Let $u$ be a weak solution of\n ${\\text{{\\rm div}}} A\\nabla u=0$ in $\\mathbb{R}^{1+n}_+$ and assume that\n $\\int_0^\\infty \\|\\nabla_{t,x}u\\|^2\\, tdt<\\infty$.\n Then $u= \\hat u+c$ almost everywhere,\n for a unique constant $c\\in {\\mathbb C}^m$ and $\\hat u\\in C([0,\\infty); L^2({\\mathbb R}^n,w;{\\mathbb C}^m))$ given by\n $\\hat u= -v_{\\perp}$ with\n \\begin{equation}\n\\label{eq:representationDir}\nv= e^{- t B_{0}D } \\tilde h^+ + \\widetilde S_{A}(\\nabla_{w^{-1}A}u),\n\\end{equation}\nfor a unique $ \\tilde h^+\\in \\widetilde E_{0}^+{\\mathcal H}$. Moreover,\n\\begin{equation}\n\\label{eq:representationDir1}\nv_{0}= \\tilde h^+ + \\tilde h^-\n\\ \\mathrm{with}\\ \\tilde h^-= -\\int_0^\\infty e^{sB_{0}D}\\widetilde E_0^- {\\mathcal E}_s (\\nabla_{w^{-1}A}u)_{s}\\, ds,\n\\end{equation}\nand we call $-v$ the conjugate system associated to $u$. 
In addition, we have $Dv=\\nabla_{w^{-1}A}u$.\n\n Identifying the functions $u$ and $\\hat u+c$, we have\n limits\n$$\n \\lim_{t\\to 0} \\| u_t -\\hat u_0-c \\| =0=\n \\lim_{t\\to \\infty} \\| u_t -c\\|,\n$$\nfor $\\hat u_0= -(\\tilde h^+)_{\\perp}\\in L^2({\\mathbb R}^n, w;{\\mathbb C}^m)$, and we have the estimates\n$$\n\\|\\hat u_{0}\\|\\lesssim \\max(\\|\\widetilde N_*(\\hat u)\\|, \\sup_{t>0}\\|\\hat u_t \\|)\\lesssim \\bigg(\\int_0^\\infty \\|\\nabla_{t,x}u\\|^2\\, tdt\\bigg)^{1\/2}.\n$$\nFinally, for $dw$ almost every $x_{0}\\in {\\mathbb R}^n$,\n\\begin{equation}\n\\label{eq:CVaewsoldir}\n\\lim_{t\\to 0}\\ \\barint_{\\hspace{-6pt}W(t,x_{0})} u \\, d{\\underline w}=u_{0}(x_{0}).\n\\end{equation}\n Remark \\ref{rem} about replacing $d{\\underline w}$ by Lebesgue measure in the almost everywhere limit applies here too.\n\n\n\\end{thm}\n\nThe representation formula suggests a possible construction of solutions given $\\tilde h^+$, provided $\\|{\\mathcal E}\\|_{*}$ is sufficiently small. This is what leads to well-posedness results.\n\n\\begin{rem}\nWe also have the representation \\eqref{eq:representationRegNeu} with both $e^{-t|DB_{0}|}$ and $h^+$ interpreted in a suitable sense with Sobolev spaces of order $s=-1$. This point of view is developed more systematically in \\cite{R} when $w=1$ and with Sobolev regularity $-1\\le s \\le0$. We refer the reader there to make the straightforward adaptation, as it is again an abstract argument. We just warn the reader that the ${\\mathcal E}$ in \\cite{R} is not exactly the same as ours because the author assumed pointwise accretivity to simplify matters. 
One should use representation \\eqref{eq:firstformalSAdefn} for $S_{A}$ instead.\n\\end{rem}\n\n\n\\begin{cor}\\label{cor:aentcv}\nAssume that $A$ satisfies $\\| w^{-1}(A-A_0) \\|_* <\\infty$ for some $t$-independent $A_{0}$ and is such that all weak solutions $u$ to the system ${\\text{{\\rm div}}} A \\nabla u=0$ in a ball $B\\subseteq \\mathbb{R}^{1+n}_+$ satisfy\n the local boundedness property\n$$\n\\sup_{\\alpha B}|u| \\le C \\left( \\,\\, \\barint_{\\hspace{-6pt}\\beta B} |u|^2 d{\\underline w}\\right)^{1\/2},\n$$\nfor any fixed constants $\\alpha<\\beta<1$, with $C$ independent of $u$ and $B$. Then any\nweak solution $u$ with $\\int_0^\\infty \\|\\nabla_{t,x}u\\|^2\\, tdt<\\infty$ or $\\|\\widetilde N_*(\\nabla_{t,x}u)\\|<\\infty$ converges non-tangentially almost everywhere to its boundary trace.\n\\end{cor}\n\nThe proof is a straightforward consequence of the more precise almost everywhere convergences\nwe stated in the previous section. We skip the details.\nThis result applies in particular to real equations as a consequence of \\cite{FKS}.\n\n\n\n\\section{Well-posedness}\\label{sec:solvability}\n\n\\subsection{Formulation and general results}\n\n\n\n\\begin{defn} Fix $w\\in A_{2}({\\mathbb R}^n)$. 
Consider degenerate coefficients $A$ with $w^{-1}A\\in L^\\infty({\\mathbb R}^{1+n}_+; {\\mathcal L}({\\mathbb C}^{m(1+n)}))$ such that $w^{-1}A$ is accretive on ${\\mathcal H}^{0}$.\n \\begin{itemize}\n \\item By the Dirichlet problem with coefficients $A$ being well-posed, we mean that\n given $\\varphi\\in L^2({\\mathbb R}^n,w;{\\mathbb C}^m)$, there is a unique weak solution $u$\n solving (\\ref{eq:divform}), with $ \\int_0^\\infty \\|\\nabla_{t,x} u\\|^2\\, tdt<\\infty$\n and trace $u_0= \\varphi$.\n \\item By the regularity problem with coefficients $A$ being well-posed, we mean that\n given $\\varphi\\in L^2({\\mathbb R}^n,w; {\\mathbb C}^{mn})$, where $\\varphi$ satisfies ${\\text{{\\rm curl}}}_x \\varphi=0$, there is a weak solution $u$,\n unique modulo constants,\n solving (\\ref{eq:divform}), with $\\|\\widetilde N_*(\\nabla_{t,x} u)\\|<\\infty$\n and such that $\\nabla_{x}u|_{t=0}= \\varphi$.\n \\item By the Neumann problem with coefficients $A$ being well-posed, we mean that\n given $\\varphi\\in L^2({\\mathbb R}^n,w;{\\mathbb C}^m)$, there is a weak solution $u$,\n unique modulo constants,\n solving (\\ref{eq:divform}), with $\\|\\widetilde N_*(\\nabla_{t,x} u)\\|<\\infty$\n and such that $\\partial_{\\nu_{w^{-1}A}}u|_{t=0}= \\varphi$.\n\\end{itemize}\nWe write $A \\in$ WP(BVP) if the corresponding boundary value problem (BVP) is well-posed with coefficients $A$.\n\\end{defn}\n\n\n\n\n\nWe remark that the definition does not include almost everywhere requirements. For the regularity and Neumann problems, one can make sense of the trace in a weak sense, but for the Dirichlet problem, the trace may not even make sense. 
However, as soon as we assume $\\|w^{-1}(A-A_{0})\\|_{*}<\\infty$, which will be the case here, we know exactly the meaning of the trace from the results in Section \\ref{sec:apriori}.\n\nThe most important observation following the \\textit{a priori} estimates in Theorems \\ref{apriori_HSNeumann} and \\ref{apriori_HSDir} is the fact that in the $t$-independent coefficient case, we completely identify the trace spaces: $ {\\mathcal H}^+_{DB}$ is the trace space of $w$-normalized conormal gradients for solutions with $\\|\\widetilde N_*(\\nabla_{t,x} u)\\|<\\infty$;\n$ {\\mathcal H}^+_{BD}$ is the trace space of conjugate systems $v$ for solutions with $ \\int_0^\\infty \\|\\nabla_{t,x} u\\|^2\\, tdt<\\infty$. In each case this is an isomorphism.\n\nThis leads to the following characterization of well-posedness.\n\n\\begin{thm} Consider coefficients $w^{-1}A\\in L^\\infty({\\mathbb R}^{1+n}_+; {\\mathcal L}({\\mathbb C}^{m(1+n)}))$ such that $w^{-1}A$ is accretive on ${\\mathcal H}^{0}$. Assume that $A$ has $t$-independent coefficients. Let $B=\\widehat {w^{-1}A}$.\nThen $A\\in$ WP(Reg)\/WP(Neu)\/WP(Dir) if and only if\n\\begin{align*}\n {\\mathcal H}^+_{DB} \\longrightarrow \\sett{g\\in L^2({\\mathbb R}^n,w;{\\mathbb C}^{mn})}{{\\text{{\\rm curl}}}_x g=0} &: f\\longmapsto f_{\\scriptscriptstyle \\parallel}, \\\\\n {\\mathcal H}^+_{DB} \\longrightarrow L^2({\\mathbb R}^n,w;{\\mathbb C}^m) &: f\\longmapsto f_{\\perp}, \\\\\n {\\mathcal H}^+_{BD} \\longrightarrow L^2({\\mathbb R}^n,w;{\\mathbb C}^m) &: f\\longmapsto f_{\\perp},\n\\end{align*}\nare isomorphisms, respectively.\n\\end{thm}\n\n\nObserve the change of space in the third line.\n\nLet us mention a connection to so-called Rellich estimates. 
The isomorphisms imply the Rellich estimates\n\\begin{align*}\n \\|f_{\\perp}\\|\\lesssim \\|f_{{\\scriptscriptstyle \\parallel}}\\|, \\quad \\forall f \\in {\\mathcal H}^+_{DB}, \\\\\n \\|f_{{\\scriptscriptstyle \\parallel}}\\|\\lesssim \\|f_{\\perp}\\|, \\quad \\forall f \\in {\\mathcal H}^+_{DB}, \\\\\n \\|f_{{\\scriptscriptstyle \\parallel}}\\|\\lesssim \\|f_{\\perp}\\|, \\quad \\forall f \\in {\\mathcal H}^+_{BD},\n\\end{align*}\nrespectively. Assuming the Rellich estimates is not enough to conclude well-posedness because this only gives injectivity with closed range. The surjectivity usually follows from a continuity argument starting with a situation where one knows surjectivity. Thus, if, in a connected component of validity of a Rellich estimate, there is one $B$ for which surjectivity holds, then surjectivity, and hence well-posedness of the corresponding BVP, holds for all $B$ in this connected component. Usually one considers $B=I$ so that, here, $A=wI$. See \\cite{AAM} for a discussion which applies \\textit{in extenso}. We remark that this depends on the continuous dependence on $B\\in L^\\infty$ of the projections $E_{0}^+$ and $\\widetilde E_{0}^+$, which follows from Theorem \\ref{th:main}.\nLet us also mention the duality principle between Dirichlet and Regularity, whose proof is the same as that of Proposition 17.6 in \\cite{AA2}.\n\n\\begin{thm}\\label{th:dualityDirReg} Let $w^{-1}A\\in L^\\infty({\\mathbb R}^{1+n}_+; {\\mathcal L}({\\mathbb C}^{m(1+n)}))$ be such that $w^{-1}A$ is accretive on ${\\mathcal H}^{0}$. Assume there exist $t$-independent measurable coefficients $A_0$\n such that $\\| w^{-1}(A-A_0) \\|_* <\\varepsilon$. If $\\varepsilon$ is small enough, then $A\\in$ WP(Dir) if and only if $A^*\\in$ WP(Reg).\n\\end{thm}\n\n\n\n\nWe now turn to perturbation results for both $t$-dependent and $t$-independent coefficients. 
Adapting \\cite{AAH, AAM}, see especially Lemma 4.3 in \\cite{AAM}, one obtains that\neach WP(BVP) is open under perturbation of $t$-independent coefficients in $wL^\\infty$.\nWe refer to \\cite{AA1} for the proofs of the $t$-dependent perturbations, which carry over without change to our setting. We gather these observations together in the following statement.\n\n\n\\begin{thm} \\label{thm:Nellie} Assume the Neumann problem with $t$-independent $A_0$ is well-posed.\n Then there exist $\\varepsilon_{0}>0$ and $ \\varepsilon_{1}>0$ such that if $\\| w^{-1}(A-A_1) \\|_* <\\varepsilon_{1}$ and $A_{1}$ has $t$-independent coefficients with $\\| w^{-1}(A_{1}-A_0) \\|_\\infty <\\varepsilon_{0}$, then the Neumann problem with coefficients $A$ is well-posed.\n\nThe corresponding result holds when the Neumann problem is replaced by the regularity problem.\n\nMoreover, for all such $A$, the solutions $u$ of the BVP satisfy $$\n \\|\\widetilde N_*(\\nabla_{t,x} u)\\| \\approx \\|g_0\\| \\approx \\|\\varphi\\|,\n $$\n with $\\varphi$ the $w$-normalized Neumann data or the regularity data, and one has the limits and regularity estimates as described in Theorem \\ref{apriori_HSNeumann}.\n\n\n\n\\end{thm}\n\n\nWith the duality principle above, we obtain the following.\n\n\\begin{thm}\\label{thm: DirLip}\n Assume the Dirichlet problem with $t$-independent $A_0$ is well-posed.\n Then there exist $\\varepsilon_{0}>0$ and $\\varepsilon_{1}>0$ such that if $\\| w^{-1}(A-A_1) \\|_* <\\varepsilon_{1}$ and $A_{1}$ has $t$-independent coefficients with $\\| w^{-1}(A_{1}-A_0) \\|_\\infty <\\varepsilon_{0}$, then the Dirichlet\n problem with coefficients $A$ is well-posed.\nMoreover, one has $$\n \\|\\widetilde N_*(u)\\| \\approx \\sup_{t>0}\\|u_t \\| \\approx\n \\bigg(\\int_0^\\infty \\|\\nabla_{t,x} u\\|^2\\, tdt \\bigg)^{1\/2}\\approx \\|\\varphi\\|,\n $$\n if $\\varphi$ is the Dirichlet data,\nand one has the limits and regularity estimates described in Theorem 
\\ref{apriori_HSDir}.\n\\end{thm}\n\n\n\n\n\n\n\n\n\\subsection{Well-posedness for $t$-independent hermitian coefficients}\n\n\n\n\n\\begin{prop} Assume that $A=A^*$ and that $A$ is $t$-independent and satisfies the usual degenerate boundedness and accretivity conditions on ${\\mathcal H}^{0}=\\clos{\\textsf{R}(D)}$. Then the regularity, Neumann and Dirichlet problems with coefficients $A$ are well-posed.\n \\end{prop}\n\n\\begin{proof} Let $B=\\widehat {w^{-1}A}$ and\n $f\\in E^+_0{\\mathcal H}={\\mathcal H}^+_{DB}$.\nTheorem \\ref{apriori_HSNeumann} in the case of $t$-independent coefficients implies that the vector field $F_t=e^{-tDB}f$ in ${\\mathbb R}^{1+n}_+$ is such that $\\partial_t F_t=-DB F_t$,\n$\\lim_{t\\rightarrow\\infty}F_t=0$ and $\\lim_{t\\rightarrow 0}F_t=f$.\nLet $N:= \\begin{bmatrix} -I & 0 \\\\ 0 & I\\end{bmatrix}$\nand note that $D N +N D=0$.\nNow, the definition of $B=\\widehat {w^{-1}A}$ and the Hermitian\ncondition $A^*=A$ imply $B^*N=NB$. Using the hermitian inner product $(\\ , \\ )$ for $dw$, we have\n\\begin{multline*}\n \\partial_t (N F_t, BF_t)\n = -(NDB F_t, BF_t)- (N F_t, BDB F_t) \\\\\n = -(NDB F_t, B F_t)- (DB^*N F_t, B F_t)\n = -((N D+ D N)B F_t, B F_t) =0.\n\\end{multline*}\nHence, integrating in $t$ and taking into account the limit at $\\infty$ gives us $(Nf,Bf)=0$. Thus, separating scalar and tangential parts, that is, using $(Nf,Bf)= -(f_{\\perp}, (Bf)_{\\perp})+(f_{{\\scriptscriptstyle \\parallel}}, (Bf)_{{\\scriptscriptstyle \\parallel}})$ and $(f,Bf)= (f_{\\perp}, (Bf)_{\\perp})+(f_{{\\scriptscriptstyle \\parallel}}, (Bf)_{{\\scriptscriptstyle \\parallel}})$, we obtain the Rellich equality:\n \\begin{equation} \\label{eq:rellich}\n(f,Bf)= 2(f_{\\perp}, (Bf)_{\\perp})=2 (f_{{\\scriptscriptstyle \\parallel}}, (Bf)_{{\\scriptscriptstyle \\parallel}}).\n\\end{equation}\nConsider first the Neumann problem. It follows from (\\ref{eq:rellich}) and the accretivity of $B$ on ${\\mathcal H}^{0}$ that\n$$\n \\kappa \\|f\\|^2 \\le \\re(f,Bf)= 2\\re(f_{\\perp}, (Bf)_{\\perp})\\le 2\\|B\\|_{\\infty} \\|f_{\\perp}\\| \\|f\\|.\n$$\nThis shows that $\\|f\\|\\lesssim \\|f_{\\perp}\\|$ for the Neumann map for any hermitian $A$, which implies that this map is injective with closed range. 
The continuity argument explained above implies that $A\\in $ WP(Neu) provided that $I\\in$ WP(Neu). That\n$I\\in$ WP(Neu) can be seen from the equality $\\|\\nabla (-\\Delta_{w})^{-1\/2}u\\|=\\|u\\|$ and\n$$\n\\text{{\\rm sgn}}(D)= \\begin{bmatrix} 0 & (-\\Delta_{w})^{-1\/2}{\\text{{\\rm div}}}_{w} \\\\ -\\nabla (-\\Delta_{w})^{-1\/2} & 0\\end{bmatrix}.\n$$\nThus, for $f\\in {\\mathcal H}^{0}$, $f\\in {\\mathcal H}^+_{D}$ if and only if $f_{{\\scriptscriptstyle \\parallel}}= -\\nabla (-\\Delta_{w})^{-1\/2}f_{\\perp}\n$, which in turn holds if and only if $f_{\\perp}= (-\\Delta_{w})^{-1\/2}{\\text{{\\rm div}}}_{w}f_{{\\scriptscriptstyle \\parallel}}$. This implies that the map used for solving the Neumann problem is invertible.\n\nThat $A\\in$ WP(Reg) is proved in a similar way. Then, by Theorem \\ref{th:dualityDirReg}, it follows that $A=A^* \\in$ WP(Dir).\n\\end{proof}\n\n\\subsection{Well-posedness with algebraic structure and $t$-independent coefficients}\n\n Recall that we write our coefficients $A$ as a $2 \\times 2$ block matrix. We say that it is {\\em block lower-triangular} if the upper off-diagonal block $ A_{\\perp{\\scriptscriptstyle \\parallel}}$ is 0, and {\\em block upper-triangular} if the lower block $A_{{\\scriptscriptstyle \\parallel}\\perp}$ is 0.\n\n\n\\begin{thm}\\label{thm:triangular} We assume that $A$ is $t$-independent and satisfies the usual degenerate boundedness and accretivity conditions.\n\\begin{itemize}\n \\item The Neumann problem with block lower-triangular coefficients $A$ is well-posed.\n \\item The regularity problem with block upper-triangular coefficients $A$ is well-posed. More generally, it suffices for the off-diagonal lower block of $A$ to be divergence free and have real entries.\n \\item The Dirichlet problem with block lower-triangular coefficients $A$ is well-posed. 
More generally, it suffices for the off-diagonal upper block to be divergence free and have real entries.\\end{itemize}\n\n\\end{thm}\n\nLet us clarify the statements above. The off-diagonal lower block is $ A_{{\\scriptscriptstyle \\parallel}{\\scriptscriptstyle\\perp}}= (A^{\\alpha,\\beta}_{i,0})_{i=1,\\ldots, n}^{\\alpha,\\beta= 1,\\ldots, m}$. Real entries means that all these coefficients are real: it guarantees that $A$ and $$A'=A-\\begin{bmatrix} 0 & -A_{{\\scriptscriptstyle \\parallel}{\\scriptscriptstyle\\perp}}^t \\\\ A_{{\\scriptscriptstyle \\parallel}{\\scriptscriptstyle\\perp}} & 0\\end{bmatrix}=\\begin{bmatrix} A_{{\\scriptscriptstyle\\perp}\\no} & A_{{\\scriptscriptstyle\\perp}{\\scriptscriptstyle \\parallel}}+A_{{\\scriptscriptstyle \\parallel}{\\scriptscriptstyle\\perp}}^t \\\\ 0 & A_{{\\scriptscriptstyle \\parallel}\\ta}\\end{bmatrix} $$\nhave the same accretivity bounds. The divergence free condition is $\\sum_{i=1}^n\\partial_{i}A^{\\alpha,\\beta}_{i,0}=0$\nfor all $\\alpha,\\beta$. It implies that weak solutions with coefficients $A$ or $A'$ are the same, as can be seen by integrating by parts. In other words, we can reduce matters to the special case where the off-diagonal lower block is zero.\nThis possibility does not appear to be available for the Neumann problem because the conormal derivative depends on the coefficients.\n\n The proof of this theorem is obtained by a line-by-line adaptation of \\cite{AMM} to the weighted setting using well-posedness of the three problems (modulo constants) in the class of energy solutions, that is, having finite energy $\\int_{\\mathbb{R}^{1+n}_+} |\\nabla u|^2 d{\\underline w}<\\infty$. We mention that to carry out the algebra there, one should replace the standard Riesz transforms by the Riesz transforms ${\\mathcal R}_{w}$ defined in Lemma \\ref{lem:gradient}. 
We leave details to the interested reader.\n\n\n\n\n\\section{Non-tangential maximal estimates and Fatou type results}\\label{sec:NTmax}\n\n\n\nRecall that ${\\underline w}$ is the $A_{2}$ weight on ${\\mathbb R}^{1+n}$ defined by ${\\underline w}(t,x)=w(x)$ and that ${\\underline w}$ and $w$ have identical $A_{2}$ constants.\nWriting equations ${\\text{{\\rm div}}} A \\nabla u =0$ as ${\\text{{\\rm div}}}_{{\\underline w}} (w^{-1}A\\nabla u)= 0$ allows one to carry some proofs to the degenerate case without much change from the non-degenerate case. We quote two results we will be using. The first is the usual Caccioppoli inequality with a completely analogous proof:\nall weak solutions $u$ in a ball $ B=B({\\bf x},r) \\subseteq \\mathbb{R}^{1+n}_+$ of ${\\text{{\\rm div}}} A \\nabla u=0$ enjoy the Caccioppoli inequality \\begin{equation}\n\\label{eq:caccio}\n{}\\int_{\\alpha B} |\\nabla u|^2 \\, d{\\underline w} \\le Cr^{-2} {}\\int_{\\beta B} |u|^2 \\, d{\\underline w}\n\\end{equation}\nfor any $0<\\alpha<\\beta<1$,\n with $C$ depending only on $\\|w^{-1}A\\|_{\\infty}$, the accretivity constant of $w^{-1}A$, $n$, $m$, $\\alpha$ and $\\beta$.\n\nThe second one is a corollary of this and Poincar\\'e inequalities: there exists $1\\le p<2$ such that all weak solutions as above satisfy the weak reverse H\\"older inequality\n$$\n\\bigg({}\\barint_{\\hspace{-6pt}\\alpha B} |\\nabla u|^2 \\, d{\\underline w}\\bigg)^{1\/2} \\le C \\bigg({}\\barint_{\\hspace{-6pt}\\beta B} |\\nabla u|^p \\, d{\\underline w}\\bigg)^{1\/p}.\n$$\nWe cannot take $p=2-\\varepsilon$ for every $\\varepsilon>0$, but this can be done for some $\\varepsilon>0$ depending only on the size of the $A_{2}$ constant of ${\\underline w}$. Then, one can use \\cite[Theorem 1.5]{FKS} which asserts that the gain of exponent in the Poincar\\'e inequality for an $A_{p}$ weight on ${\\mathbb R}^{1+n}$ is at least $p \\frac{1+n}n$. So for $p> \\frac {2n}{1+n}$ (and $p>2-\\varepsilon$) we are done.\n\n\nWe continue with the analogue of Lemma 10.3 in \\cite{AA1}.\n\n\\begin{lem} \\label{lem:lqoffdiag}\n Let $B$ be $t$-independent, bounded on ${\\mathcal H}$ and accretive on ${\\mathcal H}^{0}= \\clos{\\textsf{R}(D)}$. Let $T=DB$ or $BD$. 
Then there exists $1<q<2$ such that for all integers $N>0$, all $t>0$ and all sets $E,F\\subseteq {\\mathbb R}^n$ such that $\\text{{\\rm supp}}\\, f\\subseteq F$, one has\n$$\n\\|(I+ itT)^{-1}f\\|_{L^{q}(E,w)} \\le C_{N}\\bigg(1+\\frac{\\text{{\\rm dist}}\\,(E,F)}{t}\\bigg)^{-N}\\|f\\|_{q},\n$$\nwith $C_{N}$ independent of $f, t, E, F$. Here $\\text{{\\rm dist}}\\,(E,F):= \\inf\\sett{|x-y|}{x\\in E, y\\in F}$ and $\\|\\ \\|_{q}$ are the weighted $L^q$ norms.\n\\end{lem}\n\n\\begin{proof} It suffices to prove the lemma for $T=DB$ as then it holds for $T^*=B^*D$, and hence for $BD$ upon changing $B^*$ to $B$.\n\n For $q=2$, this is contained in Lemma \\ref{lem:odd}. By interpolation, it suffices to estimate the operator norm of $ (I+it DB)^{-1}$ on $L^q({\\mathbb R}^n,w;{\\mathbb C}^{m(1+n)})$, uniformly for $t$.\n\n To this end, assume that $(I+ it DB)\\tilde f= f$.\n As in Proposition~\\ref{prop:divformasODE}, but replacing $\\partial_t$ by $(it)^{-1}$, this equation is equivalent to\n$$\n\\begin{cases}\n (w^{-1} A\\tilde g)_{\\scriptscriptstyle\\perp} + it{\\text{{\\rm div}}}_w(w^{-1} A\\tilde g)_{\\scriptscriptstyle \\parallel} = (w^{-1} A g)_{\\scriptscriptstyle\\perp}, \\\\\n \\tilde g_{\\scriptscriptstyle \\parallel} - it \\nabla_x\\tilde g_{\\scriptscriptstyle\\perp} = g_{\\scriptscriptstyle \\parallel},\n\\end{cases}\n$$\nwhere $A, g$ and $ \\tilde g$ are related to $B, f $ and $ \\tilde f$ respectively, as in Proposition~\\ref{prop:divformasODE}.\nUsing the second equation to eliminate $\\tilde g_{\\scriptscriptstyle \\parallel}$ in the first shows that $\\tilde g_{\\scriptscriptstyle\\perp}$ satisfies the divergence form\nequation\n$$\n L_{t}\\tilde g_{\\scriptscriptstyle\\perp}:= \\begin{bmatrix} 1 & it{\\text{{\\rm div}}}_w \\end{bmatrix}\n (w^{-1}A)\n \\begin{bmatrix} 1 \\\\ it\\nabla_x \\end{bmatrix}\n \\tilde g_{\\scriptscriptstyle\\perp} =\n \\begin{bmatrix} 1 & it{\\text{{\\rm div}}}_w \\end{bmatrix}\n \\begin{bmatrix} w^{-1}A_{{\\scriptscriptstyle\\perp}\\no} g_{\\scriptscriptstyle\\perp} \\\\ - w^{-1}A_{{\\scriptscriptstyle \\parallel}\\ta} g_{\\scriptscriptstyle \\parallel} \\end{bmatrix}.\n$$\nLet $r(w)<2$ be the infimum of 
those exponents $q$ for which $w\\in A_{q}$. For $r(w)w_{\\max}$}{ $C^*$ :=\n $C$\\; } \n\n}\n\\caption{MaxWeight Algorithm without feedback}\n\\label{alg:1}\n} \n\\end{algorithm}\n\nAn issue with this algorithm is that the number of possible encoding combinations to be examined grows exponentially. If, for example, we assume that overhearing is possible for all receivers except the destinations, then the number of combinations is actually $2^N-1$, where $N$ is the number of source-destination pairs (or 2--hop flows). The question is then whether the computational overhead for the weights is prohibitively high. In subsection \\ref{sec:case-implement}, we explain how the list of weights is maintained in order to reduce the number of calculations per slot.\n\nAlgorithm 1 is throughput optimal under the condition that the aforementioned overhearing probabilities cannot be altered during transmissions. This happens when (i) the probabilities are 0 or 1, as in the Alice-Relay-Bob topology (and any other symmetric flow setting), or (ii) upon a decoding failure we reschedule the uplink transmission for the failed flow. The latter may arise in a TCP scenario. In the general case, however, whenever a particular encoded packet is not correctly decoded, the packet remains in the queue at the relay but extra feedback information is obtained. If for example $P_1 \\oplus P_2$ is not decoded by both receivers, the relay knows that these two packets are not overheard by receiver 2 and 1 respectively, and the proper action is to set the overhearing probabilities to zero and never encode these two specific packets again. 
The impact of feedback clearly biases the decoding probabilities.\n The knowledge state of each packet evolves in such a way that future states depend on the control action selected at present; as a result, in the general case, not only is Algorithm 1 not optimal, but it might perform quite badly when the overhearing probabilities are small. \n\n\n\\subsection{Algorithm 2: the case of two queues}\n\nAnother idea is to propose an algorithm which is not necessarily optimal, but manages to handle the feedback information successfully. In general, an algorithm should be able to predict the future effects of current control actions. Here we restrict our search to the category of the so-called myopic algorithms, trying to solve the problem given only the current state and disregarding the future. We consider the\nproblem of mixing only two flows. \n\nIn order to cope with feedback, we add two more knowledge states. Apart from newly arrived (unknown) packets, whose behavior is captured by known probabilities, we have a state for ``good'' packets (overheard by the other receiver) and one for ``bad'' packets (those not overheard by the other receiver). Thus, the system maintains the queues $Q_i^s$, where $i\\in\\{1,2\\}$ signifies the flow and $s\\in\\{u,g,b\\}$ signifies the state. 
The set of controls $\\mathcal{C}$ contains all controls that activate one or two queues with the constraint that no two queues from the same flow can be activated.\n\\begin{algorithm}[h]\n\\small{ \n\\KwIn\n{$Q_i^s, \\mu_i^s(C)$}\n\\KwOut\n{$C^*$}\n\nAt feedback time:\n\t\\begin{itemize}\n\t\t\\item For each packet that was not correctly decoded define whether it is good or bad.\n\t\t\\item Bad packets are directly sent to the MAC layer for transmission without coding.\n\t\t\\item Good packets are sent to the corresponding queue at the good state.\n\t\\end{itemize}\nAt decision time:\n\n$w_{\\max}:=0$\\;\n\n\\For\n{$C\\in \\mathcal{C}$}{\n\n $w(C):=\\sum_{C} Q_i^s \\mu_i^s(C)$\\;\n \n\t\\If\n\t{$w(C)>w_{\\max}$}{ $C^*$ :=\n $C$\\; } \n\n}\n\\caption{Myopic Algorithm with feedback}\n\\label{alg:2}\n}\n\\end{algorithm}\nThe packets are initially injected in the queues at the unknown state. Once a packet is not decoded properly, the relay classifies it as either good or bad based on feedback information. If bad, it is retransmitted without encoding (thus the queues $Q_i^{b}$ are not needed actually). If it is deemed as good, it is transferred to the corresponding queue at the good state ($Q_1^{g}$ or $Q_2^{g}$ depending on the flow it belongs to). When calculating average service rates, the packets at the good state have probability of overhearing equal to one. Apart from these alterations, algorithm 2 works in the same way as algorithm 1.\n\nIn \\cite{rawnet} it is shown that an enhanced queue length based algorithm solves optimally the joint NC and scheduling problem arising in intersession coding at the relay node. This solution might be costly in terms of resources, and therefore suboptimal algorithms might be preferred. For this reason, our framework serves as an ideal substrate for performing measurements of such algorithms.\n\n\\subsection{Algorithm 3: fixed threshold policy}\n\nFor reasons of performance comparison we define a third algorithm. 
This algorithm operates only with implicit ACKs and makes decisions based on principles used in the COPE framework. In this sense, it emulates COPE in its probabilistic mode. The important difference from our algorithms is that, instead of calculating average service rates, the $\\delta$--Fixed Threshold Policy ($\\delta$--FTP) simply marks the incoming packets with information about decoding opportunities. In order to do so, the overhearing probabilities $q_{i,j}$ are compared with a fixed threshold $\\delta\\in[0,1]$ and set to $1$ if they exceed the threshold, or to zero otherwise. \nAt each decision instance the algorithm selects the control that maximizes the number of transmitted packets.\n\n\\subsection{ NCRAWL algorithm implementation}\n\\label{sec:case-implement}\nNext, we demonstrate how to implement the three above algorithms on NCRAWL. \nFor all three cases, we configure NCRAWL at each node to maintain one queue per flow for incoming packets.\n\n\n{\\bf Implementing algorithm 1:} We first describe how one may organize queues in an efficient manner. Subsequently, we show how to utilize the queue information to apply NC. \n\\\\\n$\\bullet$ {\\bf \\em Organizing packet queues:} \nTo begin with, we dedicate one vector per control, which contains the identity of the involved queues (e.g. the flow it belongs to and\/or the state) and the identities of the packets enqueued at the involved queues. \n\nThe formed vectors are stored in a doubly linked list. \n Each vector is assigned a weight (or reward); the higher this weight, the higher the preference of the encoder for using the combination. \nThis weight is recalculated every time the backlog size of a member queue changes. \n The linked list is formed such that the head of the list always contains the current maximum weight.\n To keep processing overhead low, vectors are also directly indexed by their member queues; with this, the weight update process is fast. 
\n As one may expect, vectors as well as their linked list are all constructed during the NCRAWL updater write event. \n\\\\\n$\\bullet$ {\\bf \\em Applying NC operations:} \n Given the construction of the control list, the encoder event examines the head of the list, and further: \n(a) retrieves packets from their respective queues, \n (b) updates the vector weights (since the respective backlogs are decremented), \nand \n(c) sets the vector with the highest-weighted combination as the head of the list. \nThe latter is actually a process whose complexity scales slowly with the number of vector combinations, since each updated vector weight is just compared against the weight of the current head, and only takes its place if it is higher.\nRetrieved packets are subsequently combined using the NCRAWL \\emph{encode} library call, and the resulting encoded packet is scheduled for transmission.\n \n{\\bf Implementation considerations for algorithm 2:} \n This algorithm is similar to algorithm 1; however, it involves additional acknowledgment scheme logic. \nTherefore, for each flow NCRAWL now maintains two queues: \n (a) one with new incoming packets, and \n (b) one with packets that have been successfully logged as keys by fellow nodes, but have not reached their ultimate destinations\\footnote{For example, this could be due to the fact that the destination failed to properly decode a previously sent encoded packet.}. \nAlgorithm 2 exploits the NCRAWL acknowledgment scheme facility; this process groups the packet acknowledgment tokens which have been created for outgoing packets combined together in the same encoded packet. \nThis information is provided by NCRAWL to the developer. Algorithm 2 directly sends packets that have not yet reached their destinations; such packets are not reconsidered for encoding. 
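The control-selection and weight-maintenance logic just described can be sketched compactly. The following is a simplified model, not NCRAWL code: queue names, rates and the dictionary layout are illustrative, while the real implementation keeps the per-control vectors in a doubly linked list whose head always holds the current maximum weight.

```python
# Sketch of the max-weight control selection shared by algorithms 1
# and 2. A "control" activates a subset of queues; its weight is
# w(C) = sum_i Q_i * mu_i(C). All queue names and rates below are
# illustrative assumptions, not the NCRAWL API.

def control_weight(control, backlog, rate):
    """w(C) = sum over the queues activated by C of Q_i * mu_i(C)."""
    return sum(backlog[q] * rate[control][q] for q in control)

def best_control(controls, backlog, rate):
    """MaxWeight rule: pick the control with the largest weight."""
    return max(controls, key=lambda c: control_weight(c, backlog, rate))

# Two-flow example: serve one flow alone, or XOR both together.
backlog = {"Q1": 5, "Q2": 3}
rate = {
    ("Q1",): {"Q1": 1.0},
    ("Q2",): {"Q2": 1.0},
    ("Q1", "Q2"): {"Q1": 0.9, "Q2": 0.9},  # decoding succeeds w.p. 0.9
}
controls = list(rate)

chosen = best_control(controls, backlog, rate)  # the encoded combination wins
```

With these backlogs the encoded control scores $0.9\cdot5+0.9\cdot3=7.2$, beating either single-queue control, which is exactly the preference the encoder expresses by keeping that vector at the head of the list.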
\n\n However, the algorithm considers favorable (``good'') queues and ``unknown'' queues for the same flow separately when forming vectors. \n Note that the number of vectors formed with this algorithm grows considerably, compared to the simple maxweight algorithm described previously. Throughout our measurements we only consider the scenario of two flows and thus avoid the arising complexity. This issue is expected to be resolved in the future using the NCRAWL framework.\n\n{\\bf Algorithm 3 in NCRAWL:} \n For the implementation of the third algorithm we simply need to create vectors (i.e. controls or queue combinations) for which the decoding probability is nonzero, according to the user-defined threshold $\\delta$ and the channel quality. \nAs soon as packets are available in all queues that constitute a vector, they are combined and transmitted at once, without considering or updating the queue backlogs. This algorithm selects controls that mix the largest possible number of packets each time.\n\n We should note here that NCRAWL does not use any time-threshold policy towards increasing the backlog size of the incoming packet queues before deciding to send outgoing packets. On the other hand, COPE adopts such a design decision. \n\n With NCRAWL, queue backlogs will increase when the relay's outgoing packet rate is smaller than the incoming packet rate. In such cases, NC proves to be a remedy for router stability; if the NC algorithmic operations are supported by a lightweight implementation, the router capacity can be truly increased, as our measurements suggest. \n\\vspace{-0.1in}\n\\section{Evaluating our Framework}\n\\label{sec:measurements}\n\nIn this section, we evaluate NCRAWL in conjunction with the scheduling algorithms (NCRAWL + alg1, NCRAWL + alg2 and $\\delta$--FTP) described in section \\ref{sec:case}, in terms of both throughput and resource utilization. 
We begin by describing the wireless testbed infrastructure\\footnote{Our motivating experiments discussed in section \\ref{sec:introduction} were performed on 2 different testbeds. We have evaluated NCRAWL on both testbeds; here we present results for one of them.} and the configurations that we used to deploy experiments. Next, we quantify the {CPU} overhead that is introduced by each NCRAWL processing stage, under maximum traffic loads, and we compare total {CPU} utilization to: (i) the public COPE implementation, which uses an explicit acknowledgment scheme, and (ii) legacy IEEE 802.11b-g.\nWe then demonstrate that NCRAWL can support the theoretical gains even when coding opportunities lead to more than 2-packet combinations. Finally, we deploy experiments that demonstrate how the proposed algorithms perform in cases with variable link qualities and different rates.\n\n\\subsection{Experimental setup}\n\nOur testbed comprises 20 ORBIT-like nodes, deployed both indoors and outdoors. \nEach node consists of one 1{GHz} 386 processor, 512MB of {RAM}, two ethernet ports and two miniPCI slots, which are used to host two AR5212 Atheros 802.11a\/b\/g WiFi cards.\nAll the nodes are connected through wired Ethernet with the testbed's server (console). On the console, we have all the required testbed services running, as well as the NCRAWL deployment scripts that we described in section~\\ref{subsec:deploy_NCRAWL_experiments}. \n For conducting throughput measurements we use the {iperf} bandwidth meter tool \\cite{iperf}. \nFor {CPU} occupancy measurements we appropriately instrument NCRAWL with the Linux \\emph{getrusage} system call, which accurately estimates {CPU} usage time. We place several {getrusage} calls at the borders of each processing stage, record the average usage time of each stage and compare it to the whole NCRAWL system usage time. 
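The measurement pattern just described — sampling cumulative CPU time at the borders of a stage and differencing — can be illustrated with Python's \emph{resource} module, which wraps the same POSIX call. This is a sketch of the idea only; the stage function below is a stand-in, not NCRAWL code.

```python
# Sketch of per-stage CPU-time accounting in the spirit of the
# getrusage instrumentation described above. Python's `resource`
# module (POSIX only) wraps the same system call; the encode stage
# here is an illustrative stand-in.
import resource

def cpu_seconds():
    """User + system CPU time consumed by this process so far."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_utime + ru.ru_stime

def measure_stage(stage_fn, *args):
    """Return (stage result, CPU seconds spent inside the stage)."""
    before = cpu_seconds()
    result = stage_fn(*args)
    return result, cpu_seconds() - before

def encode_stage(a, b):
    # stand-in for an NC encode step: XOR two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

encoded, spent = measure_stage(encode_stage, b"\x01\x02", b"\x03\x04")
```

Differencing the before/after samples attributes to each stage only the CPU time actually consumed inside it, which is what allows the per-stage breakdown reported below.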
\n We have repeatedly performed all of our experiments late at night, in order to avoid interference from collocated networks. \n\n\n\n\\subsection{CPU occupancy measurements}\n\nIn order to measure the efficiency of our framework in terms of CPU occupancy, we compare it to the case of running COPE \\cite{cope}, as well as\nthe legacy IEEE 802.11 protocol. \n\n\n\n\\begin{figure*}[h]\n\\begin{center} \\hspace{-0.12in}\n\\parbox{1.5in} {\n \\centerline{\\subfigure\n ]{\\includegraphics[scale=.19]{figures\/cpub.pdf}}}\n\n\n\n}\n\\makebox[.15in] {}\n\\parbox{1.5in} {\n \\centerline{\\subfigure\n ]{\\includegraphics[scale=.19]{figures\/cpug.pdf}}}\n \n\n\n}\n\\makebox[.15in] {}\n\\parbox{1.8in} {\n\\vspace{0.1in}\n \\centerline{\\subfigure\n ]{\\includegraphics[scale=.17]{figures\/origin\/fig_CPU_pie}}}\n \n\n \n}\n\\makebox[.15in] {}\n\\parbox{1.2in} {\n\\hspace{-0.2in}\n \\centerline{\\subfigure\n ]{\\includegraphics[scale=.19]{figures\/cpustage.pdf}}}\n \n\n \n}\n\\end{center}\n\n\\end{figure*}\n\n\n\\begin{figure*}[h] \\vspace{-0.12in}\n\\begin{center} \\hspace{-0.12in}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{ \\includegraphics[scale=.19]{figures\/tputrateb.pdf}}}\n\n\n\n}\n\\makebox[.34in] {}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{ \\includegraphics[scale=.19]{figures\/gainb.pdf}}}\n\n\n\n}\n\\makebox[.34in] {}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{ \\includegraphics[scale=.19]{figures\/tputrateg.pdf}}}\n\n\n\n}\n\\makebox[.34in] {}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{ \\includegraphics[scale=.19]{figures\/gaing.pdf}}}\n\n\n\n}\n\\caption{Results in Alice-Relay-Bob topology.}\n\\label{fig:alice}\n\\end{center}\n\\end{figure*}\n\n\n\\begin{figure*}[t!]\n\\begin{center} \\hspace{-0.12in}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{ \\includegraphics[scale=.19]{figures\/flowtput_flows.pdf}}}\n\n\n\n}\n\\makebox[.35in] {}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n 
]{\\includegraphics[scale=.19]{figures\/gainflows.pdf}}}\n\n\n}\n\\makebox[.35in] {}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{\\includegraphics[scale=.19]{figures\/flowtput_prob.pdf}}}\n\n\n}\n\\makebox[.35in] {}\n\\parbox{1.3in} {\n \\centerline{\\subfigure\n ]{\\includegraphics[scale=.19]{figures\/flowtput_rate.pdf}}}\n \n \n}\n\\caption{Results in wheel topologies. }\n\\label{fig:wheel}\n\\end{center}\n\\end{figure*}\n\n{\\bf NCRAWL is much more CPU friendly than COPE-based approaches:} \n We invoke the Alice-Relay-Bob setting (see section \\ref{sec:nc_scheme}) and we inject fully saturated traffic in both flows. \nWe compare NCRAWL + alg1, NCRAWL + alg2,\nCOPE and the plain 802.11, for the case of 802.11b; figure \\ref{fig:alice}-a depicts the results. \nNote that COPE can support at most the IEEE 802.11b rate set as discussed in section \\ref{sec:introduction}; for the sake of a fair comparison here, we use this mode of operation for NCRAWL as well.\nWe observe that NCRAWL makes use of the CPU resources in a very efficient manner: it reduces the CPU utilization by at least 2 and as much as 7 times compared to COPE \n (we have validated these observations for the case of ER \\cite{er} as well, which is based on COPE). \nFurthermore, we test NCRAWL for the case of 802.11g. Our measurements (figure \\ref{fig:alice}-b) \n suggest that NCRAWL does not need to occupy more than 37\\% of the CPU resources for NC operations at 54 Mbps, with fully saturated UDP traffic! This implies that the design of NCRAWL includes low additional overhead functions (as opposed to legacy 802.11). \n\n{\\bf Evaluating individual operations of NCRAWL:} \nNext, we deploy {getrusage} calls and measure the breakdown of CPU occupancy per processing stage (figure \\ref{fig:alice}-c). 
\nThe most CPU-intensive operation is the SRCR stage (it contains legacy IEEE 802.11 operations as well).\n\n The most computationally heavy pieces of NCRAWL are the encode stage and the key house-keeping. Note here that these two lie at the heart of any NC system and in a way represent unavoidable costs. It should also be noted that the processing stage of the scheduler remains at very low values, and a certain percentage is dedicated to dealing with ACKs. \n Furthermore, as depicted in figure \\ref{fig:alice}-d, by increasing the channel rate (and thus the number of packets entering the system per unit time), the complexity of the coding stage grows faster than that of the SRCR stage. \n Nevertheless, for high channel rates the differences are reduced. \n This suggests that NCRAWL could potentially operate efficiently at much higher channel rates, such as with 802.11n systems. We plan to test NCRAWL on MIMO networks in our future work. \n \n\\subsection{Throughput measurements with UDP}\nNext, we assess the ability of NCRAWL to approach the theoretically expected benefits of NC. \n\n{\\bf Experiments with the simple Alice-Relay-Bob topology:}\n\n We calculate and measure the maximum throughput for both symmetric flows such that the system remains stable (i.e. the queues do not grow beyond a large permissible number). \nFigures \\ref{fig:alice}-e, \\ref{fig:alice}-f, \\ref{fig:alice}-g and \\ref{fig:alice}-h show the results. \n\n Note that since the receivers always have the proper keys (these are the keys from their own transmitted packets \\cite{proutiere}), decoding is always possible and thus algorithm 1, algorithm 2 and COPE are optimal in this setting. \n In each case, a gain in throughput of $\\frac{4}{3}$ is identified, which matches the theoretical value for this topology. 
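The $\frac{4}{3}$ figure follows from simple transmission counting, and the same counting yields the general wheel formula used later. A quick arithmetic check (illustrative only):

```python
# Transmission counting behind the theoretical NC gains. Delivering
# one packet per flow through the relay costs, for x flows meeting at
# the relay, x uplink + x (uncoded) downlink transmissions without
# coding, but only x uplink + 1 encoded broadcast with coding.
# Alice-Relay-Bob is the x = 2 case.
from fractions import Fraction

def nc_gain(x):
    """Theoretical throughput gain 2x / (x + 1) for x combined flows."""
    plain_tx = 2 * x   # x uplink + x uncoded downlink transmissions
    coded_tx = x + 1   # x uplink + 1 encoded broadcast
    return Fraction(plain_tx, coded_tx)

gain_arb = nc_gain(2)  # Alice-Relay-Bob: 4 transmissions vs 3
```

Here `gain_arb` evaluates to $\frac{4}{3}$, matching the measured gain, and `nc_gain(x)` increases monotonically towards 2 as more flows are combined.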
\n %\n %\n %\n %\n Our measurements suggest that COPE achieves the theoretical throughput for small rates, but fails to do so at higher rates. \n Note that the public COPE code was initially available for 802.11b only; \n while we carefully modified COPE to operate at 802.11g rates, we observed that such modifications lead to a very unstable system when rates higher than 18 Mbps are used. A closer look at certain individual components of the COPE implementation revealed that the reason for this instability is the excessive overhead induced by the NC system operations (as discussed earlier). \nFor this reason we do not explicitly compare against COPE here at these high rates. Nevertheless, from these measurements one can see that COPE cannot provide benefits at rates higher than 18 Mbps, due to the tremendous CPU processing overheads that its design incurs. \n In contrast, NCRAWL manages to reach the theoretical gain at high channel rates (e.g. at 54 Mbps), as shown in figures \\ref{fig:alice}-f and \\ref{fig:alice}-h.\n\n{\\bf The case for wheel topologies:} \nFurthermore, we scale the number of flows (see figures \\ref{fig:wheel}-a and \\ref{fig:wheel}-b); the topology is an $\\frac{x}{2}$--wheel.\nThe theoretical gain in this case is $\\frac{2x}{x+1}$, where $x$ is the number of flows combined at the downlink. Our measurements support the theoretically predicted gain at the channel rate of 54 Mbps. We observe that the per-flow throughput naturally drops as the number of flows increases, but the aggregate throughput increases. The gain (figure \\ref{fig:wheel}-b) is an increasing function of the number of flows and asymptotically approaches 2; note that this is perfectly aligned with the findings in \\cite{proutiere} as well. \n\n Note also that in $\\frac{x}{2}$--wheel topologies, piggybacking is not available since there is no return flow from the receivers. 
NCRAWL is able to select the appropriate ACKing method and the results show that the overhead incurred is negligible.\n\n{\\bf Experiments with cross topologies:} \nWe now present two more cases of interest that can appear in realistic environments. \nWe setup\n various {\\em cross} topologies with nodes in different locations across our testbed; we activate the flows Alice-Relay-Chloe and Bob-Relay-David. The arrivals are again chosen in a symmetric way, i.e. the arrival rate of the one flow is equal to the other.\n\n$\\bullet$ In the first case (figure \\ref{fig:wheel}-c), David overhears Alice's uplink transmissions with probability 1 and Chloe hears Bob with probability $q$. The rates of all links are equally set to 12Mbps (the channel rate is not important in this experiment). We measure the highest throughput that guarantees queue stability while varying the probability $q$, by considering different node locations. \nWe compare NCRAWL+alg1, NCRAWL+alg2 and IEEE 802.11g as well as $\\delta$--FTP for $\\delta=\\{0.7,0.8,0.9\\}$ (see section \\ref{sec:case} for description). The results demonstrate the superiority of NCRAWL+alg2, which is able to deliver the maximum throughput in each case. Evidently, our framework in combination with the proposed scheduling algorithms is able to effectively handle\nthe several link quality conditions. \n\n$\\bullet$ In the second case (figure \\ref{fig:wheel}-d),\n the overhearing probability from Bob to Chloe is set to $q=0.7$. All channel rates are set to 24Mbps with the exception of the link Relay-Chloe which is varied. Our measurements demonstrate the inefficiency of policies oblivious to rates like the $\\delta$--FTP. In this case, the choice of a small value for $\\delta$ is penalized when the Relay-Chloe link is slow enough. Instead NCRAWL+alg2 is able to handle in an effective way the several rate and link conditions and deliver important throughput gains. 
\nFrom figures \\ref{fig:wheel}-c and \\ref{fig:wheel}-d we also observe that, given that the overhearing links are not perfect in terms of PDR, NCRAWL+alg2 always outperforms NCRAWL+alg1, since it is able to use feedback information.\n\n\\subsection{Performance with TCP traffic}\n\nFinally, we assess the efficacy of NCRAWL in scenarios with TCP traffic. \nIn \\cite{cope}, experiments with TCP have demonstrated a loss in efficiency due to packet losses and reordering. \n\nFirst, throughout our experiments with the Alice-Relay-Bob topology, where no losses or delays are incurred, the throughput is reduced due to the additional TCP overheads. We observe that when the 54 Mbps rate is used, the per-flow throughput is 7 Mbps for plain 802.11 and 8.5 Mbps for NCRAWL+alg1. \n A slight loss in NC gain is observed; this is the result of mixing TCP ACKs with data packets. The same gain is obtained for all the other available bit rates.\n\nFurthermore, we perform experiments with {\\em half-cross} topologies, where flows are unidirectional (from Alice to Chloe and from Bob to Dave), with overhearing probabilities $q_{AD}=q_{BC}=0.7$ and several channel rates. In this case, NCRAWL+alg1 achieves a slightly lower throughput than IEEE 802.11. This is due to the fact that some packets are not correctly decoded at the destination and therefore arrive delayed and out of order. This causes abrupt reactions from TCP and leads to throughput reduction. When adding the reordering module of COPE \\cite{cope}, the packets always arrive in order; however, this module increases the delay for each packet. This in turn is interpreted by TCP as congestion, which leads to reductions of the TCP window and thereby decreases performance. \nNCRAWL is not optimized to cooperate with TCP at this point and thus it faces the common problems of TCP in wireless networks. 
Improving this component is the main goal of our future work.\n\\vspace{-0.1in}\n\\section{Conclusions}\n\\label{sec:conclusions}\n\n We design and develop NCRAWL. Our framework \n is an extended, generic NC framework that can be used to quickly develop networking systems in order to evaluate intersession NC and\/or scheduling algorithms, entirely based on the implicit (probabilistic) acknowledgment that a packet can get decoded at \nthe destination. The design of NCRAWL involves all the common processing steps that are always \nneeded to implement such algorithms; these steps have been abstracted such \nthat designers\nneed to simply focus only on the implementation of their algorithms. \nOur measurements\ndemonstrate that NCRAWL is a powerful NC development system. \nIt offers significant throughput benefits \neven at high channel rates. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\n\t\n\tThe original Kaluza-Klein idea \\cite{Kaluza}\\cite{Klein1}\\cite{Klein2} consists in a $5D$ space-time formulation having the aim to include in a geometrical picture also the electromagnetic interaction. \n\t\n\t\\noindent The surprising formal success in providing a metric representation of the vector potential suggested, in the Seventies, to attempt for a geometrical unification \\cite{ModernKKtheories}, able to assess all the fundamental interactions into a multi-dimensional space-time, with particular attention to the Electroweak Model. \n\tThe suggestive idea at the ground of these approach consists of the possibility to reproduce the Lie algebra, characterizing the elementary particle symmetries by the isometries of the extra-dimensional space. The non-trivial result obtained by the extra-dimensional Kaluza-Klein theories relies on the emergence from the multi-dimensional Einstein-Hilbert Lagrangian of the correct Yang-Mills action for the vector bosons which are the interaction carriers. 
\n\t\n\tHowever, many non-trivial problems affected this fascinating attempt at a geometrization of Nature. One of the main questions arose from the difficulty of providing a geometrical version of the chirality singled out by the electroweak interaction \\cite{Wetterich}, as well as from the impossibility of representing the Standard Model of elementary particles in a Kaluza-Klein scenario \\cite{Witten}.\n\tFor alternative non-Riemannian approaches to solve the chirality problem of the Electroweak Model see \\cite{Cianfrani-Montani1}\\cite{Cianfrani-Montani2}. \n\t\n\t\n\t\\noindent Finally, we observe that a full geometrical picture of Nature would involve the geometrical formulation of the fermionic field, a really non-trivial perspective if supersymmetry is not considered \\cite{Ferrara}. \n\t\n\t\\medskip\n\t\n\tEven the $5D$ Kaluza-Klein theory presents some important difficulties (see \\cite{Cianfrani-Marrocco-Montani} for a review), which leave open the question concerning the viability of this approach as a geometrization of the electromagnetic interaction. \n\t\n\t\\noindent First of all, the $5D$ metric tensor contains an additional degree of freedom besides the $4D$ metric and the vector potential, namely the fifth diagonal component. \n\tUnder the restriction of the coordinate transformations necessary to deal with the $U(1)$ symmetry, this quantity behaves as an additional scalar field, whose presence non-trivially affects basic features of electromagnetism, for instance charge conservation itself \\cite{ModernKKtheories}\\cite{Lacquaniti-Montani}\\cite{Lacquaniti-Montani-Vietri}. 
\n\tBut, even fixing this scalar field to unity in the Lagrangian of the model (with the right sign for a space-like component), the ratio between the charge and the mass of an elementary particle is nonetheless constrained to remain too small to reproduce the Standard Model spectrum of masses (for a proposal to solve the charge to mass ratio problem see \\cite{Lacquaniti-Montani}).\n\t\n\t\\noindent Finally, studying the morphology of a five-dimensional D'A\\-lam\\-ber\\-tian\\- operator, it is immediate to recognize the emergence of huge massive modes of a boson field, as a result of the compactified scale of the fifth dimension \\cite{Chodos-Detweiler}.\n\t\n\t\\medskip \n\t\n\tIn the present analysis, we approach the formulation of the $5D$ Kaluza-Klein theory within the semi-classical and quantum framework of so-called Polymer Quantum Mechanics \\cite{Corichi1}\\cite{Corichi2}.\n\tThis revised formulation of quantum physics aims to introduce a discrete nature in the generalized coordinate (a real coordinate of a generic degree of freedom), as an effect of the emergence of cut-off physics. \n\t\n\tIndeed, the fifth compactified dimension, being in the standard approach about two orders of magnitude greater than the Planck size, is naturally approached via the continuum limit of Polymer Quantum Mechanics, as referred to a point particle living in this dimension.\n\tFurthermore, the corresponding diagonal metric component (namely the additional Universe scale factor) in such a dynamical regime is also expected to be affected by cut-off physics effects. \n\t\n\tThe present analysis follows the scenario proposed in \\cite{Chodos-Detweiler}, but revised in view of the polymer formulation. 
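For orientation, the key semi-classical ingredient of this framework \cite{Corichi1}\cite{Corichi2} is the replacement of the momentum conjugate to the discretized coordinate by a periodic function of itself, governed by the polymer scale $\mu$ (a standard prescription, quoted here for reference):

```latex
p \;\longrightarrow\; \frac{\sin(\mu p)}{\mu}\,,
\qquad
\frac{p^{2}}{2m} \;\longrightarrow\; \frac{\sin^{2}(\mu p)}{2m\,\mu^{2}}\,,
```

the continuum limit, restoring ordinary quantum mechanics, corresponding to $\mu p \ll 1$.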
\n\t\n\t\\noindent We first show that a five-dimensional Kasner solution \\cite{Kasner}\\cite{Landau}\\cite{Montani} (characterizing the Bianchi I Universe) admits a configuration in which three spatial directions isotropically expand, while the fourth remains static. \n\tThis result has an impact on the implementation of Kaluza-Klein theory, since it removes some of the non-trivial inconvenient features of a dimension collapsing close to a Planckian size. \n\tFor a previous attempt to deal with a static compactified dimension, on the basis of a physical phenomenon, see \\cite{Salam}. \n\t\n\t\\noindent Then, we analyse the geodesic motion on a generic $5D$ space-time having a steady fifth dimension, and we outline a natural solution to the charge to mass ratio problem. \n\tThis result comes out of the details of the semi-classical polymer formulation adopted for the Hamiltonian dynamics of the free-falling particle. \n\tIn particular, the modified expression taken by the fifth momentum of the particle leads to a modified constitutive relation; that is, when passing from the momenta to the velocities, the previous constraint on the charge to mass ratio allows considering values which are natural for the Standard Model particles. \n\t\n\t\\noindent Finally, we study a five-dimensional Klein-Gordon equation and we clarify that, addressing the fifth coordinate via the quantum polymer prescription, the spectrum of emerging masses can fit some values of the Standard Model one, and no tachyonic mode emerges, differently from the case discussed in \\cite{Chodos-Detweiler}. 
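The sense in which such a configuration departs from the classical regime can be checked directly: the unmodified vacuum Kasner exponents of a $5D$ Bianchi I model must satisfy $\sum_i p_i = \sum_i p_i^2 = 1$, and a quick numerical aside (illustrative only, not part of the derivation below) shows that three isotropically expanding directions plus one static direction are incompatible with these conditions, while classical static-dimension solutions are necessarily anisotropic:

```python
# Unmodified 5D (Bianchi I) Kasner constraints: sum(p) == 1 and
# sum(p**2) == 1 over the four spatial exponents. An isotropic
# 3-space with a static extra dimension satisfies the linear
# condition but violates the quadratic one, so it is not a classical
# vacuum solution -- which is where the polymer modification enters.
from fractions import Fraction

def kasner_sums(p):
    return sum(p), sum(x * x for x in p)

# isotropic 3-space + static fifth dimension
s1, s2 = kasner_sums([Fraction(1, 3)] * 3 + [Fraction(0)])

# a classical solution with a static fourth direction must instead
# be anisotropic, e.g. (2/3, 2/3, -1/3, 0):
t1, t2 = kasner_sums([Fraction(2, 3), Fraction(2, 3), Fraction(-1, 3), Fraction(0)])
```

The first configuration gives $\sum_i p_i^2 = 1/3 \neq 1$; the anisotropic one satisfies both constraints.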
\n\t\n\tHowever, it should be noticed that, in this quantum field approach, a problem with the definition of a correct $q\/m$ ratio for a Standard Model particle still survives.\n\t\n\t\\medskip \n\t\n\tThe present study suggests that, when cut-off physics is included in the Kaluza-Klein formulation, some of the puzzling features of this approach are restated into a form that can give new physical insight for their understanding and overcoming. \n\t\n\t\\medskip \n\t\n\tThe manuscript presentation is structured as follows:\n\tin Section \\ref{Kaluza-Klein theory} we review the main features of ordinary Kaluza-Klein theory, from the metric tensor construction and the resulting field equations to the geodesic motion of a point-like particle, whose analysis, in particular, leads to the ordinary quantisation law for the electric charge, an estimate for the size $L$ of the fifth dimension and the aforementioned shortcoming\n\tof the charge to mass ratio of a particle. \n\t\n\t\\noindent In Section \\ref{Polymer quantum mechanics} we review polymer quantum mechanics, summarizing the construction of the corresponding kinematical Hilbert space, via the introduction of a Weyl-Heisenberg algebra and under the assumption of the existence of a discrete spatial coordinate, and the implementation of the proper dynamics on both a quantum and a semi-classical level, with particular regard to the p-polarization.\n\t\n\t\\noindent In Section \\ref{Polymer Kasner solution} we analyse the polymer-modified Kasner solution obtained from the introduction of the polymer framework on a semi-classical level in a $5D$ Bianchi I model, focusing on the behaviour of the fifth dimension.\n\t\n\t\\noindent Finally, in Section \\ref{Kaluza-Klein theory in polymer quantum mechanics framework}, based on the result of the previous section, first, we analyse, in a semi-classical formulation of Polymer Quantum Mechanics, the geodesic motion of a point-like particle and all its features, comparing all the 
results with the ones from the ordinary theory, and then we carry out the study of the polymer quantum dynamics of a complex Klein-Gordon field, along the lines of \\cite{Chodos-Detweiler}, discussing with particular attention the resulting electric charge distribution and mass spectrum. \n\t\n\t\\noindent In Section \\ref{Conlusion} brief concluding remarks follow.\n\t\n\t\\section{Kaluza-Klein theory} \n\t\\label{Kaluza-Klein theory}\n\t\n\tKaluza-Klein theory is a $5D$ extension of Einstein's General Relativity whose aim is to provide a unified description of gravitational and electromagnetic interaction in a purely geometric fashion. \n\t\n\t\\noindent In the original theory \\cite{Kaluza}\\cite{Klein1}\\cite{Klein2} the space-time is described by a $5D$ smooth manifold $V^5$, which is assumed to be the direct product $V^4\\otimes S^1$ between a generic $4D$ manifold and a circle of length $L$, i.e. a compact space.\n\t\n\t\\noindent A crucial assumption is that all the observable physical quantities do not depend on the fifth coordinate $x^5$.\n\tThis hypothesis can be further motivated by noticing that, due to the compactness of the fifth dimension, all the observable physical quantities are periodic in $x^5$; hence the independence of the fifth coordinate can be regarded as the zero-order truncation of a Fourier expansion of these quantities, the so-called cylinder condition.\n\t\n\t\\noindent Once the $5D$ general relativity principle is restricted to the following coordinate transformations (and their inverse):\n\t\\begin{equation} \\label{KK_group_trans_1}\n\t\t\\begin{cases}\n\t\t\t\n\t\t\tx^{\\mu'}=\\Psi(x^{\\mu}) \\\\\n\t\t\tx^{5'}=x^5 + k\\Lambda(x^{\\mu}) \\\\\n\t\t\t\n\t\t\\end{cases}\n\t\\end{equation} \n\tthe $5D$ metric tensor of the expanded theory can be written as follows:\n\t\\begin{equation}\\label{KK_metric_tensor}\n\t\t\\tilde{g}_{ab}=\n\t\t\\left(\n\t\t\\begin{array}{c|c}\n\t\t\tg_{\\mu \\nu} + k^2 \\phi^2 A_\\mu A_\\nu & 
k \\phi^2 A_\\mu \\\\\n\t\t\t\\hline\n\t\t\tk \\phi^2 A_\\nu & \\phi^2\n\t\t\\end{array}\n\t\t\\right) ,\n\t\\end{equation}\n\twhere $g_{\\mu \\nu}$ is the $4D$ metric tensor of the ordinary theory, $A_{\\mu}$ is the electromagnetic four-potential, $\\phi$ is a scalar field and $k$ is a constant to be properly determined. \n\t\n\t\\subsection{Kaluza-Klein field equations}\n\tThe field equations of the theory can be obtained from a $5D$ Einstein-Hilbert action:\n\t\\begin{equation}\n\t\t^{(5)}S:=\\tilde{S}= - \\frac{1}{16\\pi\\tilde{G}}\\int_{V^{4}\\otimes S^1} dx^{0}dx^{1}dx^{2}dx^{3}dx^{5} \\sqrt{-\\tilde{g}} \\tilde{R} ,\n\t\\end{equation}\n\twhere $\\tilde{G}$, $\\tilde{g}$ and $\\tilde{R}$ are respectively the $5D$ gravitational constant, the determinant of the metric tensor $\\tilde{g}_{ab}$ and the $5D$ scalar curvature.\n\t\n\tBy performing a 4+1 dimensional reduction, the ordinary $4D$ Einstein-Maxwell action is remarkably obtained:\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\tilde{S}=& - \\frac{c^3}{16\\pi{G}}\\int_{V^{4}} d^4x \\sqrt{{-g}}\\phi \\biggl( R + \\frac{1}{4}\\phi^2 k^2 F_{\\mu \\nu}F^{\\mu \\nu}\\\\\n\t\t\t&+\\frac{2}{\\phi}\\nabla_{\\mu} \\partial^{\\mu} \\phi \\biggr).\n\t\t\\end{split}\n\t\\end{equation}\n\tBy setting $\\phi=1$ in the action - as in the original work of Kaluza \\cite{Kaluza} and Klein \\cite{Klein1}\\cite{Klein2} - and by using the variational principle, the ordinary Einstein-Maxwell field equations are correctly recovered, once $k$ is set equal to $2\\sqrt{G}\/c^2$.\n\t\n\t\n\t\\subsection{Geodesic motion}\n\t\n\tA free point-like particle in this theory moves along a $5D$ geodesic, hence the corresponding action, with signature (-,+,+,+,+), is:\n\t\\begin{equation}\n\t\t\\tilde{S}=-mc\\int d\\tilde{s}=-mc \\int \\sqrt{-\\tilde{g}_{a b}\\frac{d x^a}{d \\tilde{s}}\n\t\t\t\\frac{d x^b}{d \\tilde{s}}} d\\tilde{s},\n\t\\end{equation}\n\twhere $d\\tilde{s}$ is the $5D$ line element, to be distinguished from the $4D$ 
line element $ds$.\n\t\n\t\\noindent Once $\\phi=1$ is set - here and in the following developments - in the metric \\eqref{KK_metric_tensor}, the $5D$ geodesic equation is immediately recovered from the variational principle:\n\t\\begin{equation} \\label{geodesic}\n\t\t\\tilde{u}^a \\tilde{\\nabla}_{a} \\tilde{u}^b=0.\n\t\\end{equation}\n\t\n\tIt is essential to point out that the $5D$ velocity $\\tilde{u}^a$ is different from the $4D$ velocity $u^a$; indeed they are related as follows:\n\t\\begin{equation} \\label{KK_five-four_velocity_rel}\n\t\t\\tilde{u}^a=\\frac{1}{\\sqrt{1-u_5^2}} u^a.\n\t\\end{equation}\n\t\n\t\\noindent From relations \\eqref{geodesic} and \\eqref{KK_five-four_velocity_rel} it can be easily shown that $u_5$ is a constant of motion. \n\t\n\tIn order to obtain the $4D$ equations of motion, the geodesic equation \\eqref{geodesic} has to be evaluated for the usual space-time variables only, which we indicate with Greek letters. \n\t\n\t\\noindent By making use of relation \\eqref{KK_five-four_velocity_rel}, the following result is attained:\n\t\\begin{equation}\n\t\tu^\\nu\\nabla_{\\nu}u^{\\mu}=\\frac{2\\sqrt{G}}{c^2}u_5u^{\\nu}g^{\\mu \\lambda}F_{ \\nu \\lambda},\n\t\\end{equation}\n\twhere $F_{ \\nu \\lambda}$ is the antisymmetric electromagnetic tensor. \n\t\n\t\\noindent By comparison with the ordinary classical equation:\n\t\\begin{equation} \\label{KK_ordinary_4d_geodesic}\n\t\tu^\\nu\\nabla_{\\nu}u^{\\mu}=\\frac{q}{mc^2}u^{\\nu}g^{\\mu \\lambda}F_{\\nu\\lambda},\n\t\\end{equation}\n\tthe following fundamental identification is achieved:\n\t\\begin{equation} \\label{KK_u5_rel_q}\n\t\tu_{5}=\\frac{q}{2m\\sqrt{G}}.\n\t\\end{equation}\n\t\\noindent Since $p_5=mcu_5$, it can then be written:\n\t\\begin{equation} \\label{KK_p5_rel_q}\n\t\tp_5=\\frac{qc}{2 \\sqrt{G}},\n\t\\end{equation}\n\twhich establishes a fundamental relation between the fifth component of the particle momentum and its electric charge. 
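To appreciate the scale of the tension that the identification \eqref{KK_u5_rel_q} generates, one can evaluate $u_5$ for an electron; a quick numerical sketch (plain Python, standard CGS values assumed for the constants, which the text does not quote explicitly):

```python
import math

# Standard CGS values (assumed; not quoted explicitly in the text)
e = 4.803205e-10     # electron charge [esu]
m_e = 9.109384e-28   # electron mass [g]
G = 6.674e-8         # Newton constant [cm^3 g^-1 s^-2]

# u5 = q / (2 m sqrt(G)), eq. (KK_u5_rel_q), evaluated for an electron
u5_electron = e / (2 * m_e * math.sqrt(G))
print(f"u5(electron) = {u5_electron:.2e}")   # ~1e21, far above the |u5| < 1 bound
```

The enormous value anticipates the charge to mass difficulty discussed below: the kinematical bound $\abs{u_5}<1$ fails for every known charged particle.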
\n\t\n\tThe compactness of the fifth dimension implies a quantisation of the momentum along the fifth direction:\n\t\\begin{equation} \\label{KK_p_quantised}\n\t\tp_5=\\frac{2\\pi \\hbar}{L}n \\qquad n \\in \\mathbb{Z},\n\t\\end{equation}\n\twhere we remind that $L$ is the length of the circle describing the fifth dimension.\n\t\\noindent By a direct comparison between relations \\eqref{KK_p5_rel_q} and \\eqref{KK_p_quantised} a natural quantisation law for the electric charge and an estimate of the size $L$ of the fifth dimension are obtained:\n\t\\begin{equation} \\label{KK_ordinary_L}\n\t\t{L}=4\\pi \\frac{\\hbar\\sqrt{G}}{e c}\\approx 2.37\\cdot10^{-31}\\>cm \\qquad q=ne,\n\t\\end{equation}\n\twhere $e$ is the electron charge.\n\t\n\t\\noindent Consistently, such a size of the fifth dimension is in agreement with its non-observability and with the current impossibility of detecting it. \n\t\n\t\\medskip \n\t\n\tNevertheless, despite these remarkable results, the relation \\eqref{KK_five-four_velocity_rel} sets the constraint $\\abs{u_5}<1$; \n\tby virtue of relation \\eqref{KK_u5_rel_q}, this implies the following condition on the charge\/mass ratio of a particle:\n\t\\begin{equation} \\label{KK_q_over_m}\n\t\t\\frac{\\abs{q}}{m}<2\\sqrt{G}\\approx 5.16\\cdot10^{-4} \\> e.s.u.\/g,\n\t\\end{equation}\n\twhich, unfortunately, has no phenomenological confirmation, neither for elementary particles nor for macroscopic objects, hence representing one of the puzzling shortcomings of the theory.\n\t\n\t\\section{Polymer quantum mechanics}\n\t\\label{Polymer quantum mechanics}\n\t\n\tPolymer quantum mechanics is a non-standard representation of non-relativistic quantum theory, unitarily inequivalent to the Schr\u00f6dinger one \\cite{Corichi1}\\cite{Corichi2}.\n\tIts development is mainly due to the exploration of background-independent theories, such as quantum gravity, several structures of which it mimics \\cite{Ashtekar}. 
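The two numerical estimates \eqref{KK_ordinary_L} and \eqref{KK_q_over_m} above are easy to reproduce; a minimal check (plain Python, standard CGS values assumed for the constants):

```python
import math

# Standard CGS values (assumed)
hbar = 1.054572e-27   # erg s
G = 6.674e-8          # cm^3 g^-1 s^-2
c = 2.997925e10       # cm/s
e = 4.803205e-10      # esu

# Size of the fifth dimension, eq. (KK_ordinary_L): L = 4*pi*hbar*sqrt(G)/(e*c)
L = 4 * math.pi * hbar * math.sqrt(G) / (e * c)
print(f"L = {L:.3e} cm")             # ~2.37e-31 cm

# Charge/mass bound, eq. (KK_q_over_m): |q|/m < 2*sqrt(G)
bound = 2 * math.sqrt(G)
print(f"|q|/m < {bound:.3e} esu/g")  # ~5.17e-4 esu/g
```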
\n\t\n\t\\smallskip\n\t\n\tGiven a discrete orthonormal basis $\\ket{\\mu_i}$ for a space $\\mathcal{H'}$, such that $\\braket{\\mu_i}{\\mu_j}=\\delta_{ij}$, where $\\mu_i \\in \\mathbb{R}$ and $i=1,2,\\dots,n$, the kinematic polymer Hilbert space $\\mathcal{H}_{poly}$ is obtained as a Cauchy completion of $\\mathcal{H'}$.\n\t\n\t\\noindent On this space two abstract operators can be defined:\n\t\\begin{align}\n\t\t& \\hat{\\epsilon}\\ket{\\mu}:=\\mu \\ket{\\mu} \\\\\n\t\t& \\hat{s}(\\lambda)\\ket{\\mu}:=\\ket{\\mu +\\lambda}.\n\t\\end{align}\n\t\\noindent The operator $\\hat{\\epsilon}$ is a symmetric operator and $\\hat{s}(\\lambda)$ defines a one-parameter family of unitary operators.\n\tDespite this, $\\hat{s}(\\lambda)$ is discontinuous with respect to $\\lambda$;\n\tthis means that no self-adjoint operator exists that can generate $\\hat{s}(\\lambda)$ by exponentiation. \n\tConsidering now a physical system whose configuration space is spanned by the coordinate $q$, which is assumed to have a discrete character, and its conjugate momentum $p$, the previous abstract representation can be projected and studied in the p-polarization.\n\tIn this polarization the basis states will be:\n\t\\begin{equation} \\label{basic_states_p}\n\t\t\\psi_\\mu(p)=\\braket{p}{\\mu}=e^{i\\mu p\/\\hbar}.\n\t\\end{equation}\n\tFollowing the algebraic construction method, a Weyl-Heisenberg algebra is introduced on $\\mathcal{H}_{poly}$ and the action of its generators on the basis states is defined as follows:\n\t\\begin{align}\n\t\t&\\hat{\\mathcal{U}}(\\nu)\\psi_\\mu(p)=\\psi_\\mu(p+\\nu)=e^{i\\mu(p+\\nu)\/\\hbar}=e^{i\\mu\\nu \/\\hbar}e^{i\\mu p\/\\hbar} \\\\\n\t\t&\\hat{\\mathcal{V}}(\\lambda)\\psi_\\mu(p)=e^{i\\lambda p\/\\hbar}e^{i\\mu p\/\\hbar}=e^{i(\\lambda + \\mu)p\/\\hbar}=\\psi_{\\mu+\\lambda}(p).\n\t\\end{align}\n\tFrom this it can be inferred that the shifting operator $\\hat{s}(\\lambda)$ can be identified with the operator 
$\\hat{\\mathcal{V}}(\\lambda)$, which is then discontinuous in $\\lambda$; this means that the spatial translations generator, that is the momentum operator $\\hat{p}$, does not exist. \n\tOn the other hand, the operator $\\hat{\\mathcal{U}}(\\nu)$ is continuous, so that the translations generator in the momentum space, i.e. the position operator $\\hat{q}$, exists and it can be identified with the abstract operator $\\hat{\\epsilon}$.\n\t\n\t\\noindent Indeed:\n\t\\begin{equation} \\label{PQ_q_op}\n\t\t\\hat{q}\\psi_{\\mu}(p)=-i\\hbar \\partial_p\\psi_{\\mu}(p)=\\mu\\psi_{\\mu}(p).\n\t\\end{equation}\n\tIt can be proved \\cite{Corichi2} that the kinematic polymer Hilbert space in this polarization is explicitly given by $\\mathcal{H}_{poly,p}=L^2\\left(\\mathbb{R}_{B},d\\mu_{H}\\right)$, where $\\mathbb{R}_{B}$ is the so-called Bohr compactification of the real line and $d\\mu_{H}$ is the Haar measure. \n\t\n\t\\medskip \n\t\n\tA similar picture is obtained in the q-polarization: the momentum operator still cannot be defined, while it is possible to show that the fundamental wave functions are Kronecker deltas and that the kinematic polymer Hilbert space is explicitly given by $\\mathcal{H}_{poly,x}=L^2\\left(\\mathbb{R}_{d},d\\mu_{c}\\right)$, where $\\mathbb{R}_{d}$ is the real line equipped with a discrete topology and $d\\mu_{c}$ is the counting measure. \n\t\n\t\\medskip \n\t\n\tIn order to build the dynamics a Hamiltonian operator $\\hat{H}$ has to be defined on $\\mathcal{H}_{poly}$, but since $\\hat{p}$ does not exist, a direct implementation is not possible.\n\tTo overcome this problem the momentum operator can be approximated by defining on the configuration space of the system a regular graph $\\gamma_{\\mu}=\\{q \\in \\mathbb{R} \\> | \\> q=n\\mu, n \\in \\mathbb{Z}\\}$, where $\\mu$ is the fundamental scale introduced by the polymer representation. 
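The Kronecker-type orthogonality on $\mathcal{H}_{poly,p}$ mentioned above can be pictured numerically: with the Haar measure on the Bohr compactification, the inner product of two plane waves is their mean value over the real line, which vanishes for distinct labels however close they are. A small sketch (plain Python; the finite averaging interval is an assumption standing in for the $T\to\infty$ limit, $\hbar=1$):

```python
import cmath

def overlap(mu_i, mu_j, T=1e5, n=200_001):
    """Finite-T proxy for <mu_i|mu_j>: the mean of exp(i*(mu_j - mu_i)*p)
    over p in [-T, T]; the Bohr/Haar inner product is the T -> infinity limit."""
    dp = 2 * T / (n - 1)
    total = sum(cmath.exp(1j * (mu_j - mu_i) * (-T + k * dp)) for k in range(n))
    return total / n

print(abs(overlap(1.0, 1.0)))    # = 1: identical labels
print(abs(overlap(1.0, 1.001)))  # ~ 0: distinct labels, however close
```

This is the sense in which the basis \eqref{basic_states_p} is discrete even though the labels range over the whole real line.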
\n\tThe basis kets $\\ket{\\mu}$ can now be indicated as $\\ket{\\mu_n}$, where $\\mu_n=n\\mu$ are the points belonging to the graph $\\gamma_{\\mu}$.\n\tConsequently the generic states will be:\n\t\\begin{equation}\n\t\t\\ket{\\psi}_{\\gamma_{\\mu}}=\\sum_n a_n \\ket{\\mu_n},\n\t\\end{equation}\n\tand they will belong to the new Hilbert space $\\mathcal{H}_{\\gamma_{\\mu}} \\subset \\mathcal{H}_{poly}$, provided that they satisfy the condition $\\sum_n \\abs{a_n}^2 < \\infty$.\n\tSince the dynamics has to be closed in $\\mathcal{H}_{\\gamma_{\\mu}}$, the shift parameter $\\lambda$ has to be fixed equal to $\\mu$, hence the action of $\\hat{\\mathcal{V}}(\\lambda)$ will be:\n\t\\begin{equation}\n\t\t\\hat{\\mathcal{V}}(\\lambda)\\ket{\\mu_n}=\\hat{\\mathcal{V}}(\\mu)\\ket{\\mu_n}=\\ket{\\mu_{n+1}}.\n\t\\end{equation}\n\t\n\t\\noindent On general grounds, the variable $p$ can be approximated as:\n\t\\begin{equation} \\label{PQ_p_var_approx}\n\t\tp \\approx \\frac{\\hbar}{\\mu} \\sin(\\frac{\\mu}{\\hbar}p)=\\frac{\\hbar}{2i\\mu}\\left(e^{i\\frac{\\mu}{\\hbar}p}-e^{-i\\frac{\\mu}{\\hbar}p}\\right),\n\t\\end{equation}\n\twhen the condition $p\\ll\\hbar\/\\mu$ holds.\n\t\n\t\\noindent Based on this approximation and visualizing the action of $\\hat{\\mathcal{V}}(\\mu)$ in the p-polarization, it is clear that the operator $\\hat{p}$ and its action can be approximated as:\n\t\\begin{equation} \\label{PQ_p_op_approx}\n\t\t\\hat{p}_{\\mu}\\ket{\\mu_n}\\approx \\frac{\\hbar}{2i\\mu} \\left[\\hat{\\mathcal{V}}(\\mu)-\\hat{\\mathcal{V}}(-\\mu)\\right]\\ket{\\mu_n},\n\t\\end{equation}\n\twhere $\\mu$ acts as a regulator.\n\t\n\t\\noindent To approximate the operator $\\hat{p}^2$ two paths are possible:\n\t\\begin{equation} \\label{PQ_p^2_op_approx_1}\n\t\t\\hat{p}^2_{\\mu}\\approx \\frac{\\hbar^2}{4\\mu^2} \\left[ 2-\\hat{\\mathcal{V}}(2\\mu)-\\hat{\\mathcal{V}}(-2\\mu)\\right],\n\t\\end{equation} \n\tbased on the approximation \n\t\\begin{equation} 
\\label{PQ_p^2_var_approx_1}\n\t\tp^2 \\approx \\frac{\\hbar^2}{\\mu^2} \\sin[2](\\frac{\\mu}{\\hbar}p)\n\t\\end{equation}\n\tand hence defined by iterating the action of $\\hat{p}$ according to \\eqref{PQ_p_op_approx}, or\n\t\\begin{equation} \\label{PQ_p^2_op_approx_2}\n\t\t\\hat{p}^2_{\\mu}\\approx \\frac{\\hbar^2}{{\\mu}^2} \\left[2-\\hat{\\mathcal{V}}(\\mu)-\\hat{\\mathcal{V}}(-\\mu)\\right],\n\t\\end{equation}\n\texploiting the approximation \n\t\\begin{equation}\\label{PQ_p^2_var_approx_2}\n\t\tp^2 \\approx \\frac{2\\hbar^2}{{\\mu}^2}\\left(1- \\cos(\\frac{\\mu}{\\hbar}p)\\right),\n\t\\end{equation}\n\tvalid as long as $p\\ll\\hbar\/\\mu$.\n\t\n\t\n\t\\noindent Hence, the well-defined, symmetric Hamiltonian operator will be:\n\t\\begin{equation}\n\t\t\\hat{H}_{\\mu}:=\\frac{\\hat{p}^2_{\\mu}}{2m}+\\hat{V}(\\hat{q}),\n\t\\end{equation}\n\twhere $\\hat{V}(\\hat{q})$ is the potential operator. \n\t\n\tTherefore, quantising a system according to the polymer representation, in the p-polarization, implies the use of the approximation \\eqref{PQ_p^2_op_approx_1} or \\eqref{PQ_p^2_op_approx_2} for the momentum operator, while the position operator will be the natural differential operator, whose action is expressed in \\eqref{PQ_q_op}.\n\t\n\tIn a semi-classical approach, this procedure corresponds to the proper introduction of the approximations \\eqref{PQ_p_var_approx}, \\eqref{PQ_p^2_var_approx_1} and \\eqref{PQ_p^2_var_approx_2} on the variable $p$ in the dynamics of the system of interest. 
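The accuracy of the two substitutions \eqref{PQ_p^2_var_approx_1} and \eqref{PQ_p^2_var_approx_2} in the regime $p\ll\hbar/\mu$ can be checked directly; a short sketch (plain Python, with illustrative units $\hbar=1$ and $\mu=10^{-3}$ as assumptions):

```python
import math

hbar, mu = 1.0, 1e-3   # illustrative units: cutoff momentum hbar/mu = 1000

def p2_sin(p):
    """Eq. (PQ_p^2_var_approx_1): (hbar/mu)^2 * sin^2(mu*p/hbar)."""
    return (hbar / mu) ** 2 * math.sin(mu * p / hbar) ** 2

def p2_cos(p):
    """Eq. (PQ_p^2_var_approx_2): 2*(hbar/mu)^2 * (1 - cos(mu*p/hbar))."""
    return 2 * (hbar / mu) ** 2 * (1 - math.cos(mu * p / hbar))

for p in (1.0, 10.0, 100.0):   # all well below hbar/mu
    print(p, p2_sin(p) / p**2, p2_cos(p) / p**2)   # both ratios close to 1
```

The leading corrections are $-\frac{1}{3}(\mu p/\hbar)^2$ and $-\frac{1}{12}(\mu p/\hbar)^2$ respectively, so at fixed $\mu$ the cosine-based version stays slightly closer to $p^2$.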
\n\tHence, on this level, the whole procedure can be thought of as a prescription to provide physical insight into the behaviour of the quantum expectation values, according to the so-called Ehrenfest theorem.\n\t\n\t\n\t\\section{Polymer Kasner solution}\n\t\\label{Polymer Kasner solution}\n\t\n\tWe will now apply the polymer formalism in a semi-classical framework to the study of a $5D$ Bianchi I model, whose vacuum solution is the well-known Kasner metric \\cite{Kasner}\\cite{Landau}, focusing on the kinematics and dynamics of the fifth dimension. \n\t\n\t\\medskip\n\t\n\t\\noindent In order to obtain the polymer Kasner cosmological solution, we need a minisuperspace Hamiltonian formulation, extended to the $5D$ case.\n\t\n\t\\noindent The $5D$ Bianchi I line element, written in the ADM formalism \\cite{ADM}, is a straightforward generalisation of the $4D$ one\\footnote{As it should be clear from the context, the metric coefficient $c(t)$ is not to be confused with the velocity of light $c$.}:\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\tds^2=&-N^2(t)c^2dt^2+^{(4)}h_{ij}dx^idx^j=\\\\\n\t\t\t&-N^2(t)c^2dt^2+a^2(t)(dx^1)^2+b^2(t)(dx^2)^2\\\\\n\t\t\t& +c^2(t)(dx^3)^2+d^2(t)(dx^5)^2, \\qquad (\\text{i,j=1,2,3,5})\n\t\t\\end{split}\n\t\\end{equation}\n\twhere $N(t)$ is the lapse function of the ADM formalism and $^{(4)}h_{ij}$ is the metric tensor of the $4D$ manifold, whose coordinates are all space-like.\n\t\n\t\\noindent Having the general structure of this metric as a starting point, we can build the Hamiltonian of the system:\n\t\\begin{equation} \\label{PBM_hamiltonian_bianchi_non_diag}\n\t\t\\begin{split}\n\t\t\tH_{Bianchi \\> I}:=H_{B}=Ne^{-\\sum_a q^a\/2}\\biggl\\{\\sum_a p_a^2-\\frac{1}{3}\\biggl[\\sum_b p_b\\biggr]^2\\biggr\\},\n\t\t\\end{split}\n\t\\end{equation}\n\twhich, upon variation with respect to $N(t)$, turns out to provide a constraint for the dynamics, namely $H_B=0$.\n\t\n\t\\noindent The couples $(q^a,p_a)$ in \\eqref{PBM_hamiltonian_bianchi_non_diag} 
are the conjugate variables spanning a highly symmetric phase space, the so-called \\textit{minisuperspace}, and the relation between the metric coefficients and the q-variables is the usual one from the literature \\cite{Montani}, extended to the $5D$ case:\n\t\\begin{equation} \\label{PBM_metric_coeff_to_q}\n\t\t\\begin{split}\n\t\t\t&a(t)=e^{q^1(t)\/2} \\quad b(t)=e^{q^2(t)\/2} \\\\\n\t\t\t&c(t)=e^{q^3(t)\/2} \\quad d(t)=e^{q^5(t)\/2}.\n\t\t\\end{split}\n\t\\end{equation}\n\t\n\tAs is well-known, it is more convenient to express the obtained Hamiltonian in its diagonal form:\n\t\\begin{equation} \\label{PBM_hamilt_bianchi_diagonal}\n\t\tH'_{B}=Ne^{-\\alpha}\\left[-\\frac{1}{3}p^2_\\alpha+p^2_{+}+p^2_{-}+p^2_{\\gamma}\\right],\n\t\\end{equation}\n\twhich is the canonical form of the quadratic form associated with $H_{B}$. \n\t\n\t\\noindent The p-variables in the Hamiltonian \\eqref{PBM_hamilt_bianchi_diagonal} are the conjugate momenta of a set of variables $\\alpha,\\beta_{+},\\beta_{-},\\gamma$, which represent the generalisation of the Misner variables \\cite{Misner}. 
The relation between these variables and the previous q-variables is defined through the following linear transformation:\n\t\\begin{equation} \\label{q_to_misner_var}\n\t\t\\begin{cases}\n\t\t\tq^1=\\frac{1}{2}\\alpha-\\frac{1}{2\\sqrt{3}}\\beta_{+}-\\frac{1}{\\sqrt{6}}\\beta_{-}-\\frac{1}{\\sqrt{2}}\\gamma\\\\ q^2=\\frac{1}{2}\\alpha-\\frac{1}{2\\sqrt{3}}\\beta_{+}-\\frac{1}{\\sqrt{6}}\\beta_{-}+\\frac{1}{\\sqrt{2}}\\gamma \\\\\n\t\t\tq^3=\\frac{1}{2}\\alpha-\\frac{1}{2\\sqrt{3}}\\beta_{+}+\\sqrt{\\frac{2}{3}}\\beta_{-} \\\\\n\t\t\tq^5=\\frac{1}{2}\\alpha+\\frac{\\sqrt{3}}{2}\\beta_{+}.\n\t\t\\end{cases}\n\t\\end{equation}\n\t\n\t\n\tBy using the Hamilton-Jacobi method and the Hamilton equation for the variable $\\alpha$ - which represents the universe volume - in the synchronous reference frame, the standard classical Kasner solution for the $5D$ case can be recovered:\n\t\\begin{equation} \\label{PBM_kasner_metric}\n\t\t\\begin{split}\n\t\t\tds^2&=-c^2dt^2+(t\/t_0)^{2k_1}(dx^1)^2+(t\/t_0)^{2k_2}(dx^2)^2\\\\\n\t\t\t&+(t\/t_0)^{2k_3}(dx^3)^2+(t\/t_0)^{2k_5}(dx^5)^2.\n\t\t\\end{split}\n\t\\end{equation}\n\t\n\t\\noindent The $k$ parameters are the so-called Kasner exponents and they satisfy the following conditions:\n\t\\begin{equation} \\label{PBM_kasner_buondaries_standard}\n\t\t\\begin{cases}\n\t\t\tk_1+k_2+k_3+k_5=1 \\\\\n\t\t\tk_1^2+k_2^2+k_3^2+k_5^2=1.\n\t\t\\end{cases}\n\t\\end{equation}\n\t\\noindent In particular, if we assume isotropy in the three usual spatial dimensions, as observations suggest, that is if we set $k_1=k_2=k_3$, the solution of the previous system becomes:\n\t\\begin{equation}\n\t\tk_1=k_2=k_3=\\frac{1}{2} \\quad k_5=-\\frac{1}{2}.\n\t\\end{equation}\n\t\\noindent This means that while the three usual spatial dimensions expand, the fifth one collapses indefinitely. 
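With the isotropic ansatz the two Kasner conditions reduce to a quadratic equation; a quick check (plain Python):

```python
# 5D vacuum Kasner conditions with the isotropic ansatz k1 = k2 = k3 = k:
#   3k + k5 = 1   and   3k^2 + k5^2 = 1.
# Substituting k5 = 1 - 3k gives 12k^2 - 6k = 0, i.e. k = 0 or k = 1/2
# (the non-trivial branch k = 1/2, k5 = -1/2 is the one quoted in the text).
solutions = []
for k in (0.0, 0.5):
    k5 = 1.0 - 3.0 * k
    assert abs(3 * k + k5 - 1.0) < 1e-12        # linear condition
    assert abs(3 * k**2 + k5**2 - 1.0) < 1e-12  # quadratic condition
    solutions.append((k, k5))

print(solutions)   # [(0.0, 1.0), (0.5, -0.5)]
```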
\n\t\n\t\\medskip \n\t\n\tWe now want to introduce the polymer formalism in the Bianchi I dynamics.\n\tIn order to do this we choose to operate the substitutions \\eqref{PQ_p_var_approx} and \\eqref{PQ_p^2_var_approx_1} on the conjugate momentum $p_\\gamma$ of the Misner variable $\\gamma$, connected with the metric coefficient of the fifth dimension.\n\tThe new \\textit{polymerized} Hamiltonian will be:\n\t\\begin{equation} \\label{PBM_hamlit_poly_Bianchi}\n\t\tH_{B}^{poly}=Ne^{-\\alpha}\\biggl[-\\frac{1}{3}p^2_\\alpha+p^2_{+}+p^2_-+\\frac{\\hbar^2}{\\mu^2}\\sin[2](\\frac{\\mu}{\\hbar}p_\\gamma)\\biggr].\n\t\\end{equation}\n\t\n\t\\noindent By exactly the same procedure as above we find that the solution is still a Kasner solution, where the metric coefficients exhibit a power-law trend in the coordinate time as in \\eqref{PBM_kasner_metric}, but their exponents, that is the Kasner indices, satisfy different constraints due to the quantum polymer modifications:\n\t\n\t\\begin{equation}\n\t\t\\begin{cases}\n\t\t\tk_1+k_2+k_3+k_5=1 \\\\\n\t\t\tk_1^2+k_2^2+k_3^2+k_5^2=1-\\frac{3}{4}\\frac{\\hbar^2}{\\mu^2}\\frac{\\sin[4](\\frac{\\mu}{\\hbar}p_{\\gamma})}{\\sqrt{p^2_++p^2_-+\\frac{\\hbar^2}{\\mu^2}\\sin[2](\\frac{\\mu}{\\hbar}p_\\gamma)}}.\n\t\t\\end{cases}\n\t\\end{equation}\n\t\n\t\\noindent The second term of the right-hand side of the second condition is non-negative, so that we can restate the system as follows:\n\t\\begin{equation}\n\t\t\\begin{cases} \\label{PBM_kasner_ind_poly_conditions}\n\t\t\tk_1+k_2+k_3+k_5=1 \\\\\n\t\t\tk_1^2+k_2^2+k_3^2+k_5^2\\leq 1.\n\t\t\\end{cases}\n\t\\end{equation}\n\t\n\t\\noindent Assuming isotropy in the three usual spatial dimensions and introducing an ordering of the exponents, in particular setting $k_5\\leq k_1=k_2=k_3$, the conditions \\eqref{PBM_kasner_ind_poly_conditions} now admit the solution:\n\t\\begin{equation}\n\t\tk_1=k_2=k_3=\\frac{1}{3} \\quad k_5=0,\n\t\\end{equation}\n\tso that, while the three usual spatial dimensions isotropically expand, the fifth dimension remains static.\n\t\n\t\\section{Kaluza-Klein theory in polymer quantum mechanics framework}\n\t\\label{Kaluza-Klein theory in polymer quantum mechanics framework}\n\t\n\tWe now study the geodesic motion of a point-like particle in the semi-classical polymer framework, by operating the substitution \\eqref{PQ_p_var_approx} on the fifth component of the momentum $p_5$.\n\tThe identification \\eqref{KK_p5_rel_q} between the fifth momentum component and the electric charge is accordingly modified into:\n\t\\begin{equation} \\label{PG_poly_charge_p5}\n\t\tq=\\frac{2\\sqrt{G}}{c}\\frac{\\hbar}{\\mu}\\sin(\\frac{\\mu}{\\hbar}p_5).\n\t\\end{equation}\n\tSince the bound $\\abs{u_5}<1$ now constrains the velocity $u_5$ but no longer the momentum $p_5$, the condition \\eqref{KK_q_over_m} on the charge\/mass ratio of a particle is removed.\n\t\n\t\\noindent The momentum $p_5$ is still quantised according to \\eqref{KK_p_quantised}; by requiring that the $n=1$ mode carries the elementary charge $e$, the size of the fifth dimension becomes a function of the polymer scale $\\mu$:\n\t\\begin{equation} \\label{PG_L_function}\n\t\tL(\\mu)=\\frac{2\\pi\\mu}{\\arcsin(\\frac{\\mu e c}{2\\hbar\\sqrt{G}})},\n\t\\end{equation}\n\twhich is well defined as long as\n\t\\begin{equation}\n\t\t\\mu\\leq\\frac{2\\hbar\\sqrt{G}}{ec}\\approx 3.78\\cdot 10^{-32} \\> cm.\n\t\\end{equation}\n\tAt this point we choose $\\mu$ equal to the Planck length and obtain: \n\t\\begin{equation} \\label{PG_poly_geo_L}\n\t\tL\\approx 2.377 \\cdot 10^{-31} \\> cm,\n\t\\end{equation}\n\twhich almost coincides with the result of the 
standard theory and therefore it can account for the non-observability of the fifth dimension.\n\t\n\tThere are basically two reasons behind this particular choice of the polymer scale: \n\t\\begin{itemize}\n\t\t\\item it is a scale with a strong physical meaning;\n\t\t\\item the $L\/\\mu$ ratio, for this value of $\\mu$, is large enough to allow some calculations to be carried out in the polymer continuum limit, through the assumptions discussed in \\cite{Corichi1} and \\cite{Corichi2}.\n\t\\end{itemize}\n\t\n\t\\noindent A further discussion about the function \\eqref{PG_L_function} is postponed to the next subsection.\n\t\n\t\\medskip \n\t\n\tThe charge function \\eqref{PG_poly_charge_p5}, instead, can be rewritten, by means of the function $L(\\mu)$ \\eqref{PG_L_function}, as follows:\n\t\\begin{equation} \\label{PG_charge_distr_func_poly}\n\t\tq(\\mu;n)=\\frac{2\\hbar\\sqrt{G}}{\\mu c}\\sin\\bigg(n\\arcsin(\\frac{\\mu ec}{2\\hbar\\sqrt{G}})\\bigg)\n\t\\end{equation}\n\t\n\tand by setting the polymer scale $\\mu$ equal to the Planck length, we obtain the symmetric distribution of positive and negative charges reported in Figure \\ref{PG_planckian_charge_distribution_plt}.\n\t\n\t\\noindent The number of modes $n$ in the considered interval is limited by the periodicity of the function itself and it clearly depends on $\\mu$.\n\tFor our choice of $\\mu$ we find that $-73\\leq n \\leq 73$ (see again Figure \\ref{PG_planckian_charge_distribution_plt}). 
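Taking the distribution \eqref{PG_charge_distr_func_poly} at face value, the quoted mode range can be reproduced numerically by limiting $n$ so that $\abs{n}\arcsin(\mu ec/2\hbar\sqrt{G})\leq\pi$, which is our reading of the periodicity condition stated above. A sketch (plain Python, with standard CGS values and the Planck length assumed):

```python
import math

# Standard CGS values (assumed)
hbar, G, c, e = 1.054572e-27, 6.674e-8, 2.997925e10, 4.803205e-10
mu = math.sqrt(hbar * G / c**3)              # Planck length, ~1.6e-33 cm

x = mu * e * c / (2 * hbar * math.sqrt(G))   # argument of the arcsin
n_max = math.floor(math.pi / math.asin(x))   # largest mode within one period
print(n_max)                                 # -> 73, i.e. -73 <= n <= 73

def q(n):
    """Polymer charge spectrum q(mu; n) of eq. (PG_charge_distr_func_poly), in esu."""
    return (2 * hbar * math.sqrt(G) / (mu * c)) * math.sin(n * math.asin(x))

print(q(1) / e)   # -> 1.0: the n = 1 mode carries exactly the electron charge
```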
\n\t\n\t\\noindent It is worth noticing that for any fixed value $n=n^*$ the sine function can always be expanded in correspondence to a suitably small cut-off parameter $\\mu$, according to the inequality $n^*\\mu\\ll 2\\hbar\\sqrt{G}\/ec$, where we have also expanded the function $L(\\mu)$.\n\t\n\t\\noindent In this limit we recover the standard expression for the charge as a multiple of the elementary electron charge, at least for $n$ small enough.\n\t\n\t\\subsection{Polymer quantum dynamics of the Klein-Gordon field}\n\t\n\tFollowing \\cite{Chodos-Detweiler}, we then study the polymer quantum dynamics of a complex Klein-Gordon field on the $5D$ background with a static compactified dimension, implementing the polymer substitution \\eqref{PQ_p^2_var_approx_1} on the fifth momentum in the $5D$ Klein-Gordon equation.\n\tThe resulting mass spectrum is bounded and free of tachyonic modes; its oscillating profile is reported in Figure \\ref{planckian_masses_distribution_plt}, with the Pion mass fitted in the minimum as the ground level of the deformed Kaluza-Klein tower.\n\tIn particular, the mass associated with the fundamental mode $n=1$ is:\n\t\\begin{equation}\n\t\tm=\\frac{e}{2\\sqrt{G}}\\approx 9.3\\cdot 10^{-7} \\> g.\n\t\\end{equation}\n\t\n\t\\noindent It does not depend on $\\mu$ and it is almost a Planckian mass, defined only by fundamental constants, equal to the one obtained in the standard framework (without the introduction of the \\textit{ad hoc} parameter $a$).\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=%\n\t\t0.5\\textwidth]{mass_poly}\t\n\t\t\\caption{Plot of the masses distribution for the complex scalar field, for the value $\\mu_{Planck}\\approx 1.627\\cdot10^{-33} \\> cm$. It is possible to observe the oscillating profile of the function and the fitted Pion mass, placed in the minimum, as ground level of the Kaluza-Klein tower.}\n\t\t\\label{planckian_masses_distribution_plt}\n\t\\end{figure}\n\t\n\tIn Figure \\ref{KG_planckian_charge_distribution_plt} the corresponding charge distribution - which again can be put in the form \\eqref{PG_charge_distr_func_poly} - is instead represented for $\\mu=\\mu_{Planck}$. \n\t\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=%\n\t\t0.5\\textwidth]{poly_charge_KG}\t\n\t\t\\caption{Plot of the charges distribution for the value $\\mu=\\mu_{Planck}$. 
Again, it is possible to observe the oscillating profile of the sequence and the negative and positive symmetric branches.}\n\t\t\\label{KG_planckian_charge_distribution_plt}\n\t\\end{figure}\n\t\n\t\\noindent We observe, however, that only the electron charge ($n=1$) has a phenomenological correspondence, while the remaining points of the sequence do not have a clear interpretation.\n\tIn particular, according to these mass and charge distributions the fundamental charge $e$ has to be associated with a particle of Planckian mass, while the corresponding charge of the pion would be several orders of magnitude smaller than the electron charge.\n\t\n\t\n\t\\noindent Clearly, this peculiar charge-mass configuration is not phenomenologically consistent.\n\t\n\t\\noindent Indeed, calculating the $q\/m$ ratio for the modes of the scalar field, we obtain:\n\t\\begin{equation}\n\t\t\\frac{q(\\mu;n)}{m(\\mu;n)}=\\pm 2\\sqrt{G}\\approx 5.16\\cdot 10^{-4} \\> e.s.u.\/g,\n\t\\end{equation}\n\twhere the $\\pm$ sign is due to the sign of the electric charge.\n\t\n\t\\noindent This value, which coincides with the upper limit of the $q\/m$ ratio \\eqref{KK_q_over_m} of the standard classical case, depends neither on $\\mu$ nor on $n$; rather, it is constant for every polymer scale and every mode, and no known particle satisfies such a relation. \n\t\n\t\n\t\\section{Conclusion}\n\t\\label{Conlusion}\n\t\n\tWe investigated the formulation of a five-dimensional Kaluza-Klein theory in the framework of Polymer Quantum Mechanics, viewed both in a semi-classical and in a full quantum approach. \n\tThe polymer modifications have been implemented for the fifth coordinate only, on a semi-classical level in the spirit of the Ehrenfest theorem (the modification provides the dynamics of the quantum expectation values) and \n\tin a full quantum approach when a Klein-Gordon equation has been investigated. 
\n\t\n\t\\medskip \n\t\n\tWe started by applying the semi-classical polymer formulation to the evolution of the Bianchi I model, showing that the corresponding Kasner solution can be taken in a form in which three scale factors isotropically expand while the remaining one, which in the considered model coincides with the compactified extra-dimension, is static. \n\t\n\t\\noindent Then we studied the geodesic motion of a particle, starting with a Hamiltonian formulation (the only one in which the polymer formulation is viable) and then turning to a formalism based on the particle velocities.\n\tThis procedure allows us, in analogy with the standard literature on this same subject, to identify the expression for the electric charge via the fifth momentum component of the particle. \n\tThe important consequence of this revised formulation consists in overcoming the problem of a charge to mass ratio too small to account in the model for any known elementary particle. \n\tIn fact, the revised constraint, due to the polymer relation between the fifth momentum component and the corresponding velocity, is in principle compatible with all the elementary particles predicted by the Standard Model. \n\t\n\t\\noindent Finally, we implemented a quantum polymer modification in the Klein-Gordon equation, by adopting a mixed representation of quantum mechanics (based on the coordinates for the usual four dimensions and the momentum for the extra one). \n\tThis study aims to revise the analysis in \\cite{Chodos-Detweiler} for a static (now available) extra-dimension, under a polymer prescription for the compactified dimension physics. \n\t\n\tWe got the fundamental result that the tachyon mode present in \\cite{Chodos-Detweiler} is now removed from the mass spectrum and that the obtained values for the boson mass can fit the values spanned in the Standard Model. 
\n\tActually, we arrived at a deformed morphology of the so-called Kaluza-Klein tower (the steps are no longer equispaced), but this revised structure allows us to avoid the only Planckian mode naturally present in the standard Kaluza-Klein formulation. \n\t\n\t\\medskip \n\t\n\tAll these results suggest that some of the puzzling questions affecting the viability of the Kaluza-Klein idea must be reanalysed phenomenologically, including the notion of cut-off physics. \n\tIn fact, in the case of small dimensions, lying about two orders of magnitude above the Planck scale, the effects of the nearby cut-off should be unavoidable, and when its presence is made manifest a new paradigm can be assessed.\n\tIn other words, we argue that some limits of the geometrical unification theories are possibly due to the ultraviolet divergences that the gravitational field possesses; when these are somehow attenuated, as in the polymer scenario adopted here, the compactified dimension takes a more regular behaviour, which is reflected in the solution of some inconsistencies of the underlying model.\n\t\n\tThe emergence of a static dimension in the $5D$ Kasner solution - which removes the necessity of dealing with unphysical tachyonic modes - undoubtedly represents the simplest elucidation of this point of view. \n\t\n\t\n\t\n\\section{Introduction}\n\t\n\t\n\tThe original Kaluza-Klein idea \\cite{Kaluza}\\cite{Klein1}\\cite{Klein2} consists in a $5D$ space-time formulation aiming to include also the electromagnetic interaction in a geometrical picture. \n\t\n\t\\noindent The surprising formal success in providing a metric representation of the vector potential suggested, in the Seventies, the attempt at a geometrical unification \\cite{ModernKKtheories}, able to accommodate all the fundamental interactions in a multi-dimensional space-time, with particular attention to the Electroweak Model. 
\n\tThe suggestive idea at the ground of these approaches consists in the possibility of reproducing the Lie algebra characterizing the elementary particle symmetries through the isometries of the extra-dimensional space. The non-trivial result obtained by the extra-dimensional Kaluza-Klein theories relies on the emergence from the multi-dimensional Einstein-Hilbert Lagrangian of the correct Yang-Mills action for the vector bosons which are the interaction carriers. \n\t\n\tHowever, many non-trivial problems affected this fascinating attempt at a geometrization of Nature. One of the main questions arose from the difficulty of providing a geometrical version of the chirality singled out by the electroweak interaction \\cite{Wetterich}, as well as from the impossibility of representing the Standard Model of elementary particles in a Kaluza-Klein scenario \\cite{Witten}.\n\tFor alternative non-Riemannian approaches to solve the chirality problem of the Electroweak model see \\cite{Cianfrani-Montani1}\\cite{Cianfrani-Montani2}. \n\t\n\t\n\t\\noindent Finally, we observe that a full geometrical picture of Nature would involve the geometrical formulation of the fermionic field, a really non-trivial perspective if supersymmetry is not considered \\cite{Ferrara}. \n\t\n\t\\medskip\n\t\n\tEven the $5D$ Kaluza-Klein theory presents some important difficulties, see \\cite{Cianfrani-Marrocco-Montani} for a review, which leave open the question concerning the viability of this approach as a geometrization of the electromagnetic interaction. \n\t\n\t\\noindent First of all, the $5D$ metric tensor contains an additional degree of freedom besides the $4D$ metric and the vector potential, namely the fifth diagonal component. 
\n\tUnder the necessary restriction of the coordinate transformations needed to deal with the $U(1)$ symmetry, this quantity behaves as an additional scalar field, whose presence non-trivially affects basic features of the electromagnetism, for instance, the charge conservation itself \\cite{ModernKKtheories}\\cite{Lacquaniti-Montani}\\cite{Lacquaniti-Montani-Vietri}. \n\tBut, even fixing this scalar field to unity in the Lagrangian for the model (with the right sign of a space-like component), the ratio between the charge and the mass of an elementary particle is nonetheless constrained to remain too small to reproduce the Standard Model spectrum of masses (for a proposal to solve the charge to mass ratio problem see \\cite{Lacquaniti-Montani}).\n\t\n\t\\noindent Finally, studying the morphology of a five-dimensional d'Alembertian operator, it is immediate to recognize the emergence of huge massive modes of a boson field, as a result of the compactified scale of the fifth dimension \\cite{Chodos-Detweiler}.\n\t\n\t\\medskip \n\t\n\tIn the present analysis, we approach the formulation of the $5D$ Kaluza-Klein theory within the semi-classical and quantum framework of the so-called Polymer Quantum Mechanics \\cite{Corichi1}\\cite{Corichi2}.\n\tThis revised formulation of quantum physics aims to introduce a discrete nature in the generalized coordinate (a real coordinate of a generic degree of freedom), as an effect of the emergence of cut-off physics. \n\t\n\tIndeed, the fifth compactified dimension, being in the standard approach about two orders of magnitude larger than the Planck size, is in the natural condition to be approached via the continuum limit of Polymer Quantum Mechanics, as referred to a point particle living in this dimension.\n\tFurthermore, also the corresponding diagonal metric component (namely the additional Universe scale factor) is expected, in such a dynamical regime, to be affected by cut-off physics effects. 
\n\t\n\tThe present analysis follows the scenario proposed in \cite{Chodos-Detweiler}, but revised in view of the polymer formulation. \n\t\n\t\noindent We first show that a five-dimensional Kasner solution \cite{Kasner}\cite{Landau}\cite{Montani} (characterizing the Bianchi I Universe) admits a configuration in which three spatial directions isotropically expand, while the fourth remains static. \n\tThis result is relevant for the implementation of Kaluza-Klein theory, since it removes some of the non-trivial inconvenient features of a dimension collapsing close to a Planckian size. \n\tFor a previous attempt to deal with a static compactified dimension, on the basis of a physical phenomenon, see \cite{Salam}. \n\t\n\t\noindent Then, we analyse the geodesic motion on a generic $5D$ space-time having a static fifth dimension, and we outline a natural solution to the charge to mass ratio problem. \n\tThis result comes from the details of the semi-classical polymer formulation, adopted for the Hamiltonian dynamics of the free-falling particle. \n\tIn particular, the modified expression taken by the fifth momentum of the particle leads to a modified constitutive relation, so that - when passing from the momenta to the velocities - the previous constraint on the charge to mass ratio allows for values which are natural for the Standard Model particles. \n\t\n\t\noindent Finally, we study a five-dimensional Klein-Gordon equation and we clarify that, addressing the fifth coordinate via the quantum polymer prescription, the spectrum of emerging masses can fit some values of the Standard Model one, and no tachyonic mode emerges, differently from the case discussed in \cite{Chodos-Detweiler}. 
\n\t\n\tHowever, it should be noticed that, in this quantum field approach, a problem with the definition of a correct $q\/m$ ratio for a Standard Model particle still survives.\n\t\n\t\medskip \n\t\n\tThe present study suggests that, when cut-off physics is included in the Kaluza-Klein formulation, some of the puzzling features of this approach are recast into a form that can give new physical insight for their understanding and overcoming. \n\t\n\t\medskip \n\t\n\tThe manuscript is structured as follows:\n\tin Section \ref{Kaluza-Klein theory} we review the main features of ordinary Kaluza-Klein theory, from the metric tensor construction and the resulting field equations to the geodesic motion of a point-like particle, whose analysis, in particular, leads to the ordinary quantisation law for the electric charge, an estimate for the size $L$ of the fifth dimension and the aforementioned shortcoming\n\tof the charge to mass ratio of a particle. \n\t\n\t\noindent In Section \ref{Polymer quantum mechanics} we review polymer quantum mechanics, summarizing the construction of the corresponding kinematic Hilbert space, via the introduction of a Weyl-Heisenberg algebra and under the assumption of the existence of a discrete spatial coordinate, and the implementation of the proper dynamics both on a quantum and a semi-classical level, with particular regard to the p-polarization.\n\t\n\t\noindent In Section \ref{Polymer Kasner solution} we analyse the polymer-modified Kasner solution obtained from the introduction of the polymer framework on a semi-classical level in a $5D$ Bianchi I model, focusing on the behaviour of the fifth dimension.\n\t\n\t\noindent Finally, in Section \ref{Kaluza-Klein theory in polymer quantum mechanics framework}, based on the results of the previous section, we first analyse, in a semi-classical formulation of Polymer Quantum Mechanics, the geodesic motion of a point-like particle and all its features, comparing all the 
results with the ones from the ordinary theory, and then we carry out the study of the polymer quantum dynamics of a complex Klein-Gordon field, along the lines of \cite{Chodos-Detweiler}, discussing with particular attention the resulting electric charge distribution and mass spectrum. \n\t\n\t\noindent In Section \ref{Conlusion} brief concluding remarks follow.\n\t\n\t\section{Kaluza-Klein theory} \n\t\label{Kaluza-Klein theory}\n\t\n\tKaluza-Klein theory is a $5D$ extension of Einstein's General Relativity whose aim is to provide a unified description of the gravitational and electromagnetic interactions in a purely geometric fashion. \n\t\n\t\noindent In the original theory \cite{Kaluza}\cite{Klein1}\cite{Klein2} the space-time is described by a $5D$ smooth manifold $V^5$, which is assumed to be the direct product $V^4\otimes S^1$ between a generic $4D$ manifold and a circle of length $L$, that is a compact space.\n\t\n\t\noindent A crucial assumption is that all the observable physical quantities do not depend on the fifth coordinate $x^5$.\n\tThis hypothesis can be further motivated by noticing that, due to the compactness of the fifth dimension, all the observable physical quantities are periodic in $x^5$; hence the independence of the fifth coordinate can be regarded as the zero-order cut-off of a Fourier expansion of these quantities, the so-called cylinder condition.\n\t\n\t\noindent Once the $5D$ general relativity principle is restricted to the following coordinate transformations (and their inverses):\n\t\begin{equation} \label{KK_group_trans_1}\n\t\t\begin{cases}\n\t\t\t\n\t\t\tx^{\mu'}=\Psi(x^{\mu}) \\\n\t\t\tx^{5'}=x^5 + k\Lambda(x^{\mu}) \\\n\t\t\t\n\t\t\end{cases}\n\t\end{equation} \n\tthe $5D$ metric tensor of the expanded theory can be written as follows:\n\t\begin{equation}\label{KK_metric_tensor}\n\t\t\tilde{g}_{ab}=\n\t\t\left(\n\t\t\begin{array}{c|c}\n\t\t\tg_{\mu \nu} + k^2 \phi^2 A_\mu A_\nu & 
k \\phi^2 A_\\mu \\\\\n\t\t\t\\hline\n\t\t\tk \\phi^2 A_\\nu & \\phi^2\n\t\t\\end{array}\n\t\t\\right) ,\n\t\\end{equation}\n\twhere $g_{\\mu \\nu}$ is the $4D$ metric tensor of the ordinary theory, $A_{\\mu}$ is the electromagnetic four-potential, $\\phi$ is a scalar field and $k$ is a constant to be properly determined. \n\t\n\t\\subsection{Kaluza-Klein field equations}\n\tThe field equations of the theory can be obtained from a $5D$ Einstein-Hilbert action:\n\t\\begin{equation}\n\t\t^{(5)}S:=\\tilde{S}= - \\frac{1}{16\\pi\\tilde{G}}\\int_{V^{4}\\otimes S^1} dx^{0}dx^{1}dx^{2}dx^{3}dx^{5} \\sqrt{-\\tilde{g}} \\tilde{R} ,\n\t\\end{equation}\n\twhere $\\tilde{G}$, $\\tilde{g}$ and $\\tilde{R}$ are respectively the $5D$ gravitational constant, the metric tensor $\\tilde{g}_{ab}$ determinant and the $5D$ scalar curvature.\n\t\n\tBy performing a 4+1 dimensional reduction the ordinary $4D$ Einstein-Maxwell action is surprisingly obtained:\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\tilde{S}=& - \\frac{c^3}{16\\pi{G}}\\int_{V^{4}} d^4x \\sqrt{{-g}}\\phi \\biggl( R + \\frac{1}{4}\\phi^2 k^2 F_{\\mu \\nu}F^{\\mu \\nu}\\\\\n\t\t\t&+\\frac{2}{\\phi}\\nabla_{\\mu} \\partial^{\\mu} \\phi \\biggr).\n\t\t\\end{split}\n\t\\end{equation}\n\tBy setting $\\phi=1$ in the action - as in the original work of Kaluza \\cite{Kaluza} and Klein \\cite{Klein1}\\cite{Klein2} - and by using the variational principle ordinary, Einstein-Maxwell field equations can be correctly recovered, once $k$ is setted equal to $2\\sqrt{G}\/c^2$.\n\t\n\t\n\t\\subsection{Geodesic motion}\n\t\n\tA free point-like particle in this theory will move along a $5D$ geodesic, hence the respective action, with signature (-,+,+,+,+), will be:\n\t\\begin{equation}\n\t\t\\tilde{S}=-mc\\int d\\tilde{s}=-mc \\int \\sqrt{-\\tilde{g}_{a b}\\frac{d x^a}{d \\tilde{s}}\n\t\t\t\\frac{d x^b}{d \\tilde{s}}} d\\tilde{s},\n\t\\end{equation}\n\twhere $d\\tilde{s}$ is the $5D$ line element, to be distinguished from the $4D$ 
line element $ds$.\n\t\n\t\noindent Having set - here and in the further developments - $\phi=1$ in the metric \eqref{KK_metric_tensor}, the $5D$ geodesic equation is immediately recovered from the variational principle:\n\t\begin{equation} \label{geodesic}\n\t\t\tilde{u}^a \tilde{\nabla}_{a} \tilde{u}^b=0.\n\t\end{equation}\n\t\n\tIt is essential to point out that the $5D$ velocity $\tilde{u}^a$ is different from the $4D$ velocity $u^a$; indeed they are related as follows:\n\t\begin{equation} \label{KK_five-four_velocity_rel}\n\t\t\tilde{u}^a=\frac{1}{\sqrt{1-u_5^2}} u^a.\n\t\end{equation}\n\t\n\t\noindent From relations \eqref{geodesic} and \eqref{KK_five-four_velocity_rel} it can be easily shown that $u_5$ is a constant of motion. \n\t\n\tIn order to obtain the $4D$ equations of motion, the geodesic equation \eqref{geodesic} has to be evaluated for the usual space-time variables only, which we indicate with Greek letters. \n\t\n\t\noindent By making use of relation \eqref{KK_five-four_velocity_rel}, the following result is attained:\n\t\begin{equation}\n\t\tu^\nu\nabla_{\nu}u^{\mu}=\frac{2\sqrt{G}}{c^2}u_5u^{\nu}g^{\mu \lambda}F_{ \nu \lambda},\n\t\end{equation}\n\twhere $F_{ \nu \lambda}$ is the antisymmetric electromagnetic tensor. \n\t\n\t\noindent By comparison with the ordinary classical equation:\n\t\begin{equation} \label{KK_ordinary_4d_geodesic}\n\t\tu^\nu\nabla_{\nu}u^{\mu}=\frac{q}{mc^2}u^{\nu}g^{\mu \lambda}F_{\nu\lambda},\n\t\end{equation}\n\tthe following fundamental identification is achieved:\n\t\begin{equation} \label{KK_u5_rel_q}\n\t\tu_{5}=\frac{q}{2m\sqrt{G}}.\n\t\end{equation}\n\t\noindent Since $p_5=mcu_5$, it can then be written:\n\t\begin{equation} \label{KK_p5_rel_q}\n\t\tp_5=\frac{qc}{2 \sqrt{G}},\n\t\end{equation}\n\twhich establishes a fundamental relation between the fifth component of the particle momentum and its electric charge. 
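As a cross-check of the value $k=2\sqrt{G}\/c^2$ quoted above, one can verify that with this choice the Maxwell term in the dimensionally reduced action acquires the canonical Gaussian-units prefactor $1\/(16\pi c)$. The following sketch is purely illustrative (not part of the original derivation) and tests the identity numerically for arbitrary positive $G$ and $c$:

```python
import math
import random

# The identity (c^3 / 16 pi G) * (k^2 / 4) = 1 / (16 pi c), with k = 2 sqrt(G)/c^2,
# should hold for any positive values of G and c, so we test random ones.
for _ in range(5):
    G = random.uniform(0.1, 10.0)
    c = random.uniform(0.1, 10.0)
    k = 2 * math.sqrt(G) / c**2
    # Prefactor of the F_{mu nu} F^{mu nu} term in the reduced action (phi = 1).
    prefactor = c**3 / (16 * math.pi * G) * k**2 / 4
    assert math.isclose(prefactor, 1 / (16 * math.pi * c), rel_tol=1e-12)
print("Maxwell prefactor reduces to 1/(16*pi*c) for k = 2*sqrt(G)/c^2")
```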
\n\t\n\tThe compactness of the fifth dimension implies a quantisation of momentum along the fifth direction:\n\t\begin{equation} \label{KK_p_quantised}\n\t\tp_5=\frac{2\pi \hbar}{L}n \qquad n \in \mathbb{Z},\n\t\end{equation}\n\twhere we remind that $L$ is the length of the circle describing the fifth dimension.\n\t\noindent By a direct comparison between relations \eqref{KK_p5_rel_q} and \eqref{KK_p_quantised} a natural quantisation law for the electric charge and an estimate of the size $L$ of the fifth dimension are obtained:\n\t\begin{equation} \label{KK_ordinary_L}\n\t\t{L}=4\pi \frac{\hbar\sqrt{G}}{e c}\approx 2.37\cdot10^{-31}\>cm \qquad q=ne,\n\t\end{equation}\n\twhere $e$ is the electron charge.\n\t\n\t\noindent Consistently, the size of the fifth dimension is in agreement with its non-observability and with the impossibility of currently detecting it. \n\t\n\t\medskip \n\t\n\tNevertheless, despite these remarkable results, the relation \eqref{KK_five-four_velocity_rel} sets the constraint $\abs{u_5}<1$; \n\tby virtue of relation \eqref{KK_u5_rel_q}, this implies the following condition on the charge\/mass ratio of a particle:\n\t\begin{equation} \label{KK_q_over_m}\n\t\t\frac{\abs{q}}{m}<2\sqrt{G}\approx 5.16\cdot10^{-4} \> e.s.u.\/g,\n\t\end{equation}\n\twhich, unfortunately, has no phenomenological confirmation, neither for elementary particles nor for macroscopic objects, hence representing one of the puzzling shortcomings of the theory.\n\t\n\t\section{Polymer quantum mechanics}\n\t\label{Polymer quantum mechanics}\n\t\n\tPolymer quantum mechanics is a non-standard representation of the non-relativistic quantum theory, unitarily inequivalent to the Schrödinger one \cite{Corichi1}\cite{Corichi2}.\n\tIts development is due mainly to the exploration of background-independent theories, such as quantum gravity, of which it mimics several structures \cite{Ashtekar}. 
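The numerical estimates in \eqref{KK_ordinary_L} and \eqref{KK_q_over_m} can be reproduced from the CGS values of the fundamental constants. The sketch below is illustrative (the constant values are our own input), and it also shows how grossly an electron violates the charge-to-mass bound:

```python
import math

# CGS (Gaussian) values of the fundamental constants (illustrative input).
hbar = 1.0546e-27   # erg s
G    = 6.674e-8     # cm^3 g^-1 s^-2
e    = 4.803e-10    # esu (elementary charge)
c    = 2.998e10     # cm/s

# Size of the fifth dimension, L = 4 pi hbar sqrt(G) / (e c).
L = 4 * math.pi * hbar * math.sqrt(G) / (e * c)

# Upper bound on the charge-to-mass ratio, |q|/m < 2 sqrt(G).
qm_bound = 2 * math.sqrt(G)

# The electron violates the bound by ~21 orders of magnitude.
m_electron = 9.109e-28  # g
qm_electron = e / m_electron

print(f"L ~ {L:.3e} cm, bound ~ {qm_bound:.3e} esu/g, electron q/m ~ {qm_electron:.3e} esu/g")
assert abs(L / 2.37e-31 - 1) < 0.01
assert abs(qm_bound / 5.16e-4 - 1) < 0.01
assert qm_electron > 1e20 * qm_bound
```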
\n\t\n\t\\smallskip\n\t\n\tGiven a discrete orthonormal basis $\\ket{\\mu_i}$ for a space $\\mathcal{H'}$, such that $\\braket{\\mu_i}{\\mu_j}=\\delta_{ij}$, where $\\mu_i \\in \\mathbb{R}$ and $i=1,2...,n$, the kinematic polymer Hilbert space $\\mathcal{H}_{poly}$ is obtained as a Cauchy completion of $\\mathcal{H'}$.\n\t\n\t\\noindent On this space two abstract operators can be defined:\n\t\\begin{align}\n\t\t& \\hat{\\epsilon}\\ket{\\mu}:=\\mu \\ket{\\mu} \\\\\n\t\t& \\hat{s}(\\lambda)\\ket{\\mu}:=\\ket{\\mu +\\lambda}.\n\t\\end{align}\n\t\\noindent The operator $\\hat{\\epsilon}$ is a symmetric operator and $\\hat{s}(\\lambda)$ defines a one-parameter family of unitary operators.\n\tIn spite of this $\\hat{s}(\\lambda)$ is discontinuous with respect to $\\lambda$;\n\tthis means that no self-adjoint operator exists that can generate $\\hat{s}(\\lambda)$ by exponentiation. \n\tTaking now in exam a physical system with configuration space spanned by the coordinate $q$, which is assumed to have a discrete character, and its conjugate momentum $p$, the previous abstract representation can be projected and studied with respect to p-polarization.\n\tIn this polarization the basis states will be:\n\t\\begin{equation} \\label{basic_states_p}\n\t\t\\psi_\\mu(p)=\\braket{p}{\\mu}=e^{i\\mu p\/\\hbar}.\n\t\\end{equation}\n\tFollowing the algebraic construction method, a Weyl-Hei\\-sen\\-berg\\- algebra is introduced on $\\mathcal{H}_{poly}$ and the action of its generators on the basis states is defined as follows:\n\t\\begin{align}\n\t\t&\\hat{\\mathcal{U}}(\\nu)\\psi_\\mu(p)=\\psi_\\mu(p+\\nu)=e^{i\\mu(p+\\nu)\/\\hbar}=e^{i\\mu\\nu \/\\hbar}e^{i\\mu p\/\\hbar} \\\\\n\t\t&\\hat{\\mathcal{V}}(\\lambda)\\psi_\\mu(p)=e^{i\\lambda p\/\\hbar}e^{i\\mu p\/\\hbar}=e^{i(\\lambda + \\mu)p\/\\hbar}=\\psi_{\\mu+\\lambda}(p).\n\t\\end{align}\n\tFrom this it can be inferred that the shifting operator $\\hat{s}(\\lambda)$ can be identified with the operator 
$\\hat{\\mathcal{V}}(\\lambda)$, which is then discontinuous in $\\lambda$; this means that the spatial translations generator, that is the momentum operator $\\hat{p}$, does not exist. \n\tOn the other hand, the operator $\\hat{\\mathcal{U}}(\\nu)$ is continuous, so that the translations generator in the momentum space, i.e. the position operator $\\hat{q}$, exists and it can be identified with the abstract operator $\\hat{\\epsilon}$.\n\t\n\t\\noindent Indeed:\n\t\\begin{equation} \\label{PQ_q_op}\n\t\t\\hat{q}\\psi_{\\mu}(p)=-i\\hbar \\partial_p\\psi_{\\mu}(p)=\\mu\\psi_{\\mu}(p).\n\t\\end{equation}\n\tIt can be proved \\cite{Corichi2} that the kinematic polymer Hilbert space in this polarization is explicitly given by $\\mathcal{H}_{poly,p}=L^2\\left(\\mathbb{R}_{B},d\\mu_{H}\\right)$, where $\\mathbb{R}_{B}$ is the so-called Bohr compactification of the real line and $d\\mu_{H}$ is the Haar measure. \n\t\n\t\\medskip \n\t\n\tA similar picture is obtained in the q-polarization: the momentum operator cannot still be defined, while it is possible to show that the fundamental wave functions are Kronecker deltas and that the kinematic polymer Hilbert space is explicitly given by $\\mathcal{H}_{poly,x}=L^2\\left(\\mathbb{R}_{d},d\\mu_{c}\\right)$, where $\\mathbb{R}_{d}$ is the real line equipped with a discrete topology and $d\\mu_{c}$ is the counting measure. \n\t\n\t\\medskip \n\t\n\tIn order to build the dynamics a Hamiltonian operator $\\hat{H}$ has to be defined on $\\mathcal{H}_{poly}$, but since $\\hat{p}$ do not exist, a direct implementation is not possible.\n\tTo overcome this problem the momentum operator can be approximated by defining on the configuration space of the system a regular graph $\\gamma_{\\mu}=\\{q \\in \\mathbb{R} \\> | \\> q=n\\mu, n \\in \\mathbb{Z}\\}$, where $\\mu$ is the fundamental scale introduced by the polymer representation. 
\n\tThe basis kets $\ket{\mu}$ can now be indicated as $\ket{\mu_n}$, where $\mu_n=n\mu$ are the points belonging to the graph $\gamma_{\mu}$.\n\tConsequently the generic states will be:\n\t\begin{equation}\n\t\t\ket{\psi}_{\gamma_{\mu}}=\sum_n a_n \ket{\mu_n},\n\t\end{equation}\n\tand they will belong to the new Hilbert space $\mathcal{H}_{\gamma_{\mu}} \subset \mathcal{H}_{poly}$, provided that they satisfy the condition $\sum_n \abs{a_n}^2 < \infty$.\n\tSince the dynamics has to be closed in $\mathcal{H}_{\gamma_{\mu}}$, the shift parameter $\lambda$ has to be fixed equal to $\mu$, hence the action of $\hat{\mathcal{V}}(\lambda)$ will be:\n\t\begin{equation}\n\t\t\hat{\mathcal{V}}(\lambda)\ket{\mu_n}=\hat{\mathcal{V}}(\mu)\ket{\mu_n}=\ket{\mu_{n+1}}.\n\t\end{equation}\n\t\n\t\noindent On general grounds, the variable $p$ can be approximated as:\n\t\begin{equation} \label{PQ_p_var_approx}\n\t\tp \approx \frac{\hbar}{\mu} \sin(\frac{\mu}{\hbar}p)=\frac{\hbar}{2i\mu}\left(e^{i\frac{\mu}{\hbar}p}-e^{-i\frac{\mu}{\hbar}p}\right),\n\t\end{equation}\n\twhen the condition $p\ll\hbar\/\mu$ holds.\n\t\n\t\noindent Based on this approximation and visualizing the action of $\hat{\mathcal{V}}(\mu)$ in the p-polarization, it is clear that the operator $\hat{p}$ and its action can be approximated as:\n\t\begin{equation} \label{PQ_p_op_approx}\n\t\t\hat{p}_{\mu}\ket{\mu_n}\approx \frac{\hbar}{2i\mu} \left[\hat{\mathcal{V}}(\mu)-\hat{\mathcal{V}}(-\mu)\right]\ket{\mu_n},\n\t\end{equation}\n\twhere $\mu$ acts as a regulator.\n\t\n\t\noindent To approximate the operator $\hat{p}^2$ two paths are possible:\n\t\begin{equation} \label{PQ_p^2_op_approx_1}\n\t\t\hat{p}^2_{\mu}\approx \frac{\hbar^2}{4\mu^2} \left[ 2-\hat{\mathcal{V}}(2\mu)-\hat{\mathcal{V}}(-2\mu)\right],\n\t\end{equation} \n\tbased on the approximation \n\t\begin{equation} 
\label{PQ_p^2_var_approx_1}\n\t\tp^2 \approx \frac{\hbar^2}{\mu^2} \sin[2](\frac{\mu}{\hbar}p)\n\t\end{equation}\n\tand hence defined by iterating the action of $\hat{p}$ according to \eqref{PQ_p_op_approx}, or\n\t\begin{equation} \label{PQ_p^2_op_approx_2}\n\t\t\hat{p}^2_{\mu}\approx \frac{\hbar^2}{{\mu}^2} \left[2-\hat{\mathcal{V}}(\mu)-\hat{\mathcal{V}}(-\mu)\right],\n\t\end{equation}\n\texploiting the approximation \n\t\begin{equation}\label{PQ_p^2_var_approx_2}\n\t\tp^2 \approx \frac{2\hbar^2}{{\mu}^2}\left(1- \cos(\frac{\mu}{\hbar}p)\right),\n\t\end{equation}\n\tvalid as long as $p\ll\hbar\/\mu$.\n\t\n\t\n\t\noindent Hence, the well-defined, symmetric Hamiltonian operator will be:\n\t\begin{equation}\n\t\t\hat{H}_{\mu}:=\frac{\hat{p}^2_{\mu}}{2m}+\hat{V}(\hat{q}),\n\t\end{equation}\n\twhere $\hat{V}(\hat{q})$ is the potential operator. \n\t\n\tTherefore, quantising a system according to the polymer representation, in the p-polarization, implies the use of the approximation \eqref{PQ_p^2_op_approx_1} or \eqref{PQ_p^2_op_approx_2} for the momentum operator, while the position operator will be the natural differential operator, whose action is expressed in \eqref{PQ_q_op}.\n\t\n\tIn a semi-classical approach, this procedure corresponds to the proper introduction of the approximations \eqref{PQ_p_var_approx}, \eqref{PQ_p^2_var_approx_1} and \eqref{PQ_p^2_var_approx_2} on the variable $p$ in the dynamics of the system of interest. 
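Since in the p-polarization the shift operators act multiplicatively, both approximants act diagonally on the basis states. The sketch below (illustrative values, units with $\hbar=1$) verifies the eigenvalues \eqref{PQ_p^2_var_approx_1} and \eqref{PQ_p^2_var_approx_2} and their closeness to $p^2$ for $p\ll\hbar\/\mu$:

```python
import cmath
import math

hbar, mu = 1.0, 0.05  # units with hbar = 1; small polymer scale (assumed values)

def psi(label, p):
    """Basis state psi_label(p) = exp(i label p / hbar) in the p-polarization."""
    return cmath.exp(1j * label * p / hbar)

def V(lam, f):
    """Shift operator: multiplication by exp(i lam p / hbar) in this polarization."""
    return lambda p: cmath.exp(1j * lam * p / hbar) * f(p)

label = 3 * mu          # a point of the graph, mu_n = n mu with n = 3
f = lambda p: psi(label, p)

for p in (0.1, 0.4, 1.0):
    # First approximant: (hbar^2 / 4 mu^2) [2 - V(2mu) - V(-2mu)].
    p2_a = hbar**2 / (4 * mu**2) * (2 * f(p) - V(2*mu, f)(p) - V(-2*mu, f)(p))
    # Second approximant: (hbar^2 / mu^2) [2 - V(mu) - V(-mu)].
    p2_b = hbar**2 / mu**2 * (2 * f(p) - V(mu, f)(p) - V(-mu, f)(p))
    # Both act diagonally, with the eigenvalues quoted in the text.
    assert abs(p2_a - (hbar/mu * math.sin(mu*p/hbar))**2 * f(p)) < 1e-10
    assert abs(p2_b - 2*hbar**2/mu**2 * (1 - math.cos(mu*p/hbar)) * f(p)) < 1e-10
    # For p << hbar/mu both approximants are close to the classical p^2.
    assert abs(p2_a / f(p) - p**2) < 1e-3 and abs(p2_b / f(p) - p**2) < 1e-3
print("both polymer p^2 approximants act diagonally and reduce to p^2 for small p")
```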
\n\tHence, on this level, the whole procedure can be thought of as a prescription to provide physical insight into the behaviour of the quantum expectation values, according to the so-called Ehrenfest theorem.\n\t\n\t\n\t\section{Polymer Kasner solution}\n\t\label{Polymer Kasner solution}\n\t\n\tWe will now apply the polymer formalism in a semi-classical framework to the study of a $5D$ Bianchi I model, whose vacuum solution is the well-known Kasner metric \cite{Kasner}\cite{Landau}, focusing on the kinematics and dynamics of the fifth dimension. \n\t\n\t\medskip\n\t\n\t\noindent In order to obtain the polymer Kasner cosmological solution, we need a minisuperspace Hamiltonian formulation, extended to the $5D$ case.\n\t\n\t\noindent The $5D$ Bianchi I line element, written in the ADM formalism \cite{ADM}, is a straightforward generalisation of the $4D$ one\footnote{As it should be clear from the context, the metric coefficient $c(t)$ is not to be confused with the velocity of light $c$.}:\n\t\begin{equation}\n\t\t\begin{split}\n\t\t\tds^2=&-N^2(t)c^2dt^2+^{(4)}h_{ij}dx^idx^j=\\\n\t\t\t&-N^2(t)c^2dt^2+a^2(t)(dx^1)^2+b^2(t)(dx^2)^2\\\n\t\t\t& +c^2(t)(dx^3)^2+d^2(t)(dx^5)^2, \qquad (\text{i,j=1,2,3,5})\n\t\t\end{split}\n\t\end{equation}\n\twhere $N(t)$ is the lapse function of the ADM formalism and $^{(4)}h_{ij}$ is the metric tensor of the $4D$ manifold, whose coordinates are all space-like.\n\t\n\t\noindent Having the general structure of this metric as a starting point, we can build the Hamiltonian of the system:\n\t\begin{equation} \label{PBM_hamiltonian_bianchi_non_diag}\n\t\t\begin{split}\n\t\t\tH_{Bianchi \> I}:=H_{B}=Ne^{-\sum_a q^a\/2}\biggl\{\sum_a p_a^2-\frac{1}{3}\biggl[\sum_b p_b\biggr]^2\biggr\},\n\t\t\end{split}\n\t\end{equation}\n\twhich, upon varying with respect to $N(t)$, turns out to be a constraint for the dynamics, namely $H_B=0$.\n\t\n\t\noindent The pairs $(q^a,p_a)$ in \eqref{PBM_hamiltonian_bianchi_non_diag} 
are the conjugate variables spanning a highly symmetric phase space, the so-called \textit{minisuperspace}, and the relation between the metric coefficients and the q-variables is the usual one in the literature \cite{Montani}, extended to the $5D$ case:\n\t\begin{equation} \label{PBM_metric_coeff_to_q}\n\t\t\begin{split}\n\t\t\t&a(t)=e^{q^1(t)\/2} \quad b(t)=e^{q^2(t)\/2} \\\n\t\t\t&c(t)=e^{q^3(t)\/2} \quad d(t)=e^{q^5(t)\/2}.\n\t\t\end{split}\n\t\end{equation}\n\t\n\tAs is well known, it is more convenient to express the obtained Hamiltonian in its diagonal form:\n\t\begin{equation} \label{PBM_hamilt_bianchi_diagonal}\n\t\tH'_{B}=Ne^{-\alpha}\left[-\frac{1}{3}p^2_\alpha+p^2_{+}+p^2_{-}+p^2_{\gamma}\right],\n\t\end{equation}\n\twhich is the canonical form of the quadratic form associated with $H_{B}$. \n\t\n\t\noindent The p-variables in the Hamiltonian \eqref{PBM_hamilt_bianchi_diagonal} are the conjugate momenta of a set of variables $\alpha,\beta_{+},\beta_{-},\gamma$, which represent the generalisation of the Misner variables \cite{Misner}. 
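One can check that the matrix of the transformation relating the q-variables to $\alpha,\beta_{+},\beta_{-},\gamma$ is orthogonal, so that the momenta transform with the same matrix and the kinetic quadratic form of \eqref{PBM_hamiltonian_bianchi_non_diag} indeed becomes the diagonal form appearing in \eqref{PBM_hamilt_bianchi_diagonal}. A minimal numerical sketch:

```python
import math
import random

s3, s6, s2 = math.sqrt(3), math.sqrt(6), math.sqrt(2)

# Rows: coefficients of (alpha, beta_+, beta_-, gamma) in q^1, q^2, q^3, q^5.
M = [
    [1/2, -1/(2*s3), -1/s6,           -1/s2],
    [1/2, -1/(2*s3), -1/s6,            1/s2],
    [1/2, -1/(2*s3),  math.sqrt(2/3),  0.0],
    [1/2,  s3/2,      0.0,             0.0],
]

# M is orthogonal, hence the momenta transform in the same way: p_q = M p_x.
for _ in range(5):
    px = [random.uniform(-1, 1) for _ in range(4)]          # (p_alpha, p_+, p_-, p_gamma)
    pq = [sum(M[a][i] * px[i] for i in range(4)) for a in range(4)]
    quad_q = sum(p * p for p in pq) - (sum(pq))**2 / 3       # kinetic form in H_B
    quad_x = -px[0]**2 / 3 + px[1]**2 + px[2]**2 + px[3]**2  # diagonal form in H'_B
    assert math.isclose(quad_q, quad_x, abs_tol=1e-9)
print("the linear transformation diagonalizes the Bianchi I kinetic form")
```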
The relation between these variables and the previous q-variables is defined through the following linear transformation:\n\t\begin{equation} \label{q_to_misner_var}\n\t\t\begin{cases}\n\t\t\tq^1=\frac{1}{2}\alpha-\frac{1}{2\sqrt{3}}\beta_{+}-\frac{1}{\sqrt{6}}\beta_{-}-\frac{1}{\sqrt{2}}\gamma\\ q^2=\frac{1}{2}\alpha-\frac{1}{2\sqrt{3}}\beta_{+}-\frac{1}{\sqrt{6}}\beta_{-}+\frac{1}{\sqrt{2}}\gamma \\\n\t\t\tq^3=\frac{1}{2}\alpha-\frac{1}{2\sqrt{3}}\beta_{+}+\sqrt{\frac{2}{3}}\beta_{-} \\\n\t\t\tq^5=\frac{1}{2}\alpha+\frac{\sqrt{3}}{2}\beta_{+}.\n\t\t\end{cases}\n\t\end{equation}\n\t\n\t\n\tBy using the Hamilton-Jacobi method and the Hamilton equation for the variable $\alpha$ - which represents the universe volume - in the synchronous reference frame, the standard classical Kasner solution for the $5D$ case can be recovered:\n\t\begin{equation} \label{PBM_kasner_metric}\n\t\t\begin{split}\n\t\t\tds^2&=-c^2dt^2+(t\/t_0)^{2k_1}(dx^1)^2+(t\/t_0)^{2k_2}(dx^2)^2\\\n\t\t\t&+(t\/t_0)^{2k_3}(dx^3)^2+(t\/t_0)^{2k_5}(dx^5)^2.\n\t\t\end{split}\n\t\end{equation}\n\t\n\t\noindent The $k$ parameters are the so-called Kasner exponents and they satisfy the following conditions:\n\t\begin{equation} \label{PBM_kasner_buondaries_standard}\n\t\t\begin{cases}\n\t\t\tk_1+k_2+k_3+k_5=1 \\\n\t\t\tk_1^2+k_2^2+k_3^2+k_5^2=1.\n\t\t\end{cases}\n\t\end{equation}\n\t\noindent In particular, if we assume isotropy in the three usual spatial dimensions, as observations suggest, that is, if we set $k_1=k_2=k_3$, the non-trivial solution of the previous system becomes:\n\t\begin{equation}\n\t\tk_1=k_2=k_3=\frac{1}{2} \quad k_5=-\frac{1}{2}.\n\t\end{equation}\n\t\noindent This means that while the three usual spatial dimensions expand, the fifth one collapses indefinitely. 
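The algebra behind this isotropic solution is elementary: substituting $k_5=1-3k$ into the quadratic condition gives $12k^2-6k=0$, whose non-trivial root reproduces the values above. An illustrative check:

```python
import math

# Impose isotropy k1 = k2 = k3 = k; the Kasner conditions become
#   3k + k5 = 1  and  3k^2 + k5^2 = 1,
# i.e., with k5 = 1 - 3k:  12k^2 - 6k = 0, with roots k = 0 and k = 1/2.
for k in (0.0, 0.5):
    k5 = 1 - 3 * k
    assert math.isclose(3 * k + k5, 1.0)
    assert math.isclose(3 * k**2 + k5**2, 1.0)

# The non-trivial root gives an expanding 3-space and a collapsing fifth dimension.
k, k5 = 0.5, -0.5
assert math.isclose(k5, 1 - 3 * k) and k > 0 > k5
print("k1=k2=k3=1/2, k5=-1/2 satisfies both Kasner conditions")
```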
\n\t\n\t\\medskip \n\t\n\tWe want now to introduce polymer formalism in Bianchi I dynamics.\n\tIn order to do this we choose to operate the substitutions \\eqref{PQ_p_var_approx} and \\eqref{PQ_p^2_var_approx_1} on the conjugate momentum $p_\\gamma$ of the Misner variable $\\gamma$, connected with the metric coefficient of the fifth dimension.\n\tThe new \\textit{polymerized} Hamiltonian will be:\n\t\\begin{equation} \\label{PBM_hamlit_poly_Bianchi}\n\t\tH_{B}^{poly}=e^{-\\alpha}\\biggl[-\\frac{1}{3}p^2_\\alpha+p^2_{+}+p^2_-+\\frac{\\hbar^2}{\\mu^2}\\sin[2](\\frac{\\mu}{\\hbar}p_\\gamma)\\biggr].\n\t\\end{equation}\n\t\n\t\\noindent By the exactly above procedure we find that the solution is still a Kasner solution, where the metric coefficients have a coordinate time power trend as in \\eqref{PBM_kasner_metric}, but their exponents, that is the Kasner indices, due to the quantum polymer modifications, satisfy different constraints:\n\t\n\t\\begin{equation}\n\t\t\\begin{cases}\n\t\t\tk_1+k_2+k_3+k_5=1 \\\\\n\t\t\tk_1^2+k_2^2+k_3^2+k_5^2=1-\\frac{3}{4}\\frac{\\hbar^2}{\\mu^2}\\frac{\\sin[4](\\frac{\\mu}{\\hbar}p_{\\gamma})}{\\sqrt{p^2_++p^2_-+\\frac{\\hbar^2}{\\mu^2}\\sin[2](\\frac{\\mu}{\\hbar}p_\\gamma)}}.\n\t\t\\end{cases}\n\t\\end{equation}\n\t\n\t\\noindent The second term of the right-hand side of the second condition is non-negative, so that we can restate the system as follows:\n\t\\begin{equation}\n\t\t\\begin{cases} \\label{PBM_kasner_ind_poly_conditions}\n\t\t\tk_1+k_2+k_3+k_5=1 \\\\\n\t\t\tk_1^2+k_2^2+k_3^2+k_5^2\\leq 1.\n\t\t\\end{cases}\n\t\\end{equation}\n\t\n\t\\noindent Assuming isotropy in the three usual spatial dimensions and introducing an order between exponents, in particular setting $k_5 cm.\n\t\\end{equation}\n\tAt this point we choose $\\mu$ equal to the Planck length and obtain: \n\t\\begin{equation} \\label{PG_poly_geo_L}\n\t\tL\\approx 2.377 \\cdot 10^{-31} \\> cm,\n\t\\end{equation}\n\twhich almost coincide with the result of the 
standard theory and can therefore account for the non-observability of the fifth dimension.\n\t\n\tThere are basically two reasons behind this particular choice of the polymer scale: \n\t\begin{itemize}\n\t\t\item it is a scale with a strong physical meaning;\n\t\t\item the $L\/\mu$ ratio, for this value of $\mu$, is large enough to ensure that some calculations can be carried out in the polymer continuum limit, through the assumptions discussed in \cite{Corichi1} and \cite{Corichi2}.\n\t\end{itemize}\n\t\n\t\noindent A further discussion of the function \eqref{PG_L_function} is postponed to the next subsection.\n\t\n\t\medskip \n\t\n\tThe charge function \eqref{PG_poly_charge_p5}, instead, can be rewritten, by means of the function $L(\mu)$ \eqref{PG_L_function}, as follows:\n\t\begin{equation} \label{PG_charge_distr_func_poly}\n\t\tq(\mu;n)=\frac{2\hbar\sqrt{G}}{\mu c}\sin\bigg(n\arcsin(\frac{\mu ec}{2\hbar\sqrt{G}})\bigg)\n\t\end{equation}\n\t\n\tand by setting the polymer scale $\mu$ equal to the Planck length, we obtain the symmetric distribution of positive and negative charges reported in Figure \ref{PG_planckian_charge_distribution_plt}.\n\t\n\t\noindent The number of modes $n$ in the considered interval is limited by the periodicity of the function itself and clearly depends on $\mu$.\n\tFor our choice of $\mu$ we find that $-73\leq n \leq 73$ (see again Figure \ref{PG_planckian_charge_distribution_plt}). 
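The bound $\abs{n}\leq 73$ can be reproduced by reading the periodicity restriction as the requirement that $n\arcsin(\mu ec\/2\hbar\sqrt{G})$ stay within one half-period $[0,\pi]$ of the sine in \eqref{PG_charge_distr_func_poly}; this reading, and the CGS constant values below, are our own illustrative assumptions:

```python
import math

# CGS values of the fundamental constants (illustrative input).
hbar = 1.0546e-27   # erg s
G    = 6.674e-8     # cm^3 g^-1 s^-2
e    = 4.803e-10    # esu
c    = 2.998e10     # cm/s
mu   = 1.616e-33    # cm, Planck length

# Argument step of the sine in q(mu; n) = (2 hbar sqrt(G) / mu c) sin(n * step).
step = math.asin(mu * e * c / (2 * hbar * math.sqrt(G)))

# Modes with n * step inside one half-period [0, pi] give the positive branch;
# by symmetry the negative branch has the same extent.
n_max = math.floor(math.pi / step)
print(f"step = {step:.5f} rad, n_max = {n_max}")
assert n_max == 73
```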
\n\t\n\t\\noindent It is worth noticing that for any fixed value of $n=n^*$ it is always possible to expand the sine function in correspondence to a suitable small cut-off parameter $\\mu$, accordingly to the inequality $n^*\\mu<<2\\hbar\\sqrt{G}\/ec$, where we also have expanded the function $L(\\mu)$.\n\t\n\t\\noindent In this limit we recover the standard expression for the charge as multiple of the elementary electron charge, at least for $n g.\n\t\\end{equation}\n\t\n\t\\noindent It does not depend on $\\mu$ and it is almost a Planckian mass, defined only by fundamental constants, equal to the one obtained in the standard framework (without the introduction of the \\textit{ad hoc} parameter $a$).\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=%\n\t\t0.5\\textwidth]{mass_poly}\t\n\t\t\\caption{Plot of masses distribution for the complex scalar field, for the value $\\mu_{Planck}\\approx 1.627\\cdot10^{-33} \\> cm$. It is possible to observe the oscillating profile of the function and the fitted Pion mass, placed in the minimum, as ground level of the Kaluza-Klein tower.}\n\t\t\\label{planckian_masses_distribution_plt}\n\t\\end{figure}\n\t\n\tIn Figure \\ref{KG_planckian_charge_distribution_plt} instead the corresponding charge distribution - which again can be put in the form \\eqref{PG_charge_distr_func_poly} - for $\\mu=\\mu_{Planck}$ is represented. \n\t\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=%\n\t\t0.5\\textwidth]{poly_charge_KG}\t\n\t\t\\caption{Plot of charges distribution for value $\\mu=\\mu_{Planck}$. 
Again, it is possible to observe the oscillating profile of the sequence and the negative and positive symmetric branches.}\n\t\t\label{KG_planckian_charge_distribution_plt}\n\t\end{figure}\n\t\n\t\noindent We observe, however, that only the electron charge ($n=1$) has a phenomenological correspondence, while the remaining points of the sequence have no clear interpretation.\n\tIn particular, according to these mass and charge distributions, the fundamental charge $e$ has to be associated with a particle of Planckian mass, while the corresponding charge of the pion would be several orders of magnitude smaller than the electron charge.\n\t\n\t\n\t\noindent Clearly, this peculiar charge-mass configuration is not phenomenologically consistent.\n\t\n\t\noindent Indeed, calculating the $q\/m$ ratio for the modes of the scalar field, we obtain:\n\t\begin{equation}\n\t\t\frac{q(\mu;n)}{m(\mu;n)}=\pm 2\sqrt{G}\approx 5.16\cdot 10^{-4} e.s.u.\/g,\n\t\end{equation}\n\twhere the $\pm$ sign is due to the sign of the electric charge.\n\t\n\t\noindent This value, which coincides with the upper limit of the $q\/m$ ratio \eqref{KK_q_over_m} of the standard classical case, depends neither on $\mu$ nor on $n$; rather, it is constant for every polymer scale and every mode, and no known particle satisfies such a relation. \n\t\n\t\n\t\section{Conclusion}\n\t\label{Conlusion}\n\t\n\tWe investigated the formulation of a five-dimensional Kaluza-Klein theory in the framework of Polymer Quantum Mechanics, viewed both in a semi-classical and in a quantum approach. \n\tThe polymer modifications have been implemented for the fifth coordinate only, on a semi-classical level in the spirit of the Ehrenfest theorem (the modification provides the dynamics of the quantum expectation values) and \n\tin a full quantum approach when the Klein-Gordon equation has been investigated. 
\n\t\n\t\\medskip \n\t\n\tWe started by applying the semi-classical polymer formulation to the evolution of the Bianchi I model, by showing that the corresponding Kasner solution can be taken in a form in which three scale factors isotropically expand and the remaining one is static and, in the considered model, it coincides with the compactified extra-dimension. \n\t\n\t\\noindent Then we studied the geodesic motion of a particle, starting with a Hamiltonian formulation (the only one in which the polymer formulation is viable) and then turning to a formalism based on the particle velocities.\n\tThis procedure allows, in analogy to the standard literature on this same subject, to identify the expression for the electric charge via the fifth momentum component of the particle. \n\tThe important consequence of this revised formulation consists of the overcoming of the problem of a too small charge to mass ratio to account in the model for any known elementary particle. \n\tIn fact, the revised constraint, due to the polymer relation between the fifth momentum component and the corresponding velocity, is in principle compatible with all the elementary particle predicted by the Standard Model. \n\t\n\t\\noindent Finally, we implemented a quantum polymer modification in the Klein-Gordon equation, by adopting a mixed representation of quantum mechanics (based on the coordinates for the usual four dimensions and the momentum for the extra one). \n\tThis study has the aim to revise the analysis in \\cite{Chodos-Detweiler} for a static (now available) extra-dimension, under a polymer prescription for the \n\tcompactified dimension physics. \n\t\n\tWe got the fundamental result that the tachyon mode, present in \\cite{Chodos-Detweiler} is now removed from the mass spectrum and that the obtained values for the boson mass can fit the values spanned in the Standard Model. 
\n\tActually, we arrived at a deformed morphology of the so-called Kaluza-Klein tower (the steps are no longer equispaced), but this revised structure allows us to avoid the only Planckian mode naturally present in the standard Kaluza-Klein formulation. \n\t\n\t\\medskip \n\t\n\tAll these results suggest that some of the puzzling questions affecting the viability of the Kaluza-Klein idea must be reanalysed phenomenologically, including the notion of a cut-off physics. \n\tIn fact, in the case of small dimensions, lying about two orders of magnitude above the Planck scale, the effects of the nearby cut-off should be unavoidable, and once its presence is made manifest a new paradigm can be established.\n\tIn other words, we argue that some limits of the geometrical unification theories are possibly due to the ultraviolet divergences that the gravitational field possesses; when these are somehow attenuated, as in the polymer scenario adopted here, the compactified dimension takes a more regular behaviour, which is reflected in the resolution of some inconsistencies of the underlying model.\n\t\n\tThe emergence of a static dimension in the $5D$ Kasner solution - which removes the need to deal with unphysical tachyonic modes - undoubtedly represents the simplest elucidation of this point of view. \n\t\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThis paper is devoted to the orbital stability of the two-component peakon solutions and the corresponding configuration of train-profiles.
The system we are concerned with is the following integrable two-component Novikov system \cite{Li}\n\begin{equation}\label{tcNK}\n\left\{\n\begin{aligned}\n&m_t+uvm_x+(2vu_x+uv_x)m=0,\quad m=u-u_{xx},\\\n&n_t+uvn_x+(2uv_x+vu_x)n=0, \quad n=v-v_{xx}.\n\end{aligned}\n\right.\n\end{equation}\nNote that this system reduces to the well-studied integrable Novikov equation \cite{HW, Nov}\n\begin{equation}\label{NOV}\nm_t+u^2m_x+3uu_{x}m=0, \quad m=u-u_{xx},\n\end{equation}\nwhen $v=u$.\n\n\nSince the celebrated work \cite{CH} by Camassa and Holm, in which they first discovered the non-smooth peaked soliton solutions (called peakons) to the Camassa-Holm (CH) equation, the existence of peakons and multi-peakons has been one of the significant properties of the integrable CH-type equations. The basic wave profile of a peakon takes the quite compact form\n\begin{equation}\label{profile}\n\varphi_c(x-ct)=a(c)\,e^{-|x-ct|},\n\end{equation}\nwhere $c\in \R$ is the wave speed and the amplitude $a(c)$ is a function related to $c$. Remarkably, the seemingly simple form of the peakon displays a deep relationship with some important phenomena of wave propagation in shallow water. Indeed, due to the discussion in \cite{cons, conesc, To}, the feature of the peakons that their profile \eqref{profile} is smooth, except at the crest where it is continuous but the lateral tangents differ, is similar to that of the so-called Stokes waves of greatest height, i.e. traveling waves of largest possible amplitude which are solutions to the governing equations for irrotational water waves. There are no closed forms available for these waves, and the peakons capture these essential features.
It is well-understood that the CH equation\n\\begin{equation*}\nm_t+um_x+2u_{x}m=0, \\quad m=u-u_{xx},\n\\end{equation*}\nand the Degasperis-Procesi (DP) equation\n\\begin{equation*}\nm_t+um_x+3u_{x}m=0, \\quad m=u-u_{xx},\n\\end{equation*}\nboth arise as the appropriate asymptotic approximations of the Euler equations for the free-surface shallow water waves in the moderately nonlinear regime \\cite{CL}, and admit the following peakon solutions ($c>0$, and anti-peakon for $c<0$) in the line \\cite{CH, CHH, CHT, DHK, Len1, Len2, LS}\n\\begin{equation}\\label{CHpeakons}\nu(t, x)=\\varphi_c(x-ct)=ce^{-\\left| x-ct\\right|}, \\qquad c\\neq 0.\n\\end{equation}\nOn the other hand, the peakons and the corresponding multi-peakons admit a rich mathematical structure related to the underlying integrable features. Indeed, both the CH and DP equations are integrable equations with Lax-pair formulations, and the inverse scattering approach can be used to derive explicitly these peakons \\eqref{CHpeakons} and the related multi-peakon solutions \\cite{BSS, ET, LS-0}. In the past ten years, two typical CH-type integrable equations with cubic nonlinearity that support peakon dynamics attracted much attention. 
One is the Novikov equation \eqref{NOV}, whose peakon solutions take the form \cite{HW}\n\begin{equation*}\nu(t, x)=\varphi_c(x-ct)=\sqrt{c}e^{-\left| x-ct\right|}, \qquad c> 0.\n\end{equation*}\nThe other is the modified Camassa-Holm (mCH) equation \cite{OR}\n\begin{equation*}\nm_t+\left( (u^2-u_x^2)m\right)_x=0, \quad m=u-u_{xx},\n\end{equation*}\nwhich has the following peakon structure \cite{gloq}\n\begin{equation*}\nu(t, x)=\varphi_c(x-ct)=\sqrt{\frac{3c}{2}}\,e^{-\left| x-ct\right|}, \qquad c> 0.\n\end{equation*}\nAlthough the physical background of the Novikov and mCH equations is not so clear, their peakon and multi-peakon dynamics are demonstrated to have several non-trivial properties in the framework of Lax integrability (see the discussion in \cite{CSz, HLS}, etc).\n\nIn this paper, our special concern for these peakon solutions is the issue of their stability, which stems from the fact that they are explicit weak solutions, in the sense of distributions, of the corresponding equations. The peakon equations exhibit different features in contrast with the classical integrable equations such as the KdV equation, the modified KdV equation and the Schr\\"{o}dinger equation, which admit smooth solitons, especially in the study of qualitative properties related to stability and instability. There is a huge number of papers studying the stability and instability of solitons for classical integrable systems. We do not attempt to exhaust the literature; one can refer to \cite{BEN, BON, CaLi, GSS-1, GSS-2, MM, MMT, pava, PW} and the references therein. Note that the peakons do not admit classical second-order derivatives and the linearized operators at peakons appear to be degenerate. So the classical methods based on spectral analysis are not available in the case of peakons.
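Concretely, for the basic profile $\varphi(x)=e^{-|x|}$ one has, in the sense of distributions,
\begin{equation*}
\varphi_{xx}=\varphi-2\delta, \qquad \text{so that} \qquad m=\varphi-\varphi_{xx}=2\delta,
\end{equation*}
where $\delta$ is the Dirac mass at the crest: the momentum density of a peakon is a point measure, so the equations can only be satisfied in the weak sense.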
In an intriguing paper, Constantin and Strauss \cite{CS} proved the orbital stability of peakons for the CH equation by discovering several precise optimal inequalities relating the maximum value of the approximate solutions to the quadratic and higher-order conserved quantities (see also \cite{CM} for a variational argument and \cite{Len} for the periodic case). This approach in \cite{CS} was further developed by Dika and Molinet \cite{DM} to study orbital stability of the train of peakons for the CH equation. Orbital stability of peakons and the train of peakons of other CH-type integrable equations was investigated in \cite{Kab, LL, LLQ-1, LLQ, LLOQ, QLL}. More recently, the issue of instability of peakons for the CH equation and the Novikov equation is addressed in \cite{NP} and \cite{CP}, respectively.\n\nCompared with the rich results on the stability or instability properties of peakons for the CH-type integrable equations of scalar form, the corresponding work on multi-component peakon profiles is rather scarce. It is worth noticing that multi-component CH-type integrable systems that are verified to admit peakon structures as distributional solutions remain very rare up to now. A celebrated integrable multi-component extension of the CH equation is the so-called two-component CH system \cite{CLZ, CI, OR}\n\begin{eqnarray}\label{tcCH}\n\left\{\n\begin{aligned}\n&\; m_t+um_x+2u_xm+\rho\rho_x=0, \quad m=u-u_{xx},\\\n&\rho_t+(\rho u)_x=0.\n\end{aligned}\n\right.\n\end{eqnarray}\nHowever, system \eqref{tcCH} does not admit the peaked solitons \cite{HNT}. The orbital stability of a kind of reduced peakon profile was studied via a variational method \cite{CLLQ}.
The Novikov equation \eqref{NOV} admits the following two-component integrable extension, called the Geng-Xue system \cite{GX},\n\begin{equation}\label{GX}\n\left\{\n\begin{aligned}\n&m_t+uvm_x+3vu_xm=0,\quad m=u-u_{xx},\\\n&n_t+uvn_x+3uv_xn=0, \quad n=v-v_{xx},\n\end{aligned}\n\right.\n\end{equation}\nwhich has received much attention recently \cite{LiLiu, LS-1, LS-2}. Although system \eqref{GX} supports multi-peakon structures, they are derived in the framework of the Lax-pair formulation of \eqref{GX} (see \cite{LS-2}) and are not weak solutions in the sense of distributions. To the best of our knowledge, there is no work studying orbital stability of multi-component peakons for integrable multi-component CH-type equations.\n\nRecently, another kind of two-component integrable generalization \eqref{tcNK} of the Novikov equation \eqref{NOV} was introduced in \cite{Li} and we find that this system admits two-component peakon structures, which are given by\n\begin{equation}\label{solitons}\n\big(u(t, x), v(t, x)\big)=\big(\varphi_c(x-ct), \psi_c(x-ct)\big)=\big(a\varphi(x-ct), b\psi(x-ct)\big)=\big(ae^{-\left| x-ct\right|}, be^{-\left| x-ct\right|}\big),\n\end{equation}\ntraveling at constant speed $c=ab\neq 0$. It is demonstrated that these peakons \eqref{solitons} are indeed weak solutions of \eqref{tcNK} in the distributional form\n\begin{equation}\label{weakform}\n\left\{\n\begin{aligned}\n&u_t+uvu_x+P_x*\left( \frac{1}{2}u_x^2v+uu_xv_x+u^2v\right)+\frac 12 P*(u_x^2v_x)=0,\\\n&v_t+uvv_x+P_x*\left( \frac{1}{2}v_x^2u+vv_xu_x+v^2u\right)+\frac 12 P*(v_x^2u_x)=0,\n \end{aligned}\n\right.\n\end{equation}\nwhere $P(x)=e^{-\left| x\right|}\/2$ and $*$ stands for convolution with respect to the spatial variable $x\in \mathbb R$.
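The kernel $P(x)=e^{-\left| x\right|}\/2$ is the Green's function of the operator $1-\partial_x^2$ on the line, which is what makes the convolution form \eqref{weakform} equivalent to \eqref{tcNK} for smooth solutions. A minimal numerical sketch of this inversion property (the Gaussian source and the grid are arbitrary choices):

```python
import numpy as np

# P(x) = e^{-|x|}/2 is the Green's function of 1 - d^2/dx^2 on the line:
# applying 1 - d^2/dx^2 to g = P * f should return the source f.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

f = np.exp(-x**2)                      # smooth, rapidly decaying test source
P = 0.5 * np.exp(-np.abs(x))           # Green's function kernel

g = np.convolve(P, f, mode="same") * dx        # g = P * f (discrete convolution)
gxx = np.gradient(np.gradient(g, x), x)        # second derivative of g
residual = g - gxx - f                         # should vanish up to grid error

print(float(np.max(np.abs(residual[200:-200]))))  # small quadrature/FD error
```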
Here a question arises: are these multi-component peakons and the corresponding train-profiles for system \eqref{tcNK} stable in the energy space?\n\nSystem \eqref{tcNK} admits the following conserved densities\n\begin{equation*}\nE_{0}[u,v]=\int_{\mathbb R}(mn)^{\frac 13}\,dx,\n\end{equation*}\n\begin{equation*}\nE_{u}[u]=\int_{\mathbb{R}}\left( u^2+u_x^2\right) \, dx, \quad E_{v}[v]=\int_{\mathbb{R}}\left( v^2+v_x^2\right) \, dx, \quad H[u,v]=\int_{\mathbb{R}}\big(uv+u_xv_x\big) \, dx\n\end{equation*}\nand\n\begin{equation*}\nF[u,v]=\int_{\mathbb{R}}\left( u^2v^2+\frac{1}{3}u^2v_x^2+\frac{1}{3}v^2u_x^2+\frac{4}{3}uvu_xv_x-\frac{1}{3}u_x^2v_x^2\right) \, dx,\n\end{equation*}\nwhich will play a prominent role in proving stability of peakons, while the corresponding three conserved quantities of the Novikov equation \eqref{NOV} are\n\begin{equation*}\nH_0[u]=\int_{\mathbb R}m^{\frac 23}\,dx, \;\; E[u]=\int_{\mathbb{R}}{(u^2+u_x^2)} \, dx,\;\;\nF[u]=\int_{\mathbb{R}}{\left (u^4+2u^2u_x^2-\frac{1}{3}u_x^4 \right )}\, dx.\n\end{equation*}\nIf we choose $u=v$, then we have $H_0=E_0[u,u]$, $E[u]=E_{u}[u]=E_{v}[v]=H[u,v]$ and $F[u,v]=F[u]$. Due to the existence of the $H^1$ conservation laws for the $u$- and $v$-components, as well as the mutual interaction conservation law $H[u, v]$, it is natural to expect stability for the two-component Novikov system in the energy space with the $H^{1}\times H^{1}$-norm. In general, a small perturbation of a solitary wave can yield another one with a different speed and phase shift. It is appropriate to define the orbit of traveling-wave solutions $(\varphi_{c}, \, \psi_{c})$ to be the set $U(\varphi, \psi)=\{(a\varphi(\cdot+x_{1}), b\psi(\cdot+x_{2})), x_{1} \in \mathbb{R}, x_{2} \in \mathbb{R}\}$.
However, if $x_{1}\neq x_{2}$, the functionals $F[\varphi_{c}, \, \psi_{c}]$ and $H[\varphi_{c}, \, \psi_{c}]$ are not conserved in the time evolution for $(\varphi_{c}, \, \psi_{c})$ in this set $U(\varphi, \psi)$. Thus, we consider here a suitable orbit of the traveling-wave solutions $\varphi_{c}$ and $\psi_{c}$, namely the set $U_0(\varphi, \psi)=\{(a\varphi(\cdot+x_{0}), b\psi(\cdot+x_{0})), x_{0} \in \mathbb{R}\}$, and the peakon solutions of the two-component Novikov equation are called orbitally stable if a wave starting close to the peakon remains close to the orbit $U_0(\varphi, \psi)$ at all later existence times.\n\nThe first main result is stated as follows. Here, we only consider the case of peakons traveling to the right, i.e. the case of $a>0$, $b>0$ and then $c=ab>0$ in \eqref{solitons}.\n\n\begin{theorem}\label{thm1.1}\n\, Let $(\varphi_c, \, \psi_c)$ be the peaked solitons in \eqref{solitons}, traveling with speed $c=ab>0$. Then $(\varphi_c, \, \psi_c)$ are orbitally stable in the following sense.
Assume that $u_0,v_0 \\in H^s(\\mathbb{R})$ for some $s \\geq 3$, $(1-\\partial_x^2)u_0(x)$ and $(1-\\partial_x^2)v_0(x)$ are nonnegative, and there is a $\\delta>0$ such that\n\\begin{align*}\n\\left \\| (u_0,v_0)-(\\varphi_c,\\psi_c) \\right \\|_{H^1(\\mathbb{R}) \\times H^1(\\mathbb{R})}\\leq \\left \\| u_0-\\varphi_c \\right \\|_{H^1(\\mathbb{R})} +\\left \\| v_0-\\psi_c \\right \\|_{H^1(\\mathbb{R})} < \\delta.\n\\end{align*}\nThen the corresponding solution $(u(t, x), v(t, x))$ of the Cauchy problem for the two-component Novikov system \\eqref{tcNK} with the initial data $u(0, x)=u_0(x)$ and $v(0, x)=v_0(x)$ satisfies\n\\begin{eqnarray*}\n\\begin{aligned}\n&\\sup_{t \\in [0,T)}\\left \\| \\big{(}u(t, \\cdot), v(t, \\cdot)\\big{)}-\\big{(}\\varphi_c(\\cdot-\\xi(t)), \\psi_c(\\cdot-\\xi(t))\\big{)} \\right \\|_{H^1(\\mathbb{R}) \\times H^1(\\mathbb{R})}\\\\\n&\\leq \\sup_{t \\in [0,T)}{\\left \\| u(t, \\cdot)-\\varphi_c(\\cdot-\\xi(t)) \\right \\|_{H^1(\\mathbb{R})}+\\left \\| v(t, \\cdot)-\\psi_c(\\cdot-\\xi(t)) \\right \\|_{H^1(\\mathbb{R})}} < A\\delta^\\frac14,\n\\end{aligned}\n\\end{eqnarray*}\nwhere $T>0$ is the maximal existence time, $\\xi(t) \\in \\mathbb{R}$ is the maximum point of the function $u(t, x)v(t, x)$, the constant $A$ depends only on $a$, $b$ as well as the norms $\\left \\| u_0 \\right \\|_{H^s(\\mathbb{R})}$ and $\\left \\| v_0 \\right \\|_{H^s(\\mathbb{R})}$.\n\\end{theorem}\n\nTo prove orbital stability of the two-component peakons $(\\varphi_c, \\, \\psi_c)$, some new insights are developed. We aim to obtain for each component $u$ and $v$ the dynamical estimates $|u(t, \\xi(t))-a|$ and $|v(t, \\xi(t))-b|$ along some trajectory $t \\mapsto \\xi(t)$, where $a$ and $b$ are the maximal value of the component $\\varphi_c$ and $\\psi_c$, respectively. 
Here, due to the nonlinear interaction between $u$ and $v$ involved in system \eqref{tcNK}, the key obstacle is how to find the suitable location of $\xi(t)$ in order to derive the precise estimates for $|u(t, \xi(t))-a|$ and $|v(t, \xi(t))-b|$ (note that in the case of scalar peakons such $\xi(t)$ is always chosen to be located at the maximal point of the perturbed solution, which is no longer valid in the multi-component case considered here, and $\xi(t)$ must change according to the characteristic speed of the nonlinear interaction $uv$). Moreover, the dynamical energy identities and energy inequalities should involve the nonlinear interaction of the two components. In addition, the conservation law $F[u,v]$ is much more complicated than $F[u]$ of the Novikov equation. Therefore, the stability issue of the two-component peakon solutions is more subtle. To overcome the difficulties, two observations will be crucial. System \eqref{tcNK} not only has the separated $H^1$ conserved quantities $\int (u^2+u_x^2) dx$ and $\int(v^2+v_x^2) dx$ with which one can derive the pointwise estimates separately for each component $u$ and $v$, but also the second-order interacting conserved quantity $H[u,v]=\int (uv+u_xv_x) dx$ with which we are motivated to find the exact location of $\xi(t)$ as the maximal point of the product function $u(t, \cdot)v(t, \cdot)$ of the two components. This argument is quite different from the case for the scalar CH-type equations. On the other hand, new optimal energy identities for $H[u, v]$ and $F[u, v]$ are established. The new insight is that the precise control of the two components is involved in one optimal energy identity. This point is also different from the scalar case.
Based on these observations together with corresponding refined analysis, we are able to prove orbital stability of the two-component peakons $(\varphi_c, \, \psi_c)$ on the line ${\mathbb R}$.\n\n\begin{remark}\nFor the Geng-Xue system \eqref{GX}, even though the peakon solutions in the Lax-pair sense are considered, system \eqref{GX} does not admit sufficient conserved quantities to establish the corresponding estimates for the stability.\n\end{remark}\n\nFor the issue of orbital stability of train-profiles of these two-component peakons, we have the following result.\n\n\begin{theorem}\label{trainsstable}\nLet there be given $N$ velocities $c_{1},c_{2},\cdots, c_{N}$ such that $0<c_{1}<c_{2}<\cdots<c_{N}$. There exist $A>0$, $L_0>0$ and $\epsilon_{0}>0$ such that if the initial data $(u_0,v_0) \in H^s(\mathbb{R})\times H^s(\mathbb{R})$ for some $s \geq 3$ with $(1-\partial_x^2)u_0(x)$ and $(1-\partial_x^2)v_0(x)$ being nonnegative, satisfy\n\begin{align}\label{initialdata-1}\n{\left \| u_0- \sum_{i=1}^N{\varphi_c(\cdot - z_i^0)} \right \|_{H^1}}+{\left \| v_0- \sum_{i=1}^N{\psi_c(\cdot - z_i^0)} \right \|_{H^1}}\leq {\epsilon}\n\end{align}\nfor some $0<\epsilon <\epsilon_0$ and $ z_i^0-z_{i-1}^0 \geq L$ with $L>L_{0}$, then there exist ${x}_1(t),...,{x}_N(t)$ such that the corresponding strong solution $(u(t, x), v(t, x))$ satisfies\n\begin{align*}\n{\left \| u(t, \cdot)- \sum_{i=1}^N{\varphi_c(\cdot - {x}_i(t))} \right \|_{H^1}}+{\left \| v(t, \cdot)- \sum_{i=1}^N{\psi_c(\cdot - {x}_i(t))} \right \|_{H^1}} \leq A\left( \epsilon^{ \frac{1}{4}}+L^{- \frac{1}{8}}\right),\n\end{align*}\n$\forall t \in [0,T)$, where $x_{j}(t)-x_{j-1}(t)>L\/2$.\n\end{theorem}\n\nIn general, two main ingredients in the proof of orbital stability for the train-profiles of peakons are involved \cite{DM,LLQ,MMT}.
One is orbital stability of the single peakons, and the other is the property of almost monotonicity of the local energy on the right-hand side of the peakons. For the two-component peakons, more difficulties come from the interaction of the two components $u(t, x)$ and $v(t, x)$. The first one is how to establish inequalities among the localized conserved quantities to verify the orbital stability of the two-component peakons separately. For the train-profiles of peakons, we need to apply $N$ inequalities to control $2N$ estimates. The second one is to establish a monotonicity result for the functionals $\mathcal{J}^{u,v}_{j,k}(t)$ since we cannot identify the sign of the term $u_xv_x$ in the conserved density $H[u,v]$. To overcome the difficulties, we use \nthe conserved densities $E_u[u]$, $E_v[v]$, $H[u,v]$ and $F[u,v]$, and establish delicate inequalities relating the conserved densities to the maximal values of the two components $u(t,x)$ and $v(t,x)$. In the case of the train-profile of two-component peakons, we combine the proof for the single peakons, modulation theory, accurate estimates on the conserved densities $E_u[u]$, $E_v[v]$, $H[u, v]$ and $F[u,v]$, and an induction argument to obtain the desired result.\n\nThe remainder of the paper is organized as follows. In Section 2, we provide a brief discussion of the integrability, conservation laws and the sign-invariant property of $m(t, x)$ and $n(t, x)$ of system \eqref{tcNK}, and the local well-posedness result for the Cauchy problem of system \eqref{tcNK}. In Section 3, we prove orbital stability of single peakons on the line. Finally, in Section 4, we verify orbital stability of the train of peakons.\n\n\n\n\n\n\n\section{Preliminaries}\n\nIn the present section, the issue of well-posedness is discussed.
First of all, we say that a pair of functions $(u,v)\in C([0,T);H^1(\mathbb R))\times C([0,T);H^1(\mathbb R))$ is a solution of system \eqref{tcNK} if $(u,v)$ is a solution of \eqref{weakform} in the sense of distributions and $E_{u}[u]$, $E_{v}[v]$, $H[u, v]$ and $F[u, v]$ are conserved quantities.\n\nConsider the following Cauchy problem of system \eqref{tcNK} on the whole line $\R$\n\begin{equation}\label{tcNK-1}\n\left\{\n\begin{aligned}\n&m_t+uvm_x+(2vu_x+uv_x)m=0,\quad m=u-u_{xx},\\\n&n_t+uvn_x+(2uv_x+vu_x)n=0, \quad n=v-v_{xx}, \quad t>0, \; x\in \R,\\\n&u(0,x)=u_0(x), \;\;v(0,x)=v_0(x), \quad x\in \R.\n\end{aligned}\n\right.\n\end{equation}\n\nFirst, similar to the results for the two-component CH system given in \cite{FQ}, one can establish the following local well-posedness result.\n\begin{theorem}\label{th-2.1}\nGiven $z_0=(u_0, v_0)^T\in H^s(\R)\times H^s(\R) (s>3\/2)$, there exists a maximal $T=T(\| z_0\|_{H^s(\R)\times H^s(\R)})>0$ and a unique strong solution $z=(u, v)^T$ to \eqref{tcNK-1} such that\n\begin{equation*}\nz=z(\cdot, z_0)\in C([0, T);H^s(\R)\times H^s(\R))\cap C^1([0,T); H^{s-1}(\R)\times H^{s-1}(\R)).\n\end{equation*}\nIn addition, the solution depends continuously on the initial data, i.e. the mapping\n\begin{equation*}\nz_0\rightarrow z(\cdot, z_0): H^s(\R)\times H^s(\R) \rightarrow C([0, T); H^s(\R)\times H^s(\R))\cap C^1([0, T); H^{s-1}(\R)\times H^{s-1}(\R))\n\end{equation*}\nis continuous.
Furthermore, the quantities $E_{u}[u]$, $E_{v}[v]$, $H[u, v]$ and $F[u, v]$ are all conserved along the solution $z=(u, v)^T$.\n\end{theorem}\n\nConsider the flow governed by $(uv)(t, x)$\n\begin{equation}\label{firstflow}\n\left\{\n\begin{aligned}\n&\frac{d}{dt}q(t, x)=(uv)(t, q(t, x)),\quad x\in \R, \quad t\in [0, T),\\\n&q(0, x)=x,\quad x\in \R.\n\end{aligned}\n\right.\n\end{equation}\nJust as in the case of the Novikov equation \eqref{NOV}, the following lemma can be proved.\n\begin{lemma}\label{Flow}\nAssume that $(u_{0},v_0)\in H^s(\mathbb R)\times H^s(\mathbb R)$ with $s>3\/2$, and let $T>0$ be the maximal existence time of the corresponding strong solution $(u, v)$ to the Cauchy problem of system \eqref{tcNK-1}. Then the problem \eqref{firstflow} has a unique solution $q(t, x)\in C^1([0, T)\times \R)$. Furthermore, the map $q(t,\cdot)$ is an increasing diffeomorphism over $\R$ with\n\begin{eqnarray}\label{diffflow}\n\begin{aligned}\nq_{x}(t,x)=\exp\left(\int^t_{0}(uv)_x(s,q(s,x))ds\right),\quad (t,x)\in [0,T)\times \mathbb R.\n\end{aligned}\n\end{eqnarray}\n\end{lemma}\n\n\begin{lemma}\label{nonnegative}\nLet $u_0, v_0\in H^s(\mathbb R)$, $s\geq 3$. Assume that $m_0=u_0-u_{0xx}$ and $n_0=v_0-v_{0xx}$ are nonnegative on the line.\nThen for the corresponding strong solution $u, v\in C([0,T); H^s(\R))\cap C^1([0,T); H^{s-1}(\R))$ of the Cauchy problem of the two-component Novikov system \eqref{tcNK-1} with the initial data $u_0,\,v_0$, we have for all $t\in [0, T)$, $m(t, x)$ and $n(t, x)$ are both nonnegative functions.
In addition, $u(t, \cdot) \geq 0$, $v(t, \cdot) \geq 0$ and $|u_{x}(t, \cdot)|\leq u(t, \cdot)$, $|v_{x}(t, \cdot)|\leq v(t, \cdot)$ on the line.\n\end{lemma}\n\begin{proof}\nIt follows from the Cauchy problem \eqref{tcNK-1} of the two-component Novikov system that along the flow \eqref{firstflow}, $m$ and $n$ satisfy\n\begin{equation}\label{weakformFLOW}\n\left\{\n\begin{aligned}\n&m'+(2vu_x+uv_x)(t,q(t,x))m(t,q(t,x))=0,\\\n&n'+(2uv_x+vu_x)(t,q(t,x))n(t,q(t,x))=0,\n\end{aligned}\n\right.\n\end{equation}\nwhere $'$ denotes the derivative with respect to $t$ along the flow \eqref{firstflow}. Denote\n\begin{eqnarray*}\n\gamma_1(t,x)=\exp\left(\int^t_{0}(2vu_x+uv_x)(s,q(s,x))ds\right), \; \gamma_2(t,x)=\exp\left(\int^t_{0}(2uv_x+vu_x)(s,q(s,x))ds\right).\n\end{eqnarray*}\nThen they satisfy\n\begin{eqnarray*}\n\gamma'_{1}(t,x)=(2vu_x+uv_x)(t,q(t,x))\gamma_1,\quad \gamma'_{2}(t,x)=(2uv_x+vu_x)(t,q(t,x))\gamma_2.\n\end{eqnarray*}\nLet\n\begin{eqnarray*}\n\begin{aligned}\n\tilde{m}(t,x)=\gamma_{1}(t,x)m(t,q(t,x)), \quad \tilde{n}(t,x)=\gamma_{2}(t,x)n(t,q(t,x)).\n\end{aligned}\n\end{eqnarray*}\nThen equations \eqref{weakformFLOW} become\n\begin{eqnarray}\n\begin{aligned}\n\tilde{m}'(t,x)=0, \quad \tilde{n}'(t,x)=0.\n\end{aligned}\n\end{eqnarray}\nThe equations\n\begin{eqnarray*}\n\begin{aligned}\n\tilde{m}(t,x)=m_0 \quad \text{and} \quad \tilde{n}(t,x)=n_0\n\end{aligned}\n\end{eqnarray*}\nlead to\n\begin{eqnarray*}\n\begin{aligned}\n&m(t,q(t,x))=\exp\left(-\int^t_{0}(2vu_x+uv_x)(s,q(s,x))ds\right)m_0,\\\n&n(t,q(t,x))=\exp\left(-\int^t_{0}(2uv_x+vu_x)(s,q(s,x))ds\right)n_0.\n\end{aligned}\n\end{eqnarray*}\nThus, for all $t\in [0, T)$, we have $m(t, \cdot) \geq 0$, $n(t, \cdot) \geq 0$.
And then $u(t, \\cdot) \\geq 0$, $v(t, \\cdot) \\geq 0$.\n\nFormally regarding $m(x)=u(x)-u_{xx}(x)$, it holds that for all $x\\in \\R$,\n\\begin{eqnarray*}\n\\begin{aligned}\nu(x)=\\frac{e^{-x}}{2}\\int^{x}_{-\\infty}e^{y}m(y)dy+\\frac{e^{x}}{2}\\int^{\\infty}_{x}e^{-y}m(y)dy\n\\end{aligned}\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\n\\begin{aligned}\nu_{x}(x)=-\\frac{e^{-x}}{2}\\int^{x}_{-\\infty}e^{y}m(y)dy+\\frac{e^{x}}{2}\\int^{\\infty}_{x}e^{-y}m(y)dy.\n\\end{aligned}\n\\end{eqnarray*}\nThen we infer that\n\\begin{eqnarray*}\n\\begin{aligned}\nu(x)\\geq |u_{x}(x)|, \\; \\forall x\\in \\mathbb R.\n\\end{aligned}\n\\end{eqnarray*}\nSimilarly, we find\n\\begin{eqnarray*}\n\\begin{aligned}\nv(x)\\geq |v_{x}(x)|, \\; \\forall x\\in \\mathbb R.\n\\end{aligned}\n\\end{eqnarray*}\nThis completes the proof of the lemma.\n\\end{proof}\n\n\n\n\n\n\n\\section{Stability of two-component peakons}\n\nIn this section, we prove Theorem 1.1, which will be based on a series of lemmas. Note that the assumptions on the initial profile guarantee the existence of a unique positive solution for the Cauchy problem \\eqref{tcNK-1} of the two-component Novikov system. 
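The Green's-function representation of $u$ and $u_x$ used in the proof above, and the resulting bound $|u_x|\leq u$ for a nonnegative density $m$, can be illustrated numerically; the following sketch (the Gaussian profile for $m$ is an arbitrary choice) reconstructs $u$ and $u_x$ from the two half-line integrals:

```python
import numpy as np

# Given a nonnegative density m = u - u_xx, reconstruct
#   u(x)  = (e^{-x}/2) I_-(x) + (e^{x}/2) I_+(x),
#   u_x(x)= -(e^{-x}/2) I_-(x) + (e^{x}/2) I_+(x),
# where I_-(x) = int_{-inf}^x e^y m dy and I_+(x) = int_x^{inf} e^{-y} m dy,
# and verify the pointwise bound |u_x| <= u.
x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]
m = np.exp(-x**2)                      # arbitrary nonnegative density

def cumtrapz0(f):
    """Cumulative trapezoidal integral, equal to 0 at the left endpoint."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[:-1] + f[1:]) * dx)))

I_minus = cumtrapz0(np.exp(x) * m)     # integral from -inf (grid edge) to x
c = cumtrapz0(np.exp(-x) * m)
I_plus = c[-1] - c                     # integral from x to +inf (grid edge)

u = 0.5 * (np.exp(-x) * I_minus + np.exp(x) * I_plus)
ux = 0.5 * (-np.exp(-x) * I_minus + np.exp(x) * I_plus)

print(bool(np.all(u >= 0.0)), bool(np.all(np.abs(ux) <= u + 1e-12)))
# prints: True True
```

Since $I_-$ and $I_+$ are nonnegative whenever $m\geq 0$, both $u-u_x=e^{-x}I_-$ and $u+u_x=e^{x}I_+$ are nonnegative, which is exactly the mechanism behind the bound.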
In general, for $a>0$ and $b>0$, the profile functions of peakon solutions $\varphi_c(x)=ae^{-\left| x\right|}$ and $\psi_c(x)=be^{-\left| x\right|}$ are in $H^1(\R)$ and have peaks at $x=0$, and thus\n\begin{equation*}\n\max_{x \in \mathbb{R}}{\varphi_c(x)}=\varphi_c(0)=a \quad \mathrm{and} \quad \max_{x \in \mathbb{R}}{\psi_c(x)}=\psi_c(0)=b.\n\end{equation*}\nA direct calculation gives\n\begin{equation*}\nE_{u}[\varphi_c(x)]=\left \| \varphi_c \right \|^2_{H^1}=2a^2, \qquad E_{v}[\psi_c(x)]=\left \| \psi_c \right \|^2_{H^1}=2b^2\n\end{equation*}\nand\n\begin{equation*}\nH[\varphi_c(x),\psi_c(x)]=2ab, \qquad F[\varphi_c(x),\psi_c(x)]=\frac43 \, a^2b^2.\n\end{equation*}\n\nDue to the conservation of the $H^1$-norm of each component $u$ and $v$, the following pointwise identities still hold for the two-component Novikov system as in the scalar CH and Novikov cases.\n\begin{lemma}\label{lem1.1}\nFor any $u,v \in H^1(\mathbb{R})$ and $\xi \in \mathbb{R}$, we have\n\begin{eqnarray}\label{energyESI}\n\begin{aligned}\n&E_{u}[u]-E_{u}[\varphi_c]=\left \| u- \varphi_c(\cdot - \xi) \right \|^2_{H^1(\mathbb{R})}+4a(u(\xi)-a),\\\n&E_{v}[v]-E_{v}[\psi_c]=\left \| v- \psi_c(\cdot - \xi) \right \|^2_{H^1(\mathbb{R})}+4b(v(\xi)-b).\n\end{aligned}\n\end{eqnarray}\n\end{lemma}\n\n\nIn the following two lemmas, two energy identities relating some kind of critical values of $u$ and $v$ to the invariants $H[u, v]$ and $F[u, v]$ are established. Consider $0\not\equiv u,\,v\in H^{s}(\R)$, $s\geq 3$, and $u, v\geq 0$. Then $u,v\in C^2$ due to the Sobolev embedding theorem. Since $u$ and $v$ decay at infinity, the product function $u(x)v(x)$ must attain its global maximum at some point.
Thus, in the following, set for some $\\xi\\in \\R$\n\\begin{eqnarray}\\label{maxpoint}\nM=\\max_{x\\in \\R}\\{u(x)v(x)\\}=u(\\xi)v(\\xi).\n\\end{eqnarray}\n\n\\begin{lemma}\\label{lem1.2}\nLet $0\\not\\equiv u,\\,v\\in H^{s}(\\R)$, $s\\geq 3$ and $u, v\\geq 0$. Define the functions $g_1(x)$ and $g_2(x)$ by\n\\begin{equation}\\label{functiong1}\ng_1(x)=\n\\left\\{\n\\begin{aligned}\n&u(x)-u_{x}(x), \\quad x<\\xi,\\\\\n&u(x)+u_{x}(x), \\quad x>\\xi,\n \\end{aligned}\n\\right.\n\\end{equation}\nand\n\\begin{equation}\\label{functiong2}\ng_2(x)=\n\\left\\{\n\\begin{aligned}\n&v(x)-v_{x}(x), \\quad x<\\xi,\\\\\n&v(x)+v_{x}(x), \\quad x>\\xi.\n \\end{aligned}\n\\right.\n\\end{equation}\nThen\n\\begin{eqnarray}\\label{g1g2ESI}\n\\begin{aligned}\n\\int_{\\mathbb{R}}{g_{1}(x)g_{2}(x)}dx=H[u,v]-2M.\n\\end{aligned}\n\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\nTo show \\eqref{g1g2ESI}, we evaluate the integral of $g_1(x)g_2(x)$ on $\\mathbb R$. Thus,\n\\begin{eqnarray*}\n\\begin{aligned}\n&\\int_{\\mathbb{R}}{g_{1}(x)g_{2}(x)}dx\\\\\n&=\\int_{-\\infty}^{\\xi}{\\left(u(x)-u_{x}(x)\\right)\\left(v(x)-v_{x}(x)\\right)}dx+\\int_{\\xi}^{\\infty}{\\left(u(x)+u_{x}(x)\\right)\\left(v(x)+v_{x}(x)\\right)}\\,dx\\\\\n&=\\int_{-\\infty}^{\\xi}{\\left(uv-(uv)_x+u_xv_x\\right)}dx+\\int_{\\xi}^{\\infty}{\\left(uv+(uv)_x+u_xv_x\\right)}\\,dx\\\\\n&=H[u,v]-2M.\n\\end{aligned}\n\\end{eqnarray*}\n\\end{proof}\n\nThe construction of the auxiliary function $h(x)$ in the following lemma is crucial in the proof of stability of peakons, which is different from the scalar cases of CH and DP equations. 
This new defined function is a nontrivial refinement of the case in the Novikov equation.\n\n\\begin{lemma}\\label{lem1.3}\nWith the same assumptions and notation as in Lemma \\ref{lem1.2}, define the function $h$ by\n\\begin{equation}\\label{functionalh}\nh(x)=\n\\left\\{\n\\begin{aligned}\n&uv-\\frac{1}{3}(uv)_{x}-\\frac{1}{3}u_{x}v_{x}, \\quad x<\\xi,\\\\\n&uv+\\frac{1}{3}(uv)_{x}-\\frac{1}{3}u_{x}v_{x}, \\quad x>\\xi.\n \\end{aligned}\n\\right.\n\\end{equation}\nThen\n\\begin{eqnarray}\\label{Finalesi}\n\\begin{aligned}\n\\int_{\\mathbb{R}}{h(x)g_{1}(x)g_{2}(x)}dx=F[u,v]-\\frac{4}{3}\\,M^2.\n\\end{aligned}\n\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\nTo show \\eqref{Finalesi}, we evaluate the integral of $h(x)g_{1}(x)g_{2}(x)$ on $\\mathbb R$. Thus,\n\\begin{eqnarray*}\n\\begin{aligned}\n&\\int_{\\mathbb{R}}{h(x)g_{1}(x)g_{2}(x)}dx\\\\\n&=\\int_{-\\infty}^{\\xi}\\left[uv-\\frac{1}{3}(uv)_{x}-\\frac{1}{3}u_{x}v_{x}\\right]\\left[uv-(uv)_{x}+u_{x}v_{x}\\right]\\,dx\\\\\n&\\qquad+\\int_{\\xi}^{\\infty}\\left[uv+\\frac{1}{3}(uv)_{x}-\\frac{1}{3}u_{x}v_{x}\\right]\\left[uv+(uv)_{x}+u_{x}v_{x}\\right]\\,dx\\triangleq I+\\Pi.\n\\end{aligned}\n\\end{eqnarray*}\nWe do 
the computation:\n\begin{eqnarray*}\n\begin{aligned}\nI&=\int_{-\infty}^{\xi}\left[uv-\frac{1}{3}(uv)_{x}-\frac{1}{3}u_{x}v_{x}\right]\left[uv-(uv)_{x}+u_{x}v_{x}\right]\,dx\\\n&=\int_{-\infty}^{\xi}\left(u^2v^2+\frac{1}{3}u^2v_x^2+\frac{1}{3}v^2u_x^2+\frac{4}{3}uvu_xv_x-\frac{1}{3}u_x^2v_x^2\right)\,dx-\frac{4}{3}\,\int_{-\infty}^{\xi}(uv)(uv)_{x}\,dx\\\n&=\int_{-\infty}^{\xi}\left(u^2v^2+\frac{1}{3}u^2v_x^2+\frac{1}{3}v^2u_x^2+\frac{4}{3}uvu_xv_x-\frac{1}{3}u_x^2v_x^2\right)\,dx-\frac{2}{3}M^2.\n\end{aligned}\n\end{eqnarray*}\nSimilarly,\n\begin{eqnarray*}\n\begin{aligned}\n\Pi=\int_{\xi}^{\infty}\left(u^2v^2+\frac{1}{3}u^2v_x^2+\frac{1}{3}v^2u_x^2+\frac{4}{3}uvu_xv_x-\frac{1}{3}u_x^2v_x^2\right)\,dx-\frac{2}{3}M^2.\n\end{aligned}\n\end{eqnarray*}\nCombining the above, we have\n\begin{eqnarray*}\n\begin{aligned}\n&\int_{\mathbb{R}}{h(x)g_{1}(x)g_{2}(x)}dx\\\n&=\int_{-\infty}^{\infty}\left(u^2v^2+\frac{1}{3}u^2v_x^2+\frac{1}{3}v^2u_x^2+\frac{4}{3}uvu_xv_x-\frac{1}{3}u_x^2v_x^2\right)\,dx-\frac{4}{3}\,M^2=F[u,v]-\frac{4}{3}\,M^2.\n\end{aligned}\n\end{eqnarray*}\n\end{proof}\n\nWith the two energy identities \eqref{g1g2ESI} and \eqref{Finalesi} in hand, one can derive the following delicate relation between the second-order conserved quantity $H[u, v]$ and the higher-order conserved quantity $F[u, v]$ for the strong solution $(u, v)$.\n\n\begin{lemma}\label{lem1.4}\nAssume that $0\not\equiv u_0,\,v_0\in H^s$, $s\geq 3$ and $m_0=u_0-u_{0xx}\geq 0$, $n_0=v_0-v_{0xx}\geq 0$.
For the corresponding strong solution $(u(t, x),\\,v(t, x))$ with initial data $(u_0,\\,v_0)$ in the lifespan $[0,\\,T)$, there holds\n\\begin{eqnarray}\\label{functionalESI}\n\\begin{aligned}\nF[u, v]-\\frac{4}{3}M(t)H[u, v]+\\frac{4}{3}M(t)^2 \\leq 0, \\quad \\forall t\\in [0,\\,T),\n\\end{aligned}\n\\end{eqnarray}\nwhere $M(t)=\\max_{x\\in \\R}\\{u(t, x)v(t, x)\\}=u(t, \\xi(t))v(t, \\xi(t))$ for some trajectory $\\xi(t)\\in \\R$ in $[0,\\,T)$.\n\\end{lemma}\n\n\\begin{proof}\nFirst, by the sign-invariant property, the solution $(u(t, x),\\,v(t, x))$ satisfies $m(t, x)=u(t, x)-\\partial^2_{x}u(t, x)\\geq 0$ and $n(t, x)=v(t, x)-\\partial^2_{x}v(t, x)\\geq 0$ for all $(t, x)\\in [0,\\,T)\\times \\R$. It follows that $(u(t, x),\\,v(t, x))$ is a positive solution and fulfills all the conditions assumed in Lemmas \\ref{lem1.2} and \\ref{lem1.3}. Hence, we have the following energy identities in dynamical form on $[0,\\,T)$:\n\\begin{equation}\\label{H(t)}\n\\int_{\\mathbb{R}}{g_{1}(t, x)g_{2}(t, x)}dx=H[u, v]-2M(t)\n\\end{equation}\nand\n\\begin{equation}\\label{F(t)}\n\\int_{\\mathbb{R}}{h(t, x)g_{1}(t, x)g_{2}(t, x)}dx=F[u, v]-\\frac{4}{3}\\,M(t)^2.\n\\end{equation}\n\nWe now claim that for any $(t, x)\\in [0,\\,T)\\times \\R$\n\\begin{equation*}\nh(t, x)=\n\\left\\{\n\\begin{aligned}\n&\\left( uv-\\frac{1}{3} (uv)_{x}-\\frac{1}{3}u_{x}v_{x}\\right)(t, x), \\quad x<\\xi(t),\\\\\n&\\left( uv+\\frac{1}{3} (uv)_x -\\frac{1}{3}u_{x}v_{x}\\right)(t, x), \\quad x>\\xi(t)\n \\end{aligned}\n\\right.\n\\leq \\frac{4}{3}u(t, x)v(t, x).\n\\end{equation*}\nIn fact, due to the definition of $h(t, x)$, it follows from the fact $u(t, x)\\geq |u_{x}(t, x)|$ and $v(t, x)\\geq |v_{x}(t, x)|$ that\n\\begin{equation*}\nh(t, x)= \\frac{4}{3}u(t, x)v(t, x)-\\frac{1}{3}(u\\pm u_{x}(t, x))(v\\pm v_{x}(t, x))\\leq \\frac{4}{3}u(t, x)v(t, x).\n\\end{equation*}\nThus, the combination of the above inequalities yields\n\\begin{eqnarray*}\nh(t, x)\\leq \\frac{4}{3}u(t, x)v(t, x)\\leq \\frac{4}{3}\\max_{x\\in 
\\R}\\{u(t, x)v(t, x)\\}=\\frac{4}{3}M(t), \\quad \\forall (t, x)\\in [0,\\,T)\\times \\R.\n\\end{eqnarray*}\n\nNow, using estimates $|u_x|\\leq u$ and $|v_x|\\leq v$ again in the expressions \\eqref{functiong1} and \\eqref{functiong2}, we obtain \\eqref{functionalESI} from \\eqref{H(t)} and \\eqref{F(t)}.\n\\end{proof}\n\nIn the following lemma, we study the perturbation of the conserved quantities around the profile functions $\\varphi_c(x)$ and $\\psi_c(x)$ of peakon solutions, under the case of time independence.\n\n\\begin{lemma}\\label{lem1.5}\nFor $u, v \\in H^{s}(\\R)$, $s\\geq 3$, if $\\|u-\\varphi_{c}\\|_{H^{1}(\\mathbb{R})} < \\delta$ and $\\|v-\\psi_{c}\\|_{H^{1}(\\mathbb{R})} < \\delta$ with $0<\\delta<1\/2$, then\n\\begin{equation*}\n\\left |E_{u}[u]-E_{u}[\\varphi_{c}]\\right |<2\\sqrt{2}a\\delta+\\delta^2, \\quad \\left |E_{v}[v]-E_{u}[\\psi_{c}]\\right|<2\\sqrt{2}b\\delta+\\delta^2\n\\end{equation*}\nand\n\\begin{equation*}\n\\left|H[u,v]-H[\\varphi_{c},\\psi_{c}]\\right|<2\\sqrt{2}(a+b)\\delta +6\\delta^{2}, \\quad \\left|F[u,v]-F[\\varphi_{c},\\psi_{c}]\\right|0$ is a constant depending on $a$, $b$, $\\|u\\|_{H^s}$ and $\\|v\\|_{H^s}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\tilde{u}=u-\\varphi_{c}$ and $\\tilde{v}=v-\\psi_{c}$, for convenience. 
Since $\\|\\tilde{u}\\|_{H^{1}(\\mathbb{R})} < \\delta$ and $\\|\\tilde{v}\\|_{H^{1}(\\mathbb{R})} < \\delta$, it follows that\n\\begin{eqnarray*}\n\\begin{aligned}\n\\left|E_{u}[u]-E_{u}[\\varphi_{c}]\\right|=&\\left|\\|u_{0}\\|^{2}_{H^{1}(\\mathbb R)}-\\|\\varphi_{c}\\|^{2}_{H^{1}(\\mathbb R)}\\right|\\\\\n=&\\left|\\|u_{0}\\|_{H^{1}(\\mathbb R)}-\\|\\varphi_{c}\\|_{H^{1}(\\mathbb R)}\\right|\\left|\\|u_{0}\\|_{H^{1}(\\mathbb R)}+\\|\\varphi_{c}\\|_{H^{1}(\\mathbb R)}\\right|\\\\\n\\leq &\\|\\tilde{u}_{0}\\|_{H^{1}(\\mathbb{R})} \\left(\\|\\tilde{u}_{0}\\|_{H^{1}(\\mathbb R)}+2\\|\\varphi_{c}\\|_{H^{1}(\\mathbb R)}\\right)\\leq 2\\sqrt{2}\\,a\\delta+\\delta^2.\n\\end{aligned}\n\\end{eqnarray*}\nSimilarly, we have\n\\begin{align*}\n\\left|E_{v}[v]-E_{v}[\\psi_{c}]\\right|<2\\sqrt{2}\\,b\\delta+\\delta^2.\n\\end{align*}\n\nNext, we now estimate\n\\begin{eqnarray*}\n\\begin{aligned}\n\\Big|H[u, v]-&H[\\varphi_{c},\\psi_{c}]\\Big|=\\left|\\int_{\\mathbb R}\\left(uv+u_xv_x\\right)\\,dx-\\int_{\\mathbb R}\\left(\\varphi_{c}\\psi_{c}+\\varphi'_{c}\\psi'_{c}\\right)dx\\right|\\\\\n=&\\left|\\int_{\\mathbb R}\\left(\\tilde{u}\\tilde{v}+u\\tilde{v}+v\\tilde{u}\\right)\\,dx+\\int_{\\mathbb R}\\left[(u_x-\\varphi'_{c})(v_x-\\psi'_{c})+u_{x}(v_{x}-\\psi'_{c})+v_{x}(u_{x}-\\varphi'_{c})\\right]\\,dx\\right|\\\\\n\\leq & \\int_{\\mathbb R}\\left|\\tilde{u}\\tilde{v}\\right|\\,dx+\\int_{\\mathbb R}\\left|\\tilde{u}v\\right|\\,dx+\\int_{\\mathbb R}\\left|(v-\\varphi_{c})u\\right|\\,dx\\\\\n&\\quad +\\int_{\\mathbb R}\\left|\\tilde{u}'\\tilde{v}'\\right|dx+\\int_{\\mathbb R}\\left|\\tilde{u}'v_{x}\\right|\\,dx+\\int_{\\mathbb R}\\left|\\tilde{v}'u_{x}\\right|\\,dx.\n\\end{aligned}\n\\end{eqnarray*}\nUsing the H$\\mathrm{\\ddot{o}}$lder inequality\n\\begin{eqnarray*}\n\\int_{\\mathbb R}\\left|\\tilde{u}v\\right|\\,dx \\leq \\left(\\int_{\\mathbb R}\\tilde{u}^{2}\\,dx\\right)^{\\frac12}\\left(\\int_{\\mathbb R}v^2\\,dx\\right)^{\\frac12}\\leq 
\\|\\tilde{u}\\|_{H^{1}(\\mathbb{R})}\\left(\\|\\tilde{v}\\|_{H^{1}(\\mathbb{R})}+\\|\\varphi_{c}\\|_{H^{1}(\\mathbb{R})}\\right) \\leq \\delta^{2}+\\sqrt{2}\\,a \\delta,\n\\end{eqnarray*}\nwe deduce that\n\\begin{align*}\n\\left|H[u, v]-H[\\varphi_{c},\\psi_{c}]\\right|<2\\sqrt{2}(a+b)\\delta +6\\delta^{2}.\n\\end{align*}\n\nFinally, we estimate\n\\begin{eqnarray*}\n\\begin{aligned}\n\\Big|F&[u, v]-F[\\varphi_{c},\\psi_{c}]\\Big|=\\Bigg|\\int_{\\mathbb{R}}\\left(u^2v^2+\\frac{1}{3}u^2v_{x}^2+\\frac{1}{3}v^2u_{x}^2+\\frac{4}{3}uvu_{x}v_{x}-\\frac{1}{3}u_{x}^2v_{x}^2\\right)\\,dx\\\\\n&\\qquad\\qquad\\qquad\\qquad\\quad\\; -\\int_{\\mathbb{R}}\\left[\\varphi_{c}^{2}\\psi_{c}^{2}+\\frac{1}{3}\\varphi_{c}^{2}(\\psi'_{c})^{2}+\\frac{1}{3}(\\varphi'_{c})^{2}\\psi_{c}^{2}+\\frac{4}{3}\\varphi_{c}\\psi_{c}\\varphi'_{c}\\psi'_{c}-\\frac{1}{3}(\\varphi'_{c})^{2}(\\psi'_{c})^{2}\\right]\\,dx\\Bigg|\\\\\n&\\leq \\int_{\\mathbb{R}}\\left|u^2v^2-\\varphi_{c}^{2}\\psi_{c}^{2}\\right|\\,dx+\\frac{1}{3}\\,\\int_{\\mathbb{R}}\\left|u^2v_{x}^2-\\varphi_{c}^{2}(\\psi'_{c})^{2}\\right|\\,dx+\\frac{1}{3}\\,\\int_{\\mathbb{R}}\\left|v^2u_{x}^2-(\\varphi'_{c})^{2}\\psi_{c}^{2}\\right|\\,dx\\\\\n&\\quad\\quad+\\frac{4}{3}\\,\\left|\\int_{\\mathbb{R}}\\big(uvu_{x}v_{x}-\\varphi_{c}\\psi_{c}\\varphi'_{c}\\psi'_{c}\\big)dx\\right|+\\frac{1}{3}\\,\\left|\\int_{\\mathbb{R}}\\left[u_{x}^2v_{x}^2-(\\varphi'_{c})^{2}(\\psi'_{c})^{2}\\right]dx\\right|\\\\\n&\\triangleq I_{1}+\\frac{1}{3}I_{2}+\\frac{1}{3}I_{3}+\\frac{4}{3}I_{4}+\\frac{1}{3}I_{5}.\n\\end{aligned}\n\\end{eqnarray*}\nFor the first term $I_{1}$, we obtain\n\\begin{eqnarray*}\n\\begin{aligned}\nI_{1}& \\leq \\int_{\\mathbb R}\\left|\\left(u^2-\\varphi^{2}_{c}\\right)v^2\\right|\\,dx+\\int_{\\mathbb R}\\left|\\left(v^2-\\psi^{2}_{c}\\right)\\varphi^{2}_{c}\\right|\\,dx\\\\\n&\\leq \\|v\\|^{2}_{L^\\infty}\\left(\\int_{\\mathbb R}\\tilde{u}^2\\,dx\\right)^{\\frac 12}\\left(\\int_{\\mathbb R}(u+\\varphi_{c})^2\\,dx\\right)^{\\frac 
12}+\\|\\varphi_{c}\\|^{2}_{L^\\infty}\\left(\\int_{\\mathbb R}\\tilde{v}^2\\,dx\\right)^{\\frac 12}\\left(\\int_{\\mathbb R}(v+\\psi_{c})^2\\,dx\\right)^{\\frac 12}\\\\\n&\\leq \\|v\\|^{2}_{L^\\infty}\\|\\tilde{u}\\|_{H^1}\\left(\\|\\tilde{u}\\|_{H^1}+2\\|\\varphi_{c}\\|_{H^1}\\right)+\\|\\varphi_{c}\\|^{2}_{L^\\infty}\\|\\tilde{v}\\|_{H^1}\\left(\\|\\tilde{v}\\|_{H^1}+2\\|\\psi_{c}\\|_{H^1}\\right)\\\\\n&\\leq \\frac{1}{2}\\left(\\delta+\\sqrt{2}b\\right)^2\\delta\\left(\\delta+2\\sqrt{2}a\\right)+a^2\\delta\\left(\\delta+2\\sqrt{2}b\\right)\\leq 2\\sqrt{2}ab(a+b)\\delta+O(\\delta^2).\n\\end{aligned}\n\\end{eqnarray*}\nFor the second term $I_{2}$, we have\n\\begin{align*}\nI_{2}&\\leq \\int_{\\mathbb R}\\left|\\left(u^2-\\varphi^{2}_{c}\\right)v_{x}^2\\right|\\,dx+\\int_{\\mathbb R}\\left|\\left(v_{x}^2-\\psi'^{2}_{c}\\right)\\varphi^{2}_{c}\\right|\\,dx\\\\\n&\\leq \\|v_{x}\\|^{2}_{L^\\infty}\\left(\\int_{\\mathbb R}(\\tilde{u})^2\\,dx\\right)^{\\frac 12}\\left(\\int_{\\mathbb R}(u+\\varphi_{c})^2\\,dx\\right)^{\\frac 12}\n+\\|\\varphi_{c}\\|^{2}_{L^\\infty}\\left(\\int_{\\mathbb R}(\\tilde{v}_{x})^2\\,dx\\right)^{\\frac 12}\\left(\\int_{\\mathbb R}(v_{x}+\\psi'_{c})^2\\,dx\\right)^{\\frac 12}\\\\\n&\\leq \\|v_{x}\\|^{2}_{L^\\infty}\\|\\tilde{u}\\|_{H^1}\\left(\\|\\tilde{u}\\|_{H^1}+2\\|\\varphi_{c}\\|_{H^1}\\right)+\\|\\varphi_{c}\\|^{2}_{L^\\infty}\\|\\tilde{v}\\|_{H^1}\\left(\\|\\tilde{v}\\|_{H^1}+2\\|\\psi_{c}\\|_{H^1}\\right)\\\\\n&\\leq \\frac{1}{2}\\left(\\delta+\\sqrt{2}b\\right)^2\\delta\\left(\\delta+2\\sqrt{2}a\\right)+a^2\\delta\\left(\\delta+2\\sqrt{2}b\\right)\\leq 2\\sqrt{2}ab(a+b)\\delta+O(\\delta^2).\n\\end{align*}\nSimilarly, for the term $I_{3}$, we get\n\\begin{align*}\nI_{3}< 2\\sqrt{2}ab(a+b)\\delta+O(\\delta^{2}).\n\\end{align*}\nFor the fourth term $I_{4}$, we estimate\n\\begin{align*}\nI_{4} & \\leq 
\\int_{\\mathbb{R}}\\left|\\tilde{u}vu_{x}v_{x}\\right|\\,dx+\\int_{\\mathbb{R}}\\left|\\varphi_{c}\\tilde{v}vu_{x}v_{x}\\right|\\,dx+\\int_{\\mathbb{R}}\\left|\\varphi_{c}\\psi_{c}\\tilde{u}_{x}v_{x}\\right|\\,dx+\\int_{\\mathbb{R}}\\left|\\varphi_{c}\\psi_{c}\\varphi'_{c}\\tilde{v}_{x}\\right|\\,dx\\\\\n& \\leq \\frac{1}{2}\\|\\tilde{u}\\|_{L^{\\infty}}\\|v\\|_{L^{\\infty}}\\int_{\\mathbb{R}}(u^{2}_{x}+v^{2}_{x})\\,dx+\\frac{1}{2}\\|\\tilde{v}\\|_{L^{\\infty}}\\|u\\|_{L^{\\infty}}\\int_{\\mathbb{R}}(u^{2}_{x}+v^{2}_{x})\\,dx\\\\\n&\\;\\;\\; +\\|\\varphi_{c}\\|_{L^{\\infty}}\\|\\psi_{c}\\|_{L^{\\infty}}\\left(\\int_{\\mathbb{R}}\\tilde{u}_{x}^{2}\\,dx\\right)^{\\frac{1}{2}}\\left(\\int_{\\mathbb{R}}v^{2}_{x}\\,dx\\right)^{\\frac{1}{2}}+\\|\\varphi_{c}\\|_{L^{\\infty}}\\|\\psi_{c}\\|_{L^{\\infty}}\\left(\\int_{\\mathbb{R}}\\tilde{v}_{x}^{2}\\,dx\\right)^{\\frac{1}{2}}\\left(\\int_{\\mathbb{R}}u^{2}_{x}\\,dx\\right)^{\\frac{1}{2}}\\\\\n& \\leq \\frac{1}{2}\\left(\\|\\tilde{u}\\|_{H^{1}(\\mathbb{R})}\\|v\\|_{H^{1}(\\mathbb{R})}+\\|\\tilde{v}\\|_{H^{1}(\\mathbb{R})}\\|u\\|_{H^{1}(\\mathbb{R})}\\right)\\left(\\|v\\|^{2}_{H^{1}(\\mathbb{R})}+\\|u\\|^{2}_{H^{1}(\\mathbb{R})}\\right)\\\\\n&\\quad +\\frac{1}{2}\\|\\varphi_{c}\\|_{H^{1}(\\mathbb{R})}\\|\\psi_{c}\\|_{H^{1}(\\mathbb{R})}\\left(\\|\\tilde{u}\\|_{H^{1}(\\mathbb{R})}\\|v\\|_{H^{1}(\\mathbb{R})}+\\|\\tilde{v}\\|_{H^{1}(\\mathbb{R})}\\|u\\|_{H^{1}(\\mathbb{R})}\\right)\\\\\n& \\leq \\sqrt{2}(a+b)(a^{2}+b^{2}+ab)\\delta+O(\\delta^{2}).\n\\end{align*}\nFor the fifth term $I_{5}$, we have\n\\begin{align*}\nI_{5} & \\leq \\left|\\int_{\\mathbb{R}}u_{x}^{2}\\left(v_{x}^2-(\\psi'_{c})^{2}\\right)\\,dx\\right|+\\left|\\int_{\\mathbb{R}}(\\psi'_{c})^{2}(u^{2}_{x}-(\\varphi'_{c})^{2})\\,dx\\right|\\\\\n& \\leq 
\\|u\\|^{2}_{L^{\\infty}}\\left(\\int_{\\mathbb{R}}\\tilde{v}_{x}^{2}\\,dx\\right)^{\\frac{1}{2}}\\left(\\int_{\\mathbb{R}}(v_{x}+\\psi'_{c})^{2}\\,dx\\right)^{\\frac{1}{2}}+\\|\\psi'_{c}\\|^{2}_{L^{\\infty}}\\left(\\int_{\\mathbb{R}}\\tilde{u}_{x}^{2}dx\\right)^{\\frac{1}{2}}\\left(\\int_{\\mathbb{R}}(u_{x}+\\varphi'_{c})^{2}dx\\right)^{\\frac{1}{2}}\\\\\n& \\leq \\frac{1}{2}\\left[\\|u\\|^{2}_{H^{1}(\\mathbb{R})}\\|\\tilde{v}\\|_{H^{1}(\\mathbb{R})}\\left(\\|\\tilde{v}\\|_{H^{1}(\\mathbb{R})}+2\\|\\psi_{c}\\|_{H^{1}(\\mathbb{R})}\\right)+\\|\\psi'_{c}\\|^{2}_{H^{1}(\\mathbb{R})}\\|\\tilde{u}\\|_{H^{1}(\\mathbb{R})}\\left(\\|\\tilde{u}\\|_{H^{1}(\\mathbb{R})}+2\\|\\varphi_{c}\\|_{H^{1}(\\mathbb{R})}\\right)\\right]\\\\\n& \\leq 2\\sqrt{2}ab(a+b)\\delta+O(\\delta^{2}).\n\\end{align*}\nAccordingly, for $0<\\delta<1\/2$, we have\n\\begin{align*}\n\\left|F(u,v)-F(\\varphi_{c},\\psi_{c})\\right|\\leq C\\delta +O(\\delta^{2}),\n\\end{align*}\nwhere $C$ is a constant depending on $a$, $b$, $\\|u\\|_{H^s}$ and $\\|v\\|_{H^s}$, which completes the proof of this lemma.\n\\end{proof}\n\nNow, we are in the position to prove that the strong solution satisfies the novel error estimates at some kind of critical point under the assumption of small perturbation of initial data around the profile of peakon solutions.\n\n\\begin{lemma}\nAssume that $(u(t, x),\\,v(t, x))$, $t\\in [0,\\,T)$, is the corresponding strong solution of the Cauchy problem \\eqref{tcNK-1} with initial data $(u_0(x),\\,v_0(x))$ satisfying $0\\not\\equiv u_0,\\,v_0\\in H^s(\\R)$, $s\\geq 3$ and $m_0=u_0-u_{0xx}\\geq 0$, $n_0=v_0-v_{0xx}\\geq 0$. 
If $(u_0(x),\\,v_0(x))$ satisfies\n\\begin{equation*}\n\\|u_0-\\varphi_{c}\\|_{H^{1}(\\mathbb{R})} < \\delta \\quad \\mathrm{and} \\quad \\|v_0-\\psi_{c}\\|_{H^{1}(\\mathbb{R})} < \\delta,\n\\end{equation*}\nwith $0<\\delta<1\/2$, then there exists a constant $C>0$ such that\n\\begin{eqnarray*}\n|u(t, \\xi(t))-a|0$ and $L>0$, we define the following neighborhood of all the sums of N peakons of speed $c_1,...,c_N$ with spatial shifts $x_i$ that satisfy $x_i-x_{i-1} \\geq L$,\n\\begin{align*}\nU( \\alpha , L) = \\Big\\{(u , v) & \\in H^{1}(\\mathbb{R}) \\times H^{1}(\\mathbb{R}), \\\\\n& \\inf_{x_i-x_{i-1} \\geq L}{{\\Big\\| u- \\sum_{i=1}^N{\\varphi_{c_i}(\\cdot - {x}_i)} \\Big\\|_{H^1}}+{\\Big\\| v- \\sum_{i=1}^N{\\psi_{c_i}(\\cdot - {x}_i)} \\Big\\|_{H^1}}} \\leq \\alpha \\Big\\}.\n\\end{align*}\n\nBy the continuity of the map $t \\mapsto (u(t),v(t))$ from $[0,T[$ into $H^1(\\mathbb{R})\\times H^1(\\mathbb{R})$, to prove Theorem 1.2 it suffices to verify that there exist $ A>0$, $\\epsilon_0>0$ and $L_0>0$ such that for any $L>L_0$ and $0<\\epsilon<\\epsilon_0$, if $(u_{0},v_{0})$ satisfies \\eqref{initialdata-1} and for some $0L_{0}$, with $A$, $\\epsilon_{0}$ and $L_{0}$ to be specified later.\n\\subsection{Control of the distance between the peakons}\nIn this subsection we shall prove that the different bumps of $u$ and $v$ that are individually close to their own peakons and get away from each others as time is increasing. This is crucial in our analysis since we do not know how to manage strong interactions.\n\n\\begin{prop}\\label{implicitfunction}\n Assume $(u_{0}, v_{0})$ satisfies \\eqref{initialdata-1}, and there exist $\\alpha_{0} > 0$, $L_{0} > 0$ and $C_{0} > 0$ such that for all $0 < \\alpha < \\alpha_{0}$, $L >L_{0} >0$. 
If $(u(t),v(t)) \\in U(\\alpha, \\frac {L}{2})$ on $[0, t_{0}]$ for some $0< t_{0} L\/2$, we set\n\\begin{align}\\label{sumofsolitons}\nR_{Z}(\\cdot)=\\sum_{i=1}^{N}\\varphi_{c_{i}}(\\cdot - z_{i}) \\quad and \\quad S_{Z}(\\cdot)=\\sum_{i=1}^{N}\\psi_{c_{i}}(\\cdot - z_{i}).\n\\end{align}\nFor $\\alpha_{0} > 0$, $L_{0} > 0$, we define the function\n\\begin{align*}\nY:(-\\alpha ,\\alpha )^{N} \\times B_{H^{1} \\times H^{1}}((R_{Z},S_{Z}),\\alpha) \\mapsto {\\mathbb{R}}^{N}, \\\\\n(y_{1},...,y_{N},u,v) \\mapsto (Y^{1}(y_{1},...,y_{N},u,v),...,Y^{N}(y_{1},...,y_{N},u,v))\n\\end{align*}\nwith\n\\begin{align*}\nY^{i}(y_{1},...,y_{N},u,v)= &\\int_{\\mathbb{R}}\\bigg{(}(u-\\sum_{i=1}^{N}\\varphi_{c_{j}}(\\cdot - z_{j} -y_{j})) \\partial_{x}\\varphi_{c_{i}}(\\cdot - z_{i} -y_{i}) \\\\\n&\\qquad \\quad+ (v-\\sum_{i=1}^{N}\\psi_{c_{j}}(\\cdot - z_{j} -y_{j})) \\partial_{x}\\psi_{c_{i}}(\\cdot - z_{i} -y_{i})\\bigg{)},\n\\end{align*}\n$Y$ is clearly of class $C^{1}$. For $i=1,...,N,$\n\\begin{align*}\n\\frac{\\partial{Y}^{i}}{\\partial y_{i}}(y_{1},...,y_{N},u,v)=&\\int_{\\mathbb{R}}\\bigg{(}(u_{x}-\\sum_{i \\ne j}^{N}\\partial_{x} \\varphi_{c_{j}}(\\cdot - z_{j} -y_{j})) \\partial_{x}\\varphi_{c_{i}}(\\cdot - z_{i} -y_{i}) \\\\\n&\\qquad\\qquad + (v_{x}-\\sum_{i \\ne j}^{N}\\partial_{x} \\psi_{c_{j}}(\\cdot - z_{j} -y_{j})) \\partial_{x}\\psi_{c_{i}}(\\cdot - z_{i} -y_{i})\\bigg{)},\n\\end{align*}\nand $\\forall j\\ne i,$\n\\begin{align*}\n\\frac{\\partial{Y}^{i}}{\\partial y_{j}}(y_{1},...,y_{N},u,v)=&\\int_{\\mathbb{R}}\\bigg{(}\\partial_{x} \\varphi_{c_{j}}(\\cdot - z_{j} -y_{j}) \\partial_{x}\\varphi_{c_{i}}(\\cdot - z_{i} -y_{i}) \\\\\n&\\qquad\\qquad + \\partial_{x} \\psi_{c_{j}}(\\cdot - z_{j} -y_{j}) \\partial_{x}\\psi_{c_{i}}(\\cdot - z_{i} -y_{i})\\bigg{)}.\n\\end{align*}\nHence\n\\begin{align*}\n\\frac{\\partial{Y}^{i}}{\\partial y_{i}}(0,...,0,R_{Z},S_{Z})=\\| \\partial_{x}\\varphi_{c_{i}} \\|_{L^2}^2+\\| \\partial_{x}\\psi_{c_{i}} \\|_{L^2}^2 \\geq 
a_{1}^2+b_{1}^{2},\n\\end{align*}\nand, for all $j\\ne i$, using the exponential decay of $\\varphi_{c}$ and that $z_{i}-z_{i-1} >L $ we infer for $L_{0}$ large enough that (recall that $L>L_{0}$),\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial{Y}^{i}}{\\partial y_{j}}(0,...,0,R_{Z},S_{Z})=&\\int_{\\mathbb{R}}\\bigg{(}\\partial_{x} \\varphi_{c_{j}}(\\cdot - z_{j} ) \\partial_{x}\\varphi_{c_{i}}(\\cdot - z_{i} ) + \\partial_{x} \\psi_{c_{j}}(\\cdot - z_{j}) \\partial_{x}\\psi_{c_{i}}(\\cdot - z_{i})\\bigg{)}\\\\\n&\\leq O(e^{-\\frac{L}{4}}).\n\\end{aligned}\n\\end{equation*}\nWe conclude that, for $L>0$ large enough, $D_{(y_{1},...,y_{N})}Y(0,...,0,R_{Z},S_{Z})=D+P$ where $D$ is an invertible diagonal matrix with $\\|D^{-1}\\|\\leq (a^{2}_{1}+b^{2}_{1})^{-1}$ and $\\|P\\|\\leq O(e^{-L\/4})$. Hence there exists $L_{0}>0$ such that for $L>L_{0}$, $D_{(y_{1},...,y_{N})}Y(0,...,0,R_{Z},S_{Z})$ is invertible with an inverse matrix of norm smaller than $2(a^{2}_{1}+b^{2}_{1})^{-1}$. The implicit function theorem implies that there exist $\\beta_{0}>0$ and $C^{1}$ functions $y_{1},y_{2},...,y_{N}$ from $B_{H^1 \\times H^1}((R_{Z},S_{Z}),\\beta_{0})$ to a neighborhood of $(0,0,...,0)$, which are uniquely determined such that\n\\begin{align*}\nY(y_{1}(u,v),...,y_{N}(u,v),u,v)=0 \\quad {\\rm for}\\;\\; {\\rm all} \\quad (u,v) \\in B((R_{Z},S_{Z}),\\beta_{0}).\n\\end{align*}\nIn particular, there exists $C_{0}>0$ such that if $(u,v) \\in B((R_{Z},S_{Z}),\\beta)$, with $0<\\beta \\leq \\beta_{0}$, then\n\\begin{align}\\label{implicitfunc}\n\\sum^{N}_{i=1}{\\big{|}y_{i}(u,v)\\big{|}}\\leq C_{0}\\beta.\n\\end{align}\nNote that $\\beta_{0}$ and $C_{0}$ depend only on $a_{1}$, $b_{1}$ and $L_{0}$ and not on the point $(z_{1},...,z_{N})$. For $(u,v) \\in B((R_{Z},S_{Z}),\\beta_{0})$ we set $\\tilde{x}_{i}(u,v)=z_{i}+y_{i}(u,v)$. 
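For completeness, we recall the Neumann-series argument behind the inversion of $D+P$ above: whenever $\\|D^{-1}\\|\\,\\|P\\|\\leq \\frac{1}{2}$, which holds here for $L_{0}$ large enough since $\\|P\\|\\leq O(e^{-L\/4})$, one has\n\\begin{equation*}\n(D+P)^{-1}=\\sum_{k\\geq 0}\\big(-D^{-1}P\\big)^{k}D^{-1}, \\qquad \\big\\|(D+P)^{-1}\\big\\|\\leq \\frac{\\|D^{-1}\\|}{1-\\|D^{-1}\\|\\,\\|P\\|}\\leq 2\\,\\|D^{-1}\\|,\n\\end{equation*}\nwhich is the source of the factor $2$ in the bound on the inverse matrix. 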
Assuming that $\\beta_{0}\\leq L_{0}\/(8C_{0})$, $\\tilde{x}_{1},...,\\tilde{x}_{N}$ are thus $C^{1}$ functions on $B((R_{Z},S_{Z}),\\beta)$ satisfying\n\\begin{align}\\label{dist-1}\n\\tilde{x}_{j}(u,v)-\\tilde{x}_{j-1}(u,v)>\\frac{L}{2}-2C_{0}\\beta>\\frac{L}{4}.\n\\end{align}\nFor $L>L_{0}$ and $0<\\alpha<\\alpha_{0}<\\beta_{0}\/2$ to be chosen later, we define the modulation of $(u,v) \\in U(\\alpha,L\/2)$ in the following way, the trajectory of $(u,v)$ is covered by a finite number of open balls:\n\\begin{align*}\n\\Big{\\{}\\big{(}u(t),v(t)\\big{)}, t \\in [0,t_{0}]\\Big{\\}} \\subset \\bigcup_{k=1,...,M}B\\big{(}(R_{Z^{k}},S_{Z^{k}}),2\\alpha\\big{)}.\n\\end{align*}\nIt is worth noticing that, since $0<\\alpha<\\alpha_{0}<\\beta_{0}\/2$, the functions $\\tilde{x}_{i}(u,v)$ are uniquely determined for $(u,v)\\in B((R_{Z^{k}},S_{Z^{k}}),2\\alpha)\\bigcap B((R_{Z^{k'}},S_{Z^{k'}}),2\\alpha)$. We can thus define the functions $t\\mapsto \\tilde{x}_{i}(t)$ on $[0,t_{0}]$ by setting $\\tilde{x}_{i}(t)=\\tilde{x}_{i}(u(t),v(t))$. 
By construction\n\\begin{eqnarray}\\label{orthogonality}\n\\begin{aligned}\n\\int_{\\mathbb{R}} &\\Big{(}\\Big{(}u(t,\\cdot)-\\sum^{N}_{j=1}\\varphi_{c_{j}}(\\cdot-\\tilde{x}_{j}(t))\\Big{)}\\partial_{x}\\varphi_{c_{i}}(\\cdot-\\tilde{x}_{i}(t))\\\\\n&\\qquad \\qquad\\qquad +\\Big{(}v(t,\\cdot)-\\sum^{N}_{j=1}\\psi_{c_{j}}(\\cdot-\\tilde{x}_{j}(t))\\Big{)}\\partial_{x}\\psi_{c_{i}}(\\cdot-\\tilde{x}_{i}(t))\\Big{)}dx=0.\n\\end{aligned}\n\\end{eqnarray}\nMoreover, on account of \\eqref{implicitfunc} and the fact that $\\varphi''_{c}$ and $\\psi''_{c}$ are the sum of an $L^{1}$ function and a Dirac mass, we claim\n\\begin{align}\\label{preestimate}\n\\Big{\\|}\\Big{(}u(t),v(t)\\Big{)}-\\Big{(}R_{\\tilde{X}(t)},S_{\\tilde{X}(t)}\\Big{)}\\Big{\\|}_{H^{1}\\times H^{1}}\\leq O(\\sqrt{\\alpha}), \\quad \\forall t \\in [0,t_{0}].\n\\end{align}\nIndeed, one can calculate\n\\begin{align*}\n&\\Big{\\|}\\Big{(}u(t),v(t)\\Big{)}-\\Big{(}R_{\\tilde{X}(t)},S_{\\tilde{X}(t)}\\Big{)}\\Big{\\|}_{H^{1}\\times H^{1}} \\triangleq \\Big{\\|}u(t)-R_{\\tilde{X}(t)}\\Big{\\|}_{H^{1}}+\\Big{\\|}v(t)-S_{\\tilde{X}(t)}\\Big{\\|}_{H^{1}}\\\\\n&\\leq \\Big{\\|}u(t)-R_{{Z^{k}}(t)}\\Big{\\|}_{H^{1}}+\\Big{\\|}R_{{Z^{k}}(t)}-R_{\\tilde{X}(t)}\\Big{\\|}_{H^{1}}+\\Big{\\|}v(t)-S_{{Z^{k}}(t)}\\Big{\\|}_{H^{1}}+\\Big{\\|}S_{{Z^{k}}(t)}-S_{\\tilde{X}(t)}\\Big{\\|}_{H^{1}}\\\\\n&\\leq\n\\alpha+\\hat{C}\\sum^{N}_{i=1}\\Big{\\|}\\varphi(\\cdot-z^{k}_{i})-\\varphi(\\cdot-z^{k}_{i}-y_{i}(u,v))\\Big{\\|}_{H^{1}}\\\\\n&\\leq \\alpha+\\hat{C}\\sum^{N}_{i=1}\\Big{(}2(1-e^{-y_{i}})^{2}+2y_{i}+O(y^{2}_{i})\\Big{)}^{\\frac{1}{2}}\\leq O(\\sqrt{\\alpha}).\n\\end{align*}\nLet us now prove that the speed of $\\tilde{x}_{i}$ stays close to $c_{i}$. 
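In the computations below, second derivatives of the profiles are understood in the distributional sense; assuming the peakon form, the classical identity $(e^{-|x|})''=e^{-|x|}-2\\delta(x)$ gives\n\\begin{equation*}\n\\partial^{2}_{x}\\varphi_{c_{i}}(\\cdot-\\tilde{x}_{i}(t))=\\varphi_{c_{i}}(\\cdot-\\tilde{x}_{i}(t))-2a_{i}\\,\\delta\\big(x-\\tilde{x}_{i}(t)\\big),\n\\end{equation*}\nso that the pairings $<\\partial^{2}_{x}R_{i},\\tilde{u}>_{H^{-1},H^{1}}$ appearing below are well defined for $\\tilde{u}\\in H^{1}(\\mathbb{R})$. 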
We set\n\\begin{align*}\n&R_{j}(t)=\\varphi_{c_{j}}(\\cdot-\\tilde{x}_{j}(t)) \\quad \\mathrm{and} \\quad \\tilde{u}(t)=u(t)-\\sum^{N}_{j=1}R_{j}(t)=u(t,\\cdot)-R_{\\tilde{X}(t)},\\\\\n&S_{j}(t)=\\psi_{c_{j}}(\\cdot-\\tilde{x}_{j}(t)) \\quad \\mathrm{and} \\quad \\tilde{v}(t)=v(t)-\\sum^{N}_{j=1}S_{j}(t)=v(t,\\cdot)-S_{\\tilde{X}(t)}.\n\\end{align*}\nDifferentiating \\eqref{orthogonality} with respect to $t$ we get\n\\begin{align*}\n\\int_{\\mathbb{R}}\\Big{(}\\tilde{u}_{t}\\partial_{x}R_{i}+\\tilde{v}_{t}\\partial_{x}S_{i}\\Big{)}=\\dot{\\tilde{x}}_{i}\\big{(}<\\partial^{2}_{x}R_{i},\\tilde{u}>_{H^{-1},H^{1}}+<\\partial^{2}_{x}S_{i},\\tilde{v}>_{H^{-1},H^{1}}\\big{)},\n\\end{align*}\nand thus\n\\begin{align}\\label{speedestimate}\n\\bigg{|}\\int_{\\mathbb{R}}\\Big{(}\\tilde{u}_{t}\\partial_{x}R_{i}+\\tilde{v}_{t}\\partial_{x}S_{i}\\Big{)}\\bigg{|}\\leq |\\dot{\\tilde{x}}_{i}-c_{i}|\\Big(O\\big{(}\\|\\tilde{u}\\|_{H^{1}}\\big{)}+O\\big{(}\\|\\tilde{v}\\|_{H^{1}}\\big{)}\\Big)+\\Big(O\\big{(}\\|\\tilde{u}\\|_{H^{1}}\\big{)}+O\\big{(}\\|\\tilde{v}\\|_{H^{1}}\\big{)}\\Big).\n\\end{align}\nReplacing $u$ by $\\tilde{u}+\\sum^{N}_{j=1}R_{j}(t)$ and $v$ by $\\tilde{v}+\\sum^{N}_{j=1}S_{j}(t)$ in \\eqref{weakform} and using\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n&\\partial_{t}R_{i}+\\big{(}\\dot{\\tilde{x}}_{i}(t)-c_{i}\\big{)}\\partial_{x}R_{i}+R_{i}S_{i}\\partial_{x}R_{i}+P_{x}\\ast\\Big{(}\\frac{1}{2}(\\partial_{x}R_{i})^{2}S_{i}+R_{i}(\\partial_{x}R_{i})(\\partial_{x}S_{i})+R^{2}_{i}S_{i}\\Big{)}\\\\\n&+P\\ast\\Big{(}\\frac{1}{2}(\\partial_{x}R_{i})^{2}\\partial_{x}S_{i}\\Big{)}=0,\\\\ 
\n&\\partial_{t}S_{i}+\\big{(}\\dot{\\tilde{x}}_{i}(t)-c_{i}\\big{)}\\partial_{x}S_{i}+S_{i}R_{i}\\partial_{x}S_{i}+P_{x}\\ast\\Big{(}\\frac{1}{2}(\\partial_{x}S_{i})^{2}R_{i}+S_{i}(\\partial_{x}S_{i})(\\partial_{x}R_{i})+S^{2}_{i}R_{i}\\Big{)}\\\\\n&+P\\ast\\Big{(}\\frac{1}{2}(\\partial_{x}S_{i})^{2}\\partial_{x}R_{i}\\Big{)}=0,\n\\end{aligned}\n\\right.\n\\end{equation}\nwe infer that $(\\tilde{u},\\tilde{v})$ satisfies on $[0,t_{0}]$ that\n\\begin{equation*}\\label{speedequation}\n\\begin{aligned}\n&\\tilde{u}_{t}-\\sum^{N}_{i=1}(\\dot{\\tilde{x}}_{i}(t)-c_{i})\\partial_{x}R_{i}+\\big{(}\\tilde{u}+\\sum^{N}_{j=1}R_{j}\\big{)}\\big{(}\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}-\\sum^{N}_{j=1}(R_{j}S_{j}\\partial_{x}R_{j})\\\\\n&\\quad +P_{x}\\ast\\bigg{(}\\frac{1}{2}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\n+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\\\\n&\\qquad\\quad+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{j}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\bigg{)}\\\\\n&\\qquad\\qquad+P\\ast 
\\bigg{(}\\frac{1}{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}^{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\bigg{)}=0,\\\\\n\\end{aligned}\n\\end{equation*}\n\\begin{eqnarray}\\label{speedequation}\n\\begin{aligned}\n&\\tilde{v}_{t}-\\sum^{N}_{i=1}(\\dot{\\tilde{x}}_{i}(t)-c_{i})\\partial_{x}S_{i}+\\big{(}\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\big{)}\\big{(}\\tilde{u}+\\sum^{N}_{j=1}R_{j}\\big{)}\\Big{(}\\partial_{x}\\big{(}\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\big{)}\\Big{)}-\\sum^{N}_{j=1}(S_{j}R_{j}\\partial_{x}S_{j})\\\\\n&\\quad+P_{x}\\ast\\bigg{(}\\frac{1}{2}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}^{2}(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\n+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\\\\n&\\qquad+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})^{2}(\\tilde{u}+\\sum^{N}_{j=1}R_{j})-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{j}-\\sum^{N}_{j=1}S_{j}S_{jx}R_{jx}-\\sum^{N}_{j=1}S^{2}_{j}R_{j}\\bigg{)}\\\\\n&\\qquad\\qquad+P\\ast \\bigg{(}\\frac{1}{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}^{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{jx}\\bigg{)}=0.\n\\end{aligned}\n\\end{eqnarray}\nTaking the $L^{2}$ scalar product with $\\partial_{x}R_{i}$ in the first equation of \\eqref{speedequation} and $\\partial_{x}S_{i}$ in the second equation of \\eqref{speedequation}, summing up the resulting equations, integrating by parts, and using the decay of $R_{j}$, $S_{j}$ and their first derivative, we claim\n\\begin{align}\\label{speedfinestimate}\n|\\dot{\\tilde{x}}_{i}(t)-c_{i}|\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{8}}).\n\\end{align}\nIndeed, according to the above, we obtain 
that\n\\begin{align*}\n&(\\dot{\\tilde{x}}_{i}-c_{i})\\big{(}<\\partial^{2}_{x}R_{i},\\tilde{u}>_{H^{-1},H^{1}}+<\\partial^{2}_{x}S_{i},\\tilde{v}>_{H^{-1},H^{1}}\\big{)}\\\\\n&+c_{i}\\big{(}<\\partial^{2}_{x}R_{i},\\tilde{u}>_{H^{-1},H^{1}}+<\\partial^{2}_{x}S_{i},\\tilde{v}>_{H^{-1},H^{1}}\\big{)}-(\\dot{\\tilde{x}}_{i}-c_{i})\\int_{\\mathbb{R}}\\bigg{(}(\\partial_{x}R_{i})^{2}+(\\partial_{x}S_{i})^{2}\\bigg{)}dx\\\\\n&=\\sum_{i\\ne j}(\\dot{\\tilde{x}}_{j}-c_{j})\\Big{(}\\int_{\\mathbb{R}}\\big{(}\\partial_{x}R_{i}\\partial_{x}R_{j}+\\partial_{x}S_{i}\\partial_{x}S_{j}\\big{)}dx\\Big{)}\\\\\n&\\quad -\\int_{\\mathbb{R}}\\Bigg{(}\\bigg{(}\\big{(}\\tilde{u}+\\sum^{N}_{j=1}R_{j}\\big{)}\\big{(}\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}-\\sum^{N}_{j=1}(R_{j}S_{j}\\partial_{x}R_{j})\\bigg{)}\\partial_{x}R_{i}\\Bigg{)}dx\\\\\n&\\quad -\\int_{\\mathbb{R}}\\Bigg{(}\\bigg{(}\\big{(}\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\big{)}\\big{(}\\tilde{u}+\\sum^{N}_{j=1}R_{j}\\big{)}\\Big{(}\\partial_{x}\\big{(}\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\big{)}\\Big{)}-\\sum^{N}_{j=1}(S_{j}R_{j}\\partial_{x}S_{j})\\bigg{)}\\partial_{x}S_{i}\\Bigg{)}dx\\\\\n&\\quad +\\int_{\\mathbb{R}}\\Bigg{(}P\\ast\\bigg{(}\\frac{1}{2}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\n+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\\\\n&\\quad +(\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{j}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\bigg{)}\\partial^{2}_{x}R_{i}\\Bigg{)}dx\\\\\n&\\quad 
+\\int_{\\mathbb{R}}\\Bigg{(}P\\ast\\bigg{(}\\frac{1}{2}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}^2(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\n+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\\\\n&\\quad +(\\tilde{v}+\\sum^{N}_{j=1}S_{j})^{2}(\\tilde{u}+\\sum^{N}_{j=1}R_{j})-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{j}-\\sum^{N}_{j=1}S_{j}S_{jx}R_{jx}-\\sum^{N}_{j=1}S^{2}_{j}R_{j}\\bigg{)}\\partial^{2}_{x}S_{i}\\Bigg{)}dx\\\\\n&\\quad -\\int_{\\mathbb{R}}\\Bigg{(}P\\ast\\bigg{(}\\frac{1}{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}^{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\bigg{)}\\partial_{x}R_{i}\\Bigg{)}dx\\\\\n&\\quad -\\int_{\\mathbb{R}}\\Bigg{(}P\\ast\\bigg{(}\\frac{1}{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}^{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{jx}\\bigg{)}\\partial_{x}S_{i}\\Bigg{)}dx.\n\\end{align*}\nFor every term, we have the following estimates\n\\begin{eqnarray*}\n\\begin{aligned}\n<\\partial^{2}_{x}R_{i},\\tilde{u}>_{H^{-1},H^{1}}&=\\int_{\\mathbb{R}}\\partial^{2}_{x}R_{i}\\tilde{u}\\,dx=\\int_{\\mathbb{R}}\\left[R_{i}-2a_i\\delta\\left(x-\\tilde{x}_{i}(t)\\right))\\right]\\tilde{u}\\,dx\\\\\n&=\\int_{\\mathbb{R}}R_{i}\\tilde{u}\\,dx-2a_{i}\\tilde{u}(\\tilde{x}_{i}(t))\\leq \\|\\tilde{u}\\|_{L^\\infty}\\Big(\\Big|\\int_{\\mathbb{R}}R_{i}\\,dx\\Big|+2a_i\\Big)\\leq O(\\sqrt{\\alpha}).\n\\end{aligned}\n\\end{eqnarray*}\nSimilarly,\n\\begin{align*}\n<\\partial^{2}_{x}S_{i},\\tilde{v}>_{H^{-1},H^{1}}\\leq O(\\sqrt{\\alpha}).\n\\end{align*}\nFor the term, $\\int_{\\mathbb{R}}\\partial_{x}R_{i}\\partial_{x}R_{j}\\,dx$, we find\n\\begin{eqnarray*}\n\\begin{aligned}\n\\int_{\\mathbb{R}}\\partial_{x}R_{i}\\partial_{x}R_{j}\\,dx&=-\\int_{\\mathbb{R}}\\partial_x^2 
R_{i}R_{j}\\,dx=-\\int_{\\mathbb{R}}\\left(R_{i}-2a_{i}\\delta(x-\\tilde{x}_{i}(t))\\right)R_{j}\\,dx\\\\\n&=-\\int_{\\mathbb{R}}R_{i}R_{j}\\,dx+2a_{i}R_{j}(\\tilde{x}_{i}(t)) \\leq O(e^{-\\frac L4}).\n\\end{aligned}\n\\end{eqnarray*}\nSimilarly,\n\\begin{align*}\n\\int_{\\mathbb{R}}\\partial_{x}S_{i}\\partial_{x}S_{j}\\,dx \\leq O(e^{-\\frac L4}).\n\\end{align*}\n\n\\begin{lemma}\\label{implicit-1}\nAssume $(\\tilde{u},\\tilde{v})$ satisfies \\eqref{preestimate}, then we have\n\\begin{equation*}\n\\int_{\\mathbb{R}}\\Big[(\\tilde{u}+\\sum^{N}_{j=1}R_{j})(\\tilde{v}+\\sum^{N}_{j=1}S_{j})(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})-\\sum^{N}_{j=1}R_{j}S_{j}R_{jx}\\Big]R_{ix}\\,dx \\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4})\n\\end{equation*}\nand\n\\begin{equation*}\n\\int_{\\mathbb{R}}\\Big[(\\tilde{u}+\\sum^{N}_{j=1}R_{j})(\\tilde{v}+\\sum^{N}_{j=1}S_{j})(\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx})-\\sum^{N}_{j=1}R_{j}S_{j}S_{jx}\\Big]S_{ix}\\,dx \\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nWe calculate\n\\begin{align*}\n&\\int_{\\mathbb{R}}\\Big[(\\tilde{u}+\\sum^{N}_{j=1}R_{j})(\\tilde{v}+\\sum^{N}_{j=1}S_{j})(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})-\\sum^{N}_{j=1}R_{j}S_{j}R_{jx}\\Big]R_{ix}\\,dx\\\\\n=&\\int_{\\mathbb{R}}\\Big[\\tilde{u}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})+\\sum^{N}_{j=1}R_{j}\\tilde{v}(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})+\\sum^{N}_{j=1}S_{j}\\sum^{N}_{j=1}R_{j}\\tilde{u}_{x}\\\\\n&\\qquad\\qquad+ \\sum^{N}_{j=1}S_{j}\\sum^{N}_{j=1}R_{j}\\sum^{N}_{j=1}R_{jx}-\\sum^{N}_{j=1}R_{j}S_{j}R_{jx}\\Big]R_{ix}\\,dx\\\\\n\\leq 
&\\Big|\\int_{\\mathbb{R}}\\tilde{u}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})R_{ix}\\,dx\\Big|+\\Big|\\int_{\\mathbb{R}}\\sum^{N}_{j=1}R_{j}\\tilde{v}(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})R_{ix}\\,dx\\Big|\\\\\n&\\quad+\\Big|\\int_{\\mathbb{R}}\\sum^{N}_{j=1}S_{j}\\sum^{N}_{j=1}R_{j}\\tilde{u}_{x}R_{ix}\\,dx\\Big|+\\Big|\\int_{\\mathbb{R}}\\Big[\\sum^{N}_{j=1}S_{j}\\sum^{N}_{j=1}R_{j}\\sum^{N}_{j=1}R_{jx}-\\sum^{N}_{j=1}R_{j}S_{j}R_{jx}\\Big]R_{ix}\\,dx\\Big|\\\\\n\\leq &\\|\\tilde{u}\\|_{L^{\\infty}}\\|u\\|_{L^{\\infty}}\\|v\\|_{L^{\\infty}}\\Big|\\int_{\\mathbb{R}}R_{ix}\\,dx\\Big|+\\sum^{N}_{j=1}\\|R_{j}\\|_{L^{\\infty}}\\|\\tilde{v}\\|_{L^{\\infty}}\\|u\\|_{L^{\\infty}}\\Big|\\int_{\\mathbb{R}}R_{ix}\\,dx\\Big|\\\\\n&\\quad+\\Big(\\sum^{N}_{j=1}\\|R_{j}\\|_{L^{\\infty}}\\Big)\\Big(\\sum^{N}_{j=1}\\|S_{j}\\|_{L^{\\infty}}\\Big)\\Big(\\int_{\\mathbb{R}}(\\tilde{u}_{x})^2\\,dx\\Big)^{\\frac 12}\\Big(\\int_{\\mathbb{R}}R_{ix}^2\\,dx\\Big)^{\\frac 12}\\\\\n&\\quad\\quad +\\Big|\\int_{\\mathbb{R}}\\Big[\\sum_{i\\ne j\\,or\\,i\\ne k\\,or\\,j\\ne k}R_{i}S_{j}R_{kx}\\Big]R_{ix}\\,dx\\Big|\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nSimilarly, we have\n\\begin{align*}\n\\int_{\\mathbb{R}}\\Big[(\\tilde{u}+\\sum^{N}_{j=1}R_{j})(\\tilde{v}+\\sum^{N}_{j=1}S_{j})(\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx})-\\sum^{N}_{j=1}R_{j}S_{j}S_{jx}\\Big]S_{ix}\\,dx \\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\n\\end{proof}\n\nIn order to estimate the next terms, we need the following lemma.\n\n\\begin{lemma}\\label{implicit-2}\nUnder the same assumptions as in Lemma \\eqref{implicit-1}, we have\n\\begin{equation*}\n\\begin{aligned}\n\\Big{\\|} &P\\ast 
\\Big{(}\\frac{1}{2}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\n+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\\\\n&+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{j}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big{)}\\Big{\\|}_{L^\\infty}\\\\\n\\leq & O(\\sqrt{\\alpha})+O(e^{-\\frac L4})\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\Big{\\|} &P\\ast \\Big{(}\\frac{1}{2}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}^2(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\n+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\\\\n&+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})^{2}(\\tilde{u}+\\sum^{N}_{j=1}R_{j})-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{j}-\\sum^{N}_{j=1}S_{j}S_{jx}R_{jx}-\\sum^{N}_{j=1}S^{2}_{j}R_{j}\\Big{)}\\Big{\\|}_{L^\\infty}\\\\\n\\leq & O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{aligned}\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nBy using the H$\\rm{\\ddot{o}}$lder and triangle inequalities, we get\n\\begin{align*}\n\\Big{\\|} &P\\ast \\Big{(}\\frac{1}{2}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\n+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\\\\n&\\;\\;+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{j}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big{)}\\Big{\\|}_{L^\\infty}\\\\\n\\leq &\\frac{1}{2}\\,\\int_{\\mathbb{R}}\\Big|\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\sum^{N}_{j=1}R^{2}_{jx}S_{j}\\Big|\\,dx\\\\\n&\\;\\;+\\int_{\\mathbb{R}}\\Big| 
(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}\\Big|\\,dx\\\\\n&\\;\\; +\\int_{\\mathbb{R}}\\Big| (\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big|\\,dx\\\\\n&=\\frac{1}{2}\\,I_{2,1}+I_{2,2}+I_{2,3}.\n\\end{align*}\nFor the term $I_{2,1}$, we have\n\\begin{align*}\nI_{2,1}=&\\int_{\\mathbb{R}}\\Big|\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\sum^{N}_{j=1}R^{2}_{jx}S_{j}\\Big|\\,dx\\\\\n\\leq &\\int_{\\mathbb{R}}\\Big|\\Big[\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2-\\sum^{N}_{j=1}R^{2}_{jx}\\Big](\\tilde{v}+\\sum^{N}_{j=1}S_{j})\\Big|\\,dx+\\int_{\\mathbb{R}}\\Big|\\Big(\\sum^{N}_{j=1}R^{2}_{jx}\\Big)\\tilde{v}\\Big|\\,dx\\\\\n&\\;\\;+\\int_{\\mathbb{R}}\\Big|\\sum^{N}_{j=1}R^{2}_{jx}\\sum^{N}_{i=1}S_{i}-\\sum^{N}_{j=1}R^{2}_{jx}S_{j}\\Big|\\,dx\\\\\n\\leq &\\|\\tilde{v}\\|_{L^\\infty}\\int_{\\mathbb{R}}\\Big( \\Big|\\tilde{u}^2_{x}\\Big|+\\Big|2\\tilde{u}_{x}\\sum^{N}_{j=1}R_{jx}\\Big|+\\Big|\\Big(\\sum^{N}_{j=1}R_{jx}\\Big)^2-\\sum^{N}_{j=1}R^2_{jx}\\Big|\\Big)\\,dx\\\\\n&\\;\\;+\\|\\tilde{v}\\|_{L^\\infty}\\int_{\\mathbb{R}}\\sum^{N}_{j=1}R^{2}_{jx}\\,dx+\\int_{\\mathbb{R}}\\sum^{N}_{i\\ne j}\\sum^{N}_{j=1}R^{2}_{jx}S_{i}\\,dx\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nFor the term $I_{2,2}$, we have\n\\begin{align*}\nI_{2,2}=&\\int_{\\mathbb{R}}\\Big|(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}\\Big|\\,dx\\\\\n\\leq 
&\\int_{\\mathbb{R}}\\Big|\\tilde{u}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big|\\,dx+\\int_{\\mathbb{R}}\\Big|\\Big(\\sum^{N}_{j=1}R_{j}\\Big)\\tilde{u}_{x}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big|\\,dx\\\\\n&\\;+\\int_{\\mathbb{R}}\\Big|\\Big(\\sum^{N}_{j=1}R_{j}\\Big)\\Big(\\sum^{N}_{j=1}S_{jx}\\Big)\\tilde{u}_{x}\\Big|\\,dx+\\int_{\\mathbb{R}}\\Big|\\Big(\\sum^{N}_{j=1}R_{j}\\Big)\\Big(\\sum^{N}_{j=1}S_{jx}\\Big)\\Big(\\sum^{N}_{j=1}R_{jx}\\Big)-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}\\Big|\\,dx\\\\\n\\leq\n&\\|\\tilde{u}\\|_{L^\\infty}\\int_{\\mathbb{R}}\\left|u_{x}v_{x}\\right|\\,dx+\\|v\\|_{L^\\infty}\\Big(\\int_{\\mathbb{R}}\\Big(\\sum^{N}_{j=1}R_{j}\\Big)^2\\,dx\\Big)^{\\frac{1}{2}}\\Big(\\int_{\\mathbb{R}}\\tilde{u}^2_{x}\\,dx\\Big)^{\\frac 12}\\\\\n&\\;\\;+\\sum^{N}_{j=1}\\|S_{j}\\|_{L^\\infty}\\Big(\\int_{\\mathbb{R}}\\Big(\\sum^{N}_{j=1}R_{j}\\Big)^2\\,dx\\Big)^{\\frac 12}\\Big(\\int_{\\mathbb{R}}\\tilde{u}^2_{x}\\,dx\\Big)^{\\frac 12}+\\int_{\\mathbb{R}}\\Big|\\sum_{i\\ne j\\,\\text{or}\\,j\\ne k\\,\\text{or}\\,i\\ne k}R_{i}S_{jx}R_{kx}\\Big|\\,dx\\\\\n\\leq & O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nFor the term $I_{2,3}$, we obtain\n\\begin{align*}\nI_{2,3}=&\\int_{\\mathbb{R}}\\Big|\\Big(\\tilde{u}+\\sum^{N}_{j=1}R_{j}\\Big)^{2}\\Big(\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\Big)-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big|\\,dx\\\\\n\\leq & \\int_{\\mathbb{R}}\\Big|\\Big((\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}-\\sum^{N}_{j=1}R^{2}_{j}\\Big)\\Big(\\tilde{v}+\\sum^{N}_{j=1}S_{j}\\Big)\\Big|\\,dx+\\int_{\\mathbb{R}}\\Big|\\sum^{N}_{j=1}R^2_{j}\\tilde{v}\\Big|\\,dx\\\\\n&\\qquad\\qquad +\\int_{\\mathbb{R}}\\Big|\\sum^{N}_{j=1}R^{2}_{j}\\sum^{N}_{i=1}S_{i}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big|\\,dx\\\\\n\\leq 
&\\|v\\|_{L^\\infty}\\int_{\\mathbb{R}}|\\tilde{u}|^2+\\Big|2\\tilde{u}\\sum^{N}_{j=1}R_{j}\\Big|+\\Big|\\Big(\\sum^{N}_{j=1}R_{j}\\Big)^2-\\sum^{N}_{j=1}R^{2}_{j}\\Big|\\,dx+\\|\\tilde{v}\\|_{L^\\infty}\\int_{\\mathbb{R}}\\sum^{N}_{j=1}R^{2}_{j}\\,dx\\\\\n&\\qquad\\qquad+\\int_{\\mathbb{R}}\\sum^{N}_{i\\ne j}\\sum^{N}_{j=1}R^{2}_{j}S_{i}\\,dx\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nAccordingly, we get\n\\begin{align*}\n\\Big{\\|} &P\\ast \\Big{(}\\frac{1}{2}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\n+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\\\\n&\\qquad\\qquad+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{j}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big{)}\\Big{\\|}_{L^\\infty}\\\\\n\\leq & O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nSimilarly, we can prove\n\\begin{align*}\n\\Big{\\|} &P\\ast \\Big{(}\\frac{1}{2}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}^2(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\n+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\\\\n&\\qquad\\qquad\\qquad+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})^{2}(\\tilde{u}+\\sum^{N}_{j=1}R_{j})-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{j}-\\sum^{N}_{j=1}S_{j}S_{jx}R_{jx}-\\sum^{N}_{j=1}S^{2}_{j}R_{j}\\Big{)}\\Big{\\|}_{L^\\infty}\\\\\n\\leq & O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nThis completes the proof of this lemma.\n\\end{proof}\n\nThanks to Lemma \\ref{implicit-2}, we 
have\n\\begin{align*}\n\\int_{\\mathbb{R}}&P\\ast\\Big{(}\\frac{1}{2}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\n+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\\\\n&+(\\tilde{u}+\\sum^{N}_{j=1}R_{j})^{2}(\\tilde{v}+\\sum^{N}_{j=1}S_{j})-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{j}-\\sum^{N}_{j=1}R_{j}R_{jx}S_{jx}-\\sum^{N}_{j=1}R^{2}_{j}S_{j}\\Big{)}\\partial^{2}_{x}R_{i}\\,dx\\\\\n& \\triangleq \\int_{\\mathbb{R}}A(x)\\partial^{2}_{x}R_{i}\\,dx=\\int_{\\mathbb{R}}A(x)\\left(R_{i}-2a_{i}\\delta(x-\\tilde{x}_{i}(t))\\right)\\,dx\\\\\n=&\\int_{\\mathbb{R}}A(x)R_{i}\\,dx-2a_{i}A(\\tilde{x}_{i}(t))\\leq O\\left(\\|A\\|_{L^\\infty}\\right) \\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n\\int_{\\mathbb{R}}&P\\ast\\Big{(}\\frac{1}{2}\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}^2(\\tilde{u}+\\sum^{N}_{j=1}R_{j})\n+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})\\Big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\Big{)}\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}\\\\\n&+(\\tilde{v}+\\sum^{N}_{j=1}S_{j})^{2}(\\tilde{u}+\\sum^{N}_{j=1}R_{j})-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{j}-\\sum^{N}_{j=1}S_{j}S_{jx}R_{jx}-\\sum^{N}_{j=1}S^{2}_{j}R_{j}\\Big{)}\\partial^{2}_{x}S_{i}\\,dx\\\\\n\\leq & O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\n\n\\begin{lemma}\\label{implicit-3}\nUnder the same assumptions as in Lemma \\ref{implicit-1}, we have\n\\begin{align*}\n&\\Big\\|P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}^{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big{)}\\Big\\|_{L^\\infty}\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac 
L4}),\\\\\n&\\Big\\|P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}^{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{jx}\\Big{)}\\Big\\|_{L^\\infty}\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nWe estimate\n\\begin{align*}\n&\\Big\\|P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}^{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big{)}\\Big\\|_{L^\\infty}\\\\\n\\leq & \\frac{1}{2}\\,\\int_{\\mathbb{R}}\\Big|(\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})^{2}(\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx})-\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big|\\,dx\\\\\n\\leq &\n\\frac{1}{2}\\,\\Big(\\int_{\\mathbb R}\\Big|\\Big((\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx})^{2}-\\sum^{N}_{j=1}R^{2}_{jx}\\Big)(\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx})\\Big|\\,dx+\\int_{\\mathbb R}\\Big|\\tilde{v}_{x}\\sum^{N}_{j=1}R^{2}_{jx}\\Big|\\,dx\\Big)\\\\\n&\\qquad\\qquad +\\frac{1}{2}\\int_{\\mathbb R}\\Big|\\sum^{N}_{j=1}R^{2}_{jx}\\sum^{N}_{j=1}S_{jx}-\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big|\\,dx=\\frac{1}{2}\\,(I_{3,1}+I_{3,2}+I_{3,3}).\n\\end{align*}\nFor the term $I_{3,1}$, we obtain\n\\begin{align*}\nI_{3,1}=&\\int_{\\mathbb{R}}\\Big|\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2(\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx})-\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big|\\,dx\\\\\n\\leq &\\int_{\\mathbb{R}}\\Big|\\Big[\\Big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\Big{)}^2-\\sum^{N}_{j=1}R^{2}_{jx}\\Big](\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx})\\Big|\\,dx+\\int_{\\mathbb{R}}\\Big|\\Big(\\sum^{N}_{j=1}R^{2}_{jx}\\Big)\\tilde{v}_{x}\\Big|\\,dx\\\\\n&\\qquad\\qquad+\\int_{\\mathbb{R}}\\Big|\\sum^{N}_{j=1}R^{2}_{jx}\\sum^{N}_{i=1}S_{ix}-\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big|\\,dx\\\\\n\\leq 
&\\|v\\|_{L^\\infty}\\int_{\\mathbb{R}}\\Big|\\tilde{u}^2_{x}\\Big|+\\Big|2\\tilde{u}_{x}\\sum^{N}_{j=1}R_{jx}\\Big|+\\Big|\\Big(\\sum^{N}_{j=1}R_{jx}\\Big)^2-\\sum^{N}_{j=1}R^2_{jx}\\Big|\\,dx\\\\\n&\\qquad\\qquad+\\|\\tilde{v}\\|_{L^\\infty}\\int_{\\mathbb{R}}\\sum^{N}_{j=1}R^{2}_{jx}\\,dx+\\int_{\\mathbb{R}}\\sum^{N}_{i\\ne j}\\sum^{N}_{j=1}\\left|R^{2}_{jx}S_{ix}\\right|\\,dx\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nFor the term $I_{3,2}$, we get\n\\begin{align*}\nI_{3,2}=\\int_{\\mathbb R}\\sum^{N}_{j=1}R^2_{jx}|\\tilde{v}_{x}|\\,dx= \\sum^{N}_{j=1}\\int_{\\mathbb R}R^2_{jx}|\\tilde{v}_{x}|\\,dx\\leq \\sum^{N}_{j=1}\\Big(\\int_{\\mathbb R}\\tilde{v}^2_{x}\\,dx\\Big)^{\\frac 12}\\Big(\\int_{\\mathbb R}R^4_{jx}\\,dx\\Big)^{\\frac 12}\\leq O(\\sqrt{\\alpha}).\n\\end{align*}\nFor the term $I_{3,3}$, we have\n\\begin{align*}\n \\int_{\\mathbb R}\\Big|\\sum^{N}_{j=1}R^{2}_{jx}\\sum^{N}_{j=1}S_{jx}-\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big|\\,dx \\leq \\int_{\\mathbb R}\\sum^{N}_{i\\ne j}\\sum^{N}_{j=1}\\left|R^{2}_{jx}S_{ix}\\right|\\,dx \\leq O(e^{-\\frac L4}).\n\\end{align*}\nThus, we deduce that\n\\begin{align*}\n\\Big\\|P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}^{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big{)}\\Big\\|_{L^\\infty}\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4})\n\\end{align*}\nand\n\\begin{align*}\n&\\Big\\|P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}^{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{jx}\\Big{)}\\Big\\|_{L^\\infty}\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nThis completes the proof of this lemma.\n\\end{proof}\n\nOn account of Lemma \\ref{implicit-3}, we 
obtain\n\\begin{align*}\n&\\int_{\\mathbb{R}}P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}^{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}R^{2}_{jx}S_{jx}\\Big{)}\\partial_{x}R_{i}\\,dx\\\\\n\\leq &\\Big( O(\\sqrt{\\alpha})+O(e^{-\\frac L4})\\Big) \\int_{\\mathbb R}|R_{ix}|\\,dx \\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nSimilarly, we find\n\\begin{align*}\n\\int_{\\mathbb{R}}P\\ast\\Big{(}\\frac{1}{2}\\big{(}\\tilde{v}_{x}+\\sum^{N}_{j=1}S_{jx}\\big{)}^{2}\\big{(}\\tilde{u}_{x}+\\sum^{N}_{j=1}R_{jx}\\big{)}-\\frac{1}{2}\\sum^{N}_{j=1}S^{2}_{jx}R_{jx}\\Big{)}\\partial_{x}S_{i}\\,dx\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac L4}).\n\\end{align*}\nThanks to Lemmas \\ref{implicit-1}, \\ref{implicit-2} and \\ref{implicit-3}, we arrive at\n\\begin{align*}\n|\\dot{\\tilde{x}}_{i}(t)-c_{i}|\\Big{(}\\|R_{ix}\\|^{2}_{L^{2}}+\\|S_{ix}\\|^{2}_{L^{2}}+O(\\sqrt{\\alpha})\\Big{)}\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{8}}).\n\\end{align*}\nSince $\\|R_{ix}\\|^{2}_{L^2}>a_1$ and $\\|S_{ix}\\|^{2}_{L^2}>b_1$, we have\n\\begin{align*}\n|\\dot{\\tilde{x}}_{i}(t)-c_{i}|\\leq O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{8}}).\n\\end{align*}\n\nFinally, we claim\n\\begin{align*}\n\\left|x_{i}-\\tilde{x}_{i}\\right|\\leq \\frac{L}{12}.\n\\end{align*}\nIndeed, if $x\\notin [\\tilde{x}_{i}-L\/12,\\tilde{x}_{i}+L\/12]$, then\n\\begin{align*}\nu(x,t)v(x,t)\\leq c_{i}e^{-\\frac{L}{6}}+O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{4}}) \\leq c_{i}-O(\\sqrt{\\alpha})-O(e^{-\\frac{L}{4}}),\n\\end{align*}\nby choosing $\\alpha$ small enough and $L$ large enough.\nHowever,\n\\begin{align*}\nu(t,x_{i})v(t,x_{i})=\\max_{x\\in J_{i}(t)}u(x)v(x)\\geq u(\\tilde{x}_{i})v(\\tilde{x}_{i})=c_{i}+O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{4}}),\n\\end{align*}\nwhich is a contradiction.\nTherefore, we have $\\left|x_{i}-\\tilde{x}_{i}\\right|\\leq L\/12$.\n\\end{proof}\n\n\n\\subsection{Monotonicity property}\nThanks to the preceding proposition, for 
$\\epsilon_{0}>0$ small enough and $L_{0}>0$ large enough, one can construct $C^1$ functions $\\tilde{x}_{1},...,\\tilde{x}_{N}$ defined on $[0,t_{0}]$ such that \\eqref{initial-3-1}-\\eqref{initial-3-5} are satisfied. In this subsection we investigate the almost monotonicity of functionals that are very close to the energy to the right of the $i$th bump of $(u, v)$, $i=1,...,N-1$. Let $\\Psi$ be a $C^{\\infty}$-function such that\n\\begin{equation*}\n\\left\\{\n\\begin{aligned}\n&0<\\Psi(x)<1,\\,\\Psi'(x)>0,\\quad\\quad\\quad & x\\in{\\mathbb R},\\\\\n&|\\Psi'''|\\leq 10|\\Psi'|,&x\\in [-1,1],\\\\\n\\end{aligned}\n\\right.\n\\end{equation*}\nand\n\\begin{equation*}\n\\Psi(x)=\\left\\{\n\\begin{aligned}\n&e^{-|x|},\\quad &x<-1,\\\\\n&1-e^{-|x|},&x>1.\n\\end{aligned}\n\\right.\n\\end{equation*}\nSetting $\\Psi_{K}=\\Psi(\\cdot\/K)$, we introduce for $j=2,...,N$,\n\\begin{align*}\n&\\mathcal{J}^{u}_{j,K}(t)=\\int_{\\mathbb R}\\left(u^2(t)+u^2_{x}(t)\\right)\\Psi_{j,K}(t)\\,dx,\\quad \\mathcal{J}^{v}_{j,K}(t)=\\int_{\\mathbb R}\\left(v^2(t)+v^2_{x}(t)\\right)\\Psi_{j,K}(t)\\,dx,\\\\\n&\\mathcal{J}^{u,v}_{j,K}(t)=\\int_{\\mathbb R}\\left(u(t)v(t)+u_{x}(t)v_{x}(t)\\right)\\Psi_{j,K}(t)\\,dx,\n\\end{align*}\nwhere $\\Psi_{j,K}(t,x)=\\Psi_K(x-y_{j}(t))$ with $y_{j}(t)$, $j=2,...,N$, defined in \\eqref{initial-3-4}. Note that $\\mathcal{J}^{u}_{j,K}(t)$ is close to $\\left\\|u(t)\\right\\|_{H^{1}(x>y_{j}(t))}$ and thus measures the energy to the right of the $(j-1)$th bump of $u$, $\\mathcal{J}^{v}_{j,K}(t)$ is close to $\\left\\|v(t)\\right\\|_{H^{1}(x>y_{j}(t))}$ and thus measures the energy to the right of the $(j-1)$th bump of $v$, and $\\mathcal{J}^{u,v}_{j,K}(t)$ is close to $\\left\\langle u(t),v(t)\\right\\rangle_{H^{1}(x>y_{j}(t))}$ and thus measures the energy to the right of the $(j-1)$th bump of $(u, v)$. 
Finally, we set\n\\begin{align*}\n\\sigma_{0}=\\frac{1}{4}\\,\\min(c_{1},c_{2}-c_{1},...,c_{N}-c_{N-1}).\n\\end{align*}\n\nWe have the following monotonicity result.\n\\begin{prop}\\label{monotonicity}\n(Exponential decay of the functionals $\\mathcal{J}^{u}_{j,K}(t)$, $\\mathcal{J}^{v}_{j,K}(t)$ and $\\mathcal{J}^{u,v}_{j,K}(t)$). Let $(u,v)\\in Y([0,T[)$ be a solution of the two-component Novikov system satisfying \\eqref{initial-3-2} on $[0,t_{0}]$. There exist $\\alpha_{0}>0$ and $L_{0}>0$ only depending on $c_{1}$ such that if $0<\\alpha<\\alpha_{0}$ and $L\\geq L_{0}$ then for any $4 \\leq K \\lesssim \\sqrt{L}$,\n\\begin{align*}\n&\\mathcal{J}^{u}_{j,K}(t)-\\mathcal{J}^{u}_{j,K}(0)\\leq O(e^{-\\frac{\\sigma_{0}L}{8K}}), \\quad \\mathcal{J}^{v}_{j,K}(t)-\\mathcal{J}^{v}_{j,K}(0)\\leq O(e^{-\\frac{\\sigma_{0}L}{8K}}),\\\\\n&\\mathcal{J}^{u,v}_{j,K}(t)-\\mathcal{J}^{u,v}_{j,K}(0)\\leq O(e^{-\\frac{\\sigma_{0}L}{8K}}),\\quad \\forall j\\in \\{2,...,N\\},\\;\\forall t\\in[0,t_{0}].\n\\end{align*}\n\\end{prop}\n\nThe proof of this proposition relies on the following Virial type identity.\n\n\\begin{lemma}\\label{virial}\n(Virial type identity). 
Let $(u,v)\\in Y([0,T[)$, with $0<t_{0}<T$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Proposition \\ref{monotonicity}]\nFor $0<\\alpha<\\alpha_{0}$ and $L>L_{0}>0$, with $\\alpha_{0}\\ll 1$ and $L_{0}\\gg 1$, we obtain\n\\begin{align*}\n\\widetilde{I}_{1,1} \\leq \\frac{c_1}{20}\\int_{\\mathbb R}\\left(u^{2}+u^{2}_{x}\\right)\\Psi'_{i,K}\\,dx+\\frac{C}{K}\\|u_{0}\\|^{3}_{H^{1}(\\mathbb R)}\\|v_{0}\\|_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nBefore estimating the term $\\widetilde{I}_{1,2}$, using that $|\\Psi'''_{i,K}|\\leq 10K^{-2}\\Psi'_{i,K}$, we have\n\\begin{align*}\n\\left(1-\\partial^2_{x}\\right)\\Psi'_{i,K}(x)=\\Psi'_{i,K}(x)-\\frac{1}{K^2}\\Psi'''_{i,K}(x)\\geq \\left(1-\\frac{10}{K^2}\\right)\\Psi'_{i,K}(x),\\; \\forall x\\in \\mathbb R,\n\\end{align*}\nand since $K>4$, it holds\n\\begin{eqnarray}\\label{convolutionESI}\n\\begin{aligned}\n\\left(1-\\partial^2_{x}\\right)^{-1}\\Psi'_{i,K}(x)\\leq \\Big(1-\\frac{10}{K^2}\\Big)^{-1}\\Psi'_{i,K}(x),\\; \\forall x\\in \\mathbb R.\n\\end{aligned}\n\\end{eqnarray}\nNext, the estimate of $\\widetilde{I}_{1,2}$ gives us\n\\begin{align*}\n\\widetilde{I}_{1,2}=&\\int_{D_{i}}\\Big[2uP\\ast\\Big(\\frac{1}{2}u^{2}_{x}v+uu_{x}v_{x}+u^{2}v\\Big)\\Big]\\Psi'_{i,K}\\,dx+\\int_{D^c_{i}}\\Big[2uP\\ast\\Big(\\frac{1}{2}u^{2}_{x}v+uu_{x}v_{x}+u^{2}v\\Big)\\Big]\\Psi'_{i,K}\\,dx\\\\\n\\leq\\; & \\|u\\|_{L^\\infty(D_{i})}\\int_{\\mathbb R}P\\ast\\left|u^{2}_{x}v+2uu_{x}v_{x}+2u^{2}v\\right|\\Psi'_{i,K}\\,dx\\\\\n&\\quad +\\|\\Psi'_{i,K}\\|_{L^\\infty(D^c_{i})}\\|u\\|_{L^\\infty(\\mathbb R)}\\int_{\\mathbb R}\\left|u^{2}_{x}v+2uu_{x}v_{x}+2u^{2}v\\right|\\,dx\\\\\n\\leq\\; & 3\\|u\\|_{L^\\infty(D_{i})}\\|v\\|_{L^\\infty(D_{i})}\\int_{\\mathbb R}\\left|u^{2}_{x}+u^{2}\\right|\\Psi'_{i,K}\\,dx+\\frac{C}{K}\\|u_{0}\\|^{3}_{H^{1}(\\mathbb R)}\\|v_{0}\\|_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}\\\\\n\\leq\\; & \\frac{c_1}{20}\\int_{\\mathbb R}\\left(u^{2}+u^{2}_{x}\\right)\\Psi'_{i,K}\\,dx+\\frac{C}{K}\\|u_{0}\\|^{3}_{H^{1}(\\mathbb R)}\\|v_{0}\\|_{H^{1}(\\mathbb 
R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})},\n\\end{align*}\nwhere we used Young's inequality, the exponential decay of $\\Psi'_{i,K}$ on $D^c_{i}$, \\eqref{predict-1} and \\eqref{convolutionESI}.\nLet us now tackle the estimate of $\\widetilde{I}_{1,3}$. On $D^c_{i}$ we have\n\\begin{align*}\n&\\int_{D^c_{i}}\\left[uP_{x}\\ast(u^2_{x}v_{x})\\right]\\Psi'_{i,K}(t)\\,dx \\leq \\|\\Psi'_{i,K}\\|_{L^\\infty(D^c_{i})}\\|v\\|_{L^\\infty(D^c_{i})}\\int_{\\mathbb R}u^2_{x}\\left[P\\ast u\\right]\\,dx\\\\\n&\\quad \\leq \\; \\|\\Psi'_{i,K}\\|_{L^\\infty(D^c_{i})}\\|v\\|_{L^\\infty(D^c_{i})}\\|P\\ast u\\|_{L^\\infty(\\mathbb R)}\\int_{\\mathbb R}u^2_{x}\\,dx.\n\\end{align*}\nApplying the H$\\rm{\\ddot{o}}$lder inequality, we have for all $x\\in \\mathbb R$,\n\\begin{eqnarray}\\label{convolutionESI-1}\n\\begin{aligned}\nP\\ast u= \\frac{1}{2}\\int_{\\mathbb R}e^{-|x-y|}u(y)\\,dy\\leq \\frac{1}{2}\\left(\\int_{\\mathbb R}e^{-2|x-y|}\\,dy\\right)^{\\frac{1}{2}}\\left(\\int_{\\mathbb R}u^2(y)\\,dy\\right)^{\\frac{1}{2}}\\leq \\frac{1}{2}\\|u\\|_{L^2(\\mathbb R)},\n\\end{aligned}\n\\end{eqnarray}\nand then using \\eqref{convolutionESI-1} and the exponential decay of $\\Psi'_{i,K}$ on $D^c_{i}$, it holds\n\\begin{align*}\n\\int_{D^c_{i}}\\left[uP_{x}\\ast(u^2_{x}v_{x})\\right]\\Psi'_{i,K}(t)\\,dx\\leq \\frac{C}{K}\\|u_{0}\\|^{3}_{H^{1}(\\mathbb R)}\\|v_{0}\\|_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nThe estimate of $\\widetilde{I}_{1,3}$ on $D_{i}$ leads to\n\\begin{align*}\n\\int_{D_{i}}\\left[uP_{x}\\ast(u^2_{x}v_{x})\\right]\\Psi'_{i,K}(t)\\,dx \\leq & \\|u\\|_{L^\\infty(D_{i})}\\|v\\|_{L^\\infty(D_{i})}\\int_{\\mathbb R}\\left[P\\ast\\Psi'_{i,K}(t)\\right]u^{2}_{x}\\,dx\\\\\n\\leq& \\frac{c_1}{20}\\int_{\\mathbb R}\\left(u^{2}+u^{2}_{x}\\right)\\Psi'_{i,K}\\,dx.\n\\end{align*}\nTherefore, for $0<\\alpha<\\alpha_{0}$ and $L>L_{0}>0$, with $\\alpha_{0}\\ll 1$ and $L_{0}\\gg 1$, it 
holds\n\\begin{align*}\n\\frac{d}{dt}\\mathcal{J}^{u}_{j,K}(t)\\leq \\frac{C}{K}\\|u_{0}\\|^{3}_{H^{1}(\\mathbb R)}\\|v_{0}\\|_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nIntegrating between $0$ and $t$, we obtain\n\\begin{align*}\n\\mathcal{J}^{u}_{j,K}(t)-\\mathcal{J}^{u}_{j,K}(0)\\leq O(e^{-\\frac{\\sigma_{0}L}{8K}}).\n\\end{align*}\nSimilarly, we also obtain\n\\begin{align*}\n\\mathcal{J}^{v}_{j,K}(t)-\\mathcal{J}^{v}_{j,K}(0)\\leq O(e^{-\\frac{\\sigma_{0}L}{8K}}).\n\\end{align*}\nNow, we need to prove the third inequality of this proposition. Applying the Virial type identity with $g=\\Psi_{i,K}$ and using \\eqref{infspeed}, we get\n\\begin{eqnarray*}\n\\begin{aligned}\n\\frac{d}{dt}\\mathcal{J}^{u,v}_{j,K}(t)\n=&\\int_{\\mathbb R}\\Big[uvu_{x}v_{x}+uP\\ast\\Big(\\frac{1}{2}v^{2}_{x}u+vv_{x}u_{x}+v^{2}u\\Big)+uP_{x}\\ast\\Big(\\frac{1}{2}v^{2}_{x}u_{x}\\Big)\\Big]\\Psi'_{i,K}\\,dx\\\\\n&\\;\\;+\\int_{\\mathbb R}\\Big[vP\\ast\\Big(\\frac{1}{2}u^{2}_{x}v+uu_{x}v_{x}+u^{2}v\\Big)+vP_{x}\\ast\\Big(\\frac{1}{2}u^{2}_{x}v_{x}\\Big)\\Big]\\Psi'_{i,K}\\,dx\\\\\n&\\;\\;-\\dot{y}_{i}\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Psi'_{i,K}\\,dx\\\\\n=&\\int_{\\mathbb R}\\left[uv(uv+u_{x}v_{x})+uP\\ast\\left(v(uv+u_{x}v_{x})\\right)+vP\\ast\\left(u(uv+u_{x}v_{x})\\right)\\right]\\Psi'_{i,K}\\,dx\\\\\n&\\;\\;+\\int_{\\mathbb R}\\Big[-\\frac{1}{2}u^2v^2+vP\\ast\\Big(\\frac{1}{2}u^2_{x}v\\Big)+vP_{x}\\ast\\Big(\\frac{1}{2}u^2_{x}v_{x}\\Big)\\Big]\\Psi'_{i,K}\\,dx\\\\\n&\\;\\;+\\int_{\\mathbb R}\\Big[-\\frac{1}{2}u^2v^2+uP\\ast\\Big(\\frac{1}{2}v^2_{x}u\\Big)+uP_{x}\\ast\\Big(\\frac{1}{2}v^2_{x}u_{x}\\Big)\\Big]\\Psi'_{i,K}\\,dx\\\\\n&\\;\\;-\\dot{y}_{i}\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Psi'_{i,K}\\,dx\\\\\n\\triangleq &\\widetilde{I}_{2,1}+\\widetilde{I}_{2,2}+\\widetilde{I}_{2,3}-\\dot{y}_{i}\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Psi'_{i,K}\\,dx.\n\\end{aligned}\n\\end{eqnarray*}\nIn the same way as in the proof of 
the first inequality, we obtain\n\\begin{align*}\n\\widetilde{I}_{2,1} \\leq \\frac{c_1}{20}\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Psi'_{i,K}\\,dx+\\frac{C}{K}\\|u_{0}\\|^{2}_{H^{1}(\\mathbb R)}\\|v_{0}\\|^{2}_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nFor the term $\\widetilde{I}_{2,2}$, we note that\n\\begin{align*}\n|u_x|\\leq u \\quad {\\rm and} \\quad |v_x|\\leq v.\n\\end{align*}\nThus,\n\\begin{align*}\n(u+u_{x})(v+v_{x})\\geq 0\\quad {\\rm and} \\quad (u-u_{x})(v-v_{x})\\geq 0,\n\\end{align*}\nand expanding these products gives $uv+u_{x}v_{x}\\pm(uv_{x}+u_{x}v)\\geq 0$, that is,\n\\begin{eqnarray}\\label{oneESI}\n\\begin{aligned}\n|uv_{x}+u_{x}v| \\leq |uv+u_{x}v_{x}|.\n\\end{aligned}\n\\end{eqnarray}\nThanks to the estimate \\eqref{oneESI}, we have\n\\begin{align*}\n\\widetilde{I}_{2,2}=&\\frac{1}{2}\\int_{\\mathbb R}\\left[(P_{xx}-P)\\ast u^2v+P\\ast\\left(u^2_{x}v\\right)+P_{x}\\ast\\left(u^2_{x}v_{x}\\right)\\right]v\\Psi'_{i,K}\\,dx\\\\\n=&\\frac{1}{2}\\int_{\\mathbb R}\\left[P\\ast \\left((u^2_{x}-u^2)v\\right)+P_{x}\\ast(2uu_{x}v+u^2v_{x}+u^2_{x}v_{x})\\right]v\\Psi'_{i,K}\\,dx\\\\\n\\leq &\\frac{1}{2}\\int_{\\mathbb R}\\left[P_{x}\\ast\\left((uv+u_{x}v_{x})u_{x}\\right)+P_{x}\\ast\\left((uv_{x}+u_{x}v)u\\right)\\right]v\\Psi'_{i,K}\\,dx\\\\\n\\leq &\\frac{1}{2}\\int_{\\mathbb R}\\left[P_{x}\\ast\\left((uv+u_{x}v_{x})u_{x}\\right)+P\\ast\\left((uv+u_{x}v_{x})u\\right)\\right]v\\Psi'_{i,K}\\,dx.\n\\end{align*}\nIn the same way as in the proof of the first inequality, we find\n\\begin{align*}\n\\widetilde{I}_{2,2}\\leq \\frac{c_1}{20}\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Psi'_{i,K}\\,dx+\\frac{C}{K}\\|u_{0}\\|^{2}_{H^{1}(\\mathbb R)}\\|v_{0}\\|^{2}_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nSimilarly, we have\n\\begin{align*}\n\\widetilde{I}_{2,3}\\leq \\frac{c_1}{20}\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Psi'_{i,K}\\,dx+\\frac{C}{K}\\|u_{0}\\|^{2}_{H^{1}(\\mathbb R)}\\|v_{0}\\|^{2}_{H^{1}(\\mathbb 
R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nTherefore, for $0<\\alpha<\\alpha_{0}$ and $L>L_{0}>0$, with $\\alpha_{0}\\ll 1$ and $L_{0}\\gg 1$, it holds\n\\begin{align*}\n\\frac{d}{dt}\\mathcal{J}^{u,v}_{j,K}(t)\\leq \\frac{C}{K}\\|u_{0}\\|^{2}_{H^{1}(\\mathbb R)}\\|v_{0}\\|^{2}_{H^{1}(\\mathbb R)}e^{-\\frac{1}{K}(\\sigma_{0}t+\\frac{L}{8})}.\n\\end{align*}\nIntegrating between $0$ and $t$, we obtain\n\\begin{align*}\n\\mathcal{J}^{u,v}_{j,K}(t)-\\mathcal{J}^{u,v}_{j,K}(0)\\leq O(e^{-\\frac{\\sigma_{0}L}{8K}}).\n\\end{align*}\nThis completes the proof of this proposition.\n\\end{proof}\n\n\n\\subsection{A localized and a global estimate}\nWe define the function $\\Phi_{i}=\\Phi_{i}(t,x)$ by $\\Phi_{1}=1-\\Psi_{2,K}=1-\\Psi_{K}(\\cdot-y_{2}(t))$, $\\Phi_{N}=\\Psi_{N,K}=\\Psi_{K}(\\cdot-y_{N}(t))$ and for $i=2,...,N-1$,\n\\begin{eqnarray*}\n\\begin{aligned}\n\\Phi_{i}=\\Psi_{i,K}-\\Psi_{i+1,K}=\\Psi_{K}(\\cdot-y_{i}(t))-\\Psi_{K}(\\cdot-y_{i+1}(t)),\n\\end{aligned}\n\\end{eqnarray*}\nwhere $\\Psi_{K}$ and the $y_{i}$ are defined in the previous section. It is easy to check that $\\sum^{N}_{i=1}\\Phi_{i}\\equiv1$. 
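Indeed, the sum telescopes:\n\\begin{align*}\n\\sum^{N}_{i=1}\\Phi_{i}=\\left(1-\\Psi_{2,K}\\right)+\\sum^{N-1}_{i=2}\\left(\\Psi_{i,K}-\\Psi_{i+1,K}\\right)+\\Psi_{N,K}=1.\n\\end{align*}\n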
We take $L>0$ and $L\/K$ large enough so that $\\Phi_{i}$ satisfies\n\\begin{eqnarray}\\label{weightesi-1}\n\\begin{aligned}\n\\left|1-\\Phi_{i}\\right|\\leq 4e^{-\\frac{L}{8K}} \\quad {\\rm on}\\quad \\Big[\\tilde{x}_{i}-\\frac{L}{4},\\tilde{x}_{i}+\\frac{L}{4}\\Big],\n\\end{aligned}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{weightesi-2}\n\\begin{aligned}\n\\left|\\Phi_i\\right|\\leq 4e^{-\\frac{L}{8K}} \\quad {\\rm on}\\quad \\Big[\\tilde{x}_{j}-\\frac{L}{4},\\tilde{x}_{j}+\\frac{L}{4}\\Big]\\,\\; {\\rm whenever}\\;\\; j\\ne i.\n\\end{aligned}\n\\end{eqnarray}\nWe now use the following localized versions of $E_{u}$, $E_{v}$, $H$ and $F$, defined for $i\\in \\{1,...,N\\}$ by\n\\begin{eqnarray}\\label{weightesi-3}\n\\begin{aligned}\n&E_{ui}(u)=\\int_{\\mathbb R}\\left(u^2+u^2_{x}\\right)\\Phi_{i}\\,dx, \\quad E_{vi}(v)=\\int_{\\mathbb R}\\left(v^2+v^2_{x}\\right)\\Phi_{i}\\,dx, \\\\\n& H_{i}(u,v)=\\int_{\\mathbb R}\\left(uv+u_{x}v_{x}\\right)\\Phi_{i}\\,dx\\\\\n{\\rm and}\\;\\; &F_{i}(u,v)=\\int_{\\mathbb R}\\Big(u^2v^2+\\frac{1}{3}u^2v_x^2+\\frac{1}{3}v^2u_x^2+\\frac{4}{3}uvu_xv_x-\\frac{1}{3}u_x^2v_x^2\\Big)\\Phi_{i}\\,dx.\n\\end{aligned}\n\\end{eqnarray}\nPlease note that henceforth we take $K=\\sqrt{L}\/8$.\n\nThe following lemma gives a localized version of \\eqref{functionalESI}. Note that the functionals $H_{i}$ and $F_{i}$ do not depend on time in the statement below since we fix $\\tilde{x}_{1}<...<\\tilde{x}_{N}$.\n\\begin{lemma}\\label{localizedESI}\nLet $N$ real numbers $\\tilde{x}_{1}<...<\\tilde{x}_{N}$ be given with $\\tilde{x}_{i}-\\tilde{x}_{i-1}\\geq 2L\/3$. Define the $J_{i}$ as in \\eqref{initial-3-4} and assume that, for $i=1,...,N$, there exists $x_{i}\\in J_{i}$ such that $|x_{i}-\\tilde{x}_{i}|\\leq L\/12$ and $u(x_{i})v(x_{i})=\\max_{x\\in J_{i}}uv=:M_{i}$. 
Then for any $(u,v)\\in H^{1}(\\mathbb R)\\times H^{1}(\\mathbb R)$, it holds\n\\begin{eqnarray}\\label{localizedESI-1}\n\\begin{aligned}\n\\frac{4}{3}M^2_{i}-\\frac{4}{3}M_{i}H_{i}(u,v)+F_{i}(u,v)\\leq O(L^{-\\frac{1}{2}}),\\quad i\\in\\{1,...,N\\}.\n\\end{aligned}\n\\end{eqnarray}\n\\end{lemma}\n\\begin{proof}\nLet $i\\in \\{1,...,N\\}$ be fixed. We introduce the functions $g_{u}$, $g_{v}$ and $h$ defined by\n\\begin{equation*}\ng_{u}(x)=\\left\\{\n\\begin{aligned}\n&u-u_{x},\\quad &x<x_{i},\\\\\n&u+u_{x},\\quad &x>x_{i},\n\\end{aligned}\n\\right.\n\\end{equation*}\n\\begin{equation*}\ng_{v}(x)=\\left\\{\n\\begin{aligned}\n&v-v_{x},\\quad &x<x_{i},\\\\\n&v+v_{x},\\quad &x>x_{i},\n\\end{aligned}\n\\right.\n\\end{equation*}\nand\n\\begin{equation*}\nh(x)=\\left\\{\n\\begin{aligned}\n&uv-\\frac13 (uv)_x-\\frac{1}{3}u_{x}v_{x},\\quad &x<x_{i},\\\\\n&uv+\\frac13 (uv)_x-\\frac{1}{3}u_{x}v_{x},\\quad &x>x_{i}.\n\\end{aligned}\n\\right.\n\\end{equation*}\nIntegrating by parts we compute\n\\begin{eqnarray}\\label{localizedfuncESI-1}\n\\begin{aligned}\n&\\int_{\\mathbb R}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx\\\\\n=&\\int_{-\\infty}^{x_i}{\\Big(uv-\\frac{1}{3}(uv)_x-\\frac{1}{3}u_{x}v_{x}\\Big)\\Big(uv-(uv)_x+u_{x}v_{x}\\Big)}\\Phi_{i}\\,dx\\\\\n&\\qquad +\\int_{x_i}^{\\infty}{\\Big(uv+\\frac 13 (uv)_x-\\frac{1}{3}u_{x}v_{x}\\Big)\\Big(uv+(uv)_x+u_xv_x\\Big)}\\Phi_{i}\\,dx\\\\\n=&\\int_{-\\infty}^{\\infty}{\\Big(u^2v^2+\\frac{1}{3}u^2v_x^2+\\frac{1}{3}v^2u_x^2+\\frac{4}{3}uvu_xv_x-\\frac{1}{3}u_x^2v_x^2\\Big)}\\Phi_{i}\\,dx-\\frac{4}{3}\\,M_{i}^2\\Phi_{i}(x_{i})\\\\\n&\\qquad+\\frac{2}{3}\\int^{x_{i}}_{-\\infty}u^2v^2\\Phi'_{i}\\,dx-\\frac{2}{3}\\int^{+\\infty}_{x_{i}}u^2v^2\\Phi'_{i}\\,dx.\n\\end{aligned}\n\\end{eqnarray}\nRecall that we take $K=\\sqrt{L}\/8$ and thus $|\\Phi'|\\leq C\/K=O(L^{-1\/2})$. 
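Here the bound on $\\Phi'_{i}$ follows from the scaling $\\Psi'_{K}=K^{-1}\\Psi'(\\cdot\/K)$: writing $C=2\\|\\Psi'\\|_{L^{\\infty}}$ (finite by the construction of $\\Psi$), we have\n\\begin{align*}\n\\left|\\Phi'_{i}(x)\\right|\\leq \\left|\\Psi'_{K}(x-y_{i}(t))\\right|+\\left|\\Psi'_{K}(x-y_{i+1}(t))\\right|\\leq \\frac{2\\|\\Psi'\\|_{L^{\\infty}}}{K}=\\frac{C}{K}=O(L^{-\\frac{1}{2}}).\n\\end{align*}\n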
Moreover, since $|x_{i}-\\tilde{x}_{i}|\\leq L\/12$, it follows from \\eqref{weightesi-1} that $\\Phi_{i}(x_{i})=1+O(e^{-\\sqrt{L}})$ and thus\n\\begin{eqnarray}\\label{localizedfuncESI-2}\n\\begin{aligned}\n\\int_{\\mathbb R}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx=&F_{i}(u,v)-\\frac{4}{3}M^2_{i}+\\|u_{0}\\|^2_{H^{1}(\\mathbb R)}\\|v_{0}\\|^2_{H^{1}(\\mathbb R)}O(L^{-\\frac{1}{2}})+O(e^{-\\sqrt{L}}).\n\\end{aligned}\n\\end{eqnarray}\nOn the other hand, we first claim that $x_{1}$ is the maximum point of the function $u(x,t)v(x,t)$ on $(-\\infty,y_{2}(t)+aL]$, $x_{N}$ is the maximum point of the function $u(x,t)v(x,t)$ on $[y_{N}(t)-aL,+\\infty)$ and $x_{i}$ is the maximum point of the function $u(x,t)v(x,t)$ on $[y_{i}(t)-aL,y_{i+1}(t)+aL]$, where $a$ is a constant to be chosen later and $i\\in \\{2,...,N-1\\}$.\n\nWe need to show that $x_{i}$ is the maximum point of the function $u(x,t)v(x,t)$ on $[y_{i}(t)-aL,y_{i}(t)]\\,\\cup\\, [y_{i+1}(t),y_{i+1}(t)+aL]$, where $i\\in \\{2,...,N-1\\}$.\nIf $x\\in [y_{i}(t)-aL,y_{i}(t)]\\,\\cup\\, [y_{i+1}(t),y_{i+1}(t)+aL]$, then\n\\begin{eqnarray*}\n\\begin{aligned}\nu(x,t)v(x,t)\\leq c_{i+1}e^{-\\left(\\frac{3}{8}-a\\right)L}+c_{i}e^{-\\frac{3}{8}L}+O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{4}}).\n\\end{aligned}\n\\end{eqnarray*}\nChoosing $a$ with\n\\begin{eqnarray*}\n\\begin{aligned}\nc_{i+1}e^{-\\left(\\frac{3}{8}-a\\right)L}+c_{i}e^{-\\frac{3}{8}L}+O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{4}})\\leq c_{i}-O(\\sqrt{\\alpha})-O(e^{-\\frac{L}{4}}),\n\\end{aligned}\n\\end{eqnarray*}\nand using the same estimates as above, we obtain the analogous conclusion for $x_{1}$ and $x_{N}$. 
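For instance, one admissible (non-optimal) choice is $a=\\frac{1}{8}$, for which $\\frac{3}{8}-a=\\frac{1}{4}$ and the required inequality becomes\n\\begin{align*}\nc_{i+1}e^{-\\frac{L}{4}}+c_{i}e^{-\\frac{3L}{8}}+O(\\sqrt{\\alpha})+O(e^{-\\frac{L}{4}})\\leq c_{i}-O(\\sqrt{\\alpha})-O(e^{-\\frac{L}{4}}),\n\\end{align*}\nwhich holds once $\\alpha$ is small enough and $L$ is large enough; this choice is also compatible with the requirement $aL>10\\sqrt{L}$ below, for $L$ large.\n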
\n\nNext, we define the intervals $\\hat{J}_{1}=(-\\infty,y_{2}(t)+aL]$, $\\hat{J}_{N}=[y_{N}(t)-aL,+\\infty)$ and for $i=2,...,N-1$, $\\hat{J}_{i}=[y_{i}(t)-aL,y_{i+1}(t)+aL]$, where $a$ is chosen above.\n\\begin{eqnarray*}\n\\begin{aligned}\n\\int_{\\mathbb R}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx=&\\int_{\\hat{J}_{i}}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx+\\int_{\\hat{J}^c_{i}}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx\\\\\n\\leq & \\frac{4}{3}M_{i}\\int_{\\mathbb R}g_{u}(x)g_{v}(x)\\Phi_{i}dx+\\int_{\\hat{J}^c_{i}}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx,\n\\end{aligned}\n\\end{eqnarray*}\nwhere we use the fact that $h(x)\\leq 4u(x)v(x)\/3\\leq 4M_{i}\/3$ on $\\hat{J}_{i}$. One chooses $L$ large enough so that $aL>10\\sqrt{L}$, which leads to\n\\begin{eqnarray*}\n\\begin{aligned}\n\\int_{\\hat{J}^c_{i}}h(x)g_{u}(x)g_{v}(x)\\Phi_{i}dx\\leq C\\|u_{0}\\|^2_{H^{1}(\\mathbb R)}\\|v_{0}\\|^2_{H^{1}(\\mathbb R)}e^{-\\tilde{a}L}\\leq O(L^{-1}),\n\\end{aligned}\n\\end{eqnarray*}\nwhere $C$ and $\\tilde{a}$ are constants. Therefore,\n\\begin{eqnarray*}\n\\begin{aligned}\n\\frac{4}{3}M^2_{i}-\\frac{4}{3}M_{i}H_{i}(u,v)+F_{i}(u,v)\\leq O(L^{-\\frac{1}{2}}),\\quad i\\in\\{1,...,N\\}.\n\\end{aligned}\n\\end{eqnarray*}\nThis completes the proof.\n\\end{proof}\nNow let us state a global identity related to \\eqref{energyESI}.\n\\begin{lemma}\\label{globalIDE}\nFor any $Z\\in \\mathbb R^N$ satisfying $|z_{i}-z_{i-1}|\\geq L\/2$ and any $(u,v)\\in H^{1}(\\mathbb R)\\times H^{1}(\\mathbb R)$, there holds\n\\begin{eqnarray*}\n\\begin{aligned}\n&E_{u}(u)-\\sum^{N}_{i=1}E_{u}(\\varphi_{c_{i}})=\\|u-R_{Z}\\|^2_{H^1}+4\\sum^N_{i=1}a_{i}(u(z_{i})-a_{i})+O\\left(e^{-\\frac{L}{4}}\\right),\\\\\n&E_{v}(v)-\\sum^{N}_{i=1}E_{v}(\\psi_{c_{i}})=\\|v-S_{Z}\\|^2_{H^1}+4\\sum^N_{i=1}b_{i}(v(z_{i})-b_{i})+O\\left(e^{-\\frac{L}{4}}\\right).\n\\end{aligned}\n\\end{eqnarray*}\n\\end{lemma}\n\\begin{proof}\nUsing the relation between $\\varphi$ and its derivative and integrating by parts, we 
get\n\\begin{eqnarray*}\n\\begin{aligned}\n&E_{u}(u-R_{Z})=E_{u}(u)+E_{u}(R_{Z})-2\\sum^{N}_{i=1}\\int_{\\mathbb R}u\\varphi_{c_{i}}(\\cdot-z_{i})+u_{x}\\partial_{x}\\varphi_{c_{i}}(\\cdot-z_{i})\\\\\n&=E_{u}(u)+E_{u}(R_{Z})-2\\sum^{N}_{i=1}\\int_{\\mathbb R}u\\varphi_{c_{i}}(\\cdot-z_{i})+2\\sum^{N}_{i=1}\\int^{\\infty}_{z_{i}}u_{x}\\varphi_{c_{i}}(\\cdot-z_{i})-2\\sum^{N}_{i=1}\\int^{z_{i}}_{-\\infty}u_{x}\\varphi_{c_{i}}(\\cdot-z_{i})\\\\\n&=E_{u}(u)+E_{u}(R_{Z})-4\\sum^{N}_{i=1}a_{i}u(z_{i}).\n\\end{aligned}\n\\end{eqnarray*}\nOn the other hand, since $|z_{i}-z_{i-1}|\\geq L\/2$, it is easy to check that\n\\begin{eqnarray*}\n\\begin{aligned}\nE(R_{Z})=\\sum^{N}_{i=1}E(\\varphi_{c_{i}})+O\\left(e^{-\\frac{L}{4}}\\right).\n\\end{aligned}\n\\end{eqnarray*}\nCombining these two identities, the desired result follows; the second equality of this lemma is obtained similarly.\n\\end{proof}\n\n\n\n\n\n\\subsection{End of the proof of Theorem 1.2}\n\n\nBefore we give the final proof, we need to prove the following lemmas.\n\\begin{lemma}\\label{FinalESI}\nAssume $\\|u_{0}-R_{Z^{0}}\\|_{H^{1}}+\\|v_{0}-S_{Z^{0}}\\|_{H^{1}}< \\epsilon $. Then for any $i\\in\\{1,...,N\\}$, we have\n\\begin{eqnarray*}\n\\begin{aligned}\n&\\left|H_{i}(u_{0},v_{0})-H_{i}(\\varphi_{c_{i}},\\psi_{c_{i}})\\right|\\leq O(\\epsilon).\n\\end{aligned}\n\\end{eqnarray*}\n\\end{lemma}\n\\begin{lemma}\nAssume $u\\geq 0$ and $v\\geq 0$. Let\n\\begin{align*}\nM_{i}=u(x_{i})v(x_{i})=\\max_{x\\in J_{i}}u(x)v(x),\n\\end{align*}\nand set $\\alpha_{0}=A({\\epsilon_0}^{1\/4}+L_{0}^{-1\/8})$. 
Then we have\n\\begin{align*}\n|M_{i}-c_{i}|\\leq O\\big(L^{-\\frac{1}{4}}\\big)+O\\big(\\epsilon^{\\frac{1}{2}}\\big),\\quad i=1,...,N.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nDefine the second-order polynomials $\\bar{P}^{i}$ and $\\hat{P}^{i}$ by:\n\\begin{align*}\n\\bar{P}^{i}(y)=y^2-H_{i}(u,v)y+\\frac{3}{4}F_{i}(u,v) \\;\\; \\mathrm{and} \\;\\; \\hat{P}^{i}(y)=y^2-H_{i}(u_0,v_0)y+\\frac{3}{4}F_{i}(u_{0},v_{0}).\n\\end{align*}\nFor the peakon solution, since $H(\\varphi_{c_{i}},\\psi_{c_{i}})=2c_{i}$ and $F(\\varphi_{c_{i}},\\psi_{c_{i}})=4c^2_{i}\/3$, the corresponding polynomial becomes\n\\begin{eqnarray}\\label{finalESI-1}\n\\begin{aligned}\n&\\hat{P}^{i}_{0}(y)=y^2-2c_{i}y+c^2_{i}=(y-c_{i})^2.\n\\end{aligned}\n\\end{eqnarray}\nSince\n\\begin{align*}\n\\bar{P}^{i}(y)=&\\hat{P}^{i}_{0}(y)+\\left(H(\\varphi_{c_{i}},\\psi_{c_{i}})-H_{i}(u_0,v_0)\\right)y+\\left(H_{i}(u_0,v_0)-H_{i}(u,v)\\right)y\\\\\n&\\qquad +\\frac{3}{4}\\left[\\left(F_{i}(u_{0},v_{0})-F(\\varphi_{c_{i}},\\psi_{c_{i}})\\right)+\\left(F_{i}(u,v)-F_{i}(u_{0},v_{0})\\right)\\right],\n\\end{align*}\nit follows that\n\\begin{eqnarray*}\n\\begin{aligned}\n\\sum^{N}_{i=1}\\bar{P}^{i}&(M_{i})=\\sum^{N}_{i=1}\\hat{P}^{i}_{0}(M_{i})+\\sum^{N}_{i=1}M_{i}\\left(H(\\varphi_{c_{i}},\\psi_{c_{i}})-H_{i}(u_0,v_0)\\right)\\\\\n&+\\frac{3}{4}\\Big(F(u_{0},v_{0})-\\sum^{N}_{i=1}F(\\varphi_{c_{i}},\\psi_{c_{i}})\\Big)+\\sum^N_{i=1}M_{i}\\Big(H_{i}(u_0,v_0)-H_{i}(u,v)\\Big)\\leq O(L^{-\\frac{1}{2}}).\n\\end{aligned}\n\\end{eqnarray*}\nApplying the Abel transformation to the last term together with the monotonicity property, and combining Lemma \\ref{FinalESI}, we thus get\n\\begin{align*}\n\\sum^{N}_{i=1}(M_{i}-c_{i})^2\\leq &\\sum^{N}_{i=1}M_{i}\\left|H(\\varphi_{c_{i}},\\psi_{c_{i}})-H_{i}(u_0,v_0)\\right|+\\frac{3}{4}\\Big|F(u_{0},v_{0})-F(R_{Z},S_{Z})\\Big|+O\\big(L^{-\\frac{1}{2}}\\big)+O(\\epsilon)\\\\\n\\leq & O\\big(L^{-\\frac{1}{2}}\\big)+O(\\epsilon),\n\\end{align*}\nwhich completes the proof of the 
lemma.\n\\end{proof}\n\n\n\\begin{lemma}\\label{FLGJ}\nAssume $|M_{i}-c_{i}|\\leq O\\left(L^{-1\/4}\\right)+O\\left(\\epsilon^{1\/2}\\right),\\quad i=1,...,N$. Then for any $i\\in\\{1,...,N\\}$, we have\n\\begin{align*}\n|u(x_{i})-a_{i}|\\leq O\\left(L^{-1\/4}\\right)+O\\Big(\\epsilon^{1\/2}\\Big)\\quad {\\rm and} \\quad |v(x_{i})-b_{i}|\\leq O\\left(L^{-1\/4}\\right)+O\\left(\\epsilon^{1\/2}\\right).\n\\end{align*}\n\\end{lemma}\nNote the monotonicity property that $\\mathcal{J}^{u}_{j,K}(t)$ is close to $\\left\\|u(t)\\right\\|_{H^{1}(x>y_{j}(t))}$ and thus measures the energy at the right of the $(j-1)$th bump of $u$, and $\\mathcal{J}^{v}_{j,K}(t)$ is close to $\\left\\|v(t)\\right\\|_{H^{1}(x>y_{j}(t))}$ and thus measures the energy at the right of the $(j-1)$th bump of $v$. We thus use the induction argument to prove this lemma.\n\\begin{proof}\n{Step 1.}\nDefine\n\\begin{equation}\ng_{uN}(x)=\n\\left\\{\n\\begin{aligned}\n&u(x)-u_{x}(x), \\quad x<x_{N},\\\\\n&u(x)+u_{x}(x), \\quad x>x_{N}.\n \\end{aligned}\n\\right.\n\\end{equation}\nThen we have\n\\begin{eqnarray*}\n\\begin{aligned}\n0\\leq \\int_{\\mathbb R}g^2_{uN}(x)\\Phi_{N}(x)\\,dx=&\\int_{\\mathbb R}\\left(u^2+u^2_{x}\\right)\\Phi_{N}(x)\\,dx-2u^{2}(x_{N})\\Phi_{N}(x_N)\\\\\n&+\\int^{x_{N}}_{-\\infty}u^2\\Phi'_{N}\\,dx-\\int^{+\\infty}_{x_{N}}u^2\\Phi'_{N}\\,dx.\n\\end{aligned}\n\\end{eqnarray*}\nUsing $|x_{i}-\\tilde{x}_{i}|\\leq L\/12$, it follows from \\eqref{weightesi-1} that $\\Phi_{i}(x_{i})=1+O(e^{-\\sqrt{L}})$ and thus\n\\begin{align*}\nE_{uN}(u(t))-2u^2(x_{N})+O\\left(L^{-\\frac{1}{2}}\\right)\\geq 0.\n\\end{align*}\nNote that\n\\begin{eqnarray*}\nE_{uN}(u(t))=\\mathcal{J}^u_{N,K}(t)\\quad {\\rm and} \\quad E_{uN}(u(t))-E_{uN}(u(0))\\leq O\\big(e^{-\\sigma_{0}\\sqrt{L}}\\big).\n\\end{eqnarray*}\nWe thus get\n\\begin{eqnarray*}\n\\begin{aligned}\nu^2(x_{N})\\leq O(L^{-\\frac{1}{2}})+O(\\epsilon)+a^2_{N}.\n\\end{aligned}\n\\end{eqnarray*}\nSimilarly,\n\\begin{eqnarray*}\n\\begin{aligned}\nv^2(x_{N})\\leq 
O(L^{-\\frac{1}{2}})+O(\\epsilon)+b^2_{N}.\n\\end{aligned}\n\\end{eqnarray*}\nCombining the estimate $|M_{N}-c_{N}|\\leq O\\left(L^{-1\/4}\\right)+O\\left(\\epsilon^{1\/2}\\right)$, we arrive at\n\\begin{eqnarray*}\n\\begin{aligned}\n|u(x_{N})-a_{N}|\\leq O\\big(L^{-\\frac{1}{4}}\\big)+O\\big(\\epsilon^{\\frac{1}{2}}\\big) \\quad {\\rm and} \\quad |v(x_{N})-b_{N}|\\leq O\\big(L^{-\\frac{1}{4}}\\big)+O\\big(\\epsilon^{\\frac{1}{2}}\\big).\n\\end{aligned}\n\\end{eqnarray*}\n{Step 2.} Assume that the conclusion holds for any $k\\leq i<N$. Arguing as in Step 1, there exists a constant $C>0$ which does not depend on $A$ such that\n\\begin{eqnarray*}\n\\begin{aligned}\n|u(x_{N})-a_{N}|\\leq C\\Big(L^{-\\frac{1}{4}}+\\epsilon^{\\frac{1}{2}}\\Big) \\quad {\\rm and} \\quad |v(x_{N})-b_{N}|\\leq C\\Big(L^{-\\frac{1}{4}}+\\epsilon^{\\frac{1}{2}}\\Big),\n\\end{aligned}\n\\end{eqnarray*}\nwhich has been verified by Lemma \\ref{FLGJ}.\nFrom \\eqref{initial-3-3} and \\eqref{initial-3-5}, we know that for $i=2,...,N$,\n\\begin{eqnarray*}\n\\begin{aligned}\nx_{i}-x_{i-1}\\geq \\frac{2}{3}L.\n\\end{aligned}\n\\end{eqnarray*}\nThis completes the proof of the theorem.\n\\end{proof}\n\n\n\\vglue .5cm\n\n\\vskip 0.2cm\n\\noindent {\\bf Acknowledgments.} The work of He is partially supported by the NSF-China grant-11971251. The work of Liu is partially supported by the NSF-China grant-11722111 and grant-11871395. The work of Qu is partially supported by the NSF-China grant-11631007 and grant-11971251.\n\n\n\n\\vskip 1cm\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Introduction}\n\nProtostellar jets appear intimately linked to the process of mass accretion onto the growing star; their strikingly similar properties across protostellar age, mass, and accretion rate all point to universal ejection and collimation mechanisms\n\\citep{2002EAS.....3..147C,2007A&A...468L..29C, 2013A&A...551A...5E}. 
Yet, jets from the youngest protostars --- so-called Class 0 --- are much brighter in molecules \\citep[\\BT{e.g.,}][]{2000A&A...359..967T} than jets from more evolved protostars and pre-main sequence stars, which are mainly atomic; molecules have been traced as close as 20-100 au from the source \\citep[\\BT{e.g.,}][]{2017NatAs...1E.152L,2014ApJ...794..169H}. \nThe origin of this selective molecular richness remains an important issue for models of the jet origin.\nThree broad scenarios have been considered, with no fully validated answer so far. \n\nIn models of ejection from the stellar magnetosphere or the inner disk edge \\citep[\\BT{e.g.,}][]{1994ApJ...429..781S,2002ApJ...578..420R,2005ApJ...632L.135M,2013A&A...550A..99Z}, the jet\nwould be expected to be dust-free (the grain sublimation radius around a typical solar-mass protostar is $R_{\\rm sub}\\sim 0.3$~au, see \\BT{for example} \\citet{2016A&A...585A..74Y}).\nThe lack of dust screening then makes the wind extremely sensitive to photodissociation by FUV radiation from the accretion shock.\nChemical models of dust-free winds by \\citet{1991ApJ...373..254G} found that CO, \\BT{SiO,} and H$_2$O could no longer form at the wind base in the presence of a typical expected level of FUV excess\\footnote{the flat UV flux in their UV1-UV2 models is $\\sim$ 30-500 times that in BP Tau \\citep{2003ApJ...591L.159B}, for a wind-mass flux corresponding to a 1000 times larger accretion rate ($\\sim 3 \\times 10^{-5} M_\\odot$ yr$^{-1}$)}. \\citet{2005RMxAA..41..137R} showed that H$_2$ could form further out behind internal shocks. However, the key ions involved are also easily destroyed by FUV photons. \nHence, molecule formation in a dust-free jet within 20-100 au of protostars remains an open issue. \n\n\nA second proposed explanation \nis that the molecular component of jets may be tracing dusty MHD disk winds launched beyond $R_{\\rm sub}$, where dust can shield molecules against the FUV field and allow faster H$_2$ reformation. 
Detailed models are successful at reproducing the higher molecular richness of Class 0 jets \\citep{2012A&A...538A...2P},\nthe broad water line components revealed by \\textit{Herschel}\/HIFI \\citep{2016A&A...585A..74Y}, and the rotation signatures \\BT{recently resolved by ALMA in the HH212 jet and in the slow wider angle wind surrounding it} \\citep{2017NatAs...1E.152L, 2017A&A...607L...6T}. However, the same disk wind models predict that the fastest, SiO-rich streamlines in HH212 (flowing at $\\sim 100$ km~s$^{-1}$) would be launched\nfrom 0.05-0.2 au, within the dust sublimation radius \\citep{2017A&A...607L...6T}. Hence, this scenario still partly faces the unsolved question of molecule survival in a dust-free wind.\n\nA third scenario is that molecules could be somehow \"entrained\" from the surroundings into the jet, assumed initially atomic. In a time-dependent jet, travelling internal shocks will squeeze out high-pressure jet material, which then sweeps up the surrounding gas into a curved bowshock. If the surrounding material is molecular, a partly molecular bowshock will result, with a more tenuous \"wake\" of shocked molecular gas trailing behind it \\citep{1993A&A...278..267R,2005RMxAA..41..137R}. As the next \"internal working surface\" (IWS) propagates into this wake, it may again produce a molecular jet bowshock. However, after the passage of many such IWS, the wake will be so shock-processed and tenuous that not enough molecules may be left to produce molecular bowshocks close to the jet axis.\n\nIn the present paper, we revisit this last scenario in a new light by investigating whether a slower molecular \"disk wind\" surrounding the jet could help refill the wake and re-inject\nfresh (unprocessed) molecules into the jet path. This new outlook is prompted by the discovery of a potential molecular disk wind wrapped around the dense axial jet in HH212 \\citep{2017A&A...607L...6T}. 
We explore this possibility by studying analytically and numerically the propagation of bow shocks driven by a time-variable, inner jet into a surrounding slower disk wind. This scenario may be seen as an extension of the recent modeling work of \\citet{2016MNRAS.455.2042W}, who studied the turbulent mixing layer between a jet and disk wind, with the novel addition of internal working surfaces in the jet to produce a stronger coupling between the two outflow components. \n\nBesides our main goal of exploring the impact of a disk wind (DW) on the chemical richness of Class 0 jets, our study has two other important motivations. \\BT{First, we aim to identify specific signatures in the morphology and kinematics of jet bowshocks that could reveal the presence of a surrounding DW. Second, we aim to\nidentify in which regions of space the pristine DW material would remain unperturbed, for comparison to theoretical DW models.} \n\n\nIn the present exploratory study, \\BT{we have limited} ourselves to purely hydrodynamical and cylindrical flows, which allow us to develop an analytical model that greatly helps to capture the main effects of the two-flow interaction, and to understand the numerical results. Also, this is expected to be an optimal case for interaction between the two flows, as magnetic tension would tend to oppose mixing. \n\nThe paper is organized as follows.\nIn \\BT{Section 2}, we build an analytical model (in the thin shell approximation) for the propagation of a bow shock driven by an IWS into\na surrounding disk wind. The model is extended to the case of two or\nmore successive IWS in \\BT{Section 3}. 
In \\BT{Section 4} we compare the analytic model with axisymmetric\nsimulations of a variable jet+surrounding disk wind configuration, and contrast the results with a\n\"reference simulation\" in which the same variable jet propagates into a stationary environment.\nFinally, the results are discussed in \\BT{Section 5}.\n\n\\section{Analytical approach}\n\\subsection{Basic equations}\n\nWe \\BT{considered} the ``disk wind+jet'' configuration shown in \\BT{Fig.}~\\ref{fig:shockframe}, where a\ncylindrical jet of radius $r_j$ and time-variable velocity v$_j$ directed along the $z$-axis\nis immersed in a plane-parallel ``disk wind''\nwith uniform density $\\rho_w$ and time-independent velocity v$_w$ parallel to v$_j$.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=8cm]{fig2-eps-converted-to.pdf}\n\\caption{Schematic diagram of the flow around an internal working surface (IWS)\nin the frame of reference co-moving with the IWS\nat velocity v$_z$ = v$_j$ (a similar\nconfiguration would apply for the leading working surface of a jet).\nThe thick, horizontal line at the bottom of the graph is the jet\n(with a gap showing the position of the IWS).\nThe working surface ejects jet material sideways at an initial velocity\nv$_0$ into the slower disk wind, which in this frame of reference moves\ntowards the outflow source at velocity v$_j-$v$_w$. The distance $x$\nis measured towards the outflow source.\nThe shape of the thin shell bow shock \nis given by $r_b(x)$ (see Eq.~\\ref{rx}); the shell terminates \nat the cylindrical radius $r_{b,f}(t)$ with $t$ the time elapsed since formation of the IWS\n(see Eq.~\\ref{rf}).\n}\n\\label{fig:shockframe}\n\\end{figure}\n\n\\begin{center}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=12cm]{fig1-new2-eps-converted-to.pdf}\n\\caption{Schematic diagram showing the flow around a working surface of a jet\n(in this case the\nleading working surface, but the diagram also applies for an internal\nworking surface). 
\nThe jet is the horizontal, red rectangle at the bottom of the graph, with the source located at $z$=0. \nThe working surface in yellow is located at a distance $z_j$ from the source and travels\nat a velocity v$_j$. It ejects material away from the axis at an initial velocity v$_0$. The jet is\nsurrounded by a ``disk wind'', which travels along the outflow\naxis at a velocity v$_w$. The shape of the thin-shell bow shock (thick cyan line) is given by\n$z_b$ as a function of $r$ and ends at the edge of the bow wing (cyan point at (z$_{bf}$,r$_{bf}$)). \nThe bow shock leaves behind a ``cavity'' (black region), which is partially refilled by the disk wind (brown region). \nThe boundaries of the initially swept-up cavity (black dashed line) \nand of the refilled region (cyan dash-dotted line) are given by $z_f$ and $z_c$ (respectively) as\na function of cylindrical radius $r$ (see \\BT{Eqs.}~\\ref{zfrf2} and \\ref{zc}).\n}\n\\label{fig:restframe}\n\\end{figure*}\n\\end{center}\n\nThe jet velocity variation is such that\nan internal working surface is produced within the jet beam. \nIn the following derivations, we assume\nthat the working surface is formed at $t=0$ at the position of the \nsource (\\BT{i.e.,} $z=0$),\nand that it then travels at a constant\nvelocity v$_j$ (for $t>0$). Such a working surface could be produced,\n\\BT{for example}, by an outflow velocity with a constant value v$_1<$v$_j$ for $t<0$,\njumping to a constant value v$_2>$v$_j$ for $t\\geq 0$.\nNote that if the shock is produced at distance $z_s>0$ at a time $t_s >0$, \nthe equations below remain valid with the transformation $z\\rightarrow z-z_s$ and $t\\rightarrow t-t_s$. 
\n\nIn a frame of reference moving with the internal working surface \\SC{(see \\BT{Fig.}~\\ref{fig:shockframe})}\nthe over-pressured shocked jet material which is ejected sideways from the\nworking surface interacts with the slower moving, surrounding disk wind.\nIn the strong radiative cooling limit, this sideways ejection leads to\nthe formation of a thin-shell bow shock, which sweeps up material of the\nsurrounding disk wind, \nflowing towards the source at a relative velocity (v$_j$-v$_w$).\n\nAssuming full mixing between jet and disk-wind material, we can write the mass, $r$-\nand $x$-momentum conservation equations \nat any point of radius $r_b$ along this thin-shell ($r$, $x$ and $r_b$ being defined in \\BT{Fig.}~\\ref{fig:shockframe})\nas:\n\\begin{equation}\n{\\dot m}={\\dot m}_0+\\int_{r_j}^{r_b} 2\\pi r'\\rho_w(\\textnormal{v}_j-\\textnormal{v}_w)dr'\\,,\n\\label{mass}\n\\end{equation}\n\\begin{equation}\n{\\dot \\Pi}_r={\\dot m}_0 \\textnormal{v}_0={\\dot m}\\textnormal{v}_r\\,,\n\\label{rmom}\n\\end{equation}\n\\begin{equation}\n{\\dot \\Pi}_x=\\int_{r_j}^{r_b} 2\\pi r'\\rho_w(\\textnormal{v}_j-\\textnormal{v}_w)^2dr'={\\dot m} \\textnormal{v}_x\\,,\n\\label{xmom}\n\\end{equation}\nwhere\n${\\dot m}$ is the mass rate, ${\\dot \\Pi}_r$ the $r$-momentum rate and\n${\\dot \\Pi}_x$ the $x$-momentum rate of the \\SC{mixed jet+disk-wind} material flowing along the thin-shell\nbow shock, and ${\\dot m}_0$ and $v_0$ are the mass rate and velocity (respectively)\n\\SC{at which jet material is initially} ejected sideways by the working surface. These equations have\nstraightforward interpretations. 
As an illustration, we point out that \\BT{Eq.}~(\\ref{rmom})\nstates that the radial momentum of the material flowing along the thin shell remains constant over time (which is due to the fact that the disk wind material adds no $r$-momentum), so that\nits radial velocity v$_r$ decreases as \\SC{${\\dot m}$ increases,} the $r$-momentum being shared with \\SC{a larger amount of}\nzero $r$-momentum material from the disk wind.\n\n\nThe mass rate ${\\dot m}_0$ and velocity v$_0$ of the sideways\nejected material are determined \\SC{only} by the properties of the working surface. For\na highly radiative working surface, we would expect the \\SC{post-shock jet}\nmaterial to cool to $\\sim 10^4$~K before exiting sonically into the disk\nwind. Therefore, we would expect v$_0\\sim 10$~km~s$^{-1}$.\nThe mass rate ${\\dot m}_0$\nwill have values of the order of the (time-dependent) mass loss rate ${\\dot M}_j$\n\\SC{in} the jet beam.\n\n\\SC{We note that although our basic equations are similar to those of \\citet{2001ApJ...557..443O}, our approaches and derived equations will differ.\nThey considered only the case of the leading jet bowshock propagating in a medium at rest (v$_w = 0$), so that\nthe injected mass and momentum rates ${\\dot m}_0$ and ${\\dot m}_0$v$_0$ were\nexpressed as a function of the velocity of the shock and the jet radius. 
Here, \nwe keep ${\\dot m}_0$ and v$_0$ as explicit parameters, so that we can consider a moving surrounding disk wind of arbitrary velocity v$_w$,\nand an arbitrarily small $r_j$}.\n\n\\subsection{Shape of the bow shock shell}\n\nFor a disk-wind with position-independent density $\\rho_w$ and velocity v$_w$, the\nintegrals in \\BT{Eqs.}~(\\ref{mass}-\\ref{xmom}) can be trivially performed, and from\nthe ratio of \\BT{Eqs.}~(\\ref{rmom}-\\ref{xmom}) one obtains the differential\nequation of $r_b(x)$ :\n\\begin{equation}\n\\frac{dr_b}{dx}\n=\\frac{{\\dot m}_0 \\textnormal{v}_0}{\\pi\\rho_w(r_b^2-r_j^2)(\\textnormal{v}_j-\\textnormal{v}_w)^2}\\,,\n\\label{drdx}\n\\end{equation}\nwhich can be integrated to obtain the shape of the thin shell bow shock as a function of $x$ \\SC{in the IWS reference frame}:\n\\begin{equation}\nr_b(r_b^2-3r_j^2)+2r_j^3=L_0^2x\\,,\n\\label{rx}\n\\end{equation}\nwhere we defined the characteristic scale\\footnote{\n\\SC{Noting that} $L_0$ is the radius where the swept-up \\textit{x}-momentum is equal to 3\ntimes the injected $r$-momentum ${\\dot m}_0 \\textnormal{v}_0$, \\BT{Eq.}~(\\ref{rx}) is equivalent to \\BT{Eq.}~(22) in \n\\citet{2001ApJ...557..443O}.}\n\\begin{equation}\nL_0\\equiv \\sqrt{\\frac{3{\\dot m}_0 \\textnormal{v}_0}{\\pi \\rho_w(\\textnormal{v}_j-\\textnormal{v}_w)^2}}\\,.\n\\label{l0}\n\\end{equation}\n\nClearly, as the thin shell bow shock began to grow at $t=0$, the solution\ngiven by \\BT{Eq.}~(\\ref{rx}) \\SC{must} terminate at a finite maximum radius $r_{b,f}$ (see \\BT{Fig.}~\\ref{fig:shockframe}). 
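As a quick numerical cross-check (a sketch under assumed, illustrative CGS parameter values of our own choosing, not taken from the text), one can verify that the implicit shape of Eq. (rx) indeed satisfies the thin-shell ODE of Eq. (drdx):

```python
import math

# Illustrative (assumed) parameters in CGS units; not values from the text.
mdot0 = 6.3e17   # sideways mass-ejection rate of the IWS [g/s]
v0 = 1.0e6       # sideways ejection velocity [cm/s]
rho_w = 2.3e-20  # disk-wind mass density [g/cm^3]
dv = 1.0e7       # relative velocity v_j - v_w [cm/s]
r_j = 1.0e14     # jet radius [cm]

# Characteristic scale L0 of Eq. (l0), squared.
L0sq = 3.0 * mdot0 * v0 / (math.pi * rho_w * dv**2)

def r_b(x):
    """Invert the implicit shape of Eq. (rx), r(r^2 - 3 r_j^2) + 2 r_j^3 = L0^2 x,
    by bisection (the left-hand side is increasing for r > r_j)."""
    f = lambda r: r * (r**2 - 3.0 * r_j**2) + 2.0 * r_j**3 - L0sq * x
    lo, hi = r_j, 1.0e18
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

# Compare a centered finite-difference slope of the implicit solution with
# the right-hand side of the ODE, Eq. (drdx).
x, h = 5.0e14, 1.0e9
drdx_num = (r_b(x + h) - r_b(x - h)) / (2.0 * h)
r = r_b(x)
drdx_ode = mdot0 * v0 / (math.pi * rho_w * (r**2 - r_j**2) * dv**2)
print(drdx_num, drdx_ode)  # the two slopes agree
```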
\n\\SC{The growth of this outer radius with time}\ncan be calculated combining \\BT{Eqs.}~(\\ref{mass}-\\ref{rmom}) to\nobtain:\n\\begin{equation}\n\\frac{dr_{b,f}}{dt}=\\textnormal{v}_r=\\frac{{\\dot m}_0 \\textnormal{v}_0}{{\\dot m}_0+\\pi\\rho_w(r_{b,f}^2-r_j^2)(\\textnormal{v}_j-\\textnormal{v}_w)}\\,,\n\\label{drfdt}\n\\end{equation}\nwhich can be integrated with the boundary condition $r_{b,f}(t=0)=r_j$ to\nobtain \\SC{$r_{b,f}(t)$ at the current time $t$}:\n\\begin{equation}\n\\frac{1}{\\gamma L_0^2}\\left[r_{b,f}^3-r_j^3+3r_j^2(r_j-r_{b,f})\\right]+r_{b,f}-r_j=\\textnormal{v}_0t\\,,\n\\label{rf}\n\\end{equation}\nwith\n\\begin{equation}\n\\gamma\\equiv \\frac{\\textnormal{v}_j-\\textnormal{v}_w}{\\textnormal{v}_0}\\,.\n\\label{beta}\n\\end{equation}\n\nNow, in order to obtain the shape of the bowshock shell in the \\SC{source frame} $(z,r)$ (see \\BT{Fig.}~\\ref{fig:restframe}, cyan curve),\nwhen the IWS is located at distance $z_j$ from the source, we simply \ninsert the relation $x = (z_j-z_b)$ into \\BT{Eq.}~(\\ref{rx}) and $t = t_j\\equiv z_j\/$v$_j$ in \\BT{Eq.}~(\\ref{rf}). 
\nIn the \\textit{``narrow jet''} limit where $r_j \\to 0$, the thin shell bow shock has the \\SC{simple cubic} shape\ngiven by the equation:\n\\begin{equation}\n\\frac{r_b}{L_0}=\\left(\\frac{z_j-z_b}{L_0}\\right)^{1\/3}\\,,\n\\label{rz}\n\\end{equation}\nending at the maximum ``outer edge'' radius $r_{b,f}$ \n(see the cyan dot in \\BT{Fig.}~\\ref{fig:restframe}) given by Eq.~\\ref{rf} evaluated at $t=t_j$:\n\\begin{equation}\n\\frac{1}{\\gamma}\\left(\\frac{r_{b,f}}{L_0}\\right)^3+\\frac{r_{b,f}}{L_0}=\\left(\\frac{\\textnormal{v}_0}{L_0}\\right)\\,t_j = \\left(\\frac{\\textnormal{v}_0}{L_0}\\right)\\left(\\frac{z_j}{\\textnormal{v}_j}\\right).\n\\label{rfz}\n\\end{equation}\nFor a \\textit{``wide jet''} where $r_j$ is no longer negligible, the corresponding equations can also straightforwardly be obtained\nfrom \\BT{Eqs.}~(\\ref{rx}) and (\\ref{rf}), and are given in Appendix \\ref{appendixA}. In the following, we will consider the\n\\textit{``narrow jet''} regime, which leads to simpler equations. 
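The two narrow-jet relations are easy to evaluate numerically. The sketch below works in dimensionless units ($L_0$ for lengths, v$_j$ for velocities); the values v$_0=0.2$v$_j$, v$_w=0.4$v$_j$, and $z_j=8L_0$ simply mirror one of the analytic examples shown later and are otherwise arbitrary assumptions. It solves the cubic of Eq. (rfz) for the outer-edge radius by bisection:

```python
# Dimensionless sketch: lengths in units of L0, velocities in units of v_j.
# Parameter values are illustrative assumptions, not unique choices.
v0, vw = 0.2, 0.4
gamma = (1.0 - vw) / v0      # Eq. (beta) with v_j = 1

def z_b(r, zj):
    """Narrow-jet thin-shell shape, Eq. (rz): z_b = z_j - r_b^3 (units of L0)."""
    return zj - r**3

def r_bf(zj):
    """Outer-edge radius from Eq. (rfz): r^3/gamma + r = v0 * (z_j / v_j)."""
    rhs = v0 * zj
    lo, hi = 0.0, 10.0 * max(1.0, rhs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if mid**3 / gamma + mid > rhs else (mid, hi)
    return 0.5 * (lo + hi)

zj = 8.0                     # IWS position z_j = 8 L0
rf = r_bf(zj)
print(rf, z_b(rf, zj))       # outer-edge radius and its axial position
```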
\n\n\\subsection{The post-bow shock cavity}\n\nLet us now consider the trajectory \\SC{$r_f(z_f)$ described in the $z,r$ plane by} the outer edge of the thin\nshell bow shock \\SC{at earlier times, when the IWS travelled from its formation point $z=0$ at $t=0$ \nto its current location $z_j$ at time $t_j$}.\nThis trajectory will define the shape of the volume\nswept out by the travelling and expanding bowshock into the slower disk wind\n(see Fig.~\\ref{fig:restframe}, dashed black line).\n\nAt an earlier time $t_f$ ($0\\leq t_f\\leq t_j$), the bowshock terminated at an outer radius $r_f \\le r_{b,f}$ \ngiven by Eq.~\\ref{rfz} with $t_j=t_f$:\n\\begin{equation}\n\\frac{r_f^3}{\\gamma L_0^2}+r_f=\\textnormal{v}_0t_f.\n\\label{rftf}\n\\end{equation}\n\nThe distance $z_f$ from the source where this radius $r_f$ \\SC{was reached}\nis obtained from \\BT{Eq.}~(\\ref{rz}) by setting $z_b = z_f$, $r_b = r_f$ and $z_j=$v$_j \\, t_f$.\n\\begin{equation}\n\\frac{r_f^3}{L_0^2} = \\textnormal{v}_j t_f - z_f.\n\\label{zftf}\n\\end{equation}\n\nCombining \\BT{Eqs.}~(\\ref{rftf}-\\ref{zftf}) to eliminate $t_f$, and recalling \nthat $\\gamma = (\\textnormal{v}_j-\\textnormal{v}_w\/){\\textnormal{v}_0}$\\, we then obtain\nthe shape $r_f(z_f)$ of the cavity swept by the (growing) edge of the bow shock wing associated\nwith the travelling internal working surface (see dashed black curve in Fig.~\\ref{fig:restframe}) :\n\\begin{equation}\n\\frac{\\textnormal{v}_w}{\\textnormal{v}_j-\\textnormal{v}_w}\\left(\\frac{r_f}{L_0}\\right)^3+\\frac{\\textnormal{v}_j}{\\textnormal{v}_0}\\left(\\frac{r_f}{L_0}\\right)\n= \\frac{z_f}{L_0}\\,.\n\\label{zfrf2}\n\\end{equation}\n\n\\subsection{Refilling of the cavity by the disk-wind}\n\nOf course, as soon as the bow shock wing has passed by, the disk wind (travelling in the $z$-direction\nat a velocity v$_w$, see \\BT{Fig.}~\\ref{fig:restframe}) immediately starts to refill the swept-up cavity. 
\nFor a given radius $r_f(z_f)$ along the boundary of the swept-up volume, the refilling by the disk-wind will thus start at the \ntime $t_f$ (given by Equ.~\\ref{zftf}) when the bowshock edge reached this position; at \nthe present time $t_j$ the disk wind will have refilled a region of length $(t_j-t_f)$v$_w$ along the $z$-axis.\nThe boundary between the \\SC{wind-refilled} region and the emptied cavity thus\nhas a locus $z_c(r_c)$ (see the cyan dash-dotted line in Fig.~\\ref{fig:restframe}) given by:\n\\begin{equation}\nz_c=z_f+(t_j-t_f)\\textnormal{v}_w=\\gamma r_c+\\textnormal{v}_w t_j\\,,\n\\label{zc}\n\\end{equation}\nwhere for the second equality we have used \\BT{Eqs.}~(\\ref{rftf}-\\ref{zftf}) and set $r_c=r_f$.\n\nTherefore, the slower disk wind refills the cavity \\SC{swept} by the bow shock except\nfor an inner, conical ``hole'' with half-opening angle $\\alpha=\\arctan \\gamma^{-1}$ =\narctan[v$_0$ \/ (v$_j$-v$_w$)].\nThe conical cavity is attached to the wings of the bow shock at $(z_{bf},r_{bf})$, \nand its vertex along the jet axis is located at a distance from the source $z_a$= v$_w\\,t_j$ = $z_j$ (v$_w$\/v$_j$)\n(see \\BT{Eq.}~\\ref{zc} with $r_c=0$ and cyan asterisk in Fig.~\\ref{fig:restframe}).\n\n\\BT{Fig.}~\\ref{fig:analytic-timevol} shows the analytical flow configurations obtained at three different evolutionary times\n(corresponding to $t=2L_0\/$v$_j$, $4L_0\/$v$_j$ and $8L_0\/$v$_j$), and for\ntwo choices of the wind velocity (v$_w=0$ and v$_w=0.4$v$_j$). In the two models,\nwe have set v$_0=0.2$v$_j$. The model with v$_w=0$ (\\BT{left} frames of \\BT{Fig.~\\ref{fig:analytic-timevol}})\nproduces a cavity which does not fill up. 
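The geometry of the partially refilled cavity can be sketched directly from Eqs. (zfrf2) and (zc); the dimensionless values below (v$_0=0.2$v$_j$, v$_w=0.4$v$_j$, $z_j=8L_0$) again just mirror the right-hand frames of the analytic example and are otherwise arbitrary assumptions:

```python
import math

# Dimensionless sketch (lengths in L0, velocities in v_j); illustrative values only.
v0, vw = 0.2, 0.4
gamma = (1.0 - vw) / v0      # Eq. (beta)
zj = 8.0                     # current IWS position
tj = zj                      # t_j = z_j / v_j with v_j = 1

def z_f(r):
    """Boundary of the swept-up cavity, Eq. (zfrf2)."""
    return vw / (1.0 - vw) * r**3 + r / v0

def z_c(r):
    """Refilling front of the disk wind, Eq. (zc): z_c = gamma*r + vw*tj."""
    return gamma * r + vw * tj

# Empty conical "hole": half-opening angle arctan(1/gamma), vertex at z_a = zj*vw/vj.
alpha = math.degrees(math.atan(1.0 / gamma))
z_apex = vw * tj
print(alpha, z_apex)
```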
For v$_w=0.4$v$_j$ (right frames\nof \\BT{Fig.~\\ref{fig:analytic-timevol}}), the bow shock has a stubbier shape compared to the v$_w=0$\nbow shock ($L_0$ is larger) and the cavity which it leaves behind is partially refilled by\nthe disk wind (brown region).\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=17cm]{fig3-new2-eps-converted-to.pdf}\n\\centering\n\\caption{The time evolution of the bow shock + cavity flow predicted by\nthe analytic model for two choices of the wind velocity: v$_w=0$ (left \nframe) and v$_w=0.4$v$_j$ (right frame). The dark region is\nthe empty part of the cavity (swept-up in the thin shell bow shock) and\nthe brown region is the part of the cavity that has been refilled\nby the disk wind (this region being of course absent in the v$_w=0$ model\nof the left frame). \nFor both models, we show snapshots \ncorresponding to $t=2L_0\/$v$_j$, $4L_0\/$v$_j$ and $8L_0\/$v$_j$, which result\nin positions $z_j=2L_0$, $4L_0$ and $8L_0$ for the working surface (see the\nlabels on the top left of each frame). Both models have v$_0=0.2$v$_j$.}\n\\label{fig:analytic-timevol}\n\\end{figure*}\n\n\\subsection{Kinematics along the shell}\n\nFrom \\BT{Eqs.}~(\\ref{mass}-\\ref{xmom}), it is straightforward to show that for a narrow\njet surrounded by a homogeneous disk-wind, the radial and axial velocities \n\\SC{(in the source rest frame) \nof the well-mixed thin shell material as a function of cylindrical radius $r_b$ are}:\n\\begin{equation}\n\\textnormal{v}_r= \\textnormal{v}_0 \\left(1+\\frac{3r_b^2}{\\gamma L_0^2}\\right)^{-1}\\,,\n\\label{vrv0}\n\\end{equation}\n\\begin{equation}\n\\textnormal{v}_z=\\textnormal{v}_w+(\\textnormal{v}_j -\\textnormal{v}_w) \\left(1+\\frac{3r_b^2}{\\gamma L_0^2}\\right)^{-1}\\,,\n\\label{vzv0}\n\\end{equation}\nwhere for the second equation we have also considered that v$_z=$v$_j-$v$_x$ (see\n\\BT{Figs.} \\ref{fig:shockframe} and \\ref{fig:restframe}). 
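A minimal sketch of the fully mixed shell kinematics, Eqs. (vrv0)-(vzv0); the default velocity values (in km/s) are illustrative assumptions:

```python
# Sketch of the well-mixed thin-shell velocities, Eqs. (vrv0)-(vzv0).
# Default values (km/s, with L0 as the length unit) are illustrative assumptions.
def shell_velocities(rb, v0=10.0, vj=100.0, vw=40.0, L0=1.0):
    """Return (v_r, v_z) of the fully mixed shell material at cylindrical radius rb."""
    gamma = (vj - vw) / v0
    f = 1.0 / (1.0 + 3.0 * rb**2 / (gamma * L0**2))
    return v0 * f, vw + (vj - vw) * f

# Asymptotics: (v0, vj) at the working surface (rb -> 0), and
# (0, vw) far out along the bow shock wings (rb -> infinity).
print(shell_velocities(0.0))
print(shell_velocities(100.0))
```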
In evaluating the radial velocities, one should keep in\nmind that the radius $r_b$ is always smaller than the $r_{b,f}$ value given by\n\\BT{Eq.}~(\\ref{rfz}).\n\nAs expected, \\SC{we find the following asymptotic limits}:\n\\begin{itemize}\n\\item $\\textnormal{v}_r$\nhas an initial value v$_0$ for $r_b\\to 0$ (i.e., as it\nleaves the jet working surface) and goes to 0 at large radii (as the radial\nmomentum of the thin shell bow shock is shared with an increasing mass of\ndisk wind),\n\\item $\\textnormal{v}_z$\nhas an initial value v$_j$ when it leaves the working surface ($r_b \\rightarrow 0$),\nand for large radii tends to the disk wind velocity v$_w$.\n\\end{itemize}\n\n\\BT{Eqs.}~(\\ref{vrv0}-\\ref{vzv0}) give the velocity of the well-mixed material\nwithin the thin shell bow shock. These velocities correspond to the Doppler\nvelocities observed in an astronomical observation, provided that the emission\ndoes indeed come from fully-mixed material.\n\nAnother extreme limit arises if the emission is actually \\SC{dominated by} the gas that has\njust gone through the bow shock and which is not yet mixed with the thin shell\nflow material. In this case, the axial and radial velocities of the emitting\nmaterial would correspond to the velocity directly behind a highly compressive\nradiative shock. For such a shock, the velocity of the post shock flow\n(measured in the reference system moving with the bow shock)\nis basically equal to the \\SC{projection of the incoming flow velocity} parallel to the shock front. 
\nIt is straightforward\nto show that in this case the immediate post shock radial and axial velocities of the emitting\nmaterial \\SC{in the source rest frame} are given, in the \\textit{``narrow jet''} limit, by:\n\\begin{equation}\n\\textnormal{v}_{r,ps}= (\\textnormal{v}_j-\\textnormal{v}_w) \\frac{3 (r_b\/L_0)^2}{1+9 (r_b\/L_0)^4},\n\\label{vrps}\n\\end{equation}\n\\begin{equation}\n\\textnormal{v}_{z,ps}= \\frac{\\textnormal{v}_j+9~\\textnormal{v}_w~(r_b\/L_0)^4}{1+9(r_b\/L_0)^4}.\n\\label{vzps}\n\\end{equation}\nWe note that while v$_{z,ps}$ has the same asymptotic limits as v$_z$ in the full mixing case (see \\BT{Eqs.}~\\ref{vzv0} and \\ref{vzps}), the radial post-shock velocity v$_{r,ps}$ tends to zero both for $r_b\\to 0$ \nand for $r_b\\to \\infty$ (see \\BT{Eq.}~\\ref{vrps}), reaching a maximum value of (v$_j-$v$_w)\/2$ for a radius equal to $L_0 \/\\sqrt{3}$. This peak value for\nthe radial velocity is a general result of bow shock kinematics, valid regardless\nof the shape of the bow shock, which was first derived by \\citet{1987ApJ...316..323H}.\n\n\nBy combining \\BT{Eqs.}~(\\ref{vrv0}-\\ref{vzps}) with \\BT{Eq.}~(\\ref{rz}) it is\nstraightforward to obtain the axial and radial velocities as a function of\nthe distance $z$ along the symmetry axis. Examples of these dependencies are\nshown in the following section.\n\n\\subsection{A dimensional example}\n\n\\BT{We now consider a particular model of an internal working surface\nmoving at a velocity v$_j=100$~km~s$^{-1}$, located at $z_j=10^{16}$~cm along the $z$-axis and ejecting material sideways at a rate ${\\dot m}_0=10^{-8}$~M$_\\odot$~yr$^{-1}$ with a lateral ejection velocity v$_0=10$~km~s$^{-1}$.} \n\nFor the surrounding disk wind, we assume a number density of atomic nuclei\n$n_w=\\rho_w\/(1.4m_H)=10^4$~cm$^{-3}$ and\nvelocities v$_w=0$ and v$_w=40$~km~s$^{-1}$. 
With these parameters,\nwe obtain $L_0=5.2\\times 10^{14}$~cm (for v$_w=0$) and $L_0=8.7\\times 10^{14}$~cm \n(for v$_w=40$~km~s$^{-1}$). Note that ${\\dot m}_0$ and $\\rho_w$ enter the shape and kinematic equations only through\n$L_0 \\propto ({\\dot m}_0\/\\rho_w)^{1\/2}$, so that only their ratio actually matters in defining the flow properties.\n\nFor these two working surfaces, we obtain the shapes and the radial and axial velocities\n(as a function of $z$) shown in \\BT{Fig.}~\\ref{fig4}. From this figure, it is clear that for the v$_w=40$~km~s$^{-1}$ model\nwe obtain a flatter bow shock than for the v$_w=0$ case, \\SC{because $L_0$ is larger.}\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=8cm]{fig4-new5-eps-converted-to.pdf}\n\\caption{Shape of the bow shock and the cavity (top), radial\nvelocities v$_r$ (center) and axial velocities v$_z$ (bottom) for the\ntwo models discussed in the text. The solid curves show the\nvelocities of the well-mixed material within the thin shell flow,\nand the dashed curves show the immediate\npost-bow shock velocities. 
The dotted red line shows v$_z=$ v$_w$.}\n\\label{fig4}\n\\end{figure}\n\nThe velocities of the fully mixed thin shell material (shown with solid lines\nin \\BT{Fig.}~\\ref{fig4}) have the following behaviors:\n\\begin{itemize}\n\\item v$_r$ has a value of v$_0=10$~km~s$^{-1}$ at\n$z=z_j$, and monotonically decreases toward (but not reaching) \\BT{zero} for decreasing values of $z$,\n\\item v$_z$ (lower plots of \\BT{Fig.}~\\ref{fig4})\nhas a value of v$_j=100$~km~s$^{-1}$ at\n$z_j$, and monotonically decreases for lower values of $z$,\ntowards (but not reaching) an asymptotic limit of v$_w$.\n\\end{itemize}\n\n The immediate post-bow shock velocities (shown with dashed lines in \\BT{Fig.}~\\ref{fig4}) have the following behaviors:\n\\begin{itemize}\n\\item v$_{r,ps}$ (central plots of \\BT{Fig.}~\\ref{fig4}) is \\BT{zero} at the apex of the bow shock surface at $z=z_j$, and rapidly grows to a \nmaximum value of\n50~km~s$^{-1}$ (for v$_w=0$) and 30~km~s$^{-1}$ (for v$_w=40$~km~s$^{-1}$). \nThis maximum, corresponding to (v$_j-$v$_w$)$\/2$ as\ndiscussed in the previous section, is reached at $z=z_j-L_0\/(3\\sqrt{3})$. \nThe radial velocity then decreases again at smaller $z$ until the end of the bowshock wings,\n\\item the axial velocity v$_{z,ps}$ has the same qualitative behavior\nas the well-mixed v$_z$ (see above), but with\na different functional form that approaches its limit v$_w$ faster in the bowshock wings.\n\\end{itemize}\n\nWe expect that in reality, due to incomplete mixing, the emitting material will have axial\nand radial velocities between the fully-mixed layer and immediate post-bow\nshock velocities shown in \\BT{Fig.}~\\ref{fig4}. 
The difference between\nthese two velocities is particularly important for the\nradial component of the velocity of the emitting material.\n\n\\subsection{Successive bow shocks}\n\nIn the previous section, we assumed that the bow shock associated with an internal\nworking surface travels through undisturbed disk wind material. However, we saw that the cavity formed\nbehind it is only partially refilled by fresh disk wind. Therefore, a second bow shock will travel\ninto a disk wind structure containing an empty, conical cavity left behind by the first bow shock.\n\nWe now assume that the variable ejection velocity of the jet produces a second working surface \nat $z = 0$\nat a time $\\tau_{j}$, which also travels along the jet axis with\nthe same velocity v$_j$ as the first working surface.\n\\BT{Fig.}~\\ref{fig-twoiws} illustrates three steps of the propagation of this second working surface.\n\n\\begin{figure*}[!t]\n\\includegraphics[width=14cm]{fig5-new-eps-converted-to.pdf}\n\\centering\n\\caption{The time evolution of two successive bow shocks and the cavities predicted by the analytical model.\n The first bow shock is ejected at $t=0$, the second shock is formed at time $t= \\tau_j$ when the first bow shock\n is at $z=$v$_j \\tau_j$ taken here as $6 L_0$ (top). At a time $t=\\tau_j+2 L_0\/$v$_j$, the second bow shock is still\n propagating in the undisturbed disk wind material (center). 
At a time $t_c$ the second bow shock catches up with the emptied cavity of the first bow shock at its vertex (bottom).}\n \\label{fig-twoiws}\n\\end{figure*}\n\nAt $t=\\tau_j$ (\\BT{Fig.}~\\ref{fig-twoiws}, top panel) the first working surface is at a distance $z_{j1} =$v$_j \\tau_j$ from the outflow source \nand its cavity is partially filled with fresh disk wind material, while the second working surface \nhas not yet expanded.\nAt a time $\\tau_j < t < t_c$ (\\BT{Fig.}~\\ref{fig-twoiws}, center panel) the second bow shock travels in unperturbed, pristine disk wind material \n\\SC{that refilled the cavity behind the first bowshock};\n\\SC{hence its shape and kinematics are still given by the same equations derived above for the leading internal working surface,\nand any molecules present in the disk-wind can enter the second bowshock.}\n\\SC{At a time $t = t_c$ (\\BT{Fig.}~\\ref{fig-twoiws}, bottom panel) the apex of the second bowshock just catches up with}\nthe vertex of the conical cavity emptied by the first bow shock and not refilled by the disk wind. \n\nTo obtain the time $t_c$,\nwe note that at any time $t$ ($\\tau_j < t < t_c$), the position of the apex of the second bow shock is $z_{j2} =$v$_j (t-\\tau_j)$ and the position of the vertex of the empty cavity behind the first bow shock is $z_{a1} = $v$_w t$. 
\nBy equating these two quantities we get:\n\\begin{equation}\n t_c = \\frac{\\textnormal{v}_j}{\\textnormal{v}_j-\\textnormal{v}_w} \\tau_j.\n \\label{criticaltime}\n\\end{equation}\nThis interaction occurs at a distance $l_c$ from the source:\n\\begin{equation} \n l_c = z_{a1}(t_c) = \\textnormal{v}_w t_c =\\frac{\\textnormal{v}_j}{\\textnormal{v}_j\/\\textnormal{v}_w-1} \\tau_j = \\frac{\\Delta z}{\\textnormal{v}_j\/\\textnormal{v}_w - 1},\n\\label{criticaldist}\n\\end{equation}\nwhere $\\Delta z = \\tau_j $v$_j$ denotes the distance between two successive IWS.\nUnless v$_w$ is very close to v$_j$, we find that $l_c$ is of the order of the typical IWS spacing $\\Delta z$.\n\nOur model thus predicts that no more pristine unperturbed disk wind material can remain close to the jet axis beyond $z = l_c$. \nWhen the second IWS reaches $z > l_c$, the central region of its bow shock shell propagates into the emptied\ncavity left behind by the previous IWS. This second bow shock shell will in general be less curved than the first one, \nbecause its central region travels into a low density cavity instead of unperturbed disk wind. \n\n\\section{Numerical simulations}\nIn the previous section, we proposed a simple analytical \"thin-shell\" model that describes the morphology and the kinematics of a bow shock produced by a pulsating jet travelling in a surrounding disk wind. We showed in particular that the disk wind refills part of the cavity carved by the bow shock, allowing pristine disk wind to be observed close to the source. Each successive bow shock travels in undisturbed disk wind up to a critical distance $l_c$. 
Above this altitude, bow shock shells interact with each other and analytical models can only be heuristic.\n\nIn this section, we present numerical simulations that start with the simple configuration adopted above, first to determine to what extent the analytical model can describe a realistic situation (e.g., with partial mixing), and second to briefly study the long-term evolution of the interacting bow shock shells. \n\n\\subsection{Numerical method and setup}\n\nWe carry out numerical simulations of a variable ejection jet surrounded by a wide \"disk wind\" outflow.\nWe have implemented a new HD numerical code, \\textit{Coyotl}, which solves the \"2.5D\"\nEuler ideal fluid equations in cylindrical coordinates:\n\n\\begin{equation}\n\\frac{\\partial \\textbf{U} }{\\partial t} + \\frac{1}{r} \\frac{\\partial r \\textbf{F}}{\\partial r} + \\frac{\\partial }{\\partial z} \\textbf{G} = \\textbf{S}, \n\\label{euler2D1}\n\\end{equation}\nwhere $\\textbf{U}$ is the vector of conserved quantities\n\\begin{equation}\n\\textbf{U} = (\\rho, \\rho \\textnormal{v}_r, \\rho \\textnormal{v}_z, e, n_i)\n\\label{euler2D2}\n\\end{equation}\nwith fluxes in the $r$- and $z$-directions given, respectively, by\n\\begin{equation}\n\\textbf{F} = (\\rho \\textnormal{v}_r, \\rho \\textnormal{v}_r^2+p, \\rho \\textnormal{v}_z \\textnormal{v}_r, \\textnormal{v}_r(e+p), \\textnormal{v}_r n_i),\n\\label{eulerU2D3}\n\\end{equation}\n\\begin{equation}\n\\textbf{G} = (\\rho \\textnormal{v}_z, \\rho \\textnormal{v}_r \\textnormal{v}_z, \\rho \\textnormal{v}_z^2 +p, \\textnormal{v}_z (e+p), \\textnormal{v}_z n_i).\n\\label{eulerU2D4}\n\\end{equation}\n$n_i$ are passive scalars used to \\SC{separate} the jet from the \\SC{disk}-wind material in the flow.\nAssuming an ideal equation of state, the total energy density $e$ is\n\\begin{equation}\ne = \\frac{p}{\\gamma -1} + \\frac{1}{2} \\rho (\\textnormal{v}_r^2 + 
\\textnormal{v}_z^2),\n\\label{eulerE2D}\n\\end{equation}\nand the source term is\n\\begin{equation}\n\\textbf{S} = (0, \\frac{p}{r}, 0, -\\rho^2 \\Lambda(T), 0),\n\\label{eulerU2D4bis}\n\\end{equation}\nwhere the cooling function $\\Lambda(T)$ is the parametrized atomic\/ionic cooling term\nof \\citet{1989ApJ...344..404R}, which approximates the cooling curve of \\citet{1976ApJ...204..290R} for temperatures above $10^4$~K.\n\nThe numerical scheme is based on a second order Godunov method with an HLLC Riemann solver (Toro 1999).\nThe calculation of the fluxes and data reconstruction uses the second order scheme\ndescribed by \\citet{1991MNRAS.250..581F}. This algorithm solves the Euler equations in a true cylindrical coordinate\nsystem as written in \\BT{Eq.}~(\\ref{euler2D1}) and calculates the cell gradients through\nthe center of gravity of the cylindrical cells.\n\nWe ran two simulations: a reference simulation called the \\textit{no-DW} model, with v$_w$ = 0 (i.e., a jet in a stationary\nambient medium), and a simulation with v$_w = 0.4 $v$_j$ called the \\textit{DW} model. 
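To make the cylindrical divergence term concrete, here is a deliberately minimal Python sketch. It is not the actual \textit{Coyotl} code (which is a second-order Godunov scheme with an HLLC solver): it only advects a density bump radially with a first-order upwind flux, and checks that the finite-volume form of the $(1/r)\,\partial_r(rF)$ term conserves the cylindrical mass integral by construction. All variable names are illustrative.

```python
import numpy as np

# Toy radial advection in cylindrical coordinates (NOT the Coyotl code): evolve
# only the density with a prescribed constant radial velocity, using the update
#   rho_i^{n+1} = rho_i^n - dt/(r_i*dr) * (r_{i+1/2}*F_{i+1/2} - r_{i-1/2}*F_{i-1/2})
# implied by the (1/r) d(r F)/dr term of the cylindrical Euler equations.

nr, dr, vr, dt = 200, 1.0, 0.5, 0.5          # cells, cell size, radial speed, step (CFL = 0.25)
r  = (np.arange(nr) + 0.5) * dr              # cell-centre radii
rf = np.arange(nr + 1) * dr                  # face radii; rf[0] = 0 is the axis
rho = np.exp(-((r - 50.0) / 10.0) ** 2)      # density bump, far from both boundaries

def mass(q):
    """Cylindrical mass per unit length, 2*pi * sum(q * r * dr)."""
    return 2.0 * np.pi * np.sum(q * r) * dr

def step(q):
    F = np.zeros(nr + 1)
    F[1:] = vr * q                           # first-order upwind flux (vr > 0); F = 0 on the axis
    return q - dt / (r * dr) * (rf[1:] * F[1:] - rf[:-1] * F[:-1])

m0 = mass(rho)
for _ in range(40):
    rho = step(rho)

# Face fluxes telescope, so mass is conserved to round-off while the bump drifts outwards.
assert abs(mass(rho) - m0) < 1e-10 * m0
```

The same telescoping-flux argument is what makes a conservative cylindrical scheme, such as the one used here, conserve mass, momentum, and energy exactly up to the boundary fluxes.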
\nTo follow the refilling of the cavity close to the source and the interaction between successive shells, \nwe integrate the equations on a $2000$~au $\\times~350$~au domain, with a resolution of 1~au per cell.\nAll jet and wind parameters except v$_w$ are kept equal between the two simulations, and \nare summarized in Table~\\ref{table:1}.\n\n\\begin{table}\n\\caption{Model parameters} \n\\label{table:1} \n\\centering \n\\begin{tabular}{c c } \n\\hline\\hline \nParameter & Value \\\\ \n\\hline \nresolution & $1$ au per cell \\\\\nsimulation domain $z \\times r$ & $2000$ au $ \\times 350$ au\\\\ \n\\hline \n Jet \\\\\n\\hline \naverage jet velocity, $\\textnormal{v}_j$ & $96$ km~s$^{-1}$ \\\\\nvariability amplitude, $\\delta $v$_j$ & $48$ km~s$^{-1}$ \\\\\nvariability period, $\\Delta \\tau_j$ & 33 yr \\\\\ntime of velocity increase, $\\eta \\Delta \\tau_j$ & 0.1~$\\Delta \\tau_j$ \\\\\njet density & $9 \\times 10^{-22}$~g~cm$^{-3}$ \\\\\njet temperature & $28$~K \\\\\n\\SC{jet radius} & 20 au \\\\\n\\hline \n Disk wind \\\\\n\\hline \ndisk wind velocity, v$_w$ & $0$ (\\textit{no-DW} reference model) \\\\ \n\t\t\t\t\t\t\t & $0.4$ v$_j$ (\\textit{DW} model) \\\\\ndisk wind density & $3 \\times 10^{-23}$~g~cm$^{-3}$ \\\\\ndisk wind temperature & 800~K \\\\\n\\hline \n\\end{tabular}\n\\end{table}\n\nOur initial conditions have an inner, constant velocity jet filling the $r<r_j$ cylinder. At $r>r_j$ we impose either the disk wind physical conditions, or a reflecting condition (for the reference simulation\nwith v$_w$ = 0). In order to avoid numerical problems due to the $z$-velocity shear \\SC{between the jet and the surrounding \ndisk-wind}, we impose a velocity gradient over \\BT{three} cells (i.e., 3~au) at the outer edge of the jet inflow. 
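The exact ejection-velocity waveform is not written out above, so the sketch below assumes a piecewise-linear cycle (a fast linear rise over the fraction eta of the period, then a linear decay) and reads the amplitude as a half-amplitude about the mean; both are assumptions made only for illustration, but they reproduce the tabulated mean jet velocity of 96 km/s.

```python
import numpy as np

# Hypothetical inflow velocity cycle for the variable jet (Table 1 parameters).
# ASSUMED shape: linear rise during ETA*P, linear decay during (1-ETA)*P, with
# v oscillating between VJ - DVJ/2 and VJ + DVJ/2 (amplitude convention assumed).
VJ, DVJ, P, ETA = 96.0, 48.0, 33.0, 0.1      # km/s, km/s, yr, rise fraction

def vjet(t):
    phase = (t % P) / P
    if phase < ETA:
        s = phase / ETA                        # rising branch: v_min -> v_max
    else:
        s = 1.0 - (phase - ETA) / (1.0 - ETA)  # decaying branch: v_max -> v_min
    return (VJ - 0.5 * DVJ) + DVJ * s

ts = np.linspace(0.0, P, 1000, endpoint=False)
vs = np.array([vjet(t) for t in ts])
# under these assumptions the inflow spans 72-120 km/s and averages 96 km/s
```

Each velocity maximum overtakes the slower gas ejected before it, which is what forms one internal working surface per period.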
\n\n\\subsection{Single bow shock propagation}\n\n\\begin{figure*}[!h]\n\\includegraphics[width=15cm]{simu_nov_compressed_v3.pdf}\n\\centering\n\\caption{\n Maps of density and pressure for the reference no-DW simulation with v$_w$ = 0 (left) and \n the \\textit{DW} simulation with v$_w$ = 0.4v$_j$ (right) at a time $t=48$yr. \n \n Color scales on top are in g cm$^{-3}$ for density and in dyn cm$^{-2}$ for pressure.\n White contours show the locus of 50\\% mixing ratio between jet and disk-wind\/ambient material.\n The cyan curve shows a fit (to the numerical results) by the analytic shell shape in \\BT{Eq.}~\\ref{rfz}, with $L_0 = 65$ au (left) and\n $108$ au (right).\n The cyan dot indicates the \\SC{maximum radius} of\n the shell,\n \n the cyan asterix indicates the predicted vertex of the empty conical cavity left behind the shell, \n and the cyan dash-dotted line is the analytical predicted boundary between the emptied cavity and \n the region refilled from below by fresh disk wind (see Fig.~\\ref{fig:restframe} and \\BT{Eq.}~\\ref{zc}). }\n\\label{nodw_simu}\n\\end{figure*}\n\n\\BT{Fig.~\\ref{nodw_simu}} shows snapshots of the \\textit{no-DW} simulation (two frames on the left) and of the \\textit{DW} simulation (two frames on the right)\nafter a $t=48$~yr time integration, which is larger than\nthe ejection variability period of 33~yr (see Table 1). The first internal working surface (IWS) has travelled to a distance\nof 995~au from the source, and a second IWS to 355~au. \nIn this subsection, we study successively the shape of the first bow shock shell, the refilling of the cavity behind it, and the kinematics of the shell, comparing each of them with our analytical predictions.\n\n\\subsubsection{The shape of the bow shock shell}\n\nThe cyan curves in Fig. 
\\ref{nodw_simu} show that the bow shock shells in the two simulations can be well fitted with the cubic analytic thin-shell solution (\\BT{Eq.}~\\ref{rz}), with values of the characteristic scale $L_0 = 65$~au for the reference \\textit{no-DW} simulation and $L_0= 108$~au for the \\textit{DW} simulation. \nIn the simulations, the sideways ejection velocity v$_0$ and mass-flux $\\dot{m}_0$ (see Section 2) are a result of the IWS shock configuration, which compresses\nthe jet material and ejects it sideways.\nSince the two simulations only differ in the presence or lack of a surrounding disk wind, the jet IWS in the two simulations have similar characteristics. We then expect that $\\dot{m}_0$v$_0$ is the same in both, and that $L_0$ should vary with the wind velocity as $L_0 \\propto ($v$_j-$v$_w)^{-1}$ (see \\BT{Eq.}~\\ref{l0}). \nThe values of $L_0$ found above by fitting the shell shape are indeed consistent with this expectation.\n\n\\begin{figure}[!h]\n\\includegraphics[width=8.5cm]{fig7-new-june.pdf}\n\\caption{Transverse cut across the flow at the IWS location ($z=993$~au) in the \\textit{no-DW} time-frame shown in \\BT{Fig.} \\ref{nodw_simu}. This cut shows the radial velocity\n as a function of distance from the jet axis as a solid line. We also plot the radial velocity weighted by the abundance of the jet tracer as a dashed line. The radial velocity first grows outwards, reaches a maximum velocity of $\\approx 14$~km~s$^{-1}$\n at a radius of $\\sim 25$~au (somewhat larger than the 20~au initial jet radius), and then remains at values $>10$~km~s$^{-1}$ until it drops\nsharply to 0 at $r\\sim 50$~au. The velocity maximum at $r \\sim 25$~au corresponds to the shock against the jet material. 
The second maximum at r$\\sim 50$~au is the shock that propagates in the disk-wind, and the zero radial velocity material at larger radii is the undisturbed disk wind.}\n\\label{v0est}\n\\end{figure}\n\nThe cyan dots on the leading bow shock wings in \\BT{Fig.} \\ref{nodw_simu} indicate the maximum radius of the bow shock shell as observed in the numerical simulations, $r_{\\rm max} = $ 133 au in the \\textit{no-DW} and $r_{\\rm max} = $ 137 au in the \\textit{DW} cases. Assuming that it corresponds to \nthe current position of the edge of the thin shell $(r_{b,f},z_{b,f})$, as defined in \\BT{Fig.}~\\ref{fig:restframe}, our analytic model predicts that $r_{b,f}$ depends on $L_0$ and v$_0$ through \\BT{Eq.}~(\\ref{rfz}) or (\\ref{rf-wide}). With $L_0=65$, $108$~au, we would deduce v$_0 = 27$, $19$ km~s$^{-1}$ for\nthe \\textit{no-DW} and \\textit{DW} simulations, respectively.\n\nTo obtain a direct measurement of v$_0$, \nwe plot in \\BT{Fig.} \\ref{v0est} a transverse cut of the radial velocity v$_r$ at the position of the leading working\nsurface ($z=993$~au) in the \\textit{DW} simulation. Inside the IWS, because of both the adiabatic expansion and the mass flux across the IWS,\nv$_r$ increases from zero to a maximum velocity of\n$14$ km~s$^{-1}$. This direct measurement of v$_0$\nis smaller than the values of 27, 19~km~s$^{-1}$ inferred from the maximum radius of the shell in our simulations using \\BT{Eq.}~\\ref{rfz} (see above). 
\nHowever, we note that taking v$_0$ = 14 km~s$^{-1}$ (its directly measured value), the predicted edge radius would be $r_{b,f} = 111$~au in the no-DW case and $r_{b,f} = 121$~au in the DW case, only slightly\nsmaller than the $r_{\\rm max}$ found in our simulations.\n\nThis slight difference in outer radius \nbetween the analytic model and the numerical simulations could be a result of several effects:\n\\begin{itemize}\n\\item the analytic model assumes a working surface with a time-independent, sideways ejection,\n while the numerical simulation has an IWS with time-dependent sideways ejection that depends on the evolution\n of the IWS shocks. The IWS in the simulations produces a higher sideways velocity at early times (v$_0 \\sim 18$~km~s$^{-1}$)\\footnote{Note that following \\citet{2001ApJ...557..443O} the maximum velocity that an atomic gas at T$=10^4$~K can reach through adiabatic expansion is $\\sqrt{3} c_s = 18$~km~s$^{-1}$, where $c_s$ is the adiabatic sound speed.}, closer to the values deduced from the analytic cavity shapes,\n\\item in the numerical simulation, the sideways ejection from the IWS is not highly supersonic. The thermal gas pressure is therefore expected\n to be an additional source of sideways momentum for the shell (an effect not included in our momentum conserving analytic model); \\SC{this will act to\n produce a higher ``effective'' v$_0$.}\n\\item similarly, the thermal pressure in the head of the bow shock driven into the surrounding environment will result in\n a sideways push which is not present in the momentum conserving analytic model. \n\\item the numerical simulations do not have instant mixing between the sideways ejected jet material and the shocked\n environment (or disk wind), as assumed in the analytic model. Since the immediate post-shock velocity in the radial direction is generally greater than the mean radial shell velocity (see example in \\BT{Fig.}~\\ref{fig4}), the growth rate of the bow shock can be enhanced. 
\\SC{In the reference frame of the IWS (see Fig.~\\ref{fig:shockframe}), the non-mixed material will \"slide\" along the shell surface, extending $r_{b,f}$ to larger values.}\n\\end{itemize}\n\\citet{2001ApJ...557..429L} found in their simulations similar disagreements between direct measurements of the sideways momentum ejected by the IWS\nand the momentum estimated from the fitted shape of their analytic shell model. \n\n\n\\subsubsection{Cavity refilling}\n\nThe asterisk in cyan in each panel of \\BT{Fig. \\ref{nodw_simu}} indicates the location of the vertex of the emptied cavity as predicted from the analytic model\n(see \\BT{Fig.}~\\ref{fig:restframe}). For the \\textit{no-DW} simulation, this point is located at the shock formation position ($z_s=75$~au),\nwhereas for the \\textit{DW} simulation this point is located at $z_a = z_s + \\frac{v_w}{v_j} (z_j - z_s) = 440$~au. \nWe also plot, as a dash-dotted cyan line, \\SC{the line connecting this vertex to $r_{\\rm max}$,} which traces the \nboundary predicted by the analytical model between the emptied swept-out conical cavity and the unperturbed surrounding medium\/refilled disk wind\n(see the black conical region in Fig.~\\ref{fig:restframe}). Three important features can be seen.\n\n\n\\textit{In both numerical simulations, the emptied cavity predicted by the analytical thin-shell model,\ni.e., the conical volume inside the dash-dotted cyan line,\nis not really empty, but partially filled with a cocoon of low\ndensity and pressure material.} \nNo unperturbed ambient gas or disk wind can be left inside this volume (in black in Fig. 
\\ref{fig:analytic-timevol}), which was entirely swept out by the growing shell \nduring the IWS propagation.\nHence this cocoon is made of shocked material that did not fully mix in the shell, and re-expanded in the low-pressure cavity behind it,\n refilling it ``from above''.\nThe white contour, which denotes the surface of 50\\%\\ jet\/environment mixing fraction (obtained following a passive scalar), \n\\SC{shows that} the cocoon is mainly filled with jet material close to the axis, where the shell mass is dominated by \n\\SC{gas ejected from the IWS}.\n\\SC{Further from the axis and closer to the theoretical boundary (cyan dash-dotted line),} \nit is filled by ambient material that was swept up by the bowshock and re-expanded behind it.\n\n\\textit{The boundary with unperturbed ambient \\BT{or} disk wind material}\ncloses back to the axis at the predicted vertex position (cyan asterisk in \\BT{Fig.}~\\ref{nodw_simu}), \nbut is delimited by a weak shock that extends slightly outside the predicted analytical boundary (dash-dotted cyan line in \\BT{Fig.}~\\ref{nodw_simu}).\nIn the \\textit{no-DW} model, the analytical boundary\nrepresents the trajectory of the edge of the bow shock ($z_c = z_f$ in the case v$_w=0$).\nHence, this weak shock is produced by the supersonic motion of the high-pressure edge of the bow shock ($r_{b,f},z_{b,f}$) in the static surrounding medium,\nwhich pushes the boundary of the unperturbed ambient\nmaterial slightly outside the predicted cavity boundary (cyan dash-dotted line). \nIn the presence of a supersonic disk wind, the weak shock front is advected away from the source so that it still closes back \non-axis at the predicted vertex position $z_a$. \nHence, \\BT{Eq.}~(\\ref{zc}) gives a firm limit on the boundary between perturbed and unperturbed material. 
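As a quick consistency check, the vertex position quoted for the two runs can be recomputed directly from the numbers given in this section (the small offset from the quoted 440 au presumably reflects rounding of the IWS position):

```python
# Vertex of the emptied cavity, z_a = z_s + (v_w/v_j)*(z_j - z_s), using the
# shock-formation point z_s = 75 au, the leading IWS at z_j = 995 au and
# v_w/v_j = 0.4 (DW-simulation values quoted in this section).
z_s, z_j, ratio = 75.0, 995.0, 0.4
z_a = z_s + ratio * (z_j - z_s)     # ~443 au, vs the ~440 au quoted in the text
z_a_nodw = z_s                      # for v_w = 0 the vertex stays at z_s = 75 au
```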
\n\n\\textit{In the presence of a disk-wind, the region between the predicted cavity boundary (cyan dash-dotted line) and the weak shock front outside it\nis refilled by fresh disk wind material coming from below.}\n\\begin{figure}[!h]\n\\includegraphics[width=9cm]{dw-zoom-5_compressed-v2.pdf}\n\\caption{Zoom on the leading IWS of the simulation with a surrounding disk wind at time $t=48$~yr. Left: density\nstratification (with the logarithmic color scale given by the top bar in g~cm$^{-3}$), center: radial velocity\n(with the linear scale of the top bar in km~s$^{-1}$) and right: axial velocity structure\n(with the linear scale of the top bar in km~s$^{-1}$). The white contours show the surfaces of\n50\\% (solid line), 10\\%, 0.1\\% and 0.001\\% (outer contour) jet material mixing fractions. \nThe cyan asterisk is the predicted vertex of the cavity, the cyan dash-dotted line is the predicted boundary \nof the cavity, and the cyan curve is the fitted shape of the bow shock.}\n\\label{refilling}\n\\end{figure}\nTo analyse this process,\nwe show in Fig. \\ref{refilling} density and velocity maps of the region around the leading bow shock of the \\textit{DW} simulation.\nThe dashed white contours show the \n10\\%, 0.1\\% and 0.001\\% jet material mixing fractions. \nMaterial that went through the bowshock and re-expanded in the cocoon has also been partially mixed with jet material. As a consequence, regions where no jet material is observed are regions where the disk wind has refilled the cocoon from below.\nThe location of the last, outer contour (corresponding to a $10^{-5}$ jet material mixing fraction)\nshows that the weak outer shock front propagates into unmixed, fresh DW material. 
This material manages to cross the weak \nshock to refill ``from below'' the bottom part of the swept-out cavity.\n\nThe weak shock provides a slight push outwards to the refilling DW, with radial velocities \nthat vary from $+6$~km s$^{-1}$ to $+3$~km s$^{-1}$ along the shock front (middle panel of Fig.~\\ref{refilling}), \nsimilar to the adiabatic sound speed of the disk wind ($c_{s w}\\approx2.8~$km~s$^{-1}$). \nThe weak shock also reduces the DW inflow velocity v$_z$ to values slightly below v$_w$ = 0.4v$_j$ = 38.4~km~s$^{-1}$ (right panel of Fig.~\\ref{refilling}).\nHowever, refilling remains efficient up to the locus predicted by our analytical model (dash-dotted cyan line), \nas the jet mixing fraction there remains very small ($\\simeq$ 0.1\\%). \nThe presence of the weak shock does not appear to significantly modify the extent of DW refilling compared to analytical expectations.\n\n\nIn summary, we can distinguish in our simulations three refilling regions behind the bowshock:\n \\begin{itemize}\n \\item a \\SC{low density cocoon} trailing the bow shock, that is refilled ``from above'' by shell material re-expanding into the emptied cavity.\n This region is mainly composed of jet material close to the apex of the bow shock, and of shocked swept-up disk wind material \n behind the wings of the bowshock,\n \\item an intermediate region (outside the cyan dash-dotted line and inside the weak shock closing the cavity) refilled ``from below'' \n by weakly shocked disk wind material, \n \\item a region upstream of the weak shock closing the cavity, which is refilled by unperturbed fresh disk wind that keeps its initial physical conditions.\n \\end{itemize}\n\n\n\\subsubsection{Kinematics}\n\nWe now compare the kinematics in both simulations with our analytical predictions.\n\\BT{Fig.}~\\ref{PV} shows \"position-velocity\" (PV) diagrams for v$_r$ and v$_z$ as a function of distance $z$ along the flow axis.\nIn order to enhance the contribution from the 
material that has just been shocked, each pixel in a snapshot has been weighted by the cube of the pressure $p^3$ times the elementary volume $2\\pi r \\Delta r \\Delta z$.\nUsing this\nweighting, the maximum intensity (in yellow and orange shades) at each position in the PV diagrams then traces the velocity in the shell.\nThe separation between material originating mainly from the jet or mainly from the surroundings\/disk wind is made using a passive scalar.\nThe predicted mixed shell velocities (\\BT{Eqs.}~\\ref{vrv0} and \\ref{vzv0}) are shown in blue, and the predicted immediate post-bowshock velocities (\\BT{Eqs.}~\\ref{vrps} and \\ref{vzps}) are shown in magenta. Following the discussion of \\BT{Fig. \\ref{v0est}}, we take v$_0= 14$ km s$^{-1}$. \n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=9.5cm]{kinematics-nodw4-june-compressed.pdf} \\\\\n\\includegraphics[width=9.5cm]{kinematics-dw4-june-compressed.pdf}\n\\caption{Longitudinal position-velocity (PV) diagrams for the \\textit{no-DW} simulation (top) and the \\textit{DW} simulation (bottom). From left to right: v$_r$ for the jet material\nonly, v$_r$ for the surrounding material only, v$_z$ for the jet material only, v$_z$ for the surrounding material only, and density stratification. The ordinate of all frames\nis position along the outflow axis (in au). The color scale in the PVs is scaled by volume $\\times$ cube of pressure so as to be maximum (in red and yellow shades) \nfor shocked material in the shell, while the \\SC{color scale} for density is in g~cm$^{-3}$. 
\nBlue curves are the predicted velocities in the full mixing hypothesis (\\BT{Eqs.}~\\ref{vrv0} and \\ref{vzv0}), while\nmagenta curves are the predicted immediate post-shock velocities (\\BT{Eqs.}~\\ref{vrps} and \\ref{vzps}).}\n\\label{PV}\n\\end{figure}\n\nIn the v$_r$ PV diagrams of the surrounding\nmaterial, the \\SC{expansion velocities of shocked material in the shell (orange shading) are always larger than predicted by the (blue) full-mixing curve}\n(except very close to the bow shock apex, where the shear is maximum). The \nsimulation more closely follows the immediate post-bow shock velocity curve (magenta), \\SC{indicating that high-pressure shocked material in the shell\nis not fully mixed in our simulations.} \\SC{Conversely,} in the v$_r$ PV diagram of the jet material, the velocity \ndecreases monotonically along the bow shock wing,\nwith values always \\SC{slightly smaller than} the full mixing velocity curve (in blue). \n\nConcerning the velocity along the jet $z$-axis,\nthe v$_z$ values for jet material lie close to, or slightly above, the full mixing curve (in blue).\nThe v$_z$ PV diagrams for the surrounding material \n\\SC{generally show smaller v$_z$ than predicted by the full mixing curves.}\n\\SC{The high-pressure swept-up shell material (in orange)} \nlies close to the immediate post-shock velocities (magenta curve).\n\n \nThe relatively small v$_r$ velocities and large v$_z$ velocities \nobserved in the jet-dominated material indicate that even if the full mixing hypothesis does not hold, momentum is still conserved: if the velocities of the swept-up surrounding material are greater than expected from the full mixing hypothesis, then the velocities of the jet material (in the IWS rest-frame) must be smaller than the predicted full mixing velocities (and vice-versa). 
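The immediate post-shock laws plotted in magenta can be checked numerically. With the DW values of 96 and 38.4 km/s for the jet and wind speeds, the radial law peaks at half the velocity difference at a radius of L0/sqrt(3), and the axial law runs from the jet speed at the apex down to the wind speed in the far wings:

```python
import numpy as np

# Immediate post-shock velocity laws from Sect. 2, with x = r_b / L_0:
#   v_r,ps = (vj - vw) * 3 x^2 / (1 + 9 x^4)
#   v_z,ps = (vj + 9 vw x^4) / (1 + 9 x^4)
vj, vw = 96.0, 38.4                       # km/s (DW simulation values)
x = np.linspace(1e-4, 10.0, 200001)
vr_ps = (vj - vw) * 3 * x**2 / (1 + 9 * x**4)
vz_ps = (vj + 9 * vw * x**4) / (1 + 9 * x**4)

imax = np.argmax(vr_ps)
assert abs(vr_ps[imax] - 0.5 * (vj - vw)) < 1e-3   # peak (vj - vw)/2 = 28.8 km/s
assert abs(x[imax] - 1 / np.sqrt(3)) < 1e-3        # reached at r_b = L0/sqrt(3)
assert abs(vz_ps[0] - vj) < 0.1                    # apex limit: vj
assert abs(vz_ps[-1] - vw) < 0.1                   # far-wing limit: vw
```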
\n \nAs predicted, the most striking difference between the disk wind model and the reference no-DW model is\nthe saturation of the v$_z$ velocity in the bowshock wings \\SC{to a non-zero value of} v$_z \\approx $v$_w$. Even if this asymptotic limit does\nnot depend on any mixing, the \\SC{incomplete}\nmixing obtained in the simulations produces a \\SC{more} rapid convergence to v$_w$\n\\SC{than predicted in the case of full mixing (blue curve)}.\n \n \\subsection{Long-term evolution}\n\n\\begin{figure*}[!h]\n\\includegraphics[width=6cm]{longterm-nodw-4-compressed.pdf}\n\\includegraphics[width=7.97cm]{longterm-dw-5-compressed.pdf}\n\\centering\n\\caption{Density maps for the \\SC{no-DW} reference simulation (three frames on the left) and the \\SC{DW simulation} \n(three frames on the right) at $t=71$, 119 and 167~yr. The white contours indicate the surfaces of $50\\%$ (solid line) and 90\\% (dashed line) \njet material mixing fractions. The black lines in the disk-wind simulation show\na cone of $\\alpha = 11^\\circ$ opening half-angle, which circumscribes the\nboundary of the region perturbed by the jet and its internal working surfaces. The black dashed lines show the predicted trajectory of the edge of the bow shock (see \\BT{Eq.}~\\ref{zfrf2}). The density color scale is given by the right bar (in g~cm$^{-3}$).}\n\\label{long_term}\n\\end{figure*}\n\n\\BT{Fig.} \\ref{long_term} shows the longer-term evolution of the \nreference \\textit{no-DW} simulation (\\BT{three} left frames) and of the disk-wind simulation (\\BT{three} right frames) at times $t=71$, $119$ and $167$~yr.\nFrom this figure,\nwe see that the morphologies of the regions perturbed by the jet after the passage of several IWS\nare very different in the two cases.\n\nIn the \\textit{no-DW} simulation, the region perturbed by the jet behind the leading bowshock\nexpands into a roughly\ncylindrical shape, which tapers off close to the position of the outflow source (where it becomes a weak, radially expanding shock). 
This\nis the standard shape of the perturbed region in simulations of variable, radiative jets propagating into a uniform static medium,\nseen since the early\nwork of \\citet{1993ApJ...413..198S} and \\citet{1994ApJ...434..221B}.\n\nIn the disk wind simulation, in contrast, the\nregion perturbed by the jet behind the leading bowshock takes\na conical shape, tapering off at large distances from the outflow\nsource. \nFor the parameters of our DW simulation, the half-opening angle of the perturbed region \nis $\\alpha\\approx 11^\\circ$ (see Fig.~\\ref{long_term}). \nThis cone is located outside the predicted trajectory of the edge of the bow shock (drawn as a black dashed line) given by \\BT{Eq.}~(\\ref{zfrf2}). This broadening occurs because, as discussed in item 3 of Section 3.2.2, the edge of the bow shock drives a weak outer shock into the undisturbed DW, which propagates away at a speed close to $c_s\\sim2.8$ km~s$^{-1}$. Taking into account the advection of the weak shock by the DW, one predicts that this will broaden the disturbed region by an angle $\\beta = \\arctan(c_s\/\\textnormal{v}_w) \\approx 4\\degr$, in agreement with the observed cone opening. Obviously, in the no-DW simulation, this weak outer shock travels laterally without being advected, and no limiting cone forms.\n\n\nIn the surprisingly simple configuration adopted for our jet+disk wind simulation, the overall long-term effect of the disk wind is\nto stop the perturbations from travelling beyond this ``opening cone'' of the sideways ejection from the IWS. 
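The quoted broadening angle follows directly from this advection argument, using the disk-wind sound speed of about 2.8 km/s given in the previous subsection:

```python
import math

# Weak shock: lateral expansion ~ c_s, axial advection ~ v_w, so the disturbed
# cone is broadened beyond the bow shock edge trajectory by beta = arctan(c_s / v_w).
c_s, v_w = 2.8, 38.4                        # km/s: disk-wind sound speed and velocity
beta = math.degrees(math.atan(c_s / v_w))   # ~4.2 deg, the ~4 deg quoted above
```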
\n\nAnother important effect of the DW is to push the locus of 50\\% ambient material (white contour) \ncloser to the jet axis than in the no-DW case, due to the partial refilling by the disk wind behind each bowshock.\nHence, the first few IWS close to the source can still sweep up (possibly molecular) DW material.\nThe internal IWS are also more curved than in the no-DW case, where material ejected sideways meets a very low-density cocoon,\nproducing flat-topped internal bowshocks (see \\BT{Fig.} \\ref{long_term}).\n\n\\section{Summary}\n\nIn this paper we have presented a first exploration of a hydrodynamical flow composed of an inner,\nvariable jet surrounded by a slower, steady, cylindrical disk wind. The jet variability\nproduces internal working surfaces (IWS) which drive bow shocks into the disk wind, producing\na strong coupling between the two components of the flow.\n\nWe have developed a standard thin shell model for the bow shock driven into the disk wind\nby a single IWS, for a jet of arbitrarily small radius (see Section 2), \nderiving the shape of the bow shock and the refilling by the continuing disk wind\nof the cavity left behind by the bow shock. The model was extended\nto give a qualitative description of the flow resulting from two or more successive IWS bow shocks\nplowing through the disk wind (Section 3).\n\nThe appropriateness and limitations of the predictions of bow shock shapes and kinematics from this analytic model \nhave been checked with axisymmetric numerical simulations:\none of a variable jet+disk wind configuration,\nand a second reference simulation with the same variable jet surrounded by a stationary\nenvironment. We compared the analytic model with the numerical simulations, and we found\nrelatively good agreement, giving us an understanding of the main features of\nthe simulated flows. 
These features are:\n\\begin{itemize}\n\\item the bow shocks of the numerical IWS have cubic morphologies which can be reproduced\nquite convincingly with the thin-shell, momentum conserving analytic\nmodel (see \\BT{Eqs.}~\\ref{rz}-\\ref{rfz} and Fig. \\ref{nodw_simu}),\n\\item the kinematics in the simulated bow shocks has a behavior\nwhich approximately follows the kinematics predicted from the analytic model\nfor the fully-mixed layer (for jet-dominated material) or \nthe immediate post-bow shock gas (for high-pressure swept-up ambient gas) (see Figs. 4. and 8).\n\\item these bow shocks leave behind cavities which are partially refilled\nby the slower disk wind (see Figs.~3, 5 and 8).\n\\item thanks to this refilling, subsequent IWS will propagate into fresh disk wind material \nup to a distance from the source $l_c = {\\Delta z}$\/(v$_j$\/v$_w$-1) (see Fig.~7).\n\\end{itemize}\n\nThe main contribution of this paper is thus to provide a numerically validated, simple analytic model which can\nbe used to model bow-like shapes of knots observed close to the outflow sources in high velocity,\ncollimated optical and molecular outflows \\citep{2000A&A...356L..41L,2015A&A...581A..85P}.\nAs shown by our simulations, this shape modeling (in the narrow jet limit) allows one to estimate the sideways ejection velocity \nfrom the IWS and the length scale of the bowshock. From this, constraints could be inferred on the \nmass-loss rate from the IWS and the surrounding flow properties (see \\BT{Eq.}~\\ref{l0}).\n\nAnother important contribution of this paper is to predict the regions\nwhere a surrounding disk-wind would remain unperturbed.\nA quite dramatic result of our jet+disk wind simulation is that the perturbations of the disk wind by the IWS bow shocks \nare confined inside a cone.\nTherefore, all of the gas outside this confinement cone is unperturbed disk wind material. 
Also,\nthere are pockets of undisturbed disk wind material within this cone, in the refilled region\nbetween the source and the last IWS, and also ahead of the latest\nIWS when it is at $z < l_c$ (see the three right hand frames of Fig. 10). \nThese are the regions in which one still \nfinds a record of the undisturbed characteristics of the disk wind, \nwhich could be useful for comparisons with disk wind models.\n\nFinally, another result of observational interest is that we identify several \ndistinctive signs of a cylindrical DW around a time-variable jet:\n(i) bow shocks that close upon the axis at a finite distance from the source (at a fraction v$_w$\/v$_j$ of the distance to the bow apex), \n(ii) a non-zero (= v$_w$) asymptotic value of longitudinal velocity in the far bowshock wings,\n(iii) internal bowshocks that are curved rather than \"flat-topped\", \n(iv) a predominance of DW material ahead of the first few IWS, which (if the DW is chemically richer and\/or dustier than the jet) \nshould produce different emission signatures compared to the more distant IWS.\n\nExtensions of the analytic model to more complex jet+disk wind flows do not\nappear very attractive (as, e.g., relaxing the assumption of a cylindrical uniform disk\nwind) as quite complex expressions are obtained, and are therefore not straightforwardly\napplicable to model observed structures. 
On the other hand, the numerical simulations\npresented here can be extended in many directions:\n\\begin{itemize}\n\\item including a more realistic disk wind model (e.g., with a radial dependence of the density and velocity, and a velocity not aligned with\nthe outflow axis),\n\\item studying the effect of a non-top hat jet cross section,\n\\item going from the HD to the MHD equations,\n\\item including a chemical\/ionic network and the associated cooling functions.\n\\end{itemize}\nIf future comparisons between jet+disk wind models and observations are sufficiently\npromising, the items listed above (as well as other easily imagined possibilities)\nwill become worthy of exploration.\n\n\\begin{acknowledgements}\n\\BT{We thank the anonymous referee for useful comments.} This work has been done within the LABEX PLAS@PAR project, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the ''Programme d'Investissements d'Avenir'' under the reference ANR-11-IDEX-0004-02. AR acknowledges support from the DGAPA (UNAM) grant IG100218. This research has made use of NASA's Astrophysics Data System. \n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfpxm b/data_all_eng_slimpj/shuffled/split2/finalzzfpxm new file mode 100644 index 0000000000000000000000000000000000000000..be41f662434e2ba0683e70a26775ff6b61e86054 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfpxm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\ \\ We have recently developed methods for obtaining exact two-point resistance of certain circulant graph namely, the complete graph minus $N$ edges \\cite {Chair1}. 
In this paper, using similar techniques and ideas, we consider trigonometrical sums that arise in the computation of the two-point resistance of finite resistor networks \\cite{Wu}, in the work of McCoy and Orrick on the chiral Potts model \\cite {McCoy}, and in the Verlinde dimension formula of the twisted\/untwisted space of conformal blocks of the $SO(3)$\/$SU(2)$ WZW model \\cite{Verlinde}.\n\n\\ \\ Before considering these trigonometrical sums, we test the techniques used in \\cite{Chair1} by first deriving the Green's function of the one-dimensional lattice graphs with free boundaries, and the two-point resistance of the $N$-cycle graph \\cite{Wu}. The same techniques are then used to evaluate a trigonometrical sum that played a crucial role in proving R. F. Scott's conjecture on the permanent of the Cauchy matrix \\cite{Minc,Todd}. Having tested these techniques, an alternative derivation is then given for a certain trigonometrical sum that appeared in the perturbative chiral Potts model \\cite {McCoy}, \\cite {Mehta}.\n\n\\ \\ We have also considered the general case studied by Gervois and Mehta \\cite {Mehta}, and by Berndt and Yeap \\cite {Berndt}; here, our results agree with those in \\cite {Mehta}. It turns out that Verlinde's dimension formulas for the untwisted space of conformal blocks may be obtained simply by summing over a certain parameter of a trigonometrical sum considered in \\cite{Mehta}. For the twisted space of conformal blocks, however, the parameter is restricted to a particular value. It is shown that the dimension of the conformal blocks on a genus $g\\geq 2$ Riemann surface may be obtained through a recursion formula that relates different genera. 
Mathematically speaking, the dimension of the space of conformal blocks is obtained by expanding a certain generating function order by order, or using the Hirzebruch-Riemann-Roch theorem \\cite{Zagier}.\n\n\\ \\ By using the method given in \\cite {Chair1}, we are able to obtain a closed-form formula for the two-point resistance of a $2\\times N $ resistor network \\cite {Chair2}. In this paper, an exact computation of the corner-to-corner resistance as well as the total effective resistance of a $2\\times N$ resistor network will be given. The total effective resistance, also called the Kirchhoff index \\cite{Randic}, is an invariant quantity of the resistor network or graph.\n\n\\ \\ The exact two-point resistance of an $M\\times N$ resistor network is given in terms of a double sum and not in a closed form \\cite {Wu}. Therefore, the computations carried out in this paper represent the first non-trivial exact results for the two-point resistance of a two-dimensional resistor network.\n\n\\ \\ This method is then used to evaluate variants of trigonometrical sums, some of which are related to number theory; we hope that these trigonometrical sums will have some physical applications. It is interesting to point out that all the computations of the trigonometrical sums in this paper are based on a formula by Schwatt \\cite{Schwatt} on trigonometrical power sums, and the representation of the binomial coefficients by the residue operator. 
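To make the Kirchhoff index concrete, here is a minimal numerical sketch (the code and function name are mine, not the paper's): for the $N$-cycle graph the Laplacian eigenvalues are $4\sin^2(n\pi/N)$, $n=1,\dots,N-1$, so the Kirchhoff index $Kf = N\sum_n 1/\lambda_n$ can be checked against the known closed form $N(N^2-1)/12$.

```python
import math

def kirchhoff_index_cycle(N):
    """Kirchhoff index of the N-cycle graph, computed from the
    Laplacian spectrum lambda_n = 4 sin^2(n*pi/N), n = 1..N-1."""
    return N * sum(1.0 / (4.0 * math.sin(n * math.pi / N) ** 2)
                   for n in range(1, N))

# Check against the closed form N(N^2 - 1)/12 for a range of sizes.
for N in range(2, 30):
    assert abs(kirchhoff_index_cycle(N) - N * (N * N - 1) / 12.0) < 1e-7
```

The same spectral definition applies to any resistor network, with the cycle's eigenvalues replaced by those of the network's graph Laplacian.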
Schwatt's formula is modified slightly only in the case of the corner-to-corner resistance, the Kirchhoff index, and the trigonometrical sum given by $ F_{1}(N,l,2)$ (see Section 6); this was also the case in our previous paper \\cite{Chair1}.\n\n\\ \\ This paper is organized as follows: in Section 2, we give explicit computations of the two-point resistance of the $N$-cycle graph and the Green's function of the one-dimensional lattice, and in Section 3, we give a simple derivation of a trigonometrical sum connected with Scott's conjecture on the permanent of the Cauchy matrix. In Section 4, we consider trigonometrical sums arising in the chiral Potts model, and in the Verlinde formula of the dimension of the conformal blocks. Exact computations of the corner-to-corner resistance and the Kirchhoff index of a $2\\times N$ resistor network will be given in Section 5. In Section 6, we consider another class of trigonometrical sums, some of which are related to number theory; finally, in Section 7, our conclusions are given.\n\\section{ The two-point resistance of one-dimensional lattice using the residue operator }\n\n\\ \\ In this Section, we first start with the computation of the two-point resistance of the $N$-cycle graph, and then move to the trigonometrical sum related to the two-point resistance of the one-dimensional lattice with free boundaries, that is, the path graph. The two-point resistance of the $N$-cycle graph between any two nodes $\\alpha$ and $\\beta$ is given by the following simple closed formula \\cite{Wu},\n\\begin{equation} \n\\label{t1}\nR(l)=\\frac{1}{N}\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^2(n\\pi\/N)}=\\frac{l(N-l)}{N},\n\\end{equation}\nwhere $l=|\\alpha-\\beta|$, and $1\\leq \\alpha,\\beta \\leq N. 
$\nOur derivation for the two-point resistance starts with the following trigonometrical identity $$\\cos(2ln\\pi\/N)=\\sum_{s=0}^{l}(-1)^s \\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sin^{2s}(n\\pi\/N), $$ from which the above trigonometrical sum may be rewritten as\n\\begin{equation} \n\\label{t2}\nR(l)=\\frac{1}{2N}\\sum_{s=1}^{l}(-1)^{s +1}\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sum_{n=1}^{N-1} \\sin^{2(s-1)}(n\\pi\/N).\n\\end{equation} \nOn the other hand, Schwatt's formula for trigonometrical power sums \\cite{Schwatt}, gives\n\\begin{equation} \n\\label{t3}\n\\sum_{n=1}^{N-1} \\sin^{2(s-1)}(n\\pi\/N)=\\frac{1}{2^{2(s-1)-1}}\\sum_{t=1}^{s-1}(-1)^{t +1}\\binom {2(s-1)}{s-1-t} +\\frac{N-1}{2^{2(s-1)}}\\binom {2(s-1)} {s-1}.\n\\end{equation}\n\n\\ \\ \n Therefore, the two-point resistance may be obtained by evaluating the binomial sums in the expression of $R(l)$, based on the residue operator. This operator played a crucial role in evaluating combinatorial sums and proving combinatorial identities \\cite{Egorychev}. First, let us recall the definition of the residue operator $\\hbox{res}$. To that end, let $G(w)= \\sum_{k=0}^{\\infty}a_{k}w^{k}$ be a generating function for a sequence $\\{a_{k}\\}$. Then the k-th coefficient of $G(w)$ may be represented by the formal residue as follows\n$$a_{k}= \\hbox{res}_w G(w){w^{-k-1}}.$$\nThis is equivalent to the Cauchy integral representation of $a_k$,\n\\[a_k=\\frac{1}{2\\pi i}\\oint _{|z|=\\rho}\\frac{G(w)}{w^{k+1}}dw ,\\] for coefficients of the Taylor series in a punctured neighborhood of zero. 
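Schwatt's power-sum formula quoted above lends itself to a quick numerical sanity check; the sketch below (function names are mine) compares Eq. (\ref{t3}) with a direct evaluation of the sine power sum.

```python
import math

def power_sum_direct(N, s):
    """Direct evaluation of sum_{n=1}^{N-1} sin^{2(s-1)}(n*pi/N)."""
    return sum(math.sin(n * math.pi / N) ** (2 * (s - 1)) for n in range(1, N))

def power_sum_schwatt(N, s):
    """Right-hand side of Schwatt's formula, Eq. (t3), with p = s - 1."""
    p = s - 1
    first = 2.0 ** (1 - 2 * p) * sum((-1) ** (t + 1) * math.comb(2 * p, p - t)
                                     for t in range(1, p + 1))
    second = (N - 1) * math.comb(2 * p, p) / 4.0 ** p
    return first + second

for N in (5, 8, 11):
    for s in range(1, 5):  # exponents small compared to N
        assert abs(power_sum_direct(N, s) - power_sum_schwatt(N, s)) < 1e-9
```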
In particular, the generating function of the binomial coefficient sequence $\\binom {n}{k}$ for a fixed $n$ is given by $$G(w)= \\sum_{k=0}^{n}\\binom {n}{k} w^{k} =(1+w)^{n},$$ \nand hence $$\\binom {n}{k}=\\hbox{res}_w (1+w){^n}{w^{-k-1}}.$$ The other binomial coefficient that we need in this paper is the following: $$\\binom {2n}{n}=\\hbox{res}_w (1-4w){^{-1\/2}}{w^{-n-1}}.\n$$\n\n\\ \\ Before finishing this brief summary, we should mention one important property of the residue operator $\\hbox{res}$, namely linearity, which is crucial in computations. Linearity states that, given constants $\\alpha$ and $\\beta$, $$\\alpha \\hbox{res}_w G_{1}(w){w^{-k-1}}+ \\beta\\hbox{res}_w G_{2}(w){w^{-k-1}}=\\hbox{res}_w(\\alpha G_{1}(w)+ \\beta G_{2}(w)) {w^{-k-1}}.$$ \nLet us now evaluate the first term in Eq. (\\ref{t2}) using the residue operator, namely\n\\begin{eqnarray}\n\\label{t4}\nR_{1}(l):&=&\\frac{2}{N}\\sum_{s=1}^{l}(-1)^{s }\\frac{2l}{l+s}\\binom {l+s} {l-s}\\sum_{t=1}^{s-1}(-1)^{t }\\binom {2(s-1)}{s-1-t}\\nonumber\\\\&=&\\frac{2}{N}\\sum_{s=1}^{l}(-1)^{s }\\frac{2l}{l+s}\\binom {l+s} {l-s}\\sum_{t=1}^{s-1}(-1)^{t }\\hbox{res} \\frac{(1+w)^{2(s-1)}}{w^{s-t}}\\nonumber\\\\&=&\\frac{2}{N}\\sum_{s=1}^{l}(-1)^{s }\\frac{2l}{l+s}\\binom {l+s} {l-s}\\hbox{res}_{w=0}\\frac{(1+w)^{2s}}{(1+w)^{3}w^{s-1}}.\n\\end{eqnarray}\nIn obtaining Eq. (\\ref{t4}), we discarded an analytic term at the pole $w=0$ of order $s-1$. By making the change of variable $l-s=k$, Eq. 
(\\ref{t4}) may be rewritten as \n\\begin{eqnarray}\n\\label{t5}\nR_{1}(l)&=&(-1)^{l+1}\\frac{2}{N}\\hbox{res}_{w=0}\\frac{w}{(1+w)^{3}}\\sum_{k=1}^{l-1}(-1)^{ k}\\frac{2l}{2l-k}\\binom {2l-k} {k}\\bigg(\\frac{1+w}{\\sqrt{w}}\\bigg)^{2(l-k)}\\nonumber\\\\&=& (-1)^{l+1}\\frac{2}{N}\\hbox{res}_{w=0}\\frac{w}{(1+w)^{3}}\\bigg(C_{2l}\\big(\\frac{1+w}{\\sqrt{w}}\\big)-(-1)^{l}\\bigg),\n\\end{eqnarray}\nwhere $$ C_{2l}(x)= 2T_{2l}(x\/2)=\\sum_{k=0}^{l}(-1)^{k} \\frac{2l}{2l-k}\\binom {2l-k} {k}x^{2l-2k},$$\nis the normalized Chebyshev polynomial of the first kind \\cite{Rivlin}, and\n$$T_{2l}(x\/2)=\\frac{1}{2}\\Bigg\\lbrack\\Bigg(\\frac{x}{2}+\\sqrt{(x\/2)^2-1}\\Bigg)^{2l} +\\Bigg(\\frac{x}{2}-\\sqrt{(x\/2)^2-1}\\Bigg)^{2l} \\Bigg\\rbrack.$$\nUsing the fact that $C_{2l}\\big(\\frac{1+w}{\\sqrt{w}}\\big)= \\frac{1}{w^{l}}+w^{l}$, the final expression for the first term $R_{1}(l)$ reads\n\\begin{eqnarray}\n\\label{t6}\nR_{1}(l)&=&(-1)^{l+1}\\frac{2}{N}\\hbox{res}_{w=0}\\frac{1}{(1+w)^{3}w^{l-1}}\\nonumber\\\\&=&-\\frac{l(l-1)}{N}.\n\\end{eqnarray}\nSimilarly, the second term may be written as \n\\begin{eqnarray}\n\\label{t7}\nR_{2}(l):&=&\\frac{(N-1)}{N}\\sum_{s=1}^{l}(-1)^{s+1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\binom {2(s-1)} {s-1}\\nonumber\\\\&=&(-1)^{l+1}\\frac{(N-1)}{N}\\hbox{res}_{w=0}\\frac{1}{(1+w)^{2}w^{l}}\\nonumber\\\\&=& \\frac{l(N-1)}{N}.\n\\end{eqnarray}\n Adding the contributions given by Eqs. (\\ref{t6}) and (\\ref{t7}), we get \n\\begin{equation}\n\\label{t8}\nR(l)=\\frac{1}{N}\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^2(n\\pi\/N)}=\\frac{l(N-l)}{N}.\n\\end{equation}\n\n\\ \\ Now, we want to evaluate the following trigonometrical sum $$ F_N(l ) = \\frac {1} {N}\\sum_{n=1}^{N-1} \\frac {1-\\cos nl\\pi\/N} {1-\\cos n\\pi\/N};$$ this sum arises in connection with the two-point resistance of a path graph \\cite{Wu}. 
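The closed form just obtained in Eq. (\ref{t8}) can be confirmed by brute force; the snippet below is an illustrative check of mine, not part of the derivation.

```python
import math

def cycle_resistance(N, l):
    """Two-point resistance of the N-cycle, Eqs. (t1)/(t8), by direct summation."""
    return sum(math.sin(n * l * math.pi / N) ** 2
               / math.sin(n * math.pi / N) ** 2 for n in range(1, N)) / N

for N in range(2, 15):
    for l in range(1, N):
        assert abs(cycle_resistance(N, l) - l * (N - l) / N) < 1e-9
```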
The evaluation of this term may be done as follows: \n\\begin{eqnarray}\n\\label{t9}\nF_N(l ) &= &\\frac {1} {N}\\sum_{n=1}^{N-1} \\frac {1-\\cos nl\\pi\/N} {1-\\cos n\\pi\/N}\\nonumber\\\\&=&\\frac {1} {N}\\sum_{n=1}^\\frac{N}{2} \\frac {1-\\cos(2n-1)l\\pi\/N} {1-\\cos(2n-1)\\pi\/N}+\\frac {1} {N}\\sum_{n=1}^{\\frac{N}{2}-1} \\frac {1-\\cos 2nl\\pi\/N} {1-\\cos2n\\pi\/N},\n\\end{eqnarray}\nHere, $N$ is assumed to be even; similar steps may be used for odd $N$. It is interesting to note that in evaluating $F_N(l )$, we only need to compute the first term since the second term is related to the two-point resistance of the $N$-cycle graph given by Eq. (\\ref{t8}). Then, the first term may be written as \n\\begin{eqnarray}\n\\label{t10}\n\\frac {1} {N}\\sum_{n=1}^\\frac{N}{2} \\frac {1-\\cos(2n-1)l\\pi\/N} {1-\\cos(2n-1)\\pi\/N}&=&\\frac {1} {N} \\sum_{n=1}^\\frac{N}{2} \\frac {\\sin^{2}(2n-1)l\\pi\/2N} {\\sin^{2}(2n-1)\\pi\/2N}\\nonumber\\\\&=&\\frac{1}{2N}\\sum_{s=1}^{l}(-1)^{s +1}\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sum_{n=1}^\\frac{N}{2}\\sin^{2(s-1)}\\frac{(2n-1)\\pi}{2N}.\n \\end{eqnarray}\n By using the identity $$\\sum_{n=1}^\\frac{N}{2}\\sin^{2(s-1)}(2n-1)\\pi\/2N= \\frac{1}{2}\\bigg(\\sum_{n=1}^{N-1}\\sin^{2(s-1)}n\\pi\/2N+\\sum_{n=1}^{N-1}(-1)^{n-1}\\sin^{2(s-1)}n\\pi\/2N\\bigg), $$ \n and the formulas for trigonometrical power sums given in \\cite{Schwatt}, one can show\n\\begin{eqnarray}\n\\label{t11}\n\\sum_{n=1}^\\frac{N}{2}\\sin^{2(s-1)}(2n-1)\\pi\/2N=\\frac{2N}{2^{2s}}\\binom {2(s-1)} {s-1},\n\\end{eqnarray}\nwhich, in turn, implies that the formula for the first term should be\n \\begin{eqnarray}\n\\label{t12}\n\\frac {1} {N}\\sum_{n=1}^\\frac{N}{2} \\frac {1-\\cos(2n-1)l\\pi\/N} {1-\\cos(2n-1)\\pi\/N}&=&\\frac{1}{2}\\sum_{s=1}^{l}(-1)^{s +1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\binom {2(s-1)} {s-1}\\nonumber\\\\&=&\\frac{l}{2}.\n\\end{eqnarray}\nTo compute the second term given in Eq. 
(\\ref{t9}), we use the following symmetry \nenjoyed by the two-point resistance of the $N$-cycle graph, for even $N$,\n\\begin{eqnarray}\n\\label{t13}\n\\frac {1} {N}\\sum_{n=1}^{N-1} \\frac {1-\\cos 2nl\\pi\/N} {1-\\cos2n\\pi\/N}&=&\\frac {2} {N}\\sum_{n=1}^{\\frac{N}{2}-1} \\frac {1-\\cos 2nl\\pi\/N} {1-\\cos2n\\pi\/N}+\\frac {1} {2N}\\big(1-(-1)^{l}\\big).\n\\end{eqnarray}\nTherefore, the second term may be obtained to give the following closed formula for $F_N(l ) $\n\\begin{equation}\n\\label{t14}\nF_N(l ) = \\frac {1} {N}\\sum_{n=1}^{N-1} \\frac {1-\\cos nl\\pi\/N} {1-\\cos n\\pi\/N}= l-\\frac{1}{N}\\bigg(\\frac{l^2}{2}+\\frac{1}{4}(1-(-1)^{l})\\bigg).\n\\end{equation}\nThis is in complete agreement with the formula for the Green's function of the path graph in \\cite{Wu}.\n\\section{Trigonometrical sum connected with Scott's conjecture}\n\n\\ \\ \n In proving R. F. Scott's conjecture on the permanent of the Cauchy matrix, Minc in \\cite{Minc} needed to evaluate the following trigonometrical sum:\n \\begin{equation}\n\\label{sc0}\n \\sum_{n=1}^{N} \\frac {\\cos (2n-1)l\\pi\/N} {1-\\cos (2n-1)\\pi\/N}.\n \\end{equation}\n He obtained a closed-form formula for this sum using induction,\n and the sum turns out to be equal to $\\frac{N}{2}(N-2l)$. A short time later, Stembridge and Todd \\cite{Todd} gave a proof of the evaluation of this sum based on linear algebra. Here, we give a short derivation of this sum using our formula given by Eq. (\\ref{t12}), and the well-known identity \n \\begin{equation}\n\\label{sc1}\n \\sum_{n=1}^{N-1}\\frac{1}{\\sin^2(n\\pi\/N)}=\\frac{N^{2}-1}{3}.\n\\end{equation}\nOur derivation follows easily by realizing that the summand in Eq. (\\ref{t12}) is symmetric under the shift $n\\rightarrow N+1-n$, and as a consequence one gets \n\\begin{equation}\n\\label{sc2}\n\\sum_{n=1}^{N} \\frac {1-\\cos(2n-1)l\\pi\/N} {1-\\cos(2n-1)\\pi\/N}=N l.\n\\end{equation}\nIn order to evaluate the sum in Eq. 
(\\ref{sc0}), we need a formula for the sum $$\\sum_{n=1}^{N} \\frac {1} {1-\\cos(2n-1)\\pi\/N} =\\frac{1}{2}\\sum_{n=1}^{N} \\frac {1} {\\sin^{2}(2n-1)\\pi\/2N }.$$\nThe latter may be evaluated as follows\n\\begin{eqnarray}\n\\label{sc3}\n\\frac{1}{2}\\sum_{n=1}^{N} \\frac {1} {\\sin^{2}(2n-1)\\pi\/2N}&=&\\frac{1}{2}\\sum_{n=1}^{2N-1}\\frac{1}{\\sin^2(n\\pi\/2N)}-\\frac{1}{2}\\sum_{n=1}^{N-1}\\frac{1}{\\sin^2(2n\\pi\/2N)}\\nonumber\\\\&=&\\frac{N^{2}}{2}.\n\\end{eqnarray}\nIn obtaining Eq. (\\ref{sc3}), we used the identity given in Eq. (\\ref{sc1}). Thus, using Eqs. (\\ref{sc2}) and (\\ref{sc3}), we may write\n\\begin{equation}\n\\label{sc4}\n \\sum_{n=1}^{N} \\frac {\\cos (2n-1)l\\pi\/N} {1-\\cos (2n-1)\\pi\/N}=\\frac{N^2}{2}-Nl.\n\\end{equation}\nThis is exactly the result obtained by Minc, Stembridge and Todd \\cite{Minc,Todd}.\n\\section{Trigonometrical sums arising in the chiral Potts model and in the Verlinde's formula}\n\n\\ \\ In this section the trigonometrical sum $T_{4}(l):=\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^4(n\\pi\/N)} $ is evaluated in closed form using the residue operator. We also give an almost closed formula for the general case $T_{2m}(l):=\\sum_{n=1}^{N-1}\\frac{\\sin^{2}(nl\\pi\/N)}{\\sin^{2m}(n\\pi\/N)} $, for $m\\geq1$. The first trigonometrical sum arises in the work of McCoy and Orrick on the chiral Potts model \\cite {McCoy}; this sum, along with other trigonometrical identities, was proved by Gervois and Mehta \\cite {Mehta}. The second sum, namely $T_{2m}(l)$, was considered by Gervois and Mehta \\cite {Mehta} using a recursion formula. Here, we will obtain recursion formulas for both $T_{2m}(l)$ and $$T_{2m}:=\\sum_{n=1}^{N-1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}.$$\n\n\\ \\ If we set $N=k+2$ and $m =g-1$, $k,g$ being the level of the $su(2)$ Kac-Moody algebra and the genus of the Riemann surface, respectively. 
Then, the sum $ T_{2m}$ up to some normalization factor is nothing but the dimension of the space of the conformal blocks of the $SU(2)$ WZW model. As a consequence, the recursion formula derived for $ T_{2m}$ may be used to obtain the expression for the dimension of the space of the conformal blocks for a given genus $g$. Similar computations are carried out for the twisted trigonometrical sum $$T_{2m}^{t}:=\\sum_{n=1}^{N-1}(-1)^{n+1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}.$$ This is related to the dimension of the space of the conformal blocks of the $SO(3)$ WZW model.\n \\subsection {Trigonometrical sums and the perturbative chiral Potts model}\n \n \\ \\ Let us first start with the trigonometrical sums arising in the perturbative treatment of the chiral Potts model \\cite {McCoy}. Techniques of the previous section may be used to evaluate the sum $T_{4}(l)$ as follows\n\\begin{eqnarray}\n\\label{t15}\nT_{4}(l)&=&\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^4(n\\pi\/N)}\\nonumber\\\\&=&l^{2}\\sum_{n=1}^{N-1}\\frac{1}{\\sin^2(n\\pi\/N)}+\\frac{1}{2}\\sum_{s=2}^{l}(-1)^{s +1}\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sum_{n=1}^{N-1} \\sin^{2(s-2)}(n\\pi\/N)\\nonumber\\\\&=&\\frac{l^2}{3}(N^{2}-1)+8\\sum_{s=2}^{l}(-1)^{s }\\frac{2l}{l+s}\\binom {l+s} {l-s}\\sum_{t=1}^{s-2}(-1)^{t }\\binom {2(s-2)}{s-2-t}\\nonumber\\\\&+& 4(N-1)\\sum_{s=2}^{l}(-1)^{s +1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\binom {2(s-2)}{s-2},\n\\end{eqnarray}\nthe first term in the above equation follows from the well-known identity $$\\sum_{n=1}^{N-1}\\frac{1}{\\sin^2(n\\pi\/N)}=\\frac{N^{2}-1}{3},$$ while the second and the third terms may be computed using the residue operator as in the previous section to give,\n\\begin{eqnarray}\n\\label{t16}\n\\sum_{s=2}^{l}(-1)^{s }\\frac{2l}{l+s}\\binom {l+s} {l-s}\\sum_{t=1}^{s-2}(-1)^{t }\\binom {2(s-2)}{s-2-t}&=&(-1)^{l+1} 
\\hbox{res}_{w=0}\\frac{1}{(1+w)^{5}w^{l-2}}\\nonumber\\\\&=&\\frac{1}{4!}(l+1)l(l-1)(l-2),\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\label{t17}\n\\sum_{s=2}^{l}(-1)^{s +1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\binom {2(s-2)}{s-2}&=& (-1)^{l+1}\\hbox{res}_{w=0}\\frac{1}{(1+w)^{4}w^{l-1}}\\nonumber\\\\&=&-\\frac{1}{3!}(l+1)l(l-1).\n\\end{eqnarray}\nTherefore, the closed formula for the sum given in Eq. (\\ref{t15}) reads\n\\begin{eqnarray}\n\\label{t18}\n\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^4(n\\pi\/N)}&=&\\frac{l^2}{3}(N^{2}-1)+\\frac{1}{3}(l+1)l(l-1)(l-2)-\\frac{2(N-1)}{3}(l+1)l(l-1)\\nonumber\\\\&=&\\frac{l^2}{3}(N-l)^{2}+\\frac{2l}{3}(N-l).\n\\end{eqnarray}\nThis is exactly the result obtained by Gervois and Mehta using a recursion formula satisfied by $T_{2m}(l)$ \\cite {Mehta}. Next, we will give another recursion formula for the sum $T_{2m}(l)$. Now, $T_{2m}(l)$ may be written as \n\\begin{eqnarray}\n\\label{t19}\nT_{2m}(l)&=&\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\frac{1}{2}\\sum_{n=1}^{N-1}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\frac{1}{\\sin^{2(m-s)}(n\\pi\/N)}\\nonumber\\\\&+&\\frac{1}{2}\\sum_{s=m}^{l}(-1)^{s +1}\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sum_{n=1}^{N-1} \\sin^{2(s-m)}(n\\pi\/N). \n\\end{eqnarray}\nThe first term on the right-hand side is written in terms of the sum $$T_{2k}:=\\sum_{n=1}^{N-1}\\frac{1}{\\sin^{2k}(n\\pi\/N)}. $$ This may be computed \\cite {Mehta}, using $$T_{2k}=\\sum_{n=1}^{N-1}\\Big(\\cot^{2}(\\frac{n\\pi}{N})+1\\Big)^{k}=\\sum_{l=0}^{k} \\binom{k}{l}S_{l}, $$ where $S_{l}=\\sum_{n=1}^{N-1}\\Big(\\cot^{2}(\\frac{n\\pi}{N}) \\Big)^{l}$ and a recurrence relation satisfied by the power sums $S_{l} $. It turns out that $T_{2k}$ may also be obtained using a recursion formula; this will be shown shortly. 
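The closed form of $T_4(l)$ can likewise be verified by brute force; the check below (a sketch of mine) uses the simplified form $T_4(l)=\frac{l^2}{3}(N-l)^2+\frac{2l}{3}(N-l)$.

```python
import math

def T4_direct(N, l):
    """Direct evaluation of T_4(l) = sum_n sin^2(n*l*pi/N) / sin^4(n*pi/N)."""
    return sum(math.sin(n * l * math.pi / N) ** 2
               / math.sin(n * math.pi / N) ** 4 for n in range(1, N))

for N in range(2, 12):
    for l in range(1, N):
        closed = l * l * (N - l) ** 2 / 3.0 + 2.0 * l * (N - l) / 3.0
        assert abs(T4_direct(N, l) - closed) < 1e-6
```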
Now, the second term may be written as follows \n\\begin{eqnarray}\n\\label{t20}\n\\tilde{T}_{2m}(l):&=&\\frac{1}{2}\\sum_{s=m}^{l}(-1)^{s +1}\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sum_{n=1}^{N-1} \\sin^{2(s-m)}(n\\pi\/N)\\nonumber\\\\&=&2^{2m-1}\\sum_{s=m}^{l}(-1)^{s }\\frac{2l}{l+s}\\binom {l+s} {l-s}\\sum_{t=1}^{s-m}(-1)^{t }\\binom {2(s-m)}{s-m-t}\\nonumber\\\\&+&2^{2m-2}(N-1)\\sum_{s=m}^{l}(-1)^{s +1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\binom {2(s-m)}{s-m}\\nonumber\\\\&=&(-1)^{l+1}2^{2m-1}\\hbox{res}_{w=0}\\frac{1}{(1+w)^{2m+1}w^{l-m}}\\nonumber\\\\&+&(-1)^{l+1}2^{2m-2}(N-1)\\hbox{res}_{w=0}\\frac{1}{(1+w)^{2m}w^{l+1-m}}\\nonumber\\\\&=&(-1)^{m}2^{2m-1}\\frac{(l+m-1)!}{(l-m-1)!(2m)!}+ (-1)^{m+1}(N-1)2^{2m-2}\\frac{(l+m-1)!}{(l-m)!(2m-1)!}.\n\\end{eqnarray}\nTherefore, we have succeeded in writing $\\tilde{T}_{2m}(l)$ in closed form. One can easily check that our results agree with those given in \\cite {Mehta}, and so the formula for $T_{2m}(l)$ becomes\n\\begin{eqnarray}\n\\label{t21}\nT_{2m}(l)&=&\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\frac{1}{2}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}T_{2(m-s)}\\nonumber\\\\&+&(-1)^{m+1}2^{2m-1}\\frac{(l+m-1)!}{(l-m)!(2m)!}(mN-l).\n\\end{eqnarray}\nSetting $m=1,2$, our previous results given by Eqs. (\\ref{t8}) and (\\ref{t18}), respectively, are recovered. From Eq. (\\ref{t21}), it is clear that in order to have a closed formula for $T_{2m}(l)$, one also needs the exact expressions for $T_{2k}$, $ k=1,\\cdots, m-1$. Next, we will show that $ T_{2k}$ satisfies a recursion formula that involves the $T_{2k}$'s. 
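Equation (\ref{t21}) itself can be tested numerically, with the needed $T_{2k}$'s computed by direct summation (a hedged sketch; all names are mine). The closed second term is used here for $l\ge m$.

```python
import math

def T2k(N, k):
    """T_{2k} by direct summation over the N-1 sine terms."""
    return sum(math.sin(n * math.pi / N) ** (-2 * k) for n in range(1, N))

def T2m_l_direct(N, m, l):
    """Left-hand side of Eq. (t21) by direct summation."""
    return sum(math.sin(n * l * math.pi / N) ** 2
               / math.sin(n * math.pi / N) ** (2 * m) for n in range(1, N))

def T2m_l_closed(N, m, l):
    """Right-hand side of Eq. (t21); valid here for l >= m."""
    first = 0.5 * sum((-1) ** (s + 1) * l / (l + s) * math.comb(l + s, l - s)
                      * 4 ** s * T2k(N, m - s) for s in range(1, m))
    second = ((-1) ** (m + 1) * 2 ** (2 * m - 1) * (m * N - l)
              * math.factorial(l + m - 1)
              / (math.factorial(l - m) * math.factorial(2 * m)))
    return first + second

for N in (6, 9):
    for m in (1, 2, 3):
        for l in range(m, N):
            assert abs(T2m_l_direct(N, m, l) - T2m_l_closed(N, m, l)) < 1e-5
```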
The expression for $T_{2m}$ may be obtained from $T_{2m}(l)$ via the following simple formula:\n\\begin{eqnarray}\n\\label{t22}\n\\sum_{l=1}^{N-1}T_{2m}(l)&=&\\sum_{l=1}^{N-1}\\sum_{n=1}^{N-1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\frac{N}{2}\\sum_{n=1}^{N-1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}.\n\\end{eqnarray}\nTherefore, we may write\n\\begin{eqnarray}\n\\label{t23}\nT_{2m}&=&\\sum_{n=1}^{N-1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\frac{1}{N}\\sum_{l=1}^{N-1}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}T_{2(m-s)}\\nonumber\\\\&+&\\frac{2}{N}\\sum_{l=1}^{N-1}(-1)^{l+1}2^{2m-1}\\hbox{res}_{w=0}\\frac{1}{(1+w)^{2m+1}w^{l-m}}\\nonumber\\\\&+&\\frac{2}{N}\\sum_{l=1}^{N-1}(-1)^{l+1}2^{2m-2}(N-1)\\hbox{res}_{w=0}\\frac{1}{(1+w)^{2m}w^{l+1-m}}\\nonumber\\\\&=&\\frac{1}{N}\\sum_{l=1}^{N-1}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}T_{2(m-s)}\\nonumber\\\\&+&(-1)^{m+1}2^{2m-1}\\frac{(N+m-1)!}{N (N-m-1)!(2m+1)!}(2mN+1-N).\n\\end{eqnarray}\nAs a result, from our recursion formula, the different $T_{2m}$'s may be obtained directly; we do not have to use the recurrence relation satisfied by the power sums $S_{l}$ \\cite {Mehta}. For $m=1$ the first term in Eq. (\\ref{t23}) does not contribute and the second term gives the well-known formula $ T_{2}=\\frac{N^{2}-1}{3}$. Now, for $m=2$, the first term may be computed to give $ \\frac{{(N^2-1)(N-1)}(2N-1)}{9}$, while the second term gives $ -\\frac{{(N^2-1)(N-2)}(3N+1)}{15}$, and hence, $T_{4}= \\frac{{(N^2-1)(N^2+11)}}{45}$, in full agreement with \\cite {Mehta}, \\cite {Berndt}.\n\\subsection{The Verlinde dimension formula}\n\n\\ \\ The Verlinde dimension formula may be obtained simply by setting $m=g-1$, $N=k+2$ in the expression of $T_{2m}$, where $g\\ge2$, $ k$ are the genus of the Riemann surface and the level of the affine Lie algebra $su(2)$, respectively. 
Then, $T_{2(g-1)}$ up to some normalization factor, is the dimension of the space of conformal blocks $V_{g}$ of the $SU(2)$ WZW model \\cite{Verlinde},\n \\begin{eqnarray}\n\\label{t24}\ndimV_{g,k}&=&\\Big(\\frac{k+2}{2}\\Big)^{g-1}\\sum_{n=1}^{k+1}\\frac{1}{\\sin^{2g-2}n\\pi\/(k+2)}\\nonumber\\\\&=&\\Big(\\frac{k+2}{2}\\Big)^{g-1} \\frac{1}{(k+2)}\\sum_{l=1}^{k+1}\\sum_{s=1}^{g-2}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}T_{2g-2 -2s} \\nonumber\\\\&+&(-1)^{g}2^{2g-3}\\Big(\\frac{k+2}{2}\\Big)^{g-1} \\frac{1}{(k+2)}\\frac{1}{(2g-1)}\\binom{k+g}{2g-2}\\big((k+2)(2g-3)+1\\big).\n\\end{eqnarray}\n As a consequence, the dimension of the space of the conformal blocks of the $SU(2)$ WZW model, may be computed using our recursion formula for $T_{2k}$. Our formula Eq. (\\ref{t24}) may be used to give \n\\begin{eqnarray}\n\\label{t25}\ndimV_{2,k}&=&\\frac{(k+1)(k+2)(k+3)}{6}\\nonumber\\\\dimV_{3,k}&=&\\frac{1}{5}\\frac{(k+1)(k+2)(k+3)}{6}\\Big[\\frac{(k+1)(k+2)(k+3)}{6}+2(k+2)\\Big]\\nonumber\\\\dimV_{4,k}&=&\\frac{1}{7}.\\frac{1}{5}\\frac{(k+1)(k+2)(k+3)}{6}\\nonumber\\\\&.&\\Big[\\frac{(k+1)(k+2)(k+3)}{6}\\Big[\\frac{2(k+1)(k+2)(k+3)+27(k+2)}{6}\\Big]+6(k+2)^{2}\\Big].\\nonumber\\\\\n\\end{eqnarray}\nOur first two expressions for dimension of the space of conformal blocks agree with those computed using conformal field theory \\footnote{The last term $k+2$ in the expression of $dimV_{3,k}$ should be corrected in \\cite{Piunikhin} in order to make it a positive integer.} \\cite{Piunikhin}. For $g=4$, our formula for $dimV_{4,k} $ is identical to the formula given by Zagier \\cite{Zagier} provided the shift $k\\rightarrow k+2$ is taken. 
This shift is natural, since Zagier defined the dimension of the space of conformal blocks as $dimV_{g,k-2}$.\n\n\\ \\ For the WZW model based on $SO(3)$, the level $k$ must be even \\cite {Thaddeus}, and the formula for the dimension of the twisted space of the conformal blocks $V_{g,k}^{t}$ may be written as \n$$dimV_{g,k}^{t}=\\Big(\\frac{k+2}{2}\\Big)^{g-1}\\sum_{n=1}^{k+1}(-1)^{n+1}\\frac{1}{\\sin^{2g-2}n\\pi\/(k+2)}.$$ In order to derive a recursion formula for the dimension $ dimV_{g,k}^{t}$, we first note that the expression for the twisted version of the trigonometrical sum $ T_{2m}(l)$ given by Eq. (\\ref{t19}) is\n\\begin{eqnarray}\n\\label{t26}\nT_{2m}^{t}(l)&=&\\sum_{n=1}^{N-1}(-1)^{n+1} \\frac{\\sin^2(nl\\pi\/N)}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\frac{1}{2}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}T_{2(m-s)}^{t}\\nonumber\\\\&+&\\frac{1}{2}\\sum_{s=m}^{l}(-1)^{s +1}\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\sum_{n=1}^{N-1} (-1)^{n+1}\\sin^{2(s-m)}(n\\pi\/N), \n\\end{eqnarray}\nwhere $$T_{2m}^{t}=\\sum_{n=1}^{N-1}(-1)^{n+1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}. $$\nThe trigonometrical sum $$ \\sum_{n=1}^{N-1} (-1)^{n+1}\\sin^{2(s-m)}(n\\pi\/N)$$ is non-vanishing only if $ N$ is even \\cite{Schwatt} and is given by\n \\begin{eqnarray}\n\\label{t27}\n \\sum_{n=1}^{N-1} (-1)^{n+1}\\sin^{2(s-m)}(n\\pi\/N)&=&2^{2(m-s)+1}\\sum_{t=1}^{s-m}(-1)^{t }\\binom {2(s-m)}{s-m-t}\\nonumber\\\\&+&2^{2(m-s)} \\binom {2(s-m)}{s-m}.\n\\end{eqnarray}\nThis formula shows clearly that the dimension of the twisted space of the conformal blocks is non-vanishing only if $k$ is even ($N=k+2$), in agreement with the algebro-geometrical argument \\cite {Thaddeus}. 
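Both the parity statement and the resulting twisted sums can be checked numerically (a sketch of mine): the alternating sum cancels pairwise under $n\to N-n$ for odd $N$, while for even $N$ one finds, e.g., $T_2^t=(N^2+2)/6$ and $T_4^t=(7N^4+40N^2+88)/360$.

```python
import math

def T_twisted(N, m):
    """T^t_{2m} = sum_n (-1)^(n+1) / sin^{2m}(n*pi/N) by direct summation."""
    return sum((-1) ** (n + 1) / math.sin(n * math.pi / N) ** (2 * m)
               for n in range(1, N))

# Pairwise cancellation (n <-> N-n) kills the sum for odd N ...
for N in (3, 5, 7, 9):
    assert abs(T_twisted(N, 1)) < 1e-8 and abs(T_twisted(N, 2)) < 1e-8

# ... while for even N the sums are polynomials in N.
for N in (2, 4, 6, 8, 10):
    assert abs(T_twisted(N, 1) - (N * N + 2) / 6.0) < 1e-7
    assert abs(T_twisted(N, 2) - (7 * N**4 + 40 * N**2 + 88) / 360.0) < 1e-7
```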
Then, the expression for $T_{2m}^{t}(l)$ may be written as\n\\begin{eqnarray}\n\\label{t28}\nT_{2m}^{t}(l)&=&\\sum_{n=1}^{N-1}(-1)^{n+1}\\frac{\\sin^2(nl\\pi\/N)}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\frac{1}{2}\\sum_{n=1}^{N-1}(-1)^{n+1}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}\\frac{1}{\\sin^{2(m-s)}(n\\pi\/N)}\\nonumber\\\\&+&2^{2m-1}\\sum_{s=m}^{l}(-1)^{s +1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\sum_{t=1}^{s-m}(-1)^{t }\\binom {2(s-m)}{s-m-t}\\nonumber\\\\&+&2^{2m-2}\\sum_{s=m}^{l}(-1)^{s +1}\\frac{2l}{l+s}\\binom {l+s} {l-s}\\binom {2(s-m)}{s-m}\\nonumber\\\\&=&\\frac{1}{2}\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{l}{l+s}\\binom {l+s} {l-s}2^{2s}T_{2(m-s)}^{t}\\nonumber\\\\&+&(-1)^{m+1}2^{2m-1}\\frac{(l+m-1)!}{(l-m)!(2m)!}l. \n\\end{eqnarray}\nThe recursion formula for the twisted trigonometrical sum $ T_{2m}^{t}$ may be derived by setting $ l= N\/2$ in Eq. (\\ref{t28})\n\\begin{eqnarray}\n\\label{t29}\nT_{2m}^{t}&=&\\sum_{n=1}^{N-1}(-1)^{n+1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}\\nonumber\\\\&=&\\sum_{s=1}^{m-1}(-1)^{s+1 }\\frac{N\/2}{N\/2+s}\\binom {N\/2+s} {N\/2-s}2^{2s}T_{2(m-s)}^{t}\\nonumber\\\\&+&(-1)^{m+1}2^{2m}\\frac{(N\/2+m-1)!}{(N\/2-m)!(2m)!}N\/2 \\nonumber\\\\&-&\\sum_{n=1}^{N-1}\\frac{1}{\\sin^{2m}(n\\pi\/N)}.\n\\end{eqnarray}\nFor $m=1$ and $m=2$ the twisted trigonometrical sums are\n\\begin{equation}\n\\label{30}\nT_{2}^{t}=\\frac{N^{2}+2}{6}\n\\end{equation}\nand\n\\begin{equation}\n\\label{31}\nT_{4}^{t}=\\frac{7N^{4}+40N^{2}+88}{360},\n\\end{equation}\nrespectively. In obtaining these results we used the expressions for $T_{2}$ and $T_{4}$. These twisted trigonometrical sums appeared earlier as coefficients of a certain generating function \\cite{ Zagier}. Using the recursion formula Eq. (\\ref{t29}), the twisted trigonometrical sum $ T_{6}^{t}$ is\n\\begin{equation}\n\\label{32}\nT_{6}^{t}=\\frac{31N^{6}+294N^{4}+1344N^{2}+3056}{15120}. \n\\end{equation}\nThe twisted trigonometrical sum formula given by Eq. 
(\\ref{t29}), implies that the dimension of the twisted space of the conformal blocks may be deduced for any genus $g\\geq2$, through the following formula \n\\begin{eqnarray}\n\\label{t33}\ndimV_{g,k}^{t}&=&\\Big(\\frac{k+2}{2}\\Big)^{g-1}\\sum_{n=1}^{k+1}(-1)^{n+1}\\frac{1}{\\sin^{2g-2}n\\pi\/(k+2)}\\nonumber\\\\&=&\\Big(\\frac{k+2}{2}\\Big)^{g-1}\\sum_{s=1}^{g-2}(-1)^{s+1 }\\frac{(k+2)\/2}{(k+2)\/2+s}\\binom {(k+2)\/2+s} {(k+2)\/2-s}2^{2s}T_{2g-2-2s}^{t}\\nonumber\\\\&+&(-1)^{g}2^{2g-2}\\Big(\\frac{k+2}{2}\\Big)^{g-1}\\frac{((k+2)\/2+g-2)!}{((k+2)\/2-g+1)!(2g-2)!}(k+2)\/2 \\nonumber\\\\&-&dimV_{g,k}.\n\\end{eqnarray}\nNote that the relation between $dimV_{g,k}^{t} $ and $ dimV_{g,k}$ is expected from the simple identity $$dimV_{g,k-2}^{t}= dimV_{g,k-2}-2^{g}dimV_{g,k\/2-1},$$ where $k$ is even. The formula by Zagier \\cite{Zagier} for $dimV_{g,k-2}$, may be obtained using the following generating function \n$$ \\sum_{g=1}^{\\infty}dimV_{g,k-2}\\Big(\\frac{2}{k}\\sin^{2}x\\Big)^{g-1}=\\frac{k\\sin (k-1)x}{\\sin kx \\cos x }.$$\n\\section{The corner-to-corner resistance and the Kirchhoff index of a $ 2\\times N $ resistor network}\n\n\\ \\ In general, it is hard to have a closed-form expression for the two-point resistance of a resistor network; however, if the latter has certain symmetries, like a circulant resistor network, then this may be possible \\cite{Chair1}. The situation gets more and more complicated in two and three dimensional resistor networks \\cite{Wu}, as the exact two-point resistance is expressed in terms of double and triple summations. It turns out that the recently developed techniques by the author \\cite{Chair1}, may be used to obtain an exact formula for the two-point resistance of the first non-trivial $ 2\\times N $ resistor network \\cite{Chair2}. In this section, we derive an exact formula for the corner-to-corner resistance and the total effective resistance of a $ 2\\times N $ resistor network. 
The total effective resistance is also known as the Kirchhoff index; this index was introduced in chemistry as a molecular structure descriptor, used for discriminating among different molecules with similar shapes and structures \\cite{Randic}. At the moment, the only exact two-point resistance of an $ M\\times N $ resistor network not written as a double summation is the asymptotic expansion of the corner-to-corner resistance \\cite{Essam, Huang}. It is known that the value of the asymptotic expansion of the corner-to-corner resistance of a rectangular resistor network provides a lower bound to the resistance of compact percolation clusters \\cite{ Domany}.\n\\subsection{The exact evaluation of the corner-to-corner resistance }\n\n\\ \\ The exact expression for the resistance between two nodes of a rectangular network of resistors with\nfree boundary conditions was given by Wu \\cite{Wu}. Suppose that the resistances in the two spatial directions\nare $r=s=1$; then the resistance ${R}_{\\,\\rm free}$ between two nodes ${\\bf r}_1=(x_1, y_1)$\nand ${\\bf r}_2=(x_2, y_2)$ is \n\\begin{eqnarray}\n&&R_{\\{M\\times N\\}}^{\\,\\rm free}({\\bf r}_1,{\\bf r}_2) = \\frac {1} {N} \\Big| x_1 -x_2 \\Big| + \\frac 1 {M} \\Big| y_1 - y_2 \\Big| +\\frac 2 {MN} \\nonumber\\\\\n&&\\times{\\sum_{m=1}^{M-1}\\sum_{n=1}^{N-1}\n\\frac {\\Big[\\cos\\Big(x_1+\\frac 1 2\\Big)\\theta_m \\cos\\Big(y_1+\\frac 1 2\\Big)\\phi_n\n - \\cos\\Big(x_2+\\frac 1 2\\Big)\\theta_m \\cos\\Big(y_2+\\frac 1 2\\Big)\\phi_n\n\\Big]^2 } \n{ (1-\\cos \\theta_m ) +(1-\\cos \\phi_n ) } } ,\\nonumber \\\\\n\\label{cc1}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\theta_m= \\frac{m\\pi} M, \\hskip1cm \\phi_n= \\frac{n\\pi} N.\\nonumber\n\\end{eqnarray} \nIn order to compute the corner-to-corner resistance of a $2 \\times N$ resistor network, we set $ M = 2$, ${\\bf r}_1=(0, 0)$\nand ${\\bf r}_2=(1, N-1)$ into Eq. 
(\\ref{cc1}), and so\nthe double sum of the above equation is reduced to a single sum, and the corner-to-corner\nresistance may be written as\n\\begin{eqnarray}\nR_{\\{2\\times N\\}}^{\\,\\rm free}((0,0),(1,N-1))&=&\\frac{1}{N}+\\frac{N-1}{2} \\nonumber\\\\&+& \\frac{1}{3N}\\sum_{n=1}^{N-1} \\frac{(1+(-1)^{n})(1+\\cos n\\pi\/N)}{2(1-2\/3\\cos^{2}n\\pi\/2N)}.\n\\label{cc2}\n\\end{eqnarray}\nFor $N$ even, the sum over $n$ may be reduced to\n\\begin{eqnarray}\n\\frac{2}{3N}\\sum_{n=1}^{N\/2-1} \\frac{\\cos^{2} n\\pi\/N}{(1-2\/3\\cos^{2}n\\pi\/N)}&=&\\frac{1}{3N}\\sum_{n=1}^{N-1} \\frac{\\cos^{2} n\\pi\/N}{(1-2\/3\\cos^{2}n\\pi\/N)}\\nonumber\\\\&=&\\frac{1}{3N}\\sum_{j=0}^{\\infty}(2\/3)^{j}\\sum_{n=1}^{N-1}\\cos^{2(j+1)} n\\pi\/N.\n\\label{cc3}\n\\end{eqnarray}\nTo evaluate this sum, we follow closely the method developed by the author in \\cite{Chair1}. As explained in \\cite{Chair1},\nthe formula for the sum $\\sum_{n=1}^{N-1} \\cos^{2J} n\\pi\/N $, given by Schwatt \\cite{Schwatt}, is not the right one to use; we use instead the formula \n\\begin{eqnarray}\n\\sum_{n=1}^{N-1} \\cos^{2J} n\\pi\/N =-1+\\frac{N}{2^{2J-1}}\\sum_{p=1}^{[J\/N]}\\binom {2J}{J-pN} +\\frac{N}{2^{2J}}\\binom {2J}{J},\n\\label{cc4}\n\\end{eqnarray}\nthus, the sum contribution to the corner-to-corner resistance using the residue representation of binomials is\n\\begin{eqnarray}\n\\frac{1}{3N}\\sum_{j=0}^{\\infty}(2\/3)^{j}\\sum_{n=1}^{N-1}\\cos^{2(j+1)} n\\pi\/N&=&-\\frac{1}{N}+\\sum_{j=0}^{\\infty}\\hbox{res}_{w}\\frac{(1+w)^{2j}}{(6w)^{j}}\\frac{w^{N}}{w(1-w^{N})}\\nonumber\\\\&+&\\frac{1}{2}\\Big[\\hbox{res}_w (1-4w){^{-1\/2}}\\sum_{j=0}^{\\infty}(1\/6w)^{j}{w^{-1}}-1\\Big]\\nonumber\\\\&=&-\\frac{1}{N}+\\sqrt{3}\\frac{(2-\\sqrt{3})^{N}}{1-(2-\\sqrt{3})^{N}}+\\frac{1}{2}(\\sqrt{3}-1).\n\\label{cc5}\n\\end{eqnarray}\nFinally, the corner-to-corner resistance of $ 2\\times N $ resistor network becomes,\n\\begin{eqnarray}\nR_{\\{2\\times N\\}}^{\\,\\rm free}((0,0),(1,N-1))&=&\\frac{N-1}{2} 
+\\sqrt{3}\\frac{(2-\\sqrt{3})^{N}}{1-(2-\\sqrt{3})^{N}}+\\frac{1}{2}(\\sqrt{3}-1).\n\\label{cc6}\n\\end{eqnarray}\n It is not difficult to see that this formula is also valid for $N$ odd.\n \n \\ Examples. For $N=2,3,4$, our formula Eq. (\\ref{cc6}) gives\n \\begin{eqnarray}\n R_{\\{2\\times 2\\}}^{\\,\\rm free}((0,0),(1,1))&=&1\\nonumber\\\\R_{\\{2\\times 3\\}}^{\\,\\rm free}((0,0),(1,2))&=&1.4\\nonumber\\\\R_{\\{2\\times 4\\}}^{\\,\\rm free}((0,0),(1,3))&=&1.875,\n \\label{cc7}\n \\end{eqnarray} \nthese results are in full agreement with Eq. (\\ref{cc2}).\n\\subsection{The Kirchhoff index}\n\n\\ \\ The total effective resistance of a $ 2\\times N $ resistor network, that is, the Kirchhoff index, may be computed in two ways. It may be evaluated by summing over all effective resistances between nodes of a given resistor network, or alternatively by summing over all eigenvalues of a Laplacian associated with the resistor network \\cite{Gutman}. So, we do not have to know the effective resistance between each pair of nodes to compute the total effective resistance of a resistor network. The formula that gives the Kirchhoff index of a resistor network in terms of the eigenvalues is\n $$ Kf(G)=N\\sum_{n=1}^{N-1}\\frac{1}{\\lambda_n},$$ where $\\lambda_{n}$ are the eigenvalues of the Laplacian of the network, or the graph $G$ made of nodes and edges considered as unit resistors. Our network is given by the Cartesian product $ 2\\times N$, that is, made of two path lines with $N$ nodes, and $N$ path lines with two nodes. Now, the Kirchhoff index of a path line is $$Kf(P_{N})=N\\sum_{n=1}^{N-1}\\frac{1}{4\\sin^2(n\\pi\/2N)}=\\frac{N}{8}\\Big[ \\sum_{n=1}^{2N-1}\\frac{1}{\\sin^2(n\\pi\/2N)}-1\\Big]=\\frac{N^{3}-N}{6}.$$ \n Thus, the contribution from these path lines is $N+\\frac{N^{3}-N}{3}$. By connecting the system together, the corresponding eigenvalues of the Laplacian are $\\lambda_{1,n}= 3(1-2\/3\\cos^{2}n\\pi\/2N)$. 
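The closed form Eq. (cc6) is straightforward to compare with the single sum Eq. (cc2); the Python sketch below (an illustration, not part of the original text) reproduces the three example values above and confirms the formula for $N$ odd as well:

```python
import math

def R_free_sum(N: int) -> float:
    # Eq. (cc2): corner-to-corner resistance of the 2 x N network as a single sum
    s = sum(
        (1 + (-1) ** n) * (1 + math.cos(n * math.pi / N))
        / (2 * (1 - (2 / 3) * math.cos(n * math.pi / (2 * N)) ** 2))
        for n in range(1, N)
    )
    return 1 / N + (N - 1) / 2 + s / (3 * N)

def R_free_closed(N: int) -> float:
    # Eq. (cc6): closed form, valid for N even and N odd
    a = (2 - math.sqrt(3)) ** N
    return (N - 1) / 2 + math.sqrt(3) * a / (1 - a) + (math.sqrt(3) - 1) / 2

for N in range(2, 12):
    assert abs(R_free_sum(N) - R_free_closed(N)) < 1e-9

assert abs(R_free_closed(2) - 1) < 1e-9
assert abs(R_free_closed(3) - 1.4) < 1e-9
assert abs(R_free_closed(4) - 1.875) < 1e-9
```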
As a consequence, the Kirchhoff index of a $ 2\\times N$ resistor network can be written as\n\\begin{eqnarray} \nKf(2\\times N)=N+\\frac{N^{3}-N}{3}+N\\sum_{n=1}^{N-1}\\frac{1}{3(1-2\/3\\cos^{2}n\\pi\/2N)}.\n\\label{Kf1}\n\\end{eqnarray}\nNote that our simple deduction of this expression gives the same value of the Kirchhoff index as that given by Theorem 4.1 in \\cite{Yang}. The above sum seems difficult to evaluate; however, using a simple trick, we will be able to get a closed form for the Kirchhoff index. To that end, let us write\n\\begin{eqnarray}\n\\label{Kf2}\n\\sum_{n=1}^{N-1}\\frac{1}{(1-2\/3\\cos^{2}n\\pi\/2N)}&=&\\sum_{n=1}^{N\/2-1}\\frac{1}{(1-2\/3\\cos^{2}n\\pi\/N)}\\nonumber\\\\&+&\\sum_{n=1}^{N\/2}\\frac{1}{(1-2\/3\\cos^{2}(2n-1)\\pi\/2N)},\n\\end{eqnarray}\nwhere $N$ is assumed to be even. The first sum may be \ncarried out using the following trick:\n\\begin{eqnarray}\n\\sum_{j=0}^{\\infty}(2\/3)^{j}\\sum_{n=1}^{N-1}\\cos^{2(j+1)} n\\pi\/N&=&\\frac{3}{2}\\Big[\\sum_{j=0}^{\\infty}(2\/3)^{j}\\sum_{n=1}^{N-1}\\cos^{2j} n\\pi\/N-(N-1)\\Big].\n\\label{Kf3}\n\\end{eqnarray}\nNow, the sum on the left-hand side was computed before, see Eq. 
(\\ref{cc5}), then one may deduce\n\\begin{eqnarray}\n\\sum_{n=1}^{N-1}\\frac{1}{(1-2\/3\\cos^{2}n\\pi\/N)}&=&\\sum_{j=0}^{\\infty}(2\/3)^{j}\\sum_{n=1}^{N-1}\\cos^{2j} n\\pi\/N \\nonumber\\\\&=&-3+\\frac{6N(2-\\sqrt{3})^{N}}{\\sqrt{3}(1-(2-\\sqrt{3})^{N})}+\\sqrt{3}N,\n\\label{Kf5}\n\\end{eqnarray}\nand so,\n\\begin{eqnarray}\n\\sum_{n=1}^{N\/2-1}\\frac{1}{(1-2\/3\\cos^{2}n\\pi\/N)}&=&\\frac{1}{2}\\Big[\\sum_{n=1}^{N-1}\\frac{1}{(1-2\/3\\cos^{2}n\\pi\/N)}-1\\Big]\\nonumber\\\\&=&-2+\\frac{3N(2-\\sqrt{3})^{N}}{\\sqrt{3}(1-(2-\\sqrt{3})^{N})}+\\frac{\\sqrt{3}}{2}N.\n\\label{Kf6}\n\\end{eqnarray}\nUsing the identity \\cite{Chair1},\n\\begin{eqnarray}\n\\label{Kf7}\n\\sum_{n=1}^{N\/2}\\cos^{2j}(2n-1)l\\pi\/N&=&\n\\frac{N}{2^{2j+1}}\\binom {2j}{j}+\\frac{N}{2^{2j}}\\sum_{p=1}^{[j\/2N]}\\binom {2j}{j-2pN}\\nonumber\\\\&-&\\frac{N}{2^{2j}}\\sum_{p=1}^{[j\/2N]}\\binom {2j}{j-(2p-1)N},\n\\end{eqnarray}\n and by following similar steps as in the above computations, one may show\n \\begin{eqnarray}\n \\sum_{n=1}^{N\/2}\\frac{1}{(1-2\/3\\cos^{2}(2n-1)\\pi\/2N)}&=&\\frac{3N(2-\\sqrt{3})^{2N}}{\\sqrt{3}(1-(2-\\sqrt{3})^{2N})}-\\frac{3N(2-\\sqrt{3})^{N}}{\\sqrt{3}(1-(2-\\sqrt{3})^{2N})}\\nonumber\\\\&+&\\frac{\\sqrt{3}}{2}N.\n \\label{Kf8}\n \\end{eqnarray}\n Finally, the exact expression of the Kirchhoff index of a $2\\times N$ resistor network reads\n\\begin{eqnarray}\nKf(2\\times N)=N+\\frac{N^{3}-N}{3}+\\frac{N}{3}\\Big[-2+\\frac{6N(2-\\sqrt{3})^{2N}}{\\sqrt{3}(1-(2-\\sqrt{3})^{2N})}+\\sqrt{3}N\\Big].\n\\label{Kf9}\n\\end{eqnarray} \nOne can show that the above formula for the Kirchhoff index is valid for $N$ odd as well.\n\nExamples. For $N=2,3,4,5$, the Kirchhoff indices are, respectively, \n \\begin{eqnarray}\nKf(2\\times 2)&=&5\\nonumber\\\\Kf(2\\times 3)&=&14.2 \\nonumber\\\\Kf(2\\times 4)&=&30.57142857\n\\nonumber\\\\Kf(2\\times 5)&=&56.10047847\n\\end{eqnarray} \n These results are in complete agreement with those obtained using the formula given by Eq. 
(\\ref{Kf1}), or Theorem 4.1 of reference \\cite{Yang}.\n \\section{Some trigonometrical sums related to number theory}\n \n \\ \\ In this section, another class of trigonometrical sums will be evaluated using techniques similar to those in the previous sections. Some of these trigonometrical sums are related to number theory. We will start with the following sum\n$$S(l):=\\sum_{n=1}^{N-1}(-1)^n\\frac{\\sin^2(nl\\pi\/N)}{\\sin^2(n\\pi\/N)}, $$ which is the alternating sum associated with the sum $R(l)$ given in Eq. (\\ref{t1}). This sum has the following closed formula\n\\begin{proposition}\n\\begin{equation}\n\\label{s}\nS(l)=\\sum_{n=1}^{N-1}(-1)^n\\frac{\\sin^2(nl\\pi\/N)}{\\sin^2(n\\pi\/N)}=-l^2\n\\end{equation}\n\\end{proposition}\nTo derive the above formula, we follow similar computations carried out for $R(l)$, except that this time the sum over $n$ is non-vanishing only if $N$ is even\n\\begin{equation} \n\\label{a1}\n\\sum_{n=1}^{N-1}(-1)^n\\sin^{2(s-1)}(n\\pi\/N)=\\frac{1}{2^{2(s-1)-1}}\\sum_{t=1}^{s-1}(-1)^{t +1}\\binom {2(s-1)}{s-1-t} +\\frac{-1}{2^{2(s-1)}}\\binom {2(s-1)} {s-1}.\n\\end{equation}\n Comparing Eq. (\\ref{a1}) with Eq. (\\ref{t3}) and using the previous results, the formula for the trigonometrical sum $S(l)$ is obtained without any further computations. \n Due to the symmetry enjoyed by $ S(l)$, namely $S(l)=S(N-l)$, the right-hand side of equation (\\ref{s}) should be read with this constraint; that is, for both $l$ and $N-l$, $ S(l)=-l^2$. Next, let us consider the sums $ S_{1}(l):=\\sum_{n=1}^{N-1}\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}$ and $S_{2}(l):=\\sum_{n=1}^{N-1}(-1)^n\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}$ that are closely related. 
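The proposition $S(l)=-l^{2}$ (for $N$ even, with the symmetry constraint $l\leq N/2$) can be verified directly; the following Python sketch is our illustration, not part of the original text:

```python
import math

def S(N: int, l: int) -> float:
    # S(l) = sum_{n=1}^{N-1} (-1)^n sin^2(n l pi / N) / sin^2(n pi / N), N even
    return sum(
        (-1) ** n * math.sin(n * l * math.pi / N) ** 2
        / math.sin(n * math.pi / N) ** 2
        for n in range(1, N)
    )

for N in (6, 8, 10, 12):
    for l in range(1, N // 2 + 1):
        assert abs(S(N, l) + l * l) < 1e-8
```

By the symmetry $S(l)=S(N-l)$, arguments $l>N/2$ simply reproduce the value at $N-l$.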
We will prove that closed formulas for the sums $ S_{1}(l)$ and $ S_{2}(l)$ are given by\n \\begin{theorem}\n\\begin{equation} \n\\label{s1}\nS_{1}(l)=\\sum_{n=1}^{N-1}\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}=\\left\\{\\begin{array}{cl}N-l & \\text{for } l \\text{ odd}\\\\\n0 & \\text{for } l \\text{ even},\n\\end{array} \\right.\n\\end{equation}\n\\begin{eqnarray} \n\\label{s2}\nS_{2}(l)=\\sum_{n=1}^{N-1}(-1)^n\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}=\\left\\{\\begin{array}{cl}-l & \\text{for } l \\text{ odd and } N \\text{ even} \\\\\n0 & \\text{for } l \\text{ odd and } N \\text{ odd}\\\\\n-l& \\text{for } l \\text{ even and } N \\text{ odd}\\\\\n0& \\text{for } l \\text{ even and } N \\text{ even}.\n\\end{array} \\right.\n\\end{eqnarray}\n\\end{theorem}\n\nTo prove the first formula, we use the following trigonometrical identity \\cite{Hobson},\n \\begin{equation} \n\\label{Hobson}\n\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}=\\sum_{s\\geq0}(-1)^{s}\\binom {l-s-1}{s}2^{l-2s-1}\\cos^{l-2s-1}(n\\pi\/N), \n \\end{equation}\n thus, for $l$ odd, one has\n \\begin{eqnarray} \n\\label{n1}\nS_{1}(2l-1)=\\sum_{n=1}^{N-1}\\frac{\\sin(2l-1)n\\pi\/N}{\\sin n\\pi\/N}&=&\\sum_{s\\geq0}(-1)^{s}\\binom {2l-2-s}{s}2^{2l-2-2s}\\sum_{n=1}^{N-1}\\cos^{2l-2-2s}(n\\pi\/N).\\nonumber\\\\\n\\end{eqnarray}\nThe sum over $n$ may be computed from Schwatt's book \\cite{Schwatt}, see Eq. (107), page $221$, to give \n\\begin{eqnarray} \n\\label{n2}\n\\sum_{n=1}^{N-1}\\cos^{2l-2-2s}(n\\pi\/N)= \\frac{-2}{2^{2l-2-2s}}\\sum_{t=1}^{l-1-s}\\binom {2l-2-2s}{l-1-s-t} +\\frac{N-1}{2^{2l-2-2s}}\\binom {2l-2-2s} {l-1-s},\n\\end{eqnarray}\nthen, the first contribution to the sum given in Eq. 
(\\ref{n1}) is\n\\begin{eqnarray} \n\\label{n3}\nS_{1}(l)^{'}=-2\\sum_{s=0}^{l-1}(-1)^{s}\\binom {2l-2-s}{s}\\sum_{t=1}^{l-1-s}\\binom {2l-2-2s}{l-1-s-t}=\\nonumber\\\\-2\\hbox{res}_{w=0}\\sum_{s=0}^{l-1}(-1)^{s}\\binom {2l-2-s}{s}\\Big(\\frac{1+w}{\\sqrt {w}}\\Big)^{2l-2-2s}\\frac{1}{1-w}\\nonumber\\\\=-2\\hbox{res}_{w=0}U_{2l-2}\\Big(\\frac{1+w}{2\\sqrt {w}}\\Big)\\frac{1}{1-w},\n\\end{eqnarray}\nin obtaining the last line of the above equation we used the expression for the normalized Chebyshev polynomial of the second kind $U_{n}(\\frac{x}{2})= \\sum_{k=0}^{[n\/2]}(-1)^{k}\\binom {n-k}{k}x^{n-2k}$. The residue may be evaluated using $$U_{n}(\\frac{x}{2})= \\frac{(x+\\sqrt{x^2-1})^{n+1}-(x-\\sqrt{x^2-1})^{n+1}}{2\\sqrt{x^2-1}}, $$ to obtain \n\\begin{eqnarray} \n\\label{n4}\nS_{1}(l)^{'}=-2\\hbox{res}_{w=0} \\frac{1}{w^{l-1}}\\frac{1}{(1-w)^2}=-2(l-1).\n\\end{eqnarray}\nSimilarly, the second contribution reads\n\\begin{eqnarray} \n\\label{ns3}\nS_{1}(l)^{''}&=&(N-1)\\hbox{res}_{w=0}U_{2l-2}\\Big(\\ \\frac{1+w}{2\\sqrt {w}}\\Big)\\frac{1}{w}\\nonumber\\\\&=& N-1.\n\\end{eqnarray}\nTherefore, combining these contributions, the closed formula for $ S_{1}(2l-1)$ is \n\\begin{eqnarray} \n\\label{n00}\nS_{1}(2l-1)=\\sum_{n=1}^{N-1}\\frac{\\sin(2l-1)n\\pi\/N}{\\sin n\\pi\/N}=N-(2l-1).\n\\end{eqnarray}\nIt is not difficult to show that there is no contribution to the sum $ S_{1}(l)$ for $l$ even. In proving the second formula for $S_{2}(l)$, Eq. (\\ref{s2}), one notes that the sum over $n$ in $$S_{2}(l)=\\sum_{s\\geq0}(-1)^{s}\\binom {l-s-1}{s}2^{l-2s-1}\\sum_{n=1}^{N-1}(-1)^n\\cos^{l-2s-1}(n\\pi\/N)$$ turns out to depend on both $l$ and $N$, unlike the previous case. 
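The closed formula for $S_{1}$ just derived can be checked numerically for both parities of $N$; the Python sketch below is our illustration, not part of the original text:

```python
import math

def S1(N: int, l: int) -> float:
    # S_1(l) = sum_{n=1}^{N-1} sin(n l pi / N) / sin(n pi / N)
    return sum(
        math.sin(n * l * math.pi / N) / math.sin(n * math.pi / N)
        for n in range(1, N)
    )

# S_1(l) = N - l for l odd, and 0 for l even, with 1 <= l <= N - 1
for N in (4, 5, 7, 8):
    for l in range(1, N):
        expected = N - l if l % 2 == 1 else 0
        assert abs(S1(N, l) - expected) < 1e-8
```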
By using the formulas given by Eqs. (113) and (114) in \\cite{ Schwatt}, we have\n\\begin{eqnarray} \n\\label{n5}\n\\sum_{n=1}^{N-1}(-1)^n\\cos^{2l-2-2s}(n\\pi\/N)&=&\\frac{-2}{2^{2l-2-2s}}\\sum_{t=1}^{l-1-s}\\binom {2l-2-2s}{l-1-s-t} -\\frac{1}{2^{2l-2-2s}}\\binom {2l-2-2s} {l-1-s},\\nonumber\\\\\n\\end{eqnarray}\nfor $l$ odd, $ N$ even, and the sum vanishes for $l$ odd, $ N$ odd. If $l$ is even, then the above sum is non-vanishing only for $N$ odd.\n\nTherefore, for $l$ odd it follows from Eq. (\\ref{n5}) that we have \n\\begin{eqnarray} \n\\label{n7} \nS_{2}(2l-1)&=&\\sum_{n=1}^{N-1}(-1)^n\\frac{\\sin (2l-1)n\\pi\/N}{\\sin(n\\pi\/N)}\\nonumber\\\\&=&-2\\hbox{res}_{w=0} \\frac{1}{w^{l-1}}\\frac{1}{(1-w)^2}-\\hbox{res}_{w=0} \\frac{1}{w^l}\\frac{1}{(1-w)}=-(2l-1),\n\\end{eqnarray}\n For $l$ even and $N$ odd, one has the following identity\n\\begin{eqnarray} \n\\label{n6}\n\\sum_{n=1}^{N-1}(-1)^n\\cos^{2l-1-2s}(n\\pi\/N)&=&\\frac{-2}{2^{2l-1-2s}}\\sum_{t=0}^{l-1-s}\\binom {2l-1-2s}{l-1-s-t},\n\\end{eqnarray}\nfrom which\n\\begin{eqnarray} \n\\label{n0} \nS_{2}(2l)=\\sum_{n=1}^{N-1}(-1)^n\\frac{\\sin(2nl\\pi\/N)}{\\sin(n\\pi\/N)}=-2\\hbox{res}_{w=0} \\frac{1}{w^{l}}\\frac{1}{(1-w)^2}=-2l,\n\\end{eqnarray}\n These results were recently verified by simulations without proof in connection with number theory \\footnote{Anonymous author working on characters of a finite field and the Polya-Vinogradov inequality.}. It is interesting to note that the closed formula for $ S_{2}(l)$ may be expected, since if we let $l$ go to $ N-l$ in $ S_{1}(l)$, then $S_{2}(l)=-l$. Now, we will consider the following non-trivial and interesting trigonometrical sums, $$ F_{1}(N,l,2):= \\sum_{n=1}^{N-1}\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin(2nl\\pi\/N)}{\\sin(2n\\pi\/N)},$$ and \n $$ F_{2}(N,l,2):= \\sum_{n=1}^{N-1}(-1)^{n}\\frac{\\sin(nl\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin(2nl\\pi\/N)}{\\sin(2n\\pi\/N)},$$\n where $ F_{1}(N,N-l,2)= F_{2}(N,l,2)$, and $ N$ is assumed to be odd. 
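Before turning to $F_{1}$ and $F_{2}$, the evaluations of $S_{2}$ above (minus the argument in the non-vanishing parity cases, zero otherwise) admit a quick numerical check; this Python sketch is ours, not part of the original text:

```python
import math

def S2(N: int, l: int) -> float:
    # S_2(l) = sum_{n=1}^{N-1} (-1)^n sin(n l pi / N) / sin(n pi / N)
    return sum(
        (-1) ** n * math.sin(n * l * math.pi / N) / math.sin(n * math.pi / N)
        for n in range(1, N)
    )

# S_2(l) = -l when (l odd, N even) or (l even, N odd); 0 otherwise
for N in (4, 5, 6, 7, 8, 9):
    for l in range(1, N):
        if (l % 2 == 1 and N % 2 == 0) or (l % 2 == 0 and N % 2 == 1):
            expected = -l
        else:
            expected = 0
        assert abs(S2(N, l) - expected) < 1e-8
```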
So, if $ F_{1}(N,l,2)$ is known, then $ F_{2}(N,l,2)$ may be obtained, and vice-versa. In the rest of this paper, we show that both trigonometrical sums may be evaluated to give the following closed formulas\n\\begin{theorem}\n\\begin{eqnarray} \n\\label{f1}\nF_{1}(N,2l-1,2)& = &\\sum_{n=1}^{N-1}\\frac{\\sin(n(2l-1)\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin(2n(2l-1)\\pi\/N)}{\\sin(2n\\pi\/N)}\\nonumber\\\\&=&-\\frac{1}{2}(3l-2)(3l-3)+\\frac{1}{2}(l-1)(l-2)-l+\\frac{1}{2}(1-(-1)^l)\\nonumber\\\\&+&\\frac{N-1}{2}\\Big(2l-1-(-1)^l\\Big)+N\\Big(3l-2-N+\\frac{1}{2}(1-(-1)^{l-N})\\Big),\\nonumber\\\\\n\\end{eqnarray}\nwhere the argument $2l-1$ is odd; the sum vanishes for even arguments. Also, note that the last term, namely the coefficient of $N$, is different from zero only if $ 3l-2>N$. The closed formula for $ F_{2}(N,2l,2)$ reads\n\\begin{eqnarray} \n\\label{f2}\n F_{2}(N,2l,2)&=& \\sum_{n=1}^{N-1}(-1)^{n}\\frac{\\sin((2l)n\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin((2l)2n\\pi\/N)}{\\sin(2n\\pi\/N)}\\nonumber\\\\&=&-\\frac{1}{2}3l(3l-1)+\\frac{1}{2}l(l-1)-l+N\\Big(3l-\\frac{(N+1)}{2}+\\frac{1}{2}(1-(-1)^{l-\\frac{(N+1)}{2}})\\Big),\\nonumber\\\\\n\\end{eqnarray}\nwhere the argument $2l$ is even; the sum vanishes for odd arguments. Also, note that the last term, whose coefficient is $N$, is different from zero only if $ 3l>\\frac{N+1}{2}$.\n\\end{theorem} \n To prove the first formula, we note that $ F_{1}(N,2l,2)=0$, and hence the only sum to consider is the sum $ F_{1}(N,2l-1,2)$. The latter may be written as \n \\begin{eqnarray} \n\\label{n8}\n F_{1}(N,2l-1,2)&=&\\sum_{n=1}^{N-1}\\frac{\\sin(n(2l-1)\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin(2n(2l-1)\\pi\/N)}{\\sin(2n\\pi\/N)}\\nonumber\\\\&=&\\sum_{s,k\\geq0}^{l-1}(-1)^{s+k}\\binom {2l-2-s}{s}\\binom {2l-2-k}{k}2^{2(2l-2)-2(s+k)}\\nonumber\\\\&\\times&\\sum_{j=0}^{2l-2-2k}(-1)^{j}2^j\\binom {2l-2-2k}{j}\\sum_{n=1}^{N-1}\\cos^{2l-2-2(s-j)}(n\\pi\/N).\n\\end{eqnarray}\nThe sum over $n$ formally looks like that given in Eq. 
(\\ref{n2}); however, the variable $t$ may be a multiple of $N$, and in that case Schwatt's formula given by Eq. (107) does not work; it has to be modified slightly. The formula that takes into account this fact may be shown to be given by\n\\begin{eqnarray} \n\\label{n9}\n\\sum_{n=1}^{N-1}\\cos^{2l-2-2(s-j)}(n\\pi\/N)&= &\\frac{-2}{2^{2l-2-2(s-j)}}\\sum_{t=1}^{l-1-s}\\binom {2l-2-2(s-j)}{l-1-(s-j)-t}\\nonumber\\\\& +&\\frac{N-1}{2^{2l-2-2(s-j)}}\\binom {2l-2-2(s-j)} {l-1-(s-j)}\\nonumber\\\\&+&\\frac{2N}{2^{2l-2-2(s-j)}}\\sum_{p=1}^{[l-1-(s-j)\/N]}\\binom {2l-2-2(s-j)} {l-1-(s-j)-pN},\n\\end{eqnarray}\nwhere the first two terms in the above formula are those expected from Eq. (\\ref{n2}), while the third term is precisely the correction to the formula Eq. (\\ref{n2}) for $t$ a multiple of $N$. Therefore, there are three contributions to the sum given in Eq. (\\ref{n8}), the first of which reads\n\\begin{eqnarray} \n\\label{n10}\nF_{1}^{'}(N,2l-1,2)&=&-2\\sum_{s,k\\geq0}^{l-1}(-1)^{s+k}\\binom {2l-2-s}{s}\\binom {2l-2-k}{k}2^{2l-2-2k}\\nonumber\\\\&\\times&\\sum_{j=0}^{2l-2-2k}(-1)^{j}\\frac{1}{2^j}\\binom {2l-2-2k}{j}\\sum_{t=1}^{l-1-s}\\binom {2l-2-2(s-j)}{l-1-(s-j)-t}\\nonumber\\\\&=&-2\\hbox{res}_{w=0}U_{2l-2}\\Big(\\frac{1+w}{2\\sqrt {w}}\\Big)U_{2l-2}\\Big(\\frac{1+w^2}{2w}\\Big)\\frac{1}{1-w}\\nonumber\\\\&=&-2\\hbox{res}_{w=0}\\Big(\\frac{1}{w^{3l-3}}-\\frac{1}{w^{l-2}}\\Big)\\frac{1}{(1-w)^2}\\frac{1}{1-w^2}\\nonumber\\\\&=&-\\frac{1}{2}(3l-2)(3l-3)+\\frac{1}{2}(l-1)(l-2)-l+\\frac{1}{2}(1-(-1)^l)\n\\end{eqnarray}\nwhile the second contribution is\n\\begin{eqnarray} \n\\label{n11}\nF_{1}^{''}(N,2l-1,2)&=&(N-1)\\sum_{s,k\\geq0}^{l-1}(-1)^{s+k}\\binom {2l-2-s}{s}\\binom {2l-2-k}{k}2^{2l-2-2k}\\nonumber\\\\&\\times&\\sum_{j=0}^{2l-2-2k}(-1)^{j}\\frac{1}{2^j}\\binom {2l-2-2k}{j}\\binom {2l-2-2(s-j)}{l-1-(s-j)}\\nonumber\\\\&=&(N-1)\\hbox{res}_{w=0}U_{2l-2}\\Big(\\frac{1+w}{2\\sqrt {w}}\\Big)U_{2l-2}\\Big(\\frac{1+w^2}{2w}\\Big)\\frac{1}{w}\\nonumber\\\\&=& 
(N-1)\\hbox{res}_{w=0}\\Big(\\frac{1}{w^{3l-2}}-\\frac{1}{w^{l-1}}\\Big)\\frac{1}{1-w^2}\\frac{1}{1-w}\\nonumber\\\\&=&\\frac{N-1}{2}\\Big(2l-1-(-1)^l\\Big).\n\\end{eqnarray}\n To obtain the last contribution, we write the sum over $p$ in Eq. (\\ref{n9}) as $$ \\sum_{p=1}^{[l-1-(s-j)\/N]}\\binom {2l-2-2(s-j)} {l-1-(s-j)-pN}=\\hbox{res}_{w=0} \\Big( \\frac{(1+w)^{2l-2-2(s-j)}w^N}{w^{l-(s-j)}(1-w^{N})}\\Big),$$ and using the fact that $l\\leq N-1$, the third contribution may be computed to give\n \\begin{eqnarray} \n\\label{n12}\nF_{1}^{'''}(N,2l-1,2)&=&2N\\sum_{s,k\\geq0}^{l-1}(-1)^{s+k}\\binom {2l-2-s}{s}\\binom {2l-2-k}{k}2^{2l-2-2k}\\nonumber\\\\&\\times&\\sum_{j=0}^{2l-2-2k}(-1)^{j}\\frac{1}{2^j}\\binom {2l-2-2k}{j}\\hbox{res}_{w=0} \\Big( \\frac{(1+w)^{2l-2-2(s-j)}w^N}{w^{l-(s-j)}(1-w^{N})}\\Big)\\nonumber\\\\&=&2N\\hbox{res}_{w=0}\\Big(\\frac{1}{1-w^N}U_{2l-2}\\Big(\\frac{1+w}{2\\sqrt {w}}\\Big)U_{2l-2}\\Big(\\frac{1+w^2}{2w}\\Big)w^{N-1}\\Big)\\nonumber\\\\&=&2N\\hbox{res}_{w=0}\\frac{1}{1-w^N}\\frac{1}{w^{3l-2-N}}\\frac{1}{(1-w)(1-w^2)}\\nonumber\\\\&=&N\\big(3l-2-N+\\frac{1}{2}(1-(-1)^{l-N})\\big).\n\\end{eqnarray}\nNote that this will contribute only for $3l-2>N $, and as a result the formula for the sum $ F_{1}(N,2l-1,2)$ is\n\\begin{eqnarray} \n\\label{n13}\nF_{1}(N,2l-1,2)&=&\\sum_{n=1}^{N-1}\\frac{\\sin(n(2l-1)\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin(2n(2l-1)\\pi\/N)}{\\sin(2n\\pi\/N)}\\nonumber\\\\&=&-\\frac{1}{2}(3l-2)(3l-3)+\\frac{1}{2}(l-1)(l-2)-l+\\frac{1}{2}(1-(-1)^l)\\nonumber\\\\&+&\\frac{N-1}{2}\\Big(2l-1-(-1)^l\\Big)+N\\Big(3l-2-N+\\frac{1}{2}(1-(-1)^{l-N})\\Big).\n\\end{eqnarray}\nHaving obtained a closed formula for the sum $ F_{1}(N,2l-1,2)$, we now wish to prove the formula for the alternating sum $ F_{2}(N,2l,2) $. \nFirst, we note that for $N$ odd, the sum is non-vanishing only for even arguments. 
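The closed formula Eq. (n13), including the conditional last term, can be compared with the direct sum; the Python sketch below is our illustration, not part of the original text:

```python
import math

def F1_direct(N: int, L: int) -> float:
    # F_1(N, L, 2) evaluated directly, N odd, L the (odd) argument
    return sum(
        math.sin(n * L * math.pi / N) / math.sin(n * math.pi / N)
        * math.sin(2 * n * L * math.pi / N) / math.sin(2 * n * math.pi / N)
        for n in range(1, N)
    )

def F1_closed(N: int, l: int) -> float:
    # Eq. (n13) for F_1(N, 2l-1, 2); the last term enters only when 3l - 2 > N
    sgn = -1 if l % 2 == 1 else 1  # (-1)^l
    val = (
        -0.5 * (3 * l - 2) * (3 * l - 3)
        + 0.5 * (l - 1) * (l - 2)
        - l
        + 0.5 * (1 - sgn)
        + (N - 1) / 2 * (2 * l - 1 - sgn)
    )
    if 3 * l - 2 > N:
        sgn_lN = -1 if (l - N) % 2 == 1 else 1  # (-1)^(l-N)
        val += N * (3 * l - 2 - N + 0.5 * (1 - sgn_lN))
    return val

for N in (5, 7, 9, 11):
    for l in range(1, (N + 1) // 2 + 1):  # odd arguments 2l - 1 <= N
        assert abs(F1_direct(N, 2 * l - 1) - F1_closed(N, l)) < 1e-7
```

In particular, $l=1$ gives $N-1$ and $l=\frac{N-1}{2}$ gives $F_{1}(N,N-2,2)=-4$.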
Therefore, the formula for $ F_{2}(N,2l,2) $ becomes\n\\begin{eqnarray} \n\\label{n14}\n F_{2}(N,2l,2)&=&\\sum_{n=1}^{N-1}(-1)^{n}\\frac{\\sin((2l)n\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin((2l)2n\\pi\/N)}{\\sin(2n\\pi\/N)}\\nonumber\\\\&=&\\sum_{s,k\\geq0}^{[(2l-1)\/2]}(-1)^{s+k}\\binom {2l-1-s}{s}\\binom {2l-1-k}{k}2^{2(2l-1)-2(s+k)}\\nonumber\\\\&\\times&\\sum_{j=0}^{2l-1-2k}(-1)^{j}2^j\\binom {2l-1-2k}{j}\\sum_{n=1}^{N-1}(-1)^n\\cos^{2l-1-2(s-j)}(n\\pi\/N).\n\\end{eqnarray}\nThe sum over $n$ may be carried out using Eq. (114) in \\cite{ Schwatt} with the slight modification explained before; then, it is not difficult to show\n\\begin{eqnarray} \n\\label{n15}\n\\sum_{n=1}^{N-1}(-1)^n\\cos^{2l-1-2(s-j)}(n\\pi\/N)&=&\\frac{-2}{2^{2l-1-2(s-j)}}\\sum_{t=0}^{l-1-s}\\binom {2l-1-2(s-j)}{l-1-(s-j)-t}\\nonumber\\\\&+&\\frac{2N}{2^{2l-1-2(s-j)}}\\sum_{p\\geq1}\\binom {2l-1-2(s-j)} {l-1-(s-j)-\\frac{(2p-1)N-1}{2}},\\nonumber\\\\\n\\end{eqnarray}\nwhere the sum over $p$ may be written as\n$$\\sum_{p\\geq1}\\binom {2l-1-2(s-j)} {l-1-(s-j)-\\frac{(2p-1)N-1}{2}}=\\hbox{res}_{w=0} \\Big( \\frac{(1+w)^{2l-1-2(s-j)}w^{N\/2}}{w^{l-(s-j)+1\/2}(1-w^{N})}\\Big).$$\nBy using Eq. 
(\\ref{n14}), computations show that the closed formula for the sum $F_{2}(N,2l,2)$ is\n\\begin{eqnarray} \n\\label{n16}\n F_{2}(N,2l,2)&=&\\sum_{n=1}^{N-1}(-1)^{n}\\frac{\\sin((2l)n\\pi\/N)}{\\sin(n\\pi\/N)}\\frac{\\sin((2l)2n\\pi\/N)}{\\sin(2n\\pi\/N)}\\nonumber\\\\&=&-2\\hbox{res}_{w=0}\\Big(U_{2l-2}\\Big(\\frac{1+w}{2\\sqrt {w}}\\Big)U_{2l-2}\\Big(\\frac{1+w^2}{2w}\\Big)\\frac{1}{\\sqrt{w}(1-w)}\\Big)\\nonumber\\\\&+&2N\\hbox{res}_{w=0}\\frac{1}{1-w^N}\\Big(U_{2l-1}\\Big(\\frac{1+w}{2\\sqrt {w}}\\Big)U_{2l-1}\\Big(\\frac{1+w^2}{2w}\\Big)\\frac{w^{N\/2}}{w}\\Big)\\nonumber\\\\&=&-2\\hbox{res}_{w=0}\\Big(\\frac{1}{w^{3l-1}}-\\frac{1}{w^{l-1}}\\Big)\\frac{1}{(1-w)^2}\\frac{1}{1-w^2}\\nonumber\\\\&+&2N\\hbox{res}_{w=0}\\frac{1}{1-w^N}\\frac{1}{w^{3l-(N+1)\/2}}\\frac{1}{(1-w)(1-w^2)}\\nonumber\\\\&=&-\\frac{1}{2}3l(3l-1)+\\frac{1}{2}l(l-1)-l+N\\Big(3l-\\frac{(N+1)}{2}+\\frac{1}{2}(1-(-1)^{l-\\frac{(N+1)}{2}})\\Big),\\nonumber\\\\\n \\end{eqnarray}\n where the last term, whose coefficient is $N$, contributes only for $3l> \\frac{N+1}{2} $. Let us now check that the formulas $ F_{1}(N,2l-1,2)$ and $F_{2}(N,2l,2)$ are consistent with the symmetry discussed earlier, that is, $ F_{1}(N,N-l,2)= F_{2}(N,l,2)$; this in turn implies the correctness of the formulas. To do so, we will give some explicit examples. From the expression of $ F_{1}(N,2l-1,2) $ given in Eq. (\\ref{n13}), it is clear that the sum should be $N-1$ for $l=1$; to check this, one has to take into account that, when substituting $l=1$ in the formula, the last term of Eq. (\\ref{n13}) does not contribute. From the symmetry that relates the two sums, we should have $ F_{2}(N,N-1,2)=N-1$. Indeed, this is the case: we simply set $l=\\frac{N-1}{2} $ in Eq. (\\ref{n16}); this time, however, the last term of this equation does contribute. 
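Eq. (n16) can likewise be verified against the direct alternating sum; this Python sketch is our illustration, not part of the original text:

```python
import math

def F2_direct(N: int, L: int) -> float:
    # F_2(N, L, 2) evaluated directly, N odd, L the (even) argument
    return sum(
        (-1) ** n
        * math.sin(n * L * math.pi / N) / math.sin(n * math.pi / N)
        * math.sin(2 * n * L * math.pi / N) / math.sin(2 * n * math.pi / N)
        for n in range(1, N)
    )

def F2_closed(N: int, l: int) -> float:
    # Eq. (n16) for F_2(N, 2l, 2); the last term enters only when 3l > (N+1)/2
    val = -0.5 * 3 * l * (3 * l - 1) + 0.5 * l * (l - 1) - l
    if 3 * l > (N + 1) // 2:
        sgn = -1 if (l - (N + 1) // 2) % 2 == 1 else 1  # (-1)^(l - (N+1)/2)
        val += N * (3 * l - (N + 1) / 2 + 0.5 * (1 - sgn))
    return val

for N in (5, 7, 9, 11):
    for l in range(1, (N - 1) // 2 + 1):  # even arguments 2l <= N - 1
        assert abs(F2_direct(N, 2 * l) - F2_closed(N, l)) < 1e-7
```

For example, $F_{2}(3,2,2)=2$, $F_{2}(N,2,2)=-4$ for $N>3$, and $l=\frac{N-1}{2}$ gives $F_{2}(N,N-1,2)=N-1$.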
An explicit computation shows that $ F_{2}(N,2,2)= -4$, for $N> 3$, and $ F_{2}(N,2,2)= 2$, for $N= 3$. It is interesting to note that these two cases for $l=1$ are contained in the last term of Eq. (\\ref{n16}), since for $N> 3$ the last term is equal to $0$, and hence $ F_{2}(N,2,2)= -4$, while for $N= 3$ the last term is equal to $6$; that is, our formula gives the right answer. Using the symmetry, we obtain $ F_{1}(N,N-2,2)=-4$; this can easily be checked using our formula given by Eq. (\\ref{n13}) with $l=\\frac{N-1}{2} $.\n \\section{Conclusion}\n \n \\ \\ To conclude, in this paper we used our method in \\cite{ Chair1} to give alternative derivations of closed formulas for trigonometrical sums that appear in the one-dimensional lattice, and in the proof of the conjecture of F. R. Scott on the permanent of the Cauchy matrix. A new derivation of a certain trigonometrical sum of the perturbative chiral Potts model is given, as well as new recursion formulas for certain trigonometrical sums \\cite{Mehta}. By using these recursion formulas, one is able to deduce the Verlinde dimension formulas for the untwisted (twisted) space of conformal blocks of the $SU(2)$ ($SO(3)$) WZW model. In this paper, we reported closed-form formulas for the corner-to-corner resistance and the Kirchhoff index of the first non-trivial two-dimensional resistor network, $2\\times N$. We have also considered another class of trigonometrical sums, some of which appear in number theory. Here, we followed a similar formalism as in \\cite{ Chair1}; as a consequence, the non-trivial circulant electrical networks (the cycle and complete graphs are not included) are related to non-trivial trigonometrical sums in number theory. For example, in \\cite{ Chair1} we had to introduce certain numbers with well-known properties, which we called the Bejaia and Pisa numbers, so that the trigonometrical sums that arise in the computation of the two-point resistance are written neatly in terms of these numbers. 
By using the well known connection between the electrical networks and the random walks \\cite{Doyle}, one may hope to give interpretations to some of the trigonometrical sums in number theory other than those associated with the two-point resistance of a given electrical network, since the latter provides an alternative way to compute the basic quantity relevant to random walks known as the first passage time, the expected time to hit a target node for the first time for a walker starting from a source node \\cite{Tetali}.\n \\vspace{7mm}\n\n{\\bf Acknowledgment:}\n\nI would like to thank Professor Bruce Berndt for reading and making comments on the manuscript. Also, I would like to thank the Abdus Salam Centre for Theoretical Physics for support and\nhospitality throughout these years.\n\\newpage\n\\bibliographystyle{phaip}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{intro}}\nWe consider the Cauchy problem of the fourth order nonlinear Schr\\\"odinger type equations:\n\\begin{equation}\\label{D4NLS}\n\\begin{cases}\n\\displaystyle (i\\partial_{t}+\\Delta ^2)u=\\partial P_{m}(u,\\overline{u}),\\hspace{2ex}(t,x)\\in (0,\\infty )\\times \\R^{d} \\\\\nu(0,x)=u_{0}(x),\\hspace{2ex}x\\in \\R^{d}\n\\end{cases}\n\\end{equation}\nwhere $m\\in \\N$, $m\\geq 2$, $P_{m}$ is a polynomial which is written as\n\\[\nP_{m}(f,g)=\\sum_{\\substack{\\alpha ,\\beta \\in \\Z_{\\geq 0}\\\\ \\alpha +\\beta=m}}f^{\\alpha}g^{\\beta}, \n\\]\n$\\partial$ is a first order derivative with respect to the spatial variable, for example a linear combination of \n$\\frac{\\partial}{\\partial x_1} , \\, \\dots , \\, \\frac{\\partial}{\\partial x_d}$ or $|\\nabla |= \\mathcal{F}^{-1}[|\\xi | \\mathcal{F}]$\nand the unknown function $u$ is $\\C$-valued. 
\nThe fourth order Schr\\\"{o}dinger equation with $P_{m}(u,\\overline{u})=|u|^{m-1}u$ appears in the study of deep water wave dynamics \\cite{Dysthe}, solitary waves \\cite{Karpman}, \\cite{KS}, vortex filaments \\cite{Fukumoto}, and so on.\nThe equation (\\ref{D4NLS}) is invariant under the following scaling transformation:\n\\[\nu_{\\lambda}(t,x)=\\lambda^{-3\/(m-1)}u(\\lambda^{-4}t,\\lambda^{-1}x), \n\\]\nand the scaling critical regularity is $s_{c}=d\/2-3\/(m-1)$. \nThe aim of this paper is to prove the well-posedness and the scattering for the solution of (\\ref{D4NLS}) \nin the scaling critical Sobolev space. \n\n\nThere are many results for the fourth order nonlinear Schr\\\"{o}dinger equation \nwith derivative nonlinearities (see \\cite{S1}, \\cite{S2}, \\cite{HJ1}, \\cite{HHW}, \\cite{HHW2}, \\cite{HJ3}, \\cite{S3}, \\cite{HJ2}, \\cite{Y12}, \\cite{HN15_1}, \\cite{HN15_2}, and references cited therein).\nEspecially, the one dimensional case is well studied.\nWang (\\cite{Y12}) considered (\\ref{D4NLS}) for the case $d=1$, $m=2l+1$, $l\\ge 2$, $P_{2l+1}(u,\\overline{u})=|u|^{2l}u$ \nand proved the small data global in time well-posedness for $s=s_{c}$ by using Kato type smoothing effect. 
\nBut he did not treat the cubic case.\nActually, a technical difficulty appears in this case (see Theorem \\ref{notC3} below).\n\n\nHayashi and Naumkin (\\cite{HN15_1}) considered (\\ref{D4NLS}) for $d=1$ with the power type nonlinearity $\\partial_{x}(|u|^{\\rho -1}u)$ ($\\rho >4$) \nand proved the global existence of the solution and the scattering in the weighted Sobolev space.\nMoreover, they (\\cite{HN15_2}) also proved that the large time asymptotics is determined by the self-similar solution in the case $\\rho =4$.\nTherefore, the derivative quartic nonlinearity in one spatial dimension is critical in the sense of the asymptotic behavior of the solution.\n\nWe firstly focus on the quartic nonlinearity $\\partial _x (\\overline{u}^4)$ in one space dimension.\nSince this nonlinearity has some good structure, the global solution scatters to a free solution in the scaling critical Sobolev space.\nOur argument does not apply to \\eqref{D4NLS} with $P (u,\\overline{u}) = |u|^3 u$ because we rely on the Fourier restriction norm method.\nWe now give the first results in this paper. \nFor a Banach space $H$ and $r>0$, we define $B_r(H):=\\{ f\\in H \\,|\\, \\|f\\|_H \\le r \\}$. \n\\begin{thm}\\label{wellposed_1}\nLet $d=1$, $m=4$ and $P_{4}(u,\\overline{u})=\\overline{u}^{4}$. Then the equation {\\rm (\\ref{D4NLS})} is globally well-posed for small data in $\\dot{H}^{-1\/2}$. \nMore precisely, there exists $r>0$ such that for any $T>0$ and all initial data $u_{0}\\in B_{r}(\\dot{H}^{-1\/2})$, there exists a solution\n\\[\nu\\in \\dot{Z}_{r}^{-1\/2}([0,T))\\subset C([0,T );\\dot{H}^{-1\/2})\n\\]\nof {\\rm (\\ref{D4NLS})} on $(0, T )$. \nSuch a solution is unique in $\\dot{Z}_{r}^{-1\/2}([0,T))$, which is a closed subset of $\\dot{Z}^{-1\/2}([0,T))$ {\\rm (see Definition~\\ref{YZ_space} and (\\ref{Zr_norm}))}. \nMoreover, the flow map\n\\[\nS^{+}_{T}:B_{r}(\\dot{H}^{-1\/2})\\ni u_{0}\\mapsto u\\in \\dot{Z}^{-1\/2}([0,T))\n\\]\nis Lipschitz continuous. 
\n\\end{thm}\n\\begin{rem}\nWe note that $s=-1\/2$ is the scaling critical exponent of (\\ref{D4NLS}) for $d=1$, $m=4$. \n\\end{rem}\n\\begin{cor}\\label{sccat}\nLet $r>0$ be as in Theorem~\\ref{wellposed_1}. \nFor all $u_{0}\\in B_{r}(\\dot{H}^{-1\/2})$, there exists a solution \n$u\\in C([0,\\infty );\\dot{H}^{s_{c}})$ of (\\ref{D4NLS}) on $(0,\\infty )$ and the solution scatters in $\\dot{H}^{-1\/2}$. \nMore precisely, there exists \n$u^{+}\\in \\dot{H}^{-1\/2}$ \nsuch that \n\\[\nu(t)-e^{it\\Delta^2}u^{+}\n\\rightarrow 0\n\\ {\\rm in}\\ \\dot{H}^{-1\/2}\\ {\\rm as}\\ t\\rightarrow + \\infty. \n\\]\n\\end{cor}\n\nMoreover, we obtain the large data local in time well-posedness in the scaling critical Sobolev space.\nTo state the result, we put\n\\[\nB_{\\delta ,R} (H^s) := \\{ u_0 \\in H^s | \\ u_0=v_0+w_0 , \\, \\| v_0 \\| _{\\dot{H}^{-1\/2}} < \\delta, \\, \\| w_0 \\| _{L^2} 0$ such that for all $R \\ge \\delta$ and $u_0 \\in B_{\\delta ,R} (H^{-1\/2})$ there exists a solution\n\\[\nu \\in Z^{-1\/2}([0,T]) \\subset C([0,T); H^{-1\/2})\n\\]\nfor $T=\\delta ^{8} R^{-8}$ of \\eqref{D4NLS}.\n\nFurthermore, the same statement remains valid if we replace $H^{-1\/2}$ by $\\dot{H}^{-1\/2}$ as well as $Z^{-1\/2}([0,T])$ by $\\dot{Z}^{-1\/2}([0,T])$.\n\\end{thm}\n\n\\begin{rem}\nFor $s>-1\/2$, the local in time well-posedness in $H^s$ follows from the usual Fourier restriction norm method, which covers for all initial data in $H^s$.\nIt however is not of very much interest.\nOn the other hand, since we focus on the scaling critical cases, which is the negative regularity, we have to impose that the $\\dot{H}^{-1\/2}$ part of initial data is small.\nBut, Theorem \\ref{large-wp} is a large data result because the $L^2$ part is not restricted.\n\\end{rem}\n\n\nThe main tools of the proof are the $U^{p}$ space and $V^{p}$ space which are applied to prove \nthe well-posedness and the scattering for KP-II equation at the scaling critical regularity by Hadac, Herr and Koch 
(\\cite{HHK09}, \\cite{HHK10}).\n\nWe also consider the one-dimensional cubic case and the higher dimensional cases. \nThe second result in this paper is as follows.\n\\begin{thm}\\label{wellposed_2}\n{\\rm (i)}\\ Let $d=1$ and $m=3$. Then the equation {\\rm (\\ref{D4NLS})} is locally well-posed in $H^{s}$ for $s\\ge 0$. \\\\\n{\\rm (ii)}\\ Let $d\\geq 2$ and $(m-1)d\\geq 4$. Then the equation {\\rm (\\ref{D4NLS})}\n is globally well-posed for small data in $\\dot{H}^{s_{c}}$ (or $H^{s}$ for $s\\ge s_{c}$)\n and the solution scatters in $\\dot{H}^{s_{c}}$ (or $H^{s}$ for $s\\ge s_{c}$).\n\\end{thm}\n\nThe smoothing effect of the linear part recovers the derivative in the higher dimensional case.\nTherefore, we do not use the $U^p$ and $V^p$ type spaces.\nMore precisely, to establish Theorem \\ref{wellposed_2}, we only use the Strichartz estimates\nand get the solution in $C([0,T);H^{s_c})\\cap L^{p_m}([0,T); W^{q_m,s_{c}+1\/(m-1)})$ \nwith $p_m =2(m-1)$, $q_m =2(m-1)d\/\\{ (m-1)d-2\\}$.\nAccordingly, the scattering follows from a standard argument.\nSince the condition $(m-1)d\\geq 4$ is equivalent to $s_{c}+1\/(m-1)\\ge 0$, \nthe solution space $L^{p_m}([0,T); W^{q_m,s_{c}+1\/(m-1)})$ has nonnegative regularity even if the data belongs to $H^{s_{c}}$ with $-1\/(m-1)\\le s_c <0$. \nOur proof of Theorem~\\ref{wellposed_2} {\\rm (ii)} cannot be applied for $d=1$ \nsince the Schr\\\"odinger admissible $(a,b)$ in {\\rm (\\ref{admissible_ab})} does not exist. \n\\begin{rem}\nFor the case $d=1$, $m=4$ and $P_{4}(u,\\overline{u})\\ne \\overline{u}^{4}$, \nwe can obtain the local in time well-posedness of {\\rm (\\ref{D4NLS})} in $H^{s}$ for $s\\ge 0$ \nin the same way as in the proof of Theorem~\\ref{wellposed_2}. 
\nActually, we can get the solution in $C([0,T];H^s)\\cap L^4 ([0,T];W^{s+1\/2,\\infty })$ \nfor $s\\ge 0$ by using the iteration argument \nsince the fractional Leibniz rule (see \\cite{CW91}) and the H\\\"older inequality imply\n\\[\n\\left\\| |\\nabla |^{s+\\frac{1}{2}}\\prod_{j=1}^{4}u_j \\right\\|_{L^{4\/3}_{t}([0,T);L_{x}^{1})}\n\\lesssim T^{1\/4}\\| |\\nabla |^{s+\\frac{1}{2}}u_1 \\|_{L^{4}_{t}L_{x}^{\\infty}}\\| u_2 \\|_{L^{4}_{t}L_{x}^{\\infty}}\n\\| u_3 \\|_{L^{\\infty}_{t}L_{x}^{2}}\\| u_4 \\|_{L^{\\infty}_{t}L_{x}^{2}}.\n\\]\n\\end{rem}\n\nWe give a remark on our problem, which shows that the standard iteration argument does not work.\n\\begin{thm}\\label{notC3}\n{\\rm (i)}\\ Let $d=1$, $m=3$, $s<0$ and $P_{3}(u,\\overline{u})=|u|^{2}u$. Then the flow map of {\\rm (\\ref{D4NLS})} from $H^s$ to $C(\\R ; H^s)$ is not smooth. \\\\\n{\\rm (ii)}\\ Let $m\\ge 2$ and $s<s_{c}$. Then the flow map of {\\rm (\\ref{D4NLS})} from $H^s$ to $C(\\R ; H^s)$ is not smooth.\n\\end{thm}\n\nWe define the frequency projections $P_{>1}$ and $P_{<1}$ as\n\\[\nP_{>1}:=\\sum_{N\\ge 1}P_N,\\ P_{<1}:=Id-P_{>1}. \n\\]\n\n\\begin{defn}\\label{YZ_space}\nLet $s <0$.\\\\\n{\\rm (i)} We define $\\dot{Z}^{s}:=\\{u\\in C(\\R ; \\dot{H}^{s}(\\R^{d}))\\cap U^{2}_{S}|\\ \\| u \\| _{\\dot{Z}^{s}}<\\infty\\}$ with the norm\n\\[\n \\| u \\| _{\\dot{Z}^{s}}:=\\left(\\sum_{N}N^{2s} \\| P_{N}u \\| ^{2}_{U^{2}_{S}}\\right)^{1\/2}.\n\\]\n{\\rm (ii)} We define $Z^{s}:=\\{u\\in C(\\R ; H^{s}(\\R^{d})) |\\ \\| u \\| _{Z^{s}}<\\infty\\}$ with the norm\n\\[\n \\| u \\| _{Z^{s}}:= \\| P_{<1} u \\| _{\\dot{Z}^{0}}+ \\| P_{>1} u \\| _{\\dot{Z}^{s}}. 
\n\\]\n{\\rm (iii)} We define $\\dot{Y}^{s}:=\\{u\\in C(\\R ; \\dot{H}^{s}(\\R^{d}))\\cap V^{2}_{S}|\\ \\| u \\| _{\\dot{Y}^{s}}<\\infty\\}$ with the norm\n\\[\n \\| u \\| _{\\dot{Y}^{s}}:=\\left(\\sum_{N}N^{2s} \\| P_{N}u \\| ^{2}_{V^{2}_{S}}\\right)^{1\/2}.\n\\]\n{\\rm (iv)} We define $Y^{s}:=\\{u\\in C(\\R ; H^{s}(\\R^{d})) |\\ \\| u \\| _{Y^{s}}<\\infty\\}$ with the norm\n\\[\n \\| u \\| _{Y^{s}}:= \\| P_{<1} u \\| _{\\dot{Y}^{0}}+ \\| P_{>1 }u \\| _{\\dot{Y}^{s}}.\n\\]\n\\end{defn}\n\\section{Multilinear estimate for $P_{4}(u,\\overline{u})=\\overline{u}^{4}$ in $1d$ \\label{Multi_est}}\nIn this section, we prove multilinear estimates for the nonlinearity $\\partial_{x}(\\overline{u}^{4})$ in $1d$, which plays a crucial role in the proof of Theorem \\ref{wellposed_1}.\n\\begin{lemm}\\label{modul_est}\nWe assume that $(\\tau_{0},\\xi_{0})$, $(\\tau_{1}, \\xi_{1})$, $\\cdots$, $(\\tau_{4}, \\xi_{4})\\in \\R\\times \\R^{d}$ satisfy \n$\\sum_{j=0}^{4}\\tau_{j}=0$ and $\\sum_{j=0}^{4}\\xi_{j}=0$. Then, we have \n\\begin{equation}\\label{modulation_est}\n\\max_{0\\leq j\\leq 4}|\\tau_{j}-|\\xi_{j}|^{4}|\n\\geq \\frac{1}{5}\\max_{0\\leq j\\leq 4}|\\xi_{j}|^{4}. \n\\end{equation}\n\\end{lemm}\n\\begin{proof}\nBy the triangle inequality, we obtain (\\ref{modulation_est}). 
\n\\end{proof}\n\n\\subsection{The homogeneous case}\n\n\\begin{prop}\\label{HL_est_n}\nLet $d=1$ and $01}u_{j} \\| _{\\dot{Y}^{-1\/2}}\n\\end{split}\n\\]\nTherefore, we obtain\n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,1}'(N_{1})} \\int_{0}^{T}\\int_{\\R}\\left(N_{0}\\prod_{j=0}^{4}P_{N_{j}}u_{j}\\right)dxdt\\right| \\\\\n&\\lesssim T^{1\/2}N_0 \\| u_{0,N_{0}} \\| _{V^{2}_{S}} \\| u_{1,N_{1}} \\| _{V^{2}_{S}}\\prod_{j=2}^{3}\\| P_{>1}u_{j} \\| _{\\dot{Y}^{-1\/2}}\\|P_{<1}u_4\\|_{\\dot{Y}^{0}}\n\\end{split}\n\\]\nand note that $T^{1\/2}N_0\\le T^{1\/6}$.\n\nIn the case $T \\ge N_0^{-3}$, we divide the integrals on the left-hand side of (\\ref{hl}) into $10$ pieces of the form \\eqref{piece_form_hl} in the proof of Proposition \\ref{HL_est_n}.\nThanks to Lemma~\\ref{modul_est}, let us consider the case that $Q_{j}^{S}=Q_{\\geq M}^{S}$ for some $0\\leq j\\leq 4$.\nFirst, we consider the case $Q_{0}^{S}=Q_{\\geq M}^{S}$. \nBy the same way as in the proof of Proposition \\ref{HL_est_n} and using\n\\[\n\\|Q_{4}^{S}P_{<1}u_{4,T}\\|_{L^{12}_{t}L^{6}_{x}}\\lesssim \\|Q_{4}^{S}P_{<1}u_{4,T}\\|_{V^{2}_{S}}\\lesssim \\|P_{<1}u_{4,T}\\|_{\\dot{Y}^{0}}\n\\]\ninstead of (\\ref{L12L6_est}), we obtain\n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,1}'(N_{1})}\\int_{\\R}\\int_{\\R}\\left(N_{0}Q_{\\geq M}^{S}u_{0,N_{0},T}\\prod_{j=1}^{4}Q_{j}^{S}u_{j,N_{j},T}\\right)dxdt\\right|\\\\\n&\\leq N_{0} \\| Q_{\\geq M}^{S}u_{0,N_{0},T} \\| _{L^{2}_{tx}} \\| Q_{1}^{S}u_{1,N_{1},T} \\| _{L^{4}_{t}L^{\\infty}_{x}}\n\\prod_{j=2}^{3} \\left\\|\\sum_{1 \\le N_{j}\\lesssim N_{1}}Q_{j}^{S}u_{j,N_{j},T}\\right\\|_{L^{12}_{t}L^{6}_{x}} \\|Q_{4}^{S}P_{<1}u_{4,T}\\|_{L^{12}_{t}L^{6}_{x}}\\\\\n& \\lesssim N_0^{-\\frac{1}{2}} \\| P_{N_0} u_0 \\| _{V^2_S} \\| P_{N_1} u_1 \\| _{V^2_S} \\prod_{j=2}^{3} \\left\\| P_{>1} u_j \\right\\| _{\\dot{Y}^{-1\/2}} \\| P_{<1} u_{4} \\| _{\\dot{Y}^0}\n\\end{split}\n\\]\nand note that $N_0^{-1\/2}\\le T^{1\/6}$. 
\nSince the cases $Q_j^S = Q_{\\ge M}^S$ ($j=1,2,3$) are similarly handled, we omit the details here.\n\n\nWe focus on the case $Q_4^S = Q_{\\ge M}^S$.\nBy the same way as in the proof of Proposition \\ref{HL_est_n} and using\n\\[\n\\|Q_{\\ge M}^{S}P_{<1}u_{4,T}\\|_{L^{2}_{tx}}\\lesssim N_{0}^{-2} \\|P_{<1}u_{4,T}\\|_{V^{2}_{S}}\\lesssim N_{0}^{-2}\\|P_{<1}u_{4,T}\\|_{\\dot{Y}^{0}}\n\\]\ninstead of (\\ref{hi_mod_234}) with $j=4$, we obtain\n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,1}'(N_{1})}\\int_{\\R}\\int_{\\R}\\left(N_{0}Q_{\\geq M}^{S}u_{4,N_{4},T}\\prod_{j=0}^{3}Q_{j}^{S}u_{j,N_{j},T}\\right)dxdt\\right|\\\\\n&\\leq N_{0} \\| u_{0,N_{0},T} \\| _{L^{12}_{t}L_x^6} \\| Q_{1}^{S}u_{1,N_{1},T} \\| _{L^{4}_{t}L^{\\infty}_{x}}\n\\prod_{j=2}^{3} \\left\\|\\sum_{1 \\le N_{j}\\lesssim N_{1}}Q_{j}^{S}u_{j,N_{j},T}\\right\\|_{L^{12}_{t}L^{6}_{x}} \n\\|Q_{\\geq M}^{S} P_{<1}u_{4,T}\\|_{L^{2}_{tx}}\\\\\n& \\lesssim N_{0}^{-1\/2}\\| P_{N_0} u_0 \\| _{V^2_S} \\| P_{N_1} u_1 \\| _{V^2_S} \\prod_{j=2}^{3} \\left\\| P_{>1} u_j \\right\\| _{\\dot{Y}^{-1\/2}} \\| P_{<1} u_4 \\| _{\\dot{Y}^0}\n\\end{split}\n\\]\nand note that $N_0^{-1\/2}\\le T^{1\/6}$. 
\n\n\nWe secondly consider the case $A_{1,2}'(N_1)$.\nIn the case $T \\le N_0^{-3}$, the H\\\"older inequality implies\n\\[\n\\begin{split}\n& \\left|\\sum_{A_{1,2}'(N_{1})} \\int_{0}^{T}\\int_{\\R}\\left(N_{0}\\prod_{j=0}^{4}P_{N_{j}}u_{j}\\right)dxdt\\right| \\\\\n& \\le N_0 \\| \\ee_{[0,T)}\\|_{L^{2}_{t}}\n\\| u_{0,N_0} \\| _{L_t^4 L_x^{\\infty}} \\| u_{1,N_1} \\| _{L_t^4 L_x^{\\infty}} \\left\\| \\sum _{1 \\le N_2 \\lesssim N_1} u_{2,N_2} \\right\\| _{L_t^{\\infty} L_x^2}\n\\prod_{j=3}^{4}\\| P_{<1} u_{j} \\| _{L_t^{\\infty} L_x^4} .\n\\end{split}\n\\]\nBy the same estimates as in the proof for the case $A_{1,1}'(N_1)$ and\n\\[\n\\| P_{<1} u_{j} \\| _{L_t^{\\infty} L_x^4}\\lesssim \\| P_{<1} u_{j} \\| _{L_t^{\\infty} L_x^{2}}\n\\lesssim \\left(\\sum_{N\\le 2}\\|P_{N}P_{<1}u_{j}\\|_{V^{2}_{S}}^{2}\\right)^{1\/2}\n\\le \\|P_{<1}u_j\\|_{\\dot{Y}^{0}}\n\\]\nfor $j=3,4$, we obtain\n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,2}'(N_{1})} \\int_{0}^{T}\\int_{\\R}\\left(N_{0}\\prod_{j=0}^{4}P_{N_{j}}u_{j}\\right)dxdt\\right| \\\\\n&\\lesssim T^{1\/2}N_0^{1\/2} \\| u_{0,N_{0}} \\| _{V^{2}_{S}} \\| u_{1,N_{1}} \\| _{V^{2}_{S}}\\| P_{>1}u_{2} \\| _{\\dot{Y}^{-1\/2}}\\prod_{j=3}^{4}\\|P_{<1}u_j\\|_{\\dot{Y}^{0}}\n\\end{split}\n\\]\nand note that $T^{1\/2}N_0^{1\/2}\\le T^{1\/3}$. 
\n\n\nIn the case $T \\ge N_0^{-3}$, we divide the integrals on the left-hand side of (\\ref{hl}) into $10$ pieces of the form \\eqref{piece_form_hl} in the proof of Proposition \\ref{HL_est_n}.\nThanks to Lemma~\\ref{modul_est}, let us consider the case that $Q_{j}^{S}=Q_{\\geq M}^{S}$ for some $0\\leq j\\leq 4$.\nBy the same argument as in the proof for the case $A_{1,1}'(N_1)$, we obtain \n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,2}'(N_{1})}\\int_{\\R}\\int_{\\R}\\left(N_{0}Q_{\\geq M}^{S}u_{0,N_{0},T}\\prod_{j=1}^{4}Q_{j}^{S}u_{j,N_{j},T}\\right)dxdt\\right|\\\\\n&\\leq N_{0} \\| Q_{\\geq M}^{S}u_{0,N_{0},T} \\| _{L^{2}_{tx}} \\| Q_{1}^{S}u_{1,N_{1},T} \\| _{L^{4}_{t}L^{\\infty}_{x}} \\left\\|\\sum_{1 \\le N_{2}\\lesssim N_{1}}Q_{2}^{S}u_{2,N_{2},T}\\right\\|_{L^{12}_{t}L^{6}_{x}} \\prod_{j=3}^{4} \\| Q_{j}^{S}P_{<1}u_{j,T}\\|_{L^{12}_{t}L^{6}_{x}}\\\\\n& \\lesssim N_0^{-1} \\| P_{N_0} u_0 \\| _{V^2_S} \\| P_{N_1} u_1 \\| _{V^2_S} \\left\\| P_{>1} u_2 \\right\\| _{\\dot{Y}^{-1\/2}} \\prod _{j=3}^4 \\| P_{<1} u_j \\| _{\\dot{Y}^0}\n\\end{split}\n\\]\nif $Q_0 = Q_{\\ge M}^S$ and \n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,2}'(N_{1})}\\int_{\\R}\\int_{\\R}\\left(N_{4}Q_{\\geq M}^{S}u_{4,N_{4},T}\\prod_{j=0}^{3}Q_{j}^{S}u_{j,N_{j},T}\\right)dxdt\\right|\\\\\n&\\leq N_{0} \\| u_{0,N_{0},T} \\| _{L^{12}_{t}L_x^6} \\| Q_{1}^{S}u_{1,N_{1},T} \\| _{L^{4}_{t}L^{\\infty}_{x}} \\left\\|\\sum_{1 \\le N_{2}\\lesssim N_{1}}Q_{2}^{S}u_{2,N_{2},T}\\right\\|_{L^{12}_{t}L^{6}_{x}} \\\\\n&\\hspace{21ex}\\times \\|Q_{3}^{S} P_{<1}u_{3,T}\\|_{L^{12}_{t}L^{6}_{x}} \\| Q_{\\geq M}^{S} P_{<1}u_{4,T}\\|_{L^{2}_{tx}}\\\\\n& \\lesssim N_0^{-1} \\| P_{N_0} u_0 \\| _{V^2_S} \\| P_{N_1} u_1 \\| _{V^2_S} \\left\\| P_{>1} u_2 \\right\\| _{\\dot{Y}^{-1\/2}} \\prod_{j=3}^{4}\\| P_{<1} u_j \\| _{\\dot{Y}^0}\n\\end{split}\n\\]\nif $Q_4 = Q_{\\ge M}^S$.\nNote that $N_0^{-1}\\le T^{1\/3}$. 
\nThe remaining cases follow from the same argument as above.\n\n\nWe thirdly consider the case $A_{1,3}'(N_1)$.\nIn the case $T \\le N_0^{-3}$, the H\\\"older inequality implies\n\\[\n\\begin{split}\n& \\left|\\sum_{A_{1,3}'(N_{1})} \\int_{0}^{T}\\int_{\\R}\\left(N_{0}\\prod_{j=0}^{4}P_{N_{j}}u_{j}\\right)dxdt\\right| \\\\\n& \\le N_0 \\| \\ee_{[0,T)}\\|_{L^{2}_{t}}\\| u_{0,N_0} \\| _{L_t^4 L_x^{\\infty}} \\| u_{1,N_1} \\| _{L_t^4 L_x^{\\infty}}\n\\prod_{j=2}^{4} \\| P_{<1}u_{j} \\| _{L_t^{\\infty} L_x^3}.\n\\end{split}\n\\]\nBy the same estimates as in the proof for the case $A_{1,1}'(N_1)$ and\n\\[\n\\| P_{<1} u_{j} \\| _{L_t^{\\infty} L_x^3}\\lesssim \\| P_{<1} u_{j} \\| _{L_t^{\\infty} L_x^{2}}\n\\lesssim \\left(\\sum_{N\\le 2}\\|P_{N}P_{<1}u_{j}\\|_{V^{2}_{S}}^{2}\\right)^{1\/2}\n\\le \\|P_{<1}u_j\\|_{\\dot{Y}^{0}}\n\\]\nfor $j=2,3,4$, we obtain\n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,3}'(N_{1})} \\int_{0}^{T}\\int_{\\R}\\left(N_{0}\\prod_{j=0}^{4}P_{N_{j}}u_{j}\\right)dxdt\\right| \n\\lesssim T^{1\/2}\\| u_{0,N_{0}} \\| _{V^{2}_{S}} \\| u_{1,N_{1}} \\| _{V^{2}_{S}}\\prod_{j=2}^{4}\\| P_{<1}u_{j} \\| _{\\dot{Y}^{0}}. 
\n\\end{split}\n\\]\n\n\nIn the case $T \\ge N_0^{-3}$, we divide the integrals on the left-hand side of (\\ref{hl}) into $10$ pieces of the form \\eqref{piece_form_hl} in the proof of Proposition \\ref{HL_est_n}.\nThanks to Lemma~\\ref{modul_est}, let us consider the case that $Q_{j}^{S}=Q_{\\geq M}^{S}$ for some $0\\leq j\\leq 4$.\nBy the same argument as in the proof for the case $A_{1,1}'(N_1)$, we obtain \n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,3}'(N_{1})}\\int_{\\R}\\int_{\\R}\\left(N_{0}Q_{\\geq M}^{S}u_{0,N_{0},T}\\prod_{j=1}^{4}Q_{j}^{S}u_{j,N_{j},T}\\right)dxdt\\right|\\\\\n&\\leq N_{0} \\| Q_{\\geq M}^{S}u_{0,N_{0},T} \\| _{L^{2}_{tx}} \\| Q_{1}^{S}u_{1,N_{1},T} \\| _{L^{4}_{t}L^{\\infty}_{x}} \n\\prod_{j=2}^{4} \\|Q_{j}^{S}P_{<1}u_{j,T}\\|_{L^{12}_{t}L^{6}_{x}}\\\\\n& \\lesssim N_0^{-3\/2} \\| P_{N_0} u_0 \\| _{V^2_S} \\| P_{N_1} u_1 \\| _{V^2_S} \\left\\| P_{<1} u_2 \\right\\| _{Y^{-1\/2}} \\prod _{j=3}^4 \\| P_{<1} v_j \\| _{\\dot{Y}^0} \n\\end{split}\n\\]\nif $Q_0 = Q_{\\ge M}^S$ and\n\\[\n\\begin{split}\n&\\left|\\sum_{A_{1,3}'(N_{1})}\\int_{\\R}\\int_{\\R}\\left(N_{4}Q_{\\geq M}^{S}u_{4,N_{4},T}\\prod_{j=0}^{3}Q_{j}^{S}u_{j,N_{j},T}\\right)dxdt\\right|\\\\\n&\\leq N_{0} \\| u_{0,N_{0},T} \\| _{L^{12}_{t}L_x^6} \\| Q_{1}^{S}u_{1,N_{1},T} \\| _{L^{4}_{t}L^{\\infty}_{x}} \\prod _{j=2}^3 \\|Q_{j}^{S} P_{<1}u_{j,T}\\|_{L^{12}_{t}L^{6}_{x}} \n\\|Q_{\\geq M}^{S} P_{<1}u_{4,T}\\|_{L^2_{tx}}\\\\\n& \\lesssim N_0^{-3\/2} \\| P_{N_0} u_0 \\| _{V^2_S} \\| P_{N_1} u_1 \\| _{V^2_S} \\prod _{j=2}^4 \\left\\| P_{<1} u_j \\right\\| _{Y^{0}}\n\\end{split}\n\\]\nif $Q_4 = Q_{\\ge M}^S$.\nNote that $N_0^{-3\/2}\\le T^{1\/2}$. \nThe cases $Q_j^S = Q_{\\ge M}^S$ ($j=1,2,3$) are the same argument as above. 
\n\n\\end{proof}\n\n\nFurthermore, we obtain the following estimate.\n\n\\begin{prop}\\label{HH_est-inh}\nLet $d=1$ and $00$, we define \n\\begin{equation}\\label{Zr_norm}\n\\dot{Z}^{s}_{r}(I)\n:=\\left\\{u\\in \\dot{Z}^{s}(I)\\left|\\ \\| u \\| _{\\dot{Z}^{s}(I)}\\leq 2r \\right.\\right\\}\n\\end{equation}\nwhich is a closed subset of $\\dot{Z}^{s}(I)$. \nLet $T>0$ and $u_{0}\\in B_{r}(\\dot{H}^{-1\/2})$ are given. For $u\\in \\dot{Z}^{-1\/2}_{r}([0,T))$, \nwe have\n\\[\n \\| \\Phi_{T,u_{0}}(u) \\| _{\\dot{Z}^{-1\/2}([0,T))}\\leq \\| u_{0} \\| _{\\dot{H}^{-1\/2}} +C \\| u \\| _{\\dot{Z}^{-1\/2}([0,T))}^{4}\\leq r(1+ 16 Cr^{3})\n\\]\nand\n\\[\n\\begin{split}\n \\| \\Phi_{T,u_{0}}(u)-\\Phi_{T,u_{0}}(v) \\| _{\\dot{Z}^{-1\/2}([0,T))}\n&\\leq C( \\| u \\| _{\\dot{Z}^{-1\/2}([0,T))}+ \\| v \\| _{\\dot{Z}^{-1\/2}([0,T))})^{3} \\| u-v \\| _{\\dot{Z}^{-1\/2}([0,T))}\\\\\n&\\leq 64Cr^{3} \\| u-v \\| _{\\dot{Z}^{-1\/2}([0,T))}\n\\end{split}\n\\]\nby Proposition~\\ref{Duam_est} and\n\\[\n \\| S(\\cdot )u_{0} \\| _{\\dot{Z}^{-1\/2}([0,T))}\\leq \\| \\ee_{[0,T)}S(\\cdot )u_{0} \\| _{\\dot{Z}^{-1\/2}}\\leq \\| u_{0} \\| _{\\dot{H}^{-1\/2}}, \n\\] \nwhere $C$ is an implicit constant in (\\ref{Duam_est_1}). Therefore if we choose $r$ satisfying\n\\[\nr <(64C)^{-1\/3},\n\\]\nthen $\\Phi_{T,u_{0}}$ is a contraction map on $\\dot{Z}^{-1\/2}_{r}([0,T))$. \nThis implies the existence of the solution of (\\ref{D4NLS}) and the uniqueness in the ball $\\dot{Z}^{-1\/2}_{r}([0,T))$. \nThe Lipschitz continuously of the flow map is also proved by similar argument. \n\\end{proof} \nCorollary~\\ref{sccat} is obtained by the same way as the proof of Corollaty\\ 1.2 in \\cite{Hi}. \n\n\\subsection{The large data case}\n\nIn this subsection, we prove Theorem \\ref{large-wp}.\nThe following is the key estimate.\n\n\\begin{prop}\\label{Duam_est-inh}\nLet $d=1$. 
We have\n\\begin{equation}\\label{Duam_est_1-inh}\n \\| I_{1}(u_{1},\\cdots ,u_{4}) \\| _{\\dot{Z}^{-1\/2}} \\lesssim \\prod_{j=1}^{4} \\| u_{j} \\| _{Y^{-1\/2}}.\n\\end{equation}\n\\end{prop}\n\n\\begin{proof}\nWe decompose $u_j = v_j +w_j$ with $v_j = P_{>1}u_j \\in \\dot{Y}^{-1\/2}$ and $w_j = P_{<1} u_j \\in \\dot{Y}^0$. \nFrom Propositions \\ref{HL_est_n-inh} and \\ref{HH_est-inh}, and the same argument as in the proof of Proposition~\\ref{Duam_est}, \nit remains to prove that\n\\[\n\\| I_{1}(w_{1},w_2,w_3,w_{4}) \\| _{\\dot{Z}^{-1\/2}} \\lesssim \\prod_{j=1}^{4} \\| u_{j} \\| _{\\dot{Y}^0}.\n\\]\nBy Theorem \\ref{duality}, the Cauchy-Schwarz inequality, the H\\\"older inequality and the Sobolev inequality, we have\n\\[\n\\| I_{1}(w_{1},w_2,w_3,w_{4}) \\| _{\\dot{Z}^{-1\/2}}\n\\lesssim \\left\\| \\prod_{j=1}^{4}\\overline{w_{j}} \\right\\|_{L^1([0,1];L^2)}\n\\lesssim \\prod _{j=1}^4 \\| w_j \\| _{L_t^{\\infty} L_x^2}\n\\lesssim \\prod_{j=1}^{4} \\| u_{j} \\| _{\\dot{Y}^{0}},\n\\]\nwhich completes the proof.\n\\end{proof}\n\n\\begin{proof}[\\rm{\\bf{Proof of Theorem \\ref{large-wp}}}]\nLet $u_0 \\in B_{\\delta ,R}(H^{-1\/2})$ with $u_0=v_0+w_0$, $v_0 \\in \\dot{H}^{-1\/2}$, $w_0 \\in L^2$.\nA direct calculation yields\n\\[\n\\| S(t) u_0 \\| _{Z^{-1\/2}([0,1))} \\le \\delta +R.\n\\]\nWe start with the case $R=\\delta = (4C+4)^{-4}$, where $C$ is the implicit constant in \\eqref{Duam_est_1-inh}.\nProposition \\ref{Duam_est-inh} implies that for $u \\in Z^{-1\/2}_r([0,1])$ with $r=1\/(4C+4)$\n\\begin{align*}\n\\| \\Phi_{1,u_{0}}(u) \\| _{Z^{-1\/2}([0,1))} & \\leq \\| S(t) u_0 \\| _{Z^{-1\/2}([0,1))} +C \\| u \\| _{Z^{-1\/2}([0,1))}^{4} \\\\\n& \\leq 2r^4 + 16C r^4\n= r^4 (16C+2)\n\\le r\n\\end{align*}\nand\n\\begin{align*}\n\\| \\Phi_{1,u_{0}}(u)-\\Phi_{1,u_{0}}(v) \\| _{Z^{-1\/2}([0,1))}\n&\\leq C( \\| u \\| _{Z^{-1\/2}([0,1))}+ \\| v \\| _{Z^{-1\/2}([0,1))})^{3} \\| u-v \\| _{Z^{-1\/2}([0,1))}\\\\\n&\\leq 64Cr^{3} \\| u-v \\| _{Z^{-1\/2}([0,1))}\n< \\| u-v \\| 
_{Z^{-1\/2}([0,1))}\n\\end{align*}\nsince $64Cr^{3}<1$ for this choice of $r$.\nAccordingly, $\\Phi_{1,u_{0}}$ is a contraction map on $Z^{-1\/2}_{r}([0,1))$.\n\nWe note that \nall of the above remains valid if we replace $Z^{-1\/2}([0,1))$ with the smaller space $\\dot{Z}^{-1\/2}([0,1))$ since $\\dot{Z}^{-1\/2}([0,1)) \\hookrightarrow Z^{-1\/2}([0,1))$ and the left-hand side of \\eqref{Duam_est_1-inh} is the homogeneous norm.\n\nWe now assume that $u_0 \\in B_{\\delta ,R}(H^{-1\/2})$ for $R \\ge \\delta = (4C+4)^{-4}$.\nWe define $u_{0, \\lambda}(x) = \\lambda ^{-1} u_0 (\\lambda ^{-1}x)$.\nFor $\\lambda = \\delta ^{-2} R^{2}$, we observe that $u_{0,\\lambda} \\in B_{\\delta ,\\delta}(H^{-1\/2})$.\nWe therefore find a solution $u_{\\lambda} \\in Z^{-1\/2}([0,1))$ with $u_{\\lambda}(0,x) = u_{0,\\lambda}(x)$.\nBy the scaling, we find a solution $u \\in Z^{-1\/2}([0, \\delta ^8 R^{-8}))$.\n\nThanks to Propositions \\ref{HL_est_n-inh} and \\ref{HH_est-inh}, the uniqueness follows from the same argument as in \\cite{HHK10}.\n\\end{proof}\n\n\\section{Proof of Theorem~\\ref{wellposed_2}}\\label{pf_wellposed_2}\\kuuhaku\nIn this section, we prove Theorem~\\ref{wellposed_2}. \nWe only give the proof for the homogeneous case since the proof for the inhomogeneous case is similar. \nWe define the map $\\Phi_{T, \\varphi}^{m}$ as \n\\[\n\\Phi_{T, \\varphi}^{m}(u)(t):=S(t)\\varphi -iI_{T}^{m}(u,\\cdots, u)(t),\n\\] \nwhere\n\\[\nI_{T}^{m}(u_{1},\\cdots ,u_{m})(t):=\\int_{0}^{t}\\ee_{[0,T)}(t')S(t-t')\\partial \\left(\\prod_{j=1}^{m}u_{j}(t')\\right)dt',\n\\]\nand the solution space $\\dot{X}^{s}$ as\n\\[\n\\dot{X}^{s}:=C(\\R;\\dot{H}^{s})\\cap L^{p_{m}}(\\R;\\dot{W}^{s+1\/(m-1),q_{m}}),\n\\] \nwhere $p_{m}=2(m-1)$, $q_{m}=2(m-1)d\/\\{(m-1)d-2\\}$ for $d \\ge 2$ and $p_3=4$, $q_3=\\infty$ for $d=1$. 
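As a consistency check (assuming the usual Schr\\\"odinger admissibility condition $2\/a=d(1\/2-1\/b)$ with $a,b\\geq 2$, which we take to be the content of (\\ref{admissible_ab})), one can verify directly that $(p_{m},q_{m})$ is admissible for $d\\geq 2$:

```latex
\frac{2}{p_{m}}=\frac{1}{m-1},
\qquad
d\left(\frac{1}{2}-\frac{1}{q_{m}}\right)
=d\cdot \frac{(m-1)d-\{(m-1)d-2\}}{2(m-1)d}
=\frac{1}{m-1}.
```

For $d=1$ the same condition with $a=4$ forces $b=\\infty$, which is exactly the pair $(p_{3},q_{3})=(4,\\infty )$ above.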
\nTo prove the well-posedness of (\\ref{D4NLS}) in $L^{2}(\\R )$ or $H^{s_{c}}(\\R^{d})$, we prove that $\\Phi_{T, \\varphi}^{m}$ is a contraction map \non a closed subset of $\\dot{X}^{s}$. \nThe key estimate is the following:\n\\begin{prop}\\label{Duam_est_g}\n{\\rm (i)}\\ Let $d=1$ and $m=3$. For any $0From $\\xi _j \\in [N-N^{-1}, N+N^{-1}]$ for $j=1,2,3$, we get\n\\[\n|-(\\xi _1-\\xi _2+\\xi _3)^4+\\xi _1^4-\\xi _2^4+\\xi _3^4|\n\\lesssim 1.\n\\]\nWe therefore obtain for sufficiently small $t>0$\n\\begin{align*}\n|\\widehat{u^{(3)}_{N}} (t,\\xi ) |\n& \\gtrsim t N^{-3s+5\/2} \\left| \\int _{\\xi _1-\\xi _2+\\xi _3 =\\xi} \\ee _{[N-N^{-1}, N+N^{-1}]} (\\xi _1) \\ee _{[N-N^{-1}, N+N^{-1}]} (\\xi _2) \\ee _{[N-N^{-1}, N+N^{-1}]} (\\xi _3) \\right| \\\\\n& \\gtrsim t N^{-3s+1\/2} \\ee _{[N-N^{-1},N+N^{-1} ]} (\\xi ) .\n\\end{align*}\nHence,\n\\[\n\\| u^{(3)}_{N} \\| _{L^{\\infty}([0,1]; H^s)} \\gtrsim N^{-2s}.\n\\]\nThis lower bound goes to infinity as $N$ tends to infinity if $s<0$, which concludes the proof.\n\\end{proof}\n\n\nSecondly, we show the absence of a smooth flow map for $d \\ge 1$ and $m \\ge 2$.\nPutting\n\\[\ng_N := N^{-s-d\/2} \\mathcal{F}^{-1}[ \\ee _{[-N,N]^d}] ,\n\\]\nwe set $u_N^{(m)} := u^{(m)} [g_N]$.\nNote that $\\| g_N \\| _{H^s} \\sim 1$.\nAs above, we show the following.\n\n\\begin{prop}\nIf $s