diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzpgm" "b/data_all_eng_slimpj/shuffled/split2/finalzpgm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzpgm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nLet $\\mathcal{B}(\\mathcal{H})$ denote the $C^*$-algebra of all bounded linear operators acting on a Hilbert space $\\mathcal{H}$. An operator $T\in\\mathcal{B}(\\mathcal{H})$ is said to be positive if, for every $x\\in\\mathcal{H}$, one has $\\left\\langle Tx,x \\right\\rangle\\geq 0$. In this case, we simply write $T\\geq O.$ Positive operators play an important role in understanding the geometry of a Hilbert space, and they constitute a special class of the wider class of self-adjoint operators; that is, operators satisfying $T^*=T$, where $T^*$ denotes the adjoint of $T$. Among the most basic properties of normal (in particular, self-adjoint) operators is the fact that \n$$\\|T\\|=\\omega(T)=r(T),\\;T\\;{\\text{is\\;normal}},$$ where $\\|\\cdot\\|, \\omega(\\cdot)$, and $r(\\cdot)$ denote the operator norm, the numerical radius, and the spectral radius, respectively. For a general $T\\in\\mathcal{B}(\\mathcal{H})$, one only has \n $$\\|T\\|\\geq \\omega(T)\\geq r(T).$$\nWhile both $\\|\\cdot\\|$ and $\\omega(\\cdot)$ are norms on $\\mathcal{B}(\\mathcal{H})$, $r(\\cdot)$ is not. In fact, we have the norm equivalence \\cite[Theorem 1.3-1]{gust}\n\\begin{equation}\\label{eq_equiv_norms}\n\\frac{1}{2}\\|T\\|\\le\\omega(T)\\le\\|T\\|,\\;T\\in\\mathcal{B}(\\mathcal{H}).\n\\end{equation}\n\nSharpening the above inequality and obtaining new relations between $\\|\\cdot\\|$ and $\\omega(\\cdot)$ have been core interests of numerous researchers. 
This is because $\\|\\cdot\\|$ is much easier to compute than $\\omega(\\cdot)$, not to mention the intrinsic mathematical appeal of such new relations.\n\nThe Cartesian decomposition of $T\\in\\mathcal{B}(\\mathcal{H})$ is $T=\\mathfrak{R}T+\\textup i\\mathfrak{I}T$, where $\\mathfrak{R}T=\\frac{T+T^*}{2}$ and $\\mathfrak{I}T=\\frac{T-T^*}{2\\textup i}$ are the real and imaginary parts of $T$, respectively. Although $\\|T\\|\\geq \\omega(T)$ is always valid, the following reverses hold for the Cartesian components of $T$; see \\cite[Theorem 2.1]{1}:\n\\begin{equation}\\label{eq_norm_reim}\n\\|\\mathfrak{R}T\\|\\le\\omega(T),\\;\\|\\mathfrak{I}T\\|\\le \\omega(T).\n\\end{equation}\n\nWhile the original definition of $\\omega(\\cdot)$ is based on a supremum over inner product values (i.e., $\\omega(T)=\\sup_{\\|x\\|=1}\\left| \\left\\langle Tx,x \\right\\rangle \\right|$), the following identity is extremely useful \\cite{3}\n\\begin{equation}\\label{eq_w_re}\n\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left\\| \\mathfrak R{{e}^{\\textup i\\theta }}T \\right\\|=\\omega \\left( T \\right).\n\\end{equation}\n\nExploring further relations between $\\|\\cdot\\|$ and $\\omega(\\cdot)$, \nit has been shown in \\cite[Theorem 2.3]{1} that\n\\begin{equation}\\label{eq_fuad_mos}\n\\|A+B\\|\\le 2\\omega\\left(\\left[\\begin{array}{cc}O&A\\\\B^*&O\\end{array}\\right]\\right)\\le \\|A\\|+\\|B\\|,\n\\end{equation}\nwhich is an interesting refinement of the triangle inequality for the operator norm, obtained via the numerical radius of a matrix operator.\n\n\nHaving the matrix operator term in \\eqref{eq_fuad_mos} is not a coincidence. In fact, numerous results have included such terms while studying numerical radius inequalities. 
For example, it has been shown in \\cite[Theorem 2.4]{4} that\n\\begin{equation}\\label{eq_max_w}\n\\frac{\\max \\left\\{ \\omega \\left( S+T \\right),\\omega \\left( S-T \\right) \\right\\}}{2}\\le \\omega \\left( \\left[ \\begin{matrix}\n   O & S  \\\\\n   T & O  \\\\\n\\end{matrix} \\right] \\right),\\text{ for any }S,T\\in \\mathcal B\\left( \\mathcal H \\right);\n\\end{equation}\n\nan inequality which is reversed, in a certain sense, by the following \\cite[Theorem 2.4]{4}\n\\begin{equation}\\label{eq_w_average_w}\n\\omega \\left( \\left[ \\begin{matrix}\n   O & S  \\\\\n   T & O  \\\\\n\\end{matrix} \\right] \\right)\\le \\frac{\\omega \\left( S+T \\right)+\\omega \\left( S-T \\right)}{2},\\text{ for any }S,T\\in \\mathcal B\\left( \\mathcal H \\right).\n\\end{equation}\n\n\n\nThe above matrix operator is comparable not only with numerical radius terms; we also have \\cite[Theorem 2.1]{5} \n\\begin{equation}\\label{eq_need_prf}\n2\\omega \\left( \\left[ \\begin{matrix}\n   O & A  \\\\\n   {{B}^{*}} & O  \\\\\n\\end{matrix} \\right] \\right)\\le \\max \\left\\{ \\left\\| A \\right\\|,\\left\\| B \\right\\| \\right\\}+\\frac{1}{2}\\left( \\left\\| {{\\left| A \\right|}^{\\frac{1}{2}}}{{\\left| B \\right|}^{\\frac{1}{2}}} \\right\\|+\\left\\| {{\\left| {{B}^{*}} \\right|}^{\\frac{1}{2}}}{{\\left| {{A}^{*}} \\right|}^{\\frac{1}{2}}} \\right\\| \\right),\n\\end{equation}\nfor any $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$.\n\nThe right-hand side of this latter inequality is related to the Davidson-Power inequality, which has been generalized in \\cite[Theorem 5]{6} to the form\n\\begin{equation}\\label{eq_abuamer}\n\\|A+B^*\\|\\le \\max\\{\\|A\\|,\\|B\\|\\}+\\max\\{\\|\\;|A|^{1\/2}|B^*|^{1\/2}\\|,\\|\\;|A^*|^{1\/2}|B|^{1\/2}\\|\\}.\n\\end{equation}\n\nAn important tool in obtaining matrix inequalities is convexity, whether scalar or operator convexity. 
Recall that a function $f:J\\to\\mathbb{R}$ is said to be convex on the interval $J$ if it satisfies $f((1-\\lambda)a+\\lambda b)\\le (1-\\lambda)f(a)+\\lambda f(b)$ for all $a,b\\in J$ and $0\\le \\lambda\\le 1$. In convex analysis, the Hermite-Hadamard inequality, which states that for a convex function $f$ on $[0,1]$ one has\n\\begin{equation}\\label{eq_hh}\nf\\left( \\frac{1}{2} \\right)\\le \\int\\limits_{0}^{1}{f\\left( t \\right)dt}\\le \\frac{f\\left( 0 \\right)+f\\left( 1 \\right)}{2},\n\\end{equation}\nis an indispensable tool. Notice that this inequality provides a refinement of the mid-convexity condition of $f$.\n\nOur target in this paper is to further explore numerical radius and operator norm inequalities, via matrix operators and convex functions. For this, we begin by noting that \nsince $\\omega(\\cdot)$ and $\\|\\cdot\\|$ are norms, one can easily verify that the functions\n\\[f\\left( t \\right)=\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right),\\;{\\text{and}}\\;g(t)=\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|\\]\nare convex on $\\left[ 0,1 \\right]$.\n\nWith a considerable amount of research devoted to inequalities of convex functions, the following inequalities, which were shown in \\cite{2} for a convex function $f:\\left[ 0,1 \\right]\\to \\mathbb{R}$, have played a useful role in the literature:\n\t\\[f\\left( t \\right)\\le \\left( 1-t \\right)f\\left( 0 \\right)+tf\\left( 1 \\right)-2r\\left( \\frac{f\\left( 0 \\right)+f\\left( 1 \\right)}{2}-f\\left( \\frac{1}{2} \\right) \\right),\\]\nand\n\\[\\left( 1-t \\right)f\\left( 0 \\right)+tf\\left( 1 \\right)\\le f\\left( t \\right)+2R\\left( \\frac{f\\left( 0 \\right)+f\\left( 1 \\right)}{2}-f\\left( \\frac{1}{2} \\right) \\right),\\]\nwhere $r=\\min \\left\\{ t,1-t \\right\\}$, $R=\\max \\left\\{ t,1-t \\right\\}$, and $0\\le t\\le 1$. 
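As a quick sanity check of the role of $r$ and $R$ (the function here is chosen only for concreteness), take $f\\left( t \\right)={{t}^{2}}$, for which $\\frac{f\\left( 0 \\right)+f\\left( 1 \\right)}{2}-f\\left( \\frac{1}{2} \\right)=\\frac{1}{4}$, so the first of these inequalities reads ${{t}^{2}}\\le t-\\frac{r}{2}$. At $t=\\frac{1}{4}$ this gives $\\frac{1}{16}\\le \\frac{1}{8}$, a strict improvement over the plain convexity bound ${{t}^{2}}\\le t$, while at $t=\\frac{1}{2}$ both sides equal $\\frac{1}{4}$, so the factor $2$ in front of $r$ cannot be increased in general. 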
\nWe refer the reader to \\cite{sab_mia,sab_mjom} for some applications and further discussion of these inequalities.\n\n\n\nApplying these latter inequalities to the convex functions $f$ and $g$ above implies the following refinements and reverses of \\eqref{eq_norm_reim}.\n\\begin{equation}\\label{6}\n\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{2R}\\le \\omega \\left( T \\right)-\\left\\| \\mathfrak RT \\right\\|\\le \\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{2r}.\n\\end{equation}\nFurthermore,\n\\begin{equation}\\label{ned_quo}\n\\frac{\\left\\| T \\right\\|-\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|}{2R}\\le \\left\\| T \\right\\|-\\left\\| \\mathfrak RT \\right\\|\\le \\frac{\\left\\| T \\right\\|-\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|}{2r}.\n\\end{equation}\n\n\nUsing this approach, we present refined versions and generalizations of most of the above inequalities, together with some product inequalities that entail interesting relations. We refer to inequalities that govern $\\omega(AB)$ as product inequalities. It is well-known that $\\omega(\\cdot)$ is not sub-multiplicative. We refer the reader to \\cite{Li} for further discussion of this property. Interestingly, our approach will entail a relation between $\\omega(AB)$ and $\\|A+B\\|$, with an application to the matrix arithmetic-geometric mean inequality that states \\cite[Theorem IX.4.5]{bhatia}\n\\begin{equation*}\n\\|A^{1\/2}B^{1\/2}\\|\\le\\frac{1}{2}\\|A+B\\|,\\;A,B\\in\\mathcal{B}(\\mathcal{H}), A,B\\geq O. \n\\end{equation*}\nNamely, we obtain a new refinement of this inequality using the numerical radius; see Remark \\ref{remark_amgm} below.\n\nTo achieve our goal, we need the following auxiliary results. \n\\begin{lemma}\nLet $A\\in\\mathcal{B}(\\mathcal{H})$. 
\n\\begin{enumerate}\n\\item If $n\\in\\mathbb{N}$, then \\cite[Theorem 2.1-1]{gust}\n\\begin{equation}\\label{eq_power_ineq}\n\\omega(A^n)\\le \\omega^n(A).\n\\end{equation}\n\\item The operator norm satisfies the identity\n\\begin{equation}\\label{eq_norm_blocks}\n\\left\\|\\left[\\begin{array}{cc}O&A\\\\A^*&O\\end{array}\\right]\\right\\|=\\|A\\|.\n\\end{equation}\n\\end{enumerate}\n\\end{lemma}\n\n\\section{Main Results}\nIn this section we present our results, starting with the following simple consequence obtained by applying \\eqref{eq_hh} to\n\\[f\\left( t \\right)=\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)\\;{\\text{and}}\\;g(t)=\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|,\\]\nwhich yields refinements of \\eqref{eq_norm_reim}.\n\\begin{proposition}\nLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then\n\\begin{equation}\\label{2}\n\\left\\| \\mathfrak RT \\right\\|\\le \\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}dt\\le \\omega \\left( T \\right),\n\\end{equation}\nand\n\\begin{equation}\\label{3}\n\\left\\| \\mathfrak IT \\right\\|\\le \\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)dt}\\le \\omega \\left( T \\right).\n\\end{equation}\nMoreover,\n\\begin{equation}\\label{15}\n\\left\\| \\mathfrak RT \\right\\|\\le \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|dt}\\le \\left\\| T \\right\\|,\n\\end{equation}\nand\n\\begin{equation}\\label{16}\n\\left\\| \\mathfrak IT \\right\\|\\le \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right){{T}^{*}}-tT \\right\\|dt}\\le \\left\\| T \\right\\|.\n\\end{equation}\n\\end{proposition}\n\nThe identity \\eqref{eq_w_re} provides an alternative formula to evaluate the numerical radius without appealing to the inner product. 
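For instance (a standard illustrative example, with ${{T}_{0}}$ introduced here only for concreteness), for ${{T}_{0}}=\\left[ \\begin{matrix}\n   0 & 1  \\\\\n   0 & 0  \\\\\n\\end{matrix} \\right]$ one has $\\mathfrak R{{e}^{\\textup i\\theta }}{{T}_{0}}=\\frac{1}{2}\\left[ \\begin{matrix}\n   0 & {{e}^{\\textup i\\theta }}  \\\\\n   {{e}^{-\\textup i\\theta }} & 0  \\\\\n\\end{matrix} \\right]$, whose norm equals $\\frac{1}{2}$ for every $\\theta \\in \\mathbb{R}$; hence \\eqref{eq_w_re} gives $\\omega \\left( {{T}_{0}} \\right)=\\frac{1}{2}$, while $\\left\\| {{T}_{0}} \\right\\|=1$, showing that the first inequality in \\eqref{eq_equiv_norms} is sharp. 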
Interestingly, the inequalities \\eqref{2} and \\eqref{3} provide the following alternative identities, which help better understand how the numerical radius behaves.\n\\begin{corollary}\nLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then\n\\[\\omega \\left( T \\right)=\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left(\\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right){{e}^{\\textup i\\theta }}T+t{{e}^{-\\textup i\\theta }}{{T}^{*}} \\right)dt}\\right)=\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left(\\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right){{e}^{-\\textup i\\theta }}{{T}^{*}}-t{{e}^{\\textup i\\theta }}T \\right)dt}\\right).\\]\n\\end{corollary}\n\\begin{proof}\nReplacing $T$ by ${{e}^{\\textup i\\theta }}T$ in \\eqref{2}, we get\n\\[\\left\\| \\mathfrak R{{e}^{\\textup i\\theta }}T \\right\\|\\le \\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right){{e}^{\\textup i\\theta }}T+t{{e}^{-\\textup i\\theta }}{{T}^{*}} \\right)dt}\\le \\omega \\left( T \\right).\\]\nTaking the supremum over $\\theta \\in \\mathbb{R}$, \\eqref{eq_w_re} implies the first identity. The second identity follows from \\eqref{3} and noting that\n\\[\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left\\| \\mathfrak I{{e}^{\\textup i\\theta }}T \\right\\|=\\omega \\left( T \\right).\\]\n\\end{proof}\n\n\n\nThe following result involves an integral refinement of the second inequality in \\eqref{eq_equiv_norms}.\n\\begin{proposition}\nLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. 
Then\n\\[\\omega \\left( T \\right)\\le \\min \\left\\{ {{\\lambda }_{1}},{{\\lambda }_{2}} \\right\\}\\le \\left\\| T \\right\\|,\\]\nwhere \n\\[{{\\lambda }_{1}}=\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left( \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right){{e}^{\\textup i\\theta }}T+t{{e}^{-\\textup i\\theta }}{{T}^{*}} \\right\\|dt} \\right)\\text{ and }{{\\lambda }_{2}}=\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left( \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right){{e}^{-\\textup i\\theta }}{{T}^{*}}-t{{e}^{\\textup i\\theta }}T \\right\\|dt} \\right).\\]\n\\end{proposition}\n\\begin{proof}\nBy the inequality \\eqref{15}, applied to ${{e}^{\\textup i\\theta }}T$, we have \n\\begin{equation*}\n\\sup_{\\theta \\in \\mathbb{R}}\\|\\mathfrak R{{e}^{\\textup i\\theta }}T\\|\\le \\sup_{\\theta \\in \\mathbb{R}}\\left( \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right){{e}^{\\textup i\\theta }}T+t{{e}^{-\\textup i\\theta }}{{T}^{*}} \\right\\|dt}\\right)\\le \\|T\\|.\n\\end{equation*}\nFinally, by \\eqref{eq_w_re} we get\n\\begin{equation*}\n \\omega(T)\\le\\sup_{\\theta \\in \\mathbb{R}}\\left( \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right){{e}^{\\textup i\\theta }}T+t{{e}^{-\\textup i\\theta }}{{T}^{*}} \\right\\|dt}\\right)\\le \\left\\| T \\right\\|.\n\\end{equation*}\nBy a similar argument, with the help of \\eqref{16}, we also have\n\\begin{equation*}\n\\omega(T)\\le \\sup_{\\theta\\in \\mathbb{R}}\\left( \\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right){{e}^{-\\textup i\\theta }}{{T}^{*}}-t{{e}^{\\textup i\\theta }}T \\right\\|dt}\\right)\\le \\left\\| T \\right\\|.\n\\end{equation*}\nThis completes the proof.\n\\end{proof}\n\n\n\n\nThe second inequality in each of \\eqref{2} and \\eqref{3} can be reversed as follows. \n\\begin{proposition}\nLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. 
Then \n\\[\\frac{1}{2}\\omega \\left( T \\right)\\le \\left\\{ \\begin{aligned}\n & \\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)dt}, \\\\ \n & \\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)dt}. \\\\ \n\\end{aligned} \\right.\\]\n\\end{proposition}\n\\begin{proof}\nFor any $0\\le t\\le 1$, it can be easily shown that\n\\[\\left| 1-2t \\right|\\omega \\left( T \\right)\\le \\min \\left\\{ \\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right),\\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right) \\right\\}.\\]\nIntegrating this over the interval $[0,1]$ implies the desired result.\n\\end{proof}\n\n\nThe following result holds as well.\n\\begin{theorem}\n\tLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then\n\\begin{equation*}\n\\left\\| T \\right\\|\\le 2\\int\\limits_{0}^{1}{\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|dt}\\le2 \\left\\| T \\right\\|,\n\\end{equation*}\nand\n\\[\\omega \\left( T \\right)\\le 2\\int\\limits_{0}^{1}{\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)dt}\\le2\\omega \\left( T \\right).\\]\n\\end{theorem}\n\\begin{proof}\nLet $h(T)=\\|T\\|$ for any $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then, $h$ is a convex function on $\\mathcal B\\left( \\mathcal H \\right)$. For each $t\\in [0, 1],$ we have \n\\begin{equation*}\nh((1-2t)T)+h((2t-1)T^*)=h((1-t)A+tB)+h((1-t)B+tA),\n\\end{equation*}\nwhere $A=(1-t)T+tT^*$ and $B=-(1-t)T^*-tT$. 
Then, \n\\[ \\begin{aligned}\nh((1-2t)T)+h((2t-1)T^*)&=h((1-t)A+tB)+h((1-t)B+tA)\\\\\n&\\le(1-t)h(A)+th(B)+(1-t)h(B)+th(A)\\\\\n&=h(A)+h(B) \\\\\n&=h((1-t)T+tT^*)+h(-(1-t)T^*-tT)\\\\\n&=h((1-t)T+tT^*)+h((1-t)T^*+tT).\n\\end{aligned} \\]\nIntegrating the previous inequality from $t=0$ to $t=1$, and using the substitution $t\\mapsto 1-t$ in the second integral on the right-hand side, we obtain\n\\begin{equation*}\n\\int_0^1|1-2t|(\\|T\\|+\\|T^*\\|)\\:dt\\le 2\\int_0^1\\|(1-t)T+tT^*\\|\\:dt.\n\\end{equation*}\nThus, since $\\int_0^1|1-2t|\\:dt=\\frac12$ and $\\|T^*\\|=\\|T\\|$,\n\\begin{equation*}\n\\|T\\|=\\frac12 (\\|T\\|+\\|T^*\\|)\\le 2\\int_0^1\\|(1-t)T+tT^*\\|\\:dt.\n\\end{equation*}\nOn the other hand,\n\\begin{align*}\n\\|(1-t)T+tT^*\\|&\\le (1-t)\\|T\\|+t\\|T^*\\|=\\|T\\|; 0\\le t\\le 1.\n\\end{align*}\nIntegrating this last inequality and then multiplying by 2 completes the proof of the first chain of inequalities. The second chain is proved similarly.\n\\end{proof}\n\nContinuing with the convexity of the norms, the inequality \\eqref{6} may be used to get the following refinement of the first inequality in \\eqref{eq_equiv_norms}.\n\\begin{theorem}\\label{9}\nLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. 
Then for any $0\\le t\\le 1$,\n\\[\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{4R}\\left( 2\\omega \\left( T \\right)-\\left( \\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)+\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right) \\right) \\right)\\le \\omega \\left( T \\right),\\]\nwhere $R=\\max \\left\\{ t,1-t \\right\\}$.\n\\end{theorem}\n\\begin{proof}\nThe first inequality in \\eqref{6} can be written as\n\\begin{equation}\\label{4}\n\\left\\| \\mathfrak RT \\right\\|+\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{2R}\\le \\omega \\left( T \\right).\n\\end{equation}\nReplacing $T$ by $\\textup i{{T}^{*}}$ in \\eqref{4}, and noting that $\\mathfrak R\\left( \\textup i{{T}^{*}} \\right)=\\mathfrak IT$ and $\\omega \\left( \\textup i{{T}^{*}} \\right)=\\omega \\left( T \\right)$, we infer that\n\\begin{equation}\\label{5}\n\\left\\| \\mathfrak IT \\right\\|+\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)}{2R}\\le \\omega \\left( T \\right).\n\\end{equation}\nBy \\eqref{4} and \\eqref{5}, we get\n\\[\\begin{aligned}\n  & \\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{4R}\\left( 2\\omega \\left( T \\right)-\\left( \\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)+\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right) \\right) \\right) \\\\ \n & =\\frac{1}{2}\\left\\| \\mathfrak RT+\\textup i\\mathfrak IT \\right\\|+\\frac{1}{4R}\\left( 2\\omega \\left( T \\right)-\\left( \\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)+\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right) \\right) \\right) \\\\ \n & \\le \\frac{1}{2}\\left( \\left\\| \\mathfrak RT \\right\\|+\\left\\| \\mathfrak IT \\right\\| \\right)+\\frac{1}{4R}\\left( 2\\omega \\left( T \\right)-\\left( \\omega \\left( \\left( 1-t \\right){{T}^{*}}-tT \\right)+\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right) \\right) \\right) \\\\ \n &\\qquad\\text{(by the triangle inequality for the usual operator norm)}\\\\\n & \\le \\omega \\left( T \\right).\n\\end{aligned}\\]\nThis completes the proof.\n\\end{proof}\n\n\nAs a consequence of Theorem \\ref{9}, we 
get the following corollaries, which considerably refine \\cite[(4.3)]{4} and \\cite[(4.2)]{4}, respectively.\n\\begin{corollary}\\label{refwn}\n\tLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then,\n\\begin{equation*}\n\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{2}\\Big| \\|\\mathfrak IT \\|-\\|\\mathfrak RT \\|\\Big| \\le\n\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{2}\\left( 2\\omega \\left( T \\right)-\\left( \\|\\mathfrak IT \\|+\\|\\mathfrak RT \\| \\right)\\right)\\le \\omega \\left( T \\right).\n\\end{equation*}\n\\end{corollary}\n\\begin{proof}\nThe second inequality can be deduced from Theorem \\ref{9} with $t=\\frac12.$ On the other hand, we have \n\\[\\begin{aligned}\n\\frac{1}{2}\\Big| \\|\\mathfrak IT \\|-\\|\\mathfrak RT \\|\\Big|=& \\frac{1}{2}\\Big| \\|\\mathfrak IT \\|-\\omega(T)+\\omega(T)-\\|\\mathfrak RT \\|\\Big| \\\\ \n&\\le\\frac{1}{2}\\left(\\Big| \\|\\mathfrak IT \\|-\\omega(T)\\Big|+\\Big|\\omega(T)-\\|\\mathfrak RT \\|\\Big|\\right) \\\\\n&= \\frac{1}{2}\\left( \\omega(T)-\\|\\mathfrak IT \\|+\\omega(T)-\\|\\mathfrak RT \\|\\right),\\quad({\\text{by\\;the\\;inequalities\\;\\eqref{eq_norm_reim}}}).\n\\end{aligned}\\]\t\nThis completes the proof.\n\\end{proof}\n\n\nAs a consequence of Corollary \\ref{refwn}, we characterize when the numerical radius equals half the operator norm. The following result is related to Theorem 3.1 previously obtained by Yamazaki in \\cite{3}.\n\\begin{proposition}\n\tLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then, $\\frac{\\|T\\|}{2}=\\omega(T)$ if and only if $\\|\\mathfrak Ie^{\\textup i\\theta}T\\|=\\|\\mathfrak Re^{\\textup i\\theta}T\\|=\\frac{\\|T\\|}{2}$ for any $\\theta \\in \\mathbb{R}.$\n\\end{proposition}\n\\begin{proof}\n\tIf $\\|\\mathfrak Ie^{\\textup i\\theta}T\\|=\\|\\mathfrak Re^{\\textup i\\theta}T\\|=\\frac{\\|T\\|}{2}$ for any $\\theta \\in \\mathbb{R}$, then by \\eqref{eq_w_re} we conclude that $\\omega(T)=\\frac{\\|T\\|}{2}$. 
Conversely, suppose that $\\omega(T)=\\frac{\\|T\\|}{2}$. Then from Corollary \\ref{refwn} we conclude that \n\\[\t\\frac{1}{2}\\left\\| T \\right\\|=\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{2}\\Big| \\|\\mathfrak IT \\|-\\|\\mathfrak RT \\|\\Big|=\n\t\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{2}\\left( 2\\omega \\left( T \\right)-\\left( \\|\\mathfrak IT \\|+\\|\\mathfrak RT \\| \\right)\\right)= \\omega \\left( T \\right).\\]\n\tReplacing $T$ by $e^{\\textup i\\theta}T$, with $\\theta \\in \\mathbb{R}$, we have \n\t\\[\\frac{1}{2}\\left\\| T \\right\\|=\t\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{2}\\Big| \\|\\mathfrak Ie^{\\textup i\\theta}T \\|-\\|\\mathfrak Re^{\\textup i\\theta}T \\|\\Big|=\n\t\\frac{1}{2}\\left\\| T \\right\\|+\\frac{1}{2}\\left( 2\\omega \\left( T \\right)-\\left( \\|\\mathfrak Ie^{\\textup i\\theta}T \\|+\\|\\mathfrak Re^{\\textup i\\theta}T \\| \\right)\\right)= \\omega \\left( T \\right).\\]\n\tThis implies that $\\|\\mathfrak Ie^{\\textup i\\theta}T \\|=\\|\\mathfrak Re^{\\textup i\\theta}T \\|$ and $2\\omega(T)=\\|\\mathfrak Ie^{\\textup i\\theta}T \\|+\\|\\mathfrak Re^{\\textup i\\theta}T \\|,$ i.e., for any $\\theta \\in \\mathbb{R}$ we get\n\t$$\\|\\mathfrak Ie^{\\textup i\\theta}T \\|=\\|\\mathfrak Re^{\\textup i\\theta}T \\|=\\frac{\\|T\\|}{2}.$$\n\t\\end{proof}\n\n\n\\begin{corollary}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$. 
Then, \n\\begin{eqnarray}\n\\omega \\left( \\left[ \\begin{matrix}\nO & A  \\\\\n{{B}} & O  \\\\\n\\end{matrix} \\right]\\right)&\\geq&\n\\frac{1}{2}\\left\\| \\left[ \\begin{matrix}\nO & A  \\\\\n{{B}} & O  \\\\\n\\end{matrix} \\right] \\right\\|+\\frac{1}{2}\\left( 2\\omega \\left( \\left[ \\begin{matrix}\nO & A  \\\\\n{{B}} & O  \\\\\n\\end{matrix} \\right] \\right)-\\frac{\\|A-B^* \\|+\\| A+B^* \\|}{2} \\right)\\nonumber\\\\\n&\\geq&\\frac{1}{2}\\left\\| \\left[ \\begin{matrix}\nO & A  \\\\\n{{B}} & O  \\\\\n\\end{matrix} \\right] \\right\\|+\\frac{1}{4}\\Big| \\|A-B^*\\|-\\|A+B^*\\|\\Big|.\\nonumber\n\\end{eqnarray}\n\\end{corollary}\n\\begin{proof}\n\tThis follows from Corollary \\ref{refwn} by considering $T=\\left[ \\begin{matrix}\n\tO & A  \\\\\n\t{{B}} & O  \\\\\n\t\\end{matrix} \\right] $, for which the equality \\eqref{eq_norm_blocks} gives $\\left\\| \\mathfrak RT \\right\\|=\\frac{1}{2}\\left\\| A+{{B}^{*}} \\right\\|$ and $\\left\\| \\mathfrak IT \\right\\|=\\frac{1}{2}\\left\\| A-{{B}^{*}} \\right\\|$.\n\\end{proof}\n\n\n\nOn the other hand, a reverse of the second inequality in \\eqref{eq_equiv_norms} may be obtained as follows. \n\\begin{theorem}\nLet $T\\in \\mathcal B\\left( \\mathcal H \\right)$. Then for any $0\\le t\\le 1$,\n\\[\\left\\| T \\right\\|\\le \\omega \\left( T \\right)+\\frac{\\left\\| T \\right\\|-\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|}{2r}-\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{2R},\\]\nwhere $r=\\min \\left\\{ t,1-t \\right\\}$ and $R=\\max \\left\\{ t,1-t \\right\\}$. 
In particular, \n\\[\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{2R}\\le \\frac{\\left\\| T \\right\\|-\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|}{2r}.\\]\n\n\\end{theorem}\n\\begin{proof}\nThe inequalities \\eqref{6} and \\eqref{ned_quo} imply\n\t\\[\\begin{aligned}\n \\left\\| T \\right\\|&\\le \\left\\| \\mathfrak RT \\right\\|+\\frac{\\left\\| T \\right\\|-\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|}{2r} \\\\ \n & \\le \\omega \\left( T \\right)+\\frac{\\left\\| T \\right\\|-\\left\\| \\left( 1-t \\right)T+t{{T}^{*}} \\right\\|}{2r}-\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{2R}. \n\\end{aligned}\\]\t\nThis proves the first assertion. The second assertion follows from the first, noting that $\\omega(T)\\le\\|T\\|.$\n\\end{proof}\n\n\n\n\nContinuing with the theme of this paper, in the following result, the numerical radius of convex combinations of operator matrices is used to refine the triangle inequality, thanks to\n\\[\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)\\le \\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right); 0\\le t\\le 1.\\]\n\\begin{theorem}\\label{10}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$. Then for any $0\\le t\\le 1$,\n\\[\\left\\| A+B \\right\\|\\le \\left\\| A \\right\\|+\\left\\| B \\right\\|-\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{R},\\]\nwhere $R=\\max \\left\\{ t,1-t \\right\\}$. 
\n\\end{theorem}\n\\begin{proof}\nLet $T=\\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right]$ on $\\mathcal H\\oplus \\mathcal H$. Then by \\eqref{6}, we can write \n{\\footnotesize\n\\[\\begin{aligned}\n & \\left\\| A+B \\right\\| \\\\ \n & =\\left\\| T+{{T}^{*}} \\right\\| \\\\ \n & =2\\left\\| \\mathfrak RT \\right\\| \\\\ \n & \\le 2\\omega \\left( T \\right)-\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{R} \\\\ \n & =2\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left\\| \\mathfrak R{{e}^{\\textup i\\theta }}T \\right\\|-\\frac{\\omega \\left( T \\right)-\\omega \\left( \\left( 1-t \\right)T+t{{T}^{*}} \\right)}{R} \\\\ \n & =\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left\\| \\left[ \\begin{matrix}\n O & {{e}^{\\textup i\\theta }}A+{{e}^{-\\textup i\\theta }}B \\\\\n {{e}^{\\textup i\\theta }}{{B}^{*}}+{{e}^{-\\textup i\\theta }}{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right\\|-\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{R} \\\\ \n & =\\underset{\\theta \\in \\mathbb{R}}{\\mathop{\\sup }}\\,\\left\\| {{e}^{\\textup i\\theta }}A+{{e}^{-\\textup i\\theta }}B \\right\\|-\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{R} \\\\ \n & \\le \\left\\| A \\right\\|+\\left\\| B \\right\\|-\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t 
\\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{R}, \n\\end{aligned}\\]}\nwhere the triangle inequality for the operator norm has been used to obtain the last inequality. This completes the proof.\n\\end{proof}\n\n\n\\begin{remark}\nLetting $T=\\left[\\begin{array}{cc}O&A\\\\B^*&O\\end{array}\\right]$, we have\n\\[\\begin{aligned}\n & \\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)+\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)B-tA \\\\\n \\left( 1-t \\right){{A}^{*}}-t{{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n&=\\omega((1-t)T+tT^*)+\\omega((1-t)T^*-tT)\\\\\n&\\le 2\\omega(T)\\quad({\\text{by\\;the\\;triangle\\;inequality}})\\\\\n & = 2\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right),\n\\end{aligned}\\]\nfor any $0\\le t\\le 1$. \nThus, noting \\eqref{eq_max_w} we have\n\\[\\begin{aligned}\n & \\max \\left\\{ \\omega \\left( \\left( 1-t \\right)\\left( B+{{A}^{*}} \\right)-t\\left( A+{{B}^{*}} \\right) \\right),\\omega \\left( \\left( 1-t \\right)\\left( B-{{A}^{*}} \\right)+t\\left( {{B}^{*}}-A \\right) \\right) \\right\\} \\\\ \n &\\quad +\\max \\left\\{ \\omega \\left( \\left( 1-t \\right)\\left( A+{{B}^{*}} \\right)+t\\left( B+{{A}^{*}} \\right) \\right),\\omega \\left( \\left( 1-t \\right)\\left( A-{{B}^{*}} \\right)+t\\left( B-{{A}^{*}} \\right) \\right) \\right\\} \\\\ \n & \\le 2\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)+2\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)B-tA \\\\\n \\left( 1-t \\right){{A}^{*}}-t{{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & \\le 4\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] 
\\right).\n\\end{aligned}\\]\n In particular, letting $t=\\frac{1}{2}$,\n \\begin{equation}\\label{8}\n\\begin{aligned}\n  & \\max \\left\\{ \\left\\| \\mathfrak IA-\\mathfrak IB \\right\\|,\\left\\| \\mathfrak RA-\\mathfrak RB \\right\\| \\right\\}+\\max \\left\\{ \\left\\| \\mathfrak RA+\\mathfrak RB \\right\\|,\\left\\| \\mathfrak IA+\\mathfrak IB \\right\\| \\right\\} \\\\ \n & \\le \\left\\| A+B \\right\\|+\\left\\| A-B \\right\\| \\\\ \n & \\le 4\\omega \\left( \\left[ \\begin{matrix}\n   O & A  \\\\\n   {{B}^{*}} & O  \\\\\n  \\end{matrix} \\right] \\right).\n  \\end{aligned}\n  \\end{equation}\n\nAlso noting \\eqref{eq_w_average_w}, by the second inequality in \\eqref{8}, we get the following interesting inequalities\n\\[\\begin{aligned}\n  \\frac{\\left\\| A+B \\right\\|+\\left\\| A-B \\right\\|}{2}&\\le 2\\omega \\left( \\left[ \\begin{matrix}\n   O & A  \\\\\n   {{B}^{*}} & O  \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & \\le \\omega \\left( A+{{B}^{*}} \\right)+\\omega \\left( A-{{B}^{*}} \\right). \n\\end{aligned}\\]\n\\end{remark}\n\n\nThe following result provides an integral version of \\eqref{eq_fuad_mos}, where the numerical radius of convex combinations of operator matrices is used to refine the triangle inequality. Since its proof is similar to that of Theorem \\ref{10}, we state it without details.\n\\begin{theorem}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$. Then\n\\[\\left\\| A+B \\right\\|\\le 2\\int\\limits_{0}^{1}{\\omega \\left( \\left[ \\begin{matrix}\n   O & \\left( 1-t \\right)A+tB  \\\\\n   \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O  \\\\\n\\end{matrix} \\right] \\right)}dt\\le \\left\\| A \\right\\|+\\left\\| B \\right\\|.\\]\n\\end{theorem}\n\n\n\n\nThe matrix operator $\\left[\\begin{array}{cc}O&A\\\\B^*&O\\end{array}\\right]$ is further used to obtain the following improvement of \\eqref{eq_abuamer}.\n\\begin{theorem}\\label{11}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$. 
Then for any $0 \\le t \\le 1$,\n\\[\\begin{aligned}\n  & \\left\\| A+B \\right\\|+\\frac{\\omega \\left( \\left[ \\begin{matrix}\n   O & A  \\\\\n   {{B}^{*}} & O  \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n   O & \\left( 1-t \\right)A+tB  \\\\\n   \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O  \\\\\n\\end{matrix} \\right] \\right)}{R} \\\\ \n & \\le \\max \\left\\{ \\left\\| A \\right\\|,\\left\\| B \\right\\| \\right\\}+\\frac{1}{2}\\left( \\left\\| {{\\left| A \\right|}^{\\frac{1}{2}}}{{\\left| B \\right|}^{\\frac{1}{2}}} \\right\\|+\\left\\| {{\\left| {{B}^{*}} \\right|}^{\\frac{1}{2}}}{{\\left| {{A}^{*}} \\right|}^{\\frac{1}{2}}} \\right\\| \\right), \n\\end{aligned}\\]\nwhere $R=\\max \\left\\{ t,1-t \\right\\}$. In particular, if $A$ and $B$ are self-adjoint, we get\n{\\small\n\\[\\left\\| A+B \\right\\|+\\frac{\\omega \\left( \\left[ \\begin{matrix}\n   O & A  \\\\\n   B & O  \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n   O & \\left( 1-t \\right)A+tB  \\\\\n   \\left( 1-t \\right)B+tA & O  \\\\\n\\end{matrix} \\right] \\right)}{R}\\le \\max \\left\\{ \\left\\| A \\right\\|,\\left\\| B \\right\\| \\right\\}+\\left\\| {{\\left| A \\right|}^{\\frac{1}{2}}}{{\\left| B \\right|}^{\\frac{1}{2}}} \\right\\|.\\]\n}\n\\end{theorem}\n\\begin{proof}\nCombining \\eqref{eq_need_prf} with the inequality \\eqref{4}, applied to $T=\\left[ \\begin{matrix}\n   O & A  \\\\\n   {{B}^{*}} & O  \\\\\n\\end{matrix} \\right]$, we infer the desired result.\n\\end{proof}\n\n\n\\begin{remark}\nIt is worthwhile to mention here that if $A$ and $B$ are positive operators, then Theorem \\ref{11} reduces to \\cite{7}\n\\[\\left\\| A+B \\right\\|\\le \\max \\left\\{ \\left\\| A \\right\\|,\\left\\| B \\right\\| \\right\\}+\\left\\| {{A}^{\\frac{1}{2}}}{{B}^{\\frac{1}{2}}} \\right\\|.\\]\nThis follows from the following identity for positive operators \\cite{8}\n\\begin{equation}\\label{eq_ned_pf_remark}\n\\omega \\left( \\left[ \\begin{matrix}\n   O & \\left( 1-t \\right)A+tB  \\\\\n   \\left( 1-t \\right)B+tA & O  \\\\\n\\end{matrix} \\right] 
\\right)=\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n B & O \\\\\n\\end{matrix} \\right] \\right)=\\frac{1}{2}\\left\\| A+B \\right\\|.\n\\end{equation}\n\n\\end{remark}\n\nNow we move to study inequalities for $\\omega(AB)$, where $A,B\\in\\mathcal{B}(\\mathcal{H})$. Interestingly, the following numerical radius inequality leads to a new proof of the arithmetic-geometric mean inequality for positive operators, as we shall see in Remark \\ref{remark_amgm} below.\n\\begin{theorem}\\label{12}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$. Then for any $0 \\le t \\le 1$,\n\\[{{\\omega }^{\\frac{1}{2}}}\\left( AB \\right)\\le \\frac{1}{2}\\left\\| A+B^{*} \\right\\|+\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB^{*} \\\\\n \\left( 1-t \\right){{B}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{2r},\\]\nwhere $r=\\min \\left\\{ t,1-t \\right\\}$.\n\\end{theorem}\n\\begin{proof}\nBy the second inequality in \\eqref{6}, we have\n\\[2\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)\\le \\left\\| A+B \\right\\|+\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB \\\\\n \\left( 1-t \\right){{B}^{*}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{r}.\\]\nThus,\n\\[\\begin{aligned}\n & 2{{\\omega }^{\\frac{1}{2}}}\\left( AB \\right) \\\\ \n & \\le 2\\max \\left\\{ {{\\omega }^{\\frac{1}{2}}}\\left( AB \\right),{{\\omega }^{\\frac{1}{2}}}\\left( BA \\right) \\right\\} \\\\ \n & =2{{\\omega }^{\\frac{1}{2}}}\\left( \\left[ \\begin{matrix}\n AB & O \\\\\n O & BA \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =2{{\\omega }^{\\frac{1}{2}}}\\left( {{\\left[ \\begin{matrix}\n O & A \\\\\n B & O \\\\\n\\end{matrix} \\right]}^{2}} 
\\right) \\\\ \n & \\le 2\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n B & O \\\\\n\\end{matrix} \\right] \\right)\\quad({\\text{by}}\\;\\eqref{eq_power_ineq}) \\\\ \n & \\le \\left\\| A+B^{*} \\right\\|+\\frac{\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}} & O \\\\\n\\end{matrix} \\right] \\right)-\\omega \\left( \\left[ \\begin{matrix}\n O & \\left( 1-t \\right)A+tB^{*} \\\\\n \\left( 1-t \\right){{B}}+t{{A}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)}{r}, \n\\end{aligned}\\]\nwhich completes the proof.\n\\end{proof}\n\nNow we use Theorem \\ref{12} to prove the following arithmetic-geometric mean inequality for positive operators.\n\\begin{remark}\\label{remark_amgm}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$ be two positive operators. It follows from Theorem \\ref{12},\n\\[\\begin{aligned}\n\\left\\| {{A}^{\\frac{1}{2}}}{{B}^{\\frac{1}{2}}} \\right\\|&={{r}^{\\frac{1}{2}}}\\left( AB \\right)\\quad \\text{(by \\cite[(2.1)]{5})}\\\\\n&\\le {{\\omega }^{\\frac{1}{2}}}\\left( AB \\right)\\\\\n&\\le \\frac{1}{2}\\left\\| A+B \\right\\|,\n\\end{aligned}\\]\nwhere \\eqref{eq_ned_pf_remark} has been used together with the fact that $r(T)\\le \\omega(T)$ for any $T\\in\\mathcal{B}(\\mathcal{H})$.\n\\end{remark}\n\nWhile Theorem \\ref{12} provides an upper bound of $\\omega(AB)$ in terms of $ \\left[ \\begin{matrix}\n O & A \\\\\n {{B}} & O \\\\\n\\end{matrix} \\right]$, we have the following lower bound in terms of the same matrix operator.\n\\begin{theorem}\\label{1}\nLet $A,B\\in \\mathcal B\\left( \\mathcal H \\right)$. 
Then\n\\[\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n B & O \\\\\n\\end{matrix} \\right] \\right)\\le \\sqrt{\\max \\left\\{ \\omega \\left( AB \\right),\\omega \\left( BA \\right) \\right\\}+\\underset{\\lambda \\in \\mathbb{C}}{\\mathop{\\inf }}\\,{{\\left\\| \\left[ \\begin{matrix}\n -\\lambda I & A \\\\\n B & -\\lambda I \\\\\n\\end{matrix} \\right] \\right\\|}^{2}}},\\]\nwhere $I$ is the identity operator in $\\mathcal{B}(\\mathcal{H}).$\n\\end{theorem}\n\\begin{proof}\nBy the main result of \\cite{9}, we can write\n\\[\\begin{aligned}\n \\max \\left\\{ \\omega \\left( A{{B}^{*}} \\right),\\omega \\left( {{B}^{*}}A \\right) \\right\\}&=\\omega \\left( \\left[ \\begin{matrix}\n A{{B}^{*}} & O \\\\\n O & {{B}^{*}}A \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =\\omega \\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right]\\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =\\omega \\left( {{T}^{2}} \\right) \\\\ \n & \\ge {{\\omega }^{2}}\\left( T \\right)-\\underset{\\lambda \\in \\mathbb{C}}{\\mathop{\\inf }}\\,{{\\left\\| T-\\lambda I \\right\\|}^{2}} \\\\ \n & ={{\\omega }^{2}}\\left( \\left[ \\begin{matrix}\n O & A \\\\\n {{B}^{*}} & O \\\\\n\\end{matrix} \\right] \\right)-\\underset{\\lambda \\in \\mathbb{C}}{\\mathop{\\inf }}\\,{{\\left\\| \\left[ \\begin{matrix}\n -\\lambda I & A \\\\\n {{B}^{*}} & -\\lambda I \\\\\n\\end{matrix} \\right] \\right\\|}^{2}},\n\\end{aligned}\\]\nwhich completes the proof.\n\\end{proof}\n\n\\begin{remark}\nIt follows from Theorem \\ref{1} that for $X_i\\in\\mathcal{B}(\\mathcal{H})$ $\\left( i=1,2,3,4 \\right)$,\n\\[\\begin{aligned}\n & \\omega \\left( \\left[ \\begin{matrix}\n {{X}_{1}} & {{X}_{2}} \\\\\n {{X}_{3}} & {{X}_{4}} \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =\\omega \\left( \\left[ \\begin{matrix}\n {{X}_{1}} & O \\\\\n O & {{X}_{4}} \\\\\n\\end{matrix} \\right]+\\left[ \\begin{matrix}\n O & {{X}_{2}} \\\\\n 
{{X}_{3}} & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & \\le \\omega \\left( \\left[ \\begin{matrix}\n {{X}_{1}} & O \\\\\n O & {{X}_{4}} \\\\\n\\end{matrix} \\right] \\right)+\\omega \\left( \\left[ \\begin{matrix}\n O & {{X}_{2}} \\\\\n {{X}_{3}} & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & \\le \\max \\left\\{ \\omega \\left( {{X}_{1}} \\right),\\omega \\left( {{X}_{4}} \\right) \\right\\}+\\sqrt{\\max \\left\\{ \\omega \\left( {{X}_{2}}{{X}_{3}} \\right),\\omega \\left( {{X}_{3}}{{X}_{2}} \\right) \\right\\}+\\underset{\\lambda \\in \\mathbb{C}}{\\mathop{\\inf }}\\,{{\\left\\| \\left[ \\begin{matrix}\n -\\lambda I & {{X}_{2}} \\\\\n {{X}_{3}} & -\\lambda I \\\\\n\\end{matrix} \\right] \\right\\|}^{2}}}. \n\\end{aligned}\\]\n\\end{remark}\n\\begin{remark}\nNotice that\n\\[\\begin{aligned}\n r\\left( {{X}_{1}}{{X}_{2}}+{{X}_{3}}{{X}_{4}} \\right) & =r\\left( \\left[ \\begin{matrix}\n {{X}_{1}}{{X}_{2}}+{{X}_{3}}{{X}_{4}} & O \\\\\n O & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =r\\left( \\left[ \\begin{matrix}\n {{X}_{1}} & {{X}_{3}} \\\\\n O & O \\\\\n\\end{matrix} \\right]\\left[ \\begin{matrix}\n {{X}_{2}} & O \\\\\n {{X}_{4}} & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =r\\left( \\left[ \\begin{matrix}\n {{X}_{2}} & O \\\\\n {{X}_{4}} & O \\\\\n\\end{matrix} \\right]\\left[ \\begin{matrix}\n {{X}_{1}} & {{X}_{3}} \\\\\n O & O \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & =r\\left( \\left[ \\begin{matrix}\n {{X}_{2}}{{X}_{1}} & {{X}_{2}}{{X}_{3}} \\\\\n {{X}_{4}}{{X}_{1}} & {{X}_{4}}{{X}_{3}} \\\\\n\\end{matrix} \\right] \\right) \\\\ \n & \\le \\omega \\left( \\left[ \\begin{matrix}\n {{X}_{2}}{{X}_{1}} & {{X}_{2}}{{X}_{3}} \\\\\n {{X}_{4}}{{X}_{1}} & {{X}_{4}}{{X}_{3}} \\\\\n\\end{matrix} \\right] \\right). 
\n\\end{aligned}\\]\nIf in the above inequality we put ${{X}_{1}}={{e}^{\\textup i\\theta }}A$, ${{X}_{2}}=B$, ${{X}_{3}}={{e}^{-\\textup i\\theta }}{{B}^{*}}$, and ${{X}_{4}}={{A}^{*}}$, we reach\n\\begin{equation}\\label{13}\n\\left\\| {{\\operatorname{\\mathfrak Re}}^{\\textup i\\theta }}AB \\right\\|\\le \\frac{1}{2}\\omega \\left( \\left[ \\begin{matrix}\n {{e}^{\\textup i\\theta }}BA & {{e}^{-\\textup i\\theta }}B{{B}^{*}} \\\\\n {{e}^{\\textup i\\theta }}{{A}^{*}}A & {{\\left( {{e}^{\\textup i\\theta }}BA \\right)}^{*}} \\\\\n\\end{matrix} \\right] \\right).\n\\end{equation}\nThis indicates the relation between the numerical radius of the product of two operators and the numerical radius of $2\\times 2$ operator matrices.\n\nThe case $A=U{{\\left| T \\right|}^{1-t}}$ and $B={{\\left| T \\right|}^{t}}$, in \\eqref{13}, implies\n\\[\\left\\| \\mathfrak R{{e}^{\\textup i\\theta }}T \\right\\|\\le \\frac{1}{2}\\omega \\left( \\left[ \\begin{matrix}\n {{e}^{\\textup i\\theta }}\\widetilde{{{T}_{t}}} & {{e}^{-\\textup i\\theta }}{{\\left| T \\right|}^{2t}} \\\\\n {{e}^{\\textup i\\theta }}{{\\left| T \\right|}^{2\\left( 1-t \\right)}} & {{\\left( {{e}^{\\textup i\\theta }}\\widetilde{{{T}_{t}}} \\right)}^{*}} \\\\\n\\end{matrix} \\right] \\right),\\quad 0\\le t \\le 1,\\]\nwhere $\\widetilde{{{T}_{t}}}$ is the weighted Aluthge transform of $T$ defined by $\\widetilde{{{T}_{t}}}=|T|^tU|T|^{1-t},$ where $U$ is the partial isometry appearing in the polar decomposition in $T=U|T|.$ \n\nNotice that, if we replace $A=\\sqrt{\\frac{\\left\\| B \\right\\|}{\\left\\| A \\right\\|}}A$ and $B=\\sqrt{\\frac{\\left\\| A \\right\\|}{\\left\\| B \\right\\|}}B$, in \\eqref{13}, we also have\n\\begin{equation}\\label{14}\n\\left\\| {{\\operatorname{\\mathfrak Re}}^{\\textup i\\theta }}AB \\right\\|\\le \\frac{1}{2}\\omega \\left( \\left[ \\begin{matrix}\n {{e}^{\\textup i\\theta }}BA & {{e}^{-\\textup i\\theta }}\\frac{\\left\\| A \\right\\|}{\\left\\| B \\right\\|}B{{B}^{*}} 
\\\\\n {{e}^{\\textup i\\theta }}\\frac{\\left\\| B \\right\\|}{\\left\\| A \\right\\|}{{A}^{*}}A & {{\\left( {{e}^{\\textup i\\theta }}BA \\right)}^{*}} \\\\\n\\end{matrix} \\right] \\right),\n\\end{equation}\nand\n\\[\\left\\| {{\\operatorname{\\mathfrak Re}}^{\\textup i\\theta }}T \\right\\|\\le \\frac{1}{2}\\omega \\left( \\left[ \\begin{matrix}\n {{e}^{\\textup i\\theta }}\\widetilde{{{T}_{t}}} & {{e}^{-\\textup i\\theta }}{{\\left\\| T \\right\\|}^{1-2t}}{{\\left| T \\right|}^{2t}} \\\\\n {{e}^{\\textup i\\theta }}{{\\left\\| T \\right\\|}^{2t-1}}{{\\left| T \\right|}^{2\\left( 1-t \\right)}} & {{\\left( {{e}^{\\textup i\\theta }}\\widetilde{{{T}_{t}}} \\right)}^{*}} \\\\\n\\end{matrix} \\right] \\right).\\]\n\\end{remark}\nTo better understand how the above relations help obtain the numerical radius of the product of two operators, we give an example. Recall that in \\cite[Corollary 2]{8}, Abu-Omar and Kittaneh proved that if $\\mathcal H_1$ and $\\mathcal H_2$ are Hilbert spaces and $\\mathbb X=\\left[ \\begin{matrix}\n {{X}_{1}} & {{X}_{2}} \\\\\n {{X}_{3}} & {{X}_{4}} \\\\\n\\end{matrix} \\right]$ is an operator matrix with\n$X_1\\in \\mathcal B(\\mathcal H_1)$, $X_2\\in \\mathcal B(\\mathcal H_2,\\mathcal H_1)$, $X_3\\in \\mathcal B(\\mathcal H_1,\\mathcal H_2)$, and $X_4\\in \\mathcal B(\\mathcal H_2)$, then\n\\begin{equation*}\n\\omega \\left( \\mathbb X \\right)\\le \\frac{1}{2}\\left( \\omega \\left( {{X}_{1}} \\right)+\\omega \\left( {{X}_{4}} \\right)+\\sqrt{{{\\left( \\omega \\left( {{X}_{1}} \\right)-\\omega \\left( {{X}_{4}} \\right) \\right)}^{2}}+4{{\\omega }^{2}}\\left( \\mathbb E \\right)} \\right),\n\\end{equation*}\nwhere $\\mathbb E=\\left[ \\begin{matrix}\n O & {{X}_{2}} \\\\\n {{X}_{3}} & O \\\\\n\\end{matrix} \\right]$.\nIn the same paper (see \\cite[Remark 6]{8}), it has been shown that\n\\[{{\\omega }}\\left( \\mathbb E \\right)\\le \\min \\left\\{ {{\\alpha }_{1}},{{\\alpha }_{2}} \\right\\}\\]\nwhere\n\\[{{\\alpha 
}_{1}}=\\frac{1}{4}\\sqrt{\\left\\| {{\\left| {{X}_{2}} \\right|}^{2}}+{{\\left| X_{3}^{*} \\right|}^{2}} \\right\\|+2\\omega \\left( {{X}_{3}}{{X}_{2}} \\right)}\\;\\text{ and }\\;{{\\alpha }_{2}}=\\frac{1}{4}\\sqrt{\\left\\| {{\\left| X_{2}^{*} \\right|}^{2}}+{{\\left| {{X}_{3}} \\right|}^{2}} \\right\\|+2\\omega \\left( {{X}_{2}}{{X}_{3}} \\right)}.\\]\nCombining these two inequalities we get \n\\[\\omega \\left( \\left[ \\begin{matrix}\n {{X}_{1}} & {{X}_{2}} \\\\\n {{X}_{3}} & {{X}_{4}} \\\\\n\\end{matrix} \\right] \\right)\\le \\frac{1}{2}\\left( \\omega \\left( {{X}_{1}} \\right)+\\omega \\left( {{X}_{4}} \\right)+\\sqrt{{{\\left( \\omega \\left( {{X}_{1}} \\right)-\\omega \\left( {{X}_{4}} \\right) \\right)}^{2}}+4\\min \\left\\{ \\alpha _{1}^{2},\\alpha _{2}^{2} \\right\\}} \\right).\\]\nNow, using this and \\eqref{14}, we have\n\\[\\left\\| {{\\operatorname{\\mathfrak Re}}^{\\textup i\\theta }}AB \\right\\|\\le \\frac{1}{2}\\left( \\omega \\left( BA \\right)+\\min \\left\\{ {{\\beta }_{1}},{{\\beta }_{2}} \\right\\} \\right),\\]\nwhere\n\\[{{\\beta }_{1}}=\\frac{1}{2}\\sqrt{\\left\\| \\frac{{{\\left\\| A \\right\\|}^{2}}}{{{\\left\\| B \\right\\|}^{2}}}{{\\left| {{B}^{*}} \\right|}^{4}}+\\frac{{{\\left\\| B \\right\\|}^{2}}}{{{\\left\\| A \\right\\|}^{2}}}{{\\left| A \\right|}^{4}} \\right\\|+2\\omega \\left( {{\\left| A \\right|}^{2}}{{\\left| {{B}^{*}} \\right|}^{2}} \\right)},\\]\nand\n\\[{{\\beta }_{2}}=\\frac{1}{2}\\sqrt{\\left\\| \\frac{{{\\left\\| A \\right\\|}^{2}}}{{{\\left\\| B \\right\\|}^{2}}}{{\\left| {{B}^{*}} \\right|}^{4}}+\\frac{{{\\left\\| B \\right\\|}^{2}}}{{{\\left\\| A \\right\\|}^{2}}}{{\\left| A \\right|}^{4}} \\right\\|+2\\omega \\left( {{\\left| {{B}^{*}} \\right|}^{2}}{{\\left| A \\right|}^{2}} \\right)}.\\]\nThis implies,\n\\[\\omega \\left( AB \\right)\\le \\frac{1}{2}\\omega \\left( BA \\right)+\\frac{1}{4}\\sqrt{\\left\\| \\frac{{{\\left\\| A \\right\\|}^{2}}}{{{\\left\\| B \\right\\|}^{2}}}{{\\left| {{B}^{*}} 
\\right|}^{4}}+\\frac{{{\\left\\| B \\right\\|}^{2}}}{{{\\left\\| A \\right\\|}^{2}}}{{\\left| A \\right|}^{4}} \\right\\|+2\\min \\left\\{ \\omega \\left( {{\\left| A \\right|}^{2}}{{\\left| {{B}^{*}} \\right|}^{2}} \\right),\\omega \\left( {{\\left| {{B}^{*}} \\right|}^{2}}{{\\left| A \\right|}^{2}} \\right) \\right\\}}.\\]\nWe also have by \\eqref{13},\n\\[\\omega \\left( AB \\right)\\le \\frac{1}{2}\\omega \\left( BA \\right)+\\frac{1}{4}\\sqrt{\\left\\| {{\\left| {{B}^{*}} \\right|}^{4}}+{{\\left| A \\right|}^{4}} \\right\\|+2\\min \\left\\{ \\omega \\left( {{\\left| A \\right|}^{2}}{{\\left| {{B}^{*}} \\right|}^{2}} \\right),\\omega \\left( {{\\left| {{B}^{*}} \\right|}^{2}}{{\\left| A \\right|}^{2}} \\right) \\right\\}}.\\]\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nIt is well known result that realistic interpretations of quantum\ntheory are nonlocal \\cite{bell}. This was first shown by means of\nBell's inequality. Afterwards, the proof of the same for three\nspin-1\/2 particles as well as for two spin-1 particles, without\nusing inequality caused much interest among physicists\n\\cite{green}. Surprisingly Hardy gave a proof of nonlocality\nwithout using inequality, for two spin-1\/2 particles which\nrequires two measurement settings on both the sides as happens in\ncase of Bell's argument \\cite{hardy92}. Later Hardy showed this\nkind of nonlocality argument can be made for almost all entangled\nstate of two spin-1\/2 particles except for maximally entangled\none.\\cite{hardy93}. He considered the cases where the measurement\nchoices were same for both the parties. Jordan showed that for\nany given entangled state of two spin-1\/2 particles except\nmaximally entangled state there are many set of observables on\neach side which satisfy Hardy's nonlocality conditions\n\\cite{jordan}. 
Jordan also showed that the set of observables\nwhich gives the maximum probability of success in showing the\ncontradiction with local realism is the same as that\nchosen by Hardy.\\\\\nRecently, Cabello introduced a logical structure to prove\nBell's theorem without inequality for the three-particle GHZ and W\nstates \\cite{cabello02}. The logical structure presented by Cabello is\nas follows: Consider four events D, E, F and G, where D and F may\nhappen in one system and E and G happen in another system which is\nfar apart from the first. The probability of joint occurrence of D\nand E is non-zero, E always implies F, D always implies G, but F\nand G happen with lower probability than D and E. These four\nstatements are not compatible with local realism. The difference\nbetween these two probabilities is the measure of violation of\nlocal realism. Though Cabello's logical structure was originally\nproposed for showing nonlocality for three-particle states,\nLiang and Li \\cite{liang05} exploited it in establishing\nnonlocality without inequality for a class of two qubit mixed\nentangled states. In this sense, Hardy's logical structure is a\nspecial case of Cabello's structure, as the logical structure of\nHardy for establishing nonlocality is as follows: D and E\nsometimes happen, E always implies F, D always implies G, but F\nand G never happen. Recently, based on Cabello's logical structure,\nKunkri and Choudhary \\cite{kunkri} have shown that there may be\nmany classes of two qubit mixed states which exhibit nonlocality\nwithout inequality. It is noteworthy here that, in contrast, there\nis no two qubit mixed state which shows Hardy-type nonlocality\n\\cite{karpra}. So it seems interesting to study whether\nmaximally entangled states follow this more general (than\nHardy's) nonlocality argument of Cabello or not, because Hardy's\nnonlocality argument is not followed by a maximally entangled\nstate. 
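The incompatibility of Cabello's four statements with local realism can be checked by brute force: in a deterministic hidden-variable model every hidden state assigns definite values $\pm 1$ to D, E, F and G, and the two ``always implies'' statements force any assignment with D = E = +1 to also have F = G = +1, so no mixture of such assignments can make F and G happen less often than D and E. A minimal sketch of this counting argument (not part of the paper; the $\pm 1$ encoding of outcomes is an illustrative choice):

```python
from itertools import product

# All deterministic hidden-variable assignments of outcomes (D, E, F, G),
# keeping only those compatible with Cabello's two "always implies" statements:
#   E = +1 always implies F = +1   (so F = -1 together with E = +1 never happens)
#   D = +1 always implies G = +1   (so D = +1 together with G = -1 never happens)
admissible = [
    (D, E, F, G)
    for D, E, F, G in product([-1, +1], repeat=4)
    if not (E == +1 and F == -1) and not (D == +1 and G == -1)
]

# In every admissible assignment the event {D = +1, E = +1} is contained in
# the event {F = +1, G = +1}; averaging over any distribution of the hidden
# variable therefore forces P(F = +1, G = +1) >= P(D = +1, E = +1).
for D, E, F, G in admissible:
    assert (D == +1 and E == +1) <= (F == +1 and G == +1)
```

This is exactly why demanding that F and G happen with lower probability than D and E contradicts local realism.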
In this paper we have studied it and found that maximally\nentangled states do not respond even to this argument. However,\nfor all other pure entangled states , Cabello's argument runs. We\nfurther have enquired about the highest value of difference\nbetween the two probabilities which appear in Cabello's argument.\nSurprisingly this value differs from the highest value of\nprobability which\nappears in Hardy's argument.\\\\\n\\section*{Cabello's argument for two qubits}\nLet us consider two spin-1\/2 particles A and B. Let F, D, G and E\nrepresent the spin observables along $n_F (\n\\sin{\\theta_F}\\cos{\\phi_F}, \\sin{\\theta_F}\\sin{\\phi_F},\n\\cos{\\theta_F})$, $n_D ( \\sin{\\theta_D}\\cos{\\phi_D},\n\\sin{\\theta_D}\\sin{\\phi_D}, \\cos{\\theta_D})$, $n_G (\n\\sin{\\theta_G}\\cos{\\phi_G}, \\sin{\\theta_G}\\sin{\\phi_G},\n\\cos{\\theta_G})$ and $n_E ( \\sin{\\theta_E}\\cos{\\phi_E},\n\\sin{\\theta_E}\\sin{\\phi_E}, \\cos{\\theta_E})$ respectively. Every\nobservable has the eigen value $\\pm 1$. Let F and D are measured\non particle A and G and E are measured on particle B. Now we\nconsider the following equations\n\\begin{equation}\nP(F = +1, G = +1) = q_1\n\\end{equation}\n\\begin{equation}\nP(D = +1, G = -1) = 0\n\\end{equation}\n\\begin{equation}\nP(F = -1, E = +1) = 0\n\\end{equation}\n\\begin{equation}\nP(D = +1, E = +1) = q_4\n\\end{equation}\n\nEquation (1) tells that if F is measured on particle A and G is\nmeasured on particle B, then the probability that both will get +1\neigen value is $q_1$. Other equations can be analyzed in a similar\nfashion. These equations form the basis of Cabello's nonlocality\nargument. It can easily be seen that these equations contradict\nlocal-realism if $q_1 < q_4$. To show this, let us consider those\nhidden variable states $\\lambda$ for which $D = +1$ and $E = +1$.\nNow for these states equations $(2)$ and $(3)$ tell that the\nvalues of $G$ and $F$ must be equal to $+1$. 
Thus according to\nlocal realism $P(F = +1, G = +1)$ should be at least equal to\n$q_4$, which contradicts equation $(1)$ as $q_1 < q_4$. It should\nbe noted here that $q_1=0$ reduces this argument to Hardy's one.\nSo by Cabello's argument we specifically mean that the above\nargument runs even with nonzero $q_1$.\\\\\nNow we will show that for almost all two qubit pure entangled\nstate other than maximally entangled one this kind of nonlocality\n argument runs. Following Schmidt\n decomposition procedure any\n entangled state of two particles A and B can be written as\n \\begin{equation}\n |\\psi\\rangle = (\\cos{\\beta}) |0\\rangle_A |0\\rangle_B + (\\sin\n {\\beta})e^{i\\gamma} |1\\rangle_A |1\\rangle_B\n \\end{equation}\nIf either $\\cos{\\beta}$ or $\\sin{\\beta}$ is zero, we have a\nproduct state not an entangled state. Then it is not possible to\nsatisfy equation $(1)-(4)$. Hence we assume that neither\n$\\cos{\\beta}$ nor $\\sin{\\beta}$ is zero; both are positive.\\\\\nThe density matrix for the above state is\n\\begin{equation}\n\\begin{array}{lcl}\n \\rho = \\frac{1}{4}[I^A \\otimes I^B + (\\cos^2{\\beta} - \\sin^2{\\beta})I^A\\otimes \\sigma_z^B +\n(\\cos^2{\\beta} - \\sin^2{\\beta})\\sigma_z^A \\otimes I^B \\\\ +\n(2\\cos{\\beta}\\sin{\\beta}\\cos{\\gamma})\\sigma_x^A \\otimes \\sigma_x^B\n+ (2\\cos{\\beta}\\sin{\\beta}\\sin{\\gamma})\\sigma_x^A \\otimes\n\\sigma_y^B \\\\ + (2\\cos{\\beta}\\sin{\\beta}\\sin{\\gamma})\\sigma_y^A\n\\otimes \\sigma_x^B -\n(2\\cos{\\beta}\\sin{\\beta}\\cos{\\gamma})\\sigma_y^A \\otimes \\sigma_y^B\n+ \\sigma_z^A \\otimes \\sigma_z^B]\n\\end{array}\n\\end{equation}\n\nWhere $\\sigma_x$, $\\sigma_y$ and $\\sigma_z$ are Pauli operators.\nNow for this state if F is measured on particle A and G is\nmeasured on particle B, then the probability that both will get +1\neigen value is given by\n\n\\begin{equation}\n\\begin{array}{lcl}\n P(F = +1, G = +1) = (\\frac{1}{4})[1 + (\\cos^2{\\beta} -\n \\sin^2{\\beta})(\\cos{\\theta_F} + 
\\cos{\\theta_G})\\\\\n + \\cos{\\theta_F}\\cos{\\theta_G} + 2\\cos{\\beta}\\sin{\\beta}\\sin{\\theta_F}\\sin{\\theta_G}\n \\times \\cos{(\\phi_F + \\phi_G - \\gamma)}]\n\\end{array}\n\\end{equation}\nRearranging the above expression we get\n\\begin{equation}\n\\begin{array}{lcl}\n P(F = +1, G = +1) =\n \\cos^2{\\beta}\\cos^2{\\frac{\\theta_F}{2}}\\cos^2{\\frac{\\theta_G}{2}} +\n \\sin^2{\\beta}\\sin^2{\\frac{\\theta_F}{2}}\\sin^2{\\frac{\\theta_G}{2}} +\n \\\\\n + 2\\cos{\\beta}\\sin{\\beta}\\cos{\\frac{\\theta_F}{2}}\\sin{\\frac{\\theta_F}{2}}\n \\cos{\\frac{\\theta_G}{2}}\\sin{\\frac{\\theta_G}{2}}\n \\times \\cos{(\\phi_F + \\phi_G - \\gamma)}]= q_1(say)\n\\end{array}\n\\end{equation}\nSimilar calculations for other probabilities give us:\n\\begin{equation}\n\\begin{array}{lcl}\n P(D = +1, G = -1) =\n \\cos^2{\\beta}\\cos^2{\\frac{\\theta_D}{2}}\\sin^2{\\frac{\\theta_G}{2}} +\n \\sin^2{\\beta}\\sin^2{\\frac{\\theta_D}{2}}\\cos^2{\\frac{\\theta_G}{2}} +\n \\\\\n + 2\\cos{\\beta}\\sin{\\beta}\\cos{\\frac{\\theta_D}{2}}\\sin{\\frac{\\theta_D}{2}}\n \\cos{\\frac{\\theta_G}{2}}\\sin{\\frac{\\theta_G}{2}}\n \\times \\cos{(\\phi_D + \\phi_G + \\pi - \\gamma)}]= q_2(say)\n\\end{array}\n\\end{equation}\n\n\\begin{equation}\n\\begin{array}{lcl}\n P(F = -1, E = +1) =\n \\cos^2{\\beta}\\cos^2{\\frac{\\theta_E}{2}}\\sin^2{\\frac{\\theta_F}{2}} +\n \\sin^2{\\beta}\\sin^2{\\frac{\\theta_E}{2}}\\cos^2{\\frac{\\theta_F}{2}} +\n \\\\\n + 2\\cos{\\beta}\\sin{\\beta}\\cos{\\frac{\\theta_F}{2}}\\sin{\\frac{\\theta_F}{2}}\n \\cos{\\frac{\\theta_E}{2}}\\sin{\\frac{\\theta_E}{2}}\n \\times \\cos{(\\phi_F + \\phi_E + \\pi - \\gamma)}]=q_3(say)\n\\end{array}\n\\end{equation}\n\n\\begin{equation}\n\\begin{array}{lcl}\n P(D = +1, E = +1) =\n \\cos^2{\\beta}\\cos^2{\\frac{\\theta_D}{2}}\\cos^2{\\frac{\\theta_E}{2}} +\n \\sin^2{\\beta}\\sin^2{\\frac{\\theta_D}{2}}\\sin^2{\\frac{\\theta_E}{2}} +\n \\\\\n + 2\\cos{\\beta}\\sin{\\beta}\\cos{\\frac{\\theta_D}{2}}\\sin{\\frac{\\theta_D}{2}}\n 
\\cos{\\frac{\\theta_E}{2}}\\sin{\\frac{\\theta_E}{2}}\n \\times \\cos{(\\phi_D + \\phi_E - \\gamma)}]=q_4(say)\n\\end{array}\n\\end{equation}\nFor running Cabello's nonlocality argument, following conditions\nshould be satisfied:\n\\begin{equation}\nq_2=0,~~ q_3=0,~~ (q_4 - q_1) > 0, ~~ q_1 > 0\n\\end{equation}\n\n\n\n\n\n\n\nSince $q_2$ represents probability, it can not be negative. If it\nis zero, it is at its minimum value. Then its derivative must be\nzero. From it's derivative with respect to $\\phi_D$ we see that\n$\\sin{(\\phi_D + \\phi_G + \\pi - \\gamma)}$ must be zero. Evidently\n\n\\begin{equation}\n\\cos{(\\phi_D + \\phi_G + \\pi - \\gamma)} = -1\n\\end{equation}\nWe conclude that if $q_2$ is zero, then\n\\begin{equation}\n\\cos{\\beta}\\cos{\\frac{\\theta_D}{2}}\\sin{\\frac{\\theta_G}{2}} =\n\\sin{\\beta}\\sin{\\frac{\\theta_D}{2}}\\cos{\\frac{\\theta_G}{2}}\n\\end{equation}\nSimilar sort of argument for $q_3$ to be zero will give:\n\\begin{equation}\n\\cos{(\\phi_F + \\phi_E + \\pi - \\gamma)} = -1\n\\end{equation}\nand\n\\begin{equation}\n\\cos{\\beta}\\cos{\\frac{\\theta_E}{2}}\\sin{\\frac{\\theta_F}{2}} =\n\\sin{\\beta}\\sin{\\frac{\\theta_E}{2}}\\cos{\\frac{\\theta_F}{2}}\n\\end{equation}\n\\section*{Maximally entangled states of two spin-1\/2 particles do not\nexhibit Cabello type nonlocality-}\n For maximally entangled state\n$\\tan{\\beta} = 1$, then from equations $(14)$ and $(16)$ we get\n\\begin{equation}\n\\frac{\\theta_G}{2} = \\frac{\\theta_D}{2} + n\\pi\n\\end{equation}\n\\begin{equation}\n\\frac{\\theta_F}{2} = \\frac{\\theta_E}{2} + m\\pi\n\\end{equation}\nUsing equations $(17)$ and $(18)$ first in equation $(8)$ and then\nin equation (11) we get $q_1$ and $q_4$ for maximally entangled\nstate as:\n\\begin{equation}\n\\begin{array}{lcl}\nq_1 =\n \\frac{1}{2}\\cos^2{\\frac{\\theta_D}{2}}\\cos^2{\\frac{\\theta_E}{2}} +\n \\frac{1}{2}\\sin^2{\\frac{\\theta_D}{2}}\\sin^2{\\frac{\\theta_E}{2}}\n \\\\\n + 
\\cos{\\frac{\\theta_D}{2}}\\sin{\\frac{\\theta_D}{2}}\n \\cos{\\frac{\\theta_E}{2}}\\sin{\\frac{\\theta_E}{2}}\n \\times \\cos{(\\phi_F + \\phi_G - \\gamma)}\n\\end{array}\n\\end{equation}\n\n\\begin{equation}\n\\begin{array}{lcl}\nq_4 =\n \\frac{1}{2}\\cos^2{\\frac{\\theta_D}{2}}\\cos^2{\\frac{\\theta_E}{2}} +\n \\frac{1}{2}\\sin^2{\\frac{\\theta_D}{2}}\\sin^2{\\frac{\\theta_E}{2}}\n \\\\\n + \\cos{\\frac{\\theta_D}{2}}\\sin{\\frac{\\theta_D}{2}}\n \\cos{\\frac{\\theta_E}{2}}\\sin{\\frac{\\theta_E}{2}}\n \\times \\cos{(\\phi_D + \\phi_E - \\gamma)}\n\\end{array}\n\\end{equation}\nFrom equations $(19)$ and $(20)$ it is clear that $q_4$ will be\ngreater than $q_1$ for a maximally entangled state only when\n$\\cos{(\\phi_D + \\phi_E - \\gamma)}\n> \\cos{(\\phi_F + \\phi_G - \\gamma)}$. But equation $(13)$ together\nwith equation $(15)$ says that $\\cos{(\\phi_D + \\phi_E - \\gamma)} =\n\\cos{(\\phi_F + \\phi_G - \\gamma)}$, {\\it i.e.}, $q_4 = q_1$. So one\ncan conclude that there is no choice of observables which can make\na maximally entangled state show Cabello-type\nnonlocality.\\\\\n\\section*{Cabello's argument runs for other two particle pure\nentangled states-}\n To show that for every pure entangled state of two spin-1\/2 particles other than the maximally\n entangled one a Cabello-like argument runs,\n it will be sufficient to show that one can always choose a set of observables for which\n the set of conditions given\nby equation (12) is satisfied. 
This is equivalent to saying that\nfor $0<\\beta<\\frac{\\pi}{2}$, except when $\\beta=\\frac{\\pi}{4}$,\nthere is at least one value for each of\n$\\theta_D$, $\\theta_E$, $\\theta_G$, $\\theta_F$, $\\phi_D$, $\\phi_E$, $\\phi_G$, $\\phi_F$\nfor which the conditions mentioned in (12) are satisfied.\\\\\nLet us choose our $\\phi's$ in such a manner that\\\\\n$$\\cos{(\\phi_F + \\phi_G - \\gamma)}= \\cos{(\\phi_D + \\phi_E -\n\\gamma)}=-1$$\nFor these $\\phi's$, equations (8) and (11) respectively read\nas:\\\\\n\\begin{equation}\nq_1 = (\\cos{\\beta}\\cos{\\frac{\\theta_F}{2}}\\cos{\\frac{\\theta_G}{2}}\n- \\sin{\\beta}\\sin{\\frac{\\theta_F}{2}}\\sin{\\frac{\\theta_G}{2}})^2\n\\end{equation}\n\\begin{equation}\nq_4 = (\\cos{\\beta}\\cos{\\frac{\\theta_D}{2}}\\cos{\\frac{\\theta_E}{2}}\n- \\sin{\\beta}\\sin{\\frac{\\theta_D}{2}}\\sin{\\frac{\\theta_E}{2}})^2\n\\end{equation}\nSo\n\\begin{equation}\n\\begin{array}{lcl}\n (q_4 - q_1) =\n \\cos^2{\\beta}(\\cos^2{\\frac{\\theta_D}{2}}\\cos^2{\\frac{\\theta_E}{2}}\n - \\cos^2{\\frac{\\theta_F}{2}}\\cos^2{\\frac{\\theta_G}{2}}) +\n \\sin^2{\\beta}(\\sin^2{\\frac{\\theta_D}{2}}\\sin^2{\\frac{\\theta_E}{2}}\n - \\sin^2{\\frac{\\theta_F}{2}}\\sin^2{\\frac{\\theta_G}{2}})\\\\\n + 2\\sin{\\beta}\\cos{\\beta}(\\cos{\\frac{\\theta_F}{2}}\\cos{\\frac{\\theta_G}{2}}\\sin{\\frac{\\theta_F}{2}}\n \\sin{\\frac{\\theta_G}{2}} - \\cos{\\frac{\\theta_D}{2}}\\cos{\\frac{\\theta_E}{2}}\\sin{\\frac{\\theta_D}{2}}\n \\sin{\\frac{\\theta_E}{2}})\n\\end{array}\n\\end{equation}\nNow we will have to choose at least one set of values of\n$\\theta's$ in such a way that $(q_4 - q_1)$ and $q_1$ are nonzero\nand positive. 
Moreover, these values of $\\theta's$ should also not\nviolate the conditions given in equations $(14)$ and $(16)$.\\\\\nLet us try with $\\frac{\\theta_D}{2} = 0$, {\\it i.e.},\n$$ \\sin{\\frac{\\theta_D}{2}} = 0,~~~\\cos{\\frac{\\theta_D}{2}} = 1$$\nThis makes equation $(14)$ read as\n $$\\sin{\\frac{\\theta_G}{2}} = 0 \\Rightarrow {\\frac{\\theta_G}{2}} =\n 0$$\n Then from equation $(23)$ we get $$ (q_4 - q_1) =\n \\cos^2{\\beta}(\\cos^2{\\frac{\\theta_E}{2}}-\n \\cos^2{\\frac{\\theta_F}{2}})$$\nThus $(q_4 - q_1) > 0$ if\n\\begin{equation}\n\\cos{\\frac{\\theta_E}{2}}> \\cos{\\frac{\\theta_F}{2}}\n\\end{equation}\n Rewriting equation $(16)$ as\n \\begin{equation}\n \\tan{\\frac{\\theta_F}{2}} = \\tan{\\beta}\\tan{\\frac{\\theta_E}{2}}\n\\end{equation}\n we see that values of $\\theta's$ satisfying inequality (24) will not violate\n equation (25) provided $\\tan{\\beta} > 1$.\nNow for these values of $\\theta's$, from equation (21), we get\n$q_1 = (\\cos{\\beta}\\cos{\\frac{\\theta_F}{2}})^2$,\nwhich is greater than zero.\\\\\n So for the above values of $\\theta's$, {\\it i.e.}, for\n$\\frac{\\theta_D}{2} = \\frac{\\theta_G}{2} = 0$ and\n$\\cos{\\frac{\\theta_E}{2}}> \\cos{\\frac{\\theta_F}{2}}$, Cabello's nonlocality\nargument runs for all the\nstates for which $\\tan{\\beta} > 1$.\\\\\nFor other states, {\\it i.e.}, for the states for which $\\tan{\\beta}\n< 1$, let us choose $\\frac{\\theta_D}{2} = \\frac{\\theta_G}{2} =\n\\frac {\\pi}{2}$. 
Then from equation $(23)$ we get $$ (q_4 - q_1) =\n \\sin^2{\\beta}(\\sin^2{\\frac{\\theta_E}{2}}-\n \\sin^2{\\frac{\\theta_F}{2}})$$\n Thus $(q_4 - q_1) > 0$ if\n\\begin{equation}\n\\sin{\\frac{\\theta_E}{2}}> \\sin{\\frac{\\theta_F}{2}}\n\\end{equation}\nOne can easily check that for the above-mentioned values of $\\theta's$,\n$q_1$ is also positive and equation (25) is satisfied too.\n\nThus if we choose $\\frac{\\theta_D}{2} = \\frac{\\theta_G}{2} =\n\\frac{\\pi}{2}$ and $\\sin{\\frac{\\theta_E}{2}}>\n\\sin{\\frac{\\theta_F}{2}}$, then all the states for which\n$\\tan{\\beta} < 1$ satisfy Cabello's nonlocality argument. So for\nevery $\\beta$ (except for $\\beta=\\frac{\\pi}{4}$) we can choose\n$\\theta's$ and $\\phi's$, and hence the observables, in such a way\nthat Cabello's argument\nruns.\\\\\n\\section*{Maximum probability of success}\nTo obtain the maximum probability of success of Cabello's argument\nin contradicting local realism, we have to maximize the\nquantity $(q_4 - q_1)$ for a given $\\beta$ over all observable\nparameters $\\theta's$ and $\\phi's$ under the restrictions given by\nequations $(13)-(16)$. 
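Before carrying out this maximization, the existence claim of the previous section can be cross-checked directly from the state (5), without going through the derived formulas (8)-(11). The numbers below are illustrative choices of ours, not fixed by the text: $\beta = \pi/3$ (so $\tan\beta > 1$), $\gamma = 0$, $\theta_D = \theta_G = 0$, $\theta_E = \pi/3$ with $\theta_F$ fixed by equation (25), and all $\phi$'s set to zero. A sketch:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(theta, phi, eig):
    """Projector onto the eig = +1 or -1 eigenspace of the spin observable
    along n = (sin t cos p, sin t sin p, cos t)."""
    n_dot_sigma = (np.sin(theta) * np.cos(phi) * sx
                   + np.sin(theta) * np.sin(phi) * sy
                   + np.cos(theta) * sz)
    return (I2 + eig * n_dot_sigma) / 2

beta, gamma = np.pi / 3, 0.0        # tan(beta) > 1: a nonmaximally entangled state
psi = np.zeros(4, dtype=complex)    # |psi> = cos(b)|00> + e^{i g} sin(b)|11>
psi[0] = np.cos(beta)
psi[3] = np.exp(1j * gamma) * np.sin(beta)

thD = thG = 0.0                     # theta_D = theta_G = 0, as in the text
thE = np.pi / 3
thF = 2 * np.arctan(np.tan(beta) * np.tan(thE / 2))   # equation (25)

def joint(PA, PB):
    """<psi| P_A (x) P_B |psi> -- a joint outcome probability."""
    return float(np.real(psi.conj() @ (np.kron(PA, PB) @ psi)))

q1 = joint(proj(thF, 0.0, +1), proj(thG, 0.0, +1))   # P(F = +1, G = +1)
q2 = joint(proj(thD, 0.0, +1), proj(thG, 0.0, -1))   # P(D = +1, G = -1)
q3 = joint(proj(thF, 0.0, -1), proj(thE, 0.0, +1))   # P(F = -1, E = +1)
q4 = joint(proj(thD, 0.0, +1), proj(thE, 0.0, +1))   # P(D = +1, E = +1)
```

With these values the computed probabilities are $q_2 = q_3 = 0$, $q_1 = 1/8$ and $q_4 = 3/16$, so $q_4 - q_1 = \cos^2{\beta}\,(\cos^2{\frac{\theta_E}{2}} - \cos^2{\frac{\theta_F}{2}}) = 1/16 > 0$, exactly as the construction requires.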
Using the equations $(13)-(16)$, we have\n\\begin{equation}\n\\begin{array}{lcl}\n (q_4 - q_1) =\n \\cos^2{\\beta}[(k_2 - k_1) + \\tan^2{\\beta}\\tan^2{\\frac{\\theta_D}{2}}\n \\tan^2{\\frac{\\theta_E}{2}} (k_2 - k_1 \\tan^4{\\beta}) + \\\\ 2 \\tan{\\beta}\\tan{\\frac{\\theta_D}{2}}\n \\tan{\\frac{\\theta_E}{2}} (k_2 - k_1 \\tan^2{\\beta})\\cos{(\\phi_D + \\phi_E -\n \\gamma)}]\n\\end{array}\n\\end{equation}\nwhere $$ k_1 = \\frac{1}\n {(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_D}{2}} + 1)(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_E}{2}} +\n 1)},~~~~~ k_2 = \\frac{1}\n {(\\tan^2{\\frac{\\theta_D}{2}} + 1)(\\tan^2{\\frac{\\theta_E}{2}} +\n 1)}. $$\n It is clear from equation $(27)$ that one can obtain the maximum value of $(q_4 - q_1)$\n when $\\cos{(\\phi_D + \\phi_E - \\gamma)}= \\pm 1$.\nLet us first consider $\\cos{(\\phi_D + \\phi_E - \\gamma)}= -1$; then\nfrom equation $(27)$ we have\n\\begin{equation}\n\\begin{array}{lcl}\n (q_4 - q_1) =\n \\cos^2{\\beta}[\\frac{(1- \\tan{\\beta}\\tan{\\frac{\\theta_D}{2}}\\tan{\\frac{\\theta_E}{2}})^2}\n {(\\tan^2{\\frac{\\theta_D}{2}} + 1)(\\tan^2{\\frac{\\theta_E}{2}} +\n 1)} -\n\\frac{(1-\n\\tan^3{\\beta}\\tan{\\frac{\\theta_D}{2}}\\tan{\\frac{\\theta_E}{2}})^2}\n {(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_D}{2}} + 1)(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_E}{2}} +\n 1)} ]\n\\end{array}\n\\end{equation}\nFrom the above equation one can show that $(q_4 - q_1)$ will be\nmaximum when $\\theta_D = \\theta_E$ (see Appendix), which in turn\nimplies $\\theta_G = \\theta_F$, {\\it i.e.}, $(q_4 - q_1)$ becomes\nmaximum when the measurement settings on both sides are the same, as\nwas the case in Hardy's argument. 
Now for the optimal case, {\\it i.e.}, for\n$\\theta_G = \\theta_F$ and $\\theta_D = \\theta_E$, $(q_4 - q_1)$\nbecomes\n\\begin{equation}\n\\begin{array}{lcl}\n (q_4 - q_1) =\n \\cos^2{\\beta}[\\frac{(1- \\tan{\\beta}\\tan^2{\\frac{\\theta_D}{2}})^2}\n {(\\tan^2{\\frac{\\theta_D}{2}} + 1)^2} -\n\\frac{(1- \\tan^3{\\beta}\\tan^2{\\frac{\\theta_D}{2}})^2}\n {(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_D}{2}} + 1)^2} ]\n\\end{array}\n\\end{equation}\nNumerically we have checked that $(q_4 - q_1)$ has a maximum value\nof $0.1078$ when $\\cos{\\beta} = 0.485$ with $\\theta_D = \\theta_E =\n0.59987$. This is interesting, as the maximum probability of success of\nHardy's argument is only $9\\%$, whereas for Cabello's\nargument it is approximately $11\\%$.\\\\ Here we are comparing the\nmaximum probability of success of Hardy's argument with that of\nCabello's argument for all states.\\\\\n\\begin{figure}[hp]\n\\begin{center}\n\\scalebox{0.6}{\\includegraphics{samir1.ps}}\n\\end{center}\n\\caption{Comparison of the maximum probability of success between\nHardy's and Cabello's case}\n\\end{figure}\n\nThe graph shows that for $\\cos{\\beta} \\approx 0.7$, {\\it i.e.}, for\n$\\beta=\\frac{\\pi}{4}$, and for $\\cos{\\beta}=1$, {\\it i.e.}, for\n$\\beta=0$, the maximum of $(q_4 - q_1)$ vanishes. This is as expected,\nbecause these values of $\\beta$ represent respectively the\nmaximally entangled and product states, for which Cabello's\nargument does not run. For most of the other values of $\\beta$,\n{\\it i.e.}, for most of the other entangled states, the maximum\n probability of success of Cabello's argument in establishing their\n nonlocal feature is greater than the maximum probability of success\n of Hardy's argument in doing the same.\\\\\nAs we have mentioned earlier (just before equation (28)),\n$\\cos{(\\phi_D + \\phi_E - \\gamma)}= 1$ also optimizes $(q_4 -\nq_1)$. 
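The numerical maximization just described is straightforward to reproduce. The following is a minimal sketch (our own illustration, not the authors' code): a grid search over beta and theta_D for the expression in equation (29), with grid ranges and resolution chosen arbitrarily by us.

```python
# Grid search for the maximum of (q4 - q1) in equation (29), i.e. the
# optimal case theta_D = theta_E with cos(phi_D + phi_E - gamma) = -1.
# Grid ranges and resolution are our own illustrative choices.
import numpy as np

def q4_minus_q1(beta, theta_d):
    t = np.tan(theta_d / 2.0) ** 2          # tan^2(theta_D / 2)
    tb = np.tan(beta)
    term1 = (1.0 - tb * t) ** 2 / (t + 1.0) ** 2
    term2 = (1.0 - tb ** 3 * t) ** 2 / (tb ** 2 * t + 1.0) ** 2
    return np.cos(beta) ** 2 * (term1 - term2)

betas = np.linspace(0.01, np.pi / 2 - 0.01, 1500)
thetas = np.linspace(0.01, np.pi - 0.01, 1500)
B, T = np.meshgrid(betas, thetas, indexing="ij")
vals = q4_minus_q1(B, T)
i, j = np.unravel_index(np.argmax(vals), vals.shape)
print(round(float(vals[i, j]), 4), round(float(np.cos(betas[i])), 3))
```

On this grid the maximum should come out near 0.1078, consistent with the value quoted above for cos(beta) = 0.485.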
This also gives the same maximum value for $(q_4 - q_1)$ as\nthat given by $\\cos{(\\phi_D + \\phi_E - \\gamma)}= -1$,\nbut for $\\theta_D = -\\theta_E$.\\\\\n\\section*{Conclusion}\n\nIn conclusion, we have shown here that maximally entangled states\ndo not respond even to Cabello's argument, which is a relaxation of,\nand is more general than, Hardy's argument. All other pure\nentangled states respond to Cabello's argument. These states also\nexhibit Hardy-type nonlocality. But interestingly, for most of\nthese nonmaximally entangled states, the fraction of runs in which\nCabello's argument succeeds in demonstrating their nonlocal\nfeature can be made larger than the fraction of runs in which\nHardy's argument succeeds in doing the same. So it seems that, in\nsome sense, for demonstrating the nonlocal features of most of\nthe entangled\nstates, Cabello's argument is a better candidate.\\\\\n{\\bf Appendix.}\\\\ We want to optimize $(q_4 - q_1)$ given in\nequation $(28)$ with respect to $\\theta_D$ and $\\theta_E$ for a\ngiven $\\beta$. 
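Before turning to the analytic conditions, the claim to be proved here can be checked numerically. Below is a minimal sketch (our own, with an arbitrarily chosen sample beta and grid) verifying that the maximum of equation (28) over (theta_D, theta_E) is attained on the diagonal theta_D = theta_E:

```python
# Check that, for a fixed beta, the maximum of (q4 - q1) in equation
# (28) over (theta_D, theta_E) lies on the diagonal theta_D = theta_E.
# The sample beta and the grid are our own illustrative choices.
import numpy as np

def q4_minus_q1(beta, theta_d, theta_e):
    td, te = np.tan(theta_d / 2.0), np.tan(theta_e / 2.0)
    tb = np.tan(beta)
    term1 = (1.0 - tb * td * te) ** 2 / ((td ** 2 + 1.0) * (te ** 2 + 1.0))
    term2 = (1.0 - tb ** 3 * td * te) ** 2 / (
        (tb ** 2 * td ** 2 + 1.0) * (tb ** 2 * te ** 2 + 1.0))
    return np.cos(beta) ** 2 * (term1 - term2)

beta = np.arccos(0.485)                           # sample state
thetas = np.linspace(0.01, np.pi - 0.01, 800)
D, E = np.meshgrid(thetas, thetas, indexing="ij")
full_max = q4_minus_q1(beta, D, E).max()          # max over the whole grid
diag_max = q4_minus_q1(beta, thetas, thetas).max()  # max on the diagonal
assert full_max - diag_max < 1e-4                 # optimum sits on the diagonal
```

The function is symmetric under exchanging theta_D and theta_E, so the assertion says the off-diagonal values never beat the diagonal, up to grid resolution.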
Differentiating equation $(28)$ with respect to\n$\\theta_D$ and equating it to zero, we obtain the following two\nequations\n\\begin{equation}\n(\\tan{\\beta}\\tan{\\frac{\\theta_E}{2}} + \\tan {\\frac{\\theta_D}{2}})=\n0\n\\end{equation}\nand\n\\begin{equation}\n(\\tan{\\beta}\\tan{\\frac{\\theta_E}{2}}\\tan{\\frac{\\theta_D}{2}} -\n1)(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_E}{2}} +\n1)(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_D}{2}} + 1)^2 =\n(\\tan^2{\\beta}\\sec^2{\\frac{\\theta_D}{2}})(\\tan^3{\\beta}\\tan{\\frac{\\theta_E}{2}}\\tan{\\frac{\\theta_D}{2}}\n- 1)(\\sec^2{\\frac{\\theta_E}{2}}\\sec^2{\\frac{\\theta_D}{2}})\n\\end{equation}\n Similarly, differentiating equation $(28)$ with\nrespect to $\\theta_E$ and equating it to zero, we have\n\\begin{equation}\n(\\tan{\\beta}\\tan{\\frac{\\theta_D}{2}} + \\tan {\\frac{\\theta_E}{2}})=\n0\n\\end{equation}\nand\n\\begin{equation}\n(\\tan{\\beta}\\tan{\\frac{\\theta_D}{2}}\\tan{\\frac{\\theta_E}{2}} -\n1)(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_D}{2}} +\n1)(\\tan^2{\\beta}\\tan^2{\\frac{\\theta_E}{2}} + 1)^2 =\n(\\tan^2{\\beta}\\sec^2{\\frac{\\theta_E}{2}})(\\tan^3{\\beta}\\tan{\\frac{\\theta_D}{2}}\\tan{\\frac{\\theta_E}{2}}\n- 1)(\\sec^2{\\frac{\\theta_D}{2}}\\sec^2{\\frac{\\theta_E}{2}})\n\\end{equation}\nAnalyzing the above four conditions, we find that\n$$\\theta_D = \\theta_E$$ gives the optimal solution.\nSimilarly, for $\\cos{(\\phi_D + \\phi_E - \\gamma)}= +1$, we get the\nsame kind of results.\n\\section*{Acknowledgement}\nThe authors would like to thank Guruprasad Kar and Debasis Sarkar for\nuseful discussions. We also thank Swarup Poria for helping us with the\nnumerical calculations. 
S.K. acknowledges support from the Council\nof Scientific and Industrial Research, Government of India, New\nDelhi.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{$\\protect\\bigskip $Introduction}\n\nLet $G$ be a Lie group which acts transitively on some space $M.$ In this\nframework, Felix Klein defined geometry (later known as the \\textit{Erlangen\nProgramm}) as the study of properties which are invariant under the action\nof $G.$ Realizing $M$ as the coset space $G\/H$, where $H\\subset G$ is the\nstabilizer of some point $p\\in M$, we may thus speak of the Klein pair (or\nKlein geometry) $(G,G\/H)$. In Euclidean geometry, the best known example, $G$\nis the isometry group of $\\mathbb{R}^{n}$ and is the semi-direct product of\nthe orthogonal group $O(n)$ and $\\mathbb{R}^{n},$ $H=O(n)$ and $G\/H=\\mathbb{R%\n}^{n}.$ Non-Euclidean geometries, which were known in Klein's time, are\nother examples. Riemannian geometry cannot be realized in this way unless\nthe metric has constant curvature, in which case the isometry group acts\ntransitively and we have again a Klein geometry. We refer the reader to\n[14], pg. 133, so that he\/she can feel the philosophical disturbance created\nby this situation at that time. As an attempt to unify (among others)\nRiemannian geometry and the \\textit{Erlangen Programm}, Elie Cartan introduced\nin 1922 his generalized spaces (principal bundles) which are curved analogs\nof the principal $H$-bundle $G\\rightarrow G\/H.$ We will mention here two\noutstanding books: [29] for Klein geometries, their generalizations called\nCartan geometries in [29], and also for the notions of Cartan and Ehresmann\nconnections, and [14] for more history of this subject (see in particular\npg. 34-42 for the \\textit{Erlangen Programm}). Cartan's approach, which was later\ndeveloped mainly by topologists from the point of view of fiber bundles,\nturned out to be extremely powerful. 
The spectacular achievements of the\ntheory of principal bundles and connections in geometry, topology and\nphysics are well known, and it is therefore redundant to elaborate on them\nhere. However, it is also a fact that this theory leaves us with the\nunpleasant question: what happened to the \\textit{Erlangen Programm}? The main\nreason for this question is that the total space $P$ of the principal bundle \n$P\\rightarrow M$ does not have any group-like structure and therefore does\nnot act on the base manifold $M.$ Thus $P$ emerges in this framework as a\nnew entity whose relation to the geometry of $M$ may not be immediate, and we\nmust now deal with $P$ as a separate problem. Consequently, it seems that\nthe most essential feature of Klein's realization of geometry is given up by\nthis approach. Some mathematicians have already expressed their dissatisfaction\nwith this state of affairs in the literature with varying tones, among which we\nwill mention [29], [35], [26] and other controversial works of the author of\n[26] (see the references in [26]).\n\nThe purpose of this work is to present another such unification, which we by\nno means claim to be the ultimate and correct one, but which we believe is\nfaithful to Klein's original conception of geometry. This unification is\nbased on the ideas which S. Lie and F. Klein communicated to each other\nbefore 1872 (see [14] for the extremely interesting history of this subject)\nand the works of D.C. Spencer and his co-workers around 1970 on the formal\nintegrability of PDEs. The main idea is simple and seems to be the only\npossible one: we concentrate on the action of $G$ on $M=G\/H$ and generalize\nthis action, rather than generalizing the action of $H$ on $G$ in the\nprincipal $H$-bundle $G\\rightarrow G\/H.$ Now $G\\subset Diff(M)$ and $G$ may\nbe far from being a Lie group. 
As the natural generalization of \\textit{%\nErlangen Programm, }we may now deal directly with the group $G$ as in [2]\n(see also [23]), but this approach again does not incorporate Riemannian\ngeometry unless the metric is homogeneous. We consider here the Lie\npseudogroup $\\widetilde{G}$ determined by $G$ and filter the action of $%\n\\widetilde{G}$ on $M$ via its jets, thus realizing $\\widetilde{G}$ as a\nprojective limit $Lim_{\\leftarrow k}$ $\\mathcal{S}_{k}(M)$ of Lie equations $%\n\\mathcal{S}_{k}(M).$ Lie equations (in finite form) are very special\ngroupoids and are very concrete objects which are extensively studied by\nSpencer and his co-workers culminating in the difficult work [11]. We will\nrefer to [11], [25], [26] for Lie equations and [19], [20] for\ndifferentiable groupoids and algebroids. On the infinitesimal level, we\nobtain the approximation $Lim_{\\leftarrow k}$ $\\frak{s}_{k}(M)$ where $\\frak{%\ns}_{k}(M)$ is the infinitesimal Lie equation (or the algebroid) of $\\mathcal{%\nS}_{k}(M).$ The idea is now to start with the expression $Lim_{\\leftarrow k}$\n$\\mathcal{S}_{k}(M)$ as our definition of homogeneous geometry $\\mathcal{S}%\n_{\\infty }(M)$ (Section 3, Definition 2). Any transitive pseudogroup (in\nparticular a complex, symplectic or contact structure) determines a\nhomogeneous geometry and Klein geometries are special cases (Section 4).\nSome $\\mathcal{S}_{k}(M)$ may not prolong to a homogeneous geometry due to\nthe lack of formal integrability and Riemannian geometry (almost complex,\nalmost symplectic...structures) emerge as truncated geometries (Section 5).\nWe associate various spectral sequences to a homogeneous geometry $\\mathcal{S%\n}_{\\infty }(M)$ (in particular to a truncated geometry $\\mathcal{S}_{k}(M))$\n(Sections 2, 3). 
For a complex structure, we believe that one of these spectral\nsequences is directly related to the Fr\\\"{o}licher spectral sequence, which\nconverges to de Rham cohomology with $E_{1}$ term equal to Dolbeault\ncohomology. This unification is also a natural specialization of the\nstandard principal bundle approach initiated by E. Cartan (Sections 6, 7, 8).\n\nThe idea of filtering an object via jets (for instance, the solution space\nof some PDE, called a diffiety in [32], [33]) is not new and was used by\nA.M. Vinogradov in 1978 in the construction of his $\\mathcal{C}$-spectral\nsequence and in the variational bicomplex approach to Euler-Lagrange\nequations (see [32], [33] and the references therein). In fact, this paper\ncan also be considered as the first step towards the realization of a\nprogram stated in [33] for quasi-homogeneous geometries ([33], Section 6.4).\nFurther, there is definitely a relation between the higher order de Rham\ncomplexes constructed here and those in [31]. We also believe that the main\nidea of this paper, though it may not have been stated as explicitly as in\nthis paper, is contained in [26] and traces back to [11] and [25]. In\nparticular, we would like to emphasize that all the ingredients of this\nunification are known and exist in the literature.\n\nThis paper consists of nine sections. Section 2 contains the technical core\nin terms of which the geometric concepts substantiate. This section may be\nsomewhat demanding for the reader who is not very familiar with jets and the\nformalism of the Spencer operator. 
However, as we proceed, technical points\nslowly evaporate and the main geometric concepts which we are all familiar\nwith start to take center stage.\n\n\\section{Universal homogeneous envelope}\n\nLet $M$ be a differentiable manifold and $Diff(M)$ be the group of\ndiffeomorphisms of $M.$ Consider the map $Diff(M)\\times M\\rightarrow M$\ndefined by $(g,x)\\rightarrow g(x)=y$ and let $j_{k}(g)_{y}^{x}$ denote the $%\nk $-jet of $g$ with source at $x$ and target at $y.$ By choosing coordinates \n$(U,x^{i})$ and $(V,y^{i})$ around the points $x,y,$ we may think of $%\nj_{k}(g)_{y}^{x}$ as the coefficients of the Taylor expansion of $g(x)=y$ up\nto order $k.$ We will call $j_{k}(g)_{y}^{x}$ the $k$-arrow induced by $g$\nwith source at $x$ and target at $y$ and imagine $j_{k}(g)_{y}^{x}$ as an\narrow starting at $x$ and ending at $y.$ Let $(f_{k})_{y}^{x}$ denote \n\\textit{any }$k$-arrow, i.e., $(f_{k})_{y}^{x}$ is the $k$-jet induced by\nsome arbitrary local diffeomorphism which maps $x$ to $y$. With some\nassumptions on orientability which involve only 1-jets (see [24] for\ndetails), there exists some $g\\in Diff(M)$ with $%\nj_{k}(g)_{y}^{x}=(f_{k})_{y}^{x}.$ Therefore, with the assumption imposed by\n[24], the pseudogroup $\\widetilde{Diff(M)}$ of local diffeomorphisms on $M$\ninduces the same $k$-arrows as $Diff(M),$ but this fact will not be used in\nthis paper. We can compose successive $k$-arrows and invert all $k$-arrows.\n\nNow let $(\\mathcal{G}_{k})_{y}^{x}$ denote the set of all $k$-arrows with\nsource at $x$ and target at $y.$ We define $\\mathcal{G}_{k}(M)\\doteq \\cup\n_{x,y\\in M}(\\mathcal{G}_{k})_{y}^{x}$ and obtain the projections $\\pi _{k,m}:%\n\\mathcal{G}_{k}(M)\\rightarrow \\mathcal{G}_{m}(M),$ $1\\leq m\\leq k-1,$ and $%\n\\pi _{k,m}$ is compatible with composition and inversion of arrows. 
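In coordinates, the composition of $k$-arrows is simply the chain rule truncated at order $k$. The following sketch (our own illustration, for $M=\mathbb{R}$ and $k=2$) composes two 2-arrows and checks the result against the 2-jet of the actual composite map:

```python
# A 2-arrow from x to y on M = R is the data (y, f'(x), f''(x)) of some
# local diffeomorphism f with f(x) = y.  Composition is the second-order
# chain rule; we verify it symbolically.  Illustration only.
import sympy as sp

x = sp.symbols('x')

def two_arrow(f, src):
    """2-jet of f at src: (target, first derivative, second derivative)."""
    return (f.subs(x, src), sp.diff(f, x).subs(x, src),
            sp.diff(f, x, 2).subs(x, src))

def compose(g_arrow, f_arrow):
    """2-arrow of g o f, given the 2-arrow of g at f's target and of f."""
    y, f1, f2 = f_arrow
    z, g1, g2 = g_arrow
    return (z, g1 * f1, g2 * f1 ** 2 + g1 * f2)   # chain rule to order 2

f, g = sp.exp(x), x**3 + x
p = sp.Rational(1, 2)
Ff = two_arrow(f, p)                       # 2-arrow of f at p
Gg = two_arrow(g, f.subs(x, p))            # 2-arrow of g at f(p)
direct = two_arrow(g.subs(x, f), p)        # 2-arrow of g o f at p
assert all(sp.simplify(a - b) == 0 for a, b in zip(compose(Gg, Ff), direct))
```

Inversion of a 2-arrow works the same way, by inverting the truncated Taylor data.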
We will\ndenote all projections arising from the projection of jets by the same\nnotation $\\pi _{k,m}.$ Now $\\mathcal{G}_{k}(M)$ is a transitive Lie equation\nin finite form $(TLEFF)$ on $M$ which is a very special groupoid (see [11],\n[25], [26] for Lie equations and [19], [20] for groupoids). We also have the\nlocally trivial map $\\mathcal{G}_{k}(M)\\rightarrow M\\times M$ which maps $(%\n\\mathcal{G}_{k})_{y}^{x}$ to $(x,y).$ Note that $(\\mathcal{G}_{k})_{x}^{x}$\nis a Lie group and can be identified (not in a canonical way) with $k^{th}$%\n-order jet group. Thus we obtain the sequence of \\ homomorphisms\n\n\\begin{equation}\n......\\longrightarrow \\mathcal{G}_{k+1}(M)\\longrightarrow \\mathcal{G}%\n_{k}(M)\\longrightarrow .....\\longrightarrow \\mathcal{G}_{1}(M)%\n\\longrightarrow M\\times M\\longrightarrow 1\n\\end{equation}\n\nwhere the last arrow is used with no algebraic meaning but to express\nsurjectivity. (1) gives the vague formula $Diff(M)\\times M=Lim_{k\\rightarrow\n\\infty }\\mathcal{G}_{k}(M)$ or more precisely $\\widetilde{Diff(M)}%\n=Lim_{k\\rightarrow \\infty }\\mathcal{G}_{k}(M).$ The ambiguity in this last\nformula is that a formal Taylor expansion may fail to recapture a local\ndiffeomorphism. However, this ambiguity is immaterial for our purpose for\nthe following reason: Let $(j_{\\infty }g)_{x}^{x}$ denote the $\\infty $-jet\nof some local diffeomorphism $g$ where $g(x)=x.$ Now $(j_{\\infty }g)_{x}^{x}$\ndetermines $g$ modulo the $\\infty $-jet of the identity diffeomorphism. 
This\nis a consequence of the following elementary but remarkable fact: For \n\\textit{any }sequence of real numbers $a_{0},a_{1},....$, there exists a\nreal valued differentiable function $f$ defined, say, near the origin $o\\in \n\\mathbb{R}$, satisfying $f^{(n)}(o)=a_{n}.$ In particular, the same\ninterpretation is valid for the $\\infty $-jets of all objects to be defined\nbelow.\n\nSince $\\mathcal{G}_{k}(M)$ is a differentiable groupoid (we will call the\nobject called a Lie groupoid in [19], [20] a differentiable groupoid,\nreserving the term ``Lie'' for Lie equations), it has an algebroid $\\frak{g}%\n_{k}(M)$ which can be constructed using jets only. To do this, we start by\nletting $J_{k}(T(M))_{p}$ denote the vector space of $k$-jets of vector\nfields at $p\\in M$ where $T(M)\\rightarrow M$ is the tangent bundle of $M.$\nAn element of $J_{k}(T(M))_{p}$ is of the form $(p,\\xi ^{i}(p),\\xi\n_{j_{1}}^{i}(p),\\xi _{j_{2}j_{1}}^{i}(p),....,\\xi\n_{j_{k}j_{k-1}....j_{1}}^{i}(p))$ in some coordinates $(U,x^{i})$ around $p.$\nIf $X=(\\xi ^{i}(x))$, $Y=(\\eta ^{i}(x))$ are two vector fields on $U,$\ndifferentiating the usual bracket formula $[X,Y](x)=\\xi ^{a}(x)\\partial\n_{a}\\eta ^{i}(x)-\\eta ^{a}(x)\\partial _{a}\\xi ^{i}(x)$ successively $k$%\n-times and evaluating at $p$, we obtain the \\textit{algebraic bracket }$\\{$ $%\n,$ $\\}_{k,p}:$ $J_{k}(T(M))_{p}$ $\\times J_{k}(T(M))_{p}\\rightarrow\nJ_{k-1}(T(M))_{p}.$ Note that this bracket does \\textit{not} endow $%\nJ_{k}(T(M))_{p}$ with a Lie algebra structure. However, for $k=\\infty ,$ $%\nJ_{\\infty }(T(M))_{p}$ is a graded Lie algebra with the bracket $\\{$ $,$ $%\n\\}_{\\infty ,p},$ and is the well known Lie algebra of formal vector fields\nwhich is extensively studied in literature ([10]). However, let $%\nJ_{k,0}(T(M))_{p}$ be the kernel of $\\ J_{k}(T(M))_{p}\\rightarrow\nJ_{0}(T(M))_{p}=T(M)_{p}$. 
Now $J_{k,0}(T(M))_{p}$ \\textit{is} a Lie algebra\nwith the bracket $\\{$ $,$ $\\}_{k,p}$ which is in fact the Lie algebra of $%\n\\mathcal{G}_{k}(M)_{p}^{p}.$\n\nWe now define the vector bundle $J_{k}(T(M))\\doteq \\cup _{x\\in\nM}J_{k}(T(M))_{x}\\rightarrow M.$ We will denote a section of $%\nJ_{k}(T(M))\\rightarrow M$ by $\\overset{(k)}{X}.$ To simplify our notation,\nwe will use the same notation $E$ for both the total space $E$ of a vector\nbundle $E\\rightarrow M$ and also for the space $\\Gamma E$ of global sections\nof $E\\rightarrow M.$ In a coordinate system $(U,x^{i}),$ $\\overset{(k)}{X}$\nis of the form $\\overset{(k)}{X}(x)=(x,\\xi ^{i}(x),\\xi _{j_{1}}^{i}(x),\\xi\n_{j_{2}j_{1}}^{i}(x),....,\\xi _{j_{k}j_{k-1}....j_{1}}^{i}(x)),$ but we may\nnot have $\\xi _{j_{m}j_{m-1}....j_{1}}^{i}(x)=\\frac{\\partial \\xi\n_{j_{m-1}....j_{1}}^{i}}{\\partial x^{j_{m}}}(x)$ $,$ $1\\leq m\\leq k.$ We can\nthink $\\overset{(k)}{X}$ also as the function $\\overset{(k)}{X}(x,y)=\\frac{1%\n}{\\alpha !}\\xi _{\\alpha }^{i}(x)(y-x)^{\\alpha }$ where $\\alpha $ is a\nmulti-index with $\\left| \\alpha \\right| \\leq k$ and we used summation\nconvention. 
For some $\\overline{x}\\in U,$ $\\overset{(k)}{X}(\\overline{x},y)$\nis some Taylor polynomial which is \\textit{not }necessarily the Taylor\npolynomial of $\\ \\xi ^{i}(x)$ at $x=\\overline{x}$ since we may \\textit{not }%\nhave $\\xi _{\\alpha +j}^{i}(\\overline{x})=\\frac{\\partial \\xi _{\\alpha }^{i}}{%\n\\partial x^{j}}(\\overline{x}),$ $\\left| \\alpha \\right| \\leq k.$ Note that we\nhave the bundle projections $\\pi _{k,m}:J_{k}(T(M))\\rightarrow J_{m}(T(M))$\nfor $0\\leq m\\leq k-1,$ where $J_{0}(T(M))\\doteq T(M).$ We will denote $%\nJ_{k}(T(M))$ by $\\frak{g}_{k}(M)$ for the reason which will be clear below.\n\nWe now have the Spencer bracket $[$ $,$ $]$ defined on $\\frak{g}_{k}(M)$ by\nthe formula\n\n\\begin{equation}\n\\lbrack \\overset{(k)}{X},\\overset{(k)}{Y}]=\\{\\overset{(k+1)}{X},\\overset{%\n(k+1)}{Y}\\}+i(\\overset{(0)}{X})D\\overset{(k+1)}{Y}-i(\\overset{(0)}{Y})D%\n\\overset{(k+1)}{X}\\qquad k\\geq 0\n\\end{equation}\n\nIn (2), $\\overset{(k+1)}{X}$ and $\\overset{(k+1)}{Y}$are arbitrary lifts of $%\n\\overset{(k)}{X}$ and $\\overset{(k)}{Y}$, $\\{$ $,$ $\\}:\\frak{g}%\n_{k+1}(M)\\times \\frak{g}_{k+1}(M)\\rightarrow \\frak{g}_{k}(M)$ is the\nalgebraic bracket defined pointwise by $\\{\\overset{(k+1)}{X},\\overset{(k+1)}{%\nY}\\}(p)\\doteq \\{\\overset{(k+1)}{X}(p),$ $\\overset{(k+1)}{Y}(p)\\}_{k+1,p}$\nand $D$ $:\\frak{g}_{k+1}(M)\\rightarrow T^{\\ast }\\otimes \\frak{g}_{k}(M)$ is\nthe Spencer operator given locally by the formula $(x,\\xi ^{i}(x),\\xi\n_{j_{1}}^{i}(x),\\xi _{j_{2}j_{1}}^{i}(x),....,\\xi\n_{j_{k+1}j_{k}....j_{1}}^{i}(x))\\rightarrow (x,\\frac{\\partial \\xi ^{i}}{%\n\\partial x^{j_{1}}}-\\xi _{j_{1}}^{i}(x),$ $.....,\\frac{\\partial \\xi\n_{j_{k}....j_{1}}^{i}(x)}{\\partial x^{j_{k+1}}}-$ $\\xi\n_{j_{k+1}j_{k}....j_{1}}^{i}(x)).$ Finally, the vector fields $\\overset{(0)}{%\nX}$ and $\\overset{(0)}{Y}$ are the projections of $\\overset{(k)}{X}$ and $%\n\\overset{(k)}{Y}$ and $i(\\overset{(0)}{X})$ denotes the interior 
product\nwith respect to the vector field $\\overset{(0)}{X}.$ It turns out that RHS\nof (2) does not depend on the lifts $\\overset{(k+1)}{X},$ $\\overset{(k+1)}{Y}%\n.$ The bracket $\\ [$ $,$ $]$ satisfies Jacobi identity. We will refer to\n[25], [26] for further details. In view of the local formulas for $\\{$ $,$ $%\n\\}_{k,p}$ and $D$, it is elementary to make local computations using (2)\nwhich however become formidable starting already with $k=3$. It is easy to\ncheck that (2) gives the usual bracket formula for vector fields for $k=0.$\nIn fact, letting $\\mathcal{X}(M)$ denote the Lie algebra of vector fields on \n$M$, we have the prolongation map $j_{k}:\\mathcal{X}(M)\\rightarrow \\frak{g}%\n_{k}(M)$ defined by $(x,\\xi ^{i}(x))\\rightarrow (x,\\xi ^{i}(x),\\partial\n_{j_{1}}\\xi ^{i}(x),\\partial _{j_{2}j_{1}}\\xi ^{i}(x),....,\\partial\n_{j_{k+1}j_{k}....j_{1}}\\xi ^{i}(x))$ which satisfies $j_{k}\\overset{(0)}{[X}%\n,$ $\\overset{(0)}{Y}]=[j_{k}\\overset{(0)}{X},$ $j_{k}\\overset{(0)}{Y}].$\nThus (2) gives the usual bracket and its derivatives when restricted to $%\nj_{k}(\\mathcal{X}(M)).$\n\nNow $\\frak{g}_{k}(M)$ is the transitive Lie equation in infinitesimal form $%\n(TLEIF)$ determined by $\\mathcal{G}_{k}(M).$ If we regard $\\mathcal{G}%\n_{k}(M) $ as a differentiable groupoid and construct its algebroid as in\n[19], [20], we end up with $\\frak{g}_{k}(M),$ justifying our notation $\\frak{%\ng}_{k}(M)$ for $J_{k}(T(M)).$ The projection $\\pi _{k,m}:\\frak{g}%\n_{k}(M)\\rightarrow \\frak{g}_{m}(M)$ respects the bracket, i.e., it is a\nhomomorphism of $TLEIF^{\\prime }$s.\n\nIn this way we arrive at the infinitesimal analog of (1):\n\n\\begin{equation}\n......\\longrightarrow \\frak{g}_{k+1}(M)\\longrightarrow \\frak{g}%\n_{k}(M)\\longrightarrow ......\\longrightarrow \\frak{g}_{1}(M)\\longrightarrow \n\\frak{g}_{0}(M)\\longrightarrow 0\n\\end{equation}\n\nProceeding formally, the formula $\\widetilde{Diff(M)}=Lim_{k\\rightarrow\n\\infty 
}\\mathcal{G}_{k}(M)$ now gives $\\mathcal{A}\\widetilde{Diff(M)}%\n=Lim_{k\\rightarrow \\infty }\\frak{g}_{k}(M)$ where $\\mathcal{A}$ stands for\nthe functor which assigns to a groupoid its algebroid. However note that $%\n\\widetilde{Diff(M)}$ is not a groupoid but rather a pseudogroup. Since a\nvector field integrates to some 1-parameter group of local diffeomorphisms\n(no condition on vector fields and diffeomorphisms since we have not imposed\na geometric structure yet), we naturally expect $\\mathcal{A}\\widetilde{%\nDiff(M)}=J_{\\infty }(T(M))$. As above, the vagueness in this formula is that\na vector field need not be locally determined by the Taylor expansion of its\ncoefficients at some point.\n\nWe now define the vector space $J_{k}(M)_{x}\\doteq \\{j_{k}(f)_{x}\\mid f\\in\nC^{\\infty }(M)\\}$ where $C^{\\infty }(M)$ denotes the set of smooth functions\non $M$ and $j_{k}(f)_{x}$ denotes the $k$-jet of $f$ at $x\\in M.$ Now $%\nJ_{k}(M)_{x}$ is a commutative algebra over $\\mathbb{R}$ with the\nmultiplication $\\bullet $ defined by $j_{k}(f)_{x}\\bullet j_{k}(g)_{x}\\doteq\nj_{k}(fg)_{x}.$ We define the vector bundle $J_{k}(M)\\doteq $ $\\cup _{x\\in\nM}J_{k}(M)_{x}\\rightarrow M$ with the obvious differentiable structure and\nprojection map. The vector space of global sections of $J_{k}(M)\\rightarrow\nM $ is a commutative algebra with the fiberwise defined operations. 
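As a concrete sanity check of the bracket (2) introduced above, consider the simplest case $M=\mathbb{R}$, $k=1$, on holonomic sections: the Spencer operator vanishes on holonomic lifts, so (2) reduces to the algebraic bracket, and the result must be $j_{1}[X,Y]$. The component bookkeeping below is our own illustration:

```python
# Check of formula (2) for M = R, k = 1 on holonomic sections: the
# Spencer bracket of j_1 X and j_1 Y equals j_1 of the usual bracket.
# A 1-jet is stored as (component, first derivative); holonomic lifts
# to 2-jets kill the Spencer operator D, leaving { , }_1 alone.
import sympy as sp

x = sp.symbols('x')

def spencer_bracket_k1(xi, eta):
    # { , }_1: differentiate [X,Y] = xi*eta' - eta*xi' once and
    # substitute the (here holonomic) jet components
    b0 = xi * sp.diff(eta, x) - eta * sp.diff(xi, x)
    b1 = xi * sp.diff(eta, x, 2) - eta * sp.diff(xi, x, 2)
    return sp.simplify(b0), sp.simplify(b1)

xi, eta = x**2, sp.sin(x)                  # X = x^2 d/dx, Y = sin(x) d/dx
b0, b1 = spencer_bracket_k1(xi, eta)
usual = xi * sp.diff(eta, x) - eta * sp.diff(xi, x)   # [X, Y]
assert sp.simplify(b0 - usual) == 0                   # 0-jet part
assert sp.simplify(b1 - sp.diff(usual, x)) == 0       # 1-jet part
```

For non-holonomic sections the two Spencer-operator terms in (2) no longer vanish, which is exactly where the bracket differs from a pointwise algebraic operation.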
We have\nthe projection homomorphism $\\pi _{k,m}:J_{k}(M)\\rightarrow J_{m}(M).$ We\nwill denote an element of $J_{k}(M)$ by $\\overset{(k)}{f}$ which is locally\nof the form $%\n(x,f(x),f_{i_{1}}(x),f_{i_{2}i_{1}}(x),....,f_{i_{k}....i_{1}}(x))=(f_{%\n\\alpha }(x)),$ $\\left| \\alpha \\right| \\leq k.$\n\nNow let $\\overset{(k)}{X}\\in \\frak{g}_{k}(M)$ and $\\overset{(k)}{f}\\in\nJ_{k}(M).$ We define $\\overset{(k)}{X}\\overset{(k)}{f}\\in J_{k}(M)$ by\n\n\\begin{equation}\n\\overset{(k)}{X}\\overset{(k)}{f}\\doteq \\overset{(k)}{X}\\ast \\overset{(k+1)}{f%\n}+i(\\overset{(0)}{X})D\\overset{(k+1)}{f}\n\\end{equation}\n\nIn (4), $\\ast :$ $\\frak{g}_{k}(M)\\times J_{k+1}(M)\\rightarrow J_{k}(M)$ is\nthe algebraic action of $\\frak{g}_{k}(M)$ on $J_{k+1}(M)$ whose local\nformula is obtained by differentiating the standard formula $\\overset{(0)}{X}%\nf=\\xi ^{a}\\partial _{a}f$ successively $k$-times and substituting jets$,$ $%\nD: $ $J_{k+1}(M)\\rightarrow T^{\\ast }\\otimes J_{k}(M)$ is the Spencer\noperator defined by $%\n(x,f(x),f_{i_{1}}(x),f_{i_{2}i_{1}}(x),....,f_{i_{k}....i_{1}}(x))$\n\n$\\rightarrow (x,\\partial _{i_{1}}f(x)-f_{i_{1}}(x),\\partial\n_{i_{2}}f_{i_{1}}(x)-f_{i_{2}i_{1}}(x),....,\\partial\n_{i_{k}}f_{i_{k-1}....i_{1}}(x)-f_{i_{k}....i_{1}}(x))$ and $\\overset{(k+1)}{f}$ is some lift of $\\overset{(k)}{f}.$ The RHS of (4) does not depend on the\nlift $\\overset{(k+1)}{f}$. It is easy to check that $\\overset{(0)}{X}\\overset{%\n(0)}{f}=\\xi ^{a}\\partial _{a}f.$ Like (2), (4) is compatible with\nprojection, i.e., we have $\\pi _{k,m}(\\overset{(k)}{X}\\overset{(k)}{f})=$ $%\n(\\pi _{k,m}\\overset{(k)}{X})($ $\\pi _{k,m}\\overset{(k)}{f}),$ $0\\leq m\\leq\nk. 
$ Since $J_{k}(M)_{x}$ is a vector space over $\\mathbb{R}$, $J_{k}(M)$ is\na module over $C^{\\infty }(M).$ We will call some $\\overset{(k)}{f}%\n=(f_{\\alpha })\\in J_{k}(M)$ a smooth function if $f_{\\alpha }(x)=0$ for $%\n1\\leq \\left| \\alpha \\right| \\leq k.$ This definition does not depend on\ncoordinates. Thus we have an injection $C^{\\infty }(M)\\rightarrow J_{k}(M)$\nof algebras. If $\\overset{(k)}{f}$ $\\in C^{\\infty }(M),$ then $\\overset{(k)}{%\nf}\\bullet \\overset{(k)}{g}=(\\pi _{k,0}\\overset{(k)}{f})\\overset{(k)}{g}%\n\\doteq \\overset{(0)}{f}\\overset{(k)}{g}.$ Similar considerations apply to $%\nJ_{k}(T(M)).$\n\nWe thus obtain the $k^{th}$-order analogs of the well known formulas:\n\n\\begin{equation}\n\\lbrack \\overset{(k)}{X},\\overset{(k)}{Y}]\\overset{(k)}{f}=\\overset{(k)}{X}(%\n\\overset{(k)}{Y}\\overset{(k)}{f})-\\overset{(k)}{Y}(\\overset{(k)}{X}\\overset{%\n(k)}{f})\n\\end{equation}\n\n\\begin{equation}\n\\overset{(k)}{X}(\\overset{(k)}{f}\\bullet \\overset{(k)}{g})=(\\overset{(k)}{X}%\n\\overset{(k)}{f})\\bullet \\overset{(k)}{g}+\\overset{(k)}{f}\\bullet (\\overset{%\n(k)}{X}\\overset{(k)}{g})\n\\end{equation}\n\nIn particular, (6) gives\n\n\\begin{equation}\n\\overset{(k)}{X}(\\overset{(0)}{f}\\overset{(k)}{g})=(\\overset{(0)}{X}\\overset{%\n(0)}{f})\\overset{(k)}{g}+\\overset{(0)}{f}(\\overset{(k)}{X}\\overset{(k)}{g})\n\\end{equation}\n\nwhere $\\overset{(0)}{X}=\\pi _{k,0}\\overset{(k)}{X}$. In the language of\n[19], [20], (5) and (7) define a representation of the algebroid $\\frak{g}%\n_{k}(M)$ on the vector bundle $J_{k}(M)\\rightarrow M$ (see also [25], pg.362\nand [11], III, pg. 419). 
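A similarly minimal check of the action (4) and the Leibniz rule (6), again our own illustration for $M=\mathbb{R}$, $k=1$ with holonomic jets (where the $D$-term in (4) drops out):

```python
# For M = R, k = 1 and holonomic jets, (4) reduces to the algebraic
# action alone: j_1 X acting on j_1 f must give j_1(Xf).  We also
# verify the Leibniz rule (6) for the jet product.  Sketch only.
import sympy as sp

x = sp.symbols('x')

def jet1(h):                         # 1-jet of a function: (h, h')
    return (h, sp.diff(h, x))

def jmul(a, b):                      # the bullet product on 1-jets
    return (a[0] * b[0], a[1] * b[0] + a[0] * b[1])

def act(xi, fj):
    """j_1 X acting on the 1-jet fj via (4), holonomic case."""
    f0, f1 = fj
    return (xi * f1, sp.diff(xi, x) * f1 + xi * sp.diff(f1, x))

xi = sp.cos(x)                       # X = cos(x) d/dx
f, g = x**2, sp.exp(x)
# the action agrees with the 1-jet of the ordinary derivative Xf:
assert all(sp.simplify(a - b) == 0
           for a, b in zip(act(xi, jet1(f)), jet1(xi * sp.diff(f, x))))
# Leibniz rule (6): X(f.g) = (Xf).g + f.(Xg), with . the jet product
lhs = act(xi, jet1(f * g))
rhs = tuple(p + q for p, q in zip(jmul(act(xi, jet1(f)), jet1(g)),
                                  jmul(jet1(f), act(xi, jet1(g)))))
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
```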
All constructions of this paper work also for other\nsuch ``intrinsic'' representations of $\\frak{g}_{k}(M).$ The passage from\nsuch intrinsic representations to general representations, we believe, is\nquite crucial for the \\textit{Erlangen Programm} and will be touched upon at the end\nof Section 4 and in $5)$ of Section 9.\n\nNow let $\\overset{[k,r]}{\\wedge }(M)_{x}$ denote the vector space of $r$%\n-linear and alternating maps $\\frak{g}_{k}(M)_{x}\\times $ $.....\\times \\frak{%\ng}_{k}(M)_{x}\\rightarrow J_{k}(M)_{x}$ where we assume $r\\geq 1,$ $k\\geq 0.$\nWe define the vector bundle $\\overset{[k,r]}{\\wedge }(M)\\doteq \\cup _{x\\in M}%\n\\overset{[k,r]}{\\wedge }(M)_{x}\\rightarrow M.$ If $\\overset{[k,r]}{\\omega }$ \n$\\in \\overset{\\lbrack k,r]}{\\wedge }(M)$ $(=\\Gamma \\overset{\\lbrack k,r]}{%\n\\wedge }(M))$ and $\\overset{(k)}{X}_{1},....,\\overset{(k)}{X}_{r}\\in \\frak{g}%\n_{k}(M),$ then $\\overset{[k,r]}{\\omega }(\\overset{(k)}{X}_{1},....,\\overset{%\n(k)}{X}_{r})\\in J_{k}(M)$ is defined by $(\\overset{[k,r]}{\\omega }(\\overset{%\n(k)}{X}_{1},....,\\overset{(k)}{X}_{r}))(x)\\doteq \\overset{\\lbrack k,r]}{%\n\\omega }(x)(\\overset{(k)}{X}_{1}(x),....,$ $\\overset{(k)}{X}_{r}(x))\\in\nJ_{k}(M)_{x}.$ We define $d\\overset{[k,r]}{\\omega }$ by the standard\nformula: $(d\\overset{[k,r]}{\\omega })(\\overset{(k)}{X}_{1},....,\\overset{(k)%\n}{X}_{r+1})=$\n\n\\begin{eqnarray}\n&&\\frac{1}{r+1}\\sum_{1\\leq i\\leq r+1}(-1)^{i+1}\\overset{(k)}{X}_{i}\\overset{%\n[k,r]}{\\omega }(\\overset{(k)}{X}_{1},....,\\overset{(i)}{\\parallel },....,%\n\\overset{(k)}{X}_{r+1}) \\\\\n&&+\\frac{1}{r+1}\\sum_{1\\leq i<j\\leq r+1}(-1)^{i+j}\\overset{[k,r]}{\\omega }([\\overset{%\n(k)}{X}_{i},\\overset{(k)}{X}_{j}],...,\\overset{(i)}{\\parallel },...,\\overset{%\n(j)}{\\parallel },...,\\overset{(k)}{X}_{r+1}) \\notag\n\\end{eqnarray}\n\nWe also define $\\overset{[k,0]}{\\wedge }(M)\\doteq J_{k}(M)$ and $d:\\overset{%\n[k,0]}{\\wedge }(M)\\rightarrow \\overset{\\lbrack 
k,1]}{\\wedge }(M)$ by $(d%\n\\overset{(k)}{f})\\overset{(k)}{X}\\doteq \\overset{(k)}{X}\\overset{(k)}{f}.$\nWe have $d:\\overset{[k,r]}{\\wedge }(M)\\rightarrow \\overset{\\lbrack k,r+1]}{%\n\\wedge }(M)$: this follows from (6) or can be checked directly as in [9],\npg. 489, using $\\overset{(k)}{f}\\bullet \\overset{(k)}{g}=\\overset{(0)}{f}%\n\\overset{(k)}{g}$ if $\\overset{(k)}{f}\\in C^{\\infty }(M).$ In view of the\nJacobi identity and the alternating character of $\\overset{[k,r]}{\\omega }$,\nthe standard computation shows $d^{2}=0.$ Thus we obtain the complex\n\n\\begin{equation}\n\\overset{\\lbrack k,0]}{\\wedge }(M)\\longrightarrow \\overset{\\lbrack k,1]}{%\n\\wedge }(M)\\longrightarrow \\overset{\\lbrack k,2]}{\\wedge }(M)\\longrightarrow\n.....\\longrightarrow \\overset{\\lbrack k,n]}{\\wedge }(M)\\qquad k\\geq 0\n\\end{equation}\n\nFor $k=0,$ (9) gives de Rham complex.\n\nWe now assume $r\\geq 1$ and define the subspace $\\overset{(k,r)}{\\wedge }%\n(M)_{x}\\subset \\overset{\\lbrack k,r]}{\\wedge }(M)_{x}$ by the following\ncondition: $\\overset{[k,r]}{\\omega }_{x}\\in \\overset{\\lbrack k,r]}{\\wedge }%\n(M)_{x}$ belongs to $\\overset{(k,r)}{\\wedge }(M)_{x}$ iff $\\pi _{k,m}%\n\\overset{[k,r]}{\\omega }(\\overset{(k)}{X}_{1},$\n\n$....,$ $\\overset{(k)}{X}_{r})(x)\\in J_{m}(M)_{x}$ depends on $\\overset{(m)}{%\nX}_{1}(x),....,$ $\\overset{(m)}{X}_{r}(x),$ $m\\leq k.$ This condition holds\nvacuously for $k=0.$ Thus we obtain the projection $\\pi _{k,m}:\\overset{(k,r)%\n}{\\wedge }(M)_{x}\\rightarrow \\overset{(m,r)}{\\wedge }(M)_{x}.$ We define the\nvector bundle $\\overset{(k,r)}{\\wedge }(M)\\doteq \\cup _{x\\in M}\\overset{(k,r)%\n}{\\wedge }(M)_{x}\\rightarrow M$ and set $\\overset{(k,0)}{\\wedge }=\\overset{%\n[k,0]}{\\wedge }=J_{k}(M).$\n\n\\begin{definition}\nAn exterior $(k,r)$-form $\\overset{(k,r)}{\\omega }$ on $M$ is a smooth\nsection of the vector bundle $\\overset{(k,r)}{\\wedge }(M)\\rightarrow M.$\n\\end{definition}\n\nAn 
explicit description of $\\overset{(k,r)}{\\omega }$ in local coordinates\nis not without interest but will not be done here. Applying $\\pi _{k,m}$ to\n(8), we deduce $d:\\overset{(k,r)}{\\wedge }(M)\\rightarrow \\overset{(k,r+1)}{%\n\\wedge }(M)$ and the commutative diagram\n\n\\begin{equation}\n\\begin{array}{ccc}\n\\overset{(k+1,r)}{\\wedge } & \\overset{d}{\\longrightarrow } & \\overset{%\n(k+1,r+1)}{\\wedge } \\\\ \n\\downarrow _{\\pi } & & \\downarrow _{\\pi } \\\\ \n\\overset{(k,r)}{\\wedge } & \\overset{d}{\\longrightarrow } & \\overset{(k,r+1)}{%\n\\wedge }\n\\end{array}\n\\end{equation}\n\nThus we obtain the array\n\n\\begin{equation}\n\\begin{array}{ccccccccc}\n\\overset{(\\infty ,0)}{\\wedge } & \\longrightarrow & \\overset{(\\infty ,1)}{%\n\\wedge } & \\longrightarrow & \\overset{(\\infty ,2)}{\\wedge } & \\longrightarrow\n& ..... & \\longrightarrow & \\overset{(\\infty ,n)}{\\wedge } \\\\ \n..... & & ..... & & ..... & & ..... & & ..... \\\\ \n\\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow\n\\\\ \n\\overset{(2,0)}{\\wedge } & \\longrightarrow & \\overset{(2,1)}{\\wedge } & \n\\longrightarrow & \\overset{(2,2)}{\\wedge } & \\longrightarrow & ..... & \n\\longrightarrow & \\overset{(2,n)}{\\wedge } \\\\ \n\\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow\n\\\\ \n\\overset{(1,0)}{\\wedge } & \\longrightarrow & \\overset{(1,1)}{\\wedge } & \n\\longrightarrow & \\overset{(1,2)}{\\wedge } & \\longrightarrow & ..... & \n\\longrightarrow & \\overset{(1,n)}{\\wedge } \\\\ \n\\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow\n\\\\ \n\\overset{(0,0)}{\\wedge } & \\longrightarrow & \\overset{(0,1)}{\\wedge } & \n\\longrightarrow & \\overset{(0,2)}{\\wedge } & \\longrightarrow & ..... & \n\\longrightarrow & \\overset{(0,n)}{\\wedge }\n\\end{array}\n\\end{equation}\n\nwhere all horizontal maps are given by $d$ and all vertical maps are\nprojections. 
The top sequence is defined algebraically by taking projective\nlimits in each column.\n\nUsing $\\bullet $, we also define the wedge product of $(k,r)$-forms in each\nrow by the standard formula which turns $\\overset{(k,\\ast )}{\\wedge }%\n(M)\\doteq \\oplus _{0\\leq r\\leq n}\\overset{(k,r)}{\\wedge }(M)$ into an\nalgebra. This algebra structure descends to the cohomology. In particular,\nwe obtain an algebra structure on the cohomology of the top row of (11),\nwhich we will denote by $H^{\\ast }(\\frak{g}_{\\infty }(M),J_{\\infty }(M))$.\n\nNow let $C^{k+1,r}(M)\\doteq $ $Kernel$ $(\\pi _{\\infty ,k}:\\overset{(\\infty\n,r)}{\\wedge }(M)\\rightarrow \\overset{(k,r)}{\\wedge }(M))$ and $\\mathcal{C}%\n^{k+1,\\ast }(M)\\doteq \\oplus _{0\\leq r\\leq n}C^{k+1,r}(M),$ $k\\geq 0.$ This\ngives the array\n\n\\begin{equation}\n\\begin{array}{ccccccccc}\n\\overset{(\\infty ,0)}{\\wedge }(M) & \\longrightarrow & \\overset{(\\infty ,1)}{%\n\\wedge }(M) & \\longrightarrow & \\overset{(\\infty ,2)}{\\wedge }(M) & \n\\longrightarrow & ..... & \\longrightarrow & \\overset{(\\infty ,n)}{\\wedge }(M)\n\\\\ \n\\uparrow & & \\uparrow & & \\uparrow & & \\uparrow & & \\uparrow \\\\ \nC^{1,0}(M) & \\longrightarrow & C^{1,1}(M) & \\longrightarrow & C^{1,2}(M) & \n\\longrightarrow & ..... & \\longrightarrow & C^{1,n}(M) \\\\ \n\\uparrow & & \\uparrow & & \\uparrow & & \\uparrow & & \\uparrow \\\\ \nC^{2,0}(M) & \\longrightarrow & C^{2,1}(M) & \\longrightarrow & C^{2,2}(M) & \n\\longrightarrow & .... & \\longrightarrow & C^{2,n}(M) \\\\ \n\\uparrow & & \\uparrow & & \\uparrow & & \\uparrow & & \\uparrow \\\\ \n..... & & ..... & & ..... & & ..... & & ..... \\\\ \nC^{\\infty ,0}(M) & \\longrightarrow & C^{\\infty ,1}(M) & \\longrightarrow & \nC^{\\infty ,2}(M) & \\longrightarrow & ..... & \\longrightarrow & C^{\\infty\n,n}(M)\n\\end{array}\n\\end{equation}\n\nwhere all horizontal maps in (12) are restrictions of $d$ and all vertical\nmaps are inclusions. 
The filtration in (12) preserves the wedge product. In\nfact, we have $C^{k,r}(M)\\wedge $ $C^{s,t}(M)\\subset C^{k+s,r+t}(M)$, which\nfollows easily from the definition of $\\bullet .$ We will denote the\nspectral sequence of algebras determined by the filtration in (12) by $%\n\\mathcal{U}_{M}$ and call $\\mathcal{U}_{M}$ the universal spectral sequence\nof $M.$\n\nThe above construction will be relevant in the next section. However, we\nremark here that (11) contains no information other than $H_{deR}^{\\ast }(M,%\n\\mathbb{R}).$ To see this, we observe that if $\\overset{(k)}{X}\\overset{(k)}{%\nf}=0$ for all $\\overset{(k)}{X}\\in \\frak{g}_{k}(M)$, then $\\overset{(k)}{f}%\n\\in \\mathbb{R}\\subset C^{\\infty }(M).$ Thus the kernels of the first\noperators in the horizontal rows of (11) define the constant sheaf $\\mathbb{R%\n}$ on $M.$ Since $\\overset{(k,r)}{\\wedge }(M)$ is a module over $C^{\\infty\n}(M)$, each row of (11) (which is easily shown to be locally exact) is a\nsoft resolution of the constant sheaf $\\mathbb{R}$ and thus computes $%\nH_{deR}^{\\ast }(M,\\mathbb{R}).$\n\n\\section{Homogeneous geometries}\n\n\\begin{definition}\nA homogeneous geometry on a differentiable manifold $M$ is a diagram\n\n\\begin{equation}\n\\begin{array}{ccccccccc}\n..... & \\longrightarrow & \\mathcal{G}_{2}(M) & \\longrightarrow & \\mathcal{G%\n}_{1}(M) & \\longrightarrow & M\\times M & \\longrightarrow & 1 \\\\ \n\\uparrow & & \\uparrow & & \\uparrow & & \\parallel & & \\\\ \n..... 
& \\longrightarrow & \\mathcal{S}_{2}(M) & \\longrightarrow & \\mathcal{S%\n}_{1}(M) & \\longrightarrow & M\\times M & \\longrightarrow & 1\n\\end{array}\n\\end{equation}\n\nwhere $i)$ $\\mathcal{S}_{k}(M)$ is a $TLEFF$ for all $k\\geq 1$ and therefore \n$\\mathcal{S}_{k}(M)$ $\\subset \\mathcal{G}_{k}(M)$ and the vertical maps are\ninclusions, and $ii)$ the horizontal maps in the bottom sequence of (13) are\nrestrictions of the projection maps in the top sequence and are surjective\nmorphisms.\n\\end{definition}\n\nWith an abuse of notation, we will denote (13) by $\\mathcal{S}_{\\infty }(M)$\nand call $\\mathcal{S}_{\\infty }(M)$ a homogeneous geometry on $M.$ We thus\nimagine that the lower sequence of (13) ``converges'' to some (pseudo)group $%\nG\\subset \\widetilde{Diff(M)}$ which acts transitively on $M.$ However, $G$\nmay be far from being a Lie group and it may be intractable to deal with $G$\ndirectly. The idea of Definition 2 is to work with the arrows of $G$ rather\nthan to work with $G$ itself.\n\n\\begin{definition}\nLet $\\mathcal{S}_{\\infty }(M)$ be a homogeneous geometry on $M.$ $\\mathcal{S}%\n_{\\infty }(M)$ \\ is called a Klein geometry if there exists an integer $%\nm\\geqslant 1$ with the following property: If $(f_{m})_{y}^{x}\\in \\mathcal{S}%\n_{m}(M)_{y}^{x}$, then there exists a unique local diffeomorphism $g$ with $%\ng(x)=y$ satisfying $i)$ $j_{m}(g)_{y}^{x}=(f_{m})_{y}^{x}$ \\ $ii)$ $%\nj_{m}(g)_{g(z)}^{z}\\in \\mathcal{S}_{m}(M)_{g(z)}^{z}$ for all $z$ near $x.$\nThe smallest such integer (uniquely determined if $M$ is connected) is\ncalled the order of the Klein geometry.\n\\end{definition}\n\nIn short, a Klein geometry is a transitive pseudogroup whose local\ndiffeomorphisms are uniquely determined by any of their $m$-arrows and we\nrequire $m$ to be the smallest such integer. 
Once we have a Klein geometry $%\n\\mathcal{S}_{\\infty }(M)$ of order $m,$ then all $\\mathcal{S}_{k}(M),$ $%\nk\\geqslant m+1$ are uniquely determined by $\\mathcal{S}_{m}(M)$ and $%\n\\mathcal{S}_{k+1}(M)\\rightarrow \\mathcal{S}_{k}(M)$ is an isomorphism for\nall $k\\geqslant m,$ i.e., the Klein geometry $\\mathcal{S}_{\\infty }(M)$\nprolongs in a unique way to a homogeneous geometry and all the information\nis contained in terms up to order $m.$ We will thus denote a Klein geometry\nby $\\mathcal{S}_{(m)}(M).$ We will take a closer look at these geometries in\nthe next section. For instance, let $M^{2n}$ be a complex manifold. We\ndefine $\\mathcal{S}_{k}(M)_{y}^{x}$ by the following condition: $%\n(f_{k})_{y}^{x}\\in \\mathcal{G}_{k}(M)_{y}^{x}$ belongs to $\\mathcal{S}%\n_{k}(M)_{y}^{x}$ if there exists a local holomorphic diffeomorphism $g$ with \n$g(x)=y$ and $j_{k}(g)_{y}^{x}=(f_{k})_{y}^{x}.$ We see that the $TLEFF$ $%\n\\mathcal{S}_{k}(M)$ defined by $\\mathcal{S}_{k}(M)\\doteq \\cup _{x,y\\in M}%\n\\mathcal{S}_{k}(M)_{y}^{x}$ satisfies the conditions of Definition 2 and\ntherefore a complex structure determines a homogeneous geometry which is not\nnecessarily a Klein geometry. Similarly, a symplectic or contact structure\ndetermines a homogeneous geometry (since these structures have no local\ninvariants) which need not be Klein. More generally, any transitive\npseudogroup determines a homogeneous geometry via its arrows.\n\nNow given a homogeneous geometry $\\mathcal{S}_{\\infty }(M)$, we will sketch\nthe construction of its infinitesimal geometry $\\frak{s}_{\\infty }(M)$\nreferring to [25], [26] for further details. Let $x\\in M$ and $X$ be a\nvector field defined near $x.$ Let $f_{t}$ be the 1-parameter group of local\ndiffeomorphisms generated by $X.$ Suppose $X$ has the property that $%\nj_{k}(f_{t})_{y_{t}}^{x}$ belongs to $\\mathcal{S}_{k}(M)_{y_{t}}^{x}$ for\nall small $t$ with $t\\geqslant 0$ where $y_{t}=f_{t}(x)$. 
This is actually a\ncondition only on the $k$-jet of $X$ at $x.$ In this way we define the\nsubspace $\\frak{s}_{k}(M)_{x}\\subset \\frak{g}_{k}(M)_{x}$ which consists of\nthose $\\overset{(k)}{X}(x)$ satisfying this condition. We define the vector\nsubbundle $\\frak{s}_{k}(M)\\doteq \\cup _{x\\in M}\\frak{s}_{k}(M)_{x}%\n\\rightarrow M$ of $\\frak{g}_{k}(M)\\rightarrow M$ and the bracket (2) on $%\n\\frak{g}_{k}(M)$ restricts to a bracket on $\\frak{s}_{k}(M)$ and $\\frak{s}%\n_{k}(M)$ is the $TLEIF$ determined by $\\mathcal{S}_{k}(M).$\n\nIn this way we arrive at the diagram\n\n\\begin{equation}\n\\begin{array}{ccccccccccc}\n.... & \\longrightarrow & \\frak{g}_{k}(M) & \\longrightarrow & .... & \n\\longrightarrow & \\frak{g}_{1}(M) & \\longrightarrow & \\frak{g}_{0}(M) & \n\\longrightarrow & 0 \\\\ \n& & \\uparrow & & \\uparrow & & \\uparrow & & \\uparrow & & \\\\ \n.... & \\longrightarrow & \\frak{s}_{k}(M) & \\longrightarrow & .... & \n\\longrightarrow & \\frak{s}_{1}(M) & \\longrightarrow & \\frak{s}_{0}(M) & \n\\longrightarrow & 0\n\\end{array}\n\\end{equation}\n\nwhere the bottom horizontal maps are restrictions of the upper horizontal\nmaps and are surjective morphisms. The vertical maps are injective morphisms\ninduced by inclusion. Thus (14) is the infinitesimal analog of (13). 
We will\ndenote (14) by $\\frak{s}_{\\infty }(M)$ and call $\\frak{s}_{\\infty }(M)$ the\ninfinitesimal geometry of $\\mathcal{S}_{\\infty }(M).$\n\nNow we define $L_{\\overset{(k)}{Y}}:\\overset{(k,r)}{\\wedge }\\rightarrow \n\\overset{(k,r)}{\\wedge }$ and $i_{\\overset{(k)}{Y}}:\\overset{(k,r+1)}{\\wedge \n}\\rightarrow \\overset{(k,r)}{\\wedge }$ by the standard formulas: $(L_{%\n\\overset{(k)}{Y}}\\overset{(k,r)}{\\omega })(\\overset{(k)}{X}_{1},....,%\n\\overset{(k)}{X}_{r})\\doteq \\overset{(k)}{Y}(\\overset{(k,r)}{\\omega }(%\n\\overset{(k)}{X}_{1},....,\\overset{(k)}{X}_{r}))-\\overset{(k,r)}{\\omega }([%\n\\overset{(k)}{Y},\\overset{(k)}{X}_{1}],\\overset{(k)}{X}_{2},....,\\overset{(k)}{X}_{r})-....-\\overset{(k,r)}{\\omega }(\\overset{(k)}{X}_{1},....,\\overset{(k)}{X}_{r-1},[\\overset{(k)}{Y},\\overset{(k)}{X}_{r}])$ and $(i_{%\n\\overset{(k)}{Y}}\\overset{(k+1,r)}{\\omega })(\\overset{(k)}{X}_{1},....,%\n\\overset{(k)}{X}_{r})\\doteq (r+1)\\ \\overset{(k+1,r)}{\\omega }(\\overset{(k)%\n}{Y},\\overset{(k)}{X}_{1},....,\\overset{(k)}{X}_{r}).$ We also define $(\\pi\n_{k,m}L_{\\overset{(k)}{Y}}):\\overset{(m,r)}{\\wedge }\\rightarrow \\overset{%\n(m,r)}{\\wedge }$ by $\\pi _{k,m}L_{\\overset{(k)}{Y}}\\doteq L_{\\pi _{k,m}%\n\\overset{(k)}{Y}}$ and similarly $\\pi _{k,m}i_{\\overset{(k)}{Y}}=i_{\\pi\n_{k,m}\\overset{(k)}{Y}}.$ With these definitions we obtain the well known\nformulas\n\n\\begin{equation}\nL_{\\overset{(k)}{X}}=d\\circ i_{\\overset{(k)}{X}}+i_{\\overset{(k)}{X}}\\circ d\n\\end{equation}\n\n\\begin{equation}\nL_{\\overset{(k)}{X}}\\circ d=d\\circ L_{\\overset{(k)}{X}}\n\\end{equation}\n\nWe will now indicate briefly how a homogeneous geometry $\\mathcal{S}_{\\infty\n}(M)$ gives rise to various spectral sequences.\n\n$1)$ We define $\\overset{(k,r)}{\\frak{s}}(M)\\subset \\overset{(k,r)}{\\wedge }%\n(M)$ by the following condition: Some $\\overset{(k,r)}{\\omega }$ belongs to $%\n\\overset{(k,r)}{\\frak{s}}(M)$ if $\\quad i)$ 
$L_{\\overset{(k)}{X}}\\overset{%\n(k,r)}{\\omega }=0,$ $\\overset{(k)}{X}\\in \\frak{s}_{k}(M)$ $\\quad ii)$ $i_{%\n\\overset{(k)}{X}}\\overset{(k,r)}{\\omega }=0,$ $\\overset{(k)}{X}\\in \\frak{s}%\n_{k}(M).$ (15) and (16) show that $d:\\overset{(k,r)}{\\frak{s}}(M)\\rightarrow \n\\overset{(k,r+1)}{\\frak{s}}(M)$ and (10) holds. Thus we arrive at (11) and\n(12). Recall that if $\\frak{g}$ is any Lie algebra with a representation on $%\nV$ and $\\frak{s}\\subset \\frak{g}$ is a subalgebra, then we can define the\nrelative Lie algebra cohomology groups $H^{\\ast }(\\frak{g},\\frak{s},V)$ ([6])%\n$.$ Since our construction is modelled on the definition of $H^{\\ast }(\\frak{%\ng},\\frak{s},V),$ we will denote the cohomology of the top row of (11) by $%\nH^{\\ast }(\\frak{g}_{\\infty }(M),\\frak{s}_{\\infty }(M),J_{\\infty }(M))$ in\nthis case.\n\n$2)$ We will first make\n\n\\begin{definition}\n$\\Theta (\\mathcal{S}_{k}(M))\\doteq \\{\\overset{(k)}{f}\\in J_{k}(M)\\mid \n\\overset{(k)}{X}\\overset{(k)}{f}=0$ for all $\\overset{(k)}{X}\\in \\frak{s}%\n_{k}(M)\\}$\n\\end{definition}\n\n(6) shows that $\\Theta (\\mathcal{S}_{k}(M))$ is a subalgebra of $J_{k}(M).$\n\\ We will call $\\Theta (\\mathcal{S}_{k}(M))$ the $k^{th}$ order structure\nalgebra of the homogeneous geometry $\\mathcal{S}_{\\infty }(M).$ Note that $%\n\\Theta (\\mathcal{G}_{k}(M))=\\mathbb{R}.$ For $k\\geq 1,$ we define $\\overset{%\n[k,r]}{\\frak{s}}(M)_{x}$ as the space of alternating maps $\\frak{s}%\n_{k}(M)_{x}\\times ....\\times \\frak{s}_{k}(M)_{x}\\rightarrow J_{k}(M)_{x}$\nand define $\\overset{[k,r]}{\\frak{s}}(M)$ as in Section 2. 
We have the\nrestriction map $\\theta _{(k,r)}:\\overset{[k,r]}{\\wedge }(M)\\rightarrow \n\\overset{\\lbrack k,r]}{\\frak{s}}(M)$ whose kernel will be denoted by $%\n\\overset{[k,r]}{\\wedge }_{\\frak{s}}(M).$ Since $\\Theta (\\mathcal{S}%\n_{k}(M))=Ker(\\theta _{(k,1)}\\circ d),$ we obtain the commutative diagram\n\n\\begin{equation}\n\\begin{array}{ccccccccccc}\n& & 0 & & 0 & & 0 & & .... & & 0 \\\\ \n& & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \n\\downarrow \\\\ \n& & \\Theta (\\mathcal{S}_{k}(M)) & \\longrightarrow & \\overset{[k,1]}{\\wedge }%\n_{\\frak{s}}(M) & \\longrightarrow & \\overset{[k,2]}{\\wedge }_{\\frak{s}}(M) & \n\\longrightarrow & .... & \\longrightarrow & \\overset{[k,n]}{\\wedge }_{\\frak{s}%\n}(M) \\\\ \n& & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \n\\downarrow \\\\ \n\\mathbb{R} & \\longrightarrow & \\overset{[k,0]}{\\wedge }(M) & \\longrightarrow\n& \\overset{[k,1]}{\\wedge }(M) & \\longrightarrow & \\overset{[k,2]}{\\wedge }(M)\n& \\longrightarrow & .... & \\longrightarrow & \\overset{[k,n]}{\\wedge }(M) \\\\ \n& & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \n\\downarrow \\\\ \n0 & \\longrightarrow & \\frac{\\overset{[k,0]}{\\wedge }(M)}{\\Theta (\\mathcal{S}%\n_{k}(M))} & \\longrightarrow & \\overset{[k,1]}{\\frak{s}}(M) & \\longrightarrow\n& \\overset{[k,2]}{\\frak{s}}(M) & \\longrightarrow & .... & \\longrightarrow & \n\\overset{[k,n]}{\\frak{s}}(M) \\\\ \n& & \\downarrow & & \\downarrow & & \\downarrow & & \\downarrow & & \n\\downarrow \\\\ \n& & 0 & & 0 & & 0 & & .... & & 0\n\\end{array}\n\\end{equation}\n\nWe will call (17) the horizontal cross-section of the representation triple $(%\n\\mathcal{G}_{\\infty }(M),\\mathcal{S}_{\\infty }(M),J_{\\infty }(M))$ at order $k$. Passing to the long\nexact sequence in (17), we see that the local cohomologies of the top and bottom\nsequences coincide (with a shift in order) in view of the local exactness of\nthe middle row. 
Now defining $\\overset{(k,r)}{\\frak{s}}(M)$ as in Section 2,\n(17) defines an exact sequence of three spectral sequences where the middle\nspectral sequence is (12). Note that local exactness of the top and bottom\nsequences would imply that their cohomologies coincide with the sheaf\ncohomology groups $H^{\\ast }(M,\\Theta (\\mathcal{S}_{k}(M)))$ and $H^{\\ast }(M,%\n\\frac{\\overset{[k,0]}{\\wedge }(M)}{\\Theta (\\mathcal{S}_{k}(M))})$\nrespectively since partitions of unity apply to the spaces in these\nsequences. We will denote the limiting cohomology of the top and bottom\nsequences in (17) respectively by $H^{\\ast }(\\frak{s}_{\\infty }(M),0)$ and $%\nH^{\\ast }(\\frak{s}_{\\infty }(M),J_{\\infty }(M)).$ The reader may compare\n(17) to the diagram on page 183 in [25] which relates the Spencer sequence to\nthe Janet sequence, called the $\\mathcal{P}$-sequence in [25].\n\nBefore we end this section, note that $(f_{k+1})_{y}^{x}\\in \\mathcal{S}%\n_{k+1}(M)_{y}^{x}$ defines an isomorphism $(f_{k+1})_{y}^{x}:J_{k}(M)_{x}%\n\\rightarrow J_{k}(M)_{y}\\ $(in fact, $(f_{k})_{y}^{x}$ does it) and also an\nisomorphism $(f_{k+1})_{y}^{x}:\\frak{s}_{k}(M)_{x}\\rightarrow \\frak{s}%\n_{k}(M)_{y}.$ Let us assume that $\\mathcal{S}_{\\infty }(M)$ is defined by\nsome $G\\subset Diff(M)$ as in the case of a symplectic structure,\nhomogeneous complex structure or a Klein geometry (see Section 4). 
Now $G$\nacts on $\\overset{(k,r)}{\\frak{s}}(M)$ (defined as in 1) or 2) above) by $(g%\n\\overset{(k,r)}{\\omega })(\\overset{(k)}{X}_{1},....,\\overset{(k)}{X}%\n_{r})(p)\\doteq j_{k+1}(g)_{p}^{q}(\\overset{(k,r)}{\\omega }%\n(q)(j_{k+1}(g^{-1})_{q}^{p}\\overset{(k)}{X}%\n_{1}(p),....,j_{k+1}(g^{-1})_{q}^{p}\\overset{(k)}{X}_{r}(p)))$ where $g(q)=p.$\nThe cochains which are invariant under this action form a subcomplex whose\ncohomology can be localized, i.e., can be computed at any point of $M.$ If $%\n\\mathcal{S}_{\\infty }(M)=\\mathcal{G}_{\\infty }(M)$ and $G=Diff(M)$, then an\ninvariant form must vanish but this need not be the case for a homogeneous\ngeometry. We will not go into the precise description of this cohomology\nhere though it is quite relevant for Klein geometries in Section 4 and can\nbe expressed in terms of some relative Lie algebra cohomology groups in this\ncase.\n\n\\section{Klein geometries}\n\nLet $G$ be a Lie group and $H$ a closed subgroup. $G$ acts on the left coset\nspace $G\/H$ on the left. Let $o$ denote the coset of $H.$ Now $H$ fixes $o$\nand therefore acts on the tangent space $T(G\/H)_{o}$ at $o$. However some\nelements of $H$ may act as the identity on $T(G\/H)_{o}.$ The action of $h\\in H$\n(we regard $h$ as a transformation and use the same notation) on $T(G\/H)_{o}$\ndepends only on the $1$-arrow of $h$ with source and target at $o.$ Let $%\nH_{1}\\subset H$ be the normal subgroup of $H$ consisting of elements which\nact as the identity on $T(G\/H)_{o}.$ To recover $H_{1},$ we consider $1$-jets of\nvector fields at $o$ which we will denote by $J_{1}T(G\/H)_{o}.$ The action\nof $h$ on $J_{1}T(G\/H)_{o}$ depends only on the $2$-arrow of $h.$ Now some\nelements $h\\in H_{1}$ may still act as the identity on $J_{1}T(G\/H)_{o}$ and we\ndefine the normal subgroup $H_{2}\\subset $ $H_{1}$ consisting of those\nelements. 
Iterating this procedure, we obtain a decreasing sequence of\nnormal subgroups $\\{1\\}\\subset ...\\subset H_{k}\\subset H_{k-1}\\subset\n.....\\subset H_{2}\\subset H_{1}\\subset H_{0}=H$ which stabilizes at some\ngroup $N$ which is the largest normal subgroup of $G$ contained in $H$ $($%\nsee [29], pg. 161). We will call the smallest integer $m$ satisfying $%\nN=H_{m} $ the order of the Klein pair $(G,H)$. In this case, it is easy to\nshow that $g\\in G$ is uniquely determined modulo $N$ by any of its $m$%\n-arrows. $G$ acts effectively on $G\/H$ iff $N=\\{1\\}.$ If $(G,H)$ is a Klein\npair of order $m,$ then so is $(G\/N,H\/N)$ which is further effective. We\nwill call $N$ the ghost of the Klein pair $(G,H)$ since it can not be\ndetected from the action and therefore \\textit{may} have implications that\nfall outside the scope of \\textit{Erlangen Programm. }We will touch this\nissue again in $5)$ of Section 9. Thus we see that an \\textit{effective }%\nKlein pair $(G,H)$ of order $m$ determines a Klein geometry $\\mathcal{S}%\n_{(m)}(G\/H)$ according to Definition 3 where the local diffeomorphisms\nrequired by Definition 3 are restrictions of global diffeomorphisms of $G\/H$\nwhich are induced by the elements of $G.$ Conversely, let $\\mathcal{S}%\n_{(m)}(M)$ be a Klein geometry according to Definition 3 and let $\\widetilde{%\nM}$ be the universal covering space of $M.$ We pull back the pseudogroup on $%\nM$ to a pseudogroup on $\\widetilde{M}$ using the local diffeomorphism $\\pi :%\n\\widetilde{M}\\rightarrow M.$ Using simple connectedness of $\\widetilde{M}$\nand a mild technical assumption which guarantees that the domains of the\nlocal diffeomorphisms do not become arbitrarily small, the standard\nmonodromy argument shows that a local diffeomorphism defined on $\\widetilde{M%\n}$ in this way uniquely extends to some global diffeomorphism on $\\widetilde{%\nM}.$ This construction is essentially the same as the one given in [30] on\npage 139-146. 
The global diffeomorphisms on $\\widetilde{M}$ obtained in this\nway form a Lie group $G.$ If $H\\subset G$ is the stabilizer of some $p\\in \n\\widetilde{M},$ then $H$ is isomorphic to $\\mathcal{S}_{(m)}(M)_{q}^{q}$\nwhere $\\pi (p)=q.$ To summarize, a Klein geometry $\\mathcal{S}_{(m)}(M)$\naccording to Definition 3 becomes an effective Klein pair $(G,H)$ of order $%\nm $ when pulled back to $\\widetilde{M}$. Conversely, an effective Klein pair \n$(G,H)$ of order $m$ defines a Klein geometry $\\mathcal{S}_{(m)}(M)$ if we\nmod out by the action of a discrete subgroup. Keeping this relation in\nmind, we will consider an effective Klein pair $(G,H)$ as our main object\nbelow.\n\nNow the above filtration of normal subgroups gives the diagram\n\n\\begin{equation}\n\\begin{array}{ccccccccccc}\n\\mathcal{G}_{m}(G\/H)_{o}^{o} & \\longrightarrow & \\mathcal{G}%\n_{m-1}(G\/H)_{o}^{o} & \\longrightarrow & .... & \\longrightarrow & \\mathcal{G%\n}_{1}(G\/H)_{o}^{o} & \\longrightarrow & 1 & & \\\\ \n\\uparrow & & \\uparrow & & \\uparrow & & \\uparrow & & & & \\\\ \nH & \\longrightarrow & H\/H_{m} & \\longrightarrow & ... & \\longrightarrow & \nH\/H_{1} & \\longrightarrow & 1 & & \n\\end{array}\n\\end{equation}\n\nwhere the spaces in the top sequence are jet groups in our universal\nsituation in Section 2 and the vertical maps are injections. Since the\nkernels in the upper sequence are vector groups, we see that $H_{1}$ is\nsolvable. 
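To make the filtration $H\supset H_{1}\supset H_{2}\supset ....$ concrete, here is a standard example worked out in our own notation (it is not taken from the references cited in this paper): the projective Klein pair.

```latex
% Illustration (ours): the filtration for G = PGL(n+1,R) acting on
% M = G/H = RP^n, with H the stabilizer of a point o.
% In an affine chart centered at o, an element h of H acts by
\[
x\;\longmapsto\;\frac{Ax}{1+\xi ^{t}x},\qquad A\in GL(n,\mathbb{R}),
\ \xi \in \mathbb{R}^{n},
\]
% so the 1-arrow of h at o is x -> Ax. Hence
\[
H_{1}=\{\,h\in H:A=I\,\}\cong \mathbb{R}^{n},
\]
% and the expansion x/(1+\xi^{t}x) = x-(\xi^{t}x)x+.... shows that the
% 2-arrow of such an h is the identity only for \xi = 0, i.e. H_{2}=\{1\}=N.
% The pair (G,H) is therefore effective of order m = 2.
```

This is consistent with the bound recalled below for semisimple $G$: the order of $(G,H)$ is at most two.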
As before, we now define $\\mathcal{S}_{m}(M)_{y}^{x}$ which\nconsists of $m$-arrows of elements of $G$ and define $\\mathcal{S}%\n_{m}(M)\\doteq \\cup _{x,y\\in M}\\mathcal{S}_{m}(M)_{y}^{x}.$ As in the case $%\nDiff(M)\\times M\\rightarrow M$ in Section 2, we obtain the map $G\\times\nG\/H\\rightarrow \\mathcal{S}_{m}(M)$ defined by $(g,x)\\rightarrow $ $m$-arrow\nof $g$ starting at $x$ and ending at $g(x).$ This map is surjective by\ndefinition and this time also injective by the definition of $m.$ Thus we\nobtain a concrete realization of $\\mathcal{S}_{m}(M)$ as $G\\times G\/H.$ Note\nthat $\\mathcal{S}_{m}(M)_{o}^{o}=H.$ Going downwards in the filtration, we\nobtain the commutative diagram\n\n\\begin{equation}\n\\begin{array}{ccc}\nG\\times G\/H & \\longrightarrow & \\mathcal{S}_{m}(M) \\\\ \n\\downarrow & & \\downarrow \\\\ \nG\/H_{m-1}\\times G\/H & \\longrightarrow & \\mathcal{S}_{m-1}(M) \\\\ \n\\downarrow & & \\downarrow \\\\ \n.... & \\longrightarrow & ... \\\\ \n\\downarrow & & \\downarrow \\\\ \nG\/H_{2}\\times G\/H & \\longrightarrow & \\mathcal{S}_{2}(M) \\\\ \n\\downarrow & & \\downarrow \\\\ \nG\/H_{1}\\times G\/H & \\longrightarrow & \\mathcal{S}_{1}(M)\n\\end{array}\n\\end{equation}\n\nFor instance, the bottom map in (19) is defined by $\\{xH_{1}\\}\\times\n\\{yH\\}\\rightarrow 1$-arrow of the diffeomorphism $\\{xH_{1}\\}$ starting at\nthe coset $\\{yH\\}$ and ending at the coset $\\{xyH\\}.$ Note that this is not\na group action since $G\/H_{1}$ is not a group but the composition and\ninversion of $1$-arrows are well defined. 
This map is a bijection by the\ndefinition of $H_{1}.$ Fixing one such $1$-arrow, all other $1$-arrows\nstarting at $\\{yH\\}$ are generated by composing this arrow with elements of $%\n\\mathcal{S}_{1}(M)_{y}^{y}=I_{xy^{-1}}H\/I_{xy^{-1}}H_{1}$ where $I_{xy^{-1}}$\nis the inner automorphism of $G$ determined by $xy^{-1}\\in G.$ The vertical\nprojections on the right column of (19) are induced by projection of jets\nas in Sections 2, 3 and the projections on the left column are induced by\nprojections on the first factor and the identity map on the second factor.\n\nA Lie group $G$ is clearly an effective Klein pair $(G,\\{1\\})$ with order $%\nm=0.$ For many Klein geometries we have $m=1$. This is the case, for\ninstance, if $H$ is compact (in which case we have an invariant metric), if $H$\nis discrete, or if $(G,H)$ is a reductive pair, which is extensively studied in\nthe literature from the point of view of principal bundles ([12]). If $G$ is\nsemisimple, it is not difficult to show that the order of $(G,H)$ is at most\ntwo (see [18], pg. 131). For instance, let $M$ be a homogeneous complex\nmanifold, i.e., $Aut(M)$ acts transitively on $M.$ If $M$ is compact, then $%\nM=G\/H$ for some complex Lie group $G$ and a closed complex subgroup $H$\n([34]). If further $\\pi _{1}(M)$ is finite, then $G\/H=\\overline{G}\/\\overline{H%\n}$ as complex manifolds for some semisimple Lie group $\\overline{G}$ ([34]).\nThus it follows that jets of order greater than two do not play any role in\nthe complex structure of $M$ in this case. If $G$ is reductive, it is stated\nin [35] that the order of $(G,H)$ is at most three. On the other hand, for\nany positive integer $m$, an effective Klein pair $(G,H)$ of order $m$ is\nconstructed in [1] such that $G\/H$ is open and dense in some weighted\nprojective space. Other examples of Klein pairs of arbitrary order are\ncommunicated to us by the author of [35]. 
However, we do not know the answer\nto the following question\n\n$\\mathbf{Q1:}$ For some positive integer $m,$ does there exist a Klein pair $%\n(G,H)$ of order $m$ such that $G\/H$ is compact?\n\nIt is crucial to observe that $(G_{1},H_{1})$ and $(G_{2},H_{2})$ may be\ntwo Klein pairs with different orders with $G_{1}\/H_{1}$ homeomorphic to $%\nG_{2}\/H_{2}.$ For instance, let $H\\subset G$ be complex Lie groups with $\\pi\n_{0}(G)=\\pi _{1}(G\/H)=0$ and $G\/H$ compact. If $M$ $\\subset G$ is a maximal\ncompact subgroup, then $(M,M\\cap H)$ is a Klein pair of order one and $%\nG\/H=M\/M\\cap H$ as topological manifolds (in fact, $G\/H$ is Kaehler iff its\nEuler characteristic is nonzero, see [34], [4]). The crucial fact here is\nthat an abundance of Lie groups may act transitively on a homogeneous space $%\nM$ with different orders and the topology (but not the analytic structure)\nof $M$ is determined by actions of order one only (the knowledge of a\nparticular such action suffices, see Theorem VI in [34] where a detailed\ndescription of complex homogeneous spaces is given. It turns out that ``they\nare many more than we expect'' as stated there).\n\nNow let $(G,H)$ be an effective Klein pair of order $m$ and let $\\mathcal{L}%\n(G)$ be the Lie algebra of $G.$ We have the map $\\sigma :\\mathcal{L}%\n(G)\\rightarrow J_{m}(T(G\/H))_{p}$ defined by $X\\rightarrow j_{m}(X^{\\ast\n})_{p}$ where $X^{\\ast }$ is the vector field on $G\/H$ induced by $X$ and $%\np\\in G\/H.$ $\\sigma $ is a homomorphism of Lie algebras where the bracket on $%\nJ_{m}(T(G\/H))_{p}$ is the algebraic bracket defined in Section 2. Note that\nthe map $X\\rightarrow X^{\\ast }$ is injective due to effectiveness and also\nsurjective due to transitivity. 
We can thus express everything defined in\nSections 2, 3 in concrete terms which will enable us to use the highly\ndeveloped structure theory of (semisimple) Lie groups ([17]). A detailed\ndescription of $\\frak{s}_{m}(M)$ is given in [26] (Theorem 15 on pg. 199).\nThe formula on pg. 200 in [26] is the same as the formula (15), Example\n3.3.7, pg. 104 in the recent book [20], but higher order jets and the Spencer\noperator remain hidden in (15) and also in (4), Example 3.2.9, pg. 98 in\n[20].\n\nThe following situation deserves special mention: Let $G$ be a complex Lie\ngroup, $H$ a closed complex subgroup with a holomorphic representation on\nthe vector space $V$ and let $E\\rightarrow G\/H$ be the associated\nhomogeneous vector bundle of $G\\rightarrow G\/H.$ We now have the sheaf\ncohomology groups $H^{\\ast }(G\/H,\\mathbf{E})$ where $\\mathbf{E}$ denotes the\nsheaf of holomorphic sections of $E\\rightarrow G\/H.$ The Borel-Weil theorem is\nderived in [4] from $H^{0}(G\/H,\\mathbf{E})$. If the Klein pair $(G,H)$ has\norder $m$ and is effective, the principal bundle $G\\rightarrow G\/H$ can be\nidentified with the principal bundle $\\mathcal{S}_{m}(M)^{(o)}\\rightarrow\nG\/H $ where $\\mathcal{S}_{m}(M)^{(o)}$ consists of $m$-arrows in $G\/H$ with\nsource at the coset $o$ of $H$ (see Section 7). If $gx=y$, $g\\in G,$ $x,y\\in\nG\/H$, then the $m$-arrow of $g$ gives an isomorphism $E_{x}\\rightarrow E_{y}$\nbetween the fibers. Consequently, the action of $G$ on sections of $%\nE\\rightarrow G\/H$ is equivalent to the representation of the $TLEFF$ $%\n\\mathcal{S}_{m}(M)=G\\times G\/H$ on $E\\rightarrow G\/H.$ On the infinitesimal\nlevel, this gives a representation of $\\frak{s}_{m}(M)$ on $E\\rightarrow G\/H$\nand we can also define the cohomology groups $H^{\\ast }(\\frak{s}_{m}(M),E)$\nas in [19], [20]. Letting $\\Theta $ denote the sections of $E$ killed by $%\n\\frak{s}_{m}(M),$ we see that these two cohomology groups are related by a\ndiagram similar to (17). 
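For orientation, the simplest instance of this situation is standard and worth recalling (our example, stated in our own normalizations rather than those of the references above).

```latex
% Standard example (ours): G = SL(2,C), H = B the Borel subgroup of upper
% triangular matrices, so G/H = CP^1. For the homogeneous line bundle
% E = O(n), Borel-Weil identifies the holomorphic sections
\[
H^{0}(\mathbb{CP}^{1},\mathcal{O}(n))\;\cong\;
\mathbb{C}[z_{0},z_{1}]_{n},\qquad \dim =n+1,\quad n\geq 0,
\]
% the space of homogeneous polynomials of degree n, which is the
% irreducible SL(2,C)-representation of highest weight n; moreover
% H^{q}(CP^1, O(n)) = 0 for q >= 1 whenever n >= -1 (Serre duality).
```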
It is crucial to observe here how the order of jets\nremains hidden in $H^{\\ast }(G\/H,\\mathbf{E})$.\n\n\\section{Truncated geometries}\n\n\\begin{definition}\nSome $TLEFF$ $\\mathcal{S}_{k}(M)$ is called a truncated geometry on $M$ of\norder $k.$\n\\end{definition}\n\nWe will view a truncated geometry $\\mathcal{S}_{k}(M)$ as a diagram (13)\nwhere all $\\mathcal{S}_{m}(M)$ for $m\\leq k$ are defined as projections of $%\n\\mathcal{S}_{k}(M)$ and $\\mathcal{S}_{m}(M)$ for $m\\geqslant k+1$ do not\nexist. A homogeneous geometry defines a truncated geometry of any order. The\nquestion arises whether some truncated geometry always prolongs uniquely to\nsome homogeneous geometry. The answer turns out to be negative. For\ninstance, let $(M,g)$ be a Riemannian manifold and consider all $1$-arrows\non $M$ which preserve the metric $g.$ Such $1$-arrows define a $TLEFF$ $%\n\\mathcal{S}_{1}(M).$ We may fix some point $p\\in M$ and fix some coordinates\naround $p$ once and for all so that $g_{ij}(p)=\\delta _{ij}$, thus\nidentifying $\\mathcal{S}_{1}(M)_{p}^{p}$ with the orthogonal group $O(n).$\nNow any $1$-arrow with source at $p$ defines an orthogonal frame at its\ntarget $q$ by mapping the fixed orthogonal coordinate frame at $p$ to $q.$\nThe group $O(n)$ acts on all such $1$-arrows by composing on the source. Now\nforgetting $1$-arrows but keeping the orthogonal frames defined by them, we\nrecover the orthogonal frame bundle of the metric $g$. However, we will not\nadopt this point of view. In view of the existence of geodesic coordinates,\nwe can now construct $2$-arrows on $M$ which preserve the $1$-jet of $g$, i.e., \nthe $1$-jets of $g$ at all $x\\in M$ can be identified (in various ways). Thus we\nobtain $\\mathcal{S}_{2}(M)$ and the projection $\\pi _{2,1}:\\mathcal{S}%\n_{2}(M)\\rightarrow \\mathcal{S}_{1}(M).$ As a remarkable fact, $\\pi _{2,1}$\nturns out to be an isomorphism. 
This fact is equivalent to the well known\nGauss trick of shifting the indices and showing the uniqueness of a metric\nconnection which is symmetric (the Levi-Civita connection). The Christoffel\nsymbols are now obtained by twisting the $2$-arrows of $\\mathcal{S}_{2}(M)$\nby the $1$-arrows of $\\mathcal{S}_{1}(M)$. Now we may not be able to\nidentify the $2$-jet of $g$ over $M$ due to the curvature of $g$ and thus we may\nfail to construct the surjection $\\pi _{3,2}:\\mathcal{S}_{3}(M)\\rightarrow \n\\mathcal{S}_{2}(M).$ If we achieve this (and $\\pi _{3,2}$ will again be an\nisomorphism), the next obstruction comes from the $3$-jet of $g$ which is\nessentially the covariant derivative of the curvature. However, if $g$ has\nconstant curvature, then we can prolong $\\mathcal{S}_{2}(M)$ uniquely to a\nhomogeneous geometry $\\mathcal{S}_{\\infty }(M)$ which, as a remarkable fact,\nturns out to be a Klein geometry of order one since in this case $%\n\\lim_{k\\rightarrow \\infty }\\mathcal{S}_{k}(M)$ recaptures the isometry group of $(M,g)$\nwhich acts transitively on $M$ and any isometry is uniquely determined by\nany of its $1$-arrows. Thus we may view a truncated geometry $\\mathcal{S}%\n_{k}(M)$ as a candidate for some homogeneous geometry $\\mathcal{S}_{\\infty\n}(M)$ but $\\mathcal{S}_{k}(M)$ must overcome the obstructions, if any, put\nforward by $M.$ Almost all geometric structures (Riemannian, almost complex,\nalmost symplectic, almost quaternionic, ...) may be viewed as truncated\ngeometries of order at least one, each being a potential candidate for a\nhomogeneous geometry.\n\n\\begin{definition}\nA truncated geometry $\\mathcal{S}_{k}(M)$ is called formally integrable if\nit prolongs to a homogeneous geometry.\n\\end{definition}\n\nHowever, we require the prolongation in Definition 6 to be\nintrinsically determined by $\\mathcal{S}_{k}(M)$ in some sense and not be\ncompletely arbitrary. 
Given some $\\mathcal{S}_{k}(M),$ note that Definition\n6 requires the surjectivity of $\\mathcal{S}_{j+1}(M)\\rightarrow \\mathcal{S}%\n_{j}(M),$ $j\\geqslant k.$ For instance, we may construct some $\\mathcal{S}%\n_{k+1}(M)$ in an intrinsic way without $\\mathcal{S}_{k+1}(M)\\rightarrow \n\\mathcal{S}_{k}(M)$ being surjective. We may now redefine all lower terms by \n$\\widetilde{\\mathcal{S}_{j}(M)}=\\pi _{k+1,j}\\mathcal{S}_{k+1}(M)$ and start\nanew at order $k+1.$ This is not allowed by Definition 6. For instance, let $%\n\\mathcal{S}_{1}(M)$ be defined by some almost symplectic form $\\omega $ (or\nalmost complex structure $J).$ Then $\\pi _{2,1}:\\mathcal{S}%\n_{2}(M)\\rightarrow \\mathcal{S}_{1}(M)$ will be surjective if $d\\omega =0$ $($%\nor $N(J)=0$ where $N(J)$ is the Nijenhuis tensor of $J$).\n\nThis prolongation process, which we have tried to sketch above, is centered\non the concept of formal integrability, which can be defined in full\ngenerality, turning the ambiguous Definition 6 into a precise one. However, this\nfundamental concept turns out to be highly technical and is fully developed\nby D.C. Spencer and his co-workers from the point of view of PDEs,\nculminating in [11]. More geometric aspects of this concept are emphasized\nin [25], [26] and other books by this author.\n\n\\section{Bundle maps}\n\nIn this section we will briefly indicate the allowable bundle maps in the\npresent framework. Consider the universal $TLEFF$ $\\mathcal{G}_{k}(M)$ of\norder $k.$ We define the group bundle $\\mathcal{AG}_{k}(M)\\doteq \\cup _{x\\in\nM}\\mathcal{G}_{k}(M)_{x}^{x}$. The sections of this bundle form a group with\nthe operation defined fiberwise. 
We will call such a section a universal\nbundle map (or a universal gauge transformation) of order $k.$ We will\ndenote the group of universal bundle maps by $\\Gamma \\mathcal{AG}_{k}(M).$\nWe obtain the projection $\\pi _{k+1,k}:\\Gamma \\mathcal{AG}%\n_{k+1}(M)\\rightarrow \\Gamma \\mathcal{AG}_{k}(M)$ which is a homomorphism. If \n$\\mathcal{S}_{k}(M)\\subset \\mathcal{G}_{k}(M),$ we similarly define $%\n\\mathcal{AS}_{k}(M)\\subset \\mathcal{AG}_{k}(M)$ and call elements of $\\Gamma \n\\mathcal{AS}_{k}(M)$ automorphisms (or gauge transformations) of $\\mathcal{S}%\n_{k}(M)$. Now let $\\mathcal{S}_{k}(M)\\subset \\mathcal{G}_{k}(M)$ and $%\ng_{k}\\in \\Gamma \\mathcal{AG}_{k}(M).$ We will denote $g_{k}(x)$ by $%\n(g_{k})_{x}^{x},$ $x\\in M.$ We define the $TLEFF$ $(Adg)\\mathcal{S}_{k}(M)$\nby defining its $k$-arrows as $(Adg)\\mathcal{S}_{k}(M)_{y}^{x}\\doteq\n\\{(g_{k})_{y}^{y}(f_{k})_{y}^{x}(g_{k}^{-1})_{x}^{x}\\mid (f_{k})_{y}^{x}\\in \n\\mathcal{S}_{k}(M)_{y}^{x}\\}.$\n\n\\begin{definition}\n$\\mathcal{S}_{k}(M)$ is called equivalent to $\\mathcal{H}_{k}(M)$ if there\nexists some $g\\in \\Gamma \\mathcal{AG}_{k}(M)$ satisfying $(Adg)\\mathcal{S}%\n_{k}(M)=\\mathcal{H}_{k}(M).$\n\\end{definition}\n\nA necessary condition for the equivalence of $\\mathcal{S}_{k}(M)$ and $%\n\\mathcal{H}_{k}(M)$ is that $\\mathcal{S}_{k}(M)_{x}^{x}$ and $\\mathcal{H}%\n_{k}(M)_{x}^{x}$ be conjugate in $\\mathcal{G}_{k}(M)_{x}^{x}$, i.e., they\nmust be compatible structures (like both Riemannian,...). Let $\\frak{s}%\n_{k}(M)$ and $\\frak{h}_{k}(M)$ be the corresponding $TLEIF^{\\prime }$s. The\nabove action induces an action of $g$ on $\\frak{s}_{k}(M)$ which we will\ndenote also by $(Adg)\\frak{s}_{k}(M).$ This latter action uses the $1$-jet\nof $g$ since the geometric order of $\\frak{s}_{k}(M)$ is $k+1,$ i.e., the\ntransformation rule of the elements (sections) of $\\frak{s}_{k}(M)$ uses\nderivatives up to order $k+1.$ This construction is functorial. 
In\nparticular, $(Adg)\\mathcal{S}_{k}(M)=\\mathcal{H}_{k}(M)$ implies $(Adg)\\frak{%\ns}_{k}(M)=\\frak{h}_{k}(M)$. Clearly these actions commute with projections.\nIn this way we define the moduli spaces of geometries.\n\nSome $g_{k}\\in \\Gamma \\mathcal{AG}_{k}(M)$ acts on any $k^{th}$ order object\ndefined in Section 2. However, the action of $g_{k}$ does not commute with $%\nd $ and therefore $g_{k}$ does not act on the horizontal complexes in (10).\nTo do this, we define $\\mathcal{AG}_{\\infty }(M)\\doteq \\cup _{x\\in M}%\n\\mathcal{G}_{\\infty }(M)_{x}^{x}$ where an element $(g_{\\infty })_{x}^{x}$\nof $\\mathcal{G}_{\\infty }(M)_{x}^{x}$ is the $\\infty $-jet of some local\ndiffeomorphism with source and target at $x.$ As we noted above, $(g_{\\infty\n})_{x}^{x}$ is far from being a formal object: it determines this\ndiffeomorphism modulo the $\\infty $-jet of identity diffeomorphism. Now some \n$g_{\\infty }\\in \\Gamma \\mathcal{AG}_{\\infty }(M)$ does act on the horizontal\ncomplexes in (10).\n\n\\section{Principal bundles}\n\nLet $\\mathcal{S}_{k}(M)\\subset \\mathcal{G}_{k}(M).$ We fix some $p\\in M$ and\ndefine $\\mathcal{S}_{k}(M)^{(p)}\\doteq \\cup _{x\\in M}\\mathcal{S}%\n_{k}(M)_{x}^{p}.$ The group $\\mathcal{S}_{k}(M)_{p}^{p}$ acts on $\\mathcal{S}%\n_{k}(M)_{x}^{p}$ by composing with $k$-arrows of $\\mathcal{S}_{k}(M)_{x}^{p}$\nat the source as $(f_{k})_{x}^{p}\\rightarrow (f_{k})_{x}^{p}(h_{k})_{p}^{p}$\nand the projection $\\mathcal{S}_{k}(M)^{(p)}\\rightarrow M$ with fiber $%\n\\mathcal{S}_{k}(M)_{x}^{p}$ over $x$ is a principal bundle with group $%\n\\mathcal{S}_{k}(M)_{p}^{p}.$ Considering the adjoint action of $\\mathcal{S}%\n_{k}(M)_{p}^{p}$ on itself, we construct the associated bundle whose\nsections are automorphisms (or gauge transformations) which we use in gauge\ntheory. This associated bundle can be identified with $\\mathcal{AS}_{k}(M)$\nin Section 6 and therefore the two concepts of automorphisms coincide. 
In\ngauge theory, we let $h_{k}\\in \\Gamma \\mathcal{AS}_{k}(M)$ act on $\\mathcal{S%\n}_{k}(M)^{(p)}$ on the target as $(f_{k})_{x}^{p}\\rightarrow\n(h_{k})_{x}^{x}(f_{k})_{x}^{p}$ reserving the source for the group $\\mathcal{%\nS}_{k}(M)_{p}^{p}.$ We will denote this transformation by $f\\rightarrow\nh\\odot f.$ We can regard the object $h_{k}\\odot \\mathcal{S}_{k}(M)^{(p)}$ as\nanother principal $\\mathcal{S}_{k}(M)_{p}^{p}$-bundle: we imagine two copies\nof $\\mathcal{S}_{k}(M)_{p}^{p}$, one belonging to the principal bundle and one\noutside which is the group of the principal bundle, and $h_{k}$ acts only on\nthe principal bundle without changing the group. To be consistent with $%\n\\odot ,$ we now regard $h_{k}\\in \\Gamma \\mathcal{AG}_{k}(M)$ as a general\nbundle map and define the transform of $\\mathcal{S}_{k}(M)^{(p)}$ by $h_{k}$\nusing $\\odot .$ Now $\\odot $ has a drawback from a geometric point of view.\nTo see this, let $(M,g)$ be a Riemannian manifold. We will denote the $TLEFF$\ndetermined by $g$ by $\\mathcal{S}_{1}(g),$ identifying the principal bundle $%\n\\mathcal{S}_{1}(g)^{(p)}\\rightarrow M$ with the orthogonal frame bundle of $%\ng $ and the group $\\mathcal{S}_{1}(g)_{p}^{p}$ with $O(n)$ as in Section 5.\nNow the transformed object $h\\odot \\mathcal{S}_{1}(g)^{(p)}$, which is\nanother $O(n)$-principal bundle, is not related to any metric in sight\nunless $h$ is the identity! Thus we see that $\\odot $ dispenses with the\nconcept of a metric but keeps the concept of an $O(n)$-principal bundle,\ncarrying us from our geometric envelope outside into the topological world\nof general principal bundles. 
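\nIn arrow notation the two actions met so far differ only in whether the\nsource is twisted as well: with the conventions above,\n\\[\n(h\\odot f)_{x}^{p}=(h_{k})_{x}^{x}(f_{k})_{x}^{p},\\qquad\n((Adh)f)_{y}^{x}=(h_{k})_{y}^{y}(f_{k})_{y}^{x}(h_{k}^{-1})_{x}^{x},\n\\]\nso $\\odot $ twists only the target of an arrow, whereas $Ad$ conjugates,\ntwisting source and target simultaneously.\n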
On the other hand, the action of $(\\mathcal{G}%\n_{1})_{x}^{x}$ on metrics at $x$ gives an action of $h$ on metrics on $M$\nwhich we will denote by $g\\rightarrow h\\boxdot g.$ Changing our notation $%\n(Adh)\\mathcal{S}_{1}(g)$ defined in Section 6 to $h\\boxdot \\mathcal{S}%\n_{1}(g)$ (using the same notation $\\boxdot $), we see that $h\\boxdot \n\\mathcal{S}_{1}(g)=\\mathcal{S}_{1}(h\\boxdot g).$ Thus $\\boxdot $ preserves\nboth metrics and also the $1$-arrows determined by them.\n\nConsider the naive inclusion\n\n\\begin{equation}\n\\text{differential geometry}\\subset \\text{topology}\n\\end{equation}\n\nIf we drop the word differential in (20), we may adopt the point of view\nthat the opposite inclusion holds now in (20). This is the point of view of\nA. Grothendieck, who views geometry as the study of \\textit{form}, which\ncontains topology as a special branch (see his Promenade \\#12, translated by\nRoy Lisker). This broad perspective is clearly a far-reaching generalization\nof the \\textit{Erlangen Programm} presented here.\n\nIn view of (20), we believe that any theorem in the framework whose main\ningredients we attempted to outline here, however deep and far-reaching, can\nbe formulated and proved also using principal bundles. To summarize, we may\nsay that differential geometry is the study of \\textit{smooth forms} and the\nconcepts of a \\textit{form} and continuous deformation of \\textit{forms}\ncome afterwards as higher levels of abstraction. 
We feel that it may be\nfruitful to start with differential geometry rather than starting at a\nhigher level and then specializing to it.\n\n\\section{Connection and curvature}\n\nRecall that a right principal $G$-bundle $P\\rightarrow M$ determines the\ngroupoid $\\frac{P\\times P}{G}\\rightarrow M\\times M$ where the action of $G$\non $P\\times P$ is given by $(x,y)g\\doteq (xg,yg).$ Let $\\mathcal{A}%\n(P)\\rightarrow M$ be the automorphism bundle obtained as the associated\nbundle of $P\\rightarrow M$ using the adjoint action of $G$ on itself as in\nSection 7, whose sections are gauge transformations. We obtain in this way\nthe groupoid extension\n\n\\begin{equation}\n1\\longrightarrow \\mathcal{A}(P)\\longrightarrow \\frac{P\\times P}{G}%\n\\longrightarrow M\\times M\\longrightarrow 1\n\\end{equation}\n\nwhere we again use the first and last arrows to indicate injectivity and\nsurjectivity without any algebraic meaning. On the infinitesimal level, (21)\ngives the Atiyah sequence of $P\\rightarrow M$\n\n\\begin{equation}\n0\\longrightarrow \\mathcal{LA}(P)\\longrightarrow \\frac{TP}{G}\\overset{\\pi }{%\n\\longrightarrow }TM\\longrightarrow 0\n\\end{equation}\n\n(see [19], [20] for the details of the Atiyah sequence) where $\\mathcal{LA}%\n(P)\\longrightarrow M$ is the Lie algebra bundle obtained as the associated\nbundle using the adjoint action of $G$ on its Lie algebra $\\mathcal{L}(G).$\nConnection forms $\\omega $ on $P\\rightarrow M$ are in 1-1 correspondence\nwith transversals in (22), i.e., vector bundle maps $\\omega :TM\\rightarrow \n\\frac{TP}{G}$ with $\\pi \\circ \\omega =id$ and curvature $\\kappa $ of $\\omega \n$ is defined by $\\kappa (X,Y)=\\omega \\lbrack X,Y]-[\\omega X,\\omega Y].$ The\nextension (22) splits iff (22) admits a flat transversal. 
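\nIn bracket terms, the formula for $\\kappa $ shows that a transversal $\\omega $\nis flat, i.e. $\\kappa =0$, precisely when\n\\[\n\\omega \\lbrack X,Y]=[\\omega X,\\omega Y]\n\\]\nfor all vector fields $X,Y$ on $M$, that is, when $\\omega $ preserves\nbrackets and thus splits (22) as an extension of algebroids rather than\nmerely of vector bundles.\n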
Thus the Atiyah\nsequence completely recovers the formalism of connection and curvature on $%\nP\\rightarrow M$ in the framework of algebroid extensions as long as we work\nover a fixed base manifold $M$.\n\nNow let $\\mathcal{S}_{\\infty }(M)$ be a homogeneous geometry with\ninfinitesimal geometry $\\frak{s}_{\\infty }(M).$ The groupoid $\\frac{\\mathcal{%\nS}_{k}(M)^{(p)}\\times \\mathcal{S}_{k}(M)^{(p)}}{\\mathcal{S}_{k}(M)_{p}^{p}}$\nin (21) determined by the principal bundle $\\mathcal{S}_{k}(M)^{(p)}%\n\\rightarrow M$ as defined in Section 7 can be identified with $\\mathcal{S}%\n_{k}(M)$ and the algebroid $\\frac{T\\mathcal{S}_{k}(M)^{(p)}}{\\mathcal{S}%\n_{k}(M)_{p}^{p}}$ in (22) can be identified with $\\frak{s}_{k}(M).$ We have $%\n\\mathcal{A}(\\mathcal{S}_{k}(M)^{(p)})=\\cup _{x\\in M}\\mathcal{S}%\n_{k}(M)_{x}^{x}$ as already indicated in Section 7 and $\\mathcal{LA}(%\n\\mathcal{S}_{k}(M)^{(p)})$ $\\doteq \\cup _{x\\in M}\\mathcal{L}(\\mathcal{S}%\n_{k}(M)_{x}^{x})$ where the bracket of sections is defined fiberwise. Thus\n(21) becomes\n\n\\begin{equation}\n1\\longrightarrow \\mathcal{AS}_{k}(M)\\longrightarrow \\mathcal{S}%\n_{k}(M)\\longrightarrow M\\times M\\longrightarrow 1\n\\end{equation}\n\nand the Atiyah sequence (22) is now\n\n\\begin{equation}\n0\\longrightarrow \\mathcal{LAS}_{k}(M)\\longrightarrow \\frak{s}%\n_{k}(M)\\longrightarrow TM\\longrightarrow 0\n\\end{equation}\n\nIt is easy to check the exactness of (24) in local coordinates using (2).\n\nOur purpose is now to indicate how the present framework captures\ninformation peculiar to jets by changing the base manifold, which the Atiyah\nsequence does not detect. 
To see this, let $m\\leq k+1$ and consider the Lie\ngroup extension at $x\\in M:$\n\n\\begin{equation}\n1\\longrightarrow \\mathcal{S}_{k,m}(M)_{x}^{x}\\longrightarrow \\mathcal{S}%\n_{k}(M)_{x}^{x}\\longrightarrow \\mathcal{S}_{m}(M)_{x}^{x}\\longrightarrow 1\n\\end{equation}\n\nwhere the kernel $\\mathcal{S}_{k,m}(M)_{x}^{x}$ is nilpotent if $m\\geq 1$\nand is abelian if $k=m+1\\geq 2.$ Consider the $\\mathcal{S}_{k,m}(M)_{p}^{p}$%\n-principal bundle $\\mathcal{S}_{k}(M)^{(p)}\\rightarrow \\mathcal{S}%\n_{m}(M)^{(p)}.$ If $\\mathcal{S}_{k,m}(M)_{p}^{p}$ is contractible (this is\nthe case in many examples for $m\\geq 1)$, this principal bundle is trivial\nand its Atiyah sequence admits flat transversals. Thus nothing is gained by\nconsidering higher order jets and all information is contained in the Atiyah\nsequence of $\\mathcal{S}_{1}(M)^{(p)}\\rightarrow M.$ On the other hand, we\nhave the following extension of $TLEIF^{\\prime }s:$%\n\\begin{equation}\n0\\longrightarrow \\mathcal{LS}_{k,m}(M)\\longrightarrow \\frak{s}%\n_{k}(M)\\longrightarrow \\frak{s}_{m}(M)\\longrightarrow 0\n\\end{equation}\n\nwhere $\\mathcal{LS}_{k,m}(M)\\doteq \\cup _{x\\in M}\\mathcal{L}(\\mathcal{S}%\n_{k,m}(M)_{x}^{x}).$ Using (2), it is easy to check that the existence of a\nflat transversal in (26) a priori forces the splitting of the Lie algebra\nextension\n\n\\begin{equation}\n0\\longrightarrow \\mathcal{L}(\\mathcal{S}_{k,m}(M)_{x}^{x})\\longrightarrow \n\\mathcal{L}(\\mathcal{S}_{k}(M)_{x}^{x})\\longrightarrow \\mathcal{L}(\\mathcal{S%\n}_{m}(M)_{x}^{x})\\longrightarrow 0\n\\end{equation}\n\nfor all $x\\in M$ where (27) is (25) in infinitesimal form. However (25) and\n(27) do not split in general. 
For instance, (27) does not split even in the\nuniversal situation $\\mathcal{S}_{\\infty }(M)=\\mathcal{G}_{\\infty }(M)$ when \n$m=2$ and $k=3.$ In fact, the dimensions of the extension groups $H^{2}(%\n\\mathcal{L}(\\mathcal{S}_{m}(M)_{x}^{x}),\\mathcal{S}_{m+1,m}(M)_{x}^{x})$ are\ncomputed in [28] for all $m$ when $\\dim M=1.$\n\nThus we see that the theory of principal bundles, which is essentially\ntopological, concentrates on the maximal compact subgroup of the structure\ngroup of the principal bundle as it is this group which produces nontrivial\ncharacteristic classes as invariants of equivalence classes of principal\nbundles modulo bundle maps. Consequently, this theory concentrates on the\ncontractibility of the kernel in (25) whereas it is the types of the\nextensions in (25), (27) which emerge as the new ingredient in the present\nframework.\n\nThe connections considered above are transversals and involve only $%\nTLEIF^{\\prime }s$ (algebroids). There is another notion of connection based\non the Maurer-Cartan form, which is incorporated by the nonlinear Spencer\nsequence (see Theorem 31 on page 224 in [25]), where the passage from $TLEFF$\n(groupoid) to its $TLEIF$ (algebroid) is used in a crucial way. The passage\nfrom extensions of $TLEFF^{\\prime }s$ (torsionfree connections in finite\nform) to extensions of their $TLEIF^{\\prime }s$ (torsionfree connections in\ninfinitesimal form) relates these two notions by means of a single diagram\nwhich we hope to discuss elsewhere. The approach to parabolic geometries\nadopted in [5] is a complicated mixture of these two notions whose\nintricacies, we believe, will be clearly depicted by this diagram.\n\nHowever we choose to view a connection, the main point here seems to be that\nit belongs to the group rather than to the space on which the group acts.\nSince there is an abundance of groups acting transitively on some given\nspace, it seems meaningless to speak of the curvature of some space unless\nwe specify the group. 
However, it turns out that the knowledge of the $k$-arrows of\nsome ideal group is sufficient to define a connection but this connection\nwill not be unique except in some special cases.\n\n\\section{Some remarks}\n\nIn this section (unfortunately somewhat long) we would like to make some\nremarks on the relations between the present framework and some other\nframeworks.\n\n$1)$ Let $\\mathcal{E}\\rightarrow M$ be a differentiable fibered space. It\nturns out that we have an exterior calculus on $J^{\\infty }(\\mathcal{E}%\n)\\rightarrow M.$ Decomposing exterior derivative and forms into their\nhorizontal and vertical components, we obtain a spectral sequence, called\nVinogradov $\\mathcal{C}$-spectral sequence, which is fundamental in the\nstudy of calculus of variations (see [32], [33] and the references therein).\nThe limit term of $\\mathcal{C}$-spectral sequence is $H_{deR}^{\\ast\n}(J^{\\infty }(\\mathcal{E})).$ In particular, if $\\mathcal{E}=T(M),$ we\nobtain $H_{deR}^{\\ast }(\\frak{g}_{\\infty }(M)).$ The $\\mathcal{C}$-spectral\nsequence can be defined also with coefficients ([21]). On the other hand, we\ndefined $H^{\\ast }(\\frak{g}_{\\infty }(M),J_{\\infty }(M))$ and $H^{\\ast }(%\n\\frak{g}_{\\infty }(M),\\frak{s}_{\\infty }(M),J_{\\infty }(M))$ in Sections 2,\n3. As we indicated in Section 2, we can consider representations of $\\frak{g}%\n_{\\infty }(M)$ other than $J_{\\infty }(M)$ (for instance, see $2)$ below).\nThese facts hint, we feel, at the existence of a very general Van Est type\ntheorem which relates these cohomology groups.\n\n$2)$ Recall that (5) and (7) define a representation of the algebroid $\\frak{%\ng}_{k}(M)$ on the vector bundle $J_{k}(M)\\rightarrow M$. 
Thus we can\nconsider the cohomology groups $H^{\\ast }(\\frak{g}_{k}(M),J_{k}(M))$ as\ndefined in [19], [20] (see also the references there for original sources)\nand $H^{\\ast }(\\frak{g}_{k}(M),J_{k}(M))$ coincides with the cohomology of\nthe bottom sequence of (17) by definition. Now $\\frak{g}_{k}(M)$ has other\n``intrinsic'' representations, for instance $\\frak{g}_{k-1}(M).$ Lemma 8.32\nand the formula on page 383 in [25] (which looks very similar to (4)) define\nthis representation. In particular, the cohomology groups $H^{\\ast }(\\frak{g}%\n_{k}(M),$ $\\frak{g}_{k-1}(M))$ are defined and are given by the bottom\nsequence of (17) for this case. Using this representation of $\\frak{g}%\n_{k}(M)$ on $\\frak{g}_{k-1}(M)$, deformations of $TLEIF^{\\prime }$s are\nstudied in [25] in detail using the Janet sequence (Chapter 7, Section 8 of\n[25]). Recently, some deformation cohomology groups were introduced in [8] in\nthe general framework of algebroids. However, if the algebroid is a $TLEIF,$\nit seems to us that these cohomology groups coincide with those in [25] (and\ntherefore also with the bottom sequence of (17)) and are not new (see also\n[20], pg. 309 for a similar claim). 
In view of the last paragraph of Section\n4, the fact that deformation cohomology arises as sheaf cohomology in\nKodaira-Spencer theory and as algebroid cohomology in [25], [8] is no\ncoincidence.\n\n$3)$ Let $\\mathcal{S}_{2}(M)$ be a truncated geometry on $M.$ We fix some $%\nx\\in M$ and consider the following diagram of Lie group extensions\n\n\\begin{equation}\n\\begin{array}{ccccccccc}\n0 & \\longrightarrow & \\mathcal{G}_{2,1}(M)_{x}^{x} & \\longrightarrow & \n\\mathcal{G}_{2}(M)_{x}^{x} & \\longrightarrow & \\mathcal{G}_{1}(M)_{x}^{x} & \n\\longrightarrow & 1 \\\\ \n& & \\uparrow & & \\uparrow & & \\uparrow & & \\\\ \n0 & \\longrightarrow & \\mathcal{S}_{2,1}(M)_{x}^{x} & \\longrightarrow & \n\\mathcal{S}_{2}(M)_{x}^{x} & \\longrightarrow & \\mathcal{S}_{1}(M)_{x}^{x} & \n\\longrightarrow & 1\n\\end{array}\n\\end{equation}\n\nwhere the vertical maps are inclusions. The top row of (28) splits and the\ncomponents of these splittings are naturally interpreted as the Christoffel\nsymbols of symmetric ``point connections''. Some $k_{x}^{x}\\in \\mathcal{G}%\n_{2,1}(M)_{x}^{x}$, which is a particular bundle map as defined in Section 6\nwhen $M=\\{x\\},$ transforms $\\mathcal{S}_{2}(M)_{x}^{x}$ by conjugation but\nacts as identity on $\\mathcal{S}_{2,1}(M)_{x}^{x}$ and $\\mathcal{S}%\n_{1}(M)_{x}^{x}.$ Using $\\mathcal{G}_{2,1}(M)_{x}^{x}$ as allowable\nisomorphisms, we defined in [13] the group of \\textit{restricted }extensions \n$H_{res}^{2}(\\mathcal{S}_{1}(M)_{x}^{x},\\mathcal{S}_{2,1}(M)_{x}^{x})$. This\ngroup vanishes iff the restriction of some splitting of the top row of (28)\nto the bottom row splits also the bottom row or equivalently, iff the bottom\nrow admits a symmetric point connection. The main point is that this group\nis sensitive to phenomena happening only inside the top row of (28) which is\nour universal envelope as in Section 2. 
Using the Lie algebra analog of\n(28), we defined also the group $H_{res}^{2}(\\mathcal{L}(\\mathcal{S}%\n_{1}(M)_{x}^{x}),\\mathcal{S}_{2,1}(M)_{x}^{x})$ (note that $\\mathcal{S}%\n_{2,1}(M)_{x}^{x}\\subset \\mathcal{G}_{2,1}(M)_{x}^{x}$ are vector groups)\nobtaining the homomorphism $H_{res}^{2}(\\mathcal{S}_{1}(M)_{x}^{x},\\mathcal{S%\n}_{2,1}(M)_{x}^{x})\\rightarrow H_{res}^{2}(\\mathcal{L}(\\mathcal{S}%\n_{1}(M)_{x}^{x}),\\mathcal{S}_{2,1}(M)_{x}^{x})$ ([13]). On the other hand,\nregarding the bottom row of (28) as an \\textit{arbitrary }Lie group\nextension as in [15] without any reference to our universal envelope, we can\ndefine $H^{2}(\\mathcal{S}_{1}(M)_{x}^{x},\\mathcal{S}_{2,1}(M)_{x}^{x})$\nand the homomorphism $H^{2}(\\mathcal{S}_{1}(M)_{x}^{x},\\mathcal{S}%\n_{2,1}(M)_{x}^{x})\\rightarrow H^{2}(\\mathcal{L}(\\mathcal{S}_{1}(M)_{x}^{x}),%\n\\mathcal{S}_{2,1}(M)_{x}^{x})$. Thus we obtain the following commutative\ndiagram\n\n\\begin{equation}\n\\begin{array}{ccc}\nH_{res}^{2}(\\mathcal{S}_{1}(M)_{x}^{x},\\mathcal{S}_{2,1}(M)_{x}^{x}) & \n\\longrightarrow & H_{res}^{2}(\\mathcal{L}(\\mathcal{S}_{1}(M)_{x}^{x}),%\n\\mathcal{S}_{2,1}(M)_{x}^{x}) \\\\ \n\\downarrow & & \\downarrow \\\\ \nH^{2}(\\mathcal{S}_{1}(M)_{x}^{x},\\mathcal{S}_{2,1}(M)_{x}^{x}) & \n\\longrightarrow & H^{2}(\\mathcal{L}(\\mathcal{S}_{1}(M)_{x}^{x}),\\mathcal{S}%\n_{2,1}(M)_{x}^{x})\n\\end{array}\n\\end{equation}\n\nwhere the vertical homomorphisms are induced by inclusion. In [13], we gave\nexamples where $H_{res}^{2}(\\mathcal{L}(\\mathcal{S}_{1}(M)_{x}^{x}),\\mathcal{%\nS}_{2,1}(M)_{x}^{x})$ is nontrivial whereas $H^{2}(\\mathcal{L}(\\mathcal{S}%\n_{1}(M)_{x}^{x}),\\mathcal{S}_{2,1}(M)_{x}^{x})$ is trivial. 
This fact shows that we may\nlose information when we pass from the top row to the bottom row in (29).\n\nNow the action of $\\mathcal{S}_{1}(M)_{x}^{x}$ on $\\mathcal{S}%\n_{2,1}(M)_{x}^{x}$ gives a representation of the $TLEFF$ $\\mathcal{S}_{1}(M)$\non the vector bundle $\\mathcal{S}_{2,1}(M)\\doteq \\cup _{x\\in M}\\mathcal{S}%\n_{2,1}(M)_{x}^{x}$ and thus we can define the cohomology groups $%\nH_{res}^{\\ast }(\\mathcal{S}_{1}(M),\\mathcal{S}_{2,1}(M))$ in such a way that\nthey will respect our universal envelope. In this way we arrive at the\nglobal analog of (29). We believe that we will lose information also in\nthis global case. To summarize, even though the constructions in this paper\ncan be formulated in the general framework of groupoids and algebroids as in\n[7], [8], [19], [20], we believe that this general framework will not be\nsensitive in general to certain phenomena peculiar to jets unless it takes\nthe universal homogeneous envelope into account. In particular, we would\nlike to express here our belief that Lie equations form the geometric core\nof groupoids, which are the ultimate generalizations of (pseudo)group actions\nin which we dispense with the action but retain the symmetry that the action\ninduces on the space (compare to the introduction of [36]).\n\n$4)$ Let $\\mathcal{X}(M)$ be the Lie algebra of smooth vector fields on $M.$\nRecalling that $j_{k}(\\mathcal{X}(M))\\subset \\frak{g}_{k}(M)$, (5) gives\na representation of $\\mathcal{X}(M)$ on $J_{k}(M)$. 
Denoting the cochains\ncomputing this cohomology by $\\overset{(k,r)}{\\wedge }_{GF},$ we obtain the\nchain map $\\overset{(k,\\ast )}{\\wedge }\\rightarrow $ $\\overset{(k,\\ast )}{%\n\\wedge }_{GF}$ which indicates that Gelfand-Fuks cohomology is involved in\nthe present framework and plays a central role.\n\n$5)$ This remark can be considered as the continuation of Section 7 and the\nlast paragraph of Section 4.\n\nLet $Q$ be the subgroup of $\\mathcal{G}_{1}(n+1)=GL(n+1,\\mathbb{R)}$\nconsisting of matrices of the form\n\n\\begin{equation}\n\\left[ \n\\begin{array}{cc}\nA & 0 \\\\ \n\\xi & \\lambda\n\\end{array}\n\\right]\n\\end{equation}\nwhere $A$ is an invertible $n\\times n$ matrix, $\\xi =(\\xi _{1},...,\\xi _{n})$\nand $\\lambda \\neq 0.$ We will denote (30) by $(A,\\xi ,\\lambda ).$ We have\nthe homomorphism $Q\/\\lambda I\\rightarrow \\mathcal{G}_{1}(n)$ defined by $%\n(A,\\xi ,1)\\rightarrow A,$ where $\\lambda I$ denotes the subgroup $\\{\\lambda\nI\\mid \\lambda \\in \\mathbb{R\\}},$ with the abelian kernel $K$ consisting of\nelements of the form $(I,\\xi ,1).$ We also have the injective homomorphism $%\nQ\/\\lambda I\\rightarrow \\mathcal{G}_{2}(n)$ defined by $(A_{j}^{i},\\xi\n_{j},1)\\rightarrow (A_{j}^{i},\\xi _{j}A_{k}^{i}+\\xi _{k}A_{j}^{i})$ which\ngives the diagram\n\n\\begin{equation}\n\\begin{array}{ccccccccc}\n0 & \\longrightarrow & \\mathcal{G}_{2,1}(n) & \\longrightarrow & \\mathcal{G}%\n_{2}(n) & \\longrightarrow & \\mathcal{G}_{1}(n) & \\longrightarrow & 1 \\\\ \n& & \\uparrow & & \\uparrow & & \\parallel & & \\\\ \n0 & \\longrightarrow & K & \\longrightarrow & Q\/\\lambda I & \\longrightarrow & \n\\mathcal{G}_{1}(n) & \\longrightarrow & 1\n\\end{array}\n\\end{equation}\n\nNow $(\\mathcal{G}_{1}(n+1),Q)$ is a Klein pair of order two with ghost $N=$ $%\n\\lambda I$. We also have the effective Klein pair $(\\mathcal{G}%\n_{1}(n+1)\/\\lambda I,Q\/\\lambda I)$ which is of order two. 
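\nFor later computations it is convenient to record the group law of $Q$;\nmultiplying the block matrices in (30) gives\n\\[\n(A,\\xi ,\\lambda )(B,\\eta ,\\mu )=(AB,\\xi B+\\lambda \\eta ,\\lambda \\mu ),\n\\]\nwhich shows in particular that the kernel $K$ is abelian since $(I,\\xi\n,1)(I,\\eta ,1)=(I,\\xi +\\eta ,1).$\n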
Note that the\nstandard action of $\\mathcal{G}_{1}(n+1)$ on $\\mathbb{R}^{n+1}\\backslash 0$\ninduces a transitive action of $\\mathcal{G}_{1}(n+1)$ on the real\nprojective space $\\mathbb{R}P(n)$ and $Q$ is the stabilizer of the column\nvector $p=(0,0,\\ldots,1)^{T}$ so that both Klein pairs $(\\mathcal{G}%\n_{1}(n+1),Q)$ and $(\\mathcal{G}_{1}(n+1)\/\\lambda I,Q\/\\lambda I)$ define the\nsame base $\\mathbb{R}P(n).$ Clearly, $\\mathcal{G}_{1}(n+1)$ and $\\mathcal{G}%\n_{1}(n+1)\/\\lambda I$ induce the same $2$-arrows on $\\mathbb{R}P(n).$\n\nAt this stage, we have two relevant principal bundles.\n\n$i)$ The principal bundle $\\mathcal{G}_{1}(n+1)^{(p)}\\rightarrow \\mathbb{R}%\nP(n)$ which consists of all $2$-arrows emanating from $p$ and has the\nstructure group $Q\/\\lambda I.$ This is the same principal bundle as $(%\n\\mathcal{G}_{1}(n+1)\/\\lambda I)^{(p)}\\rightarrow \\mathbb{R}P(n)$. In\nparticular, (18) reduces to (31) in this case (the reader may refer to\nExample 4.1 on pg. 
132 of [18] and also to the diagram on pg. 142).\n\n$ii)$ The principal bundle $\\mathcal{G}_{1}(n+1)\\rightarrow \\mathbb{R}P(n)$\nwith structure group $Q.$ Note that we have the central extension\n\n\\begin{equation}\n1\\longrightarrow \\lambda I\\longrightarrow Q\\longrightarrow Q\/\\lambda\nI\\longrightarrow 1\n\\end{equation}\n\nwhich splits: some $q=(A,\\xi ,\\lambda )\\in Q$ factors as $q=ab$ where $%\na=(\\lambda ^{-1}A,\\lambda ^{-1}\\xi ,1)\\in \\mathcal{G}_{2}(n)$ and\n$b=\\lambda I.$ Note that $ii)$ is obtained from \n$i)$ by lifting the structure group $Q\/\\lambda I$ to $Q$ in (31).\n\nNow we have a representation of $Q$ on $\\mathbb{R}$ defined by the\nhomomorphism $(A,\\xi ,\\lambda )\\rightarrow \\lambda ^{-N}$ for some integer $%\nN\\geq 0.$ Replacing $\\mathbb{R}$ by $\\mathbb{C}$ and working with complex\ngroups and holomorphic actions, it is known that the holomorphic sections of\nthe associated line bundle of $\\mathcal{G}_{1}(n+1,\\mathbb{C})\\rightarrow \n\\mathbb{C}P(n)$ realize all irreducible representations of the unitary group \n$U(n)$ as $N$ varies (see [16], pg. 138-152 and [17]). As a very crucial\nfact, we can repeat this construction by replacing the Klein pair $(\\mathcal{%\nG}_{1}(n+1,\\mathbb{C}),Q)$ of order two by the effective Klein pair $%\n(U(n),U(n-1)\\times U(1))$ of order one and this latter construction recovers\nthe same line bundle (see [16] for details). 
The following question\ntherefore arises naturally: Let $(G,H)$ be a Klein pair with ghost $N.$ Let $%\n\\rho :H\\rightarrow GL(V)$ be a representation and $E\\rightarrow M=G\/H$ be\nthe associated homogeneous vector bundle of $G\\rightarrow G\/H.$ Can we\nalways find some \\textit{effective} Klein pair $(\\overline{G},\\overline{H})$\n(not necessarily of the same order) with $\\overline{G}\/\\overline{H}=M,$ a\nrepresentation $\\overline{\\rho }:\\overline{H}\\rightarrow GL(V)$ such that $%\nE\\rightarrow M$ is associated with $(\\overline{G})^{(p)}\\rightarrow M$, or\nshortly\n\n$\\mathbf{Q2:}$ Can we always avoid ghosts in Klein geometry?\n\nReplacing $\\lambda I,$ $Q,$ $Q\/\\lambda I$ in (31) respectively by $U(1),$ $%\nSpin^{c}(4),$ $SO(4)$ and recalling the construction of the $Spin^{c}$-bundle\non a 4-manifold ([22]), we see that $\\mathbf{Q2}$ is quite relevant as it asks\nessentially about the scope and limitations of the \\textit{Erlangen Programm}.\n\n$6)$ The assumption of transitivity, i.e., the surjectivity of the right\narrows in (13), (14), is imposed upon us by the \\textit{Erlangen Programm}.\nHowever, many of the constructions in this paper can be carried out without\nthe assumption of transitivity. For instance, foliations give rise to\nintransitive Lie equations but they are studied in the literature mostly\nfrom the point of view of general groupoids and algebroids (see the\nreferences in [7], [8]).\n\nOur main object of study in this paper has been a differentiable manifold $%\nM.$ The sole reason for this is that this author was obsessed years\nago by the question ``What are Christoffel symbols?'' and he could not\nlearn algebraic geometry from books and he did not have the chance to learn\nit from experts (this last remark applies also to differential geometry) as\nhe has always been at the wrong place at the right time. 
We feel (and\nsometimes almost see, for instance [27], [3]) that the present framework has\nalso an algebraic counterpart valid for algebraic varieties.\n\n\\bigskip\n\n\\bigskip\n\n\\bigskip\n\n\\bigskip\n\n\\bigskip\n\n\\textbf{References}\n\n\\bigskip\n\n[1] E.Abado\\u{g}lu, preprint\n\n[2] A.Banyaga, The structure of Classical Diffeomorphism Groups, Kluwer\nAcademic Publishers, Volume 400, 1997\n\n[3] A.Beilinson, V.Ginzburg: Infinitesimal structure of moduli spaces of\nG-bundles, Internat. Math. Res. Notices, no 4, 93-106, 1993\n\n[4] R.Bott: Homogeneous vector bundles, Ann. of Math., Vol. 66, No. 2,\n203-248, 1957\n\n[5] A.Cap, J.Slovak, V.Soucek: Bernstein-Gelfand-Gelfand sequences, Ann. of\nMath. 154, 97-113, 2001\n\n[6] C.Chevalley, S.Eilenberg: Cohomology theory of Lie groups and Lie\nalgebras. Trans. Amer. Math. Soc. 63, (1948). 85--124\n\n[7] M.Crainic: Differentiable and algebroid cohomology, Van Est isomorphism,\nand characteristic classes, Comment. Math. Helv. 78, 2003, 681-721\n\n[8] M.Crainic, I.Moerdijk: Deformations of Lie brackets: cohomological\naspects, arXiv: 0403434\n\n[9] W.T.Van Est: Group cohomology and Lie Algebra cohomology in Lie groups,\nIndagationes Math., 15, 484-492, 1953\n\n[10] D.B.Fuks: Cohomology of infinite-dimensional Lie algebras. Translated\nfrom the Russian by A.B. Sosinski, Contemporary Soviet Mathematics,\nConsultants Bureau, New York, 1986\n\n[11] H.Goldschmidt, D.C.Spencer: On the nonlinear cohomology of Lie\nequations, I, II, III, IV, Acta Math. 
136, 103-170, 171-239, 1976, J.\nDifferential Geometry, 13, 409-453, 455-526, 1979\n\n[12] W.Greub, S.Halperin, R.Vanstone: Connections, Curvature and Cohomology,\nVol.III, Academic Press, New York San Francisco London, 1976\n\n[13] B.G\\\"{u}rel, E.Orta\\c{c}gil, F.\\\"{O}zt\\\"{u}rk: Group extensions in\nsecond order jet group, preprint\n\n[14] T.Hawkins: Emergence of the Theory of Lie Groups, An Essay in the\nHistory of Mathematics 1869-1926, Sources and Studies in the History of\nMathematics and Physical Sciences, 2000 Springer-Verlag New York, Inc.\n\n[15] G.Hochschild: Group extensions of Lie groups. Ann. of Math. (2) 54,\n(1951). 96--109\n\n[16] A.W.Knapp: Lie Groups, Lie Algebras and Cohomology, Mathematical Notes\n34, Princeton University Press, NJ, 1988\n\n[17] A.W.Knapp: Representation Theory of Semisimple Groups: An Overview\nbased on Examples, Mathematical Notes, Princeton University Press,\nPrinceton, NJ, 1986\n\n[18] S.Kobayashi: Transformation Groups in Differential Geometry, Ergebnisse\nder Mathematik und ihrer Grenzgebiete, Band 70, Springer-Verlag, 1972\n\n[19] K.Mackenzie: Lie Groupoids and Lie Algebroids in Differential Geometry,\nLondon Mathematical Society Lecture Note Series, 124, Cambridge University\nPress, Cambridge, 1987\n\n[20] K.Mackenzie: General Theory of Lie Groupoids and Lie Algebroids, London\nMathematical Society Lecture Note Series, 213, Cambridge University Press,\nCambridge, 2005\n\n[21] M.Marvan: On the $\\mathcal{C}$-spectral sequence with ``general''\ncoefficients. Differential geometry and its applications (Brno, 1989),\n361--371, World Sci. Publishing, Teaneck, NJ, 1990\n\n[22] J.W.Morgan: The Seiberg-Witten Equations and Applications to the\nTopology of Smooth Four-Manifolds, Mathematical Notes 44, Princeton\nUniversity Press, Princeton, NJ, 1996\n\n[23] H.Omori: Infinite dimensional Lie transformation groups. Lecture Notes\nin Mathematics, Vol. 427. 
Springer-Verlag, Berlin-New York, 1974\n\n[24] R.S.Palais: Extending diffeomorphisms, Proc. Amer. Math. Soc. 11, 1960,\n274--277\n\n[25] J.F.Pommaret: Systems of Partial Differential Equations and Lie\nPseudogroups, Gordon and Breach Science Publishers, New York, London, Paris,\n1978\n\n[26] J.F.Pommaret: Partial Differential Equations and Group Theory. New\nperspectives for applications. Mathematics and its Applications, 293. Kluwer\nAcademic Publishers Group, Dordrecht, 1994\n\n[27] Z.Ran: Derivatives of Moduli, Internat. Math. Res. Notices, no 4,\n63-74, 1992\n\n[28] B.K.Reinhart: Some remarks on the structure of the Lie algebra of\nformal vector fields. Transversal structure of foliations (Toulouse, 1982).\nAst\\'{e}risque No. 116 (1984), 190--194\n\n[29] R.W.Sharpe: Differential Geometry, Cartan's Generalization of Klein's\nErlangen Program, Graduate Texts in Mathematics, Springer-Verlag, New York\nBerlin Heidelberg, 1997\n\n[30] W.Thurston: Three-dimensional Geometry and Topology Vol 1, Edited by\nSilvio Levy, Princeton Mathematical Series, 35, Princeton University Press,\nNJ, 1997\n\n[31] G.Vezzosi, A.M.Vinogradov: On higher order analogues of de Rham\ncohomology, Differential Geom. Appl. 19 (2003), no. 1, 29--59\n\n[32] A.M.Vinogradov: Cohomological Analysis of Partial Differential\nEquations and Secondary Calculus, Translations of Mathematical Monographs,\nVolume 204, AMS, 2000\n\n[33] A.M.Vinogradov: Scalar differential invariants, diffieties and\ncharacteristic classes, Mechanics, Analysis and Geometry, 200 years after\nLagrange, M.Francaviglia (editor), Elsevier Science Publishers B.V., 1991,\n379-414\n\n[34] H.C.Wang: Closed manifolds with homogeneous structures, Amer.J.Math.,\n76, 1954, 1-32\n\n[35] G.Weingart: Holonomic and semi-holonomic geometries, S\\'{e}minaires \\&\nCongr\\`{e}s, 4, 2000, 307-328\n\n[36] A.Weinstein: Groupoids: unifying internal and external\nsymmetry. A tour through some examples. 
Groupoids in analysis, geometry, and\nphysics (Boulder, CO, 1999), 1--19, Contemp. Math., 282, Amer. Math. Soc.,\nProvidence, RI, 2001\n\n\\bigskip\n\n\\bigskip\n\nErc\\\"{u}ment Orta\\c{c}gil, Bo\\u{g}azi\\c{c}i University, Bebek, 34342,\nIstanbul, Turkey\n\ne-mail: ortacgil@boun.edu.tr\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWith the end of Dennard Scaling, High Energy and Nuclear Physics (HENP) libraries and applications are now required to be multi-thread safe.\nThis is to ensure that performance keeps scaling with new CPU architectures.\nThe new \\texttt{C++} standards introduced new constructs and library components: \\texttt{std::thread}, \\texttt{std::mutex}, \\texttt{std::lock}, \\ldots\nThese components are, however, quite low-level, hard to use and compose, and easy to misuse.\nMoreover, \\texttt{C++} is still plagued with scalability issues.\nDevelopment speed is hindered by the interplay between the compilation model (based on \\texttt{\\#include}) and \\texttt{C++} templates.\n\\texttt{C++} is a very large language, hard and subtle to teach and understand: this has an impact on code maintenance and the ability to bring newcomers up to speed on any given project.\nOn the user side, \\texttt{C++} is showing deficiencies as well: installing a project's many dependencies can be quite difficult.\n\\texttt{Python} users will be surprised to learn there is still no \\texttt{pip}-like mechanism to install, automatically and recursively, the dependencies of a given project.\nFinally, most HEP software stacks rely on applications dynamically linked to hundreds of shared libraries: packaging and deployment can be time-consuming tasks.\n\nFixing the whole \\texttt{C++} ecosystem sounds like a rather daunting task.\nIt might be easier instead to start from a blank page, with a new language that addresses all the current deficiencies of \\texttt{C++} and improves the day-to-day life of a typical software 
programmer in a multi-core environment.\n\n\\texttt{Go}~\\cite{ref-golang} was created in 2009 to address these issues.\nWe will first briefly describe the \\texttt{Go} programming language, its concurrency primitives and the features of its tooling that make \\texttt{Go} a great fit for HENP applications.\nWe will then introduce \\texttt{Go-HEP}, a set of libraries and applications that aims to provide physicists with the basic tools to perform HENP analyses and integrate them into already existing \\texttt{C++\/Python} analysis pipelines.\n\n\\section{Introduction to \\texttt{Go}}\n\n\\texttt{Go} is an open source language, released under the BSD-3 license in 2009.\nIt is a compiled language with a garbage collector and builtin support for reflection.\nThe language is reminiscent of \\texttt{C\/C++}: it shows a syntax similar to its older peers but adds first-class functions, closures and object-oriented programming via the concept of interfaces.\n\\texttt{Go} is already available, via the \\texttt{gc} toolchain, on all major platforms (Linux, Windows, macOS, Android, iOS, \\ldots) and for many architectures (\\texttt{amd64}, \\texttt{arm64}, \\texttt{i386}, \\texttt{s390x}, \\texttt{mips64}, \\ldots).\nThere is also a \\texttt{Go} frontend to \\texttt{GCC}, aptly named \\texttt{gccgo}, that can target all the platforms and architectures that the \\texttt{GCC} toolchain supports.\nGriesemer, Pike and Thompson created \\texttt{Go} to replace the multi-threaded \\texttt{C++} web servers that were hard to develop and slow to compile, even when using Google's massive infrastructure.\nAs such, \\texttt{Go} exposes two builtin concurrency primitives: \\emph{goroutines} -- very lightweight green threads -- and \\emph{channels} -- typed conduits that connect goroutines together.\n\n\n\nLaunching goroutines is done by prepending a function (or method) call with the keyword \\texttt{go}.\nGoroutines can be described as green threads: very lightweight execution 
contexts that are multiplexed on native threads.\nEach goroutine starts with a small stack (around $4KB$) that can grow and shrink as needed.\nThis makes it possible to write applications with thousands of goroutines without the need to buy high-end servers.\nWriting real-world concurrent programs requires synchronization and communication: the ability to exchange data between goroutines.\nThis is achieved \\emph{via} channels.\nNot only do channels allow goroutines to exchange data in a type-safe manner, they are also a synchronization point: a goroutine trying to send a token of data through a channel will block until there is another goroutine on the other end of the channel trying to extract data from the channel, and \\emph{vice versa}.\nThe last piece that makes building concurrent programs in \\texttt{Go} efficient is the keyword \\texttt{select}.\nThis keyword is like a \\texttt{switch} statement for controlling the data flow between multiple channels and thus between multiple goroutines.\nThe builtin concurrency tools of \\texttt{Go} are easier to reason about and more composable than mutexes and locks.\n\nMoreover, \\texttt{Go} comes with a package system and a builtin tool to build \\texttt{Go} code.\nThere are no header files like in \\texttt{C++}, as headers require recursively processing all dependent headers and thus slow the build.\nOnce compiled, \\texttt{Go} packages are completely self-contained and do not require a client of package \\texttt{p1} -- which itself depends on \\texttt{p2} and \\texttt{p3} -- to know anything about these packages.\nThe client only needs \\texttt{p1}.\nThis greatly improves the scalability of the build process as well as its speed.\nThanks to the way third-party packages are imported in \\texttt{Go}, \\emph{e.g.} \\mintinline{go}{import \"github.com\/pkg\/errors\"}, a build tool only needs to parse and understand \\texttt{Go} code to discover (recursively) the dependencies of a given package.\nCoupled with the fact that package import strings contain 
the URL to the repository (GitHub, BitBucket, \\ldots) holding the actual code, this allows a simple command -- \\texttt{go get} -- to fetch code from the internet, (recursively) discover and fetch its dependencies and, finally, build the whole artefact.\nThe instructions to install code are valid on all platforms that are supported by a given \\texttt{Go} toolchain: one just needs to point \\texttt{go get} at a repository.\nFinally, \\texttt{Go} code being very quick to compile makes static compilation manageable and enables simple deployment scenarios that boil down to \\texttt{scp}-ing the resulting binary.\n\\texttt{Go} exposes a productive development environment that is concurrency-friendly.\nIt is being used by many companies~\\footnote{A non-exhaustive list is maintained here: \\texttt{https:\/\/github.com\/golang\/go\/wiki\/GoUsers}.} besides Google, and in a variety of scenarios: from rocket telemetry to cloud systems to container orchestration.\nBut for \\texttt{Go} to be useful in HENP, physicists need libraries and tools to carry out analyses.\nMoreover, these tools need to be interoperable with existing analysis pipelines.\n\n\\section{\\texttt{Go-HEP}}\n\n\\texttt{Go-HEP} is a set of libraries and applications released under the BSD-3 license.\nThey allow High Energy and Nuclear Physicists to write efficient analysis code in the \\texttt{Go} programming language.\nThe \\texttt{go-hep.org\/x\/hep} organization provides a set of pure-\\texttt{Go} packages and building blocks to:\n\\begin{itemize}\n\t\\item write analyses in \\texttt{Go},\n\t\\item write data acquisition and monitoring systems in \\texttt{Go},\n\t\\item write control frameworks in \\texttt{Go}.\n\\end{itemize}\n\nAs \\texttt{Go-HEP} is written in pure \\texttt{Go}, the whole software suite and its dependencies are installable on all \\texttt{Go}-supported platforms (Linux, macOS, Windows, RPi3, \\ldots) with:\n\\begin{verbatim}\n $> go get go-hep.org\/x\/hep\/...\n\\end{verbatim}\nThe ellipsis 
(\\ldots) at the end instructs the \\texttt{go} tool to compile and install all the libraries and applications that are part of the \\texttt{go-hep.org\/x\/hep} repository.\n\nThe libraries provided by \\texttt{Go-HEP} can be broadly organized around two categories:\n\\begin{itemize}\n\t\\item libraries that provide physics and statistical tools (Lorentz vectors, histograms and n-tuples, fits, jet clustering, fast detector simulation, plots, \\ldots)\n\t\\item libraries that provide low-level interoperability with \\texttt{C++} libraries, to allow \\texttt{Go-HEP} users to integrate with existing analysis pipelines.\n\\end{itemize}\n\nIndeed, analyses in HENP are mainly written in \\texttt{C++} and \\texttt{Python}.\nEven though \\texttt{Go} has a native foreign function interface called \\texttt{cgo} that allows \\texttt{Go} code to use \\texttt{C} code and vice versa, the libraries under \\texttt{go-hep.org\/x\/hep} do not call any \\texttt{C++} library (via a \\texttt{C} shim library).\nThis decision makes it possible to retain the quick edit-compile-run development cycle of \\texttt{Go} and the easy deployment and cross-compilation of \\texttt{Go} applications.\nInstead, the strategy for interoperating with \\texttt{C++} is to integrate with, \\emph{e.g.}, ROOT~\\cite{ref-ROOT} at the data file level.\n\\texttt{Go-HEP} provides read\/write access to LCIO, LHEF, HepMC, SLHA and YODA files.\n\\texttt{Go-HEP} provides -- at the moment -- only read access to ROOT files, via its \\texttt{go-hep.org\/x\/hep\/rootio} package.\n\nIn the following, we will describe two components of \\texttt{Go-HEP}, \\texttt{fads} and \\texttt{hep\/rootio}, that enable physics analyses.\n\n\\section{\\texttt{fads}}\n\n\\subsection{\\texttt{fads-app}}\n\n\\texttt{fads} is a ``FAst Detector Simulation'' toolkit.\nThis library is built on top of \\texttt{Go-HEP}'s control framework, \\texttt{fwk}.\nThe control framework exposes a traditional API, with a \\texttt{Task} type that can be started, process 
some event data and then stopped.\nEach event is processed by a goroutine.\nThe framework runs these tasks in their own goroutines and lets the \\texttt{Go} runtime deal with work stealing.\nData between tasks are exchanged via an event store service which itself utilizes \\texttt{channel}s to ensure there are no data races.\nBesides an event store service, \\texttt{fwk} also provides services for message logging, histogramming and n-tupling. \nN-tuples and histograms (1D and 2D) are provided by the \\texttt{hep\/hbook} and \\texttt{hep\/hbook\/ntup} packages and are persistified using \\texttt{hep\/rio}, a binary file format that heavily draws inspiration from \\texttt{SIO}, the binary file format the LCIO community uses.\n\nData dependencies between tasks are described at the job configuration level: each task declares its inputs (the type of each input and a name that identifies it) and its outputs.\n\\texttt{fwk} ensures there are no cycles between the tasks and connects the goroutines together via channels.\n\\texttt{fwk} enables task-level parallelism and event-level parallelism via the concurrency building blocks of \\texttt{Go}.\nFor sub-task parallelism, users are required -- by construction -- to use the same building block, so everything is consistent and the \\texttt{Go} runtime has the complete picture.\n\n\\texttt{fads} is itself a transliteration of Delphes~\\cite{ref-delphes} (v3.0.12), but with \\texttt{fwk} underneath instead of \\texttt{ROOT}.\nTo assess the performance of \\texttt{Go} and \\texttt{fads} in a realistic setting, the whole ATLAS data card provided with Delphes has been implemented and packaged with \\texttt{fads} under \\texttt{hep\/fads\/cmd\/fads-app}.\nThis application is composed of:\n\\begin{itemize}\n\\item a HepMC file reader task,\n\\item a particle propagator, a calorimeter simulator,\n\\item energy rescalers, momentum smearers,\n\\item electron, photon and muon isolation tasks,\n\\item b-tagging and tau-tagging tasks, 
and a jet-finder task.\n\\end{itemize}\n\nThe jet-finder task is based on a naive re-implementation of the \\texttt{C++} FastJet~\\cite{ref-fastjet} library: only the $N^3$ ``dumb'' clustering has been implemented so far.\nA small part of the directed acyclic graph of the resulting \\texttt{fads-app} application can be found in figure~\\ref{fig-dflow}.\n\n\\begin{figure}[h]\n\t\\begin{center}\n \\includegraphics[width=0.75\\textwidth]{figs\/fads-dflow-detail.png}\n\t\\end{center}\n\t\\caption{\\label{fig-dflow}Part of the directed acyclic graph of data dependencies between tasks composing the \\texttt{fads-app}, a \\texttt{Go} transliteration of the Delphes ATLAS data card example. Rectangles are tasks, ellipses are data collections.}\n\\end{figure}\n\nDelphes was compiled with \\texttt{gcc-4.8} and the $N^3$ clustering strategy hard-coded; \\texttt{fads} was compiled with \\texttt{Go-1.9}.\nTimings were obtained on a Linux Xeon CPU E5-4620@2.20GHz server with 64 cores and 128Gb RAM, using an input file of 10000 HepMC events.\n\\begin{figure}[h]\n\n \\includegraphics[width=0.5\\textwidth]{figs\/linux-64-cores-rss.png}\n \\includegraphics[width=0.5\\textwidth]{figs\/linux-64-cores-hz.png}\n\n\t\\caption{\\label{fig-fads-perfs}Memory usage (left) and event processing rate (right) for \\texttt{fads} (solid red curve) and \\texttt{Delphes} (green dashed line).}\n\\end{figure}\n\nThe results for various numbers of threads are shown in figure~\\ref{fig-fads-perfs}.\nAs Delphes is not thread-safe, we only show the data for one Delphes process and then draw a flat line for visual aid.\nThe graph on the left shows a smaller memory footprint for \\texttt{fads}, which only matches that of Delphes when 30 threads are used.\nThe graph on the right shows that \\texttt{fads} achieves a better event processing rate even when using only one thread, or when using only one thread but with the sequential event loop instead of the concurrent event loop (data point for 
$n_{thread}=0$.)\n\n\\texttt{fads} achieves better performance than Delphes-3.0.12 on this data set.\nThe output simulated data for \\texttt{fads} was matched bit-by-bit with that of Delphes up to the calorimetry stage~\\footnote{We could not match data further down the sequence because of Delphes' use of a PRNG to seed other PRNGs in the calorimeter. Control histograms agreed at the statistical level.}.\n\\texttt{fads} also achieves this performance without the need to merge output files.\n\n\n\\subsection{\\texttt{fads-rivet-mc-generic}}\n\n\\texttt{fads} provides another application, \\texttt{fads-rivet-mc-generic}, a reimplementation of the \\texttt{MC\\_GENERIC} analysis from the Rivet~\\cite{ref-rivet} toolkit.\n\n\\begin{table}[h]\n\\begin{center}\n \\begin{tabular}{ l | c | c | c }\n\t & \\texttt{MaxRSS} ($Mb$) & Real ($s$) & CPU ($s$) \\\\\n \\hline\n\t \\texttt{Rivet} & 27 & 13.3 & 13.3 \\\\\n\t \\texttt{fads} & 23 & 5.7 & 5.7 \\\\\n \\hline\n \\end{tabular}\n\t\\caption{\\label{fig-rivet-perf-tab}Runtime performance of \\texttt{Rivet} and \\texttt{fads} reading a file of $Z$ events.}\n\\end{center}\n\\end{table}\n\n\\texttt{fads} shows better performance than the \\texttt{C++} application.\nAs Rivet does not use ROOT for this application, the memory usage of Rivet is closer to that of \\texttt{fads}.\nHowever, the event processing rate of \\texttt{fads} with only one thread is twice that of Rivet.\nThis is because the job steering and the event loop of Rivet are written in Python.\n\n\\section{\\texttt{hep\/rootio}}\nThe \\texttt{hep\/rootio} package is a pure-\\texttt{Go} package that:\n\\begin{itemize}\n\\item decodes and understands the structure of \\texttt{TFile}s, \\texttt{TKey}s, \\texttt{TDirectory} and \\texttt{TStreamerInfo}s,\n\\item decodes and deserializes \\texttt{TH1x}, \\texttt{TH2x}, \\texttt{TLeaf}, \\texttt{TBranch} and \\texttt{TTree}s. 
\n\\end{itemize}\nAt the moment, \\texttt{hep\/rootio} only concerns itself with reading ROOT files, although writing ROOT files is on the roadmap.\nNonetheless, \\texttt{hep\/rootio} can already be useful and provides the following commands:\n\\begin{itemize}\n\t\\item \\texttt{cmd\/root-ls} lists the content of ROOT files,\n\t\\item \\texttt{cmd\/root-print} extracts histograms from ROOT files and saves them as PDF or PNG,\n\t\\item \\texttt{cmd\/root-dump} prints the contents of \\texttt{TTree}s, event by event,\n\t\\item \\texttt{cmd\/root-diff} prints the differences between the contents of two \\texttt{TTree}s,\n\t\\item \\texttt{cmd\/root-cnv-npy} converts simple \\texttt{TTree}s to \\texttt{NumPy} array data files,\n\t\\item \\texttt{cmd\/root-srv} allows users to interactively plot histograms from a file and branches from a tree. \\texttt{root-srv} being pure \\texttt{Go}, it can be hosted on Google AppEngine.\n\\end{itemize}\n\n\\texttt{rootio} can read flat \\texttt{TTree}s with \\texttt{C++} builtins, and with static and dynamic arrays of \\texttt{C++} builtins.\n\\texttt{rootio} can also read \\texttt{TTree}s with user-defined classes containing \\texttt{std::vector{\\textless}T{\\textgreater}}~\\footnote{where \\texttt{T} is a \\texttt{C++} builtin or a \\texttt{std::string} or a \\texttt{TString}}, another user-defined class, \\texttt{std::string} or \\texttt{TString}, arrays of \\texttt{C++} builtins, and \\texttt{C++} builtins.\n\nTo assess the performance of \\texttt{hep\/rootio} with regard to the original \\texttt{C++} implementation, we created two input files of $10^6$ entries and 100 branches of \\texttt{float64}.\nOne file was compressed with the default settings of ROOT-6.10 ($686 Mb$) and the other had no compression ($764 Mb$).\n\n\\begin{table}[h]\n\\begin{center}\n \\begin{tabular}{ l | c | c | c | c }\n\t \\texttt{fCompression=0} & \\texttt{VMem} ($Mb$) & \\texttt{MaxRSS} ($Mb$) & Real ($s$) & CPU ($s$) \\\\\n \\hline\n\t \\texttt{ROOT} & 517 & 258 & 6.7 & 6.6 \\\\\n\t 
\\texttt{hep\/rootio} & 43 & 42 & 12.9 & 12.9 \\\\\n \\hline\n \\end{tabular}\n\\end{center}\n\n\\begin{center}\n \\begin{tabular}{ l | c | c | c | c }\n\t \\texttt{fCompression=1} & \\texttt{VMem} ($Mb$) & \\texttt{MaxRSS} ($Mb$) & Real ($s$) & CPU ($s$) \\\\\n \\hline\n\t \\texttt{ROOT} & 529 & 292 & 12.6 & 12.0 \\\\\n\t \\texttt{hep\/rootio} & 83 & 82 & 35.8 & 35.8 \\\\\n \\hline\n \\end{tabular}\n\t\\caption{\\label{fig-rootio-perf-tab}Runtime performance of \\texttt{C++\/ROOT} and \\texttt{hep\/rootio} reading a file of $10^6$ events, using no compression (top) and default compression (bottom).}\n\\end{center}\n\\end{table}\n\nTable~\\ref{fig-rootio-perf-tab} shows the results obtained running the two programs with the compressed file and the uncompressed file as inputs.\nWhile the memory footprint of the \\texttt{Go} program is almost an order of magnitude lower than that of its \\texttt{C++} counterpart, it is twice as slow in the non-compressed case and almost three times as slow in the compressed case.\nThe degraded performance in the compressed case has been tracked back to the decompression package from the standard library, which could probably be further optimized.\nOn the other hand, the degraded performance in the non-compressed case comes from the still young implementation of \\texttt{hep\/rootio}, which could also be further optimized, \\emph{e.g.} by preemptively loading multiple consecutive entries.\n\n\\section{Conclusions}\n\n\\texttt{Go} improves on \\texttt{C++\/Python} and addresses their deficiencies with regard to code distribution, code installation, compilation, development and runtime speeds.\n\\texttt{Go} also provides builtin facilities to tackle concurrency efficiently and easily.\n\\texttt{Go-HEP} provides some building blocks that are already competitive with battle-tested \\texttt{C++} programs, in terms of CPU time, memory usage and core utilization.\nFurther improvements are still necessary in the ROOT I\/O area, both in terms of performance and 
features, so that \\texttt{Go-HEP} can be part of physics analysis pipelines.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nVisual recognition systems (\\emph{e.g.}, image classification) play key roles in a wide range of applications such as autonomous driving, robot autonomy, smart medical diagnosis and video surveillance. Recently, remarkable progress has been made through deep neural networks (DNNs) driven by big data and powerful GPUs under the supervised learning framework~\\cite{LeCunCNN,AlexNet}. DNNs significantly increase prediction accuracy in visual recognition tasks and even outperform humans in some image classification tasks~\\cite{ResidualNet,InceptionNet}.\nDespite the dramatic improvement, it has been shown that DNNs trained for visual recognition tasks can be easily fooled by so-called \\textbf{adversarial attacks} which utilize visually imperceptible, carefully-crafted perturbations to cause networks to misclassify inputs in arbitrarily chosen ways in the closed set of labels used in training~\\cite{FoolDeepNet,LBFGS, AdversarialExampl, CWAttack}, even with a one-pixel attack~\\cite{OnePixelAttack}. Assuming full access to DNNs pretrained with clean images, white-box targeted attacks are powerful ways of investigating the brittleness of DNNs and their sensitivity to non-robust yet well-generalizing features in the data, and of exploiting adversarial examples as useful features~\\cite{AttackNotBugs}. \n\n\\begin{figure} [t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{top5-1.pdf}\n \\caption{\\small Examples of ordered Top-$5$ attacks for a pretrained ResNet-50 model~\\cite{ResidualNet} in the ImageNet-1000 dataset. We compare a modified C\\&W method~\\cite{CWAttack} (see details in Sec.~\\ref{sec:modifiedCW}) and our proposed adversarial distillation method in terms of the $\\ell_2$ distance between a clean image and the learned adversarial example. 
\n \n Here, ${9\\times30}$ and ${9\\times1000}$ refer to the settings of the hyperparameter search: we perform $9$ binary search steps for the trade-off parameter of the perturbation energy penalty in the objective function, and each search step takes $30$ or $1000$ iterations of optimization, respectively. Our method consistently outperforms the modified C\\&W method. Similarly, our method obtains better results for the bottom image of \\textit{Volleyball}, where the modified C\\&W$_{9\\times30}$ method fails to attack. (Best viewed in color and with magnification.)}\\label{fig:ex-top5-attack}\n\\end{figure}\n\nIn this paper, we focus on learning visually-imperceptible targeted attacks under the white-box setting. One scheme of learning these attacks is to design a proper adversarial objective function that leads to an imperceptible perturbation for any test image, \\emph{e.g.}, the widely used Carlini-Wagner (C\\&W) method~\\cite{CWAttack}. However, most methods address targeted attacks in the Top-$1$ manner, which limits the flexibility of attacks and may lead to less rich perturbations. We propose to generalize this setting to account for \\textbf{ordered Top-$k$ targeted attacks}, that is, to enforce the Top-$k$ predicted labels of an adversarial example to be $k$ (randomly) selected and ordered labels ($k\\geq 1$; the ground-truth (GT) label is excluded). Figure~\\ref{fig:ex-top5-attack} shows two examples. \n\nTo see why ordered Top-$k$ targeted attacks are needed, let us take a close look at the ``robustness'' of an attack method itself under the traditional Top-$1$ protocol. One crucial question is, \n\n\\textit{How far is the attack method able to push the underlying ground-truth label in the prediction of the learned adversarial examples?} \n\nConsider a white-box targeted attack method such as the C\\&W method~\\cite{CWAttack}. 
Although it can achieve a $100$\\% attack success rate (ASR) under the given Top-$1$ protocol, if the ground-truth labels of adversarial examples still largely appear in the Top-$5$ of the prediction, we may be over-confident about the $100$\\% ASR, especially when some downstream modules may rely on Top-$5$ predictions in their decision making. Table~\\ref{tab:gt-rank} shows the results. The C\\&W method does not push the GT labels very far, especially when smaller perturbation energy is aimed for using a larger search range (\\emph{e.g.}, the average rank of the GT label is $2.6$ for C\\&W$_{9\\times1000}$). On the contrary, the three untargeted attack approaches work much better in terms of pushing the GT labels, although their perturbation energies are usually much larger.\nWhat is interesting to us is the difference between the objective functions used by the C\\&W method and the three untargeted attack methods. The former maximizes the margin of the logits between the target and the runner-up (either GT or not), while the latter maximizes the cross-entropy between the prediction probabilities (softmax of logits) and the one-hot distribution of the ground-truth. Furthermore, label smoothing methods~\\cite{LabelSmoothing, ConfidencePenalty} are often used to improve the performance of DNNs by addressing the over-confidence in the one-hot vector encoding of annotations. In addition, the network distillation method~\\cite{distillation,distillation1} views the knowledge of a DNN as the conditional distribution it produces over outputs given an input. One question naturally arises,\n\n\\textit{Can we design a proper adversarial distribution, similar in spirit to label smoothing, to guide the ordered Top-$k$ attack by leveraging the point of view of network distillation?}\n\nOur proposed method aims to harness the best of the above strategies in designing proper target distributions and objective functions to achieve both high ASR and low perturbation energy. Our proposed ordered Top-$k$ attacks explicitly push the GT labels to a ``safe'' zone so that the ASR is retained. \n\n\\begin{table}\n\\caption{\\small Results showing where the ground-truth (GT) labels are in the prediction of learned adversarial examples for different attack methods. The test is done on the ImageNet-1000 {\\tt validation} dataset using a pretrained ResNet-${50}$ model~\\cite{ResidualNet}. Please see Sec.~\\ref{sec:exp-setup} for details of the experimental settings. }\n\\label{tab:gt-rank}\n\\centering\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tabular}{llllllll} \n\\toprule\n\\multirow{2}{*}{Method} &\\multirow{2}{*}{ASR} & \\multicolumn{5}{c}{Proportion of GT Labels in Top-$k$ {\\small (smaller is better)}}&\\multirow{2}{3cm}{{\\footnotesize Average Rank of GT Labels (larger is better)}}\\\\ \\cmidrule(r){3-7}\n&&Top-$3$ &Top-${5}$ &Top-${10}$ &Top-${50}$ &Top-${100}$\\\\ \\midrule\nC\\&W$_{9\\times30}$~\\cite{CWAttack} &99.9 &36.9 &50.5 &66.3 &90.0 &95.1 &20.4\\\\\nC\\&W$_{9\\times1000}$~\\cite{CWAttack} &100&71.9&87.0&96.1&99.9&100&2.6\\\\ \n\\hline \nFGSM~\\cite{FGSM} &80.7 &25.5&37.8 &52.8 &81.2 &89.2 &44.2\\\\\nPGD$_{10}$~\\cite{IFGSM, PGD} &100 &3.3 &6.7 &12 &34.7 &43.9 &306.5\\\\\nMIFGSM$_{10}$~\\cite{MIFGSM} &99.9 &0.7 &1.9 &6.0 &22.5 &32.3 &404.4\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\nTowards learning the generalized ordered Top-$k$ attacks, we present an \\textbf{adversarial distillation} framework: First, we compute an adversarial probability distribution for any given ordered Top-$k$ targeted labels with respect to the ground-truth of a test image. Then, we learn adversarial examples by minimizing the Kullback-Leibler (KL) divergence together with the perturbation energy penalty, similar in spirit to the network distillation method~\\cite{distillation}. More specifically, we explore how to leverage label semantic similarities in computing the targeted distributions, leading to \\textbf{knowledge-oriented attacks}.
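\nThe distillation step above can be sketched as the following learning problem (an illustrative sketch only; $P^{A}$ denotes the designed adversarial probability distribution over the labels and is notation introduced here, while $f(x+\\delta)$ denotes the prediction for the perturbed input):\n

```latex
\\begin{equation}
    \\underset{\\delta}{\\text{minimize}} \\quad ||\\delta||_p + \\lambda\\cdot KL\\left( P^{A} \\,||\\, f(x+\\delta) \\right),
\\end{equation}
```

\nwhere the KL term is small when the prediction for the adversarial example matches the designed ordered Top-$k$ distribution, and the $\\ell_p$ term keeps the perturbation energy small.\n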
We measure label semantic similarities using the cosine distance between off-the-shelf word2vec embeddings of labels, such as the pretrained GloVe embedding~\\cite{Glove}.\nAlong this direction, a few questions of interest that naturally arise are studied: Are all Top-$k$ targets equally challenging for an attack approach? How can we leverage the semantic knowledge between labels to guide an attack approach to learn better adversarial examples and to find the weak spots of different attack approaches? We found that the KL divergence is a stronger alternative to the C\\&W loss function, and that label semantic knowledge is useful in designing effective adversarial distributions. \n\nIn experiments, we develop a modified C\\&W approach for ordered Top-$k$ attacks as a strong baseline. We thoroughly test Top-$1$ and Top-$5$ attacks on the ImageNet-1000~\\cite{ImageNet} {\\tt validation} dataset using two popular DNNs trained on the clean ImageNet-1000 {\\tt train} dataset, ResNet-50~\\cite{ResidualNet} and DenseNet-121~\\cite{DenseNet}. For both models, our proposed adversarial distillation approach outperforms the vanilla C\\&W method in the Top-$1$ setting, as well as other baseline methods such as the PGD method~\\cite{PGD,IFGSM}. Our approach shows significant improvement in the Top-$5$ setting against the modified C\\&W method. We observe that Top-$k$ targets that are distant from the GT label, in terms of either label semantic distance or prediction scores on clean images, are actually more difficult to attack. \n\n\\textbf{Our Contributions.} This paper makes three main contributions to the field of learning adversarial attacks: \n (i) To our knowledge, this is the first work on learning ordered Top-$k$ attacks. This generalized setting is a straightforward extension of the widely used Top-$1$ attack and is able to improve the robustness of adversarial attacks themselves. 
\n (ii) A conceptually simple yet effective framework, adversarial distillation, is proposed to learn ordered Top-$k$ attacks under the white-box setting. It outperforms a strong baseline, the C\\&W method~\\cite{CWAttack}, under both the traditional Top-$1$ protocol and the proposed ordered Top-$5$ protocol in the ImageNet-1000 dataset using two popular DNNs, ResNet-50~\\cite{ResidualNet} and DenseNet-121~\\cite{DenseNet}. \n (iii) Knowledge-oriented designs of adversarial target distributions are studied, whose effectiveness is supported by the experimental results.\n\n\\textbf{Paper organization.} The remainder of this paper is organized as follows. In Section~\\ref{sec:formulation}, we review white-box targeted attacks and the C\\&W method, and then present details of our proposed adversarial distillation framework for the ordered Top-$k$ targeted attack. In Section~\\ref{sec:exp-setup}, we present thorough comparisons in ImageNet-1000. In Section~\\ref{sec:related}, we briefly review the related work. Finally, we conclude this paper in Section~\\ref{sec:conclusion}. \n\n\\section{Problem formulation}\\label{sec:formulation}\nIn this section, we first briefly introduce, to be self-contained, the white-box attack setting and the widely used C\\&W method~\\cite{CWAttack} under the Top-$1$ protocol. We then define the ordered Top-$k$ attack setting and develop a modified C\\&W method for it ($k> 1$) as a strong baseline. Finally, we present our proposed adversarial distillation framework. \n\n\\subsection{Background on white-box targeted attacks under the Top-$1$ setting}\nWe focus on classification tasks using DNNs. Denote by $(x, y)$ a pair of a clean input $x\\in \\mathcal{X}$ and its ground-truth label $y\\in \\mathcal{Y}$. For example, in the ImageNet-1000 classification task, $x$ represents an RGB image defined on the lattice of $224\\times 224$ pixels and we have $\\mathcal{X}\\triangleq R^{3\\times 224\\times 224}$. 
$y$ is the category label and we have $\\mathcal{Y}\\triangleq \\{1, \\cdots, 1000\\}$. Let $f(\\cdot;\\Theta)$ be a DNN pretrained on clean training data, where $\\Theta$ collects all estimated parameters and is fixed in learning adversarial examples. For notational simplicity, we denote by $f(\\cdot)$ a pretrained DNN. \nThe prediction for an input $x$ from $f(\\cdot)$ is usually defined using the softmax function by,\n\\begin{equation}\n P =f(x)=softmax(z(x)),\n\\end{equation}\nwhere $P\\in R^{|\\mathcal{Y}|}$ represents the estimated confidence\/probability vector ($P_c\\geq 0$ and $\\sum_c P_c=1$) and $z(x)$ is the logit vector. The predicted label is then inferred by $\\hat{y}=\\arg\\max_{c\\in [1,|\\mathcal{Y}|]} P_c$. \n\nIn learning targeted attacks under the Top-$1$ protocol, for an input $(x, y)$ and a given target label $t\\neq y$, we seek to compute some visually-imperceptible perturbation $\\delta(x, t, f)$ using the pretrained and fixed DNN $f(\\cdot)$ under the white-box setting. \\textit{White-box attacks} assume complete knowledge of the pretrained DNN $f$, including its parameter values, architecture, training method, etc. The perturbed example $x'=x+\\delta(x, t, f)$ is called \\textbf{an adversarial example} of $x$ if $t=\\hat{y}'=\\arg\\max_c f(x')_c$ and the perturbation $\\delta(x, t, f)$ is sufficiently small according to some energy metric. We usually focus on the subset of input pairs $(x,y)$ that are correctly classified by the model, \\emph{i.e.}, $y=\\hat{y}=\\arg\\max_c f(x)_c$. 
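As a minimal illustration of the prediction pipeline above (logits $z(x)$, softmax, then $\arg\max$), the following NumPy sketch uses a toy logit vector; it is purely illustrative and not the output of a real DNN.

```python
import numpy as np

def softmax(z):
    # Subtract the max logit for numerical stability; the result satisfies
    # P_c >= 0 and sum_c P_c = 1, as required of a confidence vector.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical logit vector z(x) for a 5-class toy problem.
z = np.array([2.0, 1.0, 0.5, -1.0, 0.0])
P = softmax(z)              # estimated confidence/probability vector
y_hat = int(np.argmax(P))   # predicted label (0-indexed here)

assert np.isclose(P.sum(), 1.0) and np.all(P >= 0)
assert y_hat == 0           # softmax is monotone, so the largest logit wins
```

Since softmax is strictly monotone in the logits, $\arg\max_c P_c$ and $\arg\max_c z(x)_c$ always coincide.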
Learning $\\delta(x, t, f)$ under the Top-$1$ protocol is posed as a constrained optimization problem~\\cite{AdversarialExampl, CWAttack}, \n\\begin{align}\n \\text{minimize}\\quad &\\mathcal{E}(\\delta)=||\\delta||_p, \\label{eq:formulation}\\\\\n \\nonumber \\text{subject to}\\quad & t=\\arg\\max_c f(x+\\delta)_c,\\\\\n \\nonumber & x+\\delta \\in [0, 1]^n,\n\\end{align}\nwhere $\\mathcal{E}(\\cdot)$ is defined by an $\\ell_p$ norm (\\emph{e.g.}, the $\\ell_2$ norm) and $n$ is the size of the input domain (e.g., the number of pixels). \nTo overcome the difficulty (non-linear and non-convex constraints) of directly solving Eqn.~\\ref{eq:formulation}, the C\\&W method expresses it in a different form by designing some loss function $L(x')=L(x+\\delta)$ such that the first constraint $t=\\arg\\max_c f(x')_c$ is satisfied if and only if $L(x')\\leq 0$. The best-performing loss function proposed by the C\\&W method is defined by the hinge loss on the logits of the target label and the runner-up, \n\\begin{equation}\n L_{CW}(x') = \\max(0, \\max_{c\\neq t}z(x')_c - z(x')_t). \\label{eq:cwloss}\n\\end{equation}\nThen, the learning problem becomes, \n\\begin{align}\n \\text{minimize}\\quad & ||\\delta||_p + \\lambda\\cdot L(x+\\delta), \\label{eq:formulation1}\\\\\n \\nonumber \\text{subject to}\\quad & x+\\delta \\in [0, 1]^n,\n\\end{align}\nwhich can be solved via back-propagation, with the box constraint satisfied by introducing a {\\tt tanh} transformation. For the trade-off parameter $\\lambda$, a binary search is performed during learning (\\emph{e.g.}, $9$ binary-search steps with $1000$ optimization iterations each, denoted $9\\times 1000$). \n\n\\subsection{The proposed ordered Top-$k$ attack setting}\nIt is straightforward to extend Eqn.~\\ref{eq:formulation} for learning ordered Top-$k$ attacks ($k\\geq 1$). Denote by $(t_1, \\cdots, t_k)$ the ordered Top-$k$ targets ($t_i\\neq y$). 
We have, \n\\begin{align}\n \\text{minimize}\\quad &\\mathcal{E}(\\delta)=||\\delta||_p, \\label{eq:formulation-topk}\\\\\n \\nonumber \\text{subject to}\\quad & t_i=\\arg\\max_{c\\in [1, |\\mathcal{Y}|], c\\notin \\{t_1, \\cdots, t_{i-1}\\} } f(x+\\delta)_c, \\quad i\\in \\{1,\\cdots, k\\}, \\\\\n \\nonumber & x+\\delta \\in [0, 1]^n .\n\\end{align}\n\n\\subsubsection{A modified C\\&W method}\\label{sec:modifiedCW}\nWe can modify the loss function (Eqn.~\\ref{eq:cwloss}) of the C\\&W method accordingly to solve Eqn.~\\ref{eq:formulation-topk}. We have, \n\\begin{equation}\n L^{(k)}_{CW}(x') = \\sum_{i=1}^k \\max(0, \\max_{j\\notin \\{t_1,\\cdots, t_{i}\\}}z(x')_j - z(x')_{t_i}). \\label{eq:cwloss-topk}\n\\end{equation}\nSo, the vanilla C\\&W loss (Eqn.~\\ref{eq:cwloss}) is the special case of Eqn.~\\ref{eq:cwloss-topk} (\\emph{i.e.}, when $k=1$).\n\n\\subsubsection{Our proposed knowledge-oriented adversarial distillation framework}\nIn the C\\&W loss functions, only the margin of logits between the targeted labels and the runner-ups is taken into account. In our adversarial distillation framework, we adopt the point of view proposed in the network distillation method~\\cite{distillation} that the full confidence\/probability distribution summarizes the knowledge of a trained DNN. We hypothesize that we can leverage the network distillation framework to learn ordered Top-$k$ attacks by designing a proper adversarial probability distribution across the entire set of labels that satisfies the specification of the given ordered Top-$k$ targets. \n\nGiven the Top-$k$ targets $(t_1, \\cdots, t_k)$, we want to define the adversarial probability distribution, denoted by $P^{adv}$, in which $P^{adv}_{t_i}> P^{adv}_{t_j}$ ($\\forall i<j$) and $P^{adv}_{t_k}>P^{adv}_j$ ($\\forall j\\notin (t_1, \\cdots, t_k)$). The space of candidate distributions is huge. We present a simple knowledge-oriented approach to define the adversarial distribution. 
We first specify the logit distribution and then compute the probability distribution using softmax. Denote by $Z$ the maximum logit (\\emph{e.g.}, $Z=10$ in our experiments). We define the adversarial logits for the ordered Top-$k$ targets by,\n\\begin{equation}\n z^{adv}_{t_i}=Z - (i-1)\\times \\gamma, \\quad i\\in \\{1, \\cdots, k\\},\n\\end{equation}\nwhere $\\gamma$ is an empirically chosen decrement (\\emph{e.g.}, $\\gamma=0.3$ in our experiments). For the remaining categories $j\\notin (t_1, \\cdots, t_k)$, we define the adversarial logits by,\n\\begin{equation}\n z^{adv}_j = \\alpha \\times \\frac{1}{k}\\sum_{i=1}^k s(t_i, j) + \\epsilon, \\label{eq:logit-others}\n\\end{equation}\nwhere $0\\leq \\alpha < z^{adv}_{t_k}$ controls the maximum logit that can be assigned to any $j$, $s(a, b)$ is the semantic similarity between label $a$ and label $b$, and $\\epsilon$ is a small positive constant for numerical stability (\\emph{e.g.}, $\\epsilon=10^{-5}$). We compute $s(a, b)$ using the cosine similarity between the Glove~\\cite{Glove} embedding vectors of the category names, so $-1\\leq s(a, b) \\leq 1$. When $\\alpha=0$, we discard the semantic knowledge and treat all the remaining categories equally. Note that our design of $P^{adv}$ is similar in spirit to the label smoothing technique and its variants~\\cite{LabelSmoothing,ConfidencePenalty}, except that we target the attack labels and exploit label semantic knowledge. This design choice is still preliminary, although we observe its effectiveness in experiments. We hope it encourages more sophisticated designs to be explored. 
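The construction above (target logits $Z-(i-1)\gamma$, similarity-weighted logits for the remaining labels, then softmax) can be sketched in NumPy as follows. The similarity matrix `S` here is a random stand-in for the Glove-based similarities, and the class count and target indices are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def adversarial_distribution(targets, n_classes, S, Z=10.0, gamma=0.3,
                             alpha=1.0, eps=1e-5):
    """Build P^adv from the ordered Top-k targets.

    S is an (n_classes x n_classes) label-similarity matrix with entries
    in [-1, 1] (e.g., cosine similarity of Glove embeddings of category
    names); here it is a hypothetical stand-in.
    """
    z_adv = np.empty(n_classes)
    # Ordered targets get the highest logits: Z, Z - gamma, Z - 2*gamma, ...
    for i, t in enumerate(targets):
        z_adv[t] = Z - i * gamma
    # Remaining labels: alpha * (mean similarity to the targets) + eps.
    for j in range(n_classes):
        if j not in targets:
            z_adv[j] = alpha * np.mean([S[t, j] for t in targets]) + eps
    return softmax(z_adv)

rng = np.random.default_rng(0)
n = 20                                  # toy label-space size
S = rng.uniform(-1.0, 1.0, (n, n))      # stand-in similarity matrix
targets = [3, 7, 11, 2, 15]             # hypothetical ordered Top-5 targets
P_adv = adversarial_distribution(targets, n, S)

# Ordering constraints hold: P_t1 > ... > P_tk > P_j for all other j,
# since alpha * mean similarity + eps <= 1 + eps < Z - (k-1)*gamma.
top = [P_adv[t] for t in targets]
assert all(a > b for a, b in zip(top, top[1:]))
assert min(top) > max(P_adv[j] for j in range(n) if j not in targets)
```

With these logits, any $0\leq\alpha < z^{adv}_{t_k}$ keeps the non-target probabilities strictly below the Top-$k$ ones, so the required ordering is satisfied by construction.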
\n\nWith the adversarial probability distribution $P^{adv}$ defined above as the target, we use the KL divergence as the loss function in our adversarial distillation framework, as done in network distillation~\\cite{distillation}, and we have, \n\\begin{equation}\n L^{(k)}_{adv}(x') = KL(f(x')||P^{adv}),\n\\end{equation}\nand then we follow the same optimization scheme as in the C\\&W method (Eqn.~\\ref{eq:formulation1}). \n\n\\section{Experiments}\\label{sec:exp-setup}\nIn this section, we present results of our proposed method tested on ImageNet-1000~\\cite{ImageNet} using two pretrained DNNs, ResNet-50~\\cite{ResidualNet} and DenseNet-121~\\cite{DenseNet}, from the PyTorch model zoo~\\footnote{https:\/\/github.com\/pytorch\/vision\/tree\/master\/torchvision\/models}. We implement our method using the AdverTorch toolkit~\\footnote{https:\/\/github.com\/BorealisAI\/advertorch}. Our source code will be released. \n\n\\textbf{Data.} In ImageNet-1000~\\cite{ImageNet}, there are $50,000$ images for validation. We obtain the subset of images for which the predictions of both ResNet-50 and DenseNet-121 are correct. To reduce the computational demand, we further test our method on a randomly sampled subset, as commonly done in the literature. To enlarge the coverage of categories, we first randomly select 500 categories and then randomly choose 2 images per selected category, resulting in 1000 test images in total. \n\n\\textbf{Settings.} We follow the protocol used in the C\\&W method. We only test the $\\ell_2$ norm as the energy penalty for perturbations in learning, but we evaluate learned adversarial examples in terms of three norms ($\\ell_1$, $\\ell_2$ and $\\ell_{\\infty}$). We test two search schemes for the trade-off parameter $\\lambda$ in optimization: both use $9$ steps of binary search, with $30$ and $1000$ iterations of optimization, respectively, performed for each trial of $\\lambda$. 
Only $\\alpha=1$ is used in Eqn.~\\ref{eq:logit-others} in experiments, for simplicity and due to the computational demand.\nWe compare the results under three scenarios proposed in the C\\&W method~\\cite{CWAttack}: \\textit{The Best Case} settings test the attack against all incorrect classes, and report the target class(es) that were least difficult to attack.\n\\textit{The Worst Case} settings test the attack against all incorrect classes, and report the target class(es) that were most difficult to attack.\n\\textit{The Average Case} settings select the target class(es) uniformly at random among the labels that are not the GT.\n\n\\begin{figure} [h]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{top1-1.pdf}\n \\caption{\\small Adversarial examples learned with the Top-$1$ attack setting using ResNet-50~\\cite{ResidualNet}. The perturbation is shown by the $\\ell_{2}$ distance between the clean image and the adversarial example. For better visualization, we use different scales in showing the heat maps for different methods. (Best viewed in color and with magnification)}\\label{fig:ex-top1}\n\\end{figure}\n\n\\begin{table*} [h]\n\\caption{\\small Results and comparisons under the Top-$1$ targeted attack setting. We also test against three state-of-the-art untargeted attack methods, FGSM, PGD and MIFGSM; the last two use 10 steps in optimization.}\n\\label{tab:top1}\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{llllllllllllll} \n\\toprule\n\\multirow{2}{*}{Model}&\\multirow{2}{*}{Attack Method} & \\multicolumn{4}{c}{Best Case}&\\multicolumn{4}{c}{Average Case} &\\multicolumn{4}{c}{Worst Case} \\\\ \n\\cmidrule(r){3-14}\n&&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$\\\\\n\\midrule\n\\multirow{7}{*}{ResNet-50~\\cite{ResidualNet}}&FGSM~\\cite{FGSM} &2.3 &9299 &24.1 &0.063 &0.46 &9299 &24.1 &0.063 &0 &N.A. &N.A. 
&N.A.\\\\\n&PGD$_{10}$~\\cite{IFGSM, PGD} &99.6 &4691 &14.1 &0.063 &88.1 &4714 &14.2 &0.063 &57.1 &4748 &14.3 &0.063\\\\\n&MIFGSM$_{10}$~\\cite{MIFGSM} &100 &5961 &17.4 &0.063 &99.98 &6082 &17.6 &0.063 &99.9 &6211 &17.9 &0.063\\\\\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &100 &209.7 &0.777 &0.022 &99.92 &354.1 &1.273 &0.031 &99.9 &560.9 &1.987 &0.042\\\\\n&Ours$_{9\\times30}$ &100 &140.9 &0.542 &0.018 &99.9 &184.6 &0.696 &0.025 &99.9 &238.6 &0.880 &0.032\\\\\n&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &95.6 &0.408 &0.017 &100 &127.2 &0.516 &\\textbf{0.023} &100 &164.1 &0.635 &0.030\\\\\n&Ours$_{9\\times1000}$ &100 &\\textbf{81.3} &\\textbf{0.380} &\\textbf{0.016} &100 &\\textbf{109.6} &\\textbf{0.472} &\\textbf{0.023} &100 &\\textbf{143.9} &\\textbf{0.579} &\\textbf{0.029}\\\\\n\\midrule\n\\multirow{7}{*}{DenseNet-121~\\cite{DenseNet}}&FGSM~\\cite{FGSM} &6.4 &9263 &24.0 &0.063 &1.44 &9270 &24.0 &0.063 &0 &N.A. &N.A. &N.A.\\\\\n&PGD$_{10}$~\\cite{IFGSM, PGD} &100 &4617 &14.2 &0.063 &97.2 &4716 &14.2 &0.063 &87.6 &4716 &14.2 &0.063\\\\\n&MIFGSM$_{10}$~\\cite{MIFGSM} &100 &5979 &17.6 &0.063 &100 &6095 &17.6 &0.063 &100 &6218 &17.9 &0.063\\\\\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &99.9 &188.6 &0.694 &0.019 &99.9 &279.4 &1.008 &0.028 &99.9 &396.5 &1.404 &0.037\\\\\n&Ours$_{9\\times30}$ &99.9 &136.4&0.523 &0.017 &99.9 &181.8 &0.678 &0.024 &99.9 &240.0 &0.870 &0.031\\\\\n&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &98.5 &0.415 &\\textbf{0.016} &100 &132.3 &0.528 &\\textbf{0.023} &100 &174.8 &0.657 &\\textbf{0.030}\\\\\n&Ours$_{9\\times1000}$ &100 &\\textbf{83.8} &\\textbf{0.384} &\\textbf{0.016} &100 &\\textbf{115.9} &\\textbf{0.485} &\\textbf{0.023} &100 &\\textbf{158.69} &\\textbf{0.610} &\\textbf{0.030}\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\n\n\\begin{table*}\n\\caption{\\small Results of Top-$1$ targeted attacks using 5 most-like labels and 5 least-like labels as targets respectively, based on the label semantic 
similarities.}\n\\label{tab:top1-sim}\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{lllllllllllllll} \n\\toprule\n\\multirow{2}{*}{Model}&\\multirow{2}{*}{Similarity} &\\multirow{2}{*}{Attack Method}& \\multicolumn{4}{c}{Best Case}&\\multicolumn{4}{c}{Average Case} &\\multicolumn{4}{c}{Worst Case} \\\\\n\\cmidrule(r){4-15}\n&&&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$\\\\\n\\midrule\n\\multirow{14}{*}{ResNet-50~\\cite{ResidualNet}}&\\multirow{7}{*}{Most like}\n&FGSM~\\cite{FGSM} &32.4 &9137 &23.8 &0.063 &0.0862 &9137 &23.8 &0.063 &0 &N.A. &N.A. &N.A.\\\\\n&&PGD$_{10}$~\\cite{IFGSM, PGD} &99.9 &4687 &14.1 &0.063 &94.6 &4708 &14.2 &0.063 &78.4 &4737 &14.3 &0.063\\\\\n&&MIFGSM$_{10}$~\\cite{MIFGSM}&100 &5993 &17.4 &0.063 &99.98 &6110 &17.7 &0.063 &99.9 &6228 &17.9 &0.063 \\\\\n&&C\\&W$_{9\\times30}$~\\cite{CWAttack} &100 &138 &0.51 &0.012 &99.94 &249 &0.89 &0.023 &99.9 &401 &1.40 &0.035\\\\\n&&Ours$_{9\\times30}$ &100 &114 &0.43 &0.011 &99.96 &171 &0.62 &0.020 &99.9 &251 &0.87 &0.028\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &82 &0.34 &\\textbf{0.010} &100 &126 &0.48 &\\textbf{0.018} &100 &194 &0.68 &0.026\\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{75} &\\textbf{0.32} &\\textbf{0.010} &100 &\\textbf{117} &\\textbf{0.46} &\\textbf{0.018} &100 &\\textbf{187} &\\textbf{0.65} &\\textbf{0.025}\\\\\n\\cmidrule(r){2-15}\n&\\multirow{7}{*}{Least like}\n&FGSM~\\cite{FGSM} &0.4 &8860 &23.4 &0.063 &0.08 &8860 &23.4 &0.063 &0 &N.A. &N.A. 
&N.A.\\\\\n&&PGD$_{10}$~\\cite{IFGSM, PGD} &99.4 &4696 &14.1 &0.063 &84.82 &4721 &14.2 &0.063 &50 &4762 &14.3 &0.063\\\\\n&&MIFGSM$_{10}$~\\cite{MIFGSM}&100 &5960 &17.4 &0.0625 &99.96 &6069 &17.6 &0.063 &99.8 &6194 &17.9 &0.063\\\\\n&&C\\&W$_{9\\times30}$~\\cite{CWAttack} &99.9 &259 &0.95 &0.025 &99.9 &421 & 1.51&0.035 &99.9 &639 &2.25 &0.046\\\\\n&&Ours$_{9\\times30}$ &99.9 &154 &0.59 &0.020 &99.9 &194 & 0.73 &0.026 &99.9 &240 &0.89 &0.033\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &102 &0.44 &\\textbf{0.019} &100 &132 &0.54 &0.025 &100 &165 &0.65 &0.032\\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{85} &\\textbf{0.40} &\\textbf{0.019} &100 &\\textbf{111} &\\textbf{0.49} &\\textbf{0.024} &100 &\\textbf{142} &\\textbf{0.59} &\\textbf{0.030}\\\\\n\\midrule\n\\multirow{14}{*}{DenseNet-121~\\cite{DenseNet}}&\\multirow{7}{*}{Most like}\n&FGSM~\\cite{FGSM} &46.1 &9132 &23.8 &0.063 &14.62 &9143 &23.8 &0.063 &0.3 &9263 &24.0 &0.063\\\\\n&&PGD$_{10}$~\\cite{IFGSM, PGD} &100 &4692 &14.1 &0.063 &98.92 &4712 &14.2 &0.063 &94.7 &4733 &14.3 &0.063\\\\\n&&MIFGSM$_{10}$~\\cite{MIFGSM}&100 &6010 &17.5 &0.063 &100 &6128 &17.7 &0.063 &100 &6245 &18.0 &0.063 \\\\\n&&C\\&W$_{9\\times30}$~\\cite{CWAttack} &100 &130 &0.48 &0.010 &100 &218 &0.77 &0.021 &100 &332 &1.14 &0.031\\\\\n&&Ours$_{9\\times30}$ &100 &114 &0.42 &0.010 &100 &170 &0.61 &0.019 &100 &250 &0.85 &0.028\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack}&100 &85 &0.34 &\\textbf{0.010} &100 &134 &0.50 &\\textbf{0.018} &100 &210 &0.71 &\\textbf{0.026}\\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{77} &\\textbf{0.33} &\\textbf{0.010} &100 &\\textbf{124} &\\textbf{0.48} &\\textbf{0.018} &100 &\\textbf{202} &\\textbf{0.69} &\\textbf{0.026}\\\\\n\\cmidrule(r){2-15}\n&\\multirow{7}{*}{Least like}\n&FGSM~\\cite{FGSM} &2.1 &9101 &23.7 &0.063 &0.42 &9101 &23.7 &0.063 &0 &N.A. &N.A. 
&N.A.\\\\\n&&PGD$_{10}$~\\cite{IFGSM, PGD} &100 &4698 &14.2 &0.063 &96.32 &4718 &14.2 &0.063 &83.6 &4745 &14.3 &0.063\\\\\n&&MIFGSM$_{10}$~\\cite{MIFGSM}&100 &5973 &17.4 &0.0625 &99.98 &6082 &17.9 &0.063 &99.9 &6203 &17.9 &0.063\\\\\n&&C\\&W$_{9\\times30}$~\\cite{CWAttack} &99.9 &215 &0.79 &0.023 &99.9 &310 & 1.12&0.030 &99.9 &428 &1.52 &0.039\\\\\n&&Ours$_{9\\times30}$ &99.9 &145 &0.56 &0.019 &99.9 &188 & 0.70 &0.025 &99.9 &240 &0.88 &0.032\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &102 &0.43 &\\textbf{0.018} &100 &134 &0.54 &\\textbf{0.024} &100 &170 &0.65 &\\textbf{0.031}\\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{85} &\\textbf{0.40} &\\textbf{0.018} &100 &\\textbf{114} &\\textbf{0.49} &\\textbf{0.024} &100 &\\textbf{149} &\\textbf{0.59} &\\textbf{0.031}\\\\\n\n\\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\\subsection{Results for the Top-$1$ attack setting}\nWe first evaluate whether the proposed adversarial distillation framework is effective for the traditional Top-$1$ attack setting. The results show that the proposed method can consistently outperform the C\\&W method, as well as some other state-of-the-art untargeted attack methods including the PGD method~\\cite{PGD,IFGSM}. \n\n\nFigure~\\ref{fig:ex-top1} shows two qualitative results. We can see that the C\\&W method and our proposed method ``attend'' to different regions in images to achieve the attacks. Table~\\ref{tab:top1} shows the quantitative results. Our proposed method obtains smaller $\\ell_1$ and $\\ell_2$ norms, while the $\\ell_{\\infty}$ norms are almost the same. Note that we only use the $\\ell_2$ norm in the objective function in learning. We will evaluate the results of explicitly using the $\\ell_1$ and $\\ell_{\\infty}$ norms as penalties in future work. 
\n\n\n\nAs shown in Table~\\ref{tab:top1-sim}, we also test whether the label semantic knowledge can help identify the weak spots of different attack methods, and whether the proposed method can gain more in those weak spots. We observe that attacks are more challenging if the Top-$1$ target is selected from the least-like set in terms of the label semantic similarity (see Eqn.~\\ref{eq:logit-others}). \n\n\\subsection{Results for the ordered Top-$5$ attack setting}\nWe test ordered Top-$5$ attacks and compare with the modified C\\&W method. Our proposed method significantly outperforms the modified C\\&W method, especially for the $9\\times 30$ optimization scheme, as shown in Table~\\ref{tab:top5}. We also observe improvement in the $\\ell_{\\infty}$ norm for the ordered Top-$5$ attacks (please see Figure~\\ref{fig:ex-top5-attack} for two visual examples). \n\nWe also test the effectiveness of knowledge-oriented specifications for selecting the ordered Top-$5$ targets, with observations similar to those in the Top-$1$ experiments (see Table~\\ref{tab:top5-sim}). \n\nTo further evaluate the proposed method, we also test the ordered Top-$5$ attacks using the labels with the 5 highest and 5 lowest clean prediction scores as targets, respectively, as shown in Table~\\ref{tab:top5-clean-logit}. We observe a similar pattern: the 5 labels with the lowest clean prediction scores are more challenging to attack. This sheds light on learning data-driven knowledge: instead of using label semantic knowledge, which may have some discrepancy in guiding the design of adversarial loss functions, we can leverage similarities measured from the confusion matrix on the training data, if available. We leave this for future work. 
\n\n\\begin{table*}\n\\caption{\\small Results and comparisons under the ordered Top-$5$ targeted attack protocol using randomly selected and ordered 5 targets (GT exclusive).}\n\\label{tab:top5}\n\\centering\n\\resizebox{1\\textwidth}{!}{\n\\begin{tabular}{llllllllllllll} \n\\toprule\n\\multirow{2}{*}{Model}&\\multirow{2}{*}{Attack Method} & \\multicolumn{4}{c}{Best Case}&\\multicolumn{4}{c}{Average Case} &\\multicolumn{4}{c}{Worst Case} \\\\ \n\\cmidrule(r){3-14}\n&&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$\\\\\n\\midrule\n\\multirow{4}{*}{ResNet-50~\\cite{ResidualNet}}&C\\&W$_{9\\times30}$~\\cite{CWAttack} &75.8 &2370 &7.76 &0.083 &29.34 &2425 &7.94 &0.086 &0.7 &2553 &8.37 &0.094\\\\\n&Ours$_{9\\times30}$ &96.1 &1060 &3.58 &0.056 &80.68 &1568 &5.13 &0.070 &49.8 &2215 &7.07 &0.087\\\\\n&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &437 &1.59 &0.044 &100 &600 &2.16 &0.058 &100 &779 &2.77 &0.074\\\\\n&Ours$_{9\\times1000}$ &100 &\\textbf{285} &\\textbf{1.09} &\\textbf{0.034} &100 &\\textbf{359} &\\textbf{1.35} &\\textbf{0.043} &100 &\\textbf{456} &\\textbf{1.68} &\\textbf{0.055}\\\\\n\\midrule\n\\multirow{4}{*}{DenseNet-121~\\cite{DenseNet}}&C\\&W$_{9\\times30}$~\\cite{CWAttack} &96.6 &2161 &7.09 &0.071 &73.68 &2329 &7.65 &0.080 &35.6 &2530 &8.28 &0.088\\\\\n&Ours$_{9\\times30}$ &97.7 &6413 &2.14 &0.043 &92.66 &1063 &3.57 &0.057 &83.3 &1636 &5.35 &0.072\\\\\n&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &392 &1.42 &0.040 &100 &527 &1.89 &0.052&100 &669 &2.37 &0.065\\\\\n&Ours$_{9\\times1000}$ &100 &\\textbf{273} &\\textbf{1.05} &\\textbf{0.033} &100 &\\textbf{344} &\\textbf{1.29} &\\textbf{0.042} &100 &\\textbf{425} &\\textbf{1.57} &\\textbf{0.052}\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\\begin{table}\n\\caption{\\small Results of ordered Top-$5$ targeted attacks using 5 most-like labels and 5 least-like labels as targets respectively, based on the label 
semantic similarities.}\n\\label{tab:top5-sim}\n\\centering\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tabular}{lllllll} \n\\toprule\nModel&Similarity &Attack Method&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$\\\\\n\\midrule\n\\multirow{8}{*}{ResNet-50~\\cite{ResidualNet}}&\\multirow{4}{*}{Most like}\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &80 &1922 &6.30 &0.066\\\\\n&&Ours$_{9\\times30}$ &96.5 &1286 &4.20 &0.054 \\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack}&100 &392 &1.43 &0.042 \\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{277} &\\textbf{1.05} &\\textbf{0.035} \\\\\n\\cmidrule(r){2-7}\n&\\multirow{4}{*}{Least like}\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &27.1 &2418 &7.90 &0.085 \\\\\n&&Ours$_{9\\times30}$ &77.1 &1635 &5.35 &0.072\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &596 &2.15 &0.060 \\\\\n&&Ours$_{9\\times1000}$ &100 &\\textbf{370} &\\textbf{1.39} &\\textbf{0.045} \\\\\n\\midrule\n\\multirow{8}{*}{DenseNet-121~\\cite{DenseNet}}&\\multirow{4}{*}{Most like}\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &92.1 &1798 &5.88 &0.059\\\\\n&&Ours$_{9\\times30}$ &98.4 &1228 &4.00 &0.050 \\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack}&100 &361 &1.31 &0.039 \\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{265} &\\textbf{1.00} &\\textbf{0.034} \\\\\n\\cmidrule(r){2-7}\n&\\multirow{4}{*}{Least like}\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &75.7 &2325 &7.64 &0.080 \\\\\n&&Ours$_{9\\times30}$ &92.8 &1076 &3.63 &0.057\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &529 &1.90 &0.052 \\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{343} &\\textbf{1.29} &\\textbf{0.042} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\n\n\n\\begin{table}\n\\caption{\\small Results of ordered Top-$5$ targeted attacks using the labels with the 5 highest and 5 lowest prediction scores of clean images as targets, respectively. 
}\n\\label{tab:top5-clean-logit}\n\\centering\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tabular}{lllllll} \n\\toprule\nModel&Clean prediction &Attack Method&ASR&$\\ell_{1}$&$\\ell_{2}$&$\\ell_{\\infty}$\\\\\n\\midrule\n\\multirow{8}{*}{ResNet-50~\\cite{ResidualNet}}&\\multirow{4}{*}{Highest}\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &93 &1546 &4.98 &0.042\\\\\n&&Ours$_{9\\times30}$ &99.9 &1182 &3.78 &0.039 \\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack}&100 &205 &0.75 &0.025 \\\\\n&&Ours$_{9\\times1000}$&100 &\\textbf{170} &\\textbf{0.65} &\\textbf{0.023} \\\\\n\\cmidrule(r){2-7}\n&\\multirow{4}{*}{Lowest}\n&C\\&W$_{9\\times30}$~\\cite{CWAttack} &13.4 &2231 &7.30 &0.082 \\\\\n&&Ours$_{9\\times30}$ &68.6 &1791 &5.86&0.077\\\\\n&&C\\&W$_{9\\times1000}$~\\cite{CWAttack} &100 &621 &2.25 &0.064 \\\\\n&&Ours$_{9\\times1000}$ &100 &\\textbf{392} &\\textbf{1.47} &\\textbf{0.047} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\\section{Related work}\\label{sec:related}\nThe growing ubiquity of DNNs in advanced machine learning and AI systems dramatically increases their capabilities, but also increases the potential for new vulnerabilities to attacks. This situation has become critical as many powerful approaches have been developed in which imperceptible perturbations to DNN inputs can deceive a well-trained DNN, significantly altering its prediction.\nPlease refer to~\\cite{AttackSurvey} for a comprehensive survey of attack methods in computer vision. We review some related work that motivates our work and highlight the differences. \n\n\\textbf{Distillation.}\nThe central idea of our proposed work is built on distillation. Network distillation~\\cite{distillation1,distillation} is a powerful training scheme proposed to train a new, usually lightweight model (a.k.a. the student) to mimic another already trained model (a.k.a. the teacher). 
It takes a functional viewpoint of the knowledge learned by the teacher, namely the conditional distribution it produces over outputs given an input. It trains the student to emulate the teacher by adding regularization terms to the loss that directly encourage the two models' output distributions to be similar, based on the distilled knowledge rather than the training labels. Label smoothing~\\cite{LabelSmoothing} can be treated as simple hand-crafted knowledge to help improve model performance. \nDistillation has also been exploited to develop defense models~\\cite{distillation_defense} to improve model robustness. Our proposed adversarial distillation method utilizes the distillation idea in the opposite direction, leveraging label-semantics-driven knowledge for learning ordered Top-$k$ attacks and improving attack robustness. \n\n\\textbf{Adversarial Attack.} For image classification tasks using DNNs, the discovery of the existence of visually-imperceptible adversarial attacks~\\cite{LBFGS} was a big shock to the development of DNNs.\nWhite-box attacks provide a powerful way of evaluating model brittleness. Loosely speaking, DNNs are universal function approximators~\\cite{UniversalApproximator} and are capable of fitting even random labels~\\cite{DNNGeneralization} in large-scale classification tasks such as ImageNet-1000~\\cite{ImageNet}. Thus, adversarial attacks are always learnable provided that proper objective functions are given, especially when DNNs are trained with fully differentiable back-propagation. Many white-box attack methods focus on norm-ball constrained objective functions~\\cite{LBFGS,IFGSM,CWAttack,MIFGSM}. The C\\&W method investigates 7 different loss functions. The best-performing loss function found by the C\\&W method has been applied in many attack methods and has achieved strong results~\\cite{ZOOAttack, PGD, EADAttack}. 
By introducing momentum in the MIFGSM method~\\cite{MIFGSM} and the $\\ell_{p}$ gradient projection in the PGD method~\\cite{PGD}, these methods usually achieve better performance in generating adversarial examples. Meanwhile, some other attack methods such as StrAttack~\\cite{StrAttack} also investigate different loss functions for better interpretability of attacks. Our proposed method leverages label semantic knowledge in the loss function design for the first time.\n\n\n\n\\section{Conclusions}\\label{sec:conclusion}\nThis paper proposes to extend the traditional Top-$1$ targeted attack setting to the ordered Top-$k$ setting ($k\\geq 1$) under the white-box attack protocol. The ordered Top-$k$ targeted attacks can improve the robustness of attacks themselves. To our knowledge, this is the first work studying ordered Top-$k$ attacks. To learn the ordered Top-$k$ attacks, we present a conceptually simple yet effective adversarial distillation framework motivated by network distillation. We also develop a modified C\\&W method as a strong baseline for the ordered Top-$k$ targeted attacks. In experiments, the proposed method is tested on ImageNet-1000 using two popular DNNs, ResNet-50 and DenseNet-121, obtaining consistently better results. We investigate the effectiveness of label semantic knowledge in designing the adversarial distribution for distilling the ordered Top-$k$ targeted attacks. \n\n\\section*{Acknowledgments} \nThis work is supported by ARO grant W911NF1810295 and ARO DURIP grant W911NF1810209, and NSF IIS 1822477. \n\n\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}